Data Science Ph.D. Qualifier | Oluseun Olulana | Ignore or Infer: Lessons from Fair Learning-To-Rank With Unknown Demographics


DATA SCIENCE 

Ph.D. Qualifier Presentation

Oluseun Olulana

Tuesday, October 10, 2023 | 1:00 to 2:00 p.m.

Location: Hagglund 301 Conference Room, Campus Center

Committee: 

Prof. Elke Rundensteiner, PhD Advisor

Prof. Fabricio Murai, Co-Advisor

Prof. Nima Kordzadeh, Co-Advisor


Title:  

Ignore or Infer: Lessons from Fair Learning-To-Rank With Unknown Demographics


Abstract: 

As ranking models increasingly shape our daily lives, the FairML community has worked to develop so-called fair learning-to-rank (LTR) models. Fair LTR models rely on the availability of sensitive demographic features, such as race or sex, to mitigate unfairness. Unfortunately, in practice, regulatory obstacles and privacy concerns prevent the collection and use of demographic data. As a result, practitioners may choose to ignore the absence of these attributes or, alternatively, turn to demographic inference tools to infer them. We therefore ask a question of direct significance to practitioners: how do errors in demographic inference affect the fairness performance of popular fair LTR strategies? We examine fair LTR strategies, in particular in-processing and post-processing fairness models such as DELTR and DetConstSort, to evaluate their effectiveness when used with inferred demographic attributes. We conduct an empirical investigation in which we perturb the inferred attribute to model different levels of inference error and then assess the impact of these controlled error levels on the fairness strategies. We perform three case studies with real-world data sets and popular inference methods. Additionally, we analyze scenarios in which the demographic attributes are ignored. Our findings reveal that fairness performance degrades as inference noise grows; however, post-processing fairness strategies are more robust to inference errors than in-processing ones. Moreover, opting to ignore missing protected attributes leads to improved outcomes under certain stringent circumstances.
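To make the perturbation protocol concrete, below is a minimal Python sketch, not the authors' code, of how one might inject controlled inference error into a binary protected attribute and observe its effect on a simple top-k representation measure. The helper names (perturb_attribute, protected_share_top_k) and the toy data are hypothetical; a full experiment would instead pass the noisy labels to a fair LTR method such as DELTR or DetConstSort and evaluate with proper fairness-of-exposure metrics.

import numpy as np

rng = np.random.default_rng(0)

def perturb_attribute(groups, error_rate, rng):
    # Flip each inferred binary group label with probability `error_rate`,
    # simulating demographic-inference mistakes. (Hypothetical helper.)
    flips = rng.random(len(groups)) < error_rate
    return np.where(flips, 1 - groups, groups)

def protected_share_top_k(scores, groups, k):
    # Share of protected items (group == 1) among the top-k ranked items;
    # a crude stand-in for a fairness-of-exposure metric.
    top_k = np.argsort(-scores)[:k]
    return groups[top_k].mean()

# Toy data: 1,000 items, ~30% protected, scores slightly biased against them.
n = 1000
groups = (rng.random(n) < 0.3).astype(int)
scores = rng.normal(size=n) - 0.3 * groups

for error_rate in (0.0, 0.1, 0.3):
    noisy = perturb_attribute(groups, error_rate, rng)
    # A real pipeline would hand `noisy` to DELTR (in-processing) or
    # DetConstSort (post-processing) before measuring fairness.
    share = protected_share_top_k(scores, noisy, k=100)
    print(f"error_rate={error_rate:.1f} -> protected share in top-100: {share:.3f}")

As the error rate grows, the measured protected share drifts away from the true value, which is exactly the distortion the talk examines when fairness interventions consume inferred rather than true demographics.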

Department(s): Data Science

Contact Person: Kelsey Briggs
