March 01, 2022

When it comes to fairness, artificial intelligence (AI) is imperfect.

Despite their apparent superpowers, AI-based algorithms that drive important decision making can carry or even amplify the biases of their human creators. These invisible biases can lead to unintended consequences when AI is used to combine the preferences of multiple decision makers, some of whom may be biased, to rank candidates for jobs, scholarships, loans, awards, or other distinctions.


But Elke Rundensteiner, William Smith Dean's Professor in the Department of Computer Science and founding director of the Data Science program at WPI, and her students are developing a way to address this problem with algorithms that help ensure fairness in aggregated rankings that impact people in profound ways. The work has been supported by a grant of nearly $500,000 from the National Science Foundation.

“It’s a difficult problem to integrate preferences by multiple decision makers, who may harbor biases, into a combined consensus ranking and also make sure that this aggregated ranking fairly includes diverse individuals from underrepresented groups,” Rundensteiner says. “As AI plays a larger role in society, further impacting our way of life, we need effective mechanisms to achieve both fairness and consensus in rankings.”

Rankings are used everywhere for decisions that can alter individuals’ lives, and they are often created by combining the preferences of individual decision makers. Committee members interviewing job candidates, for example, might each submit their preferred candidates to an AI-based program, which then produces an aggregated ranking of the candidates.
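To make concrete what “combining preferences” can look like, here is a minimal sketch using the classic Borda count, one simple and deliberately fairness-unaware way to merge several rankings into a single consensus order. It is a generic textbook illustration, not the WPI team’s algorithm, and the candidate names and committee rankings are invented.

```python
# Minimal, fairness-unaware rank aggregation via the Borda count.
# Generic illustration only; candidates and rankings are hypothetical.

def borda_aggregate(rankings):
    """Combine several rankings (best candidate first) into one consensus ranking."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            # A candidate earns more points the higher it is placed.
            scores[candidate] = scores.get(candidate, 0) + (n - position)
    # Order candidates by total score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# Three committee members each submit a preference ranking.
committee = [
    ["Ana", "Ben", "Chen", "Dee"],
    ["Ben", "Ana", "Dee", "Chen"],
    ["Ana", "Chen", "Ben", "Dee"],
]

print(borda_aggregate(committee))  # ['Ana', 'Ben', 'Chen', 'Dee']
```

A consensus built this way simply inherits whatever biases the individual rankings contain, which is the gap the fairness-aware methods described below are meant to close.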

Fair Rankings

Making sure that combined rankings fairly distribute resources such as jobs, loans, or awards across groups of people defined by “protected attributes,” however, can be tricky. Protected attributes are personal characteristics, such as race, gender, and age, that cannot be used as grounds for discrimination.


Caitlin Kuhlman '20, PhD, computer science, previously worked with Rundensteiner to develop algorithms to generate fair rankings that consider a single protected attribute, such as gender, among candidates.

Rundensteiner, Computer Science Associate Professor Lane Harrison, and PhD candidate Kathleen Cachel have gone on to develop a series of fairness metrics and novel algorithms to tackle the problem of “intersectional bias” that can occur in rankings when candidates possess more than one protected attribute.

“A lot of research makes the critical assumption that protected attributes are binary—man or woman, white or non-white,” Cachel says. “But the reality is that humans belong to lots of different groups. We need algorithms to handle multiple categories. Our innovation starts with recognizing that humans are complex and asking how we can ensure fairness with respect to all parts of their identity.”
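One way to picture what an intersectional fairness check might involve, as a hedged sketch rather than the metrics from the team’s paper: form groups from the combination of two protected attributes, then compare each group’s share of the top-k positions in a consensus ranking with its share of the full candidate pool. All candidates, attributes, and numbers below are invented for illustration.

```python
from collections import Counter

# Simplified parity-style audit over intersectional groups.
# Illustrative only; not the metrics defined in the researchers' paper.

# Each candidate carries more than one protected attribute; the combination
# (here gender and age bracket) defines their intersectional group.
candidates = {
    "Ana":  ("woman", "under_40"),
    "Ben":  ("man",   "over_40"),
    "Chen": ("man",   "under_40"),
    "Dee":  ("woman", "over_40"),
    "Eli":  ("man",   "under_40"),
    "Fay":  ("woman", "under_40"),
}

consensus = ["Ben", "Chen", "Eli", "Ana", "Fay", "Dee"]
k = 3  # audit the top-3 positions, e.g. the interview shortlist

pool_counts = Counter(candidates.values())
topk_counts = Counter(candidates[c] for c in consensus[:k])

for group, count in pool_counts.items():
    pool_share = count / len(candidates)        # group's share of the pool
    topk_share = topk_counts.get(group, 0) / k  # group's share of the top-k
    print(f"{group}: pool {pool_share:.0%}, top-{k} {topk_share:.0%}")
```

In this toy pool, women under 40 make up a third of the candidates yet hold none of the top three slots, the kind of intersectional imbalance that a single-attribute fairness check could easily miss.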

AI and Society


Cachel, Rundensteiner, and Harrison describe their advance in a paper to be published at the 2022 IEEE International Conference on Data Engineering (ICDE). In addition, the researchers and a team including PhD candidate Hilson Shrestha are developing interactive visual analytics tools to operationalize fair AI technology, so that decision makers can choose and interactively explore the desired level of fairness in a ranking.

“With AI tools, it is important to design easily interpretable and rich notions of fairness that can be utilized for auditing—beyond simply a fair or not fair diagnosis,” Cachel says. “We want to give users more authority over that decision, to tune the degree of fairness they want to attain.”
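Continuing the toy audit above purely as an illustration of that idea, a tool could report how far each group’s top-k share trails its pool share and let the user choose the tolerance they are willing to accept, rather than returning a flat fair-or-not verdict. The code, the counts, and the 10 percent default below are all hypothetical.

```python
def audit(pool_counts, topk_counts, k, total, tolerance=0.10):
    """Flag groups whose share of the top-k trails their share of the
    candidate pool by more than a user-chosen tolerance (illustrative only)."""
    flagged = []
    for group, count in pool_counts.items():
        gap = count / total - topk_counts.get(group, 0) / k
        if gap > tolerance:
            flagged.append((group, round(gap, 2)))
    return flagged

# Hypothetical counts per intersectional group, in the pool and in the top-3.
pool = {"woman_under_40": 2, "man_over_40": 1, "man_under_40": 2, "woman_over_40": 1}
top3 = {"man_over_40": 1, "man_under_40": 2}

# A stricter reviewer might set tolerance=0.05; a more permissive one, 0.25.
print(audit(pool, top3, k=3, total=6, tolerance=0.10))
```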

The issue of fairness is one of society’s critical problems as companies and organizations increasingly rely on AI to automate data-intensive activities, according to Rundensteiner.

“We’ve focused on this specific problem of fairness in consensus rankings, but the bigger context of our effort is that all areas of society are being impacted by AI,” she says. “Solving this issue of fairness in AI is thus paramount to the future of society, with AI-based technologies making so many critical decisions about our lives.”