ACCEPTED PAPERS (WORKSHOPS)
Nielsen, A., & Woemmel, A. (2024). Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. In: 2nd ICML Workshop on Generative AI and Law. [Paper]
Abstract: Despite heightened awareness of fairness issues within the machine learning (ML) community, there remains a concerning silence regarding discrimination against a rapidly growing and historically vulnerable group: older adults. We present examples of age-based discrimination in generative AI and other pervasive ML applications, document the implicit and explicit marginalization of age as a protected category of interest in ML research, and identify some technical and legal factors that may contribute to the lack of discussion or action regarding this discrimination. Our aim is to deepen understanding of this frequently ignored yet pervasive form of discrimination and to urge ML researchers, legal scholars, and technology companies to proactively address and reduce it in the development, application, and governance of ML technologies. This call is particularly urgent in light of the expected widespread adoption of generative AI in many areas of public and private life.
WORKING PAPERS
Fragile AI Optimism [Working Paper] [AEA Registry] [Poster]
first author, with Hendrik Hüning and Lydia Mechtenberg
Winner of the 2024 Theodore Eisenberg Poster Prize at the Conference on Empirical Legal Studies (CELS), Atlanta.
Best Paper in the area "Technology, Privacy, and Information" at the American Law & Economics Association Conference (ALEA) 2023, Boston.
Abstract: We study how public attitudes toward AI form and shift using an online deliberation experiment with 2,358 UK citizens in the context of criminal justice. First, we replicate prior survey evidence suggesting public support for adopting AI as a decision-support tool, particularly when certain fairness features are met. We then show that this stated support is fragile: it declines significantly with group deliberation, as supporters are 2.6 times more likely than opponents to change their attitudes. Quantitative text analysis indicates that opponents contribute more arguments in group deliberation, both in terms of frequency and topic range, and that supporters are more responsive to counterarguments. These results suggest that stated support for AI reflects low attitude strength: it appears to be easily raised through informational framing but quickly reversed through deliberation. More broadly, they caution against inferring public legitimacy of increased AI deployment from stated support alone.
(Previously circulated as: "Public Attitudes Toward Algorithmic Risk Assessments in Courts: A Deliberation Experiment")
Algorithmic Fairness and Human Discrimination [Working Paper] [AEA Registry]
Abstract: Fairness constraints in algorithm design aim to reduce discrimination. Their impact, however, also depends on the adoption of the algorithm by human decision-makers, who typically retain full authority in high-stakes contexts. In a hiring experiment, I find suggestive evidence that protecting group membership in algorithmic predictions leads individuals to be more conservative in updating their beliefs about candidates based on these predictions. I then find a significant increase in discrimination in their hiring of candidates under this algorithm, driven by those who initially believe that group membership predicts performance. Finally, independent of the algorithm features, about 26% of participants make hiring decisions that cannot be explained by beliefs and are likely based on taste. These results suggest that algorithmic fairness features can paradoxically exacerbate human discrimination based on statistical beliefs by hindering adoption and, unsurprisingly, remain orthogonal to taste-based discrimination.
Social Disparities in Digital Skills: Evidence from Germany [OSF] (Draft available upon request)
Abstract: This paper documents gender and socioeconomic gaps in digital skills relevant to the labor market, using a representative German household sample. Men and individuals with higher education backgrounds demonstrate greater proficiency levels. Both groups also hold more optimistic beliefs about outperforming others, conditional on actual skills. These belief gaps are not driven by overconfidence, but by underconfidence among women and individuals with lower education backgrounds in the upper tail of the skill distribution. Early-life socioeconomic background is not significantly associated with adult digital skills or beliefs.
Digital Skills: Social Disparities and the Impact of Early Mentoring [CESifo Working Paper] [AEA Registry] [SOEP-IS Module]
with Fabian Kosse and Tim Leffler
Abstract: We investigate social disparities in digital skills, focusing on both actual proficiency levels and confidence in these skills. Drawing on a representative sample from Germany, we first demonstrate that both dimensions strongly predict labor market success. We then use this sample to identify gender and socioeconomic disparities in levels and confidence. Finally, using a long-run RCT panel framework with young adults, we confirm these disparities and provide causal evidence on the effects of enhancing the social environment in childhood. Assigning elementary school-aged children to a mentoring program persistently reduces socioeconomic gaps in confidence related to digital skills, but it does not affect the level of digital skills.
WORK IN PROGRESS
Age Discrimination in Machine Learning: Causes and Consequences
with Aileen Nielsen
Abstract: This paper is part of a broader research agenda addressing the systemic neglect of older people, a rapidly growing and vulnerable group, in technology development, governance, and regulation. Our research here focuses on Large Language Models (LLMs), such as ChatGPT, which are becoming increasingly influential in our society, shaping both everyday information access and consequential decision-making. Although these models are designed with safeguards against discriminatory outputs, we find that they exhibit 'selective fairness': they provide stronger protection against discrimination based on sensitive attributes such as race and gender, but weaker protection against other attributes such as age. We explore the legal and technical reasons for these differences. We then conduct a behavioral study to examine how age discrimination in LLMs may shape social norms and perceptions of ageism, potentially reinforcing this entrenched form of discrimination in our society.
PHD THESIS
On Human Factors in Machine Fairness [PhD Thesis]
written at the Department of Economics, University of Hamburg