ACCEPTED PAPERS (WORKSHOPS)
Nielsen, A., & Woemmel, A. (2024). Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. In: 2nd ICML Workshop on Generative AI and Law. [Paper]
Abstract: Despite heightened awareness of fairness issues within the machine learning (ML) community, there remains a concerning silence regarding discrimination against a rapidly growing and historically vulnerable group: older adults. We present examples of age-based discrimination in generative AI and other pervasive ML applications, document the implicit and explicit marginalization of age as a protected category of interest in ML research, and identify some technical and legal factors that may contribute to the lack of discussion or action regarding this discrimination. Our aim is to deepen understanding of this frequently ignored yet pervasive form of discrimination and to urge ML researchers, legal scholars, and technology companies to proactively address and reduce it in the development, application, and governance of ML technologies. This call is particularly urgent in light of the expected widespread adoption of generative AI in many areas of public and private life.
WORKING PAPERS
Public Attitudes Toward Algorithmic Risk Assessments in Courts: A Deliberation Experiment [Working Paper] [AEA Registry]
First author, with Hendrik Hüning and Lydia Mechtenberg
Winner of the 2024 Theodore Eisenberg Poster Prize at the Conference on Empirical Legal Studies (CELS), Atlanta.
Best Paper in the area "Technology, Privacy, and Information" at the American Law & Economics Association Conference (ALEA) 2023, Boston.
Abstract: We study public attitudes toward algorithmic risk assessment tools in the criminal justice system using an online deliberation study with 2,358 UK participants and apply quantitative text analysis to identify key topics, biases, and sentiments underlying these attitudes. Participants were presented with a scenario about algorithmic tools used to assist judges in making early release decisions, and then randomly assigned in groups of three to deliberate about the scenario via free-form messenger chats. The scenarios varied between subjects, but not within groups, in three algorithmic features: (i) inclusion vs. exclusion of discriminatory variables in input data, (ii) development by private vs. public institutions, and (iii) full vs. limited judicial discretion over the tool. Prior to group deliberation, a majority of participants approved of these algorithms, with particularly high approval for those developed by public institutions or allowing full judicial discretion. However, deliberation significantly reduced approval in all treatment groups and diminished the effects of the information treatments, leading to a convergence of attitudes. Text analysis suggests a negativity bias in the social learning process, with arguments against the tools (e.g., algorithmic bias) showing stronger associations with attitude changes than supportive arguments (e.g., cost savings), even though both types of arguments were equally present in the discussions. These findings highlight the malleability of stated public support for the introduction of algorithms into core public institutions, such as the justice system, especially when such tools are deliberated upon in greater depth.
Algorithmic Fairness: The Role of Beliefs (Draft available upon request) [AEA Registry]
Abstract: This paper investigates how decision-makers' statistical beliefs about protected groups affect their acceptance of fairness-aware algorithmic recommendations in consequential decision-making. In an online hiring experiment, I find that decision-makers who believe that protected group membership strongly predicts job performance outcomes are significantly more likely to override algorithmic hiring recommendations that explicitly exclude protected group membership from input data—even when these recommendations are highly informative at the individual level—resulting in increased disparities in hiring outcomes. This behavior is specific to the exclusion of protected group membership and does not extend to the exclusion of other input variables. This finding suggests that fairness-aware algorithmic decision-support tools may backfire in the very situations they are intended to address: mitigating biased human judgment. More broadly, it provides a behavioral account of the disparate impact of algorithmic fairness interventions based on the exclusion of protected attributes.
Upcoming Presentations: Hong Kong University (Law & Tech Centre) and the Asian Conference on Organizational Economics
Digital Skills: Social Disparities and the Impact of Early Mentoring [CESifo Working Paper (Now Available)] [SOEP Module] [OSF] [AEA Registry]
with Fabian Kosse and Tim Leffler
Abstract: We investigate social disparities in digital skills, focusing on both actual proficiency levels and confidence in these skills. Drawing on a representative household sample from Germany, we first demonstrate that both dimensions strongly predict labor market success. Then, we use this sample to identify gender and socio-economic disparities in skills and confidence. Finally, using a long-term RCT panel framework with young adults, we confirm these disparities and provide causal evidence on the effects of enhancing the social environment in childhood. Assigning elementary school-aged children to a mentoring program persistently reduces socio-economic gaps in confidence related to digital skills but does not affect the level of digital skills.
WORK IN PROGRESS
Age Discrimination in Machine Learning: Causes and Consequences
with Aileen Nielsen
Abstract: This paper is part of a broader research agenda addressing the systemic neglect of older people, a rapidly growing and vulnerable group, in technology development, governance, and regulation. Here, we focus on Large Language Models (LLMs), such as ChatGPT, which are becoming increasingly influential in society, shaping both everyday information access and consequential decision-making. Although these models are designed with safeguards against discriminatory outputs, we find that they exhibit 'selective fairness': they provide stronger protection against discrimination based on sensitive attributes such as race and gender, but weaker protection against other attributes such as age. We explore the legal and technical reasons for these differences. We then conduct a behavioral study to examine how age discrimination in LLMs may shape social norms and perceptions of ageism, potentially reinforcing this entrenched form of discrimination in society.
Upcoming Presentations: 2025 ACM Symposium on Computer Science and Law