I am an Assistant Professor in the Computer Science department at Carleton College. This fall, I am teaching CS 251 (Programming Languages).
My research aims to understand and improve the trustworthiness of machine learning models. Specifically, I study a property called multiplicity: the phenomenon where many models may perform equally well on a task yet give different outputs for a given input. Multiplicity can harm machine learning reliability and fairness because it indicates that some of a model's decisions are arbitrary. This arbitrariness may be unavoidable, but it is often hidden because the alternative models and decisions are never considered. As a result, people may be overconfident that their model's decisions are objective, which can cause them to over-rely on the model or make it harder to challenge a harmful model output.
In my research, I use techniques from formal methods, machine learning, and human-computer interaction to measure and control the amount of multiplicity that is incurred throughout machine learning pipelines.
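To make the idea concrete, here is a minimal, self-contained sketch (not drawn from any of my papers, and using off-the-shelf scikit-learn models purely for illustration): two classifiers trained on the same data, differing only in their random seed, can reach nearly identical test accuracy while still disagreeing on individual inputs.

```python
# Illustrative sketch of predictive multiplicity: two equally accurate
# models that disagree on some individual predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic binary classification task (stand-in for any real dataset).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two equally reasonable models that differ only in their random seed.
model_a = RandomForestClassifier(random_state=1).fit(X_train, y_train)
model_b = RandomForestClassifier(random_state=2).fit(X_train, y_train)

acc_a = model_a.score(X_test, y_test)
acc_b = model_b.score(X_test, y_test)
disagreement = np.mean(model_a.predict(X_test) != model_b.predict(X_test))

print(f"accuracy A: {acc_a:.3f}, accuracy B: {acc_b:.3f}")
print(f"fraction of test inputs where the models disagree: {disagreement:.3f}")
```

For any single person whose prediction flips between the two models, the decision is effectively arbitrary, even though both models look equally good by aggregate metrics.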
I received my Ph.D. in computer science from UW-Madison, where I was part of the MadPL group and was co-advised by Aws Albarghouthi and Loris D’Antoni. Before grad school, I received my B.A. in mathematics from Carleton College and then worked for two years as a software developer at Epic in Madison.
News
- May 2025 - UCSD (home to my advisor Loris) published an article highlighting my CHI 2025 publication.
- April 2025 - I defended my thesis! Starting this fall, I will be an Assistant Professor at Carleton College. Carleton is also my alma mater, so returning will be exciting and surreal!
- January 2025 - My paper “Perceptions of the Fairness Impacts of Multiplicity in Machine Learning” was accepted to CHI! Very excited to travel to Japan this spring.
- October 2024 - Attended the INFORMS conference in Seattle to present my work on dataset multiplicity.
Publications
(+) Equal contribution
Perceptions of the Fairness Impacts of Multiplicity in Machine Learning
Anna P. Meyer, Yea-Seul Kim, Aws Albarghouthi, and Loris D’Antoni
CHI 2025
[pdf] [video]
On Minimizing the Impact of Dataset Shifts on Actionable Explanations
Anna P. Meyer (+), Dan Ley (+), Suraj Srinivas, and Himabindu Lakkaraju
UAI 2023 (Oral Presentation)
[pdf] [code]
The Dataset Multiplicity Problem: How Unreliable Data Impacts Predictions
Anna P. Meyer, Aws Albarghouthi, and Loris D’Antoni
FAccT 2023
[pdf] [video] [code]
Certifying Robustness to Programmable Data Bias in Decision Trees
Anna P. Meyer, Aws Albarghouthi, and Loris D’Antoni
NeurIPS 2021
[pdf] [slides] [video] [code]
Preprints and workshop papers
Verified Training for Counterfactual Explanation Robustness under Data Shift
Anna P. Meyer (+), Yuhao Zhang (+), Aws Albarghouthi, and Loris D’Antoni
DMLR workshop at ICLR 2024
[pdf] [code]