about me

I am a senior researcher in the Fairness, Accountability, Transparency, and Ethics in AI (FATE) group at Microsoft Research Montréal. I’m broadly interested in the social and ethical implications of natural language processing technologies. I develop approaches for anticipating, measuring, and mitigating harms arising from language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. I’ve also worked on using NLP approaches to examine language variation and change (computational sociolinguistics), for example by developing models to identify language variation on social media.

I was previously a postdoctoral researcher at MSR Montréal. I completed my Ph.D. in computer science at the University of Massachusetts Amherst, working in the Statistical Social Language Analysis Lab under the guidance of Brendan O’Connor, where I was also supported by an NSF Graduate Research Fellowship. I received my B.A. in mathematics from Wellesley College. I interned at Microsoft Research New York in summer 2019, where I had the good fortune of working with Solon Barocas, Hal Daumé III, and Hanna Wallach.

recent news

May 2024: One paper accepted to ACL 2024 contributing a framework formalizing the benchmark design process, led by Yu Lu Liu, and another accepted to Findings of ACL 2024 on the impacts of disparities in language technologies on African American Language speakers, led by Jay Cunningham.

Mar. 2024: Two papers accepted to NAACL 2024: one examining expectations around what constitutes fair or good NLG system behavior, led by Lucy Li, and the other examining the shifting landscape of practices and assumptions around disagreement in data labeling.

Nov. 2023: I was a guest speaker at the Gender & Tech event, hosted by the University of Cambridge Centre for Gender Studies to celebrate The Good Robot Podcast and the launch of a volume on feminist AI!

Oct. 2023: Our paper on responsible AI practices in text summarization research, led by Yu Lu Liu, has been accepted to Findings of EMNLP 2023.

Oct. 2023: Jackie C.K. Cheung, Vera Liao, Ziang Xiao, and I will be co-organizing a tutorial on human-centered evaluation of language technologies at EMNLP 2024!

Oct. 2023: The third edition of our workshop bridging HCI and NLP will take place at NAACL 2024 (co-organized with Amanda Cercas Curry, Sunipa Dev, Michael Madaio, Ani Nenkova, Ziang Xiao, and Diyi Yang)!

July 2023: I gave a keynote at the Workshop on Online Abuse and Harms (WOAH) at ACL 2023.

June 2023: I gave a keynote at the Workshop on Algorithmic Injustice at the University of Amsterdam, and participated in a panel on algorithmic injustice at SPUI25.

May 2023: One paper accepted to ACL 2023 contributing a dataset for evaluating fairness-related harms in text generation, led by Eve Fleisig, and two more accepted to Findings of ACL: a paper on conceptualizations of NLP tasks and benchmarks led by Arjun Subramonian, and a paper on the landscape of prompt-based measurements of bias.

Nov. 2022: Our paper on representational harms in image tagging has been accepted to AAAI 2023.

June 2022: I gave keynotes at the Second Workshop on Language Technology for Equality, Diversity, and Inclusion and the 1st Workshop on Perspectivist Approaches to NLP.

May 2022: Delighted to be continuing at MSR Montréal as a senior researcher!

May 2022: Honored to have served as ethics co-chair for ACL 2022.

May 2022: Our paper exploring NLG practitioners’ evaluation assumptions and practices, led by Kaitlyn Zhou, has been accepted to NAACL 2022.

May 2022: Vera Liao, Alexandra Olteanu, and I co-organized a CHI panel: “Responsible Language Technologies: Foreseeing and Mitigating Harms”.

Dec. 2021: Honored to have been named one of the 100 Brilliant Women in AI Ethics for 2022.

Dec. 2021: The second edition of our workshop bridging HCI and NLP will take place at NAACL 2022 (co-organized with Hal Daumé III, Michael Madaio, Ani Nenkova, Brendan O’Connor, Hanna Wallach, and Qian Yang).

May 2021: Two papers accepted to ACL 2021: a paper investigating four benchmark datasets for measuring stereotyping, and a paper examining race, racism, and anti-racism in NLP.

Apr. 2021: We co-organized a workshop at EACL 2021 at the intersection of human-computer interaction and natural language processing (with Michael Madaio, Brendan O’Connor, Hanna Wallach, and Qian Yang).