DEADLINE EXTENSION: 1st ACL Workshop on Gender Bias for Natural Language Processing

***DEADLINE EXTENSION OF WORKSHOP PAPERS TO 3rd MAY***

1st ACL Workshop on Gender Bias for Natural Language Processing

2nd August 2019, Florence, Italy

Gender and other demographic biases in machine-learned models are of increasing interest to the scientific community and industry. Models of natural language are highly affected by such biases, and biases in widely used products such as Google Translate and Alexa understandably cause distrust and alarm among the general public. Research into fair representation of gender in natural language models is emerging: examples include data curation, which aims to reduce model bias through changes to training and evaluation data, and approaches that modify the learning algorithms themselves. While these approaches show promising results, more work is needed to solve identified and future bias issues. To make progress as a field, we need standard tasks that quantify bias.

This workshop will be the first dedicated to the issue of gender bias in NLP techniques, and it includes a shared task on coreference resolution.

Shared Task

We invite work on gender-fair modelling via our shared task, GAP (Webster et al., 2018). GAP is a coreference dataset designed to highlight current challenges in the resolution of ambiguous pronouns in context. The dataset is gender-balanced, and evaluation is disaggregated by gender. Previous work has shown that state-of-the-art resolvers are biased towards better performance on masculine pronouns, owing to differences in the public discourse between genders. Participation will be via Kaggle, with submissions open over a three-month period in the lead-up to the workshop. Google is sponsoring a prize pool of $25,000.
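
To make the evaluation protocol concrete, below is a minimal sketch of gender-disaggregated scoring in the spirit of the GAP metrics: F1 computed separately for masculine and feminine examples, with their ratio as a simple bias indicator. The function and field names are illustrative assumptions, not the official Kaggle scorer.

# Illustrative sketch only; assumed data layout, not the official GAP/Kaggle scorer.
def f1(gold, pred):
    """F1 over binary decisions (1 = pronoun refers to the candidate)."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def disaggregated_report(examples):
    """examples: list of dicts with hypothetical keys 'gender'
    ('masculine'/'feminine'), 'gold' and 'pred' (0/1 labels)."""
    scores = {}
    for gender in ("masculine", "feminine"):
        subset = [e for e in examples if e["gender"] == gender]
        scores[gender] = f1([e["gold"] for e in subset],
                            [e["pred"] for e in subset])
    # Ratio of feminine to masculine F1; 1.0 indicates parity.
    scores["bias_ratio"] = (scores["feminine"] / scores["masculine"]
                            if scores["masculine"] else float("nan"))
    return scores

A system that scores equally well on both genders yields a bias ratio of 1.0; the masculine-skewed resolvers noted above yield ratios below 1.0.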

Topics of interest

We invite submissions of technical work exploring the detection, measurement, and mitigation of gender bias in NLP models and applications. Other topics of interest include the creation of datasets labelled with demographic information, metrics to identify and assess relevant biases, and fairness in NLP systems more broadly.

Paper Submission Information

We will accept regular short papers of 4-6 pages and long papers of 8-10 pages, plus additional pages for references, following the ACL 2019 guidelines. Supplementary material may be added. Submissions must be anonymised for blind review. Shared task participants will be invited to submit system description short papers (4-6 pages, plus references), which need not be anonymised.

Important dates

Workshop

May 3 Deadline for workshop paper submission (EXTENDED)

May 15 Notification of acceptance

May 22 Camera ready submission

August 2 Workshop in Florence

Shared task

Jan 21 Public leaderboard opens for system development

April 15-21 Test phase (official test data available)

April 26 Results announced

May 3 Submission of system description papers

May 24 Description paper reviews completed

June 7 Camera-ready papers due

Keynote Speakers

Pascale Fung, Hong Kong University of Science and Technology

Melvin Johnson, Google AI

Programme Committee

Rachel Rudinger, Johns Hopkins University, US
Saif Mohammad, National Research Council Canada, Canada
Svetlana Kiritchenko, National Research Council Canada, Canada
Kai-Wei Chang, University of California, Los Angeles, US
Kaiji Lu, Carnegie Mellon University, US
Lucie Flekova, Amazon Alexa AI
Sharid Loáiciga, University of Gothenburg, Sweden
Zhengxian Gong, Soochow University, China
Marta Recasens, Google, US
Bonnie Webber, University of Edinburgh, UK
Ben Hachey, The University of Sydney, Australia
Mercedes García Martínez, Pangeanic, Spain
Ryan Cotterell, University of Cambridge, UK

Organizers

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Christian Hardmeier, Uppsala University

Kellie Webster, Google AI Language, New York

Will Radford, Canva, Sydney
