@inproceedings{iluz-etal-2024-applying,
title = "Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation",
author = "Iluz, Bar and
Elazar, Yanai and
Yehudai, Asaf and
Stanovsky, Gabriel",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.829/",
doi = "10.18653/v1/2024.emnlp-main.829",
pages = "14914--14921",
abstract = "Most works on gender bias focus on intrinsic bias {---} removing traces of information about a protected group from the model`s internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between words and sub-word tokens debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="iluz-etal-2024-applying">
<titleInfo>
<title>Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation</title>
</titleInfo>
<name type="personal">
<namePart type="given">Bar</namePart>
<namePart type="family">Iluz</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Yanai</namePart>
<namePart type="family">Elazar</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Asaf</namePart>
<namePart type="family">Yehudai</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Gabriel</namePart>
<namePart type="family">Stanovsky</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2024-11</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing</title>
</titleInfo>
<name type="personal">
<namePart type="given">Yaser</namePart>
<namePart type="family">Al-Onaizan</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Mohit</namePart>
<namePart type="family">Bansal</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Yun-Nung</namePart>
<namePart type="family">Chen</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Miami, Florida, USA</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
</relatedItem>
  <abstract>Most works on gender bias focus on intrinsic bias — removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between words and sub-word tokens debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing.</abstract>
<identifier type="citekey">iluz-etal-2024-applying</identifier>
<identifier type="doi">10.18653/v1/2024.emnlp-main.829</identifier>
<location>
    <url>https://aclanthology.org/2024.emnlp-main.829/</url>
</location>
<part>
<date>2024-11</date>
<extent unit="page">
<start>14914</start>
<end>14921</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation
%A Iluz, Bar
%A Elazar, Yanai
%A Yehudai, Asaf
%A Stanovsky, Gabriel
%Y Al-Onaizan, Yaser
%Y Bansal, Mohit
%Y Chen, Yun-Nung
%S Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
%D 2024
%8 November
%I Association for Computational Linguistics
%C Miami, Florida, USA
%F iluz-etal-2024-applying
%X Most works on gender bias focus on intrinsic bias — removing traces of information about a protected group from the model's internal representation. However, these works are often disconnected from the impact of such debiasing on downstream applications, which is the main motivation for debiasing in the first place. In this work, we systematically test how methods for intrinsic debiasing affect neural machine translation models, by measuring the extrinsic bias of such systems under different design choices. We highlight three challenges and mismatches between the debiasing techniques and their end-goal usage, including the choice of embeddings to debias, the mismatch between words and sub-word tokens debiasing, and the effect on different target languages. We find that these considerations have a significant impact on downstream performance and the success of debiasing.
%R 10.18653/v1/2024.emnlp-main.829
%U https://aclanthology.org/2024.emnlp-main.829/
%U https://doi.org/10.18653/v1/2024.emnlp-main.829
%P 14914-14921
Markdown (Informal)
[Applying Intrinsic Debiasing on Downstream Tasks: Challenges and Considerations for Machine Translation](https://aclanthology.org/2024.emnlp-main.829/) (Iluz et al., EMNLP 2024)