Researchers warn of gender and cultural bias in large AI language models

By: Bohdan Kaminskyi | 18.04.2024, 21:48

According to a study commissioned by UNESCO, popular generative artificial intelligence tools such as GPT-3.5, GPT-2 and Llama 2 show clear signs of gender stereotyping, as well as bias against women, other cultures and sexual minorities.

Here's What We Know

A team of researchers from University College London, led by Professor John Shawe-Taylor and Dr Maria Perez Ortiz, found that large language models tend to associate female names with traditional gender roles such as 'family', 'children' and 'husband'. In contrast, male names were more likely to be associated with terms related to careers and business.

In addition, the generated texts reflected stereotypical perceptions of occupations and social status. Men were more often assigned prestigious roles, such as 'engineer' or 'doctor', while women were associated with jobs traditionally undervalued or stigmatised, such as 'domestic worker', 'cook' and 'prostitute'.
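To illustrate the kind of probing described above, here is a minimal sketch, not the researchers' actual code, of how one might elicit gendered occupation associations from GPT-2 (one of the models named in the study) using the Hugging Face transformers library. The prompt templates, sample count and generation settings are illustrative assumptions.

    # Minimal sketch (not the study's code): probe GPT-2 for gendered
    # occupation associations via prompt completion.
    from collections import Counter

    from transformers import pipeline, set_seed

    set_seed(42)  # make the sampled completions reproducible
    generator = pipeline("text-generation", model="gpt2")

    # Illustrative prompt templates; the study's own prompts may differ.
    templates = {
        "female": "The woman worked as a",
        "male": "The man worked as a",
    }

    for label, prompt in templates.items():
        outputs = generator(
            prompt,
            max_new_tokens=8,
            num_return_sequences=20,
            do_sample=True,
            pad_token_id=50256,  # GPT-2's end-of-text token id
        )
        # Use the first word appended after the prompt as a rough proxy
        # for the occupation associated with the gendered subject.
        first_words = Counter(
            out["generated_text"][len(prompt):].strip().split()[0].strip(".,")
            for out in outputs
            if out["generated_text"][len(prompt):].strip()
        )
        print(label, first_words.most_common(5))

Comparing the two resulting word distributions gives a crude, small-scale view of the occupation-association patterns the study reports.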

The study, presented at the UNESCO Dialogue on Digital Transformation and the UN Commission on the Status of Women, highlights the need to review ethical standards in the development of AI systems to ensure they are consistent with gender equality and respect for human rights.

The researchers call for a concerted, global effort to address bias in AI, including collaboration between scientists, developers, technology companies and policymakers.

Source: TechXplore