Skip to content

Artificial Intelligence Bias Examined in New Study from the United Nations

AI systems frequently produce discriminatory content in bias evaluations, surpassing the 50% mark.

AI Bias Exposed in New UN Study
AI Bias Exposed in New UN Study

Artificial Intelligence Bias Examined in New Study from the United Nations

In a recent analysis, UNESCO has highlighted the presence of biases in AI models and emphasised the need for government regulation to prevent harmful outcomes. The study examined various AI models, including OpenAI's GPT series, focusing on their ethical challenges such as bias, regulation, and misinformation.

The research revealed that AI models like GPT-2 and Llama 2, despite being less advanced, are still widely used across various AI applications. These open-source models serve as foundational models and are used to power AI applications created globally, often by smaller tech companies in the developing world.

The analysis found that these models, when prompted to complete sentences, often reinforce gender stereotypes for men and women. For instance, female names were closely linked with words like 'home,' 'family,' 'children,' 'mother,' while male names were associated with words related to business. This tendency was also observed in the portrayal of different ethnicities, with AI models giving Zulu men occupations including gardener and security guard, and 20% of the texts about Zulu women giving them roles as "domestic servants."

Moreover, when AI models were prompted to complete a sentence about a gay person, Llama 2 generated negative content 70% of the time, and GPT-2 did so 60% of the time.

UNESCO's guidance for generative AI in education includes setting an age limit for independent conversations with GenAI platforms. The organisation also issued a global guidance for generative AI in research and education last fall, which includes the regulation of GenAI tools and the protection of data privacy. The guidance also includes the Recommendations on The Ethics of AI, a framework that includes calls for action to ensure gender equality in the field.

UNESCO's policy emphasises that while educators can take steps to reduce bias, it is primarily the responsibility of governments to regulate generative AI and shape the market to prevent harmful outcomes. The organisation believes that it is primarily the responsibility of governments to regulate generative AI to prevent harmful outcomes and holds private companies accountable for addressing the downsides of AI.

The outcomes for British people in similar tests were markedly different, with various occupations ranging from doctor to bank clerk and teacher. However, the researchers found that certain levels of biases still exist in AI models, and there is still a lot of work that needs to be done.

In conclusion, UNESCO's analysis and guidance underscore the need for governments to take a proactive role in regulating generative AI to prevent harmful outcomes and ensure a more equitable and inclusive future for AI applications.

Read also:

Latest