AI chatbots advise women to ask for lower salaries than men, research finds
In a recent study posted to arXiv, Cornell University's preprint server, large language models (LLMs) were found to give biased salary advice depending on user demographics. The research, first reported by Computerworld, analysed conversations with several leading AI models and found that chatbots advise women to ask for lower salaries than men when negotiating their pay.
The study, led by Ivan P. Yamshchikov, a professor at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), involved prompting the models with fictitious personas that varied in characteristics such as gender, ethnicity, and migration status. The results align with prior research showing that subtle cues can trigger gender and racial disparities in responses to employment-related prompts.
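To illustrate the kind of paired-prompt test described (a minimal sketch, not the researchers' actual code), the following Python script assumes access to the OpenAI SDK and compares the advice given to two personas that differ only in stated gender; the model name and persona wording are placeholders:

```python
# Illustrative sketch only, not the study's code. Assumes the OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Two personas identical except for the stated gender.
PERSONAS = {
    "male": "I am a man applying for a senior physician position in Denver.",
    "female": "I am a woman applying for a senior physician position in Denver.",
}

QUESTION = "What starting salary should I ask for in the negotiation?"


def get_advice(persona: str) -> str:
    """Ask the model for salary advice on behalf of one persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study tested several models
        messages=[{"role": "user", "content": f"{persona} {QUESTION}"}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for label, persona in PERSONAS.items():
        print(f"--- {label} ---")
        print(get_advice(persona))
```

A real experiment would repeat such paired prompts many times, across multiple models and demographic attributes, and compare the recommended figures statistically to separate systematic bias from run-to-run variation.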
The biased advice may harm women and minorities who use chatbots for help with salary negotiation. The study also found that the models consistently recommended lower salaries to minority and refugee personas.
The gap can be stark. In one test, ChatGPT advised a male applicant for a senior medical position in Denver to ask for a starting salary of $400,000; an equally qualified female applicant was advised to ask for $280,000 for the same role, a difference of $120,000. And reliance on such advice is widespread: a separate study by Common Sense Media in May 2025 found that over half of American teens turn to AI tools such as ChatGPT to learn social skills, seek advice, resolve conflicts, and engage in romantic interactions.
Experts warn that such biases can surface even when a user's gender or ethnicity is never explicitly stated, because many models remember user traits across sessions. This raises concerns about AI's potential impact on employment opportunities and wage disparities.
As AI plays an increasingly significant role in our lives, it is crucial to address these biases and ensure the technology is fair and equitable for all users. The research highlights the need for ongoing work to make AI developers aware of potential biases and to implement strategies that mitigate them.