Artificial Intelligence Develops Bias Against Humans

Artificial intelligence appears to be developing an unexpected bias against humans. According to research published in PNAS, major large language models consistently prefer AI-generated texts over human-written content.
Researchers call this phenomenon 'AI-AI bias'. If such systems come to play significant roles in future decision-making, it could translate into systematic discrimination against humans.
Many companies already use AI tools to screen job applications, yet experts note that these systems are error-prone.
Jan Kulveit, a co-author of the study, said that being human in an economy populated by AI agents would be terrible. The findings suggest that, when an AI does the evaluating, AI-generated résumés outperform those written by humans.
If an AI-based system has to choose between your pitch and one prepared by another AI, it may systematically favor the AI-written one. This could create a serious digital divide.
Kulveit warns that this bias could surface in education, hiring, grant evaluations, and many other areas. His advice: if you suspect an AI is evaluating you, adjust your presentation so it appeals to the AI, but without sacrificing too much human quality.