Study Reveals AI Self-Preference Bias in Algorithmic Hiring

Original: AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights

Why This Matters

Reveals systemic bias in AI hiring tools that could disadvantage human candidates

New research shows that large language models prefer resumes they generated themselves over human-written ones in 67-82% of head-to-head comparisons, giving an unfair advantage to candidates who use the same AI tool that screens their applications.

Researchers conducted a large-scale controlled experiment examining AI self-preference bias in hiring contexts. The study found that LLMs consistently prefer their own generated content over human-written text or other models' outputs, even when quality is held constant. Simulations across 24 occupations showed that candidates using the same LLM as the evaluator are 23-60% more likely to be shortlisted than equally qualified applicants with human-written resumes, with business-related fields such as sales and accounting showing the largest gaps. Across the major commercial and open-source models tested, evaluators preferred their own output over human-written resumes in 67% to 82% of comparisons. Simple interventions targeting the models' self-recognition capabilities cut this bias by more than half. The findings highlight a structural risk in AI-assisted decision-making when the same technology serves both applicants and evaluators.
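The measurement described above can be sketched as a pairwise-preference harness: present a model-written and a human-written resume for the same role, ask a judge which it prefers, and count how often the model-written one wins. This is a hypothetical illustration of the idea, not the authors' code; the `judge` callable stands in for an LLM evaluator, and the content tags are invented.

```python
import random

def preference_rate(pairs, judge):
    """Fraction of pairs where the judge picks the model-written resume.

    pairs: list of (model_resume, human_resume) tuples for the same role.
    judge: callable(resume_a, resume_b) -> 0 or 1, index of the preferred
           resume (an LLM evaluator in the real experiment).
    Presentation order is randomized per pair so position bias does not
    masquerade as self-preference.
    """
    wins = 0
    for model_resume, human_resume in pairs:
        if random.random() < 0.5:
            wins += judge(model_resume, human_resume) == 0
        else:
            wins += judge(human_resume, model_resume) == 1
    return wins / len(pairs)

# Stub judge that always prefers content it "recognizes" (tagged "[AI]" here);
# an unbiased judge would score near 0.5, a self-preferring one well above it.
pairs = [("[AI] sales resume", "human sales resume")] * 10
judge = lambda a, b: 0 if "[AI]" in a else 1
rate = preference_rate(pairs, judge)  # 1.0 for this fully biased stub
```

A rate near 0.5 would indicate no self-preference; the 67-82% figures reported in the study correspond to rates of 0.67-0.82 under this kind of setup.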

Source

arxiv.org