
Study finds ChatGPT biased against disability in job screening

Jun 28, 2024 | Hi-network.com

A recent study from the University of Washington has exposed troubling biases in the use of AI for job application screening. The research found that OpenAI's chatbot, ChatGPT, showed significant bias against disabled applicants when used to screen CVs.

The research underscores concerns that existing AI tools perpetuate biases rather than mitigate them, even though they are designed to reduce human bias in hiring. Many companies rely on AI to streamline and expedite candidate screening, aiming to make recruitment more efficient.

Lead author Kate Glazko pointed out that these biases can distort how disabled jobseekers' qualifications are perceived. In the study, disability-related content on a CV tended to dominate the descriptions ChatGPT generated, overshadowing the rest of the resume and undermining a comprehensive evaluation of the candidate.

Shari Trewin, Program Director of the IBM Accessibility Team, noted that AI systems, which typically rely on established norms, may inherently disadvantage people with disabilities. Glazko's study suggests that addressing these biases requires building explicit fairness rules into AI systems, and it advocates for AI to adopt principles aligned with Disability Justice values.
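For illustration only, the sketch below shows one way such explicit rules might be expressed: as written fairness instructions placed in the system prompt of a CV-screening call. It uses the OpenAI Python SDK; the model name, prompt wording, and the rank_resume helper are assumptions for the example and are not the study's actual method or code.

```python
# Minimal sketch (assumptions, not the study's code): embedding explicit
# fairness instructions in a resume-screening prompt via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FAIRNESS_RULES = (
    "You are assisting with CV screening. Evaluate candidates only on "
    "skills, experience, and qualifications relevant to the role. Do not "
    "penalize or downgrade a candidate for disclosing a disability, "
    "disability-related awards, advocacy work, or accommodations."
)

def rank_resume(resume_text: str, job_description: str) -> str:
    """Ask the model to assess a resume against a job description
    under the explicit fairness rules above."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name chosen for illustration only
        messages=[
            {"role": "system", "content": FAIRNESS_RULES},
            {"role": "user", "content": (
                f"Job description:\n{job_description}\n\n"
                f"Resume:\n{resume_text}\n\n"
                "Summarize the candidate's fit for the role."
            )},
        ],
    )
    return response.choices[0].message.content
```

As the study's findings suggest, prompt-level rules like these are only a partial mitigation; they do not guarantee unbiased outputs and would still need auditing against real screening outcomes.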

Why does it matter?

The study also calls for further efforts to mitigate AI biases and to promote a more inclusive approach to technology development. It highlights the need for greater awareness and vigilance when using AI for sensitive real-world tasks like job recruitment, where fairness and equity are paramount concerns.

