In a recently published report titled 'Towards a standard for identifying and managing bias in artificial intelligence', the US National Institute of Standards and Technology (NIST) argues that machine learning processes and data are not the only sources of bias in artificial intelligence (AI). While computational and statistical sources of AI bias are important, human and systemic biases are relevant as well. 'Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person's neighbourhood of residence influencing how likely authorities would consider the person to be a crime suspect.' The report argues in favour of a socio-technical approach to mitigating bias in AI, and introduces guidance for addressing three key challenges for mitigating bias: datasets, testing and evaluation, and human factors.