Google has removed a key passage from its AI principles that previously committed to steering clear of potentially harmful applications, including weapons. The now-missing section, titled 'AI applications we will not pursue,' explicitly stated that the company would not develop technologies likely to cause harm, as seen in archived versions of the page reviewed by Bloomberg.
The change has sparked concern among AI ethics experts. Margaret Mitchell, former co-lead of Google's ethical AI team and now chief ethics scientist at Hugging Face, criticised the move. 'Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and more problematically, it means Google will probably now work on deploying technology directly that can kill people,' she said.
With ethics guardrails shifting, questions remain about how Google will navigate the evolving AI landscape, and whether its revised stance signals a broader industry trend toward prioritising market dominance over ethical considerations.