
Microsoft vice-chairman calls for human control over AI

Aug 30, 2023 | Hi-network.com

Microsoft's president and vice-chairman, Brad Smith, has warned about the weaponization of AI and stressed the importance of human control over the technology. Smith stated that while every technology, including AI, has the potential to be both a tool and a weapon, measures need to be put in place to ensure that AI remains subject to human supervision. This is particularly crucial when it comes to critical infrastructure and military applications.

AI safety measures

'We have to ensure that AI remains subject to human control. Whether it's a government, the military or any kind of organisation that is thinking about using AI to automate, say, critical infrastructure, we need to ensure that we have humans in control, that we can slow things down or turn things off,' Smith told CNBC.

Smith called for the implementation of laws and regulations to secure human control over AI. He drew comparisons to safety measures used in other technologies, such as circuit breakers in electrical systems and emergency brakes on school buses. Smith argued that just as these safety measures are required for other technologies, similar measures should be applied to AI. Such safety brakes would allow humans to slow down or deactivate AI systems if necessary.

Brad Smith's insights on the impact of AI on the future of work

'It is a tool that can help people think smarter and faster. The biggest mistake people could make is to think that this is a tool that will enable people to stop thinking,' said Smith, adding that AI should be seen as a tool that supplements human work rather than one that replaces jobs outright.

AI as a tool of warfare

Brad Smith has consistently emphasised the importance of human control over AI in his public appearances and interviews. Back in 2019, in an interview with The Telegraph, Smith discussed the risks posed by militarised advanced technologies, such as drones equipped with lethal autonomous weapon systems (LAWS) that could be programmed to operate fully or partially autonomously. He noted that such technologies should 'not be allowed to decide on their own to engage in combat and who to kill'. Smith concluded that governments should consider the ethical questions posed by advancements in the field of LAWS.

Fast forward four years, and there is still no globally binding agreement on LAWS. The 2023 Group of Governmental Experts (GGE) of the Convention on Certain Conventional Weapons (CCW) adopted a report on non-binding prohibitions and limitations related to LAWS. The GGE highlighted, among other things, that LAWS must comply with international law, especially international humanitarian law (IHL), and that adequate training must be given to human operators. However, the International Committee of the Red Cross (ICRC) has found these prohibitions vague and 'urged states to launch negotiations for new legally binding rules.' Namely, the ICRC highlighted the need to prohibit LAWS that target humans and recommended that LAWS be programmed to target only objects that are military objectives by nature.
Additionally, UN Secretary-General António Guterres has called on states to conclude a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control.

