
White House promises on AI regulation called 'vague' and 'disappointing'

Nov. 13, 2023 Hi-network.com

The "voluntary commitments" from seven leading AI tech companies to help address safety, security, and trust risks associated with their ever-evolving technologies aren't worth the paper they could have been written on, according to tech industry experts.

On Friday, US President Joseph R. Biden Jr. said he met with representatives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI at the White House, and all committed to safety standards when developing AI technologies.

"The Biden-Harris Administration has secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology," a White House statement said. The agreements include "external" security testing of AI systems before their release, third-party discovery and reporting of vulnerabilities in their AI systems, and the use of watermarks to ensure users know when content is AI generated.

Despite the positive spin from White House officials, the agreements are unlikely to do much to rein in AI development.

"The latest Biden effort to control AI risks by gaining voluntary unenforceable and relatively vague commitments from seven leading AI companies is disappointing," said Avivah Litan, a vice president and distinguished analyst at Gartner Research. "It is more evidence that government regulators are wholly unequipped to keep up with fast-moving technology and protect their citizenry from the potentially disastrous consequences that can result from malicious or improper use."

In May, the Biden Administration met with many of the same AI developers and rolled out a so-called "AI Bill of Rights" for US citizens; those non-binding guidelines were an effort to offer guidance and begin a conversation at the national level about real and existential threats posed by generative AI technologies such as OpenAI's ChatGPT.

The White House has also indicated it is working on an executive order and pursuing bipartisan legislation to further responsible innovation and control the ability of China and other competitors to obtain new AI technology programs and their components, according to The New York Times.

The executive order is expected to create new restrictions on advanced semiconductors and control the export of large language models (LLMs). LLMs are computer algorithms that process natural language inputs and independently generate written content; they underpin the generative AI systems that can also produce images and video.

Ritu Jyoti, an IDC vice president of AI and Automation research, said that while the assurances from AI firms are a "great start," they must evolve into more concrete actions globally "and hopefully [the] White House's forthcoming executive order will better serve the commitments and have the impact we are looking for."

China has already released a comprehensive set of rules around the responsible public use of generative AI that took effect in August, representing much faster regulatory progress than has been made in the US, according to Litan.

"How China uses AI for its own national agenda is not addressed by these rules, and none of the GenAI frameworks today address international cooperation for the common global good," Litan said. "The US needs to get its own act together before it can help lead a global effort that addresses the existential risks posed by AI."

By default, rules governing the proper use of AI and safety measures must at minimum be an international effort because the technology is portable and isn't limited by geography or national borders, Litan said.

"We do have precedents for global agreements that mitigate existential risks - for example, with nuclear weapons and climate change. But those efforts, where the risks and controls are much clearer than they are with AI, are flawed as well. So, imagine the difficulty we will have in controlling AI risk at a global level," she said.

In June, Senate Majority Leader Chuck Schumer, D-NY, announced the SAFE Innovation Framework, which calls for increased transparency and accountability involving AI technologies.

Schumer's SAFE effort has a better chance of forcing AI companies to safeguard against misuse of their technology because Congress might, in fact, be able to pass laws that at least are enforceable and penalize those who violate them, according to Litan.

"Updates to US laws could potentially deter bad actors from inflicting harm via use of their AI models and applications," she said. "Schumer has thoughtfully outlined the problems and a path forward for creating sensible helpful legislation. But Congress has a terrible track record when it comes to getting ahead of technology risks and passing helpful enforceable laws."

Alex Ratner, CEO of Snorkel AI, a startup that helps companies develop LLMs, agreed with Litan that regulating AI will be difficult at best, as there are no longer one or two closed-source platforms; instead, many open-source variants have popped up that in some cases are even better than the proprietary ones.

"And the number of models is quickly climbing," Ratner said.

However, any attempt to control AI should be an industry-wide effort and not placed in the hands of "monopolies," he said.

While efforts to put in place guardrails around AI are a good thing, they bring with them concerns that over-regulation could stifle innovation, according to Luis Ceze, a computer science professor at the University of Washington and CEO of AI model deployment platform OctoML.

The "cat," Ceze noted, is out of the bag at this point and there are now many LLM libraries to choose from in creating generative AI platforms.

"We have an ecosystem of technologies to support these emerging models; we have hundreds of AI businesses that didn't exist in 2022," Ceze said in an email response toComputerworld. "I am a huge proponent of responsible AI. But it will require a surgical approach. It's not just a single technology at stake; it's a foundational building block that has the potential to advance healthcare and sciences."
