Some of the biggest names in technology are coming together to help keep AI safe. Tech giants Google and Microsoft are partnering with OpenAI and Anthropic to ensure that AI technology progresses in a "safe and responsible" fashion.
This partnership comes after a White House meeting last week with the same companies, plus Amazon and Meta, to discuss the secure and transparent development of future AI.
The new industry group, called the Frontier Model Forum, noted that while some governments around the world are taking measures to keep AI safe, those measures aren't enough. Responsibility, the forum argues, also falls on the technology companies themselves.
In a blog post on Google's site, the forum listed four main objectives:

- Advancing AI safety research to promote responsible development of frontier models and minimize risks
- Identifying best practices for the responsible development and deployment of frontier models
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks
- Supporting efforts to develop applications that can help meet society's greatest challenges
An OpenAI representative added that while the forum would work with policymakers, it wouldn't get involved in government lobbying.
Membership in the forum is open to any company that meets three criteria: developing and deploying frontier models, demonstrating a strong commitment to frontier model safety, and being willing to participate in joint initiatives with other members. What's a "frontier model"? The forum defines one as "any large-scale machine-learning models that go beyond current capabilities and have a vast range of abilities."
While AI has world-changing potential, the forum says, "appropriate guardrails are required to mitigate risks." Anna Makanju, vice president of global affairs at OpenAI, issued a statement: "It's vital that AI companies -- especially those working on the most powerful models -- align on common ground. This is urgent work. And this forum is well-positioned to act quickly to advance the state of AI safety."
Microsoft President Brad Smith echoed those sentiments, saying that it's ultimately up to the companies creating AI technology to keep it safe and under human control.