
Despite DALL-E military pitch, OpenAI maintains its tools won't be used to develop weapons

April 12, 2024 | Hi-network.com
OpenAI and Microsoft. Lionel BONAVENTURE / AFP/Getty Images

Documents published by The Intercept on Wednesday reveal that Microsoft Azure pitched its version of DALL-E, OpenAI's image generator, to the US military in October 2023. The presentation, given at a Department of Defense (DoD) training seminar on "AI Literacy," suggested DALL-E could help train battlefield tools via simulation.

Microsoft pitched DALL-E under the Azure OpenAI (AOAI) umbrella, a joint product of Microsoft's partnership with OpenAI, which merges the former's cloud computing with the latter's generative AI power. 

Also: OpenAI's Voice Engine can clone a voice from a 15-second clip. Listen for yourself

The presentation deck -- in which OpenAI's logo appears above the company's mission, "Ensure that artificial general intelligence (AGI) benefits humanity" -- details how the DoD could use AOAI for everything from run-of-the-mill machine learning tasks, like content analysis and virtual assistants, to "Using the DALL-E models to create images to train battle management systems."

This revelation created some public confusion, given OpenAI's own usage guidance. Historically, OpenAI's policies page stated that its models should not be used for military development. But in January, The Intercept noticed that OpenAI had removed "military" and "warfare" from the page; it now prohibits only the use of "our service to harm yourself or others," including to "develop or use weapons."


A slide from Microsoft's presentation to the Department of Defense. 

Screenshot by Radhika Rajkumar

When asked about the change, the company told CNBC it was intended to make space for certain military use cases that do align with OpenAI's mission, including defensive measures and cybersecurity, which Microsoft has been separately advocating for. OpenAI maintained that other applications were still not permitted: "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," a spokesperson said. 

Also: OpenAI CEO: We're happy if Microsoft makes a sale, and they're happy if we make a sale

However, weapons development, injury to others, and destruction of property can all be seen as possible outcomes of training battlefield management systems. Microsoft told The Intercept via email that the October 2023 pitch has not been implemented, and that the examples in the presentation were intended as "potential use cases" for AOAI.

Liz Bourgeous, an OpenAI spokesperson, told The Intercept that OpenAI was not involved in the Microsoft presentation and reiterated the company's policies. "We have no evidence that OpenAI models have been used in this capacity," said Bourgeous. "OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes."

The response to the pitch exemplifies how tricky it is to maintain policies across derivative versions of a base technology. Microsoft is a longtime contractor with the US Army, and AOAI is likely preferable to OpenAI's own offerings for military use because of Azure's added security infrastructure. It remains to be seen how OpenAI will distinguish among applications of its tools amid the partnership and Microsoft's continued work with the DoD.

