
Generative AI should be more inclusive as it evolves, according to OpenAI's CEO

Jun 19, 2023 | Hi-network.com

Generative artificial intelligence (AI) tools such as ChatGPT are gaining new capabilities every day, and the public needs clarity about how these tools are used. That clarity will also be important to ensure the tools are as inclusive as possible for a global audience.

In addition, the emerging technology should be developed alongside public consultation, with humans remaining in control, according to Sam Altman, CEO of OpenAI, the company behind ChatGPT. This is essential to mitigate potential risks or harm associated with the adoption of AI, Altman said during a dialog session in Singapore, part of the CEO's month-long global tour that also included the regional markets of South Korea, Indonesia, Japan, and Australia. 

Also: Generative AI: Just don't call it an 'artist,' say scholars in Science magazine 

Held at Singapore Management University and organized by AI Singapore and the Infocomm Media Development Authority, the dialog session was attended by students, developers, and industry players. 

Elaborating on how risks could be managed as AI continued to evolve, Altman said it was important to let the public learn about and experience new developments as they came along. Doing so would ensure any potential harm could be uncovered and resolved before its impact widened. 

This would be more effective than building and testing a piece of technology behind closed doors, and releasing it to the public on the assumption that all possible risks had been identified and plugged. 

Also: 5 ways to explore the use of generative AI at work

"You can't learn everything in a lab," he said. Regardless of how much a product is tested with the aim to minimize harm, someone will think of ways to exploit it in ways its creators never thought of. This was true with any new technology, he noted. 

"We believe iterative deployment is the only way to do this," he said, adding that introducing new releases gradually also enables societies to adapt as AI evolves and incorporate feedback on how AI should be improved. 

Pointing to how the rapid rise of ChatGPT had triggered conversations worldwide about the risks of AI, Altman said such discussions were important and would not have surfaced if the generative AI platform had not been deployed. 

Also: Leadership alert: The dust will never settle and generative AI can help

As AI gains traction and draws the interest of nations, it will also be critical to address challenges related to bias and data localization. 

For OpenAI, that means figuring out how to train its generative AI platform on datasets that are "as diverse as possible," Altman said. These datasets need to cut across multiple cultures, languages, and values, among other factors. 

While it is impossible to build a system that everyone collectively agrees is unbiased, the key is for AI models to learn as much as possible and for users to retain control. He further underscored the need for humans to stay tightly in the loop and not become so trusting that they give AI systems full rein to make decisions. 

He also reiterated his call for governments to work on regulations around AI, adding that until there was more clarity about what the various jurisdictions planned to do, it would be difficult to determine which framework would work best. 

He suggested that what could emerge was a mix of scenarios where some countries might band together to establish a common legal framework, while organizations could establish industry standards and best practices to adopt. Civil groups also might suggest codes of practice. 

Also: 92% of programmers are using AI tools, says GitHub developer survey 

Meanwhile, OpenAI will continue to work on ChatGPT releases that are dramatically better than previous versions, Altman said, noting that more videos and images will be added to the data the platform learns from. 

The company's aim is to increase the role of ChatGPT as a tool for humans and enhance their ability to carry out tasks, he said. 

Other data systems also matter

However, organizations looking to adopt generative AI should remember not to do so in isolation from other systems. 

Chatbots powered by generative AI models, for instance, should be integrated with the company's customer relationship management (CRM) system, said June Yang, Google Cloud's VP of cloud AI and industry solutions.  

Customers often visit a company's website with product queries or questions specific to their accounts, Yang said during a virtual media briefing to provide updates on the US vendor's generative AI plans. This means the most recent and relevant information needs to be pulled from the CRM system and product database. 

The data used to train the large language models that power generative AI systems is unlikely to be up to date, so chatbots may end up pushing outdated information to customers. 
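As a rough illustration of that integration pattern, the sketch below pulls a customer's current account and product records before handing the question to a language model, so the answer reflects live data rather than the model's training snapshot. The function names and the llm_complete call are hypothetical placeholders for illustration, not part of any vendor's actual API.

```python
# Hedged sketch: grounding a chatbot answer in live CRM and product data
# instead of relying only on the model's (possibly stale) training data.
# crm_lookup, product_lookup, and llm_complete are hypothetical stand-ins.
from typing import Callable


def answer_customer_query(
    question: str,
    customer_id: str,
    crm_lookup: Callable[[str], dict],      # fetches the customer's current account record
    product_lookup: Callable[[str], dict],  # fetches up-to-date product details
    llm_complete: Callable[[str], str],     # wraps whichever generative AI API is in use
) -> str:
    # Pull the freshest records first, so the model answers from current facts.
    account = crm_lookup(customer_id)
    product = product_lookup(account.get("product_id", ""))

    prompt = (
        "Answer the customer's question using only the context below.\n"
        f"Account: {account}\n"
        f"Product: {product}\n"
        f"Question: {question}"
    )
    return llm_complete(prompt)
```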

Also: Is humanity really doomed? Consider AI's Achilles heel 

She noted that Google's clients have mostly chosen to take a more cautious approach with generative AI, making the technology available first to employees with a reminder that not all generated responses will be accurate. Other clients, in particular startups, have opted to push boundaries and tap generative AI to transform their industries, she said. 

When asked whether there was a lack of localization, especially for businesses in Asia, given that AI development had been led mostly by US players and models had been trained on data from outside Asia, Yang acknowledged that much of the innovation had come from the US. However, she noted that significant academic AI research is being carried out in Europe and Asia. 

Google, too, has trained its models to support multiple languages. Its speech-to-text platform Chirp, for instance, currently is capable of handling more than 100 languages, she said. Chirp's foundation model has been added to Google's machine learning training and development platform, Vertex AI, which was recently expanded to include generative AI support. Foundation models are accessible through APIs (application programming interfaces).   
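As a rough illustration of what accessing foundation models through APIs can look like, the hedged sketch below calls a text foundation model via the Vertex AI Python SDK (part of the google-cloud-aiplatform package). The project ID, prompt, and parameter values are placeholders, and the example uses a text model for simplicity rather than Chirp itself.

```python
# Minimal sketch, assuming the google-cloud-aiplatform SDK is installed and
# a GCP project with Vertex AI enabled; project ID and prompt are placeholders.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

# Load a text foundation model exposed through the Vertex AI API.
model = TextGenerationModel.from_pretrained("text-bison")

response = model.predict(
    "Summarize the benefits of multilingual speech recognition in two sentences.",
    temperature=0.2,        # lower temperature for more deterministic output
    max_output_tokens=128,  # cap the response length
)
print(response.text)
```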

Also: Generative AI can save marketing pros 5 hours per week, according to research

Yang said Google has been open about sharing its techniques and research results, which others are welcome to build on to drive their own AI innovation. The US vendor is also open to collaborating with partners, she said. 

Elsewhere, in China, Alibaba Cloud introduced a partnership program in April to build generative AI models customized for companies across verticals, including finance and petrochemicals. The Chinese cloud vendor hopes to accelerate the development of applications powered by its large language model, Tongyi Qianwen, which will be integrated with all of Alibaba's own business applications, including e-commerce, search, navigation, and intelligent voice assistance. 

Artificial Intelligence

  • The impact of artificial intelligence on software development? Still unclear
  • Android 14's AI-generated wallpapers are super fun. Here's how to create them
  • AI aims to predict and fix developer coding errors before disaster strikes
  • Generative AI is everything, everywhere, all at once

