
Should we fear the rise of artificial general intelligence?

Nov. 13, 2023 Hi-network.com

Last week, a who's who of technologists called for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing "profound risks to society and humanity."

In an open letter that now has more than 3,100 signatories, including Apple co-founder Steve Wozniak, tech leaders called out San Francisco-based OpenAI's recently announced GPT-4 model in particular, saying the company should halt further development until oversight standards are in place. That goal has the backing of technologists, CEOs, CFOs, doctoral students, psychologists, medical doctors, software developers and engineers, professors, and public school teachers from all over the globe.

On Friday, Italy became the first Western nation to temporarily ban ChatGPT over privacy concerns; the natural language processing app experienced a data breach last month involving user conversations and payment information. ChatGPT is the popular GPT-based chatbot created by OpenAI and backed by billions of dollars from Microsoft.

The Italian data protection authority said it is also investigating whether OpenAI's chatbot violated the European Union's General Data Protection Regulation, rules created to protect personal data inside and outside the EU. OpenAI has complied with the ban, according to a report by the BBC.

The expectation among many in the technology community is that GPT, which stands for Generative Pre-trained Transformer, will advance to become GPT-5 - and that version will be an artificial general intelligence, or AGI. AGI represents AI that can think for itself, and at that point, such a system would continue to grow exponentially smarter over time.

Around 2016, a trend emerged of training AI models two to three orders of magnitude larger than previous systems, according to Epoch, a research group trying to forecast the development of transformative AI. That trend has continued.

There are currently no AI systems larger than GPT-4 in terms of training compute, according to Jaime Sevilla, director of Epoch. But that will change.

Large-scale machine learning models have more than doubled in capacity every year, according to Epoch.

Anthony Aguirre, a professor of physics at UC Santa Cruz and executive vice president of the Future of Life Institute, the non-profit organization that published the open letter to developers, said there's no reason to believe the scale of computation behind models like GPT-4 won't continue to more than double every year.

"The largest-scale computations are increasing size by about 2.5 times per year.  GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed," Acquirre said. "Only the labs themselves know what computations they are running, but the trend is unmistakable."

In his biweekly blog on March 23, Microsoft co-founder Bill Gates heralded AGI - which is capable of learning any task or subject - as "the great dream of the computing industry."

"AGI doesn't exist yet - there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all," Gates wrote. "Now, with the arrival of machine learning and large amounts of computing power, sophisticated AIs are a reality, and they will get better very fast."

Muddu Sudhakar, CEO of Aisera, a generative AI company for enterprises, said only a handful of companies - such as OpenAI and DeepMind (backed by Google) - are focused on achieving AGI, though they have "huge amounts of financial and technical resources."

Even so, they have a long way to go to get to AGI, he said.

"There are so many tasks AI systems cannot do that humans can do naturally, like common-sense reasoning, knowing what a fact is and understanding abstract concepts (such as justice, politics, and philosophy)," Sudhakar said in an email toComputerworld. "There will need to be many breakthroughs and innovations for AGI. But if this is achieved, it seems like this system would mostly replace humans.

"This would certainly be disruptive and there would need to be lots of guardrails to prevent the AGI from taking full control," Sudhakar said. "But for now, this is likely in the distant future. It's more in the realm of science fiction."

Not everyone agrees.

AI technology and chatbot assistants have made, and will continue to make, inroads into nearly every industry. The technology can create efficiencies and take over mundane tasks, freeing up knowledge workers and others to focus on more important work.

For example, large language models (LLMs) - the algorithms powering chatbots - can sift through millions of alerts, online chats, and emails, as well as find phishing web pages and potentially malicious executables. LLM-powered chatbots can write essays and marketing campaigns and suggest computer code, all from simple user prompts.

Chatbots powered by LLMs are natural language processors that essentially predict the next words after being prompted by a user's question. So, if a user were to ask a chatbot to create a poem about a person sitting on a beach in Nantucket, the AI would simply chain together the words, sentences, and paragraphs that make up the most likely response, based on the patterns in its training data.
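As a minimal sketch of what that next-word prediction looks like in practice - using the open-source Hugging Face transformers library and the small GPT-2 model purely as stand-ins, neither of which is discussed above - a prompt goes in and the model repeatedly predicts the most plausible next tokens:

    # A small, illustrative text-generation demo; GPT-2 stands in for larger models.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "A poem about a person sitting on a beach in Nantucket:"
    result = generator(prompt, max_new_tokens=60, do_sample=True)

    # The model keeps predicting likely next tokens, conditioned on the prompt
    # and on the statistical patterns it absorbed during training.
    print(result[0]["generated_text"])

There is no lookup of facts in this loop; the output is whatever continuation the training data makes statistically plausible, which is also why the "hallucinations" described below can occur.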

But LLMs also have made high-profile mistakes, and can produce "hallucinations" where the next-word generation engines go off the rails and produce bizarre responses.

If AI based on LLMs with billions of adjustable parameters can go off the rails, how much greater would the risk be when AI no longer needs humans to teach it, and it can think for itself? The answer is much greater, according to Avivah Litan, a vice president and distinguished analyst at Gartner Research.

Litan believes AI development labs are moving forward at breakneck speed without any oversight, which could result in AGI becoming uncontrollable.

AI laboratories, she argued, have "raced ahead without putting the proper tools in place for users to monitor what's going on. I think it's going much faster than anyone ever expected."

The current concern is that AI technology for use by corporations is being released without the tools users need to determine whether the technology is generating accurate or inaccurate information.

"Right now, we're talking about all the good guys who have all this innovative capability, but the bad guys have it, too," Litan said. "So, we have to have these water marking systems and know what's real and what's synthetic. And we can't rely on detection, we have to have authentication of content. Otherwise, misinformation is going to spread like wildfire."

For example, Microsoft this week launched Security Copilot, which is based on OpenAI's GPT-4 large language model. The tool is an AI chatbot for cybersecurity experts to help them quickly detect and respond to threats and better understand the overall threat landscape.

The problem is, "you as a user have to go in and identify any mistakes it makes," Litan said. "That's unacceptable. They should have some kind of scoring system that says this output is likely to be 95% true, and so it has a 5% chance of error. And this one has a 10% chance of error. They're not giving you any insight into the performance to see if it's something you can trust or not."

A bigger concern in the not-so-distant future is that GPT-4 creator OpenAI will release an AGI-capable version. At that point, it may be too late to rein in the technology.

One possible solution, Litan suggested, is to release two models for every generative AI tool - one to generate answers, the other to check the first for accuracy.

"That could do a really good job at ensuring if a model is putting out something you can trust," she said. "You can't expect a human being to go through all this content and decide what's true or not, but if you give them other models that are checking..., that would allow users to monitor the performance."

In 2022, Time reported that OpenAI had outsourced services to low-wage workers in Kenya to determine whether its GPT LLM was producing safe information. The workers hired by Sama, a San Francisco-based firm, were reportedly paid $2 per hour and required to sift through GPT app responses "that were prone to blurting out violent, sexist and even racist remarks."

"And this is how you're protecting us? Paying people$2 an hour and who are getting sick. It's wholly inefficient and it's wholly immoral," Litan said.

"AI developers need to work with policy makers, and these should at a minimum include new and capable regulatory authorities," Litan continued. "I don't know if we'll ever get there, but the regulators can't keep up with this, and that was predicted years ago. We need to come up with a new type of authority."

Shubham Mishra, co-founder and global CEO of AI start-up Pixis, believes that while progress in his field "cannot, and must not, stop," the call for a pause in AI development is warranted. Generative AI, he said, does have the power to confuse the masses by pumping out propaganda or "difficult to distinguish" information into the public domain.

"What we can do is plan for this progress. This can be possible only if all of us mutually agree to pause this race and concentrate the same energy and efforts on building guidelines and protocols for the safe development of larger AI models," Mishra said in an email toComputerworld.

"In this particular case, the call is not for a general ban on AI development but a temporary pause on building larger, unpredictable models that compete with human intelligence," he continued. "The mind-boggling rates at which new powerful AI innovations and models are being developed definitely calls for the tech leaders and others to come together to build safety measures and protocols."

