
Pharma CEO: Don't halt AI research, our work is too important

Sep 01, 2023 Hi-network.com

How is society supposed to address the risks of artificial intelligence? Some argue that the conceivable benefits outweigh the immediate dangers, and so restrictions should be applied sparingly.

"I think these broad, blanket suggestions that we stop work [on AI] are a little bit misguided," said Chris Gibson, co-founder and CEO of Recursion Pharmaceuticals, in a recent interview with . 

"I think it's just really important that folks continue to embrace the opportunity that exists with machine learning," said Gibson.

Also: The 5 biggest risks of generative AI, according to an expert

Gibson's company is working with Big Pharma to employ AI in drug discovery. 

Gibson was responding to a letter published in March by Elon Musk, AI scholar Yoshua Bengio, and numerous others calling for a temporary halt to AI research to investigate the dangers. 

The petition called for a pause to what it describes as "an out-of-control race" for AI superiority, producing systems that its creators can't "understand, predict, or reliably control."

Gibson zeroed in on what he deemed unrealistic concerns, such as the potential for machine learning programs to become sentient, a scenario that scholars who've studied the matter regard as fairly remote.

"We don't want to pause for six months or a year, because of how much opportunity there is moving forward," says Chris Gibson, co-founder and CEO of Recursion Pharmaceuticals.

Recursion Pharmaceuticals

"The work we're doing at Recursion is super interesting, training multi-billion parameter models that are really, really exciting in the context of biology," Gibson told . "But  they're not sentient, they're not gonna become sentient, they're very far from that."

One of Gibson's principal concerns is to preserve the ability of his firm and others to move forward with work on things such as drug discovery. Recursion, which partners with Bayer and Genentech, among others, has five drug candidates currently in the clinical stages of the drug development pipeline. The company has amassed over 13 petabytes of data in Phenomaps, its term for databases of "inferred relationships" between molecules.

Also: 'OpenAI is product development, not AI research,' says Meta's chief AI scientist LeCun

"Models that are held in isolation to answer really specific questions, I think, are really important for advancing humanity," said Gibson.  "Models like ours, and other companies like ours, we don't want to pause for six months, or pause for a year, because of how much opportunity there is moving forward."

Gibson's firm, which is publicly traded, announced in July that it had received a $50 million investment from Nvidia, whose GPU chips dominate AI processing.

Gibson was measured in his remarks about those who worry about AI or who have called for a halt. "There are really smart people on both sides of the issue," he said, noting that a Recursion co-founder had stepped away from day-to-day running of the company several years ago because of concerns about the ethical challenges of AI.

Yoshua Bengio, an advisor to Recursion, is one of the letter's signatories. 

"Yoshua is brilliant, so this is putting me on the spot just a little," said Gibson. "But, I would say, I think there are really important arguments on both sides of the debate."

Also: The great puzzle of the body and disease is beginning to yield to AI, says Recursion CEO

The split between the parties for and against a moratorium "suggests caution," he said, "but I don't believe that we should pause all training, and all inference, of ML and AI algorithms for any period of time."

Gibson's team followed up to point out that Bengio, in his blog post on AI risks, has drawn a distinction between threats and societally useful applications of AI such as healthcare.

Gibson is in accord with peers of Bengio such as Meta's chief AI scientist Yann LeCun, who has spoken out against the initiative of his friend and sometime collaborator.

Gibson did allow that some notions of risk, however improbable, need to be carefully considered. One is the end-of-humanity scenarios that have been outlined by organizations such as the Future of Humanity Institute.

"There are people in the field of AI who think that if you ask an ML or AI algorithm to maximize some sort of utility function, say, make the world as beautiful and peaceful as possible, then an AI algorithm could, probably not totally incorrectly, interpret that humans are the cause of most of the lack of beauty and lack of peace," said Gibson.  

Also: ChatGPT: What The New York Times and others are getting terribly wrong about it

As a result, a program could "put in place something really scary." Such a prospect is "probably farfetched," he said. "But, the impact is so big, it's important to think about it; it's unlikely any one of our airplanes are gonna crash when we go up in the sky, but we certainly look at the warning because the cost is so substantial."

There are also "some things that are really obvious we could all agree on today," said Gibson, such as not allowing programs to have control of weapons of mass destruction.  

"Would I advocate for giving an AI or ML algorithm access to our nuclear launch systems? Absolutely not," he said. 

On a more prosaic level, Gibson believes that issues of bias need to be dealt with in algorithms. "We need to make sure that we're being really cautious about the datasets, and making sure the utility functions we optimize our algorithms against don't have some sort of bias within them."

Also: AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers

"You do have more bias creeping into the outcomes of these algorithms that are becoming more and more part of our lives," observed Gibson. 

The most basic concerns, in Gibson's view, should be obvious to all. "A good example is, I think, it's more risky to give an algorithm uncontrolled access to the internet," he said. "So, there could be some near-term regulations around that."

His position on regulation, he said, is that "part of being in a high-functioning society is putting all those options on the table and having an important discussion around them. We just need to be careful not to over-extend ourselves with broad-based regulation that's directed at all ML or all AI."

A pressing concern for AI ethics is the current trend among companies such as OpenAI and Google toward disclosing less and less about the inner workings of their programs. Gibson said he is against any regulation requiring programs to be made open source. "But," he added, "I think it's very important for most companies to share some of their work in various ways with society, to keep moving everybody forward."

Also: Why open source is essential to allaying AI fears, according to Stability.ai founder

Recursion has open-sourced many of its datasets, he noted, and, "I would not exclude the possibility of us open-sourcing some of our models in the future."

Obviously, the large questions of regulation and control come back to the will of any particular nation's citizens. A key question is how the electorate can be educated about AI. In that regard, Gibson was not optimistic.

While education is important, he said, "My general belief is that the public seems uninterested in being educated these days." 

"The people who are interested in being educated tend to tune into these things," he said, "and most of the rest of the world doesn't, which is super unfortunate."

Artificial Intelligence

  • Generative AI will far surpass what ChatGPT can do. Here's everything on how the tech advances
  • ChatGPT's new web browsing feature is a big disappointment. Use this plugin instead
  • What is Amazon Bedrock? 4 ways it can help businesses use generative AI tools
  • Can generative AI solve computer science's greatest unsolved problem?
