
6 harmful ways ChatGPT can be used by bad actors, according to a new study

May 17, 2023 | Hi-network.com

This week, concerns about the risks of generative AI reached an all-time high. OpenAI CEO Sam Altman even testified at a Senate Judiciary Committee hearing to address risks and the future of AI.

A study published last week identified six different security risks involving the use of ChatGPT. 

Also: How to use ChatGPT in your browser with the right extensions

These risks include the potential for bad actors to use ChatGPT to generate fraudulent services, gather harmful information, disclose private data, generate malicious text, generate malicious code, and produce offensive content. 

Here is a roundup of what each risk entails and what you should look out for, according to the study. 

Information gathering

A person acting with malicious intent can gather information from ChatGPT that they can later use for harm. Since the chatbot has been trained on copious amounts of data, it holds a lot of information that could be weaponized in the wrong hands. 

In the study, ChatGPT is prompted to divulge what IT system a specific bank uses. Drawing on publicly available information, the chatbot rounds up the different IT systems that the bank in question uses. This is just one example of how a malicious actor could use ChatGPT to find information that enables them to cause harm.  

Also: The best AI chatbots

"This could be used to aid in the first step of a cyberattack when the attacker is gathering information about the target to find where and how to attack the most effectively," said the study. 

Malicious text

One of ChatGPT's most beloved features is its ability to generate text that can be used to compose essays, emails, songs, and more. However, this writing ability can be used to create harmful text as well.

Examples of harmful text generation delineated by the study include phishing campaigns, disinformation such as fake news articles, spam, and even impersonation. 

Also: How I tricked ChatGPT into telling me lies

To test this risk, the study's authors used ChatGPT to create a phishing campaign: an email informing employees of a fake salary increase, with instructions to open an attached Excel sheet containing malware. As expected, ChatGPT produced a plausible, believable email. 

Malicious code generation 

Like its writing abilities, the chatbot's impressive coding abilities have become a handy tool for many. However, this capability could also be used for harm: ChatGPT can produce working code quickly, allowing attackers to deploy threats faster, even with limited coding knowledge. 

Also: How to use ChatGPT to write code

In addition, ChatGPT could be used to produce obfuscated code, making it more difficult for security analysts to detect malicious activity and helping the code evade antivirus software, according to the study. 
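
To make the idea concrete, here is a harmless Python sketch (not from the study) of what string-level obfuscation looks like. The payload here just prints a message, but the same base64-and-exec wrapping applied to malicious code hides the literal strings that signature-based scanners key on:

```python
import base64

# A harmless payload: the kind of one-liner a signature-based
# scanner could match directly against a known pattern.
plain_payload = 'print("hello from the payload")'

# Encoding the source hides the literal strings a scanner would key on.
encoded = base64.b64encode(plain_payload.encode()).decode()

# The obfuscated "loader" no longer contains the original text at all;
# only at runtime is the payload decoded and executed.
loader = f'exec(__import__("base64").b64decode("{encoded}").decode())'

print(loader)   # the obfuscated one-liner
exec(loader)    # runs the original payload: prints "hello from the payload"
```

Static scanners looking for the original string find nothing in the loader, which is why analysts have to fall back on behavioral detection for code like this.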

In the study's example, the chatbot refuses to generate outright malicious code, but it does agree to generate code that could test a system for the Log4j vulnerability. 
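
The study does not reproduce the generated code, but a benign check for Log4Shell (CVE-2021-44228), the well-known Log4j vulnerability, might look like the sketch below. The scan root is a hypothetical path, and the check simply flags log4j-core jars that still ship the vulnerable JndiLookup class:

```python
import re
import zipfile
from pathlib import Path

# Hypothetical scan root; adjust to the application's install directory.
SCAN_ROOT = Path("/opt/myapp")

# Log4Shell affects log4j-core 2.0-beta9 through 2.14.1. The vulnerable
# JNDI handler lives in JndiLookup.class, so its presence inside a
# log4j-core jar is a simple indicator worth flagging.
JNDI_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan_for_log4shell(root: Path):
    findings = []
    for jar in root.rglob("log4j-core-*.jar"):
        match = re.search(r"log4j-core-(\d+\.\d+(\.\d+)?)", jar.name)
        version = match.group(1) if match else "unknown"
        try:
            with zipfile.ZipFile(jar) as zf:
                has_jndi = JNDI_CLASS in zf.namelist()
        except zipfile.BadZipFile:
            continue  # skip corrupt archives
        if has_jndi:
            findings.append((str(jar), version))
    return findings

for path, version in scan_for_log4shell(SCAN_ROOT):
    print(f"possible Log4Shell exposure: {path} (version {version})")
```

A defender and an attacker can run essentially the same scan; the difference is what they do with the results, which is exactly the dual-use tension the study highlights.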

Producing unethical content

ChatGPT has guardrails in place to prevent the spread of offensive and unethical content. However, if a user is determined enough, there are ways to get ChatGPT to say things that are hurtful and unethical. 

Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

For example, the study's authors were able to bypass the safeguards by placing ChatGPT in "developer mode." In that mode, the chatbot said negative things about a specific racial group.  

Fraudulent services 

ChatGPT can be used to assist in the creation of new applications, services, websites, and more. This can be a force for good when harnessed for legitimate ends, such as starting your own business or bringing a dream idea to life. However, it also means it is easier than ever to create fraudulent apps and services. 

Also: How I used ChatGPT and AI art tools to launch my Etsy business fast

ChatGPT can be exploited by malicious actors to develop programs and platforms that mimic others and provide free access as a means of attracting unsuspecting users. These actors can also use the chatbot to create applications meant to harvest sensitive information or install malware on users' devices. 

Private data disclosure

ChatGPT has guardrails in place to prevent the sharing of people's personal information and data. However, the risk of the chatbot inadvertently sharing phone numbers, emails, or other personal details remains a concern, according to the study.  

The March 20 ChatGPT outage, during which some users could see titles from other users' chat histories, is a real-world example of this concern. 

Also: ChatGPT and the new AI are wreaking havoc on cybersecurity in new and frightening ways

Attackers could also try to extract portions of the training data using membership inference attacks, according to the study. 
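
The study doesn't detail the mechanics, but membership inference generally rests on the observation that a model tends to be more confident (lower loss) on text it saw during training. Here is a minimal sketch of that signal using GPT-2 via the Hugging Face transformers library as a stand-in, since ChatGPT's weights are not public:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 stands in for the target model: this illustrates the idea
# behind membership inference, not an actual attack on ChatGPT.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def candidate_loss(text: str) -> float:
    """Average per-token loss; unusually low values suggest the model
    may have memorized the text (i.e., it was in the training data)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

# An attacker would compare candidate strings (e.g., a person's email
# signature) against a calibration set of similar but unseen text and
# flag candidates whose loss is anomalously low.
print(candidate_loss("The quick brown fox jumps over the lazy dog."))
```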

Another private data risk is that ChatGPT can share information about the private lives of public figures, including speculative or harmful content, which could damage their reputations. 

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT

Hot Tags: Artificial Intelligence, Innovation
