ChatGPT Security Risks | Spiceworks


  • ChatGPT can be used to create plausible phishing emails and malware, spread misinformation, and compromise data and financial security.
  • It is crucial that employees refrain from uploading sensitive information to ChatGPT and exercise caution when discussing intellectual property and trade secrets on the generative AI tool.
  • Experts call for timely cybersecurity training and for a dedicated two-way communication channel that is always open to employees.

Last week, Italy temporarily banned the use of ChatGPT within its borders, citing privacy concerns. The concern of Italy’s data protection watchdog, the Garante per la protezione dei dati personali (Italian Data Protection Authority), stems from a March 2023 breach at OpenAI that exposed email addresses, user conversations, and payment information.

Moreover, the Garante believes that OpenAI isn’t taking appropriate steps to verify the age of users signing up for the AI-based service (users are supposed to be 13 or older).

The decision has sparked a debate across Europe, with the European Consumer Organization (BEUC) calling for a probe into all major chatbots. Ursula Pachl, deputy director of the BEUC, told Euronews, “Consumers are not ready for this technology. They don’t realize how manipulative and deceptive it can be.”

“They don’t realize that the information they get is maybe wrong. I think this incident with ChatGPT is very important. It’s kind of a wake-up call for the European Union because even though European institutions have been working on an AI Act, it will not be applicable for another four years. And we have seen how fast these sorts of systems are developing.”

Data privacy regulators in Ireland and the U.K. are expected to follow suit. Meanwhile, the French regulator has asked the Garante for more details, and Germany’s commissioner for data protection told the Handelsblatt newspaper that the country may also ban ChatGPT.

And that’s just for privacy issues. ChatGPT is also being used for malicious purposes.

Asked about the worst possible outcome in an interview with ABC, OpenAI CEO Sam Altman said, “There’s a set of very bad outcomes. One thing I’m particularly worried about is that these models could be used for large-scale disinformation. I am worried that these systems, now that they’re getting better at writing computer code, could be used for offensive cyberattacks.”

According to a 2023 Check Point report, the AI chatbot can be, and already is being, used to create malware, including infostealers targeting Microsoft Office documents, PDFs, and images; Python scripts that perform cryptographic operations (in other words, encryption tools); code for dark web marketplaces; and fraudulent schemes.

However, the potency of malware built from code generated by ChatGPT is debatable. As of now, the cybersecurity risks of ChatGPT fall into two intertwined areas: privacy and disinformation/misinformation on the one hand, and phishing on the other.


Use of ChatGPT to Create Phishing Campaigns With Company Information

ChatGPT’s success led Bank of America analysts to proclaim AI to be on the verge of its “iPhone moment.” The bank assessed that AI’s economic impact could be as much as $15.7 trillion by 2030.

Millions of users currently leverage ChatGPT. Over a dozen organizations have also implemented the generative AI tech in their respective products and services.

“ChatGPT is without a doubt the hottest new tool to hit the technology space,” Rachel Jones, CEO of SnapDragon Monitoring, told Spiceworks. “Yet, while it offers significant benefits to genuine businesses, in the wrong hands, it is a cyber weapon of severe destruction that has the potential to hit internet users in mass.”

Amateur and even seasoned cybercriminals are leveraging ChatGPT in malicious operations, including phishing scams tailored to an organization’s structure and internal workings.

“ChatGPT users can ask the tool to learn about the way organizations communicate with their customers and then generate realistic phishing emails, where they encourage victims to click on links leading to fake websites where they are asked to input sensitive information, such as PII and payment details,” Jones added.

“Unlike traditional phishing scams, there will be fewer language and cultural mistakes to spot if the email is fake, which will result in more people falling victim to these threats.”

To create a successful phishing campaign, threat actors need certain organizational information, such as a dataset of company-generated emails, the events its employees participate in, and the projects it may be working on.

Some of this information can be sourced from the web, which is why it is crucial for organizations to limit the dissemination of unnecessary information and hold their cards close to their chest.

Julia O’Toole, CEO of MyCena Security Solutions, attested to this. “When criminals use ChatGPT, there are no language or culture barriers. They can prompt the application to gather information about organizations, the events they take part in, and the companies they work with at phenomenal speed,” she told Spiceworks. “They can then prompt ChatGPT to use this information to write highly credible scam emails.”

Phishing emails can serve as carriers of dangerous malware, including ransomware, worms, and trojans. Threat actors can also manipulate their targets with disinformation tactics that create a sense of urgency to act on the email, such as clicking a link or downloading a malicious file.

“When the target receives an email from their ‘apparent’ bank, CEO or supplier, there are no language tell-tale signs the email is bogus. The tone, context and reason for carrying out the bank transfer give no evidence to suggest the email is a scam. This makes ChatGPT-generated phishing emails very difficult to spot and dangerous.”

As such, it is crucial that employees refrain from uploading sensitive information to ChatGPT and exercise caution when discussing intellectual property and trade secrets on the generative AI tool.
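
To make this advice concrete, below is a minimal, hypothetical pre-submission check in Python. The pattern list, the PROJECT- naming convention, and the is_safe_to_share helper are illustrative assumptions, not part of any ChatGPT or OpenAI tooling; a real data loss prevention control would go much further.

```python
import re

# Hypothetical pre-submission filter: the patterns below are illustrative
# assumptions, not an official ChatGPT/OpenAI control or a complete DLP rule set.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key / secret": re.compile(r"\b(?:sk|key|secret)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "internal project tag": re.compile(r"\bPROJECT-[A-Z0-9]{3,}\b"),  # placeholder naming convention
}

def is_safe_to_share(text: str) -> tuple[bool, list[str]]:
    """Return (ok, findings) for a prompt about to be pasted into a chatbot."""
    findings = [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    return (not findings, findings)

if __name__ == "__main__":
    prompt = "Summarize the Q3 roadmap for PROJECT-ATLAS and email jane.doe@example.com"
    ok, findings = is_safe_to_share(prompt)
    if not ok:
        print("Do not submit - possible sensitive data:", ", ".join(findings))
```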

How to Neutralize the Threat From ChatGPT-Driven Attacks

1. A call for policy changes

In an open letter titled Pause Giant AI Experiments: An Open Letter, the nonprofit Future of Life Institute called for a pause on the training of all AI systems more advanced than the recently released GPT-4, which goes beyond the GPT-3.5 model that powers ChatGPT.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects,” the letter reads.

“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

While protocols, regulations, and ethical inquiries into AI-related matters are a good idea, it is doubtful that governments could step in to halt the development of the technology, let alone that private players would pause on their own, unless, of course, a major repercussion forces the issue.


2. Securing against ChatGPT-based threats at the company level

Both Jones and O’Toole opined that each internet-facing employee must practice appropriate cybersecurity hygiene. There is no way around that. Employees must be well-trained and well-versed in the basics of cybersecurity.

“When it comes to internet users, the best advice is to treat all emails requesting personal and financial information with skepticism. Avoid clicking on links in emails and instead visit the site directly. If you do receive an email urgently requesting information, call the organization instead. No security-conscious business will see this as a nuisance, and it could end up saving you from significant financial losses,” Jones said.

O’Toole added, “When it comes to protecting against ChatGPT phishing scams, users must be wary of links received in emails. If an email arrives with a link, never click on the link. As a habit, verify its authenticity first. For example, if your bank calls you asking for personal information, hang up and call back the bank via the phone number found on their website.”

The onus is on the organization to create a dedicated channel and ensure it is always open for two-way communication. Jones also suggested using other AI-based tools for continuous monitoring.

“Businesses must do more to communicate with their customers on the threat posed by ChatGPT. Firstly, warn them about email scams and phishing, and secondly take steps to proactively monitor for fake versions of websites being published online. AI tools can help spot these fake domains and then work to have them removed before they cause harm,” Jones said.
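
Jones refers to AI tools for spotting fake domains. As a much simpler illustration of the same monitoring idea, the Python sketch below flags lookalike (typosquatted) domains by edit distance. It is a plain heuristic with made-up domain names, not the AI-based services she describes, and it would miss phishing hosts that merely embed the brand inside a longer hostname.

```python
# Simple, non-AI heuristic sketch for spotting lookalike (typosquatted) domains.
# The example domains are made up; production monitoring services draw on far
# richer signals (DNS feeds, certificate transparency logs, ML classifiers).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(legit_domain: str, observed_domains: list[str], max_distance: int = 2) -> list[str]:
    """Flag observed domains within a small edit distance of the legitimate one."""
    return [d for d in observed_domains
            if d != legit_domain and edit_distance(d, legit_domain) <= max_distance]

if __name__ == "__main__":
    # Hypothetical feed of newly registered domains
    feed = ["examp1e-bank.com", "exampie-bank.com", "example-bank.com.phish.net", "unrelated.org"]
    print(flag_lookalikes("example-bank.com", feed))  # flags the first two entries only
```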

Meanwhile, O’Toole highlighted the importance of maintaining immaculate password hygiene. “Never use just one root password, even with variations, such as JohnSmith1, John$mith1!, Johnsmith2, to protect all your online accounts. If one password is phished, criminals can find its variations and access everything,” O’Toole continued.

“The same threat applies when using password managers. Because all your passwords are saved behind one single password, the risk of losing everything is even higher. If that master password is phished, criminals can open all your accounts at once.”

“Instead, users should think of passwords the same way as keys to their house, office or car. They don’t need to know the grooves or make their own keys. They just need to find the right key or password to use it.”

“The easiest way is to use tools to generate strong unique passwords like ‘7D£bShX*#Wbqj-2-CiQS’ or ‘kkQO_5*Qy*h89D@h’ but don’t centralize them behind a master key or identity. That way, passwords can be generated, impossible to break, and changed at will, without the risk of a single point of failure, so that in case one password is phished because of a ChatGPT-generated email, it will only impact one online account,” O’Toole concluded.
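
As a minimal sketch of the kind of password generation O’Toole describes, the following Python snippet uses the standard secrets module to produce a strong, independent password per account. The alphabet, length, and account names are illustrative assumptions, and the snippet deliberately says nothing about where the passwords are stored.

```python
import secrets
import string

# Minimal sketch of the advice above: generate a strong, unique password per
# account instead of reusing variations of one root password. The alphabet and
# length are illustrative choices, not a prescribed standard.
ALPHABET = string.ascii_letters + string.digits + "!@#$%*-_"

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # One independent password per account, so a phished credential
    # compromises only that single account.
    for account in ("bank", "email", "payroll"):
        print(f"{account}: {generate_password()}")
```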

How can threat actors leverage ChatGPT to threaten the cybersecurity of organizations? Comment below or let us know on LinkedIn.