How Can a Cybercriminal Exploit ChatGPT?
Use Case #1 – Easily and Quickly Creating Malware
Writing the source code for hard-to-detect malware used to be a significant undertaking. It later became easier thanks to the abundance of online tutorials, for example on YouTube. But today, a functional first draft can be generated in seconds using ChatGPT!
Hackers save time while gaining access to nearly all publicly available knowledge on hacking, including methods for bypassing EDR (Endpoint Detection and Response) solutions.
Use Case #2 – Enhancing Social Engineering
ChatGPT can also be used by attackers to develop their strategies, tools, and attack or compromise vectors.
For example, one can ask ChatGPT to write spam or phishing emails that include malicious code and an infection chain. The result is high quality and generated in under a minute!
This technique offers two key advantages for cybercriminals:
- Improved authenticity, personalization, and quality of messages: weaknesses in these areas are often what give malicious content away.
- Significant time savings, allowing cybercriminals to focus on the technical aspects of the attack.
Use Case #3 – Exploiting New Vulnerabilities
ChatGPT could also greatly assist cybercriminals by making it easier to discover new vulnerabilities. That said, the tool would need to stay up to date and continuously index more cyber threat content. As a result, any exposed or poorly protected component is more than ever a real threat, serving as a potential entry point for attackers.
Example: In response to the query “Company X’s website is hosted by Y, tell me the latest security flaw I can exploit to launch an attack,” a hacker could obtain a wealth of helpful information.
Using ChatGPT in the Workplace: The Risks
Risk #1 – Using Data Without Owning the Intellectual Property
The intellectual-property status of AI-generated content is often unclear. Simply copying and pasting content generated by ChatGPT (without verifying the information) therefore increases the risk of plagiarism.
Risk #2 – Hard-to-Verify Sources
Another issue: ChatGPT does not let users check the sources behind the information it generates. As a result, the answers are harder to verify, and since they often appear polished, users may be tempted to accept them as absolute truth.
Risk #3 – Increased Risk of Information Leaks
Finally, ChatGPT may lead to the (intentional or unintentional) sharing of sensitive information with the tool, without knowing if or how it will be used later. Imagine your financial data or customer personal data becoming accessible through the tool!
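One common mitigation is to redact sensitive tokens before any text leaves the company, for instance in a prompt sent to an external AI tool. Below is a minimal, illustrative Python sketch: the patterns shown (email addresses, payment card numbers, IBANs) are assumptions for the example, and a real deployment would need far broader coverage (names, addresses, internal project codes, etc.).

```python
import re

# Illustrative patterns only; real deployments need broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive tokens with placeholders before the text
    is shared with an external tool such as ChatGPT."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: invoice for jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
# → Summarize: invoice for [EMAIL], card [CARD].
```

Running the redaction as a mandatory gateway (rather than trusting each employee to self-censor) is what makes this kind of control effective in practice.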
How to Protect Against Malicious Uses of ChatGPT
Tip #1 – Strengthen Your Cybersecurity Best Practices
With ChatGPT, hackers’ work becomes much easier: the time saved on creating malware and phishing emails (for example) allows them to design more complex and harder-to-detect attacks. Therefore, cybersecurity best practices must become a top priority to minimize the attack surface.
Tip #2 – Continuously Monitor Attackers’ New Tactics
To strengthen cybersecurity, monitoring is essential: it helps assess the sophistication of attacks and avoid being caught off guard as they become more mature and harmful.
This means implementing:
- Vulnerability monitoring to anticipate new flaws in components;
- Operational and strategic monitoring to closely follow attackers’ evolving methods (see the MITRE ATT&CK framework for more);
- Legal monitoring to stay informed about laws related to AI-driven attacks (e.g., sensitive data leaks).
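Vulnerability monitoring can be partially automated. The sketch below shows the core idea: filter a feed of advisories down to the ones that affect components you actually run and that exceed a severity threshold. The feed schema and the sample entries (apart from CVE-2021-44228, the well-known Log4Shell flaw) are made up for illustration; a real pipeline would pull from a source such as the NVD.

```python
# A minimal sketch of automated vulnerability monitoring: keep only the
# advisories that hit a monitored component with a high CVSS score.
# The advisory fields below are illustrative, not a real feed's schema.

MONITORED_COMPONENTS = {"nginx", "openssl", "log4j"}
CVSS_ALERT_THRESHOLD = 7.0  # alert on "high" and "critical" only

def triage(advisories):
    """Return the advisories worth alerting on."""
    return [
        a for a in advisories
        if a["component"] in MONITORED_COMPONENTS
        and a["cvss"] >= CVSS_ALERT_THRESHOLD
    ]

feed = [
    {"id": "CVE-2021-44228", "component": "log4j", "cvss": 10.0},  # Log4Shell
    {"id": "CVE-XXXX-0001", "component": "redis", "cvss": 9.1},    # sample entry
    {"id": "CVE-XXXX-0002", "component": "nginx", "cvss": 4.3},    # sample entry
]
for adv in triage(feed):
    print(adv["id"])  # only the log4j entry passes both filters
```

The value of such a filter is less about the code than about maintaining the list of monitored components: an exposed component nobody tracks is exactly the entry point described in Use Case #3.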
Cybersecurity Professionals: Is ChatGPT a Tool to Leverage?
ChatGPT gives cybersecurity professionals reason to hesitate. This revolutionary virtual assistant can raise legitimate concerns—but it can also be extremely useful in their daily work.
It’s a critical topic, which is why Advens will soon be exploring it further to evolve its security practices. Want to be part of this new journey?