New Malware Trend: ChatGPT and AI Pose Threat, Warns Meta Security Team

In the Brief:

  • AI tools like ChatGPT are being used for malware, scams, and spam
  • Meta found 10 malware families posing as ChatGPT
  • Bad actors are using AI because it's popular
  • ChatGPT-related browser extensions are malicious
  • Meta's chief information security officer calls ChatGPT "the new crypto" for bad actors
  • AI is Meta's largest investment

3 - 5 minute read

Artificial intelligence tools are being used by “bad actors” to distribute malware, scams and spam, according to a recent report by Meta’s security team. Since March, Meta has identified 10 malware families posing as ChatGPT and similar AI tools, some of them hidden in browser extensions. The report also notes that bad actors tend to gravitate toward whatever is currently popular, pointing to the earlier wave of scams fueled by the hype around digital currency.

“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet.”

“As an industry, we’ve seen this across other topics popular in their time, such as crypto scams fueled by the interest in digital currency.”

The research comes amid intense interest in artificial intelligence, with ChatGPT in particular capturing much of the attention. Meta explained that these “bad actors” have moved to AI because it is the “latest wave” of what captures people’s imagination and excitement. The report also warned that some of the malicious extensions included working ChatGPT functionality that ran alongside the malware, making them harder to spot.

Meta’s chief information security officer, Guy Rosen, went a step further in a recent interview with Reuters, stating that “ChatGPT is the new crypto” for these bad actors. It should be noted, however, that Meta is now making its own developments in generative AI: Meta AI is building various forms of AI to help improve the company’s augmented and virtual reality technologies.

While the rise of AI tools being used for malicious purposes is concerning, it’s not entirely surprising. As technology advances, so do the ways in which it can be used for nefarious purposes. The fact that bad actors are using AI tools like ChatGPT to distribute malware, scams, and spam is a reminder that vigilance is needed in the technology space.

The Bottom Line

The rise of AI tools being used for malicious purposes is concerning, but not entirely surprising. Traders and other users should be wary of unofficial ChatGPT apps and browser extensions, stick to official sources, and take appropriate security measures. Vigilance is needed across the technology space to prevent bad actors from exploiting new technologies like AI.

Disclaimer: The content in this article is provided for informational purposes only and should not be considered as financial or trading advice. We are not financial advisors, and trading carries high risk. Always consult a professional financial advisor before making any investment decisions.
