Artificial intelligence tools are being used by “bad actors” to distribute malware, scams and spam, according to a recent report by Meta’s security team. The report identified 10 malware families posing as ChatGPT and similar AI tools since March, some of them distributed through malicious browser extensions. It also noted that bad actors tend to follow the latest craze, pointing to the hype around digital currency and the scams it spawned.
“Since March alone, our security analysts have found around 10 malware families posing as ChatGPT and similar tools to compromise accounts across the internet. As an industry, we’ve seen this across other topics popular in their time, such as crypto scams fueled by the interest in digital currency.”
The research comes amid surging interest in artificial intelligence, with ChatGPT in particular capturing much of the attention. Meta explained that these “bad actors” have moved to AI because it’s the “latest wave” of what is capturing people’s imagination and excitement. The report also warned that some of these “malicious extensions” did include working ChatGPT functionality alongside the malware, making them harder to spot.
Meta’s chief information security officer, Guy Rosen, went one step further in a recent interview with Reuters, stating that “ChatGPT is the new crypto” for these bad actors. It should, however, be noted that Meta is making its own push into generative AI: the company is currently building various forms of AI to improve its augmented and virtual reality technologies.
While the rise of AI tools being used for malicious purposes is concerning, it’s not entirely surprising. As technology advances, so do the ways in which it can be abused. That bad actors are exploiting the popularity of tools like ChatGPT to distribute malware, scams, and spam is a reminder that vigilance is needed across the technology space.
The Bottom Line
Bad actors moving to exploit the AI boom is a familiar pattern, and it won’t be the last time a new technology is co-opted for scams. Users should be cautious when downloading AI-themed tools or browser extensions, verify that software comes from official sources, and take appropriate security measures. Vigilance is needed across the technology space to prevent bad actors from exploiting new technologies like AI.