The evolution of the Dark Web AI ecosystem


ARTICLE SUMMARY

The article discusses the emergence of Dark AI tools in the realm of cybersecurity, exploring the threat intelligence surrounding this growing area.

It’s been a year since ChatGPT was released, bringing the topic of Artificial Intelligence (AI) firmly into the zeitgeist.

Since then, AI has been a hot topic of conversation across mainstream press and social media alike. Whilst AI itself is not a new concept, as many cybersecurity professionals would point out, ChatGPT made it accessible to the masses.

Margarita del Val, Threat Intelligence Expert at Outpost24’s KrakenLabs, has conducted extensive research into Dark AI tools, and here she looks at the threat intelligence that surrounds this growing area of cybersecurity.

Margarita is a Threat Intelligence Analyst Expert with over five years of experience in the field, currently working at Outpost24 within KrakenLabs’ Strategic Research Team. With a background in criminology, Margarita is passionate about researching the cybercrime ecosystem and understanding the psychology behind threat actors and their motivations. This drives her career and keeps her excited about every new challenge that comes her way.

Like anything that becomes wildly successful, there’s always someone wanting to exploit it for some sort of gain. ChatGPT and its popular counterpart Google Bard are no exception. Born from the success of mainstream AI programmes were their nefarious counterparts, including products such as WormGPT, FraudGPT, DarkGPT, DarkBARD, and DarkBERT. These products have a far more malicious purpose: according to their creators, they are intended for criminal activity, from coding malware to writing phishing messages. That’s because, like ChatGPT, they’re based on Large Language Models (LLMs), a type of generative AI trained on text to produce text-based content.
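
To make the underlying technology concrete, here is a minimal sketch of the core operation of any LLM: given a text prompt, generate a text continuation. It is illustrative only, using Hugging Face’s transformers library and the open gpt2 checkpoint as a stand-in; it involves none of the tools named above.

# A minimal, hypothetical sketch of what an LLM does at its core:
# given a text prompt, produce a plausible text continuation.
from transformers import pipeline

# Placeholder model: any open-source causal language model checkpoint works here.
generator = pipeline("text-generation", model="gpt2")

result = generator("Artificial intelligence is", max_new_tokens=40)
print(result[0]["generated_text"])

The same basic mechanism, pointed at malicious prompts or retrained on malicious data, is what products like WormGPT claim to offer.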

What does this mean for the future of cybercrime?

Cybercriminals are always evolving. As the technology and intelligence to stop them get better, cybercriminals have to work harder and smarter to carry out ever more sophisticated attacks. Not only is new technology necessary to continue carrying out crime, it’s also a lucrative revenue stream for those who can quickly create new tools.

Dark AI tools were immediately popular – and profitable. There will always be a market for efficient new tech that enables exploitation with little skill or time commitment. Tools such as Dark AI, readily available on the Dark Web, lower the entry requirements to cybercrime. Worryingly, this makes it easier than ever for unskilled hackers and so-called script kiddies to make their foray into cybercrime.

When Things Go Wrong

It’s important to note that a tool with rapidly soaring popularity is not necessarily a good thing – or trustworthy. There’s no standard, regulation or safety guarantee in the criminal underworld.

Additionally, a quick rise often comes with an increased risk of an equally quick fall. Despite the intricate ‘business’ ecosystems that surround these tools, doing business with cybercriminals is still doing business with cybercriminals. The same threats that apply to legitimate tools (like ChatGPT) also apply to their malicious Dark Web equivalents: the infrastructure used could become the target of a DDoS attack, and/or its creator could become a target of doxing.

What’s Next for Dark AI?

As some Large Language Models (LLMs) are open source, anyone with enough skill and knowledge can train them to create AI products specifically tailored for cybercriminals, as has been observed with WormGPT, FraudGPT, DarkGPT, DarkBARD and others. Closed-access models are not safe either, given that anyone who gains access to them could start reselling the product.
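
As an illustration of how low that barrier is, the sketch below shows roughly how little code it takes to fine-tune an open-source LLM on a custom text corpus. It is a hypothetical example using Hugging Face’s transformers and datasets libraries; the gpt2 model name and the corpus.txt file are placeholders, not anything observed in the wild.

# Hypothetical sketch: fine-tuning an open-source causal LM on a custom corpus.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder: any open causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder corpus; a threat actor would substitute domain-specific text.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False configures the collator for causal (next-token) training
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point is not the specific libraries but the accessibility: the heavy lifting is done by freely available tooling, which is exactly what lowers the barrier to building bespoke Dark AI products.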

As previously stated, these tools are created by and circulated within criminal forums. With the rise of Dark AI tools comes the rise of other cybercriminals looking to exploit their success. Some scammers set up websites and Telegram channels to trick people into purchasing non-existent access, while others spread malware through fake versions of legitimate AI tools, as happened with Google Bard. This highlights the soaring popularity of AI in general, and realistically, the development of Dark AI models is likely to continue and evolve.

How can organisations best protect themselves?

As always with cybersecurity, it’s imperative to stay ahead of the curve. Threat intelligence is key to keeping one step ahead of cybercriminals. With constant and consistent knowledge via threat intelligence comes the ability to proactively put preventive measures in place before a threat becomes an issue. It won’t be long before cybercriminals exploit novel technology, like AI models, again. With a lull in Dark AI activity, there’s no better time to get ahead.

