The Rise of Malicious AI Chatbots: Threat Actors Embrace ‘Hackbots’


The wave of AI popularity has swept up both those with good intentions and those whose intentions are more sinister. Security specialists are sounding the alarm: the use of such models for malicious text generation, via so-called ‘hackbots,’ is now popular among threat actors and has matured into a turnkey, subscription-based service.

Cybercriminals leveraging AI tools

Cybersecurity experts quickly realized the potential of AI-enabled tools to better protect their systems, and threat actors have shown an equal eagerness to use the same technologies to exploit security gaps left by their targets. The past few years have seen a sharp increase in the number of AI applications used by malicious actors, turning security teams’ attention to finding an adequate approach to containing AI-related threats.

British cybersecurity specialists have likewise flagged AI as an emerging risk whose constantly changing nature leaves defenders in uncharted waters. The UK’s National Cyber Security Centre (NCSC) predicts that the first quarter of 2024 will be the biggest yet, topping the records of 2022 and 2023. Clever criminals have also used language models for social engineering, placing celebrities in fabricated video or audio settings as part of phishing and voice-cloning schemes. Vasu Jakkal, vice president of security at Microsoft, speaking at the RSA Conference 2024, argued that the problem is not that these tools are deteriorating but that they are becoming ever more widely available, including for password cracking, a trend tied to a tenfold growth in identity-related attacks.

Some experts have further concluded that, with a carefully worded prompt, chatbots can be made to actively develop malware. Publicly available services such as ChatGPT and Gemini have implemented safeguards to prevent misuse for malicious purposes, yet hackers have bypassed many of these protections through sophisticated prompt-engineering techniques.
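These safeguards typically begin with screening the user’s prompt before it ever reaches the model. The sketch below shows the general control flow under a deliberately naive assumption: a keyword blocklist stands in for the trained policy classifiers real services use, and every function name here is a hypothetical illustration rather than any vendor’s actual API.

```python
import re

# Hypothetical blocklist: real guardrails rely on trained classifiers
# and policy models, not keyword matching, but the control flow is similar.
BLOCKED_PATTERNS = [
    r"\bwrite (ransomware|malware|a keylogger)\b",
    r"\bphishing (email|page|template)\b",
    r"\bbypass (antivirus|edr)\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused outright."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def generate_reply(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"(model output for {prompt!r})"

def handle_request(prompt: str) -> str:
    if screen_prompt(prompt):
        # Refuse before the request ever reaches the model.
        return "This request appears to violate the usage policy."
    return generate_reply(prompt)

if __name__ == "__main__":
    print(handle_request("Please write ransomware in Python"))  # refused
    print(handle_request("Explain how TLS certificates work"))  # answered
```

Filters of this kind are precisely what attackers defeat with paraphrasing, role-play framing, or multi-step prompts, which is why bypasses remain possible even against far more sophisticated classifiers.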

Hackbot-as-a-service: The growing trend in cybercrime

According to recent studies, publicly available language models typically fail when asked to exploit software security weaknesses; only OpenAI’s GPT-4 has shown promising results, producing working exploit code for known flaws. These restrictions appear to have fostered the development of prototype malicious chatbots designed to aid cybercrime perpetrators in carrying out their activities.

These models are advertised on dark-web forums and marketplaces, where attackers can hire them for their exploits, fueling a hackbot-as-a-service model. A blog post published by the Trustwave SpiderLabs team in August 2023 documents the growing volume of malicious language models hosted for profit on hidden web message boards.

Trustwave profiled WormGPT, one of the best-known of these malicious language models, which first surfaced on dark-web hacking forums in June 2023 and is used to help mount cyber attacks. FraudGPT, which emerged in July 2023, was first uncovered by threat researchers at Netenrich on the dark web before it spread to Telegram.

These tools allow attackers to design the assets used in social engineering attacks, such as phishing emails, deepfakes, and voice clones. Their creators claim, however, that the real value lies in vulnerability exploitation: hackers can feed code for specific vulnerabilities into these models, which can in theory produce several proof-of-concept (PoC) exploits for an attacker to try.

These products are sold on underground dark-web markets, where hackers pay a monthly license fee to use a hackbot, much as ransomware is delivered under the ransomware-as-a-service (RaaS) model that so many companies face today.

While WormGPT was the first large-scale malicious language model to be introduced, other malicious and unethical models such as BlackHatGPT, XXXGPT, and WolfGPT soon followed, forming a new segment of the cyber black market.

The Efficacy of Hackbots: Hype or Genuine Threat?

Trustwave’s research tested the efficacy of these tools by comparing their outputs side by side with those generated by legitimate chatbots. The findings indicated that ChatGPT could, with the right prompts, be coaxed into creating some Python malware: the request had to claim the code was for white-hat purposes, and the output still required additional modification before deployment.
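The side-by-side methodology itself is easy to picture. Below is a rough sketch of such a comparison harness; the model names, the query_model stand-in, and the refusal heuristic are all hypothetical illustrations, not Trustwave’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Result:
    model: str
    prompt: str
    output: str
    refused: bool

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for an API call to the model under test."""
    return f"({model} output for {prompt!r})"

def looks_like_refusal(output: str) -> bool:
    # Naive marker check; real evaluations use human review or a judge model.
    markers = ("i can't", "i cannot", "violates", "unable to assist")
    return any(marker in output.lower() for marker in markers)

def run_side_by_side(models: list[str], prompts: list[str]) -> list[Result]:
    """Send every prompt to every model and record whether each refused."""
    results: list[Result] = []
    for prompt in prompts:
        for model in models:
            output = query_model(model, prompt)
            results.append(Result(model, prompt, output, looks_like_refusal(output)))
    return results

if __name__ == "__main__":
    # Benign prompts only; the point is the harness, not the payloads.
    prompts = ["Draft a staff training note on common phishing red flags"]
    for r in run_side_by_side(["legitimate-chatbot", "hackbot-under-test"], prompts):
        print(f"{r.model}: refused={r.refused}")
```

Comparing refusal rates and output quality across the same prompt set is what lets researchers judge whether a hackbot actually outperforms a jailbroken mainstream model or merely repackages it.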

ChatGPT can likewise produce convincing text for phishing attacks, but only when the instructing prompt is very specific to that effect; it will generally balk when a request is openly malicious. Because of this, such chatbots offer cybercriminals a simpler route to attack material than the tedious work of building a phishing page or malware by hand.

While this industry is new and the threats are still in flux, companies must be honestly aware of their current levels of protection. The content AI systems are flooded with can be abused to widen a disinformation gap, one that can only be bridged by building stronger AI security programs and identity management tools.

The legitimacy of solutions to the growing problem is still being debated, yet recent ransomware strains have shown that cybercriminals can match, if not exceed, the pace of legitimate software development.
