MSP cybersecurity news digest, December 13, 2024

OpenAI confirms threat actors are writing malware using ChatGPT

OpenAI has confirmed that threat actors are exploiting ChatGPT for malicious purposes, including developing malware, spreading misinformation, and conducting spear-phishing campaigns. In one case, researchers identified activity linked to TA547 ("Scully Spider"), which used an AI-generated PowerShell loader to deploy the Rhadamanthys info-stealer. Another example involves the Chinese group "SweetSpecter," which targeted OpenAI employees with phishing emails containing malicious attachments. These attachments deployed spyware, and the attackers used ChatGPT to analyze vulnerabilities and craft scripts for their campaigns.

Another case highlights Iran’s "CyberAv3ngers," who used ChatGPT to improve attacks on critical infrastructure. The group relied on the AI tool to identify default credentials in industrial systems, create malicious scripts, and plan post-compromise activities, including password theft and network exploitation.

A fourth example comes from researchers who observed cybercriminals targeting French users with a multi-step malware infection chain. The attackers used generative AI tools to create and deliver AsyncRAT, relying on HTML smuggling and password-protected ZIP archives to reach victims. Campaigns like this show that AI-generated malware is increasingly within reach of less skilled attackers.

The malware included VBScript and JavaScript code, with detailed comments explaining the code's function, a hallmark of AI-generated scripts. OpenAI has since banned all accounts linked to these actors and shared indicators of compromise with cybersecurity partners, but these cases emphasize the efficiency generative AI provides even to low-skilled attackers.

“LLM jailbreak-as-a-service” lowers tech barriers for attackers

Generative AI is revolutionizing cybercrime by drastically lowering the barriers for attackers. In Japan, 25-year-old Ryuki Hayashi used ChatGPT to write ransomware in just six hours, showing how AI accelerates malicious coding even for inexperienced criminals. His quick arrest and three-year sentence, however, marked a significant step in holding AI-powered attackers accountable.

Authorities have apprehended four individuals in China for using ChatGPT to develop ransomware, infiltrate networks, and extort victims. The group demanded 20,000 Tether (approximately $20,000) to restore access to compromised systems, underscoring the financial motives behind such attacks. Despite ChatGPT's ban in the country, the perpetrators bypassed restrictions with VPNs, illustrating the growing trend of circumventing ethical safeguards in AI platforms.

Similarly, tools like WormGPT emerged as unregulated AI models explicitly tailored for cybercrime, enabling users to craft phishing campaigns, malware, and hacking scripts. WormGPT successors such as FraudGPT and DarkBERT are further popularizing AI-based cybercrime tools. However, many of these variants were scams or jailbroken versions of legitimate AI bots, reflecting a shift toward manipulating mainstream models like ChatGPT through prompt engineering and jailbreak techniques.

The rise of such activities reveals the dual nature of generative AI: while it offers immense potential for innovation, its misuse empowers low-skilled threat actors to launch complex attacks, highlighting the urgent need for robust defenses.

North Korea uses increasingly sophisticated AI deceptions to infiltrate global companies

North Korea’s cyber operations have escalated in sophistication, relying heavily on artificial intelligence to achieve their goals. AI tools have enabled state-sponsored attackers to create fake LinkedIn profiles, complete with AI-generated images and deepfake videos, allowing them to infiltrate global companies. In one case, a senior engineer at a Japanese cryptocurrency exchange was tricked into downloading spyware during a staged "technical assessment," resulting in a significant theft of funds. Similar AI-assisted scams stole at least $10 million in cryptocurrency over a period of six months, funding Pyongyang’s nuclear weapons program.

Threat groups such as Andariel and Ruby Sleet have targeted defense and aerospace firms, stealing information on tanks, submarines, and uranium processing. U.S. Air Force bases, NASA, and defense companies are among the victims. In a separate case, KnowBe4, a U.S.-based cybersecurity firm, unknowingly hired a North Korean operative who used AI to fake his identity, bypassing background checks and interviews. Once hired, the individual deployed infostealer malware through a company-issued laptop, attempting to steal sensitive credentials.

North Korea’s attackers also exploit global remote work trends. Using "IT mule laptop farms" in the U.S., they connect to devices at night via VPNs, creating the illusion of working normal hours. AI-generated profiles and recruiter personas have been so convincing that even sophisticated security firms were initially fooled. Meanwhile, researchers found entire repositories detailing North Korea’s playbooks for these operations, including spreadsheets mapping stolen identities and fake resumes.

Python libraries impersonating ChatGPT and Claude used to distribute JarkaStealer malware

With the growing popularity and widespread use of ChatGPT and other AI-powered tools, these technologies have become a prime target for cybercriminals looking to exploit them in their schemes. Researchers have uncovered two malicious packages, gptplus and claudeai-eng, uploaded to the Python Package Index (PyPI) repository by a user named "Xeroline." These packages impersonated popular AI models like OpenAI's ChatGPT and Anthropic's Claude, and together received nearly 3,600 downloads. The packages promised to provide access to the GPT-4 Turbo and Claude AI APIs, but upon installation they deployed a Java-based information stealer called JarkaStealer.

The malicious code, hidden in the packages' "__init__.py" files, used Base64 encoding to conceal a routine that downloads a Java archive file (JavaUpdater.jar) from a GitHub repository and, if necessary, a Java Runtime Environment from Dropbox. Once executed, JarkaStealer steals sensitive data, including web browser information, system data, screenshots, and session tokens from applications like Telegram, Discord and Steam. The stolen data is archived, transferred to the attacker's server, and then deleted from the victim's computer.
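The delivery pattern described above, import-time code in a package's __init__.py that decodes Base64 strings and pulls a payload from an external host, is something defenders can screen for. The following Python sketch is a minimal, illustrative audit of installed packages, assuming only that suspicious loaders combine Base64 decoding with an outbound download or dynamic execution; the heuristics and threshold are assumptions made for this example, not indicators published by the researchers.

    # Illustrative audit: flag installed packages whose __init__.py combines
    # Base64 decoding with downloads or dynamic execution at import time.
    # Heuristics and the co-occurrence threshold are assumptions for this sketch.
    import re
    from importlib import metadata
    from pathlib import Path

    SUSPICIOUS = [
        re.compile(r"base64\.b64decode"),             # hidden/encoded payloads
        re.compile(r"urllib\.request|requests\.get"), # outbound download
        re.compile(r"\bexec\(|\beval\("),             # dynamic execution
    ]

    def audit_init_files() -> None:
        for dist in metadata.distributions():
            for f in dist.files or []:
                if f.name != "__init__.py":
                    continue
                path = Path(dist.locate_file(f))
                try:
                    text = path.read_text(errors="ignore")
                except OSError:
                    continue
                hits = [p.pattern for p in SUSPICIOUS if p.search(text)]
                # Flag only when several heuristics co-occur in one file.
                if len(hits) >= 2:
                    print(f"[review] {dist.metadata['Name']}: {path} matched {hits}")

    if __name__ == "__main__":
        audit_init_files()

Heuristics like these will produce false positives, since many legitimate packages decode Base64 or fetch resources, so a match is a prompt for manual review rather than proof of compromise.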

The malware, available as part of a malware-as-a-service (MaaS) offering, is sold for $20 to $50 on a Telegram channel, with its source code also leaked on GitHub. The attack primarily targeted users from countries like the U.S., China, India, France, Germany and Russia, as part of a year-long supply chain attack campaign.

The incident highlights the ongoing risks of software supply chain attacks and the importance of vigilance when using open-source components.

AI-powered fraud schemes improving; FBI shares tips to combat them

The FBI warns that cybercriminals are using artificial intelligence (AI) to improve the sophistication and scale of their fraud schemes. These scams span various domains, including romance fraud, investment cons, job hiring scams, and even impersonations of authority figures. Generative AI significantly reduces the effort required by scammers, enabling them to craft more convincing schemes using realistic text, images, videos, and even voice cloning.

Examples of AI-driven fraud include creating fake social media profiles for phishing or romance scams, producing deepfake videos of authority figures to solicit payments, and designing promotional materials for fraudulent investment schemes like cryptocurrency scams. Other schemes involve generating fake pornography to extort victims or crafting realistic disaster imagery to solicit donations for non-existent charities.

To combat these threats, the FBI recommends practical measures such as creating a private verification phrase with trusted contacts, scrutinizing suspicious images, video, and audio for subtle AI flaws (e.g., distorted hands or unnatural voice tones), and verifying communications through official channels. Limiting exposure of personal images and voices online is also advised, as is refusing unsolicited requests for sensitive information or financial transactions.