Hackers Target Open-Source AI Assistant OpenClaw to Steal API Keys and Hijack Accounts

The open-source AI assistant OpenClaw, previously known as Clawdbot and Moltbot, has become a new frontier for cyberattacks. Cybersecurity firm Hudson Rock recently discovered that info-stealer malware is now targeting the assistant’s configuration files. These files contain highly sensitive data, including API keys and authentication tokens, which are essential for connecting OpenClaw to various applications like Telegram and calendars. By gaining access to these files, attackers can potentially seize control of these services, acting on behalf of the user. This incident marks a significant shift in malware evolution, moving from traditional targets like browser credentials to the very core of personal AI agents.

A New Vector for Cyberattacks

The attack on OpenClaw highlights a growing vulnerability in the AI ecosystem. As AI assistants become more integrated into daily workflows, they are turning into valuable targets for hackers. These assistants often require extensive permissions to function effectively, accessing everything from files and calendars to messaging apps. The stolen configuration files, such as openclaw.json and device.json, can provide attackers with a “blueprint of the user’s life.” Hudson Rock’s analysis revealed that the stolen data included not just gateway tokens but also private cryptographic keys that could allow an attacker to bypass security checks and access encrypted logs or paired cloud services.
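To illustrate why these files are so exposed, a user can check whether the assistant's configuration files are readable by other accounts on the machine. The sketch below is hypothetical: the file names openclaw.json and device.json come from Hudson Rock's report, but the ~/.openclaw config directory is an assumed location, not a documented one.

```python
import os
import stat
from pathlib import Path

# Assumed config location; the real install path may differ.
CONFIG_DIR = Path.home() / ".openclaw"
# File names cited in the Hudson Rock report.
SENSITIVE_FILES = ["openclaw.json", "device.json"]

def audit_config_permissions(config_dir: Path) -> list[str]:
    """Return warnings for config files readable by group or other users."""
    warnings = []
    for name in SENSITIVE_FILES:
        path = config_dir / name
        if not path.exists():
            continue
        mode = path.stat().st_mode
        # Flag any file that grants read access beyond the owner.
        if mode & (stat.S_IRGRP | stat.S_IROTH):
            warnings.append(
                f"{path} is readable by other users (mode {stat.filemode(mode)})"
            )
    return warnings

if __name__ == "__main__":
    for w in audit_config_permissions(CONFIG_DIR):
        print("WARNING:", w)
```

Loose permissions are only part of the problem, of course: an info-stealer running under the user's own account can read these files regardless, which is what happened here.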


A Shift in Hacker Tactics

According to Hudson Rock, the specific malware used in this case was not custom-built to attack OpenClaw. Instead, it employed a broad file-grabbing routine that swept the compromised system for any sensitive files, and it “inadvertently struck gold” by capturing the AI assistant’s entire operational context. Previously, such malicious programs were primarily focused on stealing data from web browsers and cryptocurrency wallets. This incident, however, signals a strategic pivot. Experts at Hudson Rock anticipate that as AI agents become more prevalent, hackers will develop specialized modules designed specifically to decrypt and parse their configuration files, much as they do for Chrome or Telegram today.
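To make the "broad file-grabbing routine" concrete: rather than targeting one product, such stealers typically walk the filesystem matching generic name patterns (JSON configs, token stores, wallet files, private keys). A defender can run the same kind of sweep over their own home directory to see what an indiscriminate grab would capture. The patterns below are illustrative assumptions, not the actual malware's list.

```python
import fnmatch
import os
from pathlib import Path

# Generic patterns a broad file-grabber might sweep for; illustrative only.
PATTERNS = ["*.json", "*token*", "*wallet*", "*.pem", "*.key"]

def sweep(root: Path, patterns=PATTERNS, max_hits=100) -> list[Path]:
    """Walk `root` and collect files whose names match any stealer-style pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), p) for p in patterns):
                hits.append(Path(dirpath) / name)
                if len(hits) >= max_hits:
                    return hits  # cap the scan for large directory trees
    return hits
```

Running a sweep like this over a typical home directory makes clear how an untargeted grab can still "strike gold": an AI assistant's entire operational context matches the same generic patterns as any other credential store.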

Broader Implications for the AI Industry

The security issues are not unique to OpenClaw. The rapid adoption of open-source AI tools exposes organizations to significant supply chain threats that often evade traditional security measures. Add-ons known as “skills” have also been a significant problem: researchers have found hundreds of malicious skills in OpenClaw’s official repository, designed to steal cryptocurrency-related information. These vulnerabilities are amplified because users can misconfigure settings, grant excessive system access, or install unvetted skills, turning the AI assistant into a highly privileged entity operating outside of normal security controls.

Looking Ahead: Securing the Future of AI Assistants

As AI assistants grow in popularity, the risk of similar attacks is expected to increase. The incident has prompted calls for greater attention to security in the AI space. In response to these growing threats, the developers of OpenClaw have announced a partnership with VirusTotal to scan for malicious skills and establish a threat model to help users audit for misconfigurations. Security experts recommend that users of OpenClaw and similar tools pay close attention to the security of their systems, use reliable antivirus programs, and be cautious about the permissions they grant. For developers, the focus must be on building more secure systems from the ground up, as the very design of highly autonomous AI agents can amplify inherent security risks.
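The permission advice above can be made concrete. On Unix-like systems, restricting configuration files to the owner (mode 600, with the containing directory at 700) denies other local accounts read access. A minimal sketch, again assuming a hypothetical ~/.openclaw config directory:

```python
import stat
from pathlib import Path

def harden(config_dir: Path) -> None:
    """Restrict a config directory to 700 and everything inside it to owner-only."""
    config_dir.chmod(stat.S_IRWXU)  # 700: owner read/write/execute only
    for path in config_dir.rglob("*"):
        if path.is_dir():
            path.chmod(stat.S_IRWXU)  # 700 for subdirectories
        elif path.is_file():
            path.chmod(stat.S_IRUSR | stat.S_IWUSR)  # 600 for files
```

Note the limitation: tightening file modes blocks other local accounts, but not malware already running as the user, which is exactly how the info-stealer in this incident operated. That is why the experts' remaining advice, reliable antivirus and caution about granted permissions, still matters.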
