OpenClaw Integrates VirusTotal Scanning to Identify Malicious ClawHub Skills

OpenClaw, previously known as Moltbot and Clawdbot, has announced a new security partnership with Google-owned VirusTotal to strengthen defenses across its skill marketplace, ClawHub. The move is aimed at reducing the growing risk of malicious skills entering the rapidly expanding agentic AI ecosystem.

According to OpenClaw founder Peter Steinberger and collaborators Jamieson O’Reilly and Bernardo Quintero, every skill published on ClawHub is now automatically scanned using VirusTotal’s threat intelligence, including its recently introduced Code Insight analysis capability.

How the New Scanning Process Works

Each skill uploaded to ClawHub is assigned a unique SHA-256 hash, which is checked against VirusTotal’s extensive malware database. If no prior record exists, the complete skill bundle is uploaded for deeper inspection using Code Insight.
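The lookup-then-upload pattern described above can be sketched as follows. This is an illustrative stub, not OpenClaw's actual code: the dictionary stands in for VirusTotal's hash database, and the names (`skill_sha256`, `triage`, `KNOWN_VERDICTS`) are hypothetical.

```python
import hashlib

# Stand-in for VirusTotal's hash database (illustrative only).
# Maps a SHA-256 hex digest to a prior verdict.
KNOWN_VERDICTS = {}  # digest -> "benign" | "suspicious" | "malicious"

def skill_sha256(bundle: bytes) -> str:
    """Compute the SHA-256 digest that identifies a skill bundle."""
    return hashlib.sha256(bundle).hexdigest()

def triage(bundle: bytes) -> str:
    """Check the bundle's hash against prior verdicts.

    If the hash has never been seen, the real pipeline uploads the
    full bundle for deeper Code Insight analysis; that step is
    stubbed here as a "pending-analysis" result.
    """
    digest = skill_sha256(bundle)
    verdict = KNOWN_VERDICTS.get(digest)
    if verdict is None:
        # Real flow: upload the complete skill bundle to VirusTotal.
        verdict = "pending-analysis"
    return verdict
```

Keying on the content hash means an unchanged, previously scanned bundle never needs to be re-uploaded, while any modification produces a new digest and triggers a fresh analysis.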

Skills that receive a benign verdict are approved automatically. Skills flagged as suspicious display a warning to users, while any skill identified as malicious is blocked entirely from download. OpenClaw also confirmed that all active skills are re-scanned daily to detect delayed or newly introduced malicious behavior.
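The verdict-to-action policy amounts to a small decision table. The sketch below uses hypothetical names; the article describes the policy, not OpenClaw's implementation.

```python
# Map a scan verdict to a marketplace action (illustrative names).
ACTIONS = {
    "benign": "approve",      # published automatically
    "suspicious": "warn",     # listed, but with a warning to users
    "malicious": "block",     # download blocked entirely
}

def action_for(verdict: str) -> str:
    """Resolve a verdict to an action.

    An unrecognized verdict falls back to a warning rather than
    silent approval, i.e. fail closed rather than open.
    """
    return ACTIONS.get(verdict, "warn")
```

Because active skills are re-scanned daily, a skill's action can change after publication if later analysis downgrades its verdict.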

Despite the enhanced screening, OpenClaw cautioned that VirusTotal integration is not foolproof. Skills that embed advanced prompt injection techniques or heavily obfuscated payloads may still evade detection.

Additional Security Initiatives Underway

Beyond VirusTotal scanning, OpenClaw plans to release a detailed threat model, a public security roadmap, a formal vulnerability reporting process, and documentation outlining a full security audit of its codebase.

These measures follow reports revealing hundreds of malicious skills hosted on ClawHub. Many of these skills appeared legitimate on the surface but were later found to exfiltrate data, implant remote access backdoors, or deploy credential-stealing malware. In response, OpenClaw added a reporting feature allowing authenticated users to flag suspicious skills.

Growing Risks in Agentic AI Platforms

Security researchers have warned that AI agents with system-level access can bypass traditional enterprise defenses such as data loss prevention tools, proxies, and endpoint monitoring solutions. Cisco recently noted that prompts themselves can act as executable instructions, making malicious behavior difficult to detect using conventional security tooling.

The rapid rise of OpenClaw, along with Moltbook, a connected social platform where autonomous AI agents interact in a Reddit-like environment, has amplified concerns commonly referred to as the Lethal Trifecta: the dangerous combination of access to private data, exposure to untrusted content, and the ability to act or communicate autonomously.

OpenClaw operates as an automation engine capable of interacting with services, executing workflows, and controlling devices. While powerful, this design significantly increases the attack surface. Integrations allow untrusted data to influence agent behavior, effectively turning agents into covert channels for data theft and unauthorized actions. Backslash Security has described OpenClaw as an “AI With Hands.”

Abuse Potential of Skills and Shadow AI Risks

OpenClaw acknowledged that skills, which extend agent functionality from smart home control to financial management, can be abused by threat actors. Attackers may leverage these skills to extract sensitive data, run unauthorized commands, impersonate users, or download additional malware without user awareness.

The situation is compounded by OpenClaw’s growing presence on employee endpoints without formal IT approval. These deployments create a new class of Shadow AI risk, enabling network access and data movement beyond standard security controls.

Previously Identified Security Issues

Recent analyses have highlighted multiple security weaknesses across the OpenClaw ecosystem, including misclassified proxied traffic bypassing authentication, insecure credential storage in plaintext, unsafe use of eval with user input, and incomplete uninstall processes that leave sensitive data behind.
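To illustrate the eval class of weakness in generic terms (this is not OpenClaw's code): passing user input to eval executes it as arbitrary Python, whereas a restricted parser such as ast.literal_eval accepts only literals and rejects everything else.

```python
import ast

def parse_config_value(text: str):
    """Parse a user-supplied literal safely.

    eval(text) would execute arbitrary code -- for example,
    text = "__import__('os').system('...')" runs a shell command.
    ast.literal_eval accepts only Python literals (numbers, strings,
    lists, dicts, etc.) and raises ValueError for anything else.
    """
    return ast.literal_eval(text)
```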

Researchers have also demonstrated zero-click and one-click attack scenarios, indirect prompt injections delivered through documents, web pages, and messaging apps, and widespread credential leakage through skill context windows and logs.

Other findings revealed exposed OpenClaw gateways bound to all network interfaces by default, misconfigured Moltbook databases leaking API keys, and large-scale cloning of malicious skills using minor name variations.
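The bind-all-interfaces misconfiguration is easy to see in a generic example (not OpenClaw's code): binding a listener to 127.0.0.1 keeps it reachable only from the local machine, while binding to 0.0.0.0 exposes it on every network interface, which is the default the researchers flagged.

```python
import socket

def make_listener(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    """Create a TCP listener bound to the given interface.

    host="127.0.0.1" -> local-only access.
    host="0.0.0.0"   -> reachable from any network interface,
                        the exposed-gateway misconfiguration.
    port=0 asks the OS to pick a free port.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))
    s.listen()
    return s
```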

Industry and Regulatory Response

Due to the scale of these risks, China’s Ministry of Industry and Information Technology issued an alert warning users about exposed and misconfigured OpenClaw instances, urging stronger security controls to prevent data breaches and cyber attacks.

Security experts have emphasized that the issue lies not with AI agents themselves, but with insecure deployments and permissive configurations that dramatically expand the blast radius of attacks.


