The German Federal Office for Information Security (BSI) is actively developing clear security criteria and “best practices” for the protection of AI agents, a spokesperson told a news agency on Tuesday.
Recognising that AI systems will continue to evolve, sometimes abruptly, the BSI warns that it will become increasingly difficult to distinguish human actors from AI bots online. The agency also pointed out that malicious actors could weaponise AI agents for cyber-attacks.
With regard to the newly released service OpenClaw, which has already attracted attention in specialist circles, the BSI currently advises that only IT professionals who are experienced in configuring and securing servers should use it. The office recommends running the service on a dedicated system or inside a sandbox environment.
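The sandbox recommendation can be followed with standard container tooling. The following Python sketch shows one possible way to launch such a service inside a locked-down Docker container; the image name and port are placeholders for illustration, not documented values of the OpenClaw project.

```python
"""
Illustrative sketch only: confining a self-hosted AI agent to a restricted,
throwaway Docker container, in the spirit of the BSI's sandbox advice.
The image name "openclaw/openclaw" and port 8080 are assumed placeholders.
"""
import subprocess


def run_sandboxed(image: str = "openclaw/openclaw", port: int = 8080) -> None:
    cmd = [
        "docker", "run", "--rm",
        "--read-only",                        # immutable root filesystem
        "--cap-drop", "ALL",                  # drop all Linux capabilities
        "--security-opt", "no-new-privileges",
        "--memory", "2g", "--cpus", "2",      # cap resource usage
        "-p", f"127.0.0.1:{port}:{port}",     # expose only on localhost
        image,
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    run_sandboxed()
```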
The BSI is particularly critical of the open sharing of “skills” that determine how the AI interacts with its surroundings. Many of these skills, distributed through exchange portals, have been found to contain malware.
OpenClaw (also known as ClawdBot or MoltBot) is an open-source framework for a personal AI assistant. Users can control the bot through messengers such as WhatsApp or Telegram, or through other channels. The service requires a language model that runs either locally or in the cloud. The BSI warns that a misconfigured setup can quickly lead to an unauthorised takeover of the server.
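A typical misconfiguration of the kind the BSI describes is a control interface that listens on all network interfaces instead of only on localhost. The short Python sketch below checks whether an assumed control port (8080 is a placeholder, not a documented OpenClaw default) is reachable beyond the loopback interface.

```python
"""
Minimal sketch of a misconfiguration check: is the agent's control port
reachable from the local network, or only via loopback?
The port 8080 is an assumed example value.
"""
import socket


def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def check_exposure(port: int = 8080) -> None:
    # Address of this machine as seen from the local network (best effort).
    lan_ip = socket.gethostbyname(socket.gethostname())
    if not port_open("127.0.0.1", port):
        print(f"Nothing is listening on port {port}.")
    elif lan_ip != "127.0.0.1" and port_open(lan_ip, port):
        print(f"Port {port} is reachable from the local network; "
              "bind it to 127.0.0.1 or firewall it off.")
    else:
        print(f"Port {port} is only reachable via loopback.")


if __name__ == "__main__":
    check_exposure()
```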