German Government Alarmed That US AI Model "Mythos" Could Boost Cyberattack Risks

The German government has expressed concern that a new artificial intelligence (AI) model from the United States could dramatically increase the risk of severe cyberattacks. The worry centers on "Mythos", a product developed by the US firm Anthropic, because it is capable of discovering, and potentially exploiting, security vulnerabilities on a previously unknown scale.

According to reporting in the "Handelsblatt", citing sources within the government, the National Security Council has taken up the model and the potential consequences of its widespread use. The council, which coordinates the federal government's policy decisions on security matters, deliberates confidentially, which is why the government has issued no official statement on the matter.

The German Federal Criminal Police Office (BKA) likewise sees significant risks inherent in these new applications. Carsten Meywirth, a director at the BKA and head of its cybercrime department, acknowledged that the ability to quickly find and patch vulnerabilities is, at first, a positive development. He cautioned, however, that experience shows cybercriminal methods and attack vectors rapidly adapt to the current state of technology. Meywirth explained that attack vectors are the concrete ways hackers target systems, such as exploiting software flaws or manipulating emails. He stressed that this dynamic cannot be halted unilaterally.

In response, the federal government intends to open discussions with the US company. A spokesperson for the Ministry of the Interior confirmed to the "Handelsblatt": "The federal government is currently in communication with the manufacturer, Anthropic." When asked whether "Mythos" could contribute to the development of dangerous cyber weapons, however, the spokesperson remained guarded, stating that the Ministry cannot currently comment on possible implications.