German regulators are issuing a stark warning against the uncritical acceptance of misinformation generated by artificial intelligence, citing the potential erosion of public trust and the risk of widespread error. Klaus Müller, President of the Bundesnetzagentur – the German Federal Network Agency responsible for enforcing the European AI Act – voiced particular concern in an interview with the Neue Osnabrücker Zeitung, highlighting the dangers posed by the unreliable output of advanced language models such as ChatGPT and Gemini.
While acknowledging that challenges in areas with definitive answers, such as mathematical or scientific calculations, remain manageable, Müller emphasized the significant peril that arises when AI generates claims in complex social, political, or historical contexts. He described the common term for AI’s fabrication of information as “a nicer word for lies,” stressing the potential for severe consequences if users and institutions fail to critically evaluate AI-generated content.
“The danger lies in the potential to damage trust and cause grave mistakes if individuals, organizations, and the media treat this technology uncritically,” Müller stated, expressing deep concern about the ramifications. He specifically cautioned against unquestioning reliance on AI outputs, which could distort public understanding and undermine established facts.
Despite his anxieties about the burgeoning field of AI, Müller dismissed fears of a catastrophic, self-governing artificial intelligence scenario. He rejected the popular trope of a rogue AI escaping the laboratory to subjugate humanity, as in “Terminator,” branding such notions as unrealistic fantasies. His message, however, remained firm: the immediate and pressing concern is not a technological apocalypse, but a gradual erosion of truth and a potential crisis of credibility fueled by the uncritical adoption of AI-generated falsehoods.
The Bundesnetzagentur’s intervention signals a growing recognition within German regulatory bodies that while AI presents numerous opportunities, it also poses substantial risks to the integrity of information and the stability of democratic processes, demanding a more cautious and discerning approach than currently observed.


