According to a recent cybersecurity monitor published by the Federal Office for Information Security (BSI) and the Program for Police Crime Prevention (ProPK), only one-fifth of internet users in Germany currently check the source of content to detect misinformation or scams generated by artificial intelligence. While nearly half of those surveyed stated they can identify AI-generated material, only 28% actively search for inconsistencies within images.
The study, which surveyed 3,060 people in January, revealed that approximately one-third have taken no specific measures to detect such content. Furthermore, awareness remains low regarding concrete fraud scenarios, with only 38% believing that criminals can manipulate AI programs to leak sensitive data.
BSI President Claudia Plattner stressed that the ability to identify AI-generated material is crucial for pinpointing risks and combating falsehoods. Meanwhile, ProPK Chair Stefanie Hinz drew attention to “cybertrading fraud,” a commonly observed scam in which criminals promote supposed investment opportunities using AI-generated videos of celebrities. She strongly advised the public to approach such offers with critical skepticism.