Expert Consensus Suggests AI Risks Are Undervalued Compared to Public Concern


According to a new study released by the Chair of Communication Science at RWTH Aachen, experts' views on AI developments differ significantly from those of the general public. While the average citizen appears more risk-aware, the survey found that experts rate AI capabilities as more likely to materialize, more useful, and less perilous than the broader population does.

The research surveyed 1,100 citizens and 119 AI specialists. Respondents assessed a range of scenarios, including medical diagnostics, autonomous weapon systems, and political decision-making. Lead researcher Philipp Brauner noted that, statistically, experts judge AI developments with a strong emphasis on perceived utility, a weighting nearly three times greater than that given to risk. The public, in contrast, places greater importance on weighing potential risks.

The researchers warned of a structural challenge: if AI continues to be developed purely through a lens of narrow utility and function, the resulting systems risk overlooking the core risk priorities of the general populace. The study describes this danger as "Procrustean AI". To remedy this potential misalignment, the researchers advocate for increased public participation throughout the entire lifecycle of artificial intelligence, from initial development and implementation to regulation. The study was published in the academic journal "AI & Society".