A Formal Model of Cognitive Vulnerability
Published in conference proceedings:
CXI International Scientific Conference «International Scientific Review of the Problems and Prospects of Modern Science and Education». USA, Boston, December 2025.
Smart Fanatic Impact: G ≈ 0.65
Motivated Reasoning: Bmot
Published: Dec 2025
Singularity Threshold: D > 0.7, A < 0.5
Population at Risk: 73%–95%
Primary Regulator: A (Attention)
Model Sensitivity: α₂ (Environment) = 5.6%
Resolving the contradiction: why smart people believe in irrational concepts (Stanovich, 2009).
Motivated reasoning (Bmot) is independent of general intelligence.
High IQ serves as a tool for generating complex arguments in defense of errors.
The result is a dangerous agent (High Impact) capable of justifying and spreading cognitive errors.
This work introduces a formal mathematical model of "Stupidity" (G) — not as a lack of intelligence, but as an architectural cognitive vulnerability. Stupidity is defined as a system failure that occurs when information filtering demands exceed attentional control resources.
Central thesis: high IQ does not protect against irrational beliefs. The model separates cognitive biases into two types: stochastic errors (Berr), which are reduced by intelligence, and motivated beliefs (Bmot), which are orthogonal to intelligence.
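The statistical structure of this claim can be illustrated numerically. The sketch below uses Python/NumPy with illustrative distributions and functional forms of my own choosing, not the paper's calibration: stochastic errors (Berr) are drawn with a negative dependence on intelligence, while motivated beliefs (Bmot) are sampled independently of it, so the correlation with intelligence is clearly negative for the former and near zero for the latter.

```python
# Illustrative sketch of the Berr/Bmot decomposition described above.
# The distributions and functional forms below are assumptions made for
# demonstration only; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Standardized intelligence scores (z-scores).
iq = rng.normal(0.0, 1.0, n)

# Stochastic errors (Berr): assumed to decline with intelligence, plus noise.
b_err = np.clip(0.5 - 0.2 * iq + rng.normal(0.0, 0.1, n), 0.0, 1.0)

# Motivated beliefs (Bmot): assumed to be drawn independently of intelligence.
b_mot = rng.beta(2.0, 2.0, n)

print("corr(IQ, Berr):", np.corrcoef(iq, b_err)[0, 1])  # strongly negative
print("corr(IQ, Bmot):", np.corrcoef(iq, b_mot)[0, 1])  # near zero ("orthogonal")
```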
Key discovery — the "Stupidity Singularity": when digital noise D > 0.7 and attention control A < 0.5, the model demonstrates exponential growth in G, where the agent completely loses rational agency. Monte Carlo simulations (N=10,000) show that 73% of the population falls into the critical risk zone (G > 1.0).
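For readers who want to see the threshold behaviour mechanically, here is a minimal Monte Carlo sketch over N = 10,000 simulated agents. Because the Petrenko Formula itself is not reproduced in this summary, the g_score() function below is a hypothetical stand-in chosen only to exhibit the stated property, exponential growth in G once D > 0.7 and A < 0.5; the resulting risk share is illustrative and should not be read as the paper's 73% estimate.

```python
# Monte Carlo sketch of the "Stupidity Singularity" regime (N = 10,000 agents).
# g_score() is a hypothetical stand-in for the published formula, constructed
# only to show exponential blow-up past D > 0.7 and A < 0.5.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

D = rng.uniform(0.0, 1.0, N)       # digital noise
A = rng.uniform(0.0, 1.0, N)       # attention control
C = rng.uniform(0.0, 1.0, N)       # critical thinking
B_err = rng.beta(2.0, 5.0, N)      # stochastic errors
B_mot = rng.beta(2.0, 2.0, N)      # motivated beliefs

def g_score(d, a, c, b_err, b_mot):
    """Hypothetical G: bias load scaled by noise, damped by attention and
    critical thinking, growing exponentially past the stated thresholds."""
    load = d * (b_err + b_mot) / (a + c + 1e-6)
    blow_up = np.exp(4.0 * np.maximum(d - 0.7, 0.0)
                     + 4.0 * np.maximum(0.5 - a, 0.0))
    return load * blow_up

G = g_score(D, A, C, B_err, B_mot)
in_singularity = (D > 0.7) & (A < 0.5)

print("share with G > 1.0:        ", round(float((G > 1.0).mean()), 3))
print("share in singularity zone: ", round(float(in_singularity.mean()), 3))
print("mean G inside vs outside:  ",
      round(float(G[in_singularity].mean()), 2),
      round(float(G[~in_singularity].mean()), 2))
```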
Practical conclusion: under information overload, traditional methods — education and knowledge accumulation — are insufficient. The priority becomes "attention hygiene": reducing digital noise (D), training attention control (A), and developing critical thinking (C).
The Petrenko Formula has implications not only for modern psychology but also for cybernetics and the architecture of artificial intelligence.
The cognitive vulnerabilities described in the G-model apply directly to modern large language models (LLMs). Models trained on massive volumes of data exhibit the same pattern: when the context is overloaded with user queries (high D) and attention is diluted across tokens (low A), the model begins to "hallucinate," rationalizing answers instead of seeking truth.
The "Stupidity Singularity" for AI is not a metaphor. Transformer architecture is literally built on the Attention mechanism, and when it becomes overloaded, the same exponential growth in errors occurs.
This creates a bridge between cognitive security for humans and AI Alignment — two disciplines working on the same fundamental problem: how to preserve an agent's rationality under conditions of information chaos.
A practical guide to integrating cognitive security metrics into corporate decision-making processes.
A comparative analysis of cognitive biases in biological agents and LLM hallucinations.
A basic overview of motivated reasoning mechanisms and architectural vulnerabilities.