Introduction: Common "Bugs" in Human and AI Thinking
At first glance, the human mind and advanced Large Language Models (LLMs) appear to be two completely different types of intelligence. However, upon closer inspection, a striking fact emerges: both humans and artificial intelligence suffer from similar cognitive vulnerabilities. They make mistakes for the same reasons, falling into traps that are independent of their "processing power".
To understand this phenomenon, we will use the "General Theory of Stupidity" by Igor Petrenko. This model proposes looking at irrationality not as a lack of intelligence, but as an architectural vulnerability—a systemic failure in the very design of the thinking agent.
The purpose of this document is to compare human cognitive errors and AI "hallucinations" through the lens of this theory. We will show that problems of rationality are common to both natural and artificial intelligence, and understanding these problems opens the way to solving them for both ourselves and the machines we create.
1. Anatomy of Human Error: "General Theory of Stupidity"
Our analysis is based on a cybernetic approach to irrationality. Instead of asking how smart an agent is, the theory suggests assessing how vulnerable it is to cognitive failure.
1.1. What is "Stupidity" from a Cybernetics Perspective?
Within the framework of the "General Theory of Stupidity", this concept (denoted as G) has nothing to do with low IQ or education level. "Stupidity" is a functional cognitive vulnerability.
It is a state in which an agent "loses decision-making agency under the influence of external factors". In other words, it is a systemic failure caused by overload. As the original source defines it, it is:
"a systemic failure of the [control architecture]"
The main takeaway for the newcomer is this: the problem lies not in "processor power" (intelligence) but in the architecture of our cognitive system, which cannot cope with the information load of the modern world.
1.2. Three Pillars of Irrationality: Key Vulnerability Factors
The theory identifies three main factors that lead to cognitive failure:
- Digital Noise (D): This is information overload and chaos (informational storm) of the environment. When the amount of incoming information exceeds the system's ability to process it, collapse occurs. The model predicts that the threshold D > 0.7 is a "phase transition point", beyond which exponential growth of errors begins, leading to "cognitive collapse".
- Attention Control (A): This is the main resource we use to filter digital noise and maintain focus. In the theory's equation, attention (A) is the "main denominator of environmental noise". The lower our ability to concentrate, the stronger the effect of informational chaos on us.
- Motivated Reasoning (Bmot): This is ideological bias, the tendency to defend one's beliefs in spite of the facts. Research shows that this type of distortion is orthogonal to intelligence (I): a person uses their mind not to search for truth but to justify an existing point of view. (A toy sketch of how these three factors might combine follows this list.)
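To make the interplay of these three factors concrete, here is a minimal Python sketch. The functional form is an assumption for illustration only, since the equation itself is only paraphrased here: G rises with the ratio D/A, Bmot enters as an independent additive term, and a logistic bump stands in for the phase transition the theory places near D > 0.7; the weights are arbitrary.

```python
import math

def g_score(D: float, A: float, B_mot: float) -> float:
    """Toy estimate of cognitive vulnerability G in [0, 1].

    Assumed form, NOT the theory's published equation: noise D is divided
    by attention A (A as the "main denominator" of noise), bias B_mot adds
    an independent term (orthogonal to intelligence, which is why IQ does
    not appear at all), and a logistic bump models the phase transition
    near D > 0.7.
    """
    noise_pressure = D / max(A, 1e-6)                      # D filtered by attention
    phase_jump = 1 / (1 + math.exp(-20 * (D - 0.7)))       # sharp rise past D = 0.7
    raw = 0.3 * noise_pressure + 0.3 * B_mot + 0.4 * phase_jump  # arbitrary weights
    return min(raw, 1.0)

# A calm, focused, low-bias agent vs. an overloaded, distracted one
print(round(g_score(D=0.3, A=0.9, B_mot=0.1), 2))  # ~0.13: largely rational
print(round(g_score(D=0.8, A=0.4, B_mot=0.1), 2))  # ~0.98: near collapse despite low bias
```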
1.3. The "Smart Stupidity" Paradox
The model elegantly resolves a known paradox: why do smart and educated people often believe in irrational concepts?
Research (Stanovich, 2009; Kahan, 2013) shows that a high IQ not only fails to protect against motivated reasoning (Bmot) but can itself serve as a "tool for generating more complex arguments in defense of erroneous beliefs (rationalization effect)".
As an example, the model considers the archetype of the "Smart Fanatic":
- Profile: IQ=150, Bmot=0.8
- Result: G=0.65 (high level of irrationality)
This person may be a genius in their profession, but in matters affecting their beliefs, they remain completely irrational. Their intelligence works not to find the truth, but to protect their ideology.
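For illustration, the same toy g_score function sketched in section 1.2 (an assumed form, not the theory's actual equation) can be applied to this archetype. The output will not reproduce the source's exact G = 0.65, but the qualitative point survives: IQ never enters the calculation, so raising it cannot lower G.

```python
# "Smart Fanatic": moderate noise, decent attention, strong ideological bias.
# IQ = 150 is deliberately absent from the call: in the G-model, motivated
# reasoning is orthogonal to intelligence, so a high IQ does not reduce G.
smart_fanatic_G = g_score(D=0.6, A=0.6, B_mot=0.8)
print(round(smart_fanatic_G, 2))  # ~0.59: strongly irrational despite genius-level IQ
```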
If this is the nature of human cognitive failures, can we see something similar in the behavior of artificial intelligence?
2. Ghosts in the Machine: How Stupidity Theory Explains AI "Hallucinations"
Errors made by Large Language Models (LLMs) are often called "hallucinations"—the generation of plausible but false or nonsensical information. It turns out that these failures have the same nature as human irrationality.
2.1. From Cognitive Collapse to LLM "Hallucinations"
The key idea connecting the two worlds is formulated directly in the study:
"Cognitive vulnerabilities described in the G-model are directly applicable to modern LLMs."
Like humans, LLMs start generating errors when their architecture is overloaded. Their "hallucinations" are not a random glitch but a systemic response to cognitive overload.
2.2. Projecting Human Vulnerabilities onto AI
Let's compare the three key factors from the "General Theory of Stupidity" with specific phenomena in LLM operation:
- Digital Noise (D) for AI: The analog is overloading the model with data in the user's request. When the prompt is too long, complex, or contains contradictory information, it creates "digital noise" for the model that it cannot effectively process.
- Attention Control (A) for AI: This is a direct analogy to the "Attention" mechanism in the Transformer architecture on which modern LLMs are built. This mechanism determines how important different parts of the input data (tokens) are. When the context is too large, attention becomes "blurred" across thousands of tokens and the model loses focus, just as a person loses concentration (a toy illustration of this blurring follows this list).
- Motivated Reasoning (Bmot) for AI: This manifests when an LLM, instead of seeking truth or admitting ignorance, begins to "rationalize" answers. The model completes information to fit an expected pattern or style, even if that information is false. It generates the most probable continuation of the text, not the most truthful one.
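The "blurring" of attention can be shown schematically with NumPy. A single softmax over random relevance scores stands in for one attention head; real Transformer attention uses learned query/key projections and many heads, so this is an analogy, not the actual mechanism. As the number of tokens grows, the largest weight any single token can receive shrinks and the entropy of the distribution rises.

```python
import numpy as np

def attention_focus(num_tokens: int, seed: int = 0) -> tuple[float, float]:
    """Toy single-head attention over random relevance scores.

    Returns (max_weight, entropy) of the softmax distribution. More tokens
    means a flatter distribution: the head's focus gets "blurred".
    """
    rng = np.random.default_rng(seed)
    scores = rng.normal(size=num_tokens)             # stand-in for query-key scores
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    entropy = float(-(weights * np.log(weights)).sum())
    return float(weights.max()), entropy

for n in (16, 256, 4096):
    max_w, ent = attention_focus(n)
    print(f"{n:5d} tokens: max weight {max_w:.3f}, entropy {ent:.2f}")
```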
2.3. "Stupidity Singularity" for Neural Networks
For AI, the "Stupidity Singularity" is, in the theory's words, "not a metaphor". When the Attention mechanism in an LLM is overloaded, the same cascading growth of errors occurs, leading to mass "hallucinations". The model loses the ability to maintain logical consistency and begins to generate incoherent or entirely fictional content.
This collapse occurs at the same threshold values as in humans: at D > 0.7 and A < 0.5.
Now that we have seen the parallels in the theory, let's summarize them in a single visual table for direct comparison.
3. Comparative Analysis: Human vs. AI
3.1. Table of Cognitive Vulnerabilities
| Cognitive Vulnerability | Manifestation in Humans | Manifestation in AI (LLM) |
|---|---|---|
| Source of Error | Not low IQ, but an architectural vulnerability of the cognitive system | Not a low number of parameters, but an architectural vulnerability, a key element of which is the Attention mechanism |
| Overload (High D) | Digital noise, excessive information flow, multitasking | Complex, verbose, or noisy user request (context) |
| Attention Failure (Low A) | Loss of concentration, inability to filter important from unimportant | Overload of the "Attention" mechanism in Transformer architecture, "blurring" of focus |
| Bias (Bmot) | Motivated reasoning, rationalization, and defense of one's beliefs | "Rationalization" of the answer, generation of plausible but false information (hallucinations) |
| Outcome (Singularity) | "Cognitive collapse", complete loss of rationality and total degradation of critical thinking | Mass "hallucinations", loss of logical consistency of the answer |
This striking similarity suggests that the problem of rationality lies deeper than raw processing power. More importantly, it points to a common path toward a solution.
4. Conclusions: Common Problem — Common Solution
The main conclusion from this comparison is that problems of rationality for both humans and AI are architectural, not computational in nature. Simply increasing IQ (for humans) or the number of parameters (for AI) does not solve the root problem of vulnerability to overload and distortion.
This idea opens up two key directions for improving rationality:
- For Humans: "Attention Hygiene". In the era of information overload, the ability to manage one's attention (A) and deliberately reduce digital noise (D) matters more than simply accumulating knowledge. Developing self-control and mindfulness is not a luxury but a necessary condition for preserving rationality.
- For AI: Architectural Elegance. The future of AI lies not in "brute force" computing but in designing better architectures. This means building in attention control and verification mechanisms (the "G-factor" for AI mentioned above) that let systems maintain logical integrity and recognize the limits of their competence even under uncertainty (a minimal sketch of one such gate follows this list).
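One possible shape of such a verification mechanism is sketched below in plain Python over hypothetical per-token probabilities. The gate, the threshold, and the abstention message are all illustrative assumptions; production systems would rely on calibrated uncertainty estimates, retrieval-based fact checking, or trained refusal behavior.

```python
import math

def answer_or_abstain(token_probs: list[float], surprisal_threshold: float = 2.0) -> str:
    """Hypothetical verification gate for an LLM's draft answer.

    token_probs: probabilities the model assigned to each generated token.
    If the average per-token surprisal (in nats) exceeds the threshold,
    the system abstains instead of emitting a likely hallucination.
    """
    avg_surprisal = -sum(math.log(p) for p in token_probs) / len(token_probs)
    if avg_surprisal > surprisal_threshold:
        return "I am not confident enough to answer this reliably."
    return "ANSWER_OK"

# A confident draft vs. one generated from a noisy, contradictory context
print(answer_or_abstain([0.9, 0.85, 0.95, 0.8]))   # ANSWER_OK
print(answer_or_abstain([0.2, 0.05, 0.1, 0.15]))   # abstains
```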
Understanding one's own cognitive limitations is the key to creating more effective, safe, and rational artificial intelligence. The stakes are extremely high. As the author of the theory warns, "if the external environment continues to become more complex (D ↑) without compensatory growth in attention management technologies (A ↑), society is doomed to a 'Stupidity Singularity'—a state where collective decisions become statistically worse than random ones."
This approach unites such seemingly distant fields as human cognitive security and AI Alignment (aligning AI with human values), because at the heart of both lies the same fundamental task: how to preserve reason in conditions of informational chaos.