Theory Induced Blindness: A Closer Look
by Joseph Mark Haykov
July 12, 2024
Abstract
This paper argues that the cognitive bias Daniel Kahneman called "theory-induced blindness" is better conceptualized as "assumption-induced blindness" (AIB). Kahneman is correct that the bias is induced by continued use of a false theory. However, it is caused by a false implicit assumption (a wrong initial hypothesis) accepted as an axiom, from which the flawed, blindness-inducing theory is subsequently derived. The blindness is therefore rooted in a false axiomatic assumption, not in the theory itself.
Keywords
Cognitive Bias
Theory-Induced Blindness
Assumption-Induced Blindness
Axioms
Kahneman
Implicit Assumptions
Logical Deduction
Empirical Evidence
JEL Codes
B41: Economic Methodology
D01: Microeconomic Behavior: Underlying Principles
D03: Behavioral Microeconomics: Underlying Principles
Introduction
Theory-induced blindness is a cognitive bias first described by Daniel Kahneman in his 2011 book, Thinking, Fast and Slow. In it, Kahneman, a renowned Nobel Prize-winning psychologist, describes the blindness induced by learning, accepting, and using a false theory. This blindness qualifies as a cognitive bias because it produces systematically irrational behavior: we accept an axiom that is true in theory but false in reality, and then use the theory derived from it to make claims about reality that are known in advance to be unreliable. Making claims and decisions based on such a theory, despite knowing it is logically deduced from a hypothesis that does not hold true in reality, is irrational and is therefore classified as a cognitive bias.
Kahneman specifically writes:
"The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."
Kahneman's own account suggests that the blindness is not caused by the theory itself but by a false implicit assumption embedded in an axiom, an initial hypothesis accepted as true, from which the flawed theory is then logically deduced. While the blindness appears to stem from prolonged use of the flawed theory, its true origin lies in the false axiom underpinning the logically derived claims. We subconsciously conflate axioms, which are accepted without evidence because they are deemed "self-evidently" true and can therefore turn out to be false in reality, with empirical facts, which are independently verifiable and cannot similarly turn out to be false.
Kahneman further elaborates on this point in his discussion of Bernoulli’s flawed theory:
"The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes."
This quote underscores the perspective we share with Kahneman: the root cause of the blindness lies in the implicit assumptions embedded in a theory's axioms, which lead to a flawed understanding of human action. What sustains the blindness is the failure to recognize that any long-used scientific theory is invariably derived from a set of axioms. Barring errors in deductive logic, such a theory cannot contradict reality unless one of its axioms is false. Logical deductions, like mathematical theorem proofs, are independently verifiable for accuracy. Thus, the only way a theory can turn out to be false in reality is if one of its axioms is false. Until the false axiom, such as Daniel Bernoulli's erroneous assumption about how people evaluate risk, is corrected, the flawed, blindness-inducing theory will continue to misdescribe reality.
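To see why the deduction itself is never the culprit, consider a minimal sketch in the Lean theorem prover (our own illustration, not Kahneman's; the names Swan, isWhite, and allWhite are hypothetical placeholders). The proof below is machine-checked and error-free, yet the theorem is only as true as the axiom it rests on:

    -- Hypothetical example: a flawless deduction from a false axiom.
    axiom Swan : Type
    axiom isWhite : Swan → Prop

    -- The "self-evident" axiom, accepted without empirical verification.
    axiom allWhite : ∀ s : Swan, isWhite s

    -- The deduction is valid and independently verifiable by Lean's kernel.
    theorem every_swan_is_white (s : Swan) : isWhite s := allWhite s

    -- Yet black swans exist: the axiom, and with it the theorem read as a
    -- claim about reality, is false. No proof checker can detect this.

The checker certifies only the step from axiom to theorem; it cannot certify the axiom against reality.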
In this sense, the false implicit assumption that causes theory-induced blindness is, as Kahneman puts it:
"If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing."
It is precisely this false assumption, that "there must be a perfectly good explanation that you are somehow missing," that causes the blindness. There is no such perfectly good explanation, barring the obvious one: one of your axioms is false, and until you figure out which one, continuing to use a theory already known to be flawed is a singularly bad idea that can lead to disaster.
Formally, theory-induced blindness is a cognitive bias whereby people irrationally continue to use flawed theories, falsely believing that some phantom "good missing explanation" accounts for why the theory fails in reality. No such explanation exists, barring a flaw in one of the axioms. In this sense, theory-induced blindness is simply intellectual laziness: the brain subconsciously shirks the slow, expensive System 2 work it knows will be required to correct the flawed axiom and re-derive the theory. The brain reassures us, "Don't worry about the false axiom; there is a perfectly good explanation," even though no such explanation exists in reality.
Therefore, for the purposes of this discussion, and in general, we propose renaming this cognitive bias assumption-induced blindness (AIB). While the blindness is induced by using a false theory, it is caused by relying on a false assumption accepted as an axiom, from which the flawed, blindness-inducing theory is logically deduced. We conflate the guaranteed certainty of error-free logical deduction with the reliability of the axioms themselves, which are hypotheses that can always turn out to be false in reality, and often do. For example, the Euclidean assumption that the shortest distance between two points is a straight line is violated in practice, as evidenced by GPS technology, which must account for the curvature of space-time.
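The same point can be made without relativity. For a traveler confined to the Earth's surface, the Euclidean straight line (the chord through the Earth) is not an available path; the shortest achievable route is the great-circle geodesic, which is measurably longer. The following Python sketch, our own illustration using approximate coordinates for New York and London and a spherical-Earth radius of 6371 km, makes the gap concrete:

    import math

    EARTH_R_KM = 6371.0  # mean Earth radius, spherical approximation

    def to_xyz(lat, lon):
        """Convert latitude/longitude in degrees to Cartesian coordinates in km."""
        phi, lam = math.radians(lat), math.radians(lon)
        return (EARTH_R_KM * math.cos(phi) * math.cos(lam),
                EARTH_R_KM * math.cos(phi) * math.sin(lam),
                EARTH_R_KM * math.sin(phi))

    def chord_km(a, b):
        """Euclidean straight-line distance, which cuts through the Earth."""
        return math.dist(to_xyz(*a), to_xyz(*b))

    def great_circle_km(a, b):
        """Shortest distance along the surface: the geodesic a traveler can follow."""
        (phi1, lam1), (phi2, lam2) = (map(math.radians, p) for p in (a, b))
        cos_angle = (math.sin(phi1) * math.sin(phi2)
                     + math.cos(phi1) * math.cos(phi2) * math.cos(lam2 - lam1))
        return EARTH_R_KM * math.acos(cos_angle)

    new_york, london = (40.71, -74.01), (51.51, -0.13)
    print(f"chord: {chord_km(new_york, london):.0f} km")                 # ~5388 km
    print(f"great circle: {great_circle_km(new_york, london):.0f} km")   # ~5567 km

Within Euclid's axioms the chord is the shortest distance; under the constraint of a curved surface, the geodesic is. The axiom, not the deduction, is what fails to match the traveler's reality.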
Manifestations of Assumption-Induced Blindness
In our white papers, we examine various manifestations of assumption-induced blindness (AIB) in physics, mathematics, finance, and cryptocurrency payment processing. As we show, this cognitive bias, induced by a theory but caused by a false initial assumption, is prevalent across all areas of study. It should therefore not surprise us that the Bitcoin white paper is also rife with AIB.
However, the primary way AIB manifests in practice is as a related and much better understood phenomenon: the Dunning-Kruger effect, a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. The root of this overestimation is assumption-induced blindness, for the mistakes made by those with limited competence stem from false assumptions about how things in that domain truly operate, assumptions that experts know to be inaccurate.
In summary, AIB is best conceptualized as the failure to switch from an incorrect, empirically falsified initial assumption to the relatively more accurate alternative hypothesis that better fits how reality truly operates. This is analogous to the suboptimal choice not to switch doors in the Monty Hall problem, despite the additional information made available by the host; the simulation below makes the cost of not updating explicit.
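To quantify that analogy (a minimal sketch of our own, not part of Kahneman's account), the following Python simulation shows that a contestant who clings to the initial pick wins about one time in three, while one who updates on the host's information wins about two times in three:

    import random

    def monty_hall(trials=100_000):
        """Simulate the Monty Hall game; return win rates for staying vs. switching."""
        stay_wins = switch_wins = 0
        for _ in range(trials):
            prize = random.randrange(3)    # door hiding the car
            choice = random.randrange(3)   # contestant's initial pick
            # The host opens a door that is neither the pick nor the prize.
            opened = next(d for d in range(3) if d != choice and d != prize)
            # Switching means taking the one remaining closed door.
            switched = next(d for d in range(3) if d != choice and d != opened)
            stay_wins += (choice == prize)
            switch_wins += (switched == prize)
        return stay_wins / trials, switch_wins / trials

    stay_rate, switch_rate = monty_hall()
    print(f"stay: {stay_rate:.3f}, switch: {switch_rate:.3f}")  # ~0.333 vs. ~0.667

Refusing to switch is exactly the AIB pattern: the initial assumption ("my first pick is as good as any") has been undercut by new evidence, yet it is retained.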
With this understanding of AIB, we are now prepared to explore our first example of how AIB manifests in reality, as described in detail in the TNT white paper. But first, we invite you to read our SEC legal brief.
References
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Bernoulli, D. (1738). Specimen theoriae novae de mensura sortis. Commentarii Academiae Scientiarum Imperialis Petropolitanae, 5, 175-192. (Translated into English by Dr. Louise Sommer: Exposition of a New Theory on the Measurement of Risk. Econometrica, 22(1), 23-36.)
Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved from https://bitcoin.org/bitcoin.pdf.