AIB: Assumption-Induced Blindness
by Joseph Mark Haykov
June 20, 2024
Abstract
As we carefully examine how Daniel Kahneman defines theory-induced blindness in his 2011 book "Thinking, Fast and Slow," we notice something peculiar. Although the bias is referred to as ‘theory’-induced blindness, the book makes it clear that the blindness is actually induced by an implicit assumption underlying an axiom from which a theory was derived. In this paper, we further explore the concept of assumption-induced blindness (AIB).
Keywords: Assumption-Induced Blindness (AIB), Theory-Induced Blindness, Axiom of Separation, ZF Set Theory, Bell's Inequality, Quantum Entanglement, Dunning-Kruger Effect, Empirical Evidence, Logical Claims, Axioms in Mathematics, Euclidean Geometry, Riemannian Geometry, Cognitive Bias, Dogma in Mathematics
JEL Codes: C02, C65, D80, D83, G14, K20, L20, L21
Introduction
Rather than revisit the flawed Bernoulli assumption that Kahneman referenced in his book, we introduce what we believe is a much better, more illustrative example of so-called theory-induced blindness, which we shall refer to in this paper as assumption-induced blindness. In the case of ZF set theory, the implicitly assumption-dependent axiom that causes the blindness is the Axiom of Separation. The Axiom of Separation allows us to take a set C = {A, B} and form two subsets D = {A} and E = {B}. Each subset contains one of the original elements of C, thereby effectively "splitting" C into its constituent elements.
This axiom is crucial for forming subsets based on specific properties and is fundamental to the operations within set theory, enabling the precise manipulation of sets and their elements. The Axiom of Separation seems self-evidently true and was therefore posited as an axiom in ZF set theory by Zermelo and Fraenkel without requiring any further evidence or proof.
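For readers who prefer code to set-builder notation, here is a minimal sketch of exactly what Separation licenses: given any set and any predicate, we may form the subset of elements satisfying that predicate. (The helper name and the toy elements below are our own illustrative choices, not part of ZF.)

    # A minimal sketch of the Axiom of Separation: for any set S and any
    # predicate phi, the subset {x in S : phi(x)} is guaranteed to exist.
    def separate(S, phi):
        """Return the subset of S whose elements satisfy phi."""
        return {x for x in S if phi(x)}

    C = {"A", "B"}
    D = separate(C, lambda x: x == "A")  # D = {'A'}
    E = separate(C, lambda x: x == "B")  # E = {'B'}
    assert D | E == C  # C has been 'split' into its constituent elements

As we show next, the derivation of Bell's Inequality depends on this operation always being available.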
However, the Axiom of Separation does not always hold in reality. For example, when the elements of a set are entangled elementary particles, such as photons or electrons, the Axiom of Separation is violated. It is precisely for this reason that Bell's Inequality, which holds in ZF set theory, does not hold in reality, as recognized by the 2022 Nobel Prize in Physics. We note in passing that, as illustrated in a video from MIT's online lectures¹, the Axiom of Separation from ZF set theory is used to derive Bell's Inequality: at approximately the 1 hour and 15 minute mark, the lecturer uses the axiom to split the set N(U,¬B) into N(U,¬B,¬M) and N(U,¬B,M). When the set elements are entangled particles, this step is unavailable: N(U,¬B) cannot be split into the elements that also belong to M and those that do not, because the elements of N(U,¬B) are all entangled.
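To make the role of Separation explicit, here is the standard Wigner-style counting argument, written with the lecture's labels U, B, M (the exact sets used in the lecture may differ; we sketch the argument under the assumption that every element either does or does not possess each property):

    N(U,¬B) = N(U,¬B,M) + N(U,¬B,¬M)      (Separation applied to M)
    N(B,¬M) = N(U,B,¬M) + N(¬U,B,¬M)      (Separation applied to U)
    N(U,¬M) = N(U,B,¬M) + N(U,¬B,¬M)      (Separation applied to B)

Adding the first two lines and comparing with the third, since all counts are non-negative:

    N(U,¬B) + N(B,¬M) ≥ N(U,¬M)

which is a Bell-type inequality. Every line of the derivation presupposes that each set can be partitioned by a third property. If the elements are entangled, the very first partition fails, and the inequality no longer follows.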
Now, we do not claim to be theoretical physicists—merely practitioners who have worked on statistical arbitrage trading on Wall Street—but one need not be a physicist to see that theories can quickly turn into wild, rather than educated, guesses owing to assumption-induced blindness. For example, the existence of dark matter was proposed because, without it, Einstein's theory of gravitation does not match what we observe at galactic scales. Yet Einstein's theory could turn out to be false because of a false underlying axiom, not because dark matter exists. Many similarly speculative guesses, such as the holographic universe, have been proposed to explain the violation of Bell's Inequality. In reality, however, it was already clear that this inequality would be violated once the first double-slit experiments were performed over a hundred years ago and the real-world existence of quantum entanglement was established. Quantum entanglement violates the Axiom of Separation, and therefore Bell's Inequality does not hold in reality. Before reaching for ideas like 'non-locality,' 'dark matter,' or 'holographic universes,' perhaps physicists should first fix the bug in their source code—specifically, ZF set theory—to make it align fully with how reality operates.
The purpose of this discussion, however, is not to delve into theoretical physics, but to explain that assumption-induced blindness (AIB) is simply the Dunning-Kruger effect as it affects scientists. In other words, AIB is what happens when the Dunning-Kruger effect occurs in a subject who also happens to be a top expert in their field. In that case, the effect manifests as assumption-induced blindness: blindness induced by dogma, or by implicit assumptions embedded in the underlying axioms from which a theory logically follows. Such implicit assumptions, once embedded in core axioms, tend over time to become conflated with evidence or empirical fact. It is exactly for this reason that, in this paper, we explicitly note which of our claims are evidence-based and which are axioms.
The underlying, fundamental error in thinking that causes what Daniel Kahneman termed 'theory-induced blindness' is disregard for the fact that the logical claims that collectively form a mathematical theory do not all carry the same degree of certainty. We can be far more certain of the truth of evidence-based claims than we can ever be of the truth of the implicit assumptions underlying the axioms from which the remaining claims of the theory—its theorems—are deduced. This asymmetry is rarely fully accounted for or exploited. Specifically, we all have a strong propensity to drastically underestimate the odds that a blindness-inducing educated guess—an implicitly assumption-dependent axiom—will turn out to be false, as exemplified by the Axiom of Separation in ZF set theory, which does not always hold in reality.
What we are saying here, dear reader, is that dogma (or hearsay)—specifically in applied mathematics—is represented by axioms accepted as true because they appeared self-evidently true to whoever first posited them, in what one hopes was an accurate, well-educated guess. But guesses can always turn out to be wrong, as the Axiom of Separation did—a wrong guess made by Zermelo and Fraenkel about a century ago, and unsurprisingly so, for we knew far less back then, particularly about quantum entanglement. Such axioms are mere assumptions—guesses at the truth—and as such can always turn out to be wrong. What is orders of magnitude less likely to turn out to be false are logical claims based on factual, real-world evidence and rational reasoning, represented in mathematics by formal proof using deductive logic. The reason is simple: unlike axioms, evidence-based claims and deductive proofs are both independently verifiable for accuracy and truthfulness.
Recalling our middle-school mathematics, we appreciate that the accuracy of a mathematical proof is fully guaranteed because it is independently verifiable. Many individuals, including children as young as fifth grade who are capable of logical reasoning, have proven the Pythagorean Theorem for themselves. It is important to note, however, that the "absolute" truths established through such proofs are conditional rather than absolute.
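For concreteness, here is one such independently verifiable argument—the classic rearrangement proof, which any reader can check line by line. Place four copies of a right triangle with legs a, b and hypotenuse c inside a square of side a + b; the triangles leave an inner square of side c uncovered, so:

    (a + b)² = c² + 4·(ab/2)
    a² + 2ab + b² = c² + 2ab
    a² + b² = c²

Note that even this proof is conditional: it tacitly relies on Euclidean axioms about area and angles.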
All a mathematical proof does is establish that the theorems follow from the axioms: the theorems are absolutely guaranteed to hold, both in theory and in reality, as long as the axioms hold. For example, the Pythagorean Theorem is certain, by independent verification, to hold under the foundational Euclidean axioms. Yet Einstein's adoption of Riemannian geometry to describe curved space-time provides a more accurate model of the universe, challenging the traditional Euclidean perspective. This has practical consequences in technologies like GPS, which must account for relativistic time dilation because clocks on satellites run at slightly different rates than clocks on Earth. Indeed, in the objective reality in which GPS technology is indispensable, the shortest distance between two points is not necessarily a straight line—not just in theory, but in fact.
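To see a flat-geometry assumption fail in a small computation, here is a minimal sketch (the cities and constants are our own illustrative choices): treating latitude and longitude as flat Cartesian coordinates yields a "straight-line" distance that disagrees substantially with the great-circle distance along the curved surface.

    import math

    # Two cities (latitude, longitude in degrees); illustrative values.
    new_york = (40.71, -74.01)
    madrid = (40.42, -3.70)

    def flat_distance_km(p, q, km_per_degree=111.2):
        """Naive 'Euclidean' distance, pretending the map is a flat plane."""
        return km_per_degree * math.hypot(p[0] - q[0], p[1] - q[1])

    def great_circle_km(p, q, radius_km=6371.0):
        """Haversine formula: geodesic distance on a spherical Earth."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * radius_km * math.asin(math.sqrt(h))

    print(flat_distance_km(new_york, madrid))   # ~7800 km: the flat-map guess
    print(great_circle_km(new_york, madrid))    # ~5770 km: the actual geodesic

The flat-map answer is off by roughly two thousand kilometers, because the "obvious" axiom that the map is a plane is simply false for the Earth's surface.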
However, as we use the Euclidean theory of geometry, we often forget that the axiom that the shortest distance between two points is a straight line is not an independently verifiable fact—unlike the fact that the Sun rises every day, whose certainty is as close to absolute as it gets. The assertion that the shortest distance between two points is a straight line is a guess, an assumption-dependent axiom, which could always turn out to be false. Forgetting this, and failing to question our axioms, is what causes the blindness.
It is specifically for this reason that, in all our white papers, we clearly and explicitly distinguish between axioms and logical claims based on independently verifiable empirical evidence. Doing so facilitates subsequent error detection, which helps mitigate assumption-induced blindness (AIB). Should any claim logically deduced from the axioms that form our theory (i.e., any theorem) contradict empirical evidence, there is only one way this could possibly occur: one of our axioms—our educated guesses—must be wrong. But which one?
Rather than question reality, we must identify specifically which assumption-dependent axiom is being violated. If any claim logically deduced from a set of axioms is violated in reality, we know for a fact that one of the axioms is false, and before doing anything else we must figure out which axiom rests on a false implicit assumption. In other words, if our theory contradicts empirical evidence—barring errors in deductive logic, since facts cannot turn out to be false—one or more of the axioms underlying our theory must be inducing blindness. We may believe all the axioms to be true, but we must accept the certainty that one of them is, in fact, false; and once we know a flaw exists, finding it becomes much easier. Put differently, if your theory contradicts reality, it is certain to contain not merely what statistical inference calls a Type I or a Type II error, but BOTH: it contains at least one axiom that is false, and, at the same time, it is missing the true hypothesis needed to replace the false one. As the old Russian criminal ballad 'Murka' has it: if too many of your crew are getting caught (your theory contradicts reality), there is a rat in the gang (an error in an axiom), and until that rat is dealt with, neither the gang nor the science will ever work.
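The logic of the preceding paragraph is simply modus tollens applied to the entire axiom set; written out (in our own notation, for clarity):

    If  A₁ ∧ A₂ ∧ … ∧ Aₙ ⊢ T    (the theorem T is deduced from the axioms),
    and observation establishes ¬T    (the theorem fails in reality),
    then ¬(A₁ ∧ A₂ ∧ … ∧ Aₙ), i.e., ¬A₁ ∨ ¬A₂ ∨ … ∨ ¬Aₙ.

At least one axiom is false, but the evidence by itself does not tell us which one; that is the detective work.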
This whole issue goes back to a fundamental misunderstanding of how hypothesis testing works in statistical inference—a deeply counterintuitive process that took me thirty years of hard work to fully master. The key concept people have a strong propensity to forget is that, not only in statistics and probability but in all of mathematics, a theory is represented by a dual set of logical claims: axioms, and theorems that logically follow from the axioms. That is all a theory is. What the theorems state does not matter independently, because theorems, unlike axioms, are not independent logical claims: barring errors in the deductive logic used to prove them, as with Fermat's Last Theorem, theorems are guaranteed to hold, not only in theory but also in reality, so long as the axioms hold. What this means is that once we have established that a theory contradicts reality, it is certain—with probability 1—that we have committed both a Type I and a Type II error. In our usage, the Type I error is incompleteness: the true hypothesis was rejected and is missing from the theory. The Type II error is inconsistency: a false axiom was not rejected, so the theory contradicts reality. A false theory is both incomplete and incorrect; a theory that is incomplete but correct cannot turn out to be false.
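One way to formalize this dual-error claim (our own notation, offered as a sketch): let a theory be the pair (A, Th(A)), where A is the axiom set and Th(A) is the set of theorems deducible from A. Then:

    If some t ∈ Th(A) is contradicted by observation:
    (1) some false axiom a ∈ A was not rejected — inconsistency (the Type II error above); and
    (2) the true axiom a* that should replace a was, in effect, rejected, since a* ∉ A — incompleteness (the Type I error above).

Fixing the theory therefore requires two moves, not one: delete the false a and add the missing a*.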
Schools of Economic Thought
While there are many schools of economic thought, the mainstream view—adopted by the vast majority of practicing economists and used by major central banks, including the US Fed, to set interest rates—is mathematical economics. This approach is rooted in the Nobel Prize-winning work of Kenneth Arrow and Gérard Debreu: the Arrow-Debreu general equilibrium framework. There is a very good reason why this specific approach is almost universally preferred over less popular competing schools of thought: it is provably more useful (it has higher use value to the modeler) than competing theories from other schools of economic thought, and here is why. Consider the famous rental-car exchange from Seinfeld:
Jerry: I don't understand. Do you have my reservation?
Rental Car Agent: We have your reservation, we just ran out of cars.
Jerry: But the reservation keeps the car here. That's why you have the reservation.
Rental Car Agent: I think I know why we have reservations.
Jerry: I don't think you do. You see, you know how to *take* the reservation, you just don't know how to *hold* the reservation. And that's really the most important part of the reservation: the holding. Anybody can just take them.
Similarly, economists from all schools of economic thought know how to make true claims; they just don't know how to make claims that are also useful in addition to being true—except, of course, in mathematical economics, where logical claims are guaranteed to be relatively more useful than those made by any competing school of economic thought. This is due to the formal axiomatic approach combined with strict deductive reasoning used in mathematical economics, as we illustrate next.
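As a foretaste of that illustration, here is a minimal sketch of the kind of object the Arrow-Debreu framework studies: a two-good exchange economy with two Cobb-Douglas consumers, in which the market-clearing relative price follows from the axioms by pure deduction. All parameter values, and the closed-form solution approach, are our own illustrative choices.

    # A minimal Arrow-Debreu-style exchange economy: two goods, two
    # Cobb-Douglas consumers. We normalize p2 = 1 and solve in closed form
    # for the relative price p1 that clears the market for good 1.
    # (By Walras' law, the market for good 2 then clears automatically.)
    consumers = [
        # (alpha, endowment of good 1, endowment of good 2) -- illustrative
        (0.6, 10.0, 2.0),
        (0.3, 4.0, 12.0),
    ]

    # Cobb-Douglas demand for good 1 is x1 = alpha * wealth / p1, where
    # wealth = p1 * e1 + e2. Market clearing is linear in p1:
    #   sum_i alpha_i * (p1 * e1_i + e2_i) / p1 = sum_i e1_i
    numerator = sum(a * e2 for a, e1, e2 in consumers)
    denominator = sum((1 - a) * e1 for a, e1, e2 in consumers)
    p1 = numerator / denominator

    demand = sum(a * (p1 * e1 + e2) / p1 for a, e1, e2 in consumers)
    supply = sum(e1 for a, e1, e2 in consumers)
    print(f"equilibrium relative price p1 = {p1:.4f}")
    assert abs(demand - supply) < 1e-9  # the theorem holds, given the axioms

The point of the sketch is the final assert: given the axioms (price-taking, the stated preferences and endowments), market clearing is not a guess but a deduction—which is exactly the property the text attributes to mathematical economics.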
References
Aspect, A., Grangier, P., & Roger, G. (1982). "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities." Physical Review Letters, 49(2), 91-94.
Bell, J. S. (1964). "On the Einstein Podolsky Rosen Paradox." Physics Physique Fizika, 1(3), 195-200.
Einstein, A., Podolsky, B., & Rosen, N. (1935). "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" Physical Review, 47(10), 777-780.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kruger, J., & Dunning, D. (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology, 77(6), 1121-1134.
Nobel Prize. (2022). "The Nobel Prize in Physics 2022." Retrieved from https://www.nobelprize.org/prizes/physics/2022/summary/
Zermelo, E., & Fraenkel, A. A. (1922). "Zur Theorie der Mengen." Mathematische Annalen, 86, 230-253.