Uncertainty vs Maximum Likelihood
Abstract
Certainty and probability are closely related but subtly distinct concepts. Probability refers to the likelihood of an event occurring, such as the odds of winning at blackjack. Certainty, on the other hand, pertains to the confidence in the truth of a logical proposition or claim. Mathematically, we can measure the certainty that a logical proposition is true in reality, not just in theory, by assessing the probability of encountering a real-world event that would falsify the proposition.
For example, the proposition "2 + 2 = 4" is universally true in arithmetic theory, but it can fail in reality. While 2 apples plus 2 apples equals 4 apples, the statement "2 moons of Mars plus 2 moons of Mars equals 4 moons" is false, because Mars has only 2 moons, no matter how many times they are counted. In contrast, the claim "the dog ate my homework" is based on evidence. Once its truth is established, it cannot be disproven in the future, unlike purely theoretical claims such as "2 + 2 = 4", which remain only conditionally true: true if the underlying axioms hold. When it comes to real-world facts, if ten people saw the dog eat the homework, this evidence establishes beyond a reasonable doubt that the claim is true, barring extraordinary circumstances such as mass hallucination or drug influence. Conversely, the statement "2 + 2 = 4" is true according to Peano's fifth axiom of arithmetic, the induction principle, but where there are only 2 moons, this axiom does not apply.
This essay explores methods for accurately estimating the certainty of logical claims, providing readers with practical insight into how such estimates are made and applied.
Introduction
The term "theory-induced blindness" was coined by Nobel Prize-winning psychologist Daniel Kahneman in his influential 2011 book Thinking, Fast and Slow. In this essay, we refer to this behavioral phenomenon as assumption-induced blindness (AIB) to more precisely characterize the underlying bias. In his exploration of theory-induced blindness, Kahneman critiques Daniel Bernoulli's expected utility theory, which posits that people base their decisions on the expected value of outcomes weighted by their respective probabilities. Kahneman argues that Bernoulli's assumption, while theoretically sound, is provably false in reality: human decisions are often biased and influenced by factors beyond rational calculations of expected utility, making Bernoulli's axiom fundamentally inaccurate.
As Kahneman points out, conclusions drawn from this false axiom do not apply to reality, as evidenced by observed behaviors consistently diverging from the theory's predictions. Bernoulli’s utility theory not only fails to describe human behavior accurately but has also been falsified through multiple reproducible experiments. Quoting Kahneman directly: “The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes.” This highlights that the root cause of blindness lies in the implicit assumptions underlying an axiom, leading to a flawed understanding of human action.
Kahneman further elucidates: “The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it.”
This illustrates that such blindness itself stems from the assumption that, as Kahneman puts it, "if you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing." We demonstrate that no such good explanation exists; the only possible explanation for why a theory is false in reality is the presence of a false axiom. Assuming that some other explanation exists is precisely what causes AIB: a false assumption about why a theory lies about reality. This assumption is flawed from a mathematical standpoint and universally false without exception.
Certainty of Logical Claims in Applied Mathematics and AIB
In applied mathematics, certainty regarding a statement or logical claim, such as the assertion that 2+2=4, hinges on the likelihood of this claim ever facing falsification under unforeseen conditions. It's crucial to recognize that absolute certainty is only attainable for conditional claims. We can never be fully certain of unconditional claims about reality, posited to universally hold true, because the future, from our human perspective, remains partially unknowable. Therefore, we cannot be sure which currently accepted truths might eventually prove to be false or under what circumstances.
For instance, consider the Euclidean axiom stating, "the shortest distance between two points is a straight line." This axiom does not hold universally; practical applications like GPS navigation, which employ Riemannian geometry and account for effects such as time dilation, demonstrate that the shortest path between two points is not always a straight line. Hence, in reality, there are no universally applicable claims. The only certainty we achieve applies to conditional claims, such as "the Pyramids of Egypt still stood for years after World War II" or "the Moon orbits the Earth, and both are round." Such claims are not only independently verifiable within a specific timeframe but also conditioned on historical events, ensuring their irrefutable truth.
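The point about straight lines can be made concrete with a toy sketch (purely illustrative, and assuming a perfectly spherical Earth of radius 6371 km rather than the real geoid): the shortest surface path between two points on a sphere is a great-circle arc, while the Euclidean "straight line" is a chord that would tunnel through the planet, so the Euclidean axiom simply does not apply to travel on the Earth's surface.

```python
import math

R = 6371.0  # assumed mean Earth radius in km (spherical-Earth simplification)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Shortest *surface* path on a sphere, via the haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def chord_km(lat1, lon1, lat2, lon2):
    """The Euclidean straight line between the two points: a chord
    cutting through the Earth's interior, not a usable route."""
    def xyz(lat, lon):
        phi, lam = math.radians(lat), math.radians(lon)
        return (R * math.cos(phi) * math.cos(lam),
                R * math.cos(phi) * math.sin(lam),
                R * math.sin(phi))
    return math.dist(xyz(lat1, lon1), xyz(lat2, lon2))

# A quarter of the equator: the straight line is shorter, but unreachable.
print(great_circle_km(0, 0, 0, 90))  # ~10007.5 km along the surface
print(chord_km(0, 0, 0, 90))         # ~9010.0 km through the planet
```

The Euclidean straight line is indeed shorter, which is exactly why it is the wrong model: no surface route can follow it, so GPS-style navigation must use the geometry of the curved surface instead.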
In theory and practice, certainty resides in conditional claims whose accuracy can be established within specific contexts and conditions, rather than in unconditional claims purported to hold universally across all circumstances. The reason is clear: we cannot foresee all potential scenarios under which an axiom could be proven false. For example, the Axiom of Separation in ZF set theory seems self-evidently and universally true but fails to hold true in reality when set elements involve entangled elementary particles like photons, as evidenced by the experimental falsification of Bell's Inequality.
Consider another example: the claim that "because IBM closed at $54.95 last Friday, we could have bought 100 shares of IBM at $55 per share on that day" can never be false, as it hinges on the independently verifiable fact that IBM closed at $54.95, which itself cannot be false. The certainty of this claim is absolute: it cannot be that IBM did not close at $54.95 that Friday, because we know its closing prices on the surrounding days as well, ensuring the accuracy of the $54.95 figure, barring extraordinary circumstances like mass hallucination, which are implausible in practical discussions.
However, unlike the claim about IBM, the assertion that "2 + 2 = 4" is not universally true but conditional on the operands of the "+" operator, and the result after the "=" sign, all being natural numbers as defined by Peano's axioms, the fifth of which, the induction principle, posits an infinitude of natural numbers. Thus, 2 apples + 2 apples = 4 apples, and 2 moons of Jupiter + 2 moons of Jupiter = 4 moons of Jupiter, but 2 moons of Mars + 2 moons of Mars = 2 moons of Mars, because Mars has only 2 moons, no matter how many times they are counted.
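The distinction can be sketched in a few lines of Python (a purely illustrative model): arithmetic addition presupposes that the objects counted are distinct, whereas recounting the same two moons behaves like a set union, which never produces new elements.

```python
# Mars's two moons; the names are real, the model is illustrative.
mars_moons = {"Phobos", "Deimos"}

# Peano arithmetic: 2 + 2 = 4, which presupposes four distinct objects.
naive_total = len(mars_moons) + len(mars_moons)

# Counting the same moons twice is a set union, and the union of a set
# with itself is just the set: no new moons appear.
actual_total = len(mars_moons | mars_moons)

print(naive_total)   # 4
print(actual_total)  # 2
```

The arithmetic answer is correct only under its own axioms; once the operands refer to the same two physical objects, the set-union model, not addition, describes reality.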
In applied mathematics, theories like the theory of evolution in biology, ZF set theory, and Euclidean geometry consist of two fundamental components: axioms and theorems. These elements establish a logical framework in which theorems are universally true on the condition that the axioms are true. If an axiom is false, the entire theory collapses, as the theorems derived from it no longer hold. In this sense, the Pythagorean theorem is conditional upon the Euclidean axioms and fails in non-Euclidean (Riemannian) geometry, which describes our reality more accurately than the Euclidean assumption that the shortest distance between two points is always a straight line.
Mathematics demonstrates logical equivalence between axioms and theorems, making the entire theory a tautology guaranteed to hold true, contingent upon the truth of its axioms. The nature of mathematical proof ensures independent verifiability and accuracy. For instance, the widespread understanding of concepts like the Pythagorean theorem suggests that most readers have verified it independently during their education.
Indeed, theorems are absolutely certain to hold true, conditional on the truth of the axioms and barring errors in deductive logic. Because the accuracy of such proofs is independently verifiable, our certainty in them is absolute. We can be absolutely certain that Andrew Wiles really did prove Fermat's Last Theorem (announced in 1993, with the corrected proof published in 1995), because his proof has gone through the peer review process and has been verified for accuracy independently. Such independent verification guarantees with absolute certainty that Fermat's Last Theorem is indeed true. If any theorem does not hold true in reality, given that empirical facts cannot be false, we can assert with absolute certainty that one of the axioms must be false. Before seeking any other explanation for why a theory contradicts reality, it is essential to identify and correct the false axiom.
Assumption Induced Blindness (AIB) – characterized by Daniel Kahneman as the tendency to search for a non-existent "perfectly good explanation that you are somehow missing" – arises from erroneous axioms whose guaranteed existence is ignored. The key here is that any theory, inherently a tautology, can only conflict with empirical evidence if one of its axioms is flawed.
Comparing assumption-induced blindness to failing to notice a traitor within a group draws parallels to narratives like the old Soviet song "Murka" or the Godfather films. Just as a RICO organization is disrupted when its co-conspirators are caught by law enforcement, signaling the presence of a traitor (Murka in the song, Tessio in the film), the failure of a theory to align with reality signals a flaw. Until that flaw is found and the traitor, or the flawed axiom, is eliminated, neither the gang nor the theory can function effectively.
You see, dear reader, while incarceration in a US or Mexican prison is singularly unpleasant, it is nothing but a trifle, a mere walk in the park, compared to Soviet prison under Stalin. If you were unfortunate enough to wind up mining gold in Bodaybo after getting arrested by the NKVD, you had a six-month life expectancy before freezing to death or getting poisoned. In the end, Tessio gets whacked and so does Murka, no matter how lovely in bed, by getting a feather (knife) in the heart.
Source: CIA reading room document on Soviet prison conditions.
Mathematically and formally redefining AIB reveals its core issue: it stems from a deceitful axiom, a false assumption, a “rat”. AIB emerges from the false belief in a phantom "perfectly good explanation." In reality, the only viable explanation for why a logically derived theory contradicts empirical evidence is a compromised axiom, a traitor. Until this root issue is resolved, any further pursuit of the flawed theory influenced by AIB is futile.
Selecting Theories with the Lowest Likelihood of Containing False Axioms
The root cause of Assumption-Induced Blindness (AIB) is the misapplication of the principle known as Occam's Razor. The following account is taken from Wikipedia; all claims in it are fully referenced and therefore independently verifiable for accuracy.
"The origins of what has come to be known as Occam's Razor are traceable to the works of earlier philosophers such as John Duns Scotus (1265–1308), Robert Grosseteste (1175–1253), Maimonides (Moses ben-Maimon, 1138–1204), and even Aristotle (384–322 BC). Aristotle writes in his Posterior Analytics, 'We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses.' Ptolemy (c. AD 90 – c. 168) stated, 'We consider it a good principle to explain the phenomena by the simplest hypothesis possible.'"
Having explained the concept of AIB, we can now see why Aristotle thought that, all other things being equal (meaning, in the case of two competing theories, that both are equally able to explain all the relevant facts), we should pick the theory that posits the smallest number of axioms. Given that any one axiom can always turn out to be false, the fewer the axioms, the lower the likelihood that the theory will ever be falsified in the future.
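The argument can be sketched numerically. Assuming, purely for illustration, that each axiom carries some small independent probability p of turning out to be false, the probability that the theory is ever falsified grows monotonically with the number of axioms:

```python
def falsification_risk(n_axioms: int, p_false: float) -> float:
    """Probability that at least one axiom is false, assuming each of the
    n axioms is false independently with probability p_false (an assumed,
    illustrative model, not an empirical estimate)."""
    return 1.0 - (1.0 - p_false) ** n_axioms

# With an assumed 5% chance per axiom, fewer axioms means lower risk:
for n in (1, 3, 10):
    print(n, round(falsification_risk(n, 0.05), 4))
```

Under these assumptions a one-axiom theory carries a 5% lifetime risk of falsification, a three-axiom theory roughly 14%, and a ten-axiom theory roughly 40%, which is the quantitative sense in which Aristotle's ceteris paribus preference for fewer postulates minimizes risk.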
We note in passing that this argues for more complex theories rooted in a smaller number of axioms, not simpler theories. The complexity referred to in Occam’s Razor pertains to the number of axioms, not the length of the deductive logic used to prove the resulting theorems. Indeed, it becomes apparent that the complexity of the logical proof used to reach the theorems is directly proportional to the accuracy of the theory, as defined by its likelihood of ever turning out to be false. The better, more accurate the theory, the more complex it will be, as it is rooted in a smaller axiom set.
However, as we go about proving a theorem in mathematics, we observe that only a subset of the axioms is actually necessary to prove a particular logical claim. This means that, given two theories A and B, where A requires only a strict subset of the axioms of B, you can be certain that theory B is ‘ill-formed’ or ‘suboptimal’ relative to A, because it achieves less certainty while having the exact same use value (or marginal utility) to the modeler. In other words, as consumers of a theory, we want the minimum risk of the theory ever being falsified while maintaining its use value. This aligns with the concept of portfolio optimization in finance, where we exclude all suboptimal portfolios from consideration: those not on the efficient frontier.
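This exclusion rule reduces to a one-line dominance test over axiom sets (the axiom labels below are hypothetical placeholders): theory A dominates theory B whenever A proves the same theorems from a strict subset of B's axioms.

```python
def dominates(axioms_a: frozenset, axioms_b: frozenset) -> bool:
    """A dominates B if A needs only a strict subset of B's axioms:
    the same explanatory power, with fewer ways to be falsified."""
    return axioms_a < axioms_b  # strict-subset comparison on sets

# Hypothetical theories: B posits an extra axiom A3 that A does not need.
theory_a = frozenset({"A1", "A2"})
theory_b = frozenset({"A1", "A2", "A3"})

print(dominates(theory_a, theory_b))  # True: B is suboptimal relative to A
print(dominates(theory_b, theory_a))  # False
```

Like portfolios off the efficient frontier, any theory dominated under this test can be excluded from consideration without any loss of use value.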
So, how do we achieve “theory optimization” in applied mathematics? By excluding those theories that are inefficient, defined by using ‘too many’ axioms or, equivalently, a wrong, suboptimal axiom set. It is the underlying axioms that fully and entirely define a theory: the same set of theorems is always pre-defined by (meaning derived from, resulting from, or logically following from) some set of underlying axioms. Given the axioms, the resulting theorem set is always the same, waiting to be logically deduced by independently verifiable formal proof.
Therefore, as we can clearly see, the ‘mathematical efficiency’ of a theory is intrinsically linked to its foundational axioms. Theories (theorems) derived from a minimal and optimal set of axioms are more robust and less likely to be falsified, ensuring the theory’s longevity and reliability. Thus, the goal of theory optimization is to identify and utilize the most efficient axiom set possible, minimizing the risk of future falsification while maintaining the theory’s explanatory power and utility.
If a theory lies about reality, there is but one explanation possible: one of the axioms is false. Of this, we can be absolutely certain. But how do we pick the theory that is not only useful—having the same use value, as measured by its ability to explain the relevant facts—but also has the lowest risk of ever being falsified? It is that theory which relies on the least number of assumption-dependent axioms, posited as true unconditionally, as exemplified by the Axiom of Separation in ZF set theory.
Selecting a School of Thought
While there are many schools of economic thought, the mainstream view, adopted by the vast majority of real-world economists and used by major central banks, including the US Fed, to set interest rates, is mathematical economics. This approach is rooted in the Nobel Prize-winning work of Ken Arrow and Gerard Debreu: the Arrow-Debreu general equilibrium framework. There is a very good reason why this specific approach is almost universally preferred and chosen over less popular competing schools of thought: it is provably more useful (has higher use value to the modeler) than competing theories from other schools of economic thought, and here is why.
Jerry: I don't understand. Do you have my reservation?
Rental Car Agent: We have your reservation, we just ran out of cars.
Jerry: But the reservation keeps the car here. That's why you have the reservation.
Rental Car Agent: I think I know why we have reservations.
Jerry: I don't think you do. You see, you know how to *take* the reservation, you just don't know how to *hold* the reservation. And that's really the most important part of the reservation: the holding. Anybody can just take them.
Similarly, economists from all schools of economic thought know how to make true claims; they just don’t know how to make claims that are also useful, in addition to being true. Except, of course, for mathematical economics, where logical claims are guaranteed to be relatively more useful than logical claims made by any competing school of economic thought.
Certainty → use value under principle of exclusion
In the real world, within the objective reality that we all observe and influence independently, determining what is ultimately true and what is merely a wrong guess involves a process of verification and consensus. To distinguish statements guaranteed to be true from assumptions likely to be false, we rely on the principle of independent verification.
Consider the historical misconception that the Earth was flat. This idea was prevalent for a long time but was eventually debunked by scientific observations and evidence. Today, we assert that the Earth is round based on verifiable evidence. But how do we know that the Earth is truly round in objective reality?
The key lies in the ability to independently verify this claim. Objective truth is determined by whether an assertion can be confirmed through consistent and repeatable observations and experiments, regardless of individual beliefs or biases. For instance, the roundness of the Earth is verifiable through multiple means:
Astronomical Observations: Ancient Greek astronomers, such as Aristotle, observed the curvature of the Earth's shadow on the moon during a lunar eclipse, providing early evidence of its round shape.
Circumnavigation: Explorers like Ferdinand Magellan provided practical proof by circumnavigating the globe, demonstrating that one could travel around the Earth and return to the starting point.
Modern Technology: Today, satellite imagery offers clear visual evidence of the Earth's roundness. Individuals can observe these images and confirm the shape independently.
Personal Observation: People can travel to different parts of the world, witnessing firsthand the curvature of the horizon from high altitudes or observing the gradual disappearance of ships over the horizon.
The assertion that the Earth is round is thus an accurate description of reality, independently verifiable by anyone willing to investigate. Even if a subset, or even the entirety, of the human population were to believe otherwise, the objective truth remains unchanged. This is because objective reality is not contingent on individual beliefs but on the consistent, independent verification of claims through observation and evidence.
In summary, the process of distinguishing objective truth from mere conjecture involves the ability of many individuals to independently verify the accuracy of an assertion. This principle of independent verification ensures that our understanding of reality is based on consistent and repeatable evidence, rather than subjective beliefs.
The next question we naturally ask ourselves is: which logical claims (equivalently, statements or assertions) are certain to hold true, not only in theory but, far more importantly, in reality? In theory, 2 + 2 = 4; but in reality, 2 moons of Mars + 2 moons of Mars still comes to just 2 moons, for while 2 + 2 = 4 of anything in theory, in reality Mars has only 2 moons, no matter how many times they are counted. The reason 2 + 2 is not always 4 in reality is that the truth of claims proven by formal logical deduction in mathematics is not universal but conditional on the truth of the underlying axioms. In this particular case, the axiom violated in reality is Peano's fifth axiom, the induction principle, which assumes infinitely many natural (meaning countable) numbers, an assumption clearly violated when counting only two moons.
This discrepancy highlights the distinction between theoretical certainty and practical application. Mathematical truths are derived from axioms and logical deductions, which are internally consistent within their defined systems. However, their applicability to real-world scenarios depends on whether the assumptions underlying these axioms hold true in the specific context. When the assumptions do not align with reality, the theoretical conclusions may not apply.
In conclusion, to determine the certainty of logical claims in reality, we must scrutinize the assumptions and axioms underlying these claims and ensure their alignment with observable and verifiable facts. Objective truth in the real world is thus established through a combination of rigorous logical deduction and empirical verification, ensuring that our assertions accurately reflect the nature of reality.
The phenomenon of failing to notice a false underlying axiom that contradicts reality is a well-known and well-researched cognitive bias; the theory-induced blindness described by Kahneman in 2011 is only one of its manifestations.