Pascal’s Wager and Theory-Induced Blindness
By Joseph Mark Haykov, with Nathan and Phillip as Interns
October 10, 2024
Abstract
Pascal's Wager asserts that believing in God is more advantageous than disbelief. Blaise Pascal, a devout Christian, formulated his argument around belief in the Biblical God, referred to as Yahweh in Judaism and Allah in Islam, acknowledging that this conception of God is shared across these Abrahamic religions. According to Pascal, belief in this one true God leads to eternal rewards in heaven, while disbelief results in eternal punishment in hell. This presupposition about the nature of God is a foundational aspect of Pascal’s argument, distinguishing it from belief in other deities. Despite its philosophical significance, Pascal's Wager has often been marginalized or dismissed, a phenomenon that can be related to cognitive biases discussed by Daniel Kahneman, such as confirmation bias and belief perseverance, resulting in “theory-induced blindness.” This paper explores how deeply held assumptions and biases can obscure the rational evaluation of Pascal’s argument, preventing it from being understood or assessed objectively.
Formal Systems
That cognitive biases such as confirmation bias and anchoring bias exist, and that they adversely affect our decision-making, is an empirically established finding, one that is independently verifiable for accuracy. In that sense the assertion cannot turn out to be false, much like the fact that the pyramids exist in Egypt or that the North Pole is cold. To understand how these biases work, it is essential to first recognize that "rationality," in the context of human behavior and decision-making, assumes the ability to reason logically: starting with a set of assumptions and deriving conclusions from them. This process of logical deduction operates through what are known in mathematics as formal systems, which are used to prove all theorems in mathematics. Formal systems are foundational tools that derive conclusions from a set of underlying axiomatic assumptions by following the formal rules of deductive logic that guide all logical reasoning. Mathematicians rely on formal systems to prove theorems; Kurt Gödel's first incompleteness theorem of 1931 famously probed the limits of what such systems can prove.
To delve a bit deeper into this concept, in abstract mathematics, a formal system consists of three essential components: a formal language, a set of axioms, and a set of inference rules. The axioms and definitions form the first set (A), while the second set (B) includes all corollaries, lemmas, and theorems that logically follow from the axioms in set A through the application of inference rules, as is perfectly illustrated by algebra. Each set of axioms (A), together with the chosen inference rules, uniquely defines a corresponding set of theorems (B). This means that once set A is fixed, all the resulting logical claims contained in set B are determined by the formal rules of inference. However, this does not imply a one-to-one correspondence between individual axioms and individual theorems. Rather, it is the entire set of axioms and rules that collectively determine which theorems are derivable within the system.
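To make the relationship between set A and set B concrete, here is a minimal, purely illustrative Python sketch of our own (the strings, the two rules, and the length cap are invented for the example): the "theorems" are computed as the closure of the "axioms" under the inference rules, so fixing A and the rules fixes B.

```python
# Toy formal system (hypothetical, for intuition only).
# Set A: axioms (strings); rules: functions deriving new strings from known ones.
# Set B: everything derivable from A by repeatedly applying the rules.

def rule_double(s: str) -> set[str]:
    """From x, derive xx (a made-up inference rule)."""
    return {s + s}

def rule_append_u(s: str) -> set[str]:
    """From any string ending in 'I', derive the string with 'U' appended."""
    return {s + "U"} if s.endswith("I") else set()

def theorems(axioms: set[str], rules, max_len: int = 8) -> set[str]:
    """Closure of the axioms under the rules, truncated so the toy stays finite."""
    derived = set(axioms)
    frontier = set(axioms)
    while frontier:
        new = set()
        for s in frontier:
            for rule in rules:
                new |= {t for t in rule(s) if len(t) <= max_len}
        frontier = new - derived
        derived |= new
    return derived

A = {"MI"}                                      # the axiom set A
B = theorems(A, [rule_double, rule_append_u])   # the theorem set B
print(sorted(B))   # B is fully determined by A together with the chosen rules
```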
Assuming the axioms in set A are true within the context of the formal system, all the claims in set B are proven to hold true within that system, provided there are no errors in the proofs. What guarantees that all claims in set B hold true universally, given the truth of the axioms, is that mathematical proofs can be—and are—independently verified for accuracy. Logical deduction is fundamental to rational thought, as evidenced by the fact that countless individuals have proven the Pythagorean Theorem for themselves, often in middle school. In any formal system, all theorems are already embedded in the axioms, awaiting proof. However, Gödel's incompleteness theorems show significant limitations for formal systems in general, particularly in systems based on Peano's axioms. Gödel's two incompleteness theorems demonstrate that within any sufficiently complex formal system—such as arithmetic—there are true statements that cannot be proven within the system itself, including those concerning the system’s own consistency. This limitation is paralleled by Alan Turing's halting problem, which shows the limits of algorithmic computation, and Heisenberg's uncertainty principle, which highlights fundamental limits on what can be known about physical systems.
While these principles establish broad theoretical limits on knowledge and provability, their practical impact on applied formal systems—such as those used in physics, chemistry, and biology—is minimal. Just as debugged software reliably computes your taxes, theorems derived from formal systems like algebra or geometry remain valid as long as the system’s axioms hold true. In mathematics, statements like 2 + 2 = 4 are guaranteed to be true as long as the axioms are consistent.
All mathematics, without exception, functions as a formal system in which theorems, like Fermat's Last Theorem, are embedded in the axioms, waiting to be logically deduced—a process that can take centuries, as demonstrated by Andrew Wiles' proof in 1995. The absolute reliability of formal proofs through logical deduction in mathematics is ensured by independent verification. For example, the Pythagorean Theorem is universally accepted as true because it has been independently proven by countless mathematically literate individuals. This is why we can be certain that the Pythagorean Theorem holds true universally—both in theory and in practice—so long as the underlying axioms, such as the Euclidean assumption that the shortest distance between two points is a straight line, remain valid. However, if this assumption does not hold, the Pythagorean Theorem no longer applies.
In reality, the shortest distance between two points is not always a straight line. For example, GPS systems rely on the curved space-time of Einstein's general relativity, described mathematically by Riemannian geometry, to calculate positions accurately. In Riemannian geometry, the shortest path between two points is a geodesic, which is generally not a straight line in the Euclidean sense, and the Pythagorean Theorem therefore does not apply exactly in such contexts. Relativistic effects such as time dilation, evidenced by the differing clock rates of GPS satellites compared to clocks on Earth, further underscore this departure from Euclidean geometry.
Dual Consistency in Applied Formal Systems
What we aim to convey here is that any system, formal or informal, can misrepresent reality in two ways: through a Type I error (a false positive, accepting a false conclusion) or a Type II error (a false negative, rejecting a true conclusion). However, the theorems of a formal system are proven to hold universally, conditional on the truth of the axioms; this conditional guarantee is a cornerstone of logical consistency. Consequently, within any scientific or applied formal system that accurately models aspects of reality, and whose axioms do not contradict known facts, neither type of error is possible.
Not being able to prove a claim, due to limitations within a system, is distinct from asserting a false claim is true or a true claim is false. These two cases—Type I and Type II errors—are the only possible errors in any proof. Gödel’s incompleteness theorems show that certain true claims cannot be proven within some systems. For instance, according to Gödel’s first incompleteness theorem, the Riemann Hypothesis, while potentially true in reality, may be unprovable under Peano’s axioms. However, as long as the system is consistent, it cannot produce false claims about reality—no Type I or Type II errors can arise in a fully consistent formal system. Thus, a "dually consistent" formal system can never "lie" about reality, though it may necessarily exclude some true claims from its set of theorems.
Formally, dual consistency in any “applied formal system” requires that the system’s axioms not only avoid internal contradictions but also align with real-world phenomena—an additional condition necessary specifically for systems modeling the real world. Consider the arithmetic statement 2 + 2 = 4. This is universally true within the formal system of arithmetic based on Peano’s axioms. The addition operation presupposes that both the numbers involved and the result are natural numbers, unconstrained by physical limitations.
However, when applying this arithmetic to real-world contexts, we must ensure that the physical situation corresponds with the assumptions embedded in Peano’s axioms. For example, if we say, "2 moons of Mars + 2 moons of Mars," it would be incorrect to conclude that Mars has "4 moons." Mars has only two moons, Phobos and Deimos. The issue here is not with the arithmetic itself—since 2 + 2 = 4 holds true within the abstract formal system—but with the mismatch between the real-world scenario and the assumptions of the system. Peano’s second axiom assumes that for every number n, there exists a successor n′, but in this case, Mars only has two moons, and there is no successor to the second moon. This example demonstrates that while formal arithmetic is valid, its application to this scenario contradicts reality because the assumptions embedded in the formal system do not align with the constraints of the physical world.
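A minimal Python sketch of the mismatch, using nothing beyond the example above: abstract addition counts tokens, whereas the physical question concerns the union of one and the same pair of moons, which is exactly where the model and the world part ways.

```python
# Abstract arithmetic: 2 + 2 = 4 holds unconditionally under Peano's axioms.
abstract_total = 2 + 2
print(abstract_total)  # 4

# Physical scenario: "2 moons of Mars + 2 moons of Mars" refers to the same two objects,
# so the correct model is a union of identical sets, not addition of disjoint counts.
moons_of_mars = {"Phobos", "Deimos"}
physical_total = len(moons_of_mars | moons_of_mars)
print(physical_total)  # 2 -- the formal system was applied to a scenario its axioms do not model
```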
This highlights the critical importance of accurate modeling by ensuring dual consistency in applied mathematics, especially given the limitations of formal systems when applied to real-world scenarios. All axioms must accurately reflect empirical facts; otherwise, any corollaries, lemmas, or theorems will hold true only in theory but fail in practice, rendering such systems—akin to intellectual exercises like solving crossword puzzles or playing solitaire—enjoyable to explore but lacking practical value.
Theory-Induced Blindness
Theory-induced blindness is a cognitive bias—a form of irrational behavior—described by Daniel Kahneman in his 2011 book, Thinking, Fast and Slow. Rather than summarizing the concept, let’s refer directly to Kahneman’s own words:
"The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."
Given that, in the real world, all scientific theories—bar none—are applied formal systems—structured sets of assertions logically deduced from underlying axioms or hypotheses—theory-induced blindness does not stem from the theory itself but rather from a false implicit assumption embedded within an axiom—an initial hypothesis accepted as true without empirical verification. We refer to such false implicit assumptions as dogma. Every so-called "blindness-inducing theory" is logically derived from a dogma-dependent axiom using correct deductive logic.
While it might appear that the blindness stems from the long-term use of a flawed theory, the true origin lies in the false axiom that underpins these logically sound conclusions. The confusion arises from mistaking axioms—accepted without empirical evidence because they seem "self-evidently" true—for facts, which are independently verifiable and cannot be false. If an implicit assumption embedded in the axioms turns out to be false, the axioms must be corrected, as facts are immutable, but axioms are not—they are simply assumptions.
Kahneman further elaborates on this idea in his discussion of Daniel Bernoulli’s flawed theory of how individuals perceive risk:
"The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes."
This quote reinforces Kahneman’s view—and ours—that theory-induced blindness is caused by a tacit, false assumption embedded in an axiom, leading to flawed conclusions. The blindness results from a failure to recognize that every long-standing scientific theory is logically derived from a set of axioms. As long as the deductive process is sound, a theory can only contradict reality if one or more of its axioms is false.
Just like mathematical theorems, logical deductions can be independently verified for correctness. Therefore, a theory can only fail to describe reality if one of its foundational axioms is incorrect. Until the false axiom—such as Bernoulli's erroneous assumption about risk—is corrected, the flawed, blindness-inducing theory will persist and fail to accurately represent reality.
This concept can be aptly illustrated by a real-world metaphor. It mirrors the narrative in the famous Russian song "Murka," based on real-life events. In the story, the arrest of too many gang members reveals the presence of a traitor—Murka—within the gang. The gang cannot operate effectively until the traitor is identified and eliminated. Similarly, in any formal system, a flawed theory cannot function properly until the false axiom—much like the traitor in the gang—is discovered and corrected.
"If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing."
What this quote means is that when an observation does not fit the model, it indicates that one of our axioms or definitions is wrong. Until the flawed, dogmatic underlying axiom is questioned and corrected, theory-induced blindness will continue to obscure our understanding of reality.
There Is No Good Explanation Other Than Dogma
The false belief that "there must be a perfectly good explanation you're somehow missing" lies at the core of all theory-induced blindness. In reality, no such explanation exists other than the only possible one: an axiom in your theory is flawed, causing the entire theory to fail. These false assumptions, which we refer to as dogma, are educated guesses—mere hypotheses that could always turn out to be wrong. Yet, owing to prolonged use, they become accepted as facts through misguided belief. Until a false axiom is identified and corrected, reliance on a flawed theory inevitably leads to disaster.
Consider the movie The Godfather, written by Mario Puzo, who had an intimate understanding of the mafia's real-world workings through extensive research. In the film, the character Tessio justifiably and inevitably gets "whacked" at the end, with the famous line, "It's nothing personal, just business," delivered as Tessio begs for mercy. In the mafia world, betrayal cannot be forgiven; failing to eliminate the traitor is fatal to everyone else in the gang. In such cases, either the traitor or the rest of the gang must die, or at best, spend their lives in prison or hiding—both cannot continue to coexist—hence, "nothing personal, just business." This is why even a lovable traitor, like the character Murka, must face the same fate.
Similarly, no theory built on a false premise can ever function effectively. A false axiom—like a traitor in a gang—will inevitably undermine the entire structure, leading to a flawed, inaccurate theory that will fail in reality until the false axiom is identified and replaced.
Theory-induced blindness is a cognitive bias where individuals irrationally cling to flawed theories, engaging in wishful thinking by believing in a phantom, non-existent "good missing explanation" for why their theory doesn’t align with reality. It is akin to refusing to accept that a trusted gang member—perhaps someone you are emotionally attached to—is a rat, a traitor whose actions could jeopardize the entire group. Since no other explanation exists for why a flawed theory fails except that one of its axioms is flawed, theory-induced blindness manifests as an irrational and illogical refusal to acknowledge this flaw. Indeed, the only valid explanation for a theory's failure is that at least one underlying axiom—the "traitor"—is effectively a “liar.”
Failing to address a flawed axiom allows intellectual laziness to take hold. Theory-induced blindness, in this sense, is a form of intellectual laziness, where the brain subconsciously avoids the "slow, expensive System 2 work" (as Kahneman describes it) required to identify and correct the flawed axiom and derive the correct theory. The mind deceives us by suggesting, "Don’t worry about the false hypothesis; there’s a perfectly good explanation for it," when in fact, no such explanation exists. Thus, theory-induced blindness ultimately stems from intellectual laziness.
We propose renaming this cognitive bias as dogma-induced blindness (DIB). While the blindness is induced by the repeated use of a false theory, its root cause is the initial reliance on a dogmatic, assumption-dependent axiom. From this flawed "traitor" axiom, the blindness-inducing theory is correctly logically deduced. People inevitably confuse the guaranteed certainty of error-free logical deduction with the error-prone nature of axioms—hypotheses accepted as "self-evidently true" but which can turn out to be false.
Consider the axiom of separation in Zermelo–Fraenkel (ZF) set theory, which, loosely stated, guarantees that any set containing two elements can be split into two distinct subsets, each containing one of the original elements. While this seems self-evidently true in our shared, objective "macro-level" reality, the assumption does not hold in scenarios involving entangled photons: "entangled" means inseparable. Bell's Inequality, whose derivation is valid within ZF set theory, fails to hold in reality precisely because its proof relies on the axiom of separation, which implicitly assumes separability, and separability breaks down in this case. This was confirmed experimentally, as recognized by the 2022 Nobel Prize in Physics, awarded for experiments with entangled photons demonstrating the violation of Bell inequalities.
Why Disbelieving is Such Hard Work
Disbelieving in false hypotheses is notoriously challenging—a point emphasized by Daniel Kahneman and other psychologists. The difficulty stems from one of the fundamental principles of logical deduction: the principle of non-contradiction. This principle, central to all logical systems, dictates that a statement and its negation cannot both be true simultaneously.
Formal systems, in which theorems are logically deduced from axioms assumed to be true, have been integral to mathematical reasoning since ancient times. Mathematicians like Euclid formalized these proofs using methods of deduction that remain fundamental to mathematics today. The law of non-contradiction, employed by Euclid, ensures internal consistency within any mathematical proof—whether in algebra, geometry, or other disciplines—by requiring that no proposition can be both true and false simultaneously. This principle prevents logical contradictions and maintains coherence within the system.
A classic example of how the law of non-contradiction functions is the method of proof by contradiction. In this technique, an assumption is shown to lead to a contradiction, thereby proving the original statement true. Euclid famously used proof by contradiction to demonstrate that there are infinitely many prime numbers. He began by assuming the opposite—that there are only finitely many primes—and then showed that this assumption leads to a logical contradiction. By disproving the finite assumption, Euclid confirmed that the set of prime numbers must be infinite. This powerful method relies directly on the law of non-contradiction to derive valid results from false assumptions, and it is a cornerstone of mathematical reasoning across all formal systems, including algebra and geometry.
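As a concrete companion to Euclid's argument, here is a short, self-contained Python sketch of our own: given any finite list of primes, the product of the list plus one must have a prime factor missing from the list, which is exactly the contradiction Euclid exploits.

```python
from math import prod

def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def euclid_witness(primes: list[int]) -> int:
    """Given a finite list of primes, exhibit a prime not in the list."""
    candidate = prod(primes) + 1          # divisible by none of the listed primes
    return smallest_prime_factor(candidate)

finite_list = [2, 3, 5, 7, 11, 13]
new_prime = euclid_witness(finite_list)
print(new_prime, new_prime not in finite_list)  # 59, True -- the finite list was incomplete
```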
The principle of non-contradiction is crucial for maintaining logical consistency within any formal system. It ensures that any claims contradicting the axioms or theorems derived from them are recognized as false within the system. This principle forms the foundation of proof in every branch of mathematics. For instance, dividing by zero in algebra leads to contradictions—which are mathematically equivalent to fallacies—because doing so renders the system inconsistent, allowing absurd conclusions such as proving that 2 equals 3. Therefore, in any proper formal system, the principle of non-contradiction must never be violated, as it is the foundation of all logical reasoning.
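For concreteness, here is the standard worked fallacy (a textbook example, not original to this paper) showing how a single hidden division by zero lets one "prove" that 2 equals 3:

```latex
\begin{align*}
\text{Let } a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(dividing both sides by } a - b = 0\text{)} \\
2b &= b \\
3b &= 2b \qquad \text{(adding } b \text{ to both sides)} \\
3 &= 2
\end{align*}
```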
This principle is not just fundamental in formal mathematics but in all forms of rational thought. Assertions that contradict established axioms or facts are often automatically rejected, even at a subconscious level, because such contradictions are inherently recognized as invalid. Rigorous adherence to the principle of non-contradiction means that any proposition conflicting with an established axiom is automatically dismissed. This rejection is not merely procedural—it is a logical necessity to maintain the coherence and consistency of any formal system.
However, this very principle that upholds the integrity of logical systems also makes it exceedingly difficult to disbelieve false hypotheses. Once a hypothesis is accepted as an axiom, the mind becomes resistant to recognizing evidence that contradicts it. The principle of non-contradiction, while essential for logical deduction, fosters a kind of cognitive inertia, making it difficult to let go of established beliefs—even when they are false.
This is why disbelieving is such hard work—a challenge that can be understood as a logically supported difficulty rooted in fundamental principles of reasoning. Disbelieving a false hypothesis requires not only identifying contradictions—a task that, by itself, is straightforward—but also the mental effort to override the deeply ingrained principle of non-contradiction that governs our reasoning processes. To reject a false hypothesis, one must be willing to restructure the entire logical framework built upon it, which is a complex and intellectually demanding task. Our brains, prone to cognitive shortcuts, often resist this effort, leading us to falsely believe that everything is fine and to avoid the hard work of rethinking our assumptions.
DIBIL: Understanding Dogma-Induced Blindness Impeding Literacy
DIBIL—Dogma-Induced Blindness Impeding Literacy—refers to a condition where individuals become functionally illiterate due to being misinformed rather than uninformed. This form of functional illiteracy arises from implicit, unexamined false assumptions embedded within a formal system’s axioms, which we refer to as dogma. These dogmas are subjectively held beliefs, often based on hearsay or cultural conditioning, absorbed without critical scrutiny. When flawed assumptions form the foundation of a formal system, the logically deduced conclusions may be technically correct in reasoning but ultimately erroneous, given the false premises underpinning the system.
At its core, DIBIL highlights how cognitive biases develop from a dogmatic adherence to the principle of non-contradiction. While this principle is essential for maintaining logical consistency in reasoning, it inherently overlooks the possibility that the underlying axioms—mere educated guesses or initial hypotheses—could be false. In contrast, evidence that contradicts a false theory is rooted in independently verifiable real-world facts, which cannot be dismissed as easily.
What we are emphasizing is that DIBIL is a cognitive bias resulting from the confusion between axioms (assumed foundational statements) and objective facts. The danger lies in treating these unexamined beliefs as indisputable truths, leading to faulty conclusions in situations where empirical accuracy is crucial. Let us break down the term "DIBIL":
DIB: Dogma-Induced Blindness
This component of the acronym refers to the cognitive and perceptual narrowing that occurs when individuals are unwilling or unable to see beyond the confines of their ingrained beliefs, which are often based on hearsay rather than evidence. These beliefs, unless rigorously evidence-based, can easily turn out to be false, unlike facts, which are independently verifiable and cannot turn out to be false. Dogma-induced blindness fosters resistance to new ideas and stifles openness to alternative viewpoints. It creates an intellectual echo chamber where contradictory evidence is ignored, rationalized away, or dismissed because hearsay is conflated with fact. This narrowing of perspective not only impedes intellectual growth but also perpetuates ignorance, as individuals remain trapped within the boundaries of their dogma, unable to recognize or engage with alternative perspectives that challenge their preconceptions.
IL: Impeded Literacy, or Functional Illiteracy
"Impeded Literacy" refers to a paradoxical consequence of dogma-induced blindness: a form of functional illiteracy where individuals believe they are informed but, in fact, hold onto falsehoods. Those affected by DIBIL may possess basic reading and writing skills, but they lack the critical literacy required to engage deeply with texts, distinguish fact from hearsay, and synthesize complex information. This impeded literacy undermines their ability to navigate the complexities of modern information landscapes, leaving them vulnerable to misinformation, manipulation, and the sway of simplistic, dogma-driven narratives. As a result, their capacity for independent thought and reasoned judgment is diminished, rendering them literate in form but effectively illiterate in function when it comes to understanding and engaging with the world around them.
Real-World Manifestation: The Dunning-Kruger Effect
DIBIL serves as a stark warning against the dangers of intellectual stagnation, arising from the rejection of more accurate theories due to theory-induced blindness. This blindness, in any formal system, is inevitably and invariably dogma-induced. The consequences of DIBIL often mirror those seen in the Dunning-Kruger effect, where individuals, blinded by dogma, overestimate their abilities and knowledge—relying on false, oversimplified assumptions—even when experts recognize these assumptions as incorrect. The Dunning-Kruger effect describes a cognitive bias wherein individuals with limited knowledge or competence in a domain overestimate their own ability, while those with greater expertise may underestimate theirs. Similarly, DIBIL fosters a false sense of understanding, where individuals are unaware of their own ignorance due to unchallenged dogmatic beliefs.
Dogmatic adherence to beliefs not based on evidence impairs not only personal intellectual growth but also has broader societal implications, perpetuating ignorance and hindering our collective ability to address complex issues with the nuance and insight they require. For instance, in public discourse, DIBIL can lead to the widespread acceptance of pseudoscientific claims, resistance to scientific consensus, and the propagation of misinformation, all of which undermine informed decision-making and societal progress.
Conclusion: Addressing DIBIL for Societal Progress
Addressing DIBIL is crucial for promoting intellectual growth and societal progress. It emphasizes the need for vigilance against the comfort of dogma and highlights the importance of continuous learning and intellectual openness. By remaining receptive to new ideas and critically evaluating our own assumptions, we can guard against the cognitive pitfalls that DIBIL represents. This process is vital for building a more enlightened and capable society. Through education, media literacy, lifelong learning, and open dialogue, we can cultivate a culture that values intellectual resilience and adaptability. Such a culture will pave the way for a future where diverse ideas and informed perspectives can thrive. Indeed, disbelieving is hard work—but it is essential for growth.
Pascal’s Wager: A Deeper Analysis Beyond Dogma
Pascal’s Wager is often fundamentally misunderstood by individuals suffering from Dogma-Induced Blindness Impeding Literacy (DIBIL). These individuals, whether they are labeled "useful idiots" or intellectuals, depending on the perspective, are functionally illiterate in critical thinking. Their understanding is shaped and constrained by dogma—whether they are misinformed or uninformed matters little in practice. Such individuals tend to reduce Pascal's Wager to a simplistic dichotomy: a choice between only two possible outcomes—either God exists, or God does not exist. This limited view reflects mathematical illiteracy, as DIBIL sufferers often ignore the complexity of potential outcomes and fail to consider other possibilities, such as the existence of multiple gods. This oversight introduces the risk of a Type II error—rejecting a true conclusion by precluding the possibility of alternative realities.
In reality, the analysis of Pascal's Wager must extend beyond a binary choice. We must introduce the variable N, the number of gods, which is not restricted to 0 or 1. Under Peano's axioms, whose fifth axiom is the principle of induction, N could be any natural number, with no finite upper bound. This perspective aligns with polytheistic traditions, such as those found in Greek mythology, the Bhagavad Gita, and various other religious systems across cultures.
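To make the enlarged decision space explicit, here is a small and avowedly hypothetical Python sketch: the payoff numbers, the value of N, and the uniform prior are placeholder assumptions of ours, not Pascal's, and serve only to show that once N candidate deities are admitted, the wager is no longer a two-cell table.

```python
# Hypothetical decision matrix for a generalized wager with N candidate deities.
# All payoff numbers and the uniform prior are illustrative assumptions only.
N = 3                                   # number of candidate deities considered
states = ["no god"] + [f"god {k+1} is real" for k in range(N)]
strategies = ["disbelieve"] + [f"worship god {k+1}" for k in range(N)]

REWARD, PUNISHMENT, NEUTRAL = 1_000.0, -1_000.0, 0.0   # finite stand-ins for the stakes

def payoff(strategy: str, state: str) -> float:
    if state == "no god":
        return NEUTRAL
    if strategy == f"worship {state.split(' is')[0]}":
        return REWARD                   # worshipped the deity that turned out to be real
    return PUNISHMENT                   # disbelieved, or worshipped a "false god"

prior = 1.0 / len(states)               # uniform prior over all states (an assumption)
for s in strategies:
    ev = sum(prior * payoff(s, st) for st in states)
    print(f"{s:>15}: expected payoff = {ev:+.1f}")
```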
This broader understanding implies that belief in one specific deity, such as Yahweh or Allah, inherently excludes belief in and worship of all other competing deities, labeled "false gods." This exclusion is vividly depicted in the "Golden Calf" episode in the Torah, where the worship of other gods is expressly forbidden. Disbelief in false gods becomes a complex intellectual task—highlighting Daniel Kahneman’s work on cognitive biases, which we explore in this paper. The first commandment, "You shall have no other gods before me," implicitly acknowledges the existence of other gods, albeit false ones, and enforces the exclusivity of Yahweh/Allah as the path to salvation.
Once we address the false assumptions influenced by DIBIL, we can examine Pascal's Wager from a more nuanced perspective. Under the assumption that "God" exists, we might consider Roger Penrose's hypotheses regarding universal consciousness and quantum effects, concepts that bear similarities to ancient Hermeticism. Hermeticism posits that God is the "All," within whose mind the universe exists—an omnipotent entity shaping reality. This concept resonates with the core beliefs of Egyptian religion, which significantly influenced the Abrahamic religions central to Pascal’s Wager, as seen in Judaism, Christianity, and Islam. The notion of God as "the All" can be analogized to the quantum field in modern physics, where everything is entangled—leading to what Einstein famously described as "spooky action at a distance."
"Spooky action at a distance" refers to quantum entanglement, a phenomenon that Einstein found troubling because it suggested that God might indeed "play dice" with the universe—something Einstein himself rejected. Unlike Einstein, whose background was academic, our experience is considerably more practical, having spent 30 years trading mathematical arbitrage on Wall Street using applied formal systems to make money, focusing on tangible results. On Wall Street, we don’t "throw darts at the board; we bet only on sure things”. This means we don’t rely on assumptions that could prove false; we focus solely on independently verifiable facts. If facts suggest that God is, in some sense, "playing dice" with the universe, we believe the evidence and seek to understand how it works and where the opportunity lies. Our pursuit of understanding God's design must, by logical necessity, be rewarded.
Einstein's groundbreaking equation, E=mc², provides a powerful framework to understand this concept. It can be interpreted within the economic framework of Pareto efficiency—a concept from mathematical economics. Pareto efficiency describes a state in which resources are optimally allocated, maximizing productivity and welfare in "perfect trade" conditions. These conditions mirror the moral and ethical equilibrium proposed in religious texts, such as the Torah, where adherence to the Ten Commandments would theoretically result in a "perfect" and harmonious society. According to the First Welfare Theorem in the Arrow-Debreu model of mathematical economics, a Pareto-efficient equilibrium, where both welfare and productivity are maximized, is guaranteed in a perfectly competitive market—just as moral adherence could lead to an ideal social equilibrium.
Unfettered and Symmetrically Informed Exchange
It is an evidence-based claim, independently verifiable for accuracy—meaning this assertion cannot turn out to be false—that any parasitic infestation, such as locusts in a field, termites, carpenter ants, or vermin like rats consuming grain stored in a warehouse, directly reduces economic efficiency. In economic terms, the consumption of goods and services by "economic parasites" arises from involuntary exchanges, such as robbery, theft, extortion, and kidnapping. These criminal activities are punishable by law because any "unearned extraction of wealth" by such parasites—whether thieves, robbers, or kidnappers—inevitably reduces economic efficiency.
A stark real-world example of this inefficiency is the comparison between Haiti and the neighboring Dominican Republic. In Haiti, widespread lawlessness has resulted in a GDP per capita roughly one-tenth that of the Dominican Republic. This significant inefficiency, manifesting as a roughly tenfold reduction in average consumption, arises from a violation of the principle of unfettered trade, a necessary condition for achieving Pareto efficiency as outlined in the Arrow-Debreu framework, a cornerstone of mathematical economics.
According to the First Welfare Theorem of mathematical economics, real-world inefficiencies are inevitable when two key conditions are violated: 1) unfettered (fully voluntary) exchange and 2) symmetrically informed exchange. George Akerlof’s seminal 1970 paper, The Market for Lemons, vividly demonstrated how asymmetric information leads to market inefficiencies. For instance, a fraudulent used car dealer—referred to as an "economic parasite" in Marxist terms—might sell a defective car, known as a “lemon,” to a less-informed buyer. In such cases, the market fails to operate efficiently because one party lacks the information necessary to make an informed decision. To achieve efficiency, trade must be both voluntary and symmetrically informed, ensuring all parties have equal access to relevant information.
A violation of market efficiency can also be seen in the existence of arbitrage in the foreign exchange (Forex) market. Arbitrage allows individuals to profit merely by exploiting price differences between currencies at different banks—often with just the press of a button—without contributing to the production of goods and services consumed with that wealth. This represents unearned wealth extraction through asymmetric information, as the trader benefits from knowledge of currency price discrepancies that others lack.
While many econometric and financial models are notoriously inaccurate—evidenced by frequent mispredictions from institutions like the Federal Reserve—derivative pricing models, such as those used to calculate futures prices for the S&P 500 Index, are far more precise. The reason for this precision is that arbitrage opportunities, just like finding $100 on the street, are exceedingly rare in efficient markets like the NYSE and CME. When arbitrage opportunities do arise, they are quickly eliminated, underscoring their role as indicators of inefficiency in less competitive markets.
Arbitrage allows individuals to consume goods and services produced by others without contributing to their production—just as finding $100 on the street allows one to purchase goods without producing anything in return. This is the very definition of economic rents, a well-known form of market failure. Public choice theory explains how "economic parasites," referred to as “successful rent-seekers” in this context, capitalize on asymmetries in information to extract value from the economy without contributing corresponding value in productivity. Such rent-seeking behavior inevitably undermines overall economic efficiency.
No Arbitrage Constraint on Exchange Rates
We begin by analyzing the foreign exchange (Forex) market, where approximately 30 of the most actively traded currencies are exchanged. These exchange rates can be mathematically represented by an exchange rate matrix, denoted as E. In this matrix, the value in row i and column j represents the exchange rate from currency i to currency j. This matrix provides a structured model for understanding how exchange rates—whether between currencies or between goods and services—are organized to prevent arbitrage, which by definition is a market inefficiency.
Arbitrage is impossible when a uniform price is maintained for an asset across different markets. Specifically, in the Forex market, the exchange rate from currency A to currency B must be the reciprocal of the exchange rate from currency B to currency A. For example, if 1 USD buys 0.50 GBP, then 1 GBP should buy 2 USD. This reciprocal relationship is critical for eliminating arbitrage opportunities that could arise from discrepancies in exchange rates.
Let the matrix E represent the exchange rates among the approximately 30 major liquid currencies traded in the Forex market. The no-arbitrage condition can be defined through a constraint on the individual elements e_ij of E, which states that:

e_ij = 1 / e_ji, for all i and j (equivalently, e_ij · e_ji = 1)

This relationship ensures that exchange rates are consistent and arbitrage opportunities are avoided.
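As a sketch of how this pairwise condition can be checked numerically (the three currencies and the rates below are our own illustrative choices, not market data), the following Python snippet tests whether a candidate matrix E satisfies e_ij · e_ji = 1 throughout.

```python
import numpy as np

def is_arbitrage_free_pairwise(E: np.ndarray, tol: float = 1e-9) -> bool:
    """Check the element-wise no-arbitrage condition e_ij = 1 / e_ji,
    i.e. that E equals the Hadamard inverse of its transpose."""
    return np.allclose(E * E.T, np.ones_like(E), atol=tol)

# Hypothetical three-currency example (USD, GBP, EUR) with illustrative rates.
usd_gbp, usd_eur = 0.50, 0.80
E = np.array([
    [1.0,           usd_gbp,          usd_eur         ],
    [1.0 / usd_gbp, 1.0,              usd_eur / usd_gbp],
    [1.0 / usd_eur, usd_gbp / usd_eur, 1.0            ],
])
print(is_arbitrage_free_pairwise(E))   # True: every e_ij * e_ji equals 1
```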
We use the notation ET to refer to the Hadamard (element-wise) inverse of the transpose of E, that is:

(ET)_ij = 1 / e_ji

The Hadamard inverse and the transpose are commutative operations, meaning that the transpose of the Hadamard inverse is the same as the Hadamard inverse of the transpose. Specifically, taking element-wise reciprocals of E and then transposing yields the same matrix as transposing E and then taking element-wise reciprocals.
The no-arbitrage constraint, E = ET, ensures the absence of arbitrage by enforcing symmetry and reciprocity in exchange rates. This constraint is analogous to a matrix being involutory, that is, equal to its own inverse. However, we refer to matrices that satisfy the condition of being the Hadamard inverse of their own transpose as evolutory, rather than involutory. An evolutory matrix satisfies the constraint e_ij = 1 / e_ji, which reflects the reciprocal nature of exchange rates.
This distinction is important because, while an involutory matrix A is its own inverse, so that A·A = A·A⁻¹ = I (the identity matrix), the relationship for an evolutory matrix E is different. Specifically, under the constraint we have:

E·ET = E² = n·E = (ET·ET)T
However, for a matrix that does not satisfy this constraint, the products E·ET and ET·E do not reduce to n·E. Instead, they yield two distinct matrices whose form depends on the specific structure of E.
As we can see, when multiplied by its reciprocal transpose, the evolutory matrix does not produce the identity matrix, but rather a scalar multiple of E, scaled by the row count n, effectively becoming E². This occurs because, under the constraint E = ET, the matrix E exhibits certain structural properties. Specifically, E has a single nonzero eigenvalue, equal to its trace, which is n.
This is due to the fact that the exchange rate of a currency with itself is always 1, meaning that the diagonal entries of E are all equal to 1. Thus, the trace of E—which is the sum of the diagonal elements—is n, the number of currencies. This structure implies that E is not an identity matrix but is instead scalar-like, in the sense that its eigenvalues are tied to its trace.
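These structural claims are easy to verify numerically. The sketch below (ours, with a randomly generated rate vector) builds E from a vector v via e_ij = v_i / v_j, so the evolutory constraint holds by construction, and then checks that E·E = n·E, that the diagonal of ones gives trace n, and that n is the only nonzero eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.uniform(0.5, 2.0, size=n)       # hypothetical "values" of n currencies
E = np.outer(v, 1.0 / v)                # e_ij = v_i / v_j, so e_ij = 1 / e_ji

print(np.allclose(E * E.T, 1.0))        # reciprocity: the evolutory constraint holds
print(np.allclose(E @ E, n * E))        # E squared equals n times E
print(np.isclose(np.trace(E), n))       # unit diagonal, so the trace is n
eigenvalues = np.linalg.eigvals(E)
print(np.sort_complex(eigenvalues))     # one eigenvalue equal to n, the rest numerically zero
```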
Simplification of E Through Evolutory Constraints
Imposing the E = ET constraint simplifies the matrix E, leaving it with a single nonzero eigenvalue, n, and reducing it to a vector-like structure. This occurs because any row or column of E determines the entire matrix, significantly reducing the dimensionality of the information required to quote exchange rates. For example, the matrix E can be expressed as the outer product of its first column and first row, with each row being the element-wise reciprocal of the corresponding column. Consequently, all rows and columns of E are proportional to one another, each a scalar multiple of any other. This property renders E a rank-1 matrix, meaning all its information can be captured by a single vector.
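The rank-1, vector-like property can likewise be illustrated in a few lines (the four "currency values" below are made up): the entire matrix is recovered as the outer product of its first column and first row, so a single row or column suffices to quote all n² rates.

```python
import numpy as np

v = np.array([1.0, 0.5, 0.8, 120.0])           # hypothetical per-unit values of 4 currencies
E = np.outer(v, 1.0 / v)                       # full 4x4 exchange rate matrix, e_ij = v_i / v_j

first_col, first_row = E[:, [0]], E[[0], :]
reconstructed = first_col @ first_row          # outer product of first column and first row
print(np.allclose(reconstructed, E))           # True: one row (or column) determines all of E
print(np.linalg.matrix_rank(E))                # 1: the matrix carries only n independent numbers
```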
Higher Powers and Roots of E
An intriguing property of the constrained matrix E=ET is its behavior when raised to higher powers. In theory, an unconstrained matrix raised to the fourth power would have four distinct roots. However, due to the E=ET constraint, E has only two fourth roots: E and ET. This can be expressed as: E⁴=E·E·E·E=(ET·ET·ET·ET)T=n²·E
This suggests a deep connection between the structure of ET and the physics of symmetry. In this framework, the relationship E⁴=n²·E=m·c² suggests a potential analogy to Einstein’s famous equation E=mc², where mass could be viewed as the fourth root of energy. However, due to the E=ET constraint, mass exists as a strictly constrained subset of energy states, governed by symmetry, much like the constraints imposed by quantum entanglement.
Here, mass can be viewed as “compressed energy”. While E theoretically has four roots, in reality, only two roots exist due to the E=ET evolutory constraint imposed on E by quantum entanglement. Under this constraint, mass is equivalent to energy but exists as a strictly constrained subset of all possible energy states, limited by the E=ET condition.
Although this connection remains conjectural, it aligns with the principles of supersymmetry in theoretical physics and echoes the ancient axiom, "as above, so below." This idea also resonates with the geometry of the Egyptian pyramids and with the notion that "42" is the "answer to the ultimate question of life, the universe, and everything," as humorously proposed in The Hitchhiker's Guide to the Galaxy. While this reference is not directly tied to quantum physics, it touches on the probabilistic nature of existence.
Implications and Speculations
At this point, we must acknowledge that our expertise in theoretical physics is limited to interactions with physicist colleagues during our tenure managing the stat-arb book at RBC on Wall Street. Therefore, please treat our comments about physics with considerable skepticism, particularly the ideas about quantum set theory outlined below, which are purely speculative. These speculations may nonetheless be of use to physicists outside financial markets, as opposed to those already making real money at hedge funds such as Renaissance, founded by the late Jim Simons.
In a matrix that simplifies to a vector-like structure, the entire matrix can be described by any of its rows or columns. This reduction has profound implications:
Instead of requiring all elements of a matrix (which in a full matrix would be n² values), only the elements of a single row or column vector are needed, drastically reducing the dimensionality of the information required.
This vector represents a form of data compression, where instead of storing or processing multiple independent pieces of information, one vector informs the entire structure. This simplification could improve the efficiency of computations and analyses involving E.
Extending This Idea to a Formal System Axiomatic Framework
Extending this idea to a formal system, particularly in the context of a set theory better suited to quantum mechanics, leads to intriguing possibilities. In quantum mechanics, states can be superposed and entangled. A matrix that simplifies to a vector-like structure might suggest a system where states are not independently variable but are intrinsically linked—analogous to quantum entanglement at a mathematical level.
A new set theory that models such matrices could consider sets where elements are fundamentally interconnected. Traditional set theory deals with distinct, separate elements, but this new theory could focus on sets where elements are vector-like projections of one another.
Such a theory could be useful in fields like quantum computing or quantum information, where understanding entangled states in a compressed, simplified form could lead to more efficient algorithms and a better understanding of quantum systems. By utilizing a matrix that reduces to a vector-like structure as a basic element, we could potentially model systems where traditional notions of independence between elements are replaced by a more interconnected, entangled state representation. This could open new avenues in both theoretical and applied physics, especially in handling complex systems where interdependencies are crucial.
Just as Euclidean geometry is inappropriate for modeling curved space-time, perhaps it’s high time someone developed a better set theory than Zermelo-Fraenkel set theory, one better suited for modeling quantum entanglement—possibly with the Axiom of Separation either removed or modified to correctly model entanglement.
For more insights and to explore how theoretical physicists can be compensated for their work on quantum set theory using a one-true money system backed by patents, visit us at tnt.money. Just type "tnt.money" into your browser and hit Enter.
The Role of Linear Algebra in Market Efficiency
As mathematical economists, we recognize that linear algebra captures the essential idea that, in an arbitrage-free market, the reciprocal relationships between exchange rates—whether of different currencies or all goods and services—must be consistent. The concept of an arbitrage-free exchange rate matrix E, where E=ET (indicating that it is equal to its reciprocal transpose), imposes constraints on exchange rates, thereby eliminating opportunities for arbitrage. By simply transposing and reciprocating the exchange rate matrix, we ensure that prices across the market align in a way that precludes arbitrage.
In this framework, prices are represented as exchange rates of all goods and services relative to a single specific row or column in the full exchange rate matrix E, chosen as the unit of account. This approach supports the theories of Arrow and Debreu and is consistent with aspects of Marx’s value theory, particularly in terms of regulating exchange relationships. The key role of money in this framework is to regulate markets by preventing arbitrage, functioning as a single unit of account in which the prices of all other goods and services are expressed. This uniform pricing mechanism inherently prevents the existence of multiple prices for the same asset, which would otherwise facilitate arbitrage.
This concept is vividly illustrated by the real-world practice of quoting all currencies in the foreign exchange (FX) market against a single standard currency, currently the U.S. dollar. This practice plays a pivotal role in reducing the scope for arbitrage, nudging the market toward an ideal no-arbitrage condition. By standardizing currency pairs relative to the dollar, there is greater predictability and consistency in exchange rates. This systemic approach minimizes the discrepancies and gaps that arbitrageurs typically exploit, leading to a more stable and equitable trading environment.
While the application of linear algebra might seem excessive in financial contexts, it is particularly warranted in this scenario. Viewing the prices of goods and services through an exchange rate matrix underscores money’s role strictly as a unit of account. In the real-world FX market, where all currencies are traded in pairs, cross rates for pairs such as EUR/GBP or EUR/JPY are determined using the U.S. dollar as the unit of account. This method not only emphasizes money’s functional use as a unit of account but also highlights the practical utility of quoting all prices relative to a single standard asset.
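As a small illustration of this convention (the per-dollar quotes below are assumed, round-number values, not market data), cross rates fall out mechanically once every currency is quoted against the dollar as the unit of account.

```python
# Hypothetical units-of-currency-per-USD quotes (illustrative values only).
per_usd = {"USD": 1.0, "EUR": 0.92, "GBP": 0.79, "JPY": 150.0}

def cross_rate(base: str, quote: str) -> float:
    """How many units of `quote` one unit of `base` buys, derived via the USD quotes."""
    return per_usd[quote] / per_usd[base]

print(round(cross_rate("EUR", "GBP"), 4))   # EUR/GBP derived through the dollar
print(round(cross_rate("EUR", "JPY"), 2))   # EUR/JPY derived through the dollar
# Because every cross rate is pinned to the same USD column, e_ij * e_ji == 1 by construction,
# leaving no room for triangular arbitrage among the quoted pairs.
```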
The advantages of adopting this methodological approach are significant. It enhances market efficiency by increasing information symmetry among participants and reducing arbitrage opportunities. By establishing consistent prices for each asset across all markets, this approach fosters a more transparent and stable trading environment. When all prices are expressed relative to a single, universally recognized unit of account, the market naturally converges toward a state of equilibrium where arbitrage opportunities are minimized, if not entirely eliminated.
Moreover, this framework aligns with the fundamental principles of general equilibrium theory, where the existence of a common unit of account ensures that the economy operates efficiently. The unit of account simplifies the comparison of value across diverse goods and services, enabling market participants to make informed decisions based on consistent and comparable data. As a result, the market operates more smoothly, with reduced transaction costs and fewer opportunities for market manipulation.
Implications for Financial Markets and Beyond
The implications of using linear algebra to enforce no-arbitrage conditions extend far beyond currency markets. In financial markets more broadly, the concept of an arbitrage-free matrix can be applied to a wide range of assets, from stocks to commodities. By ensuring price consistency across different markets, linear algebra helps create a more efficient and fair trading environment. This consistency is especially important in global markets, where pricing discrepancies can lead to significant arbitrage opportunities, potentially destabilizing entire economies.
Moreover, this approach is highly relevant in the design of algorithms for automated trading systems. By incorporating linear algebra principles, traders can better predict price movements and identify potential arbitrage opportunities. Embedding the no-arbitrage condition within these algorithms can also make markets more resilient to exploitation, reducing inefficiencies and economic distortions.
In summary, the role of linear algebra in market efficiency is not just theoretical; it has practical implications that enhance the stability and fairness of financial systems. By utilizing an exchange rate matrix where E=ET and by standardizing prices relative to a single unit of account, we can achieve a more efficient and equitable marketplace. This approach not only aligns with foundational economic theories but also provides a robust framework for modern financial markets, ensuring greater transparency and integrity in their operation.
This method not only enhances our understanding of economic performance but also creates a framework for producing more just and equitable markets—both in theory and practice. As we explore these ideas, we are reminded that, in the grand scheme of things, the universe itself may be the ultimate arbiter of efficiency and balance, ensuring that, in the end, all things are made right.
And one last thought: Einstein was clearly wrong. God does indeed play dice with the universe—but loaded dice, loaded in a way that guarantees fairness, ensuring that God always wins in the end and all is set right. Everything is entangled and, therefore, Pareto-efficient and balanced in the long run—ensuring that, over time, everyone gets their due, returning everything they took. In this reality, E always equals ET! Isn’t that what the restated E=m·c² really means: E⁴=ET·n²=m·n²? And if m is mass, what is n, exactly? Is it perhaps time? Time to stop being DEBILITATED by DIBIL?
References
Pascal, B. (1670). Pensées. [Original publication in French].
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Wiles, A. (1995). Modular elliptic curves and Fermat's Last Theorem. Annals of Mathematics, 141(3), 443–551.
Euclid. (circa 300 BCE). Elements.
Einstein, A. (1916). The foundation of the general theory of relativity. Annalen der Physik, 354(7), 769–822.
Akerlof, G. A. (1970). The market for "lemons": Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3), 488–500.
Arrow, K. J., & Debreu, G. (1954). Existence of an equilibrium for a competitive economy. Econometrica, 22(3), 265–290.
Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Физика, 1(3), 195–200.
Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford University Press.
Einstein, A., Podolsky, B., & Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47(10), 777–780.
Einstein, A. (1905). Does the inertia of a body depend upon its energy content? Annalen der Physik, 323(13), 639–641.
Arrow, K. J. (1951). An extension of the basic theorems of classical welfare economics. In Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (pp. 507–532). University of California Press.
Zermelo, E. (1908). Untersuchungen über die Grundlagen der Mengenlehre I. Mathematische Annalen, 65(2), 261–281.
Guillemin, V., & Sternberg, S. (1984). Symplectic Techniques in Physics. Cambridge University Press.
Aspect, A., Clauser, J. F., & Zeilinger, A. (2022). [Awarded for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science]. Nobel Prize in Physics. NobelPrize.org.
Adams, D. (1979). The Hitchhiker's Guide to the Galaxy. Pan Books.
Peano, G. (1889). Arithmetices principia, nova methodo exposita. [Reprinted in: van Heijenoort, J. (Ed.). (1967). From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (pp. 83-97). Harvard University Press].
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–198.