DIBIL in Mathematical Economics
by Joseph Mark Haykov
October 22, 2024
Introduction
Formal systems are foundational frameworks in mathematics and science designed to eliminate errors of logical reasoning. In any formal system, a theorem that has been proven becomes an established fact within that system. Formal systems consist of:
Formal language: Used to express statements precisely.
Axioms: Assumed truths within the system that serve as the starting points for reasoning.
Rules of inference: Define how new statements can be logically derived from axioms and previously established theorems.
These components enable conclusions to be deduced rigorously from initial premises, ensuring that they follow inevitably from the assumptions. In mathematics, a formal system begins with axioms and definitions, from which lemmas, theorems, and corollaries are derived using formal inference rules. This structured approach guarantees that conclusions are consistent with the axioms, making formal systems critical not only in mathematics but also across various scientific disciplines.
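To make this concrete, consider a minimal toy derivation (the propositions and axioms below are invented purely for illustration). Take two axioms, P ("it is raining") and P → Q ("if it is raining, the street is wet"), together with the inference rule modus ponens (from A and A → B, infer B):

1. P (Axiom 1)
2. P → Q (Axiom 2)
3. Q (from steps 1 and 2 by modus ponens)

The conclusion Q is then a theorem of this toy system: anyone who accepts the axioms and the rule can re-derive it and must arrive at the same result.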
In applied formal systems, such as those used to model objective reality in physics, facts are assertions whose truth can be independently verified. Physics, which studies the fundamental laws governing the universe, serves as a robust framework for understanding objective reality—particularly phenomena that are measurable and observable. However, depending on one’s philosophical perspective, other frameworks like metaphysics, logic, or mathematics may also be considered foundational to understanding different aspects of reality.
While quantum mechanics—which includes concepts like wave-particle duality—describes physical phenomena at the microscopic scale, other theories like general relativity are necessary to describe gravitational phenomena on macroscopic scales. A unified theory that fully integrates quantum mechanics with general relativity remains an open challenge in physics. Nevertheless, it remains an indisputable fact that our shared reality involves fundamental particles and fields, as described by physics. This encompasses not only known particles and forces but also phenomena like dark matter and dark energy, which are still subjects of ongoing research. Importantly, while dark matter and dark energy are strongly supported by observational evidence, their exact nature remains an open question. Physics, therefore, aims to study all that exists in our shared objective reality—that is, all that is real.
Everything in our shared objective reality involves fundamental particles and fields, as described by physics. This implies that concepts like existence, objective facts, and independent verifiability are deeply interconnected within the discipline. If we posit, as a self-evident truth, that nothing unreal exists by definition, then all that exists in this shared objective reality of ours is real. This, in turn, means that any logical claims about our shared objective reality must be independently verifiable for accuracy.
This follows from the definition of the term “objective,” which inherently requires independent verifiability. In a shared objective reality where nothing unreal exists, the key distinction between objectively true logical claims—which we refer to as objective facts—and subjective opinions is that objective facts can be independently verified in that reality. This distinction is also the fundamental difference between a hypothesis and a real-world objective fact.
Objective, real-world facts may refer to either empirical observations or mathematical truths. This definition captures two distinct categories of facts:
Empirical Facts: Statements whose truth can be confirmed through observation or experimentation. For example, the fact that the Earth is roughly spherical, not flat, is verifiable by observing satellite images, circumnavigating the globe, or measuring the Earth’s curvature through experiments. Similarly, the existence of the pyramids is an empirical fact, observable by visiting Egypt.
Mathematical Facts: Statements proven within a formal mathematical system, based on its axioms and inference rules. For instance, the correctness of the Pythagorean theorem is a fact in Euclidean geometry because its proof follows logically from Euclidean axioms. Anyone familiar with Euclidean geometry and its deductive rules can verify this fact by following the theorem’s proof.
A common characteristic of both empirical and mathematical facts is their independent verifiability—their truth can be established by any rational individual. However, the methods of verification differ: empirical facts are confirmed through sensory experience or experimentation, while mathematical facts are validated through logical deduction within a formal system.
In formal mathematics, a statement is syntactically true if it can be derived from the axioms via the inference rules. This contrasts with empirical facts, which must be semantically true in the real world, grounded in observed data. In both settings, however, "objective" means the same thing: the statement is independently verifiable by others.
For objective empirical scientific facts, reproducibility of experiments is necessary for their acceptance. Similarly, for mathematical facts, a proof must be rigorously checked and verified by other mathematicians to ensure consistency and correctness within the given formal system.
The distinction between hypotheses and theorems highlights why some mathematical claims, such as the Riemann Hypothesis, remain unresolved and are not yet considered established facts. While a hypothesis may seem likely to be true, its status as a fact is contingent upon being proven within the current axiomatic system. This is because any underlying conjecture or assumption could turn out to be false.
For instance, Euler’s conjecture—a generalization related to Fermat's Last Theorem—was proposed by Leonhard Euler in 1769 but was disproven in 1966 by L. J. Lander and T. R. Parkin, who discovered a counterexample through a computer search. Hypotheses are thus conjectures proposing potential truths within a formal system, awaiting rigorous proof. Once proven, they become theorems and are considered objective facts within that system.
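The Lander–Parkin counterexample can be checked directly. The short Python sketch below verifies the identity they found, 27⁵ + 84⁵ + 110⁵ + 133⁵ = 144⁵, which refutes Euler's conjecture for fifth powers:

# Verify the Lander-Parkin (1966) counterexample to Euler's sum-of-powers conjecture.
terms = [27, 84, 110, 133]
left = sum(n**5 for n in terms)    # 27^5 + 84^5 + 110^5 + 133^5
right = 144**5
print(left, right, left == right)  # both sides equal 61917364224, so the check passes

Until such a counterexample or a rigorous proof is found, a conjecture remains exactly that—a hypothesis.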
This process is exemplified by Fermat’s Last Theorem, which remained a conjecture for centuries until Andrew Wiles provided a proof in 1994. Similarly, the Poincaré Conjecture was proposed by the French mathematician Henri Poincaré in 1904. This famous problem in topology, particularly in the study of 3-manifolds, remained unsolved for nearly a century until it was proven by Grigori Perelman in 2003. In contrast, proven theorems, such as the Pythagorean theorem, cannot be false within the axiomatic structure of Euclidean geometry.
In Euclidean geometry, the Pythagorean theorem is classified as an objective fact because it holds universally under Euclid’s axioms, and its proof can be independently verified by anyone using these axioms and inference rules. This logical consistency allows even students to confirm its truth early in their mathematical education.
However, in non-Euclidean geometries, the Pythagorean theorem does not hold in its standard form. In curved space, the sum of the squares of the two legs of a right triangle no longer equals the square of the hypotenuse, because the curvature of space affects how distances are measured.
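To make the contrast concrete, the standard generalizations can be stated as follows. On a sphere of radius R, a right triangle with legs a and b and hypotenuse c satisfies

cos(c/R) = cos(a/R) · cos(b/R),

while in hyperbolic geometry (with curvature parameter k) the analogous relation is

cosh(c/k) = cosh(a/k) · cosh(b/k).

In the limit of vanishing curvature (R → ∞ or k → ∞), both relations reduce to the familiar a² + b² = c², showing how the Euclidean theorem survives only as a special case.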
This reflects the broader principle that mathematical facts are contingent upon the axioms and definitions of the specific formal system in which they reside. For example, in Riemannian geometry—where space curvature is crucial for understanding phenomena like general relativity—different geometric principles must account for this curvature.
Clocks on GPS satellites, for instance, must account for time dilation due to both their relative velocity (special relativity) and the difference in gravitational potential (general relativity), demonstrating the need for modified geometric principles. These relativistic effects are essential for the precise functioning of the Global Positioning System, highlighting how advanced mathematical frameworks are applied to real-world technologies.
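As a rough, back-of-the-envelope illustration (a minimal sketch; the orbital parameters are rounded and only leading-order corrections are included), the two competing effects on a GPS satellite clock can be estimated in a few lines of Python:

# Approximate daily clock offsets for a GPS satellite (leading-order estimates).
import math

c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6    # Earth's mean radius, m
r_sat = 2.66e7       # approximate GPS orbital radius, m
seconds_per_day = 86400

v_sat = math.sqrt(GM / r_sat)   # circular orbital speed, roughly 3.9 km/s

# Special relativity: the moving clock runs slow by about v^2 / (2 c^2).
sr_shift = -(v_sat**2) / (2 * c**2) * seconds_per_day

# General relativity: the clock higher in the gravitational potential runs fast.
gr_shift = (GM / c**2) * (1 / r_earth - 1 / r_sat) * seconds_per_day

print(f"SR: {sr_shift * 1e6:+.1f} us/day, GR: {gr_shift * 1e6:+.1f} us/day, "
      f"net: {(sr_shift + gr_shift) * 1e6:+.1f} us/day")   # roughly +38 us/day

The satellite clock gains roughly 38 microseconds per day relative to a ground clock, which is the correction GPS must build in to keep positioning accurate.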
The universal principle of logic and rationality—using deductive reasoning to arrive at logically valid conclusions—ensures that any rational individual can derive the same result from the same premises within a formal system. Logically valid conclusions are those that follow inevitably from the system’s axioms and inference rules, ensuring that theorems like the Pythagorean theorem are verifiable by anyone working within the framework of Euclidean geometry.
This logical formal system framework also allowed Isaac Newton to apply mathematics effectively to describe the laws of physics. Newton used deductive reasoning within his formal system of classical mechanics to model physical laws based on empirical observations. For example, Newton’s laws of motion were formulated based on observations but expressed mathematically with the same logical rigor as a formal system, connecting assumptions to conclusions through rigorous deductive logic. This rational structure explains why mathematical formulations of physical laws are universally verifiable within their respective systems, while the laws themselves are subject to empirical validation through experimentation and observation.
Proofs as Facts
In formal systems, proofs are regarded as objective facts because their truth is established through a process of logical deduction from axioms and inference rules, and their correctness is independently verifiable. A key characteristic of mathematical proofs is this independent verifiability: anyone with knowledge of the system’s axioms and rules of inference can follow the logical steps of the proof and arrive at the same conclusion. This verifiability transforms the correctness of proofs—and the truth of resulting corollaries, lemmas, and theorems—into objective facts within the formal system.
Once a proof has been rigorously derived and verified, it is considered correct within the context of the system in which it was produced, such as algebra or arithmetic. However, historically, errors have been found in accepted proofs, and the mathematical community relies on peer review and replication to ensure the correctness of proofs. For instance, within the framework of Peano’s axioms of arithmetic, the statement 2+2=4 is not a hypothesis—it is an objectively proven fact. The logical deduction from Peano's axioms leads inevitably to this conclusion, and anyone familiar with basic arithmetic can verify it, making it a universally accepted truth.
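For illustration, the deduction can be written out explicitly using the recursive definition of addition in Peano arithmetic—a + 0 = a and a + S(b) = S(a + b)—together with the definitions 1 = S(0), 2 = S(1), 3 = S(2), and 4 = S(3):

2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4

Each step applies exactly one definition or one instance of the addition rule, which is why the result is mechanically checkable by anyone.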
Proofs are thus logically reliable within a consistent formal system. Their truth—or more precisely, their correctness—guarantees the truth of the theorems derived from them, entirely dependent on the internal logic and coherence of the system’s axioms and rules of inference. If the formal system is consistent—meaning no contradictions arise from its axioms—then any statement proven within the system is guaranteed to be true in that context. Therefore, the only way a proven statement could be considered unreliable is if the axioms themselves are inconsistent, because in an inconsistent system, any statement can be both proven and disproven due to the principle of explosion (ex contradictione sequitur quodlibet). This is why, in scientific theories that model reality using mathematics, the validity of the theory’s conclusions depends not only on the correctness of the mathematical proofs but also on the empirical validity of the axioms when tested against observed facts.
In applied formal systems, such as those used in physics or chemistry, the axioms are based on empirical observations or definitions that align with real-world data. These empirical axioms are provisional and subject to revision based on new evidence. Any logically consistent scientific theory is, by definition, also an applied formal system and relies on both logical consistency and empirical accuracy. However, it is important to clarify that, unlike purely abstract mathematical systems, applied formal systems are contingent upon the empirical correspondence to reality of their axioms. For example, Newton’s laws of motion can be seen as axioms that describe how physical systems behave. As long as these laws are consistent with empirical evidence, the theorems derived from them—such as equations of motion—are considered both mathematically and physically valid. However, if new evidence shows that the axioms conflict with observed reality, the theory itself must be revised or discarded. In contrast, in purely abstract formal systems, such as those in mathematics, proofs remain valid as long as the system's axioms are internally consistent, irrespective of any external or empirical considerations.
Gödel’s Incompleteness Theorems introduce an important limitation to our understanding of formal systems. These theorems demonstrate that in any sufficiently powerful formal system capable of expressing arithmetic (such as one based on Peano’s axioms), there will always be statements about numbers that are true but cannot be proven within the system’s axioms; these statements are undecidable within the system. While this constrains the completeness of formal systems, it does not undermine the reliability of proven theorems. So long as the formal system is consistent and sound, any theorem derived from the axioms is guaranteed to be true within the system.
However, it is important to note that this guarantee of truth is contingent upon the internal consistency of the system’s axioms. In other words, the “truths” hold only within the confines of the formal structure defined by those axioms. For instance, the Pythagorean theorem is correct within the framework of Euclidean geometry because no contradictions have been found within Euclid's axioms, but this theorem does not hold in the same way in hyperbolic geometry, where the underlying axioms differ.
This interplay between completeness and consistency highlights the boundaries of formal systems. Although we may not be able to prove every true statement within a system, the truths we do prove are indisputable as long as the system remains consistent. This is why established mathematical theorems, such as those in number theory or geometry, have endured over time—any inconsistencies would have been exposed through the rigorous process of independent verification. While Gödel’s theorems indicate that formal systems have intrinsic limitations, they do not diminish the reliability of proven theorems within a consistent framework—a key point often overlooked.
Dual Consistency in Applied Formal Systems
Errors in representing reality can occur in two fundamental ways: a Type I error (a false positive, rejecting a true claim—akin to disbelieving an honest person) or a Type II error (a false negative, failing to reject a false claim—akin to believing a liar). These two categories are commonly understood in statistical hypothesis testing and illustrate potential pitfalls in scientific and mathematical reasoning. In a sound formal system, such errors do not arise if the rules of deduction are properly followed, leading to correct conclusions derived from the axioms and inference rules.
When evaluating any logical claim, whether within a formal system or in real-world scenarios, there are only four possible outcomes:
Correct Decision: Accepting a true claim.
Correct Decision: Rejecting a false claim.
Type I Error: Rejecting a true claim.
Type II Error: Accepting a false claim.
In formal systems, a hypothesis refers to an assertion, statement, or proposition that remains unproven or uncertain. For example, the Riemann Hypothesis—a conjecture about the distribution of prime numbers—is widely believed to be true but has not yet been proven. A hypothesis in formal systems is neither inherently false nor true; it is simply a proposition awaiting proof or disproof based on the system's axioms. This concept mirrors the statistical notion of a hypothesis, where uncertainty persists until sufficient evidence is gathered to either reject or fail to reject any claim. In both formal systems and statistics, a hypothesis represents an uncertain conjecture requiring validation through logical deduction or empirical testing.
However, in formal systems such as algebra, we do not reject the Riemann Hypothesis as false; we merely acknowledge it as unproven, though it is widely believed to be true. This is not equivalent to incorrectly rejecting a true claim, so it does not constitute a Type I error. Likewise, we are not accepting a potentially false claim as true, so there is no risk of a Type II error. In formal systems, hypotheses exist in a provisional state—they are neither accepted nor rejected until proven. Once a theorem is proven, it is universally true within the system, assuming the system is consistent. Thus, neither Type I nor Type II errors, as defined in statistical hypothesis testing, are directly applicable to a formal system with consistent axioms. In a correct formal system, theorems are guaranteed to hold universally, provided the axioms themselves are internally consistent.
Gödel’s Incompleteness Theorems introduce an important caveat to this understanding. These theorems demonstrate that in any sufficiently powerful formal system capable of describing arithmetic (such as one based on Peano’s axioms), there will always be true statements that cannot be proven from the system’s axioms. For example, it is possible that certain propositions are undecidable within the system—they can neither be proven nor disproven using those axioms. However, it remains uncertain whether specific conjectures like the Riemann Hypothesis are independent of Peano Arithmetic or simply remain unproven using our current methods. This situation does not represent a Type I error because we are not rejecting a true claim; rather, we are unable to prove the claim within the system. Therefore, the incompleteness demonstrated by Gödel does not involve traditional errors as understood in hypothesis testing. Instead, it shows that a formal system may contain true statements that are unprovable within its own framework. Such unproven or unprovable propositions are classified as hypotheses in formal systems.
Dual Consistency in any applied formal system requires that the system's axioms avoid both internal contradictions and contradictions with established empirical facts:
Internal Consistency: The system's axioms must not lead to contradictions. This ensures that the system’s logic is sound and that any theorems derived from these axioms are valid within the system.
External Consistency: The system's axioms must not contradict empirical observations. For applied sciences, this means that the axioms must align with real-world data. If an axiom is found to conflict with empirical evidence, it may need to be revised to maintain the theory's relevance to real-world phenomena.
When these two forms of consistency are ensured, the theorems derived from the system’s axioms hold true not only within the abstract formal system but can also be applied successfully in practice. For example, the mathematical models of Newtonian mechanics remain effective in many real-world applications as long as Newton's laws—the axioms of the system—are consistent with the observed behavior of physical systems. However, in regimes such as relativistic or quantum mechanics, where Newtonian axioms no longer apply, the formal system must be revised to maintain external consistency with empirical data.
A well-known example that illustrates the need for dual consistency involves the application of mathematical concepts to physical reality. Peano’s axioms define the natural numbers and include the principle that every natural number n has a unique successor n′, implying an infinite set of natural numbers. While this mathematical concept of infinity is fundamental, physical quantities are inherently finite—we can count only a finite number of objects, such as the two moons of Mars.
When considering the equation 2+2=4, it remains a mathematical truth. However, applying this equation to Mars's moons assumes the availability of four countable moons, which contradicts the physical reality of only two moons. This discrepancy highlights that, under the framework of dual consistency, the mathematical model loses external applicability in this context. Similarly, while Euclidean geometry is internally sound, it does not accurately describe the curved space-time of our universe, where Riemannian geometry serves as a more applicable model.
When counting Mars's moons, an appropriate mathematical model would account for the finite number of moons without altering the fundamental axioms of arithmetic. This underscores the importance of selecting suitable models when applying mathematical concepts to the real world. It highlights that while mathematical truths are universally valid within their formal systems, their application to physical scenarios must account for empirical constraints by ensuring that the models and assumptions are consistent with real-world observations.
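A minimal sketch of this modeling point (the moon names are real, but the double-counting scenario below is invented purely for illustration): the arithmetic is never at fault; the error lies in applying "2 + 2" to collections that are not actually distinct.

# Counting physical objects: adding counts is valid only for disjoint collections.
first_pair = {"Phobos", "Deimos"}        # the two actual moons of Mars
second_pair = {"Phobos", "Deimos"}       # the same two moons, counted again

naive_total = len(first_pair) + len(second_pair)   # 2 + 2 = 4 (correct arithmetic)
actual_total = len(first_pair | second_pair)       # union of the two sets = 2

print(naive_total, actual_total)   # 4 2 -- the model, not the arithmetic, was wrong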
Given axioms that are both internally consistent and externally applicable—what we refer to as dual consistency—all corollaries, lemmas, or theorems derived from them are likely to hold true within both the formal system and, when properly aligned, in relation to the real world. Without such dual consistency, the applicability of theorems to reality may be limited, rendering the mathematics purely theoretical in certain contexts. This distinction creates a clear delineation between applied mathematics and purely theoretical mathematics.
Applied mathematics employs mathematical theories and models to solve practical problems, relying on logical deductions from established axioms and ensuring that these models accurately reflect empirical observations. This is the practical value of applied mathematics, in contrast to purely theoretical mathematics, which explores logical structures without immediate concern for empirical applicability.
In conclusion, proofs in formal systems are objective facts because they result from valid logical deductions from a set of consistent axioms. These proofs, when verified independently, are reliable within the formal system. In applied formal systems, the reliability of these facts extends to the real world as long as the system's axioms are both internally consistent and appropriately aligned with empirical facts. By ensuring dual consistency, formal systems can yield conclusions that are both logically valid and empirically applicable, thereby bridging the abstract and real-world domains.
Universal Causality in Formal Systems: The Foundational Principle of All Mathematics
The effectiveness of logical deduction in modeling reality under dual consistency is grounded in the principle of logical causality, which governs the relationship between premises and conclusions in formal systems. In this context, logical causality refers to the same concept as logical implication or logical inference—the process by which conclusions necessarily follow from premises according to established inference rules. This principle parallels physical causality, as exemplified by Newton’s laws in classical mechanics. For instance, Newton's third law, which states that for every action, there is an equal and opposite reaction, highlights the deterministic role of causality in the physical world.
Similarly, in formal systems, logical causality embodies the idea that if the inference rules—based on the "if cause, then effect" structure inherent in deductive logic—are properly applied, and if the axioms of the formal system are consistent with reality, then the theorems derived from those axioms will also hold true in reality. This is because the inference rules, which govern the logical steps used to derive theorems, are designed to reflect the necessary relationships between premises and conclusions. In other words, the logical structure of formal systems aligns with the universal causality that governs real-world phenomena by ensuring that valid conclusions (effects) logically follow from true premises (causes).
Furthermore, these inference rules ensure internal consistency within the formal system itself. Fundamental principles such as the law of excluded middle and the law of non-contradiction help prevent contradictions within the system. Thus, the applicability of theorems in reality depends on whether the axioms accurately reflect empirical observations. For instance, Euclidean geometry holds true in flat space, but when applied to curved space—as in general relativity—its axioms no longer correspond to the empirical reality of that space. Hence, while logical causality guarantees the internal consistency of a formal system through valid inference, the external validity of the system depends on the truth of its axioms when tested against real-world phenomena.
This deterministic relationship between axioms (causes) and theorems (effects) ensures that conclusions derived within formal systems are logically consistent, and under dual consistency conditions, are also universally applicable in reality. These dual consistency conditions are:
The axioms correspond to empirical reality.
The inference rules, reflecting logical causality, are correctly applied to derive valid conclusions.
This principle is illustrated by a simple example: when the axioms of arithmetic hold true, the statement 2+2=4 is valid both within the formal system and in the real world. Here, the logical causality inherent in the arithmetic operations ensures that the conclusion logically follows from the premises, aligning mathematical truth with empirical observation.
Causality in Physics and Recursion in Formal Systems
In physics, causality governs the relationship between events—where one event (the cause) leads to another (the effect). This principle is fundamental across various domains, including electromagnetism, thermodynamics, and advanced theories like general relativity and quantum mechanics. In none of these domains is causality empirically observed to be violated. Even in general relativity, causality dictates the relationships between spacetime events, preventing faster-than-light communication and ensuring that causes precede effects within the light cone structure of spacetime. Similarly, in quantum mechanics, although individual events are probabilistic, causality is preserved at the statistical level, with overall behavior governed by conservation laws such as those for energy and momentum.
In formal systems, logical causality—as we've defined it to be synonymous with logical inference—serves a similar function. Axioms (causes) lead to theorems (effects) through inference rules grounded in logical deduction, where each step deterministically leads to the next. This mirrors the way physical causality governs the progression of events in the physical world, albeit in a metaphorical sense within the realm of abstract logic. The structured progression of logical inference ensures that conclusions are logically consistent with premises, just as physical causality ensures that effects follow causes in a predictable manner.
The analogy extends to recursion in computation, where one computational step leads deterministically to the next, much like one physical event triggers another. Just as recursive functions in programming define a sequence of actions, recursive logical steps in formal systems define how one truth leads to another. The effectiveness of modeling reality using formal systems arises from this structural correspondence to physical causality. Recursion and logical inference mirror the cause-and-effect relationships inherent in the physical world, suggesting that recursive programming can fully define aspects of our reality.
While Turing machines are a foundational model of what is computable in theory, recursive lambda functions are equally powerful and capable of computing anything that a Turing machine can compute. Programming languages like Scheme—which emphasize recursion—are Turing-complete and provide a perspective on how computation can be structured entirely around recursive processes. Scheme's recursive structure reflects a cause-and-effect approach in computation, illustrating how complex operations can be built from simpler ones through well-defined recursive rules.
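As a small illustration of this recursive, cause-and-effect structure (a sketch written in Python rather than Scheme, since the idea is language-independent), a factorial function defined purely by recursion resolves each step through the step it triggers:

# Recursion as a chain of cause and effect: each call is resolved by the call
# it triggers, until the base case terminates the chain.
def factorial(n: int) -> int:
    if n == 0:                        # base case: the chain stops here
        return 1
    return n * factorial(n - 1)       # each step depends on the preceding one

print(factorial(5))   # 120, via the chain 5 -> 4 -> 3 -> 2 -> 1 -> 0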
By acknowledging these parallels, we can appreciate how concepts of causality and structured progression permeate physical theories, formal systems, and computation in general. Since everything can be modeled using recursion, logical inference, and binary logic, this suggests that reality itself operates fundamentally on principles akin to causality. This understanding underscores the importance of selecting appropriate models and paradigms when exploring complex phenomena, whether in the physical world or within abstract computational frameworks.
Causal Determinism in Logical and Physical Systems
The deterministic nature of processes in both logical and physical systems ensures that outcomes follow predictably from their starting points, given the governing principles. In formal systems, if the axioms are consistent, then the theorems derived from them follow with certainty, provided the inference rules—which systematically guide logical deduction—are applied correctly. This deterministic relationship between axioms and theorems supports the internal consistency of the formal system, ensuring that no contradictions arise from valid deductions.
Similarly, in the physical world, if we know the initial conditions and the laws governing a system, we can predict its future behavior with a high degree of certainty in classical mechanics, or probabilistically in quantum mechanics. Even though individual quantum events are probabilistic, the overall behavior of quantum systems adheres to causal principles, with statistical predictability maintained through conservation laws and the deterministic evolution of the wave function as described by the Schrödinger equation.
In quantum mechanics, causality is preserved in a nuanced form. Despite the inherent randomness of individual quantum events, interactions still comply with fundamental conservation laws, such as those governing energy and momentum. While specific outcomes cannot be predicted with certainty, the statistical distribution of outcomes conforms to precise mathematical formulations. This probabilistic framework does not violate causality but represents it in terms of probabilities rather than deterministic outcomes. Thus, conservation laws ensure that causal relationships are maintained at the statistical level, even when individual events are unpredictable. Unpredictability in quantum mechanics reflects the probabilistic nature of underlying physical processes, not a breach of causality.
In both contexts—logical systems and physical systems—the "if-then" structure plays a crucial role. In formal systems, logical deduction ensures that conclusions (theorems) follow necessarily from premises (axioms) through valid inference rules. In physical systems, cause-effect relationships ensure that effects follow causes in a consistent and predictable manner, governed by physical laws. While the domains are different—abstract reasoning versus empirical phenomena—the structured progression from premises to conclusions or from causes to effects underscores a foundational aspect of determinism in both logic and physics.
Universal Causality and Its Limitations
While the principle of universal causality ensures that every effect has a cause, there are inherent limitations on what can be known and predicted about these causal relationships. These limitations are well-documented in both formal systems and physical reality.
Gödel’s Incompleteness Theorems show that in any sufficiently powerful formal system capable of expressing arithmetic, there are true statements that cannot be proven within the system. This sets a limit on what can be deduced from a set of axioms, introducing fundamental constraints on our ability to derive all truths solely from logical deduction.
In physics, the Heisenberg Uncertainty Principle restricts the precision with which certain pairs of properties—such as position and momentum—can be simultaneously known. This reflects a fundamental limit on measurement and affects our ability to predict exact outcomes, even though the underlying causal processes remain consistent.
Turing’s Halting Problem demonstrates that there are computational problems for which no algorithm can universally determine whether a given program will halt. This introduces yet another form of undecidability, highlighting limitations in computational predictability and our capacity to foresee all computational behaviors.
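The diagonal argument behind this result can be sketched in a few lines of Python (the oracle halts below is hypothetical and exists only for the sake of the argument; the point is precisely that no such function can be written):

# Sketch of Turing's diagonal argument. `halts` is a hypothetical oracle,
# assumed (for contradiction) to decide whether program(argument) halts.
def halts(program, argument) -> bool:
    raise NotImplementedError("assumed for the sake of argument; cannot exist")

def diagonal(program):
    if halts(program, program):   # if the oracle says "halts"...
        while True:               # ...loop forever;
            pass
    return "halted"               # otherwise, halt immediately.

# diagonal(diagonal) contradicts whatever the oracle answers: if
# halts(diagonal, diagonal) were True, the call would loop forever; if it were
# False, the call would halt. Hence no total, correct `halts` can exist.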
These limitations illustrate that while causality—both logical and physical—remains a foundational principle, there are intrinsic constraints on predictability and knowledge. However, these constraints do not undermine the underlying causal structure of the universe. Instead, they highlight the complexity of systems, where specific effects may be difficult or impossible to predict in detail, even though the broader causal relationships are well-understood.
Acknowledging these limitations encourages a deeper exploration of systems, accepting that uncertainty and undecidability are inherent aspects of both mathematics and the physical world. This understanding emphasizes the importance of developing models and theories that can accommodate these intrinsic limitations while still providing valuable insights into the causal relationships that govern reality.
Conclusion: Logical Causality as the Foundation of Reasoning
In both formal systems and physical reality, the principle of causality serves as the backbone of predictability and understanding. In formal systems, logical causality—our term for the logical inference embedded within deduction—ensures that theorems are valid consequences of axioms. Similarly, physical causality ensures that effects are the result of preceding causes in the physical world.
The deep connection between these two forms of causality—logical and physical—lies in their shared progression from cause to effect, explaining why formal systems can model reality precisely when their axioms align with empirical observations.
Thus, the principle of universal causality—applied to both physical and logical systems—provides a robust framework for bridging the abstract and physical realms. By grounding the if-then structure of deductive reasoning in axioms consistent with empirical facts, we ensure that our formal systems remain aligned with the real-world behaviors observed in the universe.
The First, One-Truth Postulate of Mathematics
The concept of causality, which exhibits a recursive nature (where effects can become causes for subsequent events), extends beyond computation into the physical world, functioning similarly to an inductive process in formal logic. Just as induction allows us to derive general principles from specific instances, causality applies universally to all formal systems and is not contradicted by any known formal system. This forms the foundation of the "if-then" logic that governs all deductive reasoning in our shared reality. This is why causality is independently verifiable across both abstract (mathematical) and physical domains. In essence, "if cause, then effect" represents the fundamental structure of both physical reality and formal logical systems, uniting them under the principle of universal causality.
It is as though the inherent causality of the universe has imprinted itself onto human cognition through the process of inductive reasoning (the method of reasoning from specific observations to broader generalizations). This internalization manifests as rational logic, providing a shared basis for universal agreement on the truth of any logically deduced claim—so long as the underlying system remains logically consistent and adheres to the rules of "if-then" logic. In this way, the universal law of causality governs both the abstract realm of formal systems and the tangible workings of the physical world, ensuring a cohesive framework for understanding reality.
If we propose, as a foundational axiom—the first "one-truth" postulate of all mathematics in any formal system—that causality holds universally, we assert that every cause, in every context, results in an effect. In other words, not some actions, not most actions, but all actions—without exception—produce an effect. This aligns with a key principle in science: every event or change has a cause, and by investigating deeply enough, we can uncover it. In the physical world, this principle is exemplified by conservation laws governing quantities such as energy and momentum, which are preserved through causal processes. To date, nothing in observed reality contradicts this fundamental law of causality.
In mathematics and logic, the principle of causality underpins the structure of formal systems: each logical deduction (the effect) follows necessarily from its premises (the cause). The "if-then" structure of deductive reasoning mirrors the relationships inherent in mathematical systems, where conclusions follow inevitably and consistently from the assumptions, provided the system is consistent. This reflects the deterministic nature of logical implication in well-defined formal systems, analogous to the deterministic nature of classical physical processes.
Thus, the universality of formal systems is grounded in consistent logical principles that reflect the causality observed in the physical universe. This deep connection explains why formal systems, when based on axioms consistent with empirical facts, can model reality with such precision and reliability. Both mathematics and physics rely on consistent, predictable relationships between premises and conclusions to develop robust theories that are logically sound and empirically valid.
Limits to Predictability
While the principle of universal causality ensures that every cause has an effect, there are well-known limitations to what is knowable. These limitations are demonstrated by Gödel’s Incompleteness Theorems, the Heisenberg Uncertainty Principle, and Turing’s Halting Problem, as discussed earlier. These insights make one thing clear: even though we may understand the rules that govern systems, the outcomes—the effects of actions—may still be unpredictable or unknowable in certain instances due to inherent limitations such as randomness or complexity in the universe.
However, this unpredictability does not undermine the causal structure of the universe. Instead, it highlights the complexity of systems where specific effects are difficult to predict, even though the broader causal relationships remain well understood. This reflects a fundamental constraint on our ability to foresee the future with absolute certainty. The precise effects of causes may be elusive due to intrinsic randomness or the complexity of interactions in the universe, even when the underlying causal structure is fully grasped.
The unpredictability inherent in quantum mechanics and other complex systems emphasizes the distinction between knowing the rules and being able to predict specific outcomes. This is akin to knowing everything about football but being unable to accurately predict who will win any given game. Even though the system is far from random—for example, the weakest professional club will almost certainly beat a high school team—prediction can still be elusive when the competitors are closely matched.
This concept resonates with broader philosophical and theological ideas, such as the notion of "forbidden knowledge" mentioned in ancient texts like the Torah—a text that has existed for over 2,000 years. In this context, "forbidden knowledge" refers to insights beyond human comprehension, understood only by God, the "creator of the original source code" of the universe. While these philosophical discussions extend beyond the scope of this paper, they offer intriguing parallels to the limits of human understanding in both formal systems and natural laws.
Theory-Induced Blindness: DIBIL in Mathematical Economics
In mathematical economics, a phenomenon known as theory-induced blindness arises when strict adherence to specific models or assumptions prevents the recognition of alternative possibilities or insights outside those frameworks. We refer to this as dogma-induced blindness impeding literacy (DIBIL). DIBIL occurs when false assumptions are conflated with facts, leading to a cognitive blindness that obscures potential truths beyond the established dogma represented by these axioms.
The implications of DIBIL suggest that, although formal systems—whether in mathematics, physics, or economics—are grounded in logical principles, they may still obscure certain aspects of reality that the system’s axioms or structures do not fully capture. This obscurity can arise when the wrong axioms are chosen for a particular task or when assumptions are accepted without sufficient scrutiny.
As demonstrated by Gödel, and reflected in the works of Heisenberg and Turing, there are inherent limitations to knowledge. Gödel’s Incompleteness Theorems show that in any sufficiently powerful formal system, there are true statements that cannot be proven within the system itself. Heisenberg’s Uncertainty Principle reveals fundamental limits to the precision with which certain pairs of physical properties (like position and momentum) can be simultaneously known, highlighting inherent limitations in measurement and predictability. Turing’s Halting Problem establishes that there is no general algorithm capable of determining whether any arbitrary computer program will eventually halt or run indefinitely, underscoring limitations in computational predictability.
These limitations mean that, despite the power of formal systems and the principle of universal causality, our knowledge remains inherently bounded. We can never fully know which axioms are sufficient to model all aspects of reality. Therefore, the risk of dogma-induced blindness exists when we become overly reliant on a single theoretical framework, leading to a narrowed perspective that hinders the discovery of new insights.
However, there is one axiom that we should include in every formal system and that we can always rely on.
The First, One-Truth Postulate: The Universal Principle of Causality
One principle stands above all others in our understanding of the world: the universal principle of causality, which we define as the first, one-truth postulate of all rational inquiry and formal systems. This principle remains consistent with every known logical and empirical truth. We call it the first, one-truth postulate because it is implicitly embedded in all forms of reasoning—whether in deductive logic, common sense, or scientific thought.
This postulate reflects the ancient Roman adage cui bono—"who benefits?"—suggesting that understanding the likely cause of an effect involves considering who stands to gain. While the cui bono principle may serve as a heuristic in specific real-world contexts and does not always hold true, the first, one-truth postulate of causality remains universally valid. In every context—whether in logical reasoning or empirical reality—the principle of causality asserts that every cause, without exception, produces an effect.
If we cannot rely on this fundamental principle, the very foundation of rational thought and logical deduction collapses. Without it, we would regress to pre-scientific modes of thinking, abandoning the structured reasoning that has driven human progress. Denying this principle would not only undermine scientific advancement but also hinder rational inquiry and judgment, both of which are critical for expanding human knowledge. Rejecting causality would impede the evolutionary progress of humanity, leading to real-world consequences. Without this principle, we would lose the ability to make reasoned judgments—a dire outcome.
Thus, the one principle that can never turn out to be false in our shared objective reality—the one we can always rely on, and the one that precludes theory-induced blindness—is the principle of universal causality, the first, one-truth postulate of all rational systems. While it may have been overlooked or forgotten, it remains central to our understanding and must be remembered well.
This postulate is crucial in light of David Hilbert’s program: although Gödel proved that any sufficiently powerful formal system is incomplete, we assert that, as long as the law of causality holds in our shared objective reality, any formal system whose axioms are consistent with real-world facts and that acknowledges the principle of causality remains relevant for modeling reality, because such a system maintains both logical consistency and coherence with empirical evidence. This holds true unless the universal law of causality is violated (an exceedingly unlikely event) or one of the system’s axioms contradicts empirically proven facts.
Pascal’s Wager: A Formal System Approach
To illustrate the practical application of formal systems in decision-making, we turn to Pascal’s Wager. Blaise Pascal (1623–1662) was a French mathematician, philosopher, scientist, and inventor who made significant contributions to probability theory, engineering, and physics. He is best known in mathematics for Pascal’s Triangle, a recursive structure used in combinatorics, and for his pioneering work on probability theory, which laid the foundation for modern decision theory and risk analysis. Beyond mathematics, Pascal developed one of the first mechanical calculators, the Pascaline, and made significant contributions to fluid mechanics and geometry; Pascal’s principle of pressure transmission underlies the modern hydraulic press. Though disputed, he is sometimes credited with early designs related to the roulette wheel, stemming from his experiments with perpetual motion machines.
This paper focuses on Pascal’s famous philosophical argument known as Pascal’s Wager, which combines his mathematical reasoning with his reflections on belief. Pascal’s Wager presents belief in God through a rational, decision-theoretic lens, framing it as a bet with possible outcomes based on whether God exists. The argument can be summarized as follows:
If God exists and you believe in God, you gain infinite happiness (often conceptualized as eternal life in heaven).
If God exists and you do not believe in God, you suffer infinite loss (often conceptualized as eternal suffering in hell).
If God does not exist and you believe in God, you lose very little (a finite cost of time, resources, etc.).
If God does not exist and you do not believe in God, you gain very little (a finite gain, such as saved time or resources).
Pascal’s reasoning is rooted in probability theory and utility theory: even if the probability of God's existence is low, the infinite value of the potential reward (eternal happiness) outweighs the finite cost of belief. From this perspective, belief in God becomes the rational choice since the potential gain vastly exceeds the potential loss, regardless of the odds (Pascal, 1670).
Pascal’s argument can be viewed through the lens of formal systems and decision theory, where the axioms (beliefs and assumptions about the existence of God) lead to theorems (outcomes or utilities) based on logical inference rules. The wager is built on the assumption that if a decision can lead to an infinite reward with finite cost, it maximizes expected utility to believe, even if the probability of God's existence is small. This aligns with formal logic's approach of deriving consistent outcomes from initial premises.
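A minimal decision-theoretic sketch of this payoff structure follows (the probability p and the finite cost and gain values are placeholder assumptions, not figures from Pascal):

# Expected utility under Pascal's payoff matrix, with placeholder numbers.
import math

def expected_utility(p_god_exists, utility_if_exists, utility_if_not):
    return p_god_exists * utility_if_exists + (1 - p_god_exists) * utility_if_not

p = 0.001   # arbitrarily small but strictly positive probability (assumption)

eu_believe = expected_utility(p, math.inf, -1.0)      # infinite reward, small finite cost
eu_disbelieve = expected_utility(p, -math.inf, 1.0)   # infinite loss, small finite gain

print(eu_believe, eu_disbelieve)   # inf -inf: belief dominates for any p > 0

Because any strictly positive probability multiplied by an infinite payoff remains infinite, the ranking of the two choices does not depend on how small p is—which is exactly the structure of Pascal's argument.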
Clarifying the Concept of Belief: Statistical Hypothesis Testing vs. Religious Faith
Since this paper touches on the subject of God and religion, it is essential to clarify that our approach is rooted in mathematical reasoning—specifically in the context of probability theory and hypothesis testing under uncertainty, and nothing more. This methodology has been consistently applied by the author in a professional context, particularly in financial analysis, highlighting the robustness of this approach. Importantly, this discussion is distinct from the traditional understanding of "belief" or "faith" in a religious context.
In any sound formal system, such as statistics, the term "belief" refers to the selection of the hypothesis most likely to be true based on the available evidence. This sharply contrasts with religious faith, where belief often involves acceptance without empirical evidence or the testing of alternatives.
In statistics, we begin with a hypothesis known as H₀, the null hypothesis, which serves as our default assumption. For example, in a study examining the relationship between cigarette smoking and cancer mortality, H₀ might propose that there is no relationship between smoking and cancer. However, if data from a regression analysis reveal a strong correlation between smoking and increased cancer mortality, we may reject H₀ in favor of H₁, the alternative hypothesis, which posits that there is indeed a relationship.
The decision to "believe" in H₁ over H₀—under the definition of "belief" as it is used in statistics—is based on the likelihood that H₁ is more consistent with objective facts, i.e., the evidence present in our shared reality. Essentially, belief in statistics refers to a rational choice to accept the hypothesis with the higher probability of being true, given the data at hand. This process is guided by probabilistic reasoning and empirical testing, always subject to revision as new data emerge.
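A minimal sketch of this decision procedure (the data below are synthetic, generated purely for illustration, and the 0.05 threshold is the conventional significance level, not a claim from any real study):

# H0: no relationship between smoking and mortality risk; H1: there is one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cigarettes_per_day = rng.uniform(0, 40, size=200)
# Synthetic "risk" with a built-in positive relationship plus noise.
mortality_risk = 0.02 * cigarettes_per_day + rng.normal(0, 0.2, size=200)

r, p_value = stats.pearsonr(cigarettes_per_day, mortality_risk)
alpha = 0.05
if p_value < alpha:
    print(f"Reject H0 in favor of H1: r = {r:.2f}, p = {p_value:.3g}")
else:
    print(f"Fail to reject H0: p = {p_value:.3g}")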
This statistical notion of belief—selecting the hypothesis that is more likely to align with reality, even when absolute certainty is unattainable—differs fundamentally from religious belief. In religion, belief often operates on axioms or truths accepted as inviolable, without requiring empirical validation or testing against alternative hypotheses. Religious faith thus hinges on the acceptance of principles that transcend the need for the kind of evidence that drives hypothesis testing in statistics.
Therefore, it is essential to be precise and respectful, acknowledging that belief, especially in the religious context, can be deeply personal and sensitive for many. The goal here is not to challenge religious faith but rather to highlight the distinction between how belief functions in mathematics and how it is understood in religious practice. This is, after all, a paper about formal systems and probabilistic reasoning—not a discourse on theology or faith.
Dually Defined Null Hypothesis
An intriguing aspect of Pascal's Wager, when analyzed rigorously using probability theory, lies in the construction of the null and alternative hypotheses. Pascal posits as an axiom, which we will designate as H₀ (the null hypothesis), that God exists, along with heaven and hell. In applied mathematics and statistics, we typically attempt to disprove H₀ by testing against the alternative hypothesis—H₁, which, in this case, posits that God does not exist.
However, this binary formulation is insufficient. In any correct formal system, particularly in hypothesis testing, failing to consider all relevant alternatives introduces the possibility of what, in statistics, is referred to as a Type II error—failing to reject a false null hypothesis. This represents a lapse in logic and rigor, as it overlooks valid hypotheses that could potentially be true. Such oversights are unacceptable in a proper formal system because they compromise the integrity of the hypothesis-testing process, rendering it fundamentally flawed.
Pascal’s Wager, framed as a bet within the context of a formal system, inherently involves probability—a mathematical discipline that Pascal himself helped to pioneer. As a mathematician, Pascal's intention was to construct a rational decision-making framework. Introducing errors by believing in an axiom that omits alternative hypotheses would contradict the very foundation of his wager. Thus, the wager is not merely a philosophical argument but also a formalized bet based on probabilities. Failing to account for all logical possibilities undermines its mathematical validity.
In the context of Pascal's Wager, we must consider more than just the binary existence or non-existence of a single god. Specifically, the question of how many gods exist must be addressed. According to Peano’s axioms, which describe the properties of natural numbers, we can treat the number of gods, N, as a natural number. Peano’s second axiom states that for any natural number n, there exists a successor n′. This implies that the number of gods could be 0, 1, 2, 3, and so on. Limiting the hypothesis to a single god violates this axiom and introduces logical inconsistency, making the entire system unsound under the inference rules of any valid formal system.
By failing to consider the possibility of multiple gods, we introduce a Type II error into our reasoning—failing to reject a false null hypothesis. This makes any formal system based on such an assumption inherently unsound. To avoid this error, we must expand our hypothesis space beyond the simplistic binary formulation of "God exists" or "God does not exist."
Thus, instead of just two hypotheses, we need at least four to cover a broader range of logical possibilities:
H₀: There is only one God, specifically Yahweh, the God referenced by Pascal. Pascal, as a devout Christian, referred to Yahweh, also known as "the Father" in the New Testament, as the singular, monotheistic God. This deity is also identified as Allah in the Quran, with Islam recognizing the same monotheistic deity worshiped in Christianity and Judaism, though each religion provides its own theological interpretations. This clarification ensures that we are aligning with Pascal’s reference to the God of the Abrahamic traditions—Judaism, Christianity, and Islam—while respecting the nuances in their doctrinal differences.
H₁: There are multiple gods, and Yahweh is the supreme god who should be worshipped above all others.
H₂: There are multiple gods, but Yahweh is not the supreme one to worship.
H₃: There are no gods at all.
By expanding the hypothesis set in this manner, we avoid the logical insufficiency of the original binary formulation and preclude the possibility of a Type II error—failing to reject a false null hypothesis due to inadequate consideration of alternatives. Mathematically, N, the number of gods, could be any natural number, and in a sound formal system, N should range from 0 upwards, reflecting our lack of complete knowledge. Restricting N arbitrarily to just 0 or 1 introduces the risk of Type II error, compromising the integrity—or soundness—of the formal system.
A sound formal system cannot allow such errors, as they conflict with logical rigor. Such oversights would effectively misrepresent the range of possible outcomes. When a formal system permits Type II errors, it becomes logically inconsistent, thereby losing its status as a sound formal system.
This approach aligns with Nassim Taleb's observation that just because we haven’t seen a black swan, it does not mean one does not exist. In probability and hypothesis testing, all plausible alternatives must be considered, or else the process becomes logically flawed.
Dual-Null Hypothesis: H₀ or H₁?
Now the question becomes: which hypothesis should we select as our null hypothesis, H₀ or H₁? Having two candidate null hypotheses is problematic because, in applied mathematics, we do not base decisions on guesswork—we base them on what can be reasonably deduced. This approach has allowed us to consistently succeed in statistical analysis, where success is grounded in rational, evidence-based decisions. Absolute certainty in the objective reality we share is strictly limited to what can be independently verified. In other words, we can be absolutely certain only about empirical facts and deductive reasoning.
Logical deduction ensures that as long as our axioms are true, the theorems derived from them will also hold true. The accuracy of deductive logic in mathematics is absolute because it can be independently verified. For instance, you can personally prove the Pythagorean Theorem and confirm its validity. In mathematics, if A (axioms) is true, then B (theorems) must logically follow and are guaranteed to hold true both in theory and in reality, as long as the axioms are not violated. This is why using formal systems provides a foundation of certainty that informs our decision-making process—and why 2 + 2 is always 4, unless one of Peano’s axioms is violated. For example, "2 moons of Mars + 2 moons of Mars" does not equal "4 moons of Mars" since Mars only has two moons, Phobos and Deimos. In this case, Peano’s second axiom, which posits that each natural number has a successor, is violated. The formal system of Peano’s arithmetic becomes unsound and inconsistent with reality when its key axioms are violated.
This reminds us that axioms themselves are educated assumptions—initial hypotheses like the ones we are considering now, H₀ or H₁. An axiom is accepted without proof and deemed 'self-evident' by those who propose it—in this case, by ourselves. This brings us to the critical question: which of the hypotheses, H₀ or H₁, should we utilize?
We can avoid arbitrary guessing by following the advice of Bertrand Russell: rather than relying on dogma, we should consult the original sources that Pascal referenced. In this case, according to the Torah, Yahweh, the deity Pascal discussed, commands: "You shall have no other gods before me" (Exodus 20:3, NIV). This implies that H₁—which posits Yahweh as the primary deity, deserving of exclusive worship—should be our null hypothesis.
This acknowledgment of Yahweh as the foremost deity aligns with the concept of multiple gods in other religious traditions, such as in the Bhagavad Gita and the pantheon of Greek and Roman gods, where a hierarchy of divine beings can, in theory, coexist. While it's convenient that H₁ does not contradict the existence of many religions with multiple gods, that’s not the primary reason for choosing H₁ over H₀.
The real reason we must adopt H₁ is that H₀ contains a logical contradiction: it claims both "there are no gods except Yahweh" and "Yahweh is the only god." This creates a conflict because atheism (no gods) and monotheism (one god) are mutually exclusive ideas. Grouping them together violates the law of the excluded middle, a principle in formal logic that states something must either be true or false—there is no middle ground. In a formal system, which underpins hypothesis testing in mathematics and probability theory, contradictions are not allowed because they undermine the binary logic required for consistency. By including such conflicting propositions, even in the form of assumptions or hypotheses, we violate the law of the excluded middle, making the entire system unsound. This is why dividing by zero is prohibited in algebra: after that, you can prove anything, like 2 = 3, and so on.
Thus, if we were to adopt H₀, the entire argument—the formal system—would lose soundness, as it would no longer qualify as a valid formal system.
To put this more plainly, Yahweh asking that "no other gods be placed before Him" while assuming there are no other gods is logically akin to instructing someone to avoid eating lobster, unicorn meat, and pork (where unicorns don’t exist). It’s also similar to asking someone to drive 55 miles per hour from Boston to London across the Atlantic Ocean in a car. For a more concrete example, it's akin to the infamous attempt to legislate that pi equals 3.2—the Indiana Pi Bill of 1897. These are self-evident fallacies and have no place in rational discussion.
As a result, H₀ cannot serve as a valid hypothesis in the context of any sound formal system. Any theorems derived using H₀ as an axiom would be inherently invalid—coming from a fundamentally unsound formal system. Therefore, any formal system built on H₀—as it attempts to conflate atheism and monotheism—would be logically unsound. This, however, is not a "mathematically proven fact" about atheism itself but rather about the inconsistency within the specific formal system being proposed.
In conclusion, within the context of our logical framework, the hypotheses that remain logically sound are H₁ (Yahweh as the primary deity) and H₂ (other gods may exist, and Yahweh is not necessarily supreme). H₀ (no gods except Yahweh) and H₃ (no gods at all) are logically unsound as axioms in this formal system due to the contradictions they introduce. Historically, many rational thinkers, such as the Greek philosophers, considered the possibility of multiple gods, perhaps to avoid such logical inconsistencies. It's interesting how history unfolds—those deeply rational thinkers may have been onto something after all.
Reconsidering Hypotheses in Formal Systems
Thank you, Blaise Pascal, for your insight, and fortunately, we now live in an era where individuals can hold diverse beliefs without fear of persecution—whether atheist or otherwise. Hopefully, we can all agree on that!
The reason we mention not burning atheists at the stake is that, under a rigorous formal system framework, any axiomatic assumption or belief consistent with atheism (H₀ or H₃) leads to an unsound formal system. This is because such an assumption inherently introduces logical inconsistency by denying the possibility of other valid hypotheses. In statistics, a Type I error occurs when we incorrectly reject a true null hypothesis. In this context, by excluding possible outcomes (such as the existence of multiple gods), we prematurely dismiss hypotheses that could be true, thereby compromising the integrity of our formal system.
In the context of our shared objective reality, the only two hypotheses that remain logically sound are:
H₁: Yahweh (also known as Allah in Islam) is the primary deity.
H₂: Other gods may exist, and Yahweh is not necessarily supreme.
H₀ (no gods except Yahweh) and H₃ (no gods at all) are not only unsound but are also unlikely to hold true in our shared objective reality. This unsoundness arises because H₀ combines mutually exclusive concepts—atheism (no gods) and monotheism (one god)—which creates a logical contradiction. By violating the law of non-contradiction, which states that contradictory statements cannot both be true at the same time, we render the formal system inconsistent.
This is why many ancient Greek philosophers considered the existence of multiple gods, each with specific names. Their acceptance of multiple deities allowed them to explore philosophical ideas without encountering logical contradictions within their formal systems. By considering the existence of multiple gods, they maintained logical consistency and soundness in their reasoning. Perhaps they were onto something after all!
In constructing a sound formal system, especially when dealing with metaphysical concepts like the existence of deities, it is crucial to avoid logical contradictions and consider all plausible hypotheses. By excluding potential outcomes or combining mutually exclusive concepts, we introduce errors that compromise the system's integrity.
Therefore, H₁ and H₂ remain the logically sound hypotheses within our formal framework, as they do not introduce contradictions and allow for a consistent exploration of possibilities. This careful consideration ensures that our formal system remains robust, reliable, and free from inherent logical errors.
Addressing Common Objections under H₁
The Sincerity Objection: One common objection is that believing in God simply to avoid hell may seem insincere, potentially leading to the very outcome one hopes to avoid. However, under the properly selected H₁ hypothesis (which posits Yahweh as the primary deity), even an attempt to believe in Yahweh results in a relative risk reduction of going to hell. In this context, striving for sincere belief is a rational choice within the framework of Pascal’s Wager. Thus, this objection does not hold in a rational argument about God.
The Infinite Utility Problem: This objection focuses on the use of infinite rewards (heaven) and infinite punishments (hell) in rational decision-making, arguing that infinite values distort the process by making all finite outcomes seem irrelevant. This objection misunderstands the nature of Pascal's Wager. The wager relies on accepting the infinite nature of the rewards and punishments as a premise. Questioning their infinite nature changes the foundational assumptions of Pascal’s argument. Therefore, to evaluate the decision rationally within this framework, one must accept the infinite stakes (Pascal, 1670).
The Moral Objection: Another objection suggests that believing in God purely out of self-interest is morally questionable, reducing faith to a selfish gamble rather than sincere devotion. Even if initial belief stems from self-interest, it can be a starting point for genuine faith and moral growth over time. As belief deepens, sincerity and authentic devotion may develop, making this objection less relevant in the long term (Pascal, 1670).
The Probability Objection: This objection challenges the assumption that even a small probability of God’s existence justifies belief due to the infinite reward, arguing that assigning probabilities to metaphysical claims is inherently problematic. While the probability of God's existence may be uncertain, it is not necessarily negligible. With no prior knowledge of the true probability, the principle of indifference suggests assigning an initial estimate of 50%. Therefore, the potential for an infinite reward still justifies belief within Pascal's framework (Pascal, 1670; see Roger Penrose's work on unknowable probabilities). A short numerical sketch of this expected-value comparison follows these objections.
The Cost Objection: Some argue that Pascal's Wager underestimates the potential costs of belief, including sacrifices in time, resources, and personal freedoms. However, one does not need to devote excessive resources to hold a belief in God. Moderate religious practices can be integrated into one's life without significant sacrifices, minimizing potential costs while still allowing for the possibility of infinite rewards (Pascal, 1670).
The Agnosticism Objection: This objection argues that Pascal’s Wager presents belief as a binary choice, potentially ignoring the rational stance of agnosticism. However, the wager addresses the reality that either God exists or does not—this is a binary fact. Agnosticism reflects uncertainty about this reality, but in decision-making under uncertainty, Pascal's Wager suggests that belief is the rational choice due to the potential infinite reward.
The Many Gods Objection: This objection posits that, given the multitude of belief systems, believing in the "wrong" God might still result in negative consequences. While there are many belief systems, Pascal specifically advocated for belief in Yahweh, the God referred to in the Ten Commandments: "You shall have no other gods before me" (Exodus 20:3, NIV). Yahweh, also known as "The Father" in the New Testament and "Allah" in the Qur’an, is the one God that Pascal’s Wager advises belief in.
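To make the decision-theoretic structure behind the Infinite Utility and Probability objections concrete, here is a minimal Python sketch of the wager’s expected-value comparison. The 50% prior (from the principle of indifference), the finite stake, and the payoff labels are illustrative assumptions for this sketch, not values prescribed by Pascal:

# Minimal sketch of the expected-value comparison in Pascal's Wager.
# Assumptions (illustrative only): a 0.5 prior from the principle of
# indifference, an infinite reward/punishment, and a finite cost of belief.
p_god_exists = 0.5              # assumed prior (principle of indifference)
infinite_payoff = float("inf")  # stands in for the infinite reward/punishment
finite_stake = 1.0              # arbitrary finite cost or gain (assumed)

# Believing: infinite reward if God exists, finite loss otherwise.
ev_believe = p_god_exists * infinite_payoff + (1 - p_god_exists) * (-finite_stake)

# Not believing: infinite loss if God exists, finite gain otherwise.
ev_not_believe = p_god_exists * (-infinite_payoff) + (1 - p_god_exists) * finite_stake

print(ev_believe)      # inf  -- any nonzero prior makes this infinite
print(ev_not_believe)  # -inf

As the comments note, the conclusion is driven entirely by the infinite payoff: any strictly positive prior, not just 50%, yields the same ordering—which is precisely the feature these two objections target.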
At this point, it's worth recalling a quote—often attributed to Mark Twain but not definitively confirmed: "It’s not what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so." In any rigorous analysis, it's essential to reference original sources rather than rely on second-hand interpretations. We encourage careful examination of source material to ensure a thorough understanding of the wager and its underlying formal systems.
To clarify further: under the properly formulated H₁ hypothesis, worship of non-Yahweh entities is classified as idol worship, which is self-evident by definition—worshipping a false god constitutes idolatry. However, this classification does not contradict the fact that the Torah mentions multiple supernatural entities, such as angels, cherubim, seraphim, nephilim, and giants. Some of these beings obey Yahweh, while others do not. Under H₁, these entities are considered "false gods" in the context of worship but may still exist as conscious beings distinct from humans.
The only remaining task is to determine whether H₁ (Yahweh is the primary deity) or H₂ (other gods may exist, and Yahweh is not necessarily supreme) is true. As we use a formal system to reach a conclusion, we cannot use H₀ (no gods except Yahweh) and H₃ (no gods at all) as axioms. This is not because they could never turn out to be true, but because they are unsound and are encompassed by the sound axioms. In other words, under the H₁ hypothesis, it could turn out to be the case that H₀ is true, but under the H₀ hypothesis, it could never turn out to be the case that H₁ is true, making H₀ inherently unsound. The same logic applies to H₃. H₀ and H₃ are simply bad axioms that cannot be used in rational discourse. But don’t worry, dear reader—we won’t leave you in the dark; we will provide an answer. However, we will return to Pascal and God later. For now, let’s return to the main topic of this paper: the consequences of using the wrong axioms for the task at hand.
Interpreting John Kerry's Statement
John Kerry's Quote:
"It's really hard to govern today. You can't—the referees we used to have to determine what is a fact and what isn't a fact have kind of been eviscerated, to a certain degree. And people go and self-select where they go for their news, for their information. And then you get into a vicious cycle."
John Kerry’s comment reflects his concern over the diminishing influence of traditional authoritative sources—referred to as “referees”—who once played a central role in determining what is considered factual but are no longer universally trusted. He expresses frustration over the challenge of governing in an environment where individuals increasingly self-select their sources of news and information, leading to a cycle of reinforcing existing biases.
However, Kerry’s perspective raises important questions. Facts, by definition, do not require referees or authority figures; the truth of objective facts is independently verifiable by any rational individual, regardless of the source presenting them. His frustration may stem from the difficulty of governance in a fragmented media landscape, where individuals often favor narratives that align with their personal beliefs rather than seeking out objective facts.
While Kerry laments the erosion of trusted arbiters of truth, the real-world situation is more nuanced. People may be rejecting unverified claims that traditionally went unquestioned, which can either lead to more critical thinking and skepticism or cause individuals to self-select information based on ideological alignment rather than verifying the accuracy of claims.
Kerry implies that governance becomes difficult without universally trusted referees to establish facts. However, true facts are objective and verifiable, regardless of any authority. This highlights the need for public literacy and critical thinking when approaching unverified claims. Kerry seems to conflate subjective beliefs and opinions with objective facts, expressing concern over the loss of control in shaping which narratives dominate public discourse. What he may be mourning is the loss of a monopoly over dogma—claims presented as facts but lacking independent verifiability.
This distinction is crucial under U.S. common law: facts are independently verifiable, while dogma or hearsay are merely assertions that can be used by dishonest actors to manipulate or mislead. In libel law, for example, truth is an absolute defense, emphasizing the legal and moral principle that facts, when verifiable, stand independently of opinion or authority.
Content Warning: Sensitive Example
If someone refers to a convicted criminal as a "diseased pederast" after they were convicted of child abuse and contracted a disease in prison, such a statement would be legally protected under U.S. libel law—but only if both the conviction and medical condition are verifiable facts. Even highly insulting statements are legally protected if factually accurate. This example underscores the importance of distinguishing between objective facts and subjective opinions, highlighting the need to be mindful of how facts are presented, especially when dealing with sensitive topics.
More important than the legal aspects, this example illustrates the necessity of separating verifiable facts from subjective opinions. While facts, when independently verifiable, are protected both legally and morally, factual statements can have harmful consequences if presented in a derogatory or harmful way. It is essential to handle facts with care and respect, especially in discussions of sensitive topics, as the presentation of facts can significantly affect others. Nonetheless, there is a clear distinction between fact and hearsay.
Key Points
Integrity of Facts: A clear distinction between verifiable facts and subjective opinions is essential for public discourse, decision-making, and governance. Kerry’s statement raises concerns about losing centralized authorities to arbitrate facts, but facts do not require arbitration—they require verification. As the saying goes, "You are entitled to your opinions, but not your own facts." For society to function cohesively, it must distinguish between dogma (claims that may be false) and objective facts (those that are independently verifiable and cannot be false).
Public Discernment: The ability to critically evaluate information and distinguish facts from unverified claims is essential to combat misinformation. Encouraging the public to reject hearsay in favor of verifiable truths strengthens societal resilience against false narratives.
Verification Mechanisms: Independent verification is the cornerstone of ensuring that factual claims remain accurate and trustworthy. Unlike opinions or hypotheses, facts are valid because they can be verified through proper methodology, not because an authority declares them so. This applies to both scientific inquiry and public discourse.
By emphasizing the importance of independently verifiable facts, as opposed to hearsay or subjective interpretations, this analysis highlights the critical role of objective truth in maintaining societal cohesion. In contrast to dogma, facts are unchangeable truths, much like how 2 + 2 = 4 is always true in the formal system of arithmetic, provided its axioms hold. Understanding the distinction between facts and opinions is fundamental to effective governance, communication, and public discourse.
By adhering to objective truths, fostering public discernment, and upholding mechanisms for verifying facts, society can safeguard itself against misinformation and ensure that decisions are based on reliable, independently verifiable information. This strengthens not only the fabric of society but also rational discourse, governance, and decision-making processes.
Theory-Induced Blindness: The Role of Dogma
Theory-Induced Blindness (TIB) is a cognitive bias described by Daniel Kahneman in his 2011 book Thinking, Fast and Slow. Rather than summarizing, let’s refer directly to Kahneman’s words:
"The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."
Kahneman's description emphasizes the difficulty of challenging established theories due to TIB. This bias occurs when individuals become so invested in a theory that they fail to recognize its flaws, often attributing inconsistencies to their own misunderstandings rather than questioning the theory itself.
Scientific theories, as applied formal systems, are structured sets of assertions logically deduced from underlying axioms or hypotheses. Theory-Induced Blindness (TIB) does not arise from the theory's logical structure but from an implicit assumption embedded in its axioms—a dogma, or an accepted truth without empirical verification. Any theory that induces blindness is logically deduced from such a dogmatic axiom using sound reasoning.
The blindness results not from long-term use of the flawed theory but from the false axiom underpinning it. Axioms, by definition, are accepted as true without proof. However, if an axiom turns out to be incorrect, the entire theory derived from it must be revised. Facts are immutable and verifiable, but axioms are assumptions that may be flawed.
Kahneman supports this idea in his critique of Daniel Bernoulli’s flawed theory of how individuals perceive risk:
"The longevity of the theory is all the more remarkable because it is seriously flawed. The errors of a theory are rarely found in what it asserts explicitly; they hide in what it ignores or tacitly assumes."
This quote reinforces the idea that Theory-Induced Blindness (TIB) arises from an incorrect axiom—a tacit assumption that does not reflect reality. While the theory itself may remain logically valid within its formal system, it fails to accurately describe reality because its foundation is flawed—contradicted by established objective facts. For example, Peano’s second axiom states that for every natural number n, there exists a successor n'. However, this assumption may not hold true in certain real-world contexts, such as counting physical objects like Mars' moons. Since Mars only has two moons, the concept of a continual successor fails in this context. This illustrates how an axiom that works perfectly within a formal system can break down when applied to the complexities of physical reality.
In short, even logically sound axioms may not always align with empirical facts, and this disconnect is a core element of TIB. Much like mathematical theorems, theories can be independently verified for internal consistency within the confines of their logical structure. However, any theory will fail to describe reality if one of its foundational axioms is incorrect. Until such a false axiom—like Bernoulli’s erroneous assumption about risk—is identified and corrected, the theory will continue to misrepresent reality.
This concept can be metaphorically illustrated by the famous Russian song Murka, where a traitor within a structured criminal gang undermines the group’s ability to function effectively. Until Murka, revealed as a "MUR" traitor, is ruthlessly eliminated, the compromised gang remains at risk of collapse, with its members either killed or imprisoned under Stalin’s brutal regime. Similarly, a flawed theory cannot function properly until the false axiom, like the traitor, is identified and corrected. The presence of a flawed axiom threatens the entire structure of the theory, much like how Murka’s betrayal endangered the gang.
As Kahneman observes:
"If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation you're somehow missing."
This belief—that "there must be a perfectly good explanation"—lies at the heart of theory-induced blindness. In reality, no such explanation exists when observations fail to fit the model; the real issue is that at least one of the theory’s axioms is flawed, causing the entire theory to deviate from reality. In a correct formal system, no Type I or Type II errors are possible, as explained earlier, because every conclusion follows logically from valid axioms.
These false assumptions, or dogmas, are often educated guesses that may turn out to be wrong. However, through prolonged use and acceptance, they can become ingrained as "facts." Until the flawed axiom is corrected, continued reliance on the theory will inevitably lead to error.
A useful metaphor for this concept can be found in The Godfather. The character Tessio’s betrayal of the Corleone family leads to his inevitable execution, with the famous line: "It’s nothing personal, just business." Betrayal cannot be tolerated in the mafia world, and much like a false axiom in a formal system, a traitor must be eliminated for the structure to remain intact. In the case of a flawed theory, the false axiom is the "traitor" that undermines the entire framework. Until it is discovered and replaced, the theory will continue to fail.
Dogma-Induced Blindness (DIB)
Theory-Induced Blindness (TIB) refers to the cognitive bias where individuals persist in believing flawed theories, assuming there must be an explanation for the theory’s failure to align with reality. The true cause of this failure, however, lies not in the reasoning but in a flawed axiom—a hidden assumption, or dogma, which underpins the theory. In this sense, TIB can be more accurately described as Dogma-Induced Blindness (DIB), where reliance on an unchallenged dogmatic axiom prevents the recognition of the theory’s shortcomings.
People often mistakenly equate the error-free nature of logical deduction with the infallibility of axioms. While the deductive process itself may be flawless, a theory built on a flawed axiom is destined to fail, much like a gang betrayed from within by one of its own members. Until the dogma (the faulty assumption) is identified and corrected, the blindness will persist, and the theory will remain out of step with reality.
DIB is a form of intellectual inertia, where individuals resist engaging in what Daniel Kahneman calls the "slow, expensive System 2 work"—the deliberate, effortful thinking required to critically assess and correct flawed axioms. Re-examining and re-deriving an entire logical structure based on a corrected axiom is a time-consuming and difficult process. Our brains, which are naturally inclined toward efficiency and energy conservation, tend to avoid such mental effort. As a result, people often engage in wishful thinking, holding on to the belief that the theory must still be correct, despite the evidence to the contrary.
DIB, therefore, results from intellectual laziness and a reluctance to challenge deeply ingrained assumptions. The only way to resolve the issue is through rigorous examination of a theory’s foundational axioms. Identifying the "traitor"—the flawed assumption—at the theory’s core is essential for restoring soundness and bringing the theory back in line with empirical reality.
DIBIL: Understanding Dogma-Induced Blindness Impeding Literacy
Dogma-Induced Blindness Impeding Literacy (DIBIL) refers to a cognitive bias where individuals become functionally illiterate—not due to a lack of access to information but because they have been misinformed or rely on flawed assumptions. This condition arises from the uncritical acceptance of dogmas, which are false or unexamined beliefs embedded within personal or cultural frameworks. Dogmas are subjective assumptions, often adopted early in life through societal influences or hearsay, and are accepted without questioning or proof. When reasoning is built on these flawed assumptions, even logically sound deductions can lead to false conclusions.
Formally, DIBIL can be defined as a cognitive error where individuals confuse empirical facts—those that are independently verifiable—with axioms or assumptions, which are foundational premises within a particular formal system of thought. Facts are objective and can be confirmed through observation or experimentation, whereas axioms are accepted principles within a formal system framework, treated as self-evident but not necessarily subject to empirical testing.
For example, in mathematics, Peano’s second axiom holds that every natural number has a successor. This is valid within the mathematical system, but if directly applied to real-world scenarios—such as counting the moons of Mars, which total only two—the assumption becomes inapplicable. The key distinction is that facts, such as the number of Mars' moons, are verifiable through empirical observation, whereas axioms are assumptions that may require revision when they conflict with reality.
The risk of DIBIL lies in treating unexamined assumptions as indisputable truths. When individuals conflate assumptions with empirical facts, their reasoning becomes vulnerable to significant errors, particularly in fields where precision is vital. By building their understanding on shaky foundations—such as dogmas presented as certainties—people risk forming misconceptions and making poor decisions, especially when objective verification is needed.
In essence, DIBIL prevents individuals from critically evaluating the difference between what is verifiable (fact) and what is merely assumed (dogma). This conflation results in a distorted understanding of reality and undermines intellectual rigor, especially in contexts where evidence-based reasoning is essential. To combat DIBIL, one must rigorously challenge and verify the assumptions underlying their belief systems, ensuring that empirical accuracy guides decision-making processes.
Recognizing and addressing DIBIL is essential to improving one’s ability to distinguish between verifiable facts and tacit assumptions. Critical thinking requires a clear understanding that assumptions, while necessary in many systems of thought, are not immutable truths and may need revision in light of new evidence. Developing this awareness fosters critical literacy grounded in empirical reality rather than unexamined beliefs, enhancing decision-making in both formal contexts (like mathematics or economics) and real-world scenarios.
DIBIL also offers insight into the Dunning-Kruger effect, an empirically observed phenomenon where individuals with limited knowledge overestimate their competence because they fail to recognize the inaccuracies in their understanding. These individuals often have not critically examined their foundational beliefs and therefore exhibit unwarranted confidence.
Conversely, those with more expertise recognize two key insights. First, by inquiring how exactly less knowledgeable individuals arrive at their conclusions, it becomes evident that such people are overconfident because their conclusions are based on oversimplified or inaccurate assumptions, which experts know to be flawed. Second, experts are well aware of the potential fallibility of assumptions and, as a result, tend to exhibit cautious self-doubt—perhaps overly so. Experts understand that any assumption could be proven wrong and therefore adopt a more nuanced, critical approach to drawing conclusions. This explains why less knowledgeable individuals may display overconfidence, while experts appear more reserved in their judgments.
Why Disbelieving is Such Hard Work
Disbelieving false hypotheses is notoriously challenging—a point emphasized by Daniel Kahneman and other psychologists. This difficulty often stems from cognitive biases and one of the fundamental principles of logical deduction: the principle of non-contradiction. Central to all formal systems, this principle dictates that a statement and its negation cannot both be true simultaneously. Along with the law of excluded middle, it forms the backbone of logical reasoning, ensuring that proven theorems within formal systems remain internally consistent. Independent verification and adherence to these logical principles safeguard the integrity of formal systems, despite the limitations highlighted by Gödel’s incompleteness theorems.
Formal systems—where theorems are logically deduced from axioms assumed to be true—have been integral to mathematical reasoning since ancient times. Mathematicians like Euclid formalized these proofs using methods of deduction (and mathematical induction when dealing with infinite sets), which remain fundamental to mathematics today. The principle of non-contradiction, employed by Euclid, ensures internal consistency within any mathematical proof, whether in algebra, geometry, or other disciplines. It requires that no proposition can be both true and false simultaneously, preventing logical contradictions and maintaining coherence within the system.
A classic example of this principle is the method of proof by contradiction. In this technique, an assumption is shown to lead to a contradiction, thereby proving the original statement true. Euclid famously used this method to demonstrate that there are infinitely many prime numbers. He began by assuming the opposite—that there are only finitely many primes—and then showed that this assumption leads to a logical contradiction. By disproving the finite assumption, Euclid confirmed that the set of prime numbers must be infinite. This method relies directly on the principle of non-contradiction—establishing a result by showing that its negation leads to contradiction—and is a cornerstone of mathematical reasoning across all formal systems.
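The shape of Euclid’s argument can be illustrated with a short Python sketch (an illustration of the proof idea, not a formal proof): assume a finite list contains all the primes, multiply them together and add one, and the result necessarily has a prime factor missing from the list—contradicting the assumption of completeness.

from math import prod

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (for n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def missing_prime(finite_primes):
    """Given a finite list of primes assumed to be complete, exhibit a prime
    not in the list, mirroring the contradiction in Euclid's proof."""
    candidate = prod(finite_primes) + 1   # leaves remainder 1 on division by each listed prime
    p = smallest_prime_factor(candidate)
    assert p not in finite_primes         # so the "complete" list was not complete
    return p

print(missing_prime([2, 3, 5, 7, 11, 13]))  # prints 59, since 30030 + 1 = 30031 = 59 * 509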
The principle of non-contradiction is crucial for maintaining logical consistency within any formal system. It ensures that any claims contradicting the axioms or theorems derived from them are recognized as false within the system. This principle is foundational in every branch of mathematics. For instance, dividing by zero in algebra leads to contradictions—mathematically equivalent to fallacies—because doing so renders the system inconsistent, allowing absurd conclusions such as proving that 2=3. In any proper formal system, violating the principle of non-contradiction undermines the foundation of logical reasoning.
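A standard worked example shows the kind of fallacy this permits—the familiar textbook "proof" that 2 = 1, reproduced here only as an illustration:

\[
a = b \;\Rightarrow\; a^2 = ab \;\Rightarrow\; a^2 - b^2 = ab - b^2 \;\Rightarrow\; (a - b)(a + b) = b(a - b) \;\Rightarrow\; a + b = b \;\Rightarrow\; 2b = b \;\Rightarrow\; 2 = 1.
\]

The step that cancels (a − b) divides both sides by a − b = 0; once that single illegal move is admitted, any equality whatsoever—2 = 3 included—can be manufactured the same way, which is exactly what it means for the system to become inconsistent.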
This principle is not limited to formal mathematics; it applies to all forms of rational thought. Assertions that contradict established axioms or empirical facts are often automatically rejected, even subconsciously, because such contradictions are inherently recognized as invalid. Rigorous adherence to the principle of non-contradiction means that any proposition conflicting with an established axiom is dismissed as logically impossible. This rejection is not merely procedural—it is a logical necessity to maintain the coherence and consistency of any formal system.
However, the very principle that upholds the integrity of logical systems also makes it exceedingly difficult to disbelieve false hypotheses. Once a hypothesis is accepted as an axiom or a strongly held belief, the mind becomes resistant to recognizing evidence that contradicts it. The principle of non-contradiction, while essential for logical deduction, can foster a form of cognitive inertia. It makes it difficult to let go of established beliefs, even when they are false, because subconsciously, we may reject contradictory evidence automatically due to this ingrained logical principle.
This is why disbelieving is such hard work. Rejecting a false hypothesis requires not only identifying contradictions—a task that is straightforward in principle—but also the mental effort to override deeply ingrained beliefs that are supported by the principle of non-contradiction. To reject a false hypothesis, one must be willing to overcome the mental block that results from contradicting a firmly held assumption and be prepared to restructure the entire logical framework built upon it, which is a complex and intellectually demanding task. As Kahneman points out, our brains, prone to cognitive shortcuts and biases, often resist this effort. We tend to believe that everything is fine and avoid the hard work of rethinking our assumptions.
By doing so, we unconsciously fall into a trap of cognitive comfort, avoiding the discomfort of challenging deeply held beliefs. This phenomenon underscores why disbelieving or revising false assumptions can feel like an uphill battle—it requires conscious effort to recognize contradictions and to adjust one’s belief system accordingly. The process involves confronting not just logical inconsistencies but also our innate resistance to cognitive dissonance, which is why disbelieving often requires more mental effort than simply holding on to the status quo.
The Flaw in Formal Systems: Axioms and Their Limits
In formal systems like Zermelo-Fraenkel (ZF) set theory, axioms are foundational assumptions accepted without proof. For example, the Axiom Schema of Separation allows for the creation of subsets by specifying properties that elements must satisfy. According to this axiom, any set consisting of two elements can be divided into two separate subsets, each containing one element from the original set. The Axiom of Pairing complements this by grouping elements together, while the Axiom Schema of Separation divides them into subsets based on their properties.
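For orientation, the two axioms mentioned can be written in a simplified first-order form (standard formulations, lightly abbreviated here; the official Separation schema also allows parameters):

Pairing: \(\forall a \,\forall b \,\exists c \,\forall x \,\big(x \in c \leftrightarrow (x = a \lor x = b)\big)\)

Separation (one instance per formula \(\varphi\)): \(\forall S \,\exists T \,\forall x \,\big(x \in T \leftrightarrow (x \in S \land \varphi(x))\big)\)

Applied to a two-element set S = {a, b}, Separation with φ(x) ≡ (x = a) yields {a} and with φ(x) ≡ (x = b) yields {b}, which is the "division into subsets" described above.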
These formal structures are crucial for understanding relationships between elements, such as correlations in statistics, which measure relationships between real-world particles, forces, or other entities. In ZF set theory (or any formal system incorporating set theory), these entities can be represented as elements in a set, where the axioms provide the foundation for defining relationships such as correlation (literally, "co-relation"). In any formal system, the correlation between two variables depends on the assumption that they can be analyzed within a shared framework: set theory and probability theory.
This assumption—that elements or variables can be separated or grouped based on defined properties—underpins the analysis of relationships, particularly in fields like statistics and probability. Set theory and its axioms provide a logical structure to support this, which is essential for understanding how correlated properties interact within these formal systems.
In classical physics, systems are often considered divisible into independent parts, meaning the properties of the whole can be inferred from its components. This reflects the assumption of separability, similar to the Axiom Schema of Separation in mathematical frameworks. However, quantum mechanics challenges this assumption with phenomena such as quantum entanglement, where particles are so deeply interconnected that the state of one particle cannot be fully described without reference to the other, regardless of the distance between them.
Entanglement defies the classical notion of separability and introduces a paradox in frameworks that rely on it. For instance, when deriving Bell’s Inequality, the principle of local realism assumes that measurement results of one particle are independent of the other in an entangled pair. This mirrors the separability assumption in set theory, where distinct elements are treated independently. Bell’s Inequality sums correlations from different measurements, assuming each particle can be considered separately. However, quantum mechanics shows that entangled particles exhibit non-local connections, which violate this separability and lead to violations of Bell’s Inequality.
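The quantitative content of this argument is usually expressed through the CHSH form of Bell’s Inequality, stated here in its standard form for reference. With measurement settings a, a′ on one particle and b, b′ on the other, local realism (separability plus locality) bounds the combination of correlations:

\[
S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2,
\]

whereas quantum mechanics predicts—and experiment confirms—values of |S| up to \(2\sqrt{2}\) for suitably chosen settings on entangled pairs. The classical bound of 2 is exactly what the separability-style assumption buys; the measured violation is what shows that assumption failing.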
This violation of classical assumptions reveals a broader limitation of formal systems: while axioms are logically consistent within their frameworks, they are not guaranteed to capture the full complexity of physical reality. Axioms are tools to facilitate reasoning within formal systems, but they are not empirically verified truths. In the context of quantum mechanics, the assumption of separability embedded in classical frameworks—though consistent with ZF set theory—is inconsistent with reality when the elements in question are photons. This inconsistency is evidenced by violations of Bell’s Inequality, as demonstrated in experiments by physicists Alain Aspect, John Clauser, and Anton Zeilinger, who were awarded the 2022 Nobel Prize in Physics for their work in quantum entanglement. These findings highlight the failure of separability in the quantum realm, where entangled particles do not behave as independent entities.
This inconsistency violates the dual-consistency requirement for sound applied formal systems. This requirement states that for a formal system to be sound in application, it must not only be internally consistent (i.e., free from contradictions within its own framework) but also have its axioms be externally consistent with empirical reality. When an assumption like separability contradicts empirical evidence—such as the behavior of entangled photons—the formal system becomes unsound in its applied context. While the axioms may remain valid in their theoretical domain, they fail to maintain relevance when confronted with the complexities of quantum phenomena. This necessitates a reevaluation or revision of these assumptions to better align with empirical reality.
This discrepancy illustrates the gap between formal systems and empirical reality. While the Axiom Schema of Separation remains valid in the abstract world of mathematics, its assumption of separability is not applicable to the quantum world. The limitations of classical assumptions, including separability, become apparent when confronted with empirical facts like quantum entanglement. Axioms remain logically sound within their respective formal systems, but new scientific discoveries challenge their applicability in certain aspects of the physical universe.
The distinction between axioms and empirical facts is critical. Axioms are assumptions accepted without proof, while facts are independently verifiable through observation or experimentation. Quantum entanglement is an empirical fact, whereas separability is an assumption grounded in classical logic. When empirical evidence contradicts an assumption, it is the assumption that requires revision, not the facts. Recognizing these limitations helps prevent Dogma-Induced Blindness Impeding Literacy (DIBIL), where unexamined assumptions are treated as indisputable truths.
Acknowledging that axioms are tools for reasoning rather than immutable truths allows us to refine theories, ensuring that they remain both logically sound and empirically valid. This is particularly important in light of quantum phenomena, which challenge classical notions. Developing a quantum set theory that does not assume separability may help bridge the gap between abstract reasoning and quantum reality. Such efforts would better align formal systems with our evolving empirical understanding.
However, this discussion is beyond the scope of this paper, which focuses primarily on theory-induced blindness in mathematical economics rather than quantum physics. The point remains: axioms and formal systems provide valuable frameworks for understanding relationships, but their applicability to reality is contingent on their ability to accommodate empirical facts. Revising axioms in response to new evidence is critical for maintaining the soundness of applied formal systems.
The Importance of Distinguishing Facts from Axioms
Unlike axioms, which are unproven hypotheses or foundational assumptions subject to potential falsification, facts are independently verifiable and certain in objective reality. Recognizing this distinction is crucial: while axioms may lead to coherent logical conclusions within formal systems, they should not be mistaken for empirical truths that apply universally.
This distinction becomes especially important when influential figures emphasize the need for authoritative “referees” to verify facts. In reality, facts are verifiable by any rational individual, independent of authority. Relying on external figures to define facts can be a symptom of Dogma-Induced Blindness Impeding Literacy (DIBIL)—a cognitive bias in which unexamined adherence to dogmas impairs one’s ability to distinguish between hypotheses and facts. To avoid this, it is vital to differentiate between subjective beliefs and objective, verifiable truths.
We must also recognize that everyone is susceptible to DIBIL. Each of us harbors certain dogmatic beliefs that can distort our understanding and lead to flawed conclusions. Acknowledging this susceptibility is the first step toward overcoming it and refining our thinking.
A dominant axiomatic assumption in mainstream economic theory, first proposed by Friedman and Schwartz in their 1963 work, A Monetary History of the United States, 1867–1960, posits that the primary cause of the Great Depression was the central bank’s failure to act during the late 1920s and early 1930s. Specifically, the Federal Reserve did not provide sufficient support to individual banks facing closures due to bank runs. These runs were triggered by the banks' inability to convert representative money (e.g., checking and savings account balances) into commodity money (such as gold coins). While this hypothesis remains influential, alternative explanations suggest that other factors—such as structural economic weaknesses, trade policies, and psychological factors—also played significant roles in causing the Great Depression.
This highlights the importance of formal systems in economic modeling: they ensure soundness by preventing the inclusion of assumptions that may later prove false. If we were to accept Friedman’s hypothesis as an axiom—that is, as a foundational, self-evident truth—our formal system would become unsound. This is because, if the hypothesis were later disproven, the formal system would effectively misrepresent reality. A sound formal system, when constructed with proper inference rules, does not generate false conclusions from true premises. As explained previously, a consistent formal system does not “lie” about reality; under the inference rules, there is no possibility of incorrectly rejecting a true hypothesis. As such, hypotheses cannot serve as the foundation for a sound formal system unless proven beyond doubt. Assuming a hypothesis to be true without proof and treating it as an axiom introduces the risk of logical errors, rendering the system unsound. This is one reason why Marx’s economic theory became unsound: his assumption regarding agency costs flowing from agents to principals did not align with empirical reality, leading to flawed conclusions.
To accurately model money and central banking within a formal system, it is essential to avoid assumptions that could later be disproven. For instance, while Friedman’s hypothesis suggests that the central bank’s inaction caused the Great Depression, using this hypothesis as an axiom would be unsound, as it remains subject to empirical validation and potential falsification. Instead, a sound approach must focus on facts that are irrefutable. One such fact is that rapid deflation was a key feature of the Great Depression. This is not a hypothesis—it is an empirical reality. While the specific causes of this deflation are debated, its occurrence is undeniable. From this, we can adopt as a self-evident axiom that volatility in the money supply, whether through inflation or deflation, is harmful to economic growth. This is a universally observed phenomenon across real-world economies, with no empirical evidence contradicting it. Moreover, no responsible economist disputes this claim. This is evident in the real-world behavior of central banks, which treat deflation as a dire threat and actively combat inflation to stabilize prices. Therefore, this principle can safely serve as an axiom in a formal system to model the effects of monetary policy on economic outcomes.
In contrast, Friedman’s hypothesis about central banking cannot serve as an axiom because it remains subject to empirical validation and may be disproven. In any sound formal system, only axioms that are self-evidently true can be accepted—by definition of what constitutes an axiom—to preserve the system’s soundness. While influential, Friedman’s hypothesis does not meet this standard, unlike the consistently observed effects of monetary volatility, which are universally supported by empirical evidence. This distinction is critical for maintaining the integrity of mathematical economics as a reliable and robust formal system for modeling real-world phenomena. It is this commitment to sound axiomatic foundations that has made the Arrow-Debreu framework so impactful. Its rigor and consistency have earned it multiple Nobel Prizes and solidified its position as a cornerstone of mainstream economic theory. The framework's strength lies in its soundness, which is why it continues to be widely adopted in both academic research and policy-making.
By recognizing the distinction between facts and axioms and remaining open to revising assumptions in light of new evidence, we can avoid the pitfalls of DIBIL and improve our decision-making processes across both abstract and practical domains.
The Zero-Dogma Approach: Grounding Theories in Verifiable Truth
In this discussion, we adopt a zero-dogma approach, ensuring that every claim is anchored in independently verifiable facts. This rigorous commitment to truth forms the foundation of our theory, which operates within a formal system while meticulously avoiding the pitfalls of unverifiable assumptions—or dogmas—often embedded in axioms that undermine competing frameworks.
This approach offers a decisive advantage: our theory is provably the maximum likelihood theory—the "best scientific" theory available. It is the least likely to be disproven because it avoids unverifiable axioms. In contrast, competing theories that depend on additional assumptions are vulnerable to logical collapse if even one of those assumptions is invalidated—assuming these theories are consistent in the first place. Theories that are internally or externally inconsistent are excluded from discussion, as fallacies, by definition, are invalid and not worth further examination. In other words, if any of the axioms contradict facts or each other, the theory is unsound. In consistent theories, if any underlying axiom is disproven, the entire theory will be falsified.
Our theory remains robust because it is built not only on verifiable foundations but also on a minimal subset of the axioms in any competing alternative, ensuring both theoretical strength and practical reliability. While shared assumptions between competing theories and ours would invalidate both if proven false, we maintain an edge by being more cautious and critical from the outset.
Formally, within any formal system, if a set of axioms A is true, then the logically deduced claims B are universally true, provided none of the axioms in A are violated. Since our theory derives from a strict subset of the axioms used by competing theories, it has a minimal statistical likelihood of being falsified. If any of the axioms in A are false, all competing theories relying on A will be falsified alongside ours. However, if additional assumptions in competing theories are proven false, our theory remains valid while theirs collapse. This makes our theory the most likely to remain true compared to theories that depend on a larger set of assumptions.
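The statistical intuition can be made explicit with a deliberately idealized calculation (a sketch under strong simplifying assumptions, not a rigorous result): suppose each axiom, independently, has some probability p of eventually being falsified. A theory resting on k axioms then survives with probability

\[
P(\text{theory survives}) = (1 - p)^k,
\]

which is strictly decreasing in k. A theory built on a strict subset of a rival’s axioms therefore can never be less likely to survive, and is strictly more likely whenever the rival’s extra axioms carry any risk at all. The independence and equal-risk assumptions are simplifications; the monotonicity in the number of axioms is the point.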
An astute reader will recognize this principle as Occam’s Razor. However, this principle originated in Aristotle's Posterior Analytics, where he states:
"We may assume the superiority, ceteris paribus [other things being equal], of the demonstration which derives from fewer postulates or hypotheses."
Aristotle’s formulation is not only the original but also more precise than the later version associated with William of Occam. While Occam’s Razor is often misunderstood as favoring the "simpler" theory, Aristotle correctly emphasized that the superiority of a theory lies in minimizing assumptions while preserving logical integrity.
In any consistent scientific theory—because it is also a formal system—a smaller set of axioms reduces the likelihood of any claim in B being falsified because fewer assumptions are susceptible to being disproven. Importantly, this does not imply that the theory itself is simpler. A more reliable theory often involves more complex deductions. A theory with fewer initial assumptions in A typically requires a longer and more intricate chain of reasoning to arrive at the same conclusions in B. Aristotle’s principle acknowledges that a simpler hypothesis set does not necessarily lead to a simpler overall theory, as the deductive process may become more involved.
Unlike the superficial interpretation of Occam’s Razor, which favors the "simpler" theory without accounting for the complexity of the deductive process, Aristotle’s principle of parsimony balances simplicity in assumptions with the necessary complexity of logical deductions. In other words, there is no free lunch in mathematics: if you want a more reliable theory grounded in fewer axioms, it requires a longer, more intricate chain of deductive reasoning. Put in layman’s terms, the more accurate the theory, the more complex it is likely to be—because that complexity arises from reducing assumptions and relying on solid deductive logic to build a stronger foundation.
Any dually consistent applied formal system that rests on the smallest set of axioms among all competing alternatives is, by definition, the best scientific, maximum likelihood theory for accurately reflecting economic realities. It offers the highest probability of being true among existing alternatives, owing to its reliance on fewer axioms than any competing theory. This assertion is supported by rigorous deductive reasoning, which enhances the credibility of our theory, given that all claims are based on independently verifiable facts.
This underscores the critical importance of avoiding Dogma-Induced Blindness Impeding Literacy (DIBIL)—a cognitive bias where dogmatic beliefs are mistaken for facts. DIBIL highlights the dangers of uncritically accepting axioms that lack empirical verification. Whether in theoretical models or real-world decision-making, rational thought demands a clear distinction between unproven assumptions and verifiable truths.
A zero-dogma approach ensures that our reasoning remains firmly grounded in reality. By relying exclusively on independently verifiable facts and maintaining openness to revising axiomatic assumptions, we enhance our functional literacy and make more effective, informed decisions. This commitment to critical thinking and empirical evidence fortifies our understanding of complex issues, enabling us to navigate them with greater clarity and confidence.
By explicitly enumerating and scrutinizing our assumptions—recognizing that they could prove false or inapplicable in different contexts—we ensure that our theories remain flexible and adaptable. This mindset is essential for progress, as it prioritizes truth over dogma, empowering us to stay grounded in reality. Ultimately, this leads to more reliable and effective outcomes, reinforcing the superiority of our zero-dogma approach in both abstract and practical domains.
Sorting Dogma from Fact in Mathematical Economics
To separate dogma from fact in economics, we must define efficiency correctly. Our initial goal is simple: define and measure economic efficiency in a factual way—in a manner that is as "self-evidently true" to everyone as possible. The first step in this process is to compare two key concepts—Nash Equilibrium and Pareto Efficiency—both of which characterize outcomes of strategic interaction but differ fundamentally in their implications for individual and collective welfare.
In mathematical economics, which shares the fundamental axiom of rational utility maximization with mathematical game theory, a Nash Equilibrium describes a situation where rational utility maximizers engage in strategic interactions. The equilibrium is defined by the condition that "no player can benefit by unilaterally changing their strategy, assuming others’ strategies remain unchanged." If this condition is violated—under the assumption of rational utility maximization—the situation is not an equilibrium, as rational utility-maximizers will by definition change their strategy unilaterally if it leads to a higher payoff. However, while this condition ensures strategic stability for each individual, it does not imply that the outcome is collectively optimal.
In contrast, Pareto Efficiency focuses on collective welfare. An outcome is Pareto-efficient if no player can be made better off without making another player worse off. This concept ensures that all available mutual gains have been realized, but it does not account for fairness or equity. Pareto Efficiency concerns only allocative efficiency, not how benefits are distributed. It is a widely accepted and fundamental measure of efficiency in economics, as no alternative criterion commands comparable consensus as a benchmark of allocative efficiency. While alternative concepts such as Kaldor-Hicks Efficiency exist, which allow for potential compensation and thus broader improvements, they do not fully resolve issues related to equity and fairness. Additionally, Kaldor-Hicks Efficiency can be harder to implement in practice compared to Pareto Efficiency.
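A minimal Python sketch of the standard Prisoner’s Dilemma makes the gap between the two concepts concrete (the payoff numbers are the usual textbook choices, used here only for illustration): mutual defection is the unique Nash Equilibrium, yet it is not Pareto-efficient, because mutual cooperation makes both players strictly better off.

from itertools import product

# Payoffs (row player, column player) for a standard Prisoner's Dilemma.
# Strategies: "C" = cooperate, "D" = defect. Numbers are illustrative.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(profile):
    """True if no player can gain by unilaterally changing strategy."""
    r, c = profile
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in strategies)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in strategies)
    return row_ok and col_ok

def is_pareto_efficient(profile):
    """True if no other outcome makes a player better off without harming the other."""
    u = payoffs[profile]
    return not any(
        v[0] >= u[0] and v[1] >= u[1] and v != u
        for v in (payoffs[other] for other in product(strategies, repeat=2))
    )

for profile in product(strategies, repeat=2):
    print(profile, "Nash:", is_nash(profile), "Pareto-efficient:", is_pareto_efficient(profile))
# Only ("D", "D") is a Nash Equilibrium, and it is the one outcome that is
# not Pareto-efficient: ("C", "C") gives both players strictly more.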
Moreover, in reality, even Pareto-efficient outcomes, as described in the Arrow-Debreu framework, are rarely achieved due to market imperfections, information asymmetries, and externalities that prevent the optimal allocation of resources. Therefore, striving for Pareto Efficiency remains a crucial first step. Instead of criticizing it for being unfair—which it may well be—we should focus on achieving at least that minimum level of efficiency first and then address fairness and broader efficiency concerns. After all, we must learn to walk before we can run.
Achieving Pareto Efficiency requires that all players are fully and symmetrically informed—not only about the rules of the game and payoffs (complete information) but also about how their actions affect others. There are four types of information, as currently defined in game theory:
Complete Information: In game theory, complete information means that all players know the structure of the game, including the payoffs, strategies, and rules of all participants, before any play occurs. This comprehensive knowledge allows players to fully understand the potential outcomes of their strategic choices.
Perfect Information: Perfect information refers to situations where all players are fully informed of all actions that have taken place in the game up to that point. This means that every player knows the entire history of the game, including the moves chosen by other players. Classic examples of perfect information games include chess and checkers, where each player can see all pieces and moves made by their opponent.
Imperfect Information: Imperfect information refers to situations where players do not have full knowledge of each other’s actions at every point in time. Even if they know the structure and payoffs of the game (complete information), they may not know exactly what moves have been made by their opponents at the time of decision. For example, in poker, players do not know the cards held by others. This is the key difference between "imperfect" and "perfect" information. Imperfect information—such as not knowing the other player’s cards—can hinder the achievement of Pareto Efficiency because it prevents players from fully understanding the effects of their strategy changes on others. In such cases, players cannot ensure that their choices will not inadvertently harm others, making it challenging to guarantee a Pareto-efficient outcome in reality.
Incomplete Information: Incomplete information differs from imperfect information. It refers to a situation where players lack knowledge about fundamental elements of the game, such as the payoffs or preferences of other players. In such contexts, players must form beliefs about unknown variables, which is the basis of Bayesian Nash Equilibrium.
While the established terminology can be confusing (in particular, "incomplete" information is not the same as "imperfect" information), it is important to adhere to these definitions to maintain clarity and consistency. Thus, Pareto Efficiency evaluates whether resources are used efficiently for everyone, not just for individuals. Unlike Nash Equilibrium, which guarantees strategic stability by ensuring that no player has an incentive to unilaterally deviate, Pareto Efficiency ensures that improvements to one player's payoff do not harm others. This is not achievable when players are imperfectly informed.
Imperfect information introduces uncertainty about how actions affect others. This is why repeated interactions or enforceable agreements often become necessary to mitigate strategic uncertainty or asymmetric information, both of which leave players imperfectly informed and lead to Pareto-inefficient outcomes. In financial markets, asymmetric information between buyers and sellers can prevent the realization of Pareto-efficient trades, as one party may exploit their informational advantage. For instance, you have about as much chance of coming out ahead in a stock trade where your counterparty is Warren Buffett as you do of winning a tennis match against John McEnroe in his prime. Therefore, mitigating imperfect information by addressing information asymmetries and enhancing coordination mechanisms is essential for moving toward more Pareto-efficient and equitable outcomes—this is a well-established empirical and theoretical fact.
The key point is that rational utility-maximizing players' strategies will always form some kind of Nash Equilibrium, where each player's strategy is a best response to the others'. Yet such equilibria often result in outcomes that are not Pareto-efficient. As John Nash demonstrated in 1950, an equilibrium exists in finite non-cooperative games under the utility maximization axiom, assuming players have complete information about the game's structure. The extension of Nash Equilibrium to games with incomplete information was later developed by John Harsanyi, leading to the concept of Bayesian Nash Equilibrium. Thus, even a Nash Equilibrium with incomplete information can be stable. A stable Nash Equilibrium is one where small deviations from the equilibrium strategy lead players back to the equilibrium, making it robust to perturbations.
However, even stable Nash Equilibria under complete information do not guarantee Pareto-efficient outcomes. Strategic stability ensures only that no individual has an incentive to deviate; it does not ensure collective efficiency as defined by Pareto Efficiency. Players in a Nash Equilibrium act according to individual incentives, and this individual rationality can lead to suboptimal collective outcomes even under complete information, and all the more so under imperfect information. A classic example is the Prisoner's Dilemma, where both players choose to defect, the stable Nash strategy, resulting in a lower collective payoff than if they had cooperated. While mutual defection is a Nash Equilibrium, it is not Pareto-efficient; both players could achieve a better outcome through cooperation, but cooperation is not stable within the Nash framework without additional mechanisms to enforce it.
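To make this distinction concrete, here is a minimal Python sketch (our own illustration, not part of the formal model developed later in this paper) that enumerates the pure-strategy profiles of a Prisoner's Dilemma and tests each for the Nash and Pareto properties. The payoff numbers and the helper functions is_nash and is_pareto_efficient are assumptions chosen only to reproduce the dilemma's standard payoff ordering.

# Pure-strategy Nash equilibria vs. Pareto efficiency in a Prisoner's Dilemma,
# with illustrative (assumed) payoffs.
from itertools import product

ACTIONS = ["cooperate", "defect"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(profile):
    """No player gains by unilaterally deviating."""
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    best_for_1 = all(payoffs[(d, a2)][0] <= u1 for d in ACTIONS)
    best_for_2 = all(payoffs[(a1, d)][1] <= u2 for d in ACTIONS)
    return best_for_1 and best_for_2

def is_pareto_efficient(profile):
    """No other profile makes one player better off without hurting the other."""
    u1, u2 = payoffs[profile]
    for other in product(ACTIONS, repeat=2):
        v1, v2 = payoffs[other]
        if v1 >= u1 and v2 >= u2 and (v1 > u1 or v2 > u2):
            return False
    return True

for profile in product(ACTIONS, repeat=2):
    print(profile, "Nash:", is_nash(profile),
          "Pareto-efficient:", is_pareto_efficient(profile))

Running the sketch shows that mutual defection is the only pure-strategy Nash Equilibrium yet is not Pareto-efficient, while mutual cooperation is Pareto-efficient but not a Nash Equilibrium: precisely the gap between individual strategic stability and collective efficiency described above.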
Achieving Pareto Efficiency within a Nash framework requires more than just individual rationality; it also necessitates perfect information and transparency. Perfect information ensures that all players are fully aware of all past actions, enabling them to make informed strategic decisions that account for the effects on others. Transparency eliminates imperfect information, meaning that players have complete knowledge of how each player's actions impact the others. It is essential for players to understand how their actions affect both their own payoffs and the welfare of others, as the interdependence between players’ payoffs is crucial for collective efficiency. However, Nash Equilibrium does not inherently guarantee the level of transparency needed to achieve Pareto Efficiency, highlighting the limitations of relying solely on this framework to define collective welfare.
Thus, while Nash Equilibrium and Pareto Efficiency are both valuable concepts in mathematical economics, they serve different purposes and rest on different underlying assumptions. Recognizing these differences and applying verifiable principles to assess efficiency helps avoid the dogma of assuming that individual rationality will naturally lead to collective welfare. This distinction is crucial for developing economic models that better reflect real-world complexities.
For example, some Austrian economists describe so-called "free market" solutions—such as CarFax reports that mitigate asymmetric information between buyers and sellers—as purely market-driven outcomes. However, in reality, CarFax relies significantly on data obtained through government-regulated entities, which mandate the accurate reporting of accidents and mileage. Without this regulation, intended to prevent dishonesty by sellers, CarFax reports would be useless, as they effectively are in countries like Russia, where such reporting requirements are not reliably enforced. Despite concerns about rent-seeking, exemplified by regulations that prohibit the sale of raw milk while allowing raw oysters, government oversight plays a key role in enforcing group-optimal outcomes.
Cause-and-Effect: Imperfect Information Causes Pareto-Inefficiency
In both mathematical economics and real-world scenarios, imperfect information prevents the achievement of Pareto-efficient outcomes. George Akerlof’s seminal work, The Market for "Lemons", illustrates how imperfect information—specifically asymmetric information—can lead to significant inefficiencies. In Akerlof’s example, sellers of used cars often possess more information about the quality of the cars they are selling than buyers. This asymmetry results in a market dominated by low-quality "lemons" because buyers are unable to accurately assess the quality of the cars. Consequently, high-quality cars are driven out of the market because sellers of good cars cannot obtain fair prices, leading to market breakdown and Pareto inefficiency. In this scenario, mutually beneficial transactions are missed, as the market fails to allocate resources efficiently between buyers and sellers.
A deeper issue stems from what we refer to in this paper as the Rent-Seeking Lemma, a concept closely related to the opportunistic behavior known as rent-seeking, developed in public choice theory by Tullock and Buchanan (the latter recognized with the 1986 Nobel Prize). Rent-seeking refers to a form of economic inefficiency in which agents seek to increase their wealth without creating new value, often through manipulation or exploitation of existing resources. This concept is closely tied to the principal-agent problem, where the agent (in this case, the seller) possesses more information than the principal (the buyer) and can exploit this asymmetry to their advantage. For example, the seller, acting as the informed agent, may misrepresent a low-quality car as high-quality, extracting unearned wealth in the process. As described by Jensen and Meckling in their seminal paper, Theory of the Firm: Managerial Behavior, Agency Costs, and Ownership Structure (1976), and their 1994 paper, The Nature of Man, such behavior stems from the variability of honesty and the inherent self-interest of economic agents, reflecting the "opportunistic nature of man." Predictable opportunistic exploitation of information asymmetry leads to inefficiencies and a breakdown in trust, which in turn undermines the optimal functioning of markets.
In markets with imperfect information, economic "parasites"—a term originally coined by Vladimir Lenin to describe individuals who consume goods and services produced by others without contributing to their creation—exploit information asymmetries without adding value to the market. In public choice theory, "successful rent-seekers" engage in similar behavior by extracting wealth through manipulation rather than productive activities. Economic parasites, such as fraudulent used car dealers, systematically extract unearned wealth in the form of economic rents from uninformed buyers. This dynamic leads to a breakdown in market efficiency, as dishonest behavior is incentivized while honest agents are driven out, compounding inefficiencies.
In such markets, the absence of mechanisms to verify quality—such as CarFax reports—enables the informed party (the seller) to take advantage of the uninformed party (the buyer), leading to a persistent failure to achieve efficient outcomes under imperfect information. This not only violates Pareto efficiency but also leads to a market dominated by adverse selection and reduced welfare for both parties over time.
A similar phenomenon occurs in the Prisoner's Dilemma in game theory, though in this case the inefficiency stems from strategic uncertainty rather than asymmetric information. In the classic Prisoner's Dilemma, each prisoner is uncertain about the other's decision, which prevents them from cooperating, even though mutual cooperation would lead to a better outcome for both. Without trust, each prisoner faces imperfect information about the other's choice and rationally defects to avoid the worst-case scenario (being betrayed while cooperating). This strategic uncertainty results in a Nash Equilibrium where both players defect, leading to an outcome that is Pareto-inefficient. If the prisoners were not imperfectly informed about each other's strategies, they could achieve a Pareto-efficient outcome by cooperating.
In both cases—whether dealing with asymmetric information in a market, as in Akerlof’s example, or strategic uncertainty in the Prisoner’s Dilemma—imperfect information leads to outcomes that fall short of Pareto efficiency. Whether due to strategic uncertainty or asymmetric information, participants are unable to make fully informed decisions, resulting in inefficiencies. When information is complete and transparent, individuals can coordinate better and achieve outcomes where no one can be made better off without making someone else worse off—a Pareto-efficient allocation.
This principle is well-established in economic theory and can be observed empirically. In markets with greater transparency, efficiency improves as buyers and sellers make informed decisions using tools like CarFax reports. Similarly, in game-theoretic scenarios, the introduction of communication or mechanisms that reduce strategic uncertainty can lead to cooperative outcomes that are more efficient. For example, within criminal organizations like the Mexican mafia, punishing informants ("rats") mitigates strategic uncertainty. No co-conspirator is likely to confess, given the threat of retribution against their family. This deterrence mitigates imperfections in information, facilitating greater cooperation and ensuring stability within the group—a form of group-optimal Pareto efficiency where no individual has an incentive to deviate.
However, this type of enforced cooperation does not result in a socially optimal outcome for society as a whole. The First Welfare Theorem, as established in the Arrow-Debreu framework, proves that competitive markets with voluntary exchanges lead to Pareto-efficient outcomes that maximize overall welfare. In contrast, the mafia's enforcement mechanisms rely on coercion and involuntary exchanges, which reduce welfare for society at large; only unfettered, voluntary trade is mutually beneficial, and involuntary exchange is neither mutually beneficial nor Pareto-improving. While the mafia may achieve internal stability, their activities—often centered on illegal markets—create externalities that harm societal welfare, violating the conditions necessary for true Pareto efficiency as defined by economic theory.
Yet, while the theory of Pareto Efficiency is compelling, how can we be certain that these theoretical conclusions hold true in real-world economies? More importantly, how can we ensure that the theories we use in mathematical economics provide real-world use-value? For there is a long-standing principle of economics, traceable to Aristotle and commonly misattributed to Marx, that the use value of any product, including a mathematical economic theory, is related to its exchange value.
To address these questions, we must first establish how to define and measure Pareto Efficiency in a way that is independently verifiable, making our estimates objective facts. This requires clear, empirical criteria that can be observed and tested in real-world economies. It is not enough for an economic model to claim efficiency based on theoretical constructs alone; we need measurable benchmarks that allow us to determine whether a given outcome is Pareto-efficient in practice.
GDP vs. Gross Output vs. Intermediate Consumption: Measuring Pareto Efficiency
How can we determine if an economy is truly Pareto efficient? Since absolutes are elusive in both reality and mathematics, we must establish a practical benchmark that is independently verifiable. After all, independent verifiability distinguishes fact from hypothesis. The correct question is: How can we measure the relative Pareto efficiency of two economies, A and B, in a way that is independently verifiable—not just in theory but also in practice?
Currently, relative rankings of Pareto efficiency are based on real GDP per capita and its growth over time, adjusted for negative externalities such as environmental pollution. This approach dominates because no other available data objectively measures the relative efficiency of two economies in a verifiable way. However, this approach overlooks the costs associated with production, particularly intermediate inputs like oil and gas—necessary for production but not directly consumed by individuals. Reducing these inputs leads to greater efficiency, as fewer resources are used to achieve the same output. This principle underlies federal mandates on fuel efficiency and the broader green movement, which aim to reduce reliance on non-renewable resources, minimize intermediate consumption, and thus increase overall efficiency. While we do not make judgments on the real-world impact of these policies, their stated intent is clear: to improve productive efficiency by reducing resource use.
Consider house construction as an example. The finished house contributes to final consumption (or GDP) and enhances welfare as a final product. However, the lumber used to build the house is part of intermediate consumption—a necessary cost in creating the final product. If the builder can produce the same quality house using less lumber, intermediate consumption is reduced, thereby improving productive efficiency. This principle is universal: using fewer inputs to generate the same output is a hallmark of efficiency in production.
This distinction explains why Gross Output (GO), which captures all economic activity—including both final goods and services (measured by GDP) and intermediate consumption—is seldom emphasized. GO reflects the total volume of production, while GDP focuses on final goods and services, correlating more directly with consumer utility and welfare.
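As a purely illustrative sketch, using made-up figures rather than actual national accounts data, the short Python snippet below compares two hypothetical economies that deliver the same final output (GDP) with different levels of intermediate consumption. The GDP-to-GO ratio it prints is our own shorthand for the efficiency intuition discussed here, not an official statistic.

# Comparing two hypothetical economies (illustrative numbers only):
# GO = GDP (final goods and services) + intermediate consumption.
economies = {
    "Economy A": {"final": 100.0, "intermediate": 80.0},
    "Economy B": {"final": 100.0, "intermediate": 50.0},
}

for name, e in economies.items():
    gross_output = e["final"] + e["intermediate"]  # Gross Output
    final_share = e["final"] / gross_output        # share of activity that is final output
    print(f"{name}: GDP={e['final']:.0f}, GO={gross_output:.0f}, "
          f"GDP/GO={final_share:.2f}")

Economy B reaches the same final consumption with less intermediate input, so its Gross Output is lower and a larger share of its total activity ends up as final goods and services, which is the efficiency gain described in the house-construction example above.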
The more an economy can reduce intermediate consumption without sacrificing output, the more efficient it becomes. However, GDP, as currently calculated by governments, includes not only final goods and services but also government expenditures, such as military spending. Military spending is classified as final expenditure because it represents a final outlay by the government, not an intermediate input used in further production.
Nevertheless, government spending does not enhance general welfare in the same way that consumer goods do. Expenditures like defense spending are necessary costs—akin to paying for security services, which maintain order but do not directly increase consumer well-being. For instance, hiring a security guard to check IDs as you enter a building is a necessary cost, but it does not directly enhance consumer welfare. Similarly, while defense spending provides essential security, it does not improve welfare in the same way that increased consumption of goods and services does.
The same principle applies to education and social welfare. These are costs incurred to achieve specific societal benefits, and as long as those benefits are realized, achieving them at lower cost is more efficient. The money spent on schooling is a cost incurred to obtain the educational outcome, just as with learning a new language: the faster and more affordably you can learn Spanish, the better, because the lower the cost, the greater the net benefit. Similarly, the faster and more affordably housing for the needy can be built, the greater the benefit to society, maximizing general welfare.
While government spending indirectly supports the economy by facilitating trade and protecting citizens, it remains a cost, much like intermediate consumption. It does not directly enhance consumer welfare in the way that consumer goods and services do. However, current national accounting standards classify government spending, including military expenditures, as part of GDP because it is considered final expenditure. Redefining it as intermediate consumption would require revising the definitions of "final" and "intermediate" consumption in GDP calculations. Properly classifying these expenditures is critical, as reducing costs without reducing output improves productivity. Nevertheless, the current classification aligns with international accounting standards.
Things become clearer when we consider the source of these standards: they are shaped by those who benefit from them, often classifying government expenditures—like the salaries of officials who draft these standards—as benefits rather than costs. This tacit assumption overestimates welfare contributions from government spending. GDP captures all final expenditures, including those by the government, regardless of their true contribution to welfare. This misclassification of costs as benefits facilitates rent-seeking and contributes to the principal-agent problem, where agents (government officials) prioritize their own interests over those of the public.
As North Koreans might observe, even if military spending is efficient, it can still diminish welfare if a disproportionate portion of GDP is allocated to the military rather than services that directly benefit the population. Welfare is maximized when GDP is used to produce goods and services that enhance well-being, rather than excessive military spending. This highlights a deeper issue: the axiomatic-definitional misclassification of costs as benefits in mainstream economic accounting can enable rent-seeking behaviors, detracting from true economic welfare.
Many economists accept these flawed definitions, often without direct personal benefit. This can be attributed to theory-induced blindness (DIBIL)—a cognitive bias where academics unknowingly propagate incorrect assumptions. While some errors are honest attempts to model reality, others are deliberate, driven by rent-seeking behaviors. For example, why do theoretical physicists continue using the Axiom Schema of Separation in Zermelo-Fraenkel set theory, which fails to describe inseparable entities like entangled particles? Whether due to historical inertia, reluctance to challenge the status quo, or simply complacency akin to the old Soviet joke, "They pretend to pay us, and we pretend to work," this persistence is notable in both quantum physics and economics. However, the misclassification of defense spending as final consumption is unlikely to be random.
This paper aims to explore the root causes of purposeful definitional errors in economic accounting and policy. These are not random mistakes but deliberate behavioral nudges, similar to how businesses influence consumer behavior by replacing opt-in policies with opt-out ones, increasing uptake. Such nudges enable unearned wealth extraction by economic agents—or parasites—as predicted by the Rent-Seeking Lemma. According to public choice theory, rent-seeking agents manipulate definitions and policies to prioritize their utility over public welfare.
The universality of rent-seeking becomes particularly clear when we consider Vladimir Lenin's characterization of "economic parasites" as individuals who consume goods and services produced by others without contributing to the creation of real GDP. This concept is echoed across economic theories. In public choice theory (Tullock-Buchanan), these individuals are referred to as successful rent-seekers, extracting unearned wealth in the form of economic rents. In agency theory (Jensen-Meckling), they are termed fraudulent agents, extracting unearned wealth through agency costs.
Despite different terminologies, the core idea remains the same: successful rent-seekers—or economic parasites—inevitably consume goods and services produced by others without making a reciprocal contribution. This mirrors finding $100 on the street and using it to buy goods and services one did not produce—an unexpected windfall unrelated to productive efforts.
We posit as a self-evident truth that any parasitic infestation—whether locusts devouring crops, termites or flying carpenter ants destroying homes, or rent-seekers and other economic parasites like thieves and robbers pilfering wealth—leads to deadweight loss. It directly reduces efficiency by allowing non-productive economic parasites to consume goods and services without contributing. Identifying such rent-seeking behavior helps mitigate the inefficiencies it introduces.
While GDP is useful, it currently misclassifies costs, such as government expenditures, as welfare-enhancing final consumption, leading to inefficiencies. To properly measure Pareto efficiency, especially across economies, we must refine national accounting standards to accurately distinguish between true final consumption and necessary costs like government spending. By doing so, we can better reflect an economy's actual contribution to welfare and prevent rent-seeking behaviors.
Although this introduction has been extensive, there is much undiscovered rent-seeking behavior lurking beneath the surface. If you look under the right rocks—using a formal system—you can expose DIBIL and the rent-seeking activities it facilitates through economically compromised individuals. These individuals, who themselves fit Lenin's definition of "economic parasites," propagate flawed economic theories and thereby pave the way for rent-seekers to influence legislation that allows unearned wealth to be extracted in the form of economic rents.
How? Before we label anyone else as misguided or criticize their approaches, we begin our discussion of "misguided individuals" with Karl Marx, who was mostly right but made one wrong assumption—a common mistake, as we explain next.
Karl Marx: What Was He Trying to Say?
Karl Marx fundamentally argued that by analyzing the economy as a system where equilibrium is Pareto-efficient, we can identify group-optimal outcomes for society. In simpler terms, Marx sought to understand how humans, collectively, can maximize general welfare by enhancing collective benefits and minimizing collective costs through voluntary and equitable economic exchanges. The ultimate goal of maximizing welfare can be broken down into two key objectives:
Maximizing Collective Benefits: This involves improving labor productivity, allowing individuals to gain more leisure time and better enjoy the fruits of their labor.
Minimizing Collective Costs: This involves reducing negative externalities, such as resource depletion and pollution (e.g., plastic waste in oceans), which impose costs on society as a whole.
What makes this analysis particularly interesting is that, in the absence of externalities like pollution, Pareto-efficient outcomes—derived from the First Welfare Theorem in the Arrow-Debreu framework—can be achieved through Pareto-improving exchanges. In such exchanges, agents trade their labor for the goods and services they consume, using money as a unit of account to establish arbitrage-free prices. But what do "arbitrage-free prices" mean? In the context of Pareto-efficient and real-world economic outcomes, the explanation is straightforward: arbitrage-free prices ensure that no one can make a profit without contributing real value, preventing price discrepancies across markets.
Here is where Marx's analysis intersects intriguingly with concepts like Pascal's Wager. For Marx, rationality—especially given the persistent rent-seeking behavior in various religious organizations, such as selling indulgences—led him to a critical conclusion about religion. He famously argued that religion was "the opium of the people" (Marx, 1843), a tool used to pacify the masses. This belief was largely based on his interpretation of the H₀ hypothesis, which religious authorities insisted upon as the ultimate truth. But what about the H₁ hypothesis—the alternative hypothesis?
Our Approach: Under the Properly Selected H₁ Hypothesis
In this paper, we posit an axiomatic assumption—drawing inspiration from Pascal's philosophical reasoning—that a higher-order entity exists with specific attributes. Specifically, we assume that God is all-powerful and all-loving, aligning with traditional teachings about Yahweh, God the Father of Jesus, and Allah as described in the Qur'an. Under our properly and formally defined H₁ hypothesis, these attributes define what we refer to as "God." These teachings can be traced back to original source material, notably the Torah. Some scholars argue that the Torah may have roots in Egyptian mythology, particularly influenced by the ancient Hermetic principle: "As above, so below." This principle becomes compelling when considering the complex interplay between the exchange rates of goods and services in an economy. Before delving into that, let us explore some speculative connections between these concepts.
Assuming the existence of a higher-order entity, we can draw parallels to Roger Penrose's hypotheses regarding universal consciousness and quantum effects—concepts that echo ancient Hermeticism. Hermeticism posits that God is "the All," within whose mind the universe exists—an omnipotent force shaping reality. This idea resonates with core beliefs from Egyptian religion, which influenced the Abrahamic religions central to Pascal’s Wager: Judaism, Christianity, and Islam. The concept of God as "the All" can be analogized to the quantum field in modern physics, where everything is interconnected—a notion Einstein alluded to when describing "spooky action at a distance."
"Spooky action at a distance" refers to quantum entanglement, a phenomenon that troubled Einstein because it seemed to imply that fundamental interactions in the universe are interconnected in ways that classical physics cannot explain. Unlike Einstein, whose approach was deeply theoretical, our perspective is rooted in practical applications. With over 30 years of trading mathematical arbitrage on Wall Street, we’ve applied formal systems to generate consistent profits, focusing only on tangible, independently verifiable results. On Wall Street, as famously depicted in the movie Wall Street, strategies are not based on chance but on calculated, sure outcomes. This pragmatic approach compels us to accept empirical evidence suggesting that the universe operates on principles that could be interpreted as God "playing dice." Understanding the mechanics behind this, we argue, presents both intellectual and financial opportunities. Pursuing an understanding of these universal designs is a logical endeavor, one that could naturally lead to rewards.
Einstein’s equation, E=mc², unveils a profound relationship between energy and mass—a fundamental balance in the physical world. Analogously, this concept can inspire insights into other systems of balance and transformation. In economics, this idea is reflected in the principle of Pareto Efficiency, a cornerstone of mathematical economics. Pareto Efficiency describes a state where no individual can be made better off without making someone else worse off—a perfect allocation of resources that maximizes productivity and welfare. This concept mirrors the moral and ethical equilibrium envisioned in religious texts like the Torah, where adherence to divine commandments theoretically results in a harmonious society.
According to the First Welfare Theorem in the Arrow-Debreu model of mathematical economics, a Pareto-efficient equilibrium—where both welfare and productivity are maximized—is guaranteed in a perfectly competitive market. This economic ideal parallels the moral adherence proposed in religious traditions, where following divine law could theoretically lead to an ideal social equilibrium. Just as perfect trade conditions in a market lead to Pareto efficiency, adherence to moral laws may lead to a "perfect" societal balance, maximizing both individual and collective well-being.
Here, Karl Marx may have missed an opportunity to apply the same rigorous analysis he used in economics to examine the complexities of belief systems. Could there be a deeper interplay between rent-seeking behavior and the way religious doctrines are stated? In reality, what Marx was attempting to articulate aligns with Adam Smith’s notion that through mutually beneficial trade, individuals maximize their labor productivity and minimize the amount of time they spend working. Essentially, this involves trading one’s labor, measured in wages and money, for goods and services, thereby effectively exchanging labor for consumption in a market-driven economy.
The Labor-For-Goods Dynamic Equilibrium Model within Mathematical Economics
Mathematical economics operates as a formal system, in which theorems—such as the First Welfare Theorem—are derived from a set of foundational axioms and formal inference rules. These axioms include key assumptions such as local non-satiation, convex preferences, and the existence of complete markets. From these premises, the First Welfare Theorem is derived, establishing that any competitive equilibrium is Pareto efficient. This theorem, along with others like the Second Welfare Theorem, forms the foundation of the Arrow-Debreu model, which is central to mainstream mathematical economics. For instance, the Federal Reserve Bank of the United States employs general equilibrium models based on the Arrow-Debreu framework to inform critical policy decisions, such as the setting of interest rates.
While the conclusions drawn from the Arrow-Debreu axioms—such as rational, utility-maximizing representative agents—are theoretically robust within the model's idealized conditions (such as perfect markets), our paper introduces a dynamic alternative. Specifically, we present a model that demonstrates how Pareto-efficient Nash equilibria, as predicted by the First Welfare Theorem, can be achieved through dynamic processes rather than static ones. Our model, centered on exchanges of labor for goods and services, illustrates how a series of mutually beneficial, Pareto-improving trades leads to the same Pareto-efficient Nash equilibrium predicted by the First Welfare Theorem, but through a dynamic mechanism. We call this framework the Labor-For-Goods Game Theory Model, which formalizes the existence of a Pareto-efficient Nash Equilibrium through ongoing trade interactions.
This dynamic model is central to our paper, as all claims and assertions are developed within its framework. Our model does not contradict the Arrow-Debreu framework; instead, it leverages specific axioms to reflect the dynamic processes observed in real-world markets. While the Arrow-Debreu model focuses on static equilibrium, our model emphasizes how Pareto-efficient outcomes emerge through continuous, mutually beneficial trades. This approach offers a more nuanced understanding of equilibrium, not as a static state, but as an emergent property of ongoing trade interactions.
Explanation: Labor-For-Goods (and Services) Setup
In the Labor-For-Goods (and Services) framework, we model Pareto-efficient outcomes using game theory as group-optimal Nash equilibria. Unlike in the Prisoner’s Dilemma, where individual incentives lead to suboptimal outcomes, rational, utility-maximizing agents in this model exchange their labor for goods and services to achieve a group-optimal, Pareto-efficient result. This is made possible by the assumption of perfect (symmetric) information, similar to that used in the First Welfare Theorem, but with the added constraint of no arbitrage.
In this system, money functions as the unit of account, measuring both wages and prices. The Nash equilibrium in this setup results in a Pareto-efficient allocation, meaning that no agent can be made better off without making another worse off.
While not all Nash equilibria are Pareto efficient—as exemplified by the Prisoner’s Dilemma—our model is specifically designed to ensure that the Nash equilibrium leads to a Pareto-efficient outcome. This is achieved by maximizing mutual benefits through trade under three key assumptions:
Arbitrage-free prices,
Symmetric information about the goods and services being exchanged, and
Unfettered voluntary trade in an open market.
These assumptions—(1) arbitrage-free prices, (2) symmetric information, and (3) voluntary exchanges driven by rational agents seeking to improve their individual welfare—ensure that all trades are ex-ante mutually beneficial (before the trade) and ex-post mutually beneficial (after the trade). The absence of information asymmetry is crucial for preserving this mutual benefit.
By eliminating information imperfections that could otherwise distort trade outcomes, this setup guarantees at least a locally Pareto-efficient allocation of resources. These conditions create an ideal environment where agents engage in trades that enhance the welfare of all parties. As a result, the model upholds both the rational decision-making of individual agents and the collective welfare of the economy.
The Economic Model and Collective Costs
Mathematically, this economic model—understood as a formal system of real-world interactions—holds because the only net costs involved in producing real GDP at the collective level are:
The labor contributed by individuals, and
Negative externalities, such as pollution and resource depletion, which affect society as a whole.
Externalities are costs imposed on third parties not directly involved in a transaction, making them collective costs. Similarly, labor constitutes a collective cost because every agent in the economy contributes labor in some form, except for those engaged in non-productive or harmful activities, such as theft or economic exploitation. A sound formal system must account for all agents, including those who do not contribute positively to the economy.
While firms and individuals incur private costs for inputs such as raw materials, capital, or technology, these are not collective costs in the same way that labor and externalities are. For example, the ownership of raw materials used for intermediate consumption does not directly affect final consumption (i.e., GDP), which ultimately determines collective welfare. Although intermediate goods contribute to final GDP through production processes, the mere transfer of ownership (e.g., through stock market trading) reflects a redistribution of wealth rather than a contribution to productive activity. Such ownership transfers do not influence Pareto efficiency unless externalities are involved.
However, externalities related to ownership changes—such as positive externalities from more efficient capital allocation when stock prices are accurately established—fall outside the primary scope of this model and would require separate analysis. Nonetheless, our dynamic model offers insights into both positive and negative externalities related to ownership changes, which can be further explored in future layers of analysis.
Private vs. Collective Costs: Ownership’s Role in Pareto Efficiency
Negative externalities—such as pollution or resource depletion—are collective costs borne by society as a whole, whereas the ownership of capital is a private cost that does not directly influence collective welfare. In contrast, labor represents a net contribution by everyone, making it a universal collective cost in this framework. Therefore, negative externalities and labor are the primary collective costs considered in our model.
To illustrate this, consider Bob and Alice on a deserted island. Their collective costs and benefits are optimized through mutually beneficial trades, leading to a Pareto-efficient outcome, where neither can be made better off without making the other worse off.
However, when defining Pareto efficiency, the concept of ownership becomes irrelevant. Whether Bob "owns" the banana tree or Alice "owns" the water spring has no direct impact on the outcome. What matters is the exchange of resources in a mutually beneficial way. For example, even if Bob claims ownership of the banana tree and Alice claims ownership of the water spring, they can still achieve a Pareto-efficient outcome through trade. The perception of ownership is irrelevant as long as resources are allocated in a way that prevents either party from improving their welfare without reducing the welfare of the other.
In simpler terms, Pareto efficiency is concerned not with what resources agents think they own but with what they actually exchange. By trading the fruits of their labor, Bob and Alice maximize collective welfare, aligning with Adam Smith’s principle from The Wealth of Nations—that mutually beneficial trade improves overall welfare by maximizing labor productivity, thus reducing the time spent on labor. This principle, self-evidently true since 1776, serves as a foundational axiom in our formal system, where the fruits of one’s labor, measured by wages, are exchanged for the fruits of another’s labor, measured by price.
No sound formal system, based on such self-evident axiomatic assumptions, can contradict real-world facts. In this sense, Pareto efficiency pertains to how resources are allocated through trade, not to who claims ownership of them. Once mutually beneficial trades cease (i.e., when no further Pareto improvements can be made), the economy has reached an efficient state—regardless of resource ownership.
Conclusion: The Universal Role of Labor and Externalities
From a macroeconomic perspective, labor and negative externalities are the primary collective costs that impact everyone in the economy. This holds true both in practical reality and in the mathematical foundation of our model. These core principles regarding collective costs are not only empirically testable but also logically consistent within the model's mathematical structure, built on reasonable economic assumptions. As a result, the model provides a robust framework for understanding how collective costs shape economic outcomes.
Pareto Efficiency and Gradient Descent: The Role of Money and Arbitrage-Free Exchanges
In this model, Pareto efficiency is achieved in a manner analogous to gradient descent optimization. The process unfolds through a series of Pareto-improving exchanges between rational, utility-maximizing agents in the economy. Each unfettered exchange is akin to a step in a gradient descent algorithm, where participants trade goods, services, or labor in ways that improve collective welfare—just as each step in gradient descent reduces a cost function.
Money plays two key roles in this process:
As a unit of account, money allows participants to measure and compare the value of goods and services, facilitating fair exchanges.
As a medium of exchange, it enables transactions to occur smoothly, allowing the economy to "move" through the gradient of mutually beneficial trades.
Additionally, money functions as a store of value when it is not actively used for exchanges, such as when funds are held in a bank account for extended periods.
This aligns with empirical data from the Federal Reserve Bank of the United States1, which identifies money’s three key functions: medium of exchange (E), unit of account (U), and store of value (S). These functions are universally observed in real-world economies. Any formal model that ignores them would not only be inconsistent with empirical reality but also mathematically unsound, as it would contradict the key definitions of how money operates in economic systems.
We also assume the principle of no free lunch, meaning that no arbitrage opportunities exist. All trades are mutually beneficial and reflect fair value, with no possibility of risk-free profit. This corresponds to the "no free lunch" concept in gradient descent, where the algorithm progresses naturally toward an optimal solution without shortcuts. This assumption is crucial for the model to align with reality. As the economy progresses through a series of these mutually beneficial, arbitrage-free exchanges, it converges toward Pareto efficiency, much like gradient descent iteratively approaches the minimum of a cost function.
Each exchange nudges the economy closer to a state where no further Pareto improvements can be made. In gradient descent, optimization stops when the gradient of the cost function reaches zero—indicating that the minimum has been reached. Similarly, in our model, Pareto efficiency is achieved when no additional mutually beneficial trades are possible. At this final state, no one can be made better off without making someone else worse off—just as gradient descent halts once it reaches an optimal point.
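To illustrate the analogy, and only as a toy sketch rather than the Labor-For-Goods model itself, the following Python simulation lets two agents with assumed endowments and an assumed utility function make small voluntary trades of good X for good Y at a fixed, arbitrage-free price. Each executed trade must raise both utilities, and the loop halts when no such trade remains, the analogue of a vanishing gradient.

def utility(bundle):
    """Illustrative (assumed) utility: agents value a balanced mix of X and Y."""
    x, y = bundle
    return x * y

def trade_round(a, b, step=0.1, price=1.0):
    """Try a small trade in either direction; return updated bundles, or None
    if no mutually beneficial (Pareto-improving) trade exists."""
    candidates = [
        # A sells `step` of X to B for `price * step` of Y ...
        ((a[0] - step, a[1] + price * step), (b[0] + step, b[1] - price * step)),
        # ... or B sells `step` of X to A.
        ((a[0] + step, a[1] - price * step), (b[0] - step, b[1] + price * step)),
    ]
    for new_a, new_b in candidates:
        if min(new_a + new_b) < 0:
            continue  # infeasible: negative holdings
        if utility(new_a) > utility(a) and utility(new_b) > utility(b):
            return new_a, new_b  # voluntary, mutually beneficial exchange
    return None  # no Pareto improvement left: the "gradient" is zero

alice, bob = (10.0, 2.0), (2.0, 10.0)  # assumed initial endowments of (X, Y)
while (result := trade_round(alice, bob)) is not None:
    alice, bob = result

print("Final allocations:", alice, bob)
print("Final utilities:", utility(alice), utility(bob))

Starting from mirrored endowments of (10, 2) and (2, 10), the process converges to approximately (6, 6) for each agent, at which point no further mutually beneficial exchange exists and the allocation is Pareto-efficient with respect to these assumed preferences.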
Conditions and Axioms
Our core axiom of human behavior is the principle of rational utility maximization, a fundamental assumption in both mathematical economics and game theory. This axiom posits that individuals act to maximize their utility or wealth, subject to the constraints they face.
To more accurately reflect observed economic realities, we introduce the Rent-Seeking Lemma: the rational, utility-maximizing representative agent is prone to fraudulent or opportunistic behavior when the perceived costs of engaging in such actions are sufficiently low. This lemma acknowledges that agents will exploit opportunities for personal gain if the penalties or risks of such behavior are minimal, which deviates from the idealized assumption that all agents always act in a socially optimal manner.
Rent-Seeking Lemma
The Rent-Seeking Lemma posits that rational, utility-maximizing agents are prone to opportunistic behavior when the perceived costs of exploiting such opportunities are sufficiently low. This behavior leads to inefficiencies and underscores the importance of robust property rights and well-functioning markets to mitigate these tendencies.
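A minimal numerical sketch of this incentive, assuming for illustration a risk-neutral agent and arbitrary payoff, detection, and penalty figures, is given below in Python; it simply compares the expected value of an opportunistic action with the payoff from honest behavior.

# Expected-value comparison behind the Rent-Seeking Lemma (illustrative numbers).
def expected_payoff(gain, detection_probability, penalty):
    """Expected value of an opportunistic action for a risk-neutral agent."""
    return (1 - detection_probability) * gain - detection_probability * penalty

honest_payoff = 10.0        # value created and captured honestly
opportunistic_gain = 25.0   # unearned rent if the action goes undetected

for p_detect, penalty in [(0.05, 50.0), (0.60, 50.0)]:
    ev = expected_payoff(opportunistic_gain, p_detect, penalty)
    choice = "rent-seek" if ev > honest_payoff else "stay honest"
    print(f"detection={p_detect:.0%}, penalty={penalty:.0f}: EV={ev:.2f} -> {choice}")

With weak enforcement (5 percent detection), the opportunistic action has the higher expected value and the rational agent rent-seeks; raising the detection probability or the penalty reverses the choice. This is why the lemma points to robust property rights and enforcement as the institutions that mitigate such behavior.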
This phenomenon is well-documented in Agency Theory, particularly in Jensen and Meckling’s 1976 paper, Theory of the Firm, which introduced the principal-agent problem. In this framework, managers (agents) may act in their own interests rather than in the best interests of the owners (principals). Their 1994 work, The Nature of Man, further formalized the axiomatic structure of economic systems based on rational, utility-maximizing agents, aligning closely with the Rent-Seeking Lemma. It illustrates how rational agents, given the opportunity, may exploit commercial transactions for personal gain, even at the expense of overall market efficiency.
Further evidence of rent-seeking behavior is found in George Akerlof’s 1970 paper, The Market for Lemons, which illustrates how information asymmetries in markets can lead to exploitation. Better-informed agents extract value from less-informed counterparts, a form of wealth-extracting behavior described by the Rent-Seeking Lemma. This erodes market efficiency by redistributing wealth without corresponding productive contributions, aligning with both Agency Theory and public choice theory.
Interestingly, both Marxist theory and free-market economics acknowledge the tendency for unearned wealth-seeking. Vladimir Lenin criticized the nonproductive bourgeoisie as "economic parasites," accusing them of consuming valuable goods and services without contributing to real GDP. This critique mirrors rent-seeking behavior described in public choice theory, as developed by Gordon Tullock and James Buchanan (the latter receiving the 1986 Nobel Prize for his work). In public choice theory, successful rent-seekers—akin to Lenin’s "economic parasites"—extract wealth without contributing to productivity.
Thus, the Rent-Seeking Lemma captures a universal phenomenon: in both free-market and Marxist critiques, a subset of agents exploits systemic opportunities to accumulate wealth without producing value, distorting economic efficiency and fairness. However, this does not imply that Marx's broader conclusions were correct. Quite the opposite: his ideas were fundamentally flawed. Marx's error lay in his belief that the bourgeois principals could extract unearned wealth from the agents (the workers), who are, by definition, the better-informed party. This contradicts Agency Theory, which shows that unearned wealth typically flows in the opposite direction: from less-informed principals to better-informed agents.
These contradictions with empirical truths render Marxism an unsound formal system. The tragic consequences of relying on such flawed theories were starkly demonstrated during the Holodomor in Ukraine, where Soviet collectivization efforts led to widespread famine and even instances of real-world cannibalism—a historical fact in the twentieth century. This empirical reality underscores the dangers of relying on unsound formal systems, where theoretical errors can lead to catastrophic real-world outcomes.
By contrast, on Wall Street, we avoid such fundamental mistakes. The use of rigorous formal systems is essential for making real, reliable profits, ensuring that decisions are grounded in sound, empirically tested models rather than flawed theoretical assumptions. Those of us who actually make money on Wall Street, as the movie says, don’t "throw darts at the board"—we bet on sure things by applying formal systems in mathematical arbitrage, much like Jim Simons and his team at Renaissance Technologies. If you’re curious, it’s worth looking up what they do.
Soundness, Completeness, and Consistency in Formal Systems
We raise the issue of the unsoundness of the Marxist formal system of economics to illustrate that for any formal system to be considered sound, none of its axioms or definitions can contradict empirical, objective, real-world facts. In a sound system, all conclusions must logically follow from its axioms, and those axioms must align with observable reality—defined as being self-evidently true—if the system is intended to model the real world.
This principle explains why communism, derived from Marxist economic formal systems, has consistently failed in practice, despite being implemented multiple times. The unsoundness arises because the system’s axioms—such as its assumptions about agency costs and the flow of wealth—contradict observable economic behaviors and incentives. Just as a mathematical system becomes unsound when its axioms contradict facts, any economic formal system that violates empirical truths will fail to produce reliable models of reality, leading to systemic collapse and widespread failure.
Maintaining soundness via dual-consistency in a formal system is, therefore, crucial for reliably modeling and predicting real-world outcomes.
This brings us to the Arrow-Debreu framework, which, while sound, is incomplete. In this model, money is primarily defined as a unit of account, which works well in equilibrium because that’s the role money plays once the system has reached a steady state. However, the other functions of money—store of value and medium of exchange—become essential during the dynamic process of achieving equilibrium in real-world economies. By focusing solely on equilibrium, the Arrow-Debreu model does not explain how the economy dynamically reaches that equilibrium, leaving the model incomplete.
Our Labor-For-Goods Game Theory model complements the Arrow-Debreu framework by explaining how equilibrium is dynamically achieved. It incorporates the full definition of money as it operates in the real world—serving as a unit of account, store of value, and medium of exchange—thus completing the model. By accounting for the dynamic process through which economies reach equilibrium, our model maintains both soundness and completeness, while ensuring consistency with real-world economic behavior.
The Gradient Descent Process: Arbitrage-Free Exchange Rates
To recap: each exchange in the economy brings it closer to a more efficient allocation of resources, much like how each step in gradient descent moves toward an optimal solution. In this analogy, each mutually beneficial trade is a step toward an economy-wide Pareto-efficient allocation. These trades improve general welfare by enabling participants to exchange in ways that benefit both parties, without creating arbitrage opportunities. Eventually, the process of mutually beneficial exchanges reaches a point where no further improvements can be made—similar to reaching the maximum or minimum of a function where the gradient becomes zero. At this point, Pareto efficiency is achieved: no one can be made better off without making someone else worse off, and no more mutually beneficial trades are possible.
The arbitrage-free exchange rates condition in this model follows the same no-arbitrage principle that governs exchange rates in the foreign exchange (Forex) market. Let the exchange rate matrix E represent the rates between approximately 30 major currencies, where the element eij represents the exchange rate from currency i to currency j. The no-arbitrage condition requires that the exchange rate from currency i to j is the reciprocal of the exchange rate from currency j to i (i.e., eij = 1 ÷ eji).
For example, if 1 USD buys 0.5 GBP, then 1 GBP must buy 2 USD. This condition eliminates arbitrage opportunities by enforcing symmetry and reciprocity in exchange rates. Mathematically, this is expressed by stating that matrix E is equal to the transpose of E after taking its element-wise reciprocal, also known as the Hadamard inverse.
The Hadamard inverse of an n-by-n matrix E = [eij] is defined element-wise as:
E∘(−1) = [1 ÷ eij], for all i, j = 1, …, n.
The no-arbitrage constraint imposed on E is given by:
Eᵀ = E∘(−1), or equivalently, E = (E∘(−1))ᵀ, so that eji = 1 ÷ eij for every pair of currencies i and j.
As we can see, the Hadamard inverse and the transpose are commutative, and the no-arbitrage condition can be stated either way, equivalently. This condition ensures consistent exchange rates and prevents risk-free profit opportunities. We will get back to this constraint later in the paper.
In practice, the no-arbitrage condition in the Forex market is enforced using the US dollar as the unit of account to determine cross rates for currency pairs like JPY/EUR or GBP/EUR. In these cases, the dollar is not used as a medium of exchange but serves purely as a unit of account to ensure consistent pricing and prevent arbitrage opportunities.
In the foreign exchange market, where goods (represented by currencies) are exchanged directly without using money as a medium of exchange, we can clearly see that the primary role of money—aligned with the Arrow-Debreu framework—is that of a unit of account. This role is necessary to enforce the no-arbitrage condition on the exchange rate matrix by quoting prices in a consistent unit of account, such as the US dollar’s role in the Forex market.
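A brief Python sketch, using NumPy and illustrative (non-market) rates for three currencies, demonstrates both points: that the exchange-rate matrix satisfies the reciprocity constraint Eᵀ = E∘(−1), and that a GBP-to-EUR cross rate derived with the US dollar as the unit of account matches the quoted rate. The currency list and all numbers are assumptions for the example only.

# No-arbitrage reciprocity check and a USD-based cross rate (illustrative rates).
import numpy as np

CURRENCIES = ["USD", "GBP", "EUR"]
# E[i][j] = units of currency j received for one unit of currency i.
E = np.array([
    [1.0,   0.5,   0.8],    # 1 USD -> 0.5 GBP, 0.8 EUR
    [2.0,   1.0,   1.6],    # 1 GBP -> 2 USD, 1.6 EUR
    [1.25,  0.625, 1.0],    # 1 EUR -> 1.25 USD, 0.625 GBP
])

hadamard_inverse = 1.0 / E  # element-wise reciprocal of E
print("E^T equals the Hadamard inverse of E:", np.allclose(E.T, hadamard_inverse))

# Cross rate GBP -> EUR derived with the USD as the unit of account:
usd, gbp, eur = (CURRENCIES.index(c) for c in ("USD", "GBP", "EUR"))
cross_gbp_eur = E[usd][eur] / E[usd][gbp]  # (EUR per USD) / (GBP per USD)
print("Derived GBP->EUR cross rate:", cross_gbp_eur, "| quoted:", E[gbp][eur])

If any entry violated reciprocity, or any cross rate diverged from its dollar-derived value, a risk-free round trip would exist, which is exactly the kind of arbitrage that quoting prices in a single unit of account precludes.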
Mathematically, arbitrage—such as profiting from trading currencies in the FX market—represents unearned wealth obtained through superior information. This is similar to how a used car dealer in a "lemon" market extracts unearned wealth from an uninformed buyer. An economic parasite, or arbitrageur, can gain wealth by exploiting currency discrepancies without contributing to productivity.
This is akin to finding $100 on the street—the person who found the money can use it to purchase goods and services, consuming resources without making any reciprocal contribution to productivity. This behavior aligns with Lenin’s definition of economic parasites and corresponds to successful rent-seekers in public choice theory, who gain wealth through manipulation or exploitation rather than through productive activities.
In public choice theory, rent-seeking includes opportunistic behavior such as arbitrage. To prevent such behavior, prices are structured relative to a unit of account, ensuring consistency across markets. By maintaining uniform pricing, this structure eliminates inconsistencies that could otherwise be exploited for arbitrage. As a result, the behavior of economic parasites—who would otherwise capitalize on price discrepancies—is effectively precluded.
Thus, it becomes clear that the primary function of money is as a unit of account. It only serves as a medium of exchange secondarily, facilitating transactions for goods and services. Given that most money today is digital, its role as a unit of account is paramount. However, we will explore this in greater detail in the main section of the paper.
The Role of Property Rights and Arbitrage-Free Pricing
Although the First Welfare Theorem assumes ideal market conditions, such as voluntary trade and symmetric information, it does not explicitly address the need for well-defined property rights. However, both the Rent-Seeking Lemma and the principal-agent problem illustrate that clear and enforceable property rights are essential for market efficiency. Without these rights, agents who fail to perform their fiduciary duties—often described as economic parasites—can exploit their positions in any organization, including government, to extract unearned wealth in the form of agency costs or economic rents. This behavior introduces significant inefficiencies, which can be severe enough to prevent Pareto efficiency in real-world economic systems.
The importance of property rights becomes even more apparent when considering that, under the Rent-Seeking Lemma and the principal-agent problem, the only individuals whose incentives are truly aligned with maximizing labor productivity are the beneficial owners. These owners directly reap the rewards of productivity improvements. By contrast, workers compensated with fixed wages are more likely to prioritize their own self-interest, which may not align with maximizing labor productivity. Under this framework, the principal-agent problem persists in most commercial, arm's-length transactions, though it may not apply where personal relationships, such as family ties, are involved (e.g., when a family member runs the business). Nevertheless, the principal-agent problem remains pervasive within the broader axiomatic framework, highlighting the crucial role of property rights in maintaining market efficiency.
Furthermore, to align with the concept of no unearned wealth, a market must also satisfy the no-arbitrage condition. Exchange rates between goods and services must remain consistent across markets to prevent arbitrage, where wealth-maximizing rational agents exploit price discrepancies for risk-free profits. Arbitrage disrupts market efficiency by enabling wealth extraction without productive contribution, similar to rent-seeking behavior. Without consistent pricing across markets, wealth can be unfairly redistributed through these exploitations, undermining both efficiency and fairness.
Implications of Opportunism: First Welfare Corollary
The tendency toward opportunistic behavior, as predicted by the Rent-Seeking Lemma under our core "opportunistic nature of man" axiom, implies that for trade to be genuinely mutually beneficial, two essential conditions must be met. This is referred to as the First Welfare Corollary of the Rent-Seeking Lemma of rational behavior:
Unfettered Markets: Traders must be free to engage in voluntary exchanges without undue restrictions. This freedom maximizes the opportunity for Pareto-improving trades, where at least one party benefits without making the other worse off.
Symmetric Information: To prevent exploitation, information symmetry is crucial. When one party possesses more information than the other, it can lead to rent-seeking behavior or the extraction of unearned wealth, undermining the fairness and efficiency of exchanges. Asymmetric information, as described by George Akerlof in The Market for Lemons, creates opportunities for opportunistic agents—sometimes referred to as "economic parasites" (a term borrowed from Lenin)—to extract value without contributing productively. This undermines the potential for mutually beneficial exchanges.
To maintain fairness and efficiency, markets must promote both information symmetry and unfettered voluntary exchange. However, while these conditions—unfettered trade and symmetric information—are required by the First Welfare Theorem and are key elements of the First Welfare Corollary, they are not sufficient on their own. Additional ideal market conditions are necessary for both the First Welfare Theorem and more complex models, such as the Labor-for-Goods model, to function effectively within a sound formal system that accurately reflects economic reality.
Market Conditions for Pareto Efficiency: Labor-For-Goods Game Theory Model
The following conditions are essential for achieving Pareto efficiency in the Labor-For-Goods Game Theory Model:
Well-Defined Property Rights: Clear and enforceable property rights prevent resource misallocation and promote optimal resource use. Agents can only trade goods they legitimately own, reducing the risk of rent-seeking through ambiguous claims or exploitation.
Voluntary Exchange: Voluntary exchange ensures that all trades are mutually beneficial. When agents freely engage in exchanges that improve or maintain their utility, the market moves toward Pareto improvements—trades where at least one party benefits without making the other worse off.
Symmetric Information: Symmetric information guarantees that all agents have access to the same information, preventing exploitation due to information asymmetry. With equally informed participants, the market functions more fairly, reducing opportunities for unearned wealth extraction and ensuring efficient resource allocation.
Arbitrage-Free Exchange Rates: Arbitrage-free exchange rates maintain price consistency across markets, preventing distortions caused by price discrepancies. The absence of arbitrage ensures that prices reflect the true value of goods and services, preventing agents from profiting without contributing productively to the economy.
Local Non-Satiation: Local non-satiation assumes that agents prefer more of a good to less, meaning they will continue trading until no further utility improvements are possible. This drives the market toward optimal resource allocation, as agents pursue mutually beneficial trades until no gains are left.
Perfect Competition: Perfect competition ensures that prices accurately reflect supply and demand. In a perfectly competitive market, no single agent can manipulate prices, resulting in fair and optimal pricing across goods and services. This facilitates efficient resource distribution by guiding agents' decisions in line with market conditions.
Complete Markets: Complete markets ensure that all potential trades can occur, eliminating unexploited gains from trade. When markets are complete, the exchange of all goods and services is possible, leaving no valuable trades unrealized.
No Externalities: The absence of externalities ensures that all social costs and benefits are reflected in market prices. When external costs (such as pollution) or benefits (such as public goods) are excluded from pricing, inefficiencies arise, distorting resource allocation. Proper pricing of externalities ensures the market reflects the true social value of goods and services.
Rational Behavior: Rational behavior assumes that agents act to maximize their utility or wealth, contributing to overall market efficiency. As part of the core axiom of utility maximization, rational behavior ensures that agents' decisions align with broader market outcomes.
Conclusion:
For the Labor-For-Goods model to function optimally and achieve Pareto efficiency, the market must not only ensure unfettered trade and symmetric information but also satisfy the additional conditions outlined above. Together, these conditions guarantee efficient resource allocation, prevent unearned wealth extraction through rent-seeking or arbitrage, and ensure that all potential gains from trade are realized. By adhering to these principles, the market reaches an allocation where no agent can be made better off without making someone else worse off.
Labor-For-Goods Game Theory Model: Formal Proof of Pareto Efficiency Under Assumed Conditions
We demonstrate that, under the assumptions of well-defined property rights, complete markets, symmetric information, voluntary exchange, local non-satiation, and arbitrage-free exchange rates, a competitive market will result in a Pareto-efficient allocation of resources. We begin by establishing a local Pareto optimum through mutually beneficial trades and then extend this result to a global Pareto optimum by introducing additional conditions that eliminate inefficiencies, ensuring that no further improvements can be made without making other agents worse off.
Part 1: Local Pareto Optimum Through Mutually Beneficial Trade
Assumptions for Local Pareto Optimum:
Symmetric Information: All agents have equal access to relevant information about the goods or services being traded.
Voluntary Exchange: Agents engage in trade only if both parties expect to benefit from the exchange.
Local Non-Satiation: Agents prefer more of any good to less, ensuring they continuously seek out and engage in beneficial trades.
Proof:
Symmetric Information and Voluntary Exchange: With symmetric information, no agent can exploit hidden knowledge to take advantage of another. Each trade is mutually beneficial, as both parties are fully aware of the value of the goods or services being exchanged. Since voluntary exchange implies that agents only trade when they expect to improve or maintain their utility, each exchange results in a Pareto improvement.
Key Result: Each trade improves or maintains utility for both parties, meaning no one is made worse off, and at least one party is better off.
Local Non-Satiation: Given that agents prefer more of a good to less, they will continue to trade as long as opportunities for mutually beneficial exchanges exist. This process pushes the market toward a local Pareto maximum, where all possible gains from trade have been realized, and no further mutually beneficial trades are possible.
Key Result: At the local market level, all mutually beneficial trades have been exhausted, and no agent can improve their position without making someone else worse off.
Conclusion (Local Pareto Maximum):
At this stage, no agent can further improve their welfare through additional mutually beneficial trades within the local market. Thus, a local Pareto optimum is achieved, where no further Pareto-improving trades are possible within the given set of exchanges.
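To illustrate Part 1 numerically, the following toy simulation (our own sketch, not part of the proof; the Cobb-Douglas utilities, endowments, and swap size are hypothetical) lets two agents accept a proposed exchange only when neither is made worse off and at least one is made strictly better off. Trading halts precisely when no Pareto-improving swap remains, that is, at a local Pareto optimum.

```python
import random

def utility(bundle, weights):
    # Cobb-Douglas utility over two goods; weights are hypothetical preference parameters.
    g1, g2 = bundle
    a, b = weights
    return (g1 ** a) * (g2 ** b)

# Hypothetical endowments of (labor hours, goods) and preferences for two agents.
agents = [
    {"bundle": [9.0, 1.0], "weights": (0.5, 0.5)},
    {"bundle": [1.0, 9.0], "weights": (0.5, 0.5)},
]

random.seed(0)
step, failures = 0.1, 0

# Voluntary exchange: a randomly proposed small swap is executed only if it is a Pareto improvement.
while failures < 10_000:
    a, b = agents
    d1, d2 = random.choice([-step, step]), random.choice([-step, step])
    new_a = [a["bundle"][0] + d1, a["bundle"][1] + d2]
    new_b = [b["bundle"][0] - d1, b["bundle"][1] - d2]
    if min(new_a + new_b) < 0:
        failures += 1
        continue
    ua0, ub0 = utility(a["bundle"], a["weights"]), utility(b["bundle"], b["weights"])
    ua1, ub1 = utility(new_a, a["weights"]), utility(new_b, b["weights"])
    if ua1 >= ua0 and ub1 >= ub0 and (ua1 > ua0 or ub1 > ub0):
        a["bundle"], b["bundle"] = new_a, new_b   # mutually beneficial: the trade happens
        failures = 0
    else:
        failures += 1                              # someone would be worse off: no trade

print("Approximate local Pareto optimum:", [ag["bundle"] for ag in agents])
```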
Part 2: From Local Pareto Optimum to Global Pareto Efficiency
To extend the local Pareto optimum to the entire economy and ensure global Pareto efficiency, we introduce additional assumptions that eliminate inefficiencies beyond the local context. These conditions guarantee that every possible beneficial trade is realized across the entire economy.
Additional Assumptions for Global Pareto Efficiency:
Well-Defined Property Rights: Clear and enforceable property rights prevent resource misallocation and ensure that all trades occur with legitimate ownership.
Complete Markets: All goods and services can be traded, meaning no beneficial trade is blocked due to missing markets.
No Externalities: The costs and benefits of each agent’s actions are fully internalized, so prices reflect the true social value of goods and services.
Perfect Competition: Agents are price-takers, and market prices accurately reflect supply and demand, guiding resources to their most efficient use.
Arbitrage-Free Exchange Rates: Prices or exchange rates are consistent across markets, preventing agents from exploiting price discrepancies for risk-free profits.
Proof of Global Pareto Efficiency:
Well-Defined Property Rights:
Clear property rights ensure agents can only trade goods they legitimately own. This eliminates inefficiencies from rent-seeking or resource misallocation.
Key Result: Legitimate ownership ensures resources are allocated efficiently, preventing rent-seeking and ensuring all trades are efficient.
Complete Markets:
Complete markets ensure that all potential goods and services can be traded, removing any barriers to beneficial trade.
Key Result: Complete markets ensure every possible mutually beneficial trade occurs, leaving no gains from trade unrealized.
No Externalities:
The absence of externalities ensures that the prices of goods and services reflect their true social costs and benefits, preventing inefficiencies caused by unaccounted external costs or benefits.
Key Result: Prices reflect true social value, ensuring efficient resource allocation.
Perfect Competition:
In a perfectly competitive market, prices are determined by supply and demand, and no agent can manipulate prices. This ensures prices guide resources efficiently.
Key Result: Prices allocate resources efficiently, aligning with market conditions.
Arbitrage-Free Exchange Rates:
The assumption of arbitrage-free exchange rates ensures that exchange rates, which are relative prices satisfying the constraint E = E_T, are quoted and traded using money as a unit of account, thereby precluding arbitrage. This prevents rent-seeking agents from exploiting price discrepancies for risk-free profit. By maintaining consistent pricing across markets, this condition eliminates inefficiencies arising from arbitrage.
Key Result: Consistent pricing across all markets eliminates distortions caused by arbitrage opportunities, ensuring efficient resource allocation.
Conclusion (Global Pareto Efficiency)
With these additional conditions, we extend the local Pareto optimum to a global Pareto optimum. When the following conditions hold:
Well-defined property rights,
Complete markets,
No externalities,
Perfect competition, and
Arbitrage-free pricing,
all potential Pareto improvements across the economy are realized. No agent can improve their welfare without making another agent worse off, confirming that the market is globally Pareto efficient.
Final Conclusion: Part I
The proof above demonstrates that local Pareto efficiency is achieved through mutually beneficial trade under the assumptions of symmetric information and voluntary exchange, according to the First Welfare Corollary, along with the additional assumption of local non-satiation. This ensures that agents are self-motivated to engage in mutually beneficial trades, consistent with the rational, opportunistic, utility-maximizing representative agent axiom.
By introducing further conditions—well-defined property rights, complete markets, no externalities, perfect competition, and arbitrage-free exchange rates—we extend this result to the entire economy, ensuring global Pareto efficiency. While this framework achieves a high level of Pareto efficiency, there may still be other unidentified conditions that could preclude mutually beneficial trade. As with any theory, there is no claim to a universal global maximum of efficiency. However, this represents the highest level of Pareto efficiency achievable within this theory and, to our knowledge, in reality.
Therefore, under these conditions, the market achieves a Pareto-efficient allocation of resources, where no agent can be made better off without making someone else worse off. With this understanding of the axioms and definitions provided, and recognizing that we are discussing the U = S + E model, which captures the real-world use-value and exchange-value of money within the framework of a formal system, we can now proceed with our discussion about money, with everyone aligned on the precise meanings of the terms we are using.
Another purpose of this proof is to clarify that if the predictions of both the First Welfare Theorem (within the Arrow-Debreu framework) and the Labor-for-Goods Game Theory model—both being fully sound and consistent with reality—fail to align with real-world outcomes, such as Pareto efficiency and high, growing real GDP per capita, it conclusively indicates that one or more of the axioms or perfect market conditions are violated in practice. Identifying and addressing the violated condition(s) will enable us to improve real GDP growth.
This reflection on Marx’s ideas shows that his concerns were fundamentally about how economic systems can avoid inefficiencies created by parasitic rent-seeking, unequal access to information, and involuntary exchanges. His focus on maximizing social welfare by ensuring productive contributions from all economic agents remains relevant today, particularly in discussions surrounding income inequality, rent-seeking behaviors, and the role of government intervention in promoting efficiency.
But we digress, as this discussion is specifically about arbitrage-free prices.
No Arbitrage Constraint on Exchange Rates
We begin by analyzing the foreign exchange (Forex) market, where approximately 30 of the most actively traded currencies are exchanged. These exchange rates can be mathematically represented by an exchange rate matrix, denoted as E. In this matrix, the value in row i and column j represents the exchange rate from currency i to currency j. This matrix provides a structured model for understanding how exchange rates—whether between currencies or between goods and services—are organized to prevent arbitrage, which by definition is a market inefficiency.
Arbitrage is impossible when a uniform price is maintained for an asset across different markets. Specifically, in the Forex market, the exchange rate from currency A to currency B must be the reciprocal of the exchange rate from currency B to currency A. For example, if 1 USD buys 0.50 GBP, then 1 GBP should buy 2 USD. This reciprocal relationship is critical for eliminating arbitrage opportunities that could arise from discrepancies in exchange rates.
Let the matrix E represent the exchange rates among the approximately 30 major liquid currencies traded in the Forex market. The no-arbitrage condition can be defined through a constraint on the individual elements e_ij of E, which states that:
e_ij = 1 / e_ji for all i, j
This condition mathematically ensures that for any two currencies, the product of their exchange rates in both directions equals 1, keeping exchange rates consistent and precluding arbitrage, reflecting the "as above, so below" idea. We use the notation E_T to refer to the Hadamard inverse of the transpose of E, that is:
E_T = (E^T)^{∘(-1)}
The Hadamard inverse and the transpose commute, meaning that the transpose of the Hadamard inverse is the same as the Hadamard inverse of the transpose. Specifically:
(E^{∘(-1)})^T = (E^T)^{∘(-1)} = E_T
The no-arbitrage constraint, E = E_T, ensures the absence of arbitrage by enforcing symmetry and reciprocity in exchange rates. This constraint is analogous to a matrix being involutory, that is, equal to its own inverse. However, we refer to matrices that equal the Hadamard inverse of their own transpose, E_T, as evolutory rather than involutory. An evolutory matrix, E = E_T, satisfies the constraint:
e_ij = 1 / e_ji for all i, j
which reflects the reciprocal nature of exchange rates.
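The short sketch below (our own illustration, using made-up USD quotes) constructs such a matrix and checks both the element-wise condition e_ij · e_ji = 1 and the equivalent evolutory constraint E = E_T, where E_T is computed as the Hadamard inverse of the transpose.

```python
import numpy as np

# Hypothetical USD quotes: units of each currency bought by 1 USD.
usd_buys = np.array([1.0, 0.79, 0.92, 151.0, 1.36])   # USD, GBP, EUR, JPY, CAD

# e_ij = units of currency j per unit of currency i.
E = usd_buys[None, :] / usd_buys[:, None]

# Hadamard (element-wise) inverse of the transpose: E_T.
E_T = 1.0 / E.T

# No-arbitrage condition: e_ij * e_ji = 1 for all i, j ...
assert np.allclose(E * E.T, np.ones_like(E))
# ... which is equivalent to the evolutory constraint E = E_T.
assert np.allclose(E, E_T)

print("E satisfies the evolutory (no-arbitrage) constraint E = E_T.")
```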
This distinction is important because an involutory matrix A satisfies A = A⁻¹, so that A · A = I (the identity matrix), whereas for an evolutory matrix E the relationship is different. Specifically, we have:
E · E_T = E² = n · E,
dually defined, for dual consistency, as
E · E_T = (n · E^T)^T = (E^T · E^T)^T,
so that, in full, E · E_T = E² = E_T² = n · E = (E^T · E^T)^T.
However, the products E · E^T and E^T · E, formed with the ordinary transpose rather than with the Hadamard inverse of the transpose, do not equal n · E. Instead, they yield two other distinct matrices, whose form depends on the specific structure of E.
As we can see, when multiplied by its reciprocal transpose, the evolutory matrix does not produce the identity matrix but rather a scalar multiple of E, scaled by the row count n, effectively becoming E². This occurs because, under the constraint E = E_T, the matrix E exhibits certain structural properties. Specifically, E has a single nonzero eigenvalue equal to its trace, which is n.
This is due to the fact that the exchange rate of a currency with itself is always 1, meaning that the diagonal entries of E are all equal to 1. Thus, the trace of E—which is the sum of the diagonal elements—is n, the number of currencies. This structure implies that E is not an identity matrix but is instead scalar-like, in the sense that its eigenvalues are tied to its trace.
Simplification of E Through Evolutory Constraints
Imposing the constraint E = E_T simplifies the matrix E, leaving it with a single nonzero eigenvalue, n, and reducing it to a vector-like structure. This occurs because any row or column of E determines the entire matrix, significantly reducing the dimensionality of the information required to quote exchange rates. For example, the matrix E can be expressed as the outer product of its first column and first row, with each row being the element-wise reciprocal of the corresponding column. Consequently, all rows and columns of E are proportional to one another, making them scalar multiples of each other. This property renders E a rank-1 matrix, meaning all its information can be captured by a single vector.
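These structural claims are easy to verify numerically. The following sketch (again with hypothetical quotes) confirms that the constrained matrix has rank 1, equals the outer product of its first column and first row, and has a single nonzero eigenvalue equal to its trace n, so that E² = n · E.

```python
import numpy as np

quotes = np.array([1.0, 0.79, 0.92, 1.36])        # hypothetical USD quotes
E = quotes[None, :] / quotes[:, None]              # e_ij = quotes[j] / quotes[i]
n = E.shape[0]

# Rank 1: every row (and column) is a scalar multiple of every other.
assert np.linalg.matrix_rank(E) == 1

# The whole matrix is recoverable from its first column and first row (e_11 = 1).
assert np.allclose(E, np.outer(E[:, 0], E[0, :]))

# All diagonal entries are 1, so the trace equals n ...
assert np.isclose(np.trace(E), n)

# ... and n is the single nonzero eigenvalue of E.
eigvals = np.linalg.eigvals(E)
assert np.isclose(max(abs(eigvals)), n)
assert np.sum(abs(eigvals) > 1e-6) == 1

# Consequently, E squared is just n times E.
assert np.allclose(E @ E, n * E)

print("E is a rank-1 matrix with a single nonzero eigenvalue equal to n =", n)
```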
Higher Powers and Roots of E
An intriguing property of the constrained matrix E = E_T is its behavior when raised to higher powers. In theory, an unconstrained matrix raised to the fourth power would have four distinct fourth roots. However, due to the constraint E = E_T, E⁴ has only two such roots: E and E_T. This can be expressed as:
E⁴ = n² · E, with E and E_T as its two real fourth roots.
This suggests a deep connection between the structure of E_T and the physics of symmetry. In this framework, the relationship E⁴ = n² · E = m · c² suggests a potential analogy to Einstein’s famous equation E = mc², where mass could be viewed as the fourth root of energy—compressed energy that can be released, for example, in a nuclear explosion.
While E⁴ theoretically has four roots, in reality only two exist, owing to the E = E_T evolutory constraint imposed on E by quantum entanglement. We suppose that the two roots E and E_T are real and that the remaining two are imaginary. Although we are not experts in physics, this concept could be explored further by those familiar with the mathematical properties of complex numbers and quantum systems.
Under this evolutory constraint on energy, mass is equivalent to energy but exists as a strictly constrained subset of all possible energy states, limited by the E = E_T evolutory condition.
Although this connection remains conjectural, it aligns with principles from supersymmetry in theoretical physics and echoes the ancient Hermetic axiom, "as above, so below." This idea also resonates with the geometry of the Egyptian pyramids and even touches on the notion that "42" is the "answer to the ultimate question of life, the universe, and everything," as humorously proposed in The Hitchhiker's Guide to the Galaxy. While this reference is not directly tied to quantum physics, it humorously reflects the probabilistic nature of existence.
However, as this paper focuses on mathematical economics, such arbitrage-free prices are represented by a matrix of exchange rates E = [e_ij], which satisfies the condition e_ij = 1 / e_ji, akin to ensuring that if a dollar costs 50 pence, a British pound must cost $2. Otherwise, prices would be inconsistent, leaving room for arbitrage. This condition ensures that no agent can exploit price discrepancies between markets for profit without adding value. By preventing arbitrage and unearned profits, market efficiency is maintained.
The Problem with Marx’s Model
At its core, Karl Marx’s ideas envisioned an economy where the means of production are collectively owned, and resources are allocated efficiently with the goal of achieving fairness and equality for all. His model sought to eliminate the perceived injustices of capitalism by redistributing wealth, abolishing private property, and ensuring that laborers received the full value of their work. Despite the intellectual appeal of these ideas, communism has consistently failed in practice. The key question is: Why has communism failed every time it has been tried?
Marx’s economic theory offered a vision of a more equitable society, but its practical implementation has consistently encountered significant problems. The core issues with Marx’s model stem from several unrealistic assumptions or "dogmas" embedded in the system, particularly regarding ownership, incentives, and market coordination. Let us break down the key problems:
1. The Principal-Agent Problem
Marx’s vision failed to account for the principal-agent problem, a fundamental issue in economics. This problem arises because utility-maximizing agents tend to act opportunistically—seeking to increase their wealth by any available means when the expected costs of getting caught are sufficiently low. This phenomenon is referred to as the Rent-Seeking Lemma of rational wealth maximization.
In systems where workers are compensated with fixed salaries or where collective ownership prevails, individuals often act in their own self-interest rather than working to maximize societal welfare or productivity. Without the pressure to maximize efficiency—as private ownership encourages—workers may lack personal incentives to improve labor productivity or conserve resources. This misalignment of incentives is a key reason why productivity tends to stagnate under communism.
In contrast, capitalist systems with well-defined property rights incentivize capital owners to maximize output and labor efficiency, aligning their efforts with broader economic growth. In many communist systems, however, the lack of individual ownership leads to lower productivity and widespread inefficiency.
2. No Enforceable Property Rights
A foundational error in Marx’s model is the elimination of private property. Enforceable property rights are crucial for incentivizing individuals or firms to invest in and maintain productive resources. When private ownership is removed, as in Marxist systems, individuals have little reason to invest in or improve resources, leading to resource misallocation and underperformance in the long run.
Collective ownership leads to the well-known issue of the tragedy of the commons. Without clear ownership, responsibility for managing resources becomes diffused, resulting in inefficiency and waste. Personal ownership encourages individuals to take care of assets, whereas communal ownership lacks such incentives, ultimately harming the economy’s productivity and sustainability.
3. Failure to Recognize the Necessity of Market Signals
Another major flaw in Marx’s dogma was the dismissal of the price mechanism in a free market. Marxist systems aimed to eliminate price-based allocation through central planning, assuming that planners could accurately determine society’s needs. However, market prices communicate essential information about scarcity, consumer preferences, and production costs. Without arbitrage-free prices, central planners lack the necessary tools to balance supply and demand, leading to resource misallocation, shortages, and surpluses.
In centrally planned economies, prices are set by fiat, creating misalignments between the actual value of goods and their planned prices. This mismatch results in inefficiency and makes it impossible to achieve Pareto-optimal outcomes. Arbitrage-free exchange rates are essential for ensuring that no one can extract unearned profits without contributing value—a condition that Marx’s system overlooked.
4. Unfettered Exchange
Marx’s system also ignored the importance of voluntary exchange. Pareto efficiency requires voluntary, mutually beneficial trades. However, Marxist economies often relied on coercion to enforce economic decisions, whether through mandatory labor, forced redistribution, or state-controlled markets. Involuntary exchanges prevent the achievement of Pareto-efficient outcomes because, by definition, someone is made worse off.
In contrast, voluntary trade ensures that both parties benefit from the transaction. Without the freedom of exchange, mutual gains from trade become impossible, leading to economic inefficiency and reduced collective welfare.
5. Information Asymmetry
Marx’s model also failed to address information asymmetry, a critical issue that leads to inefficiency in markets. Complete and symmetric information is vital for agents to make informed decisions. In communist systems, central planners often lack accurate, real-time information, leading to mismanagement and economic stagnation. This distance from reality hinders efficient resource allocation.
In contrast, capitalist markets use price signals to relay crucial information to producers and consumers, enabling decisions that align supply with demand. Marx’s dismissal of the role of the market meant ignoring this critical flow of information, further contributing to the system’s inefficiency.
Conclusion
By failing to address these fundamental economic principles—such as the principal-agent problem, property rights, the role of prices, voluntary exchange, and information asymmetry—Marx’s model consistently fell short in practice, leading to the repeated failure of communist systems.
Key Takeaways:
Principal-Agent Problem: Lack of personal incentives under collective ownership leads to reduced productivity.
Property Rights: Absence of enforceable property rights results in resource misallocation and inefficiency.
Market Signals: Dismissal of the price mechanism prevents effective resource allocation.
Voluntary Exchange: Reliance on coercion undermines Pareto efficiency.
Information Asymmetry: Central planners' lack of real-time information hampers effective decision-making.
Marx’s focus on maximizing social welfare by ensuring productive contributions from all economic agents remains relevant today, particularly in discussions surrounding income inequality, rent-seeking behaviors, and the role of government intervention in promoting efficiency. However, his model's practical shortcomings highlight the necessity of addressing these economic principles to achieve a truly equitable and efficient society.
The Dogma That Undermined Marx’s Model
The failure of Marxism can be traced to a fundamental misunderstanding of the omnipresence of rent-seeking and the principal-agent problem. The core dogma—Karl Marx's naive and incorrect belief—was that capitalists (owners of capital) could systematically extract unearned wealth (what Marx termed "surplus value") from their employees (workers). Marx argued that workers generate more value through their labor than they receive in wages, with capitalists appropriating this surplus for themselves. However, anyone with practical business experience can easily see the flaws in this theory. For instance, try underpaying your plumber, electrician, or architect, and see how much "surplus value" you can truly extract from them. Marx, lacking practical experience in business, understandably embraced this misconception.
In a free-market economy, labor is exchanged voluntarily for wages, with workers being better informed about the quality and effort of their own labor. While both workers and capitalists may share symmetrical information regarding agreed-upon wages, there exists an asymmetry in the knowledge of the quality and intensity of labor. Workers, who perform the labor, always know more about its actual quality than the capitalists who employ them—just as a seller typically knows more about the quality of their product than the buyer.
This asymmetry implies that capitalists, being less informed about the true quality of labor, cannot systematically extract unearned wealth from the better-informed workers in a voluntary and unfettered exchange of labor for wages. In fact, this asymmetry acts as a protective mechanism for workers, shielding them from exploitation. The notion that capitalists (principals) could consistently appropriate surplus value—or economic rent—from their better-informed agents (workers) misinterprets the dynamics of such exchanges. This misconception was key in Marx’s rejection of private ownership and his belief that central planning could replace the efficiency and adaptability of market mechanisms. Ultimately, this flawed assumption significantly contributed to the collapse of communist systems.
Operating under this false premise, Marx logically advocated for the abolition of private ownership of the means of production and the establishment of collective ownership. His belief that capitalists could extract surplus value from their, by definition, better-informed workers was misguided. If this assumption had been correct, Marxist policies might have led to a more equitable and efficient economy. However, Marx overlooked the central role that private incentives play in driving productivity, innovation, and resource efficiency. While his logic was internally consistent, it was built on a faulty foundation—similar to assuming that entangled photons can be separated, an assumption built into any formal system based on Zermelo-Fraenkel (ZF) set theory, yet contradicted by experiment. Bell's inequalities, for example, are mathematically valid within such formal systems but fail in reality because the Axiom of Separation, fundamental to ZF set theory, contradicts the principles of quantum entanglement. As any programmer knows, "garbage in, garbage out"—a false assumption inevitably leads to flawed conclusions.
Given the inherent information asymmetry favoring workers regarding the quality and effort of their labor, any surplus value would logically flow from capitalists to workers—through agency costs—rather than the other way around. Unearned wealth can only flow from labor to capital in coercive systems such as feudalism, serfdom, or slavery, where the voluntary nature of exchange is absent. In such coercive environments, the formal system breaks down and no longer accurately reflects economic reality.
In contrast, centrally planned economies, which Marx envisioned, lacked the necessary incentives, market signals, and freedom of exchange required for efficient resource allocation. Rather than producing the fairness and equality Marx anticipated, these systems often resulted in stagnation, corruption, inefficiency, and, in extreme cases, famine and societal collapse. Historical examples such as the Holodomor in Ukraine and Mao’s Cultural Revolution in China illustrate the devastating consequences of such policies, including widespread famine and, at times, even cannibalism. The dogma of central planning, combined with the elimination of private property, created economic systems fundamentally incapable of achieving Pareto efficiency, leading to severe socio-economic consequences.
Marx’s vision of a more equitable society contained one critical flaw: he believed that agency costs flowed from agents (workers) to principals (capitalists), when in reality, they more often flow in the opposite direction in a system of voluntary exchange. This misunderstanding led him to advocate for abolishing private property rights—an essential mechanism for achieving efficient economic outcomes. The absence of enforceable property rights, the failure to utilize market prices, and reliance on coercion rather than voluntary trade all contributed to the collapse of communist systems.
Communism’s failure is rooted in dogmatic assumptions about human behavior, incentives, and market mechanisms. Pareto-efficient outcomes, as outlined by the First Welfare Theorem, can only be achieved when property rights are secure, markets are competitive, prices are free from distortions, and all trades are voluntary. Marx’s model failed precisely because it violated these key conditions.
However, before dismissing Marx’s labor theory of value entirely, it is worth reconsidering what we may have prematurely discarded, owing to DIBIL (Dogma-Induced Blindness Impeding Literacy). By re-examining Marx’s ideas within a modern mathematical framework—specifically one that ensures the no-arbitrage condition on the exchange matrix, where it becomes the transpose of its own Hadamard inverse—could we use Marx’s labor theory in a way that is relevant today?
This question is rhetorical, meant to point out the obvious: by properly re-deriving the First Welfare Theorem using a Labor-for-Goods model, we can accurately model how relative Pareto efficiency is dynamically achieved through trade. It is important to note that absolute Pareto efficiency has not yet been defined by anyone, as far as we know, and relative Pareto efficiency is not significantly impacted by variations in rational behavior or local non-satiation across different regions. These factors do not exhibit enough cross-sectional variation between various economies to account for the large observed differences in real-world per capita GDP between countries like Haiti and the Dominican Republic, or Russia, Ukraine, Norway, and Ireland, and so on. This naturally leads us to a closer examination of which specific violations of the nine conditions in the Labor-for-Goods model result in relatively more Pareto-inefficient outcomes in the real world.
If we could sort out Pascal’s Wager using formal systems, surely we can figure out which economies are relatively more or less Pareto-efficient, and why. But we can do that at some other point in time, as this paper is about to draw an important conclusion from what we've discussed above.
Bell’s Inequality and the Axiom of Separation
As illustrated in an MIT online lecture², the Axiom of Separation from Zermelo-Fraenkel (ZF) set theory is implicitly assumed—and, in fact, used—when attempting to derive Bell’s Inequality from core axioms. In a formal system utilizing ZF set theory, the concept of correlation assumes that the two elements whose properties are related (or correlated, in this case) must be part of the same set (constructed using the Axiom of Pairing) and that these elements can be separated using the Axiom of Separation. This is demonstrated in the MIT video, where the lecturer derives a simplified version of Bell’s Inequality from the axioms. At approximately the 1-hour and 15-minute mark, the lecturer uses the Axiom of Separation to split the set N(U,¬B) into its two components: N(U,¬B,¬M) and N(U,¬B,M).
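To see exactly where the Axiom of Separation does its work, consider the small counting sketch below (our own illustration; the labels U, B, and M follow the lecture's notation, while the data are randomly generated). Splitting N(U,¬B) into N(U,¬B,M) + N(U,¬B,¬M) presupposes that every element carries definite values of all three properties simultaneously, which is precisely the classical assumption that entangled particles fail to satisfy.

```python
import random

random.seed(1)

# A classical population: every member has definite, pre-assigned values of U, B, M.
population = [
    {"U": random.random() < 0.5,
     "B": random.random() < 0.5,
     "M": random.random() < 0.5}
    for _ in range(100_000)
]

def count(**required):
    # N(...) counts members whose listed properties take the required values.
    return sum(all(p[k] == v for k, v in required.items()) for p in population)

# The Axiom-of-Separation step from the lecture: split N(U, not B) by the value of M.
assert count(U=True, B=False) == count(U=True, B=False, M=True) + count(U=True, B=False, M=False)

# Dropping one condition from each piece yields a Bell-type counting inequality:
# N(U, not B) <= N(U, not M) + N(M, not B).
assert count(U=True, B=False) <= count(U=True, M=False) + count(M=True, B=False)

print("Classical (separable) counts always satisfy the Bell-type inequality.")
```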
However, when the set elements in real-world experiments represent entangled particles—such as pairs of entangled photons or electrons—the Axiom of Separation fails to accurately represent reality. This is because the axiom implicitly assumes that the elements of a set have independent, well-defined properties, which can be used to categorize them into distinct subsets.
In the case of entangled particles, the individual set elements (particles) do not have independent, separable properties before measurement. They are described by a single, inseparable quantum state. Therefore, applying the Axiom of Separation breaks down, as the elementary particles cannot be divided into subsets without losing the entanglement correlations that define them. This failure directly challenges the assumption of local hidden variables, which rely on separable properties, and explains why Bell’s Inequality is violated—because classical set-theoretic approaches cannot capture the true quantum correlations exhibited by entangled systems.
Simply put, any formal system that includes the Axiom of Separation in its set theory is not sound for modeling entangled particles, including, but not limited to, electrons, photons, and other quantum entities that exhibit entanglement.
This insight points to the necessity of developing a more accurate formal system, free of the Axiom of Separation, to properly model the quantum phenomena that defy classical logic. If, instead, we represent these quantum states using vectors constrained by the condition E = E_T (where E_T is the transpose of the Hadamard inverse of matrix E), we may—with substantial effort—develop a new set theory that models quantum entanglement more accurately. This could be analogous to the development of Riemannian geometry, which was introduced to better reflect the true structure of space-time in general relativity.
In this framework, quantum entanglement would not rely on classical separability but instead be modeled using a more appropriate mathematical structure, one that preserves the interconnectedness of quantum states. The E = E_T constraint could serve as the mathematical foundation for this new theory, providing a way to maintain symmetry and reciprocity within quantum systems without falling into the pitfalls of classical set theory, which inadequately captures the complexity of entangled particles. This shift could open the door to a more nuanced understanding of quantum phenomena, free from the limitations imposed by classical separability and local hidden variables.
If you want to learn more, please visit us at tnt.money, where we may have a potential way to fund this groundbreaking research using what we call "one-true money." Simply type "tnt.money" into your web browser and hit Enter to explore further. We believe that we have just proposed the foundation of a unified field theory that connects quantum mechanics with gravity, grounded in the evolutory constraint on all possible energy states.
Directions for Further Research
Development of a New Formal System: The necessity of moving beyond the Axiom of Separation in Zermelo-Fraenkel set theory is clear when dealing with quantum entanglement. Future research should focus on developing a new set theory that accurately reflects the non-separability of quantum systems. This could involve evolving the concept of entangled states using a new mathematical framework where E = E_T governs the relationship between quantum states, much like the role of the metric tensor in general relativity. A deeper exploration of evolutory matrices might provide insights into the geometric and algebraic properties needed to model entanglement more effectively.
Linking Quantum Mechanics and Gravity: One of the most elusive goals in physics has been the unification of quantum mechanics and gravity. Our suggestion to constrain quantum states using the evolutory matrix E = E_T offers a pathway toward this unification. Further research could explore how this constraint, when applied to quantum fields, may lead to a better understanding of the curvature of spacetime and its interaction with quantum states—bridging the gap between quantum mechanics and Einstein’s theory of general relativity.
Exploring Evolutory Constraints in Supersymmetry: The evolutory constraint may have deep connections to supersymmetry. By studying how supersymmetric particles might adhere to similar matrix constraints in energy states, researchers could find new ways to explore the symmetry-breaking processes that govern particle interactions. Additionally, understanding whether these constraints offer a new method to detect or predict particles beyond the Standard Model could significantly advance high-energy physics.
Quantum Economics and Arbitrage-Free Markets: As mentioned in earlier sections, the condition E = E_T enforces symmetry and reciprocity in exchange rates, akin to preventing arbitrage in financial markets. Could this formalism be applied to better model global financial systems, preventing inefficiencies and economic distortions? Research in quantum economics could explore how the principles of quantum entanglement and no-arbitrage constraints can influence market behavior, risk management, and even the structure of future decentralized finance systems.
Experimental Validation of Evolutory Constraints: Theoretical development is only one part of the journey. To substantiate the evolutory matrix theory, real-world experimentation would be crucial. This could involve simulating quantum states under the evolutory matrix constraints and observing whether their behavior aligns with predictions, both in quantum experiments (e.g., entangled photon pairs) and in cosmological observations (e.g., gravitational waves). Collaboration between theoretical physicists and experimental researchers would be key to validating these hypotheses.
By pursuing these lines of inquiry, we may uncover new insights not only into quantum mechanics and general relativity but also into more efficient financial systems and broader economic models. We look forward to the potential applications of this evolving field of study.
Conclusion
Einstein famously remarked that he did not believe God "plays dice" with the universe, expressing his discomfort with the inherent randomness in quantum mechanics. Upon closer reflection, however, this view may not fully capture the nature of the universe. If God did not "play dice"—if there were no randomness at all—even God would be constrained by monotony. Our analysis offers a different perspective: God does indeed "play dice," but these dice are loaded in such a way that they ensure fairness. This mechanism guarantees that all interactions remain Pareto-efficient and balanced over time, ensuring that, in the long run, everyone receives what they are due, effectively restoring equilibrium in all exchanges.
This leads us to speculate about the deeper implications of Einstein’s famous equation, E = mc². When restated mathematically as:
E⁴ = E_T · n² = m · n²
and for dual-consistency as:
E⁴ = (E_T · n²)^T = m · n²
where E_T represents the transpose of the Hadamard inverse of matrix E, and E^T denotes the transpose of matrix E, we uncover a potential new relationship between energy, mass, and the structural properties of the universe. Under the constraint E = E_T, we know that E⁴ = m · n² has two fourth roots: E_T and E^T. These two roots represent two recursively entangled energy states—akin to +/− or good/bad, and so on, as everything in this realm is defined dually in relation to each other, such as hot and cold. This suggests a deeper connection between energy, mass, and time, hinting at an intrinsic link between temporal dynamics and the fundamental equations that govern the cosmos.
More importantly, it is time to move beyond dogmatic thinking and re-examine our existing assumptions. Many assumptions that we have taken for granted may be flawed, and it is only by questioning these assumptions that we can gain new insights.
To learn more and explore these ideas further, we invite you to visit our website at tnt.money. Simply type "tnt.money" into your web browser, hit Enter, and discover where this journey takes you. Why should you want to do this? Because we have just demonstrated—unless you can find an error in our proof above—that our unified field theory of the universe, as a correct formal system, is built on a smaller set of axioms than any competing alternative. This makes it the least likely to ever be proven false and, in that sense, the maximum-likelihood, or best, scientific theory currently available.
Any correct formal system that relies on fewer axioms is less likely to be falsified compared to one that depends on additional assumptions, since axioms accepted without proof as "self-evidently true" could ultimately be proven false. This aligns with the Aristotelian principle of parsimony. Although often associated with Occam’s Razor, it is important to clarify that it is not necessarily the "simplest" explanation that is most likely correct, but rather the one based on fewer axioms. Ironically, such a theory may appear more complex because it necessitates a greater number of deductions—hence the length and depth of this paper.
P.S.
Dear reader, on Wall Street, under SEC Rule 10b-5, we’re not allowed to make any false promises. So when we say "True-NO-Trust," we mean it. That’s why we embrace formal systems—there’s no need to trust us. Everything we present is an independently verifiable, objective fact grounded in our shared reality. We’re so accustomed to operating under SEC Rule 10b-5, where we make no promises to anyone, because otherwise, we could go to prison for making false claims to investors. It’s truly refreshing not to worry about misleading anyone when discussing how physicists can become rich using TNT, our "one true money." By relying on a formal system, we’re fully protected from misleading anyone about anything at all.
Goodbye.
No-Arbitrage Constraint on Exchange Rates
In this analysis, we explore the foreign exchange (Forex) market, where approximately 30 of the most actively traded currencies are exchanged. Exchange rates in this market can be structured as a matrix, denoted by E, where each element e_ij in row i and column j represents the exchange rate from currency i to currency j. This matrix provides a framework for understanding how exchange rates are organized to prevent arbitrage opportunities—market inefficiencies that allow risk-free profit.
Arbitrage is prevented when a uniform pricing structure is maintained across different markets. Specifically, in the Forex market, the exchange rate from currency A to currency B must be the reciprocal of the exchange rate from currency B to currency A. For example, if 1 USD buys 0.50 GBP, then 1 GBP must buy 2 USD. This reciprocal relationship is critical to eliminating arbitrage opportunities that might arise from discrepancies between exchange rates.
Exchange Rate Matrix and No-Arbitrage Condition
Let E be a matrix representing the exchange rates between major currencies in the Forex market. The no-arbitrage condition imposes a constraint on the elements e_ij of E, such that: e_ij = 1 / e_ji for all i, j
This condition ensures that the product of exchange rates in both directions between any two currencies equals 1. In other words, it enforces the symmetry needed to prevent arbitrage.
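As an illustration of what this condition rules out (our own sketch, with hypothetical rates), the code below scans an exchange-rate matrix for two- and three-leg cycles whose rate product exceeds 1, that is, risk-free arbitrage. A matrix built consistently from a single unit of account contains none; perturbing a single quote immediately creates such cycles.

```python
import numpy as np
from itertools import permutations

def find_arbitrage_cycles(E, tol=1e-9):
    """Return currency cycles (up to length 3) whose rate product exceeds 1."""
    n = E.shape[0]
    cycles = []
    for i, j in permutations(range(n), 2):            # two-leg cycles i -> j -> i
        if E[i, j] * E[j, i] > 1 + tol:
            cycles.append((i, j))
    for i, j, k in permutations(range(n), 3):          # three-leg cycles i -> j -> k -> i
        if E[i, j] * E[j, k] * E[k, i] > 1 + tol:
            cycles.append((i, j, k))
    return cycles

# Consistent rates derived from one unit of account: no arbitrage.
quotes = np.array([1.0, 0.79, 0.92, 1.36])             # hypothetical USD quotes
E_consistent = quotes[None, :] / quotes[:, None]
print(find_arbitrage_cycles(E_consistent))              # -> []

# Perturb one quote so that e_ij != 1 / e_ji: arbitrage cycles appear.
E_broken = E_consistent.copy()
E_broken[0, 1] *= 1.02                                   # mispriced leg from currency 0 to 1
print(find_arbitrage_cycles(E_broken))                   # -> cycles involving the mispriced leg
```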
The Concept of the Evolutory Matrix
We introduce the concept of an evolutory matrix, a matrix constrained such that the element-wise reciprocal of its transpose equals itself. Mathematically, this condition can be expressed as:
E_T = (E^T)^{∘(-1)}
where:
E^T is the transpose of E, and
(E^T)^{∘(-1)} denotes the Hadamard (element-wise) inverse of E^T.
The Hadamard inverse of a matrix is the matrix formed by taking the reciprocal of each individual element. The condition E_T = E enforces the symmetry and reciprocal relationships required by the no-arbitrage principle. This constraint reflects the idea that for each currency pair i and j, the exchange rate from i to j is exactly the reciprocal of the exchange rate from j to i:
e_ij = 1 / e_ji for all i, j
Thus, the matrix E must satisfy this evolutory constraint, which ensures the absence of arbitrage in the exchange rates.
Properties of Evolutory Matrices
An evolutory matrix E, as we define it, satisfies E = E_T, where:
E_T = (E^T)^{∘(-1)}
This condition is distinct from that of an involutory matrix, where A = A^{-1}. While an involutory matrix is its own inverse under standard matrix multiplication, an evolutory matrix equals the element-wise reciprocal of its own transpose.
To further understand this structure, consider the fact that exchange rates between the same currency are always equal to 1. Hence, the diagonal elements of E are all 1, implying that:
e_ii = 1 for all i
As a result, the trace of E, which is the sum of the diagonal elements, equals the number of currencies, n.
Matrix Multiplication and Evolutory Constraint
If we consider the matrix product E * E_T, we see that this product reflects the relationship between exchange rates. Under the evolutory constraint E = E_T, the product E * E_T becomes a scalar multiple of E, scaled by the number of currencies n:
E * E_T = n * E
This relationship can be interpreted as follows: under the no-arbitrage condition, every two-leg product e_ij * e_jk collapses to the direct rate e_ik, so each entry of E * E_T sums n identical copies of the corresponding entry of E. Equivalently, since the exchange rate of a currency with itself is always 1 (the diagonal elements of E are all 1), the trace of E is n, the number of currencies, and n is exactly the scaling factor that appears. Thus, the matrix product E * E_T results in scaling the matrix E by n.
This structure implies that E is not an identity matrix but behaves in a scalar-like fashion, where its eigenvalues are connected to its trace. Importantly, the evolutory condition E = E_T ensures the consistency of exchange rates and eliminates arbitrage opportunities.
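A brief numerical check of this product identity (hypothetical quotes again) is given below: with E constructed to satisfy the evolutory constraint, E * E_T equals E², and both equal n * E.

```python
import numpy as np

quotes = np.array([1.0, 0.79, 0.92, 1.36, 0.88])   # hypothetical USD quotes
E = quotes[None, :] / quotes[:, None]               # arbitrage-free rate matrix
E_T = 1.0 / E.T                                      # Hadamard inverse of the transpose
n = E.shape[0]

assert np.allclose(E, E_T)                           # evolutory constraint holds
assert np.isclose(np.trace(E), n)                    # diagonal of ones, so the trace is n
assert np.allclose(E @ E_T, n * E)                   # E * E_T = n * E
assert np.allclose(E @ E, n * E)                     # equivalently, E squared = n * E

print("Verified: E * E_T = E² = n * E with n =", n)
```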
Conclusion
By introducing the concept of an evolutory matrix, we provide a new framework for modeling exchange rates under the no-arbitrage condition. The constraint E_T = E, where E_T is the transpose of the Hadamard inverse of E, ensures that the exchange rate between any two currencies is reciprocal, thereby preventing arbitrage. This formal structure helps us understand how exchange rates are constrained to maintain consistency in the Forex market, with the matrix E satisfying both symmetry and reciprocity.
Moreover, under the constraint E = E_T, we know that E⁴ = m * n² has two fourth roots: E_T and E^T. These two roots represent two recursively entangled energy states—akin to +/− or good/bad, and so on, as everything in this realm is defined dually in relation to each other, such as hot and cold. This suggests a deeper connection between energy, mass, and time, and hints at an intrinsic link between temporal dynamics and the fundamental equations that govern the cosmos. Moreover, these two energy states, when superimposed under E = E_T conditions, create causality and enable formal system inference rules to function in our shared reality.
1 https://www.stlouisfed.org/education/economic-lowdown-podcast-series/episode-9-functions-of-money