Enhancing the Logical Rigor of Mathematical Economics: Bridging Facts, Hypotheses, and Real-World Outcomes
by Joseph Mark Haykov
Abstract
Mathematical economics aspires to model real-world interactions among individuals who trade labor for goods and services in (ideally) unfettered markets. Yet, unlike the formal systems in physics, chemistry, engineering, computer science, and biology, economics often struggles to achieve precise predictive power—hence its reputation as the “dismal science.” Macroeconomic models frequently fail to forecast recessions, active fund managers routinely underperform simple index strategies, and widely accepted “laws” of economics sometimes clash with observable realities. For example, the concept of stagflation appears fundamentally incompatible with certain foundational Keynesian assumptions.
These persistent shortcomings arise from conflating facts—propositions that, given extensive evidence, cannot be refuted—with hypotheses, which are probabilistic claims open to falsification by new data. Lacking a clear distinction between these two categories, the discipline risks eroding its credibility and producing less reliable insights.
This paper proposes a more rigorous framework for mathematical economics by explicitly differentiating between proven theorems and unproven conjectures. By introducing strict definitions, empirical validation operators, and stable semantic constraints, it prevents “semantic drift” and clarifies which propositions are independently verifiable—and thus cannot turn out to be false—versus those that remain subject to future refutation. Drawing inspiration from no-arbitrage conditions in currency markets and reciprocal structures in conceptual embeddings, the framework establishes unidimensional semantic spaces and reciprocal definitional relationships to ensure logical clarity and coherence.
Keywords: Mathematical Economics, Empirical Validation, Facts vs. Hypotheses, No-Arbitrage Conditions, Pareto Efficiency, Semantic Stability, Agency Theory, Rational Agents, Marx’s Labor Theory of Value, Rent-Seeking Lemma
Introduction
Classical first-order logic (L) employs inference rules—such as modus ponens, the principle of non-contradiction, and universal generalization—to derive conclusions guaranteed to reflect reality accurately, provided the axioms correctly describe the phenomena in question. When foundational assumptions align with real-world conditions, the theorems deduced in L hold universally within those conditions. This property ensures consistent truth propagation, making L indispensable across mathematics and the sciences. From arithmetic to physics, L provides a unified, contradiction-free foundation for constructing and validating theories.
However, this strength also reveals a critical vulnerability: if even a single axiom fails to accurately represent reality, all conclusions derived from it become unreliable. Consider the case of Mars’s moons. Peano’s second axiom posits that every natural number n has a successor n′ (e.g., n + 1). While 2 + 2 = 4 is indisputably valid within Peano arithmetic, the statement “2 moons of Mars + 2 moons of Mars” is undefined in the physical world because there is no “third” moon to succeed the second. This mismatch between reality and Peano’s second axiom does not indicate an internal contradiction in Peano arithmetic; rather, it underscores that even correct mathematics can yield unreliable real-world conclusions when its underlying axioms fail to align with actual conditions.
A similar problem occurs in geometry. Riemannian geometry replaces Euclidean geometry in contexts like GPS triangulation because the Euclidean axiom—that the shortest distance between two points is a straight line—does not hold in curved spacetime. This insight, originally proposed by Albert Einstein in his theory of General Relativity and later experimentally validated, transformed a once-speculative hypothesis into an empirically verified fact. For example, satellite clocks are adjusted to run at a different rate than clocks on Earth to account for the time dilation predicted by Einstein’s theory. These examples demonstrate that, although first-order logic (L) itself is precise, its conclusions can become unreliable when derived from axioms that fail to accurately describe the relevant domain.
Thus, the utility of first-order logic (L) depends on two fundamental pillars:
Accurate selection of axioms: Axioms must not only appear “self-evidently” true but must also reflect the actual phenomena being described.
Sound application of inference rules: Logical consistency must be rigorously maintained to ensure valid deductions.
When these pillars are upheld, L becomes the most powerful tool for real-world reasoning, guaranteeing absolute fidelity to reality—provided its axioms are true and its logic is error-free. Yet its capabilities are often misunderstood, particularly in the context of Gödel’s incompleteness theorems, whose implications are frequently misinterpreted or misapplied.
Gödel’s Incompleteness Theorems and Their Broader Implications
Gödel’s two incompleteness theorems, foundational results in modern logic, reveal inherent limits on provability within formal systems and carry profound implications for logic, economics, and science:
First Incompleteness Theorem:
In any consistent formal system F capable of expressing sufficient arithmetic (for example, by using Peano’s axioms), there exist statements in F that can neither be proved nor disproved within F.
Second Incompleteness Theorem:
A consistent formal system F cannot prove its own consistency.
These results mirror analogous insights from other fields. For instance, Heisenberg’s uncertainty principle in quantum mechanics sets fundamental limits on the simultaneous knowledge (not just measurement) of a particle’s position and momentum. Similarly, Turing’s halting problem demonstrates that no algorithm can universally determine whether a given program will terminate or run indefinitely. Such parallels underscore that constraints on provability and certainty—limits on what is inherently knowable—are not unique to formal logic but apply to diverse forms of reasoning.
Indeed, Gödel’s incompleteness theorems imply that certain propositions, like the Riemann Hypothesis, might remain unprovable within Peano’s axioms. Yet this limitation does not diminish the certainty of established results, such as the Pythagorean theorem, which is a proven fact under Euclidean axioms and can never be false within that system. Gödel’s theorems do not invalidate mathematics or logic; rather, they delineate the boundaries of what can be proven within a given formal system.
In practical mathematics, Gödel’s Second Theorem highlights the necessity of external consistency checks. For example, disallowing division by zero prevents absurdities like 2 = 3. These constraints emphasize the importance of external verification and precise definitions of axiomatic boundaries. However, they say nothing more about the inherent power or limitations of first-order logic (L).
In practice, multiple mathematicians have independently verified the core principles of logic, and no inconsistencies have been discovered in the formal systems on which mainstream mathematics rests. Consequently, when applied correctly—with strict adherence to inference rules and derivation processes—first-order logic (L) remains an indispensable tool for deriving conclusions that reliably reflect reality.
Conflating Facts and Hypotheses in Economics
Mathematical economics aspires to the rigor and precision of pure mathematics. In principle, formal axioms and logical derivations should produce reliable conclusions about markets, prices, production, and distribution. Yet in practice, economic forecasting often falls short: macroeconomic models fail to predict recessions, investment strategies grounded in complex theories seldom outperform simple benchmarks, and widely accepted "laws" of economics frequently clash with the messy realities of global trade and finance.
Why does this persistent disconnect exist? The root cause lies in the conflation of facts with hypotheses. In mathematics, the distinction is clear: a proven theorem is a fact within its axiomatic system—it cannot be false—whereas a conjecture remains a hypothesis until it is proved or disproved. Economics, however, often blurs this critical boundary, leading to semantic drift, conceptual ambiguity, and the entrenchment of unsupported assertions. Over time, ideas that should remain provisional or context-specific are treated as universal truths, leaving models built on precarious foundations.
A fact is a statement that is objectively true and thus could never turn out to be false in the future. In L-language terms, a statement is objectively true if and only if its accuracy can be independently verified by multiple observers, each using reliable methods to arrive at the same conclusion. Once confirmed in this way, it cannot turn out false at any future point.
Example (Mathematical): The proof of the Pythagorean theorem is checkable by anyone, making it absolutely and objectively true.
Example (Empirical): "A typical human hand has five fingers." Any observer can count and confirm five fingers, making it objectively true beyond all doubt.
No appeals to personal opinion or authority are required—only the fact that any competent observer can independently verify it. This aligns with the traditional English usage of "objective," as reflected in standard dictionary definitions, and applies equally to both mathematical theorems and everyday empirical facts.
This interpretation of objectivity has a long-standing pedigree in philosophy and science: a proposition is objectively true if it can be independently verified by anyone with sufficient competence and resources. It underlies much of classical mathematics (where proofs must be replicable) and the empirical sciences (where observations and experiments must be reproducible to gain acceptance). Far from being novel or fringe, it is deeply rooted in standard usage and in the classical philosophy of science.
A Proposed Rigorous Framework for Mathematical Economics
This paper advocates reorienting mathematical economics toward the strict epistemic distinctions that define mathematical practice. Specifically, we propose treating each economic “law” or “principle” as a hypothesis until it is either theoretically proven under well-defined axioms or overwhelmingly validated by empirical evidence. To achieve this, we introduce an empirical validation operator Ξ, which systematically classifies propositions based on the strength of their empirical support.
This framework extends external consistency checks for claims expressed in L-language, ensuring that axioms remain consistent with observed facts and preventing false predictions from undermining the reliability of mathematical economics. By enforcing dual consistency—internally (logical coherence) and externally (empirical alignment)—this approach enables mathematical economics to approach the precision and dependability of pure mathematics. Simple empirical truths—such as “2 + 2” being physically undefined when counting the moons of Mars, of which there are only two—exemplify the value of this rigorous dual-consistency standard.
Formal System and Logical Foundations
We begin with a formal system S=(L, Σ, ⊢), where:
L: A first-order language.
Σ: A set of axioms.
⊢: A derivability relation, where Σ ⊢ φ indicates that φ is derivable from Σ via standard inference rules.
Properties of S
Consistency: There is no formula φ such that both Σ ⊢ φ and Σ ⊢ ¬φ.
Rational Agents: Rational agents rely on S for logical inference, ensuring that reasoning is both sound and free from contradictions.
Key Definitions
WFF (Well-Formed Formula): Any syntactically valid formula φ in L.
Derivability (⊢): The derivability relation adheres to standard inference rules, including:
Modus Ponens: If φ and φ→ψ are both derivable, then ψ is derivable.
Law of Non-Contradiction: ¬(φ ∧ ¬φ) for all φ in L.
Law of the Excluded Middle: φ ∨ ¬φ for all φ in L.
These definitions and properties form the foundational tools for logical reasoning within S, ensuring that all conclusions drawn remain consistent and reliable.
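These rules can be made concrete in a few lines of code. The sketch below is illustrative only (the formula names are hypothetical): it computes the modus ponens closure of a small axiom set and checks the consistency property of S.

```python
# Minimal propositional inference sketch: closes a set of formulas under
# modus ponens and checks the non-contradiction property of S.
# Formulas are strings; negation is written with a leading "~".

def modus_ponens_closure(formulas, implications):
    """Repeatedly apply modus ponens until no new formulas are derivable."""
    derived = set(formulas)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def is_consistent(derived):
    """Consistency: no formula phi with both phi and ~phi derivable."""
    return not any(f.startswith("~") and f[1:] in derived for f in derived)

sigma = {"phi"}                              # axioms
rules = [("phi", "psi"), ("psi", "chi")]     # phi -> psi, psi -> chi
theorems = modus_ponens_closure(sigma, rules)
print(sorted(theorems))         # ['chi', 'phi', 'psi']
print(is_consistent(theorems))  # True
```

The closure plays the role of ⊢ for this toy fragment: a formula is "derivable" exactly when it appears in the returned set.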
Distinguishing Facts and Hypotheses
To prevent semantic drift and ensure that L-language accurately models reality, it is essential to clearly distinguish between facts and hypotheses in both theoretical frameworks and their empirical foundations. To formalize this distinction, we introduce an empirical validation operator (Ξ), defined as Ξ: L → [0,1], where Ξ(φ) represents the degree of empirical certainty associated with any proposition φ. This operator bridges purely theoretical constructs and observable phenomena, providing a systematic way to evaluate how well propositions align with established evidence.
Facts: Ξ(φ) = 1
A proposition φ is considered an empirically validated fact if Ξ(φ) = 1. For instance, the claim “The Earth is approximately spherical” is supported by multiple independent lines of evidence—satellite imagery, gravitational measurements, and global circumnavigation. It is directly verifiable by any rational agent (e.g., by traveling around the world), so Ξ("Earth is spherical") = 1. Similarly, mathematical facts, such as the Pythagorean theorem, are provable within their axiomatic systems and correspond to truths that any diligent student can independently verify. In both cases, truth is objective, as rational agents can repeatedly validate it through independent verification.
Once a proposition surpasses a critical threshold of independent confirmations, the notion that an objective fact “could turn out to be false” disappears entirely. Consider the double-slit experiments in physics, which confirm the particle-wave duality of light. These experiments are so robust and consistent that they leave no room for failure, cementing the phenomenon as an empirical fact. This transition from hypothesis to fact mirrors the behavior of a superconductor cooling below its critical temperature, where electrical resistance vanishes. Before this transition, residual doubts may persist; afterward, they are eliminated, marking a “quantum” epistemic shift from hypothesis to fact.
Hypotheses: 0 < Ξ(φ) < 1
A proposition φ is classified as a hypothesis if 0 < Ξ(φ) < 1—that is, it is supported by some evidence but not enough to establish it as a fact. For example, the claim “dark chocolate consumption improves cardiovascular health” is strongly supported by multiple observational studies and controlled experiments. However, uncertainties—such as the influence of confounding dietary factors, sample sizes, and long-term implications—prevent it from reaching Ξ(φ) = 1. While it is highly unlikely that dark chocolate has no cardiovascular benefits, the claim remains a hypothesis until all plausible doubts are rigorously addressed through independent, conclusive confirmations. Once every uncertainty is resolved, the claim can transition from near-fact to a full-fledged fact.
Conversely, a proposition with Ξ(φ) ≈ 0 lacks significant empirical support. For instance, the claim “Unicorns exist” has no credible evidence, so Ξ("Unicorns exist") ≈ 0. However, as Nassim Taleb observes, absence of evidence is not evidence of absence. For example, the claim “black swans exist” was once widely doubted but later proven true. Such propositions in L-language remain hypotheses with insufficient evidence; future discoveries could alter their status, moving them along the spectrum of empirical validation.
The Law of the Excluded Middle (LEM) and L-Language
We adopt the law of the excluded middle (LEM) in L-language not only because it aligns with classical first-order logic but also because it accurately reflects how the real world operates. In science, mathematics, and daily life, propositions are treated as either true or false, with no third option. This binary framework governs fields ranging from physics and chemistry to mathematical economics and quantum mechanics. Consequently, every proposition φ in L-language must satisfy “φ or ¬φ,” with no allowance for partial truths.
Under this requirement, the empirical validation operator Ξ adheres to the rule Ξ(¬φ) = 1 – Ξ(φ), reinforcing the bivalent nature of truth in L. By remaining consistent with Peano’s axioms and classical mathematics, we ensure that L-language is fully compatible with real-world applications where LEM is fundamental. While alternative logics (e.g., fuzzy logic) permit partial truths, such systems do not alter the real-world necessity of binary decisions. For example, one must either replace the bag in a vacuum cleaner or not—there is no feasible “partial” outcome, regardless of how one might formally model it using fuzzy logic.
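As a minimal sketch of how Ξ and the bivalence rule Ξ(¬φ) = 1 − Ξ(φ) might be operationalized (the numeric values below are illustrative assumptions, not measurements):

```python
# Sketch of the empirical validation operator Xi and the classification
# it induces. Xi values here are hypothetical, chosen for illustration.

def classify(xi):
    """Classify a proposition by its degree of empirical certainty Xi in [0, 1]."""
    if not 0.0 <= xi <= 1.0:
        raise ValueError("Xi must lie in [0, 1]")
    if xi == 1.0:
        return "fact"
    if xi == 0.0:
        return "refuted"  # Xi(~phi) = 1 - Xi(phi) = 1: the negation is a fact
    return "hypothesis"

def xi_negation(xi):
    """Bivalence rule from the text: Xi(~phi) = 1 - Xi(phi)."""
    return 1.0 - xi

print(classify(1.0))                  # 'fact'       e.g. "Earth is spherical"
print(classify(0.8))                  # 'hypothesis' e.g. the dark-chocolate claim
print(round(xi_negation(0.8), 2))     # 0.2
```

Note that under LEM the classification of φ and ¬φ is always complementary: if φ is a fact, ¬φ is refuted, and any intermediate Ξ leaves both as hypotheses.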
Near-Facts and Unresolved Hypotheses
Certain claims approach Ξ(φ) ≈ 1 without achieving universal recognition as facts. For instance, the statement “Cigarettes cause cancer” is overwhelmingly supported by extensive evidence, yet residual uncertainties—however small—persist due to statistical limitations such as p-values and confounding variables. Until every plausible doubt is resolved, the claim remains a near-fact rather than a full-fledged fact.
Similarly, the Riemann Hypothesis, though widely believed to be true, remains unresolved. Neither proven nor disproven, it persists as a hypothesis within Peano’s axiomatic framework. Should a rigorous proof be discovered, it would transition into a theorem—an established fact within its formal system.
Historical examples underscore the importance of distinguishing among hypotheses, near-facts, and facts. Fermat’s Last Theorem, colloquially referred to as a “theorem” despite lacking proof for centuries, remained a hypothesis until Andrew Wiles provided a rigorous proof in 1994. In contrast, Euler’s Conjecture, once widely assumed to be true, was disproven in 1966 when L. J. Lander and T. R. Parkin found a counterexample using computational methods.
These cases highlight the necessity of carefully separating what is assumed to be true from what is proven to be true. By maintaining this distinction, we preserve the integrity of formal systems and ensure that propositions are classified appropriately within their epistemic contexts.
Definitions and Rational Alignment
The core axiom underpinning both mathematical economics and game theory is that representative agents (or “players” in game theory) are rational utility maximizers, inherently motivated to “win” under the rules of the game in question. However, it is essential to define rational more precisely than simply “capable of following first-order logic inference rules.”
Let S = (L, Σ, ⊢) be a formal system, and let Ξ: L → [0,1] be the empirical validation operator. The following definitions formalize the distinction between facts and hypotheses:
Fact: A proposition φ is a fact if and only if:
Σ ⊢ φ (φ is logically derivable from Σ with a valid proof), or
Ξ(φ) = 1 (φ is empirically incontrovertible, independently verifiable, and cannot be falsified by new evidence).
Example: “The Earth is spherical” qualifies as an empirical fact (Ξ(φ) = 1), just as a proven mathematical theorem qualifies as a logical fact.
Hypothesis: A proposition ψ is a hypothesis if and only if:
ψ is neither derivable from Σ (Σ ⊬ ψ) nor disprovable by Σ (Σ ⊬ ¬ψ), and
0 < Ξ(ψ) < 1.
Hypotheses remain provisional and may be revised or refuted based on new evidence or proofs.
Rational Belief Alignment
Let F = { φ | φ is a fact } and H = { ψ | ψ is a hypothesis }.
A rational agent’s belief set B(t) at time t must satisfy B(t) ∩ F = F, ensuring that all known facts—both logical and empirical—are included. Hypotheses, however, remain tentative and are excluded from the agent’s definitive beliefs until they are either proven or empirically validated.
Rational Agents, Belief Sets, and Empirical Validation
A rational agent’s belief set B(t) is the set of propositions the agent accepts as true at any given time t. To maintain logical consistency and ensure alignment with both theoretical rigor and empirical evidence, the L-language framework imposes specific rationality conditions. These conditions guarantee that beliefs evolve as new information emerges, and that agents remain committed to facts while staying open to revising hypotheses.
Key Rationality Conditions
No Contradiction:
The agent’s belief set must never simultaneously derive a statement φ and its negation ¬φ. Formally:
not (B(t) ⊢ φ and B(t) ⊢ ¬φ).
This principle, rooted in classical logic, guarantees internal consistency. If an agent could derive both a proposition and its negation, their belief system would collapse into incoherence. Thus, no rational agent should endorse contradictory beliefs.
Empirical Alignment:
All propositions φ for which Ξ(φ) = 1 must be included in B(t). This condition reflects a commitment to empirical certainty: if a fact is fully validated, rational agents must accept it. Because such facts cannot be refuted by future evidence (being either logically proven or empirically incontrovertible), excluding them would mean rejecting established truths, thereby compromising rationality.
Hypothesis Revision:
If Ξ(¬ψ) = 1 for some hypothesis ψ—i.e., if the negation of ψ is fully validated empirically—then ψ must be removed from B(t+1). This ensures that beliefs remain responsive to evidence. When new data decisively contradicts a previously held hypothesis, rational agents must discard or revise it immediately, preventing outdated or disproven assumptions from lingering in their belief set.
By adhering to these conditions—avoiding contradictions, accepting fully verified facts, and discarding disproven hypotheses—rational agents maintain logical consistency, empirical integrity, and intellectual humility. As new information arises, agents refine their belief sets dynamically, incorporating reliable evidence and eliminating refuted claims. This process fosters robust, fact-aligned reasoning and ensures that beliefs remain firmly tethered to reality.
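A minimal sketch of these three conditions in code, assuming a toy vocabulary of propositions and hypothetical Ξ values (the proposition names are invented for illustration):

```python
# Sketch of rational belief-set maintenance: add validated facts, drop
# refuted hypotheses, and verify consistency. Negation uses a leading "~".

def negate(p):
    return p[1:] if p.startswith("~") else "~" + p

def update_beliefs(beliefs, xi):
    """Return B(t+1) from B(t) under the three rationality conditions."""
    updated = set(beliefs)
    # Empirical alignment: every proposition with Xi = 1 enters the belief set.
    updated |= {p for p, v in xi.items() if v == 1.0}
    # Hypothesis revision: drop any psi whose negation is a validated fact.
    updated -= {p for p in updated if xi.get(negate(p)) == 1.0}
    # No contradiction: a consistent update never holds both phi and ~phi.
    assert not any(negate(p) in updated for p in updated)
    return updated

# Hypothetical Xi values: one fact, plus a refutation of a held hypothesis.
xi = {"earth_is_spherical": 1.0, "~stagflation_impossible": 1.0}
b_t = {"stagflation_impossible"}  # a hypothesis the agent held at time t
b_next = update_beliefs(b_t, xi)
print(sorted(b_next))  # ['earth_is_spherical', '~stagflation_impossible']
```

The refuted hypothesis is removed in the same step in which its negation is admitted as a fact, so the internal consistency check never fails.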
Cognitive Biases: Definitions, Mechanisms, and Conditions
Cognitive biases occur when an agent, operating within a formal reasoning framework and guided by empirical evidence, fails to update posterior beliefs according to Bayesian principles. Instead of adjusting probabilities precisely in response to new information—as prescribed by Bayes’ rule—the agent’s posterior distributions systematically diverge from normative Bayesian values. These deviations undermine evidence-based reasoning and lead to suboptimal decisions.
Common Cognitive Biases and Their Systematic Deviations
Confirmation Bias
The agent disproportionately favors evidence that aligns with existing beliefs while neglecting contradictory data that would require lowering priors. Consequently, posterior beliefs remain anchored to initial assumptions instead of shifting appropriately in response to disconfirming evidence.
Availability Heuristic
The agent overestimates the probability of events based on how easily examples come to mind, rather than considering representative statistical frequencies. Dramatic or memorable cases overshadow more typical, less salient data, leading to distorted assessments of likelihood and risk.
Representativeness Heuristic
The agent disregards base rates and robust statistical information, relying instead on superficial similarity or stereotypes to estimate probabilities. By ignoring reliable numerical evidence, the agent’s posterior beliefs diverge further from objective data.
Framing Effect
The agent’s decisions are influenced by how logically equivalent information is presented (e.g., emphasizing “loss” vs. “gain”). Although the underlying data remain identical, the agent’s posterior beliefs and decisions depend on presentation rather than invariant logical relationships.
Dunning-Kruger Effect
The agent’s decisions are driven by belief in false or oversimplified axioms, which the agent mistakenly regards as valid. Although the agent’s derived conclusions may follow logically from these flawed assumptions, they turn out false in reality—similar to believing that Mars has four moons simply because “2 + 2 = 4” under a misapplied set of axioms. True experts, aware of the specific inaccuracies in these assumptions, can pinpoint the errors leading to the misguided belief.
Sunk Cost Fallacy
The agent persists in failing endeavors due to previously incurred, irrecoverable costs rather than optimizing decisions based on current and future expectations. Rationally, sunk costs should be irrelevant, yet this bias causes posterior beliefs to ignore present conditions and maintain failing strategies.
These biases illustrate how even a formally defined, evidence-informed reasoning framework can go awry when Bayesian updating is hindered by systematic distortions.
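The contrast between normative Bayesian updating and a biased update can be made concrete. In the sketch below, the discount factor modeling confirmation bias is an assumption introduced purely for illustration, as are the numeric priors and likelihoods:

```python
# Exact Bayesian updating versus a simple confirmation-bias model that
# under-weights disconfirming evidence. All numbers are illustrative.

def bayes_posterior(prior, likelihood, likelihood_alt):
    """P(H|E) via Bayes' rule for a binary hypothesis H vs. ~H."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

def biased_posterior(prior, likelihood, likelihood_alt, discount=0.5):
    """Confirmation bias: moves only part of the way toward a lower posterior."""
    posterior = bayes_posterior(prior, likelihood, likelihood_alt)
    if posterior < prior:  # evidence lowers P(H): the biased agent resists
        return prior + discount * (posterior - prior)
    return posterior

# Disconfirming evidence: E is half as likely under H as under ~H.
prior, lik_h, lik_not_h = 0.6, 0.2, 0.4
print(round(bayes_posterior(prior, lik_h, lik_not_h), 3))   # 0.429
print(round(biased_posterior(prior, lik_h, lik_not_h), 3))  # 0.514
```

The biased agent's posterior (0.514) stays systematically above the normative value (0.429), which is exactly the "consistent, predictable distortion" pattern, rather than random noise.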
Monty Hall Problem: An Illustration of DIBIL (Dogma-Induced Blindness Impeding Literacy)
The Monty Hall problem illustrates how dogmatic assumptions and entrenched beliefs can obstruct rational Bayesian updating. In this scenario, a contestant on a game show selects one of three doors, behind one of which is a prize. After the initial choice, the host—who knows the location of the prize—opens another door that does not contain the prize. The contestant is then offered the option to switch to the remaining unopened door.
Bayesian reasoning clearly shows that switching doors increases the probability of winning from 1/3 to 2/3. Yet many participants, including contestants on televised shows, fail to update their posterior probabilities correctly. Instead, they cling to the erroneous belief that both doors are equally likely, disregarding the critical conditional information provided by the host’s choice.
This refusal to switch—despite overwhelming logical and empirical evidence—epitomizes DIBIL (Dogma-Induced Blindness Impeding Literacy). The “dogma” here is a static, incorrect notion of probability, derived from intuition or insufficient teaching, which prevents agents from integrating new evidence effectively. Empirical studies confirm this bias: a significant fraction of contestants consistently choose not to switch, thus missing the better odds. This behavior—resistant to logical arguments or empirical demonstrations—highlights the powerful role of dogma in obstructing Bayesian literacy and rational belief revision.
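A short Monte Carlo simulation confirms the Bayesian analysis above: switching wins about two-thirds of the time, staying about one-third.

```python
import random

# Monte Carlo check of the Monty Hall analysis: the host always opens a
# non-prize door the contestant did not pick.

def play(switch, rng):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    choice = rng.choice(doors)
    # Host opens a door that is neither the contestant's choice nor the prize.
    opened = rng.choice([d for d in doors if d != choice and d != prize])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
stay = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # stay ≈ 0.333, switch ≈ 0.667
```

An agent free of DIBIL would treat this reproducible result as decisive evidence and update accordingly; the simulation is exactly the kind of independent verification the Ξ operator is meant to capture.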
Systematic Bias and Bayesian Distortion
A unifying characteristic of cognitive biases, including those demonstrated in the Monty Hall problem, is the systematic deviation of an agent’s posterior probabilities from the values prescribed by Bayesian inference. Rather than introducing random noise or minor inaccuracies, cognitive biases result in consistent, predictable distortions. The agent’s posterior beliefs fail to align with the Bayesian posterior P(H|E), hindering accurate, evidence-based decision-making and understanding.
Opportunities for Corrective Measures
Cognitive biases reveal the tension between heuristic-driven, intuitive judgments and the precise probabilistic updates required by Bayesian logic. Recognizing these distortions offers pathways for corrective measures, such as:
Deliberate analytical thinking to counteract intuitive errors.
Leveraging L-language’s structural safeguards to prevent semantic drift and guide agents toward rational, evidence-informed reasoning.
By addressing these biases systematically, agents can improve their alignment with Bayesian principles, thereby enhancing both decision-making and overall understanding.
Theory-Induced Blindness (TIB) and Dogma-Induced Blindness Impeding Literacy (DIBIL)
Definitions
Theory-Induced Blindness (TIB)
TIB occurs when an agent persistently adheres to a flawed theory T—a subset of hypotheses H—even after incontrovertible evidence refutes at least one critical proposition within T.
Formal Definition:
Let T be a set of hypotheses, T ⊆ H.
Suppose there exists a hypothesis ψ ∈ T for which Ξ(¬ψ) = 1, meaning the negation of ψ is an empirically validated fact.
If, despite Ξ(¬ψ) = 1, the agent never removes ψ from their belief set—i.e., ψ remains in B(t+k) for all k > 0—then TIB holds.
In other words, TIB occurs if:
∃ψ ∈ T such that Ξ(¬ψ) = 1, yet ψ ∈ B(t+k) ∀k > 0.
The agent fails to discard a refuted hypothesis ψ, continuing to treat the flawed theory T as valid despite definitive contradictory evidence.
Dogma-Induced Blindness Impeding Literacy (DIBIL)
DIBIL arises when an agent erroneously treats a hypothesis as a fact. In this case, the agent assumes Ξ(ψ) = 1 prematurely, despite insufficient empirical validation, leading to the misclassification of a mere hypothesis ψ as a fact—even though Ξ(ψ) < 1.
Formal Definition:
Let ψ be a proposition that is not a fact, meaning ψ ∈ H and Ξ(ψ) < 1.
If the agent nonetheless holds ψ in B(t+k) for all k > 0, acting as if ψ were a fact, then DIBIL occurs.
In other words, DIBIL occurs if:
∃ψ ∈ H with Ξ(ψ) < 1, yet ψ remains in B(t+k) ∀k > 0 as though ψ were a fact.
The agent never corrects this misclassification, maintaining an unvalidated hypothesis in the belief set as if it were established truth.
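The two failure modes just formalized can be checked mechanically. In the sketch below, the proposition name, the Ξ values, and the belief history are hypothetical assumptions, and `treated_as_fact` marks the propositions the agent holds as axiomatic:

```python
# Sketch of the TIB and DIBIL tests. A belief history is a list of belief
# sets B(t), B(t+1), ...; xi maps proposition strings to values in [0, 1].

def exhibits_tib(psi, history, xi):
    """TIB: Xi(~psi) = 1, yet psi stays in every later belief set."""
    return xi.get("~" + psi) == 1.0 and all(psi in b for b in history)

def exhibits_dibil(psi, history, xi, treated_as_fact):
    """DIBIL: Xi(psi) < 1, yet psi is held as a fact throughout."""
    return (xi.get(psi, 0.0) < 1.0 and psi in treated_as_fact
            and all(psi in b for b in history))

# Hypothetical scenario: a refuted proposition kept as an axiom forever.
xi = {"wage_rigidity": 0.0, "~wage_rigidity": 1.0}
history = [{"wage_rigidity"}, {"wage_rigidity"}, {"wage_rigidity"}]
print(exhibits_tib("wage_rigidity", history, xi))                       # True
print(exhibits_dibil("wage_rigidity", history, xi, {"wage_rigidity"}))  # True
```

An agent that drops the proposition at any point in the history fails both tests, which is precisely the corrective behavior the rationality conditions require.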
Interaction of TIB and DIBIL
DIBIL introduces a false “fact” into the belief system.
This occurs when a hypothesis, lacking full empirical validation, is prematurely treated as an axiom or foundational assumption. For example, certain economic models (e.g., Keynesian frameworks) may treat unverified hypotheses about monetary behavior as axiomatic premises, thereby elevating unproven statements to “fact” status within their logical structure.
TIB prevents the removal of the false fact.
Once the incorrect assertion introduced by DIBIL becomes integrated into a broader theory T, TIB ensures its persistence. With a flawed “fact” embedded in the foundational layer of T, the agent consistently resists revising their stance, even when confronted by overwhelming contradictory evidence.
Reinforcing Cycle of TIB and DIBIL
Together, TIB and DIBIL form a self-reinforcing cycle:
DIBIL establishes the incorrect assertion as a foundational “truth.”
TIB ensures that the agent never revisits or removes this entrenched falsehood.
Regardless of the strength of refuting evidence, the agent’s belief system remains impervious to correction, trapped in a stable but erroneous state that obstructs rational inference and objective understanding.
Consequences
TIB and DIBIL, working in tandem, lock the agent into persistent false beliefs. This resistance to correction undermines rational attempts at belief revision, Bayesian updates, or empirical alignment. Together, they explain how dogmatic or theory-driven errors can become permanently ingrained, rendering the belief system immune to logical contradiction or corrective data.
Separating Dogma from Fact in Mathematical Economics
To effectively distinguish dogma from fact in mathematical economics, it is essential to establish a precise and universally applicable definition of economic efficiency. This begins with a comparison of two foundational equilibrium concepts: Nash Equilibrium and Pareto Efficiency. While both describe equilibrium states, they differ significantly in their implications for individual and collective outcomes.
Nash Equilibrium: Strategic Stability
In mathematical economics, as in game theory, Nash Equilibrium logically follows from the axiom of rational utility maximization—a cornerstone of both disciplines. It represents a state in which rational utility maximizers engage in strategic interactions such that:
Equilibrium Condition: No player can benefit by unilaterally changing their strategy, assuming the strategies of other players remain unchanged.
If this condition were violated, the scenario would fail to qualify as an equilibrium, because rational players would adjust their strategies to maximize payoffs. Nash Equilibrium thus guarantees strategic stability for each individual player (or representative agent in an economy).
Limitations of Nash Equilibrium
While Nash Equilibrium ensures strategic stability at the individual level, it does not necessarily guarantee collective optimality:
Individual Rationality: Nash Equilibrium prioritizes self-interest, focusing on maximizing individual payoffs.
Collective Welfare: The resulting equilibrium may still be suboptimal or inefficient for the group as a whole.
For example, in prisoner’s dilemma scenarios, Nash Equilibrium leads both players to defect, resulting in a collectively worse outcome than if both had cooperated. This divergence underscores that strategic stability does not imply economic efficiency or maximization of collective welfare.
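This divergence can be verified mechanically. The sketch below enumerates all four strategy profiles of a standard prisoner’s dilemma (the payoff values are the conventional illustrative ones, not drawn from any particular source) and tests each profile for Nash stability and Pareto efficiency:

```python
from itertools import product

# Illustrative prisoner's dilemma payoffs (higher = better).
# Strategies: "C" = cooperate (stay silent), "D" = defect (betray).
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(profile):
    """No player gains by unilaterally deviating from `profile`."""
    r, c = profile
    for dev in "CD":
        if payoffs[(dev, c)][0] > payoffs[(r, c)][0]:
            return False  # row player would deviate
        if payoffs[(r, dev)][1] > payoffs[(r, c)][1]:
            return False  # column player would deviate
    return True

def is_pareto_efficient(profile):
    """No other profile makes someone better off without hurting anyone."""
    p = payoffs[profile]
    for other in product("CD", repeat=2):
        q = payoffs[other]
        if q[0] >= p[0] and q[1] >= p[1] and q != p:
            return False
    return True

for profile in product("CD", repeat=2):
    print(profile, "Nash:", is_nash(profile),
          "Pareto-efficient:", is_pareto_efficient(profile))
```

Running this confirms the text: the unique Nash Equilibrium, mutual defection, is the only profile that is not Pareto-efficient, while mutual cooperation is Pareto-efficient yet strategically unstable.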
Pareto Efficiency: The Baseline for Collective Welfare
Pareto Efficiency emphasizes collective welfare by ensuring that no individual can be made better off without making someone else worse off. Importantly, Pareto efficiency and strategic stability are distinct properties: as the prisoner’s dilemma shows, a Nash Equilibrium need not be Pareto-efficient, and the cooperative, Pareto-efficient outcome need not be a Nash Equilibrium. A Nash Equilibrium outcome is therefore considered “better” or more “optimal” when it is also Pareto-efficient, since such an outcome realizes all potential mutual gains.
Kaldor-Hicks Efficiency and Practical Challenges
Alternative approaches, such as Kaldor-Hicks Efficiency, extend Pareto Efficiency by introducing the concept of potential compensation—where “losers” in an allocation could theoretically be compensated by the “winners.” However, these alternatives face practical challenges:
Implementation Complexity: Identifying and compensating affected parties is often prohibitively difficult.
Equitable Resource Allocation: Ensuring fairness in compensation schemes is rarely practical.
While these extensions are theoretically elegant, they often fail to address the complexities of real-world economics.
Barriers to Achieving Pareto Efficiency
Despite its simplicity and elegance, Pareto Efficiency is difficult to achieve in real-world conditions due to several barriers:
Market Imperfections: Distortions such as monopolies, regulatory constraints, or lack of competition disrupt efficient price signals.
Information Asymmetries: Unequal access to critical information leads to suboptimal decisions by market participants.
Externalities: Costs or benefits imposed on third parties (e.g., pollution) are not reflected in market transactions, distorting resource allocation.
Conclusion
Although attaining Pareto Efficiency is challenging, it remains a critical objective in economics. Rather than critiquing it for potential unfairness or inequality, efforts should focus on establishing this baseline efficiency first. As the saying goes, “one must learn to walk before attempting to run”: ensuring efficient resource allocation provides the foundation upon which broader concerns like equity and fairness can be addressed.
Types of Information in Game Theory
In mathematical game theory, the type and availability of information play a crucial role in shaping strategic interactions. Four primary categories of information influence decision-making:
Complete Information
Complete information exists when all players are fully aware of the game’s structure, including its payoffs, strategies, and rules for every participant. This comprehensive knowledge enables players to operate from the same set of axioms, allowing them to reach similar conclusions about optimal strategies. However, complete information does not imply awareness of past or future moves by other players—it solely pertains to the game’s framework and parameters.
Perfect Information
Perfect information is present when all players are fully aware of every action taken throughout the game’s history. Each player knows all prior moves made by every other participant. Classic examples include chess and checkers, where all pieces and moves are visible to both sides. In these scenarios, each player’s perspective mirrors that of a fully informed third party, possessing complete historical knowledge. Perfect information allows players to make decisions with full awareness of the game’s evolution up to the current point.
Imperfect Information
Imperfect information arises when players lack access to either complete historical data or private information held by others. Even with complete information about the game’s structure and payoffs, uncertainty may persist due to hidden elements. For example, in poker, players cannot see each other’s cards, creating uncertainty about opponents’ strategies and private holdings.
This lack of transparency makes it challenging to achieve outcomes like Pareto Efficiency, as players cannot fully account for the strategic choices of others.
Imperfect information reflects a partial understanding of the game’s dynamics, where strategic uncertainty prevents precise outcome optimization.
Incomplete Information
Incomplete information occurs when players lack fundamental knowledge about the game itself, such as the payoffs, preferences, or strategies of other participants. In these situations, players must form probabilistic beliefs about the unknown elements. This context gives rise to Bayesian Nash Equilibria, where strategies are based on probabilistic assessments rather than certainties.
Incomplete information is a broader form of uncertainty that encompasses gaps in both the game’s structure and private knowledge, further complicating strategic interactions.
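The logic of a best response under incomplete information can be made concrete with a minimal sketch. Here a seller faces a buyer whose valuation (the buyer’s “type”) is private; the prior, valuations, and cost below are hypothetical numbers chosen purely for illustration:

```python
# The buyer's valuation (the "type") is private: high with probability 0.6,
# low with probability 0.4. The seller, not knowing the type, picks the
# price that maximizes *expected* profit. All numbers are hypothetical.

prior = {"high": 0.6, "low": 0.4}
valuation = {"high": 10.0, "low": 4.0}
cost = 2.0

def expected_profit(price):
    # A buyer type purchases only if its valuation covers the price.
    return sum(p * (price - cost)
               for t, p in prior.items() if valuation[t] >= price)

# Only prices equal to some type's valuation can be optimal here.
candidate_prices = [4.0, 10.0]
best = max(candidate_prices, key=expected_profit)
print(best, expected_profit(best))  # the high price wins
```

Note the welfare consequence: the expected-profit-maximizing price excludes the low-valuation buyer even though that buyer values the good above its cost, so a mutually beneficial trade is forgone. This is a first glimpse of how information gaps preclude Pareto efficiency, a theme developed below.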
Clarifying Terminology
The distinctions between perfect and imperfect information are nuanced yet foundational in game theory:
Perfect Information: Full knowledge of the game’s history, including all prior moves.
Imperfect Information: Gaps in private knowledge, even when the full history of observed moves is known.
For example:
A player in a chess match (perfect information) knows all prior moves.
A poker player (imperfect information) faces uncertainty due to hidden cards.
Achieving Pareto Efficiency under imperfect information requires not only knowledge of the game’s structure and history but also access to all private information held by each player. Without this, strategic uncertainty persists, limiting the potential for collectively optimal outcomes.
How Imperfect Information Leads to Pareto Inefficiency
Imperfect information is a significant barrier to achieving Pareto-efficient outcomes in both mathematical economics and real-world scenarios. By distorting decision-making and incentivizing opportunistic behavior, it prevents the optimal allocation of resources and fosters inefficiency across markets and strategic interactions.
Imperfect Information and Market Breakdown
George Akerlof’s seminal work, The Market for “Lemons”, vividly illustrates how imperfect information disrupts markets. In Akerlof’s example:
Asymmetric Knowledge: Sellers of used cars have more information about their vehicles’ quality than buyers.
Distorted Pricing: Buyers, unable to distinguish between high- and low-quality vehicles, offer prices that reflect average quality.
Market Failure: High-quality sellers exit the market because they cannot secure fair prices. This breakdown prevents mutually beneficial transactions and leads to Pareto inefficiency, where resources are misallocated between buyers and sellers.
This dynamic demonstrates how imperfect information leads to market inefficiency by driving honest participants out and undermining the trust needed for optimal trade.
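Akerlof’s unraveling can be sketched in a few lines. Assume (purely for illustration) that quality is uniform on [0, 1], a seller sells only at a price at or above the car’s quality, and a buyer values a car at 1.2 times its quality:

```python
# Quality q of each used car is uniform on [0, 1]; the seller knows q,
# the buyer does not. A seller sells only if price >= q; a buyer values
# a car at 1.2 * q, so pays at most 1.2 * E[q | car is offered].
# Iterating this logic shows the market unravel toward a price of zero.

price = 1.0
for _ in range(30):
    expected_quality = price / 2      # cars offered have q in [0, price]
    price = 1.2 * expected_quality    # buyer's willingness to pay
    # the price shrinks by a factor of 0.6 each round
print(round(price, 6))                # effectively zero: the market is gone
```

The collapse occurs even though, under symmetric information, every single trade would have been mutually beneficial (the buyer values each car at 1.2 times what the seller does), which is exactly the Pareto inefficiency the text describes.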
The Rent-Seeking Lemma: Incentives for Opportunism
Imperfect information also fosters opportunistic behavior, as explained by the Rent-Seeking Lemma, derived from the rational utility maximization axiom. Rational agents, aiming to maximize their welfare, conduct cost-benefit analyses before engaging in transactions. When potential benefits outweigh costs, some agents resort to dishonest practices like fraud or misrepresentation.
Rent-Seeking Behavior: Defined by Tullock and Buchanan in public choice theory, rent-seeking describes efforts to extract wealth without creating value, often by exploiting or manipulating existing resources.
Example: A used-car seller might misrepresent a low-quality vehicle as high-quality to extract unearned wealth from an uninformed buyer.
As noted by Jensen and Meckling in Theory of the Firm: Managerial Behavior, Agency Costs, and Ownership Structure (1976), this behavior stems from self-interest and variability in honesty among economic agents. Such exploitation exacerbates inefficiencies, erodes trust, and drives honest participants out of the market.
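For a risk-neutral agent, the cost-benefit calculation behind the Rent-Seeking Lemma reduces to a one-line comparison; the gains, detection probabilities, and penalties below are hypothetical:

```python
# A rational (amoral) agent misrepresents quality only when the expected
# gain exceeds the expected cost of being caught -- a bare-bones reading
# of the cost-benefit logic behind the Rent-Seeking Lemma.

def chooses_fraud(gain, detection_prob, penalty):
    """Expected-value comparison for a risk-neutral agent."""
    expected_cost = detection_prob * penalty
    return gain > expected_cost

# With weak enforcement fraud pays; with strong enforcement it does not.
print(chooses_fraud(gain=1000, detection_prob=0.05, penalty=5000))  # True
print(chooses_fraud(gain=1000, detection_prob=0.50, penalty=5000))  # False
```

The point of the sketch is not the particular numbers but the structure: whenever information asymmetries lower the probability of detection, they mechanically enlarge the set of agents for whom opportunism is rational.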
Economic Parasites and Systemic Inefficiencies
Markets plagued by imperfect information foster what can be termed economic parasites—agents who extract value without contributing to its creation.
Agency Theory Perspective: Economic parasites exploit information asymmetries to gain unearned wealth.
Public Choice Theory Perspective: Rent-seekers manipulate systems for personal gain without adding productivity.
For example, fraudulent car dealers profit by deceiving uninformed buyers, incentivizing dishonest practices and driving out honest agents. This dynamic creates systemic inefficiencies, erodes trust, and reduces overall economic welfare, preventing Pareto-efficient outcomes.
Imperfect Information Precludes Pareto Efficiency
According to the Rent-Seeking Lemma, imperfect information inherently precludes Pareto efficiency:
Mutual Benefit Obscured: Under imperfect information, it is impossible to determine whether a transaction is truly Pareto-efficient—that is, mutually beneficial to all parties.
Fraud Risk: Without symmetrical information between counterparties, the risk of fraud and exploitation inevitably arises, disrupting optimal resource allocation.
Game Theory and Strategic Uncertainty: The Prisoner’s Dilemma
The prisoner’s dilemma exemplifies how imperfect information and strategic uncertainty lead to Pareto inefficiency.
Scenario: Two co-conspirators must decide whether to cooperate or defect, but each fears betrayal.
Outcome: Acting rationally, both choose to defect, resulting in a Nash Equilibrium where neither benefits as much as they could through mutual cooperation.
Inefficiency: This equilibrium is Pareto-inefficient because the group fails to achieve the best possible outcome.
If the prisoners had symmetric information and could fully trust each other’s intentions, they could reach a Pareto-efficient outcome via cooperation.
Market Inefficiencies: Akerlof’s “Lemons”
Similarly, in markets, asymmetric information distorts decision-making and outcomes:
Imbalance of Knowledge: Sellers know more about the quality of their goods than buyers.
Distorted Prices: Buyers adjust prices downward, reflecting average rather than high quality.
Market Failure: High-quality sellers exit the market, leading to suboptimal resource allocation.
These inefficiencies mirror the prisoner’s dilemma, where imperfect information prevents individuals from making mutually beneficial decisions.
Conclusion: Imperfect Information as a Barrier to Efficiency
Imperfect information undermines decision-making in both markets and strategic interactions. It fosters fraud, erodes trust, and prevents participants from achieving Pareto-efficient outcomes. Whether through market distortions like Akerlof’s “Lemons” or strategic uncertainty in the prisoner’s dilemma, the lack of transparency ensures suboptimal results, highlighting the critical need for mechanisms to address information asymmetries.
Bridging Theory and Practice: Measuring Pareto Efficiency Objectively
While the theory of Pareto Efficiency is compelling, its application to real-world economies requires measurable benchmarks and empirical verification. An economic model cannot claim efficiency based solely on theoretical constructs or assumptions. To distinguish between dogma and fact, it is essential to:
Define Pareto Efficiency in an independently verifiable manner.
Establish empirical criteria for testing efficiency in practice.
By grounding theoretical conclusions in observable evidence, mathematical economics can move beyond abstraction and provide meaningful practical insights. Aligning theory with the complexities and imperfections of real-world markets ensures that economic models deliver genuine use-value, effectively bridging the gap between theoretical rigor and practical applicability.
GDP vs. Gross Output vs. Intermediate Consumption
How can we determine if an economy is truly Pareto efficient in an objective way? Independent verifiability is key to distinguishing objective fact from hypothesis. Since no universally accepted method defines absolute Pareto-efficiency, let us begin with a simpler task: How can we measure the relative Pareto efficiency of two economies, A and B, in an independently verifiable manner—applicable both theoretically and practically?
Rankings of Pareto efficiency generally rely on real GDP per capita and its growth over time, adjusted for negative externalities like environmental pollution. This approach dominates because it rests on the only data available for objectively comparing two economies’ relative efficiency. However, it overlooks production costs, particularly intermediate inputs (e.g., oil and gas) that are essential for production but not directly consumed by individuals. Reducing these inputs increases efficiency, because fewer resources generate the same output. This principle underlies federal fuel-efficiency mandates and the broader green movement: both strive to reduce non-renewable resource use and thus boost overall efficiency. While we do not evaluate the real-world impacts of these policies here, their stated aim—to improve productive efficiency by lowering resource consumption—remains clear.
Consider house construction as an example. The finished house contributes to final consumption (and thus GDP), directly enhancing welfare. However, the lumber used to build the house is intermediate consumption—a necessary cost for producing the final good. If the builder can use less lumber without sacrificing quality, intermediate consumption decreases, thereby improving productive efficiency. This principle is universally valid: employing fewer inputs to generate the same output indicates greater efficiency.
This distinction explains why Gross Output (GO)—which includes both final goods and services (counted in GDP) and intermediate consumption—is seldom emphasized in policy discussions. GO measures total production volume, whereas GDP focuses on final goods and services, correlating directly with consumer utility and welfare (i.e., finished products with direct use-value).
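The GO/GDP distinction can be restated as arithmetic using the house example above; all figures are invented for illustration:

```python
# National-accounts toy example: Gross Output (GO) counts everything
# produced, GDP counts only final goods and services. Reducing
# intermediate consumption while holding final output fixed raises the
# GDP / GO ratio, one crude gauge of productive efficiency.

def accounts(final_consumption, intermediate_consumption):
    gross_output = final_consumption + intermediate_consumption
    gdp = final_consumption
    return gross_output, gdp, gdp / gross_output

# A house worth 100 built with 40 of lumber ...
print(accounts(100, 40))
# ... versus the same house built with only 30 of lumber:
# GO falls, GDP is unchanged, and the GDP/GO ratio rises.
print(accounts(100, 30))
```

Same final good, fewer inputs: GDP is identical in both runs, while the share of gross output that ends up as final consumption rises, which is precisely the efficiency gain the text describes.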
The more an economy reduces intermediate consumption without sacrificing output, the more efficient it becomes. However, current GDP calculations by governments include not only final goods and services but also certain government expenditures, such as military spending. Although such spending is treated as final expenditure for accounting purposes, it does not enhance general welfare in the same way consumer goods do. For instance, paying a security guard to check IDs is a necessary cost for maintaining order but does not directly improve consumer well-being. Similarly, defense spending is crucial for security but does not raise welfare like an increase in consumer goods would.
The same reasoning applies to expenditures on education and social welfare. Spending on education is a cost incurred to achieve educational outcomes. If the same results can be obtained with less expenditure, efficiency improves. Likewise, providing housing for the needy at a lower cost while preserving quality raises societal benefit by maximizing utility while minimizing collective costs. Each instance shows that cutting costs without reducing output increases productivity and aligns resources more closely with welfare.
Although government spending indirectly supports the economy by enabling trade and protecting citizens, it remains a cost no different from intermediate consumption—rather than a direct contributor to consumer welfare. Still, current national accounting standards classify government spending (including military expenditures) as part of GDP because it is counted as final expenditure.
Redefining such spending as intermediate consumption would require revisiting what “final” and “intermediate” mean in GDP calculations. Accurate classification is critical: reducing costs without cutting output boosts productivity. Today’s definitions, however, conform to established international accounting standards.
Upon closer reflection, it becomes clear that standards often emerge from processes shaped by those who benefit from them. Government expenditures—such as salaries for officials drafting these standards—are categorized as benefits rather than costs, verifiably overstating their contribution to general welfare. GDP includes all final expenditures, including government spending, regardless of its effect on welfare. This misclassification paves the way for rent-seeking and exacerbates the principal-agent problem, wherein agents (government officials) place their interests above the public’s welfare.
As North Koreans might note, even if military spending is efficient in a technical sense, it can undermine overall welfare if a large share of GDP goes to the military instead of services that directly benefit citizens. True welfare maximization occurs when GDP is channeled toward consumer goods and services that improve well-being, rather than disproportionately allocated to the military.
This issue reveals a deeper concern: axiomatic or definitional misclassifications in mainstream economic accounting can enable rent-seeking that erodes overall welfare. Many economists accept these flawed definitions without personal gain, partly due to Theory-Induced Blindness (TIB) or Dogma-Induced Blindness Impeding Literacy (DIBIL)—cognitive biases that lead them to perpetuate incorrect assumptions. While some errors arise from genuine modeling attempts, others are deliberate, serving rent-seekers. We believe classifying defense spending as final consumption is likely no accident.
This paper seeks to investigate the underlying causes of intentional definitional errors in economic accounting and policy. These are not random oversights but deliberate behavioral nudges, comparable to opt-out marketing strategies that businesses use to boost product uptake. Such nudges enable unearned wealth extraction by “economic parasites,” as described by the Rent-Seeking Lemma. Public choice theory shows how rent-seeking agents create definitions and policies that serve their interests at the public’s expense.
Vladimir Lenin’s notion of “economic parasites”—those who consume but do not produce—resonates across multiple frameworks. Public choice theory’s successful rent-seekers, agency theory’s fraudulent agents, and Lenin’s economic parasites all describe actors who draw unearned wealth from productive participants. This universal pattern underscores how successful rent-seekers invariably feed off others’ value creation.
We assert—as a fact that cannot turn out to be false—that any parasitic infestation, whether locusts on crops, termites in homes, or thieves, fraudsters, and rent-seekers in economies, produces deadweight losses, thereby lowering efficiency and welfare. Reducing such rent-seeking is therefore vital to improving efficiency.
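A minimal sketch of why parasitic extraction lowers welfare: theft and fraud are, by themselves, transfers, but the trades they deter are surplus that nobody receives. The surplus values and fraud-loss figure below are illustrative:

```python
# Deadweight loss from rent-seeking, in miniature. Each potential trade
# creates joint surplus s. If participants face an expected loss to fraud
# per trade, they rationally forgo every trade whose surplus does not
# cover it. The forgone surplus is a pure deadweight loss: neither the
# traders nor the parasite receives it.

surpluses = [1, 2, 3, 5, 8, 13]

def realized_surplus(expected_fraud_loss):
    return sum(s for s in surpluses if s > expected_fraud_loss)

honest = realized_surplus(0)    # all trades happen
infested = realized_surplus(4)  # small trades are deterred
print("deadweight loss:", honest - infested)
```

What the parasite actually extracts from the trades that still occur is a redistribution; the deadweight loss is the surplus from the deterred trades, which vanishes entirely, exactly as with locusts on crops.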
Although real per capita GDP remains useful, it currently inflates welfare by labeling intermediate consumption as benefits. To more accurately measure Pareto efficiency—especially when comparing different economies—we must refine national accounting standards to distinguish genuine final consumption from costs such as government spending. Doing so would yield a clearer view of an economy’s welfare contribution and curb rent-seeking activities.
This introduction, though extensive, only begins to explore the hidden forms of rent-seeking. By using a formal system, one can uncover how DIBIL and related behaviors enable compromised economic agents to persist. According to Lenin’s definition, these “economic parasites” allow for unearned wealth extraction, legitimizing rent-seekers’ influence on legislation. By challenging flawed theories, we can guide policy reforms that curtail exploitation and enhance true economic welfare.
The Labor-For-Goods Dynamic Equilibrium Model within Mathematical Economics
Mathematical economics operates as a formal system, where theorems—such as the First Welfare Theorem—are derived from foundational axioms using L-language first-order inference rules. Here, however, a problem arises. Some axioms, like local non-satiation (where representative agents always prefer more of a good to less), hold true universally. Other axioms, such as rationality, do not. As discussed, agents often exhibit bounded rationality due to cognitive biases—like DIBIL—that stem from adopting hypotheses known to be inaccurate as axioms.
Comprehensive Arrow-Debreu Assumptions
Rational Utility Maximization
Statement: Each consumer (agent) is rational and seeks to maximize a well-defined utility function subject to a budget constraint.
Implication: Consumers make choices that maximize utility given their initial endowments and prevailing prices.
Notes: This also implies internal consistency (transitivity, etc.) and that utility functions accurately represent preferences.
Complete, Reflexive, and Transitive Preferences
Statement: Consumers can rank all possible bundles of goods. Preferences are complete (any two bundles can be compared), reflexive (any bundle is at least as good as itself), and transitive (consistent ordering across multiple bundles).
Implication: Ensures well-defined preference orderings that can be represented by a utility function under certain continuity/monotonicity conditions.
Continuity and (Strict) Monotonicity
Continuity: Small changes in a consumption bundle lead to small changes in preference ordering, enabling standard analytical tools.
Monotonicity (Local Non-Satiation): More of any desirable good is strictly preferred to less, so no “bliss point” is reached within normal consumption ranges.
Convex Preferences
Statement: Consumers prefer “averages” to extremes, formalized by convex indifference curves.
Implication: Ensures well-behaved demand functions and is critical for uniqueness or stability of equilibria.
Notes: Strict convexity is often assumed to avoid corner solutions.
Firms Have Convex Production Sets
Statement: Production sets exhibit constant or decreasing returns to scale, ensuring convex input-output possibilities.
Implication: Helps guarantee the existence of a price-taking, competitive equilibrium where marginal cost equals marginal revenue.
Notes: Non-convexities (e.g., increasing returns to scale) can invalidate certain welfare theorems or yield multiple equilibria.
No Externalities / No Public Goods
Statement: Production and consumption activities affect only the parties directly involved. Public goods or external costs/benefits (pollution, free-rider issues) either do not exist or are fully internalized via well-defined property rights.
Implication: Each participant faces the true costs and benefits of their actions, making private optima coincide with social optima.
Notes: If externalities exist or goods are non-excludable/non-rival, standard Arrow-Debreu efficiency results may not hold unless special corrective mechanisms are in place.
Complete Markets
Statement: Every possible good or service (including future/contingent goods) is traded in a market at some price.
Implication: Agents can fully insure or hedge against all future contingencies; nothing remains unpriced.
Notes: In reality, many markets (especially for future states or rare events) do not exist—leading to incomplete markets and potential inefficiencies.
Price-Taking Behavior (Perfect Competition)
Statement: All agents (consumers and firms) treat prices as given—no individual participant can influence the market price of any good.
Implication: Ensures equilibrium emerges purely from aggregate supply and demand, with no monopoly or monopsony power.
Notes: Real-world market power or strategic interaction (e.g., oligopoly, cartels) violates this assumption.
Well-Defined, Transferable Property Rights
Statement: Each good (or factor of production) belongs to some agent, who can buy, sell, or trade it freely. Property rights are secure and enforceable.
Implication: Prevents ambiguity over ownership. Goods (and endowments) can be optimally allocated via voluntary exchange if no externalities exist.
Notes: Weak enforcement or ill-defined rights leads to market failures (e.g., the tragedy of the commons).
No Transaction Costs, No Barriers to Trade
Statement: Exchange (buying/selling) is frictionless; there are no legal or technical barriers to entry or exit, and no taxes, tariffs, or costly intermediaries.
Implication: Prices reflect pure supply-demand conditions, and agents can freely move to more profitable opportunities.
Notes: Even small transaction costs can block mutually beneficial trades.
No Uncertainty or Fully Known States (Classic Arrow-Debreu)
Statement: Either the future is certain or there is a complete set of state-contingent markets spanning every possible state of the world (making uncertain futures tradeable in present-value terms).
Implication: Agents can optimize across time and contingencies, as if they effectively “knew” outcomes or had perfect insurance.
Notes: In real economies, incomplete markets for future states (particularly uncertain ones) can disrupt Arrow-Debreu results.
Infinite Divisibility
Statement: Goods (and possibly labor/capital) can be subdivided infinitely and traded in arbitrary fractional amounts.
Implication: Assures continuity in demand and supply functions, avoiding lumpy or integer constraints that complicate equilibrium.
Notes: In reality, many goods are indivisible (e.g., an airplane). Large, lumpy investments can cause non-convexities.
Agents’ Endowments and Preferences Are Common Knowledge (Optional)
Statement: Each agent’s initial resources and utility functions are known to all participants (or at least to the market mechanism).
Implication: Facilitates price determination and existence proofs; no hidden information leads to adverse selection or moral hazard.
Notes: Imperfect or asymmetric information (e.g., Akerlof’s lemons) disrupts standard Arrow-Debreu efficiency results.
How These Assumptions Underlie the First Welfare Theorem
Existence of a Competitive Equilibrium
Under assumptions 1–13, one can show (via the Arrow-Debreu existence proof) that a competitive equilibrium of prices and allocations exists.
Pareto Efficiency of Equilibrium
The First Welfare Theorem states that if all these conditions are satisfied, every competitive equilibrium allocation is Pareto efficient—no one can be made better off without making someone else worse off.
Implications in Practice
Even small deviations (e.g., externalities, incomplete markets, market power, or incomplete information) can cause systematic departures from Pareto efficiency, explaining why real economies often fail to realize this ideal Arrow-Debreu outcome.
Notes on Exhaustiveness
Some authors re-group or rename these assumptions (e.g., merging “no externalities” with “complete property rights,” or listing “local non-satiation” under “strict monotonicity”), but the core essence remains the same.
In more advanced treatments (e.g., Arrow-Debreu under uncertainty), complete state-contingent markets are emphasized separately, sometimes called “financial completeness.”
Under these premises, the First Welfare Theorem proves that any competitive equilibrium is Pareto efficient. Along with the Second Welfare Theorem, it underpins the Arrow-Debreu model—a cornerstone of mainstream mathematical economics. For instance, the Federal Reserve Bank of the United States relies on general equilibrium models grounded in Arrow-Debreu assumptions to guide critical policy decisions, such as setting interest rates.
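The theorem can be checked numerically in the smallest interesting case: a two-agent, two-good Cobb-Douglas exchange economy, solvable by hand. The preference weights and endowments below are arbitrary illustrative choices:

```python
# Two agents, two goods (x and y), Cobb-Douglas utilities. We solve for
# the competitive equilibrium and then verify the First Welfare Theorem's
# conclusion: both agents' marginal rates of substitution equal the price
# ratio, the interior condition for Pareto efficiency.

alpha = {1: 0.5, 2: 0.25}                 # agent i spends share alpha[i] on x
endow = {1: (1.0, 0.0), 2: (0.0, 1.0)}    # (x, y) endowments

# Normalize the price of y to 1 and clear the x market for p = price of x.
# Demand for x by agent i: alpha_i * wealth_i / p, wealth_i = p*ex_i + ey_i.
# Clearing: 0.5 * p / p + 0.25 * 1 / p = 1  =>  0.25 / p = 0.5  =>  p = 0.5
p = 0.5

alloc = {}
for i in (1, 2):
    wealth = p * endow[i][0] + endow[i][1]
    alloc[i] = (alpha[i] * wealth / p, (1 - alpha[i]) * wealth)

def mrs(i):
    """Marginal rate of substitution of a Cobb-Douglas agent."""
    x, y = alloc[i]
    a = alpha[i]
    return (a / (1 - a)) * (y / x)

print(alloc)              # market-clearing allocation
print(mrs(1), mrs(2), p)  # all equal: the equilibrium is Pareto-efficient
```

Both marginal rates of substitution come out equal to the equilibrium price ratio, so no reallocation can raise one agent’s utility without lowering the other’s, which is the theorem’s claim in this tiny economy.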
While the Arrow-Debreu model’s conclusions—rational, utility-maximizing agents operating in perfect markets—are logically robust within its idealized assumptions, real-world economies seldom meet these conditions. In this paper, we isolate factual axioms from those that are idealized or known not to hold in reality, then re-derive the First Welfare Theorem dynamically under a game-theoretic “Labor-for-Goods and services” unfettered-exchange model, ensuring a careful distinction between factual premises and idealized ones.
Introducing the Labor-For-Goods Game Theory Model
The Labor-For-Goods Game Theory Model offers a dynamic perspective on how Pareto-efficient Nash equilibria—predicted by the First Welfare Theorem—can emerge through ongoing, real-world interactions rather than from static assumptions. In this model, individuals earn wages by supplying labor and then exchange these wages for goods and services produced by others. Each voluntary trade is mutually beneficial, incrementally moving the economy closer to a Pareto-efficient outcome.
While the Arrow-Debreu framework provides a powerful lens for understanding competitive equilibria, it treats equilibrium as a static condition, assuming perfect markets and symmetric information. In contrast, the Labor-For-Goods model captures the continuous adjustments that occur as rational agents engage in trade. Each transaction serves as a “step” in a gradient descent, reducing inefficiencies one exchange at a time.
This dynamic viewpoint does not contradict Arrow-Debreu theory; rather, it enriches it by illustrating the step-by-step path toward equilibrium. By focusing on the process of getting there, the Labor-For-Goods Game Theory Model bridges theoretical insights with practical observations, demonstrating how real-world markets evolve toward efficiency over time.
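This gradient-descent intuition can be simulated directly. In the sketch below (all parameters illustrative), two Cobb-Douglas agents start from a lopsided endowment and execute randomly proposed barters only when both strictly gain; trading stops by itself once no mutually beneficial exchange remains:

```python
import random

random.seed(0)
a1, a2 = 0.5, 0.5                        # Cobb-Douglas preference weights
x1, y1, x2, y2 = 0.9, 0.1, 0.1, 0.9      # lopsided initial endowments

def u(a, x, y):
    """Cobb-Douglas utility."""
    return (x ** a) * (y ** (1 - a))

dx = 0.001                               # size of each proposed barter
for _ in range(100_000):
    rate = random.uniform(0.1, 10.0)     # proposed price of x in units of y
    dy = rate * dx
    # Agent 1 sells dx of good x for dy of good y; agent 2 takes the other
    # side. The swap executes only if it strictly raises BOTH utilities.
    if (x1 > dx and y2 > dy and
            u(a1, x1 - dx, y1 + dy) > u(a1, x1, y1) and
            u(a2, x2 + dx, y2 - dy) > u(a2, x2, y2)):
        x1, y1, x2, y2 = x1 - dx, y1 + dy, x2 + dx, y2 - dy

# Marginal rates of substitution; (near-)equality signals Pareto efficiency.
mrs1 = (a1 / (1 - a1)) * (y1 / x1)
mrs2 = (a2 / (1 - a2)) * (y2 / x2)
print(round(mrs1, 2), round(mrs2, 2))    # nearly equal: no gains left
```

At termination the two marginal rates of substitution nearly coincide, so the allocation is approximately Pareto-efficient, reached without any auctioneer announcing prices in advance: each accepted trade is one step of the descent.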
Explanation: Labor-For-Goods (and Services) Setup
The Labor-For-Goods (and Services) framework uses game theory to model Pareto-efficient outcomes that yield group-optimal Nash equilibria. Unlike the Prisoner’s Dilemma—where individual incentives can lead to collectively suboptimal results—rational, utility-maximizing agents in this model trade labor for goods and services under the local non-satiation axiom. Provided these trades occur under unfettered, symmetric information exchange (and barring negative externalities), the resulting Nash equilibrium becomes a collaborative, Pareto-efficient allocation that benefits all parties.
Essentially, people are inherently self-motivated to consume more of real GDP. In a perfect market, the only way to consume more is to produce more. Therefore, barring frictions (like transaction costs), a group-optimal outcome is assured; it can fail to emerge only through impediments that block mutually beneficial trades, such as monopolies or rent-seeking. Thus, in this dynamic model, “perfect market conditions” are defined by the absence of such frictions.
For now, let us turn to the costs of producing the real GDP that we, as a group, consume—costs that are borne collectively by everyone who participates in the economy.
The Economic Model and Collective Costs: Labor and Externalities
This economic model—conceived as a formal system rooted in real-world interactions—builds on Adam Smith’s insight that overall welfare improves when individuals exchange the fruits of their labor. Producers earn wages and spend these on goods and services produced by others, creating a mutually beneficial cycle. Mathematically, the model asserts that the net collective costs of producing real GDP come from only two sources:
Labor contributed by individuals.
Negative externalities, such as pollution and resource depletion, which affect society at large.
Understanding Externalities
Externalities are costs or benefits imposed on third parties not directly involved in a given transaction (e.g., pollution). They are collective costs because they affect the broader population. Similarly, labor is a collective cost: everyone who is productively engaged invests time and effort, except those involved in non-productive or exploitative activities (theft, fraud, etc.). A rigorous formal system must account for all agents, including those who add no positive value.
While firms and individuals do incur private costs for inputs such as raw materials, capital, or technology, these expenses are not collective costs in the same sense as labor and externalities. For instance, mere ownership or transfer of raw materials used in intermediate consumption does not directly affect final consumption (i.e., GDP), which underpins collective welfare. Ownership transfers (e.g., stock sales) redistribute wealth rather than increase production, and thus do not alter Pareto efficiency—unless externalities are involved.
Ownership and Pareto Efficiency
Externalities tied to ownership changes—such as positive externalities from more efficient capital allocation—lie beyond this model’s primary scope, though it can be extended to explore such effects in future work:
Negative externalities (e.g., pollution or resource depletion) represent collective costs shared by society.
Capital ownership remains a private cost that does not, by itself, alter collective welfare.
Hence, labor and negative externalities are the only two collective costs—both theoretically and practically—because, at the group level, no additional costs beyond these are incurred in producing real GDP.
Illustrating Collective Costs: Bob and Alice on a Deserted Island
Consider Bob and Alice, stranded on a deserted island. Their combined costs and benefits can be optimized through mutually beneficial trades, leading to a Pareto-efficient outcome in which neither can improve their situation without harming the other.
Ownership of resources (e.g., a banana tree or water spring) is irrelevant to Pareto efficiency.
Mutual exchange of goods and services can still yield an efficient allocation.
Once no further mutually beneficial trades are possible, the economy reaches an efficient state—regardless of who owns which resource.
In a nutshell, this restates Adam Smith’s principle from The Wealth of Nations: mutually beneficial trade inherently boosts overall welfare by improving labor productivity and reducing the time each individual must expend to obtain what others produce—provided these exchanges remain voluntary and free of fraud or externalities. It is a self-evident truth that was as valid in 1776 as it is today.
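The island economy can be sketched numerically. The multiplicative utility function and the endowments below are illustrative assumptions, not part of the formal model:

```python
def utility(bananas, water):
    # Multiplicative utility: an agent holding only one good gains from variety.
    return bananas * water

# Endowments: Bob owns the banana tree, Alice owns the spring.
bob_before = utility(10, 0)      # 0: all bananas, no water
alice_before = utility(0, 10)    # 0: all water, no bananas

# A voluntary swap of 5 bananas for 5 units of water:
bob_after = utility(5, 5)        # 25
alice_after = utility(5, 5)      # 25

# Both agents are strictly better off: a Pareto improvement,
# independent of who owned which resource initially.
assert bob_after > bob_before and alice_after > alice_before
```

Once no further trade raises one agent's utility without lowering the other's, the allocation is Pareto efficient, exactly as in the prose above.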
Conclusion: The Universal Role of Labor and Externalities
No sound formal system built on self-evident axioms can contradict real-world facts. In this model, Pareto efficiency concerns how resources are allocated through trade, not who owns them. Once no more Pareto improvements are possible, the system is efficient.
From a macroeconomic perspective, labor and negative externalities remain the only collective costs incurred by society as a group, both in theory and in reality. There are no additional costs to producing GDP for the group as a whole. By incorporating facts that cannot turn out to be false as axioms, the Labor-For-Goods game theory dynamic equilibrium model offers a robust framework for examining how these factors influence economic outcomes, Pareto efficiency, and ultimately, collective welfare.
The Role of Money: A Unit of Account to Preclude Arbitrage
Arbitrage occurs when an asset or good can be bought in one market at a lower price and sold simultaneously in another market at a higher price. The arbitrageur profits from this price discrepancy—often by pressing buttons on a computer—without contributing to GDP. This effectively lets individuals consume resources they did not produce, aligning with the public choice definition of rent-seeking: extracting unearned wealth. Whether described as an “economic parasite,” a “successful rent-seeker,” or a “fraudulent agent,” the underlying issue is asymmetric (imperfect) information, allowing certain market participants to exploit others’ lack of knowledge. In many legal frameworks, this practice is tantamount to fraud—consuming goods and services others have produced without making a corresponding contribution to productivity. It mirrors the scenario of finding $100 on the street and using it to buy lunch: like arbitrage profits, it enables one to consume without producing. Such behavior distorts incentives and resource allocation, ultimately undermining overall economic efficiency.
In foreign exchange (Forex) markets—where roughly 30 major currencies are actively traded—exchange rates between different currencies can be organized into an exchange rate matrix E. Each element eij specifies how many units of currency j one obtains for a single unit of currency i. This matrix representation clarifies how competitive pressures adjust exchange rates to eliminate arbitrage (i.e., risk-free profit absent any real productive activity).
A key requirement for preventing arbitrage is internal consistency in exchange rates. Concretely:
eij · eji = 1
For example, if 1 U.S. dollar (USD) exchanges for 0.50 British pounds (GBP), then 1 pound must exchange for exactly 2.00 USD. Should this reciprocal condition fail, arbitrage becomes possible: a trader can exploit the mismatch in a series of trades to end up with more of the original currency than they started with—an unearned gain that erodes economic efficiency.
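A minimal sketch of this round-trip, using hypothetical rates that violate the reciprocal condition:

```python
# If e_ij * e_ji != 1, a round trip yields a risk-free gain (or loss).
# The quoted rates below are hypothetical.
usd_to_gbp = 0.50
gbp_to_usd = 2.10   # violates reciprocity: 0.50 * 2.10 = 1.05 != 1

start_usd = 100.0
end_usd = start_usd * usd_to_gbp * gbp_to_usd  # USD -> GBP -> USD
profit = end_usd - start_usd                   # unearned, risk-free gain

assert abs(profit - 5.0) < 1e-9
```

With the reciprocal rate of exactly 2.00 USD per GBP, the round trip returns the trader to the starting 100 USD and the arbitrage vanishes.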
Enforcing no-arbitrage conditions in currency markets illustrates how money, when serving as a unit of account, precludes inefficiencies caused by multiple prices for the same underlying asset. In the Forex context:
Exchange Rate Matrix (E): Represents how currencies convert into one another.
Reciprocity: Each rate eij must satisfy eij = 1 ÷ eji.
Self-Identity: Each currency must convert into itself at a rate of 1.
Rank-1 Constraint: The matrix E must have rank 1, ensuring a single, consistent scale for defining all cross rates.
By using the U.S. dollar as the unit of account for pricing other currencies, these properties reinforce price consistency and discourage arbitrage.
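The four properties above can be checked mechanically. The sketch below (currencies and rates are hypothetical, and NumPy is assumed for the linear algebra) builds a consistent cross-rate matrix from a single USD-based quote vector and verifies self-identity, reciprocity, and the rank-1 constraint:

```python
import numpy as np

# Hypothetical USD-based quotes: units of each currency per 1 USD.
usd_rates = {"USD": 1.0, "GBP": 0.50, "EUR": 0.90, "JPY": 150.0}
v = np.array(list(usd_rates.values()))

# Consistent cross-rate matrix: e_ij = v_j / v_i
# (1 unit of currency i buys v_j / v_i units of currency j).
E = np.outer(1.0 / v, v)

# Self-identity: each currency converts into itself at a rate of 1.
assert np.allclose(np.diag(E), 1.0)
# Reciprocity: e_ij * e_ji = 1 for every pair.
assert np.allclose(E * E.T, 1.0)
# Rank-1: a single consistent scale defines all cross rates.
assert np.linalg.matrix_rank(E) == 1
```

Any perturbation of a single off-diagonal entry breaks both the reciprocity check and the rank-1 check, signaling an arbitrage opportunity.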
Ensuring Efficiency via No-Arbitrage
No-arbitrage conditions underlie stable and efficient foreign exchange markets:
Internal Consistency: Exchange rates remain coherent, blocking rent-seeking strategies based purely on price discrepancies.
Aligned Wealth Accumulation: Profits must stem from genuine value creation rather than arbitrage.
Reduced Information Asymmetry: Resources flow toward more productive uses, raising overall welfare.
Without strict no-arbitrage rules, market participants could exploit price gaps to gain unearned income—akin to “economic parasites” or “successful rent-seekers” in public choice theory, or “fraudulent agents” in agency theory—thereby distorting resource allocation and harming economic efficiency. Thus, no-arbitrage constraints protect exchange rate integrity, ensuring that trade outcomes reflect true productivity and accurate information.
Prices as Exchange Rates
Within this framework, prices of all goods and services can be viewed as exchange rates relative to a chosen row or column of the matrix E—designated the unit of account. This approach underpins Arrow-Debreu theory and, interestingly, resonates with some aspects of Marx’s ideas. Money’s essential role emerges as a regulator of markets by preventing multiple prices for the same asset, which would otherwise enable arbitrage (= unearned wealth extraction = rent-seeking = inefficiency).
A real-world example appears in the foreign exchange (FX) market, where currencies are typically quoted against a single base currency—currently, the U.S. dollar. By standardizing currency pairs relative to USD, arbitrage opportunities diminish, nudging the system toward a no-arbitrage condition. Centralized quoting fosters predictability and shrinks the gaps that arbitrageurs might exploit, creating a more stable and equitable trading environment.
Although linear algebra can be extensive in finance, it suits this context well. Viewing prices as entries in an exchange rate matrix underscores money’s role solely as a unit of account. In FX, each currency pair (e.g., EUR/GBP or EUR/JPY) is derived from its respective rate to USD, highlighting money’s function as a universal reference. This method enhances market efficiency by increasing information symmetry and minimizing arbitrage possibilities, ultimately ensuring consistent asset prices across markets.
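As a sketch of this derivation (with hypothetical USD quotes), each cross rate follows by dividing the two USD-based rates:

```python
# Hypothetical USD quotes: units of currency per 1 USD.
eur_per_usd = 0.90
gbp_per_usd = 0.50
jpy_per_usd = 150.0

# Cross rates derived through the USD unit of account:
gbp_per_eur = gbp_per_usd / eur_per_usd   # EUR/GBP via USD
jpy_per_eur = jpy_per_usd / eur_per_usd   # EUR/JPY via USD

# Any directly quoted cross rate must match the derived one,
# or a triangular arbitrage opportunity exists.
assert abs(gbp_per_eur - 0.50 / 0.90) < 1e-12
```

Quoting every currency against one base in this way is what collapses the exchange rate matrix to rank 1 and keeps cross rates mutually consistent.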
Significance and Real-World Implications
The Labor-For-Goods Game Theory Model provides a dynamic lens for analyzing economic processes. By incorporating money and enforcing arbitrage-free conditions, the model captures the practical complexities of real-world economies, showing how:
Incremental Trades Improve Resource Allocation
Iterative trades, guided by rational pricing, allow the economy to reduce inefficiencies step by step and converge toward Pareto efficiency.
Money Stabilizes Markets
Acting as a unit of account, a medium of exchange, and a store of value, money enables seamless trades and preserves efficiency gains over time.
Arbitrage-Free Pricing Drives Progress
Rational pricing ensures that genuine value creation fuels economic growth, preventing distortions caused by unearned profits or exploitative practices.
By addressing these core principles, the model bridges the gap between static equilibrium frameworks like Arrow-Debreu and the evolving dynamics of real-world economies. It offers valuable insights for policy design, market regulation, and institutional development.
Conclusion
The Labor-For-Goods model highlights the dynamic journey toward efficiency, emphasizing how markets evolve continually through incremental trades. By integrating concepts such as gradient descent, money as a stabilizing force, and arbitrage-free pricing, the model provides a realistic picture of how economies approach Pareto efficiency in practice.
This framework underscores the importance of:
Clear metrics for value measurement.
Stable semantics for pricing and exchange.
Consistent incentives that guide rational decision-making.
Together, these elements help ensure markets operate efficiently and fairly, laying the groundwork for sustained economic progress.
CONDITIONS AND AXIOMS
Wall-Street Style Inference Rules: Dually Defined
This is where our “Wall-Street style” inference rules become more stringent and formal than the freewheeling, “child’s play” approach sometimes used by theoretical mathematicians. Outside of academic fantasyland—where, much like in the old Soviet Union, people pretend to pay mathematicians and mathematicians pretend to work—that’s not how real mathematicians earn real money. On Wall Street, especially in statistical arbitrage, we must comply with SEC Rule 10b-5 and always remind clients that investments can lead to losses. However, professional mathematicians in finance do not lose their own money. I speak from experience running stat-arb at RBC and my own hedge fund. Our former colleagues at Renaissance Technologies also engage in statistical (or mathematical) arbitrage; you can look up their methodology. If you don’t want to lose money—as we don’t—you must adhere to rules stricter than those you may be used to, namely, the ones outlined in this white paper.
As Don Corleone (from The Godfather) famously cautions:
“It’s an old habit. I spent my whole life trying not to be careless. Women and children can afford to be careless, but not men.”
On Wall Street, carelessness can lead to consequences far beyond financial losses—sometimes ending in prison time, as seen in high-profile cases involving individuals like Sam Bankman-Fried, Michael Milken, and others. As practicing mathematicians in finance, we cannot afford errors—and we don’t make them—because we follow rigorous, fail-proof inference rules.
To borrow a line from Carlito’s Way (1993), when Carlito Brigante tells David Kleinfeld:
“Dave, you’re a gangster now. A whole new ballgame. You can’t learn about it in school.”
Well, in our school of applied Wall-Street-style mathematics, you can learn about it—because we use formal systems and the L-language, just as Bertrand Russell taught us. Anything stated in the L-language can only misrepresent reality under very specific conditions—namely, when the mathematician is being a careless fool who doesn’t distinguish a fact (Ξ(φ)=1) from a hypothesis. The term “old man Funt” appears in Ilf and Petrov’s 1931 book The Golden Calf, referring to a character who takes the fall for a fraudulent businessman—much like Joe Jett at Kidder Peabody when I began trading stat-arb there. In finance, mathematicians don’t take the fall; a proper Funt does.
Indeed, while Leona Helmsley went to prison—on account of being careless and not hiring a proper Funt—she wasn’t wrong when she said:
“Only the little people pay taxes,”
as is evident from the capital gains tax rates billionaires pay on income derived primarily from capital gains, compared to regular income tax rates. That’s not a hypothesis but a fact.
So, what distinguishes our inference rules from those used by others who risk legal troubles due to negligence—or from those with no money because they don’t participate in the game at all?
We do not use hypotheses as axioms, only facts.
Axioms must be self-evidently true facts, as standard math texts affirm—never hypotheses. For instance, Milton Friedman’s claim that the central bank caused the Great Depression is plausible (and likely correct), but it remains a hypothesis, susceptible to falsification. On Wall Street, we rely on the Arrow-Debreu framework, a formal system that prevents conflating hypotheses with axioms, a common pitfall elsewhere.
We use the fact-based axiom that the Great Depression was caused by rapid deflation triggered by bank failures. Consequently, any volatility in the price level hinders economic growth. Central banks worldwide fear deflation more than anything else and strive to prevent excessive inflation. This is not mere theory but an observed, real-world fact.
Nothing must contradict reality, and everything is dually defined.
In reality, everything is defined relative to its reciprocal opposite: hot vs. cold, love vs. hate, and in theoretical physics, the particle-wave duality. Properly structured formal systems mirror this duality—like algebra based on Peano’s arithmetic, which models reality via an object-action duality:
In Peano’s arithmetic, the object is the absence-existence duality (0 = nothing, 1 = something), and the action is the addition-subtraction duality (+1 successor vs. “–” predecessor).
Multiplication is the repeated application of addition, and division is its dual—the repeated application of subtraction.
Likewise, root-exponent relationships follow the same pattern, all derived from Peano’s axioms.
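These reductions can be sketched directly, with multiplication implemented as repeated addition built on the successor function (a minimal illustration of the object-action duality, not a full Peano formalization):

```python
def successor(n):
    # The "+1" action of Peano arithmetic.
    return n + 1

def add(a, b):
    # Addition as repeated application of the successor.
    for _ in range(b):
        a = successor(a)
    return a

def multiply(a, b):
    # Multiplication as repeated application of addition.
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

assert add(2, 3) == 5
assert multiply(4, 3) == 12   # 4 + 4 + 4
```

Division and subtraction arise as the dual (inverse) actions, mirroring the predecessor side of the duality described above.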
Hence, for our formal inference rules to remain consistent with reality:
Axioms and definitions must be self-evidently true.
Everything must be properly and dually defined.
Beyond that, we adhere—almost “religiously”—to the established rules of first-order logic, which employ dual relationships (e.g., “if cause, then effect”), reflecting the inherent dualities we observe in the real world.
Defining the Ground Rules for Rational Economic Agents
At the core of every formal system lies a primary axiom that expresses the object-action duality being modeled. This principle is universal—spanning mathematics, logic, and the formal sciences—because our reality comprises discrete objects (nouns) and actions (verbs) that transform or move those objects through time.
In mathematical economics and game theory, this duality focuses on the representative agent—treated as an object in L-language—that carries out two fundamental roles originally codified by the Arrow-Debreu framework. Specifically, we define each agent dually:
Producer (Labor Supplier)
The agent supplies labor to produce goods and services, receiving wages in return.
Consumer (Goods and Services Purchaser)
The agent spends these wages on goods and services produced by other agents (e.g., food, rent, entertainment), often reflected in typical CPI baskets.
Rather than isolating production and consumption into separate entities, this setup treats each agent as a combined “producer-consumer” object—one who both earns (by creating value) and spends (by purchasing output). Crucially, these representative agents do not exist in isolation; they form a set of group-objects whose exchanges collectively constitute the economy.
By defining every member of this group in dual terms, the system mirrors the real-world cycle of production and consumption:
Production Side: Labor is supplied; wages are earned.
Consumption Side: Those wages fund the acquisition of finished goods and services.
This integrated approach ensures that production activities directly link to consumption decisions, reflecting the object-action structure at the heart of modern economic models. Each agent’s actions as a producer affect what can be consumed, and each agent’s actions as a consumer affect the demand for further production.
Through this lens, economic agents become rational utility maximizers who optimize their well-being by choosing how much labor to supply (object-action: producing) and how to allocate their wages (object-action: consuming). This cyclical interplay between production (supply side) and consumption (demand side) embodies the dual nature of agents in a properly constructed formal system.
The Opportunistic Bounded Rationality Utility Maximization Axiom
A foundational axiom in mainstream economic theory holds that individual consumer-producers (sometimes called “players” or “representative agents”) seek to maximize their utility within the constraints they face. This core assumption—also employed in game theory and other decision sciences—states that, given multiple options, rational agents choose the one that best serves their self-interest. Hence the notion of the “rational utility-maximizing representative agent.”
Clearly, this principle does not encompass every human decision; for instance, individuals sometimes volunteer to fight in wars—a choice that clearly conflicts with straightforward utility maximization. However, in the specific context of mathematical economics focused on arm’s-length commercial transactions (where money serves as the medium of exchange for goods and services)—thus excluding inheritance, gifts, charity, and so on—there is no evidence contradicting the observation that participants generally strive to maximize their own utility, albeit under bounded rationality shaped by various cognitive biases (as discussed earlier).
Still, recognizing that no one is perfect (agents may misuse inference rules or confuse hypotheses with facts) highlights the need for additional conditions that account for real-world complexities like opportunism, information asymmetry, and rent-seeking. Ignoring these factors can oversimplify or distort actual market dynamics, since real economic behavior often departs from the ideals of bounded rational utility maximization unless such opportunistic elements are properly addressed.
Introducing the Rent-Seeking Lemma
To better capture observed economic conditions, we introduce the Rent-Seeking Lemma. This lemma holds that rational, utility-maximizing agents will, when given the chance, engage in exploitative or even fraudulent behavior if the perceived costs—legal, reputational, or otherwise—are sufficiently low. In other words, if agents can gain unearned wealth at minimal risk, they often will.
Acknowledging opportunism in this way addresses a major shortcoming in idealized models, which sometimes assume that agents refrain from unethical or nonproductive actions simply because it does not enhance total welfare. Real agents may not prioritize collective well-being; they focus on improving their own positions. Integrating the Rent-Seeking Lemma ensures that the model recognizes these darker but common tendencies, highlighting the need for systems—such as enforceable property rights, transparent markets, and accountability mechanisms—to align individual incentives with social welfare.
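One way to sketch the lemma is as an expected-value decision rule; the linear form and the numbers below are illustrative assumptions, not derived from the axioms:

```python
# A hedged formalization of the Rent-Seeking Lemma: an opportunistic
# agent rent-seeks when the unearned gain exceeds the expected cost.

def will_rent_seek(gain, penalty, detection_prob):
    """Return True if the gain exceeds the expected penalty."""
    return gain > penalty * detection_prob

# Low perceived cost: weak enforcement makes opportunism "rational."
assert will_rent_seek(gain=900, penalty=950, detection_prob=0.05)

# High perceived cost: credible, heavy sanctions deter the same agent.
assert not will_rent_seek(gain=900, penalty=50_000, detection_prob=0.9)
```

The policy implication matches the text: raising either the penalty or the probability of detection flips the decision, which is why enforceable property rights and accountability mechanisms matter.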
Agency Theory and Information Asymmetry
The principal-agent problem, articulated by Jensen and Meckling, highlights how rent-seeking and opportunism can arise whenever one party (the agent) is better informed than another (the principal). Even if both are rational, utility-maximizing actors, the better-informed side can exploit its informational advantage to secure unearned gains. This parallels Akerlof’s “lemons” problem: imbalances in knowledge allow some market participants to profit at the expense of others—without contributing any real value.
If left unaddressed, these conditions undermine the elegant conclusions derived from models assuming perfect rationality and complete information. Recognizing this vulnerability compels us to refine our axioms, acknowledging that market participants are not angels; they may actively exploit others if doing so appears rational. From this vantage point, establishing property rights and no-arbitrage conditions shifts from an abstract theoretical ideal to a practical necessity for curtailing predatory behavior. Moreover, once we recognize the need to mitigate externalities, the role of property rights grows even more vital, as illustrated by the Coase Theorem.
Before examining how the legal system and property rights operate within the labor-for-goods game theory model, however, we must solidify our formal axiomatic definitions. These include not only the consumer-producer representative agent but also money as a key element of the system. With these definitions in place, we can then explore the factors—both in theory and in reality—that keep economies from achieving full Pareto efficiency.
Axiom 0: Definition of Collective Production Costs
Over any given time period (a month, a year, a millennium), the collective cost of all goods and services produced and consumed by human beings on Earth consists entirely of the following dual components:
Labor Costs: The effort exerted by humans in the production process.
Negative Externalities: These include the consumption or degradation of natural resources necessary for production (e.g., oil, gas, minerals) as well as unintended costs imposed on society or the environment (e.g., pollution, ecological damage).
Key Properties
Exhaustiveness:
No additional categories of costs exist. Any real-world cost ultimately reduces to one of these two components. For example:
Capital costs are derived from labor and resources used to produce capital goods.
Opportunity costs reflect foregone labor or resource use (e.g., land dedicated to a hunting preserve or private residences, unavailable to trespassers).
Self-Evidence:
This axiom is based on the fundamental observation that all production requires labor, consumes natural resources, and generates additional externalities such as pollution. Beyond these, no other real costs exist.
Unfalsifiability at Present:
To falsify this axiom, one would need to identify a type of cost outside these two categories—something we consume that is based neither on someone’s labor nor on the resources provided by this planet (even a delicacy such as black caviar reduces to both).
Supporting Justification
Labor Costs:
Human effort is indispensable for all production, whether physical, intellectual, or managerial.
Even automation involves labor in the design, construction, and maintenance of machines or software.
Resource Depletion Costs (or Negative Externalities):
Every good or service requires physical inputs, including raw materials, energy, or land. The use of these inputs (e.g., dedicating land as a factory site or a farm or a private residence, thereby excluding other uses) represents a collective cost.
Resource depletion is a measurable and unavoidable cost of production.
Many production processes generate societal or environmental costs, such as pollution, deforestation, or climate change.
These costs are real and impactful, even if market mechanisms fail to fully account for them.
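Axiom 0 can be sketched as an accounting identity over hypothetical line items, with every cost assigned to exactly one of the two components:

```python
# Axiom 0 as an accounting identity: collective cost decomposes
# exhaustively into labor and negative externalities.
# All figures below are hypothetical.

production_costs = [
    {"item": "wheat",   "labor": 40.0, "externality": 5.0},
    {"item": "steel",   "labor": 25.0, "externality": 15.0},
    {"item": "haircut", "labor": 10.0, "externality": 0.0},
]

labor_total = sum(c["labor"] for c in production_costs)
externality_total = sum(c["externality"] for c in production_costs)
collective_cost = labor_total + externality_total

# Exhaustiveness: nothing is left over once the two components are summed.
assert collective_cost == labor_total + externality_total == 95.0
```

Private costs such as capital expenditures would appear here only indirectly, decomposed into the labor and resource use that produced the capital goods.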
Axiom 1: Representative-Agent (Consumer-Producer)
Utility-maximizing (in arm’s-length commercial transactions) under bounded rationality and prone to opportunism. No further elaboration is needed here, as this axiomatic framework is rigorously described in The Nature of Man (1994). It is the second axiom, money, that we now proceed to define.
Axiom 2: Money-Duality, Object–Action
A Reality-Consistent Axiomatic Definition of Money: U = S + E
In any sound formal system, no axiom may contradict established facts. Real-world money consistently serves as a unit of account, a medium of exchange, and a store of value. This paper proposes a rigorous, reality-consistent definition of money—summarized as “U = S + E,” where U denotes the total spendable supply, split between saving (S) and spending (E). Drawing on analogies to Peano arithmetic and insights from Jevons, Menger, Walras, Arrow–Debreu, and Keynes, we show that money’s dual roles—spend vs. save—align naturally with its cross-sectional (present) and temporal (future) measurement capacities. By insisting on logically consistent axioms that never conflict with real-world observations, we unify theoretical models and empirical evidence. This approach highlights both the power and the limits of money’s store-of-value (S) vs. medium-of-exchange (E) tradeoffs, while preserving the clarity and predictive strength of a formal system.
1. Introduction
In any sound formal system, no axiom may conflict with real-world facts. This principle applies to the well-documented reality (as noted by sources like the St. Louis Federal Reserve) that real-world money consistently fulfills three core functions:
A unit of account (prices are denominated in money).
A medium of exchange (money is accepted as payment).
A store of value (money can be saved for later use without losing all purchasing power).
Any axiomatic definition of money in the context of mathematical economics—i.e., a formal system underpinned by the Arrow–Debreu framework (forming the core of mainstream economics and used by most central banks, including the U.S. Federal Reserve, to set real-world interest rates)—must not contradict these empirically validated roles. By definition, any framework that does is invalid.
1.1 Peano Arithmetic Analogy
In Peano arithmetic, the object “1” represents a countable unit, and the action “+1” is the successor function (adding one more unit).
“0” denotes the absence of that unit, while “-1” serves as its inverse (subtracting a unit).
We seek a similar framework for money:
Money-as-Object: An agent holds or stores money (coins, digital balances). At any moment, it can be spent (E) or saved (S) for future use.
Money-as-Action: An agent uses money as a unit of account (U) to measure relative prices before choosing to spend or save.
Crucially, these relative prices are defined in two ways:
Cross-sectional (aligned with E, the spending role).
Temporal (aligned with S, the saving role).
These dual definitions capture how money’s value is measured both “right now” (spending) and “over time” (saving).
2. Incorporating the Jevons–Menger–Walras View: U + E
2.1 The Medium of Exchange (E)
Early economists like Jevons, Menger, and Walras emphasize money’s medium-of-exchange role for solving the “double coincidence of wants” problem. Practically, money-as-E enables direct trade without bartering:
“Measure Twice, Cut Once”: Agents compare the cost of goods/services (like a car’s price) with their wages/income. Once feasible, the transaction closes via money E.
Closing the Purchase: Money functioning as E effectively “seals the deal,” aligning with free-trade principles and mutual benefit.
2.2 The Unit of Account (U)
Money as a unit of account (U) quantifies how goods and services relate to one another—or to wages—before any purchase (ex ante).
Cross-sectional: For example, determining whether a car’s monthly payment fits within one’s wages, or weighing rent vs. grocery bills. It also involves balancing the trade-off between saving and spending—reflecting money’s dual nature in U.
Temporal: Measuring present vs. future prices to capture inflation or deflation. In practice, estimating whether future investment income (stocks, bonds, etc.) will cover living expenses—again with money as the benchmark.
Although frameworks like Arrow–Debreu may not explicitly mention money, they implicitly depend on a unit of account to determine equilibrium prices.
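Both measuring roles reduce to simple ratios; the prices and wages below are hypothetical:

```python
# Money as a unit of account (U) in its two measuring roles.

# Cross-sectional: relative prices at a single moment.
price_car_monthly = 400.0
wage_monthly = 3200.0
affordability = price_car_monthly / wage_monthly   # 12.5% of wages

# Temporal: the same basket priced at two moments yields inflation.
price_basket_t1 = 100.0
price_basket_t2 = 104.0
inflation = price_basket_t2 / price_basket_t1 - 1  # 4% over the period

assert affordability == 0.125
assert abs(inflation - 0.04) < 1e-12
```

The first ratio aligns with E (the spending decision is made now); the second aligns with S (the saving decision depends on how prices move over time).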
2.3 Linking U to Modern “Spendable” Supply
Modern Economies: The Federal Reserve’s M2 measure (cash, checking/savings accounts, money market funds) corresponds to “concurrently spendable” wealth—very close to the circulating, immediately spendable supply of money. These money-objects can be used as E under the “U = S + E” definition.
Historic Examples: Ancient Rome’s aureus coins functioned similarly, representing a tradable stock of money.
Key Observation: Because money is an object, if it can be used as E, it can also—by definition—be saved rather than spent. This is why money must be defined not just as E but also as S, aligning with Keynes’s “liquidity trap,” where money remains unspent (held in accounts or in gold) instead of circulating.
3. Defining Money’s Dual Usages: U = S + E
3.1 Exclusive Dual-Use Principle (XOR)
We assert that a money-object exists in exactly one of two states at a time:
Saved (S) = stored for future use (store of value).
Spent (E) = used as a medium of exchange now.
Formally, an agent’s money can only be in one state at any moment—“S XOR E.” Summing these states yields the total “spendable” supply (U). Hence:
Money-as-Action: U_S + U_E = U
Money-as-Object: U = S + E
Here, U_S and U_E denote how much is allocated to saving vs. spending, respectively. Their sum (U) matches the object-level partitioning of money into S + E.
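A minimal sketch of this partition, treating each holding as a money-object in exactly one state (the amounts and holdings are hypothetical):

```python
# "U = S + E" with exclusive (XOR) states: each money-object is
# either saved (S) or spent (E) at any moment, never both.

holdings = [
    {"amount": 500.0, "state": "S"},   # savings account balance
    {"amount": 120.0, "state": "E"},   # cash spent on groceries
    {"amount": 380.0, "state": "S"},   # money-market fund
]

# XOR: every unit of money is in exactly one of the two states.
assert all(h["state"] in ("S", "E") for h in holdings)

S = sum(h["amount"] for h in holdings if h["state"] == "S")
E = sum(h["amount"] for h in holdings if h["state"] == "E")
U = S + E   # total "spendable" supply (M2-like aggregate)

assert U == 1000.0 and S == 880.0 and E == 120.0
```

A liquidity trap in this sketch is simply the degenerate case where nearly every holding sits in state "S" and E collapses toward zero.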
A Note on Rare Liquidity Traps
John Maynard Keynes (1936) highlighted liquidity traps—cases where large portions of the “spendable” M2 supply (U) serve as a store of value (S) instead of circulating as E. Although historically rare (since bonds often fill that store-of-value role), a severe liquidity trap sees normally spendable M2 remain idle—underscoring the conceptual and practical distinction between S and E.
3.2 Tying This to the L-Language
Object: M in Money. M can be in state S (saved) or E (spent).
Action: measurePrice(U). Compares goods cross-sectionally (now) or over time (inflation/deflation).
Thus:
Money-as-Object: M, transitioning between S or E.
Money-as-Action: measurePrice(M, g, t) for cross-sectional; measureInflation(M, g, t1, t2) for temporal.
Reciprocal Definitions:
If M cannot measure cross-sectional or temporal prices, it is not money.
If an action measures prices but lacks a stored object M, it violates dual consistency.
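A sketch of this dual pair, using the action names from the text (renamed to snake_case; the price-table structure is an illustrative assumption):

```python
# Object-action sketch of money in the L-language: the price table
# plays the role of the money-object M's measurements; the two
# functions are its measuring actions.

prices = {                      # price of good g, in units of M, at time t
    ("bread", 1): 2.00, ("bread", 2): 2.10,
    ("milk",  1): 1.00, ("milk",  2): 1.05,
}

def measure_price(g1, g2, t):
    """Cross-sectional: relative price of two goods at the same time."""
    return prices[(g1, t)] / prices[(g2, t)]

def measure_inflation(g, t1, t2):
    """Temporal: price change of one good between two times."""
    return prices[(g, t2)] / prices[(g, t1)] - 1

assert measure_price("bread", "milk", 1) == 2.0          # bread costs 2x milk
assert abs(measure_inflation("bread", 1, 2) - 0.05) < 1e-12  # 5% inflation
```

If either action is removed, the reciprocal definition fails: an M that cannot measure prices is not money, and a price measure with no underlying M violates dual consistency.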
Final Synthesis: Within the L-language framework, money is a dual pair:
Unit of Account (U): The “action” side, measuring relative prices across goods (ex ante) and over time (inflation/deflation).
Medium of Exchange (E) + Store of Value (S): The “object” side—either spent or saved for future use.
The principle “U = S + E” asserts that an agent’s total money can be split between storing value (S) and active spending (E), both tied to the same unit of account (U). By enforcing XOR usage in real time (S XOR E), we ensure money is either spent or saved—never both simultaneously.
Hence, the L-language enforces:
No free-floating concepts: Each idea (money) has an object–action dual.
Temporal vs. Cross-Section: Money measures both inflation (over time) and relative prices (across goods).
Exclusive Dual-Use: S or E, summing to total U.
In essence, money is “U = S + E.” This approach unifies insights from Jevons, Menger, Walras, Arrow–Debreu, and Keynes in a Peano-style formal system. It remains compatible with both static equilibrium models (like Arrow–Debreu) and dynamic, game-theoretic processes (like Labor-for-Goods exchanges), illuminating how real economies move toward Pareto efficiency through continuous trade.
Rational Utility Maximization Corollary: Rent-Seeking Lemma Implies Economic Parasites
Mathematically, both rent-seeking and agency costs stem from the same core dynamic: economic parasitism, a direct corollary of the rational utility maximization axiom (here referred to as the rent-seeking lemma). This axiom posits that self-interested agents systematically weigh costs and benefits before any transaction. Consequently, individuals more inclined to dishonesty will predictably turn to fraudulent, illegal, or opportunistic (non-mutually beneficial) behaviors when the perceived costs are low and the rewards sufficiently high. This mechanism is well documented in both theoretical models and real-world cases.
Illustrative Examples
Consider, for instance, the opportunistic behavior arising from legislation in San Francisco (later partially repealed) that effectively decriminalized theft under $950. As expected, this led to a surge in crime, forcing numerous retailers to shut down or relocate—an independently verifiable fact.
Additionally, involuntary exchanges have been empirically linked to lower Pareto efficiency, often measured by real per capita GDP. For example, Haiti’s per capita GDP is roughly one-tenth that of the Dominican Republic’s, despite both nations sharing the same island and similar geographic conditions. The key difference lies in Haiti’s greater lawlessness—marked by frequent violations of the assumption of unfettered trade—which directly undermines economic efficiency.
Not all allegations of parasitic behavior, however, turn out to be true. Accusations are not facts but rather hypotheses that can prove false. Each claim must be independently verified and judged on its merits to avoid mischaracterizations or unwarranted conclusions.
The Historically Unjustified Use of “Economic Parasites”
Vladimir Lenin famously deployed the term “economic parasites” to describe the capitalist bourgeoisie, accusing them of exploiting workers’ productivity without contributing themselves. This closely aligns with the idea of rent-seeking in public choice theory and the notion of fraudulent agents in agency theory. Public choice theory labels rent-seeking as the quest for unearned wealth—obtaining resources without creating value. Under this view, successful rent-seekers consume goods or services (real GDP) produced by others without contributing to that output, which matches Lenin’s depiction of economic parasitism.
However, Lenin’s characterization of capitalists as “economic parasites” traces back to Karl Marx’s arguments in Das Kapital (1867). Marx claimed that capitalists extract unearned wealth from workers through “surplus value”—a flawed interpretation of producer surplus that disregards opportunity costs. While this notion may hold within Communist theory, in reality, it “just ain’t so,” echoing the sentiment often attributed to Mark Twain:
“It’s not what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”
Marx’s assertion also contradicts empirical findings illuminated by agency theory. In unfettered exchanges, it is the better-informed agent (e.g., an employee or intermediary)—not the less-informed principal (e.g., the employer or capitalist)—who is positioned to extract any unearned wealth. Employees inherently possess more knowledge about the quality and value of their labor, while both parties share equal information about wages. Consequently, under rational utility maximization and voluntary trade, it is theoretically and practically impossible for a less-informed principal to defraud a better-informed agent.
Historical Lessons in Flawed Axioms
History provides stark lessons on the dangers of building policies upon conclusions drawn from false axioms—such as Marx and Lenin’s flawed premise that less-informed principals can exploit better-informed agents in voluntary exchanges. The misapplication of such assumptions has led to dire outcomes, including catastrophic events like the Holodomor in the Soviet Union. That tragedy ushered in societal collapse and even documented cases of cannibalism.
Even Stalin eventually acknowledged the upheaval was excessive—see his article “Dizzy with Success”—though the damage had already been done. Later, when Stalin’s rent-seeking countermeasures (e.g., informants, or stukachi, and gulags) were relaxed, unchecked rent-seeking reemerged, contributing directly to the Soviet Union’s eventual collapse.
A Fact-Based Approach
Under the Wall Street style of reasoning—as the movie Wall Street puts it—successful market players do not “throw darts at a board.” Instead, they bet on sure things, relying on sound, fact-based axioms. Avoiding hypotheses that might turn out false ensures stability and prevents the pitfalls of flawed assumptions. Consequently, staying grounded in what is provable and verifiable keeps systems robust and avoids the perils of “what you know for sure that just ain’t so.”
Conclusion: The Need for Realistic Foundations
The axioms we adopt define the boundaries of our theoretical world. It is impossible to overemphasize a core point raised at the outset:
The L-language, while our most precise tool for reasoning about reality, carries an inherent vulnerability: if even one axiom fails to accurately represent reality, all conclusions derived from it become unreliable.
While rational utility maximization is an important building block, it does not hold unconditionally. Ignoring rent-seeking, information asymmetry, and opportunistic motives leaves any model incomplete. Incorporating the Rent-Seeking Lemma and insights from agency theory refines economic models so they more faithfully reflect observable realities.
Ultimately, the soundness and completeness of a formal model hinge on whether its axioms adhere to both logical consistency and empirical truth. By adopting axioms that mirror actual agent behavior—and by including market structures designed to deter opportunistic exploitation—we establish a framework robust in theory, realistic in practice, and ethically meaningful.
More importantly, having laid out the two key axioms:
Consumer-Producer Representative Agent
A rational utility-maximizer (in arm's-length commercial transactions), bounded by cognitive limitations and prone to opportunism.
Money as U = S + E
Where money is a medium-of-exchange/store-of-value object that also acts as a unit of account for both cross-sectional (relative price) and temporal (inflation/deflation) measurements.
—and recognizing that rent-seeking naturally follows from Axiom 1 (the rent-seeking lemma), we can now revisit the full set of Arrow-Debreu “perfect market” assumptions. In the Labor-for-Goods framework, these assumptions serve as binding constraints that limit the possibility of achieving Pareto efficiency under the default axioms alone.
Understanding where and how opportunism, bounded rationality, and money’s dual roles clash with Arrow-Debreu conditions sets the stage for identifying which assumptions need to be relaxed, modified, or enforced. In doing so, we can more clearly see why real-world economies deviate from the perfectly efficient ideal—and, crucially, how policy, institutional design, and market structures can address these gaps to bring actual outcomes closer to Pareto efficiency.
Dually-Defining the Arrow-Debreu Assumptions
Having laid out our two key axioms—(1) a consumer-producer representative agent and (2) money as U = S + E—we can now revisit the 13 Arrow-Debreu perfect market conditions under a dual perspective. Traditionally, these assumptions are presented in one unified list. However, not all assumptions serve the same purpose in achieving an efficient outcome:
Mathematically Necessary for Optimality
These assumptions guarantee that an equilibrium solution (or "optimal" outcome) exists in a purely mathematical sense. They ensure well-defined preferences, continuous and convex production sets, and so on. Without them, the system might fail to produce a meaningful or stable equilibrium.
Practically Necessary for Efficiency
These assumptions ensure the real-world feasibility of that equilibrium, preventing monopolistic distortions, incomplete markets, or missing property rights from undermining Pareto efficiency. They acknowledge that frictionless trade, enforceable contracts, and zero transaction costs are crucial for bridging the gap between theoretical equilibrium and actual outcomes.
1. Mathematically Necessary Conditions
Below are the Arrow-Debreu conditions that primarily secure the internal consistency and existence of an equilibrium.
Rational Utility Maximization
Object: Each agent’s well-defined utility function.
Action: Agents choose the option that maximizes utility, subject to budget constraints.
Why It Matters: Rational choice underlies the demand side of equilibrium. Without rational utility maximization, it’s unclear how to predict agent behavior mathematically.
Complete, Reflexive, and Transitive Preferences
Object: Preference orderings over all possible consumption bundles.
Action: Agents can compare any two bundles (complete), regard each bundle as at least as good as itself (reflexive), and apply consistent rankings (transitive).
Why It Matters: Ensures that each agent’s preference system can be represented by a continuous utility function, critical for equilibrium analysis.
Continuity and (Strict) Monotonicity
Continuity: Small changes in a bundle cause small changes in utility, avoiding “jumps.”
Monotonicity (Local Non-Satiation): More of a desirable good is strictly preferred to less.
Object: Consumable goods.
Action: Agents’ utility changes smoothly and consistently as goods vary.
Why It Matters: Makes demand curves tractable; ensures no corner solutions undermine equilibrium existence.
Convex Preferences
Object: The shape of each agent’s indifference curves.
Action: Agents prefer “averages” or mixtures to extremes (convexity).
Why It Matters: Uniqueness or stability of equilibrium often hinges on convexity. Non-convex preferences can yield multiple or no equilibria.
Firms Have Convex Production Sets
Object: Production possibility sets for each firm.
Action: Firms exhibit constant or decreasing returns to scale; expansions in inputs lead to proportionate or sub-proportionate expansions in output.
Why It Matters: Convex production sets allow for linear or near-linear optimization. Non-convexities (e.g., increasing returns to scale) complicate or invalidate standard equilibrium results.
Infinite Divisibility
Object: Goods, labor, and capital can be fractionally divided.
Action: The market supports continuous quantities of goods and production inputs.
Why It Matters: This sidesteps lumpy or discrete constraints, ensuring standard calculus-based arguments for equilibrium and marginal analysis apply.
(Note: We place “Infinite Divisibility” here because it supports the mathematical tractability of splitting goods and resources into marginal increments.)
Together, these assumptions ensure the mathematical “machinery” of Arrow-Debreu works smoothly to identify an equilibrium solution—an allocation of resources where no agent can unilaterally improve their outcome given prices and incomes.
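To make the role of these mathematical conditions concrete, here is a minimal sketch with hypothetical numbers: a consumer with Cobb-Douglas utility, which satisfies continuity, monotonicity, and convexity, maximizing over a budget constraint. Under these assumptions the optimum is interior and has a standard closed form; the code checks that the analytic optimum beats nearby affordable bundles.

```python
# Hypothetical parameters: Cobb-Douglas utility u(x, y) = x**a * y**(1 - a),
# which is continuous, monotone, and represents convex preferences.
a, income, px, py = 0.4, 100.0, 2.0, 5.0

# Standard Cobb-Douglas demand: spend share a of income on x, 1 - a on y.
x_star = a * income / px        # = 20.0
y_star = (1 - a) * income / py  # = 12.0

def utility(x: float, y: float) -> float:
    return x**a * y**(1 - a)

# Sanity check: the analytic optimum dominates nearby bundles on the budget
# line, as convexity and monotonicity guarantee a unique interior maximum.
best = utility(x_star, y_star)
for dx in (-1.0, 1.0):
    x = x_star + dx
    y = (income - px * x) / py  # stay exactly on the budget line
    assert utility(x, y) < best
```

Without continuity and convexity there would be no guarantee of such a well-behaved interior solution, which is exactly why the text classifies these conditions as mathematically necessary rather than as real-world frictions.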
2. Practically Necessary Conditions
While the assumptions above secure existence and mathematical optimality, the following assumptions ensure that the identified solution is actually Pareto efficient and feasible under real-world conditions.
No Externalities / No Public Goods
Object: Production or consumption that affects third parties.
Action: Agents fully internalize their actions’ costs and benefits; no “free riders” or unpriced spillovers.
Why It Matters: If negative externalities are not accounted for, private incentives deviate from social optimality, undermining efficiency.
Complete Markets
Object: Every possible good (current, future, contingent) exists as a tradable asset.
Action: Agents can buy or sell any imaginable resource or claim.
Why It Matters: Untraded contingencies lead to incomplete risk-sharing and potential inefficiencies. Complete markets ensure all resources and risks are priced.
Price-Taking Behavior (Perfect Competition)
Object: The market structure (many buyers, many sellers).
Action: No individual participant can influence prices; all treat prices as given (parametric).
Why It Matters: Ensures that equilibrium prices reflect aggregate supply and demand, rather than monopoly or monopsony power.
Well-Defined, Transferable Property Rights
Object: Goods, services, resources, and factors of production.
Action: Secure and enforceable ownership; owners can freely sell or trade.
Why It Matters: Ambiguous or unenforceable property rights lead to opportunism (e.g., tragedy of the commons). Well-defined rights are pivotal for efficient market exchange.
No Transaction Costs, No Barriers to Trade
Object: Exchanges between agents.
Action: Frictionless buying and selling; no taxes, tariffs, or large costs of entry/exit.
Why It Matters: Even small transaction costs can deter mutually beneficial trades, imposing deadweight losses.
No Uncertainty or Fully Known States (Classic Arrow-Debreu)
Object: The future states of the world.
Action: Agents either live in a world of certainty or have complete markets covering all possible future contingencies.
Why It Matters: Where uncertainty is not fully covered by state-contingent markets, risk cannot be perfectly hedged, creating inefficiencies.
Agents’ Endowments and Preferences Are Common Knowledge (Optional)
Object: Information regarding each agent’s initial resources and utility functions.
Action: All participants (or the market mechanism) know these data sets.
Why It Matters: If information is hidden or asymmetric, moral hazard and adverse selection can arise, potentially invalidating perfect-market outcomes.
By dually defining these conditions—(object = the entity or resource, action = how it’s treated or exchanged)—we clarify how real-world frictions (e.g., externalities, incomplete markets, asymmetric information) undermine the theoretical Arrow-Debreu equilibrium.
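The object/action dual used above can itself be expressed as a small data structure. The sketch below is purely illustrative (the class and field names are mine); it records two of the conditions in dual form and checks that each carries both sides of the pair.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DualCondition:
    """One Arrow-Debreu assumption in the paper's object/action dual form.
    Hypothetical structure for illustration, not a standard representation."""
    name: str
    obj: str     # the entity or resource the condition concerns
    action: str  # how that entity is treated or exchanged

conditions = [
    DualCondition("No Externalities / No Public Goods",
                  obj="production or consumption affecting third parties",
                  action="agents fully internalize costs and benefits"),
    DualCondition("Complete Markets",
                  obj="every current, future, or contingent good",
                  action="agents can buy or sell any resource or claim"),
]

# The dual discipline: no free-floating concepts, so every condition
# must specify both an object and an action.
for c in conditions:
    assert c.obj and c.action
```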
Connecting to the Labor-for-Goods Dynamic Equilibrium Model
In the Labor-for-Goods model:
Mathematically Necessary Conditions (1–5, 12) ensure that a stable equilibrium is identifiable.
Practically Necessary Conditions (6–11, 13) ensure that the system actually approaches Pareto efficiency in a frictionless manner.
However, once we factor in:
Bounded Rationality (agents may not behave perfectly optimally),
Opportunism (rent-seeking, fraud, principal-agent issues), and
Money as U = S + E (which must also be integrated to avoid arbitrage and ensure robust price signals),
we see how real economies deviate from these ideal conditions. Each deviation can be traced back to relaxing or violating at least one of these 13 assumptions—be it monopoly power (violation of perfect competition), unpriced externalities (violation of “no externalities”), or incomplete risk markets.
Summary: Dually-Defining the 13 Conditions
The Arrow-Debreu framework sets forth 13 conditions that can be split into two fundamental categories:
Mathematically Necessary for Optimality:
1) Rational Utility Maximization
2) Complete, Reflexive, and Transitive Preferences
3) Continuity and (Strict) Monotonicity
4) Convex Preferences
5) Firms Have Convex Production Sets
12) Infinite Divisibility
Practically Necessary for Efficiency:
6) No Externalities / No Public Goods
7) Complete Markets
8) Price-Taking Behavior (Perfect Competition)
9) Well-Defined, Transferable Property Rights
10) No Transaction Costs, No Barriers to Trade
11) No Uncertainty or Fully Known States
13) Agents’ Endowments/Preferences Are Common Knowledge (Optional)
By satisfying all of the above simultaneously, Arrow-Debreu’s First Welfare Theorem holds: every competitive equilibrium is Pareto efficient. In reality, even small departures from these prerequisites can cause large-scale inefficiencies, illuminating the many ways real markets diverge from the ideal.
This dually-defined perspective complements the Labor-for-Goods Dynamic Equilibrium Model, helping us identify which assumptions must be enforced, relaxed, or modified to account for opportunism, bounded rationality, and the unique role of money.
Dually Defining Arrow-Debreu and Why Only Some Assumptions Affect Relative Efficiency
In Arrow-Debreu theory, 13 “perfect market” assumptions underlie the First Welfare Theorem, guaranteeing that any competitive equilibrium is Pareto efficient. These conditions can be broadly grouped into two categories:
Mathematically Necessary for Optimality (Core Mathematical Assumptions)
Practically Necessary for Efficiency (Real-World Friction Assumptions)
However, because no agreed-upon definition of absolute Pareto efficiency exists, we rely on measuring relative Pareto efficiency—i.e., which economy, A or B, is closer to an ideal efficient state. Crucially, not all Arrow-Debreu assumptions will shift how we compare A’s efficiency to B’s. Some are effectively “universal” or “uniformly approximated” across modern economies, so they neither help nor hurt relative comparisons.
1. Core Mathematical Assumptions (Not Directly Relevant for Relative Comparisons)
These assumptions are crucial to ensuring a well-defined, solvable equilibrium in theory. Yet, in real-world applications, they tend not to differentiate which economy is more efficient because either both economies satisfy them (to a roughly equal degree), or else violations are so extreme that the economy is not analyzable under Arrow-Debreu logic at all. They include:
Complete, Reflexive, and Transitive Preferences
Continuity and Local Non-Satiation
Strict Convexity (Preferences and Production Sets)
Infinite Divisibility
Rational Utility Maximization (as a baseline for mathematical modeling)
If both economies have standard “well-behaved” preferences and production, none gains an efficiency edge purely from “more continuity” or “stricter convexity.” Conversely, if one economy severely violates these assumptions (e.g., no transitivity of preferences at all), it essentially drops out of standard microeconomic modeling—making comparisons with the other economy moot. Hence, in practice, these assumptions do not meaningfully alter a relative ranking of Pareto efficiency.
2. Real-World Friction Assumptions (Determinants of Relative Efficiency)
By contrast, certain Arrow-Debreu conditions do vary significantly across real economies and thus materially affect which economy is “more” Pareto-efficient in a comparative sense. These are the assumptions that, when breached, introduce frictions and distortions—leading one economy to deviate more severely from the ideal. They include:
No Externalities / No Public Goods
Reason: Unpriced spillovers can make an economy appear more productive while hiding social costs, or vice versa.
Complete Markets
Reason: Incomplete/absent markets prevent optimal risk-sharing or trade of certain goods, creating inefficiencies that differ between economies.
Price-Taking Behavior (Perfect Competition)
Reason: Monopolies, monopsonies, or oligopolies in one economy reduce output and raise prices, harming efficiency more than in a competitive economy.
Well-Defined, Transferable Property Rights
Reason: Enforceable ownership deters theft, fraud, and opportunism; weak property rights lead to more severe rent-seeking, lowering relative efficiency.
No Transaction Costs, No Barriers to Trade
Reason: Tariffs, taxes, and regulatory barriers can block trades that would otherwise raise welfare; economies differ in how much red tape they impose.
No Uncertainty or Fully Known States
Reason: Economies vary in how completely they cover risk (contingent claims, insurance). One that handles uncertainty better may be relatively more efficient.
Agents’ Endowments and Preferences Are Common Knowledge (Optional)
Reason: Information asymmetries or hidden data (moral hazard, adverse selection) can severely reduce efficiency in one economy compared to another.
These assumptions, when violated to different extents in A vs. B, can materially shift which economy appears closer to a Pareto-efficient allocation. For instance, if Economy A enforces property rights rigorously but Economy B does not, we typically see greater rent-seeking in B, lowering its relative efficiency.
Why Absolute Pareto Efficiency Remains Undefined
Economists and policymakers do not currently have a universally accepted way to measure absolute Pareto efficiency—i.e., an unambiguous scale of 0 to 100% “efficient.” Instead, we compare actual economies to see which meets more of the frictionless Arrow-Debreu conditions. This relative approach reveals that:
Some assumptions (convexity, transitivity, local non-satiation) are effectively universal or trivially satisfied, so they do not alter efficiency rankings.
Other assumptions (externalities, property rights, market competition) vary significantly and therefore drive relative differences in real-world performance.
Putting It All Together:
The Excluded Assumptions
Complete/Transitive Preferences, Continuity, Strict Convexity, Infinite Divisibility, etc.
Either both economies approximate them equally, or a severe violation makes Arrow-Debreu inapplicable. Thus, they don’t help us pick which economy is “more” efficient.
The Remaining Assumptions
No Externalities, Complete Markets, Perfect Competition, Property Rights, No Transaction Costs, Known States (Uncertainty), Common Knowledge (Info).
These “friction-based” conditions differ measurably across economies, impacting how efficiently each economy allocates resources. Thus, they do matter for relative efficiency.
Relative, Not Absolute
Because no standard defines absolute Pareto efficiency, we gauge each economy only by comparing its compliance with the friction-based assumptions to that of another economy.
The economy that less severely violates these conditions typically emerges as “more Pareto-efficient.”
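This comparative procedure can be sketched as a toy scoring exercise. All numbers below are hypothetical: each economy receives a severity score (0 = no violation, 1 = severe violation) on the seven friction-based conditions, and the economy with the lower total is ranked as relatively more Pareto-efficient.

```python
# Toy sketch, hypothetical numbers throughout: compare economies A and B by
# how severely each violates the seven friction-based Arrow-Debreu conditions.
FRICTION_CONDITIONS = [
    "externalities", "incomplete_markets", "market_power",
    "weak_property_rights", "transaction_costs",
    "unhedged_uncertainty", "information_asymmetry",
]

violations = {
    "A": {"externalities": 0.2, "incomplete_markets": 0.1, "market_power": 0.3,
          "weak_property_rights": 0.1, "transaction_costs": 0.2,
          "unhedged_uncertainty": 0.2, "information_asymmetry": 0.3},
    "B": {"externalities": 0.5, "incomplete_markets": 0.4, "market_power": 0.6,
          "weak_property_rights": 0.8, "transaction_costs": 0.5,
          "unhedged_uncertainty": 0.4, "information_asymmetry": 0.6},
}

def friction_score(economy: str) -> float:
    """Sum of violation severities: lower means relatively more efficient."""
    return sum(violations[economy][c] for c in FRICTION_CONDITIONS)

ranking = sorted(violations, key=friction_score)
assert ranking[0] == "A"  # A violates the friction conditions less severely
```

Note what the sketch deliberately omits: the purely mathematical conditions (convexity, transitivity, divisibility) contribute nothing to the score, exactly because both economies are assumed to satisfy them to a roughly equal degree.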
Conclusion: Relative Rankings and Policy Implications
In summary, dually defining Arrow-Debreu assumptions clarifies which are purely mathematical prerequisites (that do not affect cross-country rankings) and which introduce real-world frictions causing tangible discrepancies in how economies approach Pareto efficiency. By addressing these friction-based assumptions—externalities, property rights, incomplete markets, and asymmetric information—each economy can improve its relative standing.
What we are saying here, dear reader, is this: if you run a government and wish to improve the general welfare of the public (a key governmental function under the U.S. Constitution, and the reason insider traders go to jail, since insider trading exploits asymmetric information and undermines the general welfare), you are in reality constrained by the resources available to you.
Looking at the List of countries by GDP (PPP) per capita (Wikipedia link), one thing becomes strikingly obvious: aside from a few exceptions like Norway, which benefits from abundant natural resources, the correlation between per capita GDP and violations of one or more friction-based assumptions is direct and substantial. These violations cause inefficiencies that manifest as large differences in real per capita GDP between otherwise comparable nations—for instance, Ireland versus the United Kingdom.
Policy-makers seeking to enhance efficiency relative to other nations should focus on mitigating or eliminating these key frictions:
Accurately pricing externalities,
Enforcing property rights uniformly,
Maintaining robust competition,
Lowering transaction costs and trade barriers,
Deepening financial markets for risk-sharing,
Promoting transparency to reduce information asymmetry.
In the absence of an absolute, one-size-fits-all metric for Pareto efficiency, these steps become the primary lever for improving an economy’s performance on the only measure available: its ranking relative to other economies. However, as we are about to explain under the rent-seeking lemma, even this can be drastically simplified.
The Primacy of Law and Property Rights in Achieving Pareto Efficiency
Any transaction satisfying three core conditions—(1) it is unfettered, (2) all parties are symmetrically informed, and (3) all externalities are fully priced—will, by definition, be mutually beneficial. Consequently, each of the seven conditions we’ve identified as essential for achieving relative (as opposed to absolute) Pareto efficiency ultimately mandates that these three requirements remain unviolated:
Condition 1 (No Externalities/No Public Goods) ensures all externalities are accounted for in prices,
Conditions 6 and 7 (No Uncertainty or Fully Known States, Agents’ Endowments and Preferences Are Common Knowledge) address information symmetry,
and the remaining assumptions prevent trade from being fettered or restricted.
For instance, the existence of a regulatory monopoly—such as prohibiting the production of moonshine—creates a binding constraint on maximizing overall welfare. Some mutually beneficial transactions never occur simply because they are illegal, thereby reducing the potential Pareto efficiency that would otherwise arise if those trades were allowed and properly regulated.
Once every transaction is mutually beneficial, the sum total of these trades naturally leads to a Pareto-efficient equilibrium where all possible gains are realized—unless some friction stops certain Pareto-improving transactions from taking place. Absent such impediments, the resulting equilibrium is by definition efficient. This concept directly aligns with how we define a Pareto-efficient Nash Equilibrium in mathematical game theory: each unfettered trade, free of imperfect information or unpriced spillovers, guarantees Pareto efficiency.
However, when reconsidering all of these assumptions through the lens of the Rent-Seeking Lemma, one condition proves absolutely fundamental: property rights. Ronald Coase’s famous Coase Theorem underscores this point. It asserts that under the following two conditions, which is to say IF:
Transaction costs are negligible (or zero),
Property rights are clearly defined (no ambiguity about who owns what),
THEN parties will negotiate resource allocation efficiently, regardless of who initially holds those property rights. The outcome is invariably an efficient allocation of resources, because rational agents, facing minimal costs to bargain, will strike deals that maximize mutual benefits.
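The Coase logic can be checked with a small worked example (all numbers hypothetical): a factory gains 100 from polluting, while the pollution harms a neighbor by 150. With negligible transaction costs, bargaining reaches the efficient outcome, no pollution, under either assignment of the property right.

```python
# Hypothetical payoffs illustrating the Coase Theorem's IF/THEN structure.
GAIN_FROM_POLLUTING = 100.0  # value of polluting to the factory
HARM_TO_NEIGHBOR = 150.0     # cost of the pollution to the neighbor

def outcome(right_holder: str) -> str:
    """Bargaining result under zero transaction costs, given who holds
    the property right over pollution."""
    if right_holder == "neighbor":
        # The factory would have to pay at least 150 for permission, but
        # polluting is only worth 100 to it, so it declines: no pollution.
        polluting_occurs = GAIN_FROM_POLLUTING > HARM_TO_NEIGHBOR
    else:  # the factory holds the right to pollute
        # The neighbor can pay any amount between 100 and 150 to stop the
        # pollution, a deal both sides accept: again, no pollution.
        polluting_occurs = HARM_TO_NEIGHBOR < GAIN_FROM_POLLUTING
    return "pollution" if polluting_occurs else "no pollution"

# The allocation of the right changes who pays whom, but not the outcome.
assert outcome("neighbor") == outcome("factory") == "no pollution"
```

The distributional difference (who compensates whom) is invisible to the efficiency test, which is precisely the theorem's point: initial rights allocation matters for wealth, not for the efficient use of the resource, so long as bargaining is costless.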
The Real-World Application of the Coase Theorem
In reality, transaction costs are seldom negligible—lawyers are not free, and neither are judges. Nonetheless, property rights serve as a practical mechanism for mitigating externalities even in the presence of these costs. For example:
If your neighbor plays loud music, and property rights over noise levels are well-defined, you can call the police to enforce those rights.
If your neighbor pollutes your water, legal recourse is also available, albeit at substantial expense (think Erin Brockovich).
Institutional enforcement mechanisms, such as police or courts, reduce the transaction costs individuals would face in resolving disputes, thereby promoting efficient outcomes despite real-world frictions.
These examples show that, although transaction costs do exist and can be significant, the enforcement of property rights can effectively mitigate negative externalities. Conversely, when property rights are not well-defined, no mechanism exists to address externalities. Thus, property rights become the critical foundation enabling both efficient trades and the resolution of disputes.
Efficient Law Enforcement as the True Foundation
As crucial as property rights are, they are ultimately just a symptom of a more fundamental driver of systemic efficiency: well-designed laws and their effective enforcement to keep rent-seeking at bay. Just as theft rates spiked in San Francisco once theft ceased to be prosecuted, so do rent-seeking, agency costs, and other opportunistic behaviors by economic parasites (as predicted by the rent-seeking lemma) proliferate whenever the government agencies tasked with enforcing property rights fail to function effectively. Put differently, property rights are not inherently valuable in isolation; they emerge as the natural result of properly formulated and enforced laws that suppress economic parasites (rent-seekers).
The Legal System Must Allocate Property Rights Sensibly
If the law assigns property rights to the polluter rather than the harmed party, even ironclad enforcement may fail to produce efficient outcomes. In the absence of transaction costs, misallocated property rights do not matter much (per the Coase Theorem). However, once transaction costs come into play, the closer the current allocation is to the Pareto-optimal one, the lower these costs—and the more relatively Pareto-efficient the outcome. In practical terms, it is often far less expensive and more straightforward to call the police on a neighbor cooking meth than it is to structure a legally binding contract that effectively bribes them not to do so.
Likewise, if the legal system operates under the assumption that capitalists inherently exploit others in free trade, it may implement a misguided allocation of property rights that undermines efficiency.
Efficient Enforcement Trumps the Law Itself
The Soviet Union famously had what some called the “best constitution in the world.” Yet even the most well-crafted laws are meaningless without mechanisms to enforce them. Institutions responsible for law enforcement must act impartially and effectively, safeguarding property rights (or other legal claims).
Ultimately, property rights gain their full potency only within a legal framework that properly allocates rights and consistently enforces them—thereby preventing rent-seeking and maintaining the conditions necessary for Pareto efficiency.
The Challenge of Rent-Seeking in Government
Enforcing efficient laws is easier said than done, particularly because not only governments themselves but also the law enforcement agencies they operate are susceptible to rent-seeking and agency costs. Under the rent-seeking lemma, some subset of public officials and bureaucrats—those who are relatively less honest—will almost certainly attempt to exploit their positions for personal gain rather than serving the public interest.
This situation creates a paradox: while sound law enforcement is vital for achieving Pareto efficiency, the enforcement process can itself be rife with rent-seeking and mismanagement. Historical and contemporary examples abound, including:
Tammany Hall (late 19th to early 20th century), a New York City political organization notorious for bribery, patronage, and graft, systematically undermining governance for private ends.
Operation Greylord (1980s), an FBI investigation in Chicago that uncovered widespread bribery and judicial corruption within the city’s court system.
Bob Menendez and his indictments on federal bribery charges, alleging misuse of his Senate office for personal financial gain.
Police corruption provides further illustration:
The Knapp Commission (1970s) in New York City exposed extensive police payoffs, kickbacks, and hush money, dramatized in the film and memoir of Frank Serpico, who testified about systemic corruption in the NYPD.
The Rampart Scandal (late 1990s) in the Los Angeles Police Department revealed officers involved in unprovoked shootings, planting evidence, and stealing drugs, ultimately shaking public confidence in law enforcement.
These cases demonstrate how public officials—including those tasked with upholding law and order—can become key perpetrators of rent-seeking, distorting resource allocation and eroding the very trust and efficiency they are supposed to protect. Yet, it bears noting that these instances, troubling as they are, remain relatively limited within countries that are, overall, among the best-governed in the world—particularly in terms of property rights and Pareto-efficiency, as evidenced by robust GDP growth. Many other nations face corruption on a far broader scale, underscoring how the severity of rent-seeking can vary substantially across different institutional contexts.
Conclusion: The Interdependence of Laws, Enforcement, and Efficiency
Property rights are essential to achieving Pareto efficiency; however, they are not the ultimate foundation of efficient outcomes. Rather, they emerge from a properly functioning legal system that (1) formulates sensible laws and (2) enforces them effectively, thereby blocking not only robbery and theft but also other forms of “non-criminalized unearned wealth extraction” by economic parasites (rent-seekers). Absent these foundational elements, property rights lose much of their capacity to mitigate externalities or enable efficient exchange.
Moreover, minimizing rent-seeking within government institutions is critical to maintaining effective law enforcement. Addressing agency costs is vital not only in corporations—per Jensen and Meckling—but also in government organizations, where such costs are even more challenging to tackle due to the lack of clear market signals (like a low P/E ratio for stocks) that might indicate inefficiency or poor management.
This underscores a broader truth: achieving Pareto efficiency is both an economic and an institutional challenge, requiring careful design and governance of the very systems—legal, administrative, and judicial—that underpin trade, property, and enforcement. Only by confronting these deeper institutional issues can we ensure that property rights—and the efficiency gains they promise—are fully realized in practice.
In the next section, we will examine whether the economic reality truly aligns with these theoretical insights.
Use–Exchange Value Duality of Aristotle
The Arrow-Debreu framework formalizes the use–exchange value duality first noted by Aristotle, in which consumer surplus is defined as the difference between:
The subjective use value of a good or service to the consumer (reflected in the maximum price they are willing to pay), and
The objective exchange value of that same product (i.e., its market price).
Mathematically, for a consumer:
Surplus = use value − exchange value
In simpler terms, a consumer experiences surplus whenever the use value (their subjective, personal valuation) exceeds the exchange value (the objective price they pay). This distinction follows from Aristotle’s original insight, later captured and formalized in modern general equilibrium theory.
Reciprocal Transpose for Producers Under Arrow-Debreu (and Wall-Street Rules)
Under both Arrow-Debreu assumptions and proper Wall-Street inference rules, the dual definition of consumer surplus becomes its reciprocal transpose for producers. Specifically, for a producer, producer surplus is defined as the difference between:
The objective use value of a good or service to the producer (reflected in the actual market price at which it is sold, i.e., revenue), and
The subjective exchange value of that same product (i.e., its cost to the producer, which is inherently subjective because it includes not only objective expenses such as labor and capital but also opportunity costs—e.g., working rather than enjoying leisure).
In mathematical economics, this is precisely how producer surplus is currently defined: as the difference between a producer’s revenues and all of their costs, including opportunity costs.
Extending Surplus to the Collective Level
When we aggregate the definition of consumer–producer surplus to the group level (i.e., everyone collectively as a group), a key simplification emerges:
Consumer Surplus = use value - price
Producer Surplus = price - cost
Summed across all consumer–producers, the price variable cancels out. Thus, the total surplus — i.e., the general welfare of the public — simplifies to:
"Total Surplus" = "use value (real GDP)" - "cost of producing it"
Here, the cost for the entire group represents the total labor time spent producing that output, barring externalities. In other words, absent any externalities, the real cost of production at the societal level is measured in labor time.
This subtle yet crucial point underscores that, from a macroeconomic or group-wide perspective, the price dimension effectively disappears. What remains is the difference between the total use value of goods and services — that is, the real GDP consumed collectively — and the total cost measured as collective labor time spent making it (again, assuming no externalities).
The key twist is that total surplus can only be increased by raising labor productivity, because collectively, everyone’s surplus—summed across the entire economy—is equal to real GDP minus the time it takes to produce it. This expression effectively captures overall labor productivity. The higher that productivity, the more each individual can produce for the same amount of work, which in turn leaves more room for either additional consumption or leisure.
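The cancellation of the price term in the aggregation above can be verified with a minimal numerical sketch (all trade figures below are hypothetical, chosen only to illustrate the algebra):

```python
# Minimal sketch (hypothetical numbers): summing consumer and producer
# surplus across all trades makes the price terms cancel, leaving
# total surplus = total use value - total production cost.

trades = [
    # (use value to consumer, market price, producer's cost incl. opportunity cost)
    (10.0, 7.0, 4.0),
    (15.0, 9.0, 5.0),
    (8.0, 6.0, 3.0),
]

consumer_surplus = sum(u - p for u, p, _ in trades)   # use value - price
producer_surplus = sum(p - c for _, p, c in trades)   # price - cost
total_surplus = consumer_surplus + producer_surplus

# Price cancels: total surplus equals use value minus cost directly.
assert abs(total_surplus - sum(u - c for u, _, c in trades)) < 1e-9

print(total_surplus)  # 21.0
```

The assertion holds for any set of trades, which is the point: at the group level the price column drops out of the welfare calculation entirely.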
Therefore, Pareto-efficiency, in a strict dual sense, must acknowledge:
Collective benefits: the measure of collective well-being from consumption, as objectively measured (though not solely) by real GDP.
Collective costs: the measure of productive efficiency, as objectively measured by labor productivity, or how much time it takes to produce the real GDP that is consumed.
In this way, Pareto-efficiency becomes dually defined, not only by real per-capita GDP but also by the amount of labor-time required to generate it. For instance, the fact that the labor force participation rate in the United States has declined substantially in recent years suggests a measure of Pareto-efficiency that might not be reflected simply by growth in real per-capita GDP—people may be choosing more leisure relative to GDP.
The foundational Axiom of Collective Production Costs
Axiom 0 clearly explains that, in the real world, the production costs of real GDP consist exclusively of resource use and labor. Under this axiom:
Real Gross Output = Real Final Consumption (Real GDP = collective benefits)
+ Real Intermediate Consumption (Real Costs = labor + all externalities)
For society collectively (as a group), real intermediate consumption—as distinct from nominal intermediate consumption measured in money—consists entirely of labor costs and resource depletion (negative externalities). Efforts to price these elements more accurately are reflected in mechanisms such as carbon credits.
In practical terms:
Intermediate consumption includes not only commodities like lumber, oil, and gas but also pollution and, most importantly, human labor.
Labor constitutes the largest share of intermediate consumption, signifying the real cost of producing the real GDP we all consume.
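The accounting identity stated under Axiom 0 can be sketched numerically (all figures below are hypothetical, used only to show how the terms combine):

```python
# Hedged sketch (hypothetical figures): the paper's Axiom 0 identity,
#   Real Gross Output = Real Final Consumption + Real Intermediate Consumption,
# where collective intermediate consumption reduces to labor plus
# externalities (resource depletion, pollution).

real_final_consumption = 100.0   # real GDP: collective benefits
labor_cost = 60.0                # labor time, the largest intermediate input
externalities = 15.0             # resource depletion, pollution, etc.

real_intermediate_consumption = labor_cost + externalities
real_gross_output = real_final_consumption + real_intermediate_consumption

print(real_gross_output)  # 175.0
```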
Final Conclusion
With this dual definition of Pareto-optimality, we can see how intermediate consumption—the portion of real GDP not consumed by the public (e.g., military spending)—imposes costs without necessarily producing equivalent benefits. Even if labor productivity (the other side of the Pareto-efficiency equation) is maximized, a paradox persists: productivity gains can endure under involuntary exchange (akin to slavery or feudalism) if imperfect information is minimized.
A prime example is the Soviet Union under Stalin. Resources were allocated to maximize labor productivity but failed to bolster public consumption or overall welfare. The regime curtailed information asymmetry through stukachi (informants) and gulags for underperformers, yet these strategies resulted in an inefficient allocation of real GDP under involuntary exchange.
In modern markets, the role of informants is effectively replaced by stock market signals. According to Jensen and Meckling’s agency theory, managerial inefficiencies become apparent when firms show lower-than-benchmark P/E ratios. Rewarding CEOs with stock options aligns their incentives with shareholders, thereby reducing agency costs and boosting governance. This widely validated approach shows that aligning management and shareholder interests maximizes firm value and elevates labor productivity.
Looking at per-capita GDP rankings, it becomes evident that enforceable property rights for fractional shareholders are pivotal for raising efficiency. If you purchase a factory, the more certain you are of retaining ownership a century later, the more Pareto-efficient (as measured by per-capita GDP and growth) the overall economy becomes. This link emerges starkly when comparing real-world per-capita GDP across nations.
The underlying reason is straightforward: workers on hourly wages have minimal motivation to enhance productivity, while beneficial owners, who directly reap the rewards of cost savings, are the ones who implement real-world productivity improvements. Yet such innovation only happens if owners are sure they will benefit from their efforts—otherwise, they have no incentive to invest in more efficient production. In essence, if you’re uncertain you’ll still own the factory next year, why pour resources into property upgrades? This mirrors the tragedy of the commons: an unprotected asset base leads to underinvestment and stagnation.
In many former Soviet republics, low per-capita GDP can be explained by a prisoner’s dilemma scenario: corrupt officials and feeble law enforcement create strategic uncertainty about future asset ownership. For instance, authorities may be allowed to take bribes or even expropriate property (the Russian term otzhat’ zavod illustrates such forced appropriation). This constant risk of asset seizure by the next ruling coalition undermines investment and fosters lethargic growth.
In theory, one might curb such rent-seeking by promoting better self-governance. Yet, as Animal House famously implies, sometimes a more direct approach is necessary:
“We gotta take these bastards. Now we could do it with conventional weapons, but that could take years and cost millions of lives. No, I think we have to go all out.”
How, you ask? By creating a superior monetary system that curtails rent-seeking by entrenched financial interests. To that end, we proudly present the TNT-Bank-issued water-backed bearer (permissionless) money—our proposed solution to reduce rent-seeking in the financial sector and guide the global economy toward a more Pareto-efficient future.
Q&A: Core Points of Potential Objection
Q1: The paper’s tone shifts between formal academic logic and cinematic/pop-cultural references. Isn’t that jarring?
A1:
While mixing rigorous analysis (e.g., Gödel’s incompleteness theorems, Arrow-Debreu assumptions) with references to movies (The Godfather, Carlito’s Way) may seem unusual, it’s an intentional choice:
Engagement: Pop-cultural or historical anecdotes can make abstract or technical ideas more relatable and memorable to a wide audience.
Illustrative Power: Iconic quotes and real-world events (including personal accounts from Wall Street) show concretely how rent-seeking or dogma-induced blindness appears in practical scenarios.
We acknowledge this style differs from a strictly formal academic format. However, it aims to reach diverse readers—economists, mathematicians, policymakers, and the general public—who may find narrative hooks more compelling than purely formal exposition.
Furthermore, the stories and examples drawn from actual Wall Street arbitrage activity are independently verifiable:
Past Employment: Anyone can confirm my professional history in quantitative trading.
Trading Records: Documented performance data is available for verification, ensuring that these anecdotes are not only illustrative but factually grounded as well.
In short, the dual tone—serious theoretical foundations coupled with colloquial examples—serves to bridge the gap between abstract logic and real-world application.
Q2: You treat both empirical facts (e.g., Earth is spherical) and mathematical facts (2+2=4) as equally ‘impossible to be false.’ Isn’t that philosophically debatable?
A2:
In theory, people can—and often do—debate almost anything. In reality, however, an “objectively true fact” is a claim about the world that cannot turn out to be false in the future, specifically because any rational observer can independently verify its accuracy. The point is straightforward:
Just as no new data can refute the Pythagorean theorem within Euclidean geometry,
No new data can refute the fact that humans typically have five fingers on each hand, which you can verify simply by looking at your own hand.
Once a proposition is so thoroughly confirmed that any competent, unbiased observer can test it and reach the same conclusion, it becomes objectively true—beyond realistic doubt or further refutation.
Q3: Your paper emphasizes rent-seeking and opportunism as primary sources of inefficiency. What about honest mistakes or simple ignorance?
A3:
The framework does not deny that bounded rationality, human error, or incomplete knowledge can also create inefficiencies. Indeed, the paper explicitly cites cognitive biases—like the availability heuristic or confirmation bias—that distort decisions absent any malicious intent.
However, opportunism—the “Rent-Seeking Lemma”—emerges as the most enduring driver of relative large-scale inefficiency under rational utility maximization. When rational agents discover they can extract wealth without creating value (e.g., via fraud or manipulation), and there is no effective mechanism blocking this propensity to “steal,” they will exploit these opportunities systematically. This repeated behavior not only disrupts entire markets and government institutions but also creates far more persistent and systemic distortions than sporadic mistakes rooted in ignorance.
Critically, relative Pareto efficiency depends on how well an economy can block rent-seeking “economic parasites.” In other words, two economies may each have occasional honest errors, but the one that better prevents systematic opportunism will generally achieve higher relative efficiency over the long run.
Q4: TIB (Theory-Induced Blindness) and DIBIL (Dogma-Induced Blindness Impeding Literacy) seem closely related. How do we avoid confusing them?
A4:
They are dually defined rather than strictly separate, and they do overlap—but here’s how we distinguish them:
Theory-Induced Blindness (TIB)
Failing to discard a hypothesis once it’s definitively refuted. The agent clings to a now-false premise, ignoring contradictory evidence.
Dogma-Induced Blindness Impeding Literacy (DIBIL)
Prematurely elevating a hypothesis to fact status, treating it as indisputable before sufficient proof exists.
In practice:
DIBIL is about “crowning” a hypothesis as fact too soon.
TIB is about “failing to dethrone” a hypothesis that has already been disproven.
Though distinct, they often appear together: a community first crowns an unverified claim as fact (DIBIL), then refuses to abandon it (TIB) even when evidence mounts against it.
Q5: The text sometimes labels certain economic ideas or frameworks as ‘dogma.’ Isn’t it tricky to tell dogma from legitimate theory?
A5:
In the paper’s terminology, a proposition is dogma if it is treated as absolutely true—even when (a) it hasn’t been rigorously proven, and (b) contradictory data exist. A “legitimate theory” remains open to falsification and is not given “cannot be false” status.
Theory: A testable, falsifiable statement that scholars continue to probe with new data and methods.
Dogma: A statement that, despite contradictory evidence or the lack of conclusive proof, is upheld as beyond question.
Of course, real-world economics can blur the line, especially when strong beliefs or political motives prevail. The paper’s solution is to maintain a clear separation between “facts” (Ξ(ϕ)=1) and “hypotheses” (0<Ξ(ϕ)<1), ensuring no proposition is labeled a “fact” before earning that status through overwhelming logical or empirical validation.
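The fact/hypothesis split enforced by the validation operator Ξ can be sketched as a simple classifier. The propositions and Ξ values below are illustrative assumptions, not part of the paper's formal system:

```python
# Illustrative sketch: classifying propositions by the validation operator Xi.
# Xi(phi) == 1 marks a fact (cannot turn out to be false);
# 0 < Xi(phi) < 1 marks a hypothesis still open to refutation.
# The example propositions and Xi values are hypothetical.

def classify(xi: float) -> str:
    """Map a validation score Xi(phi) to the paper's categories."""
    if not 0.0 < xi <= 1.0:
        raise ValueError("Xi(phi) must lie in (0, 1]")
    return "fact" if xi == 1.0 else "hypothesis"

propositions = {
    "2 + 2 = 4": 1.0,                                 # provable: a fact
    "rapid deflation preceded the Depression": 1.0,   # verified observation
    "the Fed's mistakes caused the deflation": 0.6,   # contested hypothesis
}

for phi, xi in propositions.items():
    print(phi, "->", classify(xi))
```

The guard clause matters: a proposition with Ξ(ϕ)=0 is simply refuted and drops out of the system, so it is rejected here rather than classified.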
Q6: You emphasize that strong property rights and law enforcement fix externalities (via Coase) and deter rent-seeking. Isn’t that oversimplified? What about corruption or high transaction costs?
A6:
It’s a misinterpretation to think we’re arguing that property rights alone magically solve everything. Rather, in the real world, under the Rent-Seeking Lemma, no other reliable mechanism for regulating externalities exists besides well-defined property rights and strong enforcement. Here’s the nuance:
Coase Theorem: It holds only when rights are well-defined, enforceable, and transaction costs are low.
Real-World Frictions: Corruption, legal fees, or bureaucratic red tape drive up costs and undermine those Coasean conditions.
Initial Allocation: Who starts with which property rights matters greatly, because reassigning them in reality is expensive—if not outright blocked by powerful interests.
In essence, clarity of ownership plus effective enforcement deters opportunistic agents (so-called “economic parasites”) from seizing or destroying value without consequences. At the same time, we fully acknowledge that building a fair enforcement framework is often monumentally challenging, requiring transparent governance and robust anti-corruption measures. Lacking that, the Coasean ideal remains only partially attainable.
Q7: The paper’s strict distinction between final consumption and intermediate consumption might be oversimplified. What about ‘useful’ government services or public goods?
A7:
Labeling defense, policing, or regulatory oversight as “intermediate consumption” does not imply these services lack value. Rather, the paper highlights that such services function as inputs—necessary for safeguarding or enabling final consumption outcomes—rather than providing direct consumer utility themselves.
For instance, having a navy patrolling the Caribbean protects your luxury yacht from pirates. This security allows you to enjoy sailing in peace, just as lumber or materials in a boat’s construction allow you to own and use the yacht in the first place. You’re not “consuming” police forces or naval operations for pleasure; you’re consuming the safety they produce so you can sail without fear of robbery or enslavement.
If the same security level can be achieved with fewer resources (e.g., more efficient law enforcement), that frees up more capacity for direct consumption—foods, leisure, housing, entertainment—thereby boosting overall efficiency. Thus, categorizing government services as intermediate costs doesn’t dismiss their importance; it simply reflects that they are inputs in the chain leading to final, utility-generating goods and experiences.
Q8: Why rely on classical, bivalent logic (L-language) rather than multi-valued or fuzzy logic for economic modeling?
A8:
The paper explicitly states that in actual practice, real-world decisions—legislative votes, court judgments, contractual agreements—universally reduce to binary outcomes: a bill either passes or it does not; a verdict is “guilty” or “not guilty.” While fuzzy or multi-valued logic can model degrees of truth in theory, the final, enforceable choice is binary.
Paper’s Explanation: Once you leave the realm of speculation and enter action—enforcement, signing deals, passing laws—outcomes must be discretely enforceable. There is no partial “pass” of a bill or “somewhat binding” contract.
Why Bivalent Logic? Courts and regulators eventually render singular yes/no rulings or decisions. The paper’s L-language thus captures how real-world yes/no decisions function more accurately than a multi-valued system.
Hence, bivalent logic suits the text’s focus on enforceable economic and legal actions, where a “definitive” result is mandated.
Q9: The paper says society’s ‘only real collective costs’ are labor and resource depletion. Doesn’t that ignore capital, technology, or intangible investments?
A9:
The paper fully acknowledges the significance of capital, technology, and intangible investments. However, it points out that at the macro (collective) level, these all ultimately reduce to labor plus resources:
Capital Goods
A factory, for example, is constructed by human effort (labor) and requires raw materials such as cement, steel, and wiring.
Technology/Software
Even “intangible” products—like software—demand programmers (labor) and physical infrastructure (hardware, electricity). Generating that electricity itself depends on natural resources (coal, oil, uranium, or biomass).
Renewable Energy
Solar panels, wind turbines, and hydroelectric dams likewise necessitate labor to design, manufacture, install, and maintain, plus physical materials (silicon, rare-earth metals, concrete).
Conclusion
While capital and technology are vital for individual firms, every good or service in a society ultimately springs from two collective inputs: human effort (labor) and resource usage (materials, energy sources, land). Even so-called “renewables” require machinery and infrastructure forged from the Earth’s existing materials. From a societal perspective, once you trace each production layer back to its root, nothing is created without people’s work plus Earth’s resources—that’s it.
Q10: What about altruism, charity, or volunteerism that don’t fit ‘rational utility maximization’ or ‘rent-seeking’?
A10:
The paper does not dismiss the possibility of altruism or ethical motivations. However, it explicitly focuses on arms-length commercial transactions—the standard domain of mathematical economics—where individuals exchange goods, services, or labor-time for wages. This scope excludes gifts, inheritances, or purely altruistic charitable acts.
In real-world markets, where the bulk of interactions involve commercial exchange, the Rent-Seeking Lemma asserts that whenever systemic opportunities for unearned wealth arise, a subset of individuals—especially those less constrained by moral sentiment—will exploit them. While altruism may temper opportunistic behavior, it rarely eradicates it at scale. In fact, inheritance disputes among family members underscore that genuine altruism is often scarce enough to be omitted from most economic analyses without sacrificing real-world accuracy.
Consequently, the paper emphasizes institutional safeguards—robust property rights, credible law enforcement, and transparency—to block rent-seeking at the societal level. Goodwill alone typically falls short in deterring high-stakes or large-scale exploitation, which tends to thrive in the absence of structural checks and balances.
Q11: The paper uses strong language about ‘blocking’ rent-seeking. Isn’t that too militaristic for something like economic policy?
A11:
“Blocking” emphasizes proactive prevention of exploitative tactics rather than hoping they fade out. Economically, rent-seeking persists as long as it remains profitable. Legislation, enforcement, and market design can raise the costs and risks of parasitic behaviors, effectively blocking them.
In short: It’s not literal militarism; it’s a metaphor for closing loopholes and enforcing rules so that unearned wealth extraction is no longer a low-risk/high-reward proposition.
Q12: Isn’t there a risk of overstating how easily we can classify statements as ‘fact’ (Ξ(ϕ)=1) or ‘hypothesis’ (0<Ξ(ϕ)<1) in complex economic models?
A12:
This question misrepresents our core argument. In mathematical economics, “facts” refer to actual, observed real-world events or observations that cannot turn out to be false. Far more importantly, within L-language (first-order logic under dual consistency), the end-user is compelled to switch to a more likely hypothesis using Bayesian updates whenever new evidence arises.
For instance, no one disputes the fact that rapid deflation was observed prior to—and thus contributed to—the Great Depression. That’s a verifiable event. However, there are two different hypotheses as to the root cause of the deflation that led to the Depression, and neither hypothesis is a fact:
H0: It was caused by the Fed’s mistakes (unintentional mismanagement).
H1: It was caused by opportunistic rent-seekers establishing the Fed, leading to an inevitable chain reaction culminating in the Depression (and eventually World War II, etc.).
Neither H0 nor H1 can be proven conclusively; they remain hypotheses—akin to the Riemann Hypothesis in mathematics—potentially unprovable within current axiomatic or evidentiary limits. While in statistics we typically must decide which hypothesis is more likely, based on available data, in this case such a decision is unnecessary because both H0 and H1 can be true at the same time. They are not mutually exclusive: it is entirely possible that economic parasites created the Fed (as per H1) and turned out to be incompetent (as per H0).
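The Bayesian updating described above can be sketched for two hypotheses that are not mutually exclusive: because H0 and H1 can both be true, each is updated independently against the evidence, and their posteriors need not sum to one. All priors and likelihoods below are hypothetical placeholders:

```python
# Hedged sketch (all probabilities hypothetical): Bayesian updates for two
# non-mutually-exclusive hypotheses -- H0 (the Fed's unintentional mistakes)
# and H1 (rent-seeking origins of the Fed) -- given new evidence E.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) via Bayes' rule with a two-case total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Each hypothesis is updated on its own; the posteriors need not sum to 1.
posterior_h0 = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4)
posterior_h1 = bayes_update(prior=0.3, p_e_given_h=0.7, p_e_given_not_h=0.5)

print(round(posterior_h0, 3))  # 0.667
print(round(posterior_h1, 3))  # 0.375
```

This is exactly the discipline the L-language imposes: as evidence accumulates, the agent is compelled to shift weight between hypotheses rather than crowning either one a fact.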
Ultimately, the paper contends that labeling any proposition a “fact” in economics demands extraordinary confirmation—logical proof or overwhelming empirical consensus. Until then, it remains a hypothesis:
Benefit: This approach reduces the risk of dogma or Theory-Induced Blindness (TIB) by avoiding prematurely treating an unverified statement as indisputable truth.
Pragmatic Approach: If new evidence does overturn a “fact,” it simply reverts to the “hypothesis” category. The paper advocates being conservative in anointing facts, precisely to prevent dogmatic entrenchment.
Final Remark
By addressing these twelve questions, the paper preemptively clarifies or rebuts the most likely points of confusion or objection. This ensures that key concepts—from the strict fact/hypothesis divide to the role of property rights and bivalent logic—are both well-founded and resilient to common critiques.