Lesson 1
The Importance of Auditing Your Axioms
by Joseph Mark Haykov
June 29, 2025
Abstract
The theories of rationality taught in universities exist in an academic fantasyland—clean, elegant, and utterly divorced from reality. In the real world—especially on Wall Street—the game is played under conditions of uncertainty, opportunism, and catastrophic risk. Here, a flawed model isn’t a minor error—it’s a career-ending disaster. Carelessness is a luxury afforded to amateurs, not players operating in high-stakes environments.
This paper introduces a different framework, battle-hardened in real arenas where one false assumption leads directly to ruin. Instead of the naïve assumption of a “perfectly rational agent,” we propose the BROSUMSI agent: an individual who is Boundedly Rational, Opportunistic, a Subjective Utility Maximizer, and Susceptible to Incentives.
The fundamental flaw in academic models is their casual attitude toward assumptions. Mistaking provisional hypotheses for rock-solid truths—as academics frequently do—leads directly to financial annihilation or even prison. To counteract this, we offer a disciplined, fail-proof set of inference rules crafted specifically for Wall Street—rules built explicitly for individuals who cannot afford to make mistakes. These aren’t theories for children playing pretend; they’re strategies for adults playing to win.
Introduction
Mathematical game theory is a powerful tool for modeling strategic interactions in economics, politics, biology, and beyond. It leverages classical first-order logic to derive optimal strategies (theorems) from fundamental assumptions (axioms). Its clarity and rigor have made it invaluable for analyzing both competitive and cooperative dynamics across diverse real-world contexts.
Yet, game theory faces a fundamental limitation as a model of actual human behavior. Its core behavioral axiom—originally articulated by von Neumann and Morgenstern—that individuals act as “rational utility maximizers” does not fully align with how real people behave. Decades of empirical research confirm that human decisions systematically deviate from this idealized standard. People don’t possess infinite cognitive resources or perfect information. Instead, they rely heavily on heuristics, are swayed by emotions and social pressures, and frequently engage in actions contrary to narrow academic definitions of self-interest—such as Japanese kamikaze pilots in WWII or Islamic jihadists today. Simply put, the classic assumption of perfect rationality is incomplete.
Modern behavioral and experimental economics—drawing on seminal insights from Simon (Bounded Rationality), Kahneman and Tversky (Subjective Utility in Prospect Theory), and Becker (Opportunism)—show that humans are more accurately modeled as BROSUMSI agents: Boundedly Rational, Opportunistic, Subjective Utility Maximizers who are Susceptible to Incentives.
BROSUMSI: A Scientific (Empirical and Falsifiable) Axiom
Unlike mere philosophical tautologies, the BROSUMSI axiom stands out as a powerful scientific principle precisely because it is falsifiable—it generates concrete, testable predictions. For example, as demonstrated by Nobel laureate Gary Becker, BROSUMSI clearly predicts that if the "cost" of crime is significantly reduced (e.g., through decriminalization), opportunistic agents who respond to incentives will commit more crime. If empirical observations show no such increase, the axiom would be invalidated.
Yet consistently, real-world evidence supports the axiom rather than refutes it. Consider California’s Proposition 47, which reduced penalties for theft under $950. This policy functioned as a real-time, field-based falsification test—less controlled than a laboratory experiment, but far more consequential than a thought experiment. If BROSUMSI were false, we would expect no systematic behavioral shift in response to the altered incentive structure. Instead, post-2014 data showed significant increases in retail theft and widespread store closures—an outcome that aligns precisely with BROSUMSI’s core prediction.
This empirical robustness positions BROSUMSI as the most reliable framework currently available for modeling real human decision-making. Theorems derived from its premises demonstrate markedly higher predictive accuracy in real-world settings than those based on traditional alternatives—most notably the Rational Utility Maximizer assumption, which consistently fails under empirical scrutiny.
Put plainly: The BROSUMSI axiom has never been falsified by behavioral evidence in documented economic, social, and experimental contexts involving incentive-driven decisions and remains fully testable through controlled experiments—much like those pioneered by Kahneman and Tversky in developing Prospect Theory. As a result, BROSUMSI may be viewed not merely as a useful model, but as a foundational law of human behavior—akin, in scope and reliability, to conservation principles in physics (e.g., the first law of thermodynamics or the preservation of information in quantum mechanics).
Our background is not academic theory, but proprietary trading in statistical arbitrage—a domain where flawed models aren’t harmless; they’re lethal. In this world, assumptions are battle-tested under pressure, and failure has consequences. Academic macroeconomics, by contrast, has repeatedly failed to predict real-world inflection points—not due to weak mathematics, but because its underlying axioms are rarely interrogated. As investors like Warren Buffett have noted, economic forecasts often lag reality, not lead it. By contrast, the natural sciences achieve predictive superiority because they operate in closed systems, where variables can be isolated and assumptions tested with precision.
Human behavior does not share this luxury. It is adversarial, culturally conditioned, and hyper-responsive to incentives. That’s why, in our world, a formal audit of one’s axioms isn’t optional—it’s a prerequisite for survival.
A common excuse among social scientists, particularly economists, is that human actions are inherently less predictable than physical phenomena. But this excuse collapses under serious scrutiny. Even in quantum mechanics—where Heisenberg’s uncertainty principle limits the precision of simultaneous measurements—wavefunction evolution remains deterministic, governed by Schrödinger’s equation. Uncertainty arises only during measurement, and even then, the statistical outcomes are mathematically precise. This is not “fuzziness” but structured probabilism. Likewise, while individual human actions may contain variance, aggregate behavior under well-defined incentive structures is statistically robust. The same way thermodynamic laws emerge from quantum mechanics, large-scale human behavior conforms to systematic patterns captured by the BROSUMSI framework. To claim otherwise is not empirical caution—it’s epistemological evasion.
While individual behavior may range from the unwaveringly ethical to the chronically exploitative, aggregate opportunism is not merely probable—it is inevitable. This claim is empirically testable and supported by extensive literature, including landmark studies such as the Milgram obedience experiments. In these trials, ordinary individuals—susceptible to incentives such as authority and social pressure—were induced to commit acts that, under current U.S. law, would be classified as criminal offenses, including torture and assault. The findings underscore how predictable incentive structures can override personal morality, confirming BROSUMSI’s explanatory and predictive power at the population level.
Milgram’s subjects—fully aware they could leave freely—chose to administer what they believed were lethal shocks. Why? Because BROSUMSI agents rationally optimize subjective utility: the immediate psychological cost of defying authority (awkwardness, confrontation) outweighed the abstract moral cost of compliance. Their “choice” was not coerced at gunpoint but engineered by corrupted axioms equating obedience with virtue. This is opportunism in its purest form: trading ethical integrity for social convenience.
Moreover, the Stanford Prison Experiment reproduced these effects under different but equally powerful incentives: previously law-abiding individuals systematically engaged in abusive and criminal behavior when placed in hierarchical, high-pressure environments. These converging results demonstrate that under the right incentive structures, moral integrity is not a stable individual trait—it is a conditional outcome of context.
Opportunism and responsiveness to incentives—such as increased criminal behavior when benefits outweigh costs—have been extensively examined by leading economists: Becker (crime and punishment), Williamson (transaction cost economics), Tullock, Buchanan, and Krueger (public choice theory), and Jensen and Meckling (agency theory). Given this robust intellectual foundation, we need not belabor the point further—especially since opportunistic behavior emerges under both bounded and perfectly rational models.
Instead, this paper focuses on a more neglected but crucial dimension: bounded rationality, and its implications for formal modeling using Classical First-Order Logic (CFOL). Rather than lazily dismissing behavioral deviations as mere "irrationality" or cataloging them as “cognitive biases,” we aim to construct a rigorous, logically precise method for capturing the true structure of human decision-making. For professionals operating in high-stakes environments, relying on simplistic heuristics such as “people are irrational” is not only intellectually shallow—it is financially reckless. We can, and must, do better.
Utility Maximization: The Root Cause of So-Called “Bounded” Rationality
Let’s set the record straight: People aren’t maximizing objective truth—they’re maximizing subjective utility. Crucially, any cognitively unimpaired adult (excluding children and those affected by neurodegenerative disorders, intellectual disability, or psychosis)—regardless of formal education—intuitively applies basic first-order logic like modus ponens.
Thus, genuine bounded rationality rarely stems from deductive failure. It emerges when agents actively select flawed axioms—premises that maximize psychological payoff despite objective falsehood.
The brutal truth: Humans run capable logic engines on corrupted data. While individuals occasionally make inference errors, the systematic patterns we label ‘cognitive biases’ don’t arise from random miscalculations — they’re the rational output of shared but flawed axioms. Logic isn’t broken; our premises are.
In the real world, bounded rationality typically emerges because people adopt and reason from premises that are false, distorted, or self-serving. Individuals choose to believe whatever assumptions enhance their happiness, bolster their self-image, or promise a greater personal payoff, regardless of their objective validity. In other words, real-world errors aren't due to flawed deduction; they're due to flawed starting axioms.
To state this plainly: if your logical inference is sound, the only way your conclusions can turn out wrong is if one or more of your foundational axioms aren't actually true. Why do we fall for flattery? Because we happily adopt the premise that we look “marvelous,” even if it’s objectively questionable. Every BROSUMSI agent (Boundedly Rational, Opportunistic, Subjective Utility Maximizer, Susceptible to Incentives) prioritizes subjective utility over objective truth when selecting axioms.
A common misunderstanding of BROSUMSI is thinking that seemingly altruistic or self-sacrificial behaviors contradict subjective utility maximization. This misconception persists due to outdated views of utility as strictly material or self-interested. Under BROSUMSI, utility is explicitly endogenous—it depends entirely on each individual's personal set of axioms. These axioms might include spiritual beliefs, moral imperatives, or social-identity constructs, all of which define what an individual finds rewarding.
Thus, a suicide bomber who believes martyrdom grants eternal paradise is maximizing subjective utility according to their internal axioms. Similarly, altruistic acts yield emotional, reputational, or moral payoffs, making them entirely rational from the agent’s subjective perspective. BROSUMSI doesn't label such behavior "irrational"; it explains it rigorously as logical actions based on personalized premises.
This framework’s explanatory power is remarkable. Take any cognitive bias and you'll find it stems not from defective logic, but from the (often subconscious) choice to accept axioms that maximize subjective utility, even at the expense of objective accuracy. Consider these familiar biases:
Anchoring Bias: Accepting an arbitrary premise from an authority figure, saving cognitive effort and maximizing personal convenience. (As Bertrand Russell noted: “Many people would rather die than think; in fact, most do.”)
Confirmation Bias: Refusing to abandon a cherished belief, avoiding the psychological discomfort and loss of identity utility.
Sunk Cost Fallacy: Persisting in losing actions because admitting error reduces psychological comfort more than continuing down the doomed path.
Monty Hall Fallacy: Misjudging probabilities not due to failed logic, but due to an unchallenged axiom—“once a door is opened, it’s now 50/50.” The agent doesn’t update P(H|E); they never re-evaluate their prior. (A short simulation sketch follows this list.)
Base-Rate Neglect: Treating vivid evidence (e.g., a 99% test result) as a fixed axiom while ignoring priors—a satisficing substitution that bypasses Bayes’ Rule by anchoring on surface information.
Status Quo Bias: Preferring current circumstances solely because change introduces uncertainty and potentially lowers emotional comfort—irrespective of objectively superior alternatives.
Optimism Bias: Overestimating positive outcomes because optimism inherently feels better, even if unsupported by reality.
Authority Bias: Blindly trusting authority figures to minimize the cognitive cost of independent critical analysis.
Self-Serving Bias: Crediting oneself for successes while blaming external factors for failures, preserving psychological utility at truth’s expense.
Hindsight Bias: Claiming after the fact that an outcome was predictable, maintaining the comforting illusion of control and competence.
Dunning-Kruger Effect: Overestimating competence in unfamiliar areas to avoid the psychological pain of admitting inadequacy.
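To make the Monty Hall point concrete, here is a minimal Monte Carlo sketch in Python. It is our own illustration (the trial count and function names are arbitrary), comparing the “stay” strategy implied by the unaudited 50/50 axiom with the “switch” strategy implied by actually updating P(H|E).

import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)        # where the prize actually is
    pick = random.choice(doors)       # the player's initial choice
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

# The unaudited "50/50" axiom predicts both strategies win half the time;
# the simulation shows roughly 1/3 for staying and 2/3 for switching.
print(f"stay  : {win_rate(switch=False):.3f}")
print(f"switch: {win_rate(switch=True):.3f}")

The failure mode is exactly the one described above: the logic of each strategy is executed flawlessly, and only the unaudited 50/50 premise is wrong.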
The foundational insight: humans are not intrinsically irrational—they execute flawless logic upon corrupted premises. Every documented cognitive bias emerges from the same root cause: agents deliberately adopt self-serving axioms to maximize subjective utility—where utility includes psychological payoff (e.g., emotional comfort, identity preservation) and cognitive economy (minimizing costly System 2 deliberation). Though deductive reasoning operates perfectly, its conclusions inherit the flaws of its inputs—inputs strategically selected for personal advantage, not empirical fidelity.
Theory-induced Blindness: Utility Maximization via Cognitive Laziness
Theory-induced blindness, first described by Kahneman, is an ideal illustration of the BROSUMSI principle at work. Despite popular claims that “disbelieving is hard” because false axioms are inherently tricky to detect, the real cause is simpler and more aligned with human nature: People—yes, even scientists—stick to false premises because doing so preserves their subjective utility. Accepting these comfortable but erroneous axioms maximizes status, certainty, professional credibility, or self-esteem.
Consider Kahneman’s famous Bernoulli example in Thinking, Fast and Slow: the mistake wasn't in the logic or reasoning steps, but rather in the uncritical acceptance of an incorrect initial axiom. Why is such an axiom stubbornly retained? Not because it's hidden or logically intricate, but because abandoning it exacts a steep subjective cost. Rejecting a foundational assumption means admitting error, damaging intellectual identity, hurting professional status, and losing peace of mind. Yet even this is only part of the story.
The deep-rooted reluctance to question our core assumptions maps neatly onto Kahneman’s dual-process cognitive framework. Our intuitive “System 1” effortlessly operates on pre-existing axioms, maximizing subjective utility on autopilot. To seriously audit those axioms—to explicitly ask, “Wait, is this actually true?”—requires activating the analytical, deliberative “System 2.” And System 2 is slow, costly, and tiring. Thus, we persist in cognitive biases not merely because they feel good, but also because the very act of questioning and reevaluating our foundational assumptions demands significant cognitive effort—a cost our subconscious mind actively resists paying.
To summarize clearly and bluntly: The difficulty in disbelieving isn't primarily cognitive—it’s motivational. Rejecting comfortable axioms subconsciously reduces subjective utility, a price we're instinctively reluctant to pay. Theory-induced blindness thus exemplifies a universal law: People maximize subjective utility, even when choosing which foundational assumptions to believe or discard.
Now, dear reader, this was indeed a lengthy preface—but necessary. Brace yourself. Because what follows is going to shake the very foundations of what you think you know.
🌍 The Law of Collapse (For Smart Teens)
Imagine our world as a massive game full of people making deals, trading resources, solving problems, and building things together. Everyone is playing their part in this giant, interconnected puzzle.
Now, game theory—the mathematics behind strategic decisions—helps us predict and understand smart moves within this huge puzzle. Think of it as chess, but not limited to a board. It applies to everything: business deals, war strategies, friendships, relationships—you name it.
But here’s the big catch:
Game theory is only as good as the rules it uses—and those rules have to match reality exactly.
And that is where things usually go wrong.
💥 Why Do Systems Collapse?
They collapse because people don’t always play fair.
They collapse because people sometimes lie, cheat, or hide crucial information.
But most importantly, systems collapse because people willingly believe in things that aren't true—especially when those beliefs make them feel good.
Here’s the Law of Collapse summed up simply:
Any system that rewards false beliefs or hidden truths—without correcting them—will inevitably break down. Not "maybe." Not "possibly." Always.
🧠 Why Do People Believe False Things?
It's not because people are dumb.
It's not because people are bad at logical thinking.
It's because believing nice stories feels good.
Humans naturally choose comforting ideas, even if they are untrue, because these comforting beliefs help protect our self-esteem, lower stress, or create a feeling of control. This is what we call subjective utility—prioritizing emotional comfort over objective facts.
That's precisely why people:
Stay in unhappy relationships, believing: “Things will improve eventually.”
Continue losing money in shady investments, thinking: “This time is different.”
Ignore obvious red flags, saying: “They wouldn’t deceive me.”
And it's not just gullible or naive people who fall into this trap. Brilliant minds—professors, CEOs, scientists, even top investors—regularly fall victim, too.
⚠️ So, What's the Solution?
You can’t simply stop people from having feelings or emotions.
But you can prevent systems from rewarding deception or comforting lies.
And this is where Axiom Audits become your strongest defense.
🔍 What Exactly is an Axiom Audit?
Think of an axiom as the first brick you place when building a wall. It’s your initial assumption, your foundation.
If that brick is cracked, shaky, or unreliable, your entire wall—the whole structure of your logic and decisions—becomes unstable.
An Axiom Audit means deliberately pausing before building on assumptions and asking yourself tough questions:
❓ “Wait—why exactly do I believe this?”
❓ “Does it really match reality?”
❓ “Or am I just believing it because it feels nice or comfortable?”
Without performing regular Axiom Audits, people construct elaborate, seemingly logical arguments on shaky foundations. Inevitably, these weak structures collapse under the weight of their own falsehoods.
🚩 Real-World Example: Why This Matters
Remember the 2008 financial crisis?
People assumed housing prices could “never go down.” It was comforting, profitable, and seemed unshakeable—so they built massive financial bets on top of this belief.
Nobody stopped to perform an Axiom Audit. Nobody asked if this assumption was actually true.
When reality showed up—and prices did collapse—the entire system crumbled, causing catastrophic consequences worldwide.
This scenario isn’t unique. Cult scandals, political collapses, cryptocurrency scams—all follow the same tragic pattern. Same story, different mask.
🛡️ Bottom Line: How to Prevent Collapse
Here’s the three-step formula for stopping collapse in any system:
Audit your axioms. Question your assumptions constantly.
Only build strategies and beliefs on truth. Prioritize reality, not comforting falsehoods.
Never reward lies. Even if they’re pleasant or easy, refuse to accept them.
🔁 Quick Recap (In Plain English)
People aren't stupid—they’re human.
Humans naturally embrace comforting falsehoods, because these feel better.
Most major mistakes and collapses stem not from faulty logic, but from bad foundational assumptions.
If we neglect to verify our foundational assumptions—our axioms—our systems gradually drift towards inevitable collapse.
To prevent collapse, consistently audit the very foundations of your beliefs: your axioms.
🔍 Bayes’ Rule Applied to Axioms:
Why “Rational Utility Maximizer” Is a Dead Hypothesis
In classical logic, axioms are considered foundational truths, accepted without challenge. However, in real-world applications—particularly behavioral economics and finance—axioms are more accurately viewed as hypotheses. They are provisional beliefs, accepted temporarily for logical deduction, but always open to revision or outright rejection based on empirical evidence.
Understood this way, axioms are perfectly suited to Bayesian analysis, which provides a systematic method to evaluate and update beliefs based on real-world data.
Bayes’ Rule, formally stated, is:
P(H∣E) = [P(E∣H) ⋅ P(H)] ÷ P(E)
In simpler terms:
The probability that your assumption (axiom) is correct, given the evidence, directly depends on how accurately this assumption predicts observed reality.
🔥 Case Study: "Rational Utility Maximizer" (RUM)
Let's make it crystal clear:
Hypothesis (H₁): Agents are perfectly Rational Utility Maximizers.
Evidence (E): Observed behaviors include frequent phenomena such as:
Sunk Cost Fallacy
Confirmation Bias
Loss Aversion
Altruism and other seemingly “irrational” acts
Violations of Expected Utility Theory predictions
Under the assumption of perfect rationality (H₁), these behaviors are practically impossible—they should never occur. Thus, the probability of observing these behaviors if people really were rational utility maximizers (P(E|H₁)) is effectively zero.
Applying Bayes’ Rule:
If the observed evidence E (e.g. cognitive biases, altruism, sunk cost behavior) has an extremely low likelihood under the Rational Utility Maximizer hypothesis (P(E|H₁) ≈ 0), then regardless of how strong the prior belief P(H₁) once was, the posterior probability P(H₁|E) collapses.
In plain English: the Rational Utility Maximizer isn’t just outdated—it’s probabilistically obsolete. The data didn’t disagree with it; the data devastated it.
✅ Introducing a Superior Alternative: BROSUMSI
Now, consider an empirically robust alternative hypothesis (H₂):
Hypothesis (H₂): Agents are Boundedly Rational, Opportunistic, Subjective Utility Maximizers, and Susceptible to Incentives (BROSUMSI).
Observed Evidence (E): The same real-world behaviors (sunk costs, biases, altruism, etc.) not only align with BROSUMSI—they're exactly what we'd expect from agents behaving under subjective utility constraints and incentive structures.
Thus, under BROSUMSI, we have:
P(E∣H₂) ≈ 1
Applying Bayes again clearly demonstrates:
P(H₂∣E) = [P(E∣H₂) ⋅ P(H₂)] ÷ P(E) ≫ P(H₁∣E)
In plain English:
BROSUMSI isn’t just intuitively appealing—it’s statistically and empirically validated. It accurately predicts actual human behavior, unlike its outdated predecessor.
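To make the comparison concrete, here is a minimal Python sketch. The priors and likelihoods are illustrative stand-ins chosen for this example rather than measured frequencies; the point is the direction and scale of the update, not the exact numbers.

def posterior(prior_h, lik_e_given_h, prior_alt, lik_e_given_alt):
    """P(H|E), with the evidence expanded over two competing hypotheses:
    P(E) = P(E|H)*P(H) + P(E|H')*P(H')."""
    evidence = lik_e_given_h * prior_h + lik_e_given_alt * prior_alt
    return lik_e_given_h * prior_h / evidence

# Illustrative numbers: give RUM a generous 9-to-1 prior, then condition on
# E = "systematic biases, altruism, and sunk-cost behavior are observed."
p_rum, p_brosumsi = 0.9, 0.1
p_e_given_rum = 0.001        # E is nearly impossible if agents are perfectly rational
p_e_given_brosumsi = 0.95    # E is exactly what BROSUMSI predicts

print(f"P(RUM | E)      = {posterior(p_rum, p_e_given_rum, p_brosumsi, p_e_given_brosumsi):.4f}")
print(f"P(BROSUMSI | E) = {posterior(p_brosumsi, p_e_given_brosumsi, p_rum, p_e_given_rum):.4f}")
# Even with the prior stacked 9-to-1 in its favor, RUM's posterior collapses below one percent.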
📌 The Crucial Role of Axiom Audits (in Bayesian Terms)
An Axiom Audit simply means rigorously evaluating and updating your foundational beliefs (P(H|E)) in light of real-world data.
This isn’t merely good practice—it's essential intellectual and professional discipline.
Refusing to update your axioms, even when faced with clear contradictory evidence, isn’t just stubborn—it’s epistemically reckless. On Wall Street, ignoring reality is the equivalent of continuing to administer a drug after it has been proven ineffective: it's professional malpractice.
🧠 Bottom Line
The Rational Agent axiom (H₁) holds virtually zero credibility based on observed evidence.
BROSUMSI (H₂) is empirically robust, statistically validated, and provides explanatory power across both successes and failures observed in real-world systems.
Bayes’ Rule demands continual updates to our foundational assumptions—this is not optional; it is intellectually mandatory.
Conducting systematic Axiom Audits is how you structurally and scientifically align your assumptions with reality—rather than sentimentality or wishful thinking.
This Is Not Novel: Bertrand Russell Said It First
Let’s be candid, dear reader—we're not claiming groundbreaking originality here. We’re echoing a vital truth first stated eloquently by Bertrand Russell in his legendary 1959 interview. When asked what single piece of advice he would offer future generations, Russell replied:
"When you are studying any matter, or considering any philosophy, ask yourself only: What are the facts, and what is the truth that the facts bear out? Never let yourself be diverted either by what you wish to believe, or by what you think would have beneficent social effects if it were believed."
Russell's insight succinctly captures the heart of our message—the necessity of anchoring your beliefs strictly in evidence and objective reality, rather than personal comfort, convenience, or ideological bias.
What we’ve done here—leveraging Bayesian reasoning and rigorous logic—is formalize and substantiate Russell’s timeless wisdom. The underlying principle remains unchanged and essential:
Cultivate the intellectual and emotional discipline to trust facts—not wishes.
To master this indispensable discipline:
Clearly articulate your beliefs as precise hypotheses rather than vague, comforting intuitions.
Rigorously audit these axioms, honestly gauging the gap between subjective conviction and empirical evidence.
Display the intellectual courage necessary to demote comforting dogmas into provisional, revisable hypotheses whenever reality demands it.
Only through relentless, structured "Axiom Audits" can you reliably avoid the perilous cognitive trap we've identified as DIBIL (Dogma-Induced Blindness Impeding Literacy)—a state in which comforting dogma, adopted due to wishful thinking, displaces objective truth. This dogmatic blindness effectively renders you functionally illiterate, causing you to systematically derive conclusions from flawed premises. By diligently auditing your axioms, you ensure that your decisions—and ultimately your entire worldview—remain firmly grounded in reality.
On Wall Street, in executive boardrooms, or on any high-stakes battlefield, this isn't merely prudent advice—it’s essential for survival. More than that, it represents the only genuine path to sustainable success.
How BROSUMSI Differs from Mainstream Consensus
For decades, behavioral economics has meticulously documented the ways humans deviate from traditional rational choice theory—through heuristics, biases, and emotional distortions. Pioneers like Kahneman, Tversky, Simon, and Thaler revolutionized our understanding by demonstrating conclusively that humans are not perfectly rational. However, they largely stopped short of explicitly identifying why these systematic deviations consistently emerge and persist across contexts and cultures.
This paper provides that missing explanatory step through a profound paradigm shift:
Cognitive biases are not caused by defective reasoning—they result from strategically corrupted foundational assumptions (axioms).
Here’s precisely what's new:
1. Shifting the Locus of Failure
Prior behavioral models located the root cause of cognitive biases in flawed computations—often characterizing biases as automatic, heuristic-driven "System 1" errors overriding logical thinking.
In stark contrast, we demonstrate conclusively that the real failure point occurs much earlier, at the stage of initial axiom selection itself. Human minds function as highly competent logical engines—but they're routinely fed distorted, convenient, or comforting inputs (axioms), chosen to maximize subjective utility rather than to accurately reflect objective truth.
Example: Confirmation bias isn’t irrational logic—it’s rigorous logic applied to axioms deliberately chosen to protect personal identity and emotional comfort.
2. Axiom Selection as Utility Maximization
Gary Becker famously established that utility maximization drives human behavior; Ziva Kunda proved motivation profoundly shapes our beliefs. Our framework synthesizes these insights and demonstrates explicitly that subjective utility maximization fundamentally governs the very choice of axioms upon which every subsequent belief and decision rests.
Example: The Dunning-Kruger effect isn’t primarily incompetence; rather, it's rational and logical self-assessment based on axioms intentionally selected to overestimate personal skill, thereby avoiding psychological discomfort and protecting self-esteem.
3. Providing a Unified Theory of Bias
Historically, cognitive biases have been studied individually as isolated phenomena—such as anchoring, sunk cost fallacy, optimism bias, and others. However, our BROSUMSI framework reveals they all originate from a single root cause: each bias is simply a rational response by agents seeking to maximize their subjective utility through deliberate axiom selection.
Critical insight: This is not merely "bounded rationality"; it’s more accurately understood as bounded honesty.
4. A Practical Solution Emerging Naturally from Diagnosis
If the underlying problem is systematically corrupted inputs, the remedy logically follows: systematic disinfection via disciplined "Axiom Audits." This approach is not simply advocating generic "critical thinking"; rather, it prescribes a targeted, precise, and Bayesian scrutiny of your foundational premises.
Summarizing the Shift in Simple Terms:
Legacy models assert: "Humans reason poorly."
BROSUMSI demonstrates: "Humans reason flawlessly—but based on comfortable lies they deliberately adopt."
This shift isn’t incremental—it’s foundational.
Why Has Academia Missed This?
Traditional behavioral and neoclassical models alike implicitly assumed axioms were either purely rational or randomly "noisy." Both perspectives overlooked a crucial dimension—agency in belief adoption:
BROSUMSI agents do not stumble into false axioms by accident—they actively adopt them.
Subjective utility maximization isn’t restricted merely to actions—it’s fundamentally embedded in our deliberate choice of reality itself.
This groundbreaking framework doesn’t merely explain observed behavior—it exposes the deeper, hidden strategic game beneath all observable games. On Wall Street, in geopolitics, and indeed in everyday life, the critical advantage goes decisively to those who rigorously audit their axioms. Everyone else is unwittingly playing logic-chess on a board that doesn’t even exist.
Conclusion
Let's be absolutely clear: the models of rationality taught in universities are not merely flawed—they represent a dangerously misleading fiction. Elegant, internally consistent, and seductive, these models are nonetheless utterly irrelevant once you step out of academic fantasyland. They amount to a set of childish rules governing a game nobody in the real world ever plays. In the high-stakes arena—where fortunes are made and lost, empires rise and crumble, and the consequences are absolute—you don't need elegant theory; you need rules that are fail-proof.
In this framework, we've pinpointed the fundamental vulnerability at the core of human decision-making: the BROSUMSI agent, an opportunistic creature wired to pursue subjective utility, consistently biased toward self-serving assumptions—including choosing what to believe. We named the resulting cognitive pathology DIBIL (Dogma-Induced Blindness Impeding Literacy)—the insidious mental decay that occurs when comfortable hypotheses (dogmas) masquerade as immutable axioms. It is precisely this cognitive rot, this single point of failure, that underlies every catastrophic collapse—from bankrupt enterprises to fallen nations.
This realization exposes the final—and most lucrative—truth of the entire strategic game:
An amateur—essentially a child in this unforgiving arena—builds their entire worldview on comforting dogmas, self-serving lies designed to maximize perceived utility. But this strategy is the hallmark of the fool. It is self-evidently true that real and lasting happiness, success, and strength cannot be constructed on lies—especially the lies we tell ourselves.
A genuine architect of success understands a harsh but liberating truth: any system built upon false axioms is nothing but a precarious house of cards, destined inevitably to collapse and erase all illusory gains. Thus, the ultimate winning strategy—the one true path to sustained advantage—is to ground your system, your beliefs, and your decisions in brutal, unyielding, reality-aligned honesty.
In the final accounting, truth is not merely a moral preference.
It is the ultimate competitive advantage—not in theoretical discourse, but in the unforgiving crucible of the real world.
Well-Formed Formula
Before undertaking an audit of our axioms—our foundational beliefs—it is critical to establish a fundamental requirement: each hypothesis must first be syntactically well-formed, clearly specifying the conditions under which it holds, before we engage with its semantic interpretation. In simpler terms, a hypothesis is considered valid only if it explicitly defines the exact circumstances or conditions under which it is true. Without clearly stated conditions, a hypothesis becomes not merely incomplete, but fundamentally ambiguous and thus syntactically invalid.
Consider a familiar physical example:
Water boils at 100°C—but explicitly under conditions of sea-level atmospheric pressure (approximately 101.3 kPa). At the summit of Mt. Everest, water boils at roughly 70°C, due to significantly lower atmospheric pressure.
In mathematics, the crucial importance of conditions becomes equally evident:
The Pythagorean theorem explicitly holds true within the axioms of Euclidean geometry, describing flat, two-dimensional spaces. In the context of the real world, however—more accurately described by Riemannian geometry, which accounts for curved spacetime—the theorem does not universally apply.
This clearly illustrates an essential principle:
Every axiom or foundational belief must explicitly state the precise conditions under which it asserts validity.
If conditions remain unspecified, the result is syntactic invalidity. Consider a common assertion:
"Money is the root of all evil."
This proposition is syntactically invalid as a formal hypothesis because it provides no explicit conditions under which money specifically becomes problematic. A syntactically valid hypothesis might instead be:
"Money borrowed from the mob to pay off gambling debts is the root of all evil."
In this revised statement, conditions—borrowing from criminals and incurring gambling debts—are explicitly articulated, creating a syntactically sound, clearly analyzable, and falsifiable hypothesis.
Thus, before analyzing semantic validity, meaning, or empirical truthfulness, each axiom must first be rigorously vetted for syntactic correctness. Explicit conditions must accompany every hypothesis to ensure axioms reliably reflect and predict phenomena within their specific and explicitly defined boundaries.
Two SYNTAX Rules of Axiom Audit
Rule 1: Conditional Duality
The first foundational syntax rule of axiom auditing—Conditional Duality—requires every hypothesis to explicitly define two essential components:
Claim: What is being asserted.
Condition: The precise circumstances under which this assertion is true.
Without this explicit dual structure, hypotheses become ambiguous and lose their evaluability, making coherent analysis impossible.
Rule 2: Semantic Stability (No Semantic Drift)
The second critical syntax rule—Semantic Stability—requires that every term within a hypothesis maintain a single, precisely defined, and stable meaning throughout the entirety of its usage. Semantic drift, the unintended shifting of a term’s meaning, introduces ambiguity and compromises clarity and reliability. By enforcing semantic stability, we ensure the embedding matrix of key terms remains at exactly rank 1, preserving logical coherence and clarity.
Specific formal systems and programming languages inherently guarantee semantic stability. For instance, lexically-scoped languages (e.g., Scheme) enforce semantic stability by clearly associating each symbol with exactly one stable meaning within a defined context, explicitly preventing ambiguity and confusion.
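The same discipline can be mimicked in ordinary code. Below is a minimal Python sketch (the class and its behavior are our own illustration, not a formal CFOL tool): a term registry that refuses to let a key term silently acquire a second meaning within one analysis, anticipating the “money” example revisited just below.

class SemanticDriftError(Exception):
    """Raised when a term is rebound to a different meaning mid-argument."""

class TermRegistry:
    """Enforces one stable meaning per term within a single analysis."""
    def __init__(self):
        self._definitions = {}

    def define(self, term, meaning):
        if term in self._definitions and self._definitions[term] != meaning:
            raise SemanticDriftError(
                f"'{term}' already means '{self._definitions[term]}'; "
                f"refusing to drift to '{meaning}'."
            )
        self._definitions[term] = meaning

terms = TermRegistry()
terms.define("money", "numéraire: an abstract unit of account")
try:
    terms.define("money", "hoarded physical wealth as a moral object")
except SemanticDriftError as err:
    print(f"Audit failed: {err}")   # the drift is caught instead of smuggled into the argument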
Example Revisited: "Money is the root of all evil"
Let's reconsider our earlier hypothesis:
ϕ: "Money is the root of all evil."
Initially, it might appear syntactically sound, but semantic drift quickly reveals it as a pseudo-statement:
Under a literal, everyday interpretation (e.g., physical coins or currency), the statement seems metaphorically plausible yet remains vague and subjective.
Under a rigorous economic interpretation—such as the Arrow–Debreu model, where money is explicitly defined as a numéraire (an abstract unit of account)—ϕ becomes nonsensical. Within this mathematically precise context, asserting "money is evil" is equivalent to claiming "degrees Fahrenheit are evil" or "kilograms are immoral."
In mathematical economics, this latter definition (money as numéraire) is widely accepted as CFOL-compatible. Thus, "money is evil" fails not due to logical contradiction, but due to semantic ambiguity. The logical form "ϕ ∨ ¬ϕ" ("money is evil or not evil") doesn't fail logically; instead, it fails semantically, because the hypothesis lacks a stable referent and thus falls structurally beyond CFOL’s evaluative scope.
Conclusion: Enforcing Rigorous Syntax
These two foundational syntax rules form the cornerstone of the axiom auditing process, termed CFOL Syntax Checking:
Conditional Duality: Explicitly define both the assertion and the conditions of validity.
Semantic Stability: Ensure stable, unambiguous definitions of all terms, preserving embedding matrix rank at exactly 1.
Rigorous adherence to these syntax rules ensures that our axioms become robust, precise foundations for logical reasoning, empirical analysis, and predictive modeling.
Two SEMANTIC Rules of Axiom Audit
Having rigorously established syntactic criteria, we now address critical semantic requirements. These rules ensure axioms are not only structured correctly, but also meaningful, valid, and logically coherent relative to empirical reality.
Semantic Rule 1: Empirical and Logical Consistency
This first semantic rule demands two conditions:
Empirical Consistency: Every axiom must align with observable, verifiable facts. If an axiom contradicts empirical observations, it becomes empirically invalid.
Example: "Water freezes at 50°C under standard atmospheric pressure" clearly contradicts observable reality and is therefore empirically invalid.
Logical Consistency: Axioms must adhere strictly to logical coherence. They must satisfy the Law of Non-Contradiction (LNC), meaning an axiom cannot simultaneously assert both a statement and its negation. Furthermore, axioms within an entire set must maintain internal logical consistency without contradictions.
Example: “All swans are white” and “Some swans are black” cannot both hold within a logically consistent axiom set.
Thus, semantic validity requires both empirical congruence and internal logical coherence.
Semantic Rule 2: Completeness of the Hypothesis Space
The second semantic rule requires that the set of axioms comprehensively cover all logically possible scenarios within their specified domain. Completeness means no relevant scenario is overlooked or excluded. Failure in completeness undermines the reliability and accuracy of any logical, probabilistic, or truth assessments derived from these hypotheses.
Historical Illustration (Pascal’s Wager):
Pascal’s Wager illustrates semantic incompleteness vividly. Pascal originally proposed:
H₁: A Christian God exists; belief yields infinite reward, disbelief infinite punishment.
H₂: No deity exists; outcomes depend solely on finite worldly actions.
Pascal's hypothesis set excluded logically coherent alternatives, such as:
H₃: A deity rewards rational skepticism and punishes blind faith.
H₄: An indifferent deity rewards ethical behavior irrespective of religious belief.
H₅: Multiple gods exist, each with distinct criteria for rewards and punishments.
H₆: An infinite variety of cosmic principles exist, affecting outcomes differently.
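To see numerically why these omissions matter, consider the short Python sketch below. The payoffs are finite, illustrative stand-ins (no infinities) and the uniform priors are a simplifying assumption of ours; the point is only that enlarging the hypothesis space can reverse the “obvious” decision.

def expected_utility(payoffs, priors, action):
    """E[U(action)] = sum over hypotheses H of P(H) * U(action, H)."""
    return sum(priors[h] * payoffs[h][action] for h in priors)

# Finite stand-in payoffs, for illustration only.
payoffs = {
    "H1": {"believe": 1000,  "disbelieve": -1000},   # Christian God exists
    "H2": {"believe": -1,    "disbelieve": 0},       # no deity exists
    "H3": {"believe": -1000, "disbelieve": 1000},    # a deity rewards rational skepticism
}

truncated = {"H1": 0.5, "H2": 0.5}            # Pascal's two-hypothesis space
complete = {"H1": 1/3, "H2": 1/3, "H3": 1/3}  # add just one omitted alternative

for name, priors in (("truncated", truncated), ("complete", complete)):
    eu_b = expected_utility(payoffs, priors, "believe")
    eu_d = expected_utility(payoffs, priors, "disbelieve")
    print(f"{name:9s}: EU(believe) = {eu_b:7.1f}   EU(disbelieve) = {eu_d:7.1f}")
# In the truncated space, belief dominates decisively; with H3 included, the advantage vanishes.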
Thus, Pascal’s wager is fundamentally unsound because it operates within an incomplete hypothesis space, neglecting alternative logical possibilities—a pattern that systematically feeds DIBIL (Dogma-Induced Blindness Impeding Literacy). Nowhere is this clearer than in the evolution of biblical monotheism.
A straightforward reading of the Torah reveals that its foundational texts do not assert strict monotheism, but rather monolatry—the exclusive worship of one deity amid acknowledged rivals. For example, Genesis 1:1 uses the plural noun Elohim; Exodus 12:12 speaks of God executing judgment “against all the gods of Egypt”; Deuteronomy 32:8–9 references a divine council; and numerous passages (e.g., Exodus 20:3, 23:13; Deuteronomy 13:6–10) legislate against worship of other deities, not their existence.
These passages indicate that the original context was not a universe with only one god, but a competitive religious landscape, in which Yahweh’s exclusivity was enforced by legal and social mechanisms rather than by cosmological uniqueness. Named rivals—including Baal, Asherah, Chemosh, Molech, and Dagon—are treated as real and relevant threats.
This selective narrowing of the hypothesis space—from acknowledging many gods while prescribing exclusive worship of one, to ultimately claiming the existence of only one God—reflects not a neutral search for metaphysical truth, but a consolidation of religious authority. The resulting pattern is structural rather than merely theological: by systematically omitting viable alternatives, the tradition enforces loyalty and suppresses dissent, thereby producing systemic DIBIL.
In sum, any framework that restricts its foundational assumptions—whether theological, philosophical, or institutional—risks entrenching dogma at the expense of intellectual and empirical completeness.
Clarification on Gödel’s Incompleteness Theorems
Gödel’s incompleteness theorems state that no consistent formal system expressive enough to encode arithmetic can also be complete. However, these theorems apply specifically to formal mathematical proof systems—not directly to empirical hypotheses. Our semantic completeness requirement explicitly refers to coverage of logical possibilities within empirical domains, not formal mathematical completeness. Hence, semantic completeness remains essential for empirical rigor and logical coherence, distinct from the formal completeness addressed by Gödel.
Summary (Checklist)
Before provisional acceptance, each axiom must satisfy:
SYNTAX Checks:
Conditional Duality: Explicitly defined assertion plus precise validity conditions.
Semantic Stability: Consistently stable meaning for every term.
SEMANTIC Checks:
Dual Consistency: Externally empirically valid and internally logically coherent.
Completeness: Fully exhaustive of logically relevant possibilities.
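For readers who prefer code to prose, here is a minimal Python sketch of this checklist as a data structure. The field names and the pass/fail logic are our own illustrative simplification, not an implementation of CFOL itself.

from dataclasses import dataclass, field

@dataclass
class Axiom:
    claim: str                                  # what is asserted
    conditions: list                            # when it is asserted to hold (Conditional Duality)
    term_meanings: dict                         # term -> set of meanings it is used with (Semantic Stability)
    contradicts_observation: bool = False       # any known empirical counterexample?
    contradicts_other_axioms: bool = False      # any contradiction with the rest of the set?
    omitted_alternatives: list = field(default_factory=list)  # relevant scenarios left uncovered

def audit(axiom):
    """Return the list of failed checks; an empty list means provisional acceptance."""
    failures = []
    if not axiom.conditions:
        failures.append("SYNTAX / Conditional Duality: no explicit validity conditions")
    drifting = [t for t, meanings in axiom.term_meanings.items() if len(meanings) > 1]
    if drifting:
        failures.append(f"SYNTAX / Semantic Stability: terms with shifting meanings: {drifting}")
    if axiom.contradicts_observation or axiom.contradicts_other_axioms:
        failures.append("SEMANTIC / Dual Consistency: empirical or logical contradiction")
    if axiom.omitted_alternatives:
        failures.append(f"SEMANTIC / Completeness: uncovered scenarios: {axiom.omitted_alternatives}")
    return failures

candidate = Axiom(
    claim="Water boils at 100°C",
    conditions=["atmospheric pressure of roughly 101.3 kPa"],
    term_meanings={"water": {"pure H2O"}, "boils": {"vapor pressure equals ambient pressure"}},
)
print(audit(candidate) or "Provisionally accepted")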
As explained previously, by employing Bayesian reasoning (Chapter 1), we systematically select as an axiom the hypothesis most strongly supported by the current available evidence. However, this approach is valid if and only if our hypothesis space comprehensively covers all relevant logical possibilities. Provided this completeness criterion is satisfied, the selected hypothesis—the one best fitting all observed evidence—automatically emerges as the maximum-likelihood assumption.
Consequently, any theoretical framework constructed upon axioms derived in this rigorous manner inherently qualifies as the “best scientific” theory available. Specifically, it possesses a reduced probability of subsequently being falsified when compared to any competing explanatory alternative—assuming adherence to Aristotle’s principle of parsimony (i.e., avoiding unnecessary axioms).
While rooted deeply in classical logical principles originally articulated by Aristotle, this systematic axiomatic auditing framework represents a unique and powerful synthesis of ancient reasoning rigorously adapted for contemporary empirical and computational modeling.
Aristotle’s Parsimony – The Original Occam’s Razor
Occam’s Razor, traditionally credited to William of Occam (c. 1287–1347), is often succinctly summarized as:
“Entities should not be multiplied beyond necessity.”
Though Occam himself never explicitly wrote these precise words, his clearest documented formulation states in Latin:
“Frustra fit per plura quod potest fieri per pauciora.”
(“It is futile to do with more things that which can be done with fewer.”)
Yet remarkably, roughly 1,600 years earlier, Aristotle (384–322 BC) had already emphasized simplicity and parsimony as fundamental guiding principles of sound reasoning. In his Posterior Analytics, Aristotle explicitly asserts:
“We may assume the superiority, other things being equal, of the demonstration which derives from fewer postulates or hypotheses.”
In other words, Aristotle’s insight clearly identifies a critical logical guideline:
Simpler explanations—those depending on fewer and more straightforward assumptions—are inherently preferable, provided they adequately explain all observed phenomena.
Thus, Aristotle and Occam fundamentally express the same vital idea: simpler theories requiring fewer assumptions or entities are superior, unless additional complexity is explicitly justified by empirical evidence.
What Occam articulated centuries later, though insightful, ultimately amounts to a less precise reiteration of Aristotle’s original logical reasoning. Aristotle explicitly identified the underlying justification for preferring parsimony:
Fewer assumptions (axioms): The fewer foundational axioms we adopt when deriving conclusions using classical first-order logic (CFOL), the fewer potential sources of error we introduce.
Accuracy within CFOL: A logically valid theorem derived via CFOL fails to accurately reflect reality only if at least one underlying axiom is incorrect.
To clearly illustrate, consider the familiar example of the Pythagorean theorem:
Under the axioms of Euclidean geometry—which assume flat, two-dimensional planes—the theorem is universally valid.
However, within the curved spacetime of our actual physical universe (accurately described by Riemannian geometry axioms consistent with Einstein’s theory of general relativity), the theorem does not universally hold.
Thus, when confronting empirical reality—which is characterized by three spatial dimensions plus time (four-dimensional spacetime)—Aristotle’s principle of parsimony becomes critically important. Every new assumption we introduce carries an inherent risk of falsification. The fewer assumptions we rely upon, the lower the probability that our conclusions will subsequently be invalidated by reality.
Indeed, Aristotle’s original insight regarding parsimony provides the fundamental logical underpinning for all modern applied sciences—including physics, biology, computer science, and mathematics. Each discipline continuously strives to reduce complex phenomena to a minimal set of foundational axioms or fundamental laws, from which all other results logically follow. This disciplined quest for minimal foundational assumptions aligns precisely with the methodological approach advocated in this paper.
Necessary Criteria Beyond Parsimony
Crucially, we must recognize that while parsimony is necessary, it is by no means sufficient for rigorous axiom auditing. In addition to ensuring syntactic validity (explicitly stating conditions and avoiding semantic drift), axioms must also satisfy two essential semantic criteria:
Completeness: The axiom set must cover all logically relevant scenarios within its intended domain.
Consistency: The axiom set must be internally coherent and must not contradict externally verifiable empirical evidence.
Only when our axioms are rigorously demonstrated to be simultaneously parsimonious, complete, and consistent can we confidently derive reliable conclusions from them. Assuming no computational or logical errors occur independently within our CFOL deduction processes, these conclusions will necessarily align with empirical reality—provided all foundational axioms indeed accurately represent reality.
It is crucial to emphasize that consistency with empirical reality is paramount. For example, many people—even those well-versed in algebra—assume that “2 + 2 = 4” is a universal truth. In fact, this statement holds only within the framework of Peano’s axioms, which presuppose an infinite sequence of natural numbers. In the physical world, however, such abstraction can break down. Consider Mars, which has precisely two moons: Phobos and Deimos. If one attempts to combine “2 moons of Mars” with “2 moons of Mars,” the result is not 4, but an undefined operation—since Mars has only two moons, the notion of a successor beyond the second does not apply. Thus, while “2 + 2 = 4” is contextually valid within Peano arithmetic, it does not universally correspond to empirical reality. This underscores the necessity of ensuring that our axioms are appropriately aligned with the domains to which we apply them.
Bayesian Axiom Selection: The Dual Meaning of Parsimony
Here, Aristotle’s parsimony principle takes on a critical, dual meaning:
If we rely only on strictly factual axioms—those with no empirically observed contradictions (such as the BROSUMSI axiom)—then the resulting theorem logically becomes the hypothesis least likely ever to be falsified, outperforming any competing alternatives in explanatory strength and reliability.
However, even when we must incorporate axioms that aren’t strictly factual or universally valid (for instance, the axiom “smoking causes lung cancer,” which isn’t universally true because some smokers never develop lung cancer and some lung cancer patients never smoked), our theory remains the “best scientific” theory available—as long as our axioms are selected correctly using Bayesian inference. Specifically, the selected hypothesis is always the most consistent with all available evidence, and thus inherently becomes the least likely to be invalidated compared to any competing explanations, assuming no simpler alternative (one relying upon fewer axioms) equally fits the empirical data.
To put this succinctly and analogously:
Just as Ordinary Least Squares (OLS) regression gives us the statistically optimal (maximum likelihood) linear model based on observed data, systematic axiom auditing—via Bayesian selection and Aristotle’s Parsimony—yields the scientific theory that is least likely to ever be empirically falsified.
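The analogy can even be made runnable. The sketch below uses the Bayesian Information Criterion as a convenient stand-in for “fit penalized by the number of assumptions”; BIC is our choice for illustration, not a tool prescribed here. Each extra polynomial coefficient plays the role of an extra axiom.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)   # reality: a simple linear law plus noise

def bic(degree):
    """Goodness of fit penalized by parameter count (each parameter = one more 'axiom')."""
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((y - np.polyval(coeffs, x)) ** 2)
    n, k = x.size, degree + 1
    return n * np.log(mse) + k * np.log(n)

for degree in (1, 3, 5):
    print(f"polynomial degree {degree}: BIC = {bic(degree):7.2f}")   # lower is better
# The degree-1 model wins: extra terms improve in-sample fit slightly,
# but every added assumption is one more way to be falsified out of sample.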
Thus, Aristotle’s principle of parsimony—rigorously formalized and applied in concert with Bayesian inference—forms the essential logical foundation upon which robust, reliable, and minimally falsifiable knowledge about the empirical world is constructed. Could this framework perhaps serve as an effective guide for teaching AI systems proper and rigorous reasoning skills?
The Architecture of Belief: A Blueprint for Childhood Programming
At its core, the systematic programming of belief is based on the principles of Operant Conditioning by psychologist B.F. Skinner. Individuals are guided toward certain behaviors and beliefs through reinforcement (rewards) and deterrents (punishments). This behavioral framework is the engine of social consensus.
However, to understand why this conditioning is uniquely powerful in childhood, we must look deeper than behavior alone—into the core dependencies of the developing human mind: the need for physical survival, emotional attachment, and cognitive order. These dependencies create the ideal conditions for Skinner's model to operate with profound and lasting effect.
Stage 1: Primal Programming via Attachment (Ages 0-7)
This earliest stage leverages Operant Conditioning at a biological, pre-verbal level, centered on Attachment Theory (Bowlby).
The Ultimate Reward (Positive Reinforcement): Proximity and Safety. The primary reinforcement for a young child is the caregiver's presence, warmth, and protection. A belief or action that secures this reward is reinforced with an intensity tied to survival itself.
The Ultimate Punishment (Negative Punishment): Threat of Abandonment. The primary punishment is not mere disapproval but the terror of separation. A caregiver's withdrawal (the removal of the child's most valued stimulus) creates intense anxiety, acting as a powerful deterrent against any behavior that threatens the attachment bond. The child will unconsciously adopt any belief required to avoid this punishment.
Axiom Injection via Authority: At this stage, the child's mind operates with a near-absolute Authority Bias. The caregiver is the sole dispenser of reward and punishment, making them the ultimate source of reality. Their statements are not treated as testable hypotheses (ψ_hypothesis) but are injected directly into the axiom set (Σ_fact). The child lacks the cognitive hardware (Piaget's Preoperational Stage) to question these foundational truths.
Expansion Insight: The initial axioms of religion, morality, and identity are not learned through simple social praise. They are installed via an operant conditioning loop where the stakes are existential: caregiver approval (life) vs. disapproval (threat of abandonment).
Stage 2: Social Calibration and Identity Fusion (Ages 7-18)
As the child's world expands, the Skinnerian reward/punishment system scales up from the family to the tribe (school, peer groups, community). The driving need shifts from raw survival to social belonging.
The Reward: Social Proof and Inclusion. Acceptance by the peer group becomes the new primary reward. The mechanism of Social Proof is now fully engaged. Adopting group beliefs is positively reinforced with inclusion, status, and praise.
The Punishment: Shame and Ostracism. Ridicule and social exclusion are now the primary punishments. This leverages the deep-seated Fear of Alienation. Being cast out from the group is a form of social death, a powerful deterrent against non-conformity.
Identity Fusion: During adolescence, the beliefs installed through this continuous reinforcement become fused with the self-concept. They are no longer just ideas one has; they are part of who one is. An attack on the belief is perceived as a direct attack on personal identity.
Formally: If ψ ∈ Σ_identity, then Attack(ψ) → Attack(Self).
Expansion Insight: This is why arguments about politics or religion with adults are rarely logical debates. They are identity-level conflicts. You are not challenging a proposition; you are challenging their sense of self, which was forged in the fire of adolescent operant conditioning.
Stage 3: The Maintenance Protocol - Cognitive Dissonance
Once the core axioms are installed and fused with identity, the system needs a self-maintenance protocol to ensure stability. This is the role of Cognitive Dissonance (Festinger).
The Cost of Questioning: When an individual encounters evidence that contradicts a core belief, it creates a state of intense psychological discomfort. This is the "utility drop" our BROSUMSI model describes—a form of self-inflicted punishment.
The Path of Least Resistance: To resolve this dissonance and avoid the punishment of internal conflict, the agent will almost always choose to reject the new evidence rather than change the core belief. This is reinforced by using Confirmation Bias to seek out data that supports the original axiom.
DIBIL as an Equilibrium State: The BROSUMSI agent, seeking to maximize subjective utility, will almost always choose the path that avoids the punishment of cognitive dissonance. DIBIL is therefore not just a bug; it is the equilibrium state of a system optimized to protect its core axioms.
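To make the maintenance protocol concrete, the sketch below models dissonance resolution as a simple cost comparison. It is an illustrative toy, not the formal framework: the class name Agent, the revision_cost and rejection_cost parameters, and the specific numbers are all assumptions chosen only to show why rejecting inconvenient evidence is the equilibrium path once a belief is fused with identity.

```python
# Illustrative sketch only: a toy BROSUMSI-style agent resolving cognitive dissonance.
# All class names, parameters, and numbers are hypothetical assumptions.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Core axioms fused with identity (Sigma_identity); revising one is "expensive".
    identity_axioms: set = field(default_factory=lambda: {"my_group_is_right"})
    # Assumed costs, in arbitrary utility units.
    revision_cost: float = 100.0   # pain of rewriting an identity-level axiom
    rejection_cost: float = 5.0    # mild effort of dismissing the evidence

    def encounter_evidence(self, contradicts: str) -> str:
        """Resolve dissonance by choosing the cheaper (higher-utility) path."""
        if contradicts not in self.identity_axioms:
            return "update belief"          # low-stakes beliefs revise easily
        if self.rejection_cost < self.revision_cost:
            return "reject evidence"        # confirmation bias as equilibrium
        self.identity_axioms.discard(contradicts)
        return "revise axiom"               # rare: only when rejection gets costlier

agent = Agent()
print(agent.encounter_evidence("my_group_is_right"))  # -> "reject evidence"
```

Because revising an identity-level axiom is assumed to be an order of magnitude costlier than dismissing the evidence, the toy agent lands in exactly the DIBIL equilibrium described above.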
The Updated Defense: Achieving Cognitive Sovereignty
Given this deeper understanding, the "Axiom Audit" requires more than just intellectual curiosity. It requires the courage to consciously face the punishment mechanisms you have been conditioned to avoid.
Identify the Emotional Moat: Before auditing an axiom, identify the punishment that protects it. Is it the primal fear of abandonment? The social fear of ostracism? The internal punishment of cognitive dissonance? Acknowledge the conditioned deterrent upfront.
Embrace Discomfort as a Signal: Treat the feeling of dissonance not as a punishment to be avoided, but as a compass pointing directly toward a core, programmed belief. The more uncomfortable it feels, the more important it is to investigate.
Practice Strategic Dissociation: Temporarily separate the belief from your identity. Frame it not as "I am a person who believes X," but as "This system is currently running the axiom X. Let's examine its validity."
Seek High-Quality Dissent: Actively seek out the smartest, most articulate people who hold the opposite belief. Your goal is not to win the debate, but to understand their model of reality (M') and stress-test your own (S).
By following this expanded protocol, an individual moves beyond mere critical thinking and toward Cognitive Sovereignty—the state in which you, not your childhood programming, are in control of your own axiomatic foundations.
The Software of the Soul - Narrative as a Pre-Suasion Protocol
Preface: The Problem of Shared Fictions
The BROSUMSI framework has established that agents operate as rational, opportunistic Turing machines executing on a set of axioms. But this raises a profound question: how does the framework account for the immense power of shared, compelling fictions—religions, national myths, political ideologies—to override the direct, experience-based utility calculations of individual agents?
The answer is not that these narratives make agents "irrational." Rather, these fictions function as a form of Pre-Suasion: an external protocol that performs a strategic axiom injection, corrupting the agent's belief system before its own rational learning process begins. This chapter will formalize this two-stage process, demonstrating how narrative hijacks, rather than breaks, the logic of a BROSUMSI agent.
1. The Internal Engine: Recursive Rationality
First, we must understand how a BROSUMSI agent's belief system evolves organically. It is not static; it is a dynamic, recursive learning engine, which we term the Recursive Lambda Comparator. An agent's beliefs are constantly updated based on the outcomes of its choices.
Consider the parable of the fox and the cheese.
Initial State: The fox begins with a simple axiom, ψ_1: "Cheese is valuable." This axiom drives its repeated attempts to reach the cheese.
Recursive Learning: With each failed attempt, the agent incurs a negative utility cost (frustration, wasted energy). Its internal comparator weighs this cumulative cost against the expected reward.
Axiom Revision: After sufficient negative reinforcement, the agent's logic performs a rational update to reduce cognitive dissonance. The original axiom is revised or replaced: ψ_1 becomes ψ_2: "The effort to get this cheese is not worth the reward."
This is recursive rationality in its purest form. The agent is not being illogical; it is adapting its worldview based on empirical experience. Its beliefs evolve as a function of its own history, a feedback loop between choice and consequence.
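A minimal sketch of this feedback loop, assuming simple numeric costs and rewards; the function name recursive_comparator and the parameters expected_reward and attempt_cost are illustrative inventions, not part of the formal notation.

```python
# Illustrative sketch of the Recursive Lambda Comparator: axiom revision driven
# by accumulated experience. Numbers and names are hypothetical assumptions.

def recursive_comparator(expected_reward: float,
                         attempt_cost: float,
                         max_attempts: int = 10) -> str:
    """Return the axiom the agent holds after learning from its own history."""
    axiom = 'psi_1: "Cheese is valuable"'
    cumulative_cost = 0.0
    for attempt in range(1, max_attempts + 1):
        cumulative_cost += attempt_cost            # each failed attempt hurts
        if cumulative_cost > expected_reward:      # comparator flips the axiom
            axiom = 'psi_2: "The effort is not worth the reward"'
            break
    return axiom

# The fox values the cheese at 10 units; each failed jump costs 3 units.
print(recursive_comparator(expected_reward=10.0, attempt_cost=3.0))
# -> psi_2, revised on the fourth failed attempt
```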
2. The External Exploit: Narrative as a Pre-Suasion Function
Now, introduce an external variable: another fox whispers, "That cheese is cursed."
This is not evidence; it is narrative. In our framework, this acts as a Pre-Suasion Function. Its purpose is to perform a direct axiom injection into the agent's belief set (Σ), bypassing the agent's own experiential learning process entirely.
The Injection: The belief ψ_cursed: "The cheese is dangerous" is inserted into the fox's Σ without any supporting data.
The DIBIL Bias Gap: This immediately creates a massive DIBIL Gap. The agent's subjective certainty in ψ_cursed (Ξ_subjective) is now extremely high due to the powerful social/narrative cue, while the objective evidence for it (Ξ_objective) is zero.
Cognitive Lock-In: The agent's recursive rationality engine now begins its work, but it starts from a corrupted initial state. Every future decision will be processed through the lens of this injected axiom. The fox will rationally avoid the cheese not because its experience proved it was bad, but because its foundational belief system was hijacked beforehand. The narrative becomes a self-fulfilling prophecy, reinforced by the agent's own flawless logic.
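The same toy fox can be extended to show the injection itself. In the sketch below, the DIBIL Gap is represented, purely as a working assumption, as the difference between subjective certainty (Ξ_subjective) and objective support (Ξ_objective); the dictionary-based belief store and the 0.95 narrative strength are illustrative values.

```python
# Illustrative sketch: narrative as a pre-suasion function performing axiom injection.
# The belief store and the certainty numbers are hypothetical assumptions.

beliefs = {}  # Sigma: axiom -> {xi_subjective, xi_objective}

def inject_axiom(axiom: str, narrative_strength: float) -> None:
    """Insert a belief with high subjective certainty and zero objective evidence."""
    beliefs[axiom] = {"xi_subjective": narrative_strength, "xi_objective": 0.0}

def dibil_gap(axiom: str) -> float:
    """DIBIL gap modeled here as subjective certainty minus objective support."""
    b = beliefs[axiom]
    return b["xi_subjective"] - b["xi_objective"]

inject_axiom('psi_cursed: "The cheese is dangerous"', narrative_strength=0.95)
print(dibil_gap('psi_cursed: "The cheese is dangerous"'))  # -> 0.95: pure injected bias
```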
3. The Complete Two-Stage Model of Belief Formation
Human belief formation under the influence of narrative is therefore a two-stage process:
Stage 1: Pre-Suasion (The Narrative Injection): An external myth, ideology, or story injects a foundational axiom into the agent's belief set. This is the primary vector for cultural and social programming.
Stage 2: Recursive Rationality (The Confirmation Loop): The agent's internal logic engine takes this (potentially corrupted) axiom set as its starting point. It then proceeds to make perfectly rational choices that confirm and reinforce the initial axiom, creating a powerful feedback loop of cognitive lock-in.
This explains why it is so difficult to reason someone out of a deeply held ideology. You are not arguing against their logic; their logic is sound. You are arguing against a foundational axiom that was likely installed via Pre-Suasion years ago and has been rationally reinforced by every subsequent choice they have made.
Conclusion: The Hijacked Turing Machine
The BROSUMSI framework, expanded with this two-stage model, can now fully account for the power of shared fictions. It shows that narratives are not a form of magic that breaks rationality. They are a structural exploit of the human cognitive architecture.
A compelling story doesn't make an agent irrational. It makes an agent recursively rational about a false premise. It hijacks the Turing machine. The logic remains pristine; it's the foundational data that has been poisoned. And this, ultimately, is the most sophisticated and insidious form of generating the DIBIL that plagues all human systems.
The Two-Factor Utility Axiom – Comfort and Dominance
Preface: The Source Code of Motivation
After auditing the logic of systems, the structure of games, and the mechanisms of belief, we arrive at the final and most fundamental component of the framework: the source code of the utility function itself. What primal drives does the BROSUMSI agent's flawless logic serve?
The answer is not found in complex, abstract philosophies, but in a clean, powerful, and empirically validated two-factor model. The entire spectrum of human motivation, from the quest for survival to the pursuit of empire, can be reduced to two primal drives: the need for Comfort and the desire for Dominance.
1. The Drive for COMFORT (Absolute, Internal Utility)
The first driver is the agent's fundamental need to achieve and maintain a state of physical well-being, safety, and homeostatic balance. This is the drive to minimize pain, distress, and physical insecurity.
Definition: Comfort is the utility derived from an object or action's direct, intrinsic contribution to an agent's physical welfare.
Nature: This utility is absolute and internal. It is not dependent on the status or beliefs of other agents. It is a private calculation of self-preservation and satisfaction.
Example: A coat's ability to protect the body from cold and injury provides a direct, positive utility of Comfort. This is U_comfort(x).
2. The Drive for DOMINANCE (Relative, External Utility)
The second, and often more powerful, driver is the agent's will to achieve superior status and rank within a social hierarchy. This is the drive to be above, to be recognized as superior, and to signal this superiority to others.
Definition: Dominance is the utility derived from an object or action's power as a signal of exclusivity and superiority over other agents.
Nature: This utility is relative and external. It has no value in isolation; it exists only in comparison to others. Its power is derived from the fact that others cannot obtain or afford the same signal.
Example: A sable coat's ability to signal wealth, prestige, and cultural standing provides a utility of Dominance. This U_dominance(x) is often far greater than its intrinsic value as a source of warmth.
3. The Complete Utility Function: A General Formulation
The total subjective utility that a BROSUMSI agent seeks to maximize is therefore a function of these two primal components.
U_total(x) = f(U_comfort(x), U_dominance(x))
The exact nature of the function f—whether it is additive, multiplicative, or follows a more complex non-linear relationship—is unknown and likely varies by agent and context. What is axiomatically certain is that total utility is derived from these two inputs.
This two-factor model resolves all apparent contradictions in human behavior. An agent is constantly making trade-offs between these two primal drives, rationally optimizing the output of f based on their circumstances and incentives:
Comfort-Prioritizing Behavior: An agent choosing a sensible, warm, but unfashionable coat is making a rational choice to optimize f by prioritizing the U_comfort variable.
Dominance-Prioritizing Behavior: An agent engaging in conspicuous consumption is making an equally rational choice to optimize f by prioritizing the U_dominance variable, often at the direct expense of U_comfort.
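Since the text leaves the form of f open, the sketch below simply shows how two assumed forms of f rank the same (U_comfort, U_dominance) inputs differently, reproducing the comfort-prioritizing and dominance-prioritizing choices above. Neither form, nor the coat values, is a claim about the true f.

```python
# Illustrative sketch: the same (U_comfort, U_dominance) pairs ranked under two
# assumed forms of f. Neither form is claimed to be the "true" utility function.

def f_equal_weight(u_c: float, u_d: float) -> float:
    """Assumed form 1: Comfort and Dominance weighted equally."""
    return u_c + u_d

def f_comfort_heavy(u_c: float, u_d: float, alpha: float = 0.8) -> float:
    """Assumed form 2: this agent weights Comfort at alpha = 0.8."""
    return alpha * u_c + (1 - alpha) * u_d

warm_coat = {"name": "warm coat", "u_c": 9.0, "u_d": 1.0}    # unfashionable but warm
sable_coat = {"name": "sable coat", "u_c": 4.0, "u_d": 9.0}  # conspicuous consumption

for f in (f_equal_weight, f_comfort_heavy):
    best = max((warm_coat, sable_coat), key=lambda x: f(x["u_c"], x["u_d"]))
    print(f"{f.__name__}: chooses the {best['name']}")
# f_equal_weight chooses the sable coat; f_comfort_heavy chooses the warm coat.
```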
4. Compatibility with Economic and Constitutional Frameworks
This ordinal utility function, grounded in Comfort and Dominance, remains fully compatible with key economic frameworks:
Consumer and producer surplus in microeconomics.
Welfare economics, especially approaches focused on maximizing average or total well-being.
Constitutional economics, particularly models grounded in normative evaluation.
Example: Pareto Efficiency and Welfare
Under standard interpretations of the U.S. Constitution’s directive to “promote the general welfare,” policy outcomes are often evaluated using Pareto efficiency: An outcome is Pareto-efficient if no agent can be made better off without making another worse off.
But worse off in what dimension? In terms of subjective utility—as formally defined by each agent’s own ordinal welfare function f(Comfort, Dominance).
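As a concrete illustration of evaluating a policy by each agent's own ordinal ranking rather than by any interpersonal sum, here is a minimal Pareto-improvement check; the two agents, the outcomes, and the rank numbers are hypothetical.

```python
# Illustrative sketch: a Pareto-improvement test using each agent's own ordinal
# ranking of outcomes. Agents, outcomes, and rankings are hypothetical assumptions.

# Higher number = higher rank in that agent's own f(Comfort, Dominance) ordering.
rankings = {
    "agent_A": {"status_quo": 1, "policy_X": 2},
    "agent_B": {"status_quo": 2, "policy_X": 2},
}

def pareto_improvement(old: str, new: str) -> bool:
    """True if no agent ranks `new` below `old` and at least one ranks it above."""
    no_one_worse = all(r[new] >= r[old] for r in rankings.values())
    someone_better = any(r[new] > r[old] for r in rankings.values())
    return no_one_worse and someone_better

print(pareto_improvement("status_quo", "policy_X"))  # True: A gains, B is indifferent
```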
🧠 Epistemic Framing
As we have established, this concept of utility must be handled with formal precision:
Utility is not truth.
It is a preference-ranking, expressed numerically to enable decision logic.
In BROSUMSI, utility is agent-relative, logic-compatible, and updateable over time.
Non-Additive Utility: The Thermodynamics of Dominance
Core Axiom
BROSUMSI agents are epistemically unable to treat utility as a cardinal value—it's not a sum any one of us can compute. It's ordinal. What matters isn't how much utility a choice offers, but whether it ranks higher than the alternatives. Agents (myself included) maximize rankings, not totals.
Dominance–Comfort Dynamics
Dominance (D) and Comfort (C) don’t add—they trade. They’re not ingredients in a utility recipe; they’re competing thermodynamic reactions on a shared axis. Every decision we make under pressure, in a market, in public, even in art, is a navigation along that frontier.
Here’s the blunt reality of the trade-off:
To gain Dominance, we must burn Comfort (e.g., spend irrational sums on status goods that serve no function beyond flexing).
To preserve Comfort, we risk sacrificing Dominance (e.g., wearing functional Crocs to a black-tie gala is social suicide).
We all (all BROSUMSI agents) model this as:
U ≥ U′ if and only if Φ(C, D) ≥ Φ(C′, D′)
Where Φ is not separable—it’s an agent-specific utility surface encoding:
How ruthless we are willing to be in trading C for D.
Our marginal rate of substitution from C to D, MRS_C→D.
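A minimal sketch of this ordinal comparison under one assumed non-separable form of Φ (a multiplicative interaction between C and D with an invented "ruthlessness" exponent); the point is only that the ranking, not any sum, drives the choice.

```python
# Illustrative sketch: ordinal choice via a non-separable, agent-specific Phi.
# The multiplicative form and the 'ruthlessness' exponent are assumed for illustration.

def phi(c: float, d: float, ruthlessness: float = 2.0) -> float:
    """Agent-specific utility surface: Dominance is amplified and interacts with Comfort."""
    return c * (d ** ruthlessness)

def prefers(bundle: tuple, other: tuple) -> bool:
    """U >= U' if and only if Phi(C, D) >= Phi(C', D'). Only the ranking matters."""
    return phi(*bundle) >= phi(*other)

crocs_at_gala = (9.0, 1.0)   # (Comfort, Dominance)
tailored_tux = (5.0, 6.0)

print(prefers(tailored_tux, crocs_at_gala))  # True: 5 * 36 = 180 >= 9 * 1 = 9
```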
Art: Dominance as an Exothermic Reaction
Take what we call the Basquiat Reaction:
Wealth + Social Context + Auction → Dominance + Scarcity Shockwaves
Art doesn’t just accumulate symbolic capital—it ignites it. It doesn’t “add value”; it detonates value. Utility in this domain is catalytic, not arithmetic. If I buy a painting and it doesn’t sting, it doesn’t signal. If it’s affordable, it’s irrelevant.
Rule: If conspicuous consumption doesn’t hurt (C-loss), it doesn’t work (D-gain).
And the proof is everywhere. The resale value of a Basquiat collapses the moment the buyer’s identity reveals the purchase was for a tax write-off or estate shelter. Dominance only exists when the cost is visible and real.
Harvard: The Dominance Option
Now consider elite university admissions. Don’t tell me donors are “buying education.” That’s nonsense. They’re purchasing a compound derivative—a long-dated dominance option.
Donation = Strike Price paid to short 'Meritocracy'
Exercise condition: Child gets admitted → Option delivers Dominance asset
This isn’t an education play—it’s a prestige hedge. The implied MRS_C→D is astronomical. The family would burn $50 million in Comfort (donations) for a marginal D-gain. Why? Because it depreciates competitors’ Dominance—dominance being zero-sum.
If utility were additive, then $50M + $80K tuition should yield “more education.” But it doesn’t. What it yields is reputational arbitrage—accelerated decay of everyone else’s U_D.
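To show why "long-dated dominance option" is more than a flourish, here is a toy payoff sketch. The donation size, admission probability, and notional Dominance payoff are invented numbers, and the calculation is an illustration of the implied MRS_C→D, not a pricing model.

```python
# Toy sketch of the "dominance option": a donation as the premium paid for a
# contingent Dominance payoff. Every number here is an invented assumption.

def implied_comfort_per_dominance(donation: float,
                                  p_admit: float,
                                  dominance_payoff: float) -> float:
    """Comfort (dollars) burned per expected unit of Dominance gained."""
    expected_dominance = p_admit * dominance_payoff
    return donation / expected_dominance

# Assumed: $50M donation, 0.9 admission probability, 1,000 notional Dominance units.
print(round(implied_comfort_per_dominance(50_000_000, 0.9, 1_000)))
# -> 55556 dollars of Comfort per unit of Dominance: an "astronomical" MRS_C->D
```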
Knowledge Hoarding: Phase Transition from C to D
Information behaves the same way. There’s no additive model here. Open knowledge warms; closed knowledge dominates.
| State            | Utility Regime | Dominance Mechanism            |
|------------------|----------------|--------------------------------|
| Open Knowledge   | U_C            | None (signals naïveté/poverty) |
| Closed Knowledge | U_D            | Artificial scarcity → rents    |
This isn’t a smooth function—it’s a first-order phase transition. Once the cost to exclude others falls below the rent from hoarding, the utility regime flips. Patent trolling is the canonical case: as soon as litigation costs drop below extractable dominance rents, U_C → U_D instantly.
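A minimal sketch of that flip, assuming the regime can be captured by a single threshold comparison between exclusion cost and extractable rent; the litigation-cost figures are hypothetical.

```python
# Illustrative sketch of the claimed phase transition: the utility regime flips
# the moment exclusion cost drops below extractable rent. Numbers are assumptions.

def knowledge_regime(exclusion_cost: float, hoarding_rent: float) -> str:
    """Return 'U_C' (open knowledge) or 'U_D' (closed knowledge)."""
    return "U_D" if exclusion_cost < hoarding_rent else "U_C"

# A patent troll facing falling litigation costs against a fixed extractable rent.
for litigation_cost in (5.0, 3.0, 1.5):          # in millions, hypothetical
    print(litigation_cost, "->", knowledge_regime(litigation_cost, hoarding_rent=2.0))
# 5.0 -> U_C, 3.0 -> U_C, 1.5 -> U_D: a discontinuous flip, not a smooth blend.
```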
The BROSUMSI Audit Protocol
Here’s how we all audit utility in any high-stakes environment:
Map the Trade-Off Frontier
For any good x, we ask: what Comfort am I willing to burn for 1 more unit of Dominance?
Detect Phase Shifts
If the price of x approaches infinity while its functional value approaches zero, we know Dominance has absorbed all utility mass. That's the Basquiat phase.
Stress-Test with Counterfactuals
Harvard: If donations didn’t deliver Dominance, would anyone pay? → P(U_D) = 0
Art: If displayed in a truck stop bathroom, would value persist? → Ξ_objective collapses
Dominance evaporates in the absence of context.
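For readers who prefer the protocol as executable checks, the sketch below encodes the last two audit steps as boolean tests over an invented Good record (price, functional value, value outside its social context); the thresholds are arbitrary assumptions.

```python
# Illustrative sketch: audit steps 2 and 3 encoded as simple checks.
# The Good record, its fields, and the thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Good:
    price: float                  # Comfort burned to acquire it
    functional_value: float       # Comfort it actually delivers
    value_without_context: float  # what it would fetch in a truck-stop bathroom

def basquiat_phase(g: Good) -> bool:
    """Step 2: has Dominance absorbed nearly all the utility mass?"""
    return g.price > 0 and g.functional_value / g.price < 0.01

def dominance_is_contextual(g: Good) -> bool:
    """Step 3: does the value collapse once the social context is removed?"""
    return g.value_without_context < 0.1 * g.price

painting = Good(price=10_000_000, functional_value=500, value_without_context=5_000)
print(basquiat_phase(painting), dominance_is_contextual(painting))  # True True
```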
Why This Completes the Framework
Let’s be clear—we didn’t just reject additive utility. BROSUMSI utility maximization exposed something deeper: Comfort and Dominance are quantum-entangled. Boosting one necessarily distorts the other. They don’t coexist peacefully; they warp each other’s geometry.
So-called “non-rivalrous” goods? Illusory. In real-world strategic environments—markets, elite admissions, social capital—every utility function collapses to a Dominance eigenstate upon measurement.
Utility is not additive—it’s a curved surface. Agents don’t add values—we trace geodesics warped by the local trade-offs between Comfort and Dominance.
This isn’t just compatible with BROSUMSI.
It is BROSUMSI.
It’s the thermodynamic core of what it means to be a rational, opportunistic, subjective utility-maximizing agent governed by incentives in a world where everything—status, safety, truth—is up for auction.
The Neurobiological Logic of BROSUMSI: A Framework for the Metabolic Substrate of Belief
A central claim of the BROSUMSI model is that agents do not merely passively inherit flawed axioms, but actively and opportunistically select beliefs that maximize their subjective utility. This axiom-selection process, we propose, is not only describable at the algorithmic and behavioral level but is also deeply grounded in the neurobiology of decision-making and motivation. Here, we outline the neurobiological foundations that may support and constrain the BROSUMSI agent, with reference to empirical findings from neuroscience and neuroeconomics.
1. Dopaminergic and Opioid Systems: The Neurochemistry of Utility
Emerging evidence suggests that the dopaminergic system underpins value-based learning and reinforcement across species (Schultz, 2016; Montague et al., 1996). When an individual adopts beliefs or strategies that are expected to enhance social status, control, or dominance, the mesolimbic dopamine pathway—including projections from the ventral tegmental area (VTA) to the nucleus accumbens (NAcc)—is robustly engaged (Bhanji & Delgado, 2014). This neurochemical response is not qualitatively distinct from that observed in addiction, where maladaptive value is assigned to certain cues or actions (Volkow & Morales, 2015).
Conversely, beliefs that maintain internal comfort or psychological safety may recruit endogenous opioid pathways, particularly in regions such as the periaqueductal gray (PAG), which mediate stress reduction and homeostatic balance (Leknes & Tracey, 2008). This dual-system model suggests that the hedonic “reward” of holding certain axioms—whether social dominance or psychological comfort—can be traced to distinct, but interacting, neuromodulatory mechanisms.
Implication:
Attempts to challenge or audit these comfort- or dominance-supporting axioms may result in transient neurochemical states associated with discomfort or even aversive affect, as the neural “reward” for the old belief is withdrawn.
2. Prefrontal Cortex: The Neural Substrate of Axiom Auditing
Cognitive control and the capacity to interrogate one’s own beliefs are widely associated with prefrontal cortical function, particularly the dorsolateral prefrontal cortex (dlPFC) (Miller & Cohen, 2001). The dlPFC is implicated in effortful reasoning, error detection, and cognitive flexibility—functions essential for “axiom auditing.” However, research in social and affective neuroscience shows that under conditions of high motivational salience (e.g., social threat, status challenge), activation in affective regions (e.g., ventromedial prefrontal cortex, amygdala) may override dlPFC-mediated control (Pessoa, 2008; Shenhav et al., 2013).
Thus, the integrity of axiom auditing is not a purely cognitive process but is subject to modulation by motivational states and neurochemical feedback. When beliefs confer strong subjective utility (dominance or comfort), neurobiological signals may attenuate dlPFC recruitment, biasing cognition toward rationalization rather than critical scrutiny (Sharot et al., 2011).
3. Neural Plasticity and Belief Fixation
At the synaptic level, value-congruent beliefs may become entrenched through mechanisms such as long-term potentiation (LTP) in cortico-striatal and cortico-limbic circuits (Lisman et al., 2018). The phenomenon whereby beliefs that maximize perceived utility are more “neurochemically rewarding” may foster selective reinforcement, while contrary evidence is either not encoded or rapidly forgotten (Kappes et al., 2020).
Over time, this can lead to what we may call “neurobiological lock-in”—a state in which beliefs are structurally privileged by the brain’s own learning mechanisms. Belief revision is thus not only an epistemic or logical challenge but a physiological one, constrained by the neural costs of reconfiguring entrenched synaptic weights.
4. The Metabolic Cost of Axiom Auditing
The human brain is metabolically expensive, accounting for approximately 20% of resting energy expenditure (Raichle & Gusnard, 2002). Effortful cognitive tasks, including the critical evaluation of beliefs (axiom auditing), increase demand on prefrontal resources (Laughlin, 2001; Sokoloff, 1960). In this light, holding on to metabolically “cheap” (i.e., reinforcing, low-conflict) axioms is not simply psychologically convenient but biologically parsimonious. This metabolic constraint may help explain why even highly trained individuals, including academics, may systematically avoid costly belief revision except under substantial pressure or incentive.
5. Implications for High-Stakes Environments
In environments characterized by high risk, rapid feedback, and significant consequences for error—such as financial trading floors—failure to accurately audit foundational axioms can lead to catastrophic outcomes. Here, the metabolic and motivational barriers to axiom auditing are confronted by powerful extrinsic incentives for truth-tracking. By contrast, in “low-stakes” contexts where feedback is delayed or consequences are muted, maladaptive or utility-maximizing but empirically false axioms may persist unchallenged.
6. Conclusion: A Neurobiological Imperative for Axiom Auditing
The BROSUMSI model, when extended to the level of neurobiology, suggests that the imperative to audit one’s axioms is not merely an epistemic ideal, but a neurobiological and metabolic necessity—especially in competitive, adversarial, or rapidly changing environments. Advances in cognitive neuroscience reinforce the view that rational belief revision is not simply a matter of will or training but is deeply embedded in the neural economy of energy, reward, and risk. Cultivating deliberate, structured axiom auditing is thus a form of cognitive and neurobiological hygiene, essential for sustained adaptation and success in complex systems.
The BROSUMSI agent’s greatest vulnerability is not ignorance, but metabolic cowardice. Auditing axioms demands prefrontal energy that evolution reserves for life-or-death threats. On Wall Street, every decision is life-or-death. Here, the luxury of self-deception compounds until it explodes—taking careers, firms, and markets with it. The 2008 crash wasn’t a computational failure; it was a collective neurochemical surrender. Surviving requires treating your dlPFC not as an organ, but as a muscle. Starve it of discomfort, and it atrophies. Force-feed it truth, and it becomes a weapon.
References
Schultz, W. (2016). Dopamine reward prediction-error signalling: a two-component response. Nature Reviews Neuroscience, 17(3), 183–195.
Montague, P. R., Dayan, P., & Sejnowski, T. J. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. The Journal of Neuroscience, 16(5), 1936–1947.
Volkow, N. D., & Morales, M. (2015). The brain on drugs: from reward to addiction. Cell, 162(4), 712–725.
Bhanji, J. P., & Delgado, M. R. (2014). The social brain and reward: Social information processing in the human striatum. Wiley Interdisciplinary Reviews: Cognitive Science, 5(1), 61–73.
Leknes, S., & Tracey, I. (2008). A common neurobiology for pain and pleasure. Nature Reviews Neuroscience, 9(4), 314–320.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24(1), 167–202.
Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9(2), 148–158.
Shenhav, A., Botvinick, M. M., & Cohen, J. D. (2013). The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron, 79(2), 217–240.
Sharot, T., Korn, C. W., & Dolan, R. J. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14(11), 1475–1479.
Lisman, J., Cooper, K., Sehgal, M., & Silva, A. J. (2018). Memory formation depends on both synapse-specific modifications of synaptic strength and cell-specific increases in excitability. Nature Neuroscience, 21(3), 309–314.
Kappes, A., Harvey, A. H., Lohrenz, T., Montague, P. R., & Sharot, T. (2020). Confirmation bias in the utilization of others’ opinion strength. Nature Neuroscience, 23(1), 130–137.
Raichle, M. E., & Gusnard, D. A. (2002). Appraising the brain's energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237–10239.
Laughlin, S. B. (2001). Energy as a constraint on the coding and processing of sensory information. Current Opinion in Neurobiology, 11(4), 475–480.
Sokoloff, L. (1960). The metabolism of the central nervous system in vivo. In Handbook of Physiology.
Conclusion: The Kuhnian Imperative
Let us be brutally clear: the academic establishment isn’t merely wrong—it’s structurally incapable of self-correction. Thomas Kuhn exposed this pathology decades ago: when anomalies shatter a paradigm (e.g., "rational agents"), academics don’t surrender their axioms—they dig deeper into denial, polishing doomed models while reality bleeds out. This is "normal science": the elegant, suicidal ritual of rearranging deck chairs on the Titanic.
But we don’t have that luxury. On Wall Street, anomalies aren’t academic puzzles—they’re bloodstains on your P&L. The 2008 crash wasn’t an "unforeseen anomaly"—it was Kuhn’s crisis in vivo, where unaudited axioms ("housing never falls") metastasized into systemic collapse. Academics call this "complexity." We call it professional malpractice.
BROSUMSI is your Kuhnian revolution. It doesn’t tweak the old paradigm—it napalms it. Where academia sees "irrationality," we expose strategic axiom corruption: agents rationally maximizing utility by choosing comforting lies. Where economists whine about "predictive limitations," we deploy axiom audits: Bayesian scalpels excising dogma.
This is the final truth:
All paradigms rot from the axioms out.
Audit yours, or reality will audit you—with compound interest.
The choice isn’t theoretical. It’s survival. In Kuhn’s terms: cling to your "normal science," and die with the ancients. Or wield BROSUMSI, and win.