VC Letter – TNT-Bank Money Blockchain and Smart Contract Protocol
By Joseph and Nathan Haykov
July 22, 2025
What Is a Monopoly?
The classical definition of a natural monopoly in economics states that a firm is a natural monopoly if its cost function is subadditive—that is, a single producer can supply total market demand more efficiently than multiple competing firms.
But when applied to real-world infrastructure and markets, this concept requires a more precise operationalization. We propose a formal dual definition based on two jointly necessary and sufficient conditions:
Formal Dual Definition of Natural Monopoly
An industry constitutes a natural monopoly if and only if:
Merging Condition:
Consolidating all incumbent firms—under optimal post-merger production—reduces total cost:
C(Q) < ∑ᵢ₌₁ⁿ C(qᵢ)
where Q = ∑ᵢ₌₁ⁿ qᵢ is the total market output.
Stability Condition:
No subdivision of the merged entity (into any number of firms, under any output allocation) results in lower total cost:
C(Q) ≤ ∑ₖ₌₁ᵐ C(q̃ₖ), ∀m ≥ 2, ∀(q̃₁, ..., q̃ₘ) ∈ P(Q)
where P(Q) represents all feasible partitions of output Q.
Together, these two conditions establish what we term global subadditivity—the gold standard for verifying natural monopoly status in both theory and practice.
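As a sketch, the two conditions can be checked numerically for any candidate cost function. The function below is a minimal illustration (the cost curve and the partition list are hypothetical, not from this letter); note that Condition 2 quantifies over all feasible partitions, so a finite sample of partitions can only refute stability, never fully confirm it.

```python
def is_natural_monopoly(C, firm_outputs, partitions):
    """Check both halves of the dual definition for a cost function C.

    C            -- cost function mapping an output level to total cost
    firm_outputs -- observed outputs q_1..q_n of the incumbent firms
    partitions   -- candidate subdivisions of total output Q to test
                    against the stability condition (a finite sample)
    """
    Q = sum(firm_outputs)
    # Condition 1 (merging): a single producer beats the status quo.
    merging = C(Q) < sum(C(q) for q in firm_outputs)
    # Condition 2 (stability): no tested subdivision does strictly better.
    stability = all(C(Q) <= sum(C(q) for q in p) for p in partitions)
    return merging and stability

# Hypothetical cost curve: a large fixed cost plus constant marginal cost.
C = lambda q: 100 + 2 * q if q > 0 else 0

firms = [30, 30, 40]                                 # status quo, Q = 100
splits = [(50, 50), (30, 30, 40), (25, 25, 25, 25)]  # sampled partitions
print(is_natural_monopoly(C, firms, splits))         # prints True
```

With a purely linear cost function (no fixed cost), the merging condition fails and the same check returns False, matching the intuition that scale economies drive the result.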
Key Advantages of This Dual Definition
Policy Readiness:
Regulators can first check if merging existing firms would yield cost savings (as in a potential reversal of the AT&T breakup), then test whether any subsequent partition (e.g., regional carve-outs) might achieve even lower costs.
Practical Focus:
The approach considers actual, feasible industry structures rather than purely hypothetical partitions.
Dynamic Stability:
This ensures that a monopoly will not be destabilized by future competitive entry or market fragmentation.
Case Study: U.S. Natural Gas Distribution
Pre-Structure: 8 regional suppliers
Total cost (fragmented): $2.1B/year
Post-Merger: Single optimized network
Cost = $1.4B/year → Condition 1 satisfied (33% savings).
Subdivision Tests:
Duopoly (North/South split): Cost = $1.7B
Oligopoly (3 firms by consumer density): Cost = $1.6B
Specialized entrants (e.g., LNG vs. pipeline): Cost = $1.8B
→ Condition 2 satisfied → Regulated monopoly upheld.
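The case-study arithmetic above can be verified directly. A small Python check, using only the figures quoted in the text:

```python
# Reproduce the natural-gas case-study arithmetic (all figures in $B/year,
# taken directly from the case study above).
fragmented   = 2.1                                  # 8 regional suppliers
merged       = 1.4                                  # single optimized network
subdivisions = {"duopoly": 1.7, "oligopoly": 1.6, "specialized": 1.8}

assert merged < fragmented                              # Condition 1 holds
assert all(merged <= c for c in subdivisions.values())  # Condition 2 holds
print(f"merger savings: {(fragmented - merged) / fragmented:.0%}")  # prints 33%
```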
Theoretical Advantages Over Global Subadditivity
Avoids the "Hypothetical Partition" Problem:
Focuses on actionable industry structures rather than all mathematically possible partitions.
Dynamic Consistency:
Embeds sustainability (no profitable entry) and contestability (no frictionless undercutting).
Regulatory Innovation:
Mirrors the FCC's "Public Interest Test" for mergers:
Step 1: Prove merger-specific efficiencies.
Step 2: Prove no less-restrictive alternative exists.
Key Academic References Supporting this Framework
Baumol et al. (1982) – Contestable Markets
Emphasizes sustainability as a condition requiring that no entrant can achieve a lower cost structure.
Williamson (1968) – Economies of Scale, Entry, and Public Policy
Clarifies that merger-related efficiencies must be evaluated net of reorganization costs.
Sharkey (1982) – The Theory of Natural Monopoly
Confirms that subadditivity must hold across the relevant industry output range.
Under the Arrow–Debreu framework, an industry constitutes a natural monopoly when two sequential conditions are met: (1) consolidating all incumbent firms significantly reduces total costs after optimal restructuring, and (2) no feasible subdivision of this monopoly can replicate or surpass this cost advantage. This dual definition—grounded in actual, observable restructuring rather than theoretical cost functions alone—ensures both static efficiency and dynamic stability. Thus, it resolves practical policy limitations inherent in definitions relying solely on subadditivity.
The dual-test framework presented here—requiring merger-specific efficiencies and subdivision stability—redefines natural monopoly as a dynamically enforceable market structure. It improves upon traditional subadditivity approaches by anchoring theory firmly to observable firm behavior, bridging Arrow–Debreu’s gap between mathematical precision and policy relevance. Regulators now have a clear, two-step process: first, confirm that consolidation improves efficiency; second, validate that no better fragmentation alternative exists.
This refined approach equips policymakers with a practical standard to regulate natural monopolies – but theoretical elegance doesn’t pay bills. We’re here to make money, not polish academic pedestals. The fatal flaw in economics' standard definition – labeling firms 'natural monopolies' based solely on subadditive cost functions – is that it mistakes spreadsheet fantasies for steel-and-concrete reality. By reducing a fundamental engineering constraint to a mathematical curiosity, it permits the absurdity of declaring a firm a 'natural monopoly' when no entity can physically serve full market demand.
Redefining Natural Monopoly: An Infrastructure-Based Framework
The conventional economic definition of a natural monopoly—predicated on the subadditivity of cost functions—rests on theoretical abstractions often disconnected from physical and economic reality. We propose a paradigm shift: a true natural monopoly does not emerge from the mathematical shape of a cost curve, but from demonstrable infrastructural dominance.
We therefore advance the following three necessary and sufficient conditions for identifying a natural monopoly:
1. Fixed Cost Absorption with Marginal Scalability
A natural monopoly exists if and only if a single system can serve total market demand after incurring a one-time sunk fixed cost F, while operating with near-zero marginal cost MC≈0.
This condition applies to industries where scale economies are structurally embedded in front-loaded capital investment—such as power grids, internet infrastructure, or core software layers—rather than arising from abstract production functions.
2. Economic Non-Replicability
The infrastructure must be economically non-replicable, not merely technically challenging to duplicate. This is formalized as a revenue constraint:
R(Q)<2F
where R(Q) is the total market revenue and Q the total market demand. This ensures that no rational entrant, even with full market capture, could justify building a second system. Duplication becomes structurally irrational.
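As a minimal sketch, the revenue constraint reduces to a one-line test. The dollar figures below are hypothetical, chosen only to illustrate the R(Q)<2F threshold:

```python
def duplication_is_rational(total_revenue, fixed_cost):
    """A second full-cost deployment only pencils out if the entire
    market's revenue R(Q) could cover two sunk costs, i.e. R(Q) >= 2F."""
    return total_revenue >= 2 * fixed_cost

# Hypothetical figures, for illustration only:
F = 5_000_000_000   # one-time sunk infrastructure cost, $
R = 7_000_000_000   # total addressable market revenue, $
print(duplication_is_rational(R, F))  # prints False: R < 2F, entry is irrational
```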
3. Operational Invulnerability
The system must meet three engineering-grade performance criteria:
Feasibility: Infrastructure must physically reach all users (full spatial coverage).
Integrity: It must function reliably under maximum projected demand Qmax.
Viability: It must sustain >99.99% reliability through redundant design and systemic resilience.
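The three criteria can be phrased as a simple acceptance test. A sketch with illustrative thresholds taken from the text (full coverage, capacity at least Qmax, uptime above 99.99%); the argument names and measurements are hypothetical:

```python
def operationally_invulnerable(coverage, capacity, q_max, uptime):
    """Acceptance test for the three engineering-grade criteria.

    coverage -- fraction of users the infrastructure physically reaches
    capacity -- maximum load the system can sustain
    q_max    -- maximum projected demand Qmax
    uptime   -- measured reliability as a fraction (e.g. 0.99995)
    """
    feasibility = coverage >= 1.0      # full spatial coverage
    integrity   = capacity >= q_max    # functions under peak demand
    viability   = uptime > 0.9999      # >99.99% reliability
    return feasibility and integrity and viability

# Hypothetical measurements for illustration:
print(operationally_invulnerable(1.0, 120, 100, 0.99995))  # prints True
```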
Conclusion
This framework replaces speculative cost modeling with empirically testable thresholds. By tethering natural monopoly status to observable infrastructure constraints and revenue economics, it offers a practical, enforceable definition that distinguishes markets where monopoly is not just efficient—but technologically and economically unavoidable.
As a result, regulators, investors, and policymakers can shift focus from arbitrating theoretical efficiencies to governing real-world infrastructural inevitabilities—where market competition is not suppressed by force, but precluded by design.
Illustrative Examples
1. The Linux Kernel: Natural Monopoly via Spontaneous Standardization
The Linux kernel exemplifies a natural monopoly through organic standardization. Its fixed cost F encompasses decades of global investment in kernel development and ecosystem support, while marginal costs MC≈0 reflect frictionless digital replication.
Critically, the non-replicability condition holds:
R<2F
The aggregate revenue potential of the open-source ecosystem cannot justify a functionally independent kernel rebuild. Although technically feasible, duplication is economically irrational—expenditures would exceed any plausible returns.
Importantly, the coexistence of distributions (e.g., Red Hat, Debian, Arch) does not challenge the monopoly—it confirms it. These distributions represent dialectical variations of a shared infrastructure, not competing kernels.
Network effects (developer toolchains, hardware compatibility) entrench the standard.
Fragmentation costs (security vulnerabilities, coordination complexity, interoperability failures) deter duplication.
As a result, competition shifts upward—to user interface and application layers—while the kernel remains singular and uncontested.
Analogy: Language Standardization
Just as English became a global lingua franca, with regional variants (American, British, Australian), the core grammar remains universal. The value lies in the standardized foundation, not proprietary control.
Thus, Linux’s dominance is not imposed—it is an emergent outcome of coordination efficiency and cost structure. A new kernel could be written, but the combination of economic futility and ecosystem inertia eliminates the incentive.
2. DNS Root Zone: Infrastructure Without Revenue
The Domain Name System’s root zone is an even purer case of natural monopoly. Its fixed cost F includes the deployment and global maintenance of root servers, while marginal cost per DNS lookup is effectively zero:
MC=0
Yet it generates no direct revenue:
R=0
This creates a monopoly enforced not by regulation or IP rights, but by sheer economic disinterest. Duplication of root infrastructure is technically possible—but no rational actor would fund infrastructure with no monetization path.
3. NTP: Temporal Coordination as Public Utility
The Network Time Protocol (NTP) infrastructure offers another example. The fixed investment in time-server infrastructure is significant, but marginal delivery costs are near-zero.
MC≈0, R=0
Like DNS, NTP succeeds or fails in binary terms (accurate or not). Given its public-good nature and lack of revenue model, duplication is technically possible but economically unjustifiable. The result is a stable, spontaneous monopoly—enforced not by exclusion, but by cost structure.
4. The Linux Paradox: Optimality Without Competition
Natural monopolies raise a subtle dilemma: How can we ensure optimality when competition disappears?
With NTP, success is binary—either synchronization occurs or it doesn't. But operating systems are path-dependent and complex. Could Linux persist even if suboptimal?
Here, open governance mitigates this risk. Forkability, transparent contribution models, and decentralized stewardship allow innovation within the standard. Monopoly exists at the infrastructure level, but evolution is still possible. The result is a monopoly that resists stagnation.
Conclusion
This framework reorients the concept of natural monopoly from theoretical cost curves to empirical, infrastructure-based conditions. A natural monopoly exists when:
Irreversible Infrastructure: One-time fixed cost enables full-market service
Economic Non-Replicability:
R<2F
Operational Invulnerability: Proven resilience, scalability, and full user coverage
Rather than relying on speculative subadditivity, this model allows regulators and investors to focus on measurable system properties. It clarifies where monopoly is not just efficient, but unavoidable—where the goal is not enforcing competition, but governing durable, defensible systems.
Venture Capital as an Engine of Natural Monopoly Formation
In contemporary venture-funded markets, natural monopoly is neither accident nor anomaly—it is the explicit terminal state of strategically architected infrastructure. Venture capital does not merely fund high-growth competitors; it underwrites the creation of economically irreplicable systems.
This capital allocation strategy aligns directly with the conditions that define a natural monopoly:
High fixed costs F
Near-zero marginal costs MC≈0
Revenue-constrained non-replicability R<2F
The Venture Blueprint: Manufacturing Monopolies in Three Phases
Fixed Cost Absorption
Venture capital is deployed to preemptively absorb the entire fixed cost F of infrastructure development—before competitors can react. This creates irreversible capital asymmetry. By the time rivals contemplate entry, the revenue constraint R(Q)<2F is already operative, rendering duplication economically irrational.
Entrenchment Through Adoption Tactics
Aggressive growth strategies—freemium pricing, loss-leading, and network lock-in—accelerate user entrenchment. These tactics convert technical feasibility into economic irreversibility, where replication becomes not just difficult, but structurally value-eroding.
Delayed Monetization Post-Dominance
Profitability is deferred until systemic dominance is incontestable. Pricing power emerges not from competitive advantage, but from the absence of alternatives as the market consolidates around a single architecture.
Empirical Validation: Monopolies in the Wild
GitHub
Absorbed global infrastructure costs F; marginal cost per user MC≈0.
The revenue available in the developer ecosystem cannot justify rebuilding a parallel platform (R<2F).
Monopoly persists through network effects, not feature differentiation.
Cloudflare
Deployed a global edge network at a cost so high that HTTP request revenue cannot fund replication.
Near-zero marginal costs per request make undercutting impossible.
Stripe
Embedded payment abstraction layers into internet infrastructure.
Though technically replicable, economic futility prevents duplication—the market cannot support two Stripes.
Inversion of Traditional Venture Philosophy
This model reverses conventional VC logic:
Traditional VC | Monopoly-Seeking VC
Funds competitors within markets | Funds the last infrastructure a market will need
Diversifies risk across portfolios | Concentrates on market finality
Seeks product-market fit | Engineers structural non-replicability
In this framework, the venture capitalist evolves from capital allocator to institutional engineer—constructing not merely companies, but permanent market architectures.
From Discovery to Inevitability
The result is not competitive outperformance—it is systematic preclusion.
Competition does not fail because of pricing aggression.
It collapses because the cost of replication is mathematically unjustifiable.
Once:
R<2F
...no rational investor will fund duplication. The market equilibrium resolves to singularity.
Thus, venture capital transitions from discovering winners to manufacturing inevitability.
Natural Monopolies: The Great Divergence in Capital Strategy
To traditional value investors—disciples of Warren Buffett—venture capital can appear reckless. It violates sacred investment principles: positive cash flow, consistent earnings, and margin-of-safety valuations. Venture capitalists routinely fund companies that are hemorrhaging cash, often with no near-term path to profitability. To the value mindset, this looks indistinguishable from speculation.
But this perception stems from a category error.
Value investing and venture capital are not competing strategies. They are orthogonal investment philosophies, rooted in fundamentally different conceptions of durability.
Two Philosophies, Two Definitions of Moats
Value Investing:
Focuses on brand-based moats, such as Coca-Cola. These moats are protected by:
Brand sentiment
Distribution scale
Consumer switching costs
Venture Capital:
Seeks to construct infrastructural moats, such as Linux or the DNS system. These are defended not by perception, but by economic physics:
High fixed cost F
Near-zero marginal cost MC≈0
Why Infrastructural Moats Dominate
Moat Type | Primary Threat | Durability Horizon
Brand (e.g., Coca-Cola) | Cultural shifts, brand dilution | Decades (vulnerable to sentiment and taste)
Infrastructure (e.g., Linux) | Technological paradigm shift | Centuries (absent foundational obsolescence)
Key Insight:
Infrastructure cannot be economically replicated once:
R_market < 2F
That is, no rational actor will fund duplication if the total addressable market cannot support two full-cost deployments. DNS root servers are not rebuilt; Linux is not rewritten—not due to technical limits, but because the return on replication is structurally negative.
The VC Endgame: Terminal Monopoly
Venture capital is not merely backing startups—it is financing economic inevitabilities:
Absorb fixed costs F before competitors can react.
Accelerate adoption until revenue R<2F, making duplication irrational.
Lock in the market via network effects and infrastructural inertia.
The output is not simply a company—it is a permanent market architecture.
Buffett vs. The Builder
Value Investor | Venture Capitalist
Buys proven durability | Manufactures future durability
Extracts value from existing moats | Pours concrete for new ones
Optimizes risk-adjusted return on known assets | Accepts risk to shape irreversible outcomes
Buffett looks backward, identifying time-tested franchises.
The venture capitalist looks forward, designing structural monopolies.
Each model is coherent—but they operate on incommensurable timescales and opposite axes of durability.
The VC Investment Opportunity of a Lifetime: TNT Bank Money
We’ve already seen the emergence of engineered natural monopolies: Linux, DNS, NTP, GitHub, Stripe. Though operating in different domains, each exhibits the same core structural pattern:
Massive fixed cost F
Near-zero marginal cost MC≈0
Economic non-replicability once R<2F
These systems do not win by outperforming competitors in a conventional marketplace. They win by becoming the market infrastructure—rendering replication financially irrational.
Once the cost of supporting two systems with two competing standards exceeds the benefits of choice, fragmentation collapses. Positive network externalities enforce monopoly stability: the more users adopt a single standard, the more costly and inefficient deviation becomes.
Every developer understands this intuitively. Writing software for multiple operating systems increases complexity, overhead, and maintenance cost. But writing for a single, dominant standard dramatically reduces effort. That’s what R<2F fundamentally means: when standardization lowers epistemic load and transaction cost, everyone benefits—collectively.
Linux is the archetypal case. It doesn’t dominate through control or coercion. It dominates because its positive externalities make fragmentation uneconomical. It’s not just a product—it’s an economically inevitable convergence point.
Now, Apply That Same Logic—Not to Code, but to Money
Now imagine taking that same monopolistic infrastructure logic—not to operating systems, not to content delivery, not to payments—but to the foundational layer of money itself.
What if we could architect a financial protocol with:
A one-time sunk infrastructure cost
Zero marginal cost per transaction
And network effects so powerful that supporting two systems would make no economic sense?
That’s TNT Bank.
It’s not a bank.
It’s not a product.
It’s not a better app.
It’s a canonical protocol for value transfer: scalable, trust-minimized, and impossible to rationally duplicate once deployed. Like Linux, DNS, and Stripe—it doesn’t aim to compete. It aims to end competition.
This is not just a fintech opportunity.
It is the last financial infrastructure layer the market will ever need.
TNT Bank: A Foundational Financial Protocol
TNT Bank is not another fintech product.
It is not a slicker UI, a marginally better experience, or a challenger brand in banking.
It is an infrastructural protocol—designed to become the canonical layer for value transfer.
Once deployed, TNT Bank functions as:
High-trust:
Independently verifiable, continuously auditable, and cryptographically immutable
Infinitely scalable:
With marginal transaction costs that asymptotically approach zero
Technically unchallenged:
Abstracts away national and institutional fragmentation at the protocol layer
Just as Linux obsoleted redundant operating systems and DNS unified internet naming conventions, TNT Bank has the potential to replace thousands of siloed financial architectures—from regional banks and neobanks to closed-loop fintech systems and remittance networks.
This Is Not a Product Play — It’s a Market Finalization Play
The opportunity here is not to compete.
It is to end the competition.
TNT Bank is not a bank.
It is not an app.
It is not a brand.
It is the last financial protocol the world needs.
Conclusion: Infrastructure Eats Finance
TNT Bank offers venture capital the rarest of outcomes:
A terminal infrastructure investment at the base layer of the global economic stack—money.
This isn’t about better features or market share.
It’s about economic inevitability.
Once deployed, TNT Bank becomes:
Economically non-replicable
Technically indispensable
Systemically convergent
Duplication becomes irrational, and the market consolidates around a single canonical ledger.
This is not just the opportunity of a decade.
It is the opportunity to build what no one will ever rationally rebuild.
Let Us Show You How: Wall Street Returns to the Wheel
Our background is in trading statistical arbitrage on Wall Street. We rarely looked beyond the Russell 2000, so at first, all of this—blockchains, protocols, open-source finance—felt a bit foreign.
We can only assume this is what Peter Thiel means when he says, “competition is for losers.” Naturally, he wants monopolies. He’s a profit-maximizing investor operating in a competitive market, where long-term value accrues not from competing—but from owning and defending moats. Just as Warren Buffett taught us.
But here’s what Silicon Valley sometimes forgets:
Structured monopolies are just game theory in motion.
And game theory is the native language of Wall Street.
It’s how we made our money in statistical arbitrage: by playing a game—trading stocks—where we were always the better-informed counterparty. We systematically win, every time we play, because we know how to profit from diversifiable risk in places where others are excluded—via regulatory barriers.
Just like a licensed casino, we monetize variance in a system rigged not by chance, but by structure. That’s why Citadel pays platforms like Robinhood for so-called “dumb” retail order flow—so it can profit from it, predictably.
We come from a world where mathematical economics isn’t abstract theory—it’s a tradeable edge.
We understand rent-seeking not as an academic concept, but as a business model. We’ve seen Nash equilibria enforced not in pitch decks, but in regulatory filings and capital structures optimized to exclude rivals.
We also know how monopolies are built, defended, and regulated—not from the outside looking in, but from inside the machinery itself.
We didn’t just observe this game.
We helped build it.
And now, the adults from Wall Street are back at the wheel.
What you think you know about game theory?
Child’s play.
What you’ve read about rent-seeking, Folk Theorems, or Pareto-efficient Nash equilibria?
Introductory notes.
But you’re about to learn.
And more importantly: you’re about to profit.
Because we understand something else—something every serious investor eventually learns the hard way:
What’s worse than being wrong… is being right too early.
And now, dear investors, is exactly the right time:
✅ The market is ready
✅ The infrastructure is feasible
✅ The regulators are asleep
✅ And the monopoly is mathematically provable
CAPM, Regulation, and the Real Limits of Arbitrage
As we all know, the Capital Asset Pricing Model (CAPM) posits a foundational axiom:
Diversifiable risk earns no premium.
This is because such risk can be eliminated through portfolio diversification and thus cannot be monetized by rational, well-diversified investors.
In theory, and in practice, any strategy that profits from diversifiable risk should be arbitraged away—unless it is explicitly gated by regulation.
And indeed, in the real world, the few domains where such asymmetric payoffs persist—casinos, state lotteries, gambling books—are tightly regulated, licensed, and surveilled.
The lesson is simple:
When a system reliably generates returns from diversifiable risk, it becomes a public utility—not a market opportunity.
As dramatized in 21 (based on true events), even card counting—a mathematically valid, diversifiable edge in blackjack—gets you kicked out of Vegas. Not because it's cheating, but because it's profiting from variance in a system designed to suppress it.
When retail investors try to monetize diversifiable risk, the system pushes back—hard.
Where We Play: Arbitrage Meets Regulation
While Peter Thiel structures monopolies like the NYSE—charging a fee for every transaction—we all know how that story ends.
Even with his exceptional talent for lobbying (and to be clear, we have deep respect for Thiel as a master of the regulatory game), even the most well-defended gatekeeping profit centers eventually face appropriation or regulation.
That’s not cynicism.
That’s reality.
Even in the U.S., the state eventually shows up—especially when the rents are too obvious and the public-utility logic too strong to ignore.
So what do we do?
We profit from statistical arbitrage—by running a regulated casino, or more precisely, a licensed market.
But only where risk is protected by regulation, not threatened by it.
We operate in markets where entry is gated not by branding, not by marketing spend, but by statute.
We go where:
Arbitrage meets licensing
Math meets monopoly
Variance is walled off by policy
And where compliance is the moat—the ultimate barrier to entry
Welcome to the Game of Statistical Arbitrage—For Real
Not the textbook version.
Not the hedge fund marketing pitch.
Not the backtest on simulated data.
This is the real-world version:
Where math meets monopoly
Where risk is gated by regulation
Where compliance itself becomes the moat
Where timing is everything
And where those who understand structure—not story—win
We’re not here to play the game.
We’re here to define it.