Key takeaways
- A lot of today’s “low” AI pricing is loss-led or quietly cross-subsidized. Compute is pricey. Prices don’t always admit it.
- The subsidy shows up all over the stack: VC-funded capex, cloud credits, vendor lock-in economics, and public incentives for chips, data centers, and energy.
- Physics is undefeated. Power limits, chip supply, privacy requirements. “Free forever” AI doesn’t really pencil out. The IEA expects data center electricity demand to hit ~945 TWh by 2030.
- If your product depends on “cheap tokens,” plan for the bill to change. Instrument usage, route models, cache, and budget like it’s a utility, because it kinda is.

---
What it means when AI has a subsidization problem
When I say AI has a subsidization problem, I’m not wagging a finger. This isn’t a moral speech. It’s unit economics. A subsidized AI product is one where the price you pay or the “free tier” you’re enjoying sits below the full cost of the whole machine:
- Training: capex plus R&D.
- Inference: GPUs, power, networking.
- Safety and privacy controls, which add compute and ops work.
- The boring-but-real stuff: staff, facilities, support, compliance. The kind of costs nobody puts on the homepage.

Sometimes the subsidy is intentional. Growth strategy, land grab: grab users now and worry later. Other times it’s structural, because hyperscalers can eat losses and still come out on top. If you want the clean one-liner:
AI has a subsidization problem when AI services are priced below their true compute, energy, and operational costs, so the bill gets pushed onto investors, clouds, or taxpayers.
Where the subsidy comes from
1) The “picks and shovels” loop: VC funds capex, Big Tech collects rent
Marc Bara’s “picks and shovels trap” doesn’t sugarcoat it. Hundreds of companies raise billions to build GPU clusters and tooling, but the biggest customers, hyperscalers, often end up owning the infrastructure. The same piece cites:
- $202B invested in AI infrastructure in 2025
- ~$12B/year in direct consumer AI spend
- a projection that the four largest hyperscalers will spend $650B on AI infrastructure by 2026
- McKinsey’s estimate of $6.7T cumulative AI infrastructure investment by 2030
That’s not “a few servers.” That’s a global capex wave. When capex is fueled by investors chasing growth, you can sell inference “cheap” today and call it adoption. Everybody claps. For now. Source: Medium analysis with referenced reporting and estimates; see “The Picks and Shovels Trap: AI’s $200 Billion+ Subsidy for Big Tech.”
https://medium.com/@marc.bara.iniesta/the-picks-and-shovels-trap-ais-200-billion-subsidy-for-big-tech-de1d216ce9ad
2) Operating losses: someone is literally paying to serve your prompts
Sometimes the subsidy isn’t even subtle. CNBC reported OpenAI expects ~$5B in losses on $3.7B revenue in 2024, with costs tied to “running its services” plus salaries and overhead. No, that single data point doesn’t “prove” everything. Companies lose money all the time. But it fits the bigger pattern pretty neatly: we’re still in a land-grab phase, and usage is being encouraged even if margins suffer. Source: CNBC
https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html
3) Cloud credits: free compute is still compute
If you’ve ever been around startups, you’ve seen the credits game. The product feels “free-ish.” The burn is absolutely not. Google Cloud’s AI startup program advertises up to $350,000 in cloud credits over 2 years, with up to $250k in year 1 usage coverage, then up to $100k in year 2 at 20% coverage. Microsoft’s startup onboarding flow advertises $1,000 in Azure credits, and up to $5,000 with verification. Those credits aren’t charity. They’re customer acquisition spend. But to the builder, the effect is the same: AI looks cheaper than it is, which is a big reason AI has a subsidization problem in the first place. Sources.
- Google Cloud AI startup program: https://cloud.google.com/startup/ai
- Microsoft Learn: https://learn.microsoft.com/en-us/azure/signups/overview
4) Energy and “real world physics” pushes back
Stanford Social Innovation Review says it plainly: this era of “free” or heavily subsidized AI is temporary, and the long-term shape looks more like a metered utility. The International Energy Agency brings the physics receipts. In its Energy and AI reporting, the IEA projects global electricity demand from data centers will more than double by 2030 to ~945 TWh, and AI-optimized data center electricity demand will more than quadruple by 2030. When the power bill doubles, the “free tier” tends to get… smaller. Slower. Less cute. Sources.
- SSIR: https://ssir.org/articles/entry/low-cost-ai-illusion-nonprofits
- IEA: https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
5) Public incentives: chips, tax credits, and the “triple subsidy” debate
This is where the line between private subsidy and public subsidy gets blurry fast. The US CHIPS and Science Act includes $39B in subsidies for chip manufacturing and a 25% investment tax credit for qualified manufacturing investments; the IRS lays out the credit structure in its own guidance. This isn’t “AI funding” exclusively, sure. But semiconductors are a hard dependency for modern AI, so it’s part of the same pipeline. There’s also an emerging debate, including commentary like “Taxpayer-Backed AI? The Triple Subsidy No One Voted For,” about whether governments should backstop AI infrastructure financing even more directly. Agree or disagree, it’s still the same underlying question: when AI has a subsidization problem, who ends up paying?
Sources.
- CHIPS Act overview: https://en.wikipedia.org/wiki/CHIPS_and_Science_Act
- IRS credit explanation: https://www.irs.gov/credits-deductions/advanced-manufacturing-investment-credit
- Commentary: https://musictech.solutions/2025/11/14/taxpayer-backed-ai-the-triple-subsidy-no-one-voted-for/
Why “subsidized chatbots” won’t feel subsidized forever
Fast Company’s vibe is basically: don’t get comfy. AI can follow the “growth at all costs” playbook (think Uber and Lyft in the early days), then prices creep up once habits form and the market consolidates. SSIR describes the same slow squeeze in practical terms. Free tiers get slower or smaller. Premium features like privacy, longer context, and domain tuning slide behind paywalls. AI starts acting like a utility, because eventually somebody has to pay the utility bill. Source: Fast Company
https://www.fastcompany.com/91511668/dont-get-too-used-to-subsidized-chatbot-costs
How to build products assuming AI has a subsidization problem
I try to treat LLMs like any other expensive dependency. Measure it. Put limits on it. Design so you can swap it out when the economics change. Because they will.

1) Instrument tokens and latency like you mean it
If you don’t know your tokens per request distribution, you don’t know your gross margin risk. You’re guessing. And guessing gets expensive. A lightweight starting point:
```python
# rough monthly inference cost model
requests_per_day = 200_000
avg_input_tokens = 500
avg_output_tokens = 250
days = 30

tokens_per_month = requests_per_day * (avg_input_tokens + avg_output_tokens) * days

cost_per_million_tokens = 3.0  # plug in your provider/model pricing
monthly_cost = (tokens_per_month / 1_000_000) * cost_per_million_tokens

print(f"~${monthly_cost:,.0f}/month")
```

Then run scenarios. Prices double. Usage goes 5x. You move 20% of traffic to a smaller model. What breaks first?
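Those scenarios can be scripted directly. A minimal sketch, reusing the same illustrative figures as the model above (none of these are real provider prices; the $0.30/M small-model rate is an assumption):

```python
# stress-test the baseline cost model against pricing and usage shocks
def monthly_cost(requests_per_day, avg_input_tokens, avg_output_tokens,
                 cost_per_million_tokens, days=30):
    tokens = requests_per_day * (avg_input_tokens + avg_output_tokens) * days
    return (tokens / 1_000_000) * cost_per_million_tokens

base = monthly_cost(200_000, 500, 250, 3.0)

scenarios = {
    "baseline": base,
    "prices double": monthly_cost(200_000, 500, 250, 6.0),
    "usage 5x": monthly_cost(1_000_000, 500, 250, 3.0),
    # 20% of traffic moves to a hypothetical $0.30/M-token small model
    "20% to small model": 0.8 * base + 0.2 * monthly_cost(200_000, 500, 250, 0.3),
}

for name, cost in scenarios.items():
    print(f"{name:>20}: ${cost:,.0f}/month")
```

Whichever scenario blows past your gross-margin budget first is the one to design against now, not after the invoice arrives.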
2) Benchmark self-hosting vs API, don’t guess
NVIDIA’s inference benchmarking guidance is refreshingly practical. Measure throughput and latency, pick a target QoS, size the infra, estimate TCO. Do the math before you tell your boss you can “just run it on a couple GPUs.” I’ve watched that optimism die in real time. Source: NVIDIA Technical Blog
https://developer.nvidia.com/blog/llm-inference-benchmarking-how-much-does-your-llm-inference-cost/
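The core of that TCO exercise is one division. A back-of-envelope sketch, where every number is a placeholder (the GPU rate, API price, and throughput are assumptions, not NVIDIA’s benchmarks; plug in your own measurements):

```python
# break-even sketch: self-hosted $/1M tokens vs an API's list price
api_price_per_m_tokens = 3.0      # $/1M tokens, example API price (assumed)
gpu_hourly_cost = 2.5             # $/hr for one rented GPU (assumed)
measured_throughput_tps = 2_000   # tokens/sec from your own benchmark (assumed)

tokens_per_gpu_hour = measured_throughput_tps * 3600
self_host_price_per_m = gpu_hourly_cost / (tokens_per_gpu_hour / 1_000_000)

print(f"self-host: ${self_host_price_per_m:.2f} per 1M tokens")
print(f"API:       ${api_price_per_m_tokens:.2f} per 1M tokens")
```

The catch is utilization: the self-host number only holds if the GPU stays busy, because an idle GPU bills by the hour whether or not it serves tokens.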
3) Route models and cache aggressively
In my experience, most apps don’t need a frontier model for every single request. Not even close. Cache deterministic-ish results like FAQs, policy answers, boilerplate. Route models. Let a smaller model handle triage and rewriting, and save the big model for the hard stuff. And please, put guardrails on context size. “Unlimited context” is how you wake up to a bill and just stare at it for a full minute.
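A minimal sketch of cache-then-route, assuming two hypothetical endpoints `small_model` and `large_model` (stand-ins for real API calls) and a deliberately crude triage rule:

```python
from functools import lru_cache

def small_model(prompt: str) -> str:   # stand-in for a cheap model call
    return f"small:{prompt}"

def large_model(prompt: str) -> str:   # stand-in for a frontier model call
    return f"large:{prompt}"

MAX_CONTEXT_CHARS = 8_000  # guardrail: refuse unbounded context up front

@lru_cache(maxsize=4096)   # cache deterministic-ish answers (FAQs, boilerplate)
def answer(prompt: str) -> str:
    if len(prompt) > MAX_CONTEXT_CHARS:
        raise ValueError("context too large; truncate or summarize first")
    # crude triage: short, routine prompts go to the small model
    if len(prompt) < 200 and "analyze" not in prompt.lower():
        return small_model(prompt)
    return large_model(prompt)
```

In practice you’d replace the length check with a real classifier or heuristic stack, but the shape is the same: cheap path first, expensive path only when earned, hard cap on context.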
4) Design for vendor portability
When AI has a subsidization problem, price shocks are normal. So it pays to stay slippery. Keep a clean abstraction around your model provider. Store prompts and outputs in a provider-neutral format. Avoid building the whole product around one vendor’s “special sauce” unless you’ve decided the lock-in is worth it. If you’re already in “AI everywhere” mode, my ranty companion piece is here:
“i don’t want AI on everything — how to choose what’s worth it”
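The portability idea from point 4 can be sketched as a provider-neutral interface; `EchoProvider` here is a hypothetical test stub, not a real SDK adapter:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Completion:
    # provider-neutral record: store this, not a vendor's response object
    text: str
    input_tokens: int
    output_tokens: int

class LLMProvider(Protocol):
    def complete(self, prompt: str, max_tokens: int) -> Completion: ...

class EchoProvider:
    """Stub provider for tests; write one thin adapter per real vendor SDK."""
    def complete(self, prompt: str, max_tokens: int) -> Completion:
        return Completion(text=prompt[:max_tokens],
                          input_tokens=len(prompt),
                          output_tokens=min(len(prompt), max_tokens))

def generate(provider: LLMProvider, prompt: str) -> Completion:
    # app code depends only on the Protocol, never on a vendor SDK directly
    return provider.complete(prompt, max_tokens=256)
```

When a provider repriced, you’d swap one adapter instead of refactoring every call site, and the stored `Completion` records stay readable either way.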
Conclusion: plan like the subsidy ends, because it probably will
AI has a subsidization problem because the true cost of compute, power, and infrastructure is crashing into “cheap tokens” pricing and growth incentives. Enjoy the current phase, sure. Just don’t architect your product like it’s stable. If you’re building with LLMs this week, do one thing. Add cost instrumentation and run a pricing shock scenario. Then tell me what breaks. I’m genuinely curious. Drop a comment, or check out my write-up on tooling choices in “Top 10 agentic coding tools in 2026” if you’re using AI mostly for dev workflows.

---
Sources
- Marc Bara (Medium), The Picks and Shovels Trap: AI’s $200 Billion+ Subsidy for Big Tech
  https://medium.com/@marc.bara.iniesta/the-picks-and-shovels-trap-ais-200-billion-subsidy-for-big-tech-de1d216ce9ad
- Fast Company, Don’t get too used to “subsidized” chatbot costs
  https://www.fastcompany.com/91511668/dont-get-too-used-to-subsidized-chatbot-costs
- Stanford Social Innovation Review (SSIR), The Low-Cost AI Illusion
  https://ssir.org/articles/entry/low-cost-ai-illusion-nonprofits
- International Energy Agency (IEA), AI is set to drive surging electricity demand from data centres…
  https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
- CNBC, OpenAI sees $5 billion loss this year on $3.7 billion in revenue
  https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-this-year-on-3point7-billion-in-revenue.html
- NVIDIA Technical Blog, LLM Inference Benchmarking: How Much Does Your LLM Inference Cost?
  https://developer.nvidia.com/blog/llm-inference-benchmarking-how-much-does-your-llm-inference-cost/
- Google Cloud, AI startup program (credits up to $350,000)
  https://cloud.google.com/startup/ai
- Microsoft Learn, Get up to $5,000 in Azure credits to build and grow your startup
  https://learn.microsoft.com/en-us/azure/signups/overview
- CHIPS and Science Act overview
  https://en.wikipedia.org/wiki/CHIPS_and_Science_Act
- IRS, Advanced Manufacturing Investment Credit (25% credit)
  https://www.irs.gov/credits-deductions/advanced-manufacturing-investment-credit
- MusicTech.Solutions, Taxpayer-Backed AI? The Triple Subsidy No One Voted For (commentary)
  https://musictech.solutions/2025/11/14/taxpayer-backed-ai-the-triple-subsidy-no-one-voted-for/
- Reddit discussion (community perspective, not a primary source), AI seems to be being deeply subsidised…
  https://www.reddit.com/r/selfhosted/comments/1qbriq6/ai_seems_to_be_being_deeply_subsidised/