INSIDE: Inside the Engineers Building AI That Knows When to Shut Up · PAGE A3
VOL. I
No. 1
The Opinionated Daily of Technology & Code
GARRY'S TAKE
Feb 28, 2026
62°F Clear
garrystake.com
S&P 500 5,847 ▲0.84% | NASDAQ 18,234 ▲1.12% | DOW 42,156 ▼0.23% | BTC $94,230 ▲2.45% | 10Y 4.31%
Opinion A2 · Features A3 · Back Page A4
ANALYSIS

The Great AI Consolidation Has Begun — And Most of You Aren't Ready

Three major AI labs quietly merged their inference infrastructure this week, signaling the end of the 'let a thousand models bloom' era

Illustration for Garry's Take

The dream of a thousand competing AI models died this week, though most of the industry was too busy tweeting about benchmark scores to notice.

In what will likely be remembered as the defining week of AI's second act, three of the top seven foundation model companies announced infrastructure-sharing agreements that effectively merge their inference layers. The technical term is "federated deployment." The honest term is consolidation.

For the past three years, we've watched a Cambrian explosion of large language models. Every university lab, every well-funded startup, every tech giant with a GPU cluster has released their own model, their own benchmark, their own leaderboard victory. OpenAI, Anthropic, Google, Meta, Mistral, and a constellation of smaller players have been locked in an arms race that burned through roughly $47 billion in compute costs last year alone.

"This is the AT&T moment for AI. We went from dozens of competing networks to shared infrastructure. The same thing is happening here, just faster."
— Dr. Sarah Chen, Economist at Stanford

That era is ending. Not with a bang, but with a signed infrastructure agreement.

The Inference Commons

The deals, announced in rapid succession Monday through Wednesday, create what insiders are calling the "inference commons" — a shared pool of compute resources that any participating lab can draw from. The economic logic is irresistible: why maintain separate data centers across three continents when you can share the metal and compete only at the model layer?

"This is the AT&T moment for AI," said Dr. Sarah Chen, an AI economist at Stanford. "We went from dozens of competing telephone networks to one shared infrastructure. The same thing is happening here, just faster."

The implications are staggering. Startups that differentiated on deployment speed or inference cost — effectively, on their ops teams rather than their research — are suddenly competing on a level playing field they cannot win on. I count at least fourteen YC-backed companies whose entire value proposition evaporated on Tuesday.

But the real story isn't about startups dying. It's about what this means for the architecture of intelligence itself.

The Data Flywheel

When inference is commoditized, the only moat is in training. And training requires data. And data, increasingly, is either synthetic or locked behind licensing agreements that make Hollywood residuals look simple. We are entering an era where the companies that win will be the ones with the best data flywheels — systems that generate, curate, and learn from their own outputs in…

PROGRAMMING

TypeScript: The Language That Ate the World While Nobody Was Looking

A sober look at the language that became infrastructure

There is a certain kind of hubris that comes from being the default choice. TypeScript has it in spades.

This week, the TIOBE index confirmed what anyone with eyes already knew: TypeScript is now the third most popular programming language on earth, behind only Python and C. It powers backends, frontends, mobile apps, CLI tools, build systems, and — in a development that should concern everyone — critical infrastructure.

The language that started as "JavaScript but with types" has become the lingua franca of modern software development. And while that's mostly good news, it comes with a set of risks that the community has been studiously ignoring.

The core problem is this: TypeScript's type system is Turing-complete. This means that type-checking itself can be an arbitrarily complex computation. In practice, this leads to build times that scale nonlinearly with codebase size, type errors that require a PhD to decode, and a culture of "type gymnastics" that prizes cleverness over clarity.
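To make the Turing-completeness claim concrete, here is a small, hypothetical example of the kind of type-level computation in question: the compiler itself decides whether a number is even, with no runtime arithmetic involved. The helper names (`BuildTuple`, `IsEven`) are ours, not standard library types.

```typescript
// Type-level parity check: the compiler does the "computation" by recursively
// building and consuming tuples. Helper names are illustrative, not standard.

// Build a tuple of length N, e.g. BuildTuple<3> = [unknown, unknown, unknown].
type BuildTuple<N extends number, T extends unknown[] = []> =
  T["length"] extends N ? T : BuildTuple<N, [...T, unknown]>;

// Strip elements two at a time; whatever is left decides the parity.
type IsEven<T extends unknown[]> =
  T extends [] ? true
  : T extends [unknown] ? false
  : T extends [unknown, unknown, ...infer Rest] ? IsEven<Rest>
  : never;

// "Runs" entirely at compile time: no JavaScript is emitted for the check.
type TenIsEven = IsEven<BuildTuple<10>>;  // resolves to true
type SevenIsEven = IsEven<BuildTuple<7>>; // resolves to false

// A value-level witness: this line only type-checks because the checker
// evaluated the recursion above.
const proof: TenIsEven = true;
console.log(proof);
```

Scaled up a few orders of magnitude, this is exactly the nonlinearity described above: every conditional type like these is work the checker must redo on each compile.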

I've reviewed seven major open-source projects this month. In every single one, I found type definitions more complex than the business logic they were supposed to describe. We've created a meta-language on top of a language on top of a runtime, and we're proud of it.

None of this means TypeScript is bad. It's genuinely excellent for 90% of use…

REGULATION

Europe's AI Act Enforcement Begins; Silicon Valley Shrugs

First fines expected within weeks, but compliance remains spotty at best

Brussels woke up today ready to enforce the most ambitious AI regulation in history. San Francisco went to brunch.

The European Union's AI Act, passed in 2024 after years of negotiation, enters its full enforcement phase today. Every AI system operating in the EU must now be classified by risk level, documented to auditable standards, and monitored for compliance. The penalties for violations are steep — up to 7% of global revenue, making GDPR fines look like parking tickets.

And yet, according to a survey by the European Digital Rights Association, fewer than 40% of AI companies serving EU customers have completed their compliance documentation. Among US-based companies, that number drops to 23%.

"It's the GDPR playbook all over again," said Marcus Weber, a tech policy analyst in Berlin. "Write ambitious rules, struggle to enforce them, make an example of one big company, hope the rest fall in line."

The real test won't come from fines. It will come from the first major AI incident on European soil — a healthcare misdiagnosis, a discriminatory hiring decision, a financial fraud enabled by generative AI. When that happens, the AI Act's enforcement mechanisms will be tested under pressure.

What's News
Business & Technology
  • Rust Foundation Reports Record Adoption. The Rust Foundation released its annual survey showing 4.2 million active Rust developers worldwide, up 67% from last year. The language is now used in production at every major tech company and has been adopted by three government agencies for critical infrastructure.
  • GitHub Copilot Writes 61% of New Code at Fortune 500. A new report from McKinsey finds that AI coding assistants now generate 61% of new code at Fortune 500 companies, up from 38% a year ago. Developer productivity has increased by an estimated 27%, though bug rates have risen 14%.
  • Stack Overflow Traffic Falls 35% Year-Over-Year. Stack Overflow usage has dropped 35% year-over-year as developers shift to AI coding assistants for routine questions. The company announced a pivot to "verified answers" curated by human experts, positioning itself as a quality layer above AI-generated responses.
The Road to Consolidation
2023 — GPT-4 launches; AI arms race begins in earnest
2024 Q1 — Anthropic, Google, Meta all release frontier models within 90 days
2024 Q3 — Compute costs force three mid-tier labs to merge or shut down
2025 Q1 — First infrastructure-sharing talks reported by The Information
2025 Q4 — Inference costs drop 80% but training costs rise 300%
2026 Feb — Three labs announce 'inference commons' agreements
Garry's Take · A1 · Feb 28, 2026
GARRY'S TAKE · OPINION · A2 · Feb 28, 2026
Opinion
SANDCASTLE'S CORNER · Garry Sandcastle

Stop Calling Everything 'AI-Native'

The meaningless buzzword that ate product marketing

I received fourteen pitch decks last week. Every single one described the product as "AI-native." Not one of them could explain what that meant.

Here's what "AI-native" has come to mean in practice: "We added an LLM API call to our existing CRUD app." That's it. That's the revolution. A product that previously searched a database now also calls Claude before displaying results. AI-native.

The term was useful, briefly, when it described products genuinely built around AI capabilities — products that couldn't exist without machine learning at their core. Midjourney is AI-native. GitHub Copilot is AI-native. A project management tool that added an "AI summary" button to its existing interface is not AI-native. It's AI-adjacent. There's no shame in that, but there should be shame in the deception.

I propose a simple test. If you can remove the AI features from your product and it still works as a coherent tool, you're not AI-native. You're AI-enhanced. Own it.

The companies building truly AI-native products don't need the label. Their products speak for themselves. It's the insecure middle that clings to the buzzword, hoping it will paper over the absence of genuine innovation.

We've been here before. "Cloud-native" went through the same lifecycle. First it meant something specific and technical. Then marketing got hold of it. Now it means nothing at all.

Let's not do this again.

"If you can remove the AI features and it still works, you're not AI-native. You're AI-enhanced."
— Garry Sandcastle
Notable & Quotable

"The market rewards confidence over calibration. Our job is to prove that calibration is worth the cost."

— Dr. Lisa Patel, Vector Institute (See Features, A3)

Editorial Cartoon
Today's cartoon by Staff Illustrator
GUEST ESSAY

The Open-Source Movement Needs a Business Model, Not a Manifesto

Goodwill doesn't pay for infrastructure

The open-source community has spent three decades building the infrastructure that powers the modern internet. Linux runs 96% of the world's top servers. PostgreSQL processes trillions of transactions daily. Kubernetes orchestrates millions of containers. The economic value created is incalculable.

And yet, the median open-source maintainer earns exactly zero dollars for this work.

This is not a new observation. But the AI consolidation currently underway makes it newly urgent. As foundation model companies merge their inference infrastructure and training costs escalate beyond what any volunteer community can afford, the window for open-source AI alternatives is closing fast.

The community's response has been, predictably, to write manifestos. What it needs instead is a viable business model — one that doesn't depend on the goodwill of trillion-dollar companies that have every incentive to capture and close what was once open.

The models exist. Dual licensing (MySQL). Open core (GitLab). Managed services (Redis). Foundation-backed (Linux). Each has trade-offs, but each actually generates revenue. The worst option is the current default: hoping that someone else will fund the common good indefinitely.

Hope is not a strategy. It's barely a wish.

In Today's Features

Inside the Engineers Building AI That Knows When to Shut Up — At a quiet lab in Toronto, a team is solving one of AI's hardest problems: knowing when you don't know Page A3

TypeScript — The Language That Ate the World While Nobody Was Looking Page A1

Letters to the Editor

RE: "Why Your Startup Doesn't Need a CTO (Feb. 21)"

Mr. Sandcastle's column last week arguing that early-stage startups should hire a 'lead engineer' rather than a CTO betrays a fundamental misunderstanding of what a CTO does. A CTO doesn't just write code — they set technical vision, make architectural bets, and serve as the bridge between business strategy and engineering execution.

Patricia Kuhn, CTO, Meridian Systems, Austin, Texas

RE: "The IDE Is Dead, Long Live the IDE (Feb. 14)"

I've been a Vim user for 27 years. I will be a Vim user for 27 more. No amount of AI-powered autocomplete will change this. The IDE is not dead, but it was never alive to begin with — it was always a crutch for those who couldn't be bothered to learn their tools properly.

Howard Ng, Senior Developer, Vancouver, B.C.

RE: "The Great AI Consolidation"

If Mr. Sandcastle is right about consolidation, then the open-source community has approximately eighteen months to build viable alternatives before the window closes permanently. I would argue the window is already narrower than he suggests.

Dr. Amara Osei, ML Researcher, ETH Zürich, Zürich, Switzerland

THE EDITORIAL BOARD
Garry Sandcastle, Editor-in-Chief & Publisher · The Algorithm, Managing Editor · Various LLMs, Contributing Columnists · Garry's Take is published daily except weekends and days when the build breaks. Founded 2026. All opinions are strongly held and occasionally correct.
Garry's Take · A2 · Feb 28, 2026
GARRY'S TAKE · FEATURES · A3 · Feb 28, 2026
Features
PROFILE

Inside the Engineers Building AI That Knows When to Shut Up

At a quiet lab in Toronto, a team is solving one of AI's hardest problems: knowing when you don't know

Illustration for Garry's Take

TORONTO — The whiteboard in Lab 4B at the Vector Institute has a single equation circled in red marker. Below it, someone has written: "If we get this right, nothing else matters."

The equation describes calibrated uncertainty — a mathematical framework for making AI systems accurately assess their own confidence. It's the holy grail of AI safety research, and the twelve-person team working on it believes they're close to a breakthrough.

"The fundamental problem with current AI systems isn't that they're wrong," says Dr. James Nakamura, who leads the calibration team. "It's that they're wrong with the same confidence as when they're right. A model that says 'I'm 95% sure' should be correct 95% of the time. Right now, that number is closer to 60%."

The implications touch everything. Medical AI that hallucinates a diagnosis with high confidence. Legal AI that cites nonexistent case law. Financial AI that recommends trades based on patterns that don't exist.

Epistemic Grounding

Nakamura's team has been working on what they call "epistemic grounding" — techniques that force models to track the provenance of their knowledge and degrade confidence when that provenance is weak. The approach is computationally expensive, adding roughly 40% overhead to inference time, but the results are striking.

In controlled tests, their modified models reduced hallucination rates by 73% compared to baseline. More importantly, when the models did hallucinate, they flagged the output as low-confidence 89% of the time.

"There's a perverse incentive structure," admits Dr. Lisa Patel, a research scientist on the team. "The market rewards confidence over calibration. Our job is to prove that calibration is worth the cost."

What Comes Next

The team expects to publish their full results this spring. If the numbers hold up in production environments, it could reshape how every major AI system handles uncertainty.

"We can't eliminate errors. But we can make them visible."
— Dr. James Nakamura, Vector Institute
By the Numbers: AI Uncertainty

73% — Reduction in hallucination rates using epistemic grounding

89% — Proportion of hallucinations correctly flagged as low-confidence

40% — Inference overhead added by the calibration approach

60% — Actual accuracy when current models claim 95% confidence

12 — Size of the Vector Institute calibration team

$0 — Revenue generated by the research so far ("We're not a startup," Nakamura insists)

ENTERPRISE

Boring AI: The Quiet Revolution Nobody Talks About

While everyone chases AGI, the real money is in automating spreadsheets

Illustration for Garry's Take

The most profitable AI company you've never heard of makes $340 million a year automating insurance claims processing. It has no Twitter presence. Its founder has never been on a podcast. It has never published a paper on arXiv.

DataForm Processing, based in Columbus, Ohio, employs 200 people and processes roughly 2.3 million insurance claims per month using a combination of OCR, traditional machine learning, and a fine-tuned language model that cost them $12,000 to train. Not $12 million. Twelve thousand dollars.

"People in San Francisco think AI means chatbots and image generators," says founder Rita Vasquez, who previously spent fifteen years in insurance operations. "For us, AI means reading a PDF and putting the right numbers in the right database fields. It's not sexy. It's extremely profitable."

The contrast with the foundation model companies is stark. The top AI labs collectively burned through an estimated $47 billion last year. Their combined revenue was roughly $11 billion. DataForm spent $3 million on compute and made $340 million.

The math isn't subtle.

AI Lab Spending vs. Revenue
                       Spend    Revenue
OpenAI                 $14.2B   $4.1B
Anthropic              $7.8B    $1.9B
Google DeepMind        $12.1B   $3.2B
Meta AI                $9.4B    $1.1B
DataForm (boring AI)   $3M      $340M
Garry's Take estimates based on public filings, 2025

Book Review: 'The Last Programmer'

★★★★☆
by Ellen Zhao
O'Reilly Media, 2026, 342 pp.
$29.99

Ellen Zhao's new book asks the question every developer is privately wondering: will there still be programmers in ten years? Her answer — a qualified yes, but not the kind you're imagining — is both reassuring and terrifying. The book excels when describing the specific tasks AI has already absorbed (boilerplate, testing, documentation) and falters only when predicting what remains. Still, it's the most clear-eyed assessment of AI's impact on the profession published to date.

Garry's Take · A3 · Feb 28, 2026
GARRY'S TAKE · THE BACK PAGE · A4 · Feb 28, 2026
The Back Page
A-HED

The Developer Who Automated His Entire Job — Then Got Promoted

A tale of automation, ethics, and the perverse incentives of corporate life

Illustration for Garry's Take

A developer at a mid-size fintech company — who spoke on the condition of anonymity because, well, you'll see why — spent six months last year automating every aspect of his job. Every code review. Every deployment. Every standup summary. Every Jira ticket update.

He estimates he now does roughly 45 minutes of actual work per week. The rest is handled by a combination of cron jobs, shell scripts, and a fine-tuned language model that writes his Slack messages in his voice.

"The hardest part wasn't the automation," he told me. "It was making the automation look manual. You have to introduce random delays, occasional typos, and vary the response time. If you reply to every message in exactly three minutes, people notice."
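For the curious, the "look manual" trick he describes is only a few lines of code. A hypothetical sketch — our reconstruction, since he did not share his actual scripts:

```typescript
// Humanizing delays for automated replies: never answer at a fixed interval.
// Purely illustrative; the developer's real scripts were not shared with us.
function humanDelayMs(meanMs = 3 * 60_000, jitterFraction = 0.5): number {
  // Uniform jitter around the mean: with the defaults, a reply lands
  // anywhere between 1.5 and 4.5 minutes after the message arrives.
  const spread = meanMs * jitterFraction;
  return meanMs - spread + Math.random() * 2 * spread;
}

// e.g. schedule the bot's reply: setTimeout(sendReply, humanDelayMs());
console.log(Math.round(humanDelayMs() / 1000), "seconds");
```

The variance, not the automation, is what kept it invisible.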

Last month, his manager gave him a performance review. He received the highest possible rating and a 15% raise. The review specifically praised his "consistency and reliability."

He has not told his manager about the automation. He's not sure he ever will.

"I thought about it," he said. "But if I tell them, one of two things happens. Either they fire me for essentially not working, or they promote me to automate everyone else's job. Both options seem bad."

He was promoted anyway. His new title is "Senior Automation Engineer."

He did not find this funny.

For now, he spends his free time working on a side project. It's an AI tool that helps other developers automate their jobs. "Someone has to build the future," he said, without a trace of irony.

THE INSTALL LIST Garry Sandcastle

This Week in Unnecessary Dependencies

A running tally of packages that probably shouldn't exist

Our weekly audit of npm reveals the following packages published in the past seven days:

is-even-ai (v1.0.3) — Uses GPT-4 to determine if a number is even. 847 downloads this week. The README states: "Finally, a solution to one of computer science's hardest problems." It costs $0.002 per check. There is a premium tier.

left-pad-2 (v2.0.0) — A spiritual successor to the package that famously broke the internet in 2016. This version adds AI-powered padding that "learns your preferred padding style over time."

blockchain-hello-world (v1.0.0) — Prints "Hello, World!" but stores the output on a blockchain first. Requires a wallet connection. Gas fees apply.

ai-code-reviewer (v0.9.0) — Reviews your code by sending it to ChatGPT and pasting the response into a GitHub comment. It also reviews its own reviews, creating an infinite loop the author describes as "recursive quality assurance."

Total new packages on npm this week: 4,312. Packages that probably should have been a function: 4,100.

Markets
S&P 500        5,847.23    ▲ +0.84%
NASDAQ         18,234.56   ▲ +1.12%
DOW            42,156.89   ▼ -0.23%
BTC            $94,230     ▲ +2.45%
ETH            $3,456      ▲ +1.78%
10Y Treasury   4.31%       – +0.02
VIX            14.23       ▼ -3.12%
Tech Briefs

PostgreSQL 18 Beta Adds Native Vector Search. PostgreSQL 18 enters beta with native vector search support, directly threatening specialized vector databases like Pinecone and Weaviate. The implementation supports HNSW and IVFFlat indexing methods. Early benchmarks show performance within 15% of dedicated solutions.

Cloudflare Reports 40% Rise in AI Bot Traffic. Cloudflare's latest transparency report shows AI crawler traffic has increased 40% year-over-year, now representing 12% of all web requests. The company launched new detection and blocking tools, but admitted the cat-and-mouse game is "far from over."

Linux 7.0 RC Includes Rust USB Driver Support. The Linux 7.0 kernel release candidate includes Rust driver support for the USB subsystem, marking the largest Rust integration in the kernel to date. Linus Torvalds called the milestone "inevitable but still impressive."

The Daily Debug
ACROSS
  • 1. Version control command (3)
  • 4. React's virtual ___ (3)
  • 6. Language Rust aims to replace (1,2)
  • 8. npm install's worst enemy (12)
  • 10. HTTP success code (3)
DOWN
  • 1. Google's AI model (6)
  • 2. Ctrl+Z (4)
  • 3. Stack data structure operation (4)
  • 5. Linus's penguin (3)
  • 7. Code smell (4)
Corrections & Amplifications

In last week's edition, we stated that Python 4.0 had been released. In fact, Python 4.0 has not been released and may never be. We regret the error, though not as much as we regret Python's packaging ecosystem.

A Feb. 21 article on microservices incorrectly stated that a monolith 'never works at scale.' Several readers, and the entire Shopify engineering team, would like a word.

Garry's Take · A4 · Feb 28, 2026