Something that has unfortunately occupied a LOT of my time lately is a very contentious series of municipal debates around data centers coming to my township. Not your grandma’s data centers, either. I’m talking about 18-building, compute-heavy monstrosities… and naive township commissioners who don’t see the wolf in sheep’s clothing.
These projects are extractive in a way Pennsylvania knows all too well. In my great-grandparents’ day, the coal mines they worked to feed their families poisoned the workers, gutted the land, and left run-down little towns in their wake once the seams ran dry. Now data centers have arrived, hungry for different resources: water and power. Same logic of extraction, only instead of digging up minerals, they’re draining aquifers and causing power outages.
I’ve been on a bit of a doom-and-gloom spiral since the project was announced. But recently, I heard two perspectives from technology communicators that made me feel just a little bit better about the state of things:
Everyone seems to believe that AI will keep getting faster, smarter, and better, and that if you dump enough money into GPUs and data centers, you’ll eventually yield the kind of returns that justify all the hype.
But what if that assumption is wrong?
On August 12, The New Yorker ran an essay by Cal Newport titled What If AI Doesn’t Get Much Better Than This? And just a couple days prior, one of my favorite voices in science communication, Hank Green, published a YouTube video explaining why he’s literally changing his investment strategy because of this super-speculative sector.
The thread tying both together: diminishing returns.
The Plateau
Here’s the crux. Big AI companies are burning through billions training and retraining models. They can afford it only because localities hosting data centers absorb the costs—financial, environmental, or both. Each new round of training requires exponentially more compute, yet the improvements look increasingly… incremental.
Case in point: remember when we thought GPT-5 would blow the doors off? It arrived, and critics weren’t impressed.
At some point, the cost of scaling outpaces the benefit. Even if the technology keeps inching forward, the payoff may never make up for what was spent in the first place.
Basically, being the company that spends billions to generate a slightly better model doesn’t guarantee you’ll be the one to profit from it.
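For the math-curious, the crux can be sketched with a toy power-law model. (The shape echoes published scaling-law fits, but the exponent and numbers here are invented purely for illustration, not any lab’s actual figures.) The punchline: each 10x of compute buys a smaller absolute improvement than the last.

```python
# Toy illustration of diminishing returns under a hypothetical
# power-law scaling assumption: model "loss" falls as compute ** -alpha.
# ALPHA and the starting compute are made-up values for illustration only.

ALPHA = 0.05  # hypothetical scaling exponent (small, as in power-law fits)

def loss(compute: float) -> float:
    """Hypothetical loss as a function of training compute."""
    return compute ** -ALPHA

def improvements(start: float = 1.0, steps: int = 5, factor: float = 10.0):
    """Absolute loss improvement gained by each successive `factor`x of compute."""
    gains = []
    c = start
    for _ in range(steps):
        gains.append(loss(c) - loss(c * factor))
        c *= factor
    return gains

if __name__ == "__main__":
    for i, gain in enumerate(improvements(), start=1):
        print(f"10x step {i}: loss improvement = {gain:.4f}")
```

Run it and the gains shrink with every step: spending 10x more compute never stops helping, but it helps less each time, which is the whole problem for anyone footing the bill.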
So if the real value of AI isn’t in forever training bigger, hungrier models, then where will it show up in daily life (and the stock market)? Hank thinks the answer lies in small, practical integrations.
Think:
- An AI tool that helps aging individuals manage day-to-day tasks such as calling transportation, managing medications, or troubleshooting technology.
- An AI suite that helps artists share and market their work without sacrificing hours to social media.
- Educational AI tools that act like a study buddy when your human one is busy: quizzing you, rephrasing concepts, helping you actually learn.
These don’t require billion-dollar breakthroughs. They’re cheap, targeted, and genuinely useful implementations of a technology that is already almost there. They save time, increase access, and quietly improve lives.
The catch? That value is diffuse, spread across thousands of small businesses, classrooms, and households. It’s not the kind of “big bet” the companies at the top of the stock market are spending billions to chase. But if you care about how people actually live and work, it’s the more interesting story.
And those giant, resource-hungry data centers? The industrial behemoths in disguise? I’m still hoping we never have to build them.
A Side Speculation
One more thought I can’t shake: deepfakes. (Yes, those blasted trampoline-jumping raccoons got me too.)
The point where we can’t always tell reality from fiction, even in video format? It’s here. And TRYING REALLY HARD TO LOOK AT THE BRIGHT SIDE, I’m wondering (and hoping) whether this might spark a 21st-century renaissance in trustworthy journalism. Imagine a world where the value of a subscription isn’t just access to stories, but a guarantee: this video was captured by our staff, on the ground, verified by humans.
In an internet overrun by algorithmic crappola, real reporters could become a bastion of truth. Maybe that’s naive. Maybe it’s hope disguised as analysis. But if AI erodes trust in the noise, human institutions of trust may matter more than ever.
Want to read a free version of Cal’s New Yorker article? He published a shorter version on his website: What if AI Doesn’t Get Much Better Than This?
This video is more about manufacturing and fast fashion, but for a great explanation of communities bearing negative externalities of industrialization, check out The Story of Stuff. It’s a classic for a reason.