The AI calculation everyone is making

A response to the 'Global Intelligence Crisis', from someone who wants to dismiss it... but can't.

ai career software-engineering

If you’ve been anywhere near AI discourse this last week, you’ve probably seen the Citrini Research piece, “The 2028 Global Intelligence Crisis.” If you haven’t: it’s a speculative memo written from June 2028, describing how AI-driven productivity gains trigger mass white-collar unemployment, collapsing consumer spending, and cascading financial crisis. S&P down 38%, unemployment above 10%. You can read the full thing here.

The piece went viral over the weekend, racking up around 16 million views on X. Michael Burry shared it with the comment “And you think I’m bearish.” Then the markets opened on Monday, and the Dow dropped over 800 points. IBM fell 13%, its biggest single-day decline since 2000. Software, payments, and delivery stocks all took hits. A speculative thought experiment, clearly labelled as fiction, moved billions.

That’s why the response has been so fierce. Economists and strategists have lined up to explain why the mechanics don’t work - production generates income, compute costs create natural brakes, institutions adapt. They’re probably right. And 2028 feels like shock value. AI is moving fast, but “two years to economic collapse” is doing a lot of heavy lifting.

What I can’t shake is that the rebuttals are all about timeline and mechanics. They’re not about direction. And the direction feels true.

I work at a fintech startup. We’re a small team, and we use AI deliberately to stay that way - not just in engineering, but across the business. Research, operations, manual processes. Not because we’re callous about employment, but because if we don’t, someone else will. Every startup I come across is making the same calculation. It’s not even a calculation anymore. It’s just how you build now.

This is the dynamic Citrini describes: individually rational decisions that sum to something collectively worrying. I can see where this leads. I’m walking there with everyone else. And I don’t know what the alternative is. Stopping wouldn’t change anything - we’d just get trampled while everyone else kept moving.

The economists say institutions will adapt. Governments will respond. New industries will emerge. Maybe. But I watched governments try to coordinate during COVID, and I don’t have much faith that systems designed for slower-moving problems can react to something that compounds every quarter.

That’s the macro concern, and there’s not much I can do about it. So I find myself thinking about what this means closer to home - for my career, for the decisions I make in my role, for what kind of work actually matters in this environment.

I’ve been mulling over the difference between what I’d call a developer and what I’d call a software engineer. This isn’t a distinction most people make - the terms get used interchangeably - but I think it matters now.

A developer, in this framing, receives requirements and builds to spec. The ticket says what to do, they do it, the ticket closes. They trust that someone else has thought about the bigger picture. That’s the work AI does well. Requirements in, code out, no need to question the wider context.

A software engineer is different. They take ownership. They hold the whole system in their head. They interrogate requirements rather than just executing them. They ask whether this is even the right thing to build, whether there’s a better way, whether the edge cases have been considered. They’re part of the decision-making, not downstream of it.

That work still needs humans. Probably for a while yet.

But it raises a question I keep coming back to: where do future software engineers come from?

The craft has always been learned by doing. You write bad code, ship things that break, debug production incidents at 2am, slowly build intuition for why certain patterns exist. The junior and mid-level years aren’t just about output - they’re about developing judgment. If those roles hollow out, what happens to the pipeline?

I don’t know. History suggests this worry comes up every time the industry abstracts upward. When the industry moved from assembly to higher-level languages, people said something essential would be lost. The industry found ways to cope. Maybe this is the same.

But previous abstractions still required human judgment at every layer. This one might not. And we won’t know until the current generation of senior engineers starts retiring and we see who’s behind them.

I’m not predicting anything. I don’t have the economics background to say whether Citrini’s scenario holds together, and the people who do seem sceptical.

But I’m also not dismissing it. Something about the direction feels right, even if the timeline doesn’t. And if there’s any safeguard for engineers in this environment, I think it’s in ownership - understanding why something should exist, not just how to make it exist. Being part of the thinking, not just the execution. That’s harder to automate than writing code to spec.

At least for now.

I don’t have a neat conclusion. I suspect most of us don’t. We’re just building, and watching, and wondering what it adds up to.



Human written, AI assisted.