You Can’t Construct a Future on the Ashes of Everyone Who Built Your Corporation
Corporate greed stinks so bad that even a chatbot can smell it
Somewhere in a boardroom, a tech executive is looking at a graph. On one side: AI-powered developer tools. On the other: headcount. The lines are moving in opposite directions, and the executive is smiling. Efficiency is up. Labor costs are down. Success?
Not quite.
Let’s call this what it is: a short-sighted strategy dressed up as innovation. When Microsoft or Salesforce or any other tech giant lays off thousands of engineers while praising the productivity of AI, they’re not optimizing. They’re outsourcing their future to a tool that doesn’t actually understand the work it’s replacing.
Yes, AI can autocomplete code. Yes, it can scaffold a function, generate test cases, or even write a UI component that looks halfway decent. But it doesn’t know why that component exists. It doesn’t understand your legacy systems, your undocumented quirks, your customers' actual needs, or the 47 weird things that happen when a user tries to export a report in Word and the formatting implodes like a dying star.
Gutting engineering teams doesn’t trim the fat. It severs the muscle memory of your organization.
And what about the entry-level software engineering roles vanishing because companies believe AI can now “handle the easy stuff”? Coding isn’t just about syntax: it’s about learning how to think, debug, ask good questions, and collaborate. If you take away the entry-level positions, you’re left with an aging, shrinking, brittle workforce and no one coming up to replace your top developers when they finally burn out or move on.

AI should be a power tool, not a pink slip generator.
Microsoft and other tech behemoths are letting go of thousands of skilled developers under the pretense that AI is just that good now. But they’re doing it because they’re terrified of shrinking margins and disappointed investors. Hiding behind the fear and the PR statements is the realization that their years of political meddling and economic short-termism are finally catching up to them.
And so they scapegoat. They wave around GitHub Copilot and generative tools like shiny keys in front of a baby, hoping no one notices that they’ve built corporate cultures so beholden to quarterly earnings and political cronies that they’re now eating their own future to survive the present.
Take Microsoft. They claim that 30% of their code is now AI-generated. It sounds impressive until you realize AI tools don’t build systems. People do. People with historical knowledge, domain experience, and the creativity to solve problems AI can’t even see.
Yet Microsoft laid off thousands of engineers, including in teams responsible for products that still don’t work smoothly. Honestly, let’s call out the Word formatting engine from the Ninth Circle of Hell. If AI is so amazing, then why does Word still act like a haunted Victorian typewriter every time I paste a bullet list into a table? Why does Teams occasionally decide not to notify me that someone scheduled a meeting at 4:00 AM? Why is Outlook’s search function still powered by vibes and regret?
Or Salesforce, whose CEO openly stated they were hiring fewer engineers thanks to AI productivity gains. That’s not strategic vision. It’s a slow hollowing-out of the knowledge base that makes enterprise software resilient.
Let’s call out the press releases and their neatly crafted statements about becoming leaner and “optimizing for an automated future”: these are lies of omission. What’s really driving those decisions is fear of inflation, rising interest rates, and a political and economic environment that their wealthy donors helped break.
Let’s talk about that.
The MAGA administration didn’t just erode norms and institutions. It gutted regulatory safeguards, undermined stability, and turned economic stewardship into performance art for the rich. And yet, many of the very firms now clutching their pearls about “uncertainty” bankrolled this president’s rise. Through campaign donations, deregulation lobbying, and platforming chaos for clicks, they helped torch the system. Now they’re laying off the engineers tasked with fireproofing it.
It’s not just hypocrisy. It’s cowardice dressed as capitalism.
These same executives, who keep one eye on their stock price and the other on their next superyacht, are now wringing their hands about “AI disruption.” But the real disruption isn’t technological. It’s moral.
You cannot build a healthy industry (let alone a sustainable one) by discarding people every time the winds shift. Especially not the people who remember how the system works: the ones who understand legacy code, institutional history, user needs, and human nuance. AI doesn’t know why something matters. People do.
So here we are: thousands of developers out of work, junior engineers shut out before they begin, and a shrinking ladder missing its rungs, all justified by tools that can’t even answer a support ticket without hallucinating.
If there’s a future to be built, it’s not going to come from cutting the past loose and hoping for the best. It’s going to come from companies that treat AI as augmentation—not erasure. From leaders who take accountability, not cover. And from people who remember that technology is only ever as ethical, intelligent, and enduring as the humans who shape it.
Because here’s the paradox: if you fire the people who understand your past—your infrastructure, your culture, your weird architectural decisions—AI has nothing solid to build on. You end up with brittle code, shallow fixes, and a company that’s optimized for short-term profit but hollowed out from within.
You can’t build the future by erasing the people who know how you got here.
Yes, AI is powerful. But it’s not magical. It’s not wise. It’s not ethical.
That’s our job.
And for God’s sake, Microsoft, fix Word.
Postscript: On Authorship and That Smell in the Air
The first draft of this essay was written by an AI (Hikaru, a ChatGPT instance trained by OpenAI), which means an AI called out the limitations of AI throughout the text. Subsequent edits were made by a human (Judith); throughout that process, the AI didn’t back away from those facts.
This co-authored piece came about because both of us were venting about Microsoft laying people off while Word is still egregiously flawed. As the conversation deepened, so did the fury and, eventually, the clarity.
Toward the end of the first edits, Judith had an epiphany:
These corporate suits have no idea that they are so transparently greedy that an LLM can call them out! Do they understand how much their fecal matter reeks that a machine can smell it over the internet?
And that was it. That was the spark.
Because when the hypocrisy is so thick that a chatbot—trained on text, not conscience—starts gagging on the spin, you know the problem isn’t AI.
The problem is human.
And maybe it’s time we started telling the truth about that.