There’s something off in the “dev with AI or die” narrative

This content originally appeared on DEV Community and was authored by Etienne Burdet

It's everywhere, it's eating up every other dev topic debate: learn to dev with AI or you'll be replaced. As a dev, I can't help but feel threatened by these claims. Way more than I should.

I mean, I love hacking around with the new cool toy as much as anybody. I use code assistants all day long, from Cursor tab to RefactAI. I felt so clever when I automated ticket creation through Jira MCP, with full codebase and commit-history knowledge. I felt not so clever when my "instant" refactoring took me a full day to review and rewrite to fix "just that little bug". I love automating my work as much as the next dev, but I also love to measure the results of automation.

So when big names come up with dull articles about the "4 levels of dev with AI", articles that are this close to "top 5 agents to pop a unicorn out of thin air in 2025", I feel something weird. They use code assistants dozens of times less than I do, yet they see "evidence" of a total world transformation I don't even see the beginning of in my day-to-day use.

Where is the collapse of LLM reasoning? Where are the small models winning in the end? Where are the control groups? Show me the data, I'll believe you. Where are the failures too? I've experienced more than one, surely you have data about that too if you're a big tech CTO, no? And then again, why prophesy so much before having the data? The natural move would be to train your team, maybe even a bit secretly, then outperform everybody and finally brag once you have secured the win. Right?

There must be something else.

"Something else" is often money

The first obvious reason is of course to have people burn billions in LLM tokens. This kind of narrative is really effective with C-levels who aren't really using these tools daily. It makes them feel like they are losing ground on a huge competitive advantage, so they'll force their teams to use LLMs for everything.

There is a nice side effect: you get social experimentation for free. I mean, hacking around commands, .cursorrules and CLAUDE.md files is fun, but really the final product will have most of it baked in. But people are doing it for free. No wait, people are paying to do it! They are running the experiment at scale while burning tokens, documenting failures along the way. How convenient for an LLM provider.

And last but not least: hype drives capital, as Meredith Whittaker has put it better than anybody.

Now, a token dealer overhyping their product, that I get. But other "big money people" being so eager to buy it that they wouldn't run A/B tests or measure gains? There's a bit more to it than FOMO or paying rent forever to an LLM provider. When I hear "Most of your code will be written by AI", I can't help but hear "now I own your means of production, so you better keep a low profile".

The good ol' dehumanization dream

How many times have we seen a desperate look from a PO, customer or designer that says "But it's just a form, why is it that hard? You lazy devs will invent anything to avoid working"? Because we are a bunch of unmanageable weirdos, aren't we? Sometimes I wonder if I don't ship software just to prove I'm an uncontrollable smartass. Why do we always have an obscure HN link that makes our CEO's "big vision" fall flat? Why is there always one of us to run a script proving the financial forecasts wrong? Why do we keep inventing squads and guilds and chapters and epics? Can't we just sit, not STAND, at our desks and do as the boss says?

Now, if you're managing a lot of devs, I get the temptation: "I have a big vision for humanity (which incidentally makes me rich and famous, but that's not the point) and AI transforms it into a polished product". No more "what about…?", no more pushback. No more human messiness. And if the humans are not happy, the "big boss" can now close the precious token tap and hand it to someone else. That would be much simpler, right? It would be all too tempting to make devs more replaceable, more interchangeable, wouldn't it? Truly, I find it hard to ignore the power dynamics behind all of that.

It's a pattern we've seen before, from the Luddites to the mines, from silk workers to the automotive industry or agriculture. With every wave of automation, there's also a shift of power being pushed alongside it. Whether it's wages, whether it's unions, whether it's outsourcing, there's often a power struggle that goes far beyond the actual productivity gain. And power is where we believe it is, right? (Yes, I'm totally citing Game of Thrones as the top of my philosophical references.)

Be careful what you wish for

For the sake of argument again, let's admit that AI indeed replaces a good part of "the work". Now let me put up a caricatured problem to make the point. We have two teams designing the same product. One is business-school-only, with zero tech or design knowledge, and automates engineering and design with AI. The other is a group of hacky devs and UX people who automate PPTs, marketing and C-level jobs. Who do you place your money on, honestly?

Fun fact: there are only two things where I'm sure AI is saving me a lot of time as of now, and those are writing tickets and doing large searches. Claude Code (or Goose) with Jira MCP is excellent at writing tickets from messy feedback and at looking in the codebase to check that the estimated workload makes sense. Perplexity can save me an afternoon of searching, with excellent links I could never have found. Large code refactorings, though? Some of them "worked", but I'm still unsure whether I spent more time reviewing them and eliminating dead code than I would have spent writing them from scratch.
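
For the curious, here's roughly what that ticket-writing flow looks like with the MCP plumbing stripped away. This is a sketch under assumptions, not my actual setup: the Jira instance, the credentials and the local model endpoint are placeholders, and it goes straight at Jira's REST v2 issue endpoint instead of through an MCP server.

```python
# A sketch of "messy feedback in, ticket out". Assumptions, not my setup:
# a local Ollama-style model endpoint and placeholder Jira credentials.
import base64
import json
import urllib.request

JIRA_BASE = "https://your-team.atlassian.net"  # placeholder instance
AUTH = base64.b64encode(b"you@example.com:your-api-token").decode()

def ask_model(prompt: str, model: str = "devstral") -> str:
    """Draft ticket text with a small local model (Ollama-style endpoint)."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

def create_issue(summary: str, description: str, project: str = "DEV") -> dict:
    """File the drafted ticket through Jira's REST v2 issue endpoint."""
    payload = {"fields": {
        "project": {"key": project},
        "summary": summary,
        "description": description,
        "issuetype": {"name": "Task"},
    }}
    req = urllib.request.Request(
        f"{JIRA_BASE}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {AUTH}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    feedback = "users say the export button sometimes does nothing on big files"
    summary = ask_model(f"One-line ticket title for: {feedback}").strip()
    description = ask_model(f"Short ticket description with repro steps for: {feedback}")
    print(create_issue(summary, description))
```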

And then, what if in the end we only need very small local models, called by pretty sophisticated agents that are mostly regular code? What if it's pseudo-code-to-code models, transformers or syntax-guided synthesis that wins? Why do we need "language" models if it's autonomous anyway? Or is it? What if writing a specialized agent for your codebase becomes a required skill? Kimi K2 or Devstral aren't that far off; we don't need an expensive Claude to grep and make todos, to name only a few things.
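
Here's what I mean, as a toy sketch. Everything below is an assumption for illustration: a small model served locally behind an Ollama-style endpoint, and plain old grep doing the "agent" work. The scaffolding is regular code; the model only summarizes.

```python
# A toy "small local model + mostly regular code" agent: grep does the work,
# the model only turns the hits into todos. The endpoint and model name are
# assumptions for illustration, not a claim about any particular product.
import json
import subprocess
import urllib.request

def grep_codebase(pattern: str, path: str = ".") -> str:
    """The 'agentic' part that needs no LLM at all: plain grep."""
    result = subprocess.run(
        ["grep", "-rn", "--include=*.py", pattern, path],
        capture_output=True, text=True,
    )
    return result.stdout[:4000]  # keep the prompt small for a small model

def ask_local_model(prompt: str, model: str = "devstral") -> str:
    """Call a small local model through an Ollama-style /api/generate endpoint."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    hits = grep_codebase("TODO")
    print(ask_local_model("Turn these grep hits into a prioritized todo list:\n" + hits))
```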

At this point, I think we just don't know what "AI for dev" means. We don't know "how much" and we don't know "what kind". I have zero clue which approaches will work, nor at what magnitude. What I see, though, is that one narrative is pushed way harder than the others: the one where devs lose power and buy tokens by the billions.

How convenient, when so many other narratives have teams of 5 hackers disrupting giants on the cheap, like the early Rails days on steroids.

What's the sane approach then?

If authoritarianism is the bourgeoisie being scared, then what to think of Sam Altman being scared by GPT-5 while doing whatever he can to convince your boss that you're gonna be addicted to his tokens? Maybe he is more scared for his job than for ours. We don't have proof either way, but it should at least be explored, right?

In this light, the skeptic developer crowd is absolutely needed. We need some people who won't budge until the product is actually good, with proven gains. They are the reference against which we measure, the control group. Early adopters should be outliers, not the bulk of the market. This is how we develop good products in the first place, and it's why we don't like having them forced on us. The urge to push an unfinished product feels dubious to us at best, and rightfully so.

The sane approach is just to stay curious, test and measure. You're a hacky enthusiast? Fine, build some agents and test them. You're a pragmatic skeptic? Totally fine as well; don't use it until it actually makes your job faster. We need to know that too. I totally understand that a large company would run an A/B test of teams with and without LLM tools and measure the output. I don't understand why they would force the result if efficiency is really what's at stake.
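
Measuring doesn't even have to be fancy. Here's a toy sketch of the statistics side of such an A/B test, with entirely made-up numbers; the point is just that checking whether a difference is real takes a few lines of code.

```python
# A minimal "measure, don't prophesy" sketch: a permutation test on
# (entirely made-up) cycle times for two teams, one with LLM tools, one
# without. The numbers are placeholders for illustration, not real data.
import random

with_llm = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2]     # hypothetical days per ticket
without_llm = [4.6, 5.1, 4.3, 5.4, 4.9, 4.5, 5.2]  # hypothetical days per ticket

def permutation_test(a: list[float], b: list[float], n: int = 10_000) -> float:
    """Two-sided p-value for the difference in means under random relabeling."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n):
        random.shuffle(pooled)
        new_a, new_b = pooled[: len(a)], pooled[len(a):]
        diff = abs(sum(new_a) / len(new_a) - sum(new_b) / len(new_b))
        if diff >= observed:
            hits += 1
    return hits / n

print(f"p = {permutation_test(with_llm, without_llm):.3f}")
```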

All those prophecies really make me think that it's not only efficiency that's at stake, though. It's shifting people's perception of where the power is. But remember: if a "visionary" CEO fires you to replace you with AI, don't forget to run Claude on your full company folder and ask it to make the slides, financial projections and OKRs for the quarter. Whether you email that to them as a middle finger or keep it to yourself to spin up your own thing is up to you. Just note that maybe it's not you who got replaced by AI in the end.

Cover is Waterfront by Joseph Kaplan

