tl;dr There are two types of developers right now: those still polishing the axe handle and those ripping through forests with a chainsaw. The person paying for the tree to come down doesn’t care which tool you use. If you’re still refusing to pick up the chainsaw, this post is for you.
I wrote this article myself; it was reviewed and refined with help from AI tools: Claude and Grammarly.
The Abstraction Ladder
We’ve been climbing the abstraction ladder in software for decades. Machine code to assembly. Assembly to C. C to high-level languages with garbage collection and type systems. Each rung let us express intent more clearly and worry less about the mechanics underneath.
We’ve just climbed another rung. We can describe what we want in plain English and get working code back. Natural language is now a programming interface. Not a gimmick - an actual shift in how software gets made.
And it’s splitting the industry in two.
Two Types of Developer
Most development teams right now have both types.
There’s the developer who spends time perfecting their Vim config, hand-crafts every function, knows the keyboard shortcut for everything and is proud of it. They view AI-generated code with suspicion, maybe contempt. “I don’t need a crutch.” “It just produces slop.” “Real developers write their own code.” They’ve spent years mastering the axe - the angle of the swing, the sharpness of the blade, the rhythm of the work - and they’re not about to put it down for some loud, unreliable machine.
Then there’s the developer felling trees at five times the rate. They’re not precious about how the code gets written. They’re precious about whether it works, whether it’s tested, and whether it ships. They use Claude Code, Copilot, Cursor, whatever gets the job done. They’ve worked out that the hard part of software was never typing. It was thinking. And now they spend more time thinking and less time typing.
Which one are you?
The Stigma Has Flipped
A year or two ago, there was a stigma around AI tools. Admitting you’d used ChatGPT to help write code felt a bit like admitting you’d copied from Stack Overflow without understanding the answer. People would whisper about it, or not mention it at all.
That’s flipped. Today, if you walk into a job interview and proudly declare that you refuse to use AI tools, you’re not signalling craftsmanship. You’re signalling that you’ll be slower, less productive, and unwilling to adapt. The stigma hasn’t disappeared. It’s just changed direction.
The Chainsaw and the Axe
For centuries, if you needed to fell a tree, the axe was it. Skilled lumberjacks took genuine pride in their technique.
Then the chainsaw arrived.
Some resisted. They’d spent years mastering the axe. They’d built their identity around it. The chainsaw felt crude, noisy, maybe a bit like cheating. But here’s the thing: the person who needs a tree felled doesn’t care about your axe technique. They care about the tree being down, safely, quickly, and affordably. If one person shows up with a chainsaw and another with an axe, the chainsaw wins on speed and cost every time.
If you’re more attached to the craftsmanship of the axe than you are to the actual goal of felling the tree, you’ve confused the tool for the job.
Be Honest With Yourself
I’ve seen a few patterns in the developers who are resisting this. See if any sound familiar.
The “I type faster” developer who refuses autocomplete suggestions because they know what they want. The “at least I know it’s right” developer who spends hours on manual refactors that an agent could do in minutes. The “it’s probably wrong” developer who dismisses AI-generated code without actually reading it. And the “I don’t trust it” developer who hasn’t actually invested any time in learning how to use it properly.
There’s also a deeper one that I think a lot of us are dancing around, myself included: the worry that if AI can do your job, maybe your job wasn’t as hard as you thought.
It’s worth being honest about this. Because a lot of what we do as developers isn’t hard. It’s tedious. It’s boilerplate. It’s the hundredth CRUD endpoint, the hundredth React form, the hundredth migration script. The hard part was always the design, the trade-offs, the debugging, understanding what to build and why. AI doesn’t replace that. It replaces the typing.
Nobody Cares About Axes
Your product owner doesn’t care whether you hand-crafted the code in Vim or generated it with Claude Code. They care whether the feature works, whether it shipped on time, and whether customers are happy. Full stop.
And if you think about it, this isn’t even new. Product owners have always had to deal with agents that don’t deliver what they want. Those agents were called developers. Every PO has sat in a sprint review watching a demo of something that technically meets the acceptance criteria but completely misses the spirit of what was asked for. Every PO has written a story that seemed perfectly clear to them, only to get back something that makes them wonder if the developer even read it.
That’s exactly what developers now face when wrangling AI agents.
The developer who complains “the AI doesn’t understand what I want” is experiencing the same frustration their PO has felt about them for years. The skills are the same: writing clear requirements, providing good context, reviewing output critically, iterating on feedback. The developers who were already good at understanding what the business actually needed - not just what the ticket said - are the ones who seem to be getting the most out of AI tools.
The irony writes itself.
Where the Effort Moves
The work is shifting to either end of the process. On the input side: planning, design, writing good user stories, breaking features down into well-defined, testable pieces. “Garbage in, garbage out” has never been more literal. On the output side: PR reviews, automated testing, quality gates, security scanning. Someone still needs to understand what was produced and why, and to verify it actually does what it should.
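To make the output side concrete, here’s a minimal sketch of what an automated quality gate might look like - run the checks, collect the failures, and only pass AI-generated changes on to human review if everything is green. The specific check commands are assumptions; substitute whatever test runner, linter, and security scanner your project actually uses.

```python
import subprocess

def run_gate(checks):
    """Run each (name, command) check in sequence; return the names of any that failed."""
    failures = []
    for name, cmd in checks:
        # A non-zero exit code from any check marks it as failed.
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failures.append(name)
    return failures

# Example wiring (hypothetical commands - swap in your own toolchain):
# failed = run_gate([
#     ("unit tests", ["pytest", "-q"]),
#     ("lint", ["ruff", "check", "."]),
#     ("security scan", ["bandit", "-r", "src"]),
# ])
# if failed: block the merge; otherwise, hand off to human review.
```

The point isn’t this particular script - it’s that the gate runs the same way whether a human or an agent wrote the code, which is exactly why the effort moves here.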
The developer’s role is moving from “person who writes code” to “person who orchestrates the creation and validation of code”. That’s not a demotion. If anything, it’s a promotion. But it requires letting go of the identity that was built around typing.
Where This Is Heading
The models are getting better. Fast. The quality of AI-generated code today is dramatically better than it was a year ago. The gap between what a model produces and what a skilled developer would write is narrowing. At some point - and I don’t think it’s as far away as we’d like to believe - it will make less and less sense to have a human in the loop for routine coding tasks.
Think about autonomous driving. Right now, we insist on a human behind the wheel because the technology isn’t quite there yet. But when self-driving cars have a safety record that significantly exceeds human drivers, it starts to become irresponsible to insist on human control. At that point, human driving becomes recreational. Something you do on a track for fun, not on public roads.
I think coding is on a similar trajectory. There will come a point where AI-generated code is more reliable, more consistent, and more secure than what most humans produce. When that happens, insisting on hand-written code for routine work will look a lot like insisting on hand-calculated spreadsheets when Excel exists.
There’ll still be a need for human review and oversight, especially in safety-critical and highly regulated domains. But the direction of travel is pretty clear.
The Messy Bit in the Middle
That future is still a while away. But it’ll arrive faster than most of us expect.
Right now we’re in the awkward in-between phase. Working out how to incorporate LLMs and AI agents into existing workflows. How to get the efficiency gains without sacrificing quality. How to maintain the right level of human oversight without creating bottlenecks that negate the speed advantage.
This is genuinely hard. It means rethinking processes, team structures, and what we even value in a developer. It means getting comfortable with a new way of working while still being accountable for the outcomes. And it means accepting that the tools are imperfect today, but that waiting for perfection isn’t an option.
The organisations and developers who work this out first will have an enormous advantage. The ones who keep polishing the axe because that’s what they know will get left behind.
Pick Up the Chainsaw
If you’re still holding the axe, I’m not asking you to throw it away. I’m asking you to be honest about why you’re still holding it. Is it because the axe genuinely produces better outcomes in your context? Or is it because you’ve built your professional identity around the swing?
If it’s the second one, put it down. Pick up the chainsaw. Learn how to use it safely. It’ll feel clumsy at first. You’ll miss some of the control. But you’ll fell more trees in a week than you used to in a month.
And if you’re already using the chainsaw - don’t get complacent. The chainsaws are getting better too. The developers who thrive won’t be the ones who adopted AI tools early. They’ll be the ones who keep adapting.
The tree doesn’t care which tool you use. Neither does the person paying for it to come down.
Thanks for reading.
