I was coding with Claude the other day, and like many engineers, I experienced the magical surge of productivity, and awe, watching it produce options for solutions, with reasoned pros and cons, and then implement whichever one I thought best (I’m not full vibe coding…yet). It was like having an enchanted loom that could produce the patterns I wanted but sometimes struggled to execute, whether due to limited time, patience, brain power and yes, sometimes, skill. Of course you often have to steer it hard to stop it falling off a cliff, or trying to push you off one. Some of that steering was around architectural decisions and quality engineering, but some was around things it couldn’t possibly know, like a data source’s reliability, a particular stakeholder’s preferences, or the skillset of the team who’ll maintain the solution. But mostly it’s been pretty wonderful. I imagined my situation was like that of the Luddite weavers, except instead of the employer owning the machinery, I owned the machinery, and my weaving skills were still required (for now) to ensure it operated effectively.
That paragraph was all I was planning to write, just a short something about what it’s been like coding with AI. Before putting it out there I thought it best to brush up on the Luddites, as my recollection is from secondary school history, which is now quite… historical. I also wanted to check the comparison wasn’t so well trodden as to make it trite. So I did some superficial Googling, at which point I became, very quickly, very confused. The comparison was well established, but almost every piece came to the opposite conclusion to mine, stating that the Luddite parallel is simply accurate, full stop, no qualifications. These pieces all told a variation of ‘the Luddites fought against being put out of work by automated machines, but they ultimately failed, and automated textile production became the dominant norm - and this is what’s going to happen with engineers and AI code generation’. A scattering of practicing developers pushed back in general terms, but nobody had systematically challenged the parallel. At this point, I knew I needed to do some deep research. And like any self-respecting software engineer writing about how AI may or may not be taking our jobs, I asked AI to do it for me.
The parallels aren’t totally wrong. The Luddites were skilled craftspeople who were being put out of work by new machines. Brian Merchant's Blood in the Machine makes this case thoroughly, and that part does map onto what's happening now: AI coding tools are being deployed as managerial leverage to justify smaller teams, to depress wages and to shift bargaining power away from developers. You also see this in the hiring freezes and firings justified by "AI productivity gains". (Whether those gains are real, or the firms simply over-hired in the pandemic, AI still receives the blame.) But the parallel breaks when you look more closely at what kind of skill is being disrupted, what it’s being replaced with, and what kind of product is being made.
The new machines targeted by the Luddites - power looms, shearing frames, wide knitting frames - all shared the common feature of allowing unskilled or semi-skilled workers to produce what had previously required years of craft training. The impact on skill was pretty straightforward: less was needed. Contemporary studies suggest AI tools can similarly close the gap between junior and senior output on some tasks, typically by generating functional code faster. But this is far from the whole picture. Yes, AI has made producing functional code a lot easier for less experienced engineers, or non-engineers. But it hasn’t made the skill of judgement any less necessary. Understanding the why, and not just the what and how, is what allows you to steer the AI-generated implementation away from something that is overly complex, or weird, or that replaces your whole tech stack just to ‘fix’ what is actually a typo.
The mismatch continues when we look at the respective roles of textiles and software, and the nature and scale of the labour required to maintain them. Unlike textiles, software runs core societal systems where failure causes cascading disruption: think patient records, banking transactions or energy grid distribution. Textiles, while central to many societal functions, do not share this failure profile - textiles serve systems, they aren’t systems themselves.
This role difference is compounded by the maintenance the two products require. When a piece of textile starts being used by consumers, its maintenance burden is typically low. Maybe there’s some tailoring and (ahem) patching, and of course there’s washing, but there’s not a high burden. When software starts being used by consumers, it is typically heavily and continuously worked on. New features are added, breaking changes are introduced by third-party dependencies, bugs are noticed in the early hours and fixed. It takes teams of software engineers, working full-time, just to keep software working as intended. To do this well, engineers need to understand the intent behind the code. It’s this understanding that means additions or fixes not only work, but work without compromising the cohesiveness and quality of the project. It’s a very different undertaking compared to the odd trip to the tailor’s or doing the weekly laundry. Software is rarely ‘finished’ in the way that textiles are, and that high maintenance burden undermines the Luddite comparison.
What’s more, current evidence suggests that AI-generated code is actually increasing this maintenance burden. Google's DORA 2024 report found that increased AI usage correlated with decreased delivery stability, and GitClear's research documents surging code duplication. You can automate production and still make the overall system more expensive to run if the thing you're producing requires more human attention after it's made than before. Again, this is a very different dynamic to what happened with machine-made cloth.
Some of those reports are from 2024, which in terms of model progression is ancient history. What if that maintenance burden wasn’t on humans? What if it was on AI instead? This is where projects like Devin, Amazon Q agents, and the swarming architectures of Steve Yegge’s Gas Town are heading: fleets of AI agents handling not just code generation but bug triage, testing, and resolving merge conflicts. We’re moving from an engineer working an enchanted loom to one directing a team of enchanted weavers (where the direction still requires understanding every thread and pattern). The effort of getting work done moves even further from the implementation to the asking. (And, as all software engineers know, a lot of the complexity in this line of work boils down to the asking.) And yet, it’s still not there: SWE-bench Pro scores collapse on unfamiliar codebases, and Devin handles bounded tasks but not ambiguous system-level work that requires judgement from context outside the ticket. Gas Town requires “Stage 7+” operators and "will rip your face off" without expertise. Despite the extra abilities of multi-agent solutions, depth of skill and experience remain crucial to operating these engines of automation.
But suppose, for argument’s sake, it does get there. There’s a system that can generate, fix and extend code to a high standard, well above what is possible today. Does the Luddite parallel hold up then? Probably not - the history of systems and tools shows us that they always break at some point. So you need someone to fix them, and that someone can only fix them with an understanding of how the system works, so that someone is by definition a software engineer. It’s uncertain how many of us will be needed as the systems become more reliable, but we’ll still need to be highly skilled, because we’re becoming fixers of last resort.