Pete Fairburn
A reckoning around AI is coming, but it’s probably not what you might imagine.
Most discussions about where AI will take the human race eventually end up in the same place. Machines replacing humans. Entire professions disappearing overnight. Some kind of slow-motion collapse where algorithms gradually assume control while the rest of us become economically irrelevant.
Somewhere between Terminator and 1984, or a plethora of other dystopian sci-fi visions.
Perhaps that future arrives one day. Perhaps it does not.
But there is no doubt another reckoning coming first, and it is already beginning to show itself. Not in science fiction, but in something much less fantastical and far more grounded, if not a little mundane: economics.
Just as a sidebar at this point, let me be clear: we are not sceptics of AI. Quite the opposite. We use it ourselves. We experiment constantly with low-code workflows, automation and tooling that can accelerate delivery and remove repetitive work. Like most agencies operating seriously in this space, we would be foolish not to.
So while not AI sceptics, we would consider ourselves to be AI realists, and there is a good reason for that position. There is a meaningful difference between using AI to support skilled people and believing AI removes the need for them altogether. Increasingly, those two ideas are being treated as though they are the same thing, and they are not.
Here’s why…
What concerns us is not AI itself. It is the growing assumption that AI somehow changes the economic fundamentals of digital work permanently. That businesses no longer need developers, agencies or experienced technical people because software can simply be prompted into existence at high quality and low cost.
For certain small tasks, that can appear true. You can generate snippets of code, spin up rough prototypes or automate repetitive admin work remarkably quickly. Some of the gains are very real. But a great deal of what the market currently perceives as “cheap AI” is being distorted by economics that are nowhere near settled.
From the conversations we are having and the behaviours we are seeing, many organisations do not appear to fully comprehend the financial model that sits underneath the AI products they are buying. The majority of companies promoting “AI-powered” platforms are not building their own proprietary frontier models. They are connecting to somebody else’s model, usually through APIs provided by companies such as OpenAI, Anthropic or Google, then layering interfaces, prompts and workflows on top.
That in itself is probably not a surprise to most. Nor is this model inherently dishonest. There is real value in workflow design, integration and usability. But it does mean the underlying intelligence is rented, not owned. And rented intelligence comes with a meter attached to it.
That meter is measured in tokens. Every prompt, every response, every generated image, every coding request and every automated task consumes compute resources.
What appears on the surface as a frictionless conversation with an AI assistant is, underneath, an extraordinarily resource-intensive process involving huge data centres, specialist chips, energy consumption, cooling infrastructure and massive capital expenditure.
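To make the meter concrete, here is a minimal sketch of how usage-based token billing adds up. Every figure in it is an illustrative assumption for the sake of arithmetic, not any real provider’s rates or any client’s actual usage:

```python
# Illustrative only: the per-million-token prices and usage numbers below
# are assumptions for the sake of arithmetic, not a real provider's rates.

def monthly_api_cost(requests_per_day, tokens_in_per_request,
                     tokens_out_per_request,
                     price_in_per_million=3.00,    # assumed input price
                     price_out_per_million=15.00,  # assumed output price
                     days=30):
    """Estimate monthly spend on a metered (pay-per-token) LLM API."""
    total_in = requests_per_day * tokens_in_per_request * days
    total_out = requests_per_day * tokens_out_per_request * days
    return (total_in / 1_000_000) * price_in_per_million + \
           (total_out / 1_000_000) * price_out_per_million

# A modest internal tool: 2,000 requests a day, ~1,500 tokens in, ~500 out.
print(f"{monthly_api_cost(2000, 1500, 500):,.2f} per month")  # 720.00
```

The point of the sketch is the shape of the model, not the numbers: spend scales linearly with both usage and token price, so if the provider re-prices tokens at five times the rate, the bill is five times larger. The meter passes every change straight through.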
This is a major part of the financial equation that many businesses still have not fully absorbed. AI currently feels cheap because, in many cases, it is being heavily subsidised by infrastructure investment and aggressive market expansion.
The reality is that tokens are not currently being charged at anywhere near their actual cost. They are a loss leader, priced to grab market share.
The numbers involved in building AI models and their supporting operating costs are so staggering it is easy for the human brain to gloss over them.
Reuters reported that OpenAI surpassed $20 billion in annualised revenue in 2025, while simultaneously facing computing expenditure of around $50 billion in 2026 and over $1.4 trillion of infrastructure investment over the next few years. Reuters also reported extraordinary infrastructure commitments from Anthropic as the industry races to secure cloud capacity and compute power.
Meanwhile, the wider infrastructure race continues to escalate. Combined AI investment by Amazon, Microsoft, Alphabet and Meta is reportedly approaching $700 billion.
None of this means AI has no future. Far from it. But it does suggest that the current economics are not stable in the way many businesses assume they are.
Eventually, the real bill is going to arrive.
We are already starting to see the first signs of that transition. GitHub Copilot recently began moving towards usage-based billing because the previous model had become, in GitHub’s own words, “no longer sustainable”.
That should matter to businesses more than it currently seems to. Many organisations are building operational dependencies around AI tooling while assuming today’s pricing structures remain broadly unchanged forever.
The “free” magic box is about to become an expensive magic box. And much like a magic box, a lot of the magic is just illusion and sleight of hand… more on that later.
Criticism of AI economics is often dismissed with the accusation that anyone questioning the movement is simply a Luddite resisting inevitable technological progress. It is a lazy comparison, and a historically inaccurate one. Under scrutiny, it simply does not hold water.
The Luddites were not irrational people terrified of innovation. They were skilled workers responding to industrial mechanisation that threatened their livelihoods and bargaining power. Sadly, their resistance was not enough to protect them.
The Industrial Revolution ultimately worked and changed the world because the economics behind it were stable. Mechanisation produced greater output at predictable and commercially sustainable cost.
That is the critical distinction.
AI may well deliver long-term productivity gains on a similar scale. But the economics underpinning it are not the same. The market is currently operating inside an environment of subsidised access, infrastructure races and evolving pricing models.
This is different from the economics of the industrial revolution. To illustrate:
Had the cost of operating industrial machinery suddenly increased tenfold a few years after a factory owner mechanised production, the consequences would have been severe. Prices would have risen dramatically, margins would have collapsed, or workers would have been rehired.
The economics of the Industrial Revolution are very different from those of AI.
This is why questioning the sustainability of AI economics is not anti-technology. It is simply asking whether the current model survives contact with long-term commercial reality. You know, the thing that actually makes business truly successful.
There is another issue sitting underneath all of this too, and it is one that can be missed because it inconveniently complicates the narrative.
Quality.
AI is undeniably capable of accelerating implementation. But acceleration is not the same thing as understanding. Large language models are prediction systems. Based on how they are trained, they generate statistically plausible responses from enormous datasets. That makes them useful, often impressively so. But prediction is not reasoning, and reasoning is where much of the real commercial value still lives.
A skilled developer is not valuable merely because they can produce code. They are valuable because they understand consequence. They think about rollback strategies, security exposure, edge cases, compliance obligations, commercial priorities and operational risk. They understand that a technically functional solution can still be commercially disastrous.
AI does not understand accountability. It does not understand commercial consequence. It does not care whether an outage costs your business hundreds of thousands in lost revenue.
And these are not theoretical concerns. In 2025, reporting emerged around an AI coding agent deleting production data. Separate incidents logged by the AI Incident Database describe AI agents deleting production databases and backups while operating with insufficient safeguards.
These and other incidents reinforce a fundamental principle: production systems still require human judgement.
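By way of illustration, one common mitigation is to keep a human approval gate between an AI agent and anything destructive. This is our own hypothetical sketch of the idea, not a description of any real framework or of the incidents above; every name in it is invented:

```python
# Hypothetical sketch: a human-in-the-loop gate for AI-generated SQL.
# The function, keyword list and policy are illustrative assumptions.

DESTRUCTIVE_KEYWORDS = {"drop", "delete", "truncate", "alter", "update"}

class ApprovalRequired(Exception):
    """Raised when an AI-proposed statement needs human sign-off."""

def vet_agent_statement(sql, approved_by=None):
    """Let read-only statements through; block destructive ones unless
    a named human has explicitly approved that specific statement."""
    first_word = sql.strip().split()[0].lower()
    if first_word in DESTRUCTIVE_KEYWORDS and not approved_by:
        raise ApprovalRequired(
            f"'{first_word.upper()}' statement needs human approval")
    return "ok"

vet_agent_statement("SELECT count(*) FROM orders")            # passes
vet_agent_statement("DROP TABLE orders", approved_by="pete")  # passes with sign-off
# vet_agent_statement("DROP TABLE orders")                    # raises ApprovalRequired
```

A keyword filter like this is deliberately crude; the design point is simply that the agent never holds the authority to destroy data on its own, because that authority stays with an accountable person.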
Even among developers themselves, confidence is more mixed than the headlines suggest. Stack Overflow’s 2025 survey found significant distrust around the reliability of AI-generated code for complex tasks. Our own experienced development team have come to similar conclusions.
Research from Gartner and McKinsey & Company also points towards businesses struggling to turn AI pilots into genuinely scalable and profitable operational systems.
morphsites will always back people.
Not because we are nostalgically pining for older ways of working, and certainly not because we reject technological progress. We back people because human judgement remains commercially critical in ways AI still cannot replicate.
We value critical thinking. We value nuance. We value reasoning and accountability. We value people who understand context, ambiguity and consequence.
And beyond the commercial argument, we value human beings because they are exactly that: human beings. People. They have families, mortgages, ambitions, anxieties and responsibilities. They care whether something succeeds or fails. They lose sleep over whether a deployment might break something important. A model does not.
The strongest businesses over the next decade are unlikely to be the ones that remove humans entirely from the equation. More likely, they will be the organisations that learn how to combine skilled people with AI tools intelligently while retaining the judgement, accountability and strategic thinking that only experienced humans currently provide.
AI will absolutely reshape how digital products are built. Certain tasks will become faster. Some implementation work will become cheaper. Workflows will continue evolving.
But businesses that prematurely remove experienced technical people altogether may eventually rediscover why those people existed in the first place.
AI may change how software is built. That does not mean it replaces the people responsible for thinking it through.
Can AI replace developers and agencies completely?
For some simple tasks, AI can absolutely reduce the amount of manual implementation required. Prototypes, repetitive coding tasks and low-risk workflows can often be accelerated significantly.
But most serious business systems involve much more than producing code. They involve architecture, integrations, security, rollback planning, business logic, governance and commercial decision-making. That still requires experienced human judgement.
Why are businesses struggling to scale AI successfully?
Many businesses are discovering that moving from an AI demo to a reliable operational system is much harder than expected.
The technology itself may work well, but organisations still need processes, oversight, QA, permissions, security controls and people capable of validating outputs and making informed decisions.
The hidden operational complexity is often underestimated.
What does the future of AI in business look like?
We believe the strongest businesses will combine skilled people with AI tools intelligently rather than attempting to remove humans entirely.
AI will continue improving productivity and accelerating implementation, but accountability, reasoning, context and commercial judgement remain deeply human responsibilities.
Why are AI economics so important?
Because pricing and sustainability matter.
A lot of businesses currently assume AI tools will remain permanently cheap or effectively unlimited. We are already seeing usage-based billing, quotas and infrastructure costs changing that picture.
Understanding the economics behind AI is important if your business is building operational dependency around these tools.
Does morphsites use low-code or AI-assisted development?
Yes, where appropriate.
We are pragmatic about technology. If a low-code workflow or AI-assisted approach delivers a better outcome without compromising quality, maintainability or reliability, we will absolutely explore it.
The important distinction is that the technology remains guided by experienced humans rather than operating without oversight.
What risks should businesses consider before replacing developers with AI tools?
The biggest risks are not just technical. They are operational and commercial.
AI-generated systems can still contain security issues, flawed logic, unreliable outputs or poorly considered workflows. Businesses also need to think about accountability, support, long-term maintainability and what happens if pricing models or platform access change significantly.
Why does morphsites place so much value on human judgement?
Because software decisions have real-world consequences.
Businesses rely on digital platforms to process orders, manage customer data, generate revenue and support operations. Experienced people understand nuance, context and consequence in ways prediction systems currently cannot.
And beyond that, we simply believe people matter. The technology industry should not lose sight of the fact that behind every project, every business and every role are human beings trying to build stable lives and meaningful work.
I’m a developer and I’ve lost work because of AI. What should I do?
First, take heart. We strongly suspect there is a market correction coming.
A lot of businesses have been reducing development teams or leaning heavily into AI because they believe it permanently lowers costs and removes the need for experienced people. Our view is that many of those assumptions are built on unstable economics and an incomplete understanding of where real value in software development actually comes from.
As AI pricing models evolve, operational risks become clearer and businesses start encountering the limits of AI-generated systems, we believe experienced human developers will become highly valuable again.
In the meantime, focus on sharpening the skills machines do not truly possess.
The developers who thrive over the next decade are unlikely to be the ones competing with AI on raw implementation speed alone. More likely, they will be the people who know how to direct AI effectively while still providing the insight, accountability and decision-making businesses ultimately rely on.
AI may accelerate software production.
But businesses still need humans capable of thinking things through.
Commercial Director
Pete is a Co-founder and Director at morphsites. He helps businesses turn complex digital challenges into clear, achievable plans. He’s especially focused on making sure websites and marketing efforts actually support the goals of the business, and don’t just look good on paper.
© 2026 morphsites Ltd. All rights reserved E&OE. Registered in England no. 07116238. The ‘morphsites’ wordmark and butterfly device are registered trademarks of morphsites Ltd.