Is Your Construction AI Strategy Built on Sand?
Navigating the Hype, Uncovering the Real Economics, and Future-Proofing Your AI Investments
AI is undeniably the future, a global force generating huge excitement - bordering on hysteria - as solutions rapidly emerge promising to 'solve inefficiencies' across project management, design review, reality capture analysis and every other area of construction.
And for good reason. The possibilities are world-altering, and construction is an industry ripe for a little disruption. I personally use AI daily and am constantly exploring ways it can streamline and improve my role and the industry at large.
However, my experience at Autodesk DevCon in Amsterdam - witnessing the intense enthusiasm for large language models (LLMs) and their potential impact on our industry - crystallised a crucial insight regarding the underlying economics of the current AI boom:
The cost of AI/LLMs is currently subsidised - when companies are investing in AI now, how are they forecasting the long-term ROI or payback?
But first… let's talk about Uber
Uber: The Growth Playbook
At DevCon what I saw was a lot of very talented people regarding LLMs with reverence for how they could impact our industry. And they are right. But that enthusiasm needs to be balanced with caution: as LLM and AI tool providers chase market penetration, they are offering you attractive deals to pull you in.
Uber's meteoric rise in the early 2010s wasn't just a product of technological innovation—it was also fuelled by a deliberate strategy of investor-funded subsidies aimed at rapidly capturing market share.
From its inception, Uber adopted a "growth at all costs" approach, securing over $20 billion in investor funding to subsidise rides and driver incentives. This capital allowed Uber to offer fares below market rates and provide generous bonuses to drivers, creating a competitive edge over traditional taxi services that relied on fare revenues to cover operational costs (American Affairs Journal).
In its expansion into new markets, Uber heavily subsidised both riders and drivers. For instance, in cities like New York and San Francisco, Uber reduced fares by up to 25%, while at the same time ensuring drivers received their standard earnings by covering the difference. This dual-subsidy approach was critical in rapidly building a user base and establishing Uber's presence in various cities (Business & Human Rights Resource Centre).
Strategic Objectives Behind the Subsidies
The rationale for this subsidy model was multifaceted:
Market Penetration: By offering lower prices, Uber attracted a large customer base quickly, making its service a preferred choice over traditional taxis.
Driver Recruitment: Generous incentives and guaranteed earnings attracted drivers to the platform, ensuring service availability and reliability.
Regulatory Leverage: A substantial user base provided Uber with political and social leverage against regulatory challenges, as any restrictions on the service could face public backlash.
The Shift Towards Profitability
As investor expectations shifted towards profitability, Uber began reducing subsidies. This transition involved increasing fares and decreasing driver incentives, leading to higher costs for consumers and lower earnings for drivers. The reduction in subsidies also exposed the underlying challenges in Uber's business model, as the company struggled to maintain growth and service levels without the cushion of investor funding (Forbes).
This 'growth at all costs' model, as exemplified by Uber, provides a crucial lens through which to view the current landscape of AI and LLM adoption in construction. As I witnessed at Autodesk DevCon, the reverence for LLMs and their potential impact on our industry is well-earned. However, the excitement must be balanced with caution: just as Uber attracted users with attractive deals, so too are LLM and AI tool providers leveraging subsidies for rapid market penetration.
Are LLMs Subsidised?
The current surge in large language model (LLM) adoption, driven by companies like OpenAI and Google, mirrors Uber's early growth strategy - leverage substantial subsidies to rapidly acquire users and outpace competitors.
LLM providers are currently offering their services at prices that may not reflect the true cost of development and operation (mainly compute and energy consumption).
OpenAI:
GPT-4 Turbo is priced at $10 per million input tokens and $30 per million output tokens.
The newer GPT-4o model is even more affordable at $2.50 per million input tokens and $10 per million output tokens.
GPT-4o Mini offers services at $0.15 per million input tokens and $0.60 per million output tokens.
Google Gemini:
Gemini 2.5 Pro charges $1.25 per million input tokens and $10 per million output tokens for prompts up to 200,000 tokens.
Gemini 2.5 Flash offers services at $0.10 per million input tokens and $0.40 per million output tokens.
A free tier is available with limited access and rate limits.
The sheer scale of the infrastructure required to train and deploy models like GPT-4 – involving massive GPU clusters and significant energy consumption – suggests that the current per-token pricing is unlikely to cover these fundamental costs in the long run.
These pricing models, especially the free tiers and significantly reduced rates, suggest that current offerings are subsidised to encourage widespread adoption. Beyond market share, vendors may also offer low costs to capture your data for further model training (data acquisition) and to establish platform lock-in, making it difficult for you to switch providers later.
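To make the price gaps above concrete, here is a minimal back-of-envelope comparison using the per-million-token rates quoted in this post. The workload figures (50M input / 10M output tokens per month, roughly a mid-sized document-processing pipeline) are invented for illustration.

```python
# Per-million-token prices (input, output) in USD, as quoted above.
PRICES = {
    "gpt-4-turbo":      (10.00, 30.00),
    "gpt-4o":           (2.50, 10.00),
    "gpt-4o-mini":      (0.15, 0.60),
    "gemini-2.5-pro":   (1.25, 10.00),
    "gemini-2.5-flash": (0.10, 0.40),
}

def monthly_cost(model: str, input_tokens_m: float, output_tokens_m: float) -> float:
    """USD cost for a month's usage, given millions of tokens in/out."""
    cost_in, cost_out = PRICES[model]
    return input_tokens_m * cost_in + output_tokens_m * cost_out

# Hypothetical workload: 50M input / 10M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50, 10):,.2f}/month")
```

The spread between the cheapest and dearest tiers is well over an order of magnitude, which is exactly why a later repricing of a 'cheap' tier can upend a business case.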
This blog post does a good job of explaining the point: The True Cost of AI vs. Human Labor - JinalDesai.com
An Impending Shift
Just as Uber eventually reduced its subsidies, leading to higher costs for users and drivers, LLM providers may also adjust their pricing structures as they seek profitability for their investors. Businesses heavily reliant on these AI services should be prepared for potential cost increases and consider strategies to mitigate dependency on any single provider.
For instance, a sudden tripling of API costs could quickly render AI-powered tender submission or design optimisation tools economically unviable for construction firms that have integrated them into their core bidding processes and design workflows, potentially impacting their competitive edge.
Understanding the parallels between Uber's growth strategy (and that used by many start-ups to gain significant market share) and the current LLM market can help you make informed decisions about integrating AI services into your operations and processes.
Now you understand what this post is all about. Let’s look at ways the trap can be sprung.
An architecture of optionality: rationality in the face of hysteria
In this turbulent landscape, construction companies and users of AI/LLM products who will truly thrive are those who carefully consider when, where, and why to integrate specific AI tools and workflows into their technology stack and architecture.
What you’re looking for is a rational response in the face of hype, market share grab, and noise.
There are some high-level strategies which can help you navigate the AI and LLM landscape. These are summarised here, and discussed in further detail in the rest of the post:
Robust cost and value forecasting: ensure that future costs are factored in as best you can - what are the trends and forecasts for compute and energy pricing (the two big drivers of actual costs)? Ensure business value is genuine and not hype - try small experiments, taking advantage of optionality and barbell strategies (below).
Optionality: the ability to benefit from uncertainty without being hurt by downside. It’s about keeping choices open and paying little for the exposure to high upside.
For instance, rather than deeply integrating a single proprietary AI platform for all document management (e.g., contract analysis or drawing interpretation), a construction firm can maintain optionality by exploring multiple LLMs via an abstraction layer. This allows them to pivot to an alternative with minimal disruption if one provider significantly increases prices or changes its API.
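The abstraction-layer idea can be sketched very simply: business logic depends on a narrow interface, and each provider sits behind it. The class and method names below are illustrative placeholders, not real vendor SDKs - in practice each backend would wrap the provider's actual API client.

```python
# Minimal sketch of an LLM abstraction layer. Business code depends only on
# the Completion protocol, so a provider swap is a one-line change.
from typing import Protocol

class Completion(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] {prompt}"

class GeminiBackend:
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Gemini API here.
        return f"[gemini] {prompt}"

def review_contract(llm: Completion, clause: str) -> str:
    """Business logic stays provider-agnostic."""
    return llm.complete(f"Summarise risks in this clause: {clause}")

# Swapping providers happens at one composition point, not across the codebase:
llm = OpenAIBackend()  # or GeminiBackend()
print(review_contract(llm, "The contractor shall..."))
```

The design choice is the seam: if a vendor triples its prices, only the backend class changes, not every workflow that calls it.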
Barbell Strategy: placing your bets at two extremes—ultra-safe on one end, and high-risk/high-upside on the other—while avoiding the “middle” that has hidden risks.
For a construction company, the 'safe' end of the barbell might involve sustained investment in proven technologies like established BIM software and traditional automation methodologies. The 'risky, high-upside' end, however, would encompass small, controlled pilot projects using cutting-edge AI for tasks such as generative design or predictive maintenance on equipment, always with clearly defined budgets and exit strategies.
With these strategies you can avoid the fear of missing out (FOMO) that could drive some hasty AI investments in the industry.
Find the ‘Tipping Point’
It’s easy to get caught up in the hype and try to jump aboard a high-speed train, feeling that the future is passing you by. God knows I’ve been approached by the tender submissions team in the business I work for with some version of “what are we doing with AI?”.
Remember though, the vendors are doing the same thing. Rushing to keep up with an AI revolution which seems to be gaining speed, rather than settling into any kind of steady rhythm. We can’t predict what the leading tech will be next week, never mind what it will be once you’ve got your compliance and security governance done, negotiated terms, and actually got the procurement team to raise the order.
I’ve always been one for fast adoption, but in the case of AI I think there is sense in holding off on any bigger decisions. The best course of action, in my opinion, is tiny experiments. More on that when we discuss Barbells.
When making any decision, you and your business need to have your sums done. Though it may seem long-winded and boring, you need to actually consider the business case - specifically the payoff, or return on investment (ROI), and what the future costs may be. This will tell you at what price you reach the cost ‘tipping point’ beyond which the investment is no longer viable.
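A tipping-point calculation can be as simple as the sketch below. All the figures are invented placeholders; the point is the shape of the sum, not the numbers.

```python
# Back-of-envelope 'tipping point': the monthly AI spend at which the
# investment stops paying for itself. All figures are placeholders.
def tipping_point_monthly_cost(monthly_benefit: float,
                               other_monthly_costs: float) -> float:
    """The AI spend at which net monthly benefit hits zero."""
    return monthly_benefit - other_monthly_costs

monthly_benefit = 8_000    # e.g. estimator hours saved, valued in GBP
other_costs = 2_500        # integration upkeep, training, support
current_api_spend = 1_200  # today's (possibly subsidised) price

tip = tipping_point_monthly_cost(monthly_benefit, other_costs)
headroom = tip / current_api_spend
print(f"Viable until API spend reaches £{tip:,}/month "
      f"(about {headroom:.1f}x today's price)")
```

If the headroom multiple is small - say the case breaks at only 1.5x today's price - the investment is fragile to exactly the kind of repricing discussed above.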
For the value side, I would recommend undertaking tiny experiments. This gives you a few benefits:
A better feel for actual value
The opportunity for your team to test their wings with AI architectures and tools, without huge risk of failure or error
You can quickly move on from anything where value does not meet expectations, or where the technology moves on (which is very likely!)
Builds AI literacy and confidence in your team without large cost of error
Remember, market penetration and lock-in are critical to a software vendor’s long-term profitability in the construction tech space; for a buyer/user this can exacerbate the future impact of any cost increases. Two business-critical areas to consider in this AI age are data ownership (especially the right to commercialise or share) and data portability.
The Full Cost of AI
The problem is that both the value and the cost are difficult to predict. We’ve already looked at the value side (tiny experiments), but the cost side may be harder still. The difficulties here include understanding:
Likely future costs of licensing or API access
Data preparation and labelling for construction-specific datasets (e.g., building codes, material specifications).
Integration costs with existing construction management software and workflows.
Training and upskilling construction professionals to effectively use and manage AI tools.
Addressing data privacy and security concerns related to sensitive project information.
Potential legal and ethical implications of using AI in regulated or safety-critical applications.
The ongoing operation, maintenance, and training
The lifecycle and replacement cost
For ROI or payoff calculations, make especially sure you aren’t looking only at simple licence and implementation costs. Instead think in terms of Total Cost of Ownership (TCO).
For the avoidance of doubt: for a purchasing/investing decision, the return needs to exceed TCO within an acceptable payback period.
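The TCO framing above can be sketched as a simple sum over the cost categories listed earlier. Every figure here is an invented placeholder to show the shape of the calculation, not a real benchmark.

```python
# TCO vs simple licence cost, plus a payback period - a sketch with
# invented numbers covering the cost categories discussed above.
tco = {
    "licences_api":         12_000,  # per year
    "data_preparation":     15_000,  # one-off
    "integration":          20_000,  # one-off
    "training_upskilling":   8_000,  # per year
    "security_compliance":   5_000,  # per year
    "maintenance":           6_000,  # per year
}
one_off = tco["data_preparation"] + tco["integration"]
annual = sum(tco.values()) - one_off
annual_benefit = 60_000  # assumed measurable value delivered per year

years_to_payback = one_off / (annual_benefit - annual)
print(f"Annual running cost: £{annual:,}")
print(f"Payback period: {years_to_payback:.1f} years")
```

Note how the one-off costs (data preparation, integration) dwarf the headline licence fee - the line item most businesses fixate on.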
But AI and LLMs are so new - who out there has the experience to understand TCO? Does any business have sufficient data, given that LLMs have only been around for a few years, with few examples of being rolled into key business processes - least of all in construction?
The Real Costs of AI
Running LLMs like GPT-4 or Gemini 1.5 involves:
Huge GPU clusters, often costing hundreds of millions in CAPEX
Massive power consumption - in the range of kilowatts per inference cluster
Incredible expenditure in R&D and model training
These costs aren’t getting recouped via my £25/month subscription…
This pricing model is not sustainable at scale without continued subsidy. As with Uber (and Netflix, Amazon AWS, etc…), these prices are subsidised to hook you, and then lock you in. They are priced to capture market share, and when scaling is done the investors will expect their money back - and then some! There is a battle going on, and you’re the territory being claimed.
Unfortunately, no one knows when - or if - this will happen. There is no saying that models don’t become more efficient and prices actually go down - but the point here is uncertainty, risk and opportunity.
What we’re seeing right now is AI strategies being formed by the early movers which are built on assumptions which won’t hold long term. If you build a core process or product on top of an LLM today, and tomorrow the price triples or access is throttled, or it’s made redundant due to development elsewhere, you’re exposed.
Not a great landscape, but let’s not kid ourselves - you have to make these decisions. The worst option is in the middle - doing nothing.
These principles offer a robust framework for navigating the volatility and uncertainty of the AI market:
Model for real cost, not invoice cost
Use shadow pricing to estimate what inference or fine-tuning should cost based on GPU hours, energy draw, and likely future margin pressures. There are open-source cost models you can adapt to your use case.
Assume prices will increase or tiered access will be introduced. Plan for that volatility.
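The shadow-pricing idea can be sketched from first principles: estimate what a million tokens *should* cost from GPU-hour rates, energy draw, and a margin, then compare against the invoice price. Every input below (GPU rate, throughput, energy figures, margin) is an assumption for illustration, not vendor data.

```python
# Shadow pricing: estimate a sustainable price per million tokens from
# compute and energy costs plus margin. All inputs are assumptions.
def shadow_price_per_m_tokens(gpu_hour_cost: float,
                              tokens_per_gpu_hour: float,
                              energy_kwh_per_gpu_hour: float,
                              kwh_price: float,
                              margin: float = 0.4) -> float:
    """Estimated sustainable USD per million tokens served."""
    cost_per_hour = gpu_hour_cost + energy_kwh_per_gpu_hour * kwh_price
    cost_per_token = cost_per_hour / tokens_per_gpu_hour
    return cost_per_token * 1_000_000 * (1 + margin)

est = shadow_price_per_m_tokens(
    gpu_hour_cost=2.50,             # assumed cloud H100-class hourly rate
    tokens_per_gpu_hour=3_000_000,  # assumed serving throughput
    energy_kwh_per_gpu_hour=1.2,    # assumed draw incl. cooling overhead
    kwh_price=0.15,
)
print(f"Shadow price: ${est:.2f} per million tokens")
```

If your shadow estimate sits well above the invoice price, that gap is a rough measure of the subsidy - and of your repricing risk.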
Architect for Flexibility
Avoid hardwiring your business to a single model or vendor. Build an abstraction layer - whether through LangChain, open-source RAG stacks, or internal orchestration layers - so you can swap out models as economics or technologies shift.
Consider open-source or fine-tuned smaller models for tasks that don’t require GPT-4 level intelligence.
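One concrete way to act on that last point is tiered routing: send routine tasks to a cheap or local model and reserve the frontier model for genuinely hard ones. The model names and the triage rule below are placeholders - in practice the routing criteria would come from your own experiments.

```python
# Tiered model routing: cheap model for routine work, frontier model for
# hard reasoning. Names and the triage rule are illustrative placeholders.
CHEAP_MODEL = "small-local-model"
FRONTIER_MODEL = "frontier-api-model"

# Task types established (by experiment) as safe for the cheap tier.
ROUTINE_TASKS = {"classification", "extraction", "short_summary"}

def choose_model(task_type: str) -> str:
    """Route a task to the cheapest model that handles it acceptably."""
    return CHEAP_MODEL if task_type in ROUTINE_TASKS else FRONTIER_MODEL

print(choose_model("extraction"))        # routine -> cheap tier
print(choose_model("tender_analysis"))   # complex -> frontier tier
```

Even a crude router like this caps your exposure: a frontier-model price hike then only hits the fraction of traffic that truly needs it.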
Link to Business Value, Not Hype
As above, anchor AI use cases to measurable ROI - cost savings, risk reduction, or new revenue. Then test that ROI under more pessimistic pricing assumptions. If the use case only makes sense when compute is cheap and unlimited, it’s not robust.
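Stress-testing under pessimistic pricing can be as simple as multiplying today's API spend and watching where the net value goes negative. The benefit and spend figures are invented for illustration.

```python
# Stress-test a use case's monthly net value under pessimistic pricing.
# Benefit and spend figures are invented placeholders.
def net_monthly_value(benefit: float, api_spend: float,
                      price_multiplier: float) -> float:
    return benefit - api_spend * price_multiplier

benefit, api_spend = 6_000, 1_500  # GBP per month
for mult in (1, 2, 3, 5):
    net = net_monthly_value(benefit, api_spend, mult)
    status = "viable" if net > 0 else "NOT viable"
    print(f"{mult}x pricing: net £{net:,}/month -> {status}")
```

In this made-up case the business case survives a tripling of prices but breaks at 5x - that breaking point is the number worth knowing before you commit.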
Optionality and Barbells
I’m a big fan of the writing and thinking of Nassim Taleb. His books The Black Swan, Antifragile, and Skin in the Game have been real eye openers for me in how real-world risk and opportunity works across domains. I’d also highly recommend his book of aphorisms ‘The Bed of Procrustes’.
Anyway - over the course of these books he sets out two powerful mental models which fit really well with innovation, and with this context of AI adoption and subsidy-fuelled growth.
Barbell Strategy
Definition: The barbell strategy is about placing your bets at two extremes—ultra-safe on one end, and high-risk/high-upside on the other—while avoiding the “middle” that has hidden risks.
Image from “This Is What I Learned from Nassim Taleb” by Gold Republic (give it a read!)
In this context:
The “safe” side is focussing on existing and proven tools to bring about simpler automation rather than reasoning, using open-source tools, or leveraging vendor provided tools (AI within enterprise tools like Microsoft CoPilot). This will be safe, with limited downside risk, but you shouldn’t expect a huge reward either.
The “risky but high-reward” side is selectively experimenting with AI tools to find the huge value upsides - playing with subsidised tools like GPT-4o or Gemini Flash, knowing the costs may change; working with smaller, more cutting-edge AI vendors on pilots and trials; and developing your own tools with proprietary or open-source models. The risk of failure is high here, but you limit the downside by keeping experiments small - and the upside could be huge if you stumble on something which can generate extreme value in your market or business.
The middle - total dependency on one subsidised proprietary vendor - is what Taleb would consider fragile: it looks efficient but contains hidden tail risks (e.g., sudden price hikes, API changes, or ethical issues). So is doing nothing in the AI space. Don’t be here: this is lock-in, or this is extinction.
Optionality
Definition: Optionality is the ability to benefit from uncertainty without being hurt by downside. It’s about keeping choices open and paying little for the exposure to high upside.
Image from Tinkering = Optionality by Farood Javed (also well worth a read!)
In this context:
Using a number of low-cost LLMs offers you optionality — you get access to cutting-edge AI tools, and if one platform disappears or becomes expensive, you can switch.
Building modular systems, abstraction layers, data pipelines, and integrations that don’t rely heavily on any one AI provider gives you strategic flexibility. When one tool is made redundant through development elsewhere, or becomes too expensive, you want to be able to unplug it and plug an alternative in, with minimal disruption to your business.
Optionality also means saying “yes” to reversible AI experiments (cheap pilots, tests) and “no” to irreversible commitments (deep integrations, long-term contracts).
Why This Matters for your AI Strategy
We are in a “subsidy era” of AI, but history suggests this won’t last.
Taleb’s concepts help businesses avoid lock-in, reduce downside, and position themselves for upside if AI becomes radically more capable or cost-efficient.
Think: “Keep costs low, play with the tools, own your data, keep your architecture loose.”
Plug and Play Architecture: Guarding Against Lock-In
To finish this off, here’s a simple go-to aide-mémoire for how you can prepare for AI in your business:
Always build abstraction layers to bring LLMs into your processes and architecture
Own your own data pipelines, developing in-house expertise in AI integration rather than relying solely on vendor solutions
Prioritise data interoperability and open standards. Focus on modularity in your digital infrastructure
Monitor ROI continuously, not just at purchase
Model future costs under likely subsidy removal
Wrapping Up
There is another school of thought. As LLMs become better understood, as the world invests in more renewable energy sources, and as LLMs consume more and more training data they will become commoditised, which in turn will drive down prices. The value will be in specialist models and how the tools are employed creatively to solve real-world problems and opportunities.
This thought process started with “what happens when the price goes up?” but on reflection is more about dealing with the volatility in the market as vendors jostle for position and the technology improves on an almost daily basis. How can construction leaders keep their heads, make good decisions, and not lose their shirts?
The concepts and principles discussed in this post in relation to AI are equally true of other software and hardware platforms. Vendors want your money and your loyalty, but construction businesses need to retain their optionality to make decisions on tools.
I’ll leave you with this. The LLMs, platforms and software we buy from others are tools, and should be looked at as such. At the end of the day, your competitors can buy them too. They should never be mistaken as assets. Your assets are:
Your open architecture, able to swap out tools as needed by the business
Your code base for integrations, etc…
Your teams skill in integrating, deploying, training, and supporting the tools - and the collateral they use to achieve these things
The way you design your processes to make the best use of the purchased tools
The tools you build internally in your business which are yours alone to exploit
Think assets first, then tools.