Will agentic AI quietly steal the role of ‘design thinking’ from AEC professionals? asks Richard Harpham
Across the global technology landscape, a new kind of artificial intelligence is quietly taking on work that once required skilled professionals.
Software companies are experimenting with autonomous coding agents like Devin that can design, write and debug entire applications with minimal supervision. In healthcare, AI-driven platforms from Insilico Medicine are accelerating drug discovery by testing millions of chemical combinations in simulation. Logistics networks at companies like Amazon increasingly rely on AI systems that optimise supply chains in real time across thousands of warehouses and delivery routes.
This new wave of agentic AI – systems that plan, execute and refine tasks autonomously – is beginning to transform entire industries. And now it is coming for the built environment.
At AEC Magazine’s NXT BLD conference in London last year, a recurring theme emerged from multiple speakers: AI will soon enable buildings to practically design themselves.
According to the most optimistic technologists, a new generation of AI systems will optimise building footprints, generate floor plans, route ductwork, size structural members and produce fully coordinated BIM models in minutes. Autonomous design agents will simulate thousands of possibilities, resolve constraints and output a buildable model almost instantly, often before a designer has finished their coffee.
At first glance, the vision sounds liberating. But beneath this appealing future lies a deeper concern. When machines begin producing answers instantly, something subtle can disappear: the reasoning process that once created them. That loss of professional thinking is what I call the Great AI Brain Robbery. Soon the AEC industry may need to decide whether to master it or risk quietly becoming its accomplice.
The temptation of autonomous design
The promise of AI in design and construction has never been more compelling.
BIM models are becoming richer, more structured and more data-driven. New AI systems can analyse enormous libraries of past projects, regulatory frameworks, manufacturer data and construction workflows. Emerging agent frameworks such as AutoGPT have demonstrated how AI agents can break complex goals into smaller tasks and execute them iteratively without constant human prompting. Similar ideas are now being applied in enterprise software platforms from companies like Microsoft and GitHub, where AI assistants increasingly generate, test and improve code automatically. The same technological shift is beginning to appear in the built environment.
In most cases, these tools will begin making recommendations before designers even ask the question, while compliance is checked continuously in the background. Some designers will inevitably lean back and say: “Finally, the computer is truly aiding design.”
But then what? Over time, architects could risk becoming prompt operators for generative design models. Instead of asking, “What should we design?”, teams may increasingly ask, “What did the AI generate?”
That subtle shift carries consequences. When professionals stop actively reasoning through decisions, the human context that once guided those decisions begins to fade.
When expertise begins to erode
If decision-making increasingly moves inside AI systems, professional insight risks becoming secondary to algorithmic output. And the consequences will extend well beyond architecture.
Mechanical engineers may rely too heavily on automated routing engines that technically solve spatial conflicts but overlook practical realities – service access, installation sequencing, maintenance requirements or aesthetic considerations – that experienced engineers understand instinctively.
Structural engineers may accept optimised beam sizing generated by algorithms without fully interrogating the assumptions behind those calculations.
Contractors may inherit design models that appear perfectly coordinated on screen but unravel when confronted with construction conditions the AI was never given.
Other industries are already wrestling with similar questions.
Software engineers now debate how heavy reliance on AI-generated code could weaken core programming skills. Financial regulators increasingly scrutinise opaque algorithmic trading systems. Even airline pilots, long accustomed to autopilot, must regularly retrain to maintain manual flying skills in case automation fails. The danger is not that AI will produce flawed designs, but that fewer people will retain the experience necessary to recognise those flaws when they appear.
AEC at a strategic crossroads
Some firms are already experimenting with autonomous AI workflows, allowing generative systems to produce conceptual layouts, cost estimates and coordination models. In these environments, meaningful portions of design thinking are quietly shifting into algorithmic black boxes.
The long-term consequences remain unclear. But another approach is emerging. Rather than replacing professional judgement, some firms are attempting to amplify it.
Their goal is not simply faster design cycles or lower production costs. It is the creation of better buildings, buildings that feel thoughtful, perform well and can actually be constructed efficiently.
For these firms, automation alone is insufficient. What they need are tools that preserve and extend professional knowledge.
Several emerging platforms, including newer knowledge-centric BIM systems such as Skema, are exploring this direction by focusing less on generating geometry and more on capturing the decision-making context behind it.
From BIM to intelligence platforms
For decades, BIM has functioned primarily as a digital representation of buildings. In the next phase of industry evolution, BIM may evolve into something more powerful: a knowledge platform for the built environment.
Instead of static models, future systems will combine BIM data with knowledge graphs, project histories and contextual design intelligence. Emerging AEC AI solutions like Skema will reference patterns from past projects and the accumulated expertise of design teams.
In other words, the system will not just automate modelling; it will capture professional experience and curate it, suggesting new design solutions grounded in the knowledge of your own organisation precisely when they are needed.
The future should not be driven by prompts
The real risk of the Great AI Brain Robbery is not that machines will take our jobs. It is that professionals may slowly surrender the thinking that made their work valuable in the first place.
If designers become prompt operators and algorithm auditors, then the industry will have traded expertise for convenience. I believe an alternative future is more interesting: one where AI helps the AEC industry become more intelligent precisely because of the people within it, not in spite of them. Used well, these tools could preserve professional insight, make experience reusable across projects and allow teams to focus on deeper design challenges. But achieving that future will require a conscious choice.
Because the real question facing the industry is not whether AI will design buildings. It is whether the people who design them will still remember how.
Richard Harpham will be presenting at NXT BLD 2026 in London on 13-14 May, where he will explore how AEC software business models are shifting from traditional named-user licences to token, outcome-based, and even percentage-of-fee pricing as AI and automation reduce the number of seats firms need.