AI is reshaping architecture at the extremes of massing and visualisation, but the profession’s value remains in the missing middle, writes Garry Miley
In the daily office chatter surrounding AI and architecture, there is a prevailing sense that a single technological wave is about to break across the profession, carrying everything and everyone along in its wake. In reality, the incoming tide is behaving differently. Setting aside project and office administration, AI adoption is currently clustering at two opposite poles of the design workflow: rules-based massing and generative visualisation.
The two poles: massing and viz
At one end are massing and feasibility tools. Many of these are built upon traditional rules-based software rather than “true” AI, though their functions have been augmented by recent developments. These applications are typically fed a diet of hard constraints – zoning, floor area ratios, and budgets – to instantly generate a range of spatial options. For developer-led, typologically familiar work in urban centres, these tools can already produce proposals that sit well beyond a first draft. While they are not yet capable of high architectural complexity, they can generate plausible elevational options for what is, in effect, a computer-designed volume.
At the opposite pole are generative imaging systems, such as Nano Banana Pro, which are more accurately described as “true” AI because they are trained on vast datasets of real-world examples. These systems have advanced to provide surprising degrees of spatial coherence, producing persuasive visualisations from simple prompts and reference imagery. Alongside these, image-to-3D tools have emerged that can infer rough geometry from sketches, creating foundational scaffolds for applications like Rhino.
In both instances, these tools are merely automating tasks previously done by hand, only at vastly accelerated speeds. However, for developer-led, constraint-driven projects, we are approaching a threshold where the marriage of conventional software and AI modelling could generate viable solutions with significantly reduced architectural input. In this specific sector, the traditional role of the architect risks being compressed into that of a secondary reviewer rather than a primary creator.
The missing middle
If one pole of the design workflow is feasibility and the other is visualisation, the “missing middle” is where design actually becomes architecture – the realm of ingenuity found in buildings that aren’t merely dictated by the data of constraint. It is here that AI technology, for reasons not yet comprehensively explained, fails to convince.
First, there are the practical hurdles that AI struggles to clear. For a design to be considered successful, it requires more than just an agreeable aesthetic; it requires auditability. Why did the tool generate this specific plan? Which constraints dictated which decisions? Is the result truly code-compliant, and can that compliance be verified? Furthermore, how does this output translate into BIM or IFC environments without necessitating a complete redraw from scratch? Perhaps most critically, the question of professional indemnity remains: who carries the liability when an automated decision results in a failure down the line?
Beyond these practicalities, there is a more profound absence. As yet, no AI tool can propose an architectural space with the nuanced sophistication of something like the Barcelona Pavilion. The “middle” remains the domain of human synthesis – the place where technical requirement, cultural context, and poetic intent are woven into a singular, buildable reality.
Where opportunity lies
Just because AI has not yet cracked the “middle” does not mean it lacks a future role in the evolution of the craft. In fact, it is highly probable that architects will rely increasingly on these tools to refine their output in this central territory. This is where the true opportunity lies: in breaking down the process of architectural discernment into discrete, analysable components. Rather than treating design judgement as a purely idiosyncratic quality, we can begin to deconstruct decisions into manageable layers, creating AI-informed frameworks to evaluate design options more rigorously.
This concept is perhaps best illustrated by “teasing out” the specific elements that constitute a successful streetscape. In cities with a powerful urban identity – Paris being the pre-eminent example – there exists a legible but complex set of relationships that render a street coherent: facade rhythm, the hierarchy of openings, material palettes, cornice lines, ground-floor thresholds, and those subtle “rules with exceptions” that keep a place vital.
In principle, we could train models to recognise these patterns – not to replace architectural judgement, but to make it more explicit and testable. Such systems could become vital civic tools, particularly for historic cities facing development pressure. They could help design teams verify whether a proposal truly participates in the “DNA” of a street or merely disrupts it.
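To make the idea concrete, consider a deliberately minimal sketch of how one such “rule” – facade bay rhythm – might be made explicit and testable. The figures below are hypothetical survey values, and the scoring function is an illustration of the principle rather than any real tool: it simply asks how far a proposal’s average bay width strays from the street’s, relative to the variation the street already tolerates.

```python
from statistics import mean, stdev

def rhythm_score(street_bays: list[float], proposal_bays: list[float]) -> float:
    """Score how closely a proposal's bay widths follow the street's
    existing rhythm. 1.0 means the mean spacing matches exactly;
    the score falls as the proposal deviates."""
    street_mu = mean(street_bays)
    street_sigma = stdev(street_bays)
    deviation = abs(mean(proposal_bays) - street_mu)
    # Normalise by the street's own variability, so a street that
    # already tolerates variety penalises difference less harshly -
    # a crude stand-in for "rules with exceptions".
    return 1.0 / (1.0 + deviation / max(street_sigma, 1e-9))

# Hypothetical bay widths, in metres, surveyed along an existing street
street = [3.2, 3.4, 3.1, 3.3, 3.5, 3.2]
conforming = [3.3, 3.2, 3.4]   # proposal that participates in the rhythm
disruptive = [6.0, 6.2, 5.8]   # proposal that breaks it

assert rhythm_score(street, conforming) > rhythm_score(street, disruptive)
```

A real system would of course weigh many such measures at once – openings, materials, cornice lines, thresholds – but even this toy version shows the shift in kind: the judgement “this disrupts the street” becomes a claim that can be stated, checked, and argued over.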
This is where architects must take the lead. AI models are only as effective as the datasets they ingest; architects will be required to curate these datasets with the annotations and evaluation criteria that give them value. By using AI to analyse thousands of successful urban relationships, we can gain a deeper understanding of what makes a street function. We can identify the hierarchies of importance, the degrees of flexibility, and the moments where an exception strengthens, rather than weakens, the whole.
The result would not be a tool that “designs” streets, but a framework that makes our understanding of character more explicit, teachable, and defensible. This principle applies across the missing middle: from circulation hierarchies in complex buildings to the calibration of visual privacy in the workplace. These judgements currently feel intuitive, but they can be broken down and understood more systematically – with the ultimate goal of better serving the end user.
What this means for practice
It is likely that AI will continue to make inroads into roles currently performed by architects. However, for the time being, the profession retains the advantage within the “missing middle”. If the last decade was defined by the rise of digital modelling, and the coming months are dominated by massing automation and image coherence, the next meaningful phase must be more ambitious. It should focus on developing systems that support and enhance architectural judgement rather than attempting to replace it.
This shift necessitates the creation of tools that assist in generating more meaningful, useful, and informed space – tools that can justify decisions, not merely output geometry. We require evaluation methods that account for spatial quality and urban character, not just raw metrics. Crucially, we need frameworks that help us understand and articulate the very discernments that make architecture valuable. The goal is not to automate the architect out of the process, but to provide a more rigorous, evidence-based foundation for the decisions only an architect can make.
Garry Miley is an architect and lecturer in architecture at South East Technological University in Ireland. His work focuses on the impact of artificial intelligence on architectural design, education, and professional practice, with current research exploring how theories of architectural quality – particularly Gestalt theory – might be made legible to computational systems. He writes regularly on architecture, technology, and planning.
Main image: As yet, no AI tool can propose an architectural space with the nuanced sophistication of something like the Barcelona Pavilion