The AI enigma – challenges for workstations

AI has quickly been woven into our daily workflows, leaving its mark on nearly every industry. For design, engineering, and architecture firms, the direction in which some software developers are heading raises important questions about future workstation investments, writes Greg Corke


You can’t go anywhere these days without getting a big AI smack in the face. From social media feeds to workplace tools, AI is infiltrating nearly every part of our lives, and it’s only going to increase. But what does this mean for design, engineering, and architecture firms? Specifically, how should they plan their workstation investments to prepare for an AI-driven future?

AI is already here

The first thing to point out is that if you’re into visualisation, using tools like Enscape, Twinmotion, KeyShot, V-Ray, D5 Render or Solidworks Visualize, there’s a good chance your workstation is already AI-capable. Modern GPUs, such as Nvidia RTX and AMD Radeon Pro, are packed with special cores designed for AI tasks.

Features such as AI denoising, DLSS (Deep Learning Super Sampling), and more are built into many visualisation tools. This means you’re probably already using AI whether you realise it or not.


This article is part of AEC Magazine’s 2025 Workstation Special report

It’s not just these tools, however. For concept design, text-to-image AI software like Stable Diffusion can run locally on your workstation (see page WS30). Even in reality modelling apps, like Leica Cyclone 3DR, AI-powered features such as auto-classification are now included, requiring an Nvidia CUDA GPU (see page WS34).

Desktop software isn’t going away anytime soon, so firms could end up paying twice – once for the GPUs in their workstations and again for the GPUs in the cloud

Don’t forget Neural Processing Units (NPUs) – new hardware accelerators designed specifically for AI tasks. These are mainly popping up in laptop processors, as they are energy-efficient and can help extend battery life. Right now, NPUs are mostly used for general AI tasks, such as powering AI assistants or blurring backgrounds during Teams calls, but design software developers are starting to experiment too.

Cloud vs desktop

While AI is making its mark on the desktop, much of its future lies in the cloud. The cloud brings unlimited GPU processing power, which is perfect for handling the massive AI models that are on the horizon. The push for cloud-based development is already in full swing – just ask any software startup in AEC or product development how hard it is to get funded if their software doesn’t run in a browser.

Established players like Dassault Systèmes and Autodesk are also betting big on the cloud. For example, users of CAD software Solidworks can only access new AI features if their data is stored and processed on the Dassault Systèmes 3D Experience Platform. Meanwhile, Autodesk customers will need to upload their data to Autodesk Docs to fully unlock future AI functionality, though some AI inferencing could still be done locally.

While the cloud is essential for some AI workflows, not least because they involve terabytes of centralised data, not every AI calculation needs to be processed off premise. Software developers can choose where to push it. For example, when Graphisoft first launched AI Visualizer, based on Stable Diffusion, the AI processing was done locally on Nvidia GPUs. Given the software worked alongside Archicad, a desktop BIM tool, this made perfect sense. But Graphisoft then chose to shift processing entirely to the cloud, and users must now have a specific licence of Archicad to use this feature.

The double-cost dilemma

Desktop software isn’t going away anytime soon. With tools like Revit and Solidworks installed in the millions – plus all the viz tools that work alongside them — workstations with powerful AI-capable GPUs will remain essential for many workflows for years to come. But here’s the issue: firms could end up paying twice — once for the GPUs in their workstations and again for the GPUs in the cloud.

Ideally, software developers should give users flexibility where possible. Adobe provides a great example of this with Photoshop, letting users choose whether to run certain AI features locally or in the cloud, depending on what works best for their setup, online or offline. Sure, an entry-level GPU might be slower, but that doesn’t mean you’re stuck with what’s in your machine. With technologies like HP Z Boost (see page WS32), local workstation resources can even be shared.

But the cloud vs desktop debate is not just about technology. There’s also the issue of intellectual property (IP). Some AEC firms we’ve spoken with won’t touch the cloud for generative AI because of concerns over how their confidential data might be used.

I get why software developers love the cloud – it simplifies everything on a single platform, and they don’t have to support a matrix of processors from different vendors. But here’s the problem: that setup leaves perfectly capable AI processors sitting idle on the desks of designers, engineers, and architects, when they could be doing the heavy lifting.

Sure, only a few AI processes rely on the cloud now, but as capabilities expand, the escalating cost of those GPU hours will inevitably fall on users, either through pay-per-use charges or hidden within new subscription models. At a time when software licence costs are already on the rise, adding extra fees to cover AWS or Microsoft Azure expenses would be a bitter pill for customers to swallow.
