Escape Technology

Working and rendering beyond the desktop


Increased demands on visualisation are pushing AEC businesses, and specifically the visualisers within them, to explore new ways to deliver content – accurately and quickly, writes Lee Danskin, CTO of Escape Technology


Technology continues to raise the bar not just on what’s possible, but on what’s expected from images, video and, increasingly, AR and VR in the AEC space. These expectations have pushed visualisation to the point where traditional workflows are struggling to cope.

The need for accuracy and quality brings with it increasingly large datasets, putting huge strain on hardware, infrastructure and the people who manage them. Visuals are now part of the ongoing design process rather than just a tool used to win jobs, calling for a collaborative way of working well beyond the limits of a single desktop or laptop.


This demand for lifelike renders is pushing AEC businesses, and specifically the visualisers within them, to explore new ways to deliver – accurately and quickly. With a dazzling number of ways to deliver the end result, what are the options for practices looking for the best match for their business and clients?

A question of scale

There are of course practices which can still thrive with a single visualiser using a desktop to produce still images. Problems arise however when scaling comes in, when the need changes from a still image to a video, or to explorable virtual content. These bring with them huge amounts of additional data, which can clog up the whole pipeline.

One natural, tried-and-tested solution is a render farm. Obviously, there’s a large capital expenditure cost with this route, along with considerations like the space to physically house the servers, the expense of powering and cooling it all, centralised storage and the expertise to manage the whole thing effectively.

But even a render farm brings with it a range of challenging decisions. Should it be based on traditional CPUs; on GPUs, which are more memory-limited and less versatile but faster at rendering; or on the newer XPU renderers, which combine CPU and GPU? And beyond that, how quickly will renders be needed? Is it better to prepare for quick turnaround with more servers on hand, or to run fewer servers harder and accept that each job takes longer?
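
As a rough illustration of that capacity trade-off, here is a back-of-envelope sketch; the frame count, per-frame render time and deadline are assumptions chosen purely for illustration.

```python
# Back-of-envelope farm sizing. Frame counts, per-frame render times and the
# deadline below are assumptions chosen purely for illustration.
frames = 1500                 # e.g. a one-minute animation at 25 fps
minutes_per_frame = 20        # assumed average render time on a single node
deadline_hours = 12           # the overnight turnaround the client expects

total_node_hours = frames * minutes_per_frame / 60
nodes_needed = total_node_hours / deadline_hours

print(f"Total compute: {total_node_hours:.0f} node-hours")
print(f"Nodes for a {deadline_hours}h turnaround: {nodes_needed:.0f}")
# With these assumptions: 500 node-hours, so roughly 42 nodes to finish
# overnight, versus around three weeks on a single machine rendering flat out.
```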

An alternative is to use cloud render farms to provide the computing power needed to render these detailed images. Although viable, this route has potential issues, including the rough edges between differing versions of software and plug-ins on local workstations and the cloud render servers, and, perhaps more importantly, the lack of control the visualiser has between sending the files for rendering and seeing the results hours later. This ‘fire and forget’ way of rendering can prove costly if, for example, an essential texture-map file is missing: the render will be inaccurate and need to be redone, wasting both money and time.
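
A simple pre-flight check can catch this kind of problem before submission. The sketch below verifies that every referenced texture exists on disk; the file paths are hypothetical, and in practice the list would come from the DCC or renderer’s own dependency query.

```python
# A minimal pre-flight sketch: confirm every referenced texture exists on disk
# before the job goes to a cloud render farm. The path list here is
# hypothetical; in practice it would come from the DCC or renderer's
# dependency query.
from pathlib import Path

def missing_textures(texture_paths: list[str]) -> list[str]:
    """Return every referenced texture file that does not exist on disk."""
    return [p for p in texture_paths if not Path(p).is_file()]

scene_textures = [
    "assets/textures/facade_brick_4k.exr",
    "assets/textures/glazing_roughness.exr",
]
problems = missing_textures(scene_textures)
if problems:
    print("Do not submit - missing textures:", *problems, sep="\n  ")
else:
    print("All textures resolved - safe to submit.")
```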


Another approach is to commit to a game-engine pipeline, utilising something like Unreal Engine or Unity to present visuals in real time – albeit with a slightly lower level of accuracy than a ‘proper’ render. This sidesteps the need for traditional rendering, with an impressive and immersive final result, but it does demand the conversion, cleanup and optimisation of the datasets used to remove all the granular detail that the game engine simply doesn’t require.

This calls for specialists, and can actually create a bottleneck of its own as the remaining data is manipulated to fit into GPU memory and run smoothly on a single machine running the engine. Often this results in a completely new way of working that, unsurprisingly, feels like game development, with a much more collaborative approach enabled by version control (Perforce, Git etc.) and continuous builds (via tools such as Jenkins) that ensure each iteration is up to date.
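
A heavily simplified sketch of the continuous-build idea is shown below, assuming the assets live in a Git repository; the repository path and build command are placeholders, and a real pipeline would normally hand this job to a CI server such as Jenkins rather than a polling loop.

```python
# A simplified continuous-build loop: watch version control and rebuild the
# real-time content whenever a new iteration lands. The repository path and
# build command are assumptions for illustration; a real pipeline would
# typically use a CI server (Jenkins, for example) instead of polling.
import subprocess
import time

REPO = "/srv/projects/tower_block"             # assumed asset/scene repository
BUILD_CMD = ["python", "optimise_assets.py"]   # hypothetical optimise/cook step

def head(repo: str) -> str:
    """Return the current commit hash of the repository."""
    out = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

last_built = head(REPO)
while True:
    subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
    current = head(REPO)
    if current != last_built:                  # new data checked in
        subprocess.run(BUILD_CMD, cwd=REPO, check=True)
        last_built = current
    time.sleep(300)                            # poll every five minutes
```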

A completely new way of thinking

With all that said, we’ve reached the point where the best solution is something new. The demands of massive datasets, collaboration and virtual content mean we have to do more than simply tweak parts of the design process – we need a different approach entirely that breaks free of the single desktop/laptop.

The answer is a centralised approach to data, feeding it as and when it’s needed to a graphics application controlled remotely. This moves the machine from under the desk to a data centre or the public cloud. Wherever a visualiser is working, they can access this machine and work from it as if it were under their desk.

How is this possible? It’s thanks to leaps in multiple technologies that have all converged to the point where this new world of remote rendering is not only viable, it’s easily accessible.

Networking options up to 400Gb/s now make the typical 1Gb/s speed of the structured cabling in most buildings seem archaic. Similarly, NVMe drives in workstations, centralised storage and servers can transfer data at over 10Gb/s (20x faster than traditional hard drives). With these speeds available, it makes no sense to try to transfer huge amounts of data from a server to a workstation over ‘just’ the 1Gb/s cable – unless of course you enjoy waiting an hour for a complex scene to load! Instead, put the machine next to the storage, and use the 1Gb network to stream the desktop to an end device.
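
To put rough numbers on that, here is a quick calculation; the 400 GB scene size is an assumption chosen for illustration.

```python
# A rough illustration of why streaming pixels beats moving data. The 400 GB
# scene size is an assumption for illustration only.
scene_gb = 400                  # large multi-discipline scene with textures
office_link_gbps = 1 / 8        # 1 Gb/s structured cabling, about 0.125 GB/s
local_nvme_gbps = 10 / 8        # ~10 Gb/s NVMe next to the compute, 1.25 GB/s

print(f"Pulling the scene over 1 Gb/s: {scene_gb / office_link_gbps / 60:.0f} minutes")
print(f"Loading it from local NVMe:    {scene_gb / local_nvme_gbps / 60:.1f} minutes")
# Roughly 53 minutes versus just over 5, and the streamed desktop only ever
# sends compressed pixels over the 1 Gb link, never the scene data itself.
```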


The potential leap in performance is huge. Data is no longer transferred: what would be seen on the workstation’s monitor is streamed to the end device, wherever that may be. The hard work is all done by the remote workstation sitting next to the centralised storage. It also allows collaboration like never before, with the data available to everybody who needs it.

With the right remote desktop software, the experience for the visualiser is no different from using the machine under their desk. All that’s needed is a device with a display – a desktop, laptop or tablet, for example – and remote desktop software that can enable these workflows. Escape Technology’s Sherpa is a prime example, with scalability suitable for practices of all sizes. Like many solutions, Sherpa allows you to dial the number of cores up or down depending on the project, giving you excellent control of spend over the lifetime of a project.

A framework, not a file format

Creating visuals remotely has been further empowered by OpenUSD (Universal Scene Description), a technology originally developed at Pixar that is now supported across many different applications and has been described as ‘the HTML of the metaverse’.

It’s a framework that not only enables the creation of 3D content, but also brings together all of the tools in the visualisation pipeline (modelling, shading, lighting, rendering etc.). With OpenUSD, all of these tools can exchange information, opening up even more opportunities for collaboration.
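
As a minimal sketch of what that exchange looks like in practice, the snippet below uses Pixar’s Python bindings (the usd-core package); the file names and prim paths are purely illustrative. One tool authors the geometry layer, and a second stage references it rather than copying it, so each discipline can contribute without overwriting anyone else’s work.

```python
# A minimal OpenUSD sketch using Pixar's Python bindings (pip install usd-core).
# File names and prim paths are illustrative only.
from pxr import Usd, UsdGeom

# The modelling tool authors the geometry layer.
model = Usd.Stage.CreateNew("building_model.usda")
UsdGeom.Xform.Define(model, "/Building")
UsdGeom.Mesh.Define(model, "/Building/Facade")
model.GetRootLayer().Save()

# A downstream stage (layout, lighting, rendering) references that geometry
# instead of copying it, so the model stays the single source of truth.
shot = Usd.Stage.CreateNew("shot_010.usda")
building = shot.DefinePrim("/Building")
building.GetReferences().AddReference("building_model.usda", "/Building")
shot.GetRootLayer().Save()

print(shot.ExportToString())   # the composed result any USD-aware tool can load
```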

It also tackles the issue of digital content creation (DCC) applications such as Maya and 3ds Max only being able to load data on a single thread. Even a powerful machine like HP’s new Z6 G5 A workstation, packed with up to 96 Threadripper Pro cores and designed for 3D rendering, will be shackled if data can only be loaded via a single thread at a time; more time will be spent loading data than actually rendering.

Much like a renderer’s native scene format, OpenUSD takes rendering away from the DCC applications and ensures that data is loaded across all cores, fully leveraging the power of all the new technologies available to us and enabling a dramatic performance boost across every aspect of the workflow.

Is local ever best?

In short, our answer is now ‘no’. There are simply so many ways to get better performance from a remote setup than a traditional local one, including access from wherever the visualiser happens to be (home, office, on site); the ability to define the performance you need on a daily basis (why have a 96-core workstation when you simply need to clean up a model on a single core?); and the ease of collaboration thanks to the centralisation of data.

There are always exceptions, of course. VR workflows are one such example, needing a powerful physical machine to run the environment at the highest levels of performance. Even so, that’s just the end point: actually generating the environments (as opposed to running them) can still be done on remote workstations. And with the right planning, NVIDIA CloudXR can even be considered as a fully remote solution for VR.

We’re seeing more businesses becoming cloud-first or utilising their own private data centres in order to reap the benefits of a remote-compute model and keep up with the demand for more technically demanding, accurate visuals. With a growing number of DCC solutions also offering cloud-based services, it’s clear that this is not just the future of visualisation, it’s the present.


This article is part of AEC Magazine’s Workstation Special report
