Omniverse Cloud APIs let developers stream interactive ‘digital twins’ into Apple’s mixed reality headset
Nvidia has introduced a new service that allows firms to stream interactive Universal Scene Description (OpenUSD) industrial scenes from 3D applications into the Apple Vision Pro mixed reality headset.
The technology makes use of Nvidia’s new Omniverse Cloud APIs (read our story), using a new framework that channels the data through the Nvidia Graphics Delivery Network (GDN), a global network of graphics-optimised data centres.
“Traditional spatial workflows require developers to decimate their datasets – in essence, to gamify them. This doesn’t work for industrial workflows, where engineering and simulation datasets for products, factories and cities are massive,” said Rev Lebaredian, VP of Omniverse and simulation technology at Nvidia.
“New Omniverse Cloud APIs let developers beam their applications and datasets, with full RTX real-time physically based rendering, directly into Vision Pro with just an internet connection.”
In a demo unveiled today at its GTC conference, Nvidia presented an interactive, physically accurate digital twin of a car streamed in full fidelity to Apple Vision Pro’s high-resolution displays.
The demo featured a designer wearing the Vision Pro and using a car configurator application developed by CGI studio Katana on the Omniverse platform. The designer toggled through paint and trim options and even entered the vehicle, blending photorealistic 3D environments with the physical world.
“The breakthrough ultra-high-resolution displays of Apple Vision Pro, combined with photorealistic rendering of OpenUSD content streamed from Nvidia accelerated computing, unlocks an incredible opportunity for the advancement of immersive experiences,” said Mike Rockwell, VP of the Vision Products Group at Apple. “Spatial computing will redefine how designers and developers build captivating digital content, driving a new era of creativity and engagement.”