Laser scanning and point-cloud data are set to become widely used technologies in the coming years. Martyn Day visited Houston to attend the only conference dedicated to digitally capturing the real world in points.
Every year in a leafy suburb of Houston, the world’s only dedicated scanning event brings together people from across the industry to catch up on the latest state-of-the-art technology and meet important customers. Called SPAR, it is organised and run by SPAR Point Research, a company dedicated to covering and consulting on 3D scanning, imaging and position-capture technologies.
The company is headed up by Tom Greaves, who formerly worked for Daratech, a consulting group that focussed on CAD and Process Plant. In my research for these articles SPAR Point has provided invaluable access to the key players in the industry.
In the past, the three-day symposium has reflected the industry’s niche focuses, but with the expanded use of laser scanning and increased need to capture the real world the show has been growing. This year’s event attracted 750 attendees, up over 23% on the year before. The topics covered ranged from Process Plant and Avatar to protecting ancient monuments and detecting snowfall on Mars. The breadth of application for this technology is clearly mushrooming.
A dedicated track on Scan to Building Information Modelling (BIM) was new for this year and the sessions were surprisingly packed out. Autodesk and Bentley Systems were in attendance, both in the exhibition space and on stage. That said, the majority of the case studies given in the BIM stream concerned getting point-cloud data into Revit, but more on that later.
There were many great presentations given at the event but I want to concentrate on three. The first was given by Paul Debevec, associate director, Graphics Research, University of Southern California Institute for Creative Technologies (ICT); the second by Rajeev Kalandani, Virtual Manufacturing supervisor from Ford Motor Company; and the third was a highly informative Scan to BIM session by Pat Carmichael, manager, Advanced Technologies, HKS architects.
Paul Debevec took the audience from his initial research project at UC Berkeley, trying to capture shapes from photographic images, all the way through to the work he did on the blockbuster movie, Avatar. Mr Debevec started by trying to capture the campus of his university and produce a photorealistic image.
Using cameras on kites and from the highest vantage points, together with old aerial photography, he managed to develop a photogrammetric modelling system that created a texture-mapped 3D model given minimal user input. This was demonstrated at the SIGGRAPH computer graphics exhibition in 1997, where a Hollywood movie director picked up on the technology and asked Mr Debevec for help with a film he was trying to make, called The Matrix.
In fact the technology that Mr Debevec developed was used to capture many of the real-world scenes that involved the slow motion bullet dodging. While that was very impressive Mr Debevec was not happy that he was ‘stuck’ with the lighting that was captured in the original photographs and bitmaps, so set about looking at ways of liberating the geometry from the sunlight.
Mr Debevec’s next project for SIGGRAPH 2004 was to capture the Parthenon, put back the Elgin marbles, return it to its former glory and animate it – this section really was quite extraordinary. With the assistance of Quantapoint laser scanners, the team scanned the Parthenon in five days – and on one of those days it was not possible to do anything due to a strike.
The laser scanner used could shoot 100,000 points per second and capture panoramic views. In total the team took 120 scans of 60 million points each and, with a variety of techniques, the site was pulled together. However, there was scaffolding and cranes all over the building and site, which had to be digitally removed before the data was converted into surfaces, working on small sections of the geometry at a time due to the processing demands.
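To give a flavour of what pulling scans together involves (this is an illustrative Python sketch, not Quantapoint’s actual pipeline), two overlapping scans with known corresponding points can be rigidly aligned with the classic Kabsch algorithm:

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimising ||R @ p + t - q||
    over corresponding 3D points p in src, q in dst (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy example: a second scan rotated 90 degrees about Z and shifted
rng = np.random.default_rng(0)
scan_a = rng.random((100, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
scan_b = scan_a @ R_true.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_align(scan_a, scan_b)
aligned = scan_a @ R.T + t
print(np.allclose(aligned, scan_b, atol=1e-6))  # True
```

In practice registration works from noisy, partially overlapping scans without known correspondences (iterative closest point and target-based methods), but the rigid fit above is the step at the heart of stitching 120 scans into one site model.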
The next issue was capturing the textures. Taking photographs is good, but they have the lighting and shading ‘baked’ into them, impacting the texture map that would be applied to the model. Mr Debevec needed a way to divide out the impact of the lighting conditions. Photographs had to be taken at different times of the day, and it is a challenge to create consistent texture maps from such inconsistently lit source images.
The other part of the solution meant that every time a photograph was taken, another picture was taken of a light-measuring device that comprised three balls on a wooden disc: one black, one chrome and one diffuse grey. This control image provided all the radiance, sun colour and intensity information about the light in that scene, so every photograph came with detailed information on how the hemisphere was lit at that exact moment. This enabled the light to be factored out of the texture map, giving near-perfect colourisation.
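As a much-simplified illustration of that factoring-out step (assuming a perfectly diffuse surface and a single irradiance value per shot, which the real capture rig did not), the diffuse grey ball’s known reflectance lets the incident light be divided back out of each photograph:

```python
import numpy as np

# Hypothetical grey-ball reflectance; the real probe's value would be calibrated.
GREY_BALL_ALBEDO = 0.5

def delight(photo, grey_ball_reading):
    """Recover per-pixel surface colour (albedo) from a lit photo, given the
    pixel value the grey ball recorded under the same illumination."""
    irradiance = grey_ball_reading / GREY_BALL_ALBEDO  # light falling on the scene
    return photo / irradiance

# The same surface photographed under bright midday and dim evening light
albedo = np.array([0.2, 0.6, 0.8])   # true surface colour (RGB)
midday = albedo * 1.4                # brighter illumination
evening = albedo * 0.3               # dimmer illumination

# Grey ball reads 0.5 * 1.4 = 0.7 at midday; dividing the light out
# recovers the same albedo from both inconsistently lit photographs.
print(np.allclose(delight(midday, 0.7), delight(evening, 0.15)))  # True
```

The actual system recorded full hemispherical radiance (hence the chrome and black balls as well), but the principle is the same: measure the light, then divide it out of the texture.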
The Parthenon’s famous frieze was taken ‘for safe keeping’ by Lord Elgin between 1801 and 1812 and is currently held in the British Museum. Mr Debevec wanted to scan the frieze and digitally place it back in the model. On a trip to see it he took a number of photographs of the gallery and put out some feelers to see if he could scan the marbles. As it was a politically sensitive issue, Mr Debevec decided to find another solution and discovered that many plaster casts of the marbles had been created 200 years ago, and one intact set was available in Basel, Switzerland.
Building his own scanner out of a video projector and a video camera, his team managed to capture and digitise these copies. Based on his tourist photos from the British Museum, Mr Debevec also managed to digitally recreate the rectangular room in which the marbles are housed. He then showed the resulting video, starting with renderings of the sculptures and highlights of carvings discovered on the blocks, together with a cannonball and impact crater from the period when the building was used as an ammunition dump by the occupying Ottomans. A time-lapse sky was added to the hemisphere lighting the model as the building is found today; cutting to the British Museum, the frieze is magically reunited with the Parthenon and eventually coloured and rejuvenated, finally pulling back to show the complete complex as it would have looked in ancient times.
However, Mr Debevec was not finished there. One of the biggest limitations of computer graphics has been in replicating the human face. With so much complexity and subtlety involved, our eyes can quickly tell the difference between a real person talking and a computer-generated animation. With more research on lighting and capturing all the subtlety that skin contains, Mr Debevec came up with a way of capturing geometry and lighting that allows highly dynamic and realistic animation.
This work came to the attention of a certain James Cameron and this new technology was applied to the characters of Avatar. All the key actors were scanned in great detail for the Weta animation house. This led to incredibly lifelike facial expressions and lighting of skin (albeit blue skin). Even some of the characters which were not big blue aliens were scanned and in some shots were animated instead of acted. All in all an absolutely breathtaking keynote.
Avatar is a tough act to follow and many on the conference circuit have come up against someone who did a bit of work on the film. Rajeev Kalandani from Ford Motor Company was up next and he apologised for having to talk about digital factories for Powertrain assembly instead of aliens. While this may sound like it has no applicability to architecture, just hold that thought a second.
Mr Kalandani’s talk was entitled, ‘Visualising the Elephant in 3D — The changing paradigms in Powertrain Manufacturing at Ford’. The automotive world is in transition from 2D to 3D. While automotive design has been done in 3D for a considerable period of time, manufacturing design is still in the process of moving to a virtual manufacturing environment.
To design the manufacturing system for a Powertrain design, Ford allows 48 months of lead-time until the first unit rolls off the factory line. In the traditional process, the manufacturing division could only really get seriously going once the product was fully designed, back-loading the design of the manufacturing plant. By adopting an upfront virtual manufacturing approach, the assembly lines can be designed simultaneously with the product design, offering obvious benefits.
Also as the products are designed in 3D, the manufacturing teams can now run casting and solidification analysis, Finite Element Analysis (FEA), CNC machining and virtual assembly line simulation. Everything can be run and tested virtually in the computer before the design is finished, improving quality and removing unwelcome surprises on fabrication.
Ford has a number of uses for scanners. Mr Kalandani explained that as each component moves around the factory it sits in a bespoke pallet, and these pallets are not cheap. Using scanning technology Ford can scan a pallet, compare it with new components, check for interference, redesign the cradle and issue a detailed rework order, saving time and a fortune in the process.
Another use of laser scanners is to capture each of Ford’s manufacturing plants in great detail, so that manufacturing simulations can be run against the actual plant. Having decided to start with one plant, Mr Kalandani described the fly-through of the first colour point-cloud model of its Cleveland line as a watershed moment for senior management.
The company can combine CAD as it moves through the assembly line with the point-cloud of the factory giving an amazing virtual experience. The result was more money to buy laser scanners and the directive to scan all 30 facilities worldwide.
The use of scanned data has been extended with Autodesk’s Navisworks, which can mix CAD geometry with point-cloud data and provide clash detection prior to any installation work.
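At its simplest (and Navisworks’ real algorithms are far more sophisticated than this Python sketch), clash detection against a point-cloud amounts to asking whether any scanned points fall inside, or within a tolerance of, the volume a proposed object would occupy:

```python
import numpy as np

def clashes(points, box_min, box_max, tolerance=0.0):
    """Return the scanned points that fall inside an axis-aligned box
    (a stand-in for a new piece of equipment), grown by a tolerance."""
    lo = np.asarray(box_min, dtype=float) - tolerance
    hi = np.asarray(box_max, dtype=float) + tolerance
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[inside]

# Existing plant captured as a point-cloud (coordinates in metres)
cloud = np.array([[0.5, 0.5, 0.5],   # pipe run
                  [3.0, 3.0, 3.0],   # column, well clear
                  [0.9, 0.2, 0.1]])  # cable tray

# Proposed conveyor occupying the unit cube
hits = clashes(cloud, [0, 0, 0], [1, 1, 1])
print(len(hits))  # 2 clashing points: redesign before installation
```

The advantage over checking CAD against 2D drawings is that the points are the as-built reality, so a clash found here is a clash that really exists on the factory floor.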
Mr Kalandani explained that while CAD models are precise in nature, they may not be accurate representations of ‘as built’. This is another major advantage of laser scanning. While Ford has many 2D drawings of its factories, the laser scans give rapid access to the as-built nature of the factory content. This can then be used to make design changes more accurately, but it requires multiple applications to make any real use of the point-cloud: FSP Viewer, Pointools View Pro, Pointools Model, Navisworks and Geomagic Fashion are all used in conjunction with the FARO scanning equipment and software.
With all this processing, Mr Kalandani said that converting point-clouds to CAD comes at an increasing cost and is not always necessary: the cost of the data rises at each step up the chain from points, to meshes, to tessellated surfaces, and finally to CAD (solid models). Ford only models objects that are going to be physically manufactured; tessellation is used for visualisation, analysis and virtual assembly, and meshes for analysis and rapid prototyping. Coloured point-clouds are great for visualisation and clash detection.
The technology does not come without its issues. Aligning and segmenting data from large areas, in excess of 250,000 sq ft, means manipulating multi-gigabyte files, and the computing power is never enough. Alignment with 2D AutoCAD data is a problem, and maintaining all this data is a headache. Interoperability, or the lack of it, is an area Mr Kalandani would like to see improved, with a reduction in the number of applications required.
Summing up, Mr Kalandani stated that ‘field checks’ were the killer application for laser scanning at Ford, saving money when integrating utilities and structural elements into existing facilities. Ford’s vision is to fabricate off-site and have plug-and-play installations.
Scan to BIM
This is a market segment that did not exist last year. There was scanning in transport and architecture, but the BIM part is a genuinely new introduction and, like all three-letter abbreviations, it really depends on how you define BIM. In this case you would walk away with the idea that BIM was Autodesk Revit, as the majority of projects were completed in Revit. The key problem with Revit and scanned data is that Revit does not currently support point-clouds in their native formats. So ‘Scan to BIM’ is not yet the automated process we all hope it will be; it is more of an ordeal. However, HKS architects’ Pat Carmichael, manager, Advanced Technologies, was on hand to show what can be achieved once that process has been nailed down.
Over the last eleven months the firm has completed a number of projects using Revit and scanners, experimenting as it goes to define a repeatable process. The company uses Leica’s Cyclone and ClearEdge3D’s Edgewise to build a wireframe from the point-cloud. Mr Carmichael was a big fan of Edgewise, as it dramatically assisted this process.
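A sketch of the kind of geometry extraction such tools perform (this is illustrative Python, not Edgewise’s actual algorithm): a best-fit plane through a patch of scanned points can be found with a singular value decomposition, and the intersections of neighbouring planes then yield the edges a modeller can snap to:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points: returns a point on
    the plane (the centroid) and the unit normal, which is the direction
    of least variance (the smallest singular vector)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Noisy scan points lying roughly on a floor at z = 0
rng = np.random.default_rng(1)
pts = rng.random((200, 3))
pts[:, 2] = 0.001 * rng.standard_normal(200)  # flat, with scanner noise

c, n = fit_plane(pts)
print(abs(n[2]) > 0.999)  # True: the normal is (near) vertical, so a floor
```

Segmenting a building scan into wall, floor and ceiling planes in this fashion is what turns millions of raw points into the handful of crisp lines a Revit model needs.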
50 United Nations Plaza is a Federal Building in San Francisco comprising 360,000 sq ft of office space and 1,200 rooms over seven floors, with a basement and an attic, which needed to be remodelled for new Federal clients. Mr Carmichael described the building as having the characteristics of a tank, as it has withstood everything a number of earthquakes have thrown at it so far. It is a large building with 450 ft hallways, yet it took an amazing three weeks to scan it, convert the data and produce detailed drawings. A team was sent out to laser scan the building and courtyard, while the architectural team built the family of parts, such as windows and doors, required for the Revit model. From the edge-converted scan, the Revit model was quickly generated by snapping to the extracted edges, producing a phenomenally precise model.
Without laser scanning, Mr Carmichael said, it would have taken months to complete the project, despite having the original 1932 drawings, which only reflected the exterior. The Revit models created were used for visualisation, construction drawings and amazingly detailed capture of significant historical architectural elements, such as the stairs.
Mr Carmichael finished off with a great demonstration of the company’s own ARCHengine, which is developed using Unreal Tournament. All its 3D models can be loaded up and flown through in real time with varying degrees of rendering. Overall it was an excellent presentation, demonstrating that Scan to BIM could be done with Revit once a process was ironed out. Under questioning at the end, Mr Carmichael did admit that the file format issues created in generating the point-cloud required a full-time ‘File-Wrangler’: someone who had all the conversion utilities and could get the right information, in the right format, to the right person at the right time.
The bottom line is that this firm can survey, model and produce new construction drawings on a 360,000 sq ft job in three weeks.
Laser scanning and point-cloud data are exceptionally useful and have proven themselves in many different sectors. The SPAR conference is a fantastic event for getting a taste of just how diverse these applications can be and the value that companies derive from them.
While in CAD we have spent so long creating new geometry from scratch, it will become increasingly easy to capture existing geometry and intelligently edit and manipulate those forms. In a way we can ‘sample’ the real world and remix it in our design tools.
The key early areas appear to be in capturing old buildings for renovation and retrofit, as well as capturing buildings of outstanding cultural importance for preservation. I expect this to expand, with cross-over opportunities in factory design, modelling in context, GIS and the use of 3D raster as an archive format.
The key takeaway from my days in Houston came from Ford’s Mr Kalandani: producing 3D solids or geometry carries the highest cost. If data can stay as a point-cloud and still perform its task, there is no point in modelling everything. It certainly seems to be working for Ford.