Computational Design Symposium


In the second part of his analysis of the Computational Design Symposium at Autodesk University, Martyn Day asks what the impact of computational design tools will be and when Autodesk’s will be available.

Robert Aish has been at Autodesk for around 18 months now. Formerly working at Bentley Systems, where he designed the Generative Components (GC) layer for MicroStation, Dr Aish opted to move to Autodesk and leave that work behind. This meant he had to start from scratch to develop a new application for computational design on a new platform. To date, the annual Computational Design Symposium, held the day before Autodesk University in Las Vegas, is the only real chance to see what advances Dr Aish’s team has made.

The technology shown at the Computational Design Symposium did not fail to impress, with Dr Aish providing a compressed demonstration of his scripting language, codenamed D Sharp, to produce the complex structures and surfaces of the Metz Pompidou Centre, a competition prize-winning design by Shigeru Ban, Jean de Gastines and Philip Gumuchdjian.

Choosing to model something difficult and real is in keeping with the theme of the event, which aims to showcase existing, completed work built with an array of computational solutions. At the inaugural event, Neil Katz of SOM gave a memorable talk on his work for the World Trade Centre, for which he created special programs to explore different configuration options.

Algorithmic Design

Dr Aish believes that with today's computing power and 3D modelling, we are at a point of real design innovation. Pre-CAD, an architect would grab a felt tip or pencil and draw. When computers came along, the software mainly emulated those old drafting techniques, creating representational 2D geometry. CAD meant we could edit, copy and change drawings, increasing productivity. This is very similar to what word processing did for writing: it sped it up, but it did not change the way we write. Drafting tools do not change the way we draw, and it remains a surprisingly manual process.

Computational design aims to take some of these manual tasks away and reduce the risk of handling complexity. Buildings are defined by a conglomeration of rules: to populate a facade with a specific type of glazing panel, for instance, or to use specific mullions, the computer can carry out all this work once those rules are captured.
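As an illustration only (D Sharp itself has not been published, so the names and rules here are invented), a rule-driven facade population might look something like this in a general-purpose language:

```python
# Hypothetical sketch, NOT Autodesk's D Sharp: populate a facade grid
# with glazing panels according to a simple design rule.

def populate_facade(width_m, height_m, panel_w=1.5, panel_h=3.0):
    """Return a list of (x, y, panel_type) placements covering the facade.

    Example rule: ground-row panels get an 'entrance' glazing type,
    all others a 'standard' type. Dimensions are in metres; the panel
    sizes and type names are invented for illustration.
    """
    cols = int(width_m // panel_w)
    rows = int(height_m // panel_h)
    panels = []
    for row in range(rows):
        for col in range(cols):
            panel_type = "entrance" if row == 0 else "standard"
            panels.append((col * panel_w, row * panel_h, panel_type))
    return panels

# A 30 m x 12 m facade yields a 20 x 4 grid of panels.
facade = populate_facade(30.0, 12.0)
```

The point is not the specific code but the shift in working method: once the rule ("ground row is different") is captured, the computer applies it across the whole facade, however large.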

Then there are the 'what ifs': should you choose a different glazing solution, or wish to study the impact of a shading solution, testing every option is repetitive and tedious, requiring the facade to be redrawn many times over. With technologies like computational design, sliders can be created to make geometrical changes to the model, with the computer's processor taking the strain.
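To make the 'slider' idea concrete, here is a minimal sketch (again with invented names, not D Sharp syntax) in which a single parameter drives the geometry, so each option is regenerated rather than redrawn:

```python
# Hypothetical illustration of the parametric 'what if' workflow:
# one parameter (panel width) drives the mullion layout, so sweeping
# the parameter regenerates every option automatically.

def mullion_positions(facade_width_m, panel_width_m):
    """Return mullion x-coordinates for a facade split into equal panels."""
    count = round(facade_width_m / panel_width_m)  # number of panels
    spacing = facade_width_m / count               # actual panel width
    return [i * spacing for i in range(count + 1)]

# Moving the 'slider' from 1.2 m to 2.0 m panels regenerates the layout
# each time, instead of the facade being redrawn by hand:
for w in (1.2, 1.5, 2.0):
    xs = mullion_positions(30.0, w)
```

Each pass through the loop stands in for one position of the slider; the designer compares the resulting layouts instead of drafting each one.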


This does, however, alter the design process, as the architect, or whoever creates the design rules for the computational design system, needs to analyse and define what makes up the building's important rules before they are scripted into a D Sharp program. This bespoke scripting language will give teams of users the ability to make the design rules themselves.

Vancouver Convention Centre

One of the talks given this year highlighted the benefit of this approach to common problems, specifically curtain walling. The stunning Vancouver Convention Centre design has 150,000 square feet of inclined structural glass curtain wall, which needed to be analysed using advanced Finite Element Analysis (FEA) and then optimised to minimise deflection from its own weight and wind. The end result was a unique curtain wall supported by wind trusses hung from the roof structure above.

The new Vancouver Convention Centre Expansion Project in British Columbia opened on April 3, 2009 and is a model of sustainability. Designed by Seattle-based LMN Architects, it features a six-acre green roof. With 150,000 sq ft of floor-to-ceiling glass throughout the building, the firm deployed computational design techniques and analysis to validate the design.
Photos: LMN Architects

The $833 million project was conducted as a Building Information Modelling design, using integrated models and advanced CAD techniques. While BIM brought its challenges, there is little doubt that this steel job would have been more difficult without it. With more than 19,000 unique steel pieces fabricated for the structure, some of an enormous size, it was imperative that the owners, engineers, and fabricators could see what was being built and where conflicts may occur. In the end, communication lines were open and BIM was instrumental in understanding the geometry, minimising the amount of steel required, and solving problems before they happened.

Conclusion

Dr Aish is in Autodesk's Platform Group, not the Architecture/Engineering/Construction division, which would indicate that the technology is seen as a core capability that all the verticals could use. It does, after all, manipulate standard AutoCAD geometry. While much of the emphasis is on architecture and fabrication for that industry, it is clear that Autodesk sees this as potentially beneficial to all its customers.

With the new releases of Autodesk products almost upon us, it would be wild speculation to think that Autodesk is ready to release its computational design tools to the market. Going by the company's modus operandi, there should be a lengthy trial on the Autodesk Labs website before any commercial version becomes available.

Also, having started from scratch, 18 months is not a lot of time for Dr Aish to flesh out all that would be required. However, the Metz model, albeit created in a canned demo, was considerably more advanced than any geometry demonstrated at the previous Symposium. I would hazard a guess at seeing something more concrete on Labs in the next 18 months.

