Description
Key Learnings
- Understand the core concepts behind Generative Design
- See how Dynamo can be used with an optimization engine to implement Generative Design workflows for AEC
- Learn best practices for implementing complex, large-scale Dynamo graphs, bringing the data into Revit
- Learn how Van Wijnen has implemented Generative Urban Design in the real world
Speakers
- Kean Walmsley is the director of Systems Design/Architecture Engineering, focused on the research area of human-centric building design. He has previously worked on projects exploring the integration of IoT data with BIM (Digital Twins) using Autodesk Platform Services, as well as Generative Design in the AEC space. He has worked in various roles – and in various countries – during his career at Autodesk, including building and managing teams of software developers in Europe, the Americas, and Asia/Pacific. Kean engages regularly with Autodesk’s developer and computational design communities, providing technical content and insights into technology evolution.
- Lorenzo Villaggi is a Principal Research Scientist within the AEC Industry Futures group, Autodesk Research. His work focuses on novel data-driven design approaches, reusable design intelligence, advanced visualization and sustainability. Recent projects include a net zero carbon and affordable housing development for Factory_OS in California, the NIS engine Factory for Airbus in Hamburg, the Alkmaar Affordable Housing District for Van Wijnen in the Netherlands, the Autodesk Mars Office in Toronto, and the Embodied Computation Lab for Princeton University.
KEAN WALMSLEY: Thanks for joining this session on Generative Urban Design-- A Collaboration Between Autodesk Research and Van Wijnen. Everybody had a great AU this year? Yes? Yay. Great. OK, cool. Well, I'm just hoping that I've still got 20 minutes' worth of vocal cords left, because I feel like I don't. Luckily, I have my friend and colleague Lorenzo Villaggi from The Living here to co-present with me. And he'll take you through the first half of this presentation.
Originally, when we submitted the proposal, our hope was to have Frank-- sorry, Jelmer Frank Wijnia from Van Wijnen join us here. But unfortunately, for various reasons, he wasn't able to make it across. He contributed to this deck, though, so I wanted to make sure he was listed there and that you have his contact details as well. OK, and with that, I'll hand it over to Lorenzo. And I'll come back in 20 or 30 minutes.
LORENZO VILLAGGI: All right.
KEAN WALMSLEY: So, thanks.
LORENZO VILLAGGI: Thank you so much, Kean. Can you guys all hear me? Is this working? OK. So as Kean mentioned, I'm going to talk about this emerging framework of generative design and some of its applications across multiple scales with a particular focus on the scale of the city.
My name is Lorenzo Villaggi. I am an architect and research scientist with The Living, which is a first-of-its-kind Autodesk studio-- an architecture, research, and design studio. What we do at The Living is investigate the future of architecture through real-world applications today. We are part of Autodesk Research, also known as OCTO, or Office of the CTO. And together with all the other research groups, we try to think about new ways to empower people to make things, with The Living focusing on the AEC industry or, more broadly, on the built environment.
Over the years, we've been collaborating with a very diverse set of clients and collaborators, from universities, artists, and public institutions to forward-thinking private corporations. This is just a range of some of our most recently completed projects, to give you a flavor of what we're doing beyond the topic of discussion today.
The work lies pretty much at the intersection of practice and research. We always try to apply cutting-edge research, ideas, and technologies to real-world projects: from the integration of machine learning techniques into the design process to address natural material variation in wooden planks, to generatively designed office spaces, to temporary structures that push the boundaries of sustainability through biomanufacturing methods, and public interfaces that monitor water quality through biosensors-- so, literally, using living organisms to sense the environment.
And of course, as a research group, most of our projects are extended in the form of scientific publications. So we're actively contributing to an evolving and growing public body of knowledge around architecture and technology. And most of these papers in the past years have focused on generative design and some of its most technical aspects.
So generative design is a framework. It's a very flexible framework that can be applied to a very diverse set of design problems and across multiple scales. So what I'm going to talk about is three applications from an industrial component, the scale of an industrial component, all the way to the city.
But before diving into that, let me clarify a little bit. You probably already have a good knowledge of what generative design is, but I'd like to offer our own definition of it. Generative design is a form of co-design that combines artificial intelligence and human creativity to create solutions that would otherwise not be possible. In other words, with generative design, instead of explicitly drawing or modeling the form of what you want, you start by setting the goals and the constraints of your design problem and then task the computer with the automatic generation, evaluation, and evolution of high-performing solutions.
An important idea that lies at the foundation of generative design is evolution, which is the way nature has been designing living things for millions and millions of years. And an important concept in evolution is the idea of species. You can see a species as a sort of model that encapsulates all the unique characteristics and abilities an individual needs to survive. And through the iterative process of mutation, breeding, and selection, species are able to adapt and improve themselves over generations, through interactions among themselves and with the environment.
So we encapsulate this entire process, which takes millions of years, in the computer to solve real-world challenges. The steps that generative design goes through are always these three-- generate, evaluate, and evolve. In computational terms, this entails a parametric model that can create a vast solution space, a set of design goals in the form of simulation algorithms, and an intelligent system-- in our case, a genetic algorithm-- that can learn the design space of possible solutions and how to improve their performance over generations.
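The generate-evaluate-evolve loop described here can be sketched as a minimal genetic algorithm in Python. Everything in this example is illustrative-- the real workflow evaluates designs with simulation algorithms and a parametric model, while here the fitness is just a toy function to be minimized:

```python
import random

def generate(n, dims):
    """Create an initial population of random parameter vectors."""
    return [[random.random() for _ in range(dims)] for _ in range(n)]

def evaluate(individual):
    """Toy fitness: minimize the sum of parameters (a stand-in for a
    real simulation metric such as weight or displacement)."""
    return sum(individual)

def evolve(population, keep=0.5, mutation=0.1):
    """Select the fittest, then refill the population by breeding
    (crossover) and mutating the offspring."""
    ranked = sorted(population, key=evaluate)
    survivors = ranked[: max(2, int(len(population) * keep))]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                           # crossover
        child = [g + random.uniform(-mutation, mutation)    # mutation
                 for g in child]
        children.append(child)
    return survivors + children

random.seed(42)
pop = generate(30, dims=4)
for generation in range(40):
    pop = evolve(pop)
best = min(evaluate(ind) for ind in pop)
```

Because survivors are carried over unchanged, the best score can only improve from one generation to the next-- the same property that lets the real system improve its designs generation after generation.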
So let's start with the first application: generative design for airplane components. The project is called the Bionic Partition and was born out of a collaboration with the airplane manufacturer Airbus. They set the goal of changing the way they design airplanes to reduce carbon emissions by reducing weight. And we collaborated with them to help them figure out how to achieve this goal.
But we started small, with the wall partition that divides the galley from the seating area. It may look very straightforward, but it's actually quite a complex part of the airplane to design, mainly because of its stringent structural and functional requirements. It can only be 1 inch thick. It can only be attached to four points on the fuselage. It has to host the jump seat for the cabin attendants to sit in during takeoff and landing, which creates a very asymmetrical loading condition throughout the partition. And it has to allow for a cut-out for stretchers to go through in case of emergency. All of this needs to be met while reducing weight and keeping it very stiff.
So to address this, we looked at nature. In particular, we looked at how slime mold grows and encapsulated that logic in a custom algorithm. We looked at slime mold because it creates adaptive networks of branches which are both efficient and redundant-- which makes these networks very resilient-- and which distribute their mass very optimally throughout space.
So based on that, we created this algorithm and rule-based geometric system that can create tens of thousands of solutions while meeting all those requirements. The result was a very vast solution space of design options, and many of these went beyond typical engineering rules of thumb.
The same data set can be explored through custom data analysis tools, like this one, where each point corresponds to a design option and the axes represent the domains of the solution space-- in this case, weight and maximum displacement, which were both meant to be minimized through the process. Through tools like this, we're able to navigate the trade-offs between the different metrics and identify a subset of high-performing designs or, eventually, the one we want to move forward with.
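Navigating trade-offs between metrics that are all minimized amounts to finding the non-dominated (Pareto) subset of the solution space. Here is a generic sketch of that filter-- not the actual tool, and the (weight, displacement) pairs are made-up numbers:

```python
def pareto_front(points):
    """Return the non-dominated subset when every metric is minimized.

    A design dominates another if it is no worse on every metric and
    strictly better on at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p)))
            and any(q[i] < p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (weight_kg, max_displacement_mm) pairs for four options.
designs = [(30, 5.0), (28, 6.5), (35, 4.0), (33, 6.0)]
print(pareto_front(designs))  # (33, 6.0) is dominated by (30, 5.0)
```

Everything on the returned front represents a genuine trade-off: improving one metric means giving up some of the other, which is exactly the subset of high-performing designs worth discussing with stakeholders.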
Of course, this process was also happening at the microscale, where regions of higher stress would turn into denser lattice structures. Eventually, the entire partition was 3D printed in metal, making it currently the largest 3D-printed metal airplane component. And this is the full-scale partition. The result was a partition that is 50% lighter and 10% stronger than the ones that are currently used. It's actively being certified to become part of the A320 Airbus fleet.
The second application is generative design for architecture, and the project is the new Autodesk office in Toronto. Am I good with time? Yeah, OK. The project involved retrofitting 60,000 square feet of office space for around 300 people.
When generative design is applied to architecture, it inserts itself into a wide ecosystem of design tools and processes: from the capture of quantitative and qualitative information about the architectural problem, which is incorporated into the model, to the assessment of the solutions-- not only from a quantitative perspective, as I showed you earlier, but also qualitatively, using immersive platforms to assess the solutions that we generate.
And then, finally, when the architecture is built and occupied, we can monitor the space, mine that data to understand how people inhabit it, and feed the data back into the model-- basically using generative design as a tool for post-occupancy adaptation.
Another important thing to note about generative design for architecture is that it becomes much more complex than when it is applied to engineering. This is because you have to capture all the social and experiential features that are inherent in architectural problems and that do not belong to engineering ones. The other problem is that these features are hard to quantify.
So in this case, we designed six different design goals, and two of them were directly derived from human-level data that we gathered through systematic surveys. The workflow follows the three canonical steps-- generate, evaluate, and explore. For the first step, generate, we created a rule-based geometric system, and the idea was to subdivide the entire floor space into neighborhoods.
We then used seeds for each neighborhood to control the shape and the relative angle of the boundaries between neighborhoods. One edge of each neighborhood would then turn into a cluster of meeting rooms, and the remaining space would become desk layouts. These are the ingredients of our model, which can generate a vast multitude of floor plan options.
Then, in the evaluation part, we create the design goals. The first one is adjacency, which measures and minimizes the travel distance between teams, individuals, and the amenities and teams they want to be close to; work-style preference, which measures the suitability of each neighborhood to each team's preferred acoustic and lighting conditions; buzz, which maximizes the spread and intensity of high-activity areas throughout the space; productivity, which minimizes both acoustic and visual distraction sources at the level of the desk; daylight, which maximizes natural daylight across the entire floor plan; and, finally, views to outside, which measures and maximizes the number of unoccluded sight lines from each desk to the outside.
Once these two steps are completed, we can run our evolutionary process and generate tens of thousands of solutions that improve their performance along all those design goals, generation after generation. We use custom data analysis and visualization tools to better understand the design space, like this one that offers a view of the lineage of the designs across generations, and the more typical ones I showed earlier that show the design space across two-- or maybe three, if we use color-- dimensions. These are the tools we use to structure the discussions with the stakeholders and identify a subset of high-performing solutions, up until we find the candidate that we further develop in the subsequent design phases.
The result was a workplace very rich in features, with very unique characteristics that went beyond the typical "one size fits all" approach which has dominated workplace design for the past 10 years or so, and which lots of research has proved is not beneficial or healthy for the well-being of users. These are some photos of the completed and occupied office space, which opened last year around this time.
Finally, the third application is generative design at the scale of the city. Towards the end of 2016, we were approached by Van Wijnen, which is a very forward-thinking company that I'm going to talk about in a second. The goal was to apply generative design, for the first time at the scale of the city, to design a net-zero energy residential neighborhood for low-income families.
Van Wijnen is a development and construction company based in the Netherlands that is at the forefront of modularized, prefabricated housing systems. They're on the path to revolutionizing how they design, make, and deliver their products by integrating cutting-edge technologies throughout the lifecycle of buildings, from design to occupancy. So we partnered with them to help them achieve this goal.
Besides Jelmer Frank Wijnia, whom Kean already introduced and who was unable to join us today but played a critical role in the process, we wanted to acknowledge two other people from Van Wijnen who also played very important roles throughout the project.
These are Peter Hutten, CEO of Van Wijnen, who you can see at the very top right of this slide, and Hilbrand Katsma, COO of Van Wijnen, whom some of you may have seen in last year's mainstage keynote, where he delivered the story of this very same project. Oops. And what you'll see now is a short video that summarizes the project, with an interview with Jelmer Frank Wijnia.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- Van Wijnen is a construction company located in the Netherlands. It's a big construction company that doesn't only focus on building houses; it also cares about the human needs of the residents. So in 2014, Peter and Hilbrand discovered generative design during AU. And they realized it would be a very nice thing to have for designing urban plans.
We talked about creating an urban planning tool. I was working to collect the different goals they need for setting up a design. And those goals consisted of different parameters, which we created in Dynamo together with The Living. Normally, I draw or design plans with pencil and paper. Now we can use the computer to create different solutions for one design. And working closely with such a company gave me a lot of inspiration, also for [INAUDIBLE] gave him a lot of inspiration.
[MUSIC PLAYING]
[END PLAYBACK]
LORENZO VILLAGGI: Cool. So, many thanks again to Jelmer Frank Wijnia, the person interviewed, who spearheaded the generative design project from the Van Wijnen side. The project was structured in two phases. Phase 1 was about applying generative design at the scale of the city and delivering a design for further development on one specific site, with a very specific set of design goals, requirements, and constraints. Phase 2-- which Kean will be talking about very shortly-- was about extending the model we used in phase 1 into a general-purpose, reusable tool that can be applied to any site boundary, where design goals can be added or removed and where constraints, requirements, and inputs can be changed by the user.
So the workflow-- again, I just want to stress how flexible the framework is-- is still made of the same three steps. An important and critical aspect of the project was meeting in person with Jelmer Frank and the Van Wijnen team to gather the necessary information about the project and better understand what we were trying to solve.
The site was in Alkmaar, which is a city in the northern area of the Netherlands, a couple of hours away from Amsterdam. A great advantage for us was Van Wijnen's use of standardized housing typologies, which are being developed by Fijn Wonen-- which means "fine living"-- a subsidiary company of Van Wijnen. Here you can see three of the standard typologies: the 101, 201, and 301.
Additionally, we collected constraints from the local building code to make every design option code compliant. This includes setbacks, building orientation, maximum building heights, and street access locations, as well as the developer's wants-- the requirements that the developer set for the project, which, in other words, means the amount of stuff we would have to place on the site. In addition to the housing typologies I showed earlier, this includes apartment buildings of varying numbers of stories, a hierarchy of circulation internal to the site for cars, pedestrians, and cyclists, and a set of rules and quantities for parking spaces.
The design goals that we identified throughout the process, together with the stakeholders, are meant to represent not only Van Wijnen and the developer's perspective but also the end users'. So we designed environmental design goals, like solar gain; financial ones, including project cost and profitability; and also more architectural and urban design ones, like backyard size, architectural variety, and views to the outside.
The geometry system follows five different steps. We start with the acquisition of the boundary line and the creation of what we've been calling a boundary-aware mesh, which adapts its topology to the morphology of the existing context. Then we use that same mesh to automatically generate avenues and streets. Two subsequent optimization steps place first the house units and then the apartment buildings. And then, finally, we assign the program and the green areas and parking spaces.
So this is a short animation that-- is that OK? Just making sure that I'm still on time-- a short animation that shows the geometry system in action. You already saw the first steps: the boundary-aware mesh, the generation of the streets that tries to meet the criteria we set at the beginning, and the optimization processes that place the two housing types-- the one- and two-story ones-- and the apartment buildings. All those rules are executed automatically to generate a multitude of neighborhood layout options.
One important thing to note as well is that it's not sufficient to have a parametric model, a simulation engine, and a genetic algorithm to do good work with generative design. So at The Living, we've been developing custom analysis and diagnostic tools that we use throughout the generative design process to understand the quality of the design space.
This is done to ensure that, A, we achieve enough variation among the population of design solutions that we create-- which means the relationship between the computer and the user becomes a true collaboration, and we find solutions that are not expected-- and, B, that the design space achieves an appropriate level of complexity that allows the genetic algorithm-- or, more generally, an intelligent system-- to learn how to improve the performance of the solutions over time.
Once all that is set up and we are sure that the design space is good, we run the optimization. Here, we generated over 17,000 design solutions at the scale of the city. And because this is a seven-dimensional design space-- we have seven different goals-- we use tools like the pairwise scatter plot, which lets us make every pair combination of the different design goals and helps us identify the trade-offs that are most interesting to explore further.
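To see why a pairwise scatter plot is needed here: every unordered pair of goals becomes one panel, so a seven-goal space already requires 21 panels. A quick sketch-- the goal names are paraphrased from the talk, and the seventh is a placeholder since only six are named explicitly:

```python
from itertools import combinations

# Goal names paraphrased from the talk; "goal_7" is a placeholder
# for the seventh, unnamed design goal.
goals = ["solar_gain", "project_cost", "profitability",
         "backyard_size", "architectural_variety", "views", "goal_7"]

# Each unordered pair of goals is one panel in the pairwise
# scatter-plot matrix used to hunt for interesting trade-offs.
pairs = list(combinations(goals, 2))
print(len(pairs))  # 21 panels for a 7-dimensional design space
```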
From here, we can start zooming in and using more interactive data analysis and visualization tools. This was an early prototype of the UI that the current Refinery tool uses. It allows you to sort and visualize the designs interactively by changing the domain-- that is, changing which design goals you are visualizing. It also allows you to hover over every design point and see on the side what the design looks like, save designs, and use them for discussion with the stakeholders.
Then, finally, we identified a handful of high-performing solutions, up until we identified the one we wanted to move forward with. This particular one was high performing along solar gain, meeting the requirements of the future inhabitants, but it also met the requirements of the developer because it was high performing along the two financial metrics.
But what was really interesting is that we discovered the generative design system went beyond the typical rules of thumb that a designer-- or a planner, in this case-- would have used to come up with design solutions. The typical rule of thumb for these kinds of problems would have been placing all similar apartment buildings and similar houses in the same area, making a big parking lot, and then making the remaining space green, like a public park. That is a fair design strategy-- a very efficient packing strategy.
But maybe it's not very interesting from a design point of view, and maybe it doesn't meet all the criteria that complex projects like this put on the table, or all the developer's wants and, at the same time, the users' wants. So what happened here is that generative design was able to create families of designs that used the strategy of creating horizontally oriented streets, like you can see here, to further subdivide the entire neighborhood into smaller parcels-- effectively increasing the diversity of architectural types and units placed in each parcel, and creating more secluded, green public areas that are mixed with the different residential units.
So I wanted to conclude my part by offering a couple of benefits of generative design and, more specifically, of generative urban design. First of all, it's a form of co-design. It's about augmenting the abilities of the designer by truly collaborating with the computer. So we're not talking about a tool; we're talking about a collaboration.
It helps you manage complexity in complex design problems, especially at the urban scale, where the number of stakeholders increases and the number of conflicting requirements and design goals increases. It allows you to incorporate large amounts of data, both from past projects and from future ones and current requests. It helps you navigate trade-offs among different designs. And one thing that we found very useful is that it helps us structure the discussion among many stakeholders in a very transparent and data-informed way.
And last but not least, generative design offers a living model-- a model that goes beyond the design phase and that you can keep using by mining data from the occupied space, understanding how users use the space, pushing that data back into the model, and using generative design as a tool to suggest possible changes to the space in the future. Thank you very much, and I'll hand the presentation over to Kean.
KEAN WALMSLEY: Great. Thanks, Lorenzo. OK, so I joined the project for phase 2. Phase 2 is really when we wanted to take the work that had been done in phase 1, as Lorenzo said, and make a reusable tool that Van Wijnen could run in-house on a variety of different problems. Key to that was the ability to upload their own site boundary from AutoCAD and then run the application inside Dynamo in order to find different solutions.
So firstly, let's talk a little bit about the space of tools that are available now from Autodesk in this area. Traditionally, of course, all of you are familiar with Revit for parametric modeling and Dynamo for visual programming. These, of course, are more about recording decisions in a traditional design scenario.
As we move into generative design, it's really about describing goals and constraints. And the processes, if you like, that are being focused on right now by a tool called Refinery are option generation-- which many of you may have seen with Project Fractal. Any Fractal users out there? Great. OK-- and now design optimization, which uses a genetic algorithm to refine a set of candidate solutions, and then exploration of those solutions.
Now, going beyond that-- and this is where we are today with Refinery-- we do intend to increasingly enable the integration of machine learning, to provide additional insights and to avoid running some complex simulations during the generative process. One good example might be daylight studies, or anything else that takes a long time to compute for lots of candidate solutions. In cases like this, we can use machine learning as a shortcut. And there are other opportunities to integrate machine learning down the line.
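The machine-learning shortcut described here is usually called a surrogate model: fit a cheap approximation to an expensive simulation, then query the approximation during optimization. A deliberately tiny sketch-- the "simulation" below is a toy linear function and the surrogate a 1-D least-squares fit, where a real daylight surrogate would be a proper regression model trained on actual simulation runs:

```python
# Stand-in for a slow simulation (e.g., a daylight study).
def expensive_simulation(x):
    return 3.0 * x + 1.0

# Sample the expensive simulation at a few points (the training data).
xs = [i / 10 for i in range(11)]
ys = [expensive_simulation(x) for x in xs]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def surrogate(x):
    """Instant estimate of the simulation result-- no simulation run."""
    return slope * x + intercept

print(round(surrogate(0.5), 2))  # 2.5, matching the real simulation
```

During a generative run, thousands of candidate designs can be scored through the surrogate at negligible cost, with the full simulation reserved for the final shortlist.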
So this is the Dynamo graph that I was involved in creating, and this is a sample run of just one neighborhood inside Dynamo. I'll talk a little bit about the graph itself and how Refinery works. So this is the graph, and here's a subset of it. Behind the nodes in this area, you have your input parameters-- you have to specify your input parameters, and those will be picked up by Refinery. Behind these nodes is the logic of the graph and the various pieces of the process that need to be run through, and then there are the metrics that get picked up on the other side.
Behind these nodes, there's a bunch of data as well. For example, we have embedded solar energy calculations implemented in Python code. We have construction costs and revenue information embedded in the graph as well, and then, of course, the geometry system-- whether we're talking about the individual one-- I think it's called the nucleoid geometry system-- or a more concentric one, such as Rotterdam, or, finally, the more conceptual geometry system. All these concepts for the geometry system are embedded in the graph.
I really wanted to include this image, which shows the next plot that we had to work on. It looks kind of dull-- it's just 50 meters by 50 meters, so it's kind of boring. But I did want to show it so you realize that this is a real project that they were working on. This is in Loskade, in the north of the Netherlands.
So, yes, a 50-by-50-meter lot-- not very big and not that interesting in terms of shape, let's say. But it had three access points, so we specified three access points that were fixed. And then there was a green zone. So it was important for us to have the capability to specify zones inside the lot that would be excluded from the parceling algorithm, so that no buildings would be placed on those green zones.
We did that inside AutoCAD by introducing the concept of exclusion zones. We'd have the lot boundary, which we'd export as an SAT file to be picked up by the graph, and then we'd have exclusion zones in the same coordinate space that would also be exported as SAT files and picked up by Dynamo. OK?
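The effect of an exclusion zone can be illustrated with a plain point-in-polygon test: candidate building positions that fall inside a zone are filtered out before parceling. This is a generic ray-casting sketch with made-up coordinates, not the actual graph logic (which works on SAT geometry inside Dynamo):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: a point is inside if a ray cast to the right
    crosses the polygon's edges an odd number of times."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical 50 m x 50 m lot with a square green zone in one corner.
green_zone = [(0, 0), (20, 0), (20, 20), (0, 20)]
candidate_sites = [(10, 10), (30, 30), (5, 40)]
buildable = [p for p in candidate_sites
             if not point_in_polygon(p[0], p[1], green_zone)]
print(buildable)  # [(30, 30), (5, 40)]
```

Because the lot boundary and the exclusion zones share one coordinate space, the same kind of containment test can be applied to every candidate parcel before any buildings are placed.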
We also needed the capability to add in additional housing types. Here, for example-- as Lorenzo mentioned, there are the 101, 201, and 301-- they also wanted to add an additional type, a 401, with its various cost information, et cetera, so the metrics would pick up that information when doing calculations about revenue, profit, and those sorts of things.
Obviously, through the process, there was the eventual selection of a design from the various solutions created. And we needed the ability to export to Revit, mainly with a view to visualizing the results in VR. That was the main workflow they had for bringing it back into Revit. We weren't creating architectural objects inside Revit; we were really just bringing the solid geometry from the graph back into Revit and then using VR to explore it. It is possible, of course, to go further and place architectural elements, but that was not in the scope of this particular tool.
So this is a view of Refinery-- this is how it looks when you're exploring various results. These are studies that have already been created with the Van Wijnen graph. Here's a scatter plot that allows you to plot the trade-offs and view different dimensions. Here, on the y-axis, we're looking at solar potential; on the x-axis, the percentage of apartments versus houses. The size shows revenue, and the color is the program information.
Down here, we have a number of 3D thumbnails. These are all navigable-- you can spin them around and zoom in on them. So it's a nice way to identify interesting designs. When you hover over one of these solutions up here, the thumbnail for it will get highlighted down here, and vice versa. OK, so there's this nice connection between the two areas in terms of exploration.
So here it is live inside Dynamo. I've just brought it across and run it, and we can take a look at the graph a little bit there. To launch Refinery, you go to the View menu and click on Refinery. In this case, it'll take a little while-- I've got a number of different runs that I've done previously-- and then you'll see the thumbnails coming up here. These are pretty big runs in general, so the amount of geometry that gets created is fairly significant. It's a very complex graph with a lot of results.
So here-- let's just take a quick look. I can do what we talked about earlier: plot solar potential on one side and compare that with, say, backyard size. Actually, I was going to look at revenue against backyard size, rather. We can also take a look-- so this was, I think, a random run. We can take a look at an actual optimized run here. And you can see there's perhaps some correlation in terms of the data output-- you can see that there's some sort of Pareto frontier.
If you don't want to use a scatter plot to compare the results, you can also take a look at a parallel coordinates view. As you hover over the items at the bottom, it'll show the different input and output parameters for each solution and how they relate. Similarly, if you don't want a graphical view at the bottom but just want to access the data, you can click on the tabular view, and then, of course, you can see the individual values. And again, hovering over items at the bottom will show them at the top. OK?
So that's a quick introduction to Refinery. Now, some of the challenges with this project. Well, it was a very large-scale project. Colin McCrone, who was previously on the Dynamo team, described it as possibly the largest Dynamo graph in existence. I don't know. It's the only one I've worked on, so I can't really say for sure.
In fact, I should probably mention Michael. You should probably be on this list as well. You were involved early on, right, before you went on sabbatical, I think. So there were possibly eight people involved in creating the graph, which had its own set of challenges, in the sense that Dynamo isn't currently a multi-user editing solution, as some of you have probably discovered. So we needed a gatekeeper who managed the contributions from various people.
We used GitHub as a repository to store the various graphs. Each of us had our own version with our name on it, and then there was a master. And I was the gatekeeper: I had to pick up the various graphs that people had written, copy and paste sections of the graph across, test everything, and make sure it all worked. So I was the lucky one, in that sense, I think you could say.
We also had an anecdote-- I mean, it was fixed, but it was an interesting one. The graph was so big, and there was a very subtle bug inside Dynamo, that saving the graph took more than a minute. And as the autosave inside Dynamo was set to 60 seconds, it never stopped saving. So basically, it just locked up completely. Luckily, they fixed the save bug so it happens quicker now. But that was an interesting moment-- we had to disable autosave for the project to even work.
So there's definitely significant complexity to deal with. I showed you the overall graph, but of course, there are custom nodes, there's Python code-- there's more complexity below the surface. So there was a lot to manage.
We did try to restrict our use of custom nodes. Ideally, we'd have gone through and pulled out reusable components that we could then reapply down the line, and I do think we will do that at some point. But for the immediate needs of the project, we really just restricted custom nodes to repeated areas of the graph, where we wanted to avoid bugs being introduced by something being changed in one place and not the other. So we went through and added custom nodes for those areas. But longer term, I'd really like to mine the graph for more repeatable and reusable components.
So I just wanted to share six tips and tricks that bubbled up during this particular project, in case they're of interest or of use to you when you're working with Dynamo and Refinery. So first of all, perhaps slightly non-traditionally, we structured-- well, I think it's fairly typical to structure your nodes with the data flowing from left to right within Dynamo, for sure. But we also structured things in layers from top to bottom.
Now, what we did is we made sure that-- so at the top here, you can see these gray-- sorry, orange nodes. Well, they actually look like the same color on this display, but they're not. These nodes are all input nodes. The orange ones are fixed constraints that really don't change-- they aren't optimized by the Refinery system. And these ones here are the input parameters that are optimized by Refinery; they're the ones that are controlled by Refinery.
The green nodes are the kind of real guts of the algorithm, the various steps of the processing that they needed to go through. The blue layer is the display layer. And then the gray layer is the export to Revit layer. And this is where we kind of have data coming into there and then flowing through. And I'll explain what we do at the end here in a later slide.
OK, so we structured it in layers. One of the reasons we did this is that it's all very well having your inputs on the left and flowing through to the right, but it can be very confusing if the inputs are actually needed for a process later on. So here, for example, we have these processes that relate to the placement of houses. There's a bunch of input parameters that feed into this process but aren't needed before it. So having them separated here made it much easier for us to find them, and it also meant that we didn't have these horrible spaghetti wires going all the way through the graph, which I didn't like very much. So that's how and why we structured it like this, for what it's worth.
I also insisted on-- I mentioned I don't really like spaghetti going all over the place inside Dynamo. We used a technique which may or may not be interesting: we'd often have code blocks with just a single variable name that we placed in different places and fed the wires through. You can then just pull them around, and that moves the wires out of the way of everything else that's going on. So it allowed us to maintain the cleanliness, I suppose, of the graph. This is just one area of the graph where we have code blocks keeping wires out of the way. OK, so that's just another little trick that we used.
We also used the logical coloring of groups-- that's obvious. But we also made sure that the labels were all set at maximum font size, as big as they could be, because then, as you're zooming in and out, you can still see them more often than not. When you zoom out in Dynamo, sometimes the text disappears. So that was something we insisted on: always have a logical group name, and make sure the label is at maximum font size.
The other thing that we did-- and this is something that I implemented after actually meeting Michael in Berlin; we chatted about this problem-- I was like, the graph is so big that every time I see this message at the bottom, I pull my hair out, as you can see: "Run Completed with Warnings." I'm like, OK, where? Where do I even start? Right? I'd zoom into a section, and I'd see that little tooltip, and think, OK, well, is it here, or is there one earlier?
In the end, I got so irritated, I said to Michael, you guys need to fix this. He said, no, you fix it-- there's an API that you can use. Well, he didn't say it like that. He said, you can create a view extension inside Dynamo that searches through for the warnings, lists them, and then allows you to navigate through them.
So that's when I created Warnamo. It's on the Dynamo package manager-- go ahead and install it. Basically, what it does is it creates a panel of all the warnings and sorts them from left to right: it goes through, gets the x location of each one, and sorts based on that, so it shows the leftmost one at the top of the list. And if you maintain the structure of making sure everything flows from left to right, the earliest warning should be at the top of the list.
And then as you click on one, it'll take you to that location in the graph and automatically pull up the tooltip, so you know what the warning is about. Of course, the actual problem might be further left than that, but at least it's a starting point, and you know where to start looking. OK, so that was another thing that we ended up finding extremely useful. Oops.
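The ordering idea behind Warnamo is simple enough to sketch in a few lines of Python. The names and data model below are made up for illustration-- the real tool is a Dynamo view extension written against Dynamo's C# API-- but the sorting logic is the same: order warnings by the x position of their node, so the most upstream warning comes first.

```python
from dataclasses import dataclass

@dataclass
class NodeWarning:
    node_id: str
    x: float        # node's horizontal position on the canvas
    message: str

def order_warnings(warnings):
    """Sort warnings left to right, so in a left-to-right graph the
    earliest (most upstream) warning appears at the top of the list."""
    return sorted(warnings, key=lambda w: w.x)

# Hypothetical warnings collected from a graph:
warnings = [
    NodeWarning("c", 540.0, "Null value passed to node"),
    NodeWarning("a", 120.0, "Input is missing"),
    NodeWarning("b", 300.0, "Index out of range"),
]
ordered = order_warnings(warnings)
print([w.node_id for w in ordered])  # leftmost node first: ['a', 'b', 'c']
```

Clicking an entry in the real panel then pans the canvas to that node and raises its tooltip; the sketch only covers the ordering step.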
The other thing: preview nodes. It can be a bit frustrating when you've got a big graph with graphics contributed from all over the place, and you're never quite sure where they came from. So we insisted on the discipline that the only nodes with the preview flag set to "true" should be inside a blue group-- the display group.
Eventually, when we started doing the Revit export, and we had the preview graphics inside Revit alongside the actual geometry, at different scales, we realized that we actually had to go beyond that and be able to turn off all the graphics completely. So in the end, we had the discipline of only having a single node with the preview flag set.
And then we had all these other items that get fed through into this code block, which contributes to a list. And anything we wanted to turn off, we'd just replace in the list with an empty list, if that makes sense. It ended up working really well. I don't know if this is weird or not-- I don't know enough about Dynamo to know whether this is normal-- but for us, it worked great.
And then we did the same thing for the Revit export-- it's exactly the same data coming through, and we did a similar list, replacing suppressed items with an empty list. We just have a scale step before we use the ImportInstance.ByGeometry node to import the geometry into Revit. It's not resolved here because this is Dynamo Sandbox, but inside Dynamo for Revit, that's how we'd pull the geometry into Revit.
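The single-preview-node pattern just described can be approximated in plain Python, standing in for the Dynamo nodes. Everything here is invented for illustration (the group names, the geometry placeholders, the scale factor); the point is the shape of the trick: each geometry source contributes a sub-list, toggled contributions are swapped for an empty list, and one downstream node receives the combined result.

```python
EMPTY = []  # plays the role of swapping in an empty list for a suppressed group

def display_list(geometry_sets, toggles):
    """Concatenate only the geometry sets whose toggle is on; everything
    else contributes an empty list, so a single downstream node controls
    all previews (or the whole Revit export)."""
    out = []
    for geo, show in zip(geometry_sets, toggles):
        out.extend(geo if show else EMPTY)
    return out

def scale_geometry(points, factor):
    """Uniform scale step, like the one applied before the Revit import."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

# Hypothetical geometry groups:
houses = ["house_1", "house_2"]
streets = ["street_1"]
gardens = ["garden_1", "garden_2", "garden_3"]

# Streets toggled off -- only houses and gardens reach the preview node:
preview = display_list([houses, streets, gardens], [True, False, True])
print(preview)
```

The same `display_list` output, scaled, would then feed the single export node, mirroring how the graph reused one data stream for both display and Revit export.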
So, implementing your own generative workflows: as of Monday of this week, you can download Refinery. It's in public beta, and I recommend you do so-- give it a try. This is the URL, http://autode.sk/RefineryBeta. It works with Dynamo 2.0.2 as well as daily builds of 2.1. I've been using 2.1, and it works great.
It's still in beta, so do expect some quirks. One of the known issues relates to the volume of data that's created, especially for a graph like the Van Wijnen one that generates a lot of geometry. That's all currently stored in JSON format on the hard disk. Previously, you wouldn't even realize that these runs were creating so much geometry. Now it's surfaced in the UI-- there's a trashcan icon, it tells you how much space has been taken, and you can easily get rid of previous runs, which is a great enhancement. But before that, I had to keep checking the AppData folder just to make sure my disk remained usable.
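If you're on an older build and want to keep an eye on that disk usage yourself, a few lines of Python will report the size of a folder tree. The exact location of Refinery's run data under AppData varies by version, so treat the commented-out path as an assumption to verify on your machine.

```python
import os

def folder_size_mb(path):
    """Sum the sizes of all files below `path`, in megabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)

# Hypothetical location -- check where your Refinery build stores its runs:
# print(folder_size_mb(os.path.expandvars(r"%APPDATA%\RefineryData")))
```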
I'd suggest starting small. Don't start with a graph like the one you saw today-- you don't have to, and it's really a bad idea. Start small. I did a session on Tuesday called Getting Started with Generative Design for AEC, which was really about just getting started with Refinery. It took a very simple example: tiling a floor plan.
Again, you input the floor plan from AutoCAD, and then you have three parameters: the angle of the tiles, and an offset in x and y. Then you basically measure the number of completed tiles, the area of discarded tiles, and the number of partial tiles. And we showed how you can do various runs and optimize that graph.
And it works pretty well. It's fairly basic, but it's a good starting point for this kind of thing. So take a look at that-- it's on my blog, keanw.com. You should also be able to find it via the recording and the materials for this class. Just let me know if you can't track it down.
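To give a sense of the kind of measurement that tiling graph makes, here's a simplified sketch in Python. It counts complete versus partial tiles for a floor plan, given an x/y offset of the tile grid. Two simplifications are mine, not from the class example: the rotation-angle parameter is omitted, and the plan is assumed to be a plain rectangle rather than a boundary imported from AutoCAD.

```python
import math

def tile_metrics(plan_w, plan_h, tile, dx, dy):
    """Count (complete, partial) square tiles of side `tile` over a
    rectangular plan of size plan_w x plan_h, with the grid shifted
    by (dx, dy). Tiles wholly outside the plan are ignored."""
    complete = partial = 0
    i_range = range(math.floor(-dx / tile), math.ceil((plan_w - dx) / tile))
    j_range = range(math.floor(-dy / tile), math.ceil((plan_h - dy) / tile))
    for i in i_range:
        for j in j_range:
            x0, y0 = dx + i * tile, dy + j * tile
            x1, y1 = x0 + tile, y0 + tile
            if x1 <= 0 or x0 >= plan_w or y1 <= 0 or y0 >= plan_h:
                continue  # tile entirely outside the plan
            if x0 >= 0 and y0 >= 0 and x1 <= plan_w and y1 <= plan_h:
                complete += 1  # tile fully inside the plan
            else:
                partial += 1   # tile crosses the plan boundary
    return complete, partial

# 10 x 10 plan, 2 x 2 tiles, grid aligned: a perfect 5 x 5 fit.
print(tile_metrics(10, 10, 2, 0, 0))  # (25, 0)
# Shift the grid by 1 in x: two columns of tiles now straddle the edges.
print(tile_metrics(10, 10, 2, 1, 0))  # (20, 10)
```

Outputs like these (plus discarded area) become the objectives that Refinery then optimizes over the angle and offset inputs.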
So what's next? Well, Van Wijnen are very interested in expanding this to look at the role of the property developer-- or the project developer, I should say-- and bringing in GIS data, such as from Esri. That's something we're talking about; it's in its early stages. They're also interested in going beyond the initial focus on urban design to look at the architectural aspect as well. And then, of course, they want to gather occupancy feedback and information and start to integrate that more into the design process.
So here's a quote from Jelmer Frank. Should I read it? No, I'll summarize it. Basically, he said that it's been very beneficial for Van Wijnen. It's a big time-saver, and it's easy to compare different results. There's still some work to be done. For them, it feels like right now the generative urban design process is more like a partner in crime than something replacing any jobs. And that's an important point, I think: it's a valuable tool, it's very useful, and it's saving them a lot of time, but there is still a very important role to be played by the designer.
So, and I think with that, we've got about 10 minutes for Q&A. OK, we'll turn this on. OK, perfect. Excuse me. Is it on? I think I turned it on. Maybe I turned it off. Here, pass it back. And I'll-- oh. No. I did turn it on. But there is another one at the back. I'll just-- oh, that's mine. OK, this should work.
AUDIENCE: Thank you. You mentioned a few time-- oh. Is that still OK?
KEAN WALMSLEY: Yeah.
AUDIENCE: Yeah, OK. You mentioned a few times that the study size in the space that it was geometry. [INAUDIBLE]
KEAN WALMSLEY: Yes. Right. So, you know--
AUDIENCE: Yeah, how would you do--
KEAN WALMSLEY: Yeah, so there's a few things that are going to happen. We're going to optimize-- so it definitely would scale. The tool itself scales to larger plot sizes without any problem. You might have to add a few parameters for more streets to go across it-- otherwise there aren't enough streets-- but that's trivial.
Yes, it'll create more graphics and more geometry, for sure. And that is something we're looking at anyway, in terms of optimizing the format for storing the graphics and just making it more performant generally. So I think that is in progress, and I don't have any doubts that it will scale without any problem.
Yeah, OK. And of course, right now, it's running locally, which means it's taking up CPU time-- you're blocking the CPU for a while, because it's quite CPU intensive. But the plan longer term is to have that go to the cloud, of course. So it'll be more straightforward in that sense. Questions? OK. Oh, dear. This is dying. You know what? I'm gonna do this.
AUDIENCE: [INAUDIBLE] generative design. Can you talk about [INAUDIBLE]
So just thinking about the [INAUDIBLE].
LORENZO VILLAGGI: OK. [INAUDIBLE].
Optimization algorithms. And in this case, yeah, it was not a combination of different goals into one single fitness function. It was seven different outputs that were used to drive the optimization. And then, of course, at the end of the process, you can start weighting your design goals as a way to prioritize one versus the other.
And that's where the stakeholders come into play and start saying, we have all these different goals, but for us, architectural variety is two times more important than project cost. So we can start applying coefficients and weights, and sort and navigate the design space in a different, more targeted way.
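That post-hoc weighting step can be sketched briefly. Assuming each solution's outputs have been recorded, you can min-max normalize each objective (so weights are comparable across different units) and rank by a weighted sum. The objective names, values, and the higher-is-better convention below are invented for illustration-- they're not from the Van Wijnen model.

```python
def rank_solutions(solutions, weights):
    """Rank solutions by a weighted sum of their min-max normalized
    outputs. All objectives are treated as higher-is-better."""
    objectives = list(weights)
    lo = {k: min(s[k] for s in solutions) for k in objectives}
    hi = {k: max(s[k] for s in solutions) for k in objectives}

    def score(s):
        total = 0.0
        for k in objectives:
            span = hi[k] - lo[k]
            norm = (s[k] - lo[k]) / span if span else 0.0
            total += weights[k] * norm
        return total

    return sorted(solutions, key=score, reverse=True)

# Hypothetical study results:
solutions = [
    {"id": 1, "variety": 0.9, "profit": 0.2},
    {"id": 2, "variety": 0.3, "profit": 0.9},
    {"id": 3, "variety": 0.6, "profit": 0.6},
]
# "Architectural variety is two times more important than cost":
best = rank_solutions(solutions, {"variety": 2.0, "profit": 1.0})
print([s["id"] for s in best])  # [1, 3, 2]
```

Changing the weights re-sorts the same study results without re-running the optimization, which is exactly what makes the stakeholder conversation cheap.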
KEAN WALMSLEY: OK, any other questions?
AUDIENCE: [INAUDIBLE].
LORENZO VILLAGGI: I can hand over my--
KEAN WALMSLEY: No, no. It's OK. You keep yours. You're going to need it.
LORENZO VILLAGGI: OK.
KEAN WALMSLEY: There was one here. I did see [INAUDIBLE] I'll come back.
AUDIENCE: Can you explain a little bit more about the evolutionary algorithm? Is it always the same, or do there exist several ones?
LORENZO VILLAGGI: There are definitely lots of evolutionary algorithms, and this is not new technology-- they've been around for quite some time. If you'd like some very technical information, I would direct you later to Mike Dewberry, who is the mastermind behind Refinery. In our experience, for architectural problems-- which are complex problems that go beyond the typical and original use and purpose of evolutionary algorithms-- the one that has yielded the best results is NSGA-II, which is the one that we've been using so far.
And this is because of the reasons I explained earlier: it doesn't require prior weighting, so before you start the project, you don't need to know what the most important goal is. It can start unbiased in that way. And it's the one that yielded the best results, given that the models we create are quite complex and there's no direct one-to-one relationship between the inputs of the geometric model and the outputs at the very end.
And from our experience over the past years, it was the one that was able to explore and improve the performance of the solutions over time in the best way. But I'm sure there might be other ones that could perform comparably or better-- we haven't done rigorous comparison testing between different algorithms so far.
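The core idea NSGA-II builds on-- Pareto dominance-- can be sketched in a few lines. This is only the dominance test and the non-dominated filter, not the full algorithm (which adds non-dominated sorting into ranked fronts, crowding distance, and the usual genetic operators); all objectives are treated as maximized here for simplicity.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: no worse on every objective
    and strictly better on at least one (all objectives maximized)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """The non-dominated set -- the kind of frontier NSGA-II pushes
    forward each generation, with no prior weighting of objectives."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical two-objective outputs, e.g. (solar potential, revenue):
points = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (2.0, 2.0), (1.0, 1.0)]
print(pareto_front(points))  # [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0)]
```

The solutions that survive this filter are the trade-off curve you saw in the Refinery scatter plot; weighting only comes in afterwards, when stakeholders pick a point along that curve.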
AUDIENCE: This one's probably more for you, Kean. We have a similar graph that generates these kind of plots as well. And we ran into an issue with Dynamo where after several runs it would start having memory leaks. So we had to split up the graph into a few bigger graphs. And I was just wondering with running this multiple times sequentially if you had that issue as well.
KEAN WALMSLEY: So, no. Technically, it's running in Dynamo CLI-- or there is a version that uses a different approach-- but it's basically a headless version of Dynamo being executed repeatedly with different runs. And I think it's basically starting up the executable each time in order to do the run.
So it's not doing the execution inside Dynamo Sandbox itself, and that, I think, is probably why that issue isn't coming up. Even with a very big graph, the headless version of Dynamo is going through very quickly and performing well. So, yeah. Yes? I'm sorry. Would you mind passing this [INAUDIBLE]?
AUDIENCE: Yeah, thank you. I was wondering why you used Dynamo instead of writing a Revit add-in to make these runs, and whether that wouldn't have reduced the complexity and improved the runtime.
KEAN WALMSLEY: [INAUDIBLE] this one, or [INAUDIBLE] I mean, personally, I would say that the answer is probably because Dynamo isn't dependent on Revit and can be used in other places as well, which gives a certain flexibility. But I don't know if that's the engineering answer, so I'm going to ask--
AUDIENCE: [INAUDIBLE].
KEAN WALMSLEY: Yeah.
MICHAEL: So Refinery is a C# extension to Dynamo and to Revit. The reason that Kean and Lorenzo and the rest of the team made the generative solution in Dynamo is that we expect a certain amount of custom tailoring of the design solution for every project. We've spent a few years trying to solve this generally. There are TED Talks of people thinking they can encapsulate all of architecture in a single algorithm, and this is just not possible.
There's going to have to be some sort of scripting or design intelligence encoded for every project. And we wanted that kernel of intelligence to stay in the Dynamo world, because it's a little more accessible to people who aren't programmers but who want to encode design intelligence-- to get the biggest possible tent for people applying this to their projects.
KEAN WALMSLEY: Thanks, Michael. So with that, I think we're out of time. But thank you very much for coming. I hope it's been useful.
[APPLAUSE]