Description
Key Learnings
- Learn how to use Dynamo to create Generative Design for Revit studies.
- Discover what’s possible and not possible with Generative Design for Revit.
- Learn how to use model assist tools within Revit to automate design.
- Learn how to co-author generative design directly within a Revit project.
Speakers
- Ben Guler: As a design technology fanatic, Ben has been a vital driver for computational design, process management, and standardization. Across a gamut of technological avenues, he has successfully identified and executed solutions for a robust set of project deliveries. His architectural background and his knowledge of BIM platforms and software engineering form a technological bridge that is critical to being effective in the computational design paradigm. His experience writing Revit add-ins and stand-alone software helps him deliver solutions that tie the AEC market together. As a husband and father, Ben enjoys going on hikes with his wife and daughters.
- Bill Allen: Bill Allen is a seasoned leader in the AEC industry. He currently serves as the president of EvolveLAB, where he orchestrates a dynamic synergy aimed at empowering architects, engineers, and contractors to optimize the built environment through the strategic application of artificial intelligence and data-driven design. With an impressive track record spanning 20 years, Bill brings unparalleled expertise to the AEC industry, having consulted with pioneering firms at the forefront of technological advancement. An accomplished public speaker, he has graced the stage as a keynote and featured speaker at numerous prestigious events, including the most-watched Autodesk University talk, titled 'The Future of BIM is NOT BIM, And It's Coming Faster Than You Think.' Beyond his professional pursuits, Bill is a co-founder of The Bare Roots Foundation, a nonprofit organization passionately committed to ensuring that all individuals are granted the fundamental rights to clean food, shelter, and drinking water, embodying his dedication to global humanitarian efforts.
BILL ALLEN: All right, welcome everybody to our class, Generative Design for Revit-- Why the Co-Authorship Approach Is so Powerful. Today you have myself and Ben Guler. And a little bit about us, I am the President and founder of EvolveLAB as well as On Point Scans and Disrupt Repeat.
These firms synergistically support architects, engineers, and contractors as part of their process and streamlining that. And we also have Ben. Ben, do you want to give an introduction to yourself?
BEN GULER: Sure. So, basically, VP of Innovation here at EvolveLAB. Background in architecture and migrated to become a BIM manager. Started to learn how to build apps and code and EvolveLAB presented an opportunity to get to do that every day so I had to come and jump on that. And so I've been doing that a lot.
But pretty much I'm a generalist and a general solution solver, essentially. And used my background to bridge the gap between client and customer needs and the tech world where working closely with developers. And I even do some coding myself, even now. So it's great. I love it.
BILL ALLEN: Cool. Thanks, Ben. All right so a little bit about EvolveLAB, we're a cross-section of different expertise including licensed architects, app developers, BIM managers. Basically, we build a lot of custom tools for architects, engineers, and contractors but then we also do a fair amount of BIM services.
I have experience working at larger firms as well as some smaller firms and niche specialties in that space. And so a large cross-section of different team members. But it's not just Ben and I, it's a whole team here. And super excited to be at EvolveLAB. And that's a little bit about us.
So let's talk about generative design. Generative design becomes one of those buzzwords in our industry. It becomes very abstract. People misuse it and under recent years has actually come under some criticism. And so with the advent there was-- Daniel Davis did this amazing article on why generative design is doomed to fail. Which is a very dramatic statement and very powerful statement.
And so he had a really thought provoking article and different reasons why he thought generative design was going to fail. Also, Paul Winter talked about, just recently, this last year of why there's a generative design fail.
And some of the reasons of that being why aligning design goals is so important. And if you don't do that those are also reasons for generative design to fail. And so there's a lot of criticism around generative design. And one of the questions-- it really challenged me personally and I had to do some self-reflection.
Because a big part of our services is data driven design, generative design, computational design, all those kinds of things. Using data to inform design. And so I really had to take a step back and read through the articles and think about, gosh, are these comments true?
Which leads us to, should we throw out the generative design baby with the bathwater, right? Is it an all or nothing kind of answer? And so the place I personally landed was with Daniel Davis' quote which was, "Until we get to a point where algorithms replace designers, which may never happen, algorithms will only be practical if they work with humans" if they work in tandem with humans.
And I thought that was a really profound quote and it really resonated where I personally landed on this topic. So let's talk about algorithms. Algorithms, basically, are a way to help us think faster. So as we're going through as an architect or an engineer or a contractor there's multiple things that we're trying to coordinate, right?
Multiple objectives, multiple tasks, that we're always trying to coordinate for our projects. And so that could be anything from daylighting to departmental adjacencies to acoustics. And we're working with structural consultants and all of this kind of information.
And there's just a lot going on. A lot of tasks, a lot of objectives that we're all trying to do simultaneously. And with experience, we can do these things. It comes as second nature. But the question is, is there a way that we could leverage data, leverage algorithms, to help us do these things more efficiently and faster?
So let's kind of talk about how the brain works. Typically, if we were thinking about an objective or a task we can do that. We can multi thread and think about a few objectives or a few tasks, but, can we do those things optimally? And usually, the answer is no as soon as we start adding two or three objectives and tasks.
And we want to try to streamline and optimize that process. We are challenged with that a lot of times. And so very similar, as an example, long division and things like that. We don't typically do that anymore as a manual process. We use calculators and other things like that to help us streamline that process.
And so with multi-objective optimization and multiple-task coordination, tools can help us do that. But we are challenged, myself included. Some call it multitasking; I call it doing something else while trying to remember what I was doing in the first place. This is the story of my life. Walking around, my head is in one place and my body is in another, trying to remember where I left off on some past task.
And so, basically, these tools just help us to streamline this process and help us to think faster. They're just tools and that's what they are. So any comment on that, Ben? I think you might have had a comment--
BEN GULER: Yeah, I know. So I muted myself there a little bit. But basically, yeah, I can totally relate to that. It's like our brains are a-- when we multitask we just single task really fast then we switch tasks really fast and then we remember what we just did in the previous single thread. So I can definitely relate there.
BILL ALLEN: Cool. Thanks, Ben. All right, so let's talk about Generative Design for Revit. And I want to talk about, basically, what is the proposed process? What is the recommended process? We're going to look at this the way any literature or documentation you'd see on the topic presents it. This is that process, all right?
So let's look at that. Basically, the first step is to define your problem. What are you trying to solve? And then you start to go through and define your constraints, your algorithms, your input, your outputs, et cetera.
And then, we feed this into Generative Design for Revit, right? And we go through this generative design engine. It generates options, analyzes options, iterates on those options, and it's this cycle. It's an evolutionary solver. And it tries to go through a fitness algorithm to optimize your design and presents those to you.
You rank the options that are outputted from that and then you integrate it into your project, all right? So this is the proposed process of how we could use generative design. And basically they're ingredients, right? This is the analogy. Some of this stuff becomes very technical so I like to use analogies.
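As a rough sketch of that generate-analyze-iterate-rank cycle (not the actual Generative Design engine-- the two-variable fitness target and the mutation step here are made-up stand-ins), a minimal evolutionary solver looks something like this:

```python
import random

def fitness(option):
    # Hypothetical objective: how close (x, y) is to a target point.
    x, y = option
    return -((x - 3.0) ** 2 + (y - 7.0) ** 2)  # higher is better

def mutate(option, step=0.5):
    x, y = option
    return (x + random.uniform(-step, step), y + random.uniform(-step, step))

def evolve(population_size=20, generations=50):
    # 1. Generate: seed random options from the input ranges.
    population = [(random.uniform(0, 10), random.uniform(0, 10))
                  for _ in range(population_size)]
    for _ in range(generations):
        # 2. Analyze: score every option against the fitness function.
        population.sort(key=fitness, reverse=True)
        # 3. Iterate: keep the fittest half, mutate them to refill the pool.
        survivors = population[: population_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(population_size - len(survivors))]
    # 4. Rank: hand the options back for the designer to choose from.
    return sorted(population, key=fitness, reverse=True)

best = evolve()[0]
```

The solver never "finishes" a design; it just surfaces ranked candidates, which is why the human ranking and integration steps come after it.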
So if we're looking at ingredients for generative design, basically, you have your inputs, you have your algorithm, and you have your output. And this is what it would look like in the world of Dynamo. If you start to further abstract this analogy in a parallel graph chart, which is common with the Autodesk generative design tool, we have these ingredients that we're trying to utilize and to be able to create different variations of a pie, for example.
And so in here, if we have the constraints, which equal our ingredients, and the variation of those constraints, which is our algorithm or our recipe, then the solution is the pie. And so if you use more butter and less sugar, or if you have more eggs or this kind of fruit, you're going to get a different kind of pie. And the same holds true when we're designing buildings.
And so one of the things that you should start taking into account when thinking about your building is, what kind of constraints do I want to utilize? And what do I want to minimize for? What do I want to maximize for? So this example generative design solution that we're looking at, we built for Atlas Group London.
And, basically, we want to think-- my background's in health care, so I think a lot in that space; it comes a little more naturally to me just because of my background-- but I think of nurse travel distances, or maximizing daylighting, or the adjacency of the departments.
These metrics become very specific to health care and hospitals. And the point being is that if you're designing a school or you're working on science and technology the metrics that you would use for that different kind of building typology would be different, right?
You're going to be trying to solve for different things, probably. And so that's what's kind of nice about some of these tools. You're on the hook for creating the algorithms, but you also get to choose what you want to optimize for-- your mins and your maxes, if that makes sense.
All right so let's look at this example of generative design process in action. We're going to be looking at randomization of tile patterns as an example and this is going to be our process, all right? So let's jump into it.
Our problem is let's randomize tile patterns. We don't want to spend a lot of time manually laying out a bunch of different tile patterns. And so we're going to sort these tiles based on distance to an attractor point. And so this would be your recipe.
You would think of, in your algorithm you're thinking about, what you want to solve for and how you want to solve it. And so this becomes our recipe. We're going to be using an attractor point in a distance from other points to drive the origin and location and the aperture of the tiles where they land, if you will.
That drives whether they show up blue or white, et cetera. And, basically, Autodesk Generative Design gives us four ways to do this: Randomize, Optimize, Cross Product, and Like This. And here's just a few examples, starting with Randomize. You're going to see the examples in here are very random.
The one on the top left is very different than the right. Ones in the middle are very different than the one adjacent to them, et cetera. So it's very random. Hence the name randomize. If we look at the optimize, now this is maybe not the best example of an optimization. But, maybe, think if we're trying to optimize the location of that point in the center. Or if we're trying to think if we want to optimize the point on the top right.
Those are different ways that you could think about optimization, though there's probably a better use case for it. But you can see in the graph chart on the optimization that it's starting to arc. You're starting to see multiple threads there, and that's where fitness comes in-- that's what it's trying to accommodate as it relates to Optimize.
Cross Product is basically going to do a variation of all of those parameters. And so, again, you're going to get a wide range, but it's going to be every variation of the parameters, represented for you. And then, lastly, is Like This. You'll see almost all these examples are very similar to each other.
We have mostly blue tiles over on the left side here. And there's a few scattered throughout, but most of them are just trying to show us an example that's close to the one we picked-- which is the Like This mode. So this is the traditional process.
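The attractor-point recipe itself can be sketched in a few lines of Python. This is an illustration of the idea, not the Dynamo graph-- the grid size, attractor location, and blue/white threshold are made-up values:

```python
import math

def tile_colors(rows, cols, attractor, threshold):
    """Color each tile in a grid by its distance to an attractor point."""
    colors = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Distance from this tile's grid position to the attractor.
            d = math.hypot(c - attractor[0], r - attractor[1])
            # Tiles close to the attractor read as blue, the rest as white.
            row.append("blue" if d < threshold else "white")
        colors.append(row)
    return colors

pattern = tile_colors(rows=8, cols=8, attractor=(3, 4), threshold=3.0)
```

In the study, the attractor coordinates and threshold would be the sliders the solver randomizes, optimizes, or cross-products over.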
And then we're going to go ahead and rank those options. You choose which ones you want. And then you integrate it. You integrate it into your project. Now the point of this is very targeted. Very targeted case study where we're looking at trying to implement it specifically for tile patterns. But we have other kinds of case studies that are very real examples.
And so this one I want to reference, the Hobbs Trail project, that we teamed up with Hufft on-- they're an architecture, design-build, and fabrication shop. And, basically, we worked with them to help build these incredible structures. And really the point we want to drive home here is this idea of co-authorship. And I want to turn it over to Ben to talk about this, because Ben specifically worked on this project. So Ben, I'm going to let you chat about this.
BEN GULER: Yeah, for sure. So let's see. I'm going to-- there you go. Yeah, so this is a really interesting project where the problem was pretty well defined. We have these substrate surfaces, these theoretical shell surfaces, that we had to populate using these segmented members, to assimilate within the environment-- to seem natural along the trail in the park.
So, basically, there are quite a few different ideas that we wanted to optimize towards. You can see here on the sketches we started to brainstorm some ideas and how we would handle it. Some of them are cost, the ability to manufacture it, and essentially the number of cuts and the number of members. Because the more you increase that, the more cuts you have, the more cost you have.
But the more you increase the size and the members of that it impacts the aesthetic. So maybe the last thing would be its visual appearance and how you segment it. How many segments do you have. And then making sure that you could adjust those.
So, yeah, as you can see here we kind of looked at, OK we'd have to have a jig so that we can put these together. And then how do you design those intersections such that you could recycle that jig. Then, we built out the Dynamo script that starts to run variations.
And these shell surfaces had different makeups. Some of them were more vertical, these kind of ellipsoid surfaces, and some of them were wider. So we tested different settings for how to interlace the segments as they overlap one another, using Dynamo. And then, once we built that Dynamo script, we could run it through the GD solver to generate many, many solutions for us to browse, to see which ones we like.
And the interesting part about this one, too, is we have the hard data that we can output metrics for, but we also have the visual that we care about. What does it look like? And we don't really have a metric for that. So it's so powerful to have these thumbnails to look at, to observe.
Because that's something that's not as easily quantifiable, since it's subjective. It just shows the mushiness of some of these problems we're trying to solve. So, yeah, some of the constraints are: the count of segments; the cost, which is correlated to the count; the size of the members-- you can have thinner members for a certain aesthetic, or deeper members; the actual shape of the surface; and then the extrusion length.
Because we could also overextend some of the lengths just to have a certain aesthetic. And for that one, the only metric we could quantify is that it costs more. But the other metric, which is harder to pinpoint, is the aesthetic. Does it look good if you have that aesthetic that's like a nesting thing?
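One simple way to fold the quantifiable constraints together is a weighted-sum score like the sketch below. The weights and cost figures are illustrative placeholders, not the project's actual numbers, and the aesthetic judgment deliberately stays with the human looking at the thumbnails:

```python
def score_option(segment_count, member_depth, extrusion_length,
                 cost_per_cut=12.0, w_cost=1.0, w_slenderness=0.3):
    # Cost is driven mostly by the number of cuts, which tracks segment count;
    # overextended lengths add a smaller amount on top.
    cost = segment_count * cost_per_cut + extrusion_length * 2.0
    # Deeper members are penalized as a crude proxy for the slender aesthetic.
    slenderness_penalty = member_depth * w_slenderness
    # Lower is better; purely visual qualities are left out on purpose.
    return w_cost * cost + slenderness_penalty

options = [
    {"segment_count": 40, "member_depth": 5.0, "extrusion_length": 10.0},
    {"segment_count": 60, "member_depth": 3.0, "extrusion_length": 8.0},
]
ranked = sorted(options, key=lambda o: score_option(**o))
```

A score like this can pre-sort the solver's output, but the final pick still comes from browsing the thumbnails.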
So yeah, it was really a cool project to be part of, and it's really cool to see it built. Hopefully one day I get to go visit-- I haven't done that yet, so I'm looking forward to doing that. And it's really cool that we were able to use Dynamo and GD to handle this.
And perhaps one thing I didn't say is, once we built those things out, we would bake all that geometry as adaptive components in Revit so we could schedule it, and that's how we used it for the documentation. So yeah, this was a really fun project.
BILL ALLEN: Well thanks, Ben. All right so some lessons learned from that project and past projects and just kind of talking about what's possible, what's not possible with generative design for Revit. What's possible is being able to do these optimization or fitness studies. What's possible is doing optioneering. And then also different ways to interrogate these models.
That's where GD for Revit becomes extremely useful and helpful. Some of the challenges in where we're going to start getting into is what's not possible. And what other solutions could we augment and still leverage generative design and still be within the Revit environment.
But what's not possible right now is that real time co-authorship, right? So if I execute a Dynamo script I push it to generative design. If I want to make any kind of modifications I can't do that in real time. I have to go back to Dynamo. I've got to republish it. That's one of the things that's not possible.
Another thing is iterative design. Right now the process would be very linear. Again, creating your script, publishing it, ranking it, baking it. If you want to do that you've got to start again from the beginning and go through that linear process again. And then, lastly, is any local modifications.
If I have an overall geometry within my canvas or in my generative design solutions and I just want to tweak one little thing over here. I can't really do that locally within my study. I could do it back when I'm in Revit, but not in Revit GD. And so that's where Ben and I really saw an opportunity. Specifically for this process and how to overcome some of these. So I'm actually going to turn it back over to Ben and let him talk about that specifically.
BEN GULER: Right. Thank you, Bill. And just on the previous slide, I want to chime in that we want to use the right tool for the job. And Dynamo, as we've already shown, was the right tool for those jobs-- especially the Hufft one. The structure was really specific, and we had to design that logic into that form.
And so that's where Dynamo really shines. But there are also parts where Dynamo is not the best tool, and that's what you were starting to hint at, Bill, in the previous one. So I have this quote here, "Power users design tools for power users, not everyone else." And this is just the natural "path of least resistance."
If I want to build an automation, it's because I have to do that job, so I'll just build it. I'm OK with a command line UI which is something super basic. No one else might be able to understand that UI but I could get my job done. It would take additional effort and additional resources to build something out for it so that other people than myself could use that.
So I have this example here, an analogy with-- if anyone's familiar with Wintergatan, a band that built this Marble Machine. And this very first prototype, basically, is duct-taped together out of cardboard and plywood. And only the creator could really run it, because he knows all the quirks. He knows where you have to hold it at this node, otherwise the thing could fall apart.
So, a lot of times, when scripts get to a certain level of complexity and try to do all these things-- like the Marble Machine, which is trying to replicate a whole band in one machine-- that complexity starts to slow them down and creates a barrier to entry for other users who would like to recycle them.
So it works great on this project, but on my other project that I just introduced you to, I have a lot of red Dynamo nodes. It wasn't adapted for that. So scripts have these trade-offs. They're really fast to develop, but slower to scale and adapt to other projects. Once you scale them, if you do do that, they get more complex. The more complex they get, the slower they run. And then it kind of diminishes their effectiveness.
And so scripts are really best for those kind of custom use cases that we've already looked at above-- previously, Bill mentioned. So using the same analogy, the Marble Machine, the next generation of the Marble Machine is the Marble Machine X, basically, which is very different.
There's steel, and welding, and much more stable. It doesn't fall apart. You could actually run it on its own for hours, actually, and it still performs. And the marbles are there, and it actually plays the notes correctly, and it's optimized. And this analogy plays more into how apps work versus Dynamo scripts, where you have more breathing room to scale and to plan.
So it's also somewhat of a limitation, in a way, where you have to invest more resources, more time, more planning. But it has other trade-offs: it can minimize the barrier to entry, because you can have a custom UI where users don't have to know what the Dynamo nodes are-- it's a simplified UI for you.
It is slower to develop. But it is more scalable, and it can handle complexity better because of that. So apps are better for adoption, basically, where we want to deploy to multiple users and multiple conditions.
So if we zoom out a little bit and we start to frame these ideas and where we're at is-- and I'll zoom up a bit more. When we looked at CAD-- and we had the AutoCAD days-- the basic building blocks were points and lines. And then with those-- and those are not the only building blocks, but those are the basic ones. With those, you could do a lot of things, basically.
And then we had to learn Revit when we transitioned to that. And the basic building blocks there are also point-based families and line-based families. So you're interacting with a similar, simplified element as a base building block that is very predictable. You click, and you click, and you see it.
And then, from that, we have GD, which goes from drawing walls to generating buildings. So we saw that there's some kind of interim step there that could be bridged-- from points and lines to sliders, toggles, and surface selection-- something in between. Maybe a mixture of both. Maybe I can still draw the points, so it's simple, so I can see it.
And maybe I can still get some of those sliders. So what would it look like to have a GD co-authoring tool that leverages the power of generative algorithms but still works as simply as the wall tool? That's, essentially, our question. And that's how we started this journey on the path to build out these solutions.
So what would we want to see in a tool like that? We'd want iterative instead of linear. I'd want to iterate, just the way we design naturally. I want to be able to do small things and big things at the same time and not have to wait for it to finish. And then I could just select.
We want immediate feedback to better understand, if I change an input, I see exactly what happened right away on the screen on the other side of how that change affected the result. We want it responsive. So if I make a quick change anywhere, split-second decision as it's actually computing, that's taken into account. As I'm changing actively as it's generating, actually, it responds to me as soon as I make any updates.
And then intuitive to use-- so no need to have a steep learning curve. Try to have it as intuitive as possible. And so this is what we would want.
And what we have here on the screen is our EvolveLAB Morphis tool that is demonstrating how this kind of layout tool-- which is a pretty simple tool right now that could just, with some sliders and selecting a surface boundary, start to optimize and start to draw things on it with this low-resolution geometry that then gets baked into Revit with, in this example, office furniture.
So in order to do this, we need to make some trade-offs. And we have to be very specific about those trade-offs and what those are. So for that, we've outlined interactivity over speed. I'd rather have it to be more interactive at the sacrifice of speed.
So what that translates into is, if I want the algorithm to-- it's OK to have the algorithm run a bit slower if I could tell what the algorithm is doing, if I could see it on the screen. That's a decent-enough sacrifice because I get the feedback right away, so it's interactive, which led to, how do we thread decisions and bypass certain things that Revit has?
And then, also, using lightweight geometry to display that so that we could really have this rapid output. Then speed over accuracy-- this really played a role in how we chose our meta-heuristic solver, where it doesn't have to be the solver that generates the very best solution based on the inputs, because maybe the problem isn't even framed right. We want it to be as fast as possible so we can see it. And then the gap of inaccuracy can be filled by the author themselves.
Then simplicity over extensiveness-- we want to make it as simple as drawing a wall. We don't want it so you have to click five different things, and only if you click them in the right order, in that specific way, will it produce the results that are desirable.
And then specific over general-- so instead of trying to solve all the problems all the time, we encapsulate the problem into something that's very contained and very easy to understand, and then build onto that and expand from a smaller module.
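One way to picture "speed over accuracy" is a greedy hill climber that reports every accepted step through a callback so the display can redraw immediately. This is a generic sketch, not Morphis's actual solver-- `on_update` stands in for whatever the display layer is:

```python
import random

def hill_climb(score, start, step=0.25, iterations=200, on_update=None):
    """Fast, approximate search: accept any neighbor that improves the score."""
    current, best = start, score(start)
    for _ in range(iterations):
        candidate = tuple(v + random.uniform(-step, step) for v in current)
        s = score(candidate)
        if s > best:
            current, best = candidate, s
            if on_update:
                # Stream the improved state straight to the display layer.
                on_update(current, best)
    return current, best

# Toy objective: get close to (2, 5). The result is "good enough", and the
# author fills the remaining accuracy gap by hand afterwards.
target = (2.0, 5.0)
result, value = hill_climb(
    lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)),
    start=(0.0, 0.0),
)
```

The point of the trade-off is that every accepted step is visible instantly, even though the solver may stop short of the true optimum.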
So those are some of the trade-offs that we made with our initial wants of what we'd like to see. And, essentially, what I want to go over next is, why did we make these trade-offs specifically? And it's because we're human, basically.
So changing behavior is very hard. If you look at the cycle over here, you start with precontemplation-- like, ah, should I do it? That's the state when you first hear about it, perhaps, before you even contemplate.
Then you have contemplation, like, oh, should I do it? Then you prepare yourself. OK, I'm going to do it tomorrow. I'm going to do it tomorrow.
And then you actually do it-- action. That's the fourth stage already. It took you a while to work up to get there. And then it's great. You learn it. You have a three-day workshop.
But if you don't use Revit in your project-- let's say you learn Revit and there's no maintenance, number 5-- then this whole cycle, all the way through 4, ends in relapse. You go to 6 because you didn't use it in practice, and then you've lost all of it.
And I've seen firms that have done that where it's like, OK, well, we're back at square one where it's like, should I learn it again? Because it's the cycle where it's not a trivial thing to learn. And the more you use it, you learn that. So we wanted to capitalize on this.
And another recollection could be switching from AutoCAD to Revit. That was not a very-- it was very hard for certain users to do that. In fact, some would argue that it's probably easier to just not have known AutoCAD and just go straight into Revit and learn that. Because if you know AutoCAD, you have to unlearn certain things.
So CAD users would say, I need levels, now? I could just draw a line before. What's a family? It's supposed to be a block. We have blocks. Where are my layers? What's going on here?
So those kinds of things-- that was the learning curve of the transition. With new software, you have to learn it and then use it, basically. And we built our tool within Revit, so you minimize having to learn another piece of software.
So don't change behaviors-- reuse the learned behaviors. The fact that they're already in Revit-- well, there are a lot of things they had to learn to be in Revit. There's the concept of walls, the concept of levels, how things are hosted, the way you orbit in 3D-- all those learned patterns. Let's reuse that so we don't have to reinvent it.
And if you have to introduce new behaviors, do it with a solid strategy. So over here is an example: OK, drawing walls is easy. But is it, though, really easy in Revit? It depends who you ask.
So if you ask a BIM manager, yeah, it's WA, and then you click a point, and you click a point, and there you go. You have a wall. It's pretty easy, right?
If you ask a CAD user that just recently got converted, it's like, oh, well-- they'll have a whole spiel about it. Or they'll say, well, there's no command line, first of all. And then if you press W and then A, you don't press Enter afterwards because you don't have to anymore because that's not how-- it's not a command line.
And then there's some parallels you click, and you escape at the end. And there's some parallels that they could talk about and why they're not doing things the way they are accustomed to doing it. And then in this third category, you ask someone-- an alien-- they could start with, OK, well, there's the physics of Earth and the fact that there's a number 4, so the table and the mouse.
And there's friction, and you have control because of that friction. And that's why they use the mouse peripheral. And then they can go into explaining the human anatomy and why we need buildings to even-- because of our limited form, and explain computers. And anyway, so it depends who you ask, essentially.
And we want to address that human behavior part with the way we're designing our tools. So how do we reuse learned behavior? You have the users in Revit. They're already drawing things within Revit.
So you could build the application directly in Revit, which recycles all the controls that you have in Revit-- the browsing, how you get to floor plans, how to navigate the whole software, basically. All of that is given for you. You don't have to introduce your own system for that.
You can use the same shortcuts. So the fact that you have WA-- OK, use that. When you press W-A, at the end of the A it executes the command. Don't introduce a different pattern, basically, because that's a different learned behavior that you'd have to introduce.
Recycle Revit's drawing tools. For example, on the right, you can see that when you're drawing that line, it's not really a Revit line -- it's our own line -- but you draw it the same way you would in Revit: you click a point, you click another point, and you hit Escape when you're done.
And the last one here is using the same click order. You can click, hold, and drag, or you can click and release. There are so many details we take for granted because they're second nature, like tying your shoelaces.
You don't have to articulate how it happens; you just know it. And Revit users just know those things, because they use the application day in and day out. So, again, we want to recycle that and capitalize on it in the design of the tool.
So what can these trade-offs offer? We went over what the trade-offs are, but why and how do they affect the grander scheme of things? There are three aspects we outline here.
One, it builds trust with the end user, with the designer, so they can start to rely on the tool. Two, it fosters adoption, so it grows organically and other people start using it too -- which speaks to its success.
And three, it allows a vision of dynamic scaling where, as we mentioned, we work locally but have the ability to grow outward -- to the whole floor plate if you want, or multiple floors, or every room of this type in this wing -- and then run and generate for that. I'll go through these one at a time.
So how do we build trust? We visualize the feedback right away: lightweight geometry, streamed straight to Revit. We minimize the response time -- really low latency -- so that for any action, you can see the tool's reaction.
Then we react to user changes. If I move a wall -- which isn't a setting in our application, it's just something within Revit -- the tool reacts to that too. So we track cursor moves, button presses, and any Revit geometry updates. That's how you build trust: by lowering that latency.
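One common way to keep that reaction cheap is to coalesce a burst of rapid events (cursor moves, wall edits) into a single recompute once the burst settles. The sketch below is a hedged illustration of that pattern in Python, with simulated timestamps; the event names and `ReactivePreview` class are assumptions, not actual Revit API events.

```python
# Sketch: coalescing rapid model events into one recompute so the preview
# stays responsive. Timings are simulated; event names are illustrative.

class ReactivePreview:
    def __init__(self, recompute, debounce=0.05):
        self.recompute = recompute      # the expensive layout generation
        self.debounce = debounce        # seconds to wait for events to settle
        self.pending = []
        self.last_event_time = None
        self.recompute_count = 0

    def on_event(self, name, timestamp):
        """Record an event (cursor move, geometry update, button press)."""
        self.pending.append(name)
        self.last_event_time = timestamp

    def tick(self, now):
        """Called every UI frame: recompute once the event burst settles."""
        if self.pending and now - self.last_event_time >= self.debounce:
            self.recompute(self.pending)
            self.pending = []
            self.recompute_count += 1

results = []
preview = ReactivePreview(lambda events: results.append(list(events)))

# A burst of cursor moves while the user drags a wall...
for t in (0.00, 0.01, 0.02):
    preview.on_event("cursor_move", t)
preview.tick(0.03)   # burst not settled yet: no recompute
preview.tick(0.10)   # settled: one recompute covers the whole burst
```

The trade-off is a tiny fixed delay in exchange for never queuing up expensive recomputes behind each other, which is what keeps the perceived latency low.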
Another thing is the choice of algorithm: it's optimized for speed so that it can show you work in progress. Even if it's the wrong answer -- early on, it's generally the wrong answer -- you can see that it's doing something, and that it's doing it there, not somewhere else.
And just the fact that it's vibrating over here on this element, that alone gives you information, even if it's the wrong information. If you let it sit for five seconds, it renders a much clearer picture of what it needs to be.
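This "rough answer immediately, better answer if you wait" behavior is the classic anytime-algorithm pattern. A minimal sketch, using a toy hill-climbing layout problem (the spacing score and room model are placeholders, not the speakers' actual solver):

```python
# Sketch of "anytime" feedback: a generator that yields a rough layout
# immediately, then progressively better ones, so the UI always has
# something to draw. The scoring and layout model are toy placeholders.

import random

def generate_layout(room_width, n_items, steps=200, seed=0):
    """Yield (score, positions) snapshots, improving over time."""
    rng = random.Random(seed)
    ideal = room_width / (n_items + 1)          # ideal even spacing
    positions = sorted(rng.uniform(0, room_width) for _ in range(n_items))

    def score(ps):
        # Lower is better: total deviation from even spacing along the wall.
        return sum(abs(p - ideal * (i + 1)) for i, p in enumerate(ps))

    yield score(positions), list(positions)     # rough answer, right away
    for _ in range(steps):
        i = rng.randrange(n_items)
        candidate = list(positions)
        candidate[i] += rng.uniform(-1, 1)      # nudge one item
        if score(candidate) < score(positions): # keep only improvements
            positions = candidate
            yield score(positions), list(positions)

snapshots = list(generate_layout(room_width=10.0, n_items=3))
first_score, last_score = snapshots[0][0], snapshots[-1][0]
```

Because every yielded snapshot is drawable, the UI can render whichever one is current, and letting it run longer simply yields a cleaner picture.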
And, ironically, predictability. Even though it's a novel tool that can generate novel solutions, at a certain scope you can start to predict it. You learn where it succeeds -- where it handles a condition very well -- and where it surprises you. You know that, OK, this condition is going to surprise me; I know that.
So I can predict that it's going to surprise me. And you learn that, OK, this other condition is probably going to fail -- it failed for me in the past, and it's a very difficult condition to solve unless the tool gets updated with a new algorithm or new constraints. So that predictability also builds trust: I kind of know what it's doing.
And the last thing here is workable output. Everything that's generated bakes into Revit as native elements. That gives you a fallback: you can always go back to working manually. And it might give you only partial success -- maybe it gets you 50% or 80% of the way there -- so you just have to bridge the gap for the last 20%.
So it's still better; it's not winner-takes-all, all or nothing. You don't have to wait for the whole thing to be done. You can wait for just a little bit, and if you're happy with that, go with it and work your way from there. So that's one of the macro goals: build trust with the user.
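The partial-success idea above can be sketched as a bake step that commits whatever solved cleanly and reports coverage, so the designer knows exactly how much is left to finish by hand. The `Model` class stands in for the Revit document; all names here are hypothetical.

```python
# Sketch of "partial success is still workable": bake the elements that
# solved cleanly into the model, skip the failures, and report coverage.
# Model stands in for the Revit document; names are illustrative.

class Model:
    def __init__(self):
        self.elements = []      # stands in for native Revit elements

def bake(model, generated):
    """Commit solved elements; return (baked, failed, coverage)."""
    baked, failed = [], []
    for item in generated:
        if item.get("solved"):
            model.elements.append(item["element"])
            baked.append(item["element"])
        else:
            failed.append(item.get("element", "?"))
    coverage = len(baked) / len(generated) if generated else 0.0
    return baked, failed, coverage

model = Model()
generated = [
    {"element": "wall_1", "solved": True},
    {"element": "wall_2", "solved": True},
    {"element": "door_1", "solved": False},   # tricky condition, left manual
    {"element": "wall_3", "solved": True},
]
baked, failed, coverage = bake(model, generated)
# 75% of the way there; the designer bridges the last 25% manually.
```

Because the baked elements are ordinary model elements, the manual fallback is always available: nothing about the tool's output locks the designer in.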
The second one is to foster adoption. How does this organically grow to other people? There are two ways it can happen. There's top-down: you have to use this software from now on. Or: hey, I found this software, and you guys should use it -- it's really great.
That second one is the organic, grassroots effect, and it's always more effective, the way I see it. So make it rewarding to use the app.
It's faster: I can finish my process, my task, my design in a more efficient manner. That's great. And it's fun and it feels good -- the interactivity, the animation -- it actually feels really cool to use.
And it's really rewarding, when you press that generate button, to see all that content -- if you look at the GIF here -- actually land in Revit as Revit families. That's great. For this one, there was an undo, some changed settings, another bake, and there you go: now my content is there.
And then you have to measure that adoption. If you have a hypothesis that these techniques and technologies will foster adoption, can you validate it? Is it only the power users using it -- in which case you're still back at the original problem -- or are other users using it too? And are they using it just for fun, to play around with it, or are they actually using it on a project? Those are the things we want to know in order to foster adoption, and the mechanism for knowing them is analytics and measurement.
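A minimal sketch of that kind of adoption measurement, assuming a simple per-user event log: it separates power users from casual ones and checks whether usage happens on real projects. The threshold and event schema are assumptions for illustration, not a real telemetry format.

```python
# Sketch of measuring adoption from usage analytics: given per-user event
# logs, distinguish power users from occasional ones and check whether the
# tool is used on real projects or just played with. The threshold and
# event fields are assumptions, not a real telemetry schema.

from collections import defaultdict

def adoption_report(events, power_threshold=10):
    """events: [{"user": ..., "action": ..., "on_project": bool}, ...]"""
    runs = defaultdict(int)
    project_users = set()
    for e in events:
        if e["action"] == "generate":
            runs[e["user"]] += 1
        if e.get("on_project"):
            project_users.add(e["user"])
    power = {u for u, n in runs.items() if n >= power_threshold}
    return {
        "total_users": len(runs),
        "power_users": sorted(power),
        "casual_users": sorted(set(runs) - power),
        "project_users": sorted(project_users),
    }

# Usage: ana runs it heavily on a project; bo only plays with it.
events = (
    [{"user": "ana", "action": "generate", "on_project": True}] * 12
    + [{"user": "bo", "action": "generate", "on_project": False}] * 3
)
report = adoption_report(events)
```

If the report shows only power users, the adoption hypothesis has not been validated yet; growth in the casual and project buckets is the signal that it has.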
And the third macro goal behind the design decisions and trade-offs we've made is to allow for dynamic scaling. We start local first, with a path to grow global, because that lets us build trust and allows for that interactivity.
In the GIF on the side here, you can see how you grow from one space to many spaces -- but only because you've built that trust, and the user knows how to predict the tool, can you expand it like that.
And there are two facets to this. One is the tool vision: the vision for the tool to grow. The other is the vision for the end users: allow any user who wants to, to become a power user.
The app should expose those under-the-hood settings when they're available and when the trust is there, so you can have power users of the tool without it being limiting. And people who don't take that path can be happy with what they get and manually finesse the rest, instead of finessing the settings to get the GD tool's output exactly where they want it.
So these are the three macro goals that really help us build this co-authoring solution: one that is adaptable, is trusted, and has a vision of growth for the future. I think we'll end on this slide, which you nicely mentioned earlier, Bill, about Daniel Davis: "Until we get to a point where algorithms replace designers, which may never happen, algorithms will only be practical if they work with humans."
And we really try to work within that concept and that human behavior -- to rely on the designer's power, their intelligence, and all the experience they have, and just try to automate a little bit more.
Instead of trying to automate everything, if you add just a little more automation to what's already offered, utilizing a powerful engine, you can really have this nice synergy between the machine and the human designer. And that's what we came to share with you. So thank you.
BILL ALLEN: Thanks, Ben. Very well stated -- I love it. And I think the big takeaway I had, too, was the idea of getting that mass adoption in place. I remember, for myself, I got so excited about Dynamo, and then about Dynamo Player, because you could bring those solutions to the end user and get that adoption. I thought that was so great.
And I love Daniel Davis's comment here: it only becomes practical if those solutions can work hand in hand with humans. It's the whole people-process-technology piece of it. Process and technology -- that's the easy part. It's the people. It's the behavior part.
You're not necessarily going to change people's behaviors. So how do you shape a tool so that it's pragmatic, so that people will use it, so that it becomes grassroots and gets adopted at a firm? Because you don't want to put all that energy into a tool and then get no adoption. That would be the worst: invest all that time and money for nothing.
And so how do you make sure the tool accommodates humans and behavior, in the way that we design and the way that we work? I think you and Daniel Davis are so good at recommending and commenting on that, and that's one of the reasons we wanted to mention Daniel's quote here. So thanks, Daniel, for that quote. And thanks, Ben, for that as well.
So that's it for our class. Thank you, guys -- we hope you enjoyed it and learned a lot. If you have any questions or anything like that, we'd love to follow up with you. Thank you so much, and have a great day.