Description
Key Learnings
- Learn about the latest Dynamo initiative, the cloud data model, and how to access your design data with GraphQL.
- Explore the AEC Data Model API and how to use string-based navigation to query the design data.
- Gain a clear understanding of how data exchanges can streamline interoperability.
Speaker
- Sean Fruin: Sean Fruin is a Mechanical Engineer and Mechanical Applications Product Owner at IMEG, a full-service engineering firm with over 60 offices throughout the US. He is fascinated with automation and exploring computational design solutions for MEP design. He has had the opportunity to learn many aspects of the design industry, working in manufacturing, as an MEP designer, and consulting for general contractors around the globe, specializing in BIM management and Autodesk Revit development. Sean is living his dream, playing with the latest technologies, acquiring the knowledge to innovate, improving efficiency, and sharing his insights with the AEC community.
SEAN FRUIN: All right, hey, guys. Thank you for joining me and my colleague Jasmine Lee here for exploring the AEC Data Model with Dynamo. So thanks for joining me, Jasmine. You also have a class, right, Jasmine?
JASMINE LEE: Yeah, I do. I'm excited to join in on this. I definitely know about the AEC data model, but learning about the Dynamo side would be really interesting for me today.
SEAN FRUIN: You definitely know a lot about data, correct?
JASMINE LEE: Yes.
SEAN FRUIN: What's the name of your class coming up?
JASMINE LEE: It's Enhancing the AEC Industry with Data Collection, something like that. It's very long, so I'll just give a quick title.
SEAN FRUIN: Good. Titles are always long. All right, let's jump into this. Yeah, so our story really starts with an innovative company called Meta. They're doing cool things, but this actually isn't where it started, what I'm talking about. I'm talking about when they were still Facebook.
So the first catalyst for all of this was when Steve Jobs introduced the iPhone at that famous keynote. Then what happened is a whole bunch of people got iPhones, and everyone wanted a good, working mobile app. So there was a big race: how can we build a mobile app on our old technology? They were having tons of problems -- slow bandwidth, a lot of people pinging the API. The old REST API wasn't that good. It would either over-fetch or under-fetch data. It just wasn't really set up for the needs of Facebook. So four Facebook employees hunkered down, went down to the basement, locked themselves away, and came up with GraphQL.
So then by 2012 or '13, they have a speedy app ready to go, running on this whole new revolutionary API. It then got open-sourced, and a whole bunch of people jumped on the train, including Autodesk. And that's where we are today, and the foundation for this whole thing: the Autodesk Platform Services, the Data Exchange, and the Autodesk AEC Data Model.
So the AEC industry had a lot of those issues, very parallel to what Facebook was facing. Again, Jasmine, I know you know about this. So do you want to go through and let us know your story about some of the issues that you've encountered trying to collect data for IMEG?
JASMINE LEE: Yeah, definitely. So I would say as a company that works on all disciplines, and we're all over the country in different offices, and we have a lot of acquisitions as well, it's definitely hard to collect data in a very standardized manner, because everyone has different workflows. There is a corporate standard, but depending on when that standard was implemented, that always changes as well. And so I would say collecting granular data in an organized manner is definitely the biggest challenge.
SEAN FRUIN: What's the thing that frustrates you most? Could it be opening up bulky Revit files?
JASMINE LEE: Yeah, definitely. I think depending on the version, depending on how big that file is, it really depends on how long it's going to take. And that really limits your workflow and how fast you can work through it.
SEAN FRUIN: Yeah, and because of all these issues, data tends to be unconnected, unstructured, and therefore kind of meaningless. So it's a big, big challenge that we've been trying to face at IMEG, and one that Autodesk has also been trying to solve.
So Autodesk, again, took this idea as the foundation of the whole new cloud computing they're doing with GraphQL. It fixes some of these pain points that we have. It's very efficient in what it's grabbing -- it's not over-fetching, not grabbing too much data or too little. It's flexible in handling queries, so the end user can throw a lot at it.
There's a single endpoint rather than multiple endpoints, meaning, again, more optimization and easier fetching. The schemas and types are easily defined and flexible. And then integrating with existing databases, I think, is a huge one for Autodesk, because they have a lot of existing APIs, and they need to figure out a way to harmonize all of those.
So that's what we're going to talk about a lot today, but, of course, get some Dynamo in there. I'm a big Dynamo guy. So first, we're going to talk about learning about the latest Dynamo initiatives and all the different packages they have to be able to collect this data from the cloud. We're going to explore and talk a little bit deeper about what GraphQL is and how to use it.
We will get a better, clear understanding of GraphQL and data exchanges and how to utilize that to streamline and collect the data that you're looking for. And then, hopefully, we'll have some fun and you'll learn some tricks, tips on Dynamo along the way.
For our agenda, we're going to start off kind of explaining this big picture of the AEC Data Model and the Platform Services. We'll then dive in a little bit again to the Dynamo landscape. And then we're going to use each of the packages that they have. For example, one's a fun golf simulator from last year. Another one is just comparing projects or snapshots of projects, and then we can look at actually collecting data across multiple projects.
So yeah, does that sound good to you, Jasmine?
JASMINE LEE: Yep, sounds good.
SEAN FRUIN: All right, what do you know about the AEC Data Model or Platform Services? Is it as confusing to you as it is to most of us?
JASMINE LEE: Yeah, it's definitely a little confusing on what it falls under. I would say I think that was what I kind of had a challenging time with when I first dove into it. And I'm sure you'll talk about it.
SEAN FRUIN: I'm going to try to talk about it. So I came up with this. I did pass it by some Autodesk people, and they didn't tell me I was wrong, so I think I got pretty close. But let's see.
Yeah, so I think it's just one piece, one cog, right, in this bigger machine that is the Autodesk Platform Services that we've been hearing about the last couple of years. So what is Autodesk Platform Services, you might ask? Well, we also know Autodesk has a bunch of acquisitions. We've got Spacemaker, Unity, Data on 360 was a new one released, The Wild for VR. So they have all these different programs from a bunch of different acquisitions, which means they have a bunch of APIs, and a bunch of services.
But their challenge is now to take all that, suck it into the black hole, and then come up with this unified, cloud-based platform. Unified, I think, is the key word there, because, again, we want all this data to connect all these APIs working seamlessly together.
So the Autodesk Platform Services has three different clouds. We have Flow for media, Fusion for manufacturing, and Forma for us AEC people.
So Forma is what we're going to talk about today, and then you have all of these different services and APIs within that industry cloud. We are going to focus on two of them today: the Data Exchange API and services, and the AEC Data Model services.
So what's the big idea here? Again, going to go through this one more time: the idea is to start getting granular. So rather than me sharing a whole Revit model with you, I could share subsections of that Revit model. This really helps with security concerns. I don't want to give you my whole model; I just want to give you part of my model, because I'm worried about IP or other issues.
It also helps with just sharing. Now I don't have to share these big, bulky models. They could even be Revit 2024, but I could share with you, and you could bring it into Revit 2025. So that's the first piece: the granularity.
The second piece to all this is the interoperability. So the mindset, the shift is to get away from our authoring applications-- Revit, Rhino, Inventor-- as the project. Rather, this whole idea is to have the project be something that lives in the cloud, and all these software programs are actually like the authoring tools to do analysis, to do your calculations.
So for Revit, I might bring in some stuff from the architect, a granular subset of that data, do what analysis I need on that data, and then send it up to the cloud. So I think it's really cool. It opens up a lot of possibilities.
And the last critical piece to this is openness. So something that's cool about this is we have all these authoring Autodesk applications. But what they're starting to introduce is the need not to even open up Revit, but being able to access the data from Revit through the cloud with GraphQL. So I can get what I want when I want it, and I can bypass Revit. I don't even have to open up that application. It's good news for those senior engineers that don't like Revit.
So I know we're talking quite a bit about GraphQL and over-fetching and everything. I found this diagram that I liked a lot. So without GraphQL, with kind of a REST API, when you go in, you're grabbing everything out of a Revit model. That's over-fetching. With GraphQL, you can say specifically what you want: hey, I only want windows whose fire rating parameter equals whatever. So you're not over-grabbing.
I could also go in there and grab more than just windows. I could say, hey, I want windows with this fire rating, and I want doors with this fire rating. And it's just one endpoint, one time that I have to talk to the computer. And then it only has to send that smaller data packet, if you will, through the wires, through the air, through the modem to us. So queries are better and speed's better.
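To make that concrete, here is a minimal sketch of what a single-endpoint GraphQL call like that could look like in Python. The endpoint URL, field names, and filter syntax here are illustrative assumptions, not the documented AEC Data Model API:

```python
# Hypothetical sketch: one POST to a single GraphQL endpoint asks for exactly
# the fields we want -- no over- or under-fetching, and one round trip for
# both windows and doors. Endpoint, fields, and filter syntax are assumed.
import requests

GRAPHQL_URL = "https://developer.api.autodesk.com/aec/graphql"  # assumed endpoint
TOKEN = "<your-oauth-token>"  # obtained via Autodesk Platform Services auth

query = """
query FireRatedOpenings($groupId: ID!) {
  windows: elements(elementGroupId: $groupId,
                    filter: "category=='Windows' and 'Fire Rating'=='2 hr'") {
    name
  }
  doors: elements(elementGroupId: $groupId,
                  filter: "category=='Doors' and 'Fire Rating'=='2 hr'") {
    name
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"groupId": "<element-group-id>"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(response.json())  # only the requested fields come back
```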
So the schema for the GraphQL stuff looks like this. I was working a lot with the development team and some of the product owners over at Autodesk on this, and I asked them what the hardest part of doing this was. I expected it to be the API keys and all that, which we'll get into. They actually said no, it was coming up with a schema that's global enough to work with Revit but also with something like Inventor, and hence why one of the first things you see here is element groups.
So in the AEC Data Model, you have element groups. There's a versioning component to that. You go down a layer, you have elements. Then, you have reference properties. So this is like relationships, hence the graph idea. Then you can get its properties, like width and whatever, and then you can get property definitions, which is critical for good structured data and to be able to do something with it.
So to take it out of language and put it into pictures, it kind of looks like this, where an element group is actually like a Revit project. They just didn't want to call it project, because the element group might be something in some other application. So it's a collection of elements.
Then we go down a layer, and then we have elements. And this works exactly like you'd expect if you're native, if you know Revit. So an element, windows, walls, levels, and then we have the reference properties. So a window would probably reference its host wall. A wall, it would actually reference the level that it belongs to.
Then you go down another layer, and we get into those properties. So windows typically have a width, and walls typically have a length. And then there are the property definitions. The simplest example is the units, but I think eventually they'll open that up more, to be able to do dropdowns and enums, as they're called. So it'll be interesting to see how the schema evolves. But a lot of work went into it.
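As a rough illustration of that hierarchy, a query that walks all the layers might be shaped something like this -- the field names below are assumptions for illustration, not the exact published schema:

```python
# A sketch of how a query mirrors the schema hierarchy described above:
# element group -> elements -> references -> properties -> property definitions.
hierarchy_query = """
{
  elementGroups {                 # ~ a Revit project (or its equivalent elsewhere)
    name
    version
    elements {                    # windows, walls, levels, ...
      name
      references { name }         # e.g. a window's host wall, a wall's level
      properties {                # e.g. width, length
        name
        value
        definition { units }      # property definitions: units (and, later, enums?)
      }
    }
  }
}
"""
```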
It's one of those things in programming where you spend a lot, a lot of time simplifying, and when you're done, it looks so simple, and you're like, where did all the time go? That looks simple. So I understand. I definitely respect the time that went into this.
Yeah, so first off, to access the AEC Data Model, one of the first things you can do is go in here and play with this explorer. It's more of a development tool, but it walks you through the steps of accessing the information on the cloud through the AEC Data Model. So here are all those URL or URN keys that I've been talking about, all these big passwords. Unfortunately, you have to go through here and copy and paste. Did you play around with this at all, Jasmine?
JASMINE LEE: Yeah, I definitely did.
SEAN FRUIN: And how was that experience for you?
JASMINE LEE: There's definitely a learning curve to it. I'd say there is some documentation on it, but the documentation doesn't always align with the results that you may get. And there's a lot of copy and pasting items and you're not sure which item, because there's repeated items.
And I know an issue that we had is we didn't see the projects when we first tried this out, and so we had to do a little debugging there. And you'll probably talk about that reason why. We probably couldn't see those projects.
SEAN FRUIN: Yeah, I think I missed that, actually, on the PowerPoint. But we can go back to it.
So there is-- that's actually a really good point, and that is the-- let me do this real quick, unhide. Yeah, and that is kind of getting everything set up.
So there are some pre-check steps you have to do to get into the AEC Data Model. One thing is you need admin rights to ACC. So if you don't have that, good luck, because you have to have it to be able to get in there and turn it on. I had to pull some hair to get that. I'm finally an admin, I have the badge of honor, and I'm now an ACC admin for IMEG, and it was an uphill battle.
Once you do that, you have to go into your ACC account and turn it on. And this is, I think, alluding to what you mentioned, Jasmine. We thought we'd just flip the switch and magically be able to access all of our models that are already on ACC. No, we actually had to re-upload, after we turned it on, all of the models that had already been uploaded. So I think this is a very good point to bring up.
If you want to start using this, turn it on now and start uploading models, because it can only access data from models that are uploaded after the activation.
Also, it's Revit 2024 and up. It needs that because of how parameters are handled; there's more to why that's the case. And then I think you have to do something with the API, letting Autodesk have access to your hub. So again, there are a lot of these API web keys; it's confusing and a little convoluted, and something I'm still learning. Have you experienced problems with API keys in other programs you use to extract data, Jasmine?
JASMINE LEE: I think the ways that I've done it is more manual, but I can definitely say that in this process, when I was uploading those new models, the biggest challenge was upgrading the models for me. My background is, again, in mechanical engineering, so I had to search up a lot of documentation, like what is the best way to even upgrade this model.
SEAN FRUIN: I remember you calling me and being like, Sean, what should I do with the links? Because I'm updating all these links. Should I unlink it first?
JASMINE LEE: Yeah. Yeah.
SEAN FRUIN: Fun, fun, fun times.
JASMINE LEE: And then there's so many different know-hows of, like, this is the best way. No, this is the best way. And so it's like figuring out and testing these different ways, and that's definitely a big time sink.
SEAN FRUIN: Yeah, but fun. That's why we're here today.
Yeah, so this video is just going through the activation in the ACC hub. You can see one of the IDs there, so compare this to your account if you have trouble. But yeah, that activation button: definitely go in there with admin rights and see if you have it activated to get going.
So once that's done, then you come here and access the Data Explorer, which we found out was not that great. Depending on who you are, so I'm sure we're going to get a lot of users like this. Too much coding, hell, no-- I'm not going to do it. Nope, I don't think so. But they do have an answer for us. That's where Dynamo comes in, a much easier user interface, I believe.
So there's three packages right now that kind of deal with all this AEC Data Model, AEC Construction Cloud and getting data from there. One I'm calling the OG Data Exchange. This is the old one. This was actually developed on the old SDK. And I call it old, OG, but it's not really old. Like, I did a whole presentation on it just last year, and I think it still serves a purpose.
Then there are the newer GraphQL nodes, which use a Data Exchange as an input, but you query the data within that Data Exchange the GraphQL way. And then we have the big one, the AEC Data Model nodes. With these, you don't have to have a Data Exchange at all. You do use GraphQL to get the data you need across a whole project -- I should call it an element group, according to the schema. You can actually get data across all projects within a hub.
And this is just a summary of it. So the OG nodes include geometry, yes. The two GraphQL ones do not include geometry yet; that is in the works. There's actually an interesting story here. Again, talking to the developers and the team over at Autodesk, the initial goal was to do the whole AEC data model thing. That was the initial scope. And they failed, essentially -- failing is fine. So then they lowered the scope, and actually, that's how the Data Exchange was created.
And now it's matured into a nice product that has its own use cases, separate from the AEC data model. But it's typically never a straight line from your goal to your objective.
As for the inputs: DX is for Data Exchange. The OG nodes and the Data Exchange GraphQL nodes both take a Data Exchange as input. The AEC Data Model nodes do not.
So this is interesting with speed. So for larger data sets, the AEC data model is, honestly, the best. That's what it's designed to do, is to access large data sets. Again, you're grabbing everything from a model.
What you don't want to do, and I kind of learned the hard way, is make a Data Exchange like the whole project and then expect to query that with the Data Exchange GraphQL nodes. That is not what it was designed to do. They're more to send smaller sets of data, and they work really well then. And then, yeah, if you tried to do the same thing with the one with geometry, you're going to run into trouble.
And then for smaller data sets though, that's where the GraphQL one really shines. The Data Exchange GraphQL one really shines with speed. If you have a smaller subset, the OG ones are pretty good.
And then, actually, it's weird. I did some testing. I'll show you how I did that. But then the Autodesk Data Model ones actually ended up being slower when we had a smaller data set, which I just found interesting. So they're faster on a large data set, slower on a smaller data set, which is interesting.
For the outputs, I put "input and output" down for the OGs, because I don't really know what to call it -- it's a Dynamo list of geometry and parameters. But the other two output very structured JSON. Typically everything out of a GraphQL query, and therefore everything from the Autodesk Construction Cloud, will be in JSON format. We use it all the time; it's very standard and useful, and some of these queries get long. But we'll look at how we can make sense of that down below.
Cool. So one of the nice tools-- I want to bring up some Dynamo here-- is this tool called TuneUp. It's actually an add-in for Dynamo that was developed internally at Autodesk. And the whole point of TuneUp was to find bottlenecks in your graphs.
So we know Dynamo is not the fastest tool. It does a lot of geometry, and a lot of times you can accidentally create a huge bottleneck in the middle of your graph or script. So that's one thing. Then you get the overall execution time, and that's how we were able to measure these different options. There's also stuff that holds data and everything, and you can kind of get into that with it, too.
Yeah, this is what it looks like when you activate it. So here, it's in milliseconds. And here we have -- which one is this? This is the AEC one, and the execution time was 30,000 milliseconds. Can you do the math on that for me, Jasmine? What's that in minutes? Quick, quick, quick. I'm just kidding -- I am not good at those quick conversions. (30,000 milliseconds is 30 seconds, or half a minute.) It was pretty quick, though, if I remember.
So that's an overview, and then, I guess, I can hop in and kind of just wire one of these up and show you just a demo of what this kind of looks like, if you want to see that. So let's just start with a new Dynamo graph.
Interesting enough, so this is Dynamo Sandbox. We do not have an active version of Revit open right now. It's not connected to Revit at all.
Another interesting thing is they added this new feature: you log in to your Autodesk account in here. So this is actually doing the validation and the credentials on the back end. That's one of the reasons why this was introduced; it's a requirement for a lot of this bigger ecosystem that Autodesk is building.
But then we can go over here. And so we have this package, which is the AEC nodes. We also have the GraphQL nodes.
So one of my complaints to them was about this. I could take a hub -- notice, I'm going to put this to manual -- and notice I do not have to go and copy big long keys anymore, because the authentication is right here and everything on the back end is wrapped in this. I have a nice dropdown to show me all the hubs that I have.
And this is a nice Dynamo thing. So I'm going to Object Type. I use this all the time when I'm kind of exploring different things with Dynamo. So this is hub, and this is hub. I'd expect the output to be the same thing.
But if I go over here and do that and copy this and do that, you can actually see they're not the same thing. So this is Dynamo AEC GraphQL, and this one is Dynamo AEC GraphQL hubs. That doesn't make sense. It even says the same thing. Maybe I was wrong there. But I will show you that this doesn't work.
Let me go back into here. So, now I already forgot which one's the AEC one and which one's the Data Exchange one. We'll find out very quickly, though.
But then from Hubs I can go to Items. No, that gets the Projects. Projects-- so from Hubs, I dive down to get Projects. I want to see if this will run. That runs. There's all my projects within my hub.
But now watch. If I take this and go like this, up to here -- did I do something weird? No autocomplete. I did do something weird. OK, I copied the same one twice. So notice that does not work.
And then if I go and use my Object Type, right, we see this one is FDX, Dynamo GraphQL. So they look like they'll have the same outputs, but they do not have the same outputs.
So I'm going to delete all this stuff down here, and we will just focus on for now this Autodesk Data Exchange one. So here we go and go to Hubs, Projects. So what's kind of cool here is I have my test project called Sigma. So I could just go-- I can make a string.
And this acts as like a filter, too, so I can just do that, and I should get my four ones that contain Sigma in it. Or I can, of course, get very particular and go down to that 25. And there we go.
So now we have one project, the project with that name. And then I can take my project and get its items. And my items are all of these different Revit projects. These are the AEC designs.
So if we go back, if you think about that schema hierarchy, we are at the element group level. The terminology is not uniform yet across the whole thing.
This is interesting. It does cache data. I'm not going to get into that too much for sake of time. But from AEC Designs, I'm going to cheat. Rather than doing a filter here, I am just going to use some Dynamo tricks.
So I'll go to AEC Data, and I want to get rid of this top row here. So I'm going to just put 0, and that will kind of shorten my list. Does that make sense to you, Jasmine? I know you're not too familiar with--
JASMINE LEE: So the AEC Data 0 is the first--
SEAN FRUIN: Yeah, so I said, hey, give me the 0. So if there was another list here, it would just give me the 0.
So let me show you this. I can add this now. Which one of these do you want to open? Let's do one that's not too big -- the simple Data Exchange. Well, it's not that fun, because that would just be 0, 0, right? The index and the listing are both 0, so now I have that one.
But let's say I want to show tower one. Was it 13? No, that's the Autodesk School, the BIM model, the Autodesk School. Let me--
JASMINE LEE: Oh, I see.
SEAN FRUIN: --tower. Another thing you could do is split this up. I don't know if we have time for that, but I did play around with this. Actually, I will show you this in another example later, splitting that up. But yeah, kind of tricks in there.
So here we have one AEC Designs. Now I can go to my items, Data Objects. So if I hook this up to there, I don't need to put filters or anything. That's why they're blue. I do need to put in AEC Data. That's why it's red. But I click this, and we should-- you might have to wait just a little bit. It is pinging the cloud.
When I was running this with just a native Revit file -- so I took the Snowdon tower model, got a full default 3D view with everything in it, came in with just native Dynamo, and got the data -- it actually ended up being a little bit faster than using GraphQL, which is interesting. But I think they're working on optimizing things on the back end. And maybe because we are streaming at this wonderful Autodesk University, it's obviously taking up some bandwidth.
JASMINE LEE: So Sean, I just wanted to clarify something for those who aren't very familiar with the AEC Data Model. How this differs from the Data Explorer is that we're not doing all that copying and pasting of IDs, because your Autodesk account is associated with Dynamo?
SEAN FRUIN: Yeah, all that back-end stuff is kind of baked into the Dynamo nodes. So you're essentially doing the same thing. Michael Kelly -- he's your co-speaker, right? -- has been working on this too, and he has essentially built a UI to get around some of that, and been playing around.
Yeah, it's kind of a step up. It's an easier step, because you don't have to deal with all that. And it's kind of a natural progression of a prototyping tool.
And I'm just going to go back to the PowerPoint, and we'll come back to this one. Because I don't know what's taking so long.
But yeah, when it comes out-- so at the end, you get this JSON format. And it's dictionaries within dictionaries, so it's pretty interesting stuff.
But I'm going to take a step back, and I want to talk about this example with the OG Data Exchange. So I love this example. Number one, it's 95% a true story. Number two, it's fun, and it hits everything, all those objectives that we were talking about at the beginning.
So the goal of this was to build a generative design workflow that essentially gets a hole in one from basic inputs -- basic physics, like the club velocity, the angle, and all that.
The story starts, though, with me not being very good at golf. Last time I golfed, I actually broke my friend's driver and had to have everybody stop and walk out there and go get it. So I was like, maybe I could-- I'm smarter than that, so maybe I could build a generative design tool to do it. So that was in the back of my head.
But then after I built with Brian Nickel, who is the CEO at-- oh, my god, I just forgot his company's name. It'll come back to me. I was with Brian, great friend of mine. And we're standing in the queue for the Star Wars ride. And it kind of gets boring. And he shows me this top secret Revit model. I'm like, that's cool. And then we go and ride the ride, and it's great.
But then, like a year later, two years later, I am like, hey, can I use that top secret Revit model that you have? And he's like, no, it's a top secret Revit model. I'm like, no, Data Exchange. I won't give away your IP. I just need parts. Isn't it a granular subset of that? So then he's like, let me think about it. And then he looked at all the security stuff and said, sure.
So then this is the workflow for that. We took that Revit model, that top secret Revit model that we can't share. But then we used our own family to put a family at the holes and then at all the tee-off locations.
Then, just for fun, to show two different geometries coming from two different programs, I had a scanned version of me from a conference. I took that into Rhino and simplified it, because it had way too many triangles for Dynamo. So I used Rhino to do that, created a Data Exchange from Rhino, created a Data Exchange from Revit, put those up in the cloud, loaded those exchanges in Dynamo, put them in a Remember node, and then sent it off to generative design. These are the details of that.
But this is what that-- this is the OG Data Exchange. So here you can either load or create a Data Exchange. So here we loaded a Data Exchange from the Rhino Data Exchange that we created. And then we loaded one that was the Revit model.
Here's the nice thing. They have unit conversion stuff here. The GraphQL ones don't have that yet. We'll talk about that more. But then there's a whole bunch of math. You'll probably look at that. It's in the handout, the class last year. I'll put a link to the class. But that was just kind of finalizing the math, right?
So then we put in some Remember nodes so we could disconnect this. But this is the generative design piece. Essentially, it's projectile motion -- you're changing the velocity, the angles, all of that, and it does its thing. So I thought it was really cool: it's using real physics, and it's using a real Revit model, so it works, in theory at least.
So yeah, the generative design loops around. We have the target; it's getting close to the hole. We have the data Remember node, so we're measuring the distance from where the ball hit. There's no bouncing or anything -- it was pretty elementary. But we're optimizing that function to find what combination of velocity, club, and rotation angle gets you there.
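For the curious, the underlying math is ordinary no-drag projectile motion, and the optimization is just minimizing distance to the hole. Here is a toy sketch of that idea in Python; the function names and the brute-force search are illustrative, not the actual graph:

```python
# A toy version of the physics the generative-design loop optimizes:
# no-drag, no-bounce projectile motion (the demo was similarly elementary).
import math

G = 9.81  # gravitational acceleration, m/s^2

def carry_distance(velocity_mps: float, launch_angle_deg: float) -> float:
    """Horizontal distance of a drag-free projectile launched from ground level."""
    theta = math.radians(launch_angle_deg)
    return velocity_mps**2 * math.sin(2 * theta) / G

def distance_to_hole(velocity_mps: float, angle_deg: float, hole_m: float) -> float:
    """The fitness a generative-design run would minimize."""
    return abs(hole_m - carry_distance(velocity_mps, angle_deg))

# Brute-force "generative design" over a coarse grid of inputs:
best = min(
    ((v, a) for v in range(30, 81, 5) for a in range(5, 61, 5)),
    key=lambda p: distance_to_hole(p[0], p[1], hole_m=150.0),
)
print(best, distance_to_hole(*best, hole_m=150.0))
```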
Yeah, so this is creating the Data Exchange within Rhino. It's all very similar, no matter what program you're using. And then this is actually running the genetic algorithm generative design.
And this is what it looks like on the AEC Cloud. And here's me playing around the inputs a little bit. We don't have to go through all this, but you can tweak those sliders. If you try to do that manually and get a hole in one, it would take you forever.
And that's a really bad one. So again, we're measuring distance to whatever hole we specified. But let me play this.
So, all right, we did get close. Remember, I said it was a 95% true story. The 5% lie: we never got a hole in one. I think the settings were too big, but it got really close. That was the 5%. But we did find the closest shot.
So that's that one. Outcomes, right? We did build a simulator that was decent. Brian was able to share a subset of his Revit model, keep it granular, and protect the IP he was under contract for. And we were able to share that out, and everyone can have a golf simulator that's worse than the NES. But hey, right?
And also just the interoperability between programs-- I just thought was a really fun use case to show. Are there any programs that you would like to combine together, Jasmine?
JASMINE LEE: That's a great question. I mean, I think the options are limitless, but--
SEAN FRUIN: Have you looked into Speckle? Because Speckle has quite a few connectors. They kind of saw this road a little ahead of Autodesk -- they're actually almost ahead of them. It's amazing how many connectors Speckle has.
JASMINE LEE: Yeah. Definitely.
SEAN FRUIN: I see one with energy modeling or like EnergyPlus or something.
JASMINE LEE: Yeah, I think something very downstream could be getting data out of VR, being part of the innovation team.
SEAN FRUIN: They have that.
JASMINE LEE: Yeah.
SEAN FRUIN: There are connectors for that -- or maybe it's on the roadmap. We'll have to look into that. One of my goals is to eventually use Data Exchange to get that Revit school model into Fortnite. When it's possible, I'll definitely submit a class on it.
So let's jump to the next example. So this one is comparing models using the AEC cloud. So the idea here was, again, we've been working on energy modeling a lot. I've been kind of pitching this idea of, hey, let's take snapshots. Like, let's not go room by room, detailed energy model. Let's take a snapshot. Let's build a simple energy model and then make it more detailed as we get more information.
But then I thought it would be fun to take a snapshot at each phase. So I'm using a lab project where we start off with a rectangle and add details along the way. And I thought it'd be cool to create a Data Exchange from each one of those phases and then compare them downstream to see how big the differences are.
So that's what I kind of explored with this example here. Let's see if Dynamo ever finished over here. It did finish.
Yeah, so you have these dictionaries in here. Here's another Dynamo tip, too: I found it kind of hard to parse this, because you have dictionaries nested in dictionaries. But I could say dictionary -- we have properties. This is kind of like what I was doing with the numbers earlier. Oh, don't take forever again. Did I misspell properties? Oh, I put a capital. There you go.
So now, if we open up this dictionary at 0, we just have the properties in here. It looks like the order switched, which is kind of weird. But which one do we want? Let's say family name. So I go in here.
So first I get the properties, open up that dictionary, and then I put in "family name" -- not capitalized. And that gets me down to the parameter, so now for each one of my elements I get a list with just that one value. And it's 6,555 elements -- that's quite a few -- and this was just the simpler project.
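In plain Python, that dictionary drilling is just nested key lookups. A small sketch, with an invented JSON shape for illustration:

```python
# What the nested-dictionary drilling in the graph is doing, as plain Python.
# The shape of the data is illustrative: each element carries a "properties" dict.
elements = [
    {"id": "w1", "properties": {"family name": "IMEG_Window_Std", "width": 0.9}},
    {"id": "w2", "properties": {"family name": "IMEG_Window_Tall", "width": 1.2}},
]

# Get "properties", then "family name", for every element (node-style drilling):
family_names = [e["properties"]["family name"] for e in elements]
print(family_names)  # ['IMEG_Window_Std', 'IMEG_Window_Tall']

# .get() avoids blowing up on elements that are missing a key:
safe_names = [e.get("properties", {}).get("family name") for e in elements]
```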
JASMINE LEE: Sean, I wanted to chime in on why it's helpful to have that family name and be able to filter down. When we were first looking at data, really manually looking through it, it was really helpful that IMEG had a standard and we had very standardized family names, so it was much easier to go through that data.
SEAN FRUIN: Oh, 100%.
JASMINE LEE: Yeah, being able to leverage that family name.
SEAN FRUIN: Which -- I cut that part out, but the people problem is still interesting. I think we're pretty good at the standards we make, so we could do that -- we could start answering those questions. It'll be interesting to see.
What I love about this is I got into a habit. So you can create dictionaries with Dynamo, which I do all the time now. So I could do-- here's a little Dynamo trick, 0..5. And then I go A..-- let's see. B, A, B, C, D, E.
So let's say the numbers are my values, and these are my keys. The keys need to be strings. But my point is, I collect data like this a lot. If you go through all the scripts that I've written -- hmm, that's not quite right. Did I miscount?
Oh well -- usually it works. Usually, I take the data from getting elements and put it in a dictionary, because I find it a lot easier to sort and understand what's going on when you have a key with the value. You don't have to worry about lists staying in the right order and everything. That's been my habit the last couple of years, so it was really nice to get that out of the box with just a few nodes here. So that's pretty cool.
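For reference, the keys-and-values trick maps directly to a plain dictionary constructor. A tiny sketch -- and note the two lists have to be the same length, which is likely what went wrong in the demo:

```python
# The keys/values trick from the demo, in Python: Dynamo's 0..5 and a..e
# ranges become lists, and Dictionary.ByKeysValues is essentially dict(zip()).
keys = ["a", "b", "c", "d", "e"]   # keys must be strings, as in the demo
values = list(range(5))            # 0..4 -- must match the number of keys
d = dict(zip(keys, values))
print(d["c"])  # 2 -- no need to keep parallel lists in the right order
```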
Yes, we are on the data comparison one, correct? So let me open up that one.
Notice, too-- so we didn't have to run this one. So this says null. I believe if we just run it-- let's find out. That was really quick, right? So it has caching capabilities. So until I refresh the data-- so until I put a different input into it, it is going to remember everything that it did already. So that's kind of nice.
I wish they would cache the filtering inputs in, which I haven't even shown yet. Yeah, that's nice.
So here we had a couple errors. So here, I'm just trying to filter down to analytical spaces. And so then I get just the analytical spaces. And then I'm saying again, right-- so here's that properties and then something. So I'm asking for the peak load of the analytical spaces.
I could also just do a count. It's not -- you might have to take my word for it on this one. Yeah, the idea was that we could use these graph nodes, which are pretty cool. Something like Power BI might be better; I believe Power BI is on the roadmap for the AEC Data Model, and it already works with the exchanges. I actually have a whole class on that, which I'm really excited about. But yeah, I just wanted to take advantage of the cool graphs you can build in Revit or inside Dynamo, which I thought was neat.
And that kind of gets us to our next example, and it has another feature. So I'm going to run through this one pretty quick. But this, I wanted to point out this fact about data and how critical it is that we go up the data pyramid. So do you know what this data means, Jasmine? Any idea?
JASMINE LEE: Well, the left-hand column looks like times to me. And then the rest--
SEAN FRUIN: Give me some insight on this data. What's it telling you?
JASMINE LEE: Maybe-- so, I mean, it goes-- if I'm thinking of--
SEAN FRUIN: Let's go up the pyramid a little bit.
JASMINE LEE: OK. OK. That makes more sense to me now. Yeah.
SEAN FRUIN: Now we've got information. We've got units, right? Yeah, and we know. We know what it is. It's a temperature. We've got units, so that's better. I can make some sense of it.
But hey, I know Fahrenheit. I went through engineering school, and I still have no idea what 36 C is. My wife is from Turkey; she miraculously can convert pretty quickly, but I have no idea. That means nothing to me. I can get no insight from 36 C. Can you?
JASMINE LEE: No.
SEAN FRUIN: Is that a hot day, a T-shirt day? Is that a sweatshirt day? I don't know. But then, let's get better. Let's convert it, right? So we know, oh, 91. That's a definite T-shirt day, right? So now we have the knowledge, because we know we're used to those units. Then finally, wisdom-- we can start to tell the story.
So why did I walk us through this? Well, because we need to give Autodesk and the development team a little spanking, because they only kind of did it. So with the OG Data Exchange, you can create new parameters, and you can create a Data Exchange through Dynamo.
And to add parameters, it's very strict. It makes you go up the pyramid. So I can start with the number, but then I get that red. I need a type ID. What happens when I do a type ID? Oh, well, I need a name, of course, just like we need a temperature to make sense of it.
I need units; I need to know what it is -- this is airflow. Then I need to give it a description. By then I've given enough that the person on the other end receiving this data can make sense of it. I guess the units are kind of baked into Revit.
But the output of the AEC Data Exchange? I don't know -- I don't know what those units are. Well, I actually do know, because I created this one from a very simple wall that I knew was 10 feet.
So we are in metric, and we have no idea that we're in metric. Even worse, I found out that what comes out of there is a string. So I had to convert it back to a number. Now, there are some nice nodes in the newer Dynamo versions that let you convert units, but I'd call that half a solution, because I still had to do it manually. I didn't know this was a length, I didn't know it was meters, and I didn't know I wanted to get to feet. If that information were just nested in, I wouldn't have to do that unit conversion.
So case in point, if you're trying to use this for anything, right now it's all in metric, and there's no way to switch it out. So that's that one.
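To spell out the half solution described above, here is what that manual conversion amounts to in Python -- the raw value is invented for illustration:

```python
# The half-solution, sketched: the exchange hands back a metric value as a
# string with no units metadata, so you must know out-of-band that it is a
# length in meters and convert it yourself.
raw = "3.048"                    # what came out of the exchange for a 10 ft wall
meters = float(raw)              # step 1: it's a string, so parse it
feet = meters * 3.28084          # step 2: you must already KNOW it's meters
print(round(feet, 2))            # 10.0 -- but nothing in the data told us that
```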
So then, yeah, then the next example I want to just show is this idea of collecting across multiple projects. This has been what you've been working on, Jasmine, and one of our wonderful bosses, Mike Lawless, is like, dream goal. And I don't think he realized how hard this task would be. I don't think any of us did.
But it's been cool to explore this ecosystem and see it get better. And I think we're having pretty good results compared to the more expensive solutions that we've found -- and definitely compared to opening up every model and all that. But again, we do have to upgrade the models.
Some of our internal goals: I'm a product owner, so I need to get a return on investment. For example, I have no idea how many diffusers we place in a year. Who knows the answer to that? We're trying to get to those answers, and that's the predictive analysis, so we can do those analyses. And then we want to explore large language models, which we've already started doing.
Part of this, by the way -- I've been on the beta team for Data Exchange, and part of this example I ended up building. Last year, I could not use Dynamo Player. With this version, with the GraphQL nodes, you can actually use Dynamo Player, and that's what I was just showing here.
So what we did is we built a Dynamo tool: you go in there, you pick your Data Exchange, or you pick your AEC data model, and it exports a JSON file for you. Again, the data coming out of both of those nodes is JSON, which gets put into a Dynamo dictionary. Well, there's this wonderful package, JSON objects, that takes a dictionary and puts it right back into a JSON. Then we can specify a file and write that out. So we end up with a JSON we can take out and play around with.
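The export step itself is simple once the query result is a dictionary. A minimal Python sketch of the same idea, with an invented result for illustration:

```python
# What the export step does, approximately: the query result is already a
# dictionary, so writing it out is one call. (In Dynamo this is the JSON
# package plus a file path node; here it's the standard library.)
import json

result = {"projects": [{"name": "Sigma 25", "elements": 6555}]}  # illustrative

with open("aec_data_export.json", "w") as f:
    json.dump(result, f, indent=2)  # a file you can feed to Power Query, ChatGPT, ...
```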
Back to large language AI. We have this thing, Meg, that we've been working on internally that can access stuff on SharePoint and everything, because one of our biggest issues is accessing data. But it can't access the Data Model yet. So one of the things we've been exploring is just throwing this into ChatGPT, which is kind of fun. I was working on that a little before we got here.
I was going to try to build this directly into Dynamo, but that proved to be a little rough. But this is a JSON that just had the file names in it. So the prompt -- trying to give a lot of context -- was: I will provide you with a JSON from the Autodesk Data Model. I want you to be a data analyst and help me analyze the data. I will ask you questions; you will provide me accurate results.
And I love this one. Steve, our developer who's been working on Meg, taught me this prompt -- "with no chit chat" -- and it's like the most genius thing I've ever seen, because it totally gets rid of all the filler that ChatGPT gives you. Thanks, Steve.
All right, so then I fed it that, and I said, what is in this data set? And so it starts to analyze it. So it's just really interesting to think about, again, kind of parsing this large data set. If it's structured enough, could we start using the large language models? And what does this look like if we connect it to Revit directly?
And I think through the AEC cloud -- it's a little bit past my pay grade or skill level, but I think it's very possible, and we've seen some people already do that. So yeah, exciting times. I encourage you guys to experiment. I will include the Dynamo graphs and the data set. Just go have fun: activate your hub, get yourself a JSON, and play with ChatGPT.
The other one was querying. I'm going to keep this one a little short. Through the other class I'm doing, I've been using Power BI a lot. I thought Power BI was just for visualizations; it turns out Power Query is super powerful.
So taking that JSON, getting it into Power Query is another really good way to clean out the data, find what you need. I've actually found that really beneficial.
So then we're going to evolve a little. That was an overview of the Dynamo stuff -- it's still in beta, with a lot of moving pieces. Go to the handout and look at the graphs and everything. But I just want to leave us on this idea of where this is going.
So Michael Kelly, our partner, was building this. This is Postman -- it's a web API tool. Here, you can see I'm doing a lot of those same things: going down, finding those URL keys, filling them in. He said this experience was a little better than the Explorer.
He was able to put some programming on the back end to assign stuff to variables quite a bit easier. So that's what he's doing here. He's assigning that to the variable, so it's a couple less clicks.
But one of the really cool things about Postman is you can develop these GraphQL queries. But then you can then convert them into a language that you need. So that was kind of a second step.
But here he was able to, again, export out that JSON. And here he's able to say, OK, give me the C-sharp code that I need. And there's all these different languages in there. Give me the code that I need to now go build this application inside of C-sharp. And that's what he's doing here.
It's funny -- this started off as a tool using a different API than the one you guys were working on, and then he was very quickly able to sub in the AEC Data Model parts.
And there you saw, again, a freaking long token, and having to sign in to Autodesk and do all that. So here you have something similar to the Dynamo Player one, where you're able to go through. Right now, it's a dropdown to say, hey, give me these hubs; give me the projects in these hubs; here's what I'm looking for. And then with this UI, you can extract that information. So here we have a table of counts, I believe.
Again, I know this goes a little beyond Dynamo. I just want to give a feeling of where this is all going and how third parties, like Allied BIM, Brian's company -- I told you I'd remember the name eventually -- can start building applications that tap into your models on the AEC cloud and then use GraphQL to get the data they need in a granular way.
So yeah, moving forward, this is the end of our story today, but I hope it's a new chapter for you guys. I hope you start to embrace and explore this. Again, it's a very big paradigm shift, this idea of granular data. I have no idea what the next 10 years is going to be. I've heard from Autodesk this whole idea is like a 10-year vision. That's insane. Hey, why not jump on the bandwagon early and start to think about what your workflows would look like using these new tools, and jumping in there, and then giving feedback to the team to make it all better?
To just kind of wrap it up: data is king. That's what I was just saying. Data is the foundation of AI and machine learning; we need good data to do all that.
GraphQL optimizes our data collection. We're kind of getting rid of versioning issues, and we have better control over our data through these new APIs.
And just to sum it up: project A is kind of like, hey, I can collect a whole bunch of information across projects in a hub and throw it up into the cloud. A future state is project B, where I can grab data from Revit and Inventor -- not the same way, because they have different APIs -- but they're able to bring it up into the cloud and then use the AEC Data Model to standardize that data into a uniform set.
And then we have the power of GraphQL to be able to access subsets and give very precise queries to what we need on the back end. So these are all those parts coming together. Do you have any questions or final comments, Jasmine?
JASMINE LEE: No, I think I definitely learned a lot. So I'm glad I joined.
SEAN FRUIN: All right. I think we're finally seeing a crack in our data silos. And thank you, everybody.