Description
Key learnings
- Understand the future strategy for Autodesk in relation to BIM and Forge
- Understand how Forge High-Frequency Data Management is being used to open up a BIM network
- Discover new opportunities for software development within the BIM space
- Help developer partners plan early for their strategies
Speakers
- Jim Awe: Jim Awe is the Chief Software Architect for the Construction Business Unit at Autodesk. He is a veteran of the industry with a long history in BIM software. He is currently responsible for the technical strategy of the platform to enable Design-to-Make workflows and a CDE (Common Data Environment).
- Shiya Luo: Shiya Luo is a software engineer on the Quantum team. She works on the data side of Quantum: how to manage ubiquitous information across the construction lifecycle and improve the BIM process.
JIM AWE: All right, so my name is Jim Awe, and Shiya is here with me to do a demo in a few seconds. So we're here to talk to you about Quantum, which you may have heard us announce last year, and then we went silent for a while. I'm still not going to give you a whole lot of specifics, but at the end I'll explain why that is. But let's just dive into it.
So the first thing I want to do is set some expectations appropriately. This is not about training you on a specific API, so you're not going to walk away with any details on a particular API. I'm not going to officially announce any dates. I will give you some hints about where we're headed. You will see an interesting use of HFDM. So who went to Farzad and Kai's class? About half of you, so you have a little bit of an introduction to HFDM. I'll talk about it a little bit. I won't go into a lot of details, but you'll see how we're using it to do something interesting in Quantum.
You will suffer through my analogies and conceptual descriptions, which I always get beat up about, all the analogies I have. And I have tons of them in here. Hopefully you'll get some insights into long-term strategy and be able to plan accordingly. So that's really what this is about, is for you guys to change your thinking a little bit from what we traditionally tell you as developers.
Here's a product-centric API. Go use that to extend it. This takes a little bit of a mind shift to see where we're going, and it takes a while. So it's good to start early. And then, hopefully it'll spark some new ideas that you can start acting on and planning for.
So the first thing I would do is reintroduce HFDM, which if you were at that session you heard a little bit about. But basically it's the breakthrough technology that allowed us to do what we're doing in Quantum and rethink how we approach stitching together an ecosystem in the BIM world. But it basically allows for a web-based microservice architecture. It allows much more granular data.
So instead of passing entire files around you can just pass small packets of data between the applications. And that allows you to do real-time coordination between the applications, if appropriate, and you'll see both cases in my examples. Some are real-time, some are what I call gated asynchronous, which is more like a file based workflow but with much smaller bits of data.
Branching and merging. Farzad talked to you a little bit about that. It just means that change management is much easier when you know exactly what changed between one version and the other. Versus I get a giant file version one and I get a giant file version two, and it's hard for me to figure out what changed. And then, I use this spreadsheet metaphor a lot.
HFDM allows us to construct workflows like the way a spreadsheet works. Not literally the UI for a spreadsheet, but think about the way a spreadsheet works, where you focus on a piece of data in a given cell. You change that piece of data, and it magically ripples through the system and gives you an answer back fairly quickly. We want that kind of behavior.
When you make a design change that you, in quick order, know the ramifications of that design change all the way through. All the way down to fabrication-- cost, schedule, all those things-- instead of I make a design change, I throw it over the wall, and it takes three weeks for somebody to move that data through the system and come back with an answer for their particular part of it. So we took all of these characteristics of HFDM and started thinking about how we would use them.
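The spreadsheet metaphor above boils down to a publish/subscribe pattern over small pieces of data. The sketch below is purely illustrative (the class and method names are made up, not the HFDM API): several independent "apps" watch the same property set, and one change ripples to all of them.

```python
# Hypothetical sketch of the "spreadsheet" behavior described above:
# applications subscribe to a piece of data, and a change ripples
# through to every subscriber without moving whole files around.

class PropertySet:
    """A tiny observable key/value store standing in for an HFDM property set."""
    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set(self, key, value):
        self._data[key] = value
        for callback in self._subscribers:
            callback(key, value)   # ripple the change to every listener

    def get(self, key):
        return self._data[key]

# Three independent "apps" watching the same box data.
box = PropertySet()
report = {}

box.subscribe(lambda k, v: report.update({f"3d_{k}": v}))   # a 3D view
box.subscribe(lambda k, v: report.update({f"2d_{k}": v}))   # a 2D view

box.set("width", 4.0)  # one change, immediately reflected in both listeners
```

The point of the design is that neither "view" knows the other exists; each only knows about the data it subscribed to.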
And I want to start with some early prototypes that we did. I actually showed these last year at AU, but I want to replay them because it takes you through our awakening to the capabilities as we were playing around with HFDM. And they're very simple examples, but I think they're illustrative of the thought process you go through in realizing what capabilities are.
So here we have the simplest data model you can imagine, which is just a box. So we have these property sets that define boxes. And on the upper left, we have a 3D version of the app, on the right a 2D version of the app, and on the bottom a report that's just listing statistics. And when I change the data in any one of the apps, it's immediately reflected in the others.
And you may say, well, we've seen that in AutoCAD and Revit and all kinds of apps, where you can coordinate 2D and 3D. But this is three completely separate applications that know nothing about the other applications, and you'll see why that's important as we start adding new capabilities to the system based on the same box data.
So the same 2D app is on the left, but now we take an existing application, which in this case is FormIt. It could be Revit, it could be Civil 3D, it could be anything. And as we move those boxes around, we're immediately reflecting the changes in the other application. So we're not exporting a file and importing it and changing the data. We're doing the equivalent of that spreadsheet. We're just watching certain cells of the data, and when those change we do something. We don't have to move all of the data around in order to react to those changes.
Then we wanted to see what would happen if we added something like a simulation or analysis service. So same two box applications on the left. The one on the right is a new one that somebody added in a couple hours. And all he did was grab an open-source physics engine, hook it up to three.js, build the app on the right, and he just listens to events from the other two box applications. And when somebody adds a new box, he simulates dropping it from a certain distance.
So if we had done that the traditional way, which is to bring new functionality to where the data already lives inside of a monolithic application, it would've taken a lot of effort. So for instance, if we had tried to do that and bring that functionality into AutoCAD or Revit, you'd have to hook it up to AutoCAD's database or Revit's database, their graphics system, their UI control. It would be a huge, huge integration job to hook that up.
With HFDM it's really simple. You just point at the data and say, hey, when it changes I'm going to do something different in my window. So now you have these three applications that are all coordinating with very little integration effort. And then we took analysis service from Insight 360 that does energy analysis and hooked that widget up. So it's doing energy calculations on the box, which is probably a garbage calculation, but it's still sending the data around.
So if we had real data coming from the building, it's very easy to take existing services and wire them in. So just the ability to rapidly add new apps, new targeted applications, new targeted services and wire them in without having to bring them all into one application framework or export large amounts of data between them, was a big deal.
So we did those early experiments, and then we tried to apply them to something that's been a problem in the AEC industry forever, which is interoperability and collaboration. So anybody that's done a BIM project is intimately familiar with data wrangling, or the idea that you've got to take data from one place and manually get it to another place. Because there's a diverse ecosystem of tools on the project and it's not all done with one application.
And when you do that data wrangling, moving the data around, you end up with a lot of oversharing. You're passing more data than you really needed to the other parties. And there's a lot of noise because like I said earlier, you get giant Revit file version 1 and then you do some work. And then you get giant Revit file version 2 and you have to figure out what changed.
Another factor that we ran into in the industry was how something gets fabricated or installed on site is increasingly affecting the design. And we'll see a couple of customer examples of that. And then the last one we heard a lot from customers was that Autodesk does a good job telling me the effects of a design change as long as it involves geometry or a floor plan, but if it involves things like cost and schedule it could take weeks for the ramifications of that design change to make it through to the fabricators, the cost estimators, all of that. And back to this spreadsheet metaphor, we really want that feedback loop to be much, much quicker by letting data flow through the system much easier.
So this sounds a lot like the original concept of BIM, and it is. We're not changing the idea that all of the different phases of the lifecycle of the project or all the different people involved can work around a digital model and keep that digital data alive. What we're changing is the realization that that's way more complex than just having a single model in a single application. You really need a much more diverse ecosystem in order to cover the full lifecycle.
So I have a few concepts in here that are the main themes of Quantum, and we'll see examples of them as we go through. But I don't want you to think we're doing something different than what the concept of BIM was. We're just embracing the fact that it's not this waterfall process. It's much more-- and you'll see this graphic more often now. It's much more integrated between design and make.
So the first theme is this idea of implicit collaboration. And I use an analogy with Uber or Lyft, just as a simple example of a couple of different people and applications that are involved in a transaction, where they don't think about it a lot. You just pull out your phone, you say pick me up, and that's basically all you worry about. The drivers see the information they need without trying to filter through what data is relevant to them. And that transaction is handled fairly easily without people explicitly thinking I have to export something, I have to upload it, I have to connect the dots between all these applications. So that's what we want the field to be, is that people are thinking a lot less about all the explicit steps they have to do to move data through the system from one application to another.
So Quantum is often mistaken for a new product, and that's not the case. We're really talking about a platform for products. So I use this analogy of a bingo card. We're basically trying to supply the bingo card itself, and then there'll be applications and services that are the so-called chips on the bingo card. And we're really trying to connect them in a row in order to create a workflow.
So if you take something like the curtain wall or the facade of the building, there's multiple companies involved, from the design to the fabrication to the installation. They're not all going to use the same tool, the same model. They use slightly different things. But they need a consistent workflow in order to coordinate between the design, fabrication, and installation.
So we're trying to supply that playing field. We will likely connect some existing applications, like Revit. We will likely create some new ones, and we expect third parties to develop new chips on the bingo card as well, because this grid gets way more complex than what I have here. There's tons of diversity in the ecosystem.
So when you move between any two particular apps, traditionally we've tried to either get all those apps into the same space so they can all share the data, or we tried to move the data in whole from one to the next. And that's not what we're trying to do here.
We're really trying to just move only the necessary data between those two systems. So pretty much like we do in computer science, we have an interface definition that just defines your module has to coordinate with somebody else's module. You don't have to share all the code in order to do that. You just have to have a well-defined interface. So let's see some examples of that in customer workflows.
So this is an example from CW Keller, who was-- Farzad mentioned them with the shark tank earlier. They've been very influential in shaping our thinking around this problem. So this specific example from them-- they're responsible for this feature wall.
And what they got from the architect all the time was a two gigabyte Revit model, where they had to load it up, spend time to narrow it down to these 12 structural attachment points, and then throw the rest of the data away because they didn't care about the rest of the model. And then every time they got a revision they had to go through that process again. So what they wanted was for the architect to just give them the 12 points and ignore all the rest.
So that's where we came up with this idea of a contract between various parties. You can think of it as an interface definition or a contract that just says, hey, I expect you to just apply these 12 points, and you're going to put them into an HFDM and property set. And then on the other side, I'm going to pick those points up and then use them in my tools and my process, and we're going to avoid oversharing all of this extra data that people don't really need to exchange.
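The contract idea here is narrow by design: instead of shipping a two-gigabyte model, the architect publishes only the structural attachment points. A hedged sketch of what such a contract might look like as a schema plus a validator (the template id, field names, and layout are all hypothetical, not a real HFDM template format):

```python
# Illustrative contract for the CW Keller example: only the 12
# structural attachment points are exchanged, nothing else.

ATTACHMENT_CONTRACT = {
    "typeid": "example:structuralAttachmentPoints-1.0.0",  # hypothetical template id
    "properties": [
        # 12 points x (x, y, z) coordinates, flattened into one array
        {"id": "points", "typeid": "Float64", "context": "array", "length": 36}
    ],
}

def make_contract_instance(points):
    """Validate and package exactly 12 (x, y, z) points for exchange."""
    if len(points) != 12:
        raise ValueError("contract expects exactly 12 attachment points")
    flat = [coord for p in points for coord in p]
    return {"typeid": ATTACHMENT_CONTRACT["typeid"], "points": flat}

# The architect publishes only this; revisions replace 36 numbers,
# not a multi-gigabyte model.
instance = make_contract_instance([(float(i), 0.0, 3.5) for i in range(12)])
```

The validation step is what makes it a contract rather than just a message: both sides can rely on the shape of the data.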
And then, doing that allows us to decouple the BIM model where it's appropriate to do so. This other example of the SFMOMA museum in San Francisco clearly demonstrates this. The outside of the building has this crazy exterior that Snohetta had to figure out with the designers, the fabricators, and the installers in one continuous feedback loop. They had to figure out how to make this thing. And it required a new process that they developed on the fly with a variety of tools. Some of them off the shelf, some of them they wrote some plugins themselves.
But when it came to the interior, that's just standard, traditional stuff. They already have a process for it. Revit already does that stuff just fine. So really what you want to do is decouple it and say, the facade can be done with one set of tools and one process, and the interior can be done with another one. And that's actually how we came up with the name Quantum for this project, is the parallel to physics.
Newtonian physics on the right. It works fine. Everybody understands it. No need to change it, but you need to make an exception for something that doesn't quite follow the traditional rules. But you shouldn't have to throw out everything that already works just to make that exception. So it's not an all-or-nothing play. So if anybody says Quantum is a replacement for Revit, that's completely false. Revit is a big part of the Quantum story. In fact, all existing applications can plug into Quantum, and you can use whatever works now and then you can plug in new processes where you need to.
So quick example. I'm going to have to speed up a little bit. So this was our evolution of us playing around with the tools. There's three major parts to keep conceptually in your brain. And by the way, this isn't an application. This is just what we used to play around with the underlying pieces. The upper left hand corner is the data that's flowing back and forth between these various apps. And it's not the entire model. It's only these contracts or interface definitions.
The lower part is what we call an orchestration graph, and think of that as the kind of logic you would put in when you author a spreadsheet. You're saying, take data from this cell, do something with it, output it to this other cell, trigger a chain reaction. So what we started with was three different applications. One is the architect defining the box shape of the building, another one is the curtain wall designer, and then we're going to add to this steel-- so the structural engineer-- and they start coordinating around this. So the structural engineer ends up with their data in a focused app that's only worried about the structure, but we don't lose the coordination with the rest of the pieces.
So I can go back to this app, like I would in Navisworks or BIM 360 Glue, and I can see how everything fits together. But still, each party, each discipline, has got their own dedicated environment, and they're controlling their thing independently of the others. So now, this allows the steel people to now collaborate around that system of the building, all the way from design to fabrication and installation.
So we go back and we start adding other services that take steel further into detailing, and you'll see that we get shop drawings and connections when it goes back to the steel environment. And I may skip ahead a little bit to save time. So now we get shop drawings, and every time we click on a thing we get a connector.
So this code was factored out from the Advanced Steel product and made as a web service that we can then plug into the graph. And that opens up new opportunities. So then they made this app that was dedicated for the person on the shop floor who's only worried about I got to take one connector and get it made, and I got to figure out how to drive the machine to get that one made. Previously that was hard to do, because you couldn't put Advanced Steel or Revit on the shop floor and have that guy use it. It's just too complex. Now you can make these more focused, dedicated applications without losing the overall workflow.
So they go back and they change the shape of the building and it ripples through. And then they also did for assembly-- this is the erection sequencing that you would do on site. So now we can connect the workflow from design to fabrication to installation without having to bring everything into one single application.
And we did that-- I won't show you this video, but we did the same thing the other direction in that bingo card grid. Because now when I change the shape of the building, it sends the footprint information of the building to a service that's calculating the cut and fill for the site. So now you're coordinating the building and the site together. So it goes in multiple directions, either a coordinating system to system or coordinating across the design to the fabrication and installation lifecycle.
So it's important to point out here that we have these big desktop applications that have a ton of functionality in them. We now have the capability to carve out some of that functionality, make it available as a service, and then use it in a more flexible way in these orchestration graphs.
Let's see. Real quickly here, just to show you that Revit is part of this thing. So Dynamo is on the upper left hand side. On the right hand side is this test app where we check the manufacturability of these curtain wall panels. So it's checking for things like-- this represents the fabricator who's checking whether things are planar, the number of unique panels, because the cost is going to go up if you're not coordinating those things. It's bouncing the data back and forth off of HFDM so we have a quick feedback loop.
We can do the same thing with Revit. So here Revit, with Quantum, we've isolated this one face. And we're sending that over to the panelization app and communicating quickly back and forth between those two without sending the entire Revit model, so we're only focused on that one part. So you can think-- we focused on curtain walls here, but you can think of all kinds of parts of the building where, if you decouple it and connect it to the fabrication process, you can get a lot more insight on how that part gets built. So we've seen the facade, we saw steel. You could do things like wall framing and just send the wall boundaries to a framing service and have it figure out how to do it. So there's all kinds of ways to do that.
These orchestration graphs end up being fairly complex, and we see customers doing this already. So CW Keller we've talked about a few times. There's another company called Front, Inc. that was very influential in our thinking in Quantum. And Front does crazy projects like this, where there's no way you're going to design that facade upfront and then hand people some plans and ask them to build it. So that has to be one continuous design and manufacturing process, where they know exactly what's going to happen every time they make a design change.
So Front developed this out of necessity on this high-end building, but it's very similar to what we do in software now with a CI/CD process. You just have a ton of diverse tools and people working on a super complex system, and you can't manually move data around and have everybody verify that it's all correct. It's just too time consuming, so you have to automate it somehow.
And that's exactly what we've done in the software industry. When we make a design change, we have to get that all figured out with some automation. So the idea of orchestration is that same thing. I make a design change. It kicks off a bunch of analysis, validation, moving data to the next tool in order to remove all the uncertainty from the project.
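That CI/CD-style orchestration can be sketched as a small graph of services, where one change entering a node is pushed automatically to everything downstream. This is a made-up illustration of the concept, not the Quantum orchestration-graph API; the node names and cost figure are invented for the example.

```python
# Illustrative orchestration graph: a design change enters one node and
# ripples through downstream analysis/validation services automatically.

class OrchestrationGraph:
    def __init__(self):
        self._edges = {}     # node name -> list of downstream node names
        self._handlers = {}  # node name -> function transforming the payload

    def add_node(self, name, handler):
        self._handlers[name] = handler
        self._edges.setdefault(name, [])

    def connect(self, upstream, downstream):
        self._edges[upstream].append(downstream)

    def trigger(self, node, payload):
        """Run one node and push its output to everything downstream."""
        result = self._handlers[node](payload)
        for downstream in self._edges[node]:
            self.trigger(downstream, result)
        return result

graph = OrchestrationGraph()
log = []
# "design" computes derived data; "costing" reacts to it (hypothetical $120/unit rate).
graph.add_node("design",  lambda p: {**p, "area": p["w"] * p["h"]})
graph.add_node("costing", lambda p: log.append(("cost", p["area"] * 120)) or p)
graph.connect("design", "costing")

graph.trigger("design", {"w": 10, "h": 3})  # one change kicks off the chain
```

Adding a new service (say, a schedule check) is just another `add_node` and `connect`, which is the flexibility the orchestration idea is after.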
So I'm going to quickly set up what Shiya is going to show you. So that was the idea behind Quantum: to decouple the BIM model, support the workflows from design to fabrication and installation, and allow a much more diverse set of tools to be used on the project in a seamless way.
Most of the examples I showed you so far were real-time coordination, so as I made a change it immediately rippled through. We know that that's not always going to be the case because people have to have, what I call, gated asynchronous workflows, meaning I work on it for a while and I'm not giving it to anybody else until I'm ready to. And then they may not accept the changes until they're ready. So we had to come up with this way of using HFDM to communicate all the data in an orderly way and respect the ownership of data, so that you weren't just blasting data out into the system and having people react to it too early.
And then, one of the key things was we can't exclude anybody. If we're going to support an ecosystem, it can't just work for only new apps that are natively using HFDM. It has to work for things like Revit and AutoCAD and Civil 3D and all those existing apps that are already file based. But we've proved already that apps like Revit have no problem communicating with HFDM and exchanging the right amount of data.
So HFDM is a component, and you can use that for a lot of different cases. You saw in Farzad and Kai's presentation you can build an app out of that, where HFDM is an implementation detail of that specific app. In this context, we use HFDM to support this idea of stitching together the ecosystem. And Quantum really establishes the rules that say, if you're going to be part of the ecosystem as an app or a service, you have to follow these minimum rules in order for it to all work. But those rules are very, very minimal because otherwise you couldn't support a full ecosystem.
So there's two major concepts that Shiya is going to talk about: this idea of contracts and Escrow. And a contract is just an HFDM property set that defines this minimal exchange of data. So you can think of that as the same thing as an interface definition in software. You're just getting it down to my module only cares about this data, so that's what we're going to agree to exchange.
And then Escrow is a special HFDM repository that supports these, what I call, gated asynchronous workflows. Meaning you're going to park a contract into-- an instance of a contract-- into escrow. And it's going to stay there for a while until some condition is met, and then somebody else can pull that data from there. And Escrow is going to keep track of all the changes over time between all of the different applications. So from a technology standpoint, it's really just HFDM, but from an ecosystem standpoint it's a set of rules of how you use that so that it's not just a free-for-all of data.
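The gated asynchronous behavior can be sketched as a versioned drop box between producer and consumer: the producer pushes when it is ready, and the consumer pulls only when it is ready, with the full history kept in between. All class and method names below are illustrative, not the real Quantum API.

```python
# Minimal sketch of the Escrow idea: a shared repository where a
# producer parks contract versions and a consumer accepts them only
# when ready (the "gated asynchronous" workflow).

class Escrow:
    def __init__(self):
        self._versions = []          # full change history, oldest first

    def push(self, contract_data):
        self._versions.append(contract_data)
        return len(self._versions)   # version number

    def history(self):
        return list(self._versions)  # Escrow tracks every change over time

class Consumer:
    def __init__(self, escrow):
        self._escrow = escrow
        self._seen = 0               # how many versions we have accepted
        self.local = None            # our local copy of the contract

    def fetch(self):
        """Look at what changed in Escrow without applying it locally."""
        return self._escrow.history()[self._seen:]

    def pull(self):
        """Accept the latest version into the local workspace."""
        versions = self._escrow.history()
        if versions:
            self.local = versions[-1]
            self._seen = len(versions)
        return self.local

escrow = Escrow()
consumer = Consumer(escrow)
escrow.push({"points": "v1"})        # producer parks a contract instance
pending = consumer.fetch()           # consumer sees one unseen change...
consumer.pull()                      # ...and accepts it when ready
```

The separation of `fetch` and `pull` is what keeps data from "blasting out into the system" too early: seeing a change and accepting it are distinct steps.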
So an example of a contract we already saw. It would be like these 12 points that we need for the structural attachment. And that can connect multiple things. So in this example, the curtain wall attachments, you would define the contract. The designers would put them in a certain place, the fabricators would fabricate around those, and the installers would make sure that they ended up there. So it completes the loop between all of those three people on a small set of data, instead of saying, you all have to agree to use the same tool, the same giant model of the building just to agree on those attachment points.
So then Escrow we talked about, so I'm going to hand it over to Shiya now, and she's going to show you this in action. And what we did was we set up an experiment with cones and cylinders, so it's going to look kind of abstract. Why are we passing cones and cylinders around? But the idea was to scale down a data set that represented real-world interactions, get it to work, but then scale it back up. So when you see cone and cylinder think architect, structural engineer, site engineer passing data back and forth, and it's really the same kind of workflows. OK, go ahead.
SHIYA LUO: Hello. Hi, my name is Shiya Luo. I am a software engineer on the Quantum team, and I led this small project called the Escrow. And when it was handed to me there were a few different requirements that we had to fulfill. So one of them is that there are four apps, and they're collaborating with each other, but they don't talk to each other directly. And the data flows through HFDM, which is High-Frequency Data Management-- you guys probably already heard of it earlier today-- with these things that we call contracts. And the big, main features of contracts are version history and access control, and change management, which belongs to version history. So we built this really, really stripped-down version just to demonstrate how this works, but you'll see that it's pretty cool.
So first you start off with what we call a control frame app. It's basically four points. It simulates someone-- like an architect who is doing the design-- but different parts of the building correspond to different projects. And then each of those four points will correspond to points on two apps. One is a cone app and one is a cylinder app, and they are two fabrication processes that take in those four points and build a mesh out of them. So the four points correspond to two of the points each on the cone and cylinder.
And then, after the cone and cylinder apps have finished their design, they will send the mesh over to the aggregate app, which is like Navisworks, and that will aggregate the meshes together. But it doesn't have any of the information that cone and cylinder have, like the points. It just has the meshes, to see if they would clash together. But the trick here is that these four apps don't talk to each other, because in reality, for a lot of these apps and processes, people don't really want to take responsibility for data they are not responsible for. But in real life, what happens is that everyone is responsible for that one file, and you don't really know who changed what in the process.
So we came up with this idea called Escrow, and within the Escrow there are these things we call contracts, and they just look like that. That's just the code for them. And in this app, there are two instances of the same type of contract called the ref line, which the control frame app passes down. They are just the coordinates of two points. And the cone and cylinder apps would each take one of the contracts and then build a mesh on top of that.
And then, after they have designed a mesh, they will pass the mesh down into another contract called the mesh contract, which contains the triangle vertices, normals, and indices-- which is what a mesh is. And then those two contracts get passed down to the aggregate app, which combines the meshes. So the thing is that none of these apps-- for example, aggregate-- has access to the entire data set. So they don't have to take responsibility for all the data in this project.
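The two contract types in the demo can be sketched as plain structures. The field names below are guesses based on the description (two points for a ref line; vertices, normals, and indices for a mesh), not the actual demo schemas; the key property is that the mesh contract carries no trace of the ref-line points it was derived from.

```python
# Illustrative versions of the demo's two contract types.

def ref_line_contract(p1, p2):
    """The control frame hands each fabrication app just two points."""
    return {"type": "refLine", "points": [list(p1), list(p2)]}

def mesh_contract(vertices, normals, indices):
    """What the cone/cylinder apps publish: geometry only, no ref-line
    points, so the aggregate app never sees data it isn't responsible for."""
    assert len(vertices) == len(normals), "one normal per vertex"
    return {"type": "mesh", "vertices": vertices,
            "normals": normals, "indices": indices}

line = ref_line_contract((0, 0, 0), (0, 0, 5))
tri = mesh_contract([[0, 0, 0], [1, 0, 0], [0, 1, 0]],  # one triangle
                    [[0, 0, 1]] * 3,                     # flat normals
                    [0, 1, 2])
```

Clash detection downstream only needs `tri`, which is exactly the decoupling the four-app setup demonstrates.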
So it turns out that this challenge is actually already solved in computer science, by Git. Can I just see a raise of hands of how many people are already familiar with it? OK, so everybody, so I don't have to go through that. So basically you get a project that's called a repository, which is also the term that we keep using in HFDM. And you can branch out to a different feature without interfering with the master branch, and you can continue to do work on the master branch and then in the end merge together when you're satisfied with the change.
And then in Git what we also often have is that we have a local and we have a remote. And remote is really a server that houses all the information that everyone's sharing, but the local doesn't have to be in sync with the remote. But you should be able to very easily sync them up together.
So in this case, local versus remote is the cylinder object on the left and the mesh on the right. So the cylinder app has the material of the cylinder, the two points, and also the mesh. But all the cylinder app wants to upload is the actual mesh, so the remote won't have the two points. It doesn't have any information about the points. So in the applications we built-- the Escrow is based on HFDM, and for speed of development we also have a little data store on the side, which is MongoDB. And there's four apps-- the control frame, cone, cylinder, and an aggregate app.
And a contract. This is a sample of the entire contract. This is not the final spec or anything. It's just our implementation for this. There's a template ID, which references the template-- also user-defined-- that defines how the contract should look. And then we have a unique identifier, which is just the HFDM workspace ID.
And then, also a human-readable name, and the mapping between local and remote, and the people that have access to this data. Either they'll have read and write access, which would be the owner, or they're a subscriber, who can only listen to the changes but not submit changes. And then, there's also validation to see if there's collision or [INAUDIBLE].
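Those fields can be collected into one record to make the shape of a contract concrete. The field and method names below are approximations of what was just described, not the demo's actual implementation.

```python
# Illustrative contract record: template, identity, mapping, and access
# control, mirroring the fields Shiya lists for the demo implementation.

from dataclasses import dataclass, field

@dataclass
class ContractRecord:
    template_id: str                  # user-defined template the data must follow
    workspace_id: str                 # unique id (the HFDM workspace ID in the demo)
    name: str                         # human-readable label
    mapping: dict                     # driving local property -> driven Escrow property
    owners: list = field(default_factory=list)       # read/write access
    subscribers: list = field(default_factory=list)  # listen-only access

    def can_write(self, user):
        return user in self.owners

    def can_read(self, user):
        return user in self.owners or user in self.subscribers

contract = ContractRecord(
    template_id="example:refLine-1.0.0",   # hypothetical template id
    workspace_id="ws-1234",
    name="cone ref line",
    mapping={"localPoints": "escrow.points"},
    owners=["controlFrameApp"],
    subscribers=["coneApp"],
)
```

Here the cone app can listen to ref-line changes but cannot submit them, which is the owner/subscriber split from the talk.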
So let me get on with my demo. So first you log in. We start off in the control frame app, which defines the four points. And in there I can create a new contract, putting in a contract name, and then selecting the template, which can also be user-defined. And you select the mapping. The driving property here is the local point, and the driven property is the points in Escrow, the HFDM property.
I now make a little change on all of the points, and the app listens for what I've changed. And I click on the push button-- that's on the local repository. So the app listens for whether something has changed, and it asks me if I want to push to Escrow. And in this case, I decide that I'm pushing all of the data. So now in the cylinder app, I can pull in the points. I can then change the mesh of the cylinder and push to a cylinder mesh contract that takes in the mesh data.
And then I can go into the cone app and do the same thing. I can pull the cone ref line contract, do some changes, and then push it to the cone mesh contract. And I can also introduce a similar mesh into my cone app and see if they actually collided. And this will be a case where I want to see the changes on the other end but I don't have access to it.
And then this is the aggregate app, which only has the meshes of the cone and cylinder, and it wants to know whether they collided. Now I'm just making another change to show that it reflects back onto the cylinder app. And there's another thing we do called fetch, where we fetch the changes that are in Escrow but don't apply them. Then, if I'm satisfied with them, I pull, and they're reflected on the cone or cylinder.
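The fetch-versus-pull distinction works by analogy with Git: fetch retrieves what's waiting in Escrow without touching your local state, and pull actually applies it. A minimal sketch, with an invented `LocalRepo` class standing in for the real client:

```javascript
// fetch() retrieves pending changes from Escrow without applying them;
// pull() applies what was fetched to the local state. Illustrative only.
class LocalRepo {
  constructor(state) {
    this.state = state;   // applied local data
    this.fetched = null;  // changes retrieved but not yet applied
  }
  fetch(escrow) {
    this.fetched = escrow.changes;  // inspect without applying
    return this.fetched;
  }
  pull() {
    if (this.fetched) {
      Object.assign(this.state, this.fetched);  // now apply
      this.fetched = null;
    }
    return this.state;
  }
}

const repo = new LocalRepo({ radius: 1 });
const escrow = { changes: { radius: 2 } };
repo.fetch(escrow);  // local state is still untouched here
repo.pull();         // changes are applied to local state
```

After `fetch`, an app could display or diff the incoming changes before deciding to accept them, which is exactly the "see it before applying it" workflow in the demo.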
And then the last feature is validation. Here we just did a few preset validations-- simulating something like Navisworks, but on the server-- where you put in a cone mesh contract and a cylinder mesh contract, and if they collide we're going to send a warning. It's either pre-commit or post-commit. For those of you familiar with Git, pre-commit is where it stops the change, and post-commit is where you get a warning but you still push the changes. And then there are also other points where you can hook into the contract.
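The pre-commit versus post-commit distinction can be sketched as two hook stages around a commit: a failed pre-commit check rejects the change, while a failed post-commit check lets it through with a warning. Everything here is invented for illustration, including the toy "collision" test that treats the meshes as circles:

```javascript
// Run pre-commit validators before applying a change (failure blocks it),
// and post-commit validators after (failure only attaches a warning).
function commit(contract, change, validators) {
  for (const v of validators.filter((v) => v.stage === 'pre')) {
    if (!v.check(change)) {
      return { accepted: false, warnings: [v.message] };  // change stopped
    }
  }
  contract.data = { ...contract.data, ...change };        // change applied
  const warnings = validators
    .filter((v) => v.stage === 'post' && !v.check(change))
    .map((v) => v.message);
  return { accepted: true, warnings };
}

// Toy collision check: two circular footprints collide when their
// centers are closer than the sum of their radii.
const noCollision = {
  stage: 'pre',
  message: 'cone and cylinder meshes collide',
  check: (c) => {
    const dx = c.cone.center[0] - c.cylinder.center[0];
    const dy = c.cone.center[1] - c.cylinder.center[1];
    return Math.hypot(dx, dy) >= c.cone.radius + c.cylinder.radius;
  },
};

// Centers 0.5 apart with radii summing to 2: the pre-commit check fires.
const result = commit({ data: {} }, {
  cone: { center: [0, 0], radius: 1 },
  cylinder: { center: [0.5, 0], radius: 1 },
}, [noCollision]);
```

Switching `stage` to `'post'` would let the same change through but flag it, which is the warning-only behavior described above.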
So now I go back to the cone and cylinder apps and I move two of the points too close together. I push the ref line. The cone app is the one that has access to the mesh, so now it's changed. I try to push the mesh into Escrow. Because it's a pre-commit check, it's going to send me a warning saying that you can't do this. And that's the crux of the application. I'm going to hand it back to Jim to show you the rest of the demos.
JIM AWE: So that was kind of abstract with cones and cylinders, but think about it in other terms. We saw the reference lines from the control frame app were controlling where those cones and cylinders ended up. So imagine our contract is back to those structural attachment points for the facade. You could make the reference lines be on all of the floor slabs-- where on the floor slab these things attach. So that's the contract between the two parties, and that's all the data that needs to flow between them. And once you set that up, the data kind of flows freely through there without a lot of extra effort.
Add on top of that validation checks. You put in a bunch of assumptions like: the reference line can't have two points in the same place; they shouldn't be tilted out of elevation. There are all kinds of checks you could put in to make sure that, as data flows through the system and people change things, you're not violating some assumption that you've set up.
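The two example assumptions just mentioned are easy to express as small predicate functions. A minimal sketch, with a made-up tolerance and made-up function names, checking that no two points of a reference line coincide and that all points sit at the same elevation:

```javascript
// Tolerance for comparing coordinates; chosen arbitrarily for illustration.
const EPS = 1e-6;

// No two points of the reference line may be in the same place.
function checkNoCoincidentPoints(points) {
  for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
      const d = Math.hypot(points[i][0] - points[j][0],
                           points[i][1] - points[j][1],
                           points[i][2] - points[j][2]);
      if (d < EPS) return false;
    }
  }
  return true;
}

// The line must not be tilted out of elevation: all z values equal.
function checkLevelElevation(points) {
  return points.every((p) => Math.abs(p[2] - points[0][2]) < EPS);
}
```

Checks like these would run as pre-commit or post-commit validators on the contract, so bad reference lines are caught the moment someone pushes them.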
So once again, we're so used to doing that in software development-- we put asserts, validation tests, and regression testing all over to make sure that bad data doesn't inadvertently get through the system. So we're trying to do the same thing for the process here: make it a repeatable process, so that you take out a lot of the places where you can introduce human error every time you have to remodel something or manually transport data from one place to another.
So we boiled it down to this simple problem just to flesh out the details of how contracts and Escrow work on HFDM. And now we're in the process of scaling it back up to real-world data models and plugging in real components. We've done things like Revit and Dynamo interaction in the abstract, but this formalizes it a little bit more. So you'll start to see, in the coming months, more realistic examples.
I want to do two add-ons here to make sure you're thinking about this in the right way. So you saw this in Farzad's demo. This shows interop between two desktop applications. So Civil 3D on the right with a site mesh and Revit on the left. Normally that requires this big export of data and an import, and you have to manually move the file around. So we just re-implemented that using HFDM, and it turned out to be super simple.
I wish this model had a whole bunch of extra stuff in it so you could see how we only have to export a portion of it, but it kind of looks like we're exporting the whole thing. We're basically bouncing that off of HFDM and into Revit. And it's fairly instantaneous, in that this was actually done by a development team of ours in China, which has notoriously slow internet connections. It's going from a desktop app, bouncing through the US servers where the HFDM servers are, bouncing back to China, and it's fairly instantaneous. So it just shows that when you're moving granular bits of data around rather than large files, the communication is a lot more efficient.
And then this hopefully will be a bit of an eye-opener. I use the term LMV here, and I don't think we use that externally-- so, Forge Viewer. Most of you are familiar with the Forge Viewer. The process of getting data into the Forge Viewer is that you have to send a static file through the Model Derivative service. It creates a package of derivatives, one of which is an SVF file, which is then loaded into the Viewer. That obviously takes a while: you have to save the original file, you have to upload it, it gets translated. It's static.
What if we have this data coming from HFDM? Is there a way to reuse the Viewer? It turns out there is-- you just have to write something that loads it directly from HFDM into the Viewer. This simple example looks like nonsense, because there's a sphere and a site mesh in here, but the sphere came from the standard way of translating the file through the Model Derivative service, while the site mesh was loaded in directly from HFDM. So anything you build on top of the Forge Viewer is still valid. We can just supply the data from a different source.
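The essence of such a custom loader is repackaging granular mesh data into the flat typed arrays a WebGL viewer consumes. A hypothetical sketch under assumptions: the property layout (`positions` as point triples, `indices` as a flat list) is invented here, not the actual HFDM or Forge Viewer format.

```javascript
// Repackage a mesh stored as plain position/index lists (the way a
// granular HFDM-style property might hold it) into the flat typed
// arrays a WebGL-based viewer would consume. Layout is illustrative.
function meshPropertyToBuffers(meshProp) {
  return {
    positions: Float32Array.from(meshProp.positions.flat()), // xyz triples
    indices: Uint32Array.from(meshProp.indices),             // triangle indices
    triangleCount: meshProp.indices.length / 3,
  };
}

// A single triangle, as it might sit in a mesh property.
const meshProp = {
  positions: [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
  indices: [0, 1, 2],
};
const buffers = meshPropertyToBuffers(meshProp);
```

Because the Viewer ultimately just renders buffers like these, it doesn't care whether they arrived via an SVF translation or straight from a live data source.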
And that has all kinds of ramifications. So if you think back to-- I have a contract that's specifying attachment points, and I have a Revit model. I can now compare two versions of the contract in the Viewer and see visually where those structural attachment points moved. I can overlay that contract on top of the Revit model, overlay it on top of the fabricator's model, and make sure everything lines up visually. So there are all kinds of interesting things there.
And then, I shouldn't show you this next slide, but I want to address it. Last year at AU we announced Quantum and then we went dark for a while. And part of the reason is that it started as a project within the AEC group and it's kind of grown into-- this is probably important for the entire Forge platform. So if you think about what's actually AEC specific-- oh, that last build is there-- AEC specific versus just generally the way you stitch together applications, a lot of this stuff needs to become part of the Forge platform.
And one thing you'll notice over here on the right-hand side of the screen is that we really need to come up with some common schemas for low-level pieces of data, like geolocation, units of measure, display mesh. If we don't, then everybody's going to go off and use HFDM, and they're all going to create their own definitions for those things, and we'll be back to the same interoperability problem that we've had for years. If you've experienced IFC and other standardization efforts in the past, it's not quite that hard, because we're really talking about low-level things. But if we can make those consistent across the ecosystem, then it's much easier to have common behaviors.
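To make the units-of-measure case concrete, here's a minimal sketch of what a shared low-level schema buys you. The template ID, field names, and conversion table are all invented for illustration:

```javascript
// If every app agrees on one "length with unit" schema, any app can
// read any other app's values without a custom mapping between them.
const UNIT_TO_METERS = { m: 1, mm: 0.001, ft: 0.3048, in: 0.0254 };

function makeLength(value, unit) {
  if (!(unit in UNIT_TO_METERS)) throw new Error(`unknown unit: ${unit}`);
  return { typeid: 'shared:length-1.0.0', value, unit };  // made-up typeid
}

function toMeters(length) {
  return length.value * UNIT_TO_METERS[length.unit];
}

// Two apps working in different units still interoperate, because both
// values conform to the same shared schema.
const fromRevit = makeLength(10, 'ft');
const fromCivil = makeLength(3.048, 'm');
// toMeters(fromRevit) and toMeters(fromCivil) are both 3.048 m
```

Without the shared `typeid` and unit field, each app would invent its own representation and every pair of apps would need its own translation, which is exactly the interoperability problem described above.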
You might have missed it in the cone and cylinder example when Shiya was doing it, but when she was in the cone app, she brought the display mesh from the cylinder directly into there. Now normally, you'd have to go to a destination to do that coordination. But if we can agree on these low-level data types, any application can pull in any other application's display mesh in order to coordinate around it. So we want to get that right. So we slowed down a little bit. We're trying to take much more of a long range view, and that's why you heard us go dark a little bit last year.
And then the other part, if you look on the right-hand side, is that a contract is another type of data, just like a file is a type of data. It has to belong to a project, it has to have access rights applied to it, it has to have some kind of exposure in a UI, and we already have all that in BIM 360. So we don't want to say, oh, well, if you're using Quantum data you have to have a whole other project and a whole other set of access rights and all that. So we have to integrate with BIM 360.
So long story short, it was more of a targeted idea when we started out last year. It's grown into more of a platform idea, and we're just going to take our time and get this thing right. But we do want to be involved in-- so in other words, you can't walk away and go start using Quantum APIs today because we're not ready, but we do want to start having strategic discussions with people.
So if you do want to talk about, well, how can we plan for this and how can we get involved, there's lots of those discussions to be had. And you really do have to start early, because like I showed, we started with HFDM two years ago and it takes a while to reverse your thinking and start re-architecting things. So the sooner you get started with strategically thinking about it the better.
All right, so wrapping up. Questions to ask yourself. As a developer, you may have been pigeonholed into just the design part of the workflow, because that's all you had access to or that's the only API we provided. You'll hear over and over again from Autodesk that our company strategy is to connect design, make, and use. And we have a big portfolio of products at our disposal; we've just never quite stitched them together yet, but that's what our strategy is. So how can you participate in that bigger landscape of design, make, use instead of just design?
I often talk about how, right now, data has a center of gravity. Revit has a center of gravity, AutoCAD has a center of gravity, Fusion has a center of gravity, so you as a developer have to choose your center of gravity and where you're going to attach. This reverses that a little bit: the center of gravity becomes the data itself and not so much the tool. So that opens up all kinds of new possibilities for where you can put your stuff.
What part of the de-coupled BIM model are you really interested in? So given now that you don't have to absorb the entire model, do you want to focus on the steel structure, on the site, on the HVAC system? It's easier for you to zero in on the part that's really of interest to you. There's also horizontal capabilities that exist throughout the ecosystem. So we think about tools that we have, like rendering, 3D printing. Those are totally horizontal capabilities. Before we had to integrate those into a specific product. Now we can wrap those as a service and we can use them anywhere in the ecosystem. You can do the same thing.
I personally think there are going to be new opportunities around contracts and Escrow. That represents a new, coordinated way that data goes through the system, and I think there's all kinds of functionality you can build around that new clearinghouse for how data moves around. Same thing for CI/CD, the pipeline. There are all kinds of validation checks, analysis checks, and little services you can put in that add value to the system.
And then, like I said earlier, when do you want to start thinking about it? Right now this was the kickoff to get you to start thinking strategically about where this may go and where the opportunities are. And we're happy to have those strategic discussions with you, and then hopefully in the next year we'll be able to give you something a little more concrete to actually play with. But we can certainly start having those strategic discussions now. And with that, I don't know if there's time for questions or not, but-- if not, then come up and ask us afterwards. Oh.
AUDIENCE: [INAUDIBLE]
JIM AWE: Yeah.
AUDIENCE: [INAUDIBLE]
JIM AWE: Yeah, orchestration's a fundamental part of it, so we will definitely provide the orchestration component. And then the expectation is that ourselves and third parties will provide services that you plug into that orchestration. Now there's a tricky part about-- well, then, who deploys the services and who-- yeah, yeah. So that's why it's not so easy to roll this out.
So technically, it's not that hard to do. When we started with HFDM and the orchestration service we could create prototypes really fast, and we got super excited. But to do that at scale and to figure out business models and stuff, it's trickier than it looks. So that's why we're slowing down a little bit.
AUDIENCE: [INAUDIBLE]
JIM AWE: So we've been talking to a lot of people. I don't know if we've talked to Building Smart directly. We're certainly aware of them. I want to re-emphasize that these contracts that we're talking about are very low-level things, like reference line, array of points, outline. They're very, very simple things. So we're not at the level that we had to worry about with things like IFC. What's the definition of a wall? Which is a much, much trickier question.
We may progress to that over time, but there is a ton of exchanges that can happen at real low-level data of just I need an array of 12 points or I need some reference lines.
AUDIENCE: [INAUDIBLE]
JIM AWE: Say again.
AUDIENCE: [INAUDIBLE]
JIM AWE: So we're getting a lot of input from customers themselves. We're trying to replay existing projects that they've done. SF MoMA is an example where they had to figure out what the interface was between the facade and the rest of the building. They already did it themselves. I don't know if Snohetta did exactly this, but customers in general do things like write a plug-in to Revit, dump stuff to an Excel spreadsheet, then write another plug-in that moves it to the next one.
In a lot of cases, they already know the types of data that they're moving around. It's just not very repeatable, and it doesn't take advantage of the larger ecosystem. So we're trying to make that part repeatable. And then eventually, we may get to where two people have to exchange a higher-level definition of what a wall is, but that's further down the road.
AUDIENCE: [INAUDIBLE]
JIM AWE: You know, I have given virtually zero thought to business model stuff. It's going to have to be thought of at some point, but I don't think we're talking about transactions.
AUDIENCE: [INAUDIBLE]
JIM AWE: Huh?
AUDIENCE: [INAUDIBLE]
JIM AWE: Yeah. So one business model-- I'm just totally speculating. You could charge for a transaction, or you could leverage the knowledge of all of those transactions. Google doesn't charge you for a search, but it makes its money indirectly by knowing what you search for. So there are lots of business models to apply here, but we're a long way from me telling you what it's going to be.
AUDIENCE: Is there a way to start with HFDM and then [INAUDIBLE]?
JIM AWE: There is. In fact, that's what we've done internally. So remember that I could use HFDM as a component and just write my application, and we have lots of examples of that internally, where people wrote a new application based on HFDM. That doesn't mean it connects to the rest of the ecosystem in an orderly way. So yes you can.
The Quantum part comes into it if you want to play by the rules of the ecosystem and be able to plug in your service or have your app talk to other apps. I'm not sure what our release schedule is on HFDM as its own component, so we'll have to figure that out, but certainly that would be doable if we can package it up the right way. OK, thank you.
[APPLAUSE]