Description
Key learnings
- Learn how powerful data coming from the Autodesk Forge platform is solving today’s problems.
- Hear a customer’s perspective on how data can help sustainability initiatives at design time.
- Hear from one customer using Fusion Data to build efficient bills of materials from design data, simplifying the process.
- Get questions answered about the future of the Autodesk Forge Data platform from Autodesk experts.
Speakers
- Kevin Vandecar: Kevin Vandecar is a Developer Advocate Engineer and also the manager for the Autodesk Platform Services Media & Entertainment and Manufacturing Workgroups. His specialty is 3ds Max software customization and programming areas, including the Autodesk Platform Services Design Automation for 3ds Max service. Most recently he has been working with the Autodesk Platform Services data initiatives, including the Data Exchange API and the Fusion Data API.
- Tobias Hathorn: Tobias Hathorn is a Director of Product Management focused on Data Interoperability at Autodesk, as well as a licensed Architect. His career at Autodesk began developing and designing BIM products like Revit, FormIt and Dynamo. He has recently focused on cloud data workflows while contributing to the Data Exchange initiative. He has presented on AEC domain topics to a variety of audiences at Autodesk University, TechX, BiLT NA, and the Denver-based Revit User Group. His passion is connecting data between apps and the cloud, thus empowering more project personas to contribute to the convergence of designing and making a better world.
- Frode Tørresdal: Frode Tørresdal is head of development of the BIM and Structural Engineering department and sustainability manager at Norconsult Digital. He has been a developer on various BIM and CAD platforms since he started working at the company in 1999. In recent years Frode has worked a lot with Autodesk Platform Services and has also spent some time investigating augmented and virtual reality. Frode is an experienced speaker and has given many presentations at various conferences.
- Nem Kumar: Nem Kumar is director of consulting at CCTech and has been doing product development with companies from the manufacturing, oil & gas, and AEC domains. He has vast experience in desktop as well as cloud software development involving CAD, CAM, complex visualization, mathematics, and geometric algorithms. He has been actively working with Autodesk vertical and AEC product teams. His current areas of interest are generative modeling and machine learning.
- Phil Northcott: Phil Northcott is the CEO of C-Change Labs, co-founder of BuildingTransparency.org, and co-creator of EC3. He is a veteran leader of advanced R&D projects, and leads a team of dedicated software engineers bringing the best of information technology to the fight against climate change, with a core focus on carbon-intensive construction materials. Phil has extensive experience in computer-aided design, optimization, manufacturing, and quality. As CEO of C-Change, he leads the development of the EC3 service, its integrations with the Autodesk ecosystem, and its implementation as a free tool and as the in-house tool of choice for major multinationals.
KEVIN VANDECAR: All right. Welcome everyone. My name is Kevin Vandecar. I'm part of the Developer Advocacy and Support group at Autodesk. Our group basically supports the Forge APIs as well as the desktop APIs. And that's all becoming one platform now.
So just a little bit of logistics to get started. So what we're going to do is we're going to go through some questions and let the panelists give their opinion and their perspective. So each panelist will have about 30 to 60 seconds on commentary. And then I'm going to ask the audience if there's any input. And we'll see if we can get some audience participation as well.
So the speakers, I'm Kevin. And I would like each one of these guys to introduce themselves, our great panelists. Thank you for being here. So let's start with Shelly.
SHELLY MUJTABA: Hi, everyone. Good afternoon. I hope everyone's having a great AU. I'm Shelly Mujtaba. I'm Vice President for Product Data. So everything we've been talking about with data in the cloud, cloud information models, all that is work that my team does in collaboration with the product teams.
KEVIN VANDECAR: OK. Shelly is with Autodesk. So Nem is one of our partners.
NEM KUMAR: My name is Nem Kumar. I work for CCTech. So we convert one-liners to products for companies coming from AEC and other industries. So thanks.
KEVIN VANDECAR: Thanks for being here, Nem. And Tobias is also one of our Autodesk guys.
TOBIAS HATHORN: Yes, I'm at Autodesk here. I used to be an architect in real life, and that was 16 years ago when I joined Autodesk. I've been a part of the Revit development team, and then joined Forge as I went on a data journey. It's in the past tense.
KEVIN VANDECAR: Oh. Candy.
TOBIAS HATHORN: All right, I'll put candy in. But yeah, I'm on Shelly's team. I'm a Director of Data Interoperability. I have a real passion for data and what it can do for workflows.
KEVIN VANDECAR: And Frode is one of our partners.
FRODE TORRESDAL: Yeah, my name is Frode Tørresdal. I work for Norconsult Informasjonssystemer. We are Norway's largest engineering company, and I work in the digitalization part of the Norconsult group. I've been a BIM developer for over 20 years. And I've worked with Forge, that's also past tense, since 2015, when it was all brand new and still called--
KEVIN VANDECAR: The View and Data API, right?
FRODE TORRESDAL: Yeah.
KEVIN VANDECAR: All the way back. And Phil.
PHIL NORTHCOTT: Phil Northcott. I lead C-Change Labs. We're a 100% climate-change-focused cloud software company. We've worked closely with Autodesk for about three years now. Autodesk is also on the board of the nonprofit, BuildingTransparency.org, which holds all the IP for everything that we do.
We offer free tools to help AEC professionals make sustainable decisions. We're able to offer enterprise grade free tools because people, including Autodesk, Microsoft, Amazon, the government of British Columbia, Interface Carpet, and a number of others fund us to do exactly that.
KEVIN VANDECAR: Awesome. Well thank you all for being here today. It's going to be a really great talk. So just you've seen this probably several times this week already, the Safe Harbor. We will probably be talking about some forward looking things. A lot of the topics are a little bit generic and talking about some of our public APIs as well. So after the class, if you do have any questions about what's public and what might be under the Safe Harbor, just feel free to ask me and I'll help give you the guidance.
OK. So the very first thing, and we've already gotten some prize into the pool here, is that you've seen that we've changed our name. So we've gone from Forge past tense to Autodesk Platform Services. And this was announced by Andrew on our main stage and also by Raji during our Forge Developer session.
So what we're going to do here is this little jar here is like our swear jar. And as these guys say Forge, they're going to give you some candy. And at the end, we'll give away the jar with the candy in it.
All right so to start with data, Autodesk has these three strategic goals in mind, granularity, interoperability, and accessibility. And yesterday you saw Ben Cochran talking about this during the developer session. So it's very, very important to us.
So we'll get started with the first concept. A very generic thing is what is data? So it's probably the most generalized term that you'll hear out there, right? We've been talking big data, databases, MongoDB, specific types of data databases, document DB, all these things. But what does data really mean?
So that's kind of the challenge. How do we structure data? How do you store it, and where do you store it? So as an example to get you started to thinking about this is our Model Derivative Service, which we've had for a number of years. It's the core of the platform services.
This is how viewing takes place. You translate a model from a source file into a data bucket. And you basically can view and access the data. Now a lot of the workflows use the viewer directly, but you don't need to do that. It's really actually just a data bucket.
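As a rough sketch of what consuming that bucket looks like: the derivative data comes back as JSON that the consumer parses and organizes itself. The payload below is simplified and hypothetical; the actual Model Derivative response schema is richer, and the field names here are illustrative only.

```python
import json

# Simplified, illustrative element payload -- not the real Model Derivative
# schema; consult the APS documentation for actual response shapes.
payload = """
{
  "objectid": 42,
  "name": "Basic Wall [312]",
  "properties": {
    "Dimensions": {"Length": "10.5 m", "Area": "28.3 m^2"},
    "Materials":  {"Structure": "Concrete"}
  }
}
"""

element = json.loads(payload)

# The key/value pairs are generic; organizing them is up to the consumer.
length = element["properties"]["Dimensions"]["Length"]
print(length)  # 10.5 m
```

The point is exactly what Kevin describes next: the structure is generic key/value pairs, so interpreting and organizing them falls to the consuming application.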
So the way this data is provided is through a JSON structure. And it's basically key and value pairs. And this allows the consumption of the data in a generic way. But it puts the organization and structure of that data onto the consumer. So let's start with what is the term data mean to you. Shelly?
SHELLY MUJTABA: How much time do we have for this answer?
KEVIN VANDECAR: 60 seconds.
SHELLY MUJTABA: OK, 60 seconds. So for me specifically, and a lot of us working on these problems, data means everything: not just what is produced by Autodesk products, but everything that you produce in the service of the projects that you're working on.
Whether the data comes from third parties, whether the data is derived from the formats that we are creating or third parties are creating, or whether it's insights that you've created, or AI and machine learning output created from that data, all of that is essential for you to get your projects done and complete your workflows.
Because our strategy is that you're not here to use our tools. You are here to build the next amazing building. You're here to make the next amazing manufacturing project. Or you're here to make the next amazing movie. So how can we make it easier for you to move this data from one place to another seamlessly?
KEVIN VANDECAR: All right, thank you. Nem, how about you?
NEM KUMAR: So I'm actually someone who saves a lot of money, in a sense. And also, what happens is when I get a toy for my kid, I try to see if it's more, like, the LEGO type. So I see data as a toy which is made from LEGOs. When you unbundle it, there are so many types of toys you can make at home.
So the file was the old style, and with a database, you unbundle it. You can have different uses. That is the way I see data, actually. It has a lot of power.
KEVIN VANDECAR: So building blocks to a solution.
NEM KUMAR: Yeah.
KEVIN VANDECAR: Awesome. Tobias, how about you?
TOBIAS HATHORN: Yeah, so I came from the Revit world. And for me, data has always been the I in BIM, the amount of clarity you can bring from categorization, type-driven data, and the properties that are attached to it. And for a long time, it was always on the design side of the house. And with recent connectors we've been building, we're connecting to Power Automate, which really gets the design data into more business processes.
And that's kind of a shade of data. It's not just design data, but it's also, like, how-is-my-business-doing data, and how do I bring those two together? And just the last point on it: my wife is still an architect. And so when I try to describe what we're working on, it's using Revit and getting it at least into a spreadsheet and all the different things that you can do from then on. So I also like to boil it down to that very simple case. Whatever you want to put in a spreadsheet, that's the data that I care about.
KEVIN VANDECAR: Yeah, and that's a good point. I think spreadsheets have been used to store data since the beginning.
TOBIAS HATHORN: Even before Revit.
KEVIN VANDECAR: Before Revit. Well, that's going way back. OK, so Frode.
FRODE TORRESDAL: In the AEC, where I work, [INAUDIBLE] that's everything. It's not just the BIM model, though that's central for me. It's the geometry that's the data, and also the metadata, but also everything you connect to it. You have documents, you have IoT now coming in. That's also data that's connected to the entire project. So everything is actually data.
KEVIN VANDECAR: So all the outside data feeding into the model as well. Yeah, fantastic. And Phil?
PHIL NORTHCOTT: Well, all that's true, so I'm going to take a different spin on it. It's fundamentally the product of work. It's what we do every day. We generate information.
It's very expensive to generate good data. It's very expensive to have bad data. And it's our goal to connect the information you guys all create about your buildings with as valid information as we can about the sustainability of materials, specifically around climate change.
KEVIN VANDECAR: Great. Sounds great. So anyone in the audience have some input about what their view of data is? Just raise your hand. OK, our first guinea pig. Data sucks.
Oh, I'm so sorry. My first time [INAUDIBLE].
AUDIENCE: As a programmer, it's everything that's not code.
TOBIAS HATHORN: That's true.
KEVIN VANDECAR: That's boiling it down. Awesome. Thank you so much.
OK, so let's move on. So historically, Autodesk has been concerned about data since the beginning. So if we rewind all the way back to the beginning of time, for Autodesk anyways, we'll look at AutoCAD. And how many people know the Julian date in AutoCAD, the Julian number? Anyone here? No one's old enough to know that, probably.
So basically, because we're back in the DOS world, 640k of memory, they had to figure out a way to make date and time stamps very easy and structured as small as possible. So they took a floating point number and basically made that the time stamp. And it's just an example of how important it is to make data structured well and as efficient as possible.
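To make the idea concrete: a Julian-date float packs the day count into the integer part and the time of day into the fraction, so a single number covers both. Here is a minimal sketch of decoding such a value; the epoch constant is the standard Julian date of the Unix epoch, and this is an illustration of the encoding idea rather than AutoCAD's exact internals.

```python
from datetime import datetime, timedelta, timezone

UNIX_EPOCH_JD = 2440587.5  # Julian date of 1970-01-01 00:00 UTC

def julian_to_datetime(jd: float) -> datetime:
    """Decode a Julian-date float (whole days + day fraction) into a UTC datetime."""
    seconds = (jd - UNIX_EPOCH_JD) * 86400.0
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=seconds)

# A half-day fraction past 2440587.5 is noon on 1 January 1970.
print(julian_to_datetime(2440588.0))  # 1970-01-01 12:00:00+00:00
```

One float, no separate date and time fields: exactly the kind of compact, structured representation the 640K-era constraint forced.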
So when we think of data, do you feel that there is any data unique to the technology itself? So Fusion data or Revit data, is that unique to those products? Or are there industry considerations there?
So the follow-up question is, is there data that should be standardized? You can answer whichever part of that you want. Let's start with Nem this time. Oh, I'm sorry.
NEM KUMAR: I personally think that it should not be tied to any specific technology. Data should be irrespective of that. And standardization, I think we should leave to the people who are going to use it, so that when they get the data, they can decide whether they want to standardize it or not. So I would be of that opinion.
KEVIN VANDECAR: OK. Thank you. Tobias?
TOBIAS HATHORN: It's a good one. My brain went right to Rubik's cube and just being able to switch it around and kind of reformat into a different pattern. So for me, I'd rather think about, again, back to the building blocks analogy, how to have pieces that come together to form it a certain way and then be able to reformat it for another purpose.
So I point to some of the maybe GraphQL examples that we have of this presentation layer sitting on top of data that's underneath of it, and the reconfiguration and being able to write queries in a different way to access kind of consistent data underneath of it.
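To illustrate the GraphQL idea Tobias mentions, here is a minimal sketch of posting a query to a GraphQL endpoint. The endpoint URL, query shape, and field names below are placeholders, not the actual Fusion Data API schema; consult the current APS documentation for the real URL, authentication flow, and fields.

```python
import json
import urllib.request

# Placeholder endpoint and schema -- illustrative only.
ENDPOINT = "https://example.com/graphql"
QUERY = """
query HubName($hubId: ID!) {
  hub(id: $hubId) {
    name
  }
}
"""

def build_request(token: str, hub_id: str) -> urllib.request.Request:
    """Build (but do not send) a GraphQL POST request."""
    payload = json.dumps({"query": QUERY, "variables": {"hubId": hub_id}})
    return urllib.request.Request(
        ENDPOINT,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("dummy-token", "hub-123")
print(req.get_header("Content-type"))  # application/json
```

The design point is the one Tobias makes: the query is written by the consumer, so the same underlying data can be reshaped per use case without changing the storage layer.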
KEVIN VANDECAR: OK, great. [INAUDIBLE]
FRODE TORRESDAL: Well, the last question there, should it be standardized? Absolutely yes, because if Autodesk has their way of standardizing data, the rest of the world will not necessarily follow that. And when I think of data in my world, very often you have this graphical object [? and/or ?] data connected to it, either in the model or connected to something external, like a document. So I think if you standardize it, you will find that there's not that much you have to do to make it work for almost all technologies.
KEVIN VANDECAR: So you mentioned, if Autodesk creates it in a certain way, then people may or may not use it. So who creates the standard? That's the trick.
FRODE TORRESDAL: Yeah, that's the tricky one. It has to be some sort of consortium of the biggest players, or something like that, that has to sit down and agree on how the data should be standardized.
KEVIN VANDECAR: Absolutely. I think a great example is IFC being kind of an industry standard format. Yeah, so Phil?
PHIL NORTHCOTT: I feel a bit like a baseball player being asked whether there should be points in the game of baseball. One of the things that's been very successful in the computing industry is the use of open standard APIs. So that's an Application Programming Interface, and what it says is, I am a piece of software. I offer the following services using the following data. And in this data format, this field means exactly this, this field means exactly this, this field means exactly this.
It's extremely important to have open standards for data so that you can move it from place to place without an enormous amount of effort being required simply to reformat it from language A to language B. I'll give you a classic example. Lots of people love to have date fields, and it's very easy to confuse, for example, the day and the month.
In the data world, we use the ISO date time standard, which is absolutely unambiguous, right down to the time zone and everything. And that's how it's all done on the back end. If you want to pass a date between two systems, it must be done that way because there's no way that those two computers are necessarily in the same time zone. Has to be UTC.
So what I would say is there is a whole profession called open standards in computing. We've spent a lot of time doing this. We have not by any stretch of the imagination solved all the problems, but we've made the first 1,000 mistakes. Please don't repeat them.
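Phil's date example is easy to demonstrate. In Python, for instance, a UTC-pinned ISO 8601 string round-trips exactly, with no day-versus-month ambiguity for the receiving system to guess at:

```python
from datetime import datetime, timezone

# An unambiguous timestamp: ISO 8601, pinned to UTC.
ts = datetime(2022, 9, 28, 14, 30, tzinfo=timezone.utc)
wire_format = ts.isoformat()
print(wire_format)  # 2022-09-28T14:30:00+00:00

# The receiving system parses it back without guessing day vs. month.
parsed = datetime.fromisoformat(wire_format)
assert parsed == ts
```

Because the string carries its own offset, the two systems never need to share a time zone, which is exactly the property Phil calls out.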
KEVIN VANDECAR: Awesome, awesome. And Shelly.
SHELLY MUJTABA: Yeah, absolutely. So I always try to look five years ahead, 10 years ahead when we talk about data. And on the journey we have taken, a lot of the decisions that have been made are because we have been tied to files, where the output from all of these different products and services has primarily been files.
I like to think of a future where files become secondary. Files are something that are a byproduct or a result of data that lives in the cloud. So if every element, every Revit door, every window, every AutoCAD design, every line, every point is in the cloud, then the formats themselves become less relevant. It is how well the data is described, how semantically transparent the data descriptions are. And you can generate any format you like from that data because it's rich enough, and it can get richer and richer. So absolutely there should be standards, but these standards now have to evolve to accommodate data that lives in the cloud, not in files.
KEVIN VANDECAR: Absolutely. That sounds great.
PHIL NORTHCOTT: I want to add one more thing-- everything you said, of course, is correct-- and that is, the usual way that an open standard gets going is that somebody has a purpose. They sit down and they write it, but they write it with a view to be used by lots of people. They put it out in the open, and then they make it so it's extensible and people extend it. But usually somebody, some small group of people first put it out there, and the good ones get picked up by a community.
SHELLY MUJTABA: I absolutely agree, and we have actually been doing that. Wherever it makes sense to adopt existing standards, we do. We're not in the business of inventing new standards, because we have other things to do. So if someone has already spent thousands of hours figuring something out, that's where we start.
PHIL NORTHCOTT: Right.
KEVIN VANDECAR: Fantastic. And I think, to put it just kind of in a simple term, so the data is one thing, how it's stored is one thing. And then how it's rendered to the ultimate use is where the standard can really fit well.
PHIL NORTHCOTT: No, standards are for moving data between systems. How you present it to a user is something for the UX designers to figure out.
KEVIN VANDECAR: OK, yeah. Anyone want to contribute to this conversation? Yeah. You saw me with the socks.
NEM KUMAR: So I don't have a direct answer, but I have a somewhat philosophical answer for this question. I think the standardization should be done in such a way that it is not restrictive, but it avoids chaos. That level of standardization should be done.
If we do too much standardization, it will probably become too much of a restriction. We have seen it in the past. So we should do some standardization to avoid the chaos, but not limit the kinds of usage of the data. So that is my answer.
AUDIENCE: Awesome. Thank you so much.
KEVIN VANDECAR: Thank you. OK, one more.
AUDIENCE: [INAUDIBLE] I was going to ask the panel for their thoughts on open standards being one piece of the puzzle, and then the relevance of open reference implementations of those standards perhaps.
KEVIN VANDECAR: Open which, again?
AUDIENCE: Reference implementations--
KEVIN VANDECAR: Oh, yes.
AUDIENCE: --of those standards. Essentially, what's the relative weight that one should put on that second piece of the puzzle perhaps?
SHELLY MUJTABA: I can try to take that one. So every time we adopt an open standard, we will provide a reference implementation when it comes to the cloud-based data. We have been considering a number of different things which we want to look at, such as USD and IFC. So every time we say we support this, you will get a reference implementation. You will get compatibility with these standards. So I can speak for Autodesk, at least, but other vendors may have a different point of view on this.
PHIL NORTHCOTT: Another way I'll say the same thing is, the reference implementation is a wonderful piece of documentation for your API.
TOBIAS HATHORN: Yeah. You mentioned UX just a minute ago. We talk a lot about developer experience inside of Autodesk Platform Services, and it's for that exact case. How do we train people, give people some starting point to pick up and start using the open standards? Yeah.
KEVIN VANDECAR: Awesome. OK, so let's move on. So fast forward to today. Data is still very file based. Shelly touched on that earlier. You look at Microsoft Office or the Google Docs and Sheets, and it's still very kind of containerized based on files.
So in the Autodesk ecosystem, a lot of data is still stored in files, as you know. So you know that we're moving away from this file-based data to bring data to the cloud where it can be consumed and collaborated on. So the APS platform is driving this vision, and we've had two services that really affect data in different ways, Model Derivative and design automation services.
So first of all, how many people know about Model Derivative? OK, great. And design automation? Fantastic. OK, good.
So I want to just ask what the impact of these technologies has been on the data landscape. So Tobias, we'll start with you this time.
TOBIAS HATHORN: Sure. I can talk a little about both. There is a headless Revit, which is a design automation utility, and I've always been impressed by that as a really clear example of the value of Autodesk's platform, taking people who are familiar with Revit and jobs they want to have done, and then being able to offload that to run in the cloud. Brilliant.
When it comes to Model Derivative, that to me is an example of a technology that does something really well and is widely adopted and streamlined, but introduces some pain into customer workflows. And so the new data products that we're working on are in some ways an antidote to that, being more of a durable database that you can update over time and transact deltas on, so you're not having to recompute that derivative.
KEVIN VANDECAR: Awesome.
FRODE TORRESDAL: That's good to hear. We worked a lot with Model Derivative, so we are translating the model every day. So we have used that a lot. And the goal there, obviously, is to get the 3D models in a web browser as well as getting the metadata from the model.
Design automation, like, it's the headless Revit, like Tobias said. I haven't got around to play with it. So it looks very promising. I have things we would like to do, but I have not found time for it yet.
KEVIN VANDECAR: OK. That's cool.
FRODE TORRESDAL: It's still in the future.
KEVIN VANDECAR: Thank you.
PHIL NORTHCOTT: Yeah, the Model Derivative is a classic kind of data object model. We use those all the time. It's a standard practice in software, and a very good one. Yeah, so it works very well.
Design automation is a really-- we also have not found a use case we were able to use it for. So it's, I think, a very good idea, but I think it's done at too coarse a level of granularity.
But one of the key-- I think, when we're thinking about how these things work, lots of little, well self-contained objects that can be passed around are wonderful for machines. Machines love that. Multi-step processes and so on can be built beautifully out of that.
Humans need a much simpler interface, and that's where the file concept, even if not of literal files, it becomes really important because you want to say, what is a self-contained piece of work where everything is at the same point in time, everything is at a certain level of recordkeeping? So the file concept can't go away at the user interface, but it's generally unhelpful at the machine level interface.
KEVIN VANDECAR: Got it.
SHELLY MUJTABA: I want to say something controversial here. So my hope is that both Model Derivative and design automation go away one day because a world where, as you are changing the Revit design, the data is getting updated in the cloud does not need Model Derivative because your data is already represented at an element level in the cloud and available through APIs. And then the world where systems are reacting to that data changing and can compute in the cloud, again, at a granular level then does not require design automation. You should not have to fire up a whole Revit engine to make one edit in a drawing. You should be able to just do it reactively in the cloud. Now--
PHIL NORTHCOTT: You just execute its method.
SHELLY MUJTABA: Yeah, exactly. So then this looks like far future maybe, but this is happening today with Fusion now. With Fusion Cloud Information models, you have design that is getting updated in real time in the cloud, and now we have [? I3 ?] partners, such as [INAUDIBLE], that have built ERP integration with it. So as the CAD model changes in Fusion, it updates the bill of materials in SAP in real time. So that future is not that far away. So that's my hope.
[APPLAUSE]
Me too.
NEM KUMAR: Yeah, I think I cannot add anything else. I can just say one thing. When I talk to clients, they have those one-liners or some kind of things that they want to make. So they would have different systems, whether they're talking to [INAUDIBLE] systems. They would have 3D, and they want to calculate CAD.
If these components of Autodesk Platform Services were not there, it becomes difficult, because you have to have those solutions not on the cloud. And if it is not on the cloud, it doesn't become collaborative. And if it is not collaborative, I think it's very slow and it's kind of tied to things.
So these LEGO blocks which the platform services provide help people like us provide solutions which are more generic in nature, which are pretty good and faster to develop also. So thanks for [? us ?] for developing those.
SHELLY MUJTABA: Yeah. Kevin, can I just add one more thing? So I just want to walk everyone through the journey we are taking in the data world around cloud information models, and I'll use Fusion as an example. So we started with Fusion. The first step was to unlock all that data and put it in the cloud, make it available through APIs at a granular level. So now you can start to read the data, you can get events so you can react to changes asynchronously, and do downstream workflows.
The next step for us was, can you now add data back into the CAD model--data that does not impact the design itself, but additional data, such as an SAP ID, or cost information, or sustainability information? Can you add that back into the design? So that's something we are working on now.
What comes next is, can you actually manipulate the CAD model now externally? So that is the evolution we'll see in all of our industry, and that's what we are going towards. So again, hopefully this goes away in a few years.
KEVIN VANDECAR: Fantastic.
SHELLY MUJTABA: Yeah.
KEVIN VANDECAR: We have a fan over here.
SHELLY MUJTABA: Do you want to add something?
AUDIENCE: Yeah, sure. Is that OK?
KEVIN VANDECAR: Yeah of course. You get the hat.
AUDIENCE: Oh, thank you. Love hats. We've been playing with this granular data kind of a GitHub thing. It's called Speckle Systems. I don't know if you guys have heard of it.
PHIL NORTHCOTT: Yeah.
AUDIENCE: And it does that very thing. The file format that they use is a JSON file, which is great because it has a tree format hierarchy. So if at any point a new standard needs to be added, it can be done that way, in a file format that's built for web design and stuff like that. Works great.
They've started adding these things that we're talking about of, hey, wait for updates. So if there's an update to a Revit file that you've shared with somebody else, and they go ahead and they move something, that then gets pushed back up to the cloud. And as soon as that push happens, it alerts you on your end. And you can set it to automatically update your model, so you could have real time collaboration with people through the cloud because you're not pushing big Revit files up.
And I was talking to my coworker today. We just made the jump to BIM Collaborate Pro from Navisworks this year. We're having a hard time getting subs to use it, and I think it's because of the publishing from Revit up into BIM 360. And then waiting for Clash to be run again takes so long because these files are huge.
But if you can push granular data directly from Revit as you're modeling it, you could have Clash run, like, within seconds, and all of a sudden you have a product that is better than Navisworks because it's faster and it's collaborative. So anyway, that's my word vomit.
SHELLY MUJTABA: And that's exactly where we are going. We will start with Publish, then we'll get into Sync. So with every Sync, you have a cloud data model. Yep.
KEVIN VANDECAR: Great, great input there. OK. So now we're going to move into some of the future initiatives, and it's already happening now. So we introduced the Data Exchange and Fusion Data features this year. So Data Exchange is now a feature that is available in Revit, and it's coming to other products.
The Data Exchange API is currently in beta, so the APIs are available as well. And for Fusion Data, the read APIs are already available, and Fusion is writing this data today. So both are available now. These products provide the data authoring capabilities seamlessly, without the customer really needing to understand the implementation. They just see the benefits. So again, from an API perspective, it's fully exposed.
So let's start with Data Exchange. So how many people have used Data Exchange? We've got a couple people. So just a quick word about what it is, basically allows you to publish a subset of your model to the Data Exchange that can then be shared with other consumers.
So it allows you to do things like, let's say you have a Revit model. You want to put out a bid for painting interior walls, so you would basically just create a view of those walls and only the walls that you want to publish. You put the data into the Data Exchange, and then you can share that with the consumers.
And they don't have to see the entire model. They don't have to see that the model is continuing to be developed. You can give them a version of that subset of data and let them do their estimating and send you back the data that's interesting.
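The wall-painting example above can be sketched as a simple filter over element records. The record fields below are hypothetical stand-ins for whatever the source model actually provides; the point is only that an exchange carries a deliberate subset, not the whole model.

```python
# Hypothetical element records, as they might arrive from a model-data API.
elements = [
    {"id": "e1", "category": "Walls", "name": "Interior Wall A"},
    {"id": "e2", "category": "Doors", "name": "Door 12"},
    {"id": "e3", "category": "Walls", "name": "Interior Wall B"},
]

# The "view" for the painting bid: only the walls, nothing else.
exchange_subset = [e for e in elements if e["category"] == "Walls"]
print([e["id"] for e in exchange_subset])  # ['e1', 'e3']
```

The consumer of the exchange never sees the doors, and never sees that the full model is still evolving; they work against a versioned snapshot of just the walls.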
So the question here is, how does Data Exchange solve problems today? And I think I'm at Frode now.
FRODE TORRESDAL: Yes. So I have actually tried Data Exchange. I've been part of the vanguard team there, and I have a class about that later today. What we have done so far is-- as you said, you take just a part of the model that you expose out, and you can use that to integrate with other applications without using file exports.
A typical workflow for us would be to export an IFC file into another application, do sustainability calculations there, and then, perhaps manually or through our add-in, add the results back into the Revit model. But by using Data Exchange, you can do that seamlessly, without doing any of the boring, tedious work that just generates errors and takes a long, long time. So I think that is the easiest thing to achieve with the Data Exchange. There are probably a lot of things you can do, but I think that is the most-- yeah.
KEVIN VANDECAR: Definitely. Yeah, we're just getting started with the possibilities, I think. So Phil?
PHIL NORTHCOTT: Well, I think the biggest thing it does is, for relatively simple data movement tasks, it lets people who aren't coders do the coding, which is very convenient, because you want to democratize as much of this as possible so people can do it immediately and without a lot of overhead. We almost exclusively go straight to the APIs, because that's our profession. But if that wasn't my profession-- tools like Zapier, for example, have pioneered this approach, and I think it's really important to put the power in the hands of the people at the point of design.
KEVIN VANDECAR: Absolutely. Great. Shelly.
SHELLY MUJTABA: Yeah. So Data Exchange solves a couple of problems. One is a short-term problem, which over time will go away: how do you take subsets of data and represent them in the cloud? When we have cloud information models for all of our industries, this problem tends to go away, because now all the data is in the cloud. But there's another problem that still remains: how do you take subsets of data, whether from a file-based source system or a cloud-based source system, and exchange them with another party? It may be across organizational boundaries, it might be across product boundaries, but you don't want to send the entire data set to folks. And how do you do this in a gated fashion, so that there are defined points at which you make that exchange, and there's a complete audit trail of those exchanges?
So that's the long-term problem, which Data Exchange continues to solve. And so you see how you can take subsets of data from Revit today and send them to Power Automate. That feeds into a BI dashboard that might integrate with SAP. You can take data from Rhino and bring it into Revit, or Revit to Rhino, but you can also take Rhino data now to Inventor.
So the interesting thing about Data Exchange is, because it's cloud based, all the data goes to a central cloud location. Every time you add a connector, whether we build it or you build it through the connector toolkit, which we'll be releasing soon, it has an n-squared effect. So when we added the Rhino connector, it enabled the Rhino-to-Revit path, but it also enabled the Rhino-to-Inventor path. So every connector that's added adds to the whole ecosystem.
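Shelly's n-squared point can be sketched in a few lines: with every connector talking through the central cloud exchange, n connectors yield n × (n − 1) directed source-to-target paths, so each new connector opens a path to and from every existing one. The product names below are just the ones mentioned in the discussion.

```python
# Minimal sketch of the "n squared" effect: every connector talks to the
# central exchange, so adding one connector opens a path to and from
# every connector already in the ecosystem.
from itertools import permutations

def exchange_paths(connectors):
    """All directed source->target paths through the central exchange."""
    return [(src, dst) for src, dst in permutations(connectors, 2)]

hub = ["Revit", "Inventor"]
print(len(exchange_paths(hub)))   # 2 paths: Revit<->Inventor

hub.append("Rhino")               # one new connector...
print(len(exchange_paths(hub)))   # ...6 paths: n*(n-1) with n=3
```

Adding a fourth connector would jump the count to 12, which is why the ecosystem grows faster than the connector list itself.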
KEVIN VANDECAR: Fantastic.
SHELLY MUJTABA: Yeah.
KEVIN VANDECAR: Nem, I know you guys have worked with it.
NEM KUMAR: Yeah, and I think it's been repeated probably [INAUDIBLE]. But again, I'll say that whenever we talk to clients, they will be working with different systems. And when they're working with different systems, as a developer, what I now avoid is creating plugins.
Now I just need to make a REST API call. It just connects to the [INAUDIBLE] and then we get the data. So it saves a lot of time to connect systems. Beyond the other points people added, I think this is one very powerful thing, because you do not work on just one system, as simple as that. You always-- either it's power behind [INAUDIBLE] going towards very complex software, the whole workflow involves a lot of systems and software. So definitely, for me, that's the best value that it provides, actually.
KEVIN VANDECAR: Fantastic. And Tobias, this is your baby.
TOBIAS HATHORN: Yeah, I have a lot to say on this, so I'm glad I get to go last. There's different classes of workflows, geometry workflows that you might want to reference between different applications. Rhino and Revit, perfect example. Curtain wall from one place going against floors in another place so that you can coordinate different things.
Then there's the business style of workflows, like Power Automate enables you to do. So some of that rich BIM data in Revit can trigger all types of downstream workflows from pre-construction, estimation, notifications, like the gentleman was saying about Speckle. There's just a lot that you can start to do now with different people that aren't Revit experts. So I really love your answer about enabling the low coders or anybody to do anything.
The way data exchanges work in Revit today is, if you can set up a view in Revit, then you can filter down to a meaningful subset of the model that you want to give to somebody else. And that model is structured geometrically, but you have custom parameters that are attached to that geometry, and that custom parameter is actually what you want to use. And then tools like Power Automate enable citizen developers like ourselves to just sit down and bang out a quick workflow. Connect six or seven different things together, and you've got a spreadsheet that updates the Power BI dashboard that also sends a notification to your stakeholders that it's time to go take a look at that.
And we've seen tools like Dynamo really take off because they allow people to script workflows right on top of Revit. And I think data exchanges are going to do that same sort of thing, except connecting into a bunch more products than Dynamo does.
KEVIN VANDECAR: Yeah, fantastic. I think it kind of reinforces our API first methodology as well, where we introduced the APIs. At the same time, we kind of introduce the first connector, and we want the ecosystem to grow. We want more connectors. We want more products to be contributing to this.
And from a customer consumption perspective, like you say, they don't even need to know what's underneath the covers. It's about exchanging data. And that's where it's very powerful.
TOBIAS HATHORN: I have to add a postscript. Shelly was talking about the n-squared situation, but I hear about a ton of workflows that are, I'm going to go from Revit to the cloud and then back to Revit, either to a collaborator in the same office or to the structural engineer who's going to read it into Revit at their office. And the same is true for Rhino. Because Data Exchange connectors exist, now you can have Rhino work sharing, so a lot of possibilities open up.
KEVIN VANDECAR: Absolutely. I mean, it could be as simple as just exchanging a section of the building in the design phase so that you're not having to work on the giant model at the same time. Iterations are easier, that sort of thing.
TOBIAS HATHORN: Yeah.
SHELLY MUJTABA: I just want to add one thing to this. Now, it changes the workflows for our customers, all of you. Something we've been talking to customers about is, how does this world change when Rhino and Revit don't have to exist on the same desktop? There are actually two different people who could be working on this simultaneously. What does that workflow look like?
So there's also a business engineering aspect to this: as this data gets into the cloud, things will not remain the same in your businesses. And so something to think about is, how does that future workflow look when different parties can look at this data at the same time and act on it?
KEVIN VANDECAR: Awesome.
SHELLY MUJTABA: Yeah.
KEVIN VANDECAR: Any contributions from the audience? Yeah.
AUDIENCE: I think it's an interesting thing to talk about exchanging data just how you like and in an ad hoc fashion. But in a design project, that can lead to absolute anarchy and chaos. I'd like to see something like ISO 19650 brought into this conversation and actually having controlled exchanges of information, rather than just chaos.
TOBIAS HATHORN: Yeah. It's a great call out. There's kind of two workflows right now. One is a plugin-based workflow, so enabling ad hoc exchanges directly from the app, very low latency.
But then there's another exchange pattern that's hooked up through Autodesk Docs. And so on a Revit file publish, as you're committing to the source record, source code, then it's triggering exchanges to update as well. So you have a lot more control over what gets updated when.
PHIL NORTHCOTT: And I can chime in here. The problem you're talking about is, how do you work in a collaborative environment without stepping on each other's designs and without people being concerned that their work is going to be damaged in some way that they don't control. It's all about trust.
And I'm going to call back to what somebody said about Git. Git has established a way to do this, where you check out your own copies and then push back, and there are people who are in charge of doing the merge. That workflow has come through very painful experience over the past 35 years or so. It's about the sixth or eighth version control system that I'm personally familiar with.
It's not simple, but every one of the structures in it is written in blood. So I would say, if you're looking for a model to follow, that's where to start your inspiration.
KEVIN VANDECAR: Awesome. So let's continue on with Data Exchange a little bit. So we're bringing new features online. So from a developer perspective, the connector SDK is going to include the ability to work with geometry, and we're also looking at GraphQL to put on top of it to make it easier to consume data. So it's already moving forward very quickly.
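To make the GraphQL direction Kevin mentions concrete, here is a hedged sketch of what a client request against such an API might look like. The endpoint URL, query shape, and field names (`exchange`, `items`, `properties`) are illustrative assumptions, not the actual Data Exchange schema; the sketch only assembles the request rather than sending it.

```python
# Hypothetical sketch of querying a data exchange with GraphQL.
# The endpoint, query shape, and field names are illustrative
# assumptions, NOT the real Data Exchange API schema.
import json

EXCHANGE_QUERY = """
query GetExchangeItems($exchangeId: ID!) {
  exchange(id: $exchangeId) {
    name
    items {
      id
      properties { name value }
    }
  }
}
"""

def build_request(exchange_id, token):
    """Assemble the pieces a GraphQL client would POST as one HTTP call."""
    return {
        "url": "https://developer.api.example.com/graphql",  # placeholder URL
        "headers": {"Authorization": f"Bearer {token}",
                    "Content-Type": "application/json"},
        "body": json.dumps({"query": EXCHANGE_QUERY,
                            "variables": {"exchangeId": exchange_id}}),
    }

req = build_request("exchange-123", "YOUR_TOKEN")
payload = json.loads(req["body"])
print(payload["variables"]["exchangeId"])  # exchange-123
```

The appeal of GraphQL here is exactly what the panel describes: a consumer asks for only the subset of fields it needs, instead of downloading and parsing a whole model.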
So what types of data and product connectivity do we or partners need to provide with Data Exchange? So this is a little bit more of a customer question, but I'd like to just see what you guys think to start with real quick. So it's your turn, right?
PHIL NORTHCOTT: I'm going to pass. I don't know enough about the subject.
KEVIN VANDECAR: OK. What do you--
SHELLY MUJTABA: Want me to go?
KEVIN VANDECAR: Yeah.
SHELLY MUJTABA: OK. I saw an amazing PowerPoint demo, actually, which connected Revit models into PowerPoint, which actually Nem and his company worked on. So the sky's the limit here.
You know, that's part of the proposition here, is we don't know what workflows our customers can imagine. We don't know what the future looks like. Can we provide all the tools that you need to be able to build your own connectors?
We didn't imagine that somebody would connect this to PowerPoint. We did imagine that somebody will connect something like this to SAP. But again, we probably would not be able to do it, but this is why we're providing all the tools for this.
KEVIN VANDECAR: Fantastic. And Nem, you're the genius behind the PowerPoint.
NEM KUMAR: So for what type of data, I would say we work more on the simulation side, so the metadata is very important. And the tessellation, or the triangulation of the geometry, is also an important one.
So if I had to prioritize, I think these two should probably be the prioritized ones, because this would enable the cloud transformation from the current structure that we have. If you're able to see models in the cloud, if you're able to see data on them, then these two would be the priority ones.
And if I had to say which software-- probably Inventor, [? the other one ?] actually, then SolidWorks or Navisworks. If we make connectors for that kind of software, that would be more useful, or the priority, probably.
KEVIN VANDECAR: Fantastic. Yeah, and that's one of the points is that we're providing the SDK so that there can be a SolidWorks connector. It's not just an Autodesk ecosystem. Tobias?
TOBIAS HATHORN: I love the Power Automate connector, because Power Automate is the platform that already has connectors to a number of other systems. So that would be my-- you mentioned Zapier. That'd be another one. Just hook this into other tools that are connected to other platforms, so data can go to places that are accessible already.
The other side is social community and democratization of design information. I saw a presentation from AWS where they were talking about how to spin up a web app that's served with data coming out of a data exchange, and how that can be done using the AWS stack. And it can serve this amazing number of people who don't have access to, or literacy in, the design tools that we use. So that could get a lot more of the community involved in projects that are going on, help them vote on what they see and what they'd like to see, and even adjust parameters and see how that might affect the design. To me, that's really exciting, that a lot more people can participate in social urbanism.
KEVIN VANDECAR: Cool.
FRODE TORRESDAL: I think the connector SDK will be very important, because everyone has to write their own to get this to really work. And one example would be structural analysis. There, very often the model is in Revit, and you might use Robot from Autodesk, so probably you will have a connector to [? Robot ?] at some time.
But also, I know, because I work quite a bit with this, that you will run checks in other software, not calculate everything on a bridge or a building in one program. So we calculate in several software packages, and then we will need to have this connector. And it will really ease that workflow, because in these cases, very often they remodel the entire model in a new software package that doesn't export an IFC or anything. Everything is made once more, and that is quite time consuming.
KEVIN VANDECAR: Yeah, started over. Any comments from the audience? Connectors you would like to see, data workflows falling out of this? Yeah, go ahead.
AUDIENCE: OK. So I might be talking a lot about it, because after a very long time, I got fascinated with something that's in front of me. So I work at the same company Nem works at. So the PowerPoint demo came out of an internal hackathon that we ran.
And there were other applications. They did not make it to the stage, because our CEO Sandeep said, this is the one I want to demo. So when we started the hackathon, people started asking me, what exactly is this Data Exchange, and so on and so forth.
So I was talking to them, and I said, for a very long time, people were saying data is oil. I never saw data as oil, because I cannot touch that oil.
So the way I explained to them what exactly this Data Exchange is: this whole technology is actually converting that data into something like oil. Now you have to build connectors. You have to build, literally, pipelines that take the oil to a particular engine, and then you can actually do something with that engine. You can pump water out with the engine, or you can run your vehicle with it.
So Data Exchange is actually taking that data and converting it into something usable, or maybe, metaphorically, the oil. So it depends on how many pipelines you can connect and what kind of engines you can build to consume that data and produce something out of it. And then they got excited and said, OK, now PowerPoint is the engine, and I'll connect a pipeline and do something with it. So that is--
KEVIN VANDECAR: The data flow itself. And there was comment over there.
AUDIENCE: Sorry [INAUDIBLE].
One thing that could be really exciting for my company-- we're an MEP firm-- is, we say, OK, there's going to be this RTU on the roof. We're going to call it RTU1. We have to send that out to the rep to help us size it.
So if we used Data Exchange and just sent that RTU out-- say we go with the Trane model. If Trane makes a connector, we could be like, hey, we updated the geometry on this thing. Or we changed where the hood intake is, so now you need to re-verify where the exhaust fan is, because now it might be too close or whatever.
And then further down, there could be an IoT thingy on the RTU. And it'd be like, all right, it shipped, and whatever. And the whole time the model is-- there might be a swear jar for saying "the single source of truth," but it would help reinforce the Revit model as being that central thing, or maybe in the cloud or whatever. But it would all kind of be in one spot, all the way through the closeout doc. A serial number would get assigned to it, and then we would just have RTUs solved, one less thing.
KEVIN VANDECAR: Yeah. Thank you. That's a great segue, actually, to the next topic. So we're going to go into cloud information models next. Move back out of the way here.
And so we introduced the very first information model, called Fusion Data. First of all, this reflects the new terminology for our industry labels around the data in these industries: Fusion being manufacturing, Forma being AEC, and Flow being media and entertainment.
So Fusion's a little bit ahead at this point, because they're already writing their information model. So when you're authoring a Fusion 360 model, you're actually creating that central information model. That data is going to the cloud, and it's going in real time. So what you are authoring is being saved automatically, kind of like a Word document is saved automatically to the cloud.
So that data is going there in real time and it also becomes then the source of truth, because that data, when Fusion restarts and reloads that model, it's reading that data back off the same information model. So if there's an API application or other applications that are accessing that data, it's the closest to the truth of the design.
So what does real time and source of truth mean to you? He said, it's a swear jar topic, so I'd like to hear what that means. And also, in the context of our first information model, how does this solve problems today? We're back to Shelly, I think, right?
SHELLY MUJTABA: Yeah. I'll just go back to the same idea that it changes linear workflows to be radial workflows. So multiple disciplines, multiple organizations can collaborate on the same data simultaneously. Multiple systems can react to it.
And the sum of the parts becomes larger than the specific pieces, because now you can bring in not just our products, but third parties. And you can build your own integrations that you haven't been able to do before. Previously, your options were to write a plugin for Fusion or write a plugin for Revit. Now you can build a cloud-based microservice that can look at this data and do interesting things, connect it further out.
So it dramatically expands the ecosystem, but it also very radically changes how work gets done in the future. So that's kind of where things are heading.
KEVIN VANDECAR: OK, great. Nem?
NEM KUMAR: No one wants stale data. In real time, we should be able to see what is happening in production, what kind of manufacturing is happening, the counts and everything. So real-time information is very important. And it should be in only one location-- it should not be at various locations.
I'll give an example. Everybody uses Google Docs or OneDrive, and you're able to edit them online. Now, when you're editing, you're able to work collaboratively with other colleagues. So if a colleague changes something, you're able to see it appear instantaneously on your screen.
So a very small delta of information travels very quickly. Just imagine if this had a delay of one day, or even of hours. It would be so bad. You could not make decisions. So I think it has that kind of importance.
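Nem's point about small deltas can be sketched as follows: rather than re-sending a whole model on every edit, a client computes the difference between two snapshots and transmits only that. The property names and the flat key-value model below are purely illustrative, not any real exchange format.

```python
# Minimal sketch of delta-based sync: send only what changed between
# two snapshots, not the whole model. The flat key-value "model" and
# the property names are hypothetical, for illustration only.
def delta(old, new):
    """Keys changed or added in `new` relative to `old`, plus deletions."""
    changed = {k: v for k, v in new.items() if old.get(k) != v}
    removed = [k for k in old if k not in new]
    return {"changed": changed, "removed": removed}

before = {"wall_height": 3.0, "wall_paint": "white", "floor": "oak"}
after  = {"wall_height": 3.2, "wall_paint": "white"}

print(delta(before, after))
# {'changed': {'wall_height': 3.2}, 'removed': ['floor']}
```

Shipping that one-key delta instead of the full model is what makes near-instant collaborative editing feasible over ordinary network links.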
KEVIN VANDECAR: Fantastic. Tobias?
TOBIAS HATHORN: We've touched on a lot of analytical, analysis-based workflows. And I think having a source of truth that's always in the cloud, always on and always up to date, enables a lot of microservices around it that constantly give you feedback loops about how your design is behaving. So the climate examples, the carbon examples that you have been working on, I think these become a lot more plausible and a lot less expensive, because you're not maintaining a design model and an analysis model whose latest revisions are a week apart.
So I'm just really excited about these kinds of real-time agents, microservices that are informing design feedback in real time. So you know you've just made a really bad design move right away, rather than two weeks later, when the project is now too expensive.
KEVIN VANDECAR: Yeah, so immediate feedback. Yes. Frode.
FRODE TORRESDAL: So yeah, real-time, in my head, that's like Google Docs, which was mentioned here. If I write, and another person in another part of the world is writing in the same document, you see it. But in the AEC industry, that doesn't necessarily work that well. You have to have something more like Git, where you push at a certain time to get the source [INAUDIBLE] you don't want to see all the experimenting from your colleague who doesn't really know how to use Revit, [? the ?] way the walls keep popping up and down.
So Fusion, I haven't worked with that, I'm just envious when I see how far they have come.
KEVIN VANDECAR: Fantastic.
SHELLY MUJTABA: We're getting there. We're getting there on AEC [? too. ?]
FRODE TORRESDAL: Yeah, great.
KEVIN VANDECAR: Phil?
PHIL NORTHCOTT: I'm a bit at risk here, because I'm geeky enough to know that it's always a copy. There's a copy in your video card that's putting it on your screen. There's a copy in memory of the thing that's cached in the cloud.
So what I will say, maybe more usefully, is that real-time updating refers to working on a stream of updates that are happening over time. One thing I'm going to address here is, when we are setting up these cloud flows, we have to realize that while we're presenting a real-time experience to the user, in fact, time is what it is, and packets get lost in transfer, machines go down, disks fail-- all these things happen. So when building a platform like this, it's really important to instrument it strongly and to practice what we call good data hygiene. So, for example, when you're making a change, either all of it happens or none of it happens, so that it can be replayed again.
So I guess what I'll say is there's a very great deal of testing that needs to go on to make this work well. And you have to really think about time and failure. That said, with the right model for the right people-- and his comment about a Git-type structure for high stakes things like structural design-- this can be extremely powerful.
KEVIN VANDECAR: Awesome. Awesome. OK, so we're getting towards the end. Anyone have comments on this? [? Sorry. ?] You already got [INAUDIBLE].
AUDIENCE: Yeah, this gets a bit nerdy, but there used to be the high-frequency data management system in Forge, which worked on operational transform, which is what Google Docs uses. And that wasn't high bandwidth enough, and now there's been a lot of research into Conflict-free Replicated Data Types.
KEVIN VANDECAR: CRDTs.
AUDIENCE: CRDTs. Is there any thought about revisiting that idea? Because that's actually the solution to what we're really talking about here, which is, if two people are working on the same Revit model and somebody deletes a big chunk of something the other person has been working on, then the only way to really resolve that isn't a Git-type resolve. It's actually something much higher level, like a CRDT system.
SHELLY MUJTABA: Absolutely. So yeah, now I can get a little bit more geeky. We are actually collaborating with Microsoft on the Fluid project. So you may be aware of Microsoft's Fluid Framework, which is this open source effort to build CRDTs. They started with Office-based use cases.
Now, as all of you know, Office documents are way different from a construction document, a Revit file, or manufacturing data. So we have been working very closely with them, and we have actually made significant open source contributions to that project.
However, when we look at the lifecycle, or the maturity cycle, of where we are today and where we are going, as I mentioned before, we are going to first make all this data available in read-only mode. So conflict doesn't happen at that point in time. Next, we are going to make data available in a way that lets you add metadata without impacting the design, so there's a low chance of conflict. We can manage that.
It's the last stage, when you're actually manipulating the design, where it gets hard, and by that time we will be working on a lot of back-end technology with Microsoft, obviously. But there are also a lot of experience-related topics, like, how do you do conflict resolution? On the server side, how do you do reliable causal ordering and such?
And on the client side, what does it look like when you do this kind of non-ordered operation? So these things will converge as we move forward in one direction, making the data available, democratizing the data. By that time, we will have figured out how we work on the client side.
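The CRDT idea the audience raised can be illustrated with the simplest example of the family, a grow-only counter: each replica increments only its own slot, and merging takes the elementwise maximum, so any two replicas converge to the same value regardless of the order in which updates arrive. This is a textbook sketch, not how Fluid or any Autodesk service actually stores data.

```python
# A grow-only counter (G-Counter), the simplest CRDT. Each replica
# increments only its own slot; merge takes the elementwise max, so
# replicas converge no matter the order or repetition of merges.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> local increment total

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Elementwise max is commutative, associative, and idempotent,
        # which is what makes the merge conflict-free.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("alice"), GCounter("bob")
a.increment(3)           # alice edits offline
b.increment(2)           # bob edits concurrently
a.merge(b); b.merge(a)   # sync in either order
print(a.value(), b.value())  # 5 5
```

Real design-data CRDTs are far richer (sets, sequences, trees with deletion), but the same merge properties are what let two people edit without a central lock.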
KEVIN VANDECAR: All right, great. So first of all, I want to thank all of our panelists. We're almost out of time.
I want to remind-- excuse me. I want to remind you about the names for these new industry information models. And if you're interested in providing additional feedback, there are some workshops still going on. There's one on Thursday. You can scan the QR code, and it will take you to a place where you can register and provide feedback.
I really appreciate you joining today. I hope it was an informative and interesting session. There are some Forge-to-Platform-Services lenticular stickers on the chair by the back door, if you'd like a sticker. And thank you so much for your time and presence today.
[APPLAUSE]