Description
Key Learnings
- Learn why businesses need an Autodesk Forge Connector to improve business productivity.
- Learn how to build an Autodesk Forge Connector using Autodesk Forge Data Exchange.
- Learn how to make the data exchange secure.
- Learn about conceptualized connected workflows in a cross-platform environment.
Speakers
- Sandip Jadhav: CEO and Co-Founder of CCTech, a certified Autodesk FORGE SYSTEMS INTEGRATOR partner and digital transformation enabler for the AEC and manufacturing industries. CCTech is also a leading cloud platform developer, with platforms such as simulationHub CFD. The company is building a wide range of autonomous CFD apps for the HVAC, building, and valve industries. It specializes in engineering application development for its clients using BIM, machine learning, AI, web apps, and cloud computing technologies, and builds digital twins and engineering configurators through the convergence of REVIT IO, Inventor IO, and various Forge services.
- Nem Kumar: Nem Kumar is Director of Consulting at CCTech and has been doing product development with companies from the manufacturing, oil and gas, and AEC domains. He has vast experience in desktop as well as cloud software development involving CAD, CAM, complex visualization, mathematics, and geometric algorithms. He has been actively working with Autodesk vertical and AEC product teams. His current areas of interest are generative modeling and machine learning.
- Vijay Mali: Vijay is passionate about people and technology and about bringing them together to make the world a better place, a place to fulfill individual dreams. In the role of COO, his vision is to build a people-centric organization with excellent processes and systems. He is on a mission to create an environment of freedom, collaboration, and growth for people, and wants to harness technology to build agile and scalable systems supporting the growth of both people and the organization. In his 15-year career, Vijay has played many technical roles. He has experience providing CFD solutions for many complex problems and has conceptualized many software solutions, including the Pedestrian Comfort Analysis and Control Valve Performer apps developed on Autodesk Forge and simulationHub. Vijay is known for his transformative way of teaching and has trained more than 500 candidates on complex topics like computational fluid dynamics and design optimization. He has delivered talks at various events and engineering colleges about CFD and its use in product design optimization. Vijay holds a master's degree in aerospace engineering from the Indian Institute of Technology (IIT Bombay).
SANDIP JADHAV: Hello, everyone. Welcome to our class, Why and How to Build Autodesk Forge Data Exchange Connectors? I'm Sandip. I'm with my colleague, Nem. We are going to walk through how to build Forge Data Exchange Connectors.
Before diving into the details of the class, let me do a quick introduction of myself. I'm Sandip. I'm the CEO of CCTech. I've been working in the architecture, manufacturing, and engineering space for the last 22 years. I am a developer at heart, and I really like to develop great products and platforms. So this is a great opportunity for me to present Forge Data Exchange Connectors in this class. Nem, would you like to introduce yourself?
NEM KUMAR: Hey, guys, I'm Nem Kumar. I head consulting at CCTech. We have been working in the AEC domain for the past 16 years. And we see exciting years coming ahead. Thank you.
SANDIP JADHAV: Thank you, Nem. So this is the outline of our class. I'm going to talk about building data, or the AEC space, in more detail, because AEC is a space where we have a lot of data to crunch. And I'm going to talk about the changing technology scenarios and how data can help us solve our problems.
I'm also going to talk about the data exchange platform. I'm going to go through the workflows and the data structure model. And I'm also going to talk about how you can build connectors to connect the different systems and the BIM systems.
And at the end, I'm also going to compare Forge Model Derivative with Data Exchange, because they look similar, but they are not. And I'm going to talk about when to use Forge Data Exchange and when to use Forge Model Derivative.
So today, we are living in a hyperconnected world. Wherever we go, we have our devices, we have our laptops, we have our mobile devices, and we are in continuous touch with each other. We are getting a lot of information very quickly.
So in social life, we are updated every second of what's happening in other parts of the world. This has led to our physical life being driven by the digital world. We click a button, and we get a car.
So the question comes to mind: how can we leverage this hyperconnected world to improve our productivity in architecture, in manufacturing, in fabrication, in construction, in all product development? We work today with diverse teams who are in different geographical locations, in different engineering disciplines, and who are using different kinds of software systems. Each of these software systems has a very different representation of data.
So we need a bridge that can help us interpret the data from one software system to another. If we can do that, we can create autonomous workflows. We need workflows which are themselves automated and intelligent. This means we should have a data platform where we can reinterpret the data and contextualize it as per the software and the engineering discipline that we work in.
So I'm going to talk about this for the next 40 to 45 minutes. Let's take the example of a building design. When we start with any building, we work with, really, a broad range of teams and, what I would say, specializations.
In buildings, we have a construction team. We have a structural team. We have an [INAUDIBLE] team. We have aesthetic teams. We have a factory design team. We have electrical teams. All of these teams work with a very specialized focus, but we are developing a single system, that is, the building.
Now, to work with this cross-discipline, cross-domain system, what we have done is build interfaces through APIs. We all use drawings for a very detailed level of fabrication design. We use Excel for our cost calculations. We use Revit for the modeling. We use Inventor for our factory design.
So to connect these, for the last 30 years, we have been building connectors, and we have been building workflows through files, where, what I would say, data is being reinterpreted. Now, when we move the data through a file, there is a loss, because the data you write in a Revit file has to be reinterpreted when it gets opened in Inventor.
So this creates a lot of problems. And I think some customers have said that they spend more than 30% of their time exchanging data and coordinating between the tools. So wouldn't it be great if we could get just the very small portion of data that is being changed? Because when something changes in the system, it is not the whole thing that changes. It's a very, very small change that happens. So the file-based approach is very cumbersome here. If we can transfer only a very small set of data, that will be really great.
The second thing is, when we create connected workflows like this, right now, the only way is to transfer the complete design data to the other software, and then that design data is transferred to yet another software. We are working with different vendors. Those vendors themselves are working with other vendors. So data keeps flowing from one machine to another, one company to another, and one person to another.
This creates a problem with intellectual property. So one question that is always on the minds of owners and customers is, how do we make sure that we protect this data? This is definitely a very big problem that customers face.
And they always ask this: why do I need to share my complete model? Can I share a small subset of data? It's pretty dangerous, also, if it reaches the wrong hands. So this is another pain point that we realize customers are facing.
The third problem is, how do we move the data? If you want to move data, you again need different translators. You need different licenses. This industry is highly regulated by the government, so there are a lot of government norms that you need to satisfy. So to do that, you also need to build a lot of model [INAUDIBLE].
So again, you're moving the data from one place to another. And to do that, there is, again, time and cost involved. So these are the really key problems that we see.
So again, some of these customers have highlighted this: I don't want to keep on doing this again, and again, and again. I don't want to create a hack solution. I want, I would say, a persistent and permanent solution.
So if you look at building design from an abstract level, it's a multi-objective system. We have a client whose expectations are continuously changing, who is going to ask for more and more as time progresses. There are design requirements which we need to satisfy. There is a sustainability objective. You also need to make sure that you are doing the whole project within the cost, within a budget.
There are climate challenges. We need to make sure that we are not harming the world. And also, our design changes with a changing climate. And then, there is always rework, which makes the whole job very, very challenging.
So it's a continuously changing system. We are looking at a system which has multiple objectives, which is dynamically changing, and we need to do something about it. Another interesting thing that happens in the AEC industry is that there are different consumers and different producers of that data.
So architectural data can be consumed by civil and structural teams. Structural data can be consumed in construction. These are the consumers and producers between whom the data moves from one location to another. And it's, again, a continuous process. It's a bidirectional process, not a one-directional process. And all good building designers have to factor this in and, despite these challenges, still produce a system which is able to respond to these changes.
With all this, everyone is saying, OK, let's try to solve this problem with data. So we are now putting up cameras. We are putting up sensors. We are flying drones. So many things are now coming in, and everyone is capturing data.
This itself has created a data deluge. How do we process this data? We have files upon files, with terabytes of data accumulating on our servers, but it's not actionable data. Because if you cannot make decisions off of that data, that data is of no use.
So what is the solution to this problem? How do we get ahead of these challenges? The solution to these challenges is in data engineering.
Data engineering is, in a sense, I would say, not well understood, and also not looked at right. People consider AI and ML to be at a very high level and data engineering at a very basic level. But today's world needs data engineering more than anything else.
And data gathering is going to be pervasive. It is going to be part of architecture. It is going to be part of engineering, manufacturing-- everywhere.
So we need to devise strategies. We need to devise a platform which can help us solve the data engineering problem. And for that, a data exchange platform is definitely a real step ahead.
So let me explain what we mean when we say data. Data is the smallest unit of factual information that can be used as a basis for calculations, reasoning, or discussion. This is, again, a very interesting part. People say, oh, I have data; I have terabytes of data. But if it's a black box of data, you generally cannot make a decision out of it. That data is of no use. We generally need a very, very small piece of data on the basis of which we can make a decision.
For example, you have a door in your building, or windows in your building. You are interested to know whether those doors open inward or outward. That's a very small amount of information. But if, to get that data, you need to download the complete BIM model, then it's useless.
So this is the sort of thing that we are talking about. And this is a very, very important area where all the specialized people in architecture, engineering, and manufacturing will be working. So when we start to talk about data engineering, the first thing that comes to mind is files. Definitely, the file has served us, I think, for thousands of years.
Earlier, it was paper-based data. Then it moved to digital. And there's a lot of data we have been moving in the form of Revit files, AutoCAD files, and Inventor files. It's really good if you have large data with very few changes, and it has served us well. We have been able to create big projects just by working with files.
But when we start to look at making our applications smarter and autonomous, the file starts to become a hindrance. Because when you want to create a workflow which is very responsive, which is highly distributed, what you need is a very small amount of data that is continuously communicated. Because most of the time, in a project, we are making small changes, and those small changes need to be communicated across very quickly.
So this distinction, I think, is one of the foundations on which to build our data platform. I hope that you are able to understand this, and you can dive into this more to understand what data means. One of the concepts that comes up once you start with data is CRUD. That is Create, Read, Update, Delete.
It's not really a new term. It was coined by James Martin in a 1983 book. What we are saying is that we have data, and we should be able to persistently refer to that data.
That data can be in a file, in a database, or anywhere. We should be able to create the data, read the data, update the data, and delete it.
The efficiency of any database depends on how quickly we are able to create the data, how fast we are able to read the data persistently, how easily we are able to update the data, and how cleanly we are able to delete the data. Based on that, you can compare two databases, two data systems that are available to you.
So as we go progressively ahead, we will see this come up in the form of APIs. Any data system always has an API for creating, reading, updating, and deleting data. And the interfaces will change from system to system, as we will see.
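To make the CRUD idea concrete, here is a minimal sketch of a data system exposing those four operations. This is a toy illustration, not a real Forge API; all class and method names are invented for the example.

```python
class InMemoryStore:
    """A toy key-value store exposing Create, Read, Update, Delete."""

    def __init__(self):
        self._records = {}   # record id -> record data
        self._next_id = 1

    def create(self, data):
        """Create: persist a new record and return its id."""
        record_id = self._next_id
        self._next_id += 1
        self._records[record_id] = dict(data)
        return record_id

    def read(self, record_id):
        """Read: fetch a record by its persistent id."""
        return self._records[record_id]

    def update(self, record_id, changes):
        """Update: apply a partial change, not a full rewrite."""
        self._records[record_id].update(changes)

    def delete(self, record_id):
        """Delete: remove the record entirely."""
        del self._records[record_id]


store = InMemoryStore()
door_id = store.create({"element": "door", "opens": "inward"})
store.update(door_id, {"opens": "outward"})
print(store.read(door_id))  # {'element': 'door', 'opens': 'outward'}
store.delete(door_id)
```

Whether the backend is a file, a relational database, or a cloud service, the comparison between data systems comes down to how well these four operations perform.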
One more concept that I would like to introduce before going more into [INAUDIBLE] data is GraphQL. We grew up with Oracle, one of the relational databases. Then that went into SQL, and then NoSQL came, which opened up other possibilities. That was definitely a great way of doing it.
Then, we wrapped those databases with REST APIs to start fetching data from our cloud systems. This helped create the last 10 years of renaissance in the world of cloud computing and connected workflows. But this also became a problem when you need data which you had not thought of earlier.
So you end up calling hundreds of REST APIs to find one particular set of data. With GraphQL, what you can do is choose exactly what you want to query. It's a query-based system. And people are saying that REST will rest peacefully in the past. But I think REST will stay, though GraphQL is definitely going to catch up.
So here, what you can do is specify exactly what you want. This is the Star Wars example, where you ask for a hero, a name, and friends, and it gives you that information. The Fusion Data API from Autodesk today uses GraphQL. We are going to see this more and more, so it would be great if you learn about it, do some small courses on Coursera, and build your expertise, because GraphQL is here to stay. It is going to be an important thing for working with cloud applications.
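The Star Wars query from the slide, written out, looks like the string below. The point is that a GraphQL request is just an HTTP POST whose body states the exact shape of data you want, in one round trip instead of several REST calls. The endpoint URL here is a placeholder, not a real service.

```python
import json

# The query names only the fields the client cares about; the server
# returns exactly that shape and nothing more.
query = """
{
  hero {
    name
    friends {
      name
    }
  }
}
"""

# A GraphQL request is an ordinary HTTP POST carrying the query string.
payload = json.dumps({"query": query})
# e.g. (placeholder endpoint, not a real URL):
# requests.post("https://example.com/graphql", data=payload,
#               headers={"Content-Type": "application/json"})
print(json.loads(payload)["query"].strip().startswith("{"))  # True
```

Compare this with REST, where fetching a hero plus each friend's name might take one call per resource.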
So, Forge Data. This is something that has been worked on for a long time, and it is one of the areas that is, I would say, the holy grail of CAD, of BIM. We have good data, but getting a very small portion of that data is not easy. And I would say this is one of the first systems in the world that can give you small data out of gigabytes of data in an easy, persistent manner.
I think this slide explains pretty well what the Forge Data vision is. What we had was a file-based interconnect between software. We were creating a Revit file or an Inventor file, sharing those files, and then writing translator after translator after translator. And then, we were building our own apps, trying to make sense of this data. Every time we made a change, we were required to work with the files or with the API, and that itself was a challenging task.
What we were missing was that we were always working at the very top level of the software, with Revit or with Inventor. We were working with APIs at the presentation level or business logic level. Now, with Forge Data, we are able to work with the real data structures. We are directly able to communicate at the data structure level. It is like a connection between two human beings at the neural network level.
So if our neural networks get connected, we can communicate very fast. I think Elon Musk is trying to do that for humans. But here, we can see, we are able to do it with CAD applications. That's really great.
Now, we will be able to talk with Revit and AutoCAD at their data structure level. And that opens up an infinite number of possibilities, because once you have that data-structure-level access, you can get the data and interpret it the way you want, at a very micro level of granularity. So this is the real foundation of Forge Data, in which you work at the data level, not at a much higher level of communication.
This is a work in progress. I think it has not yet been completely launched. But we can see something of it in the Forge-- sorry, Fusion Data APIs, as we are able to get the manufacturing data of a product and markup information data.
There are now going to be more cloud information models. There's going to be a cloud information model for the architectural space, a cloud information model for the media space, and a cloud information model for the manufacturing space.
The interesting thing is that what you see in the architectural space is not the same as what you see in the manufacturing space. For example, when you look at a window in the architectural space, it's a block that constitutes the shape of a window. When you look at it in manufacturing, you are looking at a very high level of detail in that window. You need to know a window that can be manufactured. So you're talking about micron-level accuracy of detail, whereas in the architectural space, you are comfortable working with limited accuracy.
So it's the same window. But in the architectural space, it is interpreted one way. In manufacturing, it's interpreted differently. And that is the way it's supposed to be.
The question is, how do they talk with each other? And this is where Forge Data Exchange comes in. It is going to help you interpret the cloud information model for architecture and the cloud information model for manufacturing. And connectors are going to play a very, very important role in doing that.
Well, as you can see, there is a plan to build a lot of new connectors. Already, there are connectors for some software, like Revit and Rhino, and the one we have created for PowerPoint. But the need for building connectors is, again, going to be really large.
And you can even, from the same data, interpret different kinds of information. So two different connectors for the same data are going to be possible going forward, and you might build them. This is one of the areas that is going to be fast-growing and fast to explore.
We have seen what Forge Data is. We have understood that there are going to be different cloud information models, and we are going to get a lot of data from these cloud information models. So let's try to understand what Forge Data Exchange is.
Data Exchange, basically, helps you or enables you to share subsets of data across your design and make applications at scale. So three important points. One is a subset of data. We are not talking about the complete data. If you want to work with the complete data, there are already solutions [INAUDIBLE]. Here, you want to work with a subset of data.
Then, you make that available to other software, other applications. And third, it should work at scale. We are not looking at one or two queries. It will be thousands of queries.
So this is the key part that differentiates Data Exchange from other platforms. It is able to bring your design and make data to other applications. So let's see. This is a good example to understand what we are trying to show.
The current world that we live in is one of files and apps. And these applications, like AutoCAD, have definitely helped us. But we need to move that data to other applications. So what Data Exchange does is bring that data to a single source of truth. That is the cloud information model.
Data Exchange helps you bring that data there. It keeps the data secure. It keeps that data persistent, which is a very, very important task.
For those coming from a developer background, this is your GitHub. Like the GitHub you had for code, we are now talking about one for design, make, and construction data. And this data is very huge, very diverse. We are not looking at C++ files, but at Revit files, AutoCAD files, proprietary files. To be able to do this is really a challenging task, and I'm very glad that Autodesk is able to solve these challenges.
So what we do here is, from an application like Revit, you take a subset of data and pass that data to the cloud information model. Once you have that data in the cloud information model, any of the Forge Data connectors can start consuming it. In this case, Autodesk Revit is the producer of the data, and all these other applications are the consumers of this data.
One good thing that I realize is that it is not restricted so that only Autodesk applications can consume it. Any application in the world can consume it, and anyone can develop that. That's really the democratization of data that I see, and I think that is what is going to drive this platform ahead.
So let's try to understand, basically, what we mean by granular data, or a subset of data. I will go with an example. You have a building design in Revit. What you want to do is share only a small portion of it.
So what you do is create an exchange. In Revit, there is now an option where you can choose a particular view. Only that view will be uploaded to your ACC.
To those you want to invite, you send a message or email saying, this is the data I would like to share with you. So let's say you're working with someone on elevators, someone on railings, or someone on certain artifacts. You can share that particular information.
They can take that data. They might be working, as I said, in mechanical space. They might be using Inventor. They will get that small portion of data.
Again, this is very rich data. It is not just geometric information. It is complete property and metadata information.
So that data needs to go to the people in Inventor. Once you get that data in Inventor, you start to build the railings or the things that you want. Let's say you make some changes. In our case, Revit is our producer. As those changes are made, you can push them to your cloud information model.
Once you push those changes to the cloud information model, the consumer, Inventor, can get those updates. So there is a kind of arrangement where the data itself updates. The data itself is doing a follow-up with you.
So you do that, and it will say, oh, there is a new update, that things have changed. Would you like to change it? You accept it, and you will get that information.
So to do this, there is, what I would say, very detailed information being passed in a graph-based fashion, like we talked about earlier. And that information is then loaded into another connector, which can be used by many other applications, not just Inventor. It can be consumed by Power Automate, a cost engineering application, a simulation application, a structural application. It can be consumed by anyone.
So this is a very interesting thing. This is not one-to-one; this is one-to-many. Here, we are looking at one producer and multiple connectors consuming that data.
So this is the way this data works. You can also take it to applications like [INAUDIBLE] and others where you want to do other design parts, other than Inventor. So multiple parts, [INAUDIBLE].
So let's try to do some abstraction, because we are talking about data engineering, and we need to see how we can scale this for others. For this particular example, what we can observe is that our data is produced in Revit. So we have something called a producer connector, which pushes that data into Autodesk Docs, and then there is a connector at the Inventor end which consumes that data and creates a model in Inventor.
One question that might come to your mind: why don't we directly build a connector between Revit and Inventor, with no intermediary? The thing is, let's say the person who is working in Inventor is not present, or the application is not started. What will happen? Where will that data reside? So that's one of the issues.
So there are multiple challenges in that, and the data needs to be consistently available. You don't want to keep on doing this again, and again, and again. So having a hosting provider is a very, very essential part.
And it is also a controller. It is a place where you keep your trust. This is the data you are giving it, and you are trusting it.
So again, Autodesk is one of the important trusted partners for all these big companies. And that's where Autodesk Docs and Autodesk ACC play a critical role. Once we have that data in Autodesk Docs, you can write multiple connectors.
You can write a connector for Power Automate. You can write a connector for Teams. You can write a connector for PowerBI. Infinite possibilities.
So this is what you can do. But can you do this only for Revit, or only for Inventor? No. Actually, once you look at this, it is possible that, going forward, there could be another application, one which is not an [INAUDIBLE] application. It will still be able to work with a connector and give that data to a hosting provider, something like ACC. And tomorrow, there could be another hosting provider that, I think, Autodesk will come up with for generic kinds of data. And that can be used by another consumer, and so on.
So this is, at a broad level, what is happening. The data is uploaded from one application, and it is consumed by another application. So we saw the workflow. Let's try to see it in action.
So let me play this small video. The quality of the video is not great because I'm trying to capture three screens. One is Revit, in which, as you can see, we are drawing a wall. And this geometry change is being pushed to our ACC server.
Here, you push this geometry into the Data Exchange tool on the ACC server, and it gets reflected here in the second window. As you can see, it's pulling up that information. And then, you will be able to see an update in the Inventor window. Once you see that update in the Inventor window, you can give permission to update your geometry. It will bring that data into Inventor.
So this is the way it works. You have Revit as the producer. You have Inventor as the consumer. And ACC is the broker through which this communication happens.
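The producer/broker/consumer arrangement in the demo can be sketched as follows. The broker (playing the role of Autodesk Docs/ACC) keeps each exchanged subset persistently, so a consumer that was offline can still pick up the latest revision later. All names and data shapes here are illustrative, not the actual Data Exchange API.

```python
class Broker:
    """Holds exchanges persistently; producers push, consumers pull."""

    def __init__(self):
        self._exchanges = {}   # exchange name -> list of revisions

    def push(self, exchange, subset):
        """Producer side: append a new revision of the shared subset."""
        self._exchanges.setdefault(exchange, []).append(subset)

    def latest(self, exchange):
        """Consumer side: fetch the newest revision, if any exists."""
        revisions = self._exchanges.get(exchange, [])
        return revisions[-1] if revisions else None


broker = Broker()

# Producer side (e.g. a Revit connector) pushes only the changed subset.
broker.push("tower-a-railings", {"wall-42": {"height_mm": 1100}})
broker.push("tower-a-railings", {"wall-42": {"height_mm": 1200}})

# Consumer side (e.g. an Inventor connector) pulls whenever it comes online.
print(broker.latest("tower-a-railings"))  # {'wall-42': {'height_mm': 1200}}
```

This is why the intermediary matters: the producer and consumer never need to be online at the same time, and the same exchange can serve any number of consumers.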
So let me go to our next slide. We have been talking about the data, and the one question that comes to mind is how this data is organized. As we know, we are looking at a very diverse set of systems, so there will always be ever-expanding representations and formats.
So here, the data team has used the entity component system (ECS) design pattern to represent the data. There are some important new concepts that come with it. I think the Forge documentation is a good source to learn about them.
So there will be collections, spaces, assets, relationships, snapshots, and revisions. There is an excellent tutorial and excellent information available in the Autodesk documentation. I will not spend too much time on this.
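As a rough sketch of what the ECS pattern buys you: an asset is essentially just an identity, and typed components hang off it, so a consumer can read one component without touching the rest. The component names and shapes below are invented for illustration and are not the actual Data Exchange schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """An entity: identity plus a bag of named components."""
    asset_id: str
    components: dict = field(default_factory=dict)  # component name -> data

@dataclass
class Relationship:
    """A typed link between two assets, itself part of the graph."""
    from_asset: str
    to_asset: str
    kind: str


window = Asset("asset-001")
window.components["design.geometry"] = {"type": "mesh", "triangles": 1240}
window.components["design.property"] = {"opens": "outward", "width_mm": 900}

wall = Asset("asset-002")
contains = Relationship(from_asset="asset-002", to_asset="asset-001",
                        kind="contains")

# A consumer interested only in properties never loads the geometry.
print(window.components["design.property"]["opens"])  # outward
```

This separation of identity from data is what makes granular queries (like the door-opening-direction example earlier) cheap: you fetch one component of one asset, not the whole model.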
And the third and last concept that I would like to cover is security. One of the challenges, or apprehensions, when people share their data is security. In two-legged authorization, you authorize your app to access your data, and once it has that authorization, it can keep accessing the data unless you revoke that authorization.
In three-legged authorization, you have Forge, you have your app, and you also have the end user. The end user has to give explicit consent to share data with your application. So this is really great for making sure that the data you are sharing is shared with your knowledge, and that you're not being exploited by anyone else. This has been really great progress in the security of Data Exchange.
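The first leg of three-legged authorization is redirecting the end user to the authorization server so they can grant consent explicitly; the app only gets a token after that consent. A minimal sketch of building that redirect URL is below. The endpoint path shown is an assumption and may differ by Forge/APS API version, and the client ID and redirect URI are placeholders.

```python
from urllib.parse import urlencode

def build_authorize_url(client_id, redirect_uri, scope="data:read"):
    """Build the user-consent URL for the authorization-code flow."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,    # identifies the app, not the user
        "redirect_uri": redirect_uri,
        "scope": scope,            # request the narrowest scope needed
    }
    # Endpoint path is an assumption; check the current Forge/APS docs.
    return ("https://developer.api.autodesk.com/authentication/v2/authorize?"
            + urlencode(params))

url = build_authorize_url("MY_CLIENT_ID", "https://example.com/callback")
print("response_type=code" in url)  # True
```

After the user consents, the browser is redirected back with a short-lived code, which the app exchanges for an access token; that explicit consent step is what two-legged authorization lacks.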
I'm now going to go into Data Exchange connectors, because that's one of the important objectives of our class. So how do you build Data Exchange connectors? Autodesk has already built some Data Exchange connectors.
There is a connector for Revit. There are already a lot of connectors in progress for [INAUDIBLE], Grasshopper, and others. And there are going to be more, and more, and more. So this is going to be a very expansive space to really [INAUDIBLE] explore for building data connectors.
To build a data connector, there is a set of data connector APIs available. Again, good documentation about this is available on the Forge website. This is the part of the [INAUDIBLE] where you basically read the data. These are the APIs which help you read that data in the data structure format. And once you read that data, you can do something with it.
Again, an example to demonstrate. This is the example we saw earlier, where you can see that all the raw information comes to you in a very structured format. And any change made to that wall will, again, come to you in a very structured format.
So it's very granular data. You can quickly make out what change is happening. And this helps you move toward a more autonomous workflow, because the data is machine-readable, and machines can make sense of it and build smart workflows.
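To illustrate why granular, machine-readable change data enables autonomous workflows: the consumer inspects only what changed and reacts, instead of re-reading a whole model. The payload shape below is invented for illustration and is not the actual Data Exchange format.

```python
def apply_changes(local_model, changes):
    """Apply a list of granular change records to a local model dict."""
    for change in changes:
        op, element_id = change["op"], change["id"]
        if op in ("insert", "modify"):
            # Only the changed element is touched; the rest stays as-is.
            local_model[element_id] = change["attributes"]
        elif op == "remove":
            local_model.pop(element_id, None)
    return local_model


model = {"wall-1": {"height_mm": 3000}}
incoming = [
    {"op": "modify", "id": "wall-1", "attributes": {"height_mm": 3200}},
    {"op": "insert", "id": "door-7", "attributes": {"opens": "outward"}},
]
print(apply_changes(model, incoming))
# {'wall-1': {'height_mm': 3200}, 'door-7': {'opens': 'outward'}}
```

Because each record names its operation and element, downstream tools can react automatically: recost only the modified wall, revalidate only the new door, and ignore everything else.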
We have also been working on building a connector. The first connector I will talk about is the PowerPoint connector that our team built. Why PowerPoint? PowerPoint is used by 500 million people. I'm reaching you through PowerPoint right now.
300 million presentations are made every day. It covers 95% of the time, and the average presentation is four hours. So definitely, this is where business happens; this is where business moves. So it's important that our data also reaches people through PowerPoint.
In PowerPoint, there is an interesting feature, Microsoft [INAUDIBLE], that lets you look at a 3D model within your PowerPoint. This is not a video, so let me just quickly stop it and show you. This is, what I would say, an embedded 3D view.
So this is really great if you want to build an application where you want to demonstrate your design intent. PowerPoint, as I said, is a really great tool for communicating your design intent. You can validate your assumptions with your client and collaborate with vendors. And it's view-only, so it's pretty secure.
The question is, if some change happens in the design, how do we communicate that? You can build a connector; that's what we have built. And this definitely helps you reach more customers and win new clients.
So this is an example where we are using Revit, in which we have built a two-bedroom, hall, kitchen kind of apartment. This design is, again, shared to Data Exchange. Once you share it with Data Exchange, in PowerPoint-- where we have built a connector, a consumer kind of connector-- I'm able to see the view of this apartment.
So this is great: I can show my customer that this is the apartment I'm working on, and this is how your design looks. Let's say there is a manufacturer of railings, and he is working with your customer. So it's important that we create different types of railings and demonstrate them to the customer.
But the process of going from a Revit kind of application to PowerPoint is time-consuming. And you might be sitting at a very small, light machine, a laptop or tablet, doing a demonstration for a customer, while your designer is sitting somewhere else in the back office. So how do we communicate these design changes?
Again, a connector is a good solution. As you can see, the designer has drawn certain railings. Now, what we will do is upload these railings to the ACC server, where they will be created as a new version. Once the new version is created, we will go to PowerPoint and grab those changes.
So it's uploading the data. The data is updated. And now, here you can see the new railing has come in. So it's very persistent, consistent data, and all of this is with no file transfer. I think that's really the key. We are talking about a workflow that works in an autonomous fashion.
Let me go to our next slide. So this is the workflow of our application. You go to Revit, and it connects to your Autodesk Construction Cloud. You take authorization. You build this using webhooks. We already saw a demonstration of that.
The data is being returned in a certain state right now, so we have to convert it into glTF or OBJ. That's another one of the steps. The Autodesk team is working on directly exporting to OBJ files, which will, I think, be faster.
The implementation of it was, again, straightforward. Basically, we take the data and check whether it is the same file or not. If it's not the same, we update the exchange and bring that data down to PowerPoint seamlessly. I will show one last connector and then move toward concluding this class.
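The "same file or not" check described above can be sketched with a simple content fingerprint: hash the fetched export and compare it against the fingerprint stored from the previous sync, pulling new data only when they differ. The function names and the use of SHA-256 are illustrative assumptions, not the connector's actual implementation.

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    """SHA-256 hex digest of the exported model bytes; identical
    exports produce identical fingerprints."""
    return hashlib.sha256(data).hexdigest()

def needs_update(new_bytes: bytes, last_fingerprint) -> bool:
    """Pull fresh data into PowerPoint only when the exchange
    content actually changed since the last sync."""
    return content_fingerprint(new_bytes) != last_fingerprint

# On each sync, compare the freshly fetched export against the
# fingerprint stored from the previous sync.
previous = content_fingerprint(b"model v1 geometry")
unchanged = not needs_update(b"model v1 geometry", previous)  # same content: skip
changed = needs_update(b"model v2 geometry", previous)        # changed: refresh
```

A `None` stored fingerprint (first run) naturally triggers an update, since no hash compares equal to `None`.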
PRESENTER: Here is a quick demo--
SANDIP JADHAV: This is the connector for which we are collaborating with Autodesk. The video was created by Autodesk, and it is playing now.
PRESENTER: Here is a quick demo of our Rhino Read Connector. The Rhino Read Connector enables users to read geometry and properties data from an exchange into Rhino. This will help our customers share a subset of data from their working models with users who are leveraging Rhino. This will enable workflows like sharing an architectural Revit model with a designer, who can use it to leverage Rhino capabilities such as creating complex geometries and easily integrating with other plug-ins for downstream analysis and automation. For this example, our workflow starts with Revit.
[AUDIO OUT]
SANDIP JADHAV: As you can see, the Rhino Read Connector provides a great connection between Revit and Rhino. You not only bring in geometry data, you also bring a lot of metadata, property data, and layer information, and that data is actionable. You can create a really intelligent model out of this exchange.
Let me move to the next slide.
PRESENTER: Here is a quick--
SANDIP JADHAV: So one thing I think you will be wondering about is when to use Data Exchange, because we already have something called the Model Derivative service from [INAUDIBLE], and it is also able to do some of these things. The Model Derivative service is great if you have large data that is very logically and hierarchically organized. But when you want to work with very small data, the Model Derivative service is not a good fit.
And we have seen that if you are working with configurator-type things, Model Derivative does take time. Data Exchange will definitely change that. We will be able to pass on small amounts of data, very small amounts of changes, and it can very quickly take your data from one system to another.
So if you have large data, you definitely go for Model Derivative. If you want to work with a small amount of data, the Data Exchange service is the one we prefer. Basically, Data Exchange is good for a subset of data, a very granular type of data. And if you have a really big model, and it is the complete data that you want, then going with a file-based approach, with Model Derivative, would be a better option.
So after going through this class, you have probably thought, OK, I understood all of this. How do I start? Where do I start?
The first thing: coming to this class is a good start. Educating yourself about the different data platform services is definitely a great start, and it will help you go a long way.
I think many of you come from an architecture, engineering, and construction background. So try to identify the gaps in your current workflows-- the business problems. Find the causes of cost escalation in your projects. That is a good point to start building your data exchange connectors.
You can work with companies like us, system implementation partners, to build these connectors, or you can use your in-house teams if you have full-stack developers and data engineers to build the solutions. Our journey is going to start small. We are starting with a very small subset of things. But believe me, this is going to become the whole thing as we go further.
These are some of the references I would recommend you go through. These documents are updated every month, and a lot more information is coming. I will be happy to connect with you at our booth. We are exhibiting at AU, at Construction booth number 366. My colleagues and I will be there. You can also reach us through our email addresses, and we'll be happy to interact and discuss Forge Data Exchange further. Thank you.