Description
Key Learnings
- Learn how to extract product and quantity information from Revit data using the AEC Data Model API.
- Learn how to extract BOM information from Autodesk Fusion CAD data using the Manufacturing Data Model API.
- Explore the limitations and learn about possible work-arounds to fully automate synchronizations.
Speaker
- Christian Gessner is a co-founder and Head of Research & Innovation at COOLORANGE. In this role, he drives research into cutting-edge technologies that enable customers to effectively automate, implement, and customize Autodesk CAD, PDM, and PLM solutions, ensuring seamless integration with enterprise systems. With over 25 years of experience in full-stack software development, Christian specializes in Autodesk product data and lifecycle management and Microsoft development technologies. Before founding COOLORANGE, he was a member of the data management software engineering team at Autodesk.
CHRISTIAN GESSNER: Hi, and welcome to Spit It Out-- How AEC and the Manufacturing Data Model API Can Feed ERP Systems. My name is Christian Gessner. I'm a co-founder of COOLORANGE, and I've been a passionate software developer for more than 20 years. My team and I research new technologies to advance our software products.
Our software products help customers automate their CAD, PDM, and PLM workflows and integrate their CAD data with ERP and other business-critical systems. In this class, I'll tell you about our journey of using the Autodesk Data Model APIs to implement these ERP integrations.
We're going to start with a short introduction for everyone who doesn't know COOLORANGE yet, and then we'll briefly talk about the data model and granular data before we see the two projects in action. The focus will be the Manufacturing and AEC Data Model APIs, but we will also have a chance to compare them with other APIs from the ERP systems.
And during this class, and especially at the end of it, I also want to share my key learnings and some limitations, along with possible workarounds for them, that I discovered while working with the Data Model APIs. So let's get started. Who is COOLORANGE? COOLORANGE was founded 15 years ago, in 2009, by former Autodesk employees. And since then, we have delivered software tools and implementation services for our customers.
Our focus has been improving common engineering and construction workflows through automation and through connectivity. Our headquarters is in a beautiful valley in Northern Italy, and we have offices in Germany, Spain, Australia, and the United States. We are a 100% committed Autodesk partner: we are Platform Services Certified and are an authorized developer and service provider. And with that, we have a 100% focus on Autodesk customers.
And we are integration experts. In the past decade or so, we have connected and integrated Autodesk software with almost 50 different CAD, PDM, PLM, ERP, MES, and DMS systems. And we offer a broad range of automation and integration software products and services. Out of these many integrations, I want to focus on two today. The first project we will see is a Fusion integration that transfers items and bills of materials to Microsoft Business Central, an ERP system.
And the second project is an integration with Odoo Inventory, where the Windows Task Scheduler performs a quantity takeoff from a Revit file that's stored inside ACC. This data is then used to create a delivery in that ERP system.
Before we go into the details, let's talk about granular data first. When Autodesk announced their industry platforms-- Fusion, Forma, and Flow-- along with their data models, they promised us developers a fileless future. They say we can access granular data without the need to deal with files anymore.
Now, you may think, what's the problem with files? Well, files can be big, as they contain all the information-- even information that we're probably not interested in at the moment. And files-- and this is an even bigger issue-- require the authoring tool: the CAD application is needed to open the file and get out all the information that we need.
Let's take a bill of materials or component structure as an example. When we just need the bill of materials out of a CAD file, out of an assembly, we have to download this file locally-- probably also the files it references-- and then we have to start the CAD application to process the data. Or we can use Autodesk's Design Automation APIs, which are basically doing the same thing.
So you see the problem? Files can be big. To overcome this, Autodesk, with the data models, nowadays separates all the concerns of a file into the geometry, the structure, the features, the properties, the thumbnails, and so on, and saves everything in their industry clouds. And they're not only saving this information in the industry clouds, but also all the relations between those properties, structures, and components.
And this is what they call a "graph," or a "rich, extensible web of interrelated data." Let's use a building or a house as an example. The file would be a Revit file, and if we just want to get some information about the size of a window, we still have to open the entire file. But if we look at the same building as a graph, we can access all this information granularly. And that's the biggest benefit, I think.
We don't have to use the CAD application. We just have to use a simple API to access everything that's inside of the graph and the relations, and, well, that's basically it. Now, all the walls, all the doors and windows, and even the roof, everything can be accessed individually. And not only that, everything is in relation to each other. I hope this picture makes this clear. And all the components that we see here, they have their individual properties.
And Autodesk actually provides us with something that's called GraphQL to query this data. GraphQL is a query language for an API. It's an open source project created by Meta (Facebook). And what I like best about it: it's super easy to learn, and the code is human-friendly to read. Basically, it gives us developers the power to ask for exactly what we need and nothing more. Let me show you an example.
This is a very simplified GraphQL query, but let's say I'm only interested in the windows of this building. I can filter the window elements separately: I can specify this filter and only get back the windows. And I can also say what kind of payload I want to get back from this API. I'm only interested in the name and in the properties height and fire rating, so I can specify exactly what I want to get back. And what the API returns is just the information that we need.
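Under the hood, such a request is just an HTTP POST with the query as a JSON payload. Here is a minimal sketch in PowerShell against the AEC Data Model endpoint; the query shape and filter syntax are simplified for illustration and are not the exact schema, and $token is assumed to be a valid three-legged access token:

```powershell
# Illustrative only: the query shape and filter syntax are simplified and not
# the exact AEC Data Model schema; $token is a valid three-legged access token.
$query = @'
query {
  elements(filter: { query: "property.name.category==Windows" }) {
    results {
      name
      properties(filter: { names: ["Height", "Fire Rating"] }) {
        results { name value }
      }
    }
  }
}
'@

$body = @{ query = $query } | ConvertTo-Json
Invoke-RestMethod -Uri 'https://developer.api.autodesk.com/aec/graphql' `
    -Method Post `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' `
    -Body $body
```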
So if you want to learn more about the Autodesk Data Model, I highly recommend Augusto Goncalves's class, "Access Granular Design Data Using GraphQL in Autodesk Platform Services." This class is available in the Autodesk University online class library. And Augusto did a really good job explaining GraphQL in all its details that are necessary to understand the topic. So this is granular data in a nutshell, and I hope you get the basic idea of what the Autodesk Data Model concept looks like.
Now let me show you how we used this in our projects. I'm going to start with the Manufacturing Data Model project. In this project, we connected Fusion with the ERP system Microsoft Business Central. The customer is using this ERP system, and what they asked us to do was basically to implement an interface in the CAD application, Autodesk Fusion. The data that they want to transfer is basically the bill of materials information-- so the component structure and the quantities of these components-- and also the items along with all their properties, physical properties, and even thumbnails.
Now, the user wants to trigger the transfer of this data to the ERP system on demand, so we implemented two buttons. One is to check the items and the bill of materials structure, and the other one is to actually transfer the data, which creates or updates the items and bills of materials in the ERP system. And when an item is created in the ERP system, the ERP system generates the number for it-- a new part number.
We also write back this information into the Fusion component so that we know about it and the users know about it. And this is how the solution is designed from an API perspective. So what you see in the middle is the app that we created. It's an ASP.NET web app, and it provides the user interface that we use inside of Fusion. And it implements all the web services to talk with the APS APIs and with the Business Central APIs.
Then, on the left-hand side, you can see the Fusion add-in that we needed to create. We use the Authentication API from APS and the Manufacturing Data Model API in V2, which is currently in beta. On the right-hand side, we can see that Microsoft provides us with an OAuth endpoint for authentication, along with an OData API in version 4.
Now, when a user clicks the button inside of Fusion, a browser gets opened that's embedded in Fusion. And this sends a request to our web application. And along with that request, it sends the project ID and the item ID. Now, when the app receives this, it first authenticates with APS Authentication API, and then it uses the Manufacturing Data Model API to read the entire component structure.
And once this is done, it shows the results. While it shows the results, it asynchronously reads the remaining information, such as physical properties and thumbnails. Now the user can click the Check button, which triggers an authentication against the Microsoft OAuth endpoint. What we get back is a bearer token that we use to read the items and the bills of materials using the OData API. And this allows us to compare the data between what's inside the Fusion CAD data and what's inside the ERP system.
And as a result, the user can click the Transfer button, which then synchronizes this information. And again, when an item is created in the ERP system, we also update the properties of the component inside Fusion to persist the ERP number that has been created. I think it's best to see a demo of that, so this is how it looks.
We are inside of Business Central, the Microsoft ERP system, and we searched for "bike frame." Bike frame is the component or the assembly that I created in Fusion. It's not there yet. Also, on the bill of material level, we don't see that there is any bike frame yet. Now let's jump over into Fusion. And we see the button that's been created by the add-in. And when the user clicks this button, we authenticate against the APS-- the Autodesk Platform Services-- and then we have to provide the username and password.
What happens next is that the Data Model API reads all this information, and you can see the structure is shown. In the meantime, the physical properties and the thumbnails are loaded. The Check button becomes available, and we can see details such as the thumbnail. Now, the user can run the check. In our case, nothing has been transferred yet, so it's easy to identify that everything has to be transferred.
A simple click on the Transfer button now takes all the information that we gathered from the Manufacturing Data Model API and sends it over to the ERP system-- to Microsoft Business Central. We get back the number, and this number is stored in the data-- in the file, let me put it like this. That's now the result in Business Central: when we search for bike frame, what we see is a newly created component.
It has a description, it has a thumbnail, and it also has the physical properties-- the weight, the volume. Everything is derived from the Fusion data. We see the same for the bill of materials: we find the bike frame inside our list of bills of materials. We see that the BOM header has metadata, and we see the bill of material lines with the right positions and the right quantities.
Now, if we change something in the CAD data-- let's say we add new components or update properties of existing components-- another check identifies these changes. Here we can see that two components have been added. The description property has been changed on this component, and another description has been changed on this other component. And we can also see that the bill of materials has changed for this particular line and also for the main assembly.
Now, the user can simply click the Transfer button again, and these changes are synced with Business Central, the ERP system. That's pretty cool, isn't it? Because it means the user doesn't have to manually enter data into the ERP system anymore and is left with accurate data, saving a lot of time and avoiding human errors. And that's how the solution looks.
And now I want to talk about how we approached this project and also highlight some of the key factors for success that were very important for this project. First of all, we made heavy use of the existing samples and tutorials. For example, for the Fusion add-in that we needed to create, we found that Fusion ships with a couple of samples, one of which is a browser example that we simply reused.
And let me show you the code that we actually modified in it: we got rid of a couple of code lines and added the following ones. We needed to know where this data lives in Fusion Team, so we needed the data file's ID and the project ID, and this is what we got out of the dataFile object.
And with that, we simply built a URL that we then, in the next step, passed to a browser command input. That's a control that you can add in Fusion, and it leaves you with a new window that hosts a browser. Super simple to create this add-in in Python, by the way.
Then we had to think about how our web application should look, and we found that we could reuse the APS Hubs Browser sample. We downloaded it from GitHub and just removed the actual browser on the left and the viewer that you can see on the right here in the screenshot. And we were left with a web application that already supports the Autodesk authentication. This is where we started.
The next step was to actually investigate the Manufacturing Data Model API to get all the components and all the information that we needed. And luckily, in the APS documentation, we found all the examples that we needed-- from reading the model hierarchy of a design, to getting the thumbnail of a part, to retrieving the physical properties of a component. So everything was there, and we just took that code and used the Manufacturing Data Model Explorer to test and fine-tune these queries.
And to better understand the actual data structures of the Manufacturing Data Model API, we used a tool called Voyager, which is embedded into the Manufacturing Data Model Explorer. And this allowed us to best understand the data structures of the data model, along with all the references and everything. So we were equipped with everything right from the beginning because Autodesk is doing a great job in documenting and providing samples.
But let's have a look at the actual code and how we fine-tuned it. I want to start with the GraphQL queries that we use, and this is actually the main query. What you can see here is that we query for items, and we are particularly interested in the design items. This item has a property called "tipRootComponentVersion." You have to know that these items have different versions, and we want to get just the latest version-- the tip version-- of our main assembly, our bike frame.
This tipRootComponentVersion has occurrences, and there is a property on the occurrences that we can ask for to get all the references-- you could say they are kind of like xrefs-- describing the parent component version along with the child component version. And with that, we can later build the structure.
Also, what we wanted to have is the basic properties of both the root component and all the occurrences. So we used fragments to have a common set of properties that we can use for both the tipRootComponentVersion as well as all our child component versions. That let us define the set of properties that we want to read in a single place and use it in multiple places. That was very, very cool to use.
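To make the shape of this a bit more tangible, here is a schematic sketch of such a query with a fragment, wrapped in a PowerShell here-string the way it later ends up in the code. The type, field, and argument names are paraphrased from the talk and must be checked against the current Manufacturing Data Model (beta) documentation; they may differ:

```powershell
# Schematic reconstruction of the main query described above; exact type,
# field, and argument names must come from the Manufacturing Data Model (beta)
# schema and may differ from what is shown here.
$itemQuery = @'
query GetDesign($hubId: ID!, $itemId: ID!) {
  item(hubId: $hubId, id: $itemId) {
    ... on DesignItem {
      tipRootComponentVersion {
        ...componentProps
        occurrences {
          results {
            parentComponentVersion { ...componentProps }
            childComponentVersion { ...componentProps }
          }
          pagination { cursor }
        }
      }
    }
  }
}

fragment componentProps on ComponentVersion {
  id
  name
  partNumber
}
'@
```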
And here's the query in action. What I want to show here is that the query doesn't return all the occurrences at once. When we run this particular items query, what we get back is the tipRootComponentVersion and also the occurrences-- but, in this case, only 38 of them. So we needed to create a second query that asks for the remaining occurrences.
To run it, we need the version of our root component and also the cursor that comes back from the first query. When we do this, we get the remaining occurrences-- in this case, it's only eight of them. There can be cases where your assembly is larger, you get a lot of occurrences back, and then you get another cursor. In our case, the cursor is null, so we know that we have retrieved everything that we needed.
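The follow-up requests are then just the same call in a loop, feeding the returned cursor back in until it comes back empty. A rough sketch of that loop; Invoke-MfgGraphQuery is a hypothetical helper that POSTs the query plus variables to the Manufacturing Data Model endpoint and returns the parsed response, and the response property path is schematic:

```powershell
# Rough sketch of cursor-based pagination. Invoke-MfgGraphQuery is a
# hypothetical helper that POSTs { query, variables } to the Manufacturing
# Data Model endpoint; the response property path below is schematic.
$allOccurrences = @()
$cursor = $null
do {
    $response = Invoke-MfgGraphQuery -Query $occurrencesQuery -Variables @{
        rootComponentVersionId = $rootComponentVersionId
        cursor                 = $cursor
    }
    $page = $response.data.componentVersion.occurrences
    $allOccurrences += $page.results
    $cursor = $page.pagination.cursor
} while ($cursor)
```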
One thing about the items query is that it asks for a hub ID. And remember, I told you that from Fusion, we only send the project ID along with the item ID. So this is not enough-- we also need the hub ID. According to the documentation, there is a query called Project By Data Management API ID, which allows us to specify the project ID and gives us back a project.
The cool thing here is that this project has a hub property, and this hub has the ID that we need. And because we are already asking for this project, and because we know that at a later point in our program we need to do a mutation-- we need to update properties with the new ERP number, and to update something using a mutation we need the property definition ID-- we use this single query to get not only the hub ID but also the ID of the property definition that we can use later.
And this shows the awesomeness, I would say, of such a granular API like GraphQL. Now we had to go to our C# code. The Data Model Explorer helped us fine-tune the queries; now we needed to embed them in our application. And something that you don't see when you're using the Manufacturing Data Model Explorer is that the payload is JSON, but the actual query is a string. You can also see this if you use your browser's inspector tools or tools like Telerik Fiddler.
The query is a string, so in the C# application that we created, we had to deal with that. We reformatted the line breaks and escaped the quotation marks to fit the query into a string. And from there, everything else was super easy. We used this function in several places, and it cleaned up the payload before we sent it.
Let me compare this with what we found with the Business Central API, the OData API. First of all, I have to say that in order to use OAuth authentication, you have to enable it in your Microsoft Azure portal and get your client ID along with the client secret. And you have to provision your user in Business Central to allow API access.
By the way, you can find how to do this in the handout of this class. Now, once this was done, we get a simple authentication mechanism-- a two-legged authentication-- where we only have to call the access token URL, which contains the tenant ID of our Business Central tenant, along with the client ID, the client secret, and a predefined scope. And that's how you can then access the endpoints later.
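In PowerShell, that token request looks roughly like this; tenant ID, client ID, and client secret are placeholders from the Azure app registration, and the scope shown is the standard Business Central scope:

```powershell
# Two-legged (client credentials) token request against the Microsoft identity
# platform; tenant ID, client ID, and client secret are placeholders from the
# Azure app registration.
$tenantId = '<azure-tenant-id>'
$tokenUrl = "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$bcToken = (Invoke-RestMethod -Uri $tokenUrl -Method Post -Body @{
    grant_type    = 'client_credentials'
    client_id     = '<client-id>'
    client_secret = '<client-secret>'
    scope         = 'https://api.businesscentral.dynamics.com/.default'
}).access_token
```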
Speaking about the endpoints: to understand the data structure with OData, there is something called the OData metadata. This is an endpoint that lists all the entities and all the properties and their behaviors with a single call. You can see I have over 5,000 lines here, and I want to concentrate on the production BOM.
So you can see an entity type called "ProductionBOMs." It has a key field, so we know that the number property is the key of this particular entity. We also see various property definitions here-- a description, a version, the status, and so on. And what's even more interesting is that we see something called a "NavigationProperty." This NavigationProperty is basically an array of subcomponents-- in our case, the BOM rows, or BOM lines, as Microsoft calls them. And knowing that, we could build our calls.
And there's one thing that's very special, and it's kind of similar to what we have in GraphQL. Let's say we want to query the production BOMs-- I want to query the one with number 1001. What I would usually get back is just the bill of materials header with all its information. But we can also tell the API to expand the result and return the BOM lines along with the BOM header.
And then we can also ask for granular data. There is a select statement here that allows us to specify the properties that we want to get back-- in this case, the number, the description, and the unit of measure code. And we can even do this for our expanded objects. So we can select only the properties that we're interested in, and this gives us back a very clean JSON payload with just the data that we need, just like GraphQL. I mean, not as comfortable as GraphQL, but it's possible with OData as well.
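As a sketch of such a call: the base URL, entity names, and property names below are placeholders; take the real ones from your environment's $metadata document shown earlier:

```powershell
# Hypothetical example: $bcBaseUrl is the OData base URL of the Business
# Central environment, and the entity and property names come from its
# $metadata document (they may differ in your tenant).
$uri = "$bcBaseUrl/ProductionBOMs('1001')" +
       '?$select=No,Description,Unit_of_Measure_Code' +
       '&$expand=ProductionBOMLines($select=No,Description,Quantity_per)'

$bom = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $bcToken" }
$bom.ProductionBOMLines | Format-Table No, Description, Quantity_per
```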
And then there's one thing that's not specific to OData but to Business Central, because this is how they implemented it: the way to update your items. We can see here that we want to send a PATCH request to the item cards endpoint to update this item. Along with the body that contains the information that we want to update, we also have to specify an If-Match header. I think this is to ensure that the data we want to update has not been modified by somebody else in the meantime.
And to know the value for this If-Match header, we have to query the item first. You can see here in the payload that it gives us back an OData ETag, and this ETag is the value that we need to put into the If-Match header in order to be able to update anything inside this Business Central ERP system. So there are a lot of differences, but there are also things that help us developers in both the OData and the GraphQL APIs.
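A minimal sketch of that read-then-update round trip, assuming a hypothetical itemCards entity and the token from before:

```powershell
# Read the item first to obtain its current ETag, then send the update with an
# If-Match header so Business Central can detect concurrent modifications.
# URL shape and property names are placeholders; adjust to your environment.
$itemUrl = "$bcBaseUrl/itemCards('BIKE-FRAME')"
$headers = @{ Authorization = "Bearer $bcToken" }

$item = Invoke-RestMethod -Uri $itemUrl -Headers $headers
$etag = $item.'@odata.etag'

$update = @{ Description = 'Bike frame (updated from Fusion)' } | ConvertTo-Json
Invoke-RestMethod -Uri $itemUrl -Method Patch `
    -Headers ($headers + @{ 'If-Match' = $etag }) `
    -ContentType 'application/json' -Body $update
```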
And now the last thing I want to talk about in this project, because I think it was key to implementing it successfully, is the occurrences and the component structure. For this, I created a very simple assembly. We can see the main assembly, and this main assembly consists of 14 screws-- you can see these screws inside of these rails-- a bottom plate that the rails are screwed to, and two linear rail assemblies.
So the linear rail assembly is an assembly that consists of a linear rail and a carriage. And if we ask for this particular assembly and ask for all the occurrences for this assembly, what we get back from the Manufacturing Data Model API is a flat list of occurrences. And an occurrence is basically a link between the parent component and the child component.
So in our case, the main assembly has one screw, it has another screw, and so on. The main assembly has a bottom plate, a linear rail assembly, and another linear rail assembly. And those two linear rail assemblies each have a linear rail and a carriage. And this is what we get back from the API.
Now, the challenge is to build a structure out of that-- and not only a structure, but one that also preserves the quantities and the positions-- so that we can successfully send it to another system or consume it somewhere else. This was key. I mean, the code for that in C# is not rocket science, but you have to be aware that you get back a flat list of occurrences and you somehow need to create a structure out of this information.
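A rough sketch of that step-- here in PowerShell rather than C#, and with made-up property names (parentId, childId, childName) on the flattened occurrence records: group the occurrences by parent, collapse identical children under the same parent into a quantity, and recurse from the root component version.

```powershell
# Sketch: turn the flat parent/child occurrence list into a nested BOM with
# quantities. The property names on $occurrences are illustrative only.
function Build-Bom($parentId, $occurrencesByParent) {
    $children = $occurrencesByParent[$parentId]
    if (-not $children) { return @() }
    # identical children under the same parent collapse into one line with a quantity
    $children | Group-Object childId | ForEach-Object {
        [pscustomobject]@{
            ComponentId = $_.Name
            Name        = $_.Group[0].childName
            Quantity    = $_.Count
            Children    = Build-Bom $_.Name $occurrencesByParent
        }
    }
}

# index the flat list once, then build the tree from the root component version
$occurrencesByParent = @{}
$occurrences | Group-Object parentId | ForEach-Object { $occurrencesByParent[$_.Name] = $_.Group }
$bomTree = Build-Bom $rootComponentVersionId $occurrencesByParent
```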
And that was actually the project approach for the first project, with Fusion and Microsoft Business Central. The other project is about the AEC Data Model. There, we automatically send delivery orders from a Revit file stored in ACC into an ERP system called Odoo. The customer is using Odoo to manage inventories and deliveries, and they are using ACC and the Submittals feature. So they wanted us to analyze their ACC submittals.
And whenever a submittal reaches a specific state-- is approved or closed-- they needed an extraction of all the duct pipes and fittings from the Revit files that are referenced by that submittal. With this information, we needed to create products and deliveries in Odoo. And when creating these products and calculating the right quantities, they wanted us to also consider that these duct pipes come in specific lengths.
So when calculating the quantities of everything that we see here in the screenshot-- and this is almost 2,000 different fittings and pipes-- they wanted us to just use standard lengths. And the solution, from an API perspective, is the following: we decided to use Microsoft PowerShell as the programming language. Why? Because it can be triggered by the Windows Task Scheduler, and it can periodically run and check for any changes.
Now, on the left-hand side, you can see the Autodesk environment, Autodesk Construction Cloud. Of course, we have to use the Authentication API, along with the ACC-- Autodesk Construction Cloud-- API and the Data Management API, to work with the ACC submittals and the references between the submittals and the Revit files. And then we also used the AEC Data Model API to actually get the granular data out of the Revit files.
And on the right-hand side, we see the Odoo JSON-RPC API, which we needed to use to talk to that system. What our workflow does: the Task Scheduler runs, let's say, every five minutes or so, and first authenticates with APS. Then it reads all the submittals and finds those that are released or approved. And for each approved submittal, it finds the referenced Revit files.
Once this information is gathered from the ACC and Data Management APIs, we use the AEC Data Model API to read the granular information about the ducts and the fittings that we need out of these files. When this is done, we authenticate with Odoo, create the products that are not already there, create the delivery information needed to send these products to the construction site, and attach the products with their individual quantities to this delivery.
Once everything is done, we also lock the Revit files to indicate in ACC that this file's information has already been transferred over to the ERP system. Let me show you a quick demo of that. What we can see here is the Inventory module inside Odoo. We see there is currently no open delivery to do, and when we look at the products, we see there are also no products. I removed everything before the demo.
Now, back in ACC, what we see here is that this particular submittal has two referenced files-- two Revit files. And if I open one of these Revit files, I can see that they contain the information that we need-- the flex ducts, the regular ducts, and also the fittings. This is the information that we're looking for.
We also see that these files are not yet locked in ACC. And if somebody now approves this particular submittal, our program recognizes it because it runs every five minutes. Usually, it does this in the background; for demo purposes, I made it visible. So what it does: it authenticates, it uses the AEC Data Model API to get all this granular data, then it computes the necessary lengths, and finally it locks the files and sends everything over to Odoo.
In Odoo, we now see a list of products that have been extracted from the Revit files and created in this ERP system, even with their standard lengths. And we see the delivery. The delivery contains the products with the right lengths-- so with the right lengths of the pipes-- and the demand-- the quantities-- of what needs to be shipped to the construction site.
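Just to make the length calculation concrete: the idea is to sum the modelled lengths per duct type and size and convert the total into a number of standard-length pieces, rounding up. A tiny sketch, with a made-up 3,000 mm standard length and illustrative element properties (the real values come from the AEC Data Model query shown later):

```powershell
# Sketch: convert modelled duct lengths (mm) into a demand of standard-length
# pieces per product. $elements and its property names are illustrative.
$standardLengthMm = 3000

$elements |
    Group-Object { '{0} {1}' -f $_.familyName, $_.size } |
    ForEach-Object {
        $totalMm = ($_.Group | Measure-Object lengthMm -Sum).Sum
        [pscustomobject]@{
            Product       = $_.Name
            TotalLengthMm = $totalMm
            Demand        = [math]::Ceiling($totalMm / $standardLengthMm)
        }
    }
```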
And I think this is a huge benefit if you compare it with digging all this information out of these huge Revit models and counting the length of each individual piece. So for this customer, it's a huge benefit. Let me show you how we approached this project. Just like we did for the Manufacturing Data Model API, we looked for samples and tools that were available. And luckily, we found that the AEC Data Model API tutorial contains exactly the quantity takeoff example that we needed for this project.
Once again, we used the Data Model Explorer. There are three different ones-- one for Data Exchange, one for the Manufacturing Data Model, and one for the AEC Data Model-- and we used the AEC Data Model Explorer to fine-tune the queries that we found in this tutorial. And then, again, Voyager helped us understand the structure of the data that we find with these APIs.
And just like with the Manufacturing Data Model API, I also want to show you some insights into how we used the AEC Data Model API and GraphQL. First of all, we needed to get an element group. An element group is basically a file-- the Revit file is called an "ElementGroup" in the AEC Data Model API. And we just get some information, like the ID of this element group, to use in further calls.
To get this information, what we need is the project ID. In our case, we didn't have a project ID-- we only had the hub name and the project name. So in order to get the project ID, we need to query the projects. Now, the projects query wants us to provide a hub ID. We don't have a hub ID-- we just have a hub name. So basically, it would take a third call to get the ID of the hub by its name.
And this is-- you probably figured it out-- way too complicated and not the right use of GraphQL, because GraphQL allows you to granularly and deeply ask for exactly the properties that you need. So we built a query that starts with the hubs, and we filter the hubs by the hub name. The hub has a projects property, and we filter the projects by the project name.
And the project has an element groups property, which we can then filter by the file URN. And what we get back with a single call-- not with three calls-- is the ID of the element group that we can use further on. And this is how it looks in action. We see the same query here; we're interested in the ID. We have the hub name, the project name, and the file URN-- everything from the ACC APIs-- and what we get back is the ID that we were looking for.
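Schematically, that nested query looks something like this, wrapped in a PowerShell here-string the way it is used in the script; the exact filter argument names should be checked against the current AEC Data Model schema:

```powershell
# Schematic version of the single nested query; exact filter argument names
# should be verified against the current AEC Data Model schema.
$elementGroupQuery = @'
query GetElementGroup($hubName: String!, $projectName: String!, $fileUrn: ID!) {
  hubs(filter: { name: $hubName }) {
    results {
      projects(filter: { name: $projectName }) {
        results {
          elementGroups(filter: { fileUrn: $fileUrn }) {
            results { id name }
          }
        }
      }
    }
  }
}
'@
```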
Now, with this ID, we can ask for all the ducts and the fittings that we have seen in the video. We have the element group ID, which is the ID that we gathered before. We have an element filter that only returns elements with the category ducts, flex ducts, or fittings. And then we have the pagination: the GraphQL APIs are limited to returning only a certain amount of data, and that's why we're using a cursor here.
And as for the properties that we want to receive, we only want the diameter, the length, the size, the element name, and the family name-- these are basically Revit properties. So when we do the first query, we get back the first 100 results. We also get back a cursor-- you can see this here; in the first call, the cursor variable is still null. We then add the returned cursor and run the query once again.
In our case, for the demo that I showed you, we had to do this 20 times. But I think you get the point: you can use these cursors to do pagination and iterate through everything until the cursor is null and you have all the information that you need. Now, once this was understood, what was left was to use these queries in our PowerShell script.
And it's the same problem as in the C# code: the query itself is a string, so we had to reformat the line breaks once again, this time in another language. And then we were able to use the built-in PowerShell cmdlet Invoke-RestMethod to just send our request to the GraphQL API.
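This boils down to a small helper. The hypothetical Invoke-AecGraphQuery below flattens the multi-line query, wraps it together with the variables into the JSON payload, and posts it to the AEC Data Model endpoint; $apsToken is assumed to be a valid three-legged token. The cursor handling around it is then the same loop idea as shown earlier for the Manufacturing Data Model.

```powershell
# Sketch of the helper used for every AEC Data Model call: flatten the
# multi-line query, wrap it with the variables into a JSON payload, and POST
# it with Invoke-RestMethod. Invoke-AecGraphQuery is our own (hypothetical)
# function name; $apsToken is assumed to be a valid three-legged token.
function Invoke-AecGraphQuery {
    param([string]$Query, [hashtable]$Variables = @{})

    # collapse line breaks so the query fits cleanly into a JSON string value
    $flatQuery = ($Query -replace '\s+', ' ').Trim()
    $body = @{ query = $flatQuery; variables = $Variables } | ConvertTo-Json -Depth 10

    Invoke-RestMethod -Uri 'https://developer.api.autodesk.com/aec/graphql' `
        -Method Post `
        -Headers @{ Authorization = "Bearer $apsToken" } `
        -ContentType 'application/json' `
        -Body $body
}
```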
I also want to compare this with the API of the ERP system, which is JSON-RPC. JSON-RPC is a remote procedure call protocol encoded in JSON. It has a single endpoint for authentication, and it wants us to provide the database of our system, the login, and the password. And what's special about it compared to other APIs is that when you call the authentication endpoint, the response gives you back the session ID in a cookie. And this cookie needs to be used.
For further calls, we also have a single endpoint, just like with GraphQL. And in the body, you can specify what kind of information you want-- which table or entity you want to read or even write. And then it allows you to filter.
In this case, I wanted to get all the products, so my filter is ID greater than 0. And like GraphQL and like OData, you can specify the fields that should be returned-- in my case, ID, name, and so on. And last but not least, it also allows some sort of pagination: you can limit the amount of data that's returned. So you see the differences, but you also see some identical concepts.
Once again, we needed to use this in our PowerShell script, so we created functions. And we were afraid at the beginning that dealing with these cookies that we get back would be an issue in PowerShell, but it turned out it's not, because of the Invoke-WebRequest cmdlet in PowerShell. It has a parameter called "SessionVariable" that wants you to provide a name for a variable. This variable is created whenever you call Invoke-WebRequest, and it contains the cookie information.
Now, in the next step, we needed to use the other endpoints to get our products, to create our delivery, and so on. And for that, we used the Invoke-RestMethod cmdlet from PowerShell, because it deals specifically with REST and similar calls. And it allows us to specify a WebSession parameter where we can inject the odooSession variable that contains the cookie with the session ID, which is then used whenever we call Invoke-RestMethod with the right payload.
And all that was left for us was to create a couple of functions specifying the exact body that we need and then call this intermediate function, which uses the cookie information when it sends requests to the API.
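Putting those two cmdlets together, the Odoo calls look roughly like this; the endpoint paths follow Odoo's web session API and may need adjusting for your Odoo version, the second call is a plain search_read on product.product, and the URL, database, and credentials are placeholders:

```powershell
# Sketch of the Odoo JSON-RPC calls. Endpoint paths and payload shape follow
# Odoo's web session API and may differ by version; $odooUrl, $odooDb,
# $odooUser, and $odooPassword are placeholders.
$authBody = @{
    jsonrpc = '2.0'
    method  = 'call'
    params  = @{ db = $odooDb; login = $odooUser; password = $odooPassword }
} | ConvertTo-Json -Depth 5

# Invoke-WebRequest stores the returned session cookie in $odooSession
Invoke-WebRequest -Uri "$odooUrl/web/session/authenticate" -Method Post `
    -ContentType 'application/json' -Body $authBody -SessionVariable odooSession | Out-Null

# subsequent calls reuse the cookie via -WebSession; this reads id and name of
# all products (filter: id > 0), limited to 80 records
$searchBody = @'
{
  "jsonrpc": "2.0",
  "method": "call",
  "params": {
    "model": "product.product",
    "method": "search_read",
    "args": [[["id", ">", 0]], ["id", "name"]],
    "kwargs": { "limit": 80 }
  }
}
'@

$products = Invoke-RestMethod -Uri "$odooUrl/web/dataset/call_kw" -Method Post `
    -ContentType 'application/json' -Body $searchBody -WebSession $odooSession
# the returned records are in $products.result
```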
All right. The biggest problem that we were facing, however, was the PowerShell script and userless authentication. You may know that the Task Scheduler can run on a machine where no user sits in front of it, and we wanted to have this as a service-like application-- however, the AEC Data Model API requires a three-legged authentication. And this is where our tool, powerAPS, comes into play.
Luckily enough, we had built this before for other projects, and we immediately knew that we needed to use it here. It's a PowerShell-based APS authentication utility that supports three-legged authentication, that supports PKCE, that supports two-legged authentication, and that has some sort of an express mode: if you specify your user ID and password in the script code and pass them along, it automatically enters this information in the login box.
And since this tool also has a visible and an invisible mode, we could use it in a service-like way even though we created a three-legged authentication. Let me show you how this works. This is very simple code, and you can see we specify the client ID, the client secret, the callback URL, the scope, the username, and the password. When we run this, the app actually opens, enters everything into the dialog box by itself, and then returns a token that we can use for further calls. And this is because I set the visible parameter.
If I remove this and run it again-- and before I run it again, I have to close the connection first in order to revoke the token. Otherwise, it would just refresh the token in the background. So if I don't have a token anymore and run this again without the visible parameter, it does everything in the background without even showing the window.
And it still gets us back a three-legged bearer token, which we can then use for our further calls inside the script. So this way, we can easily build service-like and server-to-server applications and still use the Autodesk APS APIs that only allow authentication with three-legged tokens. All right. Yeah, that's the token that we got.
And that's about the second project and how we approached it. Finally, I want to talk about some key learnings and limitations that we ran into while working on these two projects. First, I want to say that the Autodesk Data Model APIs are huge. I personally like that, but I needed to learn a lot. I needed to learn how pagination and cursors work, how the filtering works, even advanced filtering. I needed to learn about rate limits and quotas, asynchronous data calls, fragments, directives, aliases, mutations, and so on.
So there is a lot to know about it. And I personally had a lot of key learnings, but I want to focus on three of them. The others are very well documented on the APS documentation sites. And I want to start with authentication because we just talked about it. So we know that when we create an APS application, we can choose between three different application types.
The first is the traditional web app, which is basically a three-legged authentication. The second-- relatively new, as we speak-- is the desktop, mobile, and single-page app, which is a three-legged authentication with PKCE. Basically, this means there is no client secret needed, and you can distribute it to customers' devices. And the third one is the server-to-server app. I never liked it because, for me, it never really served as a server-to-server app-- it's just a basic two-legged authentication.
Most of the endpoints in the APS environment require an x-user-id header when we use two-legged authentication, just so that they know which user is actually asking for the information. And even worse, most of the APIs that we can find in the APS ecosystem don't even support two-legged authentication. The AEC GraphQL endpoint, for example, doesn't allow two-legged; it only supports three-legged authentication.
The Manufacturing Data Model API in its beta version also only supports three-legged authentication. And even ACC APIs, such as the Issues API, require a three-legged authentication, which makes it hard for system-to-system integrations to use them. The good news is that Autodesk is very well aware of that and is currently working on something called an SSA token-- a secure service account-- but it's only in private beta as we speak. If you're interested, reach out to the APS folks and get more info about that.
For me, powerAPS served very well. It's just like a user that's invisible: the code can act on behalf of a user, but the user doesn't have to provide anything in the login box. So that's the first key learning that I had. The second one is the identifiers, or IDs, of the different APIs. And I think that's kind of funny because, overall, it makes it hard to combine the different APIs.
For example, the Data Management API has different IDs for the same objects than the Data Model APIs. The hub in the Data Management API is some sort of a GUID with a "b." in front of it. The same object-- the same hub-- in the Data Model APIs is identified with a URN. Same for the projects.
And then a file version, for instance, in the Data Management API is a URN, whereas the same element group, or file, in the Data Model APIs is just an opaque ID. And even if we don't consider the Data Model APIs but only the traditional APS APIs: if we look at the Data Management API, we see that in front of this project GUID, there is a "b."
But if we want to get the same project through the Autodesk Construction Cloud API, we have to remove the "b." in front of it; otherwise, the ID is invalid. So I think, again, it makes it hard to combine those APIs to a certain extent, and I think Autodesk should know about this. But we developers, once we notice it, find a way to overcome these issues. It's not a big deal-- I just want to mention it here.
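The workaround is a one-liner; for example (made-up GUID):

```powershell
# Example: strip the "b." prefix of a Data Management project ID before using
# it with the ACC API.
$dmProjectId  = 'b.8e9a4a05-6f2b-4d83-9c1e-0a1b2c3d4e5f'
$accProjectId = $dmProjectId -replace '^b\.', ''
# $accProjectId is now '8e9a4a05-6f2b-4d83-9c1e-0a1b2c3d4e5f'
```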
And the last thing, which is also not a big deal, is that the beta APIs are subject to change. Remember, I used version 2 of the Manufacturing Data Model API, which is currently in beta. And on a particular day, the prototype that we built threw an error. It said, "field item argument hubId of type ID is required."
Before that day, we queried an item with a project ID and an item ID. But on that particular day when we received the error, Autodesk had changed the parameters for querying an item to a hub ID and an item ID. And, I mean, they put that statement on their website and in their documentation for a reason: beta APIs are subject to change-- and they do change. I found that out. They're not only subject to change-- they change. So don't use them in production environments. Be aware of that.
All right. And that brings me to my conclusion. Autodesk promised a fileless future, and I can kind of agree with that. We have seen that we have these granular data queries, and we can get data very specifically and very deeply. We have the Data Model Explorers, which allow us to test and fine-tune all these queries. We have the Voyager application, which allows us to understand the data structures of the Data Model APIs.
And we have really great support, and great examples and tutorials, from the Autodesk Platform Services team. So with all that, I'm convinced that the day will come when we have a fileless future, even though I don't think this is going to be tomorrow or the day after tomorrow. I think it will take some time to get all the data inside the industry platforms. But all I can say now is: I see a fileless future for us.
All right. And finally, I want to also tell you that the code samples of the projects that I presented can be found on GitHub, along with some Postman Collections. And I want to invite you to get in touch with us and join the conversation about system integrations and granular data. Now, thank you for watching, and goodbye.