AU Class

Up and Running with Fusion Data API


Description

The Fusion Data API unifies the way you can access data stored on Autodesk Forge, model hierarchy information, and properties of Autodesk Fusion 360 models. This API is based on GraphQL, an industry-standard technology that makes it easy to discover the properties and relationship information that the Fusion Data API provides.

Key Learnings

  • Discover GraphQL.
  • Learn about the benefits of GraphQL compared to RESTful APIs.
  • Learn about the features of the Fusion Data API.
  • Discover where to find further help on the Fusion Data API.

Speaker

  • Adam Nagy
    Adam Nagy joined Autodesk back in 2005, and he has been providing programming support, consulting, training, and evangelism to external developers. He started his career in Budapest working for a civil engineering CAD software company. He then worked for Autodesk in Prague for 3 years, and he now lives in South England, United Kingdom. Twitter: @AdamTheNagy
Transcript

PRESENTER: Would you like to be able to access Fusion model and lifecycle information without having to install Fusion 360? Then the Fusion Data API is for you. I'm Adam Nagy, a Forge developer advocate, and this is the topic I'm going to cover. I will be making some forward-looking statements, so here is our usual safe harbor statement slide that you can read later on in your own time.

The Fusion Data API provides manufacturing design data in the cloud with granular and easy access. Currently it only enables read access to data, but we are working on letting you modify existing data and also extend it with your own custom data.

We want to provide a cloud-based source of truth for all valuable product design and manufacturing data. It will not matter whether the data is authored by an Autodesk application or a third party application. There is no need for desktop authoring applications, like Fusion 360 or Autodesk Inventor, to access it.

In another presentation, you might have seen how our partner, Bommer, is using this API to synchronize data between multiple systems and can even use it to provide notifications to users when a new milestone version of a model is available.

Fusion Data is one part of the global cloud information model we are working on. This API focuses on the manufacturing side of things and provides the following. Access to model properties, like part number and description. I'm not talking about the new model properties API. That is available in Autodesk Construction Cloud. This is about properties coming from the Fusion 360 model.

The API lets you traverse the hierarchy of the model, including both internal and external subcomponents, and access lifecycle properties that are in sync with what's available inside Fusion 360 Manage, our lifecycle product.

Thumbnails, even for internal components, unlike with the Model Derivative service, where you could only get thumbnails for a file, not for an internal component. Then eventing, in other words, getting notified about certain events, like creation of a milestone version. We call it eventing to differentiate it from the well-known Forge web hooks API that you might be already familiar with.

And beyond all the above, it also exposes all the data management API functionality related to getting the list of hubs, projects, folders, files, and versions a given user has access to. So you could create an app only using Fusion Data API that lets the user navigate all the files they have access to on Forge and then provide model and lifecycle information about the selected model.

All the properties and hierarchy information are stored in parallel with the geometric information. So far, shown as option A, if you wanted to get information out of the model, you had to get it translated first, and only then did you have access to model properties and hierarchy information. But now with Fusion Data, which is shown as option B, you can access such data as soon as the file is saved and the new version is created, without having to wait for any translation to take place.

And here you can see how we can request, for example, the description property of a model, which is just simply "description" at the moment. Then we can go into Fusion 360 to modify that property. Let's change it to, for example, simply "new description". So we are inside the Properties panel, and we just quickly change it to "new description". Then, of course, close the dialog and save to create a new version.

And as soon as the new version is available, when we go back to the GraphQL API, and then execute the query again, straight away we have access to the latest version of those properties.

Internally we have an API that interacts directly with the graph database where all the data is stored. However, its structure is a bit complex, and you would need to understand all the relationships between the nodes in order to use that API.

So we decided to create a wrapper on top of that API using GraphQL and this can drastically reduce the number of requests you need to send to us. Instead of you having to call our REST endpoints multiple times, all those calls can be done internally by the GraphQL layer.

So what is this GraphQL technology that we are using? It is becoming a popular way to expose and let users query data in a nice structured way. Don't let the name mislead you. This technology is not specific to graph databases. Its name simply means that the data is made accessible through a graph-like query language.

The data could be coming from anywhere: a REST API, an external service, a database, or anything else. It was created by Facebook back in 2012, and they publicly released it in 2015, so it's quite mature by now.

And so many companies have been using it already, including its creator Facebook (now Meta) of course, as well as Twitter, PayPal, Pinterest, and many others. And now Autodesk is using it as well for the Fusion Data API, our first ever public GraphQL API.

Just like REST APIs, GraphQL also provides CRUD access; in other words, the ability to create, read, update, and delete elements. In GraphQL, all read operations are queries, and everything that modifies the data, the create, update, and delete operations, are mutations.
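As a generic illustration of that split (the type and field names here are made up, not from the Fusion Data API), a read and a write could look like this:

```js
// Generic GraphQL examples; the schema (books, addBook) is invented purely
// for illustration. Reads are queries, writes are mutations.
const readBooks = `
  query {
    books {
      title
      author
    }
  }
`;

const addBook = `
  mutation {
    addBook(title: "My New Book") {
      id
    }
  }
`;
```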

Its website has great documentation: nice graphics to help explain the concepts, tutorials, and support for lots of programming languages. And the banner says it all: this technology enables you to describe the data you want to expose, let users make queries on that data, and provide them with exactly what they ask for.

They even provide very easy to follow tutorials to show you how you could expose your own data through GraphQL. For example, using JavaScript. Our JavaScript samples tend to use Node.js and Express, and they have a sample for that too. One great thing about the GraphQL component is that it also enables you to provide a very nice graphical user interface for users to play with and explore the data you expose through GraphQL. You will see that in a second.

We just need to open a folder, in our case an empty one, where we want to create the sample project. Then we can create a basic Node.js project using npm init, accepting all the default options apart from the name of the entry file of our project, where we use server.js to stay consistent with the GraphQL tutorial.

Once that's done, we can follow the steps from the web page to add the necessary components. So we just run npm install to install all the components, which will be added to the node_modules folder. And then we can create the server.js file and fill it with the sample code from the website.

And save the changes, of course.
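For reference, the server.js from the graphql.org Express tutorial looks roughly like this; it is reproduced here from memory, so check the tutorial page for the exact, current version:

```js
// server.js - minimal GraphQL server, closely following the graphql.org
// Express tutorial (uses the express, express-graphql, and graphql packages
// installed in the previous step).
const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// Schema with a single query called "hello".
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root object provides a resolver for each query.
const root = {
  hello: () => 'Hello world!',
};

const app = express();
app.use(
  '/graphql',
  graphqlHTTP({
    schema: schema,
    rootValue: root,
    graphiql: true, // serves the GraphiQL UI in the browser
  })
);
app.listen(4000, () =>
  console.log('GraphQL server running at http://localhost:4000/graphql')
);
```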

Once we are done with that, you can use node server.js in order to start the application.

And when you open up the application in the browser, you will see the GraphiQL UI that I was talking to you about, where in the Documentation Explorer panel we can find the query we implemented, called hello, which simply returns hello world.

So let's go to the editor and quickly type in the query that we created. As you can see, while we are typing, IntelliSense helps us find what's available, and once we execute the query, we can see the result, which says hello world.

We can make the sample a bit more interesting by providing data types, additional queries, and mutations that let us create and modify data. Let's say you wanted to use a database; the data could be coming from anywhere, an SQL database or any other place. To keep things really simple, we just store the data in a variable named users.

At the start we'll have two users, both with a name and ID, and people will be able to add to that. In order to expose the data from the users variable, we need to declare things in the schema. So we define the User type, which will have a name and ID, provide a query that returns the list of users we have in the database, and also a mutation that lets people add a new user.

And then the root object will contain the implementation of all the queries and mutations that we declared in the schema. So you can see the users query and also the createUser mutation. To keep things simple, the user ID will be generated based on how many users we currently have in the database.

After adding the changes mentioned on the previous two slides, we end up with this code: the schema, the list of users in the users variable, and then, of course, the implementation inside the root object.
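Put together, a sketch of those additions might look like this; variable and type names are illustrative and may differ slightly from the slides:

```js
// Extended schema: a User type, a users query, and a createUser mutation.
const schema = buildSchema(`
  type User {
    id: ID!
    name: String
  }
  type Query {
    users: [User]
  }
  type Mutation {
    createUser(name: String!): User
  }
`);

// The "database" is just a variable, as in the talk.
const users = [
  { id: '1', name: 'John Doe' },
  { id: '2', name: 'Jane Doe' },
];

// Implementations of the declared query and mutation.
const root = {
  users: () => users,
  createUser: ({ name }) => {
    // The ID is generated from how many users we currently have.
    const user = { id: String(users.length + 1), name };
    users.push(user);
    return user;
  },
};
```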

When we run the sample again, just like before, with node server.js, we can then navigate in the browser to our application.

And when you go to the documentation section, now both the newly added users query and the mutation that we added will show up there. So you can see that under Mutation we have createUser.

So let's quickly test what we have done so far. We can simply run the query. As you can see, when we type users, it's underlined in red, meaning we still have to provide the properties we're interested in for the given query. So we can simply go with the name and ID of each user.

Once we add that, we can run the query and get back the two users we have by default, John Doe and Jane Doe, with their IDs as well. Now let's test the mutation. So instead of that query, we are going to write the mutation. We only have a single mutation currently: createUser.

Again, it's underlined and you get an error, because you need to add more information there. We need to provide an input parameter for the name, so let's just go with, for example, Sarah Doe.

And we also have to specify what we want to get back. I could ask for the name, but there's not much point to it because we are providing the name ourselves right now. So instead, I'm asking for the ID, so we'll get back the ID that was generated automatically for this newly added user. There's also a nice button in the user interface called History, so we can go back to a previous query that we already ran and use it again.

So as you can see, my query is back again, and of course, when I run it this time, it returns three users, including the newly added user, Sarah Doe, with the generated ID.
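For reference, the two operations typed into GraphiQL in this demo look roughly like this (shown here as strings, matching the sketch above):

```js
// Query returning the name and ID of every user.
const listUsers = `
  query {
    users {
      name
      id
    }
  }
`;

// Mutation adding a new user and returning the automatically generated ID.
const addUser = `
  mutation {
    createUser(name: "Sarah Doe") {
      id
    }
  }
`;
```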

GraphQL enabled us to present the data in a way that uses the same terminology Fusion 360 users are already familiar with, exposing components, occurrences, versions, part number, description, and so on. You can see in this image how nicely the hierarchy information provided by our GraphQL API lines up with the structure shown in the model palette inside Fusion 360.

If you were to compare GraphQL to a REST API, you would find that GraphQL has the following benefits. A single query can retrieve results from multiple REST endpoints, which means fewer calls. You can also add multiple queries to a single API call, which also means fewer calls. And you get just the data that you are interested in, with no unnecessary data that you would have to traverse to find what you need.

Now, let's see these points in detail. I'm going to use the Forge Data Management API for comparison. So as an example, if you wanted to get the names of all the projects you have access to using our REST APIs, you would have to first request the list of hubs you have access to, then for each hub, request the list of projects it contains.

As you can see, in GraphQL I can simply request all this in a single query, asking for all the hubs, their names and IDs, and then for each hub, the name of each project it contains.
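A sketch of such a nested query is shown below; the field names follow the terms used in the talk (hubs, projects, results), but the exact spelling and casing should be checked against the Fusion Data API documentation:

```js
// One GraphQL request replacing a hubs call plus one projects call per hub.
// Field names are approximations of the Fusion Data schema; verify them in
// the documentation explorer before relying on them.
const hubsAndProjects = `
  query {
    hubs {
      results {
        name
        id
        projects {
          results {
            name
          }
        }
      }
    }
  }
`;
```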

With REST, you have to send the request multiple times in order to get back results for multiple different items. In GraphQL, I can combine two completely separate queries, one looking for the hub named Fusion Data and requesting all the project names inside it, and a separate one looking for a hub named My Hub and requesting all the project names inside that as well.
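Combining two independent lookups into one request can be done with GraphQL aliases; a hedged sketch with the two hub names from the talk (the filter argument shape is, again, an approximation of the Fusion Data schema):

```js
// Two aliased hub lookups sent as a single request. "fusionDataHub" and
// "myHub" are just aliases for the two result sets.
const twoHubs = `
  query {
    fusionDataHub: hubs(filter: { name: "Fusion Data" }) {
      results {
        projects { results { name } }
      }
    }
    myHub: hubs(filter: { name: "My Hub" }) {
      results {
        projects { results { name } }
      }
    }
  }
`;
```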

With REST, you will get back all the data that is exposed through a given endpoint, whether you want it or not. With GraphQL, you can specify what you are interested in, and only that will be returned.

In this example, you can see that we just want the names of the hubs we have access to, but in the case of REST, we get loads of unnecessary data for each hub: the type of the hub, its schema, links, et cetera. With GraphQL, we get just what we ask for: the name of each hub, nothing else.

Currently only the eventing part, where we can create webhooks to get notified about milestone creation, supports all the CRUD operations; we only have read-only access to the rest of the objects. But we are planning to expose the ability to modify other data as well.

You'll find the documentation of the Fusion Data API at the link below, which provides you with a nice overview of the capabilities, tutorials, samples, and also a list of all the objects that are exposed.

The source code of the samples can be found inside this GitHub repo. Since there is a single endpoint where you send your GraphQL requests, the samples are fairly similar; mainly the GraphQL queries are different.

One of the samples also has a live version that lets you log in using your Autodesk credentials and play with the Fusion Data API. It's using a library called GraphiQL, that makes it really easy to provide a very nice intuitive graphical user interface for your GraphQL APIs that lets your users interact with your API and quickly get familiar with all the data it exposes.

Many web services, including GitHub itself, are using this component to provide a kind of playground for their customers.

GraphiQL lets you type in your queries, then run them and get back the results. While typing your queries, it also offers IntelliSense, listing the options available to you. And it also has a documentation explorer on the right, which lists all the possible queries and mutations you can use. As you may remember, in order to get back information, you run a query; if you want to modify something, you use a mutation.

When you see the list of available queries, you'll also find, of course, the one we just used at the beginning, simply called hubs, which returns all the hubs that the logged-in user has access to. But we could also start with other queries, like projects, which returns a given project based on the hub and project ID, or item, which can return a specific file or folder based on its ID.

As you keep clicking on the various types, you will quickly discover the whole object model exposed through the Fusion Data API, with all the various types available and the properties or fields they have.

Just to show you how you can play with the API in this online tool: first, you have to log in using your Autodesk ID, then you can start doing various operations. The simplest is probably to ask for the list of hubs you have access to; then we can also use a filter to reduce the amount of data in the response to just what we need, based on the name of the hub.

So first we are just running the simple hubs query, and then we are going to filter it down to something that we are actually interested in. So hubs has an option: in brackets you can provide a filter, which needs to be an object. We go with the name property, and then we can just pass in the name of the hub that we are interested in.

Now we can simply list all the projects inside the hub that we selected.

So here are all the projects. Now we are going to filter down to the specific project we're interested in, so we're just passing the name to the filter. Then we run it again. This time only the specific hub and project are listed. And we can also ask for other properties, like, for example, the root folder of the project, and see what's inside it.

So the name of the root folder will actually be the same as the name of the project, as you see right here. And now we can check the items, the contents of the root folder, which are always going to be part of a results list. And just for fun, we are also requesting the type name for each of the objects, and this is how you can see that one of the items is actually a subfolder, and all the others are Fusion objects, or components.

You can also specify the exact type that you're interested in. So let's say you only want to get certain properties on a component. Then we can say dot, dot, dot, on, and then specify the type, in this case Component. And then we can write there what we're interested in, let's say the tip version of the given component, and also get back some properties for that tip version, like the item number.

We could also find out if any other components are being referenced by this component. So we can go to model occurrences, check the results for those, and get the ID for each of them. So we can see that this steering wheel assembly we are testing with has multiple occurrences being referenced by it. Those are all listed there. But the ID doesn't tell us much, right?

So instead we can also ask for the component version of what's being referenced and then list the name. So some of these components might be internal components. Some of them might be external components, which are also available in the root folder of the project.

So this is one way to find the hierarchy of a given model. And again, we can ask for more properties for each of these component versions as well, if we wanted to: the part description, part number, and so on.
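A sketch of the whole drill-down described in the last few paragraphs; the hub and project names are placeholders, and the field names follow the terminology used in the talk rather than the verified schema, so check them in the documentation explorer:

```js
// Hub -> project -> root folder -> items, then an inline fragment on
// Component to reach the tip version and the components it references.
// Field names are approximations of the Fusion Data schema.
const hierarchyDrillDown = `
  query {
    hubs(filter: { name: "<hub name>" }) {
      results {
        projects(filter: { name: "<project name>" }) {
          results {
            rootFolder {
              name
              items {
                results {
                  __typename
                  name
                  ... on Component {
                    tipVersion {
                      partNumber
                      modelOccurrences {
                        results {
                          componentVersion {
                            id
                            name
                            partDescription
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
`;
```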

Yeah, that's it. So you saw how we can get to the data of files solely using the Fusion Data API. However, many people interested in this API already have applications using, for example, the Data Management API, and already know the URN or identifier of the files they are interested in. In that case, they can use different queries as a starting point.

So as an example, let's get the project ID and item ID of the Fusion 360 model using a tool like the Model Derivative API sample. You might be familiar with this already.

So this is using the Data Management API to list all the files I have access to. You can see the hubs, projects, folders, and the rest. And then when I select the file, in the developer console I can check the URL of that item, which will contain both the project ID and the item ID of the given file.

So here you can see the URL, so projects/projectid. I'm just quickly starting up the Fusion Data explorer sample that you've seen already. But this time around, I'm going to be starting with a specific file.

So we just need to authorize the application to access our data. And now instead of doing the usual query, like going through the hubs and projects, we can start off straightaway with a specific item. And as you can see in this case, we have to provide the project ID and also the item ID. So we can take the project ID from this URL right there. And then the item ID, which is also part of that URL. I can just copy paste it from there.

And let's just check what we found, so we can ask for the name and also the ID of the item. This should be exactly the box model that we have in our root folder. We can also ask for the type name; it should be a design file.

Again, I can use the on operator in order to drill down to the specific type of object that I'm interested in, in this case the design file, then check, for example, the tip version of the design file, and then the root component version of that.

And once I get there, I have access to the usual information for the component version, like part description and part number. So I can just run the query again to get back that information. And again, you may remember we just changed the part description to "new description", so of course that's what's going to show up in the query as well. And beyond that, you could also ask for, for example, the thumbnail of the given component.
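A sketch of this item-based starting point; the projectId and itemId placeholders stand for the values copied from the URL earlier, and the field names again follow the talk's terminology rather than the verified schema:

```js
// Start from a known project ID and item ID instead of walking hubs and
// projects. Field and argument names approximate the Fusion Data schema.
const itemQuery = `
  query {
    item(projectId: "<project-id>", itemId: "<item-id>") {
      name
      id
      __typename
      ... on DesignFile {
        tipVersion {
          rootComponentVersion {
            partDescription
            partNumber
            # a thumbnail field with status and url can also be requested
            # here, as shown in the next step (field name approximate)
          }
        }
      }
    }
  }
`;
```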

When I run this query, the image is not yet available. If you're familiar with the Model Derivative API, it's a similar process: you have to ask for the thumbnail, and if it has not been generated already, it might take a bit of time for that to happen. So the best thing is to keep checking the status of the thumbnail. By the time I run this query again, the status is already success, and the URL of the thumbnail is available, so I can just click on it and get the thumbnail.

Apart from the Fusion Data Explorer sample, you can also check out the other three samples showing specific workflows. Let's have a look at one that gets the thumbnail of a given model. As pointed out on the website, you have to make sure that the Forge app whose credentials you are using has the correct callback URL.

You can just go to your app on the Forge website, forge.autodesk.com, and check it there. Log in with your Autodesk account, go to My Apps, select the app that you want to use, and then, of course, make sure that the callback URL is as shown in the picture: http, not https, then localhost:3000, since we will be running on port 3000, and then /callback/oauth.

All three samples have a similar structure. So index.js is the starting point, where you have to provide your Forge app's client ID and client secret, plus the location of the model you want to work with; in other words, the name of the hub and project where the model resides, plus the name of the model itself.

Currently, the sample assumes that the model is inside the root folder of the project. However, it would be quite straightforward to modify the code in order to work with a model that is inside a subfolder of a project, and I'm going to show you that in a second. But this specific sample you see on the screen shows how you can get the thumbnail for a given Fusion model.

So you can just open up Fusion 360 or the Fusion Team website in order to find the name of the hub, project, and model that you want to work with. In our case that is L2 Forge data team, Adam, and then the box model that I was already playing with before.

And then update the values of the variables in the index.js file accordingly: hub name, project name, and component name. The auth.js file is using the usual Forge authentication APIs that all Forge developers are already familiar with. In this case, we need to use three-legged authentication, because that's what the Data Management API requires when accessing models on Fusion Team.

And finally, app.js is the file that contains all the functionality related to the Fusion Data API. You can see the single endpoint that you need to use to access all the GraphQL functionality: the developer.api.autodesk.com base URL, then fusiondata, the specific version, and then /graphql.
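Every request goes to that one endpoint as an HTTP POST carrying the query (and any variables) in the body, with the three-legged token in the Authorization header. A minimal sketch, assuming node-fetch is available and with the version segment of the URL left as a placeholder to be taken from the documentation:

```js
// Minimal sketch of sending a GraphQL query to the Fusion Data API.
// The endpoint's version segment is a placeholder; take the exact URL from
// the official documentation or the sample's app.js.
const fetch = require('node-fetch');

async function runQuery(accessToken, query, variables = {}) {
  const response = await fetch(
    'https://developer.api.autodesk.com/fusiondata/<version>/graphql',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${accessToken}`, // three-legged token
      },
      body: JSON.stringify({ query, variables }),
    }
  );
  return response.json(); // { data, errors }
}
```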

In order to get the thumbnail for our model, we just have to run this single query. First we query for all the hubs and filter the result to the hub with the given name we previously provided. Then we query for all the projects, again filtering for the specific one we're interested in. Then we query for all the files and components available in the root folder of that project and filter for the one with the name we provided.

Since the thumbnail might not have been generated for the given component yet, we have to check its status, and then keep sending this query to get an update of the thumbnail status. This process is similar to how you translate files using the Model Derivative API, or get properties out of an already translated model with it.
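A hedged sketch of that polling loop, reusing the runQuery helper sketched earlier; getThumbnailFromResponse is a hypothetical helper standing in for digging the thumbnail object out of the nested response, and the status value checked is an assumption:

```js
// Keep re-sending the thumbnail query until its status reports success,
// then return the URL. getThumbnailFromResponse() is hypothetical: it walks
// the nested GraphQL response down to the { status, url } object.
async function waitForThumbnail(accessToken, thumbnailQuery, variables) {
  for (;;) {
    const result = await runQuery(accessToken, thumbnailQuery, variables);
    const thumbnail = getThumbnailFromResponse(result);
    if (thumbnail && thumbnail.status === 'SUCCESS') {
      return thumbnail.url; // ready to download
    }
    // Not generated yet: wait a bit and ask again, much like polling the
    // Model Derivative service for a translation.
    await new Promise((resolve) => setTimeout(resolve, 2000));
  }
}
```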

Once the status is success, we can use the provided URL to download the thumbnail. As pointed out in the prerequisites of the samples, make sure that the Node.js version on your system is at least 16 before trying to run the samples. You can simply run node -v in the terminal to check the Node.js version currently being used on your system.

Before you can run the sample, you need to use npm i in order to install all the required components in the node_modules folder. Then use npm start to run the sample.

So to recap the whole process: once you've downloaded the sample, all you need to do is set the values of the highlighted variables in the index.js file, as you can see there. Then run npm i to get all the necessary libraries installed in the node_modules folder.

You can see all the packages that were added, then we can start the app using the npm start command. And then you have to open the given URL in the browser so that you can log in using your Autodesk account. So we just have to type in our account details, our email address, the usual things.

Sometimes, it asks for the emails twice. Then of course, the password. And then we also have to approve the sign in requests.

So we got the access token. Once that's done, the program will continue execution, and send the GraphQL query to the Fusion Data API. And as soon as the thumbnail is available, its location will be printed to the console so that you can open it and here's the thumbnail of our model.

You can run the other two samples exactly the same way. One sample shows how you can get the full hierarchy of a given model, and the other shows how you can subscribe to the milestone created event in order to get notified when someone creates a new milestone version of your model.

If you wanted to modify the code, the easiest way to make sure that you are sending the query correctly is testing it in a utility like the Fusion Data Explorer that I've shown you before. So let's modify the sample code so that it will work with the model that is in a subfolder in this case called my subfolder.

We can simply copy-paste the query into the Fusion Data Explorer's editor and also provide the query parameters. When typing those in, IntelliSense will help you here too, if needed, and the parameters that the query is using will be listed in a popup window that you can select from. Then you can just run the query to test the result.

In our case, all we need to do is filter for the name of the subfolder first, then look for the model or component inside that folder, instead of doing that directly in the root folder. So only these couple of lines need to be added to achieve what we want.

Since the name of the model is different as well, we need to modify that in the query variables window. Then we can run the query and see that it's working just fine. So we managed to change the query as we wanted, and now it's time to copy-paste it back into our application. You can see that it now includes those extra lines we added.

And we also need to update the component name in the index.js file, of course. And since the data will now be one level deeper in the response, we have to modify the get component version thumbnail function's code to get the data from the subfolder. I just had to add these three lines and also modify this line, so that we check the results in the subfolder instead of directly in the root folder.

Now we can run the sample, just like before, by using npm start, then go into the browser to log in using our Autodesk account. We get the access token straight away, and by the time we're back, the sample has already downloaded the thumbnail, and we can view it by clicking on the link. And there we go. We have the sphere model's thumbnail.

And this sample that prints out the full hierarchy of the selected design can be run exactly the same way as the previous sample. The way we can get the component or design we want to work with is exactly the same in the query.

So we are just going through the hubs, projects, root folder, and check the items in there. But instead of checking the thumbnail, we are asking for the list of model occurrences that this model has. In other words, what other components are used in the design.

We also have another query that will check the model occurrences of the components that we got back from the previous query using the component version query.

We request the same info here, as in the previous query, the list of model occurrences with their ID and name, and we keep doing this until we discover the full hierarchy of the top model.
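A hedged sketch of that recursive walk, again reusing the runQuery helper; the componentVersion query shape, its argument name, and the field names are approximations of what the sample's app.js uses:

```js
// Recursively print the hierarchy: for each component version, ask for its
// model occurrences, then repeat for every referenced component version.
// Query and argument names approximate the Fusion Data schema.
const occurrencesQuery = `
  query ($componentVersionId: ID!) {
    componentVersion(componentVersionId: $componentVersionId) {
      modelOccurrences {
        results {
          componentVersion {
            id
            name
          }
        }
      }
    }
  }
`;

async function printHierarchy(accessToken, componentVersionId, indent = '') {
  const result = await runQuery(accessToken, occurrencesQuery, {
    componentVersionId,
  });
  const occurrences =
    result.data.componentVersion.modelOccurrences.results || [];
  for (const occurrence of occurrences) {
    console.log(indent + occurrence.componentVersion.name);
    // Recurse until the full hierarchy of the top model is discovered.
    await printHierarchy(
      accessToken,
      occurrence.componentVersion.id,
      indent + '  '
    );
  }
}
```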

Here as well, we just have to make sure that we fill in the values of the relevant variables in the index.js file, run npm i to add all the necessary components and packages, then run npm start to start the application.

Then navigate to it in the browser and log in using our Autodesk account. But since we've already logged in before, it remembers that, and we get back the access token straight away. And by the time we get back to the application's console window, we can see that the full hierarchy has been printed there.

You can also double check to make sure that this is exactly what you would expect. So we can simply go into Fusion 360, check the Components tab, and you will see that it's exactly the same hierarchy that we got back using the API.

As mentioned at the beginning, we always get up to date information without any delay. So let's for example, modify the components in our model. We are going to be adding another instance of the bolt that we have in the model.

So as you can see, currently we only have two bolt instances, bolt one and bolt two. We can simply copy one and paste it into the model. We don't really care where it's placed; we only care about the hierarchy, or the structure, of the model, just to test that it's all correct in the API. So we're going to save this in order to generate a new version of the model.

And you can go back to the sample that you just ran before and run it again to get the up-to-date information about the hierarchy of the model that we're working with. So again, just navigate to the application in the browser, log in using our Autodesk account, and by the time we get back here, well almost, we have to wait just a bit.

But you can see the hierarchy, and now we have three instances of the bolt. So all seems to be good.

The last sample is a bit more complicated to run because it requires that you handle callbacks coming from the Forge server, and in case of testing on a local computer that's not exposed to the internet, we need a tool that will pass on the message to our computer. So the two extra things needed in the sample are handling the callback coming from the Forge server and using a tool like ngrok that will pass messages on to our computer.

The Fusion Data API provides an eventing system, which is basically like webhooks, but it is separate from the Forge Webhooks service that you may be familiar with already. Currently, it only supports the milestone created event. As the name suggests, this allows you to get notified when a new version of a model is saved as a milestone in Fusion 360.

In this sample as well, the queries are in the app.js file. The main one in the case of this sample is the mutation that creates the webhook. You just have to specify the ID of the component we want to monitor and, of course, the callback URL that our app expects to receive the notifications on.
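A sketch of what such a mutation could look like; the mutation name, its input fields, and the event type value are assumptions for illustration only, so take the real names from the sample's app.js and the documentation:

```js
// Subscribe to milestone-created notifications for one component.
// createWebhook, its arguments, and MILESTONE_CREATED are assumed names;
// the actual mutation is defined in the sample's app.js.
const createWebhookMutation = `
  mutation ($componentId: ID!, $callbackUrl: String!) {
    createWebhook(
      componentId: $componentId
      eventType: MILESTONE_CREATED
      callbackUrl: $callbackUrl
    ) {
      id
    }
  }
`;

// Usage sketch: the callback URL is built from the ngrok URL captured below.
// await runQuery(accessToken, createWebhookMutation, {
//   componentId: '<component-id>',
//   callbackUrl: `${ngrokUrl}/callback`,
// });
```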

As usual, first we have to make sure that the values of the relevant variables in the index.js file are correct, then we also have to start ngrok so that we can receive the webhook callback. On this system, it's already installed in the applications folder, so I can run it from there.

And we can simply copy-paste the URL it generated for us and use that for the ngrok URL variable, so we just replace that there. We also have to install all the necessary components, so we're going to run npm i.

And once that's done, we can start the application. So we're going to the browser, navigating to our application.

And once that's done, the application registers the webhook for us and prints a message about that in the terminal. To test things, we can just go into Fusion 360, open the model we are monitoring, make some changes in it, and save the new version as a milestone.

So we can just simply open up the model and probably one of the easiest modifications to do is just select one of the faces, and then do a push pull or press pull modification on it. So we are just going to be changing the height of this object. Click OK, save the changes. Make sure we select milestone, because that's what we are monitoring.

So that's been done now. We just have to wait for this modification to show up in our program. And here it is: we received a notification successfully. And you can see the information, including the name of the milestone, the event type, and some other data like the component ID, which might be relevant.

You also have a great sample utility web app available online that you can log into and start testing the API with. It's using the same GraphiQL component as the first sample I showed, but has many other features as well. On the left side, it provides a tree control that lets you access any of the models that you have access to, and this app as well is using the Fusion Data API under the hood to populate the tree control.

Apart from being able to run any GraphQL queries you want, it also suggests queries for you, depending on what object you select in the tree control. So for example, if you select the hub, then it shows you how you could query for all the projects in it and the contents of the root folder of those projects.

If you select a project, then it shows you how you could query for the contents of the root folder of that project. If you select a model, then it shows you how you could query the component version of the model, get properties like part number and description, and also get a list of the other components this model is referencing and the properties of those components as well.

You can of course run all these queries by clicking on the play button in the user interface, and get the results back in the panel on the right side.

It also has a tab called demo app that shows the hierarchy inside the selected model using a graph visualization library. You can simply double-click on a node, and if it has subcomponents, those will appear and you can drag them around. It's a bit like herding cats; the nodes keep trying to escape.

So we can just discover the full hierarchy or object model of the design that we are working with.

If you wanted to start from scratch, then you could either select the file again in the tree control on the left side or in fact as I found out, if you just double click the root component, the same thing will happen.

Yeah. So we are back to square one, have to touch those components again.

And if you're paying attention, you can see that in the tree control, where all the files in the root folder of the project are listed, some of these names will be familiar because some of these are actually external references inside the design. So those will also show up in the root folder of the project.

So I'm checking which ones are there. The bolts are not. Those are internal components. Those are not external files. Steering wheel form, that's not either. As you can see, it's not part of the root folder. But for example, if you check the center bracket modified, that does show up in the root folder, or the base modifier. That's also an external component, which also has some internal components as well.

So this is a really fun way to investigate the hierarchy of the models that you're working with inside Fusion.

There is another tab, which will use the well-known Forge Viewer to display the selected model. You're probably familiar with the Forge Viewer already, with the various functionalities which are available to you, including sectioning, measuring, and all those things.

Then there is the component table tab, which shows the same hierarchy that was presented as a graph on the demo app tab, but this time organized into a table, also including component properties and thumbnails, basically providing [INAUDIBLE] information for the model.

And last but not least, there is the query history tab, which not only lists the queries that you ran in the Query Editor, but also those the app used to populate, for example, the tree control, the user name and icon, and the Forge Viewer. It might be quite useful to have a look at those as well, to see what API requests the app is making in the background.

All the samples you've seen were using Node.js. If you prefer .NET, I actually created a sample following the steps in the tutorial, which you can find in the online documentation. So it follows the exact same steps outlined in the tutorial: picking a hub first, picking a project to work with, picking a component, generating the thumbnail, and then, of course, downloading the thumbnail as well.

So here as well, first of all, you have to fill in the credentials and make sure that you use the correct callback URL, which is the same as in all the other samples. And then when you run the app, the browser should pop up where you can log in with your Autodesk account. You just have to wait a bit until it gets started.

So we need to provide the email, password of course, and also approve access to our data on Fusion Team. So signing in.

And as soon as we get the access token, of course, the application continues running, and it goes through each step, just as laid out in the online tutorial. So first of all, we are going to get back all the hubs that we have access to.

So those are listed in the console, and what we have to do now is select the ID of the specific hub that we are interested in. I'm just going to go with the usual hub I used before, the L2 Forge data team, so I just need the ID from there. Then I have to go into the get all projects function and paste it in there. Make sure you save the changes, and then we can restart the application.

So this is the way the sample works. Each time, you're going to get an error until you fill in all the necessary information in order to get back the thumbnail that you were interested in.

So now that you selected the hub, it's going to be listing all the projects available inside that hub. Unfortunately, each time we have to go through the login process. So that takes a bit of time. It's just asking for the usual credentials.

So this time around, it should successfully execute two of the five steps of the tutorial. Now it's going to list the hubs and also the projects inside the specific hub. You can see all those should be available inside the console, and this time around we need to select the project ID that we want to use, go to the get components function, and also select the ID of the root folder of the given project.

So we are copy pasting that as well. Again, save everything. And then we can restart the application.

So now, three steps should successfully execute. Getting list of hubs, getting the projects inside the hub, and now getting also the contents of the root folder of the given project.

So once we go through the usual login process, then the application should continue.

Yep, so now we should have the list of components available in the root folder as well. We're going to go with the box model that we already used before, so we just need the ID of that model and place it in the generate thumbnail function.

As you can see, we'll also need the project ID, but that's something we already used in the previous function. So we can just simply copy paste it from there. We go to the get components function, and copy paste the project ID from there into the generate thumbnail function. Again save all the changes, and then we can restart the application.

So after this, we reach the final step of the sample, where we provide the last remaining piece of information that we still needed.

So once it successfully signed in, then we can go back to the console, and check how far it got.

So now it's also listing the component versions. We can simply go with the tip version of this box model, copy-paste that from there, and place it in the download thumbnail function. And with that, we have finished filling in the sample; all the necessary information is available there now. So when we run it for the final time, we should get the URL of the thumbnail for this given model.

So again, we just have to go back to the browser. Log in.

And this time around, we shouldn't run into any errors, because we provided all the necessary information for the sample.

So now let me go back to the console after we've managed to log in successfully. I'm expecting to see the thumbnail's URL; well, actually, the sample just takes the URL of the thumbnail and downloads the image automatically in the background.

So now we have the location of the thumbnail that was downloaded, and we can just navigate to it inside the File Explorer and there is the model that you've seen already before when I opened it up in Fusion 360. And this is the thumbnail we got for it.

So far, you've seen what is already available today. And now a few words about what we are thinking of providing in the future. Expose more properties that customers ask for, like volume and bounding box. Provide an extra function that not only returns the components that the model is directly referencing, but the full hierarchy of the model, however many levels it may have.

As previously mentioned, we are also planning to enable write capabilities on data, and even let you create your own data that you could attach to components.

Finally, just a quick recap of the main resources that will help you get started with the Fusion Data API. There is the online documentation, of course, the GitHub repo with the samples, the two online utilities that let you play with the API, and the link providing info about how you can contact us if you need any help.

I hope you found this presentation useful and it convinced you to give this API a try. Bye for now.

Twitter
We use Twitter to deploy digital advertising on sites supported by Twitter. Ads are based on both Twitter data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Twitter has collected from you. We use the data that we provide to Twitter to better customize your digital advertising experience and present you with more relevant ads. Twitter Privacy Policy
Facebook
We use Facebook to deploy digital advertising on sites supported by Facebook. Ads are based on both Facebook data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Facebook has collected from you. We use the data that we provide to Facebook to better customize your digital advertising experience and present you with more relevant ads. Facebook Privacy Policy
LinkedIn
We use LinkedIn to deploy digital advertising on sites supported by LinkedIn. Ads are based on both LinkedIn data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that LinkedIn has collected from you. We use the data that we provide to LinkedIn to better customize your digital advertising experience and present you with more relevant ads. LinkedIn Privacy Policy
Yahoo! Japan
We use Yahoo! Japan to deploy digital advertising on sites supported by Yahoo! Japan. Ads are based on both Yahoo! Japan data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Yahoo! Japan has collected from you. We use the data that we provide to Yahoo! Japan to better customize your digital advertising experience and present you with more relevant ads. Yahoo! Japan Privacy Policy
Naver
We use Naver to deploy digital advertising on sites supported by Naver. Ads are based on both Naver data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Naver has collected from you. We use the data that we provide to Naver to better customize your digital advertising experience and present you with more relevant ads. Naver Privacy Policy
Quantcast
We use Quantcast to deploy digital advertising on sites supported by Quantcast. Ads are based on both Quantcast data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Quantcast has collected from you. We use the data that we provide to Quantcast to better customize your digital advertising experience and present you with more relevant ads. Quantcast Privacy Policy
Call Tracking
We use Call Tracking to provide customized phone numbers for our campaigns. This gives you faster access to our agents and helps us more accurately evaluate our performance. We may collect data about your behavior on our sites based on the phone number provided. Call Tracking Privacy Policy
Wunderkind
We use Wunderkind to deploy digital advertising on sites supported by Wunderkind. Ads are based on both Wunderkind data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Wunderkind has collected from you. We use the data that we provide to Wunderkind to better customize your digital advertising experience and present you with more relevant ads. Wunderkind Privacy Policy
ADC Media
We use ADC Media to deploy digital advertising on sites supported by ADC Media. Ads are based on both ADC Media data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that ADC Media has collected from you. We use the data that we provide to ADC Media to better customize your digital advertising experience and present you with more relevant ads. ADC Media Privacy Policy
AgrantSEM
We use AgrantSEM to deploy digital advertising on sites supported by AgrantSEM. Ads are based on both AgrantSEM data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that AgrantSEM has collected from you. We use the data that we provide to AgrantSEM to better customize your digital advertising experience and present you with more relevant ads. AgrantSEM Privacy Policy
Bidtellect
We use Bidtellect to deploy digital advertising on sites supported by Bidtellect. Ads are based on both Bidtellect data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bidtellect has collected from you. We use the data that we provide to Bidtellect to better customize your digital advertising experience and present you with more relevant ads. Bidtellect Privacy Policy
Bing
We use Bing to deploy digital advertising on sites supported by Bing. Ads are based on both Bing data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bing has collected from you. We use the data that we provide to Bing to better customize your digital advertising experience and present you with more relevant ads. Bing Privacy Policy
G2Crowd
We use G2Crowd to deploy digital advertising on sites supported by G2Crowd. Ads are based on both G2Crowd data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that G2Crowd has collected from you. We use the data that we provide to G2Crowd to better customize your digital advertising experience and present you with more relevant ads. G2Crowd Privacy Policy
NMPI Display
We use NMPI Display to deploy digital advertising on sites supported by NMPI Display. Ads are based on both NMPI Display data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that NMPI Display has collected from you. We use the data that we provide to NMPI Display to better customize your digital advertising experience and present you with more relevant ads. NMPI Display Privacy Policy
VK
We use VK to deploy digital advertising on sites supported by VK. Ads are based on both VK data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that VK has collected from you. We use the data that we provide to VK to better customize your digital advertising experience and present you with more relevant ads. VK Privacy Policy
Adobe Target
We use Adobe Target to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Adobe Target Privacy Policy
Google Analytics (Advertising)
We use Google Analytics (Advertising) to deploy digital advertising on sites supported by Google Analytics (Advertising). Ads are based on both Google Analytics (Advertising) data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Google Analytics (Advertising) has collected from you. We use the data that we provide to Google Analytics (Advertising) to better customize your digital advertising experience and present you with more relevant ads. Google Analytics (Advertising) Privacy Policy
Trendkite
We use Trendkite to deploy digital advertising on sites supported by Trendkite. Ads are based on both Trendkite data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Trendkite has collected from you. We use the data that we provide to Trendkite to better customize your digital advertising experience and present you with more relevant ads. Trendkite Privacy Policy
Hotjar
We use Hotjar to deploy digital advertising on sites supported by Hotjar. Ads are based on both Hotjar data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Hotjar has collected from you. We use the data that we provide to Hotjar to better customize your digital advertising experience and present you with more relevant ads. Hotjar Privacy Policy
6 Sense
We use 6 Sense to deploy digital advertising on sites supported by 6 Sense. Ads are based on both 6 Sense data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that 6 Sense has collected from you. We use the data that we provide to 6 Sense to better customize your digital advertising experience and present you with more relevant ads. 6 Sense Privacy Policy
Terminus
We use Terminus to deploy digital advertising on sites supported by Terminus. Ads are based on both Terminus data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Terminus has collected from you. We use the data that we provide to Terminus to better customize your digital advertising experience and present you with more relevant ads. Terminus Privacy Policy
StackAdapt
We use StackAdapt to deploy digital advertising on sites supported by StackAdapt. Ads are based on both StackAdapt data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that StackAdapt has collected from you. We use the data that we provide to StackAdapt to better customize your digital advertising experience and present you with more relevant ads. StackAdapt Privacy Policy
The Trade Desk
We use The Trade Desk to deploy digital advertising on sites supported by The Trade Desk. Ads are based on both The Trade Desk data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that The Trade Desk has collected from you. We use the data that we provide to The Trade Desk to better customize your digital advertising experience and present you with more relevant ads. The Trade Desk Privacy Policy
RollWorks
We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

Are you sure you want a less customized experience?

We can access your data only if you select "yes" for the categories on the previous screen. This lets us tailor our marketing so that it's more relevant for you. You can change your settings at any time by visiting our privacy statement

Your experience. Your choice.

We care about your privacy. The data we collect helps us understand how you use our products, what information you might be interested in, and what we can improve to make your engagement with Autodesk more rewarding.

May we collect and use your data to tailor your experience?

Explore the benefits of a customized experience by managing your privacy settings for this site or visit our Privacy Statement to learn more about your options.