
Media and Entertainment Joins Autodesk Platform Services


Description

In this session, we will show the new media and entertainment software built around Autodesk Platform Services and introduce how the APIs work. The Autodesk Platform Services Flow Graph Engine API is already in beta, with the Media and Entertainment Data Model coming soon to support the Flow asset management tools. This class focuses on features that will also be included in the digital content creation (DCC) tools, but from an API perspective, showing how customers can access data and functionality programmatically.

Key Learnings

  • Learn about the Media and Entertainment Autodesk Platform Services API strategy.
  • Learn about the new Flow Graph Engine API.
  • See how the Media and Entertainment Data Model will bring data more easily to customer workflows through the API.

Speaker

  • Kevin Vandecar
Kevin Vandecar is a Developer Advocate Engineer and the manager for the Autodesk Platform Services Media & Entertainment and Manufacturing Workgroups. His specialty is 3ds Max customization and programming, including the Autodesk Platform Services Design Automation for 3ds Max service. Most recently he has been working on the Autodesk Platform Services data initiatives, including the Data Exchange API and the Fusion Data API.
Transcript

KEVIN VANDECAR: So hello, everyone. My name is Kevin Vandecar. I'm part of the Autodesk Developer Advocacy and Support group. And today we're going to talk about how media and entertainment is joining Autodesk Platform Services. So we'll start with an introduction to Platform Services, and we'll talk about the new Flow services as well.

So I want to start with the Safe Harbor Statement. We've seen this before and the gist of this is just to make sure you don't make any purchasing decisions based on software that's in beta or may not be released yet. So the agenda for today is we're going to talk about Autodesk Platform Services and give an overview, just so everyone is aware how these Flow APIs will be available. We'll talk about the M&E Flow strategy overall, and then we'll show the Flow Graph Engine Service and the Flow M&E data model vision.

So we'll start with what is APS. And the very first thing I want to point out is that Platform Services was branded as Autodesk Forge in the past. So whenever you see Autodesk Forge, and there's some history on the internet and so forth, just equate that with the current terminology, which is Autodesk Platform Services. Autodesk made that decision because this is really going to be the core of not only our cloud presence but also how data gets authored from our desktop software; this is the platform for the design and make industry.

So to start, APS has really enabled this digital transformation for a lot of our customers. It connects the design and make platform to the business processes. And here's just a few examples. Office is a great one, where people have connected with things like Power BI to extract data into Excel and do rudimentary reports and things like that with the data that's coming from the platform.

And then, of course, all of these other activities are really enabled by the platform. So ERP; asset management, which is going to be really big for the Flow strategy; collaboration, which is also a really big one; and then even CRM. Customer relationship management is even better when you have the data from the things that you are designing and making.

So Autodesk Platform Services has been around for a number of years. And like I said, it was called Autodesk Forge in the past. But all of the underlying APIs are the same and are very mature.

The main use case here is connecting data, workflows, and people. And this example shows the Model Derivative service, which takes all kinds of different data inputs, processes them through Platform Services, and enables really unique workflows. Viewing, planning, reviewing, asset management, maintenance, purchasing: all of those activities can be driven by the data coming from the platform.

So to look at Platform Services from an API perspective: on the left are the core APIs, basically Data Management and Data Visualization.

We also have the Viewer API. We also refer to that as the APS Viewer; sometimes you'll still hear it called the Forge Viewer. And then there's the Webhooks API, which allows you to get notifications from the platform when tasks finish and things like that.
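
As a concrete illustration of that notification flow, here is a minimal Python sketch of registering a webhook so the platform calls you back when a Model Derivative translation finishes. It assumes you already have an access token, a reachable callback URL, and a workflow ID attached to your translation jobs; check the Webhooks API reference for the exact payload your scenario needs.

```python
# Hedged sketch: register a webhook so APS notifies your server when a
# Model Derivative translation finishes. ACCESS_TOKEN and the callback
# URL are assumed to exist already.
import requests

APS_BASE = "https://developer.api.autodesk.com"
ACCESS_TOKEN = "<token with data:read and data:write scopes>"

hook = {
    "callbackUrl": "https://example.com/aps/callback",  # your listener endpoint
    "scope": {"workflow": "my-translation-workflow"},   # matches the workflow id on your jobs
}

resp = requests.post(
    f"{APS_BASE}/webhooks/v1/systems/derivative/events/extraction.finished/hooks",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=hook,
)
resp.raise_for_status()  # a 2xx response means the hook is registered
```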

In the middle, you'll see the Data APIs. So we already have the manufacturing data model and the AEC data model in a production state. So AEC Data Model has even gone to general availability. And then we also have Model Derivative API, which has been around for a number of years, and that's our translation service.

If you're familiar with a feature called Shared Views within some of our desktop products, that's the Model Derivative API running underneath it, and a great example of our desktop products using the platform directly from desktop instances to produce things that can be shared and collaborated on in the cloud. And then finally, I want to mention the Data Exchange API. So Data Exchange is another data model that allows interop between products and data models. That one is in beta as well at this point.
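
To make the Model Derivative workflow above concrete, here is a minimal sketch of submitting a translation job in Python. The bucket and object names are placeholders, and it assumes the .max file was already uploaded to OSS and a token obtained; the SVF2 output format is what the APS Viewer consumes.

```python
# Hedged sketch: ask Model Derivative to translate an uploaded 3ds Max
# file into SVF2 for the APS Viewer. The object URN below is a placeholder.
import base64
import requests

APS_BASE = "https://developer.api.autodesk.com"
ACCESS_TOKEN = "<token with data:read and data:write scopes>"

# Model Derivative expects the source URN base64-encoded (unpadded).
source_urn = "urn:adsk.objects:os.object:my-bucket/scene.max"
encoded_urn = base64.urlsafe_b64encode(source_urn.encode()).decode().rstrip("=")

job = {
    "input": {"urn": encoded_urn},
    "output": {"formats": [{"type": "svf2", "views": ["2d", "3d"]}]},
}

resp = requests.post(
    f"{APS_BASE}/modelderivative/v2/designdata/job",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,
)
resp.raise_for_status()
print(resp.json()["result"])  # "created", or "success" if already translated
```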

And then finally on the right, cloud product APIs. So the most mature one that we've had there for a while is the Design Automation API. This is basically running some of our desktop software in the cloud, allowing you to automate common tasks while freeing up desktop resources, so you can keep using your licenses in a desktop way while you batch process and extract the data you need from the cloud instance. You let the compute take place in the cloud, where it can be more efficient and even more affordable.

And this is where the new Flow Graph Engine API will fall. It's basically also a compute service that's very similar to the Design Automation API in that it allows you to batch process Bifrost graphs in the cloud. And then we also have some AEC product APIs. The Forma API is relatively new; it's from the new Forma product.

The Tandem API is the same; it's for digital twins in the AEC space. And then we have the ACC (Autodesk Construction Cloud) APIs and also the older BIM 360 APIs. And I just want to point out that all of these APIs are industry standard, well-formed APIs. Most of them are REST APIs.

But as we move into data more and more, those data models are using GraphQL, which is an even better way to get access to granular data. With a REST API, typically you would get back the entire chunk of data, and it can be very large from products like Revit, or, in the M&E space, products like 3ds Max or Maya.

So GraphQL allows you to drill down, query, and ask for just the data that you want. And then, of course, all these other technologies come into play: typical front-end development, HTML, and so on. All the common programming environments are also supported. And because we're using REST and GraphQL, even programming environments that are not listed are easily supported, as long as they can speak those protocols.
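
To show what that granularity looks like in practice, here is a small sketch of a GraphQL call in Python. The endpoint and field names follow the AEC Data Model beta documentation but should be treated as assumptions; the point is that the query names exactly the fields you want, and nothing else comes back.

```python
# Hedged sketch of the GraphQL pattern: request only the fields you need
# instead of downloading a whole model. Endpoint and schema are assumed
# from the AEC Data Model beta; verify them against the current docs.
import requests

ACCESS_TOKEN = "<three-legged token>"

query = """
query GetHubs {
  hubs {
    results {
      id
      name
    }
  }
}
"""

resp = requests.post(
    "https://developer.api.autodesk.com/aec/graphql",  # assumed beta endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"query": query},
)
resp.raise_for_status()
for hub in resp.json()["data"]["hubs"]["results"]:
    print(hub["id"], hub["name"])  # only the two fields we asked for
```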

So to take a quick look at how Platform Services fits, of course, you have your front-end experience with HTML, CSS, and probably JavaScript. And then you can start adding in all these other components. So MongoDB is a common one for database storage. Our viewer is based on WebGL and Three.js. And so we're consuming some of these common technologies in our use as well.

And storage is another big one. So Dropbox, Google Drive, and so forth: you can basically use their APIs to bring that data into Autodesk Platform Services and back out again, as needed. And then, of course, all the big players in cloud services are completely supported and can be used.

And then finally, some of these other side services like Microsoft SharePoint, for example, or IBM Watson, and even the Autodesk Construction Cloud, these are supplemental services that all just work well with building complete solutions. And there's all kinds of other things that are not even listed here. So we just want to point out that Autodesk Platform Services is just another web component that you can consume to build these really robust, cloud-based experiences.

So to give you a quick look at our vision for Platform Services, you'll see in this diagram that Autodesk Platform Services does sit at the bottom, and everything we do for cloud-based workflows is built on top of this. So the Autodesk data model is a great example. It's using an asset graph-style database. That's a programming term; don't confuse it with asset management.

But that graph style of database to store this data at the low level is a really powerful way of having high performance access to your data in a granular fashion, while also maintaining versions, and relationships, and things that are very important to the data itself. And also, that concept then becomes your source of truth. And then on top of that is where our industry clouds are built.

So Fusion and Forma are already building themselves out and are pretty mature. And Autodesk Flow, for the M&E industry, is coming online now. That's the point of this presentation: to bring you up to date on what APS brings to the Flow industry group.

So let's talk just a little bit about what customers and partners build with APS. So the most common one is dashboards and reports. Basically, getting access to your data and understanding what that data means.

So for example, maybe looking at how many versions of an asset you have, or looking at how many instances of that asset are used in actual scenes, or shots, and things like that. So once we have this data in place in a single location, it's managed by Autodesk Platform Services, and it makes it much easier for you as the consumer and producer of that data to get back statistical and other great information.

On the AEC and manufacturing side, digital twins have become very important. So that's another big use case. ERP, CRM we mentioned earlier. That's another big use case.

And configurators and design automation are another great one. And I think this is going to be an important one in the Flow industry group. Just taking Flow Graph Engine as an example, there's no reason you couldn't have a standard Bifrost effect and have that configurable by your customer, or your user, or your employee to produce multiple outputs to simulate something in a specific context.

So this all results in building things on your side that would include data connectors to get access to the data, maybe asset catalogs so that you could review your assets very easily, either within your own studio, or maybe you're a marketplace and you want to sell your asset. So we've seen this already done in the manufacturing space, for example.

And then, of course, review is another big use case. Things like retopology, Bifrost simulations, those are all important aspects where you want to experiment, but also review the results before you actually put those assets into production.

So we do have very strong support for your APS journey. These are very open APIs, like I mentioned, REST and GraphQL. We also have a marketplace. So if you are a commercial developer and you need to market your tools, we have a marketplace that even lets you market your cloud-based workflows.

And then we have a very strong onboarding and support program, and that's what my team, Developer Advocacy and Support, leads. We run activities such as bootcamps; accelerators, where you can come with an idea and spend a week with our experts to build out your proof of concept; developer conferences; and, of course, things like Autodesk University and SIGGRAPH. We participate in all those types of activities.

Before we dive directly into some of the new Flow stuff, I want to mention that M&E has already had a presence in the APS services. These have been in general availability production for a number of years, and it mostly centers around 3ds Max. And it kind of makes sense, because Max has always been the product that sits closest to the CAD visualization realm.

And so it makes sense that 3ds Max is already there, but we're going to see more and more M&E tools coming into the mix. So just as an example, 3ds Max is supported in Model Derivative. Other formats, FBX, OBJ, glTF, are also supported in Model Derivative, and then Design Automation is also supported for 3ds Max. And I'm going to show you examples of both of those, Model Derivative and Design Automation.

So we'll start with the Model Derivative workflow. In this example, I want to show the Model Derivative service that is part of Autodesk Platform Services, or APS, and how it can handle M&E-style scenes. So in this particular case, bringing in a 3ds Max native file, so the .max format.

And it has the physically-based rendering materials, so PBR materials. And basically, when PBR materials are present in the scene and you translate your scene, those materials will be loaded by a standard surface extension that is part of the APS viewer.

So it gives you the ability to have pretty decent materials without having to use custom shaders and things like that to get previews, and maybe e-commerce type scenes that are full 3D in behavior that come straight from products like 3ds Max.

Model Derivative also supports FBX and so forth. And the APS viewer is actually running here and I'm actually doing all of this within Visual Studio Code. We have an extension, which I mentioned in the tools. And basically, the extension allows you to do some of the base APS activities directly in Visual Studio Code, which is a great environment for testing and kind of understanding the behaviors of the APS APIs, while getting real-time feedback and working directly with the services.

So the viewer itself is a full 3D viewer, has all the standard viewing capabilities, has a model browser, which helps you to navigate through the scene, isolate things. You can also turn things off, show all objects, turn one or two things off as you want. And so you have quite a bit of capability here, right out of the box.

Standard navigation tools as well. You can always go back to the home scene. And pretty decent online viewing capabilities. Let me just show you one more example of that before we move on.

So this is another scene from 3ds Max using physically-based materials. And in this example, it's this astronaut guy. One of the cool things here is, again, the ability to look at different components. So I could hide the geometry.

And just look at the biped, for example. So if you wanted to do some review and see if this scene was set up and had the proper biped, you could, of course, view it this way. Or we could do the opposite and just select it. Or we could basically just highlight the different parts that we want to look at. So again, navigation is pretty decent and shows quite a bit of capability here.

OK. So the next example is using Design Automation for 3ds Max, and I just want to show some of the capabilities there. In this short video, I'm going to show how 3ds Max Design Automation can work in a web app.

Basically, 3ds Max is running in the design automation tech stack and that means you can pretty much automate anything in the cloud that you might normally do on the desktop. And so you can see that this is a custom app. It's been deployed to a certain location using Heroku in this example, and it has some of the parameters that you might see in Pro Optimizer, but parameters are not absolutely necessary.

So basically, we gather up these inputs. They could come from wedging or other types of activities, but in this case, there's a user interface. There are three different values that are going to be used, which will produce three different optimized outputs. And then we can start the work item with this.

So basically what's going to happen when we start the work item is it will send all of these inputs, including the Max scene that has the mesh in it to the cloud and automate the Pro Optimizer against it. You see this work item started and finished. And then we're actually using the APS Viewer as a way to preview the results.

So the top here is the original model and its basic output. And then you can see in that dropdown, we're looking at different versions of the series of values. So the 25%, the 35%, and the 65%. And we can toggle on the wireframe mode to get a little bit better idea of what this looks like.

So you'll see it toggled on here. And there's the resulting mesh with the original, again, on the top. So it just shows one example of what you could automate in the cloud. And just wanted to make sure we highlighted that 3ds Max is part of Design Automation and has been there for a number of years. People are using it for various things.
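
For readers who want to see the shape of that automation, here is a hedged Python sketch of the Design Automation pattern from the demo: start a work item against a previously published activity, then poll until it settles. The activity ID, parameter names, and signed URLs are all hypothetical stand-ins for things configured in earlier setup steps.

```python
# Hedged sketch: run a hypothetical ProOptimizer activity on a .max file
# via Design Automation v3 and poll the work item until it finishes.
import time
import requests

APS_BASE = "https://developer.api.autodesk.com"
ACCESS_TOKEN = "<token with code:all scope>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

work_item = {
    "activityId": "MyNickname.ProOptimizeActivity+prod",  # hypothetical activity
    "arguments": {
        "InputMaxScene": {"url": "<signed URL of the input .max file>"},
        "OptimizeParams": {"url": 'data:application/json,{"percent": 25}'},
        "OutputModel": {"verb": "put", "url": "<signed URL for the result>"},
    },
}

resp = requests.post(f"{APS_BASE}/da/us-east/v3/workitems",
                     headers=HEADERS, json=work_item)
resp.raise_for_status()
item_id = resp.json()["id"]

# Poll until the work item leaves its pending/inprogress states.
while True:
    status = requests.get(f"{APS_BASE}/da/us-east/v3/workitems/{item_id}",
                          headers=HEADERS).json()
    if status["status"] not in ("pending", "inprogress"):
        break
    time.sleep(5)

print(status["status"])  # "success", or a failed status with a report URL
```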

OK, so let's now move on to the M&E Flow strategy. So we looked at what the Autodesk Platform Services offers in general across the whole design and make space, and the M&E Flow strategy is coming into the mix now more formally. And really, the point of the Flow strategy is to provide some of these high-level goals.

So one of the big ones is to help orchestrate open standards. That's a big one; we'll talk about it in a minute. Better collaboration; tackling complexity in asset management while delivering flexibility; and unlocking data and insights. That last one is what we talked about before with the concept of dashboards: being able to get insight into your data as well as just accessing your data.

And of course, everything will follow the APS guidance in terms of API first. REST APIs will be used on the compute side, and GraphQL will be used on the data side. Additionally, because M&E workflows typically include Python, and a lot of our desktop software, of course, supports Python, there will be Python SDKs to help pipeline developers and so forth. And the tagline for this effort has become unbridled creativity.

So from a standards perspective, I just want to point out the standards that we're working with. We work with MovieLabs directly. The VFX Reference Platform is something we've supported and followed for many years; tools like Maya and 3ds Max support the library versions it suggests.

And then the Academy Software Foundation is another one. So we want to keep these standards in mind so that the out-of-box experience is as close to industry standards as possible, while also giving you flexibility to adapt to the demands of your workflows and pipelines.

So to start, I want to mention some rebranding. Anyone using these products, of course, already knows this, but just to understand the bigger picture: the Autodesk Media and Entertainment hero products remain the same. Products like 3ds Max and Maya are not going to be rebranded.

They will be integral to the workflow strategy, though. So don't think of them as being separated out, because those are our key portfolio products and they will be part of this Flow strategy. The products that were already cloud-connected will be moving to the Flow terminology.

So Flow Production Tracking, which was formerly ShotGrid, has been rebranded already earlier this year, and only the name changes. The functionality remains the same. And as we move forward, you'll see that ShotGrid will start interacting with the Flow Data Model at some point in the future as well.

And the same goes for Flow Capture, which was formerly Moxion. Both of these tools already have cloud connectivity. And so the industry group just felt that putting Flow terminology into these names was really important to show the strategy behind the industry group. There is a link here that will be also in the handout where you can sign up to stay connected to the Flow strategy as well, and you'll get newsletters and be able to see how things develop.

So let's take a look at the typical environment from a programming perspective and a conceptual perspective of where the M&E workflows come from. So there's always data somewhere. And so the vision is that Autodesk Platform Services will bring and allow this authoring of data and access to this data. And then at the API level, all of these different operations will be provided through APIs. And those APIs will be reused by any of our products. So up here, you'll see these products.

So that really supports the concept of API first. Autodesk tools are going to be using the same APIs that will be made available to you as a developer and a customer. So the vision is really clean and good. And to just point out where we're starting, compute and services is one big area, and that's where the Flow Graph Engine API comes into play. And then Asset Management is another one, and that's where the data model is really important.

So the two services that we're going to talk about today that are on the horizon, Autodesk Flow Graph Engine Service, currently in public beta, and Autodesk Flow Media and Entertainment Data Model. And we'll talk about the vision there, because we don't have any customer access to it yet. So we'll start with Flow Graph Engine Service.

The primary goal for this service is to provide a means for Flow customers to use compute services by offering M&E compute software as a cloud-enabled capability. So what does that mean? What it means is basically you, as a customer, will be using these services, whether you know it or not, within our products. But because they're also API-based, those services can be automated as well.

So for example, in the context of Flow Graph Engine, they are Bifrost graphs that you're executing. And so you'll be able to execute them directly from within the Maya environment, for example. But what if you want to batch process a bunch of those graphs outside of Maya and not tie up your Maya licenses for that activity? You can use the API to execute those graphs directly in the cloud and get your results back and free up your desktop resources.

So some use cases here are building bespoke compute workflows by external customers and also internal projects. And that's important to keep in mind that Autodesk is using the same APIs that we're making available to the public. Accessing VFX compute software as cloud-enabled services to integrate into studio pipelines.

As for current capabilities and benefits: you can currently execute Bifrost graphs, and we're looking at other operations to support. Retopology is another tool that's already using the Graph Service from a customer product workflow perspective, and we expect that to be made available at some time in the future as well.

And this allows users to access compute instances running VFX compute software as cloud-enabled capability. We are looking for input. So if you have other operations that you think would make sense here, you can use that feedback link for the beta and provide us with input.

So let's start with what does Bifrost provide. Even a lot of my customers are not really aware of what Bifrost is. It allows you to create stunning procedural effects. It offers a visual programming environment, a graphing environment: you drag and drop nodes to connect different behaviors to your scene and your geometry.

The output is realistic simulations and effects. The example scatters objects procedurally: on a plane that represents some terrain, it scatters trees in a random pattern and makes sure those trees always sit on the plane. Bifrost also supports creating and editing USD scenes.

Now Bifrost is for Maya, and this is what it allows within the Maya context. But there's also a standalone version of Bifrost, which has a full desktop SDK and can be used to programmatically create graphs. And then you could, in theory, take those graphs that you create programmatically with the SDK and send them for Flow Graph Engine evaluation as well. So there are lots of possibilities here.

So how does it work? First of all, just know that Bifrost can create a massive amount of data, and that can take a lot of time. That's where the benefit of the cloud comes into play: the compute for creating that data is contained in a virtual machine environment in the cloud instead of tying up your own hardware.

If you were to run this on a desktop machine, it's going to consume those desktop resources. Even if you don't have an artist sitting in front of it, that machine is sort of dedicated to executing this. So the service offloads that compute to the cloud.

It is a compute service, very similar to the Platform Services Design Automation. It uses REST to automate the evaluation of the Bifrost graphs and allows you to integrate this functionality directly into your workflows. So it could be something that you batch process in a pipeline. Maybe you run your graphs at night when no one's around and you have the results the next day.

There's plenty of things that would make this really an efficient way of executing Bifrost graphs. And the basic steps are you gather your inputs from the Maya Bifrost instance. You'll submit those inputs to the Flow Graph Engine Service to create a job.

And then your program will use a polling technique to check the status. So it's going to get scheduled. It'll get queued, and then you'll get a succeeded response or maybe a failed response. And when the job is finished, you'll download and interpret the results.
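
Here is what that submit, poll, and download lifecycle might look like from a Python pipeline script. The endpoint paths and payload fields below are placeholders patterned on the flow just described, not the real beta API; consult the Flow Graph Engine documentation for the actual routes and schemas.

```python
# Hedged sketch of the job lifecycle: create a job from uploaded inputs,
# poll its status, download outputs on success. All paths are placeholders.
import time
import requests

ACCESS_TOKEN = "<token provisioned for Flow Graph Engine>"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
FGE_BASE = "https://developer.api.autodesk.com/flow/graph-engine"  # placeholder

# 1. Create the job from previously uploaded inputs (graph JSON + USD geometry).
job_spec = {"graph": "<uploaded graph id>", "inputs": ["<uploaded usd id>"]}
job = requests.post(f"{FGE_BASE}/jobs", headers=HEADERS, json=job_spec).json()

# 2. Poll: the job is queued and scheduled before it succeeds or fails.
while True:
    job = requests.get(f"{FGE_BASE}/jobs/{job['id']}", headers=HEADERS).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(10)

# 3. On success, download and interpret the results (e.g. output1.usd).
if job["status"] == "succeeded":
    for output in job["outputs"]:
        data = requests.get(output["url"], headers=HEADERS).content
        with open(output["name"], "wb") as f:
            f.write(data)
```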

So let's take a look at a practical example. We're going to take the input geometry, in this case a USD file: the terrain, which is basically a plane from Maya. And we're going to also take the Bifrost graph.

So you'll publish the graph out of Maya. It'll be in a JSON format. And those become the inputs. The Flow Graph Engine Service will run the compute and then the output will be the effect. And if you notice here, the output is different from the input. And so that's also a data efficiency.

You don't necessarily need to maintain this if all you want is your results. And this is just an example of how it's viewed in the scene. So let's take a look at a sample program that does this outside of Maya.

For this Flow Graph Engine API demo, I want to start in the documentation and just show that we are currently in beta. Full general availability will come soon. And this documentation is quite complete. It talks about how you can submit a job, and showing you code examples, and so forth.

So the field guide is an important one to read. It contains all the different steps that you would need to understand overall. And then I would also suggest running through the how-to guide, which is basically a tutorial. So this is the tutorial that basically is connected to our samples as well. And you'll find our samples here. We currently have a JavaScript sample and a Python sample, and we're working on a new one, which I'm going to show in a minute, that has a user interface, which makes it a little bit easier to understand and see.

The other thing, as mentioned before, is that it does run Bifrost graphs. So it's helpful to know about Bifrost and the fact that Bifrost is an integral component of Maya. So currently, using Maya, you can use Bifrost within Maya, create these graphs, and have these graphs automated in the cloud by running them through an automation pipeline using the API.

And finally, I just want to point out that I am going to be running a new sample in this demo. It's currently in an individual's account, but we will be soon moving this to the Autodesk Platform Services account. And you should be able to access it directly there.

So this is a Node.js sample. And I've got my Node.js command prompt and a copy of the source code from this GitHub repo. So to start, you would do npm install. And this has already been done, but just to show that that's the technique.

And in a default state, it's going to download all those npm packages, of course. And then I'm going to do npm start and that's going to basically execute this sample. So I can go to localhost now. And again, this is a sample, and you can see it running here.

And one of the things that you'll need to do is paste in your own client ID and client secret. These are obtained through the Autodesk Platform Services app creation, and you'll need to make sure that you have the Flow Graph Engine provisioned.

So I am going to paste my client ID and my client secret. And normally, these would be part of your app and not visible to the customer. But this is a sample and we want to show the required pieces to get the sample to run and how it connects back to your account in the APS services.
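
Behind that login step is a standard APS client-credentials exchange. Here is a minimal Python sketch of it; the v2 authentication endpoint is the standard APS one, but the scope list you need depends on the services your app calls, so treat the scopes below as an example.

```python
# Hedged sketch: trade the app's client ID/secret for a two-legged OAuth
# token. Keep the secret server-side; never ship it in client code.
import base64
import requests

CLIENT_ID = "<your APS client id>"
CLIENT_SECRET = "<your APS client secret>"

credentials = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()

resp = requests.post(
    "https://developer.api.autodesk.com/authentication/v2/token",
    headers={"Authorization": f"Basic {credentials}"},
    data={"grant_type": "client_credentials", "scope": "data:read data:write"},
)
resp.raise_for_status()
token = resp.json()["access_token"]  # send as "Bearer <token>" on later calls
```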

So with that client ID and secret, I can log in and I can create a new job. This is a job-based service. So I'm going to create a name for it, and we'll just call it fred01. And I need to upload the Bifrost graph.

So this is a JSON. And what I'm going to do is I'm going to browse to my local file system. And I basically created a data folder here, just so I could find that input and other data quickly. So I'm going to open that. And then this particular sample also requires a USD file, which contains the geometry.

And then I'll go ahead and create this job. And what it's doing is showing you the different steps of how this is operating. So there was the upload step. As soon as the job is created, it gets a unique ID. It's going to get queued and then scheduled to run in the service.

And this is how you can stack things up and batch process them, basically, to execute any number of graphs. You can run them in parallel and the sample will kind of allow you to test that and see how that works. So we're going to let this run. It should only take a few seconds, because it's a pretty simple graph.

You'll see that the job succeeded. And what I need to do now is download the output. So I'm just going to download it. I've run this before, so I'm going to overwrite the one that's already there.

And I do have 3ds Max running here in the background. And I previously imported the input plane, the one that went with the Bifrost graph, to basically execute against. And now I can import the results of the graph, and that was that output1.usd.

And again, this was a scatter graph. So what we're going to see is all the trees that were computed to be placed on that plane. So pretty cool, easy to execute, and you can batch process all you want with this API.

So we have quite a few resources available for the Flow Graph Engine API. And remember, it is in beta at the moment, but it will be coming to full production general availability soon. All of these links, I'll provide in the handout. So you'll be able to get those as a download.

And just keep in mind that while we're in beta, the service is free. So it's available to you to test and work with to see how it might work best in your workflows. While it's in beta, it does have a few limits set up. So the CPU and memory configurations are fixed. But once we go to a paid model, then you'll be able to configure those as you want to best service your Bifrost graph.

So if you have a very complex graph, you probably want more CPU and more memory for the job execution. And it will be consumption and token-based. So the typical APS token model is how you will pay for it through the APS platform. And we're still looking for feedback. So if you're interested in trying this out, the feedback link will take you into the project, where you can provide input as you want.

So that leads us to the next topic, which is the Flow M&E Data Model. The data model itself is the underlying software architecture piece, and what it's going to be doing is enabling asset management services for many use cases. The primary goal is to provide a means for Flow customers to collaborate on assets and data in a centralized fashion without worrying about data stability and duplication.

So the way this is going to look, again, if we go back to the industry clouds: our core portfolio of products are going to be authoring that data, for the most part. Now, that doesn't mean that as a developer you can't also write data, and metadata especially is going to be important for the M&E workflows. But let's take a look at how this is unfolding from the Autodesk corporate perspective.

Fusion was the first data model to go into general availability, and it's basically using the Fusion software to write the data model during the authoring of a scene. So it's already got that vision of eliminating the concept of files, which we'll talk about in a minute. Now, Forma has the AEC data model, and it's primarily working around the Revit software. The schema there is very much tied to the Revit software and the data layout from Revit.

And that's important to note, because the AEC industry is very much tied to BIM data and how Revit handles data, as well as formats such as IFC. So they're chasing standards that have already been pretty well established for a while. And then when we look at Flow, this is where there's going to be a lot of capability on the customer side to create the schemas and the data structures that are most important to you as the customer.

And of course, once the data is there, it can be interacted with. So customers, partners, third parties, pretty much anyone in the pipeline that's interested in data and can do something with that data is going to be a consumer and potentially even an author of that data.

So fundamentally, the idea of putting these data models in the cloud is to move away from files and move more towards data. Today, we have files and folders on disk, and that's a big problem. Where are the files? Which server are they on? Which cloud service are we using today?

Whereas in the future, we want you to be managing the assets in a project. That's all you really care about. It shouldn't matter where the files are on disk, or which version of the files, and so forth. Today you have to deal with naming conventions and file paths; we want you to deal with data, with structured relationships and versions.

Scripts and tribal knowledge are very common in the M&E space, and we want to give you more software-defined workflows. So rather than having scripts do various things, we want well-defined software, either software that you write or, of course, software that Autodesk provides, to complete your workflow strategy. Data silos and special files are also something we see often with customized customer workflows.

And we want that to really become a centralized information model. And so this is the grand vision for all of the Autodesk data models and we're taking baby steps to get there. Files are not going to go away overnight. But as you start thinking about data, we want you to think about it as data and not as files.

So how would this look in the Flow Data Model? So with Flow, it's all about assets, and versioning of those assets, and metadata against those assets. So for example, and this is not fixed, and this could be customized, you could have an ID of some sort, the name of the asset, of course, the immutable versioning.

So any time the asset changes for any reason, a version is created. And you can always roll back to a version. You can look at a version that might be used in prior scenes and whether you want to use that version or maybe an updated version. And of course, the history. Where have these assets been used, and so forth.

And an asset will be made up of components. And this is where we drill down into a more granular type of data. So it could be binary data, pipeline data, studio data, things like that. Maybe the studio location that produced it or the division that produced it, things that are important to you as a customer.

And within that, there will also be things like types of data. So in this example, binary data: a 3D model is a perfect example of that. Movie files, image frames, textures. And, of course, app-specific binaries. We're looking at ways to even connect this outside of the Autodesk ecosystem; Houdini is being considered to connect to this as well.

And then on top of an asset and its components, you can have relationships. So now you can have a new asset actually derived from a prior asset, but be a new asset on its own. So you can manage the original asset and the base asset by itself. And the derivation automatically picks up those things through the relationship.

Dependencies: the asset requires this set of textures, so it's dependent on them. There are all kinds of ways that this asset concept can apply to your specific pipeline.
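
Since the data model is still a vision, here is a purely conceptual Python sketch of the ideas above: assets with immutable versions, typed components, and named relationships. None of this reflects a shipping schema; it only illustrates the shape of the concept.

```python
# Conceptual sketch only: assets, immutable versions, components, and
# relationships as described above. Not a real M&E Data Model schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Component:
    kind: str      # e.g. "binary", "pipeline", "studio"
    payload: dict  # e.g. {"type": "3d-model", "uri": "oak_v1.usd"}

@dataclass(frozen=True)
class AssetVersion:
    number: int                        # immutable: every change makes a new one
    components: tuple[Component, ...]

@dataclass
class Asset:
    asset_id: str
    name: str
    versions: list[AssetVersion] = field(default_factory=list)
    # relationship name -> related asset ids, e.g. "derived-from", "depends-on"
    relationships: dict[str, list[str]] = field(default_factory=dict)

    def commit(self, components: tuple[Component, ...]) -> AssetVersion:
        """Any change creates a new version; older versions stay addressable."""
        version = AssetVersion(len(self.versions) + 1, components)
        self.versions.append(version)
        return version

# Example: a tree model that depends on a separate texture asset.
tree = Asset("asset-001", "oak_tree")
tree.commit((Component("binary", {"type": "3d-model", "uri": "oak_v1.usd"}),))
tree.relationships["depends-on"] = ["asset-042"]  # the texture set
```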

And with this format, it's using components to really give the assets personality. So when we look at different types of assets, it could be a 3D model, it could be a take, it could be a shot. And the typing is going to be inherited through the schema.

So the schema itself can be inherited. So that's the point here. And customer-defined components are really important as well.

So let's take a look at a couple of workflow examples. So Flow Production Tracking, of course, what you need to manage and keep track of is tasks. And so this is what it might look like within the Flow Production Tracking interface.

Maya and other DCCs, I mentioned Houdini. Typically, there, you're going to be dealing with character assets and this might be what the interface would look like for that tool. And then finally, Flow Capture, which is managing sequences, and takes, and dailies, basically. And so this might be what it looks like there.

So very similar user interface, but very specific to the tasks at hand and the workflows at hand. So how is this going to work? Again, the DCCs will be authoring the data. And unlike other Autodesk data models, the M&E data model will provide a customizable schema.

We will have a default out-of-box schema based on those industry standards that we talked about before. So we're going to try and make the initial schema as close to those standards as possible so that out of the box, you don't have to do any extra work, unless you need some custom behavior. Connections currently being considered are for multiple DCC products.

We saw Maya, Flow Capture, and Flow Production Tracking, and I mentioned Houdini is in the works as well. From a developer perspective, it is going to be GraphQL, which allows industry standard access for developers and is supported by all the Autodesk industry data models. And then, specific to M&E, there will likely be a Python SDK that will allow you to integrate this into your existing tools and pipelines as well.
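
To give a feel for that combination, here is a hypothetical sketch of granular GraphQL access from a Python pipeline tool. The endpoint, query shape, and field names are all invented for illustration, since the M&E data model is not yet available.

```python
# Hypothetical sketch only: the M&E Data Model service does not exist yet,
# so the endpoint and schema below are invented for illustration.
import requests

ACCESS_TOKEN = "<token>"

query = """
query CharacterVersions($name: String!) {
  assets(filter: {name: $name}) {
    name
    versions {
      number
      components(kind: BINARY) { type uri }
    }
  }
}
"""

resp = requests.post(
    "https://developer.api.autodesk.com/flow/graphql",  # invented endpoint
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"query": query, "variables": {"name": "oak_tree"}},
)
resp.raise_for_status()
print(resp.json()["data"])
```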

So just remember that the M&E data model is a vision at this point, but other Autodesk data models are already in a production state. Some of them are still in beta, but they are running in our production services. So you can access all three of these other data models, the AEC data model, the manufacturing data model, and Data Exchange, today to get a sense of how these data models are going to work.

So in summary, I just want to reiterate that Autodesk Platform Services provides industry standard APIs and SDKs supporting a variety of workflows. So don't think that you only need the Flow services, for example; other services in the stack may be perfectly viable for whatever workflow you need. The APS Viewer is a great example of that.

And APS does already serve M&E customers today. So this is not an entirely new concept, but it was sort of limited to 3ds Max and certain file types. So this is bringing a greater presence of M&E into the stack.

Flow Graph Engine API is available today. And the M&E data model and other services are coming soon. And we'll provide these resources in the handout.

So look for that. And you'll be able to access everything that I showed here after the event. Thank you for your time and hope this was valuable.
