AU Class

Democratize Data with AEC Data Model and AI: Break Data Silos Using Autodesk Platform Services


Description

Join us to see how GOLDBECK used the new AEC Data Model and Data Exchanges offered by Autodesk Platform Services to democratize data access across departments and different applications. Our session will highlight how we access data from native software tools using Data Exchange, reducing the need for manual data drops and empowering stakeholders to make informed decisions based on a single-source-of-truth AEC Data Model. With AI-based prompts powered by ChatGPT, we will illustrate dynamic ad hoc queries, enabling users to find, access, and consume model data in a new, intuitive way (for example, "Show me the CO2 footprint of all windows"). Our real-world examples showcase interactive data visualization and collaborative platform services that allow you to provide relevant data to different project stakeholders throughout the project lifecycle. Learn how to harness innovative technology with the AEC Data Model API and Data Exchange API to streamline workflows, break down data silos, and make data available everywhere.

Key Learnings

  • Learn how to seamlessly access data from native software tools, eliminating the need for manual data drops.
  • Learn how platform services can make relevant data available to all project stakeholders throughout the project lifecycle.
  • Learn how to harness AI-based prompts for dynamic ad hoc queries, enabling users to interact with model data in a new, intuitive way.
  • Learn about reducing software engineering efforts and boosting development agility by empowering citizen developers to use data.

Speakers

  • Alexander Stirken
    Alexander Stirken is a dynamic IT Project Manager and visionary leader at GOLDBECK, where he spearheads the integration of cutting-edge data technologies. With a focus on Building Information Modeling (BIM), Alexander is at the forefront of leveraging cloud products to drive transformative construction development. His expertise in web-based configurator solutions ensures seamless execution of innovative BIM projects, making a significant impact on large-scale ventures. Holding a Bachelor's degree in Civil Engineering and a Master's degree in Structural Engineering, both from prestigious institutions, Alexander's academic background bolsters his ability to blend technical prowess with strategic leadership. Furthermore, Alexander's fervent passion for software engineering enhances his already multifaceted skill set. Driven by an unwavering dedication to innovation, he is determined to revolutionize the construction industry through the utilization of cloud products. With this vision, Alexander Stirken is poised to inspire and lead the way towards a groundbreaking future of construction.
  • Jan Christoph Kulessa
    Jan Kulessa is an expert with comprehensive knowledge in the development and implementation of solutions. He has specialized particularly in the development of applications with Autodesk Technology and Microsoft Azure, leveraging his passion for cloud software to create tailored solutions. His expertise supports companies in optimizing their processes and increasing efficiency. With a decade of experience in software development, specifically in the areas of Building Information Modeling (BIM), Computer-Aided Design (CAD), and Virtual Design and Construction (VDC), Jan Kulessa is a sought-after expert and innovator in the construction industry. His extensive experience shapes his understanding of digital technologies in this field. In his role as a dedicated Solution Architect at GOLDBECK, Jan Kulessa significantly contributes to the development and implementation of innovative solutions for the construction industry. His solid knowledge and practical approach enable him to analyze complex requirements and design customized solutions that meet individual customer needs. In addition to his technical skills, he demonstrates impressive leadership qualities as the Team Leader of Software Development at GOLDBECK. His clear communication, effective leadership techniques, and talented project management have contributed to the successful and timely completion of complex software projects. As a sought-after speaker and expert in technologies and APIs, Jan Kulessa is frequently invited to conferences and symposiums where he shares his experiences in software development, requirements analysis, and software architecture. His impressive career and deep understanding of the challenges and opportunities in the construction industry make him an inspiring and visionary speaker. He encourages companies to seize the possibilities of digital transformation and create innovative solutions for the future.
Transcript

ALEXANDER STIRKEN: Welcome, everyone, to our session Democratize Data with AEC Data Model and AI. Over the last four months, we have participated in the Autodesk Data Model private beta, and today we want to tell you about our experiences with this new technology and talk about breaking data silos in our company using Autodesk Platform Services.

Before we actually start, I want to introduce our speakers for today. My name is Alexander Stirken. I'm a structural engineer by training and a former research assistant in digital engineering. I now have four years of experience in software engineering at GOLDBECK, and currently I'm working as an IT project manager in BIM, with a focus on configurators and Autodesk cloud technologies, which we will talk about a bit more today.

With that, let me hand over to Jan, who will introduce himself.

JAN KULESSA: Thank you, Alex. I'm also working at GOLDBECK, with over eight years of experience in software engineering. Currently I'm working as a solution architect focusing on BIM technologies, Autodesk technologies, and process automation. So let me introduce you to GOLDBECK, the largest family-owned construction company in Germany.

Let's have a look at what GOLDBECK has been doing. At GOLDBECK we build future-oriented properties in Europe, from multi-story halls, offices, and parking garages to residential buildings. If a customer builds with GOLDBECK, they get everything from a single source, from the first idea through construction and services across the complete life cycle of a building.

So let's have a look at GOLDBECK at a glance. We have existed for over 50 years and are already in the second generation of a family-run business. Last year we completed over 570 projects.

This was made possible by our 14 plants and 111 locations in Europe, with over 12,000 employees. One of our unique strengths is integrated digital design, with more than 2,000 Revit users and 6,000 BIM 360 users. So let us talk a bit about our key competencies.

One of them is element-based construction: think of it like building blocks, just at a much larger scale. We enable this with our modular, standardized system, which makes building incredibly fast, reliable, and cost-efficient. If we now step ahead, we will see the challenges.

So today, in the age of big data and AI, we in AEC still exchange data as files, which presents us with some pain points that we'll explain later. So stay with us. Besides that, we also have real-world challenges. The federal government in Germany wants to build at least 400,000 apartments a year, while the market faces lots of challenges,

such as increasing material and labor costs and higher interest rates for construction loans. [CHUCKLES] So if we look at the numbers, we see a shortfall of more than 110,000 apartments. So the question is, how can we solve this problem despite all these challenges?

This is where GOLDBECK comes in. With our systematic building approach, we can build fast, reliable, affordable, and also sustainable buildings. This is also what keeps us ahead of our competitors. One of our solutions is the well-known BIM method.

GOLDBECK implemented BIM 15 years ago, and it is well established across the company. With BIM as our primary digital design method, we serve all phases of the construction process in-house, and in our vision, everything about a building comes together in a centralized digital twin. As already mentioned, Alex will now explain some pain points in the construction process.

ALEXANDER STIRKEN: Thank you, Jan, for the introduction of GOLDBECK, the pain points we are facing, and the big challenges we currently have in Germany. As Jan mentioned, we already implemented BIM 15 years ago, but we are still facing some major pain points.

As you all know, across the different phases in the construction industry you use a wide variety of native software solutions. For example, we are using Revit for design, and we are using Tekla, Inventor, and Bogart for engineering. And the problem is that within these native software solutions, you don't have a uniform description of the component data.

So every column, let's say, is described differently in every solution. You have a Revit column, a Tekla column, an Inventor column, and they don't have a uniform description. Because of that, at GOLDBECK we have built connectors that transfer this data.

These are in-house applications that transfer data from Revit to Tekla, from Revit to Inventor, or from Walker to Tekla. But all of them have to be maintained by us, and that's very time-consuming. And all the automation tools that we have built over time use the data model of the respective native software solution.

So, for example, all the Revit add-ins that we have built with the Revit API cannot be used in Tekla, and they may use a different data structure for, let's say, a column than Tekla does. That's very problematic, because when changes are made to our components, and that happens often in the real world (changes in the GOLDBECK system, changes to the possible physical representations of a component), you have to maintain these changes in several different applications.

And as you know, this is very time-consuming. There is also no real cross-system data master for our content. Yes, we have a data master for content in Revit, let's say, where we have all the Revit families and their versioning, but it does not exist across all our systems. And that's very painful because, as I said, when you have changes you have to make them everywhere.

So what is the current state? Yes, we can do model processing, but only in one direction. On the one hand, it's good that we can do it at all, but it's only one-directional. And even though the construction process starts at the very beginning with design, then detail planning, then engineering, and then fabrication, you still have interchanges between these processes.

And because we have everything in-house, these processes also run in parallel from time to time. Currently, we are sharing models as files. So if you want to transfer data from, let's say, Revit to Tekla, you do it as a whole model. That's time-consuming, and sometimes you don't need all the data. On top of that, all this data sits in closed data pools, because if I don't have Revit on my machine, it's hard for me to access this data.

Someone working in the finance department, for example, has no access to this data because they don't have Revit on their machine. And, as I mentioned before, the data transfers we have in-house have to be managed by us. That's very, very hard and time-consuming, because we are actually a construction company, not a software company.

And that's why we don't want to do that in the long run. So where do we want to get to? First of all, we want multidirectional model processing: bringing data from one native software solution to another, enhancing the information, and then bringing it back to the first system. That's very important in our digital process.

Secondly, we want to share granular model data. If I just need information about the columns in my building, then I want to share only that data and not the whole model. And lastly, we want to make model data available to everyone in the company. Even the colleague working in finance should have access to the data if they want to know, for example, how many columns we built last year.

So all of this comes together in a highly automated digital process chain. This is what we need to reach the goals that Jan mentioned before, being fast, being affordable, and being sustainable. And that's what we want for all construction projects at GOLDBECK. And because of that, we participated in the private beta program for Autodesk Data Model, because we believe this is a technology that can help us to reach each of these goals.

And before I show you what we have actually done and what our experiences with the Autodesk Data Model are, I first want to introduce it very quickly. The big difference from the current file logic in our industry is that the Autodesk Data Model breaks these monolithic files into smaller bits of data that are managed in the cloud. Yes, we are already managing the files in the cloud.

But now these are smaller bits of information, and you can easily access and retrieve this granular object data using APIs. This is very interesting, because these APIs are extensible, flexible, and, most importantly, federated. What is interesting about the Autodesk Data Model is basically that you can retrieve all the data using GraphQL queries.

So it's not that you have to craft a JSON query or a RESTful API call; it's all done through the GraphQL language. And using this technology, you can very easily navigate through your ACC hubs, your projects, and their designs. Everything is managed in Autodesk Construction Cloud: you have your different hubs, your different projects, and the different designs in these projects.

And you can retrieve them as granular data, such as the elements of the designs, their parameters, and all their values. That's very cool, because in the end you can list all the available property definitions of a design to, let's say, identify which family is missing a property, or to see which properties actually exist in my model.
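
To make this concrete, here is a minimal Python sketch of how such a GraphQL request could be posted to the AEC Data Model endpoint to walk from hubs to projects. The endpoint URL and the field names in the query are assumptions for illustration only; the beta schema used in this session may differ, so check the current API reference before reusing this.

```python
import os
import requests

# Assumed AEC Data Model GraphQL endpoint; verify against the current APS documentation.
ENDPOINT = "https://developer.api.autodesk.com/aec/graphql"
TOKEN = os.environ["APS_ACCESS_TOKEN"]  # OAuth token with data read scope

# Illustrative navigation query: hubs and the projects they contain (assumed field names).
NAVIGATION_QUERY = """
query ListProjects {
  hubs {
    results {
      id
      name
      projects {
        results { id name }
      }
    }
  }
}
"""

def run_graphql(query: str, variables: dict | None = None) -> dict:
    """POST one GraphQL document (query text plus variables) and return the JSON payload."""
    response = requests.post(
        ENDPOINT,
        json={"query": query, "variables": variables or {}},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for hub in run_graphql(NAVIGATION_QUERY)["data"]["hubs"]["results"]:
        print(hub["name"], [p["name"] for p in hub["projects"]["results"]])
```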

Is someone using an old family? Questions like this become very easy to answer with this data model. Also, every change that you make to the model generates a new version, so in the end you can compare these versions with one another. That's very interesting, because then you can see what the history of my column actually was: who made changes, when did they make them, and what were the effects, let's say, on the construction side?

And being able to query across all designs that we have at GOLDBECK is a completely new possibility for doing data evaluations for our complete company. You can query across multiple projects and ask, OK, what are the columns that I have used in all projects? How many of them are there? And questions like this.

So this is what is very interesting about the Autodesk Data Model: there are completely new possibilities for handling models and their data. Let's quickly talk about the structure of the Autodesk Data Model. First of all, on the left side here, you can see the basic structure, and as you can see, it's actually quite simple.

So you have a design, and every design has versions. You can think of a design as one model. And every model has a number of elements that describe the different components of the building, let's say column, girder, slab, wall, and so on. Every element has a list of properties, and these properties are further defined in the property definitions.

One of the interesting parts of the Autodesk Data Model is that these elements can be linked to each other via reference properties. I've drawn a small graph here. Let's say I have a building. This building obviously has some levels, and it also has some rooms.

Every room knows which level it is on, every wall knows which level it is on, and every window knows which wall it refers to. You can basically see it here: this is a window, this is a wall, and the window knows via a reference property which wall it is part of.

And that's super-interesting because, with this data structure, you can use it to define complex relationships, and also use this data model as a foundation to build your own optimizations on top of it. And the most interesting part actually is that you can then ask questions to this model.

Here you can see it highlighted. If I ask a question like, show me all windows on the ground floor of the building with a white interior, this can easily be expressed as a query over the graph. Every building has a number of rooms, I can get all the windows, and then I can filter these windows for those whose material color is white.
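
To illustrate the reference-property idea independently of the real API schema, here is a small self-contained Python toy that models the slide's example with plain dictionaries and answers the question by following references; the element structure and values are invented for this sketch.

```python
# Toy graph of elements with reference properties (invented data, not the ADM schema):
# a window references a wall, a wall references a level, and we filter by a value property.
elements = {
    "level-0": {"type": "Level", "name": "Ground Floor"},
    "wall-1":  {"type": "Wall", "level": "level-0"},
    "win-1":   {"type": "Window", "wall": "wall-1", "interiorColor": "white"},
    "win-2":   {"type": "Window", "wall": "wall-1", "interiorColor": "grey"},
}

def windows_on_level(level_name: str, color: str) -> list[str]:
    """Follow window -> wall -> level references and keep only windows of the given color."""
    level_ids = {eid for eid, e in elements.items()
                 if e["type"] == "Level" and e["name"] == level_name}
    return [eid for eid, e in elements.items()
            if e["type"] == "Window"
            and e.get("interiorColor") == color
            and elements[e["wall"]]["level"] in level_ids]

print(windows_on_level("Ground Floor", "white"))  # -> ['win-1']
```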

The whole Autodesk Data Model thus lets us run completely new queries and data evaluations on the data we actually have at GOLDBECK, with our up to 600 projects per year. OK, that's it so far about the technology, just a brief overview. Now we want to show you what we actually built in the private beta program, because from our side, we could play with the API in Postman all day, but that's not as interesting as building something with these new APIs.

And that's why we thought, OK, why not ask questions like the one on the previous slide to a chatbot and get answers about the model from the Autodesk data. Before I go into technical details, let me introduce you to Lisa and Nick, just to give you a brief idea of the use cases for this kind of application in our company. First, we start with Lisa. She's a construction manager.

She's using BIM360 for site management. Yes, we have BIM360 in place. All of our construction managers are using it currently. But they don't have access to Revit. So they just can use the models that they see in their systems. And Lisa may ask herself, OK, how many girders are getting assembled tomorrow?

I need to know how many trucks are coming, and I need to manage that. Second, she may ask herself, OK, are they actually all ready for delivery, or is there any problem in the production facility? And she also wants to know, do they actually fit on a regular truck, or do I need to order special transportation, like a larger truck that is only allowed to drive at night in Germany?

Lisa has far more questions in her job as a construction manager, but these examples give you an idea of the use cases. And then we have Nick. Nick is a project manager in the production field, and he's currently planning a new production facility for GOLDBECK. Nick also has no access to Revit, because he's not actually modeling anything.

Nick also has some questions about our models. What kinds of columns do we have? What is their distribution, so how many columns of each sort did we construct last year? And then maybe he wants to know how many columns exceeding a height of 70 meters were constructed last year, to dimension, let's say, production equipment.

And maybe he wants to know the maximum weight of a column assembled last year, to check, OK, is my crane sufficient? So these are just some examples from two personas we have at GOLDBECK. Our idea was to provide a solution that answers all these questions for our employees at GOLDBECK, and that's why we came up with our Ask the Model PoC.

We built this application during our participation in the Autodesk private beta program. As you can see, it's a web application: we have the Autodesk Viewer here, a chat here, and a list of results on this side. Our idea was that everyone can ask a question to this application, or rather to the chatbot.

For example here, show me all girders at the first parking level. And then these questions are getting translated into Autodesk Data Model API calls. And then we will use this data to visualize it, to give answers, and to show you the results.

So the general idea was: let's interact with the model without deeper knowledge of BIM or the data. I don't need to know what types of girders I have and things like that. I can just ask in natural language and don't have to care about the data that lies behind it.

OK, before we go into technical details, I will show you an example. Here we have Lisa, and she asks, isolate all girders in this BIM model, because she is only interested in girders. As you can see, she has asked the question, and we are doing some processing in the background.

This actually takes some time because, as you can see here in the answer, we have 511 girders in the complete building. This is where we call the API to retrieve the data, and we then have to do some pagination, because 511 elements are obviously a lot. And to show you what we are actually doing with the GraphQL queries I mentioned before, I've written down the query that answers this question: elements by design and version.
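
The exact query from the slide is not reproduced here, but the following sketch shows the general shape of such a paginated elements-by-design request, reusing the run_graphql helper from the earlier sketch. The operation name, filter string syntax, and pagination fields are assumptions for illustration and may not match the current schema.

```python
# Sketch of a paginated "elements by design" request (assumed field names and filter syntax).
ELEMENTS_QUERY = """
query ElementsByDesign($designId: ID!, $filter: String!, $cursor: String) {
  elementsByDesign(designId: $designId,
                   filter: {query: $filter},
                   pagination: {cursor: $cursor}) {
    pagination { cursor }
    results {
      id
      name
      properties { results { name value } }
    }
  }
}
"""

def fetch_all_elements(design_id: str, filter_query: str) -> list[dict]:
    """Page through every matching element; 511 girders will not fit in a single page."""
    elements, cursor = [], None
    while True:
        page = run_graphql(ELEMENTS_QUERY, {
            "designId": design_id,
            "filter": filter_query,   # e.g. a filter mapping GOLDBECK family names to girders
            "cursor": cursor,
        })["data"]["elementsByDesign"]
        elements.extend(page["results"])
        cursor = page["pagination"]["cursor"]
        if not cursor:                # no further page, we are done
            break
    return elements
```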

So we have a query here and some data; we will have a look at this in a second. We also have some variables that do the filtering inside this query. Let's have a deeper look into the query and the variables. First, on the left side: what we have to do in the application is find the right query for the question that was asked. So I ask, OK, show me all girders.

First of all, I need to check which of the available queries I want to use for that. And secondly, I have to check how I can filter inside this query for what I'm actually looking for. Here, for example, I first need to know which of the families we have at GOLDBECK are actually girders, because as you can see, I'm not filtering on a property named girders but on something else, which is in German and hard to understand here.

But the message here is I need to map my families to what are actually girders. And this is what I somehow have to teach my application. And because of that, I want to go over the process we are taking. So let's start with the key concepts.

As you have seen, users can ask questions to the model in natural language. So far so good: you write your question into the chatbot. And then, and this is where ChatGPT comes into play because we are doing prompt engineering, we have to teach ChatGPT to understand the intent of the user. So I'm asking for girders, OK, and I'm asking a question that is related to my BIM model.

But I could also ask a question that is completely decoupled from the BIM model, like, what will the weather be like tomorrow? Then my intent is different. So first of all, I have to detect the user's intent. And secondly, I have to somehow teach ChatGPT how to answer my questions with the right ADM GraphQL query to retrieve the actual model data, because when I ask, OK, show me all girders, ChatGPT has to translate this natural-language question into the right answer, an ADM GraphQL query.

So basically we are teaching ChatGPT to understand the API and to translate questions in natural language into actual ADM GraphQL queries. That's what we are doing. And in the end we do the visualization via the Autodesk Viewer SDK.
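
A minimal sketch of this kind of prompt engineering, assuming the openai Python package (v1+) and an arbitrary chat model; the system prompt, the few-shot example, and the GraphQL text inside it are illustrative, not the prompts or schema actually used in the PoC.

```python
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

client = OpenAI()

# System prompt plus a worked example ("few-shot"): the model is instructed to answer
# only with a GraphQL query. The example query text is illustrative, not the real schema.
FEW_SHOT = [
    {"role": "system", "content": (
        "You translate questions about a BIM model into GraphQL queries for the "
        "AEC Data Model API. Respond only with the GraphQL query, nothing else."
    )},
    {"role": "user", "content": "Return all entities with the category Wall."},
    {"role": "assistant", "content": (
        'query { elementsByDesign(designId: "<id>", '
        'filter: {query: "property.name.category==Wall"}) { results { id name } } }'
    )},
    # ...more worked examples for girders, levels, reference properties, and so on.
]

def question_to_graphql(question: str) -> str:
    """Ask the chat model for a GraphQL query that answers a natural-language question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do; the session used ChatGPT
        messages=FEW_SHOT + [{"role": "user", "content": question}],
        temperature=0,
    )
    return response.choices[0].message.content

print(question_to_graphql("Show me all girders in this BIM model."))
```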

OK, so let's have a closer look at the details. We start at the front end, where the user asks a question in natural language. The request is sent to our back end. And now it becomes very interesting, because in the back end we have a system that is agent-based.

So we have different agents from different domains. First of all, we send the request to an intent agent. Every agent has a prompt that describes, OK, you are an intent agent, you have to figure out what the user wants. So basically we are describing in natural language what ChatGPT has to do for us.

Every agent also has a set of examples. Here there are three, but you can have far more, and we are actually using far more than three, which demonstrate how to solve example problems. So basically you are just teaching ChatGPT in natural language what it has to do.

Then, as soon as we get the intent, and here the intent is, OK, someone is asking for girders, we send the question to the ADM agent, because we know someone is asking for BIM data, and the ADM agent has to take care of that. And here it's basically the same idea again: you have one prompt where you tell ChatGPT what it has to do.

For example: from now on, I want you to respond only with a GraphQL query matching the asked question. And then you describe it, and that's where the magic actually happens: you have to describe very precisely what you want ChatGPT to do.

Then you also provide it with examples. As you can see here, we provide examples like: return all entities with the category wall, together with the GraphQL query it has to deliver. So you're teaching it with easy-to-understand examples, and later on you ask harder questions; it picks up the relation and figures out the right queries.

As soon as you get the ADM query back from ChatGPT, it's basically easy: you make the API call to the ADM and you get the resulting IDs, so you know which elements were found.

Then we take these IDs together with the starting message, the actual question from the user, and bring them to a summary agent, which gives us a result from ChatGPT that we then write into the chatbot. This is the answer from the chatbot in natural language.

We also pass these IDs to the Viewer SDK to highlight the elements in the viewer, and the IDs are shown here on the results page. So the process itself is actually quite easy; the hard part is doing the proper prompt engineering to make ChatGPT understand the queries and generate the right ones, and also figuring out how to actually answer these questions with the Autodesk Data Model, so what kind of queries to generate. And that's the interesting part.
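
Put together, the agent chain could be sketched roughly like this, reusing the client from the few-shot sketch and the run_graphql helper from the earlier sketch; the agent prompts, intent labels, and response fields are assumptions for illustration, not the actual implementation.

```python
def ask_llm(system_prompt: str, user_message: str) -> str:
    """One chat call with an agent-specific system prompt (see the few-shot sketch above)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_message}],
        temperature=0,
    )
    return response.choices[0].message.content

def answer_question(question: str) -> dict:
    # 1. Intent agent: is this a BIM-data question at all?
    intent = ask_llm("Classify the user's intent as 'bim_query' or 'other'. "
                     "Answer with the label only.", question)
    if intent.strip() != "bim_query":
        return {"text": ask_llm("Answer the user directly.", question), "element_ids": []}

    # 2. ADM agent: translate the question into an AEC Data Model GraphQL query.
    graphql_query = ask_llm("Respond only with an AEC Data Model GraphQL query "
                            "that answers the question.", question)

    # 3. Execute the query and collect the matching element IDs (assumed response shape).
    result = run_graphql(graphql_query)
    element_ids = [e["id"] for e in result["data"]["elementsByDesign"]["results"]]

    # 4. Summary agent: question plus hit count become the natural-language answer;
    #    the IDs also go to the Viewer SDK in the front end for highlighting or isolating.
    summary = ask_llm("Summarize the query result for the user in one short sentence.",
                      f"Question: {question}\nMatching elements: {len(element_ids)}")
    return {"text": summary, "element_ids": element_ids}
```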

OK, let's go to another question. So Lisa may ask, show me all girders on level two and highlight them in red. This seems to be a fairly easy query, OK? Nothing new here. We want to have level two, OK, and we want to have them highlighted in red. OK, that seems to be understandable.

But here it becomes a little bit tricky. I've put the query and the variables we are using here; let's have a deeper look into them. The problem with this question, show me all girders on level two, is that we need a reference: we are asking for elements that are on level two, but level two is not just a property, it's a reference property.

So it is an element in itself, and what we have to do first is find this reference. We cannot directly ask, show me all girders on the level, because we first need to find the right level: level two, what is the actual ID of this level? So here we actually have to chain two queries: first find the reference, that is, find level two and its ID, and then run the actual query, show me all girders that are on this level.

And that's where it becomes very interesting, because the API does not accept a string or some other description for this parameter; it needs a proper reference, and that has to be an ID. So the interesting part is teaching ChatGPT to do such filtered queries, or queries that are chained together.
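
In code, the two chained requests could look roughly like this, again reusing the run_graphql helper and with assumed field names and filter syntax; the only point is that the first call resolves the level ID that the second call needs as a reference.

```python
# Two-step lookup: the reference filter needs the level's ID, not the string "Level 2".
ELEMENTS_BY_FILTER = """
query ElementsByFilter($designId: ID!, $filter: String!) {
  elementsByDesign(designId: $designId, filter: {query: $filter}) {
    results { id name }
  }
}
"""

def girders_on_level(design_id: str, level_name: str) -> list[dict]:
    # Step 1: resolve the level element and take its ID (assumed filter syntax).
    levels = run_graphql(ELEMENTS_BY_FILTER, {
        "designId": design_id,
        "filter": f"property.name.category==Levels and property.name.name=={level_name}",
    })["data"]["elementsByDesign"]["results"]
    level_id = levels[0]["id"]

    # Step 2: ask for girders whose reference property points at that level ID.
    return run_graphql(ELEMENTS_BY_FILTER, {
        "designId": design_id,
        "filter": f"property.name.category==Girders and property.name.level=={level_id}",
    })["data"]["elementsByDesign"]["results"]
```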

That was very challenging but also very fun to do. OK, then let's look at another example, which is also very interesting: which of these girders on level two are too long for a regular truck? Isolate them. Here it gets interesting, because I have not stated how long a regular truck is. And that's the super-interesting part: you can ask such questions, and ChatGPT also uses its own knowledge, from its own training data or basically from the internet, and makes the connection here.

As you can see in the answer, ChatGPT replies that 24 girders on level two have been isolated, and they are too long for a regular truck. So you have to work out, or rather ChatGPT works out, what the length of a regular truck is, and then it inserts this value into the query. That's super-interesting, because you can combine knowledge from the internet or from the industry with your own data.

Then let's go to the last query: display all of these girders that have been produced so far and highlight them in green. Here it is very interesting that ChatGPT also keeps track of the context. You cannot only ask a new question; you can also refer to the question you asked before. So here I'm asking, display all of these girders, meaning all the girders that are on level two and that are too long for a regular truck.

And of those, please filter for one of the properties, which in our case is the GOLDBECK status, and then highlight them again. We have not taught ChatGPT any of this context handling; this is what it does by itself. And that's very, very interesting and nice to see here.

OK, I showed you a lot of examples of what is working. Maybe we should also talk about what is not working, or what could be better. First of all, and this is one of our biggest pain points, because prompt engineering is where you have to do a lot of work: when you have an API change, you need to adjust all the prompt engineering.

If the interface changes, which in software engineering usually should not happen, then you have to adjust all your functions, so to speak. And that's what happened to the API. It's good for the API, because it gained a lot of functionality, but we had an API change during this project and then had to redo the prompt engineering.

Also, the API currently does not support really deep filtering. So for filtering very deeply from one element to another, there are some limitations. But we are definitely looking forward to enhancements of this API; it's definitely getting better. During the beta we saw a lot of improvements, and it was definitely fun to try it out.

And then what, at least for the engineering of the prompts, is very interesting and very necessary, is understanding the model quality. As a user I'm just asking, OK, show me all girders. But when you do the prompt engineering, you have to have knowledge about your data that you're actually retrieving in the end.

But that's not a problem, because the prompt engineering is done by those who have access to the data, who know the data and the data structure. We just have to have a deep understanding of the model quality. And, as you have seen in the first example, when you have large models with a large number of elements, the graph APIs run into limitations because of the pagination, and you get some performance limits. But compared to doing the analysis in your Revit model and working with the Revit API, it's much, much faster and it's super fun to use.

OK, that's it for the Autodesk Data Model; now to the prompt engineering. First of all, yes, obviously we are using ChatGPT here, so there are always data security concerns. We have different solutions for that, especially the Azure OpenAI service, which we are currently evaluating. It is also currently quite hard to track how tokens are used in these GPT implementations.

So we are still evaluating that. Secondly, combining prompts in a sequence results in long, time-consuming computations. We have seen it: we first ask ChatGPT, OK, check the intent of the user; then we ask for the ADM query; then we want to know, OK, does the user want to highlight or isolate elements in the viewer?

So if you chain these prompts, you have to expect some computation time. And then, obviously, despite the substantial number of examples we provided to ChatGPT, sometimes it produces incorrect queries. But let's say 95% of the time it hits the target. So from our perspective, very nice for a PoC.

OK, I talked a lot about Ask the Model. So I would suggest we move on to the outlook, and I will hand over to Jan again.

JAN KULESSA: Thank you, Alex, for this really nice explanation and presentation of Ask the Model, of what we can do, and also of the limitations we have. As you might already have guessed, we do not stop here. We want to move toward federated model data transformation, and for that we need some next steps.

One is bringing further domains into our agent system, as Alex explained. We already have the agent for the ADM, so we can get BIM data out of the box, and we can combine it with the knowledge of ChatGPT, which essentially gives us the knowledge of the world in summarized form. For a company like ours, we also want to integrate Microsoft Dynamics 365, or even custom tools from controlling and production, to make the system even more powerful.

Think about it: with the example we provided, you can ask the model which parts are already constructed or on site, and you can match that with production data. We also want to extend the Autodesk Data Model capabilities for multi-project queries. In our examples we limited it to one project, but you can also ask about multiple projects, as Alex explained, if you go into analysis and ask, OK, how did my projects evolve?

How many parameters, families, and things like that are used across all my projects? This is something we will look at in the future and want to do with the ADM. Another point, as already explained, is that we want to integrate our custom services. Think about a service for CO2 footprint calculation; we have one at GOLDBECK. We want to integrate that with Ask the Model and make it even more powerful.

So imagine Lisa wants to know, OK, what is the actual carbon footprint of this project? In the future she will ask the model and get a result. And again, we want to extend our prompt engineering for the deep-filtering capabilities of the ADM API, to make even more relational data connections. So we are thrilled to do that.

But there's still one more thing to do, and it's a step, or rather several steps, toward the cloud transformation. We talked a lot about the possibilities, and as you may remember, on the opening slides we said we want to get rid of single files. It's a long road, and we think the ADM and the Data Exchange functionality will get us there.

We want to investigate further use cases for the ADM, such as company-wide data extraction and quantity takeoff. We again have a custom tool for quantity takeoff, which would greatly improve in quality if we could integrate the ADM here. We also want to focus on further Data Exchange functionality, a special capability that is part of the ADM.

And one of our key needs, because we do our own production, planning, and engineering, is to connect other software in a native way. For example, we want to integrate Tekla with the ADM. If you have the same data source, it becomes much easier to exchange data, to see changes, and to connect, following the idea of a federated data model. This is how we want to drive the cloud transformation with federated data in AEC.

ALEXANDER STIRKEN: Yeah, thank you, Jan, for the outlook. We hope we could give you some insights into the ADM and make you a bit curious about using it. We had a great time participating in the Autodesk beta program, and we will definitely continue with these concepts, in this PoC and also in other projects.

And, yeah, thanks for having us. Hope you took some insights with you. And happy to see you around, and if you have any questions, feel free to reach out to us. Thank you very much, and have a good day.
