AU Class

Journey Towards Data-Centricity with Autodesk Platform Services


Description

Unlock the transformative power of data centricity in the architecture, engineering, and construction (AEC) industry at Autodesk University! In this session, we'll explore how Autodesk Platform Services and Autodesk Construction Cloud can support your organization's journey toward data centricity. Discover how to capitalize on advanced data management, visualization, and collaboration tools to enhance decision-making, streamline workflows, and drive project success. Through real-world case studies and practical demonstrations, we'll guide you on the path to becoming a data-centered AEC firm. Don't miss this opportunity to revolutionize your approach to data and gain a competitive edge in the industry.

Key Learnings

  • Learn how to harness the power of Autodesk Platform Services to centralize and manage project data for enhanced collaboration and decision-making.
  • Implement data visualization tools to gain valuable insights, identify patterns, and optimize project performance.
  • Learn how to streamline workflows and eliminate data silos by integrating Autodesk Construction Cloud into your data-centered processes.
  • Gain practical knowledge and learn about actionable steps to embark on the journey toward data-centricity in your AEC engineering firm.

Speakers

  • Puria Safari Hesari
    Puria Safari is a computational designer who turned to the world of digital transformation and change management, currently lending his expertise to Ramboll. With a foundation in structural engineering and as a self-taught software developer, Puria has crafted a unique path. His journey began with prominent projects like the Tottenham Hotspur Football Club and The Factory by OMA, before he embraced the challenge of guiding large consultancies toward becoming digital trailblazers. Puria's academic engagements span student workshops and papers delving into the intricacies of shell structures.
  • Giulio Pagan
    I am an aeronautical engineer by training with more than 26 years of experience in software engineering and solution architecture. I work closely with the development ecosystem, partners, customers, and the data platform community to design innovative, value-driven, cloud-based solutions.
  • Adriano Parodi
    Experienced in manufacturing execution systems and their integration with enterprise PLM systems; in programming, business analysis, and the design of solution architectures; and in the manufacturing industry.
Transcript

GIULIO PAGAN: Hello, everybody, and welcome to this presentation about Ramboll's journey towards data centricity with Autodesk Platform Services. My name is Giulio, and I will take you through the session-- I'll introduce myself in a moment. I also have here a representative from Ramboll, whom I will introduce to you shortly.

We're going to go through the problem statement, and we'll talk about the data-centric journey we are embarking on. We'll cover some basic concepts, explain what we've been doing through demo scenarios, and follow up with some conclusions.

So, without further ado, I will just show you briefly the safe harbor statement. I won't keep it for long. And, after that, I'm going to introduce you-- tell you a little bit more about myself. I'm a principal solution architect working for Autodesk.

By training, I'm an aeronautical engineer, but I've been practicing software engineering and solution architecture for over 26 years. And I like working with developers, partners, and customers alike, developing solutions which are innovative and value-driven. And, now, I will let Puria introduce himself.

PURIA SAFARI HESARI: Thanks, Giulio, and thanks for having us. My name is Puria, and I am, by background, a structural engineer and a computational designer who has come to work with digital transformation and change management over the past years. I am currently leading Ramboll's computational design transformation program.

I have, during my years in the industry, become a self-taught software developer, and I've had the opportunity to work on the development of AEC apps and tools, but also on great projects like the Tottenham Hotspur Football Club, among others, during my previous position.

So what is it that we're going to dive into in today's session? Well, in Ramboll, we have, over many years, realized how using files and not having a data-centric approach to project deliverables can affect our employees and our projects.

Basically, the usual statement that we hear is that my data is locked in so many files. We have difficulties finding the right files that we're looking for. We cannot really trust the data. Is the data that I see correct? Is the data that I receive from my colleague-- is it the latest one? Has it been modified?

So this is really what the problem is about. And it affects us daily. We have information loss. We have reduced accuracy. We have security concerns, quality, trust, and lead time.

So why is this happening? Well, business units create projects or buy products to respond to a specific need with a deadline. KPIs such as data control, quality, or reusable architecture are underestimated. And data repositories are created to respond to the needs of the application layer-- an application-centric approach-- which is really hindering us from working from a single data lake.

So, now, I will tell you a bit more about Ramboll's project data management and data centricity journey, which is the collaboration that we're doing together with Autodesk. So, briefly about Ramboll-- we're an architecture, engineering, and consultancy company founded in 1945 in Denmark.

We're present in 35 countries, but our strongest presence is in the Nordics. And we have more than 18,000 employees. And you can just imagine how a data-centric approach would help all these employees become more efficient in the way they collaborate and communicate with their data and design, but also with our clients.

And so what is the vision here? What is the big idea? Well, in Ramboll, we want, by 2025, to have widely adopted a data-centric approach to project delivery, in which data is an open resource not tied to a specific application; data is readily accessible through a single source of truth; data is easily exchanged across applications; data is suitable to support sustainability metrics and analytics; and data is actively managed and controlled.

And so the expected outcome that we're looking for here can be divided into short-term, medium-term, and long-term. And the short-term expected outcomes are basically improved interoperability and data handover in projects, improved collaboration and teamwork in projects.

The medium-term expected outcomes are that sustainability metrics are embedded in our design in a consistent way, that we have improved quality and reliability in our deliverables, and that we have an enforced standardization of the production and delivery environment. The long-term ones are opportunities created for expanded and diversified services, and a common and trusted source provided for analytics and best-practice sharing.

And so, looking at this slide, I just wanted to take you through the roadmap here-- how it looks, where we are, and where we want to be. So, looking at the five different columns here, we have files at the first stage, and then you might move over to working with data-- basically, objects. And then you work with richer objects, and you start utilizing the information on them.

And at the next stage here, you're starting to share knowledge and start to gain a lot of insight in terms of your data. And that can help you to make better decisions.

So, talking about utilizing AI and machine learning for decision-making-- that end of the spectrum is where we want to move. And looking at where Ramboll is, I mean, the landscape is really diverse across Ramboll.

We can definitely have projects that would sit around the data and information columns. And we could definitely have projects where we can [INAUDIBLE] the data. But we really want to use this program and this collaboration to get, together, towards the last columns.

GIULIO PAGAN: OK. Thank you, Puria. I guess it's my turn to talk to you a bit about the kind of solution we've been discussing with Ramboll. But before we dive into the details, I would like to say a few words about the concept of data centricity and also show you a very high-level architecture which displays, if you want, the anatomy of a data-centric solution.

So, first of all, what about data centricity? Well, let's talk about a data-centric organization. A data-centric organization is one which puts a lot of effort and invests resources into generating relevant information-- churning data into relevant information-- which it can then effectively share to enable collaboration across the organization.

So, clearly, the focus, even in this definition, is on data, which is where the term data-centric really comes from. And the reason why this model can bring benefits is because applications are effectively transitory. But the data, especially when you have the right data, stays. It stays there for a long time.

So at the heart of a data-centric approach is the importance given to our major asset, which is data. And if things are done in the right way, then you can reap the benefits of such an approach, which you can see here in this kind of donut.

You can certainly hope to eliminate or reduce the number of data silos, reduce data redundancy, and facilitate more effective-- but also more secure and compliant-- access to data. You can simplify applications and upgrades, make data more accessible, eliminate complex data transformations-- which is a specific scenario we started exploring at Ramboll-- reduce data errors and inconsistencies, improve quality, and, in general, streamline data management.

So this sounds great, doesn't it? Obviously, that's the reason why it is a journey. And it is a journey that starts from the center, and this is very important. Now, what does a data-centric solution look like?

Well, a data-centric solution is also focused on collaboration. And it's a solution which provides an integrated environment which is capable of managing different types of data. And, in here, there is a summary of the kind of data you would expect to find in this context.

So we would have product-related information. That would be libraries of objects, for example, or other product information which is associated with the design or manufacturing of buildings. Then you have information which is associated with a project, which is vital to the project execution.

Any relevant standard could also be managed by the platform. You can have digital assets, with a status associated to each digital asset. Traditionally, that status would be associated with a file, but that's the thing-- here, we hope to go beyond that and enable, for example, the management of transactional data, like IoT data.

And another aspect which you would expect to be managed by the platform is data orchestration-- the movement of data across the different parts. So this is a good opportunity to state that, while data is at the center of this, we are not suggesting here that all data will have to sit in the same repository.

What we're talking about here is more the need for a single data model, which basically implies a unified and integrated vision around data. This is important. Also, the data-centric approach is not to be confused with a purely data-driven approach. A data-centric approach is, in fact, complementary to a data-driven approach, which has existed already for many years, while the data-centric approach requires changes at all levels, not just the technology level.

So what's the vision? I'm mirroring here what we have already seen on this slide-- let me go back to it for a second. So you may recall this slide. And, now, I'm showing you another cartoon which shows, instead, how we want to change the sentiment of these users towards the data they have to deal with.

So you may still have the same data input. But, now, the platform will help the users regain trust in the data. And that happens because access to data is easier, so we don't necessarily need to download, for example, an entire model.

We don't need to replicate data. We don't necessarily need to transform data all the time. And users can attach metadata to the entities, and they can also share data in an easy way with suppliers, for example.

Now, that sounds great. But let's now start to talk about what could be the first steps to move towards this vision.

Well, as a starter, if you look at the bullet points on this slide, and if these bubbles represent the present and the ideal future, you can see that we want to go from a scenario with low granularity, where it's hard to find data, with lots of proprietary and application-centric formats, to a scenario where we have finer, more granular access to data, and where it's easy to find the data.

We have a more product-agnostic approach. And, now, you can see the relationship between the current situation and the future, and the fact that, today, we are primarily shuffling files around. So we will hear, mostly, talk of this product's model data or that product's model data. And the data comes out depending on the applications you're using.

So we want to transition to a scenario where we talk more about individual types of entities and instances of doors, instances of reports, or wall panels. So everything is an object, and everything can potentially be related to another object.

Having said that, files will not disappear for a long time. And, chances are, in the best-case scenario, your platform may have to interact with other platforms. In some cases, you may have to import files into your platform and export files the way you are dealing with them today. But, at least within an organization, you should be able to reap the benefits of the data-centric approach in your own little bubble.

Now, I want to also emphasize the importance of other aspects, which go beyond the way you organize the data. So far, we discussed the granular access to data. But there are two other fundamental pillars to this, and the first is the journey towards more harmonized data.

We mentioned the need for a single data model. It might be necessary to invest into mapping external models to your model. But the reality is that a single data model is what will accelerate the journey, and it will allow you to really reap the benefits.

And, in order to create a stable platform that also doesn't cost too much over time, it's necessary to keep an eye on the state of the application landscape. So, as we said before, the approach here is that the application layer should be built around the data layer. And this has to be reflected in the choices of applications.

So the applications should be chosen for their ability to interact easily with the data platform. And the purpose is to provide the core capabilities which you see listed at the top: you want to be able to exchange data, access data, manage your asset lifecycle, manage your asset metadata-- data about data-- and so on.

Now, that was a bit of an introduction in generic terms. We're now going to start talking about how we started the journey, by talking more specifically about interoperability. And what do we mean by that? We mean trying to leverage a selected set of services for the purpose of facilitating vendor-neutral data exchange.

So, therefore, the solution that we've been looking at so far is focused mostly on reusability-- making it easy to access the data and making it easy for applications to exchange data. And this slide is a one-slide summary of, basically, what we've been focusing on. You can see, on the right side, the list of the technologies which we've been more specifically looking at.

There is ACC, Autodesk Construction Cloud, as a flavor of software as a service. But we've been focusing our attention on how the Autodesk AEC Data Model, the Autodesk Data Exchange, and a number of data connectors can help to turn this vision into reality.

And we are doing this with, since the beginning, ambitious targets, which we hope to achieve with our cooperation: a drastic reduction in process lead time; removing almost completely the danger of accidental data losses caused by data transformations; improving significantly the trust in data; and, also, convincing the other parts of the business that data has to live in the cloud for the purpose of leveraging the cloud.

So, now, let's try to understand this in practice. We covered a few scenarios in our engagement, but we picked two of them for the purpose of this presentation.

The first one is about detecting changes of quantities across different versions of a Revit model. So what you're looking at here is a revised version of a model. Currently, when we [INAUDIBLE] process-- so the as-is process.

In the as-is process, it's difficult to achieve this comparison. And we wanted to show that, by using the platform services, this could be greatly simplified. So how does it work in practice?

The data is shared by different parties on ACC. And the data is extracted using the AEC data model API, which makes it fairly easy to compare elements from different versions of the files.

The comparison results are stored in a database, which can then be used for analytics. So, in the initial demonstrations and prototypes, what we're doing is populating a database, which we then use to populate Power BI dashboards.
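The core of that comparison can be sketched in a few lines of Python. This is a minimal illustration, not the actual prototype: it assumes elements have already been pulled from the API into plain dictionaries keyed by a stable element ID, with property names and values.

```python
def diff_versions(old_elements, new_elements, properties=None):
    """Compare two versions of a model.

    Each argument maps element ID -> {property name: value}. Returns the
    elements whose property values changed (with old/new pairs), plus the
    IDs added or removed between the versions. `properties` optionally
    confines the comparison to a subset of property names.
    """
    changed = {}
    for elem_id in old_elements.keys() & new_elements.keys():
        old_props, new_props = old_elements[elem_id], new_elements[elem_id]
        keys = properties or (old_props.keys() | new_props.keys())
        diffs = {k: (old_props.get(k), new_props.get(k))
                 for k in keys if old_props.get(k) != new_props.get(k)}
        if diffs:
            changed[elem_id] = diffs
    return {
        "changed": changed,
        "added": sorted(new_elements.keys() - old_elements.keys()),
        "removed": sorted(old_elements.keys() - new_elements.keys()),
    }
```

A result like `{"changed": {"room-1": {"Area": (20.5, 22.1)}}, ...}` maps directly onto the dashboard rows shown later in the demo.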

Now, a few words about the technology, for those that don't know what I'm referring to-- I'm sure you've heard about this before. The reason why the AEC Data Model has been picked here is because, while it would be possible to use other APIs, the AEC Data Model is really the newest, most intriguing, and most innovative way to have a cloud-based source of truth for building and construction.

And why is it so promising? Because it provides a great tool for developers through dedicated interfaces, like GraphQL. It allows you to access data in an easy way-- so data is, effectively, easier to access-- and it provides a repository of objects.

So, if you remember my slide explaining the need to move away from files, this is a great technology that helps move in that direction. As you can see from this picture, your model is effectively turned into a graph of objects which you can then access via the API. And the use of the API is also facilitated by data connectors, which will mature over time.
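To give a flavor of what a GraphQL request for element data might look like, here is a hedged sketch. The query shape and filter expression are assumptions modeled on the AEC Data Model public beta and may not match the current schema; a real call would POST this body to the AEC Data Model GraphQL endpoint with an OAuth bearer token.

```python
import json

# Illustrative GraphQL query -- field names are assumptions, not the
# guaranteed current schema of the AEC Data Model API.
ELEMENTS_QUERY = """
query ($elementGroupId: ID!, $filter: String!) {
  elementsByElementGroup(elementGroupId: $elementGroupId,
                         filter: { query: $filter }) {
    results {
      id
      name
      properties { results { name value } }
    }
  }
}
"""

def build_elements_request(element_group_id, category="Rooms"):
    """Build the JSON body for the GraphQL POST. The caller would add the
    Authorization header and send it to the API endpoint."""
    return json.dumps({
        "query": ELEMENTS_QUERY,
        "variables": {
            "elementGroupId": element_group_id,
            # Hypothetical filter syntax confining results to one category.
            "filter": f"property.name.category=='{category}'",
        },
    })
```

The point is the shape of the access: you query for a graph of elements and their properties directly, instead of downloading and parsing an entire file.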

Now, what does the solution look like from a conceptual point of view? So the models are updated in ACC, as I mentioned before. The interesting aspect is that the data model will grow in terms of compatibility with multiple platforms.

This specific solution has been tested against Revit file versions, but the specific business cases could change without necessarily impacting this architecture, which is the interesting thing. So there is reusability in the architecture at play here.

Now, the model versions are compared, and this is done by querying the data. So, for the first time, the possibility to treat ACC as a database of objects is actually almost there, which is why this API is particularly interesting.

And we also used the Schema Editor available from Tandem to create a reference asset classification. And this validation tool, which you see here-- what it does is compare the content in terms of property values, and it checks whether each property exists in the first place in the reference schema.
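A simplified version of that validation step might look like the following. The schema representation here is a deliberate simplification-- a mapping from property name to expected Python type-- and not the actual output format of the Tandem Schema Editor.

```python
def validate_properties(element_props, reference_schema):
    """Validate an element's properties against a reference classification
    schema (e.g. one authored with a schema editor).

    `reference_schema` maps property name -> expected type. Returns the
    property names that are missing from the schema entirely, and the
    names whose values have the wrong type.
    """
    unknown = sorted(n for n in element_props if n not in reference_schema)
    mismatched = sorted(
        n for n, v in element_props.items()
        if n in reference_schema and not isinstance(v, reference_schema[n])
    )
    return {"unknown": unknown, "mismatched": mismatched}
```

Each non-empty list becomes an error count that can be surfaced on the dashboard described next.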

It can also calculate quantities-- within the dashboard or up front-- and the quantities of errors are reported on a dashboard. And you're going to see two demo videos. The first one will show this solution running in an automated way, with the output shown in a dashboard.

So the idea is to periodically run this process to validate the models and make sure that they don't have significant differences across versions. But you'll also see how you can create a web application to perform the same analysis on demand. I'm going to kick off this first video, and we'll provide some comments.

So this is the-- we're going to load the model in Revit, first of all. And what we're going to do is to make some changes to the geometry. So for this specific example, we use some test data.

So you can see here, the master bedroom. I'm making a small change in the geometry. And after that, I'm saving the model, which gets stored in ACC as a new version.

So what happens behind the scenes is that this data becomes available for extraction via the AEC Data Model API. Now, what you can see here is the user configuring a Power Automate script to point to a specific folder in a project and account hub.

And it's using a JSON file to specify from where the data is extracted and which versions are to be compared. And, also, there is an element properties filter here, which is an example of how it's possible to confine the comparison to certain properties.
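As a rough illustration, the configuration driving such a run might look like this. All key names are hypothetical-- they mirror the ideas in the JSON file shown in the demo (source location, versions to compare, property filter), not its actual format.

```python
import json

# Hypothetical comparison configuration; key names are illustrative only.
CONFIG = json.loads("""
{
  "hub": "my-account-hub",
  "project": "my-project",
  "folder": "Shared Models",
  "versions": {"base": 3, "compare": 4},
  "elementPropertiesFilter": ["Area", "Perimeter", "Volume"]
}
""")

def apply_filter(properties, config):
    """Confine a property dict to the names listed in the filter, so the
    comparison only considers the properties the user cares about."""
    keep = set(config["elementPropertiesFilter"])
    return {name: value for name, value in properties.items() if name in keep}
```

Restricting the comparison this way keeps the dashboard focused on quantities rather than every property the model carries.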

Then the workflow is executed automatically. This could be triggered, for example, when a file is dropped in the folder; in this case, it's just manually triggered. When the workflow is successfully executed, an email is sent to the user, and the data is stored with timestamps in a SQL system.

And, from that moment on, the data becomes historical data, which is available for analysis purposes. And you can see a very simple dashboard here which shows the different rooms which have been analyzed. The system detected that there are differences in the values of the area, perimeter, and [INAUDIBLE].
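The timestamped storage step can be sketched with SQLite standing in for the SQL system used in the demo; the table layout is an assumption for illustration, not the actual prototype schema.

```python
import sqlite3
from datetime import datetime, timezone

def store_comparison(conn, model_name, differences):
    """Append timestamped comparison results to a SQL table, so every run
    becomes part of a historical record that a dashboard (e.g. Power BI)
    can query. `differences` maps element ID -> {property: (old, new)}."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS comparison_results (
            recorded_at TEXT NOT NULL,
            model       TEXT NOT NULL,
            element_id  TEXT NOT NULL,
            property    TEXT NOT NULL,
            old_value   TEXT,
            new_value   TEXT
        )""")
    now = datetime.now(timezone.utc).isoformat()
    rows = [(now, model_name, elem_id, prop, str(old), str(new))
            for elem_id, props in differences.items()
            for prop, (old, new) in props.items()]
    conn.executemany(
        "INSERT INTO comparison_results VALUES (?, ?, ?, ?, ?, ?)", rows)
    conn.commit()
    return len(rows)
```

Because every run appends rows rather than overwriting them, the table naturally accumulates the history the analytics layer needs.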

This is a process which, in some scenarios, is not tackled at all, and that can lead to big issues. And although this problem can be solved in different ways, the approach here is particularly interesting because of the possibility to work in a way which is effectively independent from the original data products-- the products used to generate the data.

OK, now, I'm going to show you a different demo, which is basically using the same services. But, in this case, the analysis of the two versions is going to be fed directly to the website that you can see here in this slide.

So I'm selecting from a portal, in this case, the models to be compared. So this would come from ACC, the list. And then, also from ACC, I would get a list of versions, and I will be able to choose the ones I want to compare.

And the difference here is that, instead of relying on automation, I execute a comparison of two models directly. I use the services of the platform to load two versions in the same scene. And, in the table below, you can see the comparison results.

The models are aggregated in the scene, so you see them overlap. You can't see many differences because the differences are small. But, now, if I click, I can see where the values are different, and I can also click on an object and find that object in the scene.

And if you pay attention, you also see, to the right of the object, some elements which are different in geometry across versions. Yeah, exactly.

And so this is basically how to compare, with a fairly straightforward website, two versions. And you can then accelerate the fixing of the problem.

And now I'm going to describe a second workflow, which I will let Puria demonstrate. In this workflow, we've been investigating the use of another emerging API, which is actually fairly mature at this point in time: the Autodesk Data Exchange API.

And this API grants access to the data, which can be stored in ACC in something called a Data Exchange. So, in this workflow, what we're going to do is update or create a model, identify which parts of the model we want to export, and create certain views which represent the elements we want to extract.

This allows us to create a data exchange, which is shared via the platform. You can actually see the exchange in ACC, and you can view it before the data goes anywhere else. But what you can also do is that the other engineer-- in this scenario, an external engineer or, in this case, an architect-- can open their application, connect to ACC, browse to the data which has been exchanged, and import the data in Rhino.

Also, this is a fragment of a workflow. But the specific scenario which we studied is, basically, the round-tripping of this. So, starting, actually, from Rhino-- you're going to see this in a minute-- we send the data to Revit and back.

The technology we use, as mentioned, is Data Exchange, which is built on top of our platform. It provides an API, but it also provides several out of the box connectors, which we leveraged to do the first evaluation.

Yeah, at a conceptual level, the architecture is pretty simple because, basically, in this first phase, we simply used the out-of-the-box connectors to send the data from Rhino to Revit and back. Now, I will let Puria explain what he has been focusing on with Rhino and Revit.

PURIA SAFARI HESARI: Thanks, Giulio. So, looking into this workflow while the video is starting-- we have really tried to push this workflow to its limits.

We have also tried, as Giulio described, sending data in both directions: going from Revit to Rhino and back, but also from Rhino to Revit and back; and also, at the same time, receiving data from Revit in Rhino, referencing that data in Grasshopper, manipulating it, and then sending it back through an exchange to the Revit model, et cetera. So it allows for a lot of flexibility.

And to take you through the video and how we've done that-- and so we're basically here, starting with the massing model in Revit. And this could be a scenario in the very early stages when we're evaluating different design options and architectural disciplines.

And so what we do-- we're sending this data through the Data Exchange and allow other designers to tap into this and to utilize what already exists as kind of a starting point or as a reference geometry to start building up other parts of the system.

So here, for example, we have a facade engineer who, in Rhino, loads the exchange which has been uploaded to Data Exchange and gets that data here in Rhino as reference geometry.

And so what the facade engineer and designer can now do is reference the geometry in a Grasshopper script, easily generate different variations and options, create multiple versions of this, and upload those to the Data Exchange for the rest of the team to review, test out, and evaluate.

So here, for example, we have a triangulated facade, where sails are randomly placed on some of the rectangles. And that can be varied back and forth. So we can create different exchanges, if we would like to. But, here, we're only just sending one, the one that we just created and looked at. So we're creating a new exchange here using Grasshopper.

Once we've done that, there is a reference to the exchange. And, now, we use the Send component and send the data, basically. And this is basically what I mean with the flexibility that it allows. You can take any route you want with the different Data Exchange connectors.

And, now, coming back here, we can see the facade geometry being loaded to the Data Exchange from Grasshopper. So what we can do is, basically, go back to Revit and load that geometry here. And if you have multiple versions, or get another version at a later stage, it's just a matter of switching and reloading the new exchange here.

All right, so this sums up the demos and a few of the scenarios that we have worked on and wanted to share with you guys. But, moving forward, I would also like to share some conclusions that we've come to, and the challenges and benefits that we have found during this journey, which has just started.

So, based on the feedback from the people that we have worked with in the different projects that have been testing these scenarios and workflows, they believe that, by adopting data-centric platform services, Ramboll will be able to achieve 50% less time spent searching for data issues, an 80% increase in data quality, less than 30% shorter lead time by bringing forward the problem detection, a 30% to 50% reduction in process lead time, virtually removed accidental data loss caused by data transformation, a 100% increase in data trust, and 100% migration to the cloud.

So, by adopting data-centric platform services, we can definitely achieve a lot of great value here. So what were the objectives that we set up starting this, and what were the achievements?

So we wanted to understand the challenges and evaluate the benefit of a data-centric solution for better project data management. And we also wanted to demonstrate the current capabilities together with Autodesk of the Autodesk Platform Services to support project delivery. So what did we achieve?

We have successfully managed to demonstrate how Autodesk Platform Services can make data-centric project data management a reality. We explored how granular access to data benefits both interoperability and data validation quality. And we have, in this initial phase, provided an effective way to better understand the needs, promote cross-vendor cooperation, identify new opportunities, and influence the product roadmaps.

GIULIO PAGAN: And if I may add, Puria, to this--

PURIA SAFARI HESARI: Of course.

GIULIO PAGAN: --this first phase, which was an exploratory one, was also particularly useful for us. So, I work for Autodesk Consulting, and it was a really great cooperation, also with the platform team, which is working hard to help customers. And it was a great way to see a close collaboration making sure that the product goes in the right direction.

PURIA SAFARI HESARI: And so, looking at some challenges we had, and some feedback that we've gathered from the people on the project teams that we have worked with when they have tested out the workflows that we have suggested-- I mean, the overall feeling and the feedback is great. They're really happy to be a part of this journey, to be a part of setting the new standards in terms of working in a data-centric way.

And what we have achieved so far is, overall, a promising experience for easily round-tripping data between applications-- which we saw in the last demo video-- and for exporting data for data analytics and visualization, right?

But we also want to be clear that the Autodesk Data Exchange and the AEC Data Model are in beta stages. So some issues are to be expected at times. But that is something that we are hoping to mitigate by collaborating closely with Autodesk as we go forward as part of this collaboration.

We have had great conversations with Autodesk Consulting and the Autodesk Data Platform team so far. We have already started discussing some of the issues-- for example, loading data exchanges can sometimes take time. They're listening to that feedback, taking it seriously, and looking into it, which we really appreciate.

And there can sometimes be limitations when we exchange data types across different applications. In the Rhino-Revit scenario we looked into, we have had reports from some of the users who tested the workflow that objects are sometimes only available as direct shapes, which can be a limiting factor.

And so, as mentioned, we're early in this journey. We're testing the cutting-edge stuff out there, and some of the tools, platforms, and services we use are in beta stage. That's really something to keep in mind. But, still, I think we have come a really, really long way already this year, specifically working with [INAUDIBLE].

Well, thanks a lot for listening. It's been a pleasure to present to you what we've done during the last year. Thanks for joining in, and thank you, Giulio, for doing the presentation with me.

GIULIO PAGAN: And thank you very much from my side as well. Thank you, Puria, for your very encouraging comments. I'm looking forward to continuing to help you guys on your challenging but rewarding journey towards data centricity. Thanks. Thank you, everybody. Bye.
