
Speedup Railway Site Management: The Power of Automation and the Architecture, Engineering & Construction Collection


Description

In this class, FSTechnology will showcase how to achieve up to a 40% overall time reduction for some site supervision activities by automating the estimation of the site's physical progress and of cut-and-fill volumes. The class illustrates two workflows that you can apply to both building information modeling (BIM) projects and traditional project drawings. The first, fully automated workflow calculates the progress of the construction works, starting from the exchange on Autodesk Construction Cloud of all point cloud data coming from drone surveys. The main steps are the production of simplified railway track models in Revit software, a bespoke clash detection in Navisworks software to assess the works' progress, and the interpretation of the XML output with .NET scripts to feed a Microsoft Power BI dashboard. The second workflow generates DEM (digital elevation model) surfaces from point clouds via a Python script and then calculates cut-and-fill volumes with Dynamo in Civil 3D. The results are also exported as a structured data source for a Microsoft Power BI dashboard.

Key Learnings

  • Learn how to implement automated workflows to extract and process data from site surveys.
  • Learn how to create parametric planes in Revit using railway tracks as a path.
  • Learn how to use clash detection in Navisworks to estimate work in progress.
  • Learn how to automatically manage and compare surfaces in Civil 3D to calculate cut-and-fill volumes.

Speaker

  • LUCA CAPUANI
    I'm a GIS analyst and technical specialist. I have been at FSTechnology for two years, where I started as a consultant on Autodesk geospatial and collaboration products. I graduated in Geography and specialized with a master's degree in GIS, so I'm interested in everything that is "GEO".
Transcript

ALESSANDRO DELLE MONACHE: Good morning, everyone. It's a pleasure to be here at Autodesk University 2022 with my colleague Luca Capuani. I am Alessandro Delle Monache, BIM GIS technology specialist. And in this class, we are going to explain how our workflows can increase time savings in railway site management.

Here is the agenda for the presentation. And now, a brief introduction of our company and our team.

FSTechnology is the high-tech company of the Ferrovie dello Stato Italiane Group. It was created at the beginning of 2019, and its goal is to strengthen and support digital innovation within the group.

The BIM and GIS Competence Center is a team within FSTechnology. The main objective of our team is to research and implement new technologies to improve the processes and the workflows for the management of the entire lifecycle of infrastructure projects.

Considering the processes of the group, we mainly support linear infrastructure projects. We therefore support Italferr, the engineering company of the Ferrovie dello Stato Italiane Group, during the design and construction stages. We also support Rete Ferroviaria Italiana, the company of the group that owns the entire railway network. And here, in this slide, we can see our team.

Our first Autodesk University class was presented by our head, Marcelo Faraone, and Stefano Libianchi in 2019. In 2019, Esri awarded us the Special Achievement in GIS award. In 2020, our group started to investigate how to improve the integration of BIM and GIS with other platforms and to implement solutions for remote site monitoring.

We focused our energy on reducing the time spent on construction site management activities. We will explain where we started and what we have achieved so far.

In July 1996, the European Commission adopted a resolution to implement the Trans-European Transport Network, named TEN-T. The intent of this multi-phase project is to provide coordinated improvements to primary roads, railways, inland waterways, airports, and traffic management systems throughout Europe. When complete, the Scandinavian-Mediterranean corridor will stretch from [? NCT ?] to Valletta. The Napoli-Bari High Speed Railway Project is part of this corridor and started in 2015.

In this section, we will present our main focus: the activities of the project. The project-- virtual construction site management-- started in 2019 as a proof of concept. The main goal was to remotely monitor some phases of the construction with the help of advanced technologies, and to provide support to construction site managers in some of the most expensive and critical activities, such as construction health and safety checks, work-in-progress and quality checks of the works being undertaken, and environmental inspections during construction.

The technology we use in this project can be divided into three major groups. First, the drone survey activities, where drones carry out scheduled recording inspections of the construction site to monitor the phases and the status of the works. After processing the data collected by the drones, the outputs are orthophotos and orthomosaics, point cloud models, and BubbleViews.

The major goal was reached with a workflow that includes the post-processing of the images and models acquired during the survey, the analysis in a BIM environment, and then the publication on the ArcGIS Enterprise portal. In addition to making the data available to the whole project team, and potentially to the whole company, we also managed to estimate construction site progress and calculate cut-and-fill volumes, which we will explain in detail later. On the data analytics side, the integrated BIM and GIS information, once acquired, can be used for different purposes, such as AI for automated image detection, augmented reality, and environmental control.

And now, two more aspects of managing the acquired data. An AI algorithm was used to automate the analysis and reduce the time needed to identify images. We are also training the machine with the orthophotos taken during the surveys for environmental inspection of the construction site, to identify illegal landfills, dangerous chemical leakages, and environmental contamination.

On the augmented reality and virtual reality side, with the help of Unity-- a cross-platform game engine-- and the ArcGIS SDK for Unity, we managed to integrate GIS data and BIM models to obtain an impressive solution, which is very simple to navigate, even by non-BIM experts.

Our main goal was the possibility to share this game-like application as a simple installer and allow users to simply launch it and navigate the virtual environment with a keyboard or gamepad. On the environmental side, we are studying the possibility to analyze the ante-operam and post-operam environmental systems to verify whether they were preserved and ensure the protection of our landscape heritage. Moreover, we can detect contamination and illegal landfills through the use of AI algorithms, image post-processing, or multispectral analysis with sensors carried by the drones.

The focus of this class is to explain in detail two of the workflows we created, specifically to calculate cut-and-fill earthwork volumes and to estimate the physical progress of the works on site. Before doing so, it's important to understand the reasons that led us to implement them, the requirements for the data and the information exchange, and the necessary level of detail in order to get satisfactory results.

For the semi-automated calculation, we needed to isolate the volume of the soil involved in earthworks on the construction site. This volume should be generated with little processing effort for the end user, and the processing time should be short to ensure the results are timely and useful. The results should also be within a certain accuracy and usable as data sources and databases for further analysis.

Let's now have a look at how the workflow is structured. Before demonstrating the workflow, let's review all the file types we requested at each survey. Orthophotos, then processed into orthomosaics with a minimum resolution of 2 centimeters per pixel, in TIF and TFW formats. These are intended to give a detailed overview of the whole construction site area, and they feed the AI algorithm for automated image detection.

Point clouds in LAS and RCP formats-- classified by both WBS and material (for example, vegetation, ground, and concrete)-- with very high accuracy, a precision better than 10 millimeters. And BubbleViews-- made with a 3D laser scanner-- that allow us to visually inspect and navigate the site, as well as take measurements.

As mentioned earlier, one of the requirements is the delivery of point clouds classified by WBS, which means having a single file for each element: all piers, pier caps, and beams are therefore saved in separate files.

For the same surveys, we also need an additional file classified by the different materials, so that we can distinguish concrete, steel, and timber, as well as the ground and the vegetation. For this proof of concept, we chose this construction site as construction was kicking off, and the construction works include different types of structure, such as viaducts, railway embankments, tunnels, and more.

All these files are shared and stored on ACC, organized in a folder structure agreed with the client, which makes it easy to identify the main WBS elements and so on. At the lower hierarchy level, the surveys organize the data by date. And at an even lower level, we have all the outputs listed by format and type.

As we carried out periodic surveys on three major pieces of infrastructure-- two viaducts and a portion of railway embankment-- this well-organized folder structure allows us to easily organize and find the data relating to a specific survey and area, and to carry out comparative analysis of the same area across different surveys. Coming to the stakeholders involved in this project, we have been collaborating with Autodesk Engineering, Microsoft, Seikey, and Esri.

And now, this short video shows the main construction site area of the works on the Cancello-Frasso Telesino railway. We regularly survey the site, roughly every two months, to monitor the construction works, including two viaducts, a new road system, and a tunnel.

And now I defer to my colleague, Luca, for the technical description of the workflows we developed and then used.

LUCA CAPUANI: Thank you, Alessandro. Hi, everyone. I am Luca Capuani, and my role in FSTechnology is GIS expert. Now I will illustrate, in more detail, the first workflow we created, to calculate cut-and-fill volumes.

The first step of this workflow is the necessary data preprocessing that we will then use in Civil 3D for an automated calculation of cut-and-fill volumes. Using one or more point clouds in LAS format, classified by material, we extract the ground level and, if needed, merge it into a single file.

We then prepare a single shapefile showing the different construction site areas, with the attribute table adequately populated. Both inputs must reference the correct geographic coordinate system. Then we use a Python script, which executes the following commands: cropping of the point clouds around the boundaries of the site areas, extraction of the name attribute of the areas, setting of the output resolution, export of a georeferenced DEM for each area, and renaming of the DEMs with the attribute value of the relevant area and a suffix indicating the survey it refers to.
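
As a reference, here is a minimal sketch of that preprocessing step, assuming laspy, geopandas, NumPy, and rasterio as the tooling; the production script may use different libraries, and all file names and the "name" attribute are illustrative.

```python
# Minimal sketch (assumed tooling): crop a ground-classified LAS around each
# site-area polygon, grid it into a DEM, and export one GeoTIFF per area.
import laspy
import numpy as np
import geopandas as gpd
import rasterio
from rasterio.transform import from_origin

RESOLUTION = 0.10   # output DEM cell size in meters
SURVEY = "T7"       # suffix indicating the survey the DEMs refer to

las = laspy.read("ground_merged.las")            # ground-classified point cloud
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
areas = gpd.read_file("site_areas.shp")          # one polygon per site area

for _, area in areas.iterrows():
    minx, miny, maxx, maxy = area.geometry.bounds

    # Crop the point cloud around the boundaries of the site area.
    m = (x >= minx) & (x <= maxx) & (y >= miny) & (y <= maxy)
    xs, ys, zs = x[m], y[m], z[m]

    # Grid the points: mean elevation per cell at the chosen resolution.
    cols = max(1, int(np.ceil((maxx - minx) / RESOLUTION)))
    rows = max(1, int(np.ceil((maxy - miny) / RESOLUTION)))
    ci = np.clip(((xs - minx) / RESOLUTION).astype(int), 0, cols - 1)
    ri = np.clip(((maxy - ys) / RESOLUTION).astype(int), 0, rows - 1)
    sums = np.zeros((rows, cols))
    counts = np.zeros((rows, cols))
    np.add.at(sums, (ri, ci), zs)
    np.add.at(counts, (ri, ci), 1.0)
    with np.errstate(invalid="ignore"):
        dem = (sums / counts).astype(np.float32)  # empty cells become NaN

    # Export one georeferenced DEM per area, named area + survey suffix.
    transform = from_origin(minx, maxy, RESOLUTION, RESOLUTION)
    with rasterio.open(f"{area['name']}_{SURVEY}.tif", "w", driver="GTiff",
                       height=rows, width=cols, count=1, dtype="float32",
                       crs=areas.crs.to_wkt(), transform=transform,
                       nodata=np.nan) as dst:
        dst.write(dem, 1)
```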

We started from a Civil 3D template where we preset the graphic preferences for the imported DEM surfaces to be low quality-- that is, with no triangulation-- in order to ensure better performance than the one achievable with level curves. The template also presets one of the most widely used Italian GIS reference systems as the default projection system of the project drawings.

In simple words, we open a new Civil 3D drawing starting from the template, and we only change the reference system if required. We then proceed to import the DEMs generated by the Python script previously described: we use the Create Surface from DEM tool, and we load our reference surfaces.

Next, we load the surfaces of the same site area surveyed at a different time. All the relevant surfaces are listed in the Toolspace and are renamed according to the coding required by our standard procedure. Specifically, our standard requires the surfaces to compare to be named the same, with a suffix indicating the survey-- that is, T6 or T7.

In the same way, we can now run the Dynamo script. Let's see its structure and the way it works. Exploring the sections of the script: the first one splits the text of the DEM names; the second lists all surfaces, splitting them by name, and extracts the survey data. Then the data extraction process begins and produces a structured Excel file as the output.
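
For illustration only, here is a hypothetical sketch of the pairing logic such a script applies, assuming surfaces named "<area>_<survey>"; the actual Dynamo graph works on the Civil 3D surfaces directly, and the names below are made up.

```python
# Group surfaces by base name and pair consecutive surveys of the same area;
# each pair yields one volume comparison (e.g. "Viaduct1_T6" vs "Viaduct1_T7").
from collections import defaultdict

def pair_surfaces(surface_names):
    """Return (base, older, newer) tuples for consecutive surveys."""
    groups = defaultdict(list)
    for name in surface_names:
        base, sep, survey = name.rpartition("_")  # split "Viaduct1_T6"
        if sep:
            groups[base].append(survey)
    pairs = []
    for base, surveys in groups.items():
        surveys.sort()  # "T6" < "T7" lexicographically (single-digit suffixes)
        pairs.extend((base, a, b) for a, b in zip(surveys, surveys[1:]))
    return pairs

# The volumetric surface keeps the base name plus both survey suffixes,
# e.g. "Viaduct1_T6_T7", so the site evolution stays traceable.
for base, older, newer in pair_surfaces(
        ["Viaduct1_T6", "Viaduct1_T7", "Embankment_T6", "Embankment_T7"]):
    print(f"{base}_{older}_{newer}")
```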

This is the custom Python script creating the volumetric surfaces. This box generates the Excel report, which is automatically saved in the same folder as the reference DWG. We are also prompted with a dialog box informing us of the number of surfaces created and of potential warnings; the process ends by confirming OK on that dialog. The only parameter to set in the Dynamo script is whether to open Excel to visualize the results at the end of the process.

OK, while the volumetric surfaces are generated, the process is shown in the progress bar. Once generated, the volumetric surfaces are listed in the Toolspace within the surface category. The volumetric surfaces just created keep the same name as the compared ones and get the names of both surveys as a suffix. This way, it's possible to keep track of the evolution of the construction site.

The reports extracted are a structured data source that can be loaded into a business intelligence tool and visually represented on a dashboard. We are currently working to incorporate this data into the main project dashboard so that the user can easily interrogate, on a map, the variation of the earthwork volumes over time.

Once the workflow was defined, we needed, as mentioned earlier, usable results and limited processing times. We had to decimate the input data using tools that reduce the density of the point cloud while preserving the geometry as well as possible. To determine the optimal resolution of the point cloud, we tested different options and settings.
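
The production workflow used dedicated point cloud tools for this; purely to illustrate the idea, here is a simple grid-thinning sketch that keeps one point per XY cell of a given size, which is how density drops while the geometry is roughly preserved.

```python
# Grid-thinning decimation sketch: one representative point per XY cell.
import numpy as np

def decimate(points, cell=0.10):
    """points: (N, 3) array of x, y, z; returns one point per XY cell."""
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

pts = np.random.default_rng(0).uniform(0, 1, (100_000, 3))
print(len(decimate(pts, cell=0.10)))  # ~100 cells remain of 100,000 points
```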

The results showed that the most straightforward solution for our purpose was a 10-centimeter resolution, which, while ensuring sufficient accuracy, kept the processing time limited, the size of the resulting Point Cloud much more manageable, and the volume calculation more efficient.

Here is the evidence supporting our choice. As we can see from the chart, for example, the time saving in generating the GeoTIFF between the original and the decimated surface amounts to 85%. Also, for generating the surface in Civil 3D, we moved from 27 minutes with a decimation factor equal to 5 centimeters to three minutes with a decimation equal to 10 centimeters, while keeping a very limited discrepancy in terms of the overall volume, which means no significant drop in the accuracy of the cut-and-fill volume calculation. The benchmark used was the volume of a surface with a decimation equal to 2 centimeters, which is a very high definition for the purpose of our calculation.

Now I present the second workflow we created in order to estimate the physical progress of the works on site. We have implemented this solution by means of Revit, Dynamo, and Navisworks. The workflow requires a 3D model, which, in reality, is not always made available by the designers.

Therefore, when a project follows a traditional approach, starting from a simple Civil 3D path, and a 3D Revit model is not available, we can face two different scenarios. In the first case, having the required time, it is possible to create the necessary families, which, even if time-consuming, is still feasible. In the second case, only one basic family is required, upon which a simplified model is built.

The first scenario is the automatic production of a true BIM model. The Dynamo script generates the true parts of the viaduct in an empty Revit project, sized consistently with the plan and elevation data extracted from the Civil 3D project, which must stay open in the background while the script is run. It is necessary to have the required families-- piers, pier caps, beams-- ready and loaded in Revit.

Let's take the family of the typical pier of the viaduct as an example. We modeled the piers as a family having as many types as the design indicates. We then created instance parameters to be able to control features for each instance-- for example, the plan rotation of a pier in Dynamo. In simple words, the script reads the direction of the path at a specific point and assigns that direction as an angle parameter to the element located there, with no need to manually rotate the element to align it to the track.
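
As an illustration of the idea (not the production Dynamo graph), here is a small sketch that samples a path, takes the tangent direction of each segment, and turns it into the plan-rotation angle for the element placed there; the polyline here stands in for the Civil 3D alignment, and all coordinates are made up.

```python
# Derive the plan rotation of each pier from the direction of the track path.
import math

def plan_rotation_deg(p0, p1):
    """Angle of the segment p0 -> p1 in the XY plane, in degrees."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.degrees(math.atan2(dy, dx))

# Polyline vertices of the track path (x, y); one pier per segment.
path = [(0.0, 0.0), (30.0, 5.0), (60.0, 15.0)]
for i in range(len(path) - 1):
    angle = plan_rotation_deg(path[i], path[i + 1])
    print(f"pier at segment {i}: rotation {angle:.1f} deg")
```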

We also integrated type parameters that we can control in the family editor. These determine the nested families to be shown or hidden within each type, and therefore the characteristics of each family type. Let's open up the nested standard family and see that it has more families nested within.

There are many parameters depending on each other: when editing one parameter, formulas determine variations such as the depth of the foundation piles, the bottom levels of the piers, and the data of the piers themselves as imported from Civil 3D. The top level is determined by subtracting a project-specific value from the path elevation.

OK, let's start from the Civil 3D project, with all the necessary data. To make the process work with Dynamo, we need the path with its different reference profiles, provided with the elevation levels.

So let's take one of the different paths. Moving on to Revit, it is important that the project shares the same coordinates as Civil 3D. In this case, the coordinates had already been acquired from the CAD file. The survey point is perfectly coordinated, too.

Let's open Dynamo and analyze the script, taking the global model as an example. In general, we see the sections of the script: the first part is about the input data; the second is about the parameters the user needs to set through the graphical user interface; then another section models the necessary elements; and lastly, one fills in the parameters.

Precisely, the input section executes the following commands: it extracts the geometry and the code of the elements to place; applies the rotation value, as calculated from the information extracted from Civil 3D; and extracts other information, such as the dates necessary for the 4D programming and the WBS codes. Lastly, for coordination purposes and correct spatial placement, it uses the same coordinates as in Civil 3D.

The section handling the graphical user interface commands is made up of a series of Python scripts prompting the user with windows and menus to select the relevant families for the elements to model. The last one is the interface to select the path.

In this section, the geometrical characteristics of the Revit elements are defined. There are three different groups of nodes, each one related to a type of functional element of the viaduct, entering the data to be elaborated inside the family and its nested families.

The last part relates to the manipulation of the data extracted from Excel-- therefore, the input of parameters such as WBS, activity ID, date, et cetera.

Let's run the Dynamo script. As mentioned already, the user is prompted with a series of graphic interfaces. Here, we can see the user selecting a viaduct family and pairing the codes with the respective families loaded in Revit. Then we do the same for the families of each type-- the second viaduct and its beams, piers, et cetera. Lastly, selecting the relevant path from Civil 3D is required.

Here, we can see the results of the Dynamo script. A true BIM model is created: every family got placed at the right location, with the correct rotation, dimensions, and elevation, and with the WBS parameters already populated with the relevant codes, such as the WBS identifying, unambiguously, the elements of the infrastructure. Each element has an ID for both activity and date, which can be retrieved in the properties box by selecting the different features. The file is then exported to be loaded in Navisworks and clash detected against the point clouds.

This video opens up on a Revit view showing the families created by the script, and the progress of the construction-- the difference between the time two and time seven surveys, in this example. What is yet to be built is shown with a transparent green color, while the older elements are in dark red and the elements built more recently are in lighter red.

The other option is building a simplified model. In this case, there are no complex families in Revit, only boxes with a limited depth, which we'll call planes-- or horizontal planes, for simplicity. There is only one family, with as many instances as necessary. It has parameters that allow any family instance to adapt to different project needs-- for example, different dimensions of the piers.

This family has only one family within: a smaller box that's repeated many times to determine a volume made up of progressive slices. By opening up the family, we can indeed see that it is made up of nested family boxes. It has parameters controlled at the family level, so that every parameter variation within the family is reflected in each instance.

A Dynamo script allows us to apply, to the slices, a name which combines the WBS code and the progressive elevation of the element. The process is therefore comparable, but the WBS values, with their spatial and dimensional information, are different, as is the use of boxes acting as progressive slices across the elevation.

These are useful to estimate the percentage of completion of the vertical elements once related to their overall project elevation. The zero datum is the top of the head of the rails, and the datum grows downward, giving an unambiguous name to each element. Lastly, the Dynamo script generates the planes and renames them according to an Excel file, after selecting the relevant path. The user only needs to intervene manually when some unforeseeable events happen-- for example, multiple tracks or interchanges.
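
To make the naming concrete, here is an illustrative sketch of that convention: the datum is the top of the rails and grows downward, so each slice name combines the WBS code with its depth below the rails. The WBS code, datum, and slice thickness are hypothetical values, not project data.

```python
# One unambiguous name per slice, from the rail level down to the bottom.
def slice_names(wbs_code, rail_z, bottom_z, step=0.5):
    names, z = [], rail_z
    while z > bottom_z:
        names.append(f"{wbs_code}_{rail_z - z:.2f}")  # depth below the rails
        z -= step
    return names

print(slice_names("P012", rail_z=0.0, bottom_z=-2.0))
# ['P012_0.00', 'P012_0.50', 'P012_1.00', 'P012_1.50']
```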

Before exploring the process in Navisworks, let's have a look at the logic behind it. In this example, during the first survey, the drone scans the elements under construction, and we get an idea of the physical progress made. Infrastructure projects being mainly linear, with punctual piers spanning vertically and beams spanning horizontally, we thought we could use control planes-- boxes, or slices, along the length of each element-- to estimate the relative progress of each WBS element. In this example, after post-processing, the results of the first survey show that the two foundations and two piers have been built to 100% and 50%, respectively.

During the second survey, we can see that some progress has been made on all planes and on three piers. The results of the post-processing show the quantification of the physical progress of each of the WBS elements surveyed.

Finally, during the third survey, the drone scans the same elements again. And the results show that two vertical elements previously surveyed have now reached completion.

This is the script for the simplified model. The result is a series of blocks made up of the planes necessary for the clash detection in Navisworks, instead of having the detailed project modeled. The volumes representing the elements are identified and sliced at each level, with a unique code combining WBS and elevation. Lastly, the model is exported in NWC format for clash detection.

This is the workflow for the railway embankment, starting from a Civil 3D project. We need the path and the elevation, plus a series of cross-sections.

Moving on to Revit, we need to make sure the project shares the same coordinates as Civil 3D. In this scenario, the Dynamo script is quite different and more complex, because the planes for each block are not sufficient and some sort of control over the sides is required, too.

It's necessary to have blocks along the elevation as well as on the sides to check the extent of the work executed. Therefore, we need to model objects according to the embankment's typical cross-section. These objects are currently modeled in Dynamo memory through the input of some parameters. However, we are aiming to model a series of typical sections to standardize the process and allow the user to simply select the relevant typical cross-section from a database.
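
As a hypothetical sketch of what such a parametric typical section boils down to: given the crest half-width and the side slope, the half-width of the embankment at any level below the crest follows directly, which is what the side blocks check. All parameter values below are illustrative, not project data.

```python
# Half-width of the embankment at a given level, per the typical section.
def half_width_at(level, crest_z, crest_half_width, side_slope=1.5):
    """Widens by `side_slope` meters per meter of drop below the crest."""
    drop = max(0.0, crest_z - level)
    return crest_half_width + side_slope * drop

# Side-block extents for slices every 0.5 m below a crest at z = 10.0.
for level in (10.0, 9.5, 9.0, 8.5):
    print(level, half_width_at(level, crest_z=10.0, crest_half_width=4.0))
```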

Here, with no further graphic interfaces, we run the Dynamo script. The result is similar to the viaduct one. The difference is that there is not one single slice for each level; each level is split into more slices. This is useful to check whether the sides of the embankment are consistent with the project section.

Here, each element is identified unambiguously. Similarly to other cases, the model is then exported to NWC to run a clash detection.

Once one of the models is built and exported to NWC, we can open up Navisworks. Here, we will demonstrate the process on the simplified model of the viaduct as an example. Let's add, to the selection tree, the point clouds, with all the RCS files classified by both WBS and material.

And then we add the simplified Revit model, with either its horizontal progressive boxes or slices, and the vertical ones, too, if necessary. Once imported, we verify, with a visual check, that all the point clouds fall within the volumes of the planes. With an XML import, we load the preset and automated clash matrix, with tests organized by WBS and material.

The clash matrix is built from a series of preset clash tests, saved in an XML file and loaded in Navisworks to run the clash detection. The tests are set to validate the geometry classified by type and level. Basically, a clash test validates the selection sets of all the levels of a model against the selection set of a WBS type.
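
Purely as an illustration of how the matrix enumerates, here is a toy sketch of one test per combination of WBS type, material, and model level, each pairing the matching selection sets; the names are hypothetical, not the project's actual sets.

```python
# Enumerate the clash matrix: WBS type x material x level combinations.
from itertools import product

wbs_types = ["piers", "pier caps", "beams", "foundations"]
materials = ["concrete", "steel", "timber"]
levels = [f"level_{i}" for i in range(1, 4)]

tests = [{"name": f"{w}-{m}-{lvl}", "left": f"pc_{w}_{m}", "right": lvl}
         for w, m, lvl in product(wbs_types, materials, levels)]
print(len(tests))  # 4 x 3 x 3 = 36 tests in this toy matrix
```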

In this video, we are looking at the pier caps, for example. We keep creating a clash test between the levels and the selection set of every WBS type and material. We then export all the tests, so defined, as a standardized clash matrix, to be able to reuse it in the future.

The selection sets are created as the result of a search set filtering on a specific WBS type-- for example, the piers, identified by their WBS code-- across all point clouds. The same process is followed for every WBS type (pier caps, slabs, beams, foundations), for every elevation plane of the model, and for every material (timber, concrete, steel).

Since the naming convention and the file formats are standardized, the setup just illustrated has to be manually performed only once. When all the sets are defined, we can indeed export them as a standardized XML that can be reimported into Navisworks to easily perform the same exercise with a new survey, or even a new project.

Once the clash detection is run, if the points of a point cloud intersect a block or slice, we have a clash. Most tests will result in a series of clashes that is not easily readable, as the points clashing against the planes are not very meaningful when reviewed in isolation: they only mean an element intersects one or more slices.

Therefore, it is necessary to export an XML file to feed a bespoke algorithm able to organize and interpret the results and extract useful information. The script combines the clashes with the information of the Revit planes involved to determine the maximum progress of each WBS element and material.
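
The production interpreter is a .NET script; the following is a minimal Python sketch of the same logic under assumed naming: each clash references a plane named "<WBS>_<depth-below-rail>", and the shallowest clashing slice marks the highest built point. The XML layout is illustrative, not the exact Navisworks clash report schema.

```python
# Interpret a clash report: per WBS element, find the shallowest clashing
# slice and convert it into a completion percentage.
import xml.etree.ElementTree as ET

def progress_by_wbs(xml_path, project_depths):
    """Return {wbs: percent complete} from a clash report."""
    root = ET.parse(xml_path).getroot()
    shallowest = {}
    for clash in root.iter("clashresult"):
        plane = clash.get("name", "")            # e.g. "P012_1.50" (assumed)
        wbs, sep, depth = plane.rpartition("_")
        if not sep:
            continue
        d = float(depth)
        shallowest[wbs] = min(shallowest.get(wbs, float("inf")), d)
    # Elements grow upward toward the rail datum (depth 0), so progress is
    # measured against each element's overall project depth.
    return {wbs: 100.0 * (project_depths[wbs] - d) / project_depths[wbs]
            for wbs, d in shallowest.items()}
```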

One goal of this workflow is automatically filling in the project schedules used to formally record the progress of the works on site and, consequently, pay the due invoices to the main contractor or the trades. It's easy to understand that the added value of the workflow is that it simply automates existing procedures and blends in with the existing documents, with no disruption or changes for the users.

So, to fill in the schedule, we need to elaborate the XML generated in Navisworks-- to simplify it by grouping the clashes and removing any duplicate or redundant clashes-- and then run a second procedure to fill the schedule in, adding the values of the top of every WBS element, be it partially built and surveyed, and the percentage calculated against the project's highest elevation of each element.

Recently, we also added warnings for any unexpected results, to make sure the user double-checks the specific element and analyzes the reasons for such results-- for example, anomalies such as progress values inconsistent with the results of the previous survey, or materials surveyed in an unexpected sequence compared to the project.
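
A sketch of those consistency warnings, under assumed data shapes: progress should never decrease between two surveys, and each element's materials should appear in the expected construction sequence. The sequence and the dictionary layouts are hypothetical.

```python
# Flag anomalies between two surveys: progress drops and odd material order.
EXPECTED_SEQUENCE = ["concrete", "steel"]   # illustrative sequence

def check(prev_pct, curr_pct, materials_seen):
    warnings = []
    for wbs, pct in curr_pct.items():
        if pct < prev_pct.get(wbs, 0.0):
            warnings.append(f"{wbs}: progress dropped "
                            f"{prev_pct[wbs]:.0f}% -> {pct:.0f}%")
    for wbs, seq in materials_seen.items():
        expected = [m for m in EXPECTED_SEQUENCE if m in seq]
        if seq != expected:                  # out of the expected order
            warnings.append(f"{wbs}: unexpected material sequence {seq}")
    return warnings
```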

OK, that's over. Thank you. And I'll give the floor back to my colleague, Alessandro, for the conclusions.

ALESSANDRO DELLE MONACHE: Thank you, Luca. And now to summarize all the progress we made with this workflow.

We estimated, in this case study, an overall time saving of 40% in site supervision activities and processing times. In particular, we increased the cut-and-fill accuracy with the use of scripts that helped us automate the entire process, so as to obtain faster and more reliable results and planning, less prone to human error. Another important goal is the possibility to save time in physical progress monitoring with remote inspections, reducing site visits and, therefore, costs and potential accidents.

As a further benefit, we increased the data digitization efficiency, thanks to the possibility to store all the survey files in the same place, and thanks to the use of AI algorithms to automatically analyze and classify the content of thousands of site pictures. We are also studying new steps to integrate into this workflow in the near future.

On the AI side, we are introducing the possibility to summarize and publish data through dashboards in Power BI. On the augmented reality and virtual reality side, there is the very interesting possibility to use new technologies for immersive experiences. In environmental contexts, we have the possibility to verify, with multispectral analysis, the presence of underground waste and the health state of vegetation.

Thank you, and I hope you enjoyed the presentation and our work. Thank you for watching.
