AU Class

Tectonics via AI at Zaha Hadid Architects


Description

"Teaching Tectonism to AI" is an exploration that focuses on the convergence of AI and tectonism. Tectonism, as a leading subsidiary style of parametricism, poses a unique challenge for current text-to-image AI tools due to their limited comprehension of structural dependencies in architectural representations. The presentation will showcase the potential of merging structural topology generation tools with generative artificial intelligence across various architectural typologies, and how this is integrated into early-stage ideation at Zaha Hadid Architects.

Key Learnings

  • Incorporate design objectives into AI-generated content

Speaker

  • Vishu Bhooshan
    Vishu is an Associate at Zaha Hadid Architects, where he co-administers the Computation and Design group (ZHACODE) in London. He leads the development of a state-of-the-art, proprietary computational code framework to synthesize high-performance façade and roof geometries and enable their structural optimisation, parametric modelling, and coordination with Building Information Modelling (BIM). The framework also assimilates field-tested research and development in early-stage design optioneering, robotic construction technologies, and the digital upgrade of historical design and construction techniques in timber and masonry. Additionally, it powers applied research in emerging technologies of machine learning and artificial intelligence, geographic information systems, and spatial data analytics. Since joining Zaha Hadid Architects in 2013, he has been involved in several design competitions and commissions ranging from research prototypes, products, galleries, stadiums, metro stations, residential buildings, and masterplans to designing for the metaverse and gaming industry. Vishu is currently a Lecturer on the Architectural Computation post-graduate programme at The Bartlett, University College London (UCL). He has taught and presented at several international workshops and professional CAD conferences. In the past few years, Vishu has received awards for excellence in computational design and research, such as the '2022 Digital Futures Young Award' and the '2022 Best Young Research Paper' at the International Conference on Structures & Architecture (co-author), while publishing many more research papers in the field over the last decade with ZHA.
      Transcript

      VISHU BHOOSHAN: Hello, everyone. I'm Vishu Bhooshan from Zaha Hadid Architects, presenting this session on tectonics via AI at Zaha Hadid Architects, sponsored by HP. I'm affiliated with two institutions: the Computation and Design group at Zaha Hadid Architects, and University College London, where I teach as a lecturer on the Architectural Computation course.

      In both of these roles, I do research in computation and design on a day-to-day basis, but I also look at ways to disseminate knowledge via conferences like this one and via teaching at workshops and universities. A bit about the team I work for at Zaha Hadid Architects: CODE is an acronym for Computation and Design. It was started in 2007 by Patrik Schumacher, Shajay Bhooshan, and Nils Fischer as a project-independent research group, initially looking into novel technologies of digital and robotic manufacture, and into geometry processing and the rationalization of geometry.

      As you can see, it initially looked at smaller-scale pavilions to understand the technologies, both on the design-creation side and the design-delivery side. But as the research matured on both delivery and the technologies for manufacture, the application scale has gone up from pavilions, to buildings like arenas and stadiums, to larger-scale masterplans. The team currently numbers 20 people. Most have architecture backgrounds, but with various interests in computational technologies: geometry processing, machine learning, architectural geometry and fabrication, parametric detailing, robotic 3D printing, et cetera.

      Typically, a research trend matures in this way in the office. You start with a project-independent research topic, develop design toolkits associated with it, and these get tested on pilots or special-interest projects. Once a toolkit has matured and proven its robustness on a small-scale project, it is deployed onto larger-scale projects across the office.

      The four strands of research currently in the team, at various levels of maturation, are as follows. The first is high-performance geometry, looking specifically at architectural geometry that is structure- and fabrication-aligned; this is the oldest strand and is now being applied to large-scale projects. The second is participatory design systems, which also include game technologies; this one has been around for eight to ten years, and the same set of toolkits is slowly being embedded into the web and metaverse.

      Today's presentation focuses on the last strand, machine learning and AI, which is more recent but has still been around for three to four years. The agenda is tectonism via AI: our early beginnings, what AI can do, what we did with AI, what we want AI to do, what we are currently doing with it, and what is next in the pipeline for us, with an outlook. I conclude with a summary of all of these at the end.

      What AI can do starts with early beginnings and pilot collaborations, as with any other research trend in the office. This was basically an early-stage look into image-generation pipelines using AI. We started with GANs trained on photo data sets we had in the office — interior and exterior photos of built projects — augmented slightly with what is available on the internet, to create these quick animated blends between the various projects in the office, so as to get our feet wet.

      Then we started looking at diffusion models: given a prompt, and training or augmenting the model with data sets from the office, we could create these kinds of image outputs. At that time we were looking at DALL-E and DALL-E 2 by OpenAI, Midjourney, Stable Diffusion, and, more recently, Adobe Firefly. As with any other research trajectory, we generally collaborate with a pioneer in the field to understand the technology behind it a bit better. In this case, for diffusion models, we did a collaboration with Refik Anadol Studio called Architecting the Metaverse, using a pre-release of DALL-E 2.

      The data set used to train, or to augment the training data, was the data we had in house. Apart from images, we also added 3D model data sets and renders, so that we had a large array of data. It was further augmented by publicly available Flickr images of buildings.

      Once the training was done, it generated these kinds of spatial outputs, or geometric image outputs, similar to the aura of the office — related to the kinds of geometries we generate in our designs. Once we had these various outputs, and since the 3D side of things was not yet mature, we looked at designerly ways of recreating them in the interfaces we normally design in, like Autodesk Maya.

      These were some early tests of how these spaces could be reinterpreted with designer input. We also created 3D models based on the AI images to understand the spatial qualities of the spaces the images generated. Those were the initial beginnings. Next, we'll look at what we are doing with AI. One use, because the technology is image based, was as an early-stage design-assist tool.

      Based on Stable Diffusion checkpoints, we built our own LoRA models on top: various models for exteriors, for masterplans, for interiors, and for graphic design. This gives an overview of the various models we have for exterior facade systems, categorizing our 3D geometries and renders by criteria of program, structural system, louver, and facade system, and creating these various LoRAs so that we can call upon them with a specific prompt, or create a blend, which you will see in the subsequent slides.
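
      As a rough illustration of this setup — a minimal sketch using the adapter API of recent Hugging Face diffusers versions rather than ZHA's internal tooling, with all checkpoint paths and adapter names as hypothetical placeholders — two style LoRAs can be loaded onto one Stable Diffusion pipeline and blended:

```python
# Minimal sketch: blending two style LoRAs on a Stable Diffusion
# checkpoint with Hugging Face diffusers (requires PEFT support).
# All model paths and adapter names are hypothetical placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One adapter per facade system, trained on tagged office renders.
pipe.load_lora_weights("path/to/facade_diagrid_lora", adapter_name="diagrid")
pipe.load_lora_weights("path/to/facade_louver_lora", adapter_name="louver")

# Blend the adapters; the weights set how strongly each style
# pulls the output toward its training set.
pipe.set_adapters(["diagrid", "louver"], adapter_weights=[0.7, 0.3])

image = pipe(
    "office tower facade, diagrid structure, timber louvers",
    num_inference_steps=30,
).images[0]
image.save("facade_option_01.png")
```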

      Similar LoRA models were set up for graphic design so that we could quickly create these kinds of graphical outputs as well. All of this can be integrated into design pipelines because of the accelerated training provided by the hardware we have in the office, such as NVIDIA RTX cards in HP workstations.

      This enables each of these LoRAs to be trained in 45 minutes to 1.5 hours, which is very quick. It also supports three methods — applying a LoRA of only one model, or combinations of them — and all of this takes about one hour on average. As for how it is embedded into design-assist tools: you have a segmentation map, plus ControlNet canny edges, and then you have prompts, which are also specific tags.

      We'll see how it was done later on. Based on these tags, it generates an output, so for the same set of inputs we are able to quickly generate variations. That becomes very useful for early-stage design, where we are quickly able to generate multiple iterations of options and then pick the one we want to go ahead with. The same is done for masterplan models: we have the same methods of training on existing masterplans we have in the office, combining multiple LoRAs or using a single LoRA.
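
      A hedged sketch of that conditioning step, with public diffusers checkpoints standing in for the office setup: a canny-edge map extracted from a viewport capture constrains the layout while the tag prompt varies.

```python
# Sketch: canny-edge ControlNet conditioning with diffusers. Model IDs
# are public checkpoints; file names are illustrative placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Extract canny edges from a massing-model viewport capture.
src = np.array(Image.open("massing_viewport.png").convert("RGB"))
edges = cv2.Canny(src, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Same edge map, different tag prompts -> quick design variations.
for i, tags in enumerate(
    ["office, glass facade, diagrid", "residential, timber louvers"]
):
    out = pipe(tags, image=control, num_inference_steps=30).images[0]
    out.save(f"variation_{i}.png")
```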

      An interesting aspect of this is that we also weight the prompt against the weights of the diffusion model, which generates a wide range of options and lets us choose how to develop further, as you can see on the right. As the weight increases, the output moves closer to the images trained from the office data set; as the weight decreases, the outputs become more generic. The same applies to masterplans — another view of it, again with various outputs based on weights.
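
      Reusing the LoRA-enabled pipeline from the first sketch above (the "masterplan" adapter name is again a placeholder), that weight sweep could look like this — low weights stay generic, high weights drift toward the office-trained style:

```python
# Sweep the LoRA weight on the pipeline from the earlier sketch.
# Assumes a LoRA was loaded via:
#   pipe.load_lora_weights("path/to/masterplan_lora", adapter_name="masterplan")
for w in (0.25, 0.5, 0.75, 1.0):
    pipe.set_adapters(["masterplan"], adapter_weights=[w])
    image = pipe(
        "riverside masterplan, terraced housing blocks",
        num_inference_steps=30,
    ).images[0]
    image.save(f"masterplan_w{int(w * 100)}.png")
```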

      Now, getting a bit more into workflows and use cases. As I mentioned, we tag an input image with specific tags. The tags in this case cover typologies — here, program: commercial offices, residential, sports, hospitality, et cetera. But we also add tectonic tags, such as whether it is going to be made with timber, robotic hot-wire cutting, 3D printing, et cetera. These tags become part of the prompts used to generate the outputs, as you can see — they become keywords like office, facade, et cetera.
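
      A toy illustration of that tag-to-prompt step (category names and tag values are invented): tags from the different categories are simply composed into the generation prompt.

```python
# Toy sketch: compose typology and tectonic tags into a prompt string.
def build_prompt(program: str, tectonic: str, extras: tuple = ()) -> str:
    """Join tag categories into a comma-separated generation prompt."""
    return ", ".join([program, tectonic, *extras])

prompt = build_prompt(
    program="commercial office",
    tectonic="robotic hot-wire cutting",
    extras=("facade", "louver system"),
)
# -> "commercial office, robotic hot-wire cutting, facade, louver system"
```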

      We also use ControlNets to refine the images a bit more — canny edges, depth maps, segmentation maps, and normal maps — to create much more refined images. As for the style library, taking the exterior LoRAs as an example: the tool takes a 3D massing as input, and the user can choose from these exterior LoRAs.

      They can choose one and get a generated image, or create a combination of multiple LoRA models and quickly generate variations. This accelerates the design process at early stages, so people can develop their designs based on what they see in the images.

      Similarly, these are various outputs for materiality — again for early stages of design — looking at glass, concrete, and timber, and giving the designer input on where they want to move in terms of materiality. These workflows also enable us to make small snippets of video. This is from a project for the Dongdaemun Design Plaza in Seoul, where we created NFTs for them — small five- to ten-second animations — also looking at the various tectonics of the same space, whether in concrete, in timber, et cetera.

      Getting deeper into how we integrate this into the software we use in the office: we have our own software-agnostic spatial technology stack called zSpace, the core framework we develop independently of any software. All the methods and logic live in this core framework. It is then easier to create extensions, or plugins, for platforms like Autodesk Maya using their API, and also NVIDIA Omniverse, et cetera, to create applications in the specific software the rest of the office uses.

      We use Autodesk Maya a lot for early stages of design, so we integrated this workflow there. As you can see, the tool picks up an image from the Autodesk Maya viewport, and the plugin provides the interface on the right to pick the LoRA model, the prompts, and the weighting, to get an output image based on Stable Diffusion.
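
      A minimal sketch of the Maya side of such a plugin, assuming nothing about ZHA's actual implementation: capture the active viewport to disk with playblast and hand the image to an image-generation backend (send_to_diffusion below is a hypothetical stand-in).

```python
# Sketch: capture the current Maya viewport frame to an image file
# that a Stable Diffusion backend could consume.
import maya.cmds as cmds

def capture_viewport(path="C:/tmp/viewport.png", width=768, height=512):
    """Write the current viewport frame to disk via playblast."""
    frame = cmds.currentTime(query=True)
    cmds.playblast(
        frame=[frame],
        format="image",
        compression="png",
        completeFilename=path,
        width=width,
        height=height,
        percent=100,
        viewer=False,
        forceOverwrite=True,
        showOrnaments=False,
    )
    return path

# image_path = capture_viewport()
# send_to_diffusion(image_path, lora="interior", prompt="atrium, timber")  # hypothetical backend
```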

      This quick video shows that as the designer makes changes on the right, they also get a quick preview of what those changes entail in terms of spatial quality. The designer gets this feedback quickly and can make design changes accordingly — in this case, using the interior LoRA. We also integrated the shape-modeling techniques with other diffusion models, like Midjourney and DALL-E; this example shows how it works with Midjourney.

      A similar example shows how Autodesk Maya is integrated with the masterplan LoRA. In this case, the designer is making changes to a masterplan massing and is quickly able to generate these images on the left, getting visual feedback while designing. We are also looking at pixel streaming: here, the designer works with segmented models, either drawn or modeled in Maya or other design software, and based on the segmentation is able to quickly see the outputs on the left.

      That covers what we have done. Now let's look at what we are most recently doing with AI in the company: teaching tectonics to AI. A bit about what tectonism is. Tectonism is a subsidiary style of parametricism, the design theory behind the projects in the office. What it does, specifically, is make performance criteria visible in the shape and heighten them stylistically. These performance criteria can be structural, fabrication, environmental, spatial, et cetera. That is what forms tectonism.

      Through the projects developed in the office, we have come to know its benefits: high performance in terms of geometry, but also in terms of the user experience and interactions people have in these projects. This is our project with the highest atrium in the world, Leeza SOHO, and we are learning from these benefits of tectonism.

      Another benefit is that tectonic projects are also structurally aligned. This is a project we developed with the Block Research Group, incremental3D, and Holcim: a 3D-printed bridge standing in pure compression — again highlighting 3D-printing tectonic aspects in a structural system that stands in pure compression.

      This enables it to use less material, while plugging in novel contemporary fabrication technologies like 3D printing and combining them with the ancient wisdom of masonry. For more detail on this, please join my session titled Unifying Workflows With OpenUSD, where I delve deeper into high-performance geometry and tectonism.

      So how are we teaching tectonics to AI? As mentioned, for the diffusion models we wanted to embed these features: structural features, environmental features, and fabrication tectonics. The previous workflow, as we saw, took an input image — a depth map or a regular image — and generated an output based on a prompt. To teach tectonics to the diffusion model, we had to introduce the steps of having a structural AI model, a fabrication-related AI model, and an environmental AI model.

      To create the data set, this was first experimented with in a DigitalFUTURES workshop in Shanghai, where we used Ameba and Peregrine as our topology optimization software to create a data set of 3D models. Ameba gives a more solid 3D geometry, while Peregrine gives you topology-optimized center lines. This was combined with publicly available structural models, so as to better understand the various types of structural systems out there, based on material as well.

      Subsequently, we developed tectonic details related to the materials we wanted to explore for these towers: timber, concrete, and steel. Multiple geometry data sets were generated using these tectonic principles based on material. For the environmental aspects, we again looked at various scenarios — south facing, west facing — at programmatic distribution in the tower, whether living spaces or office spaces, and also at floor heights.

      Based on those criteria, we set up tools to make parametric variations, looking at what the balcony system or the louver system would be, and generated these kinds of models to use as a data set. This is a rendered output of one such data set. For tagging for training, we again picked up a 360-degree view — multiple images — where the angle and position of the camera are also used as tags.

      Apart from that, we also embedded structural tags: anything tagged with the structural tool — Ameba or Peregrine — marked the structural model, and the shape of the geometry, et cetera, was also used as tags. A similar thing was done on the fabrication side, looking at digital timber, concrete, and metal, and also at subsets within those — developable surfaces, glulam, bentwood, et cetera — to give a bit more detail on fabrication.

      Similarly for the environment, it was the same: whether the louvers are horizontal or vertical, or balconies, where they are located, and on which side. All of those were embedded as training tags. This gives an overview of the data set and the models for all three AI models we built.
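
      A hedged sketch of that tagging step, in the per-image caption-file format that LoRA trainers such as Kohya's sd-scripts commonly read (one .txt of comma-separated tags next to each render); all tag values here are illustrative.

```python
# Sketch: write a caption file of training tags next to each render.
from pathlib import Path

def write_caption(image_path, camera_azimuth, camera_height, tags):
    """Store camera and tectonic tags alongside a rendered dataset view."""
    caption = ", ".join(
        [
            f"camera azimuth {camera_azimuth}deg",
            f"camera height {camera_height}m",
            *tags,
        ]
    )
    Path(image_path).with_suffix(".txt").write_text(caption)

write_caption(
    "dataset/tower_042_view_090.png",
    camera_azimuth=90,
    camera_height=30,
    tags=[
        "topology optimized",                # structural tag
        "digital timber, glulam",            # fabrication tags
        "horizontal louvers, south facing",  # environmental tags
    ],
)
```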

      Subsequently, this was trained using the Kohya trainer on NVIDIA RTX cards, which trains these models very quickly, in 30 to 40 minutes. These are some of the outputs of the AI visualization on the right. It was also powered by workstations provided by HP, which likewise speeds up the training process.

      Some visualization of the outputs: a video showcasing how, based on these prompts, we are able to quickly generate variations on six tower models; I'll show more detail in subsequent slides. This shows how the same model was used to create tectonic details in visualizations, and this video gives an overview of the workflow — how it was developed and how it generates output.

      As I mentioned, these are the various data sets: structural, fabrication, and environmental. To understand this better, it is showcased at the HP booth; please do visit us, where we can give you more insight into what's happening in the back end. Once the training was done, an initial test of 2D to 3D was also tried, but that is still in very early stages.

      You can still see the outputs we were getting, but the more substantial results were on the AI visualization side, which was able to create these kinds of variations very quickly based on the tags and prompts.

      Moving on, what we want AI to do next is to assist in spatial content creation for cities. What does that mean? To get started, we also wanted to look at AI with 3D: what geometric learning could be, and how we can learn from geometric features.

      One of the examples we looked at, again, was topology optimization. Typically, topology optimization takes about 20 to 30 minutes to set up and run the simulation, and because it is a finite element analysis, you also have to create high-resolution geometry — not very conducive to early stages of design. So we asked whether we could actually predict topology optimization results, even if the prediction only gives you two classes: material needed here, no material there.

      If it can be predicted quickly, then even if it is not 100% accurate, at least it pushes us into the right domain. So we tried to learn, or make these predictions, based on locally calculable geometric features like mesh distance to boundaries, the angle of the load, et cetera. This was done on a training set for a chair we designed with topology optimization, and as you can see, the prediction accuracy currently varies between roughly 70% and 85%, which is already a good ballpark for early-stage design.
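
      As a small sketch of that per-element prediction (feature names and the random stand-in data are placeholders; the real training set would come from topology-optimization runs), a plain classifier over locally calculable features might look like this:

```python
# Sketch: classify "material / no material" per element from local
# geometric features. The data here is synthetic stand-in, not ZHA's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Columns: distance to boundary, distance to support, distance to load,
# angle between load direction and local surface normal (all normalized).
X = rng.random((n, 4))
# Synthetic labels standing in for topology-optimization ground truth.
y = (X[:, 0] + 0.5 * np.cos(X[:, 3] * np.pi) < 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```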

      But we want to refine this further, make it more robust and accurate, and work on a larger data set. The reason we want to get into the 3D side is to assist in spatial tileset creation for cities or districts. This is from a game we recently developed with Epic Games, and we want to use AI to assist in the creation of these tiles. What AI would require for that is data sets of procedurally generated content.

      In this case, models generated in Maya that carry a sequence of 3D operators. A large language model can easily pick up, or learn, the sequence of operators and try to recreate them, or create combinations of them, to quickly create these towers. And because most of our projects are designed in Autodesk Maya, which stores a history of operators, we can expand our data set to the 3,000 to 4,000 projects we have in the office, classified across typologies like towers, bridges, cultural buildings, et cetera.
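
      A hedged sketch of harvesting such sequences (a simplification of any production pipeline): Maya's construction history can be read back per shape, with node types serving as the "tokens" of the modeling sequence.

```python
# Sketch: extract the ordered operator sequence behind a Maya shape
# from its construction history, for use as LLM training data.
import maya.cmds as cmds

def operator_sequence(shape):
    """Return history node types, oldest operation first."""
    history = cmds.listHistory(shape, pruneDagObjects=True) or []
    return [cmds.nodeType(node) for node in reversed(history)]

# Hypothetical example: operator_sequence("towerShape") might yield
# ["polyCube", "polyExtrudeFace", "polyBevel3", "polyTwist", ...]
```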

      The data set could also be assisted by gameplay. This is Project Merlin, which we developed with UEFN, again based on tilesets, creating spatial assets for buildings but also for landscape. These could also be used as data points for training, because they are produced procedurally and sequentially.

      To train these large language models, we can use existing language models, as you can see on the left from NVIDIA, but we also require hardware that enables it. HP is able to provide us with such hardware to create these training data sets, or to run the training on the data sets we have.

      The large language models, as I mentioned, can also learn sequences of operators for the tectonic aspects — not only creating a shape, but creating shapes that are tectonically aware, whether for 3D printing, robotic hot-wire cutting, digital timber, et cetera. Based on projects we have done previously, we have an extensive set of methods that already does this procedural generation, and we want to train large language models on those so they can create 3D geometry.

      The goal, once we have such geometries or tile sets, is to use AI to create these kinds of combinatorics: generating variations from simple tile sets, then making an evaluation check of which ones are feasible and which are not. This gives an example of the combinatorics we did for a housing project — it showcases the application and is not AI-generated.
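
      A toy sketch of that generate-then-filter combinatorics (tiles and adjacency rules are invented): enumerate candidate stacks of tiles and keep only the feasible ones.

```python
# Sketch: enumerate tile stacks and filter by adjacency feasibility.
# Tile names and rules are invented for illustration.
from itertools import product

TILES = ["lobby", "office", "residential", "plant", "roof"]
ALLOWED_ABOVE = {  # which tiles may sit directly above a given tile
    "lobby": {"office", "residential"},
    "office": {"office", "plant", "roof"},
    "residential": {"residential", "roof"},
    "plant": {"office", "roof"},
    "roof": set(),
}

def feasible(stack):
    """Require a lobby base, a roof top, and legal vertical adjacencies."""
    adjacency_ok = all(b in ALLOWED_ABOVE[a] for a, b in zip(stack, stack[1:]))
    return adjacency_ok and stack[0] == "lobby" and stack[-1] == "roof"

variants = [s for s in product(TILES, repeat=5) if feasible(s)]
print(len(variants), "feasible stacks; e.g.", variants[:3])
```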

      The goal of developing this is to go from simple tilesets to aggregations for city districts — which is where we are heading — whether based on programmatic differences, such as housing or offices, and on how the aggregation integrates with existing conditions in the city, such as a water body or landscape, et cetera.

      In summary, we looked at how we are using AI diffusion models to create spatial images similar to the projects we have in the office — the style, or aura, of the office. We looked at how we are teaching tectonics to AI: integrating structural and fabrication models into diffusion models so as to get early-stage visualizations of the structure and of environmental features like louvers.

      We also saw how this work is being accelerated by integration with publicly available AI models and by great improvements in hardware from companies like HP, which speeds up the training process and saves us time. And we saw how AI is becoming more 3D, with crowdsourced content and procedurally generated data sets that large language models can be trained on, so as to quickly generate tile sets and combinatorics for cities.

      So let's join together and collaborate to create these blueprints for future cities. This session was brought to you by Z by HP, AMD, and NVIDIA. Please visit the HP booth, where we can showcase the various solutions they have for AI and how they could be integrated into your workflows.

      Also visit the booth if you want a bit more understanding of the tectonics-for-AI part, which is showcased there as 3D prints and data-set models. Hope to see some of you there. Thank you, and thanks a lot for your attention.
