AU Class

Innovating the Future Pipeline: Bridging 3D to AI Render


Description

As technology continues to evolve, the intersection of 3D design and artificial intelligence (AI) presents exciting opportunities for innovation in content creation. In this session, we'll showcase experimentation with innovative techniques—namely, ComfyUI, ControlNet, and Stable Diffusion—to forge a future pipeline that seamlessly integrates 3D rendering with AI-driven processes. We'll outline a visionary approach toward empowering artists and streamlining the content creation process. By using advanced AI algorithms and data-driven insights, we aspire to give artists greater control over their creations as they enhance their efficiency and creativity while using Autodesk tools, specifically Maya and Arnold software. Our vision for the future pipeline revolves around achieving a seamless transition from 3D design to AI render, while empowering artists with unprecedented control over their creations.

Key Learnings

  • Explore the empowerment of artists through enhanced control and creative autonomy.
  • Learn about the integration of AI-driven insights to enrich 3D creations and elevate visual quality.
  • Learn about streamlined content-creation processes, leading to increased productivity and efficiency.

Speakers

  • John Paul Giancarlo
    John Paul Giancarlo is a 3D expert with extensive experience as a Visual Effects TD and Technical Lighting Artist. He brings wide-ranging knowledge of Shotgun, Arnold, Maya, MEL, 3ds Max, and Nuke. John Paul has been working in the TV/film industry for over 15 years. He started his career as a lighting artist at Brown Bag Films and quickly made the jump from TV series to commercials to become the studio's 3D R&D Technical Director. He was instrumental in integrating Maya and Shotgun into Brown Bag's pipeline and has worked on the Emmy-winning TV show "Doc McStuffins," currently on Disney Jr., as well as "Octonauts" on CBeebies and "Peter Rabbit" on Nick Jr., among others.
  • Roland Reyer
    Roland Reyer started in 1992 as an Application Engineer at Wavefront Technologies GmbH in Germany. He became an industry expert for the entire M&E product portfolio of Wavefront Technologies, later Alias|Wavefront, Alias, and finally Autodesk. As an application specialist he has tested, showcased, and trained Maya since its very first version. With over 25 years of product and market experience, Roland now works at Autodesk as a Solutions Engineer for Maya, Arnold, Shotgun, Mudbox, and MotionBuilder throughout Europe.
      Transcript

      JOHN PAUL GIANCARLO: Hello, everybody. Thank you for being here today. Today I'm presenting Innovating the Future Pipeline and Bridging 3D to AI Render. My name is John Paul Giancarlo. I'm a former 3D artist, VFX, R&D, and Pipeline Technical Director. I've been working in the TV and film industry for over 20 years with a focus on the latest technologies, productivity, and efficiency. If you want to know a little bit more about me, you can always ask Copilot, and you can see some of the projects that I worked on in the past.

      So a little introduction. As you may know, technology continues to evolve, so the intersection of 3D design and artificial intelligence (AI) presents exciting opportunities for innovation in content creation. In this presentation, I will showcase my experimentation with cutting-edge techniques, namely ComfyUI, ControlNet, and Stable Diffusion, to forge a future pipeline that seamlessly integrates 3D with AI-driven processes.

      So on the agenda today, first of all, we're going to see which are the top AI services available for content creation today. Then we're going to see ComfyUI workflows, basically how to give artists control back over their tools and their desired results, and ComfyUI integration into 3ds Max through TyFlow, or TyDiffusion.

      So, AI technologies: we have content creation for 2D and 3D. We'll start with 2D image generation. As you may know already, there's a bunch of different services like DALL-E, Midjourney, Adobe Firefly, Runway, Haiper, Kling AI, and Leonardo. One thing they all have in common is that you need to purchase a subscription if you want to get unlimited generations. Otherwise, you are very limited in what you can do.

      They all live on different servers, so it's quite difficult to set up a proper pipeline. It's not really an ecosystem. So in order to test a few of them, I've come up with a very difficult prompt, and I quote: "A highly photorealistic image of an adult person inside of a floating placenta, set in a futuristic sci-fi laboratory inspired by the style of the movie Prometheus.

      The person is in a fetal position, surrounded by a dense network of cables and wires connected to their body and the placenta. The environment is dark and high-tech, with advanced equipment, glowing screens, and additional sci-fi details like holographic displays, robotic arms, and intricate control panels. The style is heavily inspired by heavy metal, with an intense atmosphere, metallic elements, intricate details and bold, aggressive designs.

      The person appears more diffused and partially obscured by the cables and shadows, creating a mysterious and eerie effect. The background is blurred and dimly lit to focus the illumination on the character and the placenta, with design elements reminiscent of the aesthetics from Prometheus." As you can tell, this is a very elaborate prompt.

      So let's start to see some of the results. DALL-E, as you can see here, gives me a really pleasant result. It's almost everything that I asked for, except for a few minor details. As you can see, I didn't ask for a capsule. I only wanted to have a placenta hanging out there somewhere. Anyway, I think this is a pretty good result, so I'm just going to give it a pass.

      Next is Haiper. Haiper gave me this out of this prompt. I was not totally satisfied. It does look a little bit more like the movie Prometheus. But as you can see here, there's no placenta, and the woman is not in a fetal position. It's not really what I asked for.

      From KLING AI, I got very weird results. You can see here this doll kind of hanging from the cables. It is an adult person, but with a baby face. So I'm not really pleased with this result. Then Leonardo gave me this, on the first try, by the way: this fat man sitting in somewhat of a chair with some sort of fleshy texture. Not sure; it looks like gum or something. Not really what I asked for. Overall, it's a pretty good image, though.

      And now Adobe Firefly, out of them all, this is probably the one that is the most cartoon-y. I guess it's because of the data it's been trained on. And finally, Runway. So Runway unfortunately didn't allow me to produce this because it was flagged by their content policy. So I was not allowed to create an image out of this prompt.

      So next, I'd like to show you my experimentation with image-to-video and video-to-video generation using the services mentioned here. Again, we're going to see Runway, Luma, Haiper, PixVerse, and KLING AI, among a few others.

      I have to mention Sora, even though I couldn't test it because it's not out yet. Sora is one of the AI video services that looks very promising. A lot of people are talking about it, and it's been trending for a while. So we're just waiting for it to come out so we can test it.

      But in the meantime, I have tested a few others. Hotshot is one of them. I gave it the prompt and Hotshot gave me this result. I couldn't choose a specific model or style, so it gave me this. So I'm happy with the results, since I couldn't get any LoRAs, or any diffusion models, or any of that.

      In the case of PixVerse, I got two shots: one perfectly understandable, and one where it's quite hard to understand what it is, kind of a blob or something, plus a pregnant woman. Not really what I asked for. Overall, the image looks pretty pleasing. But as you know, if you want to get what you want, you have to iterate a lot in order to get to the result that you're expecting.

      Next on the list is PixVerse again. This time around, I had the opportunity to produce a video from an image, so I fed it this particular image over here. And I quote, the prompt says, "The person looks to the camera in a real fast move, then opens his mouth, sticks his tongue out, and then turns into a lizard. His eyes turn yellow and his pupils move like a reptile's. Dynamic motion, handheld, fast moving, 35mm camera moving closer to the subject." And this is what it produced.

      [LAUGHS]

      Yep. As you can see, that's what I call an AI hallucination. It just produced whatever it wanted to produce. So I'm going to give it a fail stamp because of that. I could iterate more and try more times, but in this particular experiment, I wanted to do just one try for each of these services.

      So in this case, I gave Haiper the image that it produced, and I asked for it to do the following: the person is trying to free himself from the cables and drops to the ground. That didn't happen. So I tried Haiper again, gave it another try with this image. And it produced this awful warping video; I'm not quite sure why it did that.

      So I gave it another try, as it has a keyframe conditioning option in which you can add a first frame, a middle frame, and a last frame. In this case, I just gave it a last frame. And it gave me this. Again, Haiper was probably the worst at doing this, so it's a big fail. It also reminded me of the movie Beetlejuice, for some reason; that's why I put that in there. I just recently watched the movie.

      So next is KLING AI. I went ahead and just gave it the image-to-video prompt, the same one that you saw before. This one is a lot more stable. There's a little bit of warping here and there, but it's not terrible. Leonardo, on the other hand, wouldn't generate a video if you have a free plan, so I just couldn't test it, unfortunately, this time around.

      Luma gave me this result out of the very first prompt that I read to you. As you can see, it's not really what I was expecting. I didn't ask for a baby; I did ask for a placenta. So there's a lot of hallucination; it's basically doing whatever it wants to do, not really following the prompt. I guess I could have been more specific, but the whole point of the test was to give a very complicated prompt.

      And now you can see here, this pupil is doing something really weird. It's just not doing what I asked for entirely. So I gave Luma another try with this first and last frame, again the same prompt as before. And this is what Luma gave me. I'm going to give it a pass stamp in this case, even though it's not totally stable; you can see some warping over here in the forehead and the head. I guess it's good enough for certain things, so I give it a pass.

      So now, we're going to go to Runway. So Runway deserves special, extra time because of all the things that it does. So if you see here in the website, you can notice that they have a lot of different services like generating video, audio, lip sync, remove background, text to image, image to image, infinite image, expand image, video to video, color grade, all this stuff. And there's a lot of stuff you can do with it.

      And it's so good that a lot of people are actually using it and creating content already. There's a film festival event called The Grand Prix, and you can go ahead and watch what people are generating nowadays with this technology. So in the case of Runway, I gave it the same prompt, no last frame, just the one frame, and it was the one that produced this. Runway, in this case, would not generate that image, but it didn't complain about the video. As you can see, the result, in my opinion, is quite awesome. It's very stable; there's very little to no warping whatsoever in this one in particular.

      So I tried different things with the service I thought was probably the best out of them all, and I gave it different images to produce. In this case, I asked for a man walking towards the camera while dragging his chains, fast motion. The chains didn't get dragged at all, as you can see. It's not really interacting with the chains at all.

      And this one here is a man running fast, deep into the forest, while lifting dust, fast motion. But you can tell there's a walkie-talkie that actually disappears. So again, in my opinion, it's not very good at handling prompts. Same in this case: another chain, another problem, not really doing anything with it. And it's quite saturated in this particular case. But still, the produced result is not terrible. It's quite good.

      And here, you can see the man is walking fast with the camera, the man swings away, particles are floating, and dust is moving as he walks: dynamic movement, handheld camera, time lapse. So you can see, I don't know, it's good. A little bit exaggerated, but good.

      Now, I also tested video to video. So I fed it with a man dancing on a green screen, and you can see here the results. I just basically changed the styles, just to produce a very obvious difference between those videos.

      So I guess this is the reason why Runway managed to partner with Lionsgate as of September 18 this year. They are partnering to explore the use of AI in film production, so I can tell the future is going to be very bright. Now, talking about 3D model generation, I also explored Meshy, CSM, and Rodin. The very first one was Meshy, as you can see here. I gave it a very simple prompt: an alien reptile, tall and scary. It gave me four different results over here, which you can preview; it's got an automatic view mode which rotates the model for you as well. And if you want, you can generate a texture.

      So here, we can see how the texture gets generated and how it's being displayed. It's actually quite a beautiful animation of the texture. You can also just select a texture. And if you're happy with the results, you can download either an FBX, or a blend file, among others. But if you have a free account, you cannot download an FBX. You can only download a blend, unfortunately.

      So another thing you can do is actually rig this character. When you click Next, it will give you this example that you need to follow, which is basically placing those points in the right positions for chin, shoulder, elbow, wrist, et cetera. Once you hit Next, it's going to take a little bit of time to generate the rig, as you may see now. But once it's done, you're able to animate this object using some of the libraries you already have.

      And we'll see in a few seconds from now. There you go. So you can apply different animations if you want to, or just download the FBX file so you can animate it someplace else.

      So another thing you can do is create models out of images. This is the image right here, and this is the model that it created for me. It's not an amazing approximation, but at least it's a pretty good start, in my opinion.

      So once you have that model, you're obviously free to import it into Maya and start checking it out, see what you can do. If you need to retopo the mesh and start working on it, you're free to do so. But as of Maya 2025, you can now use our Flow retopology service, which is also in the cloud, so you can continue to work while you perform a retopology on these particular models.

      I'll show you this example because whenever you have, say, a project in which you have tons and tons of props, you can maybe generate them with the help of AI. So when the artist opens the file for the first time, it's not an empty scene. At least it has something to start from.

      You can definitely also add a quick rig inside of Maya, if you'd rather use Maya's character system instead. And then you can start animating inside of Maya, or connect any motion file from Rokoko or any other library.

      Another thing that I thought was worth mentioning is that you can also export as quads, although that's only available when you upgrade to a subscription with Meshy. As you can see, it's quite good, so it'll be a lot easier to retopo, and there won't be non-manifold geometry or other errors.

      So now, we'll give Rodin a try. Rodin is another service. It has some other controls: you can import 3D models, it has bounding box controls, that kind of thing. You can also generate by text or generate by image. In this case, I'm generating by image, so I give it that specific image there. Sorry, my mistake, that was the text: an alien creature with spiked armor, red glowing eyes. That's what I gave it.

      So, pretty good results. You can also download OBJ, FBX, USDZ, STL, et cetera. You don't need a subscription for that; you can download an FBX out of this one. Unfortunately, with CSM I just couldn't get a proper model, because the servers would take 50.1 hours to generate and refine a mesh, so they kind of force you to buy a subscription, which is not good.

      And, on the other hand, I'm pretty sure you already know this, but Autodesk has our own project called Project Bernini. I'm just going to hit play on this video so you can see what it's all about.

      [MUSIC PLAYING]

      PRESENTER: At Autodesk, we're obsessed with geometry, and that obsession is reflected in our software. Introducing research Project Bernini, experimental generative AI for quickly generating functional 3D shapes from 2D images, text, voxels, or point clouds. Our first Bernini model is focused on unlocking professional workflows, generating multiple, geometrically realistic 3D objects to accelerate every stage of the creative design process.

      As Autodesk trains its generative AI on larger, higher quality data sets and modalities, the technology will become increasingly useful and compelling, producing 3D models and objects that work in the real world and serve the purpose a designer has in mind. Because the world's designers, engineers, builders, and creators trust Autodesk to help them make anything.

      JOHN PAUL GIANCARLO: All right. That was Project Bernini. So, in conclusion from this very first part, we know by now that there are tons and tons of different services out there: OpenAI, Runway, CSM; I mean, there are so many of them it's really hard to put them on one page only. But after weeks of experimentation, these are my impressions. All services are web-based, so pipeline automation would be difficult to set up. You never get the same result twice: similar, but not the same.

      Results can be very unstable and produce too many artifacts, especially in video; not in all cases, not so much video-to-video, but image-to-video. These services do not connect with each other, so it's not really an ecosystem. You cannot create a sequence of images using these services in a consistent manner, so they're not good for storyboards, not good for video, not good for any of that stuff. 3D models are very low-res, thus not good for production, only for low-poly game assets.

      Textures are very small and unusable for film or TV, and it also requires too much work to get right. Too many tries are needed before getting to the desired result for images and videos. So this is not really cost-effective, because you spend so much money trying to get to the right thing. Even when you get good at it, you still have to iterate a bunch of times.

      So, in my opinion, it's only good for advertisement, when you have to produce one shot only; maybe YouTube videos, where you produce a series of videos that do not really relate to each other rather than a sequence of shots or scenes; presentations; and some other use cases, for example producing game assets, that kind of thing in terms of 3D modeling.

      So I guess you'd be crazy to make a film or TV show today with only the AI technologies available. That's the quote of the day. Now let's get into ComfyUI, which is what we're here for. This is a case study. But first, what is ComfyUI? ComfyUI is a powerful and modular graphical user interface designed for creating and managing advanced workflows in Stable Diffusion, which is a deep learning model used for generating images from text descriptions. It uses a node-based interface, allowing users to design complex pipelines without needing to write code.

      It is a node-graph, flowchart-style interface. This makes it easy to experiment and create intricate workflows. It supports various models like Stable Diffusion 1, 2, and SDXL. It's got lots of optimizations, and it's very versatile, so it can handle tasks like inpainting, upscaling, model merging, and so on and so forth.

      But what are the key components needed for this to work? You first need to have Stable Diffusion models, or diffusion models. These are used to generate data similar to the data on which they were trained. Fundamentally, diffusion models work by destroying training data through the successive addition of Gaussian noise, and then learning to recover the data by reversing this noising process.
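
      The forward "noising" process described here can be sketched in a few lines of NumPy. This is a toy illustration of the closed-form noising step, not Stable Diffusion's actual implementation; the schedule values are typical DDPM-style defaults:

```python
import numpy as np

def forward_diffusion(x0, t, betas):
    """Noise x0 up to timestep t in one shot (closed form).

    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]       # cumulative product up to step t
    noise = np.random.randn(*x0.shape)      # epsilon ~ N(0, I)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

# Toy "image" and a linear beta schedule of 1,000 steps
x0 = np.random.rand(8, 8)
betas = np.linspace(1e-4, 0.02, 1000)

x_early, _ = forward_diffusion(x0, 10, betas)    # still close to x0
x_late, _ = forward_diffusion(x0, 999, betas)    # nearly pure noise
```

      A denoising network is then trained to predict `noise` from `xt`; sampling runs that prediction in reverse, step by step, to recover an image from noise.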

      We also need ControlNet models. These are neural network architectures that can be used to control diffusion models. ControlNet works by adding extra conditioning to the diffusion model, with an input image as an additional constraint to guide the diffusion process. Basically, it's guiding the diffusion process I described before by using, for example, depth, masks, normals, and lines.

      You also need LoRAs. LoRAs are lightweight models that are used in conjunction with the base models. They are basically modules trained on specific concepts or styles, and they increase the range of concepts that a checkpoint can depict. The checkpoint is basically the diffusion model; that's where everything starts. So you can add different styles on top of it, which is quite good. And VAEs are generative models that use machine learning to generate new data in the form of variations of the input data they were trained on; so, more variations.
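
      The low-rank idea behind LoRAs can be shown with a toy NumPy sketch. The dimensions here are made up for illustration; real LoRAs patch weight matrices inside the diffusion model's attention layers:

```python
import numpy as np

d, k, r = 512, 512, 8             # layer dimensions; rank r is much smaller

W = np.random.randn(d, k)         # frozen base-model ("checkpoint") weight
B = np.random.randn(d, r) * 0.01  # small trainable LoRA factors
A = np.random.randn(r, k) * 0.01

def forward(x, scale=1.0):
    # Base path plus the low-rank delta B @ A, blended by a strength factor
    return x @ (W + scale * (B @ A)).T

x = np.random.randn(1, k)
y = forward(x, scale=0.8)

# A full fine-tune would retrain d*k values; the LoRA stores only r*(d+k)
print(d * k, r * (d + k))  # 262144 8192
```

      This is why a LoRA file is a few megabytes while a checkpoint is gigabytes, and why several LoRAs can be stacked on one checkpoint at different strengths.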

      When you start ComfyUI, you first get a CMD command window, and then, in the default browser, which is where it deploys, you have only this empty space with this little window. Now you can load or start creating workflows, like the ones you see in this image right here.

      You can start by adding nodes. In this case you can, for example, load an image, and you can start creating from there on. All right. Next, I'd like to show you the ComfyUI workflow so we can render 3D objects with AI, using Stable Diffusion. For that, I have set up a scene here, basically an old scene that I had in the past, which is this guy running, just like a zombie, or a monster of some sort, in these city ruins.

      So what do we need to do? We don't need to render or export any model. We need to get a color mask, a Cryptomatte; a normal map; a Z-depth map as well; and a contour. Once we've done that, we can start generating away and create results like the ones you see over here.
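
      As a toy illustration of how much information those passes carry, an outline can even be approximated from the Z-depth alone with a simple gradient edge detector. In production the line art would come from the renderer; this sketch just shows the idea:

```python
import numpy as np

def depth_to_outline(depth, threshold=0.1):
    """Toy contour pass: mark pixels where depth changes sharply."""
    gy, gx = np.gradient(depth.astype(np.float64))
    return np.hypot(gx, gy) > threshold    # boolean edge mask

# Synthetic depth map: a near "box" (depth 1) on a far background (depth 5)
depth = np.full((16, 16), 5.0)
depth[4:12, 4:12] = 1.0
outline = depth_to_outline(depth)          # True along the box silhouette
```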

      But the first thing we need to do is either load a scene or start creating one. In this case, I'm going to load a workflow, which was created by Mickmumpitz. In this very first part of the workflow, we have the ability to set a width and a height for the video. We also have the ability to add a mask, which is what we're going to do now.

      Right after that, we can load the Z-depth map, one of the passes I mentioned before that we need, and the line art, or outline, or contour. All right? Next step, we need to extract those colors. So in this part of the workflow, what I need to do is find the hex color of each mask; in this case, the green. Apply the hex color, go into the prompt section, and in the primitive, change the title to "character," just to be sure that we are working on the right one. Then we need to repeat the steps for the trees, the ground, and the sky.

      Once we've done all that and renamed all our primitives over there, we should proceed to disable all the rest of the nodes by pressing Ctrl+B, and queue the prompt, just to be sure that we extracted the right data. So you can see we have all the necessary data, extracted in the right way. Now we can start adding prompts.
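
      The color-extraction step boils down to matching pixels against a hex value. A minimal sketch of that idea, assuming the mask pass is an (H, W, 3) uint8 color-ID image (function names here are illustrative, not ComfyUI node names):

```python
import numpy as np

def hex_to_rgb(hex_color):
    """Turn "#rrggbb" into an (r, g, b) tuple of ints."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def mask_from_hex(image, hex_color, tol=10):
    """Binary mask of pixels within `tol` of the given hex color."""
    target = np.array(hex_to_rgb(hex_color), dtype=np.int16)
    diff = np.abs(image.astype(np.int16) - target)
    return np.all(diff <= tol, axis=-1)

# Toy 2x2 "ID pass" with one green character pixel at (0, 0)
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (0, 255, 0)
char_mask = mask_from_hex(img, "#00ff00")  # True only at (0, 0)
```

      Each per-region prompt in the workflow is then applied only where its mask is True, which is what gives back per-object control.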

      The very first one is the master prompt, which is going to be applied to the entire scene. Then we can add prompts that apply exclusively to each one of the masks. In this case, I'm going to type that this is a person, a happy one, maybe a sportsman. In the primitive at the top, I also added that this is a cartoon, stylized, et cetera. There are a lot of other things that I didn't show, but it's basically right over there.

      Now we can proceed to select all the rest of the nodes and re-enable them. In the ControlNet section, you'll see that we have two ControlNet safetensors. In this case, we first have to choose the depth model so we can control the depth, and we also have a Canny model for the outline. Basically, that's all we are doing. And if we go back to the Load Checkpoint node, this is where we specify which model we want to use. In this case, I'm going to be using one called Juggernaut XL, which is quite nice for the things that I'm trying to do.

      Now we can queue the prompt, and it seems like nothing's happening. But if you look at the command window, the job is loaded, and it's going to start to render right now. You can see the progress bar in the command line, and also in the workflow. So you see we have a result; not the best, just because we need to make some changes.

      Now, in the KSampler node, we need to change some things. We have "control after generate," which at the moment is randomized, but we can change that to fixed, just to make sure everything is going to be controlled, and then change the steps to 32. And now we can queue the prompt again.
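
      Under the hood, a ComfyUI workflow is just JSON, and queueing a prompt is an HTTP POST to the server's /prompt endpoint (port 8188 on a stock local install). The node ID, link indices, and values below are illustrative, loosely mirroring the KSampler settings from this demo, not the actual session file:

```python
import json

# Minimal workflow fragment: one KSampler node with a fixed seed and 32 steps.
# The ["node_id", output_index] pairs are links to other (hypothetical) nodes.
workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,                       # fixed seed -> reproducible result
            "control_after_generate": "fixed",
            "steps": 32,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 1.0,
            "model": ["4", 0],                # from the Load Checkpoint node
            "positive": ["6", 0],             # positive prompt conditioning
            "negative": ["7", 0],             # negative prompt conditioning
            "latent_image": ["5", 0],
        },
    },
}

def queue_prompt(wf, host="127.0.0.1", port=8188):
    """POST the workflow to a running ComfyUI server's /prompt endpoint."""
    import urllib.request
    data = json.dumps({"prompt": wf}).encode("utf-8")
    req = urllib.request.Request(f"http://{host}:{port}/prompt", data=data)
    return urllib.request.urlopen(req).read()

# queue_prompt(workflow)  # uncomment with a local ComfyUI server running
```

      This is what makes ComfyUI interesting for pipeline automation, unlike the web services from the first half of the talk: the same JSON can be generated, versioned, and submitted from scripts.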

      And finally, this is what we get here. It's kind of what I was looking for. It's basically a sportsman, a happy man; it did all the rest for me. Now I can change this back to randomize or increment, and start rendering a bunch of images to produce different results and see which one I like the best. Over here in the history, we can always click and go back and reload whatever result we want.

      So as you can see, there's a lot of renders or generations out of that. So you can mix different models with different LoRAs and get different results, as you can see over here. This is another example where instead of using random generations, I'm using a fixed generation with some increments.

      OK. So next, I'd like to show you the integration in 3ds Max of TyDiffusion, from the TyFlow team. Here's a little video so that you can see what they're doing with this technology.

      [MUSIC PLAYING]

      All right. So that's TyDiffusion in a nutshell. Now I'm just going to jump into 3ds Max, and I'm going to show you a little test that I did myself here. As you can see, you have the basic settings where you can change the model. You have the seed. You also have the ability to change the resolution mode. You have ControlNet as well, where you can basically define whether you want to use depth, or edges, or poses, et cetera. And you can upscale images, and a bunch of other things.

      Here you have the steps, and then here you have the prompt, where you can basically change, add, and do whatever you need. In this case, I'm going to change this prompt from a zombie to an alien. As you can see, I've already generated something. And whatever color you have in the shader will affect the final result a lot, so bear that in mind: it will actually use your shaders when generating the final result. Here, we just change the perspective and generate another one.

      So now, if you already have an animation like I did in this case, this is the animation.

      And this is the frame that I would have liked to produce. Unfortunately, I lost that one. But in case you wanted to, all you need to do is just create an animation over here and make sure auto seed is turned off. That's quite important.

      So here are some of the results that I was able to produce using TyDiffusion. As you can see here, the generation time was 15 seconds using 20 steps. The resolution was 1344 by 768 with an upscale of 1.5 times. So the results are pretty good. You can see some more variations of that, just by having auto seed turned on.

      Then I just changed the model and the style, basically to cyberpunk, or I can't remember which one it was, but you can see there are some real cool results over here. Now, for the first animation test, here you can see it's warping and unstable. I guess this has a lot to do with the fact that I was rendering only every second frame with the frame interpolation set to one, so I had a total of 50 frames.

      So as you can see, it's not really very stable; it's warping a lot. For the second test, with this guy, it produced this other version. In this case, there was less warping and a bit more stable a result, rendering 100 frames instead of 50.

      Yeah. As you can tell, there are tons and tons of styles that you can use in order to produce different results out of the same geometry. So this is pretty useful, quite impressive in my opinion, and very useful for concept art. You can produce animations of a sort.

      But in conclusion, I think ComfyUI is pretty good for control, storyboards, and concept art, with those variations that you saw. A positive is that you have access to multiple styles and models; you can mix diffusion models and LoRAs, et cetera. It's very scalable, with stable results using ControlNet: not for animation, but for fixed images, yes. It has great potential for production of final renders. And as we already saw, it can be integrated into a DCC, like Max in this case through TyFlow, or Maya and Arnold.

      And one thing I have to mention, it is highly addictive. You can spend hours and hours and hours generating and generating and generating without getting tired. So in my opinion, this is the beginning of a new era for content creation, and it's going to be the era of AI. So a new era is coming. So thank you very much for your attention, and I hope you had a great time, and you learned something new today.

      Wistia
      我们通过 Wistia 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Wistia 隐私政策
      Tealium
      我们通过 Tealium 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Tealium 隐私政策
      Upsellit
      我们通过 Upsellit 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Upsellit 隐私政策
      CJ Affiliates
      我们通过 CJ Affiliates 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. CJ Affiliates 隐私政策
      Commission Factory
      我们通过 Commission Factory 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Commission Factory 隐私政策
      Google Analytics (Strictly Necessary)
      我们通过 Google Analytics (Strictly Necessary) 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Google Analytics (Strictly Necessary) 隐私政策
      Typepad Stats
      我们通过 Typepad Stats 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Typepad Stats 隐私政策
      Geo Targetly
      我们使用 Geo Targetly 将网站访问者引导至最合适的网页并/或根据他们的位置提供量身定制的内容。 Geo Targetly 使用网站访问者的 IP 地址确定访问者设备的大致位置。 这有助于确保访问者以其(最有可能的)本地语言浏览内容。Geo Targetly 隐私政策
      SpeedCurve
      我们使用 SpeedCurve 来监控和衡量您的网站体验的性能,具体因素为网页加载时间以及后续元素(如图像、脚本和文本)的响应能力。SpeedCurve 隐私政策
      Qualified
      Qualified is the Autodesk Live Chat agent platform. This platform provides services to allow our customers to communicate in real-time with Autodesk support. We may collect unique ID for specific browser sessions during a chat. Qualified Privacy Policy

      icon-svg-hide-thick

      icon-svg-show-thick

      改善您的体验 – 使我们能够为您展示与您相关的内容

      Google Optimize
      我们通过 Google Optimize 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Google Optimize 隐私政策
      ClickTale
      我们通过 ClickTale 更好地了解您可能会在站点的哪些方面遇到困难。我们通过会话记录来帮助了解您与站点的交互方式,包括页面上的各种元素。将隐藏可能会识别个人身份的信息,而不会收集此信息。. ClickTale 隐私政策
      OneSignal
      我们通过 OneSignal 在 OneSignal 提供支持的站点上投放数字广告。根据 OneSignal 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 OneSignal 收集的与您相关的数据相整合。我们利用发送给 OneSignal 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. OneSignal 隐私政策
      Optimizely
      我们通过 Optimizely 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Optimizely 隐私政策
      Amplitude
      我们通过 Amplitude 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Amplitude 隐私政策
      Snowplow
      我们通过 Snowplow 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Snowplow 隐私政策
      UserVoice
      我们通过 UserVoice 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. UserVoice 隐私政策
      Clearbit
      Clearbit 允许实时数据扩充,为客户提供个性化且相关的体验。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。Clearbit 隐私政策
      YouTube
      YouTube 是一个视频共享平台,允许用户在我们的网站上查看和共享嵌入视频。YouTube 提供关于视频性能的观看指标。 YouTube 隐私政策

      icon-svg-hide-thick

      icon-svg-show-thick

      定制您的广告 – 允许我们为您提供针对性的广告

      Adobe Analytics
      我们通过 Adobe Analytics 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Adobe Analytics 隐私政策
      Google Analytics (Web Analytics)
      我们通过 Google Analytics (Web Analytics) 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Google Analytics (Web Analytics) 隐私政策
      AdWords
      我们通过 AdWords 在 AdWords 提供支持的站点上投放数字广告。根据 AdWords 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 AdWords 收集的与您相关的数据相整合。我们利用发送给 AdWords 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. AdWords 隐私政策
      Marketo
      我们通过 Marketo 更及时地向您发送相关电子邮件内容。为此,我们收集与以下各项相关的数据:您的网络活动,您对我们所发送电子邮件的响应。收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、电子邮件打开率、单击的链接等。我们可能会将此数据与从其他信息源收集的数据相整合,以根据高级分析处理方法向您提供改进的销售体验或客户服务体验以及更相关的内容。. Marketo 隐私政策
      Doubleclick
      我们通过 Doubleclick 在 Doubleclick 提供支持的站点上投放数字广告。根据 Doubleclick 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Doubleclick 收集的与您相关的数据相整合。我们利用发送给 Doubleclick 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Doubleclick 隐私政策
      HubSpot
      我们通过 HubSpot 更及时地向您发送相关电子邮件内容。为此,我们收集与以下各项相关的数据:您的网络活动,您对我们所发送电子邮件的响应。收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、电子邮件打开率、单击的链接等。. HubSpot 隐私政策
      Twitter
      我们通过 Twitter 在 Twitter 提供支持的站点上投放数字广告。根据 Twitter 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Twitter 收集的与您相关的数据相整合。我们利用发送给 Twitter 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Twitter 隐私政策
      Facebook
      我们通过 Facebook 在 Facebook 提供支持的站点上投放数字广告。根据 Facebook 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Facebook 收集的与您相关的数据相整合。我们利用发送给 Facebook 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Facebook 隐私政策
      LinkedIn
      我们通过 LinkedIn 在 LinkedIn 提供支持的站点上投放数字广告。根据 LinkedIn 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 LinkedIn 收集的与您相关的数据相整合。我们利用发送给 LinkedIn 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. LinkedIn 隐私政策
      Yahoo! Japan
      我们通过 Yahoo! Japan 在 Yahoo! Japan 提供支持的站点上投放数字广告。根据 Yahoo! Japan 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Yahoo! Japan 收集的与您相关的数据相整合。我们利用发送给 Yahoo! Japan 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Yahoo! Japan 隐私政策
      Naver
      我们通过 Naver 在 Naver 提供支持的站点上投放数字广告。根据 Naver 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Naver 收集的与您相关的数据相整合。我们利用发送给 Naver 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Naver 隐私政策
      Quantcast
      我们通过 Quantcast 在 Quantcast 提供支持的站点上投放数字广告。根据 Quantcast 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Quantcast 收集的与您相关的数据相整合。我们利用发送给 Quantcast 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Quantcast 隐私政策
      Call Tracking
      我们通过 Call Tracking 为推广活动提供专属的电话号码。从而,使您可以更快地联系我们的支持人员并帮助我们更精确地评估我们的表现。我们可能会通过提供的电话号码收集与您在站点中的活动相关的数据。. Call Tracking 隐私政策
      Wunderkind
      我们通过 Wunderkind 在 Wunderkind 提供支持的站点上投放数字广告。根据 Wunderkind 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Wunderkind 收集的与您相关的数据相整合。我们利用发送给 Wunderkind 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Wunderkind 隐私政策
      ADC Media
      我们通过 ADC Media 在 ADC Media 提供支持的站点上投放数字广告。根据 ADC Media 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 ADC Media 收集的与您相关的数据相整合。我们利用发送给 ADC Media 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. ADC Media 隐私政策
      AgrantSEM
      我们通过 AgrantSEM 在 AgrantSEM 提供支持的站点上投放数字广告。根据 AgrantSEM 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 AgrantSEM 收集的与您相关的数据相整合。我们利用发送给 AgrantSEM 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. AgrantSEM 隐私政策
      Bidtellect
      我们通过 Bidtellect 在 Bidtellect 提供支持的站点上投放数字广告。根据 Bidtellect 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Bidtellect 收集的与您相关的数据相整合。我们利用发送给 Bidtellect 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Bidtellect 隐私政策
      Bing
      我们通过 Bing 在 Bing 提供支持的站点上投放数字广告。根据 Bing 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Bing 收集的与您相关的数据相整合。我们利用发送给 Bing 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Bing 隐私政策
      G2Crowd
      我们通过 G2Crowd 在 G2Crowd 提供支持的站点上投放数字广告。根据 G2Crowd 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 G2Crowd 收集的与您相关的数据相整合。我们利用发送给 G2Crowd 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. G2Crowd 隐私政策
      NMPI Display
      我们通过 NMPI Display 在 NMPI Display 提供支持的站点上投放数字广告。根据 NMPI Display 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 NMPI Display 收集的与您相关的数据相整合。我们利用发送给 NMPI Display 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. NMPI Display 隐私政策
      VK
      我们通过 VK 在 VK 提供支持的站点上投放数字广告。根据 VK 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 VK 收集的与您相关的数据相整合。我们利用发送给 VK 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. VK 隐私政策
      Adobe Target
      我们通过 Adobe Target 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Adobe Target 隐私政策
      Google Analytics (Advertising)
      我们通过 Google Analytics (Advertising) 在 Google Analytics (Advertising) 提供支持的站点上投放数字广告。根据 Google Analytics (Advertising) 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Google Analytics (Advertising) 收集的与您相关的数据相整合。我们利用发送给 Google Analytics (Advertising) 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Google Analytics (Advertising) 隐私政策
      Trendkite
      我们通过 Trendkite 在 Trendkite 提供支持的站点上投放数字广告。根据 Trendkite 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Trendkite 收集的与您相关的数据相整合。我们利用发送给 Trendkite 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Trendkite 隐私政策
      Hotjar
      我们通过 Hotjar 在 Hotjar 提供支持的站点上投放数字广告。根据 Hotjar 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Hotjar 收集的与您相关的数据相整合。我们利用发送给 Hotjar 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Hotjar 隐私政策
      6 Sense
      我们通过 6 Sense 在 6 Sense 提供支持的站点上投放数字广告。根据 6 Sense 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 6 Sense 收集的与您相关的数据相整合。我们利用发送给 6 Sense 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. 6 Sense 隐私政策
      Terminus
      我们通过 Terminus 在 Terminus 提供支持的站点上投放数字广告。根据 Terminus 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Terminus 收集的与您相关的数据相整合。我们利用发送给 Terminus 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Terminus 隐私政策
      StackAdapt
      我们通过 StackAdapt 在 StackAdapt 提供支持的站点上投放数字广告。根据 StackAdapt 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 StackAdapt 收集的与您相关的数据相整合。我们利用发送给 StackAdapt 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. StackAdapt 隐私政策
      The Trade Desk
      我们通过 The Trade Desk 在 The Trade Desk 提供支持的站点上投放数字广告。根据 The Trade Desk 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 The Trade Desk 收集的与您相关的数据相整合。我们利用发送给 The Trade Desk 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. The Trade Desk 隐私政策
      RollWorks
      We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

      是否确定要简化联机体验?

      我们希望您能够从我们这里获得良好体验。对于上一屏幕中的类别,如果选择“是”,我们将收集并使用您的数据以自定义您的体验并为您构建更好的应用程序。您可以访问我们的“隐私声明”,根据需要更改您的设置。

      个性化您的体验,选择由您来做。

      我们重视隐私权。我们收集的数据可以帮助我们了解您对我们产品的使用情况、您可能感兴趣的信息以及我们可以在哪些方面做出改善以使您与 Autodesk 的沟通更为顺畅。

      我们是否可以收集并使用您的数据,从而为您打造个性化的体验?

      通过管理您在此站点的隐私设置来了解个性化体验的好处,或访问我们的隐私声明详细了解您的可用选项。