AU Class

What's Cool in VRED 2025: A Comprehensive Review of the Past Year's Exciting Developments


Description

This year—2024—was a significant year for VRED software, marked by numerous enhancements and new features. We started with a comprehensive UI redesign, reflecting Autodesk's guidelines for a modern, intuitive experience. And we introduced improvements, such as multiselection property editing and searchable preferences, boosting speed and efficiency. We launched many new features, balancing technical precision and artistic freedom, providing a platform for visually compelling and accurate designs. In line with industry advancements, VRED integrated open standards, including IFC, USD for data exchange, and MaterialX and MDL for seamless material exchange across applications. VRED also introduced OpenVDB for volumetric data processing from simulation applications, and OpenXR for a wider range of hardware support, including hand and marker tracking capabilities. In this presentation, we'll guide you through the changes from 2024 to 2025, providing an overview of the key updates shaping VRED 2025.

Key Learnings

  • Learn about the user interface enhancements introduced in VRED, and their benefits for usability.
  • Learn about the integration of open standards, as well as their impact on data exchange and hardware support.
  • Gain insights into the changes from 2024 to 2025, focusing on the most significant updates that have shaped VRED 2025.

Instructors

  • Pascal Seifert
    Pascal Seifert studied design at the Anhalt University of Applied Sciences from 2002 to 2007. He has been working in automotive-design visualization and virtual reality since 2008, and he has developed a broad range of skills across the whole virtual-product lifecycle process. He possesses expert knowledge in database handling, file conversion, and data preparation, and he presents the results in design or immersive engineering environments. Currently, he is the Technical Product Manager for Autodesk VRED and caretaker for automotive customers around the globe, using his design and visualization experience to help during the digital design phase.
  • Lukas Fäth
    Lukas Fäth joined Autodesk in 2012 with the acquisition of PI-VR. After graduating in digital media, Lukas drove the visual and conceptual development of the VRED high-end virtual prototyping software. He was responsible for quality assurance, support, and consulting, and he is a professional VRED software trainer for the automotive industry and computer-generated-imagery agencies, with a strong artistic knowledge base. He now takes care of product management for Automotive Visualization and XR.
      Transcript

      LUKAS FAETH: Welcome, everybody, to our presentation about What's Cool in VRED 2025. Today we will review together what great functionality has been added to our software and look into how and why it is useful.

      So first, a quick introduction. My name is Lukas. I'm product manager for Automotive Visualization and XR at Autodesk. And I'm joined today by my colleague Pascal, who is the Technical Product Manager for the same field, Automotive Visualization and XR.

      Let's take a quick look first at where VRED is coming from. VRED has traditionally been used in the automotive design, engineering, manufacturing, and marketing process. But over the last years, it has extended its user group and its reach beyond automotive, thanks to its unique capabilities and combined value.

      So for example, what you can see on the screen right now: collaborative real-time experiences in XR. In this case, it's taping on top of a clay model to define new feature lines, or lighting the scene in virtual reality. You can do that alone, or together with your colleagues. But VRED is also known for its high-quality visualization and design review capabilities, as you just saw on that Porsche model, and for almost simulation-grade visualization capabilities in real time but also offline. And that's why it is so highly appreciated by the automotive industry, because you need all this precise visualization there.

      But in this example, you can see a marketing use case with a product configurator, with emotional rendering capabilities, here with volume rendering, with the volume fog around the car, to showcase the car in context and present it in a nice environment. But also in engineering, as you can see here, with functionality analysis, reachability studies, ergonomics, visibility studies, and reflection studies. In short, it allows you to experience the digital prototype before it is built.

      And again, as I said, it's all collaborative, in this case, virtual reality. So two or more people can easily join and collaborate. It's not just automotive. In this case, it's a real-time collaboration between two Inventor users and a VRED user, one in virtual reality, the other ones on the desktop.

      Of course, it's not just the car. It's also the HMI, the digital content on the car's display that can be reviewed. And yeah, that's what I meant in the beginning. This is real-time rendering right out of the software and very high quality. This is what's needed to sign off on static products like cars but also other things, consumer goods and other products.

      But it's also very capable of handling highly complex configurations, as you can see here. This is a full Skoda with all the configurations it potentially has. So this is millions of combinations of material and geometry. And again, it's not just cars. It's also for interior visualization of houses, architectural use cases, and so on and so forth.

      So first of all, let's take a look at the combined value of VRED. Why is it a unique product? Why is it different than other products? The value proposition consists of three main pillars. One is the data pipeline that we quickly talked about already. VRED is capable of handling huge amounts of data. And we're not talking about [INAUDIBLE] models or something, modeled geometry, but CAD data.

      So you can load a lot of CAD files into VRED, assemble them together, put them into a nice environment, light it, shade it. And this can be billions of polygons, millions of parts, and millions of different combinations and variations of the scene itself.

      The second unique aspect is the flexible rendering pipeline. We already saw different scenarios in the video. So VRED can go from very quick and, let's say, less accurate rendering to almost simulation-grade accuracy. And you can pick and choose, based on the same data set, which type of rendering you need for a certain use case. So that's super flexible as well.

      And the third element is collaboration from anywhere at any time. And none of those on its own is, let's say, unique. There are other products that can render nicely or handle big data or collaborate. But there are not many, if any, products that combine all those three in the way that VRED does. And that's why it is so beloved by the automotive industry, because they need all three elements at the same time in the same product to successfully design their products.

      And this makes VRED a great tool for visual communication, digital prototyping, and decision making. And yeah, that's what we will look into a little bit today. I'll give an overview, and Pascal is going to give you a deep dive later in the presentation.

      So let's start with the product portfolio. For everybody who's totally new to VRED, we started with that little intro so you have a bit of an understanding of what the product looks like, or what the results the product creates look like. So what does the product portfolio look like? What products can you choose from?

      We have desktop authoring tools, VRED Design and VRED Professional. Then we have desktop viewing products, or one product, which is called VRED Presenter. It's basically just the Render window. And we have cloud and HPC or server products called VRED Core and VRED Render Node.

      And how do they work? I think it's self-explanatory, but we'll go into a little bit of detail here. So we have the three desktop products. They can, of course, run on capable machines. So this could be a workstation or a cluster. Or this could be a notebook with a reasonably good graphics card. But then, also, we have those server and cloud products. And they are relying on streaming to deliver visual content.

      So you could either use them for automated data processing. Let's say you don't need a visual output. You just want to open a file, perform an action, and save it. You could do that with VRED Core and VRED Render Node automated on the cloud or on the server. But then, also, you could bring in people from any device with any type of expertise by using the streaming capability that's built into VRED Core.
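      Before moving on: to make that automated open-perform-save idea a bit more concrete, here is a minimal sketch of such a headless batch job. It assumes the vrFileIO load and save calls from the VRED Python API, which are typically available inside VRED's Python environment without an explicit import; the folder paths and the processing step are hypothetical placeholders, so please verify the exact calls against the documentation of your VRED release.

          import os

          SOURCE_DIR = "/data/incoming"   # hypothetical folder with scenes to process
          TARGET_DIR = "/data/prepared"   # hypothetical folder for the processed results

          for name in os.listdir(SOURCE_DIR):
              if not name.lower().endswith(".vpb"):
                  continue
              # Open the scene, perform an action, save the result -- the headless
              # pattern described above for VRED Core or VRED Render Node.
              vrFileIO.load(os.path.join(SOURCE_DIR, name))
              # ... place your automated action here (material swap, optimization, ...) ...
              vrFileIO.save(os.path.join(TARGET_DIR, name))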

      So we deploy it on the cloud, and you stream to any device around the world, leveraging the same capabilities as the VRED Professional and VRED Design desktop applications. You can join collaboration sessions. You can have the same visual quality as the desktop products. The only difference is it's computed in the cloud, and it's sent to you via stream. And also, and this is important, it's a great thing, the interface is different. You don't have the complex VRED interface. If you stream, you can just stream that into a web page and make it as simple as needed for the person that eventually wants to use it.

      In this way, we are including everybody, independent of the hardware, so from very capable machines to mobile devices, but also every persona, from visualization experts or technical experts, let's say, to casual users, all the way to the decision makers, and bringing everybody to the same digital table to have a conversation around the same digital prototype.

      What does it look like in a bit of a different visualization? On the very bottom bar, you have product development, where the actual product is created. You have all the experts that create, in this case, this packaging machine. Then in the middle bar, this is the layer that connects everything. It's VRED, where you import the files and have this single data set which can be used across any discipline for different use cases. And you can visualize it in context.

      So in this case, we imported the packaging machine into the whole assembly line or the whole building itself. And this data set then can be accessed from anywhere at any time. And in this way, we are basically connecting the lower and the upper layer of this diagram where the upper layer is the decision making or casual users or colleagues that are nontechnical, and the lower layer are the technical experts that work with VRED but also work with tools like Inventor or Alias in the automotive space.

      Yeah, so the principle is simple. It's one data set, many use cases. So you have one VRED data set. You import a lot of data. In this case, it's a lot of Autodesk tools, but also the CAD logo on the very lower part. We can import almost any CAD file into VRED. So you can assemble it. You bring everything together from Fusion, from Inventor, from Maya, Max, Alias, as I said, or from other products into VRED. And then people can choose what output device, use case, or interface they want to work from; as we said, from mobile or ultrabooks to very capable hardware like desktops and powerwalls, extended reality devices like HMDs, but also handheld devices.

      You can generate assets automatically, or images and animations. And the cool thing, as I already said, is all of that can happen in collaboration in real time. So everybody picks and chooses what suits them best but has the same quality as everybody else. And you have a discussion on the same level and the same visual base.

      So basically, that whole process is split into three parts. There's the creation part, which happens outside of VRED. So again, we have Catia/NX, Creo, Fusion/Inventor, or any other CAD creation product. That is imported into the VRED solution. Then the work happens. So the collaboration or the discussion happens, and there's feedback: error identification, a sign-off, positive feedback, stakeholder input that needs to be addressed, or a design change that somebody requests.

      This feedback can be fed back into the creation process, and then the cycle continues. In this way, we are closing this communication cycle and enhancing the speed of bringing your products to market without wasting time or losing information along the way.

      That was a quick summary of what VRED is. For a lot of you, this might not be news, but we wanted to make sure that people who are new to the product have a basic understanding before we go into the details as well. As you can already see on the slide, I also wanted to talk about what we are working on. This will be a very high-level overview. As I said, Pascal is going to give you the deep dive in the bigger, later part of the presentation.

      So the first group of investments or things that we are working on is the modernization of our product. So we modernize our product, the user experience. We are aligning and modernizing the layouts and the modules of the product. We are cleaning up and unifying functionality. We are decoupling, componentizing, and clearly structuring elements of the software in the background, the technical elements, like the render engines versus the core application layer, and so on and so forth, to improve the performance but also the stability of the product and make it more future-proof.

      And there's a second element that we are working on and investing in: functional digital twins. So everything is around replicating physical prototypes with VRED; it's digital prototyping software. And those prototypes are not only still. VRED has basic animation capabilities, so you can open a door, for example. There are basic capabilities available for that, but we are currently enhancing them, especially with a focus on mechanical behavior for digital prototypes.

      You saw that digital interaction is already possible, like interacting with the screens that we saw in the video on the first slide. But we really want to take this further. And especially with more complex machines, the mechanical behavior and the movements are very important. So for that, we're working on integrating physics, improved animation, but also constraints and much, much more.

      Then one of the most important elements of our product is rendering, so we are investing there as well. On the one hand, and Pascal is going to talk about that in great detail, by implementing or integrating advanced and exchangeable materials to provide full flexibility and accuracy with regard to defining the look and the appearance of your products.

      But also, we are enhancing the render modes and future-proofing the rendering frameworks, on the one hand for GPU ray tracing, making sure it's quick, it's fast for real time, and it uses the latest and greatest technology. But also, we are replacing our rasterization renderer, moving from OpenGL to Vulkan. This will allow us to have more future-proof rendering in the rasterization space, but also to combine rasterization with ray tracing. So we get hybrid rendering and basically the best of both worlds.

      This will not be as accurate as our GPU ray tracing, but it will look better and will be very quick. Or we anticipate that it will be fast in the computation or faster than the traditional ray tracing. So yeah, something very promising that the team works on right now. Also in the space of rendering, we look at tone mapping and color grading and, of course, AI-based rendering enhancements to speed up the rendering engine.

      Another element is streaming and cloud. We talked about that already. In the past, we've been working with the big cloud providers to make sure we optimize our product for automated deployment and scalability, so we can easily use it on their cloud platforms. We are also improving our own streaming capabilities that are in VRED, which you can just use with the product, but also integrating VRED with the leading or existing streaming platform providers. So if you use their platform anyway, you can just select VRED, upload your file, and stream, for example. So it's up to you to choose in that space.

      And then also, the last one is interoperability and collaboration. So we are supporting the latest exchange and collaborative formats, for example for geometry but also for materials, as we already talked about, to exchange data inside, but also beyond, the Autodesk ecosystem.

      And we are also enhancing our own collaborative capabilities inside VRED. You saw some of them in the first videos but also with adjacent tools that are used heavily together, like Alias, for example, to have better collaboration between the different disciplines and people and personas.

      All of that, with a big focus on ease of use. VRED is a specialized product which requires a bit of expertise, I would say. It's not the most technical product or the most complicated product, but it requires some understanding and expertise. We're trying to make it as easy as possible, at least where we touch it. So whenever we modernize our product in different areas, we're looking at how we can make it simpler, so it's easier for people to learn. And another big focus is providing tools to automate and speed up tasks, with and also without the use of AI.

      So as I said, this is a very high-level overview. With that being said, I'll hand over to my colleague Pascal, who's going to deep dive with you into what has been done and what cool features our team has delivered over the past year.

      PASCAL SEIFERT: So thanks, Lukas, for the intro. I'd like to start my presentation by giving you an overview of what we have developed from the 2024 release to the 2025 release, so one year of improving our software and introducing new features. But I don't want to go into too much detail at this point. I just want to show you all the different features we have developed.

      And I highlighted the things that are kind of recurring. Like, you can see New User Experience, a topic we worked on over multiple releases. That's what we call an agile development process, improving the feature over time and delivering value to our customers over time. And I also highlighted some features I'd like to talk about that we think are cool enough to be presented.

      So let's start with what we call the new user experience. So VRED has quite a bit of history, about 20 years of history. And it is and it will continue to be a feature-heavy tool. But complexity doesn't necessarily mean that it's complicated to use.

      And there are a couple of initiatives we can take to keep the product easy to use. For example, we can align our user experience to other products at Autodesk, like Alias, or, in general, align our UI to Autodesk standard guidelines, which makes it easier for users that come from other applications, like Revit, to use our products.

      From a pure styling point of view, we can remove distracting elements or unnecessary style elements to make the software as clean as possible. Also, we introduced modern usability patterns and concepts that didn't exist 20 years ago.

      In addition, we try to align usability patterns across modules inside our application to trigger users' muscle memory. So ideally, a user can use a module we introduce right out of the box, because they know other modules work the same way. And as a last initiative, we can extend our customization possibilities for tailor-made interfaces. Because customers have a very specific need and a very specific use case, they want to create a UI that allows them to work through the application as easily as possible.

      So with VRED 2024, we introduced our new UI. We did a complete redesign, and we also introduced color themes for different personas. We introduced something really cool, which is a docking concept that is kind of unique to our application. We also came up with workspaces that allow customers to configure VRED in a way that shows only the modules that are required for a certain task.

      If you just do data preparation, or if you're just into lighting or materials, you can customize your UI and switch back and forth at any time depending on your use case. And then, of course, there are some minor style things we can align to make it look simple and clean.

      But there's a lot more we did. Within one update, we allowed module pinning. So it is very easy to pin modules that are often used to what we call the quick access bar, so you can close and open them at any time without going through the main menus, for example.

      We simplified our transform editor. That was another initiative. We looked at how small a module can be and still be usable, to not waste screen space when you work with it, because often you have multiple modules open at the same time.

      And with that, we also extended and refined our Onboarding dialog that was introduced with 2024, to align with other Autodesk applications. And the Onboarding dialog, which I will go into in detail a bit later, allows users that are new to VRED, but even those who are experienced, to easily access What's New content, how-to content, or simply open recent files and continue the work you left the day before.

      So here you can see what that Onboarding dialog, which is by now called the Home dialog, looks like. It presents you with the recent files, for example, so you can continue straight away where you left off. But we also use it as a platform to communicate important changes customers should be aware of, for example, or simply link to our documentation or to the What's New content, so people don't have to search through the internet for what is really new. So they are presented right away with what's new and are aware of what might have changed.

      But also, across the whole year, from 2024 to '25, we aligned and redesigned other modules. As you can see at the bottom, the Render Layer module, for example, was redesigned. We didn't introduce that many new features to those modules, but we aligned them with our XD patterns and guidelines.

      We also modernized our VRED online asset library to the Autodesk web UI standards, I would call it, to align with the overall Autodesk experience. From a UI point of view, what we are pretty proud of is what you can see here on the screen: our searchable preferences. It simply allows you to type a few letters, and it suggests what you are looking for.

      And as a user, even if you know what the things you are searching for are called, it still takes you a while to get to the right place by going through the menus. Here, you can simply type, you are directed to where that feature or setting is, and you can immediately enable or disable it. So this shows quite nicely how things can be improved and sped up, because we are now talking about a few seconds or milliseconds to get to the right place in the application, instead of having to know it inside out to find the right place.

      Another initiative in the 2025 release is that we moved important features from our VRED Pro version, the high-end version, into the lower-end Design version, simply to drive customer happiness. And also, it turned out that over the years, use cases changed, and some of the features are now part of, let's say, the VRED Design environment or design process.

      So we were able to move a couple of those, like the additional Scenegraph nodes, the annotation module, or denoising, a very important topic, where there's value in it for everyone.

      So an important part of rendering is, of course, materials. Let's talk about that a little bit. VRED has a variety of different materials you can actually use. There are our internal materials, what we call the Truelight materials. There are also Phong and Chunk materials that are not used that much anymore but are still used internally. Those materials are designed to be set up as quickly and easily as possible. So you don't need much knowledge in shader creation or shader programming or performance optimization and stuff like this. They work right out of the box.

      But there are also some materials that come from external resources, like OCS, Office Color Science, which is a kind of measured or scanned material. And also X-Rite AxF measurements are very important. So there's a whole ecosystem for scanning or measuring materials as accurately as possible and bringing them into your 3D applications.

      Those two are more for the engineering side of things. But of course, we also support some external materials for artists, like Substance, which I'll go into a little bit deeper, or MDL and MaterialX, which are common standards in the industry for exchanging materials.

      So for Substance, we recently added the Metallic Roughness Coated workflow, which is suitable for, let's say, coated paint or basically everything that has a clear coat on top of a structure, like painted wood, for example.

      As I said, there are artists, and there are more engineering-minded users. And what we have introduced in 2024 and 2025 is translucent X-Rite AxF materials. Here you can see some examples. Because there is a growing demand for physically accurate materials in digital twins, which also includes translucent materials.

      Those materials are particularly challenging for a user or an artist to set up, because there are some properties you simply do not know without measuring. And even if you get some measurements out of that, you do not know how to set them up in a standard material.

      So sometimes they are not available in standard materials. Measured materials are very critical for simulation use cases. And there it can be really challenging, because if you set that up as an artist, there's a lot of room for interpretation. And this is not what you want.

      AxF translucent materials, for use cases like you can see here, eliminate the guesswork or wrong interpretation in such simulation use cases. Because we already know a lot about the light itself, the information, color spectrum, intensity values, et cetera, but translucent materials were always a little bit of guesswork. And those materials also store some very hard-to-find properties, like the spectrum, how much light is absorbed in that material, and things like that.

      And especially with release 2025.1, where we now support GPU ray tracing, those simulation tasks in a measured environment became highly efficient to render, because now they also benefit from the roughly 10-times-faster GPU power compared to CPU rendering.

      But as I said, artists are also important in our application. And there are different tools from Adobe, for example, that we support, and I'd like to show you some of them. They all have one thing in common: they can all export the native Adobe material format, which is called SBSAR. And this is the connection point between the Adobe tools and our VRED application.

      So there's, for example, Painter. In Painter, you can bring in your 3D model and paint on top of it, a little bit like a Photoshop approach, where you can detail things on top of each other or layer things on top of each other and create realistic textures for your 3D model. After doing that, you can export your material as an SBSAR file, bring it into VRED, and you're good to go with super realistic, more or less hand-painted textures.

      But you can also export that as a USD mesh. USD, which I will talk about later, is a file format for exchanging data across applications, mainly DCC applications, but you can also bring that into VRED. It stores the material as a USD preview surface, and you can bring that right into VRED without doing anything.

      But there's also a tool called 3D Sampler from Adobe, which is an AI-powered texture processing tool. You can easily take an image of a material you like with your mobile phone or a more professional camera and then run certain processes on top of it, like removing the lighting, for example, or making it tileable, and so on and so forth. And you can create materials from a texture that are tileable and usable in any 3D application.

      And here, as well, you can export in the SBSAR format and bring that into VRED. The image or the animation you see at the bottom was created in VRED with the material you see on top, which was created in Sampler. So it's a very easy and quick solution to capture and tile any material you find along the way.

      But there are also some more applications that Adobe offers in this space. And we have actually supported Adobe Substance 3D Designer for a while, which is for creating parametric materials with a node editor. And the smart thing is that they are still parametric when you bring them into VRED. So as a VRED user, you can still change properties like the tiling, the color, or the glossiness. So it's a really artistic approach, and you can create, let's say, dynamic parametric materials that are still accessible as a visualization user.

      Again, this works via the SBSAR file format. In VRED, we have the Substance engine integrated, which then creates textures out of that material and automatically maps them into the right channels of the material. But there are also some initiatives on open standard materials, like MaterialX, between Autodesk and Adobe.

      There's something called OpenPBR, which is where Autodesk and Adobe try to align on certain standards. And that's something we are aiming to support in the future, because it kind of comes out of the box for us by supporting MaterialX.

      And yeah, that's another format or shader format or material description we support in VRED. So how does it work? Of course, there's the application VRED itself. We have our own shaders that are suitable for physically based materials like plastic, chrome, or car paint, but we also have some nonphysical materials, like the X-Ray material or toon shading, nonphotorealistic materials basically.

      But with the support or the introduction of MaterialX and MDL, we automatically allow things like this. So any description of a material, whether it's Autodesk Standard Surface, the USD preview surface, Pixar's description, or OpenPBR, the alignment of Autodesk and Adobe on a standard, we support right out of the box, either by going through MaterialX or, in some cases, directly via MDL.

      What is not supported for MaterialX or MDL right now is nonphotorealistic materials, as I said, like toon shading, cel shading, line rendering, and those kinds of things. There are initiatives, but at this point, we do not support that.

      Let's switch topic from materials to file import/export behaviors in VRED, because a lot happened there as well in the last year. So we introduced new formats you can import. One is IFC, the Industry Foundation Classes, basically the common exchange format in the AEC industry for handling data from, let's say, Revit to Bentley and so on and so forth.

      And by importing those data sets, we allow users that are not automotive-focused but more AEC-focused to also visualize their data, getting the same workflows or use cases that Lukas presented in the beginning, like VR, ray tracing, or the [INAUDIBLE] streaming, but for a different industry. But there are also some automotive customers who use that format for factory visualization, for example, or for extending their automotive content with, let's say, the environment the cars are created in, a factory, as I said.

      Then also, we can now import 3MF as a format, which is, let's say, the new STL, the newer format in the additive manufacturing space for 3D printing, because it can capture much more information, like texture support, and hand that over to 3D printers. So for 3D printers that support full color, where you can print in color, you can now also print your textured 3D model physically.

      But also, on the export side, we worked on a couple of things. So we can now export that 3MF format for those 3D printers but also JT as a new format for the more engineering side of things. But you can also export your scene or your model as a USD.

      And therefore, we also introduced some new options, a dialog that helps you strip your very complex VRED scene down to certain features, or remove features that an agency, for example, should not have, or that a target use case does not support, specific VRED features like NURBS, for example. You can convert NURBS, you can remove the lights, the cameras, whatever, and reduce your content to what you think is good to hand over.

      And also, USD. Many of you know it already. It's the format that is heavily discussed right now as an exchange format between different applications and across companies. And to align with the Autodesk initiatives around those open standards, we, of course, want to follow up on that and allow the import and export of USD from your VRED scene, supporting also the variations of USD: USDA, USDC, USDZ.

      But keep in mind this is an import/export. It's not a real-time connection, so it's not using a USD runtime. This is really about exchanging files via import and export, not real time. On the USD side, there are a lot of things supported already. I highlighted the ones in this slide that are currently not supported. It could be that the USD format itself doesn't support very specific features VRED has, for example. Or there is, let's say, room for improvement for us as well in some of those things.

      But mainly, the data, the meshes are transferred, and the materials are transferred via USD preview surface; our materials are converted into USD preview surface. The animations we support as best we can. Cameras can be imported and exported. Lights, except for some very specific light types we have only in VRED, and UVs can be transferred as well, like in the Adobe use case I showed earlier for painting the model.

      What our customers highly appreciate is the USDZ format, because it allows them to create easy AR experiences on Apple devices. As you can see here, you can simply take your VRED 3D model, export it via USDZ to OneDrive, access OneDrive with a mobile phone, and you get an AR experience right out of the box, without any special hardware or knowledge. And that's cool, too, especially for those consumer products, to bring them simply into context.

      What I personally liked a lot is volume rendering. Now we're switching over to the rendering side of things. And volume rendering, like Lukas said in the beginning, enhances the realism of the scene by incorporating volumetric effects like fog, dust, god rays, clouds, smoke, or fire; so things that are not necessarily product-related, but they can be if you think about, I don't know, a steam machine, probably not, but a kitchen appliance, for example. If you have boiling water, or an oven, for example, or a barbecue, you can add realism to your product design as well.

      The model behind it is a physically based one, so of course, it works in OpenGL. But I think it really looks and behaves realistically when you switch to ray tracing, because there we can really pass through the volume, step through the volume, and evaluate how the light scatters and behaves inside it.

      There are actually two different types of volumes. You can use something that's called a scatter volume. This is more for creating homogeneous and heterogeneous atmospheric effects, let's say inside a defined space. Like here, in that room, you can add dust to your rendering and see those nice little light beams or god rays, and it simply adds realism to your scene instead of having a clean product visualization.

      Those scatter volumes are easy to set up. You simply create them. They come with a material with certain properties you can still affect. And you can change the density but also the falloff to create ground fog or height fog. So you can really place it in your scene, move it to where you'd like to have it, and create the effect you want.

      But everything that gets a bit more complicated, like you see in this scenario with a smoke machine in the background, requires simulation. We added support for VDB volume rendering, so we support OpenVDB and NanoVDB as formats. Those formats store the volumetric data of a complex simulation that usually comes from another application, and they store it in a voxel grid. So again, VRED doesn't simulate anything at this point, so there is no possibility of manipulating it besides the appearance, like changing the density or the quality or the color or things like that.

      How does it work? A VDB is basically, let's imagine it as a box of voxels, so a grid inside. And each of those voxels has a certain value, and we fill that value with material information. And by playing one file after another, we create the impression of an animation, like you have seen on the slide before.

      Fast playback requires inlining those files. A single file is not big, but over an animation, they can become big. It requires inlining so we can upload everything to the graphics card as a whole, and then you can play through it very fast. But of course, it increases the file size, the VRED VPB file size. Alternatively, if you are aiming for an offline rendering, you can load the files one after another, frame by frame. That keeps the file small, but you can't play it in real time, because you load and unload frame by frame.
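      As a purely conceptual sketch of that trade-off, here is what the two strategies look like in plain Python. None of these names are VRED API calls; the file paths and the render_frame callback are hypothetical placeholders that only illustrate inlined playback versus frame-by-frame loading.

          import glob

          frame_files = sorted(glob.glob("smoke_sim/frame_*.vdb"))  # hypothetical VDB sequence

          # Inlined: keep every frame in memory (and in the scene file), so playback
          # is fast, but file size and GPU memory grow with the sequence length.
          inlined_frames = [open(f, "rb").read() for f in frame_files]

          # Frame by frame: load, render, unload -- the scene file stays small, which
          # is fine for offline rendering, but too slow for real-time playback.
          def render_sequence_offline(render_frame):
              for f in frame_files:
                  data = open(f, "rb").read()
                  render_frame(data)
                  del data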

      Here I'd like to show you a nice use case where we are combining simulation data from CFD simulations (what you can see on the car itself, that red and blue, is a velocity simulation of that car) with something that comes more from the media and entertainment space, which is that smoke moving over the car. You might know that from a wind tunnel. And this allows us to communicate to, let's say, people that are not so much into the numbers from such a simulation, so they can actually see what effect it has.

      And this is, by the way, real time. So that's not offline-captured footage. This is how it runs in VRED on certain hardware. And it really helps to present your data coming from a simulation to stakeholders, to better understand what's going on or how, in this case, a car behaves.

      Here's another example where you can see how realistically it behaves in ray tracing. So that light moves through that smoke or dry ice and really scatters inside that volume. But you can also see the indirect light from the walls reflected in the material.

      OK, the next topic I'd like to talk about is a bit of AI. So we support DLSS, Deep Learning Super Sampling, in VRED. And by updating to the latest version, we are close to a noise-free image at a high frame rate. As you can see at the bottom, it gives us a really noise-free image with a higher frame rate.

      So that scene coming from Inventor is quite heavy. It does not need any data preparation besides assigning a few materials to it and renders in real time in ray tracing-- so no data prep, no shadow calculation, nothing. And it looks super cool and noise-free in ray tracing.

      Another topic we worked on, and Lukas talked about that as well, is tone mapping. So why is tone mapping necessary at all? VRED in the background renders at a high dynamic range, so we have all the information on light and colors, but it is simply not possible to display it all on a screen, on an LDR screen, an sRGB screen.

      So what do you usually do? You try to squeeze that high dynamic range into a lower dynamic range. And the question is what to do with values that do not exist there. There are different approaches. One is what you can see here, called ACES. It shifts colors from red into orange or yellow or white, which can be wanted in that case, because it creates the impression or illusion of light.
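      As a small illustration of what such a per-channel curve does, here is the widely published Narkowicz approximation of the ACES filmic curve in Python; this is a generic example, not VRED's actual implementation. Because the curve compresses each RGB channel independently, a bright saturated red ends up with a much smaller red-to-green ratio, which is exactly the hue shift toward orange described here.

          def aces_filmic_approx(x: float) -> float:
              # Narkowicz (2015) single-channel approximation of the ACES filmic curve.
              a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
              y = (x * (a * x + b)) / (x * (c * x + d) + e)
              return min(max(y, 0.0), 1.0)

          # A bright, saturated HDR red: the channels are compressed at different rates,
          # so the tone-mapped LDR result drifts toward orange.
          hdr_red = (4.0, 0.5, 0.1)
          ldr_red = tuple(aces_filmic_approx(c) for c in hdr_red)
          print(ldr_red)  # roughly (0.97, 0.62, 0.13)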

      But there are also some use cases where you don't want that. So if you're not in that cinematic, filmic environment, it can be distracting. For example, in the color and trim field, as you can see in the image in the center, ACES is shifting that red color into an orange or yellow, and it's not possible to judge what the real color behind it is.

      Therefore, we have introduced AgX, which is at the very bottom. The AgX tone mapper maintains the red color itself and tries to make the adjustment through the brightness of the material. So red stays red, but it becomes a brighter or darker red, compared to other tone mappers like ACES that really shift the color.

      Ag stands for silver, X for any number of compounds, as on a classical film roll; this is where the name comes from. And as you can see here on the slide, on the right, AgX provides a more homogeneous color progression compared to ACES, where you really have peaks. And here's a good example where you can see that certain colors like red or blue shift into a different color: blue shifts into a more purple tone, red more into orange-yellow.

      And again, this can be wanted in some use cases, in the cinematic space, for example. But for the color and trim departments, different tone mappers are more valuable.

      The next topic I'd like to talk about is OpenXR. VRED actually has a long history in virtual reality and real-time visualization on devices like powerwalls, CAVEs, but also HMDs, often with a focus on ergonomic evaluation or ergonomic studies, reachability, for example.

      But with the, let's say, proliferation of HMDs, or HMDs becoming cheaper and more user-friendly, it totally makes sense for us to support a wide variety of those devices for modern use cases.

      In the past, it was kind of hard for us to support all the different vendors that came to the market, including the different input devices they had, such as controllers, keyboards, pens, and gloves. There are by now so many different input devices for VR, and they all required a native implementation from our side. And it was not possible to maintain that.

      And also, we had some concerns, because in the beginning it was not clear if a vendor would still exist in two years, but we had to implement support so our customers could try it out and decide if it was good for them or not. And it could be that the vendor doesn't exist anymore in two years.

      And for exactly those reasons, something called OpenXR was introduced, an initiative or an XR runtime created by The Khronos Group and many vendors that joined, like Meta, Valve, UltraLeap, et cetera. And the idea is that there is an application interface, a layer in between the hardware devices and the software companies that want to support them.

      So this makes things, of course, much easier, because the vendors can contribute to it by enriching the OpenXR interface, sometimes also by adding extensions to it, for hand tracking or video passthrough, for example. And consumers like VRED can simply take it, support it in their application, and benefit from it. So it makes things easier.

      It is supported by nearly all vendors. It has way more functionality than OpenVR had before. And as I said, when we, for example, implement something like hand tracking, it will work for basically all vendors supporting that extension. So very good. It also helps us to maintain our code base better.

      There are not so many different native implementations, and it also makes things easier from a legal point of view, because we don't have to ship anything that is vendor-specific.

      The last thing I'd like to talk about, which we have worked on and try to improve with every release, is our VRED library and web shop. The VRED Library is our central hub where customers can download additional VRED content. For now, that's environments, 3D environments, but long-term, it could also be that we provide materials there, get rid of offline installers, for example, and provide everything in the cloud.

      So it has existed for a while, but with 2024, we added more environments, 3D environments like the Pariser Platz in Berlin. Then, in 2025, we added new HDR environment assets from a partner of ours. Also, another partner, CGI.Background, joined. And I think we by now offer about 20 high-resolution HDRs plus some 3D environments as well. And we will continue to add more and more over time.

      Our VRED library also got an update with 2025. So we refreshed the UI, aligning to the Autodesk, let's say, cloud or web standards basically. We also added several usability improvements to it, where users can search for certain content or download multiple environments at the same time and things like that.

      So yeah, as I said, hopefully we can grow our efforts and what we offer there in the future.

      And then there's also our web shop. The web shop is kind of a web browser inside VRED that guides you directly to our partners, like I mentioned before, Ultraspheres or CGI.Background, from where you can purchase content. So you pay for it and then can download it. But the smart thing here, instead of going via your web browser, is that the assets that are downloaded are directly assigned in VRED to the right modules.

      So the HDRs, for example, are assigned to our Environment Dome concept, and the backplates are assigned to our sceneplates or backgrounds, so you can use them straight away. So instead of downloading, creating a material, opening the material, and placing it, it happens in one go. And this is a very, very fast way to access or add content in VRED.

      And with that being said, that was my last slide. Thanks for your time and for watching the video. I hope you like the features we developed as much as we do. And hopefully, we see each other next year.

      Downloads
