
NVIDIA Omniverse: Modern Collaborative Workflows


Description

Imagine working across teams, regardless of location, in an environment where your Autodesk software systems are connected in real time with NVIDIA's Omniverse. All changes made to the individual models from Revit software, 3ds Max software, Maya software, Rhino, Grasshopper, and Unreal Engine 4 are pushed to Omniverse Create's aggregated model in real time. Ideas are explored at scale in full fidelity with the fast and powerful Omniverse RTX renderer. Issues are identified and resolved in real time, without the need for back-and-forth emails or offline communication tools. Your project “files” are transformed into a connected digital “destination.” Now let's stop imagining. It's real and happening today. Seeing is believing.

Key Learnings

  • Learn how to connect multiple professional applications in real time with Omniverse Nucleus
  • Learn how to design collaboratively and explore USD variants with real-time results in an aggregated Omniverse stage
  • Learn how to render in real-time or path-traced modes and adjust render settings with the Omniverse RTX renderer
  • Learn about USD layer-based multi-application workflows

Speakers

  • Dave Tyner
    Hi! I've been working to expand the world of 3D things my entire career. My background was in the AEC/Industrial space before moving on to Autodesk and now NVIDIA. My goal is to enable customers to be as successful as they're willing to be.
  • Robert Cervellione
    Robert Cervellione is a registered architect in New York and an AEC workflow specialist for NVIDIA's Omniverse platform. Robert regularly engages in professional and academic research that explores BIM, computational systems, design automation, and advanced fabrication. His work focuses on the intersection of design and technology, exploring advanced workflows across the AEC space. Additionally, he is an Adjunct Assistant Professor in Pratt Institute's Graduate Architecture and Urban Design Department, teaching seminars in computational design, BIM, additive construction processes, advanced fabrication, and other research-related agendas.
Transcript

DAVE TYNER: Hi, everyone. Thanks for joining our Autodesk University class on Modern Collaborative Workflows with NVIDIA Omniverse. My name is Dave Tyner. I'm a Technical Evangelist here with NVIDIA. And I'm located in Colorado.

JAY AXE: Hi there. My name is Jay Axe and I'm also at NVIDIA. I'm a Technical Product Manager and based in California.

ROBERT CERVELLIONE: And I'm Robert Cervellione. I'm an AEC Workflow Specialist here at NVIDIA and I'm located in New York.

DAVE TYNER: So Omniverse is really about collaboration or connection, whether it be applications or users. Regardless of location, and almost regardless of the size of the data, we all need to be collaborating all the time. And for that, we need these connected worlds.

So 3D workflows are obviously essential for every industry, from AEC to media and entertainment to product design. Everything you see here is consuming 3D data, and all of it needs to be collaborated on. No decision is made in a vacuum.

But it's an extremely complex team sport, right? So we have large teams with all these different skill sets, all these different tools that they use located all over the world, right? How possibly can those people collaborate in real time, let alone the applications that they use, right?

Some users are comfortable with Max, another user more comfortable with Maya. Someone is more comfortable with Revit, someone's more comfortable with Grasshopper or Rhino. So how do we get all those working together in real time?

And then not just that part of it being complicated but also the output, the data, the scalability of that data is getting huge. And the more room we create, the more of that pipe gets stuffed. So these are some of the challenges we face today and that's where Omniverse comes in.

And maybe, Robert, you can take us through this slide a little bit?

ROBERT CERVELLIONE: Yeah, absolutely. So there's many stakeholders in an AEC project. And there are many applications that each of those stakeholders is going to want to utilize. And what's great about Omniverse and the Nucleus server is that each particular set of users gets the access, the software and the workflow that best fits what it is they need to convey and what it is they need to consume.

So as a designer, you're going to really kind of focus on generating those beautiful designs, generating that content, creating those beautiful worlds. The vis team is going to take that and sort of bring the realism up to the next level and all of those beautiful details that just make the scene and the project kind of pop. While that whole thing is going on, you have your project leaders who are really kind of more focused on the larger array of things.

They want an overview of what's happening across all of the teams. They need to see not only what's happening, but most importantly, Omniverse is going to give them access to the current data. So they're not going to be looking at stale files. They're not going to be looking at old designs.

They're always going to have access to the most current pieces of information. And whether that's on their iPad or in a VR device or on the web, they can get it, they can comment on it and they can access it. And then most importantly, ownership.

Your clients, the people who you're making these amazing projects for, they also want a window into that design world. They want to be able to engage. They want to be able to quickly see stuff. They don't want to do this with a lot of overhead. They want to pull up an iPad or look at it on their phone or just pull up a web page.

And through this beautiful ecosystem, the Nucleus server is able to bring everybody together and allow everybody to access the data they need when they want it.

DAVE TYNER: Awesome, thanks. Jay, anything to add on that?

JAY AXE: Yeah, and for visualization, this is excellent, because as Robert describes, having up-to-the-minute Revit data or Rhino data getting pushed to the Nucleus server means the visualization team is always looking at the latest content. We don't have to redo scenes. We're working with the scenes.

We basically find things that can be enhanced based off of the proper-- the acceptable amount of detail for a given output, whether it's in Revit or something that is an end visualization in 3DS Max or Unreal Engine or even rendered in the RTX renderer.

DAVE TYNER: Yeah, that's a really great point. So what we used to do is wait for someone to have the project done enough that we could jump in and start creating visualizations around it. But now, we can tap into the project at any point in its journey and really start getting an idea and feel for what we're going to do there. So thanks, Jay, that's a great point. Thanks, Robert.

All right, so today's focus is really going to be on Omniverse for AEC. All three of us have AEC backgrounds and so the project we created, we really want it to be around that particular industry. All right, so a little bit overview of the project.

Robert, you want to tell us what's going on here?

ROBERT CERVELLIONE: So one of the things we wanted to do is incorporate some real-life data. So you're looking at three small Revit projects, but they're full Revit projects taken fully through construction. So the models here are LOD 300 construction-level models detailed out with all of the design options and variables and things like that.

So it's not just visualization data or an export of the data. It's the full fidelity construction-level model.

DAVE TYNER: Awesome. And Jay, what about 3DS Max? What role does 3DS Max play in this project?

JAY AXE: Thanks, Dave. Yes, 3DS Max is handling the site and the props and assets-- everything you see here that helps enhance the visualization within the design process. So this is all happening in parallel.

So we're looking at design data coming from Revit, files from Grasshopper, and all of it visualized in Omniverse. So 3DS Max is acting as an enhancement and visualization tool.

DAVE TYNER: Awesome.

ROBERT CERVELLIONE: On the Rhino side, it was a little bit simpler for this project, but it still gets the point across. So on the left is just a parametrically driven chandelier that you'll see up here in one of the projects, inside one of the Brownstones, as we're calling them. And on the right is actually a fabrication-level Rhino model of one of the interior stairs of one of the projects.

This goes all the way down to the nuts and bolts in fabrication-level detail. Even the welds were modeled. And one of the interesting things is that that level of data typically doesn't make it into the full array of the scene because it's just too heavy. And what you'll see here is we can very easily incorporate that into the final output.

DAVE TYNER: And then finally, we're going to connect to Unreal Engine. We're going to pull some assets across and use them in one of the elements of the stage we're creating, as well as using Unreal Engine as our material editor for some of the work that we're going to be doing. And so you're going to see that here a little later.

All right, so we talk about collaboration. Collaboration means a lot of things. Multi-user, multi-application, single-user multi-application. And so in this case, we're going to kind of break it up. We're going to look at a couple of examples of single-user multi-application workflows where Create fits into that loop, where Omniverse fits into that loop.

And first up is going to be Revit. And so, Robert, can you lead us through what's going on in this video a little bit?

ROBERT CERVELLIONE: Yeah, so what's interesting is that-- well, Omniverse is amazing at multi-user collaboration, but it also will just enhance your day-to-day workflow. So here I have Create, in Omniverse, looking at the same scene. And obviously, Revit on the right.

So I'm just doing typical day-to-day workflow kinds of tasks. I had to modify the kitchen island. I was adjusting-- I was removing a cabinet. And you can see that it's just updating in real time on the Omniverse server.

So I'm using it as another view into the design world for me. Because one of the things that's really nice is that Revit does what Revit does well. But in terms of fidelity and image, that's not its forte. So having this much better window into a more realistic or rendered design world is great while I'm just modeling in real time.

So here, I didn't like the arch opening, so I was like, OK, let's just change that back to square. And again, I can see immediately what happens to the light on the floor. And I notice that light might be a little bit too high, so I'm going to adjust the height and things like that.

So using it as a tool that enhances your day-to-day workflows so you can make better design decisions is great.

DAVE TYNER: And I think another really important part of this is that all the changes you're making are stored in Nucleus live, right? So whereas traditionally, maybe you had to do an export each time, you don't have to worry about that step anymore.

All the changes you make are just fed down into Create through the Nucleus server. And they're stored on that Nucleus server, so the last stage that you open is the last state of that design. And you can capture multiple design states that way really flexibly and easily.
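To make that concrete, here is a minimal sketch of opening the same live stage from Nucleus with USD's Python API. The server name and project path are hypothetical, and it assumes the Omniverse Nucleus client/USD resolver is installed so that omniverse:// URLs resolve.

```python
# Minimal sketch: opening the live USD stage that a connector writes to on
# Nucleus. Assumes the Omniverse client libraries / Nucleus USD resolver are
# installed; the server and project paths below are hypothetical.
from pxr import Usd, UsdGeom

STAGE_URL = "omniverse://my-nucleus-server/Projects/Brownstones/Building_A.usd"

stage = Usd.Stage.Open(STAGE_URL)  # resolves through the Nucleus connection
print("Up axis:", UsdGeom.GetStageUpAxis(stage))

# Walk the prims the connector published; any change saved in Revit shows up
# here the next time the stage is refreshed or live-synced.
for prim in stage.Traverse():
    if prim.GetTypeName() == "Mesh":
        print(prim.GetPath())
```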

ROBERT CERVELLIONE: Yeah, and to bring you--

DAVE TYNER: I really like it.

ROBERT CERVELLIONE: Oh, sorry. I was just going to say--

DAVE TYNER: Go ahead.

ROBERT CERVELLIONE: To bring back the conversation about multiple stakeholders, too-- a lot of times, the project managers aren't in the Revit file. So all of this that I'm doing is streaming back to that Nucleus server, like you said. So if one of the project managers just wanted to see the current state of the Revit file, they wouldn't need to know how to use Revit-- they wouldn't have to open Revit at all.

They can just open the view on their iPad or on the web and just see the current state of the model, see the current state of the design. Because everything I'm doing in Revit is just streaming directly back to the server.

DAVE TYNER: Perfect. So now, we're going to take this and go a level deeper. So we saw Robert in Revit modeling the Brownstone building. And now we're going to be inside of the Brownstone building where he's going to bring in his Grasshopper model and start running through some design variations with Jay. And so those guys are going to talk about that-- about what you see going on right now. Take it away, Robert.

ROBERT CERVELLIONE: So, yeah, this is one of my favorite features. I understand that those of you who are into Grasshopper are looking at this and thinking, oh, it's a very simple example. But the takeaway here is to notice that there's no baking whatsoever. I'm actually streaming data directly out of Grasshopper into Nucleus.

And what's amazing about that is that Grasshopper chandelier is sitting in the Revit model, which is sitting actually in a larger context, if we ever zoomed out. And I have total and complete control. And not only that, what you'll notice is Jay is able to put materials directly on top of my stream.

JAY AXE: Yeah, this is excellent. Because in the viewport on the right, this is my viewport. I'm working out of California and Robert's in New York. So we're doing this all via the Nucleus server and it's happening in real time.

So it's excellent for visualizing the lights and shadows and the amount of intensity from each light source to really get an understanding of the space as it's being designed. There's no disconnect here. This is happening live.
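As a rough illustration of how a material opinion can sit on top of streamed geometry without touching the Grasshopper source, here is a minimal USD Python sketch. The stage URL, prim paths, and material names are hypothetical, and it assumes the chandelier prims already exist on the stage.

```python
# Minimal sketch: layering a material onto streamed geometry. The binding
# lives in this user's edits, so the Grasshopper stream keeps updating the
# geometry underneath it. All names/paths are hypothetical.
from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.Open("omniverse://my-nucleus-server/Projects/Brownstones/Interior.usd")

# Define a simple preview material.
material = UsdShade.Material.Define(stage, "/World/Looks/BrassMaterial")
shader = UsdShade.Shader.Define(stage, "/World/Looks/BrassMaterial/Preview")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set((0.8, 0.6, 0.2))
shader.CreateInput("metallic", Sdf.ValueTypeNames.Float).Set(1.0)
material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")

# Bind it to the streamed chandelier prim (assumed to exist).
chandelier = stage.GetPrimAtPath("/World/Chandelier")
UsdShade.MaterialBindingAPI.Apply(chandelier).Bind(material)
stage.GetRootLayer().Save()
```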

ROBERT CERVELLIONE: Yeah, and--

DAVE TYNER: Exactly.

ROBERT CERVELLIONE: --whether it's a tiny little chandelier or a 60-story amazing Grasshopper facade, the point is the same that you can now have a much better interactive workflow between your parametric design model and your actual fully embedded design environment.

DAVE TYNER: Precisely. Better decisions faster, better decisions faster. And we iterate on that. This is really exciting. All right, so now we're going to transition into more of the visualization space here.

And Jay Axe is going to walk us through what's going on-- what you see going on the screen here. Take it away.

JAY AXE: OK, thanks, Dave. What you see here is really exciting because we're working with many different USD layers. And this is all being stored on the Nucleus server. So in the top left, I'm taking Revit data from Nucleus imported into 3DS Max. And we're looking at the level of detail of the door, which is perfect for Revit.

For visualization, we'd like to add in a door with higher fidelity visuals. So we're using 3DS Max connected to that Nucleus server to add in a high-fidelity door. And you can see that with everything in a live sync connection to Nucleus, we're able to look at individual assets.

The door in this case, in 3DS Max, the connection to Revit in 3DS Max via the Nucleus server. As well as visualizing with the RTX renderer in the top right. And you can see, I'm opening and closing the door and everything is connected via different USD layers.

And in the bottom right, we're visualizing the materials-- or we're using the vMaterials from the NVIDIA material library to swap material choices and paint colors in real time, so that we can see that in isolation in the bottom right, looking at the prop. And also in the top right, in the context of the rest of the model.

So we have the site, the trees, the entourage. All of this connected in Omniverse and also the Revit data for the building. So the idea is to connect them and to access different parts of the model for whatever is interesting or the most important part for you to visualize. Whether it's how the material properties look, the lighting or even just the location of where that door should be in space.

So a lot of good stuff going on here.
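The layer structure being described can be sketched in a few lines of USD Python. This is only an illustration of the idea-- the file names are hypothetical, not the project's actual layers.

```python
# Minimal sketch: the Revit export and the higher-fidelity door live in
# separate USD layers, and the aggregated stage simply sublayers them.
from pxr import Usd

stage = Usd.Stage.CreateNew("aggregate_stage.usd")
root = stage.GetRootLayer()

# Earlier sublayers are stronger, so the visualization door's opinions win
# over the Revit placeholder where both describe the same prims.
root.subLayerPaths.append("door_highfidelity_from_3dsmax.usd")
root.subLayerPaths.append("site_and_props_from_3dsmax.usd")
root.subLayerPaths.append("building_from_revit.usd")

stage.Save()
```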

DAVE TYNER: Yeah. Thanks, Jay. That's awesome. And my favorite part about this video is how you have it divvied up. So we have a single door down in 3DS Max-- well, we missed it. But maybe you can talk about what's going on in the screen right here?

JAY AXE: Sure. And with all of the USD layers connected, whether the content's coming from multiple DCC sources or we're working just in Omniverse and visualizing with the RTX renderer, we're able to move around the Omniverse world with different cameras. And we can use waypoints in the View application so that these are all connected and up to date.

We heard Robert comment earlier about, there's no stale data. Meaning we're always getting the most up-to-date relevant content from any of the source applications.

ROBERT CERVELLIONE: And to pick up on the relevant content part, in no way would I want this high-poly model in my Revit model, because my model is going to be designed for construction. So while I'm going to describe this door in elevation and in section and on schedules, embedding this high-poly model, which does accurately represent what the final is going to look like, into my Revit model doesn't make sense.

But using the connector and asset swap in Omniverse and Nucleus, I'm able to have the best of both worlds. So when it ends up in the review environment for clients and things like that, it's the full-fidelity model, which is the appropriate place for that to be. And I can keep my Revit model nice and light and clean, set up for its purpose of documentation and construction, while still being able to fully visualize what the true high-poly, fully detailed version of these things looks like.
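One way the asset-swap idea can be expressed in USD is with a payload, so the heavy fabrication model only loads in the review stage. This is a hedged sketch, not the project's actual setup; the paths are hypothetical and the omniverse:// URL assumes the Nucleus resolver is available.

```python
# Minimal sketch: the review stage pulls in the heavy fabrication-level door
# as a payload, so documentation stages never carry it and viewers can choose
# whether to load it. Paths are hypothetical.
from pxr import Usd

stage = Usd.Stage.Open("omniverse://my-nucleus-server/Projects/Brownstones/Review.usd")

door = stage.DefinePrim("/World/Building_A/Entry/Door_HighPoly", "Xform")
door.GetPayloads().AddPayload("door_fabrication_model.usd")
stage.GetRootLayer().Save()

# Open with payloads unloaded when you only need the light documentation view.
light_stage = Usd.Stage.Open(
    "omniverse://my-nucleus-server/Projects/Brownstones/Review.usd",
    Usd.Stage.LoadNone,
)
```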

DAVE TYNER: Perfect. Thanks, Robert. That's a really important point. So now, we can see that the scene is starting to get a little heavy, right? There's lots of stuff in it. And generally, in a traditional application, if I wanted to, say, create a camera path and fly through it for some visualization deliverable, that would be really heavy and difficult to do. And performance might start lagging, because right now, over here in this Create stage, I think we're at something like 16 gigabytes' worth of data and a huge polygon, or triangle, count.

Just very resource-intensive scenes. But using the connector from 3DS Max into Create, I'm able to create a very light version of just what I need, which is just this camera path. So I've got a camera and a path, and I'm working live.

And you can see, I'm scrubbing the timeline in 3DS Max and it's updating in Create. I can see where I'm intersecting objects or I have weird angles happening. And I can make adjustments in real time as I work and make those in 3DS Max and just know that that data is going to translate across into Create.

And in doing so, I'm going to be able to work faster and I'm going to be able to work collaboratively. So Jay or Robert could jump in and say, hey, I don't like that. I do like that. Change this, change that.

And I don't have to make a decision in a vacuum. We can make all these decisions collaboratively. And again, this is connected to Robert's Revit stage. This is connected to his Grasshopper stage. This is connected to Jay's door stage, right? So we have all the data available to make all the decisions and fully flexible to use the connectors to do it in a way that's smart and efficient and helps us get to the end quicker.

JAY AXE: Yeah, and Dave, I want to reiterate that everything you're seeing that's happening live, with Dave working very quickly moving the camera around-- this is still connected to the Nucleus server that's in California. So he's doing this in real time from Denver, Colorado.

So it's really impressive that these types of updates with this high-fidelity model are happening in real time. It allows us to work at speed, even in a scalable sense where you have more GPUs, you render larger scenes, and you can fit more into your memory buffer. That's the way it's set up to be fully connected in Omniverse.

ROBERT CERVELLIONE: And the cool thing here is that I didn't have to do anything to my Revit files to decimate the data. I didn't have to change what I exported. Everything you see here is the full-fledged, fully-detailed model all the way through.

You're not looking at half-baked versions or decimated versions. We didn't have to spend hours optimizing anything. We're streaming the real data directly from the source of truth apps.

DAVE TYNER: Fabulous. Thanks, guys. OK, so now we're going to move from single-user multi-app to multi-user multi-app. And this is where I kind of recommend you put on your seatbelts, because it could get a little fast and hairy. But just know that in every window you see here, we're all connected via Create through the Nucleus server.

Again, Robert in New York, I'm in Colorado, Jay's in California. And we're all just kind of doing different things. But the point is that in whatever screen you're going to be looking at, the same things are happening. Whether the camera's looking at them or not is irrelevant. The same things are happening.

Some things we're going to all focus on, some things we aren't. But you'll notice that as things appear in one view, they're going to appear in all the other views. And that's largely due to-- well, it's exactly due to Nucleus server and the power of Omniverse. So I guess I'm going to start off and take it away guys.

ROBERT CERVELLIONE: Cool. I guess one thing to note is you'll notice the cameras moving around. Those are actually the individual users. And it might be hard to make out, but underneath each camera is the username. And so you can see where everybody's looking.

And here, I'm just starting, like, oh, let's go add some stuff to the shelves. So I'm dropping a couple of plates in there. I'm going to use the built-in gravity system inside of Omniverse to just help place those plates, by turning on gravity and then, one by one, letting those plates drop on top of each other. So I don't have to sit there and fiddle with intersecting objects, which is really cool.

DAVE TYNER: Using the PhysX integration no less.
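For readers curious what that looks like in USD, here is a hedged sketch using the UsdPhysics schemas (which is how rigid bodies can be expressed for PhysX-backed simulation). The prim paths are hypothetical and the plates are assumed to already exist in the scene.

```python
# Minimal sketch: tagging props so a physics/gravity toggle can settle them.
# Paths are hypothetical; the shelf and plates are assumed to exist.
from pxr import Usd, UsdPhysics

stage = Usd.Stage.Open("omniverse://my-nucleus-server/Projects/Brownstones/Interior.usd")

# A physics scene prim provides gravity for the simulation.
UsdPhysics.Scene.Define(stage, "/World/physicsScene")

for i in range(1, 5):
    plate = stage.GetPrimAtPath(f"/World/Shelf/Plate_{i}")
    UsdPhysics.RigidBodyAPI.Apply(plate)   # lets the plate fall under gravity
    UsdPhysics.CollisionAPI.Apply(plate)   # lets it land on the plate below

# The shelf itself only needs collision, so it stays put while the plates drop.
UsdPhysics.CollisionAPI.Apply(stage.GetPrimAtPath("/World/Shelf"))
```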

JAY AXE: Which is a nice moment. Because we all took a moment to watch the PhysX and the plates. And then we went back to our respective areas and continued working. So in the bottom left, I'm starting to build out a living space and I'm putting a shelf near the wall.

And let's make a decision to convert this to a dining area. And Dave's going to take the couch and move it over to another wall. I'm going to add in some chairs and a table. So we're getting the broad strokes for visualization and how we're going to lay out a space.

And then from there, we're going to collaborate together to add in more assets in the scene.

DAVE TYNER: Exactly. So in the lower right now, I've added a coffee table. And I didn't like that yellow couch. Sorry, Jay. So I am now going to put in a gray couch. And as I do that, you'll see it appear in Robert's stage up there in the upper left-hand corner.

JAY AXE: Much better.

ROBERT CERVELLIONE: Yeah, and I did not like his white coffee table there. I liked the coffee table, but I didn't want it white so I made it wood.

DAVE TYNER: Yeah. And then you added a crash test dummy--

ROBERT CERVELLIONE: I did--

DAVE TYNER: --as one would.

ROBERT CERVELLIONE: --for scale.

DAVE TYNER: Yes.

ROBERT CERVELLIONE: Yeah.

JAY AXE: Yeah, so what's exciting here is that we're all putting our design options or our opinions on the scene. And we're interactively adding new USD layers and deltas to the scene to enhance the design. So capturing these in different design options, it's excellent for the collaborative workflow.

ROBERT CERVELLIONE: And this is fun. We all came together to watch everything crash on the shelf there as they enabled gravity, which was super fun. I think the plant fell over, which was cool.

DAVE TYNER: Yeah.

JAY AXE: It really changes the way you work.

DAVE TYNER: Totally.

JAY AXE: Yeah.

DAVE TYNER: And of course, I had to give the crash test dummy something to look at because he looked sad and lonely and bored. Yeah, and that point you bring up about the layers and the options, Jay, is a really important one. Because everything you see here-- although it looks like we're just spasming around the scene adding things--

It's all being fed into a layer, and that layer is going to store all that data. And as Jay said, it stores it as an opinion. So that's our opinion-- one opinion about the scene. But we can have multiple opinions. And those opinions can be stored as layers-- just changes to the scene-- so that they can be added back into any scene that contains that same data.

And then you're going to see those opinions reflected. So it's one of the huge flexibility points of Omniverse. And one that we really like to work with. And I think you will too.
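To make the "opinions stored as layers" idea concrete, here is a minimal, hypothetical USD Python sketch of a design-option layer: a small layer of overrides that can be muted, swapped, or re-applied to any stage containing the same prims. File and prim names are made up for illustration.

```python
# Minimal sketch: write one design option as its own layer of overrides.
from pxr import Usd, Sdf, UsdGeom

stage = Usd.Stage.Open("living_room.usd")

# Create a separate layer for this option and direct edits into it.
option_layer = Sdf.Layer.CreateNew("option_gray_couch.usd")
stage.GetRootLayer().subLayerPaths.insert(0, option_layer.identifier)
stage.SetEditTarget(Usd.EditTarget(option_layer))

# This override is the "opinion": move the couch; nothing else is copied.
couch = stage.GetPrimAtPath("/World/LivingRoom/Couch")
UsdGeom.XformCommonAPI(couch).SetTranslate((3.0, 0.0, 1.5))

option_layer.Save()
```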

So let's move away from the interior now. And let's move to the exterior where, again, we're in this multi-app, multi-user environment. Just to set the scene up for you a little bit: down in the lower right-hand side here, I'm in 3DS Max, where the actual design of the park was done.

And I'm going to be making a few changes-- some elevation changes, some design changes, some geometry changes. In the upper right-hand side, Robert's going to be making-- he's using Unreal Engine as the material editor. And although Create-- I'll just go ahead and start it off-- Create has its own material editor, Robert's more comfortable with Unreal Engine's material editor, which is awesome.

Go ahead and use it. All the data's connected, feeds right in, no problem. So you can see, I'm making some adjustments here down here in 3DS Max in the lower right. And all the adjustments that I make from the application are, of course, updated in real time in all the connected applications.

JAY AXE: And I'll add, too, that in the bottom left quadrant, I'm reviewing the space from a high angle as I lay out the site. I'm just ensuring that there's the correct amount of space along the sidewalks. We have visualization-- or lines of sight so we can understand different parts of how the park would interact with the surrounding buildings in the context.

And as that's happening, you can see I'm changing the time of day. Getting an idea of the space. And as we continue further into materials and setting up the model, we can see we start to add a little bit more contextual assets into the scene, which is great for getting to the next level of visualization to have everything integrated into one cohesive scene.

ROBERT CERVELLIONE: And what's cool to understand is that I'm just in Unreal doing some material edits. And clearly, there's so much more you can do in Unreal. But the point is that I was able to open up Unreal and do material edits in it because I'm fast and I'm quick in Unreal.

And I could have done that in Max. I could have done that in Maya. I could have done that in a bunch of different programs. But the great thing is, I didn't have to choose one over the other. I could just pick the one that I thought was the best or quickest for me or I just had open at the time.

Because one of the great things about Omniverse here is everybody just can kind of work where they're comfortable and still feed into the same design ecosystem. So you're not then siloed into saying, look, our workflow is highly built around these two programs because we need them to talk to each other. It's very open to just allow you to use what you want.

DAVE TYNER: Then we can only hire people that know those programs.

ROBERT CERVELLIONE: Exactly. Exactly. So everybody can contribute because it's such an open world.

DAVE TYNER: Exactly. And so while all that was happening, you see, I was adding a bunch of vegetation, Jay was helping me out. And now we're going to use the RTX renderer and we're going to-- in a minute here, we're going to take a look at how this is turning out and capture some of those options that we were talking about before.

This is one option, but we created several. And when the RTX part comes on, I'm literally-- there we go, all right. So a really beautiful, full-fidelity, 4K RTX render. And you see, we have just different versions of the park. We have the Brownstones. We can capture it from any angle.

Again, this is all of us working collaboratively, so this is all happening simultaneously. And then we just upload our rendered images to the server. And here they come into the presentation. So next, speaking of rendering, let's talk about RTX ON and how we can capture some of these options that we've been talking about-- some of this beautiful imagery we've been seeing.

So here's an example of just trying to capture the exterior with a couple of different options on. It's the Brownstone, it's isolated by itself. So we're not influenced by the surrounding area. And just a couple of options here, some furniture options, maybe some vegetation options.

But really quickly and easily able to view those. Just toggle on and off and see the results. And again, it's collaborative, so we make decisions about it. And if we don't like it, we scrap it. And we can collaboratively redesign the whole thing over again.

ROBERT CERVELLIONE: Yeah.

DAVE TYNER: No problem.

ROBERT CERVELLIONE: And right, that view was fully real time.

DAVE TYNER: Exactly.

ROBERT CERVELLIONE: Which is great. I mean, that wasn't a rendered sequence. That was just real time.

DAVE TYNER: Exactly. I'm just flying the camera. And, Jay, why don't you tell us what's going on here?

JAY AXE: In this case, we are looking at fully path-traced rendering. And we're using the Omniverse View Sun Study tool so that, with one click, we're looking at two hours of animation at different camera angles. So we're able to render out and look at the high-fidelity visuals.

And next, we're going to look at some tree variations with USD variants. So you can see how the nature of the space changes quite dramatically with different trees. And these are all high-poly, high-fidelity models that, I think across the site, we're looking at half a billion instanced polygons.

So we're looking at a lot of data and we're visualizing with the RTX path tracer.
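As a rough illustration of USD variants like the tree variations being shown, here is a minimal Python sketch: each variant carries a different reference, and switching the selection swaps the whole site in one edit. Asset and prim names are hypothetical.

```python
# Minimal sketch: a variant set whose variants reference different tree assets.
from pxr import Usd

stage = Usd.Stage.Open("park_site.usd")
trees = stage.DefinePrim("/World/Park/Trees", "Xform")

vset = trees.GetVariantSets().AddVariantSet("treeSpecies")
for name, asset in [("oak", "trees_oak.usd"),
                    ("maple", "trees_maple.usd"),
                    ("birch", "trees_birch.usd")]:
    vset.AddVariant(name)
    vset.SetVariantSelection(name)
    with vset.GetVariantEditContext():
        trees.GetReferences().AddReference(asset)

vset.SetVariantSelection("maple")   # flip the whole park to maples
stage.Save()
```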

ROBERT CERVELLIONE: And the cool thing is we wanted to populate these fast, so I wrote a quick script in Grasshopper that distributed them across the scene.

JAY AXE: Yeah, this is great. Because in 3DS Max, as a visualization tool, we're just setting up the trees at the origin. And so it's a combination of the Grasshopper and 3DS Max workflows, visualizing with the RTX renderer. So Omniverse is really connecting the tools-- tools that have their own individual workflows, whether efficient or automated or procedural-- to a visualization workflow, all connected through Omniverse. It's really good stuff.
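The Grasshopper script itself isn't shown in the class, but as a hedged sketch of one way a quick distribution script could scatter tree prototypes into USD, a point instancer works well. Positions here are random and all names are hypothetical.

```python
# Minimal sketch: scatter instances of an existing tree prototype as a
# USD point instancer. Not the presenters' actual script.
import random
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.Open("park_site.usd")
instancer = UsdGeom.PointInstancer.Define(stage, "/World/Park/TreeScatter")

# Point the instancer at an existing tree prototype prim.
instancer.CreatePrototypesRel().AddTarget(Sdf.Path("/World/Park/Prototypes/Oak"))

positions, indices = [], []
for _ in range(200):
    positions.append((random.uniform(-60, 60), 0.0, random.uniform(-40, 40)))
    indices.append(0)  # only one prototype in this sketch

instancer.CreatePositionsAttr(positions)
instancer.CreateProtoIndicesAttr(indices)
stage.Save()
```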

DAVE TYNER: The options are endless. Once you start connecting these tools and using the different strengths of the different tools, you just find that you unlock all these workflows that were never possible before. It's really magical stuff. And I really love the use of the path-traced fog in there, too, Jay. It's really nice.

OK, so moving on to the interior. Now, we've captured three options of Robert's beautiful kitchen. And we put those together in this path-traced, slow camera movement where we see one, then we see the other. And just using it as a presentation tool, it's pretty effective, and your customers can make a really quick decision about what they like best in the context of the scene. So it's really amazing.

ROBERT CERVELLIONE: And to be honest, because these path-traced renderings happen on the GPU, they're fast. And that's why you see so much of this in movies-- because it doesn't take days to put these out. Because this is kind of isolated to the one project file, this takes minutes to output, not hours.

DAVE TYNER: Yeah. Yeah. It took me 45 minutes to render all this and put it together. So again, all the value-adds of Omniverse. And then here we have the final flythrough of the park. Of course, I'm in real-time mode. This would have taken more than a few minutes to render just due to the poly count of the stage.

Like I said, it's about 16 gigabytes where we landed, maybe a little bit more. And billions and billions of polygons and all the complexity of the lighting and materials. But I am able to run it in real time and output this really nice-- at least it gives you an idea-- a very clear idea of what it's going to look like in path-traced mode.

Although you saw the path-traced renders, it wouldn't be exactly the same. But it gives you a good idea of where you'd want to cut. Like, for example, where I'm going to cut this video.

ROBERT CERVELLIONE: But to be honest, since Omniverse scales with your GPU count, larger GPU servers will cut the path-traced rendering time down significantly.

DAVE TYNER: Precisely. So I hope you've enjoyed the presentation. Now, the call to action. How do you get in on this? How do you download Omniverse, install the connectors, make amazing things? And then show us those amazing things?

Well, here's the launcher in action. All you have to do is go to this URL and download the launcher. This is what it looks like after you install it. There are multiple tabs, from Your Library to the Exchange to the Learning tab, which will probably be very important.

It contains tons of learning material-- all the connectors, showing how they work together and how you can use them-- and some really good examples. So we hope that you've enjoyed this presentation.
