AU Class

Virtualizing Autodesk applications - technologies and considerations for success

Description

Virtualization offers a smarter way for Autodesk customers to achieve improved manageability, centralization and security for their applications and the technology has now reached mainstream maturity. Explore what use cases virtualization is suited for, which platforms and technology you should evaluate, and what a successful deployment needs to take into consideration. Most importantly, learn about the unique challenges you should be aware of as you evaluate and embrace this technology. The time to virtualize is now, come to this session to get started!

Key Learnings

  • Understand the concept of virtualization
  • Understand the business benefits to virtualization
  • Position the different solutions available with pros and cons
  • Understand the platforms available for deployment

Speaker

Transcript

GARY RADBURN: OK, so thanks for being here. My name is Gary Radburn. I'm the director of workstation virtualization at Dell. I've been with Dell about 15 years now. And I might start speaking slightly quickly and whatever else, and as you can tell from my accent, I'm from Raleigh, North Carolina. So apologies if I get lost in translation somewhere along the way.

So you are here for workstation virtualization. How many of you are already doing virtualization today? A few of you-- how many of you are thinking of doing virtualization in the near future? Excellent-- how many of you have now realized you're in the wrong room? Good-- we can continue. So virtualization has been around for a fair time now, a couple of years. And it's really come on in leaps and bounds in terms of how we can get business efficiencies from virtualization, how we can start saving real money inside our organizations. And I'm going to go through some of the basics, some of the technologies, why people are doing it, what the returns to the business are. And then we've also got a case study towards the end, which is actually around Autodesk products as well. So you can see a real-life case study of how people have saved money when they've been using AutoCAD and Revit.

So, in terms of the Autodesk applications, most of you using AutoCAD, Revit-- any others?

AUDIENCE: [INAUDIBLE]

GARY RADBURN: So it's pretty much covering all of those, and I'll come up with a few gotchas as well. So some of the applications have particular needs when they're using the hardware-- things like OpenGL support, things like OpenCL and CUDA support-- if you are doing compute, shaders, rendering, or anything like that in your application. And there are some things you have to consider when you're implementing if you're using some of those features, because this isn't one size fits all. You have to work out what you're using, how you're using it, what features you're using, and then choose your use case from there.

So, at the moment, organizations today are looking at virtualization to really give freedom of mobility. There's a lot of talk in the industry about work-life balance, giving people time back, making them more productive, making engineers more productive. And, at the moment, if you're a workstation user, you're pretty much tethered to your workstation at the desk. So nine to five, or whatever hours you work, you have to come in and sit at your desk-- you're tethered to that piece of tin, your workstation. Meanwhile, you gaze wistfully out the window at the sales and marketing types, who are sending in their work from Starbucks cafes, from the beach, or wherever else. And the engineers want a slice of that life. So now we've got the technology where we can actually offer that to engineers today and still have the graphics fidelity and the power that we require in a workstation use case, rather than an office-based one.

So we're now getting the most out of our engineers. We can get higher resource utilization with the equipment we've got as well. When we talk about virtualization, this is taking things into the data center. This is centralizing our compute. This is centralizing our graphics, and then putting it out towards the edge. The question I normally get at this point is: isn't this back to where we were with mainframes and green-screen terminals? We had all the compute at the center, we had the displays at the edge, and isn't this now going back to that stage? That's not actually accurate, because what actually caused the decentralization from mainframes to desktop PCs and workstations was that the power of those edge devices started to grow far faster than the mainframe power did.

So users could do a lot more with what they had on their desk. You had the 286, the 386SX, the DX-- with the math co-processor, if you were lucky enough. And then that's grown up over time. Now we're at a stage where we've got Intel Xeon processors inside the workstation, and each processor can have up to 22 cores. You put that into a dual-socket system, and that gives you 44 cores. You then put hyperthreading on top of that, and you've now got the equivalent of 88 cores. In the workstation landscape, most of the applications out there are actually still single-threaded. So you've now got a lot of your workstation sitting idle, unless you're running lots of other applications in the background, or you're running heavily multi-threaded applications.

So we've almost got too much power on the desk for some instances and for some applications. So IT managers have looked at that and said, how can I get better results? Because if I can centralize that and have those 88 cores, and then run multiple users off of that one box, I'm now getting better resource utilization, and therefore it's a better investment. So that's one of the things that's driving the change now towards centralization.
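The core-count arithmetic above can be sketched as a quick calculation (the 22-core figure matches the top-end dual-socket Xeon parts of that generation, as described in the talk):

```python
# Rough logical-processor count for the dual-socket workstation described above.
cores_per_cpu = 22         # top-end Xeon of the era, per the talk
sockets = 2
threads_per_core = 2       # hyperthreading doubles the logical count

physical_cores = cores_per_cpu * sockets           # 44 physical cores
logical_threads = physical_cores * threads_per_core  # "the equivalent of 88"

print(physical_cores, logical_threads)  # 44 88
```

A single-threaded CAD application can keep only one of those 88 logical processors busy, which is the utilization gap centralization tries to close.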

That can save you money. Sometimes, it doesn't. So when you're looking at this, if somebody says they were looking to implement virtualization to save money, then we have to press the reset button on the conversation pretty quickly, because sometimes that might not work out-- because you're now relying on your infrastructure. So that infrastructure that you used for delivering email, for web browsing, for getting your normal day-to-day business done, you are now relying on it for real time communications. You're now shifting graphics backwards and forwards in real time. You want to minimize the latency, so that the engineers don't get taken out of their zone with their application. When they click a menu, they want to see the dropdown. They don't want to be waiting half a second for that dropdown to appear, because that's jarring, and it's going to take them out of their creative zone.

So we need to really invest more money into the network at that point, which is why it then evens out. So you could be saving money from the consolidation aspect, but then you're reinvesting that money into other aspects of your organization to make sure that you can deliver that real time aspect of the virtualization experience.

We've then got centralized management as well. This is when we come into things where we're starting to get some of those softer savings-- things that are quite difficult to quantify, unless you've got somebody that's really on the ball and has done this. So they know exactly what the management cost is for a remote workstation. They know how much it is for sending out an engineer to go and fix something that's broken. If you're rolling out driver patches-- how much does it cost every time you touch the image, and then have to roll that out to all your PCs.

With this, because we're centralizing, and we can now use what we call a gold image, we can actually just patch one image, and then roll that out to all of our users in the virtual environment. So we've now got much easier, much smaller footprint to manage inside of our data center to roll out any patches. We don't have to wait for people to turn on their PCs, and then try and ship out the patch. If people are on holiday, and they come back into the office, how do you know they're patched and up-to-date? With this, because we're centralizing, and we control all of the images that are going out to the thin clients or the end devices in a virtual network, then we can make sure everybody's up-to-date.

And then the other thing is we're also enabling collaboration with virtualization within ourselves, in Dell EMC. We've just acquired EMC, and there's going to be a change-over period, where the two IT departments have to try to supplement each other and work out how you merge those two entities together in a quick and efficient fashion. Rolling out virtualization is a great way to do that, because you can then roll out one image. They can use their existing PCs as their endpoint device, and they can get a new desktop delivered to them across the network, and that will enable them to be up and running in the new organization very quickly.

There are also ways you can use this to collaborate with external organizations. Remember, this is the centralized aspect. All of your data is now centralized inside the center of your network. Your data center is the safest place in your organization. If it isn't, you've got other problems. OK? So if that data goes walkabout, that's your IP, your intellectual property. USB keys have been dropped in car parks, automotive manufacturers have found their designs appearing on Chinese websites, right? The impact on the business is huge through a loss of data, or through sharing it with other organizations and it then getting out into the wild.

We can now collaborate by giving people access to the data in the data center. The data never actually leaves the data center, and so, therefore, you've got full control of it at all times. And there was an example in the movie industry. If you remember the film Guardians of the Galaxy, there's a character in there called Rocket Raccoon. And, apparently, there's a studio that absolutely specializes in fur. And so there was a collaborative agreement between the two. And you can imagine the likes of Marvel being very, very protective of their data-- their intellectual property. And yet, having to work with a third party, using something like this, nothing has to leave the building. So it has those security advantages.

So what we're now doing is we're moving those costs, as I said, from the outside, and we're centralizing them. But there are other things you have to consider in terms of the rollout as well. I've already mentioned the fact that you have to invest in your network. But you also have to invest inside the data center. I've talked to some very intelligent people who manage data centers, and they seem to lose sight of the fact that when we're centralizing workstations, we've now got graphics cards which are 225 watts per card, 250-- in some cases 300 in the latest generation. You put two of those into a machine, and you've now got an extra 600 watts of power, with all the heat that comes with it. You've then got to get rid of that heat within your data center, so there are going to be air conditioning costs, because those go up. The floor space in the data center goes up as well. And also consider that most racks, on average, are about 13 to 15 kilowatts per rack.

So you can't just go and fill a 42U rack with 2U systems. Mathematicians are probably ahead of me-- that's 21 units you could physically get in there. You're not going to put 21 workstations in there, because you will melt a hole through to China. So the idea is you've got to budget your power inside there, which is then going to increase your footprint, which is then going to increase your data center costs. So I've actually seen fights break out between departments. Somebody is managing the data center, somebody is managing the desktop.

The desktop people are going, great, we've won. We've got thin clients, we've got less power out on the network, we've got less power coming out of the floor, people are no longer wearing shorts to work. They're actually wearing proper long pants, because it's not so hot out there. That's a win for me. Meanwhile, the data center person is pulling their hair out, because they've now got extra costs, extra cooling to do, extra facilities to build, and we're shifting one to the other. So the CXO level really needs to get involved in this to make sure that everybody is looking at the overall budget, rather than individual departmental budgets, because that can cause issues as well.

And then lastly, the applications-- obviously, we're here at Autodesk University. The ISVs are very, very important in all of this. The reason why we use these applications, the reason why we use workstations, is because they're certified for working on a particular platform. They're guaranteed to work. If you're designing an aircraft, you want it to be rendered properly. You want to see on the screen what you're getting. That's why we have that ISV certification checkbox. Apparently, holes in an aircraft are a bad thing if they're in the wrong place. So we can still do that. And we certify the virtual machines with the applications as well. So in much the same way as you have a physical workstation being certified, we can now certify the individual virtual machines running on the platform, so that even though you're in the virtual environment, you can still be assured that what you're getting is a supported configuration.

And the applications need to be tweaked slightly as well. I think there were a couple of revs back in Revit where, because it thought it owned the system, it was caching things locally. And it was doing things in a smart way for a standalone system. When you put that into a virtual environment, it was starting to mess about with some of the calls that were happening. It was trying to pin CPUs occasionally. It would cache too much. And so, therefore, in our virtualized environment, it wasn't snappy.

So that's now been fixed, and we've worked with the likes of Autodesk and other ISVs to check that they're looking at the virtual environment as well as the physical environment, where they used to be able to do whatever they wanted. So writing to absolute locations and things like that is a no-no. We need to make sure that they're aware of that.

So that leads me on to some of the hardware that's available. We've gone through why people are looking at virtualization, why a business would like to do it, and what some of the return could possibly be. So we've got the Nvidia cards here, and I'm going to do AMD in a moment-- I've got to give both sides fair coverage. And I'll point out some of the pros and cons in both solutions as well. So I'm going to inform you, in an impartial way, of which things you're looking for would be better served by one vendor and which might be better served by the other.

So with the Nvidia cards, it's the Tesla M60. How many of you are familiar with the GRID K2 cards? Yeah? So this is the follow-on to the K2. The K2 came in two variants. There was the passive, which is the K2, and then what we called the K2A, which was effectively the same card but with fans-- so you had active cooling going through it, and that's the A. With the Tesla M60, we've got the same deal. So we've got an active and a passive version-- two GPUs on the card itself and 16 GB of memory. So this can be used as a compute card, or it can be used as a GRID card, as the follow-on to the K2.

The thing is that you have to actually configure that card for whichever usage model you're going to be using it for. So there's a little utility you have to run. You get it, and you can set the card to compute, or you can set it to graphics. If you don't do that, you're going to start getting loads of errors on installation, and you'll be scratching your head wondering why you can't get any virtual machines up with a GPU-- and it's normally because you haven't run that setup utility.

So we've got the GPUs in there. And we've now got GRID 2.0 software. This is what brings this card to life for the GPU virtualization aspect. We can have up to 32 users on a single card, depending on the profile. So 16 GB of memory, 32 users-- that's 512 MB of framebuffer per user. OK, not a lot. So you're probably not really going to be looking at that mode for workstation applications. The traditional one is normally 1 to 2 GB of framebuffer from the card. But the key thing is to look at what your users are doing today. Look at what you're using currently. What card are you currently supplying the engineers with? How much framebuffer does that card actually have? Because you need to give them an equivalent from this card when you carve out the profiles. So if they've currently got a card with 2 GB of framebuffer, then you want to give them a 2 GB framebuffer profile. That's going to give you 8 users on that card.
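The framebuffer sizing he describes is simple division: total card memory over per-user profile size. A small helper (the function name is hypothetical, for illustration only) makes the trade-off explicit:

```python
def users_per_card(card_memory_mb: int, profile_mb: int) -> int:
    """How many vGPU profiles of a given framebuffer size fit on one card."""
    return card_memory_mb // profile_mb

M60_MEMORY_MB = 16 * 1024  # 16 GB across the card's two GPUs, per the talk

print(users_per_card(M60_MEMORY_MB, 512))   # 32 users at 512 MB each
print(users_per_card(M60_MEMORY_MB, 2048))  # 8 users at a workstation-class 2 GB
```

The point of the exercise: pick the profile to match the framebuffer of the physical card your engineers use today, then the user count per card falls out of the division.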

The only thing that a user has to themselves on this card is the framebuffer. All of the CUDA cores on the card are actually shared. So anybody who logs into the system will actually get all of the CUDA cores that are available, and then they'll be shared on a time-slice basis as it goes through. Only the framebuffer is dedicated to that user. I'll go into that in a bit more detail as we go on.

In terms of the AMD solution, we've got the FirePro S7150. It comes in two flavors. You'll notice that the Nvidia card was a double-wide card. So that, obviously, limits which slots it can go into, and how much power it's going to take. On the AMD side, you actually do have a choice. You've got a single-wide card, with one GPU on it. And then we've got the [INAUDIBLE] which is dual GPU. OK, so we've got two GPUs on the card, and again, different power, different size, and I'll go into a bit more depth on how this one does its allocation in a moment. The difference between the two is subtle, but again, it can have an effect on the performance of your applications when you roll it out.

So just a brief recap of the different modes that we can actually give a user when they're working remotely. The first option over here is where we have a dedicated user operating remotely. So how many of you have heard of Teradici? Yeah? Teradici PCoIP-- PC-over-IP-- is a very good remote workstation solution. So for a one-to-one type deployment, you can actually get a hardware-based card that you can put into the workstation, and you can have a hardware-based endpoint. We used to call them the P25 and P45. I think they are the 7050 now and the 7020. But you have the hardware-to-hardware solution, and that will actually give you the best performance-- about 60 frames per second on average, depending on network congestion or whatever else, and how far away it is. And the latency, again, depends on whether it's LAN or WAN or whatever, but it can work across anything there.

But the idea here is that you're actually working as if your workstation was right next to you. So it's a one-to-one relationship-- no virtualization, no hypervisors, no nothing. It's just a direct correlation of the one-to-one.

The second mode we've got is we could actually put four graphics cards into a 2U box. So our Dell Precision Rack 7910 has the capability of having four Quadro 4000-class cards inside of it. I could then partition that out by running a hypervisor-- so I could run VMware ESXi or Citrix XenServer. And then I could have 4 GPUs in there and have four virtual machines, each one having its own dedicated graphics card. So now I've got a four-to-one relationship inside that 2U box. Or, using the cards that I introduced earlier, the Tesla M60 and the FirePro cards, we can actually divide that out into up to 32 users, and then have all of those users on one single platform. And there are different pros and cons of why you would do that.

Obviously, the best performance is going to be given by the dedicated GPU. So one of the things I will point out at this stage-- I think I'll come onto it later in the presentation, but I just want to make sure I follow that thought through for you-- is that when you've got a dedicated GPU, it's exactly like having a dedicated workstation next to you. Everything you could do on that workstation you can do with GPU pass-through. So you can run CUDA applications, you can run OpenGL. It's absolutely fine. It's exactly the same driver that you'd be using on a dedicated hardware system. So even though it's running in a virtual machine, it's actually still running the exact same driver that you would have for the ISV certification on a physical hardware platform. OK? With me?

When you go onto the multiple-user situation, you start to lose some functionality if you're not doing pass-through. So there's a pass-through profile, which you could use. But bear in mind, there are only two GPUs on the M60. That means if you've got two cards in the system, you've got 4 GPUs that you can pass through out of those two cards. The pass-through profile will give you those four users, the same as if you had four physical cards inside the system. In that pass-through profile, we can do everything we could do on a dedicated system. But if I start to get more than four users-- so I'm now going onto a shared mode-- I then start to lose CUDA. So that's the first thing that disappears. I can't do CUDA operations, compute operations.

Even in the Autodesk applications, there are quite a few of them that use CUDA at some point in their lifecycle to accelerate things and speed things up. You may notice a drop-off in performance from your engineers who are using those applications that utilize CUDA, and that would be the reason-- because CUDA does not get passed through in a shared virtualized environment on the Nvidia cards. That means that you have to really analyze your network and your applications to find out what they're using before you just run headlong into virtualization. Because if they use CUDA, that's going to steer you in a certain direction-- towards that pass-through profile-- if you want that CUDA functionality. And again, that could change.

So is that a question?

AUDIENCE: Yeah, just wondering about [INAUDIBLE]

GARY RADBURN: It depends on the graphics performance, obviously, because everything is being shared. The great thing is not everybody's using everything at the same time. Somebody has gone off to get a coffee, somebody is thinking about what they're going to draw next, and there might be somebody doing a 3D rotation. If you've got everybody doing a 3D rotation at the same time, you're going to see, pretty much, a linear drop-off, because everybody's going to be doing the same operation, sharing the same number of cores. That generally doesn't happen-- in the same way that if you've got 1G ethernet, not everybody's using it at the same time, so you get pretty good throughput. Same principle here. Again, it's looking at the applications and the workflow and the workload of the users, and once you work out their pattern of usage, that's going to give you a good estimation of how many users you can actually put onto a system.
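The ethernet analogy can be made concrete with a simple binomial model: if each user is independently "active" (rotating a model, rendering) some fraction of the time, the chance that everyone hammers the shared cores at once is small. This is an illustrative sketch under an independence assumption, not a sizing tool:

```python
from math import comb

def prob_at_least(n_users: int, p_active: float, k: int) -> float:
    """Probability that at least k of n_users are busy simultaneously,
    assuming each is independently active with probability p_active."""
    return sum(
        comb(n_users, i) * p_active**i * (1 - p_active)**(n_users - i)
        for i in range(k, n_users + 1)
    )

# 8 users sharing a card, each doing heavy 3D work 20% of the time:
print(f"{prob_at_least(8, 0.2, 8):.6f}")  # all 8 at once: 0.000003
print(f"{prob_at_least(8, 0.2, 4):.3f}")  # 4 or more at once: 0.056
```

In other words, the worst-case linear drop-off is rare under typical interactive workloads, which is why oversubscription works at all; correlated workloads (a whole team rendering at the same deadline) break the independence assumption.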

So just to show you this in pictures, the Teradici stuff-- we've got our Rack 7910 over there. We can put the hardware card inside of it, or we can use the software host. So again, we're not tied to actually putting cards inside the system. We can run a software application-- like an agent that runs on the box itself. And then we can have a software or hardware decode at the other end. So we've got the option, in that PC-over-IP situation, to do hardware-to-hardware, hardware-to-software, software-to-hardware, or software-to-software. The best performance is going to be hardware-to-hardware, obviously-- because there are ASICs in there speeding things up, giving you the best performance.

And then, if you use software-to-software-- remember, I said 60 frames per second-- that gives you about 30 frames per second. So again, if you've got something you're looking at that is video-related, or you want smooth movement and that 60 frames a second, a software-to-software solution probably isn't going to cut it for you. So again, something to be aware of there.

With the VMware solution, there are several different ways you can carve up a GPU. I've mentioned a couple of them already. So the first one-- if you're familiar with VMware, they've got this vSGA mode, which is a shared graphics adapter. With that shared graphics adapter, I can have one card in the system, and I share it amongst all of my virtual machines. It goes through a virtual graphics driver-- so it doesn't go through a native Nvidia or AMD driver, it goes through VMware's own graphics driver. And the knock-on effect of that is on OpenGL support. Because, as you can see, that supports OpenGL 2.1.

Now I think we're up to-- what version-- three, four, something? So, unless you're drawing circles and squares, anything more complex than that and you're probably not going to get the level of performance you're looking for out of OpenGL. So again, remember I said that we're looking at the applications, we're looking at the workflow. We've also now got to look at their technical requirements, because we also only get DX9. And anything that's grown up since-- we're up to DX12 now. Anything that uses calls in 10, 11, and 12-- things like tessellation, for instance-- wasn't available in DX9. It came after that. So if your engineers' applications are looking for those functions and features, then my advice is: vSGA, don't even touch it, just don't go there. It's going to give you headaches for all but the most simple applications that are using DirectX.

When we get onto vDGA, that's the pass-through mode I was talking about. So you can take a single card, and we can pass it through, dedicated, into the virtual machine. And then we've got the vGPU mode, which is the GRID 2.0 implementation, where we can divide up that card into multiple users. And remember, we're only dividing up the framebuffer. Everybody gets all of the cores. So if I'm the first one in the morning, and I log in, then I'm going to get-- as it was on the K2-- 1,536 CUDA cores all to myself. So I'm going to be happy, because I'm going to be working, and everything is going to be snappy. It's going to be great. Somebody else comes in, and I'm now sharing that resource with that second person, and a third person, and a fourth person. So the more people that come in and start doing things-- and that talks to the point I made earlier-- the more you're all sharing that pool of cores inside the graphics card. And you don't know how much you're going to get.

With GRID 2.0 as well, there's licensing involved now. On the GRID 1.0 front, with the K2, you could buy the card, you could split up the card, you could use it when you wanted to, and there was nothing else on top of that. With GRID 2.0 on the Tesla M60, you need to install a license server. It's a FlexLM license server. And from there, you check out a license for the profile that you're actually using. So, the profiles you've got: you've got virtual applications-- if you're using things like XenApp and you're sharing the applications, then you have one particular license. Virtual PC means I'm not getting any OpenGL support-- it's a pure DirectX PC implementation. That's another license.

And the one that you're really going to want for professional applications in the workstation environment is the Nvidia GRID virtual workstation license. So again, an important thing to note. You load those into the license server, and then you can check the licenses in and out as and when you need to. If it loses contact with the license server, everything will still run as it should. So once it's had one connection to a license server, you're good to go. If it loses connection with that license server, it's not going to degrade performance-- it's still going to run as it would as a virtual workstation. But then, at the end of the year, you basically tally up how many licenses you've used against how many you've actually bought, and then-- yes-- it becomes time to pay the piper at that point for what you've used. So that's a different implementation from where we were with the K2.

For the AMD implementation, we've got pretty much the same options. So vSGA remains exactly the same. We've still got pass-through, exactly the same. But they've got a different implementation of what was vGPU in Nvidia land. It's called MxGPU in AMD's world. Now the difference is subtle, but it does give you a real-world difference in how you use the hardware. It uses something called SR-IOV, which is a feature of the BIOS inside the system, and it basically makes each GPU on that card look like up to 16 physical GPUs. That means I can divide that card out into discrete GPUs.

So remember I said that we had the cores that we shared in the Nvidia implementation? In this one, we actually allocate a performance profile to a user along with the framebuffer. Now what does that mean? The analogy I use here, normally, is a freeway. So I'm driving from point A to point B. In the Nvidia world, I can be driving at 110 miles an hour, then 50 miles an hour, 60 miles an hour, back up to 110, down to 30, back up to 110 again, and then get to point B. Because depending on what the other users are doing on the system at any given time-- which I've got no control over-- it will vary how much horsepower that card actually gives me inside that system.

With this one, you actually set, almost, a service-level agreement. You go and say each user can have up to this much power, and they can't have any more. So if I set that up for eight users, they get 1/8 of the graphics card, and they get 1/8 of the horsepower of that card for their virtual machine. That means that the travel from point A to point B will be a constant 50 miles an hour. You'll never get up to that 110 miles an hour that you could in the Nvidia card, because I've allocated you a certain level of performance that you cannot go above. So again, there are some people that think, when they're implementing things, that what somebody has never had, they never miss. So you don't want to give somebody 110-miles-an-hour performance and then drop down to 30, otherwise you're going to find people who may start locking the door in the morning and not letting people in.

So we've now got this situation where we can give a defined service level agreement of performance, or we can actually give them everything that we've got at any given time. So again, that depends on your usage model and your engineers, and the way they're actually interacting with that system, as to which one is best fit. But again, that's a consideration over-- do you choose Nvidia, do you choose AMD? Because that is one of the key differentiators in the solution.
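The difference between the two allocation models can be sketched numerically (illustrative fractions only, not benchmarks; the function names are hypothetical):

```python
def fixed_slice_share(n_slices: int) -> float:
    """AMD MxGPU-style allocation: each user gets a guaranteed 1/N of the
    GPU's horsepower, regardless of how many users are actually active."""
    return 1.0 / n_slices

def time_sliced_share(active_users: int) -> float:
    """Nvidia vGPU-style core sharing: whoever is active splits the whole
    GPU among themselves, so the share varies with load."""
    return 1.0 / max(active_users, 1)

# Card carved into 8 slices, but only 2 users active this morning:
print(fixed_slice_share(8))   # 0.125 -- a steady 50 mph, never more
print(time_sliced_share(2))   # 0.5   -- 110 mph while it's quiet
print(time_sliced_share(8))   # 0.125 -- same floor once everyone logs in
```

The fixed slice is the predictable service-level agreement; the time-sliced model gives better peak performance at low occupancy but no guaranteed floor experience, which is the trade-off that steers the vendor choice.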

So we're trying to make it easier to implement virtualization. And-- blatant product plug, sorry, I don't want to make this a product pitch or anything like that-- but this is like a turnkey appliance. So for people who are, not scared, but nervous about implementing virtualization-- the layers of VMware skills or Citrix skills that you may need for the implementation-- what we've done is make it an appliance. By answering a series of simple questions when you first boot it up-- how many users do you want? What's your license key?-- it will automatically configure itself and get those virtual machines up and running, ready to install your gold image onto them. So you don't have to be a VMware guru or a Citrix guru to get this up and running.

So this appliance can run on the Dell Precision Rack 7910, which is the workstation, or it can run on the Dell PowerEdge R730, which is our server range. They're like sister products. So you can have one or the other, depending on the use you want, and depending on whether you want Citrix or VMware to fit into your current infrastructure.

So how does this really play out in the real world? We've got a case study here, which was actually running AutoCAD and Revit, as I mentioned earlier: Burns Engineering. In this case study, they were looking at rolling out remote offices. And they were asking, how do we set up these remote offices? How do we manage them? How do we patch them, implement them, and make them as effective as they possibly can be?

They looked at virtualization, and the appliance, and the way they could manage it remotely-- they could send a pre-configured appliance to a remote site, have those users working there locally, and then manage everything from a central source. And they found that a deployment to a remote site was saving about $50K per site. This is where you start to see the real savings, when you go out to the branch sites. So real numbers, real applications, in an Autodesk environment, giving you an idea of the level of savings you can get depending on your network and your starting point. That's not to say everybody is going to get that number. If you've already got a very good network, then obviously you're going to save more, because you don't have that reinvestment. But your mileage may vary, as I should say at this point.

The other thing we've done to make it a lot easier is what we call the virtual workstation centers of excellence. We're based out of Austin, Texas, and we have solution centers in Austin and dotted around the world as well. What we've done here is put all the hardware in place, and we can take customers' images and customers' applications-- you can access it remotely as well. And the great thing is it's a free-of-charge service. So if you're looking to get into workstation virtualization-- I know that sometimes you try to implement things and never get the time, because you're in the office, and you have to buy extra equipment to try it, and it never really seems to happen. With this, we've already got the hardware, and we've got the expertise in those solution centers for virtualization, which you can then tap into. So if you're not deep in the weeds in VMware or Citrix, there are resources you can use there inside the solution center, and all the hardware to do it.

Once you're set up, you can then say, OK, that was it locally. Now I can dial into it-- that shows you how old I am. You can link into it through the internet, and then ask: how quickly is that going to work? What's the effect of latency from my remote office to the solution center inside the COE? That gives you much shorter implementation times, and all the skill set there, so you're assured that what you're getting is what you expect, rather than investing a lot of money and then finding out that it doesn't quite deliver what you expected it to do.

Case in point: there's a hypervisor I didn't mention, Hyper-V. That was intentional, because remember I said it was OpenGL 2.1 on vSGA. With Hyper-V, that becomes OpenGL 1.1. So again, in a professional workstation environment, not something we're really looking for, and so I didn't even bother with Hyper-V. That should change now with 2016, so you can start to do pass-through with GPUs. But that's still early days at the moment, and we'll update as we move forward with that. In terms of this, you can use Citrix or you can use VMware, and you can book it through your account teams, free of charge, if you're already a Dell customer. And we've got six different locations-- which is not actually correct for virtualization. That was me having a mad moment; that's a presentation I'm doing tomorrow.

So for workstation virtualization, we've got centers in Round Rock, Santa Clara, and Limerick. I think they've got one out in China at the moment. The slide hasn't got the three that I mentioned on there.

So in summary: we're looking at saving money, and your mileage may vary. It really depends on your current implementation and where your end point is, and we're more than happy to work through that with you. We've got the skill set, we've got the product. Now is the time for virtualization. I know that sounds trite-- everybody says now is the year of this or the year of that. But with the new cards that are coming out, the power that's available in there, the level of user consolidation-- if you remember the K2 cards, the M60 gives almost twice as much, depending on the profiles you're using. So the power of the card has increased; it's the Maxwell technology in there now, rather than the [INAUDIBLE] it was. You've got a lot more horsepower inside the card, so consolidation-- the number of users on the platform-- starts to give you some real benefits. It starts to become more and more economically viable.

So it's about using the Dell products there, and us partnering with our ISV partners, helping them through the quagmire and them helping us with the way their applications work. We've got some very strong linkages in there to get the best performance, Autodesk being a great partner. And that's almost like the hook that says, OK, it's time to leave. So at that point I'll leave. Thank you very much for listening, I appreciate that.

[LAUGHTER]

I'll take any questions if anybody's got any.

AUDIENCE: [INAUDIBLE]

GARY RADBURN: Correct.

AUDIENCE: [INAUDIBLE]

GARY RADBURN: Right, so the question was about the implementation of the licenses on the Nvidia system. I mentioned that it's like a FLEXlm license server, which is basically a Linux virtual machine. The way it's normally implemented: if you have your 2U box I was showing up there, and you've got your users on that system, you also set up on the side, inside of VMware, a virtual machine that's a Linux box. That Linux box then runs the FLEXlm license server. When you install the Nvidia driver into the virtual machine, then when that fires up, it looks for the license server, and you configure that with an IP address. So if you're familiar with the Nvidia control panel, there's now another item in there that says license server. You click that and put in the IP address or the fully qualified domain name of the license server. That will then go and check out a license, and later check it back in. You select whether you want a virtual workstation or just a PC, because the licensing costs are different for one or the other.

AUDIENCE: [INAUDIBLE]

GARY RADBURN: Correct, that's all controlled by the driver. And as I say, it checks in with the license server, and it has to have at least one check-in. As long as you've checked in once and been given a license at some point, that's good enough: it will continue to operate even if the license server goes down or you run out of licenses. Because one of the things we discussed with Nvidia was how this was going to work. Originally, it was, oh, what we'll do is drop it down to three frames a second if you don't have a license. Now, I don't know about you, but that's probably going to upset people-- there's probably a stronger word you could use at that point, but I can't because I'm in public. So it was, OK, we're not going to do that. What we'll do is trust you: if you don't have the licenses but you've already checked in once, then we'll do a tally up at the end, and we'll see how many you need to buy.

AUDIENCE: [INAUDIBLE]

GARY RADBURN: It's a very good question. And if I had $0.50 for every time I was asked that question, I'd probably have about $0.35 by now. But it's a really good question, because it really comes down to how long is a piece of string. Remember when I said that you really need to analyze your applications, your workloads, and your workflow? It's highly dependent on that. If you've got 20 engineers, and each one of them is using an M6000 today, then going virtualized is not the way to go. This isn't designed to replace workstations; it's designed to augment workstations. There's a market for fixed workstations, and there's a market for virtual workstations. Where we see this is when people are going into hazardous situations. So if you're going into oil and gas or a hostile environment, it might be easier to take a thin client to go and use, and leave the PC or the workstation back in the center, because you don't want it damaged or destroyed.

If you've got 20 users of AutoCAD, then again, that's going to be light usage. The memory footprint and the GPU footprint are a lot smaller, and then it would make absolute sense to move to that platform. But generally, your users aren't just using one application; they're using a suite of applications. So you really need to look over a period of time, and there are companies out there, like Liquidware Labs, where you can run an agent, and it will report out, over a period of time, this much GPU utilization and this much CPU utilization. You can do it even simpler than that, and just use the process manager inside of Windows, and something like GPU-Z, which will monitor your graphics card and its utilization.

So by applying a few smarts to it, you can actually see whether it makes sense to say, well, there are 20 users; these eight are power users, so I'm going to leave them on their own physical workstations, because virtualizing them doesn't make sense. But these people over here are signing off drawings, or doing a little bit less intensive workstation drawing, so I can virtualize them, because their workload doesn't need all of the power I'm giving them today. So yeah, I know it's not the answer you want to hear, because I'd love to say, oh yeah, 20 users-- virtualize them, absolutely. But it really does depend on the workflow and the workload, and that's why we came out with the COE, because it's exactly those questions that people want answers to.
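The "apply a few smarts" step above amounts to averaging sampled utilization per user and splitting the population into power users and virtualization candidates. Here is a minimal sketch of that triage; the 60% threshold, user names, and sample values are invented for illustration -- in practice you would feed in real data from a monitoring agent or GPU-Z logs.

```python
# Hedged sketch: deciding who to virtualize from sampled GPU utilization.

def classify(samples: list[float], power_threshold: float = 60.0) -> str:
    """Label a user a 'power user' if their average GPU utilization (percent)
    exceeds the threshold; otherwise they're a virtualization candidate."""
    avg = sum(samples) / len(samples)
    return "power user" if avg > power_threshold else "virtualization candidate"

# Illustrative data only: one heavy 3D user, one sign-off reviewer.
users = {
    "engineer_a": [85.0, 90.0, 75.0],  # heavy modeling -> keep on a workstation
    "reviewer_b": [5.0, 10.0, 8.0],    # signs off drawings -> virtualize
}
for name, gpu_samples in users.items():
    print(name, "->", classify(gpu_samples))
```

The real decision would also fold in CPU and memory samples over weeks, not three data points, but the shape of the analysis is the same.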

AUDIENCE: [INAUDIBLE]

GARY RADBURN: It absolutely would be. We say it's about five minutes from opening the box to powering it up, answering the questions, and getting to the point where you're installing your operating system into the VM. It's a really quick time to get up and running, because we've taken away all the difficulty of configuring virtual machines, doing the VMware configuration scripts, et cetera, and getting those up and running. We've taken all the hard work away. So you can be up and running literally within five minutes of opening the box to installing your gold image. So it would be a good way of getting up and running.

And in terms of distance, I've done implementations where the finance houses are in London and the engineers are in India. They used to transmit all their financial data across continents, and they were paying huge amounts in data transfer costs. Now they don't have to ship that data off site to India at all. They leave it in the data center in London, and the Indian engineers work on that data remotely, using resources that are actually in the London office. So that paid for its implementation just from the data-shipping savings alone. A question in the back?

AUDIENCE: [INAUDIBLE]

GARY RADBURN: Absolutely. The only limitation is that you have to have the same profile on each GPU. Remember, I said the M60 has got two GPUs inside it. For that implementation, I could say there's 8 GB of memory on that GPU, so I can have four users at 2 GB each. But I couldn't have one user at 4 GB and then the rest of the users at 2 GB-- that's what setting up the profiles means. However, remember I've got that second GPU. I could actually have that as a pass-through profile. So that could be an 8 GB dedicated GPU, and I could still have the other four users sharing the other GPU. So I've now got banks of different utilizations of the GPU, depending on the profile.
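The per-GPU rule just described -- every VM sharing one physical GPU must use the same profile size, though each GPU in the card can run a different one -- can be checked mechanically. This is a hedged sketch of that constraint, not vendor tooling; the 8 GB framebuffer figures mirror the M60 example above, and the function name is mine.

```python
# Hedged sketch of the homogeneous-profile rule for one physical GPU.

def valid_gpu_assignment(framebuffer_gb: int, profiles_gb: list[int]) -> bool:
    """An assignment is valid if every VM on this GPU uses the same profile
    size and the profiles collectively fit in the GPU's framebuffer."""
    if not profiles_gb:
        return True  # an unused GPU is trivially valid
    same_size = len(set(profiles_gb)) == 1
    fits = sum(profiles_gb) <= framebuffer_gb
    return same_size and fits

# GPU 0: four users at 2 GB each -- homogeneous, and fits in 8 GB.
assert valid_gpu_assignment(8, [2, 2, 2, 2])
# Mixing a 4 GB user with 2 GB users on the same GPU is not allowed.
assert not valid_gpu_assignment(8, [4, 2, 2])
# GPU 1: one pass-through user taking the full 8 GB is fine.
assert valid_gpu_assignment(8, [8])
```

Run per physical GPU, this reproduces the talk's layout: a shared bank of small profiles on one GPU and a dedicated pass-through profile on the other.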

So if there's a manager, for instance, who doesn't do any design work but has to sign off drawings-- so they need a workstation to actually represent that drawing-- I could give them a really small profile that's good enough for what they want today. And in your case, I could allocate one profile to a user one week, and then allocate them that 8 GB pass-through profile when they really need it, and check that one out. Excellent, I'll let you go. Thank you very much for your time.

[APPLAUSE]
