AU Class

A Hardware Wonk's Guide to Specifying the Best 3D and BIM Workstations—2016 Edition

Description

Working with today's Building Information Modeling (BIM) and high-end visualization tools presents a special challenge for your IT infrastructure. Wrestling with the computational demands of the Revit BIM platform (as well as related applications such as 3ds Max, Showcase, Navisworks Manage, Lumion, Rhino, and others) requires the proper knowledge to make sound investments in workstation hardware. Get inside the mind of a certified hardware geek as he explains in plain English the variables to consider when purchasing hardware to support the demands of BIM, along with the latest advancements in workstation processor and memory architectures, storage, and graphics. This year we will pay special attention to the latest advancements in graphics systems to understand how they meet the demands of high-end visualization, animation, and real-time rendering. Along the way, we'll look at specific hardware configurations to meet your specific budget. This session features Revit and 3ds Max. AIA Approved

Key Learnings

  • Understand the relative computing demands of each of Autodesk's AEC applications
  • Understand today's performance sweet spots to be found in processors, memory, storage, and graphics subsystems
  • Learn how to specify workstations for different classes of BIM usage profiles
  • Learn how to best shop for complete systems and individual components

Speaker

  • Matthew Stachoni
    Matt Stachoni has over 25 years of experience as a Building Information Modeling (BIM), CAD, and IT manager for several architectural, interior design, and engineering firms. He has been using Autodesk, Inc., software professionally since 1987. Stachoni is currently a BIM specialist with Microsol Resources, an Autodesk Platinum Partner serving New York City, Philadelphia, and Boston. He provides training, implementation, specialized consultation services, and technical support across a wide array of Autodesk architecture, engineering, and construction applications. Previously, Stachoni was the BIM and IT manager for Erdy McHenry Architecture, responsible for handling all digital design application efforts and IT support. Stachoni has experience providing on-site BIM modeling and 3D coordination services for construction managers, and also for HVAC (heating, ventilating, and air conditioning) and electrical trade contractors for a number of projects. He is a contributing writer for AUGIWorld Magazine, and this is his 19th year speaking at Autodesk University.

Transcript

MATT STACHONI: All right, so welcome to this class. Thank you very much for attending. I know that there's a lot of classes out there, a lot of competition for seats. So it was really nice to have everyone here this morning. So what's this class about? Well, if you had a chance to look at the handout, it's all about specifying hardware for BIM and visualization, basically. Mostly AEC applications, although if you're in the civil stuff with the dirt and all that other stuff, the hardware recommendations that I pass along are going to be basically the same. But I'm focusing mostly on Revit performance and 3ds Max, but also other applications like Rhino, Lumion, V-Ray, stuff like that.

So a couple of key learning objectives here. Understand how to specify workstation components. OK, so if you go out to Dell or HP or Boxx or any place like that, you have different lineups of different machines. What are the guts of the machine? What's really going on under the hood? And a lot of people who don't know a lot about hardware just sort of pick a box and hit the send button. They buy it, it ends up at their desk, and it doesn't have half the RAM they need, it's the wrong processor, it's not optimized for what they want to do. So this class is about what you need to know to specify the right parts.

And also understand where the sweet spots are. Because you can buy any machine with a lowly processor or a very, very fast processor, but your spending curve shoots up as you go toward the faster stuff, which gets insanely more expensive. So the sweet spot is that point where, yeah, I'm willing to spend this much money for this much performance. But if I go one tick higher, the price just jumps off the scale. So understanding where that best value lies is important.
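
A quick way to spot that sweet spot is to compute the marginal dollars per unit of performance at each step up a product line. Here is a minimal Python sketch of that idea; the part names, prices, and benchmark scores are made-up placeholders, not real products or quotes.

```python
# Hypothetical CPU lineup: (name, price in USD, relative performance score).
# All numbers are illustrative placeholders, not real benchmarks or prices.
lineup = [
    ("Entry",      250, 100),
    ("Mainstream", 340, 130),
    ("Enthusiast", 600, 150),
    ("Extreme",   1100, 165),
]

prev = None
for name, price, perf in lineup:
    if prev:
        extra_cost = price - prev[1]
        extra_perf = perf - prev[2]
        print(f"{name}: ${extra_cost} more buys {extra_perf} extra points "
              f"(${extra_cost / extra_perf:.0f} per point)")
    prev = (name, price, perf)
```

The step where the dollars-per-point figure jumps sharply is the point just past the sweet spot.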

And also understanding how to buy or specify a workstation for specific users. I know a lot of people-- a lot of IT guys-- don't really think about what the user is doing. They just sort of go, I've got to buy 20 boxes for 20 people, and that's it. Whereas you might have one guy who's spending most of his time in visualization, and he doesn't get a machine that's powerful enough for that. So you want to make sure that you know what users you have, what their capabilities are, and what their capabilities are going to be in about three years, so they can grow into their system if they need to. But also, don't buy too far down the curve or too high up the curve, depending on what you need. And then also how to shop for individual components and complete systems-- where are some good places to look, things like that.

So there's a lot of stuff in this session. We are here for an hour and a half. OK, let me get that right first. Because last year I taught a class, I thought it was an hour and a half, but I only had an hour. So we've got an hour and a half, which is good, because there's a lot of stuff to cover here. At the very end, I'm going to have a couple of system builds that I put together for you so you can sort of compare and contrast different systems for different kinds of classes of people.

So when you're looking at a computer, or looking at a new workstation, your limitation isn't really how fast the machine is. Nobody sits there and goes, wow, I can't think as fast as this machine. Everyone's mind operates faster than their workstation, especially if you're working in Revit. So being able to specify a workstation that works for you, that doesn't keep you just hanging all the time whenever you change a view or render something, is important.

So stuff we're going to talk about in depth. We're talking about Central Processing Units or CPUs, which are the brains of the computer, graphics cards or GPUs, system memory, and mass storage. Those are the four components that you really need to worry about. Nobody uses optical drives anymore, so you don't have to worry about DVD or anything like that. Everything else is pretty much secondary. We'll also talk about peripherals a little bit. The handout has a lot of my biased opinions on certain little components and stuff like that.

But we're also going to talk about buying guides too. When you're specifying a machine, what does all that stuff mean in that wall of text right there? What stuff matters? What stuff doesn't matter? What's a good base configuration to work from? OK, so I'm going to talk about specific components, specific manufacturers, things like that. These are based upon personal preference.

They are in no way, shape, or form meant to endorse anybody. Nobody pays me, unfortunately. Is anybody here from NVIDIA or Intel? Damn. All right. Just because if you did want me to say something nice about your product, if you wanted to send it to me to evaluate, I would do so. You know, I have standards, but they're really low. So we can go. Darn. Every year I try to ask that question, and every year no one's there.

All right, so how many people here are using something in the Building Design Suite or in the AEC collections? Pretty much everybody. Who here is on the AEC side or the architecture side, I guess I should say. How many people are on the Civil side? How many people are using-- I'm assuming most people are using Revit on the architectural side. Good. Navisworks? 3ds Max? OK, a couple of people. Good. Lumion? OK, a couple of people here. Good. I got to work with Lumion quite a bit in my last firm. It was a gas to use. Really easy. Really expensive, but really powerful. Rhino? Couple of people. V-Ray? Good.

How many people are creating renderings on pretty much a daily basis? OK, good. How about animations? A couple of people there. Good. How many people are running hardware that's two or three years old, and you're wondering what you should do to make everything faster? OK, good. That's what I wanted to see. How many people don't know what to buy? Obviously most people here, because you're here.

OK, so if you're looking to buy a new system, the first thing you have to understand is what's happening in the industry right now that's driving where you need to be. So on the AEC side, basically you've got everyone modeling everything all the time. And the number of people using BIM tools, whether it's on the design side or especially on the contractor side, is increasing at a really high rate. So it's pretty much evenly diffused throughout the entire industry. If you're not on Revit or on a BIM platform, you're going to be within two years, or you're not going to really be doing a whole lot of business.

So we have everybody using Revit now. We have obviously new hardware capabilities coming down the pike every year that allow for some heavier problem solving. It also allows you to do things where you're doing more of what they call parallel processing, such as rendering, modeling, and things like that, where you're doing more than one thing at one time.

We also have distributed computing with the cloud. So we have cloud rendering. Everyone's probably used that at some point. It's fantastic. It's basically changed a lot about how you think about buying a new workstation. If you're going to be looking for a heavy rendering machine, maybe you're not going to look at that anymore, because you can do a lot of your rendering in the cloud. It saves you a lot of time and a lot of effort and a lot of money at the same time.

How many people play video games? How many people do not play video games? OK, I'm sure half of you are lying, because everyone plays video games. But the nice thing about games is that the technology driving the hardware side from a market standpoint-- faster processors, faster graphics, more memory, things like that-- is really largely being driven by the gaming industry, because games are getting more and more cinematic, more and more crazy, more and more realistic. So all of us on the design side are directly benefiting from that advancement.

A lot of people are running more than one application at one time, right? You're going to have Revit open, you're going to have Max open, you're going to have Photoshop. All that stuff consumes a lot of memory, disk space, processing power as it's running. And also Windows itself is a little bit of a memory hog, because you've got services running in the background, antivirus, all that kind of stuff. And more and more of these applications are multi-threaded so that you can take advantage of more than one core in your processor.

The problem, though, is that we are getting to a point where every year the hardware increases are getting smaller and smaller. So in 2015, CPUs went down to what they call a 14 nanometer process size. And I'll explain what that means in a little bit. But that's essentially getting down to the point where you can't really make the chips any smaller. You can't shrink the size of the transistors. We're just getting down to the point where it's getting really, really crazy on the transistor side, and they have to come up with more and more exotic technologies to shrink things any further.

In 2016, on the graphics side of things, GPUs also came down to a 16 or 14 nanometer process. They were stuck at 28 nanometers for a while. And I'll explain what this means. Basically it means that GPUs and CPUs are smaller, which means that you get more transistors on the die, which means that they can do more stuff. They're running cooler. The smaller the process, the cooler the chip, and usually the faster it'll run, because if you've got a cooler chip, you can ramp it up so that it's faster and still keep the thermal load acceptable.

We also have the rise of what I call GPGPU, or general-purpose computing on the graphics card. And we'll talk about that a lot when we get into the graphics card section. But a lot of things happening on the gaming engine side with Unreal Engine 4 and Stingray, Unity, things like that are directly affecting the AEC side, because now you can use those engines to basically drive all your visualizations. So it's a pretty exciting time on that side.

Virtualization and virtual desktop infrastructure, or VDI. Anyone using a VDI system now? Basically, there's a server sitting in a closet somewhere, or in a data center, and it just serves up your desktop for you. Your desktop can be one of those little tiny-- Anyone take a lab yet? Anyone have a lab yesterday? If you took a lab, you might see instead of a big box sitting at your feet, you now have this little tiny box about this size.

And the labs are running-- or at least the lab I was in was running everything off of the Amazon cloud. So the Amazon cloud was serving up your desktop. And me and my partner were walking through a lab, and my partner goes to put a USB thing in and turns it off and reboots the whole system. And we're like, OK, we're screwed. But then the guy came over, reset everything, and we were right back to where we were.

We didn't lose anything, because it's all processing in the cloud. We're just there as a simple conduit to see it. Speaking of the cloud, obviously you've got cloud-based services with Microsoft Azure, Amazon EC2. The cloud is the perfect place to do high demand rendering, because you can put thousands of processors against a rendering problem and not just four or eight or whatever. So it's kind of a waste of money to try to compete with that, because you're never going to be as fast. So as cloud rendering especially becomes more and more prevalent, it really does change the landscape of how things happen.

There we go. Let me back up a second. OK, so I was talking about transistors and how we are kind of quickly reaching a pinnacle of what we can do in traditional transistor and microprocessor design, which is really the reason that drives much of the conversation that we're having today. So there's this thing called Moore's law, which I'm sure everyone's heard of, that basically says the number of transistors doubles every 18 or 24 months.

And it works by shrinking the size of a transistor. And I'll explain what a transistor is in a second. But basically, when you shrink that over time, you can fit more of them on a chip. And you'll notice that this scale runs pretty linearly in terms of the direction of this arrow right here from 1971 up to recently. But the scale on the left hand side is logarithmic. It's an exponential function. It goes zoom like this. So if you double everything every couple of years, it pretty much gets crazy after a while.

The problem is that you shrink these things down so much, and you're literally running out of atoms. At 14 nanometers, I think the size of a transistor is about 70 silicon atoms wide. That's crazy. A transistor works like this. You have a source on one side, you have a drain on the other, and then you have a control gate across the top. And it's an electrically-driven switch. So when you apply a voltage to that gate, it creates an electric field, and in the channel underneath it, electrons congregate and close the switch. So the process size that we talk about right there is the distance between the source and the drain.

So like I said, we're currently at 14 nanometers for the high end CPUs, and about 14 to 16 for the GPUs. So the problem is that as you see this trundling on, we were keeping pace with this fairly well until just recently. And to give you a sense of scale of how this works, in 1971 we had about 2,300 transistors on a chip at 10 micrometers, or 10,000 nanometers. And then in 2016, we've got over 1.5 billion transistors at 14 nanometers. So it's basically equal to shrinking yourself down to the size of a grain of rice.
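
You can run the doubling math on those two data points yourself to see how Moore's law plays out. A rough sketch, assuming a two-year doubling period (the 18-to-24-month figure mentioned above):

```python
# Intel's 1971 chip: ~2,300 transistors. What does doubling every
# two years predict for 2016? (Illustrative back-of-envelope math.)
start_year, start_count = 1971, 2_300
end_year, doubling_period = 2016, 2

doublings = (end_year - start_year) / doubling_period      # 22.5 doublings
predicted = start_count * 2 ** doublings
print(f"{doublings:.1f} doublings -> about {predicted / 1e9:.1f} billion transistors")
```

That predicts roughly 13 to 14 billion transistors, which is in the ballpark of the very largest 2016-era chips but well beyond the roughly 1.5 billion in a mainstream desktop CPU, one more sign that the curve flattens out at the small process sizes.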

Put another way, Boston Symphony Hall holds, as it happens, about 2,300 people during Pops season. And it's kind of like shrinking everybody down so you could fit the entire population of China, which is 1.3 or 1.4 billion people, into Boston Symphony Hall. That's the scale we're talking about in terms of how Moore's law has trundled on.

Now we're expected to go from 14 nanometers now to 10 nanometers in 2017. Whether we get there or not is up for debate. They're having a really, really hard time getting anything smaller than 14 nanometers to scale well. But they've got insanely smart people working on this problem. The problem is that when you get down to certain points, you get to a point where quantum tunneling comes into play.

And quantum tunneling is this weird effect where electrons will just show up on the other side. The probability of an electron crossing that gate becomes really, really high. And the gate has to get taller. It's kind of like a wall you're building between one side and the other. You're trying to keep the wall there so that the electrons don't go across until you tell them to. The problem is that when you're down at that level, they're crossing over anyway, because the probability of those electrons just showing up gets to be pretty high. So it's kind of weird. It's not kind of weird, it's very weird.

So when we're talking about other hardware trends or other trends that are affecting your hardware purchasing decisions, the idea of multi-processing, multi-threaded stuff is coming into play. We talked about how you can have a single program execute many threads of execution. So for example, Revit will pop out a thread for doing wall clean up. It'll pop out a thread for doing rendering. It'll pop out a thread for doing a view generation, things like that. Those threads can be put on different cores of your CPU, so they can all run concurrently.

You can also have problems when you're doing that. You have issues where you have to keep all of that straight, and that's one of the problems that Autodesk and other software developers have: this is hard to program. They have to keep all of this stuff straight, and it just consumes a lot of time on the development end. And so when people talk about how multithreaded Revit is, or how multithreaded AutoCAD is, or whatever, you have to understand that they're looking at certain aspects, certain performance targets they're trying to hit-- certain aspects that are low hanging fruit that they can easily spin off as a thread of execution, and other things that aren't quite so easy to do.

You can see multithreaded stuff happening in mental ray. If anyone has ever done a rendering in mental ray, you'll see these little white squares that will appear. Those are called buckets. And the number of cores you have in your machine indicates the number of buckets you'll see, because each one represents basically a core working on that particular square. And then each of those gets processed independently of the others, and then it creates your final image. The more cores you have, the more buckets you have, the faster your rendering is going to happen.
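
That bucket behavior maps directly onto ordinary multiprocessing: carve the image into tiles and hand one tile to each core as it frees up. Here is a minimal Python sketch of the pattern; it is only a stand-in for what a renderer does internally, not mental ray's actual code.

```python
from multiprocessing import Pool
import os

def render_bucket(bucket):
    """Stand-in for the expensive per-tile work a renderer does."""
    x, y = bucket
    return (x, y, sum(i * i for i in range(50_000)))   # fake workload

if __name__ == "__main__":
    # One bucket per 64x64 tile of a 512x512 image -> 64 buckets.
    buckets = [(x, y) for x in range(0, 512, 64) for y in range(0, 512, 64)]
    # The pool size defaults to the logical core count, so an 8-core/16-thread
    # CPU works 16 buckets at a time; more cores, faster final image.
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(render_bucket, buckets)
    print(f"rendered {len(results)} buckets on {os.cpu_count()} logical cores")
```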

Like I said, gaming engines are becoming a huge deal. This is a scene from a rendering-- or actually an animation-- a walkthrough. Not really an animation, it's kind of like a walkthrough game in Unreal Engine 4. Has anyone ever heard of that? Unreal Engine 4? It's basically the big mother of all gaming engines right now. The cool thing about Unreal Engine 4 is, first of all, it's developed at an incredibly aggressive pace. If you want to look at it, it's very easy to download and install. It's also free.

For visualization artists, you can download this, you can run it, you can go ahead and build your interactive animations in this thing for free, no charge whatsoever. Epic, who publishes Unreal Engine, charges gaming developers to use their code when they publish a game. So that's how they make their money off the engine. But for visualization artists, no problem. Obviously with Stingray and LIVE, which is basically the connector from Revit to Stingray, that's also available. It's also fairly inexpensive. I think it's $30 a month or something like that.

We have Lumion, which is kind of the granddaddy of this kind of stuff. It does use a gaming engine, but not a commercial gaming engine. They just created it for high end visualization. And so we have that kind of thing that's available to us. So we don't have to sit there and wait for a rendering for four or five hours. That's really 2010 kind of stuff.

And so obviously, the cloud has a big effect on everything, because again, if you can do all this stuff in the cloud in some way, shape, or form-- even building a rendering farm on EC2 is fairly cheap. So you don't have to put a rendering farm in your company, which ties up machines that somebody else might want to use. You can put all that stuff in the cloud fairly quickly, fairly inexpensively.

From a pricing standpoint, we also have what I call pricing compression, where every year the prices of things get so close together for double the performance or double the capability. So for example, a hard drive. A one terabyte hard drive I think is about $70, and four terabytes is $140. Why not just spend the extra money and get four times the amount of space for basically double the cost? So when you look at it from that side, it just makes sense to take the system one step up to do what you need.
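
The arithmetic behind that decision is just cost per terabyte. A tiny sketch using the rough street prices quoted above (2016 numbers, not current quotes):

```python
# Rough 2016 street prices from the talk -- illustrative, not current quotes.
drives = [("1 TB", 70, 1), ("4 TB", 140, 4)]
for name, price, tb in drives:
    print(f"{name}: ${price} -> ${price / tb:.0f} per TB")
# 1 TB: $70  -> $70 per TB
# 4 TB: $140 -> $35 per TB (half the cost per terabyte for double the outlay)
```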

So if we look at the AEC applications that most people use-- this is using the Building Design Suite-- what I did was gave you a chart right here that basically says for any of these applications, what does it stress? And for most things, a score of like 7 can be handled by any machine you buy pretty much off the shelf. If you went down to Best Buy and just sort of picked up a machine that was a medium-priced thing that was sitting on a shelf, it would handle anything up to about a 7 here. Anything above that, you really need to start shopping to optimize what's going on.

So for example, Revit, it really requires a fast processor. It also requires a lot of RAM. It wants a lot of hard disk speed, because the files are very large. Same thing for the 3ds Max Design. Here for the graphics card capabilities, a decent graphics card will work OK, but once you start using iRay for rendering, then you have to jack the card up. Because iRay works on the graphics card, so it stresses that. And this is in no way, shape, or form scientific. This is really just my personal impressions of what gets stressed on each application. But I think it's fairly accurate. I think it basically tells a story of the landscape.

So like I said, Revit, when we concentrate on that, that's the main bread and butter application that most people are using. It's stressing everything about your system in one way, shape, or form. It's very CPU-dependent, because it is a database application. So database applications stress the CPU a little bit differently from most others, because they typically prefer internal caches, like memory caches, like the L1, L2, and L3 caches, much more than other applications do. And Revit files consume a lot of system memory, obviously.

It is a parametric change engine, which means that when you make a change, it propagates through the entire database to make sure that change is OK. Whenever you do a synchronize with central operation, where Revit's opening that file on the server, pushing your changes to it, making sure those changes are OK, and then pulling down the changes that everybody else has made since your last synchronization, that's all processor-intensive, it's network-intensive, it's pretty much everything-intensive.

Because it's all about creating relationships. There are no shortcuts in Revit. Revit has to dot every I and cross every T to make sure that everything's going to work OK. Otherwise your building's going to fall apart. If you have something that's constrained in a way that conflicts with another constraint, all this stuff has to be double-checked.

It's computationally expensive. So that's why Revit is basically the hog of the bunch there. When we talk about specific CPU considerations and things like that, there is a Revit 2017 Performance Technical Note that you can download. I have a link to it in the handout. I'll probably post it to the AU site before the end of the week, so you have that available. There's a couple of supplemental documents I have too that I didn't post yet, but I'll post those before the end of the week. And like I said, Revit is somewhat multithreaded. It's getting better every year. As Autodesk finds places where they can go, oh, we can optimize this, or we can optimize that, they'll make it multithreaded. So things like vector printing and the new ART renderer are completely multithreaded. File opens and saves, stuff like that.

When it comes to views and graphics, views are live reports of the project database. So most people have several of these open at one time. Each view has its own display properties. A lot of these things consume a lot of memory. So a lot of the performance that you get out of Revit is driven by the hardware to a certain extent, but it's also driven by how you use Revit. Because the way you use Revit is a big variable for a lot of people.

Some people model everything. Some people model very little. Some people can use the efficiencies of the graphics system to do things like have families with a lot of masking regions and detail, or symbolic linework, so you don't have to show 3D stuff in a hidden line view, which consumes a lot of time on the graphics card to hide those elements. So the more optimized your Revit approach is, the better Revit will perform on your system. A good 600 megabyte Revit model will probably perform better than a bad 200 megabyte Revit model. So understanding the optimizations you can make when you're modeling and documenting things is really important. And a lot of that stuff is covered in the technical note.

Shadows specifically are a tough one, because they are actually now computed on the graphics card, which is nice. So the faster your graphics card, the faster your shadows will be. And they get better every year too, because when we talk about the specifications of your graphics card and the capabilities of that graphics card, a lot of stuff again from the gaming industry is moving onto the graphics card. So we can do things like shadows on the graphics card. It doesn't have to do it on the CPU.

So when we have ambient shadows, where Revit is calculating kind of like the dark area where two surfaces meet in the corners, that's actually a technology that was developed by Industrial Light & Magic for the crappy film Pearl Harbor. They won an award for that. But basically, that has trickled down again to our design applications, where ambient occlusion just happens. It's pretty effective. It really makes an animation or a rendering pop. But again, this is all stuff that's done on the GPU as well. So when you combine the two effects, you get some really nice views.

Like I said, it stresses RAM. It stresses the storage. It stresses the network connections, as you suck down a 600 megabyte central model every morning when you open it up and create a new local. That's why solid state drives are the big deal with Revit, especially. And the move to 10 gigabit networking is becoming prevalent now too. Most people operate on a gigabit network. So you get 1,000 megabits per second between you and the server on a good day. That's the theoretical limit. The practical limit is much lower than that. But now 10 gigabit networking is coming into play that still uses the same cabling, which is nice, and it boosts the transmission speed by 10. So now machines are coming with 10 gigabit networking ports, and you can get 10 gigabit switches and things like that for about the same price. So it's coming down.
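
As a back-of-the-envelope check on those numbers, here is the theoretical transfer time for that 600 megabyte central model at the two line rates. Real-world throughput is lower once protocol overhead, server disks, and other users get involved, so treat these as best-case figures:

```python
model_mb = 600                      # central model size in megabytes
model_megabits = model_mb * 8       # 4,800 megabits on the wire

for name, mbps in [("1 gigabit", 1_000), ("10 gigabit", 10_000)]:
    seconds = model_megabits / mbps
    print(f"{name}: ~{seconds:.1f} seconds at the theoretical limit")
# 1 gigabit:  ~4.8 seconds
# 10 gigabit: ~0.5 seconds
```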

We're talking about 3ds Max. When we talk about polygons and materials-- You know what? Hang on a second. Yeah, I'm sorry. I had a slide out of place. I thought I was in the wrong presentation. Sorry. So we're talking about polygons, right? Millions and millions of polygons in a scene. 3ds Max can handle that. When you take a file from Revit and throw it into Max, a lot of times those polygons aren't optimized. You have a lot of polygons that are naturally going to get hidden, because they're coincident with another face or something like that. So again, the modeling load is higher when you bring something in from Revit, because the model is automatically less optimized than if you just model every single polygon yourself.

And working with that in a viewport is also graphics-intensive. That's all handled by the graphics card. It also has to handle materials. Materials can be very processor-intensive as well, because they have physical properties about them. They record the UV mapping and the different maps that happen, the diffuse and roughness and all that other stuff. And then you also have procedural materials with stuff like Substance Designer, where the material is not using a bitmap at all; it's basically all computed pretty much on the fly. And that's also processor-intensive.

We work with lighting, where we're dealing with photometric physical lighting, calculating how that light actually works with the materials. So as light bounces off of something like this, where it's kind of diffuse, you're not getting a lot of shiny stuff. But if you bounce it off of a metal object, it is very shiny. Everything's working together here. And then we also have rendering. So that's where everything's coming together, and it has to process this stuff. And this is also very, very computationally expensive as well.

Most renderers, like the ART renderer, mental ray, and the new Arnold renderer that's now in 3ds Max, are CPU-bound. They do not use the graphics card whatsoever. The renderers that do use the graphics card are iRay and V-Ray RT. Not just normal V-Ray, but V-Ray RT. Those will use the graphics capabilities on a particular graphics card. We'll talk about that when we get there.

Let's get to Navisworks. How many people are using Navisworks? OK, good. It's really come into its own. Navisworks is much lighter in terms of hardware demands than Max or Revit, so anything you have that can handle either of those two applications is going to handle Navisworks just fine. Essentially what happens, when you bring a file into Navisworks, whether it's CAD or Revit, is that it turns it into an NWC, which is just geometry. But it's also a very, very simplified geometry. It's only the outside faces of that thing. So the geometry becomes very light. It's basically the geometry and then the data that's behind the BIM elements.

Any normal computer nowadays will handle these things just fine. Things like the Clash Detective and TimeLiner produce some very interesting results, but they're also very lightweight in terms of how they affect the overall system. And Navisworks now includes cloud rendering too. So if you're into rendering in Navisworks, you can do so pretty inexpensively.

And when using ReCap point clouds, point clouds are really tough on a machine. It's tough on the CPU. It's tough on the graphics card. It's tough on memory, because these files are huge. It's tough on disk space for the same reason. So you're talking about billions of points. You're talking about files that are gigabytes, sometimes terabytes, of data. So being able to process those things effectively requires you to basically max out pretty much everything on your system, especially storage.

So let's look at the hardware trends that are happening in 2016. Now, I'm going to be talking about Intel CPUs. Everyone's like, well, what's happening on the AMD side this year? The answer is, nothing's happening on the AMD side. Anyone here have an AMD-based computer? Probably not. You probably haven't had one for at least 10 years. AMD is coming out with a new architecture in 2017 called Zen, which is basically targeting Intel's mainstream and higher-end chips. So there might be competition next year in terms of what's happening on the CPU side. Right now, Intel owns everything from the low end to the high end. So that's why when I talk about-- oh, he's an Intel fanboy. I'm like, I don't have any choice. It's Intel or nothing to get the performance that we need.

What happened in 2015 was we had a die shrink. And I'll talk about what this means, but basically we have a new architecture called Skylake. When we talk about architectures and the models of chips and things like that, we talk about things in terms of code names. So we have things like Skylake and Broadwell, and Haswell from 2014, stuff like that. So when I talk about those things and as you read the handout, you'll understand what I mean by them.

So maybe you're running something that's kind of old but decent, like an i7-2600K, which was Sandy Bridge. Then we had Ivy Bridge, then Haswell. Now we have Skylake. So keeping track of all that stuff is kind of painful sometimes. Like I said, the Skylake architecture came around in 2015. And since then, they've been improving it with little tweaks here and there. So it's important to pay attention to your CPU specs when you're looking for stuff, because you can get decent performance gains over something that was released just a year or two ago.

On the graphics side, we have a new Pascal architecture from NVIDIA. And just like I talked about with the Intel CPUs, I'm also going to be talking a lot about NVIDIA GPUs here. Not that AMD doesn't make great graphics cards. But when we talk about certain things that are specific to Autodesk applications, then we have to start talking about NVIDIA stuff. And I'll talk about that in a little bit. But they also have a new architecture that is now in that 14 or 16 nanometer range too.

On the RAM side, we're looking at a new standard called DDR4. That's the new RAM standard. We'll be talking about that a little bit too. And now on the storage side, basically if you're buying a new solid state drive, 500 gigabytes is pretty much the standard main line minimum that you would want to get. Even one terabyte drives aren't that much more expensive anymore. And they're getting up there in size. So now you find two terabyte drives, stuff like that. So expect the mechanical drive market to sort of disappear over time. Anyone still running a mechanical drive? Like a rotating platter? Don't. Go out and get a solid state drive. Everyone's on solid state drives by now, at least in some way, shape, or form.

Let's talk about processors. Let's get to the meat of the stuff here. The first thing you have to do when you're talking about a processor is you have to talk about the platform. Excuse me for just a second. When you're talking about the platform, what you're talking about basically are the capabilities of the system. So you're not really choosing a CPU first, you're really choosing a platform. And there are three of them.

There's a desktop platform, kind of a run-of-the-mill kind of thing you'd find at Best Buy, there's what we call a high end desktop platform, and we also have a workstation platform. So the desktop platform is basically run on the Skylake series, what we call the sixth generation Skylake series CPU, such as the i7-6700K that I have listed there. It has a particular socket. It has a particular chipset.

Now, the chipset is the glue that basically connects the CPU to the rest of the system. And the chipset has capabilities that really flesh out the rest of the computer. So it determines how many USB ports you can have, and the audio, and connections for additional PCIe slots, and stuff like that. So the better the chipset capabilities, the better your overall system experience is going to be.

And for a long time, Intel put out really crappy chipsets. They were like, you had these great new shiny processors that did all these crazy good things, but they were hampered by a chipset that didn't have enough USB ports, or had terrible SATA configurations and stuff like that. So Intel finally got religion on that, and now they're putting out decent chipsets now. So we don't have to worry about those being hampered by that too much.

So each platform, each CPU in that platform has its own socket. That socket has a chipset behind it. And we work at it from that side of things. So when you're looking at your users, if you've got what I would call a BIM grunt-- the guy that is just sitting there doing Revit all day long-- a desktop machine is probably going to work just fine for him. The BIM guys, the really good guys that are running Dynamo and probably running Revit in conjunction with maybe Lumion or something like that, that's the next higher up sort of class of people.

Then you have the guys that are the visualization wizards, or VizWizes, as I call them. And they need a machine that is completely on a different planet from the other two, because they're running high-end rendering applications; they're really taking things to the max. But at the same time, a BIM grunt can turn into a VizWiz in fairly short order with all the stuff that's happening with software.

So when you're looking at stuff, if you go to buy a machine yourself or build a machine out yourself, you're looking at stuff like what's on the motherboard, which is basically the big circuit board that everything plugs into. So you're looking at things like the number of PCI slots. You're going to have at least one for the graphics card. But if you want to run more than one graphics card-- which is becoming more and more popular-- you have to look at the capabilities of your motherboard. If you try to put two or three graphics cards in a little tiny form factor box, that's not going to happen. So you have to pay attention to certain aspects of that.

How many people here build their own systems? Maybe for home. How many people go out to Dell or Boxx or someplace like that, and they have a corporate vendor that they look to all the time? Yeah, most people do that. So if you're in that camp, you're not going to really have any choice over the motherboard. Dell doesn't give you a list of motherboards to choose from.

And you do have to do a lot of research to find out if that particular machine will handle some of the things you want to do, because a lot of times, they put out stuff that you get it, and you're like oh, I can't put a second or third graphics card in here, because I don't have the capabilities. So pay attention to look for the technical specifications. Couple new things that are happening down the pike here on the platform side. I'll get into this, but we have a new M.2 slot for new SSD form factors. You know, Wi-Fi onboard, more than one gigabit port, or 10 gigabit ports, things like that.

There we go. When you're looking at a motherboard, the platform again is going to drive a lot of the capabilities of that motherboard. So the platform will tell you things like how many RAM slots you have. A desktop platform is probably going to come with four slots. You get up to the next high-end desktop platform based on the Broadwell CPU, and you're going to be looking at eight slots for RAM. Same thing on the Xeons as well.

But you also want to look at the PCI expansion slots. Again, if you're looking at multi-GPU kind of stuff, using iRay especially. So when you do that, you have to look at the different kind of CPU you have. Because some CPUs in this class that I'm showing right here have 40 PCI lanes. Some have 28. And the way that those lanes get configured is in terms of the number of slots you have based upon their speed, or the number of lanes going to each slot.

And usually these big slots that you have here are what they call x16 slots, the long ones. The more lanes, the faster that particular connection will be. Now, the thing about this is that PCIe 3.0 is basically double the bandwidth on a single lane of what PCIe 2.0 had a couple of years ago. And the thing about this is that we're not really constrained on that at all.

Like, the graphics card really doesn't pump that much data across the link, so much that it would choke it. So when you're looking at the slot configurations, if you put one graphics card in here, it's going to run at x16. If you put two, they're both going to run at x16. If you put three, one's going to be x16, two are going to be x8. Or I'm sorry. Two are x16, one's x8. And so on and so forth.

But even here, if you stuffed five graphics cards in a system and they're all running at x8 speed, that's as fast as x16 speed on PCIe 2.0, which was fine. So you don't have to worry about that. x16, x8, it's the same performance. You're not going to see anything happen there. But the idea is that the number of PCIe lanes in the CPU determines this number here, and it determines what a manufacturer can do in terms of laying out those PCIe slots. When you get into the four and five slot range, you're looking at extended ATX form factors, bigger motherboards, bigger cases, bulkier stuff.
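
The reason x8 versus x16 is a non-issue on PCIe 3.0 comes straight out of the per-lane bandwidth math. A small sketch using the commonly cited effective per-lane rates (roughly 500 MB/s for PCIe 2.0 and roughly 1 GB/s for PCIe 3.0, per direction):

```python
# Approximate effective bandwidth per lane, per direction, in GB/s.
per_lane_gbs = {"PCIe 2.0": 0.5, "PCIe 3.0": 1.0}

for gen, lane_bw in per_lane_gbs.items():
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: ~{lane_bw * lanes:.0f} GB/s")
# PCIe 2.0 x16 is ~8 GB/s, and PCIe 3.0 x8 is also ~8 GB/s --
# which is why dropping a card to x8 on a 3.0 board costs you essentially nothing.
```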

All right, and so we talked about the desktop strategy, or the strategy for what Intel and AMD do on their CPUs. The desktop strategy basically is a four core processor. The high-end desktop or HEDT range has anywhere from six up to 10 cores. And then on the workstation side with the Xeons, you're talking about anywhere from four up to 18 cores and beyond. So the number of cores you have is predicated on the platform you decide on.

We also have hyperthreading, which essentially doubles the number of logical cores that the operating system sees. So if you have a four core system, Windows actually sees it as an eight core system, or eight logical CPUs. And on AMD's side, their performance level right now is at what Intel would call a Core i3 or Core i5 level, sort of a low to medium level of performance, which is why they're really not competitive on the AEC side. When we're talking about Intel's chips and we're talking about the i7 with four cores plus, that's where we are right now. So AMD really can't touch the Core i7 or the desktop range that we have on the Intel side.

But like I said, in 2017 we've got this new Zen architecture coming out, and hopefully the whole thing blows wide open. This is the reason why prices are so high for CPUs. When you look at a price for a CPU, just the CPU by itself, and you see it's $500, $600, $800, over $1,000, basically it's because there's no AMD here to keep them honest, keep them happy, keep them sane. So the prices are going through the roof just because there is no competition. So once we get that in there and it performs OK, we're going to be looking at lower prices.

Now, what we've had over the past couple of years is what Intel calls a Tick-Tock development strategy. And there are two parts to this. A tock starts out with a new microarchitecture, an architecture based upon a certain process size-- 45 nanometers, 22 nanometers, 14 nanometers, and so on. The tick is where that process is shrunk while using the same microarchitecture.

And it also gets another code name too, so that's another thing you have to keep track of. So Sandy Bridge was a new microarchitecture in 2011. In 2012, there was a new, shrunken process version of the Sandy Bridge microarchitecture called Ivy Bridge. So Ivy Bridge was basically just a shrunken Sandy Bridge. It's not really any different internally. I mean, there's tweaks and things that they fix and so on and stuff like that. But it's not earth shattering. It's just basically a die shrink. Those die shrinks give you more transistors on a chip. They run cooler. I think it makes the system more efficient.

Now, that was pre-2016. As we get down to 14 nanometers-- and Intel's really having a hell of a time getting anything smaller than that-- what they've done is they've said, OK, we're not going to be turning out a new microarchitecture or a newly shrunken process every year. We'll do this Tick-Tock-Optimize thing. So it's kind of like, yeah, we'll put out a new chip this year. It's no real different from last year's. It's kind of like a service pack of last year's stuff. So we just slightly tweak the existing stuff with slight improvements. So that's again marketing stuff. So when Intel comes out with a new CPU and it's this whiz bang thing, when you look at the specs, you're like, hey, that's not any different from last year's. Maybe it's got a couple of little things that are different, but nothing earth shattering. It just gets Intel off the hook from having to do this every year.

So what makes a good CPU? So when you're looking for a CPU, what is it you're looking for? Obviously you're going to be looking at the platform itself. You know, the higher end desktop platform offers you the ability to put in more memory, more graphics cards, because of the additional PCI lanes. A lot more stuff there. Then you go to the Xeon side or the workstation class, which you may have to do if you're shopping for a system from say Dell or HP or something like that.

But the things to look for are basically the same no matter what the platform. First of all, the number of cores. You always want at least four, or shop for a system with at least four. That's why, when I give you recommendations on CPUs, I'm not looking at the Core i3 or the Core i5 models, because they only have two. I'm looking at at least the Core i7 that has four.

We also have the number of threads, or does it have hyperthreading? So you'll see the number of cores and number of threads given as something like 2/4 or 4/8 or 8/16. The first number is the number of physical cores. The second number is what you get when you toss in hyperthreading, which I'll talk about in a minute. And then we also have the core clock speed.
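
You can see that physical-versus-logical split on your own machine. A small sketch: the logical count comes straight from the Python standard library, while the physical count needs the third-party psutil package.

```python
import os

logical = os.cpu_count()            # what Windows reports with hyperthreading on
print(f"logical cores: {logical}")

try:
    import psutil                   # pip install psutil
    physical = psutil.cpu_count(logical=False)
    if physical:
        print(f"physical cores: {physical} "
              f"({logical // physical} thread(s) per core)")
except ImportError:
    print("install psutil to see the physical core count")
```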

And now, it used to be back in the olden times you bought a two gigahertz processor, and that was it. Just two gigahertz. And then what you did if you were a serious enthusiast about this stuff, you figured out how to tweak it on your system and make it run faster. Because when Intel produces a chip, they produce the same chip, but they can give it different speeds.

They do what they call binning. They test them and say, OK, this chip runs really well at this speed. When we crank it up to this speed, it doesn't run so well, or it just errors out or whatever. So they throw that in the 3.2 gigahertz chip bin. Then they'll have other ones that might be on the same piece of physical silicon, and this chip runs better than that one. It just runs faster. So they'll put that in the faster category. They bin them.

What they'll also do too is they'll look at the number of chips they have in each bin and go, we don't have enough in this bin, so we'll take one of these faster ones and put it over here. So what that means is that you have a chip that can physically run faster than it's rated for. So if you can overclock stuff using your motherboard BIOS, you can tweak some stuff in there and undervolt it and overvolt it and things like that, you can boost the system performance just for free.

So it really is a crapshoot in many ways. A lot of it depends on the microarchitecture of the CPU. A lot of it depends on the BIOS. If you're buying a Dell machine, you're probably not going to get a lot of tweaks there in terms of being able to overclock it. It's this speed and that's it, because they've locked the motherboard stuff down. If you put a system together yourself, or you get a system from a vendor like Boxx, who basically puts off-the-shelf parts together and calls it a system, you can then get in there and really tweak stuff out. So being able to increase your performance for free is sometimes a nice thing.

But also, you're going to see the core clock speed. That's the guaranteed speed. So if you get a 3.2 gigahertz processor, it's always going to run at 3.2. It will also have a turbo boost speed, or a boost speed. And that's usually a couple of hundred megahertz faster. That's basically where Intel's developed the technology that allows the processor to overclock itself to a higher speed. So you're going to get overclocking for free anyway. It's just a matter of looking at what those two numbers are. And I'll talk about turbo boost in a second.

You're also going to look at the L3 cache size. There's three caches on a chip. There's the L1, L2, and L3. L1 and L2 are on the core itself. In a multicore system, each one has its own L1 and L2 cache. There's three of them, because the reason you have a cache on a CPU is because the CPU has to talk to memory. Memory is really, really, really slow compared to the stuff you have on the CPU. So what Intel did and what others have done is they put memory on the die that's running as fast as the CPU is running. But the faster it runs, the more heat it generates and everything else.

So the L1 cache is running basically at CPU speed. And so it's the fastest, but it also has to be the smallest, because you can't have a whole lot of L1 cache on a chip. So it's only got something like 8K or something like that. I can't remember the number. But basically, when the CPU is looking for data, it checks those caches in order.

And then if it doesn't find it in the L1 cache, it then goes to the L2 cache. The L2 cache runs a little bit slower than the L1 cache, but there's more of it. So there's a good chance that if you don't get a cache hit on the first L1 cache, then you'll find it in L2. And if it doesn't find it there, then it'll go to L3. Now, the L3 cache is a big area of this microprocessor that's basically shared between all four cores, or eight cores, or however many cores you have.

So if it doesn't find it in L1 and L2, it has a much better chance of finding it in L3. And if it doesn't find it in L3, it has to go back to system memory, which is very slow. So what it does is it grabs the stuff from system memory, puts it into the cache, and then tries to move it up the chain to L1 the more it's accessed. So that's why cache is really important. Without that cache memory, your machine would run like a dog. It really is the one thing that keeps the CPU fed. And that's the idea behind microprocessor design: keep the CPU full of data. Don't let it sit there and spin with nothing going on.
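
That lookup order is easy to picture as a tiny simulation: check L1, then L2, then L3, and only then pay for the full trip to main memory. The cycle counts below are ballpark figures for illustration only, not measured numbers for any specific CPU.

```python
# Ballpark access costs in CPU cycles -- illustrative, not vendor specs.
LEVELS = [("L1", 4), ("L2", 12), ("L3", 40), ("RAM", 200)]

def load(address, caches):
    """Walk the hierarchy; return where the data was found and cycles spent."""
    cycles = 0
    for name, cost in LEVELS:
        cycles += cost
        if name == "RAM" or address in caches.get(name, set()):
            return name, cycles

caches = {"L1": {0x10}, "L2": {0x10, 0x20}, "L3": {0x10, 0x20, 0x30}}
for addr in (0x10, 0x20, 0x30, 0x40):
    where, cycles = load(addr, caches)
    print(f"address {addr:#x}: found in {where} after ~{cycles} cycles")
# The miss that goes all the way to RAM costs an order of magnitude more
# cycles than an L1 hit -- which is why the caches keep the CPU fed.
```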

AUDIENCE: [INAUDIBLE].

MATT STACHONI: No. It's basically on the processor. It's part of the architecture of the CPU. There are other caches you can do yourself, like a RAM cache or something like that, which is a completely separate thing. It's all software based. Also, the maximum RAM you can support. Desktop CPUs typically support up to about 64 gigs of RAM. That's up from 32 from two years ago, so that's a nice thing, especially as 16 gigabyte modules come into play. And I'll talk about that in a minute.

But now depending on the platform, desktops are at 64. I think HEDT or high-end desktop is like 128 gigabytes. On the Xeon side or the workstation side, you're talking about like 1.5 terabytes of addressable RAM. And that's because the Xeon class stuff is really not only for workstations, but it's really for servers. That's really where it gets most of its capabilities from, from that thing.

And then you also have to worry about this, the inclusion of an Integrated Graphics Processor or IGP. If you've got say a Skylake processor or a Haswell or a mainstream desktop CPU-- or especially a laptop-- you probably have an IGP, basically a graphics card on the CPU. Now that has its own set of problems, because it's an Intel thing, and it usually runs terrible. I mean, if you tried to run Revit on it, it may work, it may not.

And then you also have a discrete graphics card you're going to put in your system too. So keeping those two straight from a software standpoint is really kind of difficult. So you have to pay attention to the driver settings and basically make sure that Revit's running on the discrete GPU and not the integrated GPU, and it's just kind of a mess. So for that reason alone, I tend to steer most people towards like the high-end desktop or the higher order Xeon workstations, which do not have an IGP.

Because that just removes a point of pain out of that. Anyone have an IGP and have issues with it? I had a series of lab laptops at my work, and they did have this. And trying to run Revit was a pain, because they were lower order laptops. You couldn't disable the IGP, or you couldn't easily steer it. The drivers were terrible, and it was just hell on wheels.

When you're looking at CPUs, go to ark.intel.com and put in the CPU. It's easy to navigate. And you get a nice way to compare CPUs side by side. So you can see-- like, this is looking at all the sixth generation Skylake stuff. And so you can see the i7-6700K, the 6700, and so on and so forth. And you're looking at these different things like the price. And you'll see a lot of stuff is the same. Like four cores, eight threads, hyperthreading's in there. You can see the base frequency here is just a little bit more than that one, and that one's a little bit more than that one, and so on. But you're also looking at the price jumps as they go up. So it's all a game of comparing the capabilities of the CPU with the price and seeing what's the best value for what you get.

Want me to go back to that? I'm sorry. We talked about turbo boost. I talked about this earlier, where the chip can overclock itself. The way it does this is, say you have a four core chip. If you look at the specs for the CPU, you're going to see a thing called turbo bins. And it's given as a series of four numbers, like A, B, C, D. And each one of those gives you basically a multiplier, in 100 megahertz increments, that gets applied depending on how many cores are active.

So in other words, here I've got the 6700 Skylake CPU, which has a base frequency of 3.4 gigahertz, or 3,400 megahertz. And there's a turbo bin of 3/4/5/6. What that means is that when there are four cores active, when it's really just cranking, it'll bin that thing up three times 100 megahertz on top of 3,400. So the numbers that you get depend on the number of cores that you have active.

So what happens here is that you have a turbo bin of three, four, five, six. The first number is what happens when all four cores are active. The second number is when three of them are active. The third is when two cores are active. And the last number is when only one core is active, or when you're just doing single threaded applications. So you'll see here that it'll basically do four gigahertz, because you're adding 600 megahertz on top of 3,400.
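
Worked out in code, the bin math looks like this. A quick sketch using the 6700's numbers as quoted in the talk (3,400 MHz base, bins of 3/4/5/6 in 100 MHz steps):

```python
base_mhz = 3_400
turbo_bins = [3, 4, 5, 6]   # +100 MHz steps for 4, 3, 2, 1 active cores

for active_cores, steps in zip((4, 3, 2, 1), turbo_bins):
    clock = base_mhz + steps * 100
    print(f"{active_cores} core(s) active: up to {clock} MHz")
# 4 cores -> 3,700 MHz ... 1 core -> 4,000 MHz, the advertised turbo clock
```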

So that's what that turbo boost number means, that boost clock. You're only going to hit that boost clock whenever you're doing single threaded stuff, which means no Revit, no 3ds Max, that kind of thing. But most games, for example, are single threaded. If you're just surfing the internet, it's single threaded. And so on and so forth. It just depends on what your workload is.

When we talk about hyperthreading, what we are talking about basically is the ability of the chip itself to-- the chip pipeline is this very deep pipeline where data comes in and is processed through. Well, that pipeline can be so deep that it's not efficient. So what Intel does with hyperthreading is it basically opens lanes for threads to move through concurrently, so it can handle two threads at one time. And when two threads are happening at one time, the operating system sees it as two cores for the price of one.

It's like opening up another checkout line at the supermarket. I go to the supermarket. Never fails, the person in front of me is writing a check, has to have a price check or something like that, and I'm just stuck. Well, when the cashier in the next lane takes pity on you and says, my lane's open, and you steer over there, that's exactly what's happening with hyperthreading.
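If you want to see what the operating system sees, a quick check like this shows the logical core count doubling the physical count on a hyperthreaded chip. This assumes you have the third-party psutil package installed; it's just a sanity check, not part of any Autodesk tooling.

import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)  # real cores
logical = psutil.cpu_count(logical=True)    # what Task Manager shows
print(physical, "physical cores,", logical, "logical cores")
# On a hyperthreaded quad core like the 6700K this prints: 4 physical cores, 8 logical cores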

Hello? There we go. Got to run this manually now. All right, let's see what happens here. OK, so when we're talking about the sixth generation Skylake stuff for the mainstream desktop, it's basically 14% faster than 2014's Haswell. It's got full support for DDR4 memory. It will also support a special low voltage DDR3 memory, but you have to be really careful with that. So just go with the new DDR4 standard; that's where it's at.

It supports the new USB 3.1 Type-C port, Thunderbolt, some newer connections happening there for faster, easier ways of connecting peripherals. WiGig wireless docking, 4K wireless video, stuff like that. Integrated sixth generation graphics. Now, we had a thing happen in Haswell where we had this new thing that Intel came up with called TSX. And my thing has died here. Cool.

Had a thing here called TSX. TSX was interesting, because what it did was essentially optimize how the processor handles multithreaded stuff. Which was great, but it required the application software developer to write for TSX. So as far as I know, we haven't had any TSX instructions written yet in Autodesk applications. But the idea is-- and I talked about this in the handout quite a bit-- that if you write TSX code and your processor handles TSX, multithreaded applications will run much faster. So that's a promising thing that's happening on the CPU side. The problem was that they had a huge bug in the Haswell series, and they disabled it across the board. And then they re-enabled it in Skylake and later CPUs.

The nice thing about the desktop lineup is there's only one choice. When you look at all the stuff that's available from Intel, and you look at everything, what's available, what should I pick, there's only one choice. And that's the 6700K. It runs at 4 gigahertz as a base clock speed, which is fantastic. It has a turbo speed of up to 4.2. So you only get 200 megahertz of overclocking, but your base clock speed is so fast, it doesn't really make a difference. And it has all the features that you need.

The one thing about it is it has this K suffix. And that K suffix means that it is unlocked. What that unlocking means is that if you put it in a motherboard, you can tweak it out even further. You can overclock it. The 6700 with no K actually is running at a much slower clock speed, and it's also not overclockable. So when you're looking at a system by Dell or HP or something like that and you look at that 6700 that they offer, understand that that's a slower chip than the 6700K. It's the same number, that one K just makes a big world of difference.

That's disappointing. And like I said, look at that price, that $339 price right there. I'll come back to that a little bit later when I talk about other chips. With the sixth generation i7, you also get this Z170 chipset. It's detailed here. Basically, it's connected to the processor with what we call a DMI 3.0 link, which is double the capacity of what they had last year. Basically, the chipset will talk with the CPU two times faster than it did last year. A lot of this stuff is pretty technical. But basically, the idea is that you get 16 PCIe lanes from the CPU itself, which means that the first graphics card you put in is essentially running right off the CPU. The chipset then adds up to 20 more PCIe lanes for additional cards and devices. But again, that's a chipset-specific function.

The next step up from the desktop side is what we call the high-end desktop, or HEDT. And this is using a chip called Broadwell-E. And Broadwell-E is basically-- you can almost think of it like a cut-down Xeon E5. Really, it's almost the same thing. But essentially, you get up to six, eight, and 10 cores instead of just four. You don't have any integrated graphics processor stuff to deal with there. Like I said, these are true six, eight, and 10 core designs. You get 12, 16, and 20 threads with hyperthreading.

It does use DDR4 DIMMs. It has what we call a quad channel memory controller built into it. And that memory controller is important, because on the desktop side in the Skylake stuff, it's a dual channel memory controller. And this is a quad channel memory controller, meaning that the bandwidth to main memory is twice as much as it is on the desktop side. Now, the bandwidth doesn't really mean a whole lot, because real world applications really aren't constrained by memory bandwidth.

But what it means is that you have to install RAM in fours instead of twos. And the reason you do that is because if you don't install them in fours, the memory controller drops back to fewer channels and you lose that bandwidth. (The DDR, by the way, stands for double data rate.) So what happens is if you don't install RAM in an optimized fashion, you can actually cut your performance down.

So the rule of thumb is, if you're running a desktop CPU with a dual channel memory controller, you install RAM in pairs. If you look at a motherboard, it will have colored slots. So what you want to do is fill up one color, and then move to the next color. The two colors can have different RAM capacities from each other, but within the same color-- the same set of two or four channels-- the modules all have to be the same capacity.

So if I have an HEDT system here with eight slots, I'll have four blue and four black. I have to put four blues of a certain capacity, and I can put four blacks of the same or a different capacity. So when you're optimizing your RAM, you have to pay attention to this, because if you rely on Dell or HP to do it for you, it won't happen. They're happy enough to just put three RAM sticks in instead of four. I mean, they don't pay attention to this at all. But you need to pay attention to make sure you're optimizing your stuff. It's more expensive than Skylake-- probably about twice as expensive, in terms of just the guts of everything. But it's also much faster in multithreaded apps.
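Here's a little sketch of that population rule as a sanity check. The module lists are just examples; the point is that every channel-width group has to be filled with identical sticks.

def ram_config_ok(module_sizes_gb, channels):
    # DIMMs must be installed in full channel-width groups (pairs on dual
    # channel, fours on quad channel), and each group must match in capacity.
    if len(module_sizes_gb) % channels != 0:
        return False
    for i in range(0, len(module_sizes_gb), channels):
        group = module_sizes_gb[i:i + channels]
        if len(set(group)) != 1:
            return False
    return True

print(ram_config_ok([16, 16, 16, 16], channels=4))  # True: matched quad channel set
print(ram_config_ok([16, 16, 16], channels=4))      # False: the three-stick OEM special
print(ram_config_ok([8, 8, 16, 16], channels=2))    # True: two matched pairs on dual channel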

We have four chips to choose from here. Now, when I choose these chips out of the overall breadth and depth of their offerings, I'm only picking certain kinds-- ones that meet a certain minimum spec and minimum clock speed. Because as you jump the number of cores up from six to eight to 10, you'll see here that the speed goes down.

We're starting at 3.4 gigahertz on the left hand side and going down to 3.0. So that's your guaranteed clock rate. As you get more cores, your guaranteed clock rate goes down, because the more cores you have, the hotter it's going to run. And the hotter it runs, the poorer it can perform. So we have a thermal envelope that we can operate in. So as you get these higher end CPUs with 18 cores, that's great. But they're only running at 2.0 gigahertz, which is really, really slow, which means any single threaded performance is going to take a huge hit. And because so much stuff is still single threaded, your overall performance is not going to be that great. But if you've got to render on it, yeah, it'll kill everything else out there. But how much time do you spend rendering versus how much time you spend doing normal stuff? That ratio is what you have to think about when you're talking about the platform.

And again, with this I sort of pick out my favorite. My favorite here is the 6850K. It's a six core chip, but it's running at 3.6 gigahertz. Why do I like it better than the eight core chip? Because it's $400 less. So for $400 more, you can go to an eight core chip and get two extra cores. That's a lot of money. So you sort of think about it in terms of, what are the main things that I'm doing? If I'm rendering a lot, yeah, maybe the eight core chip is worth it. And we also have this X99 chipset. I won't bore you with the details on that. But the platform does have a lot of PCIe lanes, up to 40 PCIe lanes for multi-GPU setups. So overall, I think the high-end desktop is that nice sweet spot of performance, headroom, and the ability to expand out.
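One way to put numbers on that rendering-versus-everything-else ratio is a crude weighting like this. The 6850K's clock and core count are from the talk; the eight-core 6900K's clock and both prices are rough list figures as I understand them, and the 20% rendering share is a made-up workload split-- so treat this as a sketch, not a benchmark.

def weighted_score(base_ghz, cores, mt_fraction):
    # Single threaded time scales with clock; fully multithreaded time
    # scales with clock times cores. Crude, but good enough for shopping.
    return base_ghz * (1 - mt_fraction) + base_ghz * cores * mt_fraction

for name, ghz, cores, price in [("6850K", 3.6, 6, 617), ("6900K", 3.2, 8, 1089)]:
    score = weighted_score(ghz, cores, mt_fraction=0.20)
    print(name, round(score, 2), "points,", round(score / price * 1000, 2), "points per $1,000")
# The eight core chip scores a little higher in raw throughput, but per dollar
# the six core 6850K comes out well ahead at this kind of workload mix.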

When we get into the workstations, now we're talking about Xeons. And these are designed for servers and, quote, serious workstations. We have three series to choose from: the E3-12 series, the E5-16 series, and the E5-26 series. There's also an E7, which is a higher order Xeon. But those are relegated really to high core counts for servers. If you're running a Facebook server, then you're using an E7. But if you're just playing on Facebook, you're not going to get an E7.

So we're only talking about these three. Now, the difference between these three is that the E3-12 series is essentially a Skylake processor put in a Xeon body. It's essentially the same thing, for the most part, but you have more choices instead of just one like you have in Skylake. Still four cores, eight with hyperthreading and so on. Then you have the E5-16 series, which is a single CPU design. In other words, you can only fit one CPU in a box. It's Broadwell E based. It has four, six-- or I'm sorry, six, eight, or 10 cores. That's a mistake on the PowerPoint. And so there's that.

Then we go to the E5-2600 series. Now, the 2600 series is where you can put more than one of those things into a system. So if you're working with a Dell machine, for example, their 7000 series workstation, I think, will allow you to put more than one 2600 CPU in there, and you will pay very dearly to do so. No integrated graphics processor. Many different models to choose from.

It supports what they call ECC memory, or error-correcting code memory. Which basically is slower, and it's more expensive. But it's also more resilient to errors, whether they come from glitches in your electrical system or gamma rays from outer space. So like I said, the 2600 series allows you that dual CPU configuration. And you also get way, way, way more L3 cache-- from 8 megabytes on Skylake up to 45 megabytes in the Xeons. And then you also have to pay attention to whether you have a dual channel memory controller, which you do on the E3 series because they are basically Skylakes, or quad channel, which you get on both of the E5 series.

When you're looking at the branding for these things, kind of keeping track of the nomenclature is a hassle in and of itself. So you're looking at the different numbers and what the different numbers mean. So that first number-- like, the E3 tells you kind of what the product line is, right? E3, E5, E7. The second number, the number after the dash there, is the number of CPUs that you can put in it.

So it's either going to be a one or a two. And then you have the processor SKU, which is basically just a number. And then you also have this v3, v4, v5. That is what you want to pay attention to, because that differentiates what microarchitecture is being used. So if they offer the same model number as both a v4 and a v5, you want the v5, because that's based upon the newer process or the newer architecture.
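Just to make that naming scheme concrete, here's a throwaway parser for model strings like "E5-2643 v4". The field meanings follow the breakdown above; the function itself is purely illustrative.

import re

def parse_xeon(model):
    m = re.match(r"(E[357])-(\d)(\d{3})\s*v(\d)", model)
    if not m:
        raise ValueError("not an E3/E5/E7 model string")
    line, sockets, sku, rev = m.groups()
    return {
        "product line": line,                 # E3, E5, or E7
        "max CPUs per system": int(sockets),  # the digit after the dash
        "SKU": sku,                           # basically just a model number
        "revision": "v" + rev,                # v3 Haswell, v4 Broadwell, v5 Skylake era
    }

print(parse_xeon("E5-2643 v4"))
print(parse_xeon("E3-1270 v5"))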

So when we're looking at the E3-12 series, we've got four of them right here. The one that I like is the 1270, I think, which works out pretty nicely. But notice it's $339. Remember, that's the same price as that Skylake 6700K that I talked about previously. But notice the specs of this. It's running at 3.6 gigahertz. The Skylake, for the same amount of money, is running at 4 gigahertz as a base clock speed. So you automatically get a faster processor if you go with the desktop chip.

Other than that, everything else about it is exactly the same. Same amount of cache, the same-- everything else about it is basically the same. So when you're looking at specifying, say, a low end entry level machine from Dell or HP or something like that, they may offer the Xeon E3, but they may also offer a Skylake, for maybe a little bit more money or the same.

On the E5-1600 series, we have these five processors right here. Again, I like this particular one here, just because it offers the amount of stuff that you need, but it doesn't go to the next level. Like, the next one above this is the 1660, and it's $500 more. And you're just moving up from six cores to eight, and it actually gets slower. So you have to look at it from that standpoint. Looking at the specs at a detailed level will definitely pay off in your purchasing decisions.

This is what I find funny. Look at this right here. If you look at the 2643, this is the double one. So you can put two of these in a single machine for the E5-2600 series. So look at the 2643, which is the middle row right there. Six cores, 12 threads, 3.4 gigahertz, $1,552. The E5-1650 is almost exactly the same chip. Six cores, has a little bit less cache, slightly higher clock speed. It's $617.

So when you're specifying a machine that you want to put two of these things in, you're paying a premium to just put two of them in. Not to mention the fact you've got to pay for another processor. But just for putting in just one processor, you're being charged basically almost $1,000, a $900 price premium on one CPU. This is why we can't wait for Zen, because this is nuts.

All right, we talked about system memory. System memory used to be basically a boring subject. Guess what? It still is. We're looking at the DDR4 standard. The best bet I think is to outfit a system with 32 gigs right off the bat, if you can. 16 gigs works, but the idea is you want to use 16 gigabyte chips. If you do a cost analysis, on a cost per gigabyte basis, the difference between a 4 gig chip, an 8 gig chip, and a 16 gig chip is actually minimal. It might be $1 or $2 per gigabyte difference. So you're better off outfitting yourself with fewer chips of higher capacity, and fill out those RAM slots, as we talked about before.
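The cost-per-gigabyte comparison is worth running with whatever street prices you find on the day you're buying; these numbers are placeholders, not quotes.

# Hypothetical per-module street prices, purely for illustration.
prices = {4: 22.00, 8: 42.00, 16: 80.00}  # capacity in GB -> dollars per module
for size_gb, price in prices.items():
    print(size_gb, "GB module:", round(price / size_gb, 2), "dollars per GB")
# If the per-GB figures land within a dollar or two of each other, buy the
# biggest modules and leave slots free for a future upgrade.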

The other thing about this is don't overpay for RAM. If you go to Dell or HP-- Dell especially-- you can tell I'm kind of a Dell guy, that's just my fault-- but basically, when you go to do an upgrade, they'll charge you an insane amount of money for a modest thing. Like a 16 gigabyte upgrade from Dell is like $200. Well, I can go buy 32 gigabytes for less than $200 from Newegg or whatever.

So like I said, ensure the RAM configuration matches the CPU's memory controller. And buy all your RAM from one place at one time. One of the problems I see is that when people try to upgrade their RAM, they've got a couple of slots empty, so they go out and buy new RAM and put it in there. But the timings might be different. And the thing about this, too, especially if you build a do-it-yourself system: even if you get a package of four RAM modules, check the serial numbers and make sure that they are all kind of sequential, like they all came from the same bin.

I had an upgrade I did where I tore out all my RAM and put four new chips in. One of them was from a different bin. And the only way I could find that out was by looking at the serial number on the back. And the machine couldn't-- it rarely booted up. It got to a certain point in Revit, and it just died. So troubleshooting RAM problems usually is just a pain in the rear end. But check the serial numbers on your RAM. If they're not all of the same sort of bin, you could have issues.

Like I said, this is a new standard. You can't put a DDR3 chip in a DDR4 system. They're actually keyed differently. They have a slightly different physical property. But they do have higher densities. So you get up to I think 128 gigs on one module. Faster clock rates. They basically pick up where DDR3 left off. So they were slow to get on the market just because at the time, people were still running DDR3 systems. And it wasn't until we went to Skylake, where basically the standard now is DDR4, and that's it. So if you're shopping for it, just understand that's where you are. Overall, though, even though you have more bandwidth and they're faster internally, there's not that much difference in terms of overall performance in real world numbers.

Whoa, that's bad. Yeah. All right, so now we're talking about mass storage. SSDs, like I said, are must-have things. Once you put an SSD in a system, even a slow SSD, your system takes on a whole new performance perspective, because it's so much faster. But even then, within SSDs, they don't all operate the same. So it does pay dividends to pay attention to the different controllers and different technical aspects of your SSD.

The thing about SSDs that a lot of people had a problem with is that you have a limited number of write cycles on these things. And so people are kind of freaking out-- is my expensive SSD going to die after a year or so? Not really. They've done long-range endurance testing on these, where they just put a whole bunch of SSDs in a system and blasted them with data, petabytes of data. And they aren't dying until you get to about 2.5 petabytes written. Even if you wrote 500 gigabytes a day, every day, that's well over a decade of life-- and at 50 gigabytes a day, it's more than 100 years. And most people don't write more than maybe a gigabyte at most a day. So you don't have to worry about your SSD dying.
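The endurance math is easy to check yourself. A quick sketch using the roughly 2.5 petabyte figure from those endurance tests, with a few assumed daily write volumes:

TESTED_ENDURANCE_GB = 2_500_000  # ~2.5 petabytes written before drives started failing
for daily_writes_gb in (1, 50, 500):  # assumed daily write volumes
    years = TESTED_ENDURANCE_GB / daily_writes_gb / 365
    print(daily_writes_gb, "GB/day -> about", round(years), "years to wear out")
# 1 GB/day -> ~6,849 years; 50 GB/day -> ~137 years; even 500 GB/day -> ~14 years.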

We also have a new form factor now, this M.2 form factor, which is becoming really popular. It's a little slot on the motherboard that goes pretty much right between two PCIe slots. So you're no longer constrained by what the old SATA protocol was. If you have an SSD running on a SATA interface, you've got a data cable running to it and plugging into the motherboard, and a separate power cable-- that's an older protocol. It's an older standard. And it's actually a lot slower than what the SSD is capable of doing.

Because SATA was really designed for fast hard disks, the mechanical hard disks. It doesn't work with SSDs that well. So we have a new protocol called NVMe, which essentially replaces AHCI, which was used on SATA. And now instead of running on a SATA interface, we're running on the PCIe interface, the same PCIe interface that powers your graphics card. So we're basically putting the SSD right onto the PCIe bus that's in your system.

And they're much faster than SATA drives. So if I'm looking at benchmarks of the latest drives, all the ones that you see there that are much faster than the SATA drive, those are all the PCIe NVMe drives. So you're going to want to look at those when you're looking at an SSD. Now, if you don't have an M.2 slot on your motherboard, which you probably don't if it's more than a year old, you can always get an add-in card for like $10 or $20 that'll give you one. And then you can just put it in an x4 PCIe slot. So chances are, you can upgrade to a really fast PCIe-based drive without too much trouble.

Graphics cards. Now, graphics cards are my favorite subject, because there is no talk about graphics cards without a bottle of gin handy. Because graphics cards to me are one of the things that drive me nuts when you specify things. People have all kinds of misconceptions about what a graphics card does, what it doesn't do, what should you get? Should I get a Quadro? Should I get a GeForce GTX? Whatever.

The graphics card is basically a pipeline system. So it handles data in parallel very, very well. And so it's the whole process of taking geometric data from the CPU and then feeding it through, rasterizing it, triangulating things, and then popping out an image on the screen. And it has to do this many times, 60 times a second, to give good visual feedback and performance.

They process tasks differently. Essentially, a CPU is a couple of cores running very fast, 4 gigahertz. A GPU has massive numbers of cores-- what NVIDIA calls CUDA cores, grouped into streaming multiprocessors. And they have 2,000 or 3,000 of these things, and each one is running tiny little programs very slowly compared to a CPU core. But they're designed to execute these concurrent threads through the pipeline.

And these high end GPUs we now have from NVIDIA, like the Tesla, are basically their own parallel processing powerhouses on a graphics card. It might not be doing any graphics tasks whatsoever. Like, the Tesla card doesn't even have a graphics output. It's basically just a card that cranks through problems. And for NVIDIA's part, the Tesla cards are what's powering the whole self-driving car thing. That's what they're really focusing on. They're focusing on deep learning, all kinds of the stuff that Jeff Kowalski was talking about yesterday that was frankly scaring the pants off of everybody in the audience. You know, they're not coming for us, they're coming for us. Whatever.

Yeah. So the thing to understand about this, too, is that this stuff is running on DirectX now. It's not running on OpenGL anymore. So when we're talking about graphics cards and professional level graphics cards certified for CAD applications, well, guess what? Everything's using DirectX. Guess what else uses DirectX? Games. So theoretically, a good gaming card is going to run AutoCAD or Revit or 3ds Max just as well, if not better, than a quote OpenGL card, because OpenGL doesn't come into play anymore.

So like I said, we talked about games. I do talk about shaders a little bit-- basically, a shader is part of how the graphics card's pipeline works and how data gets sent through it. For the most part, like I said, the bottom line is that most graphics cards at around the $250 price point from any manufacturer are going to work basically about the same, and pretty well, for everything you do.

The exception is if you're doing iRay-- then that's where you're going to be looking at multiple graphics cards, higher order graphics cards, things like that. But everything else basically uses DirectX 9 as a standard. Some might use DirectX 11, stuff that came out in Windows 8. So that's no big deal. We also have new renderers to work with. We have the ART renderer in Max and Revit. We no longer have the mental ray rendering engine. Mental ray was developed by Mental Images, and then NVIDIA bought them. So you saw NVIDIA mental ray. Well, Autodesk pays a price premium to have mental ray in their applications. So they said with 2017, you know what? We have our own. Bye bye, mental ray. So they don't have to pay that fee anymore. Your cost for your software did not go down, note.

Like I said, we have this new ART renderer, which is developed by Autodesk. It is actually very, very good, for the most part. We also have a new rendering engine that Autodesk just bought called Arnold. It's used in a lot of movies-- you've seen its work in pretty much every big movie nowadays. Guardians of the Galaxy, it was in-- what's the one with killer robots? Pacific Rim, yeah. And then we also have iRay. And this is the one that is probably the big differentiator. iRay uses the graphics card. And you can get fantastic results out of iRay if you have good graphics cards to work with it. They have to be NVIDIA graphics cards, which is why this talk is so focused on the NVIDIA side.

When we're talking about this, it is very, very photorealistic. It is a total game changer. These are all images produced in iRay. It's fantastic stuff. So if you're into that, that's where you're going to be. Now, on the NVIDIA side, in terms of what's happening, we have this new Pascal architecture, to be followed up by Volta sometime in the next two years or so. Performance is going up. The size of the GPU is going down. More and more stuff happening there.

We have different models, as you see here. This is also spelled out in the handout quite a bit. Essentially, the way this works is NVIDIA puts out a chip, say the GP102 or something like that. It has a certain configuration of these blocks-- streaming multiprocessors-- and each of those blocks contains 128 cores. And as you look at different graphics cards, you'll see the number of cores go up or down based upon their price point. What NVIDIA will do is basically just disable some of those blocks.

And it's the same chip, they just disabled it. So they can produce the same stuff. It doesn't cost them much from a manufacturing standpoint. But they can just go ahead and just disable it, and it artificially sort of bins the chip to do what it wants. And the thing about this is it doesn't matter whether you're talking about a GeForce or a Quadro, gaming or workstation, it's the exact same chip. They call it something different, but internally the same thing. It's just what's been disabled, what hasn't been disabled, and what's the clock speed.
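Those blocks are NVIDIA's streaming multiprocessors, and on the Pascal GeForce-class parts each one carries 128 CUDA cores, so the core counts on the spec sheets are just multiples of 128. A quick bit of arithmetic using the figures from the chart coming up:

CORES_PER_SM = 128  # Pascal GeForce-class SM width

def cuda_cores(enabled_sms):
    return enabled_sms * CORES_PER_SM

print(cuda_cores(28))  # 3584 -> the Titan X figure from the chart
print(cuda_cores(30))  # 3840 -> a fully enabled GP102, which is the Quadro P6000 spec
# Cheaper cards on the same silicon simply have more of these blocks fused off.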

The big difference between the GTX gaming cards and the Quadro cards is that the gaming cards are clocked higher. Quadro cards are meant to go in workstations or in rackmount stuff in a server farm somewhere, and they can run 24/7 all day long. A GeForce is not meant to handle that kind of workload. It's clocked higher, and it's meant to run games a couple of hours a day. But the price point is so low, you could basically get a high end GeForce card, run it, blow it up, get another one, run it, blow it up, and do that five times before you even come to the price point of a Quadro.

So when it comes to, do I get a gaming card or do I get a Quadro? And Autodesk is saying, oh, you have to get a Quadro-- that's BS. You don't have to get a Quadro. It runs just fine on a GTX. Now, the drivers are certified on Quadros, which is the one thing that really differentiates the two. And that certification costs a lot of money, which is why the Quadros cost thousands of dollars, versus high end GeForce cards, which are like $600.
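That "run it, blow it up, buy another one" point is just arithmetic. A sketch with rough numbers in the ballpark of the prices mentioned here, not actual quotes:

GEFORCE_PRICE = 650   # roughly what a high end GTX was retailing for
QUADRO_PRICE = 4000   # ballpark guess for a high end Pascal Quadro
replacements = QUADRO_PRICE / GEFORCE_PRICE
print("You could replace a dead GeForce about", round(replacements, 1),
      "times before reaching the cost of one Quadro")
# Around 6 replacements at these numbers -- hence the advice not to pay for
# the certification premium unless you actually need it.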

AUDIENCE: [INAUDIBLE]

MATT STACHONI: Doesn't care. Does not care. What you're looking at here-- let me bump ahead to the chart here. This is just basically a rundown of everything that we have on the NVIDIA side. You'll see that the Titan X is running that GP102 chip right there with 3,584 cores. Check out the Quadro P6000 on the right hand side. It's a GP102GL. It has a couple hundred more cores, because they've enabled another couple of those streaming multiprocessor blocks. But the performance is essentially the same as you have with the Titan X. Essentially, the specs are almost exactly the same between them.

But the core clock rate might be lower or something like that. So you might get more cores, but they're clocked slower. So you might get the same performance or even less. The Quadro P5000 doesn't even belong in this list, because it's almost the exact same as the GTX 1080. The GTX 1080 is retailing for about $650. The P5000, we don't have prices on the new Pascal Quadros, but it's going to be something like a $4,000 card.

AUDIENCE: Do you have Quadros, then?

MATT STACHONI: I do on this laptop, and I run them on some machines at work. But I don't at home, no. I don't have that kind of cash.

AUDIENCE: RAM's also a bit different.

MATT STACHONI: It is, to a point. If you're rendering in iRay, the scene has to be completely encapsulated within the memory on the card. So the more memory you have on the card, the better off you are. That being said, eight gigabytes on a mid-range GeForce GTX is not a bad place to start. It was four gigabytes last year with the 970. It's now eight gigabytes with the 1070. So you've doubled the amount of memory. The Titan has, I think, 12. The Quadro P6000, I think, has 24. So yeah, there is a difference there in memory. That's one place where they've made a big difference.
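Since an iRay scene has to live entirely in the card's memory, the real shopping question is whether your typical scene plus textures fits in the VRAM you're paying for. A trivial sketch of that check, with a made-up scene size and the memory figures mentioned above:

def fits_in_vram(scene_gb, card_vram_gb, headroom_gb=1.0):
    # Leave a little headroom for the framebuffer and driver overhead.
    return scene_gb + headroom_gb <= card_vram_gb

for card, vram_gb in [("GTX 1070", 8), ("Titan X", 12), ("Quadro P6000", 24)]:
    print(card, fits_in_vram(scene_gb=9.5, card_vram_gb=vram_gb))
# A hypothetical 9.5 GB scene spills out of a 1070 but fits on the bigger cards,
# which is when the extra memory actually starts to matter.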

Let me just pop through this, because we are quickly running out of time. I've got about two minutes. What I'll do is, I think I'll stop right here. I'll have this PowerPoint up on the site if you want to download it and read the rest. Even an hour and a half isn't enough to talk about all this stuff. But for me, I think the 1070 is probably the go-to card I would use for outfitting a new system.

Now, you may not have that luxury, because you're buying a system from Dell, and Dell's like, we're not putting GeForce cards on our workstations. So you have to look at maybe a hybrid approach of how you buy stuff, right? Buy it with a crappy graphics card. Go out and buy a GTX 1070 or whatever and put it in, or put two of them in. And you'll certainly save a lot of money over what NVIDIA would charge you or Dell would charge you to go to Quadro. Anybody have any questions? Yeah.

AUDIENCE: What's the advantage of using SLI?

MATT STACHONI: None. In fact, don't use SLI. SLI is a technology where you can basically pair graphics cards up. And it was used a couple of years ago to get basically double the performance in games. SLI is actually very detrimental to anything Autodesk runs. The software doesn't use it, and it actually slows things down. So no.

AUDIENCE: But it's good for like VR and stuff like that?

MATT STACHONI: I don't know, to be honest with you. I have dabbled in it. That will be next year's topic. I don't know if it does anything for VR, to be honest with you. I'll close with: don't get too hung up on the Quadro hype. Anyone telling you you have to spend $1,400 to get decent graphics performance-- it's bull.

AUDIENCE: Is that what you recommend professionally at work? I feel the same way. I've always used GTX, and I have a really hard time at work [INAUDIBLE].

MATT STACHONI: Yeah, that whole not supported thing is just-- to me it's just scare tactics. I've had Quadros that didn't run well. In fact, I've got a lab full of Quadros right now that are running an older version-- they're really old cards. And they have a hell of a time in Revit in Windows 10, for some reason. So it is one of those things where-- and I've run a GTX 970 at home. It runs Revit like a champ. No problem whatsoever.

I've also had systems where I put GTX 980s in them, and they had issues, because the timing is actually on the physical motherboard. So we had to go and look at different driver versions to make them work. But I took the same card home and put it in my system just to test whether it was a card issue or not. It's basically the motherboard that was the problem. So the graphics card itself, I don't think, was the issue. Yeah?

AUDIENCE: How do you [INAUDIBLE]?

MATT STACHONI: Well, in a workstation class system, like a Xeon, in the BIOS you have a thing called Optimus technology on an NVIDIA system. And what Optimus does-- and this is explained in the handout-- is it basically uses the IGP for certain things and uses the discrete card for certain things. That can get really wacky. So if you disable Optimus, basically what I found is that it improves stability, because it's not using the IGP as much. You can't disable the IGP itself-- everything from the graphics card goes through the IGP as it goes out to the monitor anyway. So it's still one of those things that still has to be taken care of. But at the same time, there are certain ways, depending on the system, to make that work. Yeah?

AUDIENCE: [INAUDIBLE]?

MATT STACHONI: Yeah, we didn't get into the mobile stuff. Kaby Lake is the new processor line that comes after Skylake. It is slightly, slightly optimized. But Kaby Lake is actually optimized for ultraportable or very low power systems first. It is going to be the new standard at some point, but it's not there yet. So in 2017, you'll see Kaby Lake desktop CPUs. All right, I've got to go. I'm getting kicked out. So thanks, everybody.

[APPLAUSE]

StackAdapt
We use StackAdapt to deploy digital advertising on sites supported by StackAdapt. Ads are based on both StackAdapt data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that StackAdapt has collected from you. We use the data that we provide to StackAdapt to better customize your digital advertising experience and present you with more relevant ads. StackAdapt Privacy Policy
The Trade Desk
We use The Trade Desk to deploy digital advertising on sites supported by The Trade Desk. Ads are based on both The Trade Desk data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that The Trade Desk has collected from you. We use the data that we provide to The Trade Desk to better customize your digital advertising experience and present you with more relevant ads. The Trade Desk Privacy Policy
RollWorks
We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

Are you sure you want a less customized experience?

We can access your data only if you select "yes" for the categories on the previous screen. This lets us tailor our marketing so that it's more relevant for you. You can change your settings at any time by visiting our privacy statement

Your experience. Your choice.

We care about your privacy. The data we collect helps us understand how you use our products, what information you might be interested in, and what we can improve to make your engagement with Autodesk more rewarding.

May we collect and use your data to tailor your experience?

Explore the benefits of a customized experience by managing your privacy settings for this site or visit our Privacy Statement to learn more about your options.