Description
Arrive early for drinks, hors d'oeuvres, and a good seat.
Speaker
- Erin Bradner, Ph.D., is a Director and Research Scientist at Autodesk in the office of the CTO. Erin helped found the Generative Design practice at Autodesk and now manages Autodesk's Robotics Lab in San Francisco. Erin has led strategic research partnerships with institutes such as Lawrence Livermore National Laboratory and NASA JPL to advance manufacturing automation. During her tenure at Autodesk Erin has led hundreds of research projects to identify the sweet spot where technology feasibility, viability and desirability meet. Erin has authored research in Human Computer Interaction with collaborators at IBM, Boeing and AT&T. She is a co-author on patents in advanced design, and holds a PhD in Information & Computer Science.
ERIN BRADNER: Good evening. I'm Erin Bradner. I'm from the Autodesk Technology Office. And I am your emcee for tonight's technology trends session. Get comfortable. Secure your seat belts, because you are now committed to a fast-paced, one-hour ride through the newest advances in hardware for AI, for VR, for immersive computing, and for advanced design.
Now all of us have sat through technology sessions where a sage futurist meanders through an automation fantasy land, to use Andrew's term from this morning's keynote. I am not that futurist and this is not that session, because you're in for a treat.
Today you will hear directly from the technologists themselves. You will hear from the man who is packaging a supercomputer into a workstation box. His name is Rob. He's from Lenovo.
You'll also hear from a woman who is defining the term immersive design. Her name is Molly. She's from Dell. And you'll hear from an entrepreneur whose virtual computing technology might just turn the time-honored tradition of the workstation business on its ear. His name is Nikola.
You'll hear firsthand from him how the showdown between the cloud and the workstation is currently playing out. That's just three of the folks you'll hear from. We have six speakers, 10 minutes each. And before I take any more precious time from those speakers, I'll introduce our first presenter.
He's from HP. And yes, he'll tell you where workstations are heading. But he's also here to tell us about printers. And these printers are not the printers that you would assume he's going to talk to us about.
He's telling us about 3D printers. He has something very special to announce today. Welcome to the stage, VP and fellow from HP, Bruce Blaho.
[MUSIC PLAYING]
BRUCE BLAHO: Good evening, everybody. Thanks for coming out. Thanks for the great introduction, Erin. I've got three tech trends I want to introduce just real briefly and then we'll maybe chat about it a little bit. Oops.
So the first technology I want to talk about is 3D printing. And the tech trend we want to feature here is that 3D printing has historically been used mostly for prototyping and small-volume production. The trend we intend to drive at HP is to do 3D printing for both prototyping and for volume production.
So a couple of new technologies that are going to be coming out. This next year, we're going to start shipping our first color 3D printers. And again, same thing applies. Color printers today typically are used mostly for prototyping. Our 3D printers for color are going to do both prototyping and production. So that's the first thing I wanted to mention.
Then second, as Erin was, I think, referring to, as much as we enjoyed printing black plastic and moving onto colored plastic, there's a lot of interest and a lot of growth in the world in metals printing. So HP is announcing that we are going to be doing metal printers.
We'll talk more about that. I don't have a lot to say today about the technology behind it, other than to say that we're going to certainly try to follow some of the same strategy that we've used before with the nylon based printers. And we'll have more to say about that later in the year.
The second area I want to talk about is machine learning. I'm sure Bob will have a lot more to say about this. I'm going to focus on machine learning on the edge. So machine learning, deep learning is used a good bit today in the data center in the cloud.
But one of the trends I want to talk about is machine learning moving to the edge. So why would you do that? A couple reasons. One is, when you're doing the development of a deep learning system, this is an iterative process.
It's very similar to software development, really an extension of it. The big difference is that in the cycle, in addition to coming up with your idea, writing the code, testing it, and iterating on that, there's this thing called training, where you have to run a ton of data through the model. It takes a lot of time. So the workstation offers a lot of economic advantages for iterating on that cycle. So that's a bit of a trend.
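The iterative cycle described here can be sketched in a few lines of Python. Everything in this sketch is illustrative: the toy model, the evaluate metric, and the thresholds are invented stand-ins, not any real HP or vendor workflow. The point is the shape of the loop: code, train, evaluate, repeat, with training as the expensive step.

```python
# Sketch of the deep-learning development cycle: idea -> code -> train ->
# evaluate -> iterate. "train" is the slow, data-hungry step that makes
# local workstation iteration economically attractive.

def train(model, data, epochs):
    # Placeholder for the expensive step: many passes over a large dataset.
    for _ in range(epochs):
        for example, label in data:
            model["seen"] += 1  # stands in for a real gradient update
    return model

def evaluate(model, holdout):
    # Toy metric: has the model "seen" enough data relative to the holdout?
    return min(1.0, model["seen"] / (10 * len(holdout)))

data = [(x, x % 2) for x in range(100)]  # toy labeled dataset
holdout = data[:10]
model = {"seen": 0}

accuracy, iteration = 0.0, 0
while accuracy < 0.9 and iteration < 5:  # iterate until "good enough"
    model = train(model, data, epochs=1)
    accuracy = evaluate(model, holdout)
    iteration += 1
```

In a real project each pass through this loop can take hours or days, which is why where the training runs, cloud or local workstation, matters economically.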
We're also seeing, on the deployment side, not only when you develop these systems but when you deploy them, we're seeing more of that moving to the edge. And I want to talk briefly about why is that. What are the reasons that you would deploy a deep learning solution on the edge?
Four reasons I'll talk about. One is local decisions to be made. For instance, I've got a picture of a factory floor here. Or if you're in, say, a retail setting and you have to make decisions very quickly, you don't have time to ship that off to the cloud, or even to a regional data center or something like that. So decisions that have to be made locally, on the floor or in a retail center, are one reason you'd want edge-based deployment.
A second is IP protection. And this has come up a lot; I've heard it many times today. There are security concerns. If you're generating your own IP, maybe you don't want to ship that outside your firewall, or it may be forbidden from moving outside your firewall. But it also comes up when you're working with your customers' data.
For instance, tying back to the 3D printing example in our HP 3D printers, we would love nothing more than to harvest all the data coming through to learn how to print better each time and learn from that. However, that's our customers' data. We don't have rights to ship that off to a cloud service provider somewhere.
So if we're going to do any learning from that to make our printers better, it's all going to be local. So that's another reason that edge based machine learning is important.
Third is when the data lives on the edge already. Video is a really good example of this. If you're generating a ton of data on the edge, it's very expensive and cumbersome to move it off. A good example is a customer of ours who runs an online photo-sharing service.
They'd like to do some things like, say, facial recognition and other things like that. They've got 2 and 1/2 billion photos a year. There are other folks on the cloud that they could work with to provide that as a service, but it's just not practical to move 2 and 1/2 billion photos over. So they need to be working on it with their own system.
And then the fourth area is privacy. We're seeing this happen a lot with everything from workstations to mobile phones. Right. So by privacy, I mean you want to protect your customers. You want to avoid bad perception.
So for instance, with video data, if I'm recording this room, I don't have rights to go sharing your image or your likeness. Or if you're logging into your phone, or talking to your personal digital assistant, you're wondering, gee, where is that data going? Keeping it local answers that. So we see, for instance, on phones, a lot of the things that were being done in the cloud are now starting to move onto the device itself for that reason.
The third tech trend I wanted to talk about is VR, and commercial VR in particular. VR has been very popular over the last few years, and it's pretty easy to see the consumer applications of it. We're interested in VR for commercial applications.
And so what's the difference? First of all, the power of the system. There are huge data sets that need to be dealt with.

There's work you want to do on the fly in addition to just the rendering, so being able to apply, say, full workstation power to the system is important. But probably the most important difference for commercial is that you need to figure out how this fits into the workflow.
So it's cool to go see a VR demo. And we've got a bunch; I'll tell you about some in a minute. But when you want to actually put it to work, you have to recognize that people aren't going to throw away the way they've done their work for years just because VR has shown up.
So we've been trying to figure out how this fits in, with the design process as an example. That's why we're doing things like the backpack and the dock that you see here, because you're going to be working at your desk with this. You're not going to spend your whole day doing VR.
Maybe someday we'll get there where you spend all your time immersed. We're not there today. So you need to be able to move back and forth between that seamlessly. So those are the kinds of things, the kind of innovations that we're trying to do.
The VR backpack, if you haven't seen it in action, I recommend you take a look. It's, of course, in our booth. We've got a bunch of partners, including Unreal Engine, Unity, Bridgestone, Autodesk, Nvidia, Microsoft, and others, that are doing their presentations or demos in the booth. Stop by.
And then for you early risers, tomorrow at 8 AM we'll be doing a session with Nvidia and Microsoft. We'll be talking in more detail about our VR backpack, Nvidia is going to talk about their Holodeck, and Microsoft will be talking about Windows Mixed Reality. So if you have a chance to come take a look at that, that'd be great. Thanks.
[APPLAUSE]
ERIN BRADNER: Great. Thanks, Bruce.
BRUCE BLAHO: Yep.
ERIN BRADNER: So we have just a few minutes with each presenter and then we'll send Bruce on his way. I have a couple of questions for you. One, real quick, I want to thank you for including that presentation on your VR backpack. I look at it and I see VR off leash. And I think that offers a lot of opportunity that we can't fully understand until we've had the experience of working with VR when we're not on a leash, connected to the PC.
We'll hear more about VR and AR from Intel. So I do have a question for you, which is, if you could, tell me a rookie mistake that people make when they're first looking into machine learning. Now you--
BRUCE BLAHO: That's a good one.
ERIN BRADNER: --positioned machine learning on the edge with HP. What's a rookie mistake that our audience can avoid?
BRUCE BLAHO: OK. That's a good question. I'd say probably the biggest mistake that people make getting into machine learning is around the data. And I think people always underestimate how difficult it is.
It takes not only a lot of data, but labeled data. Typically, most successful deep learning solutions use labeled data, what's called a supervised solution, which means I've got a picture of a dog and I tell you the answer: this is a dog. This is a cat.
Let me give you an example from HP. We have a computer vision group that's doing deep learning for facial recognition. And when they first ran their model, they were disappointed with the results even though they had a ton of data. Then they went through it and realized that about 20% to 30% of the data was just wrong.
So it's a little ironic that this great automation technology actually requires a lot of grunt work up front. It's not magic; it's not a magic beanstalk. You have to find the data. You have to get it labeled. And then you have to correct it and make sure it's clean data.
ERIN BRADNER: For the computer scientists in the room, garbage in, garbage out.
BRUCE BLAHO: Exactly.
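The label-auditing grunt work described above can be sketched as a simple pass over a labeled dataset. The records and the `is_mislabeled` check here are invented for illustration; in practice the check is human review or a consistency heuristic, but the structure, splitting the data into clean and suspect sets before training, is the same.

```python
# Sketch of a label audit: split a labeled dataset into clean and
# suspect records before training. The mislabel check is a stand-in
# for human review or a heuristic.

def audit_labels(records, is_mislabeled):
    clean, suspect = [], []
    for image_id, label in records:
        (suspect if is_mislabeled(image_id, label) else clean).append(
            (image_id, label)
        )
    return clean, suspect

# Toy dataset: even ids are really "dog", odd ids are really "cat",
# but two labels were entered incorrectly.
records = [(0, "dog"), (1, "dog"), (2, "cat"), (3, "cat"), (4, "dog")]
truth = lambda i: "dog" if i % 2 == 0 else "cat"

clean, suspect = audit_labels(records, lambda i, lbl: lbl != truth(i))
print(f"{len(suspect)} of {len(records)} labels look wrong")
```

Here 2 of the 5 toy labels are flagged, roughly the 20% to 30% error rate the HP team found in their real facial-recognition data.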
ERIN BRADNER: Right. OK. Super. Well, that's all the time we have. Thank you so much for coming up.
BRUCE BLAHO: Thank you. My pleasure.
[APPLAUSE]
Thank you.
ERIN BRADNER: So next up we have the head of commercial AR/VR from Intel. This speaker will walk us through advanced applications in AR/VR for AEC and for manufacturing. Please welcome Kumar Kaushik.
[MUSIC PLAYING]
KUMAR KAUSHIK: Thank you, Erin. This is my first AU. And it's been fantastic. The energy out here, especially in the keynote this morning, was fantastic. It got my creative juices flowing.
We are in a seminal moment in our history of technology, because three key innovations are coming together. One is around AI. You heard Bruce talk about AI machine learning, deep learning. The second is about immersive visualization with augmented reality and virtual reality.
And then the third thing which we have not talked much about, and I won't be spending too much time on it, is the next generation comms technology with 5G coming towards us. OK. These three technologies are going to converge and democratize the tools required for creators, designers, and engineers to disrupt industries.
I'm going to briefly talk about AI and then move on to my personal passion, which is AR and VR. But when we look at AI within Intel, it's not about one specific thing, but it's about the end to end solution from the cloud, to the data center, to the networking, to the applications, to the sensors on devices and at the edges.
So we are truly committed to making this experience happen. And we think it's going to be part of a virtuous cycle, which powers the data center through the data that goes back and forth between the cloud and the clients, and enriches the client experience for our end customers.
But what I really am passionate about is what's happening in the space of virtual reality. If you look at how computing experiences have started off at the advent of computers in the early '80s, we think VR is going to be the next generation and the next interface for compute experiences. And it's going to be a combination of VR and AR.
We think virtual reality is going to be the immersive collaboration and training platform for the future. And we think augmented reality is going to be the productivity platform of the future. And we see solutions across all of these domains happening today.
Now what is Intel specifically doing in this space? We have multiple efforts underway. We have virtual reality in sports happening today. Our Intel Sports group has virtual reality capabilities deployed in partnership with the NFL and in partnership with the NBA.
We are working with health care institutions to deploy virtual reality using DICOM data today. We are working in the AEC space, for example with the Smithsonian Institution, to digitize their data and create virtual museums accessible to people today.
We are working with the University of Utah to use our commercial drones to scan the ecosystem to do reality capture and photogrammetry. All of these things are proof points of how the market is going to evolve and how we think VR is going to be a mainstream option over the next two to three years.
ERIN BRADNER: Great. Thank you.
[APPLAUSE]
ERIN BRADNER: Kumar, if you could, tell me your favorite example of how AR/VR is applied in architecture or engineering.
KUMAR KAUSHIK: Well, maybe I can go a little bit deeper on the Smithsonian example. I don't know whether people know about this, but the Smithsonian shows only about 3% to 4% of all of their assets, of their entire historical collection.
So one of the things which we wanted to work with them on is how do we digitize their assets and make it available to end users. But the big thing was we didn't want it to be yet another 360 video experience. We wanted to have that full immersion of like you're visiting a museum.
So we commissioned this effort with the Smithsonian, and we used primarily Autodesk tools, including ReCap and Maya, to build a full photogrammetric pipeline for accurate 8K and 12K captures of Smithsonian assets in a real-world environment. I think that's a fairly compelling story of defining a next generation workload to show how VR can be applied to real-world use cases.
ERIN BRADNER: Did you go through that experience?
KUMAR KAUSHIK: Of course.
ERIN BRADNER: What was it like?
KUMAR KAUSHIK: It is fantastic. I think I have demoed it more times than I can count. It's an absolutely fabulous experience. And we want to go build more of these types of curated experiences, especially for education.
ERIN BRADNER: Great. I have another question for you. You mentioned that AR is the productivity platform of the future. Can you unpack that?
KUMAR KAUSHIK: Yeah. Earlier in the year, we were having this conversation: hey, we have AR coming up and VR coming out. How are people going to adopt these technologies? One of the things VR has is this thing called occlusion anxiety. When you're within a workforce and you have people all around you, being shut away from the environment by putting a headset on actually causes anxiety in people. So that's one of the early pieces of feedback we got from our user experience researchers.
ERIN BRADNER: [? Got ?] it. Great.
KUMAR KAUSHIK: We see augmented reality as something that can be a part of your productivity workflow. Bruce talked about how workflows are preestablished today and how tough it is to change them. VR requires a significant change to that model. But augmented reality, as the name implies, augments what you're trying to do. And so we think it has the ability to become the productivity platform of the future.
ERIN BRADNER: OK. So place our bets on AR and then VR.
KUMAR KAUSHIK: Well, the way I look at it is that VR is available today.
ERIN BRADNER: Yeah.
KUMAR KAUSHIK: So you need to be placing your bets on VR right now. Over the next five years, that market is going to boom. When I look at the headsets, commercially available headsets and augmented reality today, I think they're still a few years behind before we get to mass market adoption.
ERIN BRADNER: OK. Thank you, Kumar.
KUMAR KAUSHIK: Thank you very much. Appreciate it. Take care.
[APPLAUSE]
ERIN BRADNER: So mixing it up a bit here, we have a video of the future. Our next speaker calls it immersive design. She's going to cook up the future and then give us all a recipe of how we can make immersive design at home. Please welcome the Director of Industry Strategy and Partner Marketing at Dell Precision, Molly Connolly.
MOLLY CONNOLLY: Thank you. A pleasure. All right. All right. Well, thank you, folks. Glad to be here. And actually, I'm going to do something a little different. I'm going to sit down with Erin.
ERIN BRADNER: Oh, please.
MOLLY CONNOLLY: OK. So as Erin mentioned, I'm going to take you on an immersive trip. So let's see it. Lights down.
[VIDEO PLAYBACK]
- It starts with a question.
- Load women size eight.
- Followed by an idea on how to make things simpler, better, or more beautiful.
- Approximate shape from sketches.
- But it's not just what it looks like.
- Load cross train sequence.
- It's how it works, which means trying and failing, and trying again. To be a designer means not being bound by the limits of your tools, but instead--
- Expand box.
- --being inspired by them.
- Show me the upper.
- So that you can focus on what only you can do, being creative, being curious, and being critical, exploring the union between function and form until suddenly you know.
- Optimize cushion pattern for terrain.
- That's it.
- And when you're ready to share your work, make sure everyone can see that the world is a little simpler, better, and more beautiful.
[END PLAYBACK]
MOLLY CONNOLLY: Well, I hope you enjoyed that video, because--
[APPLAUSE]
Oh, great. All right. Thank you. What you just saw there is a collaboration between our customer Nike, Meta, and Ultrahaptics. So what are we really showing here? Let's think about what we saw in that video. It's part of the digital transformation. And at Dell Precision Workstations, we believe we are working with customers and partners to deliver on this transformation.
You know, we see it in three different realms: the IT transformation, where you want to automate, simplify, and take advantage of cloud; the security transformation, where you again automate but also integrate security into your workflow; and then workforce transformation.
What you just saw in that video was a prime example of how immersive technologies can actually inspire. They can engage the creator, whether it's an architect, a designer, or a creative making the next blockbuster. It allows them, as you saw with the Meta AR glasses, to see, to hear, to touch.
And what you also saw in that video was our canvas, the Dell Canvas. It's really exciting. Imagine a surface that is your "do" surface, where you have all 10 fingers engaged in your design. You also saw in this video collaboration, where people are working together and collaborating.
Think about the same things that Autodesk presented this morning in the keynote that everybody not only wants to but they need to be able to do more with less. Right? And so when we think about workforce transformation, guess what?
The workforce all around us is getting younger and younger. Right. And so many of the people that you're employing today grew up swiping, whether on a phone, or on a tablet, or now on a canvas. And by the way, we do have the Canvas at Drink.
ERIN BRADNER: OK.
MOLLY CONNOLLY: OK. So people can see, feel, and touch the Canvas at Drink in the Dell Precision Workstation booth. So please do come by and see that. But more importantly, think about what it means for a designer to be immersed in their creative content, to be able to iterate multiple times, to have that digital twin, to be able to design their product fully with photorealistic rendering. And thanks to our partners here from Nvidia, we're capable of doing that. Right.
So we get more iterations, more creative designs, less limits obviously, and less time spent. And what does that mean to a business person? That means faster time to market with your product, your design, your building. And so it's an exciting time that we're all in. So I hope you enjoyed that.
ERIN BRADNER: Thank you, Molly.
MOLLY CONNOLLY: Sure.
ERIN BRADNER: OK. You did my job for me. So I won't ask you the more-better-less question.
MOLLY CONNOLLY: OK. Thank you.
ERIN BRADNER: I would like to know, you've unpacked that technology. You've given us that recipe. So that's the Dell Canvas. The VR goggles are--
MOLLY CONNOLLY: Yes.
ERIN BRADNER: [? Were ?] this one?
MOLLY CONNOLLY: Oh, those were the meta.
ERIN BRADNER: The meta.
MOLLY CONNOLLY: Yes. Those were AR.
ERIN BRADNER: OK. And then what was the computer on the back end?
MOLLY CONNOLLY: Well, you know, our Canvas is actually a peripheral device. It doesn't have any compute power of its own; it's powered by our workstations. OK. So it's a twofer. You get a workstation, and then also, of course, you want a Canvas, right?
ERIN BRADNER: OK.
MOLLY CONNOLLY: So you do need, of course, workstation power to really engage with what I consider workstation worthy applications like Autodesk.
ERIN BRADNER: Excellent.
MOLLY CONNOLLY: Yes.
ERIN BRADNER: Sure. And then if you could-- so we saw a lovely industrial design example. Can you tell me how this is being applied in architecture, engineering, construction?
MOLLY CONNOLLY: Well, in architecture, of course, because of the ability to visualize, whether it's a commercial environment or a residential one, and also to work collaboratively with clients, because you saw collaboration here as well: to visualize the structure and the environment, and to make changes right then and there in real time.
This is so impactful. Right. It also is truly attractive to businesses that have high value ticket items, because when you can change something right then and there for your customer, they see immense value.
We have another example. It's in automotive, though.
ERIN BRADNER: OK.
MOLLY CONNOLLY: OK. One of our customers, Jaguar Land Rover, they use Dell precision workstations in designing their cars, which is fantastic. But they also use virtual reality in designing the interior of their cars so that they can get in and feel, experience, be immersed in that automotive experience.
Now here's a car company that when they speak about themselves today, and this is a surprise for a lot of people, they say that they are a technology company, not an automotive company. They call themselves a technology company.
ERIN BRADNER: Jaguar?
MOLLY CONNOLLY: Yes. Jaguar Land Rover. Now why is this? Well, believe it or not, and I think you saw it on the video that we showed earlier, the I-PACE car was Jaguar's first fully electric SUV. Right.
So this SUV was a brand new direction for Jaguar Land Rover. They wanted to be able to take it to market in a whole new way. And so they had this vision. And they worked with us on delivering a virtual reality experience so that when they launched at the auto show simultaneously in the UK and LA, there were 88 press members in VR experiencing this I-PACE car.
OK. Now what's cool about that-- now think about this. Somebody is going to spend more time in a VR headset, exploring the car, going around it, going in it, getting a sense for the controls, than they would if they're at an auto show and they're just walking around a static car. So it's really the blending.
ERIN BRADNER: It's a richer experience in fact--
MOLLY CONNOLLY: It certainly is.
ERIN BRADNER: --mediated by technology.
MOLLY CONNOLLY: When we immerse ourselves in technology or any use of it, when you add more senses, you internalize it. It's an emotional-- it's a visceral experience. It becomes real.
ERIN BRADNER: Well, what I love about this approach is that you are leveraging, as you said, the creativity, the curiosity, and the analytical nature of the human and augmenting that with technology. We do what we're good at. The computers do what they're good at.
MOLLY CONNOLLY: Yes. Well, I have a colleague. And his number one statement-- he said to me, don't forget this, Molly, and it's so true. In this environment, you're able to experience it, experience your design, experience your creation, not just view it.
ERIN BRADNER: Not just view it. Experience it before it's real.
MOLLY CONNOLLY: Entirely.
ERIN BRADNER: Thank you, Molly.
MOLLY CONNOLLY: A pleasure. Thank you.
ERIN BRADNER: Thank you. Thanks. To our friends in the back, could I get the clock reset, please? This next speaker is the executive director and general manager of the workstation business unit at Lenovo.
And I'm particularly excited to hear from him today, because it was four years ago when our CTO at Autodesk asked us in the technology office to imagine a future where every architect, every engineer had a supercomputer at his or her disposal. He asked us to imagine how that would change the art, the practice, and the profession of design.
Out of that provocation came the generative design technology at Autodesk. Now that future, those supercomputers at your desk, that's here. And this is brought to you by Lenovo. Here is Rob Herman.
ROB HERMAN: Right. Thank you, Erin. Hey. Good evening, everyone. I'm grateful for the opportunity to share with you this evening our experience in generative design and how we help our customers and our partners take it to the next level. And what I'd like to do is share with you an example of one of our customers and show you the different levels of computing as you go through the different phases of generative design.
So if you've attended AU the past two years, you've probably heard about MX3D. They're a company based out of Amsterdam that is combining robotics with 3D printing in metal and using that to create amazing structures. And their current mission is to build a bridge that will span a canal in the Red Light District of Amsterdam.
So it's a pretty cool project. And this is now our third year talking about MX3D. And you might ask, well, what's different? What's new? Well, MX3D has now moved beyond talking about the future of making, beyond the design concepts, and is actually in the process of making this bridge as we speak.
They are a third of the way through to completion of the bridge. And they expect to install the bridge over the canal by mid 2018. So it's a pretty exciting time for them. But it also affords us the opportunity to talk about the tools and techniques that are used in the next phase of the design, getting to the make, and then eventually to the use of the bridge design. So that's what I want to take you through now.
So in the design phase, if you think about it, printing a bridge in 3D metal, there's really no precedent for that. So in dealing with the local authorities from a building code standpoint, that was really one of their first challenges.
So they worked with the Autodesk team to use Dreamcatcher and run through several iterations of the bridge design. And then they ran simulations and structural analytics to ensure that the bridge would meet reliability and safety standards. And to do that, to run those multiple iterations and run the analytics, it really required a base level of computing that I think we're all familiar with today, really a basic workstation.
But then they moved into the make phase of the design process. And this is an actual picture. This is not a rendering. This is an actual picture of the robotic arm 3D printing the metal bridge in midair. As they got into this portion of the process, they faced another challenge. Right.
So in order to instruct the printer in how to build the bridge, they had to develop a software program from scratch. And the software program consisted of millions of lines of G-code. And the code was used to instruct the printer on how to print out the unique parts of the bridge slice by slice.
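To give a feel for what "millions of lines of G-code" means, here is a toy emitter in Python. G1, X, Y, Z, E, and F are standard G-code motion fields; the square perimeter per slice is purely illustrative, since MX3D's actual software computes far more complex weld paths for each unique part.

```python
# Toy G-code emitter: each printed layer becomes a short block of motion
# commands, so a real part with thousands of complex slices quickly adds
# up to millions of lines. This traces a simple square perimeter per
# slice.

def emit_slice(z, size=10.0):
    """Return G-code moves for one square layer at height z (mm)."""
    corners = [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]
    lines = [f"G1 Z{z:.2f} F300"]  # move the head to this layer height
    for x, y in corners:
        lines.append(f"G1 X{x:.1f} Y{y:.1f} E1.0")  # extrude along an edge
    return lines

program = []
for layer in range(3):  # three 0.2 mm slices
    program.extend(emit_slice(z=0.2 * (layer + 1)))

print(len(program), "lines of G-code for this toy part")
```

Even this trivial square takes six lines per slice; scale the geometry and the layer count up to a full-size metal bridge and the line count explodes, which is why a workstation with lots of memory is needed to generate and hold the program.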
So you know, here again is another level of computing. I mean, they're using a dual-processor desktop workstation with massive amounts of memory to build out this very complex software program in order to print the bridge.
And then in the use phase, really to me the most exciting phase, and really one that's more futuristic in thinking, MX3D's working with the Autodesk research team to build out a prototype of a monitoring system for the bridge so that the bridge can monitor and manage the traffic on the bridge. So this is really cool stuff, because now we're talking about turning the bridge almost into a living organism.
So again, the teams worked together, Autodesk and MX3D, using Dasher to create what we're calling a nervous system for the bridge. And it consists of IoT sensor networks and machine learning algorithms, all for the purpose of monitoring and managing the traffic flow on the bridge. So imagine the bridge being overcrowded, the bridge sensing that, and haptically vibrating to send a signal to the people on the bridge to move on and clear up the traffic flow.
So this is the idea of bringing the bridge to life. And it really requires all these sensors to bring the data up into the cloud, with the analytics done in the cloud, so that the bridge can understand what is happening and then manage it.
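A heavily simplified sketch of that "nervous system" idea: sensor readings stream in, a stand-in model smooths them, and the bridge reacts when it looks overcrowded. The sensor values, the capacity threshold, and the moving-average "model" are all invented for illustration; the real prototype uses Dasher and learned models.

```python
# Sketch of the bridge "nervous system": ingest sensor samples, smooth
# them with a moving average (a stand-in for a learned model), and
# trigger a reaction when the bridge looks overcrowded.

from collections import deque

class BridgeMonitor:
    def __init__(self, capacity, window=3):
        self.capacity = capacity
        self.readings = deque(maxlen=window)  # most recent load samples

    def ingest(self, people_count):
        self.readings.append(people_count)

    def overcrowded(self):
        # Moving average over the recent window vs. capacity.
        return sum(self.readings) / len(self.readings) > self.capacity

monitor = BridgeMonitor(capacity=50)
for count in (30, 55, 70):  # simulated sensor stream
    monitor.ingest(count)

if monitor.overcrowded():
    print("signal: vibrate to disperse traffic")
```

The same stream of readings, logged over time, is what feeds back into the next-generation design that the generative process improves on.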
And all that data serves an equally important purpose, because it turns into valuable information that can be fed back into a next-generation design to make the bridge even better. And that's what generative design is all about: continuing on to the next generation and improving upon the design based on the feedback you get.
And again here, this was even another level of computing. Right. So the Autodesk research team is using a very high-end dual-processor system with an Nvidia Quadro P6000 card to build out these machine learning algorithms and the sensor network. So we at Lenovo really view ourselves as master integrators of compute technology from our partners like Intel and Nvidia.
We value the learnings that we get from our customers and take that feedback and intersect it with these new technologies to create the next generation solutions for your design needs. And right here, this is a picture of a ThinkStation P920, which is a dual processor system that can support up to three Nvidia GP100 cards. So basically what this is is a supercomputer at your desk.
ERIN BRADNER: Yeah.
ROB HERMAN: Right. And really capable of handling just about anything that you can throw at it from a generative design standpoint. So we're really excited about that. And ultimately, we see the technology business as a people business. And we want to help you solve problems and create new opportunities. So happy to continue this discussion down at the booth. So have a great show.
ERIN BRADNER: Thank you. Great. So if you could unpack a bit the MX3D bridge project. So we had the supercomputer processing the G-code to run the robots to fabricate. And then we have the supercomputer processing the data that's coming off the bridge or the prototype of the bridge.
So what if I am in the audience and I'm thinking, when do I need a supercomputer? What are the projects that I will need to invest my Lenovo technology in? What are those? And when do I need to get that box?
ROB HERMAN: Yeah. You know, I think really, Erin, it's about planning end to end. Right. So looking at your project overall, understanding the tools that you'll need from, say, an ISV application standpoint, and then pouring that all into basically like a project timeline and a workflow, and then understanding where your peak demands are going to be for performance and then plotting your hardware along that workflow.
ERIN BRADNER: OK. And now I'll ask you the same question that I asked Bruce. What is a rookie mistake that our audience can avoid when investing in supercomputers at the desk?
ROB HERMAN: Listening to your CFO.
ERIN BRADNER: OK.
ROB HERMAN: I mean, you know, I think we get talked into maybe purchasing something that doesn't quite meet our needs.
ERIN BRADNER: Uh huh.
ROB HERMAN: Right.
ERIN BRADNER: OK. So we can't advance our design practice unless we invest in our hardware and not listen to our CFO.
ROB HERMAN: Absolutely.
ERIN BRADNER: OK. All right. That's the recipe for success here.
ROB HERMAN: All right.
ERIN BRADNER: Thank you.
ROB HERMAN: Thank you.
[APPLAUSE]
Thanks.
ERIN BRADNER: OK. So next up we have the newcomer to the technology trends event. And I mentioned him earlier. He is our virtual computing representative. He's going to talk to us about that showdown between the cloud and workstations. And come on up, Nikola.
NIKOLA BOZINOVIC: Thank you. Thanks. So I'm Nikola from Frame. We had lunch today with Erin, and she asked a question: what would you consider a success today? I was thinking about it, and only one thing was on my mind. Success would be if you remember what Frame is after this session, which I don't think will be that hard, unlike these five companies that nobody has ever heard of presenting around me.
But kidding aside, I'll assume that not a lot of you know what Frame is. We are in the area called virtualization. We are virtualizing workspaces, software-defined workspaces. If you've ever heard of Citrix or VMware, this is how those things would look, and how great they would be, if they were built today and not 15 or 20 years ago.
And we worked very closely with Autodesk, in fact from the very early days when Frame was just this. So in 2013, we had these software-defined workspaces where you could open up a browser, say that you want some virtual workspaces, some virtual machines, click a couple of buttons, and things would come up.
This was running out of my house over my five-megabit-per-second link. And even at that time, we were able to get a lot of people excited and interested. Fast forward: if you were in one of the labs at Autodesk University this week, you were using Frame. We are powering all of Autodesk University's hands-on labs, so you can run Revit, AutoCAD, Inventor, and all the other goodness that Autodesk makes, from any browser.
So it's a perfect example of how software-defined workspaces can create great experiences. You don't have to haul 400 workstations here. You can just have any computer, all the way down to a Chromebook with a browser, and get things going.
And someone might say, well, this is just remote desktops. But it's, in fact, a lot more than remote desktops. There is massive scale. There is security. There is infrastructure. There is storage. There is identity. And that got us into all sorts of partnerships among others with Microsoft and with many other folks, both on the software vendor side and on the service provider side.
And I was looking at the logos here, and it's not only our size that sticks out. We're clearly a lot smaller than all the other guys presenting here today. But we were the only software company. If you look at it, all five folks over there are from companies that build something physical, that build hardware.
And we come from the part of the world-- we're based in San Francisco. Maybe you can figure that out from the logo-- where the mantra is software is eating the world. And when something moves at the speed of software, it moves like really, really fast.
And if you like change, you should be really, really happy because today is the slowest rate of change that you'll ever experience. Things won't slow down. They'll just move faster and faster. And even I sometimes look at, oh my God, things are now so fast compared to 10 years ago. God knows where we're going to be in another 10 years.
So when people see this image, and I know this audience quite a bit. We've been doing this for five years with big enterprises. And they sometimes freak out and they say, ooh, cloud. And it is a big technology trend, like cloud is a technology trend.
But in fact, I'd say that it's not about the cloud. It's about the winds that brought those clouds in and the winds that are changing. And it's about the way of thinking. And there is an old Chinese proverb that says that when the wind changes, some build walls, and others build windmills.
So figure out which one you're going to build. Are you going to build a wall or a windmill? I think walls are not a great idea. All the walls that we hear about are not a great idea. And a wall in this space, you know, is also not a good idea. So let's think about the important factors for building windmills.
So one is agility. What that means is, say you need 10,000 new virtual workspaces for a company that you just acquired. You can do that. You need a large workstation with 8 Nvidia GPUs? You can get it for five hours and pay a couple of dollars an hour. So that's the agility and the speed that the new winds are bringing in.
What else? It's collaboration. If you have folks around the world or around the country, you can instantly collaborate. Put your data into a secure place and give everybody tools, whether it's Revit, or AutoCAD, or other tools that people use like Intergraph and Bentley. It all works together and it's all interoperable.
And the last one is security. I think we're at the tipping point of the cloud, where people should think about the cloud as something that's ideally more secure. Because look at how many people on average take care of security in an enterprise: it's a handful of people, as opposed to the thousands or tens of thousands of people who do it in a cloud.
So those are the big three tech trends. And I want to end with one more thing. You know, I got inspired this morning at the keynote by all the things about automation and AI. And obviously it's a hot topic.
Everybody talks about AI. Everybody mentions AI. We're going to hear from Nvidia right after this. And with the things that we build, I couldn't agree more with what I heard in the keynote today: the job of technology is to augment, to help people, to make them more productive, and to put them in the best position to achieve their full potential.
And I think with AI, there are lots of things that are extremely promising, extremely exciting. But I must tell you, as a technologist and as a citizen, I sometimes catch myself thinking about it more in terms of what can go wrong. And I don't think that's so crazy.
We have some very prominent people telling us how AI is dangerous. You see Elon Musk on one side of the debate. On the other, there's Facebook and Zuckerberg: hey, don't worry, look somewhere else. And in fact, it's that company with a social network that told us they were building a tool to connect the world.
Well, in reality, we just heard from the people who built it that their real goal was to capture our attention, to get our eyeballs, to get us hooked, to get us addicted. And as a result, we are seeing a real threat to our world and to democracy. It's here. It's in Europe. It's all over the place.
So it may have seemed like a great idea at the start, but look how we ended up, with years and years spent on Facebook, not to say wasted on Facebook. And where we really want to be: we don't want to gamble that future. We don't want to put all our eggs in the basket of a private company that tells us to look the other way, especially here in the gambling capital of the world.
You know, we saw what happened 10 years ago with the financial crisis, and how trusting someone with an important thing without controlling it led to disaster. We're seeing some fundamental trouble in democracies around the world in 2017. And I don't want to see the same thing in 2027, as a technologist, as a citizen, and as a father of kids growing up in this new world.
So we're all about empowering. We're all about augmenting the power of people. And if you want to see that firsthand, stop by our booth. It's down in the expo hall. And remember Frame and our website really easy, fra.me. Thank you very much.
[APPLAUSE]
ERIN BRADNER: OK, Nikola. So I did look at your LinkedIn profile. We're in the presence of a revolutionary. In fact, a decorated revolutionary. This is a man who thinks big. You led a revolution in--
NIKOLA BOZINOVIC: It was in Serbia in the mid '90s. I was 21 when the dot-com boom was happening in California. I lived in Serbia, and we had a bad guy named Milosevic that some of you may have heard of. And my startup at the time was to find a way to get rid of him and to bring new winds to Serbia. And I like to think we helped a little bit.
ERIN BRADNER: OK. So it is no coincidence that this man thinks big. He's thinking about the implications of AI along with the implications of the work he's doing in the cloud. So thank you for that. I do want to know if you can ground the examples you offered around virtual computing in some of the applications that the members of our audience will be looking to Frame to solve-- architecture, engineering, construction.
NIKOLA BOZINOVIC: It's a perfect example. We have many customers in AEC. And they're using Frame to build virtual workspaces to use Autodesk tools. I mentioned a couple, AutoCAD and Revit. Revit is very popular. They use it with existing IT systems, so maybe they have AD, Active Directory, already in place. Maybe they use storage, whether it's on-prem storage, or cloud storage, or storage from one of our partners.
And what we make possible is almost instant creation of a new pop up virtual office. So you can have, with companies moving projects around the world, you can have engineers in San Francisco working with architects in London building something in Singapore.
And once you put all of that in the cloud, you can get the power of GPUs, as many as you want. And you don't have to pay millions or tens of millions of dollars upfront. You can bring it up, run it for a couple of months or a couple of years, and tear it down. And I think it really matches this new elastic, agile world.
ERIN BRADNER: Excellent. OK. We are running out of time and I want to invite our last speaker up. I want to thank you.
NIKOLA BOZINOVIC: Thank you.
ERIN BRADNER: So Nikola Bozinovic.
NIKOLA BOZINOVIC: Thank you.
ERIN BRADNER: OK. So bringing it home today for us is the VP and general manager from Nvidia. And his customers are the folks you've heard from earlier today. He's going to synthesize what we heard. He's our closer, Bob Pettit.
BOB PETTIT: I think I've got the biggest microphone. Can you hear me?
AUDIENCE: Yeah.
BOB PETTIT: I can't hear myself. I love my job. I love my job because of you all and what you're striving to do, and because of the partners that we have to work with. I don't quite agree with the devious-AI conspiracy theory, but I welcome your view. And hopefully we'll be able to control that, just like we've been able to control the cloud not interfering with our lives, or Microsoft not taking over the world.
So let me start off with how most of us are experiencing AI today. If you're talking to Alexa, if you're talking to Siri, hopefully you never have to talk to them, but if you are, you're using an AI bot. If you're searching for images and you're using voice control, you're using an AI bot. Searching through video.
It is a huge time saver. It has proven beyond a doubt to save money. If you're looking for Steph Curry's three-point shot that won game seven of whatever, an AI bot is going to be able to find that a hell of a lot faster than textual searches. Cyber security-- keeping our nations, keeping our world safe. AI is being used there.
People are well aware of Nvidia's involvement with self-driving cars. Talk about the ultimate challenge: making sure that car doesn't hit a little girl or a little boy, that it operates in snow, in rain, in different sunlight conditions. AI and the neural nets that we put together go from the simplicity of Alexa all the way up to self-driving cars.
But it doesn't stop there. And we may use that as consumers, but AI in the content industry, AI for the creative and the designer, it's already here. It is not a thing of the future.
Generative design and machine learning: being able to design something not by using your hands but by using your head, by using the design constraints. You know very well that it is here. And I've had the pleasure of working with Autodesk for the last three years on things like Dreamcatcher, things like Design Graph.
It knows you're drawing a flange. Why do you want to continue drawing it? It knows the vehicle that you're working on. It knows the part that it needs to be connected to. The system can tell you, let me finish that for you. There are a lot of fantastic uses for AI in rendering, whether you're in AEC doing rendering for homes or buildings, whether you're in M&E doing rendering for movies, or rendering for manufacturing. GPU-accelerated ray tracing is here to stay. It's proven that using GPUs to accelerate scenes is significantly faster.
The scene on the left is one of the world's fastest GPUs, the V100, with our OptiX software. Yes, we're not just a hardware company. The snow, the fuzzies, the fireflies you see are what the ray tracing algorithm is going through. It is generating millions of rays based on the light sources in the room and tracking every bounce that those rays make.
It is the most pristine view of any rendering that you can get, but it can take hours and, in some cases, days, even on a very fast GPU. And as this thing is moving around from scene to scene, we applied an AI bot to it. Same software. That AI bot, after learning how ray tracing works, is now able to do in a tenth of a second what takes minutes and sometimes hours on the world's fastest GPUs, with 99.99% accuracy.
So if we let this thing resolve here on the left, it would look like that view on the right. It's a huge productivity gain. Just as generative design is a huge productivity gain for manufacturing, denoising in the acceleration of ray tracing is a huge productivity gain.
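The denoiser described here is a trained neural network inside Nvidia's OptiX, which is not reproduced below. As a toy stand-in for the idea, this sketch uses a plain neighborhood average to pull a noisy one-sample image toward its converged value; the image size, noise level, and function name are all invented for illustration.

```python
import numpy as np

# Toy stand-in for Monte Carlo denoising: the real system is a trained
# neural network; a simple box filter illustrates mapping a noisy
# one-sample image toward the converged render.

def box_denoise(noisy, k=3):
    """Average each pixel with its k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(noisy, pad, mode="edge")
    out = np.zeros_like(noisy)
    h, w = noisy.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                   # the converged render
noisy = clean + rng.normal(0, 0.2, clean.shape)  # one-sample "fireflies"
denoised = box_denoise(noisy)

# The filtered image is measurably closer to the converged result.
print(abs(denoised - clean).mean() < abs(noisy - clean).mean())
```

A learned denoiser goes much further than an average, preserving edges and texture while removing the noise, which is why it can stand in for minutes or hours of additional ray samples.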
AI is not the only trend that I wanted to talk about. In and of itself, while it has fantastic applications, to me the trend is not any one of the things that we talked about. It's all of them. It's like, holy shit. All of these things are coming together at one time. Massive amounts of data.
I need to do it in VR and AR even if people don't really know what VR and AR is. It should look real. It should look photorealistic. It should obey the laws of physics. And I'm going to want to do it with Dreamcatcher, Design Graph, other AI technologies. These are the trends that you guys tell us about and where we spend most of our time, most of our software, trying to accelerate.
So where previously it was good enough to have the CAD engineer pass off his design to somebody in styling, who would do a rendering to present to marketing or management, only to then go back and have to restyle it, we started moving photorealism earlier into the process.
That led to digital prototyping, which is pretty cool, except in order to do digital prototyping in real time, you had to decimate the model. You really couldn't open the hood and see the engine. You really didn't have all the data.
If you wanted to actually use that not just as a tutorial on how to use the car, if you wanted to use it for a mechanic on how to maintain the engine, you couldn't do it, because the data wasn't there. What people want to do today is use the same model that they're using to design the car as the model that they're using to market the car.
It's also the model that they're using to train maintenance engineers on the car. Then the last bit, the one that creates this discontinuity, this step function in productivity, but also requires a hell of a lot of computing from workstations to data centers to the cloud, is this addition of artificial intelligence, of generative design.
So imagine doing that all in what I'd like to call the design lab of the future. Imagine that any one of these things again is not an independent thing, but that I can do all those things-- and yes, I see the clock.
ERIN BRADNER: You're good.
BOB PETTIT: That I can do all those things inside a virtual environment, which can be an augmented environment, where the laws of physics are obeyed. You may not want to fondle the fender of this Koenigsegg, but you certainly don't want your hand to go through it.
Audio has to be perfect. If Erin's talking to me, audio can't be coming this way. It's got to be coming from this way, using ray tracing to do geometric audio. Accurate physics, accurate flow, photorealistic models. Every polygon, every element of that car as you see as we interrogate the engine, is there.
Every bolt, every nut, every hose, every clamp, everything from the PLM database is sitting there and we're rendering that 100 million polygon car at 120 frames per second per eye. That's an order of magnitude-- the level of performance is an order of magnitude greater than what we typically would do by just looking at a 60 Hertz screen.
I was going to wait for this to blow up. Can I wait 10 seconds for this blow up?
ERIN BRADNER: Sure.
BOB PETTIT: Because I love to explode things. And no, I didn't have issues when I was a child. And if somebody back there is working the slider, they could just go. Oh, here we-- no, I didn't intend for you to-- never mind, I'll come back to it. Every part blew up in that thing.
What you probably haven't heard about yet is AI for simulation. You know, I talked about how photorealism is usually done at the last step, for marketing, only to then result in a CAD redesign. Simulation is done the same way, whether for drag, for structural, or for flow analysis. What if AI could be brought to bear on that simulation challenge?
What if in that same design lab of the future I'm not just looking at my car, swapping out parts, evaluating CFD analysis, but my AI bot is saving me a ton of compute time by predicting what the simulation is going to be? And so again, instead of waiting to hand off a model to run a CFD analysis only to get it back and make a change, what if I could make interactive changes to that model in real time?
Part of that is there because the GPU can deliver the power. A lot of it is there because AI can learn and understand what those Navier-Stokes equations will end up producing, and predict those results in real time.
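One way to picture the surrogate idea, with entirely invented numbers and no claim about Nvidia's actual approach: fit a cheap model on results from past solver runs, then query it interactively instead of re-running the solve.

```python
import numpy as np

# Stand-in for expensive CFD runs: drag coefficient measured at a few
# values of a single shape parameter (all values invented).
params = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
drag = 0.25 + 0.4 * params**2

# "Train" a quadratic surrogate on the solver data.
surrogate = np.poly1d(np.polyfit(params, drag, deg=2))

# Interactive prediction for a new design, with no solver in the loop.
print(round(float(surrogate(0.6)), 3))
```

A real surrogate would be a neural network trained on full flow fields rather than a polynomial fit on one parameter, but the workflow is the same: pay for the solver offline, predict in real time.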
So from photorealism, to the need for simulation, to improving photorealism with AI denoising: AI is not Alexa. It's not Skynet in the Terminator movies. It is a way to accelerate workflow, to accelerate time to market. Those are the things that we're intimately working on with Autodesk as well as with our partners here.
So for me, again, the trend isn't any one of these things. The trend is all of these things. They're all essential. I should have shown something other than a chair, since I have three cars, but you get the point. All of these are basically allowing your creatives, your designers, to be much more productive.
Obviously, these are automotive examples. This applies to AEC and space optimization. This applies to M&E and the future of filmmaking. All of these factors are what we at Nvidia work on and bring to market, both with our software partners and our hardware partners.
And a core part of that is making sure that whether you're running on a mobile workstation, a desktop workstation, a data center workstation, or a cloud workstation, whether it's streaming VR with our GRID software through Frame or through RGS, that experience is pristine.
The other trend is that you guys want to work where you want to work. You need to be mobile, and you shouldn't have to give up performance. And you shouldn't have to carry around a supercomputer on your back. That's where we're focusing our time. That's what you'll see.
Everything that I showed you there involves real apps, real use cases, real people. And those things will be running from the cloud in real time in the very near term.
[APPLAUSE]
ERIN BRADNER: Great. I think we'll stop it there.
BOB PETTIT: Yeah. Yeah. That's good.
ERIN BRADNER: Yeah. So thank you, Bob, our closer. He did a fantastic job synthesizing what you heard from our other presenters, in no small part because they are your customers.
BOB PETTIT: They are our customers. And I know I went over time. That's why I don't get questions. But I think we've already answered the questions that we were going to ask.
ERIN BRADNER: You have. The question I was going to ask is how do we get that consistent GPU experience from my local compute, from my virtual cloud, from my workstation. And you've--
BOB PETTIT: Yeah. To me it starts with the workflow, making sure that all the Autodesk software has a pristine run, whether it is in the cloud, whether it is local.
ERIN BRADNER: And what did you mean by pristine run?
BOB PETTIT: The experience shouldn't be different. You might get some issues from lag time if you're in Bora Bora, but you shouldn't have to change the interface of the app that you're used to using. You shouldn't have to work with smaller models just because you're working remotely. It should be the same experience whether you're sitting right next to the workstation or taking your workstation with you via the cloud.
ERIN BRADNER: OK. Scalable.
BOB PETTIT: Yeah. Thank you.
ERIN BRADNER: Thank you.
[APPLAUSE]
So thank you to all our speakers. They have generously offered to come up to the front of the stage once the lights go up. So if you have some questions, you'd like to continue the conversation, you're welcome to come here. I'm Erin Bradner. I'm signing off. Enjoy the rest of AU.
[APPLAUSE]