Key learnings
- Learn about Autodesk applications in VR/AR
- Learn how businesses are implementing VR/AR
- Learn hardware requirements for VR/AR implementation
- Learn how VR/AR is applicable in your industry
Speakers
- Scott Ruppert: Scott Ruppert is the Portfolio and Solutions Planning Manager for Lenovo Workstations. In this role he defines Lenovo's strategy in key areas of emerging technology, specifically AR and VR.
SCOTT RUPPERT: Well, let's go ahead and kick it off. So again, welcome. Thanks, everybody, for joining us. My name is Scott Ruppert. I'm with Lenovo. I'm going to sort of moderate this panel if you will. My goal today is to do as little talking as possible, because this is who you really want to hear from. Very excited to be talking about augmented and virtual reality today and the ways it's revolutionizing how we work.
So we pulled together this panel of experts that I won't do justice to, so I'll let them introduce themselves. If you guys don't mind going down the line: let us know who you are, and give a quick perspective on how you see AR and/or VR changing the way you work, the way people work. George, you want to start?
GEORGE DIATZIKIS: Hello, everyone. My name is George Diatzikis. I am an architect for Lenovo. As far as how I see AR and VR changing the way we work, I think this whole week has given us a glimpse of what's possible, going from simple paper design to CAD to 3D printing.
I think the next natural evolutionary step is to have something that is completely virtualized, that you can get in there, manipulate, look at, and collaborate with, which you'll hear about shortly. Beyond that, you guys know as much as I do about what's going to happen.
SCOTT RUPPERT: That's awesome.
ANDY BEALL: All right, thanks, folks. My name is Andy, Andy Beall. I'm with a company called WorldViz; I'm one of the founders. WorldViz started in 2002, and we kind of jokingly say we were one of the original VR companies, because VR has actually been around a long time.
VR is undergoing a huge resurgence, but for a lot of people it's the first time they've ever experienced it. We have focused, always have focused, and continue to focus on the business side of VR: the professional, the enterprise side.
We grew out of research labs: university labs, and then industrial R&D labs, like Boeing's. And over the last six years of our existence, this field, AEC, which is about a quarter of our overall business, has really grown massively. That's partly because the content that the majority of AEC folks work with is, and I don't mean to be simplistic about it, actually fairly straightforward content. It's largely static.
You're not trying to simulate the operations of an engine cockpit. So it's not a solved problem to get the content in, but once it's in, it's relatively straightforward to utilize. We find that VR has earned its keep in many industries for well over a decade. It doesn't mean it's always obvious to find the ROI with the device. But there are well-established pockets throughout training centers, industrial manufacturing centers, architecture, different phases of construction, human research labs, and education. It's exciting to speak to this crowd, and I'm looking forward to the section where we can just take random questions and help answer things.
DERRICK SMITH: Good afternoon. Derrick Smith. I work for Autodesk. My team manages our enterprise manufacturing customers across America. One of the reasons that I'm here is that we have been engaged, as you know, in a lot of initiatives around driving VR technology.
I think one of the important things to Autodesk is that we look at the technology, which has been around for a while. A lot of people have seen it in the gaming industry, which is obviously big for us as well. But we're looking at really practical ways we can go and solve business problems.
That's where we feel the traction is today in the VR/AR space, and that's really what I want to talk about today: hopefully give you some sense of what we are thinking, what we're doing, and how we're going to take this to, I think, the next level.
BOB PETTE: Great. My name is Bob Pette, the general manager at Nvidia for our professional visualization products. That includes our Quadro product line for pro workstations; all of our rendering and ray-tracing products, Mental Ray and Iray, of which there are several plug-ins for the Autodesk products; our VR and AR initiatives; and our remote workstation and remote virtualization solutions in the cloud as well.
So for me, AR and VR is really about the way we're going to communicate in the future. It will help solve the problem of distance. It will create better opportunities for people to collaborate and to make better products, because we'll be able to easily support our theories, easily propose ideas, and have enough information, whether it's a virtual model or a real model with enough data, that distance is no longer an issue.
Where you don't have to bring people in from 10 different countries who are working on a product to try to get something done. So to me it's really about communication and just enhancing that next step of true distance-less collaboration.
SCOTT RUPPERT: Great. Thanks, guys. So real quick, format-wise: I've got a handful of questions that I'm going to tee up. Things that I'm curious about, questions that I've heard out on the floor this week, that sort of thing, to get the conversation going. But I want to save at least half of our time this hour for your questions.
So please, by all means, jump in if you have a question, and we'll leave a lot of time for that at the end as well. So to start off: Derrick, I'm going to come back to you. Obviously Autodesk makes a lot of different software titles for a lot of different industries. So I'm curious, from your perspective, where you see AR and/or VR growing the fastest, where it's taken off the fastest, or where you see it going across some of those industries.
DERRICK SMITH: Good question. Right now, I think where we're finding the most traction in the VR space is with our automotive teams. If you look at the process today of designing an automobile, we've got some great examples out on the floor. I think the Porsche Macan is out there, and the Ford Mustang.
All of the automobile industry has embraced our technology and is leveraging VR on the VRED platform. That allows collaboration globally, so it doesn't matter where you are or what tools you have. If you haven't experienced that, it's kind of unique. I was telling a story today, to Scott, I think, one of the guys.
I was sitting in a car, so I was down in-- we have a lab down in our Venice office, in Venice, California. I'm sitting in a chair in a room just like this. Regular chair with an arm. And I put on the headset, and all of a sudden, I'm sitting in the front seat of a Mustang. I've got a steering wheel in front of me. I've got all that lighted dashboard. I'm looking around. I'm checking textures.
And I have a hand, so I click on the door handle, and the door opens. And I want to get out of the car. And I'm in this virtual environment, and there's an arm here. And I'm like, holy crap, I can't get out, because there's a chair arm here. And my mind is telling me I can't go this way. I can't go forward, because it looks so real.
And so eventually I walk through the door. And it's one of those moments where you start to realize what virtual reality does for you and allows you to do in the space. So it's very immersive, but it has also allowed the automobile industry to do a lot of what you said, Bob, where they can go through those advanced iterations without ever building a clay model, without ever cutting or carving wood or foam or anything. They can build that model in 3D space, using our tools, and then evaluate those designs and make real design decisions.
And again, you want that new Mustang every year. So you can imagine how important this is to the automobile industry. So definitely one of the areas that's probably in the lead.
SCOTT RUPPERT: Yeah, all right. Cool. I'm going to shift gears a little bit. You mentioned collaboration; a couple of people have mentioned that already. So I want to tee this one up for Andy, actually, because I know WorldViz has specialized in collaborative VR for a number of years. Could you share your thoughts on how VR, or AR for that matter, is changing the way people collaborate, and what we can do now in those environments?
ANDY BEALL: Absolutely. But first, maybe just to calibrate: how many people in this audience have used, say, Gear VR? OK. Gear VR is kind of, to me, the bottom standard; Cardboard maybe the super bottom standard for VR.
Gear VR is like a View-Master on steroids, in a sense. You've got a fixed station point, but you can look around. You're kind of in the game. How many have used the next step, which to me would be an Oculus with a tracker device, or an HTC Vive? OK, all right. Good, so that's a good portion of the audience.
I want to get to the collaboration, and cut me off if I'm going on too long. But on the research side, as I referred to, I've been doing this, as a company, for over a decade. I actually came out of a psychology and cognitive science background, so from a research point of view, I understand what it takes to really affect people's processing of visual stimuli, et cetera.
VR is real. Don't kid yourself that it's not in many ways. You tap into parts of people's brains that people do not have conscious control over. And it can have really strong effects. We were down at the show floor on the outside. What's the area called?
SCOTT RUPPERT: The VR interactive zone?
ANDY BEALL: Yeah. We're doing it jointly with Lenovo right now. And we have a main collaboration demo, but we have another one that kind of shocks you in how real things can be. So the demo is a room about the size of this room. You put this on, you feel like you're in a large space.
Now that seems simple, but the fact that you can create a virtual space that feels many times larger than the one you're physically in, that's an achievement. Second, we have a metal plank going across the floor. You stand here, and we raise you 30 feet up in the air. OK, now, everybody who tries this gets an instantaneous physiological response.
All right. Some people freak out. But you're up there, and people are having a shaky-knee response, and you have them walk across. And then maybe a third of people are willing to walk back to the middle of that plank and step off. And those that do, when you grab them, maybe because they're about to run or something, they're sweating. Their heart is racing. They've just done something that's physically impossible, but they are responding as if it's real.
So just as people do not want to walk through a virtual wall, you're tapping into a part of the brain that people do not have conscious control over. It's sort of like in The Matrix, jacking into the central nervous system. So you can use this for really great things: in design studies, your massing studies, or evaluating a true response in terms of space and distance perception, emotional response. And there are all sorts of possibilities. We're going to see where VR goes.
Now, those of you that have tried VR, so a little over half, I think: how many have had a social experience with one of the higher-quality headsets? OK, so two in the room. Higher quality, meaning beyond Gear VR. With Gear VR, you can sit down in a Netflix theater and watch a movie with someone; that's kind of the low end for me.
The social is being in a space where both of you are able to move around, pick up objects, hand each other objects. I'll give you a quick anecdote. I've been doing VR for a while, as I've said now three times. I've had customers that have had what we call collaborative virtual environments for a decade-plus; we've built these.
It wasn't until, and I admit it, I went to the Facebook Oculus Connect meeting two years ago, where they had the Toy Box demo. You could light M-80s, you could throw them, you could smash garden gnomes with a hammer. It was fun. It was physics. It was just going crazy.
But I had people recommend it: you've got to try this, Andy. You've got to try this. I get in there, and there was another virtual person in there with me that I first thought was just a chatbot. And I was kind of pissed. Then I realized it was a real person. And the two of us just had fun.
I played hide and seek. I hid under the table; first time somebody had tried that, he said. We went through this, and when I came out, my colleague said, Andy, this was a brand-new system they had, did you take it apart? Did you find its limits?
And my response was no, that would have been rude. I was with someone in that space. It was really kind of a different experience, and so those of you who have tried it, I would posit that if you've been using VR for years, that social experience was a whole new level of VR. It's really kind of an eye-opening experience.
We do have something on the floor, and I would invite everybody to try it. But I'll just say that collaboration, the social side of VR, is the thread that is going to weave into the killer app that I think is still undiscovered for the large consumer base that Oculus, HTC, these companies are trying to sell to.
I sure hope they succeed. Because on the business side, I love the fact that the headsets now have two zeros knocked off their price tag. Five years ago, these headsets were $35,000 to do the kind of work you can do with a Vive or an Oculus right now.
So really, we are absolutely at the point where you can join a virtual meeting with somebody from different cities in the world. And this communication is in some respects better than real. It's not like standard video teleconferencing, which is largely more of a distraction than a benefit over pure audio, because you're trying to coordinate people's looking directions: you have a camera here, you have a screen there.
It's like having a conversation looking at somebody's belly button, all right? In VR, you can get that right. You restrict what portions of the person you're trying to cast with a simple avatar. It's a very tight social experience.
DERRICK SMITH: I was going to add, too, I talked about the automobile industry, and the other thing that's happening right now, thanks to our good partners here, is that technology has converged to the point where the multimillion-dollar caves our automotive partners would use to do this level of visualization can be replaced with a couple of Nvidia cards, a workstation with a decent processor, and a $700 HTC headset. So we have, I think, revolutionized this technology and brought it to a place where a lot of people can start to leverage it. That's one of the other drivers, I think. So I know you kind of touched on that.
ANDY BEALL: Definitely.
SCOTT RUPPERT: Thanks. That's a great segue, because I was going to talk technology a little bit. Bob, we're coming your way next. Clearly all of this puts a lot of demands on, I mean, it's a very visual experience, right? A lot of demands on the graphics, on the GPU. So could you talk a little bit about Nvidia's strategy there?
BOB PETTE: Absolutely. You know, I am glad you brought up caves. I was actually doing VR about 35 years ago. I did 21 years at [INAUDIBLE] Graphics, a couple of start-ups, and a few years at Nvidia. Back then, we were using Onyx supercomputers and Power Challenges, million-dollar computers and million-dollar systems, to outfit a six-sided cave with tracking.
That doesn't really change when you shrink it down to someone's view. In fact, I'd argue that the requirements are going to be even greater than what we used to need when we were driving a six-sided cave the size of this room. And we built caves this big.
The requirements are going to be much greater. For one, you're not dealing with just HD. You're going to be dealing with 4K, 8K displays. You're not dealing with 60 frames a second or 120 frames a second; you're going to be dealing with a couple hundred frames a second.
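A quick back-of-the-envelope sketch, with illustrative resolutions and refresh rates rather than any vendor's quoted specs, makes the scale of that jump concrete:

```python
def pixels_per_second(width, height, fps, eyes=2):
    """Raw pixels to shade per second for a display at a given rate."""
    return width * height * fps * eyes

hd_wall = pixels_per_second(1920, 1080, 60, eyes=1)   # one HD cave wall at 60 Hz
hmd_4k  = pixels_per_second(3840, 2160, 200)          # 4K per eye, 200 Hz, stereo

print(f"HD wall @ 60 Hz : {hd_wall / 1e9:.2f} Gpx/s")
print(f"4K/eye @ 200 Hz : {hmd_4k / 1e9:.2f} Gpx/s")
print(f"ratio           : {hmd_4k / hd_wall:.0f}x")   # roughly 27x more pixel work
```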
You're going to have to be tracking the eyes. In order to really socially understand what someone's doing, you want to know if I'm giving you the stink eye, if I'm smiling. So you'll see cameras inside the headsets, as well as video coming in and out.
You're typically not just going to be there for the 3D experience itself. You might have a large automobile in front of you, or be walking through a future building. In order for humans to interact, they're going to want to see somebody that's not Mario or Luigi, but a point cloud representation of you that's somewhat real, coming in in real time, that shows your face.
You're probably going to have a whiteboard. You're probably going to be bringing in live video if somebody is joining via teleconference. There is 2D data: manuals, blueprints. All of that still speaks to the killer app; that's where this will go. You want to recreate what you would do if you were to jump into a war room for a product design or a new idea. You're going to want to sketch, use the computer, call somebody up, have video coming in, look at clips, look at the model, peel it apart.
All of that means a shitload, sorry, a lot more data. A lot more data coming in, above and beyond just drawing the models. You know, we've got several VR demos on the floor that are taking up the entire 24-gig framebuffer of two Quadros running in SLI mode. And that's just scratching the surface. People are having to decimate their data, and I'd argue that you don't want to do that.
The beauty of having something like Stingray with a single-button push is that your entire data stream should come across. There should be no decimation. You decimate, and you lose something. You lose insight. You may lose your business if there's a big mistake in what you're decimating out. So you want the entire model.
So I think you'll see the GPUs continue to get bigger. You'll continue to see GPU interconnects that allow a larger addressable framebuffer. And that's the difference, I think, between what I'd call a gaming card, regardless of who makes it, and a pro card. Gaming cards typically tap out at about 8 gigs of framebuffer. Pro cards are at 24, soon to be 32, with single address spaces of 96. That's the kind of framebuffer you need.
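To see why an 8-gig framebuffer runs out, here is a rough footprint estimate for an un-decimated model; the triangle count and per-vertex size are illustrative assumptions, not figures from the panel:

```python
tris          = 50_000_000   # un-decimated design model (illustrative count)
vtx_per_tri   = 3            # worst case, no index/vertex reuse
bytes_per_vtx = 32           # position + normal + UV, interleaved

geometry_gb = tris * vtx_per_tri * bytes_per_vtx / 2**30
print(f"geometry alone: ~{geometry_gb:.1f} GB")   # about 4.5 GB

# Add 4K textures, stereo render targets, and MSAA buffers, and an 8 GB
# gaming card is already forcing decimation; a 24-32 GB pro framebuffer
# is what keeps the full model resident.
```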
You know, the pro cards typically are going to have encoders and decoders for video in and video out, as you're bringing in multiple streams of video. Those are all things we're looking at: putting more encoders on, both for video and for streaming. We should be able to have this discussion in four different countries, and it should be just as fast and just as real. Which means I'm not transmitting video, but I'm encoding and decoding the IP streams that will reconstruct my holographic body.
So all those things we're looking at for future GPUs: better interconnects, larger framebuffers. Current GPUs have single-pass left- and right-eye rendering; you'll see more of that. Foveated rendering, where your periphery is blurred, like it is naturally; most of that will be kind of an easy call.
360 video stitching, where we can take in 32 cameras: ingest 32 HD streams, stitch them, warp them, blend them, and egest the result in real time. That just requires a hell of a lot of memory and a hell of a lot of additional ASICs on the GPU, which you'll continue to see.
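As a rough sense of the bandwidth involved in that ingest, assuming uncompressed 8-bit RGB at 30 frames a second (an assumption; real rigs vary in bit depth, subsampling, and frame rate):

```python
streams       = 32           # cameras in the stitching rig
width, height = 1920, 1080   # HD per stream
bytes_per_px  = 3            # 8-bit RGB (assumed)
fps           = 30           # assumed capture rate

ingest = streams * width * height * bytes_per_px * fps
print(f"uncompressed ingest: {ingest / 1e9:.1f} GB/s")   # about 6 GB/s,
# before stitching, warping, and blending even begin.
```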
SCOTT RUPPERT: In that same vein, George: Lenovo being a hardware provider, a solutions provider, how do you see this technology changing the configurations or recommendations that you're making?
GEORGE DIATZIKIS: Certainly. Well, with VR pushing the need for more power, the systems that people have had in the past may not be enough. So, as Bob mentioned, the right graphics card is very important. Beyond the graphics card, consider something as simple as storage.
I would configure a system with a large [INAUDIBLE] drive for keeping your models, but I would consider something along the lines of solid state, or even NVMe, to get the performance you're looking for, so you don't have a bottleneck there.
Large framebuffers, and large memory on the system itself, to make sure you have enough space to bring your models in. So, net-net, VR and AR are putting an even larger demand on the system for power. I would be very careful as you spec it out to make sure you have enough horsepower in the system to get the experience you're looking for, so there's no jitter, no lag, no issues where it's just taking time to load.
And also consider, are you just going to use it for VR, or do you plan to do a lot of content creation on the same system? So when you spec it out, I would make sure you weigh all that.
SCOTT RUPPERT: Sure. All right. Cool, very cool.
BOB PETTE: I just want to add one thing to that, something I forgot to mention before around the GPU requirements. I talked about caves 35 years ago, but caves are making a comeback. The democratization of VR with HMDs has not eliminated caves. It's actually brought them back, because VR can be lonely.
And so, in the absence of a killer collaborative solution, you still want people to be able to see what people are doing, to put it into a large-venue space, rather than huddle around a little display and look at two googly eyes on the screen. That large-venue experience for VR is something we're continuing to work on. And to the point of power: those are going to be, again, large workstations driving multiple projectors, multiple walls, whatever.
You'll see those two grow in tandem. On the consumption of VR, obviously it's going to be an HMD. I think in the corporate space, for design and content creation, it will be a combination of HMD and [? electrical ?] displays.
ANDY BEALL: If I can just jump in to emphasize: absolutely. Caves, especially within AEC. Half of our clients, half of our sales, end up being cave-based. And they are not your dad's cave, in a sense. It's lightweight projectors that can be ceiling-mounted. 3D. Front-projected, so you don't need back space, mirror bounce, a lot of real estate to do this. And it lends itself immediately to having that person in the room with you. You see a lot of their non-verbals. You see their face in ways that right now are kind of obscured with goggles.
So, you know, goggles are about half; we find the other half of designers like to have the cave system. And then there are actually cultural differences. We sell into the Middle East, parts of Asia. In parts of the world, people do not like to put on headsets, period, so they're exclusively cave-based. So caves are not a dinosaur yet, and I totally agree with Bob: I think we're going to see more and more cave-like sales.
SCOTT RUPPERT: All right, guys, I just have one more question, and then we're going to open it up, because I want to see what questions we have in the audience. I'm going to start with Andy, but everybody chime in on this one. We've already talked about a lot of different moving pieces, right? There are several different headset manufacturers, input devices, handheld and haptic devices, things like that.
As you look at this, I'll say, cluttered landscape, what advice do you give to customers wanting to get into it? How do they start making those decisions and putting that solution together?
ANDY BEALL: Yeah, I guess some advice I'd have. It's certainly cluttered, almost ridiculously so. Luckily, there are a couple of predominant players in the goggle space. But just start to Google on controllers, and there are literally like 150 manufacturers of headsets coming out of China right now. A lot of them are just copycats. A lot of them have some interesting aspects.
You can really get fixated. You can waste a ton of time on speccing, on which one has more of this or that. When it really comes down to it, it's what you're building. What's the function you need? What's the story you're telling? A lot of this hardware can do that, if you focus on your own business.
The earliest sort of consumer-resurgence HMD was the Oculus, the DK1 as it's called now. You look at it now, and it seems really pixelated. Screen-door effect. People could easily criticize it, and I would find that when I would have somebody put it on, this is, what, three or four years ago now--
Yeah, half the people would say, but I see the pixels. I'd say, just get over it. Let's move on to the story I'm about to tell you, OK? I'd get into the simulation of an operating room, or have them walk across that plank. Within 10 seconds, people were beyond that, OK?
So don't get hung up on the particular spec, and try to keep flexible. Choose software that's going to allow you to jump around to different devices. That's certainly our mission statement: allowing people to swap out devices and not get stuck on any particular piece of hardware.
SCOTT RUPPERT: Any other thoughts on that one?
BOB PETTE: Yeah, I come at it from a workflow perspective. We help a lot of ISVs; Autodesk is one of our biggest ones. And a lot of the game engines we work with, Epic's Unreal Engine for example. With the initial Oculus push, there were a lot of professional enterprise customers that were wanting to try VR, and struggling, and sometimes not struggling, to export their data to game engines.
And I would ask them, what are you trying to do? Is it just to look at your model? Well, no, I want to get it in VR. Then I want to interrogate it. I want to see if there's a gap. Then I want to figure out what part that is and look it up in my PLM system.
It's like, so how's a game engine going to help you do that? So as you look at putting these things together, I agree device independence is important, but I'd really look at the workflow. That's what I like about the one-button push. You'll see more and more of the ISVs ensure that workflow experience.
It's not to say that there isn't a place for game engines in the pro space. But I always come at things with, what's the question I'm trying to answer? And if I'm going to put myself in an immersive design review, am I just looking at the artistic quality? The design? Or am I going to want to look at CFD? Or am I going to want to interrogate? To see what parts are what? And place those out?
If I'm going to do that, then I think the software is really key in terms of the connection points. The devices will continue to get commoditized from an HMD standpoint. So I think it's more about what software packages you put together for the best experience.
SCOTT RUPPERT: Great, great.
DERRICK SMITH: I'll just add that I know for my team, our goal is to get the best visual experience out to our customers on our platforms. As we've made decisions about hardware, we've had to do a lot of research, and obviously we've forged some partnerships with Nvidia, because they have the best hardware. And we're able to get these platforms out and show our customers workflows in a quality environment.
I think when I was in a training with some of your folks, they were talking about this concept of presence, and how it's changed the experience that we have as far as people getting sick. The hardware can be the difference in that frame rate, that delta between what your mind is telling you you should be seeing and what you're actually seeing. And the technology becomes very critical.
Because we look at what the M6000s, and now the P6000s, are doing, and it makes a huge difference. You were talking a little bit earlier about some cleanup. We were prepping for AU this year, and just to give you a sense of the difference that hardware makes: I had one of my guys working on the Ford model, with I don't know how many millions of polygons. And he was on an M6000 card.
And he was having to do cleanup; he was getting rid of data, trying to reduce the model down. So I went to Nvidia and said, hey, can we get some P6000s? And we got them. And he sent me an email over the weekend: oh, my god, I don't have to do any more cleanup.
And that to me, you know, is dollars. That's money saved. This is one of our best experts out there. So I think even when you're working within your own environments or your own company's industries, there is some real importance in having good hardware out there. Obviously our workstation vendors are just as valuable, not to leave you guys out. But these guys saved us this year, so--
SCOTT RUPPERT: It's all good. It's all good. Questions? Questions? Awesome. Start back here. I'm going to bring the mic around, just so everybody can hear.
AUDIENCE: So this is kind of a follow-up on the second-to-last comment, and it has to do with a comment made previously about content creation: using this means to invite others to comment on things that they experience in the VR. And whether it's voice or text or other kinds of means, to tag or associate feedback with elements in the environment. Open question.
SCOTT RUPPERT: Sounded like the question was around pulling people into add--
AUDIENCE: Sorry. Let me clarify that. What kinds of techniques are being explored to capture feedback from people who are in a VR experience? Rather than the designers. They know what they're looking for. It's the other users that are going to kind of give comment, criticism, et cetera.
BOB PETTE: Good question. So part of it is to have an SDK where you can actually get feedback. Part of this gets to eye tracking: gaze awareness, intent. So you want to be able to do correct eye tracking, and you'll see HMDs get more and more eye tracking.
If you have accurate geometric audio, and you have good eye tracking, and you can do foveated rendering to blur out the periphery, you can solve most frame rate issues, right? So what we are trying to provide is a set of solutions that goes beyond just controllers.
It's amazing, and I do it myself: I walk up to screens, and I'll do like that. And it's like, oh, you can't do that. Gestures are going to be important. So as we get people in an environment, we usually have cameras set up as we test out new features, to see how they want to interact.
You know, you talked about that chair arm and the like, but people will do that with the controllers, right? It's still not a natural thing to have a controller in your hand when what you really should have is a welding gun or a pencil. So we're trying to use gestures and cameras, and give them no controllers whatsoever.
Put them in a room and say, what do you want to do? I want to go write on the whiteboard. Well, just go do it. We'll measure their hands; we know where they're pointing; we can simulate some text. What do you want to do with the model? I want to rotate it.
Well, go ahead and rotate it. And most people go like this, right? Zoom in. So for us, we put a lot of people through guinea-pig experiments on how they want to interact in that environment. I think we have to do a lot more of that, and then build that into SDKs, so that software vendors can incorporate it into the application. Did I hit on that?
AUDIENCE: Close.
ANDY BEALL: I'll add one thing to it, I guess. There are great ways to add what we think of as traditional tools, like text or whiteboard annotation. But truly, just use the data you already have. You are tracking somebody's head in 6DOF. You're tracking maybe their hand and their body position at 90 hertz, 120 hertz.
And studies have actually shown that in VR, people tend to point their head very close to their focus of attention. There are now eye trackers; SMI, a German company, is doing a great eye tracker in the Vive. But you can actually get a lot of utility from just the pointing vector of the headset.
What I'm getting at is that you have almost this huge big data latent in your system. If you capture it, you will have the proxemic information about where people are, what they're grabbing, what their focus of attention is. Bring in multiple people and start to feed that data into a 3D visualizer, and you'll see a social dynamic, a social hierarchy, starting to occur.
We have customers that will then record that, along with the audio, and you now have almost a forensic fingerprint of that interaction. We work with some construction firms that use an experience as the sign-off: build this operating room as I just experienced it. And that's the key: as I experienced it.
How do you record an experience? Usually with a photograph, yeah. But in VR, you record the entire walking pattern. You've got proof that that chief of surgery did all those steps and signed off on what that was. You just have to build it again, but it's all 3D data; you can copy that. So I think it's a whole new opportunity for crazy big data to be available as people start interacting with these systems, capturing all these aspects. And we're going to see the next Google AI API come out, so it can predict your behavior based on your VR input.
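A minimal sketch of the logging idea Beall describes, using the head's forward vector as a cheap attention proxy. The pose values and the call site are hypothetical; a real system would read poses from the tracker SDK at 90 to 120 hertz:

```python
import math
import time

def forward_vector(yaw_deg, pitch_deg):
    """Unit 'look' vector from head yaw and pitch, in degrees."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),   # x: right
            math.sin(pitch),                   # y: up
            math.cos(pitch) * math.cos(yaw))   # z: forward

log = []

def record_sample(position, yaw_deg, pitch_deg, roll_deg):
    """One row of the latent 'big data': timestamp, position, gaze proxy."""
    log.append({
        "t": time.time(),
        "pos": position,                           # (x, y, z) in meters
        "gaze": forward_vector(yaw_deg, pitch_deg),
        "rot": (yaw_deg, pitch_deg, roll_deg),
    })

# Hypothetical call, as it might happen in a 90 Hz tracker callback:
record_sample((1.2, 1.7, 0.4), yaw_deg=35.0, pitch_deg=-10.0, roll_deg=0.0)
```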
GEORGE DIATZIKIS: Just to add to that as well: I approached your question more as a workflow process. In one demo that I saw recently, the idea was that the designer finished his model and sent it out for review. The reviewers looked at the model, and they were able to use gestures and put notes where they wanted to move something.
Once that was done, it went back to the designer. The designer put the headset on and saw something blinking; I can't remember exactly how he was being notified. But there was information there, and he looked at it and said, oh, I see, the reviewers need me to change this: change the color, move it five inches, or whatnot. Was that what--
AUDIENCE: Yeah.
GEORGE DIATZIKIS: Yeah, it's being looked at and addressed. Gestures are a good way. But you need some type of feedback mechanism to the designer once the model is reviewed: what does he need to do? Because obviously there are multiple passes and so forth.
BOB PETTE: And I think as it relates not necessarily to understanding what the person is experiencing, but to giving another person feedback from inside the environment: again, it's critical to get beyond 3D. A shared whiteboard is two-dimensional, right? A live video stream of your mentor telling you to cut the green wire before you blow up is a video stream.
Having all those elements in, I think, will create the app that makes it more and more intuitive for people to provide natural feedback, versus unnatural ways like pressing a button on a controller to create a sticky note to hang on someone's hat. So all those other data streams are critical to this, and that's why most analysts say AR will dwarf the VR market, by an order of magnitude.
Because at the end of the day, I'd rather look at your eyes while we're looking at this virtual car here, versus a point cloud. I'd rather be drinking coffee while I'm doing that. I'd rather have something that weighs this much versus an HMD. And we'll get there.
SCOTT RUPPERT: Sorry, let's shift gears a little bit. I saw a lot of hands. We'll start back here. Questions?
AUDIENCE: Thanks. A few of you have alluded to this: is there an issue or concern with safety or danger as it relates to this technology? And if so, how do you evaluate and address it? Just as it relates to the physiology of being in that environment.
ANDY BEALL: Certainly there are physical safety issues, absolutely so. You're seeing the consumer game companies, like HTC and Oculus, being very, very cautious about this. You put on an Oculus, and there's a warning system. There are age limits that you're supposed to abide by. There are some reasons to agree with those; other reasons are just, I think, lawyers being very cautious.
But absolutely everything we do, every time we work with a client, every time we install a system that can track a room this large, we are basically requiring that our customer maintain a safety spotter in that space with that person.
I referred to walking across the plank. We are very careful with our language there when we ask you to step off: it is, take two large, slow steps off. We used to say jump, and literally, I think we had half a dozen people decide to be Superman, do this, and land on their belly at a trade show. Or knock a table over, right?
People think they have superpowers in VR. Thank god nobody, I mean, we had a $40,000 headset get smashed once as a result of that; the person had a cut on their nose. So you've got to take it really seriously. And I sure as heck hope that a 12-year-old kid does not play a shoot-'em-up game on a Vive and run through a plate-glass window, OK? That's going to be the end of the consumer industry for a while if that happens.
So there's the physical safety. And then there are even moral or ethical issues. With this technology, we can absolutely create experiences that, I think, are beyond what should be exposed to maybe the average person, in terms of, say, violence or fear.
AUDIENCE: So the psychological would be--
ANDY BEALL: I think the psychological, moral, ethical issues are real. We're just scraping the tip of the iceberg right now on what those possibilities are.
AUDIENCE: Thanks. So I was really interested in, I think, one of the first comments that was made about distance-less collaboration. It strikes me that the thing that's going to give me the experience of being in your presence is the headset. But then that's the same thing that prevents you from having any kind of true lifelike experience with me, because I'm obscured.
What is the best current way around that? And what do you think the future technology would need to be to completely remove that barrier?
BOB PETTE: So I'll take a stab at that. I alluded to internal cameras, which you'll see more and more of, for eye tracking but also for facial expressions; a lot of the research we're doing is on being able to see if you're giving the stink eye again. For representing you, instead of picking an avatar, there'll be a simple scanner, a point cloud scanner. With a Kinect, you can get a pretty decent point cloud of your face.
And it's relatively easy, given everything else that's going on in VR and AR, to take that point cloud of your face, take that real-time input from the camera, and, you know--
AUDIENCE: So have a real time 3D avatar?
BOB PETTE: Absolutely. So think of the cloud as the choreographer: who are the people, where are they? You pick your place at the table. You pick your place around the device you're looking at. Some people may be experiencing that in AR; they have the physical device in front of them, and the people are the virtual beings.
Some people would be seeing everything all virtual, right? So they're displaying both the device you may be looking at and everybody else. But it's going to be related to facial expressions, eye tracking, gaze awareness, intent. And we're doing it today: reproducing that on the fly, so that when you're talking, your lips are moving, saying the words you're saying.
And if you're there at your place at the table in Japan, the audio comes from your direction. I can glance over; your lips are moving; it's your face. You'll see that very shortly. But it's going to depend on having enough cameras and facial detection.
ANDY BEALL: If I can just add: I'm in a small company, 35 people. We've got to make ends meet, obviously, and I've got to sell what works today. And while there is some fantastic research, what Bob's talking about is not five years from now; maybe two or three years from now. But there are aspects of this that right now, today, do allow you to have interaction at a distance.
And there is no silver bullet that solves everything now. You have to understand what the problem, what your application, is. But truly, and you can try it out this afternoon: for certain applications, the experience you have with that one-on-one collaboration right now is far better than video conferencing.
Are you seeing all their facial expressions right now, with the headset on? No. But you're able to do turn-taking. You're able to gauge your audience. The first time Scott got to try this, it was with three people. We bring three people together, and it's bizarre: the three physical people are pointing in different directions, and they could be in different cities.
But together, I can literally do a sales pitch to you of something. I'm telling a story, and I'm waiting until I see that both people are paying attention. And they bump into each other, and they're basically a mile apart. So--
BOB PETTE: My comments were more around the true way that we would want to interact. We've been in joint collaboration at SIGGRAPH and on the Hack Rod; designing a car collaboratively is certainly possible. Those are all good things out there today.
I would say that within a year, you'll see something that will blow your mind: something you could readily use and drop into existing collaboration solutions, but that allows you to really connect with a human being the way you want to.
AUDIENCE: I'll be waiting for that. I've already cued up my avatar with a full Afro.
SCOTT RUPPERT: Nice, nice. Was there one back here.
AUDIENCE: I had a simple one; a lot of them were kind of answered. I had three things, but one comment was, I like hearing what you're saying about gestures. I thought of Minority Report, where you're up there doing the gestures; I don't know how much of that was brought in, so that's more of a comment. Then the safety discussion brought me back.
I'm from the automotive industry, so I'm very excited to see where this is going. And what I'm hearing, even being here this week, is reviews, reviews, reviews, collaboration, collaboration, collaboration. That seems to me like a small percentage of what this can do.
We've talked about, five years from now, you work every day in one of these: I won't need a computer screen, I won't need to sit in an office. So I just want to hear your thoughts, because hardware-wise, something as light as your glasses is the big thing that's missing right now. You know, the first cell phones were massive.
So once it gets down to that. But I wanted to ask, because I've never had a panel like this to ask about the psychological effects: can somebody wear it and live in that kind of world and work all day, like eight hours? Has any of that been researched? Is anybody looking at that right now? Even if the hardware gets small enough and the computers get fast enough, can we psychologically, as humans, do that without, like in one of those movies, our minds turning to mush or something? I was just wondering, because you're all experts in this--
ANDY BEALL: Yeah, I'll start on some of them. There are folks that are trying to see what the limits are to wearing a headset.
[INTERPOSING VOICES]
ANDY BEALL: Five years ago, the limit was more like 15 minutes, I would say. The first headset I ever used was one by a company called VPL, the EyePhone, which I'm sure you know, Bob. Back then the tracking latency was so bad, on the order of 100 milliseconds, that you could not stay in it for more than maybe 5 or 10 minutes, unless you were just crazy bulletproof to sim sickness, like an astronaut or something.
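A small calculation shows why that much latency was unusable; the head-turn speed here is an illustrative figure, not a measurement from the panel:

```python
head_speed_deg_s = 200.0   # brisk head turn (illustrative figure)

for latency_ms in (100.0, 20.0):   # early HMD vs. a tuned modern headset
    error_deg = head_speed_deg_s * latency_ms / 1000.0
    print(f"{latency_ms:5.0f} ms latency -> scene lags by {error_deg:4.1f} deg")

# 100 ms of latency drags the world about 20 degrees behind a fast head
# turn, which the visual system flags instantly; at roughly 20 ms the
# error is small enough that most people no longer detect it.
```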
Today most people can stay in for up to an hour, I would say. And actually, other problems start to come into play: hot spots from the headset on your head, getting a little warm, fogging up, things like that.
So the latency now, on a properly tuned Vive or Oculus, is just about at the optimal threshold, or low enough that the physiological systems are unable to detect it. We still have eyestrain: the lenses are still creating a constant, what's called monocular, accommodation, the lens focus.
And I suspect one of the things Bob may be referring to is that there are some advances on the horizon, with a couple of companies, to create light field displays, where you're actually getting focus. That stands to be a really qualitatively new sense of VR, and it's also going to reduce some of these problems, like how long somebody can be in it, because your lens will be shifting and matching the binocular convergence of your two eyes.
So, your question of when we're going to be able to replace things: I walk into a room where we have a couple of 3ds Max artists. They each have four screens, a Wacom tablet, a SpaceBall, keyboard and mouse, right? And I would love to think that I'm around the corner from being able to use VR so that my artists can just go crazy.
But they are so fast with all the key bindings and all those monitors that I think the hurdle to replace that physical gear is much higher than we give it credit for. It will happen. But even with the advent of some really killer new tech, there are going to be new metaphors, new interactions, a whole UI/UX that's going to have to be tuned and evolved.
So we're-- I'm in it for the long haul.
DERRICK SMITH: I don't really see it happening until AR. We're working with some lenses now where I can see through as well: light field displays that adjust to my eyes, give proper depth, and enable us to draw things based on where you're focusing, so I don't get the kind of brightness that causes eyestrain all over the place. That's really where I'm looking.
But more importantly, I can see through to what I want to. Those are still big, and they will come down. But the key for me in staying in those is that I can see where my coffee is. I usually take my headset off when I'm looking for coffee or a snack.
ANDY BEALL: It's the Pepsi button, as John Carmack calls it.
DERRICK SMITH: And so with the AR goggles, I can see through; I can see my coffee, and I can put it down without spilling it on my lap. Those simple things that we're used to when we're just working in front of a computer will take us further and further into it. I'm not a psychiatrist on what it'll do to the mind. I guess if you're Dr. McCoy, I'm Scotty.
I'm trying to get the transporter to work, and we'll see if it comes back in the right bits. But AR, I think, is really where you'll get somebody that could literally stay in it all day. And obviously, the lighter they are, the more you'll be able to address that.
That brings in different issues in terms of fitting cameras in to adjust based on your own eyes and the curvature of your own lenses. But they'll get smaller. I think that's far off, though. And I forgot to say this up front: we do a lot for designers and scientists, and we have for years.
I think the bigger opportunity, for those of you actually selling products in the room, is consumption: VR as a consumption device. Giving somebody the ability to design their own car, design their own house, look at their new building. That's the big opportunity from a monetary standpoint.
Think of the number of devices that will be out there for people to be able to walk into any showroom; there may not be any car on the floor, right? Buy a condo. Do virtual training, virtual maintenance over distance, on an engine that hasn't been built yet. Or if it has been built, maybe augmented.
Those tend not to be a designer sitting there all day long working on something, but a specific task. Defuse that bomb. Fix that engine. Buy your car. And that is the real volume opportunity in VR right now.
BOB PETTE: Yeah, I was going to--
AUDIENCE: How long do you think that will be?
DERRICK SMITH: We're actually deploying virtual showrooms now. Audi; Cadillac is on the verge. We're working with most of the automotive makers to put in virtual showrooms and give people the ability to go through every option they have on the car and see it. I don't know what burl wood looks like, but I do now, because I can just swipe, and there it is. And then, you know, drive the car.
So those are being deployed; there are already several hundred out there now, and you'll see more and more, typically in areas where they don't have the ability to keep a lot of inventory. But at the high end, the sales rep will come to your house, right? Let me design your Rolls for you. Let me design your whatever for you.
GEORGE DIATZIKIS: I was thinking another great example is the oil and gas industry. We have a customer that designs oil platforms, and that's an industry that has evolved over the years. A lot of the people who do that work are now older and retiring, and the challenge is: how do we get younger people, new people, trained to go out on these rigs, which are very small and obviously out in the middle of nowhere? It's very difficult to transport people there, and it's a very dangerous job, so the training has to happen. It's critical. How do we do that?
Well, virtual reality environments will allow us to go in and help customers create those simulations and trainings. And we talk about the realness of it. He and I were discussing this, another example: when you're getting that training, and you're breaking that pipe out on a platform and that boom moves over, it feels like it's going to hit you. And that's the level of training I think they want: to show, hey, you can kill yourself if you don't get this right.
And so they're able to simulate that, get people training so when they're out there, they feel that experience, and it's very real.
We were talking earlier about one of your examples, where there's an airplane and the propeller is spinning. You were talking about the safety aspects of this, and I was thinking, you know, it's easy to walk through that spinning propeller. I don't ever want to feel like that's OK. And it is OK in the virtual environment.
DERRICK SMITH: You don't want to desensitize people, something like that.
SCOTT RUPPERT: I think there were a couple more questions, and we just have a few minutes left, so let's make sure we get them answered.
AUDIENCE: Mine's kind of more organic compared to what you've been talking about; it goes back to AEC, and you were talking about the caves. We build and overhaul aircraft carriers for the Navy, and the Navy is really embracing point clouds now. So we're using that in the design process as our baseline.
The one thing I've been watching for, and hoping to see, is a little bit more point clouds in the cave environment, and later on, in headsets. We just got done scanning an aircraft carrier in Japan, and it would have been really nice to be able to send that back to our customer, have them do a virtual walkthrough of that area, and have their design-change comments before we even got back.
So after listening to what you've all been talking about, it just sounds, like I said, organic. But point clouds are a big thing right now. Reality capture. And that's the beginning of the process of later going into augmented reality; I mean, we're already training in augmented reality to build certain components of the aircraft carrier.
But in the beginning phase, using point clouds is a really big concern. So is there any emphasis on point clouds in the cave environment, down into--
DERRICK SMITH: Absolutely. In fact, I know Nvidia is pretty familiar with this as well, so Bob may chime in, but we have platforms today where we can take that data into an environment. One good example, and I think you guys at Nvidia demo this, is where drones were used to go in and actually scan a construction site.
They went through at various points in time in the construction process, from when it was just dirt, through laying the foundation. They did point cloud scans of that, and you were able to put on a headset, click through the different phases of that construction, go through the building, and see a level of detail that was just amazing.
I mean, that's what we're doing today. So you're spot on as far as other areas where point cloud technology is--
BOB PETTE: Yeah, they sort of mesh. LIDAR, and video to point clouds, is big from a media and entertainment standpoint as well, but for construction, huge. And so there are the SDKs that we enable; I mentioned the 360 video one, and we've got point cloud ingesters.
I think the customer shouldn't be inhibited by what the viewer is capable of displaying. Whatever your data is, whether it's polygonal or volumetric or point clouds or video, we have a viewer, I think we're demoing it on the floor, that can take all that data in. Because you want to look at the point cloud data, compare it to what the building is supposed to look like, and look for deltas, look for differences, right?
And so we've got some demos of that. Like our new headquarters: we're doing that right now. If you google Nvidia Endeavor, you'll see some point cloud examples. There should be no inhibition in bringing in point cloud data, with rendered data to fill in the gaps, because there's got to be some interpolation depending on the granularity. And then video as well. I think all of those will be equal citizens in the VR space.
ANDY BEALL: Yeah, in prep for the show, Lenovo's team actually scanned in a server with the lid off, using one of these handheld scanners, the FAB, or something very similar to the FAB 2. So in the system we're running downstairs, with three people collaborating, that's one of the slides we popped in; all three people can check that out. It's an easy way to get in really good-quality content, and I think most VR systems are supporting it.
[INTERPOSING VOICES]
ANDY BEALL: Yes, I mean, I would be surprised if there were a technical hurdle there. Rendering a point cloud, whether it's in a cave or a set of goggles: goggles have actually traditionally been more problematic for point clouds, because the resolution of something like a DK1 was so coarse. Unless you made the points two or three pixels wide, it was really tough to discern a point cloud; it was a little bit sparse. So caves should be fine.
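A rough pixels-per-degree comparison, using approximate public panel specs rather than figures from the panelists, shows why sparse points were hard to discern on early headsets:

```python
def px_per_degree(h_pixels_per_eye, fov_deg):
    """Approximate horizontal pixels per degree of field of view."""
    return h_pixels_per_eye / fov_deg

dk1  = px_per_degree(640, 90)     # Oculus DK1: 1280x800 shared, ~90 deg FOV
vive = px_per_degree(1080, 110)   # HTC Vive: 1080x1200 per eye, ~110 deg FOV

print(f"DK1  : {dk1:.1f} px/deg")    # ~7 px/deg
print(f"Vive : {vive:.1f} px/deg")   # ~10 px/deg

# On a DK1, a one-pixel point covers roughly 8.5 arcminutes, easy to lose
# between samples of a sparse scan, hence drawing points 2-3 pixels wide.
```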
GEORGE DIATZIKIS: And I think our technology today allows you to not only capture the point cloud, but capture the actual images and stitch those onto the point cloud. So when you're looking at a point cloud data set today, it doesn't look like a point cloud data set. It looks like a rendered version of that design or object.
AUDIENCE: So you're actually able to do this right now? Because the million-dollar question I've been asking everyone, is how can I get my point cloud scans into my Vive? And no one seems to have been able to actually--
BOB PETTE: Come down to the Nvidia booth--
AUDIENCE: OK, yeah, 25 million points per scan, 180 scans, like-- OK, cool. Thank you.
SCOTT RUPPERT: Awesome. We're down to the wire here. Any more questions? Our panelists are going to stick around. Happy to continue the conversation, either here or downstairs.
DERRICK SMITH: Thank you, everyone.
SCOTT RUPPERT: I do encourage everybody, if you haven't already checked out the demos in the Future of Making Things, the VR interactive zone. I think we've got maybe an hour left of the exhibit hall. So please do check all that out. Let us know if you have any questions. Thanks for your time.