Description
In this class, we will bring data from different sources (Revit, Fusion, SketchUp, etc.) into the HoloLens to be viewed.
Content coming from CAD applications is usually too heavy to be viewed on the HoloLens. We will start by defining the key metrics to track for a smooth HoloLens experience, set ourselves some specific targets for content and hardware limitations, explore different options, tools, and workflows for optimizing content, and set up some basic interactions in Stingray (3ds Max Interactive).
Because of the limited availability of the HoloLens, this will be an instructional demo, but we invite you to bring your own HoloLens to follow along.
Key Learnings
- Learn how to view content from different sources in the HoloLens
- Learn how to identify hardware limitations and performance targets
- Learn how to use different tools to reach the performance targets
- Learn how to add basic interactions and input with the HoloLens
Speakers
- David Menard: David is a software engineer by training, with a Master's in Virtual Reality and an MBA from HEC Montreal. After spending a few years working in the video games industry, he joined Autodesk to kick-start the effort around the ambitious project that is LIVE and Stingray, focusing on Virtual Reality. His deep experience in Virtual Reality and real-time rendering technologies has served him well as a Product Owner, enabling LIVE to become the one-click solution to VR that it is today.
- Louis Marcoux: Louis Marcoux has been a technical expert for 3D animation, visual effects, and real-time rendering at Autodesk, Inc., since 2003. Prior to Autodesk, he was a real-time broadcast graphics specialist for 5 years, working with Discreet Logic and VertigoXMedia. Marcoux received a bachelor's degree in electrical engineering from Polytechnique in Montréal, and he also holds a Bachelor of Communications degree from Université du Québec à Montréal and a Bachelor of Fine Arts degree in film production from Concordia University. Marcoux has been awarded Best Speaker at Autodesk University three times. For more information, you can go to: http://area.autodesk.com/louis
DAVID MENARD: Well, welcome, first of all, to our HoloLens class, How To HoloLens. I know the title of the class was pretty long. I kind of cut it short. Before we start, I just wanted to ask a question. Who in this room has access to a HoloLens and can actually develop?
That's about half the people. So the other half here is interested in this because, and I'm taking a guess-- I know the HoloLens shows a lot of promise. So you want to know that when the HoloLens 2 comes out or when another device comes out, you can basically develop or create experiences for that next device. Is that something that resonates a bit?
I hope so. Great. Excellent. So my name's David Menard. I'm a product owner for Autodesk Live. I'm based in Montreal. And my co-speaker is Louis Marcoux. He's a subject matter expert, and he's here mostly to help me with the entire optimization workflow.
We also have a guest speaker that I'm going to introduce in a few slides. So the way the class is going to be set up, we're going to do three things. First, we're going to make sure everyone's on par, at the same level with the devices, with the terms that we're going to use. So just a general overview about the HoloLens and AR and VR in general.
Then we're going to get right into it. We're going to do the hands-on demo, the hands-on part of how to get stuff running on the HoloLens. And so really the how-to-HoloLens part of this class. Then the second big section is going to be all about optimization workflows. Because, as you know, the HoloLens is a bit less powerful of a device, there's a lot to do to get things working smoothly on the device.
All right. So first of all, why use HoloLens? We know that ARKit just came out. We know AR is a thing, we know VR is a thing. Why would you ever want to use the HoloLens in the first place? Well, I stole this slide from Nick Landry. He's a Technical Evangelist at Microsoft.
They coined the term "mixed reality." So today, on this slide, we have VR as it is today. Everyone kind of knows that part. You put the headset on, you disappear into your own world. You have no idea what's happening on the outside world. You're really in the experience. Nothing else is happening.
And on this slide, we have AR as it is today. Most of AR today is just overlays on the real world. You have some panels here and there, but then there's the entire spectrum of mixed reality that comes in-between. And this is where the HoloLens is actually showing the most promise. The HoloLens is something that can do AR today, but is really designed to do mixed reality in the future where you can not only overlay virtual things to the real world, but you can have your virtual objects interact with the real world in different contexts.
So I know some people hate the mixed reality term. And just to expand on that a bit, I want to invite Dace Campbell up with me. He's our guest speaker, so please give him a hand. This was pretty unplanned. I'll let you introduce yourself. And just give a--
DACE CAMPBELL: OK.
DAVID MENARD: --one minute talk.
DACE CAMPBELL: Sure, sure. So I'm Dace Campbell. I'm a customer success manager with Autodesk. I have a pretty deep background in AR and VR. I've been in it for more than 25 years. So as a CSM, I work with our major accounts and I get deeply embedded helping them use the tools that they bought.
My night job, however, is working with Louis and David and a whole crew of people at Autodesk, helping define the AR and VR strategy, specifically for AEC. I just want to speak for a quick minute-- and I'll go back to the Microsoft slide here-- I'm a big fan of the HoloLens. Big fan of what Microsoft is offering. I'm not such a big fan of their attempt to redefine the terminology to fit their current offerings.
And I'm going to get into this here just a little bit. They're defining AR on the physical reality side when typically, AR tends to fall somewhere in the middle. And they're actually putting transparency and opacity along the same axis, but they're using terms like immersive along that same axis. To me, it's a bit more orthogonal.
And not to geek out too much, but this hearkens back to the original MR diagram from 1994 that was put together to say there's a mixed reality continuum. On one side, you've got reality. On the other side, you've got true VR. Completely immersive.
Immersive-- think of immersive as surrounding. If immersive equals surrounding, then there's different points in between. So typically, AR falls somewhere in between true VR and true reality that we're existing in today. We don't really talk-- I'm sorry, there's AR.
We don't really talk a whole lot about augmented virtuality. That's sort of like VR with windows into reality. And those could be visual, tactile, a little bit of feedback that's maybe 80%, 90% VR, and 10%, 20% real world. There's not much there today.
Anyway, it's an entire spectrum, but I would argue that along that entire spectrum, things could be immersive or surrounding or non-immersive. So it's not a matter of, oh, some of this is immersive, some of this isn't. You can have immersive offerings and not immersive offerings along the whole continuum. And for example-- and I know David's going to talk probably a little bit about this more-- we've got Revit Live.
Show of hands, who's heard of Revit Live? Show of hands, who hasn't heard of Revit Live? Where have you guys been?
DAVID MENARD: Awesome. That's like--
DACE CAMPBELL: [INAUDIBLE] people. You've been in AU for days now. Anyway, so Revit Live supports a VR experience. And it could be an immersive, head-mounted, display-driven experience, or a non-immersive, desktop, VR experience. That's just one example of how to differentiate immersive versus a non-immersive on any points on that mixed reality continuum.
DAVID MENARD: Excellent. Thank you. This is great. And the reason I wanted Dace to give his opinion on this is because the space is brand new, as you know. But it's so contested right now. Everyone kind of wants their own thing and everyone thinks it's the future.
And we're really looking at a future where you want that pair of lenses that just does pretty much everything on this spectrum, right? So just to keep going-- by the way, there's a lot of new people who just entered the room. I'm just going to recap the agenda real fast.
We're doing a quick overview on the HoloLens, AR, VR, everything there. Then we're going to get hands-on with how to actually use the HoloLens with Max Interactive. And then Louis' going to do a whole, big spiel about optimizing your content for this kind of device.
So to get back to this, ARCore and ARKit just came out. Obviously, two great technologies. One for iOS, one for Android. They do some things really, really well, mostly plane mapping. So detecting floors, detecting walls, top notch, right?
They do GPS localization because your phone has a GPS. So that's fine. It's accessible. They have all the tech that you already have in your phone accessible in these devices. To be fair, though, the plane mapping coupled with the GPS? Not so strong today.
I'm sure some of you have tried these apps that try to put those-- you know those drawings in real space and then someone else can come and see them? They're never in the right place.
So on the other hand, HoloLens. The first advantage is that it's pretty freaking awesome. Not a real advantage, but I wanted to put it there. There's a lot of stuff that HoloLens has going for it, but the big one is the environmental mapping.
So the first time I got my HoloLens, I booked a meeting room at the office in Montreal. And I started placing windows everywhere in the meeting room. I placed holograms and it was just a mess. And then I left the meeting room, went back to my desk, and decided to test this for myself-- the environment mapping.
So I went around the office placing holograms everywhere. And the second round I did, all my holograms were still at the right place, which is great. But the second round, I actually went by that first meeting room where I set all my windows, and the HoloLens, without me even going there, and between two different sessions, completely recognized that that was the place I was before and placed everything at the right place. And this is really the power of the HoloLens.
It knows where you are, it can recognize your environment, and it can create meshes in real time for you to interact with so that your chair that you put behind your table-- your virtual chair that you put behind your real table. Now I have to specify these kinds of things-- it can be included in your virtual experience by the HoloLens. So that kind of power is not yet available in ARKit or ARCore.
Of course, then, it's hands-free. It's great for collaboration because it's hands-free, because everyone can have one. All of those advantages are probably going to come, eventually, to the rest of the platform.
So the overall workflow that we're going to look at today is taking your source data-- and that can be pretty much anything. Revit, Fusion, CAD, FBX, whatever. Then we need to optimize it because this source data, especially the CAD data, is incredibly heavy. And we'll see that the device is pretty limited.
Then we want to take that data that we optimized, and we want to push it to a real-time engine-- that can be, again, anything. Unity, Unreal, Max Interactive-- to be able to push that or deploy it to our device. Simple workflow in theory. And we're going to look at all these steps.
So before we get started, why is optimization needed? You probably just want to take your CAD data, shove it in a HoloLens, and view it, right? That's the simplest thing. Optimization is needed because the HoloLens right now, as it sits today, is the equivalent of an iPhone 6.
The iPhone X just came out. It's already three generations up. The iPhone 6-- imagine it has to do all this environment mapping at the same time. So your budget for computing, or to draw things, is very limited on the HoloLens. And you might be thinking right now, well, OK, great. But I'll wait for the HoloLens 2 and I'll have all of that power on my face.
But that's just simply not true anymore. And I want to quote John Carmack here. And I'm a big fan of his. I'll show about 20 seconds of his video before going on.
JOHN CARMACK: And I-- some people would like to think that this type of disciplined coding and design within very tight constraints that, maybe if-- like in the old days, you could just wait a few years and I-- and your lazy design, whatever, just works on the new computers. That was the way of PC development for a very long time.
But we're in a situation now, with mobile being important, and with the end of Moore's law kind of drawing nigh, that those types of skills are absolutely going to remain important for the foreseeable future. This future device that we all imagine where we have AR sunglasses that billions of people are wearing, that's probably going to have at least as tight of a power budget and design constraints as you have on gear VR.
DAVID MENARD: All right. Who knows who this guy is? Who doesn't know who this guy is? A lot of people. Wow. This guy's got-- this is good.
This guy's John Carmack, the inventor of the Oculus, essentially. This was at Oculus Connect just a couple months ago, so it's not outdated at all. And the skill set that he was just talking about is the optimization part of your pipeline. It is that important.
So we all imagine a future where you're going to have a GTX 1080 on your face or something as powerful as that. That's just not going to happen in the foreseeable future. So the optimization part of this class is incredibly important. And I can't stress that enough.
So let's talk about that. Let's go back to the HoloLens. We talked conceptually about what was going to happen. The HoloLens today can handle this: 300 batches, 300,000 polygons. Well, you're probably wondering now how big that is.
Well, this is Oakwood Hospital. It's a data set that I use a lot during my work. It's a big hospital campus. When we bring this model into a real-time engine, it comes up with 14 million polygons.
Remember, we're trying to hit 300,000. It comes in with 7,000 batches, and as soon as you add one light to this scene, it doubles everything. 15,000 batches, that's 31 million polys.
Now I work in Live. This is the only time I'm going to mention Live. Live, we optimize a lot of stuff on the cloud, and yet we can only get this down to 5,000,000 polys and 1,300 batches. Louis is going to talk a bit more about what batches and primitives are really concretely.
I just wanted to give you a sense of scale. Remember, 300, 300. We're a long way off.
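To put a rough number on how far off that is, here is a small Python sketch (purely illustrative, not part of any Autodesk tool) that compares the figures quoted above against the 300-batch, 300,000-polygon target:

```python
# Rough budget check against the targets quoted in the class (illustrative only):
# roughly 300 batches and 300,000 polygons for a smooth HoloLens experience.
HOLOLENS_BUDGET = {"batches": 300, "polygons": 300_000}

def times_over_budget(batches, polygons, budget=HOLOLENS_BUDGET):
    """Return how many times over budget the scene is on each metric."""
    return {
        "batches": batches / budget["batches"],
        "polygons": polygons / budget["polygons"],
    }

# Oakwood Hospital as imported: roughly 7,000 batches and 14,000,000 polygons.
print(times_over_budget(7_000, 14_000_000))
# {'batches': 23.3..., 'polygons': 46.6...}  -- still a long way off
```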
All right. So let's get down to-- oh. I did something. There we go. Let's get hands-on with how to get this kind of data into the HoloLens before even optimizing it. Let's start with simple stuff-- cubes, lamps, that kind of thing.
Now I want to give you a big experimental feature warning here because we're going to use Max Interactive 2.0, or Stingray 2.0. By the way, same thing. Just a name change. All of this is experimental.
Not only is the platform experimental, the way of getting your data onto the HoloLens is also experimental. In the past, to get your data onto this device, you absolutely needed to use Visual Studio. It's a programming tool.
If you're not a programmer, you never want to see that, by the way. It's sometimes terrible. It's sometimes great, as well. But we're trying to avoid all of that. We're trying to make this a bit easier.
And before I start this part, because it's going to be a part with lots of tools and lots of things, everything's available here. All the class handouts that I uploaded to the AU website are just one-pagers or one-liners, and they just contain this link.
This link contains-- it's a box folder with all our videos, all our zip files, all our material that is really relevant. The AU web site wouldn't let me upload 600 megabytes, so everything's there, including-- I think this is going to be recorded, so I'm going to upload the recording there as soon as I can as well.
So on the box folder, the first thing I did is I uploaded a zip file there. I called it the HoloLens tool kit. This is just how I called it, by the way, so don't quote me on that. It's a zip that contains a few tools for you to be able to get going. The first thing you want to do is unzip it to a local folder.
And you'll see it contains two folders, one called Builds, and one called The HoloLens Project. The HoloLens Project is pretty straightforward. It's something you open in Max Interactive. It's a template project.
The second, Builds, contains two other folders, and we'll see what that's all about very soon. So the way game engines, or interactive engines, work: my PC needs to be able to compile the data-- the meshes, the scripts, everything-- for each platform I'm going to deploy to. Usually, this is not a problem because you're deploying to PC.
But let's say I want to deploy to Android or to UWP, in this case, Universal Windows Platform, which is the HoloLens platform. I need my engine on Windows to be able to compile that data for that platform. Now because it's experimental, we did not ship this capability with the engine. You usually need to compile the engine yourself.
What I did is I compiled it for you so that you can use it yourself back at home. So the first thing you need to do is copy that little part of the engine to where you installed Max Interactive. Usually, Max Interactive is installed in C:\Program Files\Autodesk\3ds Max Interactive.
And in that folder-- this one here-- there's an engine folder. Just open that and then go back to the Builds folder that is in the HoloLens Toolkit. There's a UWP 32 folder. Just copy it over. That's the first step.
Now your engine, or your editor, whatever-- Max Interactive on your PC-- is going to be able to compile your data for the HoloLens. So now that that's done, we're going to look at a couple of tools that Microsoft gives you to simplify the process.
The first one is from the Microsoft store. Simply type in "HoloLens" in the search bar. There's an app called Microsoft HoloLens. It's very, very useful to transfer data. For us, we're mostly going to use it so that you could see what I see through the HoloLens. So this is very useful to give classes or just to record videos if you're ever looking for them.
So to use this app, you need two things. First of all, your log in on your HoloLens. The second one is the IP address of your HoloLens. Now you need to make sure that your HoloLens is on the same Wi-Fi as your PC. And we'll get to how you get that IP address right away, but I want to start the stream first.
And it's very important to be on the same Wi-Fi because I've had people be on different Wi-Fi's and tell me, why is this not working? Obviously, well, if you're not on the same Wi-Fi and you enter an IP address, it just won't work. So make sure that's working.
As soon as you click Live Stream here, you'll be able to get a view of what I'm seeing through the HoloLens. Now this is super useful because now I can show you exactly what to do inside the HoloLens. So first things first, we're going to enable Developer Mode. Oh, I can turn off the sound here.
Yeah. OK, I'll leave the sound. Hopefully this doesn't get annoying.
You go into Settings, then Update & Security. By the way, who's used a HoloLens before? OK, most of you. So you know that the Bloom gesture is kind of your main menu, and then your air click is how you select things. That's what you're going to see me do, so don't worry.
You go into Update & Security. There's a panel on the left called For Developers. The first thing you want to do is turn on Developer Mode right there. And at the bottom-- this is important, as well-- Device Portal. This will allow you to connect to your HoloLens via web browser, and we're going to see that right away.
Then you want to go into Network & Internet. Make sure you connect to the Wi-Fi. In Advanced Settings right there, there's one field called IPv4 address. Right there in the center, that's your IP address that I used to connect.
And save this. Mark it down. You're going to use it a lot.
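If you'd rather script that sanity check, a few lines of Python can confirm something is answering at that IPv4 address before you try to stream or deploy. This only probes that a web port is open; the address is a placeholder, and depending on configuration the Device Portal may answer on HTTPS rather than port 80:

```python
# Quick reachability probe for the HoloLens Device Portal (illustrative only).
# Replace HOLOLENS_IP with the IPv4 address shown under Advanced Settings.
import socket

HOLOLENS_IP = "192.168.1.42"  # placeholder; use your headset's address

def portal_reachable(ip, port=80, timeout=2.0):
    """Return True if something is listening on the given port at that IP."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Device Portal reachable:", portal_reachable(HOLOLENS_IP))
```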
OK. Now that that's done, we're going to get back to it. Now that that's done, we want to be able to install the engine, or Max Interactive, on our HoloLens. To do this, we're simply going to connect to it via our web browser.
So you open a browser page and, once again, enter that IP. Second time we were using it, right? My IP address.
Another tool that Microsoft gives you and lets you play around with-- a lot of different, really useful things here. The only thing we're going to use at this point is the apps section on the left. From there, you can install the HoloLens app.
The HoloLens app, you're going to find it in the folder that I uploaded earlier, the zip file. The second folder next to UWP is called APPX. Simply in the install app section right here, click on Choose File. We're going to go browse to wherever you unzipped the Builds folder, the APPX folder.
APPX is just the extension for Universal Windows Platform packages. So you install this onto the HoloLens. Click on Go, and it's going to get deployed. Now at this point, you have everything connected. You can stream your view from the HoloLens. And when you Bloom, suddenly, you're going to have a 3ds Max Interactive app.
You launch this, and it's going to ask you to pin it somewhere because you always have to pin your apps. You're going to see a Waiting for Connections tab at this point. So this is where we're actually going to start going into Max Interactive and create our experience.
So now our HoloLens is basically all set up so that we can start connecting to it. So in 3ds Max Interactive, the first thing we do is, as always, open a project. I mentioned earlier the zip file also contains a template project, the HoloLens project. Simply browse to wherever you extracted this and open the project.
The project is a template to help you get going. It does not contain everything you can do in the HoloLens, but it's a really good starting point. So in the content/levels folder, there's an empty level. And notice the first difference with this level compared to a normal Stingray, Max Interactive, level-- that never gets old-- is that there's no environment.
If you have an environment, you're not going to have a see-through thing. You really want to keep this as empty as you can. It contains three cubes. Pretty simple.
At this point, we're just going to go activate the experimental settings. If you haven't already done that: File, Settings, Editor Settings, and you make sure that that second checkbox right there is checked, and then you're good to go. This is the last configuration thing that we're going to do.
All right. So we're good to go. We have our template. The template contains a few Flow nodes that I wrote to help you get going. Notice the first one is a HoloLens hand-pressed. So it just triggers whenever you do this. The second one is HoloLens Throw Ball.
So I hooked them together, which means that whenever I click on something, whenever I air click, a ball is going to be thrown right in front of me. Pretty simple application, but it's a good starting point. I encourage everyone to start there before trying to get more data into the HoloLens.
Once again, I'm going to open the Connections tab. I'm going to create a new platform. You just click on the plus. You change the Windows part to UWP. If you don't see UWP, it's because you didn't enable Experimental Features.
At this point-- last time we're going to use the IP, I promise-- you enter your IP address once more here. Now you're all set up.
So when that's done, you make sure you go back to the HoloLens, you run that application, the Stingray application. When it says "Waiting for connection," this is when you're good to go. You click on Run Game inside Max Interactive right there, and it's going to start streaming data towards your HoloLens. And this is a huge advantage, actually. It really gets the iteration time down compared to using Visual Studio where you always have to copy your files over.
This is the real big advantage here of this new experimental workflow. So once you're there, everything loads. My three cubes are in my scene. I can look around. And you'll see me pinch, and all the balls will start being thrown. It didn't recognize my first pinches.
All right. So here, my balls are obviously interacting with the virtual world. They're not bouncing on the physical world. The reason is that I did not set up the mesh for the HoloLens.
In the template, there's a couple lines in Lua that you can just uncomment to actually create the mesh for the physical world to interact with it. In this case, we're just going to ignore that for now. Just time constraints.
So next thing we want to do is to get some kind of data in here. It's fun playing with balls, but we want something else. So first thing I'm going to show you is actually a CAD model of our office in Montreal. One of the small parts of it is an electrical panel. And we never know what's behind that electrical panel.
So what I want to do is have this electrical panel show up in the real world on top of the real electrical panel so I can see what's behind it. So you saw me right there go into Max, export it to FBX. And once the FBX is exported, simply drag and drop it into 3ds Max Interactive like anything else. And at this point, you just drag it in the scene.
Now it comes in really, really big at this point because I did not set the units properly when I exported the FBX file. I just chose the wrong parameters. And this is really important because you want everything at life scale. You want everything at the exact, precise scale, or else your entire experience is ruined.
So here, I'm just going to scale it down arbitrarily to what I think the panel is. I'm also going to change the material. Dim materials in the HoloLens don't work very well. You're always going to see through them. So try and use very bright materials as much as you can.
In this case, I'm just changing it to a bright green. The material doesn't really matter to me. I just want to make sure I can see this panel very well because wherever this panel is in our office in Montreal is very dark. So I want to make sure I can get that contrast.
At this point, I select the unit. I replace it in my unit Flow. And I'm just going to change the hookup for the hand-pressed from the throw ball into the place unit. And I'm going to set the distance to 3.
This just means that every time I pinch in the HoloLens, it's going to place my panel right in front of me at 3 meters. I also put a little marker at the same place so that I can see where I'm going to place the panel before I place it. This is super useful in many, many cases, especially when you want to overlay like this.
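Under the hood, that "place at a distance" behavior is just vector math: take the head position and forward direction and step along it. A minimal, engine-agnostic Python sketch of that logic (in the real setup the pose comes from the HoloLens camera, not hard-coded values):

```python
# Compute a placement point a fixed distance in front of the viewer (illustrative).
# In the engine, head_position and head_forward come from the HoloLens head pose.
def place_in_front(head_position, head_forward, distance=3.0):
    """Return head_position + normalized(head_forward) * distance."""
    length = sum(c * c for c in head_forward) ** 0.5
    forward = [c / length for c in head_forward]
    return [p + f * distance for p, f in zip(head_position, forward)]

# Example: viewer at eye height looking down +Y, panel placed 3 m ahead.
print(place_in_front([0.0, 0.0, 1.6], [0.0, 1.0, 0.0], 3.0))
# -> [0.0, 3.0, 1.6]
```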
So at this point I just restart the app. Waiting for the connection. I click on Play Game, and I'm good to go.
Once the data is done streaming, you're going to see that all the cubes still appear in the right place. My marker is there-- notice that's new. And whenever I click, my panel appears at my marker. And then you're going to be able to just walk around that panel and view it in context.
So in this case, I could just walk up on my panel, place it, and give my HoloLens to anyone. Or if you've attended other classes on Flow or on Max Interactive, you know the possibilities behind this. But we're not going to get too much into Flow today.
So next use case is going to be from Fusion. Now you might know that Fusion doesn't allow you to actually export to FBX directly. You have to go through the web portal. But again, the standard workflow is you're going to export to FBX and you're going to import into Max Interactive. Now, the advantage with the FBX export here from Fusion is that in Fusion, everything is going to be exactly to scale.
So when I import it into Max Interactive and I place it at my level, I can expect that this lamp will be exactly the right dimensions and the right measurements for the world I'm going to place it in. So once I'm in Max Interactive, again, you place it in the level. I'm just going to switch the Flow nodes once again to my new unit.
Change the distance to one meter. Go back to my HoloLens. Just to make sure, I click. And I make sure that the "Waiting for connection" tab is open, then click on Play Game, wait for the data stream.
Now, pro tip. Whenever you place your window, your Stingray window, in the HoloLens, if you click on it right away, you're just going to start at the previous point where you were.
So if I just open it right away, I'm going to be wherever my electric panel was. You don't want that. So just make sure you close the window and relaunch it. It just makes sure that you start fresh and that nothing weird happens.
So here's the connection. At this point, once the data stream, I'll be able to place my lamp anywhere in my world. And if I actually activate my mesh in the HoloLens, you can actually place it and interact with the physical world.
In this case, I'm going to place it on my desk, and right away, you see that all my dimensions were completely wrong in Fusion. That's because I don't know how to Fusion. So I have to go back to my original content creation tool and probably adjust some dimensions because I really don't want it this big.
So that's the basic idea of the workflow. While I made this class, I was writing a lot of Louis' scripts, I was writing a lot of Flow nodes, I was experimenting a lot because this was fresh off the press. You can ask one of our developers right there. I kind of see him. I was bugging him a lot.
So I came to a lot of pro tips during this entire thing. I'm not going to go through them right now just because we're short on time, but I'm going to upload this presentation. All the pro tips are in the comments section. So there's five or six things just to keep in mind to keep your workflow smooth. But that's it.
So now you're actually good to go to be able to get your data onto the HoloLens without using Visual Studio. That's kind of the big, first hurdle. Now for the second part of the workflow, the optimization part, I just want to pass it to Louis, who is the subject matter expert on this.
LOUIS MARCOUX: I am not the subject matter expert. I am the necessary evil part of the presentation because one of the things about the HoloLens, about VR, about AR, is that you are trying to render something in real time and you are doing it for a limited device. And the first time that David approached me to do this presentation, he asked me to do a little project for the HoloLens.
And I'm used to the HTC Vive, I'm used to the Oculus, and I'm running it on big computers. And when he told me 300,000 polygons and 300 batches, I said, what can we do with this? There's nothing we can do with this. So it's a very limited device.
But if you think about it from the beginning, and you plan carefully how you can render to this device, you will be able to do a lot more. So the optimization part-- when I have to do projects-- Chris and I worked on a project together recently, and I've worked with some of you as well-- the first thing we do is we try to optimize as much as we can so that we can run something that looks beautiful and runs in real time.
So optimization is part of the process. We need to sit down. We need to do it. It's necessary. I know it's evil, it's boring, but you have to do it.
And every time I do a project, I have to ask myself questions. And the questions that I ask myself are the ones that I'm going to share here today. And the way that I do my presentation is a bit like David. I'm going to show you a bit of theory, then show you practical examples, and you're going to see the thinking process.
So 300,000 polygons, 300 batches. I start by saying, welcome to the world of polygons. Games-- when we build anything for a real-time experience, even if it's not VR, if it's just an application for the-- everything happens in polygons. So you're going to hear me talk about polygons.
We are working for a specific target. So when we build for the HoloLens, we have to know the limitations of the HoloLens. Same thing if you build something in VR. You need to know the computer that you're going to run it on, you need to know the device that you're going to run it on, you need to know the specs and what you can achieve with this specific target.
So the target is very important. You don't develop the same way for the HoloLens as for something very powerful like if you have a GTX or a P6000 in SLI. And if you have all the gadgets, you don't develop the same way. So understanding your target is very important.
And the necessary-- I have to practice this word-- necessary evil is the optimization part, which I'm going to talk in a second. So when we're building something for a device, what we're doing in our workflow here is we are actually building an application. And the application is machine language, it's compiled, it's binary, and it's specific to that device.
So if you're compiling for Windows, you're going to build an EXE. It's going to have a few DLLs. And that's what you're building for doing the Vive or the Oculus.
If you build for the iPhone or for Android, you're going to build an IPA or an APK, and those will be specific applications that will run on these devices. For the HoloLens, it's the same thing.
As David showed a bit before, we are building an APPX file, or application, for the HoloLens. And we send it to the HoloLens, and that's what gets run on the HoloLens.
How do we generate these things? As David mentioned, you can use Visual Studio, you write C++ code, and you compile it, and it becomes an application that can run on those devices. I'm a 3D person. Well actually, I'm lying.
I graduated in computer engineering. I did C++. I loved it at the time, but when I discovered the world of 3D, I got excited about it. And it's much more fun to work in 3D than in code. But if you want to build with 3D objects or with CAD data, you need to use engines that speak this language.
So Unreal and Unity are very popular. You're all familiar with those engines out there. That's what they do, as well. So Unreal and Unity, you open up your project, you bring 3D data, you trigger, you create interactions and all of that. And once it's ready, you package it, you compile it, and it becomes an app that is completely independent of the engine.
Today, we're using Max Interactive, or formerly known as Stingray. But this is the same type of platform. This is the same type of engine. The engine is there to bridge between compiling visual C++ code or C++ code and a 3D artist that has to build 3D content and speaks that language.
So the engine sits in the middle. We bring 3D data, we compile it, we build some interactions, and then compile something for the targeted device. To feed the engines with data, we use a format called FBX. FBX is the standard kind of thing: Unreal will load FBX files, Unity will load FBX files.
So the things that I'm doing today for optimizing, if you're using Unreal or Unity, same type of thinking process. So it's always the same thing. The reason why we use FBX is that it contains everything we need for the real-time part.
So it contains the polygons-- well, it contains all of them. It contains the polygon part-- the models, the geometry, the primitives, as you call them. So the 3D models are in there. We also have the mapping, which defines how bitmaps or materials will be applied to those objects.
It also contains the material definitions. And if you're doing any type of animation, it also carries over with FBX. So that's the format that we use to carry data to the game engine.
The best tools, or the most powerful tool to generate data or to generate the FBX, are the tools that allow you to work with polygons, to work with mapping, to work with all these things that allow you to prepare the 3D data for the FBX format or for the game engine. So that's why I'm putting Max and Maya here. But if you like Cinema 4d or all of these 3D packages out there, that's the same kind of idea.
At Autodesk, we have 3ds Max and Maya, and I know you want to know which one is the best between the two. And I'm not going to answer that. But I think they're both very capable. And if you're a Max user, you're going to feel comfortable using Max to do polygons. And if you're a Maya user, the same type of thinking process applies.
But if you bring CAD data-- I wanted to mention that, in the end, when you are in Max or Maya, when you are doing the DCC part, this is where you do most of the work. So this is where you're going to make a lot of decisions. And all of the concepts I'm talking to you about today, this is where you want to do this.
And those things that we're going to talk about are: organize your files so that they're easier to manage, do some merging, do some mapping, LODs-- levels of detail. So I'm going to talk about all of that, but this is where the core of the work happens-- and that's why David asked me to come here and talk about this. This is where we're going to do most of the work.
But if you bring in the CAD data that are presented here, 3ds Max has a lot of importers to import Revit files, import Inventor files, and we can import a lot of data inside of 3ds Max and convert it to polygons and work in the world of polygons inside of 3ds Max. Whether you're within the Autodesk family of things or outside of the family, all of this can be loaded inside of 3ds Max.
So we can load this data, and once it's in 3ds Max, this is where we do the work of converting it to polygons. I must say-- and this is very true-- if you're using Alias in any sort of way and you wanted to convert to FBX, I think that Maya is the best tool for that because of history of the product. You have a really tight connection with Alias, so I think that if you're going the route of bringing Alias data into AR or VR, I would strongly recommend Maya because it's the same kind of core at the core. So whatever.
I'm French, French Canadian, and I translate everything as I go. And sometimes the translator is not following with what I'm trying to say, so that's why I-- and Dave, you have the--
AUDIENCE: [INAUDIBLE]
LOUIS MARCOUX: Yeah. So we've defined a target. This is where we want to end up in the end, but the big part of the work is really in the optimization, like I said. So the main three concepts, or the main three things that I think you have to worry about when you are optimizing data, are the culling, the draw calls-- or as David calls them as a programmer, batches-- and computation.
So those are the three main concepts that we have to think about. And I'm going to define them and talk about them in a second.
Culling. What is culling? So culling is the process of defining what is visible and what is not visible on-screen. When something is drawn on the screen, it takes time. Big surprise. So what we want to draw on screen is what's going to be the most rewarding when we look at our content.
So the culling process is to define what we're going to see and keep it to the minimum. Yes, there are some processes in the game engine that will automate with conditions that will define, is it supposed to be visible? Is this not supposed to be visible?
So we can write a few lines of code or we can define a few parameters that will allow the computer to decide if it's visible or not, even if-- it still takes a little bit of time. So we need to care to think about that when we prepare data for real-time. But the best way of making sure that we save time is by discarding everything that we don't need in the final rendering, and also a way to organize our data so that we can turn on and off different parts of the data at different moment in time in the experience.
So that's the idea of culling. So if we look at the workflow that we had at the beginning, there is a big part of this workflow where you have to decide what's going to be culled out. So that's very important. This is where the work happens. But at the end of the process, when you're generating the application itself, the game engine, or the AI, or the artificial intelligence, will also do its part to make sure that not everything gets drawn on-screen.
But like I said, if you look at the part here, there's a whole part of the workflow where you can make decisions about how to optimize all of this. So if you think about your original CAD application, and like I said, I try to keep it as generic as possible. So whatever the CAD application that you use, the same principle applies.
So the first thing is narrow down the scope. So if you want to render a hospital, yes, you can render a hospital, but think about it very strategically so that you can organize it so that it can be optimized during the pipeline.
Export the minimum. A good example of that is, let's say that we have a Revit model. And inside of the Revit model, behind the walls, we have wires, we have electricity, we have plumbing, we have a lot of things that we are not going to see in the experience that we want to create. So don't bring this into the real-time pipeline.
Just cull it out at the beginning or take it out of the project at the beginning and just export what's needed. And in Revit, there's this great concept of views. So if you export a view that doesn't have all these things as part of the view, then you bring the least amount of data in during the pipe.
And like I said, break it into smaller pieces, which allows us to organize it better in the end. So that's what you can do in the CAD application before you start sending data for the visualization experience.
In 3ds Max, you can inspect, or you can look, at your project and inspect everything about this file. And this is where I spend most of my time. Sometimes people say, hey, you haven't done anything in two days. No, I did something in two days. I just look at the data, I organized it, I figured out what is visible, what is not visible, what can I cull out, what can I discard? And all of this process happens early in the process.
So remove everything that's invisible. The way that I do it is I use layers inside of 3ds Max. And I'm going to show you that in a second. So I have my layers that are set with invisible object, and I put everything that's not visible into this layer. And I can go back to it if ever something is missing when I'm building the final experience.
But this is also very important. Delete everything that's not needed. Polygons, objects, anything. Then when we get to the engine, when we take the FBX file into the engine, there are certain things that you can do to define what's visible and what's not. It needs your participation or your work. So you're going to define what's visible by setting visibility on object.
But also what we can do is to do dynamic loading, which means that we load in memory, only what's needed at a very specific moment. And we offload everything from memory when it's no longer visible. So by doing this kind of stuff, it will maximize your performance.
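The bookkeeping behind that dynamic loading is simple to sketch. In the Python example below (engine-agnostic and purely illustrative), the load and unload callbacks stand in for whatever spawn/unspawn or level-change mechanism the engine actually provides:

```python
# Minimal section-streaming bookkeeping (illustrative, engine-agnostic).
# load_fn / unload_fn stand in for the engine's spawn/unspawn or level change.
class SectionStreamer:
    def __init__(self, load_fn, unload_fn):
        self.loaded = set()
        self.load_fn = load_fn
        self.unload_fn = unload_fn

    def update(self, needed_sections):
        """Load what's newly needed, unload what's no longer needed."""
        needed = set(needed_sections)
        for name in needed - self.loaded:
            self.load_fn(name)
        for name in self.loaded - needed:
            self.unload_fn(name)
        self.loaded = needed

streamer = SectionStreamer(lambda n: print("load", n), lambda n: print("unload", n))
streamer.update({"lobby"})             # load lobby
streamer.update({"lobby", "room_3"})   # also load room_3
streamer.update({"room_3"})            # unload lobby, keep room_3
```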
And another thing that I've learned by playing with AR rather than VR is-- we did a little project for you and we're going to talk about it at the end. Unfortunately, we're not able to show it, but the idea is that when you are doing AR, the best way of thinking about how to use the HoloLens in a very efficient way is that-- let's say that you bring an Inventor model, or a model, or anything, and use it as a reference to create interactions. Like, put this piece over here, and do this, and all of that.
So what I say is don't re-render reality. So the main part that is going to be visible or is going to be there on-site or you're going to look at it with the lens, you don't have to re-render it. Just add on top of it.
So you can use it as a reference in the engine to plan your animations and planning the reference, the geometry, itself, but do not re-render reality. Keep it to a minimum. So that's my little advice there. But when we started the project, we wanted to render everything and we wanted to show the panel.
And then we realized that it's too much data. So we stepped back and said, no. Just render the indications or the buttons that need to be touched or just a part that needs to be manipulated and that kind of stuff. So by doing this, again, you're narrowing down your project.
Then we come to the engine. The last step is-- you know the frustum culling-- this is where the camera is looking. The camera looks at the scene, and whatever it doesn't see is automatically culled out, which means that it's not going to be rendered. It's not going to be visible.
So there's a first process before you render called the camera culling, the frustum culling. And always keep that in mind because you can use it to your advantage. And I'm going to talk about that on the next slide. The backface culling: every face that's pointing away is going to be culled out automatically. So it's always good to not use double faces and things like that.
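For a concrete feel of what frustum culling decides, here is a deliberately simplified Python test that treats the frustum as a view cone; real engines test against the six frustum planes, so this is only a sketch of the idea:

```python
# Simplified frustum test: is a point inside the camera's view cone? (illustrative)
# Real engines test against the six frustum planes; a cone is enough for the idea.
import math

def in_view_cone(cam_pos, cam_forward, point, fov_degrees=60.0):
    """True if the point lies within half the field of view of the forward axis."""
    to_point = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(c * c for c in to_point))
    if dist == 0.0:
        return True
    fwd_len = math.sqrt(sum(c * c for c in cam_forward))
    cos_angle = sum(t * f for t, f in zip(to_point, cam_forward)) / (dist * fwd_len)
    return cos_angle >= math.cos(math.radians(fov_degrees / 2.0))

print(in_view_cone([0, 0, 0], [0, 1, 0], [0, 5, 0]))   # straight ahead -> True
print(in_view_cone([0, 0, 0], [0, 1, 0], [0, -5, 0]))  # behind the camera -> False (culled)
```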
And there's a little tool that you can use called an occluder. Without an occluder, if I'm looking in a certain direction, an object behind a wall is still visible because I might see it in the next few seconds, so it stays ready to be seen. But an occluder is kind of a wall. It's a piece of geometry that you put in your scene, and whatever is on the other side will not get rendered.
So if you put occluder objects at strategic places in your scene, whatever is behind that occluder is not going to get rendered, which means that if you look at-- I'm going to come back to it a bit later, but I'm going to explain the concept.
Let's take a look.
AUDIENCE: Translator, translator.
LOUIS MARCOUX: Translator, yeah. Let's take a look at the process of importing and culling out some stuff. So I'm going to import here an Inventor file, and I'm just going to use the default as a starting point. So we have all these objects.
And let's say that I'm looking at it from the exterior, and the only thing I want to care about is the exterior part of this. There's a lot of pieces and screws and pipes in there that I may not need. So the first inspection I do is I have all these layers on the left. And when you bring something from Inventor, those are the material names from Inventor because there's no direct relation between the layers with Inventor.
But what we can do here is-- I'm looking at this and I'm just turning off all the layers one by one. And I look at the model from all directions and I see if something is missing. So something that you can do very quickly: you turn them on and off. And as soon as something disappears, you just bring it back on, and you already have a good optimization of the model itself.
So what I can do because I use nested layers inside of 3ds Max: I create a layer that I call exterior layer. And then everything that's remaining visible, I'm putting that in that layer. So now I know that everything that is in that exterior layer is something that I see from the outside of this geometry. Then I can just create another layer called interior and put all of the other layers inside of this one.
So as you can see here, I've hidden everything that is inside of that geometry, and the rest is visible. So if I turned off, those are all the objects that I just decided to not render when we are outside of this. If I want to send this to the engine, what I'm going to use here, I'm going to connect to Max Interactive.
I'm going to select all of the nodes that are part of this geometry, or all of the objects, and I'm going to create a unit out of this by just exporting it as a unit. And then when we bring it inside of 3ds Max Interactive, we get our object here. And this here looks the same as in 3ds Max, but it's only the exterior part.
Then I can also say, well, I'm going to send the interior as well because I want to be able to see inside of this. So I'm going to send the layer number two. And I can create multiple layers like this. Three, four, five, and then you can make them visible or invisible depending on what you're looking at at different moment in your experience.
If you are importing a Revit file, if you bring everything in-- at a point, we're going to bring all the models and everything into light and all of that. It's a lot of data and there's a lot of inspection to be done inside of 3ds Max. So what I do is I do the same thinking. I create an empty layer that I call invisible, and then I turn it off, and then I go in and I start to inspect, and I put all of these things in this layer.
A good example is I've moved up the roof. And whatever is between the ceiling and in the roof, I'm not going to see it as I'm going to navigate. It could be useful later, but in the experience that I'm trying to build here, I'm not going to ever see that. So I can start to put those into the layers and optimize the number of objects that remain in the scene.
If I am planning this from the beginning, when you do a Revit project, you can also define view. So there's a lot of views into this Revit file. So if the Revit person, or technician, or architect, whatever-- so they said, no. Just use this view because I've narrowed down to the scope of this project. I don't have to do that work.
So if I bring this 3D view, that's the one that we want to work on. It's a limited portion of the scene. This is one room, and you see that everything that's there is needed for my experience. It's a very narrowed down part of the project, and you get something decent out of it.
So now let's take a look at-- let's say that we have ReCap data. The ReCap data, you can use that for surroundings around a Revit building, your building that you want to convey your design, whatever. And you want to bring in the context, and you did reality capture, and you have all of this data here.
So this has a lot of faces, and you want to remove the faces that are not needed. So what I do for this is for the context, everything that's pointing away from the camera that I'm never going to see, I'm going to delete it. So in this case, I turn around and I look at it from a very specific point of view. And when I have got my point of view selected, I go in polygon selection mode.
And you see here at the top where my cursor is, there's a selection by perspective. So I give it an angle of 90 degrees, which means select everything that's facing towards me. Select it, and now all of the faces that were facing me are selected, and all of the ones that are facing away are not selected. But this is a great, great, great, great tool inside of 3ds Max called Select Inverse.
So Edit, Select Invert, and now all of the faces that were pointing away are now selected. I can delete that. And now we have a geometry that looks the same, but we just remove half of the polys out of it. So that's the kind of thinking that you want to go through when you are deleting objects from your scene or deleting polygon.
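The "select by perspective" trick he describes boils down to a per-face dot product: a face whose normal points away from the viewer is a back face. A small Python sketch of that same test (the face data here is made up for the example):

```python
# Back-face test: keep faces whose normals point toward the viewer (illustrative).
# This is the same criterion the angle-based selection applies interactively.
def facing_camera(face_normal, face_center, camera_position):
    """A face is front-facing if its normal points toward the camera."""
    to_camera = [c - f for c, f in zip(camera_position, face_center)]
    dot = sum(n * t for n, t in zip(face_normal, to_camera))
    return dot > 0.0

camera = [0.0, -10.0, 0.0]
print(facing_camera([0, -1, 0], [0, 0, 0], camera))  # faces the camera -> keep
print(facing_camera([0, 1, 0], [0, 0, 0], camera))   # points away -> delete
```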
Now the Stingray world, or the Max Interactive world. So the Stingray world is something that lives inside of Max Interactive. This is the actual world that you are going to render into. So the engine allows you to organize my units and organize my visibility and all of that. But what you're doing is you have an empty world, and then when you load a level or when you load a unit, you're actually loading data in and it gets rendered into the world.
So if you break it by section, you would say, I will render section one here, and then I can load more section. And as I move forward inside of my experience or as I assemble something and something gets visible and not visible in all of that, I can say, now load section three and unload section one. Or make it invisible.
To illustrate that, let's say that we have a building. And when we create our AR or VR experience, we land in this specific spot and we have all this information. And this is on 18 levels. And you've got all of this data inside of your experience.
You know that if you're here, all of this is not visible. So you can make it invisible or you can unload it from memory. And as you move inside of the building, obviously, I may end up in this room, so I better load it. But I'm not going to end up here, so I can unload it. By loading and unloading like this, you keep it always to the minimum and it's done inside of the engine.
And if you think about occluders, we have this guy here and we have objects here that are in this room. But if I go at the end here and I look through the door, I might see those objects. But when I'm here, they don't need to be seen over there. So what I can do is just place a few occluders on the walls, which means that whenever I'm here, all of these objects will not be rendered.
It's always a way to keep the minimum at render time. And like I said, when you do an assembly, assemble something in front of you so you can bring in the pieces. But as soon as the pieces are hidden by over, over over--
DAVID MENARD: Something over it?
LOUIS MARCOUX: Yes, something over it. So if it's hidden by something over it, then you can hide those pieces, and then you can bring in more and more and more and more. And as it assembles, everything in the middle can be just discarded and unloaded. So when you think like this, you can make a device like the HoloLens assemble something much bigger than what's actually visible.
You start small. You assemble, assemble, assemble, assemble. And all the pieces that were hidden at certain point, they can just be discarded, and you can continue to build. And it feels like 1,000 pieces were put together, and actually what you're rendering is maybe 1 or 2 or maybe 200 or something like that. So that's kind of the idea.
How do you do this inside of Max Interactive? So to illustrate, I brought in my Inventor file here and I've made it three layers. And we are going to make them invisible and visible at runtime. So in the editor, we can just press the H key, and it's going to hide and unhide. But we want to do it at runtime so when it's in the application itself, it's going to be done automatically.
So when we launch, the first thing I'm going to do is-- I'm using Flow. And you mentioned Flow a bit earlier. I'm representing my assets here. So those are the units that I have in my scene. I'm just representing them in Flow by-- there's a little option that allows you to create a representation there. So create a representation of this in Flow.
And I've got my three units, and I can start to build a logic around those three units. What I want to do is I want to set the unit visibility, so I want that to be visible or invisible. So I'm going to say at the beginning, when the level is launched, I want to see the layer number one. I don't want to see what's inside.
So I'm just going to copy the unit visibility here and just make that false. And all of layer two and layer three are set to false. So when I'm going to launch this level, if I launch this here, this is now the Runtime. It looks exactly the same as in the editor, but now, all of the interior parts are not visible.
If I want to switch to that during the experience, I can still use more visibility nodes. Whenever I'm going to press the key one or whatever I'm doing, this, or whatever, I'm interacting with the-- or I trigger an event, then I want this to be false. So my exterior is false and my interior here will be set to true whenever that happens.
So if we go back to Runtime, at this point if I test the level, you see then we can switch. And it's going to immediately hide it. So that's the concept of visibility. Visibility means that it's taken off the render pipeline, but it's still in memory.
So if you have a very, very, very big model or a very, very big project, it doesn't fit all in memory, you want to load only dynamically what's needed at a very specific time. So it's a different approach when you do something like this.
So if you look on the right here, I've got absolutely no units in my level. My level is empty. My Stingray world is completely empty. So I want to start loading those units. I'm going to say, on level loaded, I want to spawn a unit at this specific position, and that unit will be called Layer 1.
So I'm going to spawn it at 000 and connect it here. So whenever I'm going to load the level, it's going to spawn this. As you can see, it's now visible in the scene, but it's actually not there when you are in the editor.
Same idea if I want to do the same thing as I did before: I want to spawn the two other layers when I press button one. So when I press 1, I spawn the two others. And I want to unspawn-- so I'm removing this other unit from memory, dynamically. So now if I launch this, when we launch, we see this. Then I press the 1 key and it gets removed.
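For reference, a rough Lua sketch of the same spawn/unspawn idea, assuming Stingray-style calls like World.spawn_unit, World.destroy_unit, and a global Vector3 constructor; the "content/layer_..." resource names are placeholders for this example:

    local spawned = {}

    -- On level loaded: bring only the first layer into memory.
    function spawn_first_layer(world)
        spawned.layer_1 = World.spawn_unit(world, "content/layer_1", Vector3(0, 0, 0))
    end

    -- On the trigger (key press, gaze plus tap, ...): spawn the next layers
    -- and unspawn the first one so it is removed from memory entirely.
    function swap_layers(world)
        spawned.layer_2 = World.spawn_unit(world, "content/layer_2", Vector3(0, 0, 0))
        spawned.layer_3 = World.spawn_unit(world, "content/layer_3", Vector3(0, 0, 0))
        World.destroy_unit(world, spawned.layer_1)
        spawned.layer_1 = nil
    end

Unlike the visibility approach, this actually frees the memory, which is what you want on a device as constrained as the HoloLens.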
If you have 1,000 objects or 500 objects, this could become a very messy Flow. So my way of doing this, when I have a lot of objects to spawn and unspawn, is to use the idea of a level. A level is a bunch of units that get loaded into memory all at the same time. And when you change levels, they're just unloaded from memory.
So if you want to do something like this, I created a level that contains layer 1 and 2, and another level that contains layer 1. So if I go back to Layer 1 here, that's the only one that I have. So whenever I'm going to press the 1 button, I'm going to change the level and I'm going to switch to the other level here. And immediately, if you launch this, you're going to go and you see that you switch back and forth.
Visually, it feels the same, but only the objects of the current level are loaded in memory. And what I'm doing here is, if I press zero, I go back to my original level. By organizing this into levels and just switching, visually it feels the same, but you can go back and forth between the two, and you optimize your model a lot.
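And a small sketch of the level-switching version in Lua, assuming World.load_level and World.destroy_level behave as in the Stingray API (worth double-checking in the reference); the level resource names are placeholders:

    local current_level = nil

    function switch_to_level(world, level_resource)
        if current_level then
            -- Everything in the old level is unloaded from memory at once.
            World.destroy_level(world, current_level)
        end
        current_level = World.load_level(world, level_resource)
    end

    -- Press 1: show the next assembly step. Press 0: go back.
    -- switch_to_level(world, "levels/step_2")
    -- switch_to_level(world, "levels/step_1")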
Next is the concept of the occluder. An occluder is an object that you place in your scene. You can make it in the shape of a wall or anything like that. But as you can see here, it's semi-transparent so that you can see through it. Everything that is occluded by the object is not being rendered.
So as you can see, as you turn around, you see those objects are disappearing. And if you launch it, you get exactly the same effect as-- so if you have a wall or anything that is going to obstruct a view of what's on the other side, it's always good to use occluders. And it will remove that from the render pipeline and it's going to go much faster.
So all these tricks allow you to do culling. Now let's talk about draw calls. Draw calls. Every time that an object is being drawn on screen, this is what we refer to as a draw call or as a batch. So this is when the GPU is being called. So hey, GPU, can you do this for me? Can you draw this for me? That's a draw call.
So every object that you send to be drawn is a draw call-- it's per object, not per primitive. If you have multiple objects and you combine them into one, that's one draw call. If it's considered one object, it's going to be one draw call. But on the opposite side, if you have one object that has multiple materials on it, that's multiple draw calls.
So if you have two materials on an object, a multi-sub, it's going to be two draw calls. If you have a very simple scene and your scene is running in real-time, you don't have to think about this. But if it doesn't run in real-time, this is where you need to start thinking about reducing your draw calls, because rendering a scene is all about draw calls, draw calls, draw calls.
And it's like as many as [INAUDIBLE]. That's kind of the idea. So how can we reduce the draw calls? Two ways. Reducing the number of draw calls, and reducing the time of the draw calls.
First, the number. If you have a bunch of objects that share the same material, you can attach them together in 3ds Max, and they become a single object that is a single draw call. If you do this with multiple objects inside of 3ds Max, all of them get attached into a single draw call.
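To put rough numbers on it (made-up numbers, not from the demo scene), the arithmetic looks like this:

    -- A back-of-the-envelope draw call count. Each object contributes
    -- roughly one draw call per material applied to it.
    local objects        = 400   -- separate objects in the scene
    local materials_each = 2     -- multi-sub with 2 materials
    local before = objects * materials_each                -- 800 draw calls

    -- Attach everything that shares a material into a handful of combined
    -- objects, split into a few groups so culling still works:
    local unique_materials    = 10
    local groups_per_material = 4
    local after = unique_materials * groups_per_material   -- 40 draw calls

    print(before, after)  -- 800 versus 40: well under a 300-batch budget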
But one thing we need to remember: yes, it's good to attach objects together. But if multiple objects are attached together because they share the same material, and they surround you, that means camera culling can't remove them-- they're always being drawn on screen. So yes, it's good to attach objects, but it's also good to keep them broken up enough to still take advantage of camera culling.
So if I look in one direction, all of these three here will be culled out because they're outside of the camera frustum. So try to find a balance: attach things together, but keep camera culling intact so that objects can be culled out of the camera frustum.
If you have materials that are visually similar or are exact copies of each other, make them a single material, and then attach all of the objects so that you get fewer draw calls. And like I said, the first thing I do is try it on the Vive or on the HoloLens or whatever. And once you know whether it's running in real-time or not, that's when you jump into this process.
But if it runs in real-time, don't go into this process. But if it doesn't, then start looking. What can be optimized? How can we make this go faster and faster? And this is how you're going to simplify your scene.
Also the draw time is influenced a lot by the polygons that you're rendering. So ideally, everything is a simple polygon, a simple set of polygons. It's all attached together and all the vertices are in the right order. All of the faces are all beside each other in a very nice way.
If you have a lot of polygons, you want to reduce the number of polygons and make it as simple as possible. If you have polygons that are detached, even if they're right on top of each other and look like they're all attached, if they all have separate vertices, that's a lot of data to transfer to the GPU. So you want to attach all of this so that it's a very clean set of geometry.
Same thing if you have ReCap data or data that's very messy, and the faces are all over the place, and you have vertices that are not attached. This is going to take more time to render, so you want to optimize that and clean it up so that it's faster to render. Also, the concept of level of detail. Level of detail is an automated process.
You have a point of view, and when an object is very close to the camera, it renders with a lot of polygons. You can create multiple versions of that geometry so that when it's far away, it renders with less geometry. Everything that is close to you renders with more polygons and looks better, and when it moves away from the camera, it's so small that you don't even notice the lower detail. And that's how you can optimize the render time.
Same thing if you have textures that are big and come in all sorts of random resolutions: try to make them a power of two and reduce them. What I do is I look at how big the texture is going to be on screen and I try to narrow it down to that kind of resolution. So if 512 by 512 is enough, it's better. If I get really close to it, then I'll increase the size. But if it's in the back or far away, I'll try to minimize the resolution.
But always try to make it a power of two, so that there's no extra mathematical operation happening on the texture itself when it's sent to the GPU.
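A tiny illustrative helper (not an engine API) for picking that power-of-two size-- it rounds a texture dimension down to the nearest power of two, which is the target I'd resize the map to in Photoshop or 3ds Max:

    function nearest_power_of_two(size)
        local p = 1
        while p * 2 <= size do
            p = p * 2
        end
        return p
    end

    print(nearest_power_of_two(600))   -- 512
    print(nearest_power_of_two(1400))  -- 1024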
So to illustrate the attaching by material, here we go-- we have a Revit file, and I look at all the materials that are inside of that file. I'm going to drag one into the material editor here. And there's a nice button in the material editor that allows me to select all of the objects that share this material. So now they're all selected, and I can isolate them to see that those are all the objects sharing that specific material. Right now, they're all separate objects.
So I could go one by one and start attaching them one after the other, but that's not how I do it. I go into Editable Poly, then I go to Attach and select everything that's currently visible. So now this is one object.
But the thing about this one object-- this is a big floor of a building, and all of these objects are attached together. If I look in any direction when I'm inside the building, nothing is going to get culled out, because it's all around me as one object. So what I'm going to do is break it-- detach it into multiple objects. So I kind of go in reverse.
First I attach all of them, and then I break them into groups that make sense for the camera culling a bit later. So that's the idea of what I did here. Another thing: if you bring in a Revit file, you can say combine by Revit material. When you combine by Revit material, all of that is done for you, magically, and all you have here on the left is the materials and their names.
So if I do select this material here and isolate it, you see that it's already all combined. And then I can start making the reverse decision of how am I going to take advantage of the camera culling by breaking this into smaller pieces?
Level of detail. This looks really bad, I know, and this is intentional. This is a very low-resolution version of this object, and this is the higher-resolution version. So these are different levels of detail. And when I'm very far away from the object, even if I just use this-- or even if I use a plane with that color-- it's going to be enough for me to believe that this object is still there.
So when you want to do this in 3ds Max, you create those different levels of detail. Then you align them so they're exactly on top of each other-- perfectly aligned, so that when one becomes visible and another invisible, I believe it's the same object. You select all of them, group them together, create a group.
I know I've said in classes in the past that groups are bad-- well, in this case, it's good. So you group them together, then you go into the utility panel. There's a tool called Level of Detail. You create a new set, and you're done. You could start adjusting the thresholds here, but they don't really translate to the game engine, so those settings don't make much sense at this point.
So what I do is just create the set. Then, when you export to the game engine, that's where you decide what's going to be visible at which point. So I'm going to create a unit here for that object, and I'm going to check the option to import levels of detail. Very important.
And when you bring it into the game engine here, what you're going to notice is that it's there. And if you open it, it's going to have the three geometries that we just grouped together. You're also going to notice that here, we have the level of detail tab. I'm going to move it here.
And we have the three different levels of detail that are available, and the meshes that are part of each one. I can exaggerate this so that it's going to be obvious for us to see. So the high level of detail, I'm going to make it between 80% and 100% of the screen. The medium one, I'm going to make 50 to 80. And the last one here, I'm going to make 10 to 50.
And I can add a new level of detail, and this one will be from 0% to 10% of the screen. I'm not going to put anything in it, so it's not going to render anything. So I save this unit. And now if you zoom out, you see that it starts to decimate itself, and then disappears at some point. And as you get closer--
And like I said, I exaggerated it here so it would be obvious. But this is where you make the decisions. You create those different levels of detail, then you look at them in the environment, and you see, OK-- when it's about like this, I can switch to this one, and switch to this one, and switch to this one.
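Expressed as data, those exaggerated ranges look like this-- in practice the engine does the switching for you once the ranges are set on the unit, so this sketch is only to make the thresholds concrete (the mesh names are placeholders):

    local lods = {
        { min = 0.80, max = 1.00, mesh = "lod_high"   },
        { min = 0.50, max = 0.80, mesh = "lod_medium" },
        { min = 0.10, max = 0.50, mesh = "lod_low"    },
        { min = 0.00, max = 0.10, mesh = nil          },  -- too small: draw nothing
    }

    function pick_lod(screen_fraction)
        for _, lod in ipairs(lods) do
            if screen_fraction >= lod.min and screen_fraction <= lod.max then
                return lod.mesh
            end
        end
    end

    print(pick_lod(0.9))   -- lod_high
    print(pick_lod(0.05))  -- nil: the object stops rendering entirely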
And now we have something that can be super beautiful when you get very close to it, but as you get far away from it, you still have the impression that it's there. We did a project recently where we had a room full of chairs and we wanted to have a super nice chair because the whole idea was the chair was the design that we wanted to convey.
So the chair in the front was super high-resolution, but the chair in the back was super low-res. But it felt like all these chairs were super high-res inside of the environment. So that's the kind of stuff that's going to save you some time.
The other thing that I want to talk about is the way to create-- I was going to kind of skip this. OK, I'm going to talk about it. So, the way that you create those levels of detail. If I bring in an Inventor file here, there are two ways of bringing it in: as body objects or as meshes.
What's good about the body object is that it brings the mathematical curves into 3ds Max. And once you are in 3ds Max, you can make decisions about which resolution you want to use-- do you want to lower the resolution or make it higher? So you can make those decisions at the individual object level.
So I'm going to bring it in and isolate just one part so that we can focus on that part. And if you look at it here, we see that it's a body object and now it's set to the medium resolution. I can switch to coarse or fine and all of this. And if you want to see the edges or the resolution of this, just a simple trick is if you turn on edge faces, you're not going to see the polygons on this.
If you want access to the polygons, you need to add an Edit Poly on top. And when you add the Edit Poly, then you can see the polygons. But because we didn't convert it to an Editable Poly, what I really like is that we still have the procedural settings of the body representation. So if I go to the fine resolution, you see that we get the fine resolution. If I go to coarse, you see that we get the coarse resolution.
Because it's procedural, I can make as many as I want by just changing the settings here. I can change those parameters, and then I can reduce it and reduce it and reduce it. So when I have this and I want to create my level of detail, I go and I use a tool called the snapshot tool. So snapshot allows me to take a snapshot of the mesh.
I just created one level of detail. I can go to the fine level and do another snapshot, and then another at coarse, and we have those three levels of detail from this procedural mesh. If you move them side by side, you can see that.
And if I want to decimate even more, I can use a tool called ProOptimizer. With ProOptimizer, if you have textures or materials, you can turn that on here and define what you want to keep. There are a lot of settings, but essentially, you press Calculate. And once it's calculated, you can start to reduce the number of vertices.
Even if you go to values like 30%-- we just cut more than half of the polygons on this object-- it still looks OK, and it still looks believable. But you can reduce it even more and take snapshots at different levels of detail, and combine all of these to create the levels of detail for your project.
So this is one way of doing it. Now I'm going to try to move to the last part, which is computation. Computation is everything that you do in your scene that takes time. So that's conditions, that's triggering, that's all the logic behind the real-time experience. And we want it all.
So that's the thing. Every time that we go into VR or AR, people are like, I want it to be super realistic and I want to see everything. And all of these are great. We want it all. But everything that we put on screen is costing time and resources.
So pretty much all the game engines today are able to render images like this. But when I look at an image like this, what I notice is that there's depth of field. So, yeah, that's nice, that looks good. There's fog, there's ambient occlusion, there's screen space reflections. All of these things look great, and that's awesome. Bloom makes those little parts glow-- bloom here. We've got vignetting effects.
All of these things, they look great. But they cost a lot of render time. So you want to minimize this. So by just looking at all these effects and turning them off one by one, you're going to increase your performance. And when you're doing AR, VR, I think that's the first place where I start. I just turn everything off.
And then I decide, is it something that's going to make a lot of impact in my scene or in my experience? And then I can turn on only one by one or I can bake a lot of that. I'm going to talk about that in a second.
Same thing for lighting. The more lights you have in a scene, the more global illumination, the more subtle the lighting is, the better it's going to look or the more realistic it's going to look. So more lights equal more render time. So in pretty much all the engines today, there's a process called light baking.
So light baking takes all of the lighting, the global illumination, bakes it into light maps, and all of the lights become disabled. So they're not rendered, they're not calculated at all in your scene. But all of the lighting information from the global illumination is saved into light maps, and this is what gets displayed in the end.
So, yeah, you get a really beautiful global illumination solution, but it's not calculated at every frame. It's all pre-rendered into light maps.
Metallic materials have this thing called reflections and that's what makes them look good. And reflections are also very costly in terms of real-time rendering. So we can also use a tool called reflection probes. And reflection probes are tools that allow you to capture the reflections at a very specific spot, and then you can use those reflections at different places in your scene.
They're not calculated. They're pre-rendered. But it's the same idea as rendered texture. You just apply it as a reflection map, or a reflection cube map, and you get some really nice reflection. They're not perfect. They're not exact. But it gives you the feeling that there's reflective materials in your scene.
When you have materials, when you build a real-time shader, you have all these nodes here. And when we bring something in from 3ds Max and we didn't build a shader like this, it converts the Standard material or the V-Ray material to an uber shader. And the uber shader has a lot of conditions: if you have a map, then do this; if you have this, then do that.
And this is computation time. If you multiply this by 300, 400, it's going to take a lot of CPU time. So we want to minimize that as much as we can. So if you only have three maps on a material, reduce it to the nodes that you need for your shader so that it's very minimal and there's less calculation happening for every shader in your scene.
And if you have a Flow system that has a bunch of nodes connected to each other with calls to this and that, it's maybe time to consider yourself an intermediate or expert user and start looking into the scripting tools for your game engine. In Max Interactive, we use Lua. Unreal has C++, Unity has C#. If you have a lot of nodes like this, it's time to start thinking about using scripting.
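As a hint of why scripting scales better, here is a minimal Lua sketch where one loop replaces hundreds of spawn and unspawn Flow nodes. Again, World.spawn_unit and World.destroy_unit are assumed from the Stingray-style API, and the "content/layer_" naming is just a placeholder convention:

    local spawned = {}

    -- Spawn a whole range of layers with one call instead of one node each.
    function spawn_layers(world, from_index, to_index)
        for i = from_index, to_index do
            spawned[i] = World.spawn_unit(world, "content/layer_" .. i, Vector3(0, 0, 0))
        end
    end

    -- Remove a range of layers from memory once they're hidden by the assembly.
    function unspawn_layers(world, from_index, to_index)
        for i = from_index, to_index do
            if spawned[i] then
                World.destroy_unit(world, spawned[i])
                spawned[i] = nil
            end
        end
    end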
So to illustrate this, this is a scene that I've got here and it has a lot of-- not of artifacts, but a lot of--
DAVID MENARD: Post effects.
LOUIS MARCOUX: Post effects. God, I mixed my words. So we have a lot of post effects. If I look in the filtering tool here, I can select the environment node. The environment node is where all of the screen post effects are happening, so they're all listed here. The ones that have an impact-- so the fog, I'm going to turn it off. Now we lost the fog.
The exposure doesn't have any impact, so it's just a way to define how it's going to be exposed. But screen space ambient occlusion is going to have an impact, so I'm going to turn it off. Screen space reflection is going to have an impact, turn it off. Depth of field is going to have an impact.
Lens quality as well, and the bloom and the vignetting. So all of these post effects-- when we remove them, yes, it's a little bit less interesting, but at least we can render in real-time. Now, when we want to do light baking, what's important is that it needs a UV map on the second channel.
So for some people, it doesn't mean anything. But the idea is that you need to define where and how your bitmap or your light map will be applied to your object. So you can do it manually inside of 3ds Max. Look at this here.
So this object has only one map channel, and we need a second one. This is where it's defined here-- we see that this object contains only one map channel, so I need to define a second one. I apply the Unwrap UVW modifier, go to where we define the channel, and set it to 2. I abandon the old mapping that was there, then open up the UV editor and just flatten it, and we have a basic map.
Now if I look at the channel info, I've got a second map channel. In most game engines, this is where they bake the light map, so you need to have UVs on the second channel. Fortunately, all of this can be automated: when you select objects that don't have a second map channel, you can turn on the option that generates UVs for light baking. But there's a caveat-- this can take long.
So, yes, if you unwrap a scene with a lot of objects, it can take long. Sometimes it could take 12 hours, sometimes it can take 2 minutes. If you have a very, very complex scene and it unwraps all of the objects in the scene, it could take a long time. So be careful when turning this on, and be ready to wait a little bit.
But in this case, it's not too bad. We don't have a lot of objects, so we can say generate automatically.
AUDIENCE: We believe in you!
LOUIS MARCOUX: Yeah. And so I'm just going to talk as I get to that point. But the idea is that you can generate automatically. And when it's generated automatically, you can go to-- so I'm just skipping that video to go back to, as you can understand. Maybe I can skip to--
AUDIENCE: Skip to the [INAUDIBLE].
LOUIS MARCOUX: Can I skip here? Ah. Here we go. Over there. Oh, that's good.
So now what I'm doing here is-- you have your scene that we brought in, and all the UV maps have been automatically generated. What we need to do is select all of the lights inside the scene that have an impact, that are enabled. You can decide whether to render the sunlight or not, or whether to bake it or not.
But you see now, all of the lights are selected. And if I go into the settings, if I switch the baking to indirect and direct, you see that immediately all of my lights are disabled in the scene. None of those lights are having any effect on the scene anymore. So I've removed everything. But it's flat. It's boring.
So this is where you go into light baking. For the light baking, you can set it to the minimum. What I'm doing here is setting the texture size and the number of calculation passes for the light simulation to a minimum. Then I say bake the scene, and it starts baking the scene.
And you see that even if all of those lights are now inactive, we have the global illumination solution. We start to see the shadows, and it's a very subtle global illumination solution, which means that it's going to look very, very good on screen. And if certain objects have-- I can see that on the projector, this may be less obvious.
But the map doesn't have enough resolution. I can go back here and say, well, I want to crank up the resolution for this one, and I'm going to crank up the number of passes so that I can get a better solution. And I'm going to say, bake only that selection, and we're going to have something that looks much better.
Same thing for when I talked about reflection. So in this scene here, we have nice reflections on the floor, and that's because we have screen space reflections enabled. If I turn that off, immediately we are using the main one, which is the world reflection. It's just reflecting the clouds everywhere in the scene.
So yes, it looks good. If it's an exterior scene, it's going to look good. But if it's inside, it's not working very well. So what we do for this is create a reflection probe. With the reflection probe right now, nothing has been pre-calculated-- it's just using the default cube map, which is the sky.
But basically, you place this in your scene, and then you can say render the reflection at this very specific point by baking the reflection probe. And it creates the reflection for that very specific point in space. Then you define how far those reflections are valid, so you can make it very big or very small depending on your space.
And you see the impact on the floor here. We have one done, so I can copy this a few times throughout the scene. And as you navigate through the scene, Max Interactive will blend the different cube maps. So it doesn't calculate any reflections; it just uses what's already pre-calculated.
Right now, it's the exact same reflection map everywhere. So what I'll do is just re-bake, or create another bake, for each of these specific spots-- bake each reflection probe. Of course it's not perfect, but it's realistic enough to be believable inside of a scene. And about the shaders: when you bring a shader directly from 3ds Max and you open up the graph to look at it, this is what it looks like.
So if you would just use a default converter, this is the standard base material. All of these nodes. Well, if I have a map here, I'm going to do this. If it's not, then I'm going to use just the roughness. All of these are calculations that happened inside of the scene.
So what I can do is when I look at this, I'm only using the color and I'm only using the metallic and roughness values inside of this shader. So I'm going to connect the roughness and the metallic directly to the node and then I'm going to delete everything else. I'm going to grab the color and connect it to the color.
So now my shader will do exactly the same thing in the scene, but it's narrowed down to something very simple. So if I save this, the UI for the shader has changed. I can still change the color in the metallic and the roughness value, but it's a much simpler shader. I've done it for one material, so this is great.
You can see that I can change the color here. But I'm going to just save this and rename it so that I can easily find it when I'm looking for it. I'm going to call that simple CMR, or color, metallic, and roughness. And then I'm going to pick the material for another object. And you can see that here, the material is still the default material.
So I'm going to say, instead of using the default parent, I'm going to use my simple CMR material. And because the input variables for all of those shaders are remaining the same because we just deleted nodes inside of our shaders, what's going to happen, it's going to reuse exactly the same UI, and you're going to get that shader.
So I spoiled my punchline there by clicking too fast. So when you're doing all these optimizations-- that's my last slide, so after that, we're done. When you're doing optimization, you will use the performance HUD. The performance HUD-- I think in Unreal it's called the performance profiler or something.
But all of the game engines have some kind of profiling tool that allows you to analyze the performance of your scene. The performance HUD, if you don't know what it is, is your best friend when you're doing optimization, because it allows you to scrutinize your scene, look at what's happening, and find out where you need to spend more time optimizing.
So it looks like this in Max Interactive. It can look daunting, but if you read through it, there are a few parameters that are more important than others. There's CPU and GPU time, measured in milliseconds. That tells you how much time the CPU and the GPU are spending on calculations.
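The arithmetic behind those milliseconds, assuming the usual 60 frames-per-second target:

    local target_fps   = 60
    local frame_budget = 1000 / target_fps   -- roughly 16.7 ms per frame

    -- Example readings taken off the HUD (made-up values for illustration):
    local cpu_ms, gpu_ms = 12.4, 18.9
    if cpu_ms > frame_budget or gpu_ms > frame_budget then
        print("over budget: keep optimizing")  -- here the GPU is the bottleneck
    end

Both the CPU time and the GPU time have to stay under that budget, or you drop frames.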
We have the number of batches and primitives. This is what we want to keep under our targets: under 300,000 primitives and under 300 batches.
The drawing and pixel times-- this is everything about drawing on screen. If it's very high, you know your draw calls are probably taking too long, you may have too many polygons, or maybe the anti-aliasing is too high. That's the kind of thing you're going to find from the drawing time here.
Then the number of lights. So if you have lights being calculated, this is going to be high. So bake everything, and no more lights will be calculated.
The screen space effects. Everything I was talking about-- bloom, fog, all of these things that could take time. So if you look at that, they're all listed here. So if they're off, that's not going to take any calculation time.
And then your memory usage. So if you know your target device, you're going to know the texture memory, the space that it has. So you can look at all of this here and know how much your scene is taking in terms of space and usage.
So I want to pass it back to David. So that's the necessary evil part of the process.
DAVID MENARD: So last minute caveats before we go on to the next slide. Be very, very careful when you're baking lights for AR or baking reflection probes for AR for obvious reasons. If you bake your scene and then you put it in an environment that doesn't match, it's just going to look completely wrong.
Second caveat: the template that I gave you uses a mini renderer, a forward renderer, so you can't use all of the fancy stuff he showed anyway. So you're good to go. If you're using Unity, you're going to want to use their forward renderer, though. You can't use the deferred renderer-- for those who know what that is.
All right. So you have a Louis with you who optimized all your data for you because no way I'm doing all of that. What now? Well, you're basically good to go. We've seen why you should use the HoloLens, right? We've seen how to actually export your data onto the device. And we've seen that if it doesn't run, all of the options that you have, at your disposal, a huge toolkit on how to optimize your data so that it actually runs.
So now it's your turn. Homework or don't do it, I don't care. You have all the tools you need. I want to see you guys actually use it. About half the room here raised their hands that they have a HoloLens. If you have any questions, if you have any demos, or if you have any successes, don't hesitate to shoot me an email. I'd love to see the results of all of this.
I guess I'm the last thing between you and the party, right? So thank you.
Last thing: I have the HoloLens here. We had prepared some demos and some fancy stuff, save-the-world scenarios, but they didn't have the hardware for it. So if you want to try it, just come up here. Thanks.