Key Learnings
- Better understand real-time content production
- Learn how to optimize content for use in real-time engines without having to sacrifice quality or the overall intention of your work
- Learn about workflow efficiencies when dealing with real-time content production, turning difficult tasks into easy ones
- Learn how to adopt workflow practices that will enable you to be more productive and efficient with your overall work
Speakers
- Logan Foster: As one of the product owners here at Autodesk, I get to work with the 3ds Max and Maya teams to provide amazing modeling and animation tools to our users. I have been a user of 3ds Max for over 20 years and have a wide range of experience developing content for a multitude of platforms (though my primary focus for much of my career has been on real-time graphics).
- Shawn Olson: I've spent decades in 3D, first as a hobbyist, then as a tool developer, game developer, and now on the product team for USD in 3ds Max and Maya.
LOGAN FOSTER: Hi, everybody. Welcome to the Autodesk University talk on tips and tricks for efficient real-time content production in 3D. My name is Logan Foster, and I'm one of the product owners here at Autodesk working on 3ds Max.
So just a little bit about me. Like I said, I'm one of the product owners here at Autodesk. I'm working on 3ds Max. My primary focus is on modeling workflows that you'll find inside of the software. Prior to joining Autodesk, I actually worked in the games industry for an extremely long time, 16 years. I worked on a number of different projects for different clients, from original IP to name-brand stuff for big multinationals.
So I've got a lot of experience in this, and a lot of passion in real-time game production and real-time production experiences for serious games, education, the whole nine yards. So I'm hoping that you will get a lot from this class, and that my experience and my expertise will help out and give you a little bit of a benefit and a little bit of help working on this type of content production.
So why am I giving this session, and why does it matter? Really, when we're thinking about real-time production, it's rapidly growing. We're seeing rapid growth every year within the industry. Obviously, the clearest example of real-time production is entertainment products, so games that you might play on your PC, on your mobile phone, on your game console, things like that. That's the most obvious use of real-time production.
But we're actually seeing a huge rise in real-time production in areas such as serious games. Serious games means using game engine technology for simulations. And we're seeing this paired up with things like virtual reality to really augment the workflows that are going on and give people better on-the-job training experiences.
So real time is super huge and super powerful for many, many different things. And lately, we're actually seeing a huge boost in real time being used in feature film production. So feature film and television production where they're using these large volumes and giant LED screens, along with game engine technology that's paired up with the cameras to do traditional filmmaking techniques. But being able to have virtual sets that are interactive and viewable by the cast members and the audience that's in there.
So there's a huge, huge benefit that we're seeing with real-time production. And really, what we're seeing as a huge trend as well is that a lot of things are converging into real time, so real time is becoming more and more of a critical aspect. And even as consumers, we're going to see this more and more going forward with regards to things like e-commerce, where 3D content production, modeling, and interaction are going to be really important for purchasing and for your buying habits and behaviors.
So with this big push for real time, and with the experience that I've got, one of the things that I've seen over many years of working with different clients and different people-- especially people that are maybe new to or uncertain about working with real-time 3D art production and getting it into something like a game engine or a real-time engine-- is that there are a few misconceptions that keep popping up that I think really inhibit and block people from having the best experience that they can. Not only making the content, but producing compelling and amazing experiences with it.
The first one has to do with the quality of the art that people are producing. So people often think that their work needs to look low poly or poor quality overall to do it, that they're going to lose fidelity and look and things that they were really proud of with their work.
Another problem that people often feel they have is that they don't know what their content's going to look like when it gets inside of a game engine. So they feel there might be a disconnect between the environment they're authoring their content in versus what they're going to see in the final production. And last but certainly not least is this feeling that exporting content is really difficult to do, and that it's hard to set up and get going inside the game engine.
And so what I'm going to try to cover in this class is a few of these misconceptions, and help demystify them a little bit, too. Show off a little bit of how I work and some thoughts and suggestions that I've got to overcome these issues and these problems that are going on, and to help you benefit and move along with your work and hopefully have a better experience with it.
With that said, a lot of the demos and examples I'm going to show you are going to be using 3ds Max. But if you're an experienced 3D artist, you shouldn't really have too many problems taking a lot of these core concepts and ideas and applying them to another 3D application that you're using in your workflow.
So let's dive right into the problem here. So what I've got going on here is a scene that I've assembled. And the scene's got a mish-mash of art and content in here. We've got some photoscan art that's high quality, millions of polygons. We've got some low-poly content. We've got some stuff that I've imported from past architectural projects that I've done and engineering projects and things like that.
So it's a real mish-mash. And this might be a really good example of the kind of job your boss might give you, where you don't often get the chance to have uniquely-created art content that you've made yourself and that fits the environment. You often have to try to use what's there and what's been provided to you. So I felt this was going to be a great environment setup to use to show many different techniques and problems, and how we're going to try to solve them to get past these misconceptions that are taking place.
So the first thing I want to talk about here is something called draw call reduction. What's going on in your engine is that you've got the graphics card working away, and you've got these things called draw calls. Draw calls are pretty much the biggest limiting factor that you're going to find in any type of real-time production.
So a lot of times, people have this misconception that it's polygon counts, that their work has to look low poly and crappy and low quality. And really, the issue is that graphics cards can handle a lot of polygons, but what they can't handle are a lot of draw calls.
And to understand what a draw call is, the really simple method of thinking about it is an object, or in this case, a really simple cube in your scene. This is a draw call. If the object was to have shadows, this would be two draw calls. And then obviously, as we have more objects and more things casting shadows, there's more and more draw calls going on here. And this really can add up really quickly when you start thinking about how you're making your art content and how you're placing stuff in your scenes and making it all work.
And it's really important to keep in mind, like I said, there's a hard limit on the draw calls that you can have with your graphics card. And this cap matters, because when you go over it, you're going to have a huge performance degradation. So when you're thinking about working on a PC, you've maybe got about 3,000 draw calls to work with. If you're doing VR on a PC, you've got half of that, about 1,500 draw calls, because you've got to deal with two sets of rendering going on: the left eye and the right eye each rendered with its own camera.
And when we're thinking about mobile devices, maybe you only get about 180 draw calls. And if you're doing mobile VR, you're now only getting about 90 draw calls. So if you think about that, that's 90 objects, or 45 objects with shadow casting going on in your scene. That's a big limitation. But thankfully, there's a pretty easy way to get around this.
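The budget arithmetic above can be sketched in a few lines of Python. To be clear, these budget numbers are the rough rules of thumb quoted in this talk, not hard hardware limits:

```python
# Rough draw call budgeting, using the ballpark per-platform budgets
# quoted in the talk. These are rules of thumb, not hard limits.
BUDGETS = {
    "pc": 3000,
    "pc_vr": 1500,     # two eyes rendered, so roughly half the PC budget
    "mobile": 180,
    "mobile_vr": 90,
}

def draw_calls(num_objects, shadows=True):
    """Each object is one draw call; shadow casting roughly doubles it."""
    return num_objects * (2 if shadows else 1)

def fits_budget(num_objects, platform, shadows=True):
    """True if the scene stays within the platform's rough budget."""
    return draw_calls(num_objects, shadows) <= BUDGETS[platform]
```

So 45 shadow-casting objects exactly hit the 90-call mobile VR budget, and a 46th pushes it over, which is the point being made above.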
So looking at that previous scene I had before, I just took it, exported it, and brought it into Unity, but you'll see the same thing going on in any other engine you're using. And I've got a really complex scene here. As you can see, the engine's telling me I've got 20,000 batches, or draw calls, going on, which is huge. It's unacceptable for what we want, especially for something that at first glance people might have thought was a pretty simple scene.
So looking into this deeper, one of the biggest, most egregious problems that's going on in here is that I've got all these little objects. So if you remember before, when I was talking about draw calls, all these little objects, each one of these, is a draw call. So there's all these little nuts, these little bolts, these little bits and pieces all over the place here on the railway ties.
Same thing with this power substation piece that I brought in. This is going to be the worst spot possible for us, and we need to fix this up. And this is actually going to be really simple and easy to do. What we're going to do here is look at attaching all these objects together. So instead of being individual mesh nodes, we're going to use something like the Attach function in 3ds Max's Edit Poly, and just go through and click and select things.
So I'm trying to group stuff based on what type of object it is, how close objects are to each other, and most importantly, the type of material they're using. So try to group things that use similar materials. It's a bit of a tedious, time-consuming process, but it works out pretty well.
In this case here, one thing I really want to avoid, though, is attaching things that don't share the same type of material. This could be a really big problem. Because if we think back to our example of the cube, if we have multiple materials that need to be processed on individual faces of your object, those have got to be separated out into additional draw calls. So you're not actually making things any more efficient, because you've got more stuff that's got to be processed on fewer objects.
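The grouping rule described above-- attach meshes only when they share a material-- is essentially a bucketing step. Here is a minimal sketch of that idea; the mesh and material names are hypothetical:

```python
from collections import defaultdict

def group_for_attach(meshes):
    """Bucket meshes by material so each attached group stays one draw call.

    `meshes` is a list of (mesh_name, material_name) pairs. Attaching
    across materials would just push the extra draw calls down into
    per-material submeshes, gaining nothing.
    """
    groups = defaultdict(list)
    for mesh, material in meshes:
        groups[material].append(mesh)
    return dict(groups)

# Hypothetical scene contents: nuts, bolts, and railway ties.
scene = [
    ("bolt_01", "metal"), ("bolt_02", "metal"),
    ("tie_01", "wood"), ("tie_02", "wood"), ("nut_01", "metal"),
]
```

Running `group_for_attach(scene)` collapses five objects into two attach groups, one per material, which is exactly the manual selection pass described above.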
So if I go and make an export now, after doing a bunch of attachments just on this railway tie piece here, and I go back into Unity, I can take a look at what's going on. And now I can see I'm down to 462 draw calls. That's a huge improvement for a very minimal amount of work, right? And I haven't in any way, shape, or form compromised the visual quality or look of this work. I've just made the data more efficient in how I'm exporting it and how I'm going to use it.
So next up, I've got a little tip here for hierarchy. One of the worst things you can do-- and I showed it off in the very first example-- is just hitting Export All. I exported the entire scene, dumped it out into the game engine, and that was a big problem. What I should actually have been doing is exporting each individual piece out on its own. One of the issues, though, that can happen with this is that you get these pivot points located in weird locations, like you can see on the screen here.
So what I'm going to do to correct this is add a dummy helper node to my scene, and then link the objects for that piece to this dummy helper node. And what happens when I re-export it now is that I get a really nice pivot point located at the base, right where I would want this object added to my scene. And that lets me scale, position, rotate, and do all these sorts of little adjustments to it.
And this is a really nice little trick that I like to use. I don't see it mentioned too much, but it's going to really help your workflow out. So I'm just going to do the same thing with everything in my scene here. I've got everything assigned to layers, and now I'm just selecting all the children within a layer, exporting them out to their own FBX file, and then I'll pull that in at the very end and bring it into the game engine, in this case Unity. But the same tip and trick would work in a lot of other game engines as well.
Things to also keep in mind when you're working on this stuff, try to align it, if you can, to go along the North, South, East, West world axis coordinates. Having objects that are rotated or misaligned can actually be a bit of a problem because you're going to be trying to perhaps adjust and change them around within the real-time engine itself.
So another useful tip-- and this is one I'm actually very surprised a lot of people don't know about-- is using splines as geometry. So in Max, we've got a data object called the spline. Splines are really useful because you can just draw them out as little curves or lines; they're just vector lines. And inside of 3ds Max, you can actually enable rendering for them, both in the viewport and in the renderer.
And what ends up happening is that the spline object will actually build lofted geometry. It's got a number of side segments going around, and a number of spans going in-between each of the knot points. And to get this stuff working inside of a real-time engine, because they don't support spline objects, I'm just going to tack an Edit Poly modifier on top.
This is going to turn all the data into geometry, which lets it all be displayed inside the game engine, while keeping a very dynamic and robust ability for me to go and make changes if I want to increase or decrease the resolution-- how much interpolation is going on, so the smoothness of the curve as it's moving.
This is really easy to go back and do. I can make changes to the knot points or the vertex points that are being used on the spline to put them in different locations. It keeps it really dynamic and really robust. And it's perfect for things like wires, cables, pipes, you name it. Anything that has to follow along a curved path and direction is going to be really beneficial for this. And like I said, I'm super surprised that there's not a lot of people that know about this simple little trick here and how to use it to really benefit their workflows in the long run.
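For budgeting purposes, it helps to know roughly how much geometry a rendered spline will add: the polygon cost scales with the number of sides times the number of spans. Here is a rough sketch of that arithmetic; it's an approximation for planning, not 3ds Max's exact tessellation:

```python
def lofted_spline_tris(sides, knots, steps, closed=False):
    """Approximate triangle count for a spline rendered as a tube.

    sides:  radial segments around the tube's cross-section.
    knots:  control points on the spline.
    steps:  interpolation steps between each pair of knots.
    Each quad in the sides-by-spans grid becomes two triangles.
    (An estimate for budgeting, not the exact 3ds Max tessellation.)
    """
    segments = knots if closed else knots - 1
    spans = segments * (steps + 1)
    return sides * spans * 2
```

So an 8-sided cable with 5 knots and 4 interpolation steps costs a few hundred triangles, and you can dial the sides or steps down per-spline if the budget gets tight.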
Next up is the thought about the smooth modifier, and how it actually benefits performance within your real-time work. So looking at the scene here, I've got a few objects that are not smooth. So they've got this faceted diamond-looking results on it.
Obviously, this doesn't look right. We don't want to have work that looks that way because it's incorrect. But the other problem that could happen with it is that these faceted faces actually cause additional computation that's got to go on with the GPU. And this can actually have a negative performance hit with your real-time work.
And so to fix this, all I'm doing is throwing the Smooth modifier on here and then applying it to the various objects. And then, knowing that I had a lot of different shapes here, I'm just going to go through and, once again, use that little trick of attaching various objects together based on their material, their proximity to each other, and what they're doing, to help reduce the draw calls.
Another useful thought on here is the Chamfer modifier. So we get a lot of jokes with 3ds Max that every Max update has chamfer updates to it. And there's a lot of really cool things that the Chamfer can do. But there is some really neat stuff that maybe has been overlooked a little bit by people for real-time production for games.
So for example, it's super easy to apply the Chamfer modifier and let it be parametrically driven by what's going on in the scene here. In this case, I'm using it on all of the unsmoothed edges and telling it to chamfer them. And then I've got the ability to control the distance of the rounding and the number of segments-- how many new bits of geometry are going to go in-between each piece and part.
So in this case, I can put a zero segment chamfer on it, and that will allow me to have these really nice bevels going on, which I can then smooth out to create a nice angled look. And as many of us know when we've been taught 3D, bevels are really important for helping to catch highlights and add more importance and relevance to our 3D work, because nothing is truly 90 degrees and perfectly sharp.
Another benefit as well is that the chamfer allows us very fine control over the dynamic nature of what's going on. So we can control the resolution that we want to have. So if we're going to build level-of-detail models for different resolutions as the model gets further from or closer to the camera, we can control that.
But also, one of the things that I think is really important for the Chamfer modifier is that it doesn't destroy any of your UV coordinates on here. And as many of you know, UV coordinates can be a difficult thing and a frustrating thing that people don't like to do. And when they're asked to do a chamfer, it often creates new data and destroys that information that's there.
But this isn't something that happens inside of 3ds Max. So we've actually built all the chamfers, from the Chamfer modifier down to the physical chamfers that you might apply in Edit Poly, to be respectful of the UV coordinates and work with the data that's there. So this is really beneficial in a real-time workflow because you're not having to redo your work.
And in some cases, it can actually be smarter, where you're building your low poly work, getting your UVs working on something that's a much simpler bit of geometry, and then throwing your chamfer on top and having this ability to be very dynamic and robust with the resolution that's going on without needing to worry about things like subdividing, or TurboSmoothing, and adding extra geometry bits of data where you don't need it.
We can think about chamfer now being applied to other objects in the scene here. So here, we've got this power engine. Obviously, we can see how the chamfer here is going to really benefit and complement what's going on with the work. It's going to help soften those ugly 90-degree hard angles.
We've got the ability, once again, like I said, to zoom in on it. And as we can see here, there's a lot of explicit UVs on this model. And so we can see the fact that the chamfer isn't destroying these UV coordinates. It's working with them and complementing them with the mesh data that's going on.
Next up, weighted normals. So weighted normals is something that we added as a modifier in 3ds Max 2021. And the benefit of weighted normals is this. As you see here, I've thrown the Smooth modifier on this object. Smooth is the default modifier for getting your object geometry to look smooth. But it can create these nasty normals that are pointing in all different directions.
What we can do instead is we can use the weighted normals modifier, which is going to look more at the actual intention of the surface and how its neighboring faces are working. And we have a really cool control over that to also work with UV coordinates, smoothing groups, and things like that to be very complementary to your workflow.
And using this in combination with chamfer actually gives us a very cool method to improve the quality of your work with very little effort needed. So you're adding the chamfer to get rid of those 90-degree corners. And then, by using the Weighted Normals modifier, we can just very quickly apply it to our mesh and give it an even more rounded look, because the normals are going to be pointed based on the intention of the whole surface.
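Conceptually, an area-weighted vertex normal averages the face normals around a vertex weighted by face area, instead of the uniform average a plain smooth gives. This is a minimal conceptual sketch of that idea; the actual modifier also offers other weighting modes, so this is illustrative only:

```python
def normalize(v):
    """Scale a 3D vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def weighted_vertex_normal(faces):
    """Area-weighted normal for one vertex.

    `faces` is a list of (face_normal, face_area) pairs for the faces
    sharing that vertex. Larger faces pull the vertex normal toward
    their orientation, which better matches the intended surface than
    a plain unweighted average. (Conceptual sketch only; 3ds Max's
    modifier also supports angle and other weighting schemes.)
    """
    total = [0.0, 0.0, 0.0]
    for normal, area in faces:
        for i in range(3):
            total[i] += normal[i] * area
    return normalize(total)
```

With a big flat face and a tiny chamfer face meeting at a vertex, the result leans almost entirely toward the big face, which is why the large surfaces stay visually flat while the bevel still catches a highlight.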
Another really important thing to think about when you're doing real-time work is to be efficient with your UVs. So when you think about bitmaps and the bitmap sizes being used, it's very common that artists will want to just use the highest-resolution bitmap they possibly can to get the most pixel quality on their mesh, so that there's no pixelation and no nasty square pixels going on and things like that.
But one thing you've got to really keep in mind when you're working with textures and bitmaps is that we're stuck with power-of-two resolutions inside of real-time work. So for example, our bitmaps have to be 256, or 512, or 1,024, 2,048, 4,096, and so on and so forth. But also, as we step up each resolution, we're actually making something that's four times larger than what came before it.
And this is really important to keep in mind. Because while we want to have really high-quality contents and high-quality textures and UVs going on here, we can actually end up using a lot more memory than we need to and make ourselves really inefficient with the memory that we're using.
So if you remember before when I was talking about graphics cards, and one of the number one limitations was draw calls, the second biggest limitation is the amount of video RAM that you see on the device. So video RAM, you'll see marketed and sold with all of your graphics cards. This is specific memory that the GPU uses to keep things like textures there so it can load them and unload them and use them as needed.
When we go past this amount of video RAM being utilized on your graphics card, what ends up happening is that just like on your computer when you run out of RAM, the data has to be swapped to the hard drive and back up again. And this can, once again, be a huge performance hit. So we want to be really smart and really careful of not only the resolution that we're using with our bitmaps, but how we're using the bitmaps as well.
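To get a feel for why each resolution step is four times the cost, and how quickly that eats into video RAM, here's a small sketch. Uncompressed RGBA at 4 bytes per pixel is an assumption for illustration; real engines usually compress textures, but the 4x jump per step holds either way:

```python
def texture_bytes(resolution, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed VRAM footprint of a square power-of-two texture.

    4 bytes/pixel assumes uncompressed RGBA8 (an illustrative
    assumption; engines typically use DXT/ASTC compression). A full
    mip chain adds roughly a third on top of the base level.
    """
    base = resolution * resolution * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

# Each step up quadruples the memory cost:
for res in (256, 512, 1024, 2048, 4096):
    print(res, texture_bytes(res, mipmaps=False))
```

A single uncompressed 4,096 map is over 64 MB before mipmaps, which is why dropping one resolution step where the player will never notice is such an easy memory win.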
So in this case, here, I've got this really simple platform base that you see in the scene here. It's got some really ugly UV coordinates that don't quite match up with what's going on with it. Obviously, we want to reduce these distortions. So to do this, I'm just going to use these new Unfold 3D algorithms that we've added into Max to get better results.
But one thing I'm also going to do is be very careful about how I pack it. And one of the things I'm trying to be considerate of as I'm packing is ensuring that the UVs for the very bottom of this platform are kept very small and tiny. Once I'm certain that I've packed those UV coordinates very efficiently, I can go in and use something like Substance Painter or Photoshop to create textures for it. And then I can bring them into the game engine here and see what's going on with my work.
And so I can get a really good output result just from making some custom textures, applying them, being very efficient about the memory, because any unused pixels are still loaded into memory. And then using 3ds Max and the viewport that we've got there, along with the PBR shaders and this image-based lighting, to get a very close approximation of what I'm going to see within the game engine, and being able to use that to ensure that I can get very similar settings, a very similar look and feel out of it all when I've completed.
So next case, I'm going to look at this photoscan data that I brought in and how to retopologize. So as we saw earlier in the demo, there is this giant tree trunk that I photoscan captured from my yard. It was a giant poplar tree that had fallen over. And it looked really cool, but it's 1.7 million polygons of data.
This is totally unacceptable, and I don't want to go and manually redo this by hand. So I'm going to throw the Retopology modifier on here. And because it's an organic shape, I don't need any edge control on it; I can just let it do its thing. And I'm going to tell it to use a target quad count of 100,000.
So after a few minutes, it's going to process through and give you this really robust 3D model that very closely approximates the input mesh form. Looks really good. It's got a nice mix of quads and tris going on with it. So as we can see, it's 200,000 triangles, which is actually still considered low poly.
So getting past this misconception that low poly is low-quality work: low poly just means polygons that are being used efficiently. So this is what we've done here with the Retopology modifier. It's nice and simple, easy to use, and there's no need to go around and try to trace the model and rebuild it by hand.
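The counts line up: each quad splits into two triangles on export, so a 100,000-quad retopo target lands at roughly 200,000 triangles, an eight-and-a-half-times reduction from the 1.7 million source. A quick check:

```python
def quads_to_tris(quad_count):
    """Every quad becomes two triangles when exported to an engine."""
    return quad_count * 2

source_tris = 1_700_000          # the original photoscan mesh
retopo_tris = quads_to_tris(100_000)  # the retopology target
reduction = source_tris / retopo_tris  # how many times smaller
```

Here `reduction` works out to 8.5, and the result still reads as the same tree trunk once the high-res detail is baked back on as textures.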
So moving on from here, of course, I need UV coordinates. And the reason I need UV coordinates is that I want to project all the cool data from the high-resolution scan mesh onto this low-resolution mesh that we've got here. So once again, this is just UV work. I'm using the Unwrap UVW modifier with the new unfold algorithms.
And what I'm trying to do here is I'm trying to break up the mesh into smaller chunks and pieces so that I have these nice square checker-pattern bitmaps that are going to go across this whole thing. And these checkered patterns are showing the amount of distortion that's going on. So what I want to have is these checkers should be as square as possible.
And so to do this, what I ended up needing to do is just to go around my mesh and break it up into smaller and smaller pieces that were hidden by the mesh form, and the topology, and how it's moving around. And then this would allow me to have something that's got a minimal amount of distortion that I can then use, pack together, and create these really nice-looking UV coordinates out of it that I'm going to use later. And as you can see, all the checkers are fairly square. They're looking pretty good, and they all have a uniform size, which lets me know that they are all relatively the same size and scale to one another, here.
From here, if I want to look at the original bitmap that came with the photoscan, it's this monstrous mess. I'm going to use it, though, and push it onto the new UVs that I just created. To do this, the Projection modifier in Max does a great job. I'm just going to target the high-resolution model and then push the cage outwards so that its raycasting will capture all the hits from the high-resolution model.
From there, using Bake To Texture in Max, I'm just going to use it as a target reference and capture the hi-res data down to the low-res. So I'm going to get a diffuse map, a normal map, and an ambient occlusion map. And when I look at it all applied to my material here-- so I'm using the PBR material that ships in 3ds Max, and I just quickly drag and drop from Windows Explorer onto it-- I can see I get this really nice output result that is almost a perfect match for the high-resolution original.
You wouldn't really know the difference unless you actually inspected the wireframe and the polygon topology to see what's going on. And this is a pretty simple workflow. I know it's sped up really fast here, but this maybe took an hour of my time. It was easy, very intuitive, very quick. So don't be intimidated by this idea that you can't use photoscan data, or that you've got to use it the way it is. It's very easy to repurpose the data and get it down into something that's going to be beneficial for your workflow.
Next up, as you might have seen in the original demo where we're looking at the scene, there was this little rock that I brought in. This is a rock that I had actually made in something like ZBrush, just brushed around, made some stuff. But it's high-quality and high-resolution as well, and I need to go through and recreate it. So I thought this would be a good example to show the pain of going through it when you have to manually build up the topology.
So manual retopo is neat in some ways, in the fact that you get complete control over what's going on. But it's also a very slow process. I'm just using the Step Build tools found in the ribbon of 3ds Max here to build this up. And what I'm trying to do is trace around the major shapes and forms that are very important for the mesh, and then fill in the data from there.
One thing that could really happen, though, when you do manual retopo, is you can get caught up in small data. And so you can make some of these quads really small, and then, as you try to attach them to other areas of the mesh, they don't really align. So it can be a bit of an experience to go through and figure out what's the best possible way to build this stuff up with a manual retopo.
One of the benefits, though, that you do get with manual retopo is complete control over the edges, the edge flow, and where the data is going to be spent. Whereas automatic retopology, such as the Retopology modifier, is all based on algorithms. It's trying to do what it thinks is best to retain the shape and form as optimally as possible.
Looking at this model that I brought in, though, the sculpt had no textures or UV coordinates on it. So if I apply this really cool rock texture that I've got, you can see we get this massive amount of stretching and smearing going on with it.
This isn't great, and I don't want to do UV coordinates for this high-res model. So instead, I'm going to use an OSL shader-- OSL stands for Open Shading Language-- specifically a triplanar projection shader. What this is going to do is take the bitmap and project it like a cube mapping from six different sides, blending the corners together. And that's going to give me a really nice result.
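The blending a triplanar shader does can be sketched as weighting the planar projections by how much the surface normal faces each axis. This is a conceptual sketch of that weighting, not the actual OSL source shipped with Max:

```python
def triplanar_weights(normal, sharpness=1.0):
    """Blend weights for the X, Y, and Z planar projections.

    The more the surface normal faces an axis, the more that axis's
    projection contributes at that point. Raising `sharpness` tightens
    the blend near corners so the projections smear less. Weights sum
    to 1. (Conceptual sketch, not the shipped OSL shader.)
    """
    w = [abs(c) ** sharpness for c in normal]
    total = sum(w)
    return tuple(c / total for c in w)
```

A face pointing straight up gets 100% of the top projection; a 45-degree corner gets an even blend of two projections, which is exactly the soft corner blending that hides the seams.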
From there, I'm just going to go and make some UV coordinates on the low-poly mesh. Once again, pack them together, bake them out, and get myself a diffuse map, a normal map, and an ambient occlusion map. All of this data looks great; it was captured exactly as it was supposed to be.
Applying it to a PBR material inside of Max to ensure that I get a good lookdev, I can know what I'm looking at. I can see it looks pretty close to what I had before. And then, when I apply it all inside of Unity here in the game engine, I can see that I get pretty similar results from what I had in 3ds Max.
And this is really aiding me in my lookdev because I'm spending less time having to go back and forth, tweak the textures up, make changes. I can be pretty confident knowing that the data that I get inside of Max is going to be very close to what I'm going to see inside of a game engine like Unity or Unreal. So look at using the bitmaps and the viewports inside of Max now, because we've got a really powerful viewport in there that's going to give you some amazing output results.
Next up, still thinking about textures: trim sheets. Sometimes, with really large objects like this large robot mesh, having an explicit set of UV coordinates for every face is going to be cost-prohibitive, because of the large bitmap sizes that would be needed. Instead, what I can use here is the idea of a trim sheet. I've got one that I've been making here inside of 3ds Max. The general idea is that I've got a plane, I'm going to use a projection mapping, and I'm going to map it to a bunch of geometry that's floating above it.
What's important is the change in the surface angle. Because the flat surfaces are exactly parallel to the projection plane, they're going to read as a flat surface when the normal map is baked out. So I can have all this wonderful floating geometry in my scene here, which is going to enable me to do rivets or cables or wires and any other little details that I want.
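Why does floating geometry over a parallel plane bake cleanly? In tangent space, a surface parallel to the projection plane has the normal (0, 0, 1), which encodes to the familiar "flat" normal-map blue. A tiny sketch of that encoding (standard 8-bit mapping, not a 3ds Max API):

```python
def encode_tangent_normal(n):
    """Map a unit tangent-space normal from [-1, 1] to 8-bit RGB [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

# Geometry lying parallel to the projection plane bakes to the neutral
# normal-map blue, (128, 128, 255), so it reads as perfectly flat --
# only the floating details (rivets, cables) perturb the normal.
print(encode_tangent_normal((0.0, 0.0, 1.0)))  # -> (128, 128, 255)
```

That neutral encoding is the whole trick: the baker can't tell the difference between a genuinely flat surface and a plane with detail floating above it, so the illusion holds.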
And I can build this content, assign it, and apply it, and I don't ever once have to worry about the geometry underneath, because the illusion is going to get captured entirely in the projection mapping. And this is a great trick. It's kind of a shame that I think some people have forgotten you can do this as a great form of detail in your meshes; you don't need to try to build all of this detail into the physical geometry and make it perfect.
So moving on from there, I've got my final trim sheets, and I'm going to bake them out to some bitmaps. Once again, Bake to Texture is a fantastic tool; it's going to let me build a whole bunch of different bitmaps that I need. I've already baked them out, so I'm just showing them off right here.
And now, after I apply them to my mesh, I'm just going to go through. The mapping method for this one is a little bit different, in that I care about having my faces be flat with a minimal amount of distortion, but I don't care if they overlap. The benefit of a trim sheet is that you're UV mapping and texturing the way you might have learned when you first learned 3D, where you're just grabbing sections of faces and lining them up on the bitmap wherever they look and fit best.
And you're trying to work this way so that you're a lot more efficient with it. Overlapping UVs are fine here; it works great. The benefit of the trim sheet, of course, is that instead of worrying about really big bitmaps, I'm just trying to make everything share the pixel data that's there. And the end result that I get from it looks really nice, really good.
Obviously, I still want to do some more tweaks and optimizations to get the color looking a little more contrasty, because I personally love high-contrast textures in my work, and this one's a little bit low-contrast. But the data is there, it's showing up, and it's looking great. It's a great workflow to consider using, and one where I think trim sheets, how to build them and how to utilize them, are often overlooked.
But if you look at a lot of games projects and similar work, it's the most efficient way to texture something that's very large. So you'll see it being used on a lot of environment pieces and a lot of very large objects, where having explicitly mapped coordinates like we're often taught to use is very inefficient. Trim sheets are a much more efficient way to get really high-resolution-looking texture data onto your models without blowing up the memory usage.
So moving on from there, we can do some final lookdev between Max and Unity. Since I have 3ds Max here, I'm just using the floating viewport, and obviously I'm using the active viewport controls. I can turn on a lot of really nice things, ambient occlusion, better shadowing, bloom, and so on, and get a beautiful render out of there. So if for some reason I didn't need or want to use the game engine, the real-time viewport in Max is looking great. But if I want to, I can go into Unity and keep working there.
As you see, I'm taking my scene a little bit further. I've arranged all the rocks and dressed up the scene as I want to have it arranged to tell a little bit of a narrative. I've also got a very beautiful-looking scene here. And all I'm going to do to finish off inside of Unity or in Unreal, is I'm going to bake the lighting out.
So this means baking out the global illumination lighting and also considering using lightmaps. Especially on static objects that aren't going to move around, baking in the lighting will actually help the real-time engine, so that the look and feel you get out of it very closely matches that shadowed, properly ambient-occlusion-lit viewport that you get out of 3ds Max.
So either way you go, you're going to get great results. Real time in the Max viewport, real time in the game engine, either way, you're going to win. It just depends on the output that you want as the final result, and the medium it's going to be delivered on, whether it's going on the web, being distributed through maybe CDs still, or kiosks, or whatever it might be.
So that's the summary of it. I hope that everyone's learned a little bit about real-time production and some efficiencies that they can make. So, just as a summary of what to think about: think about reducing your draw calls. Draw calls are, in my opinion, the number one limitation, and they're going to be the number one limitation for a very long time. I know we've got a lot of cool technology coming out in engines like Unreal and Unity that's trying to minimize this.
But when we think about stuff for VR, 3D commerce, training simulations, and so forth, draw calls aren't going to go away anytime soon, and we need to look at how we get past this limitation. Like I said, my suggestion is always to think about combining and attaching meshes based on their proximity, the type of object they belong to, and the material that they're using. This will really help you bring those draw call counts down.
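The grouping rule described above can be sketched in a few lines. This is a conceptual toy, not the 3ds Max attach workflow itself; the scene structure and field names are made up for illustration, and "cell" stands in for whatever proximity bucketing you choose.

```python
from collections import defaultdict

# Hypothetical scene: each mesh has a material and a rough spatial cell.
meshes = [
    {"name": "rock_a", "material": "rock_pbr",   "cell": (0, 0)},
    {"name": "rock_b", "material": "rock_pbr",   "cell": (0, 0)},
    {"name": "robot",  "material": "trim_sheet", "cell": (1, 0)},
    {"name": "cable",  "material": "trim_sheet", "cell": (1, 0)},
]

def batch_by_material_and_cell(meshes):
    """Group meshes that share a material and sit in the same spatial cell.

    Each group can be attached into a single mesh, so each group costs
    roughly one draw call instead of one per object.
    """
    groups = defaultdict(list)
    for m in meshes:
        groups[(m["material"], m["cell"])].append(m["name"])
    return dict(groups)

batches = batch_by_material_and_cell(meshes)
print(len(meshes), "meshes ->", len(batches), "draw calls")  # 4 meshes -> 2 draw calls
```

The key constraint is the material: two meshes with different materials can't collapse into one draw call no matter how close they are, which is why material reuse and trim sheets feed directly into draw call reduction.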
We want to make sure that we're always utilizing UVs and textures in the most efficient way. We want to be smart about our UV optimization; we don't want a lot of empty dead space that's not being utilized by the bitmaps, if possible. We want to be smart about our bitmap sizes, using just the right resolution, not too large, not too small. A little bit of Goldilocks there.
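To put some rough numbers on why bitmap size matters: texture memory grows with the square of resolution, so one size step costs or saves about 4x. A back-of-the-envelope sketch for an uncompressed RGBA texture (GPU compression formats change the absolute numbers, but the 4x-per-step ratio holds):

```python
def texture_memory_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Approximate GPU memory for an uncompressed RGBA texture.

    A full mip chain adds roughly one third on top of the base level
    (1 + 1/4 + 1/16 + ... = 4/3).
    """
    base = width * height * bytes_per_pixel
    return int(base * 4 / 3) if mipmaps else base

# Dropping a 4K map to 2K saves about 75% of the memory.
print(texture_memory_bytes(4096, 4096) // 2**20, "MiB")
print(texture_memory_bytes(2048, 2048) // 2**20, "MiB")
```

This is why "just the right resolution" is worth the effort: a single oversized 4K map can cost as much memory as four correctly-sized 2K maps.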
And when possible, we want to look at reusing textures, especially trim sheets. Reusing textures as much as possible helps us reuse materials, which is way more efficient with memory. It also helps create a very cohesive, consistent look in your scene, because everything's going to look like it belongs together when content and textures are being reused.
Of course, we want to work smarter, not harder. Max has a lot of really awesome parametric modifiers that are really beneficial. There's nothing on the market that comes close to matching the Max modifier stack workflow and the benefits you can get if you use it properly, because you can keep modifying and making adjustments along the way. So utilize the modifier stack whenever possible to make your life easier, so that you don't have to work with high-resolution models from the start.
You can use the Chamfer modifier to add detail and information there. Think about how you can use the UVs and be efficient with those. Use other modifiers to do twists and bends, building stuff that starts straight but ends up turned and curved. All of that's going to help you be way more efficient with your time, have a better experience overall, and just generally be happier working with real time, if you've been finding frustrations with it.
And last but not least, remember that since 3ds Max 2021, we've got a really powerful viewport system built into the software. It can do HDR image-based lighting environments, which give just the same light we would get from image-based lighting inside of a game engine. We've got really powerful PBR materials that show up pretty close to a one-to-one comparison between the Max viewport and the game engine.
You can utilize the PBR materials to adjust parameters and hone in on the settings that you want. As you probably saw in the video, the parameters and settings match up between the PBR materials in Max and the PBR materials in a game engine like Unity. So it's really good for honing in on the values you want to use, so you're not fiddling and playing around with it a lot.
But we've also got OSL shaders, written in Open Shading Language, which can be used to help push your work further. You saw that I used the triplanar shader; there are a lot of other really cool OSL shaders available.
And what's really awesome about OSL is that you can use the same shader in 3ds Max, in a lot of game engines, and in a lot of other 3D applications. OSL shaders are designed to be portable and reusable. So think about using them, because they can do some really wonderful things that can make your life simpler, faster, easier, and better.
As for additional resources, there are a lot of resources out there on real-time production with 3ds Max and other 3D applications. I'm going to list just a few here that I found really beneficial while putting this presentation together. First is the AREA: the Autodesk AREA is a great community for people using Max and Maya to talk about art production in general.
We've got a 3ds Max Discord, a real-time server for people looking to chat and connect with peers about problems and solutions in 3ds Max so that they can get past them. There's a wonderful group on Facebook called Stack, which has a lot of really good users sharing cool tips and tricks on how to use 3ds Max and push it even further, as well as sharing their work and their content.
Polycount, if you're working in real time already, you probably know about. It's a great online community with forums for people to connect and talk about art production for real time and games. And there's also a really nice resource called the Gamedev Stack Exchange, a Q&A site where people can ask questions and get a lot of really good technical answers and suggestions.
Of course, if you want to push your work inside of your real-time engine further, I'd recommend looking at the Unity and the Unreal dev docs, depending on which engine that you're using, because there's going to be a lot of really cool tips and tricks and other things you can do to optimize your performance that's very specific to those engines that's going to be beneficial to you.
And certainly last but not least, if you want to join the 3ds Max beta, play with any of the new work that we're doing, and give us feedback on your real-time productions, what's going well, what's not, and improvements that you'd like to see, please feel free to sign up for the Max beta. We're always welcoming new members and looking to get as wide a range of experiences as possible from our users to ensure that we can build the best possible tools.
So with that said, I want to give a special thanks to one of my peers on the Max team, Shawn Olson; a friend of mine, David Gillen, who works as a technical artist at a serious games company; and everyone else on the 3ds Max dev team who helped me out and answered questions while I was making this presentation. Couldn't do it without you and your tireless support making awesome tools that help people do things bigger, better, faster.