AU Class

Rigging Fast Deformation Estimation with Neural Computation in Maya


Description

This course aims to educate participants on the innovative integration of machine learning (ML) techniques for deformation approximation within Maya software. Using approximations for (part of) a deformation stack can greatly improve interactivity when working with complex rigs, and can significantly improve portability between different applications. It is designed for animators, game developers, VFX artists, and any professional interested in exploring the intersection of ML and 3D animation. This course will explore a demo of a rig with computationally expensive muscle deformations that is reduced to a simple, general, black-box neural network that only requires inputs from the motion system. We'll show the entire setup process in detail. In addition, we'll go over best practices, tips, and limitations to help users get the most out of this tool for their production needs.

Key Learnings

  • Learn about building a rig with an ML Deformer to achieve interactive speeds.
  • Explore use cases and learn about fine-tuning parameters for optimal use of machine learning.
  • Learn how to integrate ML deformers into a production pipeline (rig, generate, train, and deform).

Speakers

  • Etienne Allard
    Etienne uses his nearly 20 years of experience in Content Creation software development to develop and optimize animation and rigging workflows. He worked for Content Creation companies like Softimage, Avid, Toon Boom, and Autodesk, which strengthened his love for making tools that artists like to use. Machine learning being the latest trend in optimizing workflows, Etienne learned its basics and applied them to the familiar tools he knew. He can also play Bohemian Rhapsody by hitting his teeth.
      Transcript

      ETIENNE ALLARD: Hi, everyone. Thanks for attending this session on "Rigging Fast Deformation Estimation with Neural Computation in Maya." Good to see you here. So, first of all, before jumping into the subject, let's have a look at the safe harbor statement. Thank you. We can proceed. So what are we going to talk about today? It's mainly about the ML Deformer. It's a new node that we've got in Maya that can approximate deformations that can be complex. And we'll see the reasons why you'd like to use that.

      Then we'll see what's under the hood, how neural networks work, the basics behind that technology, and how we leveraged it to create that feature. Then we'll have a live demo of how to set up an ML Deformer in your rig and see the different steps to make it work and how you could use it. And at the end, we'll have general tips about how to get the most out of this new tool.

      So let's first have a look at and answer the question, what is it for? To start, we'll have a quick video of it in action and of the problem that we had before using it. This is a standard rig with a complex deformer that can be slow. Then a version of this rig using an ML Deformer instead of the complex deformation chain.

      You can see that even if it's an approximation, the playback evaluation is much faster. Same goes for manipulating the character. And we can see how much it deforms the base character and the difference it has with the source deformer, the one that it's trying to approximate. We can see that the approximation is quite acceptable.

      Thank you for watching. So the ML Deformer, what exactly is it doing? It's mimicking the effects, or part of them, of a deformation. It's based on an input character, an input mesh, and a select number of controls, and it can replace the complex deformation and approximate it in a fast way. Who is it for? Many people can use it, each with different advantages.

      For animators, the benefit they get out of using an ML Deformer is near-real-time performance for animating and manipulating the character. The evaluation of the deformation is much faster, or can be much faster, actually. And the final look of the animated character can be seen. Sometimes with complex deformations like cloth simulations or muscle systems, seeing that result is so slow that we need to work with proxies, which don't give the actual result. The ML Deformer can be a step towards seeing that final result.

      You can also use nicer or more realistic deformations on secondary or crowd characters that wouldn't be important enough to spend a lot of computation time on. Now with the ML Deformer, it becomes a possibility because evaluation is much cheaper. I don't know if some of you are working in studios, but in studios, sometimes you have proprietary deformers, technology that the studio wants to protect and that cannot go outside of the studio.

      Using an ML Deformer, since it's a black box, you can have it learn a proprietary deformer for a given character. And then this ML Deformer can be distributed instead of sharing that proprietary data. For example, if you're working with contractors or third-party studios, that might be a way to enable them to use the proprietary deformation without actually touching the proprietary data.

      And in a pipeline, it might be one of the tools that can be leveraged to pass deformations from one software package to another. We'll have more information about how to do that later in the presentation. Let's take a step closer into how it works. So there are four main steps.

      First of all, you need to identify a complex deformation chain in your rig. You identify that it's your bottleneck, or the technology you want to protect, and you replace this deformation chain in your rig with the ML Deformer, connecting the same inputs from the complex deformation to your ML Deformer. That's the first step.

      Next, you need to create a training set of data on which the ML model, the machine learning algorithm, will learn how to deform. This can be a set of existing motion clips or poses generated from minimum and maximum values of the controls. Once we have the training set, we can train the machine learning model, the neural network, on it. And when we have the final model, we can use it in the rig to speed up or hide the complexity of the deformation chain.

      Then it's replaced. [CLEARS THROAT] Sorry. Excuse me. Let's have a closer, deeper look at what it looks like in a rig. So what you see in front of you is a geometry that is deformed by a deformation chain. And then we get the final deformed geometry.

      Here, LBS stands for Linear Blend Skinning, a skinCluster, a skeleton that is deforming the main portions of the character. And the asterisk boxes are the complex deformation chains that we want to replace, for example, a clothing system.

      Since we identified that this is the slow part of the deformation chain, that's the part we'd like the ML Deformer to approximate, to replace. So you see, it can be a part of an existing deformation chain and replace some component of it. That part, again, can be proprietary, or slow, or too complex.

      Another scenario would be to have a character for which we have a fast deformation. Linear Blend Skinning can be an example. And from this, we want to learn the difference between the simple deformation and something more complex. That could be, for example, produced by different software.

      As a concrete example, you could export the Linear Blend Skinned character to [INAUDIBLE], add a cloth simulation to that character, export the result back to Maya, and learn the difference between the original, simply deformed character and the complex deformation done in the different package. That's what the ML Deformer could learn, so you wouldn't have to go back to the external software, which in this example, in this actual use case, is doing the deformation on the character. You could have an approximation of it directly from within Maya.

      Now let's see the two different scenarios next to each other. In one case, the base geometry is still part of the deformation chain, and we replace part of it with the ML Deformer. In the other case, the ML Deformer replaces the whole deformation chain altogether. And in that case, we don't even need the target deformation. We can put it aside, or maybe it's not possible to produce it again because it's been done in an external package.

      Moving on to some of the naming conventions that we've got, you might see in the Maya UI some of the components of that system with these names. The base shape is the original geometry, deformed or not by simple deformers that are fast to evaluate.

      The target shape is the resulting deformation from the base shape using the complex deformer. So this is the actual high-quality result you would get, which we're trying to approximate or emulate. And the final shape is the actual output of the ML Deformer approximating the target shape.

      So all of this, what kind of technology is it using? The ML Deformer, what is it using under the hood? Neural networks. Let's see the basics of how a neural network works before going back to Maya and seeing the ML Deformer in action. So what is it? A neural network learns to approximate a function.

      It's given many examples of a combination of input and output. So if you have x, you have y; we provide a bunch of x's and y's, and it can learn how to approximate the in-betweens. If you look at the graphic at the bottom of the slide, you can see that we have one sample in there that is part of the function. That's a training example. We can have many of them. And roughly, even your brain computes approximately the shape of that function. An ML model can do the same thing.

      And now when you feed a different input to the ML model, you can generate an output that would more or less fit on that function. That's what we call inferencing: giving an input to get an output back. So output values are computed from input values. The process is a black box. The neural network is doing its stuff, and we get outputs. So if we, again, go back to the actual use case, the ML Deformer, what would those inputs and outputs be?

      The inputs would be the control values, since a deformer is actually a function, a function taking the control inputs and producing a set of deltas. One thing that is important to remember: the key advantage of using a neural network to evaluate a function is that it's always evaluated in constant time. So no matter how hard or complex or even computationally heavy the original deformer is, the approximation will be evaluated in constant time. That's why we might want to avoid approximating simple deformers and focus more on the complex ones.
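      To make that concrete, here's a minimal sketch (not Maya's actual network) of a small neural network in PyTorch that maps control values to per-vertex deltas. The layer sizes and the constants NUM_CONTROL_VALUES and NUM_VERTICES are illustrative placeholders.

```python
import torch
import torch.nn as nn

NUM_CONTROL_VALUES = 72    # e.g. 12 joints x 6 values (two rotated axes each) -- placeholder
NUM_VERTICES = 5000        # vertex count of the deformed mesh -- placeholder

class DeformationApproximator(nn.Module):
    """Maps rig control values to per-vertex offsets (deltas)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_CONTROL_VALUES, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_VERTICES * 3),   # x, y, z delta per vertex
        )

    def forward(self, controls: torch.Tensor) -> torch.Tensor:
        # Evaluation cost depends only on the network size, not on how
        # expensive the original deformer was -- the "constant time" property.
        return self.net(controls).view(-1, NUM_VERTICES, 3)

model = DeformationApproximator()
deltas = model(torch.zeros(1, NUM_CONTROL_VALUES))   # inference: controls in, deltas out
```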

      So the input values, what are they more exactly? More precisely, they are derived from the rig. So it could be direct controls from plugs on nodes, maybe a slider controlling the tension of a tension deformer. Or they can be values derived from joint matrices. So we have transformation matrices, we can extract parameters out of them and feed them to the neural network.

      The outputs are the amount of movement that is produced by the deformer, the delta between the target shape and the base shape. If you look at the video on the left, you can see the base shape, the input to the deformer, being transformed into the deformed shape, the target shape. Again, as a concrete example, if we apply a muscle deformation, the base character is without the muscle deformation and the target shape has the resulting deformation.

      Just a summary of what we've just seen: control inputs, using different representations, are fed to the neural network to produce an output, which is a bunch of deltas. In a nutshell, it's just a deformer, and we don't know exactly what it's doing inside. It's a black box. So that's some information, right? How would you set up an ML Deformer in Maya? Let's see a practical example. For that, I'm going to jump to Maya and do the live presentation from there.

      So in this rig, I've got a body, a body shape. That's my base shape, my original shape, which is deformed by muscles. And the muscle system is taking a set of joints as its input. Since the muscle system is moving body parts like the arms around, which introduces large deltas, I set up, next to the target shape, a base shape that is only being driven by a skinCluster, so Linear Blend Skinning.

      So if I look at some of the movements, I move this joint around, and you can see both characters are moving. The one on the left has some of the nice muscles moving, as opposed to the one on the right. So let's start. Let's begin setting up the deformer. What do you need to do? You need to take the base geometry, and on top of it, we're going to add the ML Deformer.

      So if we go back to the base, we can see in the deformation stack that we've got an ML Deformer that is taking the output of the skinCluster as its input. And I'm going to pin the Attribute Editor since we're going to spend a bit of time there. The next step is to tell the ML Deformer what it is trying to approximate. So it's going to compute, again, the difference between one shape and the other.

      We have the subtraction over there. Let's set the target. So now it knows that it needs to compute the difference between this guy and this other guy. Now we need to set up the controls. By default, when we create the ML Deformer, we have a node called the control character that is created next to it. I jump to the control character. There's a section over here which is called Add Controls. That's where I'm going to add the controls.
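      For reference, the same kind of setup could roughly be scripted as below. This is only a sketch: the node type name 'mlDeformer' and the target attribute name 'targetGeometry' are assumptions to check against the actual node in your Maya version, for example in the Attribute Editor or Node Editor.

```python
import maya.cmds as cmds

base_mesh = 'body_base'              # geometry driven only by the skinCluster
target_shape = 'body_musclesShape'   # shape driven by the complex muscle system

# Add the ML Deformer on top of the base mesh's existing deformation stack.
ml_node = cmds.deformer(base_mesh, type='mlDeformer')[0]   # node type assumed

# Tell the deformer which shape it should approximate (attribute name assumed).
cmds.connectAttr(target_shape + '.worldMesh[0]', ml_node + '.targetGeometry')
```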

      What we see displayed here are the attributes from the nodes that are selected. For this model, we're going to use the joints as the inputs. And for the joints, we want to use their rotation. We could use the Euler angles directly as direct controls. But there's an issue with Euler angles: they're not a great representation of the data for machines.

      The reason is that values that are very, very different for machines, like numbers that are quite far apart, can mean the same thing. For example, minus 180 is the same as 180 for a joint rotation. So because values that give the same output are very different, it can confuse the machine learning. And as an additional problem, there can be multiple representations of the same resulting rotation.

      So if we rotate something 0 degrees in X, 90 in Y, and 0 in Z, it's the same as doing minus 180 in X, 90 in Y, and 180 in Z. So something that's easier to understand for machines is the representation as the three vectors of the coordinate system that have been rotated by the corresponding rotation. And we might not need all three of them, because the third one will always be the cross product of the first two, so we can simply pass two vectors for each joint rotation to the machine learning system.
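      As a small illustration of that representation (the idea only, not Maya's matrix converter node), the two vectors are just the first two columns of the joint's rotation matrix, and the third axis can always be rebuilt with a cross product:

```python
import numpy as np

def rotation_to_two_vectors(rotation_matrix: np.ndarray) -> np.ndarray:
    """Return the rotated X and Y axes of a 3x3 rotation matrix as 6 floats."""
    x_axis = rotation_matrix[:, 0]
    y_axis = rotation_matrix[:, 1]
    return np.concatenate([x_axis, y_axis])    # shape (6,), fed to the network

def two_vectors_to_rotation(six_values: np.ndarray) -> np.ndarray:
    """Rebuild the full rotation; the Z axis is the cross product of X and Y."""
    x_axis, y_axis = six_values[:3], six_values[3:]
    z_axis = np.cross(x_axis, y_axis)
    return np.stack([x_axis, y_axis, z_axis], axis=1)
```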

      We jump back to our example. What we're going to do is select all of the joints that are going to be used by the ML system. Expand this. So we need all of the arms and the spine. The fingers are not being used, so let's skip those. You can see all of those joints in the boxes above. We want the matrix representation.

      Since we need a matrix representation, we're going to use a matrix converter node. To do that, we create the node and specify the type of rotation representation we'd like to use. So as mentioned on the previous slide, we want to use the axes instead of Euler angles. We're not going to use the scale, and we're not going to use the translation. So we create the matrix converter node.

      And in this case, we're only interested in the local matrices of each of the joints. So let's filter the results to the joints' local matrices only. Perfect. Looking at this, I see every matrix I need. Let's add them all. You can see the matrix converter now, with all of the matrices linked to it. You can also see that from the control character: we can see that there's a matrix converter node, and we can also see its content if we expand it.

      Now we see the content of the matrix converter node, and that should-- OK, so we're happy with the controls hooked up to the ML Deformer. Now we need a training set. In this scene, we've got an existing motion clip for that character, so I'm using that. I already imported it into the scene. So we have a bunch of poses. And I generated some random ones to go along with it, to have a better coverage of all of the plausible poses. Let's have a look at how we can do that.

      We jump back to the ML Deformer tab. We see we have a section named Pose Generation. There, we can create a second, a third, or several other control characters. Those control characters, we can use them for generating poses. Let's create a new one. We'll see why we need that. We do this again, a control character, and I name it to make it clear which one is which.

      There. I'm going to take the same list of joints that we had for the first control character. And then I'm going to hook up the rotations that we use on those joints, in all of the different axes, and add them to this other control character. You'll notice that I'm using Euler angles. Why am I using Euler angles? Because I'm a human. It's easier for me to understand them. And I can set limits.

      So there, I'm going to set a limit, the minimum and maximum values, for all of the joints for which I'm going to generate poses. And since I already have many poses, I'm going to select only a subset of them first. Set the limits. Small limits, because I only need small movements in addition to the large ones I've got. And expand the Pose Generation settings.

      I'm going to check the After Last Key option so that when I create new poses, they're going to be added at the end of the sample set that I've already got. And I can play with those settings. One thing that the machine learning algorithm likes a lot is to have each control affecting parts of the body individually, so that it learns really well the effect of a given control. In addition to that, it's also useful to have combinations of controls activated at the same time.

      But we mainly need a one-for-one relationship, and then a smaller percentage of poses with different combinations. Since I've already got many of them, I'm going to work on a selection. So let's say I want to add more wrist rotations. I'm going to select them, and I'm going to generate poses from the selected controls.

      I'm going to randomize which nodes are going to be used, not the attributes within those nodes-- anyway, in this case, it's only going to be one node, so it doesn't matter. I'm adding the keys at the end. The controls that won't be used for that generation will use their default values, which we can set here. Here it's going to be 0. And we'd like only a third of the selected controls to be active at the same time. That gives a good balance between having mixed controls active at the same time and single ones.
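      Conceptually, the pose generation described above boils down to something like the following sketch: sample random values between per-control minimum and maximum limits, keep roughly a third of the selected controls active per pose, and leave the rest at their default values. The function and thresholds are illustrative, not Maya's implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def generate_poses(limits, defaults, num_poses, active_ratio=1/3):
    """limits: (num_controls, 2) min/max values; defaults: (num_controls,)."""
    num_controls = len(defaults)
    poses = np.tile(defaults, (num_poses, 1)).astype(float)
    for pose in poses:
        active = rng.random(num_controls) < active_ratio         # ~1/3 of controls active
        random_values = rng.uniform(limits[:, 0], limits[:, 1])  # within the set limits
        pose[active] = random_values[active]
    return poses

# Example: 20 extra wrist poses with small rotation limits and defaults of 0.
wrist_limits = np.array([[-15.0, 15.0]] * 3)    # degrees for rotateX/Y/Z
extra_poses = generate_poses(wrist_limits, np.zeros(3), num_poses=20)
```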

      I'm happy with that. Right-click, and I can add the poses. Now I've added 20 more poses. Good. Very smooth. We could do that for all of the different components, but let's skip that process and jump to the next step right away. So, yeah, we're happy with the training set that we've got. I think we've got a good coverage of the different poses and the different input values.

      Well, let's go back to the ML Deformer tab and export the data, export the deltas that we've got, so that they'll be ready to be learned on. For that, we go back to the main tab, the tab for the ML Deformer. Right-click and select Export Training Data. There are multiple settings available. The first one is the training data location. The training data is only useful for training.

      Once training is done and you're happy with the ML Deformer that you've produced, you might want to get rid of that data because it can be quite large. So I suggest you put the training data in a location where you'll either remember to look for it and clean it up eventually, or maybe a temporary folder. In this case, I'm using the default value, which is in the project folder under the cache ML Deformer folder.

      I generated a bit more data than the default, so let's use the first 5,820 poses and give a name to our training set. It's important to give a good name to the training set because you can use different representations of the deltas, and each representation has its advantages and disadvantages. So name it something that lets you easily remember how it was produced. And I can select the mode. There are two different modes, offset and surface. Let's jump back to the presentation to see the difference between the two.

      So the deltas, as we said before, are the difference between the target shape and the base shape, the result of the complex deformation and its source. In offset mode, it's simply the direct world-space coordinate delta between the two. This can be used for simple deformers for which the difference between the two meshes is not affected by parent transformations. A concrete example would be the Tension deformer if you're applying it to a wire. So you have a simple line, you apply the tension, and there's no hierarchy in the deformation.

      It's efficient. It's super easy to compute, so it's really, really fast. But it has some drawbacks. Let's say you have a set of joints. If you're always computing the deltas in world space, the rotation of a parent shoulder will produce different inputs and will actually produce a large deformation in world space for the hand, even if the hand locally isn't being deformed.

      But again, let's take the muscle deformation system here. In this case, you should use surface mode. It's a bit slower to evaluate, but deformations that happen on the hand won't be as affected by transformations on the parent, because the parent rotation has no impact on the local transformation of the hand. What is surface mode? It's a mode in which, for each vertex, we compute its local coordinate system based on the geometry, so based on the surface, the polygons that are attached to it.
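      Here's a rough sketch of the surface-mode idea: express each vertex's delta in a local frame built from the surrounding surface, so that a parent rotation (the shoulder) doesn't blow up the deltas of locally undeformed parts (the hand). The frame construction below, from a vertex normal and one outgoing edge, is a simplification of whatever Maya actually computes per vertex.

```python
import numpy as np

def local_frame(normal: np.ndarray, edge: np.ndarray) -> np.ndarray:
    """Orthonormal frame from a vertex normal and one outgoing edge."""
    n = normal / np.linalg.norm(normal)
    t = edge - np.dot(edge, n) * n          # project the edge into the tangent plane
    t /= np.linalg.norm(t)
    b = np.cross(n, t)                      # bitangent
    return np.stack([t, b, n], axis=1)      # columns: tangent, bitangent, normal

def world_delta_to_surface(delta, normal, edge):
    """Express a world-space delta in the vertex's local surface frame."""
    return local_frame(normal, edge).T @ delta

def surface_delta_to_world(local_delta, normal, edge):
    """Bring a surface-frame delta back to world space at evaluation time."""
    return local_frame(normal, edge) @ local_delta
```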

      Let's jump back to my scene-- so because we're using joints, surface mode makes a lot of sense. Another parameter that we've got here in the export is the ability to smooth the surface. Sometimes you'll have geometries for which the surface varies a lot-- there might be concave or sharp edges-- so neighboring vertices might have really different deltas, even if they don't really, just because their coordinate spaces are quite different.

      So to help with that, to ease the learning, we can smooth the surface a bit, which we're going to do in this case. When you're ready, you can export the data to a file. This can take a bit of time, especially if your complex deformation is slow to compute, since for each of the frames we'll have to compute the whole deformation system. So let's actually jump to a scene where this work is already done.

      Perfect. So the next step, when we have our data exported, is to train the ML model and produce the actual model that we'll use to approximate. Again, we can set a directory for the ML model. We can select which inputs to use. Here, I'm going to use the same output that I produced in the previous step. We give a name to our ML model. And we can decide to preload the data or not.

      Preloading is for loading all of the data into memory. If you have a really, really large set of poses, you might want to uncheck that, but it's faster to keep it checked. In this case, I know I have enough memory on my system, so it will work; I'll keep it checked. Here, you can change the batch size and epochs. Batch size is the number of samples that are going to be processed at the same time, and that's going to play on the speed of the training.

      The number of epochs is the number of iterations the training will do. So the more epochs we have, the more precise your ML model might be. But don't put that number too high because it will cause other problems. We'll come back to that. The validation ratio is used for debugging. So again, to test that you didn't do too many epochs, some poses are kept on the side; they won't be used for training, but kept only for validating that what the ML model has learned is accurate.

      We use that set of inputs to validate whether the output of the ML model matches the actual outputs of the original deformer. So when I'm happy with the parameters, I click on Train, which can be fast or slow depending on your graphics card. We're using PyTorch under the hood to do the training, so if you have a CUDA-capable or advanced GPU, that will speed up the training dramatically.
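      A compact training-loop sketch matching those options (batch size, epoch count, and a validation ratio of held-out poses) might look like this; the tensors stand in for the exported training data, and the network is a generic stand-in rather than Maya's actual architecture.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

NUM_INPUTS, NUM_OUTPUTS = 72, 15000          # control values, vertices * 3 -- placeholders
controls = torch.randn(5820, NUM_INPUTS)     # stand-in for the exported control values
deltas = torch.randn(5820, NUM_OUTPUTS)      # stand-in for the exported deltas

dataset = TensorDataset(controls, deltas)
val_size = int(0.1 * len(dataset))           # validation ratio of 10%
train_set, val_set = random_split(dataset, [len(dataset) - val_size, val_size])

model = nn.Sequential(nn.Linear(NUM_INPUTS, 256), nn.ReLU(), nn.Linear(256, NUM_OUTPUTS))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                                         # epoch count
    for batch_controls, batch_deltas in DataLoader(train_set, batch_size=64, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(batch_controls), batch_deltas)      # batch size drives speed
        loss.backward()
        optimizer.step()
    with torch.no_grad():                                        # held-out poses: validation
        val_loss = sum(loss_fn(model(c), d).item()
                       for c, d in DataLoader(val_set, batch_size=256))
```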

      So we did the training. Now I can use an existing model that we've got. Load it. OK, enable it. So I see that the model is here. It's loaded and enabled. If I disable it, you can see the base geometry. If I enable it, you can see it being applied. And if we have a look at the resulting speed and accuracy, you can see that we get, hopefully, good results.

      So I move this and apply it. Select the deformer. Oh, a bit too far. Yeah, you need to keep the transformations within the reach of the training set; otherwise, the model might break. If that happens, you can always generate some new poses, and export and train again. You can see that the target is quite similar to the ML Deformer. Let's also have a look at the speed. If we manipulate with the original deformer, we're at about 15 to 20-- oh, one second.

      And then if we go back to the ML Deformer, the same manipulation can go as high as 90 frames per second. So it's quite a bit faster. Let's go back to the setup. There's one mode we didn't talk about yet: the principal shapes mode. So we go back to exporting the data-- sorry, no, training the data. OK, very good. We open the advanced settings.

      So as we mentioned before, we're training on the deltas. But there's another way of actually getting the same result. It's not to train on the deltas; it's to train on shapes that, combined, will reproduce the different shapes. So again, just to be clear, here is the option: principal shapes. You have a shape limit, so the maximum number of shapes used to recombine the different geometries, and the accuracy that is targeted. Let's keep that in mind.

      Let's go back to the presentation. So how does that work? Since we have a training set, out of the training set we can compute the shapes that could be used, shapes from which, if we combine them, we can reproduce all of the other poses from the training set using different sets of weights. So it's like using a blend shape for which we know the weights that we need to produce any given shape with a certain degree of accuracy.

      So what can we do with that? Once we know that we can use a subset of shapes to generate all of the other ones, we can make the ML model learn not the deltas, but the weights, so the contribution of each of the principal shapes. The principal shapes are the ones that we combine to recreate the training set, and we make the model learn the weights that it needs to produce to generate the training set.

      So for all of the poses, we know that we have a target, a given shape. We can compute the weights that we require from the principal shapes to produce it. So instead of using the deltas, we can use the weights to learn how to recreate our training set. So the inputs, instead of being the controls, are going to be-- actually, they are still going to be the controls. But the outputs, instead of being the deltas, are going to be the weights.
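      A sketch of that principal-shapes idea, using PCA via SVD: compute a limited set of shapes from the matrix of training deltas, convert each training pose into weights on those shapes, and let the network predict weights instead of per-vertex deltas. This is the general technique, not necessarily Maya's exact math.

```python
import numpy as np

def compute_principal_shapes(training_deltas: np.ndarray, shape_limit: int):
    """training_deltas: (num_poses, num_vertices * 3) matrix of exported deltas."""
    mean_delta = training_deltas.mean(axis=0)
    centered = training_deltas - mean_delta
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    principal_shapes = vt[:shape_limit]              # (shape_limit, num_vertices * 3)
    weights = centered @ principal_shapes.T          # per-pose shape weights (the new targets)
    return mean_delta, principal_shapes, weights

def reconstruct_delta(weights, mean_delta, principal_shapes):
    """Recombine the principal shapes like a blend shape with known weights."""
    return mean_delta + weights @ principal_shapes
```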

      When would you like to use this? If you have really large 3D models, the ML model can be quite large because there's going to be one output per vertex. So if you have 1 million vertices, the size of your last layer is going to be 1 million. And then in the hidden layers, the internal components of the neural network, there's going to be an enormous amount of connections. With weights, the neural network can be multiple degrees of complexity smaller.

      So it's going to speed up the computation of the ML model quite a bit and minimize the memory footprint. But for small models, it's the other way around. So there's a balance. And the accuracy is a bit lessened also; with ten shapes, it's a bit less accurate. One thing I also mentioned about the training, which is super important, is that the training happens on your local machine.

      The resulting model, the ML model, whatever is being produced by the training, also lives on your local machine. So you can train once, produce one neural network file from that training, and then share that file with other users in your studio. And you can reuse the same neural network between different shots and scenes. The thing that is super important is that we will never see your data. Everything stays local. So really, you own your own data. You produce it. It's for you. We never have a look at it. It never leaves your machine unless you want to share it.

      OK. So how do you get the best out of your ML Deformer? Let's have a look at some general tips on how to optimize the results that you might get. The first thing to know, the most important thing, is that your neural network is only as good as your training data. Quality data makes quality models. So for that, what do you need? A good distribution of motion-- like, a good representation of all the plausible motions that your character will have.

      You need to cover a lot of input values for a single control and combinations of different controls, as I mentioned before. It's really important to cover the range of motion used by the animator. So for the limits you need to cover, have at least the minimum values and the maximum values and some values in between, because anything that the ML system hasn't seen, it will have a hard time reproducing if it hasn't seen anything comparable.

      One thing that can be done to speed up the generation of motion is to actually reuse some existing animation that you might have for your character. In this case, you'd be sure to have a good distribution of what the animators actually use, and you already have it, so you save some time in the setup. So that's something to pay attention to. But we also need to avoid some other problems.

      If you have some broken poses-- let's say you move your character to an extreme where the geometry starts to break-- that's going to contaminate your training set. You don't want the ML system to learn how to generate bad deformations, so avoid them. It's good, maybe, to filter the poses before feeding the system and training it. And also avoid contradictory poses, poses for which you have the same inputs but different outputs.
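      As a sketch of that kind of filtering (the thresholds and helpers are arbitrary placeholders, not a Maya feature), you could drop poses with implausibly large deltas and flag near-identical inputs that produce very different outputs before exporting:

```python
import numpy as np

def filter_broken_poses(controls, deltas, max_delta=50.0):
    """Keep poses whose largest per-vertex delta component stays under max_delta."""
    keep = np.abs(deltas).max(axis=1) < max_delta
    return controls[keep], deltas[keep]

def find_contradictions(controls, deltas, input_tol=1e-3, output_tol=1.0):
    """Return index pairs with near-identical inputs but diverging outputs."""
    pairs = []
    for i in range(len(controls)):
        for j in range(i + 1, len(controls)):
            if (np.linalg.norm(controls[i] - controls[j]) < input_tol and
                    np.linalg.norm(deltas[i] - deltas[j]) > output_tol):
                pairs.append((i, j))
    return pairs
```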

      For example, let's say you have random noise in your system, or the simulation that you're running is not deterministic, so each time you simulate the deformation, you're getting different results. That's probably going to behave poorly with the ML Deformer. Another issue you might have is crosstalk. It's possible that when you move one arm, some vertices from the other side might move even if, in the target, that wasn't happening. So let's see. I'm moving the right arm.

      So you might notice that on the left-- actually, I'm moving the left arm. And on the right arm, on the right bicep, you might notice that in the target model, the vertices are barely moving, if they are moving at all. But on the ML-deformed mesh, they are moving a bit more. So there's a way to get around that. By definition, the ML deformation is always an approximation, so it's normal that those things happen, but there are ways to minimize them.

      First of all, you can play with the learning parameters. When you train your model, you can lower the dropout ratio. The lower it is, the less crosstalk you have. But it also affects the generalization to poses that the model has never seen. So when you feed inputs that are different from the training set, the generated output might be of lesser quality. So there's a tradeoff here.

      Another solution would be to break the big geometry into smaller ones-- or, actually, to break the deformer up and have a different deformer per section of the body. In this case, what we can do, if we go back to the slide, is to have two halves of the body and a section in the center that would be shared by the two halves, and have one deformer work on the left side and a different deformer work on the right side, and then combine the resulting outputs into a single mesh.

      So if we have a look at the breakdown of this, you would have the controls that affect the right side connected to one control character, and the controls from the other side connected to a different control character, each of them connected to their own ML Deformer. And then we have a blend shape at the end to combine the results of those deformers. So if we have a look at the result, you can see that when I move the left arm, there's no movement on the right arm.
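      The recombination at the end is conceptually just a per-vertex weighted blend of the two deformers' outputs, something like this sketch (the weight mask, 1 on a vertex's own side and fading to 0 across the shared center strip, is an assumption about how you'd author it):

```python
import numpy as np

def combine_halves(left_deltas, right_deltas, left_weights):
    """left/right_deltas: (num_vertices, 3); left_weights: (num_vertices,) in [0, 1]."""
    w = left_weights[:, None]
    return w * left_deltas + (1.0 - w) * right_deltas   # blend-shape-like mix of the halves
```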

      OK. You might have some problems with the learning, or you might have some problems with the result. So how do you identify them? You can use some of the debugging tools that we're providing with the ML Deformer. If your complex deformation is still part of the scene, what you can do is toggle between the original deformation and the ML deformation by using the Target Morph.

      So if I enable it-- when it is enabled and the Target Morph envelope is set to 1, we see the actual original deformation. We're now evaluating that original deformation, so it makes the scene a bit slower unless it's set to 0 or disabled. And when I play with the slider, it's hard to see here, but we can see the difference between the ML deformation and the target. In this case, you can see that we have a fairly good approximation.

      So there's probably no actual problem with this one. And then, if we want to make sure it's being applied, we can blend between the ML Deformer and the source, the base mesh. So we see it undeformed and deformed. Watching this pose, it's not doing a lot of work, so let's pick the opposite one. So that's the base mesh, and that's the result of the work of the ML Deformer. And we can compare it to the original. Well, it's not too bad as an approximation.

      That's one of the tools you can use. And you'll see, you might want to debug the learning. Maybe there's some overfitting. What is overfitting? If you right-click on the model from the ML Deformer tab, you can view the training results. There are two curves you can see here. The blue one is telling you the training error. So as long as that curve goes down, that's a good thing.

      So the lower the number is there, the better the approximation. But you still need to be aware of whether the validation is also going down. Let me go back to what the validation is. The validation set is some of the poses, inputs and outputs, that we didn't use for training. When we do one iteration of training, we have an ML model. We test it. We see: is it going to give actually good results on inputs that are not part of the training set? Those inputs are the validation set.

      We rerun the ML Deformer on them-- not the ML Deformer, the ML model-- and we see how accurate it is. As long as this curve is going down, everything is good. It means that we're actually learning, and what we're learning is generic. But if you look at one of those curves and you see something flat, that's not going down, or worse, increasing, it means that you're in the presence of overfitting.

      And what will overfitting do? It's going to make your model produce awesome results for your training set poses. But when we start manipulating and creating poses that are outside of the training set, the results will be worse or really bad. So maybe if you have bad results, that's one way of identifying it. On the right, you can see a normal learning curve. And on the left, you can see that in this region, we have overfitting.

      So what do you do when you have overfitting? You look at the graph, and you see where the validation curve stops going down. In this case, it might happen around epoch 800 or 1,000. Just do another round of training using that epoch count. Other potential problems you might have: don't ask too much from the ML Deformer. An ML model can learn pretty much anything, but it needs enough poses to do that.
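      To make the overfitting fix above concrete, here's a small sketch of picking the epoch count from the curve programmatically; 'validation_losses' stands in for the per-epoch values behind the graph shown in the UI.

```python
import numpy as np

def best_epoch(validation_losses, patience=50):
    """Suggest an epoch count: where the validation loss stops improving."""
    losses = np.asarray(validation_losses)
    best = int(losses.argmin())
    # If the minimum is followed by many epochs with no improvement, everything
    # past it is likely overfitting, so retrain with roughly 'best' epochs.
    return best if best + patience < len(losses) else len(losses)
```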

      So if you ask too much from your ML Deformer, it's going to ask you for too many poses. Like in the example below: on the left, the ML model is trying to learn the skinCluster deformation in addition to the deltaMush, and on the right, the skinCluster is being applied and the ML model only learns how to approximate the deltaMush. And when we look at the colors, if you see everything in white, that means the prediction is perfect.

      Red is really bad. Green is a bit [VOCALIZES]. And yellow is really good. So if you want to avoid having to provide an infinite number of poses to your training system, maybe you'd better offload some of the easy work, the work that produces the large deltas and can be quickly evaluated, to an actual standard, classic deformer, like the skinCluster, for example.

      Another problem you might have is that maybe you're trying to approximate a deformer that is non-deterministic or random. The ML Deformer is not very good at learning those. You might have random noise that is deterministic, maybe, if you're also providing its source as an input, but it's still very difficult to learn. It might need too many samples, too many poses.

      And, yeah, actually, if you want to use, let's say, a random noise, maybe you should apply it after the deformation chain that is being replaced by the ML Deformer, or before, if that's possible. We have some examples. On the right, that's the noisy deformer. On the left, that's the result of the ML Deformer. And so, yes, it kind of tries, but it's not good.

      So we're done. Now I think you know enough about the ML Deformer to get started. If you want to try it out, just fire up the Content Browser in Maya and go to the Animation section and the ML Deformer folder. You'll see there are things already set up: all of the controls and the input meshes connected to the ML Deformer, as well as the training set, the set of motions that has been generated, and some already-trained models. So you can try it out.

      And before we close this presentation, I want to thank and acknowledge the work of the fantastic people who were necessary for creating this feature. So thanks a lot, guys. Thank you.