Key Learnings
- Learn how to use RADiCAL Live to capture real-time, multiplayer motion data straight from any device, anywhere in the world.
- Learn how to stream real-time, multiplayer motion data into remote Maya instances through the cloud.
- Learn about the generative AI features RADiCAL is developing.
- Learn how RADiCAL's real-time collaboration platform can help you create content faster and more enjoyably.
Speakers
- Gavan Gravesen. Gavan is RADiCAL's CEO and co-founder. Prior to RADiCAL, Gavan co-founded Slated.com, the world's leading financing and networking platform for independent film, and he has been an executive for, and investor in, a number of startups and content production businesses.
- Matteo Giuberti, PhD, CTO / Founder, RADiCAL. Matteo is RADiCAL's CTO. He joined the company in 2019 as a Deep Learning Scientist and the architect of the company's AI. Prior to RADiCAL, he spent his academic and professional career in the human motion capture/analysis space, including five years as a Senior Research Engineer at Xsens, where he had principal responsibility for the design and development of the company's flagship products.
GAVAN GRAVESEN: Hi there. My name is Gavan. I'm the CEO and co-founder of RADiCAL. I am coming to you today from London. We're pre-recording this, but I hope that as many of you as possible will be with us at Autodesk University 2023 in Vegas in November.
If I do my job right, I will only talk about RADiCAL itself for a few minutes, and we'll then jump right into a live demo to show off a real-time multiplayer integration with Autodesk Maya. And then for those of you who are in attendance physically in Vegas, there will be some opportunity to go into a question and answer session. But with that said, let me just go straight into it, a little bit about RADiCAL's vision, a roadmap, and who we are.
My co-founder Matteo Giuberti, here, is the former lead developer at Xsens, a conventional motion capture provider that I think many of you will know. We have a great team of AI specialists, but beyond that we also have developers who cover everything from the back end, especially cloud and WebSocket communications, all the way through to the front end, a React-based front end. There is not enough time to introduce everyone, but I wanted to make sure you knew that we're much bigger than the few of us who attend the session today. We are about 25 people at the moment.
Many of you, or those of you who know us, will know us for the work we do in AI-powered 3D motion capture. We have been around for a few years. We started this journey about five years ago or so in New York City. By now, we're all based in Europe. But fundamentally, it was always our work in 3D animation and 3D motion capture that we have been talking to the market about.
We will also be focused on that today. But I should mention in passing, though we won't spend much time on it today, that we are also developing other feature sets for our platform, most notably an end-to-end content creation engine that will allow any of our users not only to record but also to process, edit, re-composite, and publish 3D computer graphics content, in part powered, of course, by our AI in the animation and motion capture space. That said, let's go back to the motion capture and animation part.
I'm only going to spend a few seconds on the AI itself. We take great pride in the work that we do. Our AI is completely our own; we do not base our work on what you see out there in academia. What we do is emphasize the importance of the neural network understanding human motion holistically over time. Specifically, we train our neural networks on human motion over time as well.
And the effect of that, when we do our job right, is that the AI only needs to see sparse information, often in the form of a 2D matrix, to infer from it 3D skeletal joints over time, the fourth dimension in our output being time. That's why we often summarize our architecture, our strategy, in terms of AI as being fundamentally generative.
We have been generative for years. This is not a new thing for us. And that means we are less focused on the capture side and entirely focused on generating and reconstructing beautiful, plausible, organic-looking human motion.
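To put rough shapes on that idea, here is a minimal sketch of the data flow; the keypoint count, joint count, window length, and rotation format are illustrative assumptions, not our actual specification.

```python
import numpy as np

# Illustrative shapes only; the counts and formats below are assumptions.
FRAMES = 60        # a temporal window, e.g. roughly 2 seconds at 30 fps
KEYPOINTS_2D = 33  # sparse 2D detections per frame
JOINTS_3D = 24     # skeletal joints reconstructed over the same window

# Input: a sparse 2D matrix per frame, stacked over time.
sparse_input = np.zeros((FRAMES, KEYPOINTS_2D, 2), dtype=np.float32)

# Output: 3D joint rotations over time (quaternions in this sketch).
joint_rotations = np.zeros((FRAMES, JOINTS_3D, 4), dtype=np.float32)
```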
We are going to go into the demo in just a moment. You'll see that this is full-body motion capture. But at this point, I wanted to just give a preview of what's to come. We are about to release face animation, including in real-time and, yes, that will be included in the integrations that we support, in this particular case Autodesk Maya.
We will support hands, and we will likely also support fingers. But I'm less inclined to give a clear timeline around when we will release finger tracking. We will support multiple actors inside the same frame. At the moment, we do not. However, we do support a remote multiplayer setup, and in fact, that will be part of the demo today.
We will always be focused on making the inputs even easier for our end users, and we will support a mode that we call upper-body only, for people who want that framing, whether at home or within the professional setting they operate in. And lastly, we are working very hard on expanding the motion domain, which is to say the motion that our AI understands and can plausibly reconstruct.
Now there are also certain things that are not on here, certain general qualitative parameters that we are looking at improving in our AI. That means, for example, the relationship between the feet of the actor, and of our output, meaning the virtual characters, and the floor. And that is certainly part of improving the motion domain's fidelity and precision in general.
Very quickly, about how we build: I didn't actually mention this, but it is a fundamental piece of our architecture on the AI side that everything you do can be achieved with a single consumer-grade camera, as long as that camera is connected to the internet. That is to say, as long as you have a browser, a camera, and an internet connection, you can use RADiCAL, including RADiCAL Live in real time.
You can use this indoors or outdoors, at home or in a professional setting. The way to think about us is that we're completely web enabled, completely cloud enabled, and completely device agnostic. Another way of summarizing this approach is that we solve for massive scale, not because it's a buzzword, but because it's fundamental to our mission. We want to enable 5 billion people to get to use technology like ours. The reason we use that number of 5 billion is simply because that is, as we understand it, the total number of people on planet Earth who have access to the internet, and that is how we want to roll out our technology.
Now, our real-time multiplayer architecture that we're about to demo today works as follows. You're always going to be using the system via the browser. You can do that from any device, and you can be physically located remotely from the other participants. You will stream video up into the cloud, where the AI processes that video and then releases back into what we call a live room, a RADiCAL Live Room, only the 3D joint rotations that essentially make up the animation data.
That live room can then be visualized through various means, and we're going to show you two of them today for the end user. That can be repeated as many times as you want. Essentially, an unlimited number of actors anywhere in the world can dial into the same RADiCAL Live Room. All of their video streams will then be processed in real-time simultaneously, and the animation data will be made available into the same live room and can then again be visualized back to all of the end users and a virtually unlimited number of audience members, all in real-time.
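To make that pipeline concrete, here is a minimal sketch of what a Live Room consumer might look like, assuming a WebSocket endpoint, token-based access, and a JSON frame format; the URL, token handling, and field names are assumptions for illustration, not our documented API.

```python
# Minimal Live Room consumer sketch; endpoint, auth, and schema are assumed.
import asyncio
import json

import websockets  # pip install websockets


async def consume_live_room(url: str, token: str) -> None:
    # Each actor's video is processed in the cloud; only the resulting
    # joint rotations come back down to every subscriber of the room.
    async with websockets.connect(f"{url}?token={token}") as ws:
        async for message in ws:
            frame = json.loads(message)
            actor_id = frame.get("actor_id")    # hypothetical field name
            rotations = frame.get("rotations")  # per-joint rotation data
            apply_to_local_rig(actor_id, rotations)


def apply_to_local_rig(actor_id, rotations):
    # Placeholder: a real client would drive its local 3D character here
    # (WebGL scene, Maya rig, game engine skeleton, and so on).
    print(actor_id, len(rotations or []))


if __name__ == "__main__":
    asyncio.run(consume_live_room("wss://example-live-room", "demo-token"))
```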
Now the question then becomes, well, how do you visualize the animation data once it comes back from our cloud? The most obvious way to go, and we'll show you that today, is our own website; we use a WebGL visualization platform for that. And you can also do it through Autodesk Maya. Those are the exact two visualization platforms that we will show you today.
But besides those, we also support Unreal Engine, Unity, Blender, and NVIDIA Omniverse. From within any of these clients, you can then stream to a virtually unlimited number of audience members. And that's why we always say we support an infinitely scalable audience anywhere in the world, on any device, in real time.
But today, as I've said, we will focus entirely on RADiCAL and Autodesk Maya. So I think this is probably the time we're going to get into it. There you go. I am now going to switch my screen over to the RADiCAL interface.
As you can see here, I'm already logged in. And I'm now going to show you how I enter the RADiCAL Live Room. In that live room, you will see another participant, and I will explain more about the role of that participant in a moment. But for now, the important thing is for you to see this interface. I have set this up to get to this point so we don't have to spend much time on it here. But rest assured, the setup to get to this place through the website takes about 10 to 20 seconds.
So I'm fully connected. I'm now going to say I want to enter the room. This is going to blow up the screen, so it gives me the opportunity to find a place to be. At this point in time, the way our AI is set up, it requires me to be fully in frame. So I want to find a place where I am fully in frame.
All right. So now I can see myself here. That looks fine. I know roughly where to be. All I have to do now is hit countdown.
And the only calibration, if you want to call it that, that is required for our AI is to hit a quick T-pose at the bottom of that countdown. And voila, I've now entered the space. And it looks like I'm already accompanied by one of my colleagues. His name is Peter. He's waving at the moment. There you go. Thank you, Peter.
So I'm just going to quickly explain what we did here. I'm the character on the right. I hope that you see it the same way. I'm now going to wave with just one hand. That's me. There you go. We chose the same character.
In a moment, we're actually going to show you the animation data visualized on different characters. But for now, we're on the same one. You see my name there to indicate who I am.
I'm in London, at the moment. I'm looking at a conventional webcam that is streaming my video up into the cloud in the United States. That's where our cloud cluster sits at the moment. Peter is also in England, but that's a bit of a coincidence.
He's not anywhere close to me. He's actually somewhere outside London, and he's doing the same thing. He has a conventional webcam, and he's streaming that video straight up into the cloud. And then, from the cloud cluster, the animation data is made available via WebSocket communications, via WebSocket servers. Any device in the world that is permissioned to receive the animation data can then visualize that data.
The assets that you're looking at, the 3D assets, that means the characters and the textures and the lighting, all of that is local. That means it's inside the browser. The only thing that comes in from a remote source is, in fact, the animation data. So with that in mind, I'm just going to start walking around a little bit.
I am probably not going to show you everything we can do, or some of the things we still want to work on. But it's just to give you a sense of the quality of the motion. One other thing I should mention, quickly, is that I have chosen, and I think Peter has chosen the same mode-- there you go. Nice one, Peter.
I have chosen something called fidelity mode. There's another choice as well called speed mode. Fidelity mode introduces roughly a one second lag between my actions and the visualizations. Now the speed mode would reduce that lag to what we believe is currently approximately 100 milliseconds.
The total lag could be less. It could be more. It depends a bit on where you are in the world, because the data has to travel across oceans often to get to its final destination. So we are now actually in London. Obviously, it has to cross the Atlantic Ocean to get to the cloud and to our cloud cluster and then make it back into our local applications.
I think that's where we'll leave it, in terms of the WebGL visualization. That is always available. I should also say, it's always available to anyone for free. You can always go straight to our website and try this out for yourself in the browser. Also, this is the only time I'm going to mention our end-to-end content creation engine, called RADiCAL Canvas. That will also sit inside the browser, and you'll be able to work on that data there on the website using RADiCAL Canvas.
With that said, I'm now going to switch over to the Maya implementation, or rather, the Maya integration. And for that, I'm going to ask my colleague Pooya to take over the screen share. Let's start from the top. I'm going to try and talk you guys through what you're seeing.
At the moment, you only see one character, and that's deliberate, because I wanted to make sure that you see how at least one of our actors is actually streamed into Maya and then assigned to a character. At this point, Peter is already in there. We have assigned the generic RADiCAL character to him.
That's very easy to do by simply checking a box. That can be-- exactly. That's what's indicated there. If you check that box, you'll be able to bring in the RADiCAL character so you know what it looks like.
Now just above that, you'll see a list. And that is the list of actors that are currently connected to the live room. One actor is already connected. That is Peter. But we're now going to bring in myself and assign my animation data to that female spacesuit character right next to it.
And to do that, he's just going to highlight it, and it creates a rig. Now we want to bring that rig onto that character such that it animates it. What we've done already is we've essentially made sure that the female spacesuit character has a HumanIK rig.
And so we're going to bring up the HumanIK interface here. There you go. And with one click, we have assigned the animation data to that female character. Great.
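As a rough illustration of that step, the sketch below drives a character's joints from an incoming rig using plain constraints; the joint names and namespaces are hypothetical, and the actual plugin retargets through Maya's HumanIK characterization rather than raw constraints.

```python
# Simplified stand-in for the retargeting step; names below are hypothetical.
import maya.cmds as cmds

JOINT_MAP = {
    # incoming RADiCAL rig joint  -> target character joint (assumed names)
    "radical_rig:hips":           "spacesuit:hips",
    "radical_rig:spine":          "spacesuit:spine",
    "radical_rig:left_shoulder":  "spacesuit:left_shoulder",
    "radical_rig:right_shoulder": "spacesuit:right_shoulder",
}

def bind_character(joint_map=JOINT_MAP):
    """Constrain each target joint to its source so live rotations flow through."""
    for src, dst in joint_map.items():
        if cmds.objExists(src) and cmds.objExists(dst):
            cmds.orientConstraint(src, dst, maintainOffset=True)
    # The hips also carry world translation, so the character follows root motion.
    if cmds.objExists("radical_rig:hips") and cmds.objExists("spacesuit:hips"):
        cmds.pointConstraint("radical_rig:hips", "spacesuit:hips", maintainOffset=True)

bind_character()
```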
So I'm now in here as well. And as you can see, it produces the same quality of data, of course. Now Maya, of course, works in a slightly different way. You in fact have far more opportunities to make sure that the characters are modeled the way you would want them to be modeled, and so on, and so forth.
I think for now the only other thing that I would highlight is that with our plugin you also have the ability to record the data. And that looks rather unique when you do it. In the bottom half of the plugin interface, you'll see a list of the rigs that are currently inside the Maya scene. We're going to choose one.
I think he's going to choose me. I'm not sure. Let's just choose one of them. And then, you just hit Record. There you go. Just hit Record.
So the first thing that happens is that the-- OK. So it's actually Peter that's being recorded. That's great. OK.
The first thing that happens when you hit record in Maya is that the characters disappear. That's OK. That's expected. That is the way it's supposed to be.
But you're just going to record that. That looks great. Great, thank you so much, Peter. OK.
Now let's hit Stop. And then we can play that data back. And for that, we're probably going to choose another character. I'm not sure what the right way is.
Let's see whether we have another character that we can apply to it. Or just play back the rig. There you go. So now we're playing back the data on the rig for now.
And you can take it. You can scrub it. There you go, and now it's being played back. Thank you so much, Pooya. That was what I was looking for. Great.
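For those who prefer to script it, here is a rough approximation of that record-and-scrub step using standard Maya commands, assuming the live data is already driving a rig in the scene; the "radical_rig" names are hypothetical, and the plugin itself handles recording for you.

```python
# Approximation of recording a live take to keyframes and scrubbing it back.
import maya.cmds as cmds

def bake_live_take(root="radical_rig:hips", start=1, end=600):
    """Bake the streamed animation into keyframes on the rig hierarchy."""
    joints = cmds.listRelatives(root, allDescendents=True, type="joint") or []
    cmds.bakeResults(joints + [root],
                     time=(start, end),
                     simulation=True,  # sample the scene as it plays through
                     sampleBy=1)

def scrub(frame):
    """Jump the timeline to a frame to review the recorded take."""
    cmds.currentTime(frame, edit=True)

bake_live_take()
scrub(120)
```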
And voila. I think this is where we will pause the demo. There's probably much more to talk about. As I was mentioning somewhere halfway through the presentation, we are going to introduce many more features, including into the Maya integration, such as face animation, which I know a lot of us and a lot of our users are excited about.
So thank you so much for that. I'm going to pause here. So there you go. I hope that was entertaining, useful, and informative.
There's much more that we would love to show you. There's much more that we have today. There's much more that's coming. In fact, for the rest of 2023, and then well into 2024, all the way through the next six to nine months, we intend to release new features, new AI, every two to four weeks.
And so I hope you check in with us. Our website is radicalmotion.com. And you can chat with us. You can talk to us in real time whenever you want through our RADiCAL community server on Discord. You can see that Discord server advertised on our website.
Please join us anytime you have the opportunity. Thank you so much for joining us today.