Description
Key learnings
- Learn how physically accurate rendering can help you make better decisions on digital prototypes
- Understand why this level of accuracy is needed in the automotive and other industries
- Discover what the different levels of raytracing mean
- Learn how VRED was able to achieve physically accurate rendering in real time
Speaker
- Lukas Fäth joined Autodesk in 2012 with the acquisition of PI-VR. After graduating in digital media, Lukas drove the visual and conceptual development of the VRED high-end virtual prototyping software. He was responsible for quality assurance, support, and consulting, and is a professional VRED software trainer for the automotive industry and computer-generated imagery agencies, with a strong artistic knowledge base. He now takes care of product management for Automotive Visualization and XR.
LUKAS FAETH: So I guess we should start then. First of all, thanks, everybody, for showing up. My name is Lukas. I'm going to talk about physically accurate real-time rendering-- ray tracing, obviously, in this case. And I've split the presentation into two. So on the one hand, I wanted to talk about why visualization is so important in the automotive industry.
That's where Autodesk VRED, or [? VRET, ?] is applied mostly. By the way, for everybody who doesn't know the tool-- we're coming from Germany, so the original name was [? VRET. ?] We were acquired by a US company, so they call it VRED. It's kind of a mixture, at least for me. I'm mixing that up from time to time, but I'm referring to the same tool, so don't get confused about it.
I've split the presentation into two. One is a small introduction to Autodesk VRED, then visualization in automotive and why it is so special, and in the third part, I'll talk about the research we are conducting at the moment. We're not done with that yet. For those who've been at the Unreal booth in the exhibition hall, you could see one part of the result already. If not, I would encourage you to go there. Yeah. And I brought some videos, and, as I said, the current state of our rendering research.
So maybe let's start with that. Who in the room knows what Autodesk VRED is, or used it, or saw renderings made with it? Awesome. I think the next question is almost obsolete, then: who's from the automotive industry? I think it must be quite the same. Yeah. Awesome.
OK, so then for those guys, I think I will explain your daily job to you, which might not be too interesting in the first part. But in the second part, we'll come to that. For everybody else, yeah, I'll go into detail about why it is so important to have physically accurate rendering in automotive.
Yeah, so the first overview-- this is the high-level statement of what Autodesk VRED is. It's a tool to transform extensive amounts of design and engineering data, so CAD data, into compelling, high-fidelity assets for real time and offline, viewable in different environments and on different devices. So I've listed some, but I also have a small example.
So you could either use a product called VRED Server to run streaming-based rendering on a mobile device, or you could use it on a desktop or any usual PC. Our customers often use it on big power walls connected to huge clusters, so for supercomputing use cases, especially in ray tracing-- and that's what I'm going to talk about in the presentation a bit. And then we are also doing virtual reality. So we have two rendering systems at the moment: one is OpenGL, the other one is ray tracing.
And yeah, you could do collaboration sessions, so connect people remotely to each other in virtual reality and view and interact with the same data. It's not limited to cars, but 90% of our customer base is from the automotive industry. So first, a quick video on what you could do with VRED-- so some different application areas.
Here we go. On a desktop, as I just said, this is the regular view. Then this is a tracked 3D display. You could obviously do marketing images, print, printouts, and render offline. We have an animation system in it. Yeah. Mixed reality, augmented reality applications, lightweight virtual reality, so offline-rendered spheres. Then this is real virtual reality, in the HECY for example. Collaboration in virtual reality-- you can use it on a tablet with a cluster, or with an unlimited amount of computation behind it, and even on a mobile device.
So is it just any rendering tool? I guess the answer to that is no. And is it just a rendering engine, like V-Ray, for example? The answer is no as well. So it's a combination. VRED stands for Virtual Reality Editor-- it's an abbreviation. And our focus is editing and rendering highly complex CAD data, so both of it, right? You've got to prepare it first. You've got to assign your materials, whatever, animations, configurations, and then you want to render and view it.
And we have basically two key strengths that define VRED. One is that we are trying to be-- or are, in many cases for our customers-- the most efficient enterprise visualization data pipeline. Whoever saw the presentation from Porsche this morning, I think it was a very good example. You can use VRED throughout the whole process. That's what we mean by that. So what you would do is create a master model, as we call it in VRED.
So you'd get all the information from design to engineering-- I don't know-- the light design and everything into one file, all the configurations. And then you would refer to that every time you do a review, every time you want to evaluate something in context. So it's the place where all the departments gather there, create the data, and visualize it.
So we also refer to it as the single source of truth because that's the system. You don't have to migrate data. You'll always review your data in the same environment with the same materials, the same lighting conditions with the same reliable rendering. Yeah, so data handling and preparation is the key strength.
Just one example: I think the biggest data set I witnessed was a billion polygons, which is quite a lot-- in ray tracing, obviously, so it wouldn't fit on a graphics card. That's why OpenGL would not be able to handle it. But actually, we don't know our real limits. We stopped testing after that, or at least I didn't try more than that, and it's quite a while ago already.
And the second strength is-- yeah, all good. The second strength is obviously what we are here for today: industry-leading visual quality, performance, and scalability in terms of rendering. So as I already said, we have OpenGL. And I'm tempted to say OpenGL for real time and ray tracing for offline rendering, but it's not the case anymore. This is what my talk is about today.
So we have OpenGL and a very powerful, if not the most powerful, ray tracing engine on the market, and we are industry leaders in visual quality and fidelity, both in real-time and offline rendering, especially because we have a very efficient way to scale our renderings. A lot of our customers have up to-- I don't know-- 400 CPU nodes, which they can connect to each other and then really run a full GI real-time ray tracing presentation to review a car in physically accurate rendering.
Yeah, so especially the physical accuracy compared to the rather low time we need to compute it-- and I will show some examples of that later on-- is a key strength of VRED. And both combined define the program. So that's why I said it's not just a rendering engine. It's also an editor, and it's a very powerful one.
So our goal is to serve the diverse and complex needs of the automotive visualization industry out of the box. And I think there are several parts to that statement. I will show you in this presentation why the needs are very diverse and complex. And I think the out-of-the-box part is also something very important for us, because our customers don't have much time-- they have huge time pressure to create and iterate. That's why we want to provide them the tools out of the box, so they don't have to cope with-- I don't know-- customization if they don't want to. VRED is also customizable, so if you want to, you could do that as well.
Which areas? We saw something like that in the presentation this morning, a bit nicer-- I have just a PowerPoint presentation, sorry for that. So in the middle of that slide, you see the design process. This is where our partner software is used, like SketchBook or Photoshop for sketching, then Alias for concept modeling, design modeling, and Class-A surfacing. So VRED starts to support this workflow, I'd say, either directly with concept modeling or directly after, when it's getting into real design data.
So when you explore shapes and volumes, I think you don't really need it, but afterwards you do. So you could use it for design data, which is very lightweight, for technical surfacing. Automotive companies are very picky when it comes to details, and especially continuities in surfaces and shapes, things like that. I never saw something like that in another industry. The same goes for color and trim, where physically accurate rendering makes much sense, or is needed, because you want to see how light reacts with materials-- for example, with different fabrics or something like that.
And also interior lighting-- so ambient lighting in new cars is a very big topic. Then something where physically accurate light rendering is necessary, which is lighting design. If you want to design a headlight, for example, you need accurate rendering to make sure you are designing it in the right way, because most of it is reflections and emitted light and how they interplay with the different shapes and materials. So with OpenGL rendering, you wouldn't come very far, because you don't have object reflections, which are necessary to really design it.
Then perceived quality, where it comes down to visibility checks. Our customers are trying to see how the car will be perceived by their customers, so it's different quality evaluations and validations. And one important part I missed: the immersive validation, which is mostly done in HMDs nowadays, but also in caves. I don't know who knows what a cave is-- a multi-wall surrounding projection setup. It was something that was heavily used in the past. I think HMDs are a bit easier to use and not as expensive, so I think there is a transition going on at the moment.
Yeah. And all of that data, as I said at the beginning, will be, or could be gathered. And you would have something that we call virtual garage, where you just place all of your VRED files. And you could refer to them at any point in time. Any point in the process, if you do an update or something, just load in, or load the file, put the new object in, and review it in context with the rest of the car.
And the cool thing is-- and that's also something we heard this morning-- you could use those files and hand them out to agencies, for example, to let them create your CGI, your marketing images. Usually, if you use something else for the data process, you have an approval process, because you convert the file from whatever format you have to something the agency uses.
But because VRED is so powerful in rendering, you can just hand over the file, and there is no additional approval process besides, obviously, the final result [INAUDIBLE]. The OEM doesn't need to go through materials and check whether they look right and things like that, because it's, again, the single source of truth. It's the same rendering that you already used for your design process, and that's why you don't need to additionally approve it.
And yeah, our aim with that is allowing our customers to make informed decisions, communicate visually, and collaborate. A quick video about the whole process-- so this is not just VRED; this is the automotive portfolio showcase. This is Shotgun, so that's the start of the process, where people are sketching cars. Then Project Sugarhill, where you could sketch in 3D-- in virtual reality, in this case. Then it's going into Alias for the surfacing and concept modeling, or creating the model, then the real surfaces.
And you could even go into virtual reality in Alias nowadays. Dynamo for computational design, or generative design in this case, and then something very interesting, which I will refer to later on: the clay, so digital to physical. That's one of the reasons why we need physically accurate rendering, right? OEMs in the past spent a lot of time and money on those clay models, and they still do. And then into VRED with the materials, virtual reality, interaction with the car, experiencing the car, configuring it.
Color and trim, interior materials, animations, interaction, configuration. So you could easily build a configurator-- a small one, a design configurator also, [INAUDIBLE]. We had a great talk about HMI design and HMI interaction. The Porsche HMI looked way better than [INAUDIBLE]. And even for marketing or safety visualization, as in this case, you could even combine it. You'll see that in a second with Maya.
So we don't have any particle system, but still, you could combine both renderings and do some dust and smoke stuff. OK, then I wanted to give you some examples to show you how good the quality is or what you could achieve with our rendering engine. NPIXO, a company from Germany, was so nice to provide me with some renderings that I could use for that to give you some examples. So it's not just automotive. It's also any transportation, and could be applied to any, let's say, engineering visualization.
So yeah. It might not be suited to render a feature film, but everything else, I guess. We have BMW exterior, for example. Interior, which is a very important topic, so the interplay between light and materials. I think in that shot, if you take a look at the details, they even added some scratches below the shifter, another interior shot, a [? poor ?] shot, some other vehicles, some old vehicles, so you can render whatever you like, right?
Yeah. So what is so special about automotive visualization? The process-- and I think this is a very, very reduced version of it, right? We start with the design, then there is engineering and manufacturing, and then the car finally reaches the customer. The flexibility to change something decreases rapidly throughout this process, while the cost to change something increases a lot.
So let's say we are here in manufacturing, and we figure out, oh, something in engineering didn't work, so we've got to go back to that step. It will cost a lot of money-- or even worse, if the car's already at the customer, you need to get it back to fix something or improve something. That's why it's very important that you take the decisions, or do the validation, as soon as possible, so you spot flaws as soon as possible in the process, and best case at the very beginning-- although I would say between engineering and design there is a lot of negotiation, because designers have ideas that engineers might not be able to realize.
Yeah, and the great thing, or the aspect that I want to point out, is that we've managed to establish ourselves as a trustworthy source to verify those models. There are a lot of cars on the streets already that were built, or visualized and validated, with VRED. And nothing went wrong so far. So although it's digital and not a physical prototype, we have very reliable results, and our customers have already proved that it works to work with our software.
We heard this morning that there are some customers that are not even using clay models anymore, so they completely rely on digital. One important thing-- maybe for the next slide. For everybody who doesn't know the automotive industry, this is a clay model. That's what you would do in the past to show a decision maker what you have built, what you have designed, and they would take a decision based on that.
It doesn't stay like that. They would even apply foil that looks like car paint on top of it, so the car would look realistic. And then you bring in the C-level of the company, for example, and they take the decision. Yeah, it looks awesome. It looks like a lot of work, right?
So there are a lot of-- I really like those clay models whenever I'm at a customer and I'm able to see them. It's very impressive. On the other hand, it's very costly. So you have to spend a lot of time to create them. A lot of people are involved. And there's one downside to it besides that they are looking awesome and very impressive. They are out of date as soon as they are created.
Many times, because it takes time to really physically create them-- you need some time to do the final touches-- the designers have already changed the shape in that time. So it's not even representing the current state anymore. They also have one big advantage: they are real. Somebody can stand in front of them, just take a look, and say, OK, I like that, I like the size. You use your regular eyes to judge, so it's a very natural way of judging. And they are reflecting the light in the correct way. There's no way a bug could be messing up what you can see there, as long as you cleaned your glasses or whatever you're wearing.
So if you want to replace that with a digital solution, it needs to be accurate. You need to be able to trust it. And it needs to be, in the best case, cheaper, because that's always the driver behind things like that: you want to save money, you want to be faster, you want to have an advantage.
To sum that up, digital prototypes, so the digital version of it, they don't really have to look fancy. So it's not about creating-- I don't know-- flares on top of it, and explosions behind, and fire and stuff to make them look cool. They need to be correct.
But at the same time, if the designer is doing a good job-- and most of the designers, as you can see here, are doing an awesome job-- the correct result will look awesome afterwards. So yeah, rendering something correctly that is designed nicely will produce an awesome result.
There is another need which makes the automotive industry special, and this is the need to design cars that are sold in five-plus years. I just took that as a rough value; I think it varies from company to company and project to project. So what they basically have to do is predict the consumer behavior and the taste of the customer in five-plus years, which is-- I don't know-- somewhat like future telling, right? It sounds awesome, but it's super, super hard to do.
Yeah, so I think nobody's able to predict the future precisely. But what you can do is prepare yourself as well as possible and give your team the tools to prepare themselves as well as possible, to be able to get as close as possible to what they think will be the case in five years.
So one thing that's very important there, because you have all these creative minds in the design departments, is removing boundaries, or giving them access to new technologies at a certain point in time, so they can plan with that, right? They shouldn't be limited in their ideas. Because, I don't know, maybe five years from now we'll have real holograms in the cars, right? It could be the case.
And yeah, you shouldn't block anybody from thinking about it, and then digitally exploring the experience, or preparing themselves for the case where the technology is working to support them. And as I said, one thing is using cutting-edge technology. This is one of the main drivers, in my opinion, of why the automotive industry is so special, and why the things that I'm going to show you in a second have been regularly used for ages.
So just quick examples-- virtual reality training, and virtual reality and AR collaboration, augmented reality collaboration. I think this is something that popped up in the professional industry-- I don't know, it was there for quite a while, but there was this revival three, four years ago. And for VRED, we just released virtual reality collaboration two months ago.
So for us, this is very recent. I talked to Elizabeth Barron from Ford before my talk, not two days ago, to ask her when she did that for the first time. And she said the first productive augmented and virtual reality collaboration between two continents was in 2011 for her, connecting two Ford design centers. So this was a point where VRED-- I don't know-- I wasn't even working for the company at this point in time.
And we hadn't connected any HMD to our tool yet. They did-- yeah, [? and had that working. ?] So quite awesome. They are embracing technology as soon as possible and spending a lot of money to make it work productively. Something else is complex CAD prep. I just put it on that slide because I think this is something that's worth mentioning.
At the same time, it's popping up at the moment because of all the HMD and lightweight visualization things that come up with game engines and things like that. So this is nothing new, right? Automotive customers have been doing that for ages, way more complex than the stuff that's happening in the entertainment industry right now, for example.
So a 150% model is a model which contains all possible configurations, so millions of configurations, whether it's geometry configurations or variations, or material variations, in different environments. Yeah, so the CAD prep-- as you can see in that model, you have the whole engine and every part of the car in the visualization model-- is something that [INAUDIBLE] for ages with that problem, or already working around it, or with it.
And something I'm going to talk about today-- and this is the bridge to the second part of the presentation-- real-time ray tracing. What you can see up there is a real-time full GI scene that is rendered on a remote cluster, and the picture is streamed in real time. We did that two or three years ago, in 2016, at the AIF, the Automotive Innovation Forum, in Munich.
So yeah, this is full GI already. I think it's quite small. I have it later on a bit bigger so I can tell you the difference between what we did back then, what we do today. So why is it so-- I don't know-- mission critical, or why is it so important for automotive customers, design, and designing stuff correctly? Who could tell immediately which cars are behind those lights?
Here we go. So if they show up in your rear-view mirror, you know if it's one of those three, maybe you pull to the right. But yeah, it's Audi, Mercedes, and BMW. Why I'm saying that is because of the design language. It's something that's very important to those guys. And especially lights-- designing lights is something that's very complicated, because, as I said, it's the interplay between light, reflection, different kinds of reflection, different kinds of materials. And you can immediately tell the brand, the kind of car.
So it's super important for them and a differentiator from other brands within the industry. Some other use cases-- and I won't go into detail because I want to talk about the research we did-- are design review, which is basically just a review, although I think all the automotive guys would beat me up for saying that. It's a very important thing to do. When you present your design to the decision makers, nothing should go wrong. You want to present it in a way that it looks awesome and presents the thinking you had behind the idea, like a front bumper or a general car design.
You need short preparation time. Sometimes-- we had it this morning-- the changes come in during the presentation or shortly before. So you can just quickly import, throw some material on it, and then the demo must run. That's one thing where you need physically accurate rendering. The other one is reflection studies-- again, light and reflections. What you want to make sure here is that the driver is not distracted by the light design at night, for example, as you can see in that picture.
So the light shouldn't reflect in the windshield, for example, and distract somebody. Also, the chrome elements-- if sunlight hits chrome elements, they reflect back onto the windshield, and they could distract you when you're driving during the day. So you need accurate ray tracing and rendering in this case. I talked a lot about light design, so I will leave that. Gap analysis-- something very special to the-- or gap analysis might be the wrong word, but evaluating gaps and designing the gaps is a quality standard, especially in the German automotive industry.
And why I'm naming it here is: if you look at the gap in OpenGL, you will not see it. You need indirect lighting and reflection of light to really be able to judge the final result. In OpenGL that's not working, or in DirectX or whatever-- that wouldn't work. So you need ray tracing for that. And for those who are asking what I'm talking about, this is a design car, which is awesome. It's an Autodesk design concept car.
And on the right side, you have a production car from Porsche. You can tell that the gap on the right side is getting smaller and bigger, because there was no process-- we didn't care about the gaps in this case. So I wanted to take it as the negative example and the other one as the positive one. It's about how you perceive the gaps, because it gives an impression of the quality of the car. Especially for premium cars, this is important.
How accurate are we with VRED rendering? The left side is a real car light beam-- so this is if you put your car in front of a wall and turn on the lights. In the middle, there is a simulation tool doing the same thing, and for that one picture, it's a computation time of a matter of days.
And on the right side, you see VRED. It's a matter of seconds or minutes, depending on how smooth you want the picture to be. We messed up the scale a bit, so the colors don't match, but you can see that the shape matches. And if you interpret the two results and compare them to VRED, we are very close. I'm not saying that we have the same result as the other two, but we are close enough, and very, very fast at computing it. And I think that's the very big advantage of VRED rendering.
Yeah. Another thing where it's important is ambient light, so interior light design and materials, and perceived quality, which is basically a step where you take a look at the car under certain lighting conditions. So, for example, the front grille is designed in a certain way so that you don't see the cooling system behind it.
And you could take a look at the car under certain sun positions and see, OK, how well did we do that? Do we need a blocking element that contributes to not showing something we don't want to show? Another area for that is the foot area in the car, where some technical elements might be. So it's more or less validating on the digital prototype how you want the car to look in reality later. So physically accurate rendering is needed.
Again, how accurate are we? Which one is the rendering? Which one is the photograph? On the right side, there is a camera in the reflection, so this is the photograph. And the cool thing about that is, if you take a close look-- if I left you five minutes with that picture-- you would spot differences.
But we are close enough, again, to take the decisions. So, for example, you can see somebody sat in that car-- right there, some kind of carpet, and somebody had his feet on it, so it has different reflections. This one is perfect; this is the rendering. So there are differences. But you can judge the aspects that you want to judge based on the digital prototypes.
So we are close enough to take decisions. I think that's the result. But yeah, the camera is the giveaway here. So let's dig down to the core of my presentation. This was just to let everybody who's not familiar with automotive know why our customers are so picky about rendering.
The first step is: what is accurate rendering, or what is the basis for that? I won't go into detail here, because I want to show you our results and I might run out of time. So I will just roughly explain what ray tracing is. I hope a lot of you already know roughly what it is compared to rasterization.
In reality, light is emitted from a light source. It bounces through a room, and finally, when you look at something, it hits your eyes. This is the way you see a picture. In ray tracing, we go the other way around: we start at the camera and shoot rays through the image.
Either it's really an image that you're rendering for offline usage, or it's just the samples, the pixels on your display. And the ray is bouncing through the scene, gathering information. Then an average, or accumulation, of that information is fed back to the image and shown as that one pixel. So that's, let's say, the most basic way to explain ray tracing.
There's something special about that, and I want to make sure that everybody at least slightly understands what I'm talking about here, because it's important for our study later. We're not doing that once, right? If we did that once, then-- if it was a bad coincidence, I'd say-- you would have a very black pixel next to a very bright pixel. So what you're doing is something called sampling: you're shooting more than one ray per pixel.
Based on randomization, it calculates more than one path for that ray, and the average of that finally defines the color of that pixel.
What that does is smooth out the pixel. So pixels that are next to each other look consistent, the picture is less grainy, and you get more precise information. This is called sampling, and it's very important for the rest of the presentation.
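As a rough illustration of that sampling idea (a minimal sketch only, not VRED's implementation; the `trace_path` stand-in and the camera callback below are hypothetical), averaging several randomized rays per pixel is what turns one noisy estimate into a usable color:

```python
import random

def trace_path(scene, ray):
    # Stand-in for a real path tracer: in practice this would follow the ray as it
    # bounces through the scene and return the light gathered along the way (RGB).
    return scene.get("background", (0.18, 0.18, 0.18))

def render_pixel(scene, generate_ray, x, y, samples_per_pixel=16):
    """Monte Carlo sampling: average several randomized path samples for one pixel."""
    accumulated = [0.0, 0.0, 0.0]
    for _ in range(samples_per_pixel):
        # Jitter the ray inside the pixel so every sample takes a slightly different path.
        ray = generate_ray(x + random.random(), y + random.random())
        r, g, b = trace_path(scene, ray)
        accumulated[0] += r
        accumulated[1] += g
        accumulated[2] += b
    # The average of all samples is the final pixel color; more samples per pixel
    # means less visible grain, which is the "sampling" described above.
    return tuple(channel / samples_per_pixel for channel in accumulated)

# Example: 16 samples for pixel (10, 20) with a trivial placeholder camera callback.
color = render_pixel({"background": (0.18, 0.18, 0.18)},
                     lambda px, py: ("ray", px, py), 10, 20)
```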
And the picture finally, over time, converges to a smooth and, in the best case, physically correct rendering. Also, there are different levels of ray tracing. Path tracing, which is what I will be talking about, results in something that's called full global illumination. So you'll have a full GI rendering-- you might have heard that abbreviation already.
There are also rendering modes with pre-computation. So sometimes you just ray trace reflections, for example-- you could do that in VRED as well, and there are application areas for that. It's faster then, but it's less accurate.
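To make those "levels of ray tracing" a bit more concrete, here is a small, hypothetical sketch (not VRED's actual mode system): the difference between a reflections-only mode and full path tracing is essentially which secondary rays get spawned at a surface hit.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RenderMode(Enum):
    FULL_GI = auto()           # full path tracing: every light bounce is simulated
    REFLECTIONS_ONLY = auto()  # hybrid: only mirror-like reflections are ray traced;
                               # diffuse lighting would come from precomputed data

@dataclass
class Hit:
    is_specular: bool  # does the surface have a mirror-like component?
    is_diffuse: bool   # does it scatter light diffusely?

def secondary_rays(mode, hit):
    """Which secondary rays get spawned at a surface hit, depending on the mode."""
    rays = []
    if hit.is_specular:
        rays.append("reflection ray")            # traced in both modes
    if mode is RenderMode.FULL_GI and hit.is_diffuse:
        rays.append("indirect diffuse bounce")   # the extra work that makes it full GI
    return rays

hit = Hit(is_specular=True, is_diffuse=True)
print(secondary_rays(RenderMode.FULL_GI, hit))          # ['reflection ray', 'indirect diffuse bounce']
print(secondary_rays(RenderMode.REFLECTIONS_ONLY, hit)) # ['reflection ray']
```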
So why would you use ray tracing? For three main reasons, and I think ten others as well that I didn't list here. But you have no, or nearly no, data preparation. What I mean with that is, if you want to do something pre-computed for OpenGL, for example, you would have to do all the-- it already says it-- pre-computation. So shadows need to be pre-computed, like ambient occlusion, for example, or indirects, or whatever you want to do there. There is a lot of data preparation involved. You don't have that for ray tracing.
You also need to prepare it, but preparation is bringing it in, tessellating, applying materials, whatever you like, animations or whatever, and then you just hit render, and shadows and all that will be computed for you automatically. You have almost no geometry limitations. So for our CPU ray tracer, as I said, it was 8 billion polygons. It's just limited by the amount of memory you have.
In the CPU case, this is not critical, because it's the main memory, which is quite cheap to extend. On the graphics card, this is limited to how much memory NVIDIA gives us, right? And the third and most important thing: it's physically accurate if you go for full path tracing. So what was the research? What did we research?
And one thing up front on that slide: we call it research for a certain reason, because we are still taking a look at the technology and evaluating it. At the moment, there is officially no commitment from Autodesk to productize anything of what I'm showing you in the next slides. That being said, you will see for yourself that it looks very impressive. We researched two technologies in cooperation with NVIDIA.
One was GPU ray tracing on the RTX cards. For those who don't know VRED, we are coming from the CPU. It took NVIDIA ages to get good enough and to remove the limitations of the graphics hardware to convince us to research that topic, because one factor was memory, for example-- a data set from automotive customers is easily up to 100 gigabytes, and you wouldn't fit that on the graphics card. That's a big data set, but still.
So the graphics cards we had before wouldn't work there, and there were other limitations. But NVIDIA did a very good job in iterating and improving their hardware. So finally, with the RTX cards, we took the decision to take a serious look and evaluate the technology. So we're coming from the CPU and moving to the GPU, or looking at the GPU.
And then the second thing, which is even more impressive for me, or more mind-blowing, I think, is the NVIDIA AI-accelerated denoising technology. The cool thing is we can use that both with our GPU ray tracer and with our CPU ray tracer, and I will show you some examples of why this is so impressive. Yeah, it uses machine learning. Something I want to point out as well is that I'm the product manager for that product; I didn't develop the ray tracing engine.
There will most likely be tech talks about that at GTC or something like that in the future. I will also not go into detail on what the AI denoiser does, because I frankly am not technically skilled enough to explain it to you. So I will show you how we applied it to our rendering engine and show you the results of that. As I said, this is an ongoing research project we are doing in collaboration with NVIDIA. So there will be other talks. Maybe we can get one of our developers to talk about the technical background of the project.
So we used the AI denoising technology from NVIDIA. I think it was released in OptiX 5.0. I saw it the first time for Iray, and they did a showcase for offline rendering. And I thought, OK, that's super cool. And at the same time, I thought, OK, why wouldn't you do that for real time? Because for offline, you have the time, right? So whether it takes one minute or two minutes doesn't matter too much.
But if you have the chance to get close to 20, 24 frames a second, so you have a fluent noise-free image that would be awesome. So that was the first spark of the idea to take a look at that. And I talked a lot about technology stuff, ray tracing and things. I'll just show it to you in images.
So this is a ray-traced image with 16 samples per pixel. That's what we discussed: we are sampling the one pixel 16 times and averaging. So the noise is not as bad as for one sample per pixel. Still, you can see noise. That's what I'm talking about when I say noise, right-- all the grainy stuff.
You cannot judge based on that, because it will refine over time; it will change the result. Especially decision makers, C-level people, have a very hard time understanding why it is there. It was a big pain point. So with AI denoising, it looks like that. Can you see the difference?
So we are getting rid of the noise. And as I said, I'm not going to lie here: if you take a close look at the shadow below the car, if I'm going from that to that, you can see that there are still some-- I don't know what to call it in English-- blotchy, is that the right word? Some artifacts.
AUDIENCE: [INAUDIBLE] .
LUKAS FAETH: With noise?
AUDIENCE: Yeah, [INAUDIBLE].
LUKAS FAETH: And then there is 512 samples, which is if you offline render it, right? So it takes roughly-- I don't know-- minutes to do that. But it is 512 samples, so you wouldn't be able to do that in real time. Why did we take a look at AI denoising? Because we wanted to achieve noise-free real-time ray tracing. Especially, we didn't want to have waiting times for refinements in reviews, especially when you compare configurations: you would see something, it would refine, then you would change the configuration, it would be grainy again, and it would refine again.
So there's no possibility to compare side by side, or to quickly compare the stuff. That's something we were hoping to remove, or to enable people to judge upon. We wanted to improve full GI for perceived quality and improve the appearance for physically accurate digital reviews.
But AI denoising alone doesn't help much; you need to have a ray tracer underneath. So this is the video I showed you earlier. We have 14,256 CPU cores behind that. This is full GI ray tracing on the CPU. You see the noise? It's not much, but it's still there. Especially for areas like the chrome parts-- can you see it on the dashboard? There's the refinement process kicking in and refining it. So you wouldn't get rid of that. Plus, we have five frames per second, and we have a huge number of cores behind that, 14,256.
Still, this is awesome. I don't want to degrade what we're seeing here. This is full GI in real time-- this is awesome; not many people can do that. This is from, as I said, 2016. But it's very expensive to do. Then another video-- and I hope I can play both of them. Yep. This is what you could expect on one CPU. I won't go into details of which CPU it was; it is a research project, and the details don't matter at this point in time. When we have a final result, we will have benchmarks and everything for you to compare. So on the left side, this is the refinement process. On the right side, this is the denoised image.
Whenever you stop moving the camera, it will refine over time. When you move it, it will go back to this noisy image. The cool thing about the right side is that, obviously, it doesn't go back to that noisy, grainy image. On one CPU-- yeah, one workstation, not one CPU, but one workstation-- you can see that the performance is not good enough. So it's not really fluent; it's jumping around.
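What the two sides of that video show can be sketched roughly like this (a hypothetical outline, with a low-sample render call and the AI denoiser treated as black-box stand-ins): the raw accumulation resets and gets grainy whenever the camera moves, while the denoised view stays smooth every frame.

```python
import random

def render_low_spp_frame(width=4):
    # Stand-in for one real-time ray traced frame at a low sample count (noisy).
    return [random.random() for _ in range(width)]

def denoise(image):
    # Stand-in for the AI denoiser; a no-op here so the sketch runs on its own.
    return image

def progressive_view(num_frames, camera_moved_on=frozenset()):
    """Accumulate low-sample frames while the camera is still, reset on camera movement,
    and hand the running average to the denoiser before every displayed frame."""
    accumulation, count, displayed = None, 0, []
    for frame_index in range(num_frames):
        frame = render_low_spp_frame()
        if frame_index in camera_moved_on or accumulation is None:
            accumulation, count = frame, 1          # camera moved: refinement restarts (grainy again)
        else:
            count += 1                              # camera still: running average refines over time
            accumulation = [(a * (count - 1) + f) / count
                            for a, f in zip(accumulation, frame)]
        displayed.append(denoise(accumulation))     # the right-hand side: denoised every frame
    return displayed

# Camera moves at frame 5, so the raw accumulation resets there; the denoised view stays smooth.
frames = progressive_view(num_frames=10, camera_moved_on={5})
```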
But you can already see where we're going. So, again, a comparison of the samples per pixel: one sample per pixel looks like that. 16 samples per pixel looks like that. One sample per pixel plus denoise looks like that-- and you can see all the artifacts happening here. And 16 samples per pixel plus denoise looks like that. So we are heading in a good direction already with 16 samples per pixel plus denoise.
So the more samples in real time, the better the denoised result will look-- the fewer artifacts, the less flickering, and the more accurate it will be. And you can have more samples the more power you put behind the rendering. So what we saw two slides ago, with all the cores behind it, is already very converged, so a very smooth image. And if we had applied the denoiser to that, the result would have looked even better than what we see here.
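As a rough rule of thumb (standard Monte Carlo behavior, not a measured VRED figure), the residual noise in the raw image falls with the square root of the sample count, which is why putting more compute behind the real-time rendering gives the denoiser a much cleaner starting point:

```python
import math

# Relative noise level of the Monte Carlo estimate versus samples per pixel,
# normalized to 1 sample per pixel. Standard 1/sqrt(N) behavior, purely illustrative.
for spp in (1, 4, 16, 64, 512):
    print(f"{spp:4d} spp -> relative noise ~ {1 / math.sqrt(spp):.2f}")
```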
But yeah, access to the 14,000 nodes is something I don't have at home, so I couldn't do any videos with that. OK, this is a bit bigger cluster, and it's also not mine; we've been at a customer to see that. So this is the 16-- no, is it 16? Yeah, it's 16 samples per pixel. And now we're turning on the denoiser, and you can see how the result looks.
So this is CPU plus denoise, 32 nodes. It doesn't matter which CPU. It's quite current, because it was leased hardware, so it's not very old. But yeah, you can see you achieve a very good result.
So those are the samples. They are hidden behind names-- we made it easier for people with low, medium, high, and ultra high. Ultra high means 16 samples per pixel. You can already see that the lower the samples per pixel, the worse the artifacts, also with the surroundings. So this wouldn't be usable. But if you put it up to medium or high, you're getting into a range where it makes sense.
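Spelled out, the presets might map to sample counts roughly like this (only the ultra high value of 16 is stated in the talk; the other numbers are placeholder guesses, not VRED's actual settings):

```python
# Hypothetical mapping of the quality presets to real-time samples per pixel.
# Only "ultra high" = 16 is given in the talk; the other values are placeholders.
REALTIME_SAMPLE_PRESETS = {
    "low": 1,          # fastest, strongest denoising artifacts
    "medium": 4,       # placeholder
    "high": 8,         # placeholder
    "ultra high": 16,  # the setting shown in the cluster demo
}

def samples_for_preset(name):
    return REALTIME_SAMPLE_PRESETS[name]

print(samples_for_preset("ultra high"))  # 16
```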
And you can see the frame rate, where it is not that bad-- if we take a look at it, it's roughly 18, 20 frames in these results. So those are the facts; I already told you most of them. So this is our example. And as I said, this is research, work in progress. On the one hand, we are trying to improve that with our rendering engine. On the other hand, NVIDIA is working on the denoising technology as well, right? So this is something that's also new to them.
For those who know how AI, or deep learning, basically works: you've got to train the system, and then it will provide your result. It isn't even trained to denoise our way of converging in ray tracing; it's done for Iray, or with Iray samples. So there's a lot of room for improvement that we're looking forward to.
Then something that's, as I said, very surprising for a lot of people who know us: GPU ray tracing. Yes, NVIDIA did a great job. Yes, the RTX cards are awesome. And that's why we took a look at GPU ray tracing, to enable-- OK, here we go-- to enable workstation-based ray tracing. So the stuff we saw is always on computation nodes, like 20, 30. You need to be able to afford that, on the one hand, and even the big companies cannot provide every single worker with that amount of nodes.
So this would be a very awesome thing to be able to do on a workstation, especially in full GI, because then you could have the benefits. This is the first video. It's a capture, right? As I said, you can see that down in the expo at the NVIDIA booth.
This is the first implementation of our GPU ray tracer with denoising on top. So denoising is already turned on here. This is running on a workstation. It's full GI. And you can see that even the sampling, for some areas-- like the lights, for example-- is working very well. I think we are at four samples per pixel in this example. Yeah. So this is the demo we just captured from-- I want to have it twice in there. Awesome.
Yeah, so we saw that already. Let's take a look at the facts. We used two RTX 6000 cards, again at full HD. That was our goal, because it's a common resolution and something that is achievable. If we went for 4K directly, it would not be very realistic to get real-time performance out of it.
Then we have four samples per pixel-- so it's a bit less in terms of quality-- and 12 FPS. But that being said, for everybody who knows how ray tracing works and how the performance gains and the development went over the last years, full GI on a workstation is something that's awesome already. Having 12 frames in full GI without noise on a workstation is something that's completely mind-blowing.
So whoever is using that every day and saw the results we were having was very happy about it, let's say. So, a quick comparison between CPU and GPU-- there are some pros and cons, obviously. Performance development: what NVIDIA did to improve their cards over the last years is very impressive, so that's a pro for the GPU. Then they have the dedicated hardware, the cores on the RTX cards for ray tracing-- I personally like that somebody is really taking a look at improving ray tracing directly.
This is a plus as well, and they are enabling us to do that on a workstation. And then the heavily parallelized computation-- you have a limit on the number of cores and CPUs in one machine, and GPUs have way more parallel processes. Yeah.
On the other hand, CPUs have pros as well. You can't scale an unlimited amount of GPUs. At the moment, we are limited to two cards. Obviously, more would be possible, but yeah.
On the CPU, you can do 1,000, of course. The model size is unlimited-- we talked about that already. It's driver-independent, and it's a proven technology, at least for us, so this is a pro as well. So the key takeaways: the future of GPU performance promises to outperform CPUs for ray tracing.
Still, the CPU cluster is our current go-to-- I'll show you why in a second-- because it also supports all features. And as I said, this is the current state, right? I'm not talking about a final product or a final project. We are not sure how much of our CPU rendering we can port to the GPU. This is still open. It could be that it's 100%; it could be that it's not the case. That's why I'm not promising anything at this point in time.
GPU for us is still a workstation-only approach, but a very powerful one, as we just saw in the rendering, in the video. CPU plus denoise promises the ultimate quality at the moment, because you can crank up the real-time samples depending on how many CPUs you put behind your base rendering. So the bigger the cluster, the more real-time samples plus denoise, the better the result will look. And as I said, we are looking to invest in both.
So we are looking at both GPU and CPU ray tracing for our future evaluations and our future rendering technology in VRED. And with that being said, we had some more nodes ready to get a short clip. This is a capture that I cut together in After Effects afterwards, because I'm not able to use Premiere. Here we go.
So this is-- if you put enough power behind it, you could achieve a result like that. This is CPU plus AI denoise, full GI. As I said, it's roughly around 20 frames, so you can see some glittering during the fast movements. But, as I said, even for us, having one of the best ray tracers on the market, as I said in the beginning, this was something that wasn't really thinkable or achievable a few years ago.
So the denoiser really added an incredible step in terms of quality, and also got rid of the last bit of grain, as you can see on the Porsche. The video we showed was already nearly noise-free, but not completely, and it would have taken a lot more CPU nodes to get even closer to a completely noise-free image. NVIDIA did a great job in helping us get the last-- I don't know-- 10 meters on that, in terms of combining CPU ray tracing in real time with AI denoising.
With that being said, any questions on that? I'm sorry for the amount of information, but I wanted to build the bridge between automotive and the technology so you understand why we're doing all of that, right? Yeah?
AUDIENCE: You mentioned that the renderer was doing spectral calculations? So what are your [INAUDIBLE] look like?
LUKAS FAETH: I didn't get that. Did I talk about spectral? It was in my notes. But yeah. So we are doing-- I didn't say that. But anyways, we are capable of doing spectral rendering. Yes. But I didn't get the second part of it. The material representation?
AUDIENCE: Yeah. What's the material representation like? You're not just loading in RGB [INAUDIBLE] right?
LUKAS FAETH: No. So we have our own material library-- or material set-- which is based on physical materials. So you wouldn't start with a [INAUDIBLE] or whatever shader. You would start with a plastic and then define some roughness values and stuff like that. That's the way we handle it. The car paint is a very complex material-- it's multi-layer, with flakes. You could even define sparkles on top, so you can have multiple layers.
AUDIENCE: You're not modeling extra shells over the body for that, are you?
LUKAS FAETH: No. No. Any other question? Yep.
AUDIENCE: It seems like most of your tests are done with full GI. Could you step that down and test it with a little less quality [INAUDIBLE]?
LUKAS FAETH: Yeah. So on the CPU, we could go down to, let's say, just pre-computed plus shadows and denoise the shadows. Frankly, we didn't do that yet. We had a demo at GTC this year, a few weeks ago, so this is the earliest point in time I can talk about that. We really didn't have the time to do it yet.
So our goal was to get full GI first and then go down to the lower end. And also, we just didn't try it yet. So I don't have any experience on denoising the lower rendering modes, unfortunately. But as I said, ongoing project, we will keep everybody updated on that. I'll take him first.
AUDIENCE: [INAUDIBLE]?
LUKAS FAETH: Yeah, we are roughly about 20 frames. So I guess you would get motion sick quite quickly. No.
AUDIENCE: [INAUDIBLE] taking samples down to [INAUDIBLE] one, two, to see where it would look, and then [INAUDIBLE] go up to 16.
LUKAS FAETH: Yeah, so no, we didn't try that either. Our goal was getting full HD on the demo down there-- that was the application we wanted to run, and this was challenging enough. So we didn't try that yet.
AUDIENCE: Second question. Are you going to have to make a decision between GPU and CPU or are you going to keep going parallel, or you have to get to a point where you have to say the software has to be written one way or the other?
LUKAS FAETH: No. So we have a complete CPU ray tracer already. What we are trying to do is have the equivalent rendering system on the GPU. As I said, it's not 100% sure whether we can do it, whether it's practical to port all of that. But we are planning, if it works out, that it should be a two-way route-- it should be two systems. So in VRED, you can easily switch between OpenGL and ray tracing. You have the same materials, so there's no different preparation for it. And we are aiming to do the same for GPU and CPU ray tracing.
So with just a button click, you would go from one to the other, based on the application. If you are on a desktop, maybe GPU is more applicable for that in the future. And if you then want to go on a cluster, you will have the same result just by the flick of a button.
AUDIENCE: When you say full GI, that's not unlimited [INAUDIBLE].
LUKAS FAETH: No.
AUDIENCE: How many [INAUDIBLE] are we talking?
LUKAS FAETH: On that one, I think 16.
AUDIENCE: 16. So then obviously, [INAUDIBLE] that back. You could cut it back to two [INAUDIBLE].
LUKAS FAETH: Yes, we could. But especially the headlight was something we wanted to show correctly-- you wouldn't see the light bulb behind all the layers of glass if you cut that down too much. So it was something that we did on purpose, to have the quality, or the details, let's say.
AUDIENCE: Well, yeah, to get the full [INAUDIBLE]. OK.
LUKAS FAETH: Awesome. If there are no more questions, then thank you very much for your attention. And have a great party tonight.
[APPLAUSE]