Description
This panel will feature the perspectives of three top innovators in the 3D content space, gathered to discuss the creation, optimization, and utilization of standardized 3D assets across a wide range of applications. With the ever-growing need to have 3D content flowing seamlessly across multiple experiences and platforms, it's never been more important to establish workflow standards for homogeneous 3D assets. By standardizing assets, automation pipelines can be better utilized to convert and conform an individual asset for multiple use cases. Our panelists will discuss the latest advances in USD, MaterialX, MDL, and glTF that are helping pave the way for more versatile content, and explore the value of storing and managing metadata for 3D assets in a time when rich content is more important than ever.
Key Learnings
- Managing and distributing 3D models
- Developing an ecosystem for future-proofed 3D content
- Developing a pipeline for publishing
- Lessons we’ve learned over the past 20 years dealing with 3D content
Speakers
- Julian Neagu: Julian-Alexander Neagu is a real-time 3D graphics enthusiast with 7+ years of industry experience. He is co-founder of RapidCompact by DGG and has been the company's head of product since 2018. Julian started out with a four-year stint in the indie games scene before switching to XR. After being part of multiple award-winning productions, such as the German AR music video Tunnel AR, the underlying 3D graphics technology sparked his interest, and he joined the Fraunhofer Institute for Computer Graphics in Darmstadt, Germany, working on 3D data acquisition technologies involving photogrammetry and robotics. His Fraunhofer work led directly to meeting his co-founders of RapidCompact by DGG. Today Julian guides several automation efforts to help make 3D real-time content more accessible to end users and industry clients alike. His close contact with customers aligns DGG's research-heavy computer graphics innovations with demanding 3D content production pipelines, and he is excited about the current 3D standardization efforts of the open-source community.
- Robert Cervellione: Robert Cervellione is a registered architect in New York and an AEC Workflow Specialist for NVIDIA's Omniverse platform. Robert regularly engages in professional and academic research that explores BIM, computational systems, design automation, and advanced fabrication. His work focuses on the intersection of design and technology, exploring advanced workflows across the AEC space. He is also an Adjunct Assistant Professor in the Graduate Architecture and Urban Design Department at Pratt Institute, teaching seminars in computational design, BIM, additive construction processes, advanced fabrication, and other research-related agendas.
PAUL TEALL: All right. Yeah, so first off, thank you all so much for showing up today. As I mentioned, you got some locals up on the stage. We are excited to welcome everybody here to New Orleans. I know speaking for myself, we were totally surprised that it was in New Orleans. So we were looking to just send a couple of people to Vegas, and looked at the site, and went, holy crap, it's actually in New Orleans this year.
So instead of just sending a couple of people here, we are here with a much bigger presence this year. My name is Paul Teall. I'm the VP of 3D Strategy and Operations at Shutterstock. And if you're thinking, OK, I know Shutterstock, stock images, why are they at Autodesk University? Shutterstock acquired TurboSquid, which was the original and premier 3D model marketplace, in February of 2021. So a few of us were part of that acquisition, and with that Shutterstock made a big bet on 3D. We've got the core marketplace. We've got custom 3D services. And then we've got some workflow tools to help you manage your 3D library.
But this is not a commercial for Shutterstock and TurboSquid today. So stop by our booth. You can get lots of information, and people will chat your ear off about that. And then even more fun, we've got a happy hour tonight that we're hosting, 5:30 to 7:30 at the Sidecar Patio, I think is the official name. Sidecar Patio, Steph says, yes.
It is less than a half-mile walk from right here. It's a lovely outdoor space with the best kind of drinks: free ones. Drinks and appetizers will be served. So come by, and hang out with us if you want to talk more 3D with a beer in hand, or beverage of your choice in hand.
OK. We are going to have time for Q&A at the end. So we're going to talk for about 30 or 40 minutes. And then we're going to open it up to questions from you all. These panelists are amazing, so I imagine you'll have some good questions; just please save those until the end. And with that, I'm going to turn it to this group for intros. So in maybe 60 seconds or less, take us through who you are, what you do, and how you find yourself on a stage talking about 3D standards.
Oh wait, the topic, by the way, too, just as a reminder: we're talking 3D standardization, how that feeds into some amazing automation opportunities, and some not-so-distant, future-looking things that will come from having this large, hopefully ever-growing library of standardized 3D models. All right, so over to Beau.
BEAU PERSCHALL: So I'm Beau Perschall. I am the director of Omniverse Sim Data Ops. I was actually with TurboSquid for 20 years of my career, helped build 3D standards like CheckMate and StemCell. At NVIDIA last week, we actually announced SimReady on top of all of that, as far as helping content become more useful to more users across--
DADE OGERON: Yeah, I'm Dade Ogeron. I also came along with the acquisition of TurboSquid. So I'm now the VP of 3D Innovation at Shutterstock. I'm very much involved in standardization, making sure that content flows across the stock 3D market. We want to make sure that content can go to everyone. So it's been a big goal of TurboSquid for a long time to make sure that content flows. And of course, working with NVIDIA and other partners like DGG, figuring out ways to make sure that content goes from the creator to the end user as seamlessly as possible.
ROBERT CERVELLIONE: I am Robert Cervellione. I am on the AEC NVIDIA Omniverse team. So I work with Beau at NVIDIA, but my take-- so I come from the architecture industry. I worked for 20 years in an architectural practice in design computation. Now I'm working on the Omniverse team. And so for me, the take on this content is really that I'm looking at it from the consumer point of view. I'm the one taking a ton of the content that Beau and his team are making, and we're deploying it, and utilizing it, and helping understand how that can be a more useful, more fruitful process, and what we can do to standardize it and make it easier for everyone.
JULIAN-ALEXANDER NEAGU: Yeah. Nice to meet you. I'm Julian, Product Manager and Co-founder of DGG. So what we do, we work mostly with e-commerce clients, and help with automating their 3D workflows and optimizing their 3D data. We were originally founded in Germany, out of the Fraunhofer Institute for Computer Graphics in Darmstadt. And basically from 2012-ish on, we were already working on the software and tooling side of our products, alongside our research work.
And yeah, basically I'm coming to this panel as we're working also with Shutterstock to see how we can automate certain downstream workflows, and also standardization in terms of 3D assets, 3D asset creation, and the key word: interoperability.
PAUL TEALL: OK. Awesome. So we got some heavy hitters here. I'm excited about this group, and looking forward to this discussion here. So first off, Beau, I'm going to turn to you. Maybe just set the stage briefly about how this effort at 3D standardization has been sort of a holy grail for many years. I don't know if you want to touch on any missteps. But why we feel confident that this is actually going to happen this time.
BEAU PERSCHALL: Yeah, I mean, I could easily spend hours talking--
PAUL TEALL: Full hours.
BEAU PERSCHALL: --just about kind of the missteps in the 3D industry, having started in '93, pre-internet. The short answer is, there have always been attempts at being able to move your data around. You have formats like OBJ, and 3DS, and STL, and FBX, and DXF, and what else-- who remembers VRML, V-R-M-L? I mean right, like everybody's trying to do it.
But the problem is that all of those formats were in some way, shape, or form lossy, in that you could never move everything you wanted from one platform to another. So it was just this constant battle of having to manage your data over and over and over again. I think what's happening now is that the demand for 3D content is getting so big in so many different ways, beyond just beautiful visuals in architecture, design visualization, movies, TV, ads, and print, but into simulation and understanding things, that the community at large really kind of rose up and said, we have to have standards.
I mean, I was part of the Khronos Group when we did the 3D Commerce working group, which is where I met the DGG guys, in terms of standardizing around glTF for web delivery of content, so the big retailers like Wayfair and Target could actually get stuff out to their customers quickly. And that is now starting to really drive a lot of what you're seeing in terms of consolidation around real open formats and standards, like the gift that Pixar gave us with USD.
I mean that's central to everything that NVIDIA is doing as well. So I think that's really kind of where we are today, is it finally the community is taking control of it, and saying, we have to have these kinds of standards.
PAUL TEALL: Right, and big-name companies. You rattled off several that have a vested interest in seeing this happen. So it definitely feels more real. And one thing that's driving that a little bit, and I promise we won't go crazy on the metaverse, but there is a considerable amount of buzz around the topic, for sure. Everybody's heard this. In the past nine months or so, it's really started to take off. And one thing I think that's done is helped companies realize that, hey, I need to be thinking about 3D, where maybe I wasn't thinking about 3D in the past.
So Dade, the question for you is, I know you're talking to a lot of companies that are maybe going through that thought process of, OK, I've got to figure this out. For companies that haven't thought about themselves traditionally as a 3D company, what are you taking them through, or what are you doing to help them prepare for that?
DADE OGERON: Well, it's interesting, because there are a lot of companies who actually already do 3D that are also struggling, right? So it's an amazing thing to see that companies that want to get into it can easily feel overwhelmed. One of the things we try to do with a lot of our clients is come up with a strategy for what that looks like for the future.
And even though right now, when you talk about the metaverse, it's very nascent, and it's hard to tell people exactly, well, this is what it's going to do. But we know it's going to be real time. We know it's going to be always on. And we know it's going to be 3D in almost all cases. And so we're working with clients now to help them understand that the content you're creating now needs to be created to a standard, so that it can flow seamlessly. And we try to educate them on what that means.
And again, things like USD really open that up for them. But then we also help them understand the endpoints: how are you going to get this content that you've created where it needs to go? Maybe you have old content that you need to convert, or you have new content that you're creating. That should all flow in the same way. And so we come up with all these different endpoints that you can then conform to in an automated fashion, like glTF or USDZ.
But essentially, what we want to do is set them up for success by giving them these standards up front, and saying, look, there are going to be experiences that don't even exist yet. But we can at least build content in a way that will make it extensible to whatever that experience becomes later down the line.
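The idea of conforming one standardized source asset to many endpoints in an automated fashion can be sketched roughly like this. This is a minimal illustration, not any panelist's actual pipeline; the endpoint names, limits, and plan fields are assumptions.

```python
from pathlib import Path

# Hypothetical endpoint catalog: each delivery target gets its own
# constraints, derived automatically from one standardized source asset.
ENDPOINTS = {
    "gltf": {"extension": ".glb", "max_triangles": 100_000, "compress_textures": True},
    "usdz": {"extension": ".usdz", "max_triangles": 250_000, "compress_textures": True},
    "offline_render": {"extension": ".usd", "max_triangles": None, "compress_textures": False},
}

def plan_conversion(source_asset: str, endpoint: str) -> dict:
    """Build a conversion plan for one endpoint from a single source asset."""
    if endpoint not in ENDPOINTS:
        raise ValueError(f"unknown endpoint: {endpoint}")
    spec = ENDPOINTS[endpoint]
    target = Path(source_asset).with_suffix(spec["extension"])
    return {
        "source": source_asset,
        "target": str(target),
        "decimate_to": spec["max_triangles"],  # None means keep full resolution
        "compress_textures": spec["compress_textures"],
    }
```

The point of the sketch is that the artist authors `source_asset` once; adding a new endpoint is a new catalog entry, not a re-modeling job.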
PAUL TEALL: Right. And you're finding people are more receptive to these types of messages now than maybe they were a year ago?
DADE OGERON: Oh, much more so.
PAUL TEALL: Yeah.
DADE OGERON: Yeah.
PAUL TEALL: The companies that are reaching out to us, and I'm sure working with NVIDIA and DGG, are companies that you never would have considered in a million years would be going, I need a 3D strategy. Like a Geico, or somebody like that.
DADE OGERON: Yeah, and you have companies now that are-- well, we've been doing 3D for a while. And now we just have a bunch of content that just doesn't flow very easily everywhere. And so we want to be able to help them as well with conversion of that content, and getting that content to flow.
PAUL TEALL: Yeah. I think that's a nice setup for the next question for Julian here. So standards and automation are becoming so critical because companies are realizing they need to build these libraries of models. But they don't need to necessarily, or they don't want to necessarily build them for just one use. So they're going, maybe I was building for a beauty render before. But I want to put that in an AR app and it doesn't work. So what should companies be thinking about? Or how have their thought processes changed in the past few years as they realize they need this content that they're building to go multiple places? They don't want to rebuild.
JULIAN-ALEXANDER NEAGU: Yeah, exactly. I'm not sure if this works. Yeah. So as you said before, the usual way was to make purpose-built assets, like for beauty renderings, or giving that shot as a reference for a game production, or seeing the same asset in a WebGL framework. And the tone has now changed, in that this whole process gets elevated to a source-asset level, where companies think more about how they can, not necessarily abandon DCCs, but get out of them in terms of managing the source asset data. So we can still produce the meshes in one DCC. We can still curate materials in another DCC. And then we get these libraries out of this lock-in effect, and we can also automate workflows, basically populating DCC scenes with certain SKUs in terms of metadata and the associated mesh and material data.
And basically this goes down to downstream workflows for optimizing materials and meshes for real-time display. Some standards have already been in use for a few years as exchange formats; FBX was mentioned, for example. But there are certain pros and cons, where people also look at Alembic in between, and then eventually go to USD. That's basically the trend we can at least observe right now.
And on the material side, just mentioning it here for the first time, MaterialX is a very, very important and interesting new development in the open-source community, in terms of material interoperability and standards.
PAUL TEALL: OK. Many of us on the stage come from sort of the media and entertainment world, or the DCC world. Rob, I know you said you come from the architecture world. What sort of similar problems are you seeing or do people need to be thinking about as they're considering coming from the world of Rhino or Revit, and maybe those content marketplaces that are a little more siloed?
ROBERT CERVELLIONE: Yeah. So for me the important thing really is something Beau touched upon, which is that it's not really just about the geometry. There are so many formats that can take a mesh, and another mesh, and another mesh, or NURBS, and NURBS, and NURBS. But what matters is creating a unified standard that brings along all of what we like to call the business data.
OK, so you've modeled a sink. But now you need to know the manufacturer, and how much it costs, and a whole bunch of what we call BIM data. That needs to live on that. If you're doing a plant or HVAC stuff, you need to know its electrical loads. You need to know a whole bunch of data. And what we don't want is a format that separates those two outright.
We don't want the mesh in one format, and then an Excel table that somehow matched up to it, and that's where we're going to hold the business data. We want to be able to unify all of that, right? And then if you take that even further: if you model a steel beam and you make it steel, I want to record the Poisson ratio and its modulus of elasticity, so that when I bring that into a structural solver, it can understand how to bend it, and how to deform it, right?
I don't want to have to recreate the data there. So really what we're looking for is that unified standard that's going to bring across all the important pieces, so we don't have to keep redoing it. And I think that's where this idea of SimReady really comes in: geometry is so much more than some mesh or some NURBS. It's a thing. It's a beam. It's a wheel. It's a joint. And there's so much logic that has to be embedded in there for the wheel to spin, and for a beam to bend, that we don't want to lose any of that data fidelity when we make that open standard.
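The simulation-relevant data Robert describes (Poisson ratio, modulus of elasticity) could travel with the asset as a structured record rather than a separate spreadsheet. The sketch below is illustrative only: the field names are assumptions, not the SimReady or BIM schema.

```python
from dataclasses import dataclass, asdict

# Hypothetical schema carrying physical material data alongside geometry,
# so a downstream solver consumes the same record the modeler authored.
@dataclass
class PhysicalMaterial:
    name: str
    youngs_modulus_gpa: float  # stiffness, for structural solvers
    poissons_ratio: float      # lateral strain response
    density_kg_m3: float       # mass, for dynamics simulation

steel = PhysicalMaterial(
    name="structural_steel",
    youngs_modulus_gpa=200.0,
    poissons_ratio=0.30,
    density_kg_m3=7850.0,
)

# Serialize to a plain dict, e.g. for attachment to an asset's metadata.
record = asdict(steel)
```

The design point is the one made on stage: the structural solver, the acoustics engineer, and the renderer all read from this one record instead of each rebuilding the data.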
PAUL TEALL: Right. And this transfer of data is not necessarily some abstract concept. This can be the same company, just different teams, where the way things are structured makes it hard for data to flow between different projects or teams. I mean, we had somebody stop by the booth yesterday who is using one of our products, and said, well, the other group at our company is using this, but we're not.
So just making sure that stuff can transfer enables intra-company sharing as well.
BEAU PERSCHALL: Yep.
PAUL TEALL: OK. So as we're thinking about standardization, what are some important things that people need to be keeping in mind as they're maybe starting this journey? Do they need to be dogmatic and pick a specific standard, or is it just important that they're making an effort to stick to general industry best practices? I'll start with you, Beau.
BEAU PERSCHALL: All right. And I'm sure others can chime in here as well. I don't think it's about being dogmatic. It's more about being able to have lossless control of your data in a consistent way. Consistency is really the key in terms of standardization. When it comes to how you model things, that's CheckMate, or StemCell, so that the artists or the designers become one step removed from how that content ends up moving downstream within the pipeline. I don't know.
ROBERT CERVELLIONE: Yeah, I mean on our side, for us the important thing is that the content is robust and flexible on that standard, right? Because not everybody wants everything, right? The structural engineer wants to be able to put the modulus of elasticity on there. The environmental analysis person wants to put carbon output on there. The design team wants to pick a finish on it. And so the standard has to allow all of the different parties to be able to input, adjust, and access the pieces of the data that they need.
And again, it's not always the geometric stuff.
BEAU PERSCHALL: And it also evolves over time, when it comes to the simulations that are now starting to come to fruition. You've got the Internet of Things, where you've got devices that want to report sensor data back to a digital twin. And you now need the content to be far more intelligent about itself. But from a creation standpoint, you also want to future-proof it, so that you're not, again, rebuilding it over and over and over again for different things. So that's really, I think, the start of standardization: finding something where you get that consistent starting point, where you can now evolve it from a single point in space, not necessarily picking one over the other.
DADE OGERON: I was going to say, on the TurboSquid side of things, an interesting challenge for us was that it wasn't just geometry or materials. It was also all the metadata that went along with that: product descriptions, product metadata, how many polygons it has, how many materials, how many object components. All of these things we needed to record. So we recognized a long time ago that a 3D asset wasn't just the 3D model. It was so much more than that.
And so that's why we developed our own infrastructure to be able to compartmentalize that content, because we knew it was coming. There was going to be something at some point [INAUDIBLE] where we could then start to inject that data into it. But in the meantime, we needed a way to record it. And that's thinking forward. That's how you do that. You say to yourself, look, I may not have exactly what I need right now to be able to just hand over a package to someone. But I need to be responsible in maintaining as much of that data as I can in one place that's easy to find, easy to distribute, and has all the components that I need.
And so what that means is that now we're set up. But as we start to bring USD into our pipeline, we can then apply all that data to that USD, and it can all be compartmentalized finally. So again, it's thinking forward in that way of knowing, hey, look. I know this content is going to need to live somewhere else at some time in the future. I want everything there in one place.
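The kind of per-asset record Dade describes, kept in one place next to the model so it can later be attached to a USD layer, might look something like the following. The keys are assumptions for illustration, not TurboSquid's actual schema.

```python
import json

# Hypothetical sidecar metadata record: descriptions, counts, material
# names, and file references all live with the asset, not in a separate
# spreadsheet, so a future pipeline can inject them into USD later.
asset_record = {
    "asset_id": "SKU-000123",
    "description": "Mid-century lounge chair",
    "geometry": {"polygons": 24680, "objects": 3},
    "materials": ["walnut", "leather", "brushed_steel"],
    "files": {"source": "chair.max", "published": "chair.usd"},
}

# Serialize as a JSON sidecar next to the model file.
sidecar = json.dumps(asset_record, indent=2)
```

Because the record is plain structured data, "applying it to USD later" becomes a mechanical mapping step rather than a manual re-entry job.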
PAUL TEALL: Right. Let's stick with the metadata for a second, because I think that's going to be really powerful obviously for companies like NVIDIA that want to use that metadata to learn things about these 3D models. What are some of the ways that maybe technology can help the artists creating that content make sure that they're capturing whatever appropriate metadata needs to be connected? Let's maybe start there, Rob, because I know you've got some thoughts on that.
ROBERT CERVELLIONE: Yeah, I mean right now, I think a lot of that unfortunately falls on the DCC that you're using, right? So if you're in a program like Revit, it's metadata rich, right? An object has tons of properties. If you're in something like Rhino, it's a little bit harder. But there are ways to input it. And I think the important part is that no matter where you're creating this content, there is probably going to be some sort of data pipe for you to get that metadata in there, and it's important to fill that in and get all the important pieces.
But I do think that's something that can also evolve. Because right now, properly getting that metadata in is very DCC dependent. So understanding how the standards work, and what the artist needs to put in there, whether they're in Revit, or Maya, or Blender, or Rhino-- it shouldn't really matter; there should be an easy pipe to be able to do that. And that, unfortunately, I think we definitely need to start to standardize a little bit more.
PAUL TEALL: Yeah. So on that note, Julian, I wanted to have you dig into a little bit. What is a well organized asset, 3D asset, look like in your mind today? And maybe how does that differ from, say, what that looked like five years ago?
JULIAN-ALEXANDER NEAGU: Yeah, this goes a bit hand in hand with the shift in how workflows are built, also in the example of e-commerce companies. So let's say five years ago, it was appropriate to really have mesh and material data adhere to a certain standard-- not even a standard, but to a certain renderer in a certain pipeline-- and then have completely different standards in that regard for an asset that goes more into a real-time pipeline.
For example, I have different layers of the mesh. Take a shoe model as an example: in the real world, there would actually be different materials, like rubber, some cloth, and semi-transparent materials, layered over each other, and their interaction with the light creates the result. And back then it was appropriate to build the asset just to get to that end result; the way there didn't really need to be standardized. If it looked right and fit into, say, Unreal with a certain workflow, maybe making a displacement map or whatever to basically fake this, that was totally fine.
And now, the sentiment is changing a little bit toward how we can capture this as close to reality as possible, as close to the real world, like a digital twin, and then basically automate and streamline all these constraining workflows for the specific use cases after that. So it was basically turned a little bit upside down in terms of priority.
And I think maybe Dade, you can say a bit more about where the metadata lives, as well and, yeah.
DADE OGERON: Yeah. I wanted to touch on something that Robert said. So Robert mentioned materials and materials have always been sort of our holy grail. If you can figure out materials and move those through, MaterialX is doing a great job of beginning to solve that problem. But having that metadata at the material level is sometimes maybe even more important than carrying it at the geometry.
BEAU PERSCHALL: Without question. Having worked in 3D forever, you used to be like, oh, you build the geometry, it's precise, and you're like, how can I make this look like carpet? When in reality the carpet has very specific properties in terms of friction, as it interacts with a rubber shoe, or whatever else, or a chair sliding across it. How are you now able to define that? What are the thermal characteristics? What does it sound like when some sound bounces off it?
Everything is driven by materials at some level anyway.
DADE OGERON: Yeah, a perfectly square cube in 3D has no attributes other than its physical scale and dimension.
BEAU PERSCHALL: Right.
DADE OGERON: And it doesn't get anything awesome until you put that material on it, right? And then once you put that material on, it needs to carry whatever that material's data is with it. That's why it's really important to be capturing metadata at that level.
BEAU PERSCHALL: Right, yeah. Physical materials also impact what we do at NVIDIA as far as simulations for non-visual sensors, for things like autonomous driving. You end up now with sensors, like LiDAR and RADAR, that are tuned to see: that's metal, that's wood, that's concrete, that's flesh. Don't hit that. All of those really important aspects go beyond just the visuals. It's everything else.
ROBERT CERVELLIONE: And I think it's important that we unify that too, right? Again, like I don't want 32 material libraries, one for each software that I use, right?
BEAU PERSCHALL: Correct.
ROBERT CERVELLIONE: And each one having its own little thing, and its own pieces, and nobody keeps them aligned. And if you export out of one program, you export out of another program, everything looks different. And then it acts different. So unifying and standardizing both the geometric, the metadata at the material and geometry level, becomes incredibly important so that you can start to finally create a unified catalog of stuff. And then stop worrying about whether that was material created in this program, or that program.
It shouldn't matter, right? You should be able to map those, and very easily integrate those together.
BEAU PERSCHALL: Right.
PAUL TEALL: Right. And the artist not having to worry about that is important too. They shouldn't have to know what the thermal dynamics of carpet are, or whatever.
BEAU PERSCHALL: Absolutely not.
PAUL TEALL: That should come with the package.
ROBERT CERVELLIONE: Or be easily able to be embedded by the sound engineer, or the acoustics engineer, or the structural engineer doing thermal, right? Like it should be modifiable enough. Like nobody's expecting everybody to know every piece. So the standard needs to be able to be very easily accepted.
BEAU PERSCHALL: And looking further into the future, even having an AI assistant that can actually help you identify and say, that's what this is. I'll go ahead and apply those, because I know that's a known quantity.
ROBERT CERVELLIONE: Of course.
DADE OGERON: Yeah. Being able to-- can I hold it like this? Being able to embed that in the material library, so that, again, the artist doesn't care at all about it. They just know that by applying that material, they're getting those attributes.
PAUL TEALL: And take us through some of the examples of what we're going to get, once we live in this world where these models that are built have all this rich data coming along with it? I mean Beau and Rob, I'm sure you guys are living this daily in the Omniverse world. So sort of paint that picture a little bit for us.
BEAU PERSCHALL: Rob?
ROBERT CERVELLIONE: Sure. So our team actually put together this demo not too long ago. And we did some things in a week that we honestly didn't think were possible. And a lot of it had to do with the standardized USD and material library. So we were working in Rhino, Revit, SketchUp, Max, Maya, Unreal, and CityEngine all at once, all together, all simultaneously.
And being able to then-- what we call asset swap, and remap all of the materials across all of those programs, and all of the geometry across all of those programs to a single unified set just made things so much easier, right? And it made it possible. So then it didn't matter if somebody modeled something in SketchUp or Rhino. Nobody cared.
When they exported it, or when it hit the system, it got the unified material. Or if you're in Revit, the table became the high-quality asset it's supposed to be, with all of the metadata on top of it that we piped in. And so it starts to relieve a lot of the work nobody wants to do. You don't want to sit there and fill in stuff. Nobody wants to type stuff.
PAUL TEALL: Right. Do you have anything you want to add there? OK.
BEAU PERSCHALL: It's-- I mean the SimReady content is just trying to build all the simulation data on top of the beautiful stuff. But they're the ones who are actually employing it into all of the things that they're--
ROBERT CERVELLIONE: Yes, I just stole all the content he built. That's pretty much it. He built all of this amazing content, and I just like--
DADE OGERON: Hey, I think we both-- we built some of that.
PAUL TEALL: So Dade here, and I'm watching the time here. And I do want to leave time for Q&A. But I got a couple more I want to get into. So we're in sort of a unique situation. We're working with 70,000-ish artists around the globe. How do you convince these artists that are maybe building content for us, but at lots of small different studios, why they should care about this? And maybe if it requires extra work or them learning something new, why they should make that effort to go down this path?
DADE OGERON: Well one of the things that we've done, so we had CheckMate. And CheckMate was a standard that we used that artists could achieve. And if they did that, we knew that the scene would work, and that it would perform the way it was supposed to perform. It would have the features that it needed to have. And when the customer bought it, they could just put it in their scene, and it would go.
And that was a hard standard to hit. Even though it was based on a lot of best practices, artists felt like it was the 10 extra steps that they had to take to generate that content. So what we've done is we've sort of re-adapted that into a simpler specification that became sort of like, well, if you could just do these minimal things, then we can automate all these other things for you. And that's where really things started to change, where it was let the artists create the content that they want to create, and then let them move on.
Go back. Go start working on something else. And then we'll take it, and we'll do all the hard work on our side. And what we were aiming to do, is as we move further into the future, we want to start automating more and more and more of that process, making it much easier for them to be able to just move on. Right? Create the great work and then move on. We'll take it from there.
And so we have built automation pipelines to flow that content into pretty much any experience that it needs to go into. And of course with the help of NVIDIA and of course the help with DGG. And so for us, as we look forward, what we're going to continue to do is we're going to continue to guide and give artists as much help, and training, and tools as we can, to support them in this creation journey. And then we'll handle all that yucky nitty-gritty stuff on our end, conversions, and all that stuff.
And hopefully at some point, giving them tools to make it easier for naming objects, or naming materials, or a material library that has the metadata that we need in it, and storing that metadata, and giving them more options and more control and generating USD content as well. So these are the things that we're looking at as we're wanting to incentivize artists not just like-- we'll give you a boost if you create to the standard. You can get your content in front of more people more easily.
But also just make it easier for them to just create great content. And let us do all the hard work, other hard work.
JULIAN-ALEXANDER NEAGU: Yeah. I just want to add that's the same idea we strive for at DGG in terms of asset optimization and interoperability with, for example, WebGL frameworks. The artist shouldn't be burdened with creating LODs, fixing anything, or applying compression algorithms for displaying it on a mobile device. We think of the whole 3D pipeline similarly to the video pipeline 10 or 20 years ago, when you just had frame-by-frame image compression and other technologies that took it from there. Right?
You just cut your content. And then YouTube automatically compresses it, and so on. And we want to enable that, and already do that to a certain extent right now with 3D data.
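[The "do the minimal things, and we'll automate the rest" approach both panelists describe can be sketched as an automated conformance check. This is an illustrative sketch only — not TurboSquid's or DGG's actual pipeline — and every field name in it is hypothetical.]

```python
# Hypothetical sketch of a minimal, automatable asset conformance check.
# A real publishing pipeline would inspect actual scene files; here a simple
# dict stands in for the asset description.

def check_asset(asset: dict) -> list[str]:
    """Return a list of problems; an empty list means the asset conforms."""
    problems = []
    if not asset.get("name"):
        problems.append("asset has no name")
    for mesh in asset.get("meshes", []):
        if not mesh.get("name"):
            problems.append("unnamed mesh")
        if not mesh.get("uvs", False):
            problems.append(f"mesh {mesh.get('name', '?')} has no UVs")
    for mat in asset.get("materials", []):
        if not mat.get("name"):
            problems.append("unnamed material")
    return problems

asset = {
    "name": "office_chair",
    "meshes": [{"name": "seat", "uvs": True}, {"uvs": False}],
    "materials": [{"name": "plastic_black"}],
}
print(check_asset(asset))  # → ['unnamed mesh', 'mesh ? has no UVs']
```

Once checks like these pass, everything downstream — LOD generation, compression, format conversion — can run unattended, which is exactly the division of labor being described: artists hit a small spec, automation does the rest.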
PAUL TEALL: Yeah, I think that's an important theme. And maybe people have picked up on this already. But no one on this stage is advocating to try and lock you into any particular ecosystem or standard. I think it's the opposite.
BEAU PERSCHALL: I think the communities demanded it. No one can be a one app shop anymore. I mean everybody's using a little of everything to try and get their work done. Now it's a matter of how can we do it more efficiently, and how can we do it so that our content is more future-proof, so that we can continue to use it, and have it evolve as new simulations come online, and new techniques come online. How do you start to make sure that content moves forward?
ROBERT CERVELLIONE: Yeah. And I think this idea of standardization when it comes to automation is very important, because everybody looks at AI like this miracle technology. But what it really does is learn from stuff, right? And if you don't have a standard to learn on, it's just as dumb as anything else. So if there's a standard material library, and a standard way that we're representing objects and metadata, then teaching your AI engines to learn about things, to understand how to create segmentation maps, and all of that kind of thing becomes much easier, because now you don't have 30 separate data sets, right?
Imagine you're doing Midjourney, but there were 42 image formats that didn't talk to each other. It would never be able to scrape the internet to go get them, right? Because a JPEG is a JPEG is a JPEG, and a pixel is a pixel, we can do that. You can learn on that data. But 3D data is not like that yet. And so if we standardize it, we can start to do those kinds of amazing things on top of it in the 3D world.
PAUL TEALL: OK. So maybe final couple of questions here before we turn to Q&A. How close do you think we are to this actually happening? There's a lot of brainpower. There's a lot of dollars. There's a lot of companies thinking about these problems. Are we years away from some of these things becoming a reality? Months away? Where do you guys think we're at?
ROBERT CERVELLIONE: Somewhere in between that.
PAUL TEALL: Somewhere in between, OK.
BEAU PERSCHALL: Yeah, I would agree with that.
PAUL TEALL: One year. It's one year.
JULIAN-ALEXANDER NEAGU: One year.
PAUL TEALL: What's the date today?
DADE OGERON: Yeah, I mean I would say that right now we are the closest, without a doubt, that we've ever been. I mean USD has really solved a lot of the problems that we've wanted to solve for a long time. I think we still struggle a little bit with materials. But I think we're rounding third, if you will, on materials, hopefully. I mean it feels that way now, because we're not running across the same problems that we used to run across when dealing with materials.
So we're clearly right around the corner. And I think USD, the great thing about USD is that so many people have already adopted it, and they may not have got it quite right yet, but it's getting there. It is clearly the standard that we've been looking for.
BEAU PERSCHALL: Yeah, and it's not necessarily a format by itself. I mean, I think people under-appreciate that it's really a platform that we've been given now to actually use modularly. And it allows the data to exist as like parts of a much greater whole, as opposed to trying to make the next uber file, the next uber asset type. Now, you can have that compartmentalized, and pick and choose what you need where you need it.
ROBERT CERVELLIONE: Yeah, I mean USD really acts more like mini databases, and you can then just stitch those databases together as needed. And so it's got that lazy-read, lazy-write ability to traverse stuff without loading everything into memory — we could do a really deep dive into that. But yeah, it's not just a file format. It's really this micro database that's super reconstructable.
DADE OGERON: Yeah, and that's something you said earlier, Robert, especially pertaining to that beginning point. Like, we know you're still going to want a GLTF. And you're still going to want an FBX. Someone is going to say, hey, can I just get an FBX? You should be able to provide that. Not everyone's going to want all the metadata. They may not want to know the tensile strength of it.
BEAU PERSCHALL: Different use cases, yeah.
DADE OGERON: Right? But that data needs to live somewhere, right? And USD is a great place to put it. And so from there, then you figure out what your-- your outpoint.
ROBERT CERVELLIONE: Yeah, and I like to talk about USD and GLTF like this. USD is your Photoshop file. GLTF is your JPEG. One is not better than the other. They just have very different use cases. And you start in Photoshop, and all of your base work is there, and you have all your layers, and your structures, and your overrides, and your non-destructive layer editing, and all of those things that you do to put together a very complex photo, in the output of JPEG, or PNG, or whatever it is you need for the other side of it.
And after you make the JPEG, you don't delete the Photoshop file. Right? You need both. And they all have their use. And as we all know in Photoshop, that whole JPEG/PNG output could all be automated, right? You can just rip through a batch of PSDs and output what you need, right? Same thing on the USD side. So think of it like that. You have this highly structured, nondestructive layering system. And it's not that we don't need FBX and GLTF. It's just that they're sort of byproducts of the master deck of information there.
PAUL TEALL: It's the perfect analogy. Because, yeah, we're going to live in a world where not one of those format wins, and there's going to be different use cases for those different formats. But the key word is they need to be interoperable. You don't want to have something that you go, I need it for this other use case, and it's not going to work.
BEAU PERSCHALL: Right.
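[Robert's PSD-to-JPEG analogy maps directly onto batch asset derivation: keep the master files, script the derivatives. The sketch below is hypothetical — `export_asset` stands in for a real converter such as a USD-to-glTF pipeline, and the file naming is illustrative only.]

```python
# Sketch of "rip through a batch of masters and output what you need":
# walk a folder of master assets and emit one derivative per target format.
from pathlib import Path

TARGET_FORMATS = ["glb", "fbx"]

def export_asset(master: Path, fmt: str, out_dir: Path) -> Path:
    # Placeholder: a real pipeline would invoke a converter here; we just
    # write a stand-in file so the batching logic is visible.
    out = out_dir / f"{master.stem}.{fmt}"
    out.write_text(f"derived from {master.name}")
    return out

def batch_export(master_dir: Path, out_dir: Path) -> list[Path]:
    out_dir.mkdir(exist_ok=True)
    return [
        export_asset(master, fmt, out_dir)
        for master in sorted(master_dir.glob("*.usd"))
        for fmt in TARGET_FORMATS
    ]
```

The key design point of the analogy survives even in this toy version: the master in `master_dir` is never modified or deleted, and every derivative can be regenerated from it at any time.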
PAUL TEALL: OK. I said, we were going to do at least 20 minutes for questions. So we have 1 minute left before we hit the 20 minute mark. Any final thoughts that people up here want to give before we turn it to this crew? We're ready to turn?
DADE OGERON: I think we can turn.
BEAU PERSCHALL: Ready to turn.
PAUL TEALL: Let's turn. So, OK. We do have a mic. I was going to say we don't have a mic. But I see one being walked up here. So if people have questions for this group about any of the things that we've covered or any of the things that they're working on, come on up. Don't be shy. It could be about where to go for po'boys.
AUDIENCE: So what do you guys think it's going to take for some of these DCCs to actually get on board? I know there's been some that have. But the reality is there's a lot of different programs that are specialized, especially we do a lot of stuff in the retail space, and like GLTF for instance has been adopted. But I bet if you run it through the validator, 50% of the time they're not valid GLTFs. So what do you think it's actually going to take for them to say, OK, we'll do this. We'll make sure it's interoperable? Because they are incentivized still to lock you down into their ecosystem.
DADE OGERON: I mean, I could take a first stab at that. So I mean certainly I think one of the trends that we have seen is that many of the DCCs are moving in that direction to want to standardize. I think the challenge has been getting all of those different DCC makers to come up with a concise set of what those features are that works across the entire industry.
I think that's where you're getting one implementation that just doesn't quite work the way another implementation does, and so forth, and so on. I will say — and I know there's two guys standing here on the stage — that Omniverse has done an amazing job, as a DCC tool, I still consider it, of releasing those layers to you, of the control that you have in the USD, and really letting you see how extensible that is.
BEAU PERSCHALL: It's also about breaking down the walled-garden idea and mentality, in that Omniverse has been trying to get partners to just build connectors in and out, so that you can take your data and move it wherever it needs to live, so that it's not constrained, so that you're not blocked from doing it. So Autodesk has connectors in, and there are God knows how many other connectors. I mean we've got them into Unreal. We've got them into Rhino. We've got them into-- it's growing by the day at this point, to try and make sure that data flows wherever it needs to be, and the system is open enough that people can write those connectors in and out.
So it's not like it's something we're giving away. It's like, take it and use it so that you can now move your data and own it without having to worry about being locked in.
ROBERT CERVELLIONE: And to the point, though, in the beginning we had to do a lot of the heavy lifting to get the industry to see it. Right? We wrote the Rhino, Revit, Archicad, and SketchUp connectors — the list goes on and on. Right? It's a huge ecosystem.
So we had to take care of that, because I think the industry needed to see what a reference standard looked like. How do you take that and move it over? But now what we're seeing is the number of connectors we have to build at NVIDIA is slowly going down, because the companies have finally realized, OK, now I get it. It's time to jump on board. And so we actually see a huge ramp-up of the applications themselves saying, OK, we understand. USD is important and we're going to start implementing it now.
So I think we're at this kind of precipice, inflection point, where we don't have to prove it anymore, and they understand it, and six months from now they'll all just have FOMO if they don't make one. So you know?
JULIAN-ALEXANDER NEAGU: Yeah. It comes down then to market pressure. And this is the inflection point right now. So I think the tech stack is there. I think the industry is convinced. It maybe hasn't bubbled up too far into the creator community yet, right — to say, oh, you can build something in Maya, Max, whatever, and you can distill it into a GLTF with one click, and you don't even need to rebuild certain materials. You can even have glass materials in there. That's all basically possible.
I think it's just not yet there in the communities to create this pressure on the slow movers. And then I think this will figure itself out in the next couple of-- one year. Yeah, one year.
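[The audience's point about DCC exports that fail validation is easy to see even at the most basic structural level. The sketch below is nowhere near the full Khronos glTF-Validator — it checks only the most fundamental rule of glTF 2.0, that a document must carry a top-level `asset` object whose `version` string identifies the spec version.]

```python
# A minimal structural sanity check for a glTF 2.0 JSON document.
# Real validation (accessor bounds, buffer views, material references,
# extension requirements) is far stricter than this.
import json

def basic_gltf_check(text: str) -> list[str]:
    """Return a list of errors; empty means the basic structure is OK."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    errors = []
    asset = doc.get("asset")
    if not isinstance(asset, dict) or "version" not in asset:
        errors.append("missing required asset.version")
    elif not str(asset["version"]).startswith("2."):
        errors.append(f"unsupported version {asset['version']}")
    return errors

print(basic_gltf_check('{"asset": {"version": "2.0"}}'))  # → []
print(basic_gltf_check('{"scenes": []}'))  # → ['missing required asset.version']
```

Exporters that fail even checks like this are what the questioner is describing; the full validator catches far subtler interoperability breakers.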
PAUL TEALL: All right, other questions? Somebody's got to have something.
BEAU PERSCHALL: Awkward silence.
PAUL TEALL: Oh, we got one. Here we go.
AUDIENCE: So my background is in the AEC industry. So I guess this question is geared towards Rob, but really everyone. We saw yesterday with Autodesk University talking about forma and flow and all these different industries, and we're kind of like ever converging where particularly in the AEC industry we're seeing fabrication, construction, and media, all be ever present and layered on top of each other. So when it comes to engaging all these different audiences from someone say in procurement who's looking at product information, to the architect with their finishes, et cetera, how do you see this-- and again with the ecosystem of tools, a data standard regulating the industry by and large where everyone can kind of buy in through these different ecosystems? And what does that potentially look like, given all the new advancements we're seeing here with Autodesk?
ROBERT CERVELLIONE: Yeah. I mean, let's put that on the operations side, right? Data, procurement, operations, all of that kind of stuff. One of the things I'm most excited about with this sort of universal format is that all of that Revit data that the architect spent three years making can now actually move into construction, and then from construction into operations, and into procurement. Because it's actually a very open platform to count things, to do takeoffs and quantities, and to understand where something lives in space, so that you can track stuff, right?
All of those kinds of things USD is amazingly open enough to accept. So what I like to say is we can start to build tools now on that platform that you can confidently say that data is open. It's there. You can build tools. And then regardless of where that source came from, once it hits that, I can now compute on top of it. So whether that's doing area takeoffs, or figuring out where it is, or just understanding the temperature of a room, because I piped the sensor into a Revit model-- by the time it comes to operations, nobody cared where it came from. They just know it's this thing. It's got all of this rich data on it, and I can keep piping more data into it.
So for me, I feel like it's going to hopefully allow our data to stop having these cliffs where, OK, at the end of design, somebody goes and rebuilds the whole thing for a construction model. At the end of construction, somebody rebuilds the whole thing for a facilities management model. And then somebody else rebuilds the whole thing all over again for a planning and operations model, an asset inventory and tagging model — name all of these disparate systems.
I mean my hope is that now that we can actually start to move these things down the pipe in this unified thing, and now you know how to go into that object, and you can get a piece of it, and you can pull it out, and you can pop it into a dashboard or whatever it is you need to do with it, that regardless of the 5,000 tools out there that somebody used, that once it gets piped in there, your script that goes and counts stuff will still work. And that's what I hope that this kind of helps illuminate.
PAUL TEALL: This stranger.
ROBERT CERVELLIONE: Yeah. Only because this is live streamed, so when you talk, the internet hears you. That's why you have to go to the mic.
AUDIENCE: So I'm not a 3D expert. But I very clearly see two forces that I think are at tension with each other, at least in the near future. One is this exponentially higher need for 3D content. So think about all the things that you guys are-- it's the world you guys live in-- metaverse, and NFT, and web 2.0, and all the things that I can't even spell.
But the second is what we've been talking about for the past few minutes, which is the underlying need for things like data standardization, extensibility, interoperability, data structures. You combine those two forces, which I think are going to continue to sort of tug at each other for the foreseeable future, I'd love to hear, especially Beau and Robert from you guys at NVIDIA, what do you guys think are some real commercial use cases that you think will become real in 2023 that we should all be aware of?
BEAU PERSCHALL: That's a good question. I think that 3D is still hard just in general to do. And the needs are expanding beyond, like I said, beyond visuals to simulation, and researchers, they're not 3D people, but they need the data. It's like coffee grounds to make coffee. They have to have these data sets in order to train their models. And they have to be consistent. So that's really the starting point.
At the end, they just get rid of the coffee grounds. They don't need to own the 3D content. So I think there's certainly a lot of-- if you're doing digital twins of warehouses and want to understand how to place your work cells for your robotic arm so it doesn't kill the workers in the facility, and so it's efficiently laid out to maximize throughput. Those are the kinds of things that are really starting to take off. We're working with God knows how many clients that are trying to build industrial applications, between BMW and--
ROBERT CERVELLIONE: Yeah, there are actually a lot of cool applications — and not just 2023, they actually came in 2022. Like the public ones on our website, such as the BMW factory of the future. It's a great example of 20 pieces of software that never lived in one place, plus operations. And my favorite thing about that one was they went as far as modeling the behavior of the humans in the factory — they MoCapped them and put them in there.
And I thought this was a great example, because why would you want to do that in this physical thing? Well, they figured out how to help make the user more comfortable. Did the rack need to be taller? Did they bend over too much when they went to go grab a part? Was it too heavy? Was it too far away? It actually makes them meet. Now you have this whole thing that you can simulate. It makes a meaningful impact to actually people's lives.
Or how can we get the robots to more safely run around an enormous warehouse? I like to think of that as like mixed reality. It's like robots and humans, right? That to me is mixed reality. How can we get those two things to be safer, and especially when it comes to training data. Right? How do you train a robot now?
Well, you stick it in a situation, and you let it go. But the problem with that is you literally have to stick it in a situation and let it go, right? What you want to do is be able to generate a million situations in five seconds, so that the bad ones don't actually have to have happened for it to have learned. And so I think those kinds of meaningful impacts happen now.
But again, the important thing there is a robot vision is an actual camera. So in order to train a robot in the virtual world, it literally needs to look exactly like the physical world, otherwise the images you're generating won't be useful. So starting to pair those things together I think are today making really meaningful impacts.
BEAU PERSCHALL: Right, and having data that satisfies those needs of both the visual and the metadata, that there's real value in having that kind of data set.
DADE OGERON: I was going to add that also the challenge there is-- and this is where I know soon we're going to be seeing more and more AI being used in this way. But also making it easier for artists to create in their DCC — things like UV-ing. UV-ing shouldn't be something that an artist needs to do anymore. We should be solving that. And we are. There are some UV-ing tools that are amazing now.
But that's the thing that 3D artists shouldn't be doing anymore. Naming their shapes shouldn't be something that they necessarily have to do anymore. We should be able to name the parts of an object for them, so that data becomes richer once it goes out. Because again, garbage in, garbage out. And so I think that's where you're going to start seeing more and more controls for the creators to be able to make that content more rich, so that it can flow into these different pipelines.
BEAU PERSCHALL: Right. Yeah.
DADE OGERON: And be more useful.
BEAU PERSCHALL: Yeah. I think that just essentially absolving them of the need to have to know how to do all of it, and give them the tooling on the back end that does it for them, or puts up enough guardrails they're not like bouncing all over the place to try and figure it out.
DADE OGERON: We need a lot more 3D artists in the world. We need a lot more, but we also need tools that make it easier for them to generate and create content.
ROBERT CERVELLIONE: Yeah. I think we were talking about this the other day. Sometimes you open up a model of a chair, and it's box1, box2, box3, all the way to box8. OK. Fine. That's what they made. So this is where I'm excited. Can the AI engine just say, OK, no, you actually modeled a chair. And so that's the seat. That's the back. That's the side. It's made of wood. And all of this rich metadata we're talking about — now, like, OK, I see what you're making. There's no reason for you to have to sit there and spend your time typing stuff in there.
Like, OK, you modeled the arm of a chair. And that's the caster. And that's the plastic caster. And oh, it's plastic. OK, so now in the SimReady, I understand if something hit it hard, it's going to break. It's plastic. So being able to pull that — I think for me, 2023, that's the exciting thing where AI is going to start hitting 3D content, especially coming from the architecture world. I like to talk about Revit: you open it up, but I'd say 1 of the 57 properties is actually properly filled out on the object, because nobody really has time to go fill them all out. So that's where I think AI is going to help.
DADE OGERON: Yeah, and I think as we had discussed yesterday, I think you're going to start to be able to see that could actually happen too on the server side.
ROBERT CERVELLIONE: Exactly, exactly. It's like, OK, they uploaded box, box, box, box, box. But no, it's really a car. OK. Well, great. So let's completely tag, and do everything.
DADE OGERON: Yeah, do it for them.
ROBERT CERVELLIONE: Yeah.
PAUL TEALL: All right so, we got five minutes left. But oh, you have a question? Perfect.
AUDIENCE: So my background is AEC. I guess so this is a question to Robert. The geometrical metadata is important. And all the non-visible physical properties and thermal properties as you mentioned, those are critical as well. So standardization obviously as, I understand it, is an utmost priority for you guys. But what about other industry common, and industry critical, or even owner specific critical custom metadata that needs to be ported into let's say Omniverse, how do you plan on standardizing that?
ROBERT CERVELLIONE: Standardizing — what I like to say about USD and standardization is that there's a platform in USD that has a standard way of capturing metadata; what that metadata is, though, is completely open, and layerable. So if you get an object from somebody, and you need to put five more properties on it because they're very specific to you, you can literally layer that data on top of it, in such a way that the original object never sees that proprietary data, too, which is really interesting.
So is that what you're kind of getting at?
AUDIENCE: Yeah, but more specifically, the AEC industry has this illness where everybody creates their own standards. So something as simple as a title block, a sheet name, somebody will call that metadata drawing name, or a drawing. They will call it something else. And in my opinion, giving that flexibility is not standardization. Creating some very rigid boundary around it, and then forcing users to follow that rule, rather than creating a million different workarounds in Revit, that is sort of a direction of standardization.
And the AEC industry unfortunately is still suffering very, very much.
ROBERT CERVELLIONE: OK. Yeah, so agreed.
AUDIENCE: Like, that type of standardization.
ROBERT CERVELLIONE: Yeah, so agreed. So USD has the flexibility of allowing you to layer lots of things on top of each other. But yes, you can call it one thing and somebody else can call it something else. I think there needs to be a much longer conversation about what gets standardized. Are there architecture specific metadata standardizations that we want to start to roll in and adopt? Because it is important to make sure that if you need to go grab a property, that there might be a common name for it.
And I think there is talk about, OK, so at that metadata level, how do we start to call it the same thing. And I think there is a lot of conversations happening around that to be able to unify it. I think to your point, let's call USD step one, like a platform and a place where we can robustly hold such standards. But yes, I agree that we can then go one step further, and make sure that all of those properties that live inside of that USD file have a common name, all the way down to you want to use CSI Master Format? Great. So everything has CSI code on it, and that's what you want to do.
So there are lots of standards out there that we can start to talk about. But yes, I do think there needs to be a longer conversation about which ones do we adopt for all of AEC metadata.
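[Robert's layering description — your proprietary properties stacked on top of an unmodified source asset — can be illustrated in plain Python. This is not the actual pxr USD API, and real USD composition (sublayers, references, variants, opinion strength) is far richer; the sketch only shows the core idea that stronger layers override weaker ones per property, without the weaker layer ever being touched.]

```python
# Plain-dict illustration of layered, per-property overrides.
# All layer contents here are hypothetical examples.

def compose(*layers: dict) -> dict:
    """Merge property dicts; later (stronger) layers win per property."""
    composed = {}
    for layer in layers:
        composed.update(layer)
    return composed

vendor_layer = {"name": "chair_01", "material": "oak"}   # as delivered
studio_layer = {"cost_usd": 120, "supplier": "ACME"}     # your private data

asset = compose(vendor_layer, studio_layer)
print(asset["material"], asset["cost_usd"])  # → oak 120
print(vendor_layer)  # unchanged — the vendor never sees your properties
```

The naming problem from the question lives one level up from this mechanism: the layering works regardless, but whether your `cost_usd` and someone else's `price` mean the same thing is exactly the vocabulary standardization the panel says still needs agreeing on.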
DADE OGERON: And this isn't specific to AEC, but I think what you're going to see is that there is a subset of data that we can all agree on. At some point there's a subset we can agree on. And then there's always going to be extensions, or things that take that further, that people are going to want. I mean we see that with GLTF. Right? We see that with MaterialX. It's: we all agree on this. This is going to work for 90% of the cases. And then there's this other 10% that we're just never going to be able to agree on. But we need to be able to support that.
And so in a lot of ways that's how we look at it, especially with our modelers for TurboSquid: well, let's just give you a subset. And if you can successfully hit this subset of standards, then we're not going to worry too much about the other extra 10% right now. We'll work, as time goes by, to figure out that other 10%. But let's just decide on a platform, and decide on a subset.
JULIAN-ALEXANDER NEAGU: Yeah, and deciding together is a good keyword, as the process has to be democratized, right? This is also why we see a lot of potential in it coming from the open-source community. And we want to avoid one company or one specific standard basically ruling over everybody, right? That can be frustrating at times, because it takes longer. But especially from our side, with the experience we have with Khronos, it's a long process. The more people are engaged in it, and the more people are working on these standards, the more we can treat it like an open-market evolution of converging on a standard — OK, we now call it transmission, not refraction, things like that, right?
And these standards can't be forced in that regard. That's at least our standpoint.
PAUL TEALL: We're right at time to here too. I'm going to end it on that note. Thanks to all of you for participating in this panel. And thanks to everybody that showed up.
[APPLAUSE]
Come to the happy hour tonight. Dade can tell you how he got his hands on a pristine FBX model of the original Ironman in 2009 legally. He'll give you the back story.
DADE OGERON: It was earlier than that.
PAUL TEALL: Yeah. All right. Thanks y'all.
BEAU PERSCHALL: Thank you.