Description
Key Learnings
- Learn about how to take advantage of VRED Core streaming capabilities.
- Learn about the business ROI of a streaming collaboration platform.
- Learn how to implement on-demand visualization as a service.
- Learn how to visualize design data in a device-agnostic environment.
Speakers
- Lionel Graf: Lionel Graf is an implementation consultant with the Automotive Consulting Team at Autodesk, Inc. He has been working for over 14 years in the rail industry as a creative design production manager. He specializes in creative design visualization and communication, with a deep knowledge of real-time technologies and processes to achieve high-end visual quality for aesthetic design communication and real-time design reviews using VRED software, Alias software, Maya software, 3ds Max software, and creative-field standard software for image creation.
LIONEL GRAF: Welcome to this class called Road to Digital Shop Floor, or how to streamline the decision-making process with VRED Core. My name is Lionel Graf. I'm an implementation consultant for automotive at Autodesk.
First, a bit of a safe harbor statement. What I will share with you today may include some forward-looking statements, which may differ from what we actually do in the future. It represents our intentions as of today and is subject to change.
So let me first tell you a bit about myself. I've been working at Autodesk as an implementation consultant for seven years. Our role, at Autodesk Consulting, is to support our customers in the adoption of solutions and their integration into the production pipeline, which can involve customization.
Before that, I led a digital design team in the manufacturing industry. I was trained as a creative designer and specialized in digital design communication and real-time rendering, VR, and AR.
So what we will talk about today is a prototype application developed by Autodesk Consulting, whose purpose is to democratize product visualization. It allows users to access visualization data from anywhere, on any device, in collaboration, and it leverages VRED's streaming capabilities.
It is based upon a custom web server, which exposes available visualization data to the user and manages remote rendering instances powered by VRED Core. So anyone can use their web browser of choice and experience a high-quality visualization with VRED.
What you will come away with today is, first, a better understanding of what you can do with VRED's streaming capabilities, how this makes visualization truly device agnostic without compromising on visual quality, and what it takes to build an on-demand visualization service. And finally, you will have a better idea of the business ROI of a streaming collaboration platform.
So maybe let's start with where this comes from. There are three compelling events which motivated us to invest time in developing this solution. The first one is how the automotive decision-making process has changed in the last decade. Most of it is now 100% digital in the early design phases, and we lost the ability to work around the shop floor, which was, back in the day, full of clay models that we could gather around, discuss, and compare.
Today, we often need to set a meeting, prepare a 3D model, book a presentation room equipped with the right hardware and software, and make sure everything works before the meeting. So how could we make it as easy again as working around the mock-up?
The second one is the digital design pipeline. Digital workflows brought some great benefits. We gained efficiency with rapid prototyping tools, and we can visualize designs earlier, with a level of realism that is every day closer to reality, which lets us make informed decisions with a growing level of confidence.
We can even make lifelike experiences thanks to VR technology, which has become mature enough to be easy to use and affordable. But it comes also with some challenges. It requires specific knowledge to deal with content creation tools, and it's not rare to use three or four of them, depending on the design phase and what we need to achieve.
Sometimes we even need the support of experts when making photorealistic imagery, immersive experiences, or simulations, for example. And all those activities require dedicated hardware that everyone may not have access to, especially decision makers, who then rely on the production team to be able to see the work in progress.
And maybe the most challenging part is dealing with the amount of data that is generated, and managing it in a reliable way to ensure that the right version of the files comes together on the day of the presentation.
The third compelling event is a technological opportunity. Autodesk VRED is a well-known visualization software in the automotive industry that allows you to bring complex data to life. It is used to create high-quality renderings and interactive experiences; visualize, review, and validate with ease and accuracy; and experience and collaborate in real-time 3D environments on any device, including VR.
But VRED also offers a web-streaming feature, which allows any VRED session to be streamed over the web, and thus be displayed on any device, including portable devices: phones, tablets, and more. So we thought there was a real need for a solution to democratize visualization and make it easier to access high-end visualization data and experience designs through a web browser, on any device.
Such a solution would drive operational time savings by reducing the need for an expert to find, prepare, and put together the necessary data for a presentation. It could also make access to the right data easier by removing technological barriers, giving anyone the ability to consume visualization data. And ultimately, it could shorten design cycles, by shortening the time to find the right data to present and by detecting wrong design directions earlier.
So what I propose is to have a look at it. I will now switch over to my web browser and run you through the prototype application's main functionalities.
So the first thing the user will need to do is to log in. The system holds its own user base, so we declare users in the system and manage them there. In the future, we can imagine connecting it to an Active Directory or anything else that you use to manage the users in your company.
Once the user logs in, the home page is built to be like a blog. You can see here a list of data that has been shared with me, or shared with everyone who has access to the system. And the only thing you need to do to review the data is click on the link; then the system will take over all the heavy lifting. By the way, it's not working.
Let's open this one. The system will identify an available resource to render, open the file, and, once everything is ready, stream the content to the web browser.
So there, inside a web browser, you can experience the full VRED file. You have access to everything that has been prepared or programmed: the various viewpoints, animations, anything that could be relevant to show. So, for example, here we have all the material variants, and we can, from there, experience the whole variation of the model, whether it is material or geometry. And again, it's inside a web browser, so accessible from basically any type of device.
And to demonstrate that, I will also very quickly show you how it can look on a tablet. I will connect my own tablet here and show you the same file-- sorry for that. Here we are. It's a standard tablet, and we can experience the same kind of thing here.
And what is really interesting with this type of solution is that whatever device you are using, you will have the same experience with the same quality. It will not adapt the quality to the capabilities of the machine. All the heavy lifting, all the rendering work, is happening on capable hardware in the background.
Another interesting thing is that we are relying on VRED and its capabilities in terms of API programming and customization. One interesting use case here is when two different users are looking at the same data. The system recognizes that multiple people are looking at the same thing, so it automatically creates a collaboration session that puts the two users together in the same environment.
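As an illustration of how that automatic pairing might work on the server side, here is a minimal sketch; `active_sessions` and `get_or_create_session` are hypothetical names, not the actual implementation:

```python
import secrets

# Hypothetical in-memory registry: post id -> private collaboration session key.
active_sessions = {}

def get_or_create_session(post_id):
    """Return (session_key, created) for a post under review.

    The first viewer of a post triggers the creation of a new private
    session; anyone opening the same post afterwards joins that session.
    """
    if post_id in active_sessions:
        return active_sessions[post_id], False  # join the existing session
    key = secrets.token_hex(16)                 # private key securing the session
    active_sessions[post_id] = key
    return key, True                            # a new session was created
```

On the rendering side, the VRED instance would then use that key to open or join the collaboration session through VRED's Python API.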
At any time, we can also decide to share the viewpoint. So if, from this device, I want to know what the other user is looking at, I can click on their name, and I will share the same viewpoint as it updates. Here it is. And if I move to the other side, I will still have the same viewpoint here. So you can really have a presentation happening between multiple people. And at any time, you can switch back to your own viewpoint and have your own experience.
There is one last interesting thing we implemented here, though we can imagine a lot of others. So I will close this one for a second and switch back to the tablet. I can leverage some interesting capabilities with sketching, for example. I have a feature there that lets me add notes and say, hey, here I would like this line to be maybe a bit higher, and this thing here a bit shorter.
And once I'm happy with the comment, I can post it, and it will automatically apply some actions in the background: it will grab the image and create all the necessary information for the VRED scene to be able to replay it. So once it's done-- and we are almost there-- we will be able to recall the same state. Now I have a note group here, and I can see the note that the user Code Manager 1 took on this date. And at any time I can replay it.
The other interesting thing is that if I quit the scene, the system automatically saves everything that happened there and exposes the relevant information-- so I need to refresh there. And here I can see that there is a note.
And I can open this one. So if I'm working on the same project and I want to know what has been said about the work, I can go back to it. And it's saved with the scene, so at any time I can reopen the scene, or even download it if I need to work on it, and experience the same thing.
Here's one that has been taken. I will stop there with the live demonstration and give you a bit more information-- let me reduce that. What I would like to share now is where we think such a solution could apply.
So the ultimate goal is what we could call the digital shop floor, where project data would be updated automatically, and where anyone could jump in from any device and review, comment, and collaborate around the digital mock-up. But to get there, there are still many steps to take.
The first one, which we are addressing today with this application, is to make the data accessible in an autonomous way. Today, in a design studio, we are producing a lot of data. It can be models or visualization data sets, and whenever decision makers want to access this data, they need to know where the data is and have the proper hardware and knowledge to manipulate it.
If any of these conditions are not met, they need to reach out to whoever made the data and ask this person to prepare it for a presentation, whether at their desk or in a presentation room. It implies time and effort just to have a look at the work in progress.
What if users could decide what data is relevant to share, make it accessible through a device-agnostic solution, and use remote hardware to take care of the rendering part, making lightweight hardware able to display the most complex data sets in real time, or even in immersive environments?
We can even think about many more use cases where remote rendering and streaming could help. In a world where working from home has become almost normal, giving anyone the ability to join a collaborative design review is critical. We could also think about democratizing image production and using this remote rendering power to allow non-savvy users to request photorealistic images.
As we are using VRED, we can also take advantage of the capabilities offered by the Python API and automate tasks: merging multiple scenes together, for example, comparing them, putting them side by side or on top of each other, automating assemblies, or even automating daily aggregation of the live studio data.
But how can we achieve that? What does it take to build such a solution? The system we built relies on two main components. On one side, a custom web server handles what is exposed to the user. On the other side, rendering servers use VRED to render and stream the 3D content.
The web server manages the user list, the posts, and the data each user is able to review, based on user groups and permissions. The server back end handles user requests and drives the rendering servers on the other side. A rendering server runs the VRED Core session, giving access to all the scene content-- variants, viewpoints, animations, whatever is inside the 3D scene-- and ensures real-time collaboration.
Looking a bit more into the details, the web server is based on the Django framework. As Django works with Python, it pairs well with VRED's Python API and simplifies the integration of VRED scripts into the back end. The Django server manages the web front end, which is exposed to the user through web pages. Sorry, let me go back.
And it relies on three main models, in the Django sense. The first manages the user groups and permissions, which are used to filter the information a user can view. The second is the post, which is the data container: a post holds information about the 3D model location, the title of the post, a description, and metadata.
It also manages information hidden from the users, like who is currently viewing the post, which is used for collaboration, and the private collaboration session key to secure the session. The last model is the streaming server model, which lists all the rendering instances we made available, described with their IP address and their state, which is used to know whether a rendering server is busy or not, to manage the load balancing.
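The model code itself is not shown in the class; as a rough sketch, the three models could look like this (plain dataclasses here for illustration; the real back end would declare Django `models.Model` classes, and all field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """Data container exposed to users."""
    title: str
    model_path: str                                      # 3D model location
    description: str = ""
    metadata: dict = field(default_factory=dict)
    allowed_groups: list = field(default_factory=list)   # permission filter
    current_viewers: list = field(default_factory=list)  # hidden: collaboration
    session_key: str = ""                                # private session key

@dataclass
class StreamingServer:
    """One rendering instance running VRED Core."""
    ip_address: str
    port: int
    busy: bool = False                                   # for load balancing

def visible_posts(posts, user_groups):
    """Filter the posts a user may view, based on group permissions."""
    return [p for p in posts if set(p.allowed_groups) & set(user_groups)]
```

The `visible_posts` helper mirrors the permission filter described above: the home page only lists posts whose groups intersect the user's groups.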
So how does it work? When a user logs into the system, they are redirected to the homepage, which lists all the posts they can view and interact with. As soon as the user requests to view a post, the system looks into the streaming server list and asks the first available server to start a new VRED session, together with some scripts.
It then uses the post information to open the related file, checks if anyone is already reviewing the same post, and creates or joins the collaboration session-- everything automatically, without any action from the user.
And once the file is open and ready to stream, the user is redirected to the 3D review page, which embeds the VRED stream app, giving access to all the variants, viewpoints, animations, and interactive features set up in the scene.
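Put end to end, the dispatch logic described above is essentially: find the first idle rendering server, reserve it, and point the user's browser at its stream. A minimal sketch, with the server records and the URL scheme as assumptions:

```python
def dispatch_review(post, servers):
    """Send a review request to the first available rendering server.

    `post` is a dict holding at least 'model_path'; `servers` is a list
    of dicts with 'ip', 'port' and 'busy' keys. Returns the stream URL
    to redirect the user to, or None if the whole cluster is busy.
    """
    for server in servers:
        if not server["busy"]:
            server["busy"] = True  # reserve the instance for this session
            # The real system would now start VRED Core on that host,
            # load post["model_path"], and create or join the
            # collaboration session before redirecting the user.
            return "http://{ip}:{port}/stream".format(**server)
    return None  # capacity exhausted: no free rendering instance
```

Returning `None` when every server is busy reflects the capacity limit discussed later: the number of concurrent users is bounded by the rendering cluster size.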
So what's next? Let me share some thoughts about how this could move forward. So far, the application has been designed around the statement that sharing-- I need to close that thing. So the application has been designed around the statement that sharing needs to be intentional. As a user, you want, or you have been asked, to share your work, so you need to create a post and decide who should be able to view it.
Another approach would be to make it even easier by automating the creation of the posts based on the data contained in lookup folders. Of course, this would require a reliable data structure to build the automation upon. But thinking further, this could automate data aggregation, or even keep an always up-to-date visualization data set ready to be reviewed, which could update itself as new parts come in.
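A first cut of that lookup-folder automation could simply scan a watched directory and turn every scene file it has not seen yet into a post candidate. A sketch, where the folder convention and the `.vpb` filter are assumptions:

```python
from pathlib import Path

def scan_lookup_folder(folder, known_paths):
    """Return candidate posts for VRED scenes not yet published.

    `known_paths` is the set of model paths already exposed as posts;
    any new .vpb file found under `folder` becomes a candidate post.
    """
    candidates = []
    for scene in sorted(Path(folder).glob("**/*.vpb")):
        path = str(scene)
        if path not in known_paths:
            candidates.append({"title": scene.stem, "model_path": path})
    return candidates
```

Run periodically (or from a file-system watcher), this is enough to keep the post list in sync with what the studio produces, provided the folder structure stays reliable.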
In that case, it may be a clever idea to use ShotGrid to manage the data, and take advantage of the data and task management features of ShotGrid to keep track of project lifecycles and connect them to the streaming server.
Here, you can see an example of how we can leverage the dynamic fields of ShotGrid to build the path to the relevant post, which will be generated based on the activity on the shot or asset, and call another page, managed by the web server we saw before, to review the published 3D file.
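The idea behind those dynamic fields is plain string templating: entity fields are filled into a URL that the web server resolves to the published file. A hypothetical sketch of the server-side counterpart, where the field names and URL scheme are assumptions, not the actual ShotGrid configuration:

```python
def build_review_url(base_url, entity):
    """Build the review-page URL for a published shot or asset.

    `entity` mimics a ShotGrid record: a dict with 'project', 'code'
    (the shot or asset name) and 'version' fields.
    """
    return "{base}/review/{project}/{code}/v{version:03d}".format(
        base=base_url.rstrip("/"),
        project=entity["project"],
        code=entity["code"],
        version=entity["version"],
    )
```

The web server then maps that path back to the published file and dispatches it to a rendering instance, exactly as with a manually created post.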
Making the service scalable could also go through a cloud deployment. So far, the system has been constrained by the confidentiality requirements of the sensitive data generated in an automotive design studio. It secures the data, but it requires managing and scaling the rendering servers as the demand grows.
Somehow, it can limit the availability of the service, as the maximum number of concurrent users is fixed by the rendering cluster capacity. Hosting such a service on the cloud could be the solution to make it really scalable and raise cloud rendering instances as needed.
We could even think about AR and VR. Late last year, our Autodesk technical sales experts put together a demonstration of how we could make AR and VR collaborative design reviews with VRED over the cloud. This has been made possible thanks to the support of AWS and NVIDIA.
For this, three components were required. First, Autodesk VRED, which has the capability to render a 3D vehicle in context, at scale, and from multiple stakeholders' points of view in collaboration. Then NVIDIA CloudXR, a streaming protocol that compresses images rendered server-side and decompresses them client-side at low latency, while transporting six-degrees-of-freedom data from the device back to the server to render the next frame in near real time.
And finally, AWS Cloud infrastructure, supporting scalable, on-demand, real-time graphics workloads with low-latency edge delivery capability. Thanks to this experience, AWS developed a so-called Quick Start to easily deploy VRED and NVIDIA CloudXR on the AWS Cloud.
This Quick Start is for IT infrastructure architects, administrators, and DevOps professionals who are planning to implement or extend their Autodesk VRED workloads on the AWS Cloud. If you want more information about the AWS Quick Start for VRED and NVIDIA CloudXR, please follow the link at the top of this page or refer to the class [INAUDIBLE] available online.
AWS is also giving a class, TR502979, with more technical information about how the AWS Cloud can help with streaming and visualization over the cloud. Finally, as we spoke about the solution leveraging VRED's Python API, we can think about more complex automation, like automating data preparation by handling background tasks for almost any CAD data conversion, optimization, material replacement, and so on, and serving ready-to-view data in the best quality.
Or streamlining visualization workflows by pairing live data polling and automated data preparation, to allow automated generation of large assemblies and present them in an interactive environment. Or offline rendering services, enabling users to request realistic images and movies from raw modeling data.
This list is, of course, not exhaustive, and we would be happy to explore your specific needs and use cases together with you. If you want to know more, please reach out to me or your Autodesk representative. Thank you for watching this presentation. Bye-bye.