AU Class

Get Ready for Real-Time Virtual Production in 60 Minutes

Description

Have you seen the amazing things the cinema and TV industry is doing with real-time virtual production (VP)? Do you find VP amazing, but don't have a clue where or how to start? Then this class is for you! We'll show you the entire process of building a real-time VP laboratory, from Revit software to construction, and choosing equipment such as cameras, microphones, computers, SDI routers, connectors, lenses, lights, sensors, and much more. We'll also dive into the software required for the whole VP workflow—such as 3ds Max software, Revit, Aximmetry, Pixotope, Unreal Engine, source control, and production management—and all the important steps, all while showing the cost of everything. Basically, you'll leave the class ready to build a lab for yourself or for your business's marketing department—which is exactly what we did! At IM Designs, we're saving millions on content production and travel, and by assisting other companies in producing content with our existing 3D production workforce. Autodesk University 2022 is going to be legendary!

Key Learnings

  • Learn about what real-time virtual production is and how it’s being used to transform content production forever.
  • Learn how to choose lighting, filming, audio, computing, recording, routing, and camera tracking equipment.
  • Follow a production-ready 3D workflow for RTVP scenes and use multiple dynamic inputs such as external screens and cameras.
  • Become confident and motivated about doing real-time virtual production and building your lab at home or at your company.

Speaker

  • Igor Macedo
    Igor is the CTO and founder of Imagine & Make Designs, a leading company in immersive software for AEC (Architecture, Engineering & Construction) with more than 300 XR projects delivered. He has been interested in three-dimensional user interfaces ever since he first heard of Ivan Sutherland's pioneering work of the late 1960s. He believes the time has arrived for the computing power available on mobile platforms to make virtual reality a market reality. As a result, any person or company experienced in real-time rendering technology and 3D asset creation is uniquely qualified for the opportunity to become a leader in this expanding market. Since 2010, he has focused on developing for the evolving field of extended realities in order to set standards and break technological barriers. Today, a significant connection between R&D and creativity is in place for anyone to get on board with this once-in-a-lifetime opportunity to be part of the history of cyberspace. He has worked with Inter&Co, Autodesk, BASF, HyperX, and MRV on large immersive projects and loves sharing tech knowledge and discussing applications of artificial realities.

Transcript

IGOR MACEDO: Hello, virtual production enthusiasts, thank you for your time. I'm very excited to present this class to you. It's going to be legendary. And I hope you get motivated and confident to start building your virtual production laboratory as soon as possible. The goal here is to get you ready for real-time virtual production in 60 minutes.

But you'll soon see that this subject is multidisciplinary and requires a lot of knowledge and practice in order to produce great content. This is not a sponsored class. So every manufacturer and software provider that appears in this presentation comes from our experience. And different ones can be chosen. So be prepared to receive an astounding amount of knowledge in a small amount of time. Now, let's get started.

So this class was planned and created as part of a training we give to everyone here at the company who gets excited about real-time virtual production and wants to work with it. We divided the class into four main essential learning objectives that will be explained in detail during the class. Those learning objectives are in chronological order, and we'll use the concepts and knowledge to understand and learn how to apply them in a production environment.

We'll start by talking about the basics of real-time virtual production and how we're using it to transform our content production forever and for the better. Honestly, we're now able to do things we didn't expect to be possible before and with great ease. We'll then dive into how to choose the equipment, hardware, and software while explaining the importance of them in the VP workflow so that you'll be better prepared to understand our RTVP pipeline.

That said, while we'll sometimes be talking about traditional studio equipment, we'll be focused on the aspects that are more important for virtual production. After understanding the importance of RTVP, its applications, and the equipment involved, we'll dive into our lean production workflow, our RTVP pipeline, and its essential steps, building on what we showed you in the first learning objective. The last objective is aimed at wrapping up everything that was learned, talking about the construction process of our RTVP lab from Revit to the physical world, and getting you motivated and confident to invest your time and budget into the real-time virtual production world.

During this class, we'll follow the learning objectives and use them as checkboxes in order for you to follow along. All the materials and the knowledge that I'll share with you are focused on what works for us and what is possible with the technology we'll present. Ultimately, I want you to learn from our mistakes and also from our successes. Now, let's get started already, shall we?

Real-time virtual production is part of the spectrum of computer-aided production and media production techniques. And to give you a good start on this concept, real-time virtual production techniques enable content creators and filmmakers to interact with the production process in the same way—and sometimes a better way—than they're used to interacting with the live-action process. Essentially, virtual production connects the physical and digital worlds.

How? RTVP combines virtual and augmented reality with computer-generated imagery and graphics engine technologies to enable production crews to see their sets and stages as they are composed, while capturing everything on set. In the end, those techniques can be used as a means toward creative problem solving. When I first saw and understood the consequences of using the technology, of course, I was very excited to get on board.

Our company specializes in XR software development, and we received a series A investment last year from Inter & Co. We're currently based in this amazing, modern building, among other Inter colleagues. While we specialize in extended reality software development for business, we also gained the mission of enhancing and contributing to existing Inter & Co. departments and initiatives using XR technologies.

The first department that could receive great help from the XR world in the least amount of time was the marketing department. Since they have an all-around need for amazing content and media production, empowering them would be super beneficial for everyone involved. And it was also a great opportunity to contribute to innovation and cost savings in the long run at Inter.

Real-time virtual production, here we go. So we had a few production things mapped out that could receive contributions from the real-time technologies that we were so used to working with. But we also had a few business requirements and strategies to follow: improving quality, helping departments do more with less, reducing the cost of production while improving quality standards, driving innovation as much as we could, and relying on automated pipelines and frameworks to increase productivity and ultimately allow more creative freedom.

Believe it or not, real-time virtual production was the answer. And you'll understand why during this presentation. Let me tell you exactly what the production needs were.

So we wanted to pre-visualize as easily as possible in order to validate the complete production process. This helps creative teams to explore ideas—including staging, camera composition, editing, and lighting—resulting in a custom-made trailer to demonstrate the project's potential. We soon felt the need to virtually simulate the dynamics of each physical shot so that we could anticipate possible difficulties and solve them before the shooting day. We'll explain this in more detail later in this class.

Inter & Co. and several of our clients were in need of constantly producing live content—events and presentations. You can already see that moving production from physical set preparation to a digital stage allows creative teams to have fewer expenses with logistics, renting, and physical materials. And recurrent productions, especially, benefit from reusable and easy-to-customize digital scenarios and virtual production frameworks for inserting videos, logos, 3D animations, and more.

Then there's the real live program—we wanted to create live presentations for all kinds of things, such as product demonstrations and product launch events. Marketing materials and commercials could be shot internally and enhanced even more with small production efforts toward achieving high-quality media production, all while having access to the raw footage and improving our growing library of visual effects, 3D models, and virtual production setups.

Another great example of recurring setups that can be reused are VP frameworks for podcasts. We also wanted to create courses and educational materials, which again can rely on reusable code from repeatable frameworks. We also wanted to explore the potential of virtual talents for our media production. We explored MetaHumans and traditional ways of creating avatars and virtual presenters using Maya and Unreal Engine.

Now, one of the amazing things that can be done with real-time virtual production is something called remote talent. Imagine that two directors of a company are presenting together side by side the launch of a new product to their audience, but one is in Brazil and the other is in the United States. We can basically stream the green screen from one location to the other and do the compositing as if the two or more talents were in the same space.

So basically, those were the things that we wanted to do. And all of them would satisfy our business needs, which essentially come down to continuous quality improvement. Arguably, the biggest advantage of virtual production technologies is that they expand creative horizons. Stages can easily be customized using VP software, and design elements such as props, colors, lighting, and much more can be edited and tweaked during the shooting stage of production. This empowers filmmakers to translate their vision to reality without requiring a limitless budget to do so.

Using a virtual environment, teams can transition from one scene to the next and contribute to the shooting performance, all of it happening at one location. Essentially, digital assets can be changed quickly, allowing filmmakers to shoot several scenes with the same actors in a smaller time frame.

Virtual production has completely transformed filmmaking workflows into more efficient processes that save time and money. Visual effects are superimposed in real time, meaning that less time is spent on the generally expensive post-production phase. By being able to shoot at virtually any place in the studio setting, producers can eliminate the need for travel. Additionally, since everything occurs in a controlled environment, the crew can save time by not having to account for changes in lighting, unpredictable weather, or any external factor that may impact a shoot.

Well, making creative decisions and revisions earlier in the process ensures that the right decisions can be made before and while actors are on set. This leads to cleaner takes and less rework. Technologies like motion capture, camera tracking, and the use of XR often lead to untested scenarios where innovation can thrive, in the form of finding better ways to solve creative challenges—which directly impacts the innovation efforts of your company.

By leveraging reusable code, data sets, and digital frameworks, the opportunity to automate processes becomes even more evident than in traditional media production workflows. As so many experienced real-time virtual production teams say, the only constant is change. This is a big advantage, as changes in post-production are very time-consuming. This technology therefore enables directors, crew members, producers, and actors to make required changes, focusing more on creativity than on the physical aspects of the shooting set.

And basically, this completes our efforts toward explaining what real-time virtual production is and how it impacts and transforms content production forever. Things will get even clearer throughout this class. Now, let's dive right into how to choose the lighting, filming, audio, computing, and software elements that allow those production objectives to become possible.

When we were planning to build our RTVP laboratory, one of the most difficult things to do was picking the right equipment, in order to avoid unnecessary expenses while achieving the best quality we could in terms of our media creation objectives. Here, we'll focus on how all the different pieces of equipment work and communicate with each other in a virtual production environment. We'll basically start by talking about equipment that might already be present in a marketing department and what it needs to have for shooting with real-time engines.

Well, before even turning on your camera, you need to think about your lighting array setup, aiming for a chroma-key, which will require an illumination strategy for both the background and the foreground of your studio or set. In the beginning, we didn't understand this. So this is how we started. And even then, we got some interesting results.

By far, successfully lighting a chroma-key project is the most important task for real-time compositing. Lighting the background properly can save you hours of needless frustration and problems. For example, when an actor stands in front of a large, intensely lit blue or green screen, two factors come into play and create spill. One is [INAUDIBLE] the green or blue light that radiates off the screen onto nearby objects or people. And the second is reflection on the skin or other surfaces from the screen, because it acts as a large reflecting card.

The first issue can easily be solved by using the inverse square law—creating some distance between the subject and the green screen. Reflections, however, aren't as affected by distance, so other methods must be used.

A very simple technique is to light the background at a lower level than the foreground. Remember that the more lumens you put on the background, the more color it will reflect onto the subject's body, face, hair, and clothing. For classic chroma-key setups, the light on the background should be about half of what you use as the key on the foreground.
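
To make that arithmetic concrete, here's a minimal Python sketch of the inverse square falloff and the half-key rule of thumb. The intensity and lux values are illustrative, not measurements from our lab.

```python
# Minimal sketch: inverse square falloff plus the half-key background rule.

def illuminance(intensity_cd: float, distance_m: float) -> float:
    """Illuminance in lux from a point-like source of given intensity (candela)."""
    return intensity_cd / distance_m ** 2

# Spill bouncing off the green screen, treated as a rough point-source approximation:
spill_at_1m = illuminance(intensity_cd=800, distance_m=1.0)  # 800 lux
spill_at_2m = illuminance(intensity_cd=800, distance_m=2.0)  # 200 lux: doubling distance quarters the spill

# The rule of thumb from the talk: background at roughly half the key level.
key_on_subject_lux = 1000
background_target_lux = key_on_subject_lux / 2  # ~500 lux
print(spill_at_1m, spill_at_2m, background_target_lux)
```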

The theoretical ideal key will have a perfectly even color in the background. The most common choice when it comes to lighting chroma-keys is to use something called space lights and lanterns, which help you produce diffused, soft lighting. In our case, we used a mix of Aputure LS 600d Pro and LS C300d II fixtures to light up our final RTVP lab.

Now, the foreground must be lit carefully to preserve the even background color while interfering as little as possible with the lighting of the subject. Here, it is especially important that the lights are not pointed straight down or angled toward the camera from the [INAUDIBLE] position, because they will create something called specular highlights on the floor that will appear in the compositing as [INAUDIBLE]

And then you can think about playing with some colored light effects in order to reproduce in the studio the light effects that will be present in your virtual scenario—such as police car lights in front of the talent—and ultimately match the color of a scene. Another example: this technique can be used when filming a scene with a virtual sunlit exterior. The talent can be made to look as if they're lit by natural sunlight by illuminating the subject with RGB lights that mimic the position and color temperature of the virtual sun.

Now, when it comes to the camera, what are the most important features that a good camera must have for doing virtual production? First, you want a video output that is either HDMI or SDI, which stands for serial digital interface—a standard for high-quality video and audio transmission over coaxial or fiber-optic cabling. The speeds currently range from 270 megabits per second up to 12 gigabits per second for SDI, which is the interface that you want in a professional setup.

The more bandwidth, the better, when it comes to outputting, for example, a raw video signal at 4K. You'll need to receive the camera feed in the video card of the computer that will do the compositing in real time. So the type of video interface will basically dictate the input of the capture card that you have to buy.
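
For a rough sense of why the interface tier matters, here's a back-of-the-envelope Python sketch of uncompressed video bandwidth. It ignores blanking intervals and protocol overhead, and the 20-bits-per-pixel figure approximates 10-bit 4:2:2 video, so treat the results as ballpark numbers only.

```python
# Approximate uncompressed video bandwidth to see which SDI tier a signal needs.

def video_bandwidth_gbps(width: int, height: int, fps: float,
                         bits_per_pixel: float = 20.0) -> float:
    """bits_per_pixel = 20 approximates 10-bit 4:2:2 chroma-subsampled video."""
    return width * height * fps * bits_per_pixel / 1e9

print(video_bandwidth_gbps(1920, 1080, 60))  # ~2.5 Gbps -> fits 3G-SDI
print(video_bandwidth_gbps(3840, 2160, 30))  # ~5.0 Gbps -> needs 6G-SDI
print(video_bandwidth_gbps(3840, 2160, 60))  # ~10.0 Gbps -> needs 12G-SDI
```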

There are different levels of quality that are not related to resolution. So depending on the sensor size and the optics, you end up with different definitions for the same 4K image. We started with a Lumix GH5, did some testing, and ultimately selected the Blackmagic URSA Mini Pro 12K, which has all the features that we wanted, especially timecode and genlock support.

Clearly, an important tool for real-time virtual production is the camera you use. We have experimented with everything from webcams to mirrorless hybrid cameras. But these are not ideal for the RTVP job. When you're selecting your camera tools for virtual production, an extremely significant feature to have is genlock. Anyone who's familiar with broadcast systems and multi-camera studio shoots will be familiar with genlock. For those not in the know, genlock syncs up the frame capture of different devices.

Lastly, you'll want to couple your camera with optics that can output the best image quality possible. But that's only the start. You want to capture and sync as much of the camera data as possible, especially the zoom and focus parameters. This can be done by attaching encoders—sensors that read the rotational data of the lenses they're attached to.

But what I truly recommend, after using them, is to invest in a full servo lens that has internal encoders, which will output lens data in a more precise and easier way because there will be no moving sensor parts attached to the lens. That alone will minimize the need for lens calibration, which is super important for final compositing quality, especially in a tracked-camera virtual production setup. That leads us to the tracking systems.

The use of tracking systems—basically, they are simply amazing. And they take real-time virtual production to the next level. Camera tracking is a technique that allows us to make the virtual cameras within the real-time rendering engine match the movements, positions, and other lens data, such as zoom and focus, of real-world cameras. It works by having optical and inertial camera trackers that inform the graphics engine about the physical orientation and position of the camera in real time.

With this technology, the correct perspective of the production camera is rendered relative to the virtual environment, merging the virtual and real production worlds. That alone will allow you to use camera cranes and do some crazy things, like creating the illusion of having multiple cameras by switching from one camera to another, moving the first one, and then switching back to it.

And one of the astounding possibilities of virtual production is to put real people and objects into a digital environment. To better represent this effect, it's advised to track the 3D background so that it moves in sync with the real camera movement. This, of course, has latency coming from different devices. And genlock will help you sync that. The tracking system will also help us simulate lens distortion effects on different positions and zoom levels as well.
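
One common way to handle that alignment, sketched below in Python under simplifying assumptions of our own: buffer the incoming tracker poses and apply them a measured number of frames late, so the virtual camera lines up with the slower video pipeline. The Pose fields and the delay value are illustrative, not taken from any vendor's API.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in meters, studio space
    rotation: tuple  # Euler angles in degrees
    zoom: float      # normalized 0..1 from the lens encoder
    focus: float

class DelayedTracker:
    """Applies tracker poses N frames late to match a delayed video feed."""

    def __init__(self, delay_frames: int):
        # Delay measured on set, e.g., with a genlocked clap test.
        self.buffer = deque(maxlen=delay_frames + 1)

    def push(self, pose: Pose):
        """Store the newest pose; return the pose to render this frame (None while filling)."""
        self.buffer.append(pose)
        # The oldest buffered pose corresponds to the video frame on screen now.
        return self.buffer[0] if len(self.buffer) == self.buffer.maxlen else None
```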

We initially tested camera tracking with a [INAUDIBLE] Sync and an iOS tracking app. But after seeing the benefits for ourselves, we decided to switch to one of the best camera tracking technologies available today, which is the ncam Reality system. It has inputs for camera encoders and several positional tracking strategies, using both reflective markers and natural features with computer vision algorithms to do the tracking.

With lighting, camera, and tracking technology, we can then proceed to the audio equipment. And by the way, this was supposed to be straightforward, but it's not. There are some caveats and tips in order to get it right as well. One good tip here is to get a set of microphones that will work with the same RF signal. And that will save you some money right away.

We got a Sennheiser EW 500 and a [INAUDIBLE] so that we could choose from different microphones, depending on what we were doing in the virtual production lab. For communicating between the command room and the stages, we got some SV200-W, which are good and almost inexpensive compared to the others.

For mixing, routing, and managing audio outputs and inputs, we use Behringer X32 consoles. They have all kinds of features, such as applying effects, filtering signals, digital output of audio, and a whopping 32 channels to work with. But for RTVP, the amazing part is that it can also get genlocked.

Or we can apply a specific delay [INAUDIBLE] to precisely match the latency of all the audio systems that go into the real-time processing. If you're using the digital output of the mixing table, the latency correction will not be applied to it, and you have to tweak the audio latency within a virtual production software, such as Aximmetry or Pixotope.
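
The underlying arithmetic is simple, as this small Python sketch shows; the frame counts and frame rate are example values, not measurements from our console.

```python
# If the video pipeline (camera -> capture -> compositing -> output) runs N
# frames behind, the audio delay should match it in milliseconds.

def audio_delay_ms(pipeline_latency_frames: int, fps: float) -> float:
    return pipeline_latency_frames * 1000.0 / fps

# e.g., a 5-frame video pipeline at 30 fps needs ~167 ms of audio delay
print(audio_delay_ms(pipeline_latency_frames=5, fps=30))  # 166.66...
```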

Now, let's talk about computer systems for a real-time virtual production operation. So what are the priorities when building a virtual production workstation? Well, a content creator workstation deals with large workloads—compiling shaders, baking lighting—and benefits from multithreaded CPUs for those use cases. But a workstation that will render things in real time gets more of an advantage from higher clock speeds, so each process [INAUDIBLE] as fast as possible, frequently running single-threaded tasks. A good balance between the two are Threadripper processors, such as the Ryzen Threadripper 5995WX, but you should probably be OK with a desktop Ryzen 5950X for most cases.

As for the graphics card or the GPU, the more VRAM the graphics card has, the better it will be at handling high quality and bigger 3D scenes that must be rendered in real time, while not going over the memory pool. If you're using a green screen studio, you'll be able to get by with just an upper tier GeForce RTX, such as the RTX 3090 or the upcoming RTX 4090.

But when using LED walls, your GPU will have to have the ability to synchronize with the LED walls, with multiple cameras and ray-traced graphics, which will probably require an NVIDIA Quadro Sync II board as well as a Quadro RTX card. In this class, we'll focus on the green screen setups as they are more budget-friendly. And basically, they're what we're currently using.

Every camera will end up feeding its video signal, outputting it through the SDI or HDMI to the computer that will composite the keyed elements from the green screen to the real-time rendering engine. And for that, you need a video capture card that can receive video inputs and output the PGM, or program, which is the composited result of what the virtual production software will be computing.

While you can use multiple cameras with a single computer, that is not advised because you lose the possibilities of evasion maneuvers if a computer crashes, for example. And you also put the workstation under more computational stress, which can eat up GPU resources easily. Another amazing usage of the video capture card is for receiving inputs from external sources that are not cameras. This can be, for example, external PCs, like I'm doing right now, or even casting devices such as Apple TV or Google Cast.

This allows you to put those external video signals into the 3D environments of the real-time renderers. And they can also be calls from a meeting software. They can be live presentations. They can be virtually anything you might want to give control to a talent that will be presenting.

One of the amazing things that we do all the time is to put a slide presentation on a laptop that feeds its video to the real-time virtual production workstations, while giving the user a presentation controller whose USB receiver is attached to the laptop. This creates a safer IT environment, as you'll be isolating the VP workstation from external files, while giving the presenters full control over their content.

And when dealing with multiple 3D scenes, loading things up in real time, buffering content, [INAUDIBLE] on the VP software, recording something—depending on the available workstations, you most definitely want an ultra-high-speed storage device with plenty of capacity. Remember, when loading anything on a computer, you're dealing with reading and writing operations that take data from your storage hardware to your RAM, video RAM, or possibly both. So go for a PCIe 4.0 NVMe drive with 7 gigabytes per second of speed and at least two terabytes of storage. You won't regret it.
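
Here's a quick Python sketch of why that throughput matters. Treat the result as a lower bound, since real scene loads also pay for decompression and shader work, and the sizes and speeds are illustrative.

```python
# Time to pull a scene from disk into RAM/VRAM is roughly size / sustained read speed.

def load_time_s(scene_gb: float, read_gb_per_s: float) -> float:
    return scene_gb / read_gb_per_s

print(load_time_s(35, 0.5))  # SATA SSD (~0.5 GB/s): ~70 s
print(load_time_s(35, 7.0))  # PCIe 4.0 NVMe (~7 GB/s): ~5 s
```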

Now, for controlling the VP software and creating macros for common operations, we use a MIDI pad and an Elgato Stream Deck XL. This leads to easier operations and the ability to create profiles for each production scenario. Remember, we want to create profiles and basically map the most-used features as templates for productions that can leverage reusable code, 3D scenes, and much more.

Now, beyond the RTVP workstation, you want to have a directing workstation that will receive multiple RTVP video outputs and control what is being shown to the viewer. This computer doesn't have to be super powerful. But it will help you ease the burden of the multiple things a VP operator must deal with. The video director will switch between the different cameras, control overlays, and live transmission during the production and shooting.

In our pipeline, the video director takes care of all the video routing and preparation of inputs and outputs between devices on the stage. This computer will also record the PGM, which can sometimes be in a raw format. So be sure not to forget that NVMe. For overall storage, we bought two great NAS servers that allow us to work at a high speed of 10 gigabits per second and back up our files to multiple cloud storage providers at the same time.

Talking about routing, there is a generally misunderstood piece of equipment that is killer hardware for the RTVP laboratory. But before talking about it: first, there is a device that can receive multiple video signals and output them as a grid of little squares, called a multiviewer. And you can also monitor audio signals coming from the SDI connectors into this device on the fly.

This is super cool not only for visually monitoring signals; it also helps you diagnose problems fast during production—for example, by literally seeing whether a required video or audio signal is being transmitted appropriately and where it came from. You will want to monitor your external devices, such as the slides computers and casting devices, as well—not only your cameras' raw green screen video feeds and the composited outputs from the RTVP computers.

And that gets us to the SDI router. This piece of equipment works as the brain of the media input and outputs. What it can do is to route any input to any output and duplicate input signals on multiple outputs. And this basically allows you to create setups for different production scenarios on the fly without having to touch your installed cables ever again.

Basically, you'll want to plug almost all your video signal outputs into the video hub's channels. And from it, you connect almost every piece of equipment that will receive some kind of video input signal. Then, from the video hub, you'll be able to duplicate and change what kind of video signal gets outputted on the fly—again, without messing with the existing cables. This can lead to a cleaner studio environment, as the cables won't have to be touched after a careful signal routing project. On both the SDI router and the multiviewer, name your I/O with easy-to-understand descriptions.
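
Many routers of this kind are also scriptable. Below is a minimal Python sketch assuming a Videohub-style plain-text TCP control protocol, commonly served on port 9990; the host address and the exact block format are assumptions on our part, so check your router's protocol documentation before relying on this.

```python
import socket

def route(host: str, output_idx: int, input_idx: int, port: int = 9990) -> None:
    """Ask the router to feed `input_idx` to `output_idx` (zero-based indices)."""
    with socket.create_connection((host, port), timeout=2.0) as sock:
        # In Videohub-style protocols, a block is a header line, one
        # "output input" pair per line, terminated by a blank line.
        sock.sendall(f"VIDEO OUTPUT ROUTING:\n{output_idx} {input_idx}\n\n".encode())

# Hypothetical example: send camera 2 (input 1) to the multiviewer feed (output 4).
# route("10.0.0.50", output_idx=4, input_idx=1)
```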

Now, with all your signals directly configurable, we can dive into switching and monitoring. This was something that I didn't give much attention to in the beginning, until we faced some production issues. With your monitoring ready, you'll want to have a switcher that matches your audiovisual signal interface so that you can control the PGM, or program output, that goes into your recorder—which, in our case, is the directing workstation and the seven-inch Atomos Shogun.

If you're starting out with RTVP, you'll see that a lot of software has the same kind of video switching and overlay capabilities. So why would we want a hardware switcher? First, you get other capabilities with this hardware. For example, it will have internal encoders that can stream without using your workstation's computing resources. But here are a few of my favorites, and what I consider to be the most important things that you're able to do with this equipment.

When a virtual production workstation that receives input from the camera and other devices crashes, you're doomed if you don't have a spare camera to switch to. And let's consider that all the workstation computers crash, and you're only left with the slides that were coming from the external laptop with the slide presenter. It's easy to see now that depending on how many inputs you have, you have the same number of evasion maneuvers to make. And this can give you enough time to restart things during a live presentation and get back on track in the case of an unexpected event.

Using a dedicated switcher, you can switch back and forth between two cameras, for example, while changing one or more camera positions. This allows your viewers to feel like they're watching a video with many more cameras than you actually have. And that will straight away save you a lot of money. This is literally super cool.

One thing is to capture a great image. But another is to compress it and stream it, delivering the best quality-versus-performance ratio to your online live audience. So to add on top of evasion maneuvers and also ease the computing requirements of the streaming process, you might want to get a dedicated encoder, such as the Blackmagic Web Presenter 4K. Adding to that, you can attach a backup internet connection to this device and be safer in case, let's say, the power goes out and you have a UPS for your equipment, but the internet goes away. That is simply amazing.

Well, your external laptop—responsible for the slides—or the casting devices that you want to have available most likely have an HDMI, DisplayPort, or Thunderbolt port. And, of course, none of them are SDI. So for that, you'll want those versatile devices that are bi-directional SDI-to-HDMI converters, which can receive and send signals from one interface to the other.

We also got reference monitors for optimal color accuracy, plus on-camera monitors like the Blackmagic Video Assist, so that the camera operator can see both the green screen and the composited image from the virtual production software. For recording, like I said before, we used both the directing workstation and the two Atomos Shogun monitors, one for each camera.

We mentioned before that synchronization of video frames and signals is crucial when working with multiple systems that might introduce some sort of latency. When you install all of the sensors in the device that you use, you have to sync everything up. Why is this important? Well, in the case of virtual production where you have a camera, a motion capture device, and software generating a virtual backdrop such as Unreal Engine and maybe an LED volume, all of those must be frame-synced to avoid problems.

Well, the timecode feature is a useful tool for synchronizing footage and audio because it helps you manage your media and match clips to the sound files; this information is embedded in the recorded raw files as metadata. You can set up your camera to generate timecode on its own or sync to an external source. Timecode is most accurate and effective when coupled with genlock.
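
Under the hood, timecode is just frame counting. Here's a small Python sketch for non-drop-frame timecode; drop-frame timecode at 29.97 fps adds frame-skipping rules that are deliberately left out here.

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """Convert 'HH:MM:SS:FF' non-drop-frame timecode to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s // 60 % 60:02d}:{s % 60:02d}:{ff:02d}"

print(tc_to_frames("01:00:00:12", fps=24))  # 86412
print(frames_to_tc(86412, fps=24))          # 01:00:00:12
```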

Now, genlock, or generation locking, is a broadcast technique used to synchronize multiple audio and video signals with more accuracy. This enables precisely frequency-locked video signals, where every device knows when a frame starts and when a frame ends. More specifically, genlock uses a reference signal as the timing base for the entire broadcasting system.

And just as a reminder, it is generally a good idea to opt for a genlock-compatible camera. A sync generator device will help you with synchronizing all connected devices to its internal clock, delivering flicker-free, tear-free, high-quality mixed output.

And finally, let's talk about the software packages that we can rely on. I'm going to be honest with you, the entire software stack of a virtual production framework can be gigantic. Here, I'll focus on what is more important when it comes to getting started. In the GitHub repository, software and equipment lists will be available so that you can check it out, serve yourself, and contribute to this community that we want to build.

For authoring 3D assets and scenes, depending on the workforce you have available or your skills, you might want to start with architectural design software. The reason is simple: in a camera tracking setup, the scale of the 3D scene must be accurately 1 to 1. So our AEC team generally starts a 3D scenario in a very similar way to a traditional construction project.
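
As a tiny sanity check of that 1:1 requirement: Unreal Engine works natively in centimeters, while Revit's internal length unit is feet, so an export pipeline has to land exactly on the 30.48 conversion factor. A minimal Python sketch:

```python
import math

FEET_TO_UE_UNITS = 30.48  # 1 ft = 30.48 cm = 30.48 Unreal units

def revit_feet_to_unreal(feet: float) -> float:
    return feet * FEET_TO_UE_UNITS

# A 10 ft wall must come out at 304.8 Unreal units, or camera tracking scale breaks.
assert math.isclose(revit_feet_to_unreal(10.0), 304.8)
```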

If you're aiming for 4K real-time ray-traced rendering, creating an amazing 3D scene is not going to be enough. It must be optimized—with proper UVs, and with care for material complexity, draw call count, and memory consumption—or you'll end up with limited rendering resolution capability. Those factors have a direct impact on the performance and realism that the real-time rendering engine will produce.

Unreal Engine is a great development tool. And it is the core rendering system of this framework. If you haven't heard of light baking, check it out, as dynamic illumination will impact rendering performance from both [INAUDIBLE] and ray-traced workflows. By the way, a great tip here is to use and abuse the NVIDIA DLSS, which stands for deep learning super sampling. It will dramatically improve your rendering performance with minimal graphics quality impact. Oh, and it's free.

To get started with real-time virtual production fast, reliably, and with a lot of learning materials available on the internet—honestly, just download Aximmetry. It has a Community Edition that is free, with a watermark. Even if you have, let's say, a webcam and a do-it-yourself green screen, it will most probably work out of the box. And you'll be playing with real-time virtual production in a day, for real.

Aximmetry will save you time and money when it comes to programming, using overlays, controlling inputs and outputs, creating super useful control panels, and even making them available through a web server. Added to that, you'll be able to connect your tracking system, stream, and record. See for yourself. I got started in the world of real-time virtual production with Aximmetry powered by Unreal Engine. I fell in love. And we've been using it for almost a year now.

In a bigger and more complex or mature environment, you'll probably couple your tools and equipment with specialized broadcasting software. vMix is an industry standard. And it has a lot of features. One of the features that we use the most is putting remote talent and managing video calls within the virtual production framework.

A free alternative that can be used is OBS. But sometimes you might face limitations there that vMix solves with features built right in. In a virtual production stack, you'll want to use the best of every tool. Most of the time, you'll be working with software packages that have overlapping capabilities—such as streaming, which can be done by Aximmetry, vMix, OBS, your switcher, and ultimately your dedicated Blackmagic Web Presenter 4K encoder and streamer. The same applies to chroma-key. Unreal Engine, Aximmetry, and OBS can do it, but you'll most likely get better results and free up computing resources if you use a dedicated Blackmagic Ultimatte, for example.

When we started, we explored combinations of hardware and software that went from basic, budget-friendly tiers to more advanced and professional ones. Now, let's talk about what you can expect and achieve in each of the phases that we've gone through, and also show you where we're headed next as the plateau for our media production objectives. Just to remind you: feel free to choose, test, and experiment with multiple combinations of available software and hardware. I truly encourage you to do so, as you'll get to grips with everything and confidently make your decisions after validating your needs.

Now that you have heard about equipment and the frameworks that we're using, let's talk about the money and some hardware combinations. We divided the evolution of our real-time virtual production endeavor into four tiers. And I'll show you how much you can expect to spend in each phase and the limitations and possibilities.

If you just want to get started experimenting with virtual production, this is the simplest setup you can make. And it can already be a lot of fun. A low-cost tier 1 investment can already dramatically increase the quality and possibilities of your media production. Here, if you have a camera with a video output, a capture card, a lit green screen, and a computer, you'll be doing RTVP in no time. Just download VP software like Aximmetry Community Edition—which is, again, free—and follow the tutorials. For this case, the investment can get up to $10,000. And you'll already achieve very nice results.

At tier 2, with $5,000 you can get a VIVE Mars, which was launched this year. With that, you'll be able to track up to three cameras for super competitive pricing. This wasn't available when we built our lab, so I couldn't forget to mention it here. I would also recommend getting better equipment that has genlock and SDI outputs in order to improve your setup, or you'll face issues when combining those devices. Add more computers, at least two cameras, and licenses for software such as Aximmetry, and you'll be doing amazing stuff with an investment of up to $50,000. And that's basically it.

Now, when it comes to tier 3—this stage is where we are right now, with all the equipment that I presented before. The complete list of devices will be available in the GitHub repository that will be shown at the end of this class. With this investment tier, you'll be able to shoot great media content for the entire marketing department and even provide real-time virtual production services to your clients.

You will definitely want help from more experienced professionals that will validate your infrastructure. This knowledge cost will actually help you save money by not spending on unnecessary or wrong devices. You'll be able to have outstanding professional grade equipment to power your RTVP lab with an investment up to $400,000. Depending on how much your company spends hiring external production teams, building sets for events, and other production scenarios, this investment will actually save you a lot of money in the medium to long run. This is exactly our case.

Now, with this technology and media production, the sky is the limit. And it really depends on the complexity of your production. Sometimes you might want to rent equipment instead of buying it. But the biggest difference here, with the LED volume setup, is that you'll be capable of doing something called in-camera VFX, which is basically capturing the visual effects right from the camera recording.

On top of that, the LED volume will be able to provide accurate and realistic lighting and reflections. So your overall workflow will have a lot of changes, and you'll need other kinds of professionals on the production team to help you out with all the added systems.

This concludes the second and largest part of this class. Now, let's dive right into the ready-to-follow workflow and pipeline that covers the real-time virtual production side of things, with a use case that needs external inputs—such as the slides that I'm using right now. The way the pipeline works is by minimizing or mitigating post-production work while increasing pre-production work. That allows us to better handle and fill the RTVP lab agenda with ready-to-shoot work and ultimately help multiple departments at Inter & Co. in the most efficient way.

Our goal was to have business and production requirements fulfilled at the same time—which we not only believed in; after a year of building and developing, I can confidently tell you that the dream came true. The roles and responsibilities that will be presented will most probably have to adapt to your reality. And you'll see that for several production scenarios, a bigger team will be needed.

In our case, we wanted to have a lean team at the VP lab that would empower external marketing teams with real-time virtual production technology and train them on its possibilities. In this part of the class, I will talk first about the roles involved, then about the pipeline and how those roles get involved in the processes.

Our XR background creating virtual reality software in real-time engines gave us the necessary workforce for dealing with 3D content creation and optimization for real-time rendering. We already had a complete organizational structure for software development. But in this class, I'll focus on the roles and responsibilities you must take into account for getting started with a lean workforce that will enable you to do the things we talked about before.

Believe it or not, there can even be a one man army that will take care of all the responsibilities described, which is what I did a long time ago. But eventually, you'll need help and more talented people in order to improve delivery time and overall content quality. Now, let's get to the real time virtual production part.

For a lean RTVP laboratory team, we plan to have five people. Yes, five people. Of course, depending on the complexity, you will, again, want to hire more people to divide responsibilities. But think about everything that can be prepared and left in a ready-to-shoot state. Think about the reusable scenarios in real-time virtual production setups. You will soon realize that it is possible to achieve great things with a lean team.

The first one is the creative production supervisor, who, in our case, acts directly in understanding and managing creative elements and validating the artistic needs of the departments at our company and, of course, of the external clients that come and ask for any kind of media production.

The second role is a technical production supervisor that, in our case, acts directly in understanding, managing hardware elements, and validating the technical needs for the media that is going to be produced. The third role is essentially a virtual production engineer that will take care of the programming, setting up control panels, broadcasting elements, data ingestion, and all the framework preparation in order to operate the real-time virtual production effort.

The fourth role is essential and will take care of engineering and operating the audio elements for the RTVP needs. The fifth role will help everyone stay on track in the pipeline; measure time, expenses, studio reservations, and equipment borrowing; and create amazing reports in order to enhance transparency for everyone. And that leads us to the pipeline.

So, learning from multiple sources and testing things out, we divided the workflow into the traditional three stages—pre-production, pre-light, and production—and then subdivided them into multiple parts of the workflow. Any change always comes back to the scope assessment, and then the subsequent parts are reanalyzed, because the steps have dependencies on each other. For small changes, real-time virtual production allows them to happen mid-stage.

The pre-production part is of extreme importance for us. It is easy to see that there are more steps in it, and when it comes to filling the RTVP lab's available agenda, most of the work can be done outside of the studio. This will essentially maximize media production quantity and contribute to the lab's return on investment. In the next part of this class, in every step's explanation, the team involved will be shown on the bottom right part of the slide.

Now, let's dive right into the steps. Scope assessment is the process of creating a document that displays all the project information to provide clarity among all team members and stakeholders. When a demand comes in, the creative and technical production supervisors will understand what the client is trying to achieve, and the production analyst will collect the requirements while creating tasks around them in a management platform such as Jira or ClickUp. This will be present in a shared final report, trackable in the version control system change log.

Next, we look to get and make available all the data necessary for the development and preparation of the project. Things such as scripts, storyboards, and any media listed before in the scope assessment must be collected from the department or the client. For a marketplace live event, this could be product images, manufacturers' logos for the image overlays, video animations that will play in the background, and so on.

Now, in the content production step, the VFX, virtual art, and CAD departments will prepare all the virtual content required for the project, such as 3D scenarios with their optimized assets. If you've worked with real-time rendering before, you'll be familiar with 3D asset optimization techniques, allowing you to develop large scenes that run smoothly even on mobile hardware.

Throughout the project's development, it is advised to use a version control system that will manage and track changes to test recordings, files, data sets, or even documents that are pushed to repositories. Every commit that professionals make effectively stores each update as a revision. Each subsequent change can be restored and compared at the file level, which is something much more powerful than a simple Control-Z.

On top of that, it enables all members of a team to collaborate from the most recently updated version or separately in different branches that hold different project configurations that can be later merged together if required. I literally cannot explain the amazing potential and flexibility that using a version control system will give to any production environment. But be sure to pursue the knowledge behind it.
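
As an illustration, here's a small Python sketch that drives Git from a script to store a session's assets as one revision. The paths and message are made up, and large media files would typically go through Git LFS rather than plain Git.

```python
import subprocess

def commit_assets(repo_dir: str, message: str) -> None:
    """Stage and commit everything in the repository as a single revision."""
    subprocess.run(["git", "-C", repo_dir, "add", "--all"], check=True)
    subprocess.run(["git", "-C", repo_dir, "commit", "-m", message], check=True)

# Hypothetical usage:
# commit_assets("/studio/vp-scenes", "Tweak podcast set lighting for episode 12")
```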

Eventually, the team receives, organizes, and programs all the content, which is the content ingestion part. To maximize the efficiency of this process, those responsible for sending files need to know how they should be sent following resolution, bitrate, and format standards.
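
One way to enforce those standards is a small validation script at the ingestion gate. The Python sketch below is hypothetical: the resolutions, bitrates, and containers are invented examples, not our actual specs.

```python
# Hypothetical ingestion specs; use whatever your pipeline actually requires.
INGEST_SPECS = {
    "overlay_video": {"width": 1920, "height": 1080, "min_bitrate_mbps": 20,
                      "containers": {".mov", ".mp4"}},
    "logo": {"width": 1024, "height": 1024, "containers": {".png"}},
}

def validate(kind: str, width: int, height: int, ext: str,
             bitrate_mbps: float = 0.0) -> list:
    """Return a list of problems; an empty list means the file passes."""
    spec, problems = INGEST_SPECS[kind], []
    if (width, height) != (spec["width"], spec["height"]):
        problems.append(f"expected {spec['width']}x{spec['height']}, got {width}x{height}")
    if ext not in spec["containers"]:
        problems.append(f"container {ext} not accepted")
    if bitrate_mbps < spec.get("min_bitrate_mbps", 0):
        problems.append(f"bitrate {bitrate_mbps} Mbps below minimum")
    return problems

print(validate("overlay_video", 1280, 720, ".avi", 8))  # three problems reported
```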

In the TechVis step, while all the necessary pieces of production are getting into place, it is time to put them together, make sure that they work as intended, and basically simulate the shooting process before it even happens. The intention here is to use virtual cameras, rigs, and visual effects to match the stage and the real equipment. This step allows you to test and validate everything before moving on to the pre-light tests.

Now, according to the editorial, we must confirm that we have everything we need for the event, such as special lighting, furniture, microphones, a teleprompter, and essentially the appropriate infrastructure for the team involved in the production. And in order to ensure that the day's filming runs smoothly, you must have a good grasp of who will be inside the studio and which external professionals will require permission to enter the building or even the parking lot. In addition, if anything that was listed in the stage recognition needs to be bought, approvals and acquisitions happen here.

Finally, with all parts of the pre-production process done, we can book the studio and plan the schedule to be followed in the pre-light step, and then getting into the shooting day. And that requires careful planning. Now, let's talk about the pre-light. Here, we prepare and act as if the final shooting would already occur. Sometimes, the talent that will be participating in the production phase won't be available. And we'll rely on colleagues to validate all the preparation.

And then the first step here is to load in the gear, which is pretty straightforward. And basically, all of the stage equipment must be assembled and calibrated. Essentially, all test arrangements must be made.

The test day is super challenging and aims to avoid any surprises. Once the virtual and live stage elements are pre-validated, you should do a test run with everything working. This requires the entire production team to work together in order to test cameras, lights, audio, and all sources of input to mirror the environment of the shooting day. You'll validate and fine-tune camera settings, lighting, and billboards, as well as their interaction with the virtual content.

In our case, production was the stage where we wanted to maximize efficiency so that we could have and handle a full agenda. If everything before went OK, this is the most important day of all. It is the day of the show. That said, there are still some variables that go beyond technical aspects, and the whole team must be aware of the phrase we said before—in virtual production, the only constant is change—and do their best to adapt quickly on the fly.

If the shooting is a live stream, the next two steps cease to exist. If the production is recorded, you will later select the best takes for further post-production work, review the files' organization, and back them up. Finally, you'll prepare all the data in the form of videos and post-process it to deliver top-quality outputs.

Wow, oh my God, we got here. And I really hope that you felt the dimension of this kind of media production work. When we started a year ago, a class like this was exactly what we needed to get more confident and make fewer mistakes.

Now, let's wrap things up and talk about building the real-time virtual production laboratory itself. Well, the greatest challenge of this construction project was to meet the needs and expectations of everyone involved. We couldn't just have a conventional studio. We needed a space to accommodate simultaneous activities with different approaches, one that would help bring the highest level of innovation to the entire joint effort.

I'll try to give you some inspiration on how to build your real-time virtual production laboratory and tell you some cool stuff that you can bring to your company, too. We first thought about the XR studio, where the real-time production would happen with all the features we could work with. I advise you to put in a big table, so the external devices that we mentioned before that will send video signals—such as the slides laptop and other things—remain controllable for the people involved in the production. Since we wanted to use the lab internally, and not everyone has an extra level of presenting skills, we coupled the cameras with teleprompters. It is amazing. And you're able to help a lot of untalented people, such as myself.

We made a podcast studio with a green screen for smaller productions. Here, we only use real-time virtual production for augmented reality purposes, such as a podcast with virtual avatars and real people, or projecting things in 3D. We put in a TV, nice chairs, and external inputs that can be mirrored to the wall TV in the same way the virtual production studio allows us to do, and added teleprompters as well.

The directing room is where the computers and most of the studio infrastructure are. Some studios will have this within the set, without walls; some will isolate it. In our case, we preferred to have acoustic isolation in order to minimize interruptions of recordings while we were free to talk with each other. If you're not used to these kinds of things, no matter how much you plan, you'll most likely have issues where you want to be able to talk to the production team without disturbing the presenters. We communicate with the XR studio and the podcast studio through the inexpensive microphones that we mentioned a few slides ago.

For hosting guests, holding a meeting, taking some photos, and seeing what both the XR studio and the podcast studio were transmitting to viewers, we made this multi-use space. We put up two TVs for outputting the PGM from both studios. And it looked amazing. And see—with an SDI router working, you're able to map those TVs to any output you want without messing with cables.

Now, for some small audio recordings, we built this small, acoustically isolated room, which is very straightforward. It is near the mixing table, and we can easily record audio clips and deliver them very fast.

Depending on why you might want a real-time virtual production lab, you might need to lend equipment to other departments or other companies as well. Our virtual production laboratory provides about 65 pieces of audiovisual equipment for shared use by the entire company, in order to meet the specific needs of digital content production in each department. This includes cameras, lenses, lights, microphones, cables, and accessories.

And here's a great tip in order to better organize and track the use of each piece of hardware in and out of your studio: Cheqroom. In order to provide a transparent, automated, and autonomous management solution, we looked for a platform that would help us apply safe principles and policies to check-out, check-in, maintenance, accountability, reporting, support, and equipment booking. It is very easy to use, and the pricing is phenomenal.

With your laboratory built, we come to: what's next? Well, we hope to explore motion capture and MetaHumans that we can control and present alongside real people. We want to track physical objects that will interact with the virtual worlds. We learned that industrial robotic arms are able to track themselves, replacing, for example, a crane—and that will totally contribute to our lean team of real-time virtual production professionals. And of course, we want to work with an LED volume soon. Our objective for the end of this year is to turn on the camera and achieve movie-like quality standards in order to produce greater and greater content.

What we shared here was a compilation of what we wanted to have learned fast when we first started. A lot of great content can be found on the internet to help you explore the things that we presented here in more detail. I really hope that you got excited about real-time virtual production like we did and that you learned a thing or two.

But even now, we're still evolving. To be honest, we feel it's only the beginning. Check out the GitHub of this class. And let's build together a resource center to help people get ready for real-time virtual production together.

Thank you so much for your time. And see you in the next Autodesk University. Thank you, Autodesk, for the opportunity. And again, many thanks for your time.

[MUSIC PLAYING]
