Description
Key Learnings
- Learn how to deploy stand-alone Flame products on AWS instances.
- Discover how Flame can collaborate with other Flame products (Flare, Flame Assist).
- Learn how to centralize project data and increase productivity using Burn nodes.
- Learn how to cost-effectively implement this into your VFX pipeline or workflow.
Speaker
JEFFREY RAMIREZ: Hello, good morning, good afternoon, and good evening to all of you who are in different parts of the globe. Thank you for attending this class.
So with all the challenges we are facing during the pandemic, we have realized how constrained we can be when our access to our production site or office is limited. So we learned to adapt to the new challenges, and working remotely is one of them.
Working remotely may have reduced the quality of our work due to a lack of collaboration technology and speed, to name a few challenges. But we have also learned that working remotely may be the way many will work going forward. So with that, I would like to welcome you to our class, Flame on the Cloud: Remote Production Without Compromising Quality.
So in this class, we'll discuss how we can help optimize your workflow and increase your collaboration remotely without compromising the quality of your work. Let's find out how Flame on the cloud can help you achieve this and how to incorporate this workflow into your visual effects pipeline.
So I would like to introduce myself. My name is Jeffrey Ramirez. I'm a technical support specialist with Autodesk for the creative finishing products. I have 17 years of experience in technical support in the film and TV industry. And aside from being a technical support specialist at Autodesk, I am also a KCS, or Knowledge-Centered Support, coach and a geo-escalation lead for the creative finishing team.
So before we start with our class, please take time to read our safe harbor statement. And please note that the AU content is proprietary. Please do not copy, post, or distribute without express permission.
So in this class, we will discuss the following learning objectives. Our first learning objective is to learn how to deploy a single Autodesk Flame family product on AWS. This is similar to your standalone on-premises workstation setup.
So next, we will discuss how Flame can collaborate with other Flame family products like Flare and Flame Assist. We will also discuss how to centralize the project data and add Burn nodes to further improve collaboration and help increase productivity.
And then finally, we will talk about some considerations to help you implement this into your VFX pipeline or workflow. I hope you will find the topic useful. So let's start.
So for those who are not familiar with Flame yet, let me give you a little introduction. Flame is a powerful 3D compositing, visual effects, and editorial finishing tool with an integrated environment that accelerates creative workflows.
So if you have been amazed by TV commercials, TV series, and films that are full of visual effects, Flame is likely the tool behind them. Let's watch this video to get a little more information about Flame.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
- Autodesk Flame, the 3D compositing, VFX, and finishing software behind A-list movies, glowing beauty spots, and more than one car commercial. It started on a large, million-dollar Silicon Graphics machine, adapting to PC workstations and Apple iMacs, and evolving into the Flame software solution we know today.
And over the past 30 years, its compositing, VFX, and editorial finishing tools evolved with it. Using tools like matchbox shaders, AI face normal maps, machine learning salient keyers, and next-generation camera tracking, you've taken Flame to unimaginable heights.
So let's take it even higher. Introducing Flame on the Cloud. Enjoy the full Flame experience on AWS cloud with scalability for VFX computing and storage right at your fingertips. Securely access and collaborate with multiple Flames.
And with Teradici CAS remoting software using PC-over-IP technology, experience the full performance of a cloud workstation from anywhere, using the device of your choice.
Take on bigger projects and bring on additional artists with scalable compute and storage capacity. And with parallel distributed file storage solutions based on WEKA's data platform for AI, you can safely store and play back shots in real time.
No matter the size of your business, Flame on the cloud gives you the freedom to build it the way you want it to be built. Take advantage of the scalability of the cloud and start building a more resilient future today with Flame.
[END PLAYBACK]
JEFFREY RAMIREZ: To give you some history about Flame: Flame was initially deployed on high-end on-premises hardware. And as technology evolved, Flame continued to adapt to take advantage of newer hardware and software solutions.
So we have seen Flame deployed on SGI, or Silicon Graphics, machines. These were literally the size of your fridge, or about the size of a full server rack. Please note that this is not an actual SGI image; I use it for illustration purposes only.
And then we have seen it deployed on PC workstations, such as IBM, HP, Dell, and Lenovo, and on Mac workstations. Flame used to be bundled with turnkey hardware. Previously, you could not acquire the Flame software alone; it had to come with certified hardware.
So the good news is, Flame family products are now a software-only offering, which means they do not come with turnkey hardware like in the past. Flame is not limited to specific workstations, as there are now several options and recommendations on the Flame system requirements page for your flexibility, including self-qualified hardware. So you can choose the hardware and platform that suit your needs.
And what's more, Flame now runs on the cloud, specifically on AWS, or Amazon Web Services, thanks to the efforts of our engineering team, which worked closely with the AWS team and system integrators to bring artists an alternative way of working with Flame. This technology enables us to work leveraging the cloud without compromising the quality of our work.
So please note that at the moment, Lustre, our grading tool, is not yet supported on AWS.
So before we start, I would like you to familiarize yourself with the types of AWS instances and storage that we will use for the configurations in our discussion. As you can see at the top of the list, we have the g4dn.8xlarge, which we will use primarily for Flame and Burn. It has 32 vCPUs, an NVIDIA T4 GPU with 16 GB of VRAM, 128 GiB of RAM, 900 GB of SSD storage, and 50-gigabit network bandwidth.
So while instance types with AMD GPUs are available on AWS, they are not supported by the Flame family products. Also, AWS regularly updates its high-performance NVIDIA-based instance types, so consider the preceding a minimum requirement.
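To make the instance choice concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that queries these specs directly from AWS. This is my illustration rather than a step from the implementation guide, and the region is an assumption.

```python
# Minimal boto3 sketch: verify the specs of a candidate instance type.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # region is an assumption

resp = ec2.describe_instance_types(InstanceTypes=["g4dn.8xlarge"])
it = resp["InstanceTypes"][0]

print("vCPUs:", it["VCpuInfo"]["DefaultVCpus"])              # 32
print("RAM (MiB):", it["MemoryInfo"]["SizeInMiB"])           # 131072
print("GPUs:", [g["Name"] for g in it["GpuInfo"]["Gpus"]])   # ['T4']
print("Network:", it["NetworkInfo"]["NetworkPerformance"])   # '50 Gigabit'
```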
So let's now discuss our learning objectives. First, let's see how we can deploy a single Autodesk Flame family product on AWS. This configuration is a great starting point to enable a remote workflow leveraging cloud technology.
So this is ideal for an individual user, or for artists who mainly work alone on a given project and rarely collaborate with other artists. If you are a freelancer, this is also an ideal and great starting point for you.
So for a single Flame family product deployed on AWS, the media is stored on storage directly attached to the Flame instance, and project metadata is stored on the system disk of the Flame instance. For this configuration, you need one Flame family product instance with a high-performance NVIDIA GPU, either g4dn.8xlarge or g5.8xlarge. Please note that these are instance types you can select from AWS.
So we also need storage, with at least 500 gigabytes for the system disk. We need this much for the system disk because this is where we will store the project metadata for this configuration. The other piece is direct-attached storage using four 2 TB AWS st1 EBS volumes.
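As an illustration of that storage step, here is a minimal boto3 sketch that creates and attaches the four 2 TB st1 volumes. The instance ID, Availability Zone, and device names are hypothetical placeholders.

```python
# Minimal boto3 sketch: create four 2 TB st1 EBS volumes and attach them
# to the Flame instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
instance_id = "i-0123456789abcdef0"   # hypothetical Flame instance
az = "us-east-2a"                     # must match the instance's AZ

for device in ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]:
    vol = ec2.create_volume(
        AvailabilityZone=az,
        Size=2048,                    # 2 TB, specified in GiB
        VolumeType="st1",             # throughput-optimized HDD
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(
        VolumeId=vol["VolumeId"], InstanceId=instance_id, Device=device
    )
```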
So we need to configure the security groups as well. Security groups are designed to give the different components the correct network access they require to operate properly. Think of them as a set of rules or permissions for a given user or group. We also need one remote display client, either HP Anyware or AWS NICE DCV. The remote display client software is the tool to connect to and control your Flame instances.
HP Anyware is a product of HP, or Hewlett-Packard, and is one of the remote display solutions tested by Autodesk to connect to Flame on AWS. HP Anyware clients are available for Windows, macOS, and Linux operating systems.
On the other hand, AWS NICE DCV is a remote display solution provided by AWS and is free to use on AWS instances. It is also one of the solutions tested by Autodesk to connect remotely to Flame on AWS. NICE DCV clients are also available for Windows, macOS, and Linux operating systems.
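To illustrate the security group setup mentioned above, here is a minimal boto3 sketch that opens the default NICE DCV port to a single address range. The VPC ID and CIDR are hypothetical, and the full rule set Flame requires is in the implementation guide.

```python
# Minimal boto3 sketch: a security group allowing remote display access.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

sg = ec2.create_security_group(
    GroupName="flame-remote-display",
    Description="Remote display access to Flame instances",
    VpcId="vpc-0123456789abcdef0",    # hypothetical VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,             # NICE DCV default port
        "ToPort": 8443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # your office range
    }],
)
```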
So here are the steps to deploy this configuration. Please note that I will not go through the detailed, more technical steps; I will just show you an overview to give you an idea. If you are ready to implement this into your workflow, more detailed steps are available in our implementation guide on the Flame help website.
So first, we have to create the Amazon Machine Image, or AMI. For those who are not aware, an AMI is a disk image that contains the OS and drivers. In this case, it also contains the DKU and NVIDIA drivers and all the tools required to use the Flame family in the cloud.
There is guidance on how to create an AMI in our implementation guide if you would like to create your own. But to simplify deployment to the cloud, Autodesk provides a Rocky Linux 8.5 AMI, available from the Flame family system requirements page.
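Here is a minimal boto3 sketch of launching a Flame instance from an AMI. The AMI ID, key pair, and security group are hypothetical placeholders; the real AMI ID comes from the Flame system requirements page.

```python
# Minimal boto3 sketch: launch a g4dn.8xlarge instance from an AMI.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",          # hypothetical Flame AMI
    InstanceType="g4dn.8xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="flame-keypair",                  # hypothetical key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",            # root device name assumed
        "Ebs": {"VolumeSize": 500, "VolumeType": "gp3"},  # system disk
    }],
)
print("Launched:", instances[0].id)
```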
Second, we have to choose and deploy a storage solution. We need fast storage capable of high throughput to be able to work with high-resolution media and play it in real time. This storage can be network-attached or direct-attached, but for the single Flame configuration, we will choose direct-attached storage.
For the direct-attached storage, you will use the AWS st1 EBS volumes. You can then configure RAID across them if you require it, as sketched below.
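As a sketch of that optional RAID step, assuming the four volumes appear as the devices below, you could stripe them into a single RAID-0 array on the instance. The device names, filesystem, and mount point are my assumptions, not the guide's prescribed values.

```python
# Sketch: stripe the four st1 volumes into one RAID-0 array, then format
# and mount it as the media storage location. Run as root on the instance.
import subprocess

devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

# Create the striped array for higher aggregate throughput.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--run", "--level=0",
     f"--raid-devices={len(devices)}", *devices],
    check=True,
)
# Format the array and mount it where Flame will look for media storage.
subprocess.run(["mkfs.xfs", "/dev/md0"], check=True)
subprocess.run(["mkdir", "-p", "/mnt/StorageMedia"], check=True)
subprocess.run(["mount", "/dev/md0", "/mnt/StorageMedia"], check=True)
```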
The next step is to create, configure, and deploy your Flame, including Flame Assist or Flare as required. For this step, we install the Flame family product software, which is mostly done through the shell or command line. Then we need to configure the machine ID, the hostname, the media storage, the soft partition, and Backburner.
So once the first three steps are done and Flame is deployed, we can connect to our Flame using either HP Anyware or AWS NICE DCV and work with our Flame instance on the cloud.
So again, this is our single Flame setup on AWS. This setup is simpler; it does not require additional instances for a NAS, Burn nodes, or a project server. And again, this setup is ideal for an individual user or a freelancer, so there is no collaboration over the network. But you can later scale it by setting up an AWS VPC, or Virtual Private Cloud, which is what we will take a look at next.
So our next learning objective is to learn how we can enable collaboration between Flame family products. This configuration adds a NAS, or network-attached storage, and an AWS VPC, or Virtual Private Cloud, to enable collaboration and project sharing. So treat the AWS VPC as your network.
This configuration will suit your pipeline if two or more artists need to work on the same project, and if you need project sharing between cloud instances and on-premises workstations.
So in this scenario, multiple Autodesk Flame family product instances are connected to a NAS, or shared storage, to enable collaboration between your Flames. Media is stored on the NAS, and each Flame family product instance stores its project metadata on its own system disk.
So for this configuration, you need at least two Flame family product instances in the same VPC, each at least g4dn.8xlarge or g5.8xlarge; a NAS instance of at least c5n.9xlarge with media storage; an AWS Transit Gateway; and one remote display client for each of the Flame instances.
The steps to deploy this configuration are almost the same as for the single-instance configuration. This time, though, we need to configure an AWS Virtual Private Cloud, or VPC, to enable networking with the other Flame instances; an AWS Transit Gateway to enable collaboration with other components and on-premises workstations; and an additional instance for the NAS.
So first, again, we have to create the AMI for Flame for every additional instance. This is the same process we did in our single Flame configuration. If you already have a single Flame instance, you may just have to scale up by adding more Flame instances.
Next, we have to configure the AWS cloud using the AWS Virtual Private Cloud and the AWS Transit Gateway. The VPC allows you to network Flame instances, a project server, and Burn nodes together in your cloud implementation. And to support the various networking capabilities of Flame, you need to configure the Transit Gateway service on your instances.
So AWS Transit Gateway connects your VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships; the Transit Gateway acts as a cloud router.
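Here is a minimal boto3 sketch of that hub: create a Transit Gateway and attach the Flame VPC to it. The VPC and subnet IDs are hypothetical; the gateway must reach the "available" state before the attachment succeeds, and connecting the on-premises side (for example, via Site-to-Site VPN or Direct Connect) is a separate step.

```python
# Minimal boto3 sketch: create a Transit Gateway and attach a VPC.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

tgw = ec2.create_transit_gateway(
    Description="Hub for Flame instances, NAS, and on-premises network"
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach the VPC once the gateway is available.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",            # hypothetical Flame VPC
    SubnetIds=["subnet-0123456789abcdef0"],   # one subnet per AZ in use
)
```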
And third, we have to choose and deploy a storage solution: a NAS, or network-attached storage, using AWS st1 EBS volumes. Please note that there are other solutions from third-party vendors, like WekaIO, Amazon FSx for OpenZFS, and PixitMedia PixStor. The links are available on the Flame help page and in the digital copy of your handout.
Fourth, we have to create, configure, and deploy your Flame, including Flame Assist or Flare as required. Again, for this step, we install the Flame family product software, and we need to configure the machine ID, the hostname, the media storage, the soft partition, and Backburner. And lastly, connect to your Flame using a remote display solution like HP Anyware or AWS NICE DCV.
So again, this is an overview and summary of multiple Flame instances with a NAS. This configuration is ideal for two or more artists who need to collaborate on the same project.
So with this configuration, the artists can easily collaborate by sharing projects and media over the network in the cloud. And with the help of the AWS Transit Gateway, cloud instances and on-premises workstations can also share projects and collaborate.
For our next learning objective, we will find out how we can further enhance collaboration and productivity by adding Burn and a project server to your existing configuration.
So just to give you some brief information, Burn is a tool that allows you to render images in the background to free up your Flame workstations for more creative tasks. By adding Burn, an artist working on Flame can send render tasks to the Burn nodes and continue with their creative tasks.
A project server, meanwhile, enables collaboration and simplifies project management by eliminating the creation of project data on the Flame, Flare, or Flame Assist instances. The project data is stored on the centralized project server.
So this configuration is suitable for a pipeline that requires two or more artists, artists that need to collaborate on the same project, and artists that need to focus on their creative work rather than waiting for their render tasks to finish on the Flame instance.
So here's the overview of this configuration. In this scenario, multiple instances are connected to shared storage, and all project data is created on the project server, enabling collaboration with shared libraries. The media is stored on a NAS and shared with each Flame family product instance. Project metadata is stored on the project server, which is accessible by each Flame family product instance.
So we have to configure an AWS Transit Gateway to make collaboration possible between the Flame family product instances, the project server, the Burn nodes, and on-premises workstations.
So for this configuration, you need a minimum of two Flame family product instances of at least g4dn.8xlarge or g5.8xlarge; a NAS of at least c5n.9xlarge with AWS st1 EBS for the media storage; a project server and Backburner Manager of at least r5.xlarge, with project storage on an EBS gp3 volume; the Burn nodes; an AWS Transit Gateway; and one remote display client for each of the Flame instances.
Since we already went through a similar setup in the previous slides, we will only go through how to add and configure the project server, the Burn nodes, and the Backburner Manager. Please note that the Backburner Manager is the render manager for the Burn nodes.
The project server is scalable depending on the storage and instance type we use. For example, if you select EBS as the media storage, EBS gp3 as the project storage, and r5.xlarge for the project server, this configuration can serve up to five instances. You can mix Flame and Burn; for example, three Flames and two Burn nodes.
On the other hand, if you select a more expansive configuration, as shown in the slide, you could have up to 16 instances; for example, eight Flames plus eight Burn nodes.
So for this example, we will choose the up-to-five-instances configuration. Here are the steps to configure the project server. Again, I will not go through the detailed steps; more technical, detailed steps are available in our implementation guide.
First, we have to set up the project server instance on AWS using the following configuration. Again, we will use the r5.xlarge instance type, which has 4 vCPUs and 32 GiB of memory. It has no powerful GPU, as this instance does not need to decode media. You also need one storage volume for the operating system and software, with at least 20 GB of capacity, and one for the project storage using an AWS gp3 EBS volume.
Please note that to prevent deletion of important project metadata, we have to set the project volume to not be deleted on instance termination. If this is not set, once the instance is terminated, the project volume will be deleted as well.
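Here is a minimal boto3 sketch of that safeguard, turning off DeleteOnTermination for the project volume's device mapping. The instance ID and device name are hypothetical placeholders.

```python
# Minimal boto3 sketch: keep the project volume when the project server
# instance is terminated.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # hypothetical project server
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",       # the gp3 project volume
        "Ebs": {"DeleteOnTermination": False},
    }],
)
```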
And then, we also have to configure the security groups to give the different components the correct network access they require to operate properly.
The second step is to connect to the instance through the command line. Again, there is guidance on how to do this in our help page. Third, we can add additional storage, if necessary, to store the project metadata. Next is to configure the instance as a project server. And lastly, we have to configure the instance to use the network storage, our NAS.
Now let's go to the Autodesk Burn configuration. The first step to configure Burn is, of course, to set up a Burn instance on AWS. This setup is similar to setting up the Flame instances we discussed in the previous slides. The instance type must match the instance we used for Flame, as Burn requires a high-performance GPU to decode media.
Second, we have to connect to the instance through the command line to configure the Burn node. Third, we have to configure the instance as a Burn node. We will also apply a similar configuration to what we did with Flame, except that this time we will set the Backburner Manager to the project server; in the previous configuration, we set the Backburner Manager on the Flame instances. And finally, configure the instance to use the network storage, your NAS.
So once we are done adding the project server and Burn to your configuration, we can connect to and work with our Flame using either HP Anyware or AWS NICE DCV. Again, this is an overview of multiple Flame instances with a NAS, a project server, and Burn nodes.
So this configuration is ideal for two or more artists, artists that need to collaborate on the same project, and artists that need to focus on their creative work rather than waiting for render tasks to finish on the Flame instance.
So with this configuration, the artists can easily collaborate by sharing projects and media over the network in the cloud and through the project server, and they can ease the load on the Flame instances by offloading rendering to the Burn nodes. And with the help of the AWS Transit Gateway, cloud instances and on-premises workstations can also share projects and collaborate.
Now, for the final learning objective, we will discuss the key considerations to help you implement Flame on the cloud in your workflow or pipeline. To start off, I would like to give you an idea of the AWS instance cost. But please note that this is the current cost on the AWS website as of the writing of this deck.
So this price may change without prior notice, and Autodesk has no direct influence or control over the price. For more information, please visit the Amazon EC2 on-demand pricing website.
So for our primary instance type, the g4dn.8xlarge that we use in our configurations, the price is about 2.176 US dollars per hour. Again, the price may vary depending on the region; for this example, I chose the US East (Ohio) region.
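As a rough, back-of-the-envelope example using that rate, and assuming the instance is stopped when idle: at 8 hours a day for 22 working days, compute alone comes to roughly 383 US dollars a month, before storage and data transfer. The hours and days here are my assumptions, not AU figures.

```python
# Back-of-the-envelope monthly compute cost for one Flame instance.
hourly_rate = 2.176      # USD/hour, g4dn.8xlarge on-demand (as of writing)
hours_per_day = 8        # assumption: instance stopped outside work hours
working_days = 22        # assumption: working days per month

monthly_cost = hourly_rate * hours_per_day * working_days
print(f"~${monthly_cost:,.2f} per month")   # ~$382.98
```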
And here is the pricing for data transfer. Again, please check the Amazon EC2 on-demand pricing website to find out more about this.
Cloud computing is a big shift from traditional on-premises infrastructure, so it is understandable that we weigh the advantages and disadvantages before deciding to add this to our workflow. We have to consider the on-premises components, capacity and utilization, and logistics. Please note that these are general guidelines; these considerations may vary at every facility.
Let's go through them one by one. One of the key considerations is the on-premises components, such as hardware costs: the servers, including workstations, racks, cables, and spare parts. We also need to consider the storage, which includes disks, network cards, and cables.
And for the network, we have to consider its components, like the network switches, routers, cables, and ISP bandwidth costs. We also need to consider the five-year upgrade cycle, which is the usual refresh cycle.
So next, the software costs, which include the operating system, licenses and subscriptions, management software, and software upgrades. Then there are the facilities costs, such as the server and workstation space.
So we need space for our hardware, and there is a corresponding cost for it. We need to consider power and utilities, and cooling and air conditioning. We also have to consider manpower costs, like IT technical support and facilities management.
So the next consideration is capacity and utilization. How many users are required to use the cloud instances? The cost drops when an instance is idle or not running. Some facilities invest in numbers of workstations and servers, but there will be times when they are underutilized.
How long is the instance needed? Some projects will run for a certain period only, for example, a movie project, a short film, an advertisement, et cetera. How many instances are needed for Burn and Flame? In AWS, since the instances are on demand, the quantity is scalable.
How much storage is needed? And for the logistics, we have to consider the travel costs for the user or artist if they need to be on site. You also have to consider the shipping costs for the workstation; some clients require the user or artist and their Flame workstation to be on site.
So these are just some of the considerations to think about to help us decide whether on-premises is still viable for us, whether we can add the cloud to our workflow, or whether to fully shift to the cloud.
Remote workflows will continue to be the way many will work going forward. Flame on the cloud gives us the opportunity to leverage new technology that will help optimize your remote workflow. With its speed, power, accessibility, scalability, and security, among many other benefits, you can now experience the full performance of Flame from almost anywhere without compromising the quality of your work.
Adopting this technology will benefit your organization with broader business opportunities. With Flame deployed on the cloud, essentially accessible from anywhere, an organization has the ability to work from almost anywhere. It also gives you the flexibility to recruit the finest talent around the world and have them collaborate with each other wherever they are.
Here is a testimony from one early pioneer and adopter of Flame on the AWS cloud, Preymaker founder Angus Kneale. He says, "Preymaker is all about having the finest talent using the best technology. And running Flame in AWS allows us to recruit and work with exceptional talent who live anywhere.
Having Flame projects in the cloud with artists collaborating in multiple locations, we are able to create exceptional work for our clients. Our colorist in Los Angeles can start a project with our Flame artist in London doing the conform, ready for our CGI team in New York to continue the work. Ultimately, the cloud gives us the flexibility to execute highly complicated, demanding, and compute-intensive projects in a collaborative cloud-based workflow."
So the Flame team has provided some prebuilt components, like the AMI, and the implementation guide to help you get started. We also have resellers and AWS-enabled system integrators that have successfully deployed Flame on the cloud and are equipped to help with your workflow, deployment, and configuration needs. In your digital handout, you should find the links to these resources.
So if you have questions regarding this class, please use the comments section on the AU page, and I will try my best to answer as soon as I can. If you liked this class, please share it with your peers and click the recommend icon. Thank you once again. I appreciate you all, and I hope to see you again at the next AU.