Description
Key learnings
- Learn how to build and deploy software using Forge
- Learn how to easily create your own production-ready cloud environments for Forge applications
- Learn about the best practices for running and securing applications in the cloud
- Discover how the cloud helps accelerate change and time to market
Speakers
- Thomas Jones: Tom "Elvis" Jones is a Solutions Architect with Amazon Web Services who spends his time focusing on the complex challenges of our most strategic partners in the Design, Engineering, and Manufacturing space. His career has spanned both the hardware and software sides of the house, including work at Red Hat, Transmeta, and Pratt & Whitney, giving Tom extremely broad technical experience across multiple industries and verticals. He is a whitepaper author, a patent holder, a training material builder, a DevOps expert, an active Maker, a mountain biker, and above all, a passionate technologist. He has been known to go far out of his way for pinball and fondly recalls playing "Adventure" on an ADDS Viewpoint ASCII terminal.
- Vinod Shukla: Vinod Shukla is a Partner Solutions Architect with Amazon Web Services. He has over a decade of experience designing and building high-performance, enterprise-grade software systems. As part of the AWS Quick Starts team, he enjoys working with partners to provide technical guidance and assistance in building gold-standard reference deployments that are fully automated, highly available, and secure. He is also an active contributor in the open-source community. Prior to joining Amazon Web Services, Vinod worked as a senior software engineer for Atypon Systems, where he developed and maintained the RightSuite product line. RightSuite is an enterprise access-control and e-commerce solution used by many of the world's largest publishing and media companies.
- Jaime Rosales Duque: Jaime Rosales is a dynamic, accomplished Sr. Developer Advocate with Autodesk, highly regarded for 8+ years of progressive experience in software development, customer engagement, and relationship building for industry leaders. He's part of the team that helps partners and customers create new products and transition to the cloud with Autodesk's new platform, Forge. He joined Autodesk in 2011 through the acquisition of Horizontal Systems, the company that developed the cloud-based collaboration system now known as BIM 360 Glue (the Glue). He was responsible for developing all the add-ins for BIM 360 Glue, using the APIs of various AEC desktop products. He is currently empowering customers worldwide with the use of Autodesk's Forge platform, through hosted events such as Forge Accelerators, AEC Hackathons, and VR & AR Hackathons. He has recently been involved in the development of an AWS Quick Start to support Forge applications.
TOM JONES: So everybody should have a little slip of paper with a little hash on it. If you don't, raise your hand, and one of our assistants will come bring you one. But you're going to need that to get into the lab.
AUDIENCE: [INAUDIBLE].
TOM JONES: We got one guy here. Everybody ready? All right, let's do this. So welcome to Forge and Amazon Web Services-- A Perfect Match. My name is Tom Jones. My nickname is Elvis, and both seem to be perfect for being on stage at Las Vegas. I'm a solution architect at Amazon Web Services. Joining me here today is Jaime.
JAIME ROSALES: Hi, so my name is Jaime Rosales. I'm a senior developer advocate for the Autodesk Forge platform. And also we have, today, Vinod.
VINOD SHUKLA: Hi, everyone. My name is Vinod Shukla. I'm also a partner solutions architect at AWS. I've been working with Jaime and Elvis, developing some Quick Starts, and excited to be here to show that to you today.
TOM JONES: Awesome. Super. Thanks guys.
So we got some learning objectives for today. We're going to learn how to build and deploy software using Autodesk Forge. We're going to learn how to create your own production-ready cloud environments for those Forge applications. We're going to learn about best practices for running that application and securing it, and we're going to take a look at how the cloud can help you accelerate your development process.
So let's talk really quickly about the AWS Quick Start program, and I think that's-- this is you, Vinod. I'll let you talk about it.
VINOD SHUKLA: Sure, thank you. All right, so what is AWS's Quick Start program? So AWS Quick Starts are gold standard reference deployments of key partner technologies and solutions in the AWS cloud.
We give customers a push-button way of deploying complex workloads using AWS best practices for security and high availability, as well as the best practices for the product you are deploying. You can think of Quick Starts as next-generation white papers. In addition to documentation in the form of an architecture diagram and a deployment guide, we also give you a fully automated deployment option using AWS CloudFormation, so that not only can you read and learn more about the technology, you can actually have a working solution that you can go and deploy in your AWS accounts.
So just a little bit talking about the motivation for-- sorry. There you go.
So just talking a little bit about the motivation for why Quick Starts exist. If you're building infrastructure in the cloud, let's say you start with a basic building block: you are just starting off setting up your network, and you build your virtual private cloud -- a VPC, which is your isolated environment. And you are setting up a network layout -- you set up subnets, then you set up rules for routing, and so on. If you are doing this manually, these are the steps on the left that you would have to go through. We don't have to read it all here, but roughly, it would be around 100 steps that you have to follow just to set up one building block, which is your network layout in AWS.
So we learned from that, and we saw that lots of customers are doing these repetitive tasks, which is undifferentiated work, and we could help them make this really quick and easy, and also something that follows best practice. So we built out this AWS VPC Quick Start, and if you use it, you get the same layout that you see in the diagram on the right, but now you're able to set up your VPC in a recommended way -- with public subnets, private subnets, and the layering that you need -- in just three steps. That's the value proposition of using these Quick Starts. And Quick Starts are very modular, so they can build upon each other. When we built the Autodesk Quick Start, we were able to reuse some of the components and build a Forge application Quick Start on AWS, reusing the VPC Quick Start in some of the modules.
When you go to the AWS Quick Start catalog, you see a curated list of over 160 Quick Starts. You can browse, or you can search by use case, such as databases, analytics, big data, and so on. If you search for Autodesk, you'll see the two Quick Starts that we have today: one for BIM 360 integration that we released this year, and one for a Forge application that we built last year. For both Quick Starts, you get the option of choosing your runtime language -- today, you can use Node.js as well as .NET Core to run your applications.
When you look at the deployment guide for the Quick Starts, you get to see the overview and any cost and services that you need, so the Quick Starts are all open source. They're free. You can take them. You can customize them. When you deploy them on AWS, you pay for the compute costs for the services that you're using.
We talk about the architecture and design considerations when building this -- best practices, how we make the workload scale, and how we make it secure. We provide step-by-step deployment instructions. It's using CloudFormation, which is a templating technology from AWS for defining your infrastructure as code, but we give a lot of configurable options, so you can tune the deployment to what you need. Finally, we have some links out for troubleshooting and what to do next with it.
So as I said, we use AWS CloudFormation, which is our way of defining infrastructure as code, and you can choose between JSON or YAML to write those templates. What you get at the end of it is a single launch URL, or a single-button deployment, where you just fill out a form with all the tuning options that you have. Once you submit that, you create the stack -- we call a unit of workload a stack.
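As a rough sketch of what such a template looks like -- this is not the Quick Start's actual template, and the resource names and the AMI ID are placeholders -- a minimal CloudFormation template in YAML has parameters (the tuning options in the form), resources, and outputs:

```yaml
# Illustrative only -- not the Quick Start's actual template.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Parameters:
  InstanceType:          # shows up as a tuning option in the launch form
    Type: String
    Default: t3.micro
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceType
      ImageId: ami-12345678    # placeholder AMI ID
Outputs:
  InstanceId:
    Value: !Ref AppInstance
```

Deploying this template (for example via the console or `aws cloudformation create-stack`) creates one stack, the unit of workload Vinod describes.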
At the end of the deployment, what you get is the figure on the right. So let's just quickly dive deep into-- just at a high level. We'll be doing the workshop, so I wouldn't go into too much detail. But I want to give you an overview of what you'll be deploying today as a Forge application.
So starting at the green box labeled VPC: we create a virtual private cloud, which holds your isolated resources in AWS. Then we have a layer for the bastion hosts, which are your way of securely entering your network -- you can open them up to your data center or your corporate network infrastructure so that you can get in securely. And then we have a private subnet layer where your actual workload, the Forge instances, will be deployed.
Now, for scalability -- let's say your load varies with time, and you want something to cater to that demand. We have set this up with an Auto Scaling group, so as and when you need more instances, new instances will automatically come up and cater to that extra load. So we've set up the Forge application in an Auto Scaling group, and then to distribute the load, we have Elastic Load Balancing. It's an Application Load Balancer that we're using, which is our layer 7, or HTTPS, load balancer.
And then there are some other options, which you can see the icons here. There's a NAT gateway, so if your instances require outbound internet connectivity to, let's say, download software or security patches, we have that. And that's also managed by us, so you don't have to worry about setting up your NAT gateways. You can just use the AWS services.
And then the Forge application, which is at the core of it, resides as an application on the EC2 instances that are in the private subnets. There are some other tuning options here, which are more advanced. If you want to deploy with your own domain name, you have the option of doing that. We'll skip that in the workshop today because it requires more setup -- you have to have your own domain in Route 53 -- but when you use the Quick Start to deploy your Forge web application, you have that option.
And then we'll talk a little bit about how we secure your parameters. The application requires your Forge client ID and client secret, so instead of keeping them on the instance in a text file, we use something called Parameter Store, which can be used for secure storage of your secrets. So that would be the work that we'll be doing today.
In addition -- so that gets you started, but let's say you are evolving your application. You deploy a Forge application, then your requirements change, and you make updates. So how do you make sure that your updated code is deployed? For that, we build a Code Pipeline.
So Code Pipeline is an AWS service that enables you to do continuous integration, delivery, and deployment. The way we start is: all the Quick Starts are open-source GitHub repositories, so the first stage in the pipeline is your GitHub source. We also have a second source here, which is your secret configuration.
So it's never a good idea to keep your secrets on GitHub, because then they will be in source control forever. To store your secrets, we are using an encrypted S3 bucket, which will contain your Forge credentials. The source itself is open source, and it will be on GitHub.
Now, when you take it, you make a fork of the open source Quick Start. You don't have to keep it open. You could make it private if you have to. If you are doing anything that is your IP, feel free to do that. But as an example, we'll take the repo that we have today, and we'll use that.
Before you get to the final deployment stage, you want to make sure that the code you've written is well tested. So the second stage here is a test stage. The Quick Start team built a tool called TaskCat, which enables you to test CloudFormation templates in multiple regions in one go. It runs your tests in all the regions and generates a report, so if there's any failure, you can go in, see what the cause was, and fix it. The idea is that, before you move on to the final deployment stage, you must test your code. Only if that passes does it move on to the next stage.
Now, it's very typical in GitHub workflows to have a test branch and a production branch. You have your development branch where you do all the dev work, and only when that's good to go do you merge it to your production branch. So that's our third stage, which is the Git merge stage. It takes the code from your development branch, and if it passed the test in the previous stage, it will then merge to the master branch.
Once there, we're almost at the production stage, but there's one more step remaining. CloudFormation takes assets from an S3 bucket; it can also take them from GitHub.
But today, Code Pipeline does not support Git submodules, and in the Quick Starts we use Git submodules for modularity. So for that, we've created a stage -- the fourth stage here -- which copies your master branch to a code hosting bucket in S3. When we go through the workshop, we'll talk about some of the steps.
It's detailed there, but we are using two buckets. One bucket stores your secrets; it's encrypted, and it stays secret. The second bucket stores your code, which will be the contents of the master branch.
So now that you have your code in S3, you're ready to deploy it as a CloudFormation deployment. That's the fifth and final stage, which is the production deployment. It takes the code from S3 and the configuration that's in the other bucket, and uses the two to make a CloudFormation deployment. Code Pipeline can either create a new stack if it does not exist yet, or, if it already exists, update that stack. So with this development workflow, whenever you check in new code in your development branch, it will automatically be pushed all the way to your deployed production application.
Now, it is probably a good idea to inject a manual step at the very last stage. Let's say our administrator wants to make sure that the code that is going to go to production is what they want, so at the last stage, we've actually added a manual approval. So it will go to the last stage automatically, but it will wait there.
So you go into the console, and you say Approve. Only then, it will deploy the updates and update your stuff. So that is the workflow we'll look into today. With that, let's get into the workshop.
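The pipeline's gating logic can be sketched as plain control flow -- this is a toy model, not AWS Code Pipeline itself; the stage names are illustrative. The point is that tests gate the merge, and a manual approval gates the final production deployment:

```python
# Toy sketch of the pipeline gating described above. Stage names are
# illustrative; real Code Pipeline stages are defined in the Quick Start's
# CloudFormation templates.
def run_pipeline(tests_pass, approved):
    stages = ["source", "test"]
    if not tests_pass:
        # TaskCat failed somewhere: nothing merges, nothing deploys.
        return stages + ["FAILED: fix the TaskCat report and retry"]
    stages += ["git-merge", "copy-to-s3"]
    if not approved:
        # The pipeline reaches prod but waits for the manual approval.
        return stages + ["waiting for manual approval"]
    return stages + ["deploy: create or update the CloudFormation stack"]

print(run_pipeline(tests_pass=True, approved=False))
print(run_pipeline(tests_pass=True, approved=True))
```

The deploy step only runs when both gates have opened, which matches the behavior Vinod describes: the stack waits at the last stage until you click Approve in the console.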
TOM JONES: Cool. Yeah, go back. Go back one side for second, Vinod.
So Vinod's walked us through the pipeline and the workflow. I just want to give a little bit of background on why we built this, and I failed to do that at the beginning when we were starting the workshop. So we built this-- the three of us have participated in many of the Autodesk Accelerates, where we're sitting down for a week, and we're working with developers who are building Forge applications. And what we found is that they build out their application, and at the end of the week, they do a show and tell. And guess what-- it's still sitting on that developer's laptop.
And the developers-- many of them have come from a background where they're developing for Autodesk desktop products, and so they're used to developing for Windows. And they don't necessarily have all the information or understanding to run through this complete workflow and operate their application in a performant, scalable, highly reliable, secure way when they're done. So we wanted to simplify that, and that's why we built all this stuff.
So essentially, it's the same workflow the developer is building on their laptop, and then they say, all right, I'm going to commit. So now, they've got a source code management system. In this case, it's GitHub, but Code Pipeline is flexible. You can use many different products.
So they commit to GitHub, and it automatically runs through all this stuff for you. If you have additional tests that you want to run -- we're just using TaskCat as an example, but if you had additional tests, unit tests, functional tests -- you could put all of that in the test phase and have those execute either in serial or in parallel.
You can use third party tools to do inspection of your code, and I mean, there's a lot you can do here. But essentially, what we were trying to do is use the automation, so infrastructure as code, to allow you to easily get from your laptop and the app you developed to production and, really, allow you to just focus on your app. Anything else you want to add there?
VINOD SHUKLA: No, that's great.
TOM JONES: OK. All right, so let's talk about that. Thanks, Vinod.
VINOD SHUKLA: You want to take that?
TOM JONES: Which one? This one?
VINOD SHUKLA: Yeah.
TOM JONES: So AWS has over 165 different services today. I'm not going to give you a test on all of them or anything, but I want to highlight one of the ones that we're going to use, and that's Cloud9. Cloud9 is an in-browser IDE, so it's a development environment in your browser. We use that just to keep the class simple. Of course, you can write CloudFormation and your code in whatever IDE you want, but today, we're doing this just to make it easy for you.
We are also using a thing called Event Engine. So you should have a little slip of paper that has a little hash on it, and we're going to give you the URL for the Event Engine page. It looks like this. You put your hash in down at the bottom, and you hit Accept, and it will launch that environment for you. In that environment, we'll have a Cloud9 IDE that you can then launch and get to.
When you first click it, it'll give you this team dashboard. The team dashboard has two pieces of information that are important for this lab. The first is the AWS Console button, which will let you launch the AWS Management Console in another browser tab -- I'll show you a picture of it in a minute, but that's where you can get to the various AWS services. The second thing you're going to be interested in is a ReadMe, and the ReadMe is where you'll find the instructions for the lab. I'll show you what those look like here in just a second, as well.
So here's what the management console looks like, and this is the default blank page that you log into when you get in there. Two things I want to call your attention to-- so today, AWS has 22 regions around the globe with 69 availability zones. If you want to know more about what that means, come and see me while the lab's going on, and I'll give you as much detail as you want. But we got a lot of infrastructure. Today, this lab is operating out of our Oregon region, so you want to make sure you stay in Oregon, because if you move it-- if you change regions, the lab's not going to work.
The second thing you want to know on this page is the search bar, which will let you find other services like CloudFormation if you want to look at the output of your CloudFormation stacks. Then there are the step-by-step instructions Vinod has patiently crafted in Markdown, which he's hosting for you once you click on that ReadMe link. There are actually two clicks -- you'll click ReadMe, and then click again, and you should see this.
One last thing about the lab material: you may see a message at the top that says, if you're doing this as part of a workshop, please ignore this page and move to the next one. Those are the generic lab instructions -- you don't have to do all the steps. If you see that message, just move on. This is not the message you were looking for.
Deploying applications-- so we're going to go through the lab. In one section in the lab, once we get the infrastructure up, we're actually going to deploy a sample Forge application. So these are a couple of applications that Jaime has placed for you. And this is just a little GIF that's running, and it shows once you have your application up, and you navigate to the URL, we're going to download the file that's in the instructions, and then you're going to upload it here. And you should see something like this in the Forge viewer in your application that you are now hosting in AWS.
You don't have to memorize this. It's in the instructions. I just wanted to call it out because it's at the bottom of the page, and if you click on it, it will expand, and it will blow up like this so you can see it more easily.
These are your actual instructions. That's the URL, so go to the URL dashboard.eventengine.run. Enter your hash and then follow the instructions in the ReadMe.
If you've got questions, raise your hand. We'll come around and help you. We've got some lab assistants here in the back. You've got the three of us up here.
And have fun. We hope you enjoy it, and let us know how we did. Any questions? All right.
JAIME ROSALES: And then the instructions in the lab are going to ask you to use your GitHub account. So if you don't have a GitHub account, go ahead and create one, and the same thing for the Forge account. The Forge account can be reached at forge.autodesk.com. In the top right corner, you can sign in with your Autodesk ID and quickly create an account there, then create an app in order to get the Forge client ID and client secret that you're going to be using during the lab. If you have questions about that and you need help, let us know, and I'll come by and help you out.
TOM JONES: And those are all in the prerequisites in the instructions, but we're here to help.
JAIME ROSALES: So just a quick thing, because I saw someone in the other corner trying to log in to an AWS account. You don't need to log in to any AWS account. With the hash, we're giving you complete access to the AWS console, so that way, you don't get billed for any services that you use today. That's the reason we give you the hash. So if you're trying to sign in to an AWS account, raise your hand, and we can help you and direct you where to go. All right, awesome, I'll be right there.
Oh, my god, sorry. Sorry. Doesn't need a new tab. Sorry. There's Amazon on the laptops.
So for creating the Forge account, you're going to head over to forge.autodesk.com. I'm going to use your thing. Yeah, and then basically, when you create an app, you should see an option to select all the APIs. If you don't have the option of selecting all the APIs, it's because you need to start your subscription. And don't worry, we're not going to bill you for any of this -- it gives you a free one-year subscription with 100 Cloud Credits and all that stuff, and if you want to use a different account later on, you can also transfer it to a different account.
So you're going to go into your Forge account details. In this case, Elvis already has full access, but if you don't, you will see an option to start the subscription. You will need to click on this, because one of the Forge services, Model Derivative, only becomes available at the time of creating the app if you have a valid subscription.
And then another thing -- when you're creating the app, there are some instructions for creating it. But in case you missed it: the callback URL -- we're not going to be using that at the moment, so you can type a dummy callback URL, or you can use the one that we're giving you in the instructions. It's up to you. And if you have questions about this, raise your hand, and I can come by.
TOM JONES: So I've got my own hash here. I'm going to walk through the first steps to this, so I'm going to accept that and log in. And some people have been asking about how they get to AWS. There's a button here, right when you first log in, that says AWS Console, and if you click on that, it will open another window.
And you have to click it again, so open AWS Console again. And then it should open another tab, and you should see the AWS console. There we go.
AUDIENCE: Could you show them Cloud9?
TOM JONES: Yeah, sure, we'll take a look at Cloud9. So once you open up Cloud9, you'll see that we've already created a Cloud9 interface for you, and there's a button here called Open-- or labeled Open IDE, which I can click on. That will open my Cloud9 environment in my browser.
AUDIENCE: It'll take about a minute.
TOM JONES: Yeah, it'll take about a minute for that to start up. So now, my Cloud9 environment is up. There's a big Welcome window here that takes up most of the screen.
I'm just going to close that. You don't need it. And then you can take this window that's at the bottom-- that's actually your terminal-- and maximize that so you can see what's going on on your machine.
VINOD SHUKLA: Just a quick announcement-- so if you are being asked for a password for when doing something with GitHub, you can use that token that you created. So a few people are getting errors in the update artifact stage. You see the substitutions are empty.
So the way that happens is in the script -- and you don't have to use the script. What we are doing is: we have three files in which we substitute your Forge secrets, your email address, and your IP address. The update artifact script looks for a token that is present in those files and uses the value you provide to replace that string.
Now, if you did not provide the values correctly -- and the IP address string requires quotes -- what happens is it tries to replace, but the variables are empty. So the tokens in the three files we're replacing in become all empty. If you retry, it's not going to work, because the tokens are gone. So you have two options: you can go to those three files and paste your email address, your Forge credentials, and your IP address manually, or you can unzip the assets zip file again to restore the files with the right tokens, and then try the replacement again using the update artifact script.
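This failure mode is easy to reproduce in a few lines. The sketch below mimics what a token-substitution script does; the token names are illustrative, not the Quick Start's actual ones. Once a token is replaced with an empty value, it is gone, so re-running the substitution can no longer fix the file:

```python
# Illustrative token substitution; token names are hypothetical.
template = "ForgeClientId: __FORGE_CLIENT_ID__\nAdminEmail: __EMAIL__\n"

def substitute(text, values):
    # Replace each placeholder token with the value provided for it.
    for token, value in values.items():
        text = text.replace(token, value)
    return text

# Correct usage: every variable is set before running the script.
good = substitute(template, {"__FORGE_CLIENT_ID__": "abc123",
                             "__EMAIL__": "me@example.com"})

# Failure mode from the talk: an unset shell variable arrives as "",
# so each token is replaced with nothing -- and is now gone for good.
bad = substitute(template, {"__FORGE_CLIENT_ID__": "", "__EMAIL__": ""})

# Retrying on the damaged file is a no-op, because the tokens no longer
# exist. The fix is to restore the original files (re-unzip the bundle)
# or paste the values in manually.
retried = substitute(bad, {"__FORGE_CLIENT_ID__": "abc123"})
```

This is why re-running the update artifact script after a bad run does nothing: the placeholders it searches for were already erased.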
And then another question was about the /32. This is CIDR block notation, which we're using for IP address ranges. An IPv4 address is made of four octets, and the number after the slash says how much of the address must match.
So let's say your IP address is 1.2.3.4. If you say /32, then it allows access for only the exact IP address that you provide. As you reduce that number, you free up more of the address: with /24, the last octet allows everything from 0 to 255. So in this example, 1.2.3.4/32 allows only 1.2.3.4, while /24 allows access from 1.2.3.0 up to 1.2.3.255. Now, the way we are doing the string substitution, you need a backslash and a quote for the IP address; otherwise, the slash gets taken as an escape for a bash variable, and that could cause problems.
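The octet arithmetic can be checked with Python's standard ipaddress module, which understands CIDR notation directly:

```python
import ipaddress

# /32 matches exactly one address: only the IP you provide is allowed.
single = ipaddress.ip_network("1.2.3.4/32")
print(single.num_addresses)   # 1

# /24 leaves the last octet free: everything from 1.2.3.0 to 1.2.3.255.
block = ipaddress.ip_network("1.2.3.0/24")
print(block.num_addresses)    # 256

# Membership tests confirm the range boundaries.
print(ipaddress.ip_address("1.2.3.4") in block)   # True
print(ipaddress.ip_address("1.2.4.4") in block)   # False
```

Note that the network form of a /24 must have host bits zeroed (1.2.3.0/24, not 1.2.3.4/24), which is why security group rules are usually written that way.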
TOM JONES: [INAUDIBLE].
VINOD SHUKLA: Yeah. Let me walk--
TOM JONES: I'm trying to make that bigger.
VINOD SHUKLA: So yeah, let me just talk through again. On the steps, so when you unzip the asset bundle, you see these five files, and we listed them in the document as well. There's a Forge prod CFN, which is a configuration file that your production stack will use. And it contains a few tokens here-- your Forge client ID, your Forge client secret, your key pair, and so on.
So if you want, you can just manually-- let's say you get empty here. You can edit it manually. You don't have to use the update artifact script. Same thing here-- the Code Pipeline JSON file also contains a few tokens, and then there's one TaskCat project override that contains these three tokens-- key pair, email, client ID, and secret.
So what you have to do is, in the update artifacts file, provide your email, the client ID, the secret, and then the IP address -- and make sure you keep the quotes and the backslash, because if you skip those, that could cause a problem. Once you run the script, what's expected is that this file then has your email and your client ID substituted. If that did not happen, you can download the bundle again and unzip it, or you can update manually.
So quick time check-- if people have already deployed the Forge prod stack, and if it's in a create complete status, you can go in and look at the outputs. And you can see the application being deployed. Now, if there was an error in your IP address, it's possible that the URL of the app won't work.
Raise your hand. We can walk you through how to go in and manually update your security group to fix that. And if you've already deployed the Code Pipeline, make sure you don't approve the last manual step before you've verified your first Forge prod application. We want to show that you have a base app, and that it gets updated after you've done the approval.
TOM JONES: You want to just describe what that looks like?
VINOD SHUKLA: Yeah, sure.
TOM JONES: Just talk to him.
VINOD SHUKLA: Yeah, so when your Forge prod stack completes, you would have an application URL that you can go to, and in parallel, because that step took 15 minutes--
TOM JONES: I can show that.
VINOD SHUKLA: Yeah. So that step takes 15 minutes, so you don't have to wait. You can continue building your Code Pipeline. The Code Pipeline is also a CloudFormation stack, so when the Code Pipeline stack reaches a create-complete state, at that point, it will start executing the pipeline, and it will go through the source stage, the test stage, the Git merge stage, and then it will finally reach the prod stage.
So when it reaches the prod stage, make it wait. Don't approve it immediately. First, go and verify that your Forge prod stack is in its original state, with the first version of the app. Once you've verified that, then you can go into the pipeline again and approve the change. You'll see the change propagate, and your Forge prod stack will be updated. Once the stack reaches the update-complete state, you'll see the new application deployed.
AUDIENCE: So it automatically [INAUDIBLE] prod stage here? You don't have to disable [INAUDIBLE]?
VINOD SHUKLA: Yeah, yeah, it'll automatically stop. This one?
TOM JONES: Yeah, so essentially, what you'll see once you launch the first prod stack is just the model, and then the update adds the graphs and charts on the side there. So that's the new code that you're pushing into your pipeline and having it build. Then, if you verify that, you can click the Manual Approve button to manually approve that change and have it flow into production.
VINOD SHUKLA: Your GitHub token is on?
AUDIENCE: So is this only going to work with Revit models? Or specific models [INAUDIBLE]?
TOM JONES: Yeah.
AUDIENCE: [INAUDIBLE].
TOM JONES: Right.
VINOD SHUKLA: When you build the Code-- what stack did you--
TOM JONES: So the question is, will this only work for Revit models? So this pipeline is built to work with any Forge application. So it's not dependent specifically on Revit. We're just using the viewer here, and then we've got data that's pulling in.
AUDIENCE: [INAUDIBLE] pipeline. I'm talking about--
VINOD SHUKLA: [INAUDIBLE].
TOM JONES: Oh, this particular code? Yeah, I don't-- that's Jaime's code. You're going to have to ask him. I'm an Amazon expert, not a Forge expert.
[SIDE CONVERSATION]
JAIME ROSALES: OK, guys, so we are getting-- actually, we've already passed the finish time. But for those of you who were able to deploy the last part of it, good. If you were able to just deploy the first CFN, good. If you were not able to deploy any of them, still good. Don't worry about it.
This is about learning, and this material will become available once we get out of the craziness of AU -- and also AWS re:Invent in a week and a half. I will take care of doing a screen recording with all the steps, to show you how this thing should work. And you can always reach out to us -- either Vinod, [INAUDIBLE], or myself -- to help you out with any other question that you have on how to host your Forge application in AWS, OK?
So thank you again for coming. The material, like I said, will become available. The only thing is that it's going to have to be run on your own AWS account. Unfortunately, we're not going to be able to take care of that cost anymore. But yeah, so thanks again, and I hope you keep enjoying AU. Thank you guys.
VINOD SHUKLA: Thank you so much.
[APPLAUSE]