Description
Key Learnings
- Evaluate Vault software's performance in AWS.
- Discover data security concerns.
- Explore AWS efficiencies versus on-premise solutions.
- Learn about implementing a migration strategy.
Speakers
- Joshua Wilson: I am Josh Wilson, the Fusion 360 Manage Administrator at Bridgestone Americas. With a focus on data management and flow within the Autodesk Manufacturing industry, I have specialized in this field since 2011. My expertise lies in utilizing the Autodesk Vault vertical to ensure efficient data handling. Throughout my career, I have effectively implemented Vault at multiple companies, dealing with diverse levels of complexity. My primary objectives revolve around optimizing data flow, starting from the initial conception stage and extending all the way to the manufacturing and maintenance handoff. To achieve this, I rely on the powerful combination of Fusion 360 Manage and Autodesk Vault Professional to streamline the entire process.
- Carlos Caminos: Carlos Caminos is a seasoned BIM Professional and the Manager of the Asset Data Management Team at Bridgestone Americas. In his role, he plays a crucial part in coordinating and optimizing the flow of data from the design and engineering stages all the way through to manufacturing. Carlos is responsible for implementing software solutions, providing training, and establishing efficient workflows within the organization. With an impressive track record spanning over 25 years, Carlos has extensive experience in utilizing Autodesk software. His proficiency extends across a range of tools, including AEC Collections, Product Design & Manufacturing Collection, Vault Professional, and Autodesk Construction Cloud software. Carlos's expertise encompasses practical applications of Autodesk, Inc. products within the architecture, engineering, construction, and manufacturing industries.
JOSH WILSON: Hi. Welcome to "Vault Shining on Cloudy Days." My name is Josh Wilson. I am the Fusion 360 Manage administrator at Bridgestone Americas. My career has been spent doing data integration and data process flow within the manufacturing industry for the last 12-plus years.
CARLOS CAMINOS: And my name is Carlos Caminos. I'm manager of Vault and asset management at Bridgestone, specifically for ESS, which is Engineering Support Services. I have over 10 years' experience with BIM technology in the plant design, mechanical engineering, and even construction areas.
JOSH WILSON: Our learning objectives for today are to understand Vault's performance in AWS, understand the data security concerns that we had with AWS and on-prem servers, understand AWS efficiencies versus our on-prem solutions, and learn about our implementation and migration strategy.
So to give a little background, we'll go into how we had things set up before. So previous to 2022, we did have a total of six servers, six different Vault servers, each with their own instance of the ADMS, Autodesk Data Management Server piece, on a local VM hosted in our data center.
This totaled about 6 terabytes of storage between all six servers. And on top of that, we had a total of 10 on-prem VMs that hosted different AVFS, Autodesk Vault File Server solutions. This was to help speed up replication and data flow between some of our sites that were a little bit further away from our data center.
Now, with these AVFSs, we had nine of them that connected directly to our first main production server. This main server is the largest; it's the one that everybody across the United States and North and South America connects to. And then we had one that connected to a different ADMS server for a different business unit.
Now, one thing to note is each one of these Vault servers is dedicated to a different business unit. So we can shift things around a little bit better and manage these different data solutions for each division a little bit easier.
So next, we're going to get into some of the IT security concerns that have come up over the last 5, 6 years.
CARLOS CAMINOS: With IT concerns nowadays and the ever-changing platforms and landscapes in the world of IT security threats, there were things IT wanted to address. They wanted to move away from on-prem servers. They wanted server access and security to be better defined, along with data encryption and, of course, disaster recovery planning. And these things were important to us as well, being the data management team.
So let me give you a couple scenarios. What if things went wrong in your company? And let's say your snapshots were not running correctly.
Let's say your tape backup recovery was not available. They weren't functioning right. They weren't checked.
ADMS backups, full or incremental, were not running correctly, which would wreck your restore. The Vault backups were lost or misplaced for some reason. SQL databases were corrupt.
Let's read between the lines here. These are all things that could happen at some point. And if you don't think about it ahead of time, if you don't have it planned out, these could all be real catastrophes for somebody. That's why it's important to have a DR plan. And you should test your DR plan.
Failing to do so-- I don't know how many people test their DR plan. We now have a practice in place of recovering one of our data sets quarterly. It wasn't always the case. I know, in some cases, it's not even common practice. But you should have a test plan, be ready to execute it, and test it regularly.
JOSH WILSON: So that leads us into, what did we do? After looking at some of these IT security concerns, we, as data management administrators within Bridgestone, we needed to look at solutions, look at what we could do to help protect the company and help protect the company's IP, Intellectual Property.
So we started to evaluate our options. We looked at what we had in place currently with Vault Professional on our local VMs. We looked at our current antivirus solutions and what we all had in place. That being said, we did have a little bit of a company directive, a company push, to transition some of our data to AWS to alleviate some old and retired hardware that we had in our data center.
With that, we did decide, let's do a migration with one of our Vaults just to test it out and see how it goes in a production environment. To do that, we chose our largest Vault, mainly because that's where we had the most access to our data. Everybody within North America, entire plants, access this engineering and design documentation located in Vault.
Now, with that, we had to come up with a plan. We had to develop a data migration plan. What were we going to do? How were we going to do it? We did have to have some communications with our internal IT on our IT requirements for AWS.
We also had to communicate with our AWS implementation team. Within that communication, we came up with a rough timeline. Between IT and us, we estimated two months to initially get that data transitioned over, which gave us plenty of time for testing.
Now, our AWS team did create a dev environment for us, a development scenario, so we could get in there before we did our migration to production, so that we could test our solution and make sure it was going to be viable from our end before we transitioned an entire company's 2-plus terabytes of Vault data over to AWS.
And then we got to the execution phase. With this, we had some great communications. We had some great help along the way. But our IT team and our AWS team built up our production environment for us. We were able to install ADMS and SQL databases.
We have two different servers for this environment, one hosting ADMS and the other one hosting SQL. We were able to restore everything properly with good communication. We had some great speeds on the server side. We were seeing great testing results here locally in Nashville. And then we had to get some user acceptance put into place.
With that, we did reach out to some plants to make sure that users within plants were able to get in and test, were still able to do their gets and their checkouts, and that everything was working in an acceptable manner.
So why we chose AWS-- the first big reason is the more robust data security. Our IT team has been greatly built up around network security, which leads into the IT support. We have many more internal AWS personnel who are vastly more knowledgeable on this topic than I am. But they will tell you that our AWS security environment is much stricter than our on-prem solutions to this day.
But with that, it also gave us simplified maintenance. We were responsible for everything on our on-prem VMs. But transitioning to AWS, we have the support of our AWS teams to help with the maintenance of these servers.
It's also led us to some faster upgrades. With some greater speeds, greater flexibility on server resources, we were able to cut our upgrade times down significantly. Another thing was our ease of global access. This solution gave us the ability to add in additional connections, if needed, to bring in other geolocations into our AWS instances.
And the last thing was the AWS backup and recovery. AWS has a great solution for their data backup and recovery scenario, but that's not where our data backup and recovery stops. Yes, we do utilize that. But our internal IT also has their own solution that they utilize.
But on top of that, our DM team has our own data backup and recovery solution that we utilize. We're able to create this multilevel tier of validation so that, in a worst-case scenario, we are always going to have a backup to go back to and recover from. That way, we don't lose as much data.
With that, we just want to give a huge thank-you to IMAGINiT and Autodesk. They are our partners in all of this. Without them, we wouldn't be where we are today.
They have a vast knowledge base, and we lean on them heavily on recommendations for not just the server side, but our client-side stuff as well. Carlos, do you have anything to add?
CARLOS CAMINOS: Yes. It'd be selfish to say that we came up with the architecture ourselves. It'd be selfish to say that it was all straightforward. It required a lot of planning. It required us to have a relationship, both with our reseller-- in this case, IMAGINiT and Autodesk, in order to get their feedback, their experience, and make sure that technically everything was the way it should be.
And I'll give you a quick example of that. Originally, we had a cloud team that recommended an architecture we weren't familiar with. We don't know AWS in depth. So we thought we should take this back to our partners at Autodesk.
And we reviewed it with them, and it turned out that was not a solution that we would have been successful with. So we had to go back and change it and have several meetings again to make sure everybody understood why the architecture needed to be in a specific manner.
JOSH WILSON: So you might be asking yourself, how's it going? We transitioned one of our largest Vault instances up to AWS. Let's get into a little bit of it.
For the general performance of this instance of Vault in AWS: we initially transferred 2.1 terabytes of vaulted data. And it's growing every day. I think today we're up to about 2.5, 2.6 terabytes of data.
In North America, we have an average ping rate of 37 milliseconds, which is phenomenal. We couldn't have asked for anything better. For assemblies of over 1,000 parts, the average checkin time is just about 5 minutes, and checkout is about a minute. The majority of that checkin time is us creating visualizations locally and all that stuff.
But you might be asking yourself, what about our AVFS servers? So since we transitioned to AWS, we've had no real need to re-implement our AVFS servers. We're seeing a significant decrease in our ping rates. Data transfer speeds are fine; nobody's complaining about lag or any sort of performance issues.
So we've just managed to not re-implement them, which has greatly helped us out on our ease of upgrades and ease of maintenance, because we don't have these nine or ten other AVFS servers that we have to worry about.
Now, for the general server specs for this instance: it is running Windows Server 2019. We do have a 2.2-gigahertz AMD processor for each one of these servers-- this is for both our ADMS server and our SQL server. They both run 64 GB of RAM. Our ADMS server has about 12 terabytes of total disk space.
And this is going to allow us to have multiple drive redundancies and partition things out the way that we need to for having OS on a dedicated drive, our applications on a dedicated drive, our data on a dedicated drive, and our backups on a dedicated drive. And then, like I mentioned earlier, we do have that dedicated SQL server with the similar specs other than disk space.
So with that being said, that brings us to our 2024 upgrade. Now, with the 2024 upgrade, that brought a big decision for us because we still had five servers hosted locally on VMs in our data center. So what we looked at was, do we stay on our local VMs, or do we transition these to AWS?
If we stayed on our local VMs, there were some things we had to look at, the first being that our current server OS wasn't going to be supported by the 2024 release. Now we had a choice. Do we do an in-place OS upgrade? Do we spin up a new VM and transfer data? We had to go through and estimate our upgrade time based on historical information that we've kept and maintained.
Is this going to play a factor in our current backup and restore plan, our data recovery plan, our DR plan that we've already put in place for AWS? And then we also had the data security requirements that IT has been talking about. Are we going to be able to meet and maintain these data security constraints and requirements that IT's given us?
Now, if we look at our transition to AWS, the first thing that we had to look at was the strong user acceptance of our current AWS server. We haven't had much, if any, negative feedback from our users of this server in AWS. We had to estimate our upgrade time for this largest upgrade. And looking back, historically, we were significantly lower. I think we cut our upgrade time down by two complete days.
We already have a significant, robust backup and restore plan for our AWS environment. We're able to maintain this high-level data security that IT is pushing down on us that we need to make sure that we're maintaining. And then the other thing that it's going to give us is the ability to have this dedicated dev environment for any additional testing we might want to do.
So with that, we did decide to go to AWS. We were going to transition all of our on-prem VMs to AWS. But to do so, we did have to go through a security controls assessment. Carlos?
CARLOS CAMINOS: Yes. This is one of my favorite topics here. This was probably one of the most difficult experiences that I've had to go through, and I've gone through a couple in my life. I had been through other implementations before.
But you have to understand the risks here, and the company clearly understands the risk. We are asking to load our most important data to the cloud, right? And AWS and cloud technology is relatively new to everyone.
It's not necessarily a comfortable thing to even speak about in some companies. So let's go through some of the things that we had in place already.
So we had covered already, of course, if you use Vault, you can control permissions, documented access to servers. We implemented a two-factor authentication. We managed our own vulnerabilities and patch management and so forth, the regular things that, to Vault users, are standards, right?
But there are several other items I'm going to mention here. The two highest on the list were encryption while in transit and data scraping. So in these meetings, we talked about a lot of things-- many, many types of security concerns: who's accessing the data, and how that access is being controlled.
We needed to prepare documentation. We needed to write documentation of a business requirement for us to have admin rights. This meant we needed to control admin rights to Vault very tightly, so there are only very few people that have the ability to move files or even delete files. Deleting files is a no-no, right?
And then there were a lot of things there that we weren't aware of. But we were committed to it, right? So, since we were going to be the first, highest-ranked data going up on the cloud, we wanted to make sure we met the requirements that cloud security asked of us.
And they knew and we knew that we weren't sure if we could achieve those. But luckily, there was a lot of collaboration and internal support-- there are a lot of things going on. Since we now have a cloud security team, that means we have an AWS team. That means we have other technologies being developed for the data that's being stored on the cloud.
So it turned out we have a service internally that monitors for data scraping. It monitors everything that gets checked in and checked out, who it is, and the size of the data, 100% of the time. It didn't affect us. It just meant that we needed to make sure we got on their list.
We gave them the information they needed, and that was really a flip of the switch. There was some testing, since we were one of the few. But this was a solution they provided already.
I do have to warn you, this process didn't happen overnight. It probably took us from early Q1 to mid-Q2 to get fully approved. We sat in several large meetings-- it was like running the gauntlet-- with people recognizing that Vault and what we provided was critical for the company. But also, that doesn't mean that we turn away or close our eyes to some other vulnerabilities that might exist.
So we addressed those upfront. And luckily, if there were 10 requirements, we met all 10, which is something that got published internally. It was a great, satisfying experience at the end of it.
And thanks to Josh, as my teammate with this-- I really just spoke more. He's really more the technical support person here. But we were able to get this through.
JOSH WILSON: So now let's talk a little bit about our transition to AWS for these remaining five servers. In doing so, we had to come up with a plan. We made our decision back in Q4 of 2022 that we were going to transition to AWS. But we had a plan B. We still had our on-prem servers, and we were going to keep them up until we were able to get all of our stuff to AWS.
And then, in 2023 Q1, we started our requirements definitions. This is where we started having these discussions with our internal IT and our AWS team, which led to our security controls assessment that Carlos just talked about. With this, our requirements were based off of the 2024 system requirements that came out from Autodesk in early April.
From there, we were able to have some more advanced discussions with our AWS team, and we started our testing in Q2 of 2023. We were able to get a dev environment set up in AWS for each one of these servers. We had a dedicated server for each business unit that we were able to install the ADMS and SQL on, these being a smaller subset of servers.
And then, from there, we were able to get some testing done internally within the data management team. And then we expanded that out a little bit to our ESS team, our entire department, to make sure that everybody in our department was seeing the same results we were.
Once we got some of that feedback, we expanded our testing out a little bit further to some individuals located in our manufacturing facilities across North America. We got some great feedback from them, and we decided it's time. We're ready to start creating our implementation strategy and what we were going to do.
From here, we were able to have some more discussions with our internal group and our IT team to develop this plan and how we were going to implement things. Now, we did come up with our dev testing server requirements. And this is what we defined to our team, to our internal IT and AWS team.
We wanted to go with the Windows Server 2022 Datacenter OS. We ran the same 2.2-gigahertz AMD processor. We ran 32 GB of RAM across all machines. And then we had disk space varying for each one of these servers, but with the same general configuration: four drives for dedicated information.
Our C drive is going to be our OS. Our D drive is going to be where we're going to install all of our apps. Our E drive is our data drive. And F would be backup.
But within these dev servers, there were some things that we installed and we wanted to test before we transitioned that to our production environment, first thing being our SSL configuration. We did not have SSL set up before. So we had to create our certificates.
And we initially created self-signed certificates. Then we also had some server performance questions we wanted to make sure we got answered, that the throughput was going to be fine, that the servers that we had specced out were going to be sufficient for the amount of users and data we were going to host on there.
And then lastly is our client connections. With this being AWS, we wanted to make sure that everybody was able to hit these servers, including the new SSL configuration. With those self-signed certificates for our dev testing environment, we had to go through and manually install these SSL certs.
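For the dev environment, a self-signed certificate can be generated with whatever tooling your servers use. As a hedged sketch, here's a generic openssl equivalent (openssl stands in for the Windows tooling we actually used, and the hostname is a placeholder, not a real server name):

```shell
# Sketch: generate a self-signed certificate and key for a dev Vault server.
# openssl is a generic stand-in; the CN below is a placeholder hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout vault-dev.key -out vault-dev.crt -days 365 \
  -subj "/CN=vault-dev.example.com"
```

Because the certificate is self-signed, each client machine also has to trust it explicitly, which is what made the manual per-machine install step necessary in dev.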
How we have AWS laid out is-- I feel like it's pretty simple, but it's a complex, secure solution for us. We have our client connections on the left-hand side there.
Those clients connect directly to our internal WAN. That WAN connects to our firewall that's located in our data center. And then that firewall is set up with a direct connect to our private subnet hosted in AWS. So we are able to have this secure connection from internal network directly to our private subnet inside of AWS.
So now that we've got all of our dev testing out of the way, let's talk about our production migration plan. We have our testing complete. We were ready. We were comfortable.
We had our servers. Our dev servers were up and ready. You know, they just needed to be transitioned over to production.
We had everything in place that we thought we needed. Then we had our final meeting with our AWS implementation team, and they threw a little curveball at us. They let us know that they couldn't just do a migration of these dev servers over to a production environment.
So they had to create and spin up new production-grade servers for us. And with that, we want to emphasize communication. Communication in any form, whether it's written or verbal, needs to be clear, effective, and efficient.
And not having this clear communication up front led us to this little bit of a miscommunication, which we could have avoided. So I just want to emphasize making sure that we are communicating with everybody involved within a project, within a migration, within an upgrade, anything, that everybody's aware of what's going on.
So now let's get to our actual production migration. We had our new production servers, our AWS instances, spun up. We had about two weeks to get the new ADMSs installed, to get new SQL instances installed, to make sure that we got SSL set back up properly, and to get backups taken and then moved over.
The one thing I will note with our SSL configurations is after talking with some more IT groups within Bridgestone, we found a better solution as opposed to a self-signed certificate. So our IT was able to create a more advanced and secure SSL certificate for us, and they were able to push that out through group policy. That way, we didn't have to go through and push out these self-signed certificates anywhere.
But what we did for our backups was we had weekly full backups that were running. So we ran a full backup on a Saturday. And then, on Monday, Tuesday, Wednesday, Thursday, we ran incrementals. Throughout the whole week, we ran incrementals.
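That cadence is easy to reason about in code. Here's a small illustrative sketch (not our actual tooling; day names and structure are ours for illustration) of the weekly schedule and the restore chain it implies, which is also why one broken incremental can wreck a restore:

```python
# Illustrative sketch only: full ADMS backup on Saturday, incrementals
# Monday through Thursday, as described above.
WEEK = ["Saturday", "Sunday", "Monday", "Tuesday",
        "Wednesday", "Thursday", "Friday"]
SCHEDULE = {"Saturday": "full", "Monday": "incremental",
            "Tuesday": "incremental", "Wednesday": "incremental",
            "Thursday": "incremental"}

def restore_chain(target_day):
    """Backups needed to restore Vault to the end of `target_day`.

    A full backup resets the chain; each incremental stacks on top of it,
    so a single missing or corrupt incremental breaks the whole restore.
    """
    chain = []
    for day in WEEK[: WEEK.index(target_day) + 1]:
        kind = SCHEDULE.get(day)
        if kind == "full":
            chain = [(day, kind)]
        elif kind == "incremental":
            chain.append((day, kind))
    return chain

print(restore_chain("Wednesday"))
# [('Saturday', 'full'), ('Monday', 'incremental'),
#  ('Tuesday', 'incremental'), ('Wednesday', 'incremental')]
```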
Now, we took those after they completed and copied them to their respective AWS servers. And once we had all backups copied up to the new AWS production servers, we needed to disable the on-prem VMs to make sure that nobody accidentally got into one of those legacy machines and made changes to data that was no longer active.
So what we did was we just disabled IIS, and we disabled the SQL instance. This gave us a little bit of security that the general user isn't going to be able to get in and make any changes. But if we needed to get in, we could jump back into these servers, re-enable these services, and get in and get any information we need to.
Now we're on to our AWS restore. We spent the whole week getting all of our backups copied up to the servers. And at 5 o'clock on a Friday, we decided, we're shutting everything down, and we're going to start our restore process.
With that, we have a great team with us here. And we were able, by noon on Sunday, to have all five servers restored, up and running, tested, and completely validated. I think that's a great feat for all of these, and that also includes our main production server's in-place upgrade to 2024.
Now, with that all being said, we have some key takeaways we want to go over. First is understand your current environment and needs. AWS might not be the right solution for you, but it was for us. So evaluate what you currently have in place, and see whether it meets your security requirements.
Also, evaluate the cost versus benefit for your situation. AWS can be a pricey solution. So make sure that you are doing that evaluation yourself to confirm it's the right solution for you.
Another one here is going to be: test your DR plan. If you don't have a Vault DR plan in place, please put one in place. If you do have one, make sure you're testing it regularly, because you never know when something will go wrong. And you want to make sure you understand what you and your company need to do to get your data restored and be back in production as fast as possible.
Have clear, direct, and efficient communication with anybody involved within some of these migration projects. These can be very complex migrations. So make sure that you are communicating with everybody, letting everyone know what's going on, when it's going on, timelines, and when everything is needed by.
Last thing is Vault Professional works in AWS. It's working for us as this solution.
So let's connect. Up on the board here, we do have some QR codes that go directly to our LinkedIn. Feel free to connect with us. If you have any questions, reach out. We'll be more than happy to have additional conversations with you. Carlos, do you have anything to add?
CARLOS CAMINOS: Yeah, again, to support what Josh just said, communication is key here in all different aspects. Surround yourself with a good team. Make sure you have relationships with the resellers and with your vendor. Make sure that's a constant dialogue.
Reach out. Network. This is part of the reason why we're here, right? This is why we continue to come to AU. Feel free to send us an email or a message.
We're very active. Josh and I are both very active on social media. Feel free to reach out to us.
JOSH WILSON: And with that, thank you for joining us today. This has been "Vault Shining on Cloudy Days."