
A Case Study Using Autodesk Vault and Autodesk Construction Cloud Together


Description

In this session, the speaker will walk you through a customer's project showing how they effectively used Vault Professional software and Autodesk Construction Cloud, keeping them in sync with each other. See how we allow for both sides of the company—design and facilities—to effortlessly use the data at the same time with one source of truth.

Key Learnings

  • Learn how to configure Vault to work with Autodesk Construction Cloud.
  • Learn how to configure Autodesk Construction Cloud to work with Vault.
  • Learn how the two environments are synchronized.
  • Review the tools used to make the two systems feel like one.

Speakers

  • Kimberley Hendrix
    Based in Tulsa, Oklahoma, Kimberley Hendrix provides custom solutions for lean engineering, using Autodesk, Inc., products and industry knowledge to streamline design and engineering departments. Hendrix has worked in the manufacturing industry for over 30 years, specializing in automated solutions for the heat exchanger industry, and she has worked with Autodesk products since 1984. Hendrix is associated with D3 Technologies as the Manager of Data Management, focusing on data management, plant, automation, and mechanical issues.
      Transcript

      KIMBERLEY HENDRIX: Hello, thanks for joining our class today. We're going to do a case study using Autodesk Vault and Autodesk Construction Cloud together. There'll be three of us speaking to you today. We'll get started. Who are we? We're all part of Team D3, based in the central part of the US. We'll each individually introduce ourselves. I'm Kimberley Hendrix. I'm the Director of Data Management with Team D3. I'm out of Oklahoma.

      I have four beautiful children. One recently got married. I play a lot of tennis with one of my daughters. And my fun fact is that I play the saxophone whenever I can, usually every weekend or so. And next up is Chancellor.

      CHANCELLOR KURRE: Thanks, Kim. My name is Chancellor Kurre. Like Kim said, I'm a Senior Implementation Consultant with Team D3 based out of Missouri. As you can see there, I've got a couple lovely dogs. One of them is a little bit of a winker, only has one eye but absolutely adorable with that one eye. And when I'm not measuring myself up against an apple being super tall, you'll find me working on my Mazda Miata, big guy, tiny car.

      KIMBERLEY HENDRIX: Drew.

      DREW WASZAK: Hey, guys. I'm Drew Waszak, based out of Missouri. I'm a software developer with Team D3. I've been with Team D3 for four years now. And my primary focus is around creating custom solutions in the Autodesk Platform Services stack. I've got a beautiful fiancee and dog, as you can see from my pictures there. And I spend most of my free time programming, playing video games, hanging out with family. Glad to be here.

      KIMBERLEY HENDRIX: Great. So what we're going to cover today, we're going to walk you through a project that we actually did with a customer, showing how to effectively use Vault Professional software with Autodesk Construction Cloud. We're going to keep them in sync with each other and show you how we allow for both sides of the company, design, facilities, external engineers, to accurately use the data at the same time with one source of truth.

      So the pain. When they called us, that's one of the first things that we want to get into: what pain are they experiencing? What are we trying to solve? And so we had many meetings and scoping sessions, and what we came up with was three main bullet points. The first is they must have a single source of truth. That's because they had many PDM systems across many different locations. And not only did they have many PDM systems, but those same files were in multiple places.

      So copies of the files were in facility locations or engineering sites or with another engineer or on a network drive or in a different PDM system. And nobody knew which one was the latest copy, where that file was, what the metadata around it was, how to find it, which one to work on. Sometimes two people were working on it at the same time, doing two different projects. And then they had to go back and look at it to figure out how to put all that into a single source of truth later.

      The second point was they had to be able to collaborate both internally, with their own people at their field offices and in their offices, controlling their data with one person at a time controlling it within projects, and externally, with outside contractors that actually edit the files. And then whatever we came up with had to have a two-way communication between those two so that one thing controlled it all.

      So our original solution, and I talk about the original solution because, as you'll see as we get through this, everything evolves. And this project also evolved. And so you'll see about halfway through this how we're evolving this solution. So I'm going to start with the original, and then Chancellor is going to talk about how we've changed some things. And then Drew's going to come in and show you all the hidden things behind the curtain.

      So our original solution was five products: Autodesk Vault Professional, Vault Data Standards, which comes with Vault Professional, the Desktop Connector, Autodesk Platform Services, formerly known as Forge, and the Autodesk Construction Cloud. Our two main products were Vault Professional and the Construction Cloud. The other three products are what hooked it all together and made everything work seamlessly. So I've got a little bit of an overview here.

      So on the top left, we have the Vault Pro. That's the client side. That's what the client works from. Everything that they do is inside of there, right? If they're an engineer or somebody that creates content, then they're working in Vault. On the far right, we have the Construction Cloud. And the people that are read only, using it, consuming it, checking it, doing things like that, they're in the Construction Cloud because they're not creating new content. There are exceptions to that. We'll get into that.

      And so a lot of this process runs through coolOrange powerJobs, which we'll talk about quite a bit through here, the Vault server, and then the middleware being some stuff that Team D3 has written, the Desktop Connector, coolOrange powerJobs, and then of course, the platform services. So I just wanted to give you a high level, 10,000-foot look at what we're doing so that as we get into the nitty gritty of this, it makes more sense.

      So let's talk about Vault Pro. Vault Pro was chosen because, for them, it's on-prem. And by on-prem, it's a native AWS or Azure server in the cloud. But they control it. It's in-house PDM for them. It's controlled by their IT department. But it is the document of record. In other words, it's king, OK? So everything that you're looking for is going to be in Vault. And it's going to be in Vault unless an outside contractor is working on it, and when they're done, it still comes back to Vault.

      So Vault is king. That's the single source of truth, and everything else is fed from that. And so on the far right, we have the Construction Cloud. Obviously, it's in the cloud. And it's for external collaboration. Again, it's for people that are read only out in the facilities, and they can use a web browser to get to and read files, or for external contractors that actually edit source files, and we'll get into how we do that a little bit later. And then in the middle is how we connect it.

      Vault Data Standards is a big part of what we're using. coolOrange powerJobs, which is the wrapper around the out-of-the-box job processor, is heavily used as well, and the Desktop Connector is used to actually push and pull files back and forth between Vault Pro and the Construction Cloud. And then some Autodesk Platform Services in conjunction with our Vault data standards to make the end user experience seamless, like it was always there.

      And so that's always our goal, to make it look like it was out-of-the-box, right? So that's the solution that we came up with, the single source of truth with multiple access points. So I'm going to show you just a little bit of the Vault Data Standards that we did. Not all of the data standards that we used are shown in this presentation, but I picked the publishing and closing of projects to show you the breadth of the stuff that we can do with Vault Data Standards and using all of those connectors.

      So the first two lines on this are location and active projects. Both of those are pulled from custom objects. If you're not familiar with custom objects, I did a couple of classes a few years ago: What the Heck Are Custom Objects? and then, a few years later, Configure, Don't Customize: Leveraging Custom Objects. Those will tell you how we use custom objects to pull data into our data standards.

      You have a checkbox for whether you want to create a new project. If they check that box, it creates a new project for them and actually writes the new custom object for them so that the next time they open this dialog, that project is already there. They don't have to go someplace separate and fill in all that information to get the project and then come here and do it. We can do it for them automatically in one dialog box. Again, we want it to look like it was always there, like it was built for them.

      But the important part of this, and the one I want to talk about the most, is our resource type. And that dropdown box, I just typed them in on here, has two options: PDFs published to ACC and source files remain in Vault, or source files published to ACC and they're locked in Vault. So the first one is pretty self-explanatory. It means I'm going to assign this job to a project. That means I'm going to work on it.

      And when I'm done, or at a point where it's ready to be consumed, my lifecycle will then just create a PDF and publish it out to the Construction Cloud for viewable purposes. If I choose the second one, that means an outside contractor needs to edit those files. They're going to do some work for me, some subcontracting for me. And I'm going to send them a bucket of files, and I'm going to put it in Construction Cloud.

      They're going to consume them and work on them. And we're going to bring them back. Because remember, Vault's king. So we need everything back in Vault to maintain that continuity. We do some fun stuff later on that we can talk about when they bring it back, and we want them to know that file is now, that AutoCAD file or Inventor file is now in Vault. We have some placeholders that we put out there so they don't lose all their history in Construction Cloud. So that's the stuff we do. That's our two choices.

      We assign it to a project. Once we assign it to a project, we work on it. We execute the button. A job is queued using coolOrange powerJobs. And it does those offline. So my user does his things, and he goes on to the next thing. And the job processor in the background creates a PDF, puts it in the publish folder. And it's set up to sync every eight hours using the Desktop Connector.
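
      For illustration, here is a minimal sketch of how a Data Standard button handler can queue a custom job like that through the Vault job queue. The job type name ("D3.PublishToACC"), the parameter names, and the $fileMasterId and $projectNumber values gathered from the dialog are assumptions, not the customer's actual configuration; $vault is the web-services connection that Vault Data Standard exposes to its scripts, and AddJob is the standard Vault JobService call.

          # Sketch: queue a custom powerJobs job from the dialog; the job processor runs it offline.
          # "D3.PublishToACC" and the parameter names are illustrative only.
          $jobParams = @()
          foreach ($entry in @{ FileMasterId = $fileMasterId; ProjectNumber = $projectNumber }.GetEnumerator()) {
              $p = New-Object Autodesk.Connectivity.WebServices.JobParam
              $p.Name = $entry.Key
              $p.Val  = [string]$entry.Value
              $jobParams += $p
          }
          # The matching .ps1 on the job processor then creates the PDF and drops it in the publish folder.
          $vault.JobService.AddJob("D3.PublishToACC",
              "Publish PDF to ACC for project $projectNumber",
              [Autodesk.Connectivity.WebServices.JobParam[]]$jobParams, 10)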

      If we're sending the source files, it does it via the Desktop Connector, and the original file in the Vault gets set to a lifecycle state that says, "Assigned to ACC," so it's very clear that somebody outside the organization is working on that file. And it's set to read only so that nobody can edit it.

      OK, next slide. OK, so when we close a project, you get a very similar dialog box. Again, we pick the location and the active projects from our custom objects. And we select this Find Project Files. And that's going to go out and find all the associated project files that we assigned earlier. It's going to pull a list from Autodesk Construction Cloud using some of that cool stuff that Drew's going to show you after a while. And it brings in that list, and it will return those files back to Vault.

      And it sets them at review so that we don't rely on an outside contractor putting something into our Vault at a released state. So it requires a set of eyes to look at that. Anything to add on those two things, guys? We're good?

      CHANCELLOR KURRE: I think we're good on my end right now.

      KIMBERLEY HENDRIX: OK. I was going to mention one thing. Let me go back one slide. The eight-hour delay on publishing files, that is not a Desktop Connector thing; we also use Project Sync inside of Vault.

      And that's its limitation: every eight hours is as often as you can do that. So that's where this eight-hour sync comes into effect. It uses the Desktop Connector, but the Project Sync inside of Vault doesn't fire it except for every eight hours, so that's the limitation on that.

      So this is in play, and it is working for our customer. And then they asked for more, right? So all projects evolve, and they're like, what if? What if we do this? Can we do this? And so the next stage is some rework that we did. And I probably shouldn't have called it rework. I probably should have called it expanding, right, expanding on our same stuff. And I'm going to turn this over to Chancellor to talk about the next stage.

      CHANCELLOR KURRE: Thanks, Kim. Rework, expansion, improvements; I like expansion or improvements. We can go ahead and go to the next slide. Just like Kim mentioned, all projects evolve. If you've been doing this for any length of time at all, you'll know that what you start out with isn't always what you get. There are improvements along the way that you have to make. And so some of these are exactly that.

      We thought that every eight hours wasn't going to be a huge limitation. And we thought it was going to work out just fine for us. But as we started working in this environment, we found that eight hours was kind of a long time to wait. You could come in the beginning of the day, make that transition, and it wouldn't be out in ACC until the end of the day, maybe even the next day depending on when you made that transition. So that was a limitation on us.

      We also were in talks of wanting to take metadata out to ACC. You've done this great job of gathering all this metadata. You've got maybe multiple titles populating your title block. You've got the last person who checked it in, maybe the person who did a review on it, all that valuable metadata. But it hasn't transitioned into ACC. It's not usable. And it's not searchable. So we were wanting to bring some of that in.

      And then kind of the nail in the coffin was the Desktop Connector limitation to 40 projects. This is one that didn't exist when we started, but came along after the fact. As we were working in well over 100 projects, a limitation of only 40 projects was a big nail in the coffin for us. Using the Desktop Connector and the job processor to move things around, you might think, well, let's just use multiple job processors with multiple instances of Desktop Connector. But that leads to a lot of complication and a lot of cleanup that really just isn't necessary.

      Let's go to the next slide. And with that in mind, let's take a look at, we'll call it the new solution. It's our improvement on the old.

      We're still going to use Vault Professional. It's still going to be the king of all the documents. And it's still going to be that source of truth for us. We're still going to use data standard for all of the custom dialogs. We're still moving everything out to ACC, out to Autodesk Construction Cloud. And we're still going to use Autodesk Platform Services but this time in a little bit bigger way.

      And in order to do that, we're going to get rid of Desktop Connector to get rid of some of those limitations for us. Let's go to the next slide. With that, we do have some functional requirements that we have to hit.

      We still need to send and receive files just like Desktop Connector was doing, still have to get those files moved. But we also do want to bring in that metadata like we talked about. This will allow us to search, reuse titles and all of that from Vault into ACC.

      And we want to bring that back as well so that if our contractors or our other people working on it make any changes, we can capture that metadata back into Vault. While it's entirely possible to gather a list of the files in ACC at runtime when the user is actually interacting with that dialog, we found that it was quite resource intensive to do that. And we were a little bit concerned about security, specifically with client secrets and client IDs being on client machines, working with Vault Data Standard and having those more or less in a plain text environment.

      So we decided to maintain our own on-prem list of all of the files in ACC. With the functional requirements out of the way, let's take a look at what changes for the user. You can go to the next slide. Again, since we're still using Vault Data Standard, we're still doing all of that the same. The user really doesn't have any changes to look at. The location and active projects are still populated by custom objects. And we're still selecting our source type.

      The big change here is when we hit Assign to Project. I'll let you go to the next slide again. When we execute that Assign to Project, we're still going to queue a job. But this time, instead of letting the Desktop Connector move those files, we're going to queue another custom job that sends those files using some D3 written libraries. We're still publishing that PDF to the publish folder. We have to keep that location. That's one of the requirements that we had to hit.

      But again, we're using a custom job now to send those. And those libraries are piggybacking off of the ACC APIs to let us send those jobs-- send those files out. The other option is sending source files. Again, we're using a custom job for that to send those original files and then marking those as assigned to ACC so that users in Vault don't accidentally make changes while they're out for edit elsewhere.
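
      The D3 libraries themselves are proprietary, but they sit on top of the public Autodesk Platform Services endpoints. As a hedged sketch of what a server-side job does before calling the ACC APIs, here is a two-legged token request against the APS authentication v2 endpoint; the environment-variable names are assumptions, and the point is that the client ID and secret live only on the job processor, never on client machines.

          # Sketch: two-legged OAuth against APS. Credentials stay server-side on the job processor.
          $clientId     = $env:APS_CLIENT_ID
          $clientSecret = $env:APS_CLIENT_SECRET
          $basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("${clientId}:${clientSecret}"))

          $tokenResponse = Invoke-RestMethod -Method Post `
              -Uri "https://developer.api.autodesk.com/authentication/v2/token" `
              -Headers @{ Authorization = "Basic $basic" } `
              -ContentType "application/x-www-form-urlencoded" `
              -Body @{ grant_type = "client_credentials"; scope = "data:read data:write" }

          # $headers is then reused by the calls that move files and custom attributes.
          $headers = @{ Authorization = "Bearer $($tokenResponse.access_token)" }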

      Let's take a look at the close dialog. Again, the close dialog looks pretty much the same, location and projects, again, still populated from custom objects. This time, instead of finding files at runtime, we're referencing a CSV. Like I said earlier, this is because of some security issues that we-- or security concerns that we had, as well as trying to lighten up some resource allocation. And so we'll look at that CSV.

      The CSV in this case is stored on a network drive so that all machines have access to it. And that will return the list of files. And we'll populate that here so that a user can select the files they need to bring back. When they hit Close Project, we'll take a look at that. When they hit Close Project, we're going to queue another job. This job will return the source files to Vault instead of using Desktop Connector, again with the D3 libraries written on top of the ACC API.
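
      For illustration, a minimal sketch of how the close dialog might read that network-share CSV and narrow it to the selected project. The UNC path and the column names (ProjectNumber, AccPath, ItemId, LastModified) are assumptions, not the customer's actual schema.

          # Sketch: populate the "files to return" list from the on-prem index CSV.
          $indexPath = "\\fileserver\VaultAccSync\acc-file-index.csv"
          $index = Import-Csv -Path $indexPath

          # Only show files belonging to the project chosen in the dialog.
          $projectFiles = $index |
              Where-Object { $_.ProjectNumber -eq $selectedProject } |
              Sort-Object AccPath

          # The dialog binds to this collection; the user picks which files come back to Vault.
          $projectFiles | Select-Object AccPath, ItemId, LastModified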

      We're then going to set that lifecycle state to review so that our in-house users, our Vault users, can review that, make any required notations, and they can be the ones to move it to a release. Now that we've seen all of the fun parts, all the parts that the user gets to interact with, I'll bring Drew in to show us what's behind the curtain. Really, I think that's the fun part.

      DREW WASZAK: [LAUGHS] Thanks, Chancellor, appreciate it. So yeah, we're going to take a little peek behind the curtain here and talk about the architecture behind the integration of Vault and Autodesk Construction Cloud. Can you go ahead and go to the next slide? So we have a simple flowchart up here to walk through to help you guys understand the key software players that are involved and how the data is being traversed through the system.

      Let's start on the far left side. We have the Vault clients that are communicating back and forth with the Vault server. That's everything that Vault does great, all your check-in and check-out processes. And then to the right there, we have a separate server that houses the Vault Job Processor. And the reason this is a separate server is so that we can offload that processing from the Vault server and the Vault client to keep those machines working at what they do best.

      Next, we have each of those custom jobs that Chancellor was mentioning there. And on the far right side, we have Autodesk Build or Autodesk Construction Cloud. All of the endpoints we've used in this integration are backwards compatible with BIM 360. Jumping back over to the Vault Job Processor, we're using that coolOrange middleware or wrapper called powerJobs that gives us ease in integrating and iterating and implementing this solution.

      Because our client had such a complex case of having hundreds of projects, thousands of folders, and tens of thousands of files, we had the potential of dealing with millions of entities. The bottom hub file index job is there to offset and build some efficiencies into our system. This is a recurring job that lives on the Vault Job Processor. It can be run once a week, once a day, once an hour, any frequency that you would want.
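
      As a hedged sketch of what such a recurring index job can look like, the following walks hubs, projects, and folders through the public Data Management endpoints and writes one flat CSV for the other jobs to read locally. The output path and CSV columns are assumptions, pagination is omitted for brevity, and $headers is assumed to carry a valid APS bearer token as in the earlier token sketch.

          # Sketch: recurring "hub file index" job on the Vault Job Processor.
          $apiBase = "https://developer.api.autodesk.com"
          $rows = New-Object System.Collections.Generic.List[object]

          function Get-FolderContentsRecursive($projectId, $folderId, $projectName) {
              $contents = Invoke-RestMethod -Headers $headers `
                  -Uri "$apiBase/data/v1/projects/$projectId/folders/$folderId/contents"
              foreach ($entry in $contents.data) {
                  if ($entry.type -eq "folders") {
                      Get-FolderContentsRecursive $projectId $entry.id $projectName
                  }
                  elseif ($entry.type -eq "items") {
                      $rows.Add([pscustomobject]@{
                          Project  = $projectName
                          ItemId   = $entry.id
                          FileName = $entry.attributes.displayName
                      })
                  }
              }
          }

          $hubs = Invoke-RestMethod -Headers $headers -Uri "$apiBase/project/v1/hubs"
          foreach ($hub in $hubs.data) {
              $projects = Invoke-RestMethod -Headers $headers -Uri "$apiBase/project/v1/hubs/$($hub.id)/projects"
              foreach ($project in $projects.data) {
                  $top = Invoke-RestMethod -Headers $headers `
                      -Uri "$apiBase/project/v1/hubs/$($hub.id)/projects/$($project.id)/topFolders"
                  foreach ($folder in $top.data) {
                      Get-FolderContentsRecursive $project.id $folder.id $project.attributes.name
                  }
              }
          }

          # The flat index that the other jobs and the close dialog read instead of calling the API live.
          $rows | Export-Csv -Path "C:\VaultAccSync\acc-file-index.csv" -NoTypeInformation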

      This hub file index captures all of the projects, all of the folders, and all of the files and stores that on the job processor for easy access so that when the other jobs are queued up, they don't have to make as many API calls and gather up all that data. That data is ready for them to grab on the machine. Jumping back over to the far left on the Vault client, that's where the data standards implementation that Chancellor has been showing you guys comes in. That's where that lives.

      And that gives us events that we can trigger on to then run these jobs on the job processor. So from the Vault client, they can initiate a job to upload their files to build. This will download that file to the job processor, upload that file, and it will also map and bring any assigned user defined properties in Vault as custom attributes within Build. Again, from that Vault client machine, they can request to download or retrieve that file from Build.

      It will download and check in the file to Vault and will also bring along those custom attributes and update any user-defined properties that have changed. We have a few other jobs that run in the background for maintenance. One of our other jobs is kind of a catch-up job. If you opt in to this process late, and your project is already running, we have the ability to resync all of your files and custom attributes for an entire project; it basically iterates through that hub file list that we've stored on the job processor. Now, Kim, can you go to the next slide for me?

      CHANCELLOR KURRE: Real quick before we do, I've got a question for you on that slide. That hub file index job, we're going through hundreds of projects. How do we make sure that an error on, let's say project five doesn't impact the rest of the projects? Can we make sure the rest of those projects still index?

      DREW WASZAK: Absolutely, that's a great question. Thank you, Chancellor. So the way the job processor works is as a singleton, or we have single instances of all of these jobs running. For context, in our hub file index job, we queue up a job for each project that exists in that hub. So if one of them fails or doesn't conform to our folder structure or the requirements we have for the integration, it can progress past that and still keep indexing the rest.

      The same is true for uploading files. If you upload 10 drawings from Vault to Build, they're going to be single jobs within the job processor. And if one fails for whatever reason, the others can still go through. On top of that, if a job fails for whatever reason, that's tracked within the Vault Job Processor and is also tracked within the coolOrange logs so that you can troubleshoot and find a solution. All right, next slide, please.
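
      As a rough sketch of that isolation (the job type name is hypothetical), the index pass simply queues one independent Vault job per project, so an error in any single project's job is logged and skipped rather than stopping the rest of the run.

          # Sketch: one job per project; project five failing never blocks projects six through N.
          # $projects.data comes from the Data Management projects call shown earlier.
          foreach ($project in $projects.data) {
              try {
                  $p = New-Object Autodesk.Connectivity.WebServices.JobParam
                  $p.Name = "ProjectId"
                  $p.Val  = $project.id
                  $vault.JobService.AddJob("D3.IndexAccProject",
                      "Index ACC project $($project.attributes.name)",
                      [Autodesk.Connectivity.WebServices.JobParam[]]@($p), 100)
              }
              catch {
                  # A failure to queue is logged and skipped; the remaining projects still get queued.
                  Write-Warning "Could not queue index job for $($project.id): $_"
              }
          }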

      KIMBERLEY HENDRIX: Well, just real quickly on that, so it sounds like, if you're listening to this, we're putting a lot of pressure on that job processor. And I just wanted to interject in here that we can have multiple job processors to work asynchronously together to run these jobs and pick them up, like if there's four jobs and there's four job processors, they could all run simultaneously to keep up with the pace if we needed to. We've not run into that yet in this environment. But as they grow, and more transactions happen, we can just add more job processors to the list and continue those jobs. So I just wanted to interject that.

      DREW WASZAK: Absolutely. That's a great point, Kim. And to further on that, we have complete scalability. You can have your Vault job processor living on the server if you're in a low or a small enough instance. Or if you're in a large enough instance, we can spin up 10 or hundreds of them depending on whatever is required. Any other questions? All right.

      All right, so for any programmers or coders in the room here, I wanted to talk a little bit about how straightforward it is to build these scripts within PowerShell using coolOrange powerVault and our D3 Autodesk Platform Services libraries. The great thing about using both of these modules is they are maintained, so coolOrange maintains all the differences in the SDK versions between Vault releases, and D3 maintains all of the changes within the Autodesk Platform Services APIs that we use in these libraries.

      So you can see here this is a fairly straightforward script. It's less than 50 lines of code, 53 if we include comments. And this is everything required to upload a file and its custom attributes from Vault to ACC. We've also built in the ability to debug this instance or run it locally for troubleshooting or iterating and building upon it. So it's a great script, and the other ones are just as simple as this one. Thank you guys so much. I'll hand it back to Chancellor to keep talking about how this thing is working.
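
      The script itself is not reproduced in this transcript, so the following is only a hedged skeleton of its shape. The Save-VaultFile cmdlet and the $file object come from coolOrange powerVault/powerJobs and their exact names and parameters should be checked against your installed version; Send-D3AccFile is a hypothetical stand-in for the proprietary D3 library call, and the property names are examples, not the customer's configuration.

          # Skeleton of a powerJobs-style job script: pull the file out of Vault, push it to ACC.
          Import-Module powerVault   # coolOrange module; verify the module name for your release

          # Download the Vault file to a working folder on the job processor.
          # Save-VaultFile is a powerVault cmdlet; parameter names may differ by version.
          $workingDir = "C:\Temp\VaultAccSync"
          $downloaded = Save-VaultFile -File $file._FullPath -DownloadDirectory $workingDir

          # Gather the user-defined properties to mirror as ACC custom attributes (example names only).
          $attributes = @{
              "Title"      = $file.Title
              "Checked By" = $file.'Checked By'
          }

          # Hypothetical D3 library call: uploads the local file to the mapped ACC project folder
          # and applies the attribute map. The real library name and signature differ.
          Send-D3AccFile -LocalPath (Join-Path $workingDir $file._Name) -Project $projectNumber -Attributes $attributes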

      CHANCELLOR KURRE: Absolutely. I'll give a high level overview of what it takes to get this going and all that. If you want more information on it, we do cover this in more detail in the handout. It gives you a little bit more of an insight as to what it might take. As far as hardware requirements go, there's nothing outside of the ordinary. Whatever it takes to get your Vault client going is what it's going to take to get this going, right? Your Vault client, your job processor, all those hardware requirements cover this as well.

      As far as software goes, of course, you'll need Vault Client, Vault Server, powerVault, which is available from coolOrange, and powerJobs, also available from coolOrange. And we've got our D3 Autodesk Platform Services libraries. It's our 11 secret herbs and spices that we like to throw in there that help us along the way. Setup and installation is really just as simple as a standard Data Standard customization. If you've done any adding of your own data standard dialogs, it's as straightforward as that.

      You will need to configure the workflow a little bit as far as making sure that jobs trigger on certain lifecycle state changes so that you can get files sent out and brought back at the correct life cycles. And then as far as actually using the integration day-to-day, just get some practice with it. Get comfortable. Make sure that the users are happy with it. And make sure that the job processor stays running.

      With as many jobs as there will be running on this job processor or job processors, if you need multiple, we want to make sure that we're on top of those so one failure doesn't cascade down the road. On that, I'll hand it back over to Kim.

      KIMBERLEY HENDRIX: So we cover this in more detail in the handout: the software and hardware requirements, the setup, how you put Vault Data Standards on it, how you manage Vault Data Standards for a large implementation, and how we use it and what it looks like. That's a bit of an eye chart, so we'll put that in the handout for everybody to have for reference. And with that, I believe we have our contact information if you have questions or stuff about that. And we look forward to seeing you.