Key Learnings
- Learn techniques for extracting and analyzing data from Civil 3D models to make informed decisions.
- Learn about using Dynamo and scripting tools to automate tasks and streamline design processes in Civil 3D.
- Learn how continuous analysis of models and data enhances efficiency and productivity in civil engineering projects.
- Discover the significance of data mining in Civil 3D models for downstream uses and future success in design projects.
Speaker
- Stephen Walz
I have been in the AEC industry since 2003 and have taken on many roles, from drafting and designing to model management to implementing company-wide BIM and CIM standards, procedures, and workflows. In my current role as the Civil Infrastructure Digital Design Lead at HDR, I collaborate and strategize with business groups and technical leadership, alongside ITG, across HDR to:
- Drive consistency with platform and technology adoption and implementation
- Manage and assist in vendor engagements and activities
- Establish best practices for transitioning into digital, model-based delivery
- Identify staff development opportunities
- Develop software training programs and targeted skill-based learning paths
- Identify strategies and initiatives for driving BIM | CIM implementation company-wide
- Monitor BIM | CIM tool usage and technology challenges, and elevate technical capabilities within HDR
- Advise and assist in the development of HDR's BIM | CIM strategic roadmap
- Lead HDR's Digital Design for Civil Infrastructure Working Group
- Assist with key project pursuits
- Build and support a community of practitioners for the design platforms and technologies being utilized
- Work closely with HDR's and industry thought leaders to promote new platform and technology solutions
STEPHEN WALZ: All right, welcome, everybody, and thanks for joining. Our class today is going to be on data processing and mining with Dynamo for Civil 3D. Quick introduction of myself: I'm Steve Walz. Hey, how are you? This is my eighth time attending Autodesk University and my second time as an AU speaker. Last year, I presented in New Orleans on how we could leverage Civil 3D property sets for design and collaboration purposes. I talked a lot about how we could apply formulaic equations within the property set definitions to automate the value population of each property set, and about how we could leverage Dynamo and the property sets for collaboration and downstream uses.
I myself am the Digital Design Lead for HDR. In that role, I make sure we're using the right tools in the right way across all of HDR, driving consistency across the board and making sure HDR is prepared for new technology solutions as well. I'm also the BIM | CIM Content Manager for AUGIWORLD Magazine.
We are always looking for upcoming subject matter experts within the industry, forward-thinking individuals who are willing to share their knowledge, whether it's best practices or tips and tricks you'd like to pass along.
I also sit on the Education and Professional Certification Committee for buildingSMART International as part of their US chapter. In that role, we promote the idea of openBIM and open data standards, so a lot about ISO 19650 and IFC, the new version 4.3, trying to push those out and build awareness around them. Also, earlier this year I published my first book, Autodesk Civil 3D 2024 from Start to Finish, available on Amazon in Kindle, PDF, and hardcopy format. So if you're looking for a refresher, or you're new to the industry and looking for some Civil 3D training, please, by all means, go check it out.
So as far as what we're going to cover today, some basic focus areas. We're going to start off by diving into Dynamo: learning how we can access Dynamo and extend its out-of-the-box functionality. We're going to move through that part quickly; I really just want to make sure we're level-setting the audience, since those viewing the video or attending the class are coming from all different walks of life, and I don't know where everyone's at right now.
The heart and meat of this presentation is really going to be focused on the last few items. We're going to learn techniques for extracting and aggregating data from Civil 3D models. We're going to talk about how we can find patterns, correlations, and trends in the data from our Civil 3D models. And then we'll develop some methods, some best practices, for applying corrective action to improve productivity and performance based on what the data is reporting and the correlations, patterns, and trends being identified.
A lot of this is going to cover Dynamo for Civil 3D. And throughout the presentation, we're going to do some pop quizzes, just to make sure you're following along and we're keeping your attention, so be on the lookout for those. But again, going back to the Dynamo comment, there's going to be a lot of discussion of how we can extract, aggregate, and report data using Dynamo. And you're probably thinking, Steve, why are we using Dynamo? Civil 3D has all these great tools available within the product.
Project Explorer has been out for several years. It's a great model management tool where we can analyze all the model geometry within our files, make updates, and decipher what's going on, and it's a quick visualization that runs on top of Civil 3D. We can analyze anything from alignments to pressure networks to surfaces to property sets, parcels, whatever it may be.
And it is very comprehensive; all kinds of data can be reported through it. In addition, we can extract that data and generate reports through Excel or Word, and we can develop some custom solutions. But it is somewhat limited, and it will become a little clearer throughout the presentation why we're leveraging Dynamo instead.
But right off the bat, some simple reasons why we're using Dynamo for Civil 3D. First off, it's a flexible solution. We're not limited to the out-of-the-box capabilities, the data points identified within Project Explorer or, for that matter, some of the other reporting tools available within Civil 3D.
We have the ability to add or subtract data points, which makes this a more flexible solution. We also have the ability to make it unique and customizable to the end user experience. Maybe a client only needs certain data, or needs the data reported in a specific way. We can certainly apply that within the scripts we're developing in Dynamo for Civil 3D.
We also have the ability to access more data points. We're not limited to what's available within Project Explorer or the other reporting tools within Civil 3D. Within Dynamo, we can leverage the Python coding language to access more data points, tap into more APIs, and develop a pretty unique solution.
And then, thinking about the end goal, we want whatever we develop to be a scalable solution, whether for a small project team or an entire organization. We want to provide customizable inputs into our scripts that make this scalable for everybody.
All right, so setting the stage with Dynamo. We're going to talk about how we can access and extend the out-of-the-box functionality, making sure we're level-setting with that high-level introduction. So what is Dynamo? It's a visual programming tool. Other programming environments, like .NET, C#, or VBA, all require a lot of text-based coding that maybe we're not very familiar with; we got into the design scene and never really picked up the programming side.
Dynamo, on the other hand, is a very intuitive visual programming tool where all that coding sits behind what they call nodes. In a lot of the cases where I've seen Dynamo used on the Civil 3D side over the past several years, it's to automate design workflows. If we plan on using Dynamo in that sense, to automate some of the routine, mundane tasks we perform day in and day out, you'll have to understand, in a lot of cases, the design workflow progression within Civil 3D specifically.
Take a corridor model: say your end goal is developing a corridor model for a site. We all know the design progression starts with an alignment, which samples a surface; we create our profiles and profile views; our assembly is composed of subassemblies; and then we finally get the output of the corridor model. So it's about understanding how to get from point A to point G, or M, or whatever it may be. We do need to understand that design workflow and have that kind of background.
I will point out that Dynamo was introduced in Civil 3D 2020. I've heard of folks in the industry still using releases prior to 2020, so if you want to access Dynamo, you will have to upgrade. And again, it has the ability to extend its out-of-the-box functionality even further through Python coding integration, so we can develop custom nodes.
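As a point of reference, here's a minimal sketch of what a Python node in Dynamo for Civil 3D looks like, based on the default template that ships with the product. It loads the AutoCAD and Civil 3D managed assemblies, grabs the active documents, and returns a couple of values; treat it as a starting scaffold, not production code.

```python
# Minimal Dynamo for Civil 3D Python node scaffold (a sketch based on
# the default template). Loads the AutoCAD/Civil 3D managed assemblies
# and returns the active drawing's name and its alignment count.
import clr
clr.AddReference('AcMgd')
clr.AddReference('AcCoreMgd')
clr.AddReference('AcDbMgd')
clr.AddReference('AecBaseMgd')
clr.AddReference('AeccDbMgd')

from Autodesk.AutoCAD.ApplicationServices import Application
from Autodesk.Civil.ApplicationServices import CivilApplication

adoc = Application.DocumentManager.MdiActiveDocument  # active AutoCAD document
cdoc = CivilApplication.ActiveDocument                # active Civil 3D document

# GetAlignmentIds() comes from the Civil 3D .NET API exposed to the node
OUT = [adoc.Name, cdoc.GetAlignmentIds().Count]
```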
All right, so how do we access and get started with Dynamo? Within Civil 3D, we go to the Manage ribbon, where we have the Visual Programming panel. Within that panel, we have Dynamo Player on the far right. That's essentially the equivalent of hitting the easy button: once you've developed a whole bunch of scripts, you can pull them up in Dynamo Player and hit play, play, play, and run them on your current file.
If you select Dynamo on the left, you get this dialog box; it's Dynamo running on top of Civil 3D. On the left-hand side, we have the ability to open previously created scripts or create a new one. On the right side, we can access all types of great resources available online. A couple worth pointing out are definitely the forums and the Getting Started content.
Those definitely helped me out a lot over the years as I started diving into Dynamo and trying to understand how we could leverage it in the Civil 3D world. A lot of great people out there on the forums, for sure, and a lot of people willing to help. So definitely check those out, and don't overlook them.
Once we create a new Dynamo script or open a previously created one, we get this interface, still running on top of Civil 3D, so whatever script we're developing can communicate with and apply to the current file that's open. Up at the top, we have our basic pull-downs. On the left side, we have all of our nodes, categorized by functionality, so we have a whole bunch of AutoCAD and Civil 3D nodes available to us, and the list goes on.
And then there's what I like to call the scripting space in the middle, where we actually develop our script, making the connections visually as the script progresses. At the bottom left, you'll notice an add-ons section; that's essentially where we can continue to build on top of the out-of-the-box functionality and extend the capabilities.
To install those plug-ins, those packages, we go to the Packages pull-down, choose to search for a package, and type in what we're looking for. If we want to filter out everything not related to Civil 3D, we can just type in Civil 3D, get the list of what's available for the Civil 3D environment, and select and install. These are free packages developed by users in the community who want to keep building functionality into Dynamo. So there are a lot of great resources out there and a lot of great packages that can continue to extend Dynamo's capabilities.
I'm going to mention three packages, and two of them will actually be used for a lot of the demonstrations. The first one I wanted to point out is CivilConnection. It's available only in Dynamo for Revit, not Dynamo for Civil 3D, but it's definitely worth mentioning. Say you're designing a bridge: maybe there's a bridge abutment, a corridor, some structural elements within the Civil 3D environment that you want to bring into Revit.
Within Dynamo for Revit, you can use the CivilConnection package to communicate with the design elements in Civil 3D, bring them over into Revit, perform your detailed design and analysis, and then push the updates back into Civil 3D, which will actually be able to read the updated geometry. Very cool, but only available on the Revit side.
The two that I'll be using today are Civil 3D Toolkit and Camber. Civil 3D Toolkit, again a free package to install, has been available since 2020, I want to say; really, since Dynamo for Civil 3D has been around. It extends the capabilities, taps into all sorts of Civil 3D APIs, and lets us identify more data points.
Camber has also been around for several years now and taps into both the AutoCAD and Civil 3D APIs, so we can continue to build on top of what's available in Civil 3D Toolkit by using Camber as well. CivilConnection, like I said, is not going to be covered; Civil 3D Toolkit and Camber will be. There are all kinds of great resources out there. I have these links on the slides and in my handout, so there's no need to screen-capture anything. But go to any of these resources; there are a lot of great folks in the community who are willing to help.
So first pop quiz. Make sure you're paying attention. Which two packages will we be using to extend Dynamo for Civil 3D's out-of-the-box functionality? Hope you guessed right. Civil 3D Toolkit and Camber. If you don't have these installed, I highly recommend you do. Again, they're free packages. And they will allow you to continue to build on top of the out-of-the-box functionality and will allow you to do a lot of the stuff I'm going to be demonstrating today.
All right, so let's jump into the heart of it: data processing with Dynamo. We're going to talk about how we can extract data and how we can aggregate it. Quick high-level definitions of data processing and mining, both of which are covered here, though we're going to focus on data processing first.
The definition: it's the process of converting unrefined data into a well-organized and structured format, rendering it suitable for analysis, interpretation, and decision-making. So what does that really mean? We're taking our data and cleaning it. We're transforming it into something usable. We're aggregating it and storing it.
On the data mining side, we're going to take that data and we're going to analyze it. So we're going to apply some algorithms and techniques to discover trends, patterns, correlations, relationships, and anomalies. And again, we're going to focus on the data processing side. So we're going to talk about cleaning, transformation, aggregation, and storage.
So let's run through the data cleaning process: model interrogation and reporting. Here is a snippet of one of the scripts available in the data sets for this class, where we identify the many data points we want to hit within our files and extract the data from. Up at the top, working top-down, we're getting everything from the versioning: the original version and the last saved version. We have some custom nodes available, boxed solutions, that grab a whole bunch of data points: the drawing scale, the coordinate system projection, units, and so on. And then down below, we start to get into line types, blocks, 3D polylines, polylines, and even xref data. So we can combine all these data points into one script and then extract and report.
Looking at this general one specifically, it's a great custom node that comes with all these data points already boxed and available. But we also need a way to clean the data, and that's what we're seeing at the bottom, in the code block: we're organizing it the way we want to see it in our final report. In our case, we're reporting to Excel first, taking the data through Dynamo and exporting it to an Excel file. So this is essentially how we're going to organize it, and all of the items listed in the code block are going to be our column headers, with the data listed underneath.
For the surfaces, moving beyond the general file information, we can use these custom packaged solutions, great custom nodes with all these data points available. For the surfaces, we essentially identify all the surfaces in our file and extract things like the points, the areas, the grades, and so on. And the last one shows the alignment data and how we're going to organize and structure our Excel file. So we hit all these data points with various nodes, and this is how we organize them later.
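In the class data sets this is done with Civil 3D Toolkit and Camber nodes, but to make the pattern concrete, here's a hedged sketch of the same kind of model interrogation written as a single Python node: counting layers and block definitions and collecting xref names. The structure here is illustrative, not the actual nodes from the script.

```python
# A sketch of the model-interrogation idea as one Python node: count
# layers and block definitions and collect xref names from the open
# file. Stands in for the packaged nodes used in the class script.
import clr
clr.AddReference('AcMgd')
clr.AddReference('AcCoreMgd')
clr.AddReference('AcDbMgd')
from Autodesk.AutoCAD.ApplicationServices import Application
from Autodesk.AutoCAD.DatabaseServices import OpenMode

adoc = Application.DocumentManager.MdiActiveDocument
db = adoc.Database

with adoc.LockDocument():
    with db.TransactionManager.StartTransaction() as t:
        layer_table = t.GetObject(db.LayerTableId, OpenMode.ForRead)
        layer_count = sum(1 for _ in layer_table)

        block_table = t.GetObject(db.BlockTableId, OpenMode.ForRead)
        block_count, xref_names = 0, []
        for btr_id in block_table:
            btr = t.GetObject(btr_id, OpenMode.ForRead)
            if btr.IsFromExternalReference:
                xref_names.append(btr.Name)       # xref definition
            elif not btr.IsLayout and not btr.IsAnonymous:
                block_count += 1                  # ordinary block definition
        t.Commit()

# the file name in the first slot doubles as the Power BI relationship key
OUT = [adoc.Name, layer_count, block_count, xref_names]
```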
Here's a high-level picture of one of the general scripts available as a data set for this class. We have our data points, where we identify all the types of data points we want to extract from our model. We have our data aggregation in the middle; that's essentially where we consolidate everything into a well-organized list that can be pushed out. And then finally, our data reporting on the right side, the final output, telling us we want to take that list and push it to an Excel file.
And this is a sample output of just that general script. We're seeing all the column headers that were listed out in the code block, and then we have all the data falling underneath: everything from file name to the projection to the scales to line types and so on. The next one shows the final output of the surface script I was showing. Again, we start with the document file name, we have all the boxed solutions within that custom package, the data points, and then we have a series of additional ones towards the end checking whether the surface is out of date, whether it's on auto-rebuild, and so on.
And then, the last one is the alignments. So we saw how we were organizing it in that code block. And this is essentially that output. And we have all this great data. So again, going back to what's different about Dynamo for Civil 3D compared to Project Explorer or one of the reporting tools available within Civil 3D, we have the ability to customize this to whatever the end user experience is going to be.
We also notice a commonality in that first column: the document file name is listed. That lets us tag and make those relationships later on in the end product, which is essentially Power BI. We need a way to build the relationships and connect all this information back together.
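The class scripts push straight to Excel with packaged nodes; as a hedged sketch of that aggregation-and-storage step, a Python node (assuming Dynamo's CPython 3 engine) could instead append each run to a growing CSV, keeping the document file name as that first relationship-key column. The IN[0]/IN[1] wiring and the output path are assumptions for illustration.

```python
# Aggregation/storage sketch: IN[0] is assumed to carry the column
# headers from the code block, IN[1] the data rows, each beginning with
# the document file name (the Power BI relationship key).
# Assumes Dynamo's CPython 3 engine; the output path is hypothetical.
import csv
import os

headers, rows = IN[0], IN[1]
out_path = r"C:\Temp\general_drawing_data.csv"

write_headers = not os.path.exists(out_path)
with open(out_path, "a", newline="") as f:
    writer = csv.writer(f)
    if write_headers:
        writer.writerow(headers)  # write the column headers once
    writer.writerows(rows)        # append this run so the database grows

OUT = out_path
```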
So, data transformation. Maybe Excel or Power BI isn't your final product. Maybe we want to take this data and actually push it back into Civil 3D. We can certainly do that. Here's an example video of one script I developed a couple of years ago that performs clash detection. As we're seeing in this model, we have gravity networks and pressure networks combined, and the pressure networks essentially represent all the yard piping in that wastewater treatment plant site.
And there's a lot of intricate stuff going on here. There are some clash detection tools available within Civil 3D, but they only consider gravity networks. With Dynamo, we can build on what's available in Civil 3D and perform clash detection right in the product against both gravity and pressure networks, along with any remaining 3D objects.
So maybe we brought in a Revit model. Or maybe we extracted corridor solids from our corridor model and we want to perform some clash detections. It doesn't have to be a hard clash. We could add some buffers, as you're seeing here. Maybe based off the design requirements, there's certain clearances that need to be adhered to during the design. We could apply those within here.
We can run it, and we can push that data all the way back into our current model, too. It could be a point, a solid, really anything we want displayed within Civil 3D so we can make those corrections, modifications, and updates to our network until there are no clashes left.
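To make the buffer idea concrete, here's a self-contained sketch of the clearance math such a check boils down to: treat two pipes as 3D segments with radii and flag a clash when their closest approach is inside the combined radii plus the buffer. The pipe representation and function names are illustrative assumptions, not the nodes from the demo script.

```python
# Clearance-check sketch: minimum distance between two 3D segments,
# compared against radius_a + radius_b + buffer.
from math import sqrt

def _clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (3D tuples)."""
    d1, d2, r = _sub(q1, p1), _sub(q2, p2), _sub(p1, p2)
    a, e, f = _dot(d1, d1), _dot(d2, d2), _dot(d2, r)
    if a < 1e-12 and e < 1e-12:          # both segments degenerate to points
        s = t = 0.0
    elif a < 1e-12:                      # first segment is a point
        s, t = 0.0, _clamp(f / e)
    else:
        c = _dot(d1, r)
        if e < 1e-12:                    # second segment is a point
            s, t = _clamp(-c / a), 0.0
        else:
            b = _dot(d1, d2)
            denom = a*e - b*b
            s = _clamp((b*f - c*e) / denom) if denom > 1e-12 else 0.0
            t = (b*s + f) / e
            if t < 0.0:                  # re-clamp if t left the segment
                s, t = _clamp(-c / a), 0.0
            elif t > 1.0:
                s, t = _clamp((b - c) / a), 1.0
    cp1 = (p1[0] + d1[0]*s, p1[1] + d1[1]*s, p1[2] + d1[2]*s)
    cp2 = (p2[0] + d2[0]*t, p2[1] + d2[1]*t, p2[2] + d2[2]*t)
    return sqrt(_dot(_sub(cp1, cp2), _sub(cp1, cp2)))

def pipes_clash(seg_a, r_a, seg_b, r_b, buffer=0.0):
    """True when the centerlines come closer than the required clearance."""
    return segment_distance(seg_a[0], seg_a[1], seg_b[0], seg_b[1]) < r_a + r_b + buffer

# e.g. two crossing 12-inch pipes (0.5 ft radius) with a 1.5 ft clearance rule
print(pipes_clash(((0, 0, 10), (100, 0, 10)), 0.5,
                  ((50, -10, 11), (50, 10, 11)), 0.5, buffer=1.5))  # True
```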
All right, pop quiz. Make sure we're continuing to follow along. Can you name at least two of the four objectives of data processing? We've already covered two of them. That's right. Cleaning, transformation, aggregation, and storage. So let's talk a little bit about the data aggregation and storage side. We're going to extend the clash reporting and cost estimation concept.
So taking those data points that we just showed in the Clash Reporting Tool, we're able to actually export that to an Excel file and bring it into Power BI with built-in viewers. We're able to see and identify which pipes are actually clashing, from which network, and so on. We get all this great rich data available to us.
Here, we have just the TIN surface isolated. We can see all types of surface operations, understand how the surface was built, what it actually looks like, what definitions were applied, and so on. We get all this data in a very visually appealing Power BI viewer, which is really nice. On the other side, if we were to report using Project Explorer, which is a great tool, as I said, we still get all this rich data, and we can make these great visualizations and share them with a client or a project manager.
And we can see, maybe in this case, how much material needs to be hauled off site or brought on site. We can see all types of information associated with our pipe networks, gravity and pressure. We can quickly identify whether our pipes are meeting the minimum coverage, or clearances, or whatever it may be. And once we have all this data in here, not only are we getting the quantities, we're able to apply some additional formulaic equations to generate rough cost estimates. It's a great tool for something like design alternative analysis.
So, data processing recap: what are we doing with it? We're taking the data and we're cleaning it, transforming it, aggregating it, and storing it. We're going to continue to build on this, and you're going to see it evolve over the following slides into some even more visually appealing Power BI dashboards. Now for the data mining side with Dynamo: we've already extracted the data; now we're going to identify some patterns, correlations, and trends based on it.
So again, the data processing, we're cleaning, transforming, aggregating, and storing. On the data mining side, we're taking that data and analyzing it using algorithms and techniques to discover trends, patterns, correlations, relationships, and anomalies. So data mining with a purpose, setting us up for success with consistent naming conventions.
Towards the beginning of the presentation, I talked about the scalability side of this. With that end goal in mind, we want to make sure we can apply this across a whole project team, an area, maybe a whole organization, so keep that in the back of your mind as you're developing these scripts; it's very important. As you can see, this is a data set with a whole bunch of types of model geometry in it. It's actually a data set from my book, so if you've purchased the book, you have access to the data sets, and you can follow along, apply these scripts, and achieve similar results.
So we're going to call up our general drawing data script and take a closer look at how it's developed. We have the current document it's communicating with. We have all our data points listed out, from the versioning to the projections to the object types: lines, 3D polylines, xref data. And not just whether there are xrefs: where are they located? Are they attached or overlaid? We consolidate all that data, aggregate it, and then report it out.
Now, on the scalability side, we need to think about how we can keep aggregating our data in that stored location. So we set the job number and Excel tab name as inputs. That way, when you call the script up in Dynamo Player, you can change them and keep building the database. So if we call up our Dynamo scripts, scroll down to the general drawing data script, and select it, those inputs are available to us.
Now in this case, for the Excel tab name, instead of leaving it General Drawing, I'm going to call this Phase 1. It could be 30%, 60%, 90%; whatever your design progression typically looks like, you can name it as such. Then, when we start to identify the trends and correlations and whatnot, we can actually see how a file has increased in size progressively throughout the design process and compare that to the types of objects or content actually embedded in that file.
So in this case, we did the general one, and you can see the tab name is General Drawing Phase 1 Utility Model. And we have all this great data: line types, scales, blocks, polylines. And it's not just how many polylines, but information about those polylines too, if we want it. We have xref information: which xrefs are in that file, where are they coming from, are they overlaid? All this data is available to us at our fingertips.
So what do we do with that data once it's exported? We bring it into Power BI, in this case; you could use other products on the market, like Tableau or Quickbase, but I tend to rely on Power BI a lot. What I do is bring the data into Power BI and apply some conditional formatting, and I'll show you that process here real quick.
So what we're seeing on the left, we have slicers. We have all the models that we've already extracted the data from. And we're seeing all these numbers in these cards in the middle update. Some are green. Some are yellow. Some are red. Red typically means bad. Maybe we have too many blocks, or too many gravity pipes, or whatever it may be. We're able to set rules within Power BI to display a different background color. So we can quickly identify what's good, what's bad with our files.
If we go to General, Background, we can see the rules that are set up. So let's clear them and show you what that process is. Again, those numbers update as we select each individual model. We'll go back into our visual, go to the Effects, go to the Background, and apply some conditional formatting to the card we just cleared to show you what that process looks like.
To set this up, we want to set a rule. Since we're on the Number of Layers card, we want to keep that as the count of layers, and then we change the color; it's pretty much as simple as that. We'll set this to green and say that if it meets a certain threshold, it's OK, setting the minimum and maximum values from 0 to 500. If you have under 500 layers, you're good; not an issue or a potential issue with your file.
For the next one, we'll set the range from 500 to 1,000 and call that yellow; there could be potential issues there. It just gives us a quick indication as we're reviewing the content in our files. Then the last range, 1,000 to 10,000; hopefully nobody has 10,000 layers in their file, but even over 1,000 is pretty excessive, and maybe we need to look at the file and do some purging, auditing, whatever it may be, to clean it up and make it a little more manageable, with the end goal of making our teams more productive and efficient. That's the top-of-mind concern for everybody.
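Written out plainly, the rule logic amounts to a tiny function like this sketch; the thresholds are the ones from the demo, and the function name is just for illustration.

```python
# The Power BI background rules from the demo, written out plainly:
# under 500 layers is fine, 500 to 1,000 is worth a look, anything
# above that likely needs a purge/audit.
def layer_count_status(count):
    if count < 500:
        return "green"   # healthy file
    if count < 1000:
        return "yellow"  # potential issue
    return "red"         # excessive; purge, audit, clean up
```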
All right, so quick pop quiz. Can you name at least three of the five items we're discovering while data mining? Hope you got it. That's right: trends, patterns, correlations, relationships, and anomalies. So let's talk about data correlations and relationships and identifying trends. This is a little more advanced Power BI dashboard I developed that includes all three of those files, the grading, survey, and utility models, and also takes into account the progression throughout the life cycle of our design.
We actually did this at phases 1 through 6. We ran that extraction tool on those files, and we can see progressively how each file increases in size compared to the types of basic, general AutoCAD content within it. We have the slicers, so we can isolate the grading model, survey model, or utility model. And we see that progression in file size throughout the life cycle of the design, and how many AutoCAD objects of each type have caused it to increase.
The blocks, obviously, we see a decent jump in; that typically happens throughout the process of designing, I guess. Layers seem to be pretty consistent. Xrefs, not a significant change. If we look at the survey model DWG and just isolate that, we see it top off, and that's pretty consistent with what we'd normally see with a survey model. You're really not supposed to touch a survey model once it's been incorporated into your design, because that could affect a lot of things. And survey doesn't change. [LAUGHS] So these flat lines are pretty normal.
And then the utility model, we can see that increase in size slightly, not too bad. As we hover over each of these data points, we see it went from 273 to 367 blocks. Maybe that was the cause of the slow increase in file size. Probably not; it's a utility model, so we may need to look at the utilities themselves and what's being reported there. But again, as we hover over these, we get to see a lot of great data points associated with the content being displayed.
Here's a simpler view; now we're just looking at the surfaces, so we're starting to look into the 3D model geometry. Isolating the grading model, we can see that incremental increase in size, and as suspected, it's more than likely coming from the surfaces. From phase 2 to phase 6, we went from one surface to two surfaces to 10, as you're seeing as we hover over this.
And then it tops off as well. Again, we can see the file size as it relates to the particular file we're isolating in this view: it went to 13 and topped off towards the end. So there aren't many significant grading changes going on from phase 5 to phase 6, which is expected, or at least hoped for, in a lot of cases.
And diving even deeper, maybe the increase in size is related to the points in the surface. Maybe we created a higher tessellation of our surface, which caused more points to be generated. The survey model, again, we see just top off, and the surface count and points flat-line as well. So we're seeing some trends now; we start picking up on them once we use this tool throughout the duration of the project design.
So, identifying correlations. This is a much simpler view, pretty standard, pretty typical. We're applying some conditional formatting to some of these; I didn't apply it to all of them. But really, this just gives you an idea of what we can do with these different views. We don't always have to have the bar graphs. We could set up something very simple like this for a project manager who's more accustomed to using Excel, something more in their wheelhouse.
So the top section is listing out all that general stuff. The middle section, we're seeing all the surface information that we've been reporting. On the right, we have all our slicers, even on the bottom too. So we could actually just isolate those files that were included in the phase 3 data extraction process. And we get those listed out. If we want just the grading model within phase 3, we could see that information associated with that particular file or that phase.
And again, we're able to make some sense of what's being reported. We can select both, or just one, or jump down to phase 6, and as you can see, all these slicers actually update too. So if we want the phase 6 utility model, down at the bottom we see only those surfaces available in that particular model file filtered out.
So there are some very simple ways we can digest this information, analyze it, and then take corrective action based on performance throughout the design process. If we want to isolate just one surface, we can certainly do that, and it picks up all the files that actually contain that surface. So you're seeing phases 4, 5, and 6 include just that proposed Residential Subdivision surface model. Some really cool things we can do with it, for sure.
So data mining recap. What are we doing with it? We're identifying trends with the data, patterns, correlations, relationships, and anomalies. So this is great. We've extracted the data using Dynamo. We've been able to report it, identify some trends, some patterns using Power BI. But what do we do with that information? A lot of information being digested right now.
The ultimate goal, in my opinion and a lot of people's, is to improve the productivity and performance of our design teams. We want to do things faster, more efficiently, and at a lower cost. So we need to take the data, the information that's now been provided to us, and apply some corrective action.
I like to keep things simple, and I rely on scripts where possible. Scripts are nothing new in the Autodesk world; we've been using them for many, many years. And it's very simple: you create a text document in whatever text editor you like (I use Notepad) and give it an SCR extension. This one is a simple file-cleanup script.
A script file, for those who aren't aware, is really just a series of commands that you would otherwise input into the command line; that's all it is. To apply it to a file, you can either drag and drop the script file from Windows Explorer into your file, or you can type SCR in the command line, navigate to your script file, and click Open, and it'll run the same way.
If you want to batch-process that on multiple files, maybe to perform that purge and audit on a whole set of files across a project, you can use the Autodesk Batch Save Utility. This comes with the standard Civil 3D installation as well. And although it's called the Batch Save Utility, you can actually select any script you have and apply it to whatever files you select. So you can batch-process across a whole project if you'd like.
Keeping things in Dynamo, I like to use the easy button where possible. All of those approaches require clicks, and each click adds up in time. So within Dynamo for Civil 3D, using the Camber custom package, there is a really cool node called Send Command, and we can send those lists of commands to our current file with it.
So if we wanted to perform that cleanup script, we could convert it to a Dynamo script that really just requires three nodes: we're looking at the current document, we have our list of commands in a string, and we're sending them to the current document. Very simple. Maybe we want to perform the OVERKILL command, or remove duplicate feature lines; whatever it may be, we can write it out in the string with those three nodes and send it to our file.
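If you'd rather see the idea in code than in nodes, here's a hedged sketch using the documented Document.SendStringToExecute API from a Python node instead of Camber's Send Command node. Spaces in the string act as Enter presses, and the purge-and-audit sequence mirrors a typical cleanup SCR file.

```python
# Sketch: push a cleanup command sequence into the active drawing via
# Document.SendStringToExecute (an alternative to Camber's Send Command
# node, not the node itself). Spaces act as Enter presses.
import clr
clr.AddReference('AcMgd')
from Autodesk.AutoCAD.ApplicationServices import Application

adoc = Application.DocumentManager.MdiActiveDocument

# -PURGE all named objects (All, names *, no verify), then AUDIT and fix
cleanup = "-PURGE A * N AUDIT Y "
adoc.SendStringToExecute(cleanup, True, False, True)

OUT = "cleanup commands sent"
```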
What if it's a little more complex? Looking at this one, this is our 3D solids file, continuing that same concept of thinking about all the clicks we're saving with Dynamo. In this instance, say I forgot to remove the 3D solids I had extracted from our corridor models. What I'm trying to show here is all the picks it takes to select all your 3D solids and bodies.
And even now, I'm not selecting them all. So I'm going to switch over to the Quick Select command and select all the 3D solids; we see many of them selected throughout our file. And now we're going to add the bodies that were extracted during that extraction process from our corridor model. We see 243. Again, a whole bunch of clicks involved.
Or, once we have that script developed, we can go back to Dynamo Player, and it's as simple as hitting that easy button. Now we don't have to perform all those clicks: I have a command-line script that covers both the bodies and the 3D solids. It's a single click, and now there's no trace of our 3D solids or bodies left. Very simple. Once you get these set up, use the Dynamo Player easy button.
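For the curious, the same zero-pick selection can be sketched in a Python node with a selection filter. This is an illustration built on the standard AutoCAD .NET selection API (Editor.SelectAll with a SelectionFilter), not the actual script from the demo.

```python
# Sketch: select every 3DSOLID and BODY entity with a filter, no
# picking, then erase them in one transaction.
import clr
clr.AddReference('AcMgd')
clr.AddReference('AcCoreMgd')
clr.AddReference('AcDbMgd')
from System import Array
from Autodesk.AutoCAD.ApplicationServices import Application
from Autodesk.AutoCAD.DatabaseServices import OpenMode, TypedValue
from Autodesk.AutoCAD.EditorInput import SelectionFilter, PromptStatus

adoc = Application.DocumentManager.MdiActiveDocument

# DXF group code 0 = entity type; the comma acts as an "or"
filt = SelectionFilter(Array[TypedValue]([TypedValue(0, "3DSOLID,BODY")]))
result = adoc.Editor.SelectAll(filt)

erased = 0
if result.Status == PromptStatus.OK:
    with adoc.LockDocument():
        with adoc.Database.TransactionManager.StartTransaction() as t:
            for oid in result.Value.GetObjectIds():
                t.GetObject(oid, OpenMode.ForWrite).Erase()
                erased += 1
            t.Commit()

OUT = "{} solids/bodies erased".format(erased)
```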
Continuing to build off that, maybe during our evaluation and analysis on the data mining side, we identified that several of our files had xrefs that were attached. Common practice, at HDR at least, is to set those to overlay. So this is a script I developed, only four nodes, that sets all our xrefs to overlay.
Maybe someone forgot to make that switch from attach to overlay. We certainly don't want to keep carrying legacy data and nested references from file to file to file; we just want to see what's important for that particular file without the nesting. So we set everything to overlay. It's a very simple four-node script, and we can add it to that easy-button concept too.
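The four-node Camber graph is the clean way to do this; purely as a hedged illustration, an equivalent Python node might look like the sketch below, assuming your release exposes BlockTableRecord.IsFromOverlayReference as a settable property (recent ones do).

```python
# Sketch: flip every attached xref to overlay. Assumes
# BlockTableRecord.IsFromOverlayReference is settable in your release.
import clr
clr.AddReference('AcMgd')
clr.AddReference('AcCoreMgd')
clr.AddReference('AcDbMgd')
from Autodesk.AutoCAD.ApplicationServices import Application
from Autodesk.AutoCAD.DatabaseServices import OpenMode

adoc = Application.DocumentManager.MdiActiveDocument
db = adoc.Database

switched = []
with adoc.LockDocument():
    with db.TransactionManager.StartTransaction() as t:
        block_table = t.GetObject(db.BlockTableId, OpenMode.ForRead)
        for btr_id in block_table:
            btr = t.GetObject(btr_id, OpenMode.ForRead)
            # attached (not overlaid) xrefs are the ones we flip
            if btr.IsFromExternalReference and not btr.IsFromOverlayReference:
                btr.UpgradeOpen()
                btr.IsFromOverlayReference = True  # assumed settable
                switched.append(btr.Name)
        t.Commit()

OUT = switched  # names of xrefs switched from attach to overlay
```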
There's another one I created in lieu of performing the Save As command. Maybe you want to apply some corrective action to a new file, maybe for the next design. We can apply some inputs in this script to quickly create a new file in a certain format and a certain location, instead of having to navigate around. All we're doing here is saving more clicks, saving some time, and keeping our project teams, our design teams, efficient.
So, the easy button with Dynamo Player: very simple. As I've shown, we go up to Dynamo Player, and here we're going to select a folder. What I've done is create a folder that contains just my model health maintenance scripts. We can set these up and customize them, and it filters everything else out.
As I select the model health maintenance folder, we're only seeing those corrective-action scripts I just developed and reviewed, and we simply go play, play, play. If we don't want to run all of them, we can just select the ones we need based on the reporting and analysis we've performed. Maybe it's just the xref-to-overlay fix that we need to apply to everything. So as we jump into these files and continue to evolve the design, we can keep applying that corrective action while we're already working in them.
So I've focused a lot on model health specifically as a scalable solution. But throughout this process, throughout my journey on the data processing and mining side, I've certainly realized some huge benefits and advantages. One initially unrealized benefit as I entered this journey was that we could actually improve our production drafting and modeling habits and trends.
Maybe we're identifying some things being reported that we want to have a discussion with our team about. We could set up some best-practices documentation. Or maybe it's an upskilling or training opportunity; maybe we need to sit down with our project team, have a brown bag session, go over some of the workflows and some of the things being reported, and talk about best practices and how we should be modeling in our files.
As I mentioned earlier, there's augmenting more workflows with automation. We've obviously put a tremendous focus over the past several years on automating those mundane tasks, and I showed you some very simple things we can do to save ourselves a few clicks during the corrective action process as well. It's about continuing to find more ways to augment those click- and keyboard-entry-heavy workflows through automation.
As I showed earlier, we can perform alternative design and cost analysis with Power BI, even just using Project Explorer. But if you want to use Dynamo for Civil 3D for this purpose, you certainly can, and maybe there are additional data points you want to hit within your models, extract, and report on. We can certainly do that.
Streamlined collaboration is key. Maybe we have a generation of project managers who have been removed from the software, the design collaboration or design authoring tools, for 10 years or so, and they're not as familiar or comfortable jumping into the program and doing their own interrogation. They can just look at the report, a Power BI dashboard or an Excel file, and we can get them the data in a way that's easy for them to understand and digest.
We could obviously use this for design QA/QC. So we focused on the model health side. But there's also the QA/QC side. So again, we could extract different types of design criteria associated with our model geometry using Dynamo and report out and understand it, making sure our designs are in conformance. And the list goes on. There's so many more advantages to using the data processing and mining concept within Dynamo for Civil 3D.
All right, so what have we learned? We've definitely covered a lot. We started with the high-level introduction to Dynamo and talked about how we can access and extend its out-of-the-box functionality. We talked about both data processing and mining: what the definitions were, where the delineation was, and how they complement each other in the extraction and aggregation process, and then taking that data and identifying patterns and correlations.
From the data we're analyzing, we're able to make more informed decisions to increase productivity and overall performance within our design teams and across an entire organization: data-driven processing. As for where you can find me, please don't hesitate to reach out. I have my work email, stephen.walz@hdrinc.com, and my personal, stevewalz@hotmail.com. I'm always on LinkedIn, posting where I can, liking and commenting, so it's a great place to reach out to me.
On YouTube, I post a lot of these video demonstrations, as many of you may have seen, and I'll be posting more and more about this concept and this workflow on my channel as well. It really ranges from design work within Civil 3D to Dynamo to Power BI to visualization; it's a whole range of topics. But it's really just whatever I'm looking at and learning at the time that I want to share with everyone.
I also republish a lot of that content on my design visualization blog at designtovisualization.com. And again, just a reminder: if you're looking for a refresher or looking to train some staff on Autodesk Civil 3D 2024, my book was released earlier this year, Autodesk Civil 3D 2024 from Start to Finish, available in Kindle, PDF, and hardcopy format, along with the data sets included in this demonstration. And with that, thank you.