AU Class

Using BIG Data from BIM 360 Field to Track Corporate Trends


Description

This class will discuss pairing Microsoft Power BI with BIM 360 Field data to make informed corporate decisions.

Key Learnings

  • Understand the data
  • Learn how to use the data
  • Learn about processing the data
  • Learn how to change practices based on the data

Speakers

  • Zane Hunzeker
    An 8-year veteran in VDC implementation, Zane specializes in highly technical and highly collaborative projects covering market sectors such as Education, Aviation, Semiconductor, Mission Critical, Healthcare, and more. He is the Divisional VDC Manager for Swinerton's San Diego Division, overseeing all VDC and construction technology implementation.
  • Dustin Hartsuiker
    Dustin Hartsuiker works as the Manager of Technology Solutions for Swinerton Builders, where he spends his time researching and implementing new technology and productivity enhancements for their project teams. With 24 years in construction (including experience as a Carpenter and Superintendent), Dustin is able to understand the unique challenges the industry faces and wisely differentiate meaningful improvements from the vast sea of emerging technology. His role in the corporate environment has caused him to focus on big data and the benefits it provides when properly harvested, organized, and evaluated. With a long-term goal of improving productivity in construction, Dustin is on a mission to advocate for enterprise programs which can collect, store, and organize the massive amounts of data that hold the secret to unlocking increased productivity in construction.
Transcript

      ZANE HUNZEKER: But welcome to our class on Thursday morning. And I'm Zane. I'm the VDC manager for the San Diego division of Swinerton. I'm more or less in charge of all VDC and construction technology implementation for my division.

      DUSTIN HARTSUIKER: And then I'm Dustin. I'm with Swinerton. A little bit of background, I started back in '92 as a carpenter, then as a superintendent, and worked my way up through. Right now, my title is manager of tech solutions, but really I'm just a part of the innovations team. And what we do is we travel from division to division and look for areas where-- as an old superintendent, I'd like to say-- we can wisely implement new technology that truly makes a big difference.

      We do look at all the technology, but we really focus on the ones that we think can make a huge difference. And so that's us. And then just to put everything in character, so you understand who's talking. Swinerton was founded in 1888. We're doing about 4 billion, 3 and 1/2, 4 billion this year. And we have about 3,200 employees. And I think that split is around 1,700 admin which is standard admin employees that do project management. And probably the balance are craft employees for self-perform. That's where we're coming from.

      And so what we figured would be the easiest way to approach this is to break this into maybe three different points. We're going to talk about data first, just so we can frame out what data is to us and what information is to us. And then we're going to talk a little bit about, OK, now we know what we classify data as. Then we're going to take that and say, OK, this is the information that Swinerton thinks is important on our jobs.

      This is information that we collect for data. And then we're going to turn that in and say, OK, now, this is what we do with the data that we collect. This is how we implement machine learning or analytics for that data. So that's going to be the framework for how we're going to approach this.

      ZANE HUNZEKER: So to start off, we think that data is organized. It's electronic, consistent. It's accessible, and it's typically web-based. And it might be web in terms of an internet, or it might be web in terms of an intranet. So the biggest point is that it needs to be accessible, and it needs to be this hub of information. What it's not is an Excel spreadsheet.

      That's not really data, per se. It's a file. You can push that data into a database but the Excel spreadsheet itself is not really data. And then we have paper plans or any notebooks. We have a-- I think we've had superintendents argue with this that, oh, yeah, no, I've got all the data I need. It's right here in my notebook. Nobody else has that, so it's not really useful to the corporation as a whole.

      DUSTIN HARTSUIKER: Yeah, the difference is really you can have information, and information can be useful. So like a guy that's laying bar out there in the field or a post tension guy, he needs a piece of paper to tell him where to lay it, where to stress it, and all that. That's good information. It's a piece of paper that he carries around with him. It's not data. It doesn't allow us to get productivity rates. It doesn't allow us to move the needle forward at all. At some point, what you need is you need that data so you can pull analysis from it. So it's a key differentiator. And understanding is this a data set? Or is this really just information for the teams?

      And then the next part of that is to say, OK, so we know what we need. We know that we have a data set. There's going to be some pretty important things that you have to do prior to even getting started, prior to being able to use any type of big data or analytics. And the first thing is you basically have to have a structure. You have to structure your data out. And so for us, we started three or four years ago with developing what we call a data dictionary, right? And the data dictionary basically just says, OK, across all of our jobs, what are we calling a PCI, a potential change item, and what does that look like? We'll create a folder, an envelope, for that. And then the next one, OK, what do we call an owner change order?

      And then within that data dictionary, you start to develop, OK, we have trade partners. We can call them subcontractors, or we can call them vendors. Do we need to treat subcontractors differently than vendors in any way, shape, or form? And if so, then we need to be able to identify what's a subcontractor, what's a vendor, and how that partnership, how that relationship works. And so the key point is, yeah, you know that you want a database. You want to create that database.

      What you have to do is you have to start with the structure, and the structure is basically saying, OK, what's the common term across all-- it doesn't matter the business platform you're using, whether it's Field, or CMiC, or Procore, or whatever. You have to be able to say, OK, here's the common terminology for our data as a corporation, as a company. And then the other thing that you want to do is, at least for us that's really made a big difference, is then you start to apply some intelligence, some smart coding to some of that data. And one of our better examples is our project numbers, right?
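The common-terminology idea above can be sketched as a small lookup table. This is a hedged illustration in Python: the canonical terms and synonym lists are invented for the example, not Swinerton's actual data dictionary.

```python
# Hypothetical data dictionary: maps each source system's label for a
# concept onto one canonical corporate term. All terms here are invented
# for illustration.
DATA_DICTIONARY = {
    "potential_change_item": {"PCI", "Potential Change", "PCO"},
    "owner_change_order":    {"OCO", "Owner CO", "Change Order - Owner"},
    "trade_partner":         {"Subcontractor", "Vendor", "Sub"},
}

def canonicalize(term: str) -> str:
    """Return the canonical corporate term for a source-system label."""
    for canonical, synonyms in DATA_DICTIONARY.items():
        if term == canonical or term in synonyms:
            return canonical
    raise KeyError(f"'{term}' is not in the data dictionary")

print(canonicalize("PCI"))     # -> potential_change_item
print(canonicalize("Vendor"))  # -> trade_partner
```

The value of a table like this is that Field, CMiC, and Procore records all land in the mart under one name, so downstream reports never have to guess which label a given job used.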

      So it's year, division, and an opportunity number, right? And so what's nice for us is if somebody puts in, let's just say a support ticket to our BTech staff for a problem that they're having. They include the project number. We immediately know about how long that project's been out, how long that opportunity's been out. We know which division that person's working on. And from there, we can pull a lot of information to help that person out. We understand what the culture is for that division. We might understand where they're at in terms of training and some of that other stuff.
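The smart-coded project number can be sketched the same way. The "YY-DIV-NNNN" layout below is an assumed format for illustration; the talk only says the number encodes year, division, and opportunity.

```python
import re
from typing import NamedTuple

class ProjectNumber(NamedTuple):
    year: int
    division: str
    opportunity: int

# Assumed layout: two-digit year, division code, four-digit opportunity number.
PATTERN = re.compile(r"^(\d{2})-([A-Z]{2,3})-(\d{4})$")

def parse_project_number(number: str) -> ProjectNumber:
    """Decode the context packed into a smart-coded project number."""
    m = PATTERN.match(number)
    if not m:
        raise ValueError(f"unrecognized project number: {number}")
    yy, division, opportunity = m.groups()
    return ProjectNumber(2000 + int(yy), division, int(opportunity))

# A support tech sees "17-SD-0042" on a ticket and immediately knows roughly
# how long the opportunity has been out and which division it belongs to.
print(parse_project_number("17-SD-0042"))
```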

      So the key takeaway is, when you start with your data catalog and you start to manage that structure, you want to make sure that you break it down to the lowest common denominator. And then the other thing, the bottom example, is just that you need to have consistency across your data. So if you're going to go four-number year, month, and day, you don't want another database doing a two-number year, month, and day, if you can help it. Because then what happens is, if you have too many differences, you have to connect the dots manually.
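That consistency rule -- one canonical date format, whatever a source system wrote -- might look like this in practice. The list of source formats is an assumption for the example.

```python
from datetime import datetime

# Source formats we expect to see (an assumption for the example). Order
# matters: try the two-digit-year format before the four-digit one, because
# strptime's %Y will happily parse "17" as the year 17.
KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%y", "%m/%d/%Y", "%Y%m%d"]

def normalize_date(raw: str) -> str:
    """Return the date as canonical YYYY-MM-DD, whatever the source wrote."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw}")

print(normalize_date("11/14/17"))  # -> 2017-11-14
print(normalize_date("20171114"))  # -> 2017-11-14
```

Normalizing at load time means nobody downstream connects the dots manually.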

      ZANE HUNZEKER: Why are we putting in all this effort? #Productivity All right, so what all this allows us to do is streamline and simplify our processes. We have analysis, tools, metrics, and lessons learned. And all of this, usually, if you don't have this type of data analytics and everything, just gets dumped in a closet or in a server somewhere. And it may immediately affect the people that have worked on that. And it may affect the people that that team then goes on to work with but it doesn't spread holistically across the organization.

      And the cliche behind it is what gets measured gets managed. For learning, if you're not sharing this, you're typically hovering right here in productivity. You're not really gaining much. So as we pull more and more data in, we're finding that we're changing some of our processes and, we're learning how to use the data. And we're also gleaning a whole bunch of issues that we didn't really know existed within our corporation, and we're changing how we're acting on those issues.

      DUSTIN HARTSUIKER: I think one of the key takeaways that really drives that home is-- it had to be three or four years ago that we had some data scientists, and they started passing around these Excel spreadsheets. And they're like, identify all these terms within construction. And we're like, this is ridiculous. The spreadsheet's huge. And how do we identify it? And it was really a pain.

      But from that now I can go into our data mart and basically pull anything that I want to pull and then run that through analysis. And the only reason I can do that is because now we have a data mart because we took the time to do a data catalog at the very beginning. So it's a lot of work, but certainly the value in being able to analyze your data comes from having it organized wisely at the beginning.

      ZANE HUNZEKER: So now that we've kind of framed up what we feel data is, we're going to now share what we actually track.

      DUSTIN HARTSUIKER: So a few things here, some of the stuff we track from our project management system. And I guess I'll just throw it out there. We use CMiC as our project management system. It's in-house, so all the data is stored on our servers. But from that, to get that snapshot, what we actually do is pull it into our data mart.

      So, typically, some of the stuff that pops down into the data mart for analysis in other programs and systems is contract value, change orders, recovery, outstanding accounts payable, accounts receivable. Some of that stuff, just our financial stuff we've pulled down. And then project data, it's going to be more of-- right now, what we're looking at is in terms of risk, right? Because what we want to do is we want to analyze risk on the job.

      It's basically cost and risk are the two that we're focusing on because we feel like if you can get your cost in order you can get your risk in order then you can be relatively successful. So for the project data, it's RFIs, submittal information, ASIs, bulletins, and some of the generic contract information. Pre-qual, some of that stuff is what we're pulling through.

      ZANE HUNZEKER: We also pull a lot of stuff out of P6. We have our project start dates, major milestones, any adjustments, so it's an iterative process. And we find that our fragnets are allowing us to easily identify delays. And those are our risk indicators when we're digging into this P6 information.

      DUSTIN HARTSUIKER: And one of the things that I'd highlight about that just a little bit is this thought process of-- we use P6 Web, right? And it's not the most friendly product, unfortunately, for our teams. So there's a very straightforward decision where you look, and you say, OK, look, yeah, we could use SureTrak-- that would be hard to use now, but it's what I used to use as a superintendent.

      But, yeah, we could use some single file-based product, or we can use a web-based. And the web-based is certainly more cumbersome, a little bit more challenging but what that allows us to do, though, then is pull all those analytics and roll them up and bubble them up to the top. And so sometimes, there's a hand off or a trade-off between maybe the easiest product to use in the field versus the best product to use for a corporation.

      You just have to wisely make those decisions. You don't want to hinder the field too much. But you do need to make sure that whatever product they're using-- a lot of times we'll have somebody come up and say, check out this really cool iPad app. It's like, oh, it is a really cool iPad app. What happens to the data? Well, I don't know. So it probably stays on the iPad. And if that's the case, it doesn't really scale for us very well.

      And then some of the things we track for safety and quality. And the bulk of this comes out of BIM 360 Field. Right now we're using classic and keeping an eye on Forge and 2.0. But ultimately, what we typically pull is-- we feel like, for safety, that the culture is as important as any punishment or any of that. And so we do try to track all the successes. And then, of course, we have the job walks/inspections, issues/problems.

      And then the training that we do, jobsite toolbox. Any of our stretch and flex type stuff, we try and track that so that we know that we can prove out some of those training elements. And then for quality, it's some of the indicators that we think make a good quality program. So it's preinstall, first work, follow up, deficiency, jobsite photography. And then there's probably a few others, checklists, and some stuff like that.

      ZANE HUNZEKER: With commissioning, too, just one little side note. We're doing a project out at the San Diego airport. And they have an enormous list of data parameters that have to live in the model. And they're looking to take all that data and push it into some of their commissioning. So that's what we're actively trying to work through right now.

      DUSTIN HARTSUIKER: We're earlier on in commissioning just because what we've found is a lot of the commissioning agents have their own systems. And because they're a smaller piece of the project, it's a lot harder to justify tying an integration into them for the last half of a project or whatever. So what we've found is, while we try to get them into our systems, honestly, they work best in the system they're familiar with. And so they wind up using their own system, so it's hard for us to pull in a lot of commissioning metric data. But, yeah, I think we did a little bit in San Diego, we'll do a little bit. So I think we're starting to churn into that a little bit.

      ZANE HUNZEKER: So a big factor in the success that we have is that everybody needs to think the same way. And in a corporation that has 1,700-- 1,700, right? Craft labor, and then x amount of admin labor that are all pushing this information. That's a big ordeal, to make sure that everybody's in the same frame of mind. So we need to have identified definitions of deficiencies. We need to have identified definitions of work to complete, specifically in the BIM 360 Field information.

      DUSTIN HARTSUIKER: Yeah, and so some of the best examples of that are-- and this is something that you learn. So first, you have to figure out what data you want to track. Second, you have to get it in a good, nice, organized database. And third, what you find out is, OK, now you have a good, nice, organized database. You start pulling metrics out and you're like, man, we're all over the freaking board here, right? And some examples of that are, basically, you might be on project A. And within Field, you'll have quality set up for us.

      We basically push a template to all jobs, so they're all the same. But even then, even though there's a template so all jobs are the same, we have issue types and some of those issue types are quality deficiency. Some of those issue types are work to complete, right? And so if you're on project A and that team is like, OK, the hardware on that back door, yeah, that's a panic but it's supposed to be silver and now it's gold. If team A says, well, that's a deficiency. It's the wrong product, right?

      But then team B says, nah, that's really just work to complete because technically there's a panic on there. It just needs to be done right, and so we're going to call that work to complete. It's not a deficiency. If you then, at the end of those jobs, you look and you see project A might have 500 deficiencies and Project B might have 10, right? And the reality of it is they're the same project with the same parameters. They just called it different things. And so then your data kind of goes down.

      And so what you need to do is, after you've got your database and everything figured out and firmed up, then you need to start working on training your teams to say, no, no, a deficiency is a beam that's installed upside down. It's a big gnarly deal that we're going to have to get some design input to change. Hardware that's the wrong color may just be work to complete. So you train across all your teams to say, look, we need to be speaking in common terminology so that we can actually have data that's meaningful, and not just data where we're trying to figure out what went wrong there.
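Once the teams are trained on that vocabulary, the same rule can be encoded so every job classifies issues identically. A minimal sketch, with the two decision inputs invented for illustration; the categories mirror the talk's training guidance.

```python
# One corporate-wide rule for issue types. The inputs are illustrative
# assumptions, not Swinerton's production classifier.
def classify_issue(needs_design_change: bool, finished_to_spec: bool) -> str:
    if needs_design_change:
        return "deficiency"        # big gnarly deal: design has to get involved
    if not finished_to_spec:
        return "work_to_complete"  # right work, just not done yet
    return "closed"

# Beam installed upside down -> design involvement -> deficiency.
print(classify_issue(needs_design_change=True, finished_to_spec=False))
# Wrong-color panic hardware -> no design change, just redo it -> work to complete.
print(classify_issue(needs_design_change=False, finished_to_spec=False))
```

With one rule like this, project A and project B stop reporting 500 deficiencies versus 10 for the same underlying conditions.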

      ZANE HUNZEKER: Right, because deficiencies are much more of a red flag. So the scale of what you need to do to actually fix the problem tends to be very different depending on how you're classifying things. And then as far as safety goes, we need to make sure that everybody's on the same page about how safety issues are tracked, how you categorize them. It's the same kind of mindset: we just need to standardize, across the board, the how and what and the verbiage of what we're doing.

      So big data. Getting in to the fancier stuff here. For us, big data is all the information like we just spoke about. And on the grand scheme of things, if you look at IBM, they're doing billions of times more than we are in terms of data sets. But as far as a construction company goes, this is almost all of our data. This is nearly everything we have. So for us, this is big data. This is something very significant.

      DUSTIN HARTSUIKER: And here we'll just chat a little bit, basically, about how we take all that data. And that last part is kind of to say, look, we're calling it big data. We realize it's more corporate data, but it's as big as we can get versus the outside environment. But we'll talk a little bit about how we leverage that data for corporate trends and analysis.

      ZANE HUNZEKER: So our Quality Tracker is something that we made internally. And what it does is it takes a whole lot of other data points-- and Dustin will get into that in a minute. But it's updated daily, overnight. And every morning, you can pull it up and you can get a heartbeat, or a pulse, as it were, of how everything's going per job, per division, corporate wide. We can organize the data any which way we want.

      And then Power BI is actually a live connection to our data mart. So you can refresh it and whatever happened, in that last 15 seconds between you opening it and refreshing it, gets automatically pulled in there. We track any number of things and, again, Dustin will go through this a little bit in just a second.

      And BIM 360 Insight, or Project IQ, some of you know about this. This is actually true machine learning. So we take all this same kind of data, push it up to the AI, and the AI learns our progress. And it adjusts itself to how we're doing. And we can correct it with some of the anomalies. Say we have a scatter plot of all of the points, and then we have one over here.

      We can look at that one, and we can identify if that was a verbiage issue, or if it just pinged it for some weird reason. And we can actually bring that back down to a low issue or de-escalate that one anomaly. And then the AI will actually continue to learn that process.

      DUSTIN HARTSUIKER: So, yeah, to get started, if we just look back, basically what I'd say is this is our starting point, right? And this is where we're taking all of our systems, pulling it together. We fully intend that we'll have more tracking tools beyond those, but this is where we're getting started.

      The Quality Tracker, in terms of big data, is probably pulling from multiple areas. Basically what we do is, from Field, we're pulling information like the job number, checklists, issues, deficiencies. We're pulling all of that through the API. All of that kind of just comes and lands in our data mart. We pull every night, so that's refreshed every night. And basically, that's what we use for some of the analysis.

      And then on the ERP side, it's pulling some submittals, some meetings, general project data, QC manager, project name. All of those are within our quality program. We've developed what we call KPIs, or key performance indicators. And so it's pulling all of the data from whatever system it needs to. It's pulling all that data down to basically say, can we fulfill the key performance indicators, or track the key performance indicators?
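As a rough sketch of that nightly pull, with a stub standing in for the real BIM 360 Field API client (whose endpoints aren't shown here) and SQLite standing in for the actual data mart:

```python
import sqlite3
from datetime import date

def fetch_issues(job_number: str) -> list[dict]:
    """Hypothetical stand-in for the Field API client: returns issue rows."""
    return [
        {"job": job_number, "type": "deficiency", "status": "open"},
        {"job": job_number, "type": "work_to_complete", "status": "closed"},
    ]

def nightly_pull(jobs: list[str], mart: sqlite3.Connection) -> int:
    """Land every job's issues in the data mart, tagged with the pull date."""
    mart.execute("""CREATE TABLE IF NOT EXISTS issues
                    (pull_date TEXT, job TEXT, type TEXT, status TEXT)""")
    pull_date = date.today().isoformat()
    count = 0
    for job in jobs:
        for row in fetch_issues(job):
            mart.execute("INSERT INTO issues VALUES (?, ?, ?, ?)",
                         (pull_date, row["job"], row["type"], row["status"]))
            count += 1
    mart.commit()
    return count

mart = sqlite3.connect(":memory:")
print(nightly_pull(["17-SD-0042", "18-DEN-0007"], mart))  # -> 4
```

The design point is that the pull only lands raw rows; the grading and analysis read from the mart afterward, so each source system keeps doing what it does best.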

      So then what the Quality Tracker does is basically it pulls all that stuff in, and then it aggregates that data and gives a participation grade for the jobsites, right? And the nice thing about that is you can sort by corporation. You can sort by region. You can sort by division. Or you can sort by project. So you can basically say, OK, on this project, what's the grade? And it's based on our key performance indicators and how well they're doing. Or you can say, within this division, what's the grade? And so basically that allows a regional manager to look at his divisions and understand where he needs to focus. But it'll also help a project team understand, from their standpoint, what they need to do to try and improve their focus a little bit more on quality.
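The participation grade could be computed along these lines. The KPI names and the equal weighting are illustrative assumptions; the talk describes the idea (KPIs rolled into a grade, sortable by project, division, or region), not the formula.

```python
from statistics import mean

# Illustrative KPI list -- the talk names preinstall, first work, follow up,
# photography, and checklists as quality indicators.
KPIS = ["preinstall_meeting", "first_work_inspection",
        "followup_inspection", "jobsite_photos", "checklists_current"]

def job_grade(kpi_results: dict[str, bool]) -> float:
    """Fraction of KPIs fulfilled, as a 0-100 participation grade."""
    return 100.0 * mean(1.0 if kpi_results.get(k, False) else 0.0 for k in KPIS)

def rollup_grade(jobs: list[dict[str, bool]]) -> float:
    """Average grade across a division's (or region's) jobs."""
    return mean(job_grade(job) for job in jobs)

job_a = {k: True for k in KPIS}       # fully participating jobsite
job_b = {"preinstall_meeting": True}  # mostly not participating
print(job_grade(job_a), job_grade(job_b))  # -> 100.0 20.0
print(rollup_grade([job_a, job_b]))        # -> 60.0
```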

      ZANE HUNZEKER: It has very broad implications. So a divisional manager can go in there, just like he's saying. And if one of his projects has got a really low grade, then we can call up the project team. And they might not be completely forthcoming with all their issues. And so this is how we-- not necessarily babysit them, but we're an employee-owned company. So anything that we don't do efficiently, we lose money. All of us do, as individuals. So keeping us honest and forthcoming with all these issues-- this helps us.

      DUSTIN HARTSUIKER: Yeah, and honestly, the goal of a general contractor is to manage risk and cost, right? And this is one area where we can wisely manage risk, and this basically allows us to quickly see projects that may need a little bit more focused attention. And that's not a bad thing. They just may need a little bit more focused attention on the quality side, so this allows us to focus that attention. The other thing that it can allow us to do is to look and see maybe there's an area where the data's just not quite right. So we talked about examples of the door hardware and a deficiency versus a work to complete. And so it's not uncommon that you'll look at a grade and then go out and chat with the team, and they're doing fine. They're just not maybe doing the data quite in the manner that we would expect. And that's very much a-- it's a journey. It's not something that you'll solve overnight. It's a journey that you just have to do over time.

      ZANE HUNZEKER: There's a lot of growing pains with that.

      DUSTIN HARTSUIKER: And then just a little bit of background about how the Quality Tracker works. And this will be the same for Power BI when we get into that as well. Basically-- and I've mentioned a little bit of a highlight of it-- we've developed a data mart. So we have active databases like CMiC, and 360 Field, and P6 web. And all of those databases are doing what they do best.

      And then we have this center. And I like to think of it as a data flea market, right? So we have this data mart that was identified within the catalog. The catalog identifies everything that's in there, and the data mart actually hosts the data, hosts the information. And what it does is, depending on the frequency, in some programs, it can get it right away. Some programs, it gets it once a night. Basically, it kind of dumps all of this data into this flea market area.

      And then things like the Quality Tracker can go in, peruse the flea market, and say, OK, for this job number, for this division, for this day, what's this data? And it pops it up, and it reports it. And that's the philosophy of where we--

      ZANE HUNZEKER: And it's on our own server. It's our internal information.

      DUSTIN HARTSUIKER: Yeah, basically, it's a balance of some Oracle software and some Microsoft software that pulls those two things together and allows for the integration. The nice thing, too, is if we have open APIs for any system that we would want, then as we develop relationships with more vendors, we'll basically just use that API, drag the information in, and drop it into our data mart. So it allows us to be very agnostic with the programs, as long as the program has an open API.

      We can then connect the dots through the API to connect what we need to and pull the analysis from it. And so some of that is actually hinged on this philosophy of saying, look, one of our operations managers in Denver, which is where I'm based out of, was really, really fond of saying, hey, give me a reason to sit down and have morning coffee with our ERP which was CMiC at the time. And his point with that by saying, hey, give me a reason to sit down and have coffee with CMiC is basically in saying, give me a reason to sit down and peruse all the information that I want to peruse for a specific division, region, partner, company, whatever and hand it up to me quickly.

      What we found is that asking him to log in to Field and look at the reports, then asking him to log in to CMiC and look at the reports, and then asking him to log in to P6 and see what's going on, doesn't work for somebody in that area. And so what we do is, with the data mart, we can pull all that information down, and then we can just do one report that says, hey, this is what you need.

      ZANE HUNZEKER: Construction's moving way too fast for our executives to drill into 13 different platforms. They want one place where they can see it.

      DUSTIN HARTSUIKER: Yeah, and the challenge is what we found is every platform works well, right? In its own unique use case, right? So 360 Field does very well for quality, safety. A lot of those issues were getting into Docs which will pull some of that other stuff. But then you're not going to want to try to use Field for your project management. It's just not going to work that way. It's not what it's designed to do.

      So what we found is, for a little while, we were trying to shoehorn-- for instance, CMiC is our project management, and maybe we were trying to shoehorn safety and quality in there. The reality of it is, even if it does have those modules, it's not really what it's designed to do, not nearly as well as Field does it. And so what a data mart allows us to do is to be able to wisely say, yeah, absolutely, this is the best product for this task. We're going to go ahead and sign an enterprise agreement with that product because we know that, as long as they have APIs, we can pull that information in.

      So it allows us to have multiple different software companies, multiple different vendors but still pull that information into one dashboard which gets us a little bit into Power BI. And so, again, we've talked about the data catalog. We have basically Power BI linked back in to the end of the data catalog right now. And we're relatively early in our run through with Power BI.

      I think it's an amazingly powerful tool. We looked at some Domo and some other things, and they're powerful as well. I think any type of dashboarding analysis product will really be very, very beneficial for us. We settled on Power BI just because we have so many Microsoft products that it was a wise licensing decision to make. And it seemed to do what we needed it to do. But basically, again, it pulls from the data catalog, so it can give us metrics on any of our systems in any which way that we want to. And then I won't really zoom in on these but ultimately what we're focusing on right now is a lot of financials.

      ZANE HUNZEKER: The pointer.

      DUSTIN HARTSUIKER: Yeah, we can certainly do that. So what we're focusing on right now is a lot of the financials. So if you look at the very top area, up in here, what we have a lot of this stuff right here is PCI. So it's potential changes. It's changes to owners. It's no change to owners. And what that helps us identify on a project is it helps us identify the risk of that project.

      So Jared, my ops guy, can basically sit down with a few different dashboards in Power BI very quickly, and he can say, OK, what's the risk? So, potentially, if you look and you say, OK, these are all of the projects within one region, he can look and say, yeah, there's a lot of potential changes on this job. Which, if there are no owner change orders covering them, can maybe lead to the financial risk of fee erosion. So I'm going to call those guys up, and I'm going to try and figure out what we've got going on.

      And maybe it's fine. Maybe it's a PCI that we know an owner change order is coming to backfill, and we're OK. But at least it brings it up to the top where he can kind of see what's going on. So the top is some change order, some financial data. What do we got going on there? And then the next line down there would be-- in the middle there, it's RFIs. And that can be RFIs per division or per project. And ultimately, what we feel like is going on there is we feel like RFIs can lead to design risk. If you have a lot of open RFIs, that can impact your schedule. And it can lead to the fact that maybe the design's not as complete as it should be, which may lead to financial risk down the road or schedule impact down the road, right?

      And so it's just nice to be able to look at all your projects and say, OK, how many RFIs have they written? You can take that in stride with where's the project at? If it's in the first third maybe they don't have very many. If they're in the middle or whatever. So when you look at this data, and this is-- we're going to talk a little bit about Insight in a bit here. But the key takeaway is there's different analytic systems that basically you have to look and make a judgment decision. And both our Quality Tracker and Power BI are those where, yeah, what this does very, very well is it gives you a lot of information in a very easy to understand and digest format.

      And then basically, you make that decision to say, OK, this job has a lot of RFIs. Half of them are open. Let's make a phone call, and see if we can help them mitigate that in some way, shape, or form. But that has to be a human decision and judgment call. What we'll talk about with Insight is a lot of the machine learning helps bubble that to the surface for us so that conscious decision doesn't have to be made but anyway. So we track some RFIs in the middle there.
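That judgment-call screen -- surface the jobs whose open-RFI share looks high, then a person makes the phone call -- can be sketched as a simple filter. The 50% threshold is purely illustrative; the real cutoff is a human decision.

```python
# jobs maps job number -> (open_rfis, total_rfis). The 0.5 threshold is an
# invented example value, not a recommended policy.
def flag_rfi_risk(jobs: dict[str, tuple[int, int]],
                  threshold: float = 0.5) -> list[str]:
    """Return the job numbers whose open-RFI share warrants a phone call."""
    flagged = []
    for job, (open_rfis, total) in jobs.items():
        if total and open_rfis / total >= threshold:
            flagged.append(job)
    return flagged

print(flag_rfi_risk({"17-SD-0042": (40, 60),    # 67% open: call them
                     "18-DEN-0007": (5, 80)}))  # 6% open: fine
```

Note the dashboard only flags; whether a flagged job is actually in trouble, or just early in its run, stays a human call.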

      And then off to the right-- oh, that's probably more of an internal one for us. It's exception request, right? So we have a subcontractor pre-qual program. And there are subcontractors that we want to use that maybe an owner's required them or whatever. And we have to go through an exception for the pre-qual. And so you can sort that out by division, and that way you know about how many divisions are saying, hey, look we need a different subcontractor than what's qualified for the job. And, again, kind of leads to risk, right?

      And then the next ones down are pretty straightforward. They're financials, again, accounts payable, accounts receivable, right? So you understand where you're at with your money, where you're at with collecting your money back. And, again, you just look for-- you got trends which everything's good, and then you're like, whoa, wait a minute. What do we have going on here? And, again, there may well be a good reason for it, but this helps bubble that to the surface so you can make that phone call and see what's going on.

      And then another interesting one, this one is basically projects by market share. So we know if we have a good balance of health care versus educational versus airport versus industrial type of work. And that way if this pie chart gets too far out of whack, what you'll understand is, as the economy starts to drop or whatever, you're not going to be balanced right for tracking the right style of project across. So it's interesting to track that. And then these are, again, a little bit more financial. So I guess when you look at the overall big picture, financial is probably the main thing that we're looking at. And then we're trying to pull into RFI submittals.

      And then quality is not in Power BI. It's more in our Quality Tracker, but our goal is that we would turn that into a dashboard as well, so that somebody like a Jared can basically just look at Power BI and pull all that information. But if you look at the bottom here, these are financials. So here is basically craft labor put in place, and then the black is overtime craft labor. And so that helps you understand, look, are the teams working a ton of overtime? Are they using their overtime wisely? Are we at risk or whatever? And then this is admin labor which helps you understand fee and GCs. Are you overstaffed, understaffed, how's your fee and GCs?

      ZANE HUNZEKER: With the overhead, too. If you find that you could drill down from division to project to various levels, right? So if we have one project, specifically, in my division that's doing a lot of overhead. But that's because it's a casino, and we're trying to push three months ahead in the schedule, and there is a much larger bonus than there is an overtime cost for that turnover. So even if you are accepting that risk and monitoring it, as long as you intelligently know what is going on in that dashboard, you're not going to freak out.

      DUSTIN HARTSUIKER: Yeah, a lot of this just drives to the surface what are the trends. And there could very well be a good answer for the trend. And there's nothing wrong with that. But this populates it to the top. And one of the things in construction, one of our division managers is fond of saying, man, we're so busy chopping wood, we forget to sharpen the ax, right? And so they're out there chopping, chopping, chopping, chopping. And our project teams do that quite often, right?

      They're just out there, and, yeah, they know they have this owner change order. They know they have to get it done. But right now they're focused on all these other fires. This actually helps us to say, look, you realize that by not processing that change order, by not sitting at your desk and actually sitting down processing that change order, you're actually inviting a lot of risk. Because if that doesn't get processed the way we hoped then ultimately we're on the hook for all these funds that are being expended.

      Basically, this helps us manage the risk for the project teams when they're out there so busy chopping wood that maybe they forget what some of the risks are. And so that's what that boils down to. And Zane mentioned something that's probably pretty critical which is the drill down. And if you use BI much at all, ultimately, what you can do is double-click on any chart, and it'll drill you down to the next step and then drill you down to the next step.

      So for instance, if we were looking at the RFIs here, and you're like, man, that's a ton of RFIs in that division. Then you double-click on it, and it'll break it out into project. And then you double click on that and that breaks you down into another dashboard that says, OK, yeah, this is how many RFIs they have. This is how many are open. This is how many are closed. And this is what the status is.

      And one of the things that we talked about with consistency of data and RFIs as a good example is-- so for quality we talked about, yeah, it's the hardware on that door. For RFIs, and submittals, and some of that stuff, one of the things back when I was a-- I spent a little stint as a project engineer just to understand more what that life was like, and then I stopped. But when I was a project engineer, early 2000s, and one of the things-- and some of this is per project, per division-- but one of the things was, yeah, when you write an RFI, your cost impact probably will stay to be determined. And your schedule impact will probably stay to be determined. And when you figure it out then you can fill it in. But, honestly, a lot of times we never did, right?

      And so when you get into a dashboard like this and you're looking at RFIs, you can see what's opened and closed because those are forms that have to be filled out. But then when you want to drill the next one deeper and say, OK, how many of these RFIs had cost impact? Well, the reality of it is, if every RFI says to be determined in the cost impact, you don't know, right? And so then there's a training element where we go back to the teams, and we say, yeah, 10 years ago it was fine to say to be determined, but today we need that data. We need to understand if it really, truly is a cost impact.

      And so what we need you to do is once you've figured that out you then need to go back and update the RFI. Or if you know it at the beginning, don't write to be determined because it's the safest, safest thing to do. Actually fill it out where you think it needs to be filled out, right? And so there's definitely some-- as you start to look into that data, there's a lot of training, analysis, and adjustments that need to be made as you walk down that path to try to make it as meaningful as you can.
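The audit described above, finding RFIs that got closed with the cost impact still reading "to be determined," could be sketched as a simple data-quality check. The field names and record shape below are hypothetical, not the actual CMiC schema:

```python
# Flag closed RFIs whose cost impact was never updated from "TBD".
# The keys "status" and "cost_impact" are illustrative, not a real schema.

def flag_stale_tbd(rfis):
    """Return RFIs that are closed but still say 'to be determined'."""
    return [
        r for r in rfis
        if r["status"] == "closed"
        and r["cost_impact"].strip().lower() in ("tbd", "to be determined")
    ]

rfis = [
    {"id": 1, "status": "closed", "cost_impact": "TBD"},
    {"id": 2, "status": "closed", "cost_impact": "$4,200"},
    {"id": 3, "status": "open",   "cost_impact": "to be determined"},
]

stale = flag_stale_tbd(rfis)
print([r["id"] for r in stale])  # [1] -- only the closed-but-TBD RFI
```

A report like this gives the training conversation a concrete list to work from: every RFI on it is one where the team closed the question without ever recording the answer.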

      ZANE HUNZEKER: And one note on the RFIs which I think is the third row down, on the first, on the left, right?

      DUSTIN HARTSUIKER: Yup, down there.

      ZANE HUNZEKER: Oh, that one there, OK? So the big spike, right in the middle, that's our OCLA office. And they are going to have almost $900 million in revenue this year. So there's a reason why they're much bigger. And I think they've added $3.5 billion to their backlog.

      DUSTIN HARTSUIKER: Yeah, so absolutely when you're looking at it per division, ultimately, the first thing you do is you just say, man, there's some spikes here and there. Especially, if you're more corporate, you just drill down into the spikes. And maybe there's a very, very good answer for that. And quite honestly, like for that, for anybody that's in corporate they're going to look at that, and they're going to basically analyze and say, OK, yeah, Portland's a little bit smaller. Seattle's a little bit smaller. OCLA's huge. And they're just basically going to make those judgment calls as they're looking at the chart.

      And then anything that feels like an anomaly, they'll drill back down into it. But that kind of goes back, and I'd mentioned that we're-- right now, Power BI is a very good dashboarding tool for us. It brings a lot of stuff to the surface, but there still has to be a lot of analysis that takes place by the person. There has to be a thought process. And so I think the next thing we're going to talk about a little bit, which is Insight, which is actually, truly much more machine learning, which our goal is that it takes some of that thought process away and bubbles stuff up.

      ZANE HUNZEKER: Right, so a little background on how we came into Insight. We were a pretty early partner with Autodesk before it was IQ or Insight. And we were really working with them to try to figure out how we can do what is now Insight. We weren't the first. We weren't the only ones, but we were one of the very early adopters of trying to work through this.

      And what we ended up doing was manually doing a data dump into an Excel spreadsheet, reviewing that, assigning risk levels, and a few other items. And then we'd push that over to Autodesk, and they would take all that data and help manage the actual algorithms that the AI is now using today. And so like we said it pulls from the 360 Field. It is true machine learning, so we don't even really touch it. And it tells us who's high risk for that day.

      It's based on a number of factors. There's verbiage in the issues in BIM 360 Field. There's the number of issues per subcontractor or per job. There's also how long it's taking them to close out-- the average time to close. Or if you have a handful of items that are pending for 200 days or something, then that gets flagged very easily. And that could just be an anomaly that means you need to go back and change the way that you're actually working. And I have an example of that in just a minute.
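The "pending for 200 days" flag mentioned above amounts to a simple age check on open issues. A minimal sketch, with illustrative field names and threshold (this is not Insight's actual implementation):

```python
from datetime import date

# Flag issues that have been open past a threshold -- the kind of
# long-pending anomaly described in the talk. The 200-day threshold
# and the "opened"/"closed" keys are illustrative.

def long_pending(issues, today, threshold_days=200):
    """Return issues still open after threshold_days."""
    return [
        i for i in issues
        if i["closed"] is None and (today - i["opened"]).days >= threshold_days
    ]

issues = [
    {"id": "A", "opened": date(2019, 1, 2),  "closed": None},  # ~300 days old
    {"id": "B", "opened": date(2019, 10, 1), "closed": None},  # ~1 month old
]

flagged = long_pending(issues, today=date(2019, 11, 1))
print([i["id"] for i in flagged])  # ["A"]
```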

      But from the dashboard, you get a big picture view. And that's the theme behind all this data-- it allows people that are so busy that they can't actually go into any one platform or project. And they can put it up on a big screen, and they can just look at it, see if there's any spikes, any anomalies. And if everything's OK, and we're assured that the data is good, everything's OK in the business. It's a very soothing thing when you do go into this, and you have all green across. It hasn't happened yet.

      DUSTIN HARTSUIKER: And it may never, and that's OK. As long as your ratios are good, we're OK with the fact that yeah, you're going to have some hard projects. And you're going to have some good projects, and that's OK. A couple of things that we talked a little bit about Power BI and our Quality Tracker, basically, somebody having to make a decision. They have to look at the information. They have to consume it and figure out what's going on.

      The thing we like about Insight is it does that for us, right? So it looks at a project, and it says, OK, on this project, how many issues have the word water, or penetration, or moisture, or leak? OK, fine, so we have this many that have the word water, or leak, or penetration. That might be a problem. Who is it assigned to, as far as the subcontractor's concerned, right? OK, that's fine, so it's assigned to sub A, sub B, sub C. What's the performance of that subcontractor over time?

      Do they have a lot of open issues? Do they close their issues quickly? Does it seem like they're behind? Does it seem like they're ahead? And then it assigns the risk factor for that specific item, which then rolls up into the risk factor for the job. And the nice thing about that is all of that analysis of, OK, what are the issues? Who's the subcontractor that's working on them? What's his history? Then it assigns risk. And so the nice thing about the machine learning element of that is it takes away somebody having to have as much knowledge about the product or about that specific project.
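A toy version of the scoring logic just described: count risk keywords in each issue's text, weight by the subcontractor's open-issue history, and roll the result up to a project score. The keyword list, the weighting, and the formula are illustrative guesses, not Insight's actual algorithm:

```python
# Hypothetical keyword-plus-history risk rollup, loosely mirroring the
# description above. Not Insight's real scoring.

RISK_WORDS = {"water", "penetration", "moisture", "leak"}

def issue_score(text, sub_open_ratio):
    """Keyword hits, amplified by the sub's share of still-open issues."""
    hits = sum(1 for w in text.lower().split() if w.strip(".,") in RISK_WORDS)
    return hits * (1 + sub_open_ratio)

def project_risk(issues, sub_open_ratios):
    """Sum issue scores into one number for the job."""
    return sum(issue_score(i["text"], sub_open_ratios[i["sub"]]) for i in issues)

issues = [
    {"text": "Water leak at roof penetration", "sub": "A"},  # 3 keyword hits
    {"text": "Paint touch-up in lobby",         "sub": "B"},  # 0 hits
]

score = project_risk(issues, {"A": 0.5, "B": 0.1})
print(score)  # 4.5 -- driven entirely by sub A's water issue
```

The shape matters more than the numbers: text signals and subcontractor history combine per issue, then roll up to the project, which is what lets a corporate viewer skip the line-item review.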

      ZANE HUNZEKER: And we get results without ever looking at those 50,000 line items of data, as a corporation.

      DUSTIN HARTSUIKER: And the other element that we-- when we touched base on this-- actually, it was a fellow from Leighton that brought it up in the last presentation. And it is a great idea, or a great insight. And it was, basically, that a lot of times-- so we have project engineers that are maybe earlier on in the project. They're not seasoned veterans in construction. They're basically just getting started, and that's perfectly fine. That's where we want them. But if we have them out there doing quality inspections, quality reports, and they're walking through-- they may be walking through a roof checklist. And they may be writing, this jack's loose. This penetration isn't good. This flashing's off.

      They may not understand the value of the information that they're putting in but the intelligence behind Insight will actually bubble that to the top and will, A, help that project engineer understand the importance of what he's doing. But, B, it will also make sure that just because that-- if you have an old super that's walking there and the flashing's loose, he's going to make that top high priority, no matter what. But a project engineer may not but Insight will bubble that to the top for us which helps us align expectations and make sure that the--

      ZANE HUNZEKER: And so I have an example of our growing pains. This is a job that we're just finishing up. And this was taken a little while ago. But you can see that Swinerton Builders drywall, our own internal self-perform crew, is flagged for being higher risk. So I saw that and went, holy crap, OK. Went over to the drywall manager and said, hey, we have an issue. Come look at this data, and see what's going on. And it turns out that our project engineers on the builder side were making issues that lasted the entire building. It was, make sure that the flashing is behind this door for the doorstop. It wasn't per floor, which we now do because--

      DUSTIN HARTSUIKER: Make sure the flashing's good on levels 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12.

      ZANE HUNZEKER: Right, so that data point didn't get closed out for 160 days, right? So that was obviously a false positive but that's how we learned how to adjust how we do this verbiage. Because you don't want to downplay it, right? In Insight, you can take an issue and say, this isn't a high issue, it's a low issue. But you don't want to do that because it does need to be there. That flashing needs to be there.

      And if you tell it that it's actually a low issue, it will go corporate wide and say, hey, this type of item is actually a low issue, and it doesn't make a flag, which it still does in reality. But now we make it to where those issues are per floor, so that that way as that floor gets done, it will be closed out. And it's only open for 30 days. It's only open for 20 days. And then that way we don't get flagged internally.
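The per-floor fix shows up directly in the time-to-close numbers: one building-wide issue stays open until the last floor is verified, while per-floor issues each close on their own, so the average drops. The durations below are made up for illustration:

```python
# Why per-floor issues beat one building-wide issue for time-to-close
# metrics. The close-out days are invented for this example.

floor_close_days = [20, 45, 70, 95, 120, 160]  # day each floor's flashing is verified

# One building-wide issue stays open until the last floor closes.
building_wide = max(floor_close_days)

# Per-floor issues each close on their own; the average is what the
# risk model sees.
per_floor_avg = sum(floor_close_days) / len(floor_close_days)

print(building_wide)  # 160 -- reads as a long-pending anomaly
print(per_floor_avg)  # 85.0 -- a much healthier average time to close
```

Same work, same schedule, but the per-floor structure stops the risk model from seeing a 160-day-old issue that was really twelve floors of routine close-outs.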

      DUSTIN HARTSUIKER: So I think that's-- there's actually a couple of good points there. So we see that with the flashing. The other thing we see a lot is, let's just say you have a concrete break, a seven-day break, that's low, right? You throw that into Field so you're tracking the issue. And if you don't take the time to say this needs to be a 56-day issue, within two weeks that's popping up as a low break. And it's popping up as a high-risk issue because you have a low break in your concrete. Well, the reality of it is, it's not an issue until you get your 56-day break.

      And so making sure that the teams wisely adjust the due dates, such that your Insight information is there. And then the other thing-- the nice thing about Insight or IQ is that you can basically go back in. And probably the easiest example to understand is if somebody writes, the alcove for the water fountain is three inches too small, right? Insight will basically look at that, and it'll say, oh, that's water, that's a problem. And you can go back to that issue and you can say, no, no, no, no, it says a water fountain, not water.

      So therefore the risk is a little bit lower. It's not a water problem, right? And so you can actually train Insight over time. So we helped develop the original algorithm, but then you can continually go back and adjust that algorithm to be wiser over time. And Insight will learn from that, as long as you're not trying to trick the system. So if it's a due date problem, you make the due dates right. If it's a true machine learning analysis problem then you adjust the machine learning where you can. And it gets smarter over time.

      ZANE HUNZEKER: And we've been pushing data into Insight for-- I don't even remember how long.

      DUSTIN HARTSUIKER: Well, the nice thing about Insight is it pulls from Field. And so we've been using Field since about 2007, when it was still Vela. So as much data as we have in Field-- I don't even know how many-- I think we have 220, 230 active projects probably about now. But I don't know how many we have total overall, but it does pull all of that data. So the nice thing about the Insight integration is we didn't have to do a lot with it. Autodesk just pulled it through. So we get that value.

      ZANE HUNZEKER: Yeah, let's go. All right, so now is there any questions? I'm assuming there's a couple, hopefully. No? Yes.

      AUDIENCE: What are you guys using for document management? [INAUDIBLE]?

      DUSTIN HARTSUIKER: We are--

      ZANE HUNZEKER: Piloting.

      DUSTIN HARTSUIKER: We're early in Docs. So we are using Docs for document management. Prior to that, we had a proprietary Bluebeam hyperlinking, ShareFile synchronization solution that we had put in place. And it's worked very well for a few years. We do feel like we'd much rather be in an all-in-one type of a Docs environment. But that's primarily document management.

      What we're doing for analysis, like submittals, RFIs, and that type of stuff-- all that's in a database, which is CMiC. So the nice thing about that is you're not trying to do OCR or anything on all your RFIs. You actually pull the raw data of your RFIs for analysis, if that makes sense.

      ZANE HUNZEKER: Right, and the man--

      AUDIENCE: [INAUDIBLE] monitoring or using Insight versus Power BI?

      DUSTIN HARTSUIKER: Insight is used on about three different levels. So the project team would use Insight for their specific project. And then we have quality managers in every region. And those quality managers will look at Insight for their region. And the nice thing about Insight is, if you look, there's basically a map with pins. And so what we try to tell our teams to do is, look, on Monday morning, take a look at the map. Figure out where you're at. See what your pin colors are. And if you have some yellows and reds, they might be areas you want to focus on this week to see if you can try and turn them green, right?

      So on that tier, it would be our division quality managers. And then on the top tier, it would be people like Christina and Dennis who are quality directors. They'll look at the whole map, and they'll say, OK, in general, how are we doing? So that's kind of where that tier goes.

      ZANE HUNZEKER: And I have an every-other-week meeting with our quality director-- or not director, quality manager, Omar, in San Diego. And we go through the data on-- he's not as tech savvy as a VDC manager, so I help him drill through it and see where we need to take our quality engineers and put them on specific jobs for maybe a couple of weeks, help them get their risk lower.

      AUDIENCE: So that's [INAUDIBLE] Power BI [INAUDIBLE].

      DUSTIN HARTSUIKER: For Power BI, our goal is to have tiered access to certain dashboards. Right now, what we were showing is basically something that's open to APMs and above. So that they can see those metrics. Our goal is that we would have a tiered approach. Power BI, we have licensing across the company, so it's not hard for us to get it out. It's just a matter of managing security and visibility into what they want to see.

      So right now the one that we were looking at is APMs and above, but our goal is that we would have one for P's, at the project level. And then we'd have one for APMs. And then above that, we would have one that's more directly suited towards regional managers.

      ZANE HUNZEKER: Divisional managers.

      DUSTIN HARTSUIKER: Division managers and region managers. So right now, it's APMs, and it's all one bucket, but we will split that out and make them more meaningful over time.

      ZANE HUNZEKER: I think we had a couple over here, real quick.

      AUDIENCE: So you mentioned that you started to do things like defining terms like efficiency [INAUDIBLE]. Are you also looking to define or establish rules, things like RFIs or submittals, like only one question per RFI. I mean, I've seen RFIs with a list of 50 questions in one RFI. So are you applying those things or creating anomalies when you go to check with a project defined by the [INAUDIBLE]? [INAUDIBLE]?

      ZANE HUNZEKER: Right, the question is, we have rules for all of our definitions, but do we have rules for setting up the RFIs-- the number of questions?

      DUSTIN HARTSUIKER: Yup, for the process. And what I would say is we do have, certainly, standards like one question per RFI that we train on. But as you continue to walk through the data, then more rules get created, which is the way it always is, right? You find a problem, and you solve it. And then you try and put a rule in place so the problem doesn't exist again. So we're getting down that path. I'd say we have a ways to go, a ways to go, yeah.

      AUDIENCE: So your quality program [INAUDIBLE] internal [INAUDIBLE]?

      DUSTIN HARTSUIKER: The quality database is a mixture of both 360 Field, which is where a lot of our field input comes from and then CMiC which is our project management software. And it just meshes all those together.

      AUDIENCE: [INAUDIBLE]?

      DUSTIN HARTSUIKER: Oh, for the quality? It's basically a drop-down report, so you can jump into CMiC at any point in time. Our reporting software-- I can't remember the name of it-- but you can basically go in, and you can pull that Quality Tracker at any point in time. It's not a dashboard yet in Power BI. That's our next goal, to turn it into a dashboard. But right now it's a hard-driven report that, let's just say for a quality director, they'll probably pull across the region once a month. Maybe for a division quality manager, they'll pull that report once a week. That's how that works, yeah.

      ZANE HUNZEKER: Yeah.

      AUDIENCE: So CMiC has been [INAUDIBLE]?

      DUSTIN HARTSUIKER: We try to use the best tool for the job that will bring the most implementation and the wisest approach. And so that's kind of why we have that data mart sitting in the middle. RFIs and submittals are absolutely in CMiC at this point because there's not a good fit for it. The dailies are in CMiC because, by the time Field had them, we already had CMiC. But we are looking at, and researching, and trying to understand the value and the right way, from a legal and risk standpoint, to push dailies back over into Field.

      So the key takeaway is, right now, we have some stringent rules from risk and legal on what we should and shouldn't do, but we always try to analyze the best place to do the right input and allow our integrations with the Data Tracker to pull the information across. It's slower than I'd like.

      AUDIENCE: Have you looked at actually doing it with the [INAUDIBLE] systems? So an RFI could be created [INAUDIBLE] CMiC?

      DUSTIN HARTSUIKER: We have. Yeah, we have. In fact, with Docs, that's something that we're-- once the APIs get fleshed out through Forge, that's something that we're absolutely interested in, and we're trying to make that workflow a good possibility for us. In the back.

      AUDIENCE: What sort of ERP do you use when you [INAUDIBLE]?

      DUSTIN HARTSUIKER: What's that?

      AUDIENCE: What type of ERP do you use when you [INAUDIBLE]?

      DUSTIN HARTSUIKER: So CMiC is our primary ERP. It's where the data is hosted. We host it locally. We do have bi-directional integration with CMiC. I was trying to think of an example off the top of my head, I'm not-- I'm aware of the big picture. I'm not necessarily one of the data scientists that goes in and--

      AUDIENCE: [INAUDIBLE] accounting side, like if RFI changes or change orders, things coming back to the project.

      DUSTIN HARTSUIKER: All of that is tracked in CMiC, and you can link RFIs to submittal-- or you can link RFIs to cost, PCIs, through CMiC. So that's not really a bi-directional thing. So what I'd say is I do think we probably have some bi-directional through a Textura, or a JD, or something like that, but I'm not sure what they are.

      AUDIENCE: [INAUDIBLE]?

      DUSTIN HARTSUIKER: What I would say is it's probably why we developed the data mart. And that's probably our more primary solution is to try and dump into a data mart. Where we have to we'd write back into CMiC, but you're right it's a struggle. It's a struggle. Yeah, so you have content creation elements. And it seems like you'd be wise to stick with the content creation and then just pull for reporting and analysis. Pull it out.

      AUDIENCE: The biggest problem is with systems, when you do this, if you have data that you put in this, it's difficult to control the format of that data. And then when it falls back into the ERP, it's not the correct format for accounting. That's our biggest struggle.

      DUSTIN HARTSUIKER: Yup, yup, that's a huge deal. Yeah, it is. It's a huge deal. And a data mart and a data catalog will help, but sometimes when you have proprietary system A that wants it this way and system B that wants it that way, you just have to-- where we manage that is through integration. And we connect the dots through integration. So we have-- and I don't know the exact names.

      I know they're Oracle-based. And we have some Microsoft-based. But that integration would basically say, pull this information out of, let's just say, Textura. The date will be year, month, day, but push it into here and make the date day, month, year, right? And that's through an integration process. Again, a data scientist would get into gnarly details that would bore all of us, probably.
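The date reshaping described above, pulling year-month-day out of one system and pushing day-month-year into another, is the kind of small transform an integration layer performs. A minimal sketch in Python (the formats are illustrative, and Textura is just the example named in the talk):

```python
from datetime import datetime

# Reshape a date string between two systems' conventions. The source
# and target formats here are assumptions for illustration.

def reshape_date(ymd: str) -> str:
    """Convert '2019-11-21' (year, month, day) to '21-11-2019'."""
    return datetime.strptime(ymd, "%Y-%m-%d").strftime("%d-%m-%Y")

print(reshape_date("2019-11-21"))  # 21-11-2019
```

Parsing to a real date object first, rather than slicing strings, also catches malformed input: `strptime` raises `ValueError` instead of silently passing bad data downstream to accounting.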

      ZANE HUNZEKER: That could be an hour class, just that conversation.

      DUSTIN HARTSUIKER: Yeah, any other?

      ZANE HUNZEKER: Did you face any resistance from employees [INAUDIBLE] system? And if so, any advice to get buy-in?

      DUSTIN HARTSUIKER: I think we all face resistance to use any system when it's first rolled out, right? Unless it's true, solid value, so I would say, yeah, we do face resistance. One of the things that helps immensely is to show them the why. So when you can-- if somebody's like, it seems like a lot of freaking work. And you're like, yeah, but check this dashboard out that we can get that will help you build better or will help you understand risk.

      I think that does help. I think there's a bit of coaxing. There's a bit of forcing. And there's a lot-- a lot of why. That's probably how I would say we approach that, if that makes sense.

      ZANE HUNZEKER: And certain people will just refuse. That's their personality.

      DUSTIN HARTSUIKER: Yeah, and sometimes, there's doors that you just don't knock on, right? And so if it's somebody that's got two years left before they retire or whatever, and they're refusing, then it's like, well, let's go knock on all these opportunities over here. And maybe that one opportunity just is a lost opportunity for the next two years. And that's OK. But then there's opportunities that you do have to push your way through and say, look, guys, it's time to actually get on the bandwagon and make this happen.

      AUDIENCE: What percentage of your teams are contributing-- let's say people are contributing to the data in the system? And what do you think is the bare minimum participation really to be able to have useful data?

      ZANE HUNZEKER: So percentage of useful data and then how many people are actually putting up good data?

      AUDIENCE: [INAUDIBLE], so you're not [INAUDIBLE] participation. But [INAUDIBLE], but like you said, there's some [INAUDIBLE]. I'm just curious [INAUDIBLE].

      DUSTIN HARTSUIKER: So that kind of depends on the system, right? And in CMiC, every team is using CMiC. They have to. It's our project management. But for Field, in this case, what our goal has been-- and I'd have to look at those metrics-- is 75% of jobs over $10 million, right? And so that's roughly where we're at. What I do know is-- and this is all big-picture metrics-- we have 220, 230 jobs in there that are active. I think when we look at big jobs, I think we probably have 150 to 200 big jobs going on at any given point in time, which means that we're getting the bulk of the big, a little bit of the small.

      I would say that we probably have 75% of our major risky projects in there. The turning point where it truly starts adding value-- you could do it maybe on a division basis and say, look, we need 100% of a division in. And that would probably start to give you some meaningful data if you're just kind of getting started and rolling out. If you're not going to do it by division and you're going to try and do it across all your projects, I would say you'd need 30% at a minimum-- 30% to 40% to truly start to get the wheel rolling where it makes sense. Ideally, more is better, right?

      ZANE HUNZEKER: And so we're out of time but if you want to continue this, we'll be--

      [APPLAUSE]