Description
Key Learnings
- How we have connected data
- How you can leverage information from multiple systems
- The possibility of connecting data and geometry in Power BI
- Tracking change over a period of time
Speaker
- Tom Denby: I'm a Digitalisation Manager for Skanska UK, with a background in delivering digital solutions to site, most recently as part of @oneAlliance, working to make Digital Project Rehearsal business as usual. I'm keen to drive change in project behaviours by finding more efficient ways of working using digital technologies, to increase productivity and decrease health and safety risks.
TOM DENBY: Welcome to this presentation on how connected data can help make project delivery more efficient. I'm going to be using the M42 Junction 6 as an example for this.
First of all, please let me introduce myself. My name is Tom Denby. I'm currently the BIM manager for the M42 Junction 6. Prior to this, I've worked on a wide range of construction projects, from commercial buildings through to utilities and rail. I've always been driven to find project efficiencies, especially where that means automating repetitive tasks, decreasing the likelihood of human error, and increasing productivity.
Now let me quickly give you a project overview. The M42 Junction 6 is part of Highways England's regional delivery partnership scheme. The works for this project include a new 2.4-kilometer dual carriageway link road, a new junction at the M42 motorway (Junction 5a), a new pedestrian footbridge over the A45, demolition of existing road over-bridges and construction of new bridges to replace them, realignment of existing local road networks, and more. For those who are familiar with the Midlands, the project is primarily near Birmingham Airport and the NEC.
We set out by defining what we wanted to achieve and understanding what would really help the project be delivered more efficiently. With that in mind, we came up with the following objectives. As you will see, the main challenge we faced was connecting large data sets that, out of the box, don't talk to one another.
We set the goal to make the data and information more accessible than ever before by making it consumable on any device, provided you have internet or data connectivity. In order to do this, we needed to, first of all, reduce any manual steps. Secondly, understand exactly what the team wanted to do with the information and what they wanted to get out of it. And thirdly, standardize our development so that anything we built would be reusable on future projects.
Before we go into the detail, I want to give you a high-level overview of how information and data is collected on the project. First of all, we have the project CDE, ProjectWise in this case. And alongside that, we have numerous systems, such as BIM 360 Field, Primavera, Power Apps, Skanska Maps, which is our primary GIS system, plus many more. In all these places, data is being collected. At some point in time, the data collected from these sources will need to be transferred over to the CDE, ready for handover to our client.
So there's the first challenge. It is great that we are now able to collect data more digitally. It means we can standardize how we collect data, and also, there are no problems understanding the information that's being collected. But how do we ultimately get it stored in the CDE at the right point in time so we don't cause any program delays? Then, on top of that requirement, different disciplines want to overlay their data to utilize the information as effectively as possible for them.
I've chosen five ways the user can interpret the information, and those are Power BI, Skanska Maps, Skanska BIM Viewer, and 5D and 4D processes. The first and most accessible way of consuming the project's information is by using the Skanska BIM Viewer. You can see on the background of the slide what the solution looks like. It is a basic, read-only way for anyone to access the latest project's federated model that's stored on ProjectWise, or, for that matter, any CDE.
Now let's look at some of the processes and considerations that made this achievable. First thing to understand was the project's requirement for this tool. As you can see from the line along the top, there was one main requirement. They wanted to be able to access the latest project's federated model in no more than two clicks from any device, from anywhere.
Now let's talk about what that meant in practice, and how we achieved it. What they really wanted was a web-based viewer that they could access via a URL link. They also wanted a viewer within Power BI, as well as the model's metadata available as a data source in Power BI. So, really, it was three requirements, not two: a web-based project viewer showing the latest information; a Power BI viewer showing the same information, accessible via dashboards; and the metadata as a data source.
In order to achieve this, we took the following steps. First, understand where the data needed to be hosted for the viewer to work, while complying with our client's requirements, and weigh the pros and cons of the two hosting locations: primarily a US-based server versus an EU-based server. Then, since the project CDE isn't an Autodesk system, work out how to make the model accessible from that system to the Forge Viewer API. This was the API we used to develop the viewer.
How do we secure the URL without having to add more usernames and passwords for people to remember? Where is the best place for Skanska to host the viewer? And finally, how do we best manage future developments of the viewer without disrupting the live project viewer? The process I will take you through now is high level, but I'm hoping it will give you an understanding of how we achieved the following.
First of all, we have the project CDE. We need to understand its location; in our case, it was ProjectWise, which isn't an Autodesk system, so we had to get the model into an Autodesk bucket to enable the viewer to access the latest version. To do this, we used an Azure-hosted script that monitors ProjectWise and pulls the model over into the bucket, using both Bentley and Forge APIs.
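Once a model lands in a Forge/APS OSS bucket, the viewer addresses it by a URN that is, by Forge convention, the unpadded URL-safe base64 encoding of the object ID. A minimal sketch of that step in Python; the bucket and file names are hypothetical, and the ProjectWise monitoring and upload calls are omitted:

```python
import base64

def derivative_urn(bucket_key: str, object_name: str) -> str:
    """Build the Model Derivative URN for an object in a Forge/APS OSS
    bucket: the unpadded, URL-safe base64 encoding of the object ID."""
    object_id = f"urn:adsk.objects:os.object:{bucket_key}/{object_name}"
    return base64.urlsafe_b64encode(object_id.encode()).decode().rstrip("=")

# Hypothetical example: the federated model pulled over from ProjectWise.
urn = derivative_urn("m42-j6-models", "federated-model.nwd")
print(urn)
```

The viewer front end then loads the translated derivative by this URN, so the sync script and the viewer never need to share anything beyond the bucket and object names.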
We then developed the code for the viewer, which references the models stored in the Autodesk bucket. This means that, when we come to viewing the model, we have access to any of the models stored in that bucket. And then, finally, this is how the user consumes it: a simple URL where we list the projects along the top, and they click on the project they require.
Now, we needed to secure that. We did this using Microsoft's MFA system. Through our in-house IT processes, we were able to enable it on the website. It also meant that we could give our supply chain on projects access to the viewer. So there were two levels of security: first, access to the URL, which was handled by MFA; and secondly, access to your project.
We don't want everyone having access to every project, so we created Azure Active Directory security groups so you could only view your own project's information. Next, we went about creating the Power BI custom visual. Again, this was a Forge development. There is actually a tutorial on the skeleton for this that the Forge team have posted online; well worth looking up. But it references the same model.
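The second security layer, restricting each user to their own project's models, boils down to a lookup from Azure AD group membership to visible projects. A minimal sketch of that idea; the group and project names here are invented for illustration:

```python
# Hypothetical mapping from Azure AD security group to the projects
# whose models that group may open in the viewer.
GROUP_TO_PROJECTS = {
    "viewer-m42-j6": ["M42 Junction 6"],
    "viewer-a45-footbridge": ["A45 Footbridge"],
    "viewer-admins": ["M42 Junction 6", "A45 Footbridge"],
}

def visible_projects(user_groups):
    """Return the sorted list of projects a user may open, given the
    security groups present in their token claims."""
    projects = set()
    for group in user_groups:
        projects.update(GROUP_TO_PROJECTS.get(group, []))
    return sorted(projects)

print(visible_projects(["viewer-m42-j6"]))  # ['M42 Junction 6']
print(visible_projects(["viewer-admins", "unrelated-group"]))
```

In practice the group claims would come from the MFA-protected sign-in, so the same identity gates both the URL and the project list.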
So two different viewers always looking at the same model, which is important so that we get continuity and everyone's always looking at the same information. The custom visual works like any other visual does in Power BI: you can select data and other infographics and it'll filter what you see in the viewer, and vice versa. So that covered two of the requirements. However, there was a third.
They also wanted the metadata from the project federated model as a data source in Power BI so they could then start to cross-reference it. To do this, once the model is in an Autodesk bucket, we're able to download it as a database file. So we download the DB file of the model, load that into our SQL server, and then reference that data back into Power BI. So now we have both the graphical and non-graphical information relating to that model.
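Once the model metadata is sitting in a SQL database, Power BI can query it like any other source. A minimal illustration using an in-memory SQLite table; the schema and values below are invented stand-ins, not the structure of the real extracted DB file:

```python
import sqlite3

# Invented, simplified stand-in for element metadata extracted from
# the federated model's database file and loaded into SQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE model_elements (
        element_id    INTEGER PRIMARY KEY,
        taxonomy_code TEXT,
        discipline    TEXT,
        volume_m3     REAL
    )
""")
conn.executemany(
    "INSERT INTO model_elements VALUES (?, ?, ?, ?)",
    [
        (1, "BR-001", "Structures", 120.5),
        (2, "BR-002", "Structures", 98.0),
        (3, "RD-010", "Highways", 540.2),
    ],
)

# The kind of aggregate a Power BI dashboard might pull:
rows = conn.execute(
    "SELECT discipline, ROUND(SUM(volume_m3), 1) "
    "FROM model_elements GROUP BY discipline ORDER BY discipline"
).fetchall()
print(rows)  # [('Highways', 540.2), ('Structures', 218.5)]
```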
With that, along with other third-party data sources -- which you can see on the right-hand side; these are some of the ones we use -- we're then able to start cross-referencing. We also have a project taxonomy, which I'll talk a little more about later on. Basically, it standardizes how we make items from all the different data sources speak to each other.
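The taxonomy is essentially a shared join key. A sketch of the idea, with invented codes and records, merging cost and schedule data that would otherwise live in unrelated systems:

```python
# Invented records, each keyed by a shared taxonomy code. The common
# key is what lets the different systems' data be cross-referenced.
cost_data = {
    "BR-001": {"budget": 250_000},
    "RD-010": {"budget": 1_100_000},
}
schedule_data = {
    "BR-001": {"finish": "2022-03-01"},
    "RD-010": {"finish": "2022-07-15"},
}

def join_on_taxonomy(*sources):
    """Merge any number of per-system dicts keyed by taxonomy code."""
    joined = {}
    for source in sources:
        for code, fields in source.items():
            joined.setdefault(code, {}).update(fields)
    return joined

combined = join_on_taxonomy(cost_data, schedule_data)
print(combined["BR-001"])  # {'budget': 250000, 'finish': '2022-03-01'}
```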
With all of those processes happening in the background, that means we now have two clean and easy ways to access the project's federated model. One, we have a web-based viewer displaying the required data, and fully controlled by Skanska, meaning we can tailor it to any of our future needs, and as project delivery gets more demanding, we can make the viewer reflect that. And then, secondly, we have a powerful Power BI viewer that will filter and work like any other visual, and again, enable people to view data in a more graphical way.
Now let's talk about our 4D process. The image you can see in the background is SYNCHRO Control. This is a cloud service that we are currently using to enable cloud-based 4D delivery on our project. As you can see from the graphic, SYNCHRO Control, or in this case, the cloud-hosted 4D data, is connected directly to our project CDE, ProjectWise. This means that we no longer have to worry about offline copies in order to create our 4D models.
We can link it directly to the latest design information. Either automatically or manually, we're able to set the system up so that we can update those models that we see in SYNCHRO. SYNCHRO is then linked directly into SYNCHRO Control, the cloud-hosted element of it, so the desktop tools pull down the model information from there. Our program data -- in this case, P6 -- is linked directly to SYNCHRO, and the two can push and pull information between one another.
This means that we now have a full, read-only access copy of the 4D sequence stored in the cloud. This can be accessed in a web browser. We were able to give people on the project web browser access, and log-in passwords, and what have you. But it also makes it more accessible by other devices. So, again, the image in the top, right-hand corner is Power BI. You can see that we've used the Forge development we talked about in the last set of slides to enable us to visualize the program a little bit more.
So when we select Program in our Gantt chart, it will highlight the element in the Forge viewer. And then it also gives all the other data that we cross-referenced with that. We're also able to link it to iPads and apps, and even AR technology if required. As well as all of these out-of-the-box features, there's also open APIs that mean, if required, we could fully develop our own solution in order to collect this data. But because it's cloud-hosted, it is much easier to get access to that information.
Now let's talk about our 5D process. The background to this slide is CostOS. This is the software we used to produce our 5D bill. Before we can even think about producing the 5D bill, though, there's a process we need to go through in order to make this information as reliable as possible. And now I'm going to talk you through that process.
The first step is to understand the data we have and how that information is structured. We do this by collecting old bills of quantities and using them to populate the first version of our 5D library. A 5D library is basically a standardized database of all the components that we know about and any information we know about them. As well as this, we also have the standard taxonomy that I mentioned earlier on.
That is used on this project, but it's also used on all projects. This is a standard code structure that is given to all elements to ensure that all the various parties are naming the elements in the same way. Whether it be cost, commercial, planning, or design, it's a requirement everyone has to follow. This enables us to easily cross-reference commercial, planning, and design information, as well as mapping our rule sets in CostOS to this data set.
Once we have this information, we use all three to inform how the project structures its LOD and LOI matrices. This, in turn, dictates the structure of the 3D model and the deliverables we require from the various parties. And using tools such as Power BI and Assemble, we can really easily check that the design files are following the precise naming conventions and data structures.
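That compliance check can be automated with a simple pattern match. The naming convention below is invented for illustration; a real project would encode its own taxonomy structure:

```python
import re

# Hypothetical convention: PROJECT-DISCIPLINE-ELEMENT-NNN,
# e.g. "M42J6-STR-BRIDGE-001".
NAME_PATTERN = re.compile(r"^[A-Z0-9]+-[A-Z]{3}-[A-Z]+-\d{3}$")

def check_names(names):
    """Split element/file names into compliant and non-compliant lists."""
    ok, bad = [], []
    for name in names:
        (ok if NAME_PATTERN.match(name) else bad).append(name)
    return ok, bad

ok, bad = check_names(["M42J6-STR-BRIDGE-001", "bridge_final_v2"])
print(bad)  # ['bridge_final_v2']
```

Surfacing the non-compliant list in a dashboard is what makes it easy to chase up deliverables before they reach the 5D bill.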
Once we are happy the model is following the structure, we can then begin to populate our 5D bill. This is done using CostOS by mapping our taxonomy to the bill methods of measurement. Once the bill is produced, we have a checking process to ensure that it is as accurate as possible. We identify any gaps and anomalies and update the 5D library accordingly so future bills won't have the same mistakes or gaps.
By following these steps over a period of time, we are able to more accurately and more efficiently produce bills of quantities in the future. Although this is a process that will mature with time, we are seeing real benefits using this process now. Next, I would like to talk about Skanska Maps. Skanska Maps is our in-house GIS platform that enables our teams and partners to access design and geospatial data from anywhere.
The platform is powered by Esri, as you will see on the process map over the next few slides. The first thing we need is a central repository to store all the data that is accessible by Skanska Maps. For this, we use Esri ArcGIS Enterprise. This is where we store all the information we want to publish.
We now need to identify the sources of all the data that we will overlay in our GIS platform. First, we have our external data sources. This could include almost anything, but it usually includes the following: third-party information, such as environmental or health and safety data; customer data, which could be legacy information from previous schemes; and any other known legacy data.
Next, we need to import and link it to our design information. To do this, we link it to the project CDE as well as any design information not hosted in the CDE. This could be GIS data from our design partner's systems. All this data needs managing and bringing together, as well as updating on a regular basis. To make sure that Skanska Maps is displaying the latest information and make it useful to the project, we use FME Workbenches to regularly run the processes of collecting all this information, publishing it to our geospatial CDE, and then enabling our users to consume this data.
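The regular collect-publish-consume cycle run by the FME Workbenches can be thought of as an ordered pipeline of stages. A generic sketch of that shape; FME itself is configured graphically, and the stage names here are placeholders, not FME terminology:

```python
# Placeholder stage functions standing in for FME Workbench steps.
def collect_sources(log):
    log.append("collected external, CDE, and design-partner data")

def publish_to_geospatial_cde(log):
    log.append("published layers to the geospatial CDE")

def refresh_consumers(log):
    log.append("refreshed Skanska Maps and field apps")

def run_pipeline(stages):
    """Run each stage in order, recording what happened."""
    log = []
    for stage in stages:
        stage(log)
    return log

log = run_pipeline([collect_sources, publish_to_geospatial_cde, refresh_consumers])
print(len(log))  # 3
```

Running the whole chain on a schedule is what keeps the published layers in step with the latest design information.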
We then have tools that are used to interact with this information. Primarily, we use Skanska Maps, but we also have the ability to access it via desktop GIS, and there is increasing demand around field GIS apps as well. And last but not least, we have a series of quality checks to ensure that any GIS data we are using on the project and planning on handing over is structured in a manner that meets the handover and deliverable requirements set by our client.
If you would like to find out more about how Skanska are working with Esri to enable projects to get the most out of these platforms, my colleagues [? Giorgos, ?] [? Balash, ?] and Anita, along with Esri's team, will be doing a panel discussion. I will put the links in the chat, as well as the handout documentation. But it would be well worth listening to that.
I hope you found this useful. If you have any questions, please put them in the comments. And if you have enjoyed it, please like and share.