Description
Key Learnings
- Explore the problem, solution, implementation approach, objectives, and challenges involved in a digital twin.
- Learn about the Autodesk Tandem setup: 3D model, facility template, custom parameters, and data streams.
- Learn about the setup of IoT devices, including a sensor types overview, assembly processes, and integration with Autodesk Tandem using webhooks.
- Learn about analyzing data and making informed data-driven decisions to improve operations and well-being.
Speaker
- Mateusz Lukasiewicz
Mateusz Lukasiewicz has over 12 years of experience in the AEC industry. Throughout his career, he has successfully led the digital delivery of large-scale projects and developed a number of modern digital engineering solutions by combining BIM expertise, computer programming skills, and project management principles. Mateusz plays a vital role in driving the company's vision of becoming the leading digital innovator in the market and achieving its long-term digital capability goals.
MATEUSZ LUKASIEWICZ: Hi, everyone, and welcome to my presentation. The topic of today's class is Digital Twins with Autodesk Tandem: From Setup to Data-Driven Analysis. A few words about myself: my name is Mateusz Lukasiewicz. I'm a Digital Projects Manager at KEO International Consultants, based in Dubai. In my role, I focus on BIM, computer programming and computational design, project and construction management, and digital twins.
The format of this class is a case study. We will start with a short introduction to digital twins, our objectives, and our strategy. Then we'll move to a practical, step-by-step digital twin implementation and a results overview.
Why are we here? Over the last 10 years, we have observed growing interest in digital twins. Looking ahead, digital twins are a rapidly growing business, expected to reach close to $50 billion in investments within the next few years. We are here because we want to be early adopters and understand the benefits of this relatively new and promising concept.
What is a digital twin? It can be defined as a virtual model of a real object or process used for analysis and optimization. A digital twin is composed of a physical asset, a digital model, and real-time data connecting the two. The real benefits of digital twins come from data that can be analyzed using data science, parametric models, and optimization algorithms. The video shows the digital twin of our office, where we can see a geometrical replica of the physical asset along with real-time air quality and desk occupancy data. This model is analyzed using a custom parametric model, which calculates results and visualizes various metrics that we will explore later on.
There are multiple uses and benefits of digital twins. Internal ones include real-time monitoring and analysis, reduced downtime, optimized resource utilization, predictive maintenance, health and safety improvements, employee well-being and retention, and training and simulations. External ones include new revenue streams, services expansion, improved customer experience, reputation gains, and emission reduction. Let's go back to our case study. We identified five objectives: improve asset monitoring, prevent equipment failures, improve maintenance, improve employees' productivity, comfort, and well-being, and evaluate what-if scenarios for different layout changes.
To achieve them, we implemented five components: we created a 3D Revit model based on the physical asset, assembled IoT sensors, created a new facility in Tandem, imported the 3D model and integrated the sensors, and then analyzed the model with a parametric model built in Dynamo for Revit, where we defined functions to analyze historical data, calculate results, and optimize them. We did face challenges. Current digital twin software maturity is rather on the descriptive and informative side than the predictive and comprehensive one. While we were able to achieve the first three goals using out-of-the-box functionality, we had to create a custom solution to optimize results and explore what-if scenarios.
In addition, we had minor issues with mapping the IoT sensor data due to temporary data type restrictions in Tandem. However, this was easily handled by writing a custom translation cloud function in Microsoft Azure. Finally, we expect that in the future there will be an automated way of assigning hosts to data streams rather than doing it manually; we found manually assigning hosts for 100-plus data streams quite inefficient.
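As an illustration, here is a minimal sketch of what such a translation function might look like, assuming an HTTP-triggered Azure Function written in Python; the endpoint URL and field names are placeholders, not the actual implementation.

```python
# Hypothetical HTTP-triggered Azure Function (Python programming model)
# that coerces incoming sensor values to the data types Tandem accepts,
# then forwards the payload. URL and field names are placeholders.
import requests
import azure.functions as func

TANDEM_WEBHOOK_URL = "https://example.invalid/tandem-stream"  # placeholder

def main(req: func.HttpRequest) -> func.HttpResponse:
    payload = req.get_json()
    # Cast string-encoded readings to numbers before forwarding
    # (the original issue: temporary data type restrictions in Tandem).
    for key in ("temperature", "humidity", "co2", "pressure"):
        if key in payload:
            payload[key] = float(payload[key])
    resp = requests.post(TANDEM_WEBHOOK_URL, json=payload, timeout=10)
    return func.HttpResponse(status_code=resp.status_code)
```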
Now, let's talk about the Tandem setup. The first step was to create a digital representation of the physical entity, which is a selected floor of our office in Dubai. We modeled the relevant scope of structure and architecture, and applied special considerations to Revit rooms, which were split as per the expected sensor coverage zones. Instead of having one room, we have multiple rooms in the same open space. We also utilized the family instance Mark parameter to identify desk numbers.
Let's move to Tandem. We created additional categories by modifying the default classification system, adding room and sensor categories. This is easily done by editing the Excel file exported from Tandem and then creating a new classification system based on the updated file.
In the next step, we added custom parameters, such as carbon dioxide concentration, temperature, humidity, pressure, and occupancy, to capture the data coming from the IoT sensors. This process was repeated for all expected data types. We then created a new facility template using the newly created classification system that contains the sensor categories, and applied the custom parameters created earlier.
Finally, we created a new facility, which is basically a Tandem project, and imported the model directly from Autodesk Construction Cloud. The imported model is the latest published workshared model. In either case, we are using a Revit model. The import is a very straightforward process. So, first step done: we now have a virtual model in Autodesk Tandem. It is not yet a digital twin.
The next step was to create the data streams used to establish the connection with the IoT sensors. Data streams can be hosted on specific Revit elements. In our case, desk occupancy data was hosted on desks, and other data streams, such as air quality metrics or meeting room occupancy, were assigned to Revit rooms. Data streams can also be added to the classification system for grouping purposes.
In our case, we created more than 100 data streams, which are represented in Tandem as green spheres that indicate the approximate sensor locations in the physical office. So far, we have a geometrical model and data streams. However, at this point, we still don't have a connection between the virtual model and the physical asset.
To do so, we need another component of digital twins: the sensors. There are multiple IoT sensor providers in the market. The manufacturer we selected offers the following sensor types: temperature, humidity, touch, motion, desk occupancy, water, object proximity, and air quality. This is how it looks once installed. The assembly process is very straightforward.
Basically, the Cloud Connector is plugged into a power socket and connected to an internet cable. The other sensors are mounted using double-sided tape. Each sensor comes with an installation manual and recommendations for ideal placement; for example, the air quality sensor cannot be placed too close to the building facade or an air exhaust.
In terms of data flow, data is collected by the sensors, then sent to the Cloud Connector device, and finally to the IoT service. From the IoT service, we can link the data further to other software, such as Autodesk Tandem, which adds functionality such as data analysis tools and model visualization capabilities.
At this point, we have data in the IoT platform, the office model in Tandem, and data stream placeholders. The data bridge between the sensors and Tandem is made by mapping the data stream ID extracted from the link we can see on the screen; the external ID is the last part of the string. This is done at the sensor level. At the project level, we use webhooks.
We are now in the IoT platform. We create a new data connector using a webhook and specify the relevant parameters that should be reported back to Tandem. We rename the sensors to match the IoT platform's sensor naming to the Autodesk Tandem data stream naming, add a label key for the external ID, and copy in the external ID value to establish the connection. This exercise was repeated for the 100-plus sensors.
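To make the mapping concrete, here is a hypothetical sketch in Python of the two pieces involved: extracting the external ID from the end of the Tandem stream link, and the shape of a sensor event carrying that ID in a label. The field names follow common IoT webhook conventions and are assumptions, not the platform's actual schema.

```python
# The Tandem data stream link ends with the external ID (placeholder link).
stream_link = "https://tandem.autodesk.com/.../streams/AAAAxyz123"
external_id = stream_link.rsplit("/", 1)[-1]  # -> "AAAAxyz123"

# Hypothetical shape of an event the IoT platform's webhook forwards:
# the sensor is renamed to match the Tandem stream, and the label key
# we added carries the external ID that ties the two together.
event = {
    "targetName": "co2-sensor-12",                  # matches Tandem naming
    "labels": {"tandem-external-id": external_id},  # the mapping label
    "data": {"co2": {"ppm": 648}},                  # illustrative reading
}
```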
Back in Tandem, we can see the data received from the sensor in JSON format. We can now simply select the key-value pair for each parameter to start reporting the data and displaying it in charts. We can see that there are some entries for the data, select single entries, and view data for the last few days. We can play with different date ranges to display different data.
So, at this point, we have a fully operational digital twin. We have a virtual representation of the office, and we have real data coming through the sensors. The question now is how to use such a model to achieve our objectives.
If you recall our objectives, the first one was to improve asset monitoring. What we did was evaluate temperature, humidity, pressure, and carbon dioxide concentration against standards such as ASHRAE 55 (Thermal Environmental Conditions for Human Occupancy). Based on the results, we were able to optimize air quality by applying corrective actions: inspecting the HVAC system and adjusting thermostats. In a similar manner, we analyzed staff attendance and desk and meeting room occupancy, which allowed us to optimize desk allocation and revise meeting room booking schedules.
Moving to the second goal of improving maintenance, we identified problems in pantry area housekeeping. To reduce the housekeeping team's response time, we utilized the touch sensor to send instant notifications during specific hours, notifying the maintenance team about various incidents in the pantry area. Additionally, we collected data, analyzed it against the current housekeeping schedule, and modified the frequency based on peak and low periods. Our third goal was to prevent failures. We identified the IT server room equipment as sensitive to high temperature and humidity. To prevent failures, we set a temperature trigger that sends automated notifications if the temperature in the IT server room rises above 20 degrees Celsius. As a result, we were able to prevent failures by improving response time, since it was based on instant notifications whenever an event occurred rather than relying on manually scheduled in-person inspections.
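The server room trigger boils down to a simple threshold rule. Below is a minimal sketch, assuming readings arrive as Python dictionaries; notify() is a stand-in for whatever notification channel is configured.

```python
SERVER_ROOM_MAX_C = 20.0  # trigger threshold from the case study

def notify(message: str) -> None:
    # Stand-in for the actual channel (email, SMS, Teams, etc.).
    print(message)

def check_server_room(reading: dict) -> None:
    # reading example: {"room": "IT server room", "temperature_c": 21.3}
    if reading["room"] == "IT server room" and reading["temperature_c"] > SERVER_ROOM_MAX_C:
        notify(f"IT server room at {reading['temperature_c']:.1f} C, inspect cooling")
```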
The previous goals were easily accomplished using Tandem and its notification system. Moving forward, we will go beyond the out-of-the-box functionality and start exploring custom solutions to improve productivity, comfort, and well-being, and to explore different office layout scenarios. Before we move to improvements, we have to be able to measure what we are trying to improve, and to measure, we need to define. We came up with a simple formula taking into account five factors: proximity to other desks, point noise sources, and communication paths, which have a negative impact in our formula, plus air quality and daylight access, which have a positive impact.
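The exact weights live on the slide rather than in this transcript, so the sketch below only captures the structure of the formula: three negative factors and two positive ones combined per desk, with equal weights as a placeholder assumption.

```python
def desk_score(proximity, noise, paths, air_quality, daylight):
    # Proximity to other desks, point noise sources, and communication
    # paths lower the score; air quality and daylight access raise it.
    # Equal weights here are an assumption; the real formula is on the slide.
    return (air_quality + daylight) - (proximity + noise + paths)
```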
First, we noticed that the two sides of our office are more or less independent, so each side was evaluated as a separate exercise. Moving forward, we will focus only on this part of the office. To calculate the impact of each factor, we created a parametric model in Dynamo that calculates the value of each impact and visualizes it for each desk. You can see the color coding applied for each of the metrics. The model is dynamic: whenever we make a change in the geometrical model, the metrics are updated.
Let's have a look at these impacts and how they are calculated. First, desk proximity. The general principle is that the closer and more frequently occupied the surrounding desks are, the higher the negative impact. In the video, we can see which desks are impacting the desk we are considering at a specific moment.
In our exercise, we neglected desks that are farther than five meters away. The impact is basically a sum of the individual impacts, and that sum varies depending on the desk we are considering in our formula. The results can be exported and shown in a desk interaction matrix. Based on the matrix, we can calculate the total result per desk and build a desk ranking based on the total impact value.
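A sketch of the proximity term under stated assumptions: desks beyond the five-meter cutoff are ignored, and each neighbor contributes more the closer it is and the more often it is occupied. The 1/distance weighting is an assumption; the exact function is defined in the Dynamo model.

```python
import math

def proximity_impact(desk, others, cutoff_m=5.0):
    # Sum the impact of every neighbouring desk within the 5 m cutoff.
    # "occupancy" is the fraction of time the neighbour is occupied,
    # taken from the IoT data; the 1/distance weighting is illustrative.
    total = 0.0
    for other in others:
        d = math.dist(desk["xy"], other["xy"])
        if 0 < d <= cutoff_m:
            total += other["occupancy"] / d
    return total
```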
For daylight accessibility, the formula is very straightforward: it is the multiplicative inverse of the distance to the building facade. Basically, the closer to the building facade, the higher the daylight score, which is represented by bars of various heights. In a similar manner, we used the sensors' data to identify different air zones across the office. In our case, we have three different air zones, as we installed three air quality sensors in this part of the office.
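The daylight term, as described, is simply the reciprocal of the distance to the facade; a one-line sketch, assuming distances in meters:

```python
def daylight_factor(distance_to_facade_m: float) -> float:
    # Multiplicative inverse of the distance to the building facade:
    # the closer the desk, the higher the daylight score.
    return 1.0 / distance_to_facade_m
```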
We also looked at point noise sources, such as printers. Basically, the closer to a point noise source, the higher the impact, and this impact is negative. To demonstrate that the model is dynamic, we are going to simulate a what-if scenario by moving the printer towards the left side.
What's happening in the background now is that we are moving the printer in Revit. After updating the model in Dynamo, we can see the geometrical change in the model as well as the updates to each metric. Finally, we took into consideration the proximity and magnitude of communication paths. Basically, the closer a desk is to communication paths carrying a higher number of employees, the higher the negative impact; it simply means more people are moving around and there is more potential distraction.
Going back to our case study, the formula for 41 desks, two noise sources, and two communication paths looks as shown on the screen. We are going to explore three scenarios. First, we will look at dummy data, or rather no IoT data, assuming 100% occupancy and 100% air quality, which gives us theoretical results. Then we'll plug in actual IoT data to consider the actual air quality and desk occupancy time, which affects the proximity impact; as a result, we'll have the actual results and desk allocation ranking. Finally, we will try to optimize the results, aiming for the highest values by optimizing desk allocation.
Let's move to case number one. We have no IoT data, and the expected output of this exercise is to evaluate which of the four options shown on the screen is the best. Again, we use our parametric model, where we can assess impacts and create a desk ranking. We can also use option one as a benchmark, so we can compare total improvements, improvements for each metric per desk, and desk ranking changes. You can see that option two is approximately 1.6% better than option one. We can repeat the same for options three and four.
As the conclusion of this slide, the custom parametric model helps us compare different layout options and identify the best among the proposed layouts. In our case, layout number four is approximately 6% better than option one. The previous example gave us results assuming theoretical rather than actual desk occupancy and air quality metrics. In this scenario, we plug the actual desk occupancy and air quality data into the parametric model to calculate the real values.
We can see the air quality data in Tandem, and similarly for occupancy: you can now see the data export, the average sum per week, and the normalized data. The same goes for air quality. This data is plugged into our parametric model, where we calculate the actual results and the actual desk ranking. The takeaway from this slide is that actual data does make a difference, which was observed in the updated desk ranking and formula results. The results from this case will be used as the benchmark for further optimization.
Before we attempt to optimize the results by changing desk allocation, let's spend a few seconds on the basics of probability and statistics. How many ways can three people be assigned to three desks? The answer is six, which can also be visualized as shown on the screen. This number is not a guess; it's calculated using the permutations formula P(n, r) = n!/(n - r)!.
In our case, n is equal to r, which reduces the formula to the factorial: 3! = 6. So that's the background. In our case study, we have 41 desks, which gives us 41!, equal to this long number. Obviously, we can't evaluate all combinations, so we'll take 100,000 random arrangements for the optimization calculations.
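The arithmetic and the sampling approach can be sketched in a few lines of Python; score() below is only a placeholder for the parametric model's formula.

```python
import math
import random

print(math.factorial(3))   # 6 ways to assign 3 people to 3 desks
print(math.factorial(41))  # ~3.35e49 possible arrangements of 41 desks

def score(arrangement):
    # Dummy placeholder for the parametric model's comfort formula;
    # the real score comes from the Dynamo model described earlier.
    return sum(rank * person for rank, person in enumerate(arrangement))

# Evaluating all 41! arrangements is impossible, so sample 100,000
# random permutations and keep the best-scoring one.
staff = list(range(41))
best, best_score = None, float("-inf")
for _ in range(100_000):
    candidate = random.sample(staff, len(staff))  # one random permutation
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
```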
Before we calculate 100,000 different options, let's first try to do it manually. The first logical step was to optimize the results manually by assigning better desks, based on the desk ranking, to the staff who are more frequently at their desks. You can see the desk ranking and the occupancy; basically, the person who spends the most time in the office is assigned the best desk, and so on.
This sorted data is plugged into the parametric model, where we can calculate the improvement. You can also see the desk ranking changes and the improvements per desk. By using this intuitive principle, we achieved close to a 4.3% improvement against the benchmark.
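The manual heuristic amounts to sorting both lists and pairing them off. A sketch with hypothetical sample data and field names:

```python
# Hypothetical sample data; real values come from Tandem and the ranking.
desks = [{"id": "D01", "rank": 2}, {"id": "D02", "rank": 1}, {"id": "D03", "rank": 3}]
staff = [{"name": "Ana", "occupancy": 0.45},
         {"name": "Ben", "occupancy": 0.80},
         {"name": "Caro", "occupancy": 0.30}]

# Best-ranked desk goes to the person with the highest occupancy, and so on.
desks_by_rank = sorted(desks, key=lambda d: d["rank"])  # rank 1 = best desk
staff_by_time = sorted(staff, key=lambda s: s["occupancy"], reverse=True)
assignment = {p["name"]: d["id"] for p, d in zip(staff_by_time, desks_by_rank)}
print(assignment)  # {'Ben': 'D02', 'Ana': 'D01', 'Caro': 'D03'}
```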
The results achieved through manual desk assignment in the previous example are not optimal. The reason is that the ranking and impacts are dynamic, as they are influenced by the time variable in the desk proximity factor. To investigate further, we randomly selected 100,000 desk allocation permutations, calculated the results, and identified the optimum desk allocation. Obviously, this exercise cannot be done manually; we are talking about 100,000 different combinations. We also used a Power BI dashboard to illustrate the results and how they change depending on the number of permutations.
In conclusion, by assessing 100,000 options and using the optimum desk allocation, we were able to further improve the comfort and well-being score by almost 11% against the benchmark. In a separate exercise, we modified the daylight formula to reflect the time spent at each desk and calculate daylight hours. Knowing the daylight impact factor per desk and multiplying it by the average time spent at the desk, we calculated the baseline. Moving further, we simulated desk allocation changes using a very simple principle: the higher the attendance, the better the assigned desk in terms of daylight factor. Using this method, we were able to achieve an overall 78% improvement in daylight accessibility.
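A sketch of the modified daylight calculation, under the same assumptions as the earlier daylight_factor() sketch: each desk's contribution is its daylight factor times the average hours spent at it, and the numbers are hypothetical.

```python
# Hypothetical per-desk data: distance to facade (m) and average hours/week.
desks = [{"facade_dist_m": 2.0, "hours": 30.0},
         {"facade_dist_m": 8.0, "hours": 12.0}]

# Baseline daylight exposure: daylight factor x average time at the desk.
baseline = sum((1.0 / d["facade_dist_m"]) * d["hours"] for d in desks)
```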
In conclusion, setting up a digital twin with Autodesk Tandem is a straightforward process. IoT sensors connected to the digital model enabled facility management to enhance asset monitoring, improve maintenance, and prevent equipment failures. Historical data, statistics, and the parametric model enabled the selection of optimal results and the exploration of what-if scenarios. We were able to improve overall performance, comfort, and well-being by 11%, reduce the negative distraction impact by 27%, and improve daylight accessibility by 78% in a separate case.
To sum up, in our relatively simple case study, we were able to leverage a digital twin to achieve multiple objectives and to identify huge potential in analyzing real data. This slide concludes my presentation. I hope you found it interesting. Please drop me a message if you have any questions, and thank you for watching.