AU Class

BIM and CDE: Building a Solid Foundation for Smart Manufacturing

Description

This case study explores how we automatically ingested Autodesk Construction Cloud building information modeling (BIM) content (3D models, drawings, docs) using generative AI, knowledge-graph technologies, and SAP gateways to automatically generate assets in SAP. This smart handover provides operations with an adapted data framework enabling rapid information contextualization to support any field engineering or maintenance problem-solving activity. In this POV (proof of value), executed in collaboration with Autodesk, Capgemini, and Cognite, we explored solutions to the following business problems: How do we reduce the post-delivery bottleneck of asset creation in SAP from 12 months to less than one month, enabling bidirectional updates between SAP and BIM data sources? How do we reduce downtime and improve reliability by decreasing the response time following an event by 70%–80%? And how do we eliminate the burden of manual data gathering (contextualization) across multiple siloed systems and provide copilot support (generative AI) to summarize and correlate data sources?

Key Learnings

  • Learn about the pharma challenges of ensuring a standardized representation of asset data to be ingested into SAP at handover.
  • See the data flow from Autodesk Construction Cloud to SAP, and the AI-powered creation of semantic relationships between data and docs in the CDE.
  • Discover the benefits of automated asset creation and response-time reduction enabled by AI-powered data contextualization.

Speakers

  • ANNE BARD
    An authentic and energetic design and strategy leader whose passions are asking questions and connecting dots, thoughts, and people to unravel the problem to solve and deliver all together. My lifelong mission is to enable the joy of finding a way together and to make any workplace special and fulfilling. I thrive when I make the invisible visible and the tacit shareable. How do I do this? 1. A natural drive to build and inspire teams. 2. A natural ability to think differently, to positively disrupt and challenge the status quo. 3. A passion for design thinking, storytelling, and psychology. 4. 28 years of experience in project direction of large capital investments for the pharmaceutical industry.
      Transcript

      ANNE BARD: Hi, everyone. Welcome to our class, which we have titled "BIM and CDE: Building a Solid Foundation for Smart Manufacturing in Pharma."

      But before getting started, let's do a round table to introduce ourselves briefly. My name is Anne Bard. I'm a senior design lead in front end engineering and design at GSK.

      GIOVANNI GIORGIO: Hi, everyone. My name is Giovanni Giorgio. I'm a senior digital engineer, also in front end engineering and design at GSK.

      ANNE BARD: Here's the agenda for today. It is a case study with a 30- to 40-minute presentation, which should leave us 20 minutes for Q&A at the end. And we will go through the following. First, BIM versus manufacturing in pharma: we need to acknowledge the gap. Second, the big idea: we want to use BIM and the CDE as a universal data locator and contextualization engine. Third, industry challenges, from the BIM perspective and from the smart manufacturing perspective.

      GIOVANNI GIORGIO: Yeah, and number four, we're going to talk about the proof of value in data contextualization from the common data environment to SAP. Then we're going to discuss the lessons learned and some conclusions at the end.

      ANNE BARD: OK, section 1, acknowledging the gap. But before that, let's talk a bit about GSK. GSK is big pharma, with a strategy of uniting science, technology, and talent to get ahead of disease together, with the following core therapeutic areas: infectious diseases, HIV, respiratory and immunology, and oncology. With a 2023 turnover of over $30 billion, you can see what big pharma means. And we have three main divisions in our portfolio: vaccines, specialty medicines, and general medicines. The turnover is spread across those three.

      GIOVANNI GIORGIO: Yeah, Anne, but we said we are from front end engineering and design. So what's the main activity and purpose of our organization?

      ANNE BARD: Sure. We are an engineering group in global capital projects focusing on the front end engineering phases of capital projects, from business analysis to feasibility and concept. We sit at the origination of the asset life cycle, hence our focus on the foundational aspects of BIM and design data.

      Good practice is to address the elephant in the room right off the bat; that's why I've put an elephant on the first slide. In our title, we claim that BIM and the CDE can be foundational data sources to generate value for manufacturing, and more than that, that they can be an integral part of our smart manufacturing ambition. But there is a but.

      GIOVANNI GIORGIO: Yeah, but we must be honest. Today, this is not the case. BIM doesn't generate enough value, if any value at all, for the manufacturing part. And it's not really linked to the current smart manufacturing strategy.

      ANNE BARD: Yeah, truly wasted potential here. But let's explain better, shall we?

      GIOVANNI GIORGIO: Yeah. We'd better start by listing what doesn't work well in current BIM delivery. It will help us explain to the audience the drivers for change.

      ANNE BARD: Yeah, true. So first, there is the fact that all the effort put into BIM data quality mainly serves construction and project delivery and still follows historical practices. As a consequence, a BIM model more often than not ends up in the bin at the end. Then, data and documentation are delivered per supplier standards, which is particularly true for process equipment. We will double-click on that later.

      And digital delivery is still subject to supply chain maturity. Then, the data control of our BIM delivery is owned by our EPCMs; we don't own it. Basically, the EPCMs get the value out of BIM, but we don't get very much. And in the end, we potentially lose information. We will develop that extensively.

      So we have cumbersome data extraction. We lose some information. And some information, in the end, is not accessible to field workers or people on the shop floor. But on the other hand, the business drivers for change in smart manufacturing are quite clear, Giovanni, right?

      GIOVANNI GIORGIO: Yeah, they're actually very clear and very business-oriented, starting with the people, with clear business KPIs, as you can see, and clear objectives from an asset management perspective.

      ANNE BARD: So obviously, there is a gap. It means that despite its promising aspects, BIM today fails to generate sufficient value for smart manufacturing and instead primarily generates costs. Therefore, it is essential to explore ways to bridge this gap. It cannot stay like that.

      GIOVANNI GIORGIO: Yes, it definitely needs addressing, and that's the whole point of the presentation.

      ANNE BARD: That's literally the point, yes.

      GIOVANNI GIORGIO: Yeah.

      ANNE BARD: So, the big idea. The premise of the big idea that we want to develop here is that BIM and the CDE can become valuable data sources to generate value for smart manufacturing. But before that, we need to acknowledge the fact that industrial data is inherently complex, spread across multiple platforms and tools, often siloed, and with new data points continuously generated: OT data, reports, work orders, et cetera.

      GIOVANNI GIORGIO: Yeah, and because of this explosion of data, industry research shows that people spend up to 50% of their time just searching for information.

      ANNE BARD: Yes, and often not just individually; people waste this time and also the time of others. And if those other people are not available--

      GIOVANNI GIORGIO: There is even more waste of time.

      ANNE BARD: Exactly. Therefore, we believe that the key value of BIM and the CDE as data sources for smart manufacturing is that they can become the universal data locator and contextualization engine: a true foundation in our information ecosystem to find information swiftly and solve problems as a team in a very rapid and fluid manner.

      GIOVANNI GIORGIO: Yeah, the goal is to move away from the current situation, which we know is far from optimal.

      ANNE BARD: Yeah, totally. So, as we show on the left-hand side of the slide, the 3D model more often than not ends up in the bin. And at the bottom, a manual contextualization for [INAUDIBLE] field workers: when there is an event or an alarm, to support the team in closing the deviation, an engineer needs to manually connect P&IDs, data, layouts, and records. So we believe that BIM, meaning 3D model, data, and documents, can be the foundation for an easy-to-navigate workspace to generate insight. This is what we call contextualization. And here on the right-hand side of the screen, you can see an animation of what we mean by that.

      GIOVANNI GIORGIO: Yeah, and combined with gen AI, we can imagine asking a copilot, for instance, to help summarize reports or answer questions related to [INAUDIBLE] that can be found in engineering documentation, using natural language.

      ANNE BARD: But this is possible only if our BIM process enforces unified [INAUDIBLE] requirements throughout project delivery, from FEED to commissioning, and across our supply chain. And we are going to explain what we mean by that.

      GIOVANNI GIORGIO: Yeah.

      ANNE BARD: So for this big idea to become a reality, we need to overcome a few challenges that we mentioned a bit in the intro. And we are going to explain them in this section.

      So section 3 has two parts. Part one: BIM, what needs to change, which we are going to deal with now. And part two: smart manufacturing, what needs to change as well.

      So the first challenge is a classic one: the dilemma between data and documents. The majority of the data that will later be useful for smart manufacturing, and for the contextualization capability that we talked about, is encapsulated in documents owned by suppliers and based on the GSK deliverables list that we request in our analysis.

      I think that when we talk about data today in a digital environment, we talk about 3D model data. But engineering data useful for the asset lifecycle is still delivered in documents, as per our deliverables list, as I just mentioned.

      And these documents are delivered by package, so they are siloed by nature. You have packages for compounding, filling, packing, et cetera. And each supplier delivers per their own standards and tags the items as they see fit, which means that today, extracting information to create assets in SAP is a very cumbersome and manual activity.

      GIOVANNI GIORGIO: Yeah, and this activity starts at the very end of the project, during the commissioning and qualification phase. And it can take up to a year, involving more than one FTE, which is a massive bottleneck for our operations and for [INAUDIBLE].

      ANNE BARD: Yeah. So let's continue detailing the current BIM challenges. BIM needs to go beyond building delivery because, in the current BIM process, the focus is on the building and on putting together a federated model, mainly focusing on the building aspect. B stands for building, right?

      And the data management in BIM is focused on getting the 3D data to do proper space coordination and clash detection, and to ensure all is well put together from a design and construction standpoint.

      The problem is that, for manufacturing, we will need data that are not for clash detection, but to be able to fill SAP with the components that are in the P&ID from each package. Remember the packages I mentioned earlier, from each supplier.

      These components are present in the supplier's 3D model. But if the supplier model is federated into the big model only for space coordination--for example, we just get a STEP file that arrives as a block--it's not possible to connect data from SAP to the model and to have this contextualization capability that we were mentioning earlier.

      GIOVANNI GIORGIO: Yeah, so basically, what you're saying is that, if BIM doesn't deliver process information at the component level, we are not able to create the digital thread, the connection from SAP to the model.

      ANNE BARD: Right, and we are not able to create value for smart manufacturing with BIM. So yeah, what would good look like in this context? Or how can we change the current status quo and move toward an improved BIM process that addresses manufacturing needs?

      We need to influence our supply chain to get better, and also more, data, which will cost money and will drive some changes in project management, procurement, and data quality control.

      GIOVANNI GIORGIO: Yeah, but the goal is also to avoid the cumbersome data collection, or to avoid asking for duplicate activities, which is also a cost.

      ANNE BARD: Yeah. And to solve that conundrum, we foresee two possible options to get the data that will allow us to create this digital thread from component to document to 3D, and ultimately to SAP or any other GSK system.

      First and foremost, it all starts in the CDE, the Autodesk Construction Cloud. And within the CDE, the information we will need for manufacturing, and moreover for smart manufacturing, comes from different sources that are highlighted in the BIM documents and the critical component analysis.

      The first option, potentially an end goal for us, is the data-first route. But for this, we would need to put in place the right infrastructure, namely an engineering database, and the adequate processes to make it available to our supply chain from front end to execution.

      [INAUDIBLE] and tell our suppliers: this is where you are going to put data--not documents, as you have been doing so far, but data directly. That means diameters and tag designations, really data. But as of today, this database doesn't exist, and our supply chain, principally the package and equipment manufacturers, is anyhow delivering documents.
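
To make the data-first idea concrete, here is a minimal sketch, assuming a simple Python schema, of what one structured record in such an engineering database might look like. Every field name, tag format, and value below is hypothetical, not GSK's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class EngineeringRecord:
    """One component as a supplier could register it directly as data
    (illustrative schema only, not GSK's actual engineering database)."""
    tag: str                # unified engineering tag (hypothetical format)
    component_type: str     # e.g. "pump", "valve"
    package: str            # supplier package: compounding, filling, packing...
    attributes: dict[str, float | str] = field(default_factory=dict)
    source_documents: list[str] = field(default_factory=list)  # back-references to P&IDs, datasheets

# A supplier submits data directly instead of burying it in a PDF datasheet:
record = EngineeringRecord(
    tag="S1-B02-CMP-PMP-001",
    component_type="pump",
    package="compounding",
    attributes={"diameter_mm": 50.0, "design_pressure_bar": 6.0},
    source_documents=["PID-CMP-0001", "DS-PMP-001"],
)
```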

      GIOVANNI GIORGIO: Yeah, and then the other option, of course, is the document-first route, which is the one that we actually tested in our POV. And that's, as you mentioned, because our supply chain is not mature enough. And that's probably a good starting point.

      ANNE BARD: And that's what we have tested. And what is important in working with our supply chain to enable this data thread that we want for contextualization is this ability to connect 3D, data, and documents. So whatever the route chosen, A or B, this work that we need to do with our supply chain will be [INAUDIBLE].

      GIOVANNI GIORGIO: Yeah.

      ANNE BARD: OK. So what you see on the screen now--we're not going to delve into that. It's just an internal proposal to create a naming convention that we would like to implement from project origination.

      We believe that already in feasibility, having a naming convention--how we are going to name a site, a building, the systems, and the main equipment--is anyhow good practice and not rocket science. But believe it or not, it's not happening today. Standard engineering tagging doesn't exist on a lot of sites.

      GIOVANNI GIORGIO: Yeah, so without going into too much detail here, what we're trying to achieve is to create a standard, unique BIM tag for each component, independent of the supplier, that can be put in relationship with a SAP unique ID for each component.
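
As a rough illustration of that idea, the sketch below validates supplier tags against a unified convention and relates each BIM tag to a SAP ID. The tag pattern and all values are hypothetical; the class does not show GSK's actual convention.

```python
import re

# Hypothetical convention: <site>-<building>-<system>-<equipment type>-<sequence>,
# e.g. "S1-B02-HVAC-FAN-001".
TAG_PATTERN = re.compile(r"^[A-Z0-9]{2}-[A-Z0-9]{3}-[A-Z]{2,5}-[A-Z]{2,4}-\d{3}$")

# The digital thread at its simplest: one unified BIM tag per component,
# related one-to-one to a SAP unique ID.
bim_to_sap: dict[str, str] = {}

def register_component(bim_tag: str, sap_id: str) -> None:
    """Reject supplier tags that don't follow the unified convention."""
    if TAG_PATTERN.fullmatch(bim_tag) is None:
        raise ValueError(f"Tag {bim_tag!r} does not follow the naming convention")
    bim_to_sap[bim_tag] = sap_id

register_component("S1-B02-HVAC-FAN-001", "10001234")  # illustrative values
```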

      ANNE BARD: Yep. So part two of the challenge, still in section 3, is the smart manufacturing readiness with respect to BIM.

      GIOVANNI GIORGIO: Yeah, so for the smart manufacturing initiative, what we have experienced so far is that, OK, we achieved the first step, which is to get all the data in one place--the famous data lake, as it used to be called a while ago.

      But then most of the use cases revolve around dashboards and/or trends and analytics, mostly involving OT and ERP data. But we believe that spatial and 3D contextualization plus engineering data can actually unlock the full industrial potential. I can give you an example I personally went through some years ago, but I believe it's still relevant today.

      I was working on a root cause analysis involving a reaction in a vessel, and I was checking the pump performance in our MS system. But because I was working remotely, I could not be sure that the tag was actually one of the pumps really sitting near the vessel.

      Plus, I couldn't know whether the pump had undergone any maintenance between batches, or any failure or anything like that, because all that data is basically associated with a different data system. So I had to call the local engineer to ask all those questions, and he had to go and retrieve that information, which took several days.

      ANNE BARD: Yeah, and if we put BIM on top, the contextualization from 3D, layouts, documents, and information is what BIM can provide. And this is what we mean by contextualization to augment the asset care processes.

      GIOVANNI GIORGIO: Yeah.

      ANNE BARD: All right, so the POV now. Let's explain to the audience what we have done in the POV.

      GIOVANNI GIORGIO: Yeah, so this is an attempt to show our vision if we solve all the problems that we mentioned in the previous section. Basically, we can use our BIM data platform to be the source of our engineering and spatial data.

      From there, we can extract all the documents and data needed to build our data platform, and then use the data platform to contextualize the data. And from there, we can build and consume the data, prioritizing, of course, by use case and value creation as required.

      But as you can see, the flow is not that different from any other industry application; also in this case, we have seen a similar data flow. The key aspect is the contextualization of spatial and engineering data on top of everything else. That is possible only if we have the right data sources and the right model in the platform from day one.

      Then, I think we can talk about the POV and what kind of problems we tried to solve here. Basically, we are just summarizing; I'm going to go into more detail in the next few slides.

      Basically, in the first problem, we tried to reduce the bottleneck of asset creation in SAP post-delivery, as we already hinted at in a previous slide. We tried to reduce it from 12 months to one month, and also to enable bidirectional communication between SAP and the BIM data.

      And then the second problem, which was more about solving a manufacturing problem: we tried to reduce downtime and improve reliability by reducing the response time following an event by 70% to 80%. And we'll see how we can achieve that in the next slides.

      Yeah, so as I said, in problem one we're looking at the handover phase of a capital project, when, typically, the assets need to be created in SAP. Typically, someone will need to manually extract data from the BIM model, and perhaps other documents, to compile a data collection form in Excel and send it to another person outside GSK.

      That person will then need to manually insert the data into SAP. And because this is a highly manual and repetitive task, there have been cases we know of that took 2.5 FTEs over 12 months.

      And this is not just due to the simple repetitive task, but also because this task is prone to errors and quality issues. So the document needs to go back and forth a few times before you get all the right data in place.

      But what we claim is that, with the right data platform in place, we can actually do this same task in under a month, completely automated, and with the right data, right first time.

      ANNE BARD: Yeah, and we'll talk about what type of data quality we need to enable that.

      GIOVANNI GIORGIO: Yeah. In the second use case, we took a classic production issue where, at some point, an alarm goes off: a piece of equipment breaks down, and a corrective action needs to take place.

      So someone, usually an engineer, has to look for and retrieve multiple documents, going between manuals, SAP, technical specs, et cetera, et cetera, look at extracted data, and so on and so forth.

      And sometimes this can also involve multiple people, because you have multiple document owners or data owners. And just the data collection can take quite some time.

      We have also had cases where the data or the information in the documents could not be found, and we sometimes had to call the suppliers of the equipment, for instance, to get the right information. As you can imagine, this is quite cumbersome, and it can take some time.

      Then once you find the issue, and the issue is solved, you still need to update the work order in SAP, and then everything is fixed. In the POV test, we wanted to demonstrate that anyone could access the right data very quickly, contextualized in one canvas.

      You can access the data even remotely or collaboratively. Plus, you could then automatically record the work order in SAP and check whether there are any spare parts if needed. All this is much faster, as you can imagine--a very quick time frame compared to [INAUDIBLE].

      ANNE BARD: And on this test case that you just explained, about the deviation or problem solving or diagnosis, we did a semi-quantitative comparison between the current baseline and what it could be if we had this tool available. You have the numbers in green.

      And the result is that we could get a 70% faster reaction time. And our partner for this POV, Cognite, told us that's also what they have observed with other clients.

      GIOVANNI GIORGIO: Yeah, but I think at this point, it would be better if you actually show the video of what we've done.

      ANNE BARD: Yeah, we'll show the video. A bit of a disclaimer: the video mentions a lot the data contextualization platform that we tested during this POV, namely Cognite, so you will hear that name a lot, on top of Autodesk Construction Cloud.

      But take it from a generic standpoint: as the contextualization approach could be platform agnostic, we could test another platform. The key point is Autodesk Construction Cloud [INAUDIBLE].

      [VIDEO PLAYBACK]

      - In this video, we are going to demonstrate how Cognite Data Fusion can be used as a digital twin to help site engineers perform maintenance tasks.

      The current process for identifying and populating SAP with critical components is lengthy and manual. It requires several engineers to analyze supplier documentation and determine the component's criticality.

      This process can go through multiple revisions and take up to a year to complete. Capgemini proposes a new process using Cognite Data Fusion, CDF. Here's how it works.

      A GSK engineer uploads the documents and asset list from the supplier to a GSK SharePoint folder. The file extractor can be replaced by setting up an automatic connection to a common data environment such as Autodesk Cloud.

      Cognite Data Fusion automatically ingests and contextualizes all the files. The GSK engineer uses the CCA editor and Cognite's Industrial Canvas to validate the contextualization and add information regarding the component's criticality.

      Once the data is confirmed accurate, Cognite Data Fusion populates SAP with the critical assets. Cognite also helps site engineers perform maintenance tasks quickly and accurately using Cognite Infield.

      To start the process, upload all relevant documents to a specific monitored folder where the Cognite file extractor will detect the new files and automatically upload them to CDF to initiate the contextualization process.
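
As a sketch of what such a monitored-folder extractor could look like with the cognite-sdk Python package: the folder path, file filter, and polling interval below are assumptions, and authentication is assumed to be configured separately per the SDK's documentation.

```python
import time
from pathlib import Path

from cognite.client import CogniteClient

# Assumes credentials are already configured (via ClientConfig or environment);
# see the cognite-sdk documentation for project-specific auth setup.
client = CogniteClient()

WATCH_DIR = Path("/data/handover_inbox")  # hypothetical monitored folder
seen: set[Path] = set()

while True:
    for path in WATCH_DIR.glob("*.pdf"):
        if path not in seen:
            # Upload each new document to CDF so the contextualization
            # workflow can pick it up and link it to assets.
            client.files.upload(str(path), name=path.name)
            seen.add(path)
    time.sleep(30)  # poll the folder every 30 seconds
```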

      After the contextualization workflow is complete, all documents will be contextualized to the created assets and can be used as interactive diagrams. Using the CCA editor in Cognite, engineers can select the data set they wish to evaluate and determine the criticality of assets.

      To determine an asset's criticality, the engineer selects the asset and opens a link to Cognite's Industrial Canvas. Here, the asset is linked to all relevant supplier documents, providing quick and easy access to the necessary information.

      After analysis, the engineer can flag the component as critical, updating the asset's metadata to include its criticality status. Changes in the CDF assets trigger an automatic extraction to populate SAP with new data.

      Once SAP objects are created, the SAP IDs are pushed back into CDF by updating the asset's metadata, ensuring that CDF assets are linked to the SAP ID. We can see the data has been successfully published.

      The connection between CDF and SAP is bidirectional, ensuring that any changes in SAP are reflected in CDF. After changing an asset's description and creating a maintenance request in SAP, the description is updated in CDF.
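
A minimal sketch of both directions of that link, using the cognite-sdk Python package; the metadata key, external IDs, and values are assumptions, and credentials are again assumed to be configured already.

```python
from cognite.client import CogniteClient
from cognite.client.data_classes import AssetUpdate

client = CogniteClient()  # assumes auth is already configured

def link_sap_id(asset_external_id: str, sap_id: str) -> None:
    """Write the newly created SAP ID back onto the CDF asset's metadata
    so the two systems stay linked."""
    client.assets.update(
        AssetUpdate(external_id=asset_external_id).metadata.add({"sap_id": sap_id})
    )

def sync_description_from_sap(asset_external_id: str, sap_description: str) -> None:
    """Reflect a description change made in SAP back into CDF."""
    client.assets.update(
        AssetUpdate(external_id=asset_external_id).description.set(sap_description)
    )
```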

      Using Cognite's Industrial Canvas, engineers can also create custom dashboards using all the information in CDF. They can add relevant documents, time series data related to a particular asset, as well as adding interactive 3D files.

      A site engineer can use Cognite's Infield to see their assigned work orders. After selecting a work order, Infield is populated with all the information they need to perform the required maintenance task.

      Infield also allows site engineers to use AI to ask questions about a component, such as how to adjust the set point. The answer is then provided along with links to where the answer is found in these documents.
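
The "answer plus links to where it was found" pattern is essentially retrieval with citations. As a platform-agnostic illustration, not Cognite's actual implementation, a toy keyword-overlap retriever over document passages might look like this; real systems would use embeddings and an LLM on top.

```python
def retrieve_with_citations(question: str,
                            passages: list[tuple[str, str, int]],
                            top_k: int = 3) -> list[tuple[str, str, int]]:
    """Rank (text, document, page) passages by word overlap with the question,
    so every answer can cite the document and page it came from."""
    q_words = set(question.lower().split())

    def score(p: tuple[str, str, int]) -> int:
        return len(q_words & set(p[0].lower().split()))

    return sorted(passages, key=score, reverse=True)[:top_k]

# Illustrative corpus: passages extracted from supplier manuals.
corpus = [
    ("To adjust the set point, open the controller menu and select SP.", "pump_manual.pdf", 12),
    ("Routine maintenance requires isolating the pump before servicing.", "pump_manual.pdf", 30),
]
for text, doc, page in retrieve_with_citations("how to adjust the set point", corpus):
    print(f"{doc} p.{page}: {text}")
```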

      [AUDIO LOGO]

      [END PLAYBACK]

      ANNE BARD: All right, so you have just seen the video illustrating one of the tests that we did during the POV. There is more than that one, but this test summarizes a bit of everything.

      And we measured the value during the POV. Giovanni has alluded to it already: it's about the asset data extraction. The time is reduced from 12 months, or more than 12 months if we are [INAUDIBLE], to one month.

      So that was confirmed: downloading, uploading, and contextualizing a package took less than 10 minutes. So for a whole project, we believe that one month is definitely achievable.

      Then, 70% less time spent searching engineering data to do the criticality assessment--this is what you have seen in the video, because all the engineers on the team can have all the data available to do the criticality assessment. And 70% less time for any diagnosis, deviation, or teamwork around an event--this was Giovanni's use case number two.

      And the last one--that one we didn't test--the 30% less involvement of others to support or solve individual problems. That one was much more a figure given by our partner, based on industry benchmarks.

      But I guess that last one, in particular, resonated very much. Giovanni had a conversation last week with one of our colleagues working in field engineering, right?

      GIOVANNI GIORGIO: Yes, indeed. I think he also mentioned that, for instance, people in quality are sometimes not able to close a deviation by themselves, because they don't have access to some data. So they need to rely on engineers, which, of course, is a waste of time.

      But it's not just that. It's also the fact that you have siloed data that only a few people can access. Of course, this translates into [INAUDIBLE] time, but it can also be disruptive for the business, because you need to rely on multiple people to move forward.

      But also--this was another good comment from our colleague--there's basically no learning curve, because you always rely on others. So you're not going to learn by yourself which document you should look at, or what is actually written in the document--which you can actually get very quickly if you use something like the canvas that we saw in the video.

      ANNE BARD: Yeah, maybe the 30% that was estimated by our partner is on the low side of the curve. With what we are saying, it's potentially more than that.

      GIOVANNI GIORGIO: Yeah, yeah. But anyway, that would be the starting point for our colleagues in smart manufacturing.

      ANNE BARD: Yeah, they will take it from there to build the business case and to assess which platform is the right one for us.

      So we are almost at the end of the presentation, moving to the lessons learned. So, what didn't work? Because that was a nice story, but there were a lot of hiccups during the journey.

      And basically, all the issues that I'm going to list here are somehow a call to action for us to work with our supply chain, our partners, or our engineering partners.

      So the first one is the cold start problem. [INAUDIBLE] is nice, but if the AI doesn't know what it is looking for, it doesn't start. What needs to be said is that for the POV, we started with one document package and the content of Autodesk Construction Cloud as of today, so with all the flaws that we explained.

      And in the package, we didn't even have the list of components from the supplier. So we had to create the list so that it could start, which, in the future, should be different.

      Then, AI is nice. But with no standard formatting of data and documents from one package to another, the 10 minutes that I mentioned is not going to be as smooth as that if we don't have a unified standard to tag and name things.

      Then the other problem: the 3D models delivered by the suppliers. When they are delivered just for space coordination, we get a STEP file. It's OK for clash detection and integration into the model, but an HVAC package comes as a block.

      So to do the POV, we had to ask our engineers to remodel some valves and switches and such to be able to do the use case, which is not scalable. So we really need to work on the format and interoperability between Revit and other tools, because what we want is really to connect the data and the geometry of the component with the data present in documents; a STEP file is not good enough.

      And then document quality: you could have a package that is very compliant and delivered as per our requirements from a deliverables standpoint. But if it's just a photocopy--a poor photocopy of a scan--the AI cannot read it and cannot parse the diagrams or the text if the quality isn't there.

      GIOVANNI GIORGIO: Yeah, and then for the future challenges: basically, it's more about what we need to do internally as a follow-up to the POV. So the first one would be trying to push the BIM-for-smart-manufacturing standardization in the global capital projects organization, which we are part of.

      The second activity would be the adoption of BIM-mandated documentation quality in our supply chain--so trying to at least get the standard tagging that we showed earlier adopted.

      Then, we need to think about how we crack BIM master data management post-handover: who's going to take care of the data lifecycle post-handover?

      And last but not least, as we already mentioned, we basically need to develop some business cases for the enterprise platforms, both for BIM and for data contextualization, which is basically the part that our smart manufacturing colleagues will help us with.

      ANNE BARD: Yeah, and on the first point you mentioned: we will have to update our project management framework, our bible, and explain how this process fits into our standard steps and stages, from handover to platform.

      So you see all our stages and the gates. And basically, what we discussed is the delivery of data, docs, and 3D along all these stages, with equipment tagged in a unified manner, landing in Autodesk Construction Cloud so that it can then be handed over to a data contextualization platform. This is what we will need to write down to make that happen.

      GIOVANNI GIORGIO: Yeah, and just a reminder, again, of what we already said earlier: what we have here as the data contextualization platform could potentially be another one. It's the concept of [INAUDIBLE] data that's important; then we could use whatever platform we choose.

      ANNE BARD: One that has the capability of doing that sort of ingestion and contextualization, right?

      GIOVANNI GIORGIO: Correct, yeah.

      ANNE BARD: Yeah. OK, so now the conclusion. Let's go beyond the B. BIM in its current building-centric world is not enough to deliver the value that we need for manufacturing and to address the smart manufacturing expectations.

      And basically, for all the problems that we listed in the "What didn't work" section, we will need the support of Autodesk, of our EPCMs, and mainly of our equipment manufacturers--the Groningers, the Boschs, the EMAs, you name it--or anybody who has a machine, a P&ID, and a 3D model, basically, because we really want to go beyond the B.

      We want the model to enable automated handover, as you have seen, and we want to enable asset performance. And yeah, going beyond the B is really the conclusion of our presentation. So thank you all. Thank you for listening.

      GIOVANNI GIORGIO: Thank you. Yeah, thank you very much.