Description
Key Learnings
- Learn how to apply AI-automated patterns to streamline MEP modeling.
- Learn how to enhance engineering route planning with AI-driven tools.
- Learn how to boost coordination and project efficiency through AI-enhanced workflows.
Speaker
- Enrique Galicia Tovar
With 17 years of expertise in Building Information Modeling (BIM), I currently lead as the Director of Innovations and AI, specializing in developing cutting-edge solutions that streamline workflows and enhance interoperability across various platforms. Leveraging a deep knowledge of software development, I craft customized tools in C#, Python, and JavaScript, primarily utilizing Autodesk Revit, Navisworks, Dynamo, and Autodesk Platform Services. I consult internationally, delivering BIM software development solutions that address complex challenges within the industry. My role involves transforming theoretical concepts into practical, scalable applications that propel technological advancements in architectural and engineering contexts. Beyond technical development, I am passionate about education and knowledge sharing. I have created over 100 online courses that cultivate skills in BIM technologies, benefiting 30,000+ students globally. These courses collectively contribute an impressive 10 hours of learning daily, underscoring my commitment to expanding the capabilities of future professionals in the technology and AI domains.
ENRIQUE GALICIA TOVAR: Hello and welcome to our session on AI Nurtured MEP Modeling, Automated Patterns and Engineering Solutions. I'm Enrique Galicia, and today we're going to explore how AI can fundamentally transform the way we approach modeling. We'll dive into powerful tools and workflows that can take your projects from concept to completion more efficiently than ever before.
Let's begin by understanding the challenges we face in traditional MEP modeling and how AI can help us overcome them. About myself: I'm Enrique Galicia, a BIM specialist, architect, and software developer with 17 years of experience in BIM workflows and software development.
I have contributed to projects across the US, Canada, Europe, the Middle East, and Mexico, and I'm recognized for optimizing resources and streamlining project delivery through tailored solutions in the AEC industry. My experience in Autodesk tools and programming drives innovation, particularly with AI integration, enhancing project management and sustainability.
I have online courses with more than 30,000 students globally, which reflect my practical approach and commitment to education, helping companies achieve efficiency and improved outcomes. So let's review our main objectives for today's lecture.
The first one is to apply AI-automated patterns: knowing how to dissect the information we have in MEP models so that we can streamline MEP modeling by automating repetitive tasks and standardizing processes with AI-generated patterns, ensuring consistency and reducing manual errors.
The second objective we're going to look at is enhancing engineering route planning: utilizing AI-driven tools to optimize routing and system layouts, improving overall design efficiency and effectiveness. The third objective goes into clash detection: how can we achieve proactive clash detection and resolution, implementing AI to proactively detect and resolve clashes within MEP models, minimizing reactive corrections and project delays?
And the last one is to boost coordination and efficiency: how do we improve project coordination through AI-enhanced workflows, ensuring that all team members have real-time access to consistent, updated project data?
So let's set the stage and review some traditional challenges we have in MEP modeling. We know that traditional MEP modeling faces numerous challenges that hinder project efficiency and success.
Manual data entry and configuration inconsistencies lead to errors, delays, and increased costs. This lack of standardization complicates clash detection and often results in costly reactive corrections. So with that in mind, we want to introduce AI as a game-changer. I know that AI has been talked about a lot in recent years, and [INAUDIBLE] is going in two different directions.
AI technologies like machine learning, deep learning, and generative AI are transforming MEP modeling by automating processes, enhancing quality, and ensuring better coordination. However, a successful implementation of these technologies requires a robust framework for data extraction, so that we have ways of querying our information and retrieving standards that keep working. We will also need structured tools to leverage AI's full potential.
So for that to happen, we need to focus on a solid foundation. I put a lot of AI-generated images in the deck just to reinforce how AI can enable things, but without that solid foundation, the core business will be in a bit of a conflict, because a successful AI implementation depends on it.
Understanding the projects, the systems, and the information flow within the organization are the key aspects that let us know which elements and which tools are useful for our project's success. Selecting the right tools and defining the user experience are also crucial, because it doesn't matter how good the technology is if it isn't suited to the final user.
So the key components of an effective AI workflow rely on understanding the project assets, the project types, the work we are engaged in, and the specific requirements. Particularly in our case, we're thinking about MEP modeling, and MEP modeling relies on compliance and on the materials we're going to be utilizing across different projects.
Then we need system development that analyzes the systems, a way of having everything in a database so that we can query it. And finally, we need to understand the outcomes and references: where do we want to end up? What is the result going to look like? That way, we know how to push it forward.
There is also a concept that is very important, and I normally push it with our clients: we need to consider a low-hanging-fruit effort for everything we're doing, since these are highly volatile implementations.
So focusing on low-hanging fruit means having quick wins that provide significant results, showing we are on the right path toward something tangible, with benefits that build momentum and justify further investment, especially when higher-risk work comes into play. These initial steps create the foundation for a broader AI integration.
So we have seen some key features of AI, and we have seen some concepts we need to align. Let's see how we can move from traditional methods, which often miss the mark in areas where AI can provide significant benefits, toward success, getting the right components with the proper tools.
A little overview before we go into that-- an introduction of Avant Leap. Avant Leap is a pioneering force in the AI-driven transformation of the AEC industry.
At Avant Leap, we do not just meet requirements. We want to redefine them. Our mission is to empower professionals with innovative tools and solutions that not only solve today's challenges but also anticipate tomorrow's needs.
Avant Leap stands by your side as a strategic ally. We have clients all over the world with different specifics that allow us to have a much more universal type of solution. Our approach is rooted in collaboration, ensuring that every solution is tailored to fit your unique operational challenges and goals, pushing the boundaries of what's possible.
Pushing the boundaries means that no challenge is too complex. It will depend on how far the conventional solution can take us and how we can use different technologies to find out whether we have a low-hanging-fruit effort or whether we need to scale it up to get benefits at the end. Whether it's automating workflows or creating custom solutions, we ensure our projects achieve unparalleled efficiency and precision.
The example we have is a development where objects created in the model are tracked in a database and read against budget changes, so that we can know the final cost we are aiming for or how we are reviewing it. And every operational process we craft is designed to solve real-world challenges effectively.
Another sample here is a tool released early this year: an assistant designed to query and enhance the tools you use in MEP modeling processes. The main advantage we found is that it's not just about using a language model to understand you, but about how it uses your tools, or the tools you already crafted in a workflow, so that it can respond and also train other people to work better.
So within all that overview, let's jump into our processes. Let's dive into how we can use extraction and how we can automate MEP processes. The first step, or the most common step, and the one I'd recommend, is the extraction.
For the extraction, the fastest way is to extract from Dynamo. We can extract using frameworks such as MEPover, or we can read the parameters directly from the objects themselves.
The difficult part of the extraction is knowing which information we want to have. We want at least the measurements of the element, the equipment categories, and, if we're going to relate it to any type of compliance, how we're going to call it.
The second part of the process is saving all those elements. In this case, I'm proposing JSON because, from our perspective, it's a data store that isn't heavy or complex, and we can review the results we have for each model we are extracting.
So we export this data into a structured JSON file, and that serves as a blueprint: we can reuse those configurations, query them, and improve them. The last part of this extraction process is having those JSON files analyzed by LLMs as a template to maintain other models or to reduce manual work.
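To make the blueprint idea concrete, here is a minimal sketch (not the presented tool) of what such a structured JSON export could look like; every field name and value below is an illustrative assumption, not the actual schema.

```python
# Minimal sketch of the JSON "blueprint" export described above.
# Field names and values are illustrative assumptions, not the real schema.
import json

extracted_elements = [
    {
        "element_id": 401523,
        "category": "Pipes",
        "system": "Domestic Cold Water",
        "size_mm": 50,
        "length_mm": 3200,
        "material": "Copper",
        "flow_l_per_s": 1.2,
        "compliance_tag": "CW-01",
    },
]

# Save the structured blueprint so later runs can query, compare, or reuse it.
with open("mep_blueprint.json", "w", encoding="utf-8") as handle:
    json.dump({"project": "sample-tower", "elements": extracted_elements}, handle, indent=2)
```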
For those looking to dig deeper, we have these references to other developments. For the extraction, there is a related course called MEP Revit 2025, which is about programming in C#, and that will also help you get going. It's an open reference you can check and query, and that will also help.
Then we also have creating Zero Touch Nodes, because we know Dynamo is very useful in case studies where you can extract something very quickly. But if you want to go deeper, you can move to C#, and if there's a particular node you need that doesn't exist in Dynamo, you can craft it by creating it as a C# plugin. That reference will also help with that.
Then we have Accelerated BIM Modeling, another development we will see when we get into routing. But if you want to know every single aspect of how to create the geometry and how to improve those solutions, you can also look at that reference. That reference is all about applying it to ducts.
There's also infrastructure for MEP modeling when you're working with Dynamo settings. I will say that part of my interest in MEP is working around the use of Dynamo Player and of multiple tools to make it faster, because knowing that it's complex, those can help make it very easy.
For all those references, you can just scan the QR code and review the content. So let's begin the extraction process. Typically, setting up this extraction workflow involves creating custom nodes, interacting with the Revit API, and getting the essentials, like size, flow rate, and material for each MEP element, ensuring that we are retrieving detailed and accurate information.
So we have the sample, and the sample is from within a building. What we want to figure out here is how we separate information depending on our business case. It's not going to be the same if we're modeling all installations as if we're specifically starting with piping, ducting, or some electrical hardware.
And we need to take advantage of the models we have done previously, because they might have, or they do have, the compliance information we want to extract. So this is just a basic sample; I'm not going to get too technical in this case.
But a simple extraction can go just by selecting elements: selecting a category we want to call on, then getting all elements from that category. And in this case, I'm using MEPover to connect, so that we can have connectors, distances, flows, materials, and elements that give us some perspective on how it all goes.
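As a rough illustration of that step, here is a hedged Dynamo Python-node sketch that collects pipes through the Revit API and reads a few parameters; the parameter names are assumptions, and in the workflow described above the MEPover nodes would supply the connector data.

```python
# Hedged Dynamo Python-node sketch of the extraction step (illustrative only).
import clr
clr.AddReference("RevitAPI")
clr.AddReference("RevitServices")
from Autodesk.Revit.DB import FilteredElementCollector, BuiltInCategory
from RevitServices.Persistence import DocumentManager

doc = DocumentManager.Instance.CurrentDBDocument

# Collect every pipe instance; swap the category for ducts, conduits, and so on.
pipes = (FilteredElementCollector(doc)
         .OfCategory(BuiltInCategory.OST_PipeCurves)
         .WhereElementIsNotElementType()
         .ToElements())

records = []
for pipe in pipes:
    diameter = pipe.LookupParameter("Diameter")     # returns None if the parameter is absent
    length = pipe.LookupParameter("Length")
    system = pipe.LookupParameter("System Type")
    records.append({
        "id": pipe.Id.IntegerValue,
        "diameter": diameter.AsDouble() if diameter else None,  # internal units (feet)
        "length": length.AsDouble() if length else None,
        "system": system.AsValueString() if system else None,
    })

OUT = records  # Dynamo output port
```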
That's the blueprint, the DNA of our previous project. What we want is for that to be compressed together with all the other data so that we have something to rely on.
So in that scenario, we see there are a lot of elements, and we also need to do a lot of processing.
There's also an Avant Leap package for Dynamo that simplifies this process significantly, because it's built around this pattern. I'm not going to go deep into it, but basically, it offers specialized nodes that are preconfigured to interact with the Revit API, extracting MEP categories such as ducts, pipes, or fittings with ease.
The purpose of this is that the more you use it, the more questions you will have, and the more overviews and settings you will have. And I think that's part of what we want technology to do for us.
We want AI not to be just about reading things or querying specific models, but something we use on a daily basis. With these nodes, you can quickly select MEP elements, automate data retrieval, and ensure you are efficiently extracting the information.
The next step in this extraction is using machine learning, or GPTs or language models, to make those JSON files very precise. That's important because we know one of the main issues when using machine learning is that the information is not accurate or not good enough to give you a positive output.
It might also go very badly. So we can use the same models to improve that data and then carry it forward. This can involve clustering similar elements, detecting common routes, connecting common geometries, and using those relationships to create different MEP elements.
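As one possible reading of "clustering similar elements", here is a small sketch, assuming scikit-learn is available and the blueprint file from earlier; the feature choice and cluster count are arbitrary picks for illustration, not the presented workflow.

```python
# Illustrative clustering sketch over the extracted blueprint (assumptions only).
import json
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

with open("mep_blueprint.json", encoding="utf-8") as handle:
    elements = json.load(handle)["elements"]

# Simple numeric characteristics used as clustering features.
features = [[e["size_mm"], e["length_mm"], e["flow_l_per_s"]] for e in elements]
scaled = StandardScaler().fit_transform(features)

labels = KMeans(n_clusters=min(4, len(features)), n_init=10, random_state=0).fit_predict(scaled)

# Group element ids by cluster so repeated configurations can be recognized later.
clusters = {}
for element, label in zip(elements, labels):
    clusters.setdefault(int(label), []).append(element["element_id"])
print(clusters)
```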
So analyzing the data is our next step. We have it extracted from the model, but we want to know how we're going to use it. This analysis can lead to two main outcomes. The first is the extraction of the branch geometry, how the branches work, which reveals the physical structure and layout of MEP systems.
The second goal is the detailed information about each component, such as materials, flow rates, and sizes. So if you have a project under similar conditions, we can recreate it from scratch, while the template can also be compared for compliance so that standards and quality are maintained. With these two sets of data, we can create the patterns and configuration files that we can reuse in future projects.
Some samples of this: here is an extraction of branch geometry. We're getting different fields, such as the system name, the branch points, the elements that are connected, and the diameter.
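A single branch-geometry record along those lines might look like the sketch below; the values are invented, and only the field names follow the description above.

```python
# Illustrative shape of one branch-geometry record (values are invented).
branch_record = {
    "system_name": "Mechanical Supply Air 01",
    "branch_points": [[12.4, 5.0, 3.2], [12.4, 9.5, 3.2]],
    "connected_elements": [401523, 401547, 401602],
    "diameter_mm": 250,
}
```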
In this particular scenario, it's very clear which objects we are extracting and how that integrates with the workflow. The MEP characteristics go much deeper, because we're getting characteristics such as the size, the material, the flow rate, and the system.
I mean, we can use information that is related to the project, but that wouldn't be the complete picture of what we are looking for. We can use it at a high level, but it will still be part of the extraction. We can get some rules; we can get some project metadata. So if we have similar projects, we can reuse them.
The extraction of fixtures is also a different level, because we can use previous models related to the space to get some idea of how many fixtures are needed for an architectural room name under a specific requirement, and then compare a new requirement against previously worked data.
Doing this process ensures that all fixtures are correctly positioned relative to walls, rooms, floors, beams, columns, and other structural elements. That extracted data is crucial for accurate coordination, clash detection, and ensuring compliance with design standards.
So as we conclude this chapter on AI-automated patterns, it's essential to understand the significant impact these patterns have on MEP modeling. By defining and automating patterns, we are not just streamlining repetitive tasks; we're ensuring that every design adheres to a consistent set of rules and standards. This consistency reduces errors, eases collaboration, and accelerates project timelines.
So we have some simple diagrams. In this case, we're extracting how the data should look: the information on columns, the information on the fixtures we're using, and an overview of the floor compositions. If we relate it more to the types of elements in the MEP installation, we have the piping and which types of fixtures are connected to specific pipes, conduits, or ducts.
We can also look at the relationship of the mechanical items to the branches they are meant to distribute to, and finally, an overview of elements that overlap.
So implementing these automated patterns means that our designs are not only more accurate, but they are also more adaptable. As we move forward to the next chapter, optimizing MEP route planning, we will see how these patterns set the foundation for something more advanced, AI-driven decision-making processes where the true value of automated patterns comes into play, creating intelligent, adaptable solutions that respond to the unique challenges of each project.
So effective MEP route planning starts with a dynamic approach to routing and simulation. By leveraging Dynamo for initial exploration and the Revit API for automation, we can experiment with different configurations and see the impact in real time. This combination provides a powerful toolkit for designing MEP systems that are both efficient and adaptable.
So dynamic routing means we have a structure and we know which solutions we want to find. Within the references I already introduced, there are some explorations working with geometry, intersections, and perpendiculars.
But there's also a set that is useful for this particular case: understanding where those connections are happening. We can tailor it from Dynamo using Dynamo Player, or we can craft it on the Revit API and use it for automations.
The sample we have for this type of setting, which you can review in the handouts, is the figure development. The idea is that things happening within a slope, or at specific positions, respond to a figure. In this case, we have an L at the beginning of the image. Then we have a Y, which is when the run goes down and then connects horizontally. And the last one connects directly at a 45-degree angle.
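To illustrate the figure idea, here is a hedged sketch that picks an L, Y, or 45-degree figure from the offset between two connection points; the threshold and the decision order are assumptions, not the presented tool's logic.

```python
# Hedged sketch: choose a routing figure from the offset between two points.
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float
    z: float

def pick_figure(start: Point, end: Point, max_direct_angle_deg: float = 45.0) -> str:
    horizontal = math.hypot(end.x - start.x, end.y - start.y)
    vertical = abs(end.z - start.z)
    if vertical < 1e-6:
        return "L"    # same elevation: a simple L figure is enough
    angle = math.degrees(math.atan2(vertical, horizontal))
    if angle <= max_direct_angle_deg:
        return "45"   # shallow enough for a direct 45-degree connection
    return "Y"        # drop down first, then connect horizontally

print(pick_figure(Point(0.0, 0.0, 3.0), Point(4.0, 0.0, 2.5)))
```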
With this approach, it's not that we're trying to solve the complete problem; we're trying to capture the requirements we got from the earlier extraction, the types of pipes, conduits, and ducts we want to place, but also the space requirements that lead to a specific solution from the possible ones and from the ones that satisfy compliance.
So with automated workflows, we minimize manual input and focus on critical tasks. From the sets you can see in the handouts, we have tools related to slopes, bends, and things around them. So let's keep reviewing content.
And the more tools we have related to our processes, combined with the data we are saving from our previous packages, that's where automation happens. That's where clash detection becomes something that goes on a daily basis.
Creating databases for configuration recognition is about maintaining consistency. It's often said that you need clear information so you can train models and get good outcomes, right? But if it's not standardized, it will lead to complications, because then you cannot rely on things staying the same way.
By recognizing patterns and configurations from a central database, we can see which elements are not part of the general norm or way of working. And we can ensure that all MEP systems are modeled in a way that aligns with previously successful projects while still accommodating unique project requirements.
In this sample, this is the type of script we would get from an LLM after a review of regulation compliance: it says that a system has a material, and then uses ranges for different types of objects, because what we want the applications or the automation processes to do is that if they need to change size, change material, or change slope to meet compliance, they can do it.
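The shape of such a rule set could resemble the sketch below; the systems, materials, sizes, and slopes are placeholders, not actual code requirements.

```python
# Illustrative compliance rule set and a helper that snaps an element to it.
RULES = {
    "Sanitary": {"material": "PVC", "min_slope": 0.02, "sizes_mm": [50, 75, 100, 150]},
    "Domestic Cold Water": {"material": "Copper", "min_slope": 0.0, "sizes_mm": [15, 20, 25, 32, 50]},
}

def adjust_to_rules(system: str, size_mm: float, slope: float) -> dict:
    """Snap an element's size, material, and slope to its system's rule set."""
    rule = RULES[system]
    # Pick the smallest allowed size that still covers the requested one.
    new_size = next((s for s in rule["sizes_mm"] if s >= size_mm), rule["sizes_mm"][-1])
    return {
        "material": rule["material"],
        "size_mm": new_size,
        "slope": max(slope, rule["min_slope"]),
    }

print(adjust_to_rules("Sanitary", 60, 0.01))  # {'material': 'PVC', 'size_mm': 75, 'slope': 0.02}
```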
As the model is created, automated, and extracted again, the systems get better. That's why compliance and standards go in a loop where one and the other are always connected, and they keep improving.
So within the real-world applications and compliance, by basing MEP modeling on data-driven configurations, we can improve compliance and control. That means we are not doing anything manually. Everything needs to go through a specific application or a Dynamo workflow that connects elements directly. And within those automated processes, we can ensure that elements are placed and configured according to rules and standards.
Within that, we have data-driven compliance: if compliance changes, we can check it against a model, but by checking it against our patterns, the patterns can be reconfigured or changed. It's not a matter of changing the complete model, but just of changing the rules that create the system.
A sample from our Dynamo applications with [INAUDIBLE] as a virtual assistant: data access happens, queries get received, and through trained models, tools, applications, and processes, we get an output that serves as an improved solution.
So let's go to the next chapter and talk about resolving MEP clashes proactively. We have now tackled the extraction of previous projects, relating patterns so they become JSON documents analyzed by LLMs. Then they become automated so they can be recreated.
Still, if there's a need for clash resolution, there are some elements we can consider. If you are not completely familiar with clash detection, it is a fundamental part of any MEP coordination process, and it's often more complicated than it appears.
It has a ripple effect. Traditional tools are great for detecting clashes, but they only handle the first step of a much longer journey, because we still need to check which ones are happening, which ones have a huge impact on the project, and how they can be solved.
Sorting through hundreds of clashes, assigning priorities, and developing effective resolutions can become a time-consuming and challenging task. This complexity is a great opportunity to move toward a more integrated and data-driven approach to managing and resolving clashes efficiently.
From this process, some other references: there are two courses related just to clash detection, Clash Detection using Dynamo and Navisworks. You can use them, and they will be referenced in the handouts.
There's also a lecture given in 2019 about accelerated clash coordination and MEP modeling. There was a lot of content there on how you can solve clashes, but that wasn't with processes using AI or other tools.
And at the end, there's also a reference to our clash detection tool. What it does, basically, is review, through the Revit API, which geometry is being handled, which are the neighbors, and how one affects the other, so that you can have the result directly in your modeling using all the collaborative properties and, with those, get to a solution faster.
I'm just going to review this one. This is the sample: a sample model running a clash detection test. In this case, the different protocols are activated. A protocol that checks MEP against MEP doesn't find any clashes.
But when it goes deeper, checking structural against MEP, it sorts out the objects, reviewing which ones are placed where. Within that analysis, it finds the objects that are colliding, but it also uses a kind of droplet algorithm, so that when many small clashes are close together, they become one bigger interference. That can also help us set priorities, know where things fit better in the space, and see which ones can be improved with some modifications in a holistic approach.
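As a rough reading of that droplet idea, here is a hedged sketch that merges nearby clash points into larger spheres; the merge distance and the radius formula are made-up assumptions, not the tool's actual algorithm.

```python
# Hedged "droplet" sketch: nearby clash points merge into one larger sphere.
import math

def merge_clashes(points, merge_distance=0.5):
    """points: list of (x, y, z). Returns a list of (center, radius, clash_count)."""
    droplets = []  # each droplet stores [sum_x, sum_y, sum_z, count]
    for x, y, z in points:
        for d in droplets:
            cx, cy, cz = d[0] / d[3], d[1] / d[3], d[2] / d[3]
            if math.dist((x, y, z), (cx, cy, cz)) <= merge_distance:
                d[0] += x; d[1] += y; d[2] += z; d[3] += 1
                break
        else:
            droplets.append([x, y, z, 1])
    # Radius grows with the number of clashes absorbed, as a simple priority signal.
    return [((d[0] / d[3], d[1] / d[3], d[2] / d[3]), 0.2 * math.sqrt(d[3]), d[3])
            for d in droplets]

print(merge_clashes([(0, 0, 0), (0.2, 0, 0), (5, 5, 0)]))
```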
So we see the final result: 98 clashes in our review model. For those clashes, there are [? synthetic ?] models that can be posted to the ACC cloud, so they become a linked model to be reviewed, assigned, and retrieved using direct access to APS.
We see from those samples that we have these spheres. They have a general volume, and when they come together, they become bigger because they also carry those properties.
From that first step, we can then go deeper with the integration of MEP requirements. As I was mentioning, the ripple effect of clash detection is one of the things to consider. It will depend on how you have modeled MEP, and if it has already been automated through the different figures, then we need to integrate the data we have from the clash detection with the other patterns.
The good thing is that it's already crafted in that type of process, so the result you receive has elements connected and elements in the right spot. Still, you can do it manually with that analysis in hand.
And the holistic approach needs to relate to that particular sample. We're seeing now that we have a pipe object crossing different framings. So it's no use creating some sort of detour in the piping, because it will still remain clashing at the other spots.
If it goes a little lower, or if some constraints are satisfied, it will also work. But then, how will you maintain that within the patterns being used?
So each clash should not be reviewed in isolation. It has a relationship with the entire system and its surroundings in the environment. A holistic perspective is required to reach deep resolutions and ensure that key decisions support the project objectives.
So we have different levels and different tips for clash detection. The first one is proactive clash detection. Do not let the whole MEP model happen at the end and then just review it. Being proactive about clashes means you use the tool as often as possible while recreating the model, and also review it within collaboration tools.
So while the modeling process is happening, you are solving, you're clearing the scopes, so that it goes as a sequential process. Times are always tight for projects to be delivered, but this helps the process get solved.
It's about shifting from a reactive approach to a proactive one. Keep the elements as clear as possible; that will also lead to a much better result at the end. And with automations, we can ensure that the time spent being proactive eases the coordination.
In this sample, we have visibility of elements that were not considered. That's a framing with some pipes going through it, just as a general sample of something that might not show up on the structural plans but is an architectural requirement. And yes, that one can have a workaround.
It will also depend on the compliance rules. Perhaps it's not allowed to have certain types of elements curving around, or perhaps that would be the easiest solution. That depends on the project.
The second one is assisted clash detection. If, for some reason, proactive clash detection is not good enough, and we are still looking for information to fix, we can have modeling alternatives, avoid conflicts, and maintain the rules, shifting from time-consuming modeling and clash coordination to a different process.
Doing assisted clash detection means that as you deliver the final results, clashes are reviewed and the elements are checked. Then you use your own automated tools again to improve that, so it doesn't keep being an issue on the project.
And the last one is a process of automated clash detection, which applies when the clashes already have modeling solutions. From our patterns of element types and our resolutions of clashes, we need priorities working on a type of VASA solution; if you are not familiar with the term, VASA uses voxels to analyze where you can route around things.
That leads to a type of solution where you are programming: you have spots where you can also create more installations, and therefore you can aim for that to be a solution.
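For readers unfamiliar with voxel analysis, here is a tiny sketch of the idea, assuming NumPy; the grid bounds, cell size, and example bounding box are arbitrary, and this is only a simplified picture of what a VASA-style analysis does.

```python
# Minimal voxel-occupancy sketch of the VASA-style idea (assumptions throughout).
import numpy as np

GRID = np.zeros((40, 40, 20), dtype=bool)   # False = free space
CELL = 0.25                                  # metres per voxel

def occupy(bbox_min, bbox_max):
    """Mark the voxels covered by an element's bounding box as occupied."""
    lo = (np.array(bbox_min) / CELL).astype(int)
    hi = np.ceil(np.array(bbox_max) / CELL).astype(int)
    GRID[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True

occupy((1.0, 1.0, 2.5), (6.0, 1.5, 3.0))     # e.g. a beam crossing the zone
print(f"Free space remaining: {1.0 - GRID.mean():.1%}")
```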
I will say that with the current requirements and specifics of AI, we are not yet at a phase where this can be solved completely. But if you are up to it, it's something that might happen very soon.
So let's go into our fourth objective, which is related to enhancing workflow coordination. Integrating AI into our workflows is more than just automating repetitive tasks. I will say, from my recommendations on the AI side, it requires being resilient every day.
Working with AI, the more you push it, the better you get at making changes and modifications, improving, and finding results you normally wouldn't find naturally. It's about transforming the way we respond to changes, ensure compliance, and enhance our analytical processes. With AI-driven coordination, we can quickly adapt to changes in the design or regulations, streamline our response processes, and maintain a high standard of quality throughout the project lifecycle.
The main points about enhancing workflows are, first, improved response times: AI tools enable rapid adjustments to design changes or regulatory updates, reducing delays. This can be achieved by digesting the agreed-upon compliance requirements and bylaws, and from that digestion getting to actionable items that affect the model creation.
Then we have enhanced compliance, which is also part of the services that are required. Whether it's just compliance with bylaws or compliance with sustainability requirements, AI improves those workflows by having all the information digested properly.
Then streamlined processes: analytical tools that provide real-time insights, allowing teams to make data-driven decisions efficiently. There's another talk about unlocking APS potential, and that's related to those processes being improved.
Moving on from that, standardization and automation are the cornerstones of an efficient and reliable workflow. It's said that when we try to automate or standardize our processes, we lose part of what makes each project unique. But it's also part of making things repeatable and manageable.
By automating repetitive tasks and enforcing best practices across all projects, we're eliminating risks and inconsistencies to reduce the modeling time to only what's necessary. This structured approach not only ensures high-quality outputs, but also makes our processes more predictable and scalable.
Among the benefits of standardization and automation is reduced modeling time, because if you have families and elements already in a database that can be reused and have their own outputs, then they can also have patterns; the patterns can be read and, from that reading, automated, so that tools can work on them with AI assistance and other things around it.
The other thing is consistency across projects. Any type of requirement or automation can be reused to ensure that every project follows the same high-quality processes. And the last one is scalability, because automated processes can handle larger projects without compromising quality or efficiency.
Now, talking about optimizing routing for efficiency and performance: since we have already freed up some time from the MEP routing, the goal is to optimize the use of materials and resources while ensuring the best possible performance.
AI tools can analyze multiple factors, such as system requirements, space constraints, and material properties, and can also challenge the use of different materials, because we want more possible results so that we can change things and make them more efficient, and recommend the most efficient routing configurations.
This approach not only improves performance but also minimizes rework and material waste. We're not talking just about doing the model or solving the engineering, but also about doing it better each time, because we're enabling patterns to be saved.
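A toy way to picture "multiple factors feeding one recommendation" is the scoring sketch below; the factors and weights are invented purely for illustration, not an actual optimization model.

```python
# Toy scoring sketch for comparing candidate routings (weights are invented).
def route_score(length_m, bend_count, material_cost_per_m, clearance_ok):
    if not clearance_ok:
        return float("inf")              # violates space constraints: reject outright
    return length_m * material_cost_per_m + 5.0 * bend_count

candidates = [
    {"name": "along corridor", "score": route_score(22.0, 4, 12.0, True)},
    {"name": "through shaft", "score": route_score(18.0, 6, 12.0, True)},
]
best = min(candidates, key=lambda c: c["score"])
print(best["name"], best["score"])       # the cheaper of the two candidate routes
```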
Among the key points of optimizing routing for efficiency and performance, we have the efficient use of resources, a quality that is not only part of a compliance check but also of the sustainability side, because with it, we are using the best way of working for each project.
Then performance optimizations: a very common example is when families are not standardized, when they are crafted ad hoc or don't follow a way of working suited to specific locations or project types; then performance can slow down very quickly, because polygons or parameters that are not used are not part of the efficient machinery we want to build.
And the last one, reducing rework: by calculating the best approach from the start, we avoid costly changes and disruptions later in the project. What we want is less use of solutions that might not be the best approach, but also to test them more times.
The last key points I'm going to focus on are about leveraging AI ecosystems for continuous improvement. An AI ecosystem is about more than just solving the current challenges; it's about having continuous learning and improvement processes.
Within those references, my best recommendation is that when we have solutions from any type of model, we encourage that behavior so that it keeps training and keeps getting better. So if the patterns are being used and the tools are being applied, how do they become better?
How much time was saved? That also implies modifications to the AI ecosystem. By analyzing data from past projects, including families, materials, and system performance, we can develop deeper insights and refine our processes. This ensures that each new project benefits from the lessons learned previously, resulting in more precise designs, better performance, and a shorter research and development cycle.
Deep data analysis: AI tools analyze past project data to identify trends and improve future workflows. Within the reach of AI, we can have documents we have delivered analyzed and compared, something we would normally not do.
With that, we can have deep data analysis of what the trends are, which things are taking more time, how we can improve them, and which key points reveal the low-hanging fruit and, therefore, specific types of automation.
Then we have informed decision-making. The better we know what works, the better decisions we can make, with insights from previous projects guiding design and material choices and enhancing overall project quality.
And the last one is accelerated research and development. Continuous learning from project data reduces the time needed to develop new methods and solutions. Within the branches of a low-hanging-fruit tree, it's easier to jump between different branches than to chase something very specific, a kind of laser beam that solves a lot of things, without knowing whether it adapts to our workflows.
With all of this, we have covered a lot of ground today: fundamentals of AI, using it in MEP modeling, and some applications. And I will say that it's a continuous process. It's a way of working. It's not just a matter of using a tool.
It's actually about how we get involved, changing from a project perspective to a programming one, and from programming to an analysis side, so that analysis is blended with how we use better tools.
For the final insights, I'm just going to go over some general points. It's time for questions, and the floor will be open. The live sessions will also be for that kind of discussion. Still, if you have any other questions and you're watching this recording, you can just contact me and ask.
So, the references we have and some general conclusions: we've seen how to transform workflows with AI and how to enhance coordination and compliance. I will say that I didn't want to get too technical on details in the presentation itself, but there's also the handout that can be used for those purposes, and I'm wide open to improving on those settings.
And for discussing any other challenges: AI can be daunting to integrate into workflows, but bit by bit, it can become a very useful tool. It doesn't necessarily need to be from a specific vendor; it can also be very good to get used to a way of working with different types of language models and different types of tools.
From my side, that is mostly all of this talk. I hope you have enjoyed it. There is the contact information as well. More than happy to help, and see you around. I hope you liked this presentation here at AU.