Description
Key Learnings
- We will show examples of how generative AI can be infused into Autodesk Fusion to enhance productivity.
Speakers
- Madhu Pai, Ph.D., is a Principal Partner Solutions Architect at Amazon Web Services (AWS). He also serves as a Global Partner Tech Lead for Manufacturing at AWS. In this role, he leads bold initiatives that help drive adoption of cloud-hosted solutions by automotive and manufacturing customers. His purview encompasses the entire manufacturing value chain, specifically Product Design and Engineering, Smart Manufacturing, and Smart Products and Services, among other topics. He has over 20 years of experience in automotive, aerospace, energy, engineering & construction, and heavy industries. He currently explores how edge & cloud computing, machine learning & artificial intelligence (including generative AI), computer-aided engineering, and the industrial internet of things can help address customers' highest-priority business needs. He serves as a trusted advisor and industry specialist for manufacturing partners and customers of all sizes on AWS.
MADHU PAI: Welcome. My name is Madhu Pai. I'm the Global Partner Tech Lead for Manufacturing at AWS. I have my colleague, Sahil Saini, also joining me on this talk. The topic for the call today is Revolutionizing the Product Development Life Cycle with Generative AI.
As we all know, generative AI is creating waves across multiple industries. And customers are eager to understand how generative AI can help them in their businesses, in their daily activities, and so on. Here we hope to dive into how generative AI can impact the product development life cycle.
Now, the agenda for our talk today is as follows. We will first start off with evaluating the current state of the product development life cycle, the promise of generative AI. How can you build generative AI applications on AWS? Then we will show you a hot off-the-press intelligent assistant that our partner CCTech has built for Autodesk Fusion. And then we will conclude with what is in store for generative AI in this space.
Let's begin by taking a look at the typical product development life cycle in a traditional manufacturing industry. It can be automotive as well. So the first stage is concept and planning. So this is the initial stage where ideas are born and refined. It involves market research and analysis. It also involves defining product requirements, performing feasibility studies, initial concept development.
And during this phase, the teams brainstorm ideas, assess market needs, and lay the groundwork for the entire project. This stage typically takes about 15% to 20% of the total time. Then comes design and engineering. This is the longest phase of the development cycle, where concepts turn into detailed designs. It includes creating detailed product specifications, developing CAD models, engineering analysis and simulations, and material selection.
Here, engineers and designers work closely to create a product that meets all the requirements while being feasible to manufacture. This stage takes about 30% to 40% of the total time. Then comes prototyping and testing. This is a crucial stage that involves building functional prototypes, conducting various tests like performance, safety, and durability, and iterating on the design based on the test results. Multiple rounds of prototyping and testing ensure that the product meets all of the specifications and quality standards.
So this takes about 25% to 30% of the total time. Then comes manufacturing preparation, which takes about 10% to 15%, sometimes even up to 20% to 25% of the total time. This is the final stage of development, focusing on getting ready for full-scale production. So it involves designing and setting up the production lines, creating tooling and fixtures, establishing quality control processes, training production staff, and so on.
And eventually, you have production. This is not typically included as part of the development timeline. Production is the culmination of all of the previous efforts. The initial production runs or the pilot production may overlap with the final stages of development, but full-scale production continues as long as the product is in demand.
Now, let's examine the state of the product development life cycle from a different lens. We use multiple tools and drawings throughout the product development life cycle, and in fact there is still a heavy reliance on 2D drawings and PDFs. We find that many companies still depend on these traditional 2D drawings and PDF documents for design, communication, and so on.
There is a lack of digitization of legacy designs. Older product designs often exist only in paper form and outdated file formats. Model creation and analysis are also time-consuming. Creating detailed 3D models and performing complex analyses can be extremely time-intensive, often leading to bottlenecks in the development process.
Also, designs are becoming more and more complex. As products become more sophisticated, designs grow increasingly complex and require more time and expertise to develop and validate. There are also integration challenges with manufacturing itself. There is sometimes a disconnect between the design and manufacturing teams, and this leads to designs that are difficult or costly to produce.
There are time-to-market pressures. Companies face intense pressure to reduce development times and get products to market faster, sometimes at the expense of thorough design and testing. There is also the knowledge transfer issue. As experienced designers retire or leave, companies struggle to retain and transfer their valuable knowledge and expertise to newer team members.
There is also limited design automation. And when it comes to collaborative design, there are hurdles: global teams face challenges in effectively collaborating across different time zones and locations. Then there are constraints on optimization. Traditional design methods may not fully leverage advanced optimization techniques, potentially missing opportunities for performance improvements.
So there are multiple challenges that we encounter in the product development life cycle, and there is a need for an integrated, digital approach to product development. We do see such approaches today, but there are opportunities for improvement. Now, what are some of these opportunities for infusing AI, artificial intelligence, into the product development life cycle?
First, in transforming engineering workflows. AI can automate tedious and repetitive modeling tasks, freeing up engineers to focus on higher-value work. Rapid prototyping and iteration enabled by AI can lead to faster design cycles and more experimentation. Additionally, AI-driven knowledge management can facilitate sharing best practices across teams and projects, preventing reinvention of the wheel.
Second is complex assembly modeling. So AI can intelligently break down intricate assembly requests into manageable subtasks, improving efficiency, and reducing errors. AI-driven optimization can ensure parts are designed for optimal manufacturability, performance, and cost-effectiveness. Smart assembly planning and constraint management are also other features of an AI-driven process.
Third, in enhancing human collaboration. AI can act as a force multiplier for human engineers, providing intelligent assistance and recommendations. AI-generated design alternatives can spark new ideas and augment human creativity. And by offloading routine tasks to AI, engineers can focus their cognitive efforts on more complex and more strategic problems.
Finally, in driving efficiency. AI can enable exploration of vast design spaces that would be impractical for humans alone, and that is something Autodesk clearly does today, leading to novel and innovative solutions. Faster iteration and testing cycles driven by AI can accelerate the refinement of design concepts. And AI-driven optimization can improve product quality by identifying and addressing potential issues early in the development process, reducing costly rework and delays.
So these are all the opportunities that we have for infusing artificial intelligence into the product development life cycle. In the remaining slides, we will explore the role that generative AI can have in the product development life cycle. But let's first provide the audience with a quick primer on what generative AI is all about. I'll have my colleague Sahil take it from here.
SAHIL SAINI: Thank you, Madhu, for giving us an overview of how generative AI can revolutionize product lifecycle management. Hello, folks. I'm Sahil Saini. I am a Senior Generative AI Solutions Architect at AWS, and I work very closely with strategic industries. Today, I will talk about the promise of generative AI. But before we get into the promise and how generative AI can revolutionize this industry, let's talk about what generative AI is, to set the stage.
Generative AI, as we have seen recently, has taken the world by storm because of the capabilities it offers. Most of us have already explored ChatGPT or Bedrock, a couple of the applications through which you can access generative AI today. But what can it do? It can create new content. It can create new ideas. It can introspect your conversations.
It can create new images and videos. And all of this is powered by large language models. Generative AI is powered by large language models that are pre-trained on a vast corpus of public data, which enables these models to handle a wide range of use cases. And it is important to note that, at its core, generative AI leverages the latest advancements in machine learning.
It is not magic, but the latest step in a technology that has been evolving. What makes foundation models special is that they can perform so many more tasks, because they contain such a large number of parameters that they are capable of learning complex concepts. And with pre-training exposure to internet-scale data in all its various formats and patterns, foundation models can apply their knowledge to a wide range of contexts. That opens up avenues and opportunities to explore different use cases with generative AI.
Moving on, how is generative AI different from a machine learning model? The size and general-purpose nature of foundation models make them different from traditional ML models, which typically perform a specific task like analyzing text or sentiment, classifying images, or forecasting trends. To achieve each task, and for each ML model, our customers need to gather labeled data, train a model, and deploy that model.
But with foundation models, instead of gathering labeled data and training multiple models for specific tasks, you can use a single pre-trained foundation model and adapt it to several tasks. You can provide unlabeled data and let the foundation model do the magic, because of all the training that has happened in the past. Foundation models can also be customized to perform domain-specific functions that are differentiating to your business. Beyond the training on widely available public data, a foundation model can be fine-tuned further on your domain-specific data rather than being trained from scratch. And that's a major differentiating factor between traditional ML models and the foundation models that power generative AI.
There are a couple of types of foundation models and capabilities within generative AI today. A few that we have categorized: text-to-text, where you give text as input and the response from the model is also in the text modality. You can also use text-to-embedding, which is commonly used in chatbot application patterns. With text-to-embedding, you create a numerical representation of the natural-language text that the user enters, so that you can find semantically similar content for the model's response.
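To make the text-to-embedding idea concrete, here is a minimal sketch in Python using the Bedrock runtime via boto3. The model ID, the region, and the sample documents are assumptions for illustration; any embedding model enabled in your account would work the same way.

```python
# Minimal sketch: text-to-embedding semantic search with Amazon Bedrock (assumed model ID).
import json
import math
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

def embed(text: str) -> list:
    # Titan text embeddings; swap in whichever embedding model your account has access to.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # assumption; check your model access
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical knowledge-base snippets about a butterfly valve.
documents = [
    "Butterfly valve shaft diameter and disc thickness specifications.",
    "Maintenance schedule for quarter-turn valves in the field.",
]
query = "What are the key dimensions of the butterfly valve?"

query_vec = embed(query)
best = max(documents, key=lambda doc: cosine(query_vec, embed(doc)))
print("Most relevant snippet:", best)
```

In a real chatbot pattern, the best-matching snippets would then be passed to a text-to-text model as grounding context.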
And finally, multi-modality, where you enable users to talk to the model in different data formats, be it audio, video, or text, and you can have the model respond in a different data modality. Again, these are the different advancements in generative AI, and they are opening up capabilities for industries to transform their business.
Generative AI, overall, has immense potential to create significant business value across various aspects of your organization's operations. First, when we categorize how generative AI is moving industries, the first bucket is new experiences. Generative AI can enable innovative, engaging ways to interact with your employees, your customers, and your contractors. This could involve generating personalized content, tailored conversations, or immersive virtual environments, which lead to a much-enhanced user experience.
Secondly, on productivity: generative AI boosts your productivity. It has the potential to radically improve your efficiency across all lines of business. By automating content generation, streamlining workflows, and augmenting human capabilities, generative AI can significantly reduce the time and effort spent on repetitive tasks.
Third, on extracting insights. Generative AI can unlock valuable information and clear answers from an organization's vast corpus of data. All the data sitting in your data stores can be leveraged with generative AI to derive insights. By intelligently processing and analyzing this information, you can surface actionable insights that enable faster, better decision-making, giving an organization a competitive edge.
Finally, in fostering creativity, generative AI can serve as a powerful tool for creating new content and ideas, including conversations, stories, and even model images, which we will see in our demo today. Coming to generative AI in the workplace. We talked about what generative AI is and what its benefits are. But what is the actual impact? What we have seen is that generative AI can have an impact in your workplace across productivity, creativity, and quality. Starting with productivity.
Research from the American Productivity and Quality Center indicates that employees spend a significant portion of their day, around 1.7 to 2.8 hours, on repetitive tasks. However, generative AI tools can greatly reduce the time and effort spent on these tasks, by a factor of 2 to 28 times. Similarly, on creativity, the Wharton Business School experimented with MBA students using generative AI to build their case studies and business cases, and they were far more productive than students who were not using it.
And finally, in terms of quality, MIT has also released a survey in which 88% of respondents believe that generative AI is not just helping them accomplish more work, but also helping them create higher-quality work as well. Moving ahead. When we talk about manufacturing, how does this relate? There are a couple of benefits we talked about before: it improves your efficiency and productivity.
How can it do that? The capabilities you can imagine: it can assist your HR, legal, design, procurement, and contracts teams with document generation, where much of the data today is unstructured, and you can use generative AI tools to derive insights from it. It can reduce time and cost for production, and agents and search for plant maintenance can be very well taken care of by using agents built on generative AI.
And finally, you can grow your revenue with product- and service-level differentiation. You must have witnessed a similar trend within Autodesk as well: by embedding AI, you can see that differentiation in their product segment compared to the competitor landscape.
Now, the big question is how to get started and how to actually build a generative AI tool set for your business use case. You can use Amazon Web Services to start building your generative AI use case. Within Amazon, we work backwards from our customers, so we offer a stack of services that is considerate of your experience, how much development effort you want to invest, and how you want to get started.
If you look at our overall services and offerings, we have categorized them into three big buckets. The bottom layer is the infrastructure for training and inference. At Amazon, we offer the latest GPU chips, the latest compute, the fastest storage, and a managed machine learning platform called SageMaker to cover your whole machine learning lifecycle and to host and run inference on the foundation models that power your generative AI applications.
This stack of services is specifically meant for users who have a good understanding of the underlying hardware and infrastructure, which model to use, and how to deploy it efficiently. If you want a more abstracted experience of building your generative AI solution, then we offer the middle layer, which is Amazon Bedrock.
Moving on, let me go back to the lower layer of the stack, where we talk about SageMaker. SageMaker is our unified platform that gives you an end-to-end service, a game changer for your data scientists and ML engineers, for preparing data sets for AI, running experiments, training, and running inference on your models. SageMaker also offers a wide selection of foundation models and access to the latest publicly available foundation models for faster time to market. And that is one of the trends that we see: organizations are looking at small custom models rather than big, billion-parameter models for their use cases.
We are also innovating at the silicon level as well, where we offer custom chips like Trainium and Inferentia for better cost and performance when hosting and deploying your foundation models. So we offer Trainium, Inferentia, and Graviton. Trainium and Inferentia are our custom machine learning chips, purpose-built for training, fine-tuning, and deploying foundation models.
We also offer Graviton and the Neuron SDK in our underlying stack, which can get you started with all the hardware required to power your solution. With that, the second layer of the service is Amazon Bedrock. Amazon Bedrock is our fully managed service that offers a choice of high-performing foundation models from leading AI companies like Anthropic, Cohere, Meta, and Mistral. So with Amazon Bedrock, you are not confined to one model provider; you can access high-performing models from the leading model providers from your account, with all the safety and privacy constructs that AWS offers.
With Amazon Bedrock's comprehensive capabilities, you can easily experiment with a wide variety of top foundation models for your use case. You can customize them privately with your data using techniques such as fine-tuning or retrieval-augmented generation. You can also create managed agents, and agentic workflows are where the whole generative AI space is marching toward; we'll talk about that at the end.
But you can have all of this agentic experience as a managed experience, and all without writing a single line of code. It's a managed service where you can get started without a very code-intensive experience. Since Amazon Bedrock is serverless, you don't have to manage any underlying infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services that you are already familiar with.
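As a minimal sketch of what calling a Bedrock-hosted model from your own application can look like, here is a boto3 example using the Converse API. The model ID, region, and prompt are assumptions for illustration; substitute whichever model is enabled in your account.

```python
# Minimal sketch: calling a foundation model on Amazon Bedrock via the Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption; use a model enabled in your account
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key stages of a product development life cycle."}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```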
Bedrock also offers Bedrock Knowledge Bases, a managed solution for organizations to store their organization-wide corporate data and derive insights from it. It is also a managed service: without writing code, you can just point Bedrock to the data set or data store where your data resides, and Bedrock takes care of the rest of the machine learning science behind it, creating the embeddings and the semantic search.
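Here is a minimal sketch of querying such a knowledge base with retrieval-augmented generation; the knowledge base ID and model ARN below are placeholders you would replace with resources created in your own account.

```python
# Minimal sketch: retrieval-augmented generation against a Bedrock knowledge base.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

response = agent_runtime.retrieve_and_generate(
    input={"text": "Which standards apply to butterfly valve shaft dimensions?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder
        },
    },
)

print(response["output"]["text"])  # grounded answer; supporting passages are in response["citations"]
```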
And finally, when we talk about Bedrock, we talk about responsible AI, and this construct extends to SageMaker and the underlying platform as well. Bedrock also offers guardrails as a service. Guardrails enable you to implement safeguards tailored to your application requirements and aligned with the responsible AI policies that you want to set.
In a nutshell, what does it do? A guardrail, as the name states, is your front door before interacting with your model or your company data. You have a lot of controls, from configuring denied topics (topics you don't want discussed by the chatbot or surfaced from the documentation), to redacting your personally identifiable information, your PII data, to doing grounding checks. You have all of these filters, levers, and capabilities through guardrails. They are offered by Bedrock, but they can be used in front of SageMaker as well.
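As a small sketch, a guardrail created in your account can be attached to the same Converse call shown earlier; the guardrail identifier and version below are placeholders.

```python
# Minimal sketch: attaching a Bedrock guardrail to a model invocation.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption
    messages=[{"role": "user", "content": [{"text": "Describe the valve's pressure rating."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-EXAMPLE123",  # placeholder guardrail ID from your account
        "guardrailVersion": "1",
    },
)

print(response["output"]["message"]["content"][0]["text"])
```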
Finally, the top layer of the services is Amazon Q. We have Bedrock, where you build solutions, and we have the underlying infrastructure. But let's say we are also targeting personas like your product managers, who don't want to invest too much time and just want to get started with this experience. That's why we built Amazon Q.
Amazon Q is our generative AI-powered assistant that helps make your organization's data more accessible without writing code. It can write code, answer questions, generate content, solve problems, and even manage your AWS services. Q is built with security and privacy in mind from the start, making it easier for organizations to use generative AI safely. It maintains the access controls on your data.
And if your users want to access Q outside the AWS ecosystem, they can absolutely do that. For business users who access Q via a subscription, no user data is used to improve the underlying foundation models for anyone but you. Q already integrates with the majority of common data sources, from Snowflake to S3 buckets to ServiceNow. For most of the places where customers store their data, there is already a native integration, so you can bring that data in and have this experience ready for your users.
Amazon Q currently comes in five different flavors. We talked about Q Business, which integrates with your data sources and gives you a chatbot experience over whatever data you store. Q also integrates well with QuickSight, our business intelligence solution, comparable to Power BI. With the QuickSight Q integration, you can keep all your data in a repository and use QuickSight for visualization and analysis, but you can take the experience further for your end users: they can write a natural-language query, and it will generate the whole dashboard and visualization for them. That really improves your overall product lifecycle management.
We have Q Developer, which helps users plan, generate code, get code recommendations, and even interact with Autodesk Platform Services (APS) and other APIs, so you can use Q Developer to ramp up your development cycle faster. And similarly, we have Q in Connect, which is mainly for call center and supply chain use cases.
With that, the next question is: what does the future of generative AI hold for us? We think the industry is marching toward generative AI in four categories. We think the future is in agents. What are agents? Agents can orchestrate complex tasks, call the necessary APIs, leverage functions, and determine next steps, enabling end-to-end workflow automation. And that is what is needed right now.
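As a minimal sketch of what invoking a managed agent can look like, here is a call to a Bedrock agent using boto3; the agent ID, alias ID, and task are placeholders for resources you would create and configure in your own account.

```python
# Minimal sketch: invoking an Amazon Bedrock agent that orchestrates a multi-step task.
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")  # assumed region

response = agent_runtime.invoke_agent(
    agentId="AGENT123456",          # placeholder agent ID
    agentAliasId="ALIAS123456",     # placeholder alias ID
    sessionId=str(uuid.uuid4()),    # a new conversation session
    inputText="Create a manufacturing schedule for the butterfly valve and list open issues.",
)

# The agent streams its answer back as chunks.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)
```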
Second, the trend we see is multi-modality. Accepting multiple data types like structured data, audio, video, and text, which allows users to effectively communicate intent and data to the model, is the future of generative AI. It will unlock assistants and workflow automation with any kind of data type, and that may extend to 2D and 3D model types as well.
Multiple models. Customers have realized that rather than using one single model to do everything, why not use smaller, custom domain models to take care of domain-specific tasks? That is where the trend is heading, and that is why SageMaker and our underlying services are empowering such use cases.
And finally, AI policies and standards. Effective risk-based frameworks and guardrails for AI are essential to protect civil rights while allowing all the innovation that is happening. Collaboration with policymakers is crucial for safe, transparent, and responsible generative AI development.
With that, I'll hand it back over to Madhu so that he can take us through how we can maximize the impact of generative AI, and let's see something in action: how generative AI can be used with Fusion and, moreover, with Autodesk Construction Cloud. With that, Madhu, back to you.
MADHU PAI: Thanks, Sahil, for going through the services that we offer from a generative AI perspective. Many of these services will be used in the latter half of the talk today, where we will show something really exciting that we have done with Autodesk Fusion. But before we go there, I'd like to talk about how you can maximize the impact of generative AI.
Now, in 2023, what we saw were primarily point solutions. So, typically, these were around text generation, around image generation, about code generation, data analysis, and visualization. Sometimes even creative ideation and brainstorming. Also generation of personalized content and recommendations.
Now, these are still valid use cases, and they are still very important. But the true power and true impact of generative AI, from our perspective, will actually come from multi-objective solutions. How can you combine multiple AI models to address multiple different objectives?
Can you think about complex, multifaceted goals at scale? Can you look at workflows and try to automate some of them? In fact, that is something we'll be showing a little later today. Can you enhance human-AI collaboration? This is, again, a theme for what we'll be showing later today. And there is adapting to dynamic environments, and also integrating with disparate systems and processes.
So while we have been looking at these point solutions from the perspective of generative AI thus far, we believe that the maximum impact of generative AI will actually come when we expand our horizon of what is possible with generative AI. Now, what we will do in the next few slides is to go over this very interesting generative AI assistant that we have built to address certain key aspects of the product development life cycle. Now, our story begins with a company that has designed a butterfly valve.
So, however, before this design can be realized, a series of intricate steps must be executed. First, the concept design must be evaluated and optimized to ensure its feasibility and performance. Regulatory compliance checks are also crucial, ensuring that the design adheres to industry standards and codes. Process optimization is key to maximizing efficiency and minimizing waste.
The manufacturing workflow needs to be evaluated. And you have to identify bottlenecks and perhaps even suggest improvements to enhance productivity. You might want to create the toolpath for manufacture, and generate a detailed manufacturing schedule, and also help manufacturers coordinate their efforts seamlessly minimizing delays and maximizing output.
You might also want to create technical documentation around the product that you have just designed. And finally, you might want to integrate this within the PLM system or product lifecycle management system. In this case, it could be Autodesk Fusion Manage, or it could be other PLM systems as well.
Note that this scenario represents just one example of how generative AI can revolutionize the manufacturing process. So imagine the potential when this technology is applied to various other situations from onboarding new engineers and providing personalized training, to optimizing supply chain logistics, or even exploring innovative product designs through creative ideation.
Now, the intelligent assistant. And I have to remind you that this is an add-in. It is not a feature that's available within Autodesk Fusion. So I would like to make the disclaimer right here. It's an intelligent assistant that we can potentially use for product development.
Now, the way that this intelligent assistant works is that it uses information about the CAD model from within the Autodesk Fusion interface. It can leverage organizational knowledge bases. So if you have a knowledge base that is outside of Autodesk Fusion, how can you leverage that? It also leverages Fusion API documentation. And this is one of the most powerful features of this. It can call Fusion APIs to actually do some of the things that I will show you.
Also, it can interact or integrate with Fusion Manage PLM. And above all, it leverages Amazon Bedrock as its foundational capability to call foundation models and use them within the context of the generative AI intelligent assistant.
Again, this is not officially endorsed by Autodesk, and it will become available as an add-in in the Autodesk marketplace from our partner CCTech. What our partner has done is integrate multiple aspects of both the front end and the back end, where they've leveraged several AWS services, such as AWS Lambda, Amazon Bedrock, Amazon S3, and knowledge bases.
So, typically, what happens is that a user submits a query through the plugin front end. And I will show that plugin front end in a bit. The query is then sent to the plugin back end, which can be a Python function running in AWS Lambda. This handles the intent detection, so you can have a natural language query that you enter in the front end.
And then the Lambda function behind the scenes understands what the intent of that query is. The plugin back end then requests relevant model data from the front end. The model data, in the form of, say, JSON or PNG if needed, is sent back to the back end. The assistant back end then retrieves any additional model data from Amazon S3, or, if it's integrated with Fusion Manage, it can pull data from there.
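As a minimal sketch of the front-end side of such a flow, a Fusion add-in written in Python could gather a small JSON summary of the active design and post it, along with the user's question, to a backend HTTPS endpoint. The endpoint URL and the exact fields are illustrative assumptions, not CCTech's actual implementation.

```python
# Minimal sketch (hypothetical): a Fusion add-in collecting model context and sending it to a backend.
import json
import urllib.request
import adsk.core
import adsk.fusion

def send_query_to_backend(question: str) -> str:
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)

    # Collect a lightweight summary of the active design to use as model context.
    context = {
        "design_name": design.rootComponent.name,
        "parameters": {p.name: p.expression for p in design.userParameters},
        "component_count": design.rootComponent.allOccurrences.count,
    }

    payload = json.dumps({"question": question, "model_context": context}).encode("utf-8")
    request = urllib.request.Request(
        "https://example.com/assistant",  # placeholder backend endpoint (e.g., API Gateway + Lambda)
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["answer"]
```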
The plugin back end then sends this model data and the intent to Amazon Bedrock. Amazon Bedrock, along with an appropriate LLM, a large language model, will refer to any organizational databases, such as a knowledge base your company has, any manuals, for instance, and retrieve that information from the knowledge base. Then Amazon Bedrock formulates an appropriate response to the user query in the context of the knowledge it has acquired, and this is sent back to the plugin front end, which displays it to the user.
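On the back end, a minimal sketch of the Lambda function described above might detect the intent and then forward the query plus model context to Bedrock; the model ID and the intent labels here are illustrative assumptions, not the partner's actual code.

```python
# Minimal sketch (hypothetical): the plugin back end as an AWS Lambda function calling Amazon Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

INTENTS = ["describe_model", "list_dimensions", "check_compliance", "create_bom"]  # illustrative intent set

def ask_model(prompt: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption; any Bedrock chat model works
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

def lambda_handler(event, context):
    body = json.loads(event["body"])
    question = body["question"]
    model_context = body.get("model_context", {})

    # Step 1: ask the model to classify the user's intent.
    intent = ask_model(f"Classify this request into one of {INTENTS}: {question}").strip()

    # Step 2: answer the question grounded in the CAD model context (and, optionally, org knowledge bases).
    answer = ask_model(
        f"Intent: {intent}\nCAD model data: {json.dumps(model_context)}\nUser question: {question}"
    )
    return {"statusCode": 200, "body": json.dumps({"intent": intent, "answer": answer})}
```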
So the integration of these services allows for a scalable, serverless, managed infrastructure to handle the decision flow and the processing of user queries. Now, let us look at this in action. There are a few different steps that I enumerated earlier. One is design evaluation. In the design evaluation phase, let's say you have a butterfly valve that's been designed, and I just want to describe the valve.
So maybe it's a user who wants to understand what the valve is made of, and so on, and so forth. So I'm just asking: can you describe the valve for me? And it provides a detailed description of the various parts, the materials being used for the butterfly valve, some of the key dimensions, what type of valve it is, whether it is closed on one side, and so on, and so forth.
So it provides a nice description of the valve if needed. And all of this is generated on the fly; nobody is typing this. It is being generated on the fly by the sequence of processes that we just talked about.
In the next case, we want to list the key dimensions of the butterfly valve. And again, very similar to what we just saw, you can go in and actually get all the key dimensions. Now, you might want to understand what the temperature and pressure rating of this butterfly valve is. In this case, the assistant looks at the valve, tries to figure out some of its dimensions, and comes up with an approximate temperature and pressure rating.
Now, this can be refined; this is not the end-all for this capability. But potentially, if you already have a catalog of butterfly valves that your company has made, then you can refer back to that catalog and come up with a temperature and pressure rating that might be appropriate for this particular butterfly valve. So, again, this is a way for the intelligent assistant to look into your knowledge bases and come up with a reasonable answer to the question being asked.
Optimization. So let's say you want to optimize the design. Now, in this case, we are using a very simplistic view of what optimization really constitutes, but you can potentially do an optimization analysis of the butterfly valve or some of its components. In this case, what we're asking is: what will happen if we change the shaft diameter by a small amount? What might happen to the temperature and pressure rating?
Now, here, it leverages both the capabilities of the large language model, as well as some knowledge of the model itself. And it will come up with a reasonable answer to what might happen if you change the shaft diameter by a certain amount. But here, again, potentially, you can invoke an API from behind the scenes, and do an actual optimization of the model itself. So even if the user is not an expert in design optimization, you can potentially orchestrate multiple processes behind the scenes in order to do the design optimization. So that's the power of the generative AI assistant.
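As an illustration of invoking the Fusion API behind the scenes, here is a minimal sketch of programmatically changing a model parameter such as the shaft diameter, for example as one step of a scripted design study; the parameter name is an assumption about how the valve model was authored.

```python
# Minimal sketch (hypothetical): updating a Fusion user parameter, e.g. as part of a design study.
import adsk.core
import adsk.fusion

def set_shaft_diameter(new_expression: str) -> None:
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)

    # "shaft_diameter" is an assumed user parameter name defined in the butterfly valve model.
    param = design.userParameters.itemByName("shaft_diameter")
    if param is None:
        raise ValueError("Parameter 'shaft_diameter' not found in the active design.")
    param.expression = new_expression  # e.g. "12 mm"; the model regenerates with the new value

# Example usage: set_shaft_diameter("12 mm")
```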
Now, again, this is another example of looking at optimization from a different perspective: say, if I want to double the pressure, what is going to happen? Again, the large language model will go back, look at the database, and figure out the potential problems that might occur if you increase the pressure, and so you might want to rethink the design a little bit.
Next comes code compliance. So code compliance is actually a very important aspect of design. And especially in products such as maybe valves or any products that might impact human safety. And so it is good for the designer to be aware of what code compliance issues might come up in the design of a particular product.
Now, in this case, what we want to do is check whether the valve standards are being met. As you just saw, a PDF document has been loaded into the assistant, and based upon the query, it will actually look through the PDF document. That's another capability of generative AI: it can look through the document, identify the appropriate sections of that PDF, and then come up with an explanation of what code compliance concerns there might be with the valve.
Now, in a future release, what we can potentially do is look at certain key dimensions of the butterfly valve, flag some of those dimensions, and say: this particular dimension is out of compliance, you will need to change this. So this is, again, a very simplistic view of the example, but you can imagine the potential this has when designing more complex products.
Next comes manufacturing feasibility. Again, you can potentially do tool automation and a simulation of how the tool will move around as it manufactures a particular part of the valve. In this case, we are not doing that. We're simply asking the assistant: how can I go about manufacturing this using a particular manufacturing technique? And it will provide some insights as to how this can potentially be manufactured.
Again, you can use a toolpath simulation, which is available within Fusion, to get a visual appreciation of how that manufacturing will actually happen. The bill of materials creation can be done very simply: you just ask for a bill of materials for this particular butterfly valve, and it will immediately produce the list of parts. It can even export that to Excel, and now you have the bill of materials within Excel, which you can use for any later activities.
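A minimal sketch of that bill-of-materials step could walk the occurrences of the active Fusion design and write them to a CSV file that Excel can open; the output path is an assumption for illustration.

```python
# Minimal sketch (hypothetical): exporting a simple bill of materials from the active Fusion design.
import csv
import adsk.core
import adsk.fusion

def export_bom(csv_path: str = "/tmp/butterfly_valve_bom.csv") -> None:  # assumed output path
    app = adsk.core.Application.get()
    design = adsk.fusion.Design.cast(app.activeProduct)

    # Count how many times each component appears in the assembly.
    counts = {}
    occurrences = design.rootComponent.allOccurrences
    for i in range(occurrences.count):
        name = occurrences.item(i).component.name
        counts[name] = counts.get(name, 0) + 1

    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Component", "Quantity"])
        for name, qty in sorted(counts.items()):
            writer.writerow([name, qty])
```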
You can also integrate with PLM. Once you have designed the part, say you want to push it into your PLM system; in this case, it is Fusion Manage. You can simply ask the assistant: I want to upload this particular part into my system. Whatever it does behind the scenes is opaque to the user, but eventually what the user asked for actually happens, and the new part gets uploaded to Fusion Manage.
Also in the context of maintenance. As you can see, there are so many different opportunities and examples of how you can integrate this assistant into the product development lifecycle. In the context of maintenance, it currently gives an explanation of how this butterfly valve might need to be maintained in the field.
But think about having another knowledge base associated with the product. You can have the assistant go through that knowledge base and come up with a reasonable maintenance schedule for this particular valve based upon its dimensions. So you don't have to go back and recreate all of that technical documentation or the maintenance manual; you can basically use the assistant to create that maintenance manual for you.
So these are all the various steps that you can potentially accomplish with this assistant. Now, there are multiple versions that will be coming up soon, and, hopefully, when you're at Autodesk University, you might see a slightly different version. But this is the impact that generative AI can potentially have on the product development life cycle.
Now, before I end this part, I'd like to quickly take you to-- actually, Sahil will take you very quickly to another interesting demo that we have built along with CCTech. And I'll give it to Sahil at this time.
SAHIL SAINI: So, folks, since we talked about how generative AI can be used with Fusion, one of the use cases we tried to solve -- and again, this has not been endorsed by Autodesk -- is how you can have generative AI integrated with Autodesk Construction Cloud. We see Construction Cloud as one of the leading project management tools for the construction business.
But the integration we would like to highlight is how, in the field, if you are using Teams, Slack, or any tool for your team correspondence, you can interact with Autodesk Construction Cloud through Amazon Bedrock. If you look at the overarching architecture: from Teams, a user asks a natural-language query, and through Teams webhooks we send a prompt to Amazon Bedrock.
And here is the point where we are, again, using Bedrock guardrails, which will filter what the user is actually sending to Bedrock. That way, you have a guardrail, a responsible AI layer, for your overall integration. Once Bedrock interprets and builds an understanding of the user's prompt, it will interact with Autodesk Construction Cloud through APS. Through interacting with APS, it will get exactly the right data out of Construction Cloud, process that data back in Amazon Bedrock, and respond back to Teams.
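A minimal sketch of that integration path might look like the following; the ACC issues endpoint, project ID, token, model ID, and Teams payload fields are all illustrative assumptions, so check the current Autodesk Platform Services and Teams webhook documentation before relying on them.

```python
# Minimal sketch (hypothetical): Teams message -> Amazon Bedrock -> Autodesk Construction Cloud data.
import json
import urllib.request
import boto3

bedrock = boto3.client("bedrock-runtime")

def fetch_acc_issues(project_id: str, access_token: str) -> list:
    # Assumed ACC Issues endpoint; verify against the current APS documentation.
    url = f"https://developer.api.autodesk.com/construction/issues/v1/projects/{project_id}/issues"
    request = urllib.request.Request(url, headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read()).get("results", [])

def lambda_handler(event, context):
    body = json.loads(event["body"])            # payload posted by the Teams webhook (assumed shape)
    question = body["text"]                     # e.g. "What are my top issues today?"

    issues = fetch_acc_issues("PROJECT_ID", "APS_ACCESS_TOKEN")  # placeholders

    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption
        messages=[{
            "role": "user",
            "content": [{"text": f"Issues: {json.dumps(issues)[:8000]}\nQuestion: {question}"}],
        }],
    )
    answer = response["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps({"type": "message", "text": answer})}
```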
If we go to the next slide, we can see a quick working session of it. Let's say a user is on Teams; through a CC bot, the user can ask: what are the top issues that I'm supposed to work on today? Or: what are the top RFIs associated with the project? With this whole integration behind the scenes, it actually connects to Autodesk Construction Cloud with the permissions that the user has, and it extracts all the top projects, the hubs, the RFIs, and the issues natively into the Teams bot, without the user even going into the Construction Cloud system.
So this is, again, the art of the possible: how you can embed a generative AI solution into your product development life cycle. With that, Madhu, do you want to go ahead with the next step? Where are we?
MADHU PAI: I think this is really exciting. And I know we segued from product development into construction. But these are some of the more exciting opportunities out there that we can leverage generative AI for. And potentially, the possibilities are endless as also mentioned here, you just saw that you can seamlessly integrate Autodesk Construction Cloud within the context of Teams. And so you don't even have to exit out of Teams in order to get the information that you need. So that's a very powerful, I think, message that we're trying to deliver from this very quick demo.
Now, with all of this, I think it's very important to see what is on the horizon. As we look ahead, what do we see? We are seeing seamless integration of AI assistants into software for real-time collaboration and design optimization, and we saw a sneak peek of that in the intelligent assistant that we just demonstrated. But there can potentially be other applications of this intelligent assistant within the context of real-time collaboration and design optimization. Then there is automated design rule checking, validation, and compliance verification using AI models that are trained on industry standards and regulations.
Now, think about bringing in industry standards and regulations. If it's in a form of a PDF or if it's in a form of any other format, then can you bring that into your design life cycle itself? So that as you're designing the product, you can actually do these checks, compliance checks, and validations, and comparison with industry standards on the fly.
Democratization of design and manufacturing through AI-assisted tools, enabling non-experts to participate in the product development process. I think this is a very powerful message that we want to deliver. You can potentially have non-experts who are aware of what a butterfly valve looks like, as in the demo that we showed, but who are not exactly aware of all the standards or all the design processes that go into building the butterfly valve itself.
And so now, non-experts have the ability to actually be at the forefront of designing new products. Then there is the emergence of AI-augmented human-machine co-creation paradigms, fostering a symbiotic relationship between human creativity and artificial intelligence. I think this is another powerful message that we want to deliver to the audience: there can potentially be human-machine co-creation.
This is actually a very powerful paradigm that we are seeing happen within the context of AI: collaboration between humans and AI to come up with new ideas. I think that's really the crux of what generative AI is trying to solve. Then, of course, there is multidisciplinary collaboration between AI experts, engineers, designers, and domain specialists to unlock the full potential of AI within the context of product development. This is another very important aspect of having these workflows, these generative AI orchestration frameworks, wherein you can potentially have multiple experts, engineers, and designers with different domain expertise who can come together toward a common cause.
With that, I would like to thank all of you for listening. I think these are really exciting times, and I hope to meet you at Autodesk University. If you have any questions, please do stop by our booth, or you can also speak to us at the conference. From my side, Madhu Pai, as well as Sahil, we would like to sign off here. Thank you all for watching and listening.