AU Class

AI-Powered Manufacturing: Enhancing Autodesk Fusion 360 Manage Integrations


Description

Explore AI-driven manufacturing by integrating advanced language models with Autodesk Fusion 360 Manage. This presentation will demonstrate how AI can revolutionize manufacturing workflows, streamlining communication, collaboration, and data management. You'll learn how to use AI technologies to overcome common industry challenges and improve efficiency and productivity. We'll showcase real-world examples of AI-enhanced Fusion 360 Manage integrations for a seamless and effective manufacturing experience. Our practical live demonstrations will reveal how integrating AI reduces data-processing time, enabling faster decision making and agile responses to manufacturing changes. You'll leave equipped with the knowledge to transform your own workflows and unlock AI-driven innovation with Fusion 360 Manage.

Key Learnings

  • Discover the key benefits of integrating advanced language models with Fusion 360 Manage Extension for improved manufacturing workflows.
  • Learn about implementing AI-driven solutions to overcome common industry challenges, such as miscommunication and information bottlenecks.
  • Assess the effectiveness of AI-enhanced Fusion 360 Manage Extension integrations in real-world manufacturing scenarios.
  • Discover ways to apply these insights and transform your manufacturing workflows and unlock the potential of AI.

Speakers

  • Greg Lemons
    Full-stack Developer at D3 Technologies. FPV drone pilot.
  • Andrew Waszak
    Software Developer Intern for D3 Technologies
      Transcript

      GREG LEMONS: Hey, everyone. Welcome to AU 2023. I hope this has been going good for you so far. We're here today to present on AI powered manufacturing, enhancing Autodesk Fusion 360 Manage integrations.

      So a little introduction. We are team D3. Both Drew and I come from the development team. I'm a senior developer there, I've been there for seven years now. Been slinging code for about 30 years. You can see from my picture, I'm wearing FPV goggles, that's what I do for fun. I'm an FPV drone pilot, and if you don't get dizzy, hit up that QR code, and you can kind of see what I like to do. My history is from the aerospace and exotic car industries. Drew?

      DREW WASZAK: Yeah, thanks Greg. Everybody, Drew Waszak, I'm a software developer with team D3. Been here for four years, and my primary focus is creating custom solutions in the Autodesk platform services stack. I'm a big video game enthusiast, and during my free time, I love to spend it with my family. Thanks for having me here.

      GREG LEMONS: All right, great. So the big question, what is AI? Well, that's too big of a question. We're not going to answer that today. We're going to talk about an aspect of AI today. We're going to talk about language models. Specifically large language models, and kind of the rise of them, and how we can use them right now.

      So what are they? You may know what these are. You've probably seen the hype around these chatbots, ChatGPT, et cetera, out there, but they're text-based AIs. So they try to understand human language, and give you a response back in human language. They are data driven. So they're powered by vast data sets that these models are trained on.

      Most of these large models that we see today are trained on a vast amount of the internet. They use machine learning frameworks, much like other AI technologies. They employ neural networks, particularly transformers. And that's to handle the complexity of human language.

      The other thing about these language models, these newer ones, is they're very compute intensive. They require substantial computing resources. And that means specialized hardware. So why do these things matter? What is a language model, and how are they going to be useful for us?

      So one of the big ones, and maybe most obvious, is they can break down language barriers. So they have-- they understand languages from all over the world, and can offer kind of high quality translation to anybody. That's a pretty obvious one. Maybe not so much in how it will help in automation, but automation is another reason why they matter. And there's a lot of repetitive tasks that we do daily. Automation using these large language models can help free up the human resources.

      There is this fear out there that they'll maybe replace us. They aren't there yet. And I have a little bit of insight about that a little later on.

      Another reason why they matter right now is they can democratize expertise. And when I say that, in a lot of industries that you may have questions around, you have to know the verbiage of that industry. Otherwise everyone knows you don't know what you're talking about. So when you use these large language models, you can ask them in your own language, and they can figure out what your intent is, and answer the question appropriately.

      The last thing I wanted to talk about on why they matter is just the human collaboration part of it. It's being used right now in different areas, health care, finance. In health care they're diagnosing medical conditions from patient records. In finance, they're doing risk analysis and trading insights. So they're being used in industry right now.

      So where do they come from? We've talked about what they are, maybe why they're the talk of the town currently. But if we take a step back and see how we got here, they didn't sprout up overnight. In fact, these early chat bots-- and I'm sure everyone in here has used some of these older chat interfaces on websites where you ask it a question, and 90% of the answers are sorry, I can't answer that question-- they were kind of simple, trained on simple data sets, doing simple conditions against keywords in your questions.

      So we've made a significant leap there in NLP or natural language processing is the general term for this space. So they started, like I said, right at the beginning when we were really wanting to interact with computers. And in fact, kind of wanting to speak to a computer in human language is not new at all. It is one of the first ways we thought we would be interacting with computers.

      But here we've got-- open source took the reins on this, and started kind of nurturing these language models into, I'll say, large language models, to the point where now they're being offered as services by a lot of the large tech giants out there. And how? Why did we get here? One of the reasons is the hardware advancements that have been made.

      So we've had a huge increase in computational power, specifically GPUs and TPUs that's made it more feasible to build these large language models. Also the data availability. So like I said, it trains from the web. It can train from any data source. Most of the ones available commercially are based off of web training, but that gives them the fuel to learn and respond to a wide array of contexts.

      And they're being used right now. So I talked a little bit about how they're being used in health care and finance. But also legal services and education, they're being introduced as well.

      And there's a lot more depth to language models and large language models that we're not going to go into today. But we do want to ask: why manufacturing? What are the use cases here? And we've seen them where they can play roles in health care and finance. They can even write scripts and things like that, but we're here to talk about manufacturing. So here are a few of the places that we think right now that we can use them in manufacturing.

      And this list is likely leaving out many opportunities. So just keep that in mind. And you're probably thinking of opportunities or likely will around this space. So the first thing is automated data parsing. So large language models can read and understand a bunch of data, complex or not. If you can imagine in our manufacturing processes we've got data flying, especially in the integration side, we've got data flying all over the place. That's great context for a large language model to use.

      Another one is natural language queries. And in fact, this is where you'll see maybe the most use of language models right now is they'll say, hey, take this document, take this data, and give me a summary of it. Analyze this.

      And so right now, and in fact, even at this conference, you'll probably see Autodesk has some space here in natural language queries, where they're using that feature to allow you to ask about your data in Autodesk services in a human language style. I'll leave that to Autodesk to discuss.

      One of the other things we thought would be really great is anomaly detection. So these language models are really good at looking at patterns and detecting when something doesn't fit a pattern. We have all of this process data, integration data moving around, and generally none of it is seen by human eyes. So having an AI or a large language model in the mix there could help with anomaly detection.

      Workflow automation, which is one we're going to talk about, in fact demonstrate today. In fact, we're going to demonstrate a little bit of the automated data parsing and workflow automation as well. But we've got workflows. So integrations aren't simple connections between two systems, right? There are processes that are involved in those integrations.

      On Fusion 360 Manage those are called workflows, but that's generally the terminology for any of these processes. And just imagine being able to make decisions on build-to-order processes. In fact, what we'll show later is that in these integration processes, having a large language model in the mix really opens up the door there.

      So the other thing that-- last thing I'll talk about in how we might see them being used in manufacturing is that contextual insights. So they can, again, ingest all of this data, data points, KPIs, and actually make some sense out of it for you. So now we've talked about what they are, why they are, who they are. How can we start using these things right now?

      So most of the services-- a lot of the large tech companies have released products that are being used right now. You'll probably all recognize ChatGPT. Google has it, Microsoft has it. The language model services that you generally see provided right now are APIs. So language models as a service. ChatGPT is an interface that you go on to and you ask questions, but they also have back end access to it via API.

      So why would we want to use large language models in this respect? I talked about some of the challenges with the computational power and things, which is a big one, but one of the reasons you want to use LLMs as a service is how easy it is. So they offer API access. A lot of developers and in fact non-developers are pretty familiar with utilizing web APIs, and that's effectively what they offer right now.

      Another reason is scalability. So these language models, they allow you to scale your operations without having to invest or worrying about computation overhead or hardware. Another reason you may want to use it as a service is the cost effectiveness. So again, outsourcing that workload to some cloud based language model service is more cost effective than maintaining your in-house resources, especially if you've got smaller organizations, or you only want to use it for a short term project.

      While we're talking about using LLMs as a service, one of the Holy Grails in large language models is to have your own language model and train it on your own data. While we're not going to get deep into that today, these services have started offering fine-tuning capabilities for their models. So you have a base model like ChatGPT that's been trained on the entire web, and you can add additional information about your business, your business data, and then only you have access to that now fine-tuned model.

      Another one of the reasons you might want to use it as a service is because of that cross-platform support. So being a web API, it can be accessed from any platform. And along those lines, it's also faster to integrate an API into an existing product. So if you need to turn around and get it to market faster, utilizing one of these APIs as a service is going to be maybe the fastest way.

      So we've talked a little bit about what they can do, and it's time to get our hands dirty. And I'm not talking about Deadpool dirty, there's going to be no chimichangas involved. But you're probably wondering, how do I get started using one of these things? How do we use these in our own operations? And that's where these language models as a service can help right now. And with that, I'm going to hand it over to my partner Drew who's going to demonstrate how easy this is to use with free tools right now.

      DREW WASZAK: So as Greg mentioned earlier, we are going to be using Postman today to interact with these services, specifically OpenAI's API. The prerequisites to get into this state and start playing around with these APIs are: you have to go to OpenAI, register for an account, and generate yourself an API access key.

      The next step is you have to create yourself an account within Postman. In our handout we're going to have a guide on how to set up an account within both of those systems, and also a link to the collection here that we're using, which you guys can play around with.

      So now that we're in Postman, let's go ahead and create a new request. I'm going to give it a name. I'm going to call this my chatbot. And looking at the documentation we know this request needs to be a POST request, and they've given us our URL. So I'm going to go ahead and copy this URL from a previous request, and you can see here it is hitting the api.openai.com URL, and then we're hitting the endpoint /v1/chat/completions.

      Now we need to get that API access token that was generated from OpenAI and we need to stick it into our request. So the way we do that is we go to the Authorization tab, and we want to make sure that we select the type of bearer token. Then you can just paste your OpenAI key within here, as a string. I do have an environment variable set up, so I'm going to go ahead and bring it in through my environment variable for anyone that is familiar with Postman, but it is just a simple string.

      We don't have to worry about headers for this call, they should be taken care of in Postman. But we do need to worry about the body. Our body needs to be of type raw JSON. I'm going to go ahead and copy over our body structure from our previous request as well. As you can see, it is in JSON format. And there's a few attributes that go along with it.

      The first attribute here is our model. This is that large language model from OpenAI that you want to use within your request. We're currently looking at GPT 3.5 turbo. It's a little quicker than the latest GPT 4, but the GPT 4 model has some better-- or is known to have better results. So we'll be hitting both of those today. I'm going to start off with 3.5 right now, and then our next attribute is an array or a list of messages, and within these messages you can have different roles to formulate your conversation.

      So the first role we have here is a system role. This is your system directive, and the great thing about being able to define a system directive is it's going to add more weighting to that content that you're supplying in there, to hope that your large language model doesn't stray off from that main goal. Our next role is user. This is you, this is you prompting, you asking questions to the chatbot. So that's pretty self-explanatory.

      And then the final one is the assistant. This gives you the ability to mock what GPT would respond to you with. So you're able to create a list or an array of conversation in here, and you can tell it at system command, you can give it your prompts, but you can also mock or imitate the prompt that ChatGPT has, feed that to the service, and the service will respond with its final prompt.

      Today's use case, we're going to just be using the system and user roles. We can see here our system directive is we are going to be in the persona of the Marvel character Deadpool. And then as the user we're going to ask it, how do you replace Iron Man in the MCU? A tough question.
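The request body Drew walks through can be sketched in a few lines of Python. The model name, endpoint shape, and role structure follow OpenAI's chat completions API as described in the talk; the exact prompt wording is paraphrased from the demo.

```python
import json

# Sketch of the chat completions request body from the demo:
# a "system" directive setting the persona, then a "user" prompt.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You respond in the persona of the Marvel character Deadpool.",
        },
        {
            "role": "user",
            "content": "How do you replace Iron Man in the MCU?",
        },
    ],
}

# This JSON string is what goes into Postman's raw JSON body.
body = json.dumps(payload)
```

Swapping `"gpt-3.5-turbo"` for `"gpt-4"` is the only change needed to hit the other model, exactly as shown later in the demo.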

      So I'm going to go ahead and send this off. And again this is using GPT 3.5 turbo. We're also going to send the same request off with GPT 4 to see the differences between the two. But I hope you guys can see how simple it is to get in here and start hitting these services.

      All right, there's our response back. And you can see within the content, "Well, well, well, if it isn't the famous merc with a mouth, Deadpool. So you think you can replace Iron Man in the Marvel Cinematic Universe? Let me tell you, buddy, it's not an easy task." So we can see it did respond to us in the persona of Deadpool trying to take over Iron Man in the MCU.

      All right, let's go ahead and show how easy it is to change the model. We're going to change this from 3.5 turbo to GPT 4, and I'll go ahead and send that one off. As I mentioned earlier, GPT 4 does take a little bit longer than 3.5, but the content it serves back usually is a little bit better.

      Now while this is sending off right-- oh, actually it'll come back for us. So here we are. "Well, being Deadpool I'd start with a few sarcastic comments. After all Tony Stark, was known for his humor and wit. However replacing Iron Man isn't simply about being comedic, it also is about intelligence and leadership." So you can see we got another response there.

      So this is a great example of how you guys can be using this service today. But you may be wondering how to get this outside of a Postman type context, and into some code. So Postman does have a script generating tool, and we are actually able to generate this request in code, in a multitude of languages, whether that's C sharp, Java, JavaScript, PowerShell, Python, you know, they've got all the main players on here.

      So for today, let's go ahead and grab a PowerShell generated script, and we're just going to launch this within our PowerShell ISE. So I'm just going to copy and paste that code directly in, send it off, and we can see within the code here it is formulating those headers of our authorization, that'll be your API access key that you got from OpenAI that we specified as the bearer token. And then there's our body structure built out in a nice body variable in PowerShell. And then we're invoking that rest method and converting the response to JSON here in the view.

      So we should see here in a second that will return to us. With that same response that we had earlier within Postman. There it is, came back. "Heck, if you ask me, Wade Wilson also known as Deadpool, says no one can replace the genius billionaire Playboy philanthropist Tony Stark." So I hope that gives you guys a good understanding of how you can be playing around with these services today. How you can use Postman to generate your created requests into code, and start plugging it into your applications.
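The PowerShell script Drew generates from Postman builds the same three pieces: an authorization header, a JSON body, and a REST invocation. A rough Python equivalent, assuming a placeholder API key in the `OPENAI_API_KEY` environment variable (the actual send is commented out so the sketch runs without a live key):

```python
import json
import os
import urllib.request

# Bearer-token auth header plus a JSON body, mirroring the
# generated PowerShell script. The key placeholder is an assumption.
api_key = os.environ.get("OPENAI_API_KEY", "sk-your-key-here")

body = json.dumps({
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode("utf-8")

request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=body,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually call the service with a real key:
# response = json.loads(urllib.request.urlopen(request).read())
# print(response["choices"][0]["message"]["content"])
```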

      I'm going to hand it back over to Greg to talk about how we're going to be strapping this into our own integration platform, along with bringing it back into manufacturing.

      GREG LEMONS: Thanks, Drew. So we're here to see how this works with Fusion 360 Manage. And I'll tell you, it's kind of a perfect match. So Fusion 360 Manage gives us these extensibility points that we can use to integrate with all sorts of other systems. And that really is the key to being able to sneak into these processes at virtually any time.

      On the screen right now you actually see an example of setting webhooks, which are these remote events that we use to trigger our integrations, and we're going to be using those today as we demonstrate our platform. So the demonstration platform is going to be ForgeFlow. This is Team D3's integration platform and toolset for some Autodesk products. I'm going to give you a quick overview of the architecture of ForgeFlow. We're not going to go deep into it. But if you have any questions about that, or want to know more about it, we'd be happy to answer those.

      A quick architecture diagram here, just with the pieces involved in this presentation. So on the left hand side, we have the ForgeFlow front end. This is a web portal, and in our demonstrations you'll see us using an Integration Designer that's housed there. So we allow the user to design their integrations between Fusion 360 Manage and other systems in a nice drag and drop node based UI.

      That front end, though, is a dummy front end that does interact and can do API calls against Autodesk services. But the real power is that it can queue up long-running jobs, and our architecture picks up those jobs and runs them on back end worker machines. Remote events are important in our integration, and I think in most modern integrations. And so when you are modifying items within the Autodesk ecosphere, they can trigger webhooks or remote events.

      Fusion 360 Manage specifically allows us to get very granular with what events we can use. Our architecture at ForgeFlow captures all of those events in a large webhook funnel, and puts them on our own durable queue that can survive catastrophic downtime. And then at our leisure, but generally in close to real time, those worker machines pick up those jobs off that queue and process them.
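The funnel-and-queue pattern Greg describes can be sketched in-process with Python's standard library. The event shape and handler below are illustrative, not ForgeFlow's actual API; a production version would use a durable store rather than an in-memory queue.

```python
import queue

# Events land on a queue the moment the webhook arrives; a worker
# drains it independently of the web front end.
events = queue.Queue()

def capture_webhook(event: dict) -> None:
    """Front end: accept the event and enqueue it immediately."""
    events.put(event)

def run_worker() -> list:
    """Back end worker: drain queued events and process them."""
    processed = []
    while not events.empty():
        event = events.get()
        processed.append(f"handled {event['workspace']}:{event['transition']}")
        events.task_done()
    return processed

capture_webhook({"workspace": "NPI", "transition": "AI trigger"})
results = run_worker()
```

Decoupling capture from processing is what lets the webhook endpoint respond instantly while jobs run in near real time on the workers.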

      So I didn't want to get too deep into it. But I did want you to get an idea of the players involved when you see some of the demonstrations that we'll be doing. And for context, every demonstration we'll be doing today is live against real systems. So with that risk, hopefully we'll get a pretty large reward.

      I did want to also highlight that those back end worker machines, as they're processing either long-running jobs submitted from the front end or these integration jobs, you can get notifications about those in the front end. So it's not so much of a black box integration.

      And so with that, I'm going to quickly demonstrate-- get to that demonstration platform for you. So this is ForgeFlow. This is the home page. We are going to look at the Integration Designer. So very quickly though, we are using this with Fusion 360 Manage, and you can see our tenant. This is the Team D3 demo tenant we're connecting to. This is our tool set that allows you to run a bunch of useful time-saving tools against Fusion 360 Manage tenants.

      We're going to go into the integration projects today, and we are going to look at what our designer is. So I'm going to bring up-- we'll bring up this integration here. This integration is an Upchain-to-Fusion 360 Manage integration. I'm going to use it just to explain really quickly the Integration Designer, because Drew's going to demonstrate a lot of features of this. So I want to give you guys an overview.

      So this is the Integration Designer. And every integration process in our integration platform starts with a trigger or a remote event. It could also be a recurring event. And these are the nodes that we talk about, triggers can be Fusion 360 Manage triggers, external triggers, recurring triggers. Our webhook creation methods are built into this designer as well. So you can actually on these triggers go in and find the transitions that are in those Fusion 360 Manage workspaces, select them directly from here to use.

      We also can use any of the normal CRUD events that are also webhooks in their system. So each of these little triggers, and what we call action nodes here, complete a process in an integration. The Integration Designer is very flexible. It even allows us to add context around these processes. You can see, this is, for example, an NPI process in Fusion 360 Manage. When we initiate that process it creates a webhook that we capture, and we create a project in Upchain. Right?

      When a change request happens in Upchain, we capture that and complete it in Fusion 360 Manage. When visualizations get created in Upchain, we move those over to Fusion 360 Manage. That gives you an idea of how integrations aren't generally a simple connection between two systems. There are processes that lie inside of them. However, this Integration Designer allows kind of a drag and drop experience to make those integrations a little easier.

      So with that, I'm going to have Drew take over and demonstrate how we can use these AI LLM nodes in an integration process.

      DREW WASZAK: All right, and we have my Integration Designer up now. And you'll notice it's a little different than the one Greg showed you. The primary difference right now in this new integration is we've added this OpenAI large language model agent. And so this is strapping ChatGPT, OpenAI, into our integrations. And very similar to how we did in Postman, where we were defining the prompts, or almost prompt engineering in there, we have that ability in our designer as well. And we've also expanded on it, because now it's within our own infrastructure.

      So here we can see we've got the system command. This is exactly what we were describing as our persona of being Deadpool in Postman. But we've also added some additional context where we're able to pull or suck up some data, some metadata from Fusion 360 Manage items and pass that in to our context. We can also then map the result from the large language model back into an item details field within Fusion 360 Manage.
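The node Drew describes does two things: pull item metadata into the prompt context, and map the model's reply back onto a field. An illustrative sketch, where the field names and the write-back target are assumptions, not Fusion 360 Manage's actual schema:

```python
# Build the chat messages from item metadata, and map the completion
# back onto the item. Field names here are hypothetical examples.
def build_messages(item: dict) -> list:
    context = "\n".join(f"{key}: {value}" for key, value in item.items())
    return [
        {"role": "system",
         "content": "Write a short, plain product description from the data provided."},
        {"role": "user", "content": context},
    ]

def map_result_to_item(item: dict, completion: str) -> dict:
    # Write-back field is an assumed name for illustration.
    item["PRODUCT_DESCRIPTION"] = completion
    return item

item = {"TITLE": "Drone X 10", "OBJECTIVES": "Mid-range drone for hobbyists"}
messages = build_messages(item)
```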

      So jumping back out to our designer here, we are starting with that Fusion 360 Manage webhook trigger. We've built a specific AI trigger on our new product introduction workspace, and most of these ideas are coming from the new product introduction philosophy of, we need to introduce a new product into the market, or into our company. And we may need some help along the way from AI to do that.

      So jumping over to Fusion 360 Manage, let's go and find our new product introduction workspace. And we'll see that we have some items created in here that are in this ideation state. So I'm going to go ahead and grab my drone x 10, and if you guys didn't know, Greg and I are big drone hobbyists, and so in a lot of our demos we love to use drones and bring that into the world.

      But within this item we have some metadata. We've got a very, very brief description, an NPI number, and then we have some objectives, some primary goals, drivers, just some metadata. But we don't have a product description. An AI is really good, these large language models are really good at taking a lot of context, a lot of data, and turning it into something valuable. So that's exactly what we're going to do.

      Jumping back over to our integration, we're going to trigger from that item. We're then going to grab all those item detail fields, and we're going to pass them into our large language models. So our first example here is a description generator, and our system command is tasked with creating a short, mundane description for the product based on the data coming from this Fusion 360 Manage item. So let's go ahead and turn those guys on, and I'll show you guys a live transition over here. We're going to hit our trigger, and just to show the workflow map.

      This is our workflow. All of these transitions in here do have webhooks that we could set up. Currently the one we're looking at is going to just be this little AI trigger. So I'll go ahead and transition it, and the beauty of our Integration Designer is, it's sending real live messages from our server as it's processing. So you can see it hit our trigger, and went and gathered the item. Now it's generating that description, and then it's going to put that result back on the item. We can see we got a little notification that the AI generated this description. The drone X is an innovative mid-range drone, targeted for both beginners and hobbyists.

      Designed for user friendliness it offers advanced features for drone racing, and aerial photography. And enhanced battery life. Jumping back over to Fusion 360, we'll do a quick refresh on this and we should see now that product description should be populated within our item, using that metadata and the large language model.

      And there it is, the drone X is an innovative mid-range drone targeted for both beginners and hobbyists. So you may be thinking like me that, yeah, that's cool that I can generate a description, but I want my description to pop. I want it to have some zing to it. So we've then created the description enhancer, and this now has the system command of being a description rewriting bot. You're going to take this mundane description, and you're going to make it more appealing to consumers.

      The other difference is, instead of taking all that metadata from the item, now that we have a product description generated, we're going to grab that product description and use that to create our new enhanced description. So go ahead and turn these guys on, and I'll trigger this one off screen, so you guys can see it happening all within the Integration Designer right now.

      I just triggered it, and our system caught that webhook, sent it on to our back end processor, and now it's processing these jobs, or these nodes. So we got that item, grabbed that product description, and now it's passing it in to the description enhancer. So our original description is up here, and then our enhanced description is down here. Discover the dynamic drone X, designed for both novices and experts alike, and watch as it propels your drone flying experience to thrilling new heights. With its cutting edge features the drone X is your go-to gadget for adrenaline packed drone racing fun. So you can see that pops a lot more than our original description.

      And we can take it one step further and say maybe we want to market this beyond our current country that we live in, or beyond the languages that we need. And as Greg mentioned earlier, large language models are great at translating. We can translate to Spanish, French, English. It does a pretty good job doing that. So within our translator bot here.

      GREG LEMONS: I'd love if you could do, Pig Latin, Drew. Can you make it translate to Pig Latin?

      DREW WASZAK: So my example here was translating from English to Spanish. Greg has the idea, let's do something a little wild. Let's strap in Pig Latin to this thing and see how it handles it. So I'm going to change that. And this shows you how easy it is to change these system directives, just like we were doing in Postman. It's just human language. So we're going to say you are a professional English to Pig Latin translating bot. You're going to translate the English found in the product description, and only respond with the results translated to Pig Latin. If I can spell it right. There it is.

      Oh, and you can see down here, we actually had another one of our jobs showing that, hey, people are working in our system. This was running scripts across workspace 47 in Fusion 360 Manage that came through. Just a little added bonus.

      All right, back to the task at hand here. Let's go ahead and take our enhanced description here, and I want to translate it to Pig Latin, because that makes sense.

      GREG LEMONS: Would you like me to trigger these for you?

      DREW WASZAK: Yeah, I'll let you start taking over the trigger.

      GREG LEMONS: All right.

      DREW WASZAK: Greg is across the state; we're both Missouri folks here. He's going to be triggering it from his machine across the way, just to show that this is all live, and our systems are capturing these real webhooks.

      GREG LEMONS: All right, you just holler trigger when you're ready. Here it comes.

      DREW WASZAK: All right, send it off. Ah, there it is. The system caught that webhook and is now processing our nodes. It grabbed that enhanced description, and it's now going to try to translate it to Pig Latin. Something I've noticed with the translator node is that the language you're translating to affects the speed, and it might be because there are more examples of English to Spanish out there than there are of English to Pig Latin.
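The pattern repeated in each demo is the same: catch the webhook, pull out which item and workspace fired it, and queue the node chain. A minimal sketch of that first step (the payload field names and node names here are illustrative assumptions; the real Fusion 360 Manage webhook body depends on your tenant's configuration):

```python
import json

def handle_webhook(raw_body):
    """Parse an incoming webhook body and turn it into a job for the
    back-end processor. Field and node names are illustrative."""
    event = json.loads(raw_body)
    return {
        "workspace_id": event["workspaceId"],
        "item_id": event["itemId"],
        # The downstream nodes this trigger should run, in order.
        "nodes": ["get_item", "translate_description", "patch_item"],
    }

body = json.dumps({"workspaceId": 47, "itemId": 1234})
print(handle_webhook(body))
```

The last node patches the result back onto the same item, which is what closes the "full circle" described below.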

      But here we have a nice little Pig Latin translation, and I'll try not to butcher it for you guys. So we've translated it now into [INAUDIBLE] And again, we've completed the full circle on this by patching it back to our Fusion 360 Manage item that it was triggered on. So let's jump back in here. And as long as we triggered it from the same item, and it didn't use another one, it should be on here.

      And here we are. There is our enhanced product description, and our translation to Pig Latin. All right, so those were some great practical examples of getting into using some metadata from Fusion 360 Manage to generate content for you. Now, let's get into something that's still practical, but maybe a little bit more fun. We're going to jump into my next Integration Designer here.

      And for all of these events, I am going to have Greg trigger them from his machine across the state for us, so that we can continue through these. So the very first example we have in our more complicated integration is, again, that same trigger coming from the new product introduction workspace with our AI trigger. We're still going to be grabbing that item, but now we can do some other things.

      And so this bot over here is our new product summary bot, and it's very similar to our description bot. It's going to take that list of metadata, some of those objectives, those drivers, the data that was created within that Fusion 360 Manage item. But now, instead of tasking it with creating a description, we're tasking it with being a summarizing bot. And furthermore, we're tasking it with summarizing and producing the content in valid HTML.

      And so this is a great way to show that large language models can return things beyond English. They can return things in a lot of different syntaxes: XML, JSON, HTML. HTML is a great use case for generating web content, or maybe throwing that content in an email. All right, so I'm going to have Greg go ahead and trigger off our item from his machine over there.
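When you ask a model for structured output like HTML, it helps to sanity-check the result before patching it back into an item or an email. A minimal sketch of such a check, built on Python's standard-library `html.parser` (this checker is our own illustration, not a ForgeFlow component, and it only catches unbalanced tags, not every kind of invalid HTML):

```python
from html.parser import HTMLParser

class TagBalanceChecker(HTMLParser):
    """Cheap well-formedness check: every opened tag must be closed
    in order. Void elements like <br> are skipped."""
    VOID = {"br", "img", "hr", "input", "meta"}

    def __init__(self):
        super().__init__()
        self.stack = []
        self.balanced = True

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack.pop() != tag:
            self.balanced = False

def html_is_balanced(fragment):
    checker = TagBalanceChecker()
    checker.feed(fragment)
    return checker.balanced and not checker.stack

print(html_is_balanced("<div><p>New product summary</p></div>"))  # True
print(html_is_balanced("<div><p>Unclosed summary</div>"))         # False
```

If the check fails, an integration node can retry the LLM call or fall back to plain text rather than writing broken markup into the item.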

      There it is, it got caught, came in. We're grabbing that item again, grabbing all that metadata around the drivers, the objectives, things that people have filled out within that Fusion 360 Manage item. And now our large language model is processing it, and it's going to come up with a new product summary in valid HTML for us. Oh, it looks like I forgot to turn off my other integration. Let me go do that real quick.

      Greg, will you actually go do that for me real quick, just so we're not seeing all those notifications pop up?

      GREG LEMONS: You got it.

      DREW WASZAK: Thank you. All right, so here is our new product summary bot, and it has created a nice long summary of that whole item for us. I'm not going to read the whole thing for you, but if we jump into our response, or our output here, we can see that this was wrapped within div tags.

      All right, so the next example is very similar. We're still going to be taking all that metadata coming from the same item, but now let's create a marketing bot. And this marketing bot is tasked with specializing in human emotion and persuasion. Furthermore, we want this to be an over-the-top marketing scheme, and we want it to read like Crazy Dave's commercials. Also, we're going to try to keep this to 70 words or less. It's a marketing campaign; you're going to lose people's attention. So we want it to be nice and short.
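A length cap like "70 words or less" belongs in the directive, but models treat word limits as suggestions, so it's worth verifying the output too. A small sketch of both halves (the function names are our own; the directive paraphrases the demo's marketing-bot setup):

```python
def build_marketing_directive(word_limit=70):
    """Directive for an over-the-top marketing persona with a word cap.
    Paraphrased from the demo; exact wording is illustrative."""
    return (
        "You are a marketing bot specializing in human emotion and "
        "persuasion. Write an over-the-top marketing pitch in the style "
        "of a late-night TV commercial. Keep the response to "
        f"{word_limit} words or less."
    )

def within_word_limit(text, word_limit=70):
    """Post-hoc check, so an integration node can retry or trim when
    the model runs long instead of publishing an oversized pitch."""
    return len(text.split()) <= word_limit

directive = build_marketing_directive()
print(within_word_limit("Explosion of thrills! " * 10))  # 30 words -> True
```

Retrying with a firmer directive is usually enough; hard truncation is the fallback of last resort since it can cut a sentence mid-pitch.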

      Let me go ahead and turn that on. And Greg, you can go ahead and trigger that item for me. There it is, coming in. All right, and we are creating our marketing content in the style of Crazy Dave's commercials. All right, here it is.

      "Explosion of thrills. Introducing the mind-blowing, game-changing Drone X, designed for adrenaline junkies and inquisitive minds alike. Jet into the brain-frying excitement of militant drone races." Man, this is expanding my vocabulary. But as you can see, it created an over-the-top, awesome marketing campaign for us right there.

      All right, so we've got some of those more practical examples, but we can also do some fun things. We can also create a joke bot. If we want to lighten up our day (you know, Mondays are never fun for anyone), maybe you just need an AI to tell you a good joke. So we have a joke bot in here that is going to write in the style of Jerry Seinfeld. It's going to be presented the data from our Fusion 360 Manage item about this drone, and hopefully it comes up with a short, funny joke for us.

      All right, Greg, I'll have you go ahead and trigger our event off again, so we can have a nice Jerry Seinfeld joke for us. And there it is. Came in. "So what's the deal with the drone X? This thing says it's perfect for both beginners and experts. Do you know what else they say about it? Chopsticks, and everyone remembers the first time they tried using those, right? One minute you're feeling like a pro, and the next you're dropping your California roll all over your lap. So go ahead, try the drone X, just make sure you're not eating sushi while you're flying it."

      Here we go. All right. So I hope that's a good example of how you can have some fun with the AIs. But let's take it a step further in the AI space. We've been talking all about these large language models, but there's more to this space. There's also image generation software out there, like Midjourney and DALL-E. And we came to the epiphany that you could use these large language models to generate a fantastic description, or a fantastic prompt, for one of these image generation AIs.

      So we've built ourselves a DALL-E action node, and we're going to be tasking our large language model agent here with being an AI image prompt engineer. It's going to add in some other filters, like trying to get the photo, and the lighting, and the colors, all of this, to look natural. And so this is a great way of using a ChatGPT LLM to generate a prompt that will be fantastic for one of these image generation tools.
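The two-stage shape of that pipeline (LLM engineers the prompt, image API renders it) can be sketched independently of any particular vendor. Here both services are injected as plain callables and stubbed, so the sketch runs without network access; all names, URLs, and directive wording are illustrative assumptions:

```python
def prompt_engineer(item_metadata, call_llm):
    """Stage 1: turn raw item metadata into a photographic image prompt.
    The directive paraphrases the demo's image-prompt-engineer node."""
    directive = (
        "You are an AI image prompt engineer. Turn the product data into "
        "a single photorealistic prompt with natural lighting and colors."
    )
    return call_llm(directive, str(item_metadata))

def generate_image(image_prompt, call_image_api):
    """Stage 2: hand the engineered prompt to an image generator
    (DALL-E, Midjourney, etc.) and return the hosted image URL."""
    return call_image_api(image_prompt)

# Stubs stand in for the real LLM and image APIs.
fake_llm = lambda directive, data: f"photorealistic drone, natural light ({data})"
fake_image_api = lambda prompt: "https://example.com/generated-drone.png"

prompt = prompt_engineer({"name": "Drone X"}, fake_llm)
url = generate_image(prompt, fake_image_api)
print(prompt)
print(url)
```

Injecting the services as callables is also what makes each node swappable inside a designer-style integration: the pipeline doesn't care which model or image backend sits behind the function.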

      So go ahead and turn those guys on, and I'll have Greg, across the state, go ahead and trigger that for me. There it is, caught our trigger, grabbed our item, and it's feeding all that data into our image prompt bot to then create that prompt. Here it is, our new prompt from the LLM: a photorealistic dynamic Drone X in flight, showcasing its cutting-edge features and user-friendly design, in black and red colors, shot in bright outdoor light using a 1/1000 second shutter.

      Here we are: it generated a nice little drone image for us. And this drone image is hosted online, and we can actually see that within here. And all of our nodes have fantastic logging. I'll let Greg get more into that, though, as he takes over our next segment, which is using this in more of a programmatic interface. Greg, do you want to go ahead and take it away?

      GREG LEMONS: Yeah, Drew, go ahead and activate that second step for us.

      DREW WASZAK: Yeah.

      GREG LEMONS: So this one is practical, but maybe shouldn't necessarily be used in the context of a Fusion 360 Manage integration, although I could argue that it should. This is going to be an example of us chaining multiple large language models together. Similarly to the processes that Drew has shown, we're going to get an item from Fusion 360 Manage, and we're going to pass a set of that data over to these nodes.

      The first of these nodes, Make Valid JSON, is effectively going to do just that: it's going to take the payload I sent from Fusion 360 Manage and try to create a valid JSON object out of it. It's then going to pass that to the next node, and that next node's job, as it was directed, is to create a C# class that can deserialize the JSON payload that the bot in front of it had just created.

      You may notice here also that we're using the GPT-3.5 Turbo LLM on this one. The first two nodes are using that, and the final one (you can close that one down) is to create an application in C# that utilizes that C# class to deserialize a JSON payload from a file. And you can see we've opted to use GPT-4 for that one, so we're actually mixing models in this case as well.
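The chain itself is just a fold: each node has its own model and directive, and each node's output becomes the next node's input. A minimal sketch of that pattern with the LLM injected as a callable so the chain runs against any provider, or a stub (function names, directive wording, and the stub are illustrative, not ForgeFlow internals; the three directives paraphrase the demo's nodes):

```python
def run_chain(payload, nodes, call_llm):
    """Run a list of (model, directive) nodes in order, feeding each
    node's output into the next. Mixing models per node is free."""
    result = payload
    for model, directive in nodes:
        result = call_llm(model, directive, result)
    return result

nodes = [
    ("gpt-3.5-turbo", "Turn this payload into a valid JSON object."),
    ("gpt-3.5-turbo", "Write a C# class that deserializes this JSON."),
    ("gpt-4", "Write a C# console app that uses this class to "
              "deserialize a JSON file given on the command line."),
]

# Stub LLM that records which model handled each step.
trace = []
def stub_llm(model, directive, text):
    trace.append(model)
    return text + f" [{model}]"

print(run_chain("item metadata", nodes, stub_llm))
print(trace)  # ['gpt-3.5-turbo', 'gpt-3.5-turbo', 'gpt-4']
```

Cheaper models handle the mechanical steps (JSON cleanup, class scaffolding) while the strongest model is reserved for the final, hardest step, which is the cost trade-off the demo's model mix reflects.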

      So yeah, go ahead and save that. I can trigger this one; I've got it up already, Drew. So I just triggered it, and let's watch it come through. It's going to get those same items, and now it's going to try to make us a valid JSON object. So Drew, as this one comes up, or as it finishes this node process, you might want to hit that logging info button there.

      DREW WASZAK: Yeah.

      GREG LEMONS: So we can see it kind of in a larger screen.

      DREW WASZAK: Yeah, I'm actually minimizing, that's all right, we have logging built in for this exact reason, so I'll go ahead and pop it open, and here is that output that Greg was talking about.

      GREG LEMONS: So that's the result of that JSON bot, right? And you can see it's a valid JSON object: product description, internal drivers, those are string properties with the values associated with them. Let's look at that second one. It's finished now. So did it create a C# class that we can deserialize that JSON payload to? Well, let's look at it.

      It's got JSON property attributes on all of the property declarations there. That's so they can be serialized back to the actual underscored names that Fusion 360 Manage uses. But it is naming those properties according to .NET standards. So that's neat.

      And so it should have passed that gained knowledge on to the third one, whose directive was to build this application. Right? It's a very simple application, but it did build it for us here. And if we look, it's even got its using statements in there properly. It included that class we need for deserializing, and if we look at the logic, it looks like it is reading the arguments from the command line, finding that JSON file, deserializing it to the class that we told it to, and then writing that typed object to the console.
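The logic of that generated app (typed class, deserialize a JSON payload, print the result) can be mirrored in a short Python sketch. The field names here are illustrative assumptions, not Fusion 360 Manage's actual property names:

```python
import json
from dataclasses import dataclass

@dataclass
class Item:
    """Analogue of the generated C# class: typed fields mapped from
    the JSON the earlier node produced. Field names are illustrative."""
    product_description: str
    internal_drivers: str

def load_item(json_text):
    """Deserialize a JSON payload into the typed Item, the same job the
    generated console app does with a file path from the command line."""
    raw = json.loads(json_text)
    return Item(
        product_description=raw["product_description"],
        internal_drivers=raw["internal_drivers"],
    )

item = load_item('{"product_description": "Drone X", "internal_drivers": "speed"}')
print(item)
```

The point of the demo is that the model produced the C# equivalent of this, end to end, from nothing but the item payload and three plain-language directives.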

      So I think that's pretty incredible. And this is maybe a lightweight version of that. And to tease you even further, we have dynamic nodes within our Integration Designer that can compile this code and run it against an F3M item, in the context of an F3M item. And I will save that as a teaser for anyone that wants to know more about the ForgeFlow integration platform.

      All right, so that kind of wraps up our presentation and our demonstration today. To sum it up, AI will be changing the game across most industries, and I don't think we necessarily know how that game will change, or even what that game may be. But this is a place you can hop on that high-speed train right now. Right? A Fusion 360 Manage integration gives us the extensibility we need, and an integration platform can easily take the data that's being moved across these systems and stick a language model in there to do something effective.

      So I hope, if nothing else, you got that we can start using some of these AI services right now to assist us in our manufacturing processes. For more information about ForgeFlow, you can hit up the links on the slide there; this will also be in your handout. For Fusion 360 Manage and Upchain, you can find more information about those products from Autodesk. And then please connect with us. We would love to answer any questions about this, and assist you with anything you need. And again, check that handout for more information, including that Postman collection.