AU Class

Intelligent Chatbots for Design Analytics: Leveraging AWS Bedrock and Autodesk Platform Services


Description

In this talk, we will explore the integration of AWS Bedrock with Autodesk Platform Services to create custom chatbots capable of answering complex analytical questions about your design data. Attendees will learn how to harness the power of advanced machine learning models and cloud services to provide real-time, insightful responses to design queries, streamlining the engineering and design workflow. The session will cover practical implementation steps, key benefits, and use cases, demonstrating how these intelligent chatbots can revolutionize design analysis and collaboration.

Key Learnings

  • Gain a comprehensive understanding of how to integrate AWS Bedrock with Autodesk Platform Services to build intelligent chatbots for design analytics.
  • Learn the step-by-step process to implement and customize chatbots that can access and analyze design data, providing real-time, actionable insights.
  • Discover practical use cases and the key benefits of deploying chatbots in engineering and design workflows, enhancing efficiency, collaboration, and decision-making.

Speakers

  • Greg Fina
    Gregory ("Greg") Fina is a Principal Solutions Architect in Strategic Accounts for Amazon Web Services. He primarily focuses on application modernization using on Serverless and Containers. Additionally helping customers develop scalable data storage for these efforts. When not supporting customers he works on Open Source projects related to Backstage. Greg's main interests are large scale deployments of Kubernetes and GitOPs and DevOps tooling. Greg has 20 years experience in leading large IT organizations, holds 9 AWS Certifications and a MS, BS in Computer Science and a BS in Computer Engineering..
  • Petr Broz
    Petr is a developer advocate at Autodesk. After joining the company in 2011 as a software developer, he contributed to a range of web-based platforms and applications such as 123D Online or Tinkercad. In 2018 he transitioned into the developer advocacy team where he has been helping customers create cutting-edge solutions using Autodesk Platform Services, with a primary focus on visualization and AR/VR.
      Transcript

      GREG FINA: Welcome to Autodesk University. This is SD3417, Intelligent Chatbots for Design Analytics, and we're going to leverage AWS Bedrock and Autodesk Platform Services today to do some creative work with the data that you have access to through Revit.

      I think when you think about AI, the movie WALL-E comes to mind, with the adorable robot who was helping the architect with the blueprint. And if you take a look at the blueprint, you'll notice that it may be upside down. And I think that's key, because a lot of what AI has done over the last few years is make you more informed and help you make better decisions.

      So with that, my name is Gregory Fina. I go by Greg. I'm a principal solution architect with AWS. I've been in Amazon for six years and have 20 years of IT and development background.

      PETR BROZ: And my name is Petr Broz, and I'm a developer advocate at Autodesk focused on Autodesk Platform Services. I joined Autodesk in 2011, worked as a software developer on a bunch of different projects-- web-based projects, typically-- and in 2018, I joined the developer advocacy team in order to help our customers build amazing solutions using APS.

      GREG FINA: And so what we're going to do today is we're going to first talk about how you actually access that design data. Then we're going to talk about LLMs and chatbots, and we hope to give you some background on what an LLM is and how you can use it for a chatbot. And then, like all good presentations, we're going to get out of PowerPoint and jump into the actual code and a demo of what we've built. And because we're hoping that this is interactive, we are going to save time at the end for questions and answers.

      PETR BROZ: Thank you, Greg. Before we can dive into our talk, there is one obligatory legal slide we need to cover, and that is the safe harbor statement. So in our presentation today, you will hear forward looking statements that may change in the future. And so we definitely do not recommend that you make any strategic or purchase decisions based on these statements. Thank you.

      All right, first things first. If you want to build a chatbot that will help you run analytical queries over your design data, you first need to get access to the data in your designs. How do you do that? A really good option here is Autodesk Platform Services. If you're not familiar with APS, it is a cloud development platform that allows developers to build custom solutions centered around their design data.

      There are a couple of web services that are part of the platform that you can use to access different types of information in your designs in different ways. So let's take a quick look at each one of those right now. The basic service that's part of our platform, called the Model Derivative API, is used to ingest over 70 different file formats -- not just Autodesk formats, but also our competitors'. So whether you're working with AEC data such as Revit, IFC, or Bentley DGN, or whether you're working with manufacturing data such as Inventor, Fusion, SolidWorks, or Creo, or maybe AutoCAD drawings, our platform and the Model Derivative service can ingest all these designs and start extracting information out of your design data.

      By information, we mean things like thumbnails, all kinds of 3D geometries, 3D views, 2D drawings, any kind of logical hierarchies that might be structuring the elements in your design, and more importantly, any kind of properties and metadata that's available on the individual design elements.

      All this information is extracted by the Model Derivative service, and it can then be accessed through a series of REST APIs in JSON form, so that you can start asking specific questions, filtering elements, and asking for specific fields on those elements that match your filtering criteria. Here's what such a query might look like. On the left side, we see the payload of a REST API call that we can send to the Model Derivative service. In this case, we're specifying a query -- a set of filtering parameters -- followed by a set of fields using wildcards, basically specifying the properties that we are actually interested in.

      So maybe we don't want all the properties for the elements that match the filter criteria. We are only interested, in this case, in object ID, name, external ID, or any properties starting with the word "Cons". On the right side, you can then see an example response in JSON that may be sent back to you by the Model Derivative service. In this case, we see a JSON containing a set of properties for the objects that match the filter criteria on the left side. And for each of the matching elements, we see properties such as Constraints Level or Construction Structure. All these properties and property groups basically match the fields' wildcard paths and names specified in the request payload.
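
      As a rough, hedged illustration of the kind of call described here (not the exact payload shown on the slide), a Python request to the Model Derivative "query properties" endpoint might look something like the sketch below; the token, URN, GUID, filter, and field paths are placeholders, and the exact query operators should be checked against the Model Derivative reference.

```python
# Hedged sketch: querying specific properties through the Model Derivative API.
# Token, URN, GUID, the filter, and the field paths are placeholders; the payload
# shape mirrors the slide example, but check the Model Derivative reference for
# the exact operators and response format.
import requests

ACCESS_TOKEN = "<access-token>"        # placeholder
URN = "<base64-encoded-design-urn>"    # placeholder
MODEL_GUID = "<metadata-guid>"         # placeholder

url = (
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/"
    f"{URN}/metadata/{MODEL_GUID}/properties:query"
)
payload = {
    # Illustrative filter: elements whose name starts with "Wall"
    "query": {"$prefix": ["name", "Wall"]},
    # Only return these fields / property groups (wildcards allowed)
    "fields": ["objectid", "name", "externalId", "properties.Constraints.*"],
    "pagination": {"offset": 0, "limit": 20},
}
resp = requests.post(url, json=payload, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()
for item in resp.json().get("data", {}).get("collection", []):
    print(item["objectid"], item.get("name"), item.get("properties", {}))
```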

      Another service that's available in our platform is called ACC Model Properties API. ACC here stands for Autodesk Construction Cloud. This is a new product offering by Autodesk that can be used for managing your AEC projects from the design stage all the way through construction and operation.

      And for any design data you may be managing in Autodesk Construction Cloud, the model properties API can be used to access the information in those designs as well in a slightly different way compared to the Model Derivative service. In this case, the way the model properties service works is, first of all, you can generate an index or a diff of two different versions of your designs hosted in ACC. And once those indices or diffs are computed, you can then run different kinds of queries, complex queries, over that data.

      Here's another example. On the left side, we see the list of property definitions for a specific index or a diff. These are just the definitions of what properties are available on elements in your design, whether they use any specific unit type, what type of data they actually include. On the right side then, you can see the content, the actual values of those properties attached to individual elements that then correspond to whether, let's say, that property is a dimension, whether it's a document name, what kind of data type it uses, what kind of units are associated with that property.
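
      A very rough sketch of that two-step flow (build an index, then query it) might look like the following; the endpoint routes, the query filter, and the response parsing here are illustrative placeholders only, and the actual routes and payloads should be taken from the ACC Model Properties API reference.

```python
# Rough sketch of the Model Properties workflow: create an index for a design
# version, then run a query over the indexed properties. All routes, payloads,
# and response shapes below are illustrative assumptions -- consult the ACC
# Model Properties API reference for the real ones.
import requests

TOKEN = "<3-legged-token>"      # placeholder
PROJECT = "<acc-project-id>"    # placeholder
BASE = f"https://developer.api.autodesk.com/construction/index/v2/projects/{PROJECT}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# 1. Ask the service to build an index for one or more design versions.
index_job = requests.post(
    f"{BASE}/indexes:batchStatus",                           # illustrative route
    json={"versions": [{"versionUrn": "<design-version-urn>"}]},
    headers=HEADERS,
).json()
index_id = index_job["indexes"][0]["indexId"]                # illustrative response shape

# 2. Once the index is ready, run a query over the indexed properties.
query_job = requests.post(
    f"{BASE}/indexes/{index_id}/queries",                    # illustrative route
    json={"query": {"$eq": ["s.props.<categoryKey>", "'Walls'"]}},  # illustrative filter
    headers=HEADERS,
).json()
print(query_job)
```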

      Another set of APIs, recent additions to our platform, are the AEC and manufacturing data model APIs. In case you have not heard about data models within Autodesk, this is a huge topic, as Autodesk is on a quest to basically break design data out of proprietary file formats and bring it into a graph structure in the cloud that you can then access efficiently and in a granular way.

      Now we have the AEC data model API and the manufacturing data model API. Those are currently in beta -- they're very fresh. But again, the general idea is that these APIs will allow you to use a graph-based approach, basically using a GraphQL API, to access your design data, whether it's AEC or manufacturing information.

      Here's one example of how you might be accessing this data in the future. This is a screenshot of one of our live samples, where we're using a GraphQL explorer. On the left side, we're specifying a query. In this case, we're basically filtering elements in a specific design, we're looking for properties on these elements, and we're specifically cherry-picking the property definitions and unit names out of those property definitions. That's what you can see in the GraphQL query on the left side.

      And then on the right side is the response to this GraphQL query example, where you can see that for a specific design, we're extracting information about properties of different elements, such as length, external ID, or area. And apart from the actual values for the length or area, we also include the property definitions, the units, and the unit names in the response. So again, GraphQL here gives you very precise, granular control over just the data that you actually need.
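
      For illustration, a GraphQL call of this kind can be issued with a plain HTTP POST, roughly as sketched below; the endpoint shown is the beta AEC Data Model endpoint, and the query fields (elements, properties, definitions, units) are illustrative rather than the exact published schema.

```python
# Hedged sketch of calling the (beta) AEC Data Model GraphQL endpoint.
# The endpoint URL and especially the query fields are illustrative -- the
# actual schema is documented on the APS developer portal and may differ.
import requests

TOKEN = "<3-legged-token>"  # placeholder
GRAPHQL_URL = "https://developer.api.autodesk.com/aec/graphql"  # assumed beta endpoint

query = """
query ($elementGroupId: ID!) {
  elementsByElementGroup(elementGroupId: $elementGroupId,
                         filter: {query: "property.name.category==Walls"}) {
    results {
      name
      properties {
        results {
          name
          value
          definition { units { name } }   # illustrative field names
        }
      }
    }
  }
}
"""

resp = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"elementGroupId": "<element-group-id>"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json())
```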

      There is one more functionality provided by our platform that is, today, also available in Autodesk Construction Cloud, and that is our data exchanges. Data exchanges are basically a concept that allows users of ACC to specify a subset of a design, let's say, a subset of a BIM model, that they can then share with other stakeholders, other projects, and potentially even other applications. So imagine you're designing a hospital. You have a BIM model of a hospital and you want to collaborate with the manufacturer to have railings manufactured for your staircase.

      What you can do with Data Exchange is that you can identify just the portion of your BIM model that represents the geometry of your staircase. And you can then share this Data Exchange with your manufacturer who can be using Inventor. They can be using even third party applications like Rhino or Grasshopper. They can use the subset of information you shared with them through the exchange to design the railings, and then they can provide those and actually manufacture those for you for your project.

      And in Autodesk Platform Services, we provide an API that you can use to access the elements and the element properties in these exchanges in a similar way to how you access your design data with the AEC or manufacturing data model APIs -- using GraphQL as well.

      Here's another live sample we have available in our code samples on the developer portal. In this case, we visualize the graph and the individual nodes/elements of your design with the relationships between them using the simple graph rendering. Here, individual nodes represent things like a door, and a door can be associated with a template with metadata about this type of door, and the node element can also be associated with another element representing the wall that the door is part of. You can see, there may be very complex sets of relationships and connections between the graph nodes.

      And finally, a very important and powerful service in our platform called the Design Automation API can be used to host custom plugins from some of our hero desktop products in the cloud. Imagine you are a Revit, Inventor, AutoCAD, or 3ds Max developer. You build a plugin, and with the Design Automation API, you can deploy this plugin to the cloud and execute it -- let it do whatever work you want it to do in the cloud -- without having to have these applications installed on your machine.

      The Design Automation service is used for different kinds of workflows and use cases. Sometimes, it can be to build configurator-like experiences, where your plugin running in the cloud may read in a bunch of basic inputs and generate a new design based on those inputs.

      Other use cases could be maybe the plugin reading an existing Revit or Inventor design, running some model-checking process on top of that, and outputting maybe an Excel spreadsheet listing all the problems found -- elements maybe not satisfying certain design goals. And for our topic, these custom Revit plugins -- or Inventor, AutoCAD, or 3ds Max plugins -- can also be used to access and extract whatever kind of information you want out of your designs.

      Finally, here is a screenshot of another live sample we have available, where we are extracting different types of design data -- in this case, asset information out of a Revit model -- using Design Automation.
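
      For a sense of how such a cloud-hosted plugin gets triggered, here is a hedged sketch of posting a Design Automation work item from Python; the activity ID, signed URLs, and argument names are placeholders, and the app bundle and activity are assumed to be registered with the service already.

```python
# Minimal sketch of kicking off a Design Automation work item that runs a custom
# Revit plugin in the cloud. Activity ID and argument names are placeholders;
# the plugin, app bundle, and activity must already be registered.
import requests

TOKEN = "<2-legged-token>"  # placeholder
workitem = {
    "activityId": "MyNickname.ExtractAssetsActivity+prod",  # placeholder activity
    "arguments": {
        "rvtFile": {
            "url": "https://example.com/signed-url-to-input.rvt",   # placeholder input
            "verb": "get",
        },
        "result": {
            "url": "https://example.com/signed-url-for-output.json",  # placeholder output
            "verb": "put",
        },
    },
}
resp = requests.post(
    "https://developer.api.autodesk.com/da/us-east/v3/workitems",
    json=workitem,
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.json())   # returns a work item you can poll for status
```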

      GREG FINA: Now that we know how to access design data, we want to talk about generative AI, large language models, and chatbots. And the reason I've titled this section large language models is because, to me, that's the more interesting part of generative AI: taking some human input, like how much paint you need, and getting a response that the system builds.

      So with that, I'd like to go into a stat: according to Gartner, more than 80% of enterprises will use generative AI APIs or deploy generative AI apps by 2026. And I think it's very interesting, because generative AI really gained traction in 2023 and there were a lot of early adopters. Amazon and Autodesk were early adopters of generative AI, and I'm sure a lot of your businesses were early adopters.

      But this year, it's going to be different because we're seeing huge growth of how these generative AI apps are going to improve your customer experience. And what I think is most interesting about this stat is just two years ago, only 5% of businesses said they would use AI or generative AI in their apps. So we've seen a huge explosion.

      I think that one of the things that is concerning, though, is when you look at construction as a whole, only 72% of construction companies plan to continue to increase their spend on AI and emerging technologies, and 77% will add AI or emerging technology spend, which is still below the collective industry average of 80%. So we want to find a way to help improve the AI experience for construction companies.

      And so let's take a minute to go back to what's fueling artificial intelligence innovation. Why have we gone from 5% to 80% of customers wanting to build generative AI into their applications? There are three main components. First is the explosive growth of large language models -- and later, I have a slide showing that growth. If you've seen the news lately, basically every other day another company comes out with a larger language model than the previous day, which is great for the ability of generative AI to continue to grow.

      I also think that data scientists no longer have to label every piece of data. So if you were using AI 5, 10 years ago, you needed data scientists to basically go in and label your data for the training models. And that was a very time consuming process. You no longer have to do that.

      And I think the third item, and one of my passion areas, is open source. A lot of these models are open source or allow you to fine-tune them, and that's continuing to drive this adoption. And what's nice about it is a lot of these models are based on Python, which is a very easy programming language to learn, and they use tools like PyTorch.

      So you have a ton of use cases, starting with productivity. I don't know about you, but I found one of the best innovations in the last year is that I can basically summarize meetings into a word or two and understand what happened, or I can have generative AI rewrite my emails. We're also going to talk about chat and virtual assistants, and we'll get into the chatbot in a minute.

      I think if you know anything about the Amazon culture, we like to write six-page narratives. Well, if you know anything about our six-page narratives, they're really 34 pages by the time you add the appendix. And so one of the great things about generative AI is I can take that and feed it into an algorithm, and it can give me a summary. So maybe I don't want to read the whole 34 pages, and I just want to know if I should show up at the meeting. That's what summarization does.

      And I would say the other big thing is search. I don't know if any of you have used a generative AI search tool, but today, I'll ask our product, Amazon Q, before I actually go to Google, because it's so much easier. I have it open in an IDE on my desktop, and I go to Q first before I actually go to Google, because usually I don't have to page through the results -- Q is pretty good at giving me the first answer. And you'll see in a minute that Q is based on Bedrock.

      And the other one, which is really interesting -- and is actually how I got started with Petr on this -- was code generation. I didn't know how to build code for the data model. So when I did the first iteration of some of this work, I had Q generate the code that we would later use and expand more and more. And so this has been an iterative process over the last six months. And you can see the other use cases.

      I would say that there are challenges, though. First is performance. Am I using the right language model? Am I getting the best answer? And am I doing it at the cheapest possible cost? If you know anything about training large language models, you need hundreds if not thousands of GPUs, and then you need to run inference on them, which is more GPUs. So there's a huge cost.

      And ease of use: how easy is it to engage, how easy is it to use? And then the last thing, especially given we're representing Autodesk and Amazon, two very sustainable companies, is sustainability. How am I using the energy? Because GPUs use a ton of energy. So these are really the common challenges that you need to understand if you were going to go build your own LLM.

      And so why are the models getting bigger? AI has been around for over 50 years -- go back to 1957, that's when AI started. And then in 2012, it started to get interesting, because you ended up with AlexNet, which was a 62 million parameter model. And it exploded in 2020 with the release of GPT-3, which was a 175 billion parameter model. You have Switch-C, which is now 1.6 trillion parameters, and you have several other models in that parameter space.

      What's key here is understanding that there's been growth -- but also understanding what a parameter is, because it's really not something that's easy to grasp. It's basically the way in which a model learns and generates text. Parameters shape a model's understanding of language and influence how it's going to process input and output. So it's not one to one.

      Like, if there was a sentence that said the tree is green and the car is green, you're not going to end up with six parameters in the model. You're going to end up with a subset of those, because the model is smart enough to understand which parameters it needs.

      Meta just released a model that has 405 billion parameters. And in this demo, we're going to use Mistral, which has over 70 billion parameters. So there are a lot of models to choose from. And one of the things to understand when you're building with models is that there's more than one type of model family.

      And I'm going to ask that we just focus on the middle, which is the causal models, because that's really the space we're in. We're in the GPT space -- I'm going to ask for instructions based on some question -- and that's what we're going to do. But there are other models that you can use as well, and those models will help you with different tasks.

      So for instance, masked models really focus on sentiment analysis. And sequence-to-sequence models are really more about text summarization -- is it something I've seen before that I can basically summarize? And so we're going to use these models to basically build SQL instructions, as we'll demo.

      But in case you were curious about what you would need to do if you were actually going to go build your own model: you need to do a lot. You're going to need your model frameworks, you're going to need your libraries, you're going to need your underlying hardware libraries -- whether you use CUDA for NVIDIA chips or AWS Neuron for Amazon chips -- you're going to need to do a bunch of training, and you'll see that PyTorch is pretty consistent across the board. And then you'll need to do inference. It's really complicated, and it's actually really hard to go build a model and train a model.

      So we at Amazon have done something to make it easy, and what we've introduced is Amazon Bedrock. It's the easiest way to build with and manage foundation models, and you can choose which model you want to use. You can access all of these models in three ways: through the console, through the CLI, or programmatically. We give you the ability to customize the models, we give you the ability to use RAG to ground the answers in your own data, and we allow you to scale.

      And the whole point of Bedrock is basically to choose the right model for the right work. And so you may find that a Claude model might work better if you're asking for a certain sort of task, and Mistral may work better for generating SQL queries. We don't tell you which model to use -- we give you the choice to use them all.

      And so we have a lot of models. We have our own models that Amazon has built, which are called Titan models. Anthropic has a bunch of models that they've released, which are great for summarizing and writing code. Meta has released a bunch of models that are great for reading comprehension, and Mistral also has released a bunch of models that are great for code generation. So you can choose any one of these models and they're all behind the same API endpoint. So if you want to, you can also switch out models.

      We've made it simple by putting APIs in front of all these models, which is what Bedrock does, as well as some other things: Bedrock gives you AI controls, it gives you guardrails, and it allows you to control the output. So it allows you as an organization to manage what your users do.
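
      To make the "same API, swappable models" point concrete, here is a minimal, hedged sketch of calling a Bedrock-hosted model from Python with boto3; the model ID is a placeholder, and swapping models is mostly a matter of changing that ID.

```python
# Hedged sketch: calling a Bedrock-hosted model programmatically with boto3.
# The model ID is a placeholder -- because every model sits behind the same
# Bedrock runtime API, switching models mostly means changing this ID.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="mistral.mistral-large-2402-v1:0",   # placeholder model ID
    messages=[{"role": "user",
               "content": [{"text": "How much paint does a 144 square foot room need?"}]}],
    inferenceConfig={"temperature": 0.0, "maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```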

      So I want to take a step back and think about how we got to where we are today. We got here because I had a question back in March about how much paint does a room need. And it was a really personal question because I went to paint a room and I ended up having to go to the hardware store three times. I'm sure many people have had that same experience. But it becomes a more complex problem when you go, how much paint do you need to paint a building? Or how much flooring does a building need?

      So if you think about it and you think about how this problem scales up, it becomes a really great problem to have an LLM solve. And so for a minute, we're going to use Bedrock. And Bedrock -- this is through the AWS console -- is live, and it looks like a chatbot in itself. You write a prompt, and it gives you an answer. There are some parameters on the side that you can set or change, and we'll get into those parameters a little bit later, but understand that this is also available via the console or the command line, however you want to interact with it.

      But let's take that first question, really, of how much paint does a room need. So I asked Bedrock, how much paint does a 144 square foot room need? And Bedrock responds with about 2.2 gallons. And to be honest with you, that was spot on, because when I painted that room, I went to the hardware store three times: I got a gallon first, then I got a half gallon, then I got another half gallon, and then I ended up getting a quart.

      So it was a miserable experience, because each time, I needed a little bit more paint. I wish I'd asked Bedrock -- I would have just bought 2 and 1/2 gallons to start with, and it would have saved me a ton of time. But if you look at a building: how much paint does a 2,400 square foot building need, with multiple exterior walls?

      Well, Bedrock then starts to make a bunch of assumptions and it tells you what the assumptions are, and then it gives you a really broad range. It says, well, you need 13 to 20 gallons of paint. Well, if you're a painter or somebody that's planning a building or doing costing, this really doesn't help.

      So what I would say is you sometimes have to improve the prompt. So this is where the APS service comes in, because you actually know how much space a wall has and how many walls you have, and you can sum them up. So let's assume that in that 2,400 square foot building, when I reached out to the APS service, it basically told me I had 3,200 square feet of exterior and interior walls.

      Well, now Bedrock says, you kind of need between 20 and 22 gallons, depending on paint coverage. So the more information you feed the model, the better the results are going to be. And now I'm going to turn it back over to Petr to take you into our sample app.

      PETR BROZ: Thank you, Greg. OK, so when Greg and I started talking about this idea, we decided to put together some sample code -- a simple application that combines the power of AWS Bedrock with the power of Autodesk Platform Services, basically combining the capabilities of extracting design data and running analytical queries over that information.

      The application is pretty basic. The user interface lets users browse their existing projects in Autodesk Construction Cloud on the left side and select a specific version of a design. And after that design is selected and loaded into our viewer in the center, you can then start having a conversation with your AI assistant using the sidebar on the right.

      So again, there are two main components to this application. First one is being able to actually extract information out of that currently loaded design so that we can later run analytical queries over it with the help of a large language model. For this demo, for this sample application, we decided to go with the first option, with the first service I explained earlier, the Model Derivative API.

      What we do is we extract information out of the specific design, the full information, then we cherry pick a bunch of properties-- different types of properties-- and we store them in a simple SQLite database. And I'll explain why we do this, why we chose this approach in a second. And we can review the code later. We'll show you a demo and then cover the source code in a bit more detail.

      And the second important component of our application, obviously, once the data is extracted and prepared in a SQLite database, is to be able to run and execute queries on top of that data. For this, we chose LangChain. LangChain is an open-source framework for building applications and pipelines on top of large language models. It's available for Python and TypeScript. It's basically a collection of components that you can chain together. These components can be abstractions over different kinds of models, so that you can have a single pipeline and start swapping out different large language models behind the scenes without having to worry about the differences in their API interfaces. And there are also some other very interesting tools available in that component collection; we'll cover some of those as well.

      The actual loop that our application handles is all centered around the design data. As soon as you select and load a design in our application, we prepare the SQLite database of the selected properties, if it's not available already. And we start running this loop where we first wait for a prompt from the user; then LangChain uses an LLM to turn this natural language question into a SQL query, while also providing the schema of all the tables available in the SQLite database.

      The SQL query is then executed to get the results and the raw results combined with the question are then sent once again to a large language model to turn the results into a natural language response. All that is very nicely and elegantly handled by the LangChain pipeline that we set up. Then we embed the response into a history of our chat so that next time you ask a question, it can actually be a follow-up question based on some information that was already provided to you earlier.
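
      A minimal sketch of that question-to-SQL step, assuming a local SQLite file and a Bedrock-hosted Mistral model (the file name and model ID are placeholders, not the authors' exact code), might look like this:

```python
# Hedged sketch of the core step described above: the SQLite schema plus the
# user's question go to the LLM, which returns a SQL query we can then execute.
# Database path and model setup are assumptions, not the authors' exact code.
from langchain_aws import ChatBedrock
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///design_properties.db")   # assumed file name
llm = ChatBedrock(model_id="mistral.mistral-large-2402-v1:0", region_name="us-east-1")

# create_sql_query_chain injects the table schemas into the prompt for us.
to_sql = create_sql_query_chain(llm, db)

question = "What are the top 3 objects with the largest area?"
sql = to_sql.invoke({"question": question})
print(sql)           # the generated SQL query
print(db.run(sql))   # raw results, later turned back into natural language
```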

      All right. This is what that code looks like. Putting together a LangChain pipeline is quite simple, and again, we will review this code a bit more closely after our demo. All right, let's see what our application looks like.

      In this case, we are running our application locally, so we go to our localhost. We navigate to one of our projects on the left side, we find a sample Revit file, and we select a specific version of this Revit file to be loaded in the viewer. And as soon as the model is loaded -- by this time the SQLite database is already prepared -- we can start asking our AI assistant different questions. We start with a predefined question. We're asking, what are the top three objects with the largest area?

      And we're also adding this extra comment to the question: we're asking the large language model to actually output the results as a JSON array. And the reason we do that is we added a nice little cherry on top to the client-side JavaScript code, where whenever the response from the AI assistant is a JSON array of numbers, we make that array interactive, so that by clicking that list, you can actually isolate the corresponding elements in the viewer. So let's see what that looks like.

      When our AI assistant responds with these three IDs, we can click the list and immediately isolate those corresponding elements in the viewer. Now let's try asking more questions. Let's say we want to get a list of all the walls and their IDs so that we can maybe visually identify them in our viewer as well. As soon as we get the list of IDs, these will once again be made interactive, and we can quickly identify those wall elements in our design.

      Let's try a different kind of query -- a more aggregate type of query. If we ask, what is the total volume of all the walls? This will, under the hood, basically translate into a SQL query, and we can confirm, or quickly check, whether this number is correct by looking at the volume properties of the individual wall elements that we currently have isolated in the viewer.

      Now let's try a different example. Let's say this time, we want to get the list of all floors. Similar question as before, we again ask for a JSON list so that we can nicely isolate these in the viewer. These are our floors. And another aggregate type of query would be, let's say, what is the total area of these four elements.

      And again, we see the result is 442 square meters. And again, if we wanted to check if that is the correct answer, we can use the viewer's property panel and check the area of the individual floor elements. Now for another type of query, let's say we want to filter elements that have their volume value in a certain range -- between 5 and 10, in this case.

      And as you might have guessed, this will again give you the list of elements that match this filter criteria. Under the hood, this is all translated into a SQL query by the large language model -- in this case, as Greg mentioned, we're using Mistral. You can see that the elements filtered by our assistant do in fact match the filter criteria. And here, you can see in the logs of our running application that this last prompt, the last question we asked our assistant, was in fact converted into a simple SQL query getting the IDs of elements whose volume value is between 5 and 10.
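
      For reference, the kind of SQL the assistant generates here can be checked directly against the extracted database; in this small sketch the table and column names ("properties", "dbid", "volume") are assumptions, not the actual schema of the sample.

```python
# Illustrative check of the kind of SQL the assistant generates for the range
# filter above, run directly against the extracted database. Table, column,
# and file names are assumptions for this sketch.
import sqlite3

conn = sqlite3.connect("design_properties.db")   # assumed file name
rows = conn.execute(
    "SELECT dbid FROM properties WHERE volume BETWEEN 5 AND 10"
).fetchall()
print([dbid for (dbid,) in rows])   # IDs the viewer can then isolate
conn.close()
```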

      And for a final example, let's ask our assistant a bit of an unusual question: is there a Beetle in our model? Yes, apparently there is one. So let's see where it is. We can ask our assistant one more time about the ID of our Beetle object in the form of a JSON array so that we can actually find it in the model. And there it is. This is our Beetle.

      All right. Now, let's take a quick look at the source code and see how this functionality was actually implemented in a very small amount of code. This is the source code of our design chatbot. The first of the two main components here is the design data extraction using Autodesk Platform Services. As we described earlier, we use our Model Derivative API to get the JSON payload of all the properties available for this particular Revit design, and we then select a subset of properties and store them in a SQLite database; that is handled in this part of the code.

      As you can see, this is really roughly 100 lines of code that are responsible for taking the design data out of your Revit model, in this case, and storing it in a SQLite database. And on the other hand, we have our chatbot loop based on LangChain connected to AWS Bedrock to actually process our questions into SQL queries, execute those queries, and turn those results back into natural language responses.
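
      A hedged sketch of that extraction step might look like the following; the URN, GUID, token, table layout, and chosen property names are assumptions rather than the authors' exact code.

```python
# Hedged sketch of the extraction step: pull all properties for a design from
# the Model Derivative API and store a chosen subset in SQLite. The URN, GUID,
# token, table layout, and the property group/names used here are assumptions.
import requests
import sqlite3

TOKEN = "<token>"                      # placeholder
URN = "<base64-encoded-design-urn>"    # placeholder
GUID = "<metadata-guid>"               # placeholder

url = (
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/"
    f"{URN}/metadata/{GUID}/properties"
)
props = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()

def to_number(value):
    """Property values often come back as strings like '11.2 m^3'; keep the number."""
    try:
        return float(str(value).split()[0])
    except (ValueError, IndexError):
        return None

conn = sqlite3.connect("design_properties.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS properties (dbid INTEGER, name TEXT, volume REAL, area REAL)"
)
for obj in props.get("data", {}).get("collection", []):
    dims = obj.get("properties", {}).get("Dimensions", {})   # assumed property group
    conn.execute(
        "INSERT INTO properties VALUES (?, ?, ?, ?)",
        (obj["objectid"], obj.get("name"),
         to_number(dims.get("Volume")), to_number(dims.get("Area"))),
    )
conn.commit()
conn.close()
```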

      As you can see here, for Bedrock itself, for actually creating SQL queries out of natural language prompts, and for keeping the history of our chat with our assistant, we use LangChain components for most of this work. This is a really nice set of abstractions and components that the LangChain open-source framework provides, which you can then put together like Duplo blocks to build the type of pipeline that we actually need.

      So what we do: we start by creating a SQL database from the SQLite file that we generated earlier. This is so that we can later extract the schemas of the tables that are in the database, and this information gets added to the prompt sent to the LLM. When we're asking a question, we're also including the information about the schemas of the tables available in the database, so that when the model is generating a SQL query, it's really successful at providing the best possible SQL query corresponding to that database and its tables. As I mentioned, we're using Mistral here, and we're using the Bedrock component. Here, Greg, I don't know if you want to add anything about the Bedrock connection.

      GREG FINA: Yeah, I would just point out that underneath the hood, basically, LangChain is calling the Amazon SDK. And in that, we're making Bedrock available as an API call. And in the API call, you obviously need your credentials. You also need to specify a region. But as you can see from our testing, we actually tested it with four different models. So what makes it nice is you can change out the models based on the results and figure out which model works the best without changing any of your other code.

      I'd also like to point out that there are three parameters here that we sort of mentioned before. The first is temperature, which is the amount of randomness that you're actually injecting into the response. So for an analytical or multiple-choice task, you want it to be 0. If you're doing a creative process, you want it to be closer to 1.

      Max tokens is going to limit the response -- basically, it's the maximum sequence length you can have in the response. And obviously, when you're using LLMs, you pay for the response, so if you shrink this down, it limits the cost. And max retries is meant to limit the repetition that you would see in the tokens.

      So a lot of times, large language models will run and get multiple responses, and then they'll look at those responses for repetition, and the max retries setting is trying to shrink down that repetition. So with that, it's really simple -- that's really all of the code you need to go access Bedrock. Again, it's a really simple process; you'll end up building the prompt later, but this block of code is basically setting up the API call.
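
      As a minimal sketch of the connection Greg describes (the model ID and parameter values are placeholders, and depending on your LangChain version the class may live in langchain_community as BedrockChat instead), the setup might look like this; retry behavior can typically be tuned on the underlying boto3 client rather than here.

```python
# Minimal sketch of the Bedrock connection: LangChain's Bedrock chat wrapper
# calls the AWS SDK under the hood, so credentials and region come from your
# normal AWS configuration. Model ID and parameter values are placeholders.
from langchain_aws import ChatBedrock

llm = ChatBedrock(
    model_id="mistral.mistral-large-2402-v1:0",  # swap the ID to try a different model
    region_name="us-east-1",
    model_kwargs={
        "temperature": 0.0,   # 0 for analytical answers, closer to 1 for creative ones
        "max_tokens": 512,    # cap the response length (and therefore the cost)
    },
)
print(llm.invoke("Return only SQL: select the 3 largest areas from table properties").content)
```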

      PETR BROZ: Thank you, Greg. I mean, putting this part together was extremely straightforward and really easy to work with. A great experience. And then, after we've set up our large language model, we then use another component from the LangChain framework, in this case, to really build a small chain for the SQL query. So we basically provide a large language model that will be used to convert that natural language prompt into a query. We include the database itself so that, again, the large language model gets information about the actual structure of the data in the database, and we select the dialect, in this case SQLite, so that the SQL queries that the large language model-- in our case, Mistral-- generates are actually valid for SQLite.

      And that's it, really. We ask the large language model to generate a SQL query for us. We then use yet another tool from the LangChain tool belt to actually execute this query over that SQLite file for us. And we then embed the result of the SQL query, together with the original question, into our chat prompt template. We also include our chat history, so, as I mentioned earlier, if you have a follow-up question about one of the questions you asked earlier, you can easily ask it.

      And that is it. We created the final chain, or pipeline, with the history, using in-memory storage just for the historical data. And that is our chatbot session. This is all the code -- a total of some 80 lines -- to actually build this LangChain pipeline, the loop that will wait for prompts from the user and keep turning them into SQL queries, executing those queries, and turning the results back into natural language answers. Super simple.
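
      Putting those pieces together, a hedged end-to-end sketch of the loop -- question to SQL, SQL to rows, rows plus history back to a natural language answer -- could look roughly like this; the database file name, session key, and prompt wording are assumptions, and the authors' actual sample may differ.

```python
# Hedged sketch of the full question -> SQL -> answer loop with chat history,
# assembled from standard LangChain components as described in the talk.
from langchain_aws import ChatBedrock
from langchain_community.utilities import SQLDatabase
from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.chains import create_sql_query_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables.history import RunnableWithMessageHistory

db = SQLDatabase.from_uri("sqlite:///design_properties.db")   # assumed file name
llm = ChatBedrock(model_id="mistral.mistral-large-2402-v1:0", region_name="us-east-1")

write_sql = create_sql_query_chain(llm, db)   # question + table schemas -> SQL
run_sql = QuerySQLDataBaseTool(db=db)         # SQL -> raw rows

answer_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user's question using the SQL result."),
    MessagesPlaceholder("history"),
    ("human", "Question: {question}\nSQL result: {result}"),
])
answer = answer_prompt | llm | StrOutputParser()

history = ChatMessageHistory()                # in-memory chat history
chat = RunnableWithMessageHistory(
    answer,
    lambda session_id: history,
    input_messages_key="question",
    history_messages_key="history",
)

while True:
    question = input("You: ")
    sql = write_sql.invoke({"question": question})
    result = run_sql.invoke(sql)
    reply = chat.invoke(
        {"question": question, "result": result},
        config={"configurable": {"session_id": "demo"}},
    )
    print("Assistant:", reply)
```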

      All right. And with that, that is a wrap for our presentation today. Before we let you go, I would just like to quickly mention two types of events that we run-- we, as a developer advocacy team. So if we manage to pique your interest and you maybe want to get started with Autodesk Platform Services, or maybe you already have some ideas about solutions that you could build with APS, definitely check out our online trainings. The next one is taking place on November 18 through 21.

      This event is really good for people who, instead of going through our online tutorials on their own, prefer a more guided approach -- where maybe you want to join us, see us go through a tutorial, and follow our steps. And if you get stuck, you can just ask right away; we can unblock you and make sure that you are able to go through one of our tutorials without any hassle.

      And on the other hand, the developer accelerator. So accelerators are a different type of event that we run roughly once every two or three months. These are events where we invite customers to join us in one of our offices around the world for a week to work on a specific idea that they have. It could be an idea for a prototype they want to build, an idea they want to validate, or maybe a new feature they want to add into an existing solution.

      And the idea behind an accelerator is that you join us for a week in a conference room with a bunch of my teammates in a room that can help answer your questions immediately. So instead of you having to send emails, go to Stack Overflow and start asking questions or going to online forums, you can just turn around and say, hey, Petr, so I'm thinking about implementing this particular feature in this way. Is that a good idea? If not, would you recommend something else?

      Or if you get stuck on anything while implementing that idea of yours during that one week, you can just turn around and say, look, Petr, I'm running into this problem. Could you help me out? And instead of waiting for somebody to answer to your support ticket, I'm there. I'm going to try and unblock you as soon as I can. So hopefully, by the end of the week, you will have been able to complete your prototype or validate that idea that you brought.

      So the next accelerator is taking place in Atlanta. That's going to be the week of December 9. So again, if you're interested in maybe trying Autodesk Platform Services, trying to build a prototype, some idea using our platform, definitely check out these events. And you can find those on our developer portal as well.

      And with that, I want to thank you all for your attention. I definitely want to thank Greg for the great collaboration we had on this project. I hope you enjoyed it as much as I did. And I wish you all a great rest of Autodesk University.

      GREG FINA: Yes, Petr, I did enjoy the collaboration, but I think you forgot to mention that this project came out of an accelerator itself, right? The whole process of this chatbot came out of the accelerator where we got together in London and developed it. So I highly recommend that if you're doing a prototype, think about going to an accelerator.

      But I really want to thank you for the partnership on building this. I think combining Autodesk and AWS technologies has been great. And I think the fact that we have working code for anyone who wants to leverage this will be very helpful to those customers who are looking to add a chatbot into their APS solution, or maybe put their own spin on it. So thank you as well.

      PETR BROZ: Absolutely. Thank you.
