AU Class

Explore the Power of the Autodesk Manufacturing Data Model and APIs Using GraphQL


Description

With Manufacturing Data APIs, you can read, write, and extend your design model through cloud-based workflows. All this without the need for desktop authoring applications like Autodesk Fusion 360. You can access granular, transparent data via GraphQL APIs, and programmatically access manufacturing information to achieve a variety of cloud-based automations and workflows more efficiently than REST APIs.

Key Learnings

  • Learn about the Manufacturing Data Model and what you can do with the granular data accessible via the APIs today, i.e., common use cases and workflows.
  • Explore GraphQL syntax and usability, with example queries and best practices for building web applications using the Manufacturing Data Model APIs.
  • Gain an overview of our API developer docs and step-by-step tutorials, including how to use our interactive data explorer, a sandbox for trying the APIs.
  • Get an overview of the Manufacturing Data Model API roadmap and upcoming betas.

Speakers

  • Aditi Khedkar
    Aditi Khedkar is a Senior Product Manager in the Autodesk Platform Services Product Data organization. She leads the Manufacturing Data Model. As a platform product manager, she is excited to drive the future of data granularity and interoperability that enables customers to automate and collaborate through cloud data models. On a personal front, Aditi enjoys hiking and traveling.
  • Patrick Rainsberry
    Patrick is a Senior Product Manager at Autodesk working on API and Automation projects for Fusion 360 and the Manufacturing Data Model. He has a mechanical engineering undergrad degree from UC Berkeley and a master's from UCLA as well as an MBA from the University of La Verne. He has been working in the CAD industry for over 20 years.
      Transcript

      ADITI KHEDKAR: Welcome to this AU class called Explore the Power of Manufacturing Data Model and APIs Using GraphQL. Let's start with a quick round of introductions. My name is Aditi. I'm a product manager on the Autodesk platform services leading the efforts on the manufacturing data model. And co-presenting with me is Patrick Rainsberry, senior product manager from Fusion 360.

      So let's start with a quick safe harbor statement. This is essentially stating: please do not make any purchasing decisions relying solely on the statements made during this presentation. Kindly read it and absorb it at your convenience. And let's get right into the presentation.

      So in this class today, we'll cover a quick recap of the Autodesk Data Strategy, learn a bit about the Manufacturing Data Model, what you can do with the granular data, what kind of data is available, along with some example queries. We'll talk through some common use cases and workflows that get unlocked using these APIs. And we'll also talk about where to find developer information like the documentation and some tools.

      And then Patrick's going to cover exploring GraphQL syntax, usability, some in depth queries, along with examples and then best practices on building web applications, using the manufacturing data APIs. And then we'll conclude with our public facing roadmap and essentially how to get in touch with us.

      So let's kick it off with a recap of the Autodesk Data Strategy. So over the years, we've seen three key trends emerging and accelerating. Our customers need more automation, whether it's to increase productivity, improve efficiency, or just faster time to market. There's also a need for more collaboration within organizations as well as across disciplines.

      We're also seeing trends towards industry convergence across AEC, manufacturing, and media and entertainment. But at the foundation of all of these trends really lies data. So to respond to these challenges, our data strategy is focusing on three key areas: data granularity, data interoperability, and data accessibility. In this presentation, we'll focus on the first pillar, which is data granularity.

      Today, the file is the smallest unit of collaboration. But we know file-based collaboration is painful. As files move between products and organizations, there's a patchwork of standards. This results in challenges like translations and the fact that multiple disciplines can't work with the same design simultaneously.

      This also leads to mistakes, as teams are managing different versions of these files. And it makes exchange of data across organizational boundaries rather difficult, due to IT reasons.

      So it's clear that we need a better way to manage design and make data: a standardized way of describing data that allows for real-time or close to real-time access, and ensures that access controls are in place, so that the right data is available to the right people at the right time.

      So the fundamental idea here is to decompose files into valuable bits of data that are stored and managed in the cloud. This allows different disciplines to concurrently work against the same model. And you just get the specific granular data that you need, and not a big old file. And making use of simple APIs that expose the granular data, rather than all of the data at once, really makes the data accessible, not just within Autodesk products, but also to third parties that are now able to participate in the ecosystem.

      So at Autodesk, we've been working on these cloud data models for each of our industries. The data models support the Autodesk Industry Clouds, Fusion, Forma, and Flow, and are built on a cloud-based granular graph data architecture that allows customers to access their cloud design and make data.

      We started on this journey in the manufacturing space. This is already available in production. And we also have other industry models for AEC and M&E currently in development across different beta programs.

      So let's do a bit of a deep dive into the manufacturing data APIs. So I'll quickly cover the overview, what data is available with some example queries side by side, the developer docs, and the common use cases. So as I mentioned, the manufacturing data model is really a way to store your manufacturing design and make data in the cloud, making it easily accessible via APIs without the need for a desktop authoring app like Fusion 360.

      The manufacturing data APIs enable granular access, but more importantly, in real time. So you can actually read, write, and extend design data through cloud-based workflows. The data APIs use GraphQL technology to expose Fusion 360 design and make data. So let's touch a little bit on GraphQL.

      So when you think about graph data, you might picture your social graph on Facebook or your professional graph on LinkedIn. These are nothing but collections of interrelated people, places, posts, and more. CAD data is also a giant graph of little bits of data that have many relationships to each other.

      So when we were designing the Cloud APIs for our data models, we wanted a great developer experience that makes querying these graphs of data rather friendly. So we chose GraphQL, which is an open source data query and manipulation language for APIs, as it was an ideal choice for our complex design and make graph data.

      GraphQL actually provides very powerful, granular querying capabilities for our multi-relational landscape of design and make data. The queries are human friendly, easy to learn, easy to understand. And it exposes the data in a way that a Fusion 360 user is already very familiar with. So for example, accessing components, occurrences, sub occurrences, and so on.

      And you can see how nicely the hierarchy information provided by GraphQL lines up with the structure shown in the model palette inside of Fusion 360. So Patrick's definitely going to cover a lot more in depth best practices of GraphQL, the syntax, and how to build web applications a lot more in detail during his part of the presentation.

      So let's touch a little bit more in depth, in terms of what data is available through the manufacturing data APIs today. So you can navigate Fusion 360 data from hubs, projects, folders, down to individual components and drawings. So on the left hand side, you can see a simple GraphQL query that shows you how to fetch your hubs and projects, as well as folders and all of the contents within that folder.
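As a concrete illustration, a navigation query and the traversal of its JSON response might look like the sketch below. The field names (`hubs`, `projects`, `results`) are hedged guesses based on the query shapes described in this talk, not a verbatim copy of the API schema; a real call would POST the document to the GraphQL endpoint with an access token.

```javascript
// Hypothetical navigation query: hubs -> projects.
// Field names are illustrative; check the Manufacturing Data Model
// API reference for the exact schema.
const NAV_QUERY = `
  query {
    hubs {
      results {
        name
        projects {
          results { name }
        }
      }
    }
  }
`;

// A mock response shaped like the query above.
const mockResponse = {
  data: {
    hubs: {
      results: [
        { name: "My Hub", projects: { results: [{ name: "Gearbox" }] } },
      ],
    },
  },
};

// Because the response mirrors the query, traversal is plain
// property access over the returned JSON.
const projectNames = mockResponse.data.hubs.results.flatMap(
  (hub) => hub.projects.results.map((p) => p.name)
);
```

The key property to notice is that the response object has exactly the shape of the query document, which is what makes client code so direct.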

      You can also fetch component properties such as name, part number, description, and material name. And you can also get assets like the thumbnails of a component. The thumbnail request is an async call, which triggers generation of the image. And your application would need to poll every so often and check the status until it comes back as successful.
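The poll-until-done pattern described here can be sketched as below. The status values and the `checkStatus` helper are illustrative, not the API's exact enum or call; `checkStatus` stands in for re-running the thumbnail query.

```javascript
// Sketch of polling an async result (e.g. a thumbnail) until the
// service reports success. Status strings are assumptions.
function pollUntilSuccess(checkStatus, maxAttempts = 10) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = checkStatus();
    if (result.status === "SUCCESS") return result.url;
    // A real app would wait between attempts (setTimeout or a
    // promise-based sleep) instead of looping hot like this.
  }
  throw new Error("generation did not complete in time");
}

// Mock: the first two checks report IN_PROGRESS, then SUCCESS.
let calls = 0;
const mockCheck = () => {
  calls += 1;
  return calls < 3
    ? { status: "IN_PROGRESS" }
    : { status: "SUCCESS", url: "https://example.com/signed-thumb" };
};

const url = pollUntilSuccess(mockCheck);
```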

      And the response is a signed URL, which expires in six hours. So we highly recommend not caching or storing these URLs in your database but fetching them in real time. You can also fetch manage extension properties such as lifecycle, item number, revision, change order, change order URN, and change order URLs.

      The URLs really allow you to make it easy to create user experiences that link end users directly into the data pages in Fusion manage extension. And as you can see, the manage extension properties nicely get encapsulated within a manage object under the component. So you can request for this information, only if you need it.

      You can also build a BOM view, a bill of materials view, in your application through a performant API that lets you retrieve the entire assembly structure in one go, eliminating the need to fetch children of an assembly structure one level at a time. So this really serves as the source-of-truth for your CAD BOM hierarchy.
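Once the whole assembly structure comes back in one response, building BOM rows is mostly a matter of rolling up repeated occurrences. This is a hedged sketch over mock data; the occurrence field names are assumptions, not the API's actual response shape.

```javascript
// Mock occurrences as a one-shot assembly query might return them:
// each entry is one instance of a component in the assembly.
const occurrences = [
  { partNumber: "100-01", name: "Housing" },
  { partNumber: "100-02", name: "Bolt M6" },
  { partNumber: "100-02", name: "Bolt M6" },
];

// Roll identical part numbers up into single BOM lines with a
// quantity count.
const bom = new Map();
for (const occ of occurrences) {
  const row = bom.get(occ.partNumber) ?? { ...occ, quantity: 0 };
  row.quantity += 1;
  bom.set(occ.partNumber, row);
}
const rows = [...bom.values()];
```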

      You can access physical properties of a Fusion 360 component such as mass, volume, density, surface area, and bounding box properties. So as you can see, these values are generated on the fly. So if the values have not been generated for a component version, the query will basically trigger them to be generated.

      Again, this is an async call that requires your application to poll every so often, until the status goes from in-progress to completed. And the physical properties are returned in the Fusion system units. So you can use the physical properties in your BOM table, or for sustainability or cost analysis in your application.

      You can also generate STEP, OBJ, and STL files for individual components within a Fusion 360 design, including internal components. This is also an async call, as you can see. So it returns a status of in-progress, along with the percentage of progress completed.

      You can also specify which formats you want to export. So if you only want a specific format like OBJ, you can specify that in a query variable under the output format object. And once this is generated, you receive a signed URL, that's easy for you to download. And the signed URL expires in six hours.
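Because signed URLs expire in six hours and shouldn't be cached, a client that does keep one around needs a staleness check before reuse. A minimal helper, assuming only the six-hour window stated in the talk:

```javascript
// Signed download URLs expire six hours after issue (per the talk),
// so treat anything older as stale and re-fetch instead of caching.
const SIGNED_URL_TTL_MS = 6 * 60 * 60 * 1000; // six hours

function isSignedUrlStale(issuedAtMs, nowMs = Date.now()) {
  return nowMs - issuedAtMs >= SIGNED_URL_TTL_MS;
}

const issued = Date.parse("2023-01-01T00:00:00Z");
const freshCheck = isSignedUrlStale(issued, issued + 60_000);       // one minute later
const staleCheck = isSignedUrlStale(issued, issued + 7 * 3_600_000); // seven hours later
```

In practice the safest policy is the one recommended above: fetch the URL in real time and never store it.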

      With project admin APIs, tasks typically done manually within Fusion teams can now be automated via the API. So things like creating new projects within a user's hub, adding users to projects and hubs, using their email address can easily be automated via these APIs.

      With custom properties, which is currently in beta, you can augment the manufacturing data model with custom metadata, with information like purchasing or procurement. You can create global property definitions for different types of data and behavior, which is associated with your APS app and use them across multiple hubs.

      So the key mantra being, you create it once. And you use it multiple times. The end users of your app can then assign values to these custom properties at individual component and drawing versions in a standardized way. And you can even use these custom properties in Fusion 360 component property view and other navigational experiences.
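The "create it once, use it multiple times" idea maps naturally onto a GraphQL mutation with variables. The operation and field names below are hypothetical (the custom properties API is in beta, so consult the current docs for real names); the sketch only shows the shape of a define-once call.

```javascript
// Hypothetical mutation document for a global property definition.
// Names are illustrative, not the beta API's actual schema.
const CREATE_PROPERTY_DEFINITION = `
  mutation CreatePropertyDefinition($name: String!, $type: String!) {
    createPropertyDefinition(input: { name: $name, type: $type }) {
      id
      name
    }
  }
`;

// Variables are plain JSON, supplied alongside the static document.
const definitionVariables = { name: "Supplier", type: "STRING" };
```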

      So let's touch a little bit on the developer documentation and tools that are available for you to actually use these APIs. So you'll find all of the information, including detailed API documentation, if you go to our Autodesk Platform Services developer portal, under Solutions, Manufacturing Data Model. And I would highly recommend that you start out with the solution landing page.

      This outlines use cases, customer success stories. It even has our public facing roadmap. And you can also look at what upcoming beta events we have, what releases have been updated, as well as blog posts. And then the developer documentation can also be navigated directly from the solutions page. Or you can go through the dropdown on the developer documentation page directly.

      And the docs really have step by step tutorials on how to use the APIs, code samples, API references. And we also have a Query Explorer that lets you try out the APIs in an interactive environment using your own data.

      So let's touch a little bit upon the developer tools that are available to you. So as I mentioned, we have a query explorer, which is nothing but an interactive sandbox. This is based on an open source tool called GraphiQL, that makes it really easy for you to explore a GraphQL API.

      And we've integrated it in such a way that you can actually try these APIs directly on your data. So this environment also provides IntelliSense, which means as you're typing the query, it offers you the available options. You can also use Postman or Insomnia, which are other popular GraphQL tools, should you wish to do that.

      We've also embedded GraphQL Voyager within our Query Explorer. And this lets you visually explore the GraphQL API in an interactive graph. This is also based on an open source tool. And this is a really great way for you to understand the data model holistically and even run introspection visualization.

      So let's go over some example workflows or common use cases that are emerging from actually using the manufacturing data model APIs. There are a lot of repetitive tasks. For example, creating a bill of materials can easily be automated with the manufacturing data model APIs.

      But you can also integrate design and make data directly into ERP systems like SAP or Microsoft Dynamics to power downstream sourcing, procurement, pricing, and inventory workflows, in real time, with real-time data coming out of the manufacturing data model APIs. Non-designers can also get access to the latest and greatest design data to create up-to-date catalogs without needing access to authoring applications.

      And you can now also make data-driven design decisions when it comes to, for example, sustainability analysis with the help of material, mass, density, volume, et cetera, all in real time. So these are just some examples of use cases that our customers are developing using the manufacturing data APIs, just to get you inspired.

      So I'll now hand it over to Patrick to do a deep dive into GraphQL best practices, some tips and tricks when creating web apps using manufacturing data model APIs.

      PATRICK RAINSBERRY: OK. Thank you very much. That's great. I'm going to, as you mentioned, I'm going to get in here and start talking about a little bit more of the details about GraphQL and some of the usage of these APIs specifically in a client application.

      So one of the questions maybe some of you are asking is, what is GraphQL? Right. Or why GraphQL? So GraphQL has been around for a while. Development began in 2012, with a public release in 2015. It was originally created by Facebook, now Meta. But it's a fully open-source project with a kind of governance board, if you will, at graphql.org, where you can get all kinds of information.

      The reason that we chose it over REST is that, when dealing with data that is so graph-based in nature, GraphQL lets us create an API and an object model in the API that's very reflective of how the data actually looks inside the product. So if you look inside Fusion 360, you can see things like the property panel or your navigation of the model tree, et cetera.

      We try to construct the API to really mirror that. And it also lets you just get the data that you want. Typically when using REST APIs, it's kind of like, give me every possible piece of information about one resource. And in GraphQL, you just say, give me the data that I want. And give it to me for lots of resources at once.

      So it really reduces the amount of calls you would have to make. If you just even look at something simple like navigating from say a hub down to information about a component, you would say, give me all my hubs. And from a hub, get the projects. From a project, get the folders. From a folder, get the contents. From the contents, get root component, et cetera.
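That hop-by-hop walk collapses into a single nested GraphQL document. The field names below follow the shapes sketched earlier in the talk, not the exact schema; the point is the round-trip count, not the names.

```javascript
// One nested document in place of five sequential REST requests
// (hubs -> projects -> folders -> contents -> component).
const DRILL_DOWN_QUERY = `
  query {
    hubs {
      results {
        projects {
          results {
            folders {
              results {
                items {
                  results { name }
                }
              }
            }
          }
        }
      }
    }
  }
`;

const restRoundTrips = 5;    // one request per level of the hierarchy
const graphqlRoundTrips = 1; // the whole drill-down in one request
```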

      And it's just like this repetitive thing. It also makes it really hard to handle kind of stacked queries or cache management on a client. And so again, like you say, in Fusion 360, if you look at some example queries, and you look at the results, you can see, it's really set up such that if you're familiar with using Fusion 360, if you're familiar with the way that data is organized, you have a top level assembly. Then you have a subassembly. Then that assembly has components. It might have further layers of subassemblies.

      It's kind of a what you see is what you get mapping. So you can craft your query to using your application really just by-- mostly just by knowing what that Fusion 360 data looks like in the first place. And that's one of the things that's really attractive about choosing GraphQL for this API.

      Even, I know a lot of people, it's still kind of like a new thing. But this is just an example of a bunch of companies that are using it, including the new one, Autodesk, which is great. And so one thing to note too, is there's a lot of companies that have adopted it. Because it makes it really easy to federate many underlying services, right. And so a lot of companies are using it as much for, say just powering some client applications, where they're tying data across many sources, and not even necessarily for their public APIs, right.

      There's some good examples of other bigger companies using it for public APIs. GitHub, Shopify are a couple examples. But it's also really been used a lot for internal applications by a lot of these companies.

      And there's just some nuance that's worth talking about. So, you know, typically you think about database operations. You think about CRUD: create, read, update, delete. And, you know, the REST equivalents: GET, POST, PUT, PATCH, DELETE.

      But in GraphQL, there's really just-- there's really only two core types, which is a query, which is reading data, and then a mutation, which is changing data. So really all operations like create, update, delete are all going to be some form of what we would call a mutation in GraphQL. So that's just like one important thing to note.
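Side by side, the two core operation types look like this. The mutation name is hypothetical; the sketch only shows that reads take the `query` form while create, update, and delete all take the `mutation` form.

```javascript
// Reading data: a query document.
const READ_EXAMPLE = `
  query {
    component { name partNumber }
  }
`;

// Changing data: any create/update/delete is a mutation document.
// "updateComponent" is an illustrative name, not a confirmed field.
const WRITE_EXAMPLE = `
  mutation {
    updateComponent(input: { partNumber: "100-01" }) { name }
  }
`;
```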

      I think, Aditi already touched on this a little bit. But I just wanted to re-emphasize. The ecosystem around GraphQL is really strong. I think the fact that it is kind of newer. You have a lot of people adopting it, a lot of people building really nice tooling for it. This graphql.org is a great entry point. There's a lot of really good learning content there, that's kind of agnostic from any specific API, from any specific set of tooling just to learn a lot of the fundamentals about what GraphQL is.

      And then like I say, there's tools like insomnia and Postman that, kind of, if you're used to using REST APIs, I'm sure you're very familiar with. But those are great. But there's also, there's a lot of other tooling that's been built, whether it's extensions for IDEs or also a lot of tooling for just building GraphQL servers or building GraphQL clients or doing interrogation of GraphQL APIs.

      The one thing that I just wanted to really emphasize depends on what IDE you're using, whether that's VS Code or something like JetBrains WebStorm. Apollo, and I'll talk a little bit more about Apollo in a minute, but Apollo GraphQL is a great example. There's others. There's plenty of others. But their IDE extension is really powerful.

      And it's almost, it's a you can't live without it kind of a thing, just in terms of having all of the syntax-- when you write a GraphQL query in client application, it's just a giant string. But you want it to be-- you want all the syntax highlighting. You want all of the error checking and linting and everything else that goes along with having a nice extension. So you definitely want to get GraphQL support for your IDE.

      So let's talk about actually how do you use some of this. And so we're going to talk a little bit about how to use GraphQL in general, but mostly this is going to be, of course, in the context specifically of the manufacturing data model API. So this is an example of just something that you may choose to build, right. So this is kind of indicative of many of the use cases where we want to-- on the left, we have a navigation experience that would let a user browse through their hubs, projects, designs, drawings, components, history of those designs, subcomponents, et cetera. So some kind of navigation and then some kind of tabular data.

      So in this case, I'm looking at the parts list for this particular design. And by looking at all the properties of the components within the design, using GraphQL queries to craft a user interface like this to populate all the data is relatively straightforward. But, you know of course, the bulk of the development of this application is mostly just in the UI and all the different UI libraries they used, and all of the different structuring and routing and populating tables and managing the state of the tables.

      And it's probably a little complex for the scope of this presentation just now. So I also built a ridiculously simple single-page application. It's React-based, just trying to use the most bare-bones tooling, with literally no UI styling whatsoever. And this is the kind of demo that we'll be using for this today, so we can actually look at all the source code. And it's not too crazy.

      So this is built, again like I said, using React. The two main libraries that I'm using are React Router and Apollo Client. And then it was built with Vite, not Create React App. And we're going to use this to just demonstrate the principal workflows. Because really, building this is all you need to know about how to build with our APIs in general.

      Obviously, what you choose to do on the front end to make it pretty and make it actually useful and interactive is all up to you. But from an API standpoint, this should really cover it. So we'll talk about authentication. In this case, again, trying to make it as simple as possible. I'm using PKCE authentication.

      Then we're just going to log the user in and then fetch some data. But we will also handle the caching of that data. And then we'll react to user input. So we'll basically have the hubs list. Click on a hub. It'll show you all the projects in that hub. Click on a project. And it'll show you the contents of that project.

      And so this simple three step process is what we'll walk through. So when you're thinking about GraphQL and Aditi showed you the GraphQL little sandbox tool, that we use pretty-- I use it really extensively for just kind of testing out queries and things like that. It's embedded into all kinds of other tooling.

      When you think about something like the GraphQL query, there's really three main components. So you have the query itself; the term you'll see in the client code is a GQL document. So you have the query itself, which is something that's very static.

      You have a certain amount of queries that you would have written to power your application. And these things are-- you don't dynamically generate the query. You just, the query is static with input variables. And that's really important for how the caching mechanisms work in the client.

      So you have the query itself. And then you have the variables, which would just be JSON, just typical JavaScript objects. And then you have the response, which again is just a JSON response that then you just interpret into a JavaScript object. So just thinking about these three components.
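The three components can be put next to each other in code. All names here are illustrative (a hedged sketch of the shapes discussed, not the exact schema): a static document with a variable placeholder, a plain JSON variables object, and a response that mirrors the document.

```javascript
// 1. The static query document, parameterized by a variable.
const PROJECTS_QUERY = `
  query Projects($hubId: ID!) {
    hub(id: $hubId) {
      projects { results { id name } }
    }
  }
`;

// 2. The variables: just a plain JavaScript object.
const projectsVariables = { hubId: "hub-123" };

// 3. The response: JSON whose shape mirrors the document.
const projectsResponse = {
  data: {
    hub: { projects: { results: [{ id: "p1", name: "Gearbox" }] } },
  },
};

const firstProject = projectsResponse.data.hub.projects.results[0].name;
```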

      So a little bit of an interlude. I want to cover all the rest of the code that I'm going to show and how to actually implement this, or how I implement it. Of course, there's a million ways to solve any problem in software development. I'm going to specifically talk about implementing these APIs in this basic React JavaScript application.

      And to do that, I'm going to demonstrate the use of Apollo Client. And the reason is that, like I said, there's all this great GraphQL tooling that's out there. And this is one of those examples. If you're building a client application, I think they have libraries for Vue, Svelte, React, and probably some other ones I'm forgetting. But the library itself takes care of a lot of important steps.

      So it's not just-- a lot of people use something like Axios for REST APIs. A better way might be to use something like React Query, because then it starts to handle state management. And that's really the power of the Apollo Client: it lets you very powerfully configure the cache. And I mean, like, crazy powerful configuration of a cache.

      And the important thing about that is that in these single-page applications, oftentimes you're fetching the same data all over the place, right. Like you already fetched the information about the components when you were fetching it for the navigation tree. And now you're displaying the information for the same component over there in that bill of materials table. And then you're looking at the bill of materials for one of those subassemblies. It's all the same data.

      And so, of course, there's a million ways to implement this to not only handle caching of data in local storage whatever. And there's also all kinds of things that you can do to manage state. A lot of people using Redux or something crazy like that. But the nice thing about the Apollo client is, it does both of these things for you. It gives you a nice simple framework for handling the query, but more importantly handling the caching of the results and state management with hooks.

      So if we look at what this actually looks like in JavaScript. So this is what I mentioned earlier. So you have this gql tag. gql is kind of a lower-level library, but it's rolled up through a dependency into Apollo Client, so you can just get it from there. And so then you specify this gql tag to specify that the following string is a GraphQL document. And then, of course, since I have a nice IDE extension, I get all this nice query highlighting.

      And this is it. So really, it's just a string, kind of, but it's explicitly interpreted as GraphQL by the client code. So this is what I meant when I mentioned that typically all of these queries are very much static. So, you know, you're just going to define some constant that is the definition of the query. And you don't want to be constructing these things on the fly. You want them to be pre-built because of the way it interacts with the cache, which I'll get into in a minute.

      So once you have the query, then again, this is where the state management comes into play a little bit. So when you're using this library, at least with React, which is what I'm most familiar with. So that's what I'm going to talk about. I'm sure there's equivalent stuff in other front end frameworks. But in React, you're going to say, get the loading status. There's actually a lot of other things you could get right there. But this is the simplest case.

      You might just want the loading status, whether or not there was an error, and then the response data. And since this is a React hook, it makes it really easy to set this up in a React component. So you're going to say useQuery and then a reference to the query.

      So you see the first argument is a reference to the query itself. And then there's a bunch of optional arguments. For example, I don't want to fetch if it was in a different context, where maybe the user hasn't picked a hub yet. You want to just-- you don't want to try to execute the query if there is not a Hub ID.

      Then you define the variables. So in this case, we're going to assume that the Hub ID is provided somehow, right. And so then the Hub ID is coming in. That'll be the variable. And then there's some other options around error policy and the fetch policy.

      And so particularly the fetch policy, you'll see it's cache first. So you could say, I always want to refetch from the network. I want to only fetch it if it's already been cached or cached first, which is one of these things that just makes it very nice. Which is, if the data is already in the cache, don't make a network request. If the data is not there, then make a network request. So really speeding up kind of the interactivity of your application.
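A toy version of that cache-first policy makes the behavior concrete. This is a sketch of the idea Apollo Client implements for you, not Apollo's actual internals; `fetchFn` stands in for the real network transport.

```javascript
// Minimal cache-first wrapper: serve from cache when present,
// otherwise hit the "network" and remember the result.
function makeCacheFirst(fetchFn) {
  const cache = new Map();
  return function query(key) {
    if (cache.has(key)) return { data: cache.get(key), fromCache: true };
    const data = fetchFn(key);
    cache.set(key, data);
    return { data, fromCache: false };
  };
}

let networkCalls = 0;
const cachedQuery = makeCacheFirst((key) => {
  networkCalls += 1; // count how often we actually "hit the network"
  return `payload for ${key}`;
});

const firstResult = cachedQuery("hub-123");  // goes to the network
const secondResult = cachedQuery("hub-123"); // served from the cache
```

The second lookup never touches the network, which is exactly the interactivity win described above.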

      So if we look at-- this is my very, very simple project list component. So we're going to get the hub ID from the URL string. I didn't show the router page. I tried not to get into too much of the basics. But the URL that you'd be on would be projects, slash, the hub ID.

      And then, so we'll get the hub ID from the URL of the app that you're currently on. And then we will use the query. And then you can see, because we have this loading and error, we can say that if it's loading, so loading is true or false. So if it's loading, maybe I just want to return loading, or something more fancy like a spinner or whatever.

      And then if there's an error, this is not probably what I would really do. I wouldn't just display an error as the whole page. But for the purposes of this test app, that's what we'll do. Maybe you would just really, if there was a GraphQL error, you probably would just want to log it or something like that.

      And then, so basically if it's not loading and there's not an error, then we'll actually get to return the results. And so, you can see, this is just like literally the most basic page. It's just going to say projects. And then you can see that the response-- so the data. Data is the root of the response object.

      And then you can see, you just follow through here. So from the data, we'll go nav, hub, name, or I'm sorry, nav, hub, projects, results, name. So just like in the query, that's just the same thing you're going to get in the response object. So the response object exactly maps to the query that you submitted.

      And so then results is an array. And so then we'll basically just create a list and then map all of the results to the list and just display their names. And then on each list item, we'll have a link. This is part of React Router, which is beyond the scope of this class. But I'm sure, if you're familiar with React, you've probably at least seen React Router or an equivalent.

      But so basically, we're just going to go to the projects path plus the project contents path plus the ID of that particular project, right. So pretty straightforward. Just, again, looking at this from a different standpoint of the three components, the query, the variables, and the results, you can see that if we were using graphical, this is what the results would look like. So the object as returned in the results is again just mapping back to that original query.

      And that's what we have, basically this one line right here: data.nav.projects.results.map. And then we'll extract ID and name. So it's actually a little bit of a mismatch to the results there, because the query didn't have ID when I took that screenshot, apparently. But that's OK.

      So here's the application in all of its glory, the three pages of the application. It would start on hubs. You'd pick a hub. It'll list the projects. You pick a project. And it lists the contents. And I don't have the URL right now, as I'm recording this session before Autodesk University. But by the time you're seeing this, you will be able to look inside of the accompanying class notes and get the URL where you can check out the repo for this example and download it as a sample, et cetera. So please see the accompanying class notes.

      So with that, I'm going to do some deeper dives into a couple of topics. That was kind of the overview of just using it; now I want to get into details about some things that are very, very specific. Everything I did up until now is really generic GraphQL-- just how you would use GraphQL to build a simple app like this. Now, I want to talk about some of the specifics when using GraphQL with our API in particular.

      So one of them is pagination. We made a decision about pagination early on, and there are a lot of different ways to handle it. Obviously, if you're asking for all of the components of a 5,000-part giant manufacturing machine, you're not going to return all that in one shot. We're going to page the data out.

      And in GraphQL, there are a few different approaches one could take to implementing pagination. We chose one of them. We're doing cursor-based pagination, and we're doing it with what you'll see here: this idea of a pagination element and a results element.

      So that's why you've seen in all the queries up till now, whenever we're listing something-- like the folders within a project, or the items within a folder-- there's always this extra layer called results. Up till now, we've been mostly just abbreviating everything and just showing the results. But almost any response object in our API that returns an array is going to have a results field and a pagination field.

      And in the pagination field is where you will get the cursor. So if there is another page of results, you will get a cursor. If there is not another page of results, it will be null. And so, you can basically check for the presence of cursor in your response. And if there is a cursor, you can fetch more data, you know, whether that's a user clicks to an actual next page in your app or whether you're just kind of populating an infinite list.
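As a tiny standalone sketch of that check (the field names follow the pagination/results pattern described in this class; the IDs are invented):

```javascript
// Hypothetical pages of results: every array-returning field carries a
// `results` array plus a `pagination` object with the cursor. The cursor
// is null when there are no further pages.
const firstPage = {
  pagination: { cursor: "abc123" },
  results: [{ id: "item-1" }, { id: "item-2" }],
};
const lastPage = {
  pagination: { cursor: null },
  results: [{ id: "item-3" }],
};

// There is more data to fetch exactly when the cursor is non-null.
function hasNextPage(page) {
  return page.pagination.cursor !== null;
}
```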

      Either way, you know that there's more data to be had in that particular array. And so now, when implementing this in a client application-- you saw the query; we've included the pagination field and the cursor field in it. I mentioned before that when you use the useQuery hook from the Apollo client in React-- I had shown loading, error, and data-- there's a whole bunch of other things you can get. This is one of them: fetchMore.

      And fetchMore is really a utility very explicitly designed for pagination. It can be used for other things as well, but pagination is where it's really powerful. And you can see now, in the variables to my query-- again, if we go back and look-- that the inputs to this query are the project ID and the cursor.

      And so, we'll be assigning that here. Typically-- because, you imagine, it's going to run the query once, and then we're going to use fetchMore to run it additional times if needed-- the initial query is always going to have a value of null for the cursor, which basically means the first page of results. If there's only one page, the response will have a null value for cursor. And if it does have a cursor, then we want to get an additional page.

      So in my case, I want to just basically keep listing everything until it's done. I don't want the user to have to have any interaction. I haven't implemented any fancy infinite scrolling. I'm basically just going to keep calling the API until I get to the last page of data.
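That "keep calling until the last page" loop can be sketched without React at all. In this hedged example, fetchPage is a stand-in for running the real GraphQL query with a cursor variable, and the page contents are invented; the null-cursor-means-first-page and null-cursor-means-done conventions follow the API behavior described above.

```javascript
// Fake backing store: maps a cursor to the page it returns.
const pages = {
  null: { results: [1, 2], pagination: { cursor: "c1" } },
  c1:   { results: [3, 4], pagination: { cursor: "c2" } },
  c2:   { results: [5],    pagination: { cursor: null } },
};

async function fetchPage(cursor) {
  // A real implementation would execute the GraphQL query with { cursor }.
  return pages[cursor === null ? "null" : cursor];
}

// Start with a null cursor (first page) and keep fetching until the
// response comes back with a null cursor (no more pages).
async function fetchAll() {
  const all = [];
  let cursor = null;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.results);
    cursor = page.pagination.cursor;
  } while (cursor !== null);
  return all;
}
```

In the sample app, the same loop is driven reactively: useEffect watches the cursor and triggers fetchMore instead of this do/while.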

      And a way to do this pretty easily is-- just for brevity, with the first two constants here, I'm going to get the project items, which is basically the contents of that results array, and then I'll get the cursor, just to make the rest of this cleaner to read.

      And so then I'm going to use a useEffect hook. If you're not familiar with useEffect in React-- you know, they sometimes call it the "use footgun." But if used correctly, it is obviously really powerful and effective.

      So useEffect basically means: rerun this function if any of the variables in the bottom array change. So if cursor or fetchMore or loading change-- a new query is when you get a new fetchMore. And I only really want to do this if loading is false.

      Loading is always going to be true initially, and when it changes to false is when I want this to run in the first place. And then I also only want to do it if the cursor has changed. So if the cursor was null, and there was only one page of data, it's still null-- don't do this. But if all of a sudden there is a value for cursor, or there's a new value for cursor, I want to do this. So that's useEffect.

      So then, we're just going to basically say: if it's not loading and there is a cursor, then execute fetchMore. And the way fetchMore works is, you can actually even have a more streamlined query, but that's beyond the scope of this. In general, fetchMore just says: rerun the same query, but maybe with some different options. And the common use case for that is new variables.

      So whatever the previous variables were-- this is where things get simple-- I'd already specified the project ID. Don't change that. All I'm saying is: rerun the query, but with a new value for cursor. And what that's going to do is basically call the API again with the cursor. And so now the data response is going to be the second page.

      And so, this is where the real magic happens, if you think about it. Because in the front end, all I'm saying is: display a list of the data returned from the query. And in a super basic, just-fetching-data construct, you would have gotten the first page and shown that. And then as soon as the second page was fetched, all of a sudden data is only the second page, right?

      But what Apollo client is doing for you underneath the hood is, it is just saying that the folder contents or items is an array. And so, when you fetch more, it is just automatically appending that into the array. Because the data that is being displayed in your app is actually coming from the cache.

      You run the query. The query response is cached. And your application is actually observing and responding to the cache itself. That's why this is all it takes to implement. As soon as the second page of results is ready, the component rerenders. If there was a third page, it would rerender again. And it would just keep stacking. But it's always still all contained within that original results array.
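The append-into-one-array behavior described above can be shown as a toy function, with no Apollo involved. This is only an illustration of the shape of the behavior; Apollo's actual merge machinery is configured through cache field policies, shown later in this class.

```javascript
// Toy version of what the cache does under the hood: each fetchMore
// result is appended to the existing array for that field, so a
// component reading the field just sees one growing list.
function mergePages(existing = [], incoming) {
  return [...existing, ...incoming];
}

let cached = mergePages(undefined, ["a", "b"]); // initial query: first page
cached = mergePages(cached, ["c"]);             // fetchMore: second page appended
```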

      So, I don't know-- to me, that was like magic. So again, this is just a bit of a diagram showing: you make the initial query, and the initial results are pushed to the cache. Then you would get that cursor. Run fetchMore, which fires off of the useEffect, because there is now a cursor. If there was another cursor, it would continue to fetch more.

      And those additional pages are just being pushed into the cache. Because of the way React works with hooks, it's using observables under the hood. It's observing that particular piece of data. If that data changes, React rerenders. But if it doesn't, it won't.

      So basically, like I said, you just keep pushing onto that stack in the cache, and your component just reacts to it. But it doesn't have to do any kind of special handling. You're not trying to merge arrays and all of that yourself in the client. You configure this in the cache, which I'll show in a minute.

      So this is basically the third page in the sample app, which is the contents of the project. You can see here, I'm using useQuery again with fetchMore, as I showed. Then we have the useEffect, which basically says: if there's a cursor, get more data. And then the same thing-- if it's loading, show loading; if there's an error, show the error.

      And then again, a super simple page response: just the project contents. Take all of the project items and map them into list items. And in this case, the list items don't have a link.

      So that is pagination. I feel like that's one of the most important things to grok about our API. A lot of examples you'll see, even ones with cursors and things like that, are not exactly like this, with the pagination object and the results object. The results object is kind of one layer down from the field itself.

      And so, now I'm going to get a little bit deeper into how you configure said Apollo client cache to make all this work. If you're watching this for the first time, and you've never used React, never used GraphQL, and haven't even looked at the Apollo client documentation, this might be a little bit of overkill. But that's OK. My hope is that you've gotten a good overview of what you can do.

      And then for those of you that are already attempting to implement this and you've gotten farther along in your implementation, and now you're ready to actually start making your app very performant, that's when all of this stuff comes into play for caching. So hopefully you can watch up till now. And then you'll go do a bunch of stuff. And then you can come back and watch this part of the presentation.

      So that's my disclaimer. The Apollo client cache-- some of this is quoted from their website. But for me, it's really about the amount of power you get, not only in minimizing network calls and refetches; it's also really about using the cache itself as the state management.

      You know, the first app that I showed is pretty complex. There are a lot of tabs in there, and a lot of things that it does. And I don't really use any state management tool outside of the Apollo client, because it has so much power for state management for the whole application. It's pretty powerful.

      So part of the configuration of the cache itself is that it understands the schema that your GraphQL API is built around, and it will normalize all of the objects. So in a single query, you might have gotten the project by its ID, and then the folders in the project, and the items in the folders-- and that was one query.

      But all of the individual object types are cached individually. So when you look in the cache, you'll see there's a project object, and it might have a pointer to a folder object-- in the Apollo cache, it's just a reference by ID. Because then there's also a list of all of the folder items that are cached, and all the component items that are cached, et cetera.

      So it kind of normalizes it. One of the things we'll talk about in a second is when you want it to not normalize. The other thing you can do is the merging of arrays and things like that. And then-- I mentioned this before, but it's worth re-emphasizing-- there's the fetch policy.

      So again, there's performance to be gained, especially for things where, for a given version of a component in a Fusion design, the data for that component version could never possibly change. Now, there could be a new version-- so you might want to know if there's a new version of the thing. But for any given version, the data is never going to change. So you can cache it to infinity.

      For other things, you might want to set when the cache gets invalidated, et cetera. Or in some cases-- like maybe the first time a user initializes your app-- maybe you want to always fetch the list of projects, because you never know if a new project has been added since the last time you cached those results. So then you could say network-only.

      And if you specify network-only, for example, it will not even look at the cache. No matter what, it's going to go ask the API for that data. Here are also some graphics I borrowed from the Apollo client website-- that's why the example object is a book-- and basically it goes like this.

      So I want to get the book with bookID 5. Like I said, it first looks in the cache. It's not found, so a query is actually sent to the server. The server responds with the book object. The book is cached. And then your useQuery is fed the data.

      If you were to ask for that book again-- say it was a list of books, and you click on one book, then a different book, then back on the same book-- it's going to go like this. From your app, the client says: get book with bookID 5. And it says, ah, I already have that data in the local cache. It just returns it, and no network request is made.
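A minimal standalone sketch of that cache-first behavior, assuming a Map as the cache and a stand-in fetchFromNetwork function (not the real API):

```javascript
const cache = new Map();
let networkCalls = 0;

async function fetchFromNetwork(bookId) {
  networkCalls += 1; // count trips to the "server"
  return { id: bookId, title: `Book ${bookId}` }; // pretend server response
}

async function getBook(bookId) {
  if (cache.has(bookId)) return cache.get(bookId); // cache hit: no request
  const book = await fetchFromNetwork(bookId);     // cache miss: one request
  cache.set(bookId, book);
  return book;
}
```

Asking for the same book twice only hits the network once; that is the whole performance story being described here, just in miniature.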

      I know I've said that like five times. But I cannot possibly emphasize how much performance boost you can get out of this, especially when you have the same data being displayed in like many places within your application.

      So another note on tooling. As soon as you start heavily using the Apollo client library, and you're trying to build out a more complex React app or something similar, you're going to be doing a lot of local debugging-- trying to figure out why the data you think should be in the query isn't showing up, et cetera. There's a Chrome extension from Apollo: the Apollo Client DevTools.

      So if you use React, you probably use the React DevTools. The Apollo Client DevTools work similarly: they give you another tab in the Chrome debugger with a bunch of information. If you click on the Queries tab there, you will see all of the currently active queries, because the Apollo client keeps every currently running query active.

      And then you can also interrogate the cache. You can see every object in the cache. This graphic is from their website, but you can see that there are all of these person objects, and the data for each person object has an ID, a type, a name, et cetera.

      And so, one of the main things that you're going to want to configure is pagination. If you look in the accompanying class handout document, I'll have links to a bunch of really relevant documentation from their site and others, which is just good to read. In general, this is probably one of the trickier things to deal with: the fact that, like I mentioned, within an object-- say, within a folder-- you have items. Items itself is not an array.

      Items has its own object type: items returns an items object, which has this pagination field, and results is the array of actual items. And so, for configuring the cache, I'm not going to go into a ton of detail on this.

      But as you come back here for reference, this is the very most important thing to do. When that second page of data comes in, you want it to be merged with the first page. And what you're really telling Apollo client here is that what you're merging is not the incoming object itself-- it's the incoming value's results array.

      So then we just do this quick little trick here: take that results array and map it to an object keyed by ID, so we can also remove duplicates. That's the merge, for incoming data. And then when it's read from the cache, just take all the values of that keyed object. So this is a simple way to eliminate duplication, and it's also the recommended approach on their website when dealing with this type of pagination.
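The merge/read pair being described can be sketched as plain functions, outside of Apollo, so it runs standalone. In a real app these would be the merge and read functions of an Apollo cache field policy; the item shapes here are invented.

```javascript
// merge: key incoming results by id into the existing keyed object.
// Duplicates collapse, and newer data overwrites older data for the same id.
function merge(existing = {}, incoming) {
  const merged = { ...existing };
  for (const item of incoming.results) {
    merged[item.id] = item;
  }
  return merged;
}

// read: turn the keyed object back into the array the component consumes.
function read(existing = {}) {
  return Object.values(existing);
}
```

So two pages that overlap on an item produce one deduplicated array, with the later page's copy of the shared item winning.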

      And then you'll see-- I'm going to go through this really quickly-- when you configure the cache and look through the Apollo documentation, you'll see where all this happens. This is just an example. These are all of the fields in all of our schema objects that use this pattern of pagination and results.

      And so, you want to define that configuration for a normalized field. And then in the cache configuration, you basically just specify all of the different fields of all the different objects that you want to use that specific function for. Again, for all standard things-- if it's just a field that returns a string, or returns an object of a different type-- you don't have to worry about configuring the cache, because that's all standard.

      It's just this one specific thing about handling the fact that the response array is one layer down. I can't emphasize that enough. You can see things like thumbnail and physical properties-- I'm also specifying merge for those. We don't even have to go into details; you can just take my word for it that that's what you want to do.

      And the reason all this works is because, again, I might have originally fetched just some basic information for a component. That's what this merge is about. Initially, your cache would look like this: component version 5. Its type is component version; its ID is 5.

      Obviously, real IDs are much longer than that. But it might have a few fields. And then you make another query that's also asking for physical properties. Physical properties are not in the cache, so it's going to go fetch them. But there is some bit that already is cached, and it'll just get merged in, with new data always overwriting old data if it's different.

      The other thing that's super important about merge-- this is probably the easiest thing to trip you up, where basically no cache behavior is working and you can't figure out why. Similarly to how with results, because it's nested a layer down, you need to do some special things.

      In the newer versions of our API, you'll see that all of our queries are also one layer down. The query itself is an object. So you have query, and then typically you would have all the queries from there. We have query, nav, projects, or query, MFG, component version. So it's very important that you tell it that for the MFG and the nav and the AEC objects, if new data comes in under MFG, don't replace-- merge.

      Because basically, every single piece of data in your cache is sitting under either nav or MFG. So in the configuration, it is ultra-critically important that you set merge true. If you take nothing else from this second half of the presentation: setting merge true on the nav and MFG and AEC objects is literally the most important thing.
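A hedged sketch of what that configuration might look like. The root field names (nav, mfg, aec) follow this class, though exact casing may differ in the actual schema; in a real app this object would be passed to Apollo's InMemoryCache as `new InMemoryCache({ typePolicies })`.

```javascript
// Cache type policies: every piece of data hangs off one of these root
// fields, so new query results must be merged into them, never replace them.
const typePolicies = {
  Query: {
    fields: {
      nav: { merge: true },
      mfg: { merge: true },
      aec: { merge: true },
    },
  },
};
```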

      And then this is also a thing that I learned along the way-- this presentation represents the learnings of the last two years. As we've been designing and building this API, I've been designing and building that reference application to test it all. And this bit is another huge boost to performance.

      Because sometimes you might have a query that was project, folder, items, item versions. And then later, you have a new query that's just item version. And the way queries are cached is based on what query it was and what ID it was. And so the input was, say, folder ID.

      But now you're going to say: I just want to get the item by ID. And its input variable is item ID, which is different than folder ID. And so this little bit of magic here is basically telling the cache, for any query for a hub object or a project object or a folder object, how to redirect in the cache for that particular type, based on the input argument.

      So if there is a query coming in looking for a project object, the input to that is going to be project ID. And so project ID equals cache ID is basically what you're saying. It sounds kind of silly that you would have to do this. But there's no way the cache could know that a variable whose name is project ID equals the ID of that object.
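A standalone sketch of the cache-redirect idea: tell the cache that a query whose argument is a projectId should resolve to the already-cached Project object with that ID. This mimics the shape of an Apollo Client 3 field policy `read` function using the `toReference` helper, but the names here (Project, projectId) are illustrative, and the toReference below is a minimal stand-in so the snippet runs on its own.

```javascript
// Field policy for a hypothetical `project(projectId: ...)` query:
// instead of hitting the network, return a reference to the cached object.
const projectRedirect = {
  read(_, { args, toReference }) {
    return toReference({ __typename: "Project", id: args.projectId });
  },
};

// Minimal stand-in for Apollo's toReference helper, which builds a
// normalized-cache reference like "Project:p1".
function toReference({ __typename, id }) {
  return { __ref: `${__typename}:${id}` };
}
```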

      And so, you configure this one time. And if you've already implemented your whole app and then add this later, as soon as you do it, all of a sudden you will see that loading times have magically gone away, all over the place. Because so much data is refetched in any of these applications that as soon as you implement those redirects, any query that was going to ask for the same data knows how to find it.

      The day I implemented this, it was almost unbelievable how fast everything started going. Because you realize you're not over-fetching-- and that you had been over-fetching before. So that's probably my second biggest tip: the stuff on this page. These are the two most important things for performance.

      ADITI KHEDKAR: So we touched on a lot of information at the beginning of the class, in terms of what's available with the APIs today. We talked about best practices when building a web application.

      So we don't expect you to remember all of this. There's definitely going to be a lot of notes that we share with you at the end. But also, interestingly, we have provided a public roadmap that keeps track of our plans and informs you about what gets implemented and what improvements we're considering.

      And this is definitely a very useful place to keep an eye on, especially if you're already using our APIs or planning to use them. It also gives you the ability to upvote specific features that are important to you. And you can also send us a note as you're upvoting a specific feature, in terms of what else you'd like to see-- either as part of that feature, or a feature that isn't showing up in the roadmap.

      So with that, I'd say: if you have any questions, or just want to connect with us to dive deeper into your manufacturing data needs, please reach us at aps.help@autodesk.com. And yeah-- from Patrick and myself, thank you again for joining. And thank you for listening.