Description
Key Learnings
- Understand the capabilities of the Data Management and Model Derivative APIs
- Discover what Webhooks are and how to use them
- Learn how to write software that integrates seamlessly with Forge
- Understand where to find more help on Forge
Speakers
- Adam Nagy: Adam Nagy joined Autodesk back in 2005, and he has been providing programming support, consulting, training, and evangelism to external developers. He started his career in Budapest working for a civil engineering CAD software company. He then worked for Autodesk in Prague for 3 years, and he now lives in South England, United Kingdom. Twitter: @AdamTheNagy
- Monmohan Singh: Monmohan Singh is a Software Architect in the Singapore office working closely with the Fusion Lifecycle, Data Management, and Forge Webhooks teams. He has over 20 years of experience in design and development of enterprise products in a variety of areas - content management, portal collaboration, search, document management, social media, and data management. Monmohan is excited about contributing to the success of the Autodesk Forge platform.
ADAM NAGY: Yes. So I think we can start. So welcome everyone. I'm Adam. And I'm going to present the [INAUDIBLE] about the webhooks.
So this is the class summary and the key learning objectives. I guess the best thing is to talk about webhooks in the context of the Data Management API and also the Model Derivative API. So I will give a bit of an intro into those, along with the Authentication API as well.
So first of all, if you want to get started with Forge, with the Data Management API or Model Derivative, and want to take advantage of webhooks, then you need to create an application for our platform. So you can just simply go to Developer.Autodesk.com. You can log in with your Autodesk ID, which is freely available. You can create a new application. You just need to specify the type of APIs that your application is going to use: BIM 360 API, Data Management, Design Automation, or Model Derivative APIs.
You just provide some name, maybe an app description. If you want to be able to access user-specific data from an A360 type of storage system -- that is Fusion Team or BIM 360 Team or BIM 360 Docs -- in that case, you also have to specify the callback URL, which will be used by the Forge server. Once the user approves your application to access the data, this URL [INAUDIBLE] code. And then your application can authenticate itself. And you can also provide the URL for your application if you want.
And straight away, once you've created the application, you will have access to a client ID and client secret, which will be generated for your application. So right after that, once you have those two things, you can start using the Forge APIs and you can also authenticate your application. So here as well, actually, there is the callback URL again, that you will need to provide for the application if you want to access user data.
So here are the two types of authentication available: two-legged or three-legged. If your application is storing data in its own private bucket on Forge, in that case, the authentication is pretty simple, because you can just take the client ID and client secret, call a single endpoint on the Forge web server, and then you will get an access token that from then onwards you can use in order to use the other Forge web services.
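A minimal sketch of that single two-legged call, in Node with node-fetch; the v1 authenticate endpoint, parameter names, and scopes are taken from the Forge documentation of the time and should be treated as assumptions to verify against the current docs:

```typescript
import fetch from "node-fetch";

// Exchange the app's client ID/secret for a two-legged access token.
// Endpoint and parameter names follow the Forge v1 authentication docs; treat as illustrative.
async function getTwoLeggedToken(clientId: string, clientSecret: string): Promise<string> {
  const resp = await fetch("https://developer.api.autodesk.com/authentication/v1/authenticate", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      client_id: clientId,
      client_secret: clientSecret,
      grant_type: "client_credentials",
      scope: "data:read data:write bucket:read", // scopes per the endpoint docs
    }),
  });
  const json = await resp.json();
  return json.access_token; // use as "Authorization: Bearer <token>" from then onwards
}
```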
So in this case, the two legs are your application and the Forge server. In the case of three-legged authentication, it's a bit more involved, because in that case, when the user goes to your web application, it needs to redirect the user to the login site of the Forge or Autodesk server, where the user can log in and approve your application's access to their data. And once that has happened, the Forge server will call the callback URL that you provided for your application and pass in a code, which you can then use in an additional call to turn it into an access token, which from then onwards you can use to access your user's data.
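And a sketch of the three-legged handshake just described: redirect the user to the authorize page, then exchange the returned code for a token. The authorize and gettoken URLs and parameter names are assumptions based on the v1 documentation:

```typescript
import fetch from "node-fetch";

// Step 1: send the user's browser to the Forge login/consent page.
function buildAuthorizeUrl(clientId: string, callbackUrl: string): string {
  const params = new URLSearchParams({
    response_type: "code",
    client_id: clientId,
    redirect_uri: callbackUrl, // must match the callback URL registered with the app
    scope: "data:read",
  });
  return `https://developer.api.autodesk.com/authentication/v1/authorize?${params}`;
}

// Step 2: when Forge calls the callback URL with ?code=..., exchange the code for a token.
async function exchangeCodeForToken(
  code: string, clientId: string, clientSecret: string, callbackUrl: string
) {
  const resp = await fetch("https://developer.api.autodesk.com/authentication/v1/gettoken", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: clientId,
      client_secret: clientSecret,
      redirect_uri: callbackUrl,
    }),
  });
  return resp.json(); // contains the access token for the user's data
}
```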
We are also using something called authentication scopes, which is widely used by other services as well. This is like an extra layer of security, you could call it. So when you're authenticating the application, you can provide -- or actually, you need to provide -- what sort of access you want for the application. Whether it is just read-only access in order to show the model in the viewer, or searching the database, or writing data, and so on.
And when you look at any of the functions or any of the endpoints of the various web services on Forge -- when you look at the documentation on Developer [INAUDIBLE] .com -- for each of these endpoints, the scope required for the given functionality will also be listed. So you know in advance what sort of scopes you should ask for when authenticating your application. So once you have your application, and once you have authenticated yourself, now you have the access token and can start using the other Forge web services, including the Data Management API, which is structured the following way: the raw data, the actual files.
For example, if you're uploading a Revit file, which would be the RVT file, or an Inventor file or assembly, those would be stored in the Object Storage Service, and then logically organized by other services, like the Project service, which organizes all the data into hubs and projects that you have access to or your users have access to, and then further organized into folders, items, and versions. It's similar to how all your files are organized on your drive. There are various drives, various folders, sub-folders, and so on, where the files reside.
And again, if you want to access user-specific data, then you need to use three-legged authentication. You need the approval of the user whose data you're trying to access. If you're storing all the data in your application's own private bucket that you created on the Forge storage service, then you just need two-legged authentication. The good thing is that when you're using the various APIs, including the Data Management API, then using the same API, you have access to the various storage services, whether it's Fusion Team, BIM 360 Docs, or the others.
And once you've uploaded files, or the user uploaded their own files, you can also set up relationships between those files. It could be an attachment relationship, or it could be a reference relationship in case you have a complex model consisting of multiple files referencing each other. It could be an AutoCAD file referencing another drawing file, or an Inventor assembly referring to other sub-assemblies or parts. And you can set up these relationships, which is necessary in order for other services, like the Model Derivative service, to do the translation and be able to find the files which are also needed for the translation. These are the main endpoints provided by the Data Management API in order to get a list of the hubs the user has access to, and the various projects inside those hubs, and the folders, items, versions, and also direct access to the raw files and buckets on the Object Storage Service.
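A rough sketch of that drill-down, from hubs to projects to folders, assuming the documented Data Management endpoints; error handling is omitted and the "first hub / first project" choice is just for illustration:

```typescript
import fetch from "node-fetch";

const BASE = "https://developer.api.autodesk.com";

// Walk the Data Management hierarchy: hubs -> projects -> top folders.
async function walkFirstHub(token: string) {
  const headers = { Authorization: `Bearer ${token}` };
  const get = async (path: string) => (await fetch(`${BASE}${path}`, { headers })).json();

  const hubs = await get("/project/v1/hubs");                       // hubs the user can see
  const hubId = hubs.data[0].id;
  const projects = await get(`/project/v1/hubs/${hubId}/projects`); // projects in that hub
  const projectId = projects.data[0].id;

  // Top folders of the project; drill further with .../folders/{folderId}/contents
  return get(`/project/v1/hubs/${hubId}/projects/${projectId}/topFolders`);
}
```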
Though you're using the same API in order to access the various storage services, there are a couple of things you will have to keep in mind. Some of the file types and folder types and project types might be different, depending on whether you're interacting with Fusion Team or BIM 360 Docs. So here is just a list you can have a look at later. Same when, for example, trying to create a folder or upload a file -- there might be a couple of differences.
And once we have a file on our storage services, then you can start doing things with it. One of the things you can do is use the Model Derivative API in order to extract data out of the model, translate it into another format, or get geometry out of it, or get thumbnails out of it. These are the main functionalities provided by the Model Derivative API.
And this API supports more than 65 formats at the moment. And in the case of an OBJ export, which basically provides access to the geometry inside the model, it supports sub-component selection as well. So once you have a model, you can specify which sub-components you're interested in and translate only those specific parts into the OBJ file format. In the case of other translations, for example, translating a Revit file into the IFC format, the whole file will be translated. You cannot specify which sub-components you're interested in.
And this is what the workflow currently looks like. So we're starting with the fact that you have a file on OSS or an A360 type of storage. Then you can post a job, which basically means requesting a translation for the given file which resides on our storage service. And inside that post job request, you specify the file type you are interested in -- what are the file formats you are trying to get. It could be the SVF format, for example, which is being used by the viewer, in case you want to show the model in your own web application inside a viewer.
And once you've done that, you can ask for the manifest of the given file. The manifest basically means all the information about the various translations which were requested for the given model. And it will also provide the information about the progress of all the translations. So at the moment, because we don't have the webhook for it yet -- it will be coming later -- you have to keep polling. You have to keep asking for the latest manifest for the given model in order to find out when the translation actually finished.
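Until that webhook exists, the polling loop being described might look like this sketch; the manifest endpoint and its status values are assumptions taken from the Model Derivative documentation:

```typescript
import fetch from "node-fetch";

// Poll the manifest until the requested translation finishes (or fails).
async function waitForTranslation(token: string, urn: string): Promise<void> {
  const url = `https://developer.api.autodesk.com/modelderivative/v2/designdata/${urn}/manifest`;
  for (;;) {
    const manifest = await (
      await fetch(url, { headers: { Authorization: `Bearer ${token}` } })
    ).json();
    if (manifest.status === "success") return; // all requested derivatives are ready
    if (manifest.status === "failed" || manifest.status === "timeout") {
      throw new Error(`Translation ${manifest.status}`);
    }
    await new Promise((r) => setTimeout(r, 10_000)); // back off before asking again
  }
}
```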
And once the translation is finished, then you can go two ways, depending on what you want to do. If you, for example, just want to access the translated model and download it, in that case, you can just call the GET derivative endpoint, passing the URN, the identifier of the given translation that you requested.
However, if you want to get the hierarchy information out of the model, which is also available, or the properties which were extracted from the original model, in that case, first you have to ask for the metadata, which basically means a list of IDs of the various views which are available. In the case of Inventor models, it would be just a single view. But in the case of Revit models, it can happen that you have multiple views for the same model. And in that case, you have to specify which view you're interested in. And once you have the ID of the given view, then you will be able to call these other two endpoints to get the hierarchy or get all the properties metadata from the model.
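The metadata-then-view-GUID sequence described here would look roughly like this, assuming the documented Model Derivative metadata endpoints and response shape:

```typescript
import fetch from "node-fetch";

const MD = "https://developer.api.autodesk.com/modelderivative/v2/designdata";

// List the available views (GUIDs), then pull the object tree and properties of one view.
async function getViewData(token: string, urn: string) {
  const headers = { Authorization: `Bearer ${token}` };
  const get = async (path: string) => (await fetch(`${MD}${path}`, { headers })).json();

  const metadata = await get(`/${urn}/metadata`);   // one GUID per view (Revit can have several)
  const guid = metadata.data.metadata[0].guid;      // pick the view you are interested in
  const hierarchy = await get(`/${urn}/metadata/${guid}`);             // object tree of that view
  const properties = await get(`/${urn}/metadata/${guid}/properties`); // extracted properties
  return { hierarchy, properties };
}
```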
And this is what posting a job looks like. This is all the data you need to post to the given endpoint. So as you can see, it's quite simple. You just have to specify the URN, which is the identifier of the given model that you want to translate.
In the case of a complex model like an assembly with sub-assemblies and parts, you also have to specify the root file name that you want to translate, if it was uploaded as part of a zip. Because there are two ways of translating a complex assembly at the moment: you can either upload everything -- all the various files -- in a zip file to our server, or you can upload the files separately and then set up the relationships between the files later on.
In the case of dealing with a simple file, like a part file, it's very simple. You just have a single file, so you don't have to care about the others. But one thing to keep in mind when you're uploading any of your models to our servers is that you should definitely keep the file extension, because this is what the translators are looking at in order to know whether they need to call the Inventor translator or the AutoCAD translator or the Revit translator and so on.
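Putting that together, a hedged sketch of the POST job request for an SVF translation; the urn is the Base64-encoded object ID, and rootFilename/compressedUrn apply only to the zipped-assembly case mentioned above:

```typescript
import fetch from "node-fetch";

// Request an SVF translation of a (possibly zipped) design; urn is the Base64-encoded object ID.
async function postTranslationJob(token: string, urn: string, rootFilename?: string) {
  const body = {
    input: {
      urn,
      ...(rootFilename ? { rootFilename, compressedUrn: true } : {}), // only for zipped assemblies
    },
    output: {
      formats: [{ type: "svf", views: ["2d", "3d"] }], // the format the viewer consumes
    },
  };
  const resp = await fetch("https://developer.api.autodesk.com/modelderivative/v2/designdata/job", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return resp.json(); // job accepted; progress is then read from the manifest
}
```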
And this one is just basically showing where the Model Derivative service is storing all its data -- the manifest, the translation, properties, and hierarchy -- and how you can request those translations. So if the file is on an A360 type of storage, then you can either just drill down to get to the given version that you want to translate and then pass the ID of that to the Model Derivative service. Or, if for some reason you prefer, you can also drill down to the actual raw file on the Object Storage Service and get back the ID of that, and then pass that to the Model Derivative service. Both of them should work fine.
And then we can talk about Forge Webhooks. Do you want to use that one or--
MANMOHAN SINGH: Sure. So let me--
ADAM NAGY: [INAUDIBLE]
MANMOHAN SINGH: Yeah. So how many of you are generally familiar with the concept of webhooks? OK, a lot of folks. So it's a fairly simple concept. A webhook is basically an HTTP URL.
The goal of a webhook is to actually reduce complexity when you want to know about a state change in a system of your interest. And how do you reduce complexity? By taking the complex protocol out of it and making it very simple to use. The core concept of a webhook is you specify a URL, and as long as you can handle an HTTP POST on that URL, you are good to go.
And what you use that URL for is to register with the target system to say, hey, I'm interested in these events. Here's my URL. Give me an HTTP POST as a callback whenever this event happens. So that's all the concept of the webhook is. And yeah.
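As a concrete illustration of "as long as you can handle an HTTP POST on that URL, you are good to go", a receiver can be as small as this Express sketch; the route name is arbitrary:

```typescript
import express from "express";

const app = express();
app.use(express.json()); // webhook payloads arrive as JSON in the POST body

// The URL you register with the target system; it just has to accept a POST.
app.post("/callbacks/forge", (req, res) => {
  console.log("event received:", req.body); // react to the state change here
  res.sendStatus(200);                      // acknowledge quickly so the sender is happy
});

app.listen(3000);
```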
So I think this is similar to what Brian showed in the morning. So these are some of the applications that actually use webhooks in some form, like Slack, [INAUDIBLE], Zendesk, Dropbox. So pretty much every new-age cloud service that you have today has webhooks on it.
If you use GitHub -- almost all engineering teams use Git or GitHub in some form -- GitHub has a very nice webhooks integration with Jenkins. And you have all of [INAUDIBLE] pipeline running with [INAUDIBLE] you [INAUDIBLE] a job. And you get your [INAUDIBLE] things figured out.
Stripe, I believe, has a webhook that says every time a credit card transaction is charged, you can actually register [INAUDIBLE], and you'll know when a merchant actually did some transaction and so on. So all of these expose a certain set of events that are important for their platform and allow applications to build on their platform. That's the common theme.
There is state change in these systems, and they want applications interested in the state change to be able to know about it without any complexity. You don't see a Slack SDK, for example, to do that. You just know whether the message was posted and things like that.
So I talked about that a little bit. What problems do they solve? Why do we need webhooks? For the most part, if you look at desktop applications, almost all of them have this concept of an add-in or plug-in.
So you have a piece of code, an executable, that somehow runs with the desktop application and has access to the environment. That's not going to happen in the cloud. There might be specialized cases where it could happen, but for the most part, no cloud service would allow you to, say, upload a piece of code that runs along with it. So how do you do the same plug-in thing? Because what the plug-ins were [INAUDIBLE] doing is extending the functionality of your desktop applications.
So how do you extend the functionality of the API that you use? Webhooks basically sort of solve that problem, because for the most part, you would have added some plug-ins that are interested in knowing what is happening -- what is the state change that happened in this application that I want to react to? So it's a don't-call-us, we'll-call-you kind of concept. So they solve that problem.
They're also commonly known as pipes for the web. So you can actually pipe: something happened in your data, a file was added to a folder.
And at that point, you want to upload, let's say, the same file to Google Drive or somewhere else after a modification. And then if that system supports a hook, you can pipe that call to do something else. So you can actually build a chain of functionality just using webhooks.
Yeah, some popular things on the web today: If This Then That -- if you guys have seen it on the internet, IFTTT is a very popular platform to do that -- and Zapier, which is another popular platform that allows you to programmatically build these pipes. So some of the use cases that we can think of, at least in our ecosystem, are: when a file is added to a project or to a folder, you want to do something with it, maybe trigger some kind of extraction on the file, run some sort of learning algorithm on that file, and things like that. And you don't want to continuously poll that folder to know when that happened. A file is added, you get a message, you basically do your processing.
The ones at the bottom I put a star on, because these are on the roadmap. These events are not exposed today in webhooks. We are just exposing the data management events. And by data management events, I mean all the events that are related to the state changes that you can do with the Forge Data API. So for example, adding a file, move, copy, delete a file, and things like that, add a folder, and so on.
So this is our API goal. I have underlined things to sort of emphasize the important points here. We want to provide a simple platform. We don't want you to have the need for an SDK, or to write WebSocket or Socket.IO or that kind of complicated code, just to know about what happened in the system. So the goal has to be simple. And that's where it's a simple HTTP POST -- that's all.
The goal is for external users. And it's actually a little bit important for us that we're not using this platform internally -- the target is not to use it for internal service collaboration, but to make it all about coarse-grained events that consumers of the Forge platform need.
And you should ideally be able to hook into all Autodesk events, which means that if there is an API on Forge that actually manages or changes state, eventually, you should see those events on the Forge webhooks service list. So basically, today you have the Data Management APIs that expose data events. You should be able to listen to those. Tomorrow -- well, we have it on the roadmap, for example -- when you create a design job and your translation gets completed or your derivatives get generated, instead of polling, you should be able to see those events [INAUDIBLE].
Yeah, just a quick note on that. It's not meant for real-time connected IoT devices. Expect HTTP latency. Don't expect the streaming kind of paradigm here that actually has very low sub-second latency. So you will get a callback, but it's not an IoT-style callback.
So here's a sort of picture that demonstrates what the goal is. Everything you see on the left side of the green line are the products that we use: Fusion Team, BIM 360 Docs, BIM 360 Team, the derivative service -- which is internal, and actually exposed as the Model Derivative service on Forge -- Fusion Lifecycle, and so on. So all these products have their own APIs, and you interact with them directly or through a Forge API to actually manipulate them.
So for example, for BIM 360 Docs and Fusion Team, you actually use the Forge Data API to manipulate and add data and so on. Then you have the Model Derivative API, through which you interact with creating derivatives and translations and so on. And then you have Fusion Lifecycle. It's not in Forge, but Fusion Lifecycle has its own API that its customers use.
So on the left side of the green line are the product APIs. On the right side, you see things which are exposed as first-class Forge APIs, so you have Forge Data Management, [INAUDIBLE], and now Webhooks. So these are something that on the Forge platform you should be able to see and use. And the box shows the events that we have today. And this is what the goal is.
So basically what should happen is you go to Forge, you look at the list of all the events that we expose, and you choose: I want to know when a file was modified, a folder was modified, a design was complete, an item was [INAUDIBLE], and so on. So you just choose the set of events that you want to listen on. They [INAUDIBLE] all and going. And that's it.
When any of those things changes, either because of your action or otherwise, you will get a callback, and you can do your processing on the [INAUDIBLE]. Again, on the right side -- the rightmost -- is just a picture of the consumers. So it could be developers like yourselves, or it could be a platform itself, like Zapier or Jitterbit, one of the [INAUDIBLE] partners. And these platforms basically allow you to -- that's what I was talking about with the pipes thing. So these platforms allow you to take the data that comes in, the callback that comes to you.
At that point, you go and see -- you want to interact with an external subsystem, or do your processing, or something. The important thing is that that's not something that the webhooks service controls. It's just a picture that says that you can actually build workflows on top of the event and callback.
So a webhook, as a general concept, I said, is an HTTP URL. But from the Forge Webhooks API perspective, how do we encapsulate or define the webhook object per se? A webhook object, from the Forge API perspective, requires four things. One, obviously, you need an HTTP URL so that you can actually get a callback on the event.
It has an owner. So who creates the hook? Is it an application that directly creates a hook, in which case the application is the owner? Or a user, on behalf of an application, can create a hook, in which case the owner of the hook is the user and not the application.
And the other important thing is you must specify a scope, a sort of partition on the data that you want to listen on. So for example, you cannot say, I want to listen to all the accounts on the Forge data platform. That's not something you can do. You have to choose exactly which project you want to listen on, for example, in this case. Or even a smaller scope, like a particular folder within a project.
But you must choose a data scope that you want to listen to, for multiple reasons. One, performance; and the other thing, it just doesn't make sense for anybody to listen to everything in the world. So this sort of partitions the things that you want to listen on specifically.
And obviously, it's a hook, so it's associated with an event. If you look at the API -- I'll share the documentation link -- you can listen to one or more or all events on a particular scope. So the idea is you should be able to discover events.
Right now we don't have a meta API to discover events. There might be sometime in the future, but there is enough documentation for you to see a list of all the events that you can actually register to. So you discover the events, you register your hook, and then you receive callbacks. So you need an endpoint to actually process those callbacks that the webhooks service will make when that event happens.
Yeah, I wanted to specifically cover security, because in this case, we are making callbacks to public endpoints, which are not specifically in our control. And this is customer data, and information about the customer data is relayed in these callbacks. So it's important to look at how we are securing these things. So first up, just like any other Forge API, you must do OAuth to be able to access the Forge Webhooks API.
So you can do a two-legged OAuth, like Adam talked about, or you can use a three-legged OAuth. And that has a bearing on who becomes the owner of the hook. So normally, if you want it without a user context -- if you're building an application that has no user context, and you just want to key off some processing every time a file is added to a folder.
You don't care, it's not about users, it's about just building a workflow -- in that case, you would use two-legged authentication to access webhooks. And if you are actually an individual user who wants to know about certain events, then in that case you will use a three-legged context.
There is a set of specific whitelisted keys. If you are, like, you know, a high-trust application or partner -- yeah, I have honestly not thought about it, but there might be cases where you can get your keys whitelisted to listen to a wider variety of events and go through fewer security checks to access data.
So first is, you must do authentication to actually be able to create hooks. And the second piece is authorization. So you can only create a hook for things that you have read access to. You cannot create a hook on, like, a BIM account that you are not a member of, or a project that you don't have access to. You can actually make the call, and it will simply fail because you are not authorized for read. And we also check -- so because it's a sort of offline thing, we call you.
So when you create a hook, we check your access. But we also check when we are about to deliver the callback payload to you, because these things are asynchronous. So at a certain later point in time, your access might be revoked, or you're no longer a member of the project, and you shouldn't be getting the callbacks. So we do check at the callback [INAUDIBLE].
This is sort of -- if you're familiar, again, with how other services like GitHub and Facebook and Twitter do webhooks, they have this notion of a shared secret. So what happens is, when we actually deliver the callback payload to you, we actually sign the callback payload with the shared secret that you created when you registered the hook. This way, you can actually be sure that the call came from the webhooks service and not from a random user. Because your URL is public, you have no way to make sure that somebody else cannot call you and just post a fake callback.
So it prevents callback spoofing. That's an important security measure that we have. And we didn't want to complicate it by making it two-way HTTPS/SSL, because then, again, it's not easy to build two-way SSL communication into applications. So it goes back to simplicity. We want it to be simple, not complicated.
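A sketch of that check on the receiving side. It assumes the signature arrives in an x-adsk-signature header, formatted as sha1hash= followed by an HMAC-SHA1 of the raw body computed with the hook's shared secret; the header name and hash format are assumptions to confirm against the webhooks documentation:

```typescript
import express from "express";
import { createHmac, timingSafeEqual } from "crypto";

const SECRET = process.env.WEBHOOK_SECRET ?? ""; // the shared secret registered with the hook
const app = express();

// Keep the raw body: the signature is computed over the exact bytes that were sent.
app.use(express.json({ verify: (req: any, _res, buf) => { req.rawBody = buf; } }));

app.post("/callbacks/forge", (req: any, res) => {
  const received = String(req.header("x-adsk-signature") ?? "");   // assumed header name
  const expected = "sha1hash=" + createHmac("sha1", SECRET).update(req.rawBody).digest("hex");
  const ok =
    received.length === expected.length &&
    timingSafeEqual(Buffer.from(received), Buffer.from(expected));
  if (!ok) return res.sendStatus(403); // not signed with our secret -> likely spoofed
  res.sendStatus(200);                 // genuine callback; process the event
});

app.listen(3000);
```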
So this is sort of basically showing the same thing. We source events from data management. And we check at the time of hook registration: do you have access to this project that you want to listen to? And we also check at the time of delivering: do you still have access at the time of receiving the callback?
Some important notes -- because these are the key characteristics that you need to be aware of when you implement your callback. So when you have any URL handling the POST, it's important for you to understand what we expect: what's the frequency, what if your server was down for five minutes, does it go away, and things like that? This service is still in private beta, so we are working on a formal SLA. But for the most part, these are the things of interest that you should know.
One is, what is our delivery guarantee? In any eventing system, there must be some sort of delivery guarantee. The general delivery guarantees are at least once, at most once, or once and only once. So once and only once, normally, pretty much nobody does in a distributed system. At most once is hard.
And we actually chose an at-least-once delivery guarantee. So in rare cases, you might get the same event twice. But that's the sort of delivery guarantee we provide. So you will get the callback at least once.
Ordering between callbacks is not guaranteed. We are debating about this, but it's not in the first version. Because of the way our internal systems are structured, ordering between events is not guaranteed.
However, for the most part -- from what we looked at, the rate of data creation is not high enough to actually say that you will always get events out of sequence. For the most part, you should get events pretty much in sequence. But you should not expect or build a system assuming that the events will always be in sequence. Assume that they could be out of order.
The other thing you should be aware of when you're implementing the callback URL is you have a fixed time window in which to respond. And this is because we could have thousands of callbacks on an event, and every callback requires us to actually open an HTTP connection. And since it is a public URL, somebody can do a for loop and add, like, thousands of fake endpoints. So it's important that you respond within a time limit. I think it's 10 seconds or something. If we don't get a valid response from you within 10 seconds, we assume that your URL doesn't work.
And then we get into a fallback measure where it says, OK, so maybe it's a temporary problem, maybe it's actually not a temporary problem. So in order to handle temporary problems, the events get queued into what we call a retry loop. And the retry loop is -- actually, the information here is probably not correct -- the retry loop is actually a sort of exponentially increasing thing.
It does 15 minutes, and then 30 minutes, and then I think 2 hours, and then 24 hours. So within a span of 24 hours, it does 4 callbacks to make sure that, if there was some transient issue and you've recovered, you don't lose events in that time. But if you don't respond to any of those four callbacks in 24 hours, we basically deactivate your hook, because it could well be spam. It's hard for us. You can actually re-register and start the processing again.
But from the time of the deactivation until you re-register, you will actually lose events. And it's generally rare, but if you have a system that is actually doing some processing and it's down for more than 24 hours -- we feel it's a fair window. We looked at other services like Stripe, which does transaction processing, and they have, I think, even less -- probably a 12-hour window for this.
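One simple way to stay inside that 10-second window in the first place, and keep out of the retry loop, is to acknowledge immediately and do the real work outside the request, for example:

```typescript
import express from "express";

const app = express();
app.use(express.json());

const pending: unknown[] = []; // stand-in for a real queue (SQS, a DB table, ...)

// Acknowledge immediately; never do slow work inside the callback request itself.
app.post("/callbacks/forge", (req, res) => {
  pending.push(req.body); // just enqueue the payload
  res.sendStatus(200);    // respond well inside the timeout window
});

// Do the heavy processing outside the request/response cycle.
setInterval(() => {
  const event = pending.shift();
  if (event) {
    // translate, copy, notify, etc. -- anything that may take longer than the callback timeout
  }
}, 1000);

app.listen(3000);
```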
And I talked about latency. Please don't -- the whole thing is not geared for, like, sub-second or sub-millisecond stuff. It's an HTTP call. It has HTTP latency and that sort of thing.
With that, actually, I wanted to just talk about what we built, just to show how you can use this API, and actually how easy it is today with AWS and its infrastructure to build URLs that do something useful. You don't have to build, like, a whole host of servers and stuff. Because one of the concerns is: oh, I need a URL? That means I probably need a server, and I actually need to host a server somewhere, and so on.
So let's assume for a second that I had a problem that says: I want to know all the changes that happened in a folder in the last n minutes or n hours or in a day. With the current data management [INAUDIBLE], it's pretty much impossible to build that solution, other than you probably just do a poll, get the list of all the files in there, make a mark somewhere, and then do another poll, and look at the [INAUDIBLE].
Well, it's fairly simple to build with webhooks. So we basically built this, which actually shows how you can build a pretty much serverless webhooks application that does the same thing in, like, less than 150 lines of code. So if you want to know what happened in a folder, basically you need sort of three endpoints. You need somebody to come and say, I'm interested in listening to this folder and all the changes that happen in this folder. Somebody comes to that application.
For that folder, it actually just creates all the event hooks on that folder. So you have an API to do a watch on the folder. And then you need an endpoint to receive the callbacks whenever something happens in that folder. And then you need an API to expose all the changes. So these are the three APIs that our goal was to build.
And here is how we did that. So basically, this is not part of the hooks service. It's just -- I'm trying to show what you can do without actually doing a lot of server infrastructure and so on. So we looked at AWS. If you have tried to play around with it, you will see AWS has an API Gateway, and AWS has this nice serverless offering.
You can run Lambdas, like, functions. And you can actually hook those functions up with anything. So you can actually take that function and hook it to an API endpoint. And all you need is, like, three endpoints and three functions, and then you're done. So basically, a function to register a folder to listen to everything, and another endpoint to basically receive the callback and store it in DynamoDB.
Again, you don't have to install anything. And then another endpoint to say, hey, go to the DB, query, and return all that happened in the last -- so let me see if I can open this up. So I mean, if you're interested, tomorrow in the Dev Lab or some time, just come look for me and I can share this code sample as well, or I can share it with [INAUDIBLE], and then we can host it somewhere.
So this is the demo sample I was talking about. So you have an API call, watch -- what it does is it takes a folder, and actually all it does is register: it just takes that folder, goes to the webhook service and says, I'm interested to know about all events in this folder, and that's it. So you have a lambda function that does that part.
So this is the code basically. I'm not going through the code. But this is linked to a lambda function.
So you have watch, which takes a folder, goes to the webhook service, and says, I'm interested in listening for everything. And then you obviously have a callback, because you need to register a URL. And that's another endpoint linked to another lambda function that actually receives this callback--
AUDIENCE: [INAUDIBLE]
MANMOHAN SINGH: Oh. So do you know how to mirror this? Sorry. Is there a way? Can I get out of the presentation mode?
PRESENTER: So you should be able to mirror the desktop. [INAUDIBLE] Website?
MANMOHAN SINGH: Hm, perfect. [INAUDIBLE]
Yeah, sorry. So I don't know where I should start from, but -- yeah. So is the font good enough? Probably now. So I talked about this endpoint. So this is one of the endpoints.
So this is the API Gateway. And I talked about three endpoints: you have watch, you have changes, and you have callback. Watch basically says, take a folder, go register with the webhook service, I want to listen to everything. And you have callback, which says, OK, this is where I receive a callback when anything happens on the event. So this is powered by the Webhooks API.
And then you have changes, which is basically a GET endpoint. And all it does is, basically, you give it a time, and from that time onwards, it gives a list of all the things that happened -- basically a list of all the events that happened.
So if you look at, for example, let's say, callback -- what it does is, every time it gets a POST, it just takes the whole callback and adds it to a DynamoDB table. Yeah, I'm not going to go into the code, but I just want to show that there's really no infrastructure here. You've got a lambda function that just inserts what it got from the call into the database. And if you look at changes, all it does is -- it's another lambda function.
Basically, what it does is it goes to the DynamoDB and says, give me all the things from this time, from a given time. And that's it. And we have built a system that scales, because you're using DynamoDB, and it doesn't matter how many events you insert -- you have pretty much zero infrastructure to build this system, because you're using API Gateway and Lambda functions, and we didn't install anything.
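For flavour, the two Lambda handlers being described can be roughly this small. The table name, the payload fields, and the API Gateway event shape are assumptions of this sketch, not the actual sample code:

```typescript
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();
const TABLE = "folder-events"; // hypothetical table keyed by folder and timestamp

// "callback" Lambda: store whatever the webhooks service posted.
export const callback = async (event: { body: string }) => {
  const payload = JSON.parse(event.body);
  await db.put({
    TableName: TABLE,
    Item: {
      folderId: payload.hook?.scope?.folder ?? "unknown", // assumed payload shape
      receivedAt: Date.now(),
      payload,
    },
  }).promise();
  return { statusCode: 200, body: "ok" };
};

// "changes" Lambda: list everything received since a given time.
export const changes = async (event: { queryStringParameters?: { since?: string } }) => {
  const since = Number(event.queryStringParameters?.since ?? 0);
  const result = await db.scan({
    TableName: TABLE,
    FilterExpression: "receivedAt >= :since",
    ExpressionAttributeValues: { ":since": since },
  }).promise();
  return { statusCode: 200, body: JSON.stringify(result.Items) };
};
```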
And you can actually process and show a list of changes that happen in the folder. It's an easy thing to build with webhooks, really, if that's something you want to do. But I just wanted to show how you can use the new-age AWS stuff to not transfer [INAUDIBLE] and still process whatever you want to do.
Let me see. I had a video of this thing working, but maybe you can just do the [INAUDIBLE] file demo. Tomorrow, at the Dev Lab or otherwise, if you're interested in the code -- this code sample, we haven't hosted yet, but I can share it directly. And if you have more questions, just come reach out to me.
ADAM NAGY: Yeah, so actually you've seen it in the morning, this sample, which was created by my colleague [INAUDIBLE]. It was on the main stage. So basically what this does is that, using the Data Management API, it lists the hubs and projects that the given user has access to, once they've logged in -- I already logged in here. And when I click on a given folder, then using the hooks API, it checks if the given hooks have already been registered for the given folder.
And as you can see, in this case, they have already been registered. That's why they are highlighted here. So I've done that, which means that now if anything changes -- like, I'm adding a new folder, or a new version of a file has been added on A360 or Fusion Team or any of those storage services -- then I will get a notification.
So if I go back to A360 and -- as you can see, these are the emails I have. So I go back to A360, and I create, for example, a new folder, and I wait a bit; then there should be a notification coming in through email. And as you can imagine, since you can listen to new file versions being added to A360 or these types of storage, you can easily hook it up to various other processes. You could automatically, for example, do version comparisons to check what changed between the latest version and the previous version, or hook it up to some PLM system -- that a new version has been added to your project, and so on.
So I can also show the code, which is available. It's on the [INAUDIBLE] GitHub repository. So it's all there and you can have a look at it later. So as you can see, it's quite easy to interrogate the webhooks in order to find out what webhooks have already been registered, or to register your new webhooks. You just have to call or access a URL, something like this, through the webhooks [INAUDIBLE] systems.
And the rest, and then specify the given event that you are interested in -- for example, fs.file.added, a new file has been added to the storage system -- slash hooks. And then you also have to provide the authorization for it. So you pass in your access token.
And in the case of creating a new hook, you would also need to provide the ID of the folder on which you want to enable this. And then just simply send a POST request to the given URL with the access token, and that's about it. So it's really simple to use.
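Put together, the registration call being described looks roughly like this; the fs.file.added event name and the systems/data path come from the talk, while the exact body fields (callbackUrl, scope.folder) are assumptions per the webhooks documentation:

```typescript
import fetch from "node-fetch";

// Register a hook: "call me at callbackUrl whenever a file is added in this folder".
async function addFileAddedHook(token: string, folderId: string, callbackUrl: string) {
  const url =
    "https://developer.api.autodesk.com/webhooks/v1/systems/data/events/fs.file.added/hooks";
  const resp = await fetch(url, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      callbackUrl,                 // the public URL that will receive the POST
      scope: { folder: folderId }, // the data partition the hook listens on
    }),
  });
  return resp.status; // 201 on success; a GET on the same URL lists existing hooks
}
```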
And if I go back just for a second to the presentation that I had here in the PPT -- when using the Model Derivative API, as you can see, currently you have to just keep polling for the latest manifest. And soon, early next year, the webhooks API will be available for the Model Derivative API as well. So that will be much better for your server, and of course for our servers as well -- much less traffic, much less processing time for both of us, because you just post a job request and subscribe to the given webhook.
And when the given translation is available, your server will be called. And then you can react to it in your application. And I think that's-- ah, wait a second! Well, I wanted to actually show the--
MANMOHAN SINGH: --one thing I think I missed the part where we talk about the private beta.
ADAM NAGY: OK, so are you there already?
MANMOHAN SINGH: No, I'm not. Me, I can just--
ADAM NAGY: Where do you want to go?
MANMOHAN SINGH: Plus. (WHISPERS) [INAUDIBLE]. Yeah, [INAUDIBLE]. Yeah.
ADAM NAGY: OK, there you go.
MANMOHAN SINGH: Yeah, sorry. I forgot about this one. So some people came by and said, well, we wanted to know whether this API is available, downloadable. So it's actually available in private beta now, which means that if you go to the portal, you won't see the API. But there is a private beta support email.
If you send a request to that email, we will add your app ID, and then you can actually use the API. So yeah, you can use it. It's available today in production for you to test out.
There is documentation here. Again, it's not something that is linked -- until we go live, the documentation would be -- but you can use this direct link to actually go and see the documentation. So we can share this information on that.
So that's something to know. It's not something we're talking about that will happen -- it's there. You can use it. We just need to whitelist your application ID for you to actually use it.
ADAM NAGY: And the last couple of slides are just on the various resources available. So obviously, you can go to the developer [INAUDIBLE], the [INAUDIBLE] site, to get access to the documentation, create your new application, and so on. And we also provide support on Stack Overflow.
So you can just search for the various APIs that you are interested in with regards to Forge. And we also have loads of sample applications on the Autodesk-Forge GitHub repository. It's completely public. So do have a look at those. That's about it. Does anyone have any questions?
AUDIENCE: Do you have a lot of C# samples [INAUDIBLE]?
ADAM NAGY: We now have a few. So most of the samples are in NodeJS, because we thought that was the easiest to get started with if you have no experience with web development. But we have quite a few, not just [INAUDIBLE], samples as well.
AUDIENCE: [INAUDIBLE]
ADAM NAGY: Sorry?
AUDIENCE: Do most people develop in NodeJS or C#?
ADAM NAGY: Well, it depends on the given company. If they are coming from a desktop environment background, developing .NET desktop applications, then obviously they're probably more likely to use [INAUDIBLE].
AUDIENCE: [INAUDIBLE]
ADAM NAGY: Yeah well, we keep adding new samples. Any other questions? This is the last session between the dinner and the--
AUDIENCE: [INAUDIBLE]
PRESENTER: I think it's early Q1, but I can get back to you with the exact dates on it. But yeah, it should be pretty early Q1. Yeah?
AUDIENCE: When you subscribe for an event, you must need to know what [INAUDIBLE]?
MANMOHAN SINGH: Yeah, it's in the documentation. You can look at all the payloads that you will get for a particular event callback. So if you get the file added event, what is the data that you get for that file? You can see--
AUDIENCE: [INAUDIBLE] application
Every change in mobile application [INAUDIBLE] a callback files change [INAUDIBLE].
MANMOHAN SINGH: So right now there isn't, but we sort of took a call that, for example, if you copy a folder, we won't send an event for every single individual file in that folder. We'll send you one meta event that the folder was copied. So this sort of batching is done out of the box for certain events.
But there is no way for you to say if 10 file [INAUDIBLE] events happen in one minute, then give me one callback now. Sorry. There's no way right now to do it.
In general, if your frequency of adding files is too high, then you're going to get as many callbacks. But it's fairly simple -- most HTTP servers and applications can handle a lot of concurrent calls.
AUDIENCE: For long running processes, [INAUDIBLE] instead of getting something that just [INAUDIBLE].
MANMOHAN SINGH: So we haven't done model derivative yet, but yeah, we are going to do it in Q1. The goal for the first release would be -- it would still be at the end, at design job completion. And the real reason is, again -- these are, like, sort of different ends of the same requirement -- that we want to stick to more coarse-grained events. Like, building a progress listener on the translation may be a good use case.
But we'll go out first with a translation-complete event, so that addresses -- it seems to address the large majority of problems. But if you want to listen to progress events, then we'll see: if there seems to be a lot of interest and it seems like a good use case, we can do that. But we don't want to actually go and make very fine-grained events. We want to be a [INAUDIBLE] that allows workflow and passing right now.
AUDIENCE: [INAUDIBLE]
MANMOHAN SINGH: OK, so that's good feedback, and we can keep it in mind. Yeah, obviously, the number of progress events you might get is something that -- yeah. But it's good feedback. I'll take [INAUDIBLE]. Thanks.
ADAM NAGY: Anything else? Was there something there?
MANMOHAN SINGH: OK, if there are any-- no other questions, I think--
ADAM NAGY: Well, yeah. So if you have any more questions, later on you can find us at the Forge Answer Bar. It will be in the Exhibition Hall throughout AU. So do come there.
MANMOHAN SINGH: There's Dev Labs too, I guess. Thank you.
[APPLAUSE]