AU Class

Amplify Your Plug-Ins and Forge Dashboards to Full-Stack with AWS Amplify

Description

In this class, you'll learn how to use AWS Amplify to connect your Autodesk add-ins to a database and let your users exchange data. You'll also learn how to use AWS Amplify and React to host the Forge Viewer and build custom web dashboards with authentication for your clients. AWS Amplify is a complete solution you can use to build and host full-stack applications, letting you concentrate on app development and avoid spending time and money on server maintenance and complex admin tasks. Adding a database, account authentication, file storage, and several other functions can be achieved quickly using a command line interface, which makes it very easy to start new projects.

Key Learnings

  • Learn about the different tools available in AWS Amplify.
  • Start a new AWS Amplify project and connect it to your add-ins.
  • Learn how to integrate the Forge Viewer with AWS Amplify functions like Hosting, Storage, and Authentication.
  • Identify new use cases to connect web apps to your add-ins.

Speaker

  • Mehdi Blanchard
    An Applied Technology Manager and Software Developer at HDR for 12 years, Mehdi assesses new technologies and finds creative ways to implement them in HDR's workflows. He also researches, designs, and develops custom software solutions for HDR globally to improve the BIM process. Mehdi has worked in the AEC industry for more than 18 years. He holds a postgraduate degree in software engineering from the Mediterranean Institute of Research and Computer Science and Robotics (IMERIR) in Perpignan, France, and has also studied computer graphics and 3D animation at the Computer Graphics College of Sydney (now SAE).

Transcript

MEHDI BLANCHARD: Amplify your plug-ins and Forge dashboards to full-stack with AWS Amplify. Although some of the presentation today will be done using Revit, this technology can apply to any other Autodesk product that has an API to create add-ins. My name is Mehdi Blanchard. I'm an applied technology manager at HDR. And this is my AU class.

I've been working in the AEC industry for the last 20 years, and I'm always on the lookout for technologies that can improve our workflow. I design and develop custom software solutions for HDR, so I need to develop prototypes very quickly for evaluation as well as developing solutions that can be used by a large number of employees for many years. Working in a large company like HDR allows me to always find a use case for interesting technologies.

HDR is a creative firm for architecture and engineering with more than 12,000 employees worldwide in more than 200 offices around the globe. We have access to highly informed, best practice, innovative, future-thinking, and top talent from around the world, enabling us to elevate the communities, industries, and professions we serve. We specialize in health, education, science, civic, and defense. And we design and deliver projects across a multitude of sectors at every scale.

HDR is an employee-owned company that was established more than 100 years ago and is still growing at a steady pace. HDR receives many design awards and number-one rankings in both engineering and architecture. We have offices in the USA, Canada, China, Singapore, Germany, the UK, the Middle East, and in Australia, where I joined the HDR office 12 years ago.

Even with the best tools, we always need to find good use cases to explain how they can actually be valuable for the company. So I'm going to start this class by showing some of the tools that we have developed that are using AWS Amplify. The first one is a Revit add-in, and the second one is a custom web viewer portal.

What we often have to do in architecture, once a standard room design has been reviewed and approved, is to replicate it to other similar standard rooms in the building. In our Revit add-in, the design can be analyzed and uploaded to a database and then replicated to similar templates on the floorplan, taking into account changes in orientation, mirroring, and even room size and overall location. This can be done across multiple projects and allows us to spend more time on the design and less time on the repetitive tasks.

Developing custom websites using the Forge Viewer for our clients can be quite time-consuming. So we have developed an internal portal using AWS Amplify that allows the project team to create its own website for the client without requiring IT support, while also allowing for a number of customizations depending on their own project setup and client requirements. Once the project has been set up, the client can log in, see their own project, and create additional views that are specific to their needs as well.

For this class, I specifically developed a fictional solution where we require communication between a client website with a Forge Viewer and the Revit add-in for the project team. This solution will handle a todo list between the client and the team and will showcase some of Amplify's main features.

Here are the requirements for the todo app. The client logs in to a website where they can see their project in a Forge Viewer. When they require a modification, they can create a todo item. They can add comments and images to their todo item. And they can see the updated todo status in real time when the company updates it.

On the company side, within Revit, users can see in real time when todos are added by the client. They can see comments and images as they are added. And they can update the todo statuses and add comments when needed.

This is what the Forge Viewer looks like. The page is split in two. On the left-hand side, we have the Forge Viewer with an additional custom extension. And on the right-hand side, we have our user interface to view our existing todos and create new todo items.

When we click on an existing todo item, we can view the todo properties and its existing comments and the images that have been uploaded. We can also add a new comment and upload a new image. We can click on an image to make it larger, and the layout changes automatically to adapt to smaller widths, like mobile phones.

For the project team, this is the Revit add-in. We have a new item on the ribbon and a button that launches a modeless dialog box, allowing the user to keep working in Revit while the dialog box is open.

When the dialog box is opened, it immediately connects to the database to download todos and display them in a table. When a user clicks on a todo, the user can view the todo's details, comments, and images. The user can also update the todo, for example, change the status, and can also add comments.

For both the web viewer and the Revit add-in, any changes made by the client or the team are reflected in real time in both user interfaces. In this class, I won't go into the details of how to create an add-in. I will focus mainly on the Amplify side of things.

I'm now going to run a demo so that you can see both applications working at the same time. So this is our web client. We've got the authentication that has been set up with Amplify so the user can sign in. And then, all the todos are downloaded from the database.

And we can see our web viewer, with the todos on the side here. When I click on a todo, I can see all the different properties of the todo and the images that have been uploaded. I can also see comments as they are added, or upload new images.

So, for example, if I wanted to create a new todo, I would use this extension to filter my levels. And then we have this custom extension as well to select our rooms. So I can click and select a room.

And now I can create a new todo. You can see that the room has been automatically selected. And then, let's say I'm the client, and let's say-- oops, this is an office for 12 people. [INAUDIBLE]

This is an open space. And then we can create our todo. So the todo has been added. And then we can add comments: add a whiteboard, and comment. And then I can upload an image, for example, this one, showing where the whiteboard is supposed to go.

So that's the client side. And now let's say I'm the team and I'm in Revit. So, as the user, I can open my todos. And we can see the todos that have been added by the client. And we can see this one just came up.

So now what I'm going to do is show how we can have a real-time connection between the two. So let's see. Yeah. So this is our todo here that I'm going to update. If I'm from the company and I'm going to start working on this item, I can set it to In Progress and save. And you can see that the status updates directly in the web client. So let me just make [INAUDIBLE].

What we can also see is that, if we've got a comment here, I can add a new comment and say, for example, "Does it need to be digital?" and click Save. You can see that it updates directly on the client side. And the same works the other way around. So if I add a comment from here and say, "Yes, size is two by three," and click Add Comment, it appears in Revit as well.

So this is what we are going to build today. We've got our todo apps, a web app and a Revit add-in. And I'm going to show you how Amplify can help us build those solutions very quickly.

So what is AWS Amplify? Amplify allows us to build full-stack apps very quickly. But before looking at how it works, let's start by having a look at the requirements for a full-stack app like the todo list we've just seen.

Here is a list of all the services that we need to create the todo app. We need some hosting, with a front end and a back end. Serverless means that we don't want to book and pay for a server running 24/7, especially if we are at the initial stage of our apps. The server will be instantiated on demand, and it is fast enough that the end user won't notice it. So even for apps that are running at full scale, it works very well.

Authentication, because we don't want just anybody to log on to that website. Obviously, we want to restrict access to our client. We're going to need a database. We're going to need some file storage for our images. We need WebSockets for real-time communication, like we saw.

We need a trigger system, for example, to create images or thumbnails, or to send notifications. We need staging so we can have a dev website and a production website. We might also need internationalization so that the application can be multilingual.

An auto-build feature so that when we commit our code to a platform like GitHub, for example, the code is published automatically. And caching. And also, obviously, we are going to need a few servers to get all that running.

Amazon Web Services is a cloud that offers over 200 fully featured services from data centers globally. From the console, you can access and configure all your services and add new ones as needed. Amplify is a complete platform for developing, building, testing, and running mobile and web apps.

I think that anything that AWS Amplify does could potentially be configured manually through the AWS console. However, doing it manually would take much longer and would require a much higher level of expertise to achieve the same result.

For end users like us today, Amplify will consist mainly of a command line interface in the DOS command prompt, and an SDK to access the various services from our apps: the web app and the add-in.

There is also Amplify Studio, which can be used instead of the command line interface and which also adds other features, like Figma integration for UI development. But we won't be looking at the Studio in this class.

Once our app has been initiated and the Amplify CLI tool is installed, we can open the DOS prompt or a terminal and start typing Amplify commands like amplify add api or amplify add auth to start the automated creation of AWS services. When we type those commands, there are usually a bunch of options to select depending on our needs. We can see them here highlighted in red.

If there was a mistake during the creation, services can be removed using amplify remove instead of amplify add. Services can also be updated using amplify update to reconfigure some settings. Finally, when everything is running as expected, we can run amplify publish to start the web hosting of our app.

We can now open the AWS console and see the different services that have been created and configured by Amplify. Thanks to Amplify, we don't need to be AWS experts to take advantage of the AWS cloud. Amplify not only saves us from lengthy certification programs but also allows a one-man band to deliver powerful applications.

Now that we understand Amplify a bit better, we need to have a closer look at how the database will be configured and understand what DynamoDB, GraphQL, and AppSync are. The database is DynamoDB and not an SQL database. In our apps, we will access it using the GraphQL language, which stands for Graph Query Language.

DynamoDB is a fast and flexible database service designed to handle high volumes of transactions. Using Amplify, we won't have to deal directly with DynamoDB. We will only access it through GraphQL, in the same way that we use SQL to access a relational database. Unlike a relational database management system, a DynamoDB database should be designed by thinking first of the queries that will be called, and the model should be designed accordingly.

The GraphQL service is created by defining types and fields on those types, then providing functions, also known as resolvers, for each field on each type. This is a big advantage over relational databases. If the table structure changes, the resolvers can be changed to follow the new structure. The client can still use the same old GraphQL request without noticing the change. GraphQL uses the JSON format to process requests and responses.

In the images here, we can see how to create a project table, how to request some data, and how the response is returned. With Amplify, we will use the multi-table approach to define our model, with, for example, tables for project, todo, comments, and images. DynamoDB can also be optimized using a single-table design, but that's a bit of a mind-bending concept that would easily require a class on its own.

Amplify will generate all the resolvers for us as well, so we won't have to worry about them. It's important to understand that GraphQL is completely independent from DynamoDB. It's just an API layer that has been added for us to access our data. Autodesk is also building a GraphQL API, called Data Exchange, to access design data.

Amplify will create an AppSync service that will configure and manage the DynamoDB and GraphQL services. It also includes serverless WebSockets. A WebSocket is a bidirectional client-server communication protocol that will allow us to be notified, after setting up a specific subscription, of any changes made to our data.

It will also support the authentication service that we set up with Amplify, whether it's API keys, Cognito, Active Directory, et cetera. And this is done with a simple command called amplify add api.

Let's have a look now at what a GraphQL schema looks like. We create a type for each required entity and add the @model directive to store them in DynamoDB and automatically generate the create, read, update, delete, and list queries and mutations. We can create relationships by using the @hasMany and @belongsTo directives.

For example, a project can have many todos, and a todo belongs to one project. We can also create custom enumerated types, like the todo status. And we can also mark fields as mandatory by adding an exclamation mark after them. Just to note here, the @ directives that we can see are helpers for Amplify to generate the resolvers; they are not part of GraphQL itself.
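
As a rough illustration, here is a minimal sketch of what part of the schema.graphql file could look like for the model described above; the exact type and field names are illustrative, not the class's actual schema.

    # Todos belong to a project and own their comments and images.
    type Project @model {
      id: ID!
      name: String!
      todos: [Todo] @hasMany
    }

    enum TodoStatus {
      OPEN
      IN_PROGRESS
      DONE
    }

    type Todo @model {
      id: ID!
      title: String!
      status: TodoStatus!
      project: Project @belongsTo
      comments: [Comment] @hasMany
      images: [Image] @hasMany
    }

    type Comment @model {
      id: ID!
      content: String!
      todo: Todo @belongsTo
    }

    type Image @model {
      id: ID!
      key: String!        # S3 object key of the uploaded file
      todo: Todo @belongsTo
    }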

Here is an example of a GraphQL query using the JSON format. Amplify generates the listTodos query that we can call to get the list of todos. All the todos are stored in the items field. And for each todo, we can select which fields we want to retrieve. Todos also have a similar list of items for both comments and images, each with their own specific fields.

We use mutations to change an existing item. Mutations require an input object that will be used to match the database record and that also contains the updated data. Subscriptions target a specific type and a specific operation. For example, the subscription will be called every time a todo is created. And it will return the fields that have been listed.

One quick note here for those who are more familiar with SQL. In our list query, we can see how we access the linked data that would have required an inner join in SQL. If, at one point, we decided to change the database design and, for example, allowed comments to be applied to multiple todos, this would be a breaking change, and all our apps would need to be modified to take into account the new many-to-many relationship.

In GraphQL, we just need to update the resolver for the comments field, and no changes will be required in our apps. GraphQL would also require a class on its own, so we are just scratching the surface here.

Instead of using plain HTML and JavaScript to build our web app, it's highly recommended to use a modern JavaScript framework. The most popular JavaScript framework is React.js, and this is the one I used for the todo app.

React is not necessarily the easiest one to learn though. But when I got used to it, I found it to be a very pleasant and very efficient tool to work with. Autodesk tends to release examples and demos that are using React as well. So it's always handy to be able to reuse tools that they publish.

Here is a list of what makes React a great framework to work with. It's an open-source framework that can be used for web, mobile, and desktop apps. Once learned, you can develop on anything. Making sure that your application works on all platforms and web browsers used to be very time-consuming, and now it's all taken care of.

It's the same language for both front-end and back-end, so no need to learn or maintain skills on another language like PHP. We create a lot of small components to break down the app complexity. This makes large apps much easier to maintain. There are also a lot of libraries that we can reuse.

Here is an example of what a React component looks like. The render function contains the HTML tags required to build the element. Some tags, like the thumbnail or like button, are actually calling other, smaller components that have been nested into this higher-level component.

In curly braces, we can see the code that is executed when the component is rendered. If the video parameter changes, the render will automatically rerun to reflect the new values. To render a list of videos, we now have a new component called VideoList that receives a parameter containing an array of videos. We can iterate over this array to display multiple Video components using the JavaScript map function.

Other JavaScript frameworks use their own specific syntax to do that, but with React, we can use any JavaScript function we want, which is great. We also have a header that will update based on the number of videos in the array.
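
A minimal sketch of what such components could look like (the Video and VideoList names and fields are illustrative, reconstructing the kind of example shown on the slide):

    // A small presentational component: renders one video entry.
    function Video({ video }) {
      return (
        <div>
          <img src={video.thumbnailUrl} alt={video.title} />
          <h3>{video.title}</h3>
        </div>
      );
    }

    // Renders a header with the count, then maps the array to Video components.
    function VideoList({ videos }) {
      return (
        <div>
          <h2>{videos.length} videos</h2>
          {videos.map((video) => (
            <Video key={video.id} video={video} />
          ))}
        </div>
      );
    }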

We can now see how to add state to a component, which is like its own private memory. When the user types in the search field, the searchText state variable changes. The state is passed to the filter function that will update the video array. The VideoList component will then display only the videos that have been filtered. React is a great way to build dynamic websites, using a hook system based on those state variables.
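
Building on the previous sketch, that state-driven filtering could look roughly like this (names are illustrative):

    import { useState } from 'react';

    function SearchableVideoList({ videos }) {
      // The component's own private memory: the current search text.
      const [searchText, setSearchText] = useState('');

      // Recomputed on every render; only matching videos are passed down.
      const filteredVideos = videos.filter((video) =>
        video.title.toLowerCase().includes(searchText.toLowerCase())
      );

      return (
        <div>
          <input
            value={searchText}
            placeholder="Search videos"
            onChange={(e) => setSearchText(e.target.value)}
          />
          <VideoList videos={filteredVideos} />
        </div>
      );
    }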

Once again, React would require a class on its own. So my goal today is to give you enough information to understand the concepts and trigger your interest if you don't know React already.

Material UI is a library of production-ready components that helps us build our apps faster. They are good-looking, cross-browser, cross-platform, customizable components, so we don't have to reinvent the wheel all the time.

You can see a few examples of components here, and here is a list of the main ones. Highlighted in yellow are the components that we used to build our todo web app. So there were quite a few components that we were able to reuse for fast development.

Now that we've reviewed all the concepts that we needed to understand, it's time to start the development of our web app. We start by creating an AWS account and installing a few pieces of software. Node.js is a cross-platform runtime environment for executing JavaScript code.

You will need it to create, run, and debug the app locally. Git is a source code management tool. Although I'm using React in this class, Amplify can be integrated with many other frameworks or languages.

Finally, install the Amplify CLI. Once everything is installed, we need to create a default Amplify user. And we do that by running the Amplify configure command. This opens the console where we can create our user. We will also need to create an access key in the security credentials tab. And we use this key and its secret to finalize the configuration of our user.

We create our app using NPX. NPX stands for Node Package Execute. It executes a JavaScript package without needing to install it. This command creates a default React app.

We then run the command amplify init in our app folder. This will create an amplify directory to store our back-end definition and also create an aws-exports.js file that holds all the configuration for the Amplify services, which we can reuse in our code. Then we install the Amplify SDK that we use to access our Amplify services. We can now run our app using npm start.
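
For reference, wiring that aws-exports.js file into the app typically takes only a couple of lines in index.js; a minimal sketch (the exact import style depends on the aws-amplify version):

    import Amplify from 'aws-amplify';
    import awsExports from './aws-exports';

    // Points the Amplify SDK at the back-end resources created by the CLI.
    Amplify.configure(awsExports);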

We are going to start using Amplify to add authentication to our app so that we can control who logs in. We type the command amplify add auth and configure a few settings. Once it's done, we type amplify push to put the new service into our AWS account.

This creates a user pool for our app. And we can add a new user that will have the credentials to log in. This user pool can be configured in many ways to accept, for example, social network logins, Google logins, or Active Directory logins. It's actually important to add authentication before creating the other services, like database and storage.

AWS Amplify comes with UI components for React that we can install. We can now just add a few lines of code in our app to import those components, add a couple of parameters, and then use a wrapper around our components that ensures a user is logged in before anything from our app is displayed.
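
A minimal sketch of that wrapper using the Amplify UI React package (prop names follow recent versions of @aws-amplify/ui-react):

    import { withAuthenticator } from '@aws-amplify/ui-react';
    import '@aws-amplify/ui-react/styles.css';

    function App({ signOut, user }) {
      return (
        <div>
          <h1>Hello {user.username}</h1>
          <button onClick={signOut}>Sign out</button>
          {/* The rest of the app goes here. */}
        </div>
      );
    }

    // Nothing inside App is rendered until the user has signed in.
    export default withAuthenticator(App);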

Thanks to Amplify, with one command line and a few lines of code, we have implemented a secure authentication system in our app. Now that our basic app has been set up, we can start by adding a Forge Viewer using a React component.

The Forge API calls need to run through our own REST API. The reason for this is that we don't want to expose any private information, and we can also build a service that could be accessible to multiple apps.

The first step is to create a new API with Amplify by running the command amplify add api. We configure the settings as needed, making sure that only authenticated users have access to it. The REST API will run in a serverless service called AWS Lambda.

During configuration, we can add secret values to store the Forge client ID and the Forge client secret from our Forge app. We can also use amplify update function to add or modify the secrets later. Then we run amplify push to push those changes to AWS.

In the AWS console, we can find our new REST API for the current staging environment. We could also use the Lambda environment variables instead of secret values. We can also see that our new REST API has been added in the Amplify folder of our app.

We need to add a couple more packages for the AWS SDK and the Forge APIs, and we need to install them in the source directory of our function, as highlighted in green.

Our REST API is going to be pretty simple here. The only thing that we need for now is to get an access token so that we can view our model in the Forge Viewer. To get a token from the Autodesk servers, we will need to retrieve the secret values that we stored earlier.

And we can use the AWS SDK for this. We call the AWS Systems Manager, SSM, to retrieve those values from the current environment. Using the Amplify CLI to create the REST API and then using the SDK to retrieve configuration values makes it very straightforward to create new services for our app.
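
As an illustration only (the secret names, scope, and response handling are assumptions, not the class's actual function code), the Lambda handler could look roughly like this:

    const aws = require('aws-sdk');
    const { AuthClientTwoLegged } = require('forge-apis');

    exports.handler = async () => {
      // Amplify stores function secrets in SSM Parameter Store and exposes their
      // full parameter paths through environment variables named after each secret.
      const ssm = new aws.SSM();
      const { Parameters } = await ssm
        .getParameters({
          Names: ['FORGE_CLIENT_ID', 'FORGE_CLIENT_SECRET'].map((name) => process.env[name]),
          WithDecryption: true,
        })
        .promise();

      const value = (name) => Parameters.find((p) => p.Name === process.env[name]).Value;

      // Two-legged OAuth token with a read-only scope for the viewer.
      const authClient = new AuthClientTwoLegged(
        value('FORGE_CLIENT_ID'),
        value('FORGE_CLIENT_SECRET'),
        ['viewables:read'],
        true
      );
      const credentials = await authClient.authenticate();

      return {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': '*' },
        body: JSON.stringify(credentials),
      };
    };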

To create a Forge Viewer component, I suggest not reinventing the wheel. I used the viewer from the Forge database package, and it was a great starting point. We can now add our viewer component to our app, and we need to pass it a function that will retrieve the token. This function uses the Amplify SDK to access our newly created API, and we request the Forge token.
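
A minimal sketch of that token callback, assuming the REST API was named forgeapi and exposes a /token path (both names are illustrative):

    import { API } from 'aws-amplify';

    // Called by the viewer component whenever it needs a fresh access token.
    async function getForgeToken(onSuccess) {
      const credentials = await API.get('forgeapi', '/token');
      onSuccess(credentials.access_token, credentials.expires_in);
    }

    // The viewer component then receives the model URN and this callback, e.g.:
    // <Viewer urn={modelUrn} getToken={getForgeToken} onModelLoaded={handleModelLoaded} />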

And that's it. We have our Forge Viewer working in our React app. When the model is loaded, it will trigger an event, and we can use this event to do further processing. In this case, there are some masses and mass floors that we need to hide, as they are not required for our purpose.

We use the React useEffect hook to monitor changes in the component state and hide the categories that are not required. For our viewer, we can add an Autodesk extension to filter the levels in the model. And we can also create our own custom extension to toggle the room visibility on or off.

We do that by creating a function in our previous hook and passing this function to the extension; it will be run every time the user clicks the button. We can now see and select the rooms in our model. Our viewer is now complete, and it's time to move on to the todos. We will need to create our database and connect it to a few React components to build the user interface.
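
A stripped-down sketch of such a custom extension (class, button, and option names are illustrative, and the toolbar wiring is simplified):

    // Minimal Forge Viewer extension that calls back into React when its button is clicked.
    class ToggleRoomsExtension extends Autodesk.Viewing.Extension {
      load() {
        return true;
      }

      unload() {
        if (this.group) this.viewer.toolbar.removeControl(this.group);
        return true;
      }

      onToolbarCreated(toolbar) {
        const button = new Autodesk.Viewing.UI.Button('toggle-rooms-button');
        button.setToolTip('Toggle rooms');
        // The callback created in the useEffect hook is passed in through the options.
        button.onClick = () => this.options.onToggleRooms();

        this.group = new Autodesk.Viewing.UI.ControlGroup('todo-toolbar-group');
        this.group.addControl(button);
        toolbar.addControl(this.group);
      }
    }

    Autodesk.Viewing.theExtensionManager.registerExtension('ToggleRoomsExtension', ToggleRoomsExtension);

    // Loaded from the viewer once it is ready:
    // viewer.loadExtension('ToggleRoomsExtension', { onToggleRooms });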

To create a database, we run the command amplify add api and configure the settings as needed. We can see the code that has been added in our back end. The most important file for us is the schema.graphql file. This is where we set up our database model and define the relationships between the elements. We then run amplify push, and we can open the console and see that a new AppSync service has been created.

Amplify makes it very easy for us to create a database with only one command and then another command to create all the services in AWS and all the basic GraphQL resolvers. When we run the push command, it can actually take a couple of minutes, and it displays all the commands that are running in the background. And I'm always very glad that I don't have to do it.

The main functions of GraphQL are queries that are used to retrieve data, mutations that are used to create, update, and delete data, and subscriptions that are used to maintain real-time connections with the server.

In AppSync, we can run all the queries, mutations, and subscriptions that have been generated by Amplify. This is very useful to understand the format that is expected for the requests and also the format that is provided in the responses.

For example, we can see an example of a query with joined elements like comments and images, an example of a mutation with the input parameter required to find which element needs to be changed, and an example of a subscription that gets a message every time a todo is created.

We build our user interface using the MUI Grid component, and each item of the grid contains another component with its required parameters. For example, our todo list requires a list of todos and a callback function for when an item is selected.

The grid uses a breakpoint system, highlighted in green in the code, that can change the layout depending on the width of the container. We can see how the layout would change on a cell phone, for example.
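
A minimal sketch of that two-column layout with MUI Grid breakpoints (the ForgeViewer and TodoList component names are illustrative):

    import Grid from '@mui/material/Grid';

    function TodoPage({ todos, onSelectTodo }) {
      return (
        <Grid container spacing={2}>
          {/* Full width on small screens, two thirds of the page from medium up. */}
          <Grid item xs={12} md={8}>
            <ForgeViewer />
          </Grid>
          {/* Full width on small screens, the remaining third from medium up. */}
          <Grid item xs={12} md={4}>
            <TodoList todos={todos} onSelect={onSelectTodo} />
          </Grid>
        </Grid>
      );
    }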

When our app starts, we want to retrieve the todos immediately to display them to the user. We use the useEffect hook with an empty array as the dependency parameter, which means that the code will run straight after the creation of the component. Amplify provides high-level functions that do most of the heavy lifting required to access the database.

In our fetchTodos function, we can execute the GraphQL operation listTodos in one line of code, with which we get all the todos from our database and update the state of our app. The todo list component will rerender every time the todos state changes.
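
A minimal sketch of that initial fetch, including the nextToken paging that comes up again later for the Revit add-in (operation names follow the files generated by Amplify):

    import { useEffect, useState } from 'react';
    import { API, graphqlOperation } from 'aws-amplify';
    import { listTodos } from './graphql/queries';

    function useTodos() {
      const [todos, setTodos] = useState([]);

      useEffect(() => {
        // Empty dependency array: runs once, right after the component is created.
        const fetchTodos = async () => {
          let items = [];
          let nextToken = null;
          do {
            // Each call returns one page of todos plus a token for the next page.
            const result = await API.graphql(graphqlOperation(listTodos, { nextToken }));
            items = items.concat(result.data.listTodos.items);
            nextToken = result.data.listTodos.nextToken;
          } while (nextToken);
          setTodos(items);
        };
        fetchTodos();
      }, []);

      return [todos, setTodos];
    }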

The Amplify SDK helps us again to create subscriptions so that we can run a callback every time one is triggered. It is very similar for mutations. We just need to understand which parameters are required and pass them to the mutation. You can look at them in AppSync to make sure the structure and the formatting are correct.
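
Sketches of a subscription and a mutation with the Amplify SDK (operation and field names assume the schema sketched earlier):

    import { API, graphqlOperation } from 'aws-amplify';
    import { updateTodo } from './graphql/mutations';
    import { onCreateTodo } from './graphql/subscriptions';

    // Real time: run a callback every time a todo is created.
    // setTodos is the state setter from the hook in the previous sketch;
    // call .unsubscribe() on the returned subscription when the component unmounts.
    function subscribeToNewTodos(setTodos) {
      return API.graphql(graphqlOperation(onCreateTodo)).subscribe({
        next: ({ value }) => setTodos((existing) => [...existing, value.data.onCreateTodo]),
        error: (error) => console.warn(error),
      });
    }

    // Mutation: the input object identifies the record and carries the new values.
    async function setTodoStatus(id, status) {
      await API.graphql(graphqlOperation(updateTodo, { input: { id, status } }));
    }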

We need to install a few Material UI packages to use the MUI components. The todo list component uses the MUI DataGrid component to display the list of todos. Many other options could have been chosen, for example, displaying the todos as cards in a photo gallery style.

Once again, we are using React hooks to update our components every time new todos are retrieved by the app. With MUI, we have been able to create a complex component with just a few lines of code.
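
A minimal sketch of that table using the MUI DataGrid component (the column fields are illustrative):

    import { DataGrid } from '@mui/x-data-grid';

    const columns = [
      { field: 'title', headerName: 'Todo', flex: 1 },
      { field: 'status', headerName: 'Status', width: 150 },
    ];

    function TodoList({ todos, onSelect }) {
      return (
        <DataGrid
          rows={todos} // each todo already has an id field, which DataGrid requires
          columns={columns}
          autoHeight
          onRowClick={(params) => onSelect(params.row)}
        />
      );
    }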

We now want to be able to upload images for our todos. We are going to use Amplify again to create an AWS S3 storage bucket. We just need to run amplify add storage and configure the settings as required, and then amplify push to push the changes to AWS. In our code, we once again use the Amplify SDK to upload our file to S3 with one line of code.

We also need to add a mutation in our database to add a relationship between the image and the todo with the image key name. Finally, we can open the AWS console to view the files that have been uploaded. Amplify has helped us to create a complex file management system with just a few lines of command and code.
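
A minimal sketch of that upload and the follow-up mutation linking the image to its todo (the createImage mutation and the foreign-key field name depend on how the relationship is modeled in the schema):

    import { API, graphqlOperation, Storage } from 'aws-amplify';
    import { createImage } from './graphql/mutations';

    async function uploadTodoImage(todoId, file) {
      // One line to push the file to the S3 bucket created by amplify add storage.
      const key = `${todoId}/${file.name}`;
      await Storage.put(key, file, { contentType: file.type });

      // Record the S3 key in the database so the image appears under its todo.
      await API.graphql(graphqlOperation(createImage, { input: { key, todoImagesId: todoId } }));
    }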

Our app is now ready to be deployed. We use the command amplify hosting add for this purpose. Then we use the command amplify publish to create or update the serverless hosting. At the end of the process, a URL is given that we can use to access our app online. We can open the Amplify service in the AWS console and see the deployment state of our app and its URL as well. And here is our app online.

Instead of publishing the code manually, we can set up the hosting to run on continuous deployment, which means that every time we commit code in GitHub or similar, on a particular branch, the code is automatically published. This is very handy, and this is my preferred method of doing it. Amplify, again, makes it very easy to deploy an app.

There are a few other features of Amplify that I should mention briefly. For internationalization, there is an i18n implementation in Amplify that we can use. We just need to set up folders with the names of the locales that will match the web browser. We then enter the text in the language of each of those folders, and the text is retrieved automatically according to the web browser locale.
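
Amplify's I18n module can be used for this; a minimal sketch, assuming each locale folder exports a dictionary of strings:

    import { I18n } from 'aws-amplify';
    import en from './locales/en';
    import fr from './locales/fr';

    // Register one vocabulary per locale folder.
    I18n.putVocabularies({ en, fr });

    // Follow the web browser locale automatically.
    I18n.setLanguage(navigator.language.split('-')[0]);

    // Retrieve a translated string anywhere in the app.
    const createLabel = I18n.get('Create todo');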

Amplify caching can be very handy to cache thumbnails, for example, to avoid having to reload them from the server all the time. AWS Lambda is a serverless compute service that we use to run code to process something. It can receive events from DynamoDB, S3 storage, and many other services.

Some of the use cases are pre- or post-processing data and processing uploaded S3 objects, for example, compressing images or creating thumbnails. It could send a notification by mail or SMS every time a todo is created. And it could archive data, run backups, or create an analysis for a dashboard.

Now that the web viewer is done, we can move our focus to the Revit add-in. This will involve integrating our code with the AWS SDK and also integrating GraphQL queries, mutations, and subscriptions.

To create an add-in, you can use some of the existing templates for Visual Studio. My favorite is Andrey Bushman's one. It hasn't been updated for a while, but it's easy enough to change it for each new version of Revit. We also need to add a few packages from the NuGet package manager. And this is all we need to get started to access the services we created previously with Amplify.

Another note here: you don't actually need a web app to set up the Amplify services. You could just add the services like we did before without developing the web app; it would be exactly the same.

When we start our add-in, we want to immediately download the todos and display them in a modeless form. This is an asynchronous call, and we can't create an async constructor, so we just need to create a launcher. This will create the form, run the asynchronous methods, and return the window.

Then we need to create types that match exactly with the GraphQL schema. The only difference is that the fields must start with an uppercase letter. We are literally replicating the GraphQL schema here. For example, this Todos type has Items as a list of todos, and it also has a NextToken field.

To initiate the GraphQL connection, we are going to need to retrieve some information from the AppSync service in the AWS console. We need to retrieve the GraphQL endpoint, the GraphQL real-time endpoint for the subscriptions, and the API key.

Then we need to use some components from the packages that we downloaded earlier to create our connection using the parameters from AppSync. Initializing the real-time connection is a bit more challenging. But all the code is provided in the course manual.

We are now ready to run our request. We don't have Amplify here-- at least that I know of-- to generate the types and the requests. The first step is to create the GraphQL request in a JSON format. Then we can call the GraphQL client that has been previously initialized with our request. We always need to make sure that all the types of the requests and responses are matching perfectly.

Many properties are returned, so we need to make sure that we match the properties that we are interested in to parse our response. One thing that I haven't mentioned yet is the next token. When you request the list of todos, GraphQL is not going to send a response with all the todos because, for optimization purposes, it limits the size of the data that is transferred each time.

So for that purpose, it will send as many todos as it can. And then, if there are more, it will give us a nextToken field. What we can do is call the listTodos query again, the same query, with the token, and we will then get the next batch of todos. And so we will have to repeat those calls as long as we get a next token.

And when we don't receive a next token, when the next token is empty, it means that's it: we've received all the todos that are in the database. And then we use the DataGridView component from Windows Forms to display our todos.

For the subscription, we have a similar process where we write the GraphQL request. This time, we use the GraphQL client that has been initialized with the real-time connection. And where we call getTodosAsync is the code that will run when we receive an event from that subscription.

So every time a todo is created, we will run getTodosAsync as well to update our list of todos. And we can do that with all the different subscriptions that we have, like comments and images.

To retrieve our images, we use the AWS SDK. We create an Amazon S3 client and call GetObject with the key of the image that we retrieved from the database. The SDK, again, allows us to download our object with just one line of code. And this is the result that we get, with the list of todos, the details, the comments, and then the images.

So that's it. Both apps are now complete. And it's time to reflect on how Amplify helped us achieve that.

So Amplify has allowed us to create full-stack apps very quickly. We can have different environments as well, for example, a dev environment and a production environment. Environments are also very easy to set up with Amplify. You just type amplify env add, and it creates a new environment.

And you can switch between them. So if you've got multiple environments, you will have multiple databases, multiple S3 storage buckets, and multiple APIs, where you can have different configurations and different data for each of them.

So, basically, Amplify is a command line interface and an SDK that allow us to access all the services from AWS very easily. There is also a Studio that allows further modification, and then the AWS console, where we can access all the services in detail.

Amplify is also fast and very scalable. Using the AWS ecosystem, you could go from a serverless application to something accessible immediately everywhere around the world. So I think it's a great entry point into the AWS cloud.

From my experience, the starting price has been very low. Sometimes I laugh about it, but if I had just sent a couple of emails to IT to request a server or something, that would already have cost more than I would probably pay for several years of Amplify.

So it's obviously going to depend on how much volume you have. But what it means is that, at least when you're getting started, you will get invoices that are in the range of cents rather than anything else.

So it's a great entry to the AWS cloud, and I can't wait to try a lot more of those services. Thank you.
