AU Class

Building Patterns-Based Forge Integrations Using MuleSoft


Description

This class will provide an introduction to building integrations with the Forge platform using standard integration patterns in MuleSoft. MuleSoft is a lightweight, event-driven Enterprise Service Bus that provides a robust encapsulation of the core integration patterns described in the popular book Enterprise Integration Patterns by Hohpe and Woolf. We will demonstrate a custom Forge Anypoint connector built on the MuleSoft integration platform that allows for easy access to the capabilities provided by the Forge platform. Using this connector, Forge functions can be accessed directly from the business process flow editor in MuleSoft. We will go over how businesses can leverage this connector to build repeatable integration solutions using Forge and other enterprise applications. The class will provide a technical demo of a business process orchestration connecting a Force.com application and NetSuite with Forge. By using the Mule ESB for Forge integrations, enterprises can benefit from standard canonical models to interface with multiple systems, as well as leverage common enterprise services for monitoring and security. (Joint AU/Forge DevCon class.)

Key Learnings

  • Learn how to use the Forge API from an enterprise integration platform
  • Learn how to use integration best practices for connecting enterprise systems to the Forge platform
  • Learn how to use common services and canonical data models when interfacing with Forge
  • Learn how to develop a custom connector for Forge in MuleSoft

Speaker

  • Ravi Dharmalingam
    Seasoned software professional with experience in integration consulting and cloud-based operations. Over 20 years of experience in all stages of enterprise software development and deployment in a wide range of industries. He is an experienced integration consultant who has helped customers successfully integrate enterprise applications across various industries. He has implemented legacy Enterprise Service Bus-based integration solutions as well as modern cloud-based systems, and is proficient with integration standards such as REST, SOAP, and OData. He is focused on architecting and implementing patterns-based solutions that integrate enterprise applications to help drive adoption and enhance overall value for customers.
      Transcript

      PRESENTER: So we'll have somebody create an item in Salesforce. And then, let's say, a designer checks in a model into a file folder, the MuleSoft flow essentially translates that model and puts the link into Salesforce. So from Salesforce, you're able to correlate the model and view the model using a large-model viewer directly in Salesforce.

      And then the second demo is with creating a project in Salesforce. It, basically, pushes the project into BIM 360, again, using a Mule flow. And we'll highlight some of the patterns that we're using when you're building these flows and how easy it is for you to change stuff with this kind of architecture.

      And then we'll get into the actual code, how you can actually build the connector. And we can share the stuff we've done so far. And if you want to build your own connector, it is fairly easy. It's just a wrapper that you have to write on top of the API kit that's in GitHub already. And then we'll look at the runtime environment and wrap it up.

      All right, so integration architectures. Some of the primary architectures that you see for integrations right now are file transfer or a shared database. Those are still quite widely used. But the problem with that approach is that it doesn't scale.

      And it usually is very tightly coupled. So if you have to make any changes, it locks you into one architecture. You have to do quite a bit of rework if you're trying to make changes to the system.

      The next one is point-to-point, where you build a custom mapping in any programming language and basically use that to integrate. Again, this creates a tightly coupled model, which can work in some situations. But it can often lead to maintenance issues in the long run.

      So with that said, the messaging architecture is the most common architecture used across enterprises when you're talking about integrating a large number of applications and managing a large number of integrations. This kind of model essentially gives you the resiliency and scalability that you need when you work with large-scale integrations. So the integration bus is a core concept in pretty much-- there are probably 50 or 100 middleware products in the market that support this architecture.

      Essentially, the core concept is, you have a group of applications that can work together in a decoupled manner, so changes in one app don't affect the others. You can easily add additional components to the bus, and all the other components don't get affected.

      So how do we use MuleSoft? MuleSoft is basically a lightweight ESB. In my earlier days, I used to work on heavier-duty stuff like WebSphere and TIBCO. MuleSoft kind of peels back the layers, and it's a much simpler product. It's called a lightweight ESB, which still supports the messaging architecture but is a lot simpler than some of the heavy-duty ESBs that were around 10 years back.

      So what I've done here is built a Forge connector that can tie Forge into the message bus, so that you can leverage Forge across your enterprise. If you look at the connector ecosystem for MuleSoft, they have connectors for pretty much any leading enterprise system that you can think of. So once we get a Forge connector into a platform like MuleSoft, it's fairly easy for us to integrate with any of the applications that are available in its ecosystem.

      So let's briefly talk about patterns. One of the core things about Mule that makes it simple is that they adopted, almost religiously, the patterns that were described in this book. The book came out 10-plus years back, maybe even longer.

      But this is still considered one of the seminal works in integrations. And a lot of the patterns here, you see them all around. And MuleSoft took an approach where their component names essentially follow the naming conventions used in these patterns.

      So patterns are nothing more than reusable integration or design solutions that you can use either independently or with other patterns to solve integration problems. When you start looking at an integration problem, you can break it down into different patterns: OK, this is an aggregator, or this is a splitter. And then you proceed in that manner. And the way the MuleSoft components are structured essentially facilitates using a pattern-based approach for integrations.

      As I mentioned, many of the components in MuleSoft use the same names that you find in the enterprise integration patterns. So once you study the enterprise patterns, it's fairly easy to learn MuleSoft. It's kind of like a common language that they adopted for integrations.

      All right, so next we'll talk about the Forge connector. The MuleSoft environment essentially comprises a development environment and a runtime environment. The development environment is an Eclipse-based studio with a wrapper built on top that allows you to build visual flows.

      And then from there, you can either deploy it to CloudHub, which is basically their cloud-hosted integration environment, or you can host it on premise. They have an on-premise solution as well if you want to run your integration on premise. And there is even a community edition that can be run on premise, which is open source and free, which is a nice thing. So if you don't want to go for the expensive solution, you can use the community edition.

      So once you deploy the Forge connector into the platform, it shows up in the palette of Mule as a connector. And then you can just use that in any of your flows. With Forge, we basically provide a configuration option. So once you add the connector, as I'll show you in the demo, we have to specify the client ID and the client secret of the Forge application that we're going to be using.

      And then, the connector basically defines as many operations as you need. You basically reference those operations in the flow. And then the operation basically defines what the inputs and outputs are. All of this can be done kind of in a visual manner.

      All right, so with that, any questions so far? OK, so with that, I'm getting to my demo. The first demo is, essentially, somebody creates an item in Salesforce. And then we have a CAD model that's updated in a file folder. The Mule workflow essentially correlates these two, uses the Forge connector, performs the translation, and then sends the translated model to Salesforce. We have an LMV viewer embedded in Salesforce that you can use to view that model.

      And then, at the same time, we send a notification in Slack that the model is ready. And the link is sent in there. So this kind of highlights a simple scenario. But I want to show that. And we can take a look at different things that you can do with the flow from there.

      So this is basically the flow. If you look at it, this is polling a file directory. And then it's ensuring that you don't process any file more than once. This is, again, a pattern, called Idempotent Receiver, which essentially ensures that you don't process the same message more than once. And then you're doing a translation, calling the Forge connector to perform the translation for the LMV model, then calling Salesforce to update that LMV link, and finally updating Slack.

      So let me just run the demo. And then we will take a look at the flow. And I can show you how this is structured. So in Salesforce, so let me just create a new product.

      OK, so this is the LMV model. But at this point, I don't have a viewable yet. So, let's say, I am not going to use the CAD system, at this point. I'm just going to copy an existing model.

      So it's basically matching it on the name. So basically, in a runtime environment, your integration would be running all the time. As I mentioned earlier, it would be running on CloudHub or on your hosted system. In this case, I'm just going to run it here. And, basically, the development environment has its own container to run it.

      So if you run it here, it's got a web container built in. It'll run it locally in the Eclipse environment. And it'll start polling for the file.

      All right, so it's running. So this is nothing more than a Java application that's running in a web container. So you see that it's picked up the file. And it's sending it to the translator.

      And, obviously, right now it's still polling. But it's done. And it has updated the Salesforce link. So if I go back to Salesforce now and do a refresh, you'll see that the link is there and the model has made it to Salesforce.

      So just a simple scenario, and then we also send a link to Slack. So you see this message in Slack that shows up. But let's say you want to change Slack to Twitter or something else. It's really just a matter of finding the connector and adding it in there.

      You basically find the appropriate connector for that tool and just drag and drop it into the environment. And now you have the option to connect with another enterprise application like Twitter. So that's basically the power of a framework like this.

      I'm calling an operation, create LMV model, which encapsulates all the stuff that needs to happen to translate a model into a large-model viewer link. That operation performs everything. And all I need to worry about are the inputs and outputs to that.
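
      As a rough illustration of what those inputs and outputs might look like as Java POJOs; the class and field names here are assumptions for illustration, not the actual connector types:

          public class TranslateRequest {
              private String bucketKey;  // OSS bucket that holds the uploaded model
              private String fileName;   // name of the CAD file to translate
              private String filePath;   // local path picked up by the polling flow
              // getters and setters omitted for brevity
          }

          public class TranslateResult {
              private String urn;        // document URN returned by the translation
              // getters and setters omitted for brevity
          }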

      I have a transformer before that where I'm passing it the bucket key, the file name, and the file path. And I have an output where I'm getting the translated result back from that translation, which I'm then passing to Salesforce to establish the link. Any questions on this flow so far? Go ahead.

      AUDIENCE: So basically you have to put a folder [INAUDIBLE]?

      PRESENTER: Yeah. It's [INAUDIBLE].

      AUDIENCE: So you're just kind of watching it?

      PRESENTER: Yeah, yeah.

      AUDIENCE: If something happens with this, then it gets transferred there. And when you tied it together with the file, so you made a product with a certain name [INAUDIBLE].

      PRESENTER: Yeah, yeah. You're matching. Yeah, yeah. Yeah.

      AUDIENCE: When you send it out to Forge, typically what Forge passes back after you translate it [INAUDIBLE].

      PRESENTER: It's a URN.

      AUDIENCE: OK. So you got the URN back. And that's what you used [INAUDIBLE].

      PRESENTER: I sent the URN to Salesforce. If you look at Salesforce, see, this is the URN. You would typically hide this in your implementation. But that's basically what I'm passing back to Salesforce. And then it's using that. You need to authenticate in Salesforce. I'm using two-legged OAuth in Salesforce to get the token.

      AUDIENCE: Is that the [INAUDIBLE] or is that [INAUDIBLE]?

      PRESENTER: Which URN? I'm sorry.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: This is the Autodesk document URN that you need for the large-model viewer. So this basically tells the LMV where to get that file.
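
      For context, the viewer typically takes the document URN in base64-encoded form. A minimal sketch, assuming a made-up object ID (the URL-safe encoding variant is what's commonly documented):

          import java.nio.charset.StandardCharsets;
          import java.util.Base64;

          public class UrnEncoding {
              public static void main(String[] args) {
                  // Hypothetical object URN; real ones come back from the upload
                  String objectUrn = "urn:adsk.objects:os.object:mybucket/model.dwg";
                  String documentId = "urn:" + Base64.getUrlEncoder().withoutPadding()
                          .encodeToString(objectUrn.getBytes(StandardCharsets.UTF_8));
                  System.out.println(documentId); // what the viewer is pointed at
              }
          }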

      AUDIENCE: So the large-model viewer is built into Salesforce then?

      PRESENTER: I added it.

      AUDIENCE: Oh, you added it.

      PRESENTER: Yeah, so it's basically an iframe. And I embedded that iframe into Salesforce and had some scripting in there to get the token. It needs to authenticate, as well. So it's getting a token to use the viewer every time I'm using that.
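
      A minimal sketch of that two-legged (client-credentials) token call, assuming the v1 Forge authentication endpoint that was current at the time; the credential placeholders and the viewables:read scope are illustrative:

          import java.io.OutputStream;
          import java.net.HttpURLConnection;
          import java.net.URL;
          import java.nio.charset.StandardCharsets;

          public class TwoLeggedToken {
              public static void main(String[] args) throws Exception {
                  URL url = new URL("https://developer.api.autodesk.com/authentication/v1/authenticate");
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  conn.setRequestMethod("POST");
                  conn.setDoOutput(true);
                  conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

                  String body = "grant_type=client_credentials"
                          + "&client_id=YOUR_CLIENT_ID"          // placeholder
                          + "&client_secret=YOUR_CLIENT_SECRET"  // placeholder
                          + "&scope=viewables:read";             // scope the viewer needs
                  try (OutputStream os = conn.getOutputStream()) {
                      os.write(body.getBytes(StandardCharsets.UTF_8));
                  }
                  // The JSON response carries the access_token handed to the viewer
                  System.out.println("HTTP " + conn.getResponseCode());
              }
          }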

      So again, I think the real power here is, once you have a connector with the operations, you can use it for a number of things. It's not just for one particular thing. For example, if I want to change Salesforce to NetSuite now, all I have to do is change that connector.

      All the pieces I've done up to that point are still good. All I need to do is change the Salesforce connector to NetSuite, and then it still works. Any other questions on this flow?

      All right, so we talked about patterns, and we just looked at this flow. So what are the patterns that we saw here? Just to recap some of the stuff: what you do here, I mean, this is obvious, but there's actually a pattern for it. It's called Polling Consumer.

      So that's basically the pattern we're using here, where it's polling your file directory to see if there is a file. The next pattern we saw was Idempotent Receiver. Essentially, let's say you have a file folder, and it's looking at that file all the time. You don't want to process the same file multiple times.

      So this component, essentially, what it does is it allows you to define an ID, a correlation ID, or a message ID, which you can use to filter out messages that are already processed. So in this case, what I did was I used a combination of the filename and the timestamp. So as long as the filename and the timestamp don't change, I don't process it again.

      If I go and update that same file now, it'll process it again and send a new model to Salesforce.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah, you can update it and put a new file in there.

      AUDIENCE: [INAUDIBLE] have the same name [INAUDIBLE].

      PRESENTER: You can have the same name. But it's looking at the combination of filename and the timestamp. So as long as that timestamp changes, it'll read it as a new message, yeah.
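
      A conceptual sketch of that idempotent check, with the message ID built from the filename plus the last-modified timestamp. In Mule this is handled by the idempotent message filter component; the Java below just illustrates the logic:

          import java.io.File;
          import java.util.HashSet;
          import java.util.Set;

          public class IdempotentCheck {
              private final Set<String> processedIds = new HashSet<>();

              public boolean shouldProcess(File file) {
                  // Same name + same timestamp => same ID => filtered out
                  String messageId = file.getName() + "#" + file.lastModified();
                  // add() returns false if the ID was already seen
                  return processedIds.add(messageId);
              }
          }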

      So the other pattern you saw here, which is common across the board, is Message Translator. That's a common pattern you see whenever you need to translate a message from one app to another. Again, a lot of these patterns are overlapping. And this whole flow is called a Composed Message Processor.

      So, essentially, you have a whole series of steps that are happening. And if any one of them fails, you have an exception strategy on what to do. In my case, I just have a Slack message. If something failed, it'll just post something in Slack saying, OK, this publishing failed, you have to go take a look at it.

      All right, so any other questions on this before I go to the next demo? And again, the reason I'm showing these demos is just to highlight the point that with the connector your options are unlimited. And really, for a flow like this, once you have a connector with the operations, you can set something like this up in 10 or 15 minutes, literally. It's basically drag and drop. I know some people probably could write code as fast as that. But for most people, I think this is still a convenient way to do stuff.

      All right, so the next one is similar. Here, what I'm doing is, again, starting in Salesforce. I'm creating a project in Salesforce. I'm taking a long route here; you don't have to do it this way, but I wanted to use a queuing system to show it.

      So, essentially, I'm using this app to transform an outbound Salesforce message into an Amazon SQS queue message. In MuleSoft, we're basically listening for the SQS message. And then the Forge connector updates BIM 360 with the project.

      All right, so let me go back to the demo. So again, it's the same thing that we looked at in the-- So you have the queuing system. This message gets here from Salesforce through an outbound message via Zapier. And then I'm doing some byte transformation stuff because it's Base64 encoded.

      So I take care of the decoding there. And then I update BIM 360 with that information. Actually, I also call Salesforce, because I don't get the entire data from the message. All I'm getting is the ID.

      So I make a call back into Salesforce to pull all the data. And then I call Forge to update BIM 360.
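
      A hedged sketch of that step in Java. The Base64 decoding is standard; the query and update calls are hypothetical stand-ins for the Salesforce and Forge connector operations, not the real API:

          import java.nio.charset.StandardCharsets;
          import java.util.Base64;

          public class ClaimCheckStep {

              public void onSqsMessage(String base64Payload) {
                  // 1. Undo the Base64 encoding applied to the outbound message
                  String recordId = new String(
                          Base64.getDecoder().decode(base64Payload), StandardCharsets.UTF_8);

                  // 2. "Claim" the full data: query Salesforce by ID
                  Object project = querySalesforceById(recordId);

                  // 3. Push the project into BIM 360 via the Forge connector
                  createBim360Project(project);
              }

              // Stubs for illustration only
              private Object querySalesforceById(String id) { return null; }
              private void createBim360Project(Object project) { }
          }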

      So going back to Salesforce, so I just created, like, a simple object, again, in Salesforce. So let's, again, call it Forge. OK, so it basically sends it only if you select that flag. And some of those are things that you can easily tailor.

      So the good thing about queues is, I can basically post this message and it'll update the queue. My application doesn't need to be running. And if I start it later, it'll pick it up from the queue. If you're trying to do something real time, like HTTP, you need to have your system up and running; otherwise, it'll fail.

      But since we're using a queuing system, I don't need to have my app up and running. I can just post this, and the message is already waiting in Amazon SQS. And when I come back here and start this, it's picked up the message and processed it. So if I go into BIM 360 now, all right, we can see our project updated there.

      So again, it's the simplest case, but it just highlights the point. Let's say I want to include Slack here. Again, all I have to do is find the Slack connector and put it in there. So any questions on this flow?

      So one other thing I want to highlight here is, essentially, when you build the connector, MuleSoft is what they call DataSense-enabled. So it can read your POJOs, your Plain Old Java Objects. And it can find out what fields it's looking for. And it can give you a drag-and-drop interface to do mapping stuff.

      So if you look at the transform, I just mapped five fields. But, essentially, once you have a connector like this, this kind of mapping can be done by somebody who's not a developer. So this kind of expands the number of people who can use Forge to do integrations as well.

      So you can basically build a connector that supports a drag-and-drop transformer like this. And all the user needs to do is map the source connector to the target connector. The transformer automatically infers what the inputs and outputs are. You just need to drag and drop.

      Obviously, if you need to make some additional changes to the transformation, you have to do some additional stuff. I mean, there is some syntax that needs to be learned. But for simple mapping, it's really just dragging from the source to the target, nothing more than that. So any questions on this flow or anything on the MuleSoft side?

      AUDIENCE: So [INAUDIBLE] you basically set up all these integrations in some sort of, like, project. And that whole thing is running in MuleSoft [INAUDIBLE] and it's just sitting there, like, some integrations you set up on, like, timers and things every 10 minutes. They check something or they pull something. Now those are--

      PRESENTER: Event driven, yeah.

      AUDIENCE: [INAUDIBLE] And then, what is that [INAUDIBLE] in thinking in terms of, like, having regular [INAUDIBLE] web services, in terms of having this, like, whole [INAUDIBLE] integration. If I wanted to access MuleSoft in some other system, how would I do that? [INAUDIBLE] API [INAUDIBLE]?

      PRESENTER: No, in this case I did it through a queue, Amazon SQS. But you can also do it other ways. MuleSoft also has an API, which can trigger this as well. So the primary integration mechanism is event based, where you have events. And, as you said, a WebHook could be one.

      So, in this case, we are pushing from Salesforce to BIM 360. With WebHooks, now, we can push it from BIM 360 back into Salesforce. So I can register a WebHook here. And then once somebody updates the project in BIM 360, I can update it back into Salesforce.

      But, yeah, they do have a REST API that you can define. In fact, they have a pretty rich API platform where you can define your own APIs that you can use to trigger different things. I mean, you have Forge APIs already, and then you have APIs on top of it. But it kind of gives you the ability to define, like, composite APIs.

      Let's say you want to do one big transaction and you want a single API. You can basically do that. And so HTTP could be another trigger. SQS is one. Polling a file, which was the other one we did. Or it could be batch, like you said. I mean, it could be scheduled based on time, run it as a batch.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah, you're right. All right, so let's take a look at one of the patterns we saw here. So this is basically a Message Channel. We are using a queuing system, Amazon SQS in this case. But it's basically what they call a Message Channel in the patterns.

      And this pattern is called Claim Check. Essentially, when Salesforce posted the information, it did not send all the data at the time. It just sent the ID of the document from Salesforce to MuleSoft. And MuleSoft is basically querying Salesforce to get all the data. So this pattern is called Claim Check: you don't send all the data immediately. You just send a reference and then use that to pull the rest back.

      Channel Adapter. Again, a lot of these patterns are repeating. I mean, any kind of adapter that you build to integrate with the bus is called a Channel Adapter. And then this, again, is an overarching pattern that you'll see everywhere: Pipes and Filters. Essentially, you have each layer making changes to a message as it flows through the integration. Any questions on this flow?

      All right, so with that, let's get into how you build the connector. Building the connector is actually fairly simple, which is a nice thing about MuleSoft. Really, all we have to do is create a wrapper with their annotations in Java, which links the client classes, the Forge API classes, to MuleSoft. And that's exactly what I have done in my stuff.

      And these wrappers are fairly simple. Each operation that I defined is basically a single method, which I need to wrap with the processor annotation so it gets recognized in a MuleSoft flow. And they have a toolkit called DevKit, which you can use to build these connectors.

      And then, as I said, these connectors are DataSense-enabled: if you define your static data models within the connector, the system will recognize that. And when you visually bring up the source and target, it will recognize what data I'm getting and what data I need to update. And then you can use a drag-and-drop interface to do the mapping.
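
      A skeletal sketch of that kind of wrapper, assuming Mule DevKit 3.x-style annotations. The ForgeClient, TranslateRequest, and TranslateResult names stand in for the actual Forge API kit classes and are not the real names:

          import org.mule.api.annotations.Config;
          import org.mule.api.annotations.Connector;
          import org.mule.api.annotations.Processor;

          @Connector(name = "forge", friendlyName = "Forge")
          public class ForgeConnector {

              @Config
              ConnectorConfig config;  // client ID/secret (see the Config class later)

              // Each @Processor method shows up as one operation in the palette
              @Processor
              public TranslateResult createLmvModel(TranslateRequest request) {
                  // Delegate to the Forge API kit: authenticate, start the
                  // translation job, and return the document URN
                  ForgeClient client = new ForgeClient(
                          config.getClientId(), config.getClientSecret());
                  return client.translate(request);
              }
          }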

      All right, so let me show you in the studio. So this is the connector. I'm getting the token here. And, for example, this is the method that we're using for creating the LMV model.

      And all it's doing is using the API kit. So if you've seen the Java API kit in GitHub, you can basically use that. It just needs this wrapper around it. This annotation ensures that MuleSoft recognizes this as an operation.

      And then, you can just use the standard Forge API kit to make your calls to perform the different operations. Have you guys used the API kit? Like, anybody use the Java stuff or no?

      AUDIENCE: [INAUDIBLE] use it, like, direct calls [INAUDIBLE].

      PRESENTER: OK. So actually, initially, I had done it through other stuff too. But now they have a standardized API kit, because if you look at it, I think they have different languages there. And it kind of uses the same pattern.

      So this code essentially uses the API kit. So I really didn't have to do much coding at all. I kind of had to copy some of the classes here, but really I didn't even have to do that. I could have just referenced the JAR completely and just called the methods in the API kit externally.

      And all we really need here is just to get the wrapper around it, and that takes care of it. Any questions on the steps to build the connector?

      AUDIENCE: Where were your variables defined in here?

      PRESENTER: OK, so this is the connector class. And there is another class called Config. These are the inputs where you define the client ID. So if I go back to MuleSoft and look at the connector, this is where I configure this. And that is exposed in the Config.

      So they have classes for when you create a project. After you set up MuleSoft, if you say New, Anypoint Connector Project, this will basically create a connector project for you, which has all the wrapper stuff that you need.

      All you have to do is add your methods there and add any other Config stuff you need. It kind of gives you a base template for building a connector. You can just go ahead and add your stuff on top of it.

      So these are the inputs. So, for example, see this is where I defined the payload. So I call this payload. And I'm basically using a POJO here. And, as I said, MuleSoft recognizes the Java object. And it knows what it needs.

      You can define whatever you want in that class. So that's your input. And then this is your output. So that's basically what you define. And when you define something as a processor, it knows that it's an operation that it needs to expose. So that's basically it. Any questions on the connector?
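
      A hedged sketch of the kind of Config class he describes, again assuming DevKit 3.x-style annotations; the field names are illustrative. The @Configurable fields are what surface in the connector's configuration dialog in the studio:

          import org.mule.api.annotations.Configurable;
          import org.mule.api.annotations.components.Configuration;

          @Configuration(friendlyName = "Configuration")
          public class ConnectorConfig {

              @Configurable
              private String clientId;      // Forge app client ID

              @Configurable
              private String clientSecret;  // Forge app client secret

              public String getClientId() { return clientId; }
              public void setClientId(String clientId) { this.clientId = clientId; }
              public String getClientSecret() { return clientSecret; }
              public void setClientSecret(String clientSecret) { this.clientSecret = clientSecret; }
          }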

      So, essentially, one of the real strengths of MuleSoft is that it uses what's called a SEDA, or staged event-driven architecture, which inherently gives you more throughput than standard serial processing or any kind of threading that you do, because it uses a queuing approach, where each stage can be processed concurrently. So when you're talking about cloud infrastructures and stuff like that, using this kind of model allows you to scale easily across multiple clustered environments and gives you a much higher throughput than a standard Java program or standard serial-based integrations would give you.
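
      A toy illustration of the staged event-driven idea: each stage gets its own queue and its own workers, so stages run concurrently instead of one thread carrying a message end to end. This is just the concept, not MuleSoft code, and the loops run until the process is killed:

          import java.util.concurrent.ArrayBlockingQueue;
          import java.util.concurrent.BlockingQueue;

          public class SedaSketch {
              public static void main(String[] args) throws InterruptedException {
                  BlockingQueue<String> translateQueue = new ArrayBlockingQueue<>(100);
                  BlockingQueue<String> publishQueue = new ArrayBlockingQueue<>(100);

                  // Stage 1: translation workers; several could share this queue
                  Runnable translator = () -> {
                      try {
                          while (true) {
                              String model = translateQueue.take();
                              publishQueue.put("translated:" + model); // hand off to stage 2
                          }
                      } catch (InterruptedException e) {
                          Thread.currentThread().interrupt();
                      }
                  };

                  // Stage 2: publish workers, running independently of stage 1
                  Runnable publisher = () -> {
                      try {
                          while (true) {
                              System.out.println("publishing " + publishQueue.take());
                          }
                      } catch (InterruptedException e) {
                          Thread.currentThread().interrupt();
                      }
                  };

                  new Thread(translator).start();
                  new Thread(publisher).start();
                  translateQueue.put("model-1");
              }
          }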

      So the runtime can be either on premise or on CloudHub. The community edition is free, I mean, if you want to try that. And you can pretty much do most of the stuff we talked about in the community edition.

      And I think we talked about the REST API. They actually have a fairly extensive API framework using the RAML language, which allows you to construct REST APIs that you can use as a trigger to invoke different stuff. So, again, MuleSoft is not the only one. In fact, some of these alternatives are actually even closer to the enterprise integration patterns.

      And if you like open source stuff, these are completely free. The only thing is they don't have a nice visual editor like MuleSoft does. So you'll have to do a lot of the configuration directly in XML or directly in code. But these two are excellent frameworks to use as well, which leverage the enterprise integration patterns.

      All right, so here's something interesting I saw. Here's one of the authors of the original Enterprise Integration Patterns book. He was apparently visiting a Starbucks, and all he could think of was enterprise integration patterns. So he started seeing enterprise patterns everywhere.

      So if you look at Starbucks, they have their own naming convention for stuff. And this is essentially the equivalent of a canonical data model: you have your own standard for naming data. When you get a drink, they put your name on the cup. And this is done for correlating your cup back to your order, in case there's a problem with your drink. That's basically a Correlation Identifier, which is a pattern in enterprise integrations.

      Stores like this are set up for asynchronous processing, so that they can get high throughput. You've got multiple baristas working on your drink. Depending on when your order comes in, they're all competing to pick up an order and process it. And this is, again, a pattern, called Competing Consumers.

      So here are some of the key takeaways. Essentially, I think if you take a pattern-based approach, you can greatly simplify enterprise integration. And by mapping Forge capabilities into a connector that can tie into an ESB, you can even engage non-developers in your company to develop integrations. And the event-based SEDA architecture, as we said, inherently supports higher-scale processing and gives you a lot higher throughput for your integrations. Any questions?

      AUDIENCE: So the community edition, those are [INAUDIBLE] play around with. [INAUDIBLE] community edition, free the IT department in sort of [INAUDIBLE] got to be on the server connected to [INAUDIBLE]? How does that work?

      PRESENTER: So, again, as I said, if you're trying to open firewall ports and stuff like that, you've obviously got to engage IT. But the community edition itself is just a Java app that can run in, like, a Jetty container or Tomcat. It basically is a Java application that runs.

      And you can run it. And you can do flows like queuing-based workflows and stuff like that. And also, the studio has its own container.

      If you just want to get started, go ahead and download the studio. The studio has its own container. And you can start. Like, all the stuff I did right now was completely on the studio. I didn't even have a server.

      So you can just do everything on the studio. And then, when you're ready to deploy, only then do you need to worry about whether you need the community edition or you want to use their CloudHub, which is their integration-as-a-service offering. Any other questions, guys?

      AUDIENCE: So [INAUDIBLE]?

      PRESENTER: So Java is the primary platform that MuleSoft is built on. And, as I said, the connectors, you need to write them in Java. But MuleSoft itself has scripting capabilities, so within MuleSoft you can do scripting in other languages, like JavaScript or Groovy. The core platform, though, is built on Java and the Spring Framework.

      All right. Anything else? OK, thank you, all. Thanks for coming, guys.

      [APPLAUSE]
