AU Class

Autodesk Forge Data APIs: Standardized Granular Data Extraction to Reduce Code Base


Description

For several years, Stantec has maintained a software stack used to extract data from Revit software for various purposes. Most recently, this was in support of a benchmarking effort to collect standardized, normalized data from our models for reporting and analysis by project type and sector. In this scenario, users must “submit” their data after they have completed a QA check. Autodesk Data Exchange and its APIs present a method to standardize the data extraction process and give control to the end user in regard to what data is exported for downstream use. This process removes the compute from the local desktop and reduces the complexity of the internal code base. Using Microsoft Azure as a data pipeline, we move the data traffic off the corporate network (LAN, WAN, VPN, and ISP). This class will discuss our development of a solution that maximizes the Autodesk Forge Data APIs. In addition to benchmarking, we’ll touch on some of the other areas where we have experimented with the tooling, including as part of our digital twin project.

Key Learnings

  • Discover the value of the Autodesk Forge APIs for developing custom solutions.
  • Learn how to apply Autodesk Data Exchanges to workflows that require subsets of data to be shared across multiple apps and teams.
  • Evaluate the potential value in creating your own custom solution.
  • Learn about the long-term value of Autodesk's move to granular cloud data and how to capitalize on that value through APIs.

Speakers

  • Robert Manna
    Trained as an architect, Robert spent over a decade helping to implement technology, develop software, and solve data problems with a global AE firm. He is now solving challenges for key accounts and clients at dRofus, and is an Executive Committee member with the Digital Built Environment Institute. Father, husband, model railroader, downhill skier, swimmer, road cyclist, occasional runner.
      Transcript

      ROBERT MANNA: Welcome, everybody, to the class today. We're going to be talking about Forge Data Exchange APIs and Forge Data Exchanges. I'm Robert Manna, a senior solutions consultant with dRofus. And we've also got James Mazza, a solution architect with Stantec with us today.

      So let's go ahead and dive in. Neither of us works for Autodesk, obviously. And they said that if you don't work for Autodesk, you don't have to put up the safe harbor statement. But we are talking about some things today that are not yet officially released, at least as of when we recorded this video.

      So I've put this in just in case. Because as the Autodesk folks like to say, don't make purchasing decisions based upon future development plans or efforts, because things can change. And they do.

      So with that out of the way, we're going to start with just a little bit of introduction. I promise this isn't filler. There's a reason why we're sitting here doing a little introduction to ourselves versus diving right into the content. So yeah, let's start with James.

      JAMES MAZZA: Hi, everybody. I'm James Mazza. I work for Stantec. I've been with them for nearly 15 years at this point.

      And I've got a pretty long and varied background. I've run the whole gamut from basic Revit user, through Dynamo power user, through BIM management, through regional management, all that stuff, and then into Revit API development. And then for the last couple of years I've actually been doing solutions architecture for the buildings digital practice team at Stantec.

      For the majority of that time, I've actually worked with Robert. We've done a whole bunch of really interesting things over the last number of years. And as of a few weeks ago, or maybe months, I can't keep track anymore, he's dead to me. And I'm really, really sad about that. So on that note, let's talk about Robert.

      ROBERT MANNA: Me, yeah, I'm dead to James. But yet he still showed up to help me teach this class, which I greatly appreciate. So I worked with Stantec. And Stantec acquired my company. In total I had 19 years in with them, a variety of roles, trained as an architect.

      But really, over the years focused on technology. And really in the last five years or more really focused on data. And so, I've moved over to dRofus where data is a huge part of what we do. It's the core to our primary product that we sell.

      And again, as I said here, this is going somewhere: data, data, data. If you are interested in learning more about transforming data, particularly using Excel and Power Query, I've been told I condensed an entire semester's worth of introduction to relational databases into a single AU class last year.

      That was a virtual class, Planning Driven by Data. So if you're interested, check that out. Because apparently I could teach a college course, if I wanted. I don't know. We'll see.

      So why do we do all that intro? Because our background here is data. This is why we are interested in the Forge Data Exchanges, and why we thought it'd be worth teaching a class about these things. So in Stantec's world, we've been dealing with data for years.

      Like a number of firms and companies out there, we have our own-- or Stantec now has their own code for extracting data out of Revit models to do things with it, including getting it into Azure. Again, at dRofus, data is core to the product that we have, which is called dRofus as well, not to add to the confusion. But again, data is the focus here and why we were interested in this topic.

      And from our perspective, and again, regardless of which organization we're talking about, data really has two primary paths. There's either a need or a desire to do something with data at the level of that local project.

      So the project is looking for some sort of outcome in leveraging or using data, or at an enterprise level there's an interest in aggregating data together to look for trends or feedback based on multiple projects. And again, it's ultimately true for both our organizations, as well as for the customers and clients that we serve, or our internal customers and clients, depending on which organization we're talking about.

      So at Stantec-- and James can jump in here and interrupt me if he wants to. But I've only been away for a period of time. There are several tools that Stantec has today that actually leverage getting data out of Revit models to then do something else in another Revit model, whether it's making it easier to create and set up Revit projects based on data from other models, or extracting data out of one model to keep that data up to date in another model.

      And you can think of it as Stantec's own proprietary version of Revit's copy monitor tool, only much more powerful and robust in terms of the ability to handle data and pass data into the secondary models.

      So for instance, an electrical engineer might like to have data coming from a mechanical engineer's model in terms of the requirements for pumps or fans, or even just the locations of those things. The electrical engineer ultimately doesn't need a representation of a pump or fan. They just need a device that they can circuit to and that is in sync with those pieces of equipment in the mechanical engineer's model. So lots of uses for that tool.

      And then, the other thing that Stantec is very interested in which, again, not unique to Stantec at all, is culling large amounts of data from Revit models on various operations, whether it's Sync with Central or user-initiated to collect that information and do analysis or trending.

      And a really good example of that is there's been a lot of work done to be able to benchmark common types of projects, whether it's workplace design, labs and science and technology, or even healthcare. As I told the team when I was at Stantec, healthcare is the hardest, so we should do that one last. Which I think they're still listening to my advice-- maybe. [LAUGHS]

      Over at the dRofus side, again, we are a software vendor. We sell a product that is really focused on allowing our end users to collect, organize, and track all of the project data about a project. And so, this extends beyond the data that would be just in a Revit model into information about things that maybe aren't actually modeled, or specification requirements, or the actual products that will be ordered.

      And again, a lot of that data is not data that you necessarily want to track in Revit. But there is data that we want to get out of Revit. For instance, what things have been modeled in the model, so that we know if we're missing anything, as well as the designed areas, so that we can compare those to the program areas.

      So we extract data out of the Revit model, and we also push data into the Revit model as well, because we want dRofus to be the authoritative source for certain pieces of information like room names, or the functional program of the room, or what objects, items, and pieces of equipment should or should not be in rooms. So that's what our product does.

      And we've got a Revit add-in today that allows for that bi-directionality of data. And of course, that data all has to be mapped in terms of where is it going into Revit, and what data are we getting out of Revit, and where is that going into dRofus. So that's what we do.

      Ultimately, for both organizations, data exchange is really key. In both cases or all the cases I just mentioned, we're talking about the fluid exchange and movement of data between a model that is managed through a desktop application and then some sort of cloud resource, be it an Azure database of some kind, or dRofus's own project databases.

      Traditionally, really the only option we had for doing that integration with Revit was at the desktop level, through the Revit API. There really hasn't been any other way to get at the Revit data except through Revit as your primary interface. And that's something both companies have had to address.

      For both companies, again, whether it's Stantec or dRofus, we're talking about oftentimes third party models. So Stantec as a design firm may either have consultants that they've hired that are managing their own models. So Stantec doesn't necessarily really have the right to go into those models. But they may want to get data out of those models.

      Or other times there may be other consultants directly contracted with the owner that are not even contracted with Stantec. Which again, Stantec would benefit from getting data out of those models. But there's not even any sort of legal agreement between Stantec and that other consultant.

      So that's something that Stantec deals with. And again, for dRofus, all the models are third party models because we're software vendors. So those are our clients and customers' models that need to interface with the software tool that we've sold to them. So those are things that both companies deal with in terms of the context of data exchange and accessing data.

      And so, this really brings us to Forge Data Exchanges. And these become a key opportunity to maybe change this whole conversation about what it means to get data out of Revit models and be able to access it and use it in some other application for some other purpose. So the Forge Data Exchange APIs are effectively giving you access into the cloud and the Revit data that is stored in the cloud or a version of it.

      Another nice thing is we're no longer dealing with the Revit API with the Forge data exchanges. And in fact, we're now in sort of a neutral data format and not necessarily the Revit data format, which is tied so closely to the Revit APIs and requires an expert skillset in that regard.

      The exchanges are exposed through REST APIs, a common web technology, so again, not desktop or application-specific. Now we're talking a language that many more people understand and know. And we'll talk more about whether there's more than just REST. That's a hint of what's to come.

      And then versioning, and access, and updating of this data is really managed on the Autodesk Construction Cloud platform, or ACC as it's often referred to. And so, again, that removes the responsibility from you as a developer in terms of having to manage that whole piece, because Autodesk now owns that piece.

      And that may be advantageous for you in terms of being able to reduce your code stack or simplify your code stack compared to perhaps what you might be doing now or what you might be considering now outside of using data exchanges.

      So I sort of just walked through our hypothesis. But our hypothesis in putting this course together was that with these Data Exchange APIs, we should, in theory, be able to reduce some of our code footprint, because we don't necessarily need to maintain that code for directly extracting data out of models, or maybe long-term we won't have to. I don't think what we are doing today is going to go away immediately.

      But again, we're also able to move away from that desktop environment and move into a cloud environment. It's no longer an application-specific API, which are, again, all potentially beneficial.

      And ultimately, this means we're hopefully writing more generic tools that can potentially be extended to other applications in the future, as opposed to writing a tool that is highly specific to Revit. Because then, when we want to deal with some other tool, application, or platform, we've got to write a solution for that thing as well.

      So that was our hypothesis. And really, for the rest of the course, we're going to talk a little bit about how far we got and the things we ran into, which will hopefully help you in your journey if this is something you decide to pursue yourself. Did I miss anything, James?

      JAMES MAZZA: No, I think you nailed it. The key thing here: it's all about data. We potentially don't need the Revit API specifically anymore. And web developers speak this language, not just super-rare Revit API developers.

      ROBERT MANNA: [LAUGHS] One of those unicorns is on this call today, if you haven't figured that out already. So let's back up for a minute and make sure that everybody understands what we're talking about when we talk about Forge Data Exchanges.

      So what the heck are these things? So really, Forge Data Exchanges are bundles of data that have been extracted from a Revit model today. And again, we'll talk about that as well more towards the end. End users can define the content of that exchange.

      And currently, how that is most likely to happen is through the use of a view in Revit, where the user has tailored the visible contents in that view to say: this is the data that I'm going to share by creating this Forge data exchange. Using a view in Revit is very user-friendly for any Revit user because it's something that they know and understand. And they can quickly say, what I see here is what you're going to get.

      Now, that's actually not entirely true, because there are some things that export with a Forge Data Exchange that are not necessarily visible in that view. In the case of Revit, the best example is rooms. Rooms do export with data exchanges. But if you know anything about Revit, you know that rooms are not actually visible in 3D views.

      So there's a little bit of a dichotomy there, or whatever you want to say. But it does work. It does happen. And it is actually a good thing that we get that room data on the Data Exchange side and we're not limited strictly by the rules of a Revit view. That also hints a little at the capabilities here: the mechanism of using a view is really for ease of use, at least to get started.

      So again, bundles of data that are extracted. The user defines that view, and that view has to be in the list of views that get published when the Revit model is published.

      So again, if you don't have much background in ACC or BIM 360, there's this notion that the models there are published either when they are uploaded or when somebody chooses to publish a new model. Not going to get into all those details today. But basically, once that model is published then you can create an exchange. And as part of that publishing process, the user can define what views are included.

      So from the ACC browser environment, you can go into a specific model. And within that model you can see the list of available views. And then, basically, you can choose which view you want to say, yes, create a data exchange for me. And that's going to go ahead and create that data exchange in the Construction Cloud based upon that view that the user has selected.

      Now, a great benefit here as well is that the rules of access to that exchange once it's created are entirely governed by Construction Cloud. So as a user, you can choose what folder you want that exchange to reside in. And then, whatever access rules there are for that folder are going to apply to that exchange data.

      So again, once again, that's a piece that you don't have to worry about as a developer in terms of, well, who has access to the data, who can get access to that data? That is entirely managed by the Construction Cloud, which is, again, potentially beneficial.

      The other thing too is, once that data exchange is created, any time that model is republished, or effectively versioned, that exchange will update automatically. So once you have that exchange in place, you don't need to rely on users going and creating new exchanges. As that model is published, which is presumably recurring on a schedule that makes sense or when it's appropriate, the exchange will update and you'll have fresh data that you can ingest into your solution.

      So again, once those exchanges are created, they show up there in the list and they look like a file. But in reality, it's really just a pointer to a bunch of data that's been stored in the Construction Cloud.

      So that's also something important to realize: it may look like a file. It may feel like a file from an end user experience perspective in the browser. But as a developer, you're actually saying, well, no, go send me this data payload from the exchange data collection or exchange data storage that they have in ACC.

      Now the other interesting thing is, and I kind of hinted at this, there are potentially alternate ways that exchanges can be created. So once again, this is where we're stepping a little bit into the territory of things that are imminent, about to be announced, or have been announced by the time you're watching this recording: Autodesk is going to be releasing a plugin that actually allows end users to create exchanges directly from a Revit model.

      So a Revit user in Revit can open this little plugin. They can make some choices and then create a data exchange directly from the model, from Revit at that point, into the cloud. So now there's not even any need for that user to navigate to the browser and ACC to do that. They can literally do it from within the design application that they're working in.

      Again, in this case, Revit. And you can see here that you're selecting a view and then selecting a category of elements that you want to create that exchange based upon.

      And really, this is proof of concept stuff. Autodesk is demonstrating the capabilities of these APIs and this workflow definitely in the hopes that people will start to do more with it. And I'm sure they'll continue to develop their solutions over time as well.

      So getting a little bit more technical, the Forge Data Exchanges, or rather the APIs, were officially released in April of 2022. So that is not a very long window of time that they've been available. We had some nominal early access to it, certainly before the official release date. But to a large extent, that was more theoretical access than actual technical access.

      Because with the differences between the non-production and production environments on the Autodesk development side, it gets a little fuzzy there and a little challenging. So we've really only had a chance to seriously work with these APIs really since that official release date beyond having an understanding of what was coming with that official release.

      So again, it's a REST-based API. And that means you're getting a ton of data when you go in and ask for that exchange. By a ton of data, just by way of example: we were doing some testing with small models, on what I would call a relatively small exchange, with on the order of 50 objects, maybe more, depending on how you count objects.

      There were 5,000 lines in the JSON that was returned. And again, this is a small sample set. So if you imagine this in production, those JSONs are going to start to get quite large. That's just something to be aware of.

      The data structure is relatively deep and also intentionally generic. Which, again, is a good thing. But also if you have somebody coming from the world of Revit, that's going to be a shift in terms of understanding the organizational structure of that data. And then, the data is normalized to ID values.

      So the data is frankly not usable out of the box. If you want to be able to present something to your end users in some sort of user interface where they're going to understand what they're looking at, you've got to put the pieces back together for those end users with your application.

      And so, that implies you're going to have to have some sort of extract, transform, load, or extract, load, transform, whichever acronym you want to use. You're going to have to do some of that work with your tools.
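      To make that concrete, here's a minimal sketch in Python of the kind of flattening involved, using a simplified stand-in for the payload shape. The real exchange JSON is much deeper than this; the field names below are invented for illustration only.

          # Sketch: flatten a normalized, nested payload into per-object rows.
          # The input shape is a simplified stand-in, not the real exchange schema.
          def flatten_objects(payload):
              rows = []
              for obj in payload.get("objects", []):
                  row = {"id": obj["id"], "category": obj.get("category")}
                  for prop in obj.get("properties", []):
                      # One column per property name, one row per object.
                      row[prop["name"]] = prop.get("value")
                  rows.append(row)
              return rows

          sample = {"objects": [{"id": "abc", "category": "Furniture",
                                 "properties": [{"name": "Level", "value": "First Floor"}]}]}
          print(flatten_objects(sample))  # [{'id': 'abc', 'category': 'Furniture', 'Level': 'First Floor'}]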

      Another interesting thing to note is that Autodesk has also released a connector for Microsoft Power Automate that also makes use of the Forge Data Exchanges. I'm going to assume that most of the audience is at least nominally familiar with Microsoft Power Automate and what it's all about: it's a low-code environment for automating things like getting data from different places and shoving it off somewhere else or doing something with it.

      And so, the interesting thing with the Power Automate connector, because it's supposed to be a low-code environment, because it's supposed to be user-facing, the APIs that they wrote specifically for the Power Automate connector actually do a little bit of that transform for you. Because again, once that data hits the Power Automate connector, it's got to be in a state that those end users are going to be a little more comfortable with and be able to use.

      So we actually did test against the Power Automate APIs as well; we pulled down that payload ourselves. And it was a little bit more navigable from the perspective of an expert end user or a subject matter expert like myself.

      But by itself it still required transformation in order to get it into a usable state. So there's just sort of an asterisk there, or a note: it's interesting that they did that, and it also hints at what may be down the road.

      So just to wrap up this section, again, I'll give James an opportunity to fill in any gaps that he thinks I missed and reinforce anything that he thinks is worth reinforcing. But ultimately, the value of Forge Data Exchanges, if you tuned out everything I just said, is it's a good way to share specific data from a model with third parties. Because it's going to turn that data into generic data. And the end user has control in terms of what they're sharing with that third party.

      JAMES MAZZA: That is exactly right. And you'll notice the TLDR there for any of the developers in the room. Entirely accurate. You want to deal with exchanges. Because the person who made it has explicitly said, yup, you can trust this piece of this file.

      We're not giving you a huge Revit model and saying, oh, yeah, just ignore everything outside of this room. This is a much more explicit refined way of saying, this is OK. You can consume this.

      ROBERT MANNA: Exactly. And we're not giving away intellectual property either potentially, because, again, the data is now in a generic format as opposed to a native Revit format, which is good for some folks as well.

      OK, so what? We talked about the fact that you can get this data. We talked about why it's valuable. But what are you going to do with it or what do I do now?

      So as I mentioned earlier, Stantec for a number of years has been extracting data out of Revit models for various purposes and continuing to develop that pipeline. It's been a long journey that continues even after my departure. And needless to say, I was heavily involved in that journey.

      And so, for us in particular it was really interesting to be able to compare what we decided to generate in terms of a JSON payload when we extract data out of a Revit model versus what you get with the Autodesk data exchanges.

      And in some ways, it was great validation. Because we started to look at the Autodesk data, which is on the left. And we started to say, huh, this actually looks pretty similar. I mean, there's clearly differences.

      But they're clearly making similar decisions to the decisions that we made that led us to the structure of our own JSON. There were a few things that we've said, huh, they're probably thinking about this a little bit smarter than we did. But it's a little too late now for us.

      But regardless, I thought it was interesting to put examples of both up here. And again, the left side is Autodesk, the right side is Stantec. Because while the upper structure of both JSONs is obviously different in terms of I've highlighted key parts of the tree, ultimately you can get down to both sets of red brackets. And you're talking about an object that has a bunch of data, or fields, or parameters associated with that object.

      And so, I've called that out pretty explicitly with the highlighting where you do eventually get to that parameter in the purple highlight and the value of that parameter or field in the teal highlight. And again, obviously, you can see some structural differences in how we collectively approached it. But the concept is ultimately fundamentally the same. And again, it was good validation for us.

      On the Stantec side, we elected to be very explicit and say, yes, this is a collection of data that came from a Revit model. And you can expect this data to be structured and formatted in a Revit kind of way. As opposed to what you see on the left with Autodesk, where you can see it's a little bit flatter and it's not explicit at that same level of the hierarchy as Stantec is.

      And you don't really know that you're dealing with Revit data until you get down to those fields where you see autodesk.revit.parameter. And that's your clear indicator at that point of, oh, this is Revit data that we're dealing with as opposed to AutoCAD, or InfraWorks, or Fusion, or Inventor, or whatever, pick your Autodesk tool of choice.

      And again, Autodesk has completely enumerated that parameter field. So that autodesk.revit.parameter, and then datum da, da, da da, which tells you not a whole lot other than you look at the value and say, oh, first floor, that must be the name of the level that we're talking about.

      As opposed to, again, on the Stantec side we elected to use the native Revit IDs for the parameters, which again doesn't tell you much more than what Autodesk was telling you, maybe less. And again, you look at the value and say, oh, that must be the name of a level. So we know we're looking at a level object one way or another. But again, similarities and differences at the same time.

      What's interesting to think about as well here is that everything is broken down to the object level. And so, if you know anything about Revit-- and for those of you who don't, I'll give you a quick lesson. In Revit, you typically have families, which are, most often, things.

      But everything really is categorized as a family. So you've got a family that represents a table. And then, that family has to have one or more family types, which is to say, OK, I have this table. And I have this table type, which is three feet by three feet. So it's a three foot by three foot square table. And you could have multiple types.

      And so, you might have another type that is three foot by six foot. And so now you have two types. And then you have explicit instances of those types which actually is the geometry a user puts into their model. And they say, yes, I want an instance of this three by three type here in my model.

      All of those are objects. The family is an object. The types are each individual objects. All of the instances or occurrences are objects. And so, in both cases, Autodesk or Stantec's own proprietary format, we have all those objects in there.

      And so, this goes back to what I was saying earlier that you have to manipulate and transform the data, because the data has been fully normalized so that you're not having multiple instances of that-- multiple definitions of that family, or multiple definitions of one type. You only have one definition of each of those things.
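      As a rough sketch of putting those pieces back together, assume the normalized records have already been parsed into dictionaries keyed by ID. Every field name here is invented for illustration; the real data references other objects by ID inside a deeper graph.

          # Sketch: re-join normalized family -> type -> instance records by ID.
          families = {"fam1": {"name": "Table"}}
          types = {"typ1": {"familyId": "fam1", "name": "3ft x 3ft"}}
          instances = [{"id": "inst1", "typeId": "typ1"}]

          for inst in instances:
              t = types[inst["typeId"]]      # resolve the type definition
              f = families[t["familyId"]]    # resolve the family definition
              print(f'{f["name"]} / {t["name"]} / {inst["id"]}')
          # -> Table / 3ft x 3ft / inst1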

      And ultimately, again, to do something with it you're going to have to transform that data and put it together either for your own purposes or just even for whatever user interface that you're building for your users to interact with. Any comments, James?

      JAMES MAZZA: No. We'll talk about it again to drive the point home. Don't worry.

      ROBERT MANNA: [LAUGHS] So building off of that, the data transformation part of dealing with exchanges is not a small endeavor, ultimately. Particularly because there's so much data that you are getting back with the REST API where you're literally getting everything.

      And I come back to the fact that I had an exchange that I created where, as an end user, I'd say, well, I gave you about 50 objects in that exchange. That's really the set of things that I'm thinking about as an end user as objects: yeah, there's a bunch of instances of furniture, or a bunch of instances of equipment, and there's about 50 of them.

      Well, the reality is there's way more than 50 objects. Because you have all those other definitions that help to define those actual objects. And so, you've got a ton of data that's coming down with these. And so, it's up to you then to do something with that data.

      And so, again, because of time, manpower availability, where my expertise lies versus James's expertise and everything else, I ended up doing a lot of experimentation with this data in Power Query because it's what I know. It's what I'm good at.

      I'm kind of tempted to go learn Python now that I have a really good use case to go learn Python. But it was at least a good place to experiment with this raw data, to see what we could get out of it and to better understand its organization.

      So what you're seeing on the right here is: I had to go get the data. I then had to get that data organized into some tables, so that I could have a bunch of functions process all those tables of data, so that I could ultimately end up at the output I was interested in, which was all the objects by category and the properties that go with those objects.

      And so, doing that all in Power Query is probably a bad idea in the long run. It took about 45 minutes for that Power Query to refresh. And again, this was sample data with a relatively small JSON in the grand scheme of things. So certainly not an avenue for production.

      It's not really what these exchanges are intended for in the long term. But just trying to share our own experiences and maybe what you need to mentally prepare for in terms of if you do work with these things and what you're going to do with it going forward.

      So data flattening: again, the data is normalized, so we've got to get it to a human readable state preferably, or at least some sort of state that your own application can consume and understand. As I mentioned earlier, or as you saw earlier, the user-facing name doesn't show up there.

      So that's one of the key things. You've got to swap out those enumerated names with something that a user would probably expect. Or you need that information to at least know what to do with the data in the first place.

      In the case of Forge Data Exchanges, that means you will need to use the schema API endpoint to go and get the parameter schema. So you can actually accomplish what you see highlighted here in the screenshot, where that column is now named with the instance ID value and the actual user interface, or human readable, name that the user would expect to see.
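      In code terms, that step is roughly: call the schema endpoint for each enumerated parameter ID, build an ID-to-display-name map, and rename your columns. This is a hedged sketch; the URL below is a placeholder, so check the current Data Exchange documentation for the real schema route and response shape.

          import requests

          # Placeholder route; the real schema endpoint lives in the
          # Data Exchange documentation.
          SCHEMA_URL = "https://developer.api.autodesk.com/.../schemas/{schema_id}"

          def display_name(schema_id, token):
              """Resolve an enumerated parameter ID to its display name."""
              resp = requests.get(SCHEMA_URL.format(schema_id=schema_id),
                                  headers={"Authorization": f"Bearer {token}"})
              resp.raise_for_status()
              # Assumed response field; the real schema document is richer.
              return resp.json().get("title", schema_id)

          # renamed = {display_name(param_id, token): value
          #            for param_id, value in raw_row.items()}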

      Interesting thing here is to remember that Revit is basically unrestricted in the ability of users to add new fields to the Revit databases. You can't stop them by and large. There are some ways you can do that. But most people don't. And so, that means that your application has to be able to potentially dynamically deal with random fields popping up that maybe you weren't expecting or didn't know they would be there.

      And the other fun part is that because of this notion of type and instance, and even family to a certain extent, there is the possibility that you can have parameters with the same human readable names that are, in fact, actually different parameters. And anybody who knows anything about Revit and shared parameters is going to be nodding their head right now. And you know exactly what I'm talking about.

      What this means is you have to be prepared to keep track of these things. And you have to be prepared to deal with them. So what I did in my Power Query experiments was construct this sort of concatenated name that indicates whether it's instance or type and retains the original ID value.

      Again, if you know anything about Revit and shared parameters and all that fun stuff, you also will appreciate why it was necessary to retain that ID value. And then, finally, show the actual human readable name. Again, this is not something I would necessarily put in front of an end user for interface design.
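      A tiny sketch of that naming scheme, with made-up inputs:

          # Sketch: build a disambiguated column name so two parameters that
          # share a human readable name (instance vs. type, or two shared
          # parameters) don't collide. Keeping the ID preserves uniqueness.
          def column_name(scope, param_id, readable):
              return f"{scope}:{param_id}:{readable}"

          print(column_name("instance", "p123", "Mark"))  # instance:p123:Mark
          print(column_name("type", "p456", "Mark"))      # type:p456:Mark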

      But these are the kinds of things that you will have to deal with in the background and be prepared to deal with. And your code is going to have to be able to handle it. Otherwise, you'll throw an exception in trying to deal with the parameters.

      Because you'll be like, yeah, it's all one parameter. Oh, wait, it's not the same parameter. And it has a different data type. And wait, now everything is broken because we weren't prepared to dynamically deal with the joys of user randomness. [LAUGHS]

      JAMES MAZZA: I am going to jump in here, Robert, and say that in this case, user randomness was Robert dealing with his own models [LAUGHS] and his own superior skill set in BIM.

      ROBERT MANNA: [LAUGHS] I have no further comment, or I plead the fifth. One or the other, or both. So where does that ultimately leave us? What we were able to accomplish, or prove, or validate is we can transform the data. It is usable. We did have a lot of feedback from the development team about opportunities to improve the experience for developers like yourselves.

      You have to be prepared to deal with that inherent variability that you're going to get with Revit data in particular. Which means you're going to have to be dynamic or you're going to have to be able to dynamically handle conflicts, or your application has got to prompt end users to resolve any conflicts and say, oh, yeah, do this, or that, or whatever.

      Again, we still see advantages here in terms of providing visibility into models without having to develop any tooling that specifically has to interface with that model. We only have to develop tooling that interfaces with these cloud APIs, as opposed to writing software that has to run at the desktop level in some way, shape, or form. Again, we said this earlier, but just to reinforce: the end users ultimately have control over what's being shared with you.

      Ultimately, though, there's a big disadvantage here, which is, again, the REST API dumps out an enormous amount of data. And if you need all that data, fine, that's great. The REST API may make sense for you. But don't forget that when we say it dumps out all the data, that includes geometry data.

      So it's not just the hard numbers or strings that have been associated to a particular object in the Revit model. Exchanges also include all of the geometry as well. And depending on your use case that could be very valuable. A lot of the use cases we've looked at or thought about frankly don't care about geometry.

      So that means we're fetching all this data, consuming bandwidth, consuming storage space, consuming processing time for a whole chunk of data that ultimately we have no interest in. Which begs the question, can this get any better? And so, the good news is, it can.

      And again, this is where we start to delve into the territory of about to be announced, has been announced, will soon be announced, pick your verb. But Autodesk has been working very hard on developing GraphQL APIs to query these data exchanges.

      And we see a lot of value in the GraphQL APIs. Again, we've had a lot of communications and a lot of conversations with the development team. They've been able to show us early samples of how it works and what the intent is there.

      And it's just so much better. Because now as a developer you are able to control what you're getting back in terms of the data. And you can filter that data in real time based upon either the parameters or fields that you're interested in or based on the actual values of those fields.

      So you can filter on either, which ultimately is going to mean you've got less data that you have to deal with from an ETL or ELT perspective, which is again beneficial. Still going to have to be prepared for the data to be dynamic. Because users are users.
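      To give a flavor of what that could look like, here's a purely illustrative GraphQL query. The schema, field names, and filter syntax are all our assumptions, since the API wasn't public when we recorded; consult the released documentation for the real shape.

          # Illustrative only: every field and filter name here is assumed.
          query = """
          query {
            exchange(id: "EXCHANGE_ID") {
              elements(filter: {category: "Mechanical Equipment"}) {
                id
                properties(names: ["Mark", "Flow"]) {
                  name
                  value
                }
              }
            }
          }
          """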

      And then, as we saw earlier, there's also, down the road, not today or tomorrow certainly, the potential of even being able to create your own exchange creators. With GraphQL you'll be able to control what data you're getting back for consumption purposes.

      But if you can create your own exchange creator, you can now also control what data is going into that exchange in the first place. And it's really just that user being able to say, yep, I'm ready to share that data. And I know what data is going out with this particular exchange. Any thoughts, James?

      JAMES MAZZA: Plenty of thoughts. You're doing well.

      [LAUGHTER]

      ROBERT MANNA: I need that validation. So this is actually where James gets to talk more than me. I'm the guy who comes up with the brilliant ideas and says, yeah, we can do this, right? And then I turn to James and say, we can do this, right?

      And James looks at me sideways and says, I don't know. You got 200 hours in your budget to do that? And I'm like, no, it's simple. It's easy. I do appreciate and understand that developing is not easy. So I'm going to let James talk a little bit about some of those experiences and some of the things you should be aware of from a development perspective.

      JAMES MAZZA: Yeah. So I will say this. Everything is easy once you know how to do it. So that's good advice to live by. But I am going to just quickly go through some of this. And I will shout out and give kudos to the folks at Autodesk for their documentation around this stuff and Forge.

      So you'll note, the short URLs there do point to the exchange documentation, which is still beta documentation. So even though exchanges were officially released as of this recording, if you go to any of the documentation pages, they still say this is recommended for beta users only. So stuff is still changing and maybe not fully baked. Just a little bit of warning there.

      So with that said, getting your feet wet: these are the kinds of technologies that I recommend you play with, or that you're likely going to play with, to get into all of this stuff and get your head wrapped around Autodesk authentication and all that kind of stuff with Forge, just basic Forge. Postman is your friend. It's a really useful thing just to fire away the odd one-off queries and all that kind of stuff.

      When you start looking at the volume of data that you're going to be getting out of exchanges, Postman becomes painful. And you're probably going to fire up VS Code and a Jupyter notebook and start writing some Python, so you can start iterating through all the various pagination that comes through.
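      Before any of those calls work from Python, you need a token. Here's a minimal two-legged sketch against the Forge v1 authentication route that was current when we recorded (client ID and secret are placeholders). Note that a two-legged token won't carry the user context that ACC project access requires; for that you need the three-legged flow, which is where Selenium, below, comes in.

          import requests

          # Two-legged client-credentials token (legacy v1 auth route).
          resp = requests.post(
              "https://developer.api.autodesk.com/authentication/v1/authenticate",
              data={
                  "client_id": "YOUR_CLIENT_ID",          # placeholder
                  "client_secret": "YOUR_CLIENT_SECRET",  # placeholder
                  "grant_type": "client_credentials",
                  "scope": "data:read",
              },
          )
          resp.raise_for_status()
          token = resp.json()["access_token"]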

      So as you're setting all of that up, the thing to keep in mind here is not only do you have to go through and set up your Forge app and get all the tokens and everything else set up. When you actually go to run the app, you have to make sure that the end user is appropriately authenticated in the target Autodesk Construction Cloud tenant and project.

      So it's not enough that you've just created the Forge application. You actually have to go through and make sure that the user context that's running this thing has actually been added to the project and actually has permission to the file. Or you're going to end up getting nowhere really fast. Next slide, Robert.

      ROBERT MANNA: Selenium at all?

      JAMES MAZZA: Oh, yeah. Selenium we can mention briefly. So when you're dealing with Jupyter notebooks, and Python, and auth, it all gets very painful, and you get sick of re-authenticating yourself and all that. So for anyone who hasn't played with browser automation, the folks at Selenium do have some very excellent browser testing automation frameworks available.

      And you can leverage that testing automation to deal with inputs and getting outputs from doing the interactive auth and all that kind of stuff. I know I did that, and it saved me some time. So Selenium is also good. Play with that if you're interested.
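      For what it's worth, a minimal sketch of that Selenium-driven login might look like the following; the element locator is a placeholder for whatever the login form actually uses, and the authorize URL parameters are the standard three-legged ones.

          from selenium import webdriver
          from selenium.webdriver.common.by import By

          # Drive the interactive three-legged login so you aren't retyping
          # credentials every session. The locator ID is a placeholder.
          driver = webdriver.Chrome()
          driver.get(
              "https://developer.api.autodesk.com/authentication/v1/authorize"
              "?response_type=code&client_id=YOUR_CLIENT_ID"
              "&redirect_uri=YOUR_CALLBACK&scope=data:read")
          driver.find_element(By.ID, "userName").send_keys("you@example.com")
          # ...submit the form, finish login, then read the authorization
          # code off the redirect URL to exchange it for a token.
          code = driver.current_url.split("code=")[-1]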

      And as Robert alluded to, getting to an exchange is not actually super straightforward. It's not just as simple as browsing in a UI. The simplest thing to do is actually just find your project. And then once you've got your project, just rip through the whole contents of the project directory. Get all of the contents. And then go hunting for the FDX object.

      So the thing that you're going to want to remember here is that the item type is that little items:autodesk.bim360:FDX string. Everything that you get out of Forge has an item type. These data exchanges are the FDX ones. So go ahead and find them that way. That's going to be the fastest way to do it. Next.
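      Here's a rough sketch of that hunt using the Data Management API's folder contents route. Pagination and error handling are omitted, and the field access reflects the JSON:API shape those routes return.

          import requests

          BASE = "https://developer.api.autodesk.com/data/v1"
          FDX_TYPE = "items:autodesk.bim360:FDX"

          def find_exchanges(project_id, folder_id, token):
              """Recursively walk a project folder tree, collecting FDX item IDs."""
              headers = {"Authorization": f"Bearer {token}"}
              url = f"{BASE}/projects/{project_id}/folders/{folder_id}/contents"
              found = []
              for entry in requests.get(url, headers=headers).json().get("data", []):
                  ext = entry.get("attributes", {}).get("extension", {})
                  if ext.get("type") == FDX_TYPE:
                      found.append(entry["id"])
                  elif entry.get("type") == "folders":
                      found += find_exchanges(project_id, entry["id"], token)
              return found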

      And then, getting through exchange results, there's a lot. You don't get them all in a single call. You get to make a whole bunch of iterative calls to go through all the various pages that it's going to return. So you're going to write yourself a bunch of loops that run until you basically don't have a next page URL in the payload.
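      The loop itself is the usual follow-the-next-link pattern. A minimal sketch, with assumed field names for the results array and the next-page URL; check the actual payload for the real ones.

          import requests

          def get_all_pages(url, token):
              """Keep requesting until the payload stops providing a next URL."""
              headers = {"Authorization": f"Bearer {token}"}
              results = []
              while url:
                  page = requests.get(url, headers=headers).json()
                  results.extend(page.get("results", []))          # field name assumed
                  url = page.get("pagination", {}).get("nextUrl")  # shape assumed
              return results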

      The other thing I'm going to mention, and Robert kind of mentioned it when he was talking about using the parameter service as well. You'll notice in the bottom there, we've got this autodesk.revit.parameter.structural family code named dash 0, or 1.0.0, which means absolutely nothing to anyone.

      So if you want to use this information or these exchanges for any kind of end user application, expect to make tons of various calls. One to get all the parameter data, and then another one to figure out what on Earth is the normal human readable name of the parameter whose data I now have so that you can actually understand what on Earth it is that's called chair.

      In this case, you might know it's All Model Description. But for the parameters below, which are Revit shared parameters, you have no idea what those names are without calling the schema service. So that's another thing to keep in mind.

      There's the other thing-- and again, it's all actually really well documented in the documentation. The representation of these JSON payloads is actually a graph. So getting some familiarity with graph structures is very helpful. And then, like Robert said, we've got this very interesting, very appealing GraphQL thing coming fairly soon.

      And the reason that this is going to be important and appealing to you is that rather than making a dozen calls to put together the picture of something, you're going to make one. And it's going to be awesome. So look forward to that. I think that's going to make this a much more usable tool for all of us developer folks out here.

      ROBERT MANNA: Yeah. And one thing to note: do not confuse graph databases with GraphQL APIs. Similar, related, not the same thing. The handout has a link to a Pluralsight course that James and I both recommend if you want to learn more about graph databases and dealing with that data.

      And there's also some notes in there about the documentation Autodesk currently has, which is actually a really poor example of Revit data. It's a good graph example of how they structure the data. It's a poor example as applied to Revit because of what they chose. So check out the handout for more about that.

      So what's next? This is where we start to wrap things up. Where can we go and what can we do with it? Because to be honest, we haven't built any production software. I didn't expect to build any production software, again, given the timeline associated with what we were accomplishing here. We were testing the waters to help you out and help ourselves out.

      So just to bring this back, you've got some sort of working model. Users can publish that model. Exchanges can be created from those models. Then with your own code, you can go fetch and transform those exchanges, expose that data in some sort of user interface. And then you can do something with that data.

      In fact, you can get that data back into Revit today if you wanted to. You could either use Revit Design Automation, which means you're now 100% cloud, or you could revert back to some sort of desktop add-in that accesses your application's data source to write data back into another Revit model or some other application.

      So Autodesk has gone and done this already for Revit to Inventor interop. It's a nice example of, well, how can you get a small piece of the Revit data into Inventor, so that somebody can design or build the thing they need to be working in Inventor for, which goes into that building that Revit is representing.

      Stantec, again, we talked earlier about this notion of coordinating data between MEP models. So that's going from one Revit model into another Revit model with some sort of user interface in between. dRofus, again, we consume data from Revit models. And we do also want to write some data back to Revit models.

      So if we look at the Stantec example, an electrical engineer could choose to consume a data exchange. They could review that list of equipment that they've now got from the mechanical engineer. They could potentially add some new data in some sort of user interface. They could modify some of the data that came from the mechanical model, or perhaps append new data to the mechanical model.

      And then, ultimately, once that electrical engineer or designer has reviewed that and said, OK, yes, this is all the equipment that we're going to need to connect to in our electrical model and circuit. And I've defined what's required for that equipment. Now we could automate putting that data into the electrical model.

      And one thing with Revit Design Automation, which I think is public knowledge: Autodesk enterprise customers can use Design Automation to actively write to Revit cloud-workshared central models.

      So previously, Design Automation was limited: you had to upload your model and then download it. Enterprise customers can actually write in real time, through sync with central, to active models. So this is actually a workflow that could be achieved.

      And again, this is 100% cloud now. No desktop application required to get from the point of the published model to the point that you're writing that data into that new model. Just let that percolate for a moment.

      Again, with dRofus, we consume some data out of Revit. And then we write other data into Revit. It's not the same data going both directions. So again, if we wanted to, or thought it was of value to us or our customers, we could take what we do today purely in the desktop add-in, and we could write our own exchange application that would consume that data coming out of the exchange to write into our project database.

      This ultimately begs the question of, what if Revit could consume exchanges? If you look at what is going on, say, over on the Fusion 360 side with data exchange and SIM models for Fusion, I think you can start to read some of the tea leaves. Not to mention, as I said earlier, the data in exchanges is intentionally generic.

      So if you have this generic data, then as long as you have the endpoints, or the ability to consume an exchange and send that data into a Revit file, there's no reason you couldn't.

      Again, we talked earlier about how the exchange connector is coming, where you can create exchanges directly from a Revit model, which implies the ability to directly create exchanges through APIs. So the other implication there too is, what if you could create exchanges from your own applications?

      And this really starts to potentially level the playing field in terms of data interoperability between any application, Autodesk or otherwise. Because now you have this common language and this platform, which is where Autodesk wants to be. They've been talking about this for years. ACC is a platform. We want you to build on this platform. We want other people to build applications that use this platform.

      This is where they're going. Again, whether you happen to be talking to developers, or you just listen to the messaging and read the tea leaves, I think it's fairly safe to say or assume that, whatever exact form it takes, this is the direction that things are headed in.

      So ultimately, conclusions. Why should we do this? Again, we talked about this towards the very beginning. You're moving out of a pure C#, desktop environment into web technologies, which automatically changes the type of people that can do this work for you. You don't need that unicorn knowledge of both how to write C# and a deep understanding of how Revit works and what it does.

      Certainly, understanding of the Revit data is useful and valuable. But again, that could be backfilled with an SME, not necessarily the developer themselves. Again, a different skillset is needed. Web developers and data-focused developers can get into this.

      And ultimately, no desktop software, which means no licenses are required for that desktop software. Again, we talked earlier about the potential to offload code base, where you don't need to own things that previously maybe you did, because you're leveraging this technology that Autodesk is providing as part of their platform.

      And again, cloud-native ultimately. So getting out of the desktop environment from a user, end user perspective as well. Any last thoughts, James?

      JAMES MAZZA: That was a lot at the end, Robert. A lot.

      ROBERT MANNA: I know.

      JAMES MAZZA: And we almost needed a splash; the safe harbor statement on that last slide probably would have been helpful. But Robert is trying to put the pieces together as best he can.

      Of course, none of this may come to pass. But it is very interesting and appealing. And we both do recommend that everybody takes a good look at this technology, because it is quite fascinating. And the potential is certainly there.

      ROBERT MANNA: All right. Well, thank you all for listening if you made it this far. Hopefully you didn't fall asleep.

      There's certainly opportunity to engage virtually on Autodesk University website. You can leave comments. I will do my best to try and follow up. You can find me on LinkedIn if you really want to. You can probably even construct my email address if you need to or want to.

      So we're certainly out there. Same with James, you probably can construct his email address to his dismay. But we're certainly willing to try and answer questions as possible if you are not able to attend this session in person at Autodesk University. And hopefully you have a good rest of the day, whatever day or time it is for you. Thank you.

      Google Analytics (Web Analytics)
      我们通过 Google Analytics (Web Analytics) 收集与您在我们站点中的活动相关的数据。这可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。我们使用此数据来衡量我们站点的性能并评估联机体验的难易程度,以便我们改进相关功能。此外,我们还将使用高级分析方法来优化电子邮件体验、客户支持体验和销售体验。. Google Analytics (Web Analytics) 隐私政策
      AdWords
      我们通过 AdWords 在 AdWords 提供支持的站点上投放数字广告。根据 AdWords 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 AdWords 收集的与您相关的数据相整合。我们利用发送给 AdWords 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. AdWords 隐私政策
      Marketo
      我们通过 Marketo 更及时地向您发送相关电子邮件内容。为此,我们收集与以下各项相关的数据:您的网络活动,您对我们所发送电子邮件的响应。收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、电子邮件打开率、单击的链接等。我们可能会将此数据与从其他信息源收集的数据相整合,以根据高级分析处理方法向您提供改进的销售体验或客户服务体验以及更相关的内容。. Marketo 隐私政策
      Doubleclick
      我们通过 Doubleclick 在 Doubleclick 提供支持的站点上投放数字广告。根据 Doubleclick 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Doubleclick 收集的与您相关的数据相整合。我们利用发送给 Doubleclick 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Doubleclick 隐私政策
      HubSpot
      我们通过 HubSpot 更及时地向您发送相关电子邮件内容。为此,我们收集与以下各项相关的数据:您的网络活动,您对我们所发送电子邮件的响应。收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、电子邮件打开率、单击的链接等。. HubSpot 隐私政策
      Twitter
      我们通过 Twitter 在 Twitter 提供支持的站点上投放数字广告。根据 Twitter 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Twitter 收集的与您相关的数据相整合。我们利用发送给 Twitter 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Twitter 隐私政策
      Facebook
      我们通过 Facebook 在 Facebook 提供支持的站点上投放数字广告。根据 Facebook 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Facebook 收集的与您相关的数据相整合。我们利用发送给 Facebook 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Facebook 隐私政策
      LinkedIn
      我们通过 LinkedIn 在 LinkedIn 提供支持的站点上投放数字广告。根据 LinkedIn 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 LinkedIn 收集的与您相关的数据相整合。我们利用发送给 LinkedIn 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. LinkedIn 隐私政策
      Yahoo! Japan
      我们通过 Yahoo! Japan 在 Yahoo! Japan 提供支持的站点上投放数字广告。根据 Yahoo! Japan 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Yahoo! Japan 收集的与您相关的数据相整合。我们利用发送给 Yahoo! Japan 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Yahoo! Japan 隐私政策
      Naver
      我们通过 Naver 在 Naver 提供支持的站点上投放数字广告。根据 Naver 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Naver 收集的与您相关的数据相整合。我们利用发送给 Naver 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Naver 隐私政策
      Quantcast
      我们通过 Quantcast 在 Quantcast 提供支持的站点上投放数字广告。根据 Quantcast 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Quantcast 收集的与您相关的数据相整合。我们利用发送给 Quantcast 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Quantcast 隐私政策
      Call Tracking
      我们通过 Call Tracking 为推广活动提供专属的电话号码。从而,使您可以更快地联系我们的支持人员并帮助我们更精确地评估我们的表现。我们可能会通过提供的电话号码收集与您在站点中的活动相关的数据。. Call Tracking 隐私政策
      Wunderkind
      我们通过 Wunderkind 在 Wunderkind 提供支持的站点上投放数字广告。根据 Wunderkind 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Wunderkind 收集的与您相关的数据相整合。我们利用发送给 Wunderkind 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Wunderkind 隐私政策
      ADC Media
      我们通过 ADC Media 在 ADC Media 提供支持的站点上投放数字广告。根据 ADC Media 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 ADC Media 收集的与您相关的数据相整合。我们利用发送给 ADC Media 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. ADC Media 隐私政策
      AgrantSEM
      我们通过 AgrantSEM 在 AgrantSEM 提供支持的站点上投放数字广告。根据 AgrantSEM 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 AgrantSEM 收集的与您相关的数据相整合。我们利用发送给 AgrantSEM 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. AgrantSEM 隐私政策
      Bidtellect
      我们通过 Bidtellect 在 Bidtellect 提供支持的站点上投放数字广告。根据 Bidtellect 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Bidtellect 收集的与您相关的数据相整合。我们利用发送给 Bidtellect 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Bidtellect 隐私政策
      Bing
      我们通过 Bing 在 Bing 提供支持的站点上投放数字广告。根据 Bing 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Bing 收集的与您相关的数据相整合。我们利用发送给 Bing 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Bing 隐私政策
      G2Crowd
      我们通过 G2Crowd 在 G2Crowd 提供支持的站点上投放数字广告。根据 G2Crowd 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 G2Crowd 收集的与您相关的数据相整合。我们利用发送给 G2Crowd 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. G2Crowd 隐私政策
      NMPI Display
      我们通过 NMPI Display 在 NMPI Display 提供支持的站点上投放数字广告。根据 NMPI Display 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 NMPI Display 收集的与您相关的数据相整合。我们利用发送给 NMPI Display 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. NMPI Display 隐私政策
      VK
      我们通过 VK 在 VK 提供支持的站点上投放数字广告。根据 VK 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 VK 收集的与您相关的数据相整合。我们利用发送给 VK 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. VK 隐私政策
      Adobe Target
      我们通过 Adobe Target 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Adobe Target 隐私政策
      Google Analytics (Advertising)
      我们通过 Google Analytics (Advertising) 在 Google Analytics (Advertising) 提供支持的站点上投放数字广告。根据 Google Analytics (Advertising) 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Google Analytics (Advertising) 收集的与您相关的数据相整合。我们利用发送给 Google Analytics (Advertising) 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Google Analytics (Advertising) 隐私政策
      Trendkite
      我们通过 Trendkite 在 Trendkite 提供支持的站点上投放数字广告。根据 Trendkite 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Trendkite 收集的与您相关的数据相整合。我们利用发送给 Trendkite 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Trendkite 隐私政策
      Hotjar
      我们通过 Hotjar 在 Hotjar 提供支持的站点上投放数字广告。根据 Hotjar 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Hotjar 收集的与您相关的数据相整合。我们利用发送给 Hotjar 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Hotjar 隐私政策
      6 Sense
      我们通过 6 Sense 在 6 Sense 提供支持的站点上投放数字广告。根据 6 Sense 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 6 Sense 收集的与您相关的数据相整合。我们利用发送给 6 Sense 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. 6 Sense 隐私政策
      Terminus
      我们通过 Terminus 在 Terminus 提供支持的站点上投放数字广告。根据 Terminus 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Terminus 收集的与您相关的数据相整合。我们利用发送给 Terminus 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Terminus 隐私政策
      StackAdapt
      我们通过 StackAdapt 在 StackAdapt 提供支持的站点上投放数字广告。根据 StackAdapt 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 StackAdapt 收集的与您相关的数据相整合。我们利用发送给 StackAdapt 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. StackAdapt 隐私政策
      The Trade Desk
      我们通过 The Trade Desk 在 The Trade Desk 提供支持的站点上投放数字广告。根据 The Trade Desk 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 The Trade Desk 收集的与您相关的数据相整合。我们利用发送给 The Trade Desk 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. The Trade Desk 隐私政策
      RollWorks
      We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

      是否确定要简化联机体验?

      我们希望您能够从我们这里获得良好体验。对于上一屏幕中的类别,如果选择“是”,我们将收集并使用您的数据以自定义您的体验并为您构建更好的应用程序。您可以访问我们的“隐私声明”,根据需要更改您的设置。

      个性化您的体验,选择由您来做。

      我们重视隐私权。我们收集的数据可以帮助我们了解您对我们产品的使用情况、您可能感兴趣的信息以及我们可以在哪些方面做出改善以使您与 Autodesk 的沟通更为顺畅。

      我们是否可以收集并使用您的数据,从而为您打造个性化的体验?

      通过管理您在此站点的隐私设置来了解个性化体验的好处,或访问我们的隐私声明详细了解您的可用选项。