AU Class

Autodesk Forge Data APIs: Standardized Granular Data Extraction to Reduce Code Base


Description

For several years, Stantec has maintained a software stack used to extract data from Revit software for various purposes. Most recently, this was in support of a benchmarking effort to collect standardized, normalized data from our models for reporting and analysis by project type and sector. In this scenario, users must “submit” their data after they have done a QA check. Autodesk Data Exchange and its APIs present a method to standardize the data extraction process and give the end user control over what data is exported for downstream use. This process removes the compute from the local desktop and reduces the complexity of the internal code base. Using Microsoft Azure as a data pipeline, we move the data traffic off the corporate network (LAN, WAN, VPN, and ISP). This class will discuss our development of a solution that makes the most of the Autodesk Forge Data APIs. In addition to benchmarking, we’ll touch on some of the other areas where we have experimented with the tooling, including as part of our digital twin project.

Key Learnings

  • Discover the value of the Autodesk Forge APIs for developing custom solutions.
  • Learn how to apply Autodesk Data Exchanges to workflows that require subsets of data to be shared across multiple apps and teams.
  • Evaluate the potential value in creating your own custom solution.
  • Learn about the long-term value of Autodesk's move to granular cloud data and how to capitalize on that value through APIs.

Speaker

  • Robert Manna
    Trained as an architect, Robert spent over a decade helping to implement technology, develop software, and solve data problems with a global AE firm. He is now solving challenges for key accounts and clients at dRofus. Executive Committee member with the Digital Built Environment Institute. Father, husband, model railroader, downhill skier, swimmer, road cyclist, occasional runner.
      Transcript

      ROBERT MANNA: Welcome, everybody, to the class today. We're going to be talking about Forge Data Exchange APIs and Forge Data Exchanges. I'm Robert Manna, a senior solutions consultant with dRofus. And we've also got James Mazza, a solution architect with Stantec with us today.

      So let's go ahead and dive in. Obviously, neither of us works for Autodesk. And they said that if you don't work for Autodesk, you don't have to put up the safe harbor statement. But we are talking about some things today that are not yet officially released, at least as of when we recorded this video.

      So I've put this in just in case. Because as the Autodesk folks like to say, don't make purchasing decisions based upon future development plans or efforts, because things can change. And they do.

      So with that out of the way, we're going to start with just a little bit of introduction. I promise this isn't filler. There's a reason why we're sitting here doing a little introduction to ourselves versus diving right into the content. So yeah, let's start with James.

      JAMES MAZZA: Hi, everybody. I'm James Mazza. I work for Stantec. I've been with them for nearly 15 years at this point.

      And I've got a pretty long and varied background. I've run the whole gamut from basic Revit user, through Dynamo power user, through BIM management, through regional management, all that stuff, and then into Revit API development. And then for the last couple of years I've actually been doing solutions architecture for the buildings digital practice team at Stantec.

      For the majority of that time, I've actually worked with Robert. We've done a whole bunch of really interesting things over the last number of years. And as of a few weeks ago, or maybe months, I can't keep track anymore, he's dead to me. And I'm really, really sad about that. So on that note, let's talk about Robert.

      ROBERT MANNA: Me, yeah, I'm dead to James. But yet he still showed up to help me teach this class, which I greatly appreciate. So I worked with Stantec. And Stantec acquired my company. In total I had 19 years in with them, a variety of roles, trained as an architect.

      But really, over the years focused on technology. And really in the last five years or more really focused on data. And so, I've moved over to dRofus where data is a huge part of what we do. It's the core to our primary product that we sell.

      And again, as I said here, this is going somewhere-- data, data, data. If you are interested in learning more about transforming data, particularly using Excel and Power Query: I've been told I condensed an entire semester's worth of introduction to relational databases into a single AU class last year.

      That was a virtual class, Planning Driven by Data. So if you're interested, check that out. Because apparently I could teach a college course if I wanted. I don't know. We'll see.

      So why did we do all that intro? Because our background here is data. This is why we are interested in the Forge Data Exchanges, and why we thought it'd be worth teaching a class about these things. So in Stantec's world, we've been dealing with data for years.

      Like a number of firms and companies out there, we have our own-- or Stantec now has their own code for extracting data out of Revit models to do things with it, including getting it into Azure. And again, at dRofus, data is core to the product that we have, which is also called dRofus, not to add to the confusion. But again, data is the focus here and why we were interested in this topic.

      And from our perspective, and again, regardless of which organization we're talking about, data really has two primary paths. There's either a need or a desire to do something with data at the level of that local project.

      So the project is looking for some sort of outcome in leveraging or using data, or at an enterprise level there's an interest in aggregating data together to look for trends or feedback based on multiple projects. And again, that's ultimately true for both our organizations, as well as the customers and clients that we serve, or our internal customers and clients, depending on, again, which organization we're talking about.

      So at Stantec-- and James can jump in here and interrupt me if he wants to. But I've only been away for a period of time. There are several tools that Stantec has today that actually leverage getting data out of Revit models to then do something else in another Revit model, whether it's actually making it easier to create and set up Revit projects based on data from other models, or extracting data out of one model to then feed that data-- keep that data up to date in another model.

      And you can think of it as Stantec's own proprietary version of Revit's copy monitor tool, only much more powerful and robust in terms of the ability to handle data and pass data into the secondary models.

      So for instance, take an electrical engineer who would like to have data coming from a mechanical engineer's model in terms of the requirements for pumps or fans, or even just the locations of those things. The electrical engineer ultimately doesn't need a representation of a pump or fan. They just need a device that they can circuit to and that is in sync with those pieces of equipment in the mechanical engineer's model. So lots of uses for that tool.

      And then, the other thing that Stantec is very interested in, which, again, is not unique to Stantec at all, is culling large amounts of data from Revit models on various operations, whether Sync with Central or user-initiated, to collect that information and do analysis or trending.

      And a really good example of that is there's been a lot of work done to be able to benchmark common types of projects, whether it's workplace design, labs and science and technology, or even healthcare. As I told the team when I was at Stantec, healthcare is the hardest, so we should do that one last. Which I think they're still listening to my advice-- maybe. [LAUGHS]

      Over at the dRofus side, again, we are a software vendor. We sell a product that is really focused on allowing our end users to collect, organize, and track all of the project data about a project. And so, this extends beyond the data that would be just in a Revit model into information about things that maybe aren't actually modeled, or specification requirements, or the actual products that will be ordered.

      And again, a lot of that data is not data that you necessarily want to track in Revit. But there is data that we want to get out of Revit. For instance, what things have been modeled in the model, so that we know if we're missing anything, as well as the designed areas, so that we can compare those to the program areas.

      So we extract data out of the Revit model, and we also push data into the Revit model as well, because we want dRofus to be the authoritative source for certain pieces of information like room names, or the functional program of the room, or what objects, items, or pieces of equipment should or should not be in rooms. So that's what our product does.

      And we've got a Revit add-in today that allows for that bi-directionality of data. And of course, that data all has to be mapped in terms of where is it going into Revit, and what data are we getting out of Revit, and where is that going into dRofus. So that's what we do.

      Ultimately, for both organizations, data exchange is really key. In both cases or all the cases I just mentioned, we're talking about the fluid exchange and movement of data between a model that is managed through a desktop application and then some sort of cloud resource, be it an Azure database of some kind, or dRofus's own project databases.

      Traditionally, the only option we had for doing that integration with Revit was at the desktop level, through the Revit API. Because there really have not been any other ways to get at the Revit data except through Revit as your primary interface. And, again, both companies have had to address that.

      For both companies, again, whether it's Stantec or dRofus, we're talking about oftentimes third party models. So Stantec as a design firm may either have consultants that they've hired that are managing their own models. So Stantec doesn't necessarily really have the right to go into those models. But they may want to get data out of those models.

      Or other times there may be other consultants directly contracted with the owner that are not even contracted with Stantec. Which again, Stantec would benefit from getting data out of those models. But there's not even any sort of legal agreement between Stantec and that other consultant.

      So that's something that Stantec deals with. And again, for dRofus, all the models are third party models because we're software vendors. So those are our clients and customers' models that need to interface with the software tool that we've sold to them. So those are things that both companies deal with in terms of the context of data exchange and accessing data.

      And so, this really brings us to Forge Data Exchanges. And these become a key opportunity to maybe change this whole conversation about what it means to get data out of Revit models and be able to access it and use it in some other application for some other purpose. So the Forge Data Exchange APIs are effectively giving you access into the cloud and the Revit data that is stored in the cloud or a version of it.

      Another nice thing is we're no longer dealing with the Revit API with the Forge data exchanges. And in fact, we're now in sort of a neutral data format and not necessarily the Revit data format, which is tied so closely to the Revit APIs and requires an expert skillset in that regard.

      REST APIs are a common web technology, so again, not desktop- or application-specific; now we're talking a language that many more people understand and know. And we'll talk more about how maybe there's more than just REST. That's a hint of what's to come.

      And then versioning, and access, and updating of this data is really managed on the Autodesk Construction Cloud platform, or ACC as it's often referred to. And so, again, that removes the responsibility from you as a developer in terms of having to manage that whole piece, because Autodesk now owns that piece.

      And that may be advantageous for you in terms of being able to reduce your code stack or simplify your code stack compared to perhaps what you might be doing now or what you might be considering now outside of using data exchanges.

      So again, I sort of just walked through our hypothesis. But our hypothesis for why we decided to put this course together was that with these data exchange APIs, we should be able to, in theory, reduce some of our code footprint, because we don't necessarily need to maintain that code for directly extracting data out of models, or maybe long-term we don't have to. Because I don't think what we are doing today is going to go away immediately.

      But again, we're also able to move away from that desktop environment and move into a cloud environment. It's no longer an application-specific API, which are, again, all potentially beneficial.

      And ultimately, this means we're hopefully writing more generic tools that can potentially be extended to other applications in the future, as opposed to writing a tool that is highly specific to Revit. And then when we want to deal with some other tool, application, or platform, well, now we've got to write a solution for that thing as well.

      So that was our hypothesis. And really, for the rest of the course, we're going to talk a little bit about how far we got and the things we ran into, which will hopefully help you in your journey if this is something you decide to pursue yourself. Did I miss anything, James?

      JAMES MAZZA: No, I think you nailed it. The key thing here, all about data. We don't need Revit API specifically anymore potentially. And web developers speak this language, not super rare Revit API developers.

      ROBERT MANNA: [LAUGHS] One of those unicorns is on this call today, if you haven't figured that out already. So let's back up for a minute, and where do we start and maybe help make sure that everybody understands what we're talking about when we talk about Forge Data Exchanges.

      So what the heck are these things? So really, Forge Data Exchanges are bundles of data that have been extracted from a Revit model today. And again, we'll talk about that as well more towards the end. End users can define the content of that exchange.

      And currently, how that is most likely to happen is the use of a view in Revit, where the user has tailored the visible contents in that view to say this is the data that I'm going to share by creating this Forge data exchange. And so, using a view in Revit is very user-friendly for any Revit user because it's something that they know and understand. And they can quickly say, what I see here is what you're going to get.

      Now, that's actually not entirely true, because there are some things that export with a Forge Data Exchange that are not necessarily visible in that view. In the case of Revit, the best example is rooms. Rooms do export with data exchanges. But if you know anything about Revit, you know that rooms are not actually visible in 3D views.

      So there's a little bit of a dichotomy there, or whatever you want to say. But it does work. It does happen. And it's actually a good thing that we get that room data on the Data Exchange side and we're not limited strictly by the rules of a Revit view. That also hints a little bit at the capabilities here: ultimately, the mechanism of using a view is for ease of use, at least to get started.

      So again, bundles of data that are extracted. So the user defines that view. That view has to be listed in the publish settings, so that when the Revit model is published, that view will be published with that Revit model.

      So again, if you don't have much background in ACC or BIM 360, there's this notion that the models there are published either when they are uploaded or when somebody chooses to publish a new model. Not going to get into all those details today. But basically, once that model is published then you can create an exchange. And as part of that publishing process, the user can define what views are included.

      So from the ACC browser environment, you can go into a specific model. And within that model you can see the list of available views. And then, basically, you can choose which view you want to say, yes, create a data exchange for me. And that's going to go ahead and create that data exchange in the Construction Cloud based upon that view that the user has selected.

      Now, a great benefit here as well is that the rules of access to that exchange once it's created are entirely governed by Construction Cloud. So as a user, you can choose what folder you want that exchange to reside in. And then, whatever access rules there are for that folder are going to apply to that exchange data.

      So again, once again, that's a piece that you don't have to worry about as a developer in terms of, well, who has access to the data, who can get access to that data? That is entirely managed by the Construction Cloud, which is, again, potentially beneficial.

      The other thing too is once that data exchange is created, any time that model is republished, or effectively versioned, that exchange will update automatically. So once you have that exchange in place, you don't need to rely on users going and creating new exchanges. As that model is published, which is presumably recurring on a schedule that makes sense or when it's appropriate, the exchange will update and you'll have fresh data that you can ingest into your solution.

      So again, once those exchanges are created, they show up there in the list and they look like a file. But in reality, it's really just a pointer to a bunch of data that's been stored in the Construction Cloud.

      So that's also something important to realize: it may look like a file. It may feel like a file from an end user experience perspective in the browser. But as a developer, you're actually saying, well, no, go send me this data payload from the exchange data collection or exchange data storage that they have in ACC.

      Now the other interesting thing is, and I kind of hinted at this, there are potentially alternate ways that exchanges can be created. So once again, this is where we're stepping a little bit into the territory of things that are imminent, going to be imminently announced, or have been announced by the time that you're watching this recording: Autodesk is going to be releasing a plugin that actually allows end users to create exchanges directly from a Revit model.

      So a Revit user in Revit can open this little plugin. They can make some choices and then create a data exchange directly from the model, from Revit at that point, into the cloud. So now there's not even any need for that user to navigate to the browser and ACC to do that; they can literally do it from within the design application that they're working in.

      Again, in this case, Revit. And you can see here that you're selecting a view and then selecting a category of elements that you want to create that exchange based upon.

      And really, this is proof of concept stuff. Autodesk is demonstrating the capabilities of these APIs and this workflow definitely in the hopes that people will start to do more with it. And I'm sure they'll continue to develop their solutions over time as well.

      So getting a little bit more technical, the Forge Data Exchanges, or rather the APIs, were officially released in April of this year, 2022. So that is not a very long window of time that they've been available. We had nominal early access to it, certainly before the official release date. But to a large extent, that was more theoretical access than actual technical access.

      Because with the differences between the non-production and production environments on the Autodesk development side, it gets a little fuzzy there and a little challenging. So we've really only had a chance to seriously work with these APIs really since that official release date beyond having an understanding of what was coming with that official release.

      So again, it's a REST-based API. So that means you're getting a ton of data when you go in and ask for that exchange. And by a ton of data, just by way of example: we were doing some testing with small models and things like that, on what I would call a relatively small exchange, with on the order of 50 objects, maybe more, depending on how you count objects.

      There were 5,000 lines of JSON returned. And again, this is a small sample set. So if you imagine this in production, those JSON payloads are going to start to get quite large. That's just something to be aware of.
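
      To make that concrete, here's a minimal sketch (in Python, with the requests library) of pulling an exchange payload and counting its lines. The token and endpoint URL are placeholders to fill in from the Data Exchange API documentation, not real values:

          import json
          import requests

          TOKEN = "<access token from Forge OAuth>"           # assumed: obtained beforehand
          EXCHANGE_URL = "<exchange endpoint from the docs>"  # placeholder, see the API reference

          resp = requests.get(EXCHANGE_URL, headers={"Authorization": f"Bearer {TOKEN}"})
          resp.raise_for_status()
          payload = resp.json()

          # Pretty-printing shows how big even a "small" exchange gets.
          print(len(json.dumps(payload, indent=2).splitlines()), "lines of JSON")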

      The data structure is relatively deep and also intentionally generic. Which, again, is a good thing. But also if you have somebody coming from the world of Revit, that's going to be a shift in terms of understanding the organizational structure of that data. And then, the data is normalized to ID values.

      So the data is frankly not usable out of the box. If you want to be able to present something to your end users in some sort of user interface where they're going to understand what they're looking at, you've got to put the pieces back together for those end users with your application.

      And so, that therefore implies you're going to have to have some sort of extract, transform, load, or extract, load, transform, whichever acronym you want to use. But you're going to have to do some of that work with your tools.
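
      As a rough illustration of that transform step, here's a sketch that reassembles placed instances from a normalized object list by following type references. The field names (kind, typeId, parameters) are simplified stand-ins, not the actual exchange schema:

          # Toy "transform" pass over normalized data: instances only carry a
          # reference to their type, so we merge type-level values back in.
          def denormalize(objects):
              by_id = {obj["id"]: obj for obj in objects}
              rows = []
              for obj in objects:
                  if obj.get("kind") != "instance":
                      continue                                 # skip family/type definitions
                  merged = dict(by_id.get(obj.get("typeId"), {}).get("parameters", {}))
                  merged.update(obj.get("parameters", {}))     # instance values override type values
                  rows.append(merged)
              return rows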

      Another interesting thing to note is that Autodesk has also released a connector for Microsoft Power Automate. That also makes use of the Forge Data Exchanges, which is an interesting thing. I'm going to assume that most of the audience is at least nominally familiar with Power Automate for Microsoft. And what that's all about, it's a low code environment for being able to automate things like getting data from different places and shoving it off somewhere else or doing something with it.

      And so, the interesting thing with the Power Automate connector, because it's supposed to be a low-code environment, because it's supposed to be user-facing, the APIs that they wrote specifically for the Power Automate connector actually do a little bit of that transform for you. Because again, once that data hits the Power Automate connector, it's got to be in a state that those end users are going to be a little more comfortable with and be able to use.

      So we actually did test against the Power Automate APIs as well in terms of we actually pulled down that payload ourselves. And it was a little bit more navigable from an expert end user perspective or a subject matter expert like myself.

      But by itself it certainly still required transformation in order to get it into a usable state. So there's just sort of an asterisk there, or a note, that it is interesting they did that, and it also hints at maybe what's down the road as well.

      So just to wrap up this section, again, I'll give James an opportunity to fill in any gaps that he thinks I missed and reinforce anything that he thinks is worth reinforcing. But ultimately, the value of Forge Data Exchanges, if you tuned out everything I just said, is it's a good way to share specific data from a model with third parties. Because it's going to turn that data into generic data. And the end user has control in terms of what they're sharing with that third party.

      JAMES MAZZA: That is exactly right. And you'll note the TL;DR there for any of the developers in the room. Entirely accurate. You want to deal with exchanges because the person who made it has explicitly said, yup, you can trust this piece of this file.

      We're not giving you a huge Revit model and saying, oh, yeah, just ignore everything outside of this room. This is a much more explicit refined way of saying, this is OK. You can consume this.

      ROBERT MANNA: Exactly. And we're not giving away intellectual property either potentially, because, again, the data is now in a generic format as opposed to a native Revit format, which is good for some folks as well.

      OK, so what? We talked about the fact that you can get this data. We talked about why it's valuable. But what are you going to do with it or what do I do now?

      So as I mentioned earlier, Stantec for a number of years has been extracting data out of Revit models for various purposes and continuing to develop that pipeline. It's been a long journey that continues even after my departure. And needless to say, I was heavily involved in that journey.

      And so, for us in particular it was really interesting to be able to compare what we decided to generate in terms of a JSON payload when we extract data out of a Revit model versus what you get with the Autodesk data exchanges.

      And in some ways, it was great validation. Because we started to look at the Autodesk data, which is on the left. And we started to say, huh, this actually looks pretty similar. I mean, there's clearly differences.

      But they're clearly making similar decisions to the decisions that we made that led us to the structure of our own JSON. There were a few things where we said, huh, they're probably thinking about this a little bit smarter than we did. But it's a little too late now for us.

      But regardless, I thought it was interesting to put examples of both up here. And again, the left side is Autodesk, the right side is Stantec. Because while the upper structure of both JSONs is obviously different in terms of I've highlighted key parts of the tree, ultimately you can get down to both sets of red brackets. And you're talking about an object that has a bunch of data, or fields, or parameters associated with that object.

      And so, I've called that out pretty explicitly with the highlighting where you do eventually get to that parameter in the purple highlight and the value of that parameter or field in the teal highlight. And again, obviously, you can see some structural differences in how we collectively approached it. But the concept is ultimately fundamentally the same. And again, it was good validation for us.

      On the Stantec side, we elected to be very explicit and say, yes, this is a collection of data that came from a Revit model. And you can expect this data to be structured and formatted in a Revit kind of way. As opposed to what you see on the left with Autodesk, where you can see it's a little bit flatter and it's not explicit at that same level of the hierarchy as Stantec is.

      And you don't really know that you're dealing with Revit data until you get down to those fields where you see autodesk.revit.parameter. And that's your clear indicator at that point of, oh, this is Revit data that we're dealing with as opposed to AutoCAD, or InfraWorks, or Fusion, or Inventor, or whatever, pick your Autodesk tool of choice.

      And again, Autodesk has completely enumerated that parameter field. So that autodesk.revit.parameter, and then datum da, da, da da, which tells you not a whole lot other than you look at the value and say, oh, first floor, that must be the name of the level that we're talking about.

      As opposed to, again, on the Stantec side we elected to use the native Revit IDs for the parameters, which again doesn't tell you much more than what Autodesk was telling you, maybe less. And again, you look at the value and say, oh, that must be the name of a level. So we know we're looking at a level object one way or another. But again, similarities and differences at the same time.

      What's interesting to think about as well here is that everything is broken down to the object level. And so, if you know anything about Revit-- and for those of you who don't, I'll give you a quick lesson. In Revit, you typically have families, which most often represent things.

      But everything really is categorized as a family. So you've got a family that represents a table. And then, that family has to have one or more family types, which is to say, OK, I have this table. And I have this table type, which is three feet by three feet. So it's a three foot by three foot square table. And you could have multiple types.

      And so, you might have another type that is three foot by six foot. And so now you have two types. And then you have explicit instances of those types, which are actually the geometry a user puts into their model. And they say, yes, I want an instance of this three by three type here in my model.

      All of those are objects. The family is an object. The types are each individual objects. All of the instances or occurrences are objects. And so, in both cases, Autodesk or Stantec's own proprietary format, we have all those objects in there.

      And so, this goes back to what I was saying earlier that you have to manipulate and transform the data, because the data has been fully normalized so that you're not having multiple instances of that-- multiple definitions of that family, or multiple definitions of one type. You only have one definition of each of those things.
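
      In toy form, that normalization looks something like the sketch below: one family, two types, and a placed instance, each a separate object pointing at its parent by ID. The field names are simplified illustrations, not the actual exchange schema:

          table_family = {"id": "fam-1", "kind": "family", "name": "Table"}
          type_3x3 = {"id": "typ-1", "kind": "type", "familyId": "fam-1",
                      "parameters": {"Width": 3.0, "Depth": 3.0}}
          type_3x6 = {"id": "typ-2", "kind": "type", "familyId": "fam-1",
                      "parameters": {"Width": 3.0, "Depth": 6.0}}
          placed = {"id": "inst-1", "kind": "instance", "typeId": "typ-1",
                    "parameters": {"Level": "First Floor"}}
          # Each definition appears exactly once; rebuilding a "complete" row
          # means walking instance -> type -> family through these references.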

      And ultimately, again, to do something with it you're going to have to transform that data and put it together either for your own purposes or just even for whatever user interface that you're building for your users to interact with. Any comments, James?

      JAMES MAZZA: No. We'll talk about it again to drive the point home. Don't worry.

      ROBERT MANNA: [LAUGHS] So building off of that, the data transformation part of dealing with exchanges is not a small endeavor, ultimately. Particularly because there's so much data that you are getting back with the REST API where you're literally getting everything.

      And I come back to the fact that I had an exchange that I created where, as an end user, I'd say, well, I gave you about 50 objects in that exchange. That's really the set of things that I'm thinking about as an end user as objects: yeah, there's a bunch of instances of furniture, or a bunch of instances of equipment, and there's about 50 of them.

      Well, the reality is there's way more than 50 objects. Because you have all those other definitions that help to define those actual objects. And so, you've got a ton of data that's coming down with these. And so, it's up to you then to do something with that data.

      And so, again, because of time, manpower availability, where my expertise lies versus James's expertise and everything else, I ended up doing a lot of experimentation with this data in Power Query because it's what I know. It's what I'm good at.

      I'm kind of tempted to go learn Python now that I have a really good use case to go learn Python. But it was at least a good place to experiment with this raw data, to see what we could get out of it and better understand how it's organized.

      So what you're seeing on the right here is: I had to go get the data. I had to then get that data organized into some tables, so that I could then have a bunch of functions that would process all those tables of data. So that I could ultimately end up at my output, which was what I was interested in: all the objects by category and the properties that go with those objects.

      And so, doing that all in Power Query is probably a bad idea in the long run. It took about 45 minutes for that Power Query to refresh. And again, this was sample data with a relatively small JSON in the grand scheme of things. So certainly not an avenue for production.

      It's not really what these exchanges are intended for in the long term. But just trying to share our own experiences and maybe what you need to mentally prepare for in terms of if you do work with these things and what you're going to do with it going forward.

      So data flattening: again, the data is normalized, so we've got to get it to a human readable state preferably, or at least some sort of state that your own application can consume and understand. As I mentioned earlier, or as you saw earlier, the user-facing name doesn't show up there.

      So that's one of the key things. You've got to swap out those enumerated names with something that a user would probably expect. Or you need that information to at least know what to do with the data in the first place.

      In the case of Forge Data Exchanges, that means you will need to use the schema API endpoint to go and get the parameter schema. So you can actually accomplish what you see highlighted here in the screenshot, where that column is now named with the instance ID value and the actual user interface, or human readable, name that the user would expect to see.
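
      A hedged sketch of that lookup: fetch the parameter schema for an enumerated ID and return its display name. The endpoint path and response field below are assumptions to check against the actual schema documentation:

          import requests

          def display_name(schema_id, token):
              # Placeholder route; the real schema endpoint is in the Data Exchange docs.
              url = f"https://developer.api.autodesk.com/<schema-endpoint>/{schema_id}"
              resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
              resp.raise_for_status()
              # Assumed: the schema document carries a human-readable name field.
              return resp.json().get("name", schema_id)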

      Interesting thing here is to remember that Revit is basically unrestricted in the ability of users to add new fields to the Revit databases. You can't stop them by and large. There are some ways you can do that. But most people don't. And so, that means that your application has to be able to potentially dynamically deal with random fields popping up that maybe you weren't expecting or didn't know they would be there.

      And the other fun part is that because of this notion of type and instance, and even family to a certain extent, there is the possibility that you can have parameters with the same human readable names that are, in fact, actually different parameters. And anybody who knows anything about Revit and shared parameters is going to be nodding their head right now. And you know exactly what I'm talking about.

      What this means is you have to be prepared to keep track of these things. And you have to be prepared to deal with them. So what I did, and again, my experiments in Power Query, I ended up having to construct this sort of concatenated name that indicates is it instance or type, retain the original ID value.

      Again, if you know anything about Revit and shared parameters and all that fun stuff, you also will appreciate why it was necessary to retain that ID value. And then, finally, show the actual human readable name. Again, this is not something I would necessarily put in front of an end user for interface design.
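
      That concatenated working name can be as simple as the sketch below: keep the instance/type flag and the original ID alongside the display name, so that two parameters that share a display name can never collide. The inputs shown are illustrative:

          def parameter_key(is_instance, param_id, display_name):
              scope = "instance" if is_instance else "type"
              return f"{scope}|{param_id}|{display_name}"

          # Two shared parameters that both display as "Mark" stay distinct:
          # parameter_key(True, "<parameter id A>", "Mark")
          # parameter_key(True, "<parameter id B>", "Mark")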

      But these are the kinds of things that you will have to deal with in the background and be prepared to deal with. And your code is going to have to be able to handle it. Otherwise, you'll throw an exception in trying to deal with the parameters.

      Because you'll be like, yeah, it's all one parameter. Oh, wait, it's not the same parameter. And it has a different data type. And wait, now everything is broken because we weren't prepared to dynamically deal with the joys of user randomness. [LAUGHS]

      JAMES MAZZA: I am going to jump in here, Robert, and say that in this case, user randomness was Robert dealing with his own models [LAUGHS] and his own superior skill set in BIM.

      ROBERT MANNA: [LAUGHS] I have no further comment, or I plead the fifth. One or the other, or both. So where does that ultimately leave us? What we were able to accomplish, or prove, or validate is we can transform the data. It is usable. We did have a lot of feedback from the development team about opportunities to improve the experience for developers like yourselves.

      You have to be prepared to deal with that inherent variability that you're going to get with Revit data in particular. Which means you're going to have to be dynamic or you're going to have to be able to dynamically handle conflicts, or your application has got to prompt end users to resolve any conflicts and say, oh, yeah, do this, or that, or whatever.

      Again, we still see advantages here in terms of providing visibility into models without having to develop any tooling that specifically has to interface with that model. We only have to develop tooling that is interfacing with these cloud APIs, as opposed to, again, writing software that has to run at the desktop level in some way, shape, or form. Again, we said this earlier, but just to reinforce: the end users ultimately have control over what's being shared with you.

      Ultimately, though, there's a big disadvantage here, which is, again, the REST API dumps out an enormous amount of data. And if you need all that data, fine, that's great. The REST API may make sense for you. But don't forget that when we say it dumps out all the data, that includes geometry data.

      So it's not just the hard numbers or strings that have been associated to a particular object in the Revit model. Exchanges also include all of the geometry as well. And depending on your use case that could be very valuable. A lot of the use cases we've looked at or thought about frankly don't care about geometry.

      So that means we're fetching all this data, consuming bandwidth, consuming storage space, consuming processing time for a whole chunk of data that ultimately we have no interest in. Which begs the question, can this get any better? And so, the good news is, it can.

      And again, this is where we start to delve into the territory of about to be announced, has been announced, will soon be announced, pick your verb. But Autodesk has been working very hard on developing GraphQL APIs to query these data exchanges.

      And we see a lot of value in the GraphQL APIs. Again, we've had a lot of communications and a lot of conversations with the development team. They've been able to show us early samples of how it works and what the intent is there.

      And it's just so much better. Because now as a developer you are able to control what you're getting back in terms of the data. And you can filter that data in real time based upon either the parameters or fields that you're interested in or based on the actual values of those fields.

      So you can filter on either, which ultimately is going to mean you've got less data that you have to deal with from an ETL or ELT perspective, which is again beneficial. Still going to have to be prepared for the data to be dynamic. Because users are users.

      And then, as we saw earlier, there's also something down the road-- not today or tomorrow certainly, but there's the potential of even being able to create your own exchange creators. Where now, not only will you be able to control what data you're getting back for consumption purposes with the GraphQL APIs.

      But if you can create your own exchange creator, you can now even control what data is going into that exchange in the first place. And it's really just that user being able to say, yep, I'm ready to share that data. And I know what data is going out with this particular exchange. Any thoughts, James?

      JAMES MAZZA: Plenty of thoughts. You're doing well.

      [LAUGHTER]

      ROBERT MANNA: I need that validation. So this is actually where James gets to talk more than me. I'm the guy who comes up with the brilliant ideas and says, yeah, we can do this, right? And then I turn to James and say, we can do this, right?

      And James looks at me sideways and says, I don't know. You got 200 hours in your budget to do that? And I'm like, no, it's simple. It's easy. I do appreciate and understand that developing is not easy. So I'm going to let James talk a little bit about some of those experiences and some of the things you should be aware of from a development perspective.

      JAMES MAZZA: Yeah. So I will say this. Everything is easy once you know how to do it. So that's kind of good advice to live by. But I am going to just quickly go through some of this. And I will shout out and give kudos to the folks at Autodesk for their documentation around this stuff and Forge.

      So you'll note, the short URLs there do point to the exchange documentation, which is still beta documentation. So even though exchanges were officially released as of this recording, if you go to any of the documentation pages, they still say this is recommended for beta users only. So stuff is still changing and maybe not fully baked. So just a little bit of a warning there.

      So with that said, getting your feet wet: the kinds of technologies that I recommend you play with, or that you're likely going to play with, to get into all of this stuff, just to get rolling and get your head wrapped around all the Autodesk authentication and all that kind of stuff with Forge, just basic Forge. Postman is your friend. That's a really useful thing just to fire away the odd one-off queries and all that kind of stuff.

      When you start looking at the volume of data that you're going to be getting out of exchanges, Postman becomes painful. And you're probably going to fire up VS Code and a Jupyter Notebook and start writing some Python, so you can start iterating through all the various pagination that comes through.

      So as you're setting all of that up, the thing to keep in mind here is not only do you have to go through and set up your Forge app and get all the tokens and everything else set up. When you actually go to run the app, you have to make sure that the end user is appropriately authenticated in the target Autodesk Construction Cloud tenant and project.

      So it's not enough that you've just created the Forge application. You actually have to go through and make sure that the user context that's running this thing has actually been added to the project and actually has permission to the file. Or you're going to end up getting nowhere really fast. Next slide, Robert.
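
      For that very first step, here's a minimal sketch of the two-legged (app-only) token request as it worked at the time of this recording. Note that reading exchanges in a user's project generally requires the three-legged, user-context flow just described, so treat this as the "hello world" of Forge auth, not the whole story:

          import requests

          resp = requests.post(
              "https://developer.api.autodesk.com/authentication/v1/authenticate",
              data={
                  "client_id": "<your Forge app client id>",
                  "client_secret": "<your Forge app client secret>",
                  "grant_type": "client_credentials",
                  "scope": "data:read",
              },
          )
          resp.raise_for_status()
          token = resp.json()["access_token"]   # send as "Authorization: Bearer <token>"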

      ROBERT MANNA: Selenium at all?

      JAMES MAZZA: Oh, yeah. Selenium we can mention briefly. So when you're dealing with Jupyter Notebooks, and Python, and auth, it all gets very painful, and you get sick of re-authenticating yourself and all that. So for anyone who hasn't played with browser automation, the folks at Selenium do have some very excellent browser testing automation frameworks available.

      And you can leverage that testing automation to deal with inputs and getting outputs from doing the interactive auth and all that kind of stuff. I know I did that, and it saved me some time. So Selenium is also good. Play with that if you're interested.

      And as Robert alluded to, getting to an exchange is not actually super straightforward. It's not just as simple as browsing in a UI. The simplest thing to do is actually just find your project. And then once you've got your project, just rip through the whole contents of the project directory. Get all of the contents. And then go hunting for the FDX object.

      So the thing that you're going to want to remember here is that the item type is that little items:autodesk.bim360:FDX. Everything that you get out of Forge has an item type. These data exchanges are the FDX ones. So go ahead and find them that way. That's going to be the fastest way to do it. Next.
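
      A sketch of that hunt, using the Data Management API's folder-contents route and keeping anything whose extension type matches the FDX item type. The recursive walk and field access assume the usual JSON:API response shape:

          import requests

          BASE = "https://developer.api.autodesk.com/data/v1"

          def find_exchanges(project_id, folder_id, token):
              headers = {"Authorization": f"Bearer {token}"}
              url = f"{BASE}/projects/{project_id}/folders/{folder_id}/contents"
              found = []
              for entry in requests.get(url, headers=headers).json().get("data", []):
                  ext = entry.get("attributes", {}).get("extension", {}).get("type", "")
                  if ext == "items:autodesk.bim360:FDX":
                      found.append(entry)                  # this item is an exchange
                  elif entry.get("type") == "folders":     # recurse into subfolders
                      found += find_exchanges(project_id, entry["id"], token)
              return found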

      And then, getting through exchange results, there's a lot. You don't get them all in a single call. You get to make a whole bunch of iterative calls to go through all the various pages that it's going to return. So you're going to write yourself a bunch of loops to go through until you basically don't have a next page in the next-URL kind of payload.
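
      In code, that loop is nothing exotic. The sketch below keeps requesting until no next-page link comes back; the exact keys that hold the results and the next URL vary by endpoint, so the lookups here are assumptions to check against the docs:

          import requests

          def fetch_all_pages(first_url, token):
              headers = {"Authorization": f"Bearer {token}"}
              results, url = [], first_url
              while url:
                  page = requests.get(url, headers=headers).json()
                  results.extend(page.get("results", []))            # assumed key
                  url = page.get("pagination", {}).get("nextUrl")    # None ends the loop
              return results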

      The other thing I'm going to mention, and Robert kind of mentioned it when he was talking about using the parameter service as well. You'll notice in the bottom there, we've got this autodesk.revit.parameter.structural family code named dash 0, or 1.0.0, which means absolutely nothing to anyone.

      So if you want to use this information or these exchanges for any kind of end user application, expect to make tons of various calls. One to get all the parameter data, and then another one to figure out what on Earth is the normal human readable name of the parameter whose data I now have so that you can actually understand what on Earth it is that's called chair.

      In this case, you might know it's all model description, but the parameters below, which are Revit shared parameters-- you have no idea what those names are or anything without calling the schema service. So that's another thing to keep in mind.

      There's the other thing-- and again, it's all actually really well documented in the documentation. The representation of these JSON payloads is actually a graph. So getting some familiarity with graph structures is very helpful to you. And then, like Robert said, we've got this very interesting, very appealing GraphQL thing coming fairly soon.

      And the reason that this is going to be important and appealing to you is that rather than making a dozen calls to put together the picture of something, you're going to make one. And it's going to be awesome. So look forward to that. I think that's going to make this a much more usable tool for all of us developer folks out here.
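
      Purely as an illustration of why that's appealing, a GraphQL request could look something like the sketch below: one POST, one query, and only the fields you asked for come back. The endpoint and query schema here are hypothetical, since the API had not shipped when this was recorded:

          import requests

          QUERY = """
          {
            exchange(id: "<exchange id>") {
              elements(filter: {category: "Mechanical Equipment"}) {
                name
                properties { name value }
              }
            }
          }
          """

          resp = requests.post(
              "https://developer.api.autodesk.com/<graphql-endpoint>",  # hypothetical
              json={"query": QUERY},
              headers={"Authorization": "Bearer <token>"},
          )
          print(resp.json())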

      ROBERT MANNA: Yeah. And one thing to note is: do not confuse graph databases with the GraphQL API. Similar, related, not the same thing. The handout has a link to a Pluralsight course that James and I both recommend if you want to learn more about graph databases and dealing with that data.

      And there's also some notes in there about the documentation Autodesk currently has, which is actually a really poor example of Revit data. It's a good graph example of how they structure the data. It's a poor example as applied to Revit because of what they chose. So check out the handout for more about that.

      So what's next? This is where we start to wrap things up. Where can we go and what can we do with it? Because to be honest, we haven't built any production software. I didn't expect to build any production software, again, given the timeline associated with what we were accomplishing here. We were testing the waters to help you out and help ourselves out.

      So just to bring this back, you've got some sort of working model. Users can publish that model. Exchanges can be created from those models. Then with your own code, you can go fetch and transform those exchanges, expose that data in some sort of user interface. And then you can do something with that data.

      In fact, you can get that data back into Revit today if you wanted to. You could either use Revit design automation, which means you're now 100% cloud, or you could revert back to some sort of desktop add-in that is accessing your data source for your application to write data back into another Revit model or some sort of application.

      So Autodesk has gone and done this already for Revit-to-Inventor interop. It's a nice example of how you can get a small piece of the Revit data into Inventor, so that somebody can actually design or build something that they need to be working in Inventor for, that goes into that building that Revit is representing.

      Stantec, again, we talked earlier about this notion of coordinating data between MEP models. So that's going from one Revit model into another Revit model with some sort of user interface in between. dRofus, again, we consume data from Revit models. And we do also want to write some data back to Revit models.

      So if we look at the Stantec example, an electrical engineer could choose to consume a data exchange. They could review that list of equipment that they've now got from the mechanical engineer. They could potentially add some new data in some sort of user interface. They could modify some of the data that came from the mechanical model, or perhaps append new data to the data from the mechanical model.

      And then, ultimately, once that electrical engineer or designer has reviewed that and said, OK, yes, this is all the equipment that we're going to need to connect to and circuit in our electrical model, and I've defined what's required for that equipment-- now we could automate putting that data into the electrical model.

      And one thing with Revit design automation-- I think this is public knowledge-- Autodesk enterprise customers can use design automation to actively write to Revit Cloud work-shared central models.

      So previously, design automation was limited to a workflow where you had to upload your model and download it. Enterprise customers can actually write in real time, through sync with central, to active models. So this is actually a workflow that could be achieved.

      And again, this is 100% cloud now. No desktop application required to get from the point of the published model to the point that you're writing that data into that new model. Just let that percolate for a moment.

      Again, with dRofus, we consume some data out of Revit. And then we write other data into Revit. It's not the same data going both directions. So again, if we wanted to, or thought it was of value to us or our customers, we could take what we do today purely in the desktop add-in, and we could write our own exchange application that would consume that data coming out of the exchange to write into our project database.

      This ultimately begs the question: what if Revit could consume exchanges? If you look at what is going on, say, over on the Fusion 360 side with data exchange and sim models for Fusion, I think you can start to read some of the tea leaves. Not to mention, as I mentioned earlier, the data in exchanges is intentionally generic.

      So if you have this generic data, there's no reason that, as long as you have the endpoints or the ability to consume an exchange and send that data into a Revit file, you couldn't.

      Again, we talked earlier about how the exchange connector is coming, where you can create exchanges directly from a Revit model, which implies the ability to directly create exchanges through APIs. So the other implication there too is, what if you could create exchanges from your own applications?

      And this really starts to potentially level the playing field in terms of data interoperability between any application, Autodesk or otherwise. Because now you have this common language and this platform, which is where Autodesk wants to be. They've been talking about this for years. ACC is a platform. We want you to build on this platform. We want other people to build applications that use this platform.

      This is where they're going. Again, whether you happen to be talking to developers, or you just listen to the messaging and read the tea leaves, I think it's fairly safe to say or assume that, whatever exact form it takes, this is the direction that things are headed in.

      So ultimately, conclusions. Why should we do this? Again, we talked about this towards the very beginning. You're moving out of a pure C# environment, desktop environment, into web technologies which automatically changes the type of people that can do this work for you. You don't need that unicorn knowledge of both how to write C# and a deep understanding of how Revit works and what it does.

      Certainly, an understanding of the Revit data is useful and valuable. But again, that could be backfilled with an SME, and again, not necessarily your developer themselves. Again, a different skillset is needed. Web developers and data-focused developers can get into this.

      And ultimately, no desktop software. Which means no licenses are required for that desktop software. Again, we talked earlier about the potential to offload your code base, where you don't need to own things that previously maybe you did, because you're leveraging this technology that Autodesk is providing as part of their platform.

      And again, cloud-native ultimately. So getting out of the desktop environment from a user, end user perspective as well. Any last thoughts, James?

      JAMES MAZZA: That was a lot at the end, Robert. A lot.

      ROBERT MANNA: I know.

      JAMES MAZZA: And we almost need a splash of the safe harbor statement on that last slide; it probably would have been helpful. But Robert is trying to put the pieces together as best he can.

      Of course, none of this may come to pass. But it is very interesting and appealing. And we both do recommend that everybody takes a good look at this technology, because it is quite fascinating. And the potential is certainly there.

      ROBERT MANNA: All right. Well, thank you all for listening if you made it this far. Hopefully you didn't fall asleep.

      There's certainly opportunity to engage virtually on Autodesk University website. You can leave comments. I will do my best to try and follow up. You can find me on LinkedIn if you really want to. You can probably even construct my email address if you need to or want to.

      So we're certainly out there. Same with James; you probably can construct his email address, to his dismay. But we're certainly willing to try and answer questions as best we can if you are not able to attend this session in person at Autodesk University. And hopefully you have a good rest of the day, whatever day or time it is for you. Thank you.

      Amplitude
      弊社はAmplitudeを利用して、弊社サイトの新機能をテストし、お客様に合わせた方法で機能を使えるようにしています。そのため弊社では、弊社サイトにアクセスしているお客様から、行動に関するデータを収集しています。収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、お客様の Autodesk ID などが含まれます。機能のテストの結果によっては、お客様がご利用のサイトのバージョンが変わったり、サイトにアクセスするユーザの属性に応じて、パーソナライズされたコンテンツが表示されるようになる場合があります。. Amplitude プライバシー ポリシー
      Snowplow
      弊社は、弊社サイトでのお客様の行動に関するデータを収集するために、Snowplowを利用しています。収集する情報には、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、お客様の Autodesk ID が含まれます。このデータを基にサイトのパフォーマンスを測定したり、オンラインでの操作のしやすさを検証して機能強化に役立てています。併せて高度な解析手法を使用し、メールでのお問い合わせやカスタマー サポート、営業へのお問い合わせで、お客様に最適な体験が提供されるようにしています。. Snowplow プライバシー ポリシー
      UserVoice
      弊社は、弊社サイトでのお客様の行動に関するデータを収集するために、UserVoiceを利用しています。収集する情報には、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、お客様の Autodesk ID が含まれます。このデータを基にサイトのパフォーマンスを測定したり、オンラインでの操作のしやすさを検証して機能強化に役立てています。併せて高度な解析手法を使用し、メールでのお問い合わせやカスタマー サポート、営業へのお問い合わせで、お客様に最適な体験が提供されるようにしています。. UserVoice プライバシー ポリシー
      Clearbit
      Clearbit を使用すると、リアルタイムのデータ強化により、お客様に合わせてパーソナライズされた適切なエクスペリエンスを提供できます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。Clearbit プライバシー ポリシー
      YouTube
      YouTube はビデオ共有プラットフォームで、埋め込まれたビデオを当社のウェブ サイトで表示および共有することができます。YouTube は、視聴者のビデオのパフォーマンスの測定値を提供しています。 YouTube 社のプライバシー ポリシー

      icon-svg-hide-thick

      icon-svg-show-thick

      広告表示をカスタマイズ:お客様に関連する広告が表示されます

      Adobe Analytics
      弊社は、弊社サイトでのお客様の行動に関するデータを収集するために、Adobe Analyticsを利用しています。収集する情報には、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、お客様の Autodesk ID が含まれます。このデータを基にサイトのパフォーマンスを測定したり、オンラインでの操作のしやすさを検証して機能強化に役立てています。併せて高度な解析手法を使用し、メールでのお問い合わせやカスタマー サポート、営業へのお問い合わせで、お客様に最適な体験が提供されるようにしています。. Adobe Analytics プライバシー ポリシー
      Google Analytics (Web Analytics)
      弊社は、弊社サイトでのお客様の行動に関するデータを収集するために、Google Analytics (Web Analytics)を利用しています。データには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。このデータを基にサイトのパフォーマンスを測定したり、オンラインでの操作のしやすさを検証して機能強化に役立てています。併せて高度な解析手法を使用し、メールでのお問い合わせやカスタマー サポート、営業へのお問い合わせで、お客様に最適な体験が提供されるようにしています。. Google Analytics (Web Analytics) プライバシー ポリシー<>
      Marketo
      弊社は、お客様に関連性のあるコンテンツを、適切なタイミングにメールで配信できるよう、Marketoを利用しています。そのため、お客様のオンラインでの行動や、弊社からお送りするメールへの反応について、データを収集しています。収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、メールの開封率、クリックしたリンクなどが含まれます。このデータに、他の収集先から集めたデータを組み合わせ、営業やカスタマー サービスへの満足度を向上させるとともに、高度な解析処理によって、より関連性の高いコンテンツを提供するようにしています。. Marketo プライバシー ポリシー
      Doubleclick
      弊社は、Doubleclickがサポートするサイトに広告を配置するために、Doubleclickを利用しています。広告には、Doubleclickのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Doubleclickがお客様から収集したデータを使用する場合があります。Doubleclickに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Doubleclick プライバシー ポリシー
      HubSpot
      弊社は、お客様に関連性のあるコンテンツを、適切なタイミングにメールで配信できるよう、HubSpotを利用しています。そのため、お客様のオンラインでの行動や、弊社からお送りするメールへの反応について、データを収集しています。収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、メールの開封率、クリックしたリンクなどが含まれます。. HubSpot プライバシー ポリシー
      Twitter
      弊社は、Twitterがサポートするサイトに広告を配置するために、Twitterを利用しています。広告には、Twitterのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Twitterがお客様から収集したデータを使用する場合があります。Twitterに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Twitter プライバシー ポリシー
      Facebook
      弊社は、Facebookがサポートするサイトに広告を配置するために、Facebookを利用しています。広告には、Facebookのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Facebookがお客様から収集したデータを使用する場合があります。Facebookに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Facebook プライバシー ポリシー
      LinkedIn
      弊社は、LinkedInがサポートするサイトに広告を配置するために、LinkedInを利用しています。広告には、LinkedInのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、LinkedInがお客様から収集したデータを使用する場合があります。LinkedInに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. LinkedIn プライバシー ポリシー
      Yahoo! Japan
      弊社は、Yahoo! Japanがサポートするサイトに広告を配置するために、Yahoo! Japanを利用しています。広告には、Yahoo! Japanのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Yahoo! Japanがお客様から収集したデータを使用する場合があります。Yahoo! Japanに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Yahoo! Japan プライバシー ポリシー
      Naver
      弊社は、Naverがサポートするサイトに広告を配置するために、Naverを利用しています。広告には、Naverのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Naverがお客様から収集したデータを使用する場合があります。Naverに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Naver プライバシー ポリシー
      Quantcast
      弊社は、Quantcastがサポートするサイトに広告を配置するために、Quantcastを利用しています。広告には、Quantcastのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Quantcastがお客様から収集したデータを使用する場合があります。Quantcastに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Quantcast プライバシー ポリシー
      Call Tracking
      弊社は、キャンペーン用にカスタマイズした電話番号を提供するために、Call Trackingを利用しています。カスタマイズした電話番号を使用することで、お客様は弊社の担当者にすぐ連絡できるようになり、弊社はサービスのパフォーマンスをより正確に評価できるようになります。弊社では、提供した電話番号を基に、サイトでのお客様の行動に関するデータを収集する場合があります。. Call Tracking プライバシー ポリシー
      Wunderkind
      弊社は、Wunderkindがサポートするサイトに広告を配置するために、Wunderkindを利用しています。広告には、Wunderkindのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Wunderkindがお客様から収集したデータを使用する場合があります。Wunderkindに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Wunderkind プライバシー ポリシー
      ADC Media
      弊社は、ADC Mediaがサポートするサイトに広告を配置するために、ADC Mediaを利用しています。広告には、ADC Mediaのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、ADC Mediaがお客様から収集したデータを使用する場合があります。ADC Mediaに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. ADC Media プライバシー ポリシー
      AgrantSEM
      弊社は、AgrantSEMがサポートするサイトに広告を配置するために、AgrantSEMを利用しています。広告には、AgrantSEMのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、AgrantSEMがお客様から収集したデータを使用する場合があります。AgrantSEMに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. AgrantSEM プライバシー ポリシー
      Bidtellect
      弊社は、Bidtellectがサポートするサイトに広告を配置するために、Bidtellectを利用しています。広告には、Bidtellectのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Bidtellectがお客様から収集したデータを使用する場合があります。Bidtellectに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Bidtellect プライバシー ポリシー
      Bing
      弊社は、Bingがサポートするサイトに広告を配置するために、Bingを利用しています。広告には、Bingのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Bingがお客様から収集したデータを使用する場合があります。Bingに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Bing プライバシー ポリシー
      G2Crowd
      弊社は、G2Crowdがサポートするサイトに広告を配置するために、G2Crowdを利用しています。広告には、G2Crowdのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、G2Crowdがお客様から収集したデータを使用する場合があります。G2Crowdに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. G2Crowd プライバシー ポリシー
      NMPI Display
      弊社は、NMPI Displayがサポートするサイトに広告を配置するために、NMPI Displayを利用しています。広告には、NMPI Displayのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、NMPI Displayがお客様から収集したデータを使用する場合があります。NMPI Displayに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. NMPI Display プライバシー ポリシー
      VK
      弊社は、VKがサポートするサイトに広告を配置するために、VKを利用しています。広告には、VKのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、VKがお客様から収集したデータを使用する場合があります。VKに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. VK プライバシー ポリシー
      Adobe Target
      弊社はAdobe Targetを利用して、弊社サイトの新機能をテストし、お客様に合わせた方法で機能を使えるようにしています。そのため弊社では、弊社サイトにアクセスしているお客様から、行動に関するデータを収集しています。収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID、お客様の Autodesk ID などが含まれます。機能のテストの結果によっては、お客様がご利用のサイトのバージョンが変わったり、サイトにアクセスするユーザの属性に応じて、パーソナライズされたコンテンツが表示されるようになる場合があります。. Adobe Target プライバシー ポリシー
      Google Analytics (Advertising)
      弊社は、Google Analytics (Advertising)がサポートするサイトに広告を配置するために、Google Analytics (Advertising)を利用しています。広告には、Google Analytics (Advertising)のデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Google Analytics (Advertising)がお客様から収集したデータを使用する場合があります。Google Analytics (Advertising)に提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Google Analytics (Advertising) プライバシー ポリシー
      Trendkite
      弊社は、Trendkiteがサポートするサイトに広告を配置するために、Trendkiteを利用しています。広告には、Trendkiteのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Trendkiteがお客様から収集したデータを使用する場合があります。Trendkiteに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Trendkite プライバシー ポリシー
      Hotjar
      弊社は、Hotjarがサポートするサイトに広告を配置するために、Hotjarを利用しています。広告には、Hotjarのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Hotjarがお客様から収集したデータを使用する場合があります。Hotjarに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Hotjar プライバシー ポリシー
      6 Sense
      弊社は、6 Senseがサポートするサイトに広告を配置するために、6 Senseを利用しています。広告には、6 Senseのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、6 Senseがお客様から収集したデータを使用する場合があります。6 Senseに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. 6 Sense プライバシー ポリシー
      Terminus
      弊社は、Terminusがサポートするサイトに広告を配置するために、Terminusを利用しています。広告には、Terminusのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、Terminusがお客様から収集したデータを使用する場合があります。Terminusに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. Terminus プライバシー ポリシー
      StackAdapt
      弊社は、StackAdaptがサポートするサイトに広告を配置するために、StackAdaptを利用しています。広告には、StackAdaptのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、StackAdaptがお客様から収集したデータを使用する場合があります。StackAdaptに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. StackAdapt プライバシー ポリシー
      The Trade Desk
      弊社は、The Trade Deskがサポートするサイトに広告を配置するために、The Trade Deskを利用しています。広告には、The Trade Deskのデータと、弊社サイトにアクセスしているお客様から弊社が収集する行動に関するデータの両方が使われます。弊社が収集するデータには、お客様がアクセスしたページ、ご利用中の体験版、再生したビデオ、購入した製品やサービス、お客様の IP アドレスまたはデバイスの ID が含まれます。この情報に併せて、The Trade Deskがお客様から収集したデータを使用する場合があります。The Trade Deskに提供しているデータを弊社が使用するのは、お客様のデジタル広告体験をより適切にカスタマイズし、関連性の高い広告をお客様に配信するためです。. The Trade Desk プライバシー ポリシー
      RollWorks
      We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

      オンライン体験の品質向上にぜひご協力ください

      オートデスクは、弊社の製品やサービスをご利用いただくお客様に、優れた体験を提供することを目指しています。これまでの画面の各項目で[はい]を選択したお客様については、弊社でデータを収集し、カスタマイズされた体験の提供とアプリケーションの品質向上に役立てさせていただきます。この設定は、プライバシー ステートメントにアクセスすると、いつでも変更できます。

      お客様の顧客体験は、お客様が自由に決められます。

      オートデスクはお客様のプライバシーを尊重します。オートデスクでは収集したデータを基に、お客様が弊社製品をどのように利用されているのか、お客様が関心を示しそうな情報は何か、オートデスクとの関係をより価値あるものにするには、どのような改善が可能かを理解するよう務めています。

      そこで、お客様一人ひとりに合わせた体験を提供するために、お客様のデータを収集し、使用することを許可いただけるかどうかお答えください。

      体験をカスタマイズすることのメリットにつきましては、本サイトのプライバシー設定の管理でご確認いただけます。弊社のプライバシー ステートメントでも、選択肢について詳しく説明しております。