AU Class

Harnessing Project Data for Enhanced Decision Making in the AEC Industry

Description

In the architecture, engineering, and construction (AEC) industry, collaboration and iterative design processes generate vast amounts of data for each drawing, model, and workflow. With legacy workflows and file-based applications like Revit software, this critical data remains uncollected and underused, preventing its full use for AI-powered predictions and informed decision making. This course will address how IMEG Corp. tackled the challenge of untapped decision-making data by using the AEC Data Model API. We will explore how the AEC Data Model enables cloud-based workflows that allow reading, writing, and extending model subsets without the need for desktop application plug-ins. Participants will learn techniques for identifying data resources, executing robust data extraction, and assessing data quality using GraphQL. And we'll go into detailed strategies for integrating data insights into daily workflows with tools like Power BI, helping AEC firms stay at the forefront of digital transformation.

Key Learnings

  • Learn about transforming untapped data into a powerful decision-making tool through the AEC Data Model API.
  • Learn how to execute data extraction and quality validation using GraphQL.
  • Learn how to integrate data insights into workflows using Power BI and other knowledge resources.

Speakers

  • Jasmine Lee
    Jasmine is a Mechanical Engineer spearheading the Data Initiative for the Innovation Division at IMEG Corp, a full-service engineering firm with over 90 offices across the US. Specializing in integrating corporate data into streamlined workflows, she is dedicated to promoting a data-centric approach to design engineering. Jasmine's passion lies in optimizing processes and engineering designs, keeping user needs and efficiency at the center of it all.

Transcript

MICHAEL KILKELLY: Welcome to "Harnessing Project Data for Enhanced Decision Making in the AEC Industry." I'm Michael Kilkelly.

JASMINE LEE: And I'm Jasmine Lee. So a little bit about me. I'm the Innovation Data Analyst at our company, IMEG. My background is as a mechanical designer, with experience in our health and educational divisions. So over my career, I've done HVAC and plumbing design for our health care and commercial buildings as well.

As of now, I'm leading the data initiative for our data collection and implementation as a company. With that, I collaborate with a lot of our software developers and product owners, like Michael, to create tools that assist our engineers in design.

MICHAEL KILKELLY: And I'm Michael Kilkelly. I'm the Structural Product Owner at IMEG, and I work with our structural engineering group to develop tools and workflows to help them work more efficiently. Now, I'm not a structural engineer. I'm actually an architect. I practiced for 15 years on a whole range of different projects before I caught the automation bug, and then was an automation specialist for 10 years. I'm also the founder of ArchSmarter, so through my website, I write about productivity and provide training and all sorts of things to help architects and engineers work smarter, not harder.

JASMINE LEE: So to give you some context on what we do, we're an engineering and consulting group that provides services of all types, as you see listed here. We provide a very diverse set of services, so what our engineering and planning group does might be very different from what our consulting and advisory services do. The reason I'm mentioning this is that because of that diversity of services, the data that we collect across our entire company is very diverse as well.

IMEG is about 28,000, and we're always growing, with about 90-plus locations. With that come different design standards and regional codes. Many of these new offices come from M&As, so we have offices transitioning from different historical standards. And as you know, that's one of our greatest strengths, but it's also one of our greatest challenges when we work with our data.

So looking at our agenda and what to expect: first, we'll look at some key questions, what we call our data prompts, that prompted our data journey, as well as the five W's that define the scope and methods for that journey. After that, we'll go through an overview of our IMEG data efforts. This really relays our why and our purpose statement for why we started this journey to collect data.

And after that, we'll look through the very granular data workflow that we implemented to collect our data. And with that, Michael will discuss the AEC Data Model and how we leverage that for automating data extraction in our Revit models. And then lastly, we'll look at what we call our data inspirations, which are tools that we created by leveraging the data we collected. And our goal is that with these data inspirations, we can positively impact our data stakeholders.

So carrying on, these are our data prompts. We would say that data prompts define what data matters to you. These are a couple of our data prompts that you see here, typical questions that we would get from our designers, drafters, and our marketing team. And these questions are truly what drove us to start our data journey.

If we take a closer look at these data prompts, the typical questions from the previous slide may entail questions such as, how many outpatient care buildings have we designed? This is the workflow someone with this question might go through to answer it. Maybe they'll check projects under a health care project type. But after doing so, they'll find out that half of the fields for our secondary building type are actually empty.

Or maybe the question, how can I determine a project fee? Maybe what someone could do is find a project that matches those project scopes. They could look for projects with similar systems, square footage, clients, and building types. And they'll find that they're not able to filter through this data because it's not in one location or in an accessible manner.

Next, we might have a question like, what is the name of the VAV box family? In this case, we'll be excited because we have an [INAUDIBLE] standard convention that we use. But we'll also realize that over the years, our standards have changed. And because of those M&As we described before, the historical standard for those offices may be different from our current IMEG standard, and they're still transitioning over.

After that, we may have this question: what is the typical square footage of a mechanical room? Well, this really depends on the system and the building type. So maybe we'll look into what kind of data we have on typical square footage for those parameters. The main source we would look to for that is our design drawings. But the issue is that all this data is siloed in different PDFs, and we don't have a single location that gives us all the standardized data.

So to really start a data journey, we had to ask ourselves, what are the five W's? What is the who, what, when, where, and why of our data? The first is, why is this data important? This why is really essential to knowing your end goal and the means by which you want to achieve it.

Next is, who is this data important to? The stakeholders who are going to use this data are key to knowing what data you want to collect. They may ask why you're collecting it and may not know the end goal. So knowing your who, and effectively explaining why this data is important to collect and how it will incentivize them and make their work more efficient, is very important.

Next is, what is the data? Different data types have different data extraction methods, and the end goal is to have a structured, organized data set that our users can leverage. So defining what data we're looking at helps us find the right data to collect and the best way to process it.

Next is our when. This is broken up into a couple of different pieces, the first being data collection timing. When do we want to collect data? What we say on our data team is that we really should have been collecting data 20 years ago. So collecting data earlier is always better. But something key to know is that data is very messy. Most of the time, data is not structured. So having a structure in place to collect well-organized data is what's going to really help you in the long run.

Next is our analysis timing. When do we analyze our data, and when do we start using it? Obviously, frequently is better. Using data whenever you can is very important, and it's really what makes a data-centric company. But using data that hasn't been validated, that hasn't been vetted, is not going to be helpful. So making sure your data is validated is very important.

And lastly, sharing timing. When do we start sharing this data? We have a data governance team at IMEG made up of people from different teams: someone from technical operations, our marketing team, and me from our innovation team. We've learned that each of these teams has its own data workflow and its own process for making structured data that works best for it. But sharing the ways we collect data, and making sure we're working on parallel paths rather than adjacent paths, is ideal for how we handle our data. So consistently sharing those workflows is important, provided it's applicable.

And lastly, where is this data? People and their experiences and decisions make the best predictions, I would say. So one of those sources I'd like to call out is subject knowledge. Aside from that, we have our file types: our Revit, AutoCAD, and PDF files. And lastly, our cloud platforms, which for us internally are Salesforce, Vantagepoint, and Outlaw. Determining where the data is located really identifies the processing that data type needs, and also how we can use it.

So next, we'll jump into our IMEG data overview, starting with our IMEG why. By going through these five W's, IMEG really came to a conclusion on why data is important to us. And it really came down to answering these simple questions with not-so-simple answers, or answers that were very hard to find.

So we wanted to highlight the purpose that really came from our who: our designers, our drafters, our marketing team. They all share this common experience of frustration over unanswered project and design questions due to inaccessible information. So our goal is to collect this data and make it accessible to these users.

To give you a further breakdown of our five W's: our who, like I mentioned, is entry-level engineers, experts, project managers, product owners, operations teams, anyone who's interested in using this data. Our what is specifically the data we're looking at. It's general project information, which entails project location, project client, and project total building square footage. So this is very general information about each specific project.

Next, we have our room information, which entails typical room types, typical square footage per room, and typical quantities of equipment per room. And lastly, we have equipment quantities. So this relates to how many lighting fixtures and how many plumbing fixtures we have on a per-project basis.

Our when is as soon as possible. As we mentioned, now is better than later. And we realized that this data lies in the existing files and platforms that we use internally, so why not extract it? And lastly, our where. This data really lies in the PDF drawings that we use as our deliverable for our clients, and in our Revit models.

So here's a high-level architecture diagram. With those five W's defined, we can jump into how this data is going to be collected and how it's going to impact our firm. You'll see here on the left-hand side, we have our manual user input and our automated data extraction. The goal is a hybrid manual and automated data extraction process, where data is validated and then added to a project database. That database can then be leveraged for predictions by tools such as our Power BI dashboards or our [INAUDIBLE] chat bot, which in turn benefit those users who have been asking these questions of our data, making for a very structured data set.

Next, we have our data diagram. This shows the specifics of how we started our data journey and continue to collect data. It's divided into two parts: our data catalog, which is the information we have about our data and our resources; and our data extraction, which is split into our manual and automated data extraction processes as well as our data validation process.

So jumping into our data catalog, we talked about those five W's, and they show up here again. We have our metadata and our data types under our data catalog, which tell us information such as project completion status, services provided, market type, and project type. And with these different data types come location and quality. The data types could entail Revit models, PDFs, and JSON files, and those different files have different locations and quality.

And what I mean by this: this is definitely our biggest challenge when finding these data resources. The location of these data files was always changing depending on the cloud service we were using, the year of the project, and what the typical standard of saving was at the time. It also depends on the client. Does the client want their own model and want us to make a copy, so that we don't have access to the original?

And with the quality: with these changing standards, the type of data we have in those models also varies, due to different family types and the different naming conventions we use. So how we determine this quality is, is it up to the IMEG standard, where we can keep using the same process we've implemented to collect data?

Next, for our data extraction piece, looking at manual data extraction: these are processes we use to manually extract data from PDFs, Revit models, and procedural processes. With our Revit files, a key way we collect data is using Revit native functions. A lot of the snips you'll see moving forward are from a data manual that I created, which entails the step-by-step process for extracting different data parameters, what applications to use, and what settings to use to make sure all the data we're collecting is very structured.

So this is an example of a room schedule template I created. The user can import this template, and in return they'll get a schedule that lists all the room names and the counts for those room names. Now, what we found out is that these room names have different naming conventions. An office may not always be listed as an office. It could be a workplace, a work room, a business center. So implementing this table of naming conventions is what helps us make more structured and accurate data, bringing any repetitive data that sits under different naming conventions all under the same count.

Next, we have our PDFs. With our PDFs, a typical way we had users extracting data is looking at equipment schedules. So down here, you'll see a plumbing fixture schedule, and with this, you'll see the different tag conventions for the different sinks and toilets. What's important to note here is that nomenclature is very different for people depending on their background.

So as a mechanical engineer by background, I can tell what a bathroom is based on the plumbing fixtures; there's a sink and a toilet there. But for someone who's not used to seeing those symbols, checking the floor plan and actively looking for them, it's very hard to count. So this is another avenue for using this tag system so that people can manually count.

Lastly, we have our procedural processes. One of the processes we're implementing currently is the KPI update process. At IMEG, we require project managers to update KPIs quarterly, and we realized this was one of the avenues where we could have more data implemented. The idea is, you're not creating a whole new workflow, but improving existing workflows to collect data. And these are the fields we now require in our new KPI update process.

Next, we'll look at our manual workflow diagram. These are all the different avenues where we have users inputting data as of now, the first being our project number takeout. When a project is opened, they'll put in a list of required data inputs. And with our KPI process, they'll quarterly update those active projects in a personal dashboard.

And next, we have our quality control. Every project that IMEG releases goes through a quality control process, and with that, the designer uploads the drawings to a specified location. So we can find the data we're looking for because it's in a consistent location. And lastly, with our project closeout, this is project data that couldn't be identified at the beginning of the project. Data such as construction completion date and construction budget are required at project closeout.

So next, we have our automated data extraction. It's similar to the manual extraction process in that we're extracting from PDFs and Revit models, but with our PDFs, we're using a lot of third-party takeoff software, and with our Revit models, we're using Revit plug-ins and the AEC Data Model, which Michael will talk about later.

So this is an example of a Revit plug-in we use called One Click. Here on the bottom, you'll see an example of the Excel output you get from One Click. The blue highlighted fields are all fire dampers or another type of damper. And just as we mentioned there are different room naming conventions, you'll see there are different naming conventions for different components in our models.

And so you'll see here in the top paragraph, we have IMEG FD, IMEG Balancing Damper, IMEG Fire Damper. These are all dampers, but they're all different types. So you see this workflow of associating and mapping specific instances and quantities to one single quantity: the dampers in this project.

Up next, for our PDFs, what we found really important was defining workflows. We used a lot of takeoff software, including ML Estimation and Togal.AI. We realized these tools had different capabilities, but their outputs were not exactly the structured data we wanted. We saw functions such as room name list creation or room square footage, but the outputs weren't associated with each other; they were separate lists. So what we had to do was define workflows to extract this data in the manner we wanted. These tools are building blocks for the type of data structure we wanted to create.

And lastly, we have our data validation piece. We say trash in, trash out. Without validated data, can we really say this data is useful? Keeping that in mind, we really wanted to verify our data to make sure it was accurate. To do that, we compared data from our manual and automated extractions, as well as comparing the data from the PDFs and the Revit models. That's because, as we mentioned, models have different quality, and PDFs have different quality as well. Determining whether a data issue came from the resource itself or from the extraction method was really important to us.

And lastly, another process we used was normalizing this data. After comparing the data, we'd take it out and look more in depth at any outliers we saw, checking whether an outlier is, again, due to the resource itself or due to the process we're extracting the data with.

MICHAEL KILKELLY: So Jasmine had mentioned our efforts at automating data extraction, and we've taken a few approaches to this. That's included a custom web API that we had a third party create for us, which was very expensive. And then we had an add-in that required each model to be open in Revit to extract the data, which was very slow. So one area that we're really excited about is using Autodesk's AEC Data Model API.

And if you're not familiar with the API, the AEC Data Model is a cloud-based API that reads data directly from models that reside in Autodesk Construction Cloud. The API uses GraphQL as the query language to make requests against the model. Now, if you're not familiar with GraphQL, let me tell you a little bit about it. A key feature of the data model is its use of GraphQL, a query language for web-based APIs that allows you to make a wide range of queries through a single endpoint, or URL.

Now, other API technologies like REST require multiple endpoints, multiple URLs, for all the various queries you want to make. So what you end up with is a whole bunch of URLs for all the requests you want, and you need to predetermine which requests you want beforehand as you're building the API. So it's not terribly flexible. GraphQL, on the other hand, is much, much more flexible. As the developer, I can use the query language to ask whatever questions and request whatever data I want from my models. You just have to know how to write the query using GraphQL syntax, and we'll take a look at that shortly.
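
To make that single-endpoint idea concrete, here is a minimal sketch of a GraphQL request to the AEC Data Model. The query shape follows the public schema that the Data Model Explorer exposes, so treat the exact field names as illustrative: you ask for only the fields you want, and every request goes to the same URL.

    # A hedged sketch: list the hubs this account can access,
    # requesting only the id and name fields.
    query GetHubs {
      hubs {
        results {
          id
          name
        }
      }
    }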

So today I'm going to demonstrate three ways you can use the AEC data model. The first is using Autodesk's Data Model Explorer, which is really a developer tool for exploring the API. Then I'm going to show you how to use Postman, which is a low-code software that makes it easy to interact with web-based APIs. And finally, I'm going to show you a custom data reader application that we built at IMEG that makes it a lot easier for users to interact with the API and get the data directly from their models.

So before we get into the applications, I want to talk a bit about how the API is organized, because it doesn't necessarily use the nomenclature we're used to with Revit. We look at the schema, and it's like, what is an element group? Well, if we break down the schema, an element group is really a model. We'll see that throughout the API; we just have to associate element group with model.

Now, inside of that element group, we have elements. And those, as you can imagine, are instances of families and system families. We have [INAUDIBLE] views. We have annotation. All those elements that go into our model. We also have what are called reference properties, and these are links that associate an element to a property. A property could be something like the width of a window, the length of a wall, or the elevation of our levels.

Properties also have a definition, which really consists of the units for that particular property. As we're using our GraphQL syntax, we're going to see references to element groups, elements, and reference properties, so it's helpful to know what those are before we get too far into it. Otherwise, it can be very confusing based on what those things are called.
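
Putting that vocabulary together, a query roughly follows the hierarchy just described: element groups (models) contain elements, elements carry properties, and each property has a definition with its units. A hedged sketch of that nesting, with field names modeled on the published schema:

    # Illustrative only; verify field names against the current AEC Data Model schema.
    query GetElementProperties($elementGroupId: ID!) {
      elementsByElementGroup(elementGroupId: $elementGroupId) {
        results {
          id
          name
          properties {
            results {
              name              # e.g. "Width" or "Elevation"
              value
              definition {
                units { name }  # the units for this property
              }
            }
          }
        }
      }
    }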

Now, the easiest but probably the least user-friendly way to interact with the AEC Data Model is Autodesk's AEC Data Model Explorer. And I say it's the least friendly because it's not a tool that you necessarily want to give to end users: you actually have to know GraphQL, and it returns all the output in JSON format. Again, it's not super user-friendly, but it's a great tool for us as developers as we're exploring the data model and getting comfortable with the way it works.

And in fact, you can use this URL right here and go try it out yourself. Now I'm going to give you a little tour of it and then demonstrate how it works. When you go to the Data Model Explorer, you're going to see a screen that looks kind of like this. If we break down the pieces, you can see there are different zones within the screen.

So if we look at number 1 in the upper right-hand corner, this is where we sign in to our Autodesk account. The Data Model Explorer is only going to show you models that you actually have access to on ACC. Now, on the left-hand side, we have these tabs all the way across with different requests we can make. What's nice with the Data Model Explorer is that these are predefined. We can also create our own requests, but it gives you a good starting point for exploring the API.

Now, if I click on this Get hubs tab, here is the GraphQL code specifically for getting our hubs. I want to know which hubs I have access to; I can run this Get hubs request, and it will output those for me. I also have this section down at the bottom where I can input variables, and you'll see when I demonstrate that we're getting a lot of variables from our responses. For example, when I get my hubs, the response gives me an ID for each hub. Then when I want to get my projects, I need to know the hub ID, and so on. So we're going to do a lot of cutting and pasting of those IDs.

Now when I'm ready and I want to give it a run, I just press this Play button. This red Play button right here. And that's going to send the request using the GraphQL format to this specific GraphQL endpoint or URL for the data model. And then I'm going to see my response in JSON format here in this window number 6 on the right-hand side.

So let's take a look at that in real time, and I'm just going to run through this scenario. I'm here in the Data Model Explorer, and I'm signed in with my account. I'm going to choose the Get hubs request and click Play. In the window on the right, you're going to see the JSON specifically for the hubs I have access to. I'm going to select the ID for the hub I want, click on the Get projects tab, paste the hub ID at the bottom, and run Get projects.

So this is going to show me all the projects I have access to in that hub, and you can see there's a whole bunch of projects in here. I'm going to select one of the projects and copy its project ID. I switch to our Get element groups by project. Element groups are our files. I paste in my project ID and run the request. Now this returns all of the files that are in that particular project.
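
The chain of requests looks roughly like the sketch below; each query takes the ID copied from the previous response as a variable. The query names mirror the Explorer's predefined tabs and are illustrative:

    # Step 1: projects in a hub (hub ID copied from the Get hubs response).
    query GetProjects($hubId: ID!) {
      projects(hubId: $hubId) {
        results { id name }
      }
    }

    # Step 2: element groups (files/models) in a project.
    query GetElementGroupsByProject($projectId: ID!) {
      elementGroupsByProject(projectId: $projectId) {
        results { id name }
      }
    }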

And there's a bunch of them in here. So again, I select the ID of the file that I want and copy it to my clipboard. I switch to the next tab, which is Get elements from category, and paste in that file ID, the element group ID. I also have a property filter here. So what this request is going to do is get me all of the walls in that particular Revit file.

And you can see it's showing me all of the individual instances, and all of the properties for those instances as well. So it's a lot of data being returned. Again, it's in that JSON format, so it's not super user-friendly. But I'm seeing a whole bunch of data coming back, so I know I'm getting something. I just may not be able to make a whole lot of sense of it right here.
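
The property filter in that request is the interesting part. As a sketch (the filter string follows the pattern in Autodesk's published examples, so verify the exact syntax against the current docs), it looks something like this:

    # Get all wall elements in one model, with their properties.
    query GetWalls($elementGroupId: ID!) {
      elementsByElementGroup(
        elementGroupId: $elementGroupId
        filter: { query: "'property.name.category'==Walls" }
      ) {
        results {
          id
          name
          properties {
            results { name value }
          }
        }
      }
    }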

So one thing to note with the Data Model Explorer is that there are some constraints: there are limits to how much data you can get back in each request. You can't write a query that says, give me every instance of every element in every file in all my projects in my entire hub. I've tried this, and I ended up getting an error message like this that says your query point value exceeded the quota. And it's like, nope, not going to do it.

And so there is a system that determines the point value based on the number of fields in the query. The way I think of this is that the more complex the query, the more points it's going to take. There's a limit of a thousand points per query, or 6,000 points per minute. It's a little bit complicated how those points get determined; you can use the link right here at the bottom if you want to read more about the rate limit and how the points are calculated.

But know that if you're trying to get a whole lot of data, you might get an error like this, and that's telling you, hey, you've got to simplify this. So you do want to step through it in reasonable bites. If the data is like a meal, you want to take reasonable bites and not try to stuff the entire dinner down your throat. It's something to keep in mind as you're working with this particular API.
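
One way to take those reasonable bites is cursor-based pagination, which the API supports: a response can include a cursor that you pass back to fetch the next page. A hedged sketch, assuming the pagination fields behave as documented:

    # Fetch walls one page at a time; repeat with the returned cursor
    # until it comes back null.
    query GetWallsPaged($elementGroupId: ID!, $cursor: String) {
      elementsByElementGroup(
        elementGroupId: $elementGroupId
        filter: { query: "'property.name.category'==Walls" }
        pagination: { cursor: $cursor }
      ) {
        pagination { cursor }
        results { id name }
      }
    }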

Now, we knew that we wanted to allow our users to access their data and to be able to ask questions of their models. And we didn't want to push them toward the Data Model Explorer, because that just wouldn't go well. Most users don't want to have to learn a query language just to do that. And we're talking about engineers here, so if anybody would want to do it, it would be them, but I think there would be a lot of resistance. Also, reading the JSON format in that window is really difficult.

So as an interim step, we looked at using a piece of software called Postman that allows us to access the Data Model API outside of the Data Model Explorer. And this would let us better understand the API, and we'd be able to leverage some nice features in Postman so we could make the leap into a custom tool that would be a lot more user-friendly.

Postman is free software that you can download, and it looks a little bit like this, somewhat similar to the Data Model Explorer. It allows you, again, to interact with a web API. On the left-hand side, I have a collection of requests, and this collection can refer to certain variables that I can set. Where this is different from the Data Model Explorer is that I have to do a lot less cutting and pasting; I can set variables as I make my requests.

So let me go ahead and show you a demo of how this works. Again, it has a lot of really nice features. I'm using my Get hubs request, so I click Send. And again, I'm using the GraphQL that I got from the Data Model Explorer. I see my results down here at the bottom. I select the ID, and then I can set a variable in my environment to that value.

So now I can go from Get hubs to Get projects. That's going to use the hub ID that I've set as a variable, and I can just send that particular request. Here are all of my projects down at the bottom. I can go through and select the project that I want, and I do that same operation: I select the ID, right-click, and set that value to a particular variable.

In this case, I set it to my project ID. Now I can go to Get files, or Get element groups; I changed the name because it made more sense to me. I can run that, and it's going to fetch the actual files based on the project ID. So I get, again, that list of all of the files in JSON format down here. I can choose the file that I want, right-click, and save that to my file ID variable. And now I can go to a request to Get element instances from category.

If I look at the GraphQL, I'm going to pass in the actual file ID, and it's going to use a property filter to get all of the walls. It's also going to get all of the wall types. So it's not going to give me the instances; it's going to get me the types. Now I can see in my output all of that JSON data for my wall types. And I have some additional requests in here.
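
Filtering for types rather than instances is a variation on the same filter string. The schema distinguishes type and instance context with a property; the exact property name below is an assumption drawn from Autodesk's samples, so check the documentation:

    # "Element Context"==Type is assumed from published samples.
    query GetWallTypes($elementGroupId: ID!) {
      elementsByElementGroup(
        elementGroupId: $elementGroupId
        filter: {
          query: "'property.name.category'==Walls and 'property.name.Element Context'==Type"
        }
      ) {
        results { id name }
      }
    }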

And I can also save that particular JSON output. I can save it, and I see it here in my collection. I can save the JSON output to an external file as well. So if I wanted to share it with somebody, I could do that very easily, and they would get that JSON data.

Now, one thing to note is that because Postman is not an Autodesk product, I do have to authenticate with my Autodesk account. The way Postman does that is with three-legged authentication. I have to use the client ID and client secret that I set when I created my actual application, my app. When I do that, I can authenticate the app and then authenticate with my Autodesk account. So I have to do that as a first step.

Now, one of the things I really like about Postman, and the reason we did this as an interim step, is that I can take these requests and export them to code. I knew we wanted to build a more user-friendly solution, so I can take each of these Postman requests and convert them into C# code. I have to change some variables here and there, but I have all of that code ready for me.

That leads us into our next option, which is a custom desktop app we built called the IMEG Data Explorer. We had originally built this for the API that we had hired a company to develop. So we built this user interface, and what was really nice, since these are all web APIs, is that once we heard about the AEC Data Model API, it was really easy to switch where the data was coming from. We could switch out all those data requests to use the AEC Data Model, so not a lot of rework was required.

But we wanted something that looked nice, that a user could use without having to think about GraphQL or web APIs or any of that stuff. So again, it's a desktop app, and it provides a viewer for the JSON responses. It also parses that JSON, so it provides element counts, and we're working on a version that will provide volumes. The user can then export those counts to a TXT file or a CSV file.

So let's take a look at a demo of this. It runs as a desktop application; we are working on a version that would be a web app. I can select my hub. Again, I'm authenticated to my account. From the hub, I select the project that I want, and then it gives me a list of all of my files. We were doing this in the Data Model Explorer, then we were doing it in Postman, and now we're doing it here.

So I select the file, and now I can choose the category that I want to look at for my category filter. Then it outputs all of the instances here in this data grid. I can click on one of those items in the data grid, and it will show me all of the properties. Again, we're still working with JSON under the hood, but we're providing a user interface so the user doesn't have to look through that JSON file.

We can also get counts for all of those family types. I can see we have 31 instances of Sink Vanity Round, 21 of Toilet Domestic, and I can look at all the individual instances here. So if I'm looking for counts of elements or families inside a model, I can do this very quickly at a glance, and then I can export those counts to either a CSV file or a TXT file. And again, I can change the file that I want, I can change the project, and I can do this for a whole bunch of different models.

And conceivably, though we haven't implemented this, I could say, OK, give me all of the Revit files in this particular project. I may run into some issues with the amount of data coming back, but I can work around that. So I can ask questions more easily, and the user has an easy interface to do that.

JASMINE LEE: So with the AEC Data Model and these various ways we've extracted data, what have we really done with this data? What we call our data inspirations are the different tools we've developed, such as the different Power BI dashboards I'm about to show you. They're really the way we've been leveraging the data that we've been collecting.

So our first example here is the EcoMeter. This is a sustainability tool created by our sustainability and building performance team. It's a high-level carbon calculator: you can set the location and the different structural materials you're using in a building. The great part is that it gives a very high-level overview for clients. But what we plan to do with the data we collect is backfill dashboards like this that have already been created and make them more detailed.

So what can we do with our data? We can start getting carbon counts with IMEG-associated building quantities. What does a typical IMEG health care building have in terms of carbon? What kinds of system types influence our carbon counts? You're getting more specific data, with more specific results, in these dashboards.

Next, we have our project information dashboard. This is a Power BI that's in the works, but the idea is that the project data we have will go directly into this dashboard, which will return similar projects as well as fee calculations. You'll see here on the left-hand side is a map. The user can locate the region they want to look at, identify the market type their building is in, and then narrow down the scope even further by the associated building square footage.

With this, we can show typical numbers of [INAUDIBLE] based on the building type, and we can also give feedback on similar projects that users can refer back to. And our next step, our fee tool, which would be another layer on this dashboard, would allow the user to set these different building parameters and get an associated fee.

Next, we have our chat bot, MEG. You'll see here that MEG is able to answer a lot of high-level questions and relay back resources. Something she's having trouble with right now is project information. So our goal is that with the data we collect, we can give MEG a more accurate data set to refer to, so she'll be able to answer project-based questions. With a chat bot, you need an LLM, as people know, but also your own data set. What we're doing here is creating this integral data set focused on IMEG's data and our experiential knowledge, and putting that into the form of a chat bot for users to use.

And lastly, our takeaways. Something we really want you to do after this session is determine your five W's. We mentioned that the why is very important for setting a goal and a method for your journey, but this why will really differ for everyone. As for data structure, collecting data and information and creating a good data extraction process is the foundation of collecting data.

Something Michael always mentions is that our why will never change, and the structure we set up will never change, even if the technology does. So maybe, as time goes on, we'll find better technology that extracts data from PDFs. But this mapping of what data components we want to look at and what data is important to us will never change. So defining that data structure is very important. And lastly, creating these data-leveraging platforms is the best way for your who to find their why. Finding platforms that make data accessible is really essential for impact.

Call Tracking
We use Call Tracking to provide customized phone numbers for our campaigns. This gives you faster access to our agents and helps us more accurately evaluate our performance. We may collect data about your behavior on our sites based on the phone number provided. Call Tracking Privacy Policy
Wunderkind
We use Wunderkind to deploy digital advertising on sites supported by Wunderkind. Ads are based on both Wunderkind data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Wunderkind has collected from you. We use the data that we provide to Wunderkind to better customize your digital advertising experience and present you with more relevant ads. Wunderkind Privacy Policy
ADC Media
We use ADC Media to deploy digital advertising on sites supported by ADC Media. Ads are based on both ADC Media data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that ADC Media has collected from you. We use the data that we provide to ADC Media to better customize your digital advertising experience and present you with more relevant ads. ADC Media Privacy Policy
AgrantSEM
We use AgrantSEM to deploy digital advertising on sites supported by AgrantSEM. Ads are based on both AgrantSEM data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that AgrantSEM has collected from you. We use the data that we provide to AgrantSEM to better customize your digital advertising experience and present you with more relevant ads. AgrantSEM Privacy Policy
Bidtellect
We use Bidtellect to deploy digital advertising on sites supported by Bidtellect. Ads are based on both Bidtellect data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bidtellect has collected from you. We use the data that we provide to Bidtellect to better customize your digital advertising experience and present you with more relevant ads. Bidtellect Privacy Policy
Bing
We use Bing to deploy digital advertising on sites supported by Bing. Ads are based on both Bing data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bing has collected from you. We use the data that we provide to Bing to better customize your digital advertising experience and present you with more relevant ads. Bing Privacy Policy
G2Crowd
We use G2Crowd to deploy digital advertising on sites supported by G2Crowd. Ads are based on both G2Crowd data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that G2Crowd has collected from you. We use the data that we provide to G2Crowd to better customize your digital advertising experience and present you with more relevant ads. G2Crowd Privacy Policy
NMPI Display
We use NMPI Display to deploy digital advertising on sites supported by NMPI Display. Ads are based on both NMPI Display data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that NMPI Display has collected from you. We use the data that we provide to NMPI Display to better customize your digital advertising experience and present you with more relevant ads. NMPI Display Privacy Policy
VK
We use VK to deploy digital advertising on sites supported by VK. Ads are based on both VK data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that VK has collected from you. We use the data that we provide to VK to better customize your digital advertising experience and present you with more relevant ads. VK Privacy Policy
Adobe Target
We use Adobe Target to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Adobe Target Privacy Policy
Google Analytics (Advertising)
We use Google Analytics (Advertising) to deploy digital advertising on sites supported by Google Analytics (Advertising). Ads are based on both Google Analytics (Advertising) data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Google Analytics (Advertising) has collected from you. We use the data that we provide to Google Analytics (Advertising) to better customize your digital advertising experience and present you with more relevant ads. Google Analytics (Advertising) Privacy Policy
Trendkite
We use Trendkite to deploy digital advertising on sites supported by Trendkite. Ads are based on both Trendkite data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Trendkite has collected from you. We use the data that we provide to Trendkite to better customize your digital advertising experience and present you with more relevant ads. Trendkite Privacy Policy
Hotjar
We use Hotjar to deploy digital advertising on sites supported by Hotjar. Ads are based on both Hotjar data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Hotjar has collected from you. We use the data that we provide to Hotjar to better customize your digital advertising experience and present you with more relevant ads. Hotjar Privacy Policy
6 Sense
We use 6 Sense to deploy digital advertising on sites supported by 6 Sense. Ads are based on both 6 Sense data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that 6 Sense has collected from you. We use the data that we provide to 6 Sense to better customize your digital advertising experience and present you with more relevant ads. 6 Sense Privacy Policy
Terminus
We use Terminus to deploy digital advertising on sites supported by Terminus. Ads are based on both Terminus data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Terminus has collected from you. We use the data that we provide to Terminus to better customize your digital advertising experience and present you with more relevant ads. Terminus Privacy Policy
StackAdapt
We use StackAdapt to deploy digital advertising on sites supported by StackAdapt. Ads are based on both StackAdapt data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that StackAdapt has collected from you. We use the data that we provide to StackAdapt to better customize your digital advertising experience and present you with more relevant ads. StackAdapt Privacy Policy
The Trade Desk
We use The Trade Desk to deploy digital advertising on sites supported by The Trade Desk. Ads are based on both The Trade Desk data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that The Trade Desk has collected from you. We use the data that we provide to The Trade Desk to better customize your digital advertising experience and present you with more relevant ads. The Trade Desk Privacy Policy
RollWorks
We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

Are you sure you want a less customized experience?

We can access your data only if you select "yes" for the categories on the previous screen. This lets us tailor our marketing so that it's more relevant for you. You can change your settings at any time by visiting our privacy statement

Your experience. Your choice.

We care about your privacy. The data we collect helps us understand how you use our products, what information you might be interested in, and what we can improve to make your engagement with Autodesk more rewarding.

May we collect and use your data to tailor your experience?

Explore the benefits of a customized experience by managing your privacy settings for this site or visit our Privacy Statement to learn more about your options.