
Leveraging Analytics and Data Pipelines with Toric


Description

Join this session to learn how to utilize Toric, the construction industry's trusted no-code platform for data movement and pipelines. Discover how to automate ETL processes, migrate data, and execute full table backups without coding. Using Toric's Autodesk Construction Cloud connector, you can access project data and extract insights in real-time. We'll also delve into practical case studies from Gamuda and Commodore, illustrating how they've optimized data transfer across APIs, applications, and databases with Toric. By the end, you'll be equipped with concrete techniques to enhance your workflows and ensure efficient data integration.

Key Learnings

  • Extract data from Autodesk products (e.g., Autodesk Construction Cloud, Revit, and BuildingConnected).
  • Easily transform data sources and create data pipelines without writing any code.
  • Automate workflows to load data into a destination of choice such as a data lake or warehouse.

Speakers

  • Chad Braun
    Having been familiar with construction from an early age, Chad graduated from Colorado State University with a degree in Construction Management. After college, he joined a nationwide specialty contractor, where he eventually worked his way up to a Project Manager / Estimator position. It didn't take long for Chad to recognize the shortcomings of technology in construction, which eventually led him to make the jump to Autodesk, where he spent 5 years as a Technical Solutions Executive helping customers on their construction journey, specializing in multiple legacy tools and helping to optimize new ones. Chad recently made the jump to Toric after observing the prevalent lack of standardized data and analytics that most of his customers were struggling with.
  • Austin Wolff
    Austin is a Data Engineer with 3 years of experience and is a certified AWS Cloud Practitioner.
Transcript

CHAD BRAUN: Hi. Thanks for joining us for our presentation here today about Leveraging Analytics and Data within Toric. My name is Chad Braun. We'll do some short introductions, of course, after we do some housekeeping.

First of which, of course, is the safe harbor statement. We are going to be showing things that are kind of future-forward, so just make sure we understand that this is not to be shared with third parties outside of today's presentation.

So what we're going to talk about today, or really cover, is in general what data pipelining, data transformations, and data strategies we're seeing implemented on the construction technology front. Obviously, construction is an incredibly varied industry, a very fragmented industry when it comes to technology. We're going to talk about use cases for the what, the why, the where, and the how. Data is starting to transform our construction industry as we know it and to allow us to leverage analytics and make data-driven decisions.

So just a little bit about us here. My name is Chad Braun. I'm a solutions engineer with Toric. I have a construction background, so I was a project manager and estimator for a specialty contractor before spending six years at Autodesk helping them build out the Autodesk Construction Cloud.

I've been with Toric for about six months at this point helping customers like yourself understand what it is data should be doing for them, especially in regards to the construction-specific sources, be it the project management tools, the ERP tools, the scheduling tools. All of those data silos and really how to best utilize or break down the walls in between those disparate technologies. Austin?

AUSTIN WOLFF: Yeah. Hi, guys. My name is Austin Wolff. I'm a data engineer here at Toric, and I have a background in data science and data engineering. My main role here at Toric is to help build data pipelines for our clients. So I'm happy to help. Pass it back to you, Chad.

CHAD BRAUN: Great. Thanks. So on the docket today, we're going to really start at the beginning, the impetus for construction problems. What are we seeing with data? Where is it falling to pieces? Where could we be doing a better job? How did we get to now, especially?

Obviously, construction has a data problem, much like every other industry. But construction is, how do I say, lacking when it comes to technology; I like to say it's probably about five years behind the other industries. So it's really a conversation about what we're starting to see happen here: a renaissance of thought when we're thinking about our construction-specific data.

We'll talk about what Toric is, in particular how it relates to Autodesk. Why did it come to be? Why did we choose construction? And then we'll actually jump into a product demonstration.

And we're going to cover a plethora of different tool sets within Toric that are all going to-- or all being implemented to help you improve your tech stack, really, with Toric acting behind the scenes as a catch-all for that construction data, be it conversing about data pipelines or full table backups, even utilizing a warehouse, or Toric's built-in visualizations. And then, of course, actually syncing from source to source, if that's an option for your teams, or if that's something that would help enable your teams to make better data-driven decisions. And then last but not least, we'll finish up with some actual customer use cases for how people are utilizing the tool itself.

So when we're talking about the problem, or problems with construction data, as we all know, construction is complicated. Even compared to other verticals in other industries, we build very complex projects. We have very complex teams. We have very complex parties on all of these construction projects.

Obviously, in no particular order, subcontractors, when you get into the super subcontractors, the general contractors, the architects, the engineers, and that's not even including the consultants. All of those teams have multiple sources of data. All of these teams have or employ different technologies, even within one firm, especially when we're talking about something like a general contractor employing anywhere from five to 10 technologies on any given project. And of course, the projects themselves are complicated and diverse. We always joke that you could build the same building in the same spot twice and it would be a totally different project.

And then, of course in relation to that, construction being tech averse. This is starting to change. We are seeing a bit of a change in the thought process when it comes to technology. But certainly that's really only taking place over the last 10 years. Only now are we starting to see, really, a new young workforce start to implement or push for more and more technology, understanding the benefit that it poses.

As such, of course, construction, again, has been behind the curve when it comes to the actual data collection, and we're finally starting to see the data collection happen. We have our ERP, we have our project management tools, we have our CRMs, we have our safety suites. But in all of those toolsets, what we often see is that the data hits a proverbial wall. In between each of these data silos, the data kind of falls off. And so what Toric is really helping customers with and what we're starting to see happen is a proliferation of the data itself in that collecting and doing what we need to do with the data is becoming more and more important.

We have the ability to farm the data and gather the data, but we don't necessarily have a place to store it all or make data decisions based off, of course, all of these different kind of collection points, if you will, and all of these different sources. And then, of course, construction is varied. Everything changes all the time. In that, we see all of these different tools. Everybody wants to be a single source of truth when it comes to construction technology.

But to be perfectly frank about it, there's always a new tool. There's always something that's going to evolve beyond that, quote unquote, "common data environment," or single source of truth. So that's really where Toric comes into play: to act as a catch-all for that construction data as it changes, as it morphs and evolves, always being there to make sure that you have the data you need when you need it, regardless of the tool sets you have at your fingertips.

So what is Toric? It's important to start at the beginning. Our co-founders are Thiago and Dov.

They founded a company called Lagoa. And most importantly to this conversation, specific to Autodesk University, is that Lagoa was actually acquired by Autodesk. And both Thiago and Dov were really the men behind the Forge platform, what has now become Autodesk Platform Services.

So in doing so, they realized that there is an ample need for the ability to break down all of these data silos, the ability to bucket or put all of this construction-specific data together, be it in a warehouse or even just for visualization purposes. They saw that explicit need, or frankly the communication breakdown, not only between parties but between construction-specific software, which is where Toric really comes into play.

So when we're looking at Toric as a whole, on the left-hand side, you'll see that we have over 75 construction-specific sources. And that's not to exclude those that aren't construction-specific. Things like ERPs, CRMs like Salesforce, and a multitude of other tool sets.

The idea being that we can gather all of this data from all of these specific endpoints and automate an ingestion in real time, then transform and cleanse that data to get it analytics-ready, so that on the opposite end it's prepared, normalized, or cleansed for its eventual journey to a data warehouse, or even writing from application to application. Or we actually have an analytics and BI tool in-house within Toric itself. So there's a lot of benefits to this one-stop shop of data.

Specific to Autodesk, what we're talking about is things that you're seeing on your screen. Obviously, with recent acquisitions like PlanGrid, BuildingConnected, and Assemble, Toric does its best to keep up with these acquisitions and make sure that we're equipping your Autodesk-specific teams with all of the data from all of these disparate tool sets, most notably, of course, the Autodesk Construction Cloud, and included in that, of course, BIM 360. But also things like being able to plug into PlanGrid or BuildingConnected for, of course, your preconstruction bid leveling, bid scoping, and bid packaging data, in addition to actually being able to plug in directly to your Revit, your Navisworks, and even your Civil 3D.

So again, right off the bat, hopefully you're starting to see a benefit of being able to plug into even just these Autodesk-specific sources, harvest this data or farm this data, get it all in a bucket, and then send it to its eventual destination. So with that being said, now I'll pass it over to Austin for an actual product demonstration.

AUSTIN WOLFF: All right, guys. Today we're going to build a data pipeline from scratch and get your Autodesk data into whatever destination you want, whether that's a data table, data warehouse, data lake. So the first thing that I want to do is I want to make sure that Toric can connect to your Autodesk account.

So on the left-hand side, I'm going to go ahead and click on connectors here in the Toric software. And you can see here we have a lot of different connectors, different ways to access data from all of your different softwares. And then we also have a lot of databases and data lake connectors as well to push the data once you've gotten it from Autodesk to your final destination such as Aurora, AWS, Azure data lake, Databricks, Snowflake, so on and so forth.

We also have a lot of construction data connectors as well, as you can see on the screen here. Autodesk is the main one that we'll be covering today. But we have quite a few different connectors as well. We have connectors for spreadsheets, marketing and sales, finance, file storage, payments, also workforce planning as well.

But the one we are covering today is getting your data from Autodesk to your final destination. So I'm just going to do a Control-F, look for Autodesk, and click Set Up Connector. As you can see here, I've already set up a connector right here called New ACC Configuration. But all you would have to do is click Plus Create a Connector, give it a name, and then log in to your account. And that's all you have to do to make sure that Toric can start downloading your data from Autodesk.

Next is we are going to create what's called a project folder inside of Toric. Once we've connected to your Autodesk account, we have to have a location to then download that data into. So we're going to be downloading that data into our project folder. So I'll go ahead and click on Projects.

I'm going to create a project called ACC demo. You can name it whatever you want. Next, I'm going to be creating what's called a data flow.

A data flow is essentially just a data pipeline. All it is, is we're going to have the ability to download your data from ACC. We will then be able to transform it and then push it into your destination. So ETL: extract, transform, load it to your destination.
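For readers who think in code, a rough pandas sketch of that same extract-transform-load pattern is below. It only illustrates the concept; it is not Toric's engine or API, and the endpoint, token, column names, and connection string are all placeholders.

```python
# Illustrative ETL sketch (pandas + SQLAlchemy). Everything named here,
# from the endpoint and token to the columns and connection string, is a placeholder.
import pandas as pd
import requests
from sqlalchemy import create_engine

# Extract: pull forms data from a hypothetical REST endpoint
resp = requests.get(
    "https://example.com/api/forms",
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
forms = pd.DataFrame(resp.json())

# Transform: keep the columns we care about and normalize the date
forms = forms[["id", "assignee", "createdAt", "formTemplateName"]]
forms["createdAt"] = pd.to_datetime(forms["createdAt"])

# Load: append the cleaned rows to a warehouse table
engine = create_engine("postgresql://user:pass@host/warehouse")
forms.to_sql("acc_forms", engine, if_exists="append", index=False)
```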

So I'm going to click on New and I'm going to click on See All Connectors. I'm going to look for my Autodesk. Perfect.

I'm going to select my connector. So what I'm doing here is I'm selecting the connector I've already set up. That way, I can start downloading data from Autodesk. Give it a moment to load and to establish that connection with Autodesk.

Now as far as channel goes, this channel is essentially all the different ways we can download data from Autodesk's API. Let's look at forms just for a second. With forms, we can select your projects and we'll be able to select all of your form templates as well. You can select only form templates that have been updated after a specific date, or you can just download all form templates that you have in your account. So that's something that we can do.

Another channel that I'll be specifically downloading data from is the project channel. And the project channel allows us to download specific data based on what's called an endpoint. So just as an example, what we'll be working with today is forms data. So this is all of the forms data that is in each of your Autodesk projects.

Now here with project list, I can select certain projects to import or if I just leave empty, it'll automatically download all of them. So I'm going to be doing that. The other thing that I want to show you guys is incremental import.

So a lot of customers that we've talked to, they only want to download data that is either new or updated. So let's say that we're doing a daily intake of their data from Autodesk. They don't want to download all of their data every single day. It's inefficient and it also costs money.

So one thing that we can do is we can do incremental import, and this will make sure that every single day when we run the data import, it checks the previous day and only imports data that had been changed or is new from the last time we ran the automation. So it's a pretty cool feature. Saves you time, money. It's more efficient.
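Conceptually, incremental import is just a filter on a last-modified timestamp. A small illustrative sketch, where the updatedAt column name and the stored last-run time are assumptions:

```python
# Incremental-import sketch: keep only rows changed since the previous run.
# The updatedAt column and the saved last-run timestamp are assumptions.
from datetime import datetime, timedelta
import pandas as pd

def incremental_filter(df: pd.DataFrame, last_run: datetime) -> pd.DataFrame:
    """Return only the rows created or updated after the previous import."""
    updated = pd.to_datetime(df["updatedAt"])
    return df[updated > last_run]

# e.g. a daily automation would pass in yesterday's run time
yesterday = datetime.utcnow() - timedelta(days=1)
```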

I'm going to go ahead and click on Import Data. So what it's doing right now is it's first establishing that connection to Autodesk. It's getting ready to import the forms data from Autodesk. And then once it's done, it will create a blank data flow for us to start transforming that data.

But I speak with a lot of clients, and when they're pushing their data to a data lake, sometimes they want it transformed, sometimes they don't. If we're pushing the data to a data warehouse, to any of your tables in your data warehouse such as Azure SQL, we are transforming that data every single time, even if it's as simple as just defining the schema that we need to import that data into the data table.

As you can see here, it has finished importing that data into our workspace. Now this is going to be a lot of data, if you've never seen it before. So I know it can be like drinking from a fire hose. So not only am I going to attempt to go as slow as possible, but I'm only going to show the most important things when it comes to building your data pipeline. I'm not going to go over everything. That's not the purpose of this demonstration.

First thing I'm going to do is I'm just going to rename this to ACC to warehouse, just so I know what this data flow does. The next thing I'm going to do is I'm going to X this out to expand my screen. And as you can see here, you can see the data from the forms endpoint for one of your projects here.

So we have the assignee column, createdAt. We have custom values that are within an array, which I'll go over in a moment, form template, name, ID, notes, description, so on and so forth. The next thing I'm going to do is I'm going to go over here and click on this little tab called model root. If I click on that, it'll show me what's called a node. And this node is your source data.

So this is the source data for a project-- well, we're calling it sample project in our sample account, but Seaport Civic Center is the name of our sample project. But if you were to use this, you would start to see a list of all of your different projects that you've been able to import into Toric as well.

So I just click on this. As you can see here, I can take a look at our data. If I click out of it, there's no data to show. But if I click on this, OK, this is the data associated with this source node.

Let's get into transforming the actual data. This is the fun part. On the Overview little panel right here, I'm going to click on this double arrow to shrink it, expand my view.

You don't have to use a graph-based approach when you're transforming your data. I personally like to use a graph-based approach when I'm doing data engineering, mostly because I want to visually see where my data is going from step to step. It just helps me out visually. So this is why I like using a graph approach.

When I say graph approach, I mean every node you'll see is connected with essentially-- you just call it a line. The correct term is directed acyclic graph, but just call it a graph.

What I'm doing here is I am dragging the data from our source node over. I'm going to create a new node. We have a whole list of nodes here. I'm not going to go through each and every one, just a few of the highlights. But every single node does something different to transform your data.

So breakout allows you to just select certain columns to transform or keep. Filter, we can filter your data, such as give me all rows where createdAt is after 7/13/2023. You can find and replace.

So one example is, OK, find every row where created by is this long string and replace it with something else, group by, so on, and so forth. Again, I'm not going to go through all of them, but I'll go through a few specific ones here that might be relevant to showing you how else you can transform your data.

The first I'm going to do is data tagging. And what data tagging allows you to do is it allows you to create a new column and fill it with a specific field based on the rows of another column. So let's say, for example, I want to create a new column that has, just as an example, A, B, or C, based on what's in this column form template name.

In fact, actually, I'm going to call this form template one. You can give it a better name. I'm not going to give it a default value, but I will say let's fill it with the word "incident" when form template name is (and we have a dropdown here since our column is in list format) incident report.

So we're going to create this new column called form template one. And if the column-- and if the rows in form template name equal this result, it gets filled with whatever I've selected here, incident. I'm going to go ahead and do that for the rest of this. Call this timesheet, call this incident report.

And we are going to tag the value timesheet where form template name is equal to timesheet. And we'll call this daily report, where form template name is equal to daily report.

So I'm on this node right here. We can see the data for this node. And if I scroll over to the right, you'll see the new column that I've created.

So that's one example of data tagging. I'm sure, even as you're watching this, you can think of other ways to use it. It's very helpful for me when it comes to data engineering.
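For reference, a rough pandas analogue of that data-tagging node, reusing the template names from the demo (the column names are illustrative):

```python
# Data-tagging sketch: derive a new column from the values of an existing one.
import pandas as pd

forms = pd.DataFrame({
    "formTemplateName": ["Incident Report", "Timesheet", "Daily Report"],
})

tag_map = {
    "Incident Report": "incident",
    "Timesheet": "timesheet",
    "Daily Report": "daily report",
}

# New column, filled according to what's in formTemplateName
forms["form_template_one"] = forms["formTemplateName"].map(tag_map)
```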

The next thing I'm going to do is show you how to do a join. So I'm going to take my data output and drag it over and look for the join node. And we'll zoom in just a little bit. So with the join, you need two inputs.

Typically what we'll do, for example, is you can either use two source nodes. I think that would be the quickest example. So let's say you needed to merge forms data with any other type of data. You would make sure to import that data into the project and then you would just drag it into port B of the join.

With the joins, we have a lot of options, left outer, right outer, inner. The main ones that clients like to use that I've seen are left outer and your inner join. It's essentially a one to one match. I'm going to get rid of this. We're not going to be joining today.
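As a quick reference, the two join variants called out above look like this in a pandas sketch with made-up columns:

```python
# Join sketch: port A is the forms data, port B is a second source.
import pandas as pd

forms = pd.DataFrame({"project_id": [1, 2, 3], "form_count": [10, 4, 7]})
projects = pd.DataFrame({"project_id": [1, 2], "project_name": ["Seaport", "Civic Center"]})

left_outer = forms.merge(projects, on="project_id", how="left")   # keep every forms row
inner_join = forms.merge(projects, on="project_id", how="inner")  # keep only the matches
```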

Another node I want to demonstrate for you is called Edit Column. So if I type in Edit and click on Edit Column, you'll see here that there are a lot of different icons here next to the names of the columns. This tells us what type of data it is. So this pound symbol tells us that this column, form num, is a number column.

This calendar icon tells us it's a calendar-- sorry, it's date time. These brackets tell us it's an array. This little dropdown icon tells us it's a list, so on and so forth.

The T stands for text or string. And we can change the types of each of these columns to whatever we want. Unless you have a string and you're trying to convert it to a number, it's probably not going to work out.
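That caveat in code form: numeric-looking text converts cleanly, while arbitrary text does not. A small sketch with hypothetical columns:

```python
# Column-type sketch: digits convert to numbers; arbitrary text does not.
import pandas as pd

df = pd.DataFrame({"form_num": ["1", "2", "3"], "notes": ["crack", "leak", "ok"]})

df["form_num"] = pd.to_numeric(df["form_num"])                  # fine: digits become numbers
df["notes_num"] = pd.to_numeric(df["notes"], errors="coerce")   # text becomes NaN instead of erroring
```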

I'm going to click on form template name. Right now, it's called a list type, so it allows us to select dropdowns. But it's not a string, so I can't perform any string transformations on it. What if I wanted to? What if I want to start doing a regular expression extract of this, or just start to normally clean the string?

Can't do that if it's a list type. So I'm going to give it the string type. And as you can see here, it has changed from list to string. Easy enough, right? Now I want to do some text cleaning on it.

What if you wanted to use regular expressions? As you can see here, I've dragged this output into a new node. I'm looking for a regex extract.

If you don't know what regular expressions are, they're essentially just a way for you to match patterns within text data. And you can extract, you can clean, you can do whatever you want. But essentially what it is looking for is patterns.

I'm going to select a column to do my extraction and look for form template name. And what if I just wanted to extract the middle word of this sentence, regardless of what it is? I'm going to create a new column called form template.

Actually, you know what? I'm just going to call it regex extract. That way we explicitly know what we're looking for. And now we actually need our regular expression.

How do we extract the middle word from the form template name? Well, at this point, you need knowledge of regular expressions before you can do anything. So this is not a regular expression course.

I'm not going to go over it. I personally use a website called regex101.com. That is my opinion; it does not represent the opinions of Autodesk or Toric. But again, this is the website I personally like to use to make regular expressions that match the text that I'm trying to match.
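For illustration only, one pattern that would capture the middle word of a three-word template name might look like the sketch below; it is not the exact expression built in the demo.

```python
# Regex-extract sketch: capture the middle word of a three-word name.
import re

names = ["Daily Safety Report", "Sample Incident Form"]
pattern = re.compile(r"^\S+\s+(\S+)\s+\S+$", re.IGNORECASE)

middle = [m.group(1) if (m := pattern.match(n)) else None for n in names]
# -> ['Safety', 'Incident']
```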

So again, I'll quickly go through this right now because again, this is not a regular expression tutorial. If you do know how to use regular expressions, this is a helpful tool that you can use. So again, I'm copying a regular expression I built that is meant to only match the middle word of our text in this column. Copy that, go over here to expression, I'm going to paste that.

For my flags, you can use other flags as well. I'm going to use case insensitive. Let's take a look. OK, it didn't match my regular expression right here. That's OK.

One thing that I can also do is take a look and troubleshoot why it didn't work, so on and so forth. But if you didn't want to use regular expressions, you don't have to. One thing that you can do, let's say, for example, is that you don't want to figure out the pattern for your regex extract-- oh, I see here. I put daily instead of sample. Let's see if that worked.

All right, there you go. So again, with regular expressions, they're a little complex, and you do need to pay attention to detail, as you can see that I needed to do there. But if your extraction is relatively simple, let's say, for example, that I don't want to use regular expressions to extract this data, what else can I do?

Well, as you can see here, each middle word is separated by a space on the left and the space on the right. So if you've ever done splitting of text data, you can do that as well. So let's get into that.

I'm going to delete our regular expression extract, drag our output, click on a split node, and now I can actually split our data-- our text-based data, based on any character that I want. So I'm going to select form template name. Our delimiter is going to be just a space.

And I'm going to remove the original column. And as you can see here, it has split up by form template name into each of the words that are split based on the space. So I have the first word, the second word, and the third word, all split up by spaces.

And let's say I only want to keep this column. Now what I can do is I can do a new node called columns. I can actually hide columns I don't want. So I don't want this column and I don't want this column. I just wanted that middle word.

The other thing I can do is give it a new name. So I click on this. And let's just call it, just for simplicity, I know you wouldn't use this in real life, but just for the purposes of demonstration, I'll just call it middle word. There we go. And there you go. That's a demonstration of how to extract words from your text.
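The same split-and-keep workflow, sketched in pandas with illustrative column names:

```python
# Split sketch: split on spaces, keep only the middle word, drop the rest.
import pandas as pd

df = pd.DataFrame({"formTemplateName": ["Daily Safety Report", "Sample Incident Form"]})

parts = df["formTemplateName"].str.split(" ", expand=True)  # one column per word
df["middle_word"] = parts[1]                                # keep just the middle word
df = df.drop(columns=["formTemplateName"])                  # remove the original column
```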

The next thing I want to do is demonstrate writing to a warehouse. So first I'm going to drag the output of my data and create a new node called write table. Now when it comes to this, we have to set up our tables ahead of time.

How do we do that? Well, I'm going to go into a new tab. I'm going to go to my connectors and let's say you want to connect it to your Azure SQL database. You find Azure SQL and you would set up your connector to your actual database here. You'd do the same for Snowflake, for Databricks, so on and so forth.

And once you set up your connection to your table-- to your warehouse, you can actually connect to your tables inside of your external warehouse. Toric also has the ability to create internal tables as well, just for you to store data, whether you need it to access it in a different data flow, or if you actually want to store your data inside of Toric, we have that ability as well. So I've created an internal table for this demonstration called ACC Forms. Just going to click on that. And as you can see here, I can quickly take a look at the schema for this table.

I'm going to go back into my data flow or my destination table, as you can see here, ACC forms is the one that I showed you just now. You can also connect to Databricks, Snowflake, Azure SQL database, so on and so forth. But let's just say you wanted to write data to your table. You would select it. The schema, columns for your data table, would appear, and then you need to actually map your data from your node, make it match the schema of your table. So I'm going to look for ID for the ID column. Great.

We know this column means the assignee ID. I'm going to look for that. Going to look for created by. And here I'm going to look for description.

And all I have to do is click on write to test it. Great. Now I know it works. Now, I know I can push my data to this data table.
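Conceptually, the write-table step is a column mapping followed by an insert. A minimal sketch, where the connection string and table name are placeholders:

```python
# Write-table sketch: rename source columns to match the destination schema,
# then append. The connection string and table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

forms = pd.DataFrame({
    "id": [101, 102],
    "assigneeId": ["u-1", "u-2"],
    "createdBy": ["u-9", "u-9"],
    "description": ["crack in slab", "missing guardrail"],
})

# Map source column names onto the names the destination table expects
mapped = forms.rename(columns={"assigneeId": "assignee_id", "createdBy": "created_by"})

engine = create_engine("mssql+pyodbc://user:pass@server/db?driver=ODBC+Driver+17+for+SQL+Server")
mapped.to_sql("acc_forms", engine, if_exists="append", index=False)
```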

Another thing you can do is let's say you're just writing your data to a data lake. Just for demonstration, I'm going to drag this output. And I'm going to select something called Run Export Automation. So all an export automation is it's a way for you to export that data to a data lake. It's one more step that you have to do. Instead of just writing to a table, you do have to run the export automation.

What is an export automation? All I'm doing is I'm taking a connection to our data lake, and I'll show you that we have one. So let's say I'm exporting to Azure data lake storage. I have a connection here. Now I need to create an export automation to make sure that data is run through our connector into the data lake.

So if I go to automations, scroll down a little bit, we have one called Azure lake export. As you can see here, I have the name of the automation, the description, the action type, which is exporting data, the application is data lake storage, the connector is the one I just showed you, and our channel is files, which is all we get for Azure data lake.

And this is it. This is all I need to do to make sure that data is exported into Azure data lake. I'm going to go back to our node. We have our own export automation. I'm going to search for the name of our export automation called Azure Lake Export. Select that.

For the file name, we can type the file name in here. We can also create what's called a text node. And also give it a filename here, testing.csv. Then I can just connect the file name there. Great.

It's compressed. And when I click Export, it'll also be exported into my data lake. I don't want to do that right now. I don't want to clog my data lake like that. But all you have to do is click on Export.
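In spirit, that export step is writing the transformed rows out as a compressed file through the data lake connector. A tiny sketch, with a local filename standing in for the Azure Data Lake destination:

```python
# Data-lake export sketch: write the transformed rows as a compressed CSV.
# A real pipeline would target an Azure Data Lake URI (e.g. via adlfs/fsspec);
# the local filename here is just a stand-in.
import pandas as pd

rows = pd.DataFrame({"id": [101, 102], "status": ["open", "closed"]})
rows.to_csv("testing.csv.gz", index=False, compression="gzip")
```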

Now let's say you want to do a full table backup. Most of the clients that I work with that I'm building data pipelines for, they want all of their data backed up into their destination of choice. So they want to be able to take all their data from Autodesk and then back it up to their own secret warehouse. How do we do that?

All you have to do is set up your data flow as I've done like this. Click on Export when automating. What we're going to do is we're going to automate this data flow. We're going to make it so you're importing data every single day, and every single time a file is imported into this data flow, it is run through it and then exported into your data table.

So what do we do? We have to create a data flow automation. Again, we are just automating this data flow, that way every single time we import data into it from Autodesk, it's automatically run through this. You don't have to open it. You can just sit back and watch the data populate in your data warehouse.
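Stripped of the UI, the full-backup automation amounts to a scheduled run of the same three steps. A pseudocode-level sketch; the function names are hypothetical:

```python
# Full-backup sketch: on a schedule, import what's new, run it through the
# transform, and append it to the destination. Function names are hypothetical.
from datetime import datetime, timedelta

def import_from_acc(since: datetime):
    """Extract: pull new or updated records from the source (placeholder)."""
    return []

def run_data_flow(records):
    """Transform: the equivalent of the data flow above (placeholder)."""
    return records

def write_to_table(rows):
    """Load: append the rows to the warehouse table (placeholder)."""
    pass

def daily_backup():
    since = datetime.utcnow() - timedelta(days=1)   # trigger: new source data
    rows = run_data_flow(import_from_acc(since))    # action: run the data flow
    write_to_table(rows)                            # destination table updated
```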

So there's one more thing we need to do. I'm going to go to Automations, create an automation. I'm going to call this ACC to warehouse table flow automation. Great.

And I feel like that's pretty precise. I don't need to give it a more precise description. Going to use this name for the description. For the trigger type, I'm going to call it source updated.

All source updated is, is it's looking for new files. So every single time we import data from Autodesk, it's going to be looking for updated sources. So it's going to be looking for new files. That's all you can-- that's all you really have to think about when it comes to source updating. Think New file, essentially.

The source type filter, we need to know what data to look out for. I'm going to be looking for Autodesk. There we go.

So we're looking for new files from Autodesk. In what project? ACC demo. So all files that are imported into that project folder that we created at the beginning, that's our trigger type. The moment it sees a new Autodesk file in our project folder, it's going to do our action.

Our action is run data flow. We want to run that file through this data flow. I need to select the actual project the data flow's in, ACC demo, and the name of our data flow is ACC to warehouse.

Next we need to tell our data flow automation where the file is going to get inputted into in our data flow. And that is our source node. So all files that are coming from Autodesk, we want to put them right here into our source node. That way, they're run through the data flow and finally exported to our table.

So the port has a long name that is essentially the name of our source node, and then there are just a few default values we have to fill in. And then all we have to do is create our automation and enable it. And that is it.

Every single time we ingest data from ACC now, we can set it up on a daily timer, that data will get imported, run through the data flow, and exported to your data table. And that is how we can create a data pipeline from scratch and fully back up your data into your destination of choice. And that's it. I'll pass it back to Chad now.

CHAD BRAUN: Just a couple last notes in the actual demonstration environment, guys. First of which is that, of course, when we're talking about data pipelining, that's, in most cases, writing of course, from a source or multiple sources into a warehouse. But because Toric is an agnostic data movement tool, we can actually move data from source to source.

So what that means is what you're looking at here, in a similar capacity to what Austin was just performing with data pipelines, we're actually taking data from, in this case, a data warehouse and writing it back into the Autodesk Construction Cloud. So we use the same nodes in the same way. We're really just transforming the data and getting it prepared or put into a schema that matches the requirements as dictated by the Autodesk Construction Cloud. So in this case for RFIs, if we were to write RFIs from a system to ACC, we would just need the container ID, which is actually just the project unique ID, the status, and of course, the title.
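In other words, each RFI row being written back needs only those three fields. A minimal sketch of that shape; the field names are illustrative rather than the exact ACC API schema:

```python
# Minimal shape of an RFI row written from the warehouse back to ACC.
# Field names are illustrative; the real schema is dictated by the ACC API.
rfi_rows = [
    {
        "containerId": "project-guid-123",   # the project's unique ID
        "status": "open",
        "title": "Clarify slab edge detail",
    },
]
```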

Now, this isn't limited to writing from a warehouse. That's just the example that I've got here. You can actually write from a project management tool to ACC. You could write from ACC to an ERP. The only limitation of this tool set is the ability to-- or having the actual APIs available from whatever it is we're writing from and whatever we're writing into.

So there's some pretty cool use cases here. We'll actually touch on one here in just a moment when we talk about customer stories. But just understand that, again, with the data being agnostic within Toric itself, the transformations are really at your fingertips to perform whatever you'd like to do, be it writing to a warehouse, writing to a different source, or writing to a table, as well as, of course, utilizing these transformations for a visualization. If you were to utilize something like Toric, it should be mentioned that there is actually a visualization or BI tool built out within the system that enables your teams to actually associate model data with other data sets.

So in this example here, I've got a project phasing type build out for my model. So I can actually click in here and click through my five phases. As I do so, the model will start to build and change accordingly.

This is great not only for owners and developers to watch their building being built in real time, but of course, also for presentation purposes. If you're a general contractor trying to win new business, enabling your clients or your owners to actually be able to interact with the model and understand where they might be at any given time during the actual project progress.

Of course, this isn't limited to project phasing. If you have cost codes and you want to tie those to or associate those with families or elements within a model, or if you wanted to take, say, a schedule from P6 or something along those lines, you'd be able to associate actual schedules with the model elements themselves. So Toric really enables teams to start doing 4D, 5D, 6D-type workflows behind the scenes. Of course, in those data flows you're associating data tables with other data tables, and then actually seeing the result in real time for your project teams, who don't necessarily need to even know what Toric is or how it works behind the scenes. These dashboards can be passed out to as many people as you'd like.

So if you wanted to pass out 10,000 dashboards, it would be on the house. And of course, in addition to that, we have more or less all of the visualization tools that you could ever need, including things like if I wanted to do a timeline, or if I wanted to filter by responsible contractor, if I wanted to click into a particular cost impact. And of course, this is all interactive. If I want to go click into RFIs associated to a particular person, of course, the dashboard updates accordingly with where I'm clicking.

If I click out, submittals, observations, safety, anything that you want. So long as the data exists, we can build out the visualizations for it. We can pipeline it. We can write it from source to source, so long as the APIs accommodate, whatever it is you're hoping to achieve.

So with that being said here, we'll end it with a couple of customer use cases, the first of which is going to be Gamuda. So Gamuda is a really good use case in that Gamuda is multifaceted in what they do with Toric. Their use case is also pretty typical for what we're seeing general contractors asking of Toric and accomplishing for their project teams.

So in particular, Gamuda was asking for real-time data. That was their biggest hangup. They had more of a classical data build out, or data pipeline with hard coded transformations writing into what eventually is Google BigQuery.

They actually built out a data team for a specific data pipeline, an entire team for a data pipeline that they were hoping to achieve. And they struggled ingesting from all of these construction-specific sources. Of course, that seems to be the impetus for most of our conversations, is that construction tools, being construction tools, are very specific tool sets. They're not available in most cases for integrating with a platform you might pick out of a Google search.

So in this case, what they were doing was plugging into P6, ACC, as well as SAP, and then utilizing our data pipelines to make a repeatable, scalable data pipeline that they could use from project to project and of course get that data in real time. In addition to Gamuda, Whiting-Turner was doing something very similar.

Of course, in this use case, the reason I chose this slide is that Whiting-Turner, as you can see, is utilizing both BuildingConnected and PlanGrid. So what they're doing is automating an ingestion from those sources, routing it through Toric, performing that data cleansing, and then writing it to an Azure data lake, and eventually Power BI.

You'll also notice that neither Gamuda nor Whiting-Turner are necessarily utilizing Toric's BI tools. It's never going to hurt our feelings if you'd like to route your data to a lake or warehouse and then utilize a Tableau or a Power BI on the opposite end. But if you did want to utilize our BI or analytics tools, you can look at somebody like Commodore.

Commodore is a mid-size GC out of Boston. They have a data team of one person. This is a really interesting use case, in that she's actually plugging into all of these sources, creating the data pipelines herself. She is writing to, eventually, a data lake, but she's also utilizing, or starting to utilize, some of the actual visualization capacity within Toric, in that she's performing-- you can see on the right-hand side their quantity takeoff from a model, simply by ingesting that model from Revit into Toric itself.

And then, of course, given the properties in the data table actually allowing her to perform calculations for, in this case, total steel tonnage, she's also performing some safety type reports, incident reports, and really doing a lot of really interesting configurable visualizations and dashboards that are really built off of, behind the scenes, the data pipelines that she's running. She's really almost connecting the visualizations with the pipelines, which is a really interesting use case in that the data is full circle.

So coming soon should also be mentioned, Toric GPT. So I think like everybody else in construction technology, everyone is thinking about AI. Everybody is thinking about a GPT model.

Toric is incredibly well positioned when it comes to what GPT can be for us and what it can be for our clients and our customers. In short, we're building a large language model off of the data that customers are allowing us to access. We have some of the largest data sets in construction at our fingertips that we're training the model on.

And eventually, our intention is to allow folks to go in and ask the bot how many RFIs are overdue this week? How many RFIs are over two weeks? And as the model learns, what's going to start happening is that AI piece is really going to come into play.

If you just filter it down to maybe your data sets, it'll start to average out things like how long an RFI is typically taking to get responded to. If it's taking two weeks, 2.5 weeks, it'll start to eventually flag that data that's maybe over 2.5 weeks, and you'll start to be able to make informed decisions based off of trends within your particular projects, or, of course, if you'd like to see maybe globally what the construction data is looking like, how long it's taking people across the planet to respond to particular RFIs.

The eventual intention here is actually to be able to build visualizations from these GPT models, meaning a project manager being able to go in and say, hey, give me my forecast for the next three weeks based off of my estimated cost to complete on this particular maybe even previous project. Maybe we're getting into the final phases of a high school that we're building. It's a football stadium. We've built one before.

We want to correlate the two, or understand the two together. That's really the intention of Toric GPT, and again, we're well positioned for it in that we have access to some of the largest data sets specific to construction, and that we will remain specific to construction with this particular GPT model.

If you go to ChatGPT now and you ask it about your RFIs, it'll probably respond with something about hot dogs. This is going to actually, of course, be completely specific to construction built off of your data sets if you want it to be, or, of course, just utilizing global data sets to understand more about how your projects could be performing better, how you could increase your margins, whatever it might be that you're hoping to accomplish.

So at the end of the day, why Toric? Why do our customers use us? Of course, I'm not going to read this word for word, but at the end of the day, nine times faster for getting data through your near real time, or in some cases, real-time pipelines.

You're getting five times savings with data movement: just the ability to do this in no code, being able to plug directly into those sources, being able to run it in real time, being able to template these data pipelines so you can run them time and time again, eventually, maybe, even saving headcount when it comes to your data engineering team.

Six times productivity, of course, again with all of those aforementioned points: being able to make your teams more efficient, really optimize your data pipelines and your workflows so that they're not managing the APIs, they're not managing the integrations. Toric is doing that for them so that they're making sure they're spending their time where it needs to be spent, either building new data pipelines or making, again, data-related decisions. And 12 times the volume, really enabling your team to plug into all of those construction-specific sources and otherwise being able to route or channel that data through Toric, clean it, cleanse it, normalize it, and send it to its eventual destination. Thank you.
