AU Class

The Making of an IoT Nervous System: Pier 9's Smart Bridge


Description

This class dives into our cross-divisional project at Autodesk: the making of a smart bridge inside our Pier 9 facility. We'll walk through our case study, in which we used structural analysis, EAGLE, Revit, and Forge to design and plan the layout of sensors on an interior pedestrian footbridge. Using sensors for strain, acceleration, temperature, pressure, humidity, and sound, plus cameras, we created the bridge's "nervous system." We then integrated all this live sensor data into Autodesk's cloud environment and used Forge to bring the data into one context on top of the BIM. With data at the center, we'll show how we learned from that data to create machine-learning analytics for interacting with people on the bridge. We'll also go over how we applied big-data analysis techniques, machine learning, computer vision, and visualization in the cloud to make our bridge "smart." (Joint AU/Forge DevCon class.)

Key Learnings

  • Learn how IoT can be applied to learn from a design in the real world
  • Learn how we used Revit, Autodesk Nastran, Robot Structural Analysis Professional, and other Autodesk tools to design our IoT system
  • Learn how visualization played a key role in understanding our data and design, and how to further develop and learn from the data generated by our smart bridge
  • Learn the importance of contextualizing sensor data in BIM on the web

Speaker

  • Alex Tessier
    Alexander Tessier is a Senior Principal Research Scientist at Autodesk Research. Alex is a founding member of the Complex Systems group, whose mission is to embrace and explore the complexities of organic and built systems and the effects, impacts, and interactions of humans with these systems. Through advanced modeling, simulation, and data-driven approaches, the group uses machine learning, visualization, and analysis to advance the state of the art in design tools and multi-scale simulation. Alex's recent areas of focus lie in the domains of IoT, sensor networks, and big data for IoT. He leads the team of researchers and developers building Project Data360, a scalable, cloud-based time-series database for very-high-frequency data collection, storage, and visual analytics.
Transcript

ALEXANDER TESSIER: So good morning, everybody. My name's Alex Tessier. Today we're going to have a case study of a project that we've done over the last year at Autodesk. And we used a whole bunch of products to make this happen. I'm not going to be able to dive deeply in depth into the way we used each product, but we're going to do a systems overview of how we put this whole thing together.

It's definitely an IoT project. I've been an IoT researcher since 2009. We used Forge, EAGLE, Revit, Nastran, a whole bunch of different tools. And today we're also going to see how important visualizing your data is in these contexts, as well as how key BIM is in being able to bring together all this information.

So, like I said earlier, my name's Alex Tessier. I work for the Office of the CTO at Autodesk, under Jeff Kowalski, but specifically in the research team. I've been at Autodesk officially since the Alias acquisition in 2006. But before that I joined the company in 1998. So I'm coming up on my 20 years.

I used to be a software developer, before I joined the research team, working on products like Alias for automotive and, before that, a finite element design program called Precision Analysis, which was a plug-in to PTC, way back in the day.

So I work for Autodesk Research. We're a group inside of OCTO that does pure and applied research for Autodesk. And specifically I work for the Complex Systems team, where we look at large systems of projects.

So the stuff I'm going to present you today is not just my material. It's material from a bigger team. These are sort of the founders of the project. Alec Shuldiner has been acting as sort of our chief instigator and senior product manager, in this. His day job is actually piracy analytics.

David Thomasson, who's also in OCTO, in the Strategic Innovation Group, and has been responsible for working with a lot of the robotics that you've seen at Autodesk. Myself, and of course our illustrious Thomas Davies, who used to be a student of mine, back in the day, but now is a bona fide research engineer, or applied research engineer, in the Emerging Technologies group at Autodesk.

There's an even bigger team that's made this possible. Here it is. I wanted to thank all these people individually. They know who they are. And, of course, there is a Forge team, as well, without whom this project wouldn't have been possible.

So let's dive into a little bit of background. So a lot of you probably have seen this image before. This is the MX3D robotically printed bridge project. MX3D is a Dutch company. And their vision initially was, with 3D printing, what if we attach a welding arm to a robot and have it print a 3D bridge in place? What are the possibilities, there?

Over time, they discovered that printing outside is quite problematic-- probably still possible, but to limit the research, they decided to move things inside. And some of the early designs had this beautiful branching structure.

Now, this bridge is supposed to be situated in the city of Amsterdam. It's a pedestrian footbridge, and it's supposed to span a canal, anchored to the canal walls. And if you've ever visited the red-light district in Amsterdam with these bridges, you realize these walls are very old and actually crumbling.

So this kind of branching structure is actually inappropriate for this wall. It would sort of tear the wall down. So, over time, the team had to evolve the design.

There's a greater team now involved that's helping MX3D. Autodesk has been one of the founding partners helping MX3D over the years. Most recently, the Alan Turing Institute and Imperial College London have joined us.

And this is a team photo of a meeting that we had in July, with Alec Shuldiner, number 13, myself in the middle, and others from the team. MX3D founders Gijs and Tim. And I believe Tim might even be here this morning, as well, and is also giving a talk this week as well as a demonstration in the Future of Making Things booth.

There are more partners involved now. The list of them is quite large. So it's a very cross-disciplinary, multidisciplinary project, involving many, many different companies that are helping MX3D, Autodesk included.

Now, it's quite fascinating. When I first saw this material-- so, the middle bar, here, is the robotically printed bar. It's a stainless steel that the robot sort of pulls and extrudes. It has to be very careful about the cooling-down period and the heat-affected zone. It's not always able to print in the same location at the same time. And, of course, atmospheric contaminants in the air, pollutants, vibration: all these different factors can really affect the strength of the welds. And MX3D has really pushed this technology forward.

We at Autodesk have done some dabbling in this research ourselves, through David Thomasson's lab, experimenting with this 3D robotic printing technology. But, you know, the jury's still out on all the variability you can get with this particular material, which is one of the reasons we are working with other university institutes to really test it.

So what happens when you have an unproven material, a material that we don't understand well-- we have thousands of years of metallurgical history with different metal-forming techniques-- you apply a novel design, and it's a whole brand-new manufacturing technique? What is the actual outcome? Is the risk worth the reward? Hopefully, this is not what we're going to get. And the whole purpose of the initial idea of why we are putting the sensors on there is to prevent that catastrophe from happening, to really measure and understand what's going on.

This is the final, evolved design of the bridge. You can see it still draws some inspiration from the branching, but structurally, most fundamentally, it is a U-channel, even though it's beautiful and quite sinuous. The balustrades on the side are very suggestive. It's quite a beautiful design.

And so the early designs favored branching. But over time, working with Arup and other engineering firms, they modified the design to have this more structural U-shape. There's still quite a bit of interesting geometry. On the bottom and the upper right, you can see the different kinds of robotic extrusions that form the support for the main deck.

The main deck itself, I believe, will be cold-rolled-steel plate and form sort of an arch-like structure over the bridge. But really it's still quite a beautiful, elegant shape that only 3D printing could have delivered, in this particular case. And, of course, the structure is hollow.

I got a chance to go visit the bridge while it was being printed. Here's an early shot, taken last July, of where they were with the bridge sections. You can see some of the interior structure, as well as the thicknesses of the material. And, of course, it has evolved even more since then. Here we can see Gijs walking around and inspecting it, after polishing it. And we can see the beautiful balustrade, here.

So we really needed to add structural monitoring to the bridge, strain gauges, so that we could perform some in-lab testing. This is now something that Autodesk has passed on to Imperial College London and the structural experts there, such as Mohammed Elshafie. And we are going to continue to monitor it in situ. But really there are a lot more opportunities for what we can do with this bridge. And, in fact, Alec Shuldiner, on a visit to Amsterdam, discussed with Gijs: why don't we IoT-enable this bridge? It's a beautiful paragon of what we can do with modern 3D-printing technology. What can we do on the IoT side of things?

So, I mean, part of the idea-- so this is a picture of the bridge in Amsterdam. This is actually a temporary bridge that's in place. You can really see the amount of action and buzz that's going on, in that red-light district. There's quite a few people.

The bridge itself is not really an attraction, but a lot of people stop and look at the canal. And so we really want to help rejuvenate this area and put this beautiful architectural piece in place. So what could we do?

Well, really, we might be able to help the bridge do its job-- understand the context of what's going on in the city, understand the context of how people are interacting with it. We could provide maybe some sort of interactive projections or vibrations. At one point, one of the engineers thought that we could add transducers to the bridge and make it sing, by sympathetically vibrating as people walked over it.

So really the number of possible ideas is endless. But my team was brought in to really understand the overall system of it: how can we do that? One thing that we know is that people drive design. So really the key here is detecting human movements and interactions.

Of course, it's the red-light district, and people don't like their pictures being taken. Although there happen to be cameras absolutely everywhere. But we still want to be mindful of privacy.

We want to be able to monitor the state of that bridge, so that we know what's going on. We want to understand the history of what's going on on the bridge, that lifecycle. As I said, part of the long-term study of that material is that we don't really understand the fatigue properties of the microstructure that we get from this process.

And, of course, one thing that MX3D really wants to do is to make this an open system. They want other collaborators, from universities, from other institutions, to come in and have access to the data that the bridge produces. So what we really think that we need is some sort of bridge operating system, to do that.

So how do we build a bridge operating system? What are the kinds of things that we need? One thing my team always does is start with the sensors. We try to understand the sensing technology that we need. The amount of data that's going to be generated from this truly makes it a big-data problem.

By having many different kinds of sensors, we know that we can sense more than just the thing that the sensor is dedicated for. We can fuse these different things together in a masala of sorts and really try to extract different kinds of things. Some great examples of sensor fusion, I think, are--

An engineer patented a method of detecting where rain bursts occurred over a city, by using the GPS in cars as well as the information of when wiper blades were turned on and turned off in the city, and then posted that data and sort of inferred a rough idea of where the cloudbursts were within a city core. So it's a beautiful, little example of a very simple kind of sensor fusion.

Of course, programming that kind of fusion and that kind of unknown patterning can probably only really be done effectively with machine learning. And one of the things that we're very familiar with is computer vision as a means of providing a baseline of understanding for our large experiment. So, as we build this bridge operating system, these are the kinds of things that we're going to look at.

Now, early on, the team got together and decided, well, you know, this is a hard problem. We don't really have access to the bridge. It's still being printed. It's still being developed.

How are we going to do a test? How are we going to understand if we have the right software components, if we have the right pipeline? What are the sampling frequencies that we need? What are the affinities of the sensors that we need or the precision that we need?

So we sort of homed in on making a prototype within Autodesk itself, because we have this fantastic Pier 9 facility in San Francisco, which we've turned into a sort of maker-shop-and-office combination. It sits just off the Embarcadero. And there's this overpass bridge, a pedestrian truss bridge. It's relatively the same size, give or take, as the bridge that's going to be built in Amsterdam.

It serves a similar function-- pedestrians coming through. There's a lot of tours at Pier 9, so people stop and behave in similar ways to the way they do in the red-light district in Amsterdam. We have pretty much unfettered access to this bridge.

We have a very reliable network infrastructure. The bridge is protected from the elements and from vibration. So it's a good starting point for us to understand and control where we put the sensors and how we do that. Now, it's not an exact proxy of the Amsterdam site, but one of the reasons we chose it was to limit the effects of weather and the environment, so that we could tackle one set of problems at a time.

So our goals, really, in this project, were to help MX3D deliver an IoT-enabled bridge, to understand the cloud infrastructure required for that, the data gathering, the precision, and so on. We needed to create an open architecture, to really promote collaboration. We want the bridge to sort of develop intelligence over time. So we really want to store huge amounts of data, so that we can continuously improve upon our understanding. And, for Autodesk, we want to understand the full design lifecycle for smart infrastructure.

So this bridge may be a first, but it will certainly not be a last. Everything we're building has sensors in it, whether it's smartphones-- even clothing, these days, has sensors in it. So really, as part of our research in Autodesk, we were trying to understand the systemic relationships of all these things and how to fit all that knowledge into our tool chain, into our software pipeline.

Some of our goals were also to investigate how we could possibly automate sensor placement in the design, and to get design feedback from real-world situations. I don't know if any of you are familiar with our Hack Rod project, where we used generative design to design the frame of a car, drove it out in the desert, instrumented it, and then brought that information back to help evolve the design.

Now, that was a closed loop. With infrastructure, we have the ability to have these sensors on all the time, to continuously learn from them, continuously try different designs, and inform this generative-design software. And, most importantly, we want to understand the components: what do we need to design responsive systems? This was also an exploration of machine-learning workflows and how they fit together. We know our customers are very advanced, and you're going to be doing a lot of machine-learning projects of your own in the not-so-distant future, if you haven't begun already.

So, our design methodology. Our drivers were that we needed a flexible system, we needed an extensible system, something where we could add sensors quite easily. For us, we wanted it to be relatively easy to program and make software adjustments. Now, we're all software engineers, so that's not too high a bar.

We really needed it to be modular, so that we could build upon the pieces that we already had. My team's budget is a little lower than it should be, so low cost is always important, as is low maintenance. In one of our previous projects, we continuously needed to babysit all of the components and all of the software, all the time, and that was something we vowed never to do again. So that had to be a primary design driver.

It had to be secure. And, of course, it had to be well-organized, from the standpoint of tracking where things were, what they were doing, and how they fit together.

So our design philosophy actually started really early on. Inspiration came to me and the team when I read this very interesting paper, in the Journal of Biology in 2013. And it's a paper that studies nerve conductions in giraffes. Now, what does that have to do with a smart bridge? Well, I'm glad you asked.

So, because the giraffe is the tallest land mammal in the world, its nerves are very, very long. And, in fact, we've done nerve-conduction studies on mammals before. And some of the scientists wondered, well, how does a giraffe do it? Consider the time it takes, while the giraffe is running, for the signal from a stubbed toe to travel up to the brain, be processed, and travel back down to control an action so the giraffe doesn't fall: the numbers didn't quite add up.

So they did all these experiments, and they postulated that either the giraffe had very special nerves that were faster than regular mammal nerves (answer: nope, same as ours), or something else had to be going on. And what they discovered, through deduction and the scientific process, was that giraffes have to rely on spinal reflexes, reflexes that happen close to the muscle groups, close to the actuation area, down in the spine, rather than having these long-loop signals travel to the brain. So what they really had was a distributed system.

And that epiphany has sort of been the core of a lot of the different things that we've done in the Complex Systems team, where, you know, we have a brain, we have distributed processing at all levels, in this very rich hierarchy, so that enables us to do very large things but also very fast things, by pushing those responsibilities. So, in this analogy, the cloud is the brain. That's where everything's stored. That's where all the big heavy lifting happens. All the statistical analysis, all the machine learning, all the computer vision happens in the cloud.

One of the other reasons we push our data to the cloud is, well, to keep the data safe. There's a lot of personally identifiable information in there, so we can encrypt it in flight and at rest. And also, our experience with some of these devices is that they can be very unreliable, in terms of storage. And the cloud is both scalable and resilient. Your data gets distributed. Engineers are continuously swapping out hard drives. If it's the Google cluster, it's robots doing it-- probably the same with Amazon. And so that's one of the reasons we push all of our data, our large data, to the cloud.

We settled on Raspberry Pis as sort of the local nerve controllers for our data-acquisition system, in part because they're very flexible. It's Unix. We understand Unix very well.

The nerves, in this analogy, really are the sensors. Some sensors have limited intelligence with them, built-in signal processing. Others are just simple analog devices, where the resistance is proportional to the signal that you get, and so you can measure that resistance and get a signal from it.

We also, in this distributed-processing analogy, we know that at some point, when the bridge becomes responsive, we're going to need that concept of the giraffe reflex. And how do we do that? If we don't have any reasonable processing power locally, we won't be able to create that short circuit from the data. By the time the data goes to the cloud and back, it will take too long. So we're going to have to have this short-circuiting mechanism, to program these reflexes.

We also know that, through machine learning, we can actually quantize our neural networks, boil them down to much simpler systems to evaluate-- at the end of the day, it's really just calculus-- and push those onto the Raspberry Pis, where we've had good results actually running those little neural networks.

So we want to program the learned responses in the cloud, learn the reflexes, and push them down to the device for those kinds of activations. Now, really, this notion isn't all that new. It's very similar to SCADA-- which is, I think, probably more than 40 years old-- which is Supervisory Control and Data Acquisition, which you find in most common factories. So the concepts are very similar.
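
As a minimal sketch of that learn-in-the-cloud, push-to-the-edge pattern, assuming a TensorFlow/TFLite toolchain; the network shape, 64-sample window, and file names are illustrative, not the project's actual reflex models:

    import numpy as np
    import tensorflow as tf

    # Tiny "reflex" network: maps a window of accelerometer samples to a
    # yes/no actuation decision. Training happens on the cloud side.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),            # 64-sample window (assumed)
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # ... model.fit(windows, labels) runs in the cloud on labeled bridge data ...

    # Quantize and flatten the model so it is cheap to evaluate on a Pi.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open("reflex.tflite", "wb") as f:
        f.write(converter.convert())

    # On the Raspberry Pi: evaluate the reflex locally, no cloud round trip.
    interp = tf.lite.Interpreter(model_path="reflex.tflite")
    interp.allocate_tensors()
    inp, out = interp.get_input_details()[0], interp.get_output_details()[0]
    window = np.zeros((1, 64), dtype=np.float32)       # stand-in for live samples
    interp.set_tensor(inp["index"], window)
    interp.invoke()
    if interp.get_tensor(out["index"])[0, 0] > 0.5:
        pass  # fire the local "reflex", e.g. drive a transducer or light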

So the research pipeline and infrastructure that we end up developing was, we really wanted to use BIM as the hub for our data context. Because this is infrastructure, because this is a model, we found that BIM historically has been the best way to record and link other kinds of data together in a very rich data environment. To do this, we used Forge and A360, access control through Forge, the model derivative server, all the data management capabilities of Forge.

Forge is doing a lot of heavy lifting for us. Of course, we also use the Forge Viewer to help explore our data and really understand it, through Project Dasher360. And a special part of the system that we're not using Forge for, at this point in time, is a scalable time-series database that we have in the cloud that we've tuned and developed, over the last five years, with various groups, called Project Data360. And everything on the cloud side runs on top of Amazon Web Services.

So this is sort of a slide for our flow and control. As I mentioned earlier, we have real-time data being generated at the bridge. There are small buffers. That data gets pushed as quickly as possible to a scalable time-series database in the cloud. We then perform machine learning and analytics there, to send actions back to the bridge for that long-loop kind of thing, to program reflexes into the Raspberry Pis.

But also, we have this opportunity where we can tune our sensor parameters. If we need to increase sample rates or change gain or add filters, we can do that at this point in the stage by feeding it back into the system. We can also improve our sensor design. Did we put the sensors in the right place? So that's part of this whole evaluation loop that we have.

So Project Data360, as I mentioned, is a scalable time-series database. It's specifically tuned to handle very high-frequency sensors, whether they're bursty or constant. It was originally developed to get building-automation-system data, which tends to normally be 1-minute, 5-minute, 15-minute kind of values. But here, on the bridge, we have values in the hundreds of hertz or kilohertz streaming, all the time, so we needed a very well-tuned system, to be able to handle that.

As part of Data360, we have a visualization framework that we've developed that allows the Google Maps pan-zoom paradigm for sifting through the data. Data360 has a RESTful API. We're hopeful that one day it could be possibly part of the Forge services. And, of course, all of our machine learning and analytics stream through this system, which is a large distributed system. Recently we've added computer-vision capabilities, as well.

So, in the Forge ecosystem, here's where Data360 and our visualization platform, Dasher 360, sort of fit in. On the left are the data services. And we have a custom-written set of Python and C++ code that we fondly call the "programmable data router"-- because you know engineers love these great, compelling names. And that gives us a flexible code base, to be able to gather data at the point of contact of any of the devices. So that's what's running on those little Raspberry Pis. But, at that point, everything's a RESTful interface to Data360, as the information streams in.
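
A minimal sketch of the idea behind such a data-router node: buffer readings locally, then flush them over REST. The endpoint URL, payload shape, and sensor stub here are illustrative assumptions, not the actual Data360 API.

    import random
    import time
    import requests

    # Hypothetical stream endpoint; the real Data360 API is not shown here.
    ENDPOINT = "https://data360.example.com/streams/strain-01/samples"
    FLUSH_SIZE = 100          # keep only a small local buffer, per the design

    def read_sensor():
        # Stand-in for a device-specific read (ADC, I2C sensor, etc.).
        return random.random()

    buffer = []
    while True:
        buffer.append({"t": time.time(), "v": read_sensor()})
        if len(buffer) >= FLUSH_SIZE:
            # Push to the cloud as quickly as possible; it is the system of record.
            requests.post(ENDPOINT, json={"samples": buffer}, timeout=5)
            buffer = []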

So, to build a bridge operating system, we really want to build a modular software stack. So, typically, with deep learning and sensor fusion, we don't really have fine-grained control, necessarily, over the different parts of what the network learns. So you can pull out some insights from these tuned deep learning networks, but really it's a process of reverse engineering. You're not really always sure if the network has learned the kinds of things that you want, at these different stages. So, really, that knowledge is somewhere embedded in the network, and we wanted to be able to tease some of that out and make it modular.

So, to do that, you would typically need to retrain-- for every added new feature, like a sensor, you'd have to reformulate what your input vectors are. And you wouldn't necessarily gain any intuition, as you did so. So it is possible to compose deep learning networks and convolutional neural networks, but it's still very much an open research topic.

So what do we mean by "modularity," when we say this? We mean that we can split the modules up, we can swap them out, we can augment them by stacking them together and composing them, we can invert them to get hierarchical dependencies, and, of course, we can port them and move them around. So those are the kinds of properties that we really look for in our machine learning.

To do this, we really need to understand our data. We can't just take these large data sets and cram them blindly into current machine-learning infrastructures. The paradigm of "garbage in, garbage out" has never been truer, even in this era of machines learning and teasing out knowledge that we normally wouldn't be able to extract ourselves.

It also enables us to see and understand, to know what's possible with the data that we see. We can probably train more quickly by understanding how our data gets pieced together. We can choose the right features for our training. And, of course, we can use both supervised and unsupervised techniques. And, really, the goal here is to create these composable modular parts.

So how do we understand our data? I did have a live demo prepared, this morning. But we did a deployment a few days ago, and that system is currently down. So I have some videos to play.

So the system that we developed for looking at this, it's actually been around for a long time. It's called Project Dasher-- "Dasher" for "dashboard" was originally what it was. And it's a project that began in 2009 with my research team, some of whom are sitting in the front row, here.

And really, the idea of it was to extend the value of building information modeling, a BIM, into the lifecycle of the building. So it was really the marriage of BIM, of building information modeling, with building data and augmented sensor networks, to create a unified visualization that would allow engineers and-- well, at that time, building operators-- to really understand how their building was working. My boss, Azam Khan, typically called it a "building debugger": really understanding how all the pieces get put together.

One of the things that we typically saw was that people would look at charts and graphs and really not understand where that data was being generated and what sensors or devices were next to each other in that spatial context. And they also tended to get detached from the overall time frame of what was going on. So really what we created with project Dasher was a unified visualization system that would bring space, time, and sensor data all together into one system.

We've rewritten the old desktop version, and there's actually a demo. You can go today and try out Dasher for yourself, at dasher360.com. There are a lot of building examples. More recently, we've been moving into robotics and smart factories as well, so things are starting to move in there and move through the data, all the while using BIM and other technologies.

And so typically we take scalar data and map it in such a fashion. What you see here is the NASA Sustainability Base N-232 building. They have a very rich sensor network; there are temperature sensors at all the desks. And so we're able to show an animated heat map for that kind of thing. So we're actually combing through that historical data, here, and visualizing it in that context, to understand how the building works.

So this would be where I would fire up the live data and we would actually be able to virtually visit the bridge at Pier 9, at our Pier 9 facility. So this is Dasher. It's able to break the building into an occupant model view, a view of the building that an occupant understands.

Here we see the various sensors arrayed on our bridge, in their actual locations. We are able to explore and look at all the different data sets, to sift through the different types of sensors, to filter them, and to combine them into different kinds of little dashboards. We're able to save different kinds of views as we navigate through. And really you can track the process of your analysis by walking through that kind of thing.

So here's an example of our Splash charting component. We're looking at the bridge, and here we're looking at what we suspected was an event on one of the strain gauges. So we can zoom in.

Our Splash system works in harmony with Dasher and with Data360, in that all the levels of detail necessary for easy paging of the data are created on the fly by the system, and Splash lets you navigate them. So, even though the data's in the cloud, you don't have to wait too long in your client application to get at it. Here we see a nice temporal correlation between one of our strain gauges and one of our accelerometers.
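
A minimal sketch of that level-of-detail idea, with an assumed pyramid layout rather than Splash's actual storage format: precompute progressively coarser summaries so the client only ever pages in roughly as many points as it can draw.

    import numpy as np

    def build_lod_levels(samples, factor=4, min_len=256):
        # levels[0] is the raw stream; each further level is `factor` times
        # coarser, so the viewer can page in the level matching its zoom.
        levels = [np.asarray(samples, dtype=float)]
        while len(levels[-1]) > min_len:
            prev = levels[-1]
            n = len(prev) // factor
            levels.append(prev[: n * factor].reshape(n, factor).mean(axis=1))
        return levels

    def pick_level(levels, span_fraction, pixels):
        # Coarsest precomputed level that still gives >= 1 point per pixel
        # across the visible span (span_fraction of the whole stream).
        for level in reversed(levels):
            if len(level) * span_fraction >= pixels:
                return level
        return levels[0]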

Now, there's a lot more to understanding events than just looking at the value. So one of the things that we really need to do, in our system, is to understand really what's going on. Truly, at this point, there's no better substitute than the human.

So we've put cameras at the end of the bridge that are recording 24 hours a day, 7 days a week, as are all our sensors. Our strain gauges are running at 80 hertz and streaming all that data all the time. And here you see some of the computer-vision algorithms that we have that can detect occupants-- where they are or where they're going on the bridge. And we're using that as sort of a base data set for our machine learning.
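
A minimal sketch of this kind of occupant detection, using OpenCV's stock HOG person detector; the camera source and event handling are placeholders, not the project's actual pipeline.

    import cv2

    # Stock HOG person detector; the real pipeline is richer, but the idea is
    # the same: turn video frames into "someone is on the bridge" events.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture("bridge_camera.mp4")   # placeholder video source
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes):
            # Emit an occupancy event: timestamp + count, pushed alongside
            # the physical sensor streams as a "virtual sensor".
            print(len(boxes), "occupant(s) in frame")
    cap.release()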

If you want more details on project Dasher, especially how it was created and how we built it on top of different Forge components, my colleague Simon Breslav, who's sitting in the front row here, gave an excellent talk yesterday, which was recorded. And so you can get all the nitty-gritty details of how we used the Forge Viewer for creating project Dasher.

So now we move on to the sensor network. The different sensor types we chose are listed above. We had motion detectors above the door. We wanted to understand the environment.

We know that, once we're deployed in Amsterdam, the environment's going to make a big difference. Especially temperature and humidity could certainly affect the behavior of the structure on the bridge, as well, of course, as the people. So we have the full environmental array, here, with temperature, pressure, humidity, and, of course, CO2.

It's a machine shop. It's very noisy, so we're not actually recording live sound but sort of a sound average, to get an idea of what the noise levels are like in the space. We've added strain gauges for monitoring the structural response, as well as accelerometers for trying to understand how people are walking on or interacting with the bridge. And then there's the video system that I mentioned.

The environmental sensors we recycled from a previous project, one of our early building projects. They were purchased from Phidgets, and they're nice USB sensors, though a little pricey for what we're looking for. Right now they're 8-bit, USB-based. They can sample locally at a kilohertz, but they typically roll up values at 1 hertz, so they post a value every second. So right now our environmental sensors, our motion detectors, and so on are recording values every second.
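
That roll-up behavior is simple to picture in code. A sketch, assuming a fast local read loop and one posted mean per second:

    import time

    def sample():
        return 0.0                    # stand-in for a ~1 kHz local read

    total, count, last_post = 0.0, 0, time.time()
    while True:
        total += sample()
        count += 1
        if time.time() - last_post >= 1.0:
            rolled_up = total / count # the single value posted each second
            # ...send rolled_up upstream here...
            total, count, last_post = 0.0, 0, time.time()
        time.sleep(0.001)             # ~1 kHz pacing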

So we ended up, for this project, doing something special. We decided that, to get the level of control that we wanted, to really understand the true origin of the data, we were going to build some of our own sensors. The reasons were really to address these issues: cost, the desire for a distributed system, flexibility, and, above all, complete control.

Some sensors do local filtering. For example, some motion detectors will do local filtering that really washes out a lot of the data. And I was speaking with a researcher from Hitachi, some years ago, who told me that you can do much better with a support vector machine and the raw, noisy data than you can with just the filters that come built into these devices.

So really we wanted to be able to get at all that raw data. We want to see all the warts, all the noise, and everything, so that we can really dig as deeply as possible with the machine learning. We wanted to be able to control the polling rate, the sensitivity-- as I mentioned, the filtering. We needed something fairly robust.

So our first attempt was actually this little cape, a board that sits on top of the Raspberry Pi, using a very, very cheap-- I think it's a $3 chip, in fact-- analog-to-digital converter. That's one thing the Raspberry Pi doesn't do very well: it doesn't have an analog-to-digital converter.

It has a general-purpose input-output interface, so you can interface with most digital things, but it's very difficult to get at analog signals. So we developed this early cape before the project. On the left, we did the whole layout in EAGLE, using both auto-routing and manual routing features. The final board is shown in the middle. And on the right you can see the final results.
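
For illustration, here's a minimal read of one channel from an MCP3008-class SPI analog-to-digital converter on a Raspberry Pi via the spidev library; the chip and wiring are assumptions, not necessarily the cape's actual parts.

    import spidev

    spi = spidev.SpiDev()
    spi.open(0, 0)                    # SPI bus 0, chip select 0
    spi.max_speed_hz = 1_350_000

    def read_adc(channel):
        # MCP3008 framing: start bit, then single-ended mode + channel in the
        # high nibble, then clock out the 10 result bits over two bytes.
        r = spi.xfer2([1, (8 + channel) << 4, 0])
        return ((r[1] & 3) << 8) | r[2]    # 0..1023

    print(read_adc(0))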

Our accelerometers for the initial test-- again, they were recycled. We only had two weeks to really put together the system and assemble it. And we only had a week on site to actually install it.

So we solved some of the problems that we had for deployment on site, like gluing neodymium magnets, with epoxy, to the actual accelerometers and then doing sensitivity tests to make sure that wasn't going to affect the performance too much. The bridge is metal, so this made the accelerometers convenient to place. And one of the disadvantages of using old hardware was that we had to hard-wire all of those things, using Cat 5 cable, into the Raspberry Pi, resulting in the sort of Medusa-like creature on the right.

Strain gauges are something our team had never really used before-- I mean, in university, of course, but normally with very expensive lab-grade equipment from fantastic companies like National Instruments. To get the level of control that we wanted and to keep the costs down, we had to do something different: the foil gauges themselves are actually relatively inexpensive, but the analog-to-digital conditioners can be very, very pricey, sometimes $16,000 to $32,000. And so Thomas Davies said, oh, I think I can put something together that does everything we need it to do.

So really, I mean, we needed a sensitive system, something that could capture signals in nanovolts, that could do the filtering and amplification that we wanted, shunt calibration, the whole Wheatstone bridge thing, and temperature compensation. So that's what he put together. And, of course, he created this beautiful strain-gauge node you see on the right, here, with the lovely Autodesk branding on it.

So, at the end of the day, the maximum sample rate was 80 hertz, but we did get a full 24 bits of accuracy, which was far greater than the 8 or 10 bits we could get from those accelerometers. The noise characteristics were very good. And we were able to put up to four ports with four strain gauges per port. And the price, parts included, was $60 for each node, which is really a bargain when you compare it to those $16,000 or $32,000 conditioners.
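
As a worked example of what such a node computes, here's the standard quarter-bridge conversion from an ADC code to strain; the gain, reference, excitation, and gauge factor below are illustrative values, not the node's actual calibration.

    GAIN = 128.0            # programmable-gain amplifier setting (assumed)
    V_REF = 2.5             # ADC reference voltage (assumed)
    V_EXC = 5.0             # bridge excitation voltage (assumed)
    GAUGE_FACTOR = 2.0      # typical for foil gauges
    FULL_SCALE = 2 ** 23    # signed 24-bit converter

    def counts_to_strain(code):
        # Bridge output voltage implied by the ADC code.
        v_out = (code / FULL_SCALE) * V_REF / GAIN
        vr = v_out / V_EXC  # ratiometric reading; bridge assumed balanced at rest
        # Standard quarter-bridge relation: strain = -4*Vr / (GF * (1 + 2*Vr))
        return -4.0 * vr / (GAUGE_FACTOR * (1.0 + 2.0 * vr))

    print(counts_to_strain(1234) * 1e6, "microstrain")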

We designed the cases using Fusion 360, to protect them from dust, and, at the last minute, with David Thomasson and Alec's help, added little slots for neodymium magnets, so that we could put them on the bridge very easily. The electrical workflow that we used: we developed a schematic, then we would lay out the board in EAGLE. Of course, we had to do a couple of PCB fabrication steps and a few bench tests, to make sure that we got it right. I think only a few iterations were involved, here. And then the final assembly we did by hand. For future projects, we're actually looking at using the pick-and-place device that we have inside of the pier.

So we used EAGLE for our electrical design, Fusion for our mechanical integration, and Dasher 360 and Project Data360 on the software side, for closing that loop and bringing all the data together. A lot of the testing led to changes in the electrical system, which resulted in changes to the mechanical system. And so we really utilized the integration between EAGLE and Fusion to do the full lifecycle of that product and manage all these iterative changes. It was a really, really valuable process for us. It saved us a lot of time.

So Thomas designed four boards in EAGLE. He absolutely loves the hotkeys. He says hotkeys are life. And, you know, EAGLE doesn't use hotkeys like Control-R-- or Option-R, if you're a Mac user. Instead, everything is very command-line-like. You can actually type in "add cap_6021" to add a capacitor, instead of having to peruse libraries for it. It saves a lot of time when you're able to do that and really increases your productivity.

Thomas's tip, here, is to start the Fusion 360 integration early. Integrating mechanical and electrical components can be quite tricky, and the teams have done such a beautiful job on that integration that it's far easier to start it from the get-go. The further along in the design you are, the harder changes become, and that integration makes them far easier to do.

So we also used Fusion 360 for dimension verification and integration. So what you're seeing, here, is actually a revision of our accelerometer board. It's an IDC multiplexer that allows us to use power over Ethernet power the new accelerometers that we're going to put on the bridge for our next iteration. Thomas's words are is that it's easy to ignore the real world, as an electrical engineer, when you have this kind of beautiful integration through Fusion 360.

One of the success stories was the firmware. Because a lot of our responsibility was pushed to the cloud-- a lot of the processing, signal processing, and so on-- it really meant that all we needed access to was the raw bits of the chips that we were using here. So we only spent four days or so developing the software for the different components. Then we installed them all, made sure we got some signal, and the rest we were able to do remotely.

A good chunk of the installation team-- myself and Thomas-- actually resides in Toronto, Canada, and the bridge is in San Francisco. So we actually finished the work over a period of several weeks, to bring the bridge online, by just remoting into the Raspberry Pis and upgrading the software that way. We couldn't have done this without leveraging Forge and Data360. That really reduced the programming burden significantly.

We also scripted a lot of maintenance tasks, by scripting the startup and adding different kinds of checks, using cron jobs that watched the different systems. We use something called Observium to wrangle all the devices on the bridge. It's a beautiful little application that uses the Simple Network Management Protocol. So, if you're an IT person, you probably know about it. If you're not, it's a great way to understand and get alerts for when devices go down or need rebooting and so on-- which doesn't happen that much, but, when it does, you need to know.
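
A sketch of the kind of cron-driven check this implies, with hypothetical hostnames and a print standing in for the real alerting path:

    import subprocess

    # Run every few minutes from crontab, e.g.:
    #   */5 * * * * /usr/bin/python3 /opt/bridge/healthcheck.py
    NODES = ["strain-node-1.local", "accel-node-1.local"]   # hypothetical names

    for host in NODES:
        alive = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
        ).returncode == 0
        if not alive:
            # In our setup, Observium and Fusion Connect did the real paging.
            print(f"ALERT: {host} is down")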

We also recently integrated Fusion Connect, which is one of our IoT products. Fusion Connect is a beautiful system that speaks many, many different kinds of languages. We plan on integrating a lot of the machine data from the different NC machines and so on into our data set, using Fusion Connect, down the road. But right now it monitors our system and sends us text messages and alerts when they're critical. And, of course, we have different tools inside of Data 360 and Dasher 360 that let us know when things are wrong.

So how did we place these sensors? What did we do? So, to determine sensor placement, we turned to modal analysis or structural analysis. We looked at all the different vibration modes of the bridge, to try to understand the placement. And then we added the sensors inside of the BIM, to keep everything well documented and together. And I'll walk through that process now.

So we started with the Revit structure model and created an analytical representation. We had a lot of problems with the handrail, early on. And the only tip I have is to make sure the properties and settings you put in the axis controls for the different structural steel elements work very well; they're all documented in the Robot manual.

We then did a whole bunch of designs. Here is the final design, where you can see the accelerometers arrayed along the center line. Our modal analysis revealed what we sort of intuitively expected about the first modes of the bridge: the first was a sinusoid, so the bridge bows the most in the middle; the second was a double-hump sinusoid. So we put the strain gauges at distances of thirds along the main C-channel, on the bottom of the bridge. This also covers some torsional modes, by looking at the differences between left and right. So we're able to capture those fundamental vibration modes of the bridge.
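
To make the idea concrete, here's a toy lumped-mass modal analysis; all the numbers are illustrative, and the real analysis was done in Robot Structural Analysis on the bridge model.

    import numpy as np

    # Beam lumped into a chain of masses and springs, fixed at both ends.
    N = 30                      # lumped nodes along the span
    k, m = 1.0e6, 100.0         # spring stiffness (N/m) and nodal mass (kg), assumed
    K = np.zeros((N, N))
    for i in range(N):
        K[i, i] = 2.0 * k
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k

    # Generalized eigenproblem K v = w^2 M v, with M = m * I here.
    w2, modes = np.linalg.eigh(K / m)
    freqs_hz = np.sqrt(w2) / (2.0 * np.pi)

    # Mode 1 bows in the middle; mode 2 is the double-hump shape. Gauges go
    # where the modes of interest have large amplitude, away from their nodes.
    for mode in (0, 1):
        peak = np.argmax(np.abs(modes[:, mode]))
        print(f"mode {mode + 1}: {freqs_hz[mode]:.1f} Hz, peak at node {peak}")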

The accelerometers are underfoot. We didn't have enough to put as many as we wanted, so we put them right down the center, along these rigid panels. And we arrayed them there.

We put the CO2 sensors and environmental sensors above the doors and then pointed the motion detector straight down, so that we could see the door open and close and get good data from that, as a good baseline. We also put light sensors and sound sensors in that location, to understand both day-lighting and automatic-light situations.

To encode the sensors into the model, we developed a custom network sensor family which we used to position the sensors in Revit. And the linkage to the database and everything that happens in Forge is done by adding extra metadata or mark values within those. So, as we deploy instances of those families, we make sure that those are encoded, in there, so that we know where things are placed and what they are and what their linkage is back to the system.
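
Conceptually, that linkage is a lookup from the authored mark value to a stream identifier. A toy sketch, with hypothetical names on both sides:

    # Revit Mark value -> time-series stream ID. Both sides are hypothetical;
    # the point is that the authored metadata is the join key between the BIM
    # element and its live data.
    SENSOR_LINKAGE = {
        "SG-DECK-03": "data360://pier9-bridge/strain/03",
        "ACC-MID-01": "data360://pier9-bridge/accel/01",
    }

    def stream_for_element(mark):
        # Called by the viewer when it resolves a selected element's Mark.
        return SENSOR_LINKAGE.get(mark)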

For Dasher 360, you really need to define rooms. Naming conventions become critically important. BIMs can be inconsistent. In this case, we started with a BIM that had been prepared by a different architectural firm, for Autodesk, of the entire pier. It's a highly detailed BIM, but it required some massaging and some renaming and some addition of room structures.

We also had to correct the heights of some of the elements, so that they would display properly. For elements that span multiple floors, for our visualization system, we do have to split them up. So we have to create parts and divide them by level, to assign those parts to a level, so that, when we have that beautiful breadcrumb navigation in our system, you see the right pieces in the right place.

So this was what the final initial install looked like. We spent about a week assembling it, but we really didn't consider cable routing as part of that project. We might try Inventor's cable routing in future iterations. But, because we pushed so much responsibility to the cloud, we really didn't spend that much time programming. We were able to mostly focus on the install.

Some of our initial results were very clear. We saw a very poor signal-to-noise ratio with our 8-bit accelerometers. There's still signal there, but it's a very challenging input for the machine learning we're applying.

The strain gauges came in very clear. But, as I said earlier, you can see some of the digital noise and warts in both of them. You can see these spikes, here. We didn't want to hide those things; we wanted to understand them, to see if they were from different electrical equipment turning on and off.

So some of the early machine learning and some of the early results. So, although our eventual goal is to really not depend on video, video's an incredibly powerful sensor when we combine it with computer vision. We can use the computer vision to help filter and pare down our massive data sets, by showing events like motion and so on.

So here we see the computer vision. So, recently, we've even added more labels, through these visual cues, to understand when people are actually stepping on sensors. So we can create this highly annotated data set, in a semiautomated way, and filter terabytes of data down to megabytes and make it a far more manageable process.

So really our quest in the early machine learning was to find the best features for training. Autodesk has standardized on TensorFlow for all of its machine learning. And there are other talks at AU-- I encourage you to go-- that will tell you more about TensorFlow.

And we really wanted to start with the noise from those noisy accelerometers, to understand when we had signal and when we didn't. We know a human can see the pattern, but can the machine? So we initially started with a logistic-regression binary classifier. A binary classifier is something that tells you, yes, there's signal or no, there isn't signal, in just sort of a binary way.

Our initial training was not bad. We used variance as a feature, a statistical variance, and we got about 89% accuracy, right out of the box, with a fixed window size. Since then, we've been trying various forms of fast Fourier transforms, to transform the data and find orthogonal features for training.
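
A minimal sketch of that first classifier, using synthetic data and scikit-learn for brevity (the production work used TensorFlow); the window size and noise levels are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    WINDOW = 256                       # fixed window size (illustrative)
    rng = np.random.default_rng(0)

    # Synthetic stand-ins: noise-only windows vs. footstep-like windows.
    quiet = rng.normal(0.0, 0.01, (200, WINDOW))
    steps = rng.normal(0.0, 0.05, (200, WINDOW))

    # One feature per window: its statistical variance.
    X = np.concatenate([quiet, steps]).var(axis=1, keepdims=True)
    y = np.array([0] * 200 + [1] * 200)    # 0 = no signal, 1 = signal

    clf = LogisticRegression().fit(X, y)
    print("training accuracy:", clf.score(X, y))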

Really, what our early efforts in machine learning have shown us is that we really need better labeling of the data. So the computer vision got us a big chunk of the way there. We actually take the results of the computer vision and pump it back into the system, as a virtual sensor. So we're able to go to that virtual sensor-- I was going to show it in the demo-- and actually see the actual correlation, so you know when somebody's there. This is when the computer vision identified that there was an occupant on the bridge. And that allows you to quickly, rapidly, navigate to all those other data sets and line them up, so that you can do more statistical analysis.

But we want to do more human-based labeling. So we've actually started a recent research project to try to capture the semantic knowledge that we as humans use in the data analysis and integrate all that semantic knowledge into the tools, through labels and annotations and so on.

So here are some of the lessons learned on the bridge. If you're going to start from scratch, really consider the placement of sensors, and the kinds of sensors, in your design; that's one of the things we weren't able to do with MX3D. They should be an integrated part of the design, right up front. We hope that, in the future, when we understand these workflows better, we can bake the sensor design into a generative workflow. But this is still really an open area of research for us at Autodesk.

Be very mindful of the cabling. We sort of sacrificed ease of use to reuse our cheap, old sensors, but that cost us a lot more time down the road. If we had just used digital sensors, IDC sensors, for the accelerometers off the bat, we probably could have saved pretty much a whole day of soldering.

Cable management is really a nontrivial task, and we deeply regret not trying to use our Inventor software to do this. That's probably the next thing we're going to do at this stage. One thing we did discover is that the cleaning staff don't really care if you're running an experiment. They would routinely unplug the devices, and our screens would light up with various alerts. So, really, it's important to have more self-monitoring. We want to add UPS and other sorts of power conditioning to the system and really improve our alarms and sensing.

Sensor commissioning and placement is an important part of the process. We authored where the sensors were, but we didn't have a really strong technique for going and validating those positions. Things could be off by a few millimeters, and these errors can propagate through the system. So it's important to nail them down early.

Now, one takeaway is that our proxy, our prototype, doesn't solve all the problems that we're going to have with the real bridge. It proved out the data pipeline and helped us understand the fidelity that we need, but there are a lot of details around attaching actual gauges to this rough material that still need to be solved. We think we're going to have to weld, grind down, and polish these surfaces in order to get the various gauges on there. Even affixing optical strain gauges could be a problem. So these are still open challenges.

We're also beautifying the bridge. We've used Autodesk generative design to create a whole set of absolutely beautiful brackets to manage cable routing. These were all verified with Nastran. And this is what the in situ VR visualization that Brandon has completed for us looks like, so we can see this in context.

My apologies for starting late. And if there's any time for questions, I'll take them now. Thank you very much.

[APPLAUSE]

SpeedCurve
We use SpeedCurve to monitor and measure the performance of your website experience by measuring web page load times as well as the responsiveness of subsequent elements such as images, scripts, and text.SpeedCurve Privacy Policy
Qualified
Qualified is the Autodesk Live Chat agent platform. This platform provides services to allow our customers to communicate in real-time with Autodesk support. We may collect unique ID for specific browser sessions during a chat. Qualified Privacy Policy

icon-svg-hide-thick

icon-svg-show-thick

Improve your experience – allows us to show you what is relevant to you

Google Optimize
We use Google Optimize to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Google Optimize Privacy Policy
ClickTale
We use ClickTale to better understand where you may encounter difficulties with our sites. We use session recording to help us see how you interact with our sites, including any elements on our pages. Your Personally Identifiable Information is masked and is not collected. ClickTale Privacy Policy
OneSignal
We use OneSignal to deploy digital advertising on sites supported by OneSignal. Ads are based on both OneSignal data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that OneSignal has collected from you. We use the data that we provide to OneSignal to better customize your digital advertising experience and present you with more relevant ads. OneSignal Privacy Policy
Optimizely
We use Optimizely to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Optimizely Privacy Policy
Amplitude
We use Amplitude to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Amplitude Privacy Policy
Snowplow
We use Snowplow to collect data about your behavior on our sites. This may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, and your Autodesk ID. We use this data to measure our site performance and evaluate the ease of your online experience, so we can enhance our features. We also use advanced analytics methods to optimize your experience with email, customer support, and sales. Snowplow Privacy Policy
UserVoice
We use UserVoice to collect data about your behaviour on our sites. This may include pages you’ve visited. We use this data to measure our site performance and evaluate the ease of your online experience, so we can enhance our platform to provide the most relevant content. This allows us to enhance your overall user experience. UserVoice Privacy Policy
Clearbit
Clearbit allows real-time data enrichment to provide a personalized and relevant experience to our customers. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID.Clearbit Privacy Policy
YouTube
YouTube is a video sharing platform which allows users to view and share embedded videos on our websites. YouTube provides viewership metrics on video performance. YouTube Privacy Policy

icon-svg-hide-thick

icon-svg-show-thick

Customize your advertising – permits us to offer targeted advertising to you

Adobe Analytics
We use Adobe Analytics to collect data about your behavior on our sites. This may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, and your Autodesk ID. We use this data to measure our site performance and evaluate the ease of your online experience, so we can enhance our features. We also use advanced analytics methods to optimize your experience with email, customer support, and sales. Adobe Analytics Privacy Policy
Google Analytics (Web Analytics)
We use Google Analytics (Web Analytics) to collect data about your behavior on our sites. This may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. We use this data to measure our site performance and evaluate the ease of your online experience, so we can enhance our features. We also use advanced analytics methods to optimize your experience with email, customer support, and sales. Google Analytics (Web Analytics) Privacy Policy
AdWords
We use AdWords to deploy digital advertising on sites supported by AdWords. Ads are based on both AdWords data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that AdWords has collected from you. We use the data that we provide to AdWords to better customize your digital advertising experience and present you with more relevant ads. AdWords Privacy Policy
Marketo
We use Marketo to send you more timely and relevant email content. To do this, we collect data about your online behavior and your interaction with the emails we send. Data collected may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, email open rates, links clicked, and others. We may combine this data with data collected from other sources to offer you improved sales or customer service experiences, as well as more relevant content based on advanced analytics processing. Marketo Privacy Policy
Doubleclick
We use Doubleclick to deploy digital advertising on sites supported by Doubleclick. Ads are based on both Doubleclick data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Doubleclick has collected from you. We use the data that we provide to Doubleclick to better customize your digital advertising experience and present you with more relevant ads. Doubleclick Privacy Policy
HubSpot
We use HubSpot to send you more timely and relevant email content. To do this, we collect data about your online behavior and your interaction with the emails we send. Data collected may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, email open rates, links clicked, and others. HubSpot Privacy Policy
Twitter
We use Twitter to deploy digital advertising on sites supported by Twitter. Ads are based on both Twitter data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Twitter has collected from you. We use the data that we provide to Twitter to better customize your digital advertising experience and present you with more relevant ads. Twitter Privacy Policy
Facebook
We use Facebook to deploy digital advertising on sites supported by Facebook. Ads are based on both Facebook data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Facebook has collected from you. We use the data that we provide to Facebook to better customize your digital advertising experience and present you with more relevant ads. Facebook Privacy Policy
LinkedIn
We use LinkedIn to deploy digital advertising on sites supported by LinkedIn. Ads are based on both LinkedIn data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that LinkedIn has collected from you. We use the data that we provide to LinkedIn to better customize your digital advertising experience and present you with more relevant ads. LinkedIn Privacy Policy
Yahoo! Japan
We use Yahoo! Japan to deploy digital advertising on sites supported by Yahoo! Japan. Ads are based on both Yahoo! Japan data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Yahoo! Japan has collected from you. We use the data that we provide to Yahoo! Japan to better customize your digital advertising experience and present you with more relevant ads. Yahoo! Japan Privacy Policy
Naver
We use Naver to deploy digital advertising on sites supported by Naver. Ads are based on both Naver data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Naver has collected from you. We use the data that we provide to Naver to better customize your digital advertising experience and present you with more relevant ads. Naver Privacy Policy
Quantcast
We use Quantcast to deploy digital advertising on sites supported by Quantcast. Ads are based on both Quantcast data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Quantcast has collected from you. We use the data that we provide to Quantcast to better customize your digital advertising experience and present you with more relevant ads. Quantcast Privacy Policy
Call Tracking
We use Call Tracking to provide customized phone numbers for our campaigns. This gives you faster access to our agents and helps us more accurately evaluate our performance. We may collect data about your behavior on our sites based on the phone number provided. Call Tracking Privacy Policy
Wunderkind
We use Wunderkind to deploy digital advertising on sites supported by Wunderkind. Ads are based on both Wunderkind data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Wunderkind has collected from you. We use the data that we provide to Wunderkind to better customize your digital advertising experience and present you with more relevant ads. Wunderkind Privacy Policy
ADC Media
We use ADC Media to deploy digital advertising on sites supported by ADC Media. Ads are based on both ADC Media data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that ADC Media has collected from you. We use the data that we provide to ADC Media to better customize your digital advertising experience and present you with more relevant ads. ADC Media Privacy Policy
AgrantSEM
We use AgrantSEM to deploy digital advertising on sites supported by AgrantSEM. Ads are based on both AgrantSEM data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that AgrantSEM has collected from you. We use the data that we provide to AgrantSEM to better customize your digital advertising experience and present you with more relevant ads. AgrantSEM Privacy Policy
Bidtellect
We use Bidtellect to deploy digital advertising on sites supported by Bidtellect. Ads are based on both Bidtellect data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bidtellect has collected from you. We use the data that we provide to Bidtellect to better customize your digital advertising experience and present you with more relevant ads. Bidtellect Privacy Policy
Bing
We use Bing to deploy digital advertising on sites supported by Bing. Ads are based on both Bing data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Bing has collected from you. We use the data that we provide to Bing to better customize your digital advertising experience and present you with more relevant ads. Bing Privacy Policy
G2Crowd
We use G2Crowd to deploy digital advertising on sites supported by G2Crowd. Ads are based on both G2Crowd data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that G2Crowd has collected from you. We use the data that we provide to G2Crowd to better customize your digital advertising experience and present you with more relevant ads. G2Crowd Privacy Policy
NMPI Display
We use NMPI Display to deploy digital advertising on sites supported by NMPI Display. Ads are based on both NMPI Display data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that NMPI Display has collected from you. We use the data that we provide to NMPI Display to better customize your digital advertising experience and present you with more relevant ads. NMPI Display Privacy Policy
VK
We use VK to deploy digital advertising on sites supported by VK. Ads are based on both VK data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that VK has collected from you. We use the data that we provide to VK to better customize your digital advertising experience and present you with more relevant ads. VK Privacy Policy
Adobe Target
We use Adobe Target to test new features on our sites and customize your experience of these features. To do this, we collect behavioral data while you’re on our sites. This data may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, your IP address or device ID, your Autodesk ID, and others. You may experience a different version of our sites based on feature testing, or view personalized content based on your visitor attributes. Adobe Target Privacy Policy
Google Analytics (Advertising)
We use Google Analytics (Advertising) to deploy digital advertising on sites supported by Google Analytics (Advertising). Ads are based on both Google Analytics (Advertising) data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Google Analytics (Advertising) has collected from you. We use the data that we provide to Google Analytics (Advertising) to better customize your digital advertising experience and present you with more relevant ads. Google Analytics (Advertising) Privacy Policy
Trendkite
We use Trendkite to deploy digital advertising on sites supported by Trendkite. Ads are based on both Trendkite data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Trendkite has collected from you. We use the data that we provide to Trendkite to better customize your digital advertising experience and present you with more relevant ads. Trendkite Privacy Policy
Hotjar
We use Hotjar to deploy digital advertising on sites supported by Hotjar. Ads are based on both Hotjar data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Hotjar has collected from you. We use the data that we provide to Hotjar to better customize your digital advertising experience and present you with more relevant ads. Hotjar Privacy Policy
6 Sense
We use 6 Sense to deploy digital advertising on sites supported by 6 Sense. Ads are based on both 6 Sense data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that 6 Sense has collected from you. We use the data that we provide to 6 Sense to better customize your digital advertising experience and present you with more relevant ads. 6 Sense Privacy Policy
Terminus
We use Terminus to deploy digital advertising on sites supported by Terminus. Ads are based on both Terminus data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that Terminus has collected from you. We use the data that we provide to Terminus to better customize your digital advertising experience and present you with more relevant ads. Terminus Privacy Policy
StackAdapt
We use StackAdapt to deploy digital advertising on sites supported by StackAdapt. Ads are based on both StackAdapt data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that StackAdapt has collected from you. We use the data that we provide to StackAdapt to better customize your digital advertising experience and present you with more relevant ads. StackAdapt Privacy Policy
The Trade Desk
We use The Trade Desk to deploy digital advertising on sites supported by The Trade Desk. Ads are based on both The Trade Desk data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that The Trade Desk has collected from you. We use the data that we provide to The Trade Desk to better customize your digital advertising experience and present you with more relevant ads. The Trade Desk Privacy Policy
RollWorks
We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

Are you sure you want a less customized experience?

We can access your data only if you select "yes" for the categories on the previous screen. This lets us tailor our marketing so that it's more relevant for you. You can change your settings at any time by visiting our privacy statement

Your experience. Your choice.

We care about your privacy. The data we collect helps us understand how you use our products, what information you might be interested in, and what we can improve to make your engagement with Autodesk more rewarding.

May we collect and use your data to tailor your experience?

Explore the benefits of a customized experience by managing your privacy settings for this site or visit our Privacy Statement to learn more about your options.