AU Class

Automating Design Workflows Using ArcGIS Machine-Learning Tools and Aerial Location Intelligence


Description

In partnership with Esri and Nearmap, Kimley-Horn is using high-resolution aerial imagery and deep-learning tools to transform workflows, gain unprecedented location insights, and make project delivery easier, faster, and more efficient. Attendees will learn how to address common firm challenges, improve design accuracy, and reduce rework using pretrained machine-learning models and up-to-date site intelligence. We'll discuss a wide variety of workflows and use cases, including how to automate image feature identification and extraction and seamlessly integrate those features into project drawings using the Autodesk Connector/ArcGIS for AutoCAD plug-ins.

Key Learnings

  • Learn about integrating high-resolution aerial imagery and data in design projects to reduce rework and improve project outcomes.
  • Learn how to use Esri ArcGIS deep-learning tools to automate image feature identification and extraction.
  • Learn how to incorporate extracted features into project drawings using the Autodesk Connector/ArcGIS for AutoCAD plug-ins.

Speakers

  • David Garrigues
    David is the Head of Engineering Applications and has been with Kimley-Horn for over 17 years. He is a change agent and relationship builder with a passion for anything that involves engineering. David has been a popular speaker at Autodesk University for many years and has been featured in both CADalyst and AWWA. As a presenter, David's energetic nature and enthusiasm make him easily relatable.
  • Brett Heist
    Brett Heist is a Solution Engineer at Esri with over a decade of expertise in GIS within the Engineering and Construction sectors. His extensive experience is complemented by certification as a drone pilot, enabling him to leverage aerial data for enhanced project insights. Brett is known for his innovative approach, constantly seeking ways to automate processes and drive efficiency. Passionate about helping organizations achieve excellence, Brett is dedicated to pushing boundaries and fostering collaboration in the industry.
    Transcript

    DAVID GARRIGUES: Hi, everyone, and good day. Welcome to our class. And we're going to show you how to use AI to extract features from high-resolution imagery. Let's get started with a few introductions. I'm David Garrigues. I'm the head of engineering applications here at Kimley-Horn. Been here for about 17 years. And a fun fact about me, I'm a big-time foodie. My wife and I, we love to cook, and we love to dine out. So if you are one of those, hit me up. I'd love to talk about it. Jeff.

    JEFF SAUNDERS: Thanks, David. Hi, my name is Jeff Saunders. I'm a director of product management at Nearmap. I've spent the last two decades building products and solutions for the built environment. And although I know many of you have beaten this number, this is my 12th Autodesk University. So excited to be here, excited to share our presentation with you. Brett, over to you.

    BRETT HEIST: Thanks, Jeff. Yeah, Brett Heist. I'm a solutions engineer here at Esri. And this is actually my first AU. And excited to be here supporting and presenting along with these two wonderful gentlemen here. I've been in these sectors for a little over a decade. And a fun fact about me is I once got the opportunity to serve John C. Reilly ice cream. Back to you, David.

    DAVID GARRIGUES: Excellent. Thanks, Brett. And an honorable mention: he couldn't be here today, but his name is Kyle Starkes. What a phenomenal developer. He's a solution engineer over there at Nearmap. And he's the one who helped build the tool, and design it, and make it all happen. So I did want to make sure he got an honorable mention.

    With that said, let's talk about what the agenda is. So first of all, I'm going to start off with, what was the big idea? How did we get here? What happened behind the scenes? Then we're going to move on to Jeff. And Jeff's going to talk about laying the groundwork with Nearmap and high-resolution imagery, and making AI-extractable features possible. Lastly, we're going to end up with Brett. Brett's going to explore the opportunities that we have with AI at Esri. And then we'll have some time for Q&A. Or you guys can email us at any time. So we'll show that at the very end.

    So the big idea. So yes, I've been with Kimley-Horn for over 17 years, and I'm responsible for all of our engineering software, whether that's quality, deployment, programming, or training. But what I'm most proud of is the opportunity to be able to turn a software vendor into a software partner. Partnerships involve work. And they involve working toward a common goal. So when we talk about these things, we need to understand that we have to keep working together, and you have to dedicate time to each other.

    And that's what you're seeing here today. This is the result of time well spent. So when we take a look at Nearmap, we had to go and enter into a new type of partnership, a venture, if you will. And then, later on, do the same thing again with Esri. Now, Nearmap can absolutely deliver the high-end imagery that's needed for the work we're going to start with. But to be a great partner, we can't simply satisfy our own needs. We must truly work to be a spokesperson for the industry at large.

    So starting with Nearmap, we needed two things. We needed to be able to extract features from an image, such as pavement, car spaces, landscaping. But then we also needed a faster way to download high-end imagery in larger scales. So let's take a look at that.

    So what we're seeing here is-- currently, if you go up to Nearmap's website, you can see that I'm starting off at 0.075 meters per pixel over here. The problem is that as I increase the size, the resolution drops. So you can see now I'm at 0.299. And the bigger I get, now at 0.597. So how can I get the highest resolution possible? Well, currently what we had to do was go and set it, and then download the files.

    Then move it over a little bit. And then get that overlap going. And then download the files. And then move it over a little bit. And then get the overlap. Then download-- you guys get the picture. This is what we had to do, over and over again. Is there a simpler way? And the answer is, yes. But we have to do some programming. Now, I've done a lot of programming in my time, but this is a little bit above my head. But if you really wanted to know how all this happens and what it does, well then, pause here and talk to Kyle.
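    The manual loop described here-- set the window, download, shift it over with some overlap, repeat-- is the part the tool automates. A minimal sketch of that tiling logic, where the tile size and overlap fraction are illustrative values rather than Nearmap's actual API parameters:

```python
def tile_origins(width, height, tile, overlap):
    """Return (x, y) origins of square tiles of side `tile` covering a
    width x height area, with each tile overlapping its neighbor by
    `overlap` (a fraction between 0 and 1)."""
    step = tile * (1 - overlap)              # distance between tile origins
    xs, ys = [], []
    x = 0.0
    while x + tile < width:
        xs.append(x)
        x += step
    xs.append(max(width - tile, 0.0))        # last column flush with the edge
    y = 0.0
    while y + tile < height:
        ys.append(y)
        y += step
    ys.append(max(height - tile, 0.0))       # last row flush with the edge
    return [(x, y) for y in ys for x in xs]

# e.g. a 5,000 x 3,000 ft site with 2,000 ft tiles and 10% overlap
origins = tile_origins(5000, 3000, 2000, 0.10)
```

    Each origin corresponds to one download in the manual workflow, which is why automating the loop pays off quickly on large sites.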

    But this is how it all works. And on our side, Paige Dyer-- she couldn't be here either, but Paige was the one who worked with Kyle and got all this working. So with that said, what did we go and do? Well, we had to go and invent this tool. This tool is allowing us to go and make a square, just like I showed before. So in order to do that, this right here is going to allow us to go out and grab the lat and long. To do that, just go out to Google. You can right-click on any spot and it will give you the lat and long.

    All you do is pick on it, and it will copy that to the clipboard. And then we go back into our tool, and we can paste it in. By moving the cursor down into the next field, it automatically puts anything after the comma onto the next line. And then we get a radius. So while we're defining a radius, we, Kimley-Horn, put a max of 3,000 on our people. Next thing is the EPSG code. So what's an EPSG code? Well, we have to have a translator from Web Mercator back to NAD83. So in my case, I just went to Google, typed in EPSG code for North Carolina, and then I ended up with 2264 for NAD83 foot.
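    Behind the scenes, a center point plus a radius like this resolves to a square bounding box before any reprojection into the target EPSG code. A rough sketch of that step, using the standard approximation that one degree of latitude is about 364,000 feet (the actual tool presumably performs a proper EPSG 2264 transform instead):

```python
import math

FT_PER_DEG_LAT = 364_000  # rough feet per degree of latitude

def bbox_from_center(lat, lon, radius_ft):
    """Approximate square bounding box (min_lon, min_lat, max_lon, max_lat)
    around a center point; a degree of longitude shrinks by cos(latitude)."""
    dlat = radius_ft / FT_PER_DEG_LAT
    dlon = radius_ft / (FT_PER_DEG_LAT * math.cos(math.radians(lat)))
    return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)

# a point in Raleigh, NC with the 3,000 ft radius cap mentioned in the class
box = bbox_from_center(35.7796, -78.6382, 3000)
```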

    You can do the same thing. There's lots of sites to go and do this at. But I come back over here in my tool and I type in the EPSG code. Now, I've got three formats. JPEG, PNG, and TIFF. I can assure you you're going to want to use JPEG. So that's if we wanted to go just to develop a square. I know we said radius, so the radius fills out a square. But what if we want to do an irregular shape, something like a polygon? Again, I still have to put in my EPSG code.

    But this time, what I'm going to do is use a tool from geojson.io. And what I can do is type in Raleigh, which is where I'm located. I can type in Raleigh, North Carolina. I can even type it in wrong and it'll find it, fortunately for me. And so then what happens is, as I zoom in, I get to determine how I want to create this new area. I can do it circular. I can do it with a rectangle. I can do it with just a line. Or in this case, what I'm going to do is draw a polygon.

    So now, these polygons are editable. So if you miss your pick or whatever, you can go back and fix that. Now, here's the reality. At Kimley-Horn, we put a stipulation that we were not going to allow our staff to go more than two miles by a thousand feet, which is equal to about 10,000,000 square feet. All right? So we put that stipulation on people, because we don't want everybody to go download the Earth.

    So now what's going to happen is you can take a look and see all the info we have. And you can see that I'm underneath 10 million, so I'm in good shape. And that was something that we did, not Nearmap. We put that stipulation on our people. So now what's happening is I've got the code over here, and I can copy it. There's a little button over there-- when I press that, I've copied it to the clipboard. All you've got to do is go back into our tool and paste it in.
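    The 10-million-square-foot check on a drawn polygon is a straightforward shoelace-formula computation once the vertices are in a projected, foot-based system such as the EPSG 2264 code mentioned earlier. A sketch, assuming the coordinates have already been projected to feet:

```python
def polygon_area_sqft(vertices):
    """Shoelace area of a simple polygon whose vertices are (x, y) in feet."""
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

MAX_AREA_SQFT = 10_000_000  # the internal Kimley-Horn cap from the class

def within_download_cap(vertices):
    return polygon_area_sqft(vertices) <= MAX_AREA_SQFT

# a 2,000 x 3,000 ft rectangle is 6,000,000 sq ft: under the cap
rect = [(0, 0), (2000, 0), (2000, 3000), (0, 3000)]
```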

    When I do, I just get to pick where I want to go. And so I'm going to put it in my AU folder over here. And then I can pick the output. Now, because I've got movie magic going on-- this normally takes about 15 minutes, but hey, I just shortened up the clip for you. So now, download complete. What's the next step? Let's talk about Civil 3D.

    So when we go into Civil 3D, one of the first things we're going to want to do is change our drawing settings over here. In my case, I happen to know my code. NC83F is my code, so I go and set that for the entire drawing. Now, what I've also done is I went into map aerial, so that's my geolocation right there-- by the way, that's going to be going away soon, and Esri is going to pick up the tab for us. So thank you again, Esri. I'm using the I Insert command over here. And as you can see, I've pumped this image out several times here.

    It only gave out one image, by the way. But I practiced a few times, so we've got several here. So I bring that one in. And inside here, it takes a little while to get it going. This is a pretty big image-- like 100 meg, maybe 115, 120 meg. So I'm going next, next, next. And then I'm going to zoom extents. And then I'm going to see myself down over here. And I can see I've got some black areas. Which is fine, because I didn't need that much data.

    You remember how my shape works? I'm going to go over here and set transparency for the image. And I'm also going to set transparency for the color. So I'm going to match that to the black background-- I'm picking that in the black background. And then it takes a while to churn through it a little bit. But then I get the image that I wanted, and I can see through it transparently. And I can see the images in the background as well. But how good is this resolution?

    Well, if we zoom in a little bit, we're going to see the difference between what Bing has and what Nearmap has. So here, you can see pavement cracking. You can see all the paint stripes just as well. I mean, the resolution's incredible. So this is where we want to be. So if we take a look at the overview of what happens in a project, we all know we can sit there and take our data and go and get images today, and things like that. But now, I've got something else going on in the existing conditions.

    I can now kick this out to ArcGIS Pro. I've got some opportunities inside Nearmap as well for exporting out these features. I can go out and go find a building footprint. Maybe go find out some pavement. Go do those kind of things. And now, I can take those-- once those assets of features have been identified, I can move them straight into my design. With that said, let's hear more from Jeff.

    JEFF SAUNDERS: Well, thank you, David. This is a great handoff to really start talking about how we looked at laying the groundwork, and laying the groundwork in developing projects for the workflow that David just showed. So let's start with some of the key foundational data. I think David said it really well in that having high-resolution aerial imagery at the start of the project, making it the foundational start, really helps to define what can be done with that imagery, and what information on site conditions can be used at the outset.

    So let me start by talking a little bit about some of the remote sensing technology. There are a number of different remote sensing technologies out there, from satellite to drones to ground surveys. Nearmap's approach is somewhat unique in that we design and build our own camera systems, and we capture imagery from manned aircraft. What that gives us is this sweet spot at 2.5 to 7.5 centimeters GSD, which allows us to see a lot of detail with amazing clarity in that imagery.

    And it's important to note that we fly all of the urban areas in North America on a proactive basis. So we have a program that runs annually. We capture multiple times a year across all of those areas. And that allows us to do a lot of exciting things to provide on-demand, as-needed data to projects as you're starting them. But what does 2.5 to 7.5 centimeter GSD really look like? Let's take a look at a few examples.

    So on your right here, that's one of the captures we've done at 4.5 centimeters. And on the left, similar to David's example that he was showing side by side, there's the satellite imagery at a native 30 centimeters. Now, there's a lot of great work being done in satellite AI enhancement. But again, here's a comparison between an enhanced 15-centimeter image and an aerial image from Nearmap on the right-hand side. So you can see the clarity. You can see the crispness of the imagery, and what you can do with that.

    But let's dive in a little bit deeper. So if we look closer at each of these at this location, you can start to see the tiles. You can see a lot of different roof-related objects here. You can see a lot of architectural artifacts. So this really does become a very powerful set of data as you're looking to start your projects out to really understand site conditions, to understand what's out there currently. And to be able to use that from the start of your project. And also be a foundational piece for AI-derived extraction, which we'll talk about a little bit more later.
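    As a quick sanity check on what those GSD numbers mean on the ground, an image's ground footprint is just its pixel dimensions times the ground sample distance. The pixel counts below are illustrative, not Nearmap's actual sensor dimensions:

```python
def ground_coverage_m(width_px, height_px, gsd_cm):
    """Ground footprint in meters of an image captured at a given ground
    sample distance; at 7.5 cm GSD each pixel spans 7.5 cm on the ground."""
    gsd_m = gsd_cm / 100.0
    return width_px * gsd_m, height_px * gsd_m

# a hypothetical 10,000 x 10,000 px capture at each end of the GSD range
w_hi, h_hi = ground_coverage_m(10_000, 10_000, 2.5)   # ~250 m on a side
w_lo, h_lo = ground_coverage_m(10_000, 10_000, 7.5)   # ~750 m on a side
```

    The same pixel budget covers nine times the area at 7.5 cm, which is the trade-off behind the resolution drop David showed when enlarging the download window.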

    Let's talk a little bit more-- and this speaks to the accident that happened earlier this year that affected the civil infrastructure in the Baltimore area. We worked with responding agencies and provided our imagery. As it turned out, we were proactively flying this area at the time as well. So it did coincide with that, and enabled us to quickly support the response as it unfolded. So we talked earlier about capturing vertical imagery and high-resolution aerial imagery from that perspective.

    Nearmap also captures oblique imagery, which helps us do a lot of exciting things, and support a lot of events like this one as they show up. So this allows us to really zoom in, see some more detail about what the impact of this accident looked like. And where different structural impacts may be happening, how to best support responding crews as they're addressing the issue at hand. Let's change gears one more time here. David showed this as well.

    But here, we're looking at an airport and looking all the way down. As we zoom in further and further, we can see cracks on the runway and on the taxiway here. And that allows us to do a lot-- make a lot of smart decisions early on in the project. So again, getting crisp information on the site conditions, on what exists out there in the field is really valuable and an important starting point. But as we talked about earlier, as David alluded to as well, there's a lot more that we can do with this data.

    Since we're capturing not just vertical imagery but also oblique imagery, it allows us to build out 3D data as well as AI-derived insights and terrain information that can really be key components to starting your engineering projects. So I mentioned vertical capture imagery-- we have that. Oblique imagery-- we're capturing that as well, which allows us to create a derived panorama view, as well as 3D data and AI-derived insights. We also have a post-catastrophe focused product that allows us to capture imagery when catastrophic events happen. And we're able to help responding agencies and others respond to those claims.

    But in terms of 3D data, this allows us to create, essentially, plug-and-play data for your projects: a textured mesh and a point cloud. And you'll probably see some examples of that here at Autodesk University. Our point cloud works within ReCap and InfraWorks. There are digital surface models and digital elevation models to start to work with contours and terrain at the beginning of your project. And a true ortho, 3D-derived imagery, which allows us to correct for some of the lean in buildings that may come from vertical imagery.

    David mentioned this earlier, too. But a key piece of all of our work and the partnerships that we forged with Esri and Autodesk is that they allow us to create these plug-and-play data sets and insights that can be used across the building and infrastructure project lifecycle. And that allows us to bring data directly into Autodesk, to bring data into Esri and share it with Autodesk. It really helps us to support the full gamut of workflows that would come into play for building and infrastructure projects.

    So we're going to look at two Kimley-Horn projects as examples. I'm going to talk about an example here at the Orlando International Airport. And Brett's going to tackle the Sphere, because it wouldn't be an AU without at least referencing Las Vegas at some point. So we'll get to that in a minute. But let me start with a video. This speaks to the Orlando International Airport. And so here, we're in Nearmap's MapBrowser product to look at where the imagery is.

    This is very similar to David's starting video. But there's a few things that I wanted to highlight here. So we're looking at an area in the southern portion of the Orlando airport. And this is a project that is in place today. But if we wanted to use a time machine and pretend that we were back in December of 2017, we'd see that there isn't much here on this property area. But we might need to use that as a starting point for our design project. And from this, we get, obviously, the vertical imagery on the site conditions.

    Now we can compare them side by side. But in the simpler form of what David showed, I'm going to just pull a set of images and 3D data from that location for that capture area. And you can use this as a way to look back at historical data, or to look at current conditions as they stand today. And so this really allows us to extract this content. And allows us to choose what types of data we want to get, and at what accuracy.

    Where does all this come into play? As you're looking at automating your design workflows, looking at how to get really good site conditions, and understanding all the information around a project at the start, there are a lot of long-lead-time deliverables where this imagery can really come into play. Whether it's site suitability studies, environmental studies, traffic studies, or a broader stakeholder review and commentary, this content-- whether it's 2D vertical imagery, oblique imagery, or 3D-related data and AI-- can all help you look at recent, up-to-date data.

    Start with topographic data for your conceptual designs. Look at 3D and oblique contextual information to more broadly assess the environment that you're going to be working in. And ideally, with AI and other tools to reduce and eliminate some of the manual drafting tasks that may exist. Let's take a look. Obviously, the DSM is a nice data set. And it can be really useful. But what is the common question everybody gets when they start a project? Where do I get my topo?

    So here's an example, obviously, bringing this data into Civil 3D. And I'm going to use the ArcGIS for AutoCAD toolset just to show this example. Because Brett and I spent some time curating this data, and we built a project-- a group within ArcGIS Online-- that allowed us to curate some of the key data we wanted to look at. So I'm going to bring in the aerial imagery, the vertical imagery here. And then I'm going to bring in the topo as well.

    And so I sped this up a little bit here to save time. It only took a couple of minutes. But in the interest of the presentation, I wanted to make sure we got to it. But basically, we're going to have contour information showing up there in that purple-magenta color at the bottom. And that really highlights where this project is starting in this case. So we're going to get topo, we're going to create a surface all from the data captured from the aerial imagery. So we get base map data, and we get terrain data, and contours here.

    So I could also bring in the satellite imagery using the FDO tools as a WMS, or WMTS as well. If we want to stream that information, that's another option available. But in this case, I think it was important to highlight the Esri, Autodesk, Kimley-Horn, Nearmap partnership. And how this can produce results for you, and how you can use this information. So again, here we have the contours. I'll turn off the base map for this example. And you can see here we have a set of contours. And we can just verify-- well, actually, the first thing we need to do in this case, since we brought the data in from Esri, is we should look at assigning the elevation attribute field to the contours itself.

    So I forgot to turn on the pop-up menus here when I ran this one. But here, I can assign the attribute of elevation in there, and assign a Scale Factor 1. And basically now what I'm doing is assigning that attribute to each of the contours so they'll each have their own elevation. What this allows me to do-- and I'll do a quick list to show you that it actually did work. But I'm going to create a surface first, and then I will prove to you before I create contours that I'm actually pulling data that has elevation associated with it. So we have a z-value associated with that contour line.
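    The elevation-assignment step described here amounts to copying a per-feature attribute onto each contour's geometry so every vertex carries a z-value. A minimal sketch with hypothetical feature records (the field name and record structure are illustrative, not the actual ArcGIS for AutoCAD data model):

```python
def assign_elevations(contours, field="Elevation", scale=1.0):
    """Lift 2D contour vertices to 3D using a per-feature attribute,
    mirroring the 'assign elevation attribute' step shown in the demo."""
    lifted = []
    for feature in contours:
        z = feature["attributes"][field] * scale  # Scale Factor 1 in the demo
        lifted.append([(x, y, z) for x, y in feature["vertices"]])
    return lifted

# hypothetical contour features as they might arrive from a GIS layer
contours = [
    {"attributes": {"Elevation": 100.0}, "vertices": [(0, 0), (10, 0)]},
    {"attributes": {"Elevation": 102.0}, "vertices": [(0, 5), (10, 5)]},
]
lifted = assign_elevations(contours)
```

    Once every contour vertex carries its elevation, a surface-building step like Civil 3D's TIN creation has the z-values it needs.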

    Now I'm going to add those contours in. This is Nearmap data from December 2017. So we'll start with that. And then we'll let Civil 3D do its magic, and just prove that we now have a surface once we select all the contours in this case. Once we do that, I'll just prove to you that we actually do have a surface-- we have a TIN. And we can use this as a starting point for our project. All in pretty quick succession, without even having to leave your desk, in this case.

    So I'll turn on the TINs here. Just make sure that the TIN is visible through the display manager. And I'll give you just a peek at that TIN in a 3D view. So here we go. We have that TIN; we have a 3D surface. Maybe not super exciting with a lot of rises and falls, but a pretty good data set to get started with. And with this, now we can start our projects and start to do some of our conceptual design as needed.

    But wait, there's more. So David mentioned this from the beginning. But some of the other things that Nearmap provides are mature machine-learning models that allow you to derive deeper insights. And ideally, save you time from some of the work of digitizing, or some of the work of identifying certain conditions or certain material types. So all of this plays into helping to automate and speed up the design process.

    We're also able to provide-- and this showed up in the imagery-- the ability to identify pavement damage, looking at it from both a raster and a vector perspective here. This is information you could pull into your design, into your planning model in Esri or Autodesk, as key starting information. And then here's another example of using the vector version of the detected poles. And that can be, obviously, a starting point as you're looking at utilities and other public-works-related projects.

    But there's a lot of other layers that we provide. This is just a short list of some of the main ones. There may be different use cases that you're looking to solve, and these can come in really handy. But we know, at the same time, these don't solve or answer all of the questions you may have. And that's where we want to do the handoff with Brett and have him show you some of the exciting work where the aerial imagery from Nearmap and the content from Nearmap, as well as the AI detections, plus the work that Esri's been doing, really come into play to automate your design workflows.

    Brett, over to you.

    BRETT HEIST: Thank you so much, Jeff. I just wanted to start off by expressing again how excited I am to be a part of this presentation today, and how truly excited I am by the work that Nearmap and Kimley-Horn have done, and this tool that has been born of this partnership to allow Kimley-Horn to more easily access, download, and import the high-res imagery that Nearmap provides. I think this is really exciting, again, because it is going to provide a lot of opportunity downstream to do more with this imagery.

    And it's becoming not only just a base map and a core element of the project and design lifecycle, but also it's becoming a source of data. I think this technology has finally advanced to the point where it's gone from being interesting to valuable. And that's what I wanted to continue to explore here today and see how we can leverage GeoAI and ArcGIS to again, continue on with these workflows and add value to them.

    But before we hop into that, I did just want to take a brief moment to stop and define GeoAI, and what it means when we say that. So when we talk about GeoAI, we're talking about two different concepts here. We have AI, and this ability to have a machine, or teach a machine, to learn and do human-like tasks that were traditionally not accessible to a machine-- to do things like read, see, learn, analyze, and create.

    And when we look at this now, there are subsets where we're able to take that concept and that framework and those capabilities, and do more specific tasks. Like machine learning, where we can feed specific data sets into a model to have it learn specific patterns. And deep learning, which is an even more specific subset of those previous two. You can think of it as functioning like a human brain, where the computer is really learning complex patterns and concepts by piecing together simpler concepts.

    And it's really not until we then marry these two together with spatial analysis that we get GeoAI. So it's leveraging these capabilities of artificial intelligence with spatial analysis to not only generate and do things like feature extraction, but also do something with those as far as analysis goes. That can help us with making decisions, and asking questions, and getting answers. This is also where the analytic engines that are available through ArcGIS, again, provide that added value for us to be able to do something with this data that we're extracting.

    So whether that's further image and raster analysis, network analysis, or connecting to real-time feeds, the platform really lends itself to continuing on with that feature extraction and providing additional value to what we're actually getting out of that imagery. So if any of you know David like I do-- when I first started talking to him about this, he was like, this is all well and cool, Brett, but what can I do with this? What can I do with this today? That's the real value for me: if I can actually use this and not just talk about it.

    And so that's where I wanted to start as we transitioned into the use of GeoAI within ArcGIS. And that's with our pre-trained models. And so as you can see, we have a lot of models. We're continually adding to these. And we support a lot of different sectors and use cases for these from public safety and transportation to utilities. And these are really great because, just like we saw in Nearmap and their mature AI and machine learning models, that a lot of the work has already been done for you to where you're just able to leverage those results. And point these at some imagery and get back some features or some specific classifications, or things that you would like to have.

    And to get started with these, it's really easy. We have all these deep learning packages available to you. And you can go to the Living Atlas right now and type in those magic keywords of DLPK, and you're going to get a return. As you can see, we have 79 current models that are available. This is a great starting point to get a better understanding of not only what's available, but what's possible. From feature extraction to pixel classification to object detection, these models can do a whole lot of different things.
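    The same DLPK search can be driven programmatically through ArcGIS Online's public item-search endpoint. This sketch only builds the request URL rather than performing it; the item type string follows the ArcGIS REST API's naming for deep learning packages:

```python
from urllib.parse import urlencode

SEARCH_URL = "https://www.arcgis.com/sharing/rest/search"

def dlpk_search_url(keywords="", num=20):
    """Build an ArcGIS Online item-search URL for deep learning packages,
    the same DLPK items surfaced in the Living Atlas."""
    query = 'type:"Deep Learning Package"'
    if keywords:
        query += " " + keywords
    return SEARCH_URL + "?" + urlencode({"q": query, "num": num, "f": "json"})

url = dlpk_search_url("car detection")
```

    Fetching that URL returns JSON describing matching items, which is one way to script the browsing Brett does interactively here.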

    And once you've maybe identified one that you're interested in, this is also where you can find more specific information about that model, as far as the description and what it's really intended to do and its use case. The licensing that's required for you to even run this model within our platform in ArcGIS. We'll have some links to things like a step-by-step guide on how to use the model, and even explanations of the parameters and the arguments that it can take. So you can better understand not only how to use this model, but how it's going to work.

    And then most importantly, the input here. As you can see, we need that high-resolution imagery, and that's where this partnership with Nearmap is really going to shine and give us access to it. We'll also get some information about the output and the applicable geographies where it's been trained, so you know where to expect to be successful with it. And with that, even some accuracy metrics to know how good of a result we're going to get when we're running this model, along with some samples. So we can, again, have leveled expectations going into this-- to not only know what we're going to get, but how well we're going to get it.

    I think right now, the top four models-- you saw we had 79 within there-- are these that you see on your screen right now. We have the Segment Anything Model, which, if you're not familiar with it, was born out of Meta. We brought that into our platform and made it available as a deep learning package. And this model does exactly what the name says: it segments anything. And it's a really powerful tool for being able to, again, get objects out of imagery.

    We also have Building Footprint Extraction and Land Cover Classification. And the one that I love and am most excited about, and we'll talk about here in a little bit, is Text SAM, which is a spin-off of the Segment Anything Model. But it's been integrated with a large language model, an LLM, to provide us with a text prompt. So we can be more specific in what we're segmenting out of the imagery. SAM is a great model, but because of its ability to segment anything, sometimes that produces a lot of noise. And Text SAM really provides us with an opportunity to be more specific in what we want to have segmented out of that imagery.

    So let's move on and hop into a demo, and see what this actually looks like to use, how it works, some expectations of what the results are, and some tips and tricks on how we can improve the results when they're not necessarily what we were expecting. I think this is also a good time to give you a word to keep in mind as we move forward-- as we look at the application of deep learning models, machine learning, and GeoAI in these workflows-- and that word is accelerate.

    This isn't going to replace your current workflows, but it is going to really help accelerate them and get you 70% or 80% of the way to where you need to be. These things aren't going to work perfectly every time, and I think it's just good to be honest and transparent about that. So with that, let's hop into a quick demo to see this in action. I'm starting here in our desktop application, ArcGIS Pro. And as you can see, we're starting with the aerial imagery that was extracted earlier from Nearmap.

    So as I zoom and pan around here, you can see we're still getting that really nice, crisp, high-resolution imagery, where you can see fences, pavement markings, and the other objects contained within that imagery. And from there, it's really easy to get started. So let's hop over to this northwest corner where the parking lot is. Earlier, in that video we were just watching, we saw that I stopped on the car detection model. So we're going to use that one here and see what it looks like to actually run.

    So it's as simple as opening up our Detect Objects Using Deep Learning tool. We just point it toward our imagery. We get the option to give the output a specific name if we want, or we can just leave the default. And then from there, we can pull in our model. What's really nice about this is that we can download and use these models locally, but we can also connect directly to the Living Atlas. The keyword DLPK-- deep learning package-- is going to return all the models that are currently stored there in the Living Atlas. And from there, we can find the one we want here: car detection.

    And then from there, we can just click OK. Depending on your internet connection, this is going to take maybe a few seconds to a minute to load. Then we're presented with our arguments, which in this case we're just going to leave at the defaults and come back to in a second, just so we can see what the results look like running it out of the box. So once I'm ready, I can click Run, which I've done ahead of time. And you can see now, we've got our results.
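    For anyone who would rather script this than click through the dialog, the same tool is exposed in arcpy. Here's a minimal sketch, assuming an ArcGIS Pro Python environment with the Image Analyst extension; the paths and model name are hypothetical placeholders:

    ```python
    def detect_cars(imagery, model_dlpk, out_features, cell_size=None):
        """Sketch of running Esri's Detect Objects Using Deep Learning tool.
        arcpy is imported lazily so this module still loads outside ArcGIS Pro."""
        import arcpy  # only available in an ArcGIS Pro (arcgispro-py3) environment

        arcpy.CheckOutExtension("ImageAnalyst")
        if cell_size is not None:
            # the environment cell size is one of the knobs that affects results
            arcpy.env.cellSize = cell_size
        return arcpy.ia.DetectObjectsUsingDeepLearning(
            in_raster=imagery,                  # e.g. the Nearmap export
            out_detected_objects=out_features,  # output feature class
            in_model_definition=model_dlpk,     # e.g. "CarDetection.dlpk" (hypothetical name)
        )
    ```

    The same call works whether the .dlpk is downloaded locally or referenced from the Living Atlas.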

    So just like that, we've gotten back all of the cars that were in the image of this parking lot. But you can see we did miss some. So again, just level-setting and being transparent about expectations here: it's not always going to work perfectly. But there are some ways we can easily improve these results before we need to really panic, or go down any other route. And before we actually look at that, it's important to understand how this tool actually works. When this tool runs, it splits the image into tiles, like we can see on the screen here.

    And then it parses through those tiles, and based on the imagery it was trained on and the patterns it was trained to detect, it looks for those patterns within the pixels of each tile. And that's really important to know, because we can come in here and change the cell size to affect the results we're going to get. So depending on the imagery we're working with and the size of the feature, if we just make that simple adjustment to account for it, we can get much better results. So just by changing the cell size to a specific size and rerunning the tool, you can see I get much better results.
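    To make the tiling and cell-size idea concrete, here's a toy plain-Python sketch. The 256-pixel tile size and the feature dimensions are illustrative assumptions, not the model's actual values:

    ```python
    import math

    def tile_count(width_px, height_px, tile_px=256):
        """Number of tiles the tool parses when it splits the image."""
        return math.ceil(width_px / tile_px) * math.ceil(height_px / tile_px)

    def pixels_per_feature(feature_size_m, cell_size_m):
        """How many pixels a feature spans at a given cell size.
        If this is far from what the model saw in training, detections suffer."""
        return feature_size_m / cell_size_m

    # a ~4.5 m car spans ~60 px at 7.5 cm cells, but only ~15 px at 30 cm cells,
    # which is why adjusting the cell size can rescue a poor first run
    ```

    The point of the sketch is just the ratio: cell size controls how many pixels a car occupies, and the model expects roughly the pixel footprint it was trained on.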

    So again, think of this as a way to really help accelerate your workflows. It's not necessarily going to completely replace them, but it's certainly going to help you get there a lot faster. And from there, it's really just a rinse-and-repeat cycle. We can continue leveraging this tool we have over on the right, pulling in specific deep learning models and packages from the Living Atlas-- whether that's Building Footprints, like we can see here, or maybe other things like parking lots-- and continue that workflow, knowing that we can always change those parameters.

    But inevitably, we're going to run into a case where we don't have a deep learning model. And so we can either go back to Nearmap and leverage their robust library of deep learning packages, or we can explore TextSAM, which is available through the Living Atlas as well. And like I said, this gives us a prompt within the tool so we can be specific about the feature or object we really want to detect.

    So just like I did before, I navigate to the Living Atlas and load in that deep learning model. Again, it's just going to take a few seconds. And then you can see I get a text prompt, where I'm able to put in descriptive text to look for certain things. Here's a blown-up version of the tool so we can see: I have a text prompt, and I can type in a single value or multiple values. And again, you can get really creative here about what you want this tool to segment within that imagery.

    And the possibilities here are really endless. So in this case, I might need to identify some of the vegetation around this parking lot. I can just type in "tree," or, as you can see me doing here, be really thorough and type in every word you can think of that's associated with greenery, shrubbery, or trees. Then we can run the tool and get those results back pretty quickly. And again, there are limitations here. The imagery that was used to train the model, the imagery we have, and even shadows all have an effect on the results we're going to get.
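    That prompt-building step is trivial to script. A small sketch-- the synonym list is just an example, and the comma-separated format mirrors how multiple values are typed into the tool:

    ```python
    def build_text_prompt(terms):
        """Join candidate terms into a comma-separated text prompt,
        dropping duplicates while keeping order."""
        seen, kept = set(), []
        for t in terms:
            t = t.strip().lower()
            if t and t not in seen:
                seen.add(t)
                kept.append(t)
        return ", ".join(kept)

    prompt = build_text_prompt(["tree", "shrub", "bush", "greenery", "Tree", "hedge"])
    # -> "tree, shrub, bush, greenery, hedge"
    ```

    Deduplicating keeps the prompt short without losing any of the vocabulary you brainstormed.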

    But again, we're able to accelerate at least getting to this point, and then clean it up from there. And again, think about getting really creative with this. You could do things like parking islands. You could do wetlands. We could do utilities, like manholes and catch basins, and even things like light poles-- including their shadows, which might be beneficial depending on whether we're working with nadir or oblique imagery.

    So again, this tool can be really powerful. So let's hop over to another site and explore more of the geo side of GeoAI. So again, I'm here in Pro, and I'm starting with that really great high-res imagery from Nearmap that Jeff extracted earlier. I also have the contours that their AI model extracted for us. And so this is an opportunity to turn those into a surface here, and maybe do some hydraulic modeling. Or we could leverage the analytic engines we have available, specifically the raster analytics engine.

    So we might have a beginning image and an image of some construction after a certain amount of time. We can point a tool toward both of these images and say, hey, tell me what the differences are between them, and return that into the map in a visualization like we have here. So it can be really easy to detect change and see where construction is happening-- and maybe more importantly, not happening-- between these two time periods of imagery that we've extracted, again, from Nearmap.
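    Conceptually, that change detection boils down to comparing the two rasters cell by cell and bucketing the difference. A toy sketch with plain lists and made-up thresholds:

    ```python
    def classify_change(before, after, minor=10, major=50):
        """Classify per-cell difference magnitude into none/minor/major.
        `before` and `after` are equal-sized 2D lists of pixel values."""
        classes = []
        for row_b, row_a in zip(before, after):
            row = []
            for b, a in zip(row_b, row_a):
                d = abs(a - b)
                row.append("none" if d < minor else "minor" if d < major else "major")
            classes.append(row)
        return classes

    before = [[100, 100], [100, 100]]
    after  = [[105, 130], [100, 200]]
    # classify_change(before, after) -> [["none", "minor"], ["none", "major"]]
    ```

    The real raster analytics do far more (registration, radiometric normalization, multiband comparisons), but the output is the same idea: a categorical change surface you can symbolize on the map.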

    The dark green areas indicate significant change, and the deep purple areas indicate even more significant change-- things like buildings popping up. Lastly, none of these results are really any good to us if they just live within this desktop application. So a next step would be to take this information and publish it to the web. In doing so, we're creating a live service that is more easily accessible, which really helps increase sharing and collaboration among stakeholders, both internal and external.

    So I can take all of those features we've extracted from that imagery, including the imagery itself, and share them up to the web. And as you can see here, this is now available through a web browser that I can share out to anybody on the team. And just to show, like Jeff did, that this is real and live, we can toggle these layers off and on here. This is also an important last step, because it sets us up for what we'll end with: how we can start enriching our designs with this information, with a pipeline directly into our design drawings.
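    Publishing can also be scripted with the ArcGIS API for Python. A sketch with a lazily imported `arcgis` package; the portal URL, title, and data path are hypothetical:

    ```python
    def publish_features(portal_url, username, title, zipped_shp):
        """Sketch: add a zipped shapefile to a portal and publish it as a
        hosted feature layer. Assumes the `arcgis` package and valid credentials."""
        from arcgis.gis import GIS  # ArcGIS API for Python

        gis = GIS(portal_url, username)  # prompts for a password interactively
        item = gis.content.add(
            {"title": title, "type": "Shapefile"},
            data=zipped_shp,
        )
        return item.publish()  # returns the hosted feature layer item
    ```

    Once published, the hosted layer is the same live service you'd share from the Pro UI, reachable from a browser, a mobile device, or a design drawing.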

    So before we move on to that, some next steps you can take with deep learning: you can continue to leverage these pretrained models, available both within ArcGIS and within Nearmap, or you can start fine-tuning an existing model. Within ArcGIS, we have an ecosystem-- a framework-- that allows you not only to fine-tune or repurpose an existing model, but also to train a deep learning model from scratch.

    And this, for me, is another place where the excitement comes from in this tool that's been developed through the partnership of Kimley-Horn and Nearmap: this whole process is predicated on, and starts with, imagery. So with this ability now to extract high-resolution imagery at will and across different geographies, Kimley-Horn is really set up for success when they want to move into training their own models. Having that rich resource of imagery to start the process, and being able to quickly train models in a robust manner, is going to be really, really beneficial to their workflows, and will give them the ability to be truly innovative in how they deliver projects in the future.

    So let's end with why we're all here, really: design. And we saw Jeff touch on this just a little bit ago-- how we can bring this information, both the imagery and the data we're extracting from it, into Civil. And we saw the example of bringing in those contours from a live published service and creating a surface from them. So I just wanted to pick up from there, touch on that a little more, and provide some additional context on how it works.

    So we're going to pick up where we just left off: we've gotten our imagery, we've extracted our features, and we've published them up as a web service. So now, again, this is a live service that people can access through a web browser, open on a mobile device, or open in a desktop application. And we can also bring it really easily into our design drawing. And there are really two different ways we can do that.

    And here, we're looking at Civil. We can use the Autodesk Connector-- the one that Autodesk creates and curates-- or we can use the one we saw Jeff use, and the one I'm going to talk about today: the ArcGIS for AutoCAD plugin, mainly because that's the one I'm comfortable with. But with this plugin, we can start to create almost a self-service portal to access this information, starting with being able to assign a coordinate system.

    So before I get started here, if I'm in a blank drawing, I can assign a projected or geographic coordinate system to it, or import a custom file. Or if this is a drawing that's already been started, I can set the coordinate system to match, so that as I bring in information from the web service, it aligns with the drawing. And from there, it's really easy to get started: we connect to our online portal, and we sign in with single sign-on and multi-factor authentication to make sure we're secure.

    And then from there, we get access to our content. We can navigate to it and search for the specific data we're looking for. In this case, the information I published earlier was from the Sphere, so I can just type in "Sphere" and I get that information here. Then it's as easy as clicking, and it adds right into the drawing for us. We can do the same with imagery here as well. And so now I have that same information contained here within my drawing.

    And again, I think something really important to reiterate here is that these are live services. So it's more than just being able to see them on the map or within our drawing-- we can actually do things with them, like identify features. As this information is generated, whether in Nearmap or ArcGIS, it carries attributes, and we can view and edit those attributes within this drawing, too. This also provides us with two-way synchronization.
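    Because these are live REST services, anything that can make an HTTP request can consume them. A sketch of building a feature-service query URL-- the service URL here is a hypothetical placeholder, not the one published in the demo:

    ```python
    from urllib.parse import urlencode

    def feature_query_url(service_url, layer=0, where="1=1", out_fields="*"):
        """Build a query URL against a hosted feature service's REST endpoint."""
        params = urlencode({"where": where, "outFields": out_fields, "f": "geojson"})
        return f"{service_url}/{layer}/query?{params}"

    url = feature_query_url(
        "https://services.arcgis.com/example/arcgis/rest/services/Site/FeatureServer"
    )
    ```

    This is the same endpoint the AutoCAD plugin and the web browser hit behind the scenes, which is why an edit synced from the drawing shows up everywhere else.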

    So here, you can see, we delete that object, and then I have the ability to synchronize that back to the web service. And I'll be presented with a table showing exactly what's happening, so I can do some QA/QC and make sure I'm not syncing back something I really don't want to. Then, with just the click of a button, that edit syncs back to the web service. And now anybody else viewing that same information-- whether in another drawing, in a web browser, or on a mobile device-- is going to see that edit happen.

    And this is two-way, meaning the same applies as I edit within GIS, or as more imagery is captured by Nearmap, extracted, brought in through the same pipeline, and published to the web-- it can be easily ingested into our drawing. So hopefully this demonstrates how, through these strategic partnerships, we can increase the efficiency of creating and delivering the data needed for the design process-- really decreasing the time from sensor to database, and database to design-- and let the designers do what they do best: design and solve problems.

    And with that, I'll throw it back over to David to close.

    DAVID GARRIGUES: Great stuff, Brett. Great stuff, Jeff. I think this speaks to both of your companies, what you're able to accomplish, and what you're able to serve to the community at large that we have here today. Well-known names, great brands-- really appreciate everything you do. What I'll say is that I like how both of you were open and honest about where the technology is at. And as part of the engineering community, I really view this as year one, meaning this is the year that you could really use it and really depend on it.

    Is it perfect, like Brett was trying to show? No, it's not perfect yet. But this is definitely year one. You can use these things today. This is very real. So what I did like is on our next slide over here, what I'm going to show you is a "best of" list. In Nearmap, you're looking at footprints, pervious/impervious pavement, and all this other vegetation stuff. And then TextSAM and Land Cover.

    And so if you're going to do this, I would start with these items first, rather than trying to go too broad. OK, so I did think it was cool that you guys at Esri can go look up the elephants-- in my case, it might be stray dogs-- but that's not really useful in my line of work. These are the top things they felt you could go do. I know there was a lot of information in what we showed today, so here's some information about us and how you can reach us.

    These images were not generated by AI. These are real, authentic images-- our real selves. But I would absolutely encourage all of you to go out and build your own relationship with Esri and Nearmap. Do that. They're not just companies, everyone. These are real people, and they're really trying to do the very best they can, and they've got great output. So I would encourage you to forge your own relationships. And let's do something else: show me, next year, what you guys have done together. That'd be awesome.

    The last couple of things we've got here are just some QR codes, in case you're interested in finding out something directly about them. And the great thing about this video is that if something went by too fast today, or too slow, or you want to see it again, you can pause, rewind, all that kind of stuff. Simply get your phone, scan the QR code, and it'll take you right there, and you'll be in great shape. But I want to thank everybody for your time today, especially Esri and Nearmap. Jeff, Brett, I really do appreciate your time. I hope this was valuable to you all, and thank you so much for attending our class. Thank you.
