AU Class

Reality Capture Round-Trip for Construction: Physical to Digital and Back to Physical


Description

Reality capture is focused on digitizing the existing condition to gain a contextual understanding of the design constraints. In this class, we'll go beyond. We'll capture the physical condition to ensure designs are accurate and constructible by transferring our designs back into the physical world. First, we'll cover the digitization of the physical condition. We'll fully explore the laser-scanning process using the Leica BLK360 and ReCap Mobile software. Then, we'll learn how to use that data efficiently in Autodesk design environments such as AutoCAD Civil 3D software, Revit software, and Navisworks software. We'll share our designs with the field using multiple digital approaches such as PDF, DXF, and BIM 360 Glue software. Finally, we'll transfer our digital designs back into the real world using the Leica iCon Robotic Total Station, which we can feed with data from simple DXFs and outputs from Point Layout software, or control directly with BIM 360 Glue software.

Key Learnings

  • Learn how to use the Leica BLK360 with ReCap Mobile to collect field information
  • Learn how to use point clouds in AutoCAD Civil 3D, Revit, and Navisworks
  • Collaborate with field personnel using Point Layout, DXF, and BIM 360 Glue
  • Learn how to use the Leica iCon Robotic Total Station to perform field-layout operations

Speaker

  • Daniel Chapek
    As the manager of the Reality Capture Solutions Division with IMAGINiT Technologies, I serve as the director of Professional Services for Reality Capture technology throughout North America.With over a decade of experience with Laser Scanning technology and more working with infrastructure technology, I've Laser Scanned in numerous environments including Civil/Survey, Architectural, Engineering, Structural, Environmental, Plant/Factory, Naval, and Construction.My strengths include implementing software and functionality to organizations with an emphasis on efficiency and production. I have played a significant role in numerous new technology deployments, which include custom content creation, custom training, CAD management training, project management, and business strategic planning. I’ve established a proven track record of listening to the needs of an industry and developing professional services aimed to solve technological, educational, and efficiency of production problems.
      Transcript

      DANIEL CHAPEK: That should help, huh? All right, now I got to watch what I say. Yeah, no problem. Thank you.

      Any questions on who I am, what I do? No. OK, good.

      So next, I want to talk a little bit about-- you have an introduction to reality capture and reality computing. So when I'm talking about reality capture, for those of you guys that are scanning in here, if you were to describe or define reality capture, how would you define it?

      Oh, by the way, this is an interactive class.

      [CHUCKLING]

      Go ahead and throw something out. You're not going to be wrong. How would you define reality capture?

      AUDIENCE: Collecting existing conditions.

      DANIEL CHAPEK: Yeah, collecting existing conditions-- certainly. Anything else?

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: It is. Yeah, collecting existing conditions at a certain point in time. What reality capture does is it's creating a digital replica of the physical world. And we use that so that we can design in the full contextual understanding of the environment around us.

      Now that's a pretty broad statement. And we have to understand that, when we're talking about it that way, we're not talking about a specific technology. Reality capture is a lot like BIM. BIM is not a technology. It's a workflow, it's a process, it's a mentality. Reality capture is similar. We're not going to buy a reality capture system. We're going to buy technology that falls under the reality capture umbrella.

      So when we start understanding the technologies within the reality capture realm, we're looking at things like 3D laser scanning, which is predominately what we're going to look at today. But we can look at other things, like mobile LiDAR and aerial LiDAR. We talked about that before class a little bit. And all three of these are technically LiDAR.

      What does LiDAR stand for? If I had prizes, I'd give them out. What was that?

      AUDIENCE: Light Detection and Ranging.

      DANIEL CHAPEK: Light Detection and Ranging, using light or lasers to measure distances. So when we're looking at aerial LiDAR, mobile LiDAR, or terrestrial LiDAR, which is 3D laser scanning, we're simply using different mounting platforms to capture distances.

      When we look at completely different technologies, like photogrammetry, ground penetrating radar, Sonar mapping-- again, all of these are designed to do the same thing. We're taking a snapshot in time. We're measuring what's out there in the real world. So all of our traditional survey equipment fall under this banner. All of our GPS systems, all of our total stations. So there's variables obviously. Even if we go down to the tape measure, the pencil and paper. The sketch pad, when you walk out onto the site, and you take a measurement from your length and width of the room. We're technically doing reality capture. But there's a lot of differences in the technology, from your cost of equipment, time of collection, accuracies attained, so on and so forth. The tape measure from Home Depot's got a little bit different price point than an aerial LiDAR setup. I would think that's fair, right?

      [CHUCKLING]

      But if I wanted to use that same tape measure to go measure the entire topography of a county, that's not the right tool.

      So we have to understand that as we look at the different technologies that are out there under the reality capture banner, we have to deploy the correct technology to the correct project purpose. There is not a right or wrong technology. There's not a better or worse technology. We have to understand the project purpose and deploy the correct technology to it.

      Now when we look at the more advanced forms of reality capture, we're really approaching the way we measure the world differently. When we're looking at traditional-- tape measure and paper, traditional total station-- we're going out there and taking individual measurements-- length and width of the room, height from floor to ceiling, or back of curb, edge of sidewalks, center line of road, all these individual measurements. And we've got to go back to the office and interpret what's in between, assume that based on the length and width of the room, this room might be square, or those walls might be plumb. We've got to make those assumptions. As we're surveying a cross-section of a road, and we survey it every 25, 50 feet, however far you want to walk, then we're assuming that that road is completely planar in between our cross sections. And again, depending on our project purpose, that may be absolutely fine.

      When we're looking at more advanced reality capture, it's a 180-degree flip of how we traditionally look at it. Traditionally, we go out there and say, what measurements do I need? When I'm using scanning, or aerial photogrammetry, or something like that, I'm really capturing everything I can possibly see. And then I take everything I can possibly see back to the office, and I extract the measurements I need when I need them. So I gather everything I can see. I'm not focused on, I need to measure this, or that, or the back of the curb, or whatever. I'm not focused on that. I'm focused on capturing everything I can see. Make sense? Cool.

      Now the product of a lot of these things is a point cloud. All right, so who wants to offer a definition of a point cloud?

      AUDIENCE: A 3D photograph.

      DANIEL CHAPEK: No. Normally I'd spin that and say, well, you're kind of right. But, no. Anybody else? Point cloud.

      AUDIENCE: Is that taking a summary of all of millions of measurements?

      DANIEL CHAPEK: Yes, absolutely. And I'm sorry to jump on you with the 3D photograph. But the photography is-- we're looking at images. And we're not looking at images with a point cloud. I'll get to it specifically.

      But yeah, you're right. It's a summary. It's a collection of millions or billions of individual points. As the laser scan is going out, as the photogrammetric processes the photos into a point cloud, the point cloud is simply a collection of multiple individual measurements. So when I understand that, we have to know that what's on the screen here is a point cloud. Any idea what this point cloud is of?
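[Editor's note] The definition above — a point cloud as nothing more than a huge collection of individual measurements — can be made concrete with a minimal sketch. This is an illustrative data layout only, not any scanner's actual file format:

```python
import random

def make_point(x, y, z, rgb=(0, 0, 0)):
    """One measurement: a 3D position plus an optional color."""
    return {"x": x, "y": y, "z": z, "rgb": rgb}

# A point cloud is simply a very large list of such measurements --
# real scans run to millions or billions of points.
cloud = [make_point(random.uniform(0, 10),
                    random.uniform(0, 10),
                    random.uniform(0, 3))
         for _ in range(100_000)]

print(f"{len(cloud):,} points")  # prints "100,000 points"
```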

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Well, it's not a black screen. It's a black screen on the projector. It's not on my screen. That's kind of hard to see.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Yeah. Well what I'm doing with this slide that you guys can't see-- but if you could see it, it would be wonderful-- is that I'm upping the density of the point cloud. So in the first couple in the series, there are points, but they're very, very sparse. You can't really see what's going on. As I up the density of the point cloud, you get a better and better understanding of exactly what's there.

      So we have to understand that not all point clouds are created equal by any means. Again, there's a lot of variables in some of these different technologies we talked about. You know, a 3D laser scanning point cloud will be of a different quality than a photogrammetric point cloud, and that'll be of a different quality than an aerial LiDAR point cloud. So those variables of density, amount of data, again, the time of collection, the overall size of the point cloud, all those variables are going to come into play.

      So when we look at the quality of a point cloud, there's kind of two factors that make that up. There's accuracy and density.

      ADMINISTRATOR: Sorry, the fire marshal in the house, so everybody has to be in [INAUDIBLE]

      DANIEL CHAPEK: Oh, OK.

      ADMINISTRATOR: That's the issue.

      DANIEL CHAPEK: That's fine.

      ADMINISTRATOR: Yeah, sorry about that.

      DANIEL CHAPEK: Hey, better safe than sorry. As a volunteer firefighter, thank you guys for taking your seats.

      So again, the quality level of a point cloud is based on its accuracy as well as its density. And those two factors are not tied to each other whatsoever.

      Accuracy is going to be the relative error of each individual point to the real world. Now that's going to be variable, but it's fairly constant, based on the quality of equipment or the type of equipment that's used to collect the data. So using photogrammetric data, I have a lot of error variables that go into that, as far as how often I've taken shots, what overlap my photos have, the height of my flight, things along those lines.

      With 3D laser scanning, it's based on, am I using an imaging class laser scanner, or am I using a survey grade laser scanner? How does it approach tilt compensation for outside-- your environmental variables? All of those things are going to come into play. So accuracy is going to be based on the quality of equipment that I'm using to collect the data.

      Density is a user-defined variable. When I'm out there with my laser scanner, I'm setting that I want this density. I'm telling the instrument the density that I need. And that can be variable on what I want to do with the data later.

      Remember your job, Troy.

      TROY: It's just one job.

      DANIEL CHAPEK: Troy has one job. That's to make sure nobody kicks the tripod.

      [CHUCKLING]

      So again, density is based on a user-defined variable. What am I intending to use this data for? Can I do a quick scan of this room, and get length and width of the room? Sure. Do I need to accurately measure the trim work in the back of the room to replicate it? OK, well that's going to require a different density, and potentially a different accuracy. Make sense?

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: I know. I was about to point to it, and then I'm like, wait, that's printed.

      [CHUCKLING]

      That'll still show up, but not geometrically.

      So anyway, again, we have to understand that not all point clouds are created equal by any means. So as we take a look at different reality capture technologies, as well as different instrumentation within the same class-- different UAVs, different 3D laser scanners-- they will all have these variables of accuracies, and densities are going to be the user-defined variable that we can set. Make sense? Cool.

      So how it works specifically with laser scanning is that it's going to pulse the laser if it's a time of flight laser. It pulses the laser by rotating horizontally and vertically. And with every vertical rotation, it moves slightly horizontally.

      So think about again the density portion. You guys might have a peg board hanging in your garage. It's got holes drilled in it. Those holes are drilled in 1 inch horizontal, 1 inch vertical arrays. You guys understand what I'm talking about? OK, cool. Peg board, check-- base level of understanding.

      So we've got this peg board with 1 inch holes. If you put that peg board five feet from you, and you take your eyes from hole to hole to hole, your eyes are making an angular shift to get that density at that distance. You take that exact same peg board, put it 500 feet away, and if you can still see the holes, and you take your eyes hole to hole to hole, you have a much smaller angular shift.
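[Editor's note] The pegboard analogy is simple trigonometry, and the numbers from the talk (1-inch holes, 5 feet vs. 500 feet) can be checked directly. A minimal sketch:

```python
import math

def angular_shift_deg(spacing_in, distance_in):
    """Angle your eye (or a scanner mirror) must turn to step from
    one pegboard hole to the next at a given distance."""
    return math.degrees(math.atan(spacing_in / distance_in))

# 1-inch hole spacing: pegboard 5 feet away vs. 500 feet away
near = angular_shift_deg(1, 5 * 12)    # ~0.95 degrees
far  = angular_shift_deg(1, 500 * 12)  # ~0.0095 degrees

print(f"{near:.4f} deg vs {far:.4f} deg")
```

Moving the board 100x farther away shrinks the required angular shift by about 100x, which is exactly why a fixed angular density yields much sparser points on distant surfaces.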

      So when we're looking at how the laser scanner works with that vertical rotation, it's essentially surveying a vertical plane. So every rotation is scanning the plane of that rotation. Then it rotates horizontally, then it shoots another plane, rotates horizontally, shoots another plane. So while it looks like the scanner is continuously rotating, it's not. It's rotating, stopping, shooting the plane, rotating, stopping, shooting the plane.

      So the speed of that vertical rotation is based on how fast the mirror can spin. But the speed of the horizontal rotation is based on what density we ask for. That's setting that horizontal angular shift between points.

      So as it goes around and scans all those planes, it's capturing everything it can see within the range of the instrument that we're using. And every pulse gets us a measurement, the distance measurement. The three-dimensional point is calculated based off the horizontal turned angle and the vertical angle, the direction the laser is being shot. This is crucially important because this idea of controlling the direction the laser is being shot is the difference between a survey-grade instrument and an imaging class laser scanner. Because every laser scanner is going to understand I'm shooting the point in this direction. And in this direction, I just measured the corresponding distance.
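[Editor's note] The coordinate calculation described here — a measured distance plus the horizontal and vertical angles the laser was shot in — is a spherical-to-Cartesian conversion. A minimal sketch; the angle conventions (azimuth from +X, elevation from the horizontal plane) are assumptions for illustration, and real instruments differ:

```python
import math

def polar_to_xyz(distance, h_angle_deg, v_angle_deg):
    """Convert one laser measurement (range plus turned angles) to a
    3D point relative to the origin at the eye of the scanner.
    Assumes h_angle is azimuth from +X and v_angle is elevation from
    the horizontal plane -- instrument conventions vary."""
    az = math.radians(h_angle_deg)
    el = math.radians(v_angle_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A point 10 m away, straight ahead and level, lands on the +X axis.
print(polar_to_xyz(10, 0, 0))
```

Any unnoticed tilt of the instrument changes `h_angle`/`v_angle`, which is why tilt compensation (discussed next) matters so much to the accuracy of the computed coordinate.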

      But the different laser scanners on the market will approach tilt compensation differently. Tilt compensation is how we take into account all of the different environmental factors that are enacting on the scanner. There's vibrations in the floor. The semi truck drove by. It's a little bit breezy. All of those things are enacting on the scanner, making the scanner move. And any movement of the scanner is going to adjust the actual direction the laser is being shot.

      So imaging class laser scanners like the BLK 360 use electronic tilt compensation, or tilt monitoring, depending on, again, which manufacturer we're talking about. But the BLK is using an IMU, an inertial measurement unit, which is basically a circuit board of sensors with an accelerometer, a tilt sensor, things along those lines, to measure how it's moving in the real world. And it's looking at those measurements to adjust these values of horizontal and vertical angle to correspond correctly with each individual point. When we get to survey-grade instrumentation, it gets a lot more accurate. Because a survey-grade laser scanner is going to use a fully liquid dual-axis compensator, where you've got a little puck of oil that's rigidly mounted to the cast metal frame of the scanner. If you look at a Leica P30 or P40, for instance, that dual-axis compensator is accurate to one arc second. You guys familiar with arc seconds?

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Some. In other words, circles.

      AUDIENCE: Like the angular seconds.

      DANIEL CHAPEK: Right, yeah. So if you're not familiar with arc seconds, it is just a unit of measure. If you take a circle that's 360 degrees, if you take one degree, divide it by 60, you have an arc minute. If you take that arc minute, divide it by 60, you have an arc second. So we're really just talking about a very, very small angle.

      To give you an idea, if you extrapolated it out, one arc second subtends about the diameter of a dime a mile and a half away. So we're looking at something very, very accurate.
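[Editor's note] The dime illustration checks out as rough arithmetic. Using the small-angle approximation (arc length ≈ radius × angle):

```python
import math

ARC_SECOND = math.radians(1 / 3600)   # 1/60 of 1/60 of a degree, in radians

distance_in = 1.5 * 5280 * 12         # a mile and a half, in inches
width_in = distance_in * ARC_SECOND   # small-angle: width ~ distance * angle

print(f"{width_in:.2f} in")
```

This comes out to roughly half an inch — the same ballpark as a dime's 0.705-inch diameter, so the speaker's comparison is a fair order-of-magnitude picture.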

      So when we're looking at a professional unit with a dual-axis compensator like that, it's looking at those angles that accurately. So any breeze or vibration in the floor if we're in that manufacturing plant and the press is stomping away, anything that's going to shake the direction of that laser is going to be tracked, and these angular measurements are going to be updated to make sure that the direction value the laser is being shot is correctly corresponding to the distance measurement that was taken in that direction.

      And the horizontal angle, the vertical angle, and the distance combine to give me my actual coordinate. So that coordinate that's calculated, when we talk about accuracies of scanners, we're really taking a look at the accuracy of that coordinate versus what's truly out there in the real world.

      So we can talk about these individual accuracies. Scanners will have specs of horizontal angle tolerance, vertical angle tolerance, ranging tolerance, range noise, all those different accuracy levels on there. But they all need to be summed up to 3D positional accuracy. How accurate is that coordinate versus the true real world?

      All right, so as it does that, it captures the point cloud. And the point cloud is a three-dimensional object built up of millions and billions of individual points. All these scanners also have cameras in them as well, which will take full dome photography to then overlay-- and overlay is not the right word-- but colorize the point cloud based on those photos.

      So with the horizontal and vertical angle that we were talking about, this scanner understands what direction each individual point was taken in. It also understands what direction each individual pixel in the photos was taken in. So then it just correlates. It says that individual pixel was taken in this given direction. We'll apply that single RGB color to this individual point.
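[Editor's note] That direction-matching step can be sketched in a few lines: express both the point and the panorama pixel as a direction (azimuth, elevation) and look up the pixel facing the same way. This assumes an idealized equirectangular full-dome image; actual scanner colorization pipelines are more involved:

```python
def direction_to_pixel(az_deg, el_deg, width, height):
    """Map a direction (azimuth 0..360, elevation -90..90 degrees)
    to the pixel of an equirectangular panorama looking that way."""
    u = int((az_deg % 360) / 360 * (width - 1))
    v = int((90 - el_deg) / 180 * (height - 1))
    return u, v

def colorize(point_dirs, panorama):
    """Give each scan point the RGB of the pixel sharing its direction."""
    h, w = len(panorama), len(panorama[0])
    return [panorama[v][u]
            for u, v in (direction_to_pixel(az, el, w, h)
                         for az, el in point_dirs)]

# Tiny 4x2 stand-in for a full-dome photo: each entry is one RGB pixel.
pano = [[(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0)],
        [(10, 10, 10), (20, 20, 20), (30, 30, 30), (40, 40, 40)]]

# A point straight up gets a top-row pixel; straight down, bottom row.
print(colorize([(0, 90), (0, -90)], pano))  # [(255, 0, 0), (10, 10, 10)]
```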

      All right, so when we're talking about roundtripping, now that we got the basics of laser scanning down-- and for those guys that have been laser scanning, thank you for being patient. But when we're talking about the roundtripping of reality capture data, typically when we talk about reality capture it's all about taking the field into the office. We're taking the reality of the physical reality, and we're bringing it in digitally for our design purposes. But when we're talking about roundtripping it, we want to take the office, we want to take our accurate design, and put it back out into the real world for construction.

      How often do you guys hear-- anybody in construction? OK. This is a pretty targeted class for that industry, apparently.

      [CHUCKLING]

      So yeah, when I'm looking at construction, how often do we hear that there's a break between the office and the field? The plans are beautiful, they look great, but it's not constructible. Or it's not constructed the way the plans are set up.

      This kind of alleviates some of that. If I can actually take my designs, and put them in the real world, and my design is constructible, then the real world should be constructible. So that's what we're looking to do here. We're looking to bridge that gap between the field and the office. So if we initially do reality capture to understand exactly what's there, we can design based on exactly what's there. We can then take our design and put it out into the world. So that it can accurately be constructed. As we're going through that, we can scan it again, iteratively, to ensure that it was built per plan.

      So as certain trades go in there, as the steel goes up, we can scan it and verify, validate that that steel was built correctly. And if not, we can get ahead of it. We can either get that steel fixed before it causes problems downstream, or we can adjust our plans, and give everybody updated things so when the HVAC goes in there, they're expecting that beam to be there. As we look at it here, we've laid it out. We've built part of it. And you can see that this part of the point cloud, that duct is in completely the wrong spot. And I can then determine if I care or not. Is that going to interfere with anything else downstream? If not, OK. If it will, then I need to get ahead of fixing that. And it's all about fixing these problems at the cheapest point possible, getting ahead of it before there's a sub on site saying, well, I guess I'll just stand here all day because I can't work now because that's in my way. They're still billing. They're just sitting there, not doing anything.

      So we want to be able to, again, bridge that gap between reality and digital, to be able to make sure this is designed and built accurately.

      So now we want to go ahead and get into the scanning portion, the reality capture side of it. And we want to go ahead and look at the BLK 360 and ReCap. So we are going to take a quick break from PowerPoint, and we're going to talk about the BLK for a second.

      So for those of you who have not seen the BLK, this is all it is. This is everything we'd have to take out to the site to go scan. And as a guy that's been lugging around tripods and larger scanners for over a decade, this is refreshing. I just take this through TSA, and I fly wherever I need to go. But in this bag, it's got a couple of things. It's got the scanner itself. And you pull the scanner out-- little case here. The scanner is incredibly small. Again, this is an imaging class laser scanner. So we're looking at about 1/4 of an inch of error at 30 feet. We're looking at 5/16 at 60 feet or so.

      So while it can see 60 meters, its happy spot is going to be under that 10-to-20-meter range. So it really is intended to be internal-- architectural interiors, things like that. It can go outside. It's IP54-rated. But with the range that it has, it's much more intended to be an interior scanner.

      The tripod just folds out like so. Quarter turn, expand it. So we'll do that. And then the center post will just compression fit in there. And this center post is great because it just quick-connects onto the scanner like so. And I can put it upright. I can also hang it upside down. The IMU inside of it is going to understand whether it's right side up or upside down.

      So we'll set it there. Oh, it needs a battery. That's important. It needs more power, Scotty.

      So in here, we've got the battery door. Just put the battery in there. And as we were joking before, there's only one button. So I'm going to hit the button, and it will go ahead and boot up. Now as it's booting up, it's going to go ahead and turn on.

      And part of that is that it's going to be its own Wi-Fi hotspot. Right, so it's going to emit its own SSID that I can go ahead and connect to. Once the LED goes solid, then it's booted up. There we go. Jump into my settings. Just want to make sure that my Wi-Fi is on. You can see I'm connected to the BLK.

      And I'll just go ahead and jump into ReCap Pro. So on the iPad, I get an interactive interface with how I control that. The iPad is not necessary to run the BLK. I can walk over there and hit the power button once, and it would go ahead and scan. And I can scan onboard all day long, get back in the office, and pull the data off then. And depending on how experienced I am with scanning, I may prefer that, I may not. If I'm new to scanning, then the feedback I'm going to get on the iPad is really helpful to make sure that I've got good coverage, I've captured everything I need to see, all that kind of good stuff.

      So in this case, I'm going to hit New Project. We'll name it Class, because I'm feeling creative this morning. I'll hit Capture. And what that does is that kicks me into the actual project. And we can see that I'm connected to my BLK. See the firmware I'm running. There's 53 gigs left of storage on the BLK itself. As well as there's-- it's going to sync over. If I look at the settings, it's going to sync over to my iPad as well.

      When I look at the settings, you can see that I've got scan quality. Again, the scan quality is going to be talking about density. So again, what is going to be the angular difference between every vertical rotation of the laser? Now I've got low, medium, and high. Part of the BLK is that it's incredibly easy to use. The ease of use is phenomenal with the BLK. But a lot of times, when we up ease of use, we drop applicable settings. So again, as we take a look at an imaging class scanner versus a survey-grade scanner, when I'm working with a survey-grade scanner, I can set up, I can occupy known points, I can traverse, I can do all of that stuff. I can have specific settings on what I want my density to be. I can do all of those types of things. The BLK does not allow me to do those things. The BLK's ease of use is simply made for me to go ahead and turn it on, and hit Go.

      So I can set the different-- low, medium, or high. I can tell it whether I want photos or not. And then I'll just go over here to the right, and I'll hit New Scan. And as I hit New Scan, it goes ahead and starts.

      So it's first going to start by doing a 360-degree rotation to calibrate its tilt sensor, just to identify how out-of-level it is. And then it's going to start by taking photos. Now as it takes photos, you're going to see-- smile at him, because it's looking at you. You're going to see there's three lenses. And each one of those lenses does have an LED on it. So if we were in low-light situations, it would brighten it up, and the photos are pretty decent.

      Those photos are being populated in real time over here. They should be.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Yeah, it did. Way to go.

      AUDIENCE: [INAUDIBLE] best used low light [INAUDIBLE]

      DANIEL CHAPEK: Ambient light really doesn't make a difference. So the laser scanning side-- the laser is its own light source. So it can be used in complete darkness. The ambient light portion is only going to affect the photography side of it. So the point cloud that's captured isn't dependent on the ambient light whatsoever. But the photography, to colorize the point cloud, make the color real, that's going to be where ambient light comes into play.

      We'll just kill it and we'll try it again. There we go. New Project, and New Scan.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Yeah, it gets mad at me. That's the technical term.

      So I've checked the Wi-Fi. And we will go in here. Class 3.

      [CHUCKLING]

      Third time's the charm. There we go. New scan.

      So again, it's going to do that tilt calibration, start taking photos, and then it's going to start to scan. So again, some of the variables we have in there with the BLK, again, the ease of use part of that is just low, medium, or high, what density are you looking at. And for the most part, I do everything in medium. If I go outside, or if I'm doing a large foyer or something like that, I might kick it up to high. But medium is typically where I'm going to be. If I go into an electrical closet, I'll kick it down to low. But medium, for the most part, is where it's going to land.

      And again, the accuracy is a constant based on the quality instrument. So again, we're looking at 6 millimeters at 10 meters-- 1/4 inch at 30 feet-- and 8 millimeters at 20 meters, so 5/16 at 60 feet. So that's going to be a standard, regardless of what we're scanning or where we're scanning. Again, because of the electronic tilt compensation, if I do get into an area that has a lot of vibration or it's really windy, then I need to make other considerations to it. But in an environment like this, that's kind of the accuracy I can expect.
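[Editor's note] The metric/imperial spec pairs quoted here (6 mm at 10 m vs. 1/4 inch at 30 feet; 8 mm at 20 m vs. 5/16 inch at 60 feet) line up as rounded conversions, which a quick check confirms:

```python
MM_PER_INCH = 25.4
M_PER_FOOT = 0.3048

print(round(6 / MM_PER_INCH, 3), "in at", round(10 / M_PER_FOOT, 1), "ft")
# ~0.236 in (about 1/4") at ~32.8 ft (about 30 ft)
print(round(8 / MM_PER_INCH, 3), "in at", round(20 / M_PER_FOOT, 1), "ft")
# ~0.315 in (about 5/16") at ~65.6 ft (about 60 ft)
```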

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: I'm told it can. I've seen some guys that make different rigs to strap it to the side of a column, something like that. I've not done it personally. But in theory, the IMU is supposed to track that, and it's supposed to be able to do it. But I haven't done it.

      AUDIENCE: With the tilt compensator, do you have to set the [INAUDIBLE] level at all, or will the tilt compensator just detect how much you're not level?

      DANIEL CHAPEK: So this doesn't have a tilt compensator. So this has the IMU, the Inertial Measurement Unit. So being that that's all digital, then no, you don't have to put it level at all. If I was using a professional-grade scanner, then I would. Because I want to level it within reason so that it knows where plumb is.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Right. So that dual-axis compensator is accurate to one arc second, but it has five minutes of play. So it can move around quite a bit. When that semi drives by, and the gust of wind almost knocks it over, it's going to check that. And it's actively compensating within 5 minutes of play, to an accuracy of 1 arc second.

      I was in a mine down in Springfield where, in this mechanical room, there were five 5-foot diameter turbines that were sucking all the air out of the mine to cool this mechanical room, and then go out in a shaft. And I set up on the backside of one of those turbines. And in that shaft room, they said it was sustained 30 to 40-mile-an-hour winds. And my survey-grade scanner didn't care. It compensated fully for all of it. And I was right, directly, lined up with it. It wasn't ideal.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Yeah, exactly. But that's one of those considerations where it's the correct technology for the job at hand. I would not take this guy into that scenario. I mean, it's great that it's only 2 pounds, but that also means a lighter wind can blow it over. There's a reason that the professional-grade scanners are as big and bulky as they are. They could make total stations smaller than they do, but there is a reason that they don't. And that's because it has that mass, and it can maintain that accuracy with those outside conditions on it.

      OK, so it's done. So you can see it. It was populating those photos in here. Smile, man. So serious. Oh yeah, you guys all signed the form of consent before you walked in, right?

      AUDIENCE: No.

      [CHUCKLING]

      DANIEL CHAPEK: So what it's doing now, once it's scanned, it transfers the data over. So you can see now it's saying Processing. But once it's done scanning, I can go ahead and pick it up and move it. And the thing about laser scanning is-- a lot of guys know that have been scanning already-- is this all about point of view. It's all about line of sight. So as the scanner was here, you could see everything from that viewpoint location. I'll just hit New Scan again. So we could see everything from that point of view, which means as it was sitting here, it couldn't see on the backside of the table. You guys were all blocking my view of this wall down here at the bottom. It saw up there completely. So as I'm scanning throughout the room I need to take that into consideration.

      Now in this case, we're not going to get a clean data set because we're in here. But if we were out in the hallway, and a few people were walking by, I can clean those people out. I can look around people. I can look around traffic. I don't need to shut down the interstate to be able to survey it. I don't have to make sure that that hospital facility is evacuated before I scan it. I can do all of those things while these environments are in use.

      So again, it's going to scan this setup. And once it does that, it's going to again transfer the data over, and go through the registration process. Now the registration process is the alignment of the data. So again, every individual setup location is a confined set of data. As it's capturing each individual point, we understand that each point was calculated by the horizontal and vertical angle from the eye of the scanner. So each setup has an origin, that 0, 0 point, at the eye of the scanner. So setup 1 has an origin, 2 has an origin, 3 has-- each individual setup has its own origin. And the registration process aligns all of those individual setup locations so that I have one overall point cloud at the end of the day.
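The idea of per-setup origins can be sketched in a few lines: each setup's points live in a local frame whose origin is the eye of the scanner, and registration is the rigid transform that maps one setup's frame into another's. This is an illustrative Python sketch, not ReCap's actual algorithm; the angle and offsets are invented for the example.

```python
import math

def register_setup(points, theta, tx, ty):
    """Map 2D points from a setup's local frame (origin at the scanner
    eye) into the project frame via a rigid transform: rotate by theta
    radians, then translate by (tx, ty)."""
    out = []
    for x, y in points:
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        out.append((xr + tx, yr + ty))
    return out

# Setup 2 saw a wall corner at (2.0, 0.0) in its own frame; after
# registration (a 90-degree rotation and a 5-unit offset, both made up)
# that corner lands where setup 1's frame expects it.
aligned = register_setup([(2.0, 0.0)], math.pi / 2, 5.0, 0.0)
print(aligned)  # about [(5.0, 2.0)]
```

Cloud-to-cloud registration is essentially the search for the theta/tx/ty that makes the overlapping points from both setups coincide.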

      And that registration process is crucially important. The alignment of this data is probably the most important step in this entire process. It doesn't mean it needs to be difficult, but it does mean it needs to be right. Because any misalignment in the point cloud is going to give me error in the results. So if I've got points on this wall from setup number 1, and I've got points on this wall from setup number 2, and they're slightly misaligned, then when I look at the point cloud, if I slice through the point cloud, I'll have two parallel lines of points-- some points from setup 1, some points from setup 2-- and they're offset each other because the two point clouds aren't aligned to each other.

      So I need to be able to cut that slice, and have one crystal clear line of points for exactly where the wall is. And again, that is going to go into everything else that I need to do with this point cloud. All of my model extraction, going in there and having the software draw the 2D floor plan for me, for it to take a look at that point cloud, and model that piece of structural steel for me, extract the piping for me.

      Well, if these point clouds aren't aligned correctly, it can't do that extraction because it doesn't know where the wall is because there's two lines of points instead of one. So the registration process, again, is crucially important. But it doesn't have to be difficult, and it doesn't have to take a lot of time. And what you'll find is there's a lot of different ways to do the same thing when we're talking about laser scanning. There's different pathways, different parallel workflows, depending on what you're doing and how you want to do it.

      So we talked about the BLK, and I'm using the iPad with ReCap here. And in this workflow, I get feedback from the BLK. I can see-- and I'll show you here shortly what we can do with this data while we're here in the field using the iPad, but that's not the only way to do it. Again, all of this data is being written directly on the BLK. So I could not use the iPad, and I could just run around, hit the button, and it would scan each individual setup, take it back to the office, and use more advanced registration software if I'd like to.

      There's a lot of different options to use depending on how big my project is, how many setup locations I have. This process here-- you can see it's scanning setup number 2. It's done scanning. It's processing it now. It's going to register it on the fly here. Well, if I was going to move on, I would have been paying attention, and as soon as it got done scanning, I would have gone out the doors and put the scanner out there. I always want the laser firing. If I'm out there in the field, I want to keep actively scanning. If the laser is not firing, I'm wasting time and I'm wasting money.

      So what you'll find is that this process of copying the data over, doing registration in real time, if you have a large project, you're going to outpace it. I'm going to basically be out there-- if I'm on setup number 35, 40, 60, then by the time I'm done scanning setup number 60, it may only be done transferring and registering setup 35, 40. So if it can't keep up, then I may want to just not worry about it at all, and do the registration back in the office.

      So again, it's going to change based on my level of comfort with the laser scanning process, and what other software I have available to me. So you can see, it's going through registration now on setup number 2.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: I could have done that a while ago. As soon as it's done scanning, I would move it. So when it's done scanning the photos and then the scan, it'll say Transferring, then it'll say Processing, and when there's more than one, it goes through Registration. As soon as it's done scanning, when it goes over to transfer, I can move it.

      AUDIENCE: [INAUDIBLE] to have it register on [INAUDIBLE]

      DANIEL CHAPEK: Right, yeah-- no, if it's coming over with the ReCap app, it's going to register on the fly. So what it's done here is it says that it's registered. I can click on Confirm Results, and I can see this preview of the room, where again, setup number 1, setup number 2, one's red, one's bluish. And I can just visually make sure that they're aligned. I've got my standard ReCap reporting that 99.9% of them are within a quarter of an inch of each other, which is what I would expect. And a lot of times, why aren't they 100%? Well, because Adam moved. And it's all Adam in both setups.

      So the more vegetation that I may have, the more it might think that that tree is the same in both setups. But that tree was blowing in the wind. So it thinks they're overlapping points, but because it was slightly moving, it may be causing a higher error than I think it is.

      Go ahead.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: Not notable-- not notable that I have identified.

      AUDIENCE: I've been told not to use the iPad for the registration process.

      DANIEL CHAPEK: What you're going to see is that when I connect to it, and bring the data over, the desktop app is going to check that registration, and tighten it if it can. Because the iPad is less of a powerhouse than my laptop, so it is doing the registration with the same ReCap routines. But as far as accuracy or iterative nudging to make sure they're tighter, that is looked at on the desktop as well. Go ahead.

      AUDIENCE: So when I move the scanner [INAUDIBLE]

      DANIEL CHAPEK: Yeah, you could do that. And again, that's actually a pretty good workflow because of file size limitations that you might experience with ReCap. Normally I would go through and I would scan the entire building. I'd scan up and down the stairwells. I'd have the whole project in one set of data. Because with other software, I can. I can have files that are incredibly large, and I don't have any performance issues.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: You could. You could. I mean, I could take it through the stairwell. And as long as I've got overlap from setup 1, to setup 2, to setup 3 through the stairwell, I could do it.

      Yep.

      AUDIENCE: [INAUDIBLE]

      DANIEL CHAPEK: The thermal imaging? Right. That should be out, with another firmware, first quarter next year.

      There was a lot of questions here. You were next.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah, cumulative error, you mean? Well, again, it depends on what we're doing and why. If I'm worried about cumulative error and I've got a long way to go, maybe this isn't the correct instrument. And I think I was mentioning earlier, if I was doing a hospital or a school-- I think I was talking to you about it-- I would take a survey-grade instrument and do the hallways, the main corridors, and drop a BLK in each room. Because cumulative error is going to be there.

      Every time I register, there is going to be a misalignment. It should be minimal, but there's going to be that. And yeah, we do have the potential of cumulative error if all that error is in the same direction the whole way through. It's--

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Certainly. Yeah, because at the end of the day, a point cloud is a point cloud. So there's variables in accuracy and time of collection and costs, all that kind of stuff. But they're all just collections of millions or billions of individual points. So using different software, we would be able to basically combine data from any source. UAV photogrammetric data with terrestrial LiDAR data-- all of it goes together. We just combine it and work with it then. OK, other questions right here? Go ahead.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: It is track-- right now, on this one, the IMU is not activated. Again, that'll be in the firmware here shortly. But it is going to be trying to track your movement lightly. But it's still using the overlap for registration. The IMU is going to get your initial alignment direction you went and the level of it. And then, cloud to cloud registration is what's going to tighten it together.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah. Yep, absolutely, especially if we have setup 1 and 2 like we did, and then I invert it for setup 3, the IMU is going to tell it that. So the IMU is going to understand, you flipped it upside down. So the data will register automatically even if it's inverted. OK, go ahead.

      AUDIENCE: So how much overlap [INAUDIBLE]

      PRESENTER: Well, the overlap between the two setups is based on you as a user. I had this scanner here. I picked it up and I walked over there. The further I walk, the less overlap there'll be. So it's a matter of where I put the setup locations. And the different environments that I'm working in will be variables there as well. If I'm in a very complicated mechanical room, I need to stick it in back corners and around different equipment. And I may have less overlap in those areas.

      But that complexity will just drive me to have more setup locations, so that I have more overlap in areas that are blocked-- things along those lines. Go ahead.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Absolutely.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Absolutely. Yeah, so the question was, do I see different registration results between ReCap and Cyclone-- like Cyclone Register. And yes, there are a number of differences in results and accuracies and things. And one of the big ones just comes down to reporting. The way that ReCap reports the accuracy is more of a go-no-go switch, where 100% of your points are within 6 millimeters. That doesn't tell me how off it is, that tells me how on it is, right? Does that make sense? Right? So in Cyclone, I actually have reporting mechanisms that tell me, this is your error-- not what percent is better than a threshold, right?
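That reporting difference is easy to demonstrate. The sketch below (illustrative Python, not either vendor's actual code) computes both styles of metric from the same hypothetical residuals: a percent-within-tolerance figure can read 100% for two registrations whose actual error magnitudes differ widely.

```python
import math

def registration_report(pair_distances_mm, tol_mm=6.0):
    """Two ways to report the same cloud-to-cloud residuals: a
    go/no-go percentage (share of overlapping point pairs within a
    tolerance) versus an actual error magnitude (RMS of residuals)."""
    n = len(pair_distances_mm)
    pct_within = 100.0 * sum(d <= tol_mm for d in pair_distances_mm) / n
    rms = math.sqrt(sum(d * d for d in pair_distances_mm) / n)
    return pct_within, rms

# Hypothetical residuals: both registrations "pass" at 100% within
# 6 mm, but the RMS shows one is far tighter than the other.
tight = [0.5, 0.8, 1.0, 0.7]
loose = [5.5, 5.8, 5.9, 5.7]
print(registration_report(tight))
print(registration_report(loose))
```

The go/no-go percentage is identical for both, which is exactly why an error-magnitude report is the more informative number.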

      Also, your registration techniques in Cyclone are much stronger, where ReCap is going to focus solely on cloud to cloud, whereas target-based registration is going to strengthen that cloud to cloud. In Cyclone, I can do traversing, I can do resections, I can do all that normal survey stuff, as well as automated targeting, and then cloud to cloud to strengthen it up. I mean, at the end of the day, if I was in Cyclone, I could model this plane of the wall in two setups and say that those modeled planes were coplanar. And that would do the registration.

      So I've got a lot more flexibility in Cyclone versus ReCap as far as registration goes. Also time-wise. So when I'm looking at things like ReCap versus Cyclone Register versus Register 360, I did some benchmark testing. I had 20 setups out of a P40 that I put into ReCap, and it took 4 and 1/2 hours to import-- not register, just import. I brought it into Cyclone-- that was imported and registered in an hour and a half.

      I brought it into Register 360, and that was imported and registered in 13 minutes. So there are distinct variables or differences in some of these software packages. To give you another example, I can show you a different project. It's a mechanical room with a BLK. It was four setups. In ReCap, the four setups were 2.4 gigs. In Cyclone, it was 1 and 1/2 gigs. In JetStream, it was 400 megs. So there are significant variables as we take a look at some of these different software packages that apply to laser scanning.

      But again, I had mentioned, there's parallel workflows of getting the same thing done. We're scanning, we're registering, we're dealing with data management, and we're going to be modeling. Those things are going to be consistent regardless of what software we're using. Makes sense? OK.

      All right, so with that-- go ahead, I'm sorry.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Not on the iPad or directly on board. That would be done on a desktop. And we can bring the data into ReCap, bring in a CSV file, or actually, it'll identify targets and we can type in what those coordinates are. Again, if I was using Register 360 or Cyclone Register, then that's completely in the realm of possibility.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Right now? Yes. Leica will have a mobile app as well very shortly. It was actually targeted to be at AU. It's not there yet. So it'll be very shortly. And the mobile app that Leica has is very similar. But the major difference is, Leica is not going to attempt to do the registration on the iPad. It will control it virtually. We'll be able to see some overlapping errors. We'll basically have the mirror ball looking around at each viewpoint location. But it's not going to attempt to do the registration on the iPad, because the idea is to use Register 360, which does it significantly faster, gives us better reporting, all that kind of good stuff.

      All right, so I'm happy with this registration report that I'm getting. So I'll hit Merge Scan. And again, it would move on. So now that it understands that my scans are merged-- you can make this mirror ball a little bit smaller-- it understands that setup number 2 was over there. So now in this environment, I can really, in real time, look at the data and understand if I had coverage or not.

      So I can jump over to my map view where I get to see basically the 2D of what's been captured so far. So again, if I'm new to scanning and I'm not good at taking the mental snapshots of what I've captured so far, then I can see with these two setups, I didn't get behind the area that I'm standing up here-- behind the podium and whatnot over here. I can see that I didn't get that, so I need another setup.

      Now if I've been scanning for a while, I have that mental snapshot thing down, so I know as I was sitting there, I didn't see over here, I need another setup there. I can plan my setup locations a little bit better just because of the experience of scanning. But I can see that here. I can also, again, jump between real views. I jump back and forth between the different setup locations.

      And one of the great things here is I can make notes. So I can come over here, and I can look at tools. I can take measurements right here on the iPad. I could also create a note right there. And that's the exit sign, in case you couldn't read it in the point cloud. But you can see, I can also take photos. So I could use the iPad to go take a snapshot of that, and integrate that directly into the scan data.

      So if I was in a mechanical room, I could take a quick snapshot of a serial plate on a piece of equipment, and it's directly integrated in real time. So I'll save that. I can do other markups, where maybe I want to do some hatching around an area, make notes, all that kind of stuff. Whatever I want to do to, again, capture the reality of the space that we're working in. Now once this is done, I'll go ahead and back out of this and suspend the project. I'm going to go ahead and pull that, and I'm going to plug it into my laptop. I'll show you how to get the data off of the iPad.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yes. Yeah, in fact-- Apple would yell at me. Anyway, we'll see if we need to do that. So when I'm here in ReCap on my desktop, I would just hit New Project, and I would transfer from mobile device. So when I transfer from the mobile device, it'll connect to it. ReCap is running on the iPad-- it's in the front, it's in view. And it comes in, looks at this, and says, here's all the projects that are on the iPad, which ones do you want to transfer over?

      So we went ahead and did class 3. I select that project. I can name it, put it somewhere. And it would go ahead and copy it over. Now this is not an immediate thing. We're doing two setups here. It's probably going to take about 4 minutes to pull it over, to index it, to do all that. So it's not an immediate thing by any means, which again, as we talk about the different parallel workflows, registering in the field is great, but if it's going to take me a long time to flip it over here, I may or may not do it. I may look at other routes to go faster.

      There's not a right or wrong. Again, we can come in here and we can start by registering on the iPad. As we get larger projects-- 70, 80, 100 setups, we'll probably start using a little bit more advanced software than what ReCap would give us. But as I hit Open Project there, again, it's taking a look at that registration, it tells me that these scans were already registered. It's preparing it. It's going to analyze it a little bit more, make any adjustments. I can make any adjustments I would want.

      So I think you had asked me about targets-- geospatial control-- this is where I'd be able to come in and identify targets and type in coordinates, things along those lines. And if I'm happy with it, which in this case, I am, I'll hit Index Scans. And it'll go ahead and index that data into RCP and RCS files.

      You guys all familiar with the RCP and RCS file structure? OK. So RCS stands for ReCap scan. RCP is ReCap project. So in here, we have one project that I called Class 3. So I'll have a Class 3 RCP. But I will have individual RCS files for each setup location.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah. And that's important. When I get into AutoCAD or Revit, when I bring scans in, I can import RCS or RCP. I'm typically going to be bringing RCPs so that I bring the entire project. And that's going to identify things that I do here. So again, as we get into ReCap a little bit more, you're going to see that we have the ability to go in and region out areas. We can update UCS information. We do all that kind of good stuff. And that is going to be saved in that RCP file, so when I bring that into AutoCAD or Revit or Navisworks, wherever I'm taking it, all of those sectioned areas and UCS information, all that's going to come forward with it.

      All right, so I'll go ahead and launch it. We'll have one set up in here. Once the other one gets done indexing it, it'll load it in there as well. So you can see that we've got this data. We'll just jump in here. All right, we've got our mirror ball and we've got our setup locations that we had before. We also-- if we jump over here-- we also have that exit sign note.

      So again, if we had other data in here, other snapshots of photos, we would be able to see that stuff, other measurements, things along those lines, they would come forward as is. Now when we're in here, again, we could have worked the registration a little bit more to give it building coordinates or plant coordinates or whatever we wanted to do. In this case, if we just wanted to try to maybe square this up with the world, we would be able to adjust the UCS using the point cloud normals. Now a normal, for those of you guys that aren't familiar with a normal, the points were collected at a given angle of incidence. The scanner was over there. It hit this wall-- the laser hit it at a given angle.

      So there's an angle of incidence of collection. But the point cloud normal is the assumed perpendicularity of that point based on the other points around it. So it's seeing that all these points are on the wall, therefore the assumed normal is the perpendicularity of the wall, right? So we can take a look at resetting this UCS. If I wanted to come over here, I could update origin. I might click something like right there. And then, I could hit Tab to start setting the different axes.

      So here, it's setting Z. I want to keep Z, because I feel that the IMU did a pretty good job with leveling. So I'll hit Tab again. And now I'm setting-- oh, I'm going to hit Enter to accept that. Now I'm setting X. So I'm setting X by clicking on Y. That makes sense. So once I do that and I hit Enter, the UCS is now set, so that if I take a look at top view, it's roughly square to the world. Make sense? OK.
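The squaring-up step can be sketched as follows. Assuming a plan-view simplification, the wall's normal is its direction rotated 90 degrees, and the new UCS X axis runs along the wall while Z stays world-up (trusting the IMU's leveling, as in the demo). Illustrative Python only; all coordinates are invented.

```python
import math

def wall_normal_2d(p0, p1):
    """In plan view, a wall segment's normal is its direction rotated
    90 degrees-- a two-point sketch of the 'assumed perpendicularity'
    idea (real software fits a neighborhood of points, not two)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    mag = math.hypot(dx, dy)
    return (-dy / mag, dx / mag)

def squared_up(point, origin, wall_p0, wall_p1):
    """Re-express a point in a UCS whose X axis runs along the wall,
    keeping Z as-is."""
    nx, ny = wall_normal_2d(wall_p0, wall_p1)
    ax, ay = ny, -nx                      # X axis = wall direction
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    return (dx * ax + dy * ay, dx * nx + dy * ny, point[2] - origin[2])

# A wall scanned at 45 degrees to the world axes: a point 4 units down
# the wall reads as roughly (4, 0, 0) once the UCS is squared to it.
s = math.sqrt(2) / 2
print(squared_up((4 * s, 4 * s, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0), (s, s)))
```

That dot-product projection onto the new axes is all "square it up with the world" means mathematically.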

      All right, now once we-- Apple.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Sure.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Right. Let's see. So the option to set up targets and whatnot, that's going to be on the import of the data. So I'm not going to be able to really go back to that. This goes till 9:30, right? That was me getting nervous for a second. Yeah, yeah. All right, so when I was looking at that in here, when I was taking a look at the registration, I was going to be able to bring in those points to import the data. And when I would identify a target, I obviously don't have any in here, but let's just play a game.

      If I said a target was right there-- it's not going to find one-- then it would ask me what coordinate it was. So I click on it, and then I would type in its X, Y, and Z. And by giving it those coordinates, that would then do the UCS for me, based on those coordinates. So I don't have any of those. So when I went to the project, what I did is I used the point cloud itself to set the coordinate system.

      Now when I'm doing that, I do have to make some assumptions, where this room is not square. This wall is way out of plumb, because it's just a divider wall, so it's way out of plumb. You can eyeball how out of plumb it is. So when I'm looking at the normal, and I'm saying, well, let's use this wall for positive Y, it's looking at that in a weird way. But to update the origin, let's jump over here to the points area. And I can click Update Origin.

      So when I start update origin, it starts by asking me, click the updated location. So basically, give it the new origin point. So I'll click where-- and obviously, the more I zoom in, the more accurately I pick, the more accurate the--

      AUDIENCE: [INAUDIBLE]

      PRESENTER: No.

      AUDIENCE: No?

      PRESENTER: No. Again, there's going to be limitations of what ReCap is going to be able to do. And--

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Not that I know of in ReCap. In Cyclone, that's a normal thing. But not so much here. So when I click on that and I hit Enter to accept that location-- oh, I was one step past, I already hit Enter. So it went ahead and updated the location and did not adjust my axes. So when I come up here, I would update origin. I'd pick the spot. So I hit Enter to confirm, or I hit Tab to adjust the axes. So it first asks on Z, and I'm happy with Z. It's not inverted. I don't have to flip it.

      So I'd hit Enter to accept Z. And then it asks about X. So as I click over here, you see it's looking at that point to set where X is. I go ahead and hit Alt to flip it. I can use those points to basically pick my orientation. So again, it's that fine line to walk between ease of use and capability.

      And ReCap is incredibly easy to use, which means it has a little less capability than its more advanced counterparts. OK, so once it's here-- sure, just get out of there-- I would go into my design application. Now my design application is really your design application of choice-- AutoCAD, Plant 3D, Revit, wherever you're going.

      When I'm looking at these applications, I would jump into the Insert tab, and I would attach a point cloud. So this is where it would be looking for that RCP file. Here's my class 3 RCP file. I did scan it yesterday. I came in here and did two setups identical to what we just did. I just wanted a point cloud without all of you in it. Nothing against you, I just wanted a clean point cloud.

      So if I open up class 3 RCP, yeah, we can see right here, I didn't save it, so the registration wasn't done. That's a problem. But still, you would bring this in. I'll go in and bring in the point cloud from yesterday. It comes in very similar to a block or an XREF. We're just going to come in-- I can adjust my insertion point. I can adjust my scale, which is terrifying to me, as a survey kind of guy. We shouldn't have to adjust scale if units are correct. But technically, we can.

      So I'm going to just use the origin, use the scale, and I'm going to hit OK. And that data is going to come in. So you can see, it came into AutoCAD as a three-dimensional object. As I just turn it in 3D, I can take a look at it. And now I do whatever I do in the normal design application. So if I'm in AutoCAD, I make lines, arcs, polylines, and blocks. If I'm in Revit, I make walls, windows, doors, floors, duct, all that kind of good stuff.

      So you're going to use this environment to essentially design whatever it is you need to design the way you would normally design it with understanding what's truly out there to guide that design. So there's a lot of other classes at AU on how to do that, on how to do modeling from point clouds and things like that. So I don't want to spend a whole lot of time on that, because I want to jump to the robot.

      All right, so what I did this morning, technically-- any other speakers in here? No? Well, they tell us to prepare. I didn't take them seriously. So my intent, yesterday, was just to scan the room, take a quick section of the room, draw up the length and width of the room and move on. But this morning, I was like, you know what? It would kind of be cool if I actually had the room with chairs and everything else. So I did it.

      I'm typically in the central time zone, so I was up at like, 4:00 AM this time zone. So I'm like, I'll just draw it up. And I don't want there to be any smoke and mirrors here, guys. Yesterday, I came in, and I had two setups at basically the identical locations I just did. And this morning, I started drawing this. And if I look at my DWG props, and I look at the statistics, I have been in this file for 10 minutes.

      So again, this isn't a difficult thing to do. You just need to know how to do it and get it done. So this morning, I jumped into that file, I brought in the point cloud that you guys saw a second ago, and I started drawing. I used some tools. I used some slicing tools. I used some extraction tools. I used CloudWorx, which is a Leica product, installed on top of AutoCAD to do linework extraction.

      So all those lines are actually extracted from the point cloud, I didn't draw them. I did create the chair blocks. That was probably what took the longest: I took a look at the point cloud, and I just basically sketched a bunch of polylines and turned them into blocks, and called them chairs. Cool? All right.

      So that's what we got going on. So I used that point cloud, collected the point cloud, drew up existing conditions, and now I go about my design. And we're going to design something pretty neat here. That's sarcasm, by the way. I'm going to go ahead and just draw some lines in here. We'll put a line through here, put a line over to the projector table. Maybe we'll put a line over here. And maybe we'll put a circle over here in the doorway. Real, real complicated design.

      This is really just simulating what you guys would design. Where your pipes are going to go, where your ducts are going to go, where your hanger locations are, whatever goes into whatever you would be designing. And with that, I will go ahead and save it to this thumb drive. I'm going to do a Save As. I'm going to drop it on this thumb drive under Data. Now I'm going to save it as a DXF. Save.
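For a sense of what that DXF actually carries, here is a sketch of the format: an ASCII file of group-code/value pairs, where each LINE entity holds a layer name and two endpoints. This is a minimal, illustrative Python fragment, not a complete spec-conformant file; real exports come from Save As in the CAD application.

```python
def lines_to_dxf(segments, layer="LAYOUT"):
    """Write 2D line segments as a minimal ASCII DXF-style string:
    alternating group codes and values, one per line. Group code 8 is
    the layer; 10/20 and 11/21 are the X/Y of the two endpoints."""
    rows = ["0", "SECTION", "2", "ENTITIES"]
    for (x1, y1), (x2, y2) in segments:
        rows += ["0", "LINE", "8", layer,
                 "10", f"{x1}", "20", f"{y1}",
                 "11", f"{x2}", "21", f"{y2}"]
    rows += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(rows)

# One layout line from the podium toward the projector table
# (made-up coordinates, in feet).
dxf = lines_to_dxf([((0.0, 0.0), (18.5, 7.25))])
print(dxf.splitlines()[:6])
```

Because the format is plain text like this, simple 2D layout data moves easily between design software and field controllers.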

      All right, so now we're going to talk about making the digital physical. So any questions on digitizing the existing conditions before we move on?

      AUDIENCE: [INAUDIBLE]

      PRESENTER: No. No, nothing from Leica is free. I shouldn't say that, but it's true. And everything is à la carte. So when I'm talking about-- we've mentioned a couple of Leica software products here. We've talked about Cyclone. We've talked about Register 360. We've talked about CloudWorx a little bit. All of those things are à la carte. And ReCap still plays with those.

      So we really have to identify the size of projects you're going to do, the type of project you're going to do, and what the best tool set is to get done what you're looking to get done with the strongest ROI. Cool. Any other questions? Cool then. So now when we talk about making the digital, making our perfect design environment back out here into the real world, we're talking about a workflow where we're going to take that modeled environment and kick it to this instrument here, which is a robotic total station.

      And the first step in that three-step process is understanding the data that you're feeding it and creating the layout points. That step is greater or less effort depending on the instrumentation you're using. And you'll see here that I didn't do any of that work with this DXF that I'm going to feed it. But that data goes into the field controller. Now that's that tablet right there on the prism pole.

      So I'm going to put that data into the controller. And the controller is going to guide the robot. The robot is going to track that prism and tell me, hey, if you're looking for this point, this hanger, then go 6 inches to the right and 2 feet back. It'll guide me in the real world on where my drawing says these elements should be.
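The guidance math itself is just a delta between where the instrument measures the prism and where the design says the point is. A hypothetical Python sketch-- the real iCon controller accounts for instrument and user orientation, while "right" and "forward" here assume a fixed viewing direction:

```python
def layout_guidance(measured, target):
    """Given the measured prism position and the design point (both in
    feet, in the same coordinate system), report the move in plain
    terms-- a simplified sketch of field-layout guidance."""
    dx = target[0] - measured[0]
    dy = target[1] - measured[1]
    ew = f"{abs(dx):.2f} ft {'right' if dx > 0 else 'left'}"
    ns = f"{abs(dy):.2f} ft {'forward' if dy > 0 else 'back'}"
    return f"go {ew}, {ns}"

# Looking for a hanger point at (10, 25) while standing at (9.5, 27):
print(layout_guidance((9.5, 27.0), (10.0, 25.0)))
# -> "go 0.50 ft right, 2.00 ft back"
```

Everything else-- the tracking, the prism lock, the robotic aiming-- exists to feed accurate `measured` coordinates into that simple comparison.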

      Now the process of doing this, as I had mentioned that point creation was step number one. And again, it doesn't need to be difficult. It's up to our level of advancement in other BIM areas. So if I'm in a completely 2 dimensional process, that's fine. I mean, that's basically what I simulated here, right? I have a 2D drawing of this room, and I created some line work. I'm going to use a complete 2D workflow here today.

      If I'm in a 3 dimensional area, but my subs are not, that's fine. I can extract 2D information, floor plans from your Revit models, and I can use that. If I'm in a completely 3 dimensional environment, that's also fine. I can kick out an IFC from Revit. That will feed the controller. I can use the controller to connect to BIM 360. And BIM 360 will directly drive the robot too.

      So no matter what level of advancement we are in our projects, this instrument is going to interact with that data fluently. So to give you an idea how this is going to work, and I kind of mentioned it earlier, is, we're going to go ahead and put that instrument out there in the field. Set up similar to how it is now, just typically not in the corner. Don't kick the tripod.

      We're going to set it up, again, line of sight, so it can see as much of the area as possible. And then we're going to walk around with the prism pole and the controller tablet, and it's going to actively track us. And as I'm on that tablet saying, look for this hanger, or look for this center of column, or look for whatever I need to construct, it's going to tell me how to get there. I'm going to tell it, I'm looking for this point. It's going to say, well, that point is in this direction, this location, so on and so forth.

      So there are different instruments out here. What I've got here is an iCon 60 model, 5-second accuracy. So as we take a look at the project types we would be doing, we can get by with a more or less accurate unit based on how far away we're looking. So again, if you're looking at, typically, a 200- or 300-foot max, if you're going to do the interior layout of a building-- a school or hospital, whatever it is-- and you're normally going to be shooting, I don't know, 50 feet, 60 feet, 100 feet, then a 5-second instrument is more than accurate enough for what you're trying to do.

      But if you're going to shoot 1,000 feet, or a mile, you need a higher accuracy instrument than that. So the options are there. But the great thing about iCon is the way that it integrates with our CAD data. So we can work with 2D or 3D workflows, BIM or non-BIM; it's just a question of which tablet and communication device. So the top of that instrument is a handle, and that handle comes off. Right now, I've got the long range Bluetooth handle on. That will interact with that tablet.

      I could put a long range Wi-Fi handle on it and use the same iPad I use for BIM 360 Glue or Layout. And with that, let's do it. So I should be able to switch over here, where you guys can now see that tablet on the screen. So I'm going to jump over inside this tablet, and I'm going to stick this USB drive in. Of course, we can get to this data however we want.

      If we were using BIM 360, we'd be using the iPad with a wireless connection and get to our data that way. I simply put the data here on this USB drive. So I'm going to jump over to Projects. I'm going to create a brand new project. DELPHINO. Whoops. I should have hit the X. Plus. Name, DELPHINO4101A. And with that, I'll go ahead and hit the check mark. And I can import some data.

      So I'm going to go ahead and take a look at importing reference data. I can also import road data, or a custom coordinate system for building coordinates or anything like that I'd want to do. I also have background image. So background image would allow me to bring in, say, a floor plan, something I'm not going to interact with. If I'm actively laying out duct work or fire suppression, I can bring in the entire background floor plan, so I can get my bearings, but I'm not interacting with it. In this case, I'm going to bring in reference data, which is going to be data that I am going to actively interact with.

      So as I hit Reference Data, it's on my USB drive. And there's that DXF we just saved over there. And I'll hit Go. And it imports that data. Now the great thing about this is this is a full blown Windows tablet. Pretty decent power here. So I don't have to be worried about the amount of data that I'm feeding it. Again, there's differences in all kinds of instrumentation setups. I worked with one client that had an instrument that only accepted 16 megs, that was its max. So it was anticipating a CSV file with maybe a couple hundred points, and that's it. It's not going to take a DXF. It's definitely not going to take an IFC.

      So this guy being able to take large amounts of data means, I can bring in my entire project into this tablet and understand the full context of what I'm laying out. So I'll go ahead and hit OK. And what I'm going to do now is decide what I want to do. So I've got as-built. If I want to shoot something, let's say, in this drawing, I have all my electrical stub-ups, and they're done. I could shoot those electrical stub-ups to as-built them to validate they were built accurately.

      Now again, there's a difference here between robotic total stations and 3D laser scanners. 3D laser scanners collect data incredibly quickly-- 360,000 points per second. These guys, one point at a time. So when I'm as-builting those, I'm as-builting individual shots. I may want to laser scan that post-tensioned concrete slab to document where every single piece of rebar is. So I may go over here to layout points. And you can see immediately, there's my data.

      It's getting good. Yeah, background music. So the data's there, and everything that was in the drawing is here in the tablet. The great thing about this, though, is if I look at layers, again, it understands all of my layers that I have in there. But I can show points. And this is huge, because if I click Show Points, I now have points on everything-- whoops, back-- I have points on everything I would traditionally OSnap to. So every end point, every center of circle, which means, I didn't do a single step of point creation.

      I didn't have to use Autodesk Point Layout. I could, but I didn't have to. I didn't have to create any of the data. The data in the drawing is understood enough to go. So what I have to do first here is, I have to have the instrument understand where in the space it is, because I want it to drive me-- to tell me that this point on this polyline, 26-2, next to the projector-- I want the tablet to tell me, go over there. But to do that, the instrument has to know where in the room it is.

      So I'm going to set it up with this understanding. I'm going to go to Setup. And I'm going to just set up an anywhere setup. I'm going to use data in the drawing for it to back-calculate where the instrument actually is in the world. Technically, a resection. So I'll go anywhere. It takes a look at the level. Sorry, it's going to get tight in here for just a second. So it's taking a look at the level. And we see, again, on the level bubble, I can adjust it if necessary. As long as it's within the ballpark, we're going to be fine, because again, this has got a dual-axis compensator just like a survey-grade instrument should.

      So once I'm happy with that, I can set up the instrument height. You guys up here can see that the laser plummet is on. So there's a dot on the floor, if I'm setting up over an unknown point, I can set up instrument height and do 3 dimensional layouts. So if it understands how high it is off the floor, and in your model, you've got that concrete sleeve 6 feet off the floor in that wall, it'll point right on that wall, because it will know where it is in 3D space.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: No, we haven't got to the point where we can measure distances with lasers yet.

      AUDIENCE: You can use tape measure.

      PRESENTER: You have to use a tape measure. Yeah. Yeah, I know. I know. One day, one day, we'll take measurements with lasers. So now doing an anywhere setup, I'm just basically going to shoot two points in this room somewhere-- a point that I have in the drawing, and a point that I can see from where the instrument is.

      So over by the door, now I can see that back corner. That back corner is point 7-2. So if I say I'm looking for 7-2, I can tell it how I want to look for it. So I'm going to change my prism type to reflectorless, which means it's actually going to shoot a visible laser, or I can eyeball through the scope. If it's really far away, I'll eyeball through the scope to find it. But as I hit Start, turns the laser on. You guys were all done having kids, right? Because these lasers are dangerous.

      AUDIENCE: Will it generate points [INAUDIBLE]?

      PRESENTER: That's a good question that I don't know. Maybe. I can look for you. So as I come over here, I'm just going to point it to that corner. And I've got to-- if I want to fine-tune it, I've got these nice Etch A Sketch knobs. It's a technical term. So I'm just going to point at that corner and say, store that point. So it now knows that direction and distance. It basically just drew a radius from that point; it knows there is now a radius out here in 3D space of where the instrument might be.

      So now I say, well, I can also see-- I typically want to spread these out horizontally and vertically, but for today, I'll just look at that corner over there. We'll look at 5-1. I'll just move it over there, Etch A Sketch it on there. Store. Yay. It now knows where in the room the instrument is. Make sense? OK.
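      To make the geometry of that anywhere setup concrete, here is a minimal sketch of the two-shot resection idea described above: with a measured range to each of two known points, the instrument must sit at an intersection of two circles. The coordinates, distances, and function name are hypothetical, for illustration only; a real controller also uses the measured horizontal angles to pick between the two mirror-image candidates.

      ```python
      import math

      def resect_2d(p1, d1, p2, d2):
          """Candidate instrument positions given two known points and the
          measured distance to each: the intersection of two circles."""
          (x1, y1), (x2, y2) = p1, p2
          dx, dy = x2 - x1, y2 - y1
          base = math.hypot(dx, dy)                   # distance between the known points
          a = (d1**2 - d2**2 + base**2) / (2 * base)  # along-base distance to the chord
          h = math.sqrt(max(d1**2 - a**2, 0.0))       # half-chord height
          mx, my = x1 + a * dx / base, y1 + a * dy / base
          # Two mirror-image candidates; the controller resolves the ambiguity
          # using the horizontal angles it measured between the two shots.
          return [(mx + h * dy / base, my - h * dx / base),
                  (mx - h * dy / base, my + h * dx / base)]

      # Hypothetical room corners (feet) with measured ranges of 13 ft and 15 ft:
      candidates = resect_2d((0.0, 0.0), 13.0, (14.0, 0.0), 15.0)  # (5, +/-12)
      ```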

      So again, ease of use wise, this is not as easy to use as a BLK. But versus other total stations, that's incredibly simple to do. So now that it understands where in the room the instrument is, I can now have it go tell me where to go when I'm looking for certain points. So I no longer want to be reflectorless. I could. If I wanted to look at that sleeve, I can't put this on the wall accurately enough to know where that sleeve is.

      So I could go reflectorless, pick a point, and it would turn and shoot the laser directly to that point. I could mark the wall, and that's where they're going to drill through, or whatever they're going to do. But in this case, I want to just be out here on the floor. So I want it to track my prism. So I'm going to change my prism type to an NPR-122, which is this prism. It's written on there, you don't have to remember that.

      So we're going to do an NPR-122. And nothing I have here is 3D. So it doesn't really matter what my prism height is. But if it was 3D, then I would want to know that so that it's calculating its height from the floor to the eye. And then it sees this, and it calculates down to the floor, so it understands where I am on the floor in a 3D mode. But I need for it to go ahead and find and track my prism.
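      The height reduction described above is simple enough to sketch: in a 3D mode, the instrument tracks the prism center and subtracts the entered pole height to get the point on the floor, assuming the rod is held plumb. Numbers here are made up.

      ```python
      def floor_point(prism_xyz, pole_height):
          """Reduce the tracked prism center to the floor point under a plumb rod."""
          x, y, z = prism_xyz
          return (x, y, z - pole_height)

      # A prism tracked 6 ft above the slab on a 6 ft pole lands back on the floor:
      assert floor_point((10.0, 4.0, 6.0), 6.0) == (10.0, 4.0, 0.0)
      ```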

      So I'm going to go over to Move and Search. And here, I can remotely control the robot, where it will basically just find my prism for me. So I'm just going to hit Power Search to the left. Hey, it found me. It gives me a little nod. Prism locked. Prism found. Sorry, I'm new here.

      So now, it's actively tracking the prism. We have a relationship. And the great thing about this too is, as it's actively tracking it, it's going to get lost. I'm walking around a construction site, there's columns, there's other people around. When the beam gets blocked-- and you can listen to this, do you guys hear that? No? It's a lost prism. In a very, very serious voice.

      But it found it again. It's still tracking me. So yeah, it did get blocked. Here, we'll see if the mic will pick it up. Yeah? It's serious. But since it didn't move, it instantly picked it back up. Now if I was moving-- because I'm not going to just stand behind the column; the column jumps in my way. I'm walking, and I walk behind a column. As long as I move at the same typical speed and in the same typical direction, it will keep searching in that direction for about 2 more seconds, and it will pick me up again. Sorry?

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Sure. I'm not familiar with that term, but yeah. Yeah. It understands where the prism is and the direction the prism is moving. And if it does lose the prism, it's going to keep searching, assuming that you were moving in that constant speed and direction for a period of time. So now that it knows where I am, I can basically tell it that I'm looking for something. Let's go and look for that circle in the doorway. So I'm going to pick that. And that's where I'm going to start. And it now understands where I am and where the point is that I'm looking for.

      And it assumes that I'm always standing like this, where the prism is in between me and the instrument. It's telling me to go 23 foot 7 and 3/4 to my right, and go 2 foot back. Now I know it's over there, so I'm just going to walk over there. And you can see those values are updating, telling me where I am, how close I am.
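      Roughly, what the controller is computing here is the offset from the prism to the target, expressed in the frame of a user who stands behind the prism facing the instrument. A minimal sketch, with hypothetical coordinates and a made-up function name:

      ```python
      import math

      def guidance(instrument, prism, target):
          """Right/forward offsets from the prism to the target, in the frame
          of a user standing behind the prism and facing the instrument."""
          fx, fy = instrument[0] - prism[0], instrument[1] - prism[1]
          norm = math.hypot(fx, fy)
          fx, fy = fx / norm, fy / norm      # unit vector toward the instrument
          rx, ry = fy, -fx                   # the user's right-hand direction
          dx, dy = target[0] - prism[0], target[1] - prism[1]
          right = dx * rx + dy * ry          # positive means "go right"
          forward = dx * fx + dy * fy        # negative means "go back"
          return right, forward

      # Instrument 10 ft ahead of the prism; target 3 ft right, 2 ft back:
      print(guidance((0.0, 10.0), (0.0, 0.0), (3.0, -2.0)))  # (3.0, -2.0)
      ```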

      And when I get to a certain point, out here-- I'm out here about 5, 6 foot away. As I get closer, it's going to flip over into a bull's eye view, where it's going to tell me, again, where I am. And as I get closer than that, it goes to a tighter bull's eye. Now I typically put it down here and plumb the rod, because any out of plumb of the rod is going to translate to offness-- that's not a word-- error down at the floor.

      Now in this room, I wanted to keep everything fairly high, because you guys were all sitting in here. If I'm in a wide open warehouse, I'll put the prism just a foot off the ground. And then, a little bit out of plumb isn't going to matter. But in this case, because it's 6 foot off the ground, I do want it pretty plumb. And after that, I just move it around. I've got to go about an inch to the right and 2 inches forward. I've got to go a little bit more forward. There we go-- 5/16 and a 16th, right?
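      The plumb concern is easy to quantify: the horizontal error at the floor from an out-of-plumb rod is roughly the prism height times the sine of the tilt, which is why a prism a foot off the ground forgives what a 6-foot pole doesn't. A quick sketch, with illustrative numbers:

      ```python
      import math

      def plumb_error_inches(pole_height_ft, tilt_deg):
          """Approximate horizontal error at the floor from a tilted rod."""
          return pole_height_ft * math.sin(math.radians(tilt_deg)) * 12.0

      # One degree out of plumb: about 1.26 in of error on a 6 ft pole,
      # but only about 0.21 in on a 1 ft pole.
      print(plumb_error_inches(6.0, 1.0), plumb_error_inches(1.0, 1.0))
      ```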

      So we've got to judge how much time we want to spend here and how accurately we want to do it. If I'm just looking for a stake for a column center, that's probably good enough for the concrete guys to come around and form around that. I'm looking to drill a hole in the floor to put a hanger through, maybe that's good enough, maybe I want to adjust it. It's up to me. But when I'm happy with it, I'll hit store. And it now knows that I laid that point out.

      And I can just go to my next point, my next point, wherever I wanted to go. And in this case, I was laying out points. But if I jump over here, I could do layout lines. And instead of laying out to an actual coordinate, I could say that I want to lay out to this line 27-1, this line 27. And as I hit Start there, it's going to tell me how to get to that line, not where on the line I want to be, but to be in line with that. So if it's extending that line out to where I am here, I know that I'm in line with that.

      So if that's a column line, I don't have to be at the column, I could just be at an offset of a column. And then, obviously, as I go down that line, I can be wherever I need to be on that line. Make sense? Now I really want to lay out-- well, I'll just store it just for fun-- I really want to lay out that line 25 to 26-2. 25-2 to 26-2, I want to lay that out. Unfortunately, that's where they put the pile of rebar. So I can't lay it out, there's stuff in my way.

      So I'm going to go back home, and I'm going to jump over to sketching, and I'm going to go ahead and sketch an offset of that line. Maybe.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Well, come here. That's interesting. All right, well, let's do this. Let's go ahead and do lines and points. I'm going to connect this point with this stakeout point that I just shot. So I've got that line. Now I'll go ahead and make an offset of that line. And we'll go 18 inches to one side or the other. And you can see I pick the side. So I can create a line where I'd like it to be.

      And then, I can go ahead and go back to lay out lines, and I can lay that new line out wherever I need to go. And it's actually back here. It's right-- let's see, well, per the drawing, it should be pretty close to your chair. It should be right about there. So what you guys are seeing here is that by round tripping this data, we're using what's truly out there in the world to understand what it is we need to design.
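      The sketched offset used above amounts to shifting both endpoints of the segment along its unit normal by the offset distance. A minimal sketch with hypothetical coordinates (the function name and side convention are assumptions, not the controller's actual API):

      ```python
      import math

      def offset_line(a, b, dist, side="left"):
          """Offset segment a-b by dist to one side ('left' is relative to
          the a-to-b direction), like the controller's sketched offset."""
          dx, dy = b[0] - a[0], b[1] - a[1]
          length = math.hypot(dx, dy)
          nx, ny = -dy / length, dx / length    # left-hand unit normal
          if side == "right":
              nx, ny = -nx, -ny
          return ((a[0] + nx * dist, a[1] + ny * dist),
                  (b[0] + nx * dist, b[1] + ny * dist))

      # An 18 in (1.5 ft) offset of a hypothetical 20 ft column line:
      print(offset_line((0.0, 0.0), (20.0, 0.0), 1.5, side="left"))
      ```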

      We're going to design it in a perfect design environment, where everything is true plumb and square. Then we can go ahead and take that true plumb and square perfect design environment and put it back out here into the real world, and construct it as accurately as possible. OK, so now in closing, any questions? A minute over. We started a minute late, so I'm still good. You had a question.

      AUDIENCE: I was assuming you can change that to measurement units instead of [INAUDIBLE]?

      PRESENTER: Absolutely. Yeah. We can do metric if you want to. Yeah, I mean, the units are really just a readout preference. I mean, the time of flight of the laser, that's the distance. And then, it's just asking us what units we want it to read out as.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Sorry, you had a question?

      AUDIENCE: [INAUDIBLE] work around [INAUDIBLE] take a point and define what it is? Just a way to bring in a [INAUDIBLE]

      PRESENTER: No. No. What we would do is we would have identified targets in there. And you would do some work on the front end, where if you have a file for this, you would say, well, that corner or this whatever-- 2 feet off of that corner, I'll tape a target to the wall. And then, you know where that is in your drawing. So you extract your coordinate. And now we've collected a target in the field. And we can coordinate the two.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: iCon. It's software specifically for this.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: You certainly can. Yeah. Yeah, you can.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah, I mean, yeah, there's a number of different workflows to get it done. Yeah. It's a question of accuracy, though. And whenever we're aligning a scan with an established drawing, then we get into a fight of who's right? Because the scan is right, that's not much of a fight. But how much are we going to adjust the drawing, and where are we going to assume it's coincidental? Because we may assume that this corner is coincidental, which means any error in the room ends up in that corner. That corner is now 6 inches off.

      But is it truly 6 inches off? Or is it 3 inches off there and 3 inches off here? And if we look at the entire floor plan, we may not know. So any of that kind of correlation can be done. It's not going to be done necessarily well or easily unless you've got a really good file, or you get it aligned and then start adjusting that file to match the cloud. Go ahead.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: I don't believe so. And the iCon is a significantly lower grade than an MS50. But again, it's all about ease of use. This is intended for the construction industry, where an MS50 is full-blown, top-of-the-line survey. So the software that comes with the MS50, the MS60, that's going to be much more advanced software than what the iCon has, but this is purpose-built for the construction industry.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Oh, it's wonderful. I love it. You were next.

      AUDIENCE: [INAUDIBLE]

      PRESENTER: There's not a standard, if you will. It's more just gut feeling and experience. And it also really depends on the instrument I'm using. With the BLK or a FARO unit or something like that, I'm probably not going past 15 meters at the very most, because 15 meters is about the range at which you can collect a target with a FARO unit. And I never try to survey outside of my control.

      So even if I have a 330, I'm not going past 15 meters for overlap. Depending on the environment that I'm in, I may be able to get by with less overlap, if it's more or less uniform. Like in this room, it's really easy to see that that door is an identifying feature. But if this was just a square warehouse with an even column grid, it might be more difficult to get the rotation and everything correct, because everything is so uniform. So that's my way of not answering your question. I think you had one?

      AUDIENCE: Yeah. I've got a situation where I've been scanning over the course of a couple of days, where [INAUDIBLE] filming as I go. How do we compensate for the changes in the time it takes to scan the next [INAUDIBLE] higher density, and the next density, [INAUDIBLE]

      PRESENTER: So you're looking for higher density?

      AUDIENCE: Yeah, I did an initial scan, [INAUDIBLE]. Got a couple [INAUDIBLE] higher density [INAUDIBLE] traditional scans [INAUDIBLE]. And in this time frame, there were a couple [INAUDIBLE]

      PRESENTER: I'm still not following you. I'm sorry. Basically, if you had building coordinates, then they would be registered off that, and you wouldn't worry about density at all. If you are worried about density, then you have to make sure you have overlap when things are gone. For those of you guys that are left, thank you very much for coming. [INAUDIBLE], what's that?

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yep. No, I'll be at the IMAGINiT booth.

      AUDIENCE: Thank you.

      PRESENTER: Thanks guys.

      AUDIENCE: Great job.

      PRESENTER: Thanks for heckling, man.

      AUDIENCE: Do you have a business card?

      PRESENTER: I don't. But if--

      AUDIENCE: Yeah, I have [INAUDIBLE]. Let me ask you a few more. So you have to have an overlap for registration, but if you scan this room when you went to the second floor, and there is no overlap--

      PRESENTER: Well, you have to have some.

      AUDIENCE: If you don't. In a situation where you don't have an overlap, because you're moving from one floor to another.

      PRESENTER: But you got up there somehow.

      AUDIENCE: I got up there, right?

      PRESENTER: So you would scan the way that you got up there. So I mean, if we had to get above that ceiling, there was a hatch. And then you got on a scissor lift and climbed through that hatch. You can see through the hatch.

      AUDIENCE: I see. So [INAUDIBLE], so we're going in galleries, so there will be something common. So if our elevation was still here--

      PRESENTER: Well, yeah you either have to make sure that you have overlap through the entire course. So even if you have to register this room, you may have to register it out the hall, up the stairs, and back over to the room above.

      AUDIENCE: And it wouldn't take the registration either, so I'm moving higher on the [INAUDIBLE].

      PRESENTER: Certainly. Yeah.

      AUDIENCE: Because our guy who sells us the software tried to sell us [INAUDIBLE]

      PRESENTER: OK.

      AUDIENCE: And I don't know if there is a need for it, to buy [INAUDIBLE] or just use the one that comes with the 660.

      PRESENTER: With the IMU, I don't think you would need it.

      AUDIENCE: I wouldn't need it?

      PRESENTER: No, I mean, in general practice, the more stable the base is, the better your data is.

      AUDIENCE: Yeah, because you said, if the overlap is not lining up, because this was the case is moving the [INAUDIBLE].

      PRESENTER: But that's not the tripod, that's going to be your registration technique. So yeah.

      AUDIENCE: Great. Thank you very much.

      PRESENTER: Thank you.

      AUDIENCE: Great class.

      AUDIENCE: So do they make similar kind of stuff for Trimble users?

      PRESENTER: They do.

      AUDIENCE: OK.

      PRESENTER: But they don't have the data integration. And the reason for that is, if you basically look at how Trimble is positioning themselves as a business, with their acquisition of Tekla, with the acquisition of--

      AUDIENCE: [INAUDIBLE]

      PRESENTER: Yeah, I mean, they're positioning themselves to be Autodesk competitors, so they're not really friendly.

      AUDIENCE: That's been my experience so far. The company I came into is all Trimble based.

      PRESENTER: Yeah.

      AUDIENCE: So I'm trying to figure out how I'm going to go about doing this. Same workflow, just with what software?

      PRESENTER: Yeah, so I mean, again, Trimble will do it, but you'll need Autodesk Point Layout to create all of those points. Yeah. And I mean, there's a good amount of time invested in that. And being that this is an instrument--

      NMPI Display
      我们通过 NMPI Display 在 NMPI Display 提供支持的站点上投放数字广告。根据 NMPI Display 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 NMPI Display 收集的与您相关的数据相整合。我们利用发送给 NMPI Display 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. NMPI Display 隐私政策
      VK
      我们通过 VK 在 VK 提供支持的站点上投放数字广告。根据 VK 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 VK 收集的与您相关的数据相整合。我们利用发送给 VK 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. VK 隐私政策
      Adobe Target
      我们通过 Adobe Target 测试站点上的新功能并自定义您对这些功能的体验。为此,我们将收集与您在站点中的活动相关的数据。此数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID、您的 Autodesk ID 等。根据功能测试,您可能会体验不同版本的站点;或者,根据访问者属性,您可能会查看个性化内容。. Adobe Target 隐私政策
      Google Analytics (Advertising)
      我们通过 Google Analytics (Advertising) 在 Google Analytics (Advertising) 提供支持的站点上投放数字广告。根据 Google Analytics (Advertising) 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Google Analytics (Advertising) 收集的与您相关的数据相整合。我们利用发送给 Google Analytics (Advertising) 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Google Analytics (Advertising) 隐私政策
      Trendkite
      我们通过 Trendkite 在 Trendkite 提供支持的站点上投放数字广告。根据 Trendkite 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Trendkite 收集的与您相关的数据相整合。我们利用发送给 Trendkite 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Trendkite 隐私政策
      Hotjar
      我们通过 Hotjar 在 Hotjar 提供支持的站点上投放数字广告。根据 Hotjar 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Hotjar 收集的与您相关的数据相整合。我们利用发送给 Hotjar 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Hotjar 隐私政策
      6 Sense
      我们通过 6 Sense 在 6 Sense 提供支持的站点上投放数字广告。根据 6 Sense 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 6 Sense 收集的与您相关的数据相整合。我们利用发送给 6 Sense 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. 6 Sense 隐私政策
      Terminus
      我们通过 Terminus 在 Terminus 提供支持的站点上投放数字广告。根据 Terminus 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 Terminus 收集的与您相关的数据相整合。我们利用发送给 Terminus 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. Terminus 隐私政策
      StackAdapt
      我们通过 StackAdapt 在 StackAdapt 提供支持的站点上投放数字广告。根据 StackAdapt 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 StackAdapt 收集的与您相关的数据相整合。我们利用发送给 StackAdapt 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. StackAdapt 隐私政策
      The Trade Desk
      我们通过 The Trade Desk 在 The Trade Desk 提供支持的站点上投放数字广告。根据 The Trade Desk 数据以及我们收集的与您在站点中的活动相关的数据,有针对性地提供广告。我们收集的数据可能包含您访问的页面、您启动的试用版、您播放的视频、您购买的东西、您的 IP 地址或设备 ID。可能会将此信息与 The Trade Desk 收集的与您相关的数据相整合。我们利用发送给 The Trade Desk 的数据为您提供更具个性化的数字广告体验并向您展现相关性更强的广告。. The Trade Desk 隐私政策
      RollWorks
      We use RollWorks to deploy digital advertising on sites supported by RollWorks. Ads are based on both RollWorks data and behavioral data that we collect while you’re on our sites. The data we collect may include pages you’ve visited, trials you’ve initiated, videos you’ve played, purchases you’ve made, and your IP address or device ID. This information may be combined with data that RollWorks has collected from you. We use the data that we provide to RollWorks to better customize your digital advertising experience and present you with more relevant ads. RollWorks Privacy Policy

      是否确定要简化联机体验?

      我们希望您能够从我们这里获得良好体验。对于上一屏幕中的类别,如果选择“是”,我们将收集并使用您的数据以自定义您的体验并为您构建更好的应用程序。您可以访问我们的“隐私声明”,根据需要更改您的设置。

      个性化您的体验,选择由您来做。

      我们重视隐私权。我们收集的数据可以帮助我们了解您对我们产品的使用情况、您可能感兴趣的信息以及我们可以在哪些方面做出改善以使您与 Autodesk 的沟通更为顺畅。

      我们是否可以收集并使用您的数据,从而为您打造个性化的体验?

      通过管理您在此站点的隐私设置来了解个性化体验的好处,或访问我们的隐私声明详细了解您的可用选项。