AU Class

Mixing Realities: A McCarthy, Autodesk, DAQRI Partnership


Description

This class will discuss the practical application of technology to improve field operations. We will focus on the use of augmented reality using the DAQRI headset and the Autodesk Augmented Reality application to review design intent with all stakeholders, provide continuous quality control of the as-coordinated versus as-built environments, and provide expedited monthly billing applications to streamline the construction process.

Key Learnings

  • Learn how to use technology for contextual design reviews
  • Learn how to use technology to verify as-built versus as-coordinated models
  • Learn how to use technology to validate monthly billings
  • Learn how to use technology in facilities operation

Speakers

Transcript

      JORDAN MOFFETT: All right, so I know it's a small room, but we've got the mics on for the recordings. So is that too loud? Too loud? No. Good. Good. OK.

      All right, so you guys are all here to talk about mixing realities with us. This is a, kind of, a try venture pilot between McCarthy, DAQRI, and Autodesk. So DAQRI on the hardware side, Autodesk on the software side, and us providing some of the industry knowledge. Jose, you want to control yourself?

      JOSE NEGRETE: Hello, Jose Negrete. VDC specialist with McCarthy Building Companies, primarily dealing with 3D modeling, rendering, animation, virtual mock-ups, and AR R&D.

      JORDAN MOFFETT: I'm Jordan Moffett, a VDC manager for our Southern California region. My role really is to make sure the team-- guys like Jose-- have the right tools, access to the tools, the time to do R&D to make sure that we're continuing to progress in the industry. My background is I started my career as an architect in the industry and then transitioned into project management, and then got into VDC about four years ago.

      So I, kind of, bring some of those practical applications to what we do and the aspect of having been out in the field or having been on the design side and just working through those pain points. So when we see that there's a different way to do business or a better way to do business, that's kind of what we try to focus on. I have a lot of ideas, but I don't really make anything happen. And that's where these guys come in to the equation.

      PAUL CHEN: And my name is Paul Chen. I'm a director of product management for DAQRI. DAQRI builds wearable devices and software to help augment workforces. I've been with DAQRI two years tomorrow, which sounds like a short time. But in the AR wearable space, it's, kind of, a long time. We talk about years in DAQRI as dog years, and it's been a roller coaster over the last few years.

      Prior to working at DAQRI, I was with an embedded software company for 13 years. We did operating systems for embedded devices. So DAQRI actually hired me to manage the operating systems of our devices. You may know DAQRI as the smart helmet company, and then a couple of years ago, we introduced the smart glasses.

      I would've brought a pair, but the ones that I brought are all on the show floor right now in the expo. So if you go to the Vinci booth, V-I-N-C-I, they're showing our glasses with some of the applications they wrote. But now I manage some of the applications on our devices, and I happen to manage the one that McCarthy is using and we'll be talking about today.

      JORDAN MOFFETT: And we'll have one more joining us shortly. Dave Tyner, he's the AR VR thought leadership manager for Autodesk. He's currently down in the expo hall. They have a pretty sweet multi-user environment setup down there that they're getting some press release around. So he'll be coming in hot and joining us upfront. So don't think it's some random guy when he does come in, all right.

      So the agenda: we're going to talk about the definition of augmented reality just to make sure we're all, kind of, on the same platform-- there's a lot of terms and definitions that get thrown around. We'll talk about the value of AR, a lot of the use cases that we see, and we'll talk about the specific case study that we endeavored upon during this partnership over the last several months. We'll talk about the current reality, some positives and negatives to what we're limited by with technology, the future potential of where we think augmented reality can take us, and then finally, just close it out with a call to action to all you guys and your partners.

      So the definition, for our purposes, of augmented reality is simple. It's an image produced by a computer and then used together with a view of the real world. So you're truly augmenting your reality with digital content. The value that we see-- so there's, kind of, four areas that we were looking at when we talk about the value of AR, right.

      So when you're thinking about-- you're going through schematic design early in the process, using augmented reality to study different massing models or options for what buildings could look like, placement of buildings on a job site within the context of the existing reality. So for us, a ton of the work we do is health, it's education. But there are hardly any greenfield sites, especially in Southern California. So for us, being able to actually see what the buildings look like in context with the current campus and then show that to the owner lets them understand better what the design options are before we start going down that design development road.

      JOSE NEGRETE: The other application we've tested is X-ray vision. So having the underground utilities that we've gone out and mapped at the site visible in the headset so that you can look at it in the field on site.

      JORDAN MOFFETT: And then going into quality control, so as we progress into construction looking at what the future installations are going to look like in context with reality. So in the case that we're going to go through, we did this with an existing building, and some new equipment is going to go on the roof. It's all good.

      But you can also do that on a new project where you start to look at the new-- the layers of construction that are going to come in after, right. So you're constantly, kind of, looking at what does our coordinated model look like in context of what we've already built, and then move it into the as-built validation. So did the trades actually install per the coordinated model, or did something happen when we moved from coordination to fab drawings to fab and then out into the field? So for the case study, we really are looking at the two on the left there, the building massing and quality control.

      So just to put it in context, one of the things that we're trying to solve is to do these things in a better way. So we've always done massing studies and we've always done quality control and as-built validation, but we've done it different ways. We've done it in expensive ways, like with laser scanning. We've done it in time-consuming ways, like going out and field measuring utilities as they're installed. So this is just to kind of put this in context of the way that we, kind of, do coordination traditionally, right, where we're marking up drawings and providing that feedback to somebody. But what if we can actually look at the markups in context with reality and be doing these markups in 3D in real time?

      JOSE NEGRETE: So moving on here, we'll see some of that computer interface. And as we go through these, just keep in mind that McCarthy's initial R&D into AR involved a much more complicated process with Unity 3D and basically creating a mini app every time we wanted to explore something on site. So luckily this works with BIM 360 Docs.

      So you'll see the interface here. You go to BIM Viewer-- DAQRI BIM Viewer, it's a website. Log in with your Autodesk login. And it's basically authorizing DAQRI's app online to access your models, which you would have already uploaded to BIM 360 Docs.

      So here you'll browse through your list of models, select the one that you're trying to explore, create a new scene, and then you'll see a familiar interface as you would with BIM 360 Docs. And you'll see in the next video here-- this is sped up 400% or so. You're basically setting a section box around the area you want to look at, adding what's visible to a selection set, and then specifying a location for the marker, for the landmark that you will physically put on the site. Then this gets pushed to the headset, and you download it to the headset next.

      JORDAN MOFFETT: And I will say I was remiss in really driving that point home that the platform that this is based on is BIM 360 Docs. So how many of you guys in the room are on BIM 360 Docs now? OK, so the idea here is that you're hosting your models on BIM 360 Docs, either through C4R and having them live in the cloud, or hosting them.

      There is a model repository anyhow. So you basically have access to all your projects that you've already been invited to in Docs and then, of course, all the models that live within there. So it's critical that we don't-- we drive home the point that we're not introducing just another cloud. Because I know part of the frustration that we all deal with is having data in three, four, or five places.
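For a sense of what that authorization buys you under the hood, the DAQRI BIM Viewer leans on the same Forge Data Management API that backs BIM 360 Docs. Below is a minimal, hedged sketch in Python of listing the hubs and projects a logged-in user can already see; the token is a placeholder for whatever the 3-legged OAuth login returns, and none of this is DAQRI's actual code.

```python
import requests

BASE = "https://developer.api.autodesk.com"
ACCESS_TOKEN = "<3-legged-oauth-token>"  # placeholder for the token from the Autodesk login
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def list_hubs():
    # A BIM 360 account appears as a "hub" in the Data Management API.
    resp = requests.get(f"{BASE}/project/v1/hubs", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]

def list_projects(hub_id):
    # Projects the user has already been invited to in BIM 360 Docs.
    resp = requests.get(f"{BASE}/project/v1/hubs/{hub_id}/projects", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]

if __name__ == "__main__":
    for hub in list_hubs():
        print("Hub:", hub["attributes"]["name"])
        for project in list_projects(hub["id"]):
            print("  Project:", project["attributes"]["name"])
```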

      So moving on from the computer side, this is the DAQRI headset interface. And for me, having been on the other side watching these presentations, it was just important that we put this interface stuff in here. Because we wanted to show that we've actually done this. We've gone through the steps. This isn't, kind of, marketing footage, but this is real life.

      So here you have the model BIM. So this is, kind of, your first interface when you turn on the DAQRI: you have model BIM, which is where all the models are stored. And then you have a few options. So the first time you're actually loading a model, you scan the Scene Selection. So every single model that you saw at the end of that video has a QR code associated with it. So each of those QR codes is unique to that actual model.

      So you scan that QR code, and that actually starts the process of loading the model onto the headset, right. So now we're actually getting the model to the onboard computer. And then depending on model size, you have a certain amount of load time. And then finally, you get to the point where you scan the landmark.

      So this landmark that you see is not unique to any model. So if you've got your landmark, regardless of what model you load, you place that landmark in reality exactly where you placed it virtually, like Jose showed in that video. So you're just matching the virtual with the real world. So you have to have somewhere-- somewhere on site, somewhere in your building-- that you know where that's located relative to the model position.

      So a lot of this goes back to our guys actually going out and surveying the site and giving us-- there's a little bit of prep work that goes into it. But you get a known XYZ, and then you can locate your model based on that. And please, also, as we go through this, just ask questions if you have them along the way. I don't want to-- you don't have to wait to the end. Oh, see, perfect timing.
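The landmark step described above boils down to a rigid transform: the surveyed XYZ tells you where the marker sits in model coordinates, the headset detects where the same marker sits in the real world, and composing the two places the model. A minimal sketch of that math with NumPy, using made-up poses rather than anything from the DAQRI SDK:

```python
import numpy as np

def pose(yaw_deg, translation):
    """Build a 4x4 rigid transform from a yaw angle (degrees) and a translation."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

# Where the landmark sits in model coordinates (the known XYZ from the survey),
# and where the headset sees that same landmark in world coordinates.
marker_in_model = pose(0.0, [10.0, 5.0, 1.2])
marker_in_world = pose(90.0, [2.0, 3.0, 1.2])

# world_T_model = world_T_marker @ inverse(model_T_marker): this is also why placing
# the marker on the wrong face of a column rotates the whole model.
model_to_world = marker_in_world @ np.linalg.inv(marker_in_model)

# Any model vertex can now be dropped into the real scene.
vertex_in_model = np.array([12.0, 5.0, 3.0, 1.0])
print(model_to_world @ vertex_in_model)
```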

      AUDIENCE: [INAUDIBLE]

      JORDAN MOFFETT: So the question was, yeah, is the orientation known? Does the headset figure it out? So basically that marker is planar, right. And so depending on where you place it-- so let's say, like, the column in that example-- depending on what side of the column you were to place that on, it would rotate the model. So you would want to have it at the right height and also in the right plane.

      AUDIENCE: [INAUDIBLE]

      JORDAN MOFFETT: No, I don't even know if it would-- I don't know. Would it read it?

      JOSE NEGRETE: It needs to be flat.

      JORDAN MOFFETT: It needs to be flat. They're the expert.

      AUDIENCE: [INAUDIBLE]

      JORDAN MOFFETT: Right. Yep, and we'll talk a little bit about that because there was some playing with that location to make sure, hey, you load the model based on that marker. The model comes in and you can tell that it's slightly elevated. But at that point at least you can see, kind of, OK, it looks like we're about a foot up. So then you can drop it.

      So I think that's probably some of the future development that we can get into-- the finite accuracy of that location. So for some of the things like this building massing study, that accuracy's not as critical as, say, the as-built validation, where we have some clients that require us to place objects plus or minus 1/4 inch between the as-built model and reality. So the location becomes a lot more important in that sense. Good. All right, so-- [INAUDIBLE].

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: Yeah, I can take that. The DAQRI headset actually has a clear visor. For those of you who remember the smart helmet, we made it with a mirrored face. There was no reason to do that except the CEO then thought it was very cool looking and people did react to that. They said, wow, this is so futuristic looking.

      When we actually gave it to people to use, we got feedback from co-workers saying if I'm working with this person who's wearing the helmet, I cannot see their eyes. And that really bothers me, because I don't know where he's looking. I don't have that eye contact.

      So with the next generation hardware, we built the smart glasses. We purposely made the visor be very clear. So you can see the eyes of the person you're working with. Now you bring up a good point. When you go outside, it's very bright.

      We are shooting virtual content onto those displays. It's like a computer screen, and when you take your Kindle or even your computer outside, it's very hard to see. So what we've done is we've built a little clip-on, basically sunglasses, over the visor, and it works quite well. We were on site with it outside looking at the hospital, and the virtual content looks very bright against the background.

      JORDAN MOFFETT: That question may have been based on the HoloLens, so not to bring up bad words around our DAQRI partners. But that's one of the issues, right, is, well, you've just cut out half of our environment. And even then, even inside the building as we go through construction, you may not be closed in, so you're always going to have potentially direct light. So the rooftop study, this outdoor study, I think both came through very well.

      So let the video roll. So again, just wanted to show kind of where we're at in the R&D and give everybody a flavor of the type of content that you can bring back. So ultimately, we want to get to the point where we've got see-what-I-see type of technology, showing people real time. But in this case at least we can go out, grab a case, grab some examples of maybe cycling through three or four different models to be able to look at the different options, whether we have a two or three story building that's, kind of, wider, taking up more space. Yeah, the scale gets a little bit-- a little bit crazy.

      But anyway, so the idea here is that you can just capture these videos when you're out on site and pull them back, bring them up with the larger design team, the owner, and start to say, how do we want this facility to actually look? And I think the other goal for us-- and the frustration-- lies in that the owners we work with obviously don't do what we do every day. And so to fool yourself into thinking that they really understand what the project's going to look like by reviewing the plans, elevations, sections is just that, right. We're just fooling ourselves.

      So I think if we can actually build a building that the owner goes out and when they see the complete product they say that's exactly what you showed me two, three, four years ago when we did those site studies, I think that's a huge win for all of us in the industry. The second part of the case study was the roof rack example. So there's the new building that you just saw. And then we have building three on campus, which is an existing building, basically has some retrofits to the mechanical, the cooling system.

      So we have new roof racks and then some of the mechanical piping that's going to be on those roof racks. So one of the cool things about this case study was the architect got the as-built documents. They as-built the model to then be able to go and design the retrofit. So when they as-built the model, they followed the as-built drawings that were x number of decades old. And there was actually a parapet wall on the roof that was about a foot off in the as-built.

      So when we went out there with the hologram and we were studying the roof rack in context with reality, we saw that the roof rack was actually into that parapet wall. So we were able to actually take that back. This is a trade partner model. So the mechanical-- the mechanical trade partner was actually able to go change their model, re-validate through Navisworks' coordination that we weren't clashing with anything else that we thought was out there and then re-validate through augmented reality that the proposed solution actually works.

      So pretty cool to get a win, an early win, on a project like this. And so you can, kind of, see where that value could come into play as you're building a new building and, kind of, always just looking out into the future of, hey, if we go pre-fabricate this item, is it actually going to fit when we bring it into the building? So now we want to, kind of, transition to what's the current reality, where we're at with the hardware, software, and just our work environments, and then we'll talk about some of the future state.

      PAUL CHEN: With the current version of our smart glasses, we made some very conscious design decisions. One is on the input methods. So you put on the smart glasses, we've been doing demos for years now. We have a competitor, and they like to use gestures.

      So the first thing that people do when they put on our smart glasses is in order to do something, they start doing this. We have this design philosophy that we're building our device for people who are doing work. They're doing tasks. They're typically using their hands.

      They may be wearing gloves. They might have grease and dirt all over them. They've got tools in their hands. They're twisting knobs, working on machines, carrying things, laying down materials. Their arms are very busy.

      And what we didn't want to do was give them the added burden of having to do this a lot of times. Think about how many times you click a mouse in the course of your daily work. Imagine doing this for every one of those, and actually you sometimes have to click two or three times to get it to take. I've done maybe a dozen clicks standing here, and my arm's getting tired. So we didn't want to add that burden to our workers.

      JORDAN MOFFETT: And everybody stares at you when you're doing it too, and that's the other thing.

      PAUL CHEN: They're already staring, because you're wearing this [INAUDIBLE]. We live in the 21st century where people are used to it. So we made a design decision to make our user interface completely hands free.

      So we have what we call a little reticule. It's a white dot, and when you look through our smart glasses the white dot follows your head. So you control the dot by moving around. When there's a user interface element like a button or a menu, you activate by looking at it and then what we call dwelling on it for less than a second. And then you've pushed the button or selected the menu item.

      People pick this up very quickly. Even down in the display booth, they started doing this. And I say we don't do gestures, and they go, oh, there's a white dot. And within 15 seconds, they're navigating our user interface. I say click on that. Oh, yeah, I did that already. So that was one bonus. People could still do work, climb ladders even while wearing our smart glasses. Question?
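The dwell interaction described above can be pictured as a tiny per-frame state machine: while the head-driven reticle stays on the same UI element a timer accumulates, and once it crosses the dwell threshold the element fires. Here is a rough sketch of that logic; the threshold and frame rate are illustrative, not DAQRI's actual numbers:

```python
DWELL_SECONDS = 0.8  # "less than a second" per the talk; the exact value is a guess

class DwellSelector:
    def __init__(self, threshold=DWELL_SECONDS):
        self.threshold = threshold
        self.target = None
        self.elapsed = 0.0

    def update(self, gazed_element, dt):
        """Call once per frame with the UI element under the reticle (or None)."""
        if gazed_element != self.target:
            # Reticle moved to a new element (or off all elements): restart the timer.
            self.target, self.elapsed = gazed_element, 0.0
            return None
        if self.target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.threshold:
            self.elapsed = 0.0
            return self.target  # "button pushed" / menu item selected
        return None

# Simulate about a second of frames at 90 fps with the reticle held on one button.
selector = DwellSelector()
for _ in range(90):
    fired = selector.update("load_saved_model", dt=1 / 90)
    if fired:
        print("activated:", fired)
        break
```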

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: So the question is is it eye tracking, and no, it's not. One of the reasons is eye tracking cameras are added expense to the hardware. Also with different eye positions of each user, it's very difficult to get a good eye tracking position. So this is totally based on where your head is pointing. So you control it with your head motion.

      Another good feature that people gave us feedback about was the ability to work offline. Now of course, your models are stored in BIM 360 Docs. They're in the cloud. That remains your authoritative source for the models. Our hardware doesn't touch the model at all. We don't change the model. It stays there.

      But of course, if we're going to display it for you, we do have to access it. So as Jordan mentioned, you download the model to the device. If you know you're going to a job site where there is limited or no Wi-Fi capability, you can then store the model locally to the device. In the user interface, we have a set of buttons, one of which is download and save the model.

      Then when you're on site, when you launch the app, Jordan showed you how you can scan a QR code to download a model. The second option was load a saved model, and that way you can just refresh one from the stored memory. Question?
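The offline option is essentially a local cache with BIM 360 Docs staying the authoritative source: save the model while you have Wi-Fi, fall back to the saved copy on site. A minimal sketch of that pattern; the cache path and download function are placeholders, not the device's real storage layout:

```python
from pathlib import Path

CACHE_DIR = Path("model_cache")  # hypothetical on-device storage location

def fetch_from_bim360(model_id: str) -> bytes:
    """Placeholder for pulling the published model down from BIM 360 Docs."""
    raise ConnectionError("no Wi-Fi on the job site")

def load_model(model_id: str) -> bytes:
    cached = CACHE_DIR / f"{model_id}.bin"
    try:
        data = fetch_from_bim360(model_id)   # online: refresh from the cloud
        CACHE_DIR.mkdir(exist_ok=True)
        cached.write_bytes(data)             # "download and save the model"
    except ConnectionError:
        if not cached.exists():
            raise                            # never downloaded and no connection
        data = cached.read_bytes()           # offline: "load a saved model"
    return data
```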

      AUDIENCE: So how many models can you [INAUDIBLE]?

      PAUL CHEN: So the question is how many models can you hold? And I would counter that by asking how long is a piece of string? It depends on how large your models are. There is 64 gig of hard drive on our device-- about a third of which is our operating system and the applications, and about another third of which is the recovery partition, which, in case the first one fails, you can back up. So there's about 22, 23 gig of hard drive space.

      Now that's one limitation. Going to the next question, can we actually render a 23 gig model on our glasses? The answer's no. Think about your designers. They're using these humongous Alienware or other desktop machines with NVIDIA video processors in them.

      We have an embedded processor in our device. It's a laptop processor. It's an Intel Core M7, but it's going to chug on a large model. So as Jose mentioned, you want to segment down what portion of the model you want to view. That's not only to optimize the download speed of that model but also once we have it on the device that little Intel processor's chugging to get it out onto the glasses.

      AUDIENCE: And how big was the whole building-- how many polys were we talking, right?

      JOSE NEGRETE: Yeah, it was like 500,000 polys still. And you saw there's an indicator in the interface, so when you're defining a selection of geometry that you want to upload, there's a little HUD thing that shows you if you're well beyond the reasonable poly count or if you're within it. And so we were actually within it, but it lets you go higher than that. You'll just experience a lot more jumping and jittering when you try to view the model in the headset.
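That HUD readout is effectively a polygon-budget check against what the embedded processor can render smoothly. A toy version of the idea follows; the budget number is purely illustrative, since the talk only says the roughly 500,000-poly selection was "within it":

```python
# Illustrative threshold only; the real number lives in the DAQRI BIM Viewer HUD.
REASONABLE_POLY_BUDGET = 750_000

def selection_status(selected_poly_count: int) -> str:
    """Mimic the hint shown while you build a selection set for the headset."""
    if selected_poly_count <= REASONABLE_POLY_BUDGET:
        return "within budget: should render smoothly"
    return "over budget: expect jumping and jitter in the headset"

print(selection_status(500_000))    # roughly the roof-rack selection from the case study
print(selection_status(2_000_000))  # a whole-building selection
```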

      So this, again, is current state. We're looking at anchoring. So if some of you noticed in the video, for the building facade, the 3D surface where the marker was in 3D space was actually several feet in front of the actual wall where we set up the marker. So anchoring is something that's important for now, because you have to go out on site and have somebody set a control point for you to anchor the 3D model.

      And in this case, we had an as-built model that was actually in a different location than what our 3D content had. So we had to shift the model on site. Other things to take into account, again, is the model complexity. So like we were just talking about, you have to section off your model into blocks, or like using the grids of your model, to explore more than one little area of your project.

      JORDAN MOFFETT: But the nice thing is that if you do that right, you can have those multiple models loaded within the capacity. And then using that same landmark, you can just move the landmark from location to location. So a lot of that's what we've been, kind of, playing with and trying to push the envelope as much as possible.

      PAUL CHEN: Sensor and other hardware limitations are also part of our current reality. The smart glasses that we have today were a direct evolution of our smart helmets. And in order to get the smart glasses to market as quickly as possible, we basically use the same exact hardware configuration that was in the helmet, same cameras, same IMUs, same processors.

      The glasses came out about two years ago. The helmet came out a year before that and was in development for two years before that. So design decisions on which CPU to use, which RGB camera, which depth sensors, were made five or six years ago. Clearly, the sensors have progressed since then, but we're still using six-year-old technology that McCarthy is stuck with right now.

      It is limiting what we can do in terms of the size of the models, the field of view, the accuracy of the anchoring and the positioning. So these are all things that definitely can be improved. Comfort and compatibility are also important. So beyond just providing the AR experience, someone is wearing this new device.

      We're not building AR for AR's sake, we want people to use it like any other tool in their tool belt. And for that to actually happen, they have to be able to wear it for four or five, six, maybe eight hours while they're on the site. So it has to be comfortable and it also has to fit with other things that they're using, wearing, for example, hardhats.

      Current limitation, it does not work under all hardhats. There are some that you can screw down over the top. Some are designed differently, and it doesn't quite work. So you do have to be careful about what you choose.

      JORDAN MOFFETT: Our fourth partner. I'll move over there. It's getting crowded over there.

      DAVE TYNER: Hi everybody. How's it going? I'm Dave Tyner, thought leadership program manager for construction customer success at Autodesk. And in terms of the partners like DAQRI on the hardware, we're trying to enable the software accessibility of your data, and your data in context, through a process which I am calling-- we-- there's a couple-- there's two of us, so it's we-- are calling contextualization of data, which is different than visualization of data, because this is the first time somebody--

      JORDAN MOFFETT: You might want that.

      DAVE TYNER: OK.

      JORDAN MOFFETT: I don't know [INAUDIBLE].

      DAVE TYNER: Yeah, perfect. I can come over here and stand with you.

      JORDAN MOFFETT: Balance the room.

      DAVE TYNER: When people talk about visualization-- I'm an ex-visualization guy-- it makes construction people very nervous, right-- pretty pictures, little value, right. So contextualization is different from that, and it's so different that it needs a different name.

      OK, so why? Why do you use BIM? Why do you care about BIM? Because BIM helps you make better decisions.

      And that's-- whoops-- the other way. All right, BIM helps you make better decisions, right. And in a $7.2 trillion industry globally-- is that about right? Yeah-- every decision you make comes at a time cost. And so if you can make better decisions faster, you're going to shave into that waste problem. Because there is a waste problem. I think we all know there's a waste problem, right.

      And better decision-making is going to help shave that down and increase your margins. So talking about, OK, BIM. Let's just dissect it really fast. Right information at the right time to the right people to make the best and fastest decision, right.

      So if we look at what does right information mean, well, right information is the right data in the right context, right. And to date, this is our context. It's paper and screens-- mobile or whatever, paper and screens are how we're doing it. And that's awesome, and it solved a lot of problems.

      But when we started really thinking about this, OK, how can we affect this, what's valuable-- VR is not a toy. It's a tool. And we can't talk about it like it's a toy. Just like when we're looking at our Revit model, we're not like, oh, this monitor is so cool. Look at the colors and everything. This is so neat, right.

      No, who cares about that? As long as the technology is the center of the conversation, we're having the wrong conversation, right. So when we look at this, it's like, OK, here's real. Here's your data. Here's your people. Here's the data represented in context, and here is the physical manifestation of every decision that's being made on the job.

      Now we thought, OK, cool. That works, except that the data is represented contextually on the paper. That is then sent to the human brain for translation where you're like, OK, what does that look like in real life? And all of a sudden you're running your translation program, which is robbing cycles from your decision program saying, OK, why's this look like that?

      It should be about that. It should be about that, and that's what goes into reality. Am I on here? Is that-- OK, nodding heads, good. Let the record show that all 4,200 people in the room are simultaneously nodding their heads.

      No, so that-- I mean, that's basically what's happening, right. And we can-- this is how we can affect this. We can affect this with augmented and virtual reality.

      Granted I focus on virtual reality primarily. We are looking at augmented reality too, but it's in the adoption curve. It's back here. And what I feel is that if we can help the pre-construction people make better decisions, that's going to have a downstream positive effect while the hardware makers are making it better, while the early adopters are driving that use case so that we can deliver software-- excuse me, deliver data into the experience.

      Now someone had a great question over here about how many polygons, the size of the model. That's interesting. Because when I think about that-- and my response to that has always been, when do you need to see the whole thing at one time, right? Is there a use case for that? Awesome.

      Because if there is, a 3D render seems like it would be a great tool for that. But what we're talking about is solving a problem. So if we're in this room and there's a problem that we need to solve, we don't care about the rest of the building, right. We just need to solve the problem in this room. And then when we go to the next spot, we're going to need to solve a problem in that room, which means the data needs to stream in. And then when we go over there, it needs to stream in over there.

      And this gets offloaded, and that way you maintain your cycles in the hardware. OK, I diverged. Back to this. So yeah, so how can we do this, right?

      Well, the question is decision by translation. How many decisions are made in a project this size, right? About this many. About this many. And how many of those decisions-- well, you moved it. You moved it.

      OK, so that's fine. That's fine. So the diagram that he showed earlier with the paper and the red marks and everything, how many decisions are being made like that, right, when you can just jump in-- is the video here?

      JORDAN MOFFETT: The video is in the next--

      DAVE TYNER: The video's in the next slide, OK.

      JORDAN MOFFETT: He's been busy, so we just.

      DAVE TYNER: In a moment, we're going to show you what that piece of paper-- we're going to show you what that looks like in the immersive context, but then I'm going to head to Jordan. All right.

      JORDAN MOFFETT: You keep that, but give me the clicker. There we go.

      DAVE TYNER: All right.

      JORDAN MOFFETT: Yeah, I moved his slides around to mess with them, since I knew he was going to be late. OK, so that's kind of, again, currently where we're at, limitations, where our heads are at, where we're trying to go. But now let's talk about what the future potential is for the value of AR in the industry.

      PAUL CHEN: From the wearable perspective, obviously, it has to get much more ergonomic so someone can wear it all day and enjoy wearing it, with obviously better style-wise designs, better colors, et cetera. There are hundreds of eyewear manufacturers today, because people are very particular about what they want to wear and what they want people to be seeing them wearing.

      Yes, we're making technology, but we have to be cognizant that people don't want to look like dorks when they're doing their jobs. Those are some concepts of what things might look like in the future. Nirvana, of course, is this form factor: as cheap, as light, as ergonomic as what we're used to wearing now.

      We're a long ways away from that yet. Things have to get very small before we can get here. Just think of batteries, sensors, cameras, RGBs, et cetera. It's a long road.

      But once we can get the better technology, we can also then provide a more immersive experience. As you saw in those videos-- those were captured actually through the smart glasses-- they're a little bit jerky and you, kind of, know you're not in the real world. You see this stuff that's floating around.

      We want to make it less and less unreal and make it more real. You'll still know it's your model against the real world, but you want to make it more seamless. Here's a little AR primer. There's something called the motion to photon latency, which we're fighting to make sure that it's transparent to you.

      When you move your head and you're looking at something, your perspective changes and your eyes see something from a different angle. It looks different. We're now projecting digital content that makes you think there's something there. If you move, we've got a camera that detects you moved. It sends the information from the glasses down to our computer. It tells the system the person moved, we think, by this much and by this much of an angle.

      The computer then has to figure out, OK, I was showing this model. Now they're over here. The model has to look like it's turned so that they think they're looking from a different perspective. It has to send that information back up to the glasses, and then the projectors have to project the content across the displays.

      And we're doing this at 90 frames per second, which is about 11 milliseconds per frame. So that motion-to-photon latency-- your motion, back down, and then photons onto your eyeballs-- has to be less than about 10 milliseconds. That's a lot of work for us to do. You don't have to know about it.

      We want to make sure that you don't even think about it. And when you move, that content moves against you and you think you're looking at something real. That's what we have to get to to make it more immersive.
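The arithmetic behind those numbers is just the frame period at 90 Hz, about 11 ms of motion-to-photon budget per frame. A quick back-of-the-envelope check follows; the individual stage timings are made up for illustration, and only the 90 fps target comes from the talk:

```python
TARGET_FPS = 90
FRAME_BUDGET_MS = 1000 / TARGET_FPS  # roughly 11.1 ms of motion-to-photon budget per frame

# Hypothetical stage timings for one frame of the pipeline described above.
stage_ms = {
    "camera detects head motion": 2.0,
    "pose sent down to the compute pack": 1.0,
    "re-render the model from the new perspective": 5.5,
    "send the frame back up to the glasses": 1.0,
    "project the content across the displays": 1.5,
}

total_ms = sum(stage_ms.values())
print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms, spent: {total_ms:.1f} ms")
print("within budget" if total_ms <= FRAME_BUDGET_MS else "over budget: the user will notice lag")
```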

      JORDAN MOFFETT: But now you do know about it. So now you can quiz your buddies, right. I was joking with Paul earlier, because I was like he explained that today, and I thought, well, that just makes me feel pretty petty for complaining about the shortcomings of the technology. And then you start thinking about where we were just a few years ago. And so-- but he said it's OK.

      PAUL CHEN: I'm glad to hear there are usability concerns, because that means this part is by and large solved. He's not worried about it not looking that real. It's real enough for now, and they're asking why doesn't it do this and why can't we do that. And that's why I'm here.

      JOSE NEGRETE: So here's another thing we've been thinking about as we've been testing the headset is enhanced location tracking. So ideally in the future you wouldn't have to mess with a marker and try to go out in the field and have somebody lay a control point first. The headset could potentially use GPS in combination with AI to triangulate where you are in your project site and then have it load the model or the portion of the model for the area that you're standing in and then reposition as you move along rather than load the next chunk of the model that you had to section off.

      JORDAN MOFFETT: And some of that's just related to the accuracy as well. Even if we do a perfect job of our initial location, you will lose-- depending on what software or hardware provider you talk to-- an eighth of an inch per 20 feet, or whatever that is. So as you move further and further away from that origin point right now, you lose accuracy.

      So if you're trying to do an as-built validation on something that's a critical tolerance, that's really unacceptable. So what we want is the ability for the hardware to actually recognize that automatically and, kind of, always be repositioning you based on the combination of the GPS and then that finite tweaking.
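Putting that drift figure against the tolerance mentioned earlier makes the problem concrete: at roughly an eighth of an inch of error per 20 feet of travel from the landmark, a plus-or-minus 1/4 inch as-built check only stays trustworthy within a few dozen feet of the origin. A back-of-the-envelope sketch, using the rough drift rate quoted above rather than a vendor spec:

```python
DRIFT_INCH_PER_FOOT = (1 / 8) / 20   # roughly 1/8 inch of error per 20 ft from the landmark
TOLERANCE_INCH = 0.25                # the plus-or-minus 1/4 inch as-built requirement

def expected_error_inch(distance_ft: float) -> float:
    """Estimated positional error after walking this far from the landmark."""
    return distance_ft * DRIFT_INCH_PER_FOOT

max_trustworthy_ft = TOLERANCE_INCH / DRIFT_INCH_PER_FOOT
print(f"error at 100 ft: {expected_error_inch(100):.2f} in")
print(f"tolerance exceeded beyond about {max_trustworthy_ft:.0f} ft from the landmark")
```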

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: So the question is, if you have a single target, as you move away, you lose accuracy. Can you use multiple targets to sort of reposition yourself as you move? That's definitely something that we're looking at. But again, that gets to more of the worse problem of now you've got to go on site and put these targets out there.

      Targets get knocked down. Even if they're laminated on paper, they might get twisted or moved. They might fall down if someone puts it up in the wrong place. So it's a point of failure. And as Jose said, if we can get to more location-based or-- we're calling it mapping-- if we can build a map of your area, then no matter where you walk, the system knows, oh, I know where you are and what part of the model should be there. And that will help reduce some of the accuracy issues.

      JORDAN MOFFETT: So to tie back really quick to the more immersive experience that Paul was talking about on the last slide, part of that is going to come with the improved field of view. So currently, the DAQRI device is at about 40 degrees horizontally and 30 vertically. So you can imagine how much of the hologram you're cutting off.

      So you're seeing a-- you're seeing this reality, but you're only seeing this much of that hologram overlaid with it. So it just lends to that sense of your brain constantly knows that this isn't real. Do you have a question or you just--

      AUDIENCE: [INAUDIBLE]

      JORDAN MOFFETT: Just-- yeah, think of a question really quick. So what we want to get to is more of that 120 degrees horizontally and vertically to have that full immersive experience. So in a lot of the videos we're capturing now, as you turn your head, that hologram is loading and loading really quickly. But it's just something that I think we need to get over and improve upon to have full buy-in in the industry.
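A crude way to see how much immersion the narrow field of view costs is to compare the angular window of today's roughly 40-by-30-degree display with the 120-degree goal mentioned above, treating the FOV as a flat angular rectangle (which is only an approximation):

```python
def angular_area_sq_deg(h_deg: float, v_deg: float) -> float:
    """Crude 'angular rectangle' area in square degrees (small-angle approximation)."""
    return h_deg * v_deg

current = angular_area_sq_deg(40, 30)    # roughly the DAQRI glasses today, per the talk
target = angular_area_sq_deg(120, 120)   # the immersive goal mentioned above

print(f"current window: {current:.0f} sq deg, target: {target:.0f} sq deg")
print(f"today's hologram window covers about {current / target:.0%} of the target view")
```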

      AUDIENCE: [INAUDIBLE]

      JORDAN MOFFETT: Right. So the question is-- so the opacity-- what is the opacity on the model content?

      PAUL CHEN: And currently, we're displaying it as it was done in the models. So we don't have that much control. We've heard this request. So we do have a feature request into engineering for actual variable opacity. Sometimes you want to see the model as if it were solid. Sometimes you want to be able to see through it, especially if you're doing an as-built versus as-designed comparison.

      Interesting point you bring up for safety. I was down in our Forge Village booth yesterday, and someone mentioned to me some job sites have a requirement that if you're wearing some device like that with a hologram or other virtual content, you should only be allowed to see it when you're still. And if you're moving, they don't want any of that content showing, exactly because of safety reasons. So now I have a new feature request for us to be able to detect, which we can, when you're moving, and then the hologram will turn off. When you stop, it reappears in the right position so then you see it again. Very good feature request.

      DAVE TYNER: The real future, which is actually the present. So yeah, so back to the VR and helping the pre-construction, sort of foot-in-the-office people. So what you see here is this fully immersive, fully collaborative environment. And this is that room. This is that paper. This is the problem that they're trying to solve, which is that-- sweet, usually it goes to something else. OK, good.

      JORDAN MOFFETT: I'm trying to replay it.

      DAVE TYNER: Oh, OK. Yeah.

      JORDAN MOFFETT: You want it to keep going?

      DAVE TYNER: Yeah, sure. Just replay it.

      JORDAN MOFFETT: I thought I had it on loop.

      DAVE TYNER: Right, so all of these stakeholders are in the experience together. They're solving the problem, right. The data was streamed in from BIM 360, no joke streamed in, no data processing in the middle through the new model coordination APIs. From there, they're free to use the plethora of tools. This is the Nvidia Holodeck, which we were down in the Nvidia booth downstairs and you should come check it out because it's pretty rad.

      JORDAN MOFFETT: I'll play it again.

      DAVE TYNER: OK. But it just has all these tools, just all these tools that enable people to make a good decision. It just didn't have data. And so when we saw it, I said that thing right there. We need that, and we need to pipe BIM 360 data into it and out of it.

      So what they do here or we do is we're making the decision. Of course, we have measure tools and whatever. And you see me pull out that camera, and I'm taking a picture. And when I take that picture that data is sent back into BIM 360 and attaches itself to the clash as an attachment so that whoever's going to resolve it in Revit is going to bring up that attachment and say, oh, yeah, we do this, this, and this.

      We can record the sessions at 60 frames per second, and that goes up too. So if there is some question, it's not like I'm going to send an email and wait for the person to respond, and hopefully I'm on their priority respond list and I'm not 20 people deep, because that's going to take them until tomorrow, right. It's going to go right there.

      I'm going to see, OK, this is what they said-- ah, that's the answer to the question. Boom, done, right. Time, time, time, and then cut into that-- cut into that-- increase the margins, cut into the waste problem.

      So right, contextualization, bringing it back. Yeah, the problem is it doesn't have enough syllables. So if we can find a new word for it. It might be a new word in the future, but whatever this is what we start with is really the key to understanding the data, OK. And contextualization differs from visualization just in a couple points.

      First, it's centered in data, right. Like when-- like we were saying before, when we want this room how do we know? Oh, because it's in the data, and then it's just a query to the data. The geometry we know is just a representation of the data, but the data is where all the value is, right. So it's just query, boom, this. Query, boom, that, right. Solved. Backup.

      It's connected meaning you're connected. It's collaborative, it's functional, it's interactive, and it's immersive. And it's visual. So visualization becomes a component of contextualization, but contextualization on the whole needs to include these things. And if it does, we're going to get to the starting line of understanding how this technology is going to solve real problems.

      Because we haven't even got there yet, right. Because it doesn't scale, and there's all these problems with it. Well, let's solve the data problem for our part and allow the forward-thinking technology makers and partners like DAQRI to fix that problem while the early adopters push this over the chasm-- sorry. Yeah, it's early adopters.

      JORDAN MOFFETT: That's right.

      DAVE TYNER: Yeah, whatever the thing is. Yeah.

      JORDAN MOFFETT: Super early.

      DAVE TYNER: But we need you, right. We need the enthusiastic people to keep it alive so we can make that jump, and then we find the practical uses going from possible to practical.

      JORDAN MOFFETT: And when I saw these videos I guess, I know it's built on a VR platform, but one of the things that I'd love to see in the future is to leverage the AR untethered devices to do the same thing. So even if it's not contextual in the sense of the space isn't even built yet, I think we can start to leverage the AR platforms to do almost-- to do VR in a sense.

      So just the ability to have an onboard computer, have an untethered device, to be able to walk infinitely through a space without having motion tracking cubes, the ability to tie in to having almost like a master with several slave devices across the country or across the world and be able to share that environment, or any environment, without somebody having to have a VR-ready computer and a Vive or an Oculus and all that stuff, I think will just get us to that next step of collaboration. So you're right. Solve the data. If you're functioning in the cloud, great, and you've got that bi-directional feedback. But then now let's solve that kind of setup and the wearable problem in this context.

      All right, so that's the end of the presentation. So the call to action really from-- that was my mic-- from my standpoint is we can't just have one or two companies looking at these solutions. I think the key is whether you're on the architecture, engineering, construction side, whether you're on the owner's side, start driving this right from whatever standpoint and from whatever leverage you do have on the team. Start investing in the technology. It's a $5,000 device. I mean, it kind of-- obviously, if it's coming out of my pocket, it sounds like a lot of money. But look at where we were four or five years ago with the cost of VR solutions and really the unavailability of augmented reality.

      So my call to action is just use it. Use conferences like this to develop the partnerships with folks like Paul from DAQRI and with the different software providers. And what I noticed early in a lot of our pilots and different case studies was I've been in the AEC world for 14 years, and we take it for granted.

      And just like I don't know how to develop hardware or develop software, our partners in those industries don't have access to the job sites and they don't understand necessarily the pain points. So that's, kind of, where our role comes in is to say, hey, great product. Awesome technology. These things that you've done here have no value to our industry. They may have value to other industries, but if we're trying to solve that seven point whatever trillion dollar problem, we need to solve it in context of our industry.

      So that's where we've actually-- I mean, some of the folks that we've taken out to our job sites, they come out there and they have fresh boots, vests, everything, and it's like, oh, I just realized you've never been on a job site. So how can you solve a problem that we're facing on job sites when you don't even have access to the job site? So get the technology. Start using it.

      It's not anywhere near perfect. But if we can have collectively 100 voices or 1,000 voices instead of one-- just going back to like what Paul said about the guy that came and talked to him about the safety issue-- if we can get one nugget from each of you on, hey, I think this would be awesome to implement and then they can actually start working on that, that's how we're going to get to that point where this is actually-- why can't connect to box. It never wants to. Yeah. Yeah. Yeah. Cancel that.

      Anyway so we just-- it needs to be everybody's collective voice. And if they know what to focus on, they can kind of funnel that. You end up hearing the request five, six, seven times, and it starts to make you really think about I guess that is a valuable tool to add to the tool chest. So with that, I open it up for any more questions. And I just want to mention make sure you guys do your-- what's up? Oh, sorry.

      PAUL CHEN: I was just going to chime in on top of that. I'm the product guy. So feedback is gold. Whether it's positive or negative, that helps me make a better product. Of course, if you jump in early, you're buying devices, and that helps me continue to be employed, which is great. But from my job perspective, the more users we have, the better feedback I get, the better the product gets, and faster.

      JOSE NEGRETE: Just really quick, I also want to add I've been in the industry for 13 years, and it's been refreshing over the last two years dealing with not just DAQRI, but other AR developers doing R&D-- their response to us, requesting features from them. Some of the things that we've requested, again, like you mentioned, they just haven't thought of. And some things are easy for them to implement.

      Also the barrier of entry is a lot smaller now, like I mentioned earlier. You no longer have to go and learn Unity 3D and develop your own app. You saw you can just use models you already have in BIM 360 Docs and very quickly do your own testing.

      JORDAN MOFFETT: Yes, sir?

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: So the question is, we've shown the recording capability of the smart glasses-- is there streaming capability? Yes, there is. It's, kind of, jerky. So if we stream it to a web browser, someone can see it, but it's, kind of, jerky and jittery. And the reason, again, is that whole motion-to-photon latency that I mentioned earlier. While the computer is trying to display the content for the user of the smart glasses, now you're also asking it to display content and stream it out at 1080p to a remote internet site. So that really puts a strain on that little Intel Core M7. And when you wear it, you can feel it. It gets hot.

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: The user does see that jerkiness, because you're robbing CPU cycles to process content to go over the internet as well as display content for the user.

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: Absolutely. So the fact that it has a tracking camera, we have cameras on board. You can capture photos. You can capture videos with audio. Just to be frank, where we are in reality today we have a suite of apps that can do certain things. This particular app can show models. We have a different app that allows you to capture images and audio and store them places. We have to work to integrate all of them, so you can do all the things within all the different apps.

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: So can we do voice to text? That is a little more complicated. Speech to text requires pretty good horsepower. Generally, you're limited to a set of small commands if you want to do that on the device. So if you want the software to recognize "OK, Google" or "Alexa," that's fairly simple to do. If you want it to recognize your speech and transcribe it into a note that gets attached to a piece of data, that usually requires something like Google speech or IBM Watson speech services. So now you're asking for a very good Wi-Fi connection.

      JORDAN MOFFETT: So you said it's coming in Q2 for next year then.

      PAUL CHEN: Today's the 13th, yeah.

      JORDAN MOFFETT: Here we go right here real quick.

      AUDIENCE: [INAUDIBLE]

      PAUL CHEN: The question is, if I'm looking at the model and I'm looking at reality, can I do measurements-- again, for very good accuracy of as-designed versus as-built. Frankly, today, no, you can't. It's a feature that has been requested. We have a different app called Scan, which allows you to build a 3D mesh and model of what you're seeing, and you can take measurements of that, but we don't have the ability to measure between the virtual and the real.

      And part of that, again, is our accuracy. Right now when we place a model in the world, we're about two to three centimeters accurate. Which is good enough for gross inspection-- oh, the duct should be there. I don't even see a duct. It needs to be there-- issue, but it cannot say, oh, this beam was here. It needs to be over there.

      AUDIENCE: [INAUDIBLE] Is there [INAUDIBLE] software or hardware to [INAUDIBLE].

      PAUL CHEN: Yes, so the question is, there are a set of onboard apps with the device-- is there the ability to augment that with your own development? Yes, there is. We do have a Unity 3D-based SDK, which gives you access to all the hardware, all the sensors, all the cameras.

      To your point about connecting to back-end ERP systems, we have yet another app that resides on the glasses today. It's called Tag. That application allows you to connect back to ERP systems like IBM Maximo or SAP for asset management data, or even IBM Watson for performance data, retrieve that data, and display it in the glasses. So now a technician walking up to an asset, whether it's a pump, a motor, an AHU, et cetera, can see the live performance data of that thing, and they can make a better decision on whether they should take it down for maintenance, whether it's about to fail, et cetera.

      So some of that is available now in a different app. We need to integrate it across all the apps. But you do have the ability to build your own apps on top. And that's what Jose was getting to. Before we built our own apps, we relied on third-party developers to make apps.
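As a rough picture of the Tag pattern described above, it is a plain pull of live asset data that gets rendered next to the physical asset. The sketch below is entirely hypothetical: the endpoint, asset ID, field names, and thresholds are invented placeholders, not the Maximo, SAP, or Watson APIs.

```python
import requests

# Hypothetical asset-management endpoint standing in for Maximo/SAP/Watson.
ERP_URL = "https://erp.example.com/api/assets"

def live_performance(asset_tag: str) -> dict:
    """Fetch the latest readings for one asset (placeholder REST call)."""
    resp = requests.get(f"{ERP_URL}/{asset_tag}/performance", timeout=5)
    resp.raise_for_status()
    return resp.json()

def maintenance_hint(reading: dict) -> str:
    """Toy decision rule for the 'take it down for maintenance?' call."""
    if reading.get("vibration_mm_s", 0) > 7.1 or reading.get("bearing_temp_c", 0) > 85:
        return "flag for maintenance"
    return "running normally"

# In the glasses, a technician looking at an air handler might effectively see:
#   maintenance_hint(live_performance("AHU-03"))
```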

      JORDAN MOFFETT: Which meant people like me couldn't use them, so.

      AUDIENCE: Very cool presentation guys. Thank you very much. Question to McCarthy guys. What made you partner with DAQRI versus some other competitors in the space?

      JORDAN MOFFETT: So cover your ears, Paul. So we've-- so yeah, we've been in the AR game for a year and a half, two years, Jose and I, thinking about it, and so we've looked at most of the viable solutions. And honestly, before this we had settled on the HoloLens and worked with-- whittled down the software developers to two. And one of them we got to do voice commands, so we can talk about that later.

      So the intriguing part for us was that we have an enterprise agreement with Autodesk, and we actually randomly had-- I had an email from DAQRI, and it was like, hey, contact us about the headset. And this was like a year or so ago. So one of the sales reps came out, showed us the device, and we did a few things and they mentioned that there was maybe an alpha at that time with Autodesk.

      So we started pinging our Autodesk partners and saying, look, what is this software that they claim is working on the headset? And where we're at with all the apps on the HoloLens, they just don't-- there's no direct translation from the cloud that we already use straight into the headset, right. So there is no Autodesk app built for the HoloLens.

      So it was an intriguing partnership. And one of the other goals was we really wanted to bring something to AU that was real, that wasn't perfect, right. Because sometimes you sit through kind of a pie in the sky, hey, everything worked great and it's amazing and go do it. We wanted to, kind of, be honest with, like I said earlier, it's not perfect. But without starting a partnership like this where you have the three, kind of, legs to the stool that you really need to get it to the next level, then you're never going to go anywhere, right.

      So we thought if we can really get to that point where we're pushing data to the headset and pushing data back down, and we're able to provide back to the design team feedback in their native environment, in Revit, whatever the case may be, we're just going to cut out all those issues that kill us on time, right-- two weeks to respond to an RFI, two weeks to process a submittal. Well, what if you just show them the issue in context, and you can get it right back to the design team through their software? That was, kind of, the concept. Any other question-- oh, one more. No, it's all good.

      AUDIENCE: You mentioned that--

      JORDAN MOFFETT: You have three minutes

      AUDIENCE: --the DAQRI [INAUDIBLE]. Is there a new [INAUDIBLE].

      PAUL CHEN: Absolutely. We rev the software regularly. Software's much easier to rev than hardware. We're also in design for the next rev of the hardware and in planning for the rev after that. So definitely we're taking feedback from users who are wearing it not only in the software but on the ergonomics and the usability of the hardware itself.

      I can't give a date on that hardware, but it's definitely in production. Again, it's the 13th. So maybe by the 19th it'll be ready.

      JORDAN MOFFETT: I like it. Any other questions, comments, concerns? All right, well, if you will please remember to fill out your survey, good or bad. Give us feedback on this presentation.

      We'll hopefully have other chances to deliver it, and maybe next year we'll have some improvements to regale you all with, OK. So thanks for your time. We appreciate it.

      [APPLAUSE]
