Description
Key Learnings
- Understand the visualization pipeline that enables live viewing of heavy CAD models in augmented reality on DAQRI devices
- Learn how to configure CAD models for optimal viewing performance on DAQRI smart devices
- Understand how Forge services can be used to optimize the CAD model and convert files to types that are supported by Unity-based applications
- Understand how IoT live-asset performance information from sensors can be displayed in the right context using augmented reality
Speakers
- Paul Chen: Dr. Paul P. Chen is a Director of Product Management for DAQRI, managing the OS/developer tools/partner integrations team. Prior to joining DAQRI, Dr. Chen was Senior Director of Product Management and Manager, Technical Marketing for Wind River Systems, where for 13 years he managed various embedded, real-time operating system platforms. Prior to Wind River, Dr. Chen performed product management for account management software-as-a-service (SaaS) at Kovair, and technical marketing and developer training at Geoworks, creators of a graphical, embedded operating system used in some of the first PDAs and smartphones. Dr. Chen holds a B.S. in Electrical Engineering from Stanford University, and M.S. and Ph.D. degrees in Electrical Engineering from the University of Illinois, Urbana-Champaign.
- Cyrille Fauvel: Cyrille Fauvel got his first computer when he was 12 years old, and as he had no money left to buy software, he started writing code in assembly language. A few years later, he wrote code in Basic, Pascal, C, C++, and so on, and he’s still doing that. He’s been with Autodesk, Inc., since 1993, joining the company to work on AutoCAD software originally. He’s passionate about technology and computers. At Autodesk he’s worked in various roles, from the design side to manufacturing and finally to games and films. He is now an evangelist for the Forge API (application programming interface) and web services, and he has a desire to deliver the most effective creative solutions to partners using these APIs and web services.
PAUL CHEN: Good afternoon, everyone. Thank you very much for coming to today's session. My name is Paul Chen. I am a director of product management for DAQRI. I am accompanied today by our Autodesk partner, Cyrille Fauvel. Today, we're going to demonstrate and talk about visualizing your designs in augmented reality using smart wearable devices, ideally those created by DAQRI.
A little bit of background on myself, I currently manage the operating systems, developer tools, and partner integrations group at DAQRI. Prior to DAQRI, I worked at a company called Wind River Systems, where I worked with real-time, embedded operating systems for a number of years. Here's the summary of today's session. It's divided into two parts.
In the beginning half, I'll give you first a quick overview of the four key learnings that you'll take away from this session. Then I'll go into an introduction on DAQRI and, as well, a brief introduction into augmented reality, for those of you who aren't familiar with it. We'll then talk about the key benefits and values that we believe augmented reality can bring to BIM.
Once we're settled on those key values and benefits, we'll turn to the mechanical questions. And that's where we get into the key learnings that you'll take away today. We'll talk about the four key learnings that show you how we actually bring your BIM content into augmented reality experiences on DAQRI smart devices. I will finish up the slide presentation with a few conclusions, and then we'll move to the second half of the presentation, where Cyrille will give you an actual demonstration of what we've talked about today.
Just to review the key learning objectives, the first objective is to just give you a very high-level overview of the visualization pipeline that we're going to use to bring your BIM content into AR experiences on a smart wearable device. Key learnings two and three actually get into the nuts and bolts of how we do that. You first bring the model down from BIM 360 Docs and optimize it for the kind of file types that we support in our Unity-based applications. Key learning three is how you configure that model for use in augmented reality. And then, finally, we'll see how you can actually also display live data streaming from, say, IoT-enabled equipment into your augmented reality view as well.
An introduction, then, to DAQRI. DAQRI is the world's leading provider of professional-grade augmented reality. And I'll get into what we mean by professional grade in a few slides, but, essentially, we provide hardware devices that run our software to deliver augmented reality experiences. The picture shows our newly released DAQRI smart glasses product. DAQRI was founded in 2010 in Los Angeles and continues to be headquartered in Los Angeles, but we've now expanded to multiple sites throughout the world where we do R&D, engineering, sales, and marketing.
We build everything ourselves. We design and build our own hardware. We design and build our own software. We have mechanical engineers on staff, electrical engineers, computer vision, computer graphics experts, operating system experts. We're 64% technical staff. We have over seven years experience in building augmented reality technology, and our executive staff is comprised of alumni from multiple successful companies.
We deliver professional-grade augmented reality on these two devices. Last fall, we released the DAQRI smart helmet, which we showed at this conference last year. We just released our DAQRI smart glasses this month, which we'll also be showing at our booth at Autodesk University. We also provide an SDK for our developers and partners to create applications to run on our smart devices.
Those applications all run on the DAQRI Visual OS, VOS. This is a highly optimized, Linux-based operating system that allows you to use professional-grade design tools, such as those that Autodesk creates, to bring your content onto our devices in augmented reality. What, then, is augmented reality?
Well, in a nutshell, it's the addition of virtual content to the user's experience of the physical world. So here's an example. That virtual content doesn't have to be visual. It can be sound. It can be haptic feedback as well. But generally, when people think about augmented reality, they're thinking of augmenting their physical experience with visual data. And in this case, we're augmenting a driver's view to get additional information about where he or she is going and what he or she needs to know while driving.
This is in stark contrast to virtual reality, in which the virtual content entirely replaces the physical world, and you have no contact with the physical world. At DAQRI, we believe there are more opportunities to enhance people's lives, work, and play with augmented reality than there are with virtual reality. So what does DAQRI mean when we talk about professional-grade augmented reality?
Well, Pokemon GO was probably the world's first and most popular introduction to augmented reality technology. And I would posit that it is still the most lucrative augmented reality technology there is today. As of July, it had grossed an amazing 1.2 billion US dollars in revenue, and that's in one year of being on the market. But all that aside, Pokemon GO is just a game. It's just a toy. And the reason I say that is that it uses GPS coordinates to place the virtual content in the real world.
Now, remember, GPS is accurate only to a radius of about five meters. That's a radius of five meters. That's probably fine if you're placing an object, such as a bulbasaur or whatever that is, in the real world. It's not the kind of accuracy you need if you need to place a work instruction on top of a valve, which is part of a sub-assembly on a piece of machinery. You need to be centimeters and millimeters accurate in that case.
And that's what we're talking about with professional-grade augmented reality. DAQRI delivers that professional-grade augmented reality through technology that minimizes what's known as the motion-to-photon latency. The motion-to-photon latency. So, again, augmented reality, we're displaying virtual content onto a visual display that has to register and align with stuff in the real world. Meanwhile, the user's moving his or her head walking about.
The motion that the user's introducing has to be recognized by a computer vision system. Natural features have to be recognized and tracked. The system then has to decide what virtual content to display. That virtual content then has to be prepared and rendered and displayed onto the visual displays at the right place. So by the time the user gets to wherever he or she is going to look, the virtual content is aligned with the real world. That's professional grade, and that's what DAQRI delivers.
Here's a screenshot of an example of a worker using the DAQRI smart helmet to view professional-grade augmented reality on a system where he needs to do some disassembly first in order to do an inspection of some equipment underneath. You can see that the precision of the registration of the virtual content is essential for making this work. Here's another screenshot showing a worker using our DAQRI smart glasses, again viewing very precise augmented reality information in order to do an inspection of some machinery.
Now that we know what augmented reality is from DAQRI's perspective, how can it improve BIM? Well, actually, there are multiple use cases for the improvement of BIM using augmented reality. These are three very obvious ones that I'll go through. In the design phase, architects and designers obviously create 3D models. If they can view them while they're working, that enables them to get a better perspective and review of the form, function, and aesthetics of their designs.
Even better, if they can collaborate with different team members, that makes sure that all teams are proceeding with designs in lockstep without any issues and misunderstandings. And in the best case, they collaborate with the end users, with the customers so customers can see in augmented reality exactly the ballroom that's going to be designed, and they get a feeling for how tall things are, where windows are, what sight-lines are like. This can help identify potential issues much sooner than if you start building and then the customer starts walking through and says, I don't like where that window is. Now it's expensive to change.
It's easier and cheaper to change when you do it in the design phase. During the actual building and construction, use of augmented reality can help with laying out building materials and routing services. Workers can actually see where ducts, electrical conduits, and plumbing should be laid out, and where materials should be placed. They can identify potential issues much sooner and, again, faster issue resolution means saving time and saving money. The collaboration part plays in here as well. If you identify issues and can notify the correct team quickly, issues can be resolved faster.
Once the construction is completed, you may need to do inspections. Augmented reality is perfect for guiding inspections through every point of the process, making sure nothing is overlooked or missed. If you need to renovate, the augmented reality views can show you the hidden infrastructure to ensure you do the renovation safely. It can also help with retrofits. If you've got some equipment placed at a particular site and you now need to replace it with a new pump, HVAC system, blower, et cetera, augmented reality can show you whether that equipment will fit where it's supposed to go. And even better, it can help you plan the optimal path to get it there.
We've had customers who have used augmented reality when building ships. As you can imagine, it can be very difficult to take a piece of equipment down into a submarine through all the companionways and hatchways. They told us that once they got the machine there, it was oriented the wrong way. And there wasn't enough room to turn it around, so they had to take it all the way back out, reorient it, and then bring it down again. That cost a lot of time and money. You could have saved all of that with an AR evaluation ahead of time.
Here's a screenshot of an example of an architect in the design phase. He's visualizing in 3D what his design looks like. This gives him a very clear idea of perhaps how the building situates itself on the city block. He's doing this alone, but this could also be done collaboratively and, again, ideally with the end customer.
AUDIENCE: Would the end customer also have to have the glasses?
PAUL CHEN: Question is, would the end customer have to have the glasses? Yes. All the customers or anyone collaborating would need to have a DAQRI smart device drawing upon the same database and viewing the content.
AUDIENCE: So would they be connected to the view of the person who is turning his head, or is each one independent in what they see?
PAUL CHEN: So the question is, is each person looking at one person's view, or do they each have their own view? And it's the latter. There's a common model. Each person's device knows where that person is with respect to the model and shows the virtual content in the correct alignment and proportion to that particular person. But they're all sharing the same data.
AUDIENCE: No, I understand. So whenever the main person turning his head everybody--
PAUL CHEN: No. Each person sees his or her own view. If you're over there looking and I'm over here looking, we see the same thing. But as I move, the content moves for me but not for you.
CYRILLE FAUVEL: Yeah, so if you want to point someone to a particular view or position in the view, you can use some techniques like a laser pointer. So you point with a laser or something in the field. Then that person, even if he has a different angle of view, can see what you're looking at. And then he can move or look at the same position.
PAUL CHEN: Thanks, Cyrille. In the building and construction phase, the building may, of course, be semi-finished. You can use 2D blueprints. That's the state of the art. It has been for probably hundreds of years, but it's a lot easier with AR visualization to see exactly where the HVAC ducts need to go or where the electrical conduits will be.
And here's an example of a renovation. Before you dig up the street, it would be a good idea to know where exactly the water pipes are, et cetera, so you can do this renovation safely. If you make a mistake, that's going to be very costly. So this can help save you a lot of time and money.
So in essence, the key values of using augmented reality for BIM are saving time and saving money, and that means a lot to a lot of people. So now we get to the actual nuts and bolts. How do we actually bring your BIM content down into a DAQRI smart device in an AR experience?
This is key learning number one. It's a high-level view of the entire visualization pipeline. We show here a notional diagram of the Autodesk BIM 360 suite of tools and services. We're focusing on the BIM 360 Docs area because this is the repository where you're storing all of your 3D content.
Assume you have a model that you want to bring into an AR experience. You'll download that model into our web-based BIM configurator. This application takes the model, which has been converted from BIM 360 Docs to work on our system. You then annotate it for the AR experience. You store that data back into the repository. And then when you are on site with your DAQRI smart device on your head, you load that data into the device. And the model opens in place, in context, in situ with where you are.
Then you can work with the model. Ideally, then you might even notice conflicts, snags, or BIM issues, RFIs. Not part of today's demonstration, but planned enhancements will allow you to share RFIs or BIM issues back up to the repository so they can be viewed quickly by teams.
AUDIENCE: Question. Does it require a persistent connection to use the glasses on site [INAUDIBLE] or [INAUDIBLE] connected?
PAUL CHEN: The question is, do you need a persistent Wi-Fi connection on site in order to view the model, or would you download it first? And you answered the question. If you're on an in-progress building site, there may not be Wi-Fi. So if you know you're going on site, you would need the Wi-Fi connection the first time to download the model. You can store it locally on your device, then go on site with no Wi-Fi, bring up the stored model, and you're fine. Later phases of the construction may have power, and then you can set up Wi-Fi. But it's definitely usable without.
Another future enhancement would allow you to send data into some other cloud services that DAQRI can provide, perhaps notifying teams immediately when you see an issue or an RFI. That way you don't have to fill out a report or something. The information immediately flows to that team, again, short-circuiting the issue resolution time, saving money. So that's the overall view. Question.
AUDIENCE: Do you have to have the 360 Docs? Can you take the [INAUDIBLE] from your local server into your BIM configurator?
PAUL CHEN: The question is, do you have to have BIM 360 Docs? Can you do it from, say, your local server? The way the web configurator is built, it's configured to look for BIM 360. So when you open up the web app, it asks you for your BIM 360 Docs login. We could configure it to use other servers as well, local servers or perhaps other companies' services, if we wanted to, maybe not. But in general, it's built right now to work with BIM 360 Docs, with Autodesk as our partner.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: Our feeling was that BIM 360 Docs would be the repository for most of the data. If you're not doing that--
AUDIENCE: Not everyone.
AUDIENCE: [INAUDIBLE] do not have documents in BIM 360 [INAUDIBLE] software to convert it into an AR-viewable [INAUDIBLE] and load it into [INAUDIBLE]?
CYRILLE FAUVEL: So the answer is yes. I can come back to that on Saturday to explain in more detail. But the API that we're using to build the AR scene on the DAQRI smart helmet is very flexible in the way that we prepare the scene. So you can either stream it to the device directly, store it locally on your device to work offline, or, from the Unity editor or the Stingray editor, create the asset and create a prefab right there. But you need to go through the Forge server to get the data as the first step. After that, where you store it doesn't really matter. Paul.
PAUL CHEN: Question.
AUDIENCE: So the device, the helmet and the goggles, they have all the compute power that they need independently of any network?
PAUL CHEN: Correct. Question is do the DAQRI smart devices work independently? Yes. They have all the compute power, memory, et cetera, to do all of the AR experience standalone.
AUDIENCE: [INAUDIBLE] data once it downloaded for that [INAUDIBLE] area [INAUDIBLE]?
PAUL CHEN: Correct. Once you have the data, then you're good to go.
AUDIENCE: How's the positioning work?
PAUL CHEN: I'll get to that. Good question. The question was, how does the positioning of the model work in the real world? And we'll talk about that.
AUDIENCE: What's the [INAUDIBLE]?
PAUL CHEN: I'm sorry. Can you repeat that?
AUDIENCE: You said once you get the model from the BIM 360 Docs, you can--
PAUL CHEN: Yes.
AUDIENCE: --apply it [INAUDIBLE].
PAUL CHEN: Correct.
AUDIENCE: What exactly do you [INAUDIBLE]?
PAUL CHEN: I'll give you concrete examples of what we're adding to the model, but the point is that you don't actually edit the model. The model stays pristine. We're just adding AR data that goes along with the model. So that was the overview. Key learning number two: now, how do we bring those models, in the first step, from Autodesk BIM 360 Docs into the BIM configurator?
The BIM configurator is built using the Forge Viewer APIs, and we're leveraging all the Forge services that we can through those APIs. So, for example, the workflow is: you open the website, the BIM configurator, and you log into your BIM 360 Docs account, assuming you have one. You select the model that you would like to bring down. Forge then brings down the model, translating it from whatever format it was in into a format that works for our smart devices inside a Unity app. So this is really all done by Forge. Terrific tools that Autodesk provides.
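As an illustrative sketch of the workflow Paul describes, a web application built on the Forge Viewer might load a translated BIM 360 model roughly as follows. This is a hedged example, not DAQRI's actual configurator code: the token endpoint and the URN are placeholders, and the calls reflect the publicly documented Forge Viewer API.

```typescript
// Minimal sketch: initialize the Forge Viewer and load a translated model.
// The /api/forge/token endpoint and the document URN are placeholders; a real
// configurator would obtain both from its own backend and from BIM 360 Docs.
declare const Autodesk: any; // provided by the viewer3D.js script from Autodesk

async function showModel(container: HTMLElement, documentUrn: string): Promise<void> {
  const options = {
    env: 'AutodeskProduction',
    getAccessToken: async (onTokenReady: (token: string, expiresIn: number) => void) => {
      // Hypothetical backend endpoint that returns an OAuth token for the viewer.
      const resp = await fetch('/api/forge/token');
      const { access_token, expires_in } = await resp.json();
      onTokenReady(access_token, expires_in);
    },
  };

  Autodesk.Viewing.Initializer(options, () => {
    const viewer = new Autodesk.Viewing.GuiViewer3D(container);
    viewer.start();

    // The URN identifies the model that the Model Derivative service has already
    // translated into the viewer's display format.
    Autodesk.Viewing.Document.load(
      'urn:' + documentUrn,
      (doc: any) => viewer.loadDocumentNode(doc, doc.getRoot().getDefaultGeometry()),
      (errorCode: any) => console.error('Document load failed', errorCode)
    );
  });
}
```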
Step three, or key learning three, is where DAQRI comes in. Now you need to configure that model for use in an augmented reality experience. The first thing to remember is that we have limited compute power and memory on the devices. We are standalone, but it's a mobile device.
Your models are probably quite large. So, in point of fact, you probably can't bring down the model of an entire building and have it render in a performant manner on the device. So for good performance, you need to do a sub-selection of your model. Focus on a particular room or an area or a scene, and use that as the basis for each particular experience. You can, of course, create multiple experiences, one for every section or wing or room that you're trying to model and view on site. But it's better to view them as separate scenes rather than trying to view the whole model at once.
We have tools now built into the web configurator, built on Forge, that allow you to select components of your model. It uses the standard object browser. Cyrille will show you other ways to select components to add to the model. There is then a selector dialog box that tells you what you've selected. And even more importantly, it's a bit hard to read, but there's a number in the top bar that gives you the number of polygons that will be required to render that model on our displays.
And while the number is green, we're telling you that your model is good and will perform satisfactorily on our devices. If the number gets too large, it will turn red. We don't prevent you from saving your scene this way, but we're warning you that you might have slightly degraded performance when you view the model if it gets much too large.
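To make the polygon budget concrete, here is a hedged sketch of the kind of check the configurator's counter implies. The 300,000-triangle budget is the figure mentioned later in the demo, and getFragmentTriangleCount() is a hypothetical helper standing in for however the real tool reads triangle counts from the viewer's geometry data.

```typescript
// Sketch only: sum the triangles of the current selection and compare against a budget.
const TRIANGLE_BUDGET = 300_000; // rough budget for a comfortable frame rate on the device

function selectionWithinBudget(
  viewer: any,
  getFragmentTriangleCount: (model: any, fragId: number) => number // hypothetical helper
): { triangles: number; ok: boolean } {
  const model = viewer.model;
  const instanceTree = model.getData().instanceTree;
  let triangles = 0;

  // Sum triangles over every fragment belonging to the currently selected objects.
  for (const dbId of viewer.getSelection()) {
    instanceTree.enumNodeFragments(
      dbId,
      (fragId: number) => { triangles += getFragmentTriangleCount(model, fragId); },
      true // recurse into child nodes
    );
  }
  return { triangles, ok: triangles <= TRIANGLE_BUDGET };
}
```

The counter in the UI would simply stay green while `ok` is true and turn red once the budget is exceeded.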
To answer now the question about how do we position the virtual content in the real world, when you are working with the model, you will place a marker, which is this blue DAQRI logo push pin. You'll place that marker onto the appropriate place in the model. And then you print out this visual image target on a piece of paper. And then on site, you place that piece of paper at the appropriate location that matches the location of the model.
Then when you view the model on site, you scan the image target. The system now knows that corresponds to the marker. The model opens at scale, registered to exactly where it should be.
AUDIENCE: One point gives you 20 centimeters of accuracy?
PAUL CHEN: One point gives us enough accuracy to align to centimeters or less. And even further, once you've scanned that image target, you no longer have to look at it. You can look away. You can move. You can walk around. Our systems include what's known as visual inertial odometry, VIO, that tracks your movements, whether physical, whether head, whether eyes. It will then move the virtual content to match what you're looking at so that even without seeing the image target, the virtual content stays registered in the physical world.
AUDIENCE: How big the print should be, and is it black and white or color?
PAUL CHEN: It can be black and white. It can be color. We just happen to have an image target that's black and white there. It's 8 1/2 by 11 or A4.
AUDIENCE: [INAUDIBLE]
CYRILLE FAUVEL: [INAUDIBLE] we're going to use.
PAUL CHEN: There it is.
AUDIENCE: So you're going to have to orient that picture to the direction where that actually is within your model.
PAUL CHEN: Exactly.
AUDIENCE: OK. That makes sense.
PAUL CHEN: Finally, you'll save and publish the experience from the web configurator back into the authoritative repository, the BIM 360 Docs. Note again that the original BIM 360 model is unmodified, which is good from the designer's point of view. They don't want you changing the model. There's also a unique QR code that will be generated for each model. And that's how you identify which content you want to bring down when you're on site wearing the smart device.
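For a sense of what publishing the experience might involve, here is a hedged sketch of the kind of data the configurator could store alongside the untouched BIM 360 model. The /api/experiences endpoint and the field names are assumptions for illustration, not DAQRI's actual service.

```typescript
// Illustrative only: the AR metadata saved next to the model, never into the model itself.
interface ArExperience {
  sourceUrn: string;          // the BIM 360 Docs model this experience was built from
  selectedDbIds: number[];    // the sub-selection chosen in the configurator
  marker?: {                  // optional image-target anchor placed in the model
    position: [number, number, number];
    rotation: [number, number, number, number]; // quaternion
  };
  triangleCount: number;      // used for the green/red budget indicator
}

async function publishExperience(experience: ArExperience): Promise<string> {
  const resp = await fetch('/api/experiences', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(experience),
  });
  // The backend is assumed to return an identifier that gets encoded into the
  // per-experience QR code scanned on the device.
  const { experienceId } = await resp.json();
  return experienceId;
}
```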
AUDIENCE: So, I'm sorry. I'm going to ask you so many questions. You said that it's always best to make different scenes so it works faster and more efficient. A couple slides ago, you showed us one that was trying to see how the model is positioned in the city block as the whole model. So how is that possible there and it's not possible here?
PAUL CHEN: What he's probably doing there is just selecting the components that are the exterior of the model. So he's not selecting all the interior floors, walls, doors, et cetera, that may be detailed inside the full building model.
AUDIENCE: So your model should be configured so that you could select different areas [INAUDIBLE].
PAUL CHEN: Exactly. And maybe I should be clear. It's not, perhaps, a separate room that you may select. You can select all the walls of an entire palace. Just don't select any of the doors, stairs, inside. And that gives you--
AUDIENCE: Can you do it by cropping views?
PAUL CHEN: There is a bounding box selection that allows you to do that as well. So you can either select components or do a bounding box. And Cyrille has examples of that. Yes.
AUDIENCE: How am I interacting with the device? You said I have to point at the target [INAUDIBLE]. I don't see any mouse interaction. I'm curious [INAUDIBLE] or is it actually reading your hands?
PAUL CHEN: No. We do no gestures because we feel that this gets very tiring. We have what we call a reticle, which is a visual mouse. It's a dot, and we call it gaze and dwell. So this is your mouse-click interface. For the scanning, though, the application just brings up-- such as your phone has-- a little square target and shows you a visual "please scan" prompt. And you just look at it and the reticle--
AUDIENCE: It's tracking [INAUDIBLE].
PAUL CHEN: It's tracking your head motions.
AUDIENCE: Head. I see, so [INAUDIBLE].
PAUL CHEN: Yeah. So as you move your head, the reticle moves.
AUDIENCE: Interesting. And I can stay still for the [INAUDIBLE].
PAUL CHEN: Exactly. When you dwell on something for, I think, a half second or so, then it selects.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: Exactly, without the tiring--
AUDIENCE: Yeah. Totally.
PAUL CHEN: Here's a screenshot of the web-based BIM configurator. There's the popular Revit house model that I've brought up. In the center-left column is the model browser object list. That should be familiar to people who have used Forge. It comes directly from the Forge APIs. You can click on certain pieces to create your sub-selection.
The selection dialog box is in the upper right showing you exactly what you have selected. Again, the polygon count is up at the top right. This one says 30K, and you're certainly fine to display that many polygons. The rest of the tools on the bottom should be familiar. They all come from the Forge APIs. You'll notice, though, there are four new tools in blue at the bottom right. These are DAQRI-specific tools to do just those tasks that we've been talking about: placing the marker, doing a sub-selection, and then publishing the model and its contents for AR viewing.
When you get on site, you put on the DAQRI smart device. And now you want to view that content. You open up an application on the device. It's called the BIM 360 Viewer. You scan the QR code. That encodes to the system which particular content you want to view. Assuming you have Wi-Fi connectivity, the model is brought down. If you know you don't have Wi-Fi connectivity on site, you can do this at your office and pre-load and store that model.
When you're on site, you then look for the image target. Scan the image target, and now the model opens at full scale, registered to the physical world. Note that the use of the target is optional. If you decide not to put a marker in your model, then when the system opens the model, it assumes you're a designer or an architect or something and you're not working in the real world. You just want to see the model. It will open up by default at half scale or, actually, what we call tabletop, within a one-meter bounding box.
You're not limited to that size. You can scale it, rotate it, translate it any way you like. After that, we just bring it up at tabletop size by default. You can even view object properties, again data directly from your BIM model, while you're inside the real world. And you do that, again, using your gaze and dwell on objects that you're looking at.
AUDIENCE: Sorry, maybe I missed the first part. Was that the target image? Do you use that one to place in the real world at that exact location?
PAUL CHEN: Yes.
AUDIENCE: As soon as you lose that image, that target image, a little bit of the model will still be visible then?
PAUL CHEN: Absolutely. So the question is, once I use the image target to register the virtual content, what happens if I look away from the visual target? And the answer is the virtual model stays exactly in place, because our visual inertial odometry system, the VIO-- it's a computer vision system-- knows where it initialized and then tracks your head and body movements. So wherever you go, it knows where you are in relation to where you started and keeps the virtual content in the right place. So the model doesn't disappear, by any means. It stays visible and in place.
AUDIENCE: Yeah, so if it's a large model you can still see maybe half of it even if the targeting is just somewhere there.
PAUL CHEN: You'll see whatever parts of the model you're looking at. So, for example, if I scan that target and I have a model of this room, the whole room model will be available to me. If I'm looking over there, I'll see that part of it. If I turn over here, I'll see that part of it. I walk over here and look here, I'll see this part of it. So the entire model is there. The system knows which part you're looking at, and we'll show you that, again, registered against the physical world.
CYRILLE FAUVEL: So the target images, they are to position your model in the space you are in, and after that, the smart helmet will track all your motion to put you at the right place in the model.
AUDIENCE: That is the biggest issue we're having with the AR models. It has to stay placed. That's really important.
CYRILLE FAUVEL: Yeah, so that's a tracking thing.
AUDIENCE: If that works, that's brilliant.
PAUL CHEN: That's what professional-grade AR is.
AUDIENCE: Do you need one image for each scene?
PAUL CHEN: We would need one image target for each scene, because when you move to a different location, you now need to reinitialize against that position. Yes?
AUDIENCE: Can you adjust the view?
PAUL CHEN: Can you adjust the field of view? No. That's bound by the hardware, so that's static. And we're working to make it bigger. Everybody always wants larger field of view. So that's one of the top priorities for the hardware team.
AUDIENCE: So what if you want to show the model to your client, and you want to show the entire model, but you are not on the site? How are you going to do that? Are you still going to be doing scenes?
PAUL CHEN: It depends on how you've done your selection. If you want to see, for example, the designer's view of the whole building, you can just select the exterior walls, the roof, the windows--
AUDIENCE: No, I want them to experience [INAUDIBLE] and how the space feels and all that thing.
PAUL CHEN: It depends on your model.
AUDIENCE: We are in the business where every centimeter is really important. We deal with knives and cutting objects. So every centimeter is important. So for the client to feel that, OK, this is enough space, how we going to do that?
PAUL CHEN: It depends on the level of detail that you want to show your client at each stage of the process. If you want the client to be able to walk through the entire facility--
AUDIENCE: Yes.
PAUL CHEN: --probably there's too much detail on the model to load it all and then view it in any kind of performant manner. So you'll have to choose some scenes and then move from one to the next. If you're looking for just an overall flow of the workflow, and you'll have the tables, you'll have the machines, and other things, you can put those there without the actual implements and some of the smaller details and that way reduce the polygon count and make the entire model more visible, just with less detail. Yes?
CYRILLE FAUVEL: Or if you create your application in such a way that every time you cross a door you swap rooms, then you can unload one room and load the second one as you walk through. Then you have different rooms appearing. It depends on your experience. So maybe we can take that offline, and I can explain a different approach to you.
AUDIENCE: Yes. Yeah, yeah. That would be neat.
PAUL CHEN: Yes?
AUDIENCE: If the image moves or tilts or for some reason is shifted from its place, would it still register and align the model to the space and--
PAUL CHEN: So you're saying if the virtual content gets misaligned somehow.
AUDIENCE: Yeah.
PAUL CHEN: So the question is if the virtual content somehow becomes misaligned with the real world, what do you do? Well, you have a couple options.
AUDIENCE: The image--
PAUL CHEN: You can go back to the image target and rescan it.
AUDIENCE: I mean if the image itself--
PAUL CHEN: Oh.
AUDIENCE: --is shifted [INAUDIBLE].
PAUL CHEN: So, right. It's a piece of paper. Someone might rip it. It might get tilted. It might fall down and someone puts it back in the wrong place. Yes, in this case, the system's going to initialize exactly where that paper is and put the model with respect to that paper. It'll be wrong.
AUDIENCE: [INAUDIBLE] the model is not going to--
PAUL CHEN: Exactly.
AUDIENCE: --align [INAUDIBLE].
PAUL CHEN: But in those cases, we have tools that allow you to then translate, rotate, and move the model, maybe even if you need to scale it, so that you can manually register it where it should be.
AUDIENCE: On site.
PAUL CHEN: On site, if you don't have time to find out where that marker should be. We understand things change on a building site. In fact, the wall it was on might have been torn down. So now you have to replace the marker in the model and then replace the image target in the real world. Yes?
AUDIENCE: I guess I'm concerned [INAUDIBLE] the skill of the operator can be different from [INAUDIBLE] the operation skill. Is it set up so certain people can reset all the stuff and then just give it to people or--
PAUL CHEN: The way that we've designed the system so far, all the web-based configuration is ideally done by, say, an administrator. The people on site can't edit that AR content. They're just viewing it. We're going to add features where they can then give feedback, such as BIM issues or RFIs or other snags, et cetera, but not change the AR content in general.
Let's show a few examples. Here's an example of a screenshot or actually a point of view of a conference table in our office shown through our smart glasses. And then the designer has decided to bring up the AR content. There's no marker. There's no image target, and the model appears at tabletop size within a one-meter bounding box.
In the other experiences, where you may actually be on site, you would have scanned that image target. And when the model loads, it'll be registered against that image target so that it's now placed accurately in the real world. So that's what it would look like at scale. The final key learning is that you can also incorporate live, real-time data in those augmented reality views. Question.
AUDIENCE: This picture [INAUDIBLE] if you were to take the client to the site and it's different light, different cloud, does that affect--
PAUL CHEN: [INAUDIBLE] doesn't affect-- As long as you can scan that image target, the system will place the virtual content in the right place. If it's pitch black at night, the system won't see it.
AUDIENCE: No.
PAUL CHEN: But you could use a flashlight probably.
AUDIENCE: You're not going to take your client in the night. But I'm saying if it's cloudy or sunny--
PAUL CHEN: Sure. Sure.
AUDIENCE: --or the sun in your face or on your back does not [INAUDIBLE].
PAUL CHEN: Well, if it's a personal house, a personal residence, you may take them at night so they can see what it looks like against the views. What does it look like out these windows at night, et cetera.
AUDIENCE: That makes sense.
PAUL CHEN: But as long as you can see the target, you should be fine. You can display data in real time. For example, IoT data from sensors that are in your model and also in the real world will be sent up to the cloud. You can then read that data in real time and display it in augmented reality, properly registered against the devices.
This is a specific use case of data visualization in AR in general, and it's a very popular use case, especially for inspections. As someone is walking through an inspection tour, each particular piece of machinery can have its data called out, drawn directly from, say, GE ServiceMax or IBM Maximo or whatever asset database, showing you the live data of that machinery. And very quickly, problems can be identified.
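As a hedged sketch of how such live readings could reach the AR overlay, the viewing application might poll (or subscribe to) a sensor gateway and keep the latest value per model object. The endpoint, payload shape, and dbId mapping below are assumptions for illustration.

```typescript
// Keep the most recent reading for each model object so the AR layer can draw it.
interface SensorReading {
  sensorId: string;   // e.g. the MAC address of the sensor tag
  dbId: number;       // the model object the sensor is attached to
  temperature: number;
  lux: number;
  timestamp: string;
}

const latestReadings = new Map<number, SensorReading>(); // keyed by dbId

async function pollSensors(endpoint: string): Promise<void> {
  const resp = await fetch(endpoint); // hypothetical gateway, e.g. '/api/iot/latest'
  const readings: SensorReading[] = await resp.json();
  for (const r of readings) {
    latestReadings.set(r.dbId, r); // the overlay reads from this map each frame
  }
}

// Poll every few seconds; a production system might use WebSockets or MQTT instead.
setInterval(() => pollSensors('/api/iot/latest').catch(console.error), 5000);
```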
In conclusion, we believe at DAQRI that the use of AR can greatly enhance the use of BIM. It can bring BIM data out to people who haven't been able to access it before, such as when they're live on site. Our partnership with Autodesk has been very fruitful. We've been able to leverage the Forge APIs to build the BIM configurator and leverage the availability of BIM 360 Docs data to bring those models into AR experiences.
In particular, in this use case, we brought down BIM 360 Docs models. We made them available for AR experiences. They can help design reviews. They can help guidance while you're building. They can help renovations and retrofits, and they can help you display real-time data as well.
Now, of course, there's many other future use cases in AEC that we're investigating, and we're looking forward to bringing those solutions to you. So thank you. That concludes the first half--
AUDIENCE: How can you [INAUDIBLE] the site and the site needs some kind of [INAUDIBLE]?
PAUL CHEN: Repeat the question, please.
AUDIENCE: If the site needs some earthwork then how are you going to make the client to see how the building's going to look at the end and the earthwork has not been done yet? How the building is going to show in different [INAUDIBLE]?
PAUL CHEN: Well, the model will, of course, have taken into account what the earthworks should be. It's just not done yet in real life. So when you scan the image target and bring up the model, the model will sit in the real world as if the earthwork had been done.
AUDIENCE: [INAUDIBLE] can mask whatever the terrain is.
PAUL CHEN: But again, the virtual content is generally somewhat transparent, 50, 60% transparent, so you can always see through it and see the ground and the trees, et cetera, to varying degrees. If there are no further questions-- There are further questions. Yes?
AUDIENCE: Yes. So placing the 3D model [INAUDIBLE] the images, right? How does the model say orient itself? Does it orient [INAUDIBLE] you scan a column inside a room. Is the model on just [INAUDIBLE] itself, or do we have to create manual placement then save the view then [INAUDIBLE] or images?
PAUL CHEN: The question is, how does the virtual content align itself to the image target? The image target is very closely related with that blue marker that you put in the model. So wherever you put that marker, whichever face of the column you happen to put it on, whatever height, that's what the system is going to use to orient against. So if you put it on a particular wall at four feet, then you have to put the physical image target on that wall in the real world at four feet. When the system sees it, it will then bring up the model aligned to that.
AUDIENCE: OK, so you just need one instead of three triangles?
PAUL CHEN: You just need one. Let me go back to the woman in the back. She was second.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: It depends on the model. Assuming the model is built such that it knows what one-to-one scale is, that's what the system will use. But you never know. The real world and the virtual world sometimes don't match, and so that's why we do provide those scaling, translation, rotation tools so you can tweak the model to better fit what's happening in the real world.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: What is the accuracy? Again, it depends on the model and what the designer built into the model. We'll show the model at full scale whatever the designer decided it would be.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: I'm sorry. For which applications?
AUDIENCE: [INAUDIBLE] use it for [INAUDIBLE]
PAUL CHEN: Oh, as built versus as designed. That's what it's intended for. And in fact, in those cases, if the model correctly brings up at full scale and you see that something isn't aligning properly, you can assume the model is right and as built, unfortunately, is not right. That's actually a particular use case we've mentioned in the inspection phase.
AUDIENCE: [INAUDIBLE]
PAUL CHEN: So you want to be able to measure the difference, say.
AUDIENCE: Yes.
PAUL CHEN: That part we haven't gotten to yet.
CYRILLE FAUVEL: Maybe, Paul, we can do the demo and then we answer a couple of questions, and we take more questions after?
PAUL CHEN: How about that? Should we move to the demo?
AUDIENCE: Yes.
PAUL CHEN: Would you like to see it live? Very good. Cyrille, it's all yours.
CYRILLE FAUVEL: Do you know how I switch screen?
PAUL CHEN: I think you press seven.
CYRILLE FAUVEL: Seven.
PAUL CHEN: I turn off eight.
CYRILLE FAUVEL: Eight.
PAUL CHEN: That's seven.
CYRILLE FAUVEL: Yeah. Yeah. We see it. OK, so we'll start with-- so [INAUDIBLE] handmade, so we're going to show you the experience. The first thing we're going to do, actually, is to talk about the configurator. So that's what Paul was showing. I already loaded one of the models that I'm going to demonstrate to you.
And this model has already been prepared, so you see that there is this blue marker, which is actually positioned on the table in the model. But I'm going to show you the different techniques you can use to prepare your scene. You're not forced to prepare anything. If you don't want a marker, if you don't want to do any selection, the system will understand you want to see everything and that the 0,0,0 point in your Revit model is actually the point that will match the target image in the field.
If you want to define a different position with a marker, or you want to just select a room or a discipline by saying, I just want to see the HVAC components, this is where you can do selection. And you can do selection by just clicking objects. You can also do a search. So you can search for, say, anything which is on level two. There's nothing? Yeah, there's some level two.
So you can just go and search properties in your model and say, I want to include these objects in my experience. At the same time, you can say, show me what has been prepared for this scene. So this is what I've preselected for the demo. And you see that the marker has been positioned on the table. I could decide to put it anywhere I want, on the door, on a wall, on the floor. I just decided for this demonstration that I'm going to use the table as the reference.
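The property search Cyrille demonstrates maps naturally onto the Forge Viewer's search call. A minimal sketch, assuming the standard viewer API, might look like this; accumulating hits into the current selection is a design choice made for illustration.

```typescript
// Find every object matching a text query (e.g. "Level 2") and add it to the selection.
function searchAndSelect(viewer: any, query: string): void {
  viewer.search(
    query,
    (dbIds: number[]) => {
      // Merge with anything already selected so repeated searches accumulate.
      const combined = Array.from(new Set([...viewer.getSelection(), ...dbIds]));
      viewer.select(combined);
    },
    (err: any) => console.error('Search failed', err)
  );
}

// e.g. searchAndSelect(viewer, 'Level 2');
```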
And all the objects you're seeing at this time-- sorry for this-- are actually what we're going to import onto the DAQRI device. You see that the number of polygons-- I hope you can read it-- is around 318,000, which is a nice number for the DAQRI to display. And we'll use a direct connection to download or stream the content to the device.
So when I start the experience, you'll see objects appearing in front of me until I reach 100% loading. It will take a minute to download everything from the internet. If you go over that limit, this number will become red, and then you'll have two options. Either you load too many polygons into your experience, and the frame rate you get on the device will not be very good. It will be a bit slow.
Or you ask the system to decimate the geometry so it fits the limit of the 300,000-polygon count. In that case, you're going to get lower-quality meshes in the experience. And I can show you one of the examples that we made for this demo.
So this is a monument in Paris, and the Revit model is around 20 million polygons. And we decimated that to 300,000 polygons to fit the DAQRI. And you'll see that there are a couple of meshes which don't look very nice but are still acceptable for the experience. So we saw that you can select objects. You can filter objects. We can search. There are some other techniques, like bounding box selection. So let me do it properly.
So here I am starting just a regular bounding box, and I search for my model. I can just say, OK, this is approximately what I want, and I just want to isolate one room. And when I'm happy with what I see on screen, then I can say, this is what I want to publish to the system.
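One plausible way to implement the bounding-box selection shown here, sketched against the viewer's public instance tree and fragment list APIs; the box itself would come from a user-drawn gizmo, and THREE is the three.js build bundled with the viewer.

```typescript
declare const THREE: any; // three.js bundled with the Forge Viewer

// Return the dbIds whose geometry lies entirely inside the given THREE.Box3.
function dbIdsInsideBox(model: any, box: any): number[] {
  const instanceTree = model.getData().instanceTree;
  const fragList = model.getFragmentList();
  const result: number[] = [];
  const fragBounds = new THREE.Box3();

  instanceTree.enumNodeChildren(instanceTree.getRootId(), (dbId: number) => {
    let inside = false;
    instanceTree.enumNodeFragments(dbId, (fragId: number) => {
      fragList.getWorldBounds(fragId, fragBounds);
      if (box.containsBox(fragBounds)) inside = true;
    }, true);
    if (inside) result.push(dbId);
  }, true);

  return result;
}
```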
So you have different techniques of selecting objects: filtering by properties, bounding box, manual selection. There might also be techniques we add later to the system. One of those I'm using frequently-- so let me turn off the bounding box-- is actually using the [INAUDIBLE] Forge Viewer browser. And I can say all I want to see are doors or maybe stairs or maybe decks.
And then I add that to my selection. When I'm happy with my selection, I can verify what is going to be loaded into the helmet. And if I'm happy, I can just publish the scene. And when the scene has been published and completely processed for the helmet, you'll get this QR code. And the QR code is actually what I use when I start using the helmet. So I'm going to start the application.
Now I say I want to scan the QR code on here. Sorry for that. I need to switch-- oh, sorry. I need to switch to this because of the light. Here it is. And I'm going to look at this area. And you see objects are coming in. So I'm about 20% loaded, and I still see objects appearing. But I can already walk through the scene while the system keeps loading objects from my scene.
So it's quite a lot, because it's almost a full building I loaded, because I want to show you a couple of tools that the DAQRI team has developed for the experience. What is interesting is the way things are being loaded. It's fully asynchronous, so it doesn't stop your experience, and we load objects which are near you at the beginning and load other objects in the background. That's why it takes a bit of time.
When I'm already in the scene, I don't see any more objects coming in. When I reach 100%, which is coming now, I think. Yeah. Now it's asking me-- so it reduced the model to a bounding box of one meter. So I have that experience of seeing what I've loaded, because if I don't have this position marker, I don't really know where to put the model.
So in the menu here, I can say I want to skip that step if I'm just interested in seeing the model in the tabletop experience. But if I want to position it, then I say I'm going to move here. And remember, it'll be in the middle of the table. I just need to go here and-- I know. It's never easy with the light here. Yeah.
And you see the model has been positioned now straight onto the target I had, with this piece of paper. And you can see that I'm in the middle of the table now. So I can move away from the table, and I can see the room in which I am. So I see the chairs. I see the window, the lights. And it's at scale one, so the scale is respected, and you can see that the model stays aligned in the position it was. And the camera, which is right there, doesn't see the marker anymore.
So I can continue working. I know I'm going through objects. It's not nice. But I can see things all around in a dynamic fashion. If I'm not happy about it-- so I'm at the top floor, and that may not be the room I wanted to end up in, but that was where the marker was. So there are a couple of tools you can use from the menu here. Whoops.
There have been some tools developed here, and one of those is actually to do some configuration. So I can say I want to translate-- no, mirror, the scale. Where is it? Oh. I messed up. Move. Now I can move, and I can decide, using this tool, that I want to go up or I want to go down. So I can go to the other room.
I'm flying in that room now. Yes, that's the room I want to go in. So I can continue to go down a bit. And when I'm happy, I say OK. Confirm. I'm in the right place now. So this is where I wanted to end up. And I can, again, see objects. I can see the room. And I can navigate into my model.
Some other tools we've been adding actually show the BIM model properties. In that case, when I move close to an object, if it has properties, you'll see that a little marker appears. And I can activate the marker, and I can see properties. I can inspect them. Later, I'm sure the DAQRI team will allow you to edit these properties and report that back to your ERP system. But right now it's just a preview technology, so you'll only see read-only data.
And you also have points of interest appearing while you walk through the model. So you can activate them or not. And you decide what you want to see. So here, there is something behind the wall. And if I want to see what it is-- double-flush. Oh, it looks like a toilet. So if I go through the wall--
AUDIENCE: [INAUDIBLE]
CYRILLE FAUVEL: Yeah. I'm a good man. So now I'm where? Where am I? Oh. No, it's in the wall, actually. Interesting. And I can deactivate this, and I can come back into the house. And these points of interest will appear while you move towards objects. So at one meter, they will appear, and they'll be kept for three meters behind you.
So you can turn around and see that this is still active. But now, if I keep moving in that direction, at some point it will just disappear because you're too far away. But you can keep things activated. So even if you walk around, it will appear there. It is the same thing for IoT data. So I actually implemented IoT data that I wanted to show you, using a sensor like this one, which can capture temperature, light, and everything.
And this IoT data will appear with red marks. Unfortunately, I can't show it right now because my server died in San Francisco. So I can't show you the information, but I'm sure by tomorrow I'll have repaired it. So if you go to the DAQRI booth, they will be able to show that to you nicely. But same principle. You would get-- sorry.
You would get something like this website, where you have red marks on objects which have some IoT information coming in. And when you click on one of these objects, you would have-- not exactly this interface, but something very close-- where you see the IoT data stream coming in on your object. So movement, temperature, light-- you can really attach a real data stream to virtual objects.
The only thing you need in your Revit model to activate this is actually the MAC address of this little device. And that will ultimately come through, because the DAQRI application will read that property from your model and say, oh, you have a sensor tag attached. I connect to the server, and I display the data when you request it. The last thing now is actually--
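On the configurator side, reading that sensor address back out of the model could use the viewer's property API. A minimal sketch follows; the property name 'Sensor MAC' is an assumption standing in for whatever custom Revit parameter actually holds the address.

```typescript
// Look up the MAC address stored as a BIM property on a given object.
function getSensorMac(viewer: any, dbId: number): Promise<string | null> {
  return new Promise((resolve, reject) => {
    viewer.getProperties(
      dbId,
      (result: { properties: Array<{ displayName: string; displayValue: string }> }) => {
        const prop = result.properties.find(p => p.displayName === 'Sensor MAC');
        resolve(prop ? prop.displayValue : null);
      },
      reject
    );
  });
}

// With the address in hand, the viewing application can subscribe to that sensor's
// live stream and draw its values next to the object, as described above.
```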
PAUL CHEN: By the way, I just want to make one comment. The visual lag that you're seeing there is not what he's seeing in the helmet. [INAUDIBLE] fact of the Wi-Fi streaming to the screen. What he's seeing in the helmet is a nice continuous loop.
CYRILLE FAUVEL: Oh, yeah.
PAUL CHEN: Unfortunately, we're not seeing what he sees exactly.
CYRILLE FAUVEL: So now what I want to show you is the model we talked about, which is a monument in Paris, which is the [INAUDIBLE] for those who know this monument. So this Revit model was 20 million polygons. And if I bring it to a scale of one, now I'm in the monument. And it's a floor plan, so if I want to adjust my altitude so I'm like a human walking in the monument, I can. And this is a decimated representation of the model.
So if you pay very close attention, you can see that some meshes are not necessarily very well defined, because they've been reduced a lot. But going from 20 million polygons down to 300,000, it's pretty good. We're still working on making this decimation work better, because that's the first implementation. But the idea is here. Whatever you request from the system, we'll try to find the best match for your device.
And here, for the DAQRI, I think it's pretty good. So if you look closely-- there is a menu in front of it-- but if you look at the arch, this is where you can see a couple of issues in the mesh. But if you look at the ground and everything else, it's pretty good. OK, so that ends the demo. And we can answer any of your questions. And again, at the DAQRI booth you can have that experience anytime.