Key Learnings
- Learn how Autodesk Research benefited from using Forge web services to implement Dasher 360
- Learn implementation details of reusable Forge Viewer extensions developed as part of the Dasher 360 project
- Learn the history of Project Dasher and how Dasher 360 can be used for building performance management
- Learn the importance of contextualizing sensor data in BIM on the web
Speaker
- Simon Breslav: Simon joined Autodesk in mid-2010 after completing an M.Sc. in Computer Science (with a concentration in Computer Graphics) at the University of Toronto. As part of the Complex Systems Research group, Simon works on Building Performance and Simulation projects, where his current research interests include Information Visualization, Simulation, and Human-Computer Interaction. Simon's previous experience includes working at Thomson Reuters and internships at the Adobe Creative Technology Lab as well as Auryn Animation Studio. Originally from Riga, Latvia, Simon enjoys spending time painting, carving, meditating, and smiling.
SIMON BRESLAV: Hi, everybody. I don't know if this is it or not but I'll get started anyway. The class is called, Using Forge for Advanced IoT Visualization in Dasher 360. So if you're in the wrong class now is the time to leave.
I'm Simon Breslav. I'm a research scientist at Autodesk. I'll give you a talk about Project Dasher, which was originally desktop software that we converted into a web application using Forge services. I'll go through some history and some details of how we did certain things. Hopefully, in this class you'll get a sense of what it takes to do something similar, and you'll get the resources needed to build an application at a similar level using Forge.
A little bit about me. I've been at Autodesk for seven years. I'll use a clicker. And I've worked on various research projects, mostly related to simulation. So looking at different simulation tools, using these tools to create simulations, and then visualizing the results of those simulations, and doing analytics. My interest is probably more on the information visualization side of things. But I worked on various projects.
If you haven't heard of Autodesk Research, I encourage you to go to autodeskresearch.com. It has lots of different publications that our group is doing. There is a lot of really exciting, cutting edge research that's happening on the front of machine learning, robotics, general design. So check it out. There's lots of really interesting stuff.
So I'll give you a bit of history about project Dasher. So originally, the project started in 2009. So it's been a while since we've been talking about this.
The original goal was to extend the value of building information modeling into the operations phase of a building's lifecycle. To do that, we combined BIM with sensor data. We took data from the building management system and various devices in the building to create a compelling visualization tool that could help building operators see how the building is performing, make it run more cost efficiently, and make it more comfortable for the people inside.
So here is a video of the program that we built. This video is from 2011. Here it shows going to a room, navigating to a room with the temperature shown as a surface shading. Here we're exploring how much different parts of the building consume energy. And again, drilling down to a section of a building and then a room to see how much energy it consumes.
Looking at the different sensors, you can notice there are tooltips with plots, and then a more detailed view of that sensor. Here is a simple occupant simulation, or visualization I should say, with the temperature overlaid to try to understand the comfort level of that occupant.
We also had a track that could be overlaid over the architecture, again with the heat map of the temperatures. Here it's just going through some more sensors in the building, looking at the tooltips with sparklines and graphs, and exploring more graphs. So that gives you a sense of where we were at that time with this desktop application.
However, recently with Forge coming on board, we started thinking: wouldn't it be great to move that to the cloud, so we could expand the accessibility of the tool beyond building operators, who may have a powerful machine, to occupants of the building, who may not have good hardware? We also wanted to expand to new applications, like sensor commissioning, where tradespeople have to walk around the building and need it to work on a portable device, or a construction site, where again you really need to be mobile.
So with Forge available in 2016, about a year and a half ago, we decided to give it a try and migrate, and basically duplicate our effort on the web. To do that, we looked at which parts we could reuse, and we ended up with just some shaders. Most of the code was written in C++, and it wasn't straightforward to reuse, so we essentially started from scratch.
So I'll go through the different parts of Forge that we're using and how we're using them, starting with the basics and moving on to the more complicated parts. The overall architecture is that the back end of the web version of Dasher 360 is a Node.js application. It uses the Authentication, Data Management, and Model Derivative APIs, and it also has its own MongoDB database to save user preferences and settings for each individual building.
On the front end, most of the code is organized as extensions to the Viewer. The majority of the code is really a bunch of Viewer extensions for the various parts of the visualization. We're also using Data 360, a time series database that we're still working on in the research department, specifically for sensor data.
When starting the project, we used the Forge examples extensively. If you are starting with Forge, there are really good open source examples available on GitHub. In particular, the two repositories that I find very helpful are the Model Derivative one and the Forge RCDB one. They both use NPM packages; NPM is the package repository for Node.js.
And if you're not familiar with Node.js, it's a JavaScript runtime for server-side execution. It's really easy to get going by using one of those examples as a starting point, so I encourage you to do that. I'll reference those repositories quite a bit during the presentation, pointing you to which of the sub-demos to look at for different parts.
And I'm actually going to switch to a browser and just show you Dasher. So this is dasher360.com. It's a real thing. You can go to it.
There's a video. There's a demo that will open up one of the models where you can play around with it. There are some restrictions in terms of you can't change things, but you can look at the real building with real data.
I'm going to log in. You can't log in; you will actually go through this process to try to log in, but there is a whitelist, so only certain people with certain email addresses can log in right now. Whether or not you're familiar with the Forge Authentication service, this is what we're using.
When I click log in, it goes to accounts.autodesk.com. I log in. And then it takes me back to the Dasher website.
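As a rough sketch, this is what that three-legged login flow can look like on a Node.js back end. The route names, environment variables, and use of axios here are illustrative; the official Forge samples wrap these calls for you.

```javascript
// Minimal sketch of the three-legged Forge OAuth flow (assumed Express app;
// the route names and FORGE_* environment variables are illustrative).
const express = require('express');
const axios = require('axios');

const app = express();
const AUTH_HOST = 'https://developer.api.autodesk.com/authentication/v1';

// Step 1: send the user to Autodesk's sign-in page (accounts.autodesk.com).
app.get('/login', (req, res) => {
  const url = `${AUTH_HOST}/authorize` +
    `?response_type=code` +
    `&client_id=${process.env.FORGE_CLIENT_ID}` +
    `&redirect_uri=${encodeURIComponent(process.env.FORGE_CALLBACK_URL)}` +
    `&scope=${encodeURIComponent('data:read')}`;
  res.redirect(url);
});

// Step 2: Autodesk redirects back with a code; exchange it for an access token.
app.get('/callback', async (req, res) => {
  const response = await axios.post(`${AUTH_HOST}/gettoken`, new URLSearchParams({
    client_id: process.env.FORGE_CLIENT_ID,
    client_secret: process.env.FORGE_CLIENT_SECRET,
    grant_type: 'authorization_code',
    code: req.query.code,
    redirect_uri: process.env.FORGE_CALLBACK_URL
  }));
  // response.data.access_token can now be used for Data Management calls.
  res.redirect('/');
});
```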
Here is the main project page for Dasher. At the top we have recent files with thumbnails, and we use the Model Derivative API to get those thumbnails. Underneath that we have shared files, and again we use the Model Derivative API to create those shares. They are owned by the app, and we can specify whether they're public and whether they require passwords.
And under that we have all the different hubs I have, with all the different projects and all the files within each project. So this is the Data Management API that we're using here. I won't go into any files just yet; I'll go back to the presentation.
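For reference, the thumbnails on that project page come from a single Model Derivative endpoint. A minimal sketch, assuming accessToken holds a valid token and urn is the base64-encoded design URN:

```javascript
// Fetch a model thumbnail from the Model Derivative API (sketch; `accessToken`
// and `urn` are assumed to come from the auth flow and the Data Management API).
const axios = require('axios');

async function getThumbnail(urn, accessToken) {
  const res = await axios.get(
    `https://developer.api.autodesk.com/modelderivative/v2/designdata/${urn}/thumbnail`,
    {
      params: { width: 200, height: 200 },
      headers: { Authorization: `Bearer ${accessToken}` },
      responseType: 'arraybuffer' // the endpoint returns PNG bytes
    }
  );
  return res.data;
}
```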
So the big advantage of using the Data Management API is that we don't have to recreate all the user management that is already there in A360. I've been on the A360 team; all the permissions and collaboration features are a lot of work, and we don't want to redo that, but it's a necessary thing to have.
So using the Data Management API can really save you a lot of extra work. And luckily, both of the repositories I mentioned before have really good examples of connecting your users to all of their files in A360 using the Data Management API. I encourage you to take a look at those samples, because you can just reuse the code and get going with that.
Assuming, of course, that you're also using Node.js on your server. I'm sure there are other examples for your particular language; I haven't explored that, so I just don't know.
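To give a flavor of what those Data Management calls look like, here is a small sketch that walks hubs and projects with plain REST calls; the samples above do the same thing through the forge-apis NPM package, and accessToken is assumed to be a three-legged token with data:read scope.

```javascript
// Walk the Data Management API: hubs -> projects (sketch with raw REST calls).
const axios = require('axios');
const BASE = 'https://developer.api.autodesk.com/project/v1';

async function listHubsAndProjects(accessToken) {
  const headers = { Authorization: `Bearer ${accessToken}` };
  const hubs = (await axios.get(`${BASE}/hubs`, { headers })).data.data;
  for (const hub of hubs) {
    const projects = (await axios.get(`${BASE}/hubs/${hub.id}/projects`, { headers })).data.data;
    console.log(hub.attributes.name, projects.map(p => p.attributes.name));
  }
}
```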
However, if you are going to take a different approach, where instead of connecting to A360 you want your app to manage the models, with a different authentication mechanism and without requiring your users to have an Autodesk account, it's possible to do that.
Your app can manage the resources and the models and have users upload their models through your app. For that, you need the Model Derivative API, which translates the model, and models.autodesk.io is a really nice, simple example of that.
That's not what we did in Dasher. But that's certainly a path we considered. And some of you I'm sure will want to consider as well.
Of course, the majority of Dasher is a bunch of Viewer extensions. Here on the left you can see the empty, default Forge Viewer, and on the right is the Dasher Viewer, which adds a whole bunch of different widgets to create the whole Dasher experience. In the rest of the talk I'll go through all these different extensions.
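To show the pattern, here is a minimal Viewer extension skeleton; the 'Dasher.Example' name is made up, but each Dasher widget follows roughly this shape.

```javascript
// Minimal Forge Viewer extension skeleton (the 'Dasher.Example' name is
// illustrative). Each widget in Dasher is an extension along these lines.
function ExampleExtension(viewer, options) {
  Autodesk.Viewing.Extension.call(this, viewer, options);
}

ExampleExtension.prototype = Object.create(Autodesk.Viewing.Extension.prototype);
ExampleExtension.prototype.constructor = ExampleExtension;

ExampleExtension.prototype.load = function () {
  console.log('ExampleExtension loaded');
  return true; // returning true tells the viewer the extension loaded OK
};

ExampleExtension.prototype.unload = function () {
  console.log('ExampleExtension unloaded');
  return true;
};

Autodesk.Viewing.theExtensionManager.registerExtension('Dasher.Example', ExampleExtension);

// Later, for a given viewer instance:
// viewer.loadExtension('Dasher.Example');
```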
So I'll start with a quick demo of the navigation-- excuse me-- tools. And I'll open up one of my recent files. So this is our old Autodesk office in Toronto.
This is the model that you're able to see if you go to dasher360.com and click on the demo; it will open this model so you can play with it afterwards. You won't be able to change anything, hopefully, but you will be able to view something similar.
So I'm just going to wait until it loads; sometimes it takes a little bit. The default tools that the Viewer gives you are here at the bottom. This toolbar is the default Forge Viewer toolbar.
Our additional Dasher tools are here on the left in the vertical bar. At the top you see a breadcrumb widget that lets you navigate the building. There is a dropdown where you can highlight different floors, and when you click on one, it isolates that particular floor.
There is another dropdown here for the different rooms, and it just focuses on that particular room. An alternative way to navigate is a navigation toolbar that is similar to the breadcrumbs, but it's just a tree. Again, it highlights the different parts.
Here I can go to a particular room without needing to navigate first the floor. There's also a dashboard. The dashboard has both some bookmarked sensors but also navigational bookmarks. So I can click on them to go to that particular view.
And I can create a new bookmark here. Hopefully it will, yes, so here. And I can resize these to be small or big, and move them around if I want to. So these are the navigational tools.
When I'm in the first person view here and I move around, it can be a little bit hard to use sometimes. Let's see: if I go to a different room, the breadcrumbs update to show which room you're in. That's helpful in the first person view, because sometimes it goes too fast or too slow and you end up somewhere you don't recognize.
So I'll switch back to the presentation and talk a little bit about all the different features that I just showed and how we did them. If you want to add a vertical bar with more tools that look like Forge tools: Kean Walmsley, who's also working on the project, has a blog where he posts quite a bit about all the different things we're doing in Dasher, and he made a post that gives the source code for a vertical toolbar. So I'm not going to go through that code; you can just go to the blog and copy the code from there.
With the breadcrumbs, there's nothing really special about the widget itself; it's just a dropdown. The harder thing to figure out was how to break the building into floors. What we ended up doing was just using the properties of the different objects in the building.
When you do a Navisworks export, it breaks the object hierarchy in a way that groups all the objects by level. So we took advantage of that. We just have a simple settings file that maps the name of the root object of each floor to whatever string you want to show up in the menu. We then parse the settings and classify all the objects that need to be shown, so when you select a floor it isolates it, hiding all the objects that are not part of that floor.
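A rough sketch of that classification and isolation step, using the standard instance-tree and isolate APIs; the settings object and helper names here are simplified stand-ins for what Dasher actually uses.

```javascript
// Sketch: map floor root nodes (by name) to menu labels, then isolate a floor.
// The `floorSettings` object stands in for our per-model settings file.
const floorSettings = {
  'Level 1': 'Ground Floor',   // node name in the Navisworks hierarchy -> menu label
  'Level 2': 'Second Floor'
};

function collectFloors(viewer, onReady) {
  viewer.model.getObjectTree(function (tree) {
    const floors = {}; // menu label -> array of dbIds on that floor
    tree.enumNodeChildren(tree.getRootId(), function (dbId) {
      const name = tree.getNodeName(dbId);
      if (floorSettings[name]) {
        const ids = [];
        tree.enumNodeChildren(dbId, function (childId) { ids.push(childId); }, true /* recursive */);
        floors[floorSettings[name]] = ids;
      }
    }, true /* recursive */);
    onReady(floors);
  });
}

// When a floor is picked from the breadcrumb dropdown:
function showFloor(viewer, floors, label) {
  viewer.isolate(floors[label]); // hides everything not on that floor
  viewer.fitToView(floors[label]);
}
```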
If you're interested in going through the properties of different objects to be able to isolate something, there's a really good example on Forge RCDB called Meta Properties, which shows how to search and work with properties. That's really the core of BIM: having semantic information about your geometry. So learning how to work with properties is super useful for doing fun things like this.
There is, of course, a downside to this method: objects often span multiple levels. When architects make a building, the outside envelope will often span multiple levels, and you can't really just say, OK, it belongs to this floor.
One workaround that we came up with was to use Revit parts. We used Revit parts to separate walls into multiple pieces, and each part belongs to a particular level. This works, but it's time consuming. It's a lot of work.
We have found this wall analyzer demo that's also part of Forge RCDB. But we haven't tried it. So I encourage you to try it.
Maybe that's exactly what you're looking for and maybe it's exactly what we're looking for. In the future we'll take a look. I think it does actual geometry processing to separate things. So it's really interesting.
Another part of the breadcrumb widget that is interesting is the highlighting. Before you go to a floor, you want to get a sense of where that floor is in the building, which is super useful.
To do it, we use room geometry. If you translate a Revit model directly, right now it doesn't actually export room geometries. But if you convert to a Navisworks file, the Navisworks file has them.
In the future that might change, and Revit files will also have room geometries, but right now they don't. So what do we do? The first thing you can try is just using the selection mechanisms that the Viewer has. Here are the two calls that you need.
You call viewer.select with the ID of any geometry, and it will select it for you. Initially, that's the functionality we were using, so when you hover over things it would select the room geometries that belong to that floor.
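In code, that first approach is roughly this; roomDbIds is assumed to hold the room geometry IDs gathered for a floor.

```javascript
// Highlight the room geometries of a floor using the built-in selection
// (sketch; `roomDbIds` is assumed to hold the room geometry IDs for that floor).
function highlightFloorRooms(viewer, roomDbIds) {
  viewer.select(roomDbIds);   // selects (and tints) the rooms
}

function clearFloorHighlight(viewer) {
  viewer.clearSelection();
}
```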
However, the issue that we ran into was flickering. When you change the selection, the whole building redraws, and if you have a really big building it flickers.
So we decided to do a bit of a hack. We hijacked the overlay functionality inside the Forge Viewer and created custom geometry with a separate shader that ignores depth. Here's a little snippet of the code you would need to do something like that.
You can't quite see it on here, but that's fine, because you don't really want to look at code right now. There is a really good, extensive set of course notes that I've put together that I encourage you to take a look at; it has this code if you want to implement it.
It's not super complicated, but it does use undocumented, private API, so it may change in the future. It uses the implementation object of the Viewer, which is not something you will find in the documentation. Use it at your own risk if you want to prototype with it.
But don't necessarily put it in your production code, because you'll upgrade the Viewer version and things will start to break, and you won't know why. That's because the API changed and you were using undocumented methods. We've certainly been there.
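For the notes, here is a minimal sketch of that overlay approach. It relies on the undocumented viewer.impl overlay methods discussed above, so treat it strictly as a prototype snippet.

```javascript
// Prototype-only: draw room meshes in a viewer overlay scene with a material
// that ignores depth, so they show through walls without re-rendering the model.
// Uses undocumented viewer.impl methods; they may change between Viewer versions.
const OVERLAY_NAME = 'dasher-room-highlight'; // arbitrary overlay scene name

function addRoomOverlay(viewer, roomGeometry) {
  const material = new THREE.MeshBasicMaterial({
    color: 0x00aaff,
    transparent: true,
    opacity: 0.4,
    depthTest: false,   // ignore depth so the highlight is always visible
    depthWrite: false
  });

  viewer.impl.createOverlayScene(OVERLAY_NAME);
  const mesh = new THREE.Mesh(roomGeometry, material);
  viewer.impl.addOverlay(OVERLAY_NAME, mesh);
  viewer.impl.invalidate(true); // force a redraw
  return mesh;
}

function removeRoomOverlay(viewer, mesh) {
  viewer.impl.removeOverlay(OVERLAY_NAME, mesh);
  viewer.impl.invalidate(true);
}
```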
For the navigation lists, we derive from the viewer tree and tree delegate classes in the Forge Viewer API, mainly to keep the look and feel of the Forge Viewer. But you don't have to use those; you can use any list library. For example, Fancy Tree is a really nice one that I've used before. You can really put any kind of widget inside a Forge panel, like we did here, so anything goes.
The dashboard is a really useful feature to have, with the bookmarks, so that you have a quick way to navigate to spots you often visit, whether that's looking at sensors or going to a particular location. For that, we used a library called GridStack. I tried a few different libraries to create this kind of dashboard layout, and this one seemed to have the fewest bugs, so I can recommend it.
From the viewer side, you really only need to know about three functions to recreate this. There's getScreenShotBuffer, which captures whatever image is in the viewer at any given time and gives it to you.
And then there are getState and restoreState. With getState you get the camera position and orientation, the cut planes, and all the objects that are visible, which you can store somewhere in your database. Then restoreState brings that view back.
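A minimal sketch of how a bookmark can be captured and restored with those calls; this uses the closely related viewer.getScreenShot call for the thumbnail, and storage of the bookmark is left out.

```javascript
// Capture a navigational bookmark: thumbnail + viewer state (camera, cut
// planes, visibility). Persisting it to a database is left out of this sketch.
function captureBookmark(viewer, onCaptured) {
  viewer.getScreenShot(200, 150, function (thumbnailUrl) {  // blob URL of the image
    const state = viewer.getState();  // camera, isolation, cut planes, etc.
    onCaptured({ thumbnailUrl: thumbnailUrl, state: state });
  });
}

// Later, when the bookmark tile is clicked:
function goToBookmark(viewer, bookmark) {
  viewer.restoreState(bookmark.state);
}
```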
So I'll give another demo of more features, but I'll go to a different building. Feel free to open up this building, because the [INAUDIBLE] building is available publicly if you go to the demo.
This is our San Francisco office, Pier 9. I've never been but from this model it looks pretty cool. And I'll show some sensor functionality that we have here in Dasher.
So the main sensor navigation tool is the sensor list, which is just a list of all the sensors in the building, and it ties closely together with the sensor dots. In the list I can search for a particular name or filter based on type, and the sensor dots sync up with the filtering as I do it. So here, now it shows all of them.
I can hide unconnected sensors, ones that are not connected to the database. I can go to the sensor, and I can add it to or remove it from the dashboard. So here's the dashboard I showed you before; I can add it and remove it.
So let me just go to the sensor. It finds the sensor and focuses on it. The sensors have a little tooltip that comes up; the tooltip has the current value and a sparkline that shows the last 100 sample values.
If I click on it, it opens a graph, a chart of that particular sensor, which I can zoom in. And as I zoom in it should update with different levels of detail of that sensor. As I scroll, initially it loads a lower resolution, and it fills in higher resolution of the data. And then I can scroll out. Here I'm looking at months, and then so forth.
The sensors are divided in different categories. So model sensors are sensors that were modeled in BIM. And then there's a bunch of unpositioned sensors. Those are sensors that are in the database but not in BIM.
And those also have another option, to be placed. So here I have a little plus that I can click, and then I can place these sensors.
And then, did that work? No. There they go. So now they have moved to a different section, called positioned sensors.
I can delete it. Now it's gone. So this is the basic sensor functionality.
So let me just switch back to the presentation and give you a little bit of detail about how we did it. The sensor list is again similar to the navigation list: we're deriving from the viewer list, but we're adding a bunch of buttons to each item.
It's nothing too complicated, but it's crucial to have some standard visualization method. Everybody is familiar with trees, so it's good to have something familiar that people know how to work with.
Being able to place the sensors was actually surprisingly easy. It's really just one API call that you need, clientToWorld, which converts a 2D position to a 3D position. We take the mouse position where the person clicked, or where they're dragging, and convert it to a 3D location. This function call gives you the 3D location, the geometry that the hit test intersected, and the normal of that surface. So super, super useful.
You can also specify whether you want to ignore transparent objects or not, which I think refers to glass, if an object has transparent properties. I usually don't ignore them, because why wouldn't you want to put sensors on glass? I don't know.
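A quick sketch of that placement call; the event wiring is simplified, and the third argument controls whether transparent objects are ignored.

```javascript
// Place a sensor where the user clicked: convert the 2D mouse position into a
// 3D point on the model (sketch; sensor bookkeeping is omitted).
function onPlaceSensorClick(viewer, event) {
  const rect = viewer.container.getBoundingClientRect();
  const x = event.clientX - rect.left;
  const y = event.clientY - rect.top;

  // Third argument: ignore transparent objects (e.g. glass) if true.
  const hit = viewer.clientToWorld(x, y, false);
  if (!hit) return null; // clicked on empty space

  return {
    position: hit.point,                        // world-space point
    dbId: hit.dbId,                             // the object that was hit
    normal: hit.face ? hit.face.normal : null   // surface normal, when available
  };
}
```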
If you want to get started with that, I recommend an IoT example on Forge RCDB. When we started, that particular example wasn't there, but something very similar was, and that's what we copied.
We used SVG elements for all the different sensor dots, which made it really easy to make them interactive, with hovering and clicking. The way it worked, it would track the 3D changes of the model and reposition the 2D elements.
But when you have thousands of sensors, this tracking can become super expensive, because every time you need to figure out a new position you have to do a hit test to find the new location. So instead of having the sensor dots be HTML elements, we asked some Forge experts and looked around for examples, and we found a really nice one using a particle system, where all the dots were interactive. I have a link here to the particular example that we used as the basis.
One thing to note is that this example is on threejs.org, which has a lot of examples that you can really draw upon. The reason is that the Forge Viewer is actually built on top of Three.js. It's not always exactly the same as it would be with plain Three.js, because the Forge Viewer adds a lot on top, but all those examples, this huge community of resources, are super useful for getting ideas on how to do something that you haven't seen anybody do in Forge yet.
So definitely check out Three.js. One warning is that the version used in the Forge Viewer is a little bit older than the current development version of Three.js. In the class notes I talk a little bit about that, so you can get the right documentation, because the documentation changes from version to version, as do the examples.
There is a particle system example in the Forge RCDB, which I encourage you to take a look at, and here's a quick video of it. It really makes things much, much more performant if you want to visualize lots of points; we're talking about thousands of points.
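A rough sketch of the point-cloud idea, reusing the overlay scene from before. Note that the Viewer bundles an older Three.js, so the relevant classes may be named THREE.PointCloud and THREE.PointCloudMaterial rather than THREE.Points and THREE.PointsMaterial in your version.

```javascript
// Sketch: render thousands of sensor dots as a single point cloud in an
// overlay scene instead of one HTML/SVG element per sensor.
// `sensors` is assumed to be an array of { position: {x, y, z} }.
function addSensorPoints(viewer, sensors) {
  const geometry = new THREE.BufferGeometry();
  const positions = new Float32Array(sensors.length * 3);
  sensors.forEach((s, i) => {
    positions[i * 3 + 0] = s.position.x;
    positions[i * 3 + 1] = s.position.y;
    positions[i * 3 + 2] = s.position.z;
  });
  geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));

  // Older Three.js builds (like the one bundled with the Viewer) use the
  // PointCloud / PointCloudMaterial names; newer ones use Points / PointsMaterial.
  const MaterialClass = THREE.PointsMaterial || THREE.PointCloudMaterial;
  const PointsClass = THREE.Points || THREE.PointCloud;

  const material = new MaterialClass({ color: 0xff6600, size: 10 });
  const points = new PointsClass(geometry, material);

  viewer.impl.createOverlayScene('dasher-sensor-dots');
  viewer.impl.addOverlay('dasher-sensor-dots', points);
  viewer.impl.invalidate(true);
  return points;
}
```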
If you want your sensors, or whatever you're using the 3D points for, to work on a touch screen, we've posted a bit of a tutorial on how to make sure things work both with touch and with the mouse. Kean made a post about that. For the tooltips, we used a library called Tooltipster, which is pretty easy to use.
For the charts, to get going really quickly we initially used a library called vis.js. It has a very easy API, and I highly recommend it. It was really quick to get up and running with charts.
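For illustration, getting a basic time series chart up with vis.js looks roughly like this; the container element and sample values are made up.

```javascript
// Minimal vis.js Graph2d time series chart (sketch; assumes vis.js is loaded
// and <div id="sensor-chart"></div> exists on the page).
const container = document.getElementById('sensor-chart');

const items = new vis.DataSet([
  { x: '2017-11-13T09:00:00', y: 21.4 },   // made-up temperature samples
  { x: '2017-11-13T10:00:00', y: 22.1 },
  { x: '2017-11-13T11:00:00', y: 22.8 }
]);

const options = {
  start: '2017-11-13T08:00:00',
  end: '2017-11-13T12:00:00',
  height: '200px'
};

const graph = new vis.Graph2d(container, items, options);
```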
We're not currently using it, because we have migrated to our own library, based on research we did in our department. That project was called Splash, and it adds the kind of fluid pan-and-zoom navigation that you normally don't get in a standard time series plot.
When you're looking at very high frequency data, being able to fluidly navigate across various levels of detail is something that is a bit unique to this library. If you want more information about this project, take a look at the Autodesk Research website; there is a detailed publication about it, published at [INAUDIBLE], I think.
And let me give you another demo of some other features. I'll show you a different building. This is the Schneider building; they graciously let me show their building and their data.
The data is pretty recent; we're constantly collecting, so this is just the past week. By default it loads one week of data.
Their building is a little bit different. Instead of individual sensors, they have pieces of equipment that each have a number of sensors. So when you turn on the sensor dots, it actually shows a different icon for a piece of equipment.
This one, for example, has four different sensors. So when I open it, there's a dashboard with those four sensors, and when I open a particular sensor it opens a plot that I can then explore in more detail.
Another interesting aspect of this particular building is that it has special occupancy sensors that Schneider is working on. I can't really talk about them, but you can maybe talk to Schneider; I don't know if they have a booth. I'll just show you what they do here.
At the bottom here is a timeline that I can scrub to give you a sense. As I scrub (I'll just hide this) you can see the magenta people there. That's the indication.
The sensor itself is in here. But the occupants are shown here. So you can actually know how many-- so here, six occupants. One, two, three, four, five, six. So it works.
I'll just put this here. As I scrub, you can see the values in the dashboard change, and the sparklines change to show the last values, which is useful when you're looking at instantaneous values, to have a little bit of history of what happened previously. So here I'll turn on surface shading.
So besides just having sensor dots and the values you want to have some spatial sense of how different values change over that space. So here I'm overlaying temperature surface shading. And as I scrub I can get a glimpse of, OK, it gets cold and then it gets warm, it gets cold.
I can also change it to CO2 and see that, OK, CO2 goes up when there are lots of people. Of course, I can just play it through, but I find scrubbing more fun, because once you've found an area of interest you can play through it.
I'll just switch back to the presentation and talk a little bit about how we did this. Surface shading is useful for getting a good intuition of any patterns happening in the building, and whether there are any anomalies. You don't want to be looking at hundreds of different sensor plots to find anomalies visually, so it's really useful to have a nice surface shading that shows this. It's not a detailed analysis, but it gives you an initial area to investigate.
If you want to do surface shading, there is an example on GitHub, on the Forge site, called Forge Feeder that I recommend taking a look at. It's a little different. It doesn't exactly do sensors the same way, but it's still a good place to start.
To give you a bit of an overview of our approach, we're using a method called inverse distance weighting, which is a simple interpolation method: given a set of sparse samples, which are your sensors, the problem is how to find the values in between, where you don't have data. You don't really know what the value is, but you can approximate it given some samples. This method was developed in 1968.
That was, of course, before we had GPUs or GPU programming, but it's a really simple method that translates really well to GPUs. And I made a little Shadertoy for you to play with, which has all the source code for the 2D case. In our case we're doing 3D, but it's not a very complicated algorithm.
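The core of inverse distance weighting is only a few lines. Here it is sketched in JavaScript for the 2D case; the power parameter of 2 is a common choice, not necessarily the exact falloff Dasher uses.

```javascript
// Inverse distance weighting (Shepard's method, 1968), 2D sketch.
// samples: [{ x, y, value }], query point: (px, py), power: falloff exponent.
function idw(samples, px, py, power = 2) {
  let weightedSum = 0;
  let weightTotal = 0;
  for (const s of samples) {
    const dx = px - s.x;
    const dy = py - s.y;
    const dist = Math.sqrt(dx * dx + dy * dy);
    if (dist < 1e-9) return s.value;     // exactly on a sample point
    const w = 1 / Math.pow(dist, power); // closer samples weigh more
    weightedSum += w * s.value;
    weightTotal += w;
  }
  return weightedSum / weightTotal;
}
```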
And I don't know if I have time. I have time. I can just go to it, I guess. I'll show it to you. If you're not familiar with Shadertoy, it lets people do shaders in the browser, where you can play with it.
This is the shader I've made. And you'll have the link, and you can play with it. And if you are interested in doing shaders I recommend that you go to check out Shadertoy. All the different samples that people write have source code. So you can learn about making shaders. And often they'll have something that will help you with whatever problem you're having.
So I'll just go back to the presentation. That's weird. For the timeline, we're using the vis.js timeline that I talked about before, and it works fine.
It has many features that we're not using, and the API and examples are very extensive, so I definitely recommend checking it out if your application needs a timeline.
The last extension that I'll talk about is called Kiosk Mode, and it is surprisingly useful. What it does is put Dasher into demo mode: it simulates a fake cursor that drives the application, turns on different features, and loops through all of them.
What's neat about it is that it's view independent, so it doesn't matter what orientation the building is in, and it works for any model. So instead of creating a video to put on the big screen in your lobby, you can now put the real application there, driven automatically, in the lobby or a kiosk where people don't necessarily feel comfortable driving it, or can't because it's on the big screen. It drives itself, but it's not just a video: the data changes, and people can see current data.
Kean was kind enough to document the whole process of creating this extension on his blog. There are three posts that really go into the details of what's involved. The essence of it, of course, is creating a fake cursor and making it do things.
All the extensions that we have are Kiosk-enabled, so each extension has to say what areas of interest are available to be activated, and what happens when each one is clicked. It's really an elegant solution to the problem. I definitely encourage you to take a look at the blog posts if you're interested in this.
So here are a few videos of some work in progress that we're doing. This is an animated robot that has sensors attached to it. And here we're applying some surface shading to it.
This is just simulated data. It doesn't really mean anything. It's just a way to demonstrate the idea that either a program, or a robot programmer, or an operator may be using Forge Viewer to visualize moving objects in the context of a factory.
And of course, one robot is fun, but more fun is four robots. So here's another demo that we did that has four robots with sensors on them. And there's a simple simulation of them moving materials from place to place.
And that demonstrates a prototype of the factory of the future, where we have a lot of sensors in the factory and can easily and quickly identify any issues that are happening. Or, if you're designing a new factory floor and you want to run a simulation, you want to visualize it and really see that it's working as it would in real life.
Of course in this case, usually with robotics they give you some software already that does simulation. But let's say it's a desktop application, and you want to export that simulation and send it to your client. And of course, Forge Viewer is really where the power of the web comes in, where you need to share it with somebody that doesn't have your setup, doesn't have the powerful desktop computer.
With that, I'll conclude: we had this really powerful desktop application that we developed, and we wanted it to be more accessible. It was pretty fun and easy to convert it, almost feature by feature, to the web.
There are lots of references, lots of JavaScript libraries that you can use. Obviously, as you saw, I mentioned a whole bunch of different libraries. We were able to put it all together because Forge Viewer is not super opinionated in terms of what you can and can't do. It's just a library, another JavaScript library that you can use and combine with all the different other libraries.
Another thing I want to mention is that the Viewer extension mechanism was a really nice way to force us to encapsulate things into independent, well-encapsulated modules. That gives us a way to mix and match extensions depending on what the model has, or what we want to show, so we can easily turn features on and off and it all works depending on the context.
So with that, I'm going to switch back to the demo and just turn on Kiosk Mode while I take questions. We have 10 minutes. I guess the Shadertoy crashed my WebGL.
I'm sure you've seen this, where WebGL just crashes, but you just reload and it's fine. Because my laptop is a touch screen, it puts up a big hand to indicate that it's a touch screen; on a regular computer it will show just a regular arrow. Anyway. Any questions? Sure.
AUDIENCE: You showed the room example earlier. And you said that you had to run it through Navisworks first. Was that metadata in the Revit model, or did you have to post-process something?
SIMON BRESLAV: We didn't have to do any processing. I think the export to Navisworks adds the room geometries, but I'm not sure. You specify rooms in Revit, so that information is there somehow, but I don't know if something during the Navisworks export actually tessellates and creates the geometry. I'm not quite sure. We did ask about this, but I don't exactly remember what the issue is. There's definitely interest in having it available in Revit exports. In the back.
AUDIENCE: Does Dasher work only if there's a Revit model? What about in the real world, where MEP contractors [INAUDIBLE] and other CAD-based models [INAUDIBLE]?
SIMON BRESLAV: It's mainly for 3D. If it works in Viewer it will load in Dasher because it's just a viewer. But you're not going to necessarily have the same information. It will be just a 2D interface.
There's nothing intrinsic about surface shading that requires 3D, but you do need the geometry. And with 2D plans it's not as nice in terms of having isolated objects; if you work with 2D plans in the Viewer, it's a little bit harder to get the right geometry.
It still renders as geometry actually, but it's harder to identify proper regions. It's possible. But it's a little bit trickier. It would be harder to do it actually in 2D than in 3D. Sure.
AUDIENCE: When you placed the [INAUDIBLE] sensor on the bridge, is that persistent or written back to the file so it can be opened up later?
SIMON BRESLAV: Yes. We just have a settings file associated with each model that we store in a database. It's not an intense solution; it's basically a big JSON file that has a list of the sensor IDs and locations. So if you move the model, the sensor will be floating in space and you would have to reposition it. What we actually found is that, depending on the situation, sometimes it's a lot easier to just do the placement in BIM. But not everybody has Revit, so if somebody on your team doesn't have Revit and they want to do the placement, they can. It was easy enough to add that feature, so we did. I don't know if that answers your question.
AUDIENCE: It's probably not [INAUDIBLE] persistent.
SIMON BRESLAV: It is persistent.
AUDIENCE: [INAUDIBLE] back up it would be considered [INAUDIBLE].
SIMON BRESLAV: One question over there.
AUDIENCE: How did you [INAUDIBLE]?
SIMON BRESLAV: There should be a what?
AUDIENCE: That's the navigation tools data. There is a rule and [INAUDIBLE] rule. How are we going to get that? Is there going to be a [INAUDIBLE]?
SIMON BRESLAV: Yes. So when you go through all the different properties you--
AUDIENCE: That is a JSON [INAUDIBLE] JSON rule. There should be a [INAUDIBLE] can render [INAUDIBLE] rooms and the other things.
SIMON BRESLAV: Room geometries have a special name. I don't know. I have to look at the code. I don't remember what it is.
But when you're going through the different-- are you talking about the room geometries or are you talking about just separating it in different floors.
AUDIENCE: Separating the [INAUDIBLE]
SIMON BRESLAV: The different floors. In that case, we were just looking at the name of the node, because when you do an export into Navisworks, it actually separates things into levels quite well. It's not a very complicated solution.
AUDIENCE: [INAUDIBLE] navigation [INAUDIBLE] like the names of the rooms and all the--
[INTERPOSING VOICES]
SIMON BRESLAV: So that's specified in the settings. We have a settings file associated with each model, and the labels will be different for each model, because different models have different floors, and such. Yeah, sure.
AUDIENCE: You didn't talk too much in the beginning about the data source of your records. So when you [INAUDIBLE] model, obviously, it's pulling sensor data from somewhere.
SIMON BRESLAV: That's right.
AUDIENCE: So are you doing anything specific in Forge to aggregate sensor data on the model data?
SIMON BRESLAV: So we have our own service that we use; it's called Project Data 360. And yeah, it does all kinds of stuff, with rollups and things like that. But it's not part of official Forge or anything.
Any other questions? I guess if no questions, thank you very much.
[APPLAUSE]