Description
Key Learnings
- Learn how to capture point clouds from a standard LiDAR device (iPad).
- Visualize point clouds in Autodesk Forge Viewer.
- Learn how to automatically align point clouds with the intelligent model.
- Learn about merging point clouds in a Revit file by using Autodesk Forge Design Automation.
Speakers
- Alexandre Piro: I studied mechanical engineering and industrial organization. After graduating in 2012, I started working in aeronautics on tooling and efficiency improvement. In 2016, I began augmented reality development with CAD models, and after six months I joined PIRO CIE. I spent a year on research and POC development in augmented and virtual reality. Since 2018, I have led AR/VR projects and web workflows to make data exchange more flexible. I also started working with Autodesk Platform Services to explore another aspect of CAD and 3D manipulation outside of editors. I work actively with the Forge development team, have participated in four APS Accelerators, and can develop against different parts of the APS API. At AU 2022, I presented a class about point cloud integration in a construction workflow with APS.
- Michael Beale: Michael Beale has been a Senior Developer Consultant since July 2017 for the Autodesk Developer Network and Forge Development Partner Program. Before joining the Forge platform team, Michael worked on Autodesk Homestyler, Cloud Rendering, the Stereo Panorama Service (pano.autodesk.com), and the A360 Interactive Cloud Renderer before working on the Forge Viewer and Viewing APIs. Twitter: @micbeale Blog: https://aps.autodesk.com/author/michael-beale
- Brian Nickel: My name is Brian D. Nickel. I am a graduate of Montana State University's Graduate School of Architecture. I have been an educator for three years at Gallatin College in Bozeman, Montana, and have taught remotely from Boise, Idaho for two years through Microsoft Teams. We leverage VR technology to assist remote learning with Autodesk products. I am passionate and energetic about the use of AEC technology and educating our future emerging AEC workforce. I have attended several national conferences where I have been a speaker, advocate, and collaborator with our industry. One of my core design principles is a belief that design can only have an impact through immense collaboration with the architecture, engineering, and construction (AEC) industry. The industry can be more successful by working through the design together and breaking free from individual silos. I have completed my NCARB Architectural Experience Program requirements and am beginning to study for licensure. A quote that has defined my career path, and that I reflect on every day, comes from Jack Smith, FAIA, my thesis advisor at Montana State University's Graduate School of Architecture: "Don't become a tool to the tool."
MICHAEL BEALE: Hello and welcome to our talk on unlocking potential with Point Clouds and Forge, a customer story and technical deep dive. My name is Michael Beale. I'm from Autodesk on the developer advocates team. And with me today are two other guests.
ALEXANDRE PIRO: Hi, my name is Alexandre Piro. I come from France. And I'm working at Piro CIE.
BRIAN NICKEL: Hello, my name is Brian Nickel. And I'm the CEO and founder of Allied BIM. And I'm based in Boise, Idaho.
MICHAEL BEALE: Great. And between the three of us, we'll be presenting today. We'll be covering a quick introduction. And then I'm going to hand this off to Brian, who's going to talk a little bit about the customer side of things. Then I'll talk a little bit about Forge, deep-diving into this technology. And then we'll cover some of the new future-looking work with Alexandre. So over to you, Brian.
BRIAN NICKEL: To myself? So my name is Brian Nickel. I'm the CEO and founder of Allied BIM. I am a graduate of the Montana State University School of Architecture in Bozeman, Montana. I currently teach in the Montana State University system at Gallatin College, and I have also started a company with my business partner around our software, which is Allied BIM.
I have a case study example of a general contractor project that we're working on. I am based in Boise, Idaho. And my background is basically focused on developing software for the AEC space.
A question that I'd like to kind of pose to the audience is why aren't we using models to fabricate buildings? Back in the 1970s, NASA and JPL went model to machine.
We're in 2022. And we're about 50 years later. And we're still outputting drawings to 2D paper and building off of 2D construction sets. We're going to talk a little bit about status quo, what exists today, and what are some of the battles that we face in our industry.
I'm going to introduce a new solution, which we're calling the Forge-powered fabrication solution. And then I'll show some screen captures from ReCap Pro, our BIM 360 share, and a custom viewer. I'm actually going to have Michael's support on that, and he'll go into a technical dive and demonstration of the technologies.
Moving on to the next. On here, we have three different pieces of total station and laser scanning equipment. On the left, you're going to see an iCON 80. The iCON 80 is a total station that lays out points and validates field conditions from the model, and from reality back to the model.
The RTC 360 is a high-end laser scanner. It gets about a sixteenth-inch accuracy, and it captures scans in about a minute and 26 seconds per setup. The BLK 360 is a coffee-cup-sized scanner that gets about an eighth-inch accuracy in its capture.
These are just the Leica products. There's other products as well that relate to these. These are just what we've used at the subcontracting firm in Bozeman that I worked for.
These solutions are leveraged to drive essentially what we're laying out in the computer to digitally check and validate construction in real time, as we're constructing projects and buildings digitally.
So the question that I asked in the beginning is why aren't we using models to fabricate buildings? The problem is the status quo. Architects, engineers, general contractors, and subcontractors are all focused on, essentially, developing an LOD 300 deliverable, which basically means these construction documents that you're seeing on the left.
Now, a solution to this could be a Forge-powered approach to fabrication, enabled by Autodesk Forge. Essentially, we're building the solution on top of the Autodesk Forge platform. And we're connecting to Autodesk Build in order to create a network of fabricators that can work with these contractors and their models, so that we can start to focus on going model to machine.
In general, LOD sucks. Someone is paying for it somewhere. Oftentimes, we hear that LOD 300 is what's in the scope of work, and there's no money to develop LOD 500 models.
This chart here was developed by Jeffrey Pinheiro, @TheRevitKid. And what's interesting and comical about this graph is that everyone in our industry has a different perception of what LOD is.
So LOD 100's graphics should actually probably be in 400. But at the end of the day, we're all putting out 2D drawings. So why are we doing that? Why aren't we connecting models to machinery and building off of the models that we're spending so much time developing?
So a problem that we've recognized in our industry is that digital building design is growing rapidly. And offsite fabrication is also growing rapidly. But both are completely disconnected from one another.
So there's no real easy way to get the 2D documentation set out to the field. It's generally just paper to the field. And so what is the solution? What can we do? Also, in addition to that, offsite fabrication doesn't have machinery that's equipped or connected to the models, which makes it even harder to pass data to your fabrication shops.
So status quo-- the way it goes, everybody knows. The BIM model is a pretty picture. In coordination, we are wasting time and data modeling systems that are not accurate in the way that they're going to be fabricated.
We've oftentimes received coordinated engineered work that doesn't have appropriate conditions of what should be installed in the field. And so what happens is a lot of rework has to happen, which creates a disconnect.
What ultimately ends up happening is the engineer passes the coordinated model off to a subcontractor. And then the subcontractor has to rework that. And then nightmare mode's initiated in the field, because they're getting one set of plans. But they're getting a set of fabrication documents that don't match up with the engineered plans. So we call this nightmare mode.
So I have some examples of some of the nightmares that we've witnessed in reality. One on the left is a plumbing vent penetration going right through the center of a doorway. How does this happen? Why are we dealing with this?
On the right, we've got a construction pile of waste in front of a house. We drove through this subdivision. And I encourage you to drive through subdivisions in your local neighborhoods that are being built.
But out of all of the job sites that were being built, there were multiple piles that looked just like this one of wasted material in the front. And this can all be mitigated if we're building off the model.
What we're also seeing in here is installation nightmares. Because we're working in 2D conditions working off quote-unquote "coordinated models," we call these MEP sandwiches, where the plumber gets in first, or the electrician gets in first. And everybody has to work around one another. There's going to be some serious pressure drop issues in that duct run that you see on the right there. And why are we curving ducts to avoid pipe? This is insanity.
So the reality of status quo is that right now, in our industry, we're dealing with a high trade shortage. The trades are losing talent, because students are being encouraged to attend four-year schools. And they're not getting enough exposure into what opportunities exist in the trades.
We're also dealing with a high waste amount. There's large amounts of waste due to zero automation. There's low to zero productivity, which is creating a lack of ambition to increase productivity, which is contributing to zero growth in our industry. It's all red because this is status quo.
The solution that we're proposing is that we create BIM VDC models that are fully converted into constructable models for construction. In addition to this, we're using reality capture for verification.
So we're creating point clouds from devices like the BLK 360, or RTC 360, or drones. And we're capturing that to ensure that what is installed on site matches and it validates to the model so that we can then assemble the model into buildable kits of parts that we can use to tag and send to machinery.
And then ultimately, we can do some offsite fabrication and send that to the job site so that materials are shipped to the site in package boxes that are labeled and ready for the subcontractor to install. And in our view, in our lens, this contributes to success.
So Forge-powered fabrication statistics and results-- by using Autodesk Forge, we've been able to focus more on VDC digital design. We're using VDC to be a digital waste bin.
So what that means is that we're modeling conditions to confirm that we're yielding the least amount of waste. We can make the digital errors in advance. We can think more clearly through the fabrication process, and ultimately, tie that into machine automation, which allows us to go directly from a VDC model into a fabricatable machine piece.
This has increased productivity by 800%, or eight times beyond what the traditional status quo method looks like. We're also seeing 100% growth in opportunities to be able to connect to machinery, and to work on obtaining more work to complete the schedules quicker to build more buildings with less waste, and ultimately, contributing to more success.
So what does all of this mean? Well, models currently are being built for kitting and order procurement. That gives us a set of materials that can be ordered from supply houses, lets us send a set of instructions to the machines directly out of the model, and allows us to connect the models to track the status of cut materials, and to understand labor, statistics, and machine serviceability.
We're also enabling a network of machines to distribute job information remotely. On the fabrication side, we see this as creating an opportunity to enable anyone, with or without machines, to have a network of machines to connect to their models. In this example, we can see that we've got Scotchman, RazorGage, and AMOB machines currently connected, focusing heavily on linear positioning systems and bending solutions.
So with our case study for this general contractor project, what we did is we created a VDC model with the scope of work of a bathroom riser. And we did this against the traditional engineered design model to show the value and the business case of what would result in less material waste and an overall more efficient schedule.
So I designed this system using Revit. We had a reality capture specialist go out on site with an RTC 360. They captured the laser scan conditions in about 30 minutes. That data was sent over to us digitally. It was assembled, and spooled, and sent to a remote machine in Bozeman, Montana, where it was digitally fabricated from the model to the machine and shipped on site for success using the Forge platform.
What this enabled us to do was-- this traditional install would take about two weeks to install. We were able to install this plumbing riser stack in just one day. So we cut a two-week schedule down substantially. And this was success.
So I have a video that I'm going to share. So in this video, what we're doing is we're using Forge to essentially compartmentalize all of the assemblies out of the Revit model. We've grouped all of the assemblies into buildable parts. We're color coding those assemblies. And we're showing the part IDs that enable us a set of instructions of how these need to be pieced together.
This is all built on top of the viewer application. In addition to this, we're cloud compartmentalizing all of the machine control into the web and using a web browser to now distribute cut length information to these remotely-connected machines.
So what you can see is that we're actually sending machine control and sets of instructions to control a machine remotely. What this means is that every connected machine has a connector that's connected to the Forge-based model.
And as the lists are sent into this machine control graph, we can actually see fabrication plans and cutting success as we're running through this. In here, you can see that we've got linear lengths of pipe as well as movement operations to be able to control the materials that are being derived from the model. This is all powered by Forge.
All right, so in this video, what we have is our Revit application. This is the BIM VDC model. The BIM VDC model can be exported to a cut list, which can be sent directly to a machine.
What is nice about this is we're able to run it through a fabrication management system that enables us to organize all of the kits of assemblies, and parts, and pieces to distribute out to the Excel file so that the machine can read it. So in here, you'll see that the cut list is being distributed to Excel.
So we're going to switch over to one more video real quick. So in this application, what we're doing is we're loading the 3D viewer with the kit of assemblies. And we're distributing those into the machine control on the machine on the right.
So in the image on the right, we have a 24-foot linear positioner. And what this software allows us to do is take a BIM model that's housed in Forge. It allows us to load and home the machine.
So you can see that right now, we've moved the positioner to the minimum position. So we're remotely controlling this machine, using the fabrication logic developed and derived from the VDC model, loaded into the machine in a way that minimizes the amount of wasted material.
What we're seeing in a lot of shops is that people are hand-keying in lengths. Or they're pulling measurements using a tape measure. This digitally allows us to remove a tape measure out of the equation, which allows us to have less-skilled trade laborers put together and fabricate more materials with less knowledge.
What we're running here is a dynamic algorithm that pulls the associated parts and pieces out and nests the linear lengths from shortest to longest to proceed with cutting the material.
So what you're seeing in here is a fully up-cut saw, meaning we load one stick and hit Start. And the machine is actually cycling and performing a cut. You can also see down here that every time the list is loaded and cut, that it's processing that material.
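As a rough illustration of that shortest-to-longest nesting idea (Allied BIM's actual algorithm is their own; the part data and stock length here are hypothetical), a greedy cut-list packer might look like this:

```js
// Minimal sketch of shortest-to-longest nesting onto sticks of stock.
// Parts, kerf, and stock length are hypothetical illustration values.
function nestCuts(parts, stockLength, kerf = 0.125) {
  // Sort cut lengths ascending (shortest to longest), then pack greedily.
  const sorted = [...parts].sort((a, b) => a.length - b.length);
  const sticks = [];
  for (const part of sorted) {
    // Place the part on the first stick with enough remaining material.
    let stick = sticks.find(s => s.remaining >= part.length + kerf);
    if (!stick) {
      stick = { cuts: [], remaining: stockLength };
      sticks.push(stick);
    }
    stick.cuts.push(part);
    stick.remaining -= part.length + kerf;
  }
  return sticks; // one entry per stick of stock, with its ordered cut list
}

// Example: pipe cut lengths (inches) nested onto 24-foot (288") sticks.
const sticks = nestCuts(
  [{ id: 'P1', length: 94.5 }, { id: 'P2', length: 30 }, { id: 'P3', length: 121 }],
  288
);
console.log(sticks.length, 'sticks required');
```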
I'm going to go ahead and switch back over to the presentation. So once the materials are sent-- in the example that I shared before from the VDC model-- we showed up on site with all of the prepackaged materials and put them in the designated zone.
And so you can see that we've prepackaged all of the pipe. Everything is bundled and labeled with a Zebra-printed label. We're also doing inkjet locations on there as well. And what this enables is a set of instructions for the assembler on site to piece it together in a much more organized, kit-of-LEGO-parts approach.
So this is the finished installation of the drainage, waste, and vent assembly in the wall. It was a two-story stack. And we can see that everything is labeled and installed appropriately.
So in this discussion of offsite scanning for project management, what we're looking at is that there are some scanning software options. For laser scanning software, we have Leica Cyclone REGISTER 360 or FARO SCENE, among other options. But it requires some technical skill and aptitude to be able to register and process those scans and get them completed.
We're seeing a phase shift, where now we're starting to use iPhones and video capture software options to stitch together point clouds. And what we're realizing is that the end result lands in Autodesk Forge, where we can use Forge to streamline the communication of these systems to the point where there's no technical aptitude required to process these and view them as a project manager.
Most project managers don't know how to open up Revit. They don't know how to use registration techniques. And what Forge enables us to do is to connect directly into Autodesk Build, to fetch the required file types, and display that data in a usable, friendly fashion for these project managers.
In addition to this, using the total station, we've laid out 750-plus sleeves in a matter of hours, which cut weeks off the schedule. Traditionally, people are pulling tape measures. Now, we can use robotic total stations and lay out points in just a matter of hours.
This is going to segue into Michael's presentation. So Michael, take the lead.
MICHAEL BEALE: Thanks, Brian. That's a really great example of how Forge is helping the fabrication process and how point clouds can also help with that process. Now, I'm going to deep dive on how you can use Forge with point clouds.
But first, I need to cover a couple of other use cases in the AEC industry. We've seen how point clouds can help the fabrication industry with higher-precision scanning, and I'll cover a couple of other point cloud use cases in the AEC industry. I'll then talk about the new point cloud support inside BIM 360. And then I'll wrap up on how you can use Forge with 3D-Tiles and points.
So the first one, let's take a look at some of those use cases. So this is really about making point clouds accessible from anywhere and how they're applied in the AEC industry. Now, Brian already covered some of those other products and some of those other technologies that are disrupting in this area.
So let me dig a little bit deeper into those. So the first one you saw with fabrication was taking high-precision measurements and being able to verify them offsite or from anywhere from a browser. And these are sort of high-precision comparisons-- to be able to compare the CAD model with the scan. And that's where that alignment is critical.
Another example of that is a company called Airsquire that uses Forge. And they find-- they can compare two different scans and take a version comparison. And they analyze those points. They cluster, segment, and identify them. And they do that in an automated way.
Doing that manually-- trying to find the difference between two sets of points-- would be really time-consuming. And so they try to automate that process. But again, this is an example of high-precision comparisons.
Another one which everyone's probably mostly familiar with is in that sort of visual comparison. This is where it's less about precision and more about the visual side of things.
So you see this in real estate. It's now expected that customers can see their real estate from anywhere. And they expect it to load quickly. It's supposed to render quickly. And it's very convenient.
And then if you take that same thing that Matterport and Zillow are doing in the real estate industry and apply it to the construction industry, you get something like this.
This is where OpenSpace, StructionSite, and a few others are using 360 panoramas and an iPhone to capture the inside of a construction site over a period of time. So you take a photograph one day, then compare it with one from a previous day, and see the progress of the construction site.
And another similar example-- this is, again, taking scans from a drone, processing them into a point cloud, and then overlaying that with the CAD model.
And not only that-- you're taking the CAD model's schedule information and re-colorizing the model with progress. So you can see, in this CAD model, parts of the job site are on track and other parts are not. And that's done by a company called Reconstruct Inc.
So now, let's cover a little bit about the new point cloud support inside BIM 360. So if you haven't seen it already, let me give you a quick demo. So this is where you would upload your RCP files from ReCap Pro into BIM 360 Docs. And you can now view those point clouds directly in your browser. So let's take a quick look.
Here, I've clicked on an RCP file. And it immediately shows inside the browser as a point cloud scan. Now, I've got the debug terminal turned on, so you can see a few extra features-- one of them is being able to change the point size, for example.
You can also see on the left that there are three scans listed-- there are actually 50 scans here. So this is not a unified scan; it's a combination of 50 scans overlaid on top of each other. But the important thing is that as you zoom closer into the scene, more points and more details are loaded.
Now, under the hood, this is using 3D-Tiles Next. So let's do a little deep dive into that. 3D-Tiles is an open format. And it's based on a hierarchical level-of-detail system.
And what's going on under the hood here is when we upload that RCP file up to BIM 360, either via the browser or the desktop connector, this triggers a conversion. And that conversion happens in our Forge Model Derivative service. So that RCP file is converted with the ReCap engine from RCP into 3D-Tiles.
And that generates a tileset.json file and a whole lot of PNTS files compressed with Draco compression. Those files are then stored in Forge OSS buckets alongside SVF derivatives.
And then finally, we are going to stream those files just like we do with Forge Viewer and CAD models. We're going to stream those to the browser. And we'll load that tileset.json file first. And then that will start streaming in the points. And I'll dig into that in a little more detail in a minute.
The first question, though, is how do you upload these RCP files? So here's a quick video just to demonstrate the two techniques on how to do it. The first one is via our desktop connector.
So here, I have on the left my file system. And I'm simply dragging and dropping the RCP folder onto the desktop connector. And then that uploads that to BIM 360 Docs. And then that's automatically converted. And you can see that's viewing in the browser on the right.
The second method is to upload via the browser directly. And so you have to create the support folder and then slowly drag each of those RCP files and RCS files directly into BIM 360. And you can see that that's now processing those files. And I can now click on that RCP file. And it will load in the point clouds.
So that's the upload side. Once the file is uploaded, we convert it using Autodesk ReCap Pro. Now, this is the same point cloud engine that's found in Autodesk AutoCAD, Revit, InfraWorks, and Autodesk Civil 3D. That engine takes those RCP points and converts them into 3D-Tiles and 3D-Tiles Next.
So once those files are converted into 3D-Tiles file sets, they're stored along with other derivatives, like SVF and SVF2. And the actual process is something like this.
We generate that hierarchical level of detail of the points based on an octree structure. We divide the scene into different regions-- an octree set of regions, I should say-- and we take those millions of points and divide them into those octree regions.
And then each of those small regions we call a tile. We divide the tiles up into roughly 100,000 points each. From there, we take each point set and save it as a file-- a .pnts file-- and make sure that it's Draco compressed. And then finally, we save those files into Forge OSS buckets. So that's the storage side of things.
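As an illustration of that subdivision step (this is not Autodesk's converter, just a sketch of the octree idea; real HLOD pipelines also keep a sparse sample of points at interior nodes for the coarser levels of detail):

```js
// Sketch: recursively subdivide a point set into octants until each
// leaf holds at most ~100k points; each leaf becomes one .pnts tile.
const MAX_POINTS_PER_TILE = 100_000;

function buildOctree(points, bounds) {
  // points: array of {x, y, z}; bounds: { min: [x,y,z], max: [x,y,z] }
  if (points.length <= MAX_POINTS_PER_TILE) {
    return { bounds, points, children: [] }; // leaf -> one .pnts tile
  }
  const mid = bounds.min.map((v, i) => (v + bounds.max[i]) / 2);
  const buckets = Array.from({ length: 8 }, () => []);
  for (const p of points) {
    // Classify each point into one of the 8 child octants.
    const idx =
      (p.x >= mid[0] ? 1 : 0) | (p.y >= mid[1] ? 2 : 0) | (p.z >= mid[2] ? 4 : 0);
    buckets[idx].push(p);
  }
  const children = buckets
    .filter(b => b.length > 0)
    .map(b => buildOctree(b, boundsOf(b)));
  return { bounds, points: [], children }; // interior node
}

function boundsOf(points) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const { x, y, z } of points) {
    [x, y, z].forEach((v, i) => {
      min[i] = Math.min(min[i], v);
      max[i] = Math.max(max[i], v);
    });
  }
  return { min, max };
}
```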
And then lastly, now that we've got the file set hosted in the cloud, we want to stream it to a browser. And this is where something like Forge Viewer, or Cesium, or maybe even Unreal Engine can take that tileset.json file, and it knows what to do with it in order to start streaming points.
Now, under the hood, the tileset.json file would be considered the master file. This is a JSON file that describes that octree structure in terms of bounding boxes. So you've got a BVH or an octree, all described as a JSON file in terms of regions and bounding volumes. And each bounding volume points to a PNTS file, or a GLTF or GLB file. And so there are basically hundreds of these.
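A minimal, hand-written tileset.json in that shape might look like this (values are illustrative; the files generated by the Model Derivative service contain hundreds of tiles):

```json
{
  "asset": { "version": "1.0" },
  "geometricError": 512,
  "root": {
    "boundingVolume": { "box": [0, 0, 10, 100, 0, 0, 0, 100, 0, 0, 0, 10] },
    "geometricError": 64,
    "refine": "ADD",
    "content": { "uri": "root.pnts" },
    "children": [
      {
        "boundingVolume": { "box": [-50, -50, 10, 50, 0, 0, 0, 50, 0, 0, 0, 10] },
        "geometricError": 8,
        "content": { "uri": "0/0.pnts" }
      }
    ]
  }
}
```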
Now, when a browser or an HLOD engine comes along to load this tileset.json file, it looks at the geometric error. That geometric error is weighed against the distance between the camera position and the tile to decide how much detail to load.
So that means denser points load close to the camera and sparse points stay far away, because the engine loads just the tiles that it needs, based on the camera frustum and the geometric error at that distance.
So 3D-Tiles and GLTF are both open formats. And once we've got an open format, it means other open viewers can decode it-- you don't have to rely on a proprietary format or try to reverse-engineer it. So tileset.json is decoded by many viewers, such as Cesium and Forge Viewer, and I've got an example with Mapbox, which I'm going to show you now.
So let's start with Cesium. If I give Cesium a tileset.json file like this one-- this is an Oakland train station drone scan-- you can see that these debug bounding boxes appear. I've just set that as an option here. And as the camera moves closer, it's loading in more points for each tile.
And for each tile, there are roughly 100,000 points, and Cesium knows what to do to load that tile and render it. As the camera moves away, you can see those bounding boxes disappearing. That means it's unloading the tile from memory, so there's less to render.
Under the hood, you can see I've downloaded this locally. I've got a whole lot of PNTS files, and I've got this tileset.json file. And you can see that the tileset.json points to these PNTS files based on the geometric error there.
And finally, here's the index.html file of that Cesium browser view you just saw. It's a single file, and it's pretty simple. All I have to do is tell Cesium, once it's loaded, to load this particular tileset.json file, and it knows what to do after that and streams in the points.
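The essence of that page, assuming a 2022-era Cesium build loaded in index.html (the tileset path is a placeholder), is just a few lines:

```js
// Cesium.js and widgets.css are loaded by the surrounding index.html.
const viewer = new Cesium.Viewer('cesiumContainer');

const tileset = viewer.scene.primitives.add(
  new Cesium.Cesium3DTileset({
    url: './oakland-scan/tileset.json', // local copy of the tileset
    debugShowBoundingVolume: true,      // the debug boxes seen in the demo
  })
);

// Fly the camera to the tileset once its root tile is ready.
tileset.readyPromise.then(() => viewer.zoomTo(tileset));
```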
So the question then is, how can I download this tileset from BIM 360 for my own offline use cases? Maybe I'm out in the field and I need to load those points without an internet connection.
So one way to do that is to use a script like this. This is a really quick example of taking a BIM 360 access token, and the URL to the tileset.json file, and then recursively going through that JSON file and downloading each PNTS file or GLTF file.
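A minimal sketch of such a script, for Node 18+ with its built-in fetch (the base URL and the token environment variable are placeholders):

```js
// Recursively walk a tileset.json and download every referenced tile.
import fs from 'node:fs/promises';

const BASE_URL = 'https://developer.api.autodesk.com/.../output/'; // placeholder
const headers = { Authorization: `Bearer ${process.env.FORGE_ACCESS_TOKEN}` };

async function download(relPath) {
  const res = await fetch(BASE_URL + relPath, { headers });
  if (!res.ok) throw new Error(`${res.status} for ${relPath}`);
  const buf = Buffer.from(await res.arrayBuffer());
  await fs.writeFile(relPath.replaceAll('/', '_'), buf); // flatten paths locally
  return buf;
}

// Visit each tile, fetch its payload, then recurse into its children.
async function walk(tile) {
  const uri = tile.content?.uri ?? tile.content?.url; // spec moved url -> uri
  if (uri) await download(uri);
  for (const child of tile.children ?? []) await walk(child);
}

const tileset = JSON.parse((await download('tileset.json')).toString());
await walk(tileset.root);
```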
Now, let's move on to Forge Viewer. Forge Viewer version 8 is a work in progress-- it's in private beta. And we can use it to load point clouds into the existing viewer. So, a little like what you saw with the ReCap project, you'll soon start to see the CAD model combined with the point cloud scan.
So here, I'm taking the model that Brian was demonstrating. This is a facility out in Boise. And you can see with Forge Viewer version 8, I've got a point cloud scan combined with the CAD model. And that point cloud scan is 3D-Tiles.
Now, I can still use all of the same Forge Viewer tools-- like the measure tool, for example: I can click on two points and take a measurement, which might be useful. The important thing is that I can take these point cloud sets-- the tileset.json file-- and, with an existing solution like Brian's Allied BIM interface, which is already using Forge Viewer, add this component to bring the scan information in on top of the CAD model.
Now, to do that, here's a quick example-- the demo that I just showed-- and here's the source code behind it. You can see I've got an index.html file and an index.js file. This is simply loading a URN with Forge Viewer-- the basic loading of Forge Viewer.
I then load a tileset based on this code here, and I provide the tileset.json file. Now, over in the tiles.js file, we're actually loading the ThreeJS 3D-Tiles loader from the New York Times-- that's this repo here.
So if you're interested in how this is being done, you can see, I'm loading the external library and combining that inside Forge Viewer using LMV version 8, which is now using the latest ThreeJS.
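A sketch of that wiring (not Michael's exact source) using LMV's internal overlay APIs; the Loader3DTiles entry points follow that library's README at the time, so treat the exact signatures as assumptions to verify against your version:

```js
import { Loader3DTiles } from 'three-loader-3dtiles';

async function loadTileset(viewer, url) {
  const renderer = viewer.impl.glrenderer();
  const { model, runtime } = await Loader3DTiles.load({
    url, // e.g. the tileset.json derivative from BIM 360
    renderer,
    options: { dracoDecoderPath: '/libs/draco' }, // .pnts tiles are Draco-compressed
  });

  // Draw the streamed tiles in an overlay scene on top of the CAD model.
  viewer.impl.createOverlayScene('tiles');
  viewer.impl.addOverlay('tiles', model);

  // Let the tiles runtime refine/unload tiles as the camera moves.
  viewer.addEventListener(Autodesk.Viewing.CAMERA_CHANGE_EVENT, () => {
    runtime.update(1 / 60, renderer, viewer.impl.camera);
    viewer.impl.invalidate(false, false, true); // redraw overlays only
  });
}
```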
So a third way to bring the CAD and scan models together is by using layers-- 3D-Tiles' layering system. 3D-Tiles allows you to load multiple sub-tilesets together as an overlay.
And essentially, we can create two different tileset.json files-- one based on points and one based on a CAD model-- align them with GPS coordinates on the planet, and then render them with a 3D-Tiles viewer.
Essentially, we're going to take a GLTF, wrap it inside a tileset.json, and then define its correct location. The beauty of this is that the GLTF renderer built inside the 3D-Tiles viewer can handle both points and triangles without any effort.
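A sketch of such a wrapper, following this prototype's convention of Web Mercator (EPSG:3857) offsets in the root transform (a spec-standard 3D-Tiles transform would be an ECEF matrix; the translation numbers here are illustrative values near Boise):

```json
{
  "asset": { "version": "1.0" },
  "geometricError": 2000,
  "root": {
    "boundingVolume": { "box": [0, 0, 20, 60, 0, 0, 0, 60, 0, 0, 0, 20] },
    "geometricError": 0,
    "refine": "REPLACE",
    "transform": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, -12935325, 5406000, 0, 1],
    "content": { "uri": "structure.glb" }
  }
}
```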
So let's see that in action with a Mapbox demo. Here, I've got a local version of this, and I'm zooming into Boise, Idaho in the USA. And I've located the structural model and the piping MEP model, combined with the point cloud tileset, together in one single scene.
So you've got satellite imagery combined with the CAD model and the scan model, all inside a sort of Mapbox prototype here. It's hard to see on Zoom, but it's actually rendering at about 60 frames per second.
And the code for this one is also fairly straightforward. We have an index.html file, we load in the GLTF loader that we need, and we go to app.js, which is basically where we load in these different layers.
You can see I'm loading a Mapbox satellite layer for the background. And then I load in the PNTS tileset and then also the CAD model. And let's take a little look at how the actual GLTF is loaded.
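A condensed sketch of an app.js like the one described-- a Mapbox satellite style plus a 'custom' 3D layer that renders three.js content, the same pattern as Mapbox's official three.js custom-layer example. The token, layer id, and model-loading details are placeholders:

```js
import mapboxgl from 'mapbox-gl';
import * as THREE from 'three';

mapboxgl.accessToken = '<your-mapbox-token>';

const map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/satellite-v9', // the satellite background layer
  center: [-116.2, 43.615],                     // Boise, Idaho
  zoom: 17,
  pitch: 60,
  antialias: true,
});

map.on('style.load', () => {
  const camera = new THREE.Camera();
  const scene = new THREE.Scene();
  let renderer;

  map.addLayer({
    id: 'cad-and-points',
    type: 'custom',
    renderingMode: '3d',
    onAdd(map, gl) {
      // Share Mapbox's WebGL context with three.js.
      renderer = new THREE.WebGLRenderer({ canvas: map.getCanvas(), context: gl });
      renderer.autoClear = false;
      // ...load the structure/piping GLBs and the .pnts tiles into `scene`,
      // positioned in the map's coordinate space...
    },
    render(gl, matrix) {
      // Mapbox hands us its view-projection matrix every frame.
      camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);
      renderer.resetState();
      renderer.render(scene, camera);
      map.triggerRepaint();
    },
  });
});
```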
So when the tileset.json file references a GLTF file or a PNTS file, the engine loads it directly-- with a GLTF loader, or, if it's a PNTS file, by loading the point material and creating a ThreeJS object.
So here, you can see the GLTF loader component. It's literally using the GLTF loader library, and then parsing that, and returning the buffer for it to be loaded in ThreeJS.
Now, let's take a look at the tileset.json file. Here, you've got the GLTF file, which is the structure, and you can see the offset coordinates for the lat/long. Those are actually converted to EPSG 3857: I take a lat/long coordinate of the location, convert it to that projection, and put that inside the tileset.json file. And I do the same thing for the piping GLB file.
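For reference, that lat/long-to-EPSG:3857 conversion is just the spherical Web Mercator formula:

```js
// Convert lat/long (EPSG:4326, degrees) to Web Mercator (EPSG:3857) meters.
const R = 6378137; // WGS84 radius used by Web Mercator

function toEPSG3857(lonDeg, latDeg) {
  const x = (lonDeg * Math.PI / 180) * R;
  const y = Math.log(Math.tan(Math.PI / 4 + (latDeg * Math.PI / 180) / 2)) * R;
  return [x, y];
}

console.log(toEPSG3857(-116.2, 43.615)); // ≈ [-1.29e7, 5.4e6] for Boise
```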
Now, that's a standard GLTF file. So you can simply drag and drop that GLTF file into a GLTF viewer. It's basically 31 draw calls. And the same thing for the structure-- I can drag that into a standard GLTF viewer, and you can see the structure component.
Now, for the points-- in the tileset.json for the points, you can see I've got a whole lot of PNTS files. They've come straight from BIM 360, and I've downloaded them for offline use. And you can see I'm literally just loading each of those PNTS files directly from local storage.
A couple of interesting things have come out of this. We're also looking at using GLTF for points with MeshOpt compression. We found that MeshOpt gets really good decode performance of about a gigabit per second, with a [INAUDIBLE] library that's about a kilobyte in size.
And the results have been really impressive. The ratio, the compression ratio, is about equivalent to Draco. And so this is something that's being proposed.
And then lastly, with those GLTF files of the CAD model-- we've converted those CAD models, SVF and SVF2, into GLTF. And because we managed to consolidate them for draw-call performance, we're now using what's called a feature ID to be able to pinpoint and identify objects within a GLTF viewer.
So you can see here, I've clicked on the roof of this building here. And I can see that it's got a DBid 2213, which was the same as what we saw in Unity.
And that is it. So hopefully, that's given you a quick overview of how you can use Forge and point clouds together. Next, Alex from PIRO CIE is going to look into the future, with what Apple is doing with ARKit and how it can be used with Design Automation for Revit.
ALEXANDRE PIRO: Thank you, Michael. So as Michael and Brian were saying, there are a lot of ways to create point clouds and to use them in different software. I'm going to show you a way to create a point cloud from reality capture and use it in Revit, and I'm going to describe the whole workflow.
So first, I'm going to talk a little bit more about PIRO CIE. We are a software development company. We develop custom plugins for Autodesk products. We also support digital transformation for our customers and help them implement BIM.
We develop dedicated software using different kinds of technologies-- especially augmented reality, virtual reality, and mixed reality-- on smartphones, tablets, and different kinds of headsets.
We also develop web applications, especially using Autodesk Forge. And we do some research on digital twins using sensors and different kinds of IoT devices.
We have been part of the Autodesk Developer Network since 2016, an Autodesk service provider since 2019, and a certified systems integrator since 2020.
So how did we get here? We developed an augmented reality app for smartphones and tablets that allows us to control progress on construction sites: we superimpose the 3D model on site to check that the shape and location of all the elements are correct.
As we are working with smartphones and tablets using the camera, sometimes we need to be more accurate. Also, in such an app, the only way to get feedback from an inspection is to take pictures and write text notes. So we decided to find a way to be more accurate and to get more feedback from these inspections.
And Apple released, in the latest iPad Pro and iPhone Pro, a new feature: a LiDAR scanner integrated into the device next to the camera. It allows us to implement new features, to be more accurate, and especially to capture reality and get more detailed feedback from an inspection.
So I'm going to explain the workflow we created. It's based on the augmented reality app we created and are already using on site. The first part of this workflow is to access our documents-- our 3D models-- which are located on BIM 360 or Autodesk Construction Cloud.
The second part is to create the point cloud using reality capture. Then I'm going to show you how to integrate the point cloud into the Forge Viewer with the 3D model and align them. And the last part is the merge with a Revit file using Design Automation, and how we can visualize both the 3D model and the point cloud in Revit.
So let's start with document access in BIM 360. We need to access the file to get some information about where we are in the 3D model. Using the Forge Data Management API, we can access our BIM 360 files and list all of them, along with all the folders in the hierarchy.
And we can list all the details of our file and open whichever version of the file we want. And we can display this file in the default viewer, using a WebView on the iPad.
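Under the hood, these are plain Data Management REST calls. A sketch (the token handling is a placeholder; the endpoints are the Forge/APS Data Management v1 routes):

```js
const API = 'https://developer.api.autodesk.com';
const headers = { Authorization: `Bearer ${process.env.FORGE_ACCESS_TOKEN}` };

// List the BIM 360 / ACC hubs that the 3-legged token can see.
async function listHubs() {
  const res = await fetch(`${API}/project/v1/hubs`, { headers });
  return (await res.json()).data;
}

// List the sub-folders and items (files) inside one project folder.
async function listFolderContents(projectId, folderId) {
  const res = await fetch(
    `${API}/data/v1/projects/${projectId}/folders/${folderId}/contents`,
    { headers }
  );
  return (await res.json()).data;
}
```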
We are using a lot of different built-in functions from Autodesk Forge to help us place ourselves in the model and get all the information that we need-- especially the AEC metadata and the levels extension.
As you can see on the right, we used a minimap extension to show a floor plan and the position of the user in the model. And we also use the BimWalk extension to get a first-person view and to help us select the different references that we need. It's very useful when you are in a very big building and need to locate yourself on a different level or at a different position.
So the technique that we use is a [INAUDIBLE] technique that places us by taking the offset from an origin in the viewer. We select three references in the viewer-- two vertical ones and one horizontal one-- and we create their intersection. And from that, we get the distance from the model origin.
We also keep track of the plane normals, which we will use later for the model orientation. And we also get some Forge data, especially the model scale, so we have all the information to later align the point cloud with the 3D model.
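The local origin itself is just the intersection of three planes, which can be computed in closed form; a sketch (the plane values are hypothetical):

```js
// Each plane is { normal: [x, y, z], d } with the equation n·p = d.
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
const dot = (a, b) => a[0]*b[0] + a[1]*b[1] + a[2]*b[2];

function intersectPlanes(p1, p2, p3) {
  // p = (d1 (n2 x n3) + d2 (n3 x n1) + d3 (n1 x n2)) / (n1 . (n2 x n3))
  const n23 = cross(p2.normal, p3.normal);
  const denom = dot(p1.normal, n23);
  if (Math.abs(denom) < 1e-9) return null; // planes don't meet in one point
  const n31 = cross(p3.normal, p1.normal);
  const n12 = cross(p1.normal, p2.normal);
  return [0, 1, 2].map(i =>
    (p1.d * n23[i] + p2.d * n31[i] + p3.d * n12[i]) / denom
  );
}

// Two walls and a floor: their intersection is the shared corner/origin.
const origin = intersectPlanes(
  { normal: [1, 0, 0], d: 2.5 },  // wall A
  { normal: [0, 1, 0], d: -1.0 }, // wall B
  { normal: [0, 0, 1], d: 0.0 }   // floor
); // -> [2.5, -1, 0]
```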
So now, we're ready to capture. With our device, we can start augmented reality. Now we need to repeat the process we did in the Forge Viewer, but in augmented reality: after detecting the walls, we select the same three references that we selected in the viewer and create the same local origin-- the intersection of the three planes-- which gives us the offset from this origin to the real-world origin.
We also keep track of the plane normals, again for the point cloud orientation later. And for the scale, in augmented reality we work in meters, so all the values are in meters and we don't have any conversion to do.
After that, we can save all these values into a database. In this case, we are using a JSON file format, but we could use pretty much any kind of file.
Now, how do we capture points from our device? After releasing the LiDAR capability on the device, Apple provided a very good example of how to use the depth map from the Metal buffer, and especially how to get the XYZ coordinates of different points.
That allows us to get points in the real world. We get all these coordinates, and using the camera buffer, we can also get the RGB value of these points. So that gives us the XYZ coordinates of each point in world space, along with the RGB value of that point.
We can save that in the PLY format, which describes the whole point cloud. There are other file formats, but we chose PLY as it's very simple to parse.
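A sketch of what that PLY serialization looks like (the ASCII variant, with XYZ floats plus RGB bytes per vertex):

```js
// Serialize captured points to ASCII PLY.
function toPLY(points) {
  // points: array of { x, y, z, r, g, b } with r/g/b in 0..255
  const header = [
    'ply',
    'format ascii 1.0',
    `element vertex ${points.length}`,
    'property float x',
    'property float y',
    'property float z',
    'property uchar red',
    'property uchar green',
    'property uchar blue',
    'end_header',
  ].join('\n');
  const body = points
    .map(p => `${p.x} ${p.y} ${p.z} ${p.r} ${p.g} ${p.b}`)
    .join('\n');
  return `${header}\n${body}\n`;
}
```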
So this is a video that shows a demo of the point cloud capture. Let's go through the whole process I described. First, we access our file in BIM 360-- you can see I can navigate through the folders.
Now, I can open the 3D model in the viewer. You see on the right that we have the minimap, which is already enabled. And I'm going to place the user in the building, where I want to select my references.
So now, I can select two vertical references on [INAUDIBLE], and we keep track of this intersection. Now we are in augmented reality-- you can see that we have a scan of the room, where we see the mesh.
And we select the same references as we did in the viewer, and we have this local origin. Now, we can start the point capture. I just need to move slowly to capture a lot of points.
The more time I spend moving the device, the more points the point cloud will accumulate. As you can see, within a few seconds we already have [INAUDIBLE] 200,000 points. And when it's done, we save the PLY file, along with the different values that we captured before.
So now, how do we integrate that into the Forge Viewer on the web? We use a very simple technique, which is the THREE.PointCloud function-- a function integrated into ThreeJS.
But it's limited to fairly small point clouds-- we can't do more than about one million points, and if we go over, it gets a little bit slow. But as Michael showed, there are a lot of different techniques to optimize that and make it work with bigger point clouds.
Once we have all our points displayed as an overlay with the 3D model, we can align the point cloud with the 3D model using the origin offsets that we calculated before in the viewer and in augmented reality. We can orient it with the plane normals, and we can scale from meters to the units the viewer needs.
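A sketch of that overlay, using the three.js r71 names bundled with Forge Viewer at the time (THREE.PointCloud / THREE.PointCloudMaterial; newer three.js calls these THREE.Points / THREE.PointsMaterial). The alignment inputs are the offset, rotation, and scale captured earlier; the function and parameter names here are illustrative:

```js
function addScanOverlay(viewer, positions, colors, align) {
  // positions/colors: Float32Arrays with 3 floats per point
  const geometry = new THREE.BufferGeometry();
  geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
  geometry.addAttribute('color', new THREE.BufferAttribute(colors, 3));

  const material = new THREE.PointCloudMaterial({
    size: 10,
    vertexColors: THREE.VertexColors,
  });
  const points = new THREE.PointCloud(geometry, material);

  // Align the scan with the model: origin offset, plane-normal rotation,
  // and meters -> model-unit scale, as computed from the three references.
  points.position.copy(align.offset);     // THREE.Vector3
  points.quaternion.copy(align.rotation); // THREE.Quaternion
  points.scale.set(align.scale, align.scale, align.scale);

  viewer.impl.createOverlayScene('scan');
  viewer.impl.addOverlay('scan', points);
  viewer.impl.invalidate(false, false, true); // redraw overlays
}
```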
This is what I'm going to show you in this video. As you can see, I can access my files the same way I did before, but with a different interface, and open my file in the Forge Viewer.
And now, on the right side of the screen, I have the different scans that are available-- the one we showed in the video, and another scan. You see the dates, and you also see the number of points.
As you can see, it's pretty smooth-- there are not a lot of points in this scan. We can easily change from one scan to another, and it's easy to imagine combining multiple scans in the same viewer. But for this demo, I was using one point cloud at a time.
So now, we can merge this point cloud using Design Automation. Everything we did in the viewer was only an overlay: the Revit file remains the same, and it's still separate from the point cloud. So I'm going to show you how to do that with Design Automation for Revit.
Our first approach was to create a plugin to import the raw point cloud data-- our XYZ coordinates and RGB values-- into Revit. It was working, using the basic point cloud engine, but nothing was persistent.
Every time we closed the Revit instance, we lost everything-- we couldn't save the point cloud into the Revit file. It's probably due to the fact that, since Revit 2019, we can't import a raw point cloud using the point cloud engine; it's limited to RCS and RCP files, which are the ReCap Pro formats.
So we changed our mind and decided to convert our point cloud into the ReCap Pro file format. To do that, we used the ReCap SDK and created a microservice that runs on the web and converts our PLY files into RCS.
Now, it's pretty straightforward. In Revit, we can create a simple plugin and use the point cloud engine with a simple point cloud instance to import our RCP file into Revit. And we need to do the alignment, as we did in the Forge Viewer.
For that, we use the ElementTransformUtils functions in exactly the same way, using the offset data, the plane normals, and also the scale to match everything together. And we can easily align the point cloud with the Revit file.
We also decided to automate this part using Forge Design Automation. So we converted our plugin into a Design Automation app bundle, and we can run it from the web without opening Revit at all.
This is what I'm going to show you in this video. After we load our point cloud into the viewer, we can click on the button, and you see all the different steps of this process: upload to our database, convert to RCP, and then start Design Automation.
You see the different steps of the Design Automation process. Once it's done, we get a link with the results and can download the ZIP file. The ZIP file contains two files: the Revit file and the RCS file, which is the point cloud.
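A sketch of kicking off that Design Automation step from the web. The POST /workitems route is the real Design Automation v3 endpoint; the activity id and argument names are hypothetical, since they depend on how the app bundle and activity were registered:

```js
async function startMergeWorkitem(token, rvtUrl, rcsUrl, resultUrl) {
  const res = await fetch('https://developer.api.autodesk.com/da/us-east/v3/workitems', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      activityId: 'MyNickname.MergePointCloud+prod', // hypothetical activity
      arguments: {
        rvtFile: { url: rvtUrl },                // input Revit model
        pointCloud: { url: rcsUrl },             // converted RCS/RCP files
        result: { url: resultUrl, verb: 'put' }, // signed URL for the output ZIP
      },
    }),
  });
  return (await res.json()).id; // poll GET /workitems/{id} until it completes
}
```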
Now, how can we use this file in Revit? This is the last part of our process. When we open the Revit file in Revit, we only see the 3D model. But when we go to Manage Links, in the Point Cloud tab, we see that we have a link to the RCS file.
We just need to resolve the link, because of the absolute versus relative path, which is what's breaking it. Once that's done, we can save, and everything works. And once the link is resolved, you see the point cloud appear in Revit.
The model hasn't changed-- it's just a link to the point cloud. We can hide it or remove it if we want. And we can also modify it, as long as we don't change the coordinates of the points. As you can see, we can start working in Revit with the point cloud on top of the 3D model.
Now, how can we improve this workflow? I just want to say that this workflow is not closed-- it's definitely something we created just for demonstration.
We can reuse each part in other workflows and adapt it to everyone's needs. We could include all the different parts into one app. We could also prepare the point cloud with dedicated tools.
In this case, we used an automated workflow, so everything is raw. The point cloud can have some noise, and it can contain stray points that we don't really want-- especially through the windows, where there were a lot of points. So we could definitely clean up the point cloud before using it.
We could also pre-compute the point cloud. In this case, we wanted to stay flexible, so we apply the transformation to the points every time. But if the workflow were fixed, we could pre-compute the point cloud in model coordinates to produce a specific file, which would let us remove some steps from the workflow.
We can also combine multiple point clouds. In this case, we demonstrated with only one point cloud, but we could definitely use several, in many applications-- for example, 3D annotation tools, where we don't want a large point cloud, but just some parts of it to show some details, as we sometimes do with pictures.
But we can also create very large point clouds by merging or combining many, many scans together. And as I said about the web integration, we can definitely use new features such as 3D-Tiles and streaming, as Michael described in this presentation, to optimize the way we handle very large point clouds.
BRIAN NICKEL: Thank you, Alexandre and Michael. This was a fantastic session with everyone. As a final wrap-up, we just want to summarize our experience today. I showed some examples of the realities of the construction workflow and some of the things we're facing as pain points in the field.
And one thing that's been nice, having come from the construction industry, is that the Autodesk Forge team has been a great resource for me and our team at Allied BIM. Autodesk Forge has a great community-- I've participated in several of the accelerators-- and the community involvement is important.
I think Alexandre's piece of technology really illustrates how we can leverage other mediums, aside from conventional laser scanning technologies-- such as iPhone cameras-- to stitch together point clouds.
The support and the network that we have with Autodesk Forge are second to none. It gives us all the tools we need to keep advancing.
One thing that's critically important, that Michael, Alex, and I would like to convey, is that we would like you all to join the community and really be a part of pushing the needle forward-- so that we can do exactly what was on Alex's last slide and lift off into outer space with these ideas and this technology.
So thank you, everybody, for joining the session. And please don't hesitate to reach out to us on social media networks, or even through the community, with Autodesk Forge. Thank you.