MICHAEL BEALE: Hello, and welcome to our talk on "Unlocking the Potential with Point Clouds and Forge: A Customer Story and Technical Deep Dive." My name is Michael Beale. I'm from Autodesk, on the Developer Advocacy team. And with me today are two other guests.
ALEXANDRE PIRO: Hi, my name is Alexandre Piro. I come from France, and I'm working at Piro CIE.
BRIAN NICKEL: Hello, my name is Brian Nickel, and I'm the CEO and founder of Allied BIM, and I'm based in Boise, Idaho.
MICHAEL BEALE: Great. And between the three of us, we'll be covering a quick introduction, and then I'm going to hand this off to Brian, who's going to talk a little bit about the customer side of things. Then I'll talk about Forge and deep dive into this technology. And then we'll cover some of the new future-looking work with Alexandre. So over to you, Brian.
BRIAN NICKEL: To myself? So my name is Brian Nickel. I'm the CEO and founder of Allied BIM. I am a Montana State University School of Architecture graduate based in Bozeman, Montana. I currently teach in the Montana State University system, at Gallatin College, and I've started a company with my business partner for our software, which is Allied BIM.
I have a case study example of a general contractor project that we're working on. I am based in Boise, Idaho. And my background is basically focused on developing software for the AEC space.
A question that I'd like to pose to the audience is, why aren't we using models to fabricate buildings? Back in the 1970s, NASA and JPL went model to machine. We're in 2022, about 50 years later, and we're still outputting drawings to 2D paper and building off of 2D construction sets.
We're going to talk a little bit about the status quo-- what exists today and what are some of the battles that we face in our industry. I'm going to introduce a new solution, which we're calling the Forge-Powered Fabrication Solution. And then I'll show some screen captures from ReCap Pro, BIM 360 Share, and a custom viewer. I'm actually going to have Michael's support on that, and he'll go into a technical dive and demonstration of the technologies.
Moving on to the next slide, we have three different total stations and pieces of laser scanning equipment. On the left, you're going to see an iCON 80. The iCON 80 is a total station that lays out points and validates field conditions-- from the model out to reality, and from reality back to the model.
The RTC 360 is a high-end laser scanner. It gets about a 1/16-inch accuracy, and it captures scans in about a minute and 26 seconds per setup. The BLK 360 is a coffee-cup-sized scanner that gets about a 1/8-inch accuracy in its captures.
These are just the Leica products. There are other products as well that relate to these. These are just what we've used at the subcontracting firm in Bozeman that I worked for. These solutions are leveraged to drive what we're laying out in the computer-- to digitally check and validate construction in real time, as we're constructing projects and buildings digitally.
So the question that I asked in the beginning is, why aren't we using models to fabricate buildings? The problem is the status quo: architects, engineers, general contractors, and subcontractors are all focused on developing an LOD 300 deliverable, which is the set of construction documents that you're seeing on the left.
Now, a solution to this could be an approach for Forge-powered fabrication, enabled by Autodesk Forge. Essentially, we're building the solution on top of the Autodesk Forge platform, and we're connecting to Autodesk Build in order to create a network of fabricators that can work with these contractors and their models, so that we can start to focus on going model to machine.
In general, LOD sucks. Someone is paying for it somewhere. Oftentimes, we hear that LOD 300 is what's in the proposal's scope of work, and there isn't money to develop LOD 500 models.
This chart here was developed by Jeffrey Pinheiro, The Revit Kid. And what's interesting and comical about this graph is that everyone in our industry has a different perception of what LOD is. LOD 100 graphics should probably actually be at 400. But at the end of the day, we're all putting out 2D drawings.
So why are we doing that? Why aren't we connecting models to machinery and building off of the models that we're spending so much time developing?
So a problem that we've recognized in our industry is that digital building design is growing rapidly and offsite fabrication is also growing rapidly, but both are completely disconnected from one another. So there's no real easy way to get the 2D documentation set out to the field. It's generally just paper to the field.
And so what is the solution? What can we do?
Also, in addition to that, offsite fabrication doesn't have machinery that's equipped or connected to the models, which makes it even harder to pass data to your fabrication shops. So status quo-- the way it goes, everybody knows-- the model is a pretty picture.
In coordination, we are wasting time, modeling systems in ways that aren't accurate to how they're going to be fabricated. We've oftentimes received coordinated, engineered work that doesn't reflect the appropriate conditions of what should be installed in the field.
And so a lot of rework has to happen, which creates a disconnect. What ultimately ends up happening is the engineer passes the coordinated model off to a subcontractor, the subcontractor has to rework it, and then nightmare mode is initiated in the field, because the field is getting one set of plans plus a set of fabrication documents that don't match up with the engineered plans. So we call this nightmare mode.
So I have some examples of some of the nightmares that we've witnessed in reality. One on the left is a plumbing vent penetration going right through the center of a doorway.
How does this happen? Why are we dealing with this?
On the right, we've got a construction pile of waste in front of a house. We drove through this subdivision-- and I encourage you to drive through subdivisions in your local neighborhoods that are being built. Out of all of the job sites being built, there were multiple piles that looked just like this one, of wasted material out front. This could all be mitigated if we were building off the model.
What we're also seeing in here is installation nightmares because we're working in 2D conditions, working off, quote, unquote, "coordinated models." We call these MEP sandwiches, where the plumber gets in first or the electrician gets in first, and everybody has to work around one another.
There's going to be some serious pressure drop issues in that duct run that you see on the right there. And why are we curving ducts to avoid pipe? This is insanity.
So the reality of status quo is that, right now in our industry, we're dealing with a high trade shortage. The trades are losing talent because students are being encouraged to attend four-year schools. And they're not getting enough exposure into what opportunities exist in the trades.
We're also dealing with high waste: there are large amounts of waste due to zero automation. There's low to zero productivity, which creates a lack of ambition to increase productivity, which contributes to zero growth in our industry. It's all red because this is the status quo.
The solution that we're proposing is that we create BIM VDC models that are fully converted into constructable models for construction. In addition, we're using reality capture for verification: we're creating point clouds from devices like the BLK 360, the RTC 360, or drones, and we're capturing those to ensure that what is installed on site matches and validates to the model, so that we can then assemble the model into buildable kits of parts that we can tag and send to machinery.
And then, ultimately, we can do some offsite fabrication and send that to the job site, so that materials are shipped to the site in packaged boxes that are labeled and ready for the subcontractor to install. And in our view-- in our lens-- this contributes to success.
So, Forge-powered fabrication statistics and results-- by using Autodesk Forge, we've been able to focus more on VDC digital design. We're using VDC as a digital waste bin. What that means is that we're modeling conditions to confirm that we're yielding the least amount of waste.
We can make the errors digitally, in advance. We can think more clearly through the fabrication process and ultimately tie that into machine automation, which allows us to go directly from a VDC model to a fabrication-ready machine piece. This has increased productivity by 800%, or eight times beyond the traditional status quo method. We're also seeing 100% growth in opportunities to connect to machinery, obtain more work, complete schedules quicker, and build more buildings with less waste, ultimately contributing to more success.
So what does all of this mean? Well, models are currently being built for kitting and order procurement, which gives us a set of materials that can be ordered from supply houses. That lets us send a set of instructions to the machines directly out of the model, with the data to connect the models, track the status of materials, and understand labor statistics and machine serviceability.
We're also enabling a network of machines to distribute job information remotely. On the fabrication side, we see this as creating an opportunity for anyone, with or without machines, to have a network of machines to connect to their models. In this example, we can see that we've got Scotchman, RazorGage, and AMOB machines currently connected, focusing heavily on linear positioning systems and building solutions.
So with our case study for this general contractor project, what we did is we created a VDC model for the scope of work of a bathroom riser. And we did this against the traditional engineered design model to show the value and the business case-- what would result in less material waste and an overall more efficient schedule.
So I designed this system using Revit. We had a reality capture specialist go out on site with an RTC 360. They captured the laser-scanned conditions in about 30 minutes. That data was sent over to us digitally. It was assembled and spooled and sent to a remote machine in Bozeman, Montana, where it was digitally fabricated from the model to the machine and shipped onsite for success using the Forge platform.
What this enabled us to do was-- this traditional install would take about two weeks to install. We were able to install this plumbing riser stack in just one day. So we cut a two-week schedule down substantially, and this was a success.
So I have a video that I'm going to share. In this video, what we're doing is we're using Forge to essentially compartmentalize all of the assemblies out of the Revit model. We've grouped all of the assemblies into buildable parts. We're color-coding those assemblies, and we're showing the part IDs that give us a set of instructions for how these need to be pieced together. This is all built on top of the viewer application.
In addition to this, we're cloud compartmentalizing all of the machine control into the web and using a web browser to now distribute cut-length information to these remotely connected machines. So what you can see is that we're actually sending machine control and sets of instructions to control a machine remotely.
What this means is that every connected machine has a connector that's connected to the Forge-based model. And as the lists are sent into this machine control graph, we can actually see fabrication plans and cutting success as we're running through this. And here, you can see that we've got linear lengths of pipe, as well as movement operations, to be able to control the materials that are being derived from the model. This is all powered by Forge.
So in this video, what we have is our Revit application. This is the BIM VDC model. The BIM VDC model can be exported to a cut list, which can be sent directly to a machine.
What is nice about this is we're able to run it through a fabrication management system that enables us to organize all of the kits of assemblies and parts and pieces to distribute out to the Excel file so that the machine can read it. So in here, you'll see that the list is being distributed to Excel.
So we're going to switch over to one more video real quick. So in this application, what we're doing is we're loading the 3D viewer with the kit of assemblies and we're distributing those into the machine control on the machine on the right.
So in the image on the right, we have a 24-foot linear positioner. And what this software allows us to do is take a BIM model that's housed in Forge. It allows us to load and home the machine.
So you can see that, right now, we've moved the positioner to the minimum position, and we're remotely controlling this machine using the fabrication logic developed and derived from the VDC model, loaded into the machine in an approach that minimizes material waste.
What we're seeing in a lot of shops is that people are hand-keying in lengths or pulling measurements with a tape measure. This digitally removes the tape measure from the equation, which allows less-specialized trade laborers to put together and fabricate more materials with less knowledge. What we're running here is a dynamic algorithm that pulls the associated parts and pieces out and nests the linear lengths from shortest to longest to proceed with cutting the material.
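For illustration, here's a minimal sketch of that shortest-to-longest nesting idea in JavaScript. The function and parameter names are hypothetical-- this is not Allied BIM's actual algorithm:

```javascript
// Hypothetical sketch of shortest-to-longest cut-list nesting.
// `cuts` are lengths (inches) pulled from the model; `stockLength`
// is the raw stick loaded into the saw; `kerf` is blade width lost per cut.
function nestCuts(cuts, stockLength, kerf = 0.125) {
  const sorted = [...cuts].sort((a, b) => a - b); // shortest to longest
  const sticks = [];
  for (const cut of sorted) {
    // Place each cut on the first stick with enough remaining length.
    let stick = sticks.find((s) => s.remaining >= cut + kerf);
    if (!stick) {
      stick = { cuts: [], remaining: stockLength };
      sticks.push(stick);
    }
    stick.cuts.push(cut);
    stick.remaining -= cut + kerf;
  }
  return sticks; // one entry per stick of raw material to load
}
```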
So what you're seeing here is a fully automated upcut saw, meaning we load one stick and hit start, and the machine cycles and performs the cuts. You can also see, down here, that every time the list is loaded and cut, it's processing that material.
I'm going to go ahead and switch back over to the presentation. So once the materials from the example that I shared before were sent, we showed up on site with all of the prepackaged materials and put them in the designated zone. You can see that we've prepackaged all of the pipe. Everything is bundled and labeled with a Zebra-printed label. We're also doing inkjet location marks on there as well.
And what this enables is a set of instructions for the assembler on site to piece it together in a much more organized kit. It's a LEGO-parts approach.
So this is the finished installation of the drainage waste and vent assembly in the wall. It was a two-story stack. And we can see that everything is labeled and installed appropriately.
So in this discussion of offsite scanning for project management, what we're looking at is that there are some scanning software options. For laser scanning software, we have Cyclone REGISTER 360 or FARO SCENE, among other options. But they require some technical skill and aptitude to register and process those scans and get them completed.
We're seeing a phase shift where now we're starting to use iPhones and video capture software to stitch together point clouds. And what we're realizing is that the end result lands at Autodesk Forge, where we can use Forge to streamline the communication of these systems so that there's no technical aptitude required for a project manager to process and view them.
Most project managers don't know how to open up Revit. They don't know how to use registration techniques. And what Forge enables us to do is to connect directly into Autodesk Build, to fetch the required file types, and display that data in a usable, friendly fashion for these project managers.
In addition to this, we've laid out, using the total station, 750-plus sleeves. We did that in hours, and it cut weeks off the schedule. Traditionally, people are pulling tape measures; now we can use robotic total stations and lay out points in just a matter of hours.
This is going to segue into Michael's presentation, so Michael, take the lead.
MICHAEL BEALE: Thanks, Brian. That's a really great example of how Forge is helping the fabrication process and how Point Clouds can also help with that process.
Now, I'm going to deep dive on how you can use Forge with Point Clouds. We've just seen how Point Clouds can help the fabrication industry with higher-precision scanning. I'll first cover a couple of other use cases for Point Clouds in the AEC industry. I'll then talk about the new Point Cloud support inside BIM 360. And then I'll wrap up on how you can use Forge with 3D Tiles and points.
So the first one. Let's take a look at some of those use cases. So this is really about making Point Clouds accessible from anywhere and how they're applied in the AEC industry. Now, Brian already covered some of those other products and some of those other technologies that are disrupting in this area. So let me dig a little bit deeper into those.
So the first one you saw with fabrication was taking high-precision measurements and being able to verify them offsite or from anywhere from a browser. And these are high-precision comparisons, to be able to compare the CAD model with the scan, and that's where that alignment is critical.
Another example of that is a company called Esquire, that uses Forge. They find they can compare two different scans, take the version comparison, and analyze those points. They cluster, segment, and identify them in an automated way-- doing that manually, trying to find the difference between two sets of points, is really time consuming-- so they automate that process. Again, this is an example of high-precision comparisons.
Another one, which everyone's probably mostly familiar with, is in that visual comparison. This is where it's less about precision and more about the visual side of things. So you see this in real estate. It's now expected that customers can see their real estate from anywhere, and they expect it to load quickly. It's supposed to render quickly, and it's very convenient.
And then, if you take that same thing from the real estate industry-- what Matterport and Zillow were doing-- and apply it to the construction industry, you get something like this. This is where OpenSpace AI and Structure [INAUDIBLE] and a few others are using 360 panoramas and an iPhone, capturing the inside of a construction site over a period of time. So you take a photograph one day and then compare it with something from a previous day and see the progress of the construction site.
And another similar example-- this is, again, taking point cloud scans from a drone, processing them into a Point Cloud, and then overlaying that with the CAD model. And not only that-- you're taking the CAD model's schedule information and re-colorizing things in line with progress.
And so you can see, in this CAD model, parts of the job site are on track. Other parts are not on track. And that's done by a company called Reconstruct Inc.
So now let's cover a little bit about the new Point Cloud support inside BIM 360. If you haven't seen it already, let me give you a quick demo. This is where you upload your ReCap Pro files into BIM 360 Docs, and you can now view those Point Clouds directly in a browser.
So let's take a quick look. Here, I've clicked on an RCP file, and it immediately shows inside the browser as a Point Cloud scan.
Now, I've got the debug terminal turned on, so you can see a few of these extra features. One of these features is being able to change the point size. For example, you can also see, on the left, that there are scans listed-- there are actually 50 scans here.
And so this is not a unified scan. This is a combination of 50 scans overlaid on top of each other. But the important thing is, as you zoom closer into the scene, more points are loaded and more details are loaded.
Now, under the hood, this is using 3D Tiles Next. So let's do a little deep dive into that. 3D Tiles is an open format, and it's based on a hierarchical level-of-detail system.
And what's going on under the hood here is, when we upload that RCP file to BIM 360, either via the browser or the Desktop Connector, it triggers a conversion, and that conversion happens in our Forge Model Derivative service. So that RCP file is converted with the ReCap engine into tiles, and that generates a Tileset.json file and a whole lot of pnts files compressed with Draco compression.
Those files are then stored in Forge OSS buckets, alongside SVF derivatives. And then, finally, we stream those files, just like we do with Forge Viewer and CAD models. We stream those to the browser, loading that Tileset.json file first, and then that starts streaming in the points. And I'll dig into that in a little more detail in a minute.
The first question though, is, how do you upload these files? So here's a quick video just to demonstrate the two techniques on how to do it. The first one is via Desktop Connector.
So here, I have, on the left, my file system, and I'm simply dragging and dropping the RCP folder onto the Desktop Connector. That uploads it to BIM 360 Docs, it's automatically converted, and you can see it viewing in the browser on the right.
The second method is to upload via the browser directly. So you have to create the folder and then drag each of those RCP files and RCS files directly into BIM 360. And you can see that it's now processing those files, and I can now click on that RCP file and it will load in the Point Clouds.
So that's the conversion process. Once it's uploaded, we convert it using Autodesk ReCap Pro. Now, this is the same point cloud engine that's found in AutoCAD, Revit, InfraWorks, and Autodesk Civil 3D. That engine takes those ReCap points and converts them into 3D Tiles and 3D Tiles Next.
So once those files are converted into 3D Tiles tile sets, they're stored along with other derivatives, like SVF and SVF2. And the actual process is something like this: we generate that hierarchical level of detail of the points based on an Octree structure. We divide the scene into different regions-- an Octree set of regions.
I should say we take those millions of points and divide them into those Octree regions, and each of those small regions we call a tile. We divide the tiles up into roughly 100,000 points each. From there, we take that point set, save it as a pnts file, and make sure that it's compressed. And then, finally, we save that file onto Forge OSS buckets. So that's the storage side of things.
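As a rough illustration of that tiling step-- an assumption-laden sketch, not Autodesk's actual converter-- the octree subdivision might look like this:

```javascript
// Sketch: recursively split points into octants until each tile holds
// roughly 100,000 points. Points are [x, y, z] arrays; bounds are
// { min: [x, y, z], max: [x, y, z] }. A leaf becomes one .pnts file.
const MAX_POINTS_PER_TILE = 100000;

const splitIntoOctants = ({ min, max }) => {
  const mid = min.map((v, i) => (v + max[i]) / 2);
  return Array.from({ length: 8 }, (_, i) => ({
    min: min.map((v, a) => ((i >> a) & 1 ? mid[a] : v)),
    max: max.map((v, a) => ((i >> a) & 1 ? max[a] : mid[a])),
  }));
};
const contains = ({ min, max }, p) => p.every((v, i) => v >= min[i] && v < max[i]);
const sample = (pts, n) => pts.filter((_, i) => i % Math.ceil(pts.length / n) === 0);

function buildOctree(points, bounds, depth = 0) {
  if (points.length <= MAX_POINTS_PER_TILE || depth > 16) {
    return { bounds, points, children: [] }; // leaf tile
  }
  // Interior tiles keep a sparse sample; children refine the detail.
  const tile = { bounds, points: sample(points, MAX_POINTS_PER_TILE), children: [] };
  for (const childBounds of splitIntoOctants(bounds)) {
    const inside = points.filter((p) => contains(childBounds, p));
    if (inside.length) tile.children.push(buildOctree(inside, childBounds, depth + 1));
  }
  return tile;
}
```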
And then, lastly, now that we've got the tile set hosted in the cloud, we want to stream that to a browser. And this is where something like Forge Viewer or maybe even Unreal Engine can take that Tileset.json file, and it knows what to do with that in order to start streaming files-- start streaming points.
Now, under the hood, the Tileset.json file would be considered the master file. This is a JSON file that describes that Octree structure in terms of bounding boxes. So you've got a BVH, or an Octree, all described as a JSON file, in terms of regions and bounding volumes. And each bounding volume points to a pnts file, or a glTF or GLB file. And so there are basically hundreds of these.
Now, when the browser, or an HLOD engine, comes along to load this Tileset.json file, it looks at the geometric error, and that geometric error is weighed against the distance between the camera position and the tile. So denser points are loaded closer to the camera, and sparser points far away, because the engine loads just the tiles that it needs, based on the camera frustum and the geometric error distance.
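A minimal Tileset.json along those lines might look like this. The numbers are made up for illustration, but the fields-- geometricError, boundingVolume, refine, content, children-- are the real 3D Tiles ones:

```json
{
  "asset": { "version": "1.0" },
  "geometricError": 500,
  "root": {
    "boundingVolume": { "box": [0, 0, 10, 100, 0, 0, 0, 100, 0, 0, 0, 10] },
    "geometricError": 100,
    "refine": "ADD",
    "content": { "uri": "root.pnts" },
    "children": [
      {
        "boundingVolume": { "box": [-50, -50, 10, 50, 0, 0, 0, 50, 0, 0, 0, 10] },
        "geometricError": 20,
        "content": { "uri": "tiles/0-0.pnts" }
      }
    ]
  }
}
```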
So 3D Tiles and glTF are both open formats. And once we've got an open format, it means we can use other open viewers to decode it. It means you don't have to rely on a proprietary format or try to decode one.
So Tileset.json is decoded by many viewers, such as Cesium, Forge Viewer, and I've got an example of Mapbox, which I'm going to show you now.
So let's start with Cesium. If I give Cesium a Tileset.json file like this one-- this is an Oakland train station drone scan-- you can see that these debug bounding boxes appear. I've just set that as an option here.
And as the camera moves closer, it's loading in more points for each tile set. And for each tile, there's 100,000 points roughly. And then Cesium knows what to do to load that tile and render it. As I move out-- as the camera moves away you can see those bounding boxes are disappearing, and that means it's unloading the tile from memory, and so there's less things to render.
Under the hood, you can see I've downloaded this locally, and I've got a whole lot of files, including this Tileset.json file. And you can see that the Tileset.json points to these pnts files, based on the geometric error there.
And finally, here's the index.html file of that Cesium browser that you just saw. It's a single file, and it's pretty simple. You can see all I have to do is tell Cesium, once it's loaded, to load this particular Tileset.json file, and it knows what to do after that, streaming in those points.
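That page boils down to something like the following sketch, using the standard CesiumJS API (the exact demo file may differ slightly):

```javascript
// Minimal Cesium page: load a 3D Tiles point-cloud tileset and fly to it.
// Assumes CesiumJS is included and a <div id="cesiumContainer"> exists.
const viewer = new Cesium.Viewer('cesiumContainer');
const tileset = viewer.scene.primitives.add(
  new Cesium.Cesium3DTileset({
    url: './tileset.json',            // the master file described above
    debugShowBoundingVolume: true,    // the debug boxes seen in the demo
  })
);
tileset.readyPromise.then(() => viewer.zoomTo(tileset));
```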
So the question, then, is, how can I download this tile set from BIM 360 for my own offline use cases? Maybe I'm out in the field and I need to load those points without an internet connection.
So one way to do that is to use a script like this. This is a really quick example of taking a BIM 360 access token and the URL to the Tileset.json file and then recursively going through the tileset and downloading each pnts or glTF file.
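A hedged sketch of such a script in Node.js-- the base URL is elided here, and the flattened output layout is an assumption:

```javascript
// Sketch: walk a Tileset.json and download every tile it references,
// sending the BIM 360 access token with each request. Node 18+ (fetch).
import fs from 'node:fs/promises';

const TOKEN = process.env.FORGE_TOKEN; // BIM 360 access token

async function fetchBuffer(url) {
  const res = await fetch(url, { headers: { Authorization: `Bearer ${TOKEN}` } });
  return Buffer.from(await res.arrayBuffer());
}

async function downloadTileset(baseUrl, uri = 'tileset.json') {
  const buf = await fetchBuffer(new URL(uri, baseUrl).href);
  await fs.writeFile(uri.replaceAll('/', '_'), buf); // flatten paths locally
  if (!uri.endsWith('.json')) return; // pnts/glTF payloads are leaves
  const walk = async (tile) => {
    if (tile.content?.uri) await downloadTileset(baseUrl, tile.content.uri);
    for (const child of tile.children ?? []) await walk(child);
  };
  await walk(JSON.parse(buf.toString()).root);
}

await downloadTileset('https://.../'); // elided tileset base URL
```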
Now, let's move on to Forge Viewer. Forge Viewer version 8 is a work in progress-- it's in private beta-- and we can use it to load Point Clouds into the existing viewer, a little bit like what you saw with the ReCap project. You'll start to see CAD combined with Point Cloud scans soon.
So here, I'm taking the model Brian was demonstrating. This is a facility in Boise. And you can see, with Forge Viewer version 8, I've got a Point Cloud scan combined with the CAD model. And that Point Cloud scan is tiles.
Now, I can still use all of the same Forge Viewer tools, like the measure tool, for example. I can click on two points and take the measurement, which might be useful.
The important thing is I can take these Point Cloud sets-- the Tileset.json file-- and with an existing solution like Brian's Allied BIM interface, which is already using Forge Viewer, they can add this component to overlay their scan information on top of the CAD model.
Now, to do that, here's a quick example file. The demo that I just showed-- here's the source code behind it. You can see I've got an index.html file. And I've got the index.js file. This is simply just loading a URL with Forge Viewer.
So it's the basic loading of Forge Viewer. I then load a tile set based on this code here. And I provide the Tileset.json file.
Now, over in the Tiles file, we're actually loading the 3D tiles loader from The New York Times. And that's this repo here.
So if you're interested in how this is being done, you can see I'm loading the external library and combining that inside Forge Viewer using LMV version 8, which is now using the latest Three.js.
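Put together, the glue looks roughly like this-- a sketch assuming the three-loader-3dtiles package and Forge Viewer v8 internals, so treat the exact option names as assumptions:

```javascript
// Sketch: load a CAD model in Forge Viewer, then overlay a 3D Tiles
// point cloud via the NYTimes loader. `accessToken` and `modelUrn`
// are assumed to be defined elsewhere.
import { Loader3DTiles } from 'three-loader-3dtiles';

Autodesk.Viewing.Initializer({ env: 'AutodeskProduction', accessToken }, () => {
  const viewer = new Autodesk.Viewing.GuiViewer3D(document.getElementById('forgeViewer'));
  viewer.start();
  Autodesk.Viewing.Document.load(`urn:${modelUrn}`, async (doc) => {
    await viewer.loadDocumentNode(doc, doc.getRoot().getDefaultGeometry());
    const { model, runtime } = await Loader3DTiles.load({
      url: './tileset.json',
      renderer: viewer.impl.glrenderer(),
    });
    viewer.impl.scene.add(model); // model is a THREE.Object3D
    // The loader's runtime generally needs per-frame updates (camera,
    // delta time) so tiles stream in as the view changes.
    viewer.impl.invalidate(true, true);
  });
});
```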
A third way to bring the CAD and scan models together is by using layers-- the 3D Tiles layering system. 3D Tiles allows you to load multiple tile sets together as overlays. Essentially, we can create two different Tileset.json files-- one based on points and one based on the CAD model-- align them with GPS coordinates on the planet, and then render them with a 3D Tiles viewer.
Essentially, we're going to take a glTF, wrap it inside a Tileset.json, and then define its correct location. The beauty of this is that the glTF renderer built inside the 3D Tiles viewer can handle both points and triangles without any effort.
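For example, a wrapper tileset along those lines might look like this, with the translation slot of the column-major transform carrying the EPSG:3857 offsets (all numbers here are hypothetical):

```json
{
  "asset": { "version": "1.1" },
  "geometricError": 100,
  "root": {
    "boundingVolume": { "box": [0, 0, 5, 60, 0, 0, 0, 40, 0, 0, 0, 5] },
    "geometricError": 0,
    "refine": "ADD",
    "transform": [
      1, 0, 0, 0,
      0, 1, 0, 0,
      0, 0, 1, 0,
      -12935000, 5405000, 0, 1
    ],
    "content": { "uri": "structure.gltf" }
  }
}
```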
So let's see that in action with a Mapbox demo. Here, I've got a local version of this, and I'm zooming into Boise, Idaho, in the USA. And I've loaded the structural model and the piping MEP model, combined with the Point Cloud tile set, together in one single scene.
So you've got that satellite imagery combined with the CAD model and the scan model, all inside a sort of Mapbox prototype here. And it's hard to see on zoom, but it's actually rendering at about 60 frames per second.
The code behind this one is also fairly straightforward. We have an index.html file, where we load in the glTF loader that we need. And then we go to app.js, which is basically where we load in these different layers. You can see I'm loading a Mapbox satellite layer for the background, then loading the pnts tile set, and then also the CAD model.
And let's take a little look at how the actual glTF is loaded. So when the Tileset.json file finds a glTF file or a pnts file, it loads them in directly with, basically, a glTF loader. Or, if it's a pnts file, it will load the point material and create a Three.js object.
So you can see the glTF load component. It's literally using the glTF loader library, parsing that, and returning the buffer for it to be loaded in Three.js.
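For the pnts branch, here's a sketch of that parsing step, following the published .pnts layout (28-byte header plus a feature table)-- not the demo's exact code:

```javascript
// Sketch: parse a .pnts payload into THREE.Points.
import * as THREE from 'three';

function parsePnts(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const ftJsonLen = view.getUint32(12, true); // featureTableJSONByteLength
  const ftJson = JSON.parse(
    new TextDecoder().decode(new Uint8Array(arrayBuffer, 28, ftJsonLen))
  );
  const binStart = 28 + ftJsonLen; // feature table binary follows the JSON
  const count = ftJson.POINTS_LENGTH;

  const geometry = new THREE.BufferGeometry();
  const positions = new Float32Array(
    arrayBuffer, binStart + ftJson.POSITION.byteOffset, count * 3
  );
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  if (ftJson.RGB) {
    const rgb = new Uint8Array(arrayBuffer, binStart + ftJson.RGB.byteOffset, count * 3);
    geometry.setAttribute('color', new THREE.BufferAttribute(rgb, 3, true)); // normalized
  }
  const material = new THREE.PointsMaterial({ size: 0.02, vertexColors: true });
  return new THREE.Points(geometry, material);
}
```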
Now, let's take a look at the Tileset.json file. Here, you've got the glTF file, which is the structure. And you can see the offset coordinates for the lat-long; those are converted to EPSG:3857. I take a lat-long coordinate of the location, convert it to that projection, and then put that inside the Tileset.json file. And I do the same thing for the piping glTF file.
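That lat-long-to-EPSG:3857 conversion is standard spherical Web Mercator math:

```javascript
// Convert longitude/latitude in degrees to EPSG:3857 meters.
const R = 6378137; // WGS84 radius used by Web Mercator

function toEPSG3857(lonDeg, latDeg) {
  const lon = (lonDeg * Math.PI) / 180;
  const lat = (latDeg * Math.PI) / 180;
  return [R * lon, R * Math.log(Math.tan(Math.PI / 4 + lat / 2))];
}

// e.g. toEPSG3857(-116.2, 43.6) -- roughly Boise, Idaho
```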
Now, that's a standard glTF file, so you can simply drag and drop it into any glTF viewer. It's basically 31 draw calls. And the same thing for the structure-- I can drag that into a standard glTF viewer, and you can see the structure component.
Now, the points-- the Tileset.json for the points-- you can see I've got a whole lot of files here. They've come straight from BIM 360; I've downloaded them offline. And you can see I'm literally just loading in each of those pnts files directly from local storage.
A couple of interesting things came out of this: we're also looking at using glTF for points, with MeshOpt compression. We found that MeshOpt gets really good decode performance-- about a gigabit per second-- with an [INAUDIBLE] library that's about a kilobyte in size.
And the results have been really impressive. The compression ratio is about equivalent to Draco. So this is something that's being proposed.
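In Three.js terms, wiring up the MeshOpt decoder is nearly a one-liner. This sketch uses APIs that exist in three.js and meshoptimizer today; the `scene` variable is assumed:

```javascript
// Decode MeshOpt-compressed glTF (EXT_meshopt_compression) in Three.js.
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { MeshoptDecoder } from 'three/examples/jsm/libs/meshopt_decoder.module.js';

const loader = new GLTFLoader();
loader.setMeshoptDecoder(MeshoptDecoder); // tiny WASM decoder
loader.load('points.glb', (gltf) => {
  scene.add(gltf.scene); // point primitives arrive as THREE.Points nodes
});
```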
And then, lastly, with those glTF files of the CAD model-- we've converted those CAD models, SVF and SVF2, into glTF, and we managed to consolidate them for draw-call performance. We're using what's called a feature ID to be able to pinpoint and identify objects within a glTF viewer. So you can see here I've clicked on the roof of this building, and I can see that it's got a dbId of 2213, which is the same as what we saw in Unity.
And that is it. So hopefully, that's given you a quick overview of how you can use Forge and Point Clouds together. Next, Alex from Piro CIE is going to look into the future with what Apple is doing with ARKit, and how it can be used with Design Automation for Revit.
ALEXANDRE PIRO: Thank you, Michael. So as Michael and Brian were saying, we have a lot of ways to use Point Clouds-- to create them and to use them in different software. I'm going to show you a way to create Point Clouds from reality capture and to use them in Revit. So I'm going to describe the [INAUDIBLE].
So first, I'm going to talk a little bit more about Piro CIE. We are a software development company. We develop custom plugins for Autodesk products. We also support digital transformation for our customers and help them implement BIM. And we develop dedicated software using different kinds of technologies, especially augmented reality, virtual reality, and mixed reality.
We develop this kind of software on smartphones and tablets and also on different kinds of headsets. We've also developed web applications, especially using Autodesk Forge. And we do some research on digital twins, using sensors or different kinds of [INAUDIBLE].
We've been part of the Autodesk Authorized Developer network since 2016, an Autodesk service provider since 2019, and a certified system integrator since 2020.
So how did we get here? We developed an augmented reality app, which works on smartphones and tablets using the camera, and which allows you to control progress on the construction site: we superimpose the 3D model on site to check that the shape and the location of all the elements are correct.
Sometimes we need to be more accurate. And also, for now, the only way to get feedback from this inspection is to take pictures and text notes.
So we decided to find a way to be more accurate and to get more feedback from these inspection experiences. Apple released, in the latest iPad Pro and iPhone Pro, a new feature: the LiDAR scanner, which is integrated into the device next to the camera. It allows us to implement new features, to be more accurate, and especially to capture reality and get more detailed feedback from the inspection.
So I'm going to explain to you the workflow we created. It's based on the augmented reality app we created and are already using onsite. The first part of this workflow is to access our documents-- our 3D models-- which are located on BIM 360 or Autodesk Construction Cloud.
The second part is to create the Point Cloud using reality capture. Then I'm going to show you how to integrate the Point Cloud into the viewer with the 3D model and align them.
And the last part is the merge, using Design Automation, with the Revit file, so we can visualize both the 3D model and the Point Cloud in Revit.
So let's start with the document access part. We need to access the file to get some information about the location where we are, and the 3D model.
So using the Data Management API, of course, we can access the files-- list all of them and the folders on the [INAUDIBLE]. We can list all the details of our files, open whichever version of the file we want, and display that file in the Forge Viewer, using a webview on the iPad.
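For reference, listing a folder with the Data Management API looks roughly like this-- the endpoint is real; the IDs and token handling are placeholders:

```javascript
// Sketch: list the contents of a BIM 360 / ACC folder.
async function listFolder(projectId, folderId, token) {
  const res = await fetch(
    `https://developer.api.autodesk.com/data/v1/projects/${projectId}/folders/${folderId}/contents`,
    { headers: { Authorization: `Bearer ${token}` } }
  );
  const { data } = await res.json();
  // Each entry is either a sub-folder or an item (a file with versions).
  return data.map((d) => ({ type: d.type, id: d.id, name: d.attributes.displayName }));
}
```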
So we are using a lot of the different built-in functions from Forge to help us place ourselves in the model and get all the information that we need, especially the AEC metadata and the Levels extension. As you can see on the right, we use the Minimap extension to show a floorplan and the position of the user in the model. And we also use the BimWalk extension to get a first-person view and to help us select the different references that we need. It's very useful when you are in a very, very big building and need to locate yourself on a different level or at a different position in the building.
So the technique that we use-- it's a [INAUDIBLE] technique that allows us to place ourselves by taking the offset from an origin in the viewer. We select three references in the viewer-- two vertical ones and one horizontal one. We create their intersection, and we get the distance from the origin. We also keep track of the plane normals, which we will use later for the orientation. And we also get some Forge data, especially the model scale, so we have all the information to align the Point Cloud with the 3D model afterwards.
So now we're ready to capture. With our device, we can start the augmented reality session. We need to repeat the process we did in the Forge Viewer, but now in augmented reality. After detecting the walls, we select the same three references that we selected in the viewer and create the same local origin-- the intersection of the three planes. And the [INAUDIBLE] is to get the offset from this origin to the real-world origin.
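The three-plane intersection that produces this local origin has a standard closed form. Here's a sketch in Three.js terms (the app itself computes this on the device):

```javascript
// Intersect three THREE.Plane objects (normal n, constant c, with
// n.p + c = 0): p = (d1(n2xn3) + d2(n3xn1) + d3(n1xn2)) / (n1.(n2xn3)),
// where di = -ci.
import * as THREE from 'three';

function intersectPlanes(p1, p2, p3) {
  const n23 = new THREE.Vector3().crossVectors(p2.normal, p3.normal);
  const n31 = new THREE.Vector3().crossVectors(p3.normal, p1.normal);
  const n12 = new THREE.Vector3().crossVectors(p1.normal, p2.normal);
  const det = p1.normal.dot(n23);
  if (Math.abs(det) < 1e-9) return null; // planes don't meet at one point
  return n23.multiplyScalar(-p1.constant)
    .add(n31.multiplyScalar(-p2.constant))
    .add(n12.multiplyScalar(-p3.constant))
    .divideScalar(det);
}
```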
We also keep track of the plane normals for the Point Cloud orientation later. And for the scale, in augmented reality all the values are in meters, so we don't have any conversion to do.
After that, we can save all these values into a database. In this case, we are using a JSON file format, but we could use pretty much any kind of file.
Now, how do we capture points from our device? After releasing the LiDAR capability on the device, Apple provided a very good example of how to use the depth map from the camera frame, and especially how to get the XYZ coordinates of different points-- so, for us, how to get points in the real world.
So we get all these coordinates. And using the camera buffer, we can also get the color value of each point. So that's how we get the XYZ coordinates for each point in real-world space, and the color value of that point.
So we can save that in the PLY format. There are other file formats to describe these Point Clouds, but the PLY is very simple to parse.
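For context, the ASCII PLY layout is simple enough to emit by hand. The app writes this on-device; this JavaScript sketch is just illustrative:

```javascript
// Serialize captured points as an ASCII PLY file.
function toPLY(points) { // points: [{ x, y, z, r, g, b }, ...]
  const header = [
    'ply',
    'format ascii 1.0',
    `element vertex ${points.length}`,
    'property float x', 'property float y', 'property float z',
    'property uchar red', 'property uchar green', 'property uchar blue',
    'end_header',
  ].join('\n');
  const body = points.map((p) => `${p.x} ${p.y} ${p.z} ${p.r} ${p.g} ${p.b}`).join('\n');
  return `${header}\n${body}\n`;
}
```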
So this is a video that shows a demo of the Point Cloud capture, which sums up the whole process I described. We can access our files on BIM 360-- you can see I can navigate through the folders.
Now I can open the file-- the 3D model-- in the viewer. You see, on the right, that we have the minimap, which is already enabled. And I'm going to place the user in the building, where I want to select my references.
So now I can select the two vertical references and the horizontal one, and we keep track of this intersection.
Now we're in augmented reality. As you see, we have a scan of the room where we see the mesh, and we select the same references as we did in the viewer. And we have this local origin.
Now we can start the point capture. I just need to move slowly to capture a lot of points. The more time I spend moving the device, the more the Point Cloud will accumulate. As you can see, in a few seconds, we will have [INAUDIBLE] 200,000 points.
And when it's done, we save this to a PLY file, along with the different values that we got before. Now, to integrate that in the Forge Viewer on the web, we use a very simple technique, which is the THREE.PointCloud() function. This is a function integrated into Three.js, but it's a little bit limited to small Point Clouds, so we can't do more than about one million points.
So if we go over that, it's going to be a little bit slow. But as Michael showed, there are a lot of different techniques to optimize that and make it work with bigger Point Clouds.
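Here's a sketch of that overlay technique. THREE.PointCloud was renamed THREE.Points in later Three.js releases; the overlay calls are real Forge Viewer APIs, though the demo's exact code may differ:

```javascript
// Build THREE.Points from parsed PLY data and add it to a Forge Viewer
// overlay scene, so it renders on top of the CAD model.
function addPointCloudOverlay(viewer, positions, colors) {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3)); // Float32Array
  geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3, true)); // Uint8Array
  const material = new THREE.PointsMaterial({ size: 10, vertexColors: true });
  const points = new THREE.Points(geometry, material);
  viewer.impl.createOverlayScene('scan-overlay');
  viewer.impl.addOverlay('scan-overlay', points);
  viewer.impl.invalidate(true, true); // force a re-render
  return points;
}
```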
So once we have all our points displayed as an overlay with the 3D model, we can do the alignment of the Point Cloud with the 3D model, using the origin offsets that we calculated before in the viewer and in augmented reality. We can orient it with the plane normals, and we can do the scaling from meters to the viewer units. This is what I'm going to show you in this video.
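The alignment itself can be composed as a single transform-- a hedged sketch, with the quaternion assumed to be derived from the captured plane normals:

```javascript
// Align the scan with the model: translate by the origin offset, orient
// from the plane normals, and scale meters -> viewer units.
function alignScan(points, offset, rotationQuat, metersToModel) {
  const m = new THREE.Matrix4().compose(
    new THREE.Vector3(offset.x, offset.y, offset.z),
    rotationQuat, // THREE.Quaternion derived from the plane normals
    new THREE.Vector3(metersToModel, metersToModel, metersToModel)
  );
  points.applyMatrix4(m); // transform the THREE.Points into model space
  return points;
}
```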
So as you can see, I can access things the same way I did before, but with a different interface. I can access all my files and open my file in the Forge Viewer. And now you can see, on the right side of the screen, I have different scans available-- the one we showed in the video, and another scan. You see the date, and you also see the number of points.
So as you can see, it's pretty smooth. There are not a lot of points in the scan, so I can easily change from one scan to another. It's also possible to combine multiple scans in the same viewer, but for this demo, I was only using one Point Cloud at a time.
So now we can merge this Point Cloud with the Revit model. What we did in the viewer was just an overlay-- the Revit file remains the same, and it's still separate from the Point Cloud.
I'm going to show you that with Design Automation for Revit. Our first approach was to create a plugin to import the Point Cloud data-- the XYZ coordinates and RGB values-- into Revit. It was working, using the basic Point Cloud engine, but nothing is persistent: every time we close the Revit instance, we lose everything. So we can't save the Point Cloud into the Revit file.
It's probably due to the fact that, since Revit 2019, we can't import an arbitrary Point Cloud using the Point Cloud engine; it's limited to RCS and RCP files, which are the ReCap Pro formats. So we changed our approach and decided to convert our Point Cloud into the ReCap Pro file format.
So to do that, we used the ReCap SDK and created a microservice that runs on the web to convert our file into RCS. Now it's pretty straightforward: in Revit, we can create a simple plugin and use the Point Cloud engine, with a simple Point Cloud instance, to import our RCS file into Revit. And we need to do the alignment, as we did in the Forge Viewer.
Here, we use the ElementTransformUtils functions and do it exactly the same way, using the offset data, the plane normals, and the scale to match everything together. And we can easily align the Point Cloud with the Revit file. We also decided to automate this part using Design Automation. We converted the plugin into a Design Automation [INAUDIBLE], and we can run it from the web without touching Revit at all.
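Kicking off such a workitem from the web looks roughly like this. The endpoint is the real Design Automation v3 API, but the activity and argument names are placeholders, not Piro CIE's actual setup:

```javascript
// Sketch: start a Design Automation for Revit workitem that merges
// an RCS point cloud into a Revit file.
async function runMergeWorkitem(token) {
  const res = await fetch('https://developer.api.autodesk.com/da/us-east/v3/workitems', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      activityId: 'MyNickname.MergePointCloud+prod', // hypothetical activity
      arguments: {
        rvtFile: { url: 'https://.../model.rvt', verb: 'get' },   // elided URLs
        rcsFile: { url: 'https://.../scan.rcs', verb: 'get' },
        result:  { url: 'https://.../result.zip', verb: 'put' },
      },
    }),
  });
  return (await res.json()).id; // poll GET /workitems/{id} for completion
}
```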
So this is what I'm going to show you in this video. After we've loaded our Point Cloud into the viewer, we can click on the button. And you see all the different steps of this process: we need to upload it to our database, convert it to RCS, and then start the automation. So you see the different steps of the Design Automation process.
And once it's done, we get a link with the result, and we can download the zip file. The zip file contains two files: the Revit file and the RCS file, which is the Point Cloud.
Now, we can use this file in Revit. This is the last part of our process. When we open the Revit file, we only see the 3D model. But when we go to Manage Links, in the Point Cloud tab, we see that we have a link to the file. We just need to resolve the link, because the absolute and relative paths don't work.
But once that's done, we can save it, and everything works. And once the link is resolved, you see the points appear in Revit.
So the model doesn't change. It's just a link to the Point Cloud-- we can hide it or remove it if we want. We can also modify it, as long as we don't change the coordinates of the points. And as you can see, we can start to work in Revit with the Point Cloud on top of the 3D model.
Now, how can we improve this workflow? I just want to say that this workflow is not something closed; it's something that we created for demonstration. We can use all of its parts in other workflows and adapt it to everyone's needs. We can include different parts in the app, and we can also prepare the Point Cloud with dedicated tools.
In this case, we used an automated workflow, so everything is raw. The Point Cloud can have some noise-- it can have some problem points that we don't really want. Especially, if you see through the windows, there were a lot of stray points.
So we can definitely prepare this Point Cloud before using it. We could also precompute the Point Cloud. In this case, we wanted to stay flexible, so we do the transformation of the points every time. But if the workflow were fixed, we could precompute the Point Cloud with the model coordinates to fit a specific file. That would let us remove some steps from the workflow.
We can also combine multiple Point Clouds. In this case, we demonstrated with only one Point Cloud overlay, but we can definitely use different Point Clouds for many applications-- for example, 3D annotation tools, where we don't want a large Point Cloud, just some parts of a Point Cloud to show some details, as we sometimes do with pictures.
But we can also create very large Point Clouds by merging or combining many, many scans together. And as I said when talking about the web integration, we can definitely use new features, such as 3D Tiles streaming, as Michael described in this presentation, to optimize the way we handle very large Point Clouds.
BRIAN NICKEL: Thank you, Alexandre and Michael. This was a fantastic session with everyone. And as a final wrap-up, we just want to summarize what our experience was today. I showed some examples of reality in the construction workflow, of some of the things that we're facing as pain points in the field.
And one thing that's been nice, having come from the construction industry, is that the Autodesk Forge team has been a great resource for me and our team at Allied BIM. Autodesk Forge has a great community. I've participated in several of the accelerators, and the community involvement is important.
I think Alexandre's piece of technology really illustrates how we can leverage other mediums, aside from conventional laser-scanning technologies-- such as iPhone cameras-- to stitch together and piece together Point Clouds. Having the support and the network that we have with Autodesk Forge is second to none. It gives us all the tools that we need to continue to advance.
One thing that's critically important that Michael and Alex and I would like to convey is that we would like you all to join the community and really be a part of pushing the needle forward, so that we can do exactly what was in Alex's last slide-- lift off into outer space with these ideas and this technology.
So thank you, everybody, for joining the session, and please don't hesitate to reach out to us on social media networks or even through the community with Autodesk Forge. Thank you.