Description
Key Learnings
- Understand applications of LiDAR technology
- Learn how LiDAR data is used in ReCap and InfraWorks
- Learn about automated feature extraction from point clouds
- Learn how to share intelligence across multiple uses
Speakers
- Bradley Adams: Brad is responsible for Product Management for Mobile Mapping in the Geospatial unit of Leica Geosystems. He has been actively involved with Departments of Transportation for 30 years, implementing technology innovations from InRoads, GEOPAK, and MicroStation engineering design software to terrestrial laser scanning to aerial mapping and LiDAR to mobile mapping. Currently, Brad is integrating mobile LiDAR mapping with various foundation products to provide exceptional data across the entire enterprise.
- Ramesh Sridharan: Ramesh Sridharan has versatile experience in civil infrastructure, including civil engineering, reality capture point clouds, GIS, image processing, and machine learning-based software development. With over 20 years of experience, he has successfully driven programs in research and development, technical sales, partner marketing, product management, and customer analysis. He has experience working with customers to understand and set industry workflows that drive the technology forward. He is an expert in pushing technology to its limits and converting research findings into products that users can apply to real-life problems. He is a pioneer in reality capture point clouds, handling and extracting information from large numbers of 3D datasets. Ramesh is one of the product managers for infrastructure products at Autodesk, leading Reality solutions and the ESRI partnership, to name a few. Ramesh is a postgraduate of the Indian Institute of Technology with a research focus in image processing and artificial intelligence.
BRADLEY ADAMS: Thank you all for making it to the end of the day. I hope that Ramesh and I have some good information for you today. My name is Brad. This is Ramesh. We are product managers for two different organizations.
I'm a product manager for Leica Geosystems, responsible for the mobile mapping unit. And Ramesh is a product manager for InfraWorks, doing the extraction of intelligence from LiDAR point clouds inside of InfraWorks. And so today, we're going to talk to you a little bit about collecting large amounts of data, and then how we deliver that data and draw intelligence from it.
So to start with, I want to talk a little bit about the Pegasus Two mobile mapping system that we have at Leica Geosystems. It is a very functional machine that takes a static terrestrial laser scanner, the P40 that you see here on the back, puts it on a mobile platform, and allows you to drive at up to 70 miles an hour collecting extremely accurate, very precise data. So you can imagine the amount of information that you can capture in a day.
In addition to the laser scanner, it includes an inertial measurement unit. It includes GPS. It includes the individual cameras. And it includes an on-board PC that collects all of the data that's being generated from this machine.
So when we talk about the ability to collect huge amounts of data, this system really generates that. We introduced this system about four years ago. And you can see the adoption that has been generated across the globe. We have over 90 units now in production with clients using the Pegasus tool to collect data.
What is driving that adoption? Well, many of the things that drive technology are driving the adoption of mobile mapping inside of the environment that we're using today. The first and foremost, of course, is safety. When I started in this business back in the 2000s, we were doing static scanning on the side of the highway using a tripod. And the active roadways were literally feet from us. And of course, in the early 2000s, the safety on the side of a road wasn't nearly the concern that it is today, because today we have much more distracted drivers.
In the early days, we only had to worry about drunk drivers at 2 o'clock in the morning when they were coming home from the bar. Today, you have to worry about a drunk driver at 9:00 in the morning, because they're looking at their phone. They're texting. They're looking at Google Maps. So you have to take into account the ability to collect high quality survey data effectively without putting individual resources in harm's way.
Of course, the other component to this, when you talk about a dynamic mapping system, is speed. We can collect 120, 200, 250, 300 miles of mobile mapping data in a single day. And we're at a one-to-one processing ratio, meaning an hour of collection takes about an hour to process. We can collect at up to 70 miles an hour, up to 300 or 400 miles in a day, and have that data processed and ready for production the next day, which is an incredible turnaround. So it allows us to adopt this technology much faster than has ever been done before.
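To put those numbers in perspective, here is a back-of-the-envelope sketch of what a day of collection produces. The scan rate is the 1.1 million points per second quoted later in this talk for the vehicle system; the bytes per stored point and the hours of effective scanning are my assumptions, not Leica's specifications.

```python
# Back-of-the-envelope volume for one day of mobile mapping collection.
# Assumptions: 1.1M points/sec (the Z+F rate quoted later in this talk),
# ~28 bytes per stored point (LAS point data record format 1), and
# 6 hours of effective scan time in a field day.
POINTS_PER_SEC = 1_100_000
BYTES_PER_POINT = 28
HOURS_SCANNING = 6

points = POINTS_PER_SEC * 3600 * HOURS_SCANNING
print(f"{points / 1e9:.1f} billion points")              # ~23.8 billion
print(f"{points * BYTES_PER_POINT / 1e12:.2f} TB raw")   # ~0.67 TB, before imagery
```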
We also get full coverage. We do only have a single LiDAR sensor. So any of us who are dealing with LiDAR know that if you have a single system, you will have shadows. However, the Pegasus System includes full comprehensive, 360-degree, high-quality orthophotography, which means that we now have the ability to do close-range photogrammetry out of those images and extract features in the areas where we have shadows in the LiDAR point cloud. This gives you the opportunity to drive a roadway one time and collect all of the data that you need without having to worry about individual shadows.
The other component is flexibility. Because this system is completely self-contained, it only has a single cable to a battery. You can deploy it anywhere that can hold 70 pounds, which includes a standard vehicle, an ATV, a small boat, a hi-rail vehicle. Anywhere that you can put 70 pounds of weight on 15-inch parallel bars, you can deploy a Pegasus system and collect high-quality data very, very quickly. You also have the versatility of adding multiple sensors into this system as well.
Because we have an on-board computer, it's not just a LiDAR sensor, and it's not just an image station. It also includes external ports where we can attach thermal imagery. We can attach air quality sensors. We can attach GPR, ground-penetrating radar. And because it all ties into the IMU and the GPS that is on board, we can then use that data source to colorize the point cloud once we generate the intelligence.
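As a rough sketch of that time-sync idea, and assuming hypothetical array names and a shared GPS clock, attaching an auxiliary sensor stream to the points can be as simple as interpolating by timestamp:

```python
import numpy as np

def tag_points_with_sensor(point_times, sensor_times, sensor_values):
    """Attach an auxiliary sensor stream (thermal, air quality, GPR) to LiDAR
    points that share the same IMU/GPS time base. point_times: (N,) point
    timestamps; sensor_times/sensor_values: (M,) sensor samples on that clock.
    Returns one interpolated sensor value per point, usable as a color ramp."""
    return np.interp(point_times, sensor_times, sensor_values)
```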
And once we start generating that intelligence, all of it gets shipped to InfraWorks, and Ramesh can use it to build better, more intelligent extracted features. And then finally, portability. Because it ships in two individual cases, I have, on many occasions, shipped the Pegasus system out ahead of time. I'm based in Dallas, and for about $400, I can ship this system anywhere in the United States in three days with nothing more than a pallet.
And then I fly into the local area, get the rental car, pick up the pallet at the FedEx facility, put the system on top of the rental car, collect the data, put the system back in the box, ship it back, and I have collected everything I need in a single day. Because of this portability, now I don't have to ask my resources to stay overnight. I don't have to have three days for a single day of collection. Very easy. Very quick. Very low, low risk.
So what are some of the other components, other than the hardware capability, that allow this massive adoption of dynamic mapping in a LiDAR environment? Well, the first part is, it used to take photogrammetrists, and registered surveyors, and PhDs in geospatial technology to run mobile mapping systems. And I have told anyone who asks: mobile mapping is the most difficult technology we have to implement in a terrestrial environment right now. And that is because you're taking the most difficult things from the aerial world and from the terrestrial world and adding them all together.
You have GPS. You have IMU. You have DMIs. You have laser scanners. You have time syncing. You have imagery. But because of the advancements in the software, we've now built the ability to use all of those inside of a traditional workflow.
So part of it is, how hard is it to mission plan? We thought when we started doing mobile mapping that you simply put a system on top of a car and started driving around. You never worried about planning anything; you just drove. And then at the end of the day, you put it all together and created a good deliverable. Well, that's a fallacy, because you don't want to drive around collecting more data than you need.
So it's extremely important to plan your missions the same way you plan a terrestrial mission, the same way you'd plan a survey project, the same way you plan an aerial mission. And we have delivered now, mission planning software that makes that significantly easier than anything that has been done in the past.
And of course, we've also grown in our ability to understand. Projections and GPS have become easier and better to work with. So the opportunity to bring everything into a single projection is more available to us, as laymen, than it's ever been before.
We have the opportunity to do really smart management of the data that we have. We're really opening up the opportunity for anyone to use the software the way they need to, to deliver the products they're trying to deliver. And then finally, this has historically been very challenging data to share across an enterprise, because you've got gigabytes and gigabytes of point clouds.
In fact, the data that Ramesh is going to show you is 80 gigs of data that has been generated on a single project. That's really, really hard to share amongst an enterprise, especially those that are remote. But now with the opportunity to share this over a web-enabled environment, now you have the ability to see, share, understand, and download your LiDAR data from a mobile mapping system that has not been available to you before. So what we see here is an example of a project that was collected in downtown San Francisco.
You can see that you have the images that were collected during the mission. You have the opportunity to see the extent of the point cloud in the upper panel. And that gives you the opportunity to navigate throughout this project and identify areas that are of importance to you. As you progress, you can identify the areas in the LiDAR point cloud that you need to look at.
You can check that information. You can make some 3D measurements inside the imagery. When I'm making these measurements inside the imagery, it's going back to the point cloud, and it's generating the exact measurements not just from the picture, but from the underlying point cloud. When you get to an area that you need to examine more closely, you can then download a decimated point cloud of that area to your local machine. And you can use that point cloud locally for whatever else you need.
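Conceptually, that image-based measurement works something like the following sketch. This illustrates the general technique, not the Leica viewer's internals; the function names and the 5-centimeter ray tolerance are assumptions.

```python
import numpy as np

def nearest_point_on_ray(cloud, origin, direction, max_perp=0.05):
    """Resolve a pixel pick: find the LiDAR point nearest the camera ray cast
    through that pixel. cloud: (N,3) points; origin: camera center;
    direction: ray direction for the picked pixel; max_perp: ray tolerance, m."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    v = cloud - origin                                  # camera-to-point vectors
    t = v @ d                                           # distance along the ray
    perp = np.linalg.norm(v - np.outer(t, d), axis=1)   # distance off the ray
    hits = np.where((t > 0) & (perp < max_perp))[0]
    return cloud[hits[np.argmin(t[hits])]] if hits.size else None

def measure(pick_a, pick_b):
    """Distance between two picks, computed from the 3D points, not pixels."""
    return np.linalg.norm(np.asarray(pick_a) - np.asarray(pick_b))
```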
So in addition to having a vehicle-mounted system, we also have a human-mounted system that uses much of the same type of technology as the Pegasus Two, but it's meant to be worn on your back and used inside of a building to collect data that is not accessible to a vehicle. Most of us would rather sit on an ATV and drive a Pegasus Two around, because it's much easier. But that's not always available to us.
So with the Pegasus Backpack, we have the opportunity to leverage the same technology in a much more user-friendly form. And you can see it's got essentially the same types of technology that the Pegasus Two has. It's got Velodyne sensors, so the Velodyne laser scans are not going to be as crisp and as clean as the terrestrial scanning.
But you still get tremendous amounts of information. We have the IMU. We have the GPS. We have all of the cameras. And we get essentially the same types of deliverables from the backpack as we get from the vehicle-mounted system. The value of this is the ability to augment your terrestrial scanning to document large areas. And we're going to show just how much you can capture in a comparison between a static scan and a backpack scan.
So for instance, this is an example of a project in Germany, where we used the backpack to collect the route that you see in green here. So this is the trajectory of the walking area, shown in Google Earth. And it shows all of the locations of the information collected with the backpack.
We collected, essentially, two kilometers' worth of data. It took about 55 minutes to process. And we get significant coverage that really generates a lot of valuable information, both above ground and below ground. And you can see that these are the assumed P40 locations.
And we have the distribution of what it would have taken to do the same thing with a static P40 scanner. What we found is that it's almost 8.5 times faster to collect the data with the backpack than it is with the terrestrial scanner. And these are images that show an example of what you can expect. This is a single collection. And you have the area above the ground and the area inside of the subway tunnel, all shown in a relative view.
So you have a full three-dimensional point cloud that is generated from a single collect. And these are the images generated from that backpack collection. In the same way that we saw with the web viewer, we can then also collect information off of the imagery, but we're still going back to the LiDAR point cloud. So when we do measurements, we're getting true measurements from the point cloud even though we're in the image itself.
Now I guess I want to say, this is your presentation. If there are any questions, please don't hesitate to stop me and ask. Yes?
AUDIENCE: In the US, do you need to be a professional land surveyor to extract data and use that information?
BRADLEY ADAMS: You do not have to be a land surveyor if you are not stamping and sealing. If you are providing signed and sealed documents on a deliverable, then you have to have that as a level of control.
AUDIENCE: OK. So if I'm providing that as a basis for engineering?
BRADLEY ADAMS: That is correct.
AUDIENCE: [INAUDIBLE].
BRADLEY ADAMS: That is correct. Well, it has to be verified. It has to be collected and extracted under the control of an RPLS. But you can collect data. I mean, aerial mappers collect data all the time. It's just whether you're trying to use that and have it signed and sealed by an RPLS.
AUDIENCE: [INAUDIBLE].
BRADLEY ADAMS: Yes?
AUDIENCE: On the backpack system, if I were to get on a bicycle, and ride it [INAUDIBLE]?
BRADLEY ADAMS: That's correct. Yes. In fact, we have an image somewhere of one of our guys on the back of a little moped. And he's holding on for dear life while they're collecting.
[LAUGHTER]
And it's Aldo. For those who know Aldo, Aldo's a pretty big guy. So he's on this little moped holding on and they're collecting. So-- yes?
AUDIENCE: On what speeds can you go to collect data [INAUDIBLE]?
BRADLEY ADAMS: Well, so everything is built on-- your imagery is built on distance, and, of course, your point cloud, your laser scans, are collecting information continuously. It is built for a standard walking pace. Right?
And so if you're doing things other than that, you're going to reduce the amount of data that you're collecting. But as long as it's a fairly consistent, not-too-rushed collection, you will get more data than you ever need.
AUDIENCE: [INAUDIBLE]?
BRADLEY ADAMS: Well, that's the vehicle-mounted system. And I have collected at up to 75 miles an hour. But you have to remember, you're collecting a million points a second. So when you compare that to what you've done in the past, even at 70 miles an hour, you're getting substantially more data than you've ever had to deal with before. OK?
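To put that in concrete terms, here is a rough bit of arithmetic, illustrative only and not a Leica specification, on what a million points a second means at highway speed:

```python
# Rough point density while driving at highway speed with a 1M pts/sec scanner.
MPH_TO_MPS = 0.44704
speed = 70 * MPH_TO_MPS                # ~31.3 m/s
points_per_meter = 1_000_000 / speed   # points landing per meter of travel
print(f"~{points_per_meter:,.0f} points per meter driven")   # ~31,956
```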
AUDIENCE: Are you capturing survey control information as well as you would in a terrestrial scan?
BRADLEY ADAMS: So you use the same type of photo-identifiable targeting that you use in terrestrial scanning. It's really more aligned with aerial mapping: you identify targets, then you survey in those target locations, and then you fit the survey control into that environment once you get everything collected. So this is an example of an Orlando-- yes, sir?
AUDIENCE: The backpack will acquire how many points per second?
BRADLEY ADAMS: No. That was the P2. The-- yeah?
AUDIENCE: [INAUDIBLE]?
BRADLEY ADAMS: No, the backpack-- Josh, how many points a second does the Velodyne do?
JOSH: 300,000.
BRADLEY ADAMS: 300,000 each?
JOSH: [INAUDIBLE].
BRADLEY ADAMS: So you've got the backpack? You've got the backpack with two [INAUDIBLE] on there?
JOSH: Right.
BRADLEY ADAMS: You've got one on each side. So you're getting 600,000.
JOSH: And then the cameras provide the photo grid.
BRADLEY ADAMS: That's correct.
AUDIENCE: And then [INAUDIBLE].
BRADLEY ADAMS: That's correct. Then you blend those two together.
AUDIENCE: [INAUDIBLE]?
BRADLEY ADAMS: Well, most of the P2-- most of the vehicle-mounted systems have the Z+F scanner on there. And it's 1.1 million points a second.
AUDIENCE: Rather than the P40 terrestrial, what does that [INAUDIBLE]?
BRADLEY ADAMS: Josh? P40 terrestrial?
JOSH: What's the question? Sorry.
BRADLEY ADAMS: Points per second?
JOSH: 1 million at 100 hertz rotation [INAUDIBLE].
BRADLEY ADAMS: No. The P40.
JOSH: The P40, 1 million.
BRADLEY ADAMS: OK.
JOSH: At 100 hertz.
BRADLEY ADAMS: So same thing.
AUDIENCE: Cool. Thanks.
BRADLEY ADAMS: So this is an example of the Orlando Convention Center. This was done by our friends at Langan. And this shows the comparison of data collected with the backpack and data collected with the P40 under the same time constraints.
So with the backpack, they did 40 minutes and covered about 35,000 square meters with a 2.7-centimeter absolute accuracy. With the P40, they did a 48-minute collection and covered 1,600 square meters. And of course, the P40 has a much higher relative accuracy, because the sensor is much more precise.
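A quick bit of arithmetic on those numbers shows the scale of the trade-off; this is a coverage-rate comparison only, since the two collections targeted different accuracies.

```python
# Coverage rate in square meters per minute, from the Langan numbers above.
backpack = 35_000 / 40   # 875 m^2/min at 2.7 cm absolute accuracy
p40      = 1_600 / 48    # ~33 m^2/min at higher relative accuracy
print(f"backpack covers ~{backpack / p40:.0f}x more area per minute")   # ~26x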
But when you overlay the data that you get from the P40 onto the backpack data, it really shows the use case of the backpack: you capture large amounts of area, and then go back with your static scanner and capture the areas that you need for your design-level work. So now you really create a hybrid environment that delivers the best of both worlds. And this shows some comparisons of the two laying one on top of the other.
So you can see the backpack scans and the P40 scans one on top of the other. And finally, I want to talk about a little project that's close to here. We did something that was pretty unique. We took a Pegasus Two, and we took the backpack, and we combined them on the same active construction site. And it's right here in Las Vegas. It's the Project Neon site, the rebuilding of the interchange.
And so what we did is we mounted the Pegasus Two on top of a vehicle. And we drove everywhere that was accessible on the construction site to pick up as much data as we could. And then in the areas where we could not drive a vehicle, we deployed the backpack. And we collected the data with the backpack where we could.
So of course, being on an active construction site, you were able to get into areas in very close proximity, and you were able to get into areas where active machinery was working. But as we said, you have to make sure that you're safe. Right? So the key about the backpack is that you don't have to look at the screen 100% of the time. You can start your active collection, then put your tablet away, and then focus only on the task at hand, which is safely maneuvering around the job site to collect the data that you need.
The other thing that makes this a very intriguing technology to deploy under many, many types of conditions is the fact that the software has now come to the point that it can be used by laymen, as we talked about before. So when you get ready to process either the backpack data or the vehicle-mounted Pegasus Two data, it is primarily a batch process. You identify the trajectories that you want to process.
You identify all of the products that you need to produce out of this job, and the deliverables are generated from those products. And so this shows the red is everywhere that we drove with the Pegasus, and the blue is everywhere we walked with the backpack. These are some examples.
And then this kind of shows you the processing environment and how easy it is now to process both the backpack and the Pegasus system. So we're going to let this run for just a second. How are we doing on time?
And so as we discussed, this starts with Inertial Explorer. It generates all of the alignment data and all the trajectory data that is needed to produce the products that we have. Once we have the alignment and trajectory data, we proceed to lay the point cloud and the imagery on top of that trajectory. So here we see the trajectory. We are able to identify exactly how tight the trajectory is.
And this is all being done in real time. So you can see it doesn't take a lot of time or effort at this point. It is very much a batch process, with lots of good documentation to allow you to understand exactly what it takes. So we have really moved this from needing a photogrammetrist and a very technology-heavy environment to where we can train people to collect, process, and create huge amounts of scan and imagery data very, very easily.
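For those curious what that batch georeferencing step amounts to, here is a minimal sketch, assuming illustrative array names and ignoring the boresight and lever-arm calibration a real pipeline applies:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def georeference(pts_sensor, pt_times, traj_times, traj_pos, traj_quat):
    """Transform scanner-frame points into the mapping frame using the pose
    interpolated at each point's timestamp. pts_sensor: (N,3); pt_times: (N,);
    traj_times: (M,) trajectory epochs; traj_pos: (M,3) platform positions;
    traj_quat: (M,4) platform attitudes as quaternions."""
    att = Slerp(traj_times, Rotation.from_quat(traj_quat))(pt_times)
    pos = np.column_stack([np.interp(pt_times, traj_times, traj_pos[:, i])
                           for i in range(3)])   # position at each point epoch
    return att.apply(pts_sensor) + pos           # sensor frame -> mapping frame
```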
And you can see that we're very close to completing all of the data. And with that, we start to generate our point clouds. And this gives us a little bit of a fly-through of what the combined data looks like inside of the Pegasus environment. So we're going to scan around here.
This is all in either a desktop viewer or in our MapFactory software. So this is how you actually interact with mobile mapping data inside of the Pegasus suite of software. Very intuitive. You identify where on the trajectory you want to investigate. You click on that. It opens up the imagery. You navigate in the imagery to get to where you want to see in the point cloud. And then you do your investigation inside the point cloud.
Very, very interactive. Very user-friendly. You can see that this is a colorized point cloud. So we've taken all of the imagery, and we've used the RGB information from the pictures to colorize the point cloud. Now we're looking at the intensity of return, so we can see how strongly those points are coming back to the sensor.
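The colorization itself is conceptually straightforward. Here is a rough sketch assuming a simple pinhole camera model; the real system's calibrated panoramic cameras are more involved, and all names here are illustrative.

```python
import numpy as np

def colorize(points_world, R_cam, t_cam, K, image):
    """Sample an RGB color for each point that lands inside one image.
    R_cam, t_cam: world-to-camera rotation (3x3) and translation (3,);
    K: 3x3 camera intrinsics; image: (H, W, 3). Rows stay -1 for points
    behind the camera or outside the frame."""
    cam = points_world @ R_cam.T + t_cam          # world -> camera frame
    rgb = np.full((len(cam), 3), -1, dtype=int)
    front = cam[:, 2] > 0                         # keep points ahead of the lens
    uvw = cam[front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # perspective divide -> pixels
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    rgb[np.flatnonzero(front)[ok]] = image[uv[ok, 1], uv[ok, 0]]
    return rgb
```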
I will tell the story that-- as you can see, the imagery on the backpack collection is very washed out. And that was my fault. We were in Vegas in August. I was collecting. And I was told by our technical manager, make sure that you set the lens so that you're not overexposing the imagery. It will not look good on the point cloud. And of course, I didn't listen to him.
So we got bad imagery. And the colorization on the point cloud is bad. And yeah, he's yet to let me live that down.
AUDIENCE: Is there any way for you to mark up some of this stuff?
BRADLEY ADAMS: So you have the ability to extract features. You have the ability to pull individual feature elements out. But with that, I'm going to turn it over to Ramesh and let him show you how he's going to do it inside of InfraWorks. So any questions about the acquisition, about the point clouds, the laser scanning?
This is actually pretty interesting. This is the comparison of the red and the blue. So this is showing the backpack and the vehicle-mounted data laying one on top of the other. And it's really incredible. We didn't have any ground control. We didn't have any post-processing that was anything other than just running through the batch process.
And the point clouds really laid together extraordinarily well using the two sensors at two different times of the day. Yes, sir?
AUDIENCE: You mentioned [INAUDIBLE]?
BRADLEY ADAMS: Well, so that only works in conjunction with the Pegasus Mapping System.
AUDIENCE: That would tell you how many batteries you need for [INAUDIBLE]?
BRADLEY ADAMS: Not necessarily batteries. It will show you the length of the mission. It will show you some of the best times that you can get satellite coverage. It will show the sun angles. It'll show you where your base stations are, or need to be, where your CORS stations are. So it's got those types of information sets.
Any other questions before I turn it over? I've got to make sure Ramesh is awake. Get him to tell you his stories on all of his partying he's been doing this week. Thank you, guys.
[APPLAUSE]
RAMESH SRIDHARAN: Thank you. Better? Cool. Perfect.
All right, guys. So now we know how to collect the good data, high-resolution data, really big. Like Brad was saying, one of the models they're working on is about a 78-gigabyte model. The question is, OK, now we have the data, what am I going to do with it? So that's where the information extraction, the things you need to do, comes into the picture. That's where I come into the picture, too.
So I work in product management for InfraWorks, mainly focusing on reality capture feature extraction for infrastructure projects. That's what I focus on. Scan to BIM is something everyone has been talking about for quite some time. I feel it's gotten so washed out that now people don't even know what BIM stands for.
But it should be, in a way, that I can convert every single thing you see, not every single point, but every single thing you see, into information content. Something I can [INAUDIBLE] with the attributes. I can export it. I can model it. I can design with it.
How do you get from this end to that end as quickly as possible? That's exactly what these tools in InfraWorks are designed for. So first, I want to say that we treat point clouds as point clouds. It doesn't matter where you get the point cloud from. It could be mobile, terrestrial, backpack, or drones, anything. You're bringing us point cloud data. What I'm going to show you today, you can actually use the exact same [INAUDIBLE] for each and every type of point cloud.
And this project is a perfect example, because it's a combination of backpack and mobile data, and you can see the results. I'll show you in a second. It works pretty well. So that's what I want to impress on you here: as a user, you bring in the point cloud data, you use the tools, take what you want, and everyone's happy. That's the way it should be.
So we divide it into the three main things I can extract from point cloud data. The first thing is simple: clean terrain information, a surface. I need to get it, and it doesn't matter what type of point cloud it is. I always say that with airborne LiDAR, that was the main thing, extracting the surface. But with high-resolution data, when it comes to mobile and static, for some reason people skip that part. That's because it's too much data to handle. Right?
So now we came up with the capability where you can actually extract better terrain from the point cloud data. The software removes the [INAUDIBLE], classifies the point cloud, and extracts the [INAUDIBLE]. It creates the terrain as a raster, or you can triangulate and use it. All the good stuff.
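InfraWorks' actual classifier is proprietary, so the following is an illustration of the general idea only: a classic first cut at ground extraction bins points into a horizontal grid and keeps what sits near the bottom of each cell. The cell size and tolerance are assumed values.

```python
import numpy as np

def classify_ground(points, cell=0.5, tol=0.15):
    """points: (N,3) xyz in meters. Returns a boolean mask of likely ground.
    cell: horizontal grid size; tol: allowed height above the cell minimum."""
    ij = np.floor(points[:, :2] / cell).astype(np.int64)   # grid cell per point
    _, inv = np.unique(ij, axis=0, return_inverse=True)    # cell index per point
    z_min = np.full(inv.max() + 1, np.inf)
    np.minimum.at(z_min, inv, points[:, 2])                # lowest z in each cell
    return points[:, 2] - z_min[inv] < tol                 # near the bottom = ground
```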
And it takes very few parameters on your side to make it work; I'll show that to you in a second. So once we have that, then we focus on the point features. This is something we [INAUDIBLE] for quite some time. How about converting something like this to something like this?
Now I'm classifying the data. I'm grouping the points. Why can't we recognize them? So we took a first step on that part. So now we can recognize street lights, signs, and trees, the three main categories we chose to start with. And the software does a good job. I'm not saying it recognizes 100%. Right? We're still working on that.
But it definitely does the heavy lifting on point feature extraction. Anybody who has done a project with this knows, no matter what type of point cloud data, extracting the asset information from it is a tedious process. You have to manually pan through the data. You have to click it and attribute it.
And at the end of it, you cannot confidently say that you got all of them, because you know there's a possibility you missed one. But this tool, it has some false positives, but it won't miss anything. It will automatically take you from one end of the project to the other end of the project.
You're guaranteed to get all the point features extracted at the end. You can confidently say that you got all of them. And it's pretty fast. I'll show you.
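As an illustration of the kind of heavy lifting involved, and not Autodesk's actual algorithm, a simple pole detector might cluster the non-ground points and keep clusters that are tall and thin; every name and threshold below is an assumption.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_pole_candidates(points, eps=0.4, min_pts=30):
    """Cluster non-ground points, then keep clusters that are tall and thin,
    i.e. candidate street-light or sign poles. points: (N,3) non-ground xyz."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
    poles = []
    for lab in set(labels) - {-1}:                       # -1 marks noise
        c = points[labels == lab]
        height = c[:, 2].max() - c[:, 2].min()
        radius = np.linalg.norm(c[:, :2] - c[:, :2].mean(0), axis=1).max()
        if height > 2.5 and radius < 0.5:                # taller than 2.5 m, thin
            poles.append(c)
    return poles
```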
And the third and most important thing is breakline extraction. When it comes to design, when you're extracting from high-quality data like Pegasus data, this is a must. And again, it's an automated solution we added. You tell the software what you want to extract, and the software goes ahead and extracts it for you. Then we added the cross-section capabilities.
So you can edit them. You can create the lines. You can interactively go back and forth there, very easily. Line extraction should be easy.
And then we added the transverse lines to augment the surface. And the last thing is, you can export that in your desired coordinate system. You can take it wherever you want to go. Go ahead.
AUDIENCE: Does that point cloud information have to be structured information?
RAMESH SRIDHARAN: Nope. Any point cloud.
AUDIENCE: I have had the most difficult time using [INAUDIBLE] extraction.
RAMESH SRIDHARAN: If it's drone data, it won't have any intensity values. If you try to extract paint stripes from drone data, you won't get them.
AUDIENCE: It's not drone.
RAMESH SRIDHARAN: OK.
AUDIENCE: [INAUDIBLE].
RAMESH SRIDHARAN: We should talk.
AUDIENCE: Yes.
RAMESH SRIDHARAN: Yeah.
[LAUGHTER]
Sure. But it should work. I mean, I've tried it on terrestrial data. I ran it on different data sets. As long as you see-- I always say this: when you look at the data-- this is colored by classification. If it's not that, I'll show you. When you see that the software classified something, then you can get it. What you see is what you get, that kind of philosophy.
AUDIENCE: [INAUDIBLE].
RAMESH SRIDHARAN: Yep. Yeah. You have to run that first, before linear feature extraction. Yes. So what I'd like to add is that so far, what I showed you is in the product. But the vision, what I'm planning, is that everything should go toward 3D modeling. Right?
And every time I say modeling, people always think of a SimCity model, where it's not clear, it's not real. But what I'm saying is that the 3D polylines you extract [INAUDIBLE] design. The same thing should be used to create a 3D model of the roads, or the sidewalks, or whatever you want. So instead of touching the lines, you're touching the actual entities to do your design work. I'm sure it's going to happen in the future, at one time or another.
I'm trying to put in the best foundation to work toward that. So let me open the project. Right there. I have the InfraWorks model with the exact same data. You guys can see it, right? Perfect. So this is the same data Brad was talking about.
So Brad was nice enough to make sure the image is different here so I can--
[LAUGHTER]
--I can find where the backpack data is. But the cool thing is the intensity is pretty good. It worked out very well. So I took this data. We were planning for this presentation last week. We talked last Thursday. So Brad was asking me, you've got the backpack data processed, right? I of course said, yeah, sure.
BRADLEY ADAMS: And then you had to go figure it out.
RAMESH SRIDHARAN: Exactly. I don't know where I kept the data. But what I'm trying to say here is, I actually got the data Monday night before going to bed. I left the processing on. And Tuesday morning, when I was in the drone area, I extracted linear features and stuff. That's all it takes. For 80 gigabytes of data, 80 gigabytes' worth of model, I actually did it with overnight processing.
I was just sleeping, obviously. And in the morning, I could extract the line work and stuff. I can export it and take it to [INAUDIBLE] if I want to. I didn't complete all the line work, but I started extracting it. When was the last time you heard of browsing 80 gigabytes of data in a day or so? It takes that much time just to copy it, for crying out loud.
So that's the kind of capability we're trying to push into the product. And it works pretty well. Any questions? Cool. All right.
So let me-- you guys already saw the data. I can at least show you a little bit. So this is how it looks. And it's colored by RGB. Obviously, you guys know that. And if I go for the point cloud theming-- sorry, my mouse is-- my screen is really small. There we go.
And I can actually look at the classification. And it's everything. So from what I showed you before, you can see that all the brown ones are the ground, obviously. And you can see the multicolored ones are all the vertical features. So it's [INAUDIBLE] the street lights and all that stuff. I didn't go ahead and extract the point features from this one. I wanted to, actually, but I didn't get a chance. But you can do that. It's pretty nice.
And the linear features as well: you can actually see paint stripes right here and the curbs on the side. I can switch it off. There you go. And there's a curb right there. So it goes back to what you were asking. What you see is what you get. If you can see it, you should be able to get it.
And so I did the process. But I didn't show you guys the terrain generation. So this is the terrain generation tool. Really, really small. All right. You guys have to take my word; my screen resolution is so small. So it's a one-click process. You bring in all the RCS files; InfraWorks takes RCS point cloud files. It can also bring in RCP, because when there's a bunch of them, you can just bring them in as one project file. You'll see all of them lined up here.
It's a complete batch process. You don't have to babysit it. And the parameters are set to what it says is optimum. So the software decides the parameters for the data. That's what I said in the beginning. If you bring in airborne LiDAR or something and do the same processing, the software actually calculates the parameters first and then applies them, so it can adapt to the data set. For higher-resolution data, it does the same thing.
But if you're an advanced user, or there's something specific you want to extract-- for example, I was working on drone data last week. It was mountainous terrain. So my default settings actually ended up taking the mountainsides off, which I didn't want; I wanted the surface to be better.
So I changed the parameter to 10 centimeters or something so that it keeps as much detail as possible. In those cases, you can go and mess with the parameters. You can do it. But the majority of the time, you don't have to, because the software adapts by itself. And there are different proposals and things. But really, for a real project, you bring the point cloud data in, you open this tool, and click start processing. That's it.
The software processes it and creates a terrain, something like this. If I go ahead and switch off the point cloud, that's the terrain I got for the complete data set. And this was while I was sleeping; it took only two or three hours or something like that to do this, maybe a little more. But I can get it done. I can get [INAUDIBLE] and I can get a terrain real quick and get my job done.
Let me open the same model. But I have a different model for the linear feature. Give me a second. Any question on that while I'm switching the model? Clear? Yes, sir?
AUDIENCE: Do you [INAUDIBLE] that the [INAUDIBLE] tool will be expanding to building elements?
RAMESH SRIDHARAN: What do you mean by building elements?
AUDIENCE: For windows, doors, is it possible that this is going in that direction [INAUDIBLE]?
RAMESH SRIDHARAN: The answer is yes. Actually, I got the same question a couple of times today at the Answer Bar. AutoCAD has those tools. ReCap is the software that puts a point cloud together. Right? For structured data, because someone asked about terrestrial LiDAR structured data, they have very good algorithms for plane fitting, for the doors and windows and everything. They have for quite some time.
I've seen that in AutoCAD. Somebody said it's also available in Revit. I haven't checked it yet. And they are working on making it better, too. So there are some solutions already available. But that's a separate thing; they're working on that. I can get you more information later if you want.
AUDIENCE: I'm still not quite sold on [INAUDIBLE]. I mean, it's still, I don't know. [INAUDIBLE] It already has some structured data. [INAUDIBLE]
RAMESH SRIDHARAN: I just need to find which one I-- just bear with me for a second while I find the linear features. There we go. Any other questions? Nope? OK.
So I want to open and show some of the linear features I extracted automatically and then edited. There's not a lot of [INAUDIBLE]. There are only two-lane roads in this area, actually. I got the center lines, and on the sides, the top and bottom of the curb. There we go. And you can actually see there are a lot of collections there. And I'm going to switch off the-- let me change the view right there.
Well, I mean, you get the idea. So, anyone who has not used the linear features? Of course, you did. Those of you who are not using linear features, you can try that. It's pretty good. You can go into a cross section that's [INAUDIBLE]. And you can QA/QC the data and take it into Civil 3D or any design software, basically. And you can also create the lines there.
There are a few things we're working on to make that automatic, vertex snapping and stuff like that. That's coming up next month. Legalities, I can't talk about it. But you guys will be very happy. If you're using it, you'll really like those things. Right now the line extraction, and even the cross section, is manual.
We are making it a little bit easier for you, so you don't have to do too much work. OK. I think that's all I have.
AUDIENCE: Is that in the sandbox? Or--
RAMESH SRIDHARAN: It's in the sandbox. Yes. Definitely. And that's all I have.
[APPLAUSE]
Thanks.