Description
Key learnings
- Through cloud-based content management, learn how to automate client requirements and standardize project templates
- Understand how to generate native project templates through Revit I/O in the cloud
- Learn how to link to Forge for real-time query and visualization
- Learn how to manipulate and manage content in the cloud
Speakers
- Mustafa Salaheldin Ali Bakr: As the Innovation and R&D Lead at Atkins Middle East and Africa, I lead the digital delivery and engineering design R&D. Armed with a Bachelor's degree in Computer and Systems Engineering, my expertise extends across multiple domains, bolstered by certifications as a LEED GA, Autodesk Expert Elite, and Microsoft Applications Developer. My journey into the realm of cutting-edge technology began as the BIM R&D head at Engineering Consultants Group (ECG) in 2012, where I spearheaded pioneering projects in Egypt. Transitioning to SNC-Lavalin Atkins in 2016 marked a pivotal moment, amplifying my impact within the industry. With a repertoire of speaking engagements at prestigious platforms such as Autodesk University in Las Vegas, I've showcased my proficiency in BIM R&D and digital innovation. Notably, my project graced the Opening Keynotes main stage in 2018, underscoring my commitment to pushing boundaries. At Atkins, I collaborate seamlessly with diverse departments, including Architects, MEP, Masterplanning, and F+G, to devise innovative engineering solutions. My forte lies in system automation and integration, leveraging state-of-the-art techniques to drive data-driven business decisions. In my role, I orchestrate strategic planning for data management, encompassing gathering, ingesting, extracting, and analyzing data. By championing standardized data practices, I ensure Atkins remains at the forefront of innovation, driving transformative change across the organization and beyond.
- Marc Durand: As the Director for Digital Disruption for Atkins Middle East and Africa, my deep technical expertise, entrepreneurial skills, and high-level strategic planning bring a new strength to capitalize on the technological growth opportunities that exist in our markets today. With over 15 years' experience in leading roles in technology/AEC firms, I have led technology research and development, implementation, and project delivery across several tech firms in Germany and France, including Faust Consult, Burt Hill, 3D Kyvoss, and most recently in the UAE as a partner with iTech, a management consultancy firm and provider of Building Information Management (BIM) technology services. My appointment to Atkins enables full implementation of the Atkins digital strategy across the region, with a focus on enabling the creation of new revenue streams. I am originally from Boulogne-sur-Mer, France, where I completed my Master's Degree in Industrial Data Processes at the University of Littoral, Cote d'Opale, France. My family and I relocated to the United Arab Emirates (UAE) in 2007.
MUSTAFA SALAHELDIN: So let's get started. Welcome, everyone. I just want to thank each individual in this room for coming. My name is Mustafa Salaheldin. I am the data science manager at SNC-Lavalin Atkins, Middle East and Africa. And today I have my colleague Marc Durand as a co-speaker. He is the digital disruption director at SNC-Lavalin Atkins, Middle East and Africa.
So introducing myself in short sentences, I am a multidisciplinary subject matter expert in the AEC industry and software development. I am an expert in building automation systems for BIM by extending Autodesk products, especially Revit, with cutting-edge technologies from different vendors like Microsoft, Esri, Google, and Amazon. Besides that, I am an Autodesk Expert Elite as well as an Autodesk authorized developer. I am a Microsoft certified developer, and I am certified as a LEED GA.
So before I start, I want to dedicate my first class at Autodesk University Las Vegas to my godfather, Jeremy Tammik, because he has not only been a fantastic mentor to me, but he also taught me how to be a mentor to other people. And from here, I want to tell him: thank you for being such a great role model, and thank you for all the things you have taught me over these years. Also allow me to thank the Forge design automation team, Rahul, Liuan, Xiaodong, Adam, and Philippe, for their continuous support and for their quick responses.
So during the presentation, I will take no questions, and I will postpone all questions until the end, so I will spare five to 10 minutes at the end for questions and answers. I just want to make a quick survey to have an idea about the audience. So how many of you are Revit API developers? Very good.
How many of you are Forge API developers? Very nice. How many of you are using the Design Automation API for AutoCAD? Awesome. How many of you are using the Design Automation API for Revit? Very good.
Today, we are going to talk about a new technology from Autodesk, the Forge Design Automation API for Revit, and its role in changing the way we manage our digital content in Revit. Before I define the Design Automation API for Revit, I want to say that it is still a private beta, which means that only a few people have access to this technology for testing and evaluation. But as we heard yesterday from the design automation team, it will go to public beta on the 28th of January, correct?
During the presentation, I will use some terms interchangeably. So whenever you hear me saying Revit on the cloud, Revit.io, DA4R, or DA, then I'm referring to the same thing, which is design automation API for Revit.
And on our agenda for today are a couple of interesting topics: what the Design Automation API for Revit is and why to use Revit.io, how Revit.io works, and how to execute a Revit.io add-in. We will showcase some applications we have done with Revit.io, and last but not least, how we can use advanced technologies with Revit.io to manage our content in Revit.
So what is DA4R, or Revit.io? The Forge Design Automation API for Revit, or Revit.io, is another component of the Forge platform ecosystem that allows the user to build web applications that can talk to a Revit engine on the cloud using RESTful APIs. Revit.io allows the web application to create, read, or modify Revit models on the cloud by executing Revit add-ins.
As we can see from this graph, the user, to the right, sends an activity file to the web application, combined with some instructions like creating, modifying, or reading. The web application then sends all this data to the Revit engine, to the left, and starts to execute an add-in. The Revit.io engine executes the instructions, and after processing the information, it sends the processed file back to the web application, which in turn sends it back to the user for further processing or for display.
So what exactly is the difference between Revit.io and Revit? Revit.io is a headless Revit engine that runs on the cloud, while Revit is a full user-interface Revit engine that runs on the user's machine.
So if there is no big difference between Revit and Revit.io, why should we care about Revit.io? The answer is because of its amazing advantages, and we are going to go through some of them.
So the first advantage, for example, is that we can now run all our add-ins without even having Revit installed on our machines. And it doesn't depend on the operating system, so we can now run all our add-ins on Apple Macintosh, on Linux, and of course on Microsoft Windows, because we are running them from the cloud. And there is no need for the add-ins to be installed, reconfigured, updated, or licensed. All you need is to trigger a command to execute the add-in.
And you don't need to worry much about security, because the add-in is protected behind an authentication system. Also, the user can control the add-ins and the data accessibility, so they can be private or public. In this case, the user can allow other third parties to include the Revit.io add-in in their pipeline and use it as a stage of that pipeline.
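The authentication system mentioned here is Forge's two-legged OAuth flow: every Design Automation call carries a token obtained with the app's client credentials. As a minimal sketch (the endpoint and scope names follow the public Forge documentation of that era and should be treated as assumptions to verify against current docs), a helper that builds the token request could look like this:

```python
def build_token_request(client_id: str, client_secret: str) -> dict:
    """Build the form body for a two-legged (client-credentials) Forge token.

    The URL and scope strings are assumptions based on the public Forge
    docs of the time, not the exact private-beta values.
    """
    return {
        "url": "https://developer.api.autodesk.com/authentication/v1/authenticate",
        "data": {
            "client_id": client_id,
            "client_secret": client_secret,
            "grant_type": "client_credentials",
            # code:all covers Design Automation; data scopes cover OSS buckets
            "scope": "code:all data:read data:write data:create",
        },
    }
```

A caller would POST `req["data"]` as a form body to `req["url"]` and attach the returned bearer token to every subsequent Design Automation request.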
And as DA4R comes as part of the Forge platform ecosystem, it integrates with all the different Forge services on the cloud. In addition to this, we will be able to use all the cloud processing capacity to automate our complex design workflows and run our heavy, repetitive tasks on the cloud, regardless of your computer hardware or specification limitations.
In my opinion, the biggest advantage of DA4R, since it is a service that runs on the cloud, is that we can make use of different cloud services from different vendors to make our add-ins more powerful. So, for example, we can integrate our add-in with AI services from Microsoft or machine learning from Google, or we can use Lambda functions from Amazon.
So by adding such advanced technologies to Revit.io, we can add data self-awareness to Revit.io, where we digitally convert the data into entities that can be understood and processed by Revit.io. Revit.io will then be able to make decisions based on its understanding of the data and its semantic meaning.
So how does Revit.io actually work? This is what we will explain now with this graph.
So first, from this graph, the user has to write some code for the add-in, and this add-in should implement the IExternalDBApplication interface. Because we don't have access to the Revit UI, we cannot add a reference to the RevitAPIUI.dll in our solution to call its components. Once the user finishes writing the code for the add-in, they can compile it and bundle it with the .addin manifest file in an archived folder.
The next step is to create what is called an app package, and you can think of the package as a placeholder where you store your add-ins so they can be used by different web applications. Once we create the app package, we upload the archived bundle, which contains the .dll file and the .addin file, to the app package. The next step is to define a so-called activity, and every activity must be assigned to an app package. The activity defines some inputs and outputs and defines some arguments for a method.
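As a rough illustration of these two definitions, here is a hypothetical sketch of the app-package and activity payloads. The field names are modeled on the public Design Automation v3 API that this private beta later became; the IDs, engine string, and parameter names are made up for the example:

```python
def build_app_package(bundle_id: str, engine: str, description: str) -> dict:
    """App package: a placeholder that holds the compiled add-in bundle."""
    return {
        "id": bundle_id,          # e.g. "FamilyExtractorBundle" (hypothetical)
        "engine": engine,         # e.g. "Autodesk.Revit+2018" (assumed format)
        "description": description,
    }

def build_activity(activity_id: str, bundle_id: str, engine: str) -> dict:
    """Activity: binds an app package to named input/output parameters."""
    return {
        "id": activity_id,
        "engine": engine,
        "appbundles": [bundle_id],
        "parameters": {
            # downloaded to the working directory before execution
            "rvtFile": {"verb": "get", "description": "input Revit model"},
            # "zip": True mirrors the archived output folder used in the demo
            "result": {"verb": "put", "zip": True,
                       "description": "extracted families"},
        },
    }
```

In the real flow, each dict would be POSTed to the Design Automation service, which then returns version numbers and the pre-signed upload URL mentioned later in the demo.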
So how can we actually execute a Revit.io add-in? The first step is to upload the input file to a cloud location. This cloud location could be a Forge OSS bucket, an Amazon S3 bucket, or any cloud storage service of your choice. And in case you are creating a project or file from scratch, you can let Revit.io take care of this on your behalf.
So once the user uploads the input file to the cloud location, they have to submit a so-called work item. The work item is a specific invocation of an activity that indicates the input files, the output location, and the values of the arguments for the method. The design automation system will then download the input files from the cloud location to a working directory and do the processing by executing the Revit add-in. Once it finishes processing, it will upload the output file to the output location identified by the work item.
After the file reaches the output location, the user can retrieve the output for further processing or for display.
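The submit-then-poll cycle described above can be sketched as follows. The payload shape and status names are assumptions modeled on the public Design Automation API, and `get_status` stands in for the real HTTP call that checks the work item:

```python
import time

def build_workitem(activity_id: str, input_url: str, output_url: str) -> dict:
    """Work item: one invocation of an activity with concrete signed URLs."""
    return {
        "activityId": activity_id,
        "arguments": {
            "rvtFile": {"url": input_url},                 # signed GET URL
            "result": {"url": output_url, "verb": "put"},  # signed PUT URL
        },
    }

def poll_status(get_status, interval_s: float = 0.0) -> str:
    """Poll until the work item leaves the queued/in-progress states."""
    while True:
        status = get_status()
        if status not in ("pending", "inprogress"):
            return status
        time.sleep(interval_s)
```

In production, `get_status` would issue a GET against the work item's status endpoint, and the loop would use a real back-off interval instead of `0.0`.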
We have now come to the amazing part: the showcase of our applications using Revit.io. What I'm going to display now is meant to be done automatically, but to show what happens behind the scenes, I prefer to show the manual version so that you can have more of an idea of what is happening behind the scenes.
So in this demo, I will show the sequence of executing a Revit.io add-in on the cloud. This add-in is used to extract the loadable families from an RVT file, categorize them, and then update the web UI based on each category.
So first we will go to the family catalog to make sure that it's empty. The categories list is also empty. Then we go to the control panel, and from the control panel, we create a new package. From the drop-down list here, we find all the predefined app packages, and from the drop-down list here, we select the Revit engine that will execute our add-in, which in this case will be Revit 2018. Then we supply an ID and a description for our app package. By default, the first package will be assigned version 1, and for each version, we can create an alias.
And once the app package is created, it returns a pre-signed URL where we should upload our archived package, which contains the .dll and .addin files. Once the upload is complete, we now define the activity. We assign the outputs and inputs, and in this case I also chose to make the output folder an archive; we will explain this later. By default, the first activity will also be assigned version 1, and for each version, again, we can define an alias.
Now we create the work item, which will invoke the add-in, and we specify the exact location of the input file and the exact location of the output file. Then we execute the activity. Now we start to poll the processing status until we reach the success status. Reaching the success status means that the add-in has successfully extracted the families from the input RVT file, categorized them, and then uploaded them to the portal.
So by opening the categories, we now find a list of the categories that have been identified by the add-in from the RVT file. And if we go to the family catalog, we find all the categories, and beneath each one we find all the families that have been extracted. Thanks to the Forge Model Derivative API, we can extract thumbnails directly from the families, and we can also use the Forge Viewer API to display the families.
To understand what happened behind the scenes, we have to look at this graph. The user uploads the RVT file to the web application, and the web application in turn uploads it to the Revit.io engine. Revit.io executes the family extraction add-in. Once the extraction process is done, Revit.io compresses all the extracted families into a compressed folder. This is because Revit.io can only give a single file per execution as output. So in order to get all the extracted families, we put them in a folder, compress it, and then upload it to an S3 bucket.
Once the upload process is complete, we execute a Lambda function, and the Lambda function extracts the compressed file into a different bucket. Once the extraction is complete, another Lambda function is executed to upload all the extracted families to a Forge OSS bucket. After the uploading is complete, the Lambda function executes another command through the API gateway to translate the families into a format that can be understood by the Forge viewer. And once the translation is complete, we are able to display them in the Forge viewer.
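The first Lambda step, unpacking the compressed folder and handing each family file to an uploader, can be sketched like this. The `upload` callback stands in for the real S3 or OSS put call, so the logic runs without any cloud access:

```python
import io
import zipfile

def extract_families(zip_bytes: bytes, upload) -> list:
    """Unpack the compressed output folder and pass each .rfa family
    to an uploader callback (in the real pipeline, a put to the
    second bucket). Returns the names of the families uploaded."""
    uploaded = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            # Revit family files carry the .rfa extension; skip logs etc.
            if name.lower().endswith(".rfa"):
                upload(name, archive.read(name))
                uploaded.append(name)
    return uploaded
```

A real AWS Lambda handler would read `zip_bytes` from the S3 event's bucket/key and pass a `boto3` put as the `upload` callback; that wiring is omitted here.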
This is a snapshot of the buckets. We have created two buckets: the first one receives the archive of compressed families from Revit.io, and the second one is the bucket the families are extracted into, in order to upload them again to the Forge OSS bucket. Here we can also see the two Lambda functions. The first is triggered after the upload from Revit.io to the S3 bucket, and the second is triggered once the upload is complete, to extract the families and then run the translation process for the viewer.
So now that we have created a standard loadable families catalog, we can use these families to create a template and use them inside this template as samples. In this project here, we have created a Revit template file on the fly and then started to place some family instances as samples in this template. Then we save it and download it later.
So first we go to the categories and select the desired category. Then we start to select the families that we want to insert in the template as samples. As we can see now, we can put them directly in the Forge viewer, and the user can change the position and orientation of each instance.
In that way, we can make sure that all the families are standardized according to the COBie standard, so we don't need to take care of the naming convention or the scale or the orientation of the 3D view thumbnail, for example. Now we trigger the work item that will create the RVT file on the fly and insert the family instances in the same locations.
And once the creation process is complete, we are ready to download the file, so we are now downloading it.
And we will open the file in Revit for more verification.
So in this application, we will address the most annoying task when starting up a new Revit project, which is defining the new system family types. Now, using Revit.io, we can easily define new system family types on the fly, and we will show this in this demo. By creating the system families using Revit.io, we guarantee that all the system family types are centralized in one location and can be easily maintained and updated.
So as we can see here, we can now create walls, floors, ceilings, and roofs. I will go for creating a wall type. First, we need to supply the wall type name, and then we configure the structure of the layers: we select the functions and the materials and insert the thickness.
Then we can even change the order of the layers. And by the way, all the information we see here comes directly from the RVT files; it is not stored in an external database and then read when executing the application. Instead, we are using Revit.io to directly access the information from the RVT file on the fly.
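A minimal sketch of the request such a UI might send to the work item: a wall-type name plus an ordered list of layers. The field names and millimeter units are illustrative assumptions, not the actual portal's schema:

```python
def build_wall_type(name: str, layers: list) -> dict:
    """Build a wall-type request from (function, material, thickness_mm)
    tuples, preserving the user's layer order."""
    total = sum(thickness for _, _, thickness in layers)
    return {
        "name": name,
        "layers": [
            {"function": function, "material": material, "thickness_mm": thickness}
            for function, material, thickness in layers
        ],
        # convenient for sanity-checking the assembled wall
        "total_thickness_mm": total,
    }
```

The add-in on the Revit.io side would then map each layer's function and material onto the Revit CompoundStructure of the new wall type.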
So once we finish the layer structure, we execute the work item that will create the RVT file, create the new wall type, and create a sample wall in the project for the same wall type that we have defined. I'm sorry.
So once the creation process is complete, the sample wall type can be displayed in the Forge viewer, and thanks to the Forge Viewer API, we can display all the parameters of the wall in the viewer. Now the file is ready for download, so we will download it and open it in Revit to verify the wall type.
So by selecting the wall, we can check the naming convention, and by editing the wall type, we can find the whole layer structure that we have defined.
So now that we have a system families catalog, we can also use these families to create templates with samples of the system families. This application was done by [INAUDIBLE] from the DA4R team, and we will see now how we can use the Revit system families to add samples to the Revit template. By selecting the system family type, we go for the wall and start to sketch the center lines of the wall.
And now we select the floor and start to sketch the boundaries.
Once we finish the sketch, we execute the work item that will create the Revit template and create all the model elements inside of it. Now the creation is complete, and we can display the result in the Forge viewer. Once we check that everything is OK, we can download the file and open it in Revit for verification.
In this example, we have used Revit.io to make queries directly against the Revit model. To check the model health, there are a lot of indicators, and one of them is the Revit warnings. By checking the status of the model warnings, we can enhance the performance of the Revit model by solving the issues and minimizing the errors.
So in our portal, we can now query the warnings directly from the Revit model and display them in different types of graphs. This allows us to have better insights into the model performance and lets us notify the modelers to solve the issues indicated by these warnings. Or even better, if we could train some machines with AI, the machines could find solutions for these issues, or at least provide us with a list of suggestions for solving them.
So in our application here, we can group the warnings by their severity, by their types, or by whether they have a solution or not, in different graphical shapes. By selecting any bar of the chart or segment of the donut, we can highlight the elements that are causing that warning in the Forge viewer and do further investigation there.
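The grouping behind these charts is a simple aggregation over the warning records the add-in returns. A sketch, with a made-up record shape (the actual add-in's JSON fields may differ):

```python
from collections import Counter

def group_warnings(warnings: list, key: str) -> dict:
    """Count warning records by a chosen field (e.g. 'severity', 'type',
    'has_solution') to feed a bar or donut chart."""
    return dict(Counter(record[key] for record in warnings))
```

For example, grouping a model's warnings by `"severity"` yields the per-severity counts a donut chart needs, and grouping by `"type"` feeds the bar chart.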
This project consists of multiple segregated models, one for each discipline. By selecting each discipline, we execute the work item that runs the Revit.io add-in, which retrieves all the warnings directly from the model and then presents them in graphical format.
In this example, I will also show you how we can use Revit.io to build a dashboard for our Revit content. One of our projects was a master plan project where the client needed to visually display the GIS information of some plots to help him make decisions. Using Revit.io, we were able to extract the information directly from the Revit model using a work item, where all the GIS data was stored, retrieve all this information, and display it in different graphical shapes.
The Revit file consists of multiple masses, and each mass represents a different type of building. There is a set of shared parameters assigned to each mass to define certain values of the GIS information.
So basically, this is our master plan, and initially it is color-coded by land use. Each of these donut charts represents some criterion of the GIS data. By selecting a sector in a donut, we isolate and color-code the model elements that are related to the value of that section. And we can drill down even further into the information in the form of bar charts; for example, here we can see more information about the GFA, the cost, and the population.
All this information now comes directly from the Revit model. It is not stored in an external database; it is read directly from the Revit model using Revit.io.
So how many times did you need to sanitize your Revit content before or after delivering a project? I think this is a very annoying task. Maintaining the models and keeping them healthy is very important to ensure the consistency of the project and its robustness. This is why Revit.io plays a great role here: we can manipulate the Revit project to verify that we meet the corporate standard and keep the model as healthy as possible.
In our example, we have defined a set of criteria as a checklist. For example, we find some regular tasks like opening the Revit file with the audit option selected, deleting all the views that are not placed on sheets, deleting all the sheets that are empty, or removing all the groups and rooms that are not placed. Finally, we save the project with the compact option selected, so that we guarantee the model is healthy and we minimize the storage space.
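Such a checklist can be driven by a small configuration that the work item passes to the cleanup add-in. A sketch, with task names that are illustrative labels rather than the actual add-in's commands:

```python
# Ordered sanitization checklist; each flag can be toggled per project.
SANITIZE_CHECKLIST = [
    {"task": "open_with_audit", "enabled": True},
    {"task": "delete_views_not_on_sheets", "enabled": True},
    {"task": "delete_empty_sheets", "enabled": True},
    {"task": "delete_unplaced_groups_and_rooms", "enabled": True},
    {"task": "save_compacted", "enabled": True},
]

def enabled_tasks(checklist: list) -> list:
    """Return the task names to run, preserving the checklist order
    (audit first, compact save last)."""
    return [item["task"] for item in checklist if item["enabled"]]
```

The Revit add-in would iterate over `enabled_tasks(...)` and dispatch each name to the corresponding Revit API cleanup routine, so the order of operations stays fixed and auditable.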
So now we come to the most exciting part, which is how to use the advanced technologies with Revit.io to manage our digital content.
So in this part, I will talk about photogrammetry, which is one of the most advanced technologies with which we can create the geometry for families based on a real prototype. There is an amazing product from Autodesk called ReMake that can help us do such a thing.
MAN ON VIDEO: Photogrammetry is the process of creating 3D models and textures of existing objects or spaces by shooting many overlapping photos from different angles. ReMake has a fully automatic 3D reconstruction engine. All you need to do is upload all your photos, and it does everything to create a textured model for you. To inform shooting better photos for photogrammetry, it's important to understand some of the basics of how the photogrammetry process works behind the scenes.
Photogrammetry relies on feature detection. First, the software will go through all of your images and detect common points between any pair of overlapping photos. Many thousands of features will be detected in each pair with significant overlap. Using the 2D features in a pair of photos simultaneously, it is possible to solve for the camera and feature point location in 3D space.
ReMake simultaneously solves all pairs, creating accurate camera locations and surface points for all the photos submitted. Then it reconstructs the geometry and creates textures using the positions of the cameras. For feature matching to work, just make sure that nothing moves in your scene while you're shooting. Also make sure to get plenty of overlap between your photos.
So as we can see, we can use ReMake to generate the family geometry automatically. By photographing the prototype from different angles, we collect the images and then upload them to ReMake on the cloud. ReMake generates the geometry, and we then send the geometry directly to Forge with a set of parameters. Revit.io then does further enhancements on the geometry and assigns the parameters to it. After finishing the processing, it sends the RFA file directly to the user.
In this slide, we will talk about one of the most common mistakes that most modelers make, which is using families with a high level of detail at a stage where all these details are not needed. As you know, an excessive amount of detail in a project will affect its performance very badly.
So by using a technology like deep learning, we can make the machines simplify the geometries of the Revit families and automatically create a library of LOD 300 families and LOD 400 families, for example. And at any point in time, if we want to shift the level of detail from one level to another, we can do this automatically as well, because now the machine understands what an LOD means.
So by using a simple machine learning algorithm with Revit.io, we can now do a lot of wonderful things. For example, by creating a classification system, we can make Revit.io able to recognize families that have the same geometry but different parameters. That way, the system will be able to act as a versioning system and ask the user whether they want to keep all the versions of the same family, keep only one version and delete all the others, or merge all the families into a single family.
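The versioning idea can be sketched with a plain grouping step. Here a precomputed `geometry_hash` field stands in for the output of the learned geometry classifier, which is an assumption for illustration only:

```python
def group_family_versions(families: list) -> dict:
    """Cluster families that share geometry (same classifier label /
    geometry hash) so the user can decide to keep, merge, or prune
    the duplicate versions. Returns only groups with 2+ members."""
    groups = {}
    for family in families:
        groups.setdefault(family["geometry_hash"], []).append(family["name"])
    return {h: names for h, names in groups.items() if len(names) > 1}
```

A portal could present each returned group to the user with the keep-all / keep-one / merge choices described above.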
Another example of using Revit.io with machine learning: the machine will be able to understand the type of the project from the employer's information requirements, or EIR. Based on this understanding, it will be able to decide which types of families are needed for this project and then suggest to the user the families that should be included in the project.
Another example of using machine learning with Revit.io: if machine learning helps Revit.io to accurately identify and classify the families, it will be able to accurately generate the BOQ, or bill of quantities, and the BOM, or bill of materials.
So with this slide, I have reached the end of my part, and now I will hand over to my colleague Marc to continue. Thank you.
MARC DURAND: Thank you, Mustafa.
[APPLAUSE]
So I hope everybody's as excited-- was he talking?
MUSTAFA SALAHELDIN: Yeah.
MARC DURAND: So everybody is as excited as we are, and I really hope that everybody starts to understand that we may not have to manage physical families anymore in the near future. We will just generate them on the fly. And I think it's fantastic to start to think that you go inside, take a couple of pictures, and boom, it pops into your Revit model in your back office. So I think it's really incredible how well the technology is evolving.
But before I even step further: I'm Marc, and I have been working with Atkins in the Middle East for the past two years now. I'm the digital disruption director, which means that I'm the crazy one in the office. And for the people who came this morning to the main stage, we were actually presenting to all of you guys some of the technology that we have been looking at: really, data at the core, and how we can manipulate new information.
So this is literally one slide that I wanted to keep, more as an open talk than a conclusion, explaining to you a little bit that we have been using the Forge APIs with everything that Mustafa was explaining to you earlier. But we're even using them beyond that, at the core of the data itself, linked to other microservices, to then generate and interconnect with a platform. I will not bother you with more details, but I invite you to come to the class that we are conducting tomorrow specifically on this [INAUDIBLE] DM technology.
And again, I want to emphasize one more time that one of the main quests that we have been conducting over the last year with Mustafa and our team in the office is really to try to be less and less focused on the site, less and less focused on what standardization means, because you can almost [INAUDIBLE] it and self-generate it on the fly. So imagine that we are a big company in the world, globally scattered, about 50,000 people. It becomes really difficult for us to keep everything sanitized and standardized.
So when you think about it really quickly: what is a library of families, for example, in a big organization? It becomes complicated. Maybe the answer is actually none, because it's self-generated based on what you need. And that's something that we have been trying to pursue and develop more and more.
So again, I invite you guys to come tomorrow, when we will speak a bit more in-depth about what this scattered project is about. And I think, as Mustafa showed, the Forge is an opportunity to introduce what the next class could be. And to conclude, I try to open everybody's mind to thinking that the next step of our digital era is actually data really being at the center, and to start to move away from this type of system that becomes, I believe, more and more complex to manage.
I think it was only one slide you put up?
MUSTAFA SALAHELDIN: Yeah. Yeah. No--
MARC DURAND: That is the second one. This one? So it's going to be a little bit on these snippets, and this is really to wrap up everything that we were speaking about earlier. The concept that we are looking at in today's technology is to be able to say: how can we manipulate information that does not exist yet? So let me say that one more time. The idea of the app that we are really focusing on lately, which has a massive amount of potential and capabilities, is: how can I do something in one technology?
So on this slide, for example, you see somebody sketching on an iPad, and the sketch he is doing is generated and understood through the Forge API, in this case through what was previously named HFDM, for people who are familiar with that one. HFDM understands the shape, and then we can link it to many other things.
MAN: Can you make sure your microphone's on? Do you--
MARC DURAND: Better?
MAN: Yes.
MARC DURAND: Yeah.
MAN: Yes.
MARC DURAND: So the application on the HFDM side understands the shape. And obviously, as everybody can imagine, it can recognize the shape. But that's not the purpose. It actually recognizes the shape, and then we can add more dimensions. So if we add a third dimension, whether mass, form, or shape, imagine that if I push the mass, I alter the shape, because actually they are the same. They are just displayed differently.
So imagine that you bring the shape into an Illustrator workflow, for example, and your graphic designer changes the color of the sketch in its SVG format. It changes in real time on your iPad as well. So everything is connected. It's actually the same data, just displayed differently.
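The "same data, displayed differently" idea can be sketched as a tiny publish/subscribe model. This is a conceptual illustration only, not the actual HFDM or Forge API: one shared data object, with every view (the iPad sketch, the SVG export) rendering from it, so a change made in any view shows up in all of them.

```javascript
// Conceptual sketch only -- not the real HFDM/Forge API.
// One shared property set; every view subscribes and re-renders
// from the same data, so a change made anywhere shows up everywhere.
class SharedModel {
  constructor(data) {
    this.data = data;     // single source of truth
    this.listeners = [];  // each view renders from the same data
  }
  subscribe(render) {
    this.listeners.push(render);
    render(this.data);    // initial draw
  }
  set(key, value) {
    this.data[key] = value;
    this.listeners.forEach((render) => render(this.data));
  }
}

const sketch = new SharedModel({ shape: "circle", color: "blue" });
const log = [];
sketch.subscribe((d) => log.push(`iPad: ${d.color} ${d.shape}`));
sketch.subscribe((d) => log.push(`SVG: ${d.color} ${d.shape}`));

// The graphic designer recolors the SVG representation...
sketch.set("color", "red");
// ...and the iPad view picks up the change too, because both views
// render the same underlying data.
```

The design point is that there is no copy or export step between the views; color lives once in the model, and each representation is just a render of it.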
And since we try to run our software anywhere, we put everything in the cloud. That way, we don't have to bother with IT. From a small machine, you have access to a crazy engine behind the scenes that allows you to replicate, iterate, and change the way that you are designing. And again, we have been exposed to so much software over the past years that we have tried to get closer and closer to the native behavior, which is: how can I sculpt my ideas when I don't know yet what I want to do?
And by doing that, you get rapid iteration. On top of it, you can even ask it to learn your own behavior. So the more you sketch, the more it understands what you want to design.
So imagine that on top of that, you add the flow of your client database, your company database, or their database. It starts to acknowledge what you are doing and predict how or what you want to do, which again helps with the amount of iteration. I don't know if you have worked with the new technology that allows you to do more iterations of your own design, but it becomes complicated when you have 1,000 options and have to actually choose the one that works.
Anyway, that was a preview of what we're going to approach tomorrow with Mustafa in another class. But again, it all combines, because from this sketch we can generate native Revit types. So literally, imagine your sketch, and you have your technical PDF coming out of the engine-- I think it's just fantastic-- without even opening Revit.
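The "sketch in, technical output back, without opening Revit" workflow described above is the kind of thing cloud work items are for. Below is a hedged sketch of building such a request; the activity alias, argument names, and URLs are illustrative assumptions, not the exact Revit I/O (Design Automation) API.

```javascript
// Hedged sketch: building a work-item payload for a Design
// Automation-style cloud service that runs a headless Revit engine.
// "MyNickname.SketchToFamily+prod" and the argument names are
// hypothetical -- check the actual service documentation.
function buildWorkItem(activityId, sketchUrl, outputUrl) {
  return {
    activityId, // which cloud activity (script + engine) to run
    arguments: {
      sketchJson: { url: sketchUrl },           // input: captured sketch
      result: { verb: "put", url: outputUrl },  // output: signed upload URL
    },
  };
}

// The payload would then be POSTed, with an OAuth bearer token, to the
// service's work-items endpoint; the engine renders the result in the
// cloud, so Revit never opens on your machine.
const workItem = buildWorkItem(
  "MyNickname.SketchToFamily+prod",        // assumed activity alias
  "https://example.com/sketch.json",
  "https://example.com/upload/result.pdf"
);
```

The shape of the payload is what matters here: inputs arrive as URLs, outputs leave as URLs, and the heavy computation happens entirely server-side.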
Mustafa, do you want to conclude? No? That's OK. But please don't forget to take the survey for this class in the app. So thank you.
MUSTAFA SALAHELDIN: And I guess now it's time for questions.
MAN: Any questions? Yeah.
AUDIENCE: [INAUDIBLE]
MARC DURAND: Tell him to take the mic?
AUDIENCE: [INAUDIBLE]
Sorry, I'll repeat my question. I think Revit.io, this automation API, has some limitations. It cannot do everything you can do in Revit on the desktop.
MARC DURAND: Yeah, as I said, it is still in private beta, so it's improving over time. In the future, we are expecting more and more to come.
AUDIENCE: So I'm trying to get that information. Do I have to contact Autodesk or--
MARC DURAND: Yeah, there are some people from the Autodesk team here, so they can give you more details about this after the class. Basically, you'll have to receive an invitation to use the service. But I think on the 28th of January it will be in public beta, so it will be reachable by everyone.
AUDIENCE: Thank you. Thank you very much.
MARC DURAND: Welcome.
AUDIENCE: For me, it's a great pleasure to see someone actually already achieving it, so it's good. Thank you.
MARC DURAND: Welcome.
AUDIENCE: Hi. So the content classification library-- is that something that's open source, or is it something that you developed for the LOD reduction? Is that something we can find?
MARC DURAND: Can you repeat the question? He was fixing the mic.
AUDIENCE: The classification library that you showed in the example for the LOD reduction, from a complex object to a simple one-- is that open source, or something that we can download or add to? Or where is it?
MARC DURAND: So in this case, no. For this example, no. I think that we could link it to any specific library. But for us, as I mentioned earlier, we are more looking at it as a different representation of the same object. So at the moment, what we're doing is defining our own libraries.
And depending on your needs, you download the one that matches. That's what we do to date, but obviously we are looking at whether a classification could help, or could be embedded on the fly to almost self-generate. But again, that hasn't yet been the point of focus.
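The "download the representation that matches your need" idea can be illustrated with a small JavaScript helper. The object names, LOD values, and polygon counts below are assumptions for illustration, not Atkins's actual library.

```javascript
// Illustrative sketch: one object kept in several representations,
// each at a different level of detail (LOD). The client requests an
// LOD and receives the lightest representation that satisfies it,
// which also keeps the web viewer's polygon count down.
const chairRepresentations = [
  { lod: 100, url: "chair_box.rvt",      polygons: 12 },     // placeholder mass
  { lod: 300, url: "chair_simple.rvt",   polygons: 2400 },   // design intent
  { lod: 500, url: "chair_detailed.rvt", polygons: 180000 }, // as-built detail
];

// Pick the cheapest representation that still meets the requested LOD.
function pickRepresentation(representations, requestedLod) {
  return representations
    .filter((r) => r.lod >= requestedLod)
    .sort((a, b) => a.polygons - b.polygons)[0];
}

// e.g. pickRepresentation(chairRepresentations, 300) selects
// chair_simple.rvt rather than the 180,000-polygon detailed model.
```

This matches the earlier point about the same object being displayed differently: all three entries describe one chair, and the LOD request decides which form of it gets downloaded.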
AUDIENCE: Are those Java libraries, or do you know any of the particulars of what they're written in?
MARC DURAND: I didn't get your questions.
AUDIENCE: The object library, the images-- are they written in Java?
MARC DURAND: JavaScript, yes. We're in JavaScript.
Any questions? Yeah.
AUDIENCE: Question for [INAUDIBLE].
How long does it take you to meet the [INAUDIBLE], and what are some of the challenges you went through [INAUDIBLE]?
MUSTAFA SALAHELDIN: It was almost one year?
MARC DURAND: Yeah, it's about-- so Caterpillar-- I'll be totally open about the thought process. We started the thought process last year at AU, so it's literally a year now since we started to work on that. The challenge is pretty simple. We are using a technology that Autodesk is currently developing, so you know what that means. We started on the alpha version.
So we are literally developing classes with the team, which they then use and re-inject, and we play a big ping-pong game that drives them crazy quite often. But that's how you create new tech.
And the second part that we have others working on at the moment is the web browser side, which has a certain limitation on how many polygons it can handle. So it's again something that we are trying to bypass more and more by creating different ways of self-generating. That's something that we are looking at with the Forge team as well.
MUSTAFA SALAHELDIN: It improves over time. As the hardware gets more advanced and web browsers get better, we can move forward. Any more questions?
Thank you very much for coming, and we hope we can see you tomorrow in our second class, which will be on the HFDM technology. And we will showcase some of the Caterpillar work that was in today's keynote. Thank you very much.
MARC DURAND: Thank you.