Description
HFDM puts data at the center of Forge application development, enabling developers to build rich and collaborative, data-centric applications and reactive systems at scale. The Forge HFDM data service is a central hub that stores and communicates rapidly changing data between different clients, apps, and connected services while keeping all users in sync. We will show examples of what you can build with HFDM and discuss core architectural concepts and practices, highlighting customer value and benefits.
Forge IDX is a next-generation cloud application leveraging HFDM that lets you easily build rich, collaborative, and smart applications and services using the latest Autodesk technologies. IDX integrates environments for design, development, and publishing, while making it easy for multidisciplinary teams to work together.
Key Learnings
- Learn how to deliver sophisticated workflows designed for teams and multiphase projects
- Learn how to create multiple representations of shared data for multirole workflows
- Learn how to build extensible and scalable data-centered systems
- Learn how to extend your app with Forge High-Frequency Data Management
Speaker
- Farzad Towhidi
Farzad graduated from McGill University as a software engineering undergrad in 2012. Throughout the junior stages of his career, he worked on cutting-edge web technologies with a startup, Lagoa (acquired by Autodesk), that built a mechanical CAD and rendering tool entirely on the web. Once he joined the Autodesk family in 2014, he applied his technical cloud expertise to build Forge Data's then next-generation data management solution (HFDM), which implemented Git-like semantics with real-time collaboration capabilities by applying a data-at-the-center philosophy. His contribution to HFDM earned him a patent for Autodesk. Today, Farzad is in a product management role, helping Autodesk achieve its data platform vision. By working closely with customers, he gathers and prioritizes their data needs in order to help them gain access to their granular data and derive net-new value by unlocking groundbreaking design-and-make workflows.
FARZAD TOWHIDI: All right. Good afternoon, guys. So my name is Farzad Towhidi, one of the lead engineers on Autodesk's next-generation platform, Forge HFDM. And I'm joined here by Dr. Kai Schroeder, who leads the effort to build an application framework on top of Forge HFDM. And today you will learn about how HFDM and the application framework are changing the future of CAD workflows.
So in this class, we'll start off by first talking about some of the current problems that we face with modern-day project design workflows. Next, we'll dive into how HFDM aims to solve these problems by talking about some of these core values and functionalities that it provides. Next, we'll give you guys a little sneak peek on how easy it is to use HFDM through its SDK and API. And we'll also give you guys a set of examples of HFDM in practice, where Autodesk is using its own technology to bridge the gap between different CAD applications in order to make them more interoperable. And finally, we'll give you guys a little teaser on how to build Forge HFDM applications through the application framework.
So in this example, we have one of Autodesk's customers, CW Keller, who is in charge of building the formwork for the shark tank exhibit in the Miami Aquarium. And this is a rendered image of the shark tank seen from within the building. And this is another image of the exhibit seen from outside the building, to see how it fits with the rest of the building.
And so traditionally, Autodesk cared mostly about how the building is designed, and less about how the building is constructed. Well, things are changing. And we do recognize that in order to construct a building, there's a lot of extra design that comes into play, and a lot of extra planning. And so we really believe that Autodesk can be of great help there.
And so in the case of the shark tank, a temporary formwork needs to be in place. So this formwork needs to be designed, manufactured, and installed. And to keep the formwork in place during the pouring of the concrete, a temporary shoring also needs to be designed, shipped, and installed by yet another company. And only then, the rebar can be installed, and the pouring of the concrete can begin.
So here we have four key players that need to collaborate with each other. We have the architect, who's in charge of designing the building and the shark tank. We have CW Keller, who's in charge of designing and shipping the formwork. We have the woodwork manufacturer. And we have the shoring company, who's in charge of designing, shipping, and installing the temporary shoring that needs to be in place to keep the form steady during the pouring of the concrete.
So in the first phase, the architect starts off by designing the building inside an application like Revit. What comes out of that is a pretty large file. And this file could be multiple gigabytes in size. Once the design has been approved, and the construction preparation has begun, the architect sends this large piece of data to CW Keller so that the next part of the process can begin.
However, sending large files is doable, but it's not quite efficient. In this case, we only really care about the shark tank exhibit, and not really about the rest of the building. And on top of that, the architect might not really trust an external company with the rest of the design.
But nevertheless, the architect sends this file to CW Keller. And on the other end, when CW Keller receives this file, it needs to write a plug-in to import the data into its own CAD application, which then generates its own file and sends it to the woodwork manufacturer and the shoring company, who in turn need to write yet another plug-in to import this data and come up with their own designs.
Now this process, in an ideal world, would only take a single iteration. But we know that in practice several things can go wrong. The woodwork manufacturer can see that the design it got is actually not manufacturable. So it needs to tell CW Keller about it, and that needs to bubble up to the architect, who makes slight changes, which generates a new file, sends that file back to the formwork designer, who in turn makes a slight modification to its design based on what changed.
And now, two years later, when the design phase is over, what you end up with is lots of files. Essentially these are copies with lots of exposed intellectual property. And a lot of overhead went into converting data from one CAD application to another. And since these files have no real relationship to each other, there is no way to go back in time to see what the project looked like six months ago.
So long story short, there are a lot of things that we need to do to improve this workflow. We need granular access to the data, so that the architect is not forced to send the entire design, and can send only what matters-- in this case the shark tank design. And beyond that, the architect should only have to send what changed on this piece of data, and not the whole file every time.
We also want to keep track of history and relationships so that we keep history in order to know what changed from one version to another. We also want to keep relationships between these designs-- these external designs-- so that we can go back in time to see what the project looked like a month, six months, or a year ago.
We also need to provide custom workflows without necessarily having to build a plug-in for every CAD program. And something that I didn't discuss in this example is that each one of these companies has its own internal teams that also need to collaborate in an efficient manner.
So to address these problems, Autodesk came up with Forge HFDM, the next generation platform to manage your data. So by now, you're probably asking yourselves, well, what is Forge HFDM? You might be as confused as this meme here.
Well, let me just give a reference to Brian Roepke's keynote, where he talked about what it meant to design on top of Forge, and to build on top of Forge, where Forge is basically several layers. At the very bottom, you have the common data layer, where you have a set of cloud services that are built on top of that; and on top of which you have an application framework SDK that provides several reusable components that are then stitched together inside your custom application.
And now HFDM really is at the core of this stack, at the very bottom. And HFDM stands for High Frequency Data Management. And what high frequency data management allows you to do is to build rich, distributed, collaborative, cross-platform, scalable solutions.
Now these are all buzzwords. But you can see HFDM as a crossbreed between a fine-grained revision control system, with the asynchronous branching and merging of Git, and the real-time, non-locking shared state of Google Docs. So essentially, HFDM is a service that allows you to efficiently store and process data on the cloud, and allows multiple clients to collaborate with each other without locking the state.
HFDM is efficient, because we only store what changed since the last edit. And unlike the architect in our example, who always needs to send the full state of the data-- the full design-- we only send what changed. And because we only store a list of changes, we are able to reconstruct the history-- the full view of the data-- by replaying the entire series of changes that happened up until that time, thus giving us full, fine-grained history.
And by making use of a technique called operational transform, we provide two types of collaboration. You have asynchronous collaboration, which is much like the branching and merging of Git, where you can branch at any point in time, make changes in isolation, and merge your changes back. Or you have real-time collaboration, much like Google Docs.
And when I talk about collaboration, I don't necessarily mean human collaboration. Microservices can also collaborate, just like any other human client. And this really gives another meaning to writing plugins that are completely independent of the application that authored the data.
And on top of that, we also provide granular access control, so that an Autodesk user can own a part of the data and has full control over who sees, accesses, or modifies that part of the data. It also allows clients to subscribe to certain parts of the data, so that they have less to load when working with the data.
But let's take an example here: an application where you have a canvas, and you can draw shapes on the canvas, make modifications to these shapes, and automatically save to the cloud. And now let's look at a naive way to write this type of application, where you're always sending the full state-- the full data model.
So on the left here you have the canvas view. And on the right would be the data model that you're sending every time you save. So whenever you do changes, whenever you add a shape onto the canvas, we modify the data model and send that over to the cloud.
So here we add a circle shape. We modify the data model, send that over, and upload that to the cloud. Next we add a rectangle. We also append this to the data model, and send that to the cloud as well.
And with every little change that we make, no matter how small it is, we always send the full view of the data. So here we recolor the rectangle, resize it, and reposition it. At all times we are sending the full state.
Now this is very inefficient in the way we make use of the network bandwidth, because we're always sending the full view, basically redundant data, all the time. And it's also inefficient in the way we're using the storage, because the circle in this example is always redundantly kept, even though we don't need to. And so one common solution would be to always overwrite the latest state. But then you end up with no history.
So now let's look at the same example the HFDM way, where adding a shape translates into a command over the wire. So here, when we add a circle, at the data layer, what you send is an add command. Same thing with the rectangle, and any modifications we make. If we change the color, we only send the parameters that changed. Similarly if we resize or reposition.
So this type of approach is much more efficient in the way we utilize the network bandwidth, because we only send what changed. It's also much more efficient in the way we use storage, because the data is only stored once. So in this case, the circle data is only stored once.
And from this, we can reconstruct the full history of the data, which gives us fine-grained history. And fine-grained history, as I mentioned before, really allows us to reconstruct the full view of the data at any point in time by applying all the changes up until that point. So in this case, if we go back in time to see what the full view looked like at the time we resized the rectangle, we apply all the changes-- all the add commands, followed by the modify commands-- until we end up at that state.
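That replay idea can be sketched in a few lines of plain JavaScript. This is only an illustration of the concept; the command names and shapes here are made up and do not reflect HFDM's actual wire format:

```javascript
// Reconstruct the full view of the data by replaying a list of small
// "what changed" commands, instead of storing full snapshots.
function replay(commands, upTo) {
  const state = {};
  for (const cmd of commands.slice(0, upTo)) {
    if (cmd.op === "add") {
      state[cmd.id] = { ...cmd.params };        // new shape enters the state
    } else if (cmd.op === "modify") {
      Object.assign(state[cmd.id], cmd.params); // only changed params sent
    } else if (cmd.op === "remove") {
      delete state[cmd.id];
    }
  }
  return state;
}

const history = [
  { op: "add", id: "circle", params: { x: 10, y: 10, r: 5 } },
  { op: "add", id: "rect", params: { x: 30, y: 20, w: 8, h: 4, color: "blue" } },
  { op: "modify", id: "rect", params: { color: "red" } },
  { op: "modify", id: "rect", params: { w: 16, h: 8 } },
];

// Full view at the time the rectangle was resized: replay all four commands.
console.log(replay(history, 4).rect); // { x: 30, y: 20, w: 16, h: 8, color: "red" }
// Full view two commands earlier: the rectangle is still blue.
console.log(replay(history, 2).rect.color); // "blue"
```

Note that the circle's data appears only once in the history, yet every historical view can still be reconstructed.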
And this brings us to the concept of low frequency versus high frequency. In a traditional file-based collaboration approach, like the shark tank example, where you're always sending the full state of the data-- the full file-- the efficiency is a function of how long you wait between uploads. The more you wait between uploads, the more efficient that approach becomes. And so this makes it low frequency.
Now contrast this with a high-frequency approach, where you're always sending only what changed. Since the changes are so small-- since the payload that you're sending over the wire is so small-- you are able to have more frequent updates, and it also allows for instant collaboration and fine-grained history.
Another interesting aspect of HFDM is the asynchronous collaboration, or the branching and merging workflow, that's very similar to Git, where you can go back in time and branch at any point. So in this case, we go back to where we added the rectangle. And we can create a new branch, a new variation of that design at that point in time, make changes, add a triangle, color that triangle green, and then later on merge at any point in time.
And looking at the merge from the perspective of the target branch that we're merging into, what that results in is that the commands that are part of the source branch are applied on top of the target branch. And so here we take those commands-- those changes where we added a triangle and colored the triangle green-- and append them to the tip of the target branch.
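Seen from the target branch, that merge step can be sketched like this. It is a deliberately simplified illustration: a branch is modeled as a plain array of commands, and the operational-transform step that real HFDM merges apply is omitted:

```javascript
// Merging, seen from the target branch: append the source branch's
// commands made after the fork point onto the target's tip.
// (Real HFDM runs these commands through operational transform first
// to resolve conflicts; that step is skipped in this sketch.)
function mergeInto(targetBranch, sourceBranch, forkPoint) {
  const newCommands = sourceBranch.slice(forkPoint);
  return [...targetBranch, ...newCommands];
}

const target = ["add circle", "add rect", "recolor rect", "resize rect"];
// Branched after "add rect" (fork point 2), then diverged:
const source = ["add circle", "add rect", "add triangle", "color triangle green"];

console.log(mergeInto(target, source, 2));
// ["add circle", "add rect", "recolor rect", "resize rect",
//  "add triangle", "color triangle green"]
```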
So this type of asynchronous collaboration, because it can happen at any point in time, the target branch can really move forward while you're making your changes in the other branch. So at the time that you're actually doing a merge, there could be conflicts.
And so for that, we offer three options to resolve your conflicts. You can let the SDK's standard operational transform rules take care of it in an automatic fashion. Or you can have your application code override these rules with its own set of rules, and have the conflict resolved automatically as well. Or better yet, you can have the user intervene in a more interactive manner.
And so branching and merging is really powerful, because it allows you to explore different design variations, have multiple configurations of the same design, decide which ones you would like to go forward with, and merge your changes back at any point in time. Yep.
So another type of collaboration is real-time collaboration, which is similar to Google Docs. Real-time collaboration allows you to have multiple collaborators working on the same branch at the same time, and also gives a continuous merging flow, where remote changes that are received are automatically merged on top of the local branch.
And so on the left here you see a diagram where you have several applications and microservices, written in different languages and operating in different environments, all interacting with each other and reacting to each other's changes. And that's basically the kind of collaboration that HFDM provides.
So finally, HFDM also allows granular access control, where a user can choose which parts of the data they want to see and load. A user can own certain parts of the data, and has full control over who has which types of access to the data.
So on the right here, taking the same shape example, you have one client that loads the full data-- all the data. And you have another client-- the client on the right-- that only loads the circle. So any changes made to that circle will be broadcast to the client on the right through the HFDM service. However, any changes made to the rest of the data model will not be transmitted to that client.
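The filtering logic behind that partial checkout can be sketched as a simple path check. This is a conceptual illustration only; the path syntax and function are invented for this example, not part of the HFDM API:

```javascript
// Forward a change to a client only if it falls under a path the
// client subscribed to (conceptual sketch of partial data loading).
function shouldBroadcast(subscribedPaths, changePath) {
  return subscribedPaths.some(
    (p) => changePath === p || changePath.startsWith(p + ".")
  );
}

const clientB = ["shapes.circle"]; // this client only loaded the circle

console.log(shouldBroadcast(clientB, "shapes.circle.radius")); // true
console.log(shouldBroadcast(clientB, "shapes.rect.color"));    // false
```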
So now onto the fun part. How do you use Forge HFDM? Well, wait. Sorry. This is the wrong one. Yep. [INAUDIBLE]
So from a developer's point of view, what you get is a managed service that takes care of persistency, and takes care of synchronizing all the clients. And what we provide also is a set of SDKs. We have JavaScript, C++, C#, and we have more to come on the horizon.
But before I jump into the SDKs themselves, there are a few concepts that are quite important to know about HFDM, starting with HFDM schemas and property sets. A schema is essentially a declarative way to model your data. It is a tree-like structure, and it allows for inheritance, templating, and typing. It offers a rich set of base types, such as Float32/64, binary, and enum, among others. We also have collection support, like maps, sets, and arrays.
And as we know, data can evolve over time. And so we also version our schemas. The versioning scheme that we follow is semver, where a major update constitutes a breaking change-- say, if you remove a property or change its type; a minor update is a non-breaking change, like adding a new property; and a patch update is also non-breaking, only updating, say, an annotation field.
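To make this concrete, a shape schema in this style might look roughly like the following. The exact field names (`typeid`, `properties`) and the namespace are assumptions based on the property-set model described in the talk, not a documented schema:

```json
{
  "typeid": "sample:shape2d-1.0.0",
  "properties": [
    { "id": "x", "typeid": "Float64" },
    { "id": "y", "typeid": "Float64" },
    { "id": "color", "typeid": "String" }
  ]
}
```

Under semver, adding an optional `opacity` property would bump the version to `sample:shape2d-1.1.0` (non-breaking, minor), while removing `color` or changing its type would require `sample:shape2d-2.0.0` (breaking, major).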
And from these schemas, what you get as the actual data is a property, or a property set, or a property tree. So these schemas can really be seen as a recipe to create a property. And changes made to these properties are stored inside what we call a commit graph.
And a commit graph is analogous to Git's data model, where a commit is a unit of change, a branch is a lineage of commits, and a repository is a tree of branches that are connected through a common root commit. And so as you're making changes to your data model, these appear inside the commit graph in individual commits.
And commits also contain metadata. But the actual core of a commit-- the set of instructions describing what changed since the previous state-- is stored inside what we call a ChangeSet.
And so ChangeSets are reversible, so they provide native undo/redo capabilities. They're serialized as JSON. They're atomic, so a set of inserts, removes, and modifies is applied all at once. And they also contextualize the schemas with the data, so that you are able to dynamically introduce new schemas into your branch, given that you could have multiple microservices working on the branch at any point in time.
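The reversibility property can be illustrated in plain JavaScript: if a modify records the old value alongside the new one, the whole ChangeSet can be mechanically inverted for undo. The structure below is invented for illustration and is not HFDM's actual ChangeSet serialization:

```javascript
// An illustrative ChangeSet: an atomic set of inserts and modifies.
// Each modify carries the old value, which is what makes it reversible.
const changeSet = {
  insert: { triangle: { x: 5, y: 5 } },
  modify: { "rect.color": { value: "red", oldValue: "blue" } },
};

// Invert a ChangeSet: swap old/new values on modifies, and undo
// inserts by turning them into removes.
function invert(cs) {
  const out = { modify: {}, remove: [] };
  for (const [path, c] of Object.entries(cs.modify || {})) {
    out.modify[path] = { value: c.oldValue, oldValue: c.value };
  }
  for (const path of Object.keys(cs.insert || {})) {
    out.remove.push(path);
  }
  return out;
}

console.log(invert(changeSet).modify["rect.color"].value); // "blue"
console.log(invert(changeSet).remove);                     // ["triangle"]
```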
So now onto the SDK. So as I mentioned before, we support multiple languages. We have NodeJS/Browser-- which is essentially JavaScript-- C++, C#, and more to come. So for the sake of simplicity, I will stick to JavaScript.
But they all essentially follow the same flow, where you start off by first installing and importing the package-- by declaring it in your package.json in this Node example-- and importing it through a require statement in your application code.
Next, you would register the schemas that you predefined with the Property Factory. And the Property Factory is an object that generates properties from schemas. So you need to register a schema before you can create your property.
Next you can connect to the HFDM service by instantiating an HFDM object, which is essentially a local copy of the back-end data store, and which automatically synchronizes whenever a connection is established. This also allows you to work offline if ever you don't have access to the internet. So on the right, here, we instantiate an HFDM object, and we call the connect function on it.
So next, what you want to do is create and initialize a workspace. Now you could initialize an empty workspace, which would create a repository and an empty commit. Or you can initialize a workspace on an existing data set, either by passing in a branch identifier-- a branch URN-- which effectively joins the workspace to a live session, where any changes that are made are automatically received and applied onto that workspace. Or you can pass in a commit URN, which essentially binds you to a point in time, which is immutable unless you branch off and make commits in your new branch.
So after creating, initializing your workspace, what you want to do is modify the workspace. And you can modify a workspace by creating, modifying, or removing properties. And so here we create a property from the schema that we registered early on, fill it with some values, and insert it into the workspace under a given handle.
And so changes made to the workspace are pending until you actually perform the commit operation. The commit operation essentially packages all the changes that were made since the last state and stores them inside the local HFDM object, which then broadcasts them to the back end, where they are redistributed among all the other collaborators.
And so seeing it from the other end, you could bind to changes made to the data model through the modified event on the workspace, which is triggered whenever there is any change made to the data model. You can also bind to a specific path of your property set to only be notified on a certain path, which is what we're doing here.
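Putting the steps above together, the whole flow looks roughly like this. Treat it as a non-runnable pseudocode sketch: the package name, exact method names, and event names are assumptions reconstructed from the talk, not a documented API, so consult the actual SDK reference before using any of them:

```javascript
// Sketch of the end-to-end SDK flow described above (names assumed).
const { HFDM, PropertyFactory } = require("@adsk/forge-hfdm"); // assumed package

// 1. Register a predefined schema with the PropertyFactory.
PropertyFactory.register(shapeSchema);

// 2. Instantiate a local HFDM object and connect to the service.
const hfdm = new HFDM();
await hfdm.connect({ serverUrl: "..." }); // inside an async function

// 3. Create and initialize a workspace (empty, or from a branch/commit URN).
const workspace = hfdm.createWorkspace();
await workspace.initialize();

// 4. Bind to changes, either globally or on a specific path.
workspace.on("modified", (changeSet) => { /* react to remote edits */ });

// 5. Create a property from the registered schema, insert it, and commit.
const circle = PropertyFactory.create("sample:shape2d-1.0.0");
circle.get("x").setValue(10);
workspace.getRoot().insert("circle", circle);
await workspace.commit(); // pending changes become one ChangeSet
```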
Now let's take a look at some of the examples of HFDM in practice. So here you guys might be familiar with Fusion. And this is the web browser version of Fusion called Fusion Web. And here we're utilizing HFDM real-time collaboration to synchronize all the different clients with each other.
And this is another example where you have two desktop applications-- Revit and AutoCAD-- interoperating with each other through HFDM, where changes made in Revit are automatically downloaded by AutoCAD, and changes made by AutoCAD are also received on the other end by Revit.
And what's great about this is that the integration is abstracted away through HFDM: you don't have a Revit plugin inside AutoCAD, or an AutoCAD plugin inside Revit, giving you a much more maintainable solution.
Does this one not work? Ah. This was the most interesting one. All right.
So I'll just say what it was. This was basically an example where we used Revit to integrate with Google Docs-- Google Sheets, more specifically-- through HFDM. It allowed a better workflow, where an architect could essentially share a subset of the data with an external contractor, and have that contractor modify the set of fields that was shared, without necessarily having to download and install a monolithic application.
So without HFDM, this type of workflow was quite painful. The architect would have to export the set of fields to a CSV file, import that into Excel, email that to the contractor, have the contractor make changes, email it back to the architect, manually input the changes into Revit, and continue the cycle. So HFDM really allowed us to offer a seamless workflow and collaboration through this approach.
And now I'll give the stand to Kai to talk about the application framework.
KAI SCHROEDER: Thank you for that. Forge HFDM is a great technology to build collaborative cloud-native applications. But it's only the data part of the application. Now, to build real web applications, you need some runtime functionality on top.
And for this, we built the Forge application framework. We are also working on an integrated development experience, to make it very easy to build rich web solutions, solutions like the one you saw, like the Fusion example; or if you've seen the keynote, the [INAUDIBLE] Forge Configurator.
If you have a look at Forge today, most of what's available are back-end components-- very complex services that can generate data. A front-end component that we offer is the Forge Viewer. This is a great tool to display complex models that are stored on Autodesk 360 in a variety of different formats.
Now we have been asked several times, can I customize this viewer in some way? Or maybe, can I have some editing operations on top of the viewer? And this is something we plan to offer as part of the application framework-- reusable components that you can integrate in your application, and combine them with a viewer.
Now building such applications is challenging. There are many aspects to this. You need components to create geometry, for example. In the [INAUDIBLE] Forge example, the prosthetics were generated on demand by Fusion, for example.
You also want to render your models in a lifelike way, and ideally photorealistic way. Maybe you want to have analytics to understand how customers use your application. And then, if you integrate some of the Forge services, some of the back-end cloud services, there need to be ways to orchestrate these services-- to build upon them. Also, to build applications you need productivity tools-- tools for debugging applications, and building these 3D experiences-- all of this with security built in.
Now if you want to build an application on top of Forge in the future, your application can make use of a variety of different tools and frameworks. You can use the Application Builder to easily get started, and have a what-you-see-is-what-you-get workflow to build an initial version of the application.
Then you can integrate JavaScript SDKs-- different frameworks that help you add more complex technologies. For example, we have data management technologies on top of HFDM, and user interface components to work with the data, inspect your data, and provide drag-and-drop workflows with data and data assets.
And I already mentioned the 3D viewing component, which can soon be used as a single, easily extendable component. Then there are other components-- Smart Insights, so that you can understand how users use your application, what types of data and what common configurations users define in your application, and really understand how your applications are used. UI rendering components help you to build these complex apps combining 2D and 3D elements on the same page. And then different framework technologies can provide their own UI components.
And we provide a set of reusable components to use. This starts with Autodesk standard components such as the View Cube, rotation tools you can integrate into your application. It goes on with tool palettes. So if you integrate some component for solid modeling, this can directly come with tool palettes to generate certain object types, geometry types.
At the same time, we want to make it very easy for you to build your own reusable components, and potentially share them with others. This can be tools of this size. It can be sets of tools. And it can be tools that also integrate more complex computations that you want to offer.
One use case for the application framework and the integrated development experience is in the context of mass customization-- and specifically, in the context of product configurators. For these Forge product configurators, what is really special about them is that we can provide very seamless workflows.
Thanks to Forge HFDM, different users can work with the data in different ways. So we can provide a seamless workflow starting with an engineering model designed in Fusion. This can be integrated by a web developer developing an online experience. A designer can have their own custom application to further refine designs and change materials for this engineering model, to make it really look great.
And this is different from what we often had before with file-based solutions. What we typically had there is, you had an engineering model. You exported it to a different format. Then you maybe opened this in Maya or 3D Studio Max, did your tweaks for marketing, and made it look great.
Then you would render images. These images are then sent to a web developer. The web developer integrates the images into an HTML page. And it can take a very long time to get changes in the engineering design into the configurator, because there are so many manual steps involved. With this new Forge, there can be very seamless workflows, where everyone works on the same data.
Then you can make use of our visualization tools to have really lifelike experiences-- realistic, photorealistic visualization-- based on the physically based materials that are already used in our desktop tools.
Many traditional configurators are based on pre-rendered images. Now this means you can only offer a very static experience to your customer. Also, you have to pre-generate a lot of images for all of the different configurations a product configurator may have. If you build a configurator directly based on 3D assets, and you use the viewer to render these 3D assets on demand, users of your application can inspect products from any angle. And it's a much richer, more interactive experience. As everything is based on [? VAP 2 ?] technology, it, of course, runs on any device.
This is an example for such a configurator making use of several framework components. So it makes use of the rendering component. It makes use of geometry generation components. So here, for example, you can set the number of holes in this [INAUDIBLE], and then Geometry Service regenerates geometry on demand.
This is another example with a higher focus on the visual parts. Here you can design a kitchen. You can select from a number of countertops, sinks, and other options; select materials. And then on demand a new image of this kitchen is generated.
Again, compare this to a configurator based on static images. There you really need to generate many images that nobody ever looks at, because some configurations are typically popular, and some are configurations that no one wants to have. So here, no time is wasted generating images for those configurations, because everything is just rendered on demand.
This is another example integrating Fusion. And on the right, just for illustration, we have embedded Fusion Web. And you can see that all the data in this configurator directly corresponds to data in Fusion.
So here we can select between different models. With the sliders we directly drive dimensions in the Fusion model. So whatever comes out of this configurator, we can be sure that this is a manufacturable model, because it is really based on the original CAD design data.
We can even go to Fusion and make, then, changes in Fusion, for example, and make this edge a little more round by adding a [INAUDIBLE] there. And, again, as the data is shared via HFDM, the change directly is visible in the configurator. And so it's really easy to get changes of engineering designs into this application.
Of course, there's no need to get every change. As HFDM also supports Git-style workflows, you can of course have different branches, where the engineer works in one branch and the web developer works in another branch. And only whenever it makes sense are the branches merged.
And this is a video of the integrated development experience that we want to offer soon to you, to make it really easy to build these applications. So there you can browse for Fusion models on Fusion Team Hub, directly drag a model onto the canvas, design an application with reusable components. And this way, it's really quick to get started.
And even though Brian mentioned in the keynote that this is a word pun on IDE, I would like to mention that we fully integrate with existing IDEs. So we don't expect you to do all of your coding in a web application. Of course you can still use Sublime, Visual Studio Code, or your preferred environment, and just use this as one additional tool to make it easier to build the applications.
FARZAD TOWHIDI: OK. You want me to take over? All right. Thank you, Kai.
So what's next? Well, we're hosting an application framework accelerator in Montreal on the week of April 16th. So this will be a great opportunity for you guys to get a hands-on experience on building your first Forge HFDM application.
Some other classes you guys can attend at this event: if you're interested in seeing how HFDM is being leveraged to reimagine BIM workflows to support better project outcomes, you can attend the class following this one at 3:45 on Project Quantum. If you're interested in hearing how the Forge app framework has been utilized to build configurator applications that improve design workflows, you can attend Leore's presentation, also following this one. And please join us tomorrow for the follow-up of this class, where we will go in depth into the Forge app framework.
So to summarize, Forge has evolved. And soon you will be able to build end-to-end seamless workflows using Forge Autodesk's powerful intellectual property, in combination with your app business logic. And Forge HFDM is at the forefront of this evolution. So please join us on the next evolution of Forge. Thank you.