AU Class

Dynamo and AEC Generative Design Product Briefing

Description

This product briefing will showcase the latest advances in Dynamo visual scripting, Dynamo Player for workflow automation, and Generative Design in Revit software for automated design exploration. Come learn from Autodesk AEC Generative Design product managers about where they've been focusing their efforts and what's on the road map for future releases. We'll use real-world examples to showcase these tools and show how we've improved the authoring experience in Dynamo and the running experience in both Dynamo Player and Generative Design in Revit. This briefing will demonstrate the value that Dynamo and Generative Design in Revit bring to the design process. We'll show how customers are using the tools to automate their workflows and design explorations in their projects in order to optimize sustainability and efficiency and produce less construction waste. We'll cover the product principles that we use to prioritize new work and future direction. Finally, we'll address the future of the product.

Key Learnings

  • Discover the value of Dynamo, Dynamo Player, and Generative Design in Revit.
  • Discover three examples of how customers are using these tools.
  • Learn about the driving principles for future prioritization.
  • Discover the future direction of the product and road map.

Speakers

  • Lilli Smith
    Architect and Digital Enthusiast
  • Steven DeWitt
    With 25 years of experience in electronic modeling for design, construction, and fabrication, Steve's passion for helping solve industry problems has led him to Factory OS. Steve is a Certified Revit Professional, has enjoyed sharing industry trade topics at AU, the San Francisco Computational Design user group, CAD Manager Confession, and TAPS, and is a co-founder of KitConnect. Additionally, Steve is an avid outdoorsman, a softball and baseball coach, and a chess player.
  • Karam Baki
    Karam Baki is an architect who started his BIM journey when he was 16 years old, and since then, his passion for knowledge has never slowed down. He usually solves extremely complex problems related to facade engineering in high-profile projects. Karam started AECedx for education and AEC Group for consultation, utilizing his skills to educate, manage, and run teams across multiple countries around the world.
  • Alexandra Nelson
    Alexandra Nelson is an Associate on the Design Technology team at DLR Group, a prominent global integrated design firm. Her expertise lies in Research & Development, where she spearheads innovation in fields such as design automation, data science, and artificial intelligence. In addition to her role at DLR Group, Alexandra serves as a strategic advisor for Acelab, an AEC tech company that is revolutionizing the way architects and design professionals access building products through their data-driven library and product ecosystem. Alexandra has earned a master's degree in both Architecture and Information Technology from the University of North Carolina at Charlotte, reflecting her deep commitment to merging technology and design. Prior to her tenure at DLR Group, she gained valuable experience at distinguished architectural firms, including Grimshaw Architects and Perkins Eastman in New York City.
  • Sol Amour
    Sol has a background in a myriad of design fields (construction, landscaping, industrial design, and architecture) and works at Autodesk as the Product Manager of Dynamo, responsible for its strategy, vision, direction, and growth. His ethos is that we should leverage the computational power available to us to automate many of the back-end processes the modern world demands, allowing us to get back to what we all want to do: spend time on the human component of building making; to consider, to think, to play, and to feel; and to bring design back to the forefront, allowing us to beautifully and holistically enhance the built fabric of our world.
  • Benjamin Friedman
    Benjamin Friedman leads DLR Group's Data Science and AI team. He has deep experience in deep learning, optimization, and generative AI that enables him to drive innovation and expand DLR Group's design thinking, quantify evidence-based design, and improve project timelines. After studying GeoDesign (Architecture and Planning) at the University of Southern California, he led and worked on data science teams in the sustainable tech and energy spaces. Two significant achievements include leading data collection and distillation efforts for agriculture emissions tracking for Climate Trace and the UN Climate Conference as part of Carbon Yield, and developing the core operating logic of one of the largest residential virtual power plant systems in the world at Swell Energy.
Transcript

LILLI SMITH: Hi, everyone. Welcome to the Dynamo and Generative Design Product Briefing. My name is Lilli Smith. I'm a senior product manager in the Computational Design & Automation Group here at Autodesk, and I am a registered architect who practiced architecture in a previous life. During my time at Autodesk, I have worked on many tools, including Revit, FormIt, Dynamo, and Generative Design in Revit. I'm also joined today by Sol Amour.

SOL AMOUR: Hi, everyone. My name is Sol Amour, and I'm also a product manager in the Computational Design & Automation Group at Autodesk. I am from New Zealand, and have a background in architecture, construction, industrial design, and many other fields. And I'd like to describe myself as a curious human being. I've been here at Autodesk around 4 and 1/2 years, and have been deeply immersed inside of the Dynamo ecosystem since its early days.

LILLI SMITH: This talk is going to cover what new software we have in the works. Please remember not to make purchasing decisions based on statements we may make about future functionality. These are the learning objectives for today.

I'm going to start with a little bit about why we think computational and Generative Design workflows are so important. Then we'll talk about what's new in the products. And next, we'll feature four computational design practitioners who are using these products to improve their practices-- some really, really exciting workflows we have to share with you from them. And finally, we'll talk about our future roadmap-- what we have in the works near-term and farther out, and really the principles that are driving it.

So first, why should you care about computational design? So we are sitting right now in the fastest-warming city in the United States. July 2023 was the hottest month in Las Vegas ever.

The AEC industry owns much of the responsibility for building out the commercial and residential spaces for our rapidly expanding global population and changing climate. The future is a design challenge. Workflow, automation, and Generative Design can revolutionize the way we design by speeding up our work, making it more efficient, and using goals and measurable outcomes to help guide us and build sustainable, healthier environments.

Architectural and engineering services have evolved from drawing by hand on paper and delivering those drawings to others to build buildings. We've evolved to building information modeling and more efficient ways to document and deliver building instructions to the field. But with the serious problems that we're facing, we're really going to have to figure out how computing power can help us more.

We want to invest in ways not just to record design decisions, but also to automate them and to track and measure success metrics. Now, we could throw more people on projects to do more work faster. But we could also use automated computing power to help us. What we really want to do next is to capture what's in these smart people's heads and pair human intelligence with machine intelligence. The key to all these processes is being able to combine and codify the kinds of knowledge needed to solve building problems, so that we don't have to spend as long on tedious tasks, and so that we can use data-backed design decisions to work together to create a better-built environment.

Sol and I work in the Computational Design & Automation group at Autodesk. Our mission is to provide simple and capable tools for encoding AEC goals and constraints to assist design and analysis with automation. We developed Dynamo, the Dynamo Players, and Generative Design in Revit.

This is the way that we see the world. The sky's the limit of what you can do by just writing some code. But that's like saying you could create a Mona Lisa by just doing a painting. Writing code is a discipline and an art. It takes time and effort to learn.

That doesn't mean that everyone here can't do it, but it does mean time spent away from designing, engineering, planning, and all the other things that architects, engineers, and contractors do as a part of their everyday work. So we have invested heavily in the open source tool Dynamo because it's a middle ground between direct modeling and a full coding environment. It allows architects and engineers to record their logic and speed up their workflows.

To work in Dynamo, you can either access it from a host application, such as Revit or Civil 3D, or you can use our standalone sandbox experience. The Dynamo UI consists of a node library full of functions, and the nodes themselves, which are like little machines that you wire together. If your nodes make geometry, it is executed when you hit Run and displayed in the background.

To use Dynamo Player, you can access it via the Manage tab. The Player UI contains a library of already-created sample Dynamo graphs and documentation, pulled together in a simple and inviting way for those who are maybe not so technically minded. You can also add libraries of your own scripts here, making them usable by many more people than just those willing to dive into the full Dynamo experience.

As most of you know, this is Dynamo. This is what almost all Dynamo scripts boil down to at their essence. There are nodes to read data in-- in this case, data coming from sources such as Revit.

You could also have data coming from Civil 3D or Excel files. This data connects to nodes that compute things-- in this case, converting the text to capitalized text. And the last section is nodes that write data back out to these programs, like Revit, Civil 3D, or Excel.
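As a rough illustration of that read-compute-write shape, here is a minimal sketch of the "compute" step as it might appear inside a Dynamo Python node (the surrounding read and write nodes, and the exact parameter being capitalized, are assumptions for illustration):

    # Minimal sketch of the "compute" step from the graph described above,
    # written for a Dynamo Python node. Data read from Revit (or Civil 3D or
    # Excel) arrives on the IN ports; OUT feeds the nodes that write back out.

    names = IN[0]                      # e.g. a list of strings read from Revit

    # Compute: capitalize each incoming string.
    capitalized = [str(n).upper() for n in names]

    OUT = capitalized                  # wire this to the nodes that write data back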

Dynamo can also be used for Generative Design. In Revit 2021, we introduced Generative Design in Revit to make design automation workflows accessible to more people. You can see it running here.

The designer is performing a massing study to study the allocation of retail and office space distribution, while minimizing cost and maximizing rentable area. This is integrated right into Revit, and the designer performing this study does not necessarily have to be the same designer who has created the underlying Dynamo script, which makes the logic for this study. So integrating these tools into the large Revit ecosystem is a step towards making Generative Design processes more mainstream, so that more people can have this supercharged ability to explore the best possible solutions.

Dynamo can run in a standalone version that we call Sandbox, and we actually update it almost every day on our website. The more stable and tested versions become integrated into all of the products that you see here on the screen. We are excited to soon be announcing a beta of Dynamo working in Autodesk Forma, so stay tuned for that.

There are active Dynamo user groups all over the world-- Atlanta, Boston, Catalunya, Shanghai, Ireland, Auckland, and many more. These are only the ones that we thought had the best logos. So we see people participate in these user groups, and also come to our forum, which is on dynamobim.org.

Our forum is a great place to come and get help. We see people asking questions here every day. And we love that the community helps people get started, helps people with a range of different problems. So now I'm going to hand it over to Sol to talk about all the great new tools that we have out.

SOL AMOUR: Thank you, Lilli. So now let's talk about some of the latest features that we've released in Dynamo and what you can use right now. To bound this section: we have focused on making Dynamo easier to use and adopt, making it easier to collaborate with your peers and share your content, and making Dynamo faster and less error-prone. These features have been delivered in Dynamo versions 2.16, 2.17, and 2.18.

So in conjunction with the team that builds our geometry kernel, the technology that powers Dynamo's geometry, we have implemented native PolyCurve support. That means that polycurves now play nicely with the rest of our geometry library and have powerful new features, such as smartly understanding how to create regions instead of self-intersecting curves that don't really help anyone.

We have also implemented a new way to create bounding boxes, which are an abstract piece of geometry that wraps the total extents of a geometrical element. In the past, we only had axis-aligned bounding boxes, which, while useful, also allowed for misrepresentation: a containment check to see if one element was inside another, or an intersection check, could have had false positives. The new approach allows us to find the minimum possible bounding box, irrespective of the axes, providing much more accuracy in containment and intersection checks.
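To make the axis-aligned problem concrete, here is a plain-Python illustration (not the Dynamo API) of the kind of false positive Sol describes, using a thin diagonal element whose axis-aligned box is mostly empty space:

    import math

    # A thin bar running diagonally from (0, 0) to (10, 10). Its axis-aligned
    # bounding box is the full 10 x 10 square, which is mostly empty space.
    def axis_aligned_box(points):
        xs, ys = zip(*points)
        return (min(xs), min(ys)), (max(xs), max(ys))

    bar = [(0.0, 0.0), (10.0, 10.0)]
    (min_pt, max_pt) = axis_aligned_box(bar)

    test_point = (9.0, 1.0)            # far from the bar, but inside its box

    inside_box = (min_pt[0] <= test_point[0] <= max_pt[0] and
                  min_pt[1] <= test_point[1] <= max_pt[1])
    distance_to_bar = abs(test_point[0] - test_point[1]) / math.sqrt(2)

    print(inside_box)                  # True  -> the axis-aligned check says "contained"
    print(distance_to_bar)             # ~5.66 -> the point is nowhere near the element

A minimum, orientation-aware bounding box for the same bar would hug the diagonal and reject that point.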

The Custom Selection Node allows you to create your own flavor of dropdown, complete with your choice of display option and value that it produces. If you have mixed object types here, it will default everything to a string. But any complete case of numbers or integers and so on will honor that value type. This is incredibly useful in creating Dynamo Player and Generative Design scripts for others to use. No more hacky workarounds.

And six new chart nodes-- which will all look familiar if you have ever used the node model Charts package in the past, so shout out to Keith Alfaro there-- now allow you to better interrogate data inside of Dynamo. They automatically update in automatic run mode, and they come with a rich and broad set of documentation around how they work and how to best leverage them-- a big win for exploring visual relationships with data.

The notification center is a way for us to send out important messages and information to you about Dynamo, right there at your fingertips inside Dynamo itself. Currently, this is largely blog posts and a handful of other salient posts, such as the Dynamo Future File. But we see this evolving into so much more. In the future, it could provide real-time feedback in the event of service downtime and notify you when your favorite package has received an update.

Dynamo now ships with a splash screen, telling you what Dynamo is doing in its load sequence at every step of the way, rather than you clicking the Dynamo button and waiting 15-odd seconds for it to show up. It also allows you to sign in to Dynamo, as we've added authentication into the core Dynamo experience, and it gives you the ability to import settings prior to initialization that will take effect inside of that Dynamo session. From inside Dynamo, you can also import and export settings, but this might require a restart.

Node Autocomplete got a massive power boost with the introduction of a recommendation mode based on machine learning. This means Dynamo will provide you with options for following or preceding nodes in a graph that other authors have actually used, rather than just library relationships like the old node type match. This unlocks the ability to build graphs at speed with much higher fidelity, and it will only get better over time as we tune our machine learning model.

Here, you can see the machine learning Node Autocomplete mode in action in real time. The video shows a user starting with what they want to achieve, which is building a wall in Revit, and working backwards at speed to do so, having Machine Learning Autocomplete predict what comes next. Beyond selecting the nodes from the recommended list, the primary user action here is selecting from those dropdown menus the values that make most sense to the graph author, showcasing how easy it is to actually build a working useful graph.

Dynamo now ships with Extended Node Help, which will be familiar to those of you who ever used the Dynamo Dictionary in the past. This provides in-depth documentation through a description, a sample image, and a sample file for a giant swathe of nodes that exist inside of the core Dynamo experience. We're actively working right now to achieve 100% coverage, including the T-Spline nodes and Mesh Toolkit, and are doing the same with Revit and Civil 3D. You can also insert that sample graph into your active Dynamo workspace directly from a button here, allowing you to either see what those nodes do or actually add them into your workflow.

How many times have you forgotten how to hook up a Color Range node? You can now just press F1 and insert that cluster right into your graph through the Extended Node Help, which automatically places it inside a named group for your use.

The Python node has been modernized, matching the visual refresh with its UI. It allows input and output port renaming with a user-defined description, and it is awesomely 100% backwards compatible with older versions of Dynamo. It gives you the ability to scale your font-- for those of us who like lots of text on screen, or those of us who might need it a little bigger-- and matches the text coloring between DesignScript and Python so that there's better visual and logical interop between both worlds. And we've also introduced automatic text folding, allowing you to collapse portions of your code to gain back some of that valuable screen real estate. All of these quality-of-life improvements will help you work more easily with Python and, better yet, make it much easier to explain and share your work with your colleagues and peers.

As part of this, Dynamo now supports six new Python libraries out of the box that allow you to more richly interact with mathematics, data, and image manipulation, run scientific calculations, directly connect with Excel, or create your own custom scripts and custom plots. You simply need to import them, as the Python node is already pathed natively to their location. And we can't wait to see what cool things you come up with in your graphs, and maybe the cool packages that you build around these new libraries too.
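As a small hedged example (the talk doesn't name the six libraries; NumPy and pandas are assumed here for the mathematics and data cases), a Dynamo Python node can now use them with a plain import:

    # Hypothetical Dynamo Python node using bundled scientific libraries.
    # NumPy and pandas are assumed to be among the six new libraries.
    import numpy as np
    import pandas as pd

    values = IN[0]                               # e.g. a list of numbers from the graph

    arr = np.array(values, dtype=float)
    spread = arr.std()
    summary = pd.DataFrame({
        "value": arr,
        "normalized": (arr - arr.mean()) / spread if spread else arr * 0.0,
    })

    # Return rows as plain lists so downstream nodes can consume them.
    OUT = [list(summary.columns)] + summary.values.tolist()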

Dynamo Player now exposes graph dependencies in your script, informing you of what packages are needed to run that graph and whether or not you actually have them installed. You can also directly open Dynamo to resolve those dependency conflicts. We have also exposed the ability to view any warnings in the script with a new issue manager, allowing you to read the expanded warning, click on any Learn More link, and copy and paste warnings directly to get help from either the forums or your colleagues. And for the visually inclined, you can now display images inside of Dynamo Player via the Watch Image node, as long as it's set to an output. Fantastic for visualizing results that require context.

Dynamo Player and Generative Design in Revit have new sample automation workflows for you to explore, allowing random family instance placement or the isolation of elements with warnings. We are extremely excited to announce the release of a dedicated Civil 3D Dynamo Primer chapter, out for you now to explore. A big shout out here to Zachri Jensen, author of the Camber package, for all his hard work on this wonderful resource.

Please do go check it out and explore the powerful new scripts that come with that Primer section, which can help fast-track your understanding of Dynamo for Civil 3D. I'll now hand you back over to Lilli to explore some customer use cases.

LILLI SMITH: Thanks so much, Sol. So let's take a look at the incredible things that people are doing with Dynamo. I am thrilled to introduce this inspiring panel of computational design practitioners from around the world who are going to share their work with us.

Karam Baki is an architect from Jordan who will talk about a Foster and Partners project, the Red Sea airport in Saudi Arabia, that he helped model with Dynamo and Revit. Next, Steve DeWitt from Vallejo, California, will share how he has automated prefab housing construction projects using Dynamo and Revit. Chris Steer, coming to us all the way from the Gold Coast of Australia, will talk about wrangling a ton of data to improve road design workflows in Civil 3D. And then finally, Alexandra Nelson and Benjamin Friedman of the DLR Group out of Charlotte, North Carolina, will tell us about how they're using Dynamo as a prototyping tool for generative AI explorations.

So first up is Karam Baki, who is an architect and consultant for the AEC Group. They help solve extremely complex problems related to facade engineering in really high-profile projects. Karam is going to talk about this amazing Foster and Partners project in Saudi Arabia at the Red Sea airport, on which he was a consultant. I'll let Karam tell you about it in his own words.

KARAM BAKI: Hello, everyone. My name is Karam from AEC Group. It's a great honor to be here today. I'm here to share with you a challenge we encountered during the Red Sea International Airport project, specifically regarding the implementation of the roof cladding and ceiling paneling design.

The problem with the roofing arose from inconsistent relationships between panels in the IFC model, which prevented us from correctly implementing the roof elements. To address this, we designed a series of mathematical solutions that allowed us to automatically correct more than 22,000 panels in a single operation. We then simply replaced the panels with adaptive components.

The next challenge we faced was related to the ceiling elements. This was geometrically more complex than the roof, as each ceiling panel is a multidirectional curved panel. This meant that each panel was unique and had to be modeled separately.

To solve this challenge, we developed a cutting-edge solution that is now implemented across all of our projects. It is well known that Revit has some limitations when it comes to using multiple CPU cores for geometrical tasks. So our solution was designed to process a range of selected panels in each Revit instance.

That allowed us to launch multiple Revit instances and assign each one to a separate CPU core. With up to 20 Revit instances, each instance generated SAT files representing the eLOD geometrical data and CSV files representing the iLOD data. We then developed a final solution that combined all the panels from different Revit files into a single file in a single Revit instance.
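The orchestration details aren't shown in the talk, but the partitioning idea can be sketched in plain Python: split the panel list into one contiguous slice per Revit instance and hand each instance its own range to export (how each instance is launched and told its range is assumed and omitted here):

    # Sketch of the partitioning idea only: divide the panels so that each of
    # up to 20 Revit instances exports its own slice to SAT/CSV files.
    def split_into_chunks(panel_ids, instance_count):
        """Return one contiguous slice of panel ids per Revit instance."""
        chunk_size = -(-len(panel_ids) // instance_count)   # ceiling division
        return [panel_ids[i:i + chunk_size]
                for i in range(0, len(panel_ids), chunk_size)]

    panel_ids = list(range(22000))       # placeholder ids for the ceiling panels
    chunks = split_into_chunks(panel_ids, instance_count=20)

    for index, chunk in enumerate(chunks):
        # Each chunk would be passed to one Revit instance, e.g. via a settings
        # file that the Dynamo graph in that instance reads when it starts.
        print("instance %d: panels %d..%d (%d panels)"
              % (index, chunk[0], chunk[-1], len(chunk)))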

By the end of this process, we had automatically generated a significant number of ceiling panels, complete with full iLOD details and data. We could even go a step further and generate all the fabrication orders for each unique panel. But that's a discussion for another time.

Now, to try this on your own PC, simply open any Revit version, starting from 2021. Then go to Manage, Dynamo, Packages, Search for a Package. In the Search field, you can either search for Synthesize Toolkits, or you may simply search for Karam.

In all cases, you will find this toolkit. Once you have it, you may open New, and you will find Synthesize Toolkits on the sidebar. Inside of it, you will find Installer. Just activate the Installer. It will tell you to restart Revit.

Once Revit is restarted, go back to Manage, Dynamo again. This time, you will find a section called Demo. Open that. You will find RedSeaAirport.

Inside of the RedSeaAirport, you will find two nodes, one of them called BuildCeilingFiles. Call that. Essentially, this node creates the SAT files that are required to generate the ceiling families. It requires you to insert an ExportDirectory.

Simply search for directory path. You will find the first node. Call it, browse, and just create a new folder anywhere-- let's call it EXPORTS in the desktop. And plug that in.

If we head to the EXPORTS folder, you can see that each panel is being exported into a separate subfolder. Each one contains SAT files that are essentially the panel parts. For the sake of simplicity, this demo creates only 13 panels.

Now, if you go to the 3D View, you will notice that there are empty adaptive components. Those are essentially the placeholders that define the guidelines of each panel. Now, to load the panel parts, you may simply go back to Dynamo and [INAUDIBLE] in LoadCeilingFiles from the provided list of SAT files. Simply plug in the SAT files.

And that's it. You are done. All of the panels are created accurately, and each of them has material parameters, detail level, and of course subcategories for further analysis of each panel.

Now, you may also click on any placeholder. Adjust its points as such. Go back to Dynamo. Just plug that out and back in. And your panel is updated accordingly.

I hope this presentation was insightful. Please feel free to contact AEC Group for any challenging project. Thank you for your time.

LILLI SMITH: That project really blows me away. If you want to learn more about it, make sure to download the newest version of the Synthesize Toolkit, available on the Dynamo Package Manager as Karam showed. And check out the demo files he has in there. And of course, as he said, he's also open to questions at his address.

So we've seen in that last example that Dynamo is really amazing at wrangling complex geometry. But it's also a fantastic tool for managing large amounts of data. In this next project, Chris Steer, who is an innovation engineer at WSP out of Australia, is going to tell us about how he generated over seven million attributes in a Civil 3D road project, and saved over 1,000 hours doing it.

CHRIS STEER: Hi there. It's Chris Steer from WSP, coming at you all the way from the sunny Gold Coast in Queensland, Australia. I'm going to be talking today about wrangling our road asset data in Civil 3D with Dynamo.

Our project data demands here have changed significantly over the last few years, with a change of deliverable processes from strings and surfaces to solids or pseudo solid mesh objects. The demands now are for all drawings to be cut directly from the 3D models and not really embellished too much. The asset data required against each object also must exist in the native model, and not just post-processed and applied to an exchange model. That asset information is also required at all project phases. And that's led to an exponential increase in the data demands on a weekly basis, with all of our models and data required when we need to share them, which is either weekly or fortnightly.

On recent projects, one of our clients actually requires us to organize 65 different individual object attributes in three distinct property sets. Our recent project had over 110,000 objects modeled, resulting in more than seven million attributes needing to be generated and applied across these property sets in custom property tabs. That leads us to the solution and the question asked: what if we already have most of the data available?

In the case here, we've got some standard data already available in a Civil 3D corridor solid-- HorizontalBaseline and CodeName. We'll use those two significantly later. That forms the basis. An extended use of already-embedded design management processes will supplement the rest. And intelligent systems are already at our disposal with Civil 3D and Dynamo.

But let's have a look at the solution quickly. What's needed for it? Standard design management documents-- so an Excel register for your alignments and some data against those, and the master information delivery plan or a model register, whatever you want to call it.

In Dynamo, what do we need? Civil 3D Toolkit, of course, some Spring Nodes, some Clockwork. And also, a special shout out to Paolo Serra, who provided us with a very special node which filled a gap that we needed in our script.

Here's a look at the overall graph. We don't generally have our users use this because it can be very, very daunting for them. We get them to concentrate on the little pink inputs on the left. And in that case, we give them Dynamo Player.

So here's a look at what the users will see. All they have to do is map their master data spreadsheet. Check that the predefined property set names are correct, and the two other tabs are actually loaded there or defined from the spreadsheet.

Let's have a look at how this breaks down. In the upper left, we're looking at a standard Civil 3D drawing name. In this case, it's in distinct fields which actually define the data nature, or the data structure, of all model names on our project for this client. This becomes really valuable because fields 3 and 4 in this case, and the last field, actually become very important.

So the first thing we do is break that document detail down into its individual fields, and then go and use those fields with the predefined property set names and other items during the graph to go and generate data for us. In this case, we're concatenating a property set name and a location together to actually tell us which tab of a spreadsheet we're going to be reading. And this changes from model to model, so that's actually really handy for us.
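As a rough sketch of that step (the delimiter, field positions, and example name below are assumptions for illustration, not the project's real naming standard):

    # Sketch: break a drawing name into its fields and build the spreadsheet
    # tab key from a property set name plus a location field.
    drawing_name = "PRJ-WSP-RD-M3-C-0001"         # hypothetical model name
    fields = drawing_name.split("-")

    location = fields[2]                           # e.g. field 3 of the convention
    property_set_name = "AssetData"                # hypothetical property set

    tab_name = property_set_name + "_" + location  # which spreadsheet tab to read

    print(fields)                                  # ['PRJ', 'WSP', 'RD', 'M3', 'C', '0001']
    print(tab_name)                                # 'AssetData_RD'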

The other bits of information we pull from the model and align to our master data sheets. It's HorizontalBaseline. So you can see there, it's got some bespoke information about it, which is stuff that we don't actually generate within the graph. We actually have that registered outside.

The other item there is the codename. That becomes really important as well. So what we do with that is break it down into its individual fields as well, and then start to put that through some conditional statements with Python. This is all fairly simple coding. We don't need individual coders or specialist coders to do this. And that returns us a whole bunch more information that you can see there on the right.
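That conditional step might look like this minimal Python sketch (the code values and the attributes they return are invented for illustration; the real mapping lives in the project's design standards):

    # Sketch: split a corridor solid's CodeName into fields and derive extra
    # attributes through simple conditionals. Codes and values are hypothetical.
    def attributes_from_code(code_name):
        parts = code_name.split("_")
        element_type = parts[0]

        if element_type == "PAVE":
            return {"Category": "Pavement", "Material": "Asphalt"}
        elif element_type == "KERB":
            return {"Category": "Kerb and Channel", "Material": "Concrete"}
        else:
            return {"Category": "Unclassified", "Material": "Unknown"}

    print(attributes_from_code("PAVE_L1"))   # {'Category': 'Pavement', 'Material': 'Asphalt'}
    print(attributes_from_code("KERB_R1"))   # {'Category': 'Kerb and Channel', ...}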

Extending that further down the path, we use the same codes and align them with things like Uniclass or other predefined items that we can put into conditional statements. We then ultimately organize that into individual lists, in individual orders, that go back against the property set values, which are applied automatically through the graph to the objects in the model, resulting in a fully populated, asset-rich model at the end of the process. It takes around five to 10 minutes per model for this whole process to run, and that's based on a model with around 1,000 objects in it.

But that's how we're achieving our client deliverables in an automated fashion, using Civil 3D and Dynamo to do some data science and apply our asset data for us. Thanks for listening. I hope you got something out of that.

LILLI SMITH: It's really great to see this workflow being able to save so much tedium and deal with so much data. And it's really great to see more Civil 3D workflows popping up. So we wanted to point out that there's a really great example in the new Civil 3D section of the Dynamo Primer that Sol mentioned earlier, which will show you an example of working with Excel and Dynamo for Civil 3D.

Check it out. It's the light pole example, and you can download files to get started there. Chris is also open to being contacted via LinkedIn at the QR code you see here or his email address. In the next project, Steve DeWitt, who is a design innovation engineer at Factory OS, will show us how he uses Dynamo to wrangle both geometry and data in this prefab modular housing project, where he generated over 55 million parts and reduced the time needed to create a proposal from three weeks with a few specialists to one hour by anyone on his team.

STEVE DEWITT: Industrialized construction companies need to be able to rapidly configure and mass customize building products to conform to millions of building configurations. First, we're going to overlay masses. And we're going to model one mass for each unique dwelling segment. We're going to simply add in dimensions and configuration values for if there's a kitchen, where the kitchen is, and where the bathroom might be.

Next, we're going to look at real-time broken design constraints to get real-time feedback, to make sure that what we produce inside of this model is going to be buildable based on inputs from manufacturer or company constraints. From here, we're going to search our entire library to see if there are any other components or modules that we've created with the same configurations as the two we've developed here. If not, Dynamo is going to create a new type for us. If so, the suggested outputs will tell us here in this Dynamo Player output.

Our next step is to utilize the components that Dynamo has suggested in Player. For each unique type, we have one new type or one catalog type that can be placed to configure the entire building, shown like so. The next few steps, we're going to select our unique configurations.

Dynamo is going to search our library to utilize components that have been pre-designed. You're going to see this pop in all in one shot. For the components that we do not have pre-designed, we're going to have to build each component, one element by one element by one element, to configure the entire component. And then lastly, make a Revit group so that we can then go to the next step, which will be placing these inside of a building configuration.

So our last step in the process is going to be to select all of the building masses. Dynamo will query Revit for all the groups. Then it's going to find a group with the same name as the mass, and it's going to place that group inside of that mass. Because we know what we're inserting here, we have several predefined schedules that are now populating. We can then extract this information and send it to Power BI to ultimately distribute to our entire team. So our entire team gets real-time information.
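The name-matching and placement step Steve describes could be sketched roughly like this in a Dynamo Python node (this is not his actual graph; the location logic, using the base of each mass's bounding box, is an assumption):

    # Hedged sketch: collect Revit group types and mass instances, match them
    # by name, and place each group at the base of its mass's bounding box.
    import clr
    clr.AddReference("RevitAPI")
    clr.AddReference("RevitServices")
    from Autodesk.Revit.DB import (FilteredElementCollector, GroupType,
                                   BuiltInCategory, XYZ)
    from RevitServices.Persistence import DocumentManager
    from RevitServices.Transactions import TransactionManager

    doc = DocumentManager.Instance.CurrentDBDocument

    group_types = {gt.Name: gt for gt in
                   FilteredElementCollector(doc).OfClass(GroupType)}
    masses = (FilteredElementCollector(doc)
              .OfCategory(BuiltInCategory.OST_Mass)
              .WhereElementIsNotElementType())

    TransactionManager.Instance.EnsureInTransaction(doc)
    placed = []
    for mass in masses:
        group_type = group_types.get(mass.Name)    # group named after the mass
        if group_type is None:
            continue
        bb = mass.get_BoundingBox(None)
        center = XYZ((bb.Min.X + bb.Max.X) / 2.0,
                     (bb.Min.Y + bb.Max.Y) / 2.0,
                     bb.Min.Z)
        placed.append(doc.Create.PlaceGroup(center, group_type))
    TransactionManager.Instance.TransactionTaskDone()

    OUT = placed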

To date, we have produced 37 buildings, 73,600 assemblies, 55.2 million elements and parts, 1,338 cataloged dwellings, and 1,795 non-cataloged dwellings. So it's been interesting to see what kind of information we can pull from the project. It's also been interesting to compare what kind of information we can pull as a company. Thank you.

LILLI SMITH: So Steve has a full class on this process at AU this year. So make sure you check it out if you want to learn more about doing this kind of workflow. I'm sure there's a lot of gems in there for using Dynamo with Revit.

Now for something completely different. So we've seen some great examples of Dynamo helping with huge amounts of geometry and data. But did you know that it is also a really great prototyping tool?

So next up, Alexandra Nelson and Benjamin Friedman from the research team inside of DLR Group out of Charlotte, North Carolina, are going to show us some exciting examples of how they're using Dynamo to prototype the use of generative AI tools. They tell us that they can save money by running Stable Diffusion locally, and also use it with their own IP inside their company's firewalls. They are saving time and money creating materials for Revit in this way, which all adds up. And they're also saving headaches by always having material images be the right size, and therefore not causing any slowdowns in Revit. So let's hear from them about their prototyping tools.

BENJAMIN FRIEDMAN: Hello, everyone. My name is Benjamin Friedman. I lead our data science work at DLR Group.

ALEXANDRA NELSON SHQEVI: And I'm Alexandra Nelson Shqevi. I work in research and development.

BENJAMIN FRIEDMAN: And we're excited to share with you all some of the work our teams have been doing integrating generative AI in Dynamo. So we use Dynamo in a lot of ways across our firm. But in R&D and data science, Dynamo is really helpful for building small prototypes to get buy-in and build excitement from our users. And in data science in particular, Dynamo is powerful in its integration with Clarity as a data collection and automation tool. We're going to focus here on how we've integrated generative AI in Dynamo for one of our recent prototypes building a material generator.

So in this demonstration, we're leveraging a type of generative AI model called a Diffusion Model. And these are the models behind much of the Text to Image advancements over the last year or so. Specifically, we're going to leverage an open source Diffusion Model called Stable Diffusion. And Stable Diffusion enables us to run common image generation tasks locally, like Text to Image, Image to Image, In-painting, and Auto-Rendering.

To run this, there are a few requirements. Firstly, Hugging Face is an open source community built around delivering generative AI. And they provide a powerful Python library called Diffusers that lets us run and work with these models on our local machine

Additionally, it's helpful to have an NVIDIA GPU and CUDA installed, so you can run those models quickly. But you can still run it on a CPU. And finally you just need to make sure that you have Python in your Dynamo, rather than IronPython, because Diffusers won't work with IronPython. Now I'm going to hand it off to Alex to walk you through the recent prototype of our material generator.
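Before Alex's walkthrough, here is a minimal sketch of that Diffusers setup (the checkpoint name and prompt are placeholders; the talk doesn't name a specific model):

    # Minimal text-to-image sketch using Hugging Face Diffusers. The checkpoint
    # and prompt are placeholders; use the GPU if CUDA is available, else CPU.
    import torch
    from diffusers import StableDiffusionPipeline

    model_id = "runwayml/stable-diffusion-v1-5"      # assumed checkpoint
    device = "cuda" if torch.cuda.is_available() else "cpu"

    pipe = StableDiffusionPipeline.from_pretrained(model_id).to(device)

    prompt = "seamless brushed copper panel texture, architectural material"
    image = pipe(prompt, num_inference_steps=25).images[0]
    image.save("generated_material.png")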

ALEXANDRA NELSON SHQEVI: As Benny mentioned, I'm going to go over our workflow of integrating diffusion models into Dynamo to generate new material images. We utilize the newly updated Dynamo Player UI in Revit 2024 to set up editable inputs for our text prompts, sampling steps, pixel sizes, and an option to generate a new material from the image that is generated and name it. A user can then run the task and get a quick view of the image that was generated and the file path where the image lives.

Here is a quick snapshot of our Dynamo script, which uses all out-of-the-box nodes and focuses primarily on two custom Python components. One is the image generator, which integrates Stable Diffusion via Hugging Face's Diffusers library, and the other node utilizes the Revit API to generate a new material with the newly generated image. So what's next for us and others as we all explore generative AI in Revit and Dynamo?

For us, we are interested in the fine-tuning and improved control of diffusion models. We are also exploring the use of GPT models to develop hatch patterns that we can integrate into our newly generated materials. And lastly, we are interested in pushing AI-rendered views back to the BIM elements that they are overlaying. For this session, we will provide you with the Dynamo Player file that we are covering, as well as a how-to for setting up NVIDIA GPUs and CUDA to run your diffusion models significantly faster on your PCs. We will also give you a breakdown of the requirements needed to run our file, so be on the lookout for those materials.
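Of the two custom components described above, the Revit-side one might look roughly like this minimal Dynamo Python sketch (it only creates the named material inside a transaction; wiring the generated image into the material's appearance asset requires appearance-asset editing and is omitted here):

    # Hedged sketch: create a named Revit material from a Dynamo Python node.
    # Assigning the generated image as the material's texture is not shown.
    import clr
    clr.AddReference("RevitAPI")
    clr.AddReference("RevitServices")
    from Autodesk.Revit.DB import Material
    from RevitServices.Persistence import DocumentManager
    from RevitServices.Transactions import TransactionManager

    doc = DocumentManager.Instance.CurrentDBDocument
    material_name = IN[0]              # e.g. the name typed into Dynamo Player

    TransactionManager.Instance.EnsureInTransaction(doc)
    material_id = Material.Create(doc, material_name)   # returns the new ElementId
    TransactionManager.Instance.TransactionTaskDone()

    OUT = doc.GetElement(material_id)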

All right, so now I'm going to jump into a quick demo. I've opened up Revit and Dynamo Player. We have the material generator here-- the script. If you click on it, it'll open up all of your inputs. And then we'll go ahead and hit Run.

Now, the Run was complete. So we can scroll down and check out the image that was generated. You can see the file path for the image is here. You can actually highlight it and copy it and paste it if you want to go to that location. The image generated here is a little preview of what that image looks like.

And then lastly, we have a Material Creation Status. Because we checked the Boolean for Yes, it says success, try applying this. We'll hit OK. All right. So you can see that image is set up and added to that facade.

So yeah, I think we're all just excited about where generative AI can go and how it applies to our day-to-day workflows in Revit. So we really appreciate you sitting in on our presentation, and we hope we can do this again. Thanks so much.

LILLI SMITH: And thank you to all of our super-inspiring computational design leaders for their fantastic examples, including things that you can download and use yourself-- super-cool. So thank you. Now I'm going to turn it over to Sol for more about what we are working on next.

SOL AMOUR: Thanks Lilli. We'll now take a high-level look at some of the things we're actively working on right now. We have a Public Roadmap for Dynamo linked on the Dynamo website that showcases the bigger works that are in progress right now, as well as the things that we're considering doing next.

We would absolutely love each and every piece of feedback that you can give us-- the good, the bad, and the ugly. So please do come and put in your two cents on each card. You can also submit your own ideas for Dynamo, which are taken into consideration in our next Dynamo planning cycles.

We have a live beta for Dynamo connecting to Design Automation for Revit. This means that instead of having to learn C# and code yourself an add-in to run on Revit Design Automation-- which allows you to run things overnight on the cloud, for example-- you can create a Dynamo graph and bundle that up instead, dramatically lowering the barrier to entry. This is a fantastic way to run QA/QC scripts on your models overnight to ensure that problems are flagged to your team in near-real time, or to handle little cleanup tasks like those thousands of pesky line styles that come in from DWG imports.

We're also working on connecting Dynamo to Forma, allowing you to run graphs to help shape and drive your conceptual design explorations. The work is constantly evolving, starting with Dynamo graphs authored on the desktop in places like Revit, Civil 3D, or Dynamo sandbox. And eventually, we'll shift to a full web authoring experience in the future. We're incredibly excited about connecting the computational power of Dynamo with the powerful analysis, modeling, and context data capabilities of Forma.

We are also actively working on simplifying the way you both create and find packages. This will centrally locate everything you do with packages, allowing you to search with added filters, see all packages that you have uploaded for reference, and publish your own package in a much clearer way. We'll also be adding in loading and default screens, a ton of messaging, and general quality-of-life improvements. We're excited to see what you think when you get your hands on it.

Beyond what's actively under development right now, we're thinking deeply about the future of Dynamo and its relationship to the ever-changing world. So let's take a journey onwards towards Dynamo's North Star. This is Dynamo's situation today: while automations in Dynamo are extremely well-loved and valued by many, these customized workflows take too long to create, are too fragile to use, and are often difficult to share with success.

The value that Dynamo brings today is that it educates users on coding paradigms. It has a robust, rich, active, and helpful community-- so shout out to you all-- that expands the possibility of what users can do and is extensible for the community, by the community. We also have a steadily growing user base.

But Dynamo also suffers from challenges in its current form, with a 10-year-old code base that is siloed into host applications and extremely granular, requiring massive knowledge overhead to master. There are unique experiences between different Dynamo hosts, lowering overall consistency. And users face a fragile experience when sharing Dynamo graphs with others.

Today, Dynamo is monolithic and desktop-bound. And tomorrow, Dynamo is moving towards a nimble, highly composable ecosystem of SaaS applications that interacts richly with Forma and the AEC cloud. We can think of this in three distinct phases. The goal of phase 1 is hybrid, to position Dynamo as a bridge between traditional desktop workflows and the cloud. Here, people can begin to transition workflows from their desktop and use Dynamo with other Autodesk tools on the web, leveraging simple access to data and powerful new features that help them to connect this logic in their graphs.

The goal of phase 2 is to provide fully cloud-native workflows. Here, people can utilize rich access to cloud data and a growing ecosystem of more granular capabilities to orchestrate complex workflows, using rich tool-building UI and collaborative environments. And the goal of phase 3 is to really shift the paradigm, where users can find or use increasingly powerful capabilities that feel magically simple.

So we have an opportunity to transition Dynamo from the desktop to the cloud, focusing on access from anywhere, connection to anything, and a dramatic reduction in the time needed to find success. This means that we can not only migrate to the cloud, but actually evolve what Dynamo and visual programming are and can be through the breaking down of silos, enabling rich connection and collaboration, and a partnership between human and machine, where both get better in parallel. This means that the people, offices, and products we connect to are enriched through a more powerful, smart, and adaptive Dynamo.

In concrete terms, this means you can expect a single Dynamo-- a one-stop shop that sets a strong foundation for us to build upon. It will be richly connected to the cloud-- first from the desktop, and then natively-- and much, much simpler than it is today, while still retaining that amazing power of what you can do with it today, but also going further in augmenting it with smart AI tools that work with you to get you where you want to go faster. It will be highly connected to the entire Autodesk ecosystem, allowing you to orchestrate workflows between different design domains.

And it will be inherently consistent, allowing you to trust that the outcome from a script is what you actually designed it to do. We're extremely passionate, excited, and driven to realize this Dynamo future, and we hope you come along for the ride with us. Now I'll hand it back to Lilli for some closing thoughts.

LILLI SMITH: Thank you so much, Sol. That was a really beautiful picture of the future. And the future of Dynamo is really, really bright.

I want to leave you now with some closing thoughts about continuing your journey to automation right now. So think for a minute about how robotics and automation have changed automotive production and the resulting improvements in the cars that we drive today. Think about the automotive manufacturers that are embracing new technology and are really reaping success.

Now imagine how leveraging more automated and digitized ways of designing and constructing might impact your business, and help you address the challenges and opportunities that we are facing. Think about all the amazing ways our inspiring leaders that we saw here today are automating their workflows right now. How much more might you be able to do with the workforce that you already have?

It doesn't have to be super-complicated. This machine is sorting recyclables. Seems like it saves a lot of tedious, sticky, and perhaps stinky work. It might even take two nodes and a cup of coffee. Shout out to John Pierson and the Rhythm package for these nodes, which can help you update your Revit content library to Revit 2024. And there's a great community of people out there to help you smash out your work in minutes.

In a Harvard Business Review whitepaper, Martin Fischer puts it a little bit more starkly. He notes that younger generations have no patience for work that could be automated. They don't tolerate it. They're just going to leave. What will you automate next?

Thank you again to our inspiring computational design leaders who have shared their talents, their stories, and best of all, their Dynamo files with us today. I find great hope in their stories of innovation and automation that they have shared with us. And thank you for listening.

We hope that you will check out all the other great classes on Dynamo and Generative Design here at AU 2023 this year. Keep innovating, sharing your knowledge, and keep in touch. We'd love to feature your project at AU 2024. Thank you.
