AU Class

Testing Strategies for Python Fusion 360 Add-Ins


Description

We all know that testing your code is a critical step in deploying software, but manually following a script and pressing buttons on each screen is time consuming and error prone. In this class, we'll cover various testing strategies for Fusion 360 add-ins written in Python, including structuring your add-in for unit testing, automating integration testing, and utilizing continuous integration systems. The goal of these is to ensure that your add-in is easily tested and verified, which will drive down bugs and drive up user satisfaction. This class is geared towards advanced programmers and add-in developers, and will cover Fusion 360 command and command definition APIs, events, and palettes. Experience with web development concepts (to use palettes) is highly recommended. (Joint AU/Forge DevCon class.)

Key Learnings

  • Understand and choose appropriate testing strategies for your add-in
  • Understand and use the Fusion 360 command APIs
  • Learn how to use palettes to interact with Fusion 360 commands
  • Learn how to use a continuous integration server to build, test, and package Fusion 360 add-ins

Speakers

  • Jesse Rosalia
    Jesse Rosalia is the founder of Bommer, and the developer of the Bommer for Autodesk Fusion 360 bill of materials manager. He has over 17 years of experience in software development and architecture, including 2 years as the CTO of a small robotics startup (where he also dabbled in mechanical and electrical engineering), and is a mentor at the Highway1 hardware accelerator. Jesse is keenly interested in the intersection of hardware and software, particularly the use of software tools to aid in the hardware development process, and has a passion for building practical, useful, usable tools that delight and improve the lives of users everywhere. Jesse is proud to be an alumnus of Georgia Tech, and lives in San Francisco with his wife and 2 cats.
Transcript

JESSE ROSALIA: So real quick, just to make sure we're all in the right place. Who is this class for? Or who is this talk for? It's going to be heavy Python development, but it is for add-in developers in general because I think some of the concepts and technology that we're-- that I'm going to show off is applicable across the board. If you're so-- real quick show of hands, who's a software developer in the room? For the-- sorry to single y'all out-- for the two that didn't raise your hands, what do you do?

AUDIENCE: Designer in an architect group.

JESSE ROSALIA: Awesome.

AUDIENCE: Just edit designer [INAUDIBLE].

JESSE ROSALIA: OK, cool, well, so the last bullet applies to both of y'all. The perpetually curious are always welcome here. So real quick about me, and I'm not going to make you read all of that. But I am the founder of a software company called Bommer. We build an add-in, a bill of materials manager add-in for Fusion 360. I have many, many years in software and two years as the CTO of a cooking robot company, which ask me about that afterwards if you're interested. That was a lot of fun.

My passion lies at the intersection of hardware and software. It's one of the reasons I'm here. I went to Georgia Tech, where I met my lovely wife, and we have two cats named Alan Purring and Ada Lovemice which, again, terrible sense of humor. Buckle yourselves in folks. So real quick, I want to show you a very small demo of what we build and why we build it live within Fusion here. Because I think it'll motivate the talk quite a bit. So this is Fusion 360 for-- I assume everyone knows that-- and this is a simple model of a servo-controlled camera slider that I put together.

And now let's say I want to build a bill of materials for that. Well, in Bommer-- actually, even before I do that, I'm going to show you-- no, I'll go ahead and do that. So in Bommer, I can open my bill of materials and edit whatever fields that I might have set up for my BOM, part, name, whether it's excluded, whether it's the purchase line item or the part, et cetera and so forth. All of this is built into a Fusion 360 table command input.

It's all generated. The columns are generated by your setup. The rows are generated by the data that's in your model. And we have various different interactive components. I can flatten the BOM. I can do it, expand all of the rows. I can do whatever I want to do here. As I mentioned, all of this is driven by our settings. So I can define what properties I want in my bill of materials, and that then influences the rest of the software.

And, similarly, if I just want a different surface for editing-- let's see here-- I have the ability to edit just one component in another form, also generated dynamically from the data and from the properties. You can see a theme here. We do a lot of dynamic UI generation based on the user's configuration, as I would imagine many data management add-ins, or other add-ins that are configurable in the way that ours is, would do.

And that presents an interesting testing problem because I, to verify this, would need to go and click all of those things, and build some representative test cases, and understand what it is that I need to verify before shipping a new version out which is really the motivation behind a good 50% of what I'm about to present to you. Just to sort of complete the demo, Bommer also lets you export to an Excel spreadsheet. Let's call it AU2017.

And I was testing this earlier, as you can see. And so our users use this to build their BOM in Fusion, pull their BOM out of Fusion, and then go do whatever they would do with their bill of materials. External interfaces like that, writing to Excel, testing files, I mean, those are all integration test points as well. And so we looked at this and we looked at our software development processes and said, this-- I was one guy for a while. I'm still the only developer on our team.

So it was a challenge for us to keep up with the code-build-test cycle to get updates out to our users as fast as possible. And that's largely why we put together what we did that I'm going to show you today. Any questions so far? General norms for the talk. Feel free to raise your hand, ask questions anytime. I'll try and get to them if I'm not mid-sentence or looking away for some reason. We will have to keep to time, and I think I should have time for any questions.

But there will also be my contact information at the end if y'all have any follow-ons. So what are we going to cover? We're going to cover unit testing and mocking, specifically, unit testing and mocking with Fusion 360 objects. We built a little bit of technology that lets us, basically, unit and integration test all of our Fusion 360 integration points in our add-in. And then we're going to cover integration testing, specifically, UI automation testing so that we can script all of the clicking around, and filling in data, and clicking OK, and testing that things do what we expect them to do.

And then a brief talk, although this part of the talk might be obsolete based on what Brian was telling me earlier, the tools that I use including a different editor other than Spyder to get my job done. But real quick before we go into this. This may be a stupid question, but I want to ask, why automate our testing? And actually, if anyone has any opinions, why even ask this question? I'd love to hear them.

AUDIENCE: [INAUDIBLE]

JESSE ROSALIA: It's faster. Yeah, I mean, certainly-- well, so that's-- excuse me-- that's an interesting answer because it's definitely faster to run and release, not so much faster to develop, right? So anytime you automate something you add some overhead. And so we better be getting something for this at the end. Still, as a single developer or on a small developer team, usually, speaking in my experience, the trade-offs are totally worth it.

The amount of energy you put in at the beginning is paid back in dividends at the end by having a test fixture or a set of tests that you can run up against. But it's always worth keeping questions like this in mind when we talk about automating things. Like, why are we actually doing this? And so with that, let's launch into some examples. We're going to go into unit testing and mocking strategies, first.

So our objective today, or at least the objective of this section, is to set up a project for testing and write unit tests for code that uses Fusion 360 objects. And to do that, we're going to cover how we set up our project. We're going to cover unit testing with the unittest framework. I should have asked earlier, for the developers in the room, are you Python developers? What language? C#. OK, OK. Any other languages? I'm sorry? VB, OK, .NET, as well. OK. So my add-in's written in Python. We're going to talk a lot about Python.

Similar frameworks exist, although if you challenge me, I couldn't name them for C++ and VB. Although, it may appear with similar-- or with different construction. So the concepts, though, are still going to apply. But I do apologize if I lose you a bit in Python land. If you have any questions, just ask. The reason I ask that, though, is unittest is a unit-testing framework that ships with Python 3. So it's a natural place for us to start. We'll link in the Fusion 360 APIs so that we can mock those objects.

We'll talk a little bit about how we had to build on top of unittest in order to support some of the way the API is defined. And then we'll look at writing some tests. So for the purposes of this talk, I assume the following project structure. I have a parent, or a root folder, called au2017. In that, I have my add-in, the AU2017 add-in, which has my Python file, my manifest, and my resources folder. And then as a sibling, I have my test folder with an __init__.py. And for the non-Python developers in the room, __init__.py is just a piece of code that gets run whenever you pull that package into another piece of code.

All of this stuff is hosted on my GitHub. This whole-- all of the code we're going to cover today. So feel free to go download that and check it out. I see no laptops. Well, I see one laptop open, so try to keep-- I mean, look ahead, if you'd like. But try to keep on with us because we're going to go through some of these things fast. And so as I mentioned, the __init__.py is run whenever we pull in some-- or run some code that lives in this test folder. What this piece of code here does in __init__.py is link in our add-in so that we can start testing files within our add-in.
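
That __init__.py isn't reproduced in the transcript; a minimal sketch of the idea, assuming the folder layout just described (the folder names here are illustrative), could look like this:

```python
# test/__init__.py -- minimal sketch; "AU2017AddIn" stands in for whatever
# your add-in folder is actually called.
import os
import sys

# Repository root is the parent of this test package.
_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Put the add-in folder on sys.path so test modules can import its files
# directly, without installing anything.
sys.path.insert(0, os.path.join(_ROOT, 'AU2017AddIn'))
```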

Fairly straightforward but it's an important step to how we run our tests. Any questions for that so far? Awesome. So let's jump to unittest and unittest.mock. As I said, it's built into Python. You implement test cases by deriving a class from unittest.TestCase. So in this case, ExampleTestCase has a setUp function that lets us set up any test fixtures. And a tearDown function that lets us destroy any-- or clean up after ourselves.

And this could be useful if you're writing complicated tests: everything from a nice self-documenting way to set up your expected values, as I did here, to establishing mock objects, all the way to database connections and anything else that you might want to include in your test. Although, I would caution you against using live APIs in your unit tests for reasons we'll talk about in a little bit. The method in the center, test_method, is the actual test code. unittest, basically, looks for anything with test_ in the name and it'll run that in the order defined in the class.
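
The slide isn't reproduced in the transcript; a minimal reconstruction of a test case along those lines (names illustrative) is:

```python
import unittest

class ExampleTestCase(unittest.TestCase):
    def setUp(self):
        # Runs before each test_* method: build the fixture here.
        self.expected = {'foo': 2, 'bar': 1}

    def tearDown(self):
        # Runs after each test_* method: clean up the fixture here.
        self.expected = None

    def test_method(self):
        # The actual test; unittest collects anything named test_*.
        self.assertEqual(2, self.expected['foo'])

    def test_another_case(self):
        # A second test sharing the same setUp/tearDown fixture.
        self.assertIn('bar', self.expected)
```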

So we could, if you wanted to, define a number of different tests that use the same fixture defined in setUp and the same cleanup logic defined in tearDown. And it's just a nice way of organizing our test code. unittest.mock is a way of mocking objects. Who knows what a mock object is? OK, cool. So in a nutshell, and you can disagree with me if I'm wrong, but a mock object is effectively an object that looks like the object you wish to, for lack of a better term, mock.

But it's not the real thing. It's not the real service. It's not the real API call. It's not the real database call. And it has features like, you can track which functions are called. You can pre-can values. So you use it to set up, again, your fixture around your tests. unittest.mock has a neat feature in that you can set up what's called a strict mock. A strict mock will cause an error if you try and call any functions that are not strictly defined in the object you're mocking.

For the C++ or the C# developers amongst us, that might seem weird, but Python is dynamically typed. You don't want your customers finding that you fat-fingered name to nam and shipped that piece of code out. And so this is using strict mocks in dynamically-typed languages as a way of making sure that if your code says nam and the interface says name, you get an error at test time and not at customer usage. Ask me how I know that.

In this mock, so just to walk through what we see here: we're creating a mock of real service. Real service has a get method called get_method, because I'm real original, that we want to return the value of Foo. And it has a method called do_something that takes an argument, and we want to script or set up some operation to happen in the test when that function is called. And that's what they call a side_effect. We can then use that object within our example code as the real service might be used.
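
The slide code itself isn't shown here; a sketch in the same spirit, with illustrative names, might be:

```python
from unittest import mock

class RealService:
    """Stand-in for the interface being mocked."""
    def get_method(self):
        ...

    def do_something(self, arg):
        ...

def record(arg):
    # side_effect: runs whenever the mock's do_something() is called.
    print('do_something called with', arg)

# spec=RealService gives the "strict" behavior described above: touching
# anything not defined on RealService raises AttributeError at test time.
service = mock.MagicMock(spec=RealService)
service.get_method.return_value = 'Foo'    # pre-canned value
service.do_something.side_effect = record  # scripted side effect

assert service.get_method() == 'Foo'
service.do_something(42)

try:
    service.get_methd()  # fat-fingered name
except AttributeError:
    print('the strict mock caught the typo')
```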

And then to finish out the section on unittest, we run our tests from the command line by just invoking Python, setting the module as unittest. They support test discovery, so you specify with -s the start directory to discover in and with -t the folder to use as your root. And it will go find all of the files with test_ as the prefix and look for all the test cases and then run them. We recommend putting that in a script so you can run it early and run it often. And the code that's up on GitHub has that script.
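
The invocation being described is roughly `python -m unittest discover -s test -t .` run from the project root. A small Python wrapper that does the same thing programmatically (not necessarily the script in the repository; paths assume the layout above) is one option:

```python
# run_tests.py -- programmatic equivalent of:
#   python -m unittest discover -s test -t .
import sys
import unittest

loader = unittest.TestLoader()
# start_dir: where the test_*.py files live; top_level_dir: the project root.
suite = loader.discover(start_dir='test', top_level_dir='.')
result = unittest.TextTestRunner(verbosity=2).run(suite)

# Non-zero exit code on failure so a CI system can flag the build.
sys.exit(0 if result.wasSuccessful() else 1)
```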

Questions so far? Awesome. So now things get fun. Not that things weren't fun before, but now we start tying in pieces of Fusion 360 and actually testing things that matter as it pertains to our add-in. So the first thing we want to do is find the definitions that Fusion 360 ships with. It's effectively the interface for the Python API. They exist on Mac in Library/Application Support/Autodesk/webdeploy/production/, then a long string of numbers and letters, then Api/Python/packages/adsk. And similarly on Windows, it's all referenced to your AppData\Local.

What you want to look for is the hash, as I called it, that satisfies the following condition: Api/Python/packages/adsk will contain either the _core.so or _core.pyd files. And I say it this way because, in my experience, as Fusion updates, it sometimes leaves some of the older version stuff behind. So you might get 3, or 4, or 5, or 10 of these hashes in your production folder. But it does a good job of cleaning up method stubs and API stuff.

So you might hunt and peck through a whole bunch of different folders before you find the actual one you want to link in. I don't know that I published this, but I'm happy to furnish it to anyone who wants it. I just wrote a script that uses find that finds these files and then returns the actual folder that I can use. Once you find that, you want to link that, and, again, this is-- well, I guess I didn't mention this earlier. As you can see by the stickers on my laptop, I am a Mac user.

And so that is Mac, or bash, or CygWin, or whatever. I don't know that you can create symbolic links in Windows, so you might need to copy stuff in or figure out some other way around it. Since we're running stuff on the command line, you can use Bash on Windows in Windows 10, you can use CygWin, MinGW, any of these command-line interpreters, and get this sort of functionality. But the key point is we want this locally so that we can link it into our test project.

And also, because the hash changes every time you upgrade Fusion, and you don't necessarily want to be modifying your test code every time Fusion updates. So to finish this out, we're going to add this line, with our symbolic link, to the __init__.py and onto our sys.path, which is where Python looks for packages to import. This now lets us import adsk.core and adsk.fusion into our test code, which is what will let us mock those objects in a little bit. Questions on that? Cool.
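
Continuing the __init__.py sketch from earlier, the line being described might look like the following; here adsk-lib is assumed to be a symlink (or copy) pointing at the packages directory that contains the adsk folder:

```python
# test/__init__.py (continued) -- sketch; adsk-lib is assumed to point at
# .../webdeploy/production/<hash>/Api/Python/packages
import os
import sys

_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, os.path.join(_ROOT, 'adsk-lib'))

# Test modules can now do:
#   import adsk.core
#   import adsk.fusion
# and hand those types to the mocking helpers.
```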

So now there's a few more steps and then we actually get to writing tests I promise. Now we're going to look at mocking these objects. And as I mentioned, we had to enhance the built-in mock in order to support some of the functionality that the Fusion API supports. One specific example is if you're using a selection command input, you want to get the selected items. It's a property that's defined in Python as a property.

But you want to specify a function that will return that value based on state of your tests or whatever. The built-in mock unittest framework doesn't support that case particularly well. And so we built something for it. And code is available for anyone to use. I'm not going to dive into the code right now. Because it is probably more-- we'll go down a rabbit hole for sure. But if you have any questions, definitely let me know.

How it's used is we create a function called create_object. We pass in the fully-qualified name of the object. We then can treat it like a real, in this case, component. And we can verify that nam isn't getting called instead of name, for example, which, as I mentioned, is a good thing. How this is used and what the snippet is from is we built a component hierarchy builder that we use in our test code for Bommer.

So Bommer walks the component hierarchy, rolls up counts of all the different components to compute your bill of materials, at least the quantity field in your bill of materials. This helps us build that test fixture, or that test mock, so that we can test that code. And now let's put it all together. Now that I've thoroughly bored you all with the background, here's the action shot. We're going to build a test.

And, actually, I'm just going to go ahead and switch over to PyCharm which is super small, so give me a second here. And there's my test. It's still super small. How do I zoom? Oh, god. Guys, I'm sorry I forgot how to zoom in PyCharm. Can you all read that? No? Let's see here. Sorry, give me. I just saw it. Where is it? Yeah,

I'm just going to change the font. Appearance. Theme. All right. All right. Plan B. Plan B is not much better. All right, plan C. Yeah. So I was fighting with-- Oh, Jesus. See, now let me do this. Editor. There we go.

AUDIENCE: Does-- do they envision adding more [INAUDIBLE]?

JESSE ROSALIA: That's a really good-- excuse me-- that's a really good question. And one that I don't know that I can answer. I don't think it obsoletes any of the stuff I'm talking about right now. Because we will evolve our add-in, and we'll still need to test it. But it might be a question for the Autodesk folks in the room if--

[LAUGHTER]

Someday. But, no, it's-- as a complete aside, it's something we pay attention to obviously at Bommer because it would obsolete what we're doing, but we're not planning on stopping with what I just showed you. OK, now that we finally have some test code up. It's going to get a little awkward in the next section, but that's OK. We can see that I've created a unittest case for component_histogram. Now, I defined a little toy example where I take it-- it's a function that takes in a component, takes in a dictionary.

It takes the name out and adds one to that key in the dictionary. So I could count the number of times foo and bar show up in my-- excuse me-- in my component. And I'll actually just go ahead and show you that code right now. Fairly simple, fairly straightforward, but I need a component in order to execute this code. So I use my f360mock. I call create_object. We didn't talk about this yet, but part of create_object lets you pass in patches. So I say that every time that the object name is invoked, return that value as a string.

And so I've effectively created two components with the name foo, one component with the name bar, which is what I expect to see. And then the actual test calls component_histogram multiple times and then tests that the result is equal to what my expectation is. And so I can from my command line-- this I know I can zoom in, there we go. I can run it. Nope. There we go. How exciting.
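
The demo code itself lives in the sample repository; a reconstruction from the description, with f360mock.create_object's exact signature assumed (a fully-qualified type name plus a patches dictionary), is roughly:

```python
# Reconstruction of the demo, not the literal code; create_object's signature
# and the type-name format are assumed from the talk's description.
import unittest
import f360mock  # the speaker's mocking helper from the sample repository


def component_histogram(component, counts):
    # The toy function under test: bump the count for this component's name.
    counts[component.name] = counts.get(component.name, 0) + 1


class ComponentHistogramTestCase(unittest.TestCase):
    def setUp(self):
        # Two mock components named 'foo' and one named 'bar'; every access
        # to .name returns the patched string.
        def make(name):
            return f360mock.create_object('adsk.fusion.Component',
                                          patches={'name': name})
        self.components = [make('foo'), make('foo'), make('bar')]

    def test_component_histogram(self):
        counts = {}
        for component in self.components:
            component_histogram(component, counts)
        self.assertEqual({'foo': 2, 'bar': 1}, counts)
```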

But just to show that there's nothing up my sleeve. Let me go ahead and just change this. And it failed, and, as you can see, whereas I expected two foos and one bar, I got two foos and one ba. It's useful to know both that my tests are correct and that my data is correct. And so in this situation I might look at, well, what did I do wrong? Oh, my data is wrong, fix it. Whoops. Run my tests. And way to go. Test early, test often is a motto worth keeping in mind. And the nice thing about these unit tests with Fusion objects is it lets you do exactly that.

AUDIENCE: [INAUDIBLE] exactly [INAUDIBLE] those are [INAUDIBLE].

JESSE ROSALIA: Correct, in this particular case that's correct. If I wanted to change that-- so think about testing as: I have a method under-- or code under test, and it has some specification that it needs to adhere to. And my test is effectively a spec on that. So in this case, I've defined my spec as: case sensitivity matters. But if I wanted to change that, obviously, I could change the method under test and then change my tests to ensure that if I gave it capital F, Foo, and lowercase f, foo, that they both counted as the same.

Another example of testing with those patches. Let's say, as I mentioned earlier, I want to specify a function and that function gets called every time the string value command input value is called. In this case, it's a toy example because it's just returning a value, but you could use this to rotate through a set of values. You could use this to accumulate things that your test will-- or your code under test will add in. So for example, the example I gave earlier, I use this to track selected items in my selection command input.
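
As a sketch of that idea (again assuming the patches argument accepts a callable, per the description), the patch can compute its value each time it is read:

```python
# Sketch only: names and the callable-patch behavior are assumptions based on
# the talk's description of f360mock.
import f360mock

selected = []  # accumulated by the test as items get "selected"

def current_selection():
    # Invoked on every read of .value, so the returned value can change as
    # the test, or the code under test, accumulates state.
    return ';'.join(selected)

name_input = f360mock.create_object('adsk.core.StringValueCommandInput',
                                    patches={'value': current_selection})

selected.append('Bracket')
assert name_input.value == 'Bracket'
selected.append('Servo')
assert name_input.value == 'Bracket;Servo'
```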

Let's say I have a function that will select odd elements in a list of components. I can use this to validate that that data gets set properly. So let's take a step back for a second. Why are we doing all of this? And specifically with Python. Python supports what's called duck typing. If it looks like a duck, quacks like a duck, you can-- yeah, so I can pass objects with a name around. It's no problem, right? What's the big deal? Well, as I alluded to, or said explicitly, strict mocking in a dynamic language gives you that assurance that you're calling the right things and not calling the wrong things.

And in this case tight coupling between mocks and adsk.core and adsk.fusion insulates your code against evolutions of the API. Specifically, if something changes in the API, your tests will break. That's a good thing because, as we said earlier, you finding your problems is better than your customer finding your problems. So the earlier in the process you can see something break and fix it, the better off you are.

And now I-- despite all of that, I feel a little silly having to say this. One of the gotchas I found that I was actually going to email you about Brian is there are some structures that are not defined in those .py files, specifically, the event handlers for CommandCreated and CommandEventHandler. And so what that means in the context of what I just shared is you can't have code under test in the same file as a CommandEventHandler. It'll just break. And it's a bummer.

But at the same time it also enforces what I would say is good compartmentalization of your add-in code. You shouldn't have it in one big monolithic file. You should actually break up your code accordingly. So that it's easier to read and easier to understand.

Unfortunately, I'm going to have to skip this philosophical debate for right now because we're running low on time. But it's in the slides which I think you might have access to. And it's a fun debate as to how you decompose things for testing that I'd love to have with people afterwards. Or we'll come back to it if we have some time. Any questions on that so far? Yo.

AUDIENCE: How can you have [INAUDIBLE] associated data sets [INAUDIBLE]. the assembly you want to run your tests on [INAUDIBLE] load that or--

JESSE ROSALIA: So in the unit tests you would have to handle that in your setup. So define your data either in a file or in some other structures. And then build up your mock objects accordingly. So as I mentioned, the tests for Bommer use a builder pattern for loading a component hierarchy or an occurrence hierarchy. So we just say builder.addComponent.addChild.addChild.addChild, blah, blah, blah. And that lets us then build up all of those nested data structures.
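
Bommer's actual builder isn't shown here, but the shape being described is roughly the following sketch on top of the same create_object helper (all names assumed):

```python
# Sketch of the hierarchy-builder idea; not Bommer's actual test code.
import f360mock


class ComponentHierarchyBuilder:
    """Fluent helper that assembles nested mock components for a test."""

    def __init__(self, name='root'):
        self._name = name
        self._children = []

    def add_child(self, name):
        child = ComponentHierarchyBuilder(name)
        self._children.append(child)
        return child  # chain add_child() calls to nest deeper

    def build(self):
        children = [child.build() for child in self._children]
        # The real builder would patch whatever occurrence/child properties
        # the code under test walks; 'occurrences' is used here as a stand-in.
        return f360mock.create_object(
            'adsk.fusion.Component',
            patches={'name': self._name, 'occurrences': children})


# Usage: root -> assembly -> two parts
builder = ComponentHierarchyBuilder()
assembly = builder.add_child('assembly')
assembly.add_child('bracket')
assembly.add_child('servo')
root = builder.build()
```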

What we're going to get into here actually uses live Fusion data. So if you have a very complicated model and you can reason about it as an integration problem and not a unit test problem, you can use just anything that's available in Fusion here. Cool. So moving on. Our next section is about integration testing which is a misnomer, as we'll talk about here. Well, no, I'm going to hold that thought for just a minute.

Our objective here is to write automated integration tests for adding commands and run them in Fusion. This is something that, as I showed in the demo, is super powerful for us because we have a few commands that are really, really complicated. And they're complicated because they're all driven by customer data. We have no way of knowing what rows or columns are going to be in a table when a customer opens their model. And we've had some issues where we made some assumptions about the shape or size of some data that some customer has proven wrong with the way they model their designs.

And so being able to capture that in a repeatable test is really, really nice. We'll just do a quick review of the Commands API. I'm going to talk about a piece of software we wrote that we're calling sodium. And I'll explain that name a little bit later. And then we'll write some UI integration tests as well. So real quick, what is integration testing? Anyone have an opinion on-- and I say opinion because as I did research, no one agrees on this. As best I can tell, no one can agree on the term integration testing and what it applies to, so I'm kind of curious, what do you guys think?

Well, so let me get the conversation started here. Unit testing, I think we can all agree, is testing one specific unit of code. It's a function. It's a method. It's an algorithm, whatever. Functional testing is where agreement starts to break down, but I tend to define it as testing a function in the-- in the literal sense, not the coding sense. What does the spec say this thing should do? What is the user presented with? What do they type in? What do they click? What happens as a result? I want to test that end to end. And I want to do that with live services.

And that leaves this nice huge gap in the center for this thing called integration testing. Which I will, at least today, define as testing the integration of more than one unit of code using live or mock services. This is a wide berth we're given here. And so some of the stuff that we talked about in the previous section could be considered integration tests. We are integrating on top of mocked but quasi-real objects with the Fusion 360 API. We might consider decomposing our code so that algorithms are separate from integrations which are separate from actual live integrations.

So for the purposes of this code-- this section, we're talking about automated in-app, live integration tests but a lot of the same stuff that we just talked about still applies if you want to run sort of mocked integration tests. Real quick review of the Commands API. On the left, we have CommandDefinition, which is defined as a potential command. So when you define a command in Fusion 360, you first define the command definition. That's what you can attach to a button or a menu item or whatever. And then a command is a running command.

So after you've executed the command definition, you get access to this command, which you then can attach some lifecycle methods to. And a small selection of those lifecycle methods are out on the right. These are the ones that we tend to care about so these are the ones that I focus on, but there are many, many more events that you might be able to attach things to within your command hierarchy-- or within your command object. Structured in a time line fashion, not a very good time line but here we go.

We start with the CommandDefinition. It has a commandCreated handler. Inside that we get access to the command. We then add activate, input change, validateInputs, execute, destroy, et cetera. And that is the lifecycle of your commands within Fusion. What we do-- or what we want to do is effectively hook into commandCreated and probably activated and say, get all of the inputs out that this command has created so that we can influence them and then actually influence them then execute the command and test that the results actually happened. And that's what we did.
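
As a reminder of what that wiring looks like in a Python add-in, here's a minimal sketch; the command ID and strings are placeholders, not the sample project's:

```python
import adsk.core

handlers = []  # keep handler references alive for the life of the add-in


class DemoCommandCreatedHandler(adsk.core.CommandCreatedEventHandler):
    def notify(self, args):
        # The CommandDefinition fired commandCreated: we now have a Command.
        command = args.command
        command.commandInputs.addStringValueInput('name', 'Name', 'Demo')

        on_execute = DemoExecuteHandler()
        command.execute.add(on_execute)  # one of the lifecycle events
        handlers.append(on_execute)


class DemoExecuteHandler(adsk.core.CommandEventHandler):
    def notify(self, args):
        # Runs when the user clicks OK; the real work happens here.
        pass


def register(ui):
    # 'auDemoCommand' is a placeholder ID; a real add-in would also add the
    # definition to a toolbar panel so the user can launch it.
    cmd_def = ui.commandDefinitions.addButtonDefinition(
        'auDemoCommand', 'AU Demo', 'Creates a demo component')
    on_created = DemoCommandCreatedHandler()
    cmd_def.commandCreated.add(on_created)
    handlers.append(on_created)
    return cmd_def
```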

Real quick, before we jump in, getting back to the example code that's posted up on my GitHub. We now have a new folder we care about, AU2017 add-in tests. The way we write our integration tests is we use a completely separate add-in. There's no code linkage between the main add-in and the separate add-in. It's all accessed through the command definitions that are registered within Fusion.

And it's really nice because, again, philosophical debate if you want, but I personally believe that your main code should be your main code. You should be able to test object binaries with your integration testing framework which you can do this way because it's completely separate. And the really nice thing about the way the Fusion API works here is it lets us have that coupling through Fusion not through code.

In theory, although, I'm looking for Brian to maybe shake his head on this, and I haven't tested it, I believe that means you can use what I'm presenting here to test your C++ add-ins. Yeah, it should work that way, right? So that's pretty cool. It might mean that you have to introduce a new language to your stack but we've done a lot of the heavy lifting. So take advantage. So we wrote this package called sodium.

And real quick, the name. So I mentioned I have a terrible sense of humor. There is a chemical test called the Lassaigne test. It's named after a French guy. I'm going to butcher his name. He found that you could take pure sodium and fuse it with other elements, other compounds, to test for impurities. So sodium is a thing that enables Fusion tests. Groans, groans, groans, awesome.

[LAUGHTER]

So in all seriousness, we modeled it after unittest. You derive a class from sodium.CommandTestCase. You have your fixtures set up. You have your tear down. The big difference is you have a testCommand method, so not any arbitrary test_ method will work, which in at least its current state means you can't fixture a bunch of tests at once. It's a bit of a limitation that we're working to overcome. The second big difference is setUp is expected to return a command definition.

That's how the framework knows what to execute, what to hook into, and how to call your test code. And then one really quick comment, because I am apparently doing something that some of the Fusion developers have told me I'm not supposed to be doing by executing commands in this way: we execute commands with the terminate flag set to false, and we use the SelectCommand to close ourselves at the end. So what this does-- get_command_def-- and what that does is a way of preempting the running command to clean up after ourselves.

We also do-- I'm sorry, did I hear a question? No, I'm hearing things, sorry. We also defined the test runner where you can add one or a number of tests, and then run them, and then print the results at the end. This is fairly primitive. This is all still active development that we're working on. But in a nutshell, we want to run one or many tests. We want to capture successes or failures. And because we have a completion handler set up to print the results, we want to print the results at the end.

It works fairly well this way, but it'll definitely get better. A little bit of a more detail on the API and this is all, I think, stuff that we've talked about already. But setUp and tearDown are your fixtures. And setUp returns the command definition. Test, your goal is modify your input values, execute your command, and then verify your results. And then with the runner, you add your tests, you add your complete handler, and you run it.

Something I forgot to mention, runner is a global reference, and that's because within Fusion Python add-ins, you have to keep global references to various handlers and stuff around. We encapsulate all of that into member variables inside of the runner so all you need to do is keep a global runner around, and you don't have to worry about caching your handlers, or caching your test cases, or anything like that.

So with that, the three steps then to write an automated UI test or automated integration test are: identify the IDs of the commands we want to change, identify how to verify the effects of the command, and then write the test itself. Fairly straightforward. And so to do that we're going to write a test, and I apologize-- oh, Jesus. I am going to take a second more, and--

AUDIENCE: [INAUDIBLE].

JESSE ROSALIA: Ahh, thank you. Under, oh, yes, Font. 24. That did not do what I expected it to do. Oh, there we go. Yes. Awesome. That should be readable. So let's go over to, as you can see, our AU2017AddInTests project. We have some stubbed out code here, as I mentioned, it is an add-in. It's got a run. It's got a stop. And we're good there. So I'm going to go ahead and add that first test.

Just to refresh, we're going to verify that the commands were-- the command inputs were created. Useful for what we do with all of the dynamic stuff that we do. So first, class InputCreatedTestCase extends sodium.CommandTestCase. First thing is the base class requires a name so that we can identify it within the results. So we'll call this input changed. But that's basically all that's required in this __init__. Now we'll define our setUp. And that's going to return--

I'm going to be a little sloppy here and chain a bunch of stuff together. AddInCommand. So the command defined by our test add-in that we ship with that-- or that we have up on that sample project. All it does is have a text box called name, and it creates a component with Demo as the start of the name. So capital D, Demo, colon, and then whatever name you put in. We're going to define our test command which takes in the command and the inputs. And all we're going to do is assert that name is in inputs. Fairly straightforward here, and then tearDown. I'm going to do-- let's go and grab this guy.

I'll execute that. And actually one other thing I need to do here. Need to set isExecutedWhenPreEmpted to false. Otherwise, the model that I have open in Fusion will get a whole bunch of empty components because I'm preempting it in tearDown. And I don't want it to actually execute my command. I actually noticed that as I was testing prior to this talk, and one of my models had a whole bunch of weird components with weird names. So that's our test.
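
Reassembled from that walkthrough, the test looks roughly like this; sodium's exact method names and signatures should be treated as approximate, and 'AddInCommand' is the sample add-in's command ID:

```python
# Reassembled from the walkthrough; treat sodium's API details as approximate.
import adsk.core
import sodium


class InputCreatedTestCase(sodium.CommandTestCase):
    def __init__(self):
        # The base class wants a name to identify this test in the results.
        super().__init__('input changed')  # (fat-fingered in the live demo)

    def setUp(self):
        # Return the command definition under test; sodium hooks into its
        # lifecycle events and calls testCommand once the command is live.
        ui = adsk.core.Application.get().userInterface
        return ui.commandDefinitions.itemById('AddInCommand')

    def testCommand(self, command, inputs):
        # Don't actually execute when tearDown preempts us, or the open model
        # collects a pile of empty demo components.
        command.isExecutedWhenPreEmpted = False
        # The sample add-in defines a single text box input with id 'name'.
        assert 'name' in inputs

    def tearDown(self):
        # Preempt our still-running command by executing the built-in
        # SelectCommand, which closes the dialog for us.
        ui = adsk.core.Application.get().userInterface
        ui.commandDefinitions.itemById('SelectCommand').execute()


# Registration (names assumed): keep a global reference so Fusion does not
# garbage-collect the runner or its handlers.
runner = sodium.TestRunner()
runner.addTest(InputCreatedTestCase())
```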

And it doesn't do a whole lot, it just tests the name. But let's go ahead and add it to our runner and run it. Nope, not that. InputCreatedTestCase. Oops. Awesome. And so let's go back over to Spyder. The way these tests are run right now is you just run them in Spyder and observe the results in the console. As I said still active development. We're still working on these things. But for the sake of funsies, we'll go ahead and run it. And it's executing the test. And it's still thinking about it. And in a second here, it should pop up with something. Come on, Fusion.

What did I break? There we go. Just took it a second. So this is what it spits out when it's done. We got test results. The input changed-- but I fat-fingered that-- is a success. And, again, just to show you that I'm not lying to you, let's go ahead and break the test, run it again. And it broke, and it tells me where it broke. It spits out an exception that says, name2 is not in inputs. We know that because it's actually name.

So we got one more test to run. And now we're going to actually set some input values and observe the results. I'm going to cheat here in the interest of time and go over to-- it's like those cooking shows where the person baked the cake before and just kept it behind the podium. So I'm going to copy in the test that I already wrote. It's basically the same thing except, in this case, we're going to open a brand new document so that we do all of our work in a brand new document. We use the inputs parameter.

That's a dictionary of IDs to command inputs. And we set-- actually, there we go-- oops. Set the name of-- or set the value for the name input. We're going to execute the command. And then we inspect the active product's allComponents to see that that component is actually created. And then in the end we tear down the command and tear down the document. So let's go ahead-- oops. Did I not save that? There we go so-- oh, I guess gotta-- There we go. So now we'll-- now it's going to execute.
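
That second test, reconstructed the same way (sodium and helper details assumed; 'Demo:' is the prefix the sample add-in prepends to the new component's name):

```python
# Also a reconstruction; the inputs argument maps command-input IDs to the
# live command inputs so the test can set values before executing.
import adsk.core
import adsk.fusion
import sodium


class CreatesComponentTestCase(sodium.CommandTestCase):
    def __init__(self):
        super().__init__('creates component')

    def setUp(self):
        app = adsk.core.Application.get()
        # Work in a scratch document so the test never pollutes an open model.
        self.doc = app.documents.add(
            adsk.core.DocumentTypes.FusionDesignDocumentType)
        return app.userInterface.commandDefinitions.itemById('AddInCommand')

    def testCommand(self, command, inputs):
        inputs['name'].value = 'TestPart'
        command.doExecute(False)  # execute with the terminate flag false
        design = adsk.fusion.Design.cast(
            adsk.core.Application.get().activeProduct)
        names = [c.name for c in design.allComponents]
        assert any(n.startswith('Demo:TestPart') for n in names)

    def tearDown(self):
        ui = adsk.core.Application.get().userInterface
        ui.commandDefinitions.itemById('SelectCommand').execute()
        self.doc.close(False)  # discard the scratch document
```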

This one actually takes a little while because it's got to create the document. But it's running behind the scenes, in fact, if I went over to Fusion. There's my command. And there it is exiting. And I can go back over here and see that it successfully completed both tests. Using this framework, you can then automate pretty much any action in your add-in in the same way that you would write unittests for your unit-- your code under test, your units under test.

Questions. Anyone want to see anything else? Explore anything else? You have access to the full Fusion API. It goes without saying so you can do literally anything you want to fixture or tear down your code or your test which means you can do anything that you would manually click through as part of a test, test case, an automated test case.

Sodium also has-- and this is super beta, so I'm just going to talk about it briefly-- an inspector that will spit out the IDs of the command inputs that you could set values to. This is useful when we write our test cases for highly dynamic components-- or highly dynamic commands, because we may or may not know what is available for us to actually poke in that inputs dictionary. My vision, by the way, is for this to be-- has anyone ever used Selenium, the web testing framework?

The idea is you could click through on a web page and it just automatically tracks what you're doing. So my vision for this is you open a command and start typing things in. And it just automatically tracks what you're doing and builds the test behind the scenes. We're super not there yet but lucky for you, we are accepting pull requests. It's under active development. If you like this, if you're interested in it, please, get involved. There's instructions on how to do that up in the GitHub. And I think you guys-- I don't know how handouts work with AU-- but I did upload a handout with all of these links that you should have access to.

So we've got about 10 minutes left. I figured I'd spend a little bit of time talking about the tools that I use that you saw me fumble a little bit through during the presentation. But I do promise I love these tools. One of them-- the main one is PyCharm. PyCharm as a Python editor is pretty awesome. There are other awesome editors out there as well. I just sort of fell in love with this guy. I use Vim integration mode so I have all my Vim keystrokes supported in PyCharm. And so I can move very fast and do a lot of good things.

AUDIENCE: [INAUDIBLE]

JESSE ROSALIA: So unfortunately, with the way the Fusion 360 add-ins are right now, if I want to run an add-in in Fusion, I either have to run it through the add-ins box in Fusion, or execute it manually in Spyder. It would be a dream-- and I say this because it sounds like y'all are working on it-- it would be a dream to be able to execute things directly from my editor of choice, but, unfortunately, right now it's through Spyder.

Cool. So, yeah, but the nice thing about developing in PyCharm is I get all the advantages of code completion. It has a really good linter, has a really good syntax checker, which for something like Python, all of those are really important because you can and will shoot yourself in the foot typing things out without those safety nets. Configuring PyCharm to work with the Autodesk API is a piece of cake because of the work we did earlier linking that folder in. So we linked in Autodesk-- or adsk-lib.

We then can go into our project settings or project structure, select depths, click sources, and that's all it takes. And that's all it took for me to be able to get code completion as I was writing that test a second ago. There's a mention of one other-- and have you guys used pyenv for any of your Python development? So if you're like me, you have a million different compilers, and interpreters, and whatnot on your machine because you're always trying something else out. Pyenv is just a way of managing it. It uses shims to basically say, this directory gets this version of Python.

So I set up all my add-in developer-- or, excuse me, add-in directories to use 3.5.3 since that's what ships with Fusion, but I also have 2.8-- or, sorry, 2.7, 3.6 and various other versions installed on my machine. It's just a nice way of managing that. And that manages all of the ancillary applications like PIP and so on. And that's all I got, so, please, connect with me if you like what you heard.

My email, my LinkedIn, my personal GitHub, and then my company's GitHub. If you want to click around with Bommer, you can use this easy-to-remember link right here. Or you can just go on the app store and search for B-O-M-M-E-R. And it will be the first thing that comes up. If you have any questions, if you have any feedback, let me know. Despite the technical difficulties, I think we covered some good ground today.

And then just one last note, if you do connect on LinkedIn, just write a little note: saw you at AU2017, and I'd love to connect about blank. And I'd love to continue the conversation. Thank you very much.

[APPLAUSE]

Downloads
