AU Class

Vault Advanced Administration


Description

Basic administration of Vault software is fairly straightforward, but what about when your data has gotten dirty over the years? Ever feel like you'd like to run your data through a power wash? Maintaining a clean, functioning Vault takes time if done manually. In this class, we'll explore ways to use PowerShell and the API to clean, rearrange, and add metadata to your data. Duplicate file names, missing information, other data sources, or maybe a new file structure: you can handle all of these things and more with PowerShell and the API.

Key Learnings

  • Learn how to analyze a vault for data issues
  • Learn how to set up corrective actions to clean the vault using PowerShell
  • Learn how to create some weekly tasks for reporting
  • Learn how to utilize the Job Processor to help maintain a clean, healthy Vault

Speakers

  • Kimberley Hendrix
    Based in Tulsa, Oklahoma, Kimberley Hendrix provides custom solutions for lean engineering, using Autodesk, Inc., products and industry knowledge to streamline design and engineering departments. Hendrix has worked in the manufacturing industry for over 30 years and specializes in automated solutions for the heat exchanger industry. She has worked with Autodesk products since 1984. Hendrix is associated with D3 Technologies as the Manager of Data Management, focusing on data management, plant, automation, and mechanical issues.
  • Lauren Drotar
    Lauren got her start back in 2009 when she began attending her local technical high school's drafting program. Since then, she has gone on to pursue her mechanical engineering degree and has worked in numerous sectors of the industry, including firearms, diesel engines, and fluidics. In 2020, Lauren transitioned from being an Autodesk customer to a member of the Autodesk partner channel when she joined D3 Technologies' Data Management team, focusing on enterprise integrations using Data Standard and coolOrange tools. She was lucky enough to attend AU in 2017 and spoke at the 2019 Autodesk Accelerate conference as well as AU 2019. When she isn't working, Lauren can be found reading, hiking, mountain biking, or spending time with friends.
Transcript

KIMBERLEY HENDRIX: Hi, thanks for joining my class. This is Vault Advanced Administration. I'm Kimberley Hendrix with D3 Technologies out of Oklahoma, and I'm going to spend the next 30 to 40 minutes or so talking about how to administer your vault in an advanced way, using some PowerShell and some techniques that I've learned over the years.

So with that, we'll get started. Our objectives today: we're going to learn how to analyze the data in our vault for these kinds of issues, with some useful reports and some data pulls. We'll learn how to set up some corrective actions to clean the vault using PowerShell and the Job Processor.

We'll learn how to create some weekly tasks--I refer to them as Cron jobs--that create reports and tasks to keep the vault and the Job Processor clean. And then we'll spend some time working on the Job Processor to help maintain a clean and healthy vault: a self-cleaning vault and a self-cleaning processor. So let's kick it off.

We're going to start off with analyzing the vault, because if you don't know what's wrong with your vault, it's a little bit hard to clean it. So we're going to start with some of that. There are three ways to get the data that you need out of your vault.

The easiest one is the out-of-the-box reports. We'll look at the duplicate files report--that's an out-of-the-box report--and a way to correct that, and we'll use that report to do some corrective action.

There are some custom queries that we find helpful when we do maintenance checks, wellness checks, or cleanups of a vault, and I'll talk through some of those that we do. And then there are some PowerShell scripts that we create that help clean your vault, maintain your vault, improve your vault, and keep it an overall healthy environment.

So we're going to start off today with the out-of-the-box reports. I'm sure most of you have seen this. It's pretty handy to look at the duplicate files report that's right out of your vault. Duplicate files can create unknown errors in your vault. There's a setting you can turn on that says Only Allow Unique File Names.

The problem is that some people have had their vault for 10 years or whatever, and they started off without that box checked. And then they're like, I can't ever go back, because I have so many duplicate files. I have Part 1 everywhere.

I have this half-inch screw named the same all over my vault, and if I check that box now, I get a lot of errors. So I go run this report--I do this by going to Tools, Administration, Vault Settings, which you see here, and then I check Find Duplicate Names--and it creates a report like this.

It looks pretty benign until you hit that little Details button. And it comes up, and there are hundreds and thousands of duplicate files. Well, you can go through and clean them. Like this one part I've got right here--there are three of them in there, and I could go clean them. Some people have hundreds of the same files, and it would take weeks and months to clean that up to be able to successfully check that little box.

What I'll have you do when you run this: after you run it and get to the details, go into File and export that as a CSV file. We'll save that for a little later, and I'll show you a PowerShell script that will help rename those files and clean that up so that we can check that box.

So going forward, everything is unique--and then I'll show how to make that a self-cleaning issue as we go forward. We'll get to that in a minute. Just save that CSV file and we'll come back to it.

The other thing we've got is some custom queries, and I'm going to show you some of those live, as well as the ones I've got. What I look for when I'm doing a maintenance appointment or a health check on a vault with a customer is a few things.

One is files that are checked out for X amount of time. If you have files that have been checked out to user George for two years, that's a problem. Obviously, those aren't in use, and that needs to be cleaned up. I typically look for files that have been checked out for more than 30 days when I do a maintenance check, and I give my customer that report so it can be cleaned up.

I also look for change orders--if you're using that feature in Vault Professional--that haven't been modified for a period of time and aren't closed. I'll show you how to do that search as well. And then the biggest thing that I find in vaults is missing visualization attachments.

If you use your vault to its full extent--you're using the thin client, you're using the preview, and you're doing checks and balances with it--then those visualization attachments are important, and not having them creates errors: the thin client can't see the preview. So files without visualization attachments is also an important search that we'll do, and I'll show you how to clean that up and set it up to run every week so it stays clean.

And then orphan files--part files that are without a parent. It's rare that a part file should not have a parent. There are some occasions, but looking for those orphan files will also help clean up your vault.

This sample I have on the screen, and I'll do a couple of live ones here in just a second as well. You do the Find--I use the Advanced Find--and I set my file extension to a part.

And this is a relatively new property that's available, called Has Parent. I set that Has Parent property to false, and it gives me a list of all of the files that are orphaned.

And I can run a report on that. Thinking back to what I did with the duplicate files, I can run a report as a table and export it as a CSV. And once I have that CSV file, I can make PowerShell execute on that. I'll show you some of that.

Let me pull up my vault and show you a couple of other things that we look at as well. This is my standard demo vault--one of the original ones from Autodesk. If I start at the root of the Project Explorer, I can look for all files--let's just do files--whose file extension contains IDW for right now, and whose visualization attachment is none.

If I find that, I'm going to find a handful of files. My vault's pretty clean: I have four IDW files that do not have visualization files. Now, that's all cool, and those four files I could just go queue manually.

But if you're like most people out there, you're going to have hundreds of them, especially the first few times that you run it. Visualization files get detached on a lifecycle change or a property update. Or a file gets checked in and the job is queued, but somebody changes the file before the Job Processor runs it--you get a non-tip version, and you don't get a visualization file.

But for whatever reason, those things need to be cleaned up, and I have a script. I'm just going to show you real quickly--let me pull up my PowerShell. This script here, I call it D3Files Create DWF.

It is a pretty simple script. It does a search. The reason I pulled it up right now, before talking about PowerShell in depth, is that I want to show you the search function in here and compare it to the manual search I just did inside the vault client.

So I'm looking for-- I open my vault. I do use a utility from coolOrange called powerVault. It is a gateway into the API, which makes things much easier. Otherwise, opening the vault connection through the API in PowerShell would take about 25 lines of code.

With powerVault, it takes four variables and one line. So I use that. Then I set up the properties: I get all my property definitions for files, and I look for the property named Visualization Attachment. And then I set up search conditions.

You'll see I have three search conditions set up here. It would be as if I put three conditions in the dialog. The first one says my Visualization Attachment property--that's the Prop.Id--is none. The second one is my file extension, and for this first run it includes DWGs.

And my third one is Checked Out By, because I don't want to try to execute a PowerShell script on a file that's checked out, like this Box 1 2 3 here--it would just fail my script. So I look for those three and I run a search.

This is a standard API call to find files by search conditions, and it returns an array of files. I've actually run that down to this point already--down to this search right here, through line 117.
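
For reference, here's a minimal sketch of that search, assuming coolOrange's powerVault module for the connection and the raw Vault web services for the query. The property display names, the SrchOper code, and the server details are assumptions based on typical Vault API samples, not the class's exact script:

```powershell
# Minimal sketch, assuming powerVault is installed
Import-Module powerVault

# "Four variables and one line" instead of ~25 lines of raw API plumbing
Open-VaultConnection -Server "localhost" -Vault "Vault" -User "Administrator" -Password ""

# powerVault exposes $vault, the raw web-service manager
$propDefs = $vault.PropertyService.GetPropertyDefinitionsByEntityClassId("FILE")

# Three conditions, mirroring the manual Advanced Find
# (SrchOper 3 = "is exactly" as I recall it; SrchRule "Must" ANDs them together)
$conds = foreach ($pair in @(
        @{ Name = "Visualization Attachment"; Text = "None" },
        @{ Name = "File Extension";           Text = "dwg"  },
        @{ Name = "Checked Out By";           Text = ""     })) {
    $cond = New-Object Autodesk.Connectivity.WebServices.SrchCond
    $cond.PropDefId = ($propDefs | Where-Object { $_.DispName -eq $pair.Name }).Id
    $cond.SrchOper  = 3
    $cond.SrchTxt   = $pair.Text
    $cond.PropTyp   = "SingleProperty"
    $cond.SrchRule  = "Must"
    $cond
}

# Standard API call: returns an array of files (results are paged,
# so loop on the bookmark if you expect more than ~100 hits)
$bookmark = ""
$status   = $null
$files = $vault.DocumentService.FindFilesBySearchConditions(
    $conds, $null, $null, $true, $true, [ref]$bookmark, [ref]$status)
$files.Count
```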

And if I do a files count, I get five files. So if I change this--let's just do that real quick--if I change this to DWG and run that same Find Now in the client, I get seven files, one of which is checked out already. So I wouldn't get that one in the search using my PowerShell.

So then my PowerShell goes through and says, OK, I got five files; then I do a second search for IDWs. We won't go through that just now.

But then I say, OK, I'm going to set this to run every week. And I don't want to do more than 1,000 a night or 1,000 a weekend, because I want my Job Processor to be able to run regular scheduled jobs throughout the day. So I'm just making use of my Job Processor during off hours.

That's the reason I have this limit of 1,000. I only have seven, so it would run really quick. And I say, for each file in this list of files that I have, I'm going to get the file ID, and I'm going to queue a property sync. If you queue a property sync, it runs the property sync job and then it queues a DWF.

And then I increment my counter by 1, I write out that I've done that one against the limit, and I keep counting as I go. So if I ran this right now, it would queue those six or seven jobs for me right away, and they would run overnight.
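
And a minimal sketch of that queueing loop, continuing from the search above. Add-VaultJob is a powerVault cmdlet; the built-in job type name and its parameter keys here are assumptions to verify against your Vault and powerVault versions:

```powershell
$limit   = 1000   # leave the Job Processor free for regular daytime jobs
$counter = 0

foreach ($file in $files) {
    if ($counter -ge $limit) { break }

    # Queue the built-in property sync job, which in turn queues the DWF job.
    # Job type name and parameter keys are assumptions; check your job server.
    Add-VaultJob -Name "Autodesk.Vault.SyncProperties" `
                 -Parameters @{ "EntityId" = $file.Id; "EntityClassId" = "FILE" } `
                 -Description "Property sync / DWF for $($file.Name)" `
                 -Priority 10

    $counter++
    Write-Host "Queued $counter of $limit : $($file.Name)"
}
```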

We'll get into more detail on this PowerShell in a little bit, but I wanted to show you how I can do the same search that I do inside the vault using PowerShell, and then act on the result. So I don't always have to run a report and use a CSV file. With duplicate files, I will have to use the CSV file, because getting that duplicate file search into a PowerShell search is a little more difficult; it was quicker to do it with the CSV file.

OK. If we were live, I'd ask for questions, but we'll save that for our normal time. So let's move on. Now we have all that data gathered up: we have PowerShell scripts that run searches, and we have CSV files that we saved from searching. What do we do with that data?

Well, that's when we can make educated decisions on that data and run things like renaming files, generating DWF files, moving files, adding lifecycles--whatever it is we need to do to make your vault work at its optimal level. So let's look at some options.

Duplicate file names are the number one request I get from customers for cleanup. They cause so many problems with copy designs--I could just go on and on. That's the number one I get.

So if I have that CSV file that we created a few slides back, I can execute on that, and I'm going to show you how to do that in a second.

The other thing that I get is: we've had Vault Basic for three or four years. It's a great product, but now we're ready for the new features in Vault Workgroup or Vault Professional. That's fantastic--I can upgrade from Vault Basic to Workgroup or Professional.

Now I have all these new features, but my data is not in a position to take advantage of them. So how can I go through and get all my files into the right category, with the right lifecycle and the right lifecycle state, in less than the two years it would take an intern to do it? It takes weeks and weeks to do that by hand.

The other question I get a lot is: I have a stack of data. Maybe I acquired another company, or a vendor gave me 5,000 files and an Excel sheet with all the information around them. I loaded all those files into my vault, and I have all this information that I would like to have in my vault to make it searchable, but I don't want to type it, right? I'll show you how to take that data and update the vault using a CSV file.

So let's look at the first one. This one is my duplicate file rename. It's not recommended that you just run this on the fly; we typically do a lot of testing and run it in a test environment first.

This code snippet here on the screen will log into the vault--that's what we've got up here: the username, the password, open the vault--and then it imports the CSV file. That's the CSV file we created earlier in the class from our duplicate files report. I took everything out of there but the list of files, because all I need is the list of file names.

And then, for each row in my CSV file, I execute on it. I go find the files that have that name, and then I take those files and sort them by date.

This is how I've done it--I can do it however the customer or your environment needs it. But I sort them by date, and I take the newest one, the most recent, and I skip it. I leave it with whatever the original file name is.

Then, starting with the next oldest one, I rename it to the same file name with _Dup1 appended. The next one would be the same file name with _Dup2. So I'm just renaming the files in there.

I'm not doing a search and replace. I could; that would be a little more complicated--we'd have to open Inventor and do some stuff. For right now, I'm just renaming them as duplicates. That gives me another search opportunity later: an intern can go look for all files that end in _Dup and start doing search-and-replaces or renames, because they might not actually be duplicate files--they may just have the same file name.

So we could rename them to something logical, and that cleans it up. Eventually you get rid of all the _Dup file names, and you have a very clean vault with all unique file names. And then you don't get those errors anymore.

You can check that little box in your settings, and it's clean from that point on. If somebody tries to check in a file and that file name already exists, it'll flag them with that error and have them rename it.
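
As a recap, here's a minimal sketch of that duplicate-rename pass, assuming the exported report CSV has a Name column and that powerVault's Get-VaultFiles lookup is available. Renaming a vaulted file actually requires a checkout/check-in cycle through the API, so Rename-VaultDuplicate below is a hypothetical helper standing in for that step, not a real cmdlet:

```powershell
# Import the list of duplicate names saved from the report (path is illustrative)
$rows = Import-Csv "C:\Temp\DuplicateFiles.csv"

foreach ($row in $rows) {
    # Every file carrying this name, newest first
    # (the date property name depends on your vault's display names)
    $dupes = Get-VaultFiles -Properties @{ Name = $row.Name } |
             Sort-Object { $_.'Checked In' } -Descending

    # Keep the most recent one as-is; rename the rest _Dup1, _Dup2, ...
    $i = 0
    foreach ($file in ($dupes | Select-Object -Skip 1)) {
        $i++
        $base    = [System.IO.Path]::GetFileNameWithoutExtension($file._Name)
        $ext     = [System.IO.Path]::GetExtension($file._Name)
        $newName = "{0}_Dup{1}{2}" -f $base, $i, $ext
        Rename-VaultDuplicate -File $file -NewName $newName   # hypothetical helper
    }
}
```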

Lifecycle updates--that's where you've made the move from Vault Basic. Or maybe you've been on Vault Professional for a while, and you have one rogue department that's like, I don't need workgroups or I don't need workflows. But now it's time to get everybody on the same page.

So I can create that search--think about the search that I showed you in PowerShell--that looks for whatever it is I'm after: all the files in this folder, or all the files older than five years, say. And you're going to move them to a category called Obsolete, change their state in the obsolete workflow definition, and set them to an obsolete, read-only state.

We can do that with PowerShell. We create that same search that we talked about, then we iterate through each file and perform an Update-VaultFile. Update-VaultFile is a cmdlet that comes with powerVault, and I'll show it to you.

So in this instance, I've got my search set up--the search is above this. And all of the PowerShell scripts that I'm showing you in today's class will be available in the class documentation on AU, so you'll have access to these files.

I'm going to loop through each file and execute an update. I get the file, so I have the file ID, and then I update the lifecycle using Update-VaultFile with the full path. I'm going to change my lifecycle definition to Engineering and my status to Release.

I update my counter, and then I iterate to the next file. If you think about doing all those steps manually--select a set of files, right-click on them, change the scheme, right-click on them again, change the status, or move them to a different folder or update a property--it takes a really long time.

This script will run through 1,000 files in about 20 to 30 minutes max, maybe even quicker depending on your systems. This runs client-side, by the way; it uses the Vault client API. So you don't even have to be on the server to do it.
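
A minimal sketch of that lifecycle pass, assuming $files came from a powerVault search (so each file object carries a _FullPath) and that your vault has an Engineering lifecycle definition with a Release state. Update-VaultFile is the powerVault cmdlet named in the class; verify the parameter names against your powerVault version:

```powershell
$limit   = 5000   # so many per night
$counter = 0

foreach ($file in $files) {
    if ($counter -ge $limit) { break }

    # One call replaces the manual right-click / change scheme / change state dance
    Update-VaultFile -File $file._FullPath `
                     -LifecycleDefinition "Engineering" `
                     -LifecycleState "Release"
    $counter++
}
Write-Host "Updated $counter file(s)."
```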

The next one that we do is a data dump from another source. This is really common when you're coming from an ERP system, or maybe you're changing ERP systems, or a vendor has given you a list of stuff. It's pretty common to get a list of file names and associated properties.

The last one I did, a customer of mine acquired another division, and we brought all their files into the customer's vault. We did a dump out of the old ERP system with all the properties and lifecycle states--whether they were released or obsolete or whatever. And I could do that update all at once using the CSV file.

So instead of having to do a search for files like I did in the previous ones, I imported a CSV file. In this instance, I called it properties.csv. That CSV file has a file name and five properties--Division, Division Manufacturing, Business Unit, Business Unit Manufacturing, and Region Manufacturing. Those are the five properties that I need to update.

So this Excel sheet--or CSV file--is just the name and those five columns across. For each entry in the file, I go find the file: I do a get-file by the file name. That returns a file object, which tells me everything about the file--the path, the ID, all the properties. It gives me all of that.

And I'm creating a hash table for my property array. So I'm saying my Division--which is the name of the property in my vault--equals entry.Division, for each line in the CSV. It's that column from that row.

I do that for each of the five and convert it to a hash table. You could type it out individually, but I use the hash table. And then I say: update the vault file, at that full path, with the properties in this hash table. And just that quick, it updates all five properties for each file in that CSV file. And it can, again, do hundreds or thousands of them at a time.
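
Here's a minimal sketch of that CSV-driven property update, assuming properties.csv has a Name column plus the five property columns, and that the vault has matching user-defined properties. The exact column and property names are illustrative:

```powershell
$entries = Import-Csv "C:\Temp\properties.csv"

foreach ($entry in $entries) {
    # Look the file up by name; returns a file object with path, ID, and properties
    $file = Get-VaultFile -Properties @{ Name = $entry.Name }
    if (-not $file) { continue }   # skip rows whose file isn't in the vault

    # Hash table: vault property name -> CSV column value for this row
    $props = @{
        "Division"                    = $entry.Division
        "Division Manufacturing"      = $entry.DivisionManufacturing
        "Business Unit"               = $entry.BusinessUnit
        "Business Unit Manufacturing" = $entry.BusinessUnitManufacturing
        "Region Manufacturing"        = $entry.RegionManufacturing
    }
    Update-VaultFile -File $file._FullPath -Properties $props
}
```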

So that's how we update a whole bunch of properties using a CSV file or an Excel file. I typically use CSV rather than Excel because it's quicker--Excel has all the object overhead behind it.

I can do the same thing with Excel--I can read from Excel. But typically I go into my Excel file and export it to CSV, because CSV is just text, and it runs much faster without the overhead of actually having to run Excel.

OK, let's take a breath, and let me show you a few things in the vault before we go into the server and Job Processor. I'm going to show you a few of the scripts that we've talked about and kind of run through them. Let's go through the update lifecycle one.

In this one, I'm looking for all files in the base category that have been modified since 1995. If you think about your search--if you did this search inside the vault--each one of the search conditions that you see here would be one of your conditions in the standard Find dialog.

So I'm looking for a category name that equals--in this instance, I have Part Machine down here. The classification is Design Visualization, and it's in this Part Machine lifecycle. And I get that list of files. Your search conditions can be whatever's important to you.

I'm going to do 5,000 or so a night. I get each file, and in this instance I'm going to change all of them to the Engineering lifecycle at the status Release--all of them. That's my criteria for this one.

I can have many different searches: I could say for this group, change it to Obsolete, and for this group, change it to Released. But it's a pretty simple script. The other one to talk about is creating DWFs--it's just so important to do that.

I'll talk about Cron jobs in a minute, which run this on a schedule. I have this running on most of the vaults that I manage that have powerJobs, which I'll also talk about in a minute.

I have this running every weekend for most of the vaults that we manage. It goes through and finds--it could find all files: IPTs, IAMs, IPNs, IDWs. In this example, I'm just finding the drawing files, because those were the most important to get through first.

Once all the IDWs and DWGs have visualization files--because those were the most important--I go back and add in all my part files, assembly files, and IPNs so that I get them all cleaned. And eventually, once my vault stays very clean every week, I take the file extension filter out completely: I want any file that doesn't have a visualization file, and I want to generate it. Then I put some catches in there for PDFs and document files so that I ignore them.

And I run so many a night, or so many each weekend, and I keep this job running continuously on the vaults that we manage. That gives you what I refer to as a self-cleaning vault: it stays very clean, with the properties synced and all the DWFs current.

These last two we'll talk about in our next section: server and Job Processor cleaning. The Job Processor is a tool that is invaluable in vault administration. You think about the Job Processor and you're like, yeah, it's over there in the corner, and it generates DWFs when I check in a file or do a lifecycle transition. Or it generates PDFs--that's new as of a few years ago.

The Job Processor can do so much more. Some of the stuff that we do in here--and this is just some of the cleanup stuff that I do--is, if we have a managed services agreement with a customer, we want to check things on a regular basis, like the size of your SQL database.

If you're running SQL Express--most people start that way--then there's a limit per database. I think it used to be 10 gig; it may be 12 now. I think it's changed.

But if you hit that max, it just shuts down your vault, and then you're down until we buy you a seat of standard SQL and get you upgraded. The same goes for drive space where your file store is.

If you max out your file store drive space, the vault just shuts down until you can free some up. So we want to be proactive on that. Same thing with backups--are they running successfully? And is your Job Processor clean?

I don't know if you've managed a Job Processor before. It runs a bunch of jobs, and it downloads files to the temp directory. And yeah, it's supposed to clean up after itself--it's supposed to clean those files--but it doesn't always.

If a file errors, that builds and builds, and before you know it your temp folder is three or four gigabytes, your Job Processor machine is running really slowly, and it can't keep up with your day-to-day processes.

Same thing with orphaned processes. If you're not rebooting your Job Processor every so often, you'll get orphaned Design Review or Inventor Server processes, or others. So we do some self-cleaning stuff, and I'm going to talk about how we do that. And then we'll talk about a Cron job.

So first off, to be able to do a Cron job--a timed event--we partner with the coolOrange folks. They do a couple of classes at AU as well; I know Christian's doing a class this year, so you can check some of those out.

They have a product called powerJobs. It utilizes the out-of-the-box Vault Job Processor--I always say it's the Job Processor on steroids. It adds a layer on top so that we can run any job that's written as a PowerShell script.

So you think, oh, that's cool--I can do fancy PDFs, or I can write out a STEP file or just about any file. But I can literally make it run a job with anything having to do with PowerShell. And if you're an IT guy or an administrator, that means I can do server status stuff, too. I can do file system stuff as well.

To do this on a timed basis, we do something called a Cron. Back in my old HP Unix days, we used Cron jobs a lot. There's a Cron trigger: you can create these Cron expressions using cronmaker.com, and this one comes from a sample.

The base part is this line right here, it's time-based. And this means that at 8 o'clock-- that's the 0, 0, 8-- once a month on the third Sunday, I'm going to run a job.

And I'm going to run it on this vault, set it as priority 10, and throw this description in there. So if you look at your Job Processor when the job gets queued, this is what you'd see: this job is triggered monthly on the third Sunday at 8:00 AM.

This is called a settings file. A settings file needs to be named the same as the job, and then that job will run based on these Cron settings. The one I'm doing here reports the size of the SQL database and the size and free space of the drive where the file store is located, and then emails those results to an administrator.
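
For reference, here's a minimal sketch of such a settings file (say, D3.ServerStatus.settings, named to match a D3.ServerStatus.ps1 job--both names are illustrative). The JSON shape follows the sample shipped with powerJobs as best I can reconstruct it, so verify the keys against your powerJobs version; the Cron string 0 0 8 ? * SUN#3 means 8:00 AM on the third Sunday of every month, as built with cronmaker.com:

```
{
  "Trigger":
  {
    "TimeBased": "0 0 8 ? * SUN#3",
    "Vault": "Vault",
    "Priority": 10,
    "Description": "This job is triggered monthly on the third Sunday at 8:00 AM"
  }
}
```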

Alternatively, we can send that information to our own database and manage it through our managed services. But this one is internal, so we'll go with the internal version. Let me show you what that script looks like. It has absolutely nothing to do with Vault.

It runs using the Job Processor on the Cron trigger, but it doesn't even log into the vault. All it does is set an empty string for the email text, just so I can build a body. It gets today's date in a string format. This CR sets a carriage return so I can make the subject line pretty. And I'm going to run some of this manually for you.

For the database size, it uses Invoke-Sqlcmd--that's a Microsoft cmdlet we can get in PowerShell--and it queries the databases in my Autodesk Vault SQL instance.

Then I only care about my vault database, so I pipe it through Where-Object. And then I do some fun stuff: I divide it by a gig so it's easier to read.

Then I set up my email string. The first line of my body says server status for today's date, then two carriage returns. My second line is that plus this: the database size for the vault so-and-so is the database size.

And you'll notice I did some ifs. If my database size is less than 1 after I do the division by a gig, then instead of putting a 0 in there, I do an if statement and say it's less than a gig--so it's nothing to worry about.

Then I do the same thing for my disk size. I use a Microsoft object that I can call on remote computers, and I filter it for my C drive, because that happens to be where my file store is on my system. And I get just the size and free space.

I do the same thing again--I divide by a gigabyte so it's easier to read--and I write to my email text: the disk size and free space on this server is this, plus a couple of carriage returns. Then I set up some email settings and I do a Send-MailMessage.
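
Here's a minimal sketch of that server-status script. Invoke-Sqlcmd ships with Microsoft's SqlServer PowerShell module; the SQL instance, database name, and SMTP details here are illustrative placeholders:

```powershell
$emailText = ""
$cr        = "`r`n"                          # carriage return to keep the body pretty
$today     = (Get-Date).ToLongDateString()
$emailText += "Server status for $today$cr$cr"

# Database sizes from the Vault SQL instance (sys.master_files reports 8 KB pages)
$dbs = Invoke-Sqlcmd -ServerInstance ".\AUTODESKVAULT" -Query @"
SELECT DB_NAME(database_id) AS Name, SUM(size) * 8.0 / 1024 / 1024 AS SizeGB
FROM sys.master_files GROUP BY database_id
"@

# Only care about the vault database, so pipe it through Where-Object
$vaultDb = $dbs | Where-Object { $_.Name -eq "Vault" }
if ($vaultDb.SizeGB -lt 1) {
    $emailText += "Database size for vault $($vaultDb.Name) is less than 1 gigabyte$cr"
} else {
    $emailText += "Database size for vault $($vaultDb.Name) is $([math]::Round($vaultDb.SizeGB, 1)) GB$cr"
}

# Size and free space of the drive where the file store lives (C: in this example)
$disk   = Get-WmiObject -Class Win32_LogicalDisk -ComputerName "localhost" -Filter "DeviceID='C:'"
$sizeGB = [math]::Round($disk.Size / 1GB)
$freeGB = [math]::Round($disk.FreeSpace / 1GB)
$emailText += "Disk size / free space on this server: $sizeGB GB / $freeGB GB free$cr"

# Email the results to an administrator
Send-MailMessage -From "vault@example.com" -To "admin@example.com" `
    -Subject "Vault server status $today" -Body $emailText -SmtpServer "smtp.example.com"
```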

Let's do some fun stuff and run that. This is inside PowerShell, and I'll go full screen so we can see it. I'm going to run these a few lines at a time so that you can see what happens. If I run these first lines, then down in the blue I can show you what the email text is.

Right now it says server status for date Friday, 9/3/2021. If I run this next section right here, which gets my database size, down through the email line, it goes and gets my SQL database size down here at the bottom, and I can look and see what my email text says now.

And now it says server status for date Friday 9/3. Database size for vault BAC MONO is less than 1 gigabyte. Mine is pretty small.

Then, for my next step, I get my disk size and run that. And if I look at my email now, the body will be--you can see right here--server status for the date, and the database size is less than a gigabyte.

And my disk size and free space for the server is 931 gigabytes, with 450 gigabytes free. That gets emailed using the rest of this down here. I don't know if my SMTP server is right, so I'm not going to run it; if my SMTP server were correct, I would get an email.

And I can schedule that to run once a month, once a week, or every day. I don't recommend doing emails every day, but I can run it as often as I want. I can also add other things to this.

The other thing that we do with Cron jobs is the self-cleaning Job Processor: cleaning out the temp directory and killing orphaned processes. Again, nothing to do with Vault--it's not even checking into my vault. It's just keeping my environment very clean.

All it does is say, hey, do I have any of these Design Review processes? If I'm running this job, Design Review and Express Review shouldn't be showing, right, because this is the only job running. So I'm going to stop any of those processes. If there aren't any, it just says there wasn't anything to kill. That's good.

Then I go to my temp folder--whatever it is for that machine--and I get all of the folders and delete them. I remove each of those items recursively through all those folders to keep it clean. I've been on some customers' Job Processors whose temp folders are 3, 4, 5, 6 gigabytes full, and that really slows down the speed of their system. So we want to be sure we keep that as clean as possible.

And let me show you that script. Here we go--it's very short, like 30 lines. Stop the processes, clean them up, clean up the folders. I can run it right here in PowerShell, or I can have my system run it.

So it says: you don't have any processes running. Mine's clean. And it cleaned up my temp folder, so my temp folder is now empty. I didn't have any processes to kill, so it said there aren't any processes like that--that's what the red is down here.
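
A minimal sketch of that kind of cleanup job is below. The process names are illustrative; check Task Manager on your Job Processor for the ones that actually get orphaned in your environment:

```powershell
# Kill orphaned viewer/translator processes left behind by failed jobs
$orphans = Get-Process -Name "DesignReview", "ExpressReview", "InventorServer" `
                       -ErrorAction SilentlyContinue
if ($orphans) {
    $orphans | Stop-Process -Force
    Write-Host "Stopped $(@($orphans).Count) orphaned process(es)."
} else {
    Write-Host "There weren't any processes to kill."
}

# Recursively empty the Job Processor account's temp folder
Get-ChildItem -Path $env:TEMP -ErrorAction SilentlyContinue |
    Remove-Item -Recurse -Force -ErrorAction SilentlyContinue
Write-Host "Temp folder cleaned: $env:TEMP"
```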

With that, I'm going to show you a few references and some different places to get some fun stuff. PowerShell, with the Vault extensions and Vault Data Standard, is very powerful. You can use it to enhance your data, clean your data, maintain your vault, watch your server status, and let people know what's going on with your vault and what needs to be done.

There are a lot of good references. The Autodesk Knowledge Network is a valuable resource for examples and tutorials. The coolOrange product powerVault is invaluable, and it's a great entry into the Vault API.

As of now, they let you download powerVault for free. powerJobs, which makes your Job Processor run on steroids, is a product that they sell, so that would be a purchased product. I put a link to their website; they also have a great blog with tips and tricks out there.

Marcus with Autodesk--his GitHub is a wealth of information on Vault Data Standard and all things PowerShell. I also listed D3 Technologies, where I work, for our website and our blog.

And I want to let you know that all of these PowerShell scripts that I've shown you will be available on the AU website, I believe once AU is over. So with that, that's my contact information, and it will be available. I appreciate your time, and I look forward to our question-and-answer session. Thank you.
