Description
Key Learnings
- Get started with scripts, and understand key capabilities.
- Learn about debugging, troubleshooting, and best practices to resolve common issues.
- Gain insights into real-world examples of scripting.
Speakers
- Firoz Abdul Kareem is a Senior Technical Product Manager at Autodesk, currently focusing on Fusion Manage. His role involves a deep understanding of customer needs, which he translates into actionable plans for product development. Firoz's expertise lies in transforming complex technical concepts into viable product features, ensuring that the tool continues to meet user needs effectively.
- Pedro Rios is a Principal Engineer at Autodesk and one of the original developers of Autodesk's cloud PLM solution, Fusion Manage. Currently in DevOps, he's involved in continuous delivery and quality management of the solution, bridging developers and operations, and assisting Tech Support.
FIROZ ABDUL KAREEM: Welcome to this session on Autodesk Fusion Manage, the cloud product lifecycle management solution. Today, we'll be looking into the scripting capabilities within Fusion Manage. I'm Firoz Abdul Kareem. I am a Technical Product Manager at Autodesk, and I am the Product Owner for Fusion Manage.
PEDRO RIOS: And I'm Pedro Rios. I'm the Principal Engineer for Fusion Manage. I've been a developer on this project since the beginning, and I'm currently in DevOps.
FIROZ ABDUL KAREEM: Before we start, take a moment to review our standard safe harbor statement. So today, we'll get started with the basics of scripting, and we'll proceed to the deep-dive section where we'll take a look into the various objects, functions, and methods available within scripting. We'll also take a look at some of the practical use cases of scripting with some sample scripts. This will be followed by the Tips and Tricks section where we'll look into ways of using scripting efficiently, and then we'll wrap up with the Q&A.
Getting started. So server-side scripting is a very powerful and versatile feature that helps extend out-of-the-box Fusion Manage workflows. Scripting helps automate tasks depending on specific business needs of the customer. This includes making changes as needed by the organization, validating and preventing certain actions from occurring, and limiting certain actions to a specified group of people.
For example, it helps extend part numbering to suit specific requirements of each company. It can be used to send custom email notifications. It can be used to implement custom business needs, such as an order of value greater than a predefined threshold having to go through some additional approvals.
Scripts in Fusion Manage are written in JavaScript. Fusion Manage being a cloud product, scripts are edited online, they reside on the server, and they run on the server. This means that scripts cannot access local folders or files that are saved on your computer. There are no client-side functions and there is no access to the Fusion Manage UI elements.
PEDRO RIOS: Let's talk a little bit about the JavaScript language itself. Why did we choose that language? First of all, it's a very popular language. Latest polls indicate it seems to be the most popular language on the web right now. And it was designed to be simple and easy to learn so that web developers could quickly adopt it into their web pages. And in fact, we all use JavaScript every time we go to almost any web page.
So there, it runs on the client side, which is the browser. And its purpose is to control the presentation of visual elements, navigating to other pages, all the bells and whistles in the page. In our case, in Fusion Manage, it runs on the server side because the purpose is to manipulate the data stored in the database, so it has to run where the database is. And because it runs in our servers, it is sandboxed for security, and it's also transactions-safe. If anything goes wrong during a script, everything that's done so far is rolled back.
Now, JavaScript has some characteristics that are not exclusive to this language, but they are not used in all languages. So it's good to know, if you're already familiar with other languages, how JavaScript stands.
First of all, it is case-sensitive. So an uppercase letter is completely different from a lowercase letter. When you define your variable names, you have to keep that in mind. It's weakly typed, meaning you don't have to declare types of variables ahead of time. And the values you put in a variable can be different types across the script. It works fine. You have less worry about types. The JavaScript engine has to worry about converting those types whenever necessary.
Indexes are zero-based, meaning an array of elements will have numeric indexes, and the first element will be element number 0, not number 1. It is not formally object-oriented because it doesn't have traditional classes, but there are objects there. There are variables that don't hold simple values like a simple number or a text, but hold complex values with a bunch of attributes we call properties, or even behaviors we call methods that we can execute. So we will encounter several of these objects in the JavaScript code.
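The characteristics above can be seen in a few lines of plain JavaScript. This is a minimal sketch with illustrative variable names, not Fusion Manage code:

```javascript
// Case sensitivity: myValue and myvalue are two different variables.
var myValue = 10;
var myvalue = 20;

// Weak typing: the same variable can hold a number, then a string.
var x = 42;
x = "now a string";

// Zero-based indexing: the first element is index 0, not 1.
var parts = ["bracket", "bolt", "washer"];
var first = parts[0]; // "bracket"

// Objects: complex values with named properties and methods.
var record = {
  title: "Bracket",               // a property
  describe: function () {         // a method, a behavior we can execute
    return "Item: " + this.title;
  }
};
var summary = record.describe(); // "Item: Bracket"
```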
There is exception handling built in. Nowadays, most languages already have it, but it's good to know that you can use it to control the flow of your script and avoid crashes.
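As an illustration, here is a small, self-contained try/catch sketch (the function and values are made up for the example):

```javascript
// A helper that throws when its input is not numeric.
function parseQuantity(text) {
  var n = Number(text);
  if (isNaN(n)) {
    throw new Error("Not a number: " + text);
  }
  return n;
}

var result;
try {
  result = parseQuantity("abc"); // throws
} catch (e) {
  // Instead of crashing the script, we recover with a default value.
  result = 0;
}
// result is 0 here; a valid input like "5" would have returned 5
```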
And one important characteristic of JavaScript is even though it inherited syntax from structured languages like Java and C, it also provides alternative syntaxes in some cases that can make the code more efficient or at least more readable.
FIROZ ABDUL KAREEM: All right. So how do you get started with scripting in Fusion Manage? Tenant administrators have access to Script Library where new scripts can be created. While creating a script, the script type should be selected and a unique name should be provided for the script.
The script type cannot be changed after creation, although the unique name can be changed. It is also possible to import library scripts and enable auto-code completion while creating a script.
Once the script is created, it needs to be configured. Scripts can be configured to run on item creation, item modification, and to run on demand. On-demand scripts are custom actions that are available to run at any time. Any number of scripts can be configured to run on demand, while only one script can be configured on item creation and modification.
Scripts can also be configured to run on workflow actions as precondition checks, validations, or as post-processing actions, actions to be performed on successful completion of the workflow transition. Let's take a look at the various script types available in Manage and do a comparison. We'll start with condition scripts.
Precondition scripts determine what workflow transitions are allowed for the current user based on the current state of the item. They return a Boolean. If the return value is true, the workflow transition is made available to the user. If it returns false, the transition is denied.
Precondition scripts are triggered when the item loads so as to determine what workflow transitions should be presented to the user. It also runs during calculation of my outstanding work and when workflow notifications are sent. An example could be making quick approval workflow available only if the cost involved is less than a certain threshold.
You might also want to consider precondition filters if the purpose of the precondition script is only to validate actions related to item owners, approvers, or additional owners. Precondition filters serve pretty much the same purpose as scripts, but they are limited to owners or users who are approvers of the item. Precondition filters are faster and more efficient than precondition scripts.
Let's imagine the condition script passed and the workflow transition is made available to the user. The user now executes the transition, and this is the point where the validation script kicks in. It returns an array of error messages on failure: when it fails, the transition does not start, and the array that's returned by the script is displayed as an error message to the user.
For example, a validation script can be used to check if change tasks have been defined before initiating a change order. If the script fails, it can return a meaningful error message to the user, indicating that change tasks should be defined.
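A plain-JavaScript analogue of that change-task check might look like the sketch below. In a real Fusion Manage validation script, the change tasks would be read from the item and the array handed to returnValue; here a standalone function with an illustrative name stands in:

```javascript
// Hypothetical analogue of a validation script: it returns an array of
// error messages, where an empty array means validation passed.
function validateChangeOrder(changeTasks) {
  var errors = [];
  if (changeTasks.length === 0) {
    errors.push("Define at least one change task before initiating the change order.");
  }
  return errors; // empty array: the workflow transition may proceed
}

var failing = validateChangeOrder([]);                 // one error message
var passing = validateChangeOrder(["Update drawing"]); // empty array
```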
So, when the validation script completes, the workflow action proceeds and the workflow action completes. Upon completion of the workflow action, the action scripts are triggered to automate further processing as needed. Action scripts update data. They don't return anything as such. Action scripts can also be configured to run on item creation, modification, on demand, or on workflow escalations. For example, an action script can be used to assign individual approvals upon changing the approver group on an item.
Then there are library scripts. Library scripts contain functions that are to be reused in other scripts. Library scripts, as such, do not return anything, but they contain several functions which can individually return values. Library scripts cannot be triggered individually, but they are triggered by their hosting script, which is a script of another type.
An example of a library script could be one containing a function used to obtain user information. Such a function can be reused in other scripts.
Script invocation. Let's take a look at the order of script execution when an item is created. So the user creates the item and clicks on the Save button on the item creation form. At this point, the record gets created. This will trigger the initial workflow transition of the item. At this point, the action configured on the workflow transition runs. Along with it, the On Create script that is configured on the workspace is triggered.
The new record is displayed to the user. And at this point, the condition script runs in order to determine what further workflow transitions should be made available to that user. One thing worth noting is that any validation script that is configured on the first workflow transition does not run.
In case of item modification, the On Edit script is triggered upon clicking the Save button on the UI. The On Create and On Edit events that we discussed are available only on item details. That is, they are not available on tabs such as Relationships and Grid. Scripts can access data from these tabs; however, scripts cannot be triggered on events occurring in these tabs. On-demand scripts are invoked when the user explicitly runs that script from the UI. And it's not possible to configure scripts on deletion or archival of items.
Before we get into the deep-dive section, this is just a reminder of the basic syntaxes used in JavaScript for anyone who is not very familiar with the language, so take your time to go over this.
PEDRO RIOS: Now let's dive a little deeper into these scripts, what they can do and how they work. First, before the script even starts, it needs a context. It needs to know what it's being invoked for. So the invoker of the script, the action that triggered the script, must provide some contextual information so the script knows what it's doing and who it's doing it for.
First, there's the dmsID. It's the numeric identifier of the item where the script was invoked. All the item data is already populated in another object, but you can use the dmsID if you want to expose the numeric ID for some reason. Then there's the user ID, the user invoking the script, or the user the script is being invoked for in the case of condition scripts. It's just an ID, but you can use it to obtain more information using functions that we'll see later.
When the script is triggered by a workflow transition, we also need the ID of that transition so the script knows how to behave. Usually, we would decide the course of action depending on what transition is being performed. The custom transition ID is an identifier of the same transition, only this one is defined by the admin in the workflow map. So it's used like the transition ID, but it makes the code more readable because it can be a mnemonic name rather than a number.
The newStep determines what will be the order of this transition in the workflow history once it's performed. The workspaceID identifies the workspace the item is in. We can obtain it from the item itself, but it's already available in case you want to perform more operations in that workspace, like creating another item in the same workspace.
maxObjectDepth is information for the debugger. So only when you are debugging a script, it will be used to determine the granularity, or the resolution, of objects and subobjects that you are inspecting in the debugger.
When you're testing or debugging a script, you are the invoker. So you will be asked as soon as you click the Test button to provide this contextual information so that you can simulate the conditions for which you want to test the script.
Now as I said before, the dmsID is a numeric identifier, and before the script starts, an item object is pre-populated using this dmsID. So based on this identifier, we'll collect all the information about this item or related to this item from the database. We should notice that this dmsID is just a simple number. Everything this variable contains is a number.
But the item is an object. What that means is that it's a special type of variable with a complex value. It contains properties, that is, values identified by a name, like the descriptor in this case. It also contains methods that you can see as behaviors we can execute, for example, if we want to delete the item.
And most of those properties will be objects themselves, so they will contain their own attributes or properties. The descriptor is an example: it is itself another property, but it also contains other metadata about the item, like the owner of the item, the workflow state the item is in, and many others that are all listed in the Help pages.
One important set of properties in an item are the fields defined in the workspace config. Those fields are represented as all uppercase properties of the item object. So if we define the field with ID NOTE1, that's how we will access it in a script. Or, because of the versatility of JavaScript, we can also access it with this different notation where we put the field ID in a string and pass it in square brackets.
They both collect the same information or change the same field data, only the square bracket notation provides a little more flexibility. If, for example, we have fields whose ID can be calculated, suppose we have NOTE1, 2, up to 5, and you want to do the same operation on all of them, we could have five lines, one with each NOTE ID, or we could loop and calculate the ID using a numeric variable. So this will clear, will empty, the value of all the fields from NOTE1 to NOTE5.
Another example is-- because of this alternative notation, we can use the Object.keys function of JavaScript to obtain all the names of all the fields in an item. So in this case, we're going to print the names of all the fields and the values in each of those fields in a given item.
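Both techniques can be sketched with a plain object standing in for the pre-populated item (in a real script, item is provided by Fusion Manage and println prints to the test console):

```javascript
// A plain object standing in for the pre-populated item object,
// with illustrative field IDs NOTE1 through NOTE5.
var item = { NOTE1: "a", NOTE2: "b", NOTE3: "c", NOTE4: "d", NOTE5: "e" };

// Dot and bracket notation access the same field.
var same = (item.NOTE1 === item["NOTE1"]); // true

// Bracket notation lets us calculate the field ID in a loop:
// this empties NOTE1 through NOTE5.
for (var i = 1; i <= 5; i++) {
  item["NOTE" + i] = "";
}

// Object.keys lists every field name, so we can print name/value pairs.
var names = Object.keys(item); // ["NOTE1", ..., "NOTE5"]
for (var j = 0; j < names.length; j++) {
  // In Fusion Manage this would be: println(names[j] + " = " + item[names[j]]);
}
```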
Other attributes of the item include all the tabs that are not the item details. For example, the Grid tab, where we can define fields in a grid structure with rows and columns. This grid property is also an object that contains its own methods, like a method to add a new row to the grid, and attributes, like the fields defined for the grid in the workspace admin. And they're accessed the same way we access the fields from the item object.
It's important to notice that this grid, besides being an object that controls the grid as a whole, is also an array of all the rows in the grid. So in this case, we're accessing the status field of the first row in the grid, remembering that we start indexing at 0. And each row has methods as well, like if I want to remove this first row of the grid. So that's the array. If we call methods of the grid object itself without specifying a particular row, the actions will be performed on the whole grid of the item. So this clear method will delete all the rows in that item's grid.
Other tabs such as BOMs, or attachments, and all the other tabs, they are accessible in a similar fashion. They are all listed in the Help pages. And one important case is the classification fields. In the UI, they show up alongside all the item details fields, but they are defined elsewhere, so we access in the script with a different attribute, the classification attribute.
In this property, which is also an object, we have its own properties, which are classification fields, again, accessed in the same way as the item properties. And we have methods regarding this classification, such as getting the classification name selected for this field, for this item, or the full classification path.
Now let's see what's available for us, then, when we start a script. What are the functions and methods that we can use? First of all, there are all the methods under the item object: in the item object itself, like adding a new milestone, or in properties of the item, like BOMs and grids, all the ones we discussed. And also, some of these properties will have arrays that contain their own methods.
Besides that, we also have top-level functions that are not part of any object, but they perform important operations that we may need, like creating a new item, loading an existing item if we have the dmsID, or getting a print view for the workspace the item is in so we can format the data of this item.
One very important one of those functions is returnValue. Condition and validation scripts need this returnValue function to communicate back to the invoker what the result of the calculation was.
We also made available a println method. It's used to print information, but of course, the script doesn't communicate directly with the user. Usually, this println method will print lines only when you're testing or debugging a script. So if you want to know what the script is doing, you can use this to see what's happening inside the script while testing.
There are also other objects that are already available for every script. They're not necessarily related to the item, but they have useful operations that we can perform in a script as well, like sending emails, loading user information, or groups or roles, and the sequencer that we'll discuss a little bit later.
And, of course, on top of all that, you can define your own functions, typically in library scripts, but you can also define local functions in any script. Here, there's a snippet where I try to use a little bit of everything we discussed so far. So this will load the user using that userID parameter. So the Security object will give me all the information about this user in an object.
Then we can use the email object to create a new email that we're going to prepare for sending. The email address will be one of the pieces of information we obtained for the user. Then we use the item.descriptor to find out the workflow state of this item and put it in the subject. And finally, we use getPrintView, which will format all the information of this item, and send the email to that user.
The Sequencer object is used to provide a sequence of numbers, so it can be used to count events, to count elements, or simply to generate unique numbers based on a sequence. It is configurable. We can configure what the first value is and by how much we increment this value every time. If we don't want to specify that, it will just default to 1 as the first value and 1 as the increment.
Once we create a sequencer object in a script, we can use it in any other script. One limitation is that it cannot be reprogrammed: once it's defined, we cannot redefine it or even delete it, it will just stay there. But it's very lightweight, so you can create as many sequencers as you want.
Now to see that in an example, here's how we create a sequencer. We have to provide at least a name so we can reference it in other scripts. If we want, we can define the optional parameters, the first value and the increment. If we don't specify them, they default to 1. And in another script, if we want to use the sequencer we just created, we can just get it with this method, as long as we provide the right name.
To use the sequencer, we call the nextValue method, which calculates the current value plus the increment. So every time you call nextValue, a new value is generated. That's how we can guarantee unique values.
And we can create as many sequencers we want. If it gets complicated, we have this list method in the sequencer object that will provide an array with all the names of all the sequencers created so far. Here's an example of printing the names of all the sequencers created in the system using this method.
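The real Sequencer only exists inside the Fusion Manage runtime, but its nextValue semantics can be sketched in plain JavaScript with a closure. The names here are illustrative analogues, not the actual API:

```javascript
// Plain-JavaScript analogue of a sequencer: the first value and the
// increment are configurable and default to 1, and every call to
// nextValue returns a new value in the sequence.
function makeSequencer(firstValue, increment) {
  var next = (firstValue === undefined) ? 1 : firstValue;
  var step = (increment === undefined) ? 1 : increment;
  return {
    nextValue: function () {
      var current = next;
      next += step; // advance for the next call
      return current;
    }
  };
}

var seq = makeSequencer();        // defaults: starts at 1, increments by 1
var evens = makeSequencer(2, 2);  // a parallel, independent sequence
var a = seq.nextValue();   // 1
var b = seq.nextValue();   // 2
var c = evens.nextValue(); // 2
```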
Some of you who are familiar with Fusion Manage may have noticed that the sequencer behaves very much like an auto number, so what's the difference? The auto number is implemented directly in the database with database sequences, and you can only define one Auto Number field in the Item Details per workspace. So you cannot have two auto numbers in the same workspace.
Whereas, of course, we can create as many sequencers as we want. And once a sequencer is created, the values that we obtain with nextValue can be used in any fields of any tabs of a workspace, wherever your script needs them.
The auto number already has a configurable prefix that you can change. You can even calculate some information there, but with some limitations. Whereas the sequencer doesn't have a prefix itself, but because it's in a script, you can programmatically append or prepend any information to the number obtained.
The auto number's increment is not configurable. The first value also is not configurable. It will be whatever is there in the workspace. And some gaps may occur depending on what happens when you're creating items. But in the sequencer, as we saw, you can configure the first value and the increment. Also, the Auto Number field will always follow the same sequence. It will always be unique for that field.
Whereas, because you can create several sequences in the script, you can have parallel sequences of numbers as long as they have a different name for the sequences. So the same field can contain non-unique numbers differentiated by the name if that's what you want.
For example, let's compare the auto number functionality with the script sequencer. Suppose we have this workspace where we want to create a numbering scheme for the reviews, but we don't want just the number. We will also want to put the year of creation and the type of review that will be read from a different field, the Review Type field. And for each different type, we want to have a separate sequence of numbers.
Let's try to do that with an AUTONUMBER field first. So here's how we would define this AUTONUMBER field. The definition of the AUTONUMBER field is highlighted here. Let me expand it a little bit so we can read what's in there.
So most of it is calculating the prefix. And we can see, it's a complicated calculation. And most importantly, it's not JavaScript. This is SQL code using SQL stored procedures and functions, so you have to be familiar with that.
Now, if we try to do this populating the other field with the sequencer, we would have this code that does practically the same thing as the auto number. We create a sequencer whose name depends on the Type field; the auto number wouldn't have this capability, it would just have one sequence. And then we can calculate the date, take the last two digits of the year. We can pad the number, like the auto number did there.
And we can prepend the year and the contents of the Type field before the sequence number. It's important to notice that, although I didn't mention it in this example, the Type field is actually a picklist. So if we do this in AUTONUMBER and we read the Review Type field, all we would have is the numeric index of the label, not the label itself of the picklist.
Whereas here in the script, every time we read a picklist field, it already converts that index to the label. So in the AUTONUMBER, we would have the two digits of the year followed by an arbitrary number whose meaning we probably wouldn't know. But here, after the year, we will have the actual label of the type, whether it's tooling, product, or whatever.
So the solution using the script is more readable, it's easier to read because it's JavaScript, and it's more efficient because it converts picklist values.
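The formatting half of that script, building a review number from the year, the picklist label, and the sequence value, is plain string handling. This sketch uses an illustrative format; in the real script, the sequence value would come from the sequencer's nextValue and the label from the Review Type picklist field:

```javascript
// Build a review number like "24-TOOLING-0007" from a date, a review
// type label, and a sequence value padded to four digits.
function formatReviewNumber(date, typeLabel, sequenceValue) {
  var year = String(date.getFullYear()).slice(-2); // last two digits of the year
  var padded = String(sequenceValue);
  while (padded.length < 4) {
    padded = "0" + padded; // left-pad with zeros
  }
  return year + "-" + typeLabel + "-" + padded;
}

var number = formatReviewNumber(new Date(2024, 0, 15), "TOOLING", 7);
// "24-TOOLING-0007"
```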
FIROZ ABDUL KAREEM: OK. XMLHttpRequest object. This is a standard JavaScript object used to issue HTTP requests to a server. Fusion Manage supports this object in its scripting, and it can be used to issue API calls.
Using this object involves five major parts. The first one is initializing the object using a constructor, followed by opening the request by specifying a method and a URL. Next, of course, is setting up the request headers and the body, and sending the request. Once the request is sent, the response has to be received for further processing.
These are the properties and methods that are supported by this object. And one thing to keep in mind is that HTTP requests issued using this method operates only in synchronous mode within Fusion Manage. This means that the script will wait for the response of the request in order to continue with execution of the subsequent steps.
We'll take a look at an example. So there is this customer who has opportunities managed in Salesforce. These opportunities are copied over to Fusion Manage opportunity workspace by integrations. When the opportunity details change in Salesforce, there is an integration that updates these changes in Fusion Manage, but there are projects related to these opportunities, and those projects are managed in ACC, in Autodesk Construction Cloud.
So when opportunity details change in Salesforce, it is copied over to Fusion Manage through integrations, but then Fusion Manage has to update the related project in ACC using a PATCH call.
So here is a sample script that can be used for the purpose. In the first line, the XMLHttpRequest is initialized. The method and the URL are set up as specified in the second block. And in the third block, the request headers are set up. It may be noted that the Authorization request header is also set in this block.
So this Authorization Bearer token is required for authenticating against the PATCH API call to ACC. The fourth block is where the request body is set up. It could typically be a JSON body. And then the request is sent. In the last line, you can see that the response is captured in a JavaScript object so that it can be read and processed further. Fusion Manage supports XMLHttpRequest for the methods shown on the screen: GET, POST, PUT, PATCH, and DELETE.
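Put together, the five parts look roughly like the sketch below. The URL, token, and body fields are placeholders, and since XMLHttpRequest is provided by the Fusion Manage runtime, a minimal echo mock is included here only so the sketch is self-contained outside of Fusion Manage:

```javascript
// Minimal stand-in for the runtime-provided XMLHttpRequest, included
// only so this sketch runs on its own; it echoes the body back.
function XMLHttpRequest() {
  this.open = function (method, url) { this.method = method; this.url = url; };
  this.setRequestHeader = function () {};
  this.send = function (body) { this.responseText = body; };
}

// Placeholders, not the real ACC endpoint or a real token.
var token = "<bearer token fetched earlier>";
var projectName = "Plant Expansion";

// 1. Initialize the object using the constructor.
var xhr = new XMLHttpRequest();
// 2. Open the request with a method and a URL.
xhr.open("PATCH", "https://example.com/acc/projects/123");
// 3. Set the request headers, including the bearer token.
xhr.setRequestHeader("Content-Type", "application/json");
xhr.setRequestHeader("Authorization", "Bearer " + token);
// 4. Build the JSON body and send; in Fusion Manage this is synchronous.
xhr.send(JSON.stringify({ name: projectName }));
// 5. The response is available right away for further processing.
var response = JSON.parse(xhr.responseText);
```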
Now, since the API call is issued in synchronous mode, Fusion Manage script will wait for its response. This means that if the API call is slower, it could lead to timeouts.
Another thing to note is that it is possible to make API calls to the same Fusion Manage tenant using this method. However, if you are doing that, keep in mind that the authentication has to be performed within the script, meaning the bearer token for that call to the same tenant will have to be fetched within the script and passed to the API call. That is, the fact that the user is logged in to Fusion Manage already does not mean that the script will be able to make the API call as that user.
Some of you might be aware that the xhr.open method has an optional parameter called async. However, passing this parameter as true will not help run the HTTP request from Fusion Manage in async mode. The script always operates in synchronous mode because it's hardcoded in the backend.
PEDRO RIOS: Well, now that we know a little bit about all the objects and methods that are available, let's see how they all come together. Suppose we find this line in a script. What kind of information does this line convey to us? First of all, there's the top-level item. It represents an item in a workspace. In this case, let's suppose it was this item for which this script was triggered. So the item object will contain all the information related to this item.
Some of this information is in different tabs, like the Bill of Materials tab that we see here, and we access it through the BOMs property. In this case, BOMs[4] means the fifth line in the bill of materials, so that's what this property represents in this line. This line has a lot of columns, each of them representing one piece of information. Maybe the most important is the item information. That means the part that is in this bill of materials, which is another item.
So the Item property of the BOMs property is itself another item object, with most of the same methods and attributes as the top-level item, only with different values, because it now represents a different item, the item that is in the bill of materials.
This item itself contains a lot of information in several tabs. In particular, the grid that, here, was renamed to Approved Manufacturer List. If I want information about the first row in the grid, I would use grid[0]. Again, many fields were defined in this grid. One of them is the Manufacturer field, which is actually a linked picklist that links to another item.
So if I read this information, I will get information about this other item, another item object, same methods, but now the attributes are slightly different because the fields in this supplier's workspace are different. But it behaves the same way as the other item objects. We can obtain the descriptor, meaning all the metadata about the item. In particular, I'm interested in the workflow state of this item, which is in this example that's Under Review.
Now let's put this to the test with a script. I want a validation script on the change order where the item in the previous example is included. Before sending it to the manufacturer, let's suppose I want at least one manufacturer to be active, not under review. So how would a validation script for this workflow transition that sends to the factory be coded to account for this condition?
So since we're going to validate, we will have to provide a list of errors. If any error happens, we will populate the list, called result here. And if there are no errors, we send back an empty list, which will mean that we can perform the workflow transition.
So in this validation, I will first go through my change order and loop through all the affected items. They are in the workflowItems property of the item. Each of them is an item itself, like the one we saw before. So we will check if this item contains at least one active supplier with this notActive function that I defined.
And if it's OK, we will check all the subassemblies in this bill of materials. So we go through the bill of materials of this item in the change order, and we check each one of them to see if there is at least one active manufacturer in each one.
This function doesn't exist out of the box; I had to define it, that notActive function. What it does is it goes through the grid, reads the Manufacturer field in the grid, and checks the workflow state of each supplier appointed by that manufacturer. If it finds an active one, then we've found what we're looking for. If it doesn't, then it conveys the information that there is no active supplier for this particular item.
Notice that we have a lot of for loops in this script. So for loops can take time depending on the number of items and the size of the bill of materials. That's the reason why I inserted a lot of breaks in those loops, so that it can stop as soon as I got the information I wanted. This will avoid timeouts in this script.
I even included a label, the loop label up there. It's not commonly used in JavaScript, but it's available, and I use that so that if the inner loop found the information, I want to break the outer loop and stop the script right away because I already have the information.
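The labeled-break pattern works like this in plain JavaScript (the function name, item names, and states below are invented for the example):

```javascript
// Labeled break: when the inner loop finds what it needs, break the
// OUTER loop directly instead of unwinding one level at a time.
function findFirstActive(assemblies) {
    var found = null;
    outer:                                      // label on the outer loop
    for (var i = 0; i < assemblies.length; i++) {
        var suppliers = assemblies[i].suppliers;
        for (var j = 0; j < suppliers.length; j++) {
            if (suppliers[j].state === "Active") {
                found = assemblies[i].name;
                break outer;                    // stops both loops at once
            }
        }
    }
    return found;
}
```

As soon as one active supplier turns up, both loops end, which is exactly the early-exit behavior that keeps a big script inside its timeout.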
Finally, if the script doesn't enter any of the loops because there are no affected items in this change order, then the assembly variable will not be defined, and I can detect that by just checking the assembly variable as if it were a Boolean. Remember, in JavaScript, whatever type assembly has is converted to a Boolean when it's used in an if. So I can use that to determine that I cannot send this order to the factory, simply because there are no items to be manufactured here.
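That truthiness check can be shown in isolation (the function name and return strings here are made up):

```javascript
// A variable that no loop iteration assigns stays undefined, and
// undefined is falsy, so a plain `if` detects the "no items" case.
function checkAssembly(affectedItems) {
    var assembly;                     // stays undefined if the loop never runs
    for (var i in affectedItems) {
        assembly = affectedItems[i];
    }
    if (!assembly) {
        return "Nothing to manufacture";   // empty change order
    }
    return "OK";
}
```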
And then we return the result variable that contains the list of errors, or an empty list if there are no errors. We could remove the breaks so that this list of errors names all the items that have problems. That will make it a little harder for the user to read because instead of one message, we have a list of messages. And, again, if we don't break the loops, we have to make sure we don't have a lot of items or bills of materials that are too big, which would time out the script.
Now that we know how the script works, let's see how we can make it better. Some tips and tricks, what we should do, what we should try to do, what you can do in the script, and even when you should not use scripts.
First of all, as I mentioned, JavaScript is a very versatile language that provides alternative syntaxes. For example, the for loop in JavaScript was copied from Java, which itself copied it from the C language way back then. So it may be a little arcane, a little hard to read. I admit, it's not exactly intuitive.
But if, like in this case, we're looping through all the elements of an array, there is an alternative syntax, the for-in, which makes it way simpler to read. It gives you the same information, but in a much more readable fashion.
Another typical case is when we have a lot of ifs testing one variable. If is intuitive enough, of course, but there is a better way to write it in JavaScript, which is the switch case. It does exactly the same thing as all the ifs, but with far fewer words.
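For example (the function names, states, and colors below are made up to illustrate the point):

```javascript
// A chain of ifs all testing the same variable...
function stateColorIf(state) {
    if (state === "Draft") return "grey";
    if (state === "Under Review") return "yellow";
    if (state === "Active") return "green";
    return "red";
}

// ...reads more cleanly as a switch on that variable.
function stateColorSwitch(state) {
    switch (state) {
        case "Draft":        return "grey";
        case "Under Review": return "yellow";
        case "Active":       return "green";
        default:             return "red";
    }
}
```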
Even if we don't want to check equality like in the prior case, but instead want to check inequalities, like ranges of values in this case, we can still use the switch construction: the so-called switch(true) template, where the values we put in the case clauses are not constants but conditions that will be evaluated by the script and return true or false. If a condition is false, nothing happens. If it's true, it matches the switch(true) and that case executes.
So, again, those two blocks of code are doing the exact same thing. Maybe it's a little longer in the switch(true), but some could argue that it's more readable.
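A small sketch of the switch(true) template (the function name and the quantity bands are invented for the example):

```javascript
// switch(true): each case is a condition; the first one that
// evaluates to true is the one that matches.
function sizeBand(qty) {
    switch (true) {
        case qty <= 0:   return "invalid";
        case qty < 10:   return "small batch";
        case qty < 1000: return "standard run";
        default:         return "bulk order";
    }
}
```

Because the cases are checked top to bottom, the ranges can overlap as long as they are ordered from most to least specific.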
And finally, if we have an if that decides which value to put in the same variable, we can do it with an if; I think that's readable enough. But there is also the ternary operator. Some could say it's not as readable, but it does everything in a single line, and once you get used to it, I think you'll prefer it.
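The two forms side by side (the variable names and values are invented for the example):

```javascript
var qty = 1500;

// if/else assigning the same variable...
var label;
if (qty < 1000) {
    label = "standard";
} else {
    label = "bulk";
}

// ...collapses to a single line with the ternary operator:
var label2 = qty < 1000 ? "standard" : "bulk";
```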
FIROZ ABDUL KAREEM: Right. The Script Editor in Fusion Manage comes with a lot of handy features. There is syntax highlighting, as you can see. There is code folding; in this case, the function reset is folded, whereas the function syntax is unfolded. It allows you to import library scripts. There is on-the-fly syntax validation; in this case, the missing closing curly brace for the switch statement is highlighted.
It allows for auto-code completion. There is the capability to test and debug scripts. And it also has a mechanism to view runtime error logs from previous executions of the script.
There are several keyboard shortcuts, handy keyboard shortcuts available on the Script Editor. We'll have a more comprehensive list in the handout for this session.
Contextual scripting assistance. So there is this option to enable auto-code complete. It provides contextual scripting assistance by suggesting and completing the name of relevant functions and fields as you code. The suggestions are displayed automatically as you type. Of course, you will have to pick the context. The context in this case is the workspace on which the script is expected to run for the system to automatically predict the fields that reside within that workspace.
Objects and functions are also suggested within the context of your code. You can press any of these keyboard shortcuts to bring up Code Complete options in case you are editing an existing script.
Code Complete analyzes your code and warns if any unknown references or objects are present. And of course, this is dynamic: the script is analyzed when Code Complete is enabled and each time a new line is started.
It is possible to validate the script code in a simulated test run. Testing will not create or modify any data, and it requires input parameters such as dmsID, workspaceID, and userID. dmsID is mandatory; workspaceID and userID are calculated automatically based on the dmsID and the user who is testing the script.
Depending on whether the script succeeds or fails, success or error banners are displayed. You may also make use of println statements to fetch more information in test mode.
Debugging is very similar to testing. It will not create or modify any data. Like in any standard debugger, there is dynamic highlighting, which identifies the current line of code being executed. Breakpoints may be added or removed by clicking on line numbers, and there are functions to go to a particular line, to step over functions, or to step into functions and analyze them in more detail.
Changes performed in debug mode are not saved, and it's not possible to add breakpoints at runtime. Before testing a script or debugging your script, make sure to save the latest code.
Runtime error tracking on scripts. The screenshot shown here might not be new to you as PLM admins. In this case, the user is trying to execute a workflow transition and they run into an error. The error message contains a reference to a script: the name of the script, an ID, and a timestamp.
So when your users come to you as PLM admins, you have a way to go to the Script Editor and identify what exactly went wrong during execution of that script. So when you go to the Script Library, you will see a warning icon next to the script in question. And now, when you go to Edit mode on that script, you will see a link to view the error log.
Once you click on that, the error log is displayed. It will have the same error reference and timestamp that were provided in the error banner the user ran into. And then there are the log details, which will tell you what line of the script failed and what exactly caused the failure. In this case, the failure was caused by a division by zero. Remember to purge the error logs once you have figured out the root cause and fixed it.
Changelog for scripts. There is no explicit versioning or changelog for scripts, so it's not very easy to find the history of changes. However, setup logs, which are accessible to administrators, display script changes. They are not very user-friendly; however, they log the previous state and the current state of the script, along with details such as the timestamp and the user who changed the script. We always recommend keeping a backup of the script before you click Save Changes.
Naming convention. It is recommended to follow a standard naming convention while working on the Script Library. The scripts that ship with the product, out-of-the-box scripts, follow this naming convention. So they start with a workspace name, and the event on which the script runs is also mentioned. This could be On Create or On Edit events. It could be workflow validations, conditions, or actions.
On-demand scripts should have a meaningful name that can be understood by your users because they rely on this name when they run these scripts from the item. It is possible to find where a script is used from the Script Library, and scripts that are currently in use cannot be deleted.
PEDRO RIOS: So let's talk about some limitations that scripts have that you must keep in mind when deciding if you should implement scripts. First of all, they are time-boxed. Remember, they all run on the server. All scripts everywhere run in the same cluster of servers. So we have to time-box them, otherwise a rogue, poorly written script could have widespread consequences, and we don't want that.
Here, we listed the timeout limits for each type of script. And notice that if you chain several scripts, meaning one script performs an action that triggers another script, they'll be part of the same transaction, but the timeout is increased in that case because there is more than one script run.
Another limitation is that scripts are not meant to communicate with the user. Validation scripts can send error messages, but that's pretty much it. The println function is only for testing or debugging.
The scripts are transaction-safe: they run inside their own transaction, which is rolled back if anything goes wrong, to avoid inconsistent states. But you cannot create your own transactions spanning different scripts called at different times. If a script does something wrong without knowing it's wrong, you will have to undo whatever the script did with another script.
They cannot use the search capability. They cannot be triggered by tabs other than Item Details; so if you create a grid row or add a new item to a bill of materials, that will not trigger a script. We still don't have versioning for scripts, so you have to be mindful of the changes you make to script code, and as we said, keep backups whenever possible.
The Import tool does not trigger scripts by default. And the language that we use in our scripts, JavaScript, is an older version, version 1.5 with some elements of ES5. We're working on an upgrade of the script engine to support newer JavaScript elements, but those are mostly related to event handling, which doesn't make sense in a server-side script. We're still studying it so that we can guarantee backwards compatibility.
So if not scripts, what else? For example, let's compare scripts with REST API calls, typically used in an integration. The purpose of a script is to automate smaller tasks. If you want to automate larger tasks, a lot of operations, it's recommended to use an integration with REST API calls. And if you're integrating with other systems that are not on our server, even though a script can issue HTTP requests as we saw, it will be more efficient to use the REST APIs in an integration environment, ours and the other system's.
Scripts can only be invoked by user interaction: creating an item, editing an item, performing a workflow transition, or even clicking on an on-demand script. REST API calls, by contrast, are invoked externally, so your integration environment decides when to invoke them.
Scripts don't have to abide by all the permissions. They can override those permissions and access control when they're manipulating data. But the REST API calls, because they're coming from an external environment, they are subject to all permissions and access control settings.
Likewise, a script can override validations defined for fields; it has this ability, but REST APIs, again, have to abide by whatever is defined on the fields. On the other hand, the timeouts for scripts that we saw are very limited. With the REST API, each request has five minutes before timing out, and typically in an integration you would chain several of them together, so your effective timeout is much larger.
As we saw, the log entry generated by a script execution can be a little confusing; it's not very readable. It's mostly there so that when you can't find the problem, you can send it to us and we analyze it. The REST APIs behave like the UI, so they leave detailed log entries.
A script can manipulate all the tabs, as we saw, through objects. It cannot be triggered by all the tabs, but once triggered, it can manipulate data in any tab. The API calls can also do that, but, of course, you have a different API call for each tab.
FIROZ ABDUL KAREEM: Before we wrap up, we would like to go over some of the references we used while preparing this presentation. Of course, we started with the Fusion Manage help documentation, which is very extensive and contains a lot of details along with some script samples. We referred to previous AU sessions on scripting; the hyperlinks are provided on the slide, and they are publicly available.
We also referred to a training from Sven Dickmans, Territory Solutions Engineer at Autodesk, and had some conversations with Michael Liu, Software Engineer at Autodesk, who helped us with this presentation. With that, thank you, everyone, for listening to this presentation on scripting in Fusion Manage.