Description
Key Learnings
- Learn how to use Forge API from an enterprise integration platform
- Learn how to use integration best practices for connecting enterprise systems to the Forge platform
- Learn how to use common services and canonical data models when interfacing with Forge
- Learn how to develop a custom connector for Forge in MuleSoft
Speaker
- Ravi Dharmalingam: Seasoned software professional with experience in integration consulting and cloud-based operations. Over 20 years of experience in all stages of enterprise software development and deployment in a wide range of industries. He is an experienced integration consultant, having helped customers successfully integrate enterprise applications across various industries. He has implemented legacy Enterprise Service Bus-based integration solutions as well as modern cloud-based systems and is proficient with integration standards such as REST, SOAP, and OData. He is focused on architecting and implementing pattern-based solutions to integrate enterprise applications to help drive adoption and enhance overall value for customers.
PRESENTER: So we'll have somebody create an item in Salesforce. And then, let's say, a designer checks in a model into a file folder, the MuleSoft flow essentially translates that model and puts the link into Salesforce. So from Salesforce, you're able to correlate the model and view the model using a large-model viewer directly in Salesforce.
And then the second demo is with creating a project in Salesforce. It, basically, pushes the project into BIM 360, again, using a Mule flow. And we'll highlight some of the patterns that we're using when you're building these flows and how easy it is for you to change stuff with this kind of architecture.
And then, we'll get into the actual code and how you can build the connector. And we can share the stuff we've done so far. And if you want to build your own connector, it is fairly easy. It's really just a wrapper that you have to write on top of the API kit that's there in GitHub already. And then, we'll look at the runtime environment and wrap it up.
All right, so integration architectures. So some of the primary architectures that you see for integrations right now are file transfer and shared database. Those are still quite widely used. But the problem with that approach is it doesn't scale.
And it usually is very tightly coupled. So if you have to make any changes, it locks you into one architecture. You have to do quite a bit of rework if you're trying to make changes to the system.
The next one is point-to-point, where you can build a custom map in any programming language and, basically, use that to integrate. And again, this creates a tightly coupled model, which again, can work in some situations. But it can often lead to maintenance issues in the long run.
So with that said, the messaging architecture is the most common architecture used across enterprises when you're talking about integrating a large number of applications and managing a large number of integrations. This kind of model essentially gives you the resiliency and scalability that you need when you work with large-scale integrations. So the Integration Bus is a core concept here; there are probably 50 or 100 middleware products in the market that support this architecture.
Essentially, the core concept is, you have a group of applications that can work together in a decoupled manner, so changes in one app don't affect the others. You can easily add additional components to the bus without affecting the existing ones.
So how do we use MuleSoft? MuleSoft is basically a lightweight ESB. I used to work, in earlier days, on more heavy-duty stuff like WebSphere and TIBCO. MuleSoft kind of peels back the layers, and it's a much simpler product. It's called a lightweight ESB, which still supports the messaging architecture but is a lot simpler than some of the heavy-duty ESBs that were around 10 years back.
So what I've done here is, basically, built a Forge connector that can, basically, tie Forge into the message bus so that you can, basically, leverage Forge across your enterprise. So if you look at the connector ecosystem for MuleSoft, so they have connectors for pretty much any leading enterprise system that you can think of. And so once we get a Forge connector into a platform like MuleSoft, it's fairly easy for us to integrate with any of the applications that are available in its ecosystem.
So let's briefly talk about patterns. So one of the core things about Mule that kind of makes it simple is that they kind of adopted, almost religiously, the patterns that were described in this book. So this book came out 10-plus years back, maybe even longer.
But this is still considered one of the seminal works in integrations. And lot of the patterns here, you see them all around. And MuleSoft kind of took an approach where they, basically, used their component names, essentially, follow the naming conventions used in these patterns.
So patterns are nothing more than reusable integration or design solutions that you can use either independently or with other patterns to solve integration problems. So when you start looking at an integration problem, you can break it down into different patterns: OK, this is an Aggregator, or this is a Splitter. And then you proceed in that manner. And the way the MuleSoft components are structured essentially facilitates a pattern-based approach to integrations.
As I mentioned, many of the components in MuleSoft, essentially, use the same names that you find in the enterprise pattern. So once you study enterprise patterns, it's almost fairly easy to learn MuleSoft. So it's kind of like a common language that they adopted for integrations.
All right, so next we'll talk about the Forge connector. The MuleSoft environment essentially comprises a development environment and a runtime environment. The development environment is an Eclipse-based studio with a wrapper built on top, which allows you to build visual flows.
And then from there, you can either deploy to CloudHub, which is basically their cloud-hosted integration environment, or you can host it on premise. They have an on-premise solution as well if you want to run your integration on premise. And there is even a community edition that can be run on premise, which is open source and free, which is a nice thing. So if you don't want to go for the expensive solution, you can use the community edition.
So once you deploy the Forge connector into the platform, it basically shows up in the palette of Mule as a connector. And then you can just use that in any of your flows. With Forge, we basically provide a configuration option. So once you add the connector, as I'll show you in the demo, we have to specify the client ID and the client secret of the Forge application that we are going to be using.
And then, the connector basically defines as many operations as you need. You basically reference those operations in the flow. And then the operation basically defines what the inputs and outputs are. All of this can be done kind of in a visual manner.
All right, so with that, any questions so far? OK, so with that, I'm getting to my demo. The first demo is, essentially, somebody creates an item in Salesforce. And then we have a CAD model that's updated in a file folder. The Mule workflow essentially correlates these two, uses the Forge connector, performs the translation, and then sends the translated model to Salesforce. We have a LMV viewer embedded in Salesforce that you can use to view that model.
And then, at the same time, we send a notification in Slack that the model is ready. And the link is sent in there. So this kind of highlights a simple scenario. But I want to show that. And we can take a look at different things that you can do with the flow from there.
So this is basically the flow. So if you look at it, this is basically polling a file directory. And then it's ensuring that you don't process any file more than once. And this is, again, a pattern, called Idempotent Receiver, which essentially ensures that you don't process the same message more than once. And then, you're doing a translation, calling the Forge connector to perform the translation for the LMV model, then calling Salesforce to update that LMV link, and finally updating Slack.
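The polling step in that flow can be sketched in plain Java. This is a hand-rolled illustration, not the actual flow's code; the `FilePoller` class and its method names are made up for the example, and Mule's file connector handles this (plus locking, filtering, and scheduling) declaratively:

```java
import java.io.File;

// Minimal sketch of the Polling Consumer pattern at the start of the flow:
// periodically list a watched directory and hand any files found to the next
// processing step. Illustrative names only.
class FilePoller {
    private final File watchDir;

    FilePoller(File watchDir) {
        this.watchDir = watchDir;
    }

    // One poll cycle: return the files currently sitting in the directory.
    File[] poll() {
        File[] found = watchDir.listFiles(File::isFile);
        return found == null ? new File[0] : found;
    }
}
```

In the demo, each file the poller picks up then moves through the translation, the Salesforce update, and the Slack notification.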
So let me just run the demo. And then we will take a look at the flow. And I can show you how this is structured. So in Salesforce, so let me just create a new product.
OK, so this is the LMV model. But at this point, I don't have a viewable yet. So, let's say, I am not going to use the CAD system, at this point. I'm just going to copy an existing model.
So it's basically matching it on the name. So basically, in a runtime environment, your integration would be running all the time. As I mentioned earlier, it would be running on CloudHub or on your hosted system. In this case, I'm just going to run it here. And, basically, the development environment has its own container to run it.
So if you run it here, it's got a web container built in. It'll run it locally in the Eclipse environment. And it'll start polling for the file.
All right, so it's running. So this is nothing more than a Java application that's running in a web container. So you see that it's picked up the file. And it's sending it to the translator.
And, obviously, right now it's still polling for it. But it's done. And it has updated the Salesforce link. So if I go back to Salesforce now and do a refresh, you'll see that the link is there and the model has made it to Salesforce.
So just a simple scenario, and I think we also had a step to send a link to Slack. So you see this message in Slack that shows up. This is, basically, a simple scenario. But let's say you want to change Slack to Twitter or something else. It's really just a matter of finding the connector and adding it in there.
So if you basically find the appropriate connector for that tool. And you can just drag and drop it into the environment. And now, you basically have the option to connect with another enterprise application like Twitter or something like that. So that's basically the power of this, of a framework like this.
I'm calling an operation, create LMV model, which encapsulates all the stuff that needs to happen to translate a model into a lightweight, I mean, large-model viewer link. And that operation performs everything. And all I need to worry about are the inputs and outputs to that.
I have a transformer before that where I'm passing it the bucket key, the file name, and the file path. And I have an output where I'm getting the translated stuff back from that translation, which I'm then passing to Salesforce to establish the link. Any questions on this flow so far? Go ahead.
AUDIENCE: So basically you have to put a folder [INAUDIBLE]?
PRESENTER: Yeah. It's [INAUDIBLE].
AUDIENCE: So you're just kind of watching it?
PRESENTER: Yeah, yeah.
AUDIENCE: If something happens with this, then it gets transferred there. And when you tied it together with the file, so you made a product with a certain name [INAUDIBLE].
PRESENTER: Yeah, yeah. You're matching. Yeah, yeah. Yeah.
AUDIENCE: When you send it out to Forge, typically what Forge passes back after you translate it [INAUDIBLE].
PRESENTER: It's URN.
AUDIENCE: OK. So you got the URN back. And that's what you used [INAUDIBLE].
PRESENTER: I sent the URN to Salesforce. If you look at Salesforce, see, this is the URN. I mean, you would typically hide this in your implementation. But that's basically what I'm passing back to Salesforce. And then it's using that. You need to authenticate in Salesforce. I'm using two-legged OAuth in Salesforce to get the token.
AUDIENCE: Is that the [INAUDIBLE] or is that [INAUDIBLE]?
PRESENTER: Which URN? I'm sorry.
AUDIENCE: [INAUDIBLE]
PRESENTER: This is the Autodesk document URN that you need for the large-model viewer. So this basically tells the LMV where to get that file.
AUDIENCE: So the large part of the viewer is built into Salesforce then?
PRESENTER: I added it.
AUDIENCE: Oh, you added it.
PRESENTER: Yeah, so it's basically an iframe. And I embedded that iframe into Salesforce and had some scripting in there to get the token. It needs to authenticate, as well. So it's getting a token to use the viewer every time I'm using that.
So again, I think the real power here is, I mean, once you have a connector on the operation, you can use it for a number of things. It's not just for a particular thing. For example, if I want to change Salesforce to NetSuite now, all I have to do is change that connector.
All the pieces of stuff I've done up to that point are still good. All I need to do is change Salesforce connector to NetSuite. And then, it still works. Any other questions on this flow?
All right, so we talked about patterns, and we just looked at this. So what are the patterns that we saw here? Just to recap on some of the stuff: what you do here, I mean, this is obvious, but there's actually a pattern for it. It's called Polling Consumer.
So that's basically the pattern we're using here, where it's polling your file directory to see if there is a file. The next pattern we saw was Idempotent Receiver. So essentially, let's say you have a file folder, and it's looking at that file all the time. You don't want to process the same file multiple times.
So this component, essentially, what it does is it allows you to define an ID, a correlation ID, or a message ID, which you can use to filter out messages that are already processed. So in this case, what I did was I used a combination of the filename and the timestamp. So as long as the filename and the timestamp don't change, I don't process it again.
If I go and update that same file now, it'll process it again and send a new model to Salesforce.
AUDIENCE: [INAUDIBLE]
PRESENTER: Yeah, you can update it and put a new file in there.
AUDIENCE: [INAUDIBLE] have the same name [INAUDIBLE].
PRESENTER: You can have the same name. But it's looking at the combination of filename and the timestamp. So as long as that timestamp changes, it'll read it as a new message, yeah.
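The filename-plus-timestamp filtering just described is the heart of the Idempotent Receiver pattern. Here's a minimal sketch in plain Java; the names are illustrative, and Mule's idempotent message filter provides this declaratively with a pluggable object store:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the Idempotent Receiver pattern: a message is processed only if
// its ID -- here the filename plus the last-modified timestamp -- has not
// been seen before. Illustrative stand-in for Mule's idempotent filter.
class IdempotentReceiver {
    private final Set<String> seen = new HashSet<>();

    // Returns true the first time a given (name, timestamp) pair arrives,
    // false for any repeat of the same pair.
    boolean accept(String fileName, long lastModified) {
        return seen.add(fileName + "#" + lastModified);
    }
}
```

A repeated (name, timestamp) pair is filtered out, while the same name with a new timestamp passes through as a new message, matching the behavior in the demo.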
So the other pattern you saw here, which is common across the board, is a Message Translator, which is a common pattern that you see whenever you need to translate a message from one app to another. And then, this is basic. Again, a lot of these patterns are overlapping. And this whole flow is called a Composed Message Processor.
So, essentially, you have a whole series of steps that are happening. And then, if any one of them fails, you have an exception strategy on what to do. In my case, I just have a Slack message. If something failed, it'll just post something in Slack, saying that, OK, this publishing failed. You have to go take a look at it or something like that.
All right, so any other questions on this before I go to the next demo? And again, the reason I'm showing these demos is just to highlight the point that with the connector your options are unlimited. And really, a flow like this, once you have a connector with the operations, you can set something like this up in 10 or 15 minutes, literally. I mean, it's basically drag and drop. I know some people could probably write code as fast as that. But, for most people, I think this is still a convenient way to do stuff.
All right, so the next one is similar. Here, what I'm doing is, again, starting in Salesforce. I'm creating a project in Salesforce. I'm taking a long route here. This, you don't have to do it. But I wanted to use a queuing system to show it.
So, essentially, I'm using this app to transform an outbound Salesforce message into an Amazon SQS message. In MuleSoft, we basically have a listener for the SQS message. And then the Forge connector updates BIM 360 with the project.
All right, so let me go back to the demo. All right, so again, it's the same thing that we looked at in the-- So you have the queuing system. Again, this message gets here from Salesforce through an outgoing message through Zapier. And then I'm doing some byte transformation stuff because it's Base64 encoded.
So I take care of the decoding there. And then, I basically update BIM 360 with that information. Yeah, actually I called Salesforce because I don't get the entire data from the message. All I'm getting is the ID.
So I make a call back into Salesforce to pull all the data. And then I call Forge to update BIM 360.
So going back to Salesforce, so I just created, like, a simple object, again, in Salesforce. So let's, again, call it Forge. OK, so it basically sends it only if you select that flag. And some of those are things that you can easily tailor.
So the good thing about queues is, I can basically post this message and it'll update the queue. My application doesn't need to be running. And if I start it later, it'll pick up the queue. So if you're trying to do something real time, like HTTP, you need to have your system up and running. Otherwise, it'll fail.
But since we're using a queuing system, I don't need to have my app up and running. I can just post this, and the message is already waiting in Amazon SQS. And when I come back here and start this, it's picked up the message and processed it. So if I go into BIM 360 now, all right, we see our project updated there.
So again, it's the simplest case, but it highlights the point. Let's say I want to include Slack here. Again, all I have to do is find the Slack connector and put it in there. So any questions on this flow?
So one other thing I want to highlight here is, essentially, when you build the connector, MuleSoft is what they call DataSense-enabled. So it can read your POJOs, your Plain Old Java Objects. And it can basically find out what fields it's looking for. And it can give you a drag-and-drop interface to do the mapping.
So if you go and look at the transform, so I just mapped five fields. But, essentially, once you have a connector like this, this kind of mapping can be done by somebody who's not a developer. So this kind of expands the number of people who can use Forge to do integrations as well.
So you can basically build a connector that supports a drag and drop transformer like this. And all the user needs to do is map the source connector to the target connector. And then the transformer automatically infers what are the outputs, what are the inputs. You just need to drag and drop.
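As a rough illustration of what DataSense introspects, an operation's input can be a plain Java bean like the sketch below. The class and field names here are made up to mirror the inputs mentioned earlier; they are not the actual connector's classes:

```java
// Illustrative POJO for a connector operation's input. DataSense reads the
// bean properties (getters/setters) and surfaces them as mappable fields in
// the drag-and-drop transformer.
class LmvModelRequest {
    private String bucketKey;
    private String fileName;
    private String filePath;

    public String getBucketKey() { return bucketKey; }
    public void setBucketKey(String bucketKey) { this.bucketKey = bucketKey; }
    public String getFileName() { return fileName; }
    public void setFileName(String fileName) { this.fileName = fileName; }
    public String getFilePath() { return filePath; }
    public void setFilePath(String filePath) { this.filePath = filePath; }
}
```

Because the fields follow the standard bean convention, no extra metadata is needed for the mapper to discover them.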
Obviously, if you need to make some additional changes to the transformation, you have to do some of the additional stuff. I mean, there are some syntax that needs to be learned. But for simple mapping, really, it's just really dragging from the source to the target, nothing more than that. So any questions on this flow or anything on the MuleSoft side?
AUDIENCE: So [INAUDIBLE] you basically set up all these integrations in some sort of, like, project. And that whole thing is running in MuleSoft [INAUDIBLE] and it's just sitting there, like, some integrations you set up on, like, timers and things every 10 minutes. They check something or they pull something. Now those are--
PRESENTER: Event driven, yeah.
AUDIENCE: [INAUDIBLE] And then, what is that [INAUDIBLE] in thinking in terms of, like, having regular [INAUDIBLE] web services, in terms of having this, like, whole [INAUDIBLE] integration. If I wanted to access MuleSoft in some other system, how would I do that? [INAUDIBLE] API [INAUDIBLE]?
PRESENTER: No, in this case I did it through a queue, Amazon SQS. But you can also do it. MuleSoft also has an API. So they have a kind of API, which can trigger this as well. So the primary integration mechanism is event based, where you have events. And, as you said, a WebHook could be one.
So, in this case, we are pushing from the Salesforce to BIM 360. With WebHooks, now, we can push it back from BIM 360 back into Salesforce. So I can register a WebHook here. And then once somebody gets the project in BIM 360, I can update back into Salesforce.
But, yeah, they do have a REST API that you can define. In fact, they have a pretty rich API platform that you can define your own APIs that you can use to trigger different things. I mean, you have Forge APIs already. And then you have APIs on top of it. But it kind of gives you the ability to define like composite APIs.
Let's say you want to do one big transaction. And you want a single API. You can basically do that. And so HTTP could be another trigger, SQS is one, polling a file, which was the other one we did, or it could be batch, like you said. I mean, it could be scheduled based on time, run as a batch.
AUDIENCE: [INAUDIBLE]
PRESENTER: Yeah, you're right. All right, so let's take a look at one of the patterns we saw here. So this is, I mean, it's basically a Message Channel. So we are using a queuing system, Amazon SQS in this case. But it's basically a Message Channel, which is what it's called in the pattern.
And this pattern is called a claim check. And, essentially, it's like when Salesforce posted the information, it did not send all the data at the time. It just sent the ID of the document from Salesforce to MuleSoft. And MuleSoft is basically querying Salesforce to get all the data. So this pattern is called claim check, where you don't send all the data immediately. And you just send a reference and then use that to pull it back.
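The Claim Check pattern just described can be sketched in a few lines of plain Java. The names here are illustrative stand-ins for the Salesforce lookup in the demo:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Claim Check pattern: the queue message carries only an ID,
// and the consumer redeems it against the source system for the full record.
class ClaimCheck {
    // Stand-in for the source system (Salesforce, in the demo).
    private final Map<String, Map<String, String>> sourceSystem = new HashMap<>();

    // The source system stores the full record...
    void store(String id, Map<String, String> record) {
        sourceSystem.put(id, record);
    }

    // ...the outbound message carries only the ID...
    String messageFor(String id) {
        return id;
    }

    // ...and the consumer queries back for the full data before updating
    // the target system (BIM 360, in the demo).
    Map<String, String> redeem(String messageId) {
        return sourceSystem.get(messageId);
    }
}
```

Keeping the queue message small this way also avoids pushing large or sensitive payloads through the intermediate channel.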
Channel Adapter, again, a lot of these patterns are repeating. I mean, any kind of adapter that you build to integrate with the bus is called a Channel Adapter. And then, this, again, is an overarching pattern that you'll see everywhere: Pipes and Filters. Essentially, you have each layer making changes to a message as it flows through the integration. Any questions on this flow?
All right, so with that, let's get into how you build the connector. So building the connector actually is fairly simple, which is a nice thing about MuleSoft. Really, all we have to do is create a wrapper with their annotations in Java, which links the client classes, the Forge API classes, to MuleSoft. And that's exactly what I have done in my stuff.
And these wrappers are fairly simple. Each operation that I defined is basically a single method, which I need to wrap with the Processor annotation so it gets recognized in the MuleSoft flow. And they have a toolkit called DevKit, which you can use to build these connectors.
And then, as I said, these connectors are DataSense enabled, meaning if you define your static data models within the connector, the system will recognize that. And when you visually bring up the source and target, it will basically recognize what's the data I'm getting and what's the data I need to update. And then, you can use a drag-and-drop interface to do the mapping.
All right, so let me show you in the studio, so this is the connector. So I'm getting the token. And you see this is the, for example, this is the method that we're using for creating the LMV model.
And all it's doing is it's basically using the API kit. So if you guys have seen the Java API kit in GitHub, you can basically use that. And it just needs to go around this wrapper. So this wrapper, this annotation, ensures that MuleSoft recognizes this as an operation.
And then, you can just use the standard Forge API kit to make your calls to perform the different operations. Have you guys used the API kit? Like, anybody use the Java stuff or no?
AUDIENCE: [INAUDIBLE] use it, like, direct calls [INAUDIBLE].
PRESENTER: OK. So actually, initially, I had done it through other stuff too. But now they have a standardized API kit because, if you look at it, I think they have different languages there. And it kind of uses the same pattern.
So this code, essentially, uses the API kit. So I really didn't have to do much code at all. I mean, I kind of had to copy some of the classes here. But really I didn't even have to do that. I could have just referenced the jar completely and just called the methods in the API kit externally.
And all we really need here is just to get the wrapper around here, and that would have taken care of it. Any questions on the steps to build the connector?
AUDIENCE: What were your variables defined in here?
PRESENTER: OK, so this is the connector class. And there is another class called Config. These are the inputs, where you define the client ID. So if I go back to MuleSoft, I look at the connector. So this is where I configure this. And that is exposed in the Config.
So they have classes when you create a project. So after you set up MuleSoft, if you say New, Anypoint Connector Project, this will basically create a connector project for you, which has all the wrapper stuff that you need.
All you have to do is add your methods there and add any other Config stuff you need. It kind of gives you a base template for building a connector. You can just go ahead and add your stuff on top of it.
So these are the inputs. So, for example, see this is where I defined the payload. So I call this payload. And I'm basically using a POJO here. And, as I said, MuleSoft recognizes the Java object. And it knows what it needs.
You can define whatever you want in that class. So that's your input. And then this is your output. So that's basically what you define. And when you define something as a processor, it knows that it's an operation that it needs to expose. So that's basically it. Any questions on the connector?
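To make the shape of that wrapper concrete, here's a self-contained sketch. The @Processor annotation is defined locally so the snippet compiles on its own; the real one comes from the MuleSoft DevKit libraries. The method body is a placeholder for the Forge API kit calls (authenticate, upload, request translation), so only the overall structure reflects the actual connector:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Base64;

// Local stand-in for DevKit's @Processor annotation, defined here only so
// this sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Processor {}

// Shape of the connector wrapper: each @Processor-annotated method becomes
// an operation in the Mule palette. The body below is a placeholder for the
// real Forge API kit calls.
class ForgeConnectorSketch {
    @Processor
    public String createLmvModel(String bucketKey, String fileName, String filePath) {
        // The real operation uploads the file and requests a translation;
        // here we only build the object ID and base64-encode it, since the
        // viewer expects the document URN in base64 form.
        String objectId = "urn:adsk.objects:os.object:" + bucketKey + "/" + fileName;
        return Base64.getEncoder().encodeToString(objectId.getBytes());
    }
}
```

The key point is that the wrapper adds no logic of its own: the annotation is what turns an ordinary API-kit call into an operation the flow editor can see.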
So, essentially, one of the real strengths of MuleSoft is that it uses what we call a SEDA architecture, or staged event-driven architecture, which inherently gives you more throughput than a standard serial approach or any kind of threading that you do, because this uses a queuing approach, where each stage can be processed independently. So when you're talking about cloud infrastructures and stuff like that, using this kind of model allows you to scale easily across clustered environments and gives you a much higher throughput than a standard Java program or standard serial-based integrations would give you.
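The staged, queue-per-stage idea can be sketched with plain Java queues and threads. This is an illustration of the concept only; Mule's runtime manages the stages, threads, and back-pressure for you, and all names here are made up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of a staged event-driven (SEDA) pipeline: each stage has
// its own queue and its own worker thread, so throughput comes from stages
// running concurrently rather than one message holding up the whole flow.
class SedaPipeline {
    private final BlockingQueue<String> translateQueue = new ArrayBlockingQueue<>(100);
    private final BlockingQueue<String> updateQueue = new ArrayBlockingQueue<>(100);
    final BlockingQueue<String> done = new ArrayBlockingQueue<>(100);

    void start() {
        // Stage 1: "translate" a message, then hand it to the next stage's queue.
        Thread translate = new Thread(() -> {
            try {
                while (true) updateQueue.put("translated:" + translateQueue.take());
            } catch (InterruptedException e) { /* shutting down */ }
        });
        // Stage 2: "update" the target system with the translated message.
        Thread update = new Thread(() -> {
            try {
                while (true) done.put("updated:" + updateQueue.take());
            } catch (InterruptedException e) { /* shutting down */ }
        });
        translate.setDaemon(true);
        update.setDaemon(true);
        translate.start();
        update.start();
    }

    void submit(String msg) throws InterruptedException {
        translateQueue.put(msg);
    }
}
```

Because each stage pulls from its own bounded queue, a slow stage only backs up its own queue instead of blocking the producer, which is what makes the model scale across clustered environments.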
So the runtime can be either on premise or use the CloudHub. The community edition is free, I mean, if you want to try that. And you can pretty much do most of the stuff we talked about in the community edition.
And I think we talked about the REST API. They actually have a fairly extensive API framework using the RAML language, which allows you to construct a REST API that you can use as a trigger to invoke different stuff. So, again, MuleSoft is not the only one. In fact, some of these other frameworks are actually even closer to the enterprise integration patterns.
And if you like open source stuff, these are completely free. The only thing is they don't have a nice virtual editor like MuleSoft does. So you'll have to do a lot of the configuration directly in XML or directly in code. But these two are fairly excellent frameworks to use as well, which leverage the enterprise integration patterns.
All right, so here's something interesting I saw. Here's one of the authors of the original Enterprise Integration Patterns book. He was apparently visiting a Starbucks, and all he could think of was enterprise integration patterns. So he started seeing enterprise patterns everywhere.
So you look at Starbucks, they have their own naming convention for stuff. And this is equivalent to a canonical data model, essentially. You have your own standard for naming data. When you get a drink, they basically put your name on there. And this is done to correlate your cup back to your order, in case there's a problem with your drink. And this is, basically, a Correlation Identifier, which is a pattern in the enterprise integrations.
Stores like this are set up for an asynchronous processing, so that they can get high throughput. So you got multiple baristas working on your drink. Depending on when your order comes, they're all competing to pick up an order and process it. And this is, again, a pattern. It's called competing consumers.
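The barista scenario maps directly onto the Competing Consumers pattern, which can be sketched as several workers pulling from one shared queue. All names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

// Sketch of the Competing Consumers pattern: several "baristas" pull orders
// off one shared queue, and each order is handled by exactly one of them
// because the queue hands orders out atomically.
class CompetingConsumers {
    static List<String> run(List<String> orders, int baristas) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(Math.max(1, orders.size()));
        queue.addAll(orders);
        ConcurrentLinkedQueue<String> served = new ConcurrentLinkedQueue<>();
        CountDownLatch done = new CountDownLatch(orders.size());
        for (int i = 0; i < baristas; i++) {
            new Thread(() -> {
                String order;
                // Compete for the next order; poll() is atomic, so no order
                // is processed twice.
                while ((order = queue.poll()) != null) {
                    served.add("served:" + order);
                    done.countDown();
                }
            }).start();
        }
        done.await();
        return new ArrayList<>(served);
    }
}
```

Adding more consumers raises throughput without changing the producer, which is exactly why the pattern suits high-volume message channels.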
So here are some of the key takeaways. Essentially, I think if you take a pattern-based approach, you can greatly simplify enterprise integration. And by mapping Forge capabilities into a connector that can tie into an ESB, you can basically even engage non-developers in your company to develop integrations. And the event-based SEDA architecture, as we said, inherently supports higher-scale processing and gives you a lot higher throughput. Any questions?
AUDIENCE: So the community edition, those are [INAUDIBLE] play around with. [INAUDIBLE] community edition, free the IT department in sort of [INAUDIBLE] got to be on the server connected to [INAUDIBLE]? How does that work?
PRESENTER: So, again, as I said, if you're trying to open firewall ports and stuff like that, you've obviously got to engage IT. But the community edition itself is just a Java app that can run in, like, a Jetty container or a Tomcat. It basically is a Java application that runs.
And you can run it. And you can do flows like queuing-based workflows and stuff like that. And also, the studio has its own container.
If you just want to get started, go ahead and download the studio. The studio has its own container. And you can start. Like, all the stuff I did right now was completely on the studio. I didn't even have a server.
So you can just do everything on the studio. And then, when you're ready to use, then, only then, you need to worry about whether you need the community edition or you want to use their CloudHub, which is their integration as a service offering. Any other questions, guys?
AUDIENCE: So [INAUDIBLE]?
PRESENTER: So Java's the primary platform that MuleSoft is built on. And, as I said, the connectors, you need to write it in Java. But MuleSoft itself has scripting capabilities. I mean, you can do scripting in JavaScript and all that stuff. So within MuleSoft, you can do scripting in other languages, like JavaScript or Groovy and stuff like that. But the core platform is, basically, built on Java and Spring Framework, basically.
All right. Anything else? OK, thank you, all. Thanks for coming, guys.
[APPLAUSE]