When designing your Camel routes, you may sometimes want a route to have multiple inputs. Maybe you want to receive messages from a web service and from a JMS queue.
You can’t have multiple from() methods in the same route, so how can you have multiple entry points to the same route?
Similarly, you might want to reuse the same Camel message processing logic in multiple places, so how do you avoid duplicating code?
The answer to both these questions is to join your routes together, using Camel’s in-memory messaging components: Direct, Direct-VM, VM and SEDA.
In this article I’ll explain each of these components, how they differ, and how you can use them to make your routes more modular and awesome.
First…an example scenario
Let’s start with an example. I have defined a route that validates an incoming order by passing a message to an underlying system. My orders initially arrive as JMS messages.
But what happens when orders start coming from new sources - such as a file upload, or a web service call?
To avoid having to repeat the same route code, Camel has built-in features which allow routes to have multiple inputs, using a range of joining components to glue these routes together.
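To make the scenario concrete, here’s a rough sketch of the shape we’re aiming for. Everything here is illustrative: the endpoint and queue names are my own, and the jms: and file: endpoints assume the camel-jms and camel-file components are available.

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // The validation logic is written once, behind a Direct endpoint
        from("direct:validateOrder")
            .log("Validating order: ${body}")
            .to("bean:orderValidator"); // hypothetical validation bean

        // Entry point 1: orders arriving as JMS messages (requires camel-jms)
        from("jms:queue:incomingOrders")
            .to("direct:validateOrder");

        // Entry point 2: orders uploaded as files (requires camel-file)
        from("file:orders/incoming")
            .to("direct:validateOrder");
    }
}
```

Both entry routes funnel into the same direct:validateOrder route, so the validation logic is never duplicated.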
So how does it work? Camel glues routes together using its in-memory components: Direct, Direct-VM, SEDA and VM.
These components join your Camel routes together in different ways. They are collectively known as Camel’s in-memory messaging components, because they allow messages to be passed between routes, while the message stays in memory at all times. This is a really important detail, which I’ll touch on again later.
But for now, let’s look at each of these components to see how they differ, and where you might use them.
Direct

This has to be one of the most frequently asked questions by Camel beginners:
What does “direct” mean in a route?
You’ve probably seen the code direct:... in so many Camel tutorials on the web. But what does direct actually do?
direct is one of the simplest ways of linking your routes together. When it’s used in a from() definition, it creates a synchronous endpoint that can be invoked by other Camel routes. For example, this code:

from("direct:yourname") // creates the direct:yourname endpoint

…will create a Direct endpoint called yourname. That same endpoint can then be invoked in a to() statement somewhere else, like this:

.to("direct:yourname"); // sends the message to the direct:yourname endpoint
Apache Camel’s Direct component joins routes in a synchronous way. This means that when one route sends a message to a direct endpoint using to("direct:myroute"), the route myroute will be executed in the same thread as the first route, and a response message will be returned.
In examples, it’s often used because it provides a simple entry point into a route, without having to expose a web service, or otherwise rely on an external interface.
But, the simplicity of Direct comes with some drawbacks.
Direct endpoints can only be accessed by other routes that are running in the same CamelContext and in the same JVM. This means that you cannot access a Direct endpoint from another CamelContext. Remember the CamelContext is the container where your Camel routes are created and booted up.
So what happens if you want to access a route in another CamelContext? You use the next component, Direct-VM.
But before moving on, here’s a very simple example of Direct in action. We receive files using Camel’s File component, and each file processed is passed, as an Exchange, to the Direct endpoint processTheFile. Separately, we have defined processTheFile as the start component for a route which modifies the message body. Once this is done, the new message is returned to the calling route. All of this happens synchronously, within the same thread.
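A sketch of those two routes might look like this. The directory name is illustrative, and the file: endpoint assumes the camel-file component is available:

```java
import org.apache.camel.builder.RouteBuilder;

public class FileRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Pick up files and hand each one, synchronously, to the processing route
        from("file:data/inbox")              // directory name is illustrative
            .to("direct:processTheFile");

        // Modify the message body; the result is returned to the calling route
        from("direct:processTheFile")
            .transform(simple("Processed: ${body}"));
    }
}
```

Because Direct is synchronous, the file consumer doesn’t move on to the next file until processTheFile has finished with the current one.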
Direct-VM

Direct-VM is a component that allows you to synchronously call another endpoint in the same JVM, even if it’s in a different CamelContext. When used as a start component, Direct-VM exposes that route as an endpoint which can be invoked synchronously from another route.

The difference from Direct is that direct-vm endpoints can be seen from other CamelContexts, as long as they share the same Java Virtual Machine (JVM).
This opens up possibilities of linking routes together that were not developed in the same CamelContext. For example, you might use this component if you have different CamelContexts deployed in one container - such as when you’re deploying into JBoss Fuse or Talend ESB.
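Here’s a sketch of two CamelContexts in the same JVM talking over direct-vm. The names are my own, and this assumes a Camel version where the component is available (it ships in camel-core in Camel 2.x, moved to the camel-directvm artifact in 3.x, and was removed in 4.x):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class DirectVmDemo {
    public static void main(String[] args) throws Exception {
        // Context A exposes a route on a direct-vm endpoint
        CamelContext contextA = new DefaultCamelContext();
        contextA.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct-vm:processOrder")
                    .transform(simple("Processed: ${body}"));
            }
        });
        contextA.start();

        // Context B, in the same JVM, calls it synchronously and gets a reply
        CamelContext contextB = new DefaultCamelContext();
        contextB.start();
        String reply = contextB.createProducerTemplate()
            .requestBody("direct-vm:processOrder", "order-42", String.class);
        System.out.println(reply); // Processed: order-42

        contextB.stop();
        contextA.stop();
    }
}
```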
SEDA

Camel’s SEDA component allows you to join routes together using a simple in-memory queue.
In a Camel route, when a message is sent to a SEDA endpoint, it is stored in a basic in-memory queue, and control returns to the calling route immediately.
Then, independently, a SEDA consumer picks up the message from the queue, and begins processing it.
SEDA does this by creating its own in-memory buffer to store the incoming messages. Messages are consumed on a separate thread from the producer, and by setting the concurrentConsumers option (the default is a single consumer thread) you can give SEDA a pool of threads, meaning that several messages can be processed at once, making it potentially more performant.
In this way, SEDA can be thought of as a simple replacement for JMS queues. It provides queue-like functionality, but without the overhead of running an external message broker like ActiveMQ.
Remember that Camel publishes messages to a SEDA endpoint asynchronously.
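A small sketch of the queue-like behaviour. The timer: trigger and endpoint names are illustrative (the Timer component is a separate camel-timer artifact in Camel 3+), and concurrentConsumers=5 is just an example value:

```java
import org.apache.camel.builder.RouteBuilder;

public class SedaRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // The producer returns immediately once the message is on the queue
        from("timer:orders?period=1000")     // illustrative trigger
            .setBody(constant("new order"))
            .to("seda:orders");

        // An independent consumer picks messages off the in-memory queue.
        // concurrentConsumers raises the consumer thread count (default is 1)
        from("seda:orders?concurrentConsumers=5")
            .log("Processing ${body} on thread ${threadName}");
    }
}
```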
You can only access SEDA endpoints that are located in the same CamelContext. So what happens if you want to send a message to a SEDA queue in another CamelContext? You use the next component, VM.
VM

In the same way that Direct-VM is related to Direct, the VM component is related to SEDA. Like SEDA, when used as a start component, VM allows a route to be invoked asynchronously from another route.

The difference between SEDA and VM is that VM endpoints can be accessed from different CamelContexts, as long as they are running in the same JVM. Again, this opens up the possibility of asynchronously linking together routes that were not developed in the same CamelContext.
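A sketch of fire-and-forget messaging between two CamelContexts over vm:. As with direct-vm, the names are my own and this assumes a Camel version where the component exists (it was removed in Camel 4.x); the vm: queue is shared across CamelContexts loaded by the same classloader:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class VmDemo {
    public static void main(String[] args) throws Exception {
        // Context A consumes from the shared in-memory vm: queue
        CamelContext contextA = new DefaultCamelContext();
        contextA.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("vm:orders")
                    .log("Received ${body}");
            }
        });
        contextA.start();

        // Context B, in the same JVM, sends fire-and-forget:
        // sendBody returns as soon as the message is queued
        CamelContext contextB = new DefaultCamelContext();
        contextB.start();
        contextB.createProducerTemplate().sendBody("vm:orders", "order-7");

        Thread.sleep(500); // give the consumer a moment before shutting down
        contextB.stop();
        contextA.stop();
    }
}
```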
Drawbacks of SEDA and VM
The biggest drawback of in-memory messaging like SEDA and VM is durability: if the application crashes, any messages held in memory at that moment are lost.
This isn’t a major consideration if you’re designing the kind of integrations where it doesn’t matter if the message goes missing.
But think back to the order processing example at the top of this article. If an order gets lost during a server outage, this potentially means lost business. (Uh-oh.)
Have a think about when it’s appropriate to use these in-memory messaging components, and when it might be more appropriate to hand over the message to an external message broker, such as ActiveMQ, for reliability.
There isn’t a hard and fast rule. The right solution always depends on your use case. So when designing integrations using Camel, think about what you’d do if you lost messages. Would it matter? If it would, consider using transactions and persistent messaging to minimise any message loss.
Summary and best practices
So now you’ve learned about each component, which should you use, and when?
SEDA vs Direct:
- For synchronous (request/response) interactions within the same CamelContext, use Direct
- For asynchronous (fire-and-forget) processing within the same CamelContext (to process messages in a queue-like fashion), use SEDA
VM vs Direct-VM:
- For synchronous (request/response) interactions in a different CamelContext but within the same JVM, use Direct-VM
- For asynchronous (fire-and-forget) interactions in a different CamelContext but within the same JVM, use VM
Comparison of Direct, SEDA, VM and Direct-VM
This table compares each component, and shows whether they can be accessed from another CamelContext (within the same JVM):
| Component | Type | From same CamelContext | From another CamelContext |
|-----------|------|------------------------|---------------------------|
| Direct | Synchronous | Yes | No |
| SEDA | Asynchronous | Yes | No |
| Direct-VM | Synchronous | Yes | Yes |
| VM | Asynchronous | Yes | Yes |
Has this article helped you understand the difference between the Direct, VM and SEDA components? How are you planning to use these components in your Camel routes? Please share your comments, thoughts and questions in the box below!