Place | Area | Review |
---|---|---|
Barbican Centre | City EC2Y | The area near the Benugo cafe on the main floor has long shared tables for laptop working. The cafe is expensive but you can probably stay for a while without being troubled. It’s a large space with great acoustics, which tend to dampen the background noise. But it’s quite dark; not much natural light gets in here. |
Pret A Manger | Everywhere | Free wifi, cheap filter coffee and bakery items, and you won’t be bothered by staff. Better to not work over lunchtimes though, when most shops get very busy. |
Thoughts? Add your own suggestions in the comments! I’m sure that others reading this page would love to hear them.
I want to demonstrate a slow consumer – that is, a piece of code which takes a long time to consume a message from a queue, and do some stuff with it.
(Let me preface this blog by saying that I don’t touch Java EE very often. So this is a bit of an adventure for me. 🗡️)
I decided that, to simulate a slow consumer, I would try to write a JMS message listener that takes a looong time to process each message, and do this very crudely by adding a Thread.sleep()
statement.
Then, I would try to ensure that messages are consumed from the queue only sequentially (one-at-a-time), so that a very clear and obvious bottleneck is created.
Essentially, I think I’m trying to create a singleton message consumer.
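In plain Java terms, this is the effect I’m after: a single worker draining a queue one message at a time, so everything behind it has to wait. This sketch isn’t JMS code, just an analogy using a single-threaded executor:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingletonConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        // A single-threaded executor behaves like a singleton consumer:
        // queued messages are processed strictly one at a time.
        ExecutorService consumer = Executors.newSingleThreadExecutor();

        for (String message : List.of("order-1", "order-2", "order-3")) {
            consumer.submit(() -> {
                System.out.println("Consuming " + message);
                try {
                    Thread.sleep(200); // simulate slow processing
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        consumer.shutdown();
        consumer.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Each message waits for the previous one to finish — exactly the bottleneck I want to reproduce in JMS.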
To achieve this, some random Googling, this post on Francesco’s blog, and some vague memories from a project that I worked on 3 years ago, have led me towards an activation config property called maxSession
.
maxSession
seems to control how many queue consumers a Message-Driven Bean will have (an MDB is a class that receives JMS messages in Java EE).
It seems that you can define this property maxSession
as an annotation on your MDB, like this:
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(name = "KitchenMDB", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "queue/sandwichshop.kitchen"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
        @ActivationConfigProperty(propertyName = "maxSession", propertyValue = "1")
})
public class KitchenMDB implements MessageListener {

    public void onMessage(Message message) {
        // DO SOMETHING THAT TAKES AGES HERE....
        // e.g. crudely simulate slow processing with Thread.sleep():
        try {
            Thread.sleep(10_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Did I try this? Yes.
I tried setting maxSession
to 1. Then I put a whole load of messages in my MDB’s inbound queue (called sandwichshop.kitchen
in this example code), and sure enough, it resulted in my messages being processed sequentially… one-by-one…. very…. slowly. Result!
But why does this trick work? Where does maxSession
even come from? What are activation config properties? And should I just blindly add a config property to my app without knowing what it does?
I wanted to find out more.
maxSession
doesn’t seem to be heavily documented anywhere. It’s not an “official” keyword in Java EE, nor does it seem to be a Wildfly/JBoss property.
I couldn’t find any reference to it anywhere in Oracle’s Java EE reference. Perhaps I was looking in the wrong places.
I know that @MessageDriven
and @ActivationConfigProperty
come from the EJB specification, because they’re located in the javax.ejb
package. These annotations are also documented at javaee.github.io, which is the archive for legacy Java EE docs.
So I can use these annotations in my project by adding JBoss’s own packaging of the EJB APIs as a dependency, using Maven:
<dependency>
    <groupId>org.jboss.spec.javax.ejb</groupId>
    <artifactId>jboss-ejb-api_3.2_spec</artifactId>
    <scope>provided</scope>
</dependency>
But this dependency gives me the APIs and annotations only. The actual implementation (the “thing” that will respond to the annotations) will be provided by JBoss itself, when the application server runs.
So I’m still curious. Which component is receiving maxSession
, and what is it going to do with it?
I started with a GitHub search for “maxSession”.
(This is my favourite way of trying to understand a codebase – find a keyword that you can search for, like a variable name, or a method name, or a class name – and then see where it pops up on GitHub.)
And, I think I found it. I tracked down this maxSession
property to a class in ActiveMQ Artemis called ActiveMQActivationSpec:
/**
* The maximum number of sessions
*/
private Integer maxSession;
OK, seems promising.
This declaration isn’t useful on its own, so I traced it back a bit. I wanted to find any other code that uses this field’s getter method, getMaxSession()
.
And it seems that it gets called from another class, ActiveMQActivation:
protected synchronized void setup() throws Exception {
    // ....
    // HMM, THIS LOOKS INTERESTING 😬
    for (int i = 0; i < spec.getMaxSession(); i++) {
        ClientSessionFactory cf = null;
        ClientSession session = null;
        try {
            cf = factory.getServerLocator().createSessionFactory();
            session = setupSession(cf);
            ActiveMQMessageHandler handler = new ActiveMQMessageHandler(factory, this, ra.getTM(), (ClientSessionInternal) session, cf, i);
            handler.setup();
            handlers.add(handler);
        } catch (Exception e) {
            // ... (error handling omitted from this excerpt)
        }
    }
    // ....
}
This bit of code seems to do some ActiveMQ connection-setuppy stuff. The important thing for me was seeing the spec.getMaxSession()
used in a loop.
(This code is taken from Artemis 2.16.0, but a more recent version of this code has changed to allow multiple Sessions to share the same Connection.)
When combined with a bit of clicking around into other methods, it seems that this method creates a bunch of Artemis ClientSession
objects, up to the number given in maxSession
.
💡 ClientSession
means something specific in Artemis lingo: it’s “a single-thread object required for producing and consuming messages.” Producers and Consumers, the objects that send and receive messages, are created inside a ClientSession.
So by setting the maxSession
property, I should be able to limit the number of sessions that are opened to ActiveMQ Artemis.
After doing the digging, and looking at the code, I’m starting to build up a better picture now:
ActiveMQ Artemis is the message broker that runs inside JBoss.
Artemis comes with a resource adapter (RA) which can be deployed inside Java EE application servers.
The resource adapter’s job is to “mediate communication between the Java EE server and the EIS by means of contracts” (according to the Java EE docs, which are dry, so very dry)
The Artemis RA is called… “activemq-ra”, and it complies with this Resource Adapter spec by implementing javax.resource.spi.ResourceAdapter
.
When the Artemis RA is activated (which I assume happens when I deploy my WAR file to JBoss??), it calls the setup()
method I talked about above. The method creates some connections, in a loop, up to the number given in maxSession
.
The Consumers inside the ClientSession objects deliver messages from the queue, to a pool of Message-Driven Beans (MDBs).
Here’s a sketch from Excalidraw which kind of visualises all of that:
In this sketch, the maxSession
property is set to 3 so the Resource Adapter creates 3 ClientSession
objects.
What did I learn here?
On a Message-Driven Bean (MDB), you can set some extra runtime properties using the @ActivationConfigProperty
annotation.
These properties are actually passed on to a Resource Adapter. A Resource Adapter is a Java EE abstraction (basically an interface) which describes objects that can interact with things like message brokers.
The RA’s job is to create and manage the connections to the message broker (or other external system).
The ActiveMQ Resource Adapter implements the ResourceAdapter
interface. It runs inside Wildfly, and it sets up the connections, Producers and Consumers for your applications. These objects are stored in an object called a ClientSession
.
maxSession
is just one of several properties that can be set on the ActiveMQ Resource Adapter, and it controls the number of ClientSession
objects that are created to ActiveMQ Artemis. Therefore this setting can be used to ultimately limit the number of simultaneous message consumers.
I could probably add more @ActivationConfigProperty
values, if I needed even more control over the ActiveMQ Resource Adapter.
Voila!
I think this is a good enough model for me for now. I’m sure it’s not perfect, but I think I have a better idea of how maxSession works. Do let me know in the comments if you think I’ve got something wrong.
If you want to see the demo application that I talked about in this article, check it out on my GitHub:
So I registered the domain name, and used the project as an excuse to learn about AWS Lambda, server-side JavaScript and Serverless Framework.
After a little while, I finished a basic bare-bones version of the app. And used it a few times, for real! Then, it sat dormant for about six months, while I spent time working on other things like blogs.
Fast-forward to November 2021. I wanted to update it. I wanted to add some new features. I looked at the old code with dread.
I’m going to have to learn JavaScript again, I thought. Who even wrote this code?! How do I deploy it to AWS again? What’s this new feature on serverless.com?
Oh dear, a long road ahead…
When I have something difficult to do, I like to go ~~waste some time~~ get some inspiration on a message board.
So I was reading this thread - “Solo-preneurs, how do you DevOps to save time?” on Hacker News (HN is actually my favourite goldmine of info and opinion from people who are actually doing this stuff for real, not just throwing up YouTube tutorials).
And someone replied with their deployment technique:
I never realised I was using spooky arcane oldhat stuff! I feel wizardly now.
My projects (for small clients and myself) basically use this.
- A “build.sh” script that does a local build of back end and front end
- A “deploy.sh” script that scp’s everything to the server (either a digital ocean VPS or an EC2 instance), runs npm install, runs database migrations and restarts with pm2
So running my entire CI pipeline in terminal is: ./build.sh && ./deploy.sh
Far away in the real world, people are still using simple tools to create things. You don’t need to use the shiny-shiny stuff. Keep it simple.
A lightbulb moment.
What does ‘simple’ look like, in my case?
For me, it probably means using tools and frameworks that I already know. Choosing boring technology. And making the build and deployment steps almost too easy. So I can focus on shipping new features quickly.
So I threw my toys out of the pram and rewrote the app in Java. Yes, Java.
For me, Java is my bread-and-butter. I’m not a hardcore Java developer, but I know enough to build simple things. And in the Java world, Spring still reigns supreme. It’s a framework for building just about anything - APIs, web apps, reactive applications.
Let me convince you (I’m also talking to myself) why it’s great.
Spring has well-maintained official docs. In fact the documentation is so good, that it’s boring. There is a Javadoc page for every class and method. Every feature is described in detail, down to absolute minutiae. It’s gold. There’s also a ton of unit tests to learn from.
👍 Goodbye scouring the internet for half-baked tutorials…. hello well-written documentation.
It’s battle-tested, moves slowly and comes with batteries included. I don’t have to waste time figuring out which Node module I should use to do X or Y. (You know, that fun activity of trying to find the module that everybody else uses… the one that’s fairly stable, but hasn’t been compromised with crippling malware.)
👍 Goodbye struggling to figure out which Node packages I need… hello to everything being included in curated dependencies, with versions that work together.
Most of the big software problems have been solved, funded primarily by the deep pockets of big tech’s customers. There are patterns, examples and stable libraries. So why not just leverage all that hard work, and build something cool with it?
👍 Goodbye cobbling together a solution with Pritt-stick and toilet paper… hello to following convention-over-configuration.
So, I rewrote my entire app from scratch:
The JavaScript/Lambda backend became… a Spring Boot application.✨
The frontend got merged with the backend… to produce a plain old monolith (which I’m calling “POMO”). 🗿✨ (wow monolith)
The data moved from the awesome-but-confusing DynamoDB… into PostgreSQL✨, using Spring Data JDBC.
Looking at it now, I can’t believe I didn’t think of it before. I get to build something useful, in a mature and stable ecosystem. Spring gives me the features I need, from database migrations with Flyway, to REST APIs. And it’s not going to change drastically overnight either.
What about the frontend? I’m not building a Netflix microservice. It’s just a CRUD app with some cheap lipstick.
Do I need to separate the frontend and backend, and create a fancy single-page web application? Probably not. (Although I did have fun developing a SPA with Svelte.)
Server-rendered HTML is the boring, old school way to do it. It has fallen out of fashion, but it’s still around. And you can still achieve a lot with it.
In Spring, the modern option for server-rendered HTML is Thymeleaf templates. I’m learning how to do it right with Wim Deblauwe’s excellent Taming Thymeleaf book.
When I want to make a change to a screen in the app, I just change the HTML in the template file. It’s worryingly simple.
And I can even make code changes with automatic reload in the browser. Now my web browser reloads the page whenever I change a template.
You’re probably thinking: “But this is all static HTML, it’s not very modern”. Well, that may be true, although I’ll defend my minimal HTML and CSS, to the death, with a tiny sword. But if I need to add a little more sugar to the UI, I can just use htmx. It’s a tiny (~10kb!) JavaScript library for performing simple AJAX requests. It’s the icing on the proverbial cake. And I don’t even need to write any JavaScript.
Once I’ve finished developing a new feature, I build a fat-jar by running mvn clean package
on my laptop.
When I first started rewriting this app in Java, I thought I needed a CI/CD pipeline. I wasted time pondering where to run the pipeline, and how I would wire it up to my target server.
So I abandoned that. I’m a company of one. I don’t need to add that complexity just yet.
Instead, I just rsync
the jar to the server. Ha ha ha. Rsync! People literally point and laugh at me on the street for this. What a simple fool I am.
How does the app run? I thought that I would need to run it in a container. (You can probably guess where this is going..) I spent a lot of time thinking about it, too. Where do I build the image? Should I use a registry? But where can I store private images without paying? Should I use Docker or Podman?
Well, I abandoned all that, too.
A JAR and a database is fine.
So I installed a JRE and a database on the target server. My Linux distribution, CentOS, comes with stable versions of OpenJDK and PostgreSQL in its repositories, so I just use those.
I run everything on a cheap server from Linode or Hetzner (with plenty of capacity for other apps too).
Then I run it on the server using java -jar
. Done. Spring Boot runs any database migrations, and it starts the app.
I mean, it’s so laughably simple, and cheap, that you should try it for your next pet project.
Can I use Apache Camel to help me receive contact form submissions on this very blog? And can I host it on my own server? And what are the results?
Let’s find out….
I’ve been running this web site for a few years now. It’s a static website, which means it’s a bunch of HTML files served from a server. There’s no PHP, no WordPress, and no dynamic functionality.
I also occasionally receive messages from lovely readers like yourself, in the time-honoured way: through an HTML contact form.
Contact form submissions have to be processed by something. Some sort of API or service which can take the form data and send it to my email address.
Previously, I used a third-party service which processes contact forms (using their starter/free tier). But the service seemed to stop working, without me knowing.
Nightmare. This is the downside of the API economy. You’re at the mercy of someone else’s service. Especially if you’re on their free tier.
The result: anyone who tried to send me a message during that couple of months never got through, and I didn’t even realise it. If I were a business and I lost a bunch of customer enquiries, I’d be pretty screwed.
There are a couple of alternatives to this:
Pony up for a proper subscription to an app to process contact form submissions. (Maybe if I was a business, yes, but this is a personal blog, so no.)
Install someone else’s script on my little web server. (This is the boring, safe option but too easy, right?)
Develop my own API and self-host it. ✔ Saves money ✔ Gives me something to write about.
So basically I’m going to develop something.
(I know being cloud-native is all about using existing services where possible, but that would make for a boring blog article. ✋)
To give you a bit more background, here’s what I need the contact form API to do:
Receive fields from an HTML form: I need an HTTP endpoint (an API) to receive HTML form fields, and do something.
Send the details by email: When the form has been submitted, I want to send the results to my email address.
Persist the data to a database: I want to save all form submissions, in case anything goes wrong with the email.
Add some basic anti-spam protection - The spam-bots are numerous (there are probably some crawling around this site right now), but I’d like to add some basic protection against those annoying spam messages.
Redirect to a success or error page: Once the processing has finished, the user should be redirected to a page telling them whether everything went OK. We can use the HTTP 303 redirect status code to do it.
Low usage: I don’t get many enquiries via the contact form (most people prefer to leave blog comments), so I’m not designing this with heavy load in mind.
I wrote down these requirements, and then recited them to myself, in a small, informal handing-over ceremony.
So I need to create a small integration, which receives some HTML form data, sends it via email, and then perhaps saves it somewhere, too. For the fun of it. For the blogging. And for a little weekend project to work on.
Wait… did somebody say integration? This sounds like a job for… Apache Camel. 🐪🐪
I’m a Java developer so it’ll be quicker if I use my existing knowledge rather than learning a new language or framework, just for the sake of one small API. So here’s the tech stack:
Quarkus: This is the Java framework I’ll be using. Quarkus is an alternative to Spring Boot, a framework for building Java apps, which comes with a curated set of third-party libraries for web services, integration, ORM, etc. Quarkus also boasts fast startup times, support for running in containers, and the option to compile to a native executable, so bypassing the JVM altogether.
Apache Camel: It’s my tool of choice for integration and APIs. And it runs on Quarkus. Great.
SQLite database: I’m going to use SQLite to store form submissions, because it’s extremely lightweight, and it means that I don’t need to run a dedicated database server. (I’m not expecting a deluge of contact form submissions!)
Here’s what I want the process to look like. This is a rough sketch of the Camel route that I’ll develop:
I started out with the Quarkus app generator at code.quarkus.io. This tool creates a basic Quarkus app:
With the Quarkus app generator, I checked the extensions that I need. These are essentially the Camel components that I am going to use. Each Camel component is packaged into its own extension:
Camel Core
Camel Mail
Camel Log
Camel Velocity
Camel SQL
Camel Direct
Then I download the code and import it into my IDE. Now we’re good to start coding!
To bootstrap Apache Camel in Quarkus, I need to create a new Camel RouteBuilder
class, like this:
src/main/java/xyz/tomd/FormEmailerRouteBuilder.java:
import javax.enterprise.context.ApplicationScoped;

import org.apache.camel.builder.RouteBuilder;

@ApplicationScoped
public class FormEmailerRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Camel routes will go here...
    }
}
This class is annotated with @ApplicationScoped
, which basically defines a bean in CDI (the dependency injection framework that Quarkus uses).
If you’re coming from Spring Boot land, you can think of ApplicationScoped as similar to the @Component
annotation.
When Quarkus starts, it will instantiate this class as a bean. Camel Quarkus will see that it’s a RouteBuilder
class, and will add the routes into the Camel Context.
Now the boilerplate stuff is done! Before I launch head-first into Camel, I’m going to define some business rules for receiving these form submissions……
I have some validation rules for my API. I guess you could call them business rules - they can be defined in code.
I won’t accept just any old crap! I will only accept form submissions if:
The user has filled in all of the required fields.
The user has answered the anti-spam question correctly.
These rules can be expressed in Java code, and I don’t want to clutter up my Camel routes, so I will add them to a Predicate
class.
A Predicate in Camel is just a way to filter or match a message. You can write a Predicate inside a Camel route, or you can write a Predicate separately in a Java method, and call it from a route. I prefer this option because it gives a nice clean separation between routes and business logic.
So I implement Camel’s Predicate
interface, adding my custom logic into the boolean matches(Exchange exchange)
method. This code will validate that all the required fields have been filled in, and the anti-spam question is correct.
A couple of things to explain about this code:
When Apache Camel receives form submissions, it places each form field value in a Header. So the code uses the Exchange.getMessage().getHeader(...)
to check that the required fields exist.
I want my antispam answer, and list of required fields, to be configurable, not hard-coded. So I store these values in Quarkus properties, and inject them using @ConfigProperty
(this feature comes from Eclipse MicroProfile Config):
src/main/java/xyz/tomd/SubmissionValidPredicate.java:
import org.apache.camel.Exchange;
import org.apache.camel.Predicate;
import org.eclipse.microprofile.config.inject.ConfigProperty;

import javax.enterprise.context.ApplicationScoped;
import java.util.Arrays;
import java.util.List;

@ApplicationScoped
public class SubmissionValidPredicate implements Predicate {

    // The name of the antispam field that should be present in the HTML form
    public final static String ANTISPAM_FIELD = "antispam";

    // Reference the list of required fields from application configuration
    @ConfigProperty(name = "fields.required")
    String fieldsRequired;

    // Reference the correct answer to the antispam question
    @ConfigProperty(name = "antispam.answer")
    String antispamAnswer;

    /**
     * This method contains the validation logic.
     * It returns true if the message passes validation, false otherwise.
     */
    @Override
    public boolean matches(Exchange exchange) {
        boolean isValid = true;

        // Loop through all required fields. If any are missing, then it's invalid.
        List<String> required = Arrays.asList(fieldsRequired.split(","));
        for (String field : required) {
            if (exchange.getMessage().getHeader(field, "").equals("")) {
                isValid = false;
            }
        }

        // Also check that the antispam field is correct
        if (!exchange.getMessage().getHeader(ANTISPAM_FIELD, "").equals(antispamAnswer)) {
            isValid = false;
        }

        return isValid;
    }
}
I can now inject this Predicate inside the RouteBuilder. So when Quarkus starts up, it’ll create the bean, and then it’ll inject it where I need it.
Quarkus uses CDI, so that means using the javax.inject.*
way to inject beans:
Add into RouteBuilder:
import org.apache.camel.builder.RouteBuilder;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class FormEmailerRouteBuilder extends RouteBuilder {

    @Inject
    SubmissionValidPredicate submissionIsValid;

    // ....
}
Now it’s time to write the Camel routes. 🐪🐪
Now it’s the fun part. Writing Camel routes.
I want to separate out my form submission handling code, from the API bit. So I’m going to add two Camel routes into the configure()
method in the RouteBuilder
.
Here’s my first route. This first route does the validation, and then passes all valid requests on to another route.
Some notes for ya:
platform-http
is the recommended way to use Quarkus’s embedded web server. This basically wires up a new API endpoint on the web server, and plugs it directly into Camel.
Since I already created the validation logic in the Predicate (in Step 3 - yes I know that was a long time ago now), I can refer to it in my choice-when block. Result: cleaner-looking code!
Regular, boring HTML forms are sent over HTTP using the application/x-www-form-urlencoded
Content Type. It is essentially a list of fields in key=value&key=value...
format. Camel maps these fields to Header values automatically. Yasssssss.
I’m quite lazy so I just return “NOOP” if the user didn’t send a POST request.
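Incidentally, there’s no magic in that field-to-header mapping. A form body is just `key=value` pairs that need URL-decoding, which you can sketch in plain Java. This is for illustration only — Camel’s platform-http component does the equivalent for you:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormBodyDemo {
    // Roughly what happens to a form body before the fields appear as headers:
    // split on '&', split each pair on '=', and URL-decode both halves.
    static Map<String, String> parse(String body) throws UnsupportedEncodingException {
        Map<String, String> fields = new LinkedHashMap<>();
        for (String pair : body.split("&")) {
            String[] kv = pair.split("=", 2);
            fields.put(URLDecoder.decode(kv[0], "UTF-8"),
                       kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "");
        }
        return fields;
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> fields = parse("name=Tom&email=tom%40example.com&message=Hi+there");
        System.out.println(fields);
    }
}
```

Each of those keys (`name`, `email`, `message`) is what ends up as a header in the Camel Exchange.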
The first route that goes in the RouteBuilder class:
from("platform-http:/?httpMethodRestrict=GET,POST")
    .choice()
        .when(header(Exchange.HTTP_METHOD).isEqualTo(constant("POST")))
            .log("Received POST submission")
            .choice()
                // Check the submission against the validation predicate
                .when(submissionIsValid)
                    .log("Passed validation and antispam challenge")
                    .to("direct:process-valid-response")
                .otherwise()
                    // Redirect to an 'invalid' page if the user hasn't passed the antispam challenge
                    .log("Submission failed validation or antispam challenge")
                    .removeHeaders("*")
                    .setHeader("Location", simple("{{redirect.fail}}"))
                    .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(303)) // Redirect 303 'See Other'
                    .transform(constant(""))
            .endChoice()
        .otherwise()
            .transform(constant("NOOP"));
Next, I add the second route. This picks up where the first route finishes. This is where the integration part happens! It processes the message:
The submission gets saved into the SQLite database using a simple SQL INSERT
statement with the Camel SQL component
The text of the email gets prepared using a Velocity template. This works a bit like mail merge.
Tidy up the headers and send the email using SMTP, with a configurable URL that I’ve set up in my application.properties
. The URL looks like this: smtp.uri=smtps:smtp.example.com:465?username=postmaster@example.com&password=xxxxxx
Finally, all of the irrelevant headers are stripped from the response, and the user is issued an HTTP 303 response code, which tells their browser to redirect them to a thank-you page.
from("direct:process-valid-response")
    .setHeader("timestamp", simple("${date:now:yyyy-MM-dd'T'HH:mm:ss.SSSXXX}"))

    // Insert into SQLite here, in case the email doesn't send
    .to("sql:insert into responses (sender_name, sender_email, message, received) values (:#name, :#email, :#message, :#timestamp)")
    .log("Saved into DB")

    // Prepare the email content
    .to("velocity:email.vm")

    // Send mail
    .removeHeaders("*", "email", "timestamp")
    .setHeader("To", simple("{{mail.to}}"))
    .setHeader("From", simple("{{mail.from}}"))
    .setHeader("Reply-To", simple("${header.email}"))
    .setHeader("Subject", simple("{{mail.subject}}"))
    .to("{{smtp.uri}}")
    .log("Sent email to {{mail.to}}")

    // Prepare the response
    .removeHeaders("*")
    .setHeader("Location", simple("{{redirect.success}}"))
    .setHeader(Exchange.HTTP_RESPONSE_CODE, constant(303)) // Redirect 303 'See Other' after form submission
    .transform(constant(""));
With those two routes, I’ll receive form submissions over HTTP, save them to a database, and send them on by email.
Wow, you’ve made it this far. You now know the back story, and you’ve seen how I designed the app and created the Camel routes. Now you’ve seen how this Camel on Quarkus app was built, you can go check it out for yourself:
See the completed application on GitHub
In the next article, I’ll write some unit tests for the app, create the HTML form that will send submissions, then deploy and monitor the app!
See you then…..
If you’re just getting started with Apache Camel, it can seem like there is a bewildering choice of options.
A processor? One of the EIP thingies? A message translator?
Transforming data in Camel is usually done in one of the following ways:
Mapping with Java code and Type Converters
Using a specialised Camel component, like XSLT, Bindy or Atlasmap
Marshalling/unmarshalling with data formats, like CSV and JSON
Using a templating engine, like Velocity, Mustache or Freemarker
So how do you know which one to choose? After all, they seem to all do the same thing, don’t they?
Let’s have a look at each of them.
I’m not going to recommend one approach over another here. The intent of this article is to give you a broad overview of the options so that you can dig into the Camel documentation and examples and make a choice.
So let’s take a look!
Also known as… The Java architect’s dream
In this approach, your input and output data need to both be Java objects, and you use Java statements to map between them. When you need to transform from your source type to your target type, you write a method that creates a new target object, and populates it with the relevant fields from the source object. Then, you tell Camel to invoke that method (e.g. using bean
, or perhaps a Processor).
For example, you might convert a Lead
object to a Customer
. So you write a transformation method, public static Customer toCustomer(Lead lead):
public static Customer toCustomer(Lead lead) {
    Customer customer = new Customer();
    customer.setName(lead.getName());
    customer.setCompany(lead.getCompany());
    customer.setCity(lead.getCity());
    return customer;
}
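To see that this really is just a regular Java method, here’s a self-contained sketch you can run outside of Camel. The `Lead` and `Customer` classes here are minimal stand-ins for your real domain classes, and the `.bean(...)` wiring in the comment is a typical way you might hook it into a route:

```java
// Hypothetical stand-ins for the real domain classes, just to make this runnable
class Lead {
    private final String name, company, city;
    Lead(String name, String company, String city) {
        this.name = name;
        this.company = company;
        this.city = city;
    }
    public String getName() { return name; }
    public String getCompany() { return company; }
    public String getCity() { return city; }
}

class Customer {
    private String name, company, city;
    public void setName(String name) { this.name = name; }
    public void setCompany(String company) { this.company = company; }
    public void setCity(String city) { this.city = city; }
    @Override
    public String toString() { return name + " / " + company + " / " + city; }
}

public class LeadMapperDemo {
    // The same field-by-field transformation method as above
    public static Customer toCustomer(Lead lead) {
        Customer customer = new Customer();
        customer.setName(lead.getName());
        customer.setCompany(lead.getCompany());
        customer.setCity(lead.getCity());
        return customer;
    }

    public static void main(String[] args) {
        System.out.println(toCustomer(new Lead("Ada", "Analytical Ltd", "London")));
        // In a Camel route, you'd hand this method to Camel with something like:
        //   from("direct:leads").bean(LeadMapperDemo.class, "toCustomer")
    }
}
```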
If your source data comes into Camel as a text-based format (like 🖹 XML 🖹), then firstly you need to unmarshal from that text-based format, into a Java object, like a POJO.
This approach is proper, full-on Java. It often requires writing lots of Java code. However, it’s also strongly typed. In my opinion this is a good thing. The great thing about Java’s type system is that you get built-in “type safety” – so, for example, you can’t put a String
value into a slot that’s designed for an Integer
.
There are some tools that can make this process a little easier. If you’re working with XML, then you can use Java’s JAXB (implemented in tools like Apache CXF) to create Java classes from your XML schema and then convert that XML object into Java.
Once you’ve written your transformation code, you can call it from Camel, just like any other regular Java method.
But you can make it more… Camelly… by registering it as a TypeConverter. This adds your code to Camel’s library of adaptors that can convert Java objects from TypeA
to TypeB
. When you write your own transformation code, you can register it as a TypeConverter, so that Camel knows how it can convert from your Lead
to a Customer
.
To do this you just add the @Converter
annotation to your class:
import org.apache.camel.Converter;
@Converter
public class CustomerConverter { ... }
You also add your full class name into the file resources/META-INF/services/org/apache/camel/TypeConverter
:
com.cleverbuilder.cameldemos.typeconverters.CustomerConverter
Now you can use .convertBodyTo(Customer.class)
in your Camel routes, and Camel will execute the type converter.
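Under the hood, you can think of the type converter registry as a lookup table keyed on a (from-type, to-type) pair. This toy sketch is not Camel code — just an illustration of the idea behind `convertBodyTo()`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy sketch of what a type-converter registry does conceptually:
// convertBodyTo(SomeType.class) looks up a converter by (from, to) pair and applies it.
public class ToyConverterRegistry {
    private final Map<String, Function<Object, Object>> converters = new HashMap<>();

    <F, T> void register(Class<F> from, Class<T> to, Function<F, T> fn) {
        @SuppressWarnings("unchecked")
        Function<Object, Object> erased = (Function<Object, Object>) (Function<?, ?>) fn;
        converters.put(from.getName() + "->" + to.getName(), erased);
    }

    <T> T convertTo(Class<T> to, Object body) {
        Function<Object, Object> fn = converters.get(body.getClass().getName() + "->" + to.getName());
        if (fn == null) throw new IllegalArgumentException("No converter registered");
        return to.cast(fn.apply(body));
    }

    public static void main(String[] args) {
        ToyConverterRegistry registry = new ToyConverterRegistry();
        // Register a String -> Integer converter, then "convert the body" with it
        registry.register(String.class, Integer.class, Integer::valueOf);
        Integer result = registry.convertTo(Integer.class, "42");
        System.out.println(result);
    }
}
```

Camel’s real registry is far more sophisticated (it scans for `@Converter` classes, falls back through converter chains, and so on), but the basic shape — look up by type pair, apply — is the same.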
Also known as…. Handing the job to a specialist
Writing Java code is “fine”. Although I wouldn’t call it “pleasant”, unless you really enjoy writing reams and reams of boilerplate code. So you might wonder if there are tools that can help you transform from one format to another – without writing all this boilerplate code yourself.
That’s where the specialist transformation components come in. They’re a bunch of workers who you can call on to get a specific job done. And unlike some workers, they generally turn up on time, and don’t say “I can’t do this for you, because I’m not qualified mate. But I will still be charging you for this visit.”
Sorry. Rant.
When you initialise one of these components – either explicitly in code, or perhaps automatically in a framework like Spring Boot – you get an Endpoint that you can push your message through, as part of a route. When the message goes through the Endpoint, the component kicks in, transforms your message, and returns the output.
This makes your Camel route look something like this:
from("file:somewhere...")
.to("some-component:blah?someConfig=1&anotherConfig=true")
.to("file.....")
As you can see, your Camel code looks rather clean. Now you’re using Camel as an orchestrator, handing off to other components to do the work.
So, pray, what are these 🎠 magical components 🎠 which you can use? Some good examples of these transformation components in Camel are:
Component | What it does | Based on |
---|---|---|
XSLT | Transforms XML to XML directly, using an XSLT transformation file. You’ll know one of these when you see it. | |
XSLT Saxon | Use this component if you want to use the Saxon library. Gives you a few fancier transforms and perhaps better performance. | Saxon-HE |
Dozer | Dozer is for Java-to-Java mapping. It replaces all of that boilerplate code above, with an XML mapping file. (Whether that’s a good thing is debatable :-)) | Dozer |
Atlasmap | This is the new kid on the block. It can map between JSON, XML and Java objects. Just like Dozer, it works with a mapping file. You can create mapping files in your dev environment using the Atlasmap extension for Visual Studio Code | Atlasmap |
Also known as… The marshallers and unmarshallers.
This is another piece of the puzzle. Data formats in Camel are utility layers for dealing with things like Zip files, Avro, Base64 encoded files, HL7 (Healthcare) files, JSON, Protobuf and much more.
Once you’ve configured a data format, you can plug it into a marshal or unmarshal step, like this:
import org.apache.camel.dataformat.csv.CsvDataFormat;
import org.apache.camel.model.dataformat.JsonDataFormat;
import org.apache.camel.model.dataformat.JsonLibrary;

// Set up a CSV data format
CsvDataFormat csv = new CsvDataFormat();

// Set up a simple JSON output format, using the Jackson library
JsonDataFormat json = new JsonDataFormat(JsonLibrary.Jackson);

// Convert an incoming CSV into JSON
from("direct:start")
    .unmarshal(csv) // unmarshal from CSV
    .marshal(json)  // marshal out to JSON
    .to("direct:end");
Bindy is a customisable data format that can handle fixed-length records, FIX messages, and variable-width (e.g. CSV) files. Bindy is a damn cool library that will help you out of a hole, especially if you’re working with some rather unusual or old file formats.
With Bindy, as long as you can establish the rules of the file format – like how many characters the first column is, what the separators are, etc – you can handle it with Camel.
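As a sketch of what this looks like for a CSV file – the record class and its fields here are my own invented example, not from a real project:

```java
import org.apache.camel.dataformat.bindy.annotation.CsvRecord;
import org.apache.camel.dataformat.bindy.annotation.DataField;

// Each line of the CSV is unmarshalled into one of these objects
@CsvRecord(separator = ",")
public class OrderRecord {

    @DataField(pos = 1)
    private String orderId;

    @DataField(pos = 2)
    private String product;

    @DataField(pos = 3)
    private int quantity;

    // getters and setters omitted
}
```

Then a route can unmarshal with something like .unmarshal().bindy(BindyType.Csv, OrderRecord.class).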
See a demo of Bindy in action.
Also known as… I need to produce a report for The Boss
The final option I’m going to introduce in this article is the templating engines. These are fantastic if you need to build any sort of text output that’s going to be seen by people.
Think things like emails, documents, web pages …. Just like mail merge, template engines can take a template, and some source data and produce a document.
If you’re still not sure what I mean, then I’m talking about support for things like Velocity and Mustache.
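If it helps, here’s the core idea sketched in plain Java – no Camel, no real templating engine, just a naive placeholder substitution. This is roughly what engines like Velocity do, plus loops, conditionals, escaping and much more:

```java
import java.util.Map;

public class TemplateSketch {

    // Substitute values from a data map into ${...} placeholders.
    // A toy stand-in for what a real templating engine does.
    public static String renderTemplate(String template, Map<String, String> values) {
        String result = template;
        for (Map.Entry<String, String> entry : values.entrySet()) {
            result = result.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String letter = renderTemplate(
                "Dear ${name}, your order ${orderId} has shipped.",
                Map.of("name", "Ada", "orderId", "42"));
        System.out.println(letter); // Dear Ada, your order 42 has shipped.
    }
}
```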
In Camel, the templating engine reads values from your Exchange. So it can access things like the message Body and Headers.
Camel has components for a few templating engines. I’ve not worked with all of these so I don’t know the differences, but if you need something specific, check with the documentation of each of them first:
Chunk
Freemarker
Mustache
MVEL
Velocity
And that’s an overview of transformation in Apache Camel. You’ve got a lot of different options, depending on exactly what you need Camel to do.
]]>Along the way, I’ve learned a lot about blogging.
During my blogging journey, I’ve learned that I love writing articles for you lot. I enjoy learning and explaining things, and sharing my knowledge with others.
But more recently, I’ve begun to realise that there is a wealth of tech topics which I want to write about, and the content doesn’t really fit a “personal” website anymore.
In 2019, I bought the domain tomd.xyz and moved everything under this domain, to make all my content “personal”.
But I’ve since realised that writing all my articles under the name “Tom Donohue’s Blog” is a bit restrictive. For example, how do I separate tech content, from personal blog posts, like this one?
So, a bit of a back-track – or pivot, you could say.
I’m launching a new site for my tech content. I want to let the new site speak for itself, and be a general hub for tech tutorials.
In short: I’m changing things up a bit.
Just a few weeks ago, November 2020, I launched a new website, called Tutorial Works:
Tutorial Works is all about tech content, tutorials and blog articles. Actionable stuff that you can use as a software engineer or architect.
It’s still in its early stages right now, but I’ve got a lot of knowledge that I want to share on there.
Tutorial Works will be similar to my blog, but it’ll be covering a wider range of topics, and going much wider and deeper.
I’ll be writing about things like DevOps, Java, Kubernetes and containers, as I do now. And, I’ll also be writing about cloud providers, serverless, tech culture and much more.
Starting a new blog is difficult these days – or rather, getting traffic to a new blog is.
Google – the ultimate judge and jury of websites, in many ways – doesn’t just index your content, and then throw you onto the first page of the search results the next day.
Instead, Google prefers to wait and see whether your new website is any good. This is the so-called “Google sandbox” effect. It can sometimes take several months before Google decides that a new website should earn a place in the Google search results.
If you’re just starting your own developer blog, then persevere during this stage! It can be disheartening when it seems like Google doesn’t want to rank the content that you’ve poured hours and energy into writing.
In fact, right now, the site doesn’t have many visitors from Google at all:
Given the effects of the Google sandbox, I’m not expecting much traffic. At the moment, I’m in the content “seeding” phase. If I have an idea for a new blog post, I’m currently writing it for Tutorial Works, not here.
This means there’s already a good selection of content over on the new site which I think you’ll find useful.
For example, in my most recent post, Kubernetes Learning Resources, I do a rundown of all my favourite tools for learning Kubernetes. The list includes all the resources that I’ve been recommending to people over the past few months.
So it’s a long road, and this is the beginning.
“Utopia?!” CRINGE. OK, bear with me.
I wrote an article earlier this year about taking action. Experimentation and trying new things are two of the best ways to learn.
You can read all of the content and blogs that you like, about creating. But, if you really want to find out whether something will work, the only way forward is to actually create it. Start something today.
The great thing about the internet is that you can start a website for free. You can write content and hit publish. Fortunately, big tech hasn’t managed to take this right away from us, yet.
I could continue writing content for this personal blog, or I could exercise my right, and try creating something new and see what happens.
It might succeed, it might not. I’m probably going to be publishing in the dark for months.
But, as with this blog, I want to create the kind of website that I would find useful. As Derek Sivers says in one of my favourite books, Anything You Want:
“When you make a business, you get to make a little universe where you control all the laws. This is your utopia.”
Substitute the word “business” for “website”, and that’s what I’m aiming for.
Very little is going to change around here. I’ll still be publishing articles on this blog, and answering your comments and questions.
My Camel articles are not going away. 😄
But, I’m going to be writing a lot more, and most of my new content will be going onto Tutorial Works first.
Firstly, I’m going to be writing more articles over the Christmas holidays (2020) and publishing a lot of this content over there in the coming weeks.
And I’ll be using this, my personal blog, to share more info, in case it inspires you to start your own blogging journey.
I’m really excited to see what happens with the new website and I hope you can join me there!
Want to read my latest content? Head on over to Tutorial Works now.
Thanks for reading this. If you’ve got any questions or it’s sparked any creativity in you, I’d love to hear your comments.
PS. Also, you should totally create your own website. :)
]]>Apache Camel is an integration framework for Java. It’s most suited for situations where you want to fetch data from files or applications, process and combine it with other data, and then move it to another application.
Apache Camel is a great choice when you’re working with data that needs to be shared between systems. This happens when you have data stored in different applications. For example, personnel files might be stored in an HR system, but need to be shared with Finance to be able to process the monthly payroll. Camel acts as this kind of programmatic glue between applications, implemented in Java.
Are you wondering whether you should adopt Apache Camel in your own projects? Read on to find out more, and which kind of projects it’s right for.
Apache Camel is an integration toolkit or framework, written in Java.
Camel comes as a set of libraries and components, and a language (called a DSL) for describing how you want to move data between applications. You can add it to your existing Java application or you can run it in a standalone Java application.
Whenever you need to pull data from an application, remix it, merge it, and route it somewhere else, you can use Camel. Camel does this by providing:
a language (DSL) for writing your data flows, which are called routes in Camel
a set of patterns for implementing common things like error handling and transformation, which can be added to your Camel routes
a set of 300+ components for connecting to hundreds of different applications and protocols
an embeddable runtime (the “Camel context”) which runs your integrations
Camel is distributed as a set of libraries (JAR files) and is released under the open source Apache License, under the Apache Software Foundation.
I found there are a few major drivers when you would want to use Apache Camel:
Integrating applications together: Camel is intended for situations where you need to move data between different protocols and applications (like files, emails, APIs or web apps).
You can use Apache Camel when you want to move data between any one of the apps and protocols supported by its 300+ components. Components in Camel generally all work in a similar way. So, once you understand how to use one component, you will find it easier to use others. Camel includes components for many different apps, from Facebook and Twitter, to Salesforce and Workday. You can also write your own custom component.
Pattern-based development: Many frequent requirements for integration – like support for transactions, or transformation – would usually be complicated to plan and write in code. But Camel provides many of these, and can often be enabled with just the flick of a switch (OK, by just changing a variable!). Camel provides patterns and functionality for things like:
routing data based on its content, using Camel’s content-based routing
handling errors, transactions and rollbacks
transforming data
caching frequently-accessed data
encryption and authentication
These are examples of just some of the things that Camel can do.
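To give one concrete example: here’s roughly what content-based routing looks like in Camel’s Java DSL. The queue names and the header are made up for illustration:

```java
// Route orders to different queues, based on a message header
from("jms:queue:orders")
    .choice()
        .when(header("orderType").isEqualTo("priority"))
            .to("jms:queue:orders.priority")
        .otherwise()
            .to("jms:queue:orders.standard")
    .end();
```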
These requirements are made easier in Camel, because it provides these features as a set of patterns, called enterprise integration patterns (after the book of the same name). You can pull any of these enterprise integration patterns “off the shelf”, and use them in your code, without having to write your own solution every time you need these capabilities.
This toolbox of patterns can make Camel a very productive way to write integration glue code, when you need to connect distributed systems.
One high-level style for many integrations: Once you’ve learned the basic patterns, and how to work with Camel components, you’ll find that it becomes easy to churn out many integrations in Camel.
This is an advantage of Camel: the ability to create many integrations fairly quickly. Camel is ideal if you are developing a set of integrations, and you would like them all to be developed in a similar way. This can be a really attractive option in larger companies, where it helps to pick one approach which is shared and understood by the development team.
Working with data, and especially Java objects: As it’s a Java framework, it’s especially good at working with Java objects. So if you’re working with a file format that can be de-serialised into a Java object (many file formats can be converted into Java objects, like XML, JSON….) then it will be handled easily by Camel.
What sort of real-world examples are there? How is Camel used in the wild?
Here’s just a smattering of projects that I’ve either seen or been involved with. In each of these projects, companies are using Camel to achieve a certain goal. I’ve also included the related technologies in brackets:
Process and route data: Process customer orders and route them on to a database (Camel on Spring Boot with ActiveMQ Artemis and Qpid Dispatch Router)
Process web-submitted data: Receive and process/transform agricultural survey forms, and then insert into a database (Camel on Apache Karaf)
Financial transaction processing using message queues: Process financial transactions and route them to the correct department (Camel on Apache Karaf)
Put a gateway in front of your APIs: Implement a lightweight API Gateway, which authenticates and routes messages to the right API (Apache Camel on Spring Boot)
Data distribution: Poll for changes in HR data in a SaaS application, and distribute the changes to many downstream apps and files (Apache Camel on Spring Boot on Kubernetes)
Back-end order processing: Process car orders and car service requests (Apache Camel on Spring Boot, on Kubernetes)
Modernising legacy APIs: Expose data from a legacy ERP system as a REST API, so that clients can consume it (Apache Camel on Spring Boot, on Kubernetes)
Ad hoc data processing: Generate a sales report every day from data in a database (Apache Camel on Spring Boot)
Generally the best use cases for Camel are where you have a source of data that you want to consume from – e.g. incoming messages on a queue, or fetching data from an API – and a target, where you want to send the data to.
So when shouldn’t you use Camel? There’s a time and a place for everything, and I think Camel is great but it can’t do it all:
For heavy data transformation: Although Camel is great at connecting to lots of different applications, it’s not designed for heavy data transformation and analysis. If you have a really data-heavy workflow, where you need to do lots of intensive merging and processing of data – for example like batch processing or ETL – then I think there are other tools which are better for the job.
But, Camel is still great at orchestrating several steps together into a flow. So you might still consider using Camel as the orchestrator for your data transformation processes.
If you only need to write one single integration ‘flow’: Camel is fairly lightweight, and it has got even more lightweight in recent releases (3.x onwards). But it’s probably not worth learning all the patterns and the whole Camel approach to integration development if you just need to write one integration flow – in that case, you’re maybe better off writing the integration code yourself.
If you don’t have any Java skills in the team: As Camel is a Java framework, it requires some Java knowledge to use it. It doesn’t require much if you use the XML based DSL (language) for configuring it, but it still helps to have some Java knowledge to understand the concepts, because it usually runs on a Java Virtual Machine (JVM). This knowledge is especially useful so that you can troubleshoot when things go wrong: knowing how to read a stack trace always comes in handy!
Apache Camel is free and open source. Getting started with Camel usually involves either creating a new Java application containing Camel, or adding it to your existing Java application.
Camel can be deployed in lots of different ways:
Embedded in Spring Boot applications and microservices
Embedded in Quarkus applications, for serverless and containers
Embedded in Apache Tomcat
Deployed into Apache Karaf, the OSGi container
Deployed into WildFly, the Java application server
Run natively on Kubernetes using Camel-K
As a simple, standalone Java app with a main() method
To get started, you add the Apache Camel dependencies to your application, create a Camel Context, write your integration routes in Camel’s DSL (either Java or XML), and then start the Context.
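As a rough sketch of those steps in a plain standalone Java app (Camel 3.x style – the timer endpoint is just an example source):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class MyCamelApp {

    public static void main(String[] args) throws Exception {
        // Create the Camel Context – the engine that runs your routes
        DefaultCamelContext context = new DefaultCamelContext();

        // Add a route definition, written in the Java DSL
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                from("timer:tick?period=5000")
                    .log("Hello from Camel!");
            }
        });

        // Start the Context; the route begins consuming messages
        context.start();
        Thread.sleep(60_000); // keep the JVM alive for a minute
        context.stop();
    }
}
```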
A good way to create and run a simple Camel application is to read my Camel tutorial, and then check out the Camel examples repository on GitHub, which is packed with useful example apps for you to learn from.
Once you’ve developed your Camel-based integration app, you can deploy it to your server, virtual machine or a cloud environment.
Camel is a fantastic integration framework that I honestly love working with. It’s especially suited to application integration – that means it’s good if you need to consume, transform and route data between applications.
Apache Camel is even more relevant today in the cloud era, where the number of applications in a typical enterprise just seems to be growing bigger and bigger.
So I hope you’ve found this guide useful, and now you know when to use Apache Camel! When you need to think about how to integrate and process data from applications, why not give Camel a try?
]]>But what exactly is a Camel route?
A route in Apache Camel is a sequence of steps, executed in order by Camel, that consume and process a message. A Camel route starts with a consumer, and is followed by a chain of endpoints and processors.
So firstly, a route receives a message, using a consumer – perhaps from a file on disk, or a message queue. Then, Camel executes the rest of the steps in the route, which either process the message in some way, or send it to endpoints (which can include other routes) for further processing.
To create a route in Camel, you first define it in code. This is called a route definition, and it is usually written in Java or XML.
Then, you start Camel, passing it your route definition. Camel reads the route definition and creates the route inside the Camel Context. The Camel Context is an engine which is part of Camel, and which runs your routes.
If you’re using some frameworks (like Spring Boot), Camel will try to discover your routes automatically.
Once the Camel Context has started, you can inspect it to see all of the routes that are running.
To see what a route actually looks like, here’s an example.
First, this route is written in the Java syntax or DSL (Domain-Specific Language). In Java, routes go inside a RouteBuilder class, which has a configure() method that you add your route code into:
import org.apache.camel.builder.RouteBuilder;
public class MyFirstRouteBuilder extends RouteBuilder {
public void configure() throws Exception {
from("jms:queue:HELLO.WORLD")
.log("Received message - ${body}");
}
}
This route uses the JMS component, to receive a message from a queue called HELLO.WORLD. Then, it writes a log message to the console, containing the body of the message (${body}).
Sometimes it can be difficult to understand exactly what a route does, and how it works. So I created this infographic to show how the different concepts in Camel come together in a route.
In this diagram, we have one route. We use the File component to read and write files. We also use a content-based router to filter files based on their filename. The File component helps us by adding headers to the Exchange, which we can use in a Predicate expression.
If you’ve already embedded Camel in your application, then you can send a message to a Camel endpoint by using the Camel API.
To send a message to a Camel route from your code, grab the ProducerTemplate object. Then:
use one of the sendXXX() methods to send a message without waiting for a reply
use one of the requestXXX() methods to send a message and wait for a response
For example, if you were using Spring you might do something like this to send a message asynchronously (fire-and-forget) to an endpoint:
import org.apache.camel.ProducerTemplate;
import org.springframework.beans.factory.annotation.Autowired;

public class InvokeCamelRoute {

    @Autowired
    protected ProducerTemplate template;

    public void invokeRoute() {
        template.sendBody("direct:start", "Your message body goes here");
    }
}
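And if you need a reply back (request-response, rather than fire-and-forget), you’d use one of the requestXXX() methods instead – for example:

```java
// Send a message and block until the route returns a result
String reply = template.requestBody("direct:start", "ping", String.class);
```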
Enterprise integration is the sharing of data or commands between different applications. In some enterprises, there might be dozens, if not hundreds of applications, that all must be connected, for the business to operate - HR systems, finance databases, just to name a couple.
The idea of the book is that enterprise integration problems can be solved using messaging.
By using asynchronous messaging as a backbone or “channel”, integration becomes a series of asynchronous messages between applications.
These messages can be used to trigger events, track processes or transform data.
But the book is over 15 years old now. Is it still useful today?
“there are no simple answers for enterprise integration”
The premise of the book is covered in the first chapter: Solving Integration Problems Using Patterns.
Hohpe and Woolf saw the big ball of spaghetti. Even back in the early 2000s they understood that most companies run many applications and need to connect them together, and that it’s tough to do this.
Fast-forward to today, and companies are running ever more applications, connecting to cloud services and storing data in SaaS products. In other words, the problem still exists, and is growing.
The book suggests that when we connect applications in a synchronous way, such as by using technologies like Web services, we enforce tight coupling between them.
This means that a synchronous interaction can bind two components so closely together, that a failure in one application will cause a severe problem for the other. These tightly-coupled interactions make things, according to the book, “brittle, hard-to-maintain and poorly scalable”.
Instead, asynchronous interactions, the book argues, are more flexible and loose. It talks about the advantages and disadvantages of using this approach, and then suggests a set of patterns that you can apply to the problems you’re trying to solve.
One of the best parts of the book is how it sets the scene first.
If you’re new to integration architecture, you can easily get lost in the terminology, and struggle to find a decent explanation of what people are talking about. Most sources just dive into technical tools, without spending any time to cover the background or explain the reasons why we do what we do.
This is one of the strengths of this book.
It starts with a few introductory chapters on integration and messaging, and then the remainder of the book is largely presented as a list of patterns. So you get to first understand the problem, before diving into the solutions.
The first sections give a good introduction, along with a business scenario as an example, which explains the problems around enterprise integration. It also covers the possible approaches to integration (File Transfer, Shared Database, RPC, Messaging) before settling on messaging for the rest of the book.
Later, when each pattern is introduced, the authors explain the problem it’s trying to solve, giving a detailed explanation and then some sample code. (But remember this book was published in 2004, so don’t expect any modern Java, like Spring!)
And then, the book introduces patterns, one-by-one.
An enterprise integration pattern is an abstract solution which can be applied to the problem of how to connect two applications together.
In the book, you learn basic patterns about messaging itself, like:
Message Channel - this is the “thing” that carries messages. For example: an address or queue on a message broker, like ActiveMQ Artemis, or a Destination in JMS.
Message - a way of wrapping a unit of data to send between applications. For example: a message that you place on a JMS queue.
And then you learn about patterns for processing messages, like:
Splitter - a way of processing a message that contains multiple elements. For example: processing multiple lines in a CSV using Apache Camel’s Splitter component.
Message Translator - how to allow systems to communicate using messaging, even when the systems use different data formats. For example: mapping between Java POJOs using Apache Camel’s Dozer component.
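To make the Splitter pattern concrete, here’s a sketch in Camel’s Java DSL – the endpoints are illustrative:

```java
// Read files from a directory, split each one into lines,
// and send each line to a queue as its own message
from("file:data/inbox")
    .split(body().tokenize("\n"))
        .log("Processing line: ${body}")
        .to("jms:queue:csv.lines")
    .end();
```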
In all, there are over 60 patterns in the book.
I think that you will get a lot of value out of this book if:
You’re a developer who wants to learn about integration or retrain into the area, especially if you are learning Apache Camel
You’re working with, or learning Apache Camel (or Red Hat Fuse, or Talend ESB), or a message broker like ActiveMQ or RabbitMQ
You are an application architect, looking at how to share data between two applications
You are new to messaging, and you want to understand what’s possible, and why people would choose to use message queues
You are an enterprise architect, looking to understand how other people have solved the same problems that you have
You are a systems administrator, managing message brokers and infrastructure, and you want to understand how and why messaging is used
So, onto the pros and cons of Enterprise Integration Patterns. Why should you buy it?
HTTP/synchronous style was popular for a while, but now messaging is finding favour again (Kafka, Event-Driven Architecture). The techniques in this book are semi-timeless and can be applied to problems, 15 years on from publication.
You should absolutely buy this book if you’re going to be working with Apache Camel. You will appreciate much more how Camel works, and what it’s capable of.
The book encourages loose coupling, which is a great school of thought for application architecture, and there are lots of patterns in the book to help you achieve this.
It’s a good introduction to messaging and thinking asynchronously. It will be especially useful if you’re a new enterprise developer, and you might not have covered messaging in your university degree.
15 years is a long time in computing so it’s not “cutting-edge”. Some patterns in the book have evolved since publication, and there are also some clear out-of-date references. The book was published around the time of peak SOAP, so it includes references to related concepts like WS-ReliableMessaging, which we probably wouldn’t think of implementing now.
The book mandates messaging. This might be obvious, given the subtitle of the book is “Designing, Building, and Deploying Messaging Solutions”. But modern integration isn’t just about asynchronous messaging. We have REST and gRPC APIs now, which aren’t the solution to everything, but are a firm part of the integration mix.
Canonical Data Model - this is mentioned throughout. Whoops! This was a bit of an architects’ dream, from the days of centrally-planned architectures. In my experience, it was never practically implemented. How do you make updates to a canonical schema without upsetting everybody? See: Conway’s Law.
Score: ⭐⭐⭐⭐ (4 out of 5)
If you are an integration architect, especially if you are architecting solutions with Apache Camel, I strongly recommend this book.
Although you probably won’t find the code samples useful, the patterns in this book are close to timeless, and can still be applied to problems over 15 years later.
A word of warning though: This book carries a hefty price tag!
But make no mistake, this is a comprehensive book. In my edition, it runs to almost 700 pages. It’s not a book to read cover-to-cover, but a book to dip into and use as a reference.
It sits in a prime position on my bookshelf and I think it’s essential!
Have you read the book? What did you think? Feel free to drop your comments below.
]]>Red Hat Fuse (formerly JBoss Fuse) is Red Hat’s enterprise integration product. It’s a packaging of integration projects like Camel and CXF, on top of a runtime. In Fuse 6.x, the common runtime for Fuse used to be Apache Karaf. But these days, the common way of running Fuse is to deploy it onto OpenShift.
This video is my short tutorial on how to get started with Red Hat Fuse on OpenShift. If you want to know how to run Apache Camel on OpenShift, then this one is for you! You’ll need a Red Hat account (you can get a free developer account) and access to an OpenShift cluster.
This video shows the steps I take when I install Fuse and create a new project for OpenShift. In the video, I cover:
If you liked the video, please give it a thumbs up! I’d like to make more videos like this, so please let me know what you think in the comments below.
]]>