Where do our logs go?
So when you’re developing for containers, where should you put logs?
In server rooms across the globe, there are ancient scripts running which rotate log files and ship the old ones off for storage.
This is fine for classic deployments, where you know the server that will be running your application.
But when developing for a container platform, your app could start on any node (almost anything can be a Kubernetes node these days), so this idea becomes less practical.
You can write a routine in your app to rotate and ship logs. But what happens when one service grows to two? And then four? And then eight?
Before you know it, you’ve got a barnyard full of cloud-based apps, and you need to ship and manage logs for all of them.
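To make that concrete, here's a sketch of the rotation routine each app would otherwise have to carry around, using Python's standard library. The file path and size limits are illustrative assumptions, not a recommendation.

```python
import logging
import logging.handlers
import os
import tempfile

# Illustrative log location; a real deployment would pick a fixed path.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "app.log")

# Rotate at ~1 MB, keeping 5 old files around.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1_000_000, backupCount=5
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request handled")
# Rotation is solved, but shipping the old files off the node is still
# your problem -- and now it's every service's problem.
```

Multiply that by every service in the barnyard and the appeal of handing the job to the platform becomes obvious.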
Streams, everywhere
The approach many people are taking to solve this is to move responsibility for log management away from the application, or even the application server, and make it the responsibility of the platform.
The 12-factor app manifesto, a set of design patterns for modern apps, proposed this solution for logs:
[an] app never concerns itself with routing or storage of its output stream. It should not attempt to write to or manage logfiles.
This means that instead of writing logs to a file, you write to a stream (usually the standard output stream).
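In practice, that can be as simple as printing one structured line per event to stdout and letting the platform do the rest. A minimal sketch (the field names and the `log_event` helper are illustrative, not a fixed schema):

```python
import json
import sys
import time

def log_event(level, message, **fields):
    """Write one structured log line to stdout and return the record.

    No files, no rotation: the platform captures the stream.
    """
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    print(json.dumps(record), file=sys.stdout)
    return record

log_event("INFO", "order accepted", order_id="A1001")
```

Emitting JSON rather than free text is optional, but it makes the indexing step downstream much easier.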
This idea was popularised by Docker, and later adopted by Kubernetes.
Implicitly, it then becomes the responsibility of another component to listen to your log stream, and to archive and index it for you.
It’s become so normalised that logging to stdout is now the default method of logging in Spring Boot, for example. The upshot is that, as a developer, you can stop worrying about where your logs go and instead rely on the platform to handle it for you.
Then you rest easy in the knowledge that your logs are collected into Elasticsearch, Splunk, or whatever datastore is available to you.
Go forth, log to the standard output stream. Diana Ross thanks you.