Microservices are hard, and securing them is harder still. Where do we start? The first words that come to mind are authentication and authorization. Firewall. Trust. Session. Tokens. We need to secure our applications, and we need to secure our containers.
We can build an SSO gateway. Like a normal gateway it sits in front of our applications, but it takes care of the handshake with our SSO server and performs the redirects. We can propagate the authentication and authorization information to the services using HTTP headers, but this is not secure unless we use HTTPS. And as we know, HTTPS comes with certificates. Maintaining certificates for multiple microservices can become problematic, especially when you need to reissue or revoke them. The complexity would grow even more, and we are trying to avoid that.
An alternative is to use an HMAC (hash-based message authentication code). Here the body of the request is hashed together with a private key, and the result is sent along with the request. The server then recreates the hash using its own copy of the private key and the request body, and compares it with the received hash. If everything matches, the message was not tampered with and the request passes. This pattern is used by JWT (JSON Web Tokens) when the token is signed with an HMAC-based algorithm such as HS256. The fact remains, though, that the data itself is still visible to network listeners.
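As a rough sketch of that sign-and-verify flow (stdlib only; the shared key and request body are made up for illustration):

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key, provisioned out of band

def sign(body: bytes) -> str:
    """Client side: hash the request body with the private key and send the result."""
    return hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, received_signature: str) -> bool:
    """Server side: recreate the hash and compare in constant time."""
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)

body = b'{"user": "alice", "action": "read"}'
signature = sign(body)
assert verify(body, signature)                         # untampered request passes
assert not verify(b'{"user": "mallory"}', signature)   # tampered body is rejected
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through timing differences.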
API keys are used by Twitter, Google and others. With API keys the service can identify who is making a call and can apply restrictions to it. Keys come in pairs (public and private) and must be managed centrally.
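A minimal sketch of that central lookup — the registry, key values and restriction fields here are all invented for illustration:

```python
# Hypothetical central registry mapping public API keys to callers
# and the restrictions that apply to them.
API_KEYS = {
    "pk_live_abc123": {"owner": "mobile-app", "rate_limit_per_min": 60},
    "pk_live_def456": {"owner": "partner-x", "rate_limit_per_min": 10},
}

def identify_caller(api_key: str) -> dict:
    """Identify who is making the call and which restrictions apply."""
    caller = API_KEYS.get(api_key)
    if caller is None:
        raise PermissionError("unknown API key")
    return caller
```

In a real system the registry would live in a central service or database rather than an in-process dictionary, so keys can be rotated and revoked in one place.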
Ok. Let’s assume we have everything set up. But we still have a problem: how can we ensure one user cannot access another user’s data? Well, we need more. We can protect against this in our services by making sure a user can only access their own data, but that means boilerplate code in many places. Unfortunately there’s no easy way out. Microservices mean complexity, and some duplication is to be expected.
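The kind of ownership check (and boilerplate) in question might look like this sketch, with made-up field names:

```python
def get_order(order: dict, requesting_user_id: str) -> dict:
    # A user may only read their own data; checks like this end up
    # repeated in every service that serves user-owned resources.
    if order["owner_id"] != requesting_user_id:
        raise PermissionError("users may only access their own data")
    return order

order = {"id": 42, "owner_id": "alice", "total": 10}
assert get_order(order, "alice") is order   # owner gets their data
```

Pulling the check into a shared decorator or middleware reduces the repetition, but some per-service duplication tends to remain.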
Still, we didn’t solve the problem with sniffers. How can we make sure they don’t see the data? Encryption is the answer. Encrypt data when you first receive it and decrypt it on a per-need basis. This means another component for our already complex system: a key management service.
There are many libraries out there, and your best choice here is to build on something mature, based on a proven standard like OAuth or SAML.
We can be stateful. We can keep a global session for all our services, initialized in the gateway and propagated to all other services. The session can be obtained from a central authentication and authorization server (SSO) like Keycloak.
But we want to be stateless. That means we need to transfer the state from the server to the client. JSON Web Tokens (JWT) are a good fit for what we need: a safe way of transferring claims between two parties. Essentially a JWT is a JSON object passed as the payload of a JSON Web Signature (JWS) or as the plaintext of a JSON Web Encryption (JWE). The claims can be digitally signed and contain the issuer, the user’s identity and the expiration time, and may also contain custom attributes.
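To make the structure concrete, here is a minimal HS256 sketch using only the standard library — the secret and claims are invented, and a real system should use a vetted JWT library rather than hand-rolling this:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"uaa-signing-key"  # hypothetical; held by the token issuer

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def create_jwt(claims: dict) -> str:
    """Build header.payload.signature, signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signature = b64url(hmac.new(SECRET, header + b"." + payload, hashlib.sha256).digest())
    return b".".join([header, payload, signature]).decode()

def decode_jwt(token: str) -> dict:
    """Verify the signature, then return the claims."""
    header, payload, signature = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + payload, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, signature):
        raise ValueError("signature mismatch: token is not trustworthy")
    padded = payload + b"=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = create_jwt({"iss": "uaa", "sub": "alice", "exp": int(time.time()) + 300})
assert decode_jwt(token)["sub"] == "alice"
```

Anyone can base64-decode the payload and read the claims; the signature only proves who issued them and that they were not altered, which is why sensitive data still needs JWE or transport encryption.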
For our example the flow will look something like this:
The client app will interact with a UAA (user authentication and authorization) server, exchanging its credentials for a redirect URL containing a JWT. That token will be presented to the gateway, which verifies it against the UAA server. If everything is in order, the gateway forwards the JWT to the requested service, which decodes it and grants access to the requested resource. How can the service decode it? Instead of calling the UAA every time, it asks the server once for a public key (JWK, JSON Web Key) and caches it. With that key it can decode the JWT and know whether the token is trustworthy. This adds another layer of security.
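The "fetch once and cache" part can be sketched as follows — the key fetch here is simulated with a local function; in reality it would be an HTTP call to the UAA's JWK endpoint:

```python
from functools import lru_cache

fetch_count = 0  # only to demonstrate that the UAA is contacted once

@lru_cache(maxsize=1)
def get_verification_key() -> bytes:
    """Simulated call to the UAA's JWK endpoint (hypothetical)."""
    global fetch_count
    fetch_count += 1
    return b"public-key-material-from-the-uaa"

# Every request handled by the service asks for the key,
# but only the first call actually reaches the UAA.
for _ in range(1000):
    key = get_verification_key()
assert fetch_count == 1
```

A production cache would also honour a time-to-live and refetch when the UAA rotates its keys; that detail is omitted here.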
Now that we have some ideas on how to secure our applications, it’s time to look at our containers. Docker is a widely used container platform. What can we do here to secure our containers?
Defense in depth is the answer. This means firewalls. This means being careful with the information we put in the logs (logs are a great way of recovering from an attack). This means monitoring our cluster for suspicious behaviour (Intrusion Detection Systems and Intrusion Prevention Systems). This means segregating the services, putting them in different locations (virtual private clouds) and creating a set of rules that still allows them to communicate with each other. This means keeping the OS up to date (especially the security updates).
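As one concrete (and deliberately simplified) way to express such segregation, a Docker Compose sketch with invented service names might place services on separate networks so that only the gateway spans both:

```yaml
# docker-compose.yml (sketch): hypothetical services segregated onto
# separate networks; only the gateway can reach both sides.
services:
  gateway:
    image: example/gateway
    networks: [frontend, backend]
  web:
    image: example/web
    networks: [frontend]
  orders:
    image: example/orders
    networks: [backend]   # not reachable from the frontend network
networks:
  frontend:
  backend:
```

In a cloud setup the same idea maps to separate VPC subnets with security-group rules between them.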
In his talk at DockerCon 2016, Aaron Grattafiori talked about three principles:
- principle of least privilege, which means not running applications as root
- principle of least surprise
- principle of least access, which means every module should access only the data that is relevant to it
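The least-privilege point translates directly into the Dockerfile. A sketch, with a made-up base image and application:

```dockerfile
# Sketch of a least-privilege image (base image and app name are hypothetical)
FROM eclipse-temurin:17-jre
COPY app.jar /app/app.jar
# Create an unprivileged system user and drop root before the app starts
RUN useradd --system --no-create-home appuser
USER appuser
CMD ["java", "-jar", "/app/app.jar"]
```

Everything after the `USER` instruction, including the running application, executes as `appuser` instead of root.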
This post goes into more detail on this. OK, but assuming we have this covered too, the next question that comes to mind is: are the Docker images that we use secure? How do we know they do not contain malicious code? Fortunately for us this comes out of the box since Docker 1.8, which introduced Docker Content Trust: publishers sign their images when pushing them to the registry, and the signature is verified before an image is pulled.
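Enabling it is a matter of an environment variable; a quick sketch (the registry and image names are placeholders):

```shell
# Opt in to Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pulls now fail unless the image has a valid signature from its publisher
docker pull alpine:3.19

# Pushes are signed with the publisher's keys
docker push registry.example.com/myapp:1.0
```

With the variable unset (the default), signatures are neither created nor checked.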
A useful tool for securing Docker networking is Calico. Each Docker network gets a Calico profile, and rules and policies must be set in order to control the traffic. More on this here.
Cool. By now you should have enough information to kick-start your security work.