To get a feel for wiring up an environment with Docker containers, I looked into a couple of options for service discovery - Docker Names and Links, and Etcd - and put together a couple of prototypes. In this article, I will talk a little about what service discovery is, how Docker containers fit in, and how a couple of different techniques for wiring your containers together stack up.
Service Discovery is an umbrella term for the many aspects of managing the knowledge of where your application’s services can be found and how they should communicate. Some of these aspects are:
A few characteristics of Docker containers make some aspects of service discovery especially important.
To put Docker links and Etcd to the test, I created a simple set of services that need to be located for communication in three different ways:
To keep the implementation of these scenarios simple, I’ve written a single Java app with two resources to simulate two services. By running the app in two separate containers, we can treat them as separate applications. In each prototype, the application is exposed to the end user through Hipache.
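If Hipache is new to you: it is a proxy that routes requests by looking up “frontend” entries in Redis. Registering a backend looks roughly like the following - the container IP and port here are made up for illustration:

```bash
# Hipache looks up routes in Redis under "frontend:" keys.
# The first element is an identifier, the rest are backend addresses.
redis-cli rpush frontend:client.local client
redis-cli rpush frontend:client.local http://172.17.0.2:8080
```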
Jump over to the Docker Docs if you’d like to read more about names and links, or jump in and try out my prototype.
Get the poc on GitHub.
To run the test, add a name to your /etc/hosts file:
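One way to add it (the hostname comes from the demo itself; the rest is just a convenient way to append to the file):

```bash
echo "127.0.0.1 client.local" | sudo tee -a /etc/hosts
```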
Build the project
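I won’t reproduce the exact build command here, but assuming a standard Maven-based DropWizard project it would be along the lines of:

```bash
mvn clean package
```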
Build the container image
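Again, roughly - the image tag is just an example; use whatever name run.sh expects:

```bash
docker build -t discovery-demo .
```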
Deploy the test environment
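All of the orchestration lives in run.sh, so deploying should be a matter of:

```bash
./run.sh
```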
Now we can exercise the test environment from end to end by pointing a browser at “http://client.local/demo”. (Click refresh a few times to see the list of random numbers grow.)
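If you’d rather stay in the terminal, the same check can be scripted (assuming curl resolves client.local via the /etc/hosts entry above):

```bash
for i in 1 2 3; do curl -s http://client.local/demo; echo; done
```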
Below is an outline of the flow of control, but it might be easier to just take a peek at the resources being used (FYI, the Java service is implemented using DropWizard).
If you take a look at run.sh, you can see the mechanics of how the containers are all run, linked together, and how the client container is added to Hipache.
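For flavor, here is a stripped-down sketch of that idea - not the literal contents of run.sh; the image names, ports, and Hipache setup are illustrative:

```bash
# Start the service container, then link the client container to it
docker run -d --name service discovery-demo
docker run -d --name client --link service:service discovery-demo

# Tell Hipache (via Redis) where to find the client container
CLIENT_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' client)
redis-cli rpush frontend:client.local client
redis-cli rpush frontend:client.local "http://${CLIENT_IP}:8080"
```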
Using links for discovery leaves a couple of things to be desired. First of all, links only work on a single host and only expose private IPs and ports, so if you want to make your application HA, you’ll need something else. Second, the address info is only good for as long as the linked container is around, so if you want to release an update, you have to restart every container that relies on the updated one - not just the container being updated.
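To make the first point concrete, all a link really gives the client container is a set of environment variables pointing at the linked container’s private address, along these lines (the port reflects the demo app’s HTTP port and is illustrative):

```bash
# Inside the "client" container, after --link service:service
env | grep ^SERVICE_
# SERVICE_PORT=tcp://172.17.0.3:8080
# SERVICE_PORT_8080_TCP_ADDR=172.17.0.3
# SERVICE_PORT_8080_TCP_PORT=8080
```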
My Etcd prototype works much the same as my links prototype. Before you judge me too harshly (or worse, start thinking about using any of this code in a real environment :) ), remember this is just a prototype - no part of it (including the components published to the Docker Index) is fit for developing against.
Give it a go, and don’t forget to stop the previous setup if you haven’t already:
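If the containers from the links prototype are still running, something like this will clear them out (note: it stops every container on the host, so don’t run it somewhere you care about):

```bash
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
```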
Make sure you still have the name (127.0.0.1 client.local) in your /etc/hosts file, and…
and finally, the demo address (“http://client.local/demo”) is the same here.
There are a few differences you’ll notice if you take a peek at run.sh. First of all, there are a few new containers:
(A complete explanation of etcdedge and etcdbridge is out of scope for this article, but take a look at the source if you’re interested.)
A second difference is that there are two instances of both the “client” and “service” containers. Although this is a single-host example, running two instances of each is an attempt to show how copies running on separate hardware could interact in this kind of environment.
A final difference is that since the Etcd store is decoupled from the running of the application containers, we can stop and start individual components of the system without the cascading restart requirement necessitated by using Docker links for discovery.
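The key layout below is made up for illustration, but it shows the basic shape of that decoupling: each instance writes its own address into Etcd, and consumers read (or watch) it instead of depending on link-time wiring:

```bash
# A service instance registers its address under a well-known key
etcdctl set /services/service/instance1 "http://172.17.0.4:8080"

# A consumer (or a helper like etcdbridge) looks the address up...
etcdctl get /services/service/instance1

# ...and can watch the key to react when instances come or go
etcdctl watch /services/service/instance1
```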
Despite a desire to go deeper and relay more of my thoughts on service discovery with Docker, I find it hard to imagine that anybody is still reading, so I’m just going to wrap it up.
The best TL;DR I can supply is the prototype source itself (remember, all the orchestration is in “run.sh”).
And the additional components used in the Etcd prototype: