On the surface, creating a MySQL container for Docker is pretty easy. But if you want to connect to it (not sure what a MySQL server that didn't allow that would be good for) and decouple your databases from your container (I'm assuming you don't want those to go away with your container), then there are a few problems to sort out.
I’m going to start with that simplistic example (with ephemeral database storage and no way to connect) and build on the example until we have something useful. Still not production ready, but good enough for hacking ;)
Oh, and you can jump to the gist (which has the files for building the container as well as some scripts to build and run it) if things get too boring or convoluted.
Create the Dockerfile:
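A minimal sketch of the starting point (the ubuntu base image and the apt-get invocation are assumptions on my part; adjust for whatever base you're using):

```dockerfile
FROM ubuntu

# install the mysql server package non-interactively
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server

# run the server in the foreground when the container starts
EXPOSE 3306
CMD ["/usr/bin/mysqld_safe"]
```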
then build and tag it:
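For example (the my/mysql tag is just a placeholder name):

```bash
docker build -t my/mysql .
```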
Now we have a fully functioning container that we can run like so:
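Something like this, detached and using the placeholder tag from above:

```bash
docker run -d my/mysql
```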
This would work, but it wouldn’t be very useful.
The first step is to make our MySQL server listen on more than just localhost so that we can connect from outside of the container.
To do this, we need to update the bind-address in /etc/mysql/my.cnf from 127.0.0.1 to 0.0.0.0 (i.e. have mysqld bind to every available network interface instead of just localhost).
We could just start maintaining the /etc/mysql/my.cnf file and add it to our container with our Dockerfile:
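Roughly like this (assuming a my.cnf kept alongside the Dockerfile):

```dockerfile
# overwrite the packaged config with our own copy
ADD my.cnf /etc/mysql/my.cnf
```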
Or we could update just that one property. I prefer this way so that I know I'm getting the most up-to-date config from my install and only changing what I need to. We can add the appropriate sed command to our Dockerfile after we've installed mysql-server.
(Technically we could just delete the line for the same effect, but this is more explicit.)
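Something along these lines (GNU sed; the regex assumes how the packaged my.cnf writes the bind-address line):

```dockerfile
# flip bind-address from 127.0.0.1 to 0.0.0.0 so mysqld listens on all interfaces
RUN sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
```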
Even though mysqld is listening everywhere now, we still can’t log in because the root user only has access from localhost.
We need to add an admin account to administer things from outside of the container. In order to add an account, we need our mysql server to be running. Since separate lines in a Dockerfile create different commits, and commits only retain filesystem state (not memory state), we need to cram both commands into one commit:
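A sketch of that combined step (the admin account name and password are placeholders, the sleep is a crude wait for mysqld to come up, and GRANT ... IDENTIFIED BY assumes a pre-8.0 MySQL):

```dockerfile
# start mysqld in the background, wait for it, then create a remote admin user,
# all within a single RUN so the server is alive when the grant happens
RUN /usr/bin/mysqld_safe & \
    sleep 10 && \
    mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' IDENTIFIED BY 'changeme' WITH GRANT OPTION; FLUSH PRIVILEGES;"
```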
Let’s build and run it!
And now to try connecting. In order to do this, we need to figure out the container's IP, and to find that, we need our container's ID. This is easy enough to do by hand with docker ps and docker inspect, but you could also script it:
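A rough version of that script (this assumes the default bridge network, where docker inspect exposes .NetworkSettings.IPAddress, and that the container we want is the most recently started one):

```bash
#!/bin/bash
# find the most recently started container, ask docker for its IP,
# and connect to it with the mysql client as the admin user
CONTAINER_ID=$(docker ps -l -q)
CONTAINER_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$CONTAINER_ID")
mysql -u admin -p -h "$CONTAINER_IP"
```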
Now we have a fully functional mysql container! That’s great and all, but we’re putting a lot of trust into this container by relying on it to keep track of our databases, not to mention we’re screwed if we ever want to upgrade or update anything.
We need to remove our reliance on this specific container, and to do that we need to externalize our data directory. This is easy, but it causes problems. When running our container, we just throw in a -v /host/path:/container/path and the supplied directory on our host machine is used in the container wherever we specify. So to persist databases from our container in /data/mysql on our host machine, we update our run command to be:
The problem is, we just nuked our system tables when we replaced /var/lib/mysql with our empty directory. This also means we lost our admin user. This is tricky to account for, because we can't initialize the data directory (or add our admin user) until it is visible to the container (at run time), but we don't want to initialize the directory every time we start up either. The whole point of externalizing the data directory is so that the container can come and go without loss of data.
To solve this, let's create a startup.sh script to replace simply invoking /usr/bin/mysqld_safe.
First, let’s write our startup.sh
script to do the initialization only if our data directory isn’t already populated.
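A sketch of that script (mysql_install_db is the classic MySQL 5.x way to create the system tables; newer versions use mysqld --initialize instead):

```bash
#!/bin/bash
# initialize the data directory only on first run
if [ ! -f /var/lib/mysql/ibdata1 ]; then
    mysql_install_db
fi

# hand off to mysqld_safe as the container's main process
exec /usr/bin/mysqld_safe
```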
This looks for the file "ibdata1" in our data dir as a cheap way to determine whether we need to initialize the directory or not. After the data directory has been initialized (or determined to already be initialized), we can continue on to start up the server.
And now we will update the Dockerfile to add startup.sh to the container and to call it instead of mysqld_safe:
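Roughly like so (the /opt/startup.sh location is arbitrary):

```dockerfile
# ship the startup script into the image and use it instead of mysqld_safe
ADD startup.sh /opt/startup.sh
CMD ["/bin/bash", "/opt/startup.sh"]
```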
We can also add in our admin user with the startup.sh script:
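Extending the sketch above (again, the account name, password, and sleep are placeholders; the temporary server start is just so the GRANT has something to talk to):

```bash
#!/bin/bash
if [ ! -f /var/lib/mysql/ibdata1 ]; then
    # first run: create the system tables in the freshly mounted data dir
    mysql_install_db

    # bring the server up temporarily so we can create the admin user
    /usr/bin/mysqld_safe &
    sleep 10
    mysql -e "GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' IDENTIFIED BY 'changeme' WITH GRANT OPTION; FLUSH PRIVILEGES;"

    # shut it back down; the long-running server is started below
    mysqladmin shutdown
fi

exec /usr/bin/mysqld_safe
```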
And of course we should also remove the RUN line from the Dockerfile that was doing the same thing, since its work gets undone as soon as we externalize the data directory.
Don’t want to follow all the incremental directions to get your files right? Here’s the finished product (plus some helper scripts to build, run your server, and connect with the cli client.)
These files are also available as a gist.