Virtualization has been mainstream for years. I've used it in production environments to increase hardware utilization and improve failure tolerance. It's also great for quickly setting up test environments, whether to test changes before production deployment or to evaluate a technology without mixing it into your production environment.
Docker, a container-based virtualization platform, has been around for a few years now, since 2013 actually. And I'd never used it. I decided it was time to change that.
So… what to do? Well, I've suspected that my Internet Service Provider (ISP) isn't actually providing the promised speeds. Whenever I'd check at a speed test website, the result would be slower than the service promises. Of course that's very intermittent testing, and I couldn't really maintain a regular schedule, note the results, and build a documented history to complain to my ISP about.
In the past I've used a program called Nagios to monitor network services and computers on a network. A little searching turned up a Nagios plug-in that can monitor Internet upload and download speeds on a schedule.
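To make the idea concrete, here's a minimal sketch of what such a plug-in amounts to. A Nagios plug-in is just an executable that prints a status line and exits 0, 1, or 2 for OK, WARNING, or CRITICAL. The speedtest-cli call, thresholds, and parsing below are my own illustration, not the actual plug-in's code:

```sh
#!/bin/sh
# Sketch of the Nagios plug-in contract, using speedtest-cli to measure download speed.
# Thresholds (25 and 40 Mbit/s) are illustrative, not from the real speedtest plug-in.
DOWN=$(speedtest-cli --simple | awk '/Download/ {print $2}')
if [ "${DOWN%.*}" -lt 25 ]; then
  echo "CRITICAL - download ${DOWN} Mbit/s"; exit 2
elif [ "${DOWN%.*}" -lt 40 ]; then
  echo "WARNING - download ${DOWN} Mbit/s"; exit 1
fi
echo "OK - download ${DOWN} Mbit/s"; exit 0
```

Nagios runs a script like this on whatever schedule you configure and records the results, which is exactly the documented history I wanted.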
With this information in hand I decided to try using Docker to run a Nagios container with the speedtest plug-in. Quite a bit for me to get my head around. This particular plug-in works differently than the ones built into Nagios Core, so I needed to work out how to get it working. There is documentation online, but it's old; the plug-in has been updated a few times while the documentation has not.
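Getting the container itself running is the easy part. As a sketch, assuming one of the popular community Nagios images (the image name and port mapping here are illustrative, not necessarily what I ended up with):

```sh
# Start a Nagios container in the background, exposing its web UI on host port 8080.
# "jasonrivers/nagios" is one well-known community image; substitute your own choice.
docker run -d --name nagios -p 8080:80 jasonrivers/nagios:latest

# The Nagios web interface should then be reachable at http://localhost:8080
```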
And with Docker itself there's quite a bit to learn. It's easy enough to get a container started, but there is a lot going on behind the scenes. Docker has images and containers: images are the templates, and running an image creates a container. Stop that container and run the image again, and a new container is created. Without a guide that explains the "theory of Docker" (I haven't found one yet), you might keep running the same image and, by that technique, keep creating new containers. Then none of the customizations made in the last container can be found, because running the image again started a fresh one.
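The distinction that makes this click is `docker run` versus `docker start`. A quick sketch (the image and container names are just examples):

```sh
docker run -it --name demo ubuntu bash  # creates AND starts a new container from the ubuntu image
exit                                    # the container stops; your changes still live in "demo"
docker run -it ubuntu bash              # a brand-new container: the earlier changes are NOT here
docker start -ai demo                   # restarts the SAME container, customizations intact
docker ps -a                            # lists all containers, running and stopped
```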
Then of course, there's getting to the container command line. Basically, getting inside the machine. Once there it can be difficult to accomplish anything, because many common command line tools, like a text editor, are not in the container. That leads to figuring out how to access and modify the files from outside the container.
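For reference, here's the pattern, sketched with an illustrative container name and file path (the actual Nagios config location depends on the image you use):

```sh
docker exec -it nagios /bin/bash               # open a shell inside the running container
docker cp nagios:/opt/nagios/etc/nagios.cfg .  # copy a config file out to edit with your own editor
docker cp nagios.cfg nagios:/opt/nagios/etc/   # copy the edited file back in
```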
There are ways to resolve all of the above, and more than one way for each issue.
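One of those ways, for the disappearing-customizations problem in particular, is to keep the configuration on the host and mount it into the container, so it survives no matter how many containers you create. A sketch, with illustrative paths:

```sh
# Bind-mount a host directory over the container's Nagios config directory.
# Both the host path and the container path are examples; check your image's docs.
docker run -d --name nagios -p 8080:80 \
  -v "$PWD/nagios-etc:/opt/nagios/etc" \
  jasonrivers/nagios:latest
```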
My good fortune is that I'm attending the (ISC)2 2020 Security Congress this year. Virtually, of course. And there is a Docker-related session I've signed up for. I'm excited to learn more:
- 7 Layers of Container Insecurity