Mount an external LFS drive

It’s easy. Just took a while to recall.

The original server was installed on physical hardware from a thumb drive ISO, with LFS set up during the install.

The new server came from a VirtualBox VM and uses ext4. It’s now running on a different drive in the original server hardware, and the LFS drive has been set aside.

I want to get at some info on the LFS drive, but trying to mount it as an external drive has run into many dead ends so far.

And of course it was simply a question of installing the correct file system drivers. In this case # apt update && apt install lvm2, and the volume can be mounted read/write.
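
For future me, the rough sequence looks like this; the volume group and logical volume names below are placeholders, since they’ll differ on any given drive:

    # apt update && apt install lvm2
    # vgscan                            # scan the attached drive for LVM volume groups
    # vgchange -ay                      # activate any volume groups that were found
    # lvs                               # list logical volumes and the volume group each belongs to
    # mount /dev/<vg-name>/<lv-name> /mnt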

I will keep the old drive around for a while in the external housing. I’m sure there will be times I want to find stuff to pluck off it. But I need to put a label on it with a hard date for it to be DBANed.

A little bit Docker

Platform virtualization, a more granular way to virtualize.

Virtualization is something that’s been mainstream for years. I’ve used it for production environments to increase hardware utilization and improve failure tolerance. And it is also great for quickly setting up and using test environments, whether to test before a production deployment or to evaluate a technology without mixing it into your production environment.

Docker, container-based platform virtualization, has been around for a few years now, since 2013 actually. And I’d never used it. I decided it was time to change that.

So… what to do? Well, I’ve been suspicious that my Internet Service Provider (ISP) isn’t actually providing the promised speeds. Whenever I’d check the speeds at a speed test website the result would be slower than the promised service. Of course that’s very intermittent testing, and I couldn’t really maintain a regular schedule, note the results, and have a documented history to complain to my ISP about.

In the past I’ve used a program called Nagios to monitor network services and computers on a network. A little searching and I found that Nagios has a feature, a plug-in it’s called, that can be used to monitor Internet upload and download speed on a schedule.

With this information in hand I decided to try using Docker to run a Nagios container with the speedtest plug-in. Quite a bit for me to get my head around. This particular plug-in works differently than the ones built into Nagios Core, so I needed to work out how to get it working. Of course there’s documentation online, but it is old and the plug-in has been updated a few times while the documentation has not.
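
Just getting a container running is the easy part. Something along these lines does it; the image name is only an example of a community Nagios image and the port mapping is my assumption, so adjust for whatever image you actually use:

    docker pull jasonrivers/nagios
    docker run -d --name nagios -p 8080:80 jasonrivers/nagios
    # the Nagios web UI should then answer at http://localhost:8080/

The speedtest plug-in and its configuration then have to be layered on top of that, which is where the real work was.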

And with Docker itself there’s quite a bit to learn. It’s easy enough to get a container started. However, there is a lot going on behind the scenes. Docker has images and containers. Images are the templates for containers: run an image and a container is created. Stop that container, run the image again, and a new container is created. Without a guide that explains the “theory of Docker”, which I haven’t found yet, one might keep running the same image and by that technique keep creating containers, then wonder where the customizations made in the last container went, because running the image again started a brand new one.
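
A quick sketch of what I mean, using a generic image as the example:

    docker run -it ubuntu bash      # creates and starts a NEW container from the ubuntu image
    # ...make some changes inside, then exit...
    docker run -it ubuntu bash      # creates ANOTHER new container; the earlier changes are gone
    docker ps -a                    # lists all containers, including stopped ones
    docker start -ai <container>    # restarts an EXISTING container, customizations intact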

Then of course, there’s getting to the container’s command line. Basically, getting inside the machine. Once there it can be difficult to accomplish anything because many of the common command line tools, like a text editor, are not in the container. That leads to needing to find out how to access the files and modify them from outside the container.
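
The two commands that got me past that, with the container name and paths as placeholders:

    docker exec -it nagios /bin/bash                    # open a shell inside the running container
    docker cp nagios:/path/to/some.cfg ./some.cfg       # copy a file out to the host
    # edit it with a normal editor on the host, then push it back in
    docker cp ./some.cfg nagios:/path/to/some.cfg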

There are ways to resolve all the above. More than one way for each of the issues.

My good fortune is that I’m attending the (ISC)2 2020 Security Congress this year. Virtually, of course. And there is a Docker related session I’ve signed up for. Excited to learn about this.

  • 7 Layers of Container Insecurity

Ubuntu server upgrade 16.04 to 18.04 (20.04 pending)

Virtualize, document, and test. The surest way to upgrade success.

For years my server has been running my personal websites and other services without a hitch. It was on Ubuntu 16.04, more than four years old at this point, with only a year left on the 16.04 support schedule. Plus 20.04 is out. Better to move to the latest platform without rushing than to make the transition with support ended or time running out.

With the above in mind I decided to upgrade my 16.04.6 server to 20.04 and get another five years of support on deck. I’m halfway there, at 18.04.4, and hovering for the next little while before the bump up to 20.04. The pause is because of a behavior of do-release-upgrade that I learned about while planning and testing the upgrade.

It turns out that do-release-upgrade won’t actually run the upgrade until a version’s first point release is out. A switch, -d, must be used to override that. Right now 20.04 is just that, 20.04. Once it’s 20.04.1 the upgrade will run without the switch. Per “How to upgrade from Ubuntu 18.04 LTS to 20.04 LTS today”, the switch, which is intended to enable upgrading to a development release, does the upgrade to 20.04 anyway because 20.04 is already released.
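
In practice it boils down to this (assuming the default Prompt=lts setting in /etc/update-manager/release-upgrades):

    sudo do-release-upgrade        # until 20.04.1 is out, this reports no new release available
    sudo do-release-upgrade -d     # the -d switch does the 18.04 to 20.04 upgrade now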

I’m interested to try out the VPN that is in 20.04, WireGuard, so may try the -d before 20.04.1 gets here. In the meantime let me tell you about the fun I had with the upgrade.

First, as you should always see in any story about an upgrade: backup! I did, several different ways, mostly as experiments to see if I want to change how I’m doing it now, which is rsync. An optional feature of 20.04 that looks to make backup simpler and more comprehensive is ZFS. It’s newly integrated into Ubuntu and I want to try it for backups.
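
The rsync run is nothing fancy; roughly this, with the destination being whatever your backup target is:

    # rsync -aAXv --delete \
          --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
          / /mnt/backup/server/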

I got my backups then took the server offline to get a system image with Clonezilla. Then I used VBoxManage convertfromraw to turn the Clonezilla disk image into a VDI file. That gave me a clone of the server in VirtualBox to practice upgrading and work out any kinks.
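
The conversion itself is a one-liner; the filenames here are placeholders, and the source needs to be a raw (dd-style) disk image:

    VBoxManage convertfromraw sda.img server.vdi --format VDI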

The server runs several websites, a MySQL server for the websites and other things, an SSH server for remote access, NFS, phpmyadmin, DNS, and more. They are either accessed remotely or from a LAN client. Testing those functions required connecting a client to the server. VirtualBox made that a simple trick.

In the end my lab setup was two virtual machines, my cloned server and a client, on a virtual network. DHCP for the client was provided by the VirtualBox Internal Network, the server had a fixed IP on the same subnet as the Internal Network, and the server provided DNS for the network.
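
Roughly how that looks from the VirtualBox side; the VM names, network name, and addresses are placeholders:

    # attach both VMs to the same internal network
    VBoxManage modifyvm "server-clone" --nic1 intnet --intnet1 labnet
    VBoxManage modifyvm "client"       --nic1 intnet --intnet1 labnet

    # DHCP for the internal network; the server keeps a fixed IP outside this pool
    VBoxManage dhcpserver add --netname labnet --ip 192.168.56.2 \
        --netmask 255.255.255.0 --lowerip 192.168.56.100 --upperip 192.168.56.150 --enable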

I ran the 16.04 to 18.04 upgrade on the server numerous times, taking snapshots to roll back as I made tweaks to the process and confirmed each feature worked. Once I had a final process I did the upgrade on the virtual machine three times to see if I could find anything I might have missed or some clarification to make to the document. Success x3 with no changes to the document!
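
Snapshots are what made that many iterations practical. From the command line it’s just the following, with the VM and snapshot names as placeholders:

    VBoxManage snapshot "server-clone" take "pre-upgrade-16.04.6"
    # ...run the upgrade, test, note anything to fix in the document...
    VBoxManage snapshot "server-clone" restore "pre-upgrade-16.04.6"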

Finally I ran the upgrade on the production hardware. It went exactly as per the document, which of course is a good thing. Uneventful, but slower than on the virtual machine, which was expected. The virtual machine host is at least five years newer than the server hardware and has an SSD too.

I’ll continue running on 18.04 for a while and monitor logs for things I might have missed. Once I’m convinced everything is good then I’ll either use -d to get to 20.04 or wait until 20.04.1 is out and do it then.

Windows 10 images

Windows, in its various versions, is the computer operating system I’ve supported for my entire professional career. There have been very occasional instances of supporting other systems, like the Mac’s OS, both before and after Apple switched their OS to a UNIX base.

There are many things I don’t like about Windows. I stopped using it for my personal systems around a decade ago. One of my many gripes is the installation and update process.

For a while I was fortunate enough to have a professional staff who developed Windows deployment images for our company. They were very good and made image deployment “just work”. It got to the point that about all that was necessary was to network-boot the PC, point it at the image source, and sit back and wait.

I reviewed the procedures they created and asked questions to better understand what needed to be done to create the Windows images. I never actually was hands-on creating an image though, not from my staff’s documentation and not with any of them shoulder surfing me through the process.

Years later I reached the point of needing to create zero touch deployment images on my own. I failed. It seemed I was close to the solution but never quite there.

Microsoft’s documentation is terribly frustrating for me when it comes to image creation. I’ve not found a single Microsoft webpage that goes from zero to bootable deployment image. There are lots and lots of webpages with instructions for various portions of the work. And some webpages with basic outlines that have links (too many) to details that themselves have many links to more details. Alice never went down such a deep rabbit hole.

Then I found Kari Finn’s guide to “Create media for automated unattended install of Windows 10” on tenforums.com. Kari takes all the diversions Microsoft provides and narrows them down into a single linear process that goes from having installation media to having a zero touch custom installation image. BRAVO and thank you Kari!

Using the guide I’ve finally made my first successful zero touch deployment image!!!

From here I’ll make custom images for the software installations and architectures, BIOS/MBR and UEFI/GPT, that I need to support.

Finally I can make my own images. The world is my oyster.