Up again, but not public yet

Well, except, you’re reading this so it is public.

Lost interest in maintaining this server and website when I lost my job and couldn't get another. The server's Ubuntu, the web server is Apache, and the CMS is WordPress. It's been running for a number of years without issue. I wouldn't call it production because I don't rely on it for anything. It's just a test bed to familiarize myself with the software stack and gain some understanding of its setup and administration. I'm self-hosting on an old computer repurposed as a server.

One other thing I experimented with is DNS. I wanted to be able to reach my server as wp.boba.org whether I was coming from the public Internet or from inside my home network. That worked fine for years with BIND9 and isc-dhcp.

I developed the habit of running upgrades periodically without testing. If there was a problem, no big deal: it's not production, so figure out the issue, repair it, and proceed. Problems happened a few times with that approach and were always easily rectified.

DNS on the server stopped working after an upgrade. I tried many things and couldn't figure out why. Rather than roll back the upgrade or restore the system from a backup I kept mucking with it to try to get it to work. No success. Eventually I just lost interest and let the server go dark. I wasn't working, so I had no one to talk tech with about my server project, and there seemed no point to fixing it.

After a while I did want to dip my toe in the water again. I decided to rebuild the server and bring all components up to the latest release. I still couldn't get BIND9 DNS to work. Searching BIND9 issues I found other Ubuntu users were also having problems with it. After searching for alternate DNS servers I decided to try dnsmasq. That gave me working DNS on my home network, and that got the server up and publicly available again.
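
For the record, the working dnsmasq setup amounts to only a few lines. A minimal sketch, where the addresses and interface name below are hypothetical stand-ins for my actual ones:

    # /etc/dnsmasq.conf (sketch; addresses and interface are hypothetical)
    # Answer wp.boba.org with the server's private address on the home network
    address=/wp.boba.org/192.168.1.10
    # Only listen on the LAN interface
    interface=enp2s0
    # Forward everything else to the router's resolver
    server=192.168.1.1

With that, clients inside the network get the private address for wp.boba.org while the public Internet keeps resolving the public one.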

All development of the server configuration and settings was done on a virtual machine (VM) in a virtual network with virtual clients, with VirtualBox as the hypervisor. Once everything worked as expected I migrated the server VM to a physical host. That took surprisingly little tweaking. Network addresses had to be changed from the virtual network settings to the home network settings and a different Ethernet device name entered where needed. That was about all it took to migrate from a virtual to a physical server.
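
To give a flavor of how little changed, here is roughly what the edit looked like, assuming Ubuntu's default netplan setup; every device name and address below is a hypothetical stand-in:

    # /etc/netplan/01-netcfg.yaml (sketch; names and addresses are hypothetical)
    network:
      version: 2
      ethernets:
        enp2s0:                          # was enp0s3 on the VirtualBox VM
          addresses: [192.168.1.10/24]   # was the virtual network subnet
          gateway4: 192.168.1.1
          nameservers:
            addresses: [127.0.0.1]       # the server is its own DNS

A sudo netplan apply after the edit and the physical server was on the home network.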

For all the world to see, in all its underwhelming glory, wp.boba.org is back. Enjoy.

Perils of a part time web server admin

Not being “in it” all the time can make simple things hard.

Recently one of the domain names I've held for a while expired. Or actually, I let it expire. It was hosted on this same web server along with several other websites and had a secure connection using a Let's Encrypt SSL certificate. All good.

The domain name expired, I disabled the website, and all the other websites on the server continued to be available. Until they weren't! When I first noticed, I just tried restarting the web server. No joy; that didn't get the other sites back up.

And here are the perils of part-time admin. Where to start with the troubleshooting? For all my sites and the hosting server I really don't do much except keep the patches current and occasionally post content using the WordPress CMS. Not much troubleshooting, monitoring logs, etc., because there isn't much going on. And, though some might say otherwise, I don't spend all my time at the computer dissecting how it operates.

I put off troubleshooting for a while. This web server is experimental, not production, so sometimes I cut myself some slack and don't dive right in when things aren't working. I had other things pending that required more attention.

When I did start, I was very much at a loss because, as noted, I had disabled a website and everything continued to work for a while. When it stopped working I hadn't made any additional changes.

Logs are always a good place to look, yes? This web server is set up to create separate logs for most of the sites it's hosting. Two types of logs are created: access logs and error logs. The access logs showed what was expected, no more access to that site after I disabled it.

The error logs confused me though. The websites use Let's Encrypt SSL certificates, and they use Certbot to set up HTTPS on the Apache HTTP server. A very common setup. The confusing thing about the error log was that it showed the SSL configuration for the expired website failing to load. Why was the site trying to load at all? I had disabled the site using the a2dissite program provided by the server distribution. What I hadn't thought about is that Certbot's Apache script sets up SSL by modifying the <site_name>.conf file AND creating a <site_name>-le-ssl.conf file.

So even though the site had been disabled by a2dissite <site_name>.conf, I hadn't thought to a2dissite <site_name>-le-ssl.conf. Once I recognized that and ran the second a2dissite command, the web server started right up. No more failing to load SSL for the expired site. And, surprisingly, failing to load the SSL for the one site prevented the server from starting at all, rather than disabling the one site and loading the others that didn't have configuration issues.
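
In shell terms the whole fix was the second command below; running a config check before restarting would have pointed at the problem immediately (the site name is hypothetical):

    sudo a2dissite example.com.conf          # what I had done
    sudo a2dissite example.com-le-ssl.conf   # what I had missed
    sudo apachectl configtest                # flags broken configs before a restart
    sudo systemctl restart apache2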

Something for another time… I expect there must be a way for the server to start and serve the correctly configured sites while skipping misconfigured ones, rather than letting the presence of one misconfigured site prevent all sites from loading. It just does not seem likely that such a widely used web server would fail to serve correctly configured sites when only one of several hosted sites is misconfigured.

The peril of part-time admin, or jack of all trades and master of none, is that these sorts of gotchas pop up all the time because of limited exposure to the full breadth of dependencies a program needs to perform in a particular way. It isn't a bad thing. It's just something to be aware of, so that rather than blaming the software for not doing something, you remember there are often additional settings to make to achieve the desired effect.

Be patient. Expect to need to continue learning. And always, always, RTFM and any other supporting documents.

Server upgrade

…and I’m publishing again.

Well, this was a big publishing gap. Four months. Hope not to have such a long one again. Anyway, there are a number of drafts in the wings but I decided to publish about this most recent change because it is what I wanted to get done before publishing again.

The server is now at Ubuntu 20.04, 64-bit of course. It started out at 16.04 32-bit and got upgraded to 18.04 i686. An attempted 20.04 upgrade failed because I had forgotten it was a legacy 32-bit install and 20.04 is only available for 64-bit. So, on to other things while I planned a different upgrade approach. When I got back to it I thought I should upgrade to 22.04, since that had been released. Going through the upgrade requirements, though, I discovered that several needed applications didn't have 22.04 packages yet, particularly Certbot and MySQL. So, back to 20.04 to complete the upgrade.

The MySQL upgrade wasn't too bad. There was a failure, but it was a common one and a usable fix for the column-statistics issue was found quickly: disable column statistics during the dump (mysqldump -u root -p --all-databases --column-statistics=0 -r dump_file_name.sql).

Also, I switched to the Community Edition rather than the Ubuntu packages because of recommendations on the MySQL site that the Ubuntu package isn't kept very up to date.

Fortunately I'm dealing with small databases with few transactions, so mysqldump was my upgrade solution. Dump the databases from v5.x 32-bit. Load them into v8.x 64-bit. But wait, not all the user accounts are there!!
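
The whole migration was essentially the two commands below, with hypothetical file names; the dump was made with the newer client, which is why the column-statistics switch applies:

    # Against the old v5.x server: dump everything, skipping column statistics
    mysqldump -u root -p --all-databases --column-statistics=0 -r dump_all.sql

    # On the new v8.x 64-bit server: load the dump
    mysql -u root -p < dump_all.sql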

Running select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES; showed only two grantees, 'mysql.sys'@'localhost' and 'mysql.session'@'localhost'. There should have been about 20. The solution was simple: add upgrade = force to the MySQL server config and restart the server. After this, the same query shows all the expected accounts AND the logins function and the correct databases are accessible to the accounts.
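
For reference, a sketch of that setting, assuming the stock Ubuntu config location (the file on your system may differ):

    # /etc/mysql/mysql.conf.d/mysqld.cnf (assumed standard location)
    [mysqld]
    upgrade = FORCE   # re-run the server's upgrade routine at next start

Then sudo systemctl restart mysql and check the query again.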

All the other applications upgraded successfully: DNS, ddclient, Apache2, etc. It was an interesting exercise to complete, and it moved the server onto newer, smaller hardware and updated the OS to 64-bit Ubuntu 20.04.

I’ll monitor for 22.04 packages for Certbot and MySQL and once I see them, update the OS again to get it to 22.04. Always better to have more time before needing (being forced) to upgrade. 20.04 is already about halfway through its supported life. Better to be on 22.04 and have almost five years until needing to do the next upgrade.

Doing all this in a virtual environment is a great time saver and trouble spotter. Gotchas and conflicts can be resolved so the actual activation, virtual or physical, goes about as smoothly as could be hoped with so many dependencies and layers of architecture. Really engrossing stuff if you’re so inclined.

DHCP on the server was new. The router doing DHCP only allowed my internal DNS as a secondary. That seemed to cause issues reaching local hosts: sometimes a name would resolve to the public rather than the private IP. Switching DHCP to the server lets it be specified as THE DNS authority on the network.
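
With isc-dhcp, which is what the server has run all along, the relevant dhcpd.conf stanza looks something like this; the addresses are hypothetical:

    # /etc/dhcp/dhcpd.conf (sketch; addresses are hypothetical)
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.199;
      option routers 192.168.1.1;
      # Hand out the server itself as the one and only DNS server
      option domain-name-servers 192.168.1.10;
      option domain-name "boba.org";
    }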

Watching the messages in syslog, the utility of having addressable names for all hosts seemed obvious. A next virtual project: updating DNS from DHCP.

Ubuntu, ZFS and running out of drive space

For userland, system operations really need to be invisible.

I have been using Ubuntu as my desktop OS for over 15 years now. I came to it by way of using it on a backup computer to do a task that my primary computer, running Windows XP, failed at if I tried to do anything else concurrently. I didn't want to buy another computer, or buy an XP license to put on an old computer whose specs were very marginal for XP. So I thought, try this free OS and see whether I can use the old computer to do the task that my new XP computer would only do if nothing else was done at the same time.

Ubuntu got the job done. I recorded my vinyl albums to sound files, broke the sound files into tracks like on the albums, and then burned the tracks onto CDs so I could carry my music around more conveniently and listen to it in more places. XP did this too, but the sound files were corrupt or the burn failed if I did something else like open my word processor, spreadsheet, or web browser while recording or burning were going on.

Now I had two PCs running and would switch between them as needed to keep the vinyl-to-CD process going. That was a little inconvenient because my workspace didn't let me put the two PCs right next to each other. Switching wasn't a case of moving my hands from one keyboard and mouse to another, or flipping a switch on a KVM. Delaying getting to the Ubuntu PC after the album side was over meant extra time spent trimming the audio file to delete the tail of the file. This led to me trying some things on the Ubuntu PC, like opening the word processor or spreadsheet or browsing the web, while the recording or burning was going on to see if it caused problems. And amazingly, it didn't! A lower-spec PC with Ubuntu could do more of what I wanted, without errors, than my much better XP desktop.

That led me to using the Ubuntu PC while converting my albums to CD, and that led me to Ubuntu for home use. Professional life continued and continues to be Windows, but at home it's Ubuntu. And Ubuntu is still preferred at home because it doesn't mysteriously prevent me from doing things, inconveniently interrupt me, or insist on having information I don't want to share like Windows does. That is, until ZFS in Ubuntu 20.04 started preventing me from doing updates on my primary and backup PCs because of lack of space.

I've run out of space on Windows and Ubuntu before. It just meant it was time to finally do some housekeeping and get rid of large chunks of files, like virtual machines, that I hadn't used in a while. Do that and boom, back to work! Not so with ZFS. Do that and gloom, still can't do updates.

There were different error messages on the two PCs: one said bpool free space was below 20%, the other said rpool free space was below 20%. Rpool and bpool, what are they? And why, when there's nearly 20% free space, is updating prevented? And why, after deleting or moving tens of gigabytes of files off the drive and purging old kernels (a Linux thing), are updates still prevented and rpool and bpool still reporting less than 20% free? Gigs of files were just moved off the drive and these rpool and bpool things don't reflect that!

It was my first experience in more than 15 years of using Ubuntu where keeping it up to date wasn't just a case of using it and running updates every once in a while.

Windows has a feature called restore points that I've used to get back to a working system when things were broken to the point of making it hard or impossible to use the PC. Ubuntu hasn't really had anything equivalent until the introduction of ZFS. And as I've learned, that's way too simple-minded an explanation and doesn't give credit to the capabilities of ZFS, which go way beyond Windows restore points. True, and so be it.

I dug through many ZFS web pages and tried many things until finally getting more than 20% free on rpool and bpool on each PC (a list of links is at the end of this post). Now the PCs are back to updating without complaint.
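
The commands that did most of the work were the stock ZFS ones; the snapshot name below is only an example of the autozsys ones I was deleting:

    # See how much space each pool really has
    zfs list -o space rpool bpool
    # List snapshots oldest-first, with the space each one holds
    zfs list -t snapshot -o name,used -s creation
    # Dry-run (-n) first, then actually destroy an old snapshot (example name)
    sudo zfs destroy -nv rpool/ROOT/ubuntu_abc123@autozsys_example
    sudo zfs destroy rpool/ROOT/ubuntu_abc123@autozsys_example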

What I’ve learned is Ubuntu has a way to go to make ZFS user friendly. Things I’d suggest to Canonical for desktop Ubuntu:

  • Double the recommended minimum drive size and/or tell end users they should have 2x the drive space they think they need if they already think they need more than the minimum
  • Reduce the default number of snapshots to 10
  • Provide a UI for setting the number of snapshots
  • Provide a UI for selectively removing snapshots from bpool or rpool when free space goes below the dreaded 20%
  • After prompting for confirmation, automatically remove the oldest snapshots to get back to 20% free when the condition occurs

Both my PCs are now above 20% free space on rpool and bpool and are updating without complaint. It took a while and some learning to make that happen. It wasn't the type of thing an average end user would ever want to face or even know about.

77% and 48% free – will probably bump into rpool space issues first again on this PC
89% and 90% free – plenty of room before bpool is a problem again on this PC

ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys general principle on state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys commands for state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys state collection · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys for system administrators · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys partition layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys dataset layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys properties on ZFS datasets · ~DidRocks

apt – Out of space on boot zpool and cant run updates anymore – Ask Ubuntu
For this link, see especially Hannu's answer from Nov 19, 2020 at 17:22

docs.oracle.com | Displaying and Accessing ZFS Snapshots

docs.oracle.com | Destroying a ZFS File System

docs.oracle.com | Creating and Destroying ZFS Snapshots


Ubuntu and Bluetooth headphones

You don’t know what you don’t know.

It's impossible to be expert at everything. Or even good at everything. One of the things Ubuntu has frustrated me over is headphones. I've used wired headphones and they've worked great, but of course I'm tethered to the computer. I've used wireless headphones and they too have worked great, but I'm miffed that a USB port has to be dedicated to their use. Why should a USB port be lost to headphones when the PC has built-in Bluetooth? Why can't I just use the Bluetooth headphones I use with my phone and keep all my ports open for other things?

Why? Because every attempt to use Bluetooth headphones has failed. Used as a headset they work fine; as headphones, not so much. Either the microphone isn't picked up or the audio is unintelligible or nonexistent. And I know it isn't the headphones, because every set I've had that doesn't work with Ubuntu has worked great with Android phones and with Windows after installing the headphones' Windows program.

I’ve tried digging into the details of how to set up audio on Ubuntu to get Bluetooth headphones supported. And doing so, I’ve buggered up test systems to the point I needed to reinstall the OS to get sound working again. I obviously didn’t understand it well enough to resolve the issue. Even so, every once in a while I try again to make it work.

Recently, I came across the solution! Thanks to Silvian Cretu and his post Linux and Bluetooth headphones.

The post touches on many of the things I've tinkered with trying to make Bluetooth headphones work. The section "Solution 2: Replacing Pulseaudio with PipeWire" in Silvian's post provides the recipe that makes it work. If you're on Ubuntu 20.04 and are frustrated trying to make Bluetooth headphones work, head over to Linux and Bluetooth headphones and see if that is the recipe for you too.
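
The gist of the recipe as I understand it is below, but treat it as a sketch: the PPA is a third-party one, the package names may have changed since, and Silvian's post is the authoritative version.

    # Third-party PPA with recent PipeWire builds (assumption: still current)
    sudo add-apt-repository ppa:pipewire-debian/pipewire-upstream
    sudo apt update
    sudo apt install pipewire pipewire-audio-client-libraries
    # Let PipeWire's PulseAudio replacement take over audio
    systemctl --user disable --now pulseaudio.service pulseaudio.socket
    systemctl --user enable --now pipewire pipewire-pulse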

Ubuntu server upgrade 16.04 to 18.04 (20.04 pending)

Virtualize, document, and test. The surest way to upgrade success.

For years my server has been running my personal websites and other services without a hitch. It was on Ubuntu 16.04, more than four years old at this point, with only a year left on the 16.04 support schedule. Plus, 20.04 is out. Time to move to the latest platform without rushing, rather than make the transition with support ended or time running out.

With the above in mind I decided to upgrade my 16.04.6 server to 20.04 and get another five years of support on deck. I'm halfway there, at 18.04.4, and hovering for the next little while before the bump up to 20.04. The pause is because of a behavior of do-release-upgrade that I learned about while planning and testing the upgrade.

It turns out that do-release-upgrade won't actually run the upgrade until a version's first point release is out; a switch, -d, must be used to override that. Right now 20.04 is just that, 20.04. Once it's 20.04.1 the upgrade will run without the switch. Per "How to upgrade from Ubuntu 18.04 LTS to 20.04 LTS today", the switch, which is intended to enable upgrading to a development release, does the upgrade to 20.04 because it is already released.
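
Concretely:

    # From 18.04 today this finds no new release; it waits for 20.04.1
    sudo do-release-upgrade
    # Upgrades to 20.04 now; -d is meant for development releases
    sudo do-release-upgrade -d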

I'm interested in trying out the VPN that's in 20.04, WireGuard, so I may try -d before 20.04.1 gets here. In the meantime let me tell you about the fun I had with the upgrade.

First, as you should always see in any story about an upgrade: backup! I did, several different ways, mostly as experiments to see if I want to change how I'm doing it (rsync). An optional feature of 20.04 that looks to make backup simpler and more comprehensive is ZFS. It's newly integrated into Ubuntu and I want to try it for backups.

I got my backups, then took the server offline to get a system image with Clonezilla. Then I used VBoxManage convertfromraw to turn the Clonezilla disk image into a VDI file. That gave me a clone of the server in VirtualBox to practice upgrading on and work out any kinks.
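
The conversion itself is a one-liner, assuming the image was saved as a raw (dd) disk image; the file names are hypothetical:

    # Turn the raw disk image into a VirtualBox disk
    VBoxManage convertfromraw server-sda.img server.vdi --format VDI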

The server runs several websites, a MySQL server for the websites and other things, an SSH server for remote access, NFS, phpmyadmin, DNS, and more. They are either accessed remotely or from a LAN client. Testing those functions required connecting a client to the server. VirtualBox made that a simple trick.

In the end my lab setup was two virtual machines, my cloned server and a client, on a virtual network. DHCP for the client was provided by the VirtualBox internal network; the server had a fixed IP on the same subnet and provided DNS for the network.
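
Setting that up from the VirtualBox host takes a couple of VBoxManage calls; the network name, VM name, and addresses here are hypothetical:

    # DHCP service for the internal network the client attaches to
    VBoxManage dhcpserver add --netname labnet --ip 10.0.10.1 \
      --netmask 255.255.255.0 --lowerip 10.0.10.50 --upperip 10.0.10.99 --enable
    # Attach the client VM's first NIC to that internal network
    VBoxManage modifyvm "client" --nic1 intnet --intnet1 labnet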

I ran the 16.04 to 18.04 upgrade on the server numerous times taking snapshots to roll back as I made tweaks to the process to confirm each feature worked. Once I had a final process I did the upgrade on the virtual machine three times to see if I could find anything I might have missed or some clarification to make to the document. Success x3 with no changes to the document!

Finally I ran the upgrade on the production hardware. It went exactly per the document, which of course is a good thing. Uneventful, but slower than on the virtual machine, which was expected: the virtual machine host is at least five years newer than the server hardware and has an SSD too.

I’ll continue running on 18.04 for a while and monitor logs for things I might have missed. Once I’m convinced everything is good then I’ll either use -d to get to 20.04 or wait until 20.04.1 is out and do it then.

Jonas Salk Middle School Career Day

A presentation about information technology with demonstrations.

I volunteered to create a presentation for career day at school. Actually, my daughter asked me and I said "okay". Then the career day presentations were changed from in-person to online because of the coronavirus.

It would have been so much easier to do in person. I'm certain the total time spent would have been less than what I needed to produce the video! Everything I wanted to present could have been done live. Timing would have been easier, and adjustments could have been made in each session depending on the interest of the previous audience and questions during the presentation.

That wasn’t to be.

The good thing about the video is I was able to produce it. The bad things are obvious in review. There are several parts where the dialog is disjointed and doesn't flow with events on the screen. The arrangement of some screen elements blocks others in an undesirable way. And I need to think more about the audience; this is likely much better suited to high school seniors than eighth graders. Work more on the script and be EXPRESSIVE!

Making this video was an enjoyable and challenging experience. I had to learn things I’d never known to make the video. And watching myself and the content I can see how it could easily be improved. Information I’ll tuck away to use if and when there’s a next time.

If you’d like to check out the video, here it is.

At the end of the video is a list of the software used to produce it. That same list, with working links, is below.

Ubuntu 18.04 runs the laptop used to create this video (it's an alternative to Windows, OS X, and Chrome OS). https://ubuntu.com/

OpenShot video editor was used to create the video. https://www.openshot.org/

vokoscreen made the screen video captures that got edited in OpenShot. https://linuxecke.volkoh.de/vokoscreen/vokoscreen.html

GIMP, GNU Image Manipulation Program, was used to create or edit some of the images in the video and to obscure and alter some portions of the video images. https://www.gimp.org/

Cheese was used to record my head shot and voice.
https://wiki.gnome.org/Apps/Cheese

Pick and OpenShot's chroma key effect were used to make the background behind my head transparent rather than appearing in a box that blocked the background. https://www.kryogenix.org/code/pick/

I used LibreOffice Writer to take notes and make plans as I developed the video, and for the scripts I used to guide narration. LibreOffice Calc helped with calculating how to adjust the length of some clips to fit the target time. https://www.libreoffice.org/