I HATE Optimum!!

There are many reasons cable companies love local monopolies. One is that no matter how poor their service is, customers have no alternative.

Like many in the US I have only one ISP (Internet service provider) that can deliver reasonable residential upload and download speeds, e.g. >=200Mbps down, >=30Mbps up. That provider is Optimum. I’m not a gamer, don’t stream 4K, and almost never have more than one device streaming at a time, so yes, >=200Mbps down is reasonable for me.

To run this website from my home server I need the Optimum-provided Altice Gateway (a combined cable modem and router) to have its router component set to bridge mode. My own router’s WAN port then gets a public IP address, and that way I control NAT to the services I need. Optimum doesn’t expose the bridge mode setting for their gateway’s router, so tech support must be contacted to have it set.

At this point I’ve had three gateways from Optimum. With the first two it took a day or two of contact with support each time to get the correct settings established. It was necessary to contact support, explain the router setting that needed to be changed, and then follow up until it was actually working.

With this third gateway, setup and configuration were nearly automatic, except, of course, setting the router to bridge mode. As usual I contacted support to have it set. It worked! My router got a public IP on the WAN port and everything was good to go. Then, less than a week in, it was no longer working. My router was getting a private IP on the WAN port. Checking my Optimum gateway on their website showed the router portion of the gateway had its LAN enabled again, and an address on that private LAN was what my WAN port was getting. The gateway’s router was back in charge; bridge mode was gone.
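A quick way to tell which situation you’re in is to check whether the address on your router’s WAN port falls in one of the RFC 1918 private ranges. A minimal sketch only; the WAN_IP value is a placeholder for whatever address your router’s status page reports:

# What the public Internet sees as your address
curl -s https://ifconfig.me ; echo

# Check whether the router's reported WAN address is in an RFC 1918 private range;
# if it is, the gateway is still routing/NAT-ing rather than bridging
WAN_IP=203.0.113.7   # placeholder; substitute the address your router shows
echo "$WAN_IP" | grep -Eq '^(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)' \
  && echo "private address - gateway is NOT in bridge mode" \
  || echo "not an RFC 1918 address - likely a public IP"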

I called and texted with support for more than three weeks trying to get them to set it back to the configuration I needed, the configuration it had been in for a short while after initial setup. The various techs I spoke or texted with continually blamed my router, even though it was clear my router was simply getting a private IP address from the LAN configured in the gateway’s router settings.

None of the techs I spoke or texted with had the networking knowledge or the English fluency the work requires. Basic networking clearly wasn’t understood, and they would mishear words like “pass thru” as “password.” “Pass thru” was the term I used while trying to explain (yes, explaining bridge mode to tech support) what I needed. Getting them to recognize and acknowledge the problem was impossible, so I gave up. I decided to buy my own cable modem so I would have both components needed to replace Optimum’s gateway.

Even that was fraught with concern for me. Given my experience with support, I wanted to be certain any modem I bought could actually be activated by Optimum. I went to an Optimum store to ask about modems. They directed me to a website, pickmymodem.com, to identify a compatible modem. I pointed out it wasn’t their website and asked: if the modem didn’t work, would Optimum simply say, “not our website, too bad,” even though they had directed me there? Silence from across the counter.

Pickmymodem.com specifically states DOCSIS 3.1 modems are not recommended for Optimum.



Next I contacted Optimum’s modem activation number to ask about modem compatibility. On calling, the voice menu doesn’t offer an option to speak to “modem activation” even though the website specifically identifies that number for modem activation. So of course one must guess the appropriate menu option, talk to a bot that tries to “solve the problem” without connecting you to a person, then, after getting over that, eventually speak with someone.

When I finally spoke to someone they told me a DOCSIS 3.0 compatible modem was needed, but they had no information about specific modems and would not confirm whether a particular modem would work. I pointed out Optimum’s website says a DOCSIS 3.1 compatible modem is needed and that DOCSIS 2.0 and 3.0 modems are not allowed on their network. Which is correct: what you’re telling me, DOCSIS 3.0, or what the website shows, DOCSIS 3.1? Just silence from them, no answer.

So Optimum couldn’t or wouldn’t provide information about compatible modems, recommended a site they don’t control to identify one, and, between the site they recommended, their own website, and the support staff I spoke with, gave conflicting information about what would work.

I gave up and just bought a DOCSIS 3.1 modem. The modem arrives and I’m amazed, amazed I say, activation went without a hitch! But this is Optimum so, of course, the trouble isn’t over.

When we moved here about six years ago Optimum was, and still is, the only service provider for my address. Service started at 200Mbps and that speed was never realized; slightly above 100Mbps was typical. The first gateway eventually started failing and Optimum replaced it. A little over a month ago Internet connectivity became very poor on the second gateway. Optimum sent me a third gateway, bumped the speed up to 300Mbps, and reduced the monthly service charge by $5. Nice, right?

Of course problems don’t stop there. I never saw close to 300Mbps. The best was usually around 150Mbps.

Throughout all this I’ve been on their website many times and noted that new customers are being offered 300Mbps for $40/month. I’m paying more than 3x that, even after the $5/month discount, for the same speed.

I called the retention department to ask to get my bill reduced to something in the $40/month range. Nope, no can do, the $5 was all I would get. What they would do was bump the speed to 1Gbps and reduce the bill for that to $95/month for three years. I doubted their ability to deliver the service given their history, but that was the only way to get the bill below $100/month, so I accepted the offer.

Next problem to resolve: I’ve supposedly got 1Gbps now, but every speed test I’d done until this morning topped out just below 200Mbps. This morning the speed tests across different test sites ranged from a low of 245Mbps to a high of 480Mbps. Better than I’ve had, but still not even 50% of what’s supposedly being delivered.
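For what it’s worth, browser test sites aren’t the only option. A command line test takes the browser out of the equation and is easy to repeat and log. A small sketch using the speedtest-cli package (the package name may vary by distribution):

# Install the community speedtest client
sudo apt install speedtest-cli

# Run a test and print a short summary; repeat at different hours
# to get a feel for the range rather than a single number
speedtest-cli --simple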

I’m going to try to get the speed they’re supposedly giving me, but I expect it won’t happen. They’ll tell me my equipment is the problem, I’m sure.

I expect the speed issue won’t be resolved and I’ll still hate Optimum after I try to get to the speed they say they’re providing. Have to wait and see.

NOMACHINE on OpenSUSE Tumbleweed

A little bit of diagnosing and following up and all is fine.

I recently switched my backup laptop to OpenSUSE Leap. Then I found the simplest way for me to sync Google Drive with OpenSUSE was to change to OpenSUSE Tumbleweed. Great. Done. Now it’s time to make Tumbleweed a remote client to my Debian primary laptop.

For years the primary laptop ran Ubuntu. I recently switched it to Debian 12 bookworm for the reasons documented here, From Ubuntu/Zorin to Debian/OpenSUSE. The remote control solution for the primary laptop has been RealVNC.

Debian 12 bookworm is using Wayland, and the latest Ubuntu is using it too. RealVNC isn’t compatible with Wayland so isn’t compatible with the latest Ubuntu or Debian. For myself, remote control hasn’t been a need for a few years so I didn’t realize RealVNC wasn’t Wayland compatible. As soon as I wanted remote control again, Wayland compatibility became an issue.

Do a little digging and find NOMACHINE works with Wayland. Install NOMACHINE on the Debian primary laptop, client and server running, no problems.

Install on the OpenSUSE Tumbleweed laptop, install fails. Server service isn’t found and client won’t start. 🫤

Check out NOMACHINE compatibility, it includes OpenSUSE 15.x. Leap is 15.x, current Tumbleweed is 20250714. Tumbleweed is on the backup laptop to make Google Drive work. I NEED Google Drive syncing between the primary and backup laptop. Can I get NOMACHINE working on Tumbleweed?

First, let’s see what NOMACHINE looks like when it’s running. Spin up an OpenSUSE Leap 15.x virtual machine and install NOMACHINE. It works fine and can connect to Wayland displays from an X11 virtual machine. Now to troubleshoot the installation on OpenSUSE Tumbleweed.

Spin up a Tumbleweed virtual machine and do a NOMACHINE installation. (virtual machines are indispensable and readily available troubleshooting and simulation tools)

Terminal output during installation on Tumbleweed is brief and indicates nxserver.service can’t be loaded.

Check the installation log, /usr/NX/var/log/install.log, and find an error earlier than the nxserver.service failure: “execstack can’t be run.” A quick check of what execstack is reveals it isn’t installed. Install it and run the NOMACHINE installation again. The terminal output remains the same; nxserver.service still can’t be loaded.
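For reference, that step looked roughly like this; a sketch that assumes the execstack package is available under that name in the Tumbleweed repositories, and the NOMACHINE rpm filename is a placeholder:

# Confirm the package name, then install the execstack tool the NX installer expects
sudo zypper search execstack
sudo zypper install execstack

# Re-run the NOMACHINE installation and re-check the installer log
sudo zypper install ./nomachine_*.x86_64.rpm
sudo tail -n 50 /usr/NX/var/log/install.log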

Back to the log, where I find a message: “Warning: SELinux userspace will refer to the module from /usr/NX/scripts/selinux/nx-unconfined.pp as nx rather than nx-unconfined.” Since nxserver.service won’t run and there’s a warning that SELinux has an issue with something NX related, I turn to figuring out how to understand the SELinux problem.

Aside from knowing SELinux enforces access control policies on OpenSUSE, I hadn’t really dealt with it before. Digging around to get an idea of what the warning means, I find a tool called SELinux Troubleshooter. It’s in Tumbleweed’s Discovery app library, so I install it.

So, now, the execstack problem has been handled and a way to diagnose the possible SELinux issue installed. Time to run the NOMACHINE installation again.

Bingo!

The Troubleshooter provides two SELinux command recommendations. I apply the commands and NOMACHINE is now running on OpenSUSE Tumbleweed just fine. (At least on Tumbleweed 20250714.)
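I didn’t record the exact commands the Troubleshooter suggested, but the general pattern for this kind of fix is well known: read the SELinux denials from the audit log and build a small local policy module from them. A generic sketch only, not the Troubleshooter’s actual output, and the module name nxlocal is just a placeholder:

# Show recent SELinux denials related to the NX server
sudo ausearch -m avc -ts recent | grep -i nx

# Turn the logged denials into a local policy module (hypothetical name "nxlocal")
sudo ausearch -m avc -ts recent | audit2allow -M nxlocal

# Install the generated module so the previously denied actions are permitted
sudo semodule -i nxlocal.pp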

I still like RealVNC’s interface and modes of operation better. I’ll continue to watch it for Wayland compatible releases and continue to upgrade NOMACHINE with new releases, especially if they include better access to controls, something like a pop up tool bar attached to the host’s window border.

The “page peel,” NOMACHINE’s toolbar, isn’t always available; it depends on the Display option selected. Knowing the keyboard shortcut, Ctrl+Alt+0, that brings up the NOMACHINE control center is a real panic avoider when a display option has been selected that can’t show the “page peel.”

NOMACHINE client window with “page peel” displayed.

The Ctrl+Alt+0 shortcut displays the NOMACHINE control center that fills the entire client window.

With the control center filling the entire client window, the mouse can move outside the client window and interact with other applications on the client PC. Navigating the control center’s menu means clicking an icon to reveal the icons in the next sub-menu, then clicking again and again until the target operation is reached.

Once the operation is completed, each menu must be clicked to return to the menu above, all the way up to the top menu, the control center, which then has to be closed to get back to the remote control session. That’s really cumbersome navigation.

The “page peel” only has controls for the video connection, and making the “page peel” appear doesn’t release the mouse from the window. The only way I’ve found to leave the NOMACHINE remote client window is to use Ctrl+Alt+0 to bring up the NOMACHINE overlay/control center; at that point the mouse can leave the window boundaries.

If the machine hosting the NOMACHINE server has multiple monitors an option to display all monitors can be selected. However all remote monitors are squished into a single NOMACHINE client window instead of having one window for each monitor. Not very useful.

If the client and server BOTH have two monitors, the “Fullscreen on all monitors” option can be selected. In this configuration each of the remote system’s monitors fills one of the client system’s monitors. This feels very much like sitting at the remote system as opposed to controlling it remotely. However, again, there is no way to leave the remote control session’s window and interact with the client system without first bringing up the top-level NOMACHINE window.

WordPress’ Post name Permalinks, Block Editor, and JSON Error

Following the right path makes finding clues easier.

As stated in many posts on this site, generating traffic is not a goal here. When I first set up the site, permalinks either defaulted to “Plain” or I set them to “Plain”; I don’t remember. Every URL for a page, post, comment, or view ended with something like /?p=59 or /?p=128. It kept the URLs short but provided no other real benefit.

Recently I began to think about changing the permalinks to title style. I now appreciate having some description of the page in the URL and wanted to enable it for this site. Plus it is supposed to help with search engine ranking if the URL is descriptive. Simple enough, change the Permalink Structure from Plain to Post name.

Not so simple.

The change saves without error and the site functions as expected when browsed afterward. The URLs all include the title rather than ending in /?p=xx. All seems well. Then edit an existing draft or published post, or create a new one. On saving as a draft or attempting to publish, an invalid JSON message is displayed. The edits, or the new post, will not be saved and cannot be published!

Searching for “not a valid JSON response” together with “WordPress” got plenty of hits. As usual when there are lots of hits, there is broad agreement across articles about which settings to review for a solution. And none of the common solutions applied! Either the conditions were already as specified or making the change didn’t make saving edits in the block editor possible. Same for the solutions that weren’t common to all articles: either the conditions were already as specified or the change didn’t resolve the problem.

I turned off the block editor and used the classic editor. Posts could be edited and saved, and new posts could be created and saved as drafts or published. The site was working, with Post name permalinks, but nothing could be edited or created with the block editor; the classic editor had to be used.

Trying to get a handle on the problem I reviewed the server logs and the web server logs, enabled and reviewed the WordPress logs, and monitored the console output in my browser. I also tried editing from different operating systems and with different browsers. The problem was consistent regardless of the OS or browser used while editing. All I was able to find was that requests under a wp-json path were coming back not found.

With this information I searched the server from / for a path that includes wp-json. There is no path with that string, because wp-json isn’t a directory at all: it’s the path WordPress uses for its REST API, which the block editor depends on, and with Post name permalinks those requests only work if Apache rewrites them to WordPress’s index.php. That gave the troubleshooting a new focus: rather than WordPress and the JSON error, WordPress and the missing wp-json route. Eventually I found some articles recommending that AllowOverride All be enabled in the web server’s <Directory> statement, which is what lets Apache honor the rewrite rules WordPress puts in its .htaccess file.

I tested and it worked! Post name permalinks could be enabled and the block editor worked as expected. But I didn’t want to enable that directive for every site on the server. Fortunately a <Directory> statement can live inside a <VirtualHost> statement, so this site’s <VirtualHost> now contains a <Directory> statement with AllowOverride All. Restart the web server and *boom*, the site works and editing in the block editor works.
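For anyone in the same spot, the shape of the change looks roughly like this. A sketch only: the server name, document root, and config file path are placeholders for whatever your server actually uses, and mod_rewrite must be enabled for WordPress’s .htaccess rewrite rules to do anything:

# Enable mod_rewrite so WordPress's .htaccess rules are honored (Debian/Ubuntu layout assumed)
sudo a2enmod rewrite

# Example per-site configuration; paths and names are placeholders,
# the real change goes in the site's existing vhost file
cat <<'EOF' | sudo tee /etc/apache2/sites-available/wp-example.conf
<VirtualHost *:80>
    ServerName wp.example.org
    DocumentRoot /var/www/wordpress

    # Allow WordPress's .htaccess (rewrite rules for pretty permalinks and /wp-json/)
    <Directory /var/www/wordpress>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
EOF

sudo a2ensite wp-example && sudo apachectl configtest && sudo systemctl reload apache2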

It is surprising to me that not one of the WordPress troubleshooting articles I found, articles that agreed on at least a handful of steps among them, ever mentioned the Apache <Directory> statement.

The troubleshooting tree that finally worked, tracking down reasons for the missing wp-json path, wasn’t hinted at in the initial error message. Searches with “WordPress” and “the response is not a valid JSON response” kept turning up the same potential causes, none of which involved discovering the missing wp-json, how to troubleshoot it, or the fact that a web server configuration requirement, not a WordPress setting, resolved the issue.

When troubleshooting, keep digging and try different searches if what first turns up doesn’t help. Keep digging until the corrective action is found, even if the cause, or any message pointing to it, never turns up. The Apache log level was set to info and nothing containing the term “wp-json” was found in any log.

Cloud Storage, overGrive, and my own cloud?

Host my own cloud? Part of the journey is here. Not a full blown private cloud yet.

Syncing with external (cloud) directories is such a common thing. Providers have big incentives to lock you into their platform and don’t always provide a straightforward or full featured way to connect if you’re not using their connection tool. And there are security considerations that affect the method(s) available to connect to the account.

I’ve had a Gmail account for ages because I’ve had Android phones. I got in the habit of using Dropbox as convenient shared storage for documents on the phone and my computers, various Linux flavors and Windows. Then Dropbox changed its policy and limited free accounts to two connected devices. Now I needed a way for my second computer (primary computer plus phone already hit the device limit) to sync files.

overGrive to the rescue! A perpetual license, with plenty of personal use seats, for something like $5 back in 2020. Buy once, install on each pc, and have full GoogleDrive sync on my local drives. Make a change using any device, save the file, open it on another device and edit the sync’d copy with the latest changes.

I change my computers’ OS from time to time, or do other things that require applications like overGrive to be reinstalled, which involves reauthenticating overGrive with Google. Reinstalling had always gone without a hitch and GoogleDrive would be syncing on the PC again. I’ve done this several times over the years with no issue, all on the same original perpetual license.

When I needed to reinstall back in January because of one of those system changes, overGrive couldn’t authenticate. Google had made changes so that overGrive’s authentication (and that of other apps using the same mechanism) no longer worked. Fortunately I was at a point where I didn’t regularly switch PCs, so I wasn’t relying on GoogleDrive sync so much.

For a while the folks at The Fan Club had a page up explaining they didn’t know when the issue would be resolved. Google had changed the procedure and cost of licensing and they weren’t forecasting when/if the issues would be resolved.

A recent trip to The Fan Club’s site revealed the problem description page was gone, replaced by instructions for setting up the Google authentication on your own. I tried them and got authentication set up. Like many guides written against a fast-moving service, the illustrations, label names, and navigation paths didn’t match the actual website, or didn’t appear in the same order. But overGrive was working again.

GoogleDrive is still a manual sync for files I want on all devices. So there’s still a risk I cause a sync conflict between Dropbox, which is “primary,” and GoogleDrive, which is meant to be a one-way copy from Dropbox.

Solutions that come to mind are a paid Dropbox account so more devices can connect, switching over to GoogleDrive for all devices, or hosting my own cloud. There are plenty of options for hosting my own cloud: FileRun, Nextcloud, ownCloud, Seafile, TrueNAS Scale, and others. And there’s some appeal in knowing no one is monitoring my cloud use.

From Ubuntu/Zorin to Debian/OpenSUSE

Driven away from Ubuntu… by snaps

Ubuntu has been on my primary computer (initially desktop then laptop) for years. Yes, so many years that at one time my primary computer was a desktop. And on my backup laptop I’ve used a few different distributions but primarily Zorin.

The one thing the distributions I’ve tried had in common was being Ubuntu based. That meant lots of features driven by what Canonical was doing with Ubuntu. Then Canonical introduced snaps. For my use, snaps have been frustrating. I believe it was Ubuntu 20.04 where snap packages became the default for some apps, and it has progressed to more and more default snap packages since.

The things that frustrated me, and continued to frustrate me until switching to Debian in 2025: the snap daemon would often indicate updates were needed but then refuse to update. Snaps also broke any modification to program launch shortcuts, or made those modifications difficult or impossible (or at least beyond my willingness to invest the time), where updates to the same apps packaged as .deb never broke customizations. And, oh geeze, the loopback devices! The output of mount went from a third or half of a screen to more than a screen full, which makes it unnecessarily difficult to find what you’re looking for. All of this and more caused me to start seriously looking for distributions that don’t include snap, or at least don’t include or enable it by default.
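The loop device clutter at least has an easy workaround. A small sketch of how it can be filtered, assuming the snap mounts all live under /snap, which is the default layout:

# List mounts while hiding the snap squashfs loop mounts
mount | grep -v '/snap/'

# Or ask findmnt for real filesystem types only, which skips the loop-mounted squashfs images
findmnt -t ext4,vfat,btrfs,xfs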

What I’ve ended up doing is migrating my primary laptop to Debian and my backup to OpenSUSE.

There have been a few bumps in the migration, mostly because of my unfamiliarity with both Debian and OpenSUSE. But hey, anytime the OS is changed there’s some bumps. Even when upgrading to a newer version of the same OS.

At this point I won’t be going back to Ubuntu for a while. I’m getting comfortable with Debian on my primary and with OpenSUSE on the backup. The initial draft of this post was created on my backup laptop, OpenSUSE, from a coffee shop, connecting to my home server. The home server is still Ubuntu but, with the exception of Let’s Encrypt, there are no snaps in use on it.

Blocking, blocking, blocking

No visitors, so why not?

This site has been up for a few years now. Very few (hardly any) visitors. That’s fine. This is really just a place for me to make notes about tech that’s on my mind. Without a job there are fewer situations I find myself having to resolve, so less to write about.

wp.boba.org is on the Internet though, so of course it gets hit by bots. And since commenting without creating a login is permitted, the bots attempt to post spam. Comments need to be approved before they’re displayed, so I see, and reject, all of the spam. The source IP is usually in Russia, but spam comments also come from Kazakhstan, Belarus, Iran, Amsterdam, Saudi Arabia, Kuwait, Dubai, China, and VPNs that exit in Stockholm and London, among other places.

For a while I didn’t bother about it and simply marked those comments as spam so they never show up on the site. Lately though I’ve changed my approach a bit. Since I’m not trying to make a popular site, and I realize the likelihood of getting real comments from any of the networks spam comes from is infinitesimal, I decided to start blocking networks that spam comments are coming from.

The interesting thing is that once I began blocking networks, spam comments became a bit more frequent. Each time from a new network, of course, because the firewall was updated for each new spam source.

“More frequent” is a subjective measure, but when the first block rule went in it was a while before another spam comment showed up. After that new network was blocked, the interval to the next spam comment was shorter than the interval from the first to the second. It seems as if, once a site is found where spam can be posted, that site’s URL is shared among spammers so they can all take a crack at it.

I’ve also found how to add Internet block lists to the firewall. Those lists cover hundreds of thousands of IPs and are updated daily. Even so, and much to my surprise, after adding them the only blocks I see in the log are from the spammer networks. With hundreds of thousands of IPs in the block lists I would have thought some would show up in the log, but none have so far. That’s a good thing, but still a surprise.

Today’s blocked networks follow below, with a sketch after the list of how networks like these can be fed to a firewall. It will probably be a day or two before there are others to add. Don’t expect updates. Hmmm…….

37.99.32.0/20
37.99.48.0/20
37.99.80.0/21
37.221.0.0/24
45.88.76.0/22
46.8.10.0/23
46.151.28.0/24
46.161.11.0/24
62.113.118.0/24
77.238.237.0/24
80.239.140.192/27
84.17.48.0/23
84.38.188.0/24
87.249.136.0/22
91.84.100.96/27
91.201.113.0/24
93.183.92.0/24
178.172.152.0/24
178.217.99.0/24
179.43.128.0/18
183.0.0.0/10
185.173.37.0/24
188.126.89.64/27
192.42.116.192/27
194.32.122.0/24
195.2.70.0/24
195.181.174.0/23
212.34.128.0/24
212.34.141.0/24
212.34.148.0/24
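One common way to block a list of networks like the one above is an ipset referenced by a single firewall rule. A minimal sketch, not necessarily the setup used here; blocked-networks.txt stands in for a file holding one CIDR per line:

# Create a set of blocked networks
sudo ipset create blocked_nets hash:net

# Load the CIDRs from a file (one network per line, e.g. the list above)
while read -r net; do
    sudo ipset add blocked_nets "$net"
done < blocked-networks.txt

# One pair of rules covers the whole set: log, then drop
# (appended to INPUT; adjust position relative to existing rules as needed)
sudo iptables -A INPUT -m set --match-set blocked_nets src -j LOG --log-prefix "BLOCKED: "
sudo iptables -A INPUT -m set --match-set blocked_nets src -j DROP

The appeal of the ipset approach is that the rule count stays at two no matter how long the list grows, which matters once daily block lists with hundreds of thousands of entries are added.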

Windows 11 Pro setup

First time for me doing the out-of-box experience with Windows 11 Pro preloaded on new hardware.

The ThinkPad X13 2-in-1 Gen 5 is a very nice laptop. During initial power-on and setup, since it was Windows 11 Pro, I opted for a local admin account and associated it with a Microsoft account.

Once on the desktop I installed KeePassXC, Calibre, AeroAdmin, Firefox, iTunes, and Kdenlive. I used Kdenlive to create a few videos describing settings or usage for some of the apps. YouTube links are sprinkled through this post.

The person getting the laptop has decided to step up their online credential game. Rather than using variations of a base password they want to learn to use and manage more complex ones. Bravo, I say. And check out the videos I made about using KeePassXC to do that, Login with KeePassXC and KeePassXC, Updating a Password.

Calibre is an e-book library manager. I included it because it has become very useful to me and I encourage everyone to try it. All those household documents, appliance manuals, car owner manuals, serial numbers and VINs can be cataloged and available at your fingertips. It’s also a great way to organize training guides, magazine and web articles, etc. Maybe my calibre library info system review will pique your interest.

Sometimes the hardest part of giving remote support is getting the recipient to recognize the steps that need to be taken to complete the connection. This AeroAdmin guide is my attempt to clarify that.

Then Firefox was added because, after some updates, Edge would no longer log in to some Microsoft websites. That blocked access to some account info, among other things. So: install Firefox and with it successfully log in to every Microsoft site Edge would not. Firefox is there because a backup to Edge is necessary.

And what resolved Edge’s problems with Microsoft’s own sites? Disabling all of Edge’s security features for any Microsoft.com, Office.com, or Live.com domain. And after all that, “Device encryption” couldn’t be enabled because Windows didn’t recognize that the Microsoft account was logged in. It clearly was, as demonstrated by access to OneDrive, Microsoft 365, and other integrated features after logging on to the desktop, with no further credential prompts for any of those services.

It seems Microsoft tries to soften the blow when enabling device encryption fails, with messages like “Oops, something went wrong” and “it was probably us.” That sets a light mood and is a relief at first. But after having the problem for more than a week it’s disturbing that nothing has changed.

That didn’t get resolved before the laptop was delivered to its owner.

Confirmed, Movies Updates Work

House of cards, but with the stack setup it is easier.

Like many things that appear on a computer screen, a long chain of events needs to happen successfully for what’s on screen to be what’s intended. The various “Movies” tables on this website are one example.

I got some DVDs for Christmas, including a very nice Seven Samurai DVD with extras. My movies database had it marked as a movie I want, so it appeared in the Movies I Want tables that are on two pages of this website.

Change the setting in the database so it’s a movie I have, and the title should now appear only in the two Movies I Have tables and no longer in the two Movies I Want tables. One change to the source triggers four changes on the website.

In the case of the Movies tables, for changes to the movies database to appear in the tables, the updated data must be exported to two files, one listing “Movies I Have” and the other “Movies I Want”. Those exported files update the source lists the Movies tables refer to. And finally a sync tool from the wpTables publisher must run against the source lists to update the Movies tables on the website.

Making changes to the movies database is infrequent, a few times a year at most. Remembering the process each time is a challenge, but now the data extract and link refresh steps are automated, which makes most of the process happen without needing to remember anything (or I can look at the code if I want to remember).

The link update code as a cron…

# m h  dom mon dow   command
*/15 * * * * wget -q -O - "https://wp.boba.org/wp-admin/admin-ajax.php?action=wdtable_update_cache&wdtable_cache_verify=<hex-number>"

Export updates for source lists …

<?php
 // Export the "movies I want" and "movies I have" views to CSV files
 // that the website's table source lists are refreshed from.
 $server = "<host>";
 $username = "<user-name>";
 $password = "<pwd>";
 $database_name = "<movies>";

 $link_myMovies = mysqli_connect($server, $username, $password, $database_name)
     or die("Unable to connect to $database_name on $server");

 $Views = array("movies_i_want", "movies_i_have");
 $out_path = "/var/tmp/";

 foreach ($Views as $view)
 {
        // Pull every row from the view
        $str_query = "SELECT * FROM $view";
        $query_result = mysqli_query($link_myMovies, $str_query)
            or die("Query failed for $view");
        $columns_total = mysqli_num_fields($query_result);

        // Collect the column names for the CSV header row
        $col_names = array();
        for ($i = 0; $i < $columns_total; $i++)
         {
                $Heading = mysqli_fetch_field_direct($query_result, $i);
                array_push($col_names, $Heading->name);
         }

         // Write the header, then one CSV line per row
         $fileOut = fopen("$out_path$view.csv", 'w') or die("Unable to open $out_path$view.csv");
         fputcsv($fileOut, $col_names);

         while ($row = mysqli_fetch_array($query_result, MYSQLI_NUM))
         {
                fputcsv($fileOut, array_values($row));
         }

         fclose($fileOut) or die("Unable to close $out_path$view.csv");
 }
?>

Up again, but not public yet

Well, except, you’re reading this so it is public.

I lost interest in maintaining this server and website when I lost my job and couldn’t get another. The server runs Ubuntu, the web server is Apache, and the CMS is WordPress. It’s been running for a number of years without issue. I’ve never thought of it as “production” because I don’t rely on it for anything. It’s just a test bed for familiarizing myself with the software stack and gaining some understanding of its setup and administration. I’m self hosting; it’s an old computer repurposed as a server.

One other thing I experimented with is DNS. I wanted to be able to reach my server as wp.boba.org whether I was on the public Internet or on my home network. That worked fine for years with BIND9 and isc-dhcp.

I developed the habit of running upgrades periodically without testing. If there was a problem then no big deal, not production, figure out the issue, repair and proceed. Problems happened a few times with that approach and were always easily rectified.

DNS on the server stopped working after an upgrade. I tried many things and couldn’t figure out why. Rather than roll back the upgrade or restore the system from a backup, I kept mucking with it to try to get it to work. No success. Eventually I just lost interest and let the server go dark. I wasn’t working, so I didn’t have anyone to talk tech with about the server project, and there seemed no point in fixing it.

After a while I did want to dip my toe in the water again, so I decided to rebuild the server and bring all components up to the latest releases. I still couldn’t get BIND9 DNS to work, and searching BIND9 issues I found other Ubuntu users were having problems with it too. After looking at alternate DNS servers I decided to try dnsmasq. That got me a working DNS on my home network, and that got me to the point of having the server up and publicly available again.
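The dnsmasq piece that makes the split view work is tiny. A sketch with a placeholder LAN address standing in for the server’s real one: the single address= line tells dnsmasq to answer queries for wp.boba.org with the local address while everything else is forwarded upstream.

# Answer wp.boba.org with the LAN address for clients on the home network
# (placeholder address; use the server's real LAN IP)
echo 'address=/wp.boba.org/192.168.1.10' | sudo tee /etc/dnsmasq.d/local.conf

# Restart dnsmasq and verify the answer
sudo systemctl restart dnsmasq
dig +short wp.boba.org @127.0.0.1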

All development of the server configuration and settings was done on a virtual machine (VM) in a virtual network with virtual clients; VirtualBox is the hypervisor. Once everything worked as expected I migrated the server VM to a physical host. That took surprisingly little tweaking: network addresses had to be changed from the virtual network settings to the home network settings, and a different Ethernet device name entered where needed. That was about it to go from a virtual to a physical server.

For all the world to see, in all its underwhelming glory, wp.boba.org is back. Enjoy.

Mount an external LVM drive

It’s easy. Just took a while to recall.

The original server was installed on hardware from a thumb drive ISO, with LVM set up during the server install.

The new server came from a VirtualBox VM and used plain ext4. It’s now running on a different drive in the original server hardware, and the LVM drive has been set aside.

I want to get at some info on the LVM drive, but trying to mount it as an external drive ran into many dead ends at first.

And of course it was simply a question of installing the right tooling. In this case # apt update && apt install lvm2, and the volume can be mounted read/write.
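A sketch of the full sequence, assuming the old disk is attached in an external housing; the volume group and logical volume names are placeholders for whatever the original install created:

# Install the LVM userspace tools
sudo apt update && sudo apt install lvm2

# Find the volume group on the newly attached disk and activate it
sudo vgscan
sudo vgchange -ay

# List the logical volumes, then mount the one holding the data
sudo lvs
sudo mkdir -p /mnt/oldserver
sudo mount /dev/<vg-name>/<lv-name> /mnt/oldserver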

I’ll keep the old drive around for a while in its external housing. I’m sure there will be times I want to pluck something off it. But I need to put a label on it with a hard date for it to be DBANed.