I HATE Optimum!!

There are many reasons cable companies love local monopolies. One is that no matter how poor their service is, customers have no alternative.

Like many in the US I have only one ISP (Internet service provider) that can provide reasonable residential Internet upload and download speeds, e.g. >=200Mbps down, >=30Mbps up. That provider is Optimum. I’m not a gamer, don’t stream 4K, and almost never have more than one device streaming at a time, so yes, >=200Mbps down is reasonable for me.

To run this website from my home server I need the Optimum provided Altice Gateway (a combined cable modem and router) to have its router component set to bridge mode. My own router’s WAN port then gets a public IP address and that way I control NAT to the services I need. Optimum doesn’t provide access to the bridge mode setting for their gateway’s router so tech support must be contacted to have it set to bridge mode.

At this point I’ve had three gateways from Optimum. With the first two gateways it took a day or two of contact with support each time to get the correct settings established. It was necessary to contact support, explain the router setting that needed to be made, and then follow up until it was actually working.

With this third gateway, setup and config were nearly automatic, except, of course, setting the router to bridge mode. As usual I contacted support to have the router set to bridge mode. It worked! My router got a public IP on the WAN port and everything was good to go. Then, less than a week in, the working setup stopped working. My router was getting a private IP on the WAN port. Checking my Optimum gateway on their website showed the router portion of the gateway had its LAN enabled again, and an address on that private LAN was what my WAN port was getting. The gateway’s router was back out of bridge mode.

I called and texted with support for more than three weeks trying to get them to set it back to the configuration I needed, the configuration it had been in for a short while after initial setup. Support even told me things like the problem was my router. The various techs I spoke or texted with continually blamed my router even though it was clear my router was getting a private IP address from the LAN configured in the gateway’s router settings.

For every tech I spoke or texted with, neither their technical understanding of networking nor their spoken English was at the level needed for the work. It was clear basic networking wasn’t understood, and they would mistake words like “pass thru” for “password.” I used “pass thru” while trying to explain (explain? to tech support?) what bridge mode meant. Getting them to recognize and acknowledge the problem was impossible, so I gave up. I decided to buy a cable modem so I would own both components needed to replace Optimum’s gateway.

Even that was fraught with concern for me. Given the problems I’d had with knowledgeable support I wanted to be certain the modem I bought could be activated by Optimum. I went to an Optimum store to ask about modems. They directed me to a website, pickmymodem.com, to identify a compatible modem. I pointed out it wasn’t their website and asked, if the modem didn’t work, would Optimum simply say, “not our website, too bad,” even though they had directed me there? Silence from across the counter.

Pickmymodem.com specifically states DOCSIS 3.1 modems are not recommended for Optimum.



Next I contacted Optimum’s modem activation number to ask about modem compatibility. On calling, the voice menu doesn’t offer an option to speak with “modem activation” even though the website specifically identifies that number for modem activation. So of course one must guess the appropriate menu option, talk to a bot that tries to “solve the problem” without connecting you to a person, and then, after getting past that, eventually speak with someone.

When I finally spoke to someone they told me a DOCSIS 3.0 compatible modem was needed, but they had no information about specific modems and would not confirm whether a particular modem would work. I pointed out that Optimum’s website says a DOCSIS 3.1 compatible modem is needed and that DOCSIS 2.0 and 3.0 modems are not allowed on their network. Which information is correct: what you’re telling me, DOCSIS 3.0, or what the website shows, DOCSIS 3.1? Just silence from them, no answer.

So Optimum couldn’t or wouldn’t provide information about compatible modems, recommended a site they don’t control to identify one, and, between the site they recommended, their own website, and the support staff I spoke with, gave conflicting information about what would work.

I gave up and just bought a DOCSIS 3.1 modem. The modem arrives and I’m amazed, amazed I say, activation went without a hitch! But this is Optimum so, of course, the trouble isn’t over.

When we moved here about six years ago Optimum was, and still is, the only service provider for my address. Service was started at 200Mbps and that speed was never realized. Slightly above 100Mbps was typical. The first gateway eventually started failing and Optimum replaced it. A little over a month ago Internet connectivity became very poor on the second gateway. Optimum sent me a third gateway, bumped the speed up to 300Mbps, and reduced the monthly service charge by $5. Nice, right?

Of course problems don’t stop there. I never saw close to 300Mbps. The best was usually around 150Mbps.

Throughout all this I’ve been on their website many times and noticed new customers are being offered 300Mbps for $40/month. I’m paying more than 3x that, even after the $5/month discount, for the same speed.

I called the retention department to ask to get my bill reduced to something in the $40/month range. Nope, no can do, the $5 was all I would get. What they would do was bump the speed to 1Gbps and reduce the bill for that to $95/month for three years. I doubted their ability to deliver the service given their history, but it was the only way to get the bill below $100/month, so I accepted the offer.

Next problem to resolve: I’ve supposedly got 1Gbps now, but every speed test I’d done until this morning topped out just below 200Mbps. This morning the speed tests across different test sites range from a low of 245Mbps to a high of 480Mbps. Better than I’ve had, but still not even 50% of what’s supposedly being delivered.

I’m going to try to get the speed they’re supposedly giving me, but I expect it won’t happen. They’ll tell me it’s my equipment that’s the problem, I’m sure.

I expect the speed issue won’t be resolved and I’ll still hate Optimum after I try to get to the speed they say they’re providing. Have to wait and see.

WordPress’ Post name Permalinks, Block Editor, and JSON Error

Following the right path makes finding clues easier.

As stated in many posts on this site, generating traffic is not a goal here. When I first set up the site the permalinks either defaulted to “Plain” or I set them to “Plain,” I don’t remember. Every URL for a page, post, comment, or view ended with something like /?p=59 or /?p=128. It kept the URLs short but provided no other real benefit.

Recently I began to think about changing the permalinks to title style. I now appreciate having some description of the page in the URL and wanted to enable it for this site. Plus a descriptive URL is supposed to help with search engine ranking. Simple enough: change the Permalink Structure from Plain to Post name.

Not so simple.

The change saves without error and the site functions as expected when browsed afterward. The URLs all include the title in the link rather than ending in /?p=xx. All seems well. Then edit an existing draft or published post, or create a new one. When saving as a draft or attempting to publish, an invalid JSON message is displayed. The edits, or the new post, will not be saved and cannot be published!

I started tracking down “not a valid JSON response” with “WordPress” and got plenty of hits. As is usual when there are lots of hits, there is agreement across articles about many of the settings to review for a solution. And none of the common solutions applied! The conditions were already as specified, or making the change didn’t make saving edits in the block editor possible. Same for the solutions that weren’t common to all articles: either conditions were already as specified or the change didn’t resolve the problem.

I turned off the block editor and used the classic editor. Posts could be edited and saved, and new posts could be created and saved as drafts or published. The site was working, and with Post name permalinks, but nothing could be edited or created with the block editor; the classic editor had to be used.

Trying to get a handle on the problem I reviewed the server logs and the web server logs, enabled and reviewed the WordPress logs, and monitored console output in my browser. I also tried editing from different operating systems and with different browsers. The problem was consistent regardless of the OS or browser used while editing. All I was able to find was that files in a wp-json folder weren’t being found.

With this information I searched the server from / for a path that includes wp-json. There is no path with that string. That gave me a new search focus for troubleshooting: rather than WordPress and the JSON error, WordPress and the missing wp-json path. Eventually I found some articles that recommended AllowOverride all be enabled for the web server’s directory statement.

I tested and it worked! Post name permalinks could be enabled and the block editor worked as expected. But I couldn’t reconcile enabling that directive for all sites on the server. Fortunately the <Directory> statement can live inside a <VirtualHost> statement. This site’s <VirtualHost> statement now contains a <Directory> statement with AllowOverride all. Restart the server and *boom* the site works and editing in the block editor works.
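In hindsight that lines up: wp-json isn’t a directory on disk, it’s the WordPress REST API route the block editor uses to save posts, and with Post name permalinks it’s reached through the rewrite rules WordPress keeps in its .htaccess file, rules Apache ignores unless AllowOverride permits them. For reference, a minimal sketch of that kind of virtual host (the domain and paths are placeholders, not my actual config):

<VirtualHost *:443>
    ServerName example.org
    DocumentRoot /var/www/example.org

    # Let WordPress's .htaccess rewrite rules apply (pretty permalinks, /wp-json/ REST routes)
    <Directory /var/www/example.org>
        AllowOverride all
        Require all granted
    </Directory>

    # ... SSL directives added by Certbot ...
</VirtualHost>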

It is surprising to me that not one of the WordPress troubleshooting articles I found, which agreed on at least a handful of steps among them, ever mentioned the Apache <Directory> statement.

The troubleshooting path that finally worked, tracking down reasons for the missing wp-json path, wasn’t hinted at in the initial error message. Searches with “WordPress” and “the response is not a valid JSON response” kept turning up the same potential causes, none of which included discovering the missing wp-json, how to troubleshoot it, or that a web server configuration requirement, not a WordPress setting, resolved the issue.

When troubleshooting, keep digging, and try different searches if what first turns up doesn’t help. Keep digging until the corrective action is found, even if the cause, or messages pointing to it, can’t be found. The Apache log level was set to info and there was nothing with the term “wp-json” in any log.

Cloud Storage, overGrive, and my own cloud?

Host my own cloud? Part of the journey is here. Not a full blown private cloud yet.

Syncing with external (cloud) directories is such a common thing. Providers have big incentives to lock you into their platform and don’t always provide a straightforward or full featured way to connect if you’re not using their connection tool. And there are security considerations that affect the method(s) available to connect to the account.

I’ve had a GMail account for ages because I’ve had Android phones. I got in the habit of using DropBox on the phone as convenient storage for documents on the phone and my computers, various Linux flavors and Windows. DropBox changed its policy and limited free accounts to two connected devices. Now I needed a way for my second computer (primary computer + phone hit the device limit) to sync files.

overGrive to the rescue! A perpetual license, with plenty of personal use seats, for something like $5 back in 2020. Buy once, install on each pc, and have full GoogleDrive sync on my local drives. Make a change using any device, save the file, open it on another device and edit the sync’d copy with the latest changes.

I change my computer’s OS from time to time or do other things that require applications like overGrive to be reinstalled which involves reauthenticating overGrive with Google. Reinstall has always gone without a hitch and GoogleDrive was syncing on the pc. I’ve done this several times over the years with no issue. And all on the same original perpetual license.

When I needed to reinstall back in January because of one of those system changes, overGrive couldn’t authenticate. Google made some changes so the overGrive authentication (and other apps using the same mechanism) didn’t work any longer. Fortunately I was at a point where I didn’t regularly switch pcs and so wasn’t relying on GoogleDrive sync so much.

For a while the folks at The Fan Club had a page up explaining they didn’t know when the issue would be resolved. Google had changed the procedure and cost of licensing and they weren’t forecasting when/if the issues would be resolved.

A recent trip to The Fan Club revealed the problem description page was gone, replaced by instructions for setting up the Google authentication on your own. I tried them and got authentication set up. Like many guides written for new services, the illustrations, label names, and functional paths were not the same as the actual website, or not in the same order. But overGrive was working again.

It still makes GoogleDrive a manual sync for files I want on all devices. So there’s still a risk I cause a sync conflict between Dropbox, which is “primary,” and GoogleDrive, which is meant as a one-way copy from Dropbox.

Solutions that come to mind are a paid Dropbox account so more devices can connect, switching over to GoogleDrive for all devices, or hosting my own cloud. There are plenty of options for hosting my own cloud: FileRun, NextCloud, OwnCloud, Seafile, TrueNAS Scale, and others. And there’s some appeal in knowing no one is monitoring my cloud use.

Confirmed, Movies Updates Work

House of cards, but with the stack setup it is easier.

Like many things that appear on a computer screen, there is a long chain of events that has to happen successfully for what is on the screen to be what is wanted there. The various “Movies” tables on this website are one example.

I got some DVDs for Christmas. A very nice Seven Samurai DVD with extras. My movies database had it marked as a movie I want, so it appeared in the Movies I Want tables that are on two pages of this website.

Change the setting in the database so it’s a movie I have, and the title should now be found only in the two Movies I Have tables on the website and no longer in the two Movies I Want tables. One change to the source triggers four changes on the website.

In the case of the Movies tables, for changes to the movies database to appear in the tables, the updated data must be exported to two files, one listing “Movies I Have” and the other “Movies I Want”. Those exported files update the source lists the Movies tables refer to. And finally a sync tool from the wpTables publisher must run against the source lists to update the Movies tables on the website.

Making changes to the movies database is infrequent, a few times a year at most. Remembering the process each time is a challenge, but now the data extract and link refresh steps are automated, which makes most of the process happen without the need to remember anything (or I can look at the code if I wish to remember).

The link update code as a cron…

# m h  dom mon dow   command
*/15 * * * * wget -q -O - "https://wp.boba.org/wp-admin/admin-ajax.php?action=wdtable_update_cache&wdtable_cache_verify=<hex-number>"

Export updates for source lists …

<?php
// Export each movies view to a CSV file the website's tables use as their source lists.
$server = "<host>";
$username = "<user-name>";
$password = "<pwd>";
$database_name = "<movies>";
$link_myMovies = mysqli_connect($server, $username, $password, $database_name)
        or die("Unable to connect to $database_name");

// One view, and one output CSV, per website table source.
$Views = array("movies_i_want", "movies_i_have");
$out_path = "/var/tmp/";

foreach ($Views as $view)
{
        $str_query = "SELECT * FROM $view";
        $query_result = mysqli_query($link_myMovies, $str_query)
                or die("Query failed for $view");
        $columns_total = mysqli_num_fields($query_result);

        // Collect the column names for the CSV header row.
        $col_names = array();
        for ($i = 0; $i < $columns_total; $i++)
        {
                $Heading = mysqli_fetch_field_direct($query_result, $i);
                array_push($col_names, $Heading->name);
        }

        // Write the header row, then one CSV row per record in the view.
        $fileOut = fopen("$out_path$view.csv", 'w') or die("Unable to open $out_path$view.csv");
        fputcsv($fileOut, $col_names);

        while ($row = mysqli_fetch_array($query_result, MYSQLI_NUM))
        {
                fputcsv($fileOut, array_values($row));
        }

        fclose($fileOut) or die("Unable to close $out_path$view.csv");
}
?>
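
The export runs on a schedule too. A sketch of one way to do it, assuming the script above is saved as export_movies.php; that file name and path are placeholders, not the actual setup …

# m h  dom mon dow   command
15 2 * * 0 /usr/bin/php /home/<user>/bin/export_movies.php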

Rehoming all my Domains, Oh My !

Domains and registrars, and services, and what?

Google is selling their domain registration business to Squarespace. If your webserver is at a dynamic Internet address, the address needs to be monitored so it can be updated on the name server when it changes. Squarespace name servers won’t accept dynamic updates.

Monitoring the network for changes to the router’s public Internet address and updating Google Domains‘ name servers was done with Google provided DDNS instructions and settings. Squarespace, the provider Google is selling the Domain Name business to, does not support DDNS. Once Squarespace is actually managing the domain it will keep the old Internet address information in its name servers, but it doesn’t provide a way to automate updates. So once the domain is on Squarespace and my Internet provider changes my modem’s Internet address, access to this website by name goes down unless I’ve set up another way to keep the website address updated.

Two ways I found to avoid this: move to a registrar that supports DDNS, like Namecheap, or find a DNS provider that supports DDNS and doesn’t require registering a domain with them, like FreeDNS (at afraid.org, yes, but don’t be), and use their name servers as custom name servers with the domain registrar. That second approach requires two service providers for each domain, a registrar and a DNS service.

Registrars charge a fee for migrating a domain to them. Not much, but if you can just change a setting and avoid paying to move to a different registrar, why not do that?

“THAT,” in this case, means leaving the domains with Google, updating the name servers on Google’s domain registration record to the FreeDNS name servers, and then keeping the Internet address updated on the FreeDNS name servers.
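
Keeping the address updated can be as simple as a cron job that fetches the per-host update URL FreeDNS generates. A sketch only; the URL is a placeholder since the real one comes from the FreeDNS dashboard …

# m h  dom mon dow   command
*/10 * * * * curl -s "<direct update URL from the FreeDNS dashboard>" >/dev/null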

I’ve moved one domain to Namecheap to see how I like it, an $11 move. It will give me hands-on time with a third domain control panel: Google Domains, Squarespace, and Namecheap.

For the others I’ve created records on FreeDNS, updated the name server records on Google Domains, and will start using the Squarespace control panel to manage them when they transfer from Google. Squarespace doesn’t support DDNS, but if custom name servers are supported the move from Google Domains should go without a hitch.

Haven’t moved boba.org yet. I want to interact with the other sites a bit before deciding whether to use FreeDNS and their name servers with Squarespace domain registration or move to a registrar that supports DDNS with their own name servers.

I do have to spend time out of the house to interact with the sites through the new DNS / name server setups. Sure, I could do it through the phone if I turn off the WiFi, but LTE isn’t very good here and I don’t like a phone screen for web browsing. If LTE were good I could tether the computer to the phone and browse the sites on the pc as I’d like. Kind of lucky the signal is weak; it’s more fun to go out. Maybe find a coffee shop in a mall, buy a cup, sit in one of the seats, and figure out how to choose the better option, then compare the details and make the choice.

Goodbye Google Domains!! ?? !!

…hello Namecheap ddns or, hmm, domain hosting too?

This domain, boba.org, is on a server I control, behind a dynamic IP address. Google Domains provides the domain hosting and supports DDNS which made it easy to have Google nameservers be authoritative, keep the A record updated, and manage the physical server.

Now Google’s giving up the domain name business, along with all the convenient features they bundle like DDNS, redirects, privacy, etc.

It’s being transferred to Squarespace. And Squarespace doesn’t include DDNS or offer it as a bundle.

Still need a way to update the domain record with the new address when it changes, BUT that can’t be done with Squarespace nameservers.

Checking if the domain record can carry nameserver entries but no A record with an IP. IF SO, the domain record points to a nameserver that can be updated, e.g. Namecheap free DNS, and the domain continues to function when the IP changes even though the new domain host doesn’t offer dynamic IP updating.
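
If that pans out, something like ddclient on the server could do the updating. A sketch of a ddclient.conf stanza for Namecheap dynamic DNS; every value here is a placeholder and worth checking against ddclient’s and Namecheap’s current documentation …

# detect the public address via the web, then push it to Namecheap's dynamic DNS
use=web
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=example.org
password='<dynamic DNS password from the Namecheap panel>'
@, www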

Will see what happens and update…

Retiring some hardware

…when a computer’s been around too long

Time to retire some old tech. That display is a whopping 15″ diagonal. Resolution was limited. Only used it for a terminal for a server these last six years or so. And this is it under my arm on the way to the dumpster.

Right after the monitor, the old server was carried out to rubbish.

BEFORE delivering to rubbish I made sure to wipe the HD with DBAN, Darik’s Boot and Nuke. Have relied on it for years.

The computer’s manufacturing date stamp was 082208. Didn’t think to take a photo. It was a Dell OptiPlex 330 SFF, Pentium Dual Core E2160 1.8GHz, 4GiB RAM, 90 GB HD. They looked like this.

I got it in 2015. It had been replaced during a customer hardware upgrade then sat on the shelf unpowered for about a year before I joined that office. On hardware clean-out day it was in a pile to take home or put in the dumpster.

It became my boba.org server sometime in 2015 and served that function until December 2022.

Six years of service and then it sat on the shelf for a year. Then eight years hosting boba.org. Fourteen years of service is a LONG life for a computer!

The replacement “server” is an old laptop, old, but it’s new enough it doesn’t have an Ethernet port. I got a USB Ethernet adapter, Realtek Semiconductor Corp. RTL8153 Gigabit Ethernet Adapter, and plugged a cable in. Better performance than WiFi.

Hardware is several steps above the old server too. Intel Core i3-5015U CPU @ 2.10GHz, 6GiB RAM, 320 GB HD (I should replace with SSD). Date of manufacture isn’t as clear. Maybe late 2015 early 2016.

The CPU Benchmark comparison of the two processors, Intel Core i3-5015U vs Intel Pentium E2160, shows clear differences in processing power.

Now that the new server is up (well, it has been for a few months, but I didn’t want to add new services until I had secondary DNS running), it’s time to add features and services on the network.

bind9 primary and secondary DNS on home LAN

I now have two DNS servers for my home network. Once I took DNS and DHCP off the router and moved them onto the server it was easy to connect to services on the home network by DNS name. But if that one DNS server was down then no devices could get to the Internet. Not good.

Time to set up a second DNS server. That need prompted my first Raspberry Pi purchase. The default app for DNS and DHCP on Raspberry Pi is DNSMasq. I tried to make it secondary to the existing primary BIND9 server. I couldn’t work that out, so I purged DNSMasq from the Raspberry Pi and installed BIND9.
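The zone definitions end up being small. A sketch of the idea only, not my actual config: the zone name is a placeholder, and which box plays primary here is just for illustration; the two addresses are the DNS servers shown in the resolvectl output below.

// primary (192.168.0.203) - named.conf.local
zone "home.lan" {
    type master;
    file "/etc/bind/db.home.lan";
    allow-transfer { 192.168.0.205; };
    also-notify { 192.168.0.205; };
};

// secondary (192.168.0.205) - named.conf.local
zone "home.lan" {
    type slave;
    file "/var/cache/bind/db.home.lan";
    masters { 192.168.0.203; };
};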

Once I got the config statements worked out it’s been fun disabling one or the other and having the resolvectl status command show the flip back and forth between the active DNS servers, with my web pages found regardless of which server is running.

The host with both DNS servers running:

localhost:~$ resolvectl status interface
Link 3 (interface)
      Current Scopes: DNS          
DefaultRoute setting: yes          
  Current DNS Server: 192.168.0.205
         DNS Servers: 192.168.0.203
                      192.168.0.205

…shutdown the .205 bind9 server

server205:~$ sudo systemctl stop bind9.service
server205:~$ sudo systemctl status bind9.service
* named.service - BIND Domain Name Server
     Loaded: loaded (/lib/systemd/system/named.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2023-01-23 06:51:42 EST; 35s ago

…and now the host’s current DNS server changes once the .205 bind9.service is shutdown.

localhost:~$ resolvectl status interface
Link 3 (interface)
      Current Scopes: DNS          
DefaultRoute setting: yes          
  Current DNS Server: 192.168.0.203
         DNS Servers: 192.168.0.203
                      192.168.0.205

Perils of a part time web server admin

Not being “in it” all the time can make simple things hard.

Recently one of the domain names I’ve held for a while expired. Or actually, I let it expire. It was hosted on this same web server along with several other websites and had a secure connection using a Let’s Encrypt SSL certificate. All good.

The domain name expired, I disabled the website, and all the other websites on the server continued to be available. Until they weren’t! When I first noticed I just tried restarting the web server. No joy, that didn’t get the other sites back up.

And here’s the peril of part time admin: where to start with the troubleshooting? For all my sites and the hosting server I really don’t do much except keep the patches current and occasionally post content using the WordPress CMS. Not much troubleshooting, monitoring logs, etc., because there isn’t much going on. And, though some might say otherwise, I don’t spend all my time at the computer dissecting how it operates.

I put off troubleshooting for a while. This web server is experimental, not production, so sometimes I cut myself some slack and don’t dive right in when things aren’t working. I had other things pending that required more attention.

When I did start I was very much at a loss where to begin because, as noted, I had disabled a web site and everything continued to work for a while. When it stopped working I hadn’t made any additional changes.

Logs are always a good place to look, yes? This web server is set up to create separate logs for most of the sites it’s hosting. Two types of logs are created, access logs and error logs. Access logs showed what was expected, no more access to that site after I disabled it.

The error logs confused me though. The websites use Let’s Encrypt SSL certificates, and they use Certbot to set up https on the Apache http server. A very common setup. The confusing thing about the error log was it showed the SSL configuration for the expired web site failing to load. Why was the site trying to load at all??? I had disabled the site using the a2dissite program provided by the server distribution. The thing I hadn’t thought about is that the Certbot script for Apache sets up SSL by modifying the <site_name>.conf file AND creating a <site_name>-le-ssl.conf file.

So even though the site had been disabled with a2dissite <site_name>.conf, I hadn’t thought to a2dissite <site_name>-le-ssl.conf. Once I recognized that and ran the second a2dissite command the web server started right up. No more failing to load SSL for the expired site. And, surprisingly, failing to load the SSL for the one site prevented the server from starting at all, rather than disabling the one site and loading the others that didn’t have configuration issues.
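
The fix amounts to two commands, shown here with a placeholder site name:

sudo a2dissite <site_name>.conf
sudo a2dissite <site_name>-le-ssl.conf
sudo systemctl restart apache2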

Something for another time… I expect there must be a way for the server to start and serve correctly configured sites while not loading incorrectly configured ones, instead of letting the presence of one misconfigured site prevent all sites from loading. It just does not seem likely that such a widely used web server would fail to serve correctly configured sites when only one of multiple hosted sites is misconfigured.

The peril of part-time admin, or jack of all trades and master of none, is that these sorts of gotchas pop up all the time because of limited exposure to the full breadth of dependencies for a program to perform in a particular way. It isn’t a bad thing. Just something to be aware of, so rather than blame the software for not doing something, remember that there are often additional settings needed to achieve the desired effect.

Be patient. Expect to need to continue learning. And always, always, RTFM and any other supporting documents.

Server upgrade

…and I’m publishing again.

Well, this was a big publishing gap. Four months. I hope not to have such a long one again. Anyway, there are a number of drafts in the wings, but I decided to publish about this most recent change because it is what I wanted to get done before publishing again.

The server is now at Ubuntu 20.04, 64-bit of course. It started out at 16.04 32-bit and got upgraded to 18.04 i686. Then I attempted the 20.04 upgrade and couldn’t, because I had forgotten it was legacy 32-bit and 20.04 is only available in 64-bit. On to other things while I planned a different upgrade solution. When I got back to it I thought I should upgrade to 22.04 since that had been released. As I was going through the upgrade requirements I discovered that several needed applications didn’t have 22.04 packages yet, particularly Certbot and MySQL. So back to 20.04 to complete the upgrade.

MySQL upgrade wasn’t too bad. There was a failure, but it was common and a usable fix for the column-statistics issue was found quickly. Disable column-statistics during mysqldump (mysqldump -u root -p --all-databases --column-statistics=0 -r dump_file_name.sql).

Also, I switched to the Community Edition rather than the Ubuntu packages because of recommendations on the MySQL site about the Ubuntu package not being up to date.

Fortunately I’m dealing with small databases with few transactions, so mysqldump was my upgrade solution. Dump the databases from v5.x 32-bit, load them into v8.x 64-bit. But wait, not all the user accounts are there!!

select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES; showed only two grantees, 'mysql.sys'@'localhost' and 'mysql.session'@'localhost'. There should be about 20. The solution was simple: add upgrade = force to mysql.cfg and restart the server. After this, select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES; shows all the expected accounts AND the logins function and the correct databases are accessible to the accounts.
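
For reference, a sketch of that setting, assuming it belongs in the [mysqld] section of the server’s config; the exact file name and path depend on the install …

[mysqld]
# force the privilege / data dictionary upgrade to run at startup
upgrade = FORCE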

All the other applications upgraded successfully: DNS, ddclient, Apache2, etc. It was an interesting exercise to complete, and it moved the server onto newer, smaller hardware and updated the OS to 64-bit Ubuntu 20.04.

I’ll monitor for 22.04 packages for Certbot and MySQL and once I see them, update the OS again to get to 22.04. It’s always better to have more time before needing (being forced) to upgrade. 20.04 is already about halfway through its supported life. Better to be on 22.04 and have almost five years until the next required upgrade.

Doing all this in a virtual environment is a great time saver and trouble spotter. Gotchas and conflicts can be resolved so the actual activation, virtual or physical, goes about as smoothly as could be hoped with so many dependencies and layers of architecture. Really engrossing stuff if you’re so inclined.

DHCP on the server was new. The router doing DHCP only allowed my internal DNS server as a secondary, and that seemed to cause issues reaching local hosts: sometimes a name would resolve to the public rather than the private IP. Switching to DHCP on the server lets the internal DNS be specified as THE DNS authority on the network.
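
A sketch of the idea with isc-dhcp-server, which is an assumption since this post doesn’t name the DHCP server in use; the subnet, range, and router are placeholders, and the two DNS addresses are the ones from the resolvectl output earlier …

# /etc/dhcp/dhcpd.conf (sketch)
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.199;
    option routers 192.168.0.1;
    # hand out only the internal DNS servers so local names resolve internally
    option domain-name-servers 192.168.0.203, 192.168.0.205;
    option domain-name "home.lan";
}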

Watching syslog and seeing the messages, the utility of having addressable names for all hosts seemed obvious. A next virtual project: update DNS from DHCP.