Rehoming all my Domains, Oh My !

Domains and registrars, and services, and what?

Google is selling their domain registration business to Squarespace. If your webserver is at a dynamic Internet address, the address needs to be monitored so it can be updated on the name server when it changes. Squarespace name servers won’t accept dynamic updates.

Monitoring the network for changes to the router's public Internet address and updating Google Domains' name server was done with Google-provided DDNS instructions and settings. Squarespace, the company buying Google's domain name business, does not support DDNS. Once Squarespace is actually managing the domain it will keep the old Internet address information in its name server, but it provides no way to automate updates. So once the domain is on Squarespace and my Internet provider changes my modem's address, access to this website by name goes down unless I've set up another way to keep the website address updated.

I found two ways to avoid this: move to a registrar that supports DDNS, like Namecheap, or find a DNS provider that supports DDNS and doesn't require registering a domain with them, like FreeDNS (at afraid.org, yes, but don't be), and use their name servers as custom name servers with the domain registrar. That second approach requires two service providers for each domain, a registrar and a DNS service.

Registrars charge a fee for migrating a domain to them. It's not much, but if just changing a setting means there's no need to pay to move to a different registrar, why not do that?

"THAT", in this case, means leaving the domains with Google, updating the name servers on Google's domain registration record to the FreeDNS name servers, and then keeping the Internet address updated on the FreeDNS name servers.
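Whichever provider holds the records, something on the LAN still has to notice the public address change and push the update. A minimal sketch of that watcher as a cron-run shell script; the update URL, token, and cache path are placeholders for illustration, not FreeDNS's actual API:

```shell
#!/bin/sh
# Hypothetical DDNS updater sketch: cache the last-seen public IP and
# hit the provider's update URL only when the address changes.
CACHE=/var/tmp/last_public_ip
UPDATE_URL="https://freedns.example/update?token=PLACEHOLDER"   # placeholder

get_public_ip() {
    # Ask an external "what's my IP" service; use whichever you trust.
    curl -fsS https://ifconfig.me
}

update_needed() {
    # Succeeds when there is no cache yet or the cached address differs.
    [ ! -f "$CACHE" ] || [ "$(cat "$CACHE")" != "$1" ]
}

main() {
    ip=$(get_public_ip) || exit 1
    if update_needed "$ip"; then
        curl -fsS "$UPDATE_URL" && printf '%s\n' "$ip" > "$CACHE"
    fi
}
# Wire main into cron yourself, e.g. call it every five minutes.
```

The provider's real update endpoint and auth token go in UPDATE_URL; the structure (cache, compare, update only on change) is the part that matters.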

I've moved one domain to Namecheap to see how I like that, an $11 move. It gives me hands-on time with a third domain control panel: Google Domains, Squarespace, and now Namecheap.

For the others I've created records on FreeDNS, updated the name server records on Google Domains, and will start using the Squarespace control panel to manage them when they transfer from Google. Squarespace doesn't support DDNS, but if custom nameservers are supported the move from Google Domains should go off without a hitch.

Haven't moved boba.org yet. I want to interact with the other sites a bit before deciding between using FreeDNS and their name servers with Squarespace domain registration, or moving to a registrar that supports DDNS on its own name servers.

I do have to spend time out of the house to interact with the sites through the new DNS / name server setups. Sure, I could do it from the phone with the WiFi turned off, but LTE isn't very good here and I don't like a phone screen for web browsing. If LTE were good I could tether the computer to the phone and browse the sites on the PC as I'd like. Kind of lucky, the weak signal; more fun to go out. Maybe find a coffee shop in a mall, buy a cup, sit in one of the seats, and figure out how to choose the better option, then compare the details and make the choice.

Goodbye Google Domains!!

…hello Namecheap ddns or, hmm, domain hosting too?

This domain, boba.org, is on a server I control, behind a dynamic IP address. Google Domains provides the domain hosting and supports DDNS which made it easy to have Google nameservers be authoritative, keep the A record updated, and manage the physical server.

Now Google’s giving up the domain name business, along with all the convenient features they bundle like DDNS, redirects, privacy, etc.

It’s being transferred to Squarespace. And Squarespace doesn’t include DDNS or offer it as a bundle.

I still need a way to update the domain record with the new address when it changes, BUT that can't be done with Squarespace nameservers.

Checking whether a domain record can point at a nameserver but carry no A record with an IP. If so, the domain record points to a nameserver that can be updated, e.g. Namecheap's free DNS, and the domain continues to function when the IP changes even though the new domain host doesn't offer dynamic IP updating.

Will see what happens and update…

MySQL export to .csv

The elusive MySQL export to CSV *WITH* headers.

It took a while, reviewing a number of weblogs, support sites, and error-message causes, but export from MySQL to .csv with the column titles is finally working!

SELECT convert('Title' using utf8mb4) 'Title',
	convert('Year' using utf8mb4) 'Year',
	convert('Time' using utf8mb4) 'Time',
	convert('Rating' using utf8mb4) 'Rating',
	convert('Genre' using utf8mb4) 'Genre'  
UNION ALL
SELECT * from 
	(SELECT * from movies_i_want order by Title) a
INTO OUTFILE '/var/lib/mysql-files/movies_i_want.csv' 
FIELDS TERMINATED BY ',' 
ENCLOSED BY '"';

and the file it produces…

"Title","Year","Time","Rating","Genre"
"Amazon Women on the Moon","1987","85","R","Comedy,Sci-Fi"
"Blade of the Immortal","2017","140","R","Action,Drama"
"Bound","1996","108","NR","Crime,Romance,Thriller"
 ...

The first SELECT above produces a single-row table with the columns' titles as the field values in the row. The table is not named.

mysql> SELECT 'Title', 'Year', 'Time', 'Rating', 'Genre';
+-------+------+------+--------+-------+
| Title | Year | Time | Rating | Genre |
+-------+------+------+--------+-------+
| Title | Year | Time | Rating | Genre |
+-------+------+------+--------+-------+

The full SELECT expression uses convert('Title' using utf8mb4) 'Title'… for each field because the UNION failed with incompatible collations between the first SELECT's row and the "SELECT ... movies_i_want" statement.

A collation is associated with exactly one character set; a character set can be associated with many collations. For collations to match, both character sets must match. The purpose of the first SELECT is to emit the column titles for the .csv export; it produces only one row, the column titles. Converting that SELECT's character set to match the character set of the live data table let the collation comparison the UNION was doing work.

Once the collations were compatible, it was necessary to get the column titles into the output as its first row. The view used for the second SELECT has the same column titles in the same order as the first SELECT; its data needs to be the second and subsequent rows.

After many attempts, several ways to state the query were found. Unfortunately none kept the column titles as the first row of the output: the ORDER BY applies to the whole UNION, so the title row gets sorted in with the data. The column titles are a must-have for free wpDataTables to show headings on the tables it displays from reading the .csv files.

Ignoring the collation conversion to simplify the statements, the following both run, but neither puts the column titles first in the output:

SELECT 'Title', 'Year', 'Time', 'Rating', 'Genre'  
UNION ALL
SELECT * 
FROM movies_i_want order by Title;

SELECT 'Title', 'Year', 'Time', 'Rating', 'Genre'  
UNION ALL
SELECT `Title`, `Year`, `Time`, `Rating`, `Genre` 
FROM movies_i_want order by Title;

To include the column titles as the first row of the output, three SELECTs are needed: the ORDER BY moves into a derived table so it no longer reorders the UNION. And be sure to include an alias for the 'derived table', the one created by the parenthesized SELECT.

SELECT 'Title', 'Year', 'Time', 'Rating', 'Genre'  
UNION ALL
SELECT * FROM (SELECT * FROM movies_i_want order by Title) a;

Now the output can be created in the necessary format using native MySQL. The method is a bit dense but, I think, simpler than the PHP approach in Use PHP to export MySQL to .csv.

With this and the PHP method of exporting the data, a trigger is needed to make it happen; alternatively, either method can run on demand or on a schedule.
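For the schedule option, a plain cron entry is enough. A sketch, assuming the PHP export script is saved at a hypothetical path like /usr/local/bin/export_movies.php:

```
# crontab fragment: run the exporter every 15 minutes
# (script path is hypothetical)
*/15 * * * *  /usr/bin/php /usr/local/bin/export_movies.php
```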

A trigger, in other words "only when needed", is my preferred approach. MySQL triggers can fire on INSERT, UPDATE, and DELETE.


Solutions:

A MySQL trigger runs a SQL user procedure to export the data.
Problem: export destinations are limited to a single location with very restrictive access, and it is not the location where either .csv file for the web tables lives.

A MySQL trigger runs an external program, the PHP, to export the data.
Problem: can a trigger launch an external program?

Use PHP to export MySQL to .csv

The wpDataTables update when the source .csv files change. Next, update the source .csv files when the movies database is updated.

To update the Movies tables the underlying .csv source files must be updated and the wpDataTables tables’ links with those files must be refreshed. Without the link refresh the table doesn’t update even if the webpage is refreshed.

The .csv source files for the “Movies I Have” and “Movies I Want” tables are monitored for changes. The wpDataTables links are refreshed when the files have changed. The table display will update the next time the page is refreshed or landed on.

Another step in the chain is a database update (INSERT, UPDATE, or DELETE) triggering the .csv file update. To make that happen, I need to be able to do several things:

  • Figure out how to create and trigger an event
    • Trigger the event after INSERT, UPDATE, or DELETE on the movies database
  • Write trigger(s) to a) export the new data and hopefully b) update the .csv files
  • Figure out HOW to update the .csv files
    • PL/SQL export, then use non SQL program to update files with the export
    • PHP program that extracts the data and overwrites the .csv files
    • Program (PHP, PL/SQL) extracts the data to monitored files. Another program updates the .csv files with the data when the monitored files change.
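The "another program updates the .csv files" piece of that last option can be as small as a file compare. A sketch in shell, with both paths as placeholders:

```shell
#!/bin/sh
# Copy a freshly exported file over the live .csv only when it changed.
# Run from cron; the paths in the example wiring are placeholders.

csv_changed() {
    # Succeeds when the two files differ or the destination doesn't exist.
    ! cmp -s "$1" "$2"
}

sync_if_changed() {
    # $1 = exported file, $2 = live .csv read by wpDataTables
    if csv_changed "$1" "$2"; then
        cp "$1" "$2"
    fi
}

# Example wiring (placeholder paths):
# sync_if_changed /var/tmp/movies_i_want.csv /var/www/html/uploads/movies_i_want.csv
```

Because the copy only happens on a real change, the monitored file's timestamp only moves when there is new data, which is what the wpDataTables link refresh keys off.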

I've put together the PHP to extract the data. I also need to see what can be done with SELECT * INTO OUTFILE. PHP may be able to do the whole process, extract records and update the .csv files. MySQL may be able to do it with INTO OUTFILE, but I think that's very limited in the destinations it can write to.

<?php
 $server = "myserver.boba.org"; 
 $username = "fan";
 $password = "xx";                      /* placeholder */
 $database_name = "the_movies"; 
 $link_myMovies = mysqli_connect($server, $username, $password, $database_name)
 	or die("Unable to connect to $server");
 $Views = array("movies_i_want","movies_i_have");
 $out_path = "/var/tmp/";
 
 foreach ($Views as $view)
 {
	$query_result = null;
	$columns_total = null;
 	$str_query = "SELECT * FROM $view";
 	$query_result = mysqli_query($link_myMovies, $str_query);
 	$columns_total = mysqli_num_fields($query_result);
 
 	$col_names = array();
 
 	for ($i = 0; $i < $columns_total; $i++) 
	 {
	 	$Heading = mysqli_fetch_field_direct($query_result, $i);
	 	array_push($col_names,$Heading->name);
	 }
 
	 $fileOut = fopen("$out_path$view.csv", 'w') or die("Unable to open $out_path$view.csv"); 
	 fputcsv($fileOut, $col_names);

	 while ($row = mysqli_fetch_array($query_result, MYSQLI_NUM)) 
	 {
	 	fputcsv($fileOut, array_values($row)); 
	 }
	 
	 fclose($fileOut) or die("Unable to close $out_path$view.csv");
}
?>

bind9 and DHCP

Some emphasis on rndc freeze could save headaches.

I want full services on my home LAN, such that devices that get DHCP addresses can be reached by their host names. In other words, dynamic DNS on the LAN. In a Windows domain it isn't something I've had to think about; it's inherent in setting up the DNS and DHCP servers in the same domain. Or maybe doing that just masks NetBIOS name sharing. In any case, you can do DNS for DHCP hosts and address them by name very easily.

I want the same for the home network but am using Ubuntu Server. DNS is BIND9 and DHCP is ISC DHCP. Both work. DNS for the fixed-IP devices (home servers, router, printer) works fine; I can ping by hostname or FQDN. The DHCP devices, not so much. They get an IP just fine and can all be seen with dhcp-lease-list. They just can't be pinged by hostname or FQDN.

At least the home DNS has primary and secondary servers. And for DHCP clients, IP for <name> is available via dhcp-lease-list. But ping <name> fails with error … .

All the above was written before an eventual solution was found. The error was one part me (syntax) and one part bind9.

Ping by hostname requires the host's A record to appear in the domain's zone file. But the majority of hosts get dynamic IP addresses, so there's no fixed hostname-to-IP list for LOTS of hosts.

The server providing IP addresses is isc-dhcp-server.service and the server providing DNS is bind9.service. The method: isc-dhcp-server.service updates bind9.service when an IP address is leased.
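The usual wiring for that, sketched and hedged: a shared TSIG key lets dhcpd send dynamic updates that named will accept. The zone name, file paths, and the secret below are placeholders, and older guides use HMAC-MD5 where current releases support hmac-sha256:

```
/* named.conf.local (placeholder names and secret) */
key "ddns-key" {
    algorithm hmac-sha256;
    secret "BASE64-SECRET-HERE";
};

zone "example.lan" {
    type master;
    file "/var/lib/bind/db.example.lan";
    allow-update { key ddns-key; };
};

# dhcpd.conf (placeholder names and secret)
ddns-update-style standard;
key ddns-key {
    algorithm hmac-sha256;
    secret "BASE64-SECRET-HERE";
}
zone example.lan. {
    primary 127.0.0.1;
    key ddns-key;
}
```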

Of course. But it worked initially then didn’t. What happened?

CARDINAL RULE of BIND9: never update zone files while the bind server is running or while bind is actively maintaining the zone files. And twice as emphatically, once zone file replication to secondary server(s) has been established and .jnl files have been created, never update zone files unless the bind server has been frozen with rndc freeze or stopped with systemctl stop!!!!

The safe maintenance sequence:

  • Use rndc freeze to freeze the zone files while leaving the name server running and responding to queries.
  • Update the zone file's serial number.
  • Delete any dynamic entries in the file. (When troubleshooting, not for routine maintenance.)
  • Delete any .jnl files. (Again, troubleshooting only, not routine.)
  • Unfreeze the zone files with rndc thaw.

Excepting the "troubleshooting" steps, if the steps above are not followed then the zone files will not update properly going forward. And no amount of freeze, maintain, unfreeze will fix the failures to update.
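For the "update the serial number" step, here's a small helper I'd sketch in shell. It assumes date-based YYYYMMDDnn serials and a zone file whose serial line ends in "; serial"; the path is whatever your zone statement points at. Only run it while the zone is frozen:

```shell
#!/bin/sh
# Bump the SOA serial in a zone file. Assumes YYYYMMDDnn serials and a
# "<serial> ; serial" line, as in the stock zone file layout.
bump_serial() {
    zone_file=$1
    today=$(date +%Y%m%d)
    awk -v today="$today" '
        /; *serial/ {
            old = $1
            # restart at <today>00, or add 1 if already bumped today
            new = (substr(old, 1, 8) == today) ? old + 1 : (today "00")
            sub(old, new)
        }
        { print }
    ' "$zone_file" > "$zone_file.new" && mv "$zone_file.new" "$zone_file"
}
```

Usage would be bump_serial /var/lib/bind/db.example.lan between rndc freeze and rndc thaw.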

Plus, named-checkconf and named-checkzone didn't detect any errors after bind and dhcp were no longer updating the zone and .jnl files. Nor did named-compilezone.

And I was confounding that with a failure of reverse zone lookups. I couldn't get a host name for any dynamic IP address. "But it works in the virtual setup", and it did, reverse lookups and all.

Eventually I found in-addr.arpa spelled differently in the primary and secondary zone files. With that fixed, zone updates of dynamic IPs were still not happening.

The final fix? The procedure above, including the "for troubleshooting" steps. With bind9 frozen by rndc on both servers, I cleared the zones of dynamic A records and removed the managed-keys .jnl file and the zone .jnl files on both the primary and secondary. Then restart both. Then it all works.

Lesson learned, ALWAYS rndc freeze before doing any bind9 maintenance.

UI Design That Makes a Difference

Some things really are just easier

I've been a long-time open source user. Some advocates disparage closed source, particularly Microsoft. That's not productive, IMO. Compare feature and function. If it has the features and functions you need, then how well are they implemented and how easy are they to use? Those are the crucial considerations.

Microsoft's Office suite has many features that most will never use, much less understand how to use (myself included). There are other features whose superior usability is obvious the moment they're experienced.

One example I just experienced was filtering a list of values formatted as currency. Some values, produced by a formula, had more than two decimals. All needed to display as currency, so all needed to display two decimal places. As a result, different values might display the same; e.g. 0.445 through 0.454 all display as $0.45.

Excel and Calc both displayed the values as currency with two decimals just fine. However, Excel's filter list shows the values in currency format, while Calc's filter list shows the values without the applied formatting. Not nearly as useful.

In Excel, different values that format to the same display value show up only once in the filter list, and the list matches the applied display format. The filter filters rows on the display value. So anything from 0.445 through 0.454 displays as $0.45, the filter list shows $0.45 only once no matter how many underlying values it represents, and selecting $0.45 from the filter list filters in every row with a value from 0.445 through 0.454.

In Calc, not so nice. Values from 0.445 through 0.454 display as $0.45 with the applied currency format, but the filter list shows each value unformatted. As a result, to filter everything that displays as $0.45, every underlying value from 0.445 through 0.454 must be selected individually.

There are other issues too. Check out the image for more.

Retiring some hardware

…when a computer’s been around too long

Time to retire some old tech. That display is a whopping 15″ diagonal, and resolution was limited. It's only been used as a terminal for a server these last six years or so. And this is it under my arm on the way to the dumpster.

Right after the monitor, the old server was carried out to rubbish.

BEFORE delivering it to the rubbish I made sure to wipe the HD with DBAN, Darik's Boot and Nuke. I've relied on it for years.

The computer's manufacturing date stamp was 082208. I didn't think to take a photo. It was a Dell OptiPlex 330 SFF: Pentium Dual Core E2160 1.8GHz, 4GiB RAM, 90 GB HD. They looked like this.

I got it in 2015. It had been replaced during a customer hardware upgrade then sat on the shelf unpowered for about a year before I joined that office. On hardware clean-out day it was in a pile to take home or put in the dumpster.

It became my boba.org server sometime in 2015 and served that function until December 2022.

Six years of service, then a year on the shelf, then seven more years hosting boba.org. Fourteen years is a LONG life for a computer!

The replacement "server" is an old laptop. Old, but new enough that it doesn't have an Ethernet port. I got a USB Ethernet adapter, a Realtek RTL8153 Gigabit Ethernet Adapter, and plugged a cable in. Better performance than WiFi.

The hardware is several steps above the old server too: Intel Core i3-5015U CPU @ 2.10GHz, 6GiB RAM, 320 GB HD (I should replace it with an SSD). The date of manufacture isn't as clear; maybe late 2015 or early 2016.

The CPU Benchmark comparison of the two processors, Intel Core i3-5015U vs Intel Pentium E2160, shows clear differences in processing power.

Now that the new server is up (well, it has been for a few months, but I didn't want to add new services until I got secondary DNS running), it's time to add features and services to the network.

bind9 primary and secondary DNS on home LAN

I now have two DNS servers for my home network. Once I took DNS and DHCP off the router and moved them onto the server, it was easy to connect to services on the home network by DNS name. But if that one DNS server was down, no devices could get to the Internet. Not good.

Time to set up a second DNS server. That need prompted my first Raspberry Pi purchase. The default app for DNS and DHCP on Raspberry Pi is dnsmasq. I tried to make it secondary to the existing primary BIND9 server, couldn't work that out, so I purged dnsmasq from the Raspberry Pi and installed BIND9.

Once I got the config statements worked out, it's been fun disabling one server or the other and watching the resolvectl status command show the flip back and forth between active DNS servers, with my web pages found regardless of which server is running.
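For reference, the zone statements boil down to something like this (a hedged sketch: the zone name and file paths are placeholders, the addresses are the two servers shown in the transcripts below, and newer BIND accepts type primary/secondary where older releases use master/slave):

```
// On the primary (192.168.0.205 here; swap to match your setup)
zone "example.lan" {
    type master;                    // "primary" on newer BIND
    file "/var/lib/bind/db.example.lan";
    allow-transfer { 192.168.0.203; };
    also-notify { 192.168.0.203; };
};

// On the secondary (192.168.0.203 here)
zone "example.lan" {
    type slave;                     // "secondary" on newer BIND
    masters { 192.168.0.205; };     // "primaries" on newer BIND
    file "/var/cache/bind/db.example.lan";
};
```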

The host with both DNS servers running:

localhost:~$ resolvectl status interface
Link 3 (interface)
      Current Scopes: DNS          
DefaultRoute setting: yes          
  Current DNS Server: 192.168.0.205
         DNS Servers: 192.168.0.203
                      192.168.0.205

…shutdown the .205 bind9 server

server205:~$ sudo systemctl stop bind9.service
server205:~$ sudo systemctl status bind9.service
* named.service - BIND Domain Name Server
     Loaded: loaded (/lib/systemd/system/named.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2023-01-23 06:51:42 EST; 35s ago

…and now the host’s current DNS server changes once the .205 bind9.service is shutdown.

localhost:~$ resolvectl status interface
Link 3 (interface)
      Current Scopes: DNS          
DefaultRoute setting: yes          
  Current DNS Server: 192.168.0.203
         DNS Servers: 192.168.0.203
                      192.168.0.205

Didn’t get the interview. Was it a real opportunity?

Am I really a target? Are the scammers just getting really good? Or am I just too suspicious?

Always nice to be solicited for a role you’d like. That hasn’t happened to me often but recently I got a call about a position. That was followed up with an email. Then another call. Then a few emails and calls with the first caller’s senior recruiter. All in the space of three hours or so. Everything was rolling along and… crickets after my last email. Was it me?

I don’t think so, but you be the judge.

The position I was solicited for was IT Manager. One requirement that I didn’t have was SCRUM Master certification. But, so long as the certificate was earned within six months of start date that would be acceptable. It’s not common, but I have seen positions that require a certification and will accept it being earned within a certain period after starting.

I tell the people I’m speaking with on the phone I need a little time to investigate the certification and see if it seems like something I can achieve in six months. They say fine, they’ll call back in an hour or so and see if I’m still interested.

At this point I’ve gotten the consulting company’s name, the organization they’re recruiting for, and time frame when the position is to be filled.

After a bit of web searching I find a number of training organizations offering online SCRUM Master Certification training at a range of prices. It’s affordable from my point of view so I’m thinking… commit. I really am looking for a new opportunity.

I also check out the recruiter's domain to find out how long it's been around. Surprise, surprise, it's only a few months old. Red Flag #1. Then I check the website of the customer they're recruiting for to search for the position. The position isn't listed. Red Flag #2.

Finally they call back and we talk about the position. I tell them I’ve found a trainer that looks like they have a good online training program set up and the course is affordable. As soon as I tell them the trainer I’m told, “no, that’s not such a good trainer.” Red Flag #3. I’m given another training company’s name and told I should register right away so we can provide proof to the employer I’m taking steps to have the certification by the deadline. “Right away”, Red Flag #4.

I tell them to give me a few moments to check out that trainer's website. Wouldn't you know, the trainer's website is even newer than the recruiter's. Red Flag #5.

At this point I really don’t believe this is legit and ask for a contact at the company they’re recruiting for to confirm with them the position is open and the certification requirement.

End of conversation. Sigh. It was nice to be recruited for a position I'm well qualified for, and that points to the sophistication of the scam: it was tailored to my skills. Disappointing that it was only a scam to get my money for a certification training course that likely wouldn't have provided any training.

This all happened several months ago. At this point the "recruiter's" website is still up. It doesn't look much changed; all boilerplate stuff. The training company's website isn't accessible. Clearly: be suspicious, ask questions, and investigate the answers to keep yourself from being taken.

Powershell – install a program with no .MSI

Don’t let the quoting drive you mad!

In an earlier post, Powershell – love it / hate it, I described needing to check the install status of a program that didn’t have an .MSI installer. That post provided details of parsing the install file names to know which pcs got the target install. This post provides details on what I did to make the install happen and create the files that logged the process.

With no software deployment tool and only an .exe installer, you can still keep track of deployment with powershell.

In this case the program needed to be targeted at specific computers, not particular users. Easy enough to create a list of target pcs. Without an .MSI file, GPO install isn't available unless the GPO runs a startup script to do the install. But it can't be a powershell script if that's disabled in the environment, so .bat files it is. I still want to know which pcs get the install and which don't, so that has to be logged somewhere.

How to make it all happen? This is how…

An install .bat file that makes use of powershell Invoke-Command -ScriptBlock {}, which will run even if powershell.exe is disabled. The quoting to run the commands within -ScriptBlock {} gets really convoluted. I avoided that by calling .bat files from the -ScriptBlock {}, which keeps the quoting in the called .bat files simpler.

The prog_install.bat file checks whether the runtime dependency is installed and calls the .bat file to install it if it isn't. Then it checks whether the target program is installed and installs it if it isn't found. For each step, the result is appended to a log file named for the hostname.

REM prog_install.bat
GOTO EoREM

REM prog name install
REM
REM This routine checks that both Windows Desktop Runtime (a dependency) 
REM and prog name are installed and writes the status to a file to have  
REM install results history.
REM 
REM The install results file must be in a share writeable by the process
REM running this install routine which is after boot and before logon.
REM 
REM A file is created or appended to based on the hostname the process
REM runs on. 
REM

:EoREM
 
@echo off

REM Check if required Microsoft Windows Desktop Runtime is installed. 
REM Install if not found. 
REM Write result to results file.
Powershell Invoke-Command -ScriptBlock { if ^( Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* ^| Where-Object { $_.DisplayName -like """Microsoft Windows Desktop Runtime - 3.*""" } ^) { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME Microsoft Windows Desktop Runtime is installed.""" } else { Start-Process -Wait -NoNewWindow \\server.local\SysVol\server.local\scripts\prog\inst_run.bat; Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME Microsoft Windows Desktop Runtime NOT installed. Installing""" } }

REM Check if prog name is installed. 
REM Install if not found.
REM Write result to results file.
REM NOTE: Add-Content before Start-Process (reverse order compared to runtime install above)
REM       Above Add-Content after Start-Process so "installing" not written until after actual install.
REM       For prog name install, if Add-Content after Start-Process then Add-Content fails to write to file.
REM
Powershell Invoke-Command -ScriptBlock { if ^( Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* ^| Where-Object { $_.DisplayName -like """prog name""" } ^) { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME ver $($(Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* | Where-Object { $_.DisplayName -like """prog name""" }).DisplayVersion) prog name is installed.""" } else { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME prog name NOT installed. Installing"""; Start-Process -Wait -NoNewWindow \\server.local\SysVol\server.local\scripts\prog\inst_prog.bat } }

The batch files that do the actual installs refer to the SysVol folder for the programs to run. SysVol is used because a share is needed that's accessible early in the boot.

REM inst_run.bat
REM To work prog requires the following Windows runtime package be installed

start /wait \\server.local\SysVol\server.local\scripts\prog\dotnet-sdk-3.1.415-win-x64.exe /quiet /norestart

REM inst_prog.bat
REM Install the prog name package.

start /wait \\server.local\SysVol\server.local\scripts\prog\prog_installer_0.8.5.1.exe /SILENT /NOICONS /Key="secret_key"


So there you have it. To install a program with its .exe installer via GPO, in an environment with no .MSI packager, no deployment tool, and powershell.exe disabled by GPO: use powershell Invoke-Command -ScriptBlock {} in a .bat file to do the install and log results. And call .bat files to simplify quoting where needed.