Didn’t get the interview. Was it a real opportunity?

Am I really a target? Are the scammers just getting really good? Or am I just too suspicious?

Always nice to be solicited for a role you’d like. That hasn’t happened to me often but recently I got a call about a position. That was followed up with an email. Then another call. Then a few emails and calls with the first caller’s senior recruiter. All in the space of three hours or so. Everything was rolling along and… crickets after my last email. Was it me?

I don’t think so, but you be the judge.

The position I was solicited for was IT Manager. One requirement I didn’t have was Scrum Master certification. But as long as the certification was earned within six months of the start date, that would be acceptable. It’s not common, but I have seen positions that require a certification and will accept it being earned within a certain period after starting.

I tell the people I’m speaking with on the phone I need a little time to investigate the certification and see if it seems like something I can achieve in six months. They say fine, they’ll call back in an hour or so and see if I’m still interested.

At this point I’ve gotten the consulting company’s name, the organization they’re recruiting for, and the time frame in which the position is to be filled.

After a bit of web searching I find a number of training organizations offering online Scrum Master certification training at a range of prices. It’s affordable from my point of view, so I’m thinking… commit. I really am looking for a new opportunity.

I also check out the recruiter’s domain to find out how long it’s been around. Surprise, surprise, it’s only a few months old. Red Flag #1. Then I check the website of the customer they’re recruiting for to search for the position. The position isn’t listed. Red Flag #2.
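If you want to do the same check, the domain’s WHOIS record shows when it was registered. Something like the following works; the domain here is a placeholder, not the recruiter’s actual domain.

$ whois example-recruiter.com | grep -i 'creation date'

The “Creation Date” field is what tells you how long the domain has been around.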

Finally they call back and we talk about the position. I tell them I’ve found a trainer that looks like they have a good online training program set up and the course is affordable. As soon as I tell them the trainer I’m told, “no, that’s not such a good trainer.” Red Flag #3. I’m given another training company’s name and told I should register right away so we can provide proof to the employer I’m taking steps to have the certification by the deadline. “Right away”, Red Flag #4.

I tell them to give me a few moments to check out that trainer’s website. Wouldn’t you know, the trainer’s website is even newer than the recruiter’s. Red Flag #5.

At this point I really don’t believe this is legit and ask for a contact at the company they’re recruiting for to confirm with them the position is open and the certification requirement.

End of conversation. Sigh. It was nice to be recruited for a position I am well qualified for, and that points to the sophistication of the scam. It was tailored to my skills. Disappointing that it was only a scheme to get my money for a certification training course that likely wouldn’t have provided any training.

This all happened several months ago. At this point the “recruiter’s” website is still up. Doesn’t look changed much. All boilerplate stuff. The training company’s website isn’t accessible. Clearly, be suspicious, ask questions, and investigate the answers to keep yourself from being taken.

PowerShell – install a program with no .MSI

Don’t let the quoting drive you mad!

In an earlier post, PowerShell – love it / hate it, I described needing to check the install status of a program that didn’t have an .MSI installer. That post covered parsing the install log file names to know which pcs got the target install. This post covers what I did to make the install happen and create the files that log the process.

With no software deployment tool and only an .exe for install you can still keep track of deployment with powershell.

In this case the program needed to be targeted at specific computers, not particular users. Easy enough to create a list of target pcs. Without an .MSI file a GPO install isn’t available unless… the GPO runs a startup script to do the install. But it can’t be a PowerShell script if that’s disabled in the environment, so .bat files it is. I still want to know which pcs get the install and which don’t, so I have to log that somewhere.

How to make it all happen? This is how…

An install .bat file makes use of powershell Invoke-Command -ScriptBlock {}, which will run even if PowerShell is disabled. The quoting to run the commands within -ScriptBlock {} gets really convoluted. I avoided that by calling .bat files from the -ScriptBlock {} so the called .bat files can use simpler quoting.

The prog_install.bat file checks if the runtime dependency is installed and calls the .bat file to install it if it isn’t. Then it checks if the target program is installed and installs it if it isn’t found. For each of the steps the result is appended to a log file based on the hostname.

REM prog_install.bat
GOTO EoREM

REM prog name install
REM
REM This routine checks that both Windows Desktop Runtime (a dependency) 
REM and prog name are installed and writes the status to a file to have  
REM install results history.
REM 
REM The install results file must be in a share writeable by the process
REM running this install routine which is after boot and before logon.
REM 
REM A file is created or appended to based on the hostname the process
REM runs on. 
REM

:EoREM
 
@echo off

REM Check if required Microsoft Windows Desktop Runtime is installed. 
REM Install if not found. 
REM Write result to results file.
Powershell Invoke-Command -ScriptBlock { if ^( Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* ^| Where-Object { $_.DisplayName -like """Microsoft Windows Desktop Runtime - 3.*""" } ^) { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME Microsoft Windows Desktop Runtime is installed.""" } else { Start-Process -Wait -NoNewWindow \\server.local\SysVol\server.local\scripts\prog\inst_run.bat; Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME Microsoft Windows Desktop Runtime NOT installed. Installing""" } }

REM Check if prog name is installed. 
REM Install if not found.
REM Write result to results file.
REM NOTE: Add-Content before Start-Process (reverse order compared to runtime install above)
REM       Above Add-Content after Start-Process so "installing" not written until after actual install.
REM       For prog name install, if Add-Content after Start-Process then Add-Content fails to write to file.
REM
Powershell Invoke-Command -ScriptBlock { if ^( Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* ^| Where-Object { $_.DisplayName -like """prog name""" } ^) { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME ver $($(Get-ItemProperty HKLM:\\Software\\Microsoft\\Windows\\CurrentVersion\\Uninstall\\* | Where-Object { $_.DisplayName -like """prog name""" }).DisplayVersion) prog name is installed.""" } else { Add-Content -Path \\server\prog\prog_$Env:COMPUTERNAME.txt -Value """$(Get-Date) $Env:COMPUTERNAME prog name NOT installed. Installing"""; Start-Process -Wait -NoNewWindow \\server.local\SysVol\server.local\scripts\prog\inst_prog.bat } }

The batch files that do the actual installs refer to the SysVol folder for the installers to run. I’m using the SysVol folder because I need a share that’s accessible early in the boot process.

REM inst_run.bat
REM To work prog requires the following Windows runtime package be installed

start /wait \\server.local\SysVol\server.local\scripts\prog\dotnet-sdk-3.1.415-win-x64.exe /quiet /norestart

REM inst_prog.bat
REM Install the prog name package.

start /wait \\server.local\SysVol\server.local\scripts\prog\prog_installer_0.8.5.1.exe /SILENT /NOICONS /Key="secret_key"


So there you have it. To install a program with its .exe installer via GPO in an environment with no .MSI packager, no deployment tool, and powershell.exe disabled by GPO, use powershell Invoke-Command -ScriptBlock {} in a .bat file to do the install and log results. And call .bat files from the -ScriptBlock {} to simplify quoting where needed.

Perils of a part time web server admin

Not being “in it” all the time can make simple things hard.

Recently one of the domain names I’ve held for a while expired. Or actually, I let it expire. It was hosted on this same web server along with several other websites and had a secure connection using a Let’s Encrypt SSL certificate. All good.

The domain name expired, I disabled the website, and all the other websites on the server continued to be available. Until they weren’t! When I first noticed I just tried restarting the web server. No joy, that didn’t get the other sites back up.

And here are the perils of part-time admin. Where to start with the troubleshooting? For all my sites and the hosting server I really don’t do much except keep the patches current and occasionally post content using the WordPress CMS. Not much troubleshooting, monitoring logs, etc., because there isn’t much going on. And, though some might say otherwise, I don’t spend all my time at the computer dissecting how it operates.

I put off troubleshooting for a while. This web server’s experimental, not production, so sometimes I cut some slack and don’t dive right in when things aren’t working. Had other things pending that required more attention.

When I did start I was very much at a loss about where to begin because, as noted, I had disabled a website and everything continued to work for a while. When it stopped working I hadn’t made any additional changes.

Logs are always a good place to look, yes? This web server is set up to create separate logs for most of the sites it’s hosting. Two types of logs are created, access logs and error logs. Access logs showed what was expected, no more access to that site after I disabled it.

Error logs confused me though. The websites use Let’s Encrypt SSL certificates. And they use Certbot to set up HTTPS on the Apache HTTP server. A very common setup. The confusing thing about the error log was it showed the SSL configuration for the expired website failing to load. Why was the site trying to load at all??? I had disabled the site using the a2dissite program provided by the server distribution. The thing I hadn’t thought about is that the Certbot script for Apache sets up SSL by modifying the <site_name>.conf file AND creating a <site_name>-le-ssl.conf file.

So even though the site had been disabled with a2dissite <site_name>.conf, I hadn’t thought to run a2dissite <site_name>-le-ssl.conf. Once I recognized that and ran the second a2dissite command the web server started right up again. No more failing to load SSL for the expired site. And, surprisingly, failing to load the SSL for the one site prevented the server from starting at all rather than disabling that one site and loading the others that didn’t have configuration issues.
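For reference, the fix boiled down to the commands below, with a placeholder site name. The configtest step is something I’d add now; it would have pointed straight at the broken SSL config before a restart.

sudo a2dissite example.com.conf
sudo a2dissite example.com-le-ssl.conf
sudo apachectl configtest
sudo systemctl restart apache2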

Something for another time… I expect there must be a way for the server to start and serve correctly configured sites while not loading incorrectly configured sites, and not allowing the presence of one incorrectly configured site to prevent all sites from loading. It just does not seem likely that such a widely used web server would fail to serve correctly configured sites when only one of several hosted sites is misconfigured.

The peril of part-time admin, or jack of all trades and master of none, is that these sorts of gotchas pop up all the time because of limited exposure to the full breadth of dependencies for a program to perform in a particular way. It isn’t a bad thing, just something to be aware of: rather than blaming the software for not doing something, remember that there are often additional settings to make to achieve the desired effect.

Be patient. Expect to need to continue learning. And always, always, RTFM and any other supporting documents.

Server upgrade

…and I’m publishing again.

Well, this was a big publishing gap. Four months. Hope not to have such a long one again. Anyway, there are a number of drafts in the wings but I decided to publish about this most recent change because it is what I wanted to get done before publishing again.

The server is now at Ubuntu 20.04, 64-bit of course. It started out at 16.04 32-bit and got upgraded to 18.04 i686. Then I attempted the 20.04 upgrade and couldn’t, because I had forgotten the install was legacy 32-bit and 20.04 is only available in 64-bit. So it was on to other things while I planned a different upgrade approach. When I got back to it I thought I should upgrade to 22.04 since that had been released. As I went through the upgrade requirements I discovered that several needed applications didn’t have 22.04 packages yet, particularly Certbot and MySQL. So back to 20.04 to complete the upgrade.

The MySQL upgrade wasn’t too bad. There was a failure, but it was a common one and a usable fix for the column-statistics issue was found quickly: disable column statistics during mysqldump (mysqldump -u root -p --all-databases --column-statistics=0 -r dump_file_name.sql).

Also, I switched to the Community Edition rather than the Ubuntu packages because of recommendations on the MySQL site that the Ubuntu packages aren’t kept as up to date.

Fortunately I’m dealing with small databases with few transactions so mysqldump was my upgrade solution. Dump the databases from v 5.x 32-bit. Load them into v 8.x 64-bit. But wait, not all the user accounts are there!!
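The whole migration path, with the dump file name as a placeholder, was just a dump on the old 5.x server and a load on the new 8.x server:

mysqldump -u root -p --all-databases --column-statistics=0 -r dump_file_name.sql
mysql -u root -p < dump_file_name.sql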

After the load, select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES; showed only two grantees, 'mysql.sys'@'localhost' and 'mysql.session'@'localhost'. There should be about 20. The solution was simple: add upgrade = force to mysql.cfg and restart the server. After that, select * from INFORMATION_SCHEMA.SCHEMA_PRIVILEGES; shows all the expected accounts, the logins work, and the correct databases are accessible to those accounts.
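The setting goes in the server config under the [mysqld] section; on stock Ubuntu installs that file is usually /etc/mysql/my.cnf or mysqld.cnf, so adjust for your setup.

[mysqld]
upgrade = FORCE

sudo systemctl restart mysql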

All the other applications upgraded successfully: DNS, ddclient, Apache2, etc. It was an interesting exercise to complete, and it moved the server onto newer, smaller hardware and updated the OS to 64-bit Ubuntu 20.04.

I’ll monitor for 22.04 packages for Certbot and MySQL and once I see them, update the OS again to get it to 22.04. Always better to have more time before needing (being forced) to upgrade. 20.04 is already about halfway through its supported life. Better to be on 22.04 and have almost five years until needing to do the next upgrade.

Doing all this in a virtual environment is a great time saver and trouble spotter. Gotchas and conflicts can be resolved so the actual activation, virtual or physical, goes about as smoothly as could be hoped with so many dependencies and layers of architecture. Really engrossing stuff if you’re so inclined.

DHCP on the server was new. The router that had been doing DHCP only allowed my internal DNS as secondary. That seemed to cause issues reaching local hosts; sometimes a name would resolve to the public rather than the private IP. Switching DHCP to the server lets the internal DNS be specified as THE DNS authority on the network.

Watching syslog to see the messages, the utility of having addressable names for all hosts seemed obvious. A next virtual project: updating DNS from DHCP.

PowerShell – love it / hate it

Sometimes it’s hard for me to wrap my head around things.

PowerShell makes so many things easier than they were before it existed. At least for me. I’m not a programmer, but with simple commands piped one to another, like bash in Linux, I can get a lot done.

One of the “things” I need to get done is checking how many computers got a program installed. Because of the environment I’m in and the program itself, there’s no GPO-based install of an MSI and there’s no third-party deployment tool. This stumped me for a while until I came up with the idea of using a startup script for the install.

Another challenge: PowerShell scripting is disabled. However, I learned from “PowerShell Security” by Michael Pietroforte and Wolfgang Sommergut that PowerShell can be called within a .bat using Powershell Invoke-Command -ScriptBlock {} even if PowerShell is disabled by policy. So I wrote a startup script that relied on .bat files with Powershell Invoke-Command -ScriptBlock {} in them to run the program install. The -ScriptBlock {} first checked whether the dependencies were installed and installed them if not, then checked whether the desired program was installed and installed it if not. It also wrote a log file for each pc named “progname_<hostname>.txt” and appended to the file with each restart.

The startup script wasn’t running reliably every time a pc booted; it seemed to be NIC or network initialization related. In any case, the pcs to be installed were listed in an AD group, and the pcs that had run the startup script wrote that info to a file named “progname_<hostname>.txt”. One way to see which pcs had not gotten the install was to compare the members of the AD group, the computer names, to the <hostname> portion of the log file names being created. Computers from the group without a corresponding file hadn’t gotten the install.

Easy, right? Get the list of computers to install with Get-ADGroupMember and compare that list to the <hostname> portion of the log files. How to get only the <hostname> portion? Get-ChildItem makes it easy to get the list of file names. But then they need to be parsed to get only the <hostname> part. Simple in a spreadsheet, but I really wanted a listing of only the <hostname> without having to take any other steps.

I knew I needed to look at the Name portion of the file name, handle it as a string, chop off the “progname_”, and drop the “.txt” portion. But how to do that? After what seemed like way too much searching and experimenting I finally came up with…

$( Foreach ( $name in $(Get-ChildItem progname* -Name) ) { $name.split('_')[1].split('.')[0] } ) | Sort

The first .split('_')[1] lops off the common part of the filename that’s there only to identify the program the log is being created for, “progname_”, and keeps the rest for the second split(). The next split(), .split('.')[0], cuts off the file extension, .txt, and keeps only the part that precedes it. And so the output is only the hostname portion of the filename that the startup script has created.

Compare that list to the list from Get-ADGroupMember and, voila, I know which of the targeted pcs have and have not had the program installed without doing any extra processing to trim the file names. Simple enough, but for some reason it took me a while to see how to handle the file names as strings and parse them the way I needed.
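Putting the two pieces together looks something like the sketch below. The group name and share path are placeholders, not the real ones.

$targets = (Get-ADGroupMember -Identity 'ProgInstallPCs').Name
$logged = $( Foreach ( $name in $(Get-ChildItem \\server\prog\progname_* -Name) ) { $name.split('_')[1].split('.')[0] } )
# Group members with no matching log file haven't gotten the install yet.
Compare-Object -ReferenceObject $targets -DifferenceObject $logged |
    Where-Object { $_.SideIndicator -eq '<=' } |
    Select-Object -ExpandProperty InputObject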

Get-WinEvent, read carefully to filter by date

Get-WinEvent hashtable date filtering is different.

Windows event logs have lots of useful information. Getting it can be a slow process; Microsoft even says so in a number of posts and recommends using a hashtable to speed up filtering.

Many PowerShell Get-… cmdlets include a way to limit the objects collected. A -Filter, -Include, or -Exclude parameter may be available to do this. They are generally implemented along the lines of <Get-command> -Filter/-Include/-Exclude 'Property <comparison_operator> <value>'. Objects with <DateTime> type attributes, like files, directories, users, and many more, can all be filtered relative to some fixed <DateTime> value like yesterday, noon three days ago, etc. As a result, a question like “show every user who has logged on since yesterday afternoon” can be answered.
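For example, with the ActiveDirectory module, something along these lines answers that question (a sketch only; LastLogonDate is only approximately current, which is fine for a rough answer):

$since = (Get-Date).Date.AddDays(-1).AddHours(12)   # noon yesterday
Get-ADUser -Filter 'LastLogonDate -ge $since' -Properties LastLogonDate |
    Select-Object Name, LastLogonDate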

In the case of the Get-WinEvent cmdlet none of these parameters are available. However the cmdlet’s output can be piped to a Where-Object and the event’s TimeCreated can be filtered relative to another time. In that way filtering is similar to how it works for other cmdlets that include a -Filter parameter.

All that goes to say I’d become very complacent about how to filter <DateTime> in powershell.

Now I needed to filter events in the log and, as claimed in many Microsoft posts, log filtering can be slooow. The posts also say filtering speed can be increased significantly by using a hashtable for filtering. And wouldn’t you know it, Get-WinEvent has a -FilterHashtable parameter. Great! Let’s use that to speed up my slow log filtering.

Well, guess what? Unlike any other <DateTime> filtering I’ve done there is no way to filter for StartTime or EndTime being greater than or less than some other time. And the fact that the hashtable key names StartTime and EndTime were being used instead of TimeCreated should have been my first clue that I wasn’t doing the usual filtering on TimeCreated.

The only option in a hashtable is to assign some value to a key name. So how to filter for, say, events that happened yesterday? There is no one <DateTime> value that represents all of yesterday. <DateTime> isn’t a range or array of values; it is a fixed point-in-time value.

And for me this is where things get strange with Get-WinEvent. It can be used to extract events from a log, and those events can be piped to Where-Object and TimeCreated can be filtered by comparing to a <DateTime> just like using the -Filter parameter of other Get-… cmdlets.

After posting about this on PowerShell Forums, I found out I misunderstood the use of StartTime and EndTime in a -FilterHashtable used with Get-WinEvent. The post is, Hashtable comparison operator, less than, greater than, etc?.

Turns out whatever <DateTime> StartTime is set to in a hashtable filters for events that occurred on or after the time it is set to. And EndTime, in a hashtable, filters for events that occurred at or before the <DateTime> it has been set to!

As an example, I extract events from the Application log that occurred 24 hours ago or less. This is run on a test system that doesn’t get used often so there’s not many events in that time span.

The first example does not use the StartTime key in the hashtable. It pipes Get-WinEvent to a Where-Object and filters for TimeCreated being on or after one day ago.

The second example includes the StartTime key in the hashtable and sets it to one day ago.

Both return the same results, but there is no comparison operator used for the StartTime key in the hashtable. The hashtable’s StartTime value is used internally by Get-WinEvent to check that each event’s TimeCreated is on or after that value. Similarly, when EndTime is assigned a value, each event’s TimeCreated is checked to be on or before it. I really feel that could have been made much clearer in the Get-WinEvent documentation.

Below, the hashtable does not include StartTime or EndTime. Where-Object filters against TimeCreated.

$FilterHashtable = @{ LogName = 'Application'
   ID = 301, 302, 304, 308, 101, 103, 108 
}
Get-WinEvent -FilterHashtable $FilterHashtable | 
   Where-Object { $_.TimeCreated -ge (Get-Date).Date.AddDays(-1) } | 
   Format-Table -AutoSize -Wrap TimeCreated, Id, TaskDisplayName

TimeCreated          Id TaskDisplayName
-----------          -- ---------------
3/1/2022 5:54:54 PM 302 Logging/Recovery
3/1/2022 5:54:54 PM 301 Logging/Recovery
3/1/2022 5:39:01 PM 302 Logging/Recovery
3/1/2022 5:39:01 PM 301 Logging/Recovery
3/1/2022 5:34:59 PM 103 General
3/1/2022 5:34:39 PM 103 General

Below, the hashtable includes StartTime. No Where-Object to filter on TimeCreated.

$FilterHashtable = @{ LogName = 'Application'
   ID = 301, 302, 304, 308, 101, 103, 108
   StartTime = (Get-Date).Date.AddDays(-1) 
}
Get-WinEvent -FilterHashtable $FilterHashtable | 
   Format-Table -AutoSize -Wrap TimeCreated, Id, TaskDisplayName

TimeCreated          Id TaskDisplayName
-----------          -- ---------------
3/1/2022 5:54:54 PM 302 Logging/Recovery
3/1/2022 5:54:54 PM 301 Logging/Recovery
3/1/2022 5:39:01 PM 302 Logging/Recovery
3/1/2022 5:39:01 PM 301 Logging/Recovery
3/1/2022 5:34:59 PM 103 General
3/1/2022 5:34:39 PM 103 General
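And to answer the earlier “events that happened yesterday” question, StartTime and EndTime can be combined in one hashtable to bound TimeCreated on both sides, something like this:

$yesterday = (Get-Date).Date.AddDays(-1)
$FilterHashtable = @{ LogName = 'Application'
   ID = 301, 302, 304, 308, 101, 103, 108
   StartTime = $yesterday              # on or after midnight yesterday
   EndTime = $yesterday.AddDays(1)     # on or before midnight today
}
Get-WinEvent -FilterHashtable $FilterHashtable | 
   Format-Table -AutoSize -Wrap TimeCreated, Id, TaskDisplayName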

Ubuntu, ZFS and running out of drive space

For userland, system operations really need to be invisible.

I have been using Ubuntu as my desktop OS for over 15 years now. I came to it by way of using it on a backup computer to do a task that my primary computer, using Windows XP, failed at if I tried to do anything else concurrently. I didn’t want to buy another computer or a license for XP and put it on an old computer with specs that were very marginal for XP. So I thought, try this free OS and see whether I can use the old computer to do the task that my new XP computer would only do if nothing else was done at the same time.

Ubuntu got the job done. I recorded my vinyl albums to sound files, broke the sound files into tracks like on the albums, and then burned the tracks onto CDs so I could carry my music around more conveniently and listen to it in more places. XP did this too, but the sound files were corrupt or the burn failed if I did something else like open my word processor, spreadsheet, or web browser while recording or burning were going on.

Now I had two pcs running and would switch between them as needed to keep the vinyl-to-CD process going. That was a little inconvenient because my workspace didn’t let me put the two pcs right next to each other. Switching wasn’t a case of moving my hands from one keyboard and mouse to another, or flipping a switch on a KVM. Delaying getting to the Ubuntu pc after the album side was over meant extra time spent trimming the audio file to delete the tail of the file. This led to me trying some things on the Ubuntu pc, like opening the word processor or spreadsheet or browsing the web, while the recording or burning was going on to see if it caused problems. And amazingly, it didn’t! A lower spec pc with Ubuntu could do more of what I wanted without errors than my much better XP desktop.

That led me to using the Ubuntu pc while converting my albums to CD and that led me to Ubuntu for home use. Professional life continued and continues to be Windows, but at home Ubuntu. And Ubuntu is still preferred at home because it doesn’t mysteriously prevent me from doing things, inconveniently interrupt me, or insist on having information I don’t want to share like Windows does. That is until ZFS in Ubuntu 20.04 started preventing me from doing updates on my primary and backup pc because of lack of space.

I’ve run out of space on Windows and Ubuntu before. It just meant time to finally do some housekeeping and get rid of large chunks of files, like virtual machines, that I hadn’t used in a while. Do that and boom, back to work! Not so with ZFS. Do that and gloom, still can’t do updates.

There were different error messages on the two pcs, one said bpool space free below 20%, the other was rpool space free below 20%. Rpool and bpool, what are they? And why, when there’s nearly 20% free space, is updating prevented? And why after deleting or moving tens of gigabytes of files off the drive and purging old kernels, a Linux thing, are updates still prevented and rpool and bpool still report less than 20% free? Gigs of files were just moved off the drive and these rpool and bpool things don’t reflect that!

This was my first experience in more than 15 years of using Ubuntu where keeping it up to date wasn’t just a case of using it and running updates every once in a while.

Windows has a feature, System Restore points, that I’ve used to get back to a working system when things have been broken to the point of making it hard or impossible to use the pc. Ubuntu hasn’t really had anything equivalent until the introduction of ZFS. And as I’ve learned, that’s way too simple-minded an explanation and doesn’t give credit to the capabilities of ZFS, which go way beyond Windows restore points. True, and so be it.

I dug through many ZFS web pages and tried many things until I finally got more than 20% free on rpool and bpool on each pc; a list of links is at the end of this post. Now the pcs are back to updating without complaint.
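The gist of what finally worked was finding and destroying old automatic snapshots. Roughly the commands below; the snapshot name is a placeholder, and zfs destroy is irreversible, so double-check names before removing anything.

sudo zpool list                                      # the CAP column shows how full rpool and bpool are
sudo zfs list -t snapshot -o name,used -s used -r bpool
sudo zfs list -t snapshot -o name,used -s used -r rpool
sudo zfs destroy rpool/ROOT/ubuntu_abc123@autozsys_example   # placeholder snapshot name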

What I’ve learned is Ubuntu has a way to go to make ZFS user friendly. Things I’d suggest to Canonical for desktop Ubuntu:

  • Double the recommended minimum drive size and/or tell end users they should have 2x the drive space they think they need if they already think they need more than the minimum
  • Reduce the default number of snapshots to 10
  • Provide a UI for setting the number of snapshots
  • Provide a UI for selectively removing snapshots from bpool or rpool when free space goes below the dreaded 20%
  • After prompting for confirmation automatically remove the oldest snapshots to get back to 20% free when the condition occurs

Both my pcs are now above 20% free space on rpool and bpool and updating without complaint. It took a while and some learning to make that happen. It wasn’t the type of thing an average end user would ever want to face or even know about.

77% and 48% free – will probably bump rpool space issues first again on this pc
89% and 90% free – plenty of room before bpool is a problem again on this pc

ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys general principle on state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys commands for state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys state collection · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys for system administrators · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys partition layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys dataset layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys properties on ZFS datasets · ~DidRocks

apt – Out of space on boot zpool and cant run updates anymore – Ask Ubuntu
For this link see especially Hannu‘s answer on Nov 19, 2020 at 17:22

docs.oracle.com | Displaying and Accessing ZFS Snapshots

docs.oracle.com | Destroying a ZFS File System

docs.oracle.com | Creating and Destroying ZFS Snapshots


Ubuntu and Bluetooth headphones

You don’t know what you don’t know.

It’s impossible to be expert at everything. Or even good at everything. One of the things Ubuntu has frustrated me over is headphones. I’ve used wired headphones and they’ve worked great. But of course I’m tethered to the computer. I’ve used wireless headphones and they too have worked great. But I’m miffed that a USB port has to be dedicated to their use. Why should a USB port be lost to use headphones when the pc has built-in Bluetooth? Why can’t I just use the Bluetooth headphones I use with my phone and keep all my ports open for other things?

Why? Because every attempt to use Bluetooth headphones has failed. Used as a headset they work fine; as headphones, not so much. Either the microphone isn’t picked up or the audio is unintelligible or nonexistent. And I know it isn’t the headphones, because every set I’ve had that doesn’t work with Ubuntu has worked great with Android phones and with Windows after installing the headphones’ Windows program.

I’ve tried digging into the details of how to set up audio on Ubuntu to get Bluetooth headphones supported. And doing so, I’ve buggered up test systems to the point I needed to reinstall the OS to get sound working again. I obviously didn’t understand it well enough to resolve the issue. Even so, every once in a while I try again to make it work.

Recently, I came across the solution! Thanks to Silvian Cretu and his post Linux and Bluetooth headphones.

The post touches on many of the things I’ve tinkered with trying to make Bluetooth headphones work. The section, “Solution 2: Replacing Pulseaudio with PipeWire”, in Silvian’s post provides the recipe that makes it work. If you’re on Ubuntu 20.04 and are frustrated trying to make Bluetooth headphones work then head over to Linux and Bluetooth headphones and see if that is the recipe for you too.

Anatomy of a Stealthy Phish

Targeting me or just a step up in the scammer’s tool quality?

Got an email from AUROBINDO PHARMA LIMITED asking to schedule an interview with me. Great!! I’m looking for work.

The email is from a GMail account though. So I ask to be contacted from the business email account.

Surprise, I get a follow-up email that appears to be from Aurobindo Pharma Limited. Notice, though, that I’m supposedly being solicited based on my resume, yet the “jobs” cover a wide range of positions.

And WHOIS, which can look up information about domain names, has never heard of aurobindopharmaltd.com.

$ whois aurobindopharmaltd.com
No match for domain "AUROBINDOPHARMALTD.COM".
>>> Last update of whois database: 2021-12-22T08:15:49Z <<<

And there is no aurobindopharmaltd.com website as of this writing.

I’ve already found that the domain doesn’t exist. Getting email from that domain is therefore not possible. What’s going on? Time to examine the email header. This is what I found…

Guess what? The Return-Path/smtp.mailfrom domain is real. It is an actual business site related to sports. There’s some contact information on it and absolutely nothing to do with pharma.

As I understand it, Return-Path and smtp.mailfrom identify the actual source of the email. The email originated from that domain. And that means the domain has been compromised. So I sent email to a site contact advising them of what I’d found and included the email header of the original phishing message.
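For anyone who hasn’t dug through headers before, the mismatch looks roughly like the lines below. These are illustrative placeholders, not the actual header from the message I received.

From: "Recruiting" <hr@aurobindopharmaltd.com>
Return-Path: <bounce@some-sports-business.example>
Authentication-Results: mx.example.net;
    spf=pass smtp.mailfrom=some-sports-business.example

The From: address shows the pharma domain, while Return-Path and smtp.mailfrom point at the domain the message really came from.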

What I wonder about though is the phisher’s follow up. Whoever was sending those messages seemed to want to convince me they were legitimate. Was it ME they were trying to convince? Or did they just have a better phishing tool and bots on compromised servers that enabled easily sending a message with a forged sender from a compromised server so the message isn’t from a GMail account?

I don’t know. This is the first time this ever happened to me. Actual attention to my initial response, replying and changing the message properties to be a more persuasive fake. Am I being spear phished? Don’t know, but what happened is intriguing.

Controlling file access

Use groups to maintain ACLs.

Digital information has creators, owners, editors, publishers, and consumers. Depending on the information, it has different approved audiences: public, the creator’s organization, leadership, a functional group, etc. And the audiences can be subdivided depending on the level of authority they have: read only, modify, create, etc.

How to control who sees what? Accounts need to access, change and create information. At least some of that information will be in the cloud, either your own, or space and services hosted and invoiced monthly, or a combination. Access to public and private domains should be convenient for authorized users on supported platforms.

And be sure to classify the information! The public stuff has access control set so everyone can see it. Everything else needs to be someplace private. Add in an approval process for material to go public. Devise a rights scheme for the private domain. Owners, Editors, Readers.

Add to all this a folder hierarchy that supports the envisioned rights and document access should be understandable, maintainable, and auditable (with proper auditing enabled).

What’s the *perfect* configuration for all of this? As far as I’ve discovered, there isn’t one. Please comment with a reference if you know of one.

The perfect configuration is one that is maintained per business needs. Maintained is really the operative requirement.

Default everything to private so only authors have access to their own work?

How to collaborate? Give others read/edit access as needed per instructions from the owner? That gets into LOTS and LOTS of ACL changes as people change in the organization, to say nothing of sunsetting access. When should those collaborators have edit access removed, or even read access?

If rights are granted by individual account, this creates lots of future unresolved SIDs in ACLs as accounts are removed, or lots of maintenance to find the accounts in the ACLs and remove them before the account is deleted.

And even if accounts aren’t removed because the person is just changing position, and so should have access to different files, it still requires lots of maintenance as people move from position to position.

Default everything to public read only and authors have edit access to their own work?

This limits the need to provide access to individual accounts unless the account needs edit rights to a document. If the same approach is taken to granting edit rights as was suggested for read rights above, then the same situation with maintaining access occurs, except this time only for editors. Likely a lesser support burden, but nonetheless one that is likely to leave orphaned SIDs in the ACLs.

Manage access by group!

Create Reader and Editor groups. As many as needed to accommodate each of the various groups needing access to the folders and files. Add and remove accounts from the groups as needed.

Managing access by group won’t cover all the needs. It may still be necessary to put individual accounts into the ACLs. However, managing by group will limit the need to put individual accounts into the ACLs, and it will help make the rights clear if group naming conventions are used to make the purpose of the group apparent, e.g., AccountsPayableReaders, AccountsPayableEditors.

This can be taken further. If the two groups above have relatively steady membership, then accounts that only need limited-time access as readers or editors can be added to groups nested within these groups, making it apparent the account holder has temporary access. The nested groups could be TmpAccountsPayableReaders and TmpAccountsPayableEditors.
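A sketch of the group-based approach, with hypothetical domain, OU, and share paths: the folder ACL is granted to the groups once, and after that only group membership is managed.

New-ADGroup -Name 'AccountsPayableReaders' -GroupScope Global -Path 'OU=Groups,DC=example,DC=local'
New-ADGroup -Name 'AccountsPayableEditors' -GroupScope Global -Path 'OU=Groups,DC=example,DC=local'
New-ADGroup -Name 'TmpAccountsPayableReaders' -GroupScope Global -Path 'OU=Groups,DC=example,DC=local'
# Nest the temporary-access group inside the standing group.
Add-ADGroupMember -Identity 'AccountsPayableReaders' -Members 'TmpAccountsPayableReaders'

# NTFS rights are granted to the groups, not to individual accounts.
icacls 'D:\Shares\AccountsPayable' /grant 'EXAMPLE\AccountsPayableReaders:(OI)(CI)RX'
icacls 'D:\Shares\AccountsPayable' /grant 'EXAMPLE\AccountsPayableEditors:(OI)(CI)M'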

In the end..

There is not a “perfect”, no-maintenance system to manage and control access rights. Groups are certainly recommended over individual accounts. As long as the organization experiences changes that should affect document access, it will be necessary to maintain ACLs.

The goal really is to limit the work needed to know what access is granted to which accounts, to maintain proper access, and use a method that is sustainable.

Groups really are the solution. Groups and a well established process to identify, classify, and assign rights to information throughout its lifecycle from creation to retirement.