Ubuntu, ZFS and running out of drive space

For userland, system operations really need to be invisible.

I have been using Ubuntu as my desktop OS for over 15 years now. I came to it by way of using it on a backup computer for a task that my primary computer, running Windows XP, failed at if I tried to do anything else concurrently. I didn’t want to buy another computer, or buy an XP license and put it on an old computer with specs that were very marginal for XP. So I thought: try this free OS and see whether I can use the old computer to do the task that my new XP computer would only do if nothing else was done at the same time.

Ubuntu got the job done. I recorded my vinyl albums to sound files, broke the sound files into tracks like on the albums, and then burned the tracks onto CDs so I could carry my music around more conveniently and listen to it in more places. XP did this too, but the sound files were corrupt or the burn failed if I did something else like open my word processor, spreadsheet, or web browser while recording or burning were going on.

Now I had two pcs running and would switch between them as needed to keep the vinyl-to-CD process going. That was a little inconvenient because my workspace didn’t let me put the two pcs right next to each other. Switching wasn’t a case of moving my hands from one keyboard and mouse to another, or flipping a switch on a KVM. Delaying getting to the Ubuntu pc after the album side was over meant extra time spent trimming the audio file to delete its tail. This led to me trying some things on the Ubuntu pc, like opening the word processor or spreadsheet or browsing the web, while the recording or burning was going on to see if it caused problems. And amazingly, it didn’t! A lower-spec pc with Ubuntu could do more of what I wanted, without errors, than my much better XP desktop.

That led me to using the Ubuntu pc while converting my albums to CD, and that led me to Ubuntu for home use. Professional life continued, and continues, to be Windows, but at home it’s Ubuntu. And Ubuntu is still preferred at home because it doesn’t mysteriously prevent me from doing things, inconveniently interrupt me, or insist on having information I don’t want to share like Windows does. That is, until ZFS in Ubuntu 20.04 started preventing me from doing updates on my primary and backup pcs because of lack of space.

I’ve run out of space on Windows and Ubuntu before. It just meant time to finally do some housekeeping and get rid of large chunks of files, like virtual machines, that I hadn’t used in a while. Do that and boom, back to work! Not so with ZFS. Do that and gloom, still can’t do updates.

There were different error messages on the two pcs: one said bpool free space was below 20%, the other said rpool free space was below 20%. Rpool and bpool, what are they? And why, when there’s nearly 20% free space, is updating prevented? And why, after deleting or moving tens of gigabytes of files off the drive and purging old kernels (a Linux thing), are updates still prevented and rpool and bpool still reporting less than 20% free? Gigs of files were just moved off the drive and these rpool and bpool things don’t reflect that!

It was my first experience in more than 15 years of using Ubuntu where keeping it up to date wasn’t just a case of using it and running updates every once in a while.

Windows has a feature called “Recovery Points” that I’ve used to get back to a working system when things were broken to the point of making it hard or impossible to use the pc. Ubuntu hasn’t really had anything equivalent until the introduction of ZFS. And as I’ve learned, that’s way too simple-minded an explanation and doesn’t give credit to the capabilities of ZFS, which go way beyond Windows Recovery Points. True, and so be it.

I dug through many ZFS web pages and tried many things until finally getting more than 20% free on rpool and bpool on each pc; a list of links is at the end of this post. Now the pcs are back to updating without complaint.
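What finally freed the space was finding and destroying old automatic snapshots, which keep holding the blocks of files you delete. Here is a hedged sketch of the commands involved; the dataset and snapshot names are placeholders and will differ on your system, and the only part that runs without ZFS installed is the small filter at the end, which picks zsys's automatic snapshots out of `zfs list` output.

```shell
# How full are the pools?
#   zpool list bpool rpool
# Which snapshots pin the most space?
#   zfs list -t snapshot -o name,used -s used
# Destroy one that's no longer needed (this is irreversible!):
#   sudo zfs destroy rpool/ROOT/ubuntu_example@autozsys_example

# Helper: given `zfs list -t snapshot -o name` output on stdin, print only
# the automatic zsys snapshots, which were the space hogs on my systems.
list_autozsys() {
    awk '/@autozsys_/ { print $1 }'
}
# usage: zfs list -t snapshot -o name | list_autozsys
```

On my pcs, destroying a handful of the oldest autozsys snapshots was enough to get both pools back above the 20% line.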

What I’ve learned is Ubuntu has a way to go to make ZFS user friendly. Things I’d suggest to Canonical for desktop Ubuntu:

  • Double the recommended minimum drive size and/or tell end users they should have 2x the drive space they think they need if they already think they need more than the minimum
  • Reduce the default number of snapshots to 10
  • Provide a UI for setting the number of snapshots
  • Provide a UI for selectively removing snapshots from bpool or rpool when free space goes below the dreaded 20%
  • After prompting for confirmation, automatically remove the oldest snapshots to get back to 20% free when the condition occurs

Both my pcs are now above 20% free space on rpool and bpool and updating without complaint. It took a while and some learning to make that happen. It wasn’t the type of thing an average end user would ever want to face or even know about.

77% and 48% free – will probably bump rpool space issues first again on this pc
89% and 90% free – plenty of room before bpool is a problem again on this pc

ZFS focus on Ubuntu 20.04 LTS: ZSys general presentation · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys general principle on state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys commands for state management · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys state collection · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys for system administrators · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys partition layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys dataset layout · ~DidRocks

ZFS focus on Ubuntu 20.04 LTS: ZSys properties on ZFS datasets · ~DidRocks

apt – Out of space on boot zpool and cant run updates anymore – Ask Ubuntu
For this link see especially Hannu’s answer on Nov 19, 2020 at 17:22

docs.oracle.com | Displaying and Accessing ZFS Snapshots

docs.oracle.com | Destroying a ZFS File System

docs.oracle.com | Creating and Destroying ZFS Snapshots

Ubuntu and Bluetooth headphones

You don’t know what you don’t know.

It’s impossible to be expert at everything. Or even good at everything. One of the things Ubuntu has frustrated me with is headphones. I’ve used wired headphones and they’ve worked great. But of course I’m tethered to the computer. I’ve used wireless headphones and they too have worked great. But I’m miffed that a USB port has to be dedicated to their use. Why should a USB port be lost to use headphones when the pc has built-in Bluetooth? Why can’t I just use the Bluetooth headphones that I use with my phone and keep all my ports open for other things?

Why? Because every attempt to use Bluetooth headphones has failed. Used as a headset they work fine; as headphones, not so much. Either the microphone hasn’t been picked up, or the audio is unintelligible or nonexistent. And I know it isn’t the headphones, because every set I’ve had that doesn’t work with Ubuntu has worked great with Android phones and with Windows after installing the headphone’s Windows program.

I’ve tried digging into the details of how to set up audio on Ubuntu to get Bluetooth headphones supported. In doing so, I’ve buggered up test systems to the point I needed to reinstall the OS to get sound working again. I obviously didn’t understand it well enough to resolve the issue. Even so, every once in a while I try again to make it work.

Recently, I came across the solution! Thanks to Silvian Cretu and his post Linux and Bluetooth headphones.

The post touches on many of the things I’ve tinkered with trying to make Bluetooth headphones work. The section “Solution 2: Replacing Pulseaudio with PipeWire” in Silvian’s post provides the recipe that makes it work. If you’re on Ubuntu 20.04 and frustrated trying to make Bluetooth headphones work, then head over to Linux and Bluetooth headphones and see if that is the recipe for you too.
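One quick way to confirm the switch took, sketched below with a tiny helper of my own invention: PipeWire keeps answering the PulseAudio client tools, so `pactl info` still works after the change, but the reported server name is different.

```shell
# After rebooting, run:  pactl info | grep 'Server Name'
# A converted system reports a server name that mentions PipeWire instead of
# plain pulseaudio.

# Helper that reads `pactl info` output on stdin and reports which server
# is actually providing audio:
audio_server() {
    if grep -qi 'pipewire'; then
        echo pipewire
    else
        echo pulseaudio
    fi
}
# usage: pactl info | audio_server
```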

Anatomy of a Stealthy Phish

Targeting me or just a step up in the scammer’s tool quality?

Got an email from AUROBINDO PHARMA LIMITED asking to schedule an interview with me. Great!! I’m looking for work.

The email is from a GMail account though. So I ask to be contacted from the business email account.

Surprise: I get a follow-up email that appears to be from Aurobindo Pharma Limited. Notice, though, that I’m being solicited based on my resume, yet the “jobs” cover a wide range of positions.

And WHOIS, which can look up information about domain names, has never heard of aurobindopharmaltd.com.

$ whois aurobindopharmaltd.com
No match for domain "AUROBINDOPHARMALTD.COM".
>>> Last update of whois database: 2021-12-22T08:15:49Z <<<

And there is no aurobindopharmaltd.com website as of this writing.

I’ve already found that the domain doesn’t exist. Getting email from that domain is therefore not possible. What’s going on? Time to examine the email header. This is what I found…

Guess what? The Return-Path/smtp.mailfrom domain is real. It is an actual business site related to sports. There’s some contact information on it and absolutely nothing to do with pharma.

As I understand it, Return-Path and smtp.mailfrom identify the actual source of the email: the email originated from that domain. And that means the domain has been compromised. So I sent email to a site contact advising them of what I’d found and included the email header of the original phishing message.
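If you want to run the same check, here's a hedged sketch. Save the message as raw text (message.eml is a placeholder name) and pull out the headers that reveal the true origin; Return-Path is stamped by the receiving server from the SMTP envelope, not from the forgeable From: line.

```shell
# From a saved raw message:
#   grep -i -m1 '^Return-Path:' message.eml
#   grep -i -m1 '^Authentication-Results:' message.eml

# The same check as a reusable function, reading the file given as $1 and
# printing the origin-revealing headers:
mail_origin() {
    grep -i -E '^(Return-Path|Authentication-Results):' "$1"
}
```

The Authentication-Results header, when present, also carries the SPF and DKIM verdicts, which tell you whether the claimed sending domain actually authorized the sending server.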

What I wonder about, though, is the phisher’s follow-up. Whoever was sending those messages seemed to want to convince me they were legitimate. Was it ME they were trying to convince? Or did they just have a better phishing tool, and bots on compromised servers, that made it easy to send a message with a forged sender so the message isn’t from a GMail account?

I don’t know. This is the first time this has ever happened to me: actual attention to my initial response, replying and changing the message properties to make a more persuasive fake. Am I being spear phished? Don’t know, but what happened is intriguing.

Controlling file access

Use groups to maintain ACLs.

Digital information has creators, owners, editors, publishers, and consumers. Depending on the information, it has different approved audiences: public, creator’s organization, leadership, functional group, etc. And the audiences can be subdivided depending on the level of authority they have: read only, modify, create, etc.

How to control who sees what? Accounts need to access, change and create information. At least some of that information will be in the cloud, either your own, or space and services hosted and invoiced monthly, or a combination. Access to public and private domains should be convenient for authorized users on supported platforms.

And be sure to classify the information! The public stuff has access control set so everyone can see it. Everything else needs to be someplace private. Add in an approval process for material to go public. Devise a rights scheme for the private domain: Owners, Editors, Readers.

Add to all this a folder hierarchy that supports the envisioned rights, and document access should be understandable, maintainable, and auditable (with proper auditing enabled).

What’s the *perfect* configuration for all of this? As far as I’ve discovered, there isn’t one. Please comment with a reference if you know of one.

The perfect configuration is one that is maintained per business needs. Maintained is really the operative requirement.

Default everything to private so only authors have access to their own work?

How to collaborate? Give others read/edit access as needed per instructions from the owner? That gets into LOTS and LOTS of ACL changes as people change in the organization, to say nothing of sunsetting access. When should those collaborators have edit removed, or even read?

If rights are granted by individual account, then removing accounts leaves lots of unidentified SIDs in ACLs, or it takes lots of maintenance to find the accounts in the ACLs and remove them before each account is removed.

And even when accounts aren’t removed, because the person is just changing position and so should have access to different files, it still requires lots of maintenance as people move from position to position.

Default everything to public read-only, with authors having edit access to their own work?

This limits the need to grant access to individual accounts unless an account needs edit rights to a document. If the same approach is taken to granting edit rights as was suggested for read rights above, then the same access-maintenance situation occurs, except this time only for editors. Likely a lesser support burden, but nonetheless one that is likely to leave orphaned SIDs in the ACLs.

Manage access by group!

Create Reader and Editor groups. As many as needed to accommodate each of the various groups needing access to the folders and files. Add and remove accounts from the groups as needed.

Managing access by group won’t cover all the needs. It may still be necessary to put individual accounts into the ACLs. However, managing by group will limit the need to put individual accounts into the ACLs, and it will help make the rights clear if group naming conventions are used to make the purpose of each group more apparent, e.g., AccountsPayableReaders, AccountsPayableEditors.

This can be taken further. If the two groups above have relatively steady membership, then accounts with only a limited-time need for reader or editor access can be added to groups nested within these groups, making it apparent that the account holder has temporary access. The nested groups could be TmpAccountsPayableReaders and TmpAccountsPayableEditors.
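On a Linux file server the same group scheme maps directly onto POSIX ACLs via setfacl. A minimal sketch, assuming the groups already exist; the helper function and names are mine, used only to keep the convention (Readers get read-only, Editors get read/write) in one place:

```shell
# Build the setfacl entry for a group following the naming convention above.
# Capital X grants execute only on directories, so trees stay traversable.
acl_entry() {
    group="$1"
    role="$2"
    case "$role" in
        reader) printf 'g:%s:rX' "$group" ;;
        editor) printf 'g:%s:rwX' "$group" ;;
        *)      return 1 ;;
    esac
}

# Apply to a tree; the -d variant sets default ACLs so new files inherit:
#   setfacl -R -m "$(acl_entry AccountsPayableEditors editor)" shared/ap
#   setfacl -R -d -m "$(acl_entry AccountsPayableReaders reader)" shared/ap
```

The same idea carries over to Windows shares, where the groups land in NTFS ACLs instead of setfacl entries.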

In the end…

There is no “perfect,” no-maintenance system to manage and control access rights. Groups are certainly recommended over individual accounts. As long as the organization experiences changes that should affect document access, it will be necessary to maintain ACLs.

The goal really is to limit the work needed to know what access is granted to which accounts, to maintain proper access, and use a method that is sustainable.

Groups really are the solution. Groups and a well established process to identify, classify, and assign rights to information throughout its lifecycle from creation to retirement.

Attractive deal? Check how long that website’s been around.

Was that vendor set up yesterday to try and take money from you today?

One thing that happens as advertisers get their algorithms into you is much more targeted advertising, often with a web link.

Ever wonder how long that website’s been around? Setting up shop, scamming money, and disappearing are tactics as old as scams themselves, long before the Internet. Checking how long a domain name has been around can help detect a scam.

One thing I do when I check advertising is check how old the domain name is. The domain name is the .com, .org, .gov, .net, etc., plus the word before it, starting from the preceding / or ., whichever is closest before the TLD. For example, www.disney.com breaks down to the domain name disney.com.

How old is the domain name disney.com?

The whois command reveals that information and more, in 156 lines of output. The dates are among the first lines and scroll off the top of the screen, so you have to scroll back up to see them.

So I substituted a function, called by the same name, that uses whois and grep to produce less output, focused on dates and attributes like the registrant. The substitute command returns 23 lines. These are the lines:

$ whois disney.com
   Updated Date: 2021-01-21T15:04:59Z
   Creation Date: 1990-03-21T05:00:00Z
   Registry Expiry Date: 2023-03-22T04:00:00Z
NOTICE: The expiration date displayed in this record is the date the
currently set to expire. This date does not necessarily reflect the expiration
date of the domain name registrant's agreement with the sponsoring
view the registrar's reported date of expiration for this registration.
Updated Date: 2021-01-15T16:22:12Z
Creation Date: 1990-03-21T00:00:00Z
Registrar Registration Expiration Date: 2023-03-22T04:00:00Z
Registry Registrant ID: 
Registrant Name: Disney Enterprises, Inc.; Domain Administrator
Registrant Organization: Disney Enterprises, Inc.
Registrant Street: 500 South Buena Vista Street, Mail Code 8029
Registrant City: Burbank
Registrant State/Province: CA
Registrant Postal Code: 91521-8029
Registrant Country: US
Registrant Phone: +1.8182384694
Registrant Phone Ext: 
Registrant Fax: +1.8182384694
Registrant Fax Ext: 
Registrant Email: Corp.DNS.Domains@disney.com

It’s easier to see only the dates and some other relevant info by customizing my own whois. I am sure it can be improved on, but for the time being this listing is the substitute whois in my .bash_aliases.

function whois {

        if [ $# -ne 1 ]; then
                printf "Usage: whois <domain.tld>\nTo use native whois precede command with \\ \n"
                return 1
        fi

        # implemented code calls the installed whois by full path
        # so this function doesn't call itself recursively
        /usr/bin/whois "$1" | grep -wi "date\|registrant\|contact"
        ## haven't tried outside Ubuntu
        ## a possibility to make this somewhat portable:
        ## "$(which whois)" "$1" | grep -wi "date\|registrant\|contact"
}

Now, for an advertisement that’s been showing up in my Facebook feed lately, there’s listncnew.com. Sells NEW laptops and MacBooks for $75 – $95!! I figured it must be a scam but, for that price, worth the risk because I could cancel the credit card transaction. Before I placed the order I ran the domain name through my substitute whois to see when the domain was registered. It was created in October 2021, very new. I didn’t expect to get my order, and didn’t. At least I wasn’t out the money, and now I have a way to look at whois data that limits the output to only the information relevant to me.

$ whois listncnew.com
   Updated Date: 2021-10-26T09:14:16Z
   Creation Date: 2021-10-26T09:10:35Z
   Registry Expiry Date: 2022-10-26T09:10:35Z
NOTICE: The expiration date displayed in this record is the date the
currently set to expire. This date does not necessarily reflect the expiration
date of the domain name registrant's agreement with the sponsoring
view the registrar's reported date of expiration for this registration.
 Updated Date: 2021-10-26T09:13:25Z 
 Creation Date: 2021-10-26T09:10:35Z 
 Registrar Registration Expiration Date: 2022-10-26T09:10:35Z 
 Registry Registrant ID: 5372808-ER 
 Registrant Name: Privacy Protection 
 Registrant Organization: Privacy Protection 
 Registrant Street: 2229 S Michigan Ave Suite 411 
 Registrant City: Chicago 
 Registrant State/Province: Illinois 
 Registrant Country: United States 
 Registrant Postal Code: 60616 
 Registrant Email: Select Contact Domain Holder link 
 Admin Email: Select Contact Domain Holder link 
 Tech Email: Select Contact Domain Holder link 
 Billing Email: Select Contact Domain Holder link
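The creation date can feed a simple age check. A sketch, assuming GNU date (as on Ubuntu); the function name and the one-year threshold are my own choices:

```shell
# Days since a domain's Creation Date, given as the ISO 8601 timestamp
# whois prints (e.g. 2021-10-26T09:10:35Z).
domain_age_days() {
    created_epoch=$(date -u -d "$1" +%s)  # GNU date; BSD date needs -j -f
    now_epoch=$(date -u +%s)
    echo $(( (now_epoch - created_epoch) / 86400 ))
}

# A domain registered within the last year deserves extra suspicion:
#   age=$(domain_age_days 2021-10-26T09:10:35Z)
#   [ "$age" -lt 365 ] && echo "very new domain, be careful"
```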

This is my first post in a while. I haven’t been routinely releasing posts this year. There are another five that have been hovering in edit for a while. Maybe I can get them out before the end of the year.

AD CI Struggles

Active Directory Configuration Item struggles! Seems like feeling around in the dark.

I set up a test lab to practice creating an administratively tiered AD forest with a single domain. Challenges came from everywhere: available documentation, the platform the lab was built on, and ultimately figuring out a way to compare policies and OU structure between labs.

First the virtualization was run in VirtualBox on my laptop and the lab build began there. That was abandoned because of available drive space.

Then it was moved to a dedicated virtual server running VMware ESXi. The browser interface was sluggish for me and, as I learned, my account didn’t have permissions to copy/paste between my pc and the VM. That slowed me as I tried to enter configurations and compare between guests.

The ESXi guests occasionally froze after changes and, when that happened, were very slow to power cycle: minutes.

Back to the laptop. I purged VMs that were built for other labs (intended to be continued) and started again on the laptop. And this time, I got the administratively tiered lab running.

Great. It needs to be repeatable though. Go back to the ESXi test lab and try to produce the same results. I tried altering the ESXi lab guests’ AD and GPO settings to be the same as those in the VirtualBox lab. It didn’t work.

I found several ways to produce GPO reports to compare settings in each lab and see where they differed. I found a few differences and changed the ESXi VMs’ settings to match VirtualBox’s working ones. The cmdlets in GroupPolicy Module | Microsoft Docs are a great tool. The most useful to me for this were Get-GPOReport, Import-GPO, and Backup-GPO.

Sadly, the ESXi lab setup still didn’t produce the same results, and response in the console was often sluggish. When I started with ESXi I used the console tool, which let me open multiple windows within one browser window. Unfortunately it didn’t support copy/paste between my pc and the session. Not helpful for testing.

I eventually tried the remote console tool. It opened a window for each connection and was more responsive than the console. And, bonus, copy/paste worked between my pc and the terminal?!!

Better control of the ESXi test lab now, but still not the correct tiered admin function. The ESXi test lab guests showed some symptoms of not enough memory, like the sluggish responses and hangs. I am upping RAM from 2 GB to 4 GB, rebuilding the guests, and trying again.

The VirtualBox guests on the laptop are running 2 GB and the tiered admin lab works.

If more memory doesn’t do it I’ll have to come up with some other adjustment to try. Need to get both working with admin tiering.

A detail that may be a clue: the ESXi lab was built from a Windows Server 2019 lab DVD source, while the VirtualBox lab was built from a Windows Server 2019 Microsoft Download image. The VirtualBox lab has the Schema Admins group in AD DS as part of the default install from its media; the ESXi lab DOES NOT. That makes me wonder if there are other unseen differences that prevent the ESXi lab from successfully building the tiered administration setup.

Diving into Tiered Administration

Really? There’s always something wrong in the instructions :-/

Approaches to improving security are always interesting to me. Recently I became aware of tiered administration as an aside in a security video I watched. “10 Work From Home Security Settings You Can Implement Now to Block Attackers.” Very good. Watch if you can. The intro to tiering admin credentials and systems begins at about 30:10. That started the dive for me!

There are many background and architectural articles on Microsoft.com. They talk ideas and generalizations, with really bad, confusing graphics (imho). However, I found one article that promised to step through the process of setting up tiered administration: Initially Isolate Tier 0 Assets with Group Policy to Start Administrative Tiering – Microsoft Tech Community.

I followed the steps and it didn’t result in Domain Admins members being prevented from logging on to a member server or workstation. I repeated the process several times to be certain I hadn’t overlooked something and got the same lack of result each time.

The Group Policy precedence in the article didn’t work. A comment on the article stated that the article’s precedence was wrong and gave another; that didn’t work either.

At that point I put together a chart to track the hosts, accounts, policies, and security groups I was using. With the chart, and patiently changing one attribute at a time and repeating logon tests, I finally found a combination that worked!!

Great: Tier0 accounts couldn’t log on to anything except Tier0 assets. Now to start trying other things in my virtual environment to find out what needs to be accounted for when migrating a domain to the restricted accounts model.

It didn’t take long to find something else Tier0 Domain Admins accounts couldn’t do: they could no longer install software on Tier1 & 2 assets. The Tier0 accounts couldn’t log on, and there were no dedicated Tier1 or 2 accounts to use. (I should have tried the app server’s local admin for logon, then tried a software install to see if the Tier0 credential could perform it.) Members of the local Administrators group can install software. The Domain Admins group is in the local Administrators group. So any member of Domain Admins should be able to install software.

If a Tier0 account is in the group that limits logon to only Tier0 assets, then it cannot log on and install software on Tier1 & 2 assets. So Tier0 accounts are restricted to Tier0 assets, but how are Tier1 and 2 assets going to be managed?

Nowhere in the article is this limitation mentioned! Set up Tier0 admins and suddenly Tier1 & 2 assets can’t be managed with any Domain Admins account. A real problem.

Back to my trusty charts. Create new security groups and Group Policies after spending some time trying to understand the policies and how they’re being applied. Then start testing.

It seems my head scratching after discovering the problem, and before trying to produce a solution, worked. I came up with a scheme that doesn’t change the working Tier0 accounts and hosts settings and gives Tier1 accounts access to Tier1 assets but not Tier0 assets. There’s still a bit more testing to confirm Tier1 can’t access Tier2, then testing to confirm I’m able to create Tier2 accounts, then checking the effect on service accounts, which currently are admin accounts used only for the function of certain software, e.g. managing audit settings to capture and report changes in the environment.

Anyhow, this screed was really about two things: my satisfaction at standing up a tiered admin environment (at least the beginnings, in test), and my growing frustration with technology implementation articles written as step-wise instructions that just don’t work (Tiered Admin, Certificate Services, Federation Services to name a few), and that leave out really important information like, “if you do this, you lose admin access to Tier1 & 2 assets.”

The how-to articles that don’t actually work are all at Microsoft.com URLs. A third-party site getting it wrong is frustrating, but then I don’t feel misinformed by an authority I should be able to trust; after all, it’s not Microsoft. An article on Microsoft.com that says “do this, get that result” and is wrong or incomplete: very frustrating! If you can’t trust Microsoft about how to use its software, then who are you going to trust?

Got a job!

Review LOTS of advertisements, select and apply, repeat. It’s a full-time job that you want to dump.

After nine months of applications and 99% ghosted 🙁 got a job 🙂 !

Interesting that for the first time in my professional career the title is IT System Administrator. I’m familiar with all that has been needed so far. Seems a good fit. Yet I’ve never had this title before.

Good way to start a new job. Not lost in anything and able to contribute quickly.

Oh, and first post since commenting was enabled. Wondering what kind of spam will show up first.

Tracking Things, SO MANY THINGS, Which Are the Important Things?

Don’t get overwhelmed.

Digital devices (for this discussion, the range from smartphones to computers and the devices making up the networks they attach to) offer so much information for monitoring health and diagnosing failures.

To maintain the health of that cloud of devices, it’s good to know what’s going on, and what to monitor. By the same token, it’s good to monitor things that affect your experience, so the provider can be shown when it’s their problem.

For home Internet users the big things are usually the reliability and speed of the Internet connection. If it’s fast but down a lot, that’s no good. And if it’s up and performance is good, is it actually performing to spec? Are you getting what you’re paying for?

Purely as an exercise in curiosity, I wondered how often my public IP address changed, and how quickly a log of it would grow. I have been tracking since May 2014 and have 11,441 lines in the log. It’s only grown to 670K in that time. There have been 129 different public addresses, and the most frequent appeared 2,267, 1,681, 1,176, and 702 times: more than half the instances.
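The log itself takes very little to produce. Here's a hedged sketch of the sort of cron-able script that builds one; the lookup service and log path are placeholders, and any "what's my IP" service would work just as well.

```shell
LOGFILE="${LOGFILE:-$HOME/public-ip.log}"

# One common lookup: ask OpenDNS which address it sees you coming from.
get_public_ip() {
    dig +short myip.opendns.com @resolver1.opendns.com
}

# Append a timestamped entry only when the address differs from the last
# logged one, so the log grows slowly.
log_ip() {
    last=$(awk 'END { print $NF }' "$LOGFILE" 2>/dev/null)
    if [ "$1" != "$last" ]; then
        printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> "$LOGFILE"
    fi
}

# usage (e.g. from a daily cron job): log_ip "$(get_public_ip)"
```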

Mostly just trivia. Having the log did help me discover that one of the temporary IPs I got in Flushing was on some blacklists. When that happened I couldn’t log in to my (ISC)2 account. Once I troubleshot it, I was able to get the IP removed from the blacklist and was again able to get to (ISC)2 when I got that IP.

More immediate, is the Internet performance I contracted for being delivered? In my case it certainly seems it isn’t being delivered.

A typical recent week of service from my ISP. Anything not green is bad :-/. There’s quite a bit of it.

Better times :-). Start of November, 2020.

Those are examples of some things to track, one seemingly more immediately useful than the other. There are so many more. Which are important for security? Authentication by location, time of day, and second factor; log file access (with a hierarchy of criticality). Web browsing?

You need to ask and answer: what’s critical, what’s confidential, who should have access, and which access paths are allowed.

More phishing warning

Yeah, always talking about it because always getting examples to share.

Another picture to help avoid possibly painful mistakes.

This is my Inbox with only one message displayed.

See that the mouse pointer (the pointing index finger) is floating over the first column, the Sender. Beneath the finger is a black rectangular window with white text.

When the finger floats over the Sender, that black window pops open and shows the email address the message is supposedly from. It is very obviously NOT the Apple App Store. Mark this message as SPAM and delete it without opening!

Don’t even open it.