Security – It needs to make sense

Don’t make things unnecessarily difficult and then say “it’s for security”.

At this point I hope most everyone knows the basics of online security: don’t reuse passwords, use a unique, complex password at each site, use multi-factor authentication when available, and use a password safe. These are all measures that rely on the user (yes, in a corporate environment they should be enforced by the IT department). The user, though, is only part of the security equation.

The website owner also needs to contribute to a secure online experience. And I submit that making access and credential requirements proportional to the criticality of the information in the account is part of that responsibility. After all, if your credential isn’t easy to use, or has to be changed because of some requirement no other website imposes, it doesn’t exactly make you want to use the site or recommend it to family and friends.

This is a tale of a website which, IMHO, has account credential practices that are unnecessary and antithetical to a positive user experience. They are not proportional to the value of what’s being protected and are uniquely cumbersome compared to any other website I have credentials with.

I have a credit card. Surprised? It has a rewards program. The rewards program website is separate from the credit card company website. It is a third party provider of credit card reward program services, CU Rewards. And it has two “security features” that to me are absolutely abhorrent.

First is its CAPTCHA to prove I’m not a robot. I’m not against CAPTCHAs. I don’t mind them and they’re on many sites that I use. However, the CU Rewards CAPTCHA regularly requires me to complete two, three, or more “click on all the …” challenges to prove I’m real.

C’mon, really? On every other website I use that has a CAPTCHA, it takes one challenge before it decides I’m not a robot. On CU Rewards, never only one. Why?

The images are lower resolution than most but certainly not the lowest. Why make access so difficult when I’ve already provided my credentials? What’s being protected? My retirement savings? No. My bank account with its wad of cash? No. My medical record with all that PHI (Protected Health Information)? No. What’s being protected is my ability to order “free” stuff available through my credit card rewards program. This degree of difficulty to gain access does not make sense. It is not at all proportional to the value of what’s being protected.

The second issue with this website’s security is credential creation. I do use a password safe. I do use complex passwords. I do not reuse passwords. I have user accounts with banks, investment companies, retirement accounts, schools, job boards, etc. The list goes on and on. If my credential needs to change at any one of these websites, even those that require a user name separate from my email address, what needs to change is my password. Nothing else.

Imagine, if you will, a financial institution issuing you a credit card, and you create a user credential to have online access to your information. Eventually they send you a new card. Maybe you lost your card, maybe you suspected fraud and had it replaced, maybe it was about to expire and the replacement was sent as a routine part of account management. Or how about an investment account where you’ve invested in stocks and index funds through your employer’s savings plan, left the employer, managed the investments yourself for a while, and then turned fund management over to an investment management company.

How many NEW user IDs do you imagine needing to create for the above scenarios? Maybe a new one each time the credit card company issues you a new card and, for the investments, a new one when leaving the employer and another when handing the investment management company responsibility for the investments. Seems crazy, right? You’re still you. The company you’re doing business with is still the same credit card company or the same financial services company. You wouldn’t expect to need to create a new user name and password for any of these changes, would you? You’ve probably had some of these changes happen and not had to create new login credentials.

And yet CU Rewards requires a new login ID whenever a new credit card is issued, even though the card is from the same financial institution and is replacing your previous card. After the new card is issued your account information is still accessible at the financial institution using the same login credentials you’ve always used. The new card continues to accumulate points on the same CU Rewards program, even automatically transferring the points balance from your old card to the new one. Yet CU Rewards won’t give you access to your account without creating a new user name and password?!

This, in my opinion, is absolutely TERRIBLE security design. It creates unnecessary barriers for the user and is not at all proportional to the value of what’s being protected. The requirement is 100% unique among all my other online credentials, which is an indicator that nobody else thinks it is a good process either. There is not a single other business, be it a bank, credit card company, finance company, mortgage lender, investment firm, insurer, medical records portal, or online retailer, that requires a new login credential when a new credit card is issued.

CU Rewards, your credential practices suck. You need to change them to stop sucking.

PayPal scam

Illustrations to help you avoid the scam.

Another example of a scam email. It copies PayPal’s look to a T. The apparent sender, service@intl.paypal.com, is not the actual email address! The actual address begins after the “<“. It is an indecipherable address, and once you spot the “@” sign you see it isn’t @paypal.com. This isn’t a PayPal email.

Don’t click the button in the email that says “Log in Now”. It will go to a website that looks like PayPal but it’s not. If you enter your PayPal credentials to login then your PayPal account has just been compromised. Don’t do it.

Ubuntu server upgrade 16.04 to 18.04 (20.04 pending)

Virtualize, document, and test. The surest way to upgrade success.

For years my server has been running my personal websites and other services without a hitch. It was on Ubuntu 16.04, more than four years old at this point, with only a year left on the 16.04 support schedule. Plus 20.04 is out. Time to move to the latest platform without rushing, rather than make the transition with support ended or time running out.

With the above in mind I decided to upgrade my 16.04.6 server to 20.04 and get another five years of support on deck. I’m halfway there, at 18.04.4, and hovering for the next little while before the bump up to 20.04. The pause is because of a behavior of do-release-upgrade that I learned about while planning and testing the upgrade.

It turns out that do-release-upgrade won’t actually run an LTS-to-LTS upgrade until the new version’s first point release is out. A switch, -d, must be used to override that. Right now 20.04 is just that, 20.04. Once it’s 20.04.1 the upgrade will run without the switch. Per “How to upgrade from Ubuntu 18.04 LTS to 20.04 LTS today”, the switch, which is intended to enable upgrading to a development release, performs the upgrade to 20.04 anyway because 20.04 is already released.
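In other words, the difference is just the one switch; this is how I understand it from that article and my own testing:

$ sudo do-release-upgrade        # on 18.04 today this declines to upgrade, since 20.04.1 isn't out yet
$ sudo do-release-upgrade -d     # -d skips the point-release check and upgrades to 20.04 now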

I’m interested in trying out WireGuard, the VPN support built into 20.04, so I may try the -d before 20.04.1 gets here. In the meantime let me tell you about the fun I had with the upgrade.

First, as you should always see in any story about an upgrade: backup! I did, several different ways, mostly as experiments to see whether I want to change how I’m doing it now (rsync). An optional feature of 20.04 that looks to make backups simpler and more comprehensive is ZFS. It’s newly integrated into Ubuntu and I want to try it for backups.
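For now the rsync approach is essentially the standard full-system copy with the pseudo-filesystems excluded. A sketch, not my exact command; the destination path is a placeholder:

$ sudo rsync -aAXv --delete \
      --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
      / /mnt/backup/server/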

I got my backups, then took the server offline to get a system image with Clonezilla. Then I used VBoxManage convertfromraw to turn the disk image into a VDI file. That gave me a clone of the server in VirtualBox to practice upgrading and work out any kinks.
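The conversion itself is a one-liner. The file names here are just examples, and the input has to be a raw disk image:

$ VBoxManage convertfromraw server-disk.img server-disk.vdi --format VDI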

The server runs several websites, a MySQL server for the websites and other things, an SSH server for remote access, NFS, phpMyAdmin, DNS, and more. They are either accessed remotely or from a LAN client. Testing those functions required connecting a client to the server. VirtualBox made that a simple trick.

In the end my lab setup was two virtual machines, my cloned server and a client, on a virtual network. DHCP for the client was provided by the VirtualBox internal network, the server had a fixed IP on the same subnet, and the server provided DNS for the network.
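Roughly, the VirtualBox side of that setup looks like this. The VM names, network name, and addresses are placeholders; the server’s fixed IP is configured inside the guest itself:

$ VBoxManage modifyvm "server-clone" --nic1 intnet --intnet1 "labnet"
$ VBoxManage modifyvm "client"       --nic1 intnet --intnet1 "labnet"
$ VBoxManage dhcpserver add --netname labnet --ip 192.168.56.2 --netmask 255.255.255.0 \
      --lowerip 192.168.56.100 --upperip 192.168.56.200 --enable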

I ran the 16.04 to 18.04 upgrade on the server numerous times, taking snapshots to roll back as I made tweaks to the process to confirm each feature worked. Once I had a final process I did the upgrade on the virtual machine three times to see if I could find anything I might have missed or some clarification to make to the document. Success x3 with no changes to the document!
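Snapshots made the repetition painless: roll back, tweak, run it again. VM and snapshot names here are examples:

$ VBoxManage snapshot "server-clone" take "pre-upgrade"
$ VBoxManage snapshot "server-clone" restore "pre-upgrade"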

Finally I ran the upgrade on the production hardware. It went exactly as per the document, which of course is a good thing. Uneventful, but slower than on the virtual machine, which was expected. The virtual machine host is at least five years newer than the server hardware and has an SSD too.

I’ll continue running on 18.04 for a while and monitor logs for things I might have missed. Once I’m convinced everything is good then I’ll either use -d to get to 20.04 or wait until 20.04.1 is out and do it then.

Jonas Salk Middle School Career Day

A presentation about information technology with demonstrations.

I volunteered to create a presentation for career day at school. Actually, my daughter asked me and I said “okay”. Then career day presentations were changed from in person to online because of the coronavirus.

It would have been so much easier for me to do in person. I’m certain the total time spent would have been less than what I needed to produce the video! Everything I wanted to present could have been done live. Timing would be easier and adjustments could be made in each session depending on the interest of the previous audience and questions during the presentation.

That wasn’t to be.

The good thing about the video is I was able to produce it. The bad things are obvious in review. There are several parts where the dialog is disjointed and doesn’t flow with events on the screen. The arrangement of some screen elements blocks others in an undesirable way. And I need to think more about the audience. This is likely much better suited to high school seniors than eighth graders. Work more on the script and be EXPRESSIVE!

Making this video was an enjoyable and challenging experience. I had to learn things I’d never known to make the video. And watching myself and the content I can see how it could easily be improved. Information I’ll tuck away to use if and when there’s a next time.

If you’d like to check out the video, here it is.

At the end of the video is a list of the software used to produce it. That same list, with working links, is below.

Ubuntu 18.04 runs the laptop used to create this video (it’s an alternative to Windows, OS X, and Chrome OS). https://ubuntu.com/

OpenShot video editor was used to create the video. https://www.openshot.org/

vokoscreen made the screen video captures that got edited in OpenShot. https://linuxecke.volkoh.de/vokoscreen/vokoscreen.html

GIMP, GNU Image Manipulation Program, was used to create or edit some of the images in the video and to obscure and alter some portions of the video images. https://www.gimp.org/

Cheese was used to record my head shot and voice.
https://wiki.gnome.org/Apps/Cheese

Pick and OpenShot’s chroma key effect were used to make the background behind my head transparent rather than have it appear in a box that blocked the background. https://www.kryogenix.org/code/pick/

I used LibreOffice Writer to take notes and make plans as I developed the video, and for the scripts I used to guide narration. LibreOffice Calc helped with calculating how to adjust the length of some clips to fit the target time. https://www.libreoffice.org/

Fake news!

Be informed, not misinformed.

Fake news has been a problem since the Internet arrived (before that too, actually, but it was much easier to recognize then). With the rise of social media it has become a serious problem that is influencing large numbers of people with false and misleading information.

With a presidential election in the offing and intelligence services currently warning about active foreign interference, now would seem a good time to brush up on identifying fake news. Prevent oneself from going off half-cocked at someone or making a choice based on a false story.

I found an NPR article, With An Election On The Horizon, Older Adults Get Help Spotting Fake News, along with training about the problem.

And although the article’s title includes the words “Older Adults” the lessons are for everyone. There are many adults who need to be able to recognize and acknowledge fake news. Not only “Older Adults”.

Definitely good resources to be familiar with and to share. Please spread far and wide.

JavaScript and modular pages

An easy example of simplified page maintenance.

I have written about a website I maintain, the Senior Computer Learning Center. It was built from scratch when I knew absolutely nothing about coding web pages, and had no understanding at all of how to use libraries or a CMS to style and customize pages.

One thing I realized right away: even on a simple site it would be useful to build the navigation menu once and reuse it on each page. Less coding per page and a single place to edit the menu for changes.

With my first ever attempts at coding a simple web page I couldn’t figure out how to load external elements into the page if they didn’t have a tag like <img>.

Now I’ve done it, learned how to load a document node from an external file. Understanding the jQuery selector, $(), and how to pass an object to a function solved the problem.

Trying to solve the problem of maintaining the menu in one place and using it on multiple pages, I searched and searched but couldn’t find examples that helped. I was trying to add a predefined menu to any <body> I wanted by loading it from a file.

After a lot of reading and trial and error I ended up with an external JavaScript file, custom.js. Currently it contains only one function. It adds DOM elements to the page so the menu is built dynamically when the page is loaded. Same menu on each page and only one place to maintain it. Much better maintainability.

Below is the HTML for the menu, which used to be repeated in each of the seven pages of the SCLC site, embedded in a call to jQuery’s html() method that fills in a node in the document.

// Build the shared navigation menu inside the element passed in (a jQuery object).
// Edit the markup here and every page that calls myMenu() picks up the change.
function myMenu(target) {
    target.html('<h2>Winter 2019<br>Spring 2020</h2> \
                   <a href="index.html">Home</a> \
                   <a href="announcements.html">Announcements</a> \
                   <a href="schedule_changes.html">Schedule Changes</a> \
                   <a href="course_desc.html">Course Descriptions</a> \
                   <a href="schedules.html">Schedules</a> \
                   <a href="calendar.html">Calendar</a> \
                   <a href="enrollment.html">Enrollment Information</a>');
}

Now each of the seven pages uses a short <script> to get the menu when loading. Nothing to change when the menu changes.

<nav id="mainMenu">
     <!-- myMenu() from custom.js fills this nav with the shared menu when the page loads -->
     <script>myMenu($("nav#mainMenu"));</script>
</nav>

Modify the html() in myMenu() and all pages display the updated menu when refreshed.

Plenty more to do to the SCLC site to make it more maintainable and more useful for end users. Using a common routine on multiple pages is just one of the first steps.

Chasing my tail and finding something new to learn

Experience and keeping notes help limit chasing your tail.

In my last post, Help people get the job done, I wrote about disappointment with how a change was made in the end users’ environment at my office. The change required they do something different to accommodate a purely technical change in systems. Once connected, their work was no different than it had been.

Not building in the logic to connect them to the new resource and make the change transparent seemed to me like a failure on our part. IT should use its skills to make the computers work for people rather than the other way around, simplifying the user experience so they can focus on the work they do.

I made some changes to my personal websites to demonstrate that redirection could be used to point at the correct work websites. It was meant to illustrate the analogous idea that one work website could be pointed at the other. Going to my websites, train.boba.org and sclc.boba.org, immediately sent a browser to the intended work website. Success!

After demonstrating the capability I disabled it so my URLs go to their originally intended websites.

So where’s chasing my tail come in?

While experimenting with the redirect I modified the boba.org configuration. For a while it wasn’t possible to get to that site at all. Then, depending on the URL, I got to it or to andrewboba.com. Putting boba.org in the browser’s address bar ended up at andrewboba.com, but not correctly displayed. Putting http://boba.org went to the correct site but didn’t rewrite the link as secure, https://.

To stop being distracted by that issue and continue testing the redirect I disabled the boba.org website.

Worked more with the redirect over a few days. Got to the point I felt I understood it well and tried boba.org again.

It wouldn’t come up no matter what I tried. Everything went to a proper display of andrewboba.com.

I increased the logging level. I created a log specifically for boba.org (it didn’t show up, which was my first clue). Not seeing the log, I went through other site configurations to see how their custom logs were set up. They appeared to be the same.

Finally I decided to try boba.org without a secure connection. I wasn’t sure of the name of the .conf file for secure connections and decided to look in Apache’s ../sites-enabled directory to see if there were separate .conf files for https connections.

And guess what I found? There are separate .confs for https, yes. But there were no .confs of any kind for boba.org! Then it hit me. There had been no log files for boba.org because there were no ../sites-enabled .conf files for boba.org.

And then I finally remembered I had disabled the site myself to focus on the redirect. Chasing my tail because I’m very new at Apache webserver administration. I disabled a feature to focus on making something happen, then forgot the change I made once I resolved the first challenge.
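For anyone else new to Apache on Ubuntu, enabling or disabling a site is just a symlink under ../sites-enabled, managed by two helper commands (shown here with my site name):

$ sudo a2dissite boba.org          # removes the symlink from sites-enabled; the site "disappears"
$ sudo a2ensite boba.org           # puts the symlink back
$ sudo systemctl reload apache2    # apply the change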

Better notes, and more experience, would have helped me remember sooner.

And I also found something new to learn. While boba.org was disabled, andrewboba.com was being displayed. I would prefer “not found” or something similar to show up rather than a different website on the server.

New challenge. Figure out how to serve a desired site/page not available message when a site on this server is down.

One of the reasons I like information technology. Always something new to learn at every turn.

Help people get the job done

IT’s job is supposed to be making things easier for users.

Users have been using a single URL for access to all their web applications, and now the backend for just one of them is moving to another server to avoid end of life. Where I am now, the answer was to send users a new URL and tell them to use it if that application is needed.

It is accessed via Citrix, and I have to say I don’t understand Citrix architecture well. However, the users of this app apparently don’t use any other app via Citrix.

In the meeting about the change I wondered out loud whether users could just be redirected. No need to learn a new URL, no need to know when or if to use it. Just send the app’s users to the new URL when they attempt to use the app.

The response was, “no, can’t do that”, “don’t have wild card certificates”, “can’t install existing certificates on other servers”, “can’t change DNS”, “can’t send people from the old site to the new site”, and so on…

My reasoning was to simplify the user experience. Why make people learn something new if there’s a way to get them to the new webapp without learning a new URL? As a technologist I feel VERY strongly my job and the job of others like me is to enable people to do their work and not force them to understand or learn technology that is not relevant to that.

Back to the objections. A DNS name can have its network address updated periodically. This very website has a dynamic address and can still be found by name even after an address change. The server is running a job to monitor the public address and update DNS when it changes. Automatic. Hands off.
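The job is nothing fancy. A sketch of the idea, where the IP-check service and the update URL/token are placeholders for whatever the DNS provider offers:

#!/bin/bash
# Sketch of a dynamic DNS updater: compare the current public address to the
# last one seen and push an update only when it changes.
current=$(curl -s https://ifconfig.me)
last=$(cat /var/tmp/last-public-ip 2>/dev/null)
if [ -n "$current" ] && [ "$current" != "$last" ]; then
    # placeholder update call; substitute the DNS provider's real API
    curl -s "https://dns.example.com/update?hostname=boba.org&ip=${current}&token=SECRET"
    echo "$current" > /var/tmp/last-public-ip
fi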

No certificate changes required. If siteA and siteB continue to operate as siteA and siteB and each has its own valid certificate then no certificate change is needed. When someone browses to the site the browser requests a secure connection. The trustworthiness of the connection is determined by information the site provides and the certificate authorities the browser trusts. No need to move certificates anywhere. Even if there were, that can be done without renewing certificates.

Sending people from one site to another, in its simplest form (as far as I know), only requires a Redirect. For websiteA and websiteB, if visitors to websiteA should actually be going to websiteB, tell websiteA’s webserver to redirect browsers to websiteB. When somebody browses to websiteA the webserver sends a message back to the user’s web browser saying, in effect, you need to ask for websiteB instead. Then the browser does just that and ends up at websiteB, even if it’s on a different server in a different country.
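In Apache terms that can be a single directive in websiteA’s site configuration. A sketch, reusing the placeholder names above:

$ cat /etc/apache2/sites-available/websiteA.conf   # relevant lines only; names are placeholders
<VirtualHost *:80>
    ServerName websiteA.example.com
    # answer every request with "ask websiteB instead"
    Redirect permanent / https://websiteB.example.com/
</VirtualHost>
$ sudo systemctl reload apache2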

I actually set up Redirect on this server to test my understanding and be certain it would work the way I thought. It did. Visiting one of my webhosts on this server automatically directed me to workAppA and visiting another webhost went automatically to workAppB.

In doing the reading to get Redirect set up I learned it could be as granular as by user or program on an Apache server. I suppose it’s possible Citrix doesn’t have a way to support that. But I don’t believe it. I know Citrix apps can be secured by login so userA and userB don’t see all the same apps. I’ve written PowerShell to report which security groups are associated with which published apps on a Citrix server.

In this case telling end users YOU HAVE TO LEARN SOMETHING NEW to keep doing your job the same way strikes me as IT not doing its job!

Certbot automatic authentication

Enable certificate auto renew after a manual renew.

I have a number of websites run from my own web server, like this one. Something I set up to experiment with web technologies and gain some insight into how things work.

One of the things I did was set up HTTPS for the websites once I found out about EFF’s Let’s Encrypt service. I wanted to see if I could provide secure connections to my sites even if they’re only for browsing.

I was able to get HTTPS working for my sites and have the certificates renew automatically. Then I changed ISPs. With TWC, now Spectrum, there was never a problem with the automated renewals. With Optimum the renewals didn’t work.

Emails alerting me to certificate expiration were my first indication there was a problem.

The logs indicated that the challenge files on my server couldn’t be accessed to confirm my control of the website. Plus, entering the website address as boba.org or http://boba.org no longer connected to the website (externally; on the local network it still worked). Connecting to any of my hosted sites now required prefixing https:// to the name. Automatic redirection from http to https no longer worked.

After talking, chatting online actually, with Optimum they told me yup, that’s just the way it works. “We block port 80 to protect you” and “you can’t unblock it”.

Panic! How to maintain my certificates so https keeps working? Fortunately certbot offers a manual option that requires updating DNS TXT records. It’s slow and cumbersome and NOT suitable for long-term maintenance of even one certificate containing one domain, but it works.

Sixty days pass and the certificate expiration emails start again. This time I determined that I’d speak to a person at Optimum and not use the chat. After some time with my Optimum support tech, and after she escalated to a supervisor, I was told there is in fact a way to open port 80. And it is a setting available to me via my account login. So I opened port 80 and thought: all set now, renewals will happen automatically.

Not so. I got more certificate expiration warning emails. What to do? All the automated renewal tests I tried indicated a problem with a plugin. I read the certbot documentation, did searches for the error, and tried to find a solution that applied to the problem I had. I didn’t find it. But I did get a clue from a post that said once a manual renewal has been done, that setting needs to be removed before automated renewal will work again.

After more digging I discovered the certificate config files in /etc/letsencrypt/renewal. In them were two variables that seemed likely to be related to the auto renew problem. They were authenticator = and pref_challs =. The settings were manual and dns-01 respectively.

I never touched these files. It turns out doing a manual renewal with DNS TXT records using the command sudo certbot certonly --manual --preferred-challenges dns --cert-name <name> -d <name1>,<name2>,etc just changes the config files in the background. Attempting auto renew later doesn’t work because the settings in the config files have been changed to authenticator = manual and pref_challs = dns-01.

There was no help I could find that explicitly listed the acceptable values for these variables. And I didn’t have copies of these files from before the changes. After digging around in the help for a while I decided it was likely they should be authenticator = apache and pref_challs = http-01.

I made the change for one certificate and tested auto renew. Eureka, it worked!!
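For that first certificate the change amounted to editing its renewal config; a sketch with an example certificate name, keeping a copy of the file first:

$ sudo cp /etc/letsencrypt/renewal/boba.org.conf ~/boba.org.conf.bak   # just in case
$ sudoedit /etc/letsencrypt/renewal/boba.org.conf                      # set authenticator = apache and pref_challs = http-01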

Next I changed the config files for all the certificates and did a test to see if it worked.

$ sudo certbot renew --dry-run
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/alanboba.net/fullchain.pem (success)
/etc/letsencrypt/live/andrewboba.org/fullchain.pem (success)
/etc/letsencrypt/live/danielboba.org/fullchain.pem (success)
/etc/letsencrypt/live/kevinkellypouredfoundations.com/fullchain.pem (success)
/etc/letsencrypt/live/www.anhnguyen.org/fullchain.pem (success)
/etc/letsencrypt/live/www.conorboba.org/fullchain.pem (success)
/etc/letsencrypt/live/www.mainguyen.org/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)

It worked. All my certificates will again auto renew.

This website was created after the problems began. So I didn’t even attempt to make it https. Now that I’ve figured out how to have my certs auto renew again I’ll be converting this site over to https too.



Virtual Host??

Setting up Apache to support multiple websites on one host. My server already does that for my public websites.

However I want to control what is returned to the browser if a site isn’t available for some reason. So I’ve set up a virtual server with multiple sites. Each site works when enabled. However, if a site is set up to be unavailable (disabled, no index file, etc.) the default page returned to the browser is not what I’d like.

Need to identify a few fail conditions, see what the server returns when each condition exists, see if what’s returned for a given condition is the same regardless of which site generates the failure, then figure out why the webserver is sending back the page it does.

Reasons a site might not be available:

  • site not being served, e.g. not enabled on server
  • site setting wrong, e.g. DocumentRoot invalid
  • site content wrong, no index file

Answers that might be returned:

  • site not available
  • forbidden
  • …others I’ve seen but don’t remember now

From what I’ve read it seems whatever’s in 000-default.conf should control which page/site loads when a site isn’t available. That’s not the result I’m getting.

Either I’m doing it wrong or I’m just not understanding what’s supposed to happen and how to make it happen.
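For reference, this is the sort of catch-all default I’ve been experimenting with. A sketch with placeholder paths, and clearly not yet doing what I expect:

$ cat /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>
    # loaded first (000- sorts first), so it should catch requests matching no other ServerName
    DocumentRoot /var/www/default
    # /var/www/default/index.html would hold a plain "site not available" page
</VirtualHost>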

More digging…