Fake news!

Be informed, not misinformed.

Fake news has been a problem since the Internet arrived (before that, actually, but it was much easier to recognize). With the rise of social media it has become a serious problem, influencing large numbers of people with false and misleading information.

With a presidential election in the offing and intelligence services currently warning about active foreign interference, now would seem a good time to brush up on identifying fake news. It can keep you from going off half-cocked at someone, or from making a choice based on a false story.

I found an NPR article, With An Election On The Horizon, Older Adults Get Help Spotting Fake News, along with training about the problem.

And although the article’s title includes the words “Older Adults,” the lessons are for everyone. There are many adults of every age who need to be able to recognize and acknowledge fake news, not only “Older Adults.”

Definitely good resources to be familiar with and to share. Please spread far and wide.

JavaScript and modular pages

An easy example of simplified page maintenance.

I have written about a website I maintain, the Senior Computer Learning Center. It was built from scratch when I knew absolutely nothing about coding webpages and had no understanding at all of how to use libraries or a CMS to style and customize pages.

One thing I realized right away: even on a simple site it would be useful to build the navigation menu once and reuse it on each page. Less coding per page and a single place to edit the menu when it changes.

With my first-ever attempts at coding a simple web page I couldn’t figure out how to load external elements into the page if they didn’t have a tag like <img>.

Now I’ve done it: learned how to add document nodes from an external file. Understanding the jQuery selector, $(), and how to pass a jQuery object to a function solved the problem.

Trying to solve the problem of maintaining the menu in one place and using it on multiple pages, I searched and searched but couldn’t find examples that helped. I was trying to add a predefined menu to any <body> I wanted by loading it from a file.

After a lot of reading and trial and error I ended up with an external JavaScript file, custom.js. Currently it contains only one function. It adds DOM elements to the page so the menu is built dynamically when the page is loaded. Same menu on each page and only one place to maintain it. Much better maintainability.

This is the code for the menu…

// custom.js - shared by every page on the site.
// Build the common navigation menu inside the element passed in (a jQuery object).
function myMenu(target) {
    target.html('<h2>Winter 2019<br>Spring 2020</h2> \
                   <a href="index.html">Home</a> \
                   <a href="announcements.html">Announcements</a> \
                   <a href="schedule_changes.html">Schedule Changes</a> \
                   <a href="course_desc.html">Course Descriptions</a> \
                   <a href="schedules.html">Schedules</a> \
                   <a href="calendar.html">Calendar</a> \
                   <a href="enrollment.html">Enrollment Information</a>');
}

And this code is in the pages to call it…

<nav id="mainMenu">
     <script>myMenu($("nav#mainMenu"));</script>
</nav>
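
For the call above to work, jQuery and custom.js both have to be loaded before the browser reaches the <nav>. Roughly, something like this in each page’s <head> (the jQuery version and CDN path here are just placeholders for whatever copy of jQuery the pages actually use):

<head>
    <!-- jQuery first, then custom.js, both before the nav in the body -->
    <script src="https://code.jquery.com/jquery-3.4.1.min.js"></script>
    <script src="custom.js"></script>
</head>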

Plenty more to do on the SCLC site to make it more maintainable and more useful for end users. Using a common routine on multiple pages is just one of the first steps.

Chasing my tail and finding something new to learn

Experience and keeping notes help limit the tail chasing.

In my last post, Help people get the job done, I wrote about disappointment with how a change was made in the end user’s environment at my office. The change required them to do something different to accommodate a purely technical change in systems. Once connected, their work was no different than it had been.

Not building in the logic to connect them to the new resource and make the change transparent to the user seemed to me like a failure on our part. IT should use its skills to simplify the user experience so people can focus on the work they do, making the computers work for people rather than the other way around.

I made some changes to my personal websites to demonstrate that redirection could be used to point at the correct work websites. It was meant to illustrate the analogous idea that one work website could be pointed at the other. Going to my websites, train.boba.org and sclc.boba.org, immediately sent a browser to the intended work website. Success!

After demonstrating the capability I disabled it so my URLs go to their originally intended websites.

So where’s chasing my tail come in?

While experimenting with the redirect I modified the boba.org configuration. For a while it wasn’t possible to get to that site at all. Then, depending on the URL, I got either that site or andrewboba.com. Putting boba.org in the browser’s address bar ended up at andrewboba.com, but not correctly displayed. Putting in http://boba.org went to the correct site but the link wasn’t rewritten as secure, https://.

To stop being distracted by that issue and continue testing the redirect I disabled the boba.org website.
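
On a Debian-style Apache install, disabling a site and bringing it back is just the stock a2dissite/a2ensite helpers, which manage the symlinks in ../sites-enabled (the .conf name below is a guess at what mine is called):

# Disable the boba.org site and reload Apache so it takes effect
sudo a2dissite boba.org.conf
sudo systemctl reload apache2

# Later, bring it back the same way
sudo a2ensite boba.org.conf
sudo systemctl reload apache2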

I worked more with the redirect over a few days, got to the point where I felt I understood it well, and tried boba.org again.

It wouldn’t come up no matter what I tried. Everything went to a proper display of andrewboba.com.

I increased the logging level. I created a log specifically for boba.org (it didn’t show up, which was my first clue). Not seeing the log, I went through other site configurations to see how their custom logs were set up. They appeared to be the same.

Finally I decided to try boba.org without a secure connection. I wasn’t sure of the name of the .conf file for secure connections and decided to look in Apache’s ../sites-enabled directory to see if there were separate .conf files for https connections.

And guess what I found? There are separate .confs for https, yes. But there were no .confs of any kind for boba.org! Then it hit me. There had been no log files for boba.org because there were no ../sites-enabled .conf files for boba.org.

And then I finally remembered I had disabled the site myself to focus on the redirect. Chasing my tail because I’m very new at Apache webserver administration. I disabled a feature to focus on making something happen then forgot the change I made when I resolved the first challenge.

Better notes, and more experience, would have helped me remember sooner.

And I also found something new to learn. While boba.org was disabled, andrewboba.com was being displayed. I’d prefer “not found” or something similar to show up rather than a different website on the server.

New challenge: figure out how to serve a “desired site/page not available” message when a site on this server is down.

One of the reasons I like information technology. Always something new to learn at every turn.

Help people get the job done

IT’s job is supposed to be making things easier for users.

Users have been using a single URL to access all their web applications, and now the backend for just one of them is being moved to another server to avoid end of life. What happens? Where I am now, users are sent a new URL and told to use it if that application is needed.

It is accessed via Citrix, and I have to say I don’t understand Citrix architecture well. However, the users of this app apparently don’t use any other app via Citrix.

In the meeting about the change I wondered out loud whether users could just be redirected. No need to learn a new URL, no need to know when or if to use it. Just send the app’s users to the new URL when they attempt to use the app.

The response was, “no, can’t do that”, “don’t have wildcard certificates”, “can’t install existing certificates on other servers”, “can’t change DNS”, “can’t send people from the old site to the new site”, and so on…

My reasoning was to simplify the user experience. Why make people learn something new if there’s a way to get them to the new webapp without learning a new URL? As a technologist I feel VERY strongly my job and the job of others like me is to enable people to do their work and not force them to understand or learn technology that is not relevant to that.

Back to the objections. A DNS name can have its network address updated periodically. This very website has a dynamic address and can still be found by name even after an address change. The server is running a job to monitor the public address and update DNS when it changes. Automatic. Hands off.
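
The job itself is nothing fancy. A minimal sketch of the idea, run from cron; the provider update URL below is hypothetical, since every dynamic DNS service has its own:

#!/bin/bash
# Compare the current public address to the last one recorded and,
# if it changed, tell the DNS provider about it.
CURRENT=$(curl -s https://api.ipify.org)
LAST=$(cat /var/tmp/last_ip 2>/dev/null)

if [ "$CURRENT" != "$LAST" ]; then
    # Hypothetical provider endpoint; substitute the real update URL
    curl -s "https://dns.example.com/update?hostname=boba.org&ip=$CURRENT"
    echo "$CURRENT" > /var/tmp/last_ip
fi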

No certificate changes required. If siteA and siteB continue to operate as siteA and siteB, and each has its own valid certificate, then no change in certificates is needed. When someone browses to a site the browser requests a secure connection. The trustworthiness of the connection is determined by information the site provides and the certificate authorities the browser trusts. No need to move certificates anywhere. Even if there were, that could be done without renewing the certificates.

Sending people from one site to another, in its simplest form (as far as I know), only requires a Redirect. For websiteA and websiteB, if visitors to websiteA should actually be going to websiteB, tell websiteA’s webserver to redirect browsers to websiteB. When somebody browses to websiteA the webserver sends a message back to the user’s web browser that says you need to ask for websiteB instead. Then the browser does just that and ends up at websiteB, even if it’s on a different server in a different country.
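
In Apache that’s essentially one line in websiteA’s configuration. A sketch using the stand-in names from above (the certificate paths are made up; the point is that websiteA keeps its own certificate and simply answers with a redirect):

<VirtualHost *:443>
    ServerName websiteA.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/websiteA.crt
    SSLCertificateKeyFile /etc/ssl/private/websiteA.key

    # Every request for websiteA gets told to ask for websiteB instead
    Redirect permanent / https://websiteB.example.com/
</VirtualHost>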

I actually set up Redirect on this server to test my understanding and be certain it would work the way I thought. It did. Visiting one of my webhosts on this server automatically directed me to workAppA and visiting another webhost went automatically to workAppB.

In doing the reading to get Redirect set up I learned it could be as granular as by user or program on an Apache server. I suppose it’s possible Citrix doesn’t have a way to support that. But I don’t believe it. I know Citrix apps can be secured by login so userA and userB don’t see all the same apps. I’ve written PowerShell to report which security groups are associated with which published apps on a Citrix server.

In this case telling end users YOU HAVE TO LEARN SOMETHING NEW to keep doing your job strikes me as IT not doing its job!

Certbot automatic authentication

Enable certificate auto renew after a manual renew.

I have a number of websites run from my own web server, like this one. Something I set up to experiment with web technologies and gain some insight into how things work.

One of the things I did was set up HTTPS for the websites once I found out about EFF’s Let’s Encrypt service. I wanted to see if I could provide secure connections to my sites even if they’re only for browsing.

I was able to get HTTPS working for my sites and have the certificates renew automatically. Then I changed ISPs. With TWC, now Spectrum, there was never a problem with the automated renewals. With Optimum the renewals didn’t work.

Emails alerting me to certificate expiration were my first indication there was a problem.

The logs indicated that files on my server couldn’t be manipulated to confirm my control of the website. Plus, entering the website address as boba.org or http://boba.org no longer connected to the website (externally; on the local network it still worked). Connecting to any of my hosted sites now required prefixing https:// to the name. Automatic redirection from http to https no longer worked.

After talking with Optimum (chatting online, actually) I was told yup, that’s just the way it works: “We block port 80 to protect you” and “you can’t unblock it”.

Panic! How to maintain my certificates so https continues working? Fortunately certbot offers a manual option that requires updating DNS TXT records. It’s slow and cumbersome and NOT suitable for long term maintenance of even one certificate containing one domain but it works.

Sixty days pass and the certificate expiration emails start again. This time I determined that I’d speak to a person at Optimum and not use the chat. After some time with my Optimum support tech, and after she escalated to a supervisor, I was told there is in fact a way to open port 80. And it is a setting available to me via my account login. So I opened port 80 and thought: all set now, renewals will happen automatically.

Not so. I got more certificate expiration warning emails. What to do? All the automated renewal tests I tried indicated a problem with a plugin. I read the certbot documentation, did searches for the error, and tried to find a solution that applied to my problem. I didn’t find one. But I did get a clue from a post that said once a manual renewal has been done, that setting needs to be removed before automated renewal will work again.

After more digging I discovered the certificate config files in /etc/letsencrypt/renewal. In them were two variables that seemed likely to be related to the auto renew problem. They were authenticator = and pref_challs =. The settings were manual and dns-01 respectively.

I never touched these files. It turns out doing manual renewal with DNS TXT records using the command sudo certbot certonly --manual --preferred-challenges dns --cert-name <name> -d <name1>,<name2>,etc just changes the config files in the background. Attempting auto renew later doesn’t work because the settings in the config files have now been changed to authenticator = manual and pref_challs = dns-01.

There was no help I could find that explicitly listed the acceptable values for these variables. And I didn’t have copies of these files from before the changes. After digging around in the help for a while I decided it was likely they should be authenticator = apache and pref_challs = http-01.
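
For the record, this is the kind of change I made in each file under /etc/letsencrypt/renewal (only the two lines that matter are shown; the file name is an example and everything else in the file stays as certbot wrote it):

# /etc/letsencrypt/renewal/boba.org.conf
# Before - left over from the manual DNS renewals
authenticator = manual
pref_challs = dns-01

# After - back to the Apache plugin and the HTTP challenge
authenticator = apache
pref_challs = http-01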

I made the change for one certificate and tested auto renew. Eureka, it worked!!

Next I changed the config files for all the certificates and did a test to see if it worked.

$ sudo certbot renew --dry-run
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded. The following certs have been renewed:
/etc/letsencrypt/live/alanboba.net/fullchain.pem (success)
/etc/letsencrypt/live/andrewboba.org/fullchain.pem (success)
/etc/letsencrypt/live/danielboba.org/fullchain.pem (success)
/etc/letsencrypt/live/kevinkellypouredfoundations.com/fullchain.pem (success)
/etc/letsencrypt/live/www.anhnguyen.org/fullchain.pem (success)
/etc/letsencrypt/live/www.conorboba.org/fullchain.pem (success)
/etc/letsencrypt/live/www.mainguyen.org/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)

It worked. All my certificates will again auto renew.

This website was created after the problems began. So I didn’t even attempt to make it https. Now that I’ve figured out how to have my certs auto renew again I’ll be converting this site over to https too.
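
The conversion should only take a couple of commands once I get to it. A sketch, with a placeholder domain standing in for this site’s name:

# Request a certificate and let certbot update the Apache config for the site
sudo certbot --apache -d www.example.org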



Virtual Host??

Setting up Apache to support multiple websites on one host. My server already does that for my public websites.

However, I want to control what is returned to the browser if a site isn’t available for some reason. So I’ve set up a virtual server with multiple sites. Each site works when enabled. However, if a site is set up to be unavailable (disabled, no index file, etc.) the default page returned to the browser is not what I’d like.

I need to identify a few fail conditions, see what the server returns when each condition exists, see if what’s returned for a given condition is the same regardless of which site generates the failure, then figure out why the webserver is sending back the page it does.

Reasons not available:

  • site not being served, e.g. not enabled on server
  • site setting wrong, e.g. DocumentRoot invalid
  • site content wrong, no index file

Answers that might be returned:

  • site not available
  • forbidden
  • …others I’ve seen but don’t remember now

From what I’ve read it seems whatever’s in 000-default.conf should control which page/site loads when a site isn’t available. That’s not the result I’m getting.
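
For reference, this is the sort of catch-all the documentation describes and that I expected to work: a default VirtualHost whose .conf sorts first in ../sites-enabled, so it answers any request that no enabled site claims (a sketch, clearly not yet working for me; the paths and ServerName are made up):

# /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>
    ServerName default.invalid
    DocumentRoot /var/www/default

    # /var/www/default/index.html holds the "site not available" page
</VirtualHost>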

Either I’m doing it wrong or I’m just not understanding what’s supposed to happen and how to make it happen.

More digging…