PayPal scam

Illustrations to help you avoid the scam.

Another example of a scam email. It copies PayPal’s look to a T. The apparent sender, service@intl.paypal.com, is not the actual email address! The actual address begins after the “<”. It is an indecipherable address, and once you spot the “@” sign you can see it isn’t @paypal.com. This isn’t a PayPal email.

Don’t click the button in the email that says “Log in Now”. It leads to a website that looks like PayPal but isn’t. If you enter your PayPal credentials to log in, your PayPal account has just been compromised. Don’t do it.

We be scamming. Seems yes, but… maybe no?

Never seen this before.

I am unemployed due to COVID-19, probably something that’s happened to many of you. I’ve been searching for work continuously since losing my director of IT role. I haven’t gotten an offer for anything equivalent and have had periods of unemployment where I didn’t get responses to anything I applied for. The low point was when I was so desperate I applied for an hourly position at Dunkin’ Donuts and they didn’t call me back! I have gotten help desk roles, and one of those positions is what recently ended due to coronavirus.

Since I have been continuously searching for employment for years, I’ve got accounts on all the major job boards (CareerBuilder, Monster, Beyond, Indeed) and many minor and regional ones too. And of course I use LinkedIn. My profile is here, Alan Boba. Message me if you need someone to manage your technology.

Recently I was very disappointed by the response to an application: “Thank you but we’re not interested in you”. The position was very local to me, which would have been great. And the IT Manager job description read like an exact match for my skills, one I could have written myself. I was really hopeful when I sent the application and very, very disappointed when the rejection came. Not even a phone screen.

The next position I applied for on CareerBuilder, I was presented with a message as soon as I completed the application: “Would you like to instantly apply to these 26 matching jobs?” Typically I review the job title and description, check the location, and do some other review before applying for a position. This time I just hit “apply”. Right away CareerBuilder came back with a similar “instant apply” message and again I clicked “apply”. This kept happening. I kept clicking. I figured I’d keep clicking until the “matching jobs” ran out. They never did. I stopped after instantly applying to about 500 “matching jobs”.

Wouldn’t you know… the next day I was getting invitations to online interviews. I was skeptical and cautious. The biggest and most immediate red flag was that all the “interviews” were with people using @aol.com and @gmail.com email addresses. No business emails. But hey, I didn’t have any real offers to reply to, and who knows, maybe I’m just too suspicious and one of these was real.

One of them even said they were part of an agribusiness that was started in Australia and is expanding in the US. The business is real, and it even has two locations in the western US that were correctly identified in the chats.

I received a check by FedEx, almost $4,000! Ostensibly it was to buy equipment I would need for my office. A cashier’s check though, not a check drawn on a business account. The letter that came with it is on plain paper, not office stationery. It doesn’t say what I should buy and doesn’t have a business name or address. Plus I am again directed to communicate with a non-business email account, @aol.com.

I’ve tried to validate the check’s bank routing number, and two of the three routing-number websites I found recognize it. I’ve also scanned the check, front and back. No watermarks show up in either scan. And the check doesn’t have a stamp on its face with “valid for xxx days”, a stamp I’ve seen on every cashier’s and corporate check I can recall handling.
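
For the curious, routing numbers carry a check digit, so you can run the same sanity check those websites do with a few lines of shell. This is just a sketch using the published ABA checksum formula; the routing number below is made up, and the fact that it still passes shows why a “valid” routing number is no guarantee the check is good.

rn="123456780"                # made-up routing number, nine digits
weights=(3 7 1 3 7 1 3 7 1)   # ABA check-digit weights
sum=0
for i in {0..8}; do
    sum=$(( sum + ${rn:i:1} * weights[i] ))
done
(( sum % 10 == 0 )) && echo "checksum passes" || echo "checksum fails"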

For now I’m still thinking this is a scam. But I’ll play along because I’ve got the time and I’m unemployed. And who knows, maybe I am just too suspicious.

In case you’re curious and want to see what I’ve received so far, take a look at the letter and check that came in the FedEx package. It does cost money to send via FedEx. So unless a business’s FedEx account has been hijacked, the scammers have spent some money to send me the check.

Ubuntu server upgrade 16.04 to 18.04 (20.04 pending)

Virtualize, document, and test. The surest way to upgrade success.

For years my server has been running my personal websites and other services without a hitch. It was on Ubuntu 16.04, more than four years old at this point and with only a year left on its support schedule. Plus, 20.04 is out. Time to move to the latest platform without rushing rather than make the transition with support ended or time running out.

With the above in mind I decided to upgrade my 16.04.6 server to 20.04 and get another five years of support on deck. I’m halfway there, at 18.04.4, and hovering for the next little while before the bump up to 20.04. The pause is because of a behavior of do-release-upgrade that I learned about while planning and testing the upgrade.

It turns out that do-release-upgrade won’t actually run the upgrade until a version’s first point release is out; a switch, -d, must be used to override that. Right now 20.04 is just that, 20.04. Once it’s 20.04.1 the upgrade will run without the switch. Per “How to upgrade from Ubuntu 18.04 LTS to 20.04 LTS today”, the switch, although intended to enable upgrading to a development release, performs the upgrade to 20.04 because 20.04 is already released.
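
In practice, on an 18.04 machine today the two behaviors look like this (a sketch; the exact message wording may vary):

$ sudo do-release-upgrade        # before 20.04.1 exists: reports no new release found
$ sudo do-release-upgrade -d     # overrides the check and upgrades to 20.04 now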

I’m interested in trying out WireGuard, the VPN newly included in 20.04, so I may use -d before 20.04.1 gets here. In the meantime, let me tell you about the fun I had with the upgrade.

First, as you should expect in any story about an upgrade: backup! I did, several different ways, mostly as experiments to see whether I want to change how I’m doing it now, with rsync. An optional feature of 20.04 that looks to make backup simpler and more comprehensive is ZFS. It’s newly integrated into Ubuntu and I want to try it for backups.
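
If you’re curious what the rsync approach looks like, it can be as simple as this sketch (the destination path is a placeholder, and the excludes skip pseudo-filesystems that shouldn’t be copied):

$ sudo rsync -aAXv --delete \
      --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
      / /mnt/backup/server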

I got my backups, then took the server offline to get a system image with Clonezilla. Then I used VBoxManage convertfromraw to turn the Clonezilla disk image into a VDI file. That gave me a clone of the server in VirtualBox to practice upgrading on and work out any kinks.
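
The conversion itself is a one-liner. A sketch with placeholder file names, assuming the Clonezilla image was saved as (or reassembled into) a single raw disk image:

$ VBoxManage convertfromraw sda.img server-clone.vdi --format VDI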

The server runs several websites, a MySQL server for the websites and other things, an SSH server for remote access, NFS, phpMyAdmin, DNS, and more. They are all accessed either remotely or from a LAN client, so testing those functions required connecting a client to the server. VirtualBox made that a simple trick.

In the end my lab setup was two virtual machines, my cloned server and a client, on a virtual network. DHCP for the client was provided by the VirtualBox internal network; the server had a fixed IP on the same subnet as the internal network and provided DNS for the network.
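
Recreating that from the command line looks roughly like this (VM names, network name, and addresses are placeholders; the server keeps its fixed IP outside the DHCP range):

$ VBoxManage modifyvm "server-clone" --nic1 intnet --intnet1 labnet
$ VBoxManage modifyvm "client" --nic1 intnet --intnet1 labnet
$ VBoxManage dhcpserver add --netname labnet --ip 192.168.50.1 --netmask 255.255.255.0 \
      --lowerip 192.168.50.100 --upperip 192.168.50.150 --enable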

I ran the 16.04 to 18.04 upgrade on the server numerous times, taking snapshots to roll back as I made tweaks to the process to confirm each feature worked. Once I had a final process I ran the upgrade on the virtual machine three times to see if I could find anything I might have missed or some clarification to make to the document. Success x3 with no changes to the document!
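
The snapshot cycle is scriptable too; a sketch with placeholder names:

$ VBoxManage snapshot "server-clone" take "pre-upgrade"      # before each attempt
$ VBoxManage snapshot "server-clone" restore "pre-upgrade"   # roll back and try again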

Finally I ran the upgrade on the production hardware. It went exactly as per the document, which of course is a good thing. Uneventful, but slower than on the virtual machine, which was expected: the virtual machine host is at least five years newer than the server hardware and has an SSD too.

I’ll continue running on 18.04 for a while and monitor logs for things I might have missed. Once I’m convinced everything is good, I’ll either use -d to get to 20.04 or wait until 20.04.1 is out and do it then.

Jonas Salk Middle School Career Day

A presentation about information technology with demonstrations.

I volunteered to create a presentation for career day at school. Actually, my daughter asked me and I said “okay”. Then career day presentations were changed from in person to online because of coronavirus.

It would have been so much easier for me to do in person. I’m certain the total time spent would have been less than what I needed to produce the video! Everything I wanted to present could have been done live. Timing would have been easier, and adjustments could have been made in each session depending on the interest of the previous audience and questions during the presentation.

That wasn’t to be.

The good thing about the video is that I was able to produce it. The bad things are obvious on review. There are several parts where the dialog is disjointed and doesn’t flow with events on the screen. The arrangement of some screen elements blocks others in an undesirable way. And I need to think more about the audience; this is likely much better suited to high school seniors than eighth graders. Work more on the script and be EXPRESSIVE!

Making this video was an enjoyable and challenging experience. I had to learn things I’d never known in order to make it. And watching myself and the content, I can see how it could easily be improved. Information I’ll tuck away to use if and when there’s a next time.

If you’d like to check out the video, here it is.

At the end of the video is a list of the software used to produce it. That same list, with working links, is below.

Ubuntu 18.04 runs the laptop used to create this video (it’s an alternative to Windows, OS X, and Chrome OS). https://ubuntu.com/

OpenShot video editor was used to create the video. https://www.openshot.org/

vokoscreen made the screen video captures that got edited in OpenShot. https://linuxecke.volkoh.de/vokoscreen/vokoscreen.html

GIMP, GNU Image Manipulation Program, was used to create or edit some of the images in the video and to obscure and alter some portions of the video images. https://www.gimp.org/

Cheese was used to record my head shot and voice. https://wiki.gnome.org/Apps/Cheese

Pick and OpenShot’s chroma key effect were used to make the background behind my head transparent rather than appear in a box that blocked the background. https://www.kryogenix.org/code/pick/

I used LibreOffice Writer to take notes and make plans as I developed the video, and for the scripts I used to guide narration. LibreOffice Calc helped with calculating how to adjust the length of some clips to fit the target time. https://www.libreoffice.org/

MySQL backup and restore

Dig in and do it, and repeat. Get the desired result faster by combining research and testing.

Maintenance is important. A car needs oil changes or eventually the engine will be damaged by regular operation. A server needs software updates to fix bugs and protect against threats. Even when those things are done, failures can happen that create problems returning to normal operation.

For a car, there needs to be a spare ready to go in case of a flat. If there isn’t a spare ready for use, it will take longer to get the car back in operation when a flat happens. For a computer, programs and data need to be backed up. If a disk drive crashes, the information stored there may be lost or too expensive to recover, which is just as good as lost.

This website went without good protection for too long, and I knew that needed to change. There’s a server operating system, a web server, WordPress software, and a MySQL database that all operate interdependently to make it work. As the amount of content slowly grows, my manual system for backing everything up has become too cumbersome, and it isn’t done frequently enough to ensure minimal to no loss of data.

That needed to change.

Step one: automate the MySQL backups. The documentation states that the “logical” backup method is slow and not recommended for large databases. The alternative, “physical” backup, entails stopping the database server and copying the files. The licensed MySQL Enterprise Backup performs physical backups and, from what I’m able to tell, runs clone databases so one can be stopped and its files backed up while the clone continues to run and remains available for use.

This is a hobby operation with limited resources, so purchasing a license for Enterprise Backup is out of the question. Taking the whole thing offline to back up probably doesn’t bother anyone except me. Still, I did want to be able to keep the server running while the databases are being backed up. Enter logical backup.

It didn’t take long to find the command, mysqldump. Confirming that it would back up everything, including user names and passwords, so all the accounts would be restored along with all the data, took longer.

Despite my best search-fu I was unable to find any documentation that explicitly says “do this” to back up user accounts in addition to the system databases and other databases. Let me fill that gap: to back up user accounts, system databases, and all other databases, do this:

mysqldump -u root -p -h server.yourdomain.org --all-databases -r backup_file.sql

I did find the preceding command presented as the backup command, but nothing I could find said it also backs up user accounts and the system databases. I tested it. It does.
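
Since the point is automation, here is a minimal sketch of running that nightly from cron. The paths and schedule are placeholders; keeping the credentials in ~/.my.cnf (readable only by you) keeps the password off the command line:

# ~/.my.cnf
#   [mysqldump]
#   user=root
#   password=your_password
#
# crontab entry: dump everything at 02:30 ("%" must be escaped as \% in cron)
30 2 * * * mysqldump -h server.yourdomain.org --all-databases -r /backups/all_$(date +\%F).sql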

With the backup done, the next step is the restore, and confirming the restore works as expected. Another case of things that otherwise seem obvious not being spelled out in the documentation.

Restore from the command line looks like this: mysql -u root -p database < backup_file.sql. But wait, I want to restore all databases. Search-fu failed again to find any explicit instruction on how to restore all databases and which database to name on the command line.

Try the command without naming a database to see if all are restored. No, that fails. Then a flash of insight: create an empty database, name that on the command line, and try the restore again. It works!

$ mysql -u root -p
> create database scratch;
> exit
$ mysql -u root -p scratch < backup_file.sql

I did this a few times and then inspected the restored tables. As far as I’ve been able to determine, the restore is an exact replica of the backed-up data.
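
A couple of quick spot checks can confirm the databases and accounts came back. One wrinkle I’m aware of: because the grant tables are restored as ordinary SQL, the restored accounts may not take effect until privileges are flushed (or the server is restarted).

$ mysql -u root -p -e "SHOW DATABASES"
$ mysql -u root -p -e "SELECT user, host FROM mysql.user"
$ mysql -u root -p -e "FLUSH PRIVILEGES"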

It seems odd that important use cases, complete backup of a database server and complete restore of one, aren’t clearly documented. The information is there but important nuggets are left out. The only way to be sure you’ll get what you need is to experiment until you’re able to produce the results you need.

So yes, do the research, but also just do the work and inspect the results. When research doesn’t clearly answer the questions, back it up with experimentation. Do both and get a result faster.

Afraid of the wrong thing and how to know

Everyone needs to know about Snopes.com.

A woman I used to work with contacted me via Facebook Messenger a few days ago. She was passing along a warning about a video containing a virus that formats your phone. The video was called “Dance of the Pope” and the notification included the suggestion to forward it “to as many as you can”.

I hadn’t heard of this virus, so I did some searching right away before sending any warnings. As some of you may know, this is a hoax. Snopes reported it as such a few years ago, and a little more searching turned up .uk websites with articles dated this year that also reported the hoax.

I thanked my friend for the warning and let her know that what I had found identified the message as a hoax. I also told her about using the Snopes website to check for hoaxes and provided her with the link, as well as links to some of the .uk articles I had found.

The unfortunate truth is that people carry tools they know could cause pain or even economic loss if those tools were lost to them. What too few people know is where to get accurate information about the risks and how to respond. Unfortunately, as far as I know, there’s only Snopes. And unfortunately, not enough people know about it.

If you read this, pass it on. Use Snopes to check for hoaxes.

Of course, malware writers are smart and always devising new ways to infect systems. One day it may be true that “Dance of the Pope” is weaponized so that opening it does cause damage.

When that day comes, the usual guidance of not opening things you aren’t expecting will need to be a mantra everyone follows. And there will be an even greater need to know where and how to get reliable information to protect your digital life.

More phishing…

There’s more than one way to hook a fish.

Let’s say you’ve become comfortable in your ability to recognize phishing email. You’re able to spot the strange “From” address hidden behind the reassuring “Billing Department” or “Customer Service” label that’s been applied. And even if that looks like it might be legit, you know how to hover over links in the email, and you know that a link which really came from Amazon should have amazon.com/ as the last part of the web address just before that very first single forward slash, “/”.

A business web address should always be https://businessname.com/maybemore or https://www.businessname.com/maybemore or https://businessname.org/maybemore and so on. The critical part of the address, the part that tells you where the link will take you, is between the paired // and the very first single /.
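
If you like seeing that rule mechanically, here is a sketch in shell that pulls out the part that matters; the address is made up:

url="https://amazon.com.account-check.example.net/amazon.com/signin"
host=${url#*//}     # drop everything through the paired //
host=${host%%/*}    # keep only what comes before the first single /
echo "$host"        # amazon.com.account-check.example.net, not amazon.com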

What do you do when everything looks legit? The “From:” doesn’t look strange, the subject isn’t alarming.

The message itself doesn’t try to make you panic. You can see the full email address and it looks legit. There’s no business website listed in the message, but the part of the email address after the @ looks legit. And if you put the part after the @ into your web browser, it does go to a legit website, in this case “equitybrands.com”.

Stop right now! There’s no contact info provided in the message. No corporate website identified. No contact phone or email provided. And there’s no information about what this is. Did you buy something and there’s a payment issue? Forget to return something? Is there a detail about a pending refund? There’s just nothing, except a big blue “View File” button.

In case you can’t resist taking a peek at the “Payment doc.excel” file, I did it for you.

It isn’t a regular Excel file; if it were, the last part of the file name would be .xls or .xlsx, not .excel. Sorry, but you do need to know that. Ignoring all this, I clicked the “View File” button. It got me to the screen below.

If you haven’t gotten suspicious yet, you should turn and run now.

There’s no identifying information for the company.

Why are you being asked for your email address? The message came to your email. Why is it asking for that now?

What password do you need to enter? Since your email is asked for, it seems like a reasonable guess would be your email password. Don’t!! Your email password is to get into YOUR email. Nobody else needs it.

Then there’s a conflicting statement at the bottom of this web page. See just below the “Submit” button? It says “Never submit passwords through Google Forms.” That’s because this phishing message has brought you to a Google Form to collect your email and password. The criminals can’t prevent Google from showing that warning on a Google Form, but they’re hoping you won’t see it or will ignore it.

In summary: even if everything looks legit, if you’re asked to enter your email and password somewhere and you got there by clicking a link in an email, DON’T DO IT!

Email and password are for you to get into your accounts. Don’t give them up at a website you got to by an email link.

Always go to the website your usual way and log in. Then check your account to see if anything is needed.

If it isn’t a website you remember having an account at, do not, do not, do not provide credentials to log in. Call the business and ask what’s up!

Coronavirus and work from home :-/

Communication and planning make a world of difference.

The office I work from is in Manhattan, NYC. Up until yesterday we were going into the office for work. About 5 pm an email was sent to all staff that they should begin working from home the next day. Not much other guidance except: work from home.

My primary function is to connect to remote point-of-sale systems and poll their transactions if the routine automated polling from the night before isn’t successful. Depending on the day there are a few handfuls of locations to poll. I’m not currently doing a lot of end user support because there’s another person who has that as their primary role.

The work-from-home email went out about an hour before we closed for the day. I installed the needed remote-access host on my work PC so I could get to my internal resources, and informed our acting CIO (small shop, but the IT department head is referred to as CIO) that it had been done. My LogMeIn credentials enable me to download the host associated with our account but, once the host is installed, the CIO or another person needs to add it to the list of hosts before I can actually make a remote connection.

When I let the CIO know what I had done, his reply was, “What email?”! He hadn’t even been informed that the work-at-home email was going to be sent before it went out to everyone. And this for a change that would cause a significant number of people to contact IT and ask how they would be able to continue working. I would have been astounded, except that I have now seen too many instances of poor to no internal communication, leading to ad hoc responses to many needs and inconsistent implementation of solutions.

I was fortunate to be IT director for a number of years at a business that was very proactive about communication and planning. (The business, sadly, was shut down by its parent and I haven’t succeeded in finding a similar role since.) As director I oversaw and participated in the creation of policy and procedure for nearly every significant business operation that IT was part of or could have an impact on. The idea that a course of action requiring significant support from IT, or any department, would be announced without consulting those departments first would be unthinkable. How else to ensure some degree of readiness?

Who could’ve foreseen coronavirus? Depending on the sources you read, several organizations and people have been advocating for years for more resources to study the potential risk and impact of zoonotic diseases. If you haven’t seen it, I highly recommend the following article/interview: The Man Who Saw the Pandemic Coming – Issue 83: Intelligence – Nautilus. Even though the specific virus couldn’t have been foreseen, the effects of such an infectious disease, and the actions needed to counter it, have been.

After 9/11 many companies did make efforts to be prepared for disaster. At my current employer those efforts either were never made or have since been forgotten.

I do very much yearn to be part of a forward thinking, proactive organization once again.

Fake news!

Be informed, not misinformed.

Fake news has been a problem since before the Internet, though it was much easier to recognize back then. With the rise of social media it has become a serious problem, influencing large numbers of people with false and misleading information.

With a presidential election in the offing and intelligence services currently warning about active foreign interference, now would seem a good time to brush up on identifying fake news, and to keep oneself from going off half-cocked at someone or making a choice based on a false story.

I found an NPR article, With An Election On The Horizon, Older Adults Get Help Spotting Fake News, along with training about the problem.

And although the article’s title includes the words “Older Adults”, the lessons are for everyone. There are many adults, not only older ones, who need to be able to recognize and acknowledge fake news.

These are definitely good resources to be familiar with and to share. Please spread them far and wide.

JavaScript and modular pages

An easy example of simplified page maintenance.

I have written before about a website I maintain, the Senior Computer Learning Center. It was built from scratch when I knew absolutely nothing about coding webpages, and had no understanding at all of how to use libraries or a CMS to style and customize pages.

One thing I realized right away: even on a simple site, it would be useful to build the navigation menu once and reuse it on each page. Less coding per page and a single place to edit the menu when it changes.

With my first ever attempts at coding a simple web page, I couldn’t figure out how to load external elements into the page if they didn’t have a tag like <img>.

Now I’ve done it: I learned how to load a document node from an external file. Understanding the jQuery selector, $(), and how to pass an object to a function solved the problem.

Trying to solve the problem of maintaining the menu in one place and using it on multiple pages, I searched and searched but couldn’t find examples that helped. I was trying to add a predefined menu to any <body> I wanted by loading it from a file.

After a lot of reading and trial and error I ended up with an external JavaScript file, custom.js. Currently it contains only one function, which adds DOM elements to the page so the menu is built dynamically when the page loads. Same menu on each page and only one place to maintain it. Much better maintainability.

Below is the HTML for the menu, which used to be repeated in each of the seven pages of the SCLC site, now embedded in a call to jQuery’s html() that inserts it into the target node.

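// Build the shared site menu inside the node passed in; each page calls this once.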
function myMenu(target) {
    target.html('<h2>Winter 2019<br>Spring 2020</h2> \
                   <a href="index.html">Home</a> \
                   <a href="announcements.html">Announcements</a> \
                   <a href="schedule_changes.html">Schedule Changes</a> \
                   <a href="course_desc.html">Course Descriptions</a> \
                   <a href="schedules.html">Schedules</a> \
                   <a href="calendar.html">Calendar</a> \
                   <a href="enrollment.html">Enrollment Information</a>');
}

Now each of the seven pages uses a short <script> to get the menu when loading. Nothing to change when the menu changes.

<nav id="mainMenu">
     <script>myMenu($("nav#mainMenu"));</script>
</nav>

Modify the html() in myMenu() and all pages display the updated menu when refreshed.

There’s plenty more to do on the SCLC site to make it more maintainable and more useful for end users. Using a common routine on multiple pages is just one of the first steps.