Controlling file access

Use groups to maintain ACLs.

Digital information has creators, owners, editors, publishers, and consumers. Depending on the information, it has different approved audiences: public, the creator’s organization, leadership, a functional group, and so on. And each audience can be subdivided by level of authority: read only, modify, create, etc.

How to control who sees what? Accounts need to access, change, and create information. At least some of that information will be in the cloud: either your own, or hosted space and services invoiced monthly, or a combination. Access to both the public and private domains should be convenient for authorized users on supported platforms.

And be sure to classify the information! The public material has access control set so everyone can see it. Everything else needs to live someplace private. Add an approval process for material to go public, and devise a rights scheme for the private domain: Owners, Editors, Readers.

Add to all this a folder hierarchy that supports the envisioned rights, and document access should be understandable, maintainable, and auditable (with proper auditing enabled).

What’s the *perfect* configuration for all of this? As far as I’ve discovered, there isn’t one. Please comment with a reference if you know of one.

The perfect configuration is one that is maintained per business needs. Maintained is really the operative requirement.

Default everything to private so only authors have access to their own work?

How to collaborate? Give others read/edit access as needed, per instructions from the owner? That leads to LOTS and LOTS of ACL changes as people change within the organization, to say nothing of sunsetting access. When should collaborators have edit access removed, or even read access?

If rights are granted by individual account, then accounts that are later removed leave lots of unidentified GUIDs behind in the ACLs, or it takes lots of maintenance to find the accounts in the ACLs and remove them before each account is deleted.

And even when an account isn’t removed, because the person is only changing position and so should have access to different files, it still requires lots of maintenance as people move from position to position.
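As a toy illustration of the orphaned-entry problem (all identifiers here are made up, and this models no real directory API): once the account behind a GUID is deleted, the ACL entry no longer resolves to a name.

```python
# Toy model: ACLs grant rights to account GUIDs; the directory maps
# GUIDs to people. Every identifier below is hypothetical.
directory = {
    "guid-1111": "alice",
    "guid-2222": "bob",
}

acl = {
    "/finance/budget.xlsx": {"guid-1111": "edit", "guid-3333": "read"},
}

# An entry whose GUID no longer resolves is orphaned: nobody can say
# who "guid-3333" was without digging through an audit trail.
for path, entries in acl.items():
    for guid, right in entries.items():
        if directory.get(guid) is None:
            print(f"{path}: orphaned {right} entry for {guid}")
```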

Default everything to public read only and authors have edit access to their own work?

This limits the need to grant access to individual accounts unless an account needs edit rights to a document. If the same approach is taken to granting edit rights as was suggested for read rights above, then the same maintenance situation occurs, except this time only for editors. Likely a lesser support burden, but still one that is likely to leave orphaned GUIDs in the ACLs.

Manage access by group!

Create Reader and Editor groups. As many as needed to accommodate each of the various groups needing access to the folders and files. Add and remove accounts from the groups as needed.

Managing access by group won’t cover every need; it may still be necessary to put individual accounts into the ACLs. However, managing by group will limit how often that happens, and it will make the granted rights clearer if group naming conventions are used to make each group’s purpose apparent, e.g., AccountsPayableReaders, AccountsPayableEditors.

This can be taken further. If the two groups above have relatively steady membership, then accounts with only a short-term need for reader or editor access can be added to groups nested within them, making it apparent that the account holder has temporary access. The nested groups could be TmpAccountsPayableReaders and TmpAccountsPayableEditors.
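To make the scheme concrete, here is a minimal sketch of how group-based rights resolve, including the nested temporary groups. The group names, accounts, and the simple read/edit model are illustrative assumptions, not any particular directory product’s API.

```python
# Sketch: resolving effective rights from group membership.
# All names here are hypothetical examples of the naming convention above.

# Rights granted in the ACL, by group.
ACL = {
    "AccountsPayableReaders": {"read"},
    "AccountsPayableEditors": {"read", "edit"},
}

# Nesting: temporary groups placed inside the steady groups.
NESTED = {
    "AccountsPayableReaders": {"TmpAccountsPayableReaders"},
    "AccountsPayableEditors": {"TmpAccountsPayableEditors"},
}

# Account-to-group assignments.
MEMBERS = {
    "alice": {"AccountsPayableEditors"},
    "bob": {"TmpAccountsPayableReaders"},  # temporary reader
}

def groups_for(account: str) -> set[str]:
    """All groups an account belongs to, directly or via one level of nesting."""
    direct = MEMBERS.get(account, set())
    expanded = set(direct)
    for parent, children in NESTED.items():
        if direct & children:
            expanded.add(parent)
    return expanded

def rights_for(account: str) -> set[str]:
    """Rights the ACL grants an account through its groups."""
    rights: set[str] = set()
    for group in groups_for(account):
        rights |= ACL.get(group, set())
    return rights

print(rights_for("bob"))    # {'read'} via the Tmp nested group
print(rights_for("alice"))  # {'read', 'edit'}
```

The point of the sketch: sunsetting bob’s access is a single group-membership removal, and the ACL itself never changes.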

In the end…

There is not a “perfect” no-maintenance system to manage and control access rights. Groups are certainly recommended over individual accounts. So long as the organization experiences changes that should affect document access, it will be necessary to maintain ACLs.

The goal, really, is to limit the work needed to know what access is granted to which accounts, to maintain proper access, and to use a method that is sustainable.

Groups really are the solution. Groups and a well established process to identify, classify, and assign rights to information throughout its lifecycle from creation to retirement.

Got a job!

Review LOTS of advertisements, select and apply, repeat. It’s a full-time job that you want to dump.

After nine months of applications, 99% of them ghosted 🙁, I got a job 🙂!

Interesting that for the first time in my professional career the title is IT System Administrator. I’m familiar with all that has been needed so far. Seems a good fit. Yet I’ve never had this title before.

Good way to start a new job. Not lost in anything and able to contribute quickly.

Oh, and first post since commenting was enabled. Wondering what kind of spam will show up first.

Tracking Things, SO MANY THINGS, Which Are the Important Things?

Don’t get overwhelmed.

Digital devices (for this discussion, the range from smartphones to computers and the devices making up the networks they attach to) offer so much information for monitoring health and diagnosing failures.

To maintain the health of that cloud of devices, it’s good to know what’s going on and what to monitor. By the same token, it’s good to monitor the things that affect your experience, so the provider can be shown when a problem is theirs.

For home Internet users the big things are usually the reliability and speed of the Internet connection. If it’s fast but down a lot, that’s no good. And if it’s up and performance seems fine, is it actually performing to spec? Are you getting what you’re paying for?

Purely as an exercise in curiosity, I wondered how often my public IP address changed and how quickly a log of it would grow. I’ve been tracking since May 2014 and have 11,441 lines in the log, which has only grown to 670K in that time. I’ve had 129 different public addresses; the most frequent account for 2,267, 1,681, 1,176, and 702 occurrences, more than half of all the entries.
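For anyone curious, a logger like this can be tiny. Here’s a minimal sketch of the idea; the api.ipify.org lookup service and the file layout are my assumptions for illustration, not necessarily what I used.

```python
#!/usr/bin/env python3
"""Append a timestamped line whenever the public IP changes."""
from datetime import datetime
from pathlib import Path
from urllib.request import urlopen

LOG = Path("public_ip.log")  # hypothetical log location

def current_ip() -> str:
    # api.ipify.org returns the caller's public IP as plain text.
    with urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def last_logged_ip() -> str | None:
    if not LOG.exists():
        return None
    lines = LOG.read_text().splitlines()
    return lines[-1].split()[-1] if lines else None

ip = current_ip()
if ip != last_logged_ip():
    with LOG.open("a") as f:
        f.write(f"{datetime.now().isoformat()} {ip}\n")
```

Run it on a schedule (cron, Task Scheduler), and a collections.Counter over the last column of the log reproduces the occurrence tally above.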

Mostly just trivia. Having the log did help me discover that one of the temporary IPs I got in Flushing was on some blacklists. While I had it I couldn’t log in to my (ISC)2 account. Once troubleshot, I was able to get the address removed from the blacklist and could again reach (ISC)2 whenever that IP came around.

More immediately: is the Internet performance I contracted for actually being delivered? In my case it certainly seems it isn’t.

A typical recent week of service from my ISP. Anything not green is bad :-/. There’s quite a bit of it.

Better times :-). Start of November, 2020.

Those are examples of some things to track, one seemingly more immediately useful than the other. There are so many more. Which are important for security? Authentication by location, time of day, and second factor; log file access (with a hierarchy of criticality); web browsing?

You need to ask and answer: what’s critical, what’s confidential, who should have access, and which access paths are allowed.

A little bit Docker

Platform virtualization, a more granular way to virtualize.

Virtualization has been mainstream for years. I’ve used it in production environments to increase hardware utilization and improve failure tolerance. It’s also great for quickly setting up test environments, whether to test before a production deployment or to evaluate a technology without intermixing it with production.

Docker, platform as a service virtualization, has been around for a few years now, since 2013 actually. And I’d never used it. I decided it was time to change that.

So… what to do? Well, I’ve suspected that my Internet Service Provider (ISP) isn’t actually providing the promised speeds. Whenever I’d check at a speed test website, the result would be slower than the service promise. Of course that’s very intermittent testing, and I couldn’t really maintain a regular schedule, note the results, and build a documented history to complain to my ISP about.

In the past I’ve used a program called Nagios to monitor network services and computers on a network. A little searching found that Nagios supports a plug-in that can be used to monitor Internet upload and download speed on a schedule.

With this information in hand, I decided to try using Docker to run a Nagios container with the speedtest plug-in. Quite a bit for me to get my head around. This particular plug-in works differently than the ones built into Nagios Core, so I needed to work out how to get it running. There is documentation online, of course, but it is old; the plug-in has been updated a few times while the documentation has not.
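For flavor, here’s a minimal sketch of what a check like this can look like. Nagios plug-ins are just programs that print a status line (optionally with perfdata after a |) and exit 0, 1, or 2 for OK, WARNING, or CRITICAL. The speedtest Python module and the thresholds below are my assumptions for illustration, not how the actual plug-in is written.

```python
#!/usr/bin/env python3
"""Nagios-style check: measure download speed, exit OK/WARNING/CRITICAL."""
import sys
import speedtest  # pip install speedtest-cli

WARN_MBPS = 50.0   # hypothetical thresholds for a 100 Mbps plan
CRIT_MBPS = 25.0

st = speedtest.Speedtest()
st.get_best_server()
down_mbps = st.download() / 1_000_000  # bits/s -> Mbit/s

if down_mbps < CRIT_MBPS:
    status, code = "CRITICAL", 2
elif down_mbps < WARN_MBPS:
    status, code = "WARNING", 1
else:
    status, code = "OK", 0

# One status line, perfdata after the pipe, exit code carries the state.
print(f"SPEEDTEST {status} - download {down_mbps:.1f} Mbit/s"
      f" | download={down_mbps:.1f}Mbps;{WARN_MBPS};{CRIT_MBPS}")
sys.exit(code)
```

Nagios schedules the command and alerts on the exit code, which is exactly the documented-history-on-a-schedule part I was missing.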

And with Docker itself there’s quite a bit to learn. It’s easy enough to get a container started, but there is a lot going on behind the scenes. Docker has images and containers: images are the templates for containers. Start (run) an image and a container is created. Stop that container, then start the image again, and a second, new container is created. Without a guide explaining the “theory of Docker” (I haven’t found one yet), one might keep starting the same image and, by that technique, keep creating containers. This leads to not finding any of the customizations made in the last container, because starting the image again starts a fresh one.
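The distinction is easy to demonstrate. Below is a small sketch using the Docker SDK for Python (the CLI equivalents are docker run, docker stop, and docker start); the alpine image and sleep command are just placeholders.

```python
import docker  # pip install docker (the Docker SDK for Python)

client = docker.from_env()

# "Starting an image" (docker run) creates a NEW container every time.
c1 = client.containers.run("alpine", "sleep 300", detach=True)
c2 = client.containers.run("alpine", "sleep 300", detach=True)
print(c1.id != c2.id)  # True: two distinct containers from one image

# To keep customizations, stop and restart the SAME container
# (docker stop / docker start) rather than running the image again.
c1.stop()
c1.start()

# Stopped containers pile up unless removed; this list includes them all.
print(len(client.containers.list(all=True)))
```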

Then, of course, there’s getting to the container command line: basically, getting inside the machine. Once there, it can be difficult to accomplish anything because many common command line tools, like a text editor, are not in the container. That leads to needing a way to access and modify the files from outside the container.
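Two common ways around this, sketched with the same Python SDK: bind-mount a host directory into the container so its files can be edited with host tools, or run commands inside the container with exec. The paths here are hypothetical stand-ins for something like the Nagios configuration directory.

```python
import docker

client = docker.from_env()

# Bind-mount a host directory into the container (docker run -v ...)
# so its files can be edited from outside with normal host tools.
c = client.containers.run(
    "alpine", "sleep 600", detach=True,
    volumes={"/home/me/nagios-etc":  # host path (hypothetical)
             {"bind": "/usr/local/nagios/etc", "mode": "rw"}},
)

# Or run a command inside the running container (docker exec ...).
exit_code, output = c.exec_run("ls /usr/local/nagios/etc")
print(exit_code, output.decode())
```

docker cp is a third option, for copying individual files in and out of a container.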

There are ways to resolve all of the above, more than one way for each issue.

My good fortune is that I’m attending the (ISC)2 2020 Security Congress this year, virtually of course. And there is a Docker-related session I’ve signed up for. Excited to learn about this.

  • 7 Layers of Container Insecurity

Coronavirus and work from home :-/

Communication and planning make a world of difference.

The office I work from is in Manhattan, NYC. Up until yesterday we were going into the office for work. About 5pm an email was sent to all staff saying they should begin working from home the next day. Not much other guidance beyond: work from home.

My primary function is to connect to remote point-of-sale systems and poll their transactions when the routine automated polling from the night before isn’t successful. Depending on the day, there are a few handfuls of locations to poll. I’m not currently doing a lot of end user support because another person has that as their primary role.

The work from home email went out about an hour before we closed for the day. I installed the needed remote-access host on my work PC so I could get to my internal resources, and informed our acting CIO (a small shop, but the IT department head is referred to as CIO) that it had been done. My LogMeIn credential lets me download the host associated with our account, but once the host is installed, the CIO or another person needs to add it to the list of hosts before I can actually make a remote connection.

When I let the CIO know what I had done, his reply was, “What email?”! He hadn’t even been informed, before the work-from-home email was sent to everyone, that it was going to happen. And this for a change that would cause a significant number of people to contact IT and ask how they would be able to continue working. I would have been astounded, except that I have now seen too many instances of poor-to-no internal communication leading to ad hoc responses to many needs and inconsistent implementations of solutions.

I was fortunate to be IT director for a number of years at a business that was very proactive about communication and planning. (The business, sadly, was shut down by the parent and I haven’t succeeded in finding a similar role since.) As director I oversaw and participated in creation of policy and procedure for nearly every significant business operation that IT was part of or could have an impact on. The idea that a course of action would be taken that could require significant response from IT, or any department, to support it without consulting those departments prior to making the announcement would be unthinkable. How else to ensure some degree of readiness?

Who could’ve foreseen coronavirus? Depending on the sources you read, several organizations and people have been advocating for years for more resources to study the potential risk and impact of zoonotic diseases. If you haven’t seen it, I highly recommend the following article / interview: The Man Who Saw the Pandemic Coming – Issue 83: Intelligence – Nautilus. Even though the specific virus couldn’t have been foreseen, the effects of such an infectious disease and the actions needed to counter it have been.

After 9/11 many companies did make efforts to be prepared for disaster. At my current employer, those efforts either were never made or have been forgotten.

I do very much yearn to be part of a forward thinking, proactive organization once again.