Net neutrality day of action

Here’s a biased report on net neutrality from the BBC. In it, the author describes internet traffic as being analogous to road traffic. But this is terribly misleading, whether by design or incompetence.

A motorway is a terrible analogy for internet traffic. All internet traffic moves at the same speed: i.e. slightly less than the speed of light. The issue is bandwidth, which is not speed.

A better analogy might be a passenger rail service. Traveller neutrality means just that: you can buy as many tickets as you want and use them no matter what your final destination happens to be, or how many are in your party. A lack of any rules means the train company could arbitrarily prevent some travellers from reaching a particular destination, or charge them more to do so.

For example, if the rail company were paid by Burger Queen to make it harder for people to go to McDougal’s, then travellers heading to McDougal’s could be given tickets that only entitle them to use an older, dirtier, worse equipped train; or there may be fewer trains timetabled on which those tickets are valid. Maybe they only run Monday through Thursday.

The chances of the customer experiencing this and subsequently shunning McDougal’s, in order to avoid the ignominy of travelling fourth class, are greatly increased. McDougal’s sales fall; Burger Queen’s anticompetitive investment was successful.

If you pointed at your monitor and shouted “They can’t do that!” you’d be right. A passenger rail service is one type of essential public service, or critical infrastructure. Treating customers in the way described would be erecting barriers to competition.

I hope it’s obvious that this is in no way like making trucks travel slower or restricting the lanes on which they can travel ... this is in fact a very good description of European motorways. But those restrictions on heavy goods vehicles are not made for reasons of competition, but safety. The BBC’s analogy is wholly unrepresentative of the situation with internet traffic.

Now let’s test the passenger rail analogy by applying it to ISPs and phone/cable companies. Let’s group them as “internet carriers”. If such an internet carrier has an HD video service, it may decide to throttle traffic from third-party video companies such as Google (YouTube) or Amazon. The internet carrier is restricting its competitors from showing HD video.

Worse, the internet carrier may block all videos marked with certain key words. This becomes an issue of censorship when Google carries a video with content that happens to go against the wishes of the internet carrier company. Say the internet carrier is a religious organization that prevents videos about evolution from being shown. Or it’s a science-savvy company that prevents videos about religion from being shown. Neither situation is a good one. Neither side is in the right. Neither side is in the wrong.

The only way to resolve this conflict of interest is to compel those companies which seek to provide critical infrastructure to act like it. Just as rail companies should never be allowed to impede their passengers from reaching destinations they don’t like, internet carriers - ISPs and cable/phone companies - should never be allowed to prevent their users from consuming content they don’t like.

Jeez, but what if the BBC – one of the world’s largest and best funded media companies – had a vested interest in making sure its readers were misinformed over net neutrality? What then? Might they post nonsense articles?

The #netneutrality day of action is 2017-07-12. Visit https://www.battleforthenet.com/july12/


Bless you!

A couple of etymological eye-poppers, here.

bless (v.)
Old English bletsian, bledsian, Northumbrian bloedsian “to consecrate, make holy, give thanks,” from Proto-Germanic *blodison “hallow with blood, mark with blood,” from *blotham “blood” (see blood (n.)). Originally a blood sprinkling on pagan altars.

So bless you originally would have meant “may you be sprinkled with blood as an offering to the gods”. The entry goes on to mention immolate as another example of a word describing a specific act of piety turning into a more general term:

immolate (v.)
1540s, “to sacrifice, kill as a victim,” from Latin immolatus, past participle of immolare “to sacrifice,” originally “to sprinkle with sacrificial meal,” from assimilated form of in- “into, in, on, upon” (from PIE root *en “in” ) + mola (salsa) “(sacrificial) meal,” related to molere “to grind”.

So in its original sense, immolation is the act of sprinkling with meal as an offering to the gods. That sounds nicer than getting a bloody nose from sneezing too much, although getting your breakfast off your trews can be pretty difficult.


Poorly-informed kernel commentary

This article in InfoWorld highlights a Reddit thread asking whether Linux kernel design is outdated. Helpfully, the best-informed answer is placed at the top:

The downside to this [microkernel] approach is the eternal, inescapable overhead of all that IPC [interprocess communication]. If your program wants to load data from a file, it has to ask the filesystem driver, which means IPC to that process, a process context switch, and two ring transitions. Then the filesystem driver asks the kernel to talk to the hardware, which means two ring transitions. Then the filesystem driver sends its reply, which means more IPC, two ring transitions, and another context switch. Total overhead: two context switches, two IPC calls, and six ring transitions. Very expensive!

A monolithic kernel folds all the device drivers into the kernel. [...] If your program needs to load something from disk, it calls the kernel, which does a ring transition, talks to the hardware, computes the result, and returns the result, doing another ring transition. Total overhead: two ring transitions. Much cheaper! Much faster!

This all seems very reasonable. So why is Google building the Magenta kernel for Fuchsia?

Simply put: I/O wait. The expert who wrote that answer knows very little about how disks and CPUs interact. I/O wait is the fraction of time during disk access that the CPU is left waiting for the device. SSDs are blazingly fast, but they are still no match for a CPU; and yet the same arguments were being made about spinning disks, which are orders of magnitude slower.

Indeed, the same arguments are made in ignorance of I/O schedulers, which, if the wrong one is selected, can slow down disk access by (again) orders of magnitude.

Modern spinning disks and SSDs are done a disservice by most GNU/Linux distributions, which leave the Completely Fair Queuing (CFQ) scheduler enabled by default, when for fast devices they should be bypassing scheduling entirely.
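To see which scheduler is actually in play, the block layer exposes it through sysfs. A minimal sketch (device names vary by system; the write needs root):

```shell
# List each block device's available I/O schedulers;
# the active one is shown in brackets, e.g. "noop deadline [cfq]"
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue          # skip if no block devices are visible
    printf '%s: %s\n' "$f" "$(cat "$f")"
done

# To bypass queuing for this boot (sda is illustrative):
#   echo noop | sudo tee /sys/block/sda/queue/scheduler
```

The echoed change lasts only until reboot; to make it stick you would set it on the kernel command line or in a udev rule.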

So: even after switching a system to the “noop” scheduler configuration and hitting it hard, the CPU spends only about 5% as much time negotiating the disk read/write as it does sitting idle, waiting for the disk. I’ll restate that to make sure the point gets across: under heavy load, when the system might spend 1 ms negotiating the transfer, it was left hanging for 20 ms while the spinning disk looked for the data and stitched it together. For the SSD, the %iowait was approximately 2½ times larger than the %sys time spent in the driver: e.g. 1 ms of driver time and 2.5 ms of wait time. (Go do something heavy, like compressing some small files with pbzip2 while running a clamdscan over the same SSD with lots of subdirectories, then run iostat -cmdx sda 2, or watch your sar log under /var/log/sa/ ― at the points of heaviest load, the percentage of CPU cycles under %sys will be about a twentieth of the percentage given under %iowait.)
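To put that ratio in concrete terms, here’s a toy calculation with invented iostat-style numbers, matching the 1 ms / 20 ms spinning-disk case described above:

```shell
# Invented numbers in "iostat -c" column order:
#   %user %nice %system %iowait %steal %idle
set -- 4.1 0.0 1.2 24.0 0.0 70.7
sys=$3 iowait=$4

# 1.2% of cycles in the driver vs 24.0% waiting on the disk: a 1:20 ratio,
# i.e. 1 ms negotiating the transfer for every 20 ms left hanging
awk -v s="$sys" -v w="$iowait" 'BEGIN { printf "%.0f\n", w / s }'
# -> 20
```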

With a blazing-fast CPU, those four additional ring transitions, two IPC calls, and two context switches start to look like a reasonable cost because the CPU was otherwise doing nothing useful. Oh what’s that? It’s sequential and it’s slowing down? Multithreaded I/O drivers will solve that.

And that’s just the CPU. If some of the simple logic is handed off to a VPU (vector processing unit, general-purpose GPU, you know what I mean) with hundreds of cheap cores, then we’re getting into the realms of “I don’t care about your minuscule driver overhead, Buster”; or, more formally put: context switches are a computationally cheap price for the added layers of security and stability at this very fundamental layer of any modern operating system.

In the context of an increased focus on security and process-space partitioning, then yes: Linux kernel design is woefully outdated.

What then? Since Minix and Hurd are unlikely to ever get much in the way of mainstream support, it’s looking like the Fuchsia OS’s real-time microkernel, Magenta, is the answer.

Delving back into ancient history: remember the Amiga? Remember how, no matter how heavily loaded the system was, the GUI was always responsive? That was Exec, a microkernel in action, and it wasn’t even a realtime kernel. There are reasons why this is a flawed example (no memory protection; the kernel ran in user space), but it neatly demonstrates how system responsiveness is perceived as better performance.

Unless SSDs come along and give us another several orders of magnitude of performance gains over their present contemporaries, and unless the hackers decide “guys? how about we don’t attack kernel space, okay?”, the microkernel is going to become a major thorn in Linux’s side.


Using sed to send alerts for FlexLM denials

It’s entirely probable that, at some point in your career as a sysadmin, you’ll come across a hideous method of preventing access to software that legitimate users have paid for, a.k.a. FlexLM. If your organization buys into the idea that users need to be denied access to said software, you’ll want to know which legitimate users have been falsely swept up in the rummage.

Since you’re paying extra for the weak copy-protection scheme (software licensing software is licensed software; non-free software is non-free in more senses than one, natch), you’ll be glad to learn you can also spend even more on monitoring tools, such as IBM’s Platform Analytics. (Yeah, okay, I get that it does more than just FlexLM feedback.)

Or you can lift this script for your own crontab without crediting me (please don’t credit me, I’d be embarrassed).

# pick out DENIED flexlm licenses and email
# (DAYSBACK, LOGFILE, SMTP, and MAILTO are assumed to be set in the crontab)

TIMESTAMP="$(date --date "$DAYSBACK days ago" +'%-m/%-d/%Y')"
# get all entries since TIMESTAMP M/D/YYYY
#   if $TIMESTAMP is absent from $LOGFILE then sed matches the whole file
OLDIFS=$IFS; IFS=$'\n'
DENIED=( $(sed -n 'H;\|'"TIMESTAMP ${TIMESTAMP}"'|h;${g;p}' "$LOGFILE" | grep DENIED) )
IFS=$OLDIFS
if (( ${#DENIED[@]} )); then
 mail \
  -S smtp="$SMTP" \
  -s "[flexlm] ${#DENIED[@]} DENIED attempts since ${TIMESTAMP} ($DAYSBACK days)" \
  "$MAILTO" <<EOF
Found ${#DENIED[@]} DENIED attempts in $LOGFILE since ${TIMESTAMP} ($DAYSBACK days)
$(printf "%s\n" "${DENIED[@]}")
EOF
fi

The value of this isn’t particularly the email (although, see if your phone provider also has an email-to-SMS service), or even the fact you can watch for license denials (because who cares, right?) – it’s in the regex. I’m assuming your FlexLM daemon writes “TIMESTAMP M/D/YYYY” in the logfile every so often, and if it doesn’t you can log it there daily from cron (which might actually work better – read on). This is useful because instead of getting swamped with every single denial since the year dot, you can tune it to only report what was done in the last N days.

Simply put, it looks in the logfile for the first occurrence of the word “TIMESTAMP” followed by an M/D/YYYY-formatted date from N days ago, and then greps for “DENIED”.

If you want to grok it: what sed is being asked to do is append (H) the current line to the hold space, then, if the “TIMESTAMP…” pattern matches, overwrite the hold space with just that line (h). Following this, we advance through the file, still appending each line to the hold space as we go; at the last line (${…}) we copy the accumulated hold space back to the pattern space (g) and print it out (p).
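Here’s the trick in miniature, on a made-up log (the dates, features, and usernames are invented for demonstration):

```shell
# Build a toy FlexLM-style log
cat > /tmp/flexlm-demo.log <<'EOF'
TIMESTAMP 7/9/2017
10:01 (vendor) OUT: "featureA" alice@host1
10:02 (vendor) DENIED: "featureB" bob@host2
TIMESTAMP 7/10/2017
11:15 (vendor) DENIED: "featureA" carol@host3
11:20 (vendor) OUT: "featureB" dave@host4
EOF

# Everything from the last "TIMESTAMP 7/10/2017" through to end of file,
# filtered down to the DENIED lines
sed -n 'H;\|TIMESTAMP 7/10/2017|h;${g;p}' /tmp/flexlm-demo.log | grep DENIED
# -> 11:15 (vendor) DENIED: "featureA" carol@host3
```

Note that bob’s earlier DENIED entry is correctly excluded, because it falls before the matching TIMESTAMP line.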

Spot the obvious error. This will only match from the last occurrence of the “TIMESTAMP…” pattern through to the end of the file. Since FlexLM daemons write out multiple TIMESTAMP entries per day, bear that in mind. When you say “3 days ago”, you’re really only reliably covering from 2 days back.


Not a rant about ISV packaging

This is not a rant. I would be doing myself a disservice if this were a rant. I have to remind myself of that because this problem needles me each time I open up a package from an ISV.

See, in the Linux world, it has become commonplace to deliver a simple, straightforward tarball. You do this to install it:

./configure && make
sudo make install
And Bob's thine nuncle.

Well, okay, it's not always so simple. You're on RHEL5 and you want to update emacs? Then you must (using Ubuntu-style sudo for illustration purposes only):

wget http://ftp.gnu.org/pub/gnu/emacs/emacs-24.3.tar.gz &
sudo yum install giflib-devel libjpeg-devel libtiff-devel ncurses-devel
fg %1
tar xzf emacs-24.3.tar.gz
cd emacs-24.3
./configure --without-selinux --prefix=/usr/local/emacs24
make
sudo make install
mail motd@motdserver <<EOF
Emacs version 24.3 is now available on $HOSTNAME
at /usr/local/emacs24/bin/emacs. Please set your
\$PATH accordingly!
Your friendly IT Dept.
EOF
The last piece is crucial, of course. Without documentation you just spent your sweet time doing nothing that nobody won't never use, no sir.

So, rolling that up into a package shouldn't be too much of an ordeal, right? Right. This is a textbook case of RPM packaging: list the source, %prep, %install, simple %files list... almost trivial. (Just to demonstrate how the simplest case can quickly be nontrivial, "Provides: emacs" would seem appropriate, and so would adding the manpages and htmldocs files, and ... are there any patches?)
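A minimal spec skeleton for the emacs case above might look like this; the name, paths, and flags just mirror the manual steps, so treat it as a sketch rather than a tested spec:

```spec
Name:           emacs24
Version:        24.3
Release:        1%{?dist}
Summary:        GNU Emacs 24.3, parallel-installable under /usr/local/emacs24
License:        GPLv3+
Source0:        http://ftp.gnu.org/pub/gnu/emacs/emacs-%{version}.tar.gz
BuildRequires:  giflib-devel libjpeg-devel libtiff-devel ncurses-devel
Provides:       emacs

%description
GNU Emacs %{version} built with --without-selinux, installed out of the
way of the system emacs.

%prep
%setup -q -n emacs-%{version}

%build
./configure --without-selinux --prefix=/usr/local/emacs24
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
/usr/local/emacs24
```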

Moving on to ISV packages, the target of this posting’s ire: these typically come laden with binary blobs. No biggie! Just run through the install process in a chroot, diff the filesystem, and make a %files list and a list of “install” commands, right? Oh ye of excess faith…

No, because this is how it goes:

unpack archive
launch script with "--help"
cancel full installation
load installation script in editor
kill terminal because the non-ASCII characters broke it
hunt through archive for documentation
hunt through website for documentation
email the vendor to ask where their documentation is
wait a few days while they write some release notes and tell you to "just run the installer it's okay it takes care of everything"
discard release notes
put on headphones
cue up Motörhead
install their shit on a half dozen different systems as different people to see what kind of stupid OS-based, host-based, and user-based decisions they think are so important
archive the common stuff
script the config
trash the rest
re-enter simple binary packaging process as described above

Every publisher seemingly has its own NIH installer script, because each thinks it’s doing something so utterly special that nobody’s ever done it before, such as installing a driver, or installing platform-specific files, or setting up a license server, or whatever. This is because the publisher cannot tell the difference between:

  1. installation – the act of deploying a payload into a filesystem
  2. integration – the act of making an installed payload suitable for use in a particular operating environment
  3. configuration – the act of creating a set of site-specific options which are loaded by the integrated installation at runtime

Developers eventually find that RPM and DEB don’t provide suitable methods for doing All Of The Above, discover that Makefiles Are Hard, Man, and so they ask someone to learn bash, or csh, or Perl, or all of the above, and hash out a terrifically bad script: logic so twisted you could attach a propeller and keep cool all day long, and so much redundant paranoia that a simple review would cut it to 20% of its original size. And yet no error handling; or if there is, no checking of return codes. Plus, they make you type stuff in, like “y” to continue, even though they know you’re automating this, and then tell you to look into learning expect even though piping in the output of “echo y” will work.
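As a quick sketch of that last point (the installer script and its prompt are invented stand-ins for a vendor’s), a pipe does the job without expect:

```shell
# Stand-in for a vendor installer that insists on a typed "y"
cat > /tmp/installer.sh <<'EOF'
#!/bin/sh
printf 'Continue? [y/N] '
read answer
[ "$answer" = y ] && echo "installing..." || echo "aborted"
EOF
chmod +x /tmp/installer.sh

# No expect needed: pipe the answer in
echo y | /tmp/installer.sh
# -> Continue? [y/N] installing...

# For a script with many prompts, yes(1) repeats "y" forever:
#   yes | /tmp/installer.sh
```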

Anyway, I got through that without losing my temper, and I got their tools installed, integrated, and configured, so I guess I win, right?


Really awful paperless vacation apps

We had a great time at Disneyland and really easy flights with Southwest a couple of weeks back, but things would’ve been so much better if the Disneyland and Southwest apps had worked as advertised.

You see, each one has a paperless vacation option. Sign into the app and you get access to your tickets, which can be scanned for admittance. Or, as we found out to our dismay, may not be, if the app has inexplicably signed you out overnight, after you’d checked the evening before to ensure all the ducks were in a row.

It’s a bit irksome when an airline app does this kind of trick. But for travel, you’re going to bring paper copies just in case. No big line, no big deal.

On the other hand, when it’s an app that claims you can “Skip the Ticket Lines: Buy park tickets with the app and show your barcode at the gate for admission when you first get to the park! No ticket booth lines to stand in or e-tickets to print out.” Yes, quite. That would be wonderful.

Of course, in our case the “reset password” process sends an email to an account that neither of our phones was set up to receive. And because both phones were signed into the Disneyland app the night before, with the e-ticket barcodes showing bold and proud, we didn’t think we needed to bring the paper copies along. The upshot: one of us had to go sprinting back to the hotel room to fetch the backup paper copies, and sprint back again, while the other adult kept the kids amused some-god-knows-how and the wait lines built up until it was almost time to eat again. (At least there was coffee and sustenance for the one having to make it through the day in uncomfortably sweated-through clothing.)

Room for improvement? Sure! Next time we’ll print out the god damned passwords in triplicate and bring along paper copies just in case! But this is also on the app developers, to improve their warez and store the account authorization on the end user’s phone. The assumption should be, just as with many major apps containing monetary-transaction features, that I am the only user of my phone, or that the person using it is my competent alternate. Jesus Christ, man: if they weren’t, they’d not only have to steal my phone and break into it like the FBI can’t, but also race me to the airport or Disneyland and win before I realized and used another device to disable it. Steal it too early and I’ll close the window of opportunity. Steal it too late and I’ll already be in the line, and it’ll be a very physical interruption of a crime in progress. Bottom line: App Developers Should Not Try To Be The Police.

I still can’t get over the fact that both of these apps did the same thing, on both phones, at the same time. That’s some ridiculous and terrible authentication code development there.

Still, we really enjoyed the flights because they were short enough, and the kids met Elsa and Anna, so all is well with the world.


The evolution of attitudes towards SELinux

RHEL3 ― “What’s SELinux?”
“It’s for breaking Linux.”
RHEL4 ― “What’s SELinux?”
“It’s for some organizations who want paranoid security.”
RHEL5 ― “What’s SELinux?”
“It’s for some organizations who care deeply about security.”
RHEL6 ― “What’s SELinux?”
“It’s for secure installs.”
RHEL7 ― “What’s SELinux?”
“You’re new here, aren’t you?”