Some people have project cars—cars that don’t work, that they put up on blocks or in the garage, and then lovingly restore to working order. I have some project books: e-texts that I got on a compilation CD in the 1990s, and finally decided to convert to EPUBs.
I’m using the Standard Ebooks style manual, since they seem to know what they’re doing. And these style guidelines can get pretty specific. For instance, if you have a numeric or date range, you should use an en dash surrounded by Unicode word joiner glyphs.
None of this is a problem if you’re using Emacs, of course. You can enter weird characters with M-x insert-char, which lets you search for specific Unicode characters by name. And of course M-x query-replace-regexp will find two numbers separated by an ASCII hyphen.
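For bulk fixes, the same replacement can be wrapped in a small Emacs Lisp command. This is just a sketch of one way to do it (the function name is made up; U+2060 is the word joiner and U+2013 the en dash):

```elisp
(defun my-fix-numeric-ranges ()
  "Replace the ASCII hyphen in numeric ranges with
word joiner + en dash + word joiner."
  (interactive)
  (goto-char (point-min))
  (while (re-search-forward "\\([[:digit:]]\\)-\\([[:digit:]]\\)" nil t)
    (replace-match
     (concat (match-string 1) "\u2060\u2013\u2060" (match-string 2)))))
```

For one-at-a-time confirmation, query-replace-regexp with the same pattern does the job interactively.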
The clever part, though, was the realization that even though I have about 900 source xhtml files, they only come to about 35 MB, and in the 2020s, that’s not a lot (though it would have been science fiction back when I got that CD). So why not just load them all?
Once we do that, we can use occur-mode to look for patterns. It’s like grep for Emacs. In this case, we want to search for patterns in all of the source buffers, so we’ll use multi-occur-in-matching-buffers, which requires us to specify one pattern for the buffers to search (\.xhtml$) and another for the string or pattern to search in those buffers ([[:digit:]]-[[:digit:]]).
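Interactively, M-x multi-occur-in-matching-buffers prompts for both regexps; for reference, the equivalent Lisp call is:

```elisp
;; Search all buffers visiting .xhtml files for digit-hyphen-digit:
(multi-occur-in-matching-buffers "\\.xhtml$" "[[:digit:]]-[[:digit:]]")
```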
That brings up a buffer called *Occur* with all of the matching lines. And here’s the next cool bit: you can press e to switch to occur-edit-mode: if you make a change in the *Occur* buffer, it propagates back to the source buffer. Which means that I can use standard tools like replace-string, query-replace, replace-regexp, and query-replace-regexp to make changes either in bulk, if I’m sure of what I’m doing, or one at a time if I’m not.
One difficulty: what if some lines match what you want, and some don’t, but the “bad” lines match some other pattern? For instance, in my case, the CD publishers used an ASCII dash followed by a space for what should have been an em dash. But searching for “- ” (a dash followed by space) cluttered up my *Occur* buffer with a lot of HTML comments.
No problem: *Occur* is just another Emacs buffer, so all I had to do was use replace-regexp to delete the lines with HTML comments, which cleared away a lot of distracting noise. (And no, deleting whole lines in *Occur* doesn’t delete the corresponding lines in the source buffers.)
I’ve written before about literate programming, and how one of its most attractive features is that you can write code with the primary goal of conveying information to a person, and only secondarily of telling a computer what to do. So there’s a bit in my .bashrc that adds directories to $PATH that isn’t as reader-friendly as I’d like:
for dir in \
    /usr/sbin \
    /opt/sbin \
    /usr/local/sbin \
    /some/very/specific/directory \
; do
    PATH="$dir:$PATH"
done
I’d like to be able to add a comment to each directory entry, explaining why I want it in $PATH, but sh syntax won’t let me: there’s just no way to interleave strings and comments this way. So far, I’ve documented these directories in a comment above the for loop, but that’s not exactly what I’d like to do. In fact, I’d like to do something like:
$PATH components:
- /usr/sbin
- /usr/local/bin
for dir in \
    {{path-components}} \
; do
    PATH="$dir:$PATH"
done
Or even:
$PATH components:

| *Directory*    | *Comments*                                                                       |
|----------------+----------------------------------------------------------------------------------|
| /usr/sbin      | sbin directories contain sysadminny stuff, and should go before bin directories. |
| /usr/local/bin | Locally-installed utilities take precedence over vendor-installed ones.          |
for dir in \
    {{path-components}} \
; do
    PATH="$dir:$PATH"
done
Spoiler alert: both are possible with org-mode.
Lists
The key is to use Library of Babel code blocks: these allow you to execute org-mode code blocks and use the results elsewhere. Let’s start by writing the code that we want to be able to write:
#+name: path-list
- /usr/bin
- /opt/bin
- /usr/local/bin
- /sbin
- /opt/sbin
- /usr/local/sbin
#+begin_src bash :noweb no-export :tangle list.sh
for l in \
    <<org-list-to-sh(l=path-list)>> \
; do
    PATH="$l:$PATH"
done
#+end_src
Note the :noweb argument to the bash code block, and the <<org-list-to-sh()>> call in noweb brackets. This is a function we need to write. It’ll (somehow) take an org list as input and convert it into a string that can be inserted in this fragment of bash code.
This function is a Babel code block that we will evaluate, and which will return a string. We can write it in any supported language we like, such as R or Python, but for the sake of simplicity and portability, let’s stick with Emacs lisp.
Next, we’ll want a test rig to actually write the org-list-to-sh function. Let’s start with:
#+name: org-list-to-sh
#+begin_src emacs-lisp :var l='nil
l
#+end_src
#+name: test-list
- First
- Second
- Third
#+CALL: org-list-to-sh(l=test-list) :results value raw
The begin_src block at the top defines our function. For now, it simply takes one parameter, l, which defaults to nil, and returns l. Then there’s a list, to provide test data, and finally a #+CALL: line, which contains a call to org-list-to-sh and some header arguments, which we’ll get to in a moment.
If you press C-c C-c on the #+CALL line, Emacs will evaluate the call and write the result to a #+RESULTS block underneath. Go ahead and experiment with the Lisp code and any parameters you might be curious about.
The possible values for the :results header are listed under “Results of Evaluation” in the Org-Mode manual. There are a lot of them, but the one we care the most about is value: we’re going to execute code and take its return value, not its printed output. But this is the default, so it can be omitted.
If you tangle this file with C-c C-v C-t, you’ll see the following in list.sh:
for l in \
    ((/usr/bin) (/opt/bin) (/usr/local/bin) (/sbin) (/opt/sbin) (/usr/local/sbin)) \
; do
    PATH="$l:$PATH"
done
It looks as though our org-mode list got turned into a Lisp list. As it turns out, yes, but not really: org-list-to-sh received the list as a list of lists of strings, returned it unchanged, and the return value was then rendered as text and inserted into the tangled file. How it gets rendered is controlled by the :results header: we’ll want :results raw in the definition of org-list-to-sh. If you play around with other values, you’ll see why they don’t work: vector wraps the result in extraneous parentheses, scalar adds extraneous quotation marks, and so on.
Really, what we want is a plain string, generated from Lisp code and inserted in our sh code as-is. So we’ll need to change the org-list-to-sh code to return a string, and use :results raw to insert that string unchanged in the tangled file.
We saw above that org-list-to-sh sees its parameter as a list of lists of strings, so let’s concatenate those strings, with space between them:
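Here’s a minimal version of the finished function (a sketch: each inner list holds a single directory string, so we take the car of each sublist and join them with spaces):

```elisp
#+name: org-list-to-sh
#+begin_src emacs-lisp :var l='nil :results raw
  (mapconcat #'car l " ")
#+end_src
```

Tangling again then yields: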
for l in \
    /usr/bin /opt/bin /usr/local/bin /sbin /opt/sbin /usr/local/sbin \
; do
    PATH="$l:$PATH"
done
which looks pretty nice. It would be nicer still to break that list of strings across multiple lines, and also to quote them (in case there are directories with spaces in their names), but I’ll leave that as an exercise for the reader.
Tables
That takes care of converting an org-mode list to a sh string. But earlier I said it would be even better to define the $PATH components in an org-mode table, with directories in the first column and comments in the second. This is easy, with what we’ve already done with strings. Let’s add a test table to our org-mode code, and some code to just return its input:
#+name: echo-input
#+begin_src emacs-lisp :var l='nil :results raw
l
#+end_src
#+name: test-table
| *Name*   | *Comment*        |
|----------+------------------|
| /bin     | First directory  |
| /sbin    | Second directory |
| /opt/bin | Third directory  |
#+CALL: echo-input(l=test-table) :results value code
#+RESULTS:
Press C-c C-c on the #+CALL line to evaluate it, and take a look at the results.
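They’ll look something like this (a sketch based on the description that follows; exact layout may differ):

```elisp
#+RESULTS:
#+begin_src emacs-lisp
(("/bin" "First directory")
 ("/sbin" "Second directory")
 ("/opt/bin" "Third directory"))
#+end_src
```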
First of all, note that, just as with lists, the table is converted to a list of lists of strings, where the first string in each list is the name of the directory. So we can just reuse our existing org-list-to-sh code. Secondly, org has helpfully stripped the header line and the horizontal rule underneath it, giving us a clean set of data to work with (this seems a bit fragile, however, so in your own code, be sure to sanitize your inputs). Just convert the list of directories to a table of directories, and you’re done.
Conclusion
We’ve seen how to convert org-mode lists and tables to code that can be inserted into a sh (or other language) source file when it’s tangled. This means that when our code includes data best represented by a list or table, we can, in the spirit of literate programming, use org-mode formatting to present that data to the user as a good-looking list or table, rather than just list it as code.
One final homework assignment: in the list or table that describes the path elements, it would be nice to use org-mode formatting for the directory name itself: =/bin= rather than /bin. Update org-list-to-sh to strip the formatting before converting to sh code.
Writing this down before I forget, somewhere where I won’t think to look for it the next time I need it.
So you’re running Container Station (i.e., Docker) on a QNAP NAS, and naturally you’ve created a cert for it, because why wouldn’t you? Except that it expired a few days ago and you forgot to renew it, because apparently you didn’t have calendar technology when you originally created the cert. And now Container Station won’t renew the cert because it’s expired, and it won’t tell you that: it just passively-aggressively lets you click the Renew Certificate button, but nothing changes and the Docker port keeps using the old, expired cert. What to do?
1. Stop Container Station.
2. Log in to the NAS and delete /etc/docker/tls (or just rename it).
3. Restart Container Station. Open it, and note the dialog box saying that the cert needs to be renewed.
4. Under Preferences → Docker Certificate, download the new certificate.
5. Restart Container Station to make it pick up the new cert.
6. Unzip the cert in your local Docker certificate directory: either ~/.docker or whatever you’ve set $DOCKER_CERT_PATH to.
7. Check that you have the right cert: the cert.pem that you just unzipped should be from the same keypair that’s being served by the Docker server. openssl x509 -noout -modulus -in cert.pem | openssl md5 and openssl s_client -connect $DOCKER_HOST:$DOCKER_PORT | openssl x509 -noout -modulus | openssl md5 should return the same string.
8. Check the expiration date on the new cert, subtract 7 days, open a calendar at that date, and this time write down “Renew Docker certificate”.
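The modulus check above can be wrapped in a couple of shell functions, which makes it easier to re-run (a sketch; it assumes $DOCKER_HOST and $DOCKER_PORT are set as in your Docker client config):

```shell
# Print the MD5 fingerprint of a certificate's modulus. When the local
# cert matches the one the server is serving, both functions below should
# print the same string.
cert_modulus() {
    openssl x509 -noout -modulus -in "$1" | openssl md5
}

served_modulus() {
    # </dev/null makes s_client exit instead of waiting for input
    openssl s_client -connect "$1:$2" </dev/null 2>/dev/null \
        | openssl x509 -noout -modulus | openssl md5
}

# Usage:
#   [ "$(cert_modulus cert.pem)" = "$(served_modulus "$DOCKER_HOST" "$DOCKER_PORT")" ] \
#       && echo "cert matches" || echo "MISMATCH"
```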
In Iowa’s Gazette, one John Hendrickson has an editorial on the importance of preserving the Electoral College, as opposed to electing the president by popular vote. So let’s see what his reasons are.
He starts out with a few paragraphs that use words like “attack”, “elimination”, “undermine” to create a vague feeling that something bad will happen if the Electoral College is eliminated, without actually making anything that could be considered an argument.
He eventually gets to:
When the Founding Fathers met in Philadelphia in 1787, our Constitution created a republican form of governance that limited the powers of the federal government. In Federalist Paper 45 James Madison wrote: “The powers delegated by the proposed Constitution to the federal government are few and defined. Those which are to remain in the State governments are numerous and indefinite.” The 10th Amendment further solidified what Madison wrote in Federalist 45. Historian Allen Guelzo correctly points out that the “Constitution never set out to create a streamlined national government.”
Hearkening back to the founding of the nation is very much on-brand. The revolutionary period is seen by many as a golden age, when a perfect constitution leapt, Athena-like, from the founders’ collective brow. Even a cursory glance at American history (or just at the first ten amendments) belies this simplistic story: a bunch of states wouldn’t join the union unless the constitution were first amended.
Beyond that, Hendrickson still isn’t making much of an argument: he implies that the National Popular Vote movement somehow wants to give to the federal government powers that were properly given to the states, but doesn’t spell out what these powers are, or how they would be taken away. If the National Popular Vote Interstate Compact passes, elections will still be conducted by the states, the same way as before. In fact, the compact relies on the fact that each state can conduct elections however it wants to, including taking into account the way other states vote, if it wants to.
Hendrickson continues:
The American Founders did not wish the states to have a diminished role under the Constitution. The late constitutional scholar James McClellan wrote that the “entire Constitution is actually honeycombed with provisions designed to protect the residual sovereignty and interests of the states and to give them influence in the decision-making process at the national level.”
This certainly includes the Electoral College. In considering the election of the executive, the Founders rejected outright a direct, national election. In designing the Electoral College, the Founders wanted to ensure that the selection of the executive was independent of Congress and included the states; the Electoral College was designed to protect the interests of the states and the people.
Again, there’s lots of handwaving, but very little by way of actual argument. “[T]he Founders rejected outright a direct, national election.” Well, yes; the Electoral College was a compromise to get states like Virginia to join the union: Virginia had a lot of slaves, but comparatively few voters, and the Electoral College gave it more influence in presidential elections than it would otherwise have had.
But the mere fact that the founders set things up a certain way is no reason to keep them that way. Back in 1789, different states might as well have been different countries. Today, you can easily send money from a bank in Maryland to order something from a company in Connecticut (whose headquarters are in Delaware), and have it shipped from Georgia. Heck, Oklahoma currently has the same population as the entire country did in 1789. Times are different. If we’re going to quote the founders, how about Thomas Jefferson?
“I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors.”
Hendrickson also writes, “the Founders wanted to ensure that the selection of the executive was independent of Congress and included the states”. Again, this is just a vague threat: he doesn’t say how Congress would come to control presidential elections, or how the states would be excluded; he just throws it out there.
Americans need to understand the importance of federalism to our constitutional republic. Federalism is as essential as separation of powers and checks and balances, allowing the states to maintain a level of sovereignty from the federal government. The Electoral College is the final “Rock of Gibraltar” defense of federalism. “Federalism is in the bones of our nation and abolishing the Electoral College would point toward doing away with the entire federal system,” argues Guelzo.
Here, Hendrickson seems to be making some kind of slippery-slope argument, that electing the president by popular vote will somehow erode the separation of powers and, I don’t know, lead to autocratic rule or something. I’d like to avoid autocratic rule, and I’d be interested in how Hendrickson thinks this slow erosion might happen, if it weren’t for the fact that our Capitol was attacked a year and a half ago by a mob trying to undermine democracy and install an autocrat.
If the Electoral College was eliminated, Iowa would be ignored by presidential candidates and the voices of Iowans would be drowned out as elections would be decided by large urban centers.
Finally, we get to an actual pair of arguments. They’re stupid, but they’re still arguments: 1) if it weren’t for the Electoral College, candidates would ignore Iowa and Iowans’ concerns, and 2) the outcome of the election would be decided solely by big cities.
I’ve dispensed with (2) elsewhere. As for (1), the reason political campaigns pay attention to Iowa isn’t the Electoral College; it’s the fact that it has the first presidential caucuses. So there’s symbolism in winning the Iowa caucuses. It gives the winning candidates a morale boost. Beyond that, once the candidates are finally chosen, no one gives a shit. Iowa is a red state, one of those that the networks call on election night as soon as they’re legally allowed to. The Democrats write it off, and Republicans take it for granted. Getting rid of the Electoral College might actually help with that.
The fact that supporters of the Electoral College feel the need to write this sort of column is telling. If they had any solid arguments, they’d trot them out. Instead, we get this wishy-washy appeal to tradition and fear of change.
A while back, I became intrigued by Donald Knuth’s idea of Literate Programming, and decided to give it a shot. That first attempt was basically just me writing down what I knew as quickly as I learned it, and trying to pass it off as a knowledgeable tutorial. More recently, I tried a second project, a web-app that solves Wordle, and thought I’d write it in the Literate style as well.
The first time around, I learned the mechanics. The second time, I was able to learn one or two things about the coding itself.
(For those who don’t remember: in literate programming, you write code intertwined with prose that explains the code, and a post-processor turns the result into a pretty document for humans to read, and ugly code for computers to process.)
1) The thing I liked the most, the part where literate programming really shines, is having the code be grouped not by function or by class, but by topic. I could introduce a <div class="message-box"></div> in the main HTML file, and in the next paragraph introduce the CSS that styles it, and the JavaScript code that manipulates it.
2) In the same vein, several times I rearranged the source to make the explanations flow better, holding off on discussing variables or functions until I had explained why they’re there and what they do, all without altering the underlying HTML or JavaScript output. In fact, this led to a stylistic quandary:
3) I defined a few customization variables. You know, the kind that normally go at the top for easy customization:
var MIN_FOO = 30;
var MAX_FOO = 1500;
var LOG_FILE = "/var/log/mylogfile.log";
Of course, the natural tendency was to put them next to the code that they affect, somewhere in the middle of the source file. Should I have put them at the top of my source instead?
4) Even smaller: how do you pass command-line option definitions to getopt()? If you have options -a, -b, and -c, each will normally be defined in its own section. So in principle, the literate thing to do would be to write
getopt("{{option-a}}{{option-b}}{{option-c}}");
and have a section that defines option-a as “a”. As you can see, though, defining single-letter strings isn’t terribly readable, and literate programming is all about readability.
5) Speaking of readability, one thing that can come in really handy is the ability to generate a pretty document for human consumption. Knuth’s original tools generated TeX, of course, and it doesn’t get prettier than that.
I used org-mode, which accepts TeX style math notation, but also allows you to embed images and graphviz graphs. In my case, I needed to calculate the entropy of a variable, so being able to use proper equations, with nicely-formatted sigmas and italicized variables, was very nice. I’ve worked in the past on a number of projects where it would have been useful to embed a diagram with circles and arrows, rather than using words or ASCII art.
6) I was surprised to find that I had practically no comments in the base code (in the JavaScript, HTML, and CSS that were generated from my org-mode source file). I normally comment a lot. It’s not that I was less verbose. In fact, I was more verbose than usual. It’s just that I was putting all of the explanations about what I was trying to do, and why things were the way they are, in the human-docs part of the source, not the parts destined for computer consumption. Which, I guess, was the point.
7) Related to this, I think I had fewer bugs than I would normally have gotten in a project of this size. I don’t know why, but I suspect that it was due to some combination of thinking “out loud” (or at least in prose) before pounding out a chunk of code, and of having related bits of code next to each other, and not scattered across multiple files.
8) I don’t know whether I could tackle a large project in this way. You might say, “Why not? Donald Knuth wrote both TeX and Metafont as literate code, and even published the source in two fat books!” Well, yeah, but he’s Donald Knuth. Also, he was writing before IDEs, or even color-coded code, were available.
I found org-mode to be the most comfortable tool for me to use for this project. But of course that effectively prevents people who don’t use Emacs (even though they obviously should) from contributing.
One drawback of org-mode as a literate programming development environment is that you’re pretty much limited to one source file, which obviously doesn’t scale. There are other tools out there, like noweb, but I found those harder to set up, or they forced me to use (La)TeX when I didn’t want to, or the like.
9) One serious drawback of org-mode is that it makes it nearly impossible to add cross-reference links. If you have a section like
function myFunc() {
    var thing;
    {{calculate thing}}
    return thing;
}
it would be very useful to have {{calculate thing}} be a link you can click to jump to the definition of that chunk. But this is much harder to do in org-mode than it should be. So is labeling chunks, so that people can chase cross-references even without clickable links. Org-mode still has a lot of work to be done in that regard.
The Cato Institute has an article arguing against the NPVIC. What I find interesting is that they use arguments that I haven’t seen a million times elsewhere:
Direct election of Electors: they argue that one reason the 1960 election was so close is that in Alabama, voters explicitly elected Electors, not presidential candidates. And that several of the chosen Electors were not pledged to any candidate. Under these circumstances, there’s no such thing as Alabama’s popular vote for president.
In this case, of course, the Secretary of State of each NPVIC state would, I presume, count the votes of people who voted for a pledged Elector as a vote for the candidate the Elector is pledged to vote for, and votes for unpledged Electors as “none of the above”: not votes for any presidential candidate in particular, so they don’t add to anyone’s tally.
Of course, while this sort of thing is both legal and in line with how I imagine the Founders imagined elections should be run, I can’t imagine any state adopting such a system any time in the foreseeable future: too many people are too used to the idea of voting for a presidential candidate to step away from that.
The Compact’s language simply assumes the existence of a traditional popular vote total in each state but it provides no details on how that is to be ascertained.
This is true. On one hand, yes, this seems like a flaw, since it provides little or no guidance in ambiguous or problematic cases. On the other hand, it gives a lot of power to states, the “laboratories of democracy”, which can come up with their own solutions.
Other shenanigans. North Dakota has already introduced a bill to publish a rough vote count, but withhold the precise vote totals until after the Compact states have to come up with a national vote winner. Yes, this is a clear jab at the NPVIC. Again, it seems the sensible approach for a Compact Secretary of State would be to take the minimum values of the rough counts and add those to each candidate’s totals.
In this particular case, North Dakota doesn’t have enough voters to make a difference in any but the tightest elections, but things could be different if a state like Texas or Florida tried to pull this. Of course, if Secretaries of State adopt the strategy I suggested above, that means that Texas or Florida would be reducing its vote counts (and its influence in the election) just to thumb its nose at a plan it doesn’t like. Which is not to say it couldn’t happen.
More generally, it seems likely that state legislatures will play silly games to try to undermine the NPVIC by blurring the vote count, making the Compact difficult to enforce, or otherwise. This could lead to some chaotic elections as states scramble to figure out how to come up with a popular vote when not all states are cooperating. In the long term, though, if the NPVIC passes, I suspect that people will quickly become enamored of directly voting for president, and won’t want to turn back the clock, not even to own the party they dislike.
Ranked Choice Voting. Maine has apparently already used ranked-choice voting in presidential elections, and this does seem to present a special challenge.
I don’t think this is worth worrying about, though, since Maine already has to pick electors, which means they have to have a way of coming up with a final vote. I haven’t looked into this, but after some number of elimination rounds, some candidate gets N votes, where N is greater than 50%, and gets some number of Electoral Vote pledges. Maine also has a split system where not all of its Electors vote the same way, but however it’s decided, it has to boil down to “N votes > M votes”. So just add up the Ns to get Maine’s contribution to the national popular vote.
Cato’s objection, however, is a bit different: whoever wins the final round might actually end up with more votes than any of the first-choice candidates. So that creates an incentive for a candidate to not try too hard in Maine, and actually try to come out #3 or #4, rather than #1.
IMHO this seems fantastic. I seriously doubt that anyone can campaign with that kind of laser-precise skill. Candidates already have a hard enough time trying to be #1. I don’t know how you’d even manage to try to be #3 without seriously risking losing the election altogether.
But beyond that, ranked-choice voting is designed so that the candidates who come out ahead after several rounds are the compromise candidates that no one is especially excited about, but that everyone can live with. I would expect to see people like Bernie Sanders, Lyndon Larouche, Ralph Nader, Donald Trump — candidates that people feel very strongly about — to be at the top of ballots, and people like Joe Biden and Mitt Romney further down, under “I’ll settle for this person” rather than “I really want this person to be president.” So to the extent that Cato’s argument is true, it would seem that ranked-choice voting would tend to boost consensus or compromise candidates in Maine. And that’s fine.
Inconsistent results. The worst-case scenario envisioned by the Cato article is one in which different member states can’t agree on who won the popular vote, and allocate their Electors in an inconsistent manner. Once the dust settles, and the national popular vote is agreed on, it might be that the national popular vote winner didn’t get the presidency. I agree that that would be bad, but it also seems that the odds of this happening seem to be lower than one in nine, which means it’s already an improvement over the current system.
It’s well known that reading code is a lot harder than writing it. But I recently got some insight as to why that is.
I was debugging someone else’s sh script. This one seemed harder to read than most. There was a section that involved figuring out a bunch of dates associated with a particular object. The author was careful to show their work, and not just have a bunch of opaque calculations in the code. I won’t quote their code, but imagine something like:
NOW=$(date +%s)
THING_CREATION_EPOCH=$(<get $THING creation time, in Unix time format>)
THING_AGE_EPOCH=$(( $NOW - $THING_CREATION_EPOCH ))
THING_AGE_DAYS=$(( $THING_AGE_EPOCH / 86400 ))
Now imagine this for three or four aspects of $THING, like last modification time, which other elements use $THING, things like that. The precise details don’t matter.
Each variable makes sense. But there are four of them, just for the thing’s age since creation. And if you’re not as intimately familiar with it as someone who just wrote this code, that means you have to keep track of four variables in your head, and that gets very difficult very quickly.
Part of the problem is that it’s unclear which variables will be needed further down (or even above, in functions that use global variables), so you have to hold on to these variables; you can’t mentally let them go. Compare this to something like
<Initialize some stuff>
for i in $LIST_OF_THINGS; do
    ProcessStuff $i
done
<Finalize some stuff>
Here, you can be reasonably sure that $i won’t be used outside the loop. Once you get past done, you can let it go. Yes, it still exists, and it’s not illegal to use $i in the finalization code, but a well-meaning author won’t do that.
Which leads to a lesson to be learned from this: limit the number of variables that are used in any given chunk of code. You don’t want to have to remember some variable five pages away. To do this, try to break your code down into independent modules. These can be functions, classes, even just paragraph-sized chunks in some function. Ideally, you want your reader to be able to close the chapter, forget most of the details, and move on to the next bit.
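As a sketch of that advice applied to the earlier date example (the function name is made up), the intermediates become local to one function, so the reader can drop them from their head at the closing brace:

```shell
# Compute a thing's age in whole days, given its creation time in
# Unix-epoch seconds. The intermediate values never escape the function.
thing_age_days() {
    local now age_seconds
    now=$(date +%s)
    age_seconds=$(( now - $1 ))
    echo $(( age_seconds / 86400 ))
}

# Callers see only the one value they actually need:
#   AGE_DAYS=$(thing_age_days "$THING_CREATION_EPOCH")
```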
In the same vein, this illustrates one reason global variables are frowned upon: they can potentially be accessed from anywhere in the code, which means that they never really disappear and can’t be completely ignored.
The other day, I was talking to an old friend and doing that annoying ritual where I told her all of the phone numbers and email addresses I had for her, and she told me which ones were out of date and which ones I was missing. And I wondered why there’s not a better way of doing this. Especially when it’s so simple.
For those who don’t know: when you make an appointment to take your car to the garage and the web site allows you to add it to your calendar, or when your doctor emails you a reminder for your upcoming exam that you can click and add to your calendar, you’re downloading an iCalendar file. This is a file format for describing calendar events. All major calendar tools support it. It’s a well-known, widely-supported standard file type.
You can also subscribe to a calendar: your school or community center can publish a calendar of events simply by publishing an iCalendar file on their web site. Then your desktop calendar program can check the calendar’s URL once an hour or whatever, and show you the events.
Now, there’s another file type for contacts, called vCard or VCF. Where iCalendar is for appointments, vCard is for contacts: you can store a person’s name, addresses, job title, phone numbers, web site, and so on and so forth. Tools like Google Contacts allow you to export some or all of your contacts in vCard format because it’s so widely-supported.
Which brings up the obvious question I asked above: why can’t I subscribe to an address card the same way I subscribe to a calendar? My friend could simply have put her vCard file on a web site somewhere, and my address book utility could check the URL once a day or so to see whether she changed her phone number or postal address, or added a Discord account or something.
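There’s no such publishing convention today, but the client side would be almost trivial. A hypothetical sketch, with a made-up URL and path:

```shell
# Hypothetical: fetch a published vCard (say, once a day from cron) and
# update the local copy only when it has changed.
update_vcard() {
    url=$1; local_copy=$2
    tmp=$(mktemp) || return 1
    if curl -fsS "$url" -o "$tmp" && ! cmp -s "$tmp" "$local_copy"; then
        mv "$tmp" "$local_copy"   # the card changed: keep the new version
        echo "updated"
    else
        rm -f "$tmp"              # fetch failed, or nothing changed
    fi
}

# e.g. from a daily cron job:
#   update_vcard "https://example.com/jane.vcf" "$HOME/.contacts/jane.vcf"
```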
Back when the Macintosh first came out, in the 1980s, it was presented as a more user-friendly alternative to PCs running MS-DOS: it had a mouse that you could point with, and a graphical interface. Instead of memorizing commands and reading cryptic error messages, you could click on icons, or explore menus to see what was available.
Secondarily, Apple products have always been sleek. The iPod may have been just a microprocessor glued to a hard drive, but it certainly looked attractive. Especially lately, any Apple product, be it server, desktop, wearable, or software, is sleek and uncluttered.
But lately, Apple seems to have gone all in on sleek design at the expense of usability. They used to be a leader in usability and now, a lot of their design choices are just… bad.
The latest example I ran across is in the Activity Monitor. The top of the window looks like:
The “x” in an octagon is a button to stop the currently-selected process. That’s pretty useful. Now, what if you don’t see the process you want to kill, but you know what it’s called? There’s a handy search tool. Let’s click on that:
Now I can search for a process, but where did the buttons and tabs go? It’s not as though there isn’t space for any of them: when you open the search tool, there’s a huge blank space next to the search field that the buttons, at least, could easily fit into.
But okay, you search for, and find, the process you want to kill, and highlight it:
Aha. It turns out that that chevron was actually a menu button. And there’s our “Stop”. Except that originally, we saw an octagonal icon, and now we see a word. There’s nothing on the screen to indicate that the icon and the menu entry are the same thing, and that’s just poor design.
I keep noticing this sort of thing more and more with Apple products, and it annoys me. This just happened to be a particularly egregious example.
I’m still annoyed at the time I thought my annotations to an MP3 file had been deleted by iTunes. It turned out that the information was there, but beneath the bottom of the window, and iTunes couldn’t be bothered to show me a scroll bar or give any indication that there was more than one windowful of information.