Archives 2021

Cato Institute argues against NPVIC

The Cato Institute has an article arguing against the NPVIC. What I find interesting is that they use arguments that I haven’t seen a million times elsewhere:

Direct election of Electors: they argue that one reason the 1960 election was so close is that in Alabama, voters explicitly elected Electors, not presidential candidates. And that several of the chosen Electors were not pledged to any candidate. Under these circumstances, there’s no such thing as Alabama’s popular vote for president.

In this case, of course, the Secretary of State of each NPVIC state would, I presume, count the votes of people who voted for a pledged Elector as a vote for the candidate the Elector is pledged to vote for, and votes for unpledged Electors as “none of the above”: not votes for any presidential candidate in particular, so they don’t add to anyone’s tally.

Of course, while this sort of thing is both legal and in line with how I suspect the Founders imagined elections would be run, I can’t imagine any state adopting such a system any time in the foreseeable future: too many people are too used to the idea of voting for a presidential candidate to step away from that.

The Compact’s language simply assumes the existence of a traditional popular vote total in each state but it provides no details on how that is to be ascertained.

This is true. On one hand, yes, this seems like a flaw, since it provides little or no guidance in ambiguous or problematic cases. On the other hand, it gives a lot of power to states, the “laboratories of democracy”, which can come up with their own solutions.

Other shenanigans. North Dakota legislators have already introduced a bill to publish a rough vote count, but withhold the precise vote totals until after the Compact states have to come up with a national vote winner. Yes, this is a clear jab at the NPVIC. Again, it seems the sensible approach for a Compact Secretary of State would be to take the minimum values of the rough counts and add those to each candidate’s totals.

In this particular case, North Dakota doesn’t have enough voters to make a difference in any but the tightest elections, but things could be different if a state like Texas or Florida tried to pull this. Of course, if Secretaries of State adopt the strategy I suggested above, that means that Texas or Florida would be reducing its vote counts (and its influence in the election) just to thumb its nose at a plan it doesn’t like. Which is not to say it couldn’t happen.

More generally, it seems likely that state legislatures will play silly games to try to undermine the NPVIC by blurring the vote count, making the Compact difficult to enforce, or otherwise. This could lead to some chaotic elections as states scramble to figure out how to come up with a popular vote when not all states are cooperating. In the long term, though, if the NPVIC passes, I suspect that people will quickly become enamored of directly voting for president, and won’t want to turn back the clock, not even to own the party they dislike.

Ranked Choice Voting. Maine apparently already uses ranked-choice voting in presidential elections, and this does seem to present a special challenge.

I don’t think this is worth worrying about, though, since Maine already has to pick Electors, which means it already has a way of coming up with a final count. I haven’t looked into the details, but after some number of elimination rounds, some candidate gets N votes, where N is more than 50% of the ballots still in play, and earns some number of Electoral Vote pledges. Maine also splits its Electoral Votes, so not all of its Electors need vote the same way; but however each contest is decided, it has to boil down to “N votes > M votes”. So just add up the Ns to get Maine’s contribution to the national popular vote.

Cato’s objection, however, is a bit different: whoever wins the final round might end up with more votes than any candidate received in the first round. So that creates an incentive for a candidate not to try too hard in Maine, and actually to aim to come out #3 or #4, rather than #1.

IMHO this seems far-fetched. I seriously doubt that anyone can campaign with that kind of laser-precise skill. Candidates already have a hard enough time trying to be #1; I don’t know how you’d even manage to aim for #3 without seriously risking losing the election altogether.

But beyond that, ranked-choice voting is designed so that the candidates who come out ahead after several rounds are the compromise candidates that no one is especially excited about, but that everyone can live with. I would expect people like Bernie Sanders, Lyndon LaRouche, Ralph Nader, Donald Trump — candidates that people feel very strongly about — to be at the top of ballots, and people like Joe Biden and Mitt Romney further down, under “I’ll settle for this person” rather than “I really want this person to be president.” So to the extent that Cato’s argument is true, ranked-choice voting would tend to boost consensus or compromise candidates in Maine. And that’s fine.

Inconsistent results. The worst-case scenario envisioned by the Cato article is one in which different member states can’t agree on who won the popular vote, and allocate their Electors in an inconsistent manner. Once the dust settles and the national popular vote is agreed on, it might turn out that the national popular vote winner didn’t get the presidency. I agree that that would be bad, but the odds of this happening seem lower than one in nine (roughly the rate at which the current system has already handed the presidency to the popular-vote loser), which means even this worst case is an improvement over the current system.

Readable Code: Variable Overload

It’s well known that reading code is a lot harder than writing it. But I recently got some insight as to why that is.

I was debugging someone else’s sh script. This one seemed harder to read than most. There was a section that involved figuring out a bunch of dates associated with a particular object. The author was careful to show their work, and not just have a bunch of opaque calculations in the code. I won’t quote their code, but imagine something like:

NOW=$(date +%s)    # the current time, in seconds since the Unix epoch
THING_CREATION_EPOCH=$(<get $THING creation time, in Unix time format>)
THING_AGE_EPOCH=$(( $NOW - $THING_CREATION_EPOCH ))    # age, in seconds
THING_AGE_DAYS=$(( $THING_AGE_EPOCH / 86400 ))    # 86400 seconds in a day

Now imagine this for three or four aspects of $THING: last modification time, which other elements use $THING, things like that. The precise details don’t matter.

Each variable makes sense on its own. But there are four of them, just for the thing’s age since creation. And if you’re not as intimately familiar with the code as the person who just wrote it, you have to keep track of four variables in your head, and that gets very difficult very quickly.

Part of the problem is that it’s unclear which variables will be needed further down (or even above, in functions that use global variables), so you have to hold on to all of them; you can’t mentally let any of them go. Compare this to something like

# <Initialize some stuff>
for i in $LIST_OF_THINGS; do
    ProcessStuff $i
done
# <Finalize some stuff>

Here, you can be reasonably sure that $i won’t be used outside the loop. Once you get past done, you can let it go. Yes, it still exists, and it’s not illegal to use $i in the finalization code, but a well-meaning author won’t do that.

Which leads to a lesson: limit the number of variables in play in any given chunk of code. You don’t want your reader to have to remember some variable from five pages back. To do this, try to break your code down into independent modules. These can be functions, classes, even just paragraph-sized chunks in some function. Ideally, you want your reader to be able to close the chapter, forget most of the details, and move on to the next bit.

In the same vein, this illustrates one reason global variables are frowned upon: they can be accessed from anywhere in the code, which means they never really disappear and can never be completely ignored.
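Putting both lessons together: if the date code above were wrapped in a function with local variables, the reader could forget all of the intermediate values at the closing brace. A minimal sketch (the function name and layout are mine, not the original script’s; "local" isn’t strictly POSIX, but every common /bin/sh supports it):

thing_age_days() {
    # $1 is the creation time, in seconds since the Unix epoch.
    local now creation_epoch age_seconds
    now=$(date +%s)
    creation_epoch=$1
    age_seconds=$(( now - creation_epoch ))
    echo $(( age_seconds / 86400 ))    # 86400 seconds in a day
}

THING_AGE_DAYS=$(thing_age_days "$THING_CREATION_EPOCH")

Once the function returns, now, creation_epoch, and age_seconds are out of play, and the reader only has to carry THING_AGE_DAYS forward.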

Why Can’t I Subscribe to an Address Card?

The other day, I was talking to an old friend and doing that annoying ritual where I told her all of the phone numbers and email addresses I had for her, and she told me which ones were out of date and which ones I was missing. And I wondered why there’s not a better way of doing this. Especially when it’s so simple.

For those who don’t know: when you make an appointment to take your car to the garage and the web site allows you to add it to your calendar, or when your doctor emails you a reminder for your upcoming exam that you can click and add to your calendar, you’re downloading an iCalendar file. This is a file format for describing calendar events. All major calendar tools support it. It’s a well-known, widely-supported standard file type.

You can also subscribe to a calendar: your school or community center can publish a calendar of events simply by publishing an iCalendar file on their web site. Then your desktop calendar program can check the calendar’s URL once an hour or whatever, and show you the events.

Now, there’s another file type for contacts, called vCard or VCF. Where iCalendar is for appointments, vCard is for contacts: you can store a person’s name, addresses, job title, phone numbers, web site, and so on and so forth. Tools like Google Contacts allow you to export some or all of your contacts in vCard format because it’s so widely-supported.
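To make that concrete, here’s what a minimal vCard might look like (the person and numbers are made up, of course):

BEGIN:VCARD
VERSION:3.0
N:Example;Alice;;;
FN:Alice Example
TEL;TYPE=cell:+1-555-0123
EMAIL;TYPE=internet:alice@example.com
END:VCARD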

Which brings up the obvious question I asked above: why can’t I subscribe to an address card the same way I subscribe to a calendar? My friend could simply have put her vCard file on a web site somewhere, and my address book utility could check the URL once a day or so to see whether she changed her phone number or postal address, or added a Discord account or something.
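There’s nothing technically hard about this, either. As a rough sketch of the idea (the URL and paths are made up for illustration), a dumb polling client is just a few lines of shell around curl and diff:

#!/bin/sh
# Hypothetical example: fetch a published vCard once a day (say, from cron)
# and report when it changes.
URL="https://example.com/alice.vcf"
CACHE="$HOME/.cache/contacts/alice.vcf"

mkdir -p "$(dirname "$CACHE")"
curl -fsS "$URL" -o "$CACHE.new" || exit 1

if [ -f "$CACHE" ] && ! diff -q "$CACHE" "$CACHE.new" >/dev/null; then
    echo "Contact card at $URL has changed:"
    diff -u "$CACHE" "$CACHE.new"
    # A real address book would merge the changes here.
fi
mv "$CACHE.new" "$CACHE"

A real client would want authentication, since you may not want the whole world reading your phone number, but none of that is new, either.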

So… why?

The Triumph of Looks over Function

Back when the Macintosh first came out, in the 1980s, it was presented as a more user-friendly alternative to PCs running MS-DOS: it had a mouse that you could point with, and a graphical interface. Instead of memorizing commands and reading cryptic error messages, you could click on icons, or explore menus to see what was available.

Secondarily, Apple products have always been sleek. The iPod may have been just a microprocessor glued to a hard drive, but it certainly looked attractive. Especially lately, any Apple product, be it server, desktop, wearable, or software, is sleek and uncluttered.

But lately, Apple seems to have gone all in on sleek design at the expense of usability. They used to be a leader in usability; now, a lot of their design choices are just… bad.

The latest example I ran across is in the Activity Monitor. The top of the window looks like:
[Screenshot: OS X Activity Monitor window]

The “x” in an octagon is a button to stop the currently-selected process. That’s pretty useful. Now, what if you don’t see the process you want to kill, but you know what it’s called? There’s a handy search tool. Let’s click on that:
[Screenshot: OS X Activity Monitor window with the search button selected]

Now I can search for a process, but where did the buttons and tabs go? It’s not as though there isn’t room for them: when you open the search tool, there’s a huge blank space next to the search field that the buttons, at least, could easily fit into.

But okay, you search for, and find, the process you want to kill, and highlight it:
[Screenshot: Activity Monitor with a process selected and the menu open]

Aha. It turns out that that chevron was actually a menu button. And there’s our “Stop”. Except that originally, we saw an octagonal icon, and now we see a word. There’s nothing on the screen to indicate that the icon and the menu entry are the same thing, and that’s just poor design.

I keep noticing this sort of thing more and more with Apple products, and it annoys me. This just happened to be a particularly egregious example.

I’m still annoyed at the time I thought my annotations to an MP3 file had been deleted by iTunes. It turned out that the information was there, but beneath the bottom of the window, and iTunes couldn’t be bothered to show me a scroll bar or give any indication that there was more than one windowful of information.

Debate: Citizens United: Good or Bad?

The Citizens United decision has proven quite controversial, with passionate advocates both for and against it. So why not have a debate?

For those who don’t remember, Citizens United was an organization that made a movie critical of then-candidate Hillary Clinton. The Federal Election Commission held that campaign finance law barred the group from airing it close to an election, and the group fought the case all the way to the Supreme Court, which ruled that money is speech, and corporations are people, and since you can’t restrict people’s free speech, corporations can spend as much money as they want on political advocacy.

Resolution

Citizens United is a Good Thing.

Pro

First up, we have Senator Ted Cruz, who thinks Citizens United is a good idea:

Following Sen. Whitehouse’s 30-minute denunciation of dark money, Sen. Ted Cruz, R-Texas, used part of his time to defend the landmark Supreme Court case Citizens United that allowed for corporations and unions to spend unlimited money on political ads and other forms of influence campaigns.

“Citizens United concerned whether or not it was legal to make a movie criticizing a politician…”

On his Senate web page, he adds (emphasis in the original):

“The Obama Justice Department took the position that it could fine — it could punish Citizens United for daring to make a movie critical of a politician. The case went all of the way to the U.S. Supreme Court. At the oral argument, there was a moment that was truly chilling. Justice Sam Alito asked the Obama Justice Department, ‘Is it your position under your theory of the case that the Federal Government can ban books?’ And the Obama Justice Department responded yes. […] As far as I am concerned, that is a terrifying view of the First Amendment. […] By a narrow five-four majority, the Supreme Court concluded the First Amendment did not allow the Federal Government to punish you for making a movie critical of a politician. And likewise that the Federal Government couldn’t ban books. Four justices dissented, four justices were willing to say the federal government can ban books.”

Con

And now, opposing the motion, please welcome Senator Ted Cruz:

Sens. Ted Cruz and Josh Hawley raised concerns about getting meaningful legislation aimed at Silicon Valley passed because the Biden administration and prominent Democrats, who control Congress, could be beholden to financial ties to technology giants.

“Big Tech are the largest financial supporters of Democrats in the country,” Cruz told the Washington Examiner on Tuesday. “And so, to date, we have seen occasional rhetoric from Democrats directed at Big Tech, but when they’re your single-biggest donors, it shouldn’t be surprising that Democrats have been far less willing to engage in concrete action to rein in Big Tech.”

We hope you’ve enjoyed this debate, and will thank our debaters by contributing to their challengers when they come up for reelection. It’s your free speech, after all.

Guilt

Somewhere, I ran across the following story:

Fred was the most hated person in the Foreign Legion. Everyone wanted him dead. One day, Fred was assigned to go on a mission alone in the middle of the desert. At midnight the night before, Alex put poison in Fred’s canteen. An hour later, unaware of this, Bert poured out the contents of Fred’s canteen and filled it with sand. An hour later, also unaware of what the others were doing, Charlie poked a hole in the bottom of the canteen, so that the contents would pour out. Thus, poor despised Fred died in the desert the next day.

So the question is: who, if anyone, is guilty of killing Fred? It’s tempting to say Charlie, simply because he came last, but really, all three of Alex, Bert, and Charlie tried to kill Fred, and took steps to kill him. The only reason to let Alex off the hook is that Bert’s approach undid what Alex did, and Charlie’s preferred method undid Bert’s. But remove any one, or any two of them, and Fred still winds up dead. At best, Alex and Bert would be guilty only of attempted murder.

What if they had all poisoned the canteen? It would be a lot easier to argue that all of them are to blame. If only Alex were put on trial, it would be fairly easy to secure a conviction. And even if Bert’s and Charlie’s actions came to light later, few people would feel that Alex was wrongly convicted. Same for the other two.

The reason I bring this up is that the defense in Derek Chauvin’s trial for the murder of George Floyd has claimed that drugs in Floyd’s system are to blame for his death, not the fact that Chauvin knelt on Floyd’s neck for over eight minutes.

I trust you see the point: even if Floyd was incapacitated or in diminished health because of drugs, or had a heart attack, or something like that, what Chauvin did could have killed pretty much anyone. If his knee wasn’t the cause of death, that’s only on a technicality. If the situation were repeated, there is no reason to think that a similar technicality would let him off the hook, and every reason to believe that his actions would be similar.

On the Filibuster

The usual argument for the filibuster is that it prevents the majority from simply steamrolling its agenda: if every piece of legislation needed only a simple majority to pass, then in the current Senate, 51 Democrats (including VP Harris) can, if they’re united, do anything they want, and ignore the 50 Republicans. Clearly, that’s not ideal. There should be a mechanism to prevent that, at least in important cases.

At the same time, Americans elect Senators to actually get stuff done. If Americans elect 59 Senators from party A and 41 from party B, it’s because they want to advance party A’s agenda; having that agenda completely thwarted by party B is also not ideal.

But that’s what we have now: Senators don’t need to do a talking filibuster like Jimmy Stewart in Mr. Smith Goes to Washington. All they need to do is send an email announcing their intention to filibuster a bill, and that bill effectively requires a 60-vote supermajority to pass. The Constitution reserves supermajority requirements for extreme situations, because the founders realized the need to balance fairness with getting stuff done; there’s a reason they switched from the Articles of Confederation to the federal constitution.

If, as is often claimed, the purpose of the filibuster is to promote compromise, then a better way to reform the Senate rules might be to guarantee the minority party or parties certain rights, like being able to propose amendments, or to introduce legislation that the majority party doesn’t even want to consider. I’m open to preserving the talking filibuster for those cases where one or a handful of Senators feel so strongly about an issue that they’re willing to pay a personal cost to block it. But blocking legislation shouldn’t be routine.

Ending the Tyranny of Biannual Time Changes

Like most people, I hate changing the clocks twice a year, to say nothing of having to get up an hour early for Daylight Saving Time. So naturally my ears perked up when I heard about Senate bill 623, which would make Daylight Saving Time permanent. We’re still switching to DST tomorrow, but if this passes, we won’t switch again this fall. Or next spring.

The bill was introduced by Marco Rubio (R-FL), and I can’t tell you how weird it feels to be on his side, but so far, I can’t see anything wrong with the bill. Well, there’s the fact that the text of the bill is not yet available. And the title is “A bill to make daylight saving time permanent, and for other purposes.” Without the text, we won’t know what those “other purposes” might be.

On the other hand, the bill has several Democratic cosponsors, so maybe I’m worried over nothing.

Meanwhile, IndyStar reminds us that Rubio and others have introduced this or similar bills in the past, and they have yet to pass. But still, call your senators and tell them you support this bill and doing away with changing clocks all the time.

Staying on Top of the Wave

I used to work for a computer researcher who didn’t actually use computers. Academics can sometimes be quirky, you know? He just had ~/.forward set up to print every message (on paper) as it came in. One problem with this setup was that people would send him résumés as PostScript (and later PDF) attachments. The printing system would print these out as plain text and use up all of the paper in the printer if someone didn’t catch it.
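(For those who haven’t seen this trick: a ~/.forward file can pipe each incoming message to a command, so his presumably contained a single line something like the following; the exact printer command is my guess.)

"|/usr/bin/lpr"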

This raised a question for someone about what printing under Unix looks like these days, and I realized that it is rather less nightmarish than it used to be. The ubiquity of PDF means that I don’t really have to worry about converting files to the proper device-dependent format. Line printers are gone, so I don’t have to worry about the difference between printing text and printing graphics. Unicode means I don’t have to worry about whether a document is in English, French, or Russian.

Of course, all of this convenience comes at a cost. For instance, Unicode is a complex soup of characters, codepoints, encodings, and normal forms. When I was learning C, way back when, in between mammoth hunts and rubbing sticks together to make fire, everything was US English, in ASCII, and a letter was a character was a byte was an octet.
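For a quick illustration of how far we’ve come from “a character is a byte”, try the following at a shell prompt (this assumes a UTF-8 locale):

printf 'é' | wc -c    # counts bytes: prints 2, because é is two bytes in UTF-8
printf 'é' | wc -m    # counts characters: prints 1, per the current locale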

That one-byte world may have been limited, but at least it made things easy to learn. Which made me wonder about people wading into the field these days: how do they learn, when even a simple “Hello world” program has all this complexity behind it?

There are two answers to that: in some cases, say Unity or Electron, tutorials will typically give you a bunch of standard scaffolding — headers, includes, starter classes — and say “Don’t worry about all that for now; we’ll cover it later.” In other cases, say, Python, a lot of the complexity is optional enough that it doesn’t need to be shown to the beginner.

[XKCD cartoon: a jumble of blocks, representing software packages, piled precariously on top of each other, as in a dependency diagram. A caption at the top describes the delicate edifice as “All modern digital infrastructure”. An arrow points to a lone package that nearly everything depends on, off in a corner; its label reads, “A project some random person in Nebraska has been thanklessly maintaining since 2003”.]

New people are coming in at the top of this graph. I came in a bit further down, but my entry point was already on top of a bunch of existing work by other people. After all, I got to use (complicated) operating systems instead of running code on bare metal, and text editors rather than toggling my code in on the front panel, and much else besides. People starting out today are wading in at a different point, just like I did.

And this is what we humans do. When Isaac Newton talked about standing on the shoulders of giants, this is what he was talking about. We build tools using existing tools, and then use those new tools to build newer tools, changing the world as we go.

Of course, new technology can also disrupt and undermine the old way of doing things, and there’s a persistent fear of AI taking away jobs that used to be done by humans, just as automation has. Just as dockworkers now use forklifts to load cargo rather than carry it themselves, companies now use chatbots to handle the front line of customer support. Machines have largely supplanted travel agents, and have even started writing press releases. What’s a poor meat puppet to do?

I keep thinking of how longshoremen became forklift operators: when new technology comes along, we still need to control it, steer it, decide what it ought to do. I used to install Unix on individual computers from CD, one at a time. These days, I use tools like Terraform and Puppet to orchestrate dozens or hundreds of machines. And I think that’s where the challenge is going to be in the future: staying on top of new technology and deciding what to do with it.

“The Filibuster Should Be Painful”

Joe Manchin appeared on Fox News Sunday and said he supports the filibuster, but that it should come at a cost:

“The filibuster should be painful, it really should be painful and we’ve made it more comfortable over the years,” he said on “Fox News Sunday.” “Maybe it has to be more painful.”

“If you want to make it a little bit more painful, make him stand there and talk,” Manchin said. “I’m willing to look at any way we can, but I’m not willing to take away the involvement of the minority.”

Which echoes something I’ve been thinking for some time.

Yes, the filibuster is a hack. But what it does is give the minority — sometimes a minority of one — the power to block legislation. And yes, some legislation, even legislation with majority support, is bad and needs to be blocked. I’m aware that this paragraph would sound a lot better if it weren’t for the fact that the second most famous filibuster (after James Stewart’s) is Strom Thurmond’s filibuster of the Civil Rights Act. But let’s posit for the sake of argument that some legislation is bad and ought to be stopped.

The talking filibuster does that. But at some point in the 1960s, the Senate started switching to the “procedural filibuster”. In contrast to the stay-up-all-night-talking-without-a-bathroom-break, or “talking”, filibuster, the procedural one is basically a role-playing one: one Senator announces their intention to filibuster, the others roll for WIS (that is, they vote on whether to make the first Senator shut up), and then either stop debate as if the clock had been run out, or tell the Senator’s character to put a sock in it and take a vote on the original bill.

The problem with this is that it’s too easy: any contrarian dickbag can derail the Senate with no cost to themselves. That’s like having the emergency brake on a train accessible to toddlers, with no fines or repercussions for misusing it: unless you live in a community of saints, that train would never go anywhere. And so it is with the Senate these days. So returning to the talking filibuster would help ensure that legislation is blocked only when the minority feels very strongly about it; strongly enough to stay up all night talking without a break.

At the same time, as I said, it’s a hack. In particular, the talking filibuster would tend to favor younger, healthier senators. Perhaps a different solution could be worked out, like maybe Senators are given one filibuster coupon at the beginning of each session, and once it’s used up, it’s gone. Or maybe they can get one super-vote that’s worth ten regular votes, but then they forfeit the next ten votes. These are just off the top of my head, and I’m sure they can be abused as well. But I would like to stop letting every dumbass reactionary block legislation, so things can actually move forward.