Commitment and sophistication

Mar 13 JDN 2459652

One of the central insights of cognitive and behavioral economics is that understanding the limitations of our own rationality can help us devise mechanisms to overcome those limitations—that knowing we are not perfectly rational can make us more rational. The usual term for this is a somewhat vague one: behavioral economists generally call it simply sophistication.

For example, suppose that you are short-sighted and tend to underestimate the importance of the distant future. (This is true of most of us, to greater or lesser extent.)

It’s rational to consider the distant future less important than the present—things change in the meantime, and if we go far enough you may not even be around to see it. In fact, rationality alone doesn’t even say how much you should discount any given distance in the future. But most of us are inconsistent about our attitudes toward the future: We exhibit dynamic inconsistency.

For instance, suppose I ask you today whether you would like $100 today or $102 tomorrow. It is likely you’ll choose $100 today. But if I ask you whether you would like $100 365 days from now or $102 366 days from now, you’ll almost certainly choose the $102.


This means that if I asked you the second question first, then waited a year and asked you the first question, you’d change your mind—that’s inconsistent. Whichever choice is better shouldn’t systematically change over time. (It might happen to change, if your circumstances changed in some unexpected way. But on average it shouldn’t change.) Indeed, waiting a day for an extra $2 is typically going to be worth it; 2% daily interest is pretty hard to beat.
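This kind of preference reversal is easy to reproduce with a quasi-hyperbolic ("beta-delta") discounting model, in which everything beyond the present gets a one-off extra penalty. A minimal sketch, with illustrative parameter values rather than estimates from any study:

```python
# A sketch of the preference reversal above, using quasi-hyperbolic
# ("beta-delta") discounting. Parameter values are illustrative only.

def present_value(amount, days_from_now, beta=0.7, delta=0.9999):
    """Discounted value today of money received days_from_now days out."""
    if days_from_now == 0:
        return amount
    # Everything in the future gets an extra one-off penalty of beta.
    return beta * (delta ** days_from_now) * amount

# Today: $100 now beats $102 tomorrow...
assert present_value(100, 0) > present_value(102, 1)
# ...but $102 in 366 days beats $100 in 365 days.
assert present_value(102, 366) > present_value(100, 365)
```

A purely exponential discounter (beta = 1) would rank both pairs the same way; it is the one-off penalty beta < 1 that makes the near-term choice flip.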

Now, suppose you have some option to make a commitment, something that will bind you to your earlier decision. It could be some sort of punishment for deviating from your earlier choice, some sort of reward for keeping to the path, or, in the most extreme example, a mechanism that simply won’t let you change your mind. (The literally classic example of this is Odysseus having his crew tie him to the mast so he can listen to the Sirens.)

If you didn’t know that your behavior was inconsistent, you’d never want to make such a commitment. You don’t expect to change your mind, and if you do change your mind, it would be because your circumstances changed in some unexpected way—in which case changing your mind would be the right thing to do. And if your behavior wasn’t inconsistent, this reasoning would be quite correct: No point in committing when you have less information.

But if you know that your behavior is inconsistent, you can sometimes improve the outcome for yourself by making a commitment. You can force your own behavior into consistency, even though you will later be tempted to deviate from your plan.

Yet there is a piece missing from this account, often not clearly enough stated: Why should we trust the version of you that has a year to plan over the version of you that is making the decision today? What’s the difference between those two versions of you that makes them inconsistent, and why is one more trustworthy than the other?

The biggest difference is emotional. You don’t really feel $100 a year from now, so you can do the math and see that 2% daily interest is pretty darn good. But $100 today makes you feel something—excitement over what you might buy, or relief over a bill you can now pay. (Actually that’s one of the few times when it would be rational to take $100 today: If otherwise you’re going to miss a deadline and pay a late fee.) And that feeling about $102 tomorrow just isn’t as strong.

We tend to think that our emotional selves and our rational selves are in conflict, and so we expect to be more rational when we are less emotional. There is some truth to this—strong emotions can cloud our judgments and make us behave rashly.

Yet this is only one side of the story. We also need emotions to be rational. There is a condition known as flat affect, often a symptom of various neurological disorders, in which emotional reactions are greatly blunted or even non-existent. People with flat affect aren’t more rational—they just do less. In the worst cases, they completely lose their motivation to do things and become outright inert, a condition known as abulia.

Emotional judgments are often less accurate than thoughtfully reasoned arguments, but they are also much faster—and that’s why we have them. In many contexts, particularly when survival is at stake, doing something pretty well right away is often far better than waiting long enough to be sure you’ll get the right answer. Running away from a loud sound that turns out to be nothing is a lot better than waiting to carefully determine whether that sound was really a tiger—and finding that it was.

With this in mind, the cases where we should expect commitment to be effective are those that are unfamiliar, not only on an individual level, but in an evolutionary sense. I have no doubt that experienced stock traders can develop certain intuitions that make them better at understanding financial markets than randomly chosen people—but they still systematically underperform simple mathematical models, likely because finance is just so weird from an evolutionary perspective. So when deciding whether to accept some amount of money m1 at time t1 or some other amount of money m2 at time t2, your best bet is really to just do the math.

But this may not be the case for many other types of decisions. Sometimes how you feel in the moment really is the right signal to follow. Committing to work at your job every day may seem responsible, ethical, rational—but if you hate your job when you’re actually doing it, maybe it really isn’t how you should be spending your life. Buying a long-term gym membership to pressure yourself to exercise may seem like a good idea, but if you’re miserable every time you actually go to the gym, maybe you really need to be finding a better way to integrate exercise into your lifestyle.

There are no easy answers here. We can think of ourselves as really being made of two (if not more) individuals: A cold, calculating planner who looks far into the future, and a heated, emotional experiencer who lives in the moment. There’s a tendency to assume that the planner is our “true self”, the one we should always listen to, but this is wrong; we are both of those people, and a life well-lived requires finding the right balance between their conflicting desires.

Russia has invaded Ukraine.

Mar 6 JDN 2459645

Russia has invaded Ukraine. No doubt you have heard it by now, as it’s all over the news in dozens of outlets, from CNN to NBC to The Guardian to Al-Jazeera. And as well it should be, as this is the first time in history that a nuclear power has invaded another country in order to annex it. Yes, nuclear powers have fought wars before—the US just got out of one in Afghanistan, as you may recall. They have even started wars and led invasions—the US did that in Iraq. And certainly, countries have been annexing and conquering other countries for millennia. But never before—never before, in human history—has a nuclear-armed state invaded another country simply to claim it as part of itself. (Trump said he thought the US should have done something like that, and the world was rightly horrified.)

Ukraine is not a nuclear power—not anymore. The Soviet Union built up a great deal of its nuclear production in Ukraine, and in 1991 when Ukraine became independent it still had a sizable nuclear arsenal. But starting in 1994 Ukraine began disarming that arsenal, and now it is gone. Now that Russia has invaded them, the government of Ukraine has begun publicly reconsidering their agreements to disarm their nuclear arsenal.

Russia’s invasion of Ukraine has just disproved the most optimistic models of international relations, which basically said that major power wars for territory were over at the end of WW2. Some thought it was nuclear weapons, others the United Nations, still others a general improvement in trade integration and living standards around the world. But they’ve all turned out to be wrong; maybe such wars are rarer, but they can clearly still happen, because one just did.

I would say that only two major theories of the Long Peace are still left standing in light of this invasion, and those are nuclear deterrence and the democratic peace. Ukraine gave up its nuclear arsenal and later got attacked—that’s consistent with nuclear deterrence. Russia under Putin is nearly as authoritarian as the Soviet Union, and Ukraine is a “hybrid regime” (let’s call it a solid D), so there’s no reason the democratic peace would stop this invasion. But any model which posits that trade or the UN prevent war is pretty much off the table now, as Ukraine had very extensive trade with both Russia and the EU and the UN has been utterly toothless so far. (Maybe we could say the UN prevents wars except those led by permanent Security Council members.)

Well, then, what if the nuclear deterrence theory is right? What would have happened if Ukraine had kept its nuclear weapons? Would that have made this situation better, or worse? It could have made it better, if it acted as a deterrent against Russian aggression. But it could also have made it much, much worse, if it resulted in a nuclear exchange between Russia and Ukraine.

This is the problem with nukes. They are not a guarantee of safety. They are a guarantee of fat tails. To explain what I mean by that, let’s take a brief detour into statistics.

A fat-tailed distribution is one for which very extreme events have non-negligible probability. For some distributions, like a uniform distribution, events are clearly contained within a certain interval and nothing outside is even possible. For others, like a normal distribution or lognormal distribution, extreme events are theoretically possible, but so vanishingly improbable they aren’t worth worrying about. But for fat-tailed distributions like a Cauchy distribution or a Pareto distribution, extreme events are not so improbable. They may be unlikely, but they are not so unlikely they can simply be ignored. Indeed, they can actually dominate the average—most of what happens, happens in a handful of extreme events.

Deaths in war seem to be fat-tailed, even in conventional warfare. They seem to follow a Pareto distribution. There are lots of tiny skirmishes, relatively frequent regional conflicts, occasional major wars, and a handful of super-deadly global wars. This kind of pattern tends to emerge when a phenomenon is self-reinforcing by positive feedback—hence why we also see it in distributions of income and wildfire intensity.

Fat-tailed distributions typically (though not always—it’s easy to construct counterexamples, like the Cauchy distribution with low values truncated off) have another property as well, which is that minor events are common. More common, in fact, than they would be under a normal distribution. What seems to happen is that the probability mass moves away from the moderate outcomes and shifts to both the extreme outcomes and the minor ones.
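A quick simulation illustrates how a fat tail lets a handful of events dominate the total. This is only a sketch: the Pareto shape parameter below is chosen for illustration, not fitted to actual war data.

```python
import random

random.seed(12345)  # fixed seed so the result is reproducible

# Draw many "event sizes" from a Pareto distribution and see how much
# of the total comes from the handful of largest draws.
alpha = 1.16  # roughly the classic "80/20" shape parameter (illustrative)
draws = sorted((random.paretovariate(alpha) for _ in range(100_000)),
               reverse=True)

total = sum(draws)
top_share = sum(draws[:1000]) / total  # share held by the largest 1% of events

print(f"The top 1% of events account for {top_share:.0%} of the total")
```

With a shape parameter this close to 1, the largest 1% of draws typically account for around half of the entire total—exactly the "most of what happens, happens in a handful of extreme events" pattern. A normal distribution, by contrast, would give the top 1% only a few percent of the total.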

Nuclear weapons fit this pattern perfectly. They may in fact reduce the probability of moderate, regional conflicts, in favor of increasing the probability of tiny skirmishes or peaceful negotiations. But they also increase the probability of utterly catastrophic outcomes—a full-scale nuclear war could kill billions of people. It probably wouldn’t wipe out all of humanity, and more recent analyses suggest that a catastrophic “nuclear winter” is unlikely. But even 2 billion people dead would be literally the worst thing that has ever happened, and nukes could make it happen in hours when such a death toll by conventional weapons would take years.

If we could somehow guarantee that such an outcome would never occur, then the lower rate of moderate conflicts nuclear weapons provide would justify their existence. But we can’t. It hasn’t happened yet, but it doesn’t have to happen often to be terrible. Really, just once would be bad enough.

Let us hope, then, that the democratic peace turns out to be the theory that’s right. Because a more democratic world would clearly be better—while a more nuclearized world could be better, but could also be much, much worse.

Who still uses cash?

Feb 27 JDN 2459638

If you had to guess, what is the most common denomination of US dollar bills? You might check your wallet: $1? $20?

No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.

This is… odd, to say the least. When’s the last time you spent a $100 bill? Then again, when’s the last time you spent… cash? In a typical week, 30% of Americans use no cash at all.

In the United States, cash is used for 26% of transactions, compared to 28% for debit card and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.

A list of countries most reliant on cash shows mostly very poor countries, like Chad, Angola, and Burkina Faso. But even in Sub-Saharan Africa, mobile money is dominant in Botswana, Kenya and Uganda.

And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?

It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.

Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.
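The arithmetic behind that "nearly two-thirds" figure is straightforward, using the numbers quoted above:

```python
# Back-of-the-envelope check of the figures quoted above.
hundreds_value = 16.4e9 * 100   # value of all $100 bills in circulation
total_cash = 2.05e12            # total US cash supply

share_in_hundreds = hundreds_value / total_cash
print(f"Share of cash value in $100 bills: {share_in_hundreds:.0%}")  # 80%

# 80% of those bills circulate abroad, so the share of ALL US cash abroad:
share_abroad = 0.80 * share_in_hundreds
print(f"Share of all US cash held abroad: {share_abroad:.0%}")  # 64%
```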

Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of linen instead of plastic. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.

They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.

If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?

Basic income reconsidered

Feb 20 JDN 2459631

In several previous posts I have sung the praises of universal basic income (though I have also tried to acknowledge the challenges involved).

In this post I’d like to take a step back and reconsider the question of whether basic income is really the best approach after all. One nagging thought keeps coming back to me, and it is the fact that basic income is extremely expensive.

About 11% of the US population lives below the standard poverty line. There are many criticisms of the standard poverty line: Some say it’s too high, because you can compare it favorably with middle-class incomes in much poorer countries. Others say it’s too low, because income at that level doesn’t allow people to really live in financial security. There are many difficult judgment calls that go into devising a poverty threshold, and we can reasonably debate whether the right ones were made here.

However, I think this threshold is at least approximately correct; maybe the true poverty threshold for a household of 1 should be not $12,880 but $11,000 or $15,000, but I don’t think it should be $5,000 or $25,000. Maybe for a household of 4 it should be not $26,500 but $19,000 or $32,000; but I don’t think it should be $12,000 or $40,000.

So let’s suppose that we wanted to implement a universal basic income in the United States that would lift everyone out of poverty. We could essentially do that by taking the 2-person-household threshold of $17,420 and dividing it by 2, yielding $8,710 per person per year. (Why not use the 1-person-household threshold? There aren’t very many 1-person households in poverty, and that threshold would be considerably higher and thus considerably more expensive. A typical poor household is a single parent and one or more children; as long as kids get the basic income, that household would be above the threshold in this system.)

The US population is currently about 331 million people. If every single one of them were to receive a basic income of $8,710, that would cost nearly $2.9 trillion per year. This is a feasible amount—it’s less than half the current total federal budget—but it is still a very large amount. The tax increases required to support it would be massive, and that’s probably why, despite ostensibly bipartisan support for the idea of a basic income, no serious proposal has ever gotten off of the ground.
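The cost figures above come from simple multiplication:

```python
# Rough cost of the universal scheme described above.
population = 331_000_000
benefit = 8_710  # per person per year: half the 2-person poverty threshold

universal_cost = population * benefit
print(f"Universal basic income: ${universal_cost / 1e12:.2f} trillion/year")

# Restricting payments to the ~11% below the poverty line:
targeted_cost = 0.11 * universal_cost
print(f"Targeted to the poor:   ${targeted_cost / 1e9:.0f} billion/year")
```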

If on the other hand we were to only give the basic income to people below the poverty line, that would cost only 11% of that amount: A far more manageable $320 billion per year.

We don’t want to do exactly that, however, because it would create all kinds of harmful distortions in the economy. Consider someone who is just below the threshold, considering whether to take on more work or get a higher-paying job. If their household pre-tax income is currently $15,000 and they could raise it to $18,000, a basic income given only to people below the threshold would mean that they are choosing between $15,000+$17,420=$32,420 if they keep their current work and just $18,000 if they increase it. Clearly, they would not want to take on more work. That’s a terrible system—it amounts to a marginal tax rate above 100%.

Another possible method would be to simply top off people’s income, give them whatever they need to get to the poverty line but no more. (This would actually be even cheaper; it would probably cost something more like $160 billion per year.) That removes the distortion for people near the threshold, at the cost of making it much worse for those far below the threshold. Someone considering whether to work for $7,000 or work for $11,000 is, in such a system, choosing whether to work less for $17,420 or work more for… $17,420. They will surely choose to work less.

In order to solve these problems, what we would most likely need to do is gradually phase out the basic income, so that say increasing your pre-tax income by $1.00 would decrease your basic income payment by $0.50. The cost of this system would be somewhere in between that of a truly universal basic income and a threshold-based system, so let’s ballpark that as around $600 billion per year. It would effectively implement a marginal tax rate of 50% for anyone who is receiving basic income payments.
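The three schemes can be compared side by side. A minimal sketch, using the 2-person-household threshold from above as the benefit level (the benefit amounts are simplified for illustration):

```python
THRESHOLD = 17_420  # 2-person-household poverty line used above

def cliff(pretax):
    """Full benefit below the line, nothing above it."""
    return pretax + (THRESHOLD if pretax < THRESHOLD else 0)

def top_off(pretax):
    """Pay exactly enough to reach the line, and no more."""
    return max(pretax, THRESHOLD)

def phase_out(pretax):
    """Full benefit at zero income, reduced by $0.50 per $1 earned."""
    return pretax + max(0, THRESHOLD - 0.5 * pretax)

# Under the cliff, earning more can LOWER total income:
assert cliff(15_000) > cliff(18_000)
# Under the top-off, earning more below the line changes nothing:
assert top_off(7_000) == top_off(11_000)
# Under the phase-out, an extra dollar earned always helps:
assert phase_out(18_000) > phase_out(15_000) > phase_out(7_000)
```

The phase-out preserves the incentive to earn more at every income level, at the price of that effective 50% marginal tax rate while the benefit is phasing out.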

In theory, this is probably worse than a universal basic income, because in the latter case you can target the taxes however you like—and thus (probably) make them cause less distortion than the phased-out basic income system would. But in practice, a truly universal basic income might simply not be politically viable, and some kind of phased-out system seems much more likely to actually get passed.


Even then, I confess I am not extremely optimistic. For some reason, everyone seems to want to end poverty, but very few seem willing to use the obvious solution: Give poor people money.

The fragility of encryption

Feb 13 JDN 2459620

I said in last week’s post that most of the world’s online security rests upon public-key encryption. It’s how we do our shopping, our banking, and our taxes.

Yet public-key encryption has an Achilles’ Heel. It relies entirely on the assumption that, even knowing someone’s public key, you can’t possibly figure out what their private key is. Yet obviously the two must be deeply connected: In order for my private key to decrypt all messages that are encrypted using my public key, they must, in a deep sense, contain the same information. There must be a mathematical operation that will translate from one to the other—and that mathematical operation must be invertible.

What we have been relying on to keep public-key encryption secure is the notion of a one-way function: A function that is easy to compute, but hard to invert. A typical example is multiplying two numbers: Multiplication is a basic computing operation that is extremely fast, even for numbers with thousands of digits; but factoring a number into its prime factors is far more difficult, and currently cannot be done in any reasonable amount of time for numbers that are more than a hundred digits long.
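A toy demonstration of the asymmetry, using trial division (the primes here are tiny by cryptographic standards, where the factors have hundreds of digits each):

```python
# Multiplying is a single operation; recovering the factors by trial
# division takes many. These primes are far too small for real crypto.

def trial_division(n):
    """Naive factoring: try every candidate divisor up to sqrt(n)."""
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, n // d, steps
        d += 1
    return n, 1, steps  # n itself is prime

p, q = 104_723, 104_729  # two known primes
n = p * q                # one multiplication: essentially instant

a, b, steps = trial_division(n)
assert {a, b} == {p, q}
print(f"Recovered the factors, but it took {steps:,} trial divisions")
```

Even here, undoing one multiplication costs over a hundred thousand division attempts; for numbers hundreds of digits long, the gap becomes astronomically larger.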


“Easy” and “hard” in what sense? The usual criterion is in polynomial time.

Say you have an input that is n bits long—i.e. n digits, when expressed as a binary number, all 0s and 1s. A function that can be computed in time proportional to n is linear time; if it can only be done in time proportional to n^2, that is quadratic time; n^3 would be cubic time. All of these are examples of polynomial time.

But if instead the time required were 2^n, that would be exponential time. 3^n and 1.5^n would also be exponential time.

This is significant because of how much faster exponential functions grow relative to polynomial functions, for large values of n. For example, let’s compare n^3 with 2^n. When n=3, the polynomial is actually larger: n^3=27 but 2^n=8. At n=10 they are nearly equal: n^3=1000 but 2^n=1024. But by n=20, n^3 is only 8000 while 2^n is over 1 million. At n=100, n^3 is a manageable (for a modern computer) 1 million, while 2^n is a staggering 10^30; that’s a million trillion trillion.
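Those numbers are easy to verify directly:

```python
# The comparison above: n^3 vs 2^n at a few sizes.
for n in (3, 10, 20, 100):
    print(f"n = {n:>3}:  n^3 = {n**3:.3g}   2^n = {2**n:.3g}")
```

The crossover happens early: by n = 20 the exponential has already pulled far ahead, and by n = 100 the comparison is not even close.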

You may see that there is already something a bit fishy about this: There are lots of different ways to be polynomial and lots of different ways to be exponential. Linear time n is clearly fast, and for many types of problems it seems unlikely one could do any better. But is n^100 time really all that fast? It’s still polynomial. It doesn’t take a large exponential base to make for very fast growth—2 doesn’t seem that big, after all, and when dealing with binary digits it shows up quite naturally. But while 2^n grows very fast even for reasonably-sized n, 1.0000001^n grows slower than most polynomials—even linear!—for quite a long range before eventually becoming very fast growth when n is in the hundreds of millions. Yet it is still exponential.


So, why do we use these categories? Well, computer scientists and mathematicians have discovered that many types of problems that seem different can in fact be translated into one another, so that solving one would solve the other. For instance, you can easily convert between the Boolean satisfiability problem and the subset-sum problem or the travelling salesman problem. These conversions always take time that is polynomial in n (usually somewhere between linear and quadratic, as it turns out). This has allowed us to build complexity classes: classes of problems such that any one can be converted to any other in polynomial time or better.

Problems that can be solved in polynomial time are in class P, for polynomial.

Problems that can be checked—but not necessarily solved—in polynomial time are in class NP, which actually stands for “non-deterministic polynomial” (not a great name, to be honest). Given a problem in NP, you may not be able to come up with a valid answer in polynomial time. But if someone gave you an answer, you could tell in polynomial time whether or not that answer was valid.

Boolean satisfiability (often abbreviated SAT) is the paradigmatic NP problem: Given a Boolean formula like (A OR B OR C) AND (¬A OR D OR E) AND (¬D OR ¬C OR B) and so on, it isn’t a simple task to determine if there’s some assignment of the variables A, B, C, D, E that makes it all true. But if someone handed you such an assignment, say (¬A, B, ¬C, D, E), you could easily check that it does in fact satisfy the expression. It turns out that in fact SAT is what’s called NP-complete: Any NP problem can be converted into SAT in polynomial time.
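Checking an assignment really is that easy. A sketch, encoding each clause as a list of signed integers (a common convention, as in the DIMACS format: a positive number for a variable, a negative number for its negation):

```python
# Checking a SAT assignment is linear in the size of the formula:
# every clause must contain at least one satisfied literal.

formula = [
    [+1, +2, +3],   # (A OR B OR C)
    [-1, +4, +5],   # (NOT-A OR D OR E)
    [-4, -3, +2],   # (NOT-D OR NOT-C OR B)
]

def check(formula, assignment):
    """assignment maps a variable number to True/False."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# The assignment from the text: NOT-A, B, NOT-C, D, E.
assignment = {1: False, 2: True, 3: False, 4: True, 5: True}
assert check(formula, assignment)
```

Finding a satisfying assignment, by contrast, may in the worst case require searching among all 2^n combinations of the n variables—and nobody knows whether that search can be done in polynomial time.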

This is important because in order to be useful as an encryption system, we need our one-way function to be in class P (otherwise, we couldn’t compute it quickly). Yet, by definition, this means its inverse must be in class NP.


Thus, simply because it is easy to multiply two numbers, I know for sure that factoring numbers must be in NP: All I have to do to verify that a factorization is correct is multiply the numbers. Since the way to get a public key from a private key is (essentially) to multiply two numbers, this means that getting a private key from a public key is equivalent to factorization—which means it must be in NP.

This would be fine if we knew some problems in NP that could never, ever be solved in polynomial time. We could just pick one of those and make it the basis of our encryption system. Yet in fact, we do not know any such problems—indeed, we are not even certain they exist.

One of the biggest unsolved problems in mathematics is P versus NP, which asks the seemingly-simple question: “Are P and NP really different classes?” It certainly seems like they are—there are problems like multiplying numbers, or even finding out whether a number is prime, that are clearly in P, and there are other problems, like SAT, that are definitely in NP but seem to not be in P. But in fact no one has ever been able to prove that P ≠ NP. Despite decades of attempts, no one has managed it.

To be clear, no one has managed to prove that P = NP, either. (Doing either one would win you a Clay Millennium Prize.) But since the conventional wisdom among most mathematicians is that P ≠ NP (99% of experts polled in 2019 agreed), I actually think this possibility has not been as thoroughly considered.

Vague heuristic arguments are often advanced for why P ≠ NP, such as this one by Scott Aaronson: “If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found.”

That really doesn’t follow at all. Doing something in polynomial time is not the same thing as doing it instantly.

Say for instance someone finds an algorithm to solve SAT in n^6 time. Such an algorithm would conclusively prove P = NP. And n^6 is a polynomial, all right—but it’s a big polynomial. The time required to check a SAT solution is linear in the number of terms in the Boolean formula—just check each one, see if it works. But if it turns out we could generate such a solution in time proportional to the sixth power of the number of terms, that would still mean it’s a lot easier to check than it is to solve. A lot easier.

I guess if your notion of a “fundamental gap” rests upon the polynomial/exponential distinction, you could say that’s not “fundamental”. But this is a weird notion to say the least. If n = 1 million can be checked in 1 million processor cycles (that is, milliseconds, or with some overhead, seconds), but only solved in 10^36 processor cycles (that is, over a million trillion years), that sounds like a pretty big difference to me.

Even an n^2 algorithm wouldn’t show there’s no difference. The difference between n and n^2 is, well, a factor of n. So finding the answer could still take far longer than verifying it. This would be worrisome for encryption, however: Even a million times as long isn’t really that great. It means that if something would work in a few seconds for an ordinary computer (the timescale we want for our online shopping and banking), then, say, the Russian government with a supercomputer a thousand times better could spend half an hour on it. That’s… a problem. I guess if breaking our encryption was only feasible for superpower national intelligence agencies, it wouldn’t be a complete disaster. (Indeed, many people suspect that the NSA and FSB have already broken most of our encryption, and I wouldn’t be surprised to learn that’s true.)

But what I really want to say here is that since it may be true that P=NP—we don’t know it isn’t, even if most people strongly suspect as much—we should be trying to find methods of encryption that would remain secure even if that turns out to be the case. (There’s another reason as well: Quantum computers are known to be able to factor numbers in polynomial time—though it may be a while before they get good enough to do so usefully.)

We do know two such methods, as a matter of fact. There is quantum encryption, which, like most things quantum, is very esoteric and hard to explain. (Maybe I’ll get to that in another post.) It also requires sophisticated, expensive hardware that most people are unlikely to be able to get.

And then there is onetime pad encryption, which is shockingly easy to explain and can be implemented on any home computer.

The problem with substitution ciphers is that you can look for patterns. You can do this because the key ultimately contains only so much information, based on how long it is. If the key contains 100 bits and the message contains 10,000 bits, at some point you’re going to have to repeat some kind of pattern—even if it’s a very complex, sophisticated one like the Enigma machine.

Well, what if the key were as long as the message? What if a 10,000 bit message used a 10,000 bit key? Then you could substitute every single letter for a different symbol each time. What if, on its first occurrence, E is D, but then it’s Q, and then it’s T—and each of these was generated randomly and independently each time? Then it can’t be broken by searching for patterns—because there are no patterns to be found.

Mathematically, it would look like this: Take each bit of the plaintext, and randomly generate another bit for the key. Add the key bit to the plaintext bit (technically you want to use bitwise XOR, but that’s basically adding), and you’ve got the ciphertext bit. At the other end, subtracting out each key bit will give back each plaintext bit. Provided you can generate random numbers efficiently, this will be fast to encrypt and decrypt—but literally impossible to break without the key.
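A minimal sketch of this in Python, using the standard library’s `secrets` module for the random key (the messages here are just illustrations):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the corresponding byte of the key."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # key exactly as long as the message
ciphertext = xor_bytes(plaintext, key)      # encrypt
assert xor_bytes(ciphertext, key) == plaintext   # decrypt: XOR is its own inverse

# The "no patterns" property in action: for ANY candidate plaintext of the
# same length, there exists a key that decrypts this ciphertext to it.
decoy = b"RETREAT AT SIX"
decoy_key = xor_bytes(ciphertext, decoy)
assert xor_bytes(ciphertext, decoy_key) == decoy
```

The last two lines are why a brute-force search is hopeless: an attacker trying keys would find one that yields “ATTACK AT DAWN” and another that yields “RETREAT AT SIX”, with no way to tell which was the real message.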

Indeed, one-time pad encryption is so secure that it is a proven mathematical theorem that there is no way to break it. Even if you had such staggering computing power that you could try every possible key, you wouldn’t even know when you got the right one—because every possible message can be generated from a given ciphertext, using some key. Even if you knew some parts of the message already, you would have no way to figure out any of the rest—because there are no patterns linking the two.

The downside is that you need to somehow send the keys. As I said in last week’s post, if you have a safe way to send the key, why can’t you send the message that way? Well, there is still an advantage, actually, and that’s speed.

If there is a slow, secure way to send information (e.g. deliver it physically by armed courier), and a fast, insecure way (e.g. send it over the Internet), then you can send the keys in advance by the slow, safe way and then send ciphertexts later by the fast, risky way. Indeed, this kind of courier-based one-time pad encryption is how the “red phone” (really a fax line) linking the White House to the Kremlin works.

Now, for online banking, we’re not going to be able to use couriers. But here’s something we could do. When you open a bank account, the bank could give you, say, a 128 GB flash drive of one-time pad keys for you to use in your online banking. You plug that into your computer every time you want to log in, and it grabs the next part of the key each time (there are some tricky technical details with synchronizing this that could, in practice, create some risk—but, done right, the risk would be small). If you are sending 10 megabytes of encrypted data each time (and that’s surely enough to encode a bank statement, though they might want to use a format other than PDF), you’ll get over 10,000 uses out of that flash drive. If you’ve been sending a lot of data and your key starts to run low, you can physically show up at the bank branch and get a new one.
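No such banking protocol actually exists, but the synchronization idea might work something like this toy sketch, where both sides hold identical key material and keep a pointer to the next unused byte (all names and numbers here are hypothetical):

```python
import secrets

class OneTimePadStore:
    """Toy model of a flash drive full of one-time pad key material.

    Bank and customer hold identical copies and must consume key bytes in
    the same order; the offset pointer is the synchronization the scheme
    depends on. Used key bytes are never reused.
    """
    def __init__(self, key_material: bytes):
        self.key = key_material
        self.offset = 0                     # index of the next unused key byte

    def take(self, n: int) -> bytes:
        if self.offset + n > len(self.key):
            raise RuntimeError("key exhausted: time to pick up a new drive")
        chunk = self.key[self.offset:self.offset + n]
        self.offset += n
        return chunk

# Both sides load an identical copy of the drive's contents.
drive_contents = secrets.token_bytes(1024)  # stand-in for 128 GB of key
bank = OneTimePadStore(drive_contents)
customer = OneTimePadStore(drive_contents)

message = b"statement: balance $1,234.56"
ciphertext = bytes(m ^ k for m, k in zip(message, bank.take(len(message))))
recovered = bytes(c ^ k for c, k in zip(ciphertext, customer.take(len(ciphertext))))
assert recovered == message
```

At 10 megabytes of encrypted data per session, a 128 GB drive would cover about 12,800 sessions, which squares with the “over 10,000 uses” estimate.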

Similarly, you could have one-time pad keys on flash drives (more literal “flash keys”) given to you by the US government for tax filing, and others from each of your credit card issuers. For online purchases, the sellers would probably need to have their own one-time pad keys set up with the banks and credit card companies, so that you send the info to VISA encrypted one way and they send it to the seller encrypted another way. Businesses with large sales volume would go through keys very quickly—but then, they can afford to keep buying new flash drives. Since each transaction should only take a few kilobytes, the cost of the additional one-time pad key material should be small compared to the cost of packing, shipping, and the items themselves. For larger purchases, businesses could even get in the habit of sending you a free flash key with each purchase so that future purchases are easier.

This would render paywalls very difficult to implement, but good riddance. Cryptocurrency would die, but even better riddance. It would be most inconvenient to deal with things like, well, writing a blog like this; needing to get a physical key from WordPress sounds like quite a hassle. People might actually just tolerate having their blogs hacked on occasion, because… who is going to hack your blog, and who really cares if your blog gets hacked?

Yes, this system is awkward and inconvenient compared to our current system. But unlike our current system, it is provably secure. Right now, it may seem like a remote possibility that someone would find an algorithm that proves P=NP and breaks encryption. But it could definitely happen, and if it did happen, it could happen quite suddenly. It would be far better to prepare for the worst than to be caught unprepared when it’s too late.

The importance of encryption

Feb 6 JDN 2459617

In last week’s post I told you of the compounding failures of cryptocurrency, which largely follow from the fact that it is very bad at being, well, currency. It doesn’t have a steady, predictable value, it isn’t easy to make payments with, and it isn’t accepted for most purchases.

But I realized that I haven’t ever gotten around to explaining anything about the crypto side of things—just what is encryption, and why does it matter?

At its core, encryption is any technique designed to disguise information so that it can be seen only by its intended viewers. Humans have been using some form of encryption since shortly after we invented writing—though, like any technology, our encryption has greatly improved over time.

Encryption involves converting a plaintext, the information you want to keep secret, into a ciphertext, a disguised form, using a key that is kept secret and can be used to convert the ciphertext back into plaintext. Decryption is the opposite process, extracting the plaintext from the ciphertext.

Some of the first forms of encryption were simple substitution ciphers: Have a set of rules that substitutes different letters for the original letters, such as “A becomes D, B becomes Q” and so on. This works pretty well, actually; but if each letter in the ciphertext always corresponds to the same letter in the plaintext, then you can look for patterns that would show up in text. For instance, E is usually the most common letter, so if you see a lot of three-letter sequences like BFP and P is a really common letter, odds are good that BFP is really THE and so you can guess that B=T, F=H, P=E.
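That pattern-hunting attack is easy enough to automate. Here’s a toy version in Python, using a simple Caesar shift (the special case of a substitution cipher where every letter shifts by the same amount) so the frequency trick is easy to see:

```python
from collections import Counter

# "THIS IS A SECRET MESSAGE TO THE READER" with every letter shifted by 3:
ciphertext = "WKLV LV D VHFUHW PHVVDJH WR WKH UHDGHU"

# Step 1: count letter frequencies. In English, E is usually most common.
counts = Counter(c for c in ciphertext if c.isalpha())
top_letter = counts.most_common(1)[0][0]

# Step 2: guess that the most frequent ciphertext letter stands for E,
# infer the shift, and undo it. (A real attack would also try T, A, and O,
# and look for common words like THE.)
shift = (ord(top_letter) - ord("E")) % 26
plaintext = "".join(
    chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
    for c in ciphertext
)
print(plaintext)   # THIS IS A SECRET MESSAGE TO THE READER
```

A general substitution cipher takes more work than a single shift, of course, but the principle is the same: as long as each plaintext letter always maps to the same ciphertext letter, the statistics of the language leak through.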

More sophisticated ciphers tried to solve this problem by changing the substitution pattern as they go. The Enigma used by Nazi Germany was essentially this: It had a complex electrical and mechanical apparatus dedicated to changing the substitution rules with each key-press, in a manner that would be unpredictable to an outside observer but could be readily reproduced by using another Enigma machine. (Of course, it wasn’t actually as secure as they thought.)

For most of history, people have used what’s called private-key encryption, where there is a single key used for both encryption and decryption. In that case, you need to keep the key secret: If someone were to get their hands on it, they could easily decrypt all of your messages.

This is a problem, because with private-key encryption, you need to give the key to the person you want to read the message. And if there is a safe way to send the key, well… why couldn’t you send the message that way?

In the late 20th century mathematicians figured out an alternative, public-key encryption, which uses two keys: A private key, used to decrypt, and a new, public key, which can be used to encrypt. The public key is called “public” because you don’t need to keep it secret. You can hand it out to anyone, and they can encrypt messages with it. Those messages will be readable by you and you alone—for only you have the private key.

With most methods of public-key encryption, senders can even use their private key to prove to you that they are the person who sent the message, known as authentication. They encrypt it using their private key and your public key, and then you decrypt it using their public key and your private key.

This is great! It means that anyone can send messages to anyone else, and everyone will know not only that their information is safe, but also who it came from. You never have to transmit the private keys at all. Problem solved.
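To make all this concrete, here is textbook RSA (one of the classic public-key methods) with deliberately tiny primes. Real keys are thousands of bits long and real implementations add randomized padding, so this is purely illustrative:

```python
# Textbook RSA with tiny primes (real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                  # 3233, part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65               # the message, encoded as a number smaller than n

# Anyone who knows the public key (n, e) can encrypt...
ciphertext = pow(message, e, n)
# ...but only the private-key holder can decrypt.
assert pow(ciphertext, d, n) == message

# Authentication runs the keys the other way around: "encrypt" with the
# private key, and anyone can verify it with the public key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```

The security rests on the fact that recovering d from (n, e) requires factoring n, which for tiny numbers like 3233 is trivial, but for real key sizes is (as far as we know) infeasible on a classical computer.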


We now use public-key encryption for all sorts of things, particularly online: Online shopping, online banking, online tax filing. It’s not hard to imagine how catastrophic it could be if all of these forms of encryption were to suddenly fail.

In next week’s post, I’m going to talk about why I’m worried that something like that could one day happen, and what we might do in order to make sure it doesn’t. Stay tuned.

Cryptocurrency and its failures

Jan 30 JDN 2459610

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it’s been a few years now, and we have now seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something that changes price one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it by an economically baffling argument: The total value of all Bitcoin in the world is $680 billion; therefore, if even 1% of that gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the total US dollar supply is $6 trillion (that’s physical cash; the actual money supply is considerably larger), so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.

It’s actually good that El Salvador’s experiment in Bitcoin failed the way it did: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s, I think these approval ratings are basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

Keynesian economics: It works, bitches

Jan 23 JDN 2459603

(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)

When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending, but decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.

Take a look at these two FRED graphs, both since the 1950s.
The first is interest rates (specifically the Fed funds effective rate):

The second is the US federal deficit as a proportion of GDP:

Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.

The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.

Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.

Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.

But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:

It worked.

In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP, we just let it get back to its normal growth rate after it had plummeted, and eventually caught back up to where we had been.

2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.

This contrast is quite obvious from a graph of US GDP:

In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.

The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.

I wrote earlier about how this is a weird recession, one that actually seems to fit Real Business Cycle theory. Well, it was weird in another way as well: We fixed it. We actually had the courage to do what Keynes told us to do in 1936, and it worked exactly as it was supposed to.

Indeed, to go from unemployment of almost 15% in April of 2020 to under 4% in December of 2021 is fast enough that I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.

And all of this while the pandemic is still going on! The omicron variant has brought case numbers to their highest levels ever, though fortunately death rates so far are still below last year’s peak.

I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.

Why, it seems that Keynes fellow may have been onto something, eh?

Reversals in progress against poverty

Jan 16 JDN 2459596

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of 59 million people.
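That last subtraction is worth spelling out:

```python
# 7.0% (post-COVID forecast) minus 6.3% (pre-COVID forecast),
# applied to a forecast population of 8.5 billion:
difference = (0.070 - 0.063) * 8.5e9
print(difference / 1e6)    # roughly 59-60 million people
```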

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st centuries is still holding up.

That post-COVID estimate of a global poverty rate of 7.0% needs to be compared against the fact that as recently as 1980 the global poverty rate at the same income level (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

Strange times for the labor market

Jan 9 JDN 2459589

Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.

I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:

That is a staggering amount of additional spending; I don’t think any country in history has ever increased their spending by that large an amount in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.

And apparently it worked, because unemployment spiked to the worst levels seen since the Great Depression, then suddenly plummeted back to normal almost immediately:

Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:

The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:

[FRED_quits.png]

This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.

In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.

But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.

When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:

Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.

As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.

This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else becomes like a new influenza: the virus is always there, but we learn to live with it).

But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].

For the first time in decades, there are actually more job vacancies than unemployed workers:

This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.