Who still uses cash?

Feb 27 JDN 2459638

If you had to guess, what is the most common denomination of US dollar bills? You might check your wallet: $1? $20?

No, it’s actually $100. There are 13.1 billion $1 bills, 11.7 billion $20 bills, and 16.4 billion $100 bills. And since $100 bills are worth more, the vast majority of US dollar value in circulation is in those $100 bills—indeed, $1.64 trillion of the total $2.05 trillion cash supply.

This is… odd, to say the least. When’s the last time you spent a $100 bill? Then again, when’s the last time you spent… cash? In a typical week, 30% of Americans use no cash at all.

In the United States, cash is used for 26% of transactions, compared to 28% for debit card and 23% for credit cards. The US is actually a relatively cash-heavy country by First World standards. In the Netherlands and Scandinavia, cash is almost unheard of. When I last visited Amsterdam a couple of months ago, businesses were more likely to take US credit cards than they were to take cash euros.

A list of countries most reliant on cash shows mostly very poor countries, like Chad, Angola, and Burkina Faso. But even in Sub-Saharan Africa, mobile money is dominant in Botswana, Kenya and Uganda.

And yet the cash money supply is still quite large: $2.05 trillion is only a third of the US monetary base, but it’s still a huge amount of money. If most people aren’t using it, who is? And why is so much of it in the form of $100 bills?

It turns out that the answer to the second question can provide an answer to the first. $100 bills are not widely used for consumer purchases—indeed, most businesses won’t even accept them. (Honestly that has always bothered me: What exactly does “legal tender” mean, if you’re allowed to categorically refuse $100 bills? It’d be one thing to say “we can’t accept payment when we can’t make change”, and obviously nobody seriously expects you to accept $10,000 bills; but what if you have a $97 purchase?) When people spend cash, it’s mainly ones, fives, and twenties.

Who uses $100 bills? People who want to store money in a way that is anonymous, easily transportable—including across borders—and stable against market fluctuations. Drug dealers leap to mind (and indeed the money-laundering that HSBC did for drug cartels was largely in the form of thick stacks of $100 bills). Of course it isn’t just drug dealers, or even just illegal transactions, but it is mostly people who want to cross borders. 80% of US $100 bills are in circulation outside the United States. Since 80% of US cash is in the form of $100 bills, this means that nearly two-thirds of all US dollars are outside the US.

Knowing this, I have to wonder: Why does the Federal Reserve continue printing so many $100 bills? Okay, once they’re out there, it may be hard to get them back. But they do wear out eventually. (In fact, US dollars wear out faster than most currencies, because they are made of cotton-linen paper instead of plastic polymer. Surprisingly, this actually makes them less eco-friendly despite being more biodegradable. Of course, the most eco-friendly method of payment is mobile payments, since their marginal environmental impact is basically zero.) So they could simply stop printing them, and eventually the global supply would dwindle.

They clearly haven’t done this—indeed, there were more $100 bills printed last year than any previous year, increasing the global supply by 2 billion bills, or $200 billion. Why not? Are they trying to keep money flowing for drug dealers? Even if the goal is to substitute for failing currencies in other countries (a somewhat odd, if altruistic, objective), wouldn’t that be more effective with $1 and $5 bills? $100 is a lot of money for people in Chad or Angola! Chad’s per-capita GDP is a staggeringly low $600 per year; that means that a $100 bill to a typical person in Chad would be like me holding onto a $10,000 bill (those exist, technically). Surely they’d prefer $1 bills—which would still feel to them like $100 bills feel to me. Even in middle-income countries, $100 is quite a bit; Ecuador actually uses the US dollar as its main currency, but their per-capita GDP is only $5,600, so $100 to them feels like $1000 to us.
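To make the comparison concrete (the $60,000 figure for US per-capita GDP is my own round approximation; the Chad and Ecuador figures are the ones from the text):

```python
us_gdp_pc = 60_000       # assumed round figure for US per-capita GDP
chad_gdp_pc = 600        # figure from the text
ecuador_gdp_pc = 5_600   # figure from the text

# What a $100 bill "feels like" to an American, scaled by income:
print(100 * us_gdp_pc / chad_gdp_pc)     # like a $10,000 bill
print(100 * us_gdp_pc / ecuador_gdp_pc)  # roughly a $1,000 bill
```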

If you want to usefully increase the money supply to stimulate consumer spending, print $20 bills—or just increase some numbers in bank reserve accounts. Printing $100 bills is honestly baffling to me. It seems at best inept, and at worst possibly corrupt—maybe they do want to support drug cartels?

Basic income reconsidered

Feb 20 JDN 2459631

In several previous posts I have sung the praises of universal basic income (though I have also tried to acknowledge the challenges involved).

In this post I’d like to take a step back and reconsider the question of whether basic income is really the best approach after all. One nagging thought keeps coming back to me, and it is the fact that basic income is extremely expensive.

About 11% of the US population lives below the standard poverty line. There are many criticisms of the standard poverty line: Some say it’s too high, because you can compare it favorably with middle-class incomes in much poorer countries. Others say it’s too low, because income at that level doesn’t allow people to really live in financial security. There are many difficult judgment calls that go into devising a poverty threshold, and we can reasonably debate whether the right ones were made here.

However, I think this threshold is at least approximately correct; maybe the true poverty threshold for a household of 1 should be not $12,880 but $11,000 or $15,000, but I don’t think it should be $5,000 or $25,000. Maybe for a household of 4 it should be not $26,500 but $19,000 or $32,000; but I don’t think it should be $12,000 or $40,000.

So let’s suppose that we wanted to implement a universal basic income in the United States that would lift everyone out of poverty. We could essentially do that by taking the 2-person-household threshold of $17,420 and dividing it by 2, yielding $8,710 per person per year. (Why not use the 1-person-household threshold? There aren’t very many 1-person households in poverty, and that threshold would be considerably higher and thus considerably more expensive. A typical poor household is a single parent and one or more children; as long as kids get the basic income, that household would be above the threshold in this system.)

The US population is currently about 331 million people. If every single one of them were to receive a basic income of $8,710, that would cost nearly $2.9 trillion per year. This is a feasible amount—it’s less than half the current total federal budget—but it is still a very large amount. The tax increases required to support it would be massive, and that’s probably why, despite ostensibly bipartisan support for the idea of a basic income, no serious proposal has ever gotten off of the ground.
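The arithmetic behind that figure is straightforward:

```python
population = 331_000_000   # approximate US population
basic_income = 8_710       # dollars per person per year

total_cost = population * basic_income
print(f"${total_cost / 1e12:.2f} trillion per year")  # about $2.88 trillion
```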

If on the other hand we were to only give the basic income to people below the poverty line, that would cost only 11% of that amount: A far more manageable $320 billion per year.

We don’t want to do exactly that, however, because it would create all kinds of harmful distortions in the economy. Consider someone who is just below the threshold, considering whether to take on more work or get a higher-paying job. If their household pre-tax income is currently $15,000 and they could raise it to $18,000, a basic income given only to people below the threshold would mean that they are choosing between $15,000+$17,420=$32,420 if they keep their current work and just $18,000 if they increase it. Clearly, they would not want to take on more work. That’s a terrible system—it amounts to a marginal tax rate above 100%.

Another possible method would be to simply top off people’s income, giving them whatever they need to get to the poverty line but no more. (This would actually be even cheaper; it would probably cost something more like $160 billion per year.) That removes the distortion for people near the threshold, at the cost of making it much worse for those far below the threshold. Someone considering whether to work for $7,000 or work for $11,000 is, in such a system, choosing whether to work less for $17,420 or work more for… $17,420. They will surely choose to work less.

In order to solve these problems, what we would most likely need to do is gradually phase out the basic income, so that say increasing your pre-tax income by $1.00 would decrease your basic income payment by $0.50. The cost of this system would be somewhere in between that of a truly universal basic income and a threshold-based system, so let’s ballpark that as around $600 billion per year. It would effectively implement a marginal tax rate of 50% for anyone who is receiving basic income payments.
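As a sketch, the phase-out rule might look like this (the 50% rate is the one from the text; the $8,710 figure is the per-person amount derived earlier):

```python
def basic_income_payment(pretax_income, full_payment=8_710, phaseout_rate=0.50):
    """Basic income reduced by 50 cents for each dollar of pre-tax income."""
    return max(full_payment - phaseout_rate * pretax_income, 0.0)

def total_income(pretax_income):
    return pretax_income + basic_income_payment(pretax_income)

# Unlike a hard cutoff at the poverty line, earning more always pays:
print(total_income(15_000))  # 15,000 plus 1,210 in basic income
print(total_income(18_000))  # 18,000; the payment has fully phased out
```

Note that under this rule the payment happens to reach zero at exactly $17,420 of pre-tax income, the 2-person-household threshold used earlier.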

In theory, this is probably worse than a universal basic income, because in the latter case you can target the taxes however you like—and thus (probably) make them cause less distortion than the phased-out basic income system would. But in practice, a truly universal basic income might simply not be politically viable, and some kind of phased-out system seems much more likely to actually get passed.


Even then, I confess I am not extremely optimistic. For some reason, everyone seems to want to end poverty, but very few seem willing to use the obvious solution: Give poor people money.

The fragility of encryption

Feb 13 JDN 2459624

I said in last week’s post that most of the world’s online security rests on public-key encryption. It’s how we do our shopping, our banking, and our tax filing.

Yet public-key encryption has an Achilles’ Heel. It relies entirely on the assumption that, even knowing someone’s public key, you can’t possibly figure out what their private key is. Yet obviously the two must be deeply connected: In order for my private key to decrypt all messages that are encrypted using my public key, they must, in a deep sense, contain the same information. There must be a mathematical operation that will translate from one to the other—and that mathematical operation must be invertible.

What we have been relying on to keep public-key encryption secure is the notion of a one-way function: A function that is easy to compute, but hard to invert. A typical example is multiplying two numbers: Multiplication is a basic computing operation that is extremely fast, even for numbers with thousands of digits; but factoring a number into its prime factors is far more difficult, and currently cannot be done in any reasonable amount of time for numbers that are several hundred digits long.
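You can feel this asymmetry with a few lines of Python (trial division here is the naive factoring method; real attacks use far cleverer algorithms, but every known one still blows up as the numbers get long enough):

```python
import time

def trial_division(n):
    """Factor n by trial division; slow even for modest numbers."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

p, q = 1_000_003, 1_000_033   # two primes
n = p * q                     # multiplying them is effectively instant

start = time.perf_counter()
assert trial_division(n) == [p, q]
print(f"factored {n} in {time.perf_counter() - start:.3f} s")
```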


“Easy” and “hard” in what sense? The usual criterion is polynomial time.

Say you have an input that is n bits long—i.e. n digits, when expressed as a binary number, all 0s and 1s. A function that can be computed in time proportional to n is linear time; if it can only be done in time proportional to n^2, that is quadratic time; n^3 would be cubic time. All of these are examples of polynomial time.

But if instead the time required were 2^n, that would be exponential time. 3^n and 1.5^n would also be exponential time.

This is significant because of how much faster exponential functions grow relative to polynomial functions, for large values of n. For example, let’s compare n^3 with 2^n. When n=3, the polynomial is actually larger: n^3=27 but 2^n=8. At n=10 they are nearly equal: n^3=1000 but 2^n=1024. But by n=20, n^3 is only 8000 while 2^n is over 1 million. At n=100, n^3 is a manageable (for a modern computer) 1 million, while 2^n is a staggering 10^30; that’s a million trillion trillion.
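These are the same comparisons, computed directly:

```python
# n^3 (polynomial) versus 2^n (exponential) at the values from the text:
for n in [3, 10, 20, 100]:
    print(f"n = {n:3d}:  n^3 = {n**3:,}   2^n = {2**n:,}")
```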

You may see that there is already something a bit fishy about this: There are lots of different ways to be polynomial and lots of different ways to be exponential. Linear time n is clearly fast, and for many types of problems it seems unlikely one could do any better. But is n^100 time really all that fast? It’s still polynomial. It doesn’t take a large exponential base to make for very fast growth—2 doesn’t seem that big, after all, and when dealing with binary digits it shows up quite naturally. But while 2^n grows very fast even for reasonably-sized n, 1.0000001^n grows slower than most polynomials—even linear!—for quite a long range before eventually becoming very fast growth when n is in the hundreds of millions. Yet it is still exponential.


So, why do we use these categories? Well, computer scientists and mathematicians have discovered that many types of problems that seem different can in fact be translated into one another, so that solving one would solve the other. For instance, you can easily convert between the Boolean satisfiability problem and the subset-sum problem or the travelling salesman problem. These conversions always take time that is polynomial in n (usually somewhere between linear and quadratic, as it turns out). This has allowed us to build complexity classes: classes of problems such that any problem in the class can be converted into any other in polynomial time or better.

Problems that can be solved in polynomial time are in class P, for polynomial.

Problems that can be checked—but not necessarily solved—in polynomial time are in class NP, which actually stands for “non-deterministic polynomial” (not a great name, to be honest). Given a problem in NP, you may not be able to come up with a valid answer in polynomial time. But if someone gave you an answer, you could tell in polynomial time whether or not that answer was valid.

Boolean satisfiability (often abbreviated SAT) is the paradigmatic NP problem: Given a Boolean formula like (A OR B OR C) AND (¬A OR D OR E) AND (¬D OR ¬C OR B) and so on, it isn’t a simple task to determine if there’s some assignment of the variables A, B, C, D, E that makes it all true. But if someone handed you such an assignment, say (¬A, B, ¬C, D, E), you could easily check that it does in fact satisfy the expression. It turns out that in fact SAT is what’s called NP-complete: Any NP problem can be converted into SAT in polynomial time.
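Here is what “easy to check” looks like concretely, as a sketch in Python (the tuple-of-clauses encoding, with "~" marking negation, is my own):

```python
# The formula from the text: (A OR B OR C) AND (~A OR D OR E) AND (~D OR ~C OR B)
formula = [("A", "B", "C"), ("~A", "D", "E"), ("~D", "~C", "B")]

def satisfies(formula, assignment):
    """Check an assignment in time linear in the number of literals."""
    def literal_true(lit):
        return not assignment[lit[1:]] if lit.startswith("~") else assignment[lit]
    return all(any(literal_true(lit) for lit in clause) for clause in formula)

# The satisfying assignment from the text: (not A, B, not C, D, E)
assignment = {"A": False, "B": True, "C": False, "D": True, "E": True}
print(satisfies(formula, assignment))  # True
```

Checking is one pass over the clauses; it is *finding* such an assignment that (as far as anyone knows) requires exponential search in the worst case.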

This is important because in order to be useful as an encryption system, we need our one-way function to be in class P (otherwise, we couldn’t compute it quickly). Yet, by definition, this means its inverse must be in class NP.


Thus, simply because it is easy to multiply two numbers, I know for sure that factoring numbers must be in NP: All I have to do to verify that a factorization is correct is multiply the numbers. Since the way to get a public key from a private key is (essentially) to multiply two numbers, this means that getting a private key from a public key is equivalent to factorization—which means it must be in NP.

This would be fine if we knew some problems in NP that could never, ever be solved in polynomial time. We could just pick one of those and make it the basis of our encryption system. Yet in fact, we do not know any such problems—indeed, we are not even certain they exist.

One of the biggest unsolved problems in mathematics is P versus NP, which asks the seemingly-simple question: “Are P and NP really different classes?” It certainly seems like they are—there are problems like multiplying numbers, or even finding out whether a number is prime, that are clearly in P, and there are other problems, like SAT, that are definitely in NP but seem to not be in P. But in fact, despite decades of attempts, no one has ever been able to prove that P ≠ NP.

To be clear, no one has managed to prove that P = NP, either. (Doing either one would win you a Clay Millennium Prize.) But since the conventional wisdom among most mathematicians is that P ≠ NP (99% of experts polled in 2019 agreed), I actually think this possibility has not been as thoroughly considered.

Vague heuristic arguments are often advanced for why P ≠ NP, such as this one by Scott Aaronson: “If P = NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in “creative leaps,” no fundamental gap between solving a problem and recognizing the solution once it’s found.”

That really doesn’t follow at all. Doing something in polynomial time is not the same thing as doing it instantly.

Say for instance someone finds an algorithm to solve SAT in n^6 time. Such an algorithm would conclusively prove P = NP. n^6; that’s a polynomial, all right. But it’s a big polynomial. The time required to check a SAT solution is linear in the number of terms in the Boolean formula—just check each one, see if it works. But if it turns out we could generate such a solution in time proportional to the sixth power of the number of terms, that would still mean it’s a lot easier to check than it is to solve. A lot easier.

I guess if your notion of a “fundamental gap” rests upon the polynomial/exponential distinction, you could say that’s not “fundamental”. But this is a weird notion to say the least. If a problem with n = 1 million can be checked in 1 million processor cycles (that is, milliseconds, or with some overhead, seconds), but only solved in 10^36 processor cycles (that is, over a million trillion years), that sounds like a pretty big difference to me.

Even an n^2 algorithm wouldn’t show there’s no difference. The difference between n and n^2 is, well, a factor of n. So finding the answer could still take far longer than verifying it. This would still be worrisome for encryption, however: Even a million times as long isn’t really that much. It means that if something would work in a few seconds for an ordinary computer (the timescale we want for our online shopping and banking), then, say, the Russian government with a supercomputer a thousand times better could spend half an hour on it. That’s… a problem. I guess if breaking our encryption was only feasible for superpower national intelligence agencies, it wouldn’t be a complete disaster. (Indeed, many people suspect that the NSA and FSB have already broken most of our encryption, and I wouldn’t be surprised to learn that’s true.)

But what I really want to say here is that since it may be true that P=NP—we don’t know it isn’t, even if most people strongly suspect as much—we should be trying to find methods of encryption that would remain secure even if that turns out to be the case. (There’s another reason as well: Quantum computers are known to be able to factor numbers in polynomial time—though it may be a while before they get good enough to do so usefully.)

We do know two such methods, as a matter of fact. There is quantum encryption, which, like most things quantum, is very esoteric and hard to explain. (Maybe I’ll get to that in another post.) It also requires sophisticated, expensive hardware that most people are unlikely to be able to get.

And then there is onetime pad encryption, which is shockingly easy to explain and can be implemented on any home computer.

The problem with substitution ciphers is that you can look for patterns. You can do this because the key ultimately contains only so much information, based on how long it is. If the key contains 100 bits and the message contains 10,000 bits, at some point you’re going to have to repeat some kind of pattern—even if it’s a very complex, sophisticated one like the Enigma machine.

Well, what if the key were as long as the message? What if a 10,000 bit message used a 10,000 bit key? Then you could substitute every single letter for a different symbol each time. What if, on its first occurrence, E is D, but then it’s Q, and then it’s T—and each of these was generated randomly and independently each time? Then it can’t be broken by searching for patterns—because there are no patterns to be found.

Mathematically, it would look like this: Take each bit of the plaintext, and randomly generate another bit for the key. Add the key bit to the plaintext bit (technically you want to use bitwise XOR, but that’s basically adding), and you’ve got the ciphertext bit. At the other end, subtracting out each key bit will give back each plaintext bit. Provided you can generate random numbers efficiently, this will be fast to encrypt and decrypt—but literally impossible to break without the key.
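A minimal sketch in Python (the standard `secrets` module supplies the random key; XOR plays the role of the adding and subtracting described above):

```python
import secrets

def otp_encrypt(plaintext: bytes):
    """One-time pad: XOR each byte with a fresh random key byte."""
    key = secrets.token_bytes(len(plaintext))   # key as long as the message
    return key, bytes(p ^ k for p, k in zip(plaintext, key))

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ciphertext = otp_encrypt(b"ATTACK AT DAWN")
assert otp_decrypt(key, ciphertext) == b"ATTACK AT DAWN"

# A brute-force attacker learns nothing: some other key "decrypts" the very
# same ciphertext to a completely different, equally plausible message.
fake_key = bytes(c ^ p for c, p in zip(ciphertext, b"RETREAT AT TEA"))
assert otp_decrypt(fake_key, ciphertext) == b"RETREAT AT TEA"
```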

Indeed, onetime-pad encryption is so secure that it is a proven mathematical theorem that there is no way to break it. Even if you had such staggering computing power that you could try every possible key, you wouldn’t even know when you got the right one—because every possible message can be generated from a given ciphertext, using some key. Even if you knew some parts of the message already, you would have no way to figure out any of the rest—because there are no patterns linking the two.

The downside is that you need to somehow send the keys. As I said in last week’s post, if you have a safe way to send the key, why can’t you send the message that way? Well, there is still an advantage, actually, and that’s speed.

If there is a slow, secure way to send information (e.g. deliver it physically by armed courier), and a fast, insecure way (e.g. send it over the Internet), then you can send the keys in advance by the slow, safe way and then send ciphertexts later by the fast, risky way. Indeed, this kind of courier-based onetime-pad encryption is how the “red phone” (really a fax line) linking the White House to the Kremlin works.

Now, for online banking, we’re not going to be able to use couriers. But here’s something we could do. When you open a bank account, the bank could give you a, say, 128 GB flash drive of onetime-pad keys for you to use in your online banking. You plug that into your computer every time you want to log in, and it grabs the next part of key each time (there are some tricky technical details with synchronizing this that could, in practice, create some risk—but, done right, the risk would be small). If you are sending 10 megabytes of encrypted data each time (and that’s surely enough to encode a bank statement, though they might want to use a format other than PDF), you’ll get over 10,000 uses out of that flash drive. If you’ve been sending a lot of data and your key starts to run low, you can physically show up at the bank branch and get a new one.

Similarly, you could have onetime-pad keys on flash drives (more literal flash keys) given to you by the US government for tax filing, and another from each of your credit card issuers. For online purchases, the sellers would probably need to have their own onetime-pad keys set up with the banks and credit card companies, so that you send the info to VISA encrypted one way and they send it to the seller encrypted another way. Businesses with large sales volume would go through keys very quickly—but then, they can afford to keep buying new flash drives. Since each transaction should only take a few kilobytes, the cost of additional onetime-pad key material should be small compared to the cost of packing, shipping, and the items themselves. For larger purchases, businesses could even get in the habit of sending you a free flash key with each purchase so that future purchases are easier.

This would render paywalls very difficult to implement, but good riddance. Cryptocurrency would die, but even better riddance. It would be most inconvenient to deal with things like, well, writing a blog like this; needing to get a physical key from WordPress sounds like quite a hassle. People might actually just tolerate having their blogs hacked on occasion, because… who is going to hack your blog, and who really cares if your blog gets hacked?

Yes, this system is awkward and inconvenient compared to our current system. But unlike our current system, it is provably secure. Right now, it may seem like a remote possibility that someone would find an algorithm to prove P=NP and break encryption. But it could definitely happen, and if it did happen, it could happen quite suddenly. It would be far better to prepare for the worst than be unprepared when it’s too late.

The importance of encryption

Feb 6 JDN 2459617

In last week’s post I told you of the compounding failures of cryptocurrency, which largely follow from the fact that it is very bad at being, well, currency. It doesn’t have a steady, predictable value, it isn’t easy to make payments with, and it isn’t accepted for most purchases.

But I realized that I haven’t ever gotten around to explaining anything about the crypto side of things—just what is encryption, and why does it matter?

At its core, encryption is any technique designed to disguise information so that it can be seen only by its intended viewers. Humans have been using some form of encryption since shortly after we invented writing—though, like any technology, our encryption has greatly improved over time.

Encryption involves converting a plaintext, the information you want to keep secret, into a ciphertext, a disguised form, using a key that is kept secret and can be used to convert the ciphertext back into plaintext. Decryption is the opposite process, extracting the plaintext from the ciphertext.

Some of the first forms of encryption were simple substitution ciphers: Have a set of rules that substitutes different letters for the original letters, such as “A becomes D, B becomes Q” and so on. This works pretty well, actually; but if each letter in the ciphertext always corresponds to the same letter in the plaintext, then you can look for patterns that would show up in text. For instance, E is usually the most common letter, so if you see a lot of three-letter sequences like BFP and P is a really common letter, odds are good that BFP is really THE and so you can guess that B=T, F=H, P=E.
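A substitution cipher takes only a few lines of code (the shift-by-3 rule below is just one convenient choice of substitution; any one-to-one mapping of the alphabet works the same way):

```python
import string

def make_substitution(rule):
    """Build encrypt/decrypt functions from a letter-substitution rule."""
    table = str.maketrans(rule)
    inverse = str.maketrans({v: k for k, v in rule.items()})
    return (lambda text: text.translate(table)), (lambda text: text.translate(inverse))

# Shift every letter forward by 3: A becomes D, B becomes E, and so on.
rule = {c: chr((ord(c) - ord("A") + 3) % 26 + ord("A")) for c in string.ascii_uppercase}
encrypt, decrypt = make_substitution(rule)

ciphertext = encrypt("ATTACK AT DAWN")
print(ciphertext)                       # DWWDFN DW GDZQ
assert decrypt(ciphertext) == "ATTACK AT DAWN"
```

Notice that every A in the ciphertext came out as the same letter D; that repetition is exactly what frequency analysis exploits.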

More sophisticated ciphers tried to solve this problem by changing the substitution pattern as they go. The Enigma used by Nazi Germany was essentially this: It had a complex electrical and mechanical apparatus dedicated to changing the substitution rules with each key-press, in a manner that would be unpredictable to an outside observer but could be readily reproduced by using another Enigma machine. (Of course, it wasn’t actually as secure as they thought.)

For most of history, people have used what’s called private-key encryption, where there is a single key used for both encryption and decryption. In that case, you need to keep the key secret: If someone were to get their hands on it, they could easily decrypt all of your messages.

This is a problem, because with private-key encryption, you need to give the key to the person you want to read the message. And if there is a safe way to send the key, well… why couldn’t you send the message that way?

In the late 20th century mathematicians figured out an alternative, public-key encryption, which uses two keys: A private key, used to decrypt, and a new, public key, which can be used to encrypt. The public key is called “public” because you don’t need to keep it secret. You can hand it out to anyone, and they can encrypt messages with it. Those messages will be readable by you and you alone—for only you have the private key.
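A toy numeric sketch of the idea, using the standard textbook RSA example with tiny primes (real keys use primes hundreds of digits long; this illustrates the math and is in no way secure):

```python
p, q = 61, 53              # the two secret primes
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent; (n, e) is the public key
d = pow(e, -1, phi)        # 2753, the private exponent: computing it
                           # without knowing p and q means factoring n

message = 65
ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert decrypted == message
```

(The `pow(e, -1, phi)` modular-inverse form requires Python 3.8 or later.)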

With most methods of public-key encryption, senders can even use their private key to prove to you that they are the person who sent the message, known as authentication. They encrypt it using their private key and your public key, and then you decrypt it using their public key and your private key.

This is great! It means that anyone can send messages to anyone else, and everyone will know not only that their information is safe, but also who it came from. You never have to transmit the private keys at all. Problem solved.


We now use public-key encryption for all sorts of things, particularly online: Online shopping, online banking, online tax filing. It’s not hard to imagine how catastrophic it could be if all of these forms of encryption were to suddenly fail.

In next week’s post, I’m going to talk about why I’m worried that something like that could one day happen, and what we might do in order to make sure it doesn’t. Stay tuned.

Cryptocurrency and its failures

Jan 30 JDN 2459610

It started out as a neat idea, though very much a solution in search of a problem. Using encryption, could we decentralize currency and eliminate the need for a central bank?

Well, it’s been a few years, and we have now seen how well that went. Bitcoin recently crashed, but it has always been astonishingly volatile. As a speculative asset, such volatility is often tolerable—for many, even profitable. But as a currency, it is completely unbearable. People need to know that their money will be a store of value and a medium of exchange—and something that changes price from one minute to the next is neither.

Some of cryptocurrency’s failures have been hilarious, like the ill-fated island called [yes, really] “Cryptoland”, which crashed and burned when they couldn’t find any investors to help them buy the island.

Others have been darkly comic, but tragic in their human consequences. Chief among these was the failed attempt by El Salvador to make Bitcoin an official currency.

At the time, President Bukele justified it with an economically baffling argument: The total value of all Bitcoin in the world is $680 billion; therefore, if even 1% gets invested in El Salvador, GDP will increase by $6.8 billion, which is 25%!

First of all, that would only happen if 1% of all Bitcoin were invested in El Salvador each year—otherwise you’re looking at a one-time injection of money, not an increase in GDP.

But more importantly, this is like saying that the US monetary base is $6 trillion (physical cash is only about a third of that; the broader money supply is larger still), so maybe by dollarizing your economy you can get 1% of that—$60 billion, baby! No, that’s not how any of this works. Dollarizing could still be a good idea (though it didn’t go all that well in El Salvador), but it won’t give you some kind of share in the US economy. You can’t collect dividends on US GDP.

In a way it’s fortunate that El Salvador’s experiment in Bitcoin failed the way it did: Nobody bought into it in the first place. They couldn’t convince people to buy government assets that were backed by Bitcoin (perhaps because the assets were a strictly worse deal than just, er, buying Bitcoin). So the human cost of this idiotic experiment should be relatively minimal: It’s not like people are losing their homes over this.

That is, unless President Bukele doubles down, which he now appears to be doing. Even people who are big fans of cryptocurrency are unimpressed with El Salvador’s approach to it.

It would be one thing if there were some stable cryptocurrency that one could try pegging one’s national currency to, but there isn’t. Even so-called stablecoins are generally pegged to… regular currencies, typically the US dollar but also sometimes the Euro or a few other currencies. (I’ve seen the Australian Dollar and the Swiss Franc, but oddly enough, not the Pound Sterling.)

Or a country could try issuing its own cryptocurrency, as an all-digital currency instead of one that is partly paper. It’s not totally clear to me what advantages this would have over the current system (in which most of the money supply is bank deposits, i.e. already digital), but it would at least preserve the key advantage of having a central bank that can regulate your money supply.

But no, President Bukele decided to take an already-existing cryptocurrency, backed by nothing but the whims of the market, and make it legal tender. Somehow he missed the fact that a currency which rises and falls by 10% in a single day is generally considered bad.

Why? Is he just an idiot? I mean, maybe, though Bukele’s approval rating is astonishingly high. (And El Salvador is… mostly democratic. Unlike, say, Putin’s, these approval ratings are, I think, basically real.) But that’s not the only reason. My guess is that he was gripped by the same FOMO that has gripped everyone else who evangelizes for Bitcoin. The allure of easy money is often irresistible.

Consider President Bukele’s position. You’re governing a poor, war-torn country which has had economic problems of various types since its founding. When the national currency collapsed a generation ago, the country was put on the US dollar, but that didn’t solve the problem. So you’re looking for a better solution to the monetary doldrums your country has been in for decades.

You hear about a fancy new monetary technology, “cryptocurrency”, which has all the tech people really excited and seems to be making tons of money. You don’t understand a thing about it—hardly anyone seems to, in fact—but you know that people with a lot of insider knowledge of technology and finance are really invested in it, so it seems like there must be something good here. So, you decide to launch a program that will convert your country’s currency from the US dollar to one of these new cryptocurrencies—and you pick the most famous one, which is also extremely valuable, Bitcoin.

Could cryptocurrencies be the future of money, you wonder? Could this be the way to save your country’s economy?

Despite all the evidence that had already accumulated that cryptocurrency wasn’t working, I can understand why Bukele would be tempted by that dream. Just as we’d all like to get free money without having to work, he wanted to save his country’s economy without having to implement costly and unpopular reforms.

But there is no easy money. Not really. Some people get lucky; but they ultimately benefit from other people’s hard work.

The lesson here is deeper than cryptocurrency. Yes, clearly, it was a dumb idea to try to make Bitcoin a national currency, and it will get even dumber if Bukele really does double down on it. But more than that, we must all resist the lure of easy money. If it sounds too good to be true, it probably is.

Keynesian economics: It works, bitches

Jan 23 JDN 2459613

(I couldn’t resist; for the uninitiated, my slightly off-color title is referencing this XKCD comic.)

When faced with a bad recession, Keynesian economics prescribes the following response: Expand the money supply. Cut interest rates. Increase government spending, but decrease taxes. The bigger the recession, the more we should do all these things—especially increasing spending, because interest rates will often get pushed to zero, creating what’s called a liquidity trap.

Take a look at these two FRED graphs, both since the 1950s.
The first is interest rates (specifically the Fed funds effective rate):

The second is the US federal deficit as a proportion of GDP:

Interest rates were pushed to zero right after the 2008 recession, and didn’t start coming back up until 2016. Then as soon as we hit the COVID recession, they were dropped back to zero.

The deficit looks even more remarkable. At the 2009 trough of the recession, the deficit was large, nearly 10% of GDP; but then it was quickly reduced back to normal, to between 2% and 4% of GDP. And that initial surge is as much explained by GDP and tax receipts falling as by spending increasing.

Yet in 2020 we saw something quite different: The deficit became huge. Literally off the chart, nearly 15% of GDP. A staggering $2.8 trillion. We’ve not had a deficit that large as a proportion of GDP since WW2. We’ve never had a deficit that large in real billions of dollars.

Deficit hawks came out of the woodwork to complain about this, and for once I was worried they might actually be right. Their most credible complaint was that it would trigger inflation, and they weren’t wrong about that: Inflation became a serious concern for the first time in decades.

But these recessions were very large, and when you actually run the numbers, this deficit was the correct magnitude for what Keynesian models tell us to do. I wouldn’t have thought our government had the will and courage to actually do it, but I am very glad to have been wrong about that, for one very simple reason:

It worked.

In 2009, we didn’t actually fix the recession. We blunted it; we stopped it from getting worse. But we never really restored GDP: we just let it resume its normal growth rate from a lower level, and only slowly climbed back toward where we had been.

2021 went completely differently. With a much larger deficit, we fixed this recession. We didn’t just stop the fall; we reversed it. We aren’t just back to normal growth rates—we are back to the same level of GDP, as if the recession had never happened.

This contrast is quite obvious from the graph of US GDP:

In 2008 and 2009, GDP slumps downward, and then just… resumes its previous trend. It’s like we didn’t do anything to fix the recession, and just allowed the overall strong growth of our economy to carry us through.

The pattern in 2020 is completely different. GDP plummets downward—much further, much faster than in the Great Recession. But then it immediately surges back upward. By the end of 2021, it was above its pre-recession level, and looks to be back on its growth trend. With a recession this deep, if we’d just waited like we did last time, it would have taken four or five years to reach this point—we actually did it in less than one.

I wrote earlier about how this is a weird recession, one that actually seems to fit Real Business Cycle theory. Well, it was weird in another way as well: We fixed it. We actually had the courage to do what Keynes told us to do in 1936, and it worked exactly as it was supposed to.

Indeed, going from unemployment of almost 15% in April of 2020 to under 4% in December of 2021 is fast enough that I feel like I’m getting whiplash. We have never seen unemployment drop that fast. Krugman is fond of comparing this to “morning in America”, but that’s really an understatement. Pitch black one moment, shining bright the next: this isn’t a sunrise, it’s pulling open a blackout curtain.

And all of this while the pandemic is still going on! The omicron variant has brought case numbers to their highest levels ever, though fortunately death rates so far are still below last year’s peak.

I’m not sure I have the words to express what a staggering achievement of economic policy it is to so rapidly and totally repair the economic damage caused by a pandemic while that pandemic is still happening. It’s the equivalent of repairing an airplane that is not only still in flight, but still taking anti-aircraft fire.

Why, it seems that Keynes fellow may have been onto something, eh?

Reversals in progress against poverty

Jan 16 JDN 2459606

I don’t need to tell you that the COVID pandemic has been very bad for the world. Yet perhaps the worst outcome of the pandemic is one that most people don’t recognize: It has reversed years of progress against global poverty.

Estimates of the number of people who will be thrown into extreme poverty as a result of the pandemic are consistently around 100 million, though some forecasts have predicted this will rise to 150 million, or, in the most pessimistic scenarios, even as high as 500 million.

Pre-COVID projections showed the global poverty rate falling steadily from 8.4% in 2019 to 6.3% by 2030. But COVID resulted in the first upward surge in global poverty in decades, and updated models now suggest that the global poverty rate in 2030 will be as high as 7.0%. That difference is 0.7% of a forecasted population of 8.5 billion—so that’s a difference of 59 million people.
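For anyone who wants to check that arithmetic, here is a quick sketch; the two rates and the population figure are the ones quoted above.

```python
# Projected 2030 global poverty rates, pre- and post-COVID models (figures quoted above)
pre_covid_rate = 0.063    # 6.3% projected before the pandemic
post_covid_rate = 0.070   # 7.0% in the updated, post-COVID models
population_2030 = 8.5e9   # forecasted world population in 2030

extra_poor = (post_covid_rate - pre_covid_rate) * population_2030
print(f"{extra_poor / 1e6:.1f} million more people in extreme poverty")
# prints "59.5 million more people in extreme poverty"
```

Rounding down gives the 59 million cited in the text.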

This is a terrible reversal of fortune, and a global tragedy. Tens or perhaps even hundreds of millions of people will suffer the pain of poverty because of this global pandemic and the numerous missteps by many of the world’s governments—not least the United States—in response to it.

Yet it’s important to keep in mind that this is a short-term reversal in a long-term trend toward reduced poverty. Yes, the most optimistic predictions are turning out to be wrong—but the general pattern of dramatic reductions in global poverty over the late 20th and early 21st century is still holding up.

That post-COVID estimate of a global poverty rate of 7.0% needs to be compared against the fact that as recently as 1980, the global poverty rate at the same income threshold (adjusted for inflation and purchasing power, of course) was a whopping 44%.

This pattern makes me feel deeply ambivalent about the effects of globalization on inequality. While it now seems clear that globalization has exacerbated inequality within First World countries—and triggered a terrible backlash of right-wing populism as a result—it also seems clear that globalization was a major reason for the dramatic reductions in global poverty in the past few decades.

I think the best answer I’ve been able to come up with is that globalization is overall a good thing, and we must continue it—but we also need to be much more mindful of its costs, and we must make policy that mitigates those costs. Expanded trade has winners and losers, and we should be taxing the winners to compensate the losers. To make good economic policy, it simply isn’t enough to increase aggregate GDP; you actually have to make life better for everyone (or at least as many people as you can).

Unfortunately, knowing what policies to make is only half the battle. We must actually implement those policies, which means winning elections, which means restoring the public’s faith in the authority of economic experts.

Some of the people voting for Donald Trump were just what Hillary Clinton correctly (if tone-deafly) referred to as “deplorables”: racists, misogynists, xenophobes. But I think that many others weren’t voting for Trump but against Clinton; they weren’t embracing far-right populism but rather rejecting center-left technocratic globalization. They were tired of being told what to do by experts who didn’t seem to care about them or their interests.

And the thing is, they were right about that. Not about voting for Trump—that’s unforgivable—but about the fact that expert elites had been ignoring their interests and needed a wake-up call. There were a hundred better ways of making that wake-up call that didn’t involve putting a narcissistic, incompetent maniac in charge of the world’s largest economy, military and nuclear arsenal, and millions of people should be ashamed of themselves for not taking those better options. Yet the fact remains: The wake-up call was necessary, and we should be responding to it.

We expert elites (I think I can officially carry that card, now that I have a PhD and a faculty position at a leading research university) need to do a much better job of two things: First, articulating the case for our policy recommendations in a way that ordinary people can understand, so that they feel justified and not simply rammed down people’s throats; and second, recognizing the costs and downsides of these policies and taking action to mitigate them whenever possible.

For instance: Yes, we need to destroy all the coal jobs. They are killing workers and the planet. Coal companies need to be transitioned to new industries or else shut down. This is not optional. It must be done. But we also need to explain to those coal miners why it’s necessary to move on from coal to solar and nuclear, and we need to be implementing various policies to help those workers move on to better, safer jobs that pay as well and don’t involve filling their lungs with soot and the atmosphere with carbon dioxide. We need to articulate, emphasize—and loudly repeat—that this isn’t about hurting coal miners to help everyone else, but about helping everyone, coal miners included, and that if anyone gets hurt it will only be a handful of psychopathic billionaires who already have more money than any human being could possibly need or deserve.

Another example: We cannot stop trading with India and China. Hundreds of millions of innocent people would suddenly be thrown out of work and into poverty if we did. We need the products they make for us, and they need the money we pay for those products. But we must also acknowledge that trading with poor countries does put downward pressure on wages back home, and take action to help First World workers who are now forced to compete with global labor markets. Maybe this takes the form of better unemployment benefits, or job-matching programs, or government-sponsored job training. But we cannot simply shrug and let people lose their jobs and their homes because the factories they worked in were moved to China.

Strange times for the labor market

Jan 9 JDN 2459589

Labor markets have been behaving quite strangely lately, due to COVID and its consequences. As I said in an earlier post, the COVID recession was the one recession I can think of that actually seemed to follow Real Business Cycle theory—where it was labor supply, not demand, that drove employment.

I dare say that for the first time in decades, the US government actually followed Keynesian policy. US federal government spending surged from $4.8 trillion to $6.8 trillion in a single year:

That is a staggering amount of additional spending; I don’t think any country in history has ever increased their spending by that large an amount in a single year, even inflation-adjusted. Yet in response to a recession that severe, this is exactly what Keynesian models prescribed—and for once, we listened. Instead of balking at the big numbers, we went ahead and spent the money.

And apparently it worked: unemployment spiked to the worst levels seen since the Great Depression, then suddenly plummeted back to normal almost immediately:

Nor was this just the result of people giving up on finding work. U-6, the broader unemployment measure that includes people who are underemployed or have given up looking for work, shows the same unprecedented pattern:

The oddest part is that people are now quitting their jobs at the highest rate seen in over 20 years:

[FRED_quits.png]

This phenomenon has been dubbed the Great Resignation, and while its causes are still unclear, it is clearly the most important change in the labor market in decades.

In a previous post I hypothesized that this surge in strikes and quits was a coordination effect: The sudden, consistent shock to all labor markets at once gave people a focal point to coordinate their decision to strike.

But it’s also quite possible that it was the Keynesian stimulus that did it: The relief payments made it safe for people to leave jobs they had long hated, and they leapt at the opportunity.

When that huge surge in government spending was proposed, the usual voices came out of the woodwork to warn of terrible inflation. It’s true, inflation has been higher lately than usual, nearly 7% last year. But we still haven’t hit the double-digit inflation rates we had in the late 1970s and early 1980s:

Indeed, most of the inflation we’ve had can be explained by the shortages created by the supply chain crisis, along with a very interesting substitution effect created by the pandemic. As services shut down, people bought goods instead: Home gyms instead of gym memberships, wifi upgrades instead of restaurant meals.

As a result, the price of durable goods actually rose, when it had previously been falling for decades. That broader pattern is worth emphasizing: As technology advances, services like healthcare and education get more expensive, durable goods like phones and washing machines get cheaper, and nondurable goods like food and gasoline fluctuate but ultimately stay about the same. But in the last year or so, durable goods have gotten more expensive too, because people want to buy more while supply chains are able to deliver less.

This suggests that the inflation we are seeing is likely to go away in a few years, once the pandemic is better under control (or else reduced to something like a new influenza: a virus that is always with us, but that we learn to live with).

But I don’t think the effects on the labor market will be so transitory. The strikes and quits we’ve been seeing lately really are at a historic level, and they are likely to have a long-lasting effect on how work is organized. Employers are panicking about having to raise wages and whining about how “no one wants to work” (meaning, of course, no one wants to work at the current wage and conditions on offer). The correct response is the one from Goodfellas [language warning].

For the first time in decades, there are actually more job vacancies than unemployed workers:

This means that the tables have turned. The bargaining power is suddenly in the hands of workers again, after being in the hands of employers for as long as I’ve been alive. Of course it’s impossible to know whether some other shock could yield another reversal; but for now, it looks like we are finally on the verge of major changes in how labor markets operate—and I for one think it’s about time.

Reasons for optimism in 2022

Jan 2 JDN 2459582

When this post goes live, we will have begun the year 2022.

That still sounds futuristic, somehow. We’ve been in the 21st century long enough that most of my students were born in it and nearly all of them are old enough to drink (to be fair, it’s the UK, so “old enough to drink” only means 18). Yet “the year 2022” still seems like it belongs in science fiction, and not on our wall calendars.

2020 and 2021 were quite bad years. Death rates and poverty rates surged around the world. Almost all of that was directly or indirectly due to COVID.

Yet there are two things we should keep in perspective.

First, those death rates and poverty rates surged to what we used to consider normal 50 years ago. These are not uniquely bad times; indeed, they are still better than most of human history.

Second, there are many reasons to think that 2022—or perhaps a bit later than that, 2025 or 2030—will be better.

The Omicron variant is highly contagious, but so far does not appear to be as deadly as previous variants. COVID seems to be evolving to be more like influenza: Catching it will be virtually inevitable, but dying from it will be very rare.

Things are also looking quite good on the climate change front: Renewable energy production is growing at breathtaking speed and is now cheaper than almost every other form of energy. It’s awful that we panicked and locked down nuclear energy for the last 50 years, but at this point we may no longer need it: Solar and wind are just that good now.

Battery technology is also rapidly improving, giving us denser, cheaper, more stable batteries that may soon allow us to solve the intermittency problem: the wind may not always blow and the sun may not always shine, but if you have big enough batteries you don’t need them to. (You can get a really good feel for how much difference good batteries make in energy production by playing Factorio, or, more whimsically, Mewnbase.)

If we do go back to nuclear energy, it may not be fission anymore, but fusion. Now that we have nearly reached that vital milestone of break-even, investment in fusion technology has rapidly increased.


Fusion has basically all of the benefits of fission with none of the drawbacks. Unlike renewables, it can produce enormous amounts of energy in a way that can be easily scaled and controlled independently of weather conditions. Unlike fission, it requires no exotic nuclear fuels (deuterium can be readily attained from water), and produces no long-lived radioactive waste. (Indeed, development is ongoing of methods that could use fusion products to reduce the waste from fission reactors, making the effective rate of nuclear waste production for fusion negative.) Like both renewables and fission, it produces no carbon emissions other than those required to build the facility (mainly due to concrete).

Of course, technology is only half the problem: we still need substantial policy changes to get carbon emissions down. We’ve already dragged our feet for decades too long, and we will pay the price for that. But anyone saying that climate change is an inevitable catastrophe hasn’t been paying attention to recent developments in solar panels.

Technological development in general seems to be speeding up lately, after having stalled quite a bit in the early 2000s. Moore’s Law may be leveling off, but the technological frontier may simply be moving away from digital computing power and onto other things, such as biotechnology.

Star Trek told us that we’d have prototype warp drives by the 2060s but we wouldn’t have bionic implants to cure blindness until the 2300s. They seem to have gotten it backwards: We may never have warp drive, but we’ve got those bionic implants today.

Neural interfaces are allowing paralyzed people to move, speak, and now even write.

After decades of failed promises, gene therapy is finally becoming useful in treating real human diseases. CRISPR changes everything.

We are also entering a new era of space travel, thanks largely to SpaceX and their remarkable reusable rockets. The payload cost to LEO is a standard measure of the cost of space travel, which describes the cost of carrying a certain mass of cargo up to low Earth orbit. By this measure, costs have declined from nearly $20,000 per kg to only $1,500 per kg since the 1960s. Elon Musk claims that he can reduce the cost to as low as $10 per kg. I’m skeptical, to say the least—but even dropping it to $500 or $200 would be a dramatic improvement and open up many new options for space exploration and even colonization.

To put this in perspective, the cost of carrying a human being to the International Space Station (about 100 kg to LEO) has fallen from $2 million to $150,000. A further decrease to $200 per kg would lower that to $20,000, opening the possibility of space tourism; $20,000 might be something even upper-middle-class people could do as a once-in-a-lifetime vacation. If Musk is really right that he can drop it all the way to $10 per kg, the cost to carry a person to the ISS would be only $1000—something middle-class people could do regularly. (“Should we do Paris for our anniversary this year, or the ISS?”) Indeed, a cost that low would open the possibility of space-based shipping—for when you absolutely must have the product delivered from China to California in the next 2 hours.
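The per-passenger figures above follow directly from the per-kilogram prices. A minimal sketch, using the article’s round figure of 100 kg as the payload for one person:

```python
def trip_cost(price_per_kg, payload_kg=100):
    """Cost of lifting a payload to low Earth orbit at a given $/kg launch price."""
    return price_per_kg * payload_kg

# Prices quoted above: the 1960s rate, today's rate, and hypothetical future targets
for price in (20_000, 1_500, 500, 200, 10):
    print(f"${price:>6}/kg -> ${trip_cost(price):>9,} to carry one person (~100 kg) to the ISS")
```

At $20,000/kg that is $2,000,000 per person; at $1,500/kg, $150,000; at $10/kg, only $1,000—matching the figures in the text.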

Another way to put this in perspective is to convert these prices per mass in terms of those of commodities, such as precious metals. $20,000 per kg is nearly the price of solid platinum. $500 per kg is about the price of sterling silver. $10 per kg is roughly the price of copper.

The reasons for optimism are not purely technological. There has also been significant social progress just in the last few years, with major milestones on LGBT rights being made around the world in 2020 and 2021. Same-sex marriage is now legally recognized over nearly the entire Western Hemisphere.

None of that changes the fact that we are still in a global pandemic which seems to be increasingly out of control. I can’t tell you whether 2022 will be better than 2021, or just more of the same—or perhaps even worse.

But while these times are hard, overall the world is still making progress.

A very Omicron Christmas

Dec 26 JDN 2459575

Remember back in spring of 2020 when we thought that this pandemic would quickly get under control and life would go back to normal? How naive we were.

The newest Omicron strain seems to be the most infectious yet—even people who are fully vaccinated are catching it. The good news is that it also seems to be less deadly than most of the earlier strains. COVID is evolving to spread itself better but to be less harmful to us—much as influenza and cold viruses evolved. While weekly cases are near an all-time peak, weekly deaths are well below the worst they have been.

Indeed, at this point, it’s looking like COVID will more or less be with us forever. In the most likely scenario, the virus will continue to evolve to be more infectious but less lethal, and then we will end up with another influenza on our hands: A virus that can’t be eradicated, gets huge numbers of people sick, but only kills a relatively small number. At some point we will decide that the risk of getting sick is low enough that it isn’t worth forcing people to work remotely or maybe even wear masks. And we’ll relax various restrictions and get back to normal with this new virus a regular part of our lives.


Merry Christmas?

But it’s not all bad news. The vaccination campaign has been staggeringly successful—now the total number of vaccine doses exceeds the world population, so the average human being has been vaccinated for COVID at least once.

And while 5.3 million deaths due to the virus over the last two years sounds terrible, it should be compared against the baseline rate of 15 million deaths during that same interval, and the fact that worldwide death rates have been rapidly declining. Had COVID not happened, 2021 would be like 2019, which had nearly the lowest death rate on record, at 7,579 deaths per million people per year. As it is, we’re looking at something more like 10,000 deaths per million people per year (1%), or roughly what we considered normal way back in the long-ago times of… the 1980s. To get even as bad as things were in the 1950s, we would have to double our current death rate.

Indeed, there’s something quite remarkable about the death rate we had in 2019, before the pandemic hit: 7,579 per million is only 0.76%. A being with a constant annual death rate of 0.76% would have a life expectancy of over 130 years. This very low death rate is partly due to demographics: The current world population is unusually young and healthy because the world recently went through huge surges in population growth. Due to demographic changes the UN forecasts that our death rate will start to climb again as fertility falls and the average age increases; but they are still predicting it will stabilize at about 11,200 per million per year, which would be a life expectancy of 90. And that estimate could well be too pessimistic, if medical technology continues advancing at anything like its current rate.
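Those life-expectancy figures follow from a simple identity: under a constant annual death rate, expected lifespan is just the reciprocal of the rate. A quick check:

```python
def life_expectancy(deaths_per_million_per_year):
    """Expected lifespan if the annual death rate stayed constant (reciprocal of the hazard)."""
    return 1_000_000 / deaths_per_million_per_year

print(f"{life_expectancy(7_579):.0f} years")   # 2019 world rate: ~132 years
print(f"{life_expectancy(11_200):.0f} years")  # UN projected stable rate: ~89 years
```

So 7,579 per million implies a bit over 130 years, and the UN’s projected 11,200 per million implies roughly the life expectancy of 90 cited above.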

We call it Christmas, but it’s really a syncretized amalgamation of holidays: Yule, Saturnalia, various Solstice celebrations. (Indeed, there’s no particular reason to think Jesus was even born in December.) Most Northern-hemisphere civilizations have some sort of Solstice holiday, and we’ve greedily co-opted traditions from most of them. The common theme really seems to be this:

Now it is dark, but band together and have hope, for the light shall return.

Diurnal beings in northerly latitudes instinctively fear the winter, when it becomes dark and cold and life becomes more hazardous—but we have learned to overcome this fear together, and we remind ourselves that light and warmth will return by ritual celebrations.

The last two years have made those celebrations particularly difficult, as we have needed to isolate ourselves in order to keep ourselves and others safe. Humans are fundamentally social at a level most people—even most scientists—do not seem to grasp: We need contact with other human beings as deeply and vitally as we need food or sleep.

The Internet has allowed us to get some level of social contact while isolated, which has been a tremendous boon; but I think many of us underestimated how much we would miss real face-to-face contact. I think much of the vague sense of malaise we’ve all been feeling, even when we aren’t sick and even when we’ve largely adapted our daily routines to working remotely, comes from this: We just aren’t getting the chance to see people in person nearly as often as we want—as often as we hadn’t even realized we needed.

So, if you do travel to visit family this holiday season, I understand your need to do so. But be careful. Get vaccinated—three times, if you can. And if you have any reason to think you might be infected, avoid contact with anyone at high risk.

Let’s hope next Christmas is better.