Why are borders so strict?

Aug 15 JDN 2459442

Most of us don’t cross borders all that often, and when we do it’s generally only for brief visits; so we don’t often experience just how absurdly difficult it is to move to another country. I have received a crash course in the subject for the past couple of months, in trying to arrange my move to Edinburgh.

Certain portions of the move would be inherently difficult: Moving a literal ton of stuff across an entire ocean is no mean feat, and really the impressive thing is that our civilization has reached the point where we can do it so quickly and reliably. (I do mean a literal ton: We estimated we have about 350 cubic feet and 2300 pounds of items, or 10 cubic meters and 1040 kilograms.)
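
For the curious, that conversion is simple arithmetic; here is a quick sketch in Python (the conversion factors are the standard ones, the starting figures are our own rough estimate):

```python
# Convert the shipment estimate from US customary units to metric.
CUBIC_FEET_PER_CUBIC_METER = 35.3147  # standard conversion factor
POUNDS_PER_KILOGRAM = 2.20462         # standard conversion factor

volume_m3 = 350 / CUBIC_FEET_PER_CUBIC_METER   # about 9.9 cubic meters
mass_kg = 2300 / POUNDS_PER_KILOGRAM           # about 1043 kilograms

print(round(volume_m3, 1), round(mass_kg))  # 9.9 1043
```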

But most of the real headaches have been the results of institutional policies.

First of all, there’s the fact that the university gave me so little notice. This is not entirely their fault; my understanding is that the position opened up during the spring, and they scrambled to fill it as fast as they could for the fall. Still, this has made everything that much more difficult.

More importantly, there is the matter of moving across borders.

In order to get visas to live in the UK, my fiancé and I had to complete an application documenting basically our whole lives (I had to track down three parking tickets and a speeding ticket from as far back as 2011), maintain bank balances of a sufficient amount for at least 30 days (evidently poor people need not apply), and pay exorbitant fees (over $5000 in all for the two of us, which, thankfully, the university is supposed to reimburse me for). We had to upload not only our passports, but also financial documents as well as housing records to prove our relationship (in lieu of a marriage license, since we had to delay the wedding to this year due to the pandemic). But this was not enough; we had to pay even more fees to get expedited processing, and then travel to a US government office in the LA area to get our fingerprints done, and then mail our passports to another office in New York for further processing. We started this process the first week of August; we still haven’t heard back on our final approval.

Then there is the matter of moving our cat, Tootsie. UK regulations for importing a cat require an ISO-compliant microchip and certain vaccinations; this is perfectly reasonable. But they also require that you bring the cat with you when you move (within at most 5 days of your arrival), or else the cat will be legally considered livestock and subject to a tariff of over $1000.

This would be inconvenient enough, but then there is the fact that current regulations do not allow cats to be transported into the UK in the cabin of an aircraft. If they are to be flown in, they must be brought in the cargo hold. Since we did not want to subject our cat to several hours alone in a cargo hold on a transatlantic flight, we will instead be flying to Amsterdam, because the Netherlands has more lenient regulations. But then of course we still need to get her to Edinburgh; our current plan involves taking a ferry from Amsterdam to Newcastle and then a train from there to Edinburgh. In all, the whole process will take at least a day longer (and cost a few hundred dollars more) than it would have without the utterly pointless rule forbidding cats from flying into the UK in the cabin.

All of this for, and I really cannot emphasize this enough, a routine move between two NATO-allied First World countries.

The alliance between the US and the UK is one of the most tightly-knit in the world, and dates back generations. Our trade networks are thoroughly interconnected, and we even share most of our media and culture back and forth. There’s honestly no particular reason we couldn’t simply be the same country. (Indeed, the one thing we did fight them over in the last 250 years was precisely that.)

There is probably less difference culturally and economically between New York and London than there is between New York and rural Texas or between London and rural Scotland. Yet a move within each country requires basically none of this extra hassle and paperwork—you basically just physically move yourself, register your car, maybe a few other minor things. You certainly don’t need to get a passport, apply for a visa, or pay exorbitant fees.

What purpose does all of this extra regulation serve? Are we safer, or richer, or healthier, because we make it so difficult to move across borders?

I can understand the need to have some sort of security at border crossings: We want to make sure people aren’t smuggling contraband or planning acts of terrorism. (There is, by the way, a series of questions on the UK visa application asking things like this: “Have you ever committed terrorism?” “Have you ever been implicated in genocide?” One wonders if anyone has ever answered “yes”.) It even makes sense to have some kind of registration process and background check for people who plan to move permanently. But what we actually do goes far, far beyond these sensible requirements; the goal seems to be to ensure that only the finest upstanding citizens may be allowed to move to a country, while anyone who is born on the opposite side of that line need not meet any standard whatsoever in order to remain.

In my view, the most sensible standard would be this: You should only exclude someone from entering your country for actions that you’d be willing to imprison them for if they were already there. Clearly, smuggling and terrorism qualify. Indeed, any felony would do. But would you lock someone in prison for not having enough money in their bank account? Or for failing to disclose a parking ticket from ten years ago? Or for filling out paperwork incorrectly? Yet visas are denied for this sort of reason all the time.

I think most economists would agree with me: The free movement of people across borders is one of the most vital principles of free trade—and the one that the world has least lived up to so far.

Yet it seems we are in the minority. Most people seem to think it’s perfectly sensible to have completely different rules for moving from Detroit to Toledo than from Detroit to Windsor.

The reason for this is apparent enough: Once again, the tribal paradigm looms large. Human beings divide themselves into groups, and form their identities around those groups. Those inside the group are good, while those outside are bad. Actions which benefit our own group are right, while actions which benefit other groups are wrong. The group you belong to is an inherent part of who you are, and can never be changed.

We have defined these groups in many different ways throughout human history, and our scale of group identification has gradually expanded over time. First, it was families and tribes. For centuries, it was feudal kingdoms. Now, it is nation-states. Perhaps, someday, it will enlarge to encompass all of humanity.

But until that day comes, people are going to make it as hard as possible to cross from one group to another.

Finance is the commodification of trust

Jul 18 JDN 2459414

What is it about finance?

Why is it that whenever we have an economic crisis, it seems to be triggered by the financial industry? Why has the dramatic rise in income and wealth inequality come in tandem with a rise in finance as a proportion of our economic output? Why are so many major banks implicated in crimes ranging from tax evasion to money laundering for terrorists?

In other words, why are the people who run our financial industry such utter scum? What is it about finance that it seems to attract the very worst people on Earth?

One obvious answer is that it is extremely lucrative: Incomes in the financial industry are higher than in almost any other industry. Perhaps people who are particularly unscrupulous are drawn to the industries that make the most money, and don’t care about much else. But other people like making money too, so this is far from a full explanation. Indeed, incomes for physicists are comparable to those of Wall Street brokers, yet physicists rarely seem to be implicated in mass corruption scandals.

I think there is a deeper reason: Finance is the commodification of trust.

Many industries sell products, physical artifacts like shirts or televisions. Others sell services like healthcare or auto repair, which involve the physical movement of objects through space. Information-based industries are a bit different—what a software developer or an economist sells isn’t really a physical object moving through space. But then what they are selling is something more like knowledge—information that can be used to do useful things.

Finance is different. When you make a loan or sell a stock, you aren’t selling a thing—and you aren’t really doing a thing either. You aren’t selling information, either. You’re selling trust. You are making money by making promises.

Most people are generally uncomfortable with the idea of selling promises. It isn’t that we’d never do it—but we’re reluctant to do it. We try to avoid it whenever we can. But if you want to be successful in finance, you can’t have that kind of reluctance. To succeed on Wall Street, you need to be constantly selling trust every hour of every day.

Don’t get me wrong: Certain kinds of finance are tremendously useful, and we’d be much worse off without them. I would never want to get rid of government bonds, auto loans or home mortgages. I’m actually pretty reluctant to even get rid of student loans, despite the large personal benefits I would get if all student loans were suddenly forgiven. (I would be okay with a system like Elizabeth Warren’s proposal, where people with college degrees pay a surtax that supports free tuition. The problem with most proposals for free college is that they make people who never went to college pay for those who did, and that seems unfair and regressive to me.)

But the Medieval suspicion against “usury”—the notion that there is something immoral about making money just from having money and making promises—isn’t entirely unfounded. There really is something deeply problematic about a system in which the best way to get rich is to sell commodified packages of trust, and the best way to make money is to already have it.

Moreover, the more complex finance gets, the more divorced it becomes from genuinely necessary transactions, and the more commodified it becomes. A mortgage deal that you make with a particular banker in your own community isn’t particularly commodified; a mortgage that is sliced and redistributed into mortgage-backed securities that are sold anonymously around the world is about as commodified as anything can be. It’s rather like the difference between buying a bag of apples from your town farmers’ market versus ordering a barrel of apple juice concentrate. (And of course the most commodified version of all is the financial one: buying apple juice concentrate futures.)

Commodified trust is trust that has lost its connection to real human needs. Those bankers who foreclosed on thousands of mortgages (many of them illegally) weren’t thinking about the people they were making homeless—why would they, when for them those people have always been nothing more than numbers on a spreadsheet? Your local banker might be willing to work with you to help you keep your home, because they see you as a person. (They might not for various reasons, but at least they might.) But there’s no reason for HSBC to do so, especially when they know that they are so rich and powerful they can get away with just about anything (have I mentioned money laundering for terrorists?).

I don’t think we can get rid of finance. We will always need some mechanism to let people who need money but don’t have it borrow that money from people who have it but don’t need it, and it makes sense to have interest charges to compensate lenders for the time and risk involved.

Yet there is much of finance we can clearly dispense with. Credit default swaps could simply be banned, and we’d gain much and lose little. Credit default swaps are basically unregulated insurance, and there’s no reason to allow that. If banks need insurance, they can buy the regulated kind like everyone else. Those regulations are there for a reason. We could ban collateralized debt obligations and similar tranche-based securities, again with far more benefit than harm. We probably still need stocks and commodity futures, and perhaps also stock options—but we could regulate their sale considerably more, particularly with regard to short-selling. Banking should be boring.

Some amount of commodification may be inevitable, but clearly much of what we currently have could be eliminated. In particular, the selling of loans should simply be banned. Maybe even your local banker won’t ever really get to know you or care about you—but there’s no reason we have to allow them to sell your loan to some bank in another country that you’ve never even heard of. When you make a deal with a bank, the deal should be between you and that bank—not potentially any bank in the world that decides to buy the contract at any point in the future. Maybe we’ll always be numbers on spreadsheets—but at least we should be able to choose whose spreadsheets.

If banks want more liquidity, they can borrow from other banks—and take on the risk themselves. A lending relationship is built on trust. You are free to trust whomever you choose; but forcing me to trust someone I’ve never met is something you have no right to do.

In fact, we might actually be able to get rid of banks—credit unions have a far cleaner record than banks, and provide nearly all of the financial services that are genuinely necessary. Indeed, if you’re considering getting an auto loan or a home mortgage, I highly recommend you try a credit union first.

For now, we can’t simply get rid of banks—we’re too dependent on them. But we could at least acknowledge that banks are too powerful, they get away with far too much, and their whole industry is founded upon practices that need to be kept on a very tight leash.

An unusual recession, a rapid recovery

Jul 11 JDN 2459407

It seems like an egregious understatement to say that the last couple of years have been unusual. The COVID-19 pandemic was historic, comparable in threat—though not in outcome—to the 1918 influenza pandemic.

At this point it looks like we may not be able to fully eradicate COVID. And there are still many places around the world where variants of the virus continue to spread. I personally am a bit worried about the recent surge in the UK; it might add some obstacles (as if I needed any more) to my move to Edinburgh. Yet even in hard-hit places like India and Brazil things are starting to get better. Overall, it seems like the worst is over.

This pandemic disrupted our society in so many ways, great and small, and we are still figuring out what the long-term consequences will be.

But as an economist, one of the things I found most unusual is that this recession fit Real Business Cycle theory.

Real Business Cycle theory (henceforth RBC) posits that recessions are caused by negative technology shocks which result in a sudden drop in labor supply, reducing employment and output. This is generally combined with sophisticated mathematical modeling (DSGE or GTFO), and it typically leads to the conclusion that the recession is optimal and we should do nothing to correct it (which was after all the original motivation of the entire theory—they didn’t like the interventionist policy conclusions of Keynesian models). Alternatively it could suggest that, if we can, we should try to intervene to produce a positive technology shock (but nobody’s really sure how to do that).

For a typical recession, this is utter nonsense. It is obvious to anyone who cares to look that major recessions like the Great Depression and the Great Recession were caused by a lack of labor demand, not supply. There is no apparent technology shock to cause either recession. Instead, they seem to be precipitated by a financial crisis, which then causes a crisis of liquidity which leads to a downward spiral of layoffs reducing spending and causing more layoffs. Millions of people lose their jobs and become desperate to find new ones, with hundreds of people applying to each opening. RBC predicts a shortage of labor where there is instead a glut. RBC predicts that wages should go up in recessions—but they almost always go down.

But for the COVID-19 recession, RBC actually had some truth to it. We had something very much like a negative technology shock—namely the pandemic. COVID-19 greatly increased the cost of working and the cost of shopping. This led to a reduction in labor demand as usual, but also a reduction in labor supply for once. And while we did go through a phase in which hundreds of people applied to each new opening, we then followed it up with a labor shortage and rising wages. A fall in labor supply should create inflation, and we now have the highest inflation we’ve had in decades—but there’s good reason to think it’s just a transitory spike that will soon settle back to normal.
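
The contrast can be made concrete with a toy linear model of the labor market—purely illustrative, with invented parameters: both kinds of shock reduce employment, but a demand shock pushes wages down while a supply shock pushes them up.

```python
# Toy linear labor market: demand D(w) = a - b*w, supply S(w) = c + d*w.
# All parameter values are invented purely for illustration.

def equilibrium(a, b, c, d):
    """Wage and employment where labor demand equals labor supply."""
    w = (a - c) / (b + d)
    return w, a - b * w

# Baseline market.
print(equilibrium(a=100, b=2, c=20, d=2))  # (20.0, 60.0)

# Typical recession: demand falls (a drops) -> wages AND employment fall.
print(equilibrium(a=80, b=2, c=20, d=2))   # (15.0, 50.0)

# Pandemic-style supply shock (c drops) -> employment falls, wages RISE.
print(equilibrium(a=100, b=2, c=0, d=2))   # (25.0, 50.0)
```

The numbers mean nothing in themselves; the point is only the direction of the wage change, which is exactly what distinguishes the COVID-19 recession from a typical one.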

The recovery from this recession was also much more rapid: Once vaccines started rolling out, the economy began to recover almost immediately. We recovered most of the employment losses in just the first six months, and we’re on track to recover completely in half the time it took after the Great Recession.

This makes it the exception that proves the rule: Now that you’ve seen a recession that actually resembles RBC, you can see just how radically different it was from a typical recession.

Moreover, even in this weird recession the usual policy conclusions from RBC are off-base. It would have been disastrous to withhold the economic relief payments—which I’m happy to say even most Republicans realized. The one thing that RBC got right as far as policy is that a positive technology shock was our salvation—vaccines.

Indeed, while the cause of this recession was very strange and not what Keynesian models were designed to handle, our government largely followed Keynesian policy advice—and it worked. We ran massive government deficits—over $3 trillion in 2020—and the result was rapid recovery in consumer spending and then employment. I honestly wouldn’t have thought our government had the political will to run a deficit like that, even when the economic models told them they should; but I’m very glad to be wrong. We ran the huge deficit just as the models said we should—and it worked. I wonder how the 2010s might have gone differently had we done the same after 2008.

Perhaps we’ve learned from some of our mistakes.

A prouder year for America, and for me

Jul 4 JDN 2459400

Living under Trump from 2017 to 2020, it was difficult to be patriotic. How can we be proud of a country that would put a man like that in charge? And then there was the COVID pandemic, which initially the US handled terribly—largely because of the aforementioned Trump.

But then Biden took office, and almost immediately things started to improve. This is a testament to how important policy can be—and how different the Democrats and Republicans have become.

The US now has one of the best rates of COVID vaccination in the world (though lately progress seems to be stalling and other countries are catching up). Daily cases in the US are now the lowest they have been since March 2020. Even real GDP is almost back up to its pre-pandemic level (even per-capita), and the surge of inflation we got as things began to re-open already seems to be subsiding.

I can actually celebrate the 4th of July with some enthusiasm this year, whereas the last four years involved continually reminding myself that I was celebrating the liberating values of America’s founding, not the current terrible state of its government. Of course our government policy still retains many significant flaws—but it isn’t the utter embarrassment it was just a year ago.

This may be my last 4th of July to celebrate for the next few years, as I will soon be moving to Scotland (more on that in a moment).

2020 was a very bad year, but even halfway through it’s clear that 2021 is going to be a lot better.

This was true for just about everyone. I was no exception.

The direct effects of the pandemic on me were relatively minor.

Transitioning to remote work was even easier than I expected it to be; in fact I was even able to run experiments online using the same research subject pool as we’d previously used for the lab. I not only didn’t suffer any financial hardship from the lockdowns, I ended up better off because of the relief payments (and the freezing of student loan payments, as well as the ludicrous stock boom, which I managed to buy into near the trough). Ordering groceries online for delivery is so convenient I’m tempted to continue it after the pandemic is over (though it does cost more).

I was careful and/or fortunate enough not to get sick (now that I am fully vaccinated, my future risk is negligible), as were most of my friends and family. I am not close to anyone who died from the virus, though I do have some second-order links to some who died (grandparents of a couple of my friends, the thesis advisor of one of my co-authors).

It was other things that really made 2020 a miserable year for me. Some of them were indirect effects of the pandemic, and some may not even have been related.

For me, 2020 was a year full of disappointments. It was the year I nearly finished my dissertation and went on the job market, applying for over one hundred jobs—and got zero offers. It was the year I was scheduled to present at an international conference—which was then canceled. It was the year my papers were rejected by multiple journals. It was the year I was scheduled to be married—and then we were forced to postpone the wedding.

But now, in 2021, several of these situations are already improving. We will be married on October 9, and most (though assuredly not all) of the preparations for the wedding are now done. My dissertation is now done except for some formalities. After over a year of searching and applying to over two hundred postings in all, I finally found a job, a postdoc position at the University of Edinburgh. (A postdoc isn’t ideal, but on the other hand, Edinburgh is more prestigious than I thought I’d be able to get.) I still haven’t managed to publish any papers, but I no longer feel as desperate a need to do so now that I’m not scrambling to find a job. Now of course we have to plan for a move overseas, though fortunately the university will reimburse our costs for the visa and most of the moving expenses.

Of course, 2021 isn’t over—neither is the COVID pandemic. But already it looks like it’s going to be a lot better than 2020.

Responsible business owners support regulations

Jun 27 JDN 2459393

In last week’s post I explained why business owners so consistently overestimate the harms of regulations: In short, they ignore the difference between imposing a rule on a single competitor and imposing that same rule on all competitors equally. The former would be disastrous; the latter is often inconsequential.

In this follow-up post I’m going to explain why ethical, responsible business owners should want many types of regulation—and that in fact if they were already trying to behave ethically and responsibly, regulations can make them more profitable in doing so.

Let’s use an extreme example just to make things clear. Suppose you are running a factory building widgets, you are competing with several other factories, and you find out that some of the other factories are using slave labor in their production.

What would be the best thing for you to do? In terms of maximizing profit, you’ve really got two possible approaches: You could start using slaves yourself, or you could find a way to stop the other factories from using slaves. If you are even remotely a decent human being, you will choose the latter. How can you do that? By supporting regulations.

By lobbying your government to ban slavery—or, if it’s already banned, to enforce those laws more effectively—you can free the workers enslaved by the other factories while also increasing your own profits. This is a very big win-win. (I guess it’s not a Pareto improvement, because the factory owners who were using slaves are probably worse off—but it’s hard to feel bad for them.)

Slavery is an extreme example (but sadly not an unrealistic one), but a similar principle applies to many other cases. If you are a business owner who wants to be environmentally responsible, you should support regulations on pollution—because you’re already trying to comply with them, so imposing them on your competitors who aren’t will give you an advantage. If you are a business owner who wants to pay high wages, you should support increasing the minimum wage. Whatever socially responsible activities you already do, you have an economic incentive to make them mandatory for other companies.

Voluntary social responsibility sounds nice in theory, but in a highly competitive market it’s actually very difficult to sustain. I don’t doubt that many owners of sweatshops would like to pay their workers better, but they know they’d have to raise their prices a bit in order to afford it, and then they would get outcompeted and might even have to shut down. So any individual sweatshop owner really doesn’t have much choice: Either you meet the prevailing market price, or you go out of business. (The multinationals who buy from them, however, have plenty of market power and massive profits. They absolutely could afford to change their supply chain practices to support factories that pay their workers better.) Thus the best thing for them to do would be to support a higher minimum wage that would apply to their competitors as well.

Consumer pressure can provide some space for voluntary social responsibility, if customers are willing to pay more for products made by socially responsible companies. But people often don’t seem willing to pay all that much, and even when they are, it can be very difficult for consumers to really know which companies are being responsible (this is particularly true for environmental sustainability: hence the widespread practice of greenwashing). In order for consumer pressure to work, you need a critical mass of a large number of consumers who are all sufficiently committed and well-informed. Regulation can often accomplish the same goals much more reliably.

In fact, there’s some risk that businesses could lobby for too many regulations, because they are more interested in undermining their competition than they are about being socially responsible. If you have lots of idiosyncratic business practices, it could be in your best interest to make those practices mandatory even if they have no particular benefits—simply because you were already doing them, and so the cost of transitioning to them will fall entirely on your competitors.


Regarding publicly-traded corporations in particular, there’s another reason why socially responsible CEOs would want regulations: Shareholders. If you’re trying to be socially responsible but it’s cutting into your profits, your shareholders may retaliate by devaluing your stock, firing you, or even suing you—as Dodge sued Ford in 1919 for the “crime” of making wages too high and prices too low. But if there are regulations that require you to be socially responsible, your shareholders can’t really complain; you’re simply complying with the law. In this case you wouldn’t want to be too vocal about supporting the regulations (since your shareholders might object to that); but you would, in fact, support them.

Market competition is a very cutthroat game, and both the prizes for winning and the penalties for losing are substantial. Regulations are what decides the rules of that game. If there’s a particular way that you want to play—either because it has benefits for the rest of society, or simply because it’s your preference—it is advantageous for you to get that written into the rules that everyone needs to follow.

Why business owners are always so wrong about regulations

Jun 20 JDN 2459386

Minimum wage. Environmental regulations. Worker safety. Even bans on child slavery. No matter what the regulation is, it seems that businesses will always oppose it, always warn that these new regulations will destroy their business and leave thousands out of work—and always be utterly, completely wrong.

In fact, the overall impact of US federal government regulations on employment is basically negligible, and the impact on GDP is very clearly positive. This really isn’t surprising if you think about it: Despite what some may have you believe, our government doesn’t go around randomly regulating things for no reason. The regulations we impose are specifically chosen because their benefits outweigh their costs, and the rigorous, nonpartisan analysis of our civil service is one of the best-kept secrets of American success and the envy of the world.

But businesses are consistently insistent that new regulations (of whatever kind, however minor or reasonable they may be) will inevitably destroy their industry, when such catastrophic outcomes have basically never occurred. That cries out for an explanation. How can such otherwise competent, experienced, knowledgeable people be always so utterly wrong about something so basic? These people are experts in what they do. Shouldn’t business owners know what would happen if we required them to raise wages a little, or require basic safety standards, or reduce pollution caps, or not allow their suppliers to enslave children?

Well, what do you mean by “them”? Herein lies the problem. There is a fundamental difference between what would happen if we required any specific business to comply with a new regulation (but left their competitors exempt), versus what happens if we require an entire industry to comply with that same regulation.

Business owners are accustomed to thinking in an open system, what economists call partial equilibrium: They think about how things will affect them specifically, and not how they will affect broader industries or the economy as a whole. If wages go up, they’ll lay off workers. If the price of their input goes down, they’ll buy more inputs and produce more outputs. They aren’t thinking about how these effects interact with one another at a systemic level, because they don’t have to.

This works because even a huge multinational corporation is only a small portion of the US economy, and doesn’t have much control over the system as a whole. So in general when a business tries to maximize its profit in partial equilibrium, it tends to get the right answer (at least as far as maximizing GDP goes).

But large-scale regulation is one time where we absolutely cannot do this. If we try to analyze federal regulations purely in partial equilibrium terms, we will be consistently and systematically wrong—as indeed business owners are.

If we went to a specific corporation and told them, “You must pay your workers $2 more per hour,” what would happen? They would be forced to lay off workers. No doubt about it. If we specifically targeted one particular corporation and required them to raise their wages, they would be unable to compete with other businesses who had not been forced to comply. In fact, they really might go out of business completely. This is the panic that business owners are expressing when they warn that even really basic regulations like “You can’t dump toxic waste in our rivers” or “You must not force children to pick cocoa beans for you” will cause total economic collapse.

But when you regulate an entire industry in this way, no such dire outcomes happen. The competitors are also forced to comply, and so no businesses are given special advantages relative to one another. Maybe there’s some small reduction in employment or output as a result, but at least if the regulation is reasonably well-planned—as virtually all US federal regulations are, by extremely competent people—those effects will be much smaller than the benefits of safer workers, or cleaner water, or whatever was the reason for the regulation in the first place.

Think of it this way. Businesses are in a constant state of fierce, tight competition. So let’s consider a similarly tight competition such as the Olympics. The gold medal for the 100-meter sprint is typically won by someone who runs the whole distance in less than 10 seconds.

Suppose we had told one of the competitors: “You must wait an extra 3 seconds before starting.” If we did this to one specific runner, that runner would lose. With certainty. There has never been an Olympic 100-meter sprint where the first-place runner was more than 3 seconds faster than the second-place runner. So it is basically impossible for that runner to ever win the gold, simply because of that 3-second handicap. And if we imposed that constraint on some runners but not others, we would ensure that only runners without the handicap had any hope of winning the race.

But now suppose we had simply started the competition 3 seconds late. We had a minor technical issue with the starting gun, we fixed it in 3 seconds, and then everything went as normal. Basically no one would notice. The winner of the race would be the same as before, all the running times would be effectively the same. Things like this have almost certainly happened, perhaps dozens of times, and no one noticed or cared.

It’s the same 3-second delay, but the outcome is completely different.

The difference is simple but vital: Are you imposing this constraint on some competitors, or on all competitors? A constraint imposed on some competitors will be utterly catastrophic for those competitors. A constraint imposed on all competitors may be basically unnoticeable to all involved.

Now, with regulations it does get a bit more complicated than that: We typically can’t impose regulations on literally everyone, because there is no global federal government with the authority to do that. Even international human rights law, sadly, is not that well enforced. (International intellectual property law very nearly is—and that contrast itself says something truly appalling about our entire civilization.) But when regulation is imposed by a large entity like the United States (or even the State of California), it generally affects enough of the competitors—and competitors who already had major advantages to begin with, like the advanced infrastructure, impregnable national security, and educated population of the United States—that the effects on competition are, if not negligible, at least small enough to be outweighed by the benefits of the regulation.

So, whenever we propose a new regulation and business owners immediately panic about its catastrophic effects, we can safely ignore them. They do this every time, and they are always wrong.

But take heed: Economists are trained to think in terms of closed systems and general equilibrium. So if economists are worried about the outcome of a regulation, then there is legitimate reason to be concerned. It’s not that we know better how to run their businesses—we certainly don’t. Rather, we much better understand the difference between imposing a 3-second delay on a single runner versus simply starting the whole race 3 seconds later.

Could the Star Trek economy really work?

Jun 13 JDN 2459379

“The economics of the future are somewhat different”, Jean-Luc Picard explains to Lily Sloane in Star Trek: First Contact.

Captain Picard’s explanation is not very thorough, and all we know about the economic system of the Federation comes from similar short glimpses across the various Star Trek films and TV series. The best glimpses of what Earth’s economy is like come from the Picard series in particular.

But I think we can safely conclude that all of the following are true:

1. Energy is extraordinarily abundant, with a single individual having access to an energy scale that would rival the energy production of entire nations at present. By E=mc², simply being able to teleport a human being or materialize a hamburger from raw energy, as seems to be routine in Starfleet, would require something on the order of 10^17 joules, or about 28 billion kilowatt-hours. The total energy supply of the world economy today is about 6*10^20 joules, or roughly 170 trillion kilowatt-hours.

2. There is broad-based prosperity, but not absolute equality. At the very least different people live differently, though it is unclear whether anyone actually has a better standard of living than anyone else. The Picard family still seems to own their family vineyard, which has been passed down for generations; since the population of Earth is given as about 9 billion (a plausible but perhaps slightly low figure for our long-run stable population equilibrium), the vineyard’s acreage is large enough that clearly not everyone on Earth could own that much land.

3. Most resources that we currently think of as scarce are not scarce any longer. Replicator technology allows for the instantaneous production of food, clothing, raw materials, even sophisticated electronics. There is no longer a “manufacturing sector” as such; there are just replicators and people who use or program them. Most likely, even new replicators are made by replicating parts in other replicators and then assembling them. There are a few resources which remain scarce, such as dilithium (somehow involved in generating these massive quantities of energy) and latinum (a bizarre substance that is prized by many other cultures yet for unexplained reasons cannot be viably produced in replicators). Essentially everything else that is scarce is inherently so, such as front-row seats at concerts, original paintings, officer commissions in Starfleet, or land in San Francisco.

4. Interplanetary and even interstellar trade is routine. Starships with warp capability are available to both civilian and government institutions, and imports and exports can be made to planets dozens or even hundreds of light-years away as quickly as we can currently traverse the oceans with a container ship.

5. Money as we know it does not exist. People are not paid wages or salaries for their work. There is still some ownership of personal property, and particular families (including the Picards) seem to own land; but there does not appear to be any private ownership of capital. For that matter there doesn’t even appear to be much in the way of capital; we never see any factories. There is obviously housing, there is infrastructure such as roads, public transit, and presumably power plants (very, very powerful power plants, see 1!), but that may be all. Nearly all manufacturing seems to be done by replicators, and what can’t be done by replicators (e.g. building new starships) seems to be all orchestrated by state-owned enterprises such as Starfleet.
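As a rough sanity check on the energy figure in point 1, here is a back-of-the-envelope calculation. The ~1 kg of converted mass is my own illustrative assumption for "on the order of 10^17 joules"; a whole human body would be closer to 70 kg and ~6*10^18 J.

```python
# Order-of-magnitude check of the E = mc^2 figure in point 1.
c = 3.0e8                        # speed of light, m/s
mass_kg = 1.0                    # assumed mass converted to energy, ~1 kg
energy_j = mass_kg * c**2        # 9.0e16 J, i.e. on the order of 10^17 J
energy_kwh = energy_j / 3.6e6    # 3.6e6 joules per kilowatt-hour
print(f"{energy_j:.1e} J = {energy_kwh:.1e} kWh")  # 9.0e+16 J = 2.5e+10 kWh
```

So converting even a kilogram or so of matter per transport comes out in the tens of billions of kilowatt-hours, consistent with the scale claimed above.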

Could such an economy actually work? Let’s stipulate that we really do manage to achieve such an extraordinary energy scale, millions of times more than what we can currently produce. Even very cheap, widespread nuclear energy would not be enough to make this plausible; we would need at least abundant antimatter, and quite likely something even more exotic than this, like zero point energy. Along with this come some horrifying risks—imagine an accident at a zero-point power plant that tears a hole in the fabric of space next to a major city, or a fanatical terrorist with a handheld 20-megaton antimatter bomb. But let’s assume we’ve found ways to manage those risks as well.

Furthermore, let’s stipulate that it’s possible to build replicators and warp drives and teleporters and all the similarly advanced technology that the Federation has, much of which is so radically advanced we can’t even be sure that such a thing is possible.

What I really want to ask is whether it’s possible to sustain a functional economy at this scale without money. Gene Roddenberry clearly seemed to think so. I am less convinced.

First of all, I want to acknowledge that there have been human societies which did not use money, or even any clear notion of a barter system. In fact, most human cultures for most of our history as a species allocated resources based on collective tribal ownership and personal favors. Some of the best parts of Debt: The First 5000 Years are about these different ways of allocating resources, which actually came much more naturally to us than money.

But there seem to have been rather harsh constraints on what sort of standard of living could be maintained in such societies. There was essentially zero technological advancement for thousands of years in most hunter-gatherer cultures, and even the wealthiest people in most of those societies overall had worse health, shorter lifespans, and far, far less access to goods and services than people we would consider in poverty today.

Then again, perhaps money is only needed to catalyze technological advancement; perhaps once you’ve already got all the technology you need, you can take money away and return to a better way of life without greed or inequality. That seems to be what Star Trek is claiming: That once we can make a sandwich or a jacket or a phone or even a car at the push of a button, we won’t need to worry about paying people because everyone can just have whatever they need.

Yet whatever they need is quite different from whatever they want, and therein lies the problem. Yes, I believe that with even moderate technological advancement—the sort of thing I expect to see in the next 50 years, not the next 300—we will have sufficient productivity that we could provide for the basic needs of every human being on Earth. A roof over your head, food on your table, clothes to wear, a doctor and a dentist to see twice a year, emergency services, running water, electricity, even Internet access and public transit—these are things we could feasibly provide to literally everyone with only about two or three times our current level of GDP, which means only about 2% annual economic growth for the next 50 years. Indeed, we could already provide them for every person in First World countries, and it is quite frankly appalling that we fail to do so.

However, most of us in the First World already live a good deal better than that. We don’t have the most basic housing possible, we have nice houses we want to live in. We don’t take buses everywhere, we own our own cars. We don’t eat the cheapest food that would provide adequate nutrition, we eat a wide variety of foods; we order pizza and Chinese takeout, and even eat at fancy restaurants on occasion. It’s less clear that we could provide this standard of living to everyone on Earth—but if economic growth continues long enough, maybe we can.

Worse, most of us would like to live even better than we do. My car is several years old right now, and it runs on gasoline; I’d very much like to upgrade to a brand-new electric car. My apartment is nice enough, but it’s quite small; I’d like to move to a larger place that would give me more space not only for daily living, but also for storage and for entertaining guests. I work comfortable hours for decent pay at a white-collar job that can be done entirely remotely on mostly my own schedule, but I’d prefer to take some time off and live independently while I focus more on my own writing. I sometimes enjoy cooking, but often it can be a chore, and sometimes I wish I could just go eat out at a nice restaurant for dinner every night. I don’t make all these changes because I can’t afford to—that is, because I don’t have the money.

Perhaps most of us would feel no need to have a billion dollars. I don’t really know what $100 billion actually gets you, as far as financial security, independence, or even consumption, that $50 million wouldn’t already. You can have total financial freedom and security with a middle-class American lifestyle with net wealth of about $2 million. If you want to also live in a mansion, drink Dom Perignon with every meal and drive a Lamborghini (which, quite frankly, I have no particular desire to do), you’ll need several million more—but even then you clearly don’t need $1 billion, let alone $100 billion. So there is indeed something pathological about wanting a billion dollars for yourself, and perhaps in the Federation they have mental health treatments for “wealth addiction” that prevent people from experiencing such pathological levels of greed.

Yet in fact, with the world as it stands, I would want a billion dollars. Not to own it. Not to let it sit and grow in some brokerage account. Not to simply be rich and be on the Forbes list. I couldn’t care less about those things. But with a billion dollars, I could donate enormous amounts to charities, saving thousands or even millions of lives. I could found my own institutions—research institutes, charitable foundations—and make my mark on the world. With $100 billion, I could make a serious stab at colonizing Mars—as Elon Musk seems to be doing, but most other billionaires have no particular interest in.

And it begins to strain credulity to imagine a world of such spectacular abundance that everyone could have enough to do that.

This is why I always struggle to answer when people ask me things like “If money were no object, how would you live your life?”; if money were no object, I’d end world hunger, cure cancer, and colonize the Solar System. Money is always an object. What I think you meant to ask was something much less ambitious, like “What would you do if you had a million dollars?” But I might actually have a million dollars someday—most likely by saving and investing the proceeds of a six-figure job as an economist over many years. (Save $2,000 per month for 20 years, growing it at 7% per year, and you’ll be over $1 million; you can verify this with any compound-interest calculator.) I doubt I’ll ever have $10 million, and I’m pretty sure I’ll never have $1 billion.
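The parenthetical savings arithmetic checks out; here is a sketch assuming monthly compounding with end-of-month deposits (the post doesn't specify the exact compounding convention):

```python
# Future value of saving $2,000/month for 20 years at 7%/year,
# compounded monthly with deposits at the end of each month.
monthly_rate = 0.07 / 12
n_months = 20 * 12
deposit = 2_000
balance = 0.0
for _ in range(n_months):
    balance = balance * (1 + monthly_rate) + deposit
print(f"${balance:,.0f}")  # roughly $1.04 million
```

Even modest changes to the assumed return move this figure substantially; at 5%/year the same deposits end up closer to $820,000.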

To be fair, it seems that many of the grand ambitions I would want to achieve with billions of dollars already are achieved by 23rd century; world hunger has definitely been ended, cancer seems to have been largely cured, and we have absolutely colonized the Solar System (and well beyond). But that doesn’t mean that new grand ambitions wouldn’t arise, and indeed I think they would. What if I wanted to command my own fleet of starships? What if I wanted a whole habitable planet to conduct experiments on, perhaps creating my own artificial ecosystem? The human imagination is capable of quite grand ambitions, and it’s unlikely that we could ever satisfy all of them for everyone.

Some things are just inherently scarce. I already mentioned some earlier: Original paintings, front-row seats, officer commissions, and above all, land. There’s only so much land that people want to live on, especially because people generally want to live near other people (Internet access could conceivably reduce the pressure for this, but, uh, so far it really hasn’t, so why would we think it will in 300 years?). Even if it’s true that people can have essentially arbitrary amounts of food, clothing, or electronics, the fact remains that there’s only so much real estate in San Francisco.

It would certainly help to build taller buildings, and presumably they would, though most of the depictions don’t really seem to show that; where are the 10-kilometer-tall skyscrapers made of some exotic alloy or held up by structural integrity fields? (Are the forces of NIMBY still too powerful?) But can everyone really have a 1000-square-meter apartment in the center of downtown? Maybe if you build tall enough? But you do still need to decide who gets the penthouse.

It’s possible that all inherently-scarce resources could be allocated by some mechanism other than money. Some even should be: Starfleet officer commissions are presumably allocated by merit. (Indeed, Starfleet seems implausibly good at selecting supremely competent officers.) Others could be: Concert tickets could be offered by lottery, and maybe people wouldn’t care so much about being in the real front row when you can always simulate the front row at home in your holodeck. Original paintings could all be placed in museums available for public access—and the tickets, too, could be allocated by lottery or simply first-come, first-served. (Picard mentions the Smithsonian, so public-access museums clearly still exist.)

Then there’s the question of how you get everyone to work, if you’re not paying them. Some jobs people will do for fun, or satisfaction, or duty, or prestige; it’s plausible that people would join Starfleet for free (I’m pretty sure I would). But can we really expect all jobs to work that way? Has automation reached such an advanced level that there are no menial jobs? Sanitation? Plumbing? Gardening? Paramedics? Police? People still seem to pick grapes by hand in the Picard vineyards; do they all do it for the satisfaction of a job well done? What happens if one day everyone decides they don’t feel like picking grapes today?

I certainly agree that most menial jobs are underpaid—most people do them because they can’t get better jobs. But surely we don’t want to preserve that? Surely we don’t want some sort of caste system that allocates people to work as plumbers or garbage collectors based on their birth? I guess we could use merit-based aptitude testing; it’s clear that the vast majority of people really aren’t cut out for Starfleet (indeed, perhaps I’m not!), and maybe some people really would be happiest working as janitors. But it’s really not at all clear what such a labor allocation system would be like. I guess if automation has reached such an advanced level that all the really necessary work is done by machines and human beings can just choose to work as they please, maybe that could work; it definitely seems like a very difficult system to manage.

So I guess it’s not completely out of the question that we could find some appropriate mechanism to allocate all goods and services without ever using money. But then my question becomes: Why? What do you have against money?

I understand hating inequality—indeed I share that feeling. I, too, am outraged by the existence of hectobillionaires in a world where people still die of malaria and malnutrition. But having a money system, or even a broadly free-market capitalist economy, doesn’t inherently have to mean allowing this absurd and appalling level of inequality. We could simply impose high, progressive taxes, redistribute wealth, and provide a generous basic income. If per-capita GDP is something like 100 times its current level (as it appears to be in Star Trek), then the basic income could be $1 million per year and still be entirely affordable.

That is, rather than trying to figure out how to design fair and efficient lotteries for tickets to concerts and museums, we could still charge for tickets, and just make sure that everyone has a million dollars a year in basic income. Instead of trying to find a way to convince people to clean bathrooms for free, we could just pay them to do it.

The taxes could even be so high at the upper brackets that they effectively impose a maximum income; say we have a 99% marginal rate above $20 million per year. Then the income inequality would collapse to quite a low level: No one below $1 million, essentially no one above $20 million. We could tax wealth as well, ensuring that even if people save or get lucky on the stock market (if we even still have a stock market—maybe that is unnecessary after all), they still can’t become hectobillionaires. But by still letting people use money and allowing some inequality, we’d still get all the efficiency gains of having a market economy (minus whatever deadweight loss such a tax system imposed—which I in fact suspect would not be nearly as large as most economists fear).
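As a toy illustration of how such a top bracket would effectively cap incomes, here is a sketch with just the one 99% rate above $20 million (real tax schedules have many brackets; this simplification is mine):

```python
# Toy version of the bracket sketched above: a 99% marginal rate
# applies only to income above a $20 million threshold.
def top_bracket_tax(income: float, threshold: float = 20e6, rate: float = 0.99) -> float:
    """Tax owed on the portion of income above the threshold."""
    return max(0.0, income - threshold) * rate

def after_tax(income: float) -> float:
    return income - top_bracket_tax(income)

print(after_tax(20e6))  # 20000000.0 -- at the threshold, untouched by this bracket
print(after_tax(1e9))   # 29800000.0 -- $20M plus 1% of the remaining $980M
```

Even a $1 billion gross income collapses to barely above $20 million after tax, which is the sense in which a 99% top rate acts as a de facto maximum income.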

In all, I guess I am prepared to say that, given the assumption of such great feats of technological advancement, it is probably possible to sustain such a prosperous economy without the use of money. But why bother, when it’s so much easier to just have progressive taxes and a basic income?

Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know by the time this post goes live the price will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by its design: Through the blockchain, a clever application of cryptographic technology, generating new Bitcoins (called “mining”) was made difficult, and exponentially more difficult over time. But the decision of what algorithm to use was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot, making it just hard enough that Bitcoin seems scarce while still feeling feasible to get.
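To make the point about tunable difficulty concrete, here is a toy proof-of-work sketch. This is a deliberate simplification, not Bitcoin's actual protocol (which double-hashes block headers with SHA-256 and retargets difficulty automatically every 2016 blocks), but it shows how scarcity is purely a design parameter:

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash of data+nonce starts with
    `difficulty` zero hex digits. Each extra zero multiplies the
    expected work by 16."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Raising the difficulty parameter by one digit makes "coins"
# 16 times harder to produce -- an entirely arbitrary dial.
print(mine("block-1", 1))  # found almost instantly
print(mine("block-1", 4))  # ~16^3 = 4096 times more expected work
```

The choice of `difficulty` here is exactly the kind of arbitrary design decision described above: nothing about the hash function itself makes one setting more "natural" than another.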

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people make would be to compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another. Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life. And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.
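The proportionality argument above can be made explicit. The $24,000 personal-assets figure below is my own assumption, chosen only to show how a fine of roughly $500 arises when the corporate fine is scaled down by the same ratio:

```python
# Scale the FTC fine by the ratio of fine to corporate assets
# (figures from the post), then apply that ratio to a hypothetical
# individual with $24,000 in assets.
fine = 575e6                # $575 million fine
corporate_assets = 28e9     # $28 billion in corporate assets
ratio = fine / corporate_assets
personal_assets = 24_000    # assumed individual net assets

print(f"{ratio:.1%}")                      # 2.1% -- fine as share of assets
print(f"${ratio * personal_assets:,.0f}")  # $493 -- roughly the $500 cited
```

A fine of about 2% of assets is the kind of cost a large corporation can absorb as routine, which is the heart of the "slap on the wrist" complaint.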

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia, as Roman gladiators were often paid to hawk products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, since those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, that newspaper being sold by barking boys on the street likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are specific only to you, based on any information that these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check if it’s working is to see if websites give you ads for things you know you’d never buy.

In fact, I consider it a very welcome recent development that video streaming is finally a way to watch TV shows by actually paying for them instead of having someone else pay for the right to shove ads in my face. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that fact. Spending 15 minutes of each hour of TV on commercials may not seem so bad—indeed, many people would rather do that than pay money to avoid it. But think about it this way: If those ads weren’t worth at least that much to the corporations buying them, they wouldn’t buy them. And if a corporation expects its ads to extract $X from you, that means it expects to make you spend $X that you otherwise wouldn’t have—that is, to buy something you didn’t need. Perhaps it’s better after all to spend that $X on entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s about $2.40 per month, or $0.08 per day.
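As a back-of-the-envelope check, here is that arithmetic spelled out, using only the round figures quoted above:

```python
# Back-of-the-envelope check of the ad-free price for Facebook's services,
# using the round figures quoted above: 2.8 billion users, $80 billion/year.
users = 2.8e9
annual_revenue = 80e9

per_user_year = annual_revenue / users
per_user_month = per_user_year / 12
per_user_day = per_user_year / 365

print(f"${per_user_year:.2f} per year")    # $28.57 per year
print(f"${per_user_month:.2f} per month")  # $2.38 per month
print(f"${per_user_day:.2f} per day")      # $0.08 per day
```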

The New York Times is already losing its ad-supported business model; less than $400 million of its $1.8 billion revenue last year was from ads, the rest being primarily from subscriptions. But smaller media outlets have a much harder time gaining subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of 1%, while a typical cost-per-click rate is about $0.60, so ads on the average webpage make its owner a whopping $0.006 per view. That’s not even a single cent. So if this new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. They could quintuple their revenue!
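The per-article arithmetic, using the typical rates quoted above:

```python
# Expected ad revenue from one page view, given the typical figures above:
# a 1% clickthrough rate and a $0.60 cost per click.
clickthrough_rate = 0.01
cost_per_click = 0.60

revenue_per_view = clickthrough_rate * cost_per_click
print(f"${revenue_per_view:.3f} per view")  # $0.006 per view: less than a cent

# Even a one-cent micropayment already beats what the ads bring in:
micropayment = 0.01
print(f"{micropayment / revenue_per_view:.1f}x the ad revenue")  # 1.7x the ad revenue
```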

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
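Here is a minimal sketch of how that bundling might work. The class names, the $0.01 token value, the 95-cents-on-the-dollar redemption rate, and the $20 minimum are all hypothetical illustrations, not a real system:

```python
# Hypothetical sketch of the token-bundling scheme described above:
# one ordinary card transaction buys a batch of one-cent tokens, sites
# collect tokens per article view, and redemption happens in bulk so
# card-processing fees are amortized over thousands of micropayments.

TOKEN_VALUE = 0.01      # each token is worth one cent (assumed)
REDEMPTION_RATE = 0.95  # issuer buys tokens back at 95 cents on the dollar (assumed)
MIN_REDEMPTION = 2000   # a site must hold $20 in tokens before cashing out (assumed)

class TokenIssuer:
    def buy_tokens(self, dollars: float) -> int:
        """A single card transaction funds a whole batch of tokens."""
        return round(dollars / TOKEN_VALUE)

class Site:
    def __init__(self) -> None:
        self.collected = 0  # tokens received from readers

    def redeem(self) -> float:
        """Bundle all collected tokens into one payout, if over the minimum."""
        if self.collected < MIN_REDEMPTION:
            return 0.0
        payout = self.collected * TOKEN_VALUE * REDEMPTION_RATE
        self.collected = 0
        return payout

class Reader:
    def __init__(self, issuer: TokenIssuer, deposit: float) -> None:
        self.tokens = issuer.buy_tokens(deposit)

    def pay_for_article(self, site: Site, price_tokens: int) -> None:
        if self.tokens < price_tokens:
            raise RuntimeError("out of tokens; buy more")
        self.tokens -= price_tokens
        site.collected += price_tokens

reader = Reader(TokenIssuer(), deposit=20.00)  # $20 buys 2,000 tokens
site = Site()
for _ in range(400):                           # 400 articles at 5 tokens each
    reader.pay_for_article(site, price_tokens=5)

payout = site.redeem()
print(reader.tokens)     # 0: the 400 articles the text mentions
print(round(payout, 2))  # 19.0: one bundled payout instead of 2,000 card fees
```

The key design point is that the card network only ever sees two large transactions (the reader's $20 deposit and the site's bundled payout), so the per-transaction fee is paid twice rather than 2,000 times.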

There are a number of technical issues to be resolved here: how to keep the tokens secure, and how to guarantee that once a user purchases access to an article they keep that access, ideally even if they clear their cache, delete all their cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. Those fines might actually be enough to make it no longer worth the while to take sensitive data, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

Economic Possibilities for Ourselves

May 2 JDN 2459335

In 1930, John Maynard Keynes wrote one of the greatest essays ever written on economics, “Economic Possibilities for our Grandchildren.” You can read it here.

In that essay he wrote:

“I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is.”

US population in 1930: 122 million; US real GDP in 1930: $1.1 trillion. Per-capita GDP: $9,000

US population in 2020: 329 million; US real GDP in 2020: $18.4 trillion. Per-capita GDP: $56,000

That’s a factor of 6. Keynes said 4 to 8; that makes his estimate almost perfect. We aren’t just inside his error bar, we’re in the center of it. If anything he was under-confident. Of course we still have 10 years left before a full century has passed: At a growth rate of 1% in per-capita GDP, that will make the ratio closer to 7—still well within his confidence interval.
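The check can be spelled out explicitly, using the population and GDP figures quoted above:

```python
# Checking Keynes's 1930 prediction against the figures quoted above.
gdp_pc_1930 = 1.1e12 / 122e6    # about $9,000
gdp_pc_2020 = 18.4e12 / 329e6   # about $56,000

ratio_2020 = gdp_pc_2020 / gdp_pc_1930
print(round(ratio_2020, 1))     # 6.2, near the center of Keynes's 4-to-8 range

# Extrapolate the remaining decade at 1% annual per-capita growth:
ratio_2030 = ratio_2020 * 1.01 ** 10
print(round(ratio_2030, 1))     # 6.9, still well inside his confidence interval
```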

I’d like to take a moment to marvel at how good this estimate is. Keynes predicted the growth rate of the entire US economy one hundred years in the future to within plus or minus 30%, and got it right.

With this in mind, it’s quite astonishing what Keynes got wrong in his essay.

The point of the essay is that what Keynes calls “the economic problem” will soon be solved. By “the economic problem”, he means the scarcity of resources that makes it impossible for everyone in the world to make a decent living. Keynes predicts that by 2030—so just a few years from now—humanity will have effectively solved this problem, and we will live in a world where everyone can live comfortably with adequate basic necessities like shelter, food, water, clothing, and medicine.

He laments that with the dramatically higher productivity that technological advancement brings, we will be thrust into a life of leisure that we are unprepared to handle. Evolved for a world of scarcity, we built our culture around scarcity, and we may not know what to do with ourselves in a world of abundance.

Keynes sounds his most naive when he imagines that we would spread out our work over more workers each with fewer hours:

“For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich today, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter—to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!”

Plainly that is nothing like what happened. Americans do on average work fewer hours today than we did in the past, but not by anything like this much: average annual hours fell from about 1,900 in 1950 to about 1,700 today. Where Keynes was predicting a drop of 60%, the actual drop was only about 10%.
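To put numbers on that comparison (assuming a roughly 40-hour week as the baseline Keynes's 15-hour prediction was cutting from):

```python
# Keynes's predicted cut in working hours vs. what actually happened.
# Assumes a ~40-hour week as the baseline for his 15-hour prediction;
# the annual-hours figures are those quoted above.
predicted_drop = 1 - 15 / 40           # Keynes: down to a 15-hour week
actual_drop = 1 - 1700 / 1900          # ~1,900 hours in 1950 to ~1,700 today

print(round(predicted_drop * 100, 1))  # 62.5, the roughly 60% cut he imagined
print(round(actual_drop * 100, 1))     # 10.5, the roughly 10% cut we got
```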

Here’s another change Keynes predicted that I wish we’d made, but we certainly haven’t:

“When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession—as distinguished from the love of money as a means to the enjoyments and realities of life—will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

Sadly, people still idolize Jeff Bezos and Elon Musk just as much as their forebears idolized Henry Ford or Andrew Carnegie. And really there’s nothing semi- about it: The acquisition of billions of dollars by exploiting others is clearly indicative of narcissism if not psychopathy.

It’s not that we couldn’t have made the world that Keynes imagined. There’s plenty of stuff—his forecast for our per-capita GDP was impeccable. Keynes thought that once we had automated away the most important work, we would turn to lives of leisure: exploring art, music, literature, film, games, sports. But instead we did something he did not anticipate: We invented new kinds of work.

This would be fine if the new work we invented were genuinely productive; and some of it is, no doubt. Keynes could not have anticipated the emergence of 3D graphics designers, smartphone engineers, or web developers, but these jobs do genuinely productive and beneficial work that makes use of our extraordinary new technologies.

But think for a moment about Facebook and Google, now two of the world’s largest and most powerful corporations. What do they sell? Think carefully! Facebook doesn’t sell social media. Google doesn’t sell search algorithms. Those are services they provide as platforms for what they actually sell: Advertising.

That is, some of the most profitable, powerful corporations in the world today make nearly all of their revenue from trying to persuade people to buy things they don’t actually need. The actual benefits they provide to humanity are sort of incidental; they exist to provide an incentive to look at the ads.

Paul Krugman often talks about Solow’s famous remark that you can “see the computer age everywhere but in the productivity statistics”; aggregate productivity growth has, if anything, been slower in the last 40 years than in the previous 40.

But this aggregate is a very foolish measure. It’s averaging together all sorts of work into one big lump.

If you look specifically at manufacturing output per worker—the sort of thing you’d actually expect to increase due to automation—it has in fact increased, at breakneck speed: The average American worker produced four times as much output per hour in 2000 as in 1950.

The problem is that instead of splitting up the manufacturing work to give people free time, we moved them all into services—which have not meaningfully increased their productivity in the same period. The average growth rate in multifactor productivity in the service industries since the 1970s has been a measly 0.2% per year, meaning that our total output per worker in service industries is only 10% higher than it was in 1970.

While our population is more than double what it was in 1950, our total manufacturing employment is now less than it was in 1950. Our employment in services is four times what it was in 1950. We moved everyone out of the sector that actually got more productive and stuffed them into the sector that didn’t.

This is why the productivity statistics are misleading. Suppose we have 100 workers in 2 industries.

Initially, in manufacturing, each worker can produce goods worth $20 per hour. In services, each worker can only produce services worth $10 per hour. 50 workers work in each industry, so average productivity is (50*$20+50*$10)/100 = $15 per hour.

Then, after new technological advances, productivity in manufacturing increases to $80 per hour, but people don’t actually want to spend that much more on manufactured goods. So 30 workers move from manufacturing to services, which still only produce $10 per hour. Now total productivity is (20*$80+80*$10)/100 = $24 per hour.

Overall productivity now appears to have risen only 60% over that period (over 50 years, about 0.9% per year), but in fact it rose 300% in manufacturing (about 2.8% per year) and 0% in services. What looks like anemic growth in productivity is actually a shift of workers out of the productive sector into the unproductive one.
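The same two-sector example as a runnable sketch, annualizing the growth rates over an assumed 50-year span:

```python
# Two-sector illustration of how aggregate productivity can look stagnant
# while manufacturing productivity quadruples: workers shift into the
# low-productivity sector. Growth rates are annualized over 50 years.

def average_productivity(workers, rates):
    """Employment-weighted average output per hour across sectors."""
    return sum(n * r for n, r in zip(workers, rates)) / sum(workers)

before = average_productivity([50, 50], [20, 10])  # manufacturing, services
after = average_productivity([20, 80], [80, 10])   # after the labor shift
print(before, after)                               # 15.0 24.0

years = 50
overall = (after / before) ** (1 / years) - 1      # 1.6x overall growth
mfg = (80 / 20) ** (1 / years) - 1                 # 4x in manufacturing
print(round(overall * 100, 1))                     # 0.9 (% per year)
print(round(mfg * 100, 1))                         # 2.8 (% per year)
```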

Keynes imagined that once we had made manufacturing so efficient that everyone could have whatever appliances they liked, we’d give people the chance to live their lives without having to work. Instead, we found jobs for them—in large part, jobs that didn’t need doing.

Advertising is the clearest example: It’s almost pure rent-seeking, and if it were suddenly deleted from the universe almost everyone would actually be better off.

But there are plenty of other jobs, what the late David Graeber called “bullshit jobs”, that have the same character: Sales, consulting, brokering, lobbying, public relations, and most of what goes on in management, law and finance. Graeber had a silly theory that we did this on purpose either to make the rich feel important or to keep people working so they wouldn’t question the existing system. The real explanation is much simpler: These jobs are rent-seeking. They do make profits for the corporations that employ them, but they contribute little or nothing to human society as a whole.

I’m not sure how surprised Keynes would be by this outcome. In parts of the essay he acknowledges that the attitude which considers work a virtue and idleness a vice is well-entrenched in our society, and seems to recognize that the transition to a world where most people work very little is one that would be widely resisted. But his vision of what the world would be like in the early 21st century does now seem to be overly optimistic, not in its forecasts of our productivity and output—which, I really cannot stress enough, were absolutely spot on—but in its predictions of how society would adapt to that abundance.

It seems that most people still aren’t quite ready to give up on a world built around jobs. Most people still think of a job as the primary purpose of an adult’s life, that someone who isn’t working for an employer is somehow wasting their life and free-riding on everyone else.

In some sense this is perhaps true; but why is it more true of someone living on unemployment than of someone who works in marketing, or stock brokering, or lobbying, or corporate law? At least people living on unemployment aren’t actively making the world worse. And since unemployment pays less than all but the lowest-paying jobs, the amount of resources that are taken up by people on unemployment is considerably less than the rents which are appropriated by industries like consulting and finance.

Indeed, whenever you encounter a billionaire, there’s one thing you know for certain: They are very good at rent-seeking. Whether by monopoly power, or exploitation, or outright corruption, all the ways it’s possible to make a billion dollars are forms of rent-seeking. And this is for a very simple and obvious reason: No one can possibly work so hard and be so productive as to actually earn a billion dollars. No one’s real opportunity cost is actually that high—and the difference between income and real opportunity cost is by definition economic rent.

If we’re truly concerned about free-riding on other people’s work, we should really be thinking in terms of the generations of scientists and engineers before us who made all of this technology possible, as well as the institutions and infrastructure that have bequeathed us a secure stock of capital. “You didn’t build that” applies to all of us: Even if all the necessary raw materials were present, none of us could build a smartphone by hand alone on a desert island. Most of us couldn’t even sew a pair of pants or build a house—though that is at least the sort of thing that it’s possible to do by hand.

But in fact I think free-riding on our forebears is a perfectly acceptable activity. I am glad we do it, and I hope our descendants do it to us. I want to build a future where life is better than it is now; I want to leave the world better than we found it. If there were some way to inter-temporally transfer income back to the past, I suppose maybe we ought to do so—but as far as we know, there isn’t. Nothing can change the fact that most people were desperately poor for most of human history.

What we now have the power to decide is what will happen to people in the future: Will we continue to maintain this system where our wealth is decided by our willingness to work for corporations, at jobs that may be utterly unnecessary or even actively detrimental? Or will we build a new system, one where everyone gets the chance to share in the abundance that our ancestors have given us and each person gets the chance to live their life in the way that they find most meaningful?

Keynes imagined a bright future for the generation of his grandchildren. We now live in that generation, and we have precisely the abundance of resources he predicted we would. Can we now find a way to build that bright future?