Billionaires bear the burden of proof

Sep 15 JDN 2458743

A king sits atop a golden throne, surrounded by a thousand stacks of gold coins six feet high. A hundred starving peasants beseech him for just one gold coin each, so that they might buy enough food to eat and clothes for the winter. The king responds: “How dare you take my hard-earned money!”

This is essentially the world we live in today. I really cannot emphasize enough how astonishingly, horrifically, mind-bogglingly rich billionaires are. I am writing this sentence at 13:00 PDT on September 8, 2019. A thousand seconds ago was 12:43, about when I started this post. A million seconds ago was Wednesday, August 28. A billion seconds ago was 1987. I will be a billion seconds old this October.

Jeff Bezos has $170 billion. 170 billion seconds ago was a thousand years before the construction of the Great Pyramid. To amass as much money as he has while earning one dollar per second (that’s $3,600 an hour!), Jeff Bezos would have had to work for as long as human civilization has existed.

At a more sensible wage like $30 per hour (still better than most people get), how long would it take to amass $170 billion? Working around the clock, about 600,000 years—or about twice the length of time that Homo sapiens has existed on Earth.
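
A quick sanity check on this arithmetic (a throwaway script; the only inputs are the $170 billion figure and the wages above):

```python
# Back-of-the-envelope arithmetic for the figures above.
SECONDS_PER_DAY = 24 * 60 * 60
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

print(1_000 / 60)                        # a thousand seconds: ~16.7 minutes
print(1_000_000 / SECONDS_PER_DAY)       # a million seconds: ~11.6 days
print(1_000_000_000 / SECONDS_PER_YEAR)  # a billion seconds: ~31.7 years

bezos = 170e9  # dollars
# At $1 per second ($3,600/hour), around the clock:
print(bezos / SECONDS_PER_YEAR)          # ~5,400 years of earning
# At $30 per hour, around the clock:
print(bezos / 30 / (24 * 365.25))        # ~650,000 years of earning
```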

How does this compare to my fictional king with a thousand stacks of gold? A typical gold coin is worth about $500, depending on its age and condition. Coins are about 2 millimeters thick. So a thousand stacks, each 2 meters high, would be about $500*1000*1000 = $500 million. This king isn’t even a billionaire! Jeff Bezos has roughly 340 times as much as him.

Coins are about 30 millimeters in diameter, so assuming they are packed in neat rows, these thousand stacks of gold coins would fill a square about 0.9 meters to a side—in our silly Imperial units, that’s 3 feet wide, 3 feet deep, 6 feet tall. If Jeff Bezos’ stock portfolio were liquidated into gold coins (which would require about 2% of the world’s entire gold supply and surely tank the market), the neat rows of coins stacked a thousand high would fill a square over 16 meters to a side—that’s a 50-foot-wide block of gold coins. Smaug’s hoard in The Hobbit was probably about the same amount of money as what Jeff Bezos has.
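
Here is the same coin arithmetic in code, using the coin value, thickness, and diameter assumed above:

```python
import math

COIN_VALUE = 500        # dollars (assumed above)
COIN_THICKNESS = 0.002  # meters
COIN_DIAMETER = 0.030   # meters

# The king: 1,000 stacks, each 2 meters (1,000 coins) high.
stack_value = COIN_VALUE * (2.0 / COIN_THICKNESS)        # $500,000 per stack
king = 1_000 * stack_value
print(f"King's hoard: ${king:,.0f}")                     # $500,000,000

# Footprint of 1,000 stacks packed in a square grid:
print(f"{math.sqrt(1_000) * COIN_DIAMETER:.2f} m/side")  # ~0.95 m

# Bezos's hoard in the same coins:
stacks = 170e9 / stack_value                             # 340,000 stacks
print(f"{170e9 / king:.0f}x the king's hoard")           # 340x
print(f"{math.sqrt(stacks) * COIN_DIAMETER:.1f} m/side") # ~17.5 m
```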

And yet, somehow there are still people who believe that he deserves this money, that he earned it, that to take even a fraction of it away would be a crime tantamount to theft or even slavery.

Their arguments can be quite seductive: How would you feel about the government taking your hard-earned money? Entrepreneurs are brilliant, dedicated, hard-working people; why shouldn’t they be rewarded? What crime do CEOs commit by selling products at low prices?

The way to cut through these arguments is to never lose sight of the numbers. In defense of a man who had $5 million or even $20 million, such an argument might make sense. I can imagine how someone could contribute enough to humanity to legitimately deserve $20 million. I can understand how a talented person might work hard enough to earn $5 million. But it’s simply not possible for any human being to be so brilliant, so dedicated, so hard-working, or make such a contribution to the world, that they deserve to have more dollars than there have been seconds since the Great Pyramid.

It’s not necessary to find specific unethical behaviors that brought a billionaire to where he (and yes, it’s nearly always he) is. They are generally there to be found: At best, one becomes a billionaire by sheer luck. Typically, one becomes a billionaire by exerting monopoly power. At worst, one can become a billionaire by ruthless exploitation or even mass murder. But it’s not our responsibility to point out a specific crime for every specific billionaire.

The burden of proof is on billionaires: Explain how you can possibly deserve that much money.

It’s not enough to point to some good things you did, or emphasize what a bold innovator you are: You need to explain what you did that was so good that it deserves to be rewarded with Smaug-level hoards of wealth. Did you save the world from a catastrophic plague? Did you end world hunger? Did you personally prevent a global nuclear war? I could almost see the case for Norman Borlaug or Jonas Salk earning a billion dollars (neither did, by the way). But Jeff Bezos? You didn’t save the world. You made a company that sells things cheaply and ships them quickly. Get over yourself.

Where exactly do we draw that line? That’s a fair question. $20 million? $100 million? $500 million? Maybe there shouldn’t even be a hard cap. There are many other approaches we could take to reducing this staggering inequality. Previously I have proposed a tax system that gets continuously more progressive forever, as well as a CEO compensation cap based on the pay of the lowliest employees. We could impose a wealth tax, as Elizabeth Warren has proposed. Or we could simply raise the top marginal rate on income tax to something more like what it was in the 1960s. Or as Republicans today would call it, radical socialism.

Moral luck: How it matters, and how it doesn’t

Feb 10 JDN 2458525

The concept of moral luck is now relatively familiar to most philosophers, but I imagine most other people haven’t heard of it before. It sounds like a contradiction, which is probably why it drew so much attention.

The term “moral luck” seems to have originated in an essay by Thomas Nagel, but the intuition is much older, dating at least back to Greek philosophy (and probably older than that; we just don’t have good records that far back).

The basic argument is this:

  1. Most people would say that if you had no control over something, you can’t be held morally responsible for it. It was just luck.
  2. But if you look closely, everything we do—including things we would conventionally regard as moral actions—depends heavily on things we don’t have control over.
  3. Therefore, either we can be held responsible for things we have no control over, or we can’t be held responsible for anything at all!

Neither approach seems very satisfying; hence the conundrum.

For example, consider four drivers:

Anna is driving normally, and nothing of note happens.

Bob is driving recklessly, but nothing of note happens.

Carla is driving normally, but a child stumbles out into the street and she runs the child over.

Dan is driving recklessly, and a child stumbles out into the street and he runs the child over.

The presence or absence of a child in the street was not in the control of any of the four drivers. Yet I think most people would agree that Dan should be held more morally responsible than Bob, and Carla should be held more morally responsible than Anna. (Whether Bob should be held more morally responsible than Carla is not as clear.) Yet both Bob and Dan were driving recklessly, and both Anna and Carla were driving normally. The moral evaluation seems to depend upon the presence of the child, which was not under the drivers’ control.

Other philosophers have argued that the difference is an epistemic one: We know the moral character of someone who drove recklessly and ran over a child better than the moral character of someone who drove recklessly and didn’t run over a child. But do we, really?

Another response is simply to deny that we should treat Bob and Dan any differently, and say that reckless driving is reckless driving, and safe driving is safe driving. For this particular example, maybe that works. But it’s not hard to come up with better examples where that doesn’t work:

Ted is a psychopathic serial killer. He kidnaps, rapes, and murders people. Maybe he can control whether or not he rapes and murders someone. But the reason he rapes and murders someone is that he is a psychopath. And he can’t control that he is a psychopath. So how can we say that his actions are morally wrong?

Obviously, we want to say that his actions are morally wrong.

I have heard one alternative, which is to consider psychopaths as morally equivalent to viruses: Zero culpability, zero moral value, something morally neutral but dangerous that we should contain or eradicate as swiftly as possible. HIV isn’t evil; it’s just harmful. We should kill it not because it deserves to die, but because it will kill us if we don’t. On this theory, Ted doesn’t deserve to be executed; it’s just that we must execute him in order to protect ourselves from the danger he poses.

But this quickly becomes unsatisfactory as well:

Jonas is a medical researcher whose work has saved millions of lives. Maybe he can control the research he works on, but he only works on medical research because he was born with a high IQ and strong feelings of compassion. He can’t control that he was born with a high IQ and strong feelings of compassion. So how can we say his actions are morally right?

This is the line of reasoning that quickly leads to saying that all actions are outside our control, and therefore morally neutral; and then the whole concept of morality falls apart.

So we need to draw the line somewhere; there has to be a space of things that aren’t in our control, but nonetheless carry moral weight. That’s moral luck.

Philosophers have actually identified four types of moral luck, which turns out to be tremendously useful in drawing that line.

Resultant luck is luck that determines the consequences of your actions, how things “turn out”. Happening to run over the child because you couldn’t swerve fast enough is resultant luck.

Circumstantial luck is luck that determines the sorts of situations you are in, and what moral decisions you have to make. A child happening to stumble across the street is circumstantial luck.

Constitutive luck is luck that determines who you are, your own capabilities, virtues, intentions and so on. Having a high IQ and strong feelings of compassion is constitutive luck.

Causal luck is the inherent luck written into the fabric of the universe that determines all events according to the fundamental laws of physics. Causal luck is everything and everywhere; it is written into the universal wavefunction.

I have a very strong intuition that this list is ordered; going from top to bottom makes things “less luck” in a vital sense.

Resultant luck is pure luck, what we originally meant when we said the word “luck”. It’s the roll of the dice.

Circumstantial luck is still mostly luck, but maybe not entirely; there are some aspects of it that do seem to be under our control.

Constitutive luck is maybe luck, sort of, but not really. Yes, “You’re lucky to be so smart” makes sense, but “You’re lucky to not be a psychopath” already sounds pretty weird. We’re entering territory here where our ordinary notions of luck and responsibility really don’t seem to apply.

Causal luck is not luck at all. Causal luck is really the opposite of luck: Without a universe with fundamental laws of physics to maintain causal order, none of our actions would have any meaning at all. They wouldn’t even really be actions; they’d just be events. You can’t do something in a world of pure chaos; things only happen. And being made of physical particles doesn’t make you any less what you are; a table made of wood is still a table, and a rocket made of steel is still a rocket. Thou art physics.

And that, my dear reader, is the solution to the problem of moral luck. Forget “causal luck”, which isn’t luck at all. Then, draw a hard line at constitutive luck: regardless of how you became who you are, you are responsible for what you do.

You don’t need to have control over who you are (what would that even mean!?).

You merely need to have control over what you do.

This is how the word “control” is normally used, by the way; when we say that a manufacturing process is “under control” or a pilot “has control” of an airplane, we aren’t asserting some grand metaphysical claim of ultimate causation. We’re merely saying that the system is working as it’s supposed to; the outputs coming out are within the intended parameters. This is all we need for moral responsibility as well.

In some cases, maybe people’s brains really are so messed up that we can’t hold them morally responsible; they aren’t “under control”. Okay, we’re back to the virus argument then: Contain or eradicate. If a brain tumor makes you so dangerous that we can’t trust you around sharp objects, unless we can take out that tumor, we’ll need to lock you up somewhere where you can’t get any sharp objects. Sorry. Maybe you don’t deserve that in some ultimate sense, but it’s still obviously what we have to do. And this is obviously quite exceptional; most people are not suffering from brain tumors that radically alter their personalities—and even most psychopaths are otherwise neurologically normal.

Ironically, it’s probably my fellow social scientists who will scoff the most at this answer. “But so much of what we are is determined by our neurochemistry/cultural norms/social circumstances/political institutions/economic incentives!” Yes, that’s true. And if we want to change those things to make us and others better, I’m all for it. (Well, neurochemistry is a bit problematic, so let’s focus on the others first—but if you can make a pill that cures psychopathy, I would support mandatory administration of that pill to psychopaths in positions of power.)

When you make a moral choice, we have to hold you responsible for that choice.

Maybe Ted is psychopathic and sadistic because there was too much lead in his water as a child. That’s a good reason to stop putting lead in people’s water (like we didn’t already have plenty!); but it’s not a good reason to let Ted off the hook for all those rapes and murders.

Maybe Jonas is intelligent and compassionate because his parents were wealthy and well-educated. That’s a good reason to make sure people are financially secure and well-educated (again, did we need more?); but it’s not a good reason to deny Jonas his Nobel Prize for saving millions of lives.

Yes, “personal responsibility” has been used by conservatives as an excuse to not solve various social and economic problems (indeed, it has specifically been used to stop regulations on lead in water and public funding for education). But that’s not actually anything wrong with personal responsibility. We should hold those conservatives personally responsible for abusing the term in support of their destructive social and economic policies. No moral freedom is lost by preventing lead from turning children into psychopaths. No personal liberty is destroyed by ensuring that everyone has access to a good education.

In fact, there is evidence that telling people who are suffering from poverty or oppression that they should take personal responsibility for their choices benefits them. Self-perceived victimhood is linked to all sorts of destructive behaviors, even controlling for prior life circumstances. Feminist theorists have written about how taking responsibility even when you are oppressed can empower you to make your life better. Yes, obviously, we should be helping people when we can. But telling them that they are hopeless unless we come in to rescue them isn’t helping them.

This way of thinking may require a delicate balance at times, but it’s not inconsistent. You can both fight against lead pollution and support the criminal justice system. You can believe in both public education and the Nobel Prize. We should be working toward a world where people are constituted with more virtue for reasons beyond their control, and where people are held responsible for the actions they take that are under their control.

We can continue to talk about “moral luck” referring to constitutive luck, I suppose, but I think the term obscures more than it illuminates. The “luck” that made you a good or a bad person is very different from the “luck” that decides how things happen to turn out.

Do we always want to internalize externalities?

JDN 2457437

I often talk about the importance of externalities; I gave a full discussion in this earlier post, and covered one of their important implications, the tragedy of the commons, in another. Briefly, externalities are consequences of actions incurred upon people who did not perform those actions. Anything I do that affects you, and that you had no say in, is an externality.

Usually I’m talking about how we want to internalize externalities, meaning that we set up a system of incentives to make it so that the consequences fall upon the people who chose the actions instead of anyone else. If you pollute a river, you should have to pay to clean it up. If you assault someone, you should serve jail time as punishment. If you invent a new technology, you should be rewarded for it. These are all attempts to internalize externalities.

But today I’m going to push back a little, and ask whether we really always want to internalize externalities. If you think carefully, it’s not hard to come up with scenarios where it actually seems fairer to leave the externality in place, or perhaps reduce it somewhat without eliminating it.

For example, suppose indeed that someone invents a great new technology. To be specific, let’s think about Jonas Salk, inventing the polio vaccine. This vaccine saved the lives of thousands of people and saved millions more from pain and suffering. Its value to society is enormous, and of course Salk deserved to be rewarded for it.

But we did not actually fully internalize the externality. If we had, every family whose child was saved from polio would have had to pay Jonas Salk an amount equal to what they saved on medical treatments as a result, or even an amount somehow equal to the value of their child’s life (imagine how offended people would get if you asked that on a survey!). Those millions of people spared from suffering would need to each pay, at minimum, thousands of dollars to Jonas Salk, making him of course a billionaire.

And indeed this is more or less what would have happened, if he had been willing and able to enforce a patent on the vaccine. The inability of some to pay for the vaccine at its monopoly prices would add some deadweight loss, but even that could be removed if Salk Industries had found a way to offer targeted price vouchers that let them precisely price-discriminate so that every single customer paid exactly what they could afford to pay. If that had happened, we would have fully internalized the externality and therefore maximized economic efficiency.

But doesn’t that sound awful? Doesn’t it sound much worse than what we actually did, where Jonas Salk received a great deal of funding and support from governments and universities, and lived out his life comfortably upper-middle class as a tenured university professor?

Now, perhaps he should have been awarded a Nobel Prize—I take that back, there’s no “perhaps” about it, he definitely should have been awarded a Nobel Prize in Medicine, it’s absurd that he did not—which means that I at least do feel the externality should have been internalized a bit more than it was. But a Nobel Prize is only 10 million SEK, about $1.1 million. That’s about enough to be independently wealthy and live comfortably for the rest of your life; but it’s a small fraction of the roughly $7 billion he could have gotten if he had patented the vaccine. Yet while the possible world in which he wins a Nobel is better than this one, I’m fairly well convinced that the possible world in which he patents the vaccine and becomes a billionaire is considerably worse.

Internalizing externalities makes sense if your goal is to maximize total surplus (a concept I explain further in the linked post), but total surplus is actually a terrible measure of human welfare.

Total surplus counts every dollar of willingness-to-pay exactly the same across different people, regardless of whether they live on $400 per year or $4 billion.

It also takes no account whatsoever of how wealth is distributed. Suppose a new technology adds $10 billion in wealth to the world. As far as total surplus is concerned, it makes no difference whether that $10 billion is spread evenly across the entire planet, distributed among a city of a million people, concentrated in a small town of 2,000, or even held entirely in the bank account of a single man.

Particularly apropos of the Salk example, total surplus makes no distinction between these two scenarios: a perfectly-competitive market where everything is sold at a fair price, and a perfectly price-discriminating monopoly, where everything is sold at the very highest possible price each person would be willing to pay.

This is a perfectly-competitive market, where the benefits are shared more or less equally (in this case exactly equally, but that need not be true in real life) between sellers and buyers:

[Figure: supply and demand in a competitive market with elastic supply; the surplus is split between consumers and producers.]

This is a perfectly price-discriminating monopoly, where the benefits accrue entirely to the corporation selling the good:

[Figure: a perfectly price-discriminating monopoly with the same elastic supply; the seller captures all of the surplus.]

In the former case, the company profits, consumers are better off, everyone is happy. In the latter case, the company reaps all the benefits and everyone else is left exactly as they were. In real terms those are obviously very different outcomes—the former being what we want, the latter being the cyberpunk dystopia we seem to be hurtling mercilessly toward. But in terms of total surplus, and therefore the kind of “efficiency” that is maximized by internalizing all externalities, they are indistinguishable.
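
To see that indistinguishability in numbers, here is a small worked example; the linear demand and marginal-cost curves are invented for illustration, not taken from the figures above:

```python
# Invented linear market: demand p = 100 - q, marginal cost p = q.
# Competitive equilibrium: 100 - q = q  ->  q = 50, p = 50.
q, p = 50.0, 50.0
consumer_surplus = 0.5 * q * (100 - p)  # buyers' triangle: 1250
producer_surplus = 0.5 * q * p          # sellers' triangle: 1250
print(consumer_surplus, producer_surplus, consumer_surplus + producer_surplus)

# Perfectly price-discriminating monopoly: the same 50 units are sold,
# but each buyer is charged exactly their willingness-to-pay.
revenue = 100 * q - 0.5 * q**2          # area under demand curve: 3750
cost = 0.5 * q**2                       # area under marginal cost: 1250
print(0.0, revenue - cost, revenue - cost)  # CS = 0, PS = 2500, total 2500
```

Total surplus comes out to 2,500 either way; only the distribution changes.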

In fact (as I hope to publish a paper about at some point), the way willingness-to-pay works, it weights rich people more. Redistributing goods from the poor to the rich will typically increase total surplus.

Here’s an example. Suppose there is a cake, which is sufficiently delicious that it offers 2 milliQALY in utility to whoever consumes it (this is a truly fabulous cake). Suppose there are two people to whom we might give this cake: Richie, who has $10 million in annual income, and Hungry, who has only $1,000 in annual income. How much will each of them be willing to pay?

Well, assuming logarithmic utility of wealth (an assumption which itself probably biases slightly in favor of the rich), 1 milliQALY is about $1 to Hungry, so Hungry will be willing to pay $2 for the cake. To Richie, however, 1 milliQALY is about $10,000; so he will be willing to pay a whopping $20,000 for this cake.

What this means is that the cake will almost certainly be sold to Richie; and if we proposed a policy to redistribute the cake from Richie to Hungry, economists would emerge to tell us that we have just reduced total surplus by $19,998 and thereby committed a great sin against economic efficiency. They will cajole us into returning the cake to Richie and thus raising total surplus by $19,998 once more.

This despite the fact that I stipulated that the cake is worth just as much in real terms to Hungry as it is to Richie; the difference is due to their wildly differing marginal utility of wealth.

Indeed, it gets worse, because even if we suppose that the cake is worth much more in real utility to Hungry—because he is in fact hungry—it can still easily turn out that Richie’s willingness-to-pay is substantially higher. Suppose that Hungry actually gets 20 milliQALY out of eating the cake, while Richie still only gets 2 milliQALY. Hungry’s willingness-to-pay is now $20, but Richie is still going to end up with the cake.

Now, if your thought is, “Why would Richie pay $20,000, when he can go to another store and get another cake that’s just as good for $20?” Well, he wouldn’t—but in the sense we mean for total surplus, willingness-to-pay isn’t just what you’d actually be willing to pay given the actual prices of the goods; it’s the absolute maximum price you’d be willing to pay to get that good under any circumstances, which works out to the marginal utility of the good divided by your marginal utility of wealth. In this sense the cake is “worth” $20,000 to Richie, and “worth” substantially less to Hungry—not because it’s actually worth less in real terms, but simply because Richie has so much more money.

Even economists often equate these two, implicitly assuming that we are spending our money up to the point where our marginal willingness-to-pay is the actual price we choose to pay; but in general our willingness-to-pay is higher than the price if we are willing to buy the good at all. The consumer surplus we get from goods is in fact equal to the difference between willingness-to-pay and actual price paid, summed up over all the goods we have purchased.
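
Here is the cake arithmetic worked out exactly, using log utility itself rather than the marginal approximation (which is why the answers come out a hair under the round numbers in the text):

```python
import math

# Calibration from the example: with logarithmic utility u(w) = ln(w),
# marginal utility is 1/w, so $1 buys 0.001 "log-utils" at w = $1,000.
# If 1 milliQALY is worth about $1 to Hungry, then 1 mQALY = 0.001 log-utils.
UTILS_PER_MILLIQALY = 0.001

def willingness_to_pay(wealth, milliqaly):
    """Largest p such that ln(wealth) - ln(wealth - p) = utility of the good."""
    du = milliqaly * UTILS_PER_MILLIQALY
    return wealth * (1 - math.exp(-du))

print(willingness_to_pay(1_000, 2))       # Hungry, 2 mQALY:  ~$2
print(willingness_to_pay(10_000_000, 2))  # Richie, 2 mQALY:  ~$20,000
print(willingness_to_pay(1_000, 20))      # Hungry, 20 mQALY: ~$20
```

Even when the cake is worth ten times as much in real utility to Hungry, Richie’s willingness-to-pay is still about a thousand times higher.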

Internalizing all externalities would definitely maximize total surplus—but would it actually maximize happiness? Probably not.

If you asked most people what their marginal utility of wealth is, they’d have no idea what you’re talking about. But most people do actually have an intuitive sense that a dollar is worth more to a homeless person than it is to a millionaire, and that’s really all we mean by diminishing marginal utility of wealth.

I think the reason we’re uncomfortable with the idea of Jonas Salk getting $7 billion from selling the polio vaccine, rather than the same number of people getting the polio vaccine and Jonas Salk only getting the $1.1 million from a Nobel Prize, is that we intuitively grasp that after that $1.1 million makes him independently wealthy, the rest of the money is just going to sit in some stock account and continue making even more money, while if we’d let the families keep it they would have put it to much better use raising their children who are now protected from polio. We do want to reward Salk for his great accomplishment, but we don’t see why we should keep throwing cash at him when it could obviously be spent in better ways.

And indeed I think this intuition is correct; great accomplishments—which is to say, large positive externalities—should be rewarded, but not in direct proportion. Maybe there should be some threshold above which we say, “You know what? You’re rich enough now; we can stop giving you money.” Or maybe it should simply damp down very quickly, so that a contribution which is worth $10 billion to the world pays only slightly more than one that is worth $100 million, but a contribution that is worth $100,000 pays considerably more than one which is only worth $10,000.

What it ultimately comes down to is that if we make all the benefits accrue to the person who did it, there aren’t any benefits anymore. The whole point of Jonas Salk inventing the polio vaccine (or Einstein discovering relativity, or Darwin figuring out natural selection, or any great achievement) is that it will benefit the rest of humanity, preferably on to future generations. If you managed to fully internalize that externality, this would no longer be true; Salk and Einstein and Darwin would have become fabulously wealthy, and then we’d all have to keep paying into their estates an amount equal to the benefits we received from their discoveries. (Every time you use your GPS, pay a royalty to the Einsteins. Every time you take a pill, pay a royalty to the Darwins.) At some point we’d probably get fed up and decide we’re no better off with them than without them—which is exactly, by construction, how we should feel if the externality were fully internalized.

Internalizing negative externalities is much less problematic—it’s your mess, clean it up. We don’t want other people to be harmed by your actions, and if we can pull that off that’s fantastic. (In reality, we usually can’t fully internalize negative externalities, but we can at least try.)

But maybe internalizing positive externalities really isn’t so great after all.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding. On the other hand, a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
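
To put numbers on the fat-tails point, here is how often a normal distribution says large one-day drops should occur; the 252 trading days per year and the sigma thresholds are my own illustrative choices:

```python
import math

def normal_tail(sigmas):
    """P(Z < -sigmas) for a standard normal distribution."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

TRADING_DAYS_PER_YEAR = 252
for k in (3, 5, 10):
    p = normal_tail(k)
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k}-sigma daily drop: p = {p:.2e}, once per {years:,.0f} years")
```

A 5-sigma day should occur about once every 14,000 years of trading, and a 10-sigma day essentially never in the lifetime of the universe; yet moves that size (by normal calibration) show up in real markets with disturbing regularity. The 1987 crash was a roughly 20-sigma event by that reckoning.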

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or for which the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

In honor of Pi Day, I for one welcome our new robot overlords

JDN 2457096 EDT 16:08

Despite my preference to use the Julian Date Number system, it has not escaped my attention that this weekend was Pi Day of the Century, 3/14/15. Yesterday morning we had the Moment of Pi: 3/14/15 9:26:53.58979… We arguably got an encore that evening if we allow 9:00 PM instead of 21:00.

Though perhaps it is a stereotype and/or cheesy segue, pi and associated mathematical concepts are often associated with computers and robots. Robots are an increasing part of our lives, from the industrial robots that manufacture our cars to the precision-timed satellites that provide our GPS navigation. When you want to know how to get somewhere, you pull out your pocket thinking machine and ask it to commune with the space robots who will guide you to your destination.

There are obvious upsides to these robots—they are enormously productive, and allow us to produce great quantities of useful goods at astonishingly low prices, including computers themselves, creating a positive feedback loop that has literally lowered the price of a given amount of computing power by a factor of one trillion in the latter half of the 20th century. We now very much live in the early parts of a cyberpunk future, and it is due almost entirely to the power of computer automation.

But if you know your SF you may also remember another major part of cyberpunk futures aside from their amazing technology; they also tend to be dystopias, largely because of their enormous inequality. In the cyberpunk future corporations own everything, governments are virtually irrelevant, and most individuals can barely scrape by—and that sounds all too familiar, doesn’t it? This isn’t just something SF authors made up; there really are a number of ways that computer technology can exacerbate inequality and give more power to corporations.

Why? The reason that seems to get the most attention among economists is skill-biased technological change; that’s weird because it’s almost certainly the least important. The idea is that computers can automate many routine tasks (no one disputes that part) and that routine tasks tend to be the sort of thing that uneducated workers generally do more often than educated ones (already this is looking fishy; think about accountants versus artists). But educated workers are better at using computers and the computers need people to operate them (clearly true). Hence while uneducated workers are substitutes for computers—you can use the computers instead—educated workers are complements for computers—you need programmers and engineers to make the computers work. As computers get cheaper, their substitutes also get cheaper—and thus wages for uneducated workers go down. But their complements get more valuable—and so wages for educated workers go up. Thus, we get more inequality, as high wages get higher and low wages get lower.
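
A toy model, entirely my own construction, makes the mechanism concrete: treat computers as a perfect substitute for uneducated labor and a complement to educated labor, and watch what each wage does as computers become cheap and abundant:

```python
# Toy skill-biased technological change. Output = (K + Lu)^a * Le^b,
# where capital K (computers) substitutes one-for-one for uneducated
# labor Lu and complements educated labor Le. Wages = marginal products.
A, B = 0.5, 0.5         # output elasticities (illustrative)
LU, LE = 100.0, 100.0   # fixed labor supplies

def wages(K):
    w_uneducated = A * (K + LU) ** (A - 1) * LE ** B
    w_educated = B * (K + LU) ** A * LE ** (B - 1)
    return w_uneducated, w_educated

for K in (0, 100, 1000):  # cheaper computers -> more K in use
    wu, we = wages(K)
    print(f"K = {K:4}: uneducated wage {wu:.2f}, educated wage {we:.2f}")
```

As K rises from 0 to 1,000, the uneducated wage falls from 0.50 to about 0.15 while the educated wage more than triples, from 0.50 to about 1.66; inequality rises even though no worker became any less productive.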

Or, to put it more succinctly, robots are taking our jobs. Not all our jobs—actually they’re creating jobs at the top for software programmers and electrical engineers—but a lot of our jobs, like welders and metallurgists and even nurses. As the technology improves more and more jobs will be replaced by automation.

The theory seems plausible enough—and in some form is almost certainly true—but as David Card has pointed out, this fails to explain most of the actual variation in inequality in the US and other countries. Card is one of my favorite economists; he is also famous for completely revolutionizing the economics of minimum wage, showing that prevailing theory that minimum wages must hurt employment simply doesn’t match the empirical data.

If it were just that college education is getting more valuable, we’d see a rise in income for roughly the top 40%, since over 40% of American adults have at least an associate’s degree. But we don’t actually see that; in fact contrary to popular belief we don’t even really see it in the top 1%. The really huge increases in income for the last 40 years have been at the top 0.01%—the top 1% of 1%.

Many of the jobs that are now automated also haven’t seen a fall in income; despite the fact that high-frequency trading algorithms do what stockbrokers do a thousand times better (“better” at making markets more unstable and siphoning wealth from the rest of the economy, that is), stockbrokers have seen no such loss in income. Indeed, they simply appropriate the additional income from those computer algorithms—which raises the question of why welders couldn’t do the same thing. And indeed, I’ll explain in a moment why that is exactly what we must do: the robot revolution must also come with a revolution in property rights and income distribution.

No, the real reasons why technology exacerbates inequality are twofold: Patent rents and the winner-takes-all effect.

In an earlier post I already talked about the winner-takes-all effect, so I’ll just briefly summarize it this time around. Under certain competitive conditions, a small fraction of individuals can reap a disproportionate share of the rewards despite being only slightly more productive than those beneath them. This often happens when we have network externalities, in which a product becomes more valuable when more people use it, thus creating a positive feedback loop that makes the products which are already successful wildly so and the products that aren’t successful resigned to obscurity.

Computer technology—more specifically, the Internet—is particularly good at creating such situations. Facebook, Google, and Amazon are all examples of companies that (1) could not exist without Internet technology and (2) depend almost entirely upon network externalities for their business model. They are the winners who take all; thousands of other software companies that were just as good or nearly so are now long forgotten. The winners are not always the same, because the system is unstable; for instance MySpace used to be much more important—and much more profitable—until Facebook came along.
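
You can watch winner-takes-all outcomes emerge from network externalities in a simple simulation, entirely my own illustration: ten identical products, where each new user picks one with probability proportional to its user count raised to the 1.5 power (a mildly superlinear network effect):

```python
import random

random.seed(42)  # reproducible illustration

def market_shares(n_products=10, n_users=100_000, gamma=1.5):
    """Each new user picks a product with probability proportional to
    (current user count) ** gamma. With gamma > 1 (a superlinear network
    effect), whichever product gets lucky early snowballs to dominance."""
    counts = [1] * n_products
    for _ in range(n_users):
        weights = [c ** gamma for c in counts]
        pick = random.choices(range(n_products), weights=weights)[0]
        counts[pick] += 1
    total = sum(counts)
    return sorted((c / total for c in counts), reverse=True)

print([f"{s:.1%}" for s in market_shares()])
# Typical run: one product ends up with nearly all the users, even
# though all ten started out exactly identical.
```

On a typical run one product captures the overwhelming majority of the market; rerun with a different seed and you get a different winner, but always a winner. That is the MySpace-to-Facebook story in miniature.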

But the fact that a different handful of upper-middle-class individuals can find themselves suddenly and inexplicably thrust into fame and fortune while the rest of us toil in obscurity really isn’t much comfort, now is it? While technically the rise and fall of MySpace can be called “income mobility”, it’s clearly not what we actually mean when we say we want a society with a high level of income mobility. We don’t want a society where the top 10% can, by little more than chance, find themselves becoming the top 0.01%; we want a society where you don’t have to be in the top 10% to live well in the first place.

Even without network externalities the Internet still nurtures winner-takes-all markets, because digital information can be copied infinitely. When it comes to sandwiches or even cars, each new one is costly to make and costly to transport; it can be more cost-effective to choose the ones that are made near you even if they are of slightly lower quality. But with books (especially e-books), video games, songs, or movies, each individual copy costs nothing to create, so why would you settle for anything but the best? This may well increase the overall quality of the content consumers get—but it also ensures that the creators of that content are in fierce winner-takes-all competition. Hence J.K. Rowling and James Cameron on the one hand, and millions of authors and independent filmmakers barely scraping by on the other. Compare a field like engineering; you probably don’t know a lot of rich and famous engineers (unless you count engineers who became CEOs like Bill Gates and Thomas Edison), but neither is there a large segment of “starving engineers” barely getting by. Though the richest engineers (CEOs excepted) are not nearly as rich as the richest authors, the typical engineer is much better off than the typical author, because engineering is not nearly as winner-takes-all.

But the main topic for today is actually patent rents. These are a greatly underappreciated segment of our economy, and they grow more important all the time. A patent rent is more or less what it sounds like; it’s the extra money you get from owning a patent on something. You can get that money either by literally renting it—charging license fees for other companies to use it—or simply by being the only company who is allowed to manufacture something, letting you sell it at monopoly prices. It’s surprisingly difficult to assess the real value of patent rents—there’s a whole literature on different econometric methods of trying to tackle this—but one thing is clear: Some of the largest, wealthiest corporations in the world are built almost entirely upon patent rents. Drug companies, R&D companies, software companies—even many manufacturing companies like Boeing and GM obtain a substantial portion of their income from patents.

What is a patent? It’s a rule that says you “own” an idea, and anyone else who wants to use it has to pay you for the privilege. The very concept of owning an idea should trouble you—ideas aren’t limited in number; you can easily share them with others. But now think about the fact that most of these patents are owned by corporations, not by inventors themselves—and you’ll realize that our system of property rights is built around the notion that an abstract entity can own an idea—that one idea can own another.

The rationale behind patents is that they are supposed to provide incentives for innovation—in exchange for investing the time and effort to invent something, you receive a certain amount of time where you get to monopolize that product so you can profit from it. But how long should we give you? And is this really the best way to incentivize innovation?

I contend it is not; when you look at the really important, world-changing innovations, very few of them were done for patent rents, and virtually none of them were done by corporations. Jonas Salk was indignant at the suggestion that he should patent the polio vaccine; it might have made him a billionaire, but only by letting thousands of children die. (To be fair, here’s a scholar arguing that he probably couldn’t have gotten the patent even if he had wanted to—though the same scholar goes on to admit that even then, the patent incentive had basically nothing to do with why penicillin and the polio vaccine were invented.)

Who landed on the moon? Hint: It wasn’t Microsoft. Who built the Hubble Space Telescope? Not Sony. The Internet that made Google and Facebook possible was originally invented by DARPA. Even when corporations seem to do useful innovation, it’s usually by profiting from the work of individuals: Edison’s corporation stole most of its good ideas from Nikola Tesla, and by the time the Wright Brothers founded a company their most important work was already done (though at least then you could argue that they did it in order to later become rich, which they ultimately did). Universities and nonprofits brought you the laser, light-emitting diodes, fiber optics, penicillin, and the polio vaccine. Governments brought you liquid-fuel rockets, the Internet, GPS, and the microchip. Corporations brought you, uh… Viagra, the Snuggie, and Furbies. Indeed, even Google’s vaunted search algorithm was originally developed at Stanford with funding from the NSF. I can think of literally zero examples of a world-changing technology that was actually invented by a corporation in order to secure a patent. I’m hesitant to say that none exist, but clearly the vast majority of seminal inventions have been created by governments and universities.

This has always been true throughout history. Rome’s fire departments were notorious for shoddy service—and wholly privately owned—but the great aqueducts that still stand today were built as government projects. When China invented paper, turned it into the world’s first paper money, and built the Great Wall, all of it was done on government funding.

The whole idea that patents are necessary for innovation is simply a lie; even the weaker claim that patents lead to more innovation is quite hard to defend. Imagine if, instead of letting Google and Facebook patent their technology, all the money they collect in patent rents were turned into tax-funded research—is there really any doubt that the results would be better for the future of humanity? Instead of better ad-targeting algorithms, we could have had better cancer treatments, better macroeconomic models, or better spacecraft engines.

When they feel their “intellectual property” (stop and think about that phrase for a while, and it will begin to seem nonsensical) has been violated, corporations become indignant about “free-riding”; but who is really free-riding here? The people who copy music albums for free—albums that cost nothing to copy—or the corporations that make hundreds of billions of dollars selling zero-marginal-cost products built on government-invented technology over government-funded infrastructure? (Many of these companies also continue to receive tens or hundreds of millions of dollars in subsidies every year.) In the immortal words of Barack Obama, “you didn’t build that!”

Strangely, most economists seem to be supportive of patents, despite the fact that their own neoclassical models point strongly in the opposite direction. There’s no logical connection between the fixed cost of inventing a technology and the monopoly rents that can be extracted from its patent. There is some connection—albeit a very weak one—between the benefits of the technology and its monopoly profits, since people are likely to be willing to pay more for more beneficial products. But most of the really great benefits are either public goods that can’t be charged for even with patents (go ahead, try charging everyone who benefits from a space telescope’s astronomical discoveries!) or else go to people who are so needy they can’t possibly pay you (like anti-malaria drugs in Africa), so that willingness-to-pay link really doesn’t get you very far.
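
To see why the models point that way, here is a minimal textbook sketch (my own illustration; the linear demand curve is assumed purely for simplicity):

```latex
% Linear demand p(q) = a - bq, constant marginal cost c,
% and a fixed invention cost F. The monopolist maximizes
% \pi(q) = (a - bq)q - cq - F.
\[
  \frac{d\pi}{dq} = a - 2bq - c = 0
  \quad\Longrightarrow\quad
  q^* = \frac{a - c}{2b}, \qquad p^* = \frac{a + c}{2},
\]
\[
  \pi^* = \frac{(a - c)^2}{4b} - F,
  \qquad
  \text{DWL} = \frac{(a - c)^2}{8b}.
\]
% The rent term (a - c)^2/(4b) is set entirely by demand (a, b) and
% production cost (c); the fixed invention cost F never appears in it.
% The deadweight-loss triangle DWL is what monopoly pricing destroys
% on top of what it extracts.
```

However much it cost to invent the thing, the rent is the same; a patent rewards whatever happens to be monopolizable, not whatever was costly or valuable to create.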

I guess a lot of neoclassical economists still seem to believe that willingness-to-pay is actually a good measure of utility, so maybe that’s what’s going on here; if it were, we could at least say that patents are a second-best solution to incentivizing the most important research.

But even then, why settle for the second-best solution when we have the best one? Why not devote more of our society’s resources to the governments and universities that have centuries of superior track record in innovation? When this is proposed, the deadweight loss of taxation is always brought up—but somehow the deadweight loss of monopoly rents never seems to bother anyone. At least taxes can be designed to minimize deadweight loss—and democratic governments actually have incentives to do so; corporations have no interest whatsoever in minimizing the deadweight loss they create, so long as their profit is maximized.

I’m not saying we shouldn’t have corporations at all—they are very good at one thing and one thing only, and that is manufacturing physical goods. Cars and computers should continue to be made by corporations—but their technologies are best invented by government. Will this dramatically reduce the profits of corporations? Of course—but I have difficulty seeing that as anything but a good thing.

Why am I talking so much about patents, when I said the topic was robots? Well, it’s largely because of the way these patents are assigned that robots taking people’s jobs becomes a bad thing. The patent is owned by the company, which is owned by the shareholders; so when the company makes more money by using robots instead of workers, the workers lose.

If, when a robot took your job, you simply received the income produced by that robot as capital income, you’d probably be better off—you’d get paid more, and you wouldn’t have to work. (Of course, if you define yourself by your career, or can’t stand the idea of getting “handouts”, you might still be unhappy losing your job even though you still get paid for it.)

There’s a subtler problem here though; robots could have a comparative advantage without having an absolute advantage—that is, they could produce less than the workers did before, but at a much lower cost. Where it cost $5 million in wages to produce $10 million in products, it might cost only $3 million in robot maintenance to produce $9 million in products. Hence you can’t just say that we should give the extra profits to the workers; in some cases those extra profits only exist because we are no longer paying the workers.
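
To make the arithmetic explicit, here is a minimal sketch in Python, using exactly the hypothetical figures above:

```python
# Hypothetical figures from the example above: the robots produce *less*
# than the workers did, but at a much lower cost, so the switch is still
# profitable for the firm.

def profit(revenue: float, cost: float) -> float:
    """Profit is simply revenue minus cost."""
    return revenue - cost

workers = profit(revenue=10e6, cost=5e6)  # $10M in products, $5M in wages
robots = profit(revenue=9e6, cost=3e6)    # $9M in products, $3M in maintenance

print(f"Profit with workers: ${workers:,.0f}")  # $5,000,000
print(f"Profit with robots:  ${robots:,.0f}")   # $6,000,000
```

The extra $1 million of profit exists only because the $5 million wage bill vanished; output actually fell by $1 million. There is no surplus “produced by the robot” large enough to replace the workers’ wages in full, which is exactly why you can’t just hand the extra profits to the workers.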

As a society, we still want those transactions to happen, because producing less at lower cost can still make our economy more efficient and more productive than it was before. Those displaced workers can—in theory at least—go on to other jobs where they are needed more.

The problem is that this often doesn’t happen, or it takes such a long time that workers suffer in the meantime. Hence the Luddites; they don’t want to be made obsolete even if it does ultimately make the economy more productive.

But this is where patents become important. The robots were probably invented at a university, but then a corporation took them, patented them, and is now selling them to other corporations at a monopoly price. The manufacturing company that buys the robots has to spend more to use them, which drives its profits down unless it stops paying its workers.

If instead those robots were cheap because there were no patents and we were only paying for the manufacturing costs, the workers could be shareholders in the company and the increased efficiency would allow both the employers and the workers to make more money than before.
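
Continuing the invented figures from the sketch above (again, every number here is hypothetical), suppose $1 million of the $3 million robot cost was actually patent rent:

```python
# If $1M of the $3M robot cost was a hypothetical patent license fee,
# removing the patent frees up that $1M; here it is split between the
# employer and the worker-shareholders, so both do better than before.

revenue = 9e6                # $9M in products, as before
cost_with_patents = 3e6      # $3M: maintenance plus license fees
patent_rent = 1e6            # the (invented) pure-rent share of that cost

cost_without_patents = cost_with_patents - patent_rent
total_profit = revenue - cost_without_patents     # $7M

worker_dividends = patent_rent / 2                # workers' half of the rent
employer_profit = total_profit - worker_dividends

print(f"Employer profit:  ${employer_profit:,.0f}")   # $6,500,000 (was $6M)
print(f"Worker dividends: ${worker_dividends:,.0f}")  # $500,000 (was $0)
```

Under this split the employer earns more than under the patented regime and the workers collect dividends they would otherwise never have seen; the only loser is the patent-holder.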

But should workers be able to keep their shares after they leave the company? There is a real downside here: once you get your shares, why stay at the company? Yet we call that a “golden parachute” when CEOs do it, which they do all the time; most economists are in favor of stock-based compensation for CEOs, and once again I’m having trouble seeing why it’s okay when rich people do it but not when middle-class people do.

Another alternative would be my favorite policy, the basic income: If everyone knows they can depend on a basic income, losing your job to a robot isn’t such a terrible outcome. If the basic income is designed to grow with the economy, then the increased efficiency also raises everyone’s standard of living, as economic growth is supposed to do—instead of simply increasing the income of the top 0.01% and leaving everyone else where they were. (There is a good reason not to make the basic income track economic growth too closely, namely the business cycle; you don’t want the basic income payments to fall in a recession, because that would make the recession worse. Instead they should be smoothed out over multiple years or designed to follow a nominal GDP target, so that they continue to rise even in a recession.)
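
As a rough sketch of what “smoothed out over multiple years” might look like (entirely illustrative: the function, the five-year window, and every figure are invented), the payment could track a trailing average of nominal GDP rather than the current year’s figure:

```python
# A hypothetical smoothing rule: index the basic income to a trailing
# multi-year average of nominal GDP, so payments keep rising through a
# one-year recession instead of falling with it. All numbers are invented.

def smoothed_basic_income(ngdp_history: list[float],
                          base_income: float,
                          base_ngdp: float,
                          window: int = 5) -> float:
    """Scale the basic income by the ratio of trailing-average NGDP
    to its base-year level."""
    recent = ngdp_history[-window:]
    trailing_avg = sum(recent) / len(recent)
    return base_income * (trailing_avg / base_ngdp)

# NGDP in $ trillions, with a recession in the final year:
ngdp = [20.0, 20.8, 21.6, 22.5, 21.9]
payment = smoothed_basic_income(ngdp, base_income=12_000, base_ngdp=20.0)
print(f"${payment:,.0f}")  # $12,816: still above the base despite the downturn
```

Because the recession year is averaged against four years of growth, the payment rises rather than falls, which is precisely the counter-cyclical behavior described above.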

We could also combine this with expanded unemployment insurance (explain to me again why you can’t collect unemployment if you weren’t working full-time before being laid off, even if you wanted to be working full-time, or you’re a full-time student?) and active labor-market policies that help people retrain and find new and better jobs. These policies also help people who are displaced for reasons other than robots making their jobs obsolete; obviously there are all sorts of market conditions that can lead to people losing their jobs, and many of these we actually want to happen, because they reallocate the resources of our society to more efficient ends.

Why aren’t these sorts of policies on the table? I think it’s largely because we don’t think of it in terms of distributing goods—we think of it in terms of paying for labor. Since the worker is no longer laboring, why pay them?

This sounds reasonable at first, but consider this: Why give that money to the shareholder? What did they do to earn it? All they do is own a piece of the company. They may not have contributed to the goods at all. Honestly, on a pay-for-work basis, we should be paying the robot!

If it bothers you that the worker collects dividends even when he’s not working, why doesn’t it bother you that shareholders do exactly the same thing? By definition, a shareholder is paid according to what they own, not what they do. All this reform would do is make workers into owners.

If you justify the shareholder’s wealth by his past labor, again you can do exactly the same to justify worker shares. (And as I said above, if you’re worried about the moral hazard of workers collecting shares and leaving, you should worry just as much about golden parachutes.)

You can even justify a basic income this way: You paid taxes so that you could live in a society that would protect you from losing your livelihood—and if you’re just starting out, your parents paid those taxes, and you will soon enough. Theoretically there could be “welfare queens” who live their whole lives on the basic income, but empirical data show that very few people actually want to do this; when given opportunities, most people try to find work. Indeed, even those who don’t rarely seem to be motivated by greed (even though, capitalists tell us, “greed is good”); instead they seem to be demotivated by learned helplessness after trying and failing for so long. They don’t actually want to sit on the couch all day and collect welfare payments; they simply don’t see how they can compete in the modern economy well enough to actually make a living from work.

One thing is certain: We need to detach income from labor. As a society we need to get over the idea that a human being’s worth is decided by the amount of work they do for corporations. We need to get over the idea that our purpose in life is a job, a career, in which our lives are defined by the work we do that can be neatly monetized. (I admit, I suffer from the same cultural blindness at times, feeling like a failure because I can’t secure the high-paying and prestigious employment I want. I feel this clear sense that my society does not value me because I am not making money, and it damages my ability to value myself.)

As robots do more and more of our work, we will need to redefine the way we live by something else, like play, or creativity, or love, or compassion. We will need to learn to see ourselves as valuable even if nothing we do ever sells for a penny to anyone else.

A basic income can help us do that; it can redefine our sense of what it means to earn money. Instead of the default being that you receive nothing because you are worthless unless you work, the default would be that you receive enough to live on because you are a human being of dignity and a citizen. This is already the experience of people who have substantial amounts of capital income; they can fall back on their dividends if they ever can’t, or don’t want to, find employment. A basic income would turn us all into capital owners, shareholders in the centuries of capital our forebears have built up in the form of roads, schools, factories, research labs, cars, airplanes, satellites, and yes—robots.