The era of the eurodollar is upon us

Oct 16 JDN 2459869

I happen to be one of those weirdos who liked the game Cyberpunk 2077. It was hardly flawless, and had many unforced errors (like letting you choose your gender, but not making voice type independent from pronouns? That has to be, like, three lines of code to make your game significantly more inclusive). But overall I thought it did a good job of representing a compelling cyberpunk world that is dystopian but not totally hopeless, and had rich, compelling characters, along with reasonably good gameplay. The high level of character customization sets a new standard (aforementioned errors notwithstanding), and I for one appreciate how they pushed the envelope for sexuality in a AAA game.

It’s still not explicit—though I’m sure there are mods for that—but at least you can in fact get naked, and people talk about sex in a realistic way. It’s still weird to me that showing a bare breast or a penis is seen as ‘adult’ in the same way as showing someone’s head blown off (Remind me: Which of the three will nearly everyone have seen from the time they were a baby? Which will at least 50% of children see from birth, guaranteed, and virtually 100% of adults sooner or later? Which can you see on Venus de Milo and David?), but it’s at least some progress in our society toward a healthier relationship with sex.

A few things about the game’s world still struck me as odd, though. Chief among them is the weird alternate history where apparently we have experimental AI and mind-uploading in the 2020s, but… those things are still experimental in the 2070s? So our technological progress was through the roof for the early 2000s, and then just completely plateaued? They should have had Johnny Silverhand’s story take place in something like 2050, not 2023. (You could leave essentially everything else unchanged! V could still have grown up hearing tales of Silverhand’s legendary exploits, because 2050 was 27 years ago in 2077; canonically, V is 28 years old when the game begins. Honestly it makes more sense in other ways: Rogue looks like she’s in her 60s, not her 80s.)

Another weird thing is the currency they use: They call it the “eurodollar”, and the symbol is, as you might expect, €$. When the game first came out, that seemed especially ridiculous, since euros were clearly worth more than dollars and basically always had been.

Well, they aren’t anymore. In fact, euros and dollars are now trading almost exactly at parity, and have been for weeks. CD Projekt Red was right: In the 2020s, the era of the eurodollar is upon us after all.

Of course, we’re unlikely to actually merge the two currencies any time soon. (Can you imagine how Republicans would react if such a thing were proposed?) But the weird thing is that we could! It’s almost as if the two currencies are interchangeable—for the first time in history.

It isn’t so much that the euro is weak; it’s that the dollar is strong. When I first moved to the UK, the pound was trading at about $1.40. It is now trading at $1.10! If it continues dropping as it has, it could even reach parity as well! We might have, for the first time in history, the dollar, the pound, and the euro functioning as one currency. Get the Canadian dollar too (currently much too weak), and we’ll have the Atlantic Union dollar I use in some of my science fiction (I imagine the AU as an expansion of NATO into an economic union that gradually becomes its own government).

Then again, the pound is especially weak right now because it plunged after the new prime minister announced an utterly idiotic economic plan. (Conservatives refusing to do basic math and promising that tax cuts would fix everything? Why, it felt like being home again! In all the worst ways.)

This is largely a bad thing. A strong dollar makes US exports more expensive for other countries, so they will buy less from us and the US trade deficit will widen; meanwhile, with their stronger dollars, Americans will buy more imports from other countries. The combination of these two effects will make inflation worse in other countries (though it could reduce it in the US).

It’s not so bad for me personally, as my husband’s income is largely in dollars while our expenses are in pounds. (My income is in pounds and thus unaffected.) So a strong dollar and a weak pound mean our real household income is about £4,000 higher than it would otherwise have been—which is not a small difference!

In general, the level of currency exchange rates isn’t very important. It’s changes in exchange rates that matter. The changes in relative prices will shift around a lot of economic activity, causing friction both in the US and in its (many) trading partners. Eventually all those changes should result in the exchange rates converging to a new, stable equilibrium; but that can take a long time, and exchange rates can fluctuate remarkably fast. In the meantime, such large shifts in exchange rates are going to cause even more chaos in a world already shaken by the COVID pandemic and the war in Ukraine.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another. Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life. And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia, as Roman gladiators were often paid to hawk products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, since those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, that newspaper being sold by barking boys on the street likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are specific only to you, based on any information that these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check if it’s working is to see if websites give you ads for things you know you’d never buy.

In fact, I consider it a very welcome recent development that video streaming is finally a way to watch TV shows by actually paying for them instead of having someone else pay for the right to shove ads in my face. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that fact. Having to spend 15 minutes of each hour of TV on commercials may not seem so bad—in fact, many people may feel that they’d rather do that than pay the money to avoid it. But think about it this way: If it weren’t worth at least that much to the corporations buying those ads, they wouldn’t do it. And if a corporation expects those ads to get $X out of you that you wouldn’t otherwise have paid, that means they’re getting you to buy something you didn’t need. Perhaps it’s better after all to spend that $X on entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s about $2.40 per month, or $0.08 per day.
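
If you want to check that arithmetic, here is a quick back-of-the-envelope sketch in Python; the user count and revenue figure are just the rounded numbers quoted above, not official accounting data.

```python
# Back-of-the-envelope check of the figures above (rounded inputs, not official data).
facebook_users = 2.8e9       # approximate users across Facebook, Instagram, WhatsApp
annual_revenue = 80e9        # approximate annual revenue in dollars

per_user_year = annual_revenue / facebook_users
print(f"per user per year:  ${per_user_year:.2f}")        # ~$28.57
print(f"per user per month: ${per_user_year / 12:.2f}")   # ~$2.38
print(f"per user per day:   ${per_user_year / 365:.3f}")  # ~$0.078
```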

The New York Times is already shedding its ad-supported business model; less than $400 million of its $1.8 billion revenue last year was from ads, the rest being primarily from subscriptions. But smaller media outlets have a much harder time gaining subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of 1%, while a typical cost-per-click rate is about $0.60, so the ads on an average webpage make its owner a whopping $0.006 per view. That’s not even a single cent. So if this new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. They could quintuple their revenue!
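
Again, a quick sketch of that calculation, using the rough clickthrough and cost-per-click figures quoted above:

```python
# Rough per-pageview ad revenue, using the typical rates quoted above.
clickthrough_rate = 0.01   # ~1% of pageviews produce a click
cost_per_click = 0.60      # ~$0.60 paid per click

revenue_per_view = clickthrough_rate * cost_per_click
print(f"ad revenue per pageview: ${revenue_per_view:.3f}")   # $0.006

# Compare with hypothetical micropayments of one cent and five cents per article:
for price in (0.01, 0.05):
    print(f"${price:.2f} per article is {price / revenue_per_view:.1f}x the ad revenue")
```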

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
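
To make the bookkeeping concrete, here is a minimal sketch of how that token cycle might work. The class names, the 98-cents-on-the-dollar buyback rate, and everything beyond the $20 purchase, $0.01 token, and $20 redemption minimum described above are my own illustrative assumptions, not a real payment API.

```python
# A minimal sketch of the token scheme described above. All names and rates are
# illustrative assumptions; a real system would need secure storage, authentication, etc.

TOKEN_VALUE_CENTS = 1        # each token is worth one cent
REDEMPTION_MINIMUM = 2000    # sites redeem only once they hold >= $20 in tokens
BUYBACK_RATE = 0.98          # issuer buys tokens back slightly below face value (assumed)

class Issuer:
    """Sells tokens to readers in one bundled card charge and buys them back from sites."""
    def sell_tokens(self, dollars: int) -> int:
        return dollars * 100 // TOKEN_VALUE_CENTS     # $20 -> 2000 tokens

    def redeem(self, tokens: int) -> float:
        if tokens < REDEMPTION_MINIMUM:
            raise ValueError("not enough tokens to redeem yet")
        return tokens * TOKEN_VALUE_CENTS / 100 * BUYBACK_RATE   # dollars paid to the site

class Site:
    def __init__(self):
        self.tokens_held = 0

class Reader:
    def __init__(self, issuer: Issuer, dollars: int):
        self.tokens = issuer.sell_tokens(dollars)     # one ordinary credit card transaction

    def pay(self, site: Site, price_tokens: int):
        if self.tokens < price_tokens:
            raise ValueError("buy more tokens first")
        self.tokens -= price_tokens
        site.tokens_held += price_tokens

issuer, site = Issuer(), Site()
reader = Reader(issuer, dollars=20)      # 2000 tokens in one card transaction
for _ in range(400):                     # 400 articles at 5 tokens ($0.05) each
    reader.pay(site, price_tokens=5)

print(reader.tokens)                     # 0 -- the 400-article example above
print(issuer.redeem(site.tokens_held))   # 19.6 -- site redeems $20 of tokens for ~$19.60
```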

There are a number of technical issues to be resolved here: How to keep the tokens secure, and how to guarantee that once a user purchases access to an article they will continue to have access to it, ideally even if they clear their cache, delete all cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad-targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. Those fines might actually be enough to make it no longer worth the while to take sensitive data, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

In honor of Pi Day, I for one welcome our new robot overlords

JDN 2457096 EDT 16:08

Despite my preference to use the Julian Date Number system, it has not escaped my attention that this weekend was Pi Day of the Century, 3/14/15. Yesterday morning we had the Moment of Pi: 3/14/15 9:26:53.58979… We arguably got an encore that evening if we allow 9:00 PM instead of 21:00.

Though perhaps it is a stereotype and/or cheesy segue, pi and associated mathematical concepts are often associated with computers and robots. Robots are an increasing part of our lives, from the industrial robots that manufacture our cars to the precision-timed satellites that provide our GPS navigation. When you want to know how to get somewhere, you pull out your pocket thinking machine and ask it to commune with the space robots who will guide you to your destination.

There are obvious upsides to these robots—they are enormously productive, and allow us to produce great quantities of useful goods at astonishingly low prices, including computers themselves, creating a positive feedback loop that has literally lowered the price of a given amount of computing power by a factor of one trillion in the latter half of the 20th century. We now very much live in the early parts of a cyberpunk future, and it is due almost entirely to the power of computer automation.

But if you know your SF you may also remember another major part of cyberpunk futures aside from their amazing technology; they also tend to be dystopias, largely because of their enormous inequality. In the cyberpunk future corporations own everything, governments are virtually irrelevant, and most individuals can barely scrape by—and that sounds all too familiar, doesn’t it? This isn’t just something SF authors made up; there really are a number of ways that computer technology can exacerbate inequality and give more power to corporations.

Why? The reason that seems to get the most attention among economists is skill-biased technological change; that’s weird because it’s almost certainly the least important. The idea is that computers can automate many routine tasks (no one disputes that part) and that routine tasks tend to be the sort of thing that uneducated workers generally do more often than educated ones (already this is looking fishy; think about accountants versus artists). But educated workers are better at using computers and the computers need people to operate them (clearly true). Hence while uneducated workers are substitutes for computers—you can use the computers instead—educated workers are complements for computers—you need programmers and engineers to make the computers work. As computers get cheaper, their substitutes also get cheaper—and thus wages for uneducated workers go down. But their complements get more valuable—and so wages for educated workers go up. Thus, we get more inequality, as high wages get higher and low wages get lower.

Or, to put it more succinctly, robots are taking our jobs. Not all our jobs—actually they’re creating jobs at the top for software programmers and electrical engineers—but a lot of our jobs, like welders and metallurgists and even nurses. As the technology improves more and more jobs will be replaced by automation.
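
You can see the substitutes/complements logic in a toy model. The sketch below assumes a simple Cobb-Douglas production function in which computers and uneducated ("routine") labor are perfect substitutes while educated labor is a complement; the functional form and all the numbers are illustrative assumptions, not estimates from the literature.

```python
# Toy illustration of the substitutes/complements logic. Assumes output
# Y = A * R^a * S^(1-a), where R = routine input (uneducated labor + computers,
# treated as perfect substitutes) and S = skilled (educated) labor.
A, a = 1.0, 0.5     # productivity and routine-input share (made-up numbers)
S = 100.0           # fixed supply of skilled labor

def equilibrium_wages(computer_price):
    # Firms buy routine input until its marginal product equals the computer price,
    # so the uneducated wage is pinned to the computer price:
    #   MP_R = a*A*R^(a-1)*S^(1-a) = computer_price  =>  solve for R.
    R = (a * A * S ** (1 - a) / computer_price) ** (1 / (1 - a))
    skilled_wage = (1 - a) * A * R ** a * S ** (-a)   # marginal product of skilled labor
    return computer_price, skilled_wage

for p in (1.0, 0.5, 0.1):   # computers getting cheaper over time
    unskilled, skilled = equilibrium_wages(p)
    print(f"computer price {p:.2f}: uneducated wage {unskilled:.2f}, educated wage {skilled:.2f}")
# As the computer price falls, the uneducated wage falls with it while the
# educated wage rises -- the inequality mechanism described above.
```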

The theory seems plausible enough—and in some form is almost certainly true—but as David Card has pointed out, this fails to explain most of the actual variation in inequality in the US and other countries. Card is one of my favorite economists; he is also famous for completely revolutionizing the economics of the minimum wage, showing that the prevailing theory that minimum wages must hurt employment simply doesn’t match the empirical data.

If it were just that college education is getting more valuable, we’d see a rise in income for roughly the top 40%, since over 40% of American adults have at least an associate’s degree. But we don’t actually see that; in fact contrary to popular belief we don’t even really see it in the top 1%. The really huge increases in income for the last 40 years have been at the top 0.01%—the top 1% of 1%.

Many of the jobs that are now automated also haven’t seen a fall in income; despite the fact that high-frequency trading algorithms do what stockbrokers do a thousand times better (“better” at making markets more unstable and siphoning wealth from the rest of the economy, that is), stockbrokers have seen no such loss in income. Indeed, they simply appropriate the additional income from those computer algorithms—which raises the question of why welders couldn’t do the same thing. As I’ll argue in a moment, that is exactly what we must do: the robot revolution must also come with a revolution in property rights and income distribution.

No, the real reasons why technology exacerbates inequality are twofold: Patent rents and the winner-takes-all effect.

In an earlier post I already talked about the winner-takes-all effect, so I’ll just briefly summarize it this time around. Under certain competitive conditions, a small fraction of individuals can reap a disproportionate share of the rewards despite being only slightly more productive than those beneath them. This often happens when we have network externalities, in which a product becomes more valuable when more people use it, thus creating a positive feedback loop that makes already-successful products wildly more so and consigns unsuccessful ones to obscurity.
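
A toy simulation shows how strong that feedback loop can be: if each new user simply picks a product in proportion to its existing user base, tiny random leads early on snowball into dominance. This is purely an illustration (a Polya-urn-style process), not a model of any particular market.

```python
# Minimal simulation of the network-externality feedback loop described above.
import random

random.seed(1)
products = [1, 1, 1, 1, 1]   # five rival products, one seed user each

for _ in range(100_000):     # new users arrive one at a time
    # Each newcomer picks a product with probability proportional to its user base.
    i = random.choices(range(len(products)), weights=products)[0]
    products[i] += 1         # the rich get richer

total = sum(products)
print([f"{u / total:.1%}" for u in products])
# Even though the products start identical, the final shares are typically far
# from an even 20% split: an early random lead snowballs into dominance.
```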

Computer technology—more specifically, the Internet—is particularly good at creating such situations. Facebook, Google, and Amazon are all examples of companies that (1) could not exist without Internet technology and (2) depend almost entirely upon network externalities for their business model. They are the winners who take all; thousands of other software companies that were just as good or nearly so are now long forgotten. The winners are not always the same, because the system is unstable; for instance MySpace used to be much more important—and much more profitable—until Facebook came along.

But the fact that a different handful of upper-middle-class individuals can find themselves suddenly and inexplicably thrust into fame and fortune while the rest of us toil in obscurity really isn’t much comfort, now is it? While technically the rise and fall of MySpace can be called “income mobility”, it’s clearly not what we actually mean when we say we want a society with a high level of income mobility. We don’t want a society where the top 10% can by little more than chance find themselves becoming the top 0.01%; we want a society where you don’t have to be in the top 10% to live well in the first place.

Even without network externalities the Internet still nurtures winner-takes-all markets, because digital information can be copied infinitely. When it comes to sandwiches or even cars, each new one is costly to make and costly to transport; it can be more cost-effective to choose the ones that are made near you even if they are of slightly lower quality. But with books (especially e-books), video games, songs, or movies, each individual copy costs nothing to create, so why would you settle for anything but the best? This may well increase the overall quality of the content consumers get—but it also ensures that the creators of that content are in fierce winner-takes-all competition. Hence J.K. Rowling and James Cameron on the one hand, and millions of authors and independent filmmakers barely scraping by on the other. Compare a field like engineering; you probably don’t know a lot of rich and famous engineers (unless you count engineers who became CEOs like Bill Gates and Thomas Edison), but nor is there a large segment of “starving engineers” barely getting by. Though the richest engineers (CEOs excepted) are not nearly as rich as the richest authors, the typical engineer is much better off than the typical author, because engineering is not nearly as winner-takes-all.

But the main topic for today is actually patent rents. These are a greatly underappreciated segment of our economy, and they grow more important all the time. A patent rent is more or less what it sounds like; it’s the extra money you get from owning a patent on something. You can get that money either by literally renting it—charging license fees for other companies to use it—or simply by being the only company who is allowed to manufacture something, letting you sell it at monopoly prices. It’s surprisingly difficult to assess the real value of patent rents—there’s a whole literature on different econometric methods of trying to tackle this—but one thing is clear: Some of the largest, wealthiest corporations in the world are built almost entirely upon patent rents. Drug companies, R&D companies, software companies—even many manufacturing companies like Boeing and GM obtain a substantial portion of their income from patents.

What is a patent? It’s a rule that says you “own” an idea, and anyone else who wants to use it has to pay you for the privilege. The very concept of owning an idea should trouble you—ideas aren’t limited in number, you can easily share them with others. But now think about the fact that most of these patents are owned by corporations, not by the inventors themselves—and you’ll realize that our system of property rights is built around the notion that an abstract entity can own an idea—that one idea can own another.

The rationale behind patents is that they are supposed to provide incentives for innovation—in exchange for investing the time and effort to invent something, you receive a certain amount of time where you get to monopolize that product so you can profit from it. But how long should we give you? And is this really the best way to incentivize innovation?

I contend it is not; when you look at the really important world-changing innovations, very few of them were done for patent rents, and virtually none of them were done by corporations. Jonas Salk was indignant at the suggestion he should patent the polio vaccine; it might have made him a billionaire, but only by letting thousands of children die. (To be fair, here’s a scholar arguing that he probably couldn’t have gotten the patent even if he wanted to—but going on to admit that even then the patent incentive had basically nothing to do with why penicillin and the polio vaccine were invented.)

Who landed on the moon? Hint: It wasn’t Microsoft. Who built the Hubble Space Telescope? Not Sony. The Internet that made Google and Facebook possible was originally invented by DARPA. Even when corporations seem to do useful innovation, it’s usually by profiting from the work of individuals: Edison’s corporation stole most of its good ideas from Nikola Tesla, and by the time the Wright Brothers founded a company their most important work was already done (though at least then you could argue that they did it in order to later become rich, which they ultimately did). Universities and nonprofits brought you the laser, light-emitting diodes, fiber optics, penicillin and the polio vaccine. Governments brought you liquid-fuel rockets, the Internet, GPS, and the microchip. Corporations brought you, uh… Viagra, the Snuggie, and Furbies. Indeed, even Google’s vaunted search algorithms were originally developed with funding from the NSF. I can think of literally zero examples of a world-changing technology that was actually invented by a corporation in order to secure a patent. I’m hesitant to say that none exist, but clearly the vast majority of seminal inventions have been created by governments and universities.

This has always been true throughout history. Rome’s fire departments were notorious for shoddy service—and wholly privately-owned—but their great aqueducts that still stand today were built as government projects. When China invented paper, turned it into money, and defended it with the Great Wall, it was all done on government funding.

The whole idea that patents are necessary for innovation is simply a lie; and even the idea that patents lead to more innovation is quite hard to defend. Imagine if, instead of letting Google and Facebook patent their technology, all the money they receive in patent rents were turned into tax-funded research—frankly, is there any doubt that the results would be better for the future of humanity? Instead of better ad-targeting algorithms we could have had better cancer treatments, or better macroeconomic models, or better spacecraft engines.

When they feel their “intellectual property” (stop and think about that phrase for a while, and it will begin to seem nonsensical) has been violated, corporations become indignant about “free-riding”; but who is really free-riding here? The people who copy music albums for free—because they cost nothing to copy—or the corporations who make hundreds of billions of dollars selling zero-marginal-cost products using government-invented technology over government-funded infrastructure? (Many of these companies also continue to receive tens or hundreds of millions of dollars in subsidies every year.) In the immortal words of Barack Obama, “you didn’t build that!”

Strangely, most economists seem to be supportive of patents, despite the fact that their own neoclassical models point strongly in the opposite direction. There’s no logical connection between the fixed cost of inventing a technology and the monopoly rents that can be extracted from its patent. There is some connection—albeit a very weak one—between the benefits of the technology and its monopoly profits, since people are likely to be willing to pay more for more beneficial products. But most of the really great benefits are either in the form of public goods that are unenforceable even with patents (go ahead, try charging everyone who benefits from a space telescope’s astronomical discoveries!) or else apply to people who are so needy they can’t possibly pay you (like anti-malaria drugs in Africa), so that willingness-to-pay link really doesn’t get you very far.

I guess a lot of neoclassical economists still seem to believe that willingness-to-pay is actually a good measure of utility, so maybe that’s what’s going on here; if it were, we could at least say that patents are a second-best solution to incentivizing the most important research.

But even then, why use second-best when you have best? Why not devote more of our society’s resources to governments and universities that have centuries of superior track record in innovation? When this is proposed the deadweight loss of taxation is always brought up, but somehow the deadweight loss of monopoly rents never seems to bother anyone. At least taxes can be designed to minimize deadweight loss—and democratic governments actually have incentives to do that; corporations have no interest whatsoever in minimizing the deadweight loss they create so long as their profit is maximized.

I’m not saying we shouldn’t have corporations at all—they are very good at one thing and one thing only, and that is manufacturing physical goods. Cars and computers should continue to be made by corporations—but their technologies are best invented by government. Will this dramatically reduce the profits of corporations? Of course—but I have difficulty seeing that as anything but a good thing.

Why am I talking so much about patents, when I said the topic was robots? Well, it’s typically because of the way these patents are assigned that robots taking people’s jobs becomes a bad thing. The patent is owned by the company, which is owned by the shareholders; so when the company makes more money by using robots instead of workers, the workers lose.

If, when a robot takes your job, you simply receive the income produced by the robot as capital income, you’d probably be better off—you get paid more and you also don’t have to work. (Of course, if you define yourself by your career or can’t stand the idea of getting “handouts”, you might still be unhappy losing your job even though you still get paid for it.)

There’s a subtler problem here though; robots could have a comparative advantage without having an absolute advantage—that is, they could produce less than the workers did before, but at a much lower cost. Where it cost $5 million in wages to produce $10 million in products, it might cost only $3 million in robot maintenance to produce $9 million in products. Hence you can’t just say that we should give the extra profits to the workers; in some cases those extra profits only exist because we are no longer paying the workers.
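
Spelling out the arithmetic of that hypothetical example:

```python
# The hypothetical numbers from the paragraph above, spelled out.
wages, output_with_workers = 5_000_000, 10_000_000
maintenance, output_with_robots = 3_000_000, 9_000_000

profit_before = output_with_workers - wages        # $5 million
profit_after = output_with_robots - maintenance    # $6 million
extra_profit = profit_after - profit_before        # $1 million

print(f"extra profit from automating: ${extra_profit:,}")
print(f"wages the workers lost:       ${wages:,}")
# The extra $1 million of profit is nowhere near the $5 million in lost wages,
# so simply handing the workers "the extra profits" cannot make them whole.
```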

As a society, we still want those transactions to happen, because producing less at lower cost can still make our economy more efficient and more productive than it was before. Those displaced workers can—in theory at least—go on to other jobs where they are needed more.

The problem is that this often doesn’t happen, or it takes such a long time that workers suffer in the meantime. Hence the Luddites; they don’t want to be made obsolete even if it does ultimately make the economy more productive.

But this is where patents become important. The robots were probably invented at a university, but then a corporation took them and patented them, and is now selling them to other corporations at a monopoly price. The manufacturing company that buys the robots now has to spend more in order to use the robots, which drives their profits down unless they stop paying their workers.

If instead those robots were cheap because there were no patents and we were only paying for the manufacturing costs, the workers could be shareholders in the company and the increased efficiency would allow both the employers and the workers to make more money than before.

But what if we don’t want workers to be able to keep their shares after they leave the company? There is a real downside here, which is that once you get your shares, why stay at the company? We call that a “golden parachute” when CEOs do it, which they do all the time; but most economists are in favor of stock-based compensation for CEOs, and once again I’m having trouble seeing why it’s okay when rich people do it but not when middle-class people do.

Another alternative would be my favorite policy, the basic income: If everyone knows they can depend on a basic income, losing your job to a robot isn’t such a terrible outcome. If the basic income is designed to grow with the economy, then the increased efficiency also raises everyone’s standard of living, as economic growth is supposed to do—instead of simply increasing the income of the top 0.01% and leaving everyone else where they were. (There is a good reason not to make the basic income track economic growth too closely, namely the business cycle; you don’t want the basic income payments to fall in a recession, because that would make the recession worse. Instead they should be smoothed out over multiple years or designed to follow a nominal GDP target, so that they continue to rise even in a recession.)
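
As a rough illustration of what “smoothed out over multiple years” could mean, here is a minimal sketch; the GDP path, the three-year averaging window, and the starting payment are all made-up numbers, not a worked-out policy.

```python
# Minimal sketch of a basic income that tracks nominal GDP, smoothed over a
# multi-year window so payments keep rising through a recession. All numbers
# are made-up illustrations.
nominal_gdp = [100, 104, 108, 112, 109, 111, 116]   # index; year 5 is a recession
window = 3
base_payment = 12_000.0

payments = []
for year in range(len(nominal_gdp)):
    recent = nominal_gdp[max(0, year - window + 1): year + 1]
    smoothed = sum(recent) / len(recent)             # trailing multi-year average
    payments.append(base_payment * smoothed / nominal_gdp[0])

for year, (gdp, pay) in enumerate(zip(nominal_gdp, payments), start=1):
    print(f"year {year}: NGDP index {gdp}, basic income ${pay:,.0f}")
# Raw NGDP falls in year 5, but the smoothed payment keeps rising, so the
# basic income doesn't shrink in the middle of a recession.
```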

We could also combine this with expanded unemployment insurance (explain to me again why you can’t collect unemployment if you weren’t working full-time before being laid off, even if you wanted to be or you’re a full-time student?) and active labor market policies that help people re-train and find new and better jobs. These policies also help people who are displaced for reasons other than robots making their jobs obsolete—obviously there are all sorts of market conditions that can lead to people losing their jobs, and many of these we actually want to happen, because they involve reallocating the resources of our society to more efficient ends.

Why aren’t these sorts of policies on the table? I think it’s largely because we don’t think of it in terms of distributing goods—we think of it in terms of paying for labor. Since the worker is no longer laboring, why pay them?

This sounds reasonable at first, but consider this: Why give that money to the shareholder? What did they do to earn it? All they do is own a piece of the company. They may not have contributed to the goods at all. Honestly, on a pay-for-work basis, we should be paying the robot!

If it bothers you that the worker collects dividends even when he’s not working—why doesn’t it bother you that shareholders do exactly the same thing? By definition, a shareholder is paid according to what they own, not what they do. All this reform would do is make workers into owners.

If you justify the shareholder’s wealth by his past labor, again you can do exactly the same to justify worker shares. (And as I said above, if you’re worried about the moral hazard of workers collecting shares and leaving, you should worry just as much about golden parachutes.)

You can even justify a basic income this way: You paid taxes so that you could live in a society that would protect you from losing your livelihood—and if you’re just starting out, your parents paid those taxes and you will soon enough. Theoretically there could be “welfare queens” who live their whole lives on the basic income, but empirical data shows that very few people actually want to do this, and when given opportunities most people try to find work. Indeed, even those who don’t, rarely seem to be motivated by greed (even though, capitalists tell us, “greed is good”); instead they seem to be de-motivated by learned helplessness after trying and failing for so long. They don’t actually want to sit on the couch all day and collect welfare payments; they simply don’t see how they can compete in the modern economy well enough to actually make a living from work.

One thing is certain: We need to detach income from labor. As a society we need to get over the idea that a human being’s worth is decided by the amount of work they do for corporations. We need to get over the idea that our purpose in life is a job, a career, in which our lives are defined by the work we do that can be neatly monetized. (I admit, I suffer from the same cultural blindness at times, feeling like a failure because I can’t secure the high-paying and prestigious employment I want. I feel this clear sense that my society does not value me because I am not making money, and it damages my ability to value myself.)

As robots do more and more of our work, we will need to redefine the way we live by something else, like play, or creativity, or love, or compassion. We will need to learn to see ourselves as valuable even if nothing we do ever sells for a penny to anyone else.

A basic income can help us do that; it can redefine our sense of what it means to earn money. Instead of the default being that you receive nothing because you are worthless unless you work, the default is that you receive enough to live on because you are a human being of dignity and a citizen. This is already the experience of people who have substantial amounts of capital income; they can fall back on their dividends if they ever can’t or don’t want to find employment. A basic income would turn us all into capital owners, shareholders in the centuries of established capital that has been built by our forebears in the form of roads, schools, factories, research labs, cars, airplanes, satellites, and yes—robots.