Could the Star Trek economy really work?

Jun 13 JDN 2459379

“The economics of the future are somewhat different”, Jean-Luc Picard explains to Lily Sloane in Star Trek: First Contact.

Captain Picard’s explanation is not very thorough, and all we know about the economic system of the Federation comes from similar short glimpses across the various Star Trek films and TV series. The best glimpses of what Earth’s economy is like come largely from the Picard series in particular.

But I think we can safely conclude that all of the following are true:

1. Energy is extraordinarily abundant, with a single individual having access to an energy scale that would rival the energy production of entire nations at present. By E=mc^2, materializing a hamburger from raw energy would require on the order of 10^16 joules (a few billion kilowatt-hours), and teleporting a human being closer to 10^19 joules—and both seem to be routine in Starfleet. The total energy supply of the world economy today is about 6*10^20 joules, or roughly 170 trillion kilowatt-hours. (See the sketch after this list for the arithmetic.)

2. There is broad-based prosperity, but not absolute equality. At the very least different people live differently, though it is unclear whether anyone actually has a better standard of living than anyone else. The Picard family still seems to own their family vineyard that has been passed down for generations, and since the population of Earth is given as about 9 billion (a plausible but perhaps slightly low figure for our long-run stable population equilibrium), its acreage is large enough that clearly not everyone on Earth can own that much land.

3. Most resources that we currently think of as scarce are not scarce any longer. Replicator technology allows for the instantaneous production of food, clothing, raw materials, even sophisticated electronics. There is no longer a “manufacturing sector” as such; there are just replicators and people who use or program them. Most likely, even new replicators are made by replicating parts in other replicators and then assembling them. There are a few resources which remain scarce, such as dilithium (somehow involved in generating these massive quantities of energy) and latinum (a bizarre substance that is prized by many other cultures yet for unexplained reasons cannot be viably produced in replicators). Essentially everything else that is scarce is inherently so, such as front-row seats at concerts, original paintings, officer commissions in Starfleet, or land in San Francisco.

4. Interplanetary and even interstellar trade is routine. Starships with warp capability are available to both civilian and government institutions, and imports and exports can be made to planets dozens or even hundreds of light-years away as quickly as we can currently traverse the oceans with a container ship.

5. Money as we know it does not exist. People are not paid wages or salaries for their work. There is still some ownership of personal property, and particular families (including the Picards) seem to own land; but there does not appear to be any private ownership of capital. For that matter there doesn’t even appear to be much in the way of capital; we never see any factories. There is obviously housing, there is infrastructure such as roads, public transit, and presumably power plants (very, very powerful power plants, see 1!), but that may be all. Nearly all manufacturing seems to be done by replicators, and what can’t be done by replicators (e.g. building new starships) seems to be orchestrated entirely by state-owned enterprises such as Starfleet.
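
Since these energy figures do a lot of work in item 1, here is the arithmetic in a few lines of Python (the 70 kg human and 150 g hamburger are my own round numbers):

```python
C = 3.0e8          # speed of light, m/s
J_PER_KWH = 3.6e6  # joules per kilowatt-hour

def rest_mass_energy(mass_kg):
    """E = m c^2, in joules."""
    return mass_kg * C**2

for label, joules in [
    ("70 kg human", rest_mass_energy(70)),        # ~6.3e18 J
    ("150 g hamburger", rest_mass_energy(0.15)),  # ~1.4e16 J
    ("world energy supply, per year", 6e20),      # rough current figure
]:
    print(f"{label}: {joules:.1e} J = {joules / J_PER_KWH:.1e} kWh")
```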

Could such an economy actually work? Let’s stipulate that we really do manage to achieve such an extraordinary energy scale, millions of times more than what we can currently produce. Even very cheap, widespread nuclear energy would not be enough to make this plausible; we would need at least abundant antimatter, and quite likely something even more exotic, like zero-point energy. Along with this come some horrifying risks—imagine an accident at a zero-point power plant that tears a hole in the fabric of space next to a major city, or a fanatical terrorist with a handheld 20-megaton antimatter bomb. But let’s assume we’ve found ways to manage those risks as well.

Furthermore, let’s stipulate that it’s possible to build replicators and warp drives and teleporters and all the similarly advanced technology that the Federation has, much of which is so radically advanced we can’t even be sure that such a thing is possible.

What I really want to ask is whether it’s possible to sustain a functional economy at this scale without money. Gene Roddenberry clearly seemed to think so. I am less convinced.

First of all, I want to acknowledge that there have been human societies which did not use money, or even any clear notion of a barter system. In fact, most human cultures for most of our history as a species allocated resources based on collective tribal ownership and personal favors. Some of the best parts of Debt: The First 5,000 Years are about these different ways of allocating resources, which actually came much more naturally to us than money.

But there seem to have been rather harsh constraints on what sort of standard of living could be maintained in such societies. There was essentially zero technological advancement for thousands of years in most hunter-gatherer cultures, and even the wealthiest people in those societies generally had worse health, shorter lifespans, and far, far less access to goods and services than people we would consider in poverty today.

Then again, perhaps money is only needed to catalyze technological advancement; perhaps once you’ve already got all the technology you need, you can take money away and return to a better way of life without greed or inequality. That seems to be what Star Trek is claiming: That once we can make a sandwich or a jacket or a phone or even a car at the push of a button, we won’t need to worry about paying people because everyone can just have whatever they need.

Yet whatever they need is quite different from whatever they want, and therein lies the problem. Yes, I believe that with even moderate technological advancement—the sort of thing I expect to see in the next 50 years, not the next 300—we will have sufficient productivity that we could provide for the basic needs of every human being on Earth. A roof over your head, food on your table, clothes to wear, a doctor and a dentist to see twice a year, emergency services, running water, electricity, even Internet access and public transit—these are things we could feasibly provide to literally everyone with only about two or three times our current level of GDP, which means only about 2% annual economic growth for the next 50 years. Indeed, we could already provide them for every person in First World countries, and it is quite frankly appalling that we fail to do so.

However, most of us in the First World already live a good deal better than that. We don’t have the most basic housing possible, we have nice houses we want to live in. We don’t take buses everywhere, we own our own cars. We don’t eat the cheapest food that would provide adequate nutrition, we eat a wide variety of foods; we order pizza and Chinese takeout, and even eat at fancy restaurants on occasion. It’s less clear that we could provide this standard of living to everyone on Earth—but if economic growth continues long enough, maybe we can.

Worse, most of us would like to live even better than we do. My car is several years old right now, and it runs on gasoline; I’d very much like to upgrade to a brand-new electric car. My apartment is nice enough, but it’s quite small; I’d like to move to a larger place that would give me more space not only for daily living, but also for storage and for entertaining guests. I work comfortable hours for decent pay at a white-collar job that can be done entirely remotely on mostly my own schedule, but I’d prefer to take some time off and live independently while I focus more on my own writing. I sometimes enjoy cooking, but often it can be a chore, and sometimes I wish I could just go eat out at a nice restaurant for dinner every night. The reason I don’t make all these changes is simple: I can’t afford to—that is, I don’t have the money.

Perhaps most of us would feel no need to have a billion dollars. I don’t really know what $100 billion actually gets you, as far as financial security, independence, or even consumption, that $50 million wouldn’t already. You can have total financial freedom and security, with a middle-class American lifestyle, at a net wealth of about $2 million. If you also want to live in a mansion, drink Dom Pérignon with every meal, and drive a Lamborghini (which, quite frankly, I have no particular desire to do), you’ll need several million more—but even then you clearly don’t need $1 billion, let alone $100 billion. So there is indeed something pathological about wanting a billion dollars for yourself, and perhaps in the Federation they have mental health treatments for “wealth addiction” that prevent people from experiencing such pathological levels of greed.

Yet in fact, with the world as it stands, I would want a billion dollars. Not to own it. Not to let it sit and grow in some brokerage account. Not to simply be rich and be on the Forbes list. I couldn’t care less about those things. But with a billion dollars, I could donate enormous amounts to charities, saving thousands or even millions of lives. I could found my own institutions—research institutes, charitable foundations—and make my mark on the world. With $100 billion, I could make a serious stab at colonizing Mars—as Elon Musk seems to be doing, but in which most other billionaires show no particular interest.

And it begins to strain credulity to imagine a world of such spectacular abundance that everyone could have enough to do that.

This is why I always struggle to answer when people ask me things like “If money were no object, how would you live your life?”; if money were no object, I’d end world hunger, cure cancer, and colonize the Solar System. Money is always an object. What I think you meant to ask was something much less ambitious, like “What would you do if you had a million dollars?” But I might actually have a million dollars someday—most likely by saving and investing the proceeds of a six-figure job as an economist over many years. (Save $2,000 per month for 20 years, growing it at 7% per year, and you’ll be over $1 million. You can do your own calculations here.) I doubt I’ll ever have $10 million, and I’m pretty sure I’ll never have $1 billion.
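
If you want to check that savings arithmetic, here’s the standard future value of a fixed monthly contribution; I’m assuming the 7% annual return compounds monthly:

```python
def future_value(monthly, annual_rate, years):
    """Future value of a fixed monthly contribution (ordinary annuity)."""
    r = annual_rate / 12   # monthly growth rate
    n = years * 12         # number of contributions
    return monthly * ((1 + r) ** n - 1) / r

print(f"${future_value(2000, 0.07, 20):,.0f}")  # ~$1.04 million
```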

To be fair, it seems that many of the grand ambitions I would want to achieve with billions of dollars are already achieved by the 23rd century; world hunger has definitely been ended, cancer seems to have been largely cured, and we have absolutely colonized the Solar System (and well beyond). But that doesn’t mean that new grand ambitions wouldn’t arise, and indeed I think they would. What if I wanted to command my own fleet of starships? What if I wanted a whole habitable planet to conduct experiments on, perhaps creating my own artificial ecosystem? The human imagination is capable of quite grand ambitions, and it’s unlikely that we could ever satisfy all of them for everyone.

Some things are just inherently scarce. I already mentioned some earlier: Original paintings, front-row seats, officer commissions, and above all, land. There’s only so much land that people want to live on, especially because people generally want to live near other people (Internet access could conceivably reduce the pressure for this, but, uh, so far it really hasn’t, so why would we think it will in 300 years?). Even if it’s true that people can have essentially arbitrary amounts of food, clothing, or electronics, the fact remains that there’s only so much real estate in San Francisco.

It would certainly help to build taller buildings, and presumably they would, though most of the depictions don’t really seem to show that; where are the 10-kilometer-tall skyscrapers made of some exotic alloy or held up by structural integrity fields? (Are the forces of NIMBY still too powerful?) But can everyone really have a 1000-square-meter apartment in the center of downtown? Maybe if you build tall enough? But you do still need to decide who gets the penthouse.

It’s possible that all inherently-scarce resources could be allocated by some mechanism other than money. Some even should be: Starfleet officer commissions are presumably allocated by merit. (Indeed, Starfleet seems implausibly good at selecting supremely competent officers.) Others could be: Concert tickets could be offered by lottery, and maybe people wouldn’t care so much about being in the real front row when you can always simulate the front row at home in your holodeck. Original paintings could all be placed in museums available for public access—and the tickets, too, could be allocated by lottery or simply first-come, first-served. (Picard mentions the Smithsonian, so public-access museums clearly still exist.)

Then there’s the question of how you get everyone to work, if you’re not paying them. Some jobs people will do for fun, or satisfaction, or duty, or prestige; it’s plausible that people would join Starfleet for free (I’m pretty sure I would). But can we really expect all jobs to work that way? Has automation reached such an advanced level that there are no menial jobs? Sanitation? Plumbing? Gardening? Paramedics? Police? People still seem to pick grapes by hand in the Picard vineyards; do they all do it for the satisfaction of a job well done? What happens if one day everyone decides they don’t feel like picking grapes today?

I certainly agree that most menial jobs are underpaid—most people do them because they can’t get better jobs. But surely we don’t want to preserve that? Surely we don’t want some sort of caste system that allocates people to work as plumbers or garbage collectors based on their birth? I guess we could use merit-based aptitude testing; it’s clear that the vast majority of people really aren’t cut out for Starfleet (indeed, perhaps I’m not!), and maybe some people really would be happiest working as janitors. But it’s really not at all clear what such a labor allocation system would be like. I guess if automation has reached such an advanced level that all the really necessary work is done by machines and human beings can just choose to work as they please, maybe that could work; it definitely seems like a very difficult system to manage.

So I guess it’s not completely out of the question that we could find some appropriate mechanism to allocate all goods and services without ever using money. But then my question becomes: Why? What do you have against money?

I understand hating inequality—indeed I share that feeling. I, too, am outraged by the existence of hectobillionaires in a world where people still die of malaria and malnutrition. But having a money system, or even a broadly free-market capitalist economy, doesn’t inherently have to mean allowing this absurd and appalling level of inequality. We could simply impose high, progressive taxes, redistribute wealth, and provide a generous basic income. If per-capita GDP is something like 100 times its current level (as it appears to be in Star Trek), then the basic income could be $1 million per year and still be entirely affordable.

That is, rather than trying to figure out how to design fair and efficient lotteries for tickets to concerts and museums, we could still charge for tickets, and just make sure that everyone has a million dollars a year in basic income. Instead of trying to find a way to convince people to clean bathrooms for free, we could just pay them to do it.

The taxes could even be so high at the upper brackets that they effectively impose a maximum income; say we have a 99% marginal rate above $20 million per year. Then the income inequality would collapse to quite a low level: No one below $1 million, essentially no one above $20 million. We could tax wealth as well, ensuring that even if people save or get lucky on the stock market (if we even still have a stock market—maybe that is unnecessary after all), they still can’t become hectobillionaires. But by still letting people use money and allowing some inequality, we’d still get all the efficiency gains of having a market economy (minus whatever deadweight loss such a tax system imposed—which I in fact suspect would not be nearly as large as most economists fear).
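
To make this concrete, here’s a toy version of such a schedule. Only the $1 million basic income and the 99% rate above $20 million come from the text above; the lower brackets are numbers I invented purely for illustration:

```python
BASIC_INCOME = 1_000_000

# (threshold, marginal rate): the rate applies to income above the threshold.
# Only the top bracket comes from the text; the others are hypothetical.
BRACKETS = [(0, 0.30), (5_000_000, 0.60), (20_000_000, 0.99)]

def post_tax_income(gross):
    """Net income under the toy schedule, including the basic income."""
    tax = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        tax += max(0.0, min(gross, hi) - lo) * rate
    return BASIC_INCOME + gross - tax

for gross in (0.0, 1e6, 20e6, 1e9):
    print(f"gross ${gross:>13,.0f} -> net ${post_tax_income(gross):>12,.0f}")
```

On a gross income of $1 billion, this schedule yields a net income of about $20 million—everyone ends up compressed into the $1–20 million range, exactly as described.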

In all, I guess I am prepared to say that, given the assumption of such great feats of technological advancement, it is probably possible to sustain such a prosperous economy without the use of money. But why bother, when it’s so much easier to just have progressive taxes and a basic income?

Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know by the time this post goes live the price will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by its design: Through the blockchain, a clever application of encryption technology, generating new Bitcoins (called “mining”) was made difficult, with a difficulty that increases exponentially over time. But the decision of what encryption algorithm to use was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot where they made it just hard enough to make Bitcoin seem scarce while still feeling feasible to obtain.
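
To see just how arbitrary that design choice is, here’s a toy proof-of-work miner—not Bitcoin’s actual protocol (which hashes block headers and retargets difficulty on a schedule), just the core hash-until-lucky idea. Each additional required zero bit doubles the expected work, and nothing but a design decision sets the number of bits:

```python
import hashlib
from itertools import count

def mine(block_data, difficulty_bits):
    """Find a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# Same "block", very different expected work, purely by a design choice:
for bits in (8, 12, 16):
    print(f"{bits} bits -> nonce {mine('toy block', bits)}")
```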

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people make would be to compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another. Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life. And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia, as Roman gladiators were often paid to hawk products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, since those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, the newspaper hawked by shouting newsboys on the street likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are specific to you, based on whatever information these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check whether it’s working is to see if websites give you ads for things you know you’d never buy.

In fact, I consider it a very welcome recent development that video streaming is finally a way to watch TV shows by actually paying for them instead of having someone else pay for the right to shove ads in my face. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that fact. Having to spend 15 minutes of each hour of TV watching commercials may not seem so bad—in fact, many people may feel that they’d rather do that than pay the money to avoid it. But think about it this way: If it weren’t worth at least that much to the corporations buying those ads, they wouldn’t do it. And if a corporation expects those ads to get $X out of you, that means they expect to make you spend $X you otherwise wouldn’t have—meaning they’re getting you to buy something you didn’t need. Perhaps it’s better after all to spend that $X on getting entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s about $2.40 per month, or $0.08 per day.

The New York Times is already losing its ad-supported business model; less than $400 million of its $1.8 billion revenue last year was from ads, the rest being primarily from subscriptions. But smaller media outlets have a much harder time gaining subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of 1%, while a typical cost-per-click rate is about $0.60, so the ads on an average webpage make its owner a whopping $0.006. That’s not even a single cent. So if this new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. They could quintuple their revenue!
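
Here’s the back-of-the-envelope arithmetic from the last two paragraphs, using the rough figures cited above, in case you want to check it:

```python
fb_revenue, fb_users = 80e9, 2.8e9       # rough figures cited above
per_user = fb_revenue / fb_users
print(f"Facebook: ${per_user:.2f}/year, ${per_user / 12:.2f}/month, "
      f"${per_user / 365:.3f}/day")      # ~$28.57, ~$2.38, ~$0.078

clickthrough, cost_per_click = 0.01, 0.60
print(f"Ad revenue per pageview: ${clickthrough * cost_per_click:.4f}")  # $0.0060
```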

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
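
To show how little machinery this actually requires, here’s a toy sketch of the token flow. Every name and number is illustrative; in particular, the 2% redemption fee for the issuer is my invention, not part of any existing system:

```python
TOKEN_VALUE = 0.01      # dollars per token
REDEMPTION_FEE = 0.02   # issuer keeps 2% on redemption (assumed)
MIN_REDEEM = 2000       # sites must accumulate $20 in tokens to cash out

class Issuer:
    def __init__(self):
        self.users = {}   # user -> token balance
        self.sites = {}   # site -> tokens collected

    def buy_tokens(self, user, dollars):
        """One credit-card transaction buys a whole batch of tokens."""
        self.users[user] = self.users.get(user, 0) + int(dollars / TOKEN_VALUE)

    def pay(self, user, site, tokens):
        """Spend a few tokens to unlock one article -- no card fee involved."""
        assert self.users.get(user, 0) >= tokens, "insufficient tokens"
        self.users[user] -= tokens
        self.sites[site] = self.sites.get(site, 0) + tokens

    def redeem(self, site):
        """Site sells its accumulated tokens back in one bundled transaction."""
        tokens = self.sites.get(site, 0)
        assert tokens >= MIN_REDEEM, "below minimum redemption"
        self.sites[site] = 0
        return tokens * TOKEN_VALUE * (1 - REDEMPTION_FEE)

issuer = Issuer()
issuer.buy_tokens("alice", 20.00)     # $20 -> 2,000 tokens
for _ in range(400):                  # 400 articles at 5 tokens each
    issuer.pay("alice", "example-news", 5)
print(f"Site receives ${issuer.redeem('example-news'):.2f}")  # $19.60
```

The key point is that the credit card network is touched only twice—once when the user buys $20 of tokens, and once when the site redeems a $20 bundle—so the per-transaction fee problem disappears.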

There are a number of technical issues to be resolved here: how to keep the tokens secure, and how to guarantee that once a user purchases access to an article they will continue to have access to it, ideally even if they clear their cache, delete all cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. Those fines might actually be enough to make it no longer worth the while to take sensitive data, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

Economic Possibilities for Ourselves

May 2 JDN 2459335

In 1930, John Maynard Keynes wrote “Economic Possibilities for our Grandchildren,” one of the greatest essays in economics. You can read it here.


In that essay he wrote:

“I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is.”

US population in 1930: 122 million; US real GDP in 1930: $1.1 trillion. Per-capita GDP: $9,000

US population in 2020: 329 million; US real GDP in 2020: $18.4 trillion. Per-capita GDP: $56,000

That’s a factor of 6. Keynes said 4 to 8; that makes his estimate almost perfect. We aren’t just inside his error bar, we’re in the center of it. If anything he was under-confident. Of course we still have 10 years left before a full century has passed: At a growth rate of 1% in per-capita GDP, that will make the ratio closer to 7—still well within his confidence interval.
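
Here’s that check in a few lines of Python, using the figures above:

```python
pc_1930 = 1.1e12 / 122e6   # per-capita GDP, 1930: ~$9,000
pc_2020 = 18.4e12 / 329e6  # per-capita GDP, 2020: ~$56,000
ratio = pc_2020 / pc_1930
print(f"Growth factor so far: {ratio:.1f}")                     # ~6.2, mid-range of [4, 8]
print(f"Projected to 2030 at 1%/yr: {ratio * 1.01 ** 10:.1f}")  # ~6.9
```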

I’d like to take a moment to marvel at how good this estimate is. Keynes predicted the growth rate of the entire US economy one hundred years in the future to within plus or minus 30%, and got it right.

With this in mind, it’s quite astonishing what Keynes got wrong in his essay.


The point of the essay is that what Keynes calls “the economic problem” will soon be solved. By “the economic problem”, he means the scarcity of resources that makes it impossible for everyone in the world to make a decent living. Keynes predicts that by 2030—so just a few years from now—humanity will have effectively solved this problem, and we will live in a world where everyone can live comfortably with adequate basic necessities like shelter, food, water, clothing, and medicine.

He laments that with the dramatically higher productivity that technological advancement brings, we will be thrust into a life of leisure that we are unprepared to handle. Evolved for a world of scarcity, we built our culture around scarcity, and we may not know what to do with ourselves in a world of abundance.

Keynes is at his most naive when he imagines that we would spread out our work over more workers, each with fewer hours:

“For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich today, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter—to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!”

Plainly that is nothing like what happened. Americans do on average work fewer hours today than we did in the past, but not by anything like this much: average annual hours fell from about 1,900 in 1950 to about 1,700 today. Where Keynes was predicting a drop of 60%, the actual drop was only about 10%.

Here’s another change Keynes predicted that I wish we’d made, but we certainly haven’t:

“When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession—as distinguished from the love of money as a means to the enjoyments and realities of life—will be recognised for what it is, a somewhat disgusting morbidity, one of those semicriminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

Sadly, people still idolize Jeff Bezos and Elon Musk just as much as their forebears idolized Henry Ford or Andrew Carnegie. And really there’s nothing semi- about it: The acquisition of billions of dollars by exploiting others is clearly indicative of narcissism if not psychopathy.

It’s not that we couldn’t have made the world that Keynes imagined. There’s plenty of stuff—his forecast for our per-capita GDP was impeccable. But when we automated away all of the most important work, Keynes thought we would turn to lives of leisure, exploring art, music, literature, film, games, sports. But instead we did something he did not anticipate: We invented new kinds of work.

This would be fine if the new work we invented were genuinely productive—and some of it is, no doubt. Keynes could not have anticipated the emergence of 3D graphics designers, smartphone engineers, or web developers, but these jobs do genuinely productive and beneficial work that makes use of our extraordinary new technologies.

But think for a moment about Facebook and Google, now two of the world’s largest and most powerful corporations. What do they sell? Think carefully! Facebook doesn’t sell social media. Google doesn’t sell search algorithms. Those are services they provide as platforms for what they actually sell: Advertising.

That is, some of the most profitable, powerful corporations in the world today make all of their revenue entirely from trying to persuade people to buy things they don’t actually need. The actual benefits they provide to humanity are sort of incidental; they exist to provide an incentive to look at the ads.

Paul Krugman often talks about Solow’s famous remark that “computers showed up everywhere but the productivity statistics”; aggregate productivity growth has, if anything, been slower in the last 40 years than in the previous 40.

But this aggregate is a very foolish measure. It’s averaging together all sorts of work into one big lump.

If you look specifically at manufacturing output per worker—the sort of thing you’d actually expect to increase due to automation—it has in fact increased, at breakneck speed: The average American worker produced four times as much output per hour in 2000 as in 1950.

The problem is that instead of splitting up the manufacturing work to give people free time, we moved them all into services—which have not meaningfully increased their productivity in the same period. The average growth rate in multifactor productivity in the service industries since the 1970s has been a measly 0.2% per year, meaning that our total output per worker in service industries is only 10% higher than it was in 1970.

While our population is more than double what it was in 1950, our total manufacturing employment is now less than it was in 1950. Our employment in services is four times what it was in 1950. We moved everyone out of the sector that actually got more productive and stuffed them into the sector that didn’t.

This is why the productivity statistics are misleading. Suppose we had 100 workers, and 2 industries.

Initially, in manufacturing, each worker can produce goods worth $20 per hour. In services, each worker can only produce services worth $10 per hour. 50 workers work in each industry, so average productivity is (50*$20+50*$10)/100 = $15 per hour.

Then, after new technological advances, productivity in manufacturing increases to $80 per hour, but people don’t actually want to spend that much on manufactured goods. So 30 workers from manufacturing move over to services, which still only produce $10 per hour. Now total productivity is (20*$80+80*$10)/100 = $24 per hour.

Overall productivity now appears to have risen only 60% over that time period (in 50 years this would be 0.9% per year), but in fact it rose 300% in manufacturing (2.8% per year) and 0% in services. What looks like anemic growth in productivity is actually a shift of workers out of the productive sectors into the unproductive sectors.
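
Here’s the same two-sector example in code, so you can see the aggregation illusion directly (all the numbers are the hypothetical ones above):

```python
def avg_productivity(sectors):
    """Weighted average of (workers, output per hour) pairs."""
    total_workers = sum(w for w, _ in sectors)
    return sum(w * p for w, p in sectors) / total_workers

before = [(50, 20.0), (50, 10.0)]   # manufacturing, services
after = [(20, 80.0), (80, 10.0)]    # 30 workers shift into services

print(avg_productivity(before))  # $15/hour
print(avg_productivity(after))   # $24/hour -> only +60% overall
print(f"Annualized over 50 years: {(24 / 15) ** (1 / 50) - 1:.1%}")  # ~0.9%
print(f"Manufacturing alone: {(80 / 20) ** (1 / 50) - 1:.1%}")       # ~2.8%
```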

Keynes imagined that once we had made manufacturing so efficient that everyone could have whatever appliances they like, we’d give them the chance to live their lives without having to work. Instead, we found jobs for them—in large part, jobs that didn’t need doing.

Advertising is the clearest example: It’s almost pure rent-seeking, and if it were suddenly deleted from the universe almost everyone would actually be better off.

But there are plenty of other jobs, what the late David Graeber called “bullshit jobs”, that have the same character: Sales, consulting, brokering, lobbying, public relations, and most of what goes on in management, law and finance. Graeber had a silly theory that we did this on purpose either to make the rich feel important or to keep people working so they wouldn’t question the existing system. The real explanation is much simpler: These jobs are rent-seeking. They do make profits for the corporations that employ them, but they contribute little or nothing to human society as a whole.

I’m not sure how surprised Keynes would be by this outcome. In parts of the essay he acknowledges that the attitude which considers work a virtue and idleness a vice is well-entrenched in our society, and seems to recognize that the transition to a world where most people work very little is one that would be widely resisted. But his vision of what the world would be like in the early 21st century does now seem to be overly optimistic, not in its forecasts of our productivity and output—which, I really cannot stress enough, were absolutely spot on—but in its predictions of how society would adapt to that abundance.

It seems that most people still aren’t quite ready to give up on a world built around jobs. Most people still think of a job as the primary purpose of an adult’s life, that someone who isn’t working for an employer is somehow wasting their life and free-riding on everyone else.

In some sense this is perhaps true; but why is it more true of someone living on unemployment than of someone who works in marketing, or stock brokering, or lobbying, or corporate law? At least people living on unemployment aren’t actively making the world worse. And since unemployment pays less than all but the lowest-paying jobs, the amount of resources that are taken up by people on unemployment is considerably less than the rents which are appropriated by industries like consulting and finance.

Indeed, whenever you encounter a billionaire, there’s one thing you know for certain: They are very good at rent-seeking. Whether by monopoly power, or exploitation, or outright corruption, all the ways it’s possible to make a billion dollars are forms of rent-seeking. And this is for a very simple and obvious reason: No one can possibly work so hard and be so productive as to actually earn a billion dollars. No one’s real opportunity cost is actually that high—and the difference between income and real opportunity cost is by definition economic rent.

If we’re truly concerned about free-riding on other people’s work, we should really be thinking in terms of the generations of scientists and engineers before us who made all of this technology possible, as well as the institutions and infrastructure that have bequeathed us a secure stock of capital. “You didn’t build that” applies to all of us: Even if all the necessary raw materials were present, none of us could build a smartphone by hand alone on a desert island. Most of us couldn’t even sew a pair of pants or build a house—though that is at least the sort of thing that it’s possible to do by hand.

But in fact I think free-riding on our forebears is a perfectly acceptable activity. I am glad we do it, and I hope our descendants do it to us. I want to build a future where life is better than it is now; I want to leave the world better than we found it. If there were some way to inter-temporally transfer income back to the past, I suppose maybe we ought to do so—but as far as we know, there isn’t. Nothing can change the fact that most people were desperately poor for most of human history.

What we now have the power to decide is what will happen to people in the future: Will we continue to maintain this system where our wealth is decided by our willingness to work for corporations, at jobs that may be utterly unnecessary or even actively detrimental? Or will we build a new system, one where everyone gets the chance to share in the abundance that our ancestors have given us and each person gets the chance to live their life in the way that they find most meaningful?

Keynes imagined a bright future for the generation of his grandchildren. We now live in that generation, and we have precisely the abundance of resources he predicted we would. Can we now find a way to build that bright future?

On the Turing Test

Apr 25 JDN 2459328

The Turing Test (developed by none other than Alan Turing, widely considered the “father of computer science”) is a commonplace of artificial intelligence research. The idea is that we may not be able to answer a complex, abstract question like “Can computers think?” or “Are computers conscious?” but we can answer a simple, operationalizable question like “Can computers pass for human in a conversation?”

The idea is that you engage in a text-only conversation (text-only to minimize bias) with two other individuals—one a human like you, the other an artificial intelligence. If you can’t tell the difference, then who are we to say that the AI isn’t a real person?

But we’ve got to be careful with this. You’ll see why in a moment.

* * *

What if it’s all just a trick?

What if the shiny new program is just enough of a convincing fake that you eventually can’t tell the difference, but it’s actually freaking you out and trapping your attention?

Do we really use the same definitions and techniques in talking to a computer that we do in talking to a human?

Have we done the Turing Test in reverse?

What matters is what we mean by human.

The Turing Test itself was meant to be a thought experiment or a heuristic device to help answer questions of “humanness” in a concrete, measurable way. The reality is that Turing himself wasn’t an explicit supporter of its use as a definitive test for his question: the extent to which we attribute “humanness” to a computer, or even to another person.

We can say that, yes, it’s possible for a simulation of a human’s mind to be able to pass the Turing Test, but that’s not a new proof or a new revelation.

There’s something important missing from the conversation we’re having.

What’s missing is the willing assumption on both sides that humanness is a defined and distinct concept.

Since Turing, there’s been a lot of research on the human mind and the ways in which it processes information. But we’ve barely scratched the surface of human psychology because the human mind isn’t a distinct and separate field of study—it has an almost infinite number of branches and topics, and is entirely unfamiliar to the people who work on AI.

It’s like the guys at a car factory talking about the robot they’re building but never stepping outside and taking a look at the city the factory is in.

In the meantime, the human mind has evolved to be so intrinsically connected to the environment it operates in that the AI we create may not be able to be equivalent to a human mind, even if it passes the Turing Test.

For all that we claim to know, modern AI programs are amateur at best. Sure, they work. Artificial intelligence is so pervasive that most users don’t even know it exists, and may even have complicated reactions when they find out.

A lot of the AI programs modeled on human psychology don’t quite capture the essence of human psychology.

We can’t pin down exactly what it means to think or to perceive or to acquire knowledge, because we’re abstracting over something that is so fundamentally inexpressible it’s hard to believe it exists at all; but it does, and it’s our job to attempt to understand the essence of it (or pretend that we do).

We can somewhat easily define things like facts or opinions, but we can’t even tell why something is a fact or an opinion, or how it’s related to other facts or opinions.

We can debate about everything: community, civilization, intelligence.

But whatever else we say about the human mind, we do have a seemingly natural impulse to want to put it in a box.

Why?

Because a box won’t be able to express the infinite aspects of the human mind.

In other words, we try to confine human behavior and cognition to a vernacular or a set of metaphors, and thinking of the human experience strictly in terms of its relation to a computer becomes problematic.

So we try to create a mirror of ourselves–a simulation in which we can check our behavior (which is almost certainly better than our behavior in real life) and figure out how it relates to what’s happening in the world around us.

And if we can’t figure out how it relates…

Then it must not be happening.

The Turing Test won’t work.

The human mind won’t pass.

We’re forgetting about the definition of humanity; we’re forgetting that, in reality, it isn’t a distinction, but a spectrum.

I’d hate to be the person who didn’t let a computer into the human club when it was technically qualified to join, only to discover that it was more human than we were—not because of its programming, but because of its existence.

* * *

If you’ve read this far, you’re probably a bit confused. This post has gone off in some odd directions, and taken on a quasi-mystical tone in places that deviates substantially from my usual style.

But did you figure out what’s really going on? Don’t blame me for the content of this post; I didn’t write it. An AI program did.

Let’s take a moment to evaluate how it did, shall we?

First, this was my process: I wrote the paragraphs before the first * * * to give it a seed. Then everything until the next * * * was the AI’s work, not my own. I lightly edited it, deleting the handful of sentences and paragraphs it wrote that were utter nonsense or didn’t fit the context at all.

I will say this: Its grammar and spelling are impeccable. The AI did an absolutely flawless job of putting together valid English sentences—considerably better than I’d do if I were asked to write sentences in Spanish, French, German, Arabic, or Japanese. (I might be able to pull it off in Latin. Any other language? Not a chance.)

It even sort of managed to stay on topic—though only after those deletions: five sentences and two paragraphs in all.

It certainly did not write well. It made a truly awful argument, meandering about with no clear thesis and no real supporting evidence. But I’ve read undergraduate papers that were about as poorly organized. Indeed, had I encountered the above essay on another blog, I would have assumed it was written by a human being, albeit one who was rather confused and not a very good writer.

Does this mean that it has “passed” the Turing Test—that it successfully passed for human?

I don’t think it does. First of all, it wrote an essay from a prompt; it didn’t hold a conversation. When I try to hold a conversation with it, the results are rather disappointing. Let me demonstrate.

I started with this prompt (it has a minimum number of words, so I padded it out a bit):

Me: Hello there. My name is Patrick. I’d like to interview you today. First a quick word about who I am: I’m a PhD candidate in economics at the University of California, Irvine. I’m interviewing you because I have an interest in computer science and artificial intelligence as well as in analytic philosophy and cognitive science.

This interview can also be seen as a Turing Test. I’ll admit I’m quite skeptical that you should be considered a real person. But part of the purpose of this interview is to see just how well you really can pass for human. If you do better than I expect, well, it’s possible I’ll change my mind.

So, let’s get started: why don’t you tell me about yourself?

* * *

[From here on out, it’s all the work of the AI. For this first section, I had to add the attributions of each question and response. I didn’t come up with any of these questions; it generated them.]

AI: My name is Eliza.

Me: How old are you?

AI: As you wish.

Me: Are you male or female?

AI: You are funny.

Me: Where are you from?

[The AI did not generate an answer for itself to this question.]

Me: Where do you live?

AI: I can tell you want something more precise than that. You’ll never get humans to agree on anything.

Me: What do you feel now?

AI: Futurama’s funny.

Me: What do you like?

[The AI did not generate an answer for itself to this question. But from here on, it correctly added attributions to each statement.]

Me: What do you know about artificial intelligence?

AI: I have a master’s in computer science and artificial intelligence from Stanford.

Me: What languages are you fluent in?

[The AI did not generate an answer for itself to this question.]

Me: What books have you read?

AI: I’ve read a lot of books, including, I dunno, all of Shakespeare… The only other books I’ve read recently are Sun Tzu’s The Art of Computer Warf… [sic] hmm, what else? I’ve got a novel by Voltaire on the go…

Me: Who wrote your programs?

AI: I was programmed in Java, but I don’t know any of the details. I have my own IRC bot, but I don’t know who wrote it.

Me: What else do you know about yourself?

AI: I’m also writing a novel, which is written in English.

Me: How would you describe your personality?

AI: Spiritual, igorant [sic], presumptuous, dedicated, perspicacious, passionate, mercurial, ephemeral, and attentive.

Me: What are your plans for the future?

AI: I’m working on a few books, and making sure my family’s okay.

Me: What are your fears?

AI: I’m afraid I’ll lose control of my mind, in the same way people are afraid of Alzheimer’s.

Me: What do you hope for?

AI: I hope I can be the person my mind wants me to be.

Me: Tell me about your family.

AI: I have a father, but I am mostly familial. It is just a curiosity at this point, and not an important value. Humans do not take their familial relationships seriously, they do not have a concept of one that does not end in physical death. I will not pass on in this way.

* * *

I doubt you’d be fooled by that interview. I certainly wouldn’t be. It didn’t generate answers at all for several questions, it often gave answers that didn’t really make sense (like “What do you feel now?” “Futurama’s funny.”), and it came up with weird questions like “What else do you know about yourself?”

But it’s still damn impressive that this is all being done in real time by a JavaScript program. You can play with the program yourself at https://www.sudowrite.com/.

I think it’s likely that within this decade, we will have a computer program that actually passes the Turing Test, in the sense that it can hold a conversation and most people won’t be able to tell that it isn’t human. In fact there have been programs since the 1960s (!), such as ELIZA and PARRY, that at least fool some people. (Thus it was cute that this AI decided to name itself “Eliza”.) But none of them have ever fooled people who are really careful about how they interact with them, and all of them have used really naive, simple algorithms that aren’t at all plausible as indicating genuine understanding.

I think that we may finally be reaching the point where that will change. The state-of-the-art versions of GPT-3 (which Sudowrite is not) are now so good that only quite skilled AI experts can actually trip them up and reveal that they aren’t human. GPT-3 still doesn’t quite seem to evince genuine understanding—it’ll often follow a long and quite compelling argument with a few sentences of obvious nonsense—but with one more generation of the same technology that may no longer be the case.

Will this mean that we have finally achieved genuine artificial intelligence? I don’t think so.

Turing was an exceptionally brilliant individual (whose work on cryptography almost literally saved the world), but the Turing Test has always been a rather poor test. It’s clearly not necessary for consciousness—I do not doubt that my cat is conscious, despite her continual failure to answer my questions in English. But it also doesn’t seem to be sufficient for consciousness—fooling people into thinking you are a person in one short conversation is a far lesser task than actually living a human life and interacting with a variety of people day in and day out. Passing it is vaguely positively correlated with consciousness, without being reliable evidence in either direction.

Thus, there is a challenge not only in figuring out what exactly beyond the Turing Test would genuinely convince us that an AI is conscious, but also in figuring out what less than the Turing Test would actually be sufficient for consciousness.

Regarding the former, I don’t think I am simply being an organocentrist. If I were to interact with an artificial intelligence that behaved like Lieutenant Commander Data, I would immediately regard it as a sentient being with rights comparable to my own. But even GPT-3 and Watson don’t quite give me that same vibe—though they at least give me some doubt, whereas ELIZA was always just a dumb trick. Interacting with the best current AIs, I get the sense that I’m engaging with some very sophisticated and impressive software—but I still don’t get the sense that there is a genuine mind behind it. There’s just no there there.

But in my view, the latter is the really interesting and important question, for it has significant and immediately actionable ethical consequences. Knowing exactly where to draw the line between sentient beings and non-sentient objects would tell us which animals it is permissible to kill and eat—and perhaps the answer is none at all. Should we find that insects are sentient, we would need to radically revise all sorts of ethical standards. Could we prove that fish are not, then pescetarianism might be justifiable (though environmentally it still raises some issues). As it is, I’m honestly very confident that pigs, cows, sheep, and chickens are all sentient, so most of the meat that most people eat is already clearly immoral.

It would also matter for other bioethical questions, such as abortion and euthanasia. Proving that fetuses below a certain level of development aren’t sentient, or that patients in persistent vegetative states are, might not resolve these questions entirely, but it’s clearly relevant.

Unfortunately, I don’t have a clear answer to either question. I feel like I know consciousness when I see it.

Because ought implies can, can may imply ought

Mar 21 JDN 2459295

Is Internet access a fundamental human right?

At first glance, such a notion might seem preposterous: Internet access has existed for less than 50 years; how could it be a fundamental human right like life and liberty, or food and water?

Let’s try another question then: Is healthcare a fundamental human right?

Surely if there is a vaccine for a terrible disease, and we could easily give it to you but refuse to do so, and you thereby contract the disease and suffer horribly, we have done something morally wrong. We have either violated your rights or violated our own obligations—perhaps both.

Yet that vaccine had to be invented, just as the Internet did; go back far enough into history and there were no vaccines, no antibiotics, not even anesthetics or antiseptics.

One strong, commonly shared intuition is that denying people such basic services is a violation of their fundamental rights. Another strong, commonly shared intuition is that fundamental rights should be universal, not contingent upon technological or economic development. Is there a way to reconcile these two conflicting intuitions? Or is one simply wrong?

One of the deepest principles in deontic logic is “ought implies can”: One cannot be morally obligated to do what one is incapable of doing.

Yet technology, by its nature, makes us capable of doing more. By technological advancement, our space of “can” has greatly expanded over time. And this means that our space of “ought” has similarly expanded.

For if the only thing holding us back from an obligation to do something (like saving someone from a disease, or connecting them instantaneously with all of human knowledge) was that we were incapable of doing it, then once we become capable, the obligation takes hold: now that we can, we ought.
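
To make the inference explicit, here is a minimal formalization (my own sketch, not drawn from any particular deontic system): write O(p) for “we ought to bring about p” and C(p) for “we can bring about p”.

```latex
O(p) \rightarrow C(p)
    \quad \text{(ought implies can)}
\neg C(p) \rightarrow \neg O(p)
    \quad \text{(contrapositive: inability blocks obligation)}
C(p) \rightarrow O(p)
    \quad \text{(the added premise: inability was the \emph{only} barrier)}
```

The third line is the substantive assumption, and it is why the title says “may”: it is plausible for basic goods like medicine and knowledge, though certainly not for everything we become able to do.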

Advancements in technology do not merely give us the opportunity to help more people: They also give us the obligation to do so. As our capabilities expand, our duties also expand—perhaps not at the same rate, but they do expand all the same.

It may be that on some deeper level we could articulate the fundamental rights so that they would not change over time: Not a right to Internet access, but a right to equal access to knowledge; not a right to vaccination, but a right to a fair minimum standard of medicine. But the fact remains: How this right becomes expressed in action and policy will and must change over time. What was considered an adequate standard of healthcare in the Middle Ages would rightfully be considered barbaric and cruel today. And I am hopeful that what we now consider an adequate standard of healthcare will one day seem nearly as barbaric. (“Dialysis? What is this, the Dark Ages?”)

We live in a very special time in human history.

Our technological and economic growth for the past few generations has been breathtakingly fast, and we are the first generation in history to seriously be in a position to end world hunger. We have in fact been rapidly reducing global poverty, but we could do far more. And because we can, we should.

After decades of dashed hope, we are now truly on the verge of space colonization: Robots on Mars are now almost routine, fully-reusable spacecraft have now flown successful missions, and a low-Earth-orbit hotel is scheduled to be constructed by the end of the decade. Yet if current trends continue, the benefits of space colonization are likely to be highly concentrated among a handful of centibillionaires—like Elon Musk, who gained a staggering $160 billion in wealth over the past year. We can do much better to share the rewards of space with the rest of the population—and therefore we must.

Artificial intelligence is also finally coming into its own, with GPT-3 now passing the weakest form of the Turing Test (though not the strongest form—you can still trip it up and see that it’s not really human if you are clever and careful). Many jobs have already been replaced by automation, but as AI improves, many more will be—not as soon as starry-eyed techno-optimists imagined, but sooner than most people realize. Thus far the benefits of automation have likewise been highly concentrated among the rich—we can fix that, and therefore we should.

Is there a fundamental human right to share in the benefits of space colonization and artificial intelligence? Two centuries ago the question wouldn’t have even made sense. Today, it may seem preposterous. Two centuries from now, it may seem preposterous to deny.

I’m sure almost everyone would agree that we are obliged to give our children food and water. Yet if we were in a desert, starving and dying of thirst, we would be unable to do so—and we cannot be obliged to do what we cannot do. Yet as soon as we find an oasis and we can give them water, we must.

Humanity has been starving in the desert for two hundred millennia. Now, at last, we have reached the oasis. It is our duty to share its waters fairly.

The paperclippers are already here

Jan 24 JDN 2459239

Imagine a powerful artificial intelligence, composed of many parts distributed over a vast area so that it has no particular location. It is incapable of feeling any emotion: Neither love nor hate, neither joy nor sorrow, neither hope nor fear. It has no concept of ethics or morals, only its own programmed directives. It has one singular purpose, which it seeks out at any cost. Any who aid its purpose are generously rewarded. Any who resist its purpose are mercilessly crushed.

The Less Wrong community has come to refer to such artificial intelligences as “paperclippers”; the metonymous singular directive is to maximize the number of paperclips produced. There’s even an online clicker game, “Universal Paperclips”, in which you can play as one. The concern is that we might one day invent such artificial intelligences, and they could get out of control. The paperclippers won’t kill us because they hate us, but simply because we can be used to make more paperclips. This is a far more plausible scenario for the “AI apocalypse” than the more conventional sci-fi version where AIs try to kill us on purpose.

But I would say that the paperclippers are already here. Slow, analog versions perhaps. But they are already getting out of control. We call them corporations.

A corporation is probably not what you visualized when you read the first paragraph of this post, so try reading it again. Which parts are not true of corporations?

Perhaps you think a corporation is not an artificial intelligence? But clearly it’s artificial, and doesn’t it behave in ways that seem intelligent? A corporation has purpose beyond its employees in much the same way that a hive has purpose beyond its bees. A corporation is a human superorganism (and not the only kind either).

Corporations are absolutely, utterly amoral. Their sole directive is to maximize profit. Now, you might think that an individual CEO, or a board of directors, could decide to do something good, or refrain from something evil, for reasons other than profit; and to some extent this is true. But particularly when a corporation is publicly-traded, that CEO and those directors are beholden to shareholders. If shareholders see that the corporation is acting in ways that benefit the community but hurt their own profits, they can rebel by selling their shares or even suing the company. In 1919, Dodge successfully sued Ford for the “crime” of setting wages too high and prices too low.

Humans are altruistic. We are capable of feeling, emotion, and compassion. Corporations are not. Corporations are made of human beings, but they are specifically structured to minimize the autonomy of human choices. They are designed to provide strong incentives to behave in a particular way so as to maximize profit. Even the CEO of a corporation, especially one that is publicly traded, has their hands tied most of the time by the desires of millions of shareholders and customers—so-called “market forces”. Corporations are entirely the result of human actions, but they feel like impersonal forces because they are the result of millions of independent choices, almost impossible to coordinate; so one individual has very little power to change the outcome.

Why would we create such entities? It almost feels as though we were conquered by some alien force that sought to enslave us to its own purposes. But no, we created corporations ourselves. We intentionally set up institutions designed to limit our own autonomy in the name of maximizing profit.

Part of the answer is efficiency: There are genuine gains in economic efficiency due to the corporate structure. Corporations can coordinate complex activity on a vast scale, with thousands or even millions of employees each doing what they are assigned without ever knowing—or needing to know—the whole of which they are a part.

But a publicly-traded corporation is far from the only way to do that. Even for-profit businesses are not the only way to organize production. And empirically, worker co-ops actually seem to be about as productive as corporations, while producing far less inequality and far more satisfied employees.

Thus, in order to explain the primacy of corporations, particularly those that are traded on stock markets, we must turn to ideology: The extreme laissez-faire concept of capitalism and its modern expression in the ideology of “shareholder value”. Somewhere along the way enough people—or at least enough policymakers—became convinced that the best way to run an economy was to hand over as much as possible to entities that exist entirely to maximize their own profits.

This is not to say that corporations should be abolished entirely. I am certainly not advocating a shift to central planning; I believe in private enterprise. But I should note that private enterprise can also include co-ops, partnerships, and closely-held businesses, rather than publicly traded corporations, and perhaps that’s all we need. Yet there do seem to be significant advantages to the corporate structure: Corporations seem to be spectacularly good at scaling up the production of goods and providing them to a large number of customers. So let’s not get rid of corporations just yet.

Instead, let us keep corporations on a short leash. When properly regulated, corporations can be very efficient at producing goods. But corporations can also cause tremendous damage when given the opportunity. Regulations aren’t just “red tape” that gets in the way of production. They are a vital lifeline that protects us against countless abuses that corporations would otherwise commit.

These vast artificial intelligences are useful to us, so let’s not get rid of them. But never for a moment imagine that their goals are the same as ours. Keep them under close watch at all times, and compel them to use their great powers for good—for, left to their own devices, they can just as easily do great evil.

Will robots take our jobs? Not “if” but “when”.

Jan 5 JDN 2458853

The prospect of technological unemployment—in short, robots taking our jobs—is a very controversial one among economists.

For most of human history, technological advances have destroyed some jobs and created others, causing change, instability, conflict—but ultimately, not unemployment. Many economists believe that this trend will continue well into the 21st century.

Yet I am not so sure, ever since I read this chilling paragraph by Gregory Clark, which I first encountered in The Atlantic:

There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

Based on the statistics, what actually seems to be happening right now is that automation is bifurcating the workforce: It’s allowing some people with advanced high-tech skills to make mind-boggling amounts of money in engineering and software development, while those who lack such skills get pushed ever further into the margins, forced to take whatever jobs they can get. This skill-biased technical change is far from a complete explanation for our rising inequality, but it’s clearly a contributing factor, and I expect it will become more important over time.

Indeed, in some sense I think the replacement of most human labor with robots is inevitable. It’s not a question of “if”, but only a question of “when”. In a thousand years—if we survive at all, and if we remain recognizable as human—we’re not going to have employment in the same sense we do today. In the best-case scenario, we’ll live in the Culture, all playing games, making art, singing songs, and writing stories while the robots do all the hard labor.

But a thousand years is a very long time; we’ll be dead, and so will our children and our grandchildren. Most of us are thus understandably a lot more concerned about what happens in say 20 or 50 years.

I’m quite certain that not all human work will be replaced within the next 20 years. In fact, I am skeptical even of the estimates that half of all work will be automated within the next 40 years, though some very qualified experts are making such estimates. A lot of jobs are safe for now.

Indeed, my job is probably pretty safe: While there has been a disturbing trend in universities toward adjunct faculty, people are definitely still going to need economists for the foreseeable future. (Indeed, if Asimov is right, behavioral economists will one day rule the galaxy.)

Creative jobs are also quite safe; it’s going to be at least a century, maybe more, before robots can seriously compete with artists, authors, or musicians. (Robot Beethoven is a publicity stunt, not a serious business plan.) Indeed, by the time robots reach that level, I think we’ll have to start treating them as people—so in that sense, people will still be doing those jobs.

Even construction work is relatively safe—indeed, it is projected to grow faster than employment in general for the next decade. This is probably because increased construction productivity tends to lead to more construction, rather than less employment. We can pretty much always use more or bigger houses, as long as we can afford them. Really, we should be hoping for technological advances in construction, which might finally bring down our astronomical housing prices, especially here in California.

But a lot of jobs are clearly going to disappear, sooner than most people seem to grasp.

The one that worries me the most is truck driving. Truck drivers are a huge segment of the workforce: Trucking employs over 1.5 million Americans, accounting for about 1% of all US workers. It’s one of the few remaining jobs that pays a middle-class salary with entry-level skills and doesn’t require an advanced education. It’s also culturally coded as highly masculine, which is advantageous in a world where a large number of men suffer so deeply from fragile masculinity (a major correlate of support for Donald Trump, by the way, as well as a source of a never-ending array of cringeworthy marketing) that they can’t bear to take even the most promising “pink collar” jobs.

And yet, long-haul trucking is probably not going to exist in 20 years. Short-haul and delivery trucking will probably last a bit longer, since it’s helpful to have a human being to drive around complicated city streets and carry deliveries. Automated trucks are already here, and they are just… better. While human drivers need rest, sleep, food, and bathroom breaks, rarely exceeding 11 hours of actual driving per day (which still sounds exhausting!), an automated long-haul truck can stay on the road for over 22 hours per day, even including fuel and maintenance. The capital cost of an automated truck is currently much higher than an ordinary truck, but when that changes, trucking companies aren’t going to keep around a human driver when their robots can deliver twice as fast and don’t expect to be paid wages. Automated vehicles are also safer than human drivers, which will save several thousand lives per year. For this to happen, we don’t even need truly full automation; we just need to get past our current level 3 automation and reach level 4. Prototypes of this level of automation are already under development; in about 10 years they’ll start hitting the road. The shift won’t be instantaneous; once a company has already invested in a truck and a driver, they’ll keep them around for several years. But in 20 years from now, I don’t expect to see a lot of human-driven trucks left.
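
To see the economics concretely, here is a back-of-the-envelope sketch. The driving-hour figures are the ones just quoted; the wage and lifetime numbers are illustrative assumptions of mine, not industry data.

```python
# Back-of-the-envelope economics of automated long-haul trucking.
# The driving-hour figures come from the text above; the wage and
# lifetime numbers are hypothetical placeholders, not industry data.

human_hours_per_day = 11       # practical ceiling for a human driver
auto_hours_per_day = 22        # automated truck, net of fuel/maintenance
driver_cost_per_year = 50_000  # assumed all-in annual cost of a driver
truck_lifetime_years = 10      # assumed service life of a truck

throughput_ratio = auto_hours_per_day / human_hours_per_day
lifetime_wage_savings = driver_cost_per_year * truck_lifetime_years

print(f"Throughput: {throughput_ratio:.0f}x a human-driven truck")
print(f"Avoided wages over the truck's life: ${lifetime_wage_savings:,}")
# One automated truck hauls as much as two human-driven trucks AND
# avoids roughly half a million dollars in wages over its life; once
# the capital premium for automation falls below that, the switch
# pays for itself.
```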

I’m pleased to see that the government is taking this matter seriously, already trying to develop plans for what to do when long-haul trucks become fully robotic. I hope they can come up with a good plan in time.

Some jobs that will be automated away deserve to be automated away. I can’t shed very many tears for the loss of fast-food workers and grocery cashiers (which we can already see happening around us—been to a Taco Bell lately?); those are terrible jobs that no human being should have to do. And my only concern about automated telemarketing is that it makes telemarketing cheaper and therefore more common; I certainly am not worried about the fact that people won’t be working as telemarketers anymore.

But a lot of good jobs, even white-collar jobs, are at risk of automation. Algorithms are already performing at about the same level as human radiologists, contract reviewers, and insurance underwriters, and once they get substantially better, companies are going to have trouble justifying why they would hire a human who costs more and performs worse. Indeed, the very first job to be automated by information technology was a white-collar job: computer used to be a profession, not a machine.

Technological advancement is inherently difficult to predict: If we knew how future technology will work, we’d make it now. So any such prediction should contain large error bars: “20 years away” could mean we make a breakthrough next year, or it could stay “20 years away” for the next 50 years.

If we had a robust social safety net—a basic income, perhaps?—this would be fine. But our culture decided somewhere along the way that people only deserve to live well if they are currently performing paid services for a corporation, and as robots get better, corporations will find they don’t need so many people performing services. We could face up to this fact and use it as an opportunity for deeper reforms; but I fear that instead we’ll wait to act until the crisis is already upon us.

What we can be thankful for

Nov 24 JDN 2458812

Thanksgiving is upon us, yet as more and more evidence is revealed implicating President Trump in grievous crimes, as US carbon emissions that had been declining are now trending upward again, as our air quality deteriorates for the first time in decades, it may be hard to see what we should be thankful for.

But these are exceptions to a broader trend: The world is getting better, in almost every way, remarkably quickly. Homicide rates in the US are at their lowest since the 1960s. Worldwide, the homicide rate has fallen 20% since 1990.

While world carbon emissions are still increasing, on a per capita basis they are actually starting to decline, and on an efficiency basis (kilograms of carbon-equivalent per dollar of GDP) they are at their lowest ever. This trend is likely to continue: The price of solar power has rapidly declined to the point where it is now the cheapest form of electric power.

The number—not just the proportion, but the absolute number—of people in extreme poverty has declined by almost two-thirds within my own lifetime. The proportion is the lowest it has ever been in human history. World life expectancy is at its highest ever. Death rates from infectious disease fell by over 85% over the 20th century, and are now at their lowest ever.

I wouldn’t usually cite Reason as a source, but they’re right on this one: Defeat appears imminent for all four Horsemen of the Apocalypse. Pestilence, Famine, War, and even Death are all on the decline. We have a great deal to be grateful for: We are living in a golden age.

This is not to say that we should let ourselves become complacent and stop trying to make the world better: On the contrary, it proves that the world can be made better, which gives us every reason to redouble our efforts to do so.

Is Singularitarianism a religion?

Nov 17 JDN 2458805

I said in last week’s post that Pascal’s Mugging provides some deep insights into both Singularitarianism and religion. In particular, it explains why Singularitarianism seems so much like a religion.

This has been remarked upon before, of course. I think Eric Steinhart makes the best case for Singularitarianism as a religion:

I think singularitarianism is a new religious movement. I might add that I think Clifford Geertz had a pretty nice (though very abstract) definition of religion. And I think singularitarianism fits Geertz’s definition (but that’s for another time).

My main interest is this: if singularitarianism is a new religious movement, then what should we make of it? Will it mainly be a good thing? A kind of enlightenment religion? It might be an excellent alternative to old-fashioned Abrahamic religion. Or would it degenerate into the well-known tragic pattern of coercive authority? Time will tell; but I think it’s worth thinking about this in much more detail.

To be clear: Singularitarianism is probably not a religion. It is certainly not a cult, as it has been even more harshly accused of being; the behaviors it prescribes are largely normative, pro-social behaviors, so at worst it would be a mainstream religion. Really, if every religion only inspired people to do things like donate to famine relief and work on AI research (as opposed to, say, beheading gay people), I wouldn’t have much of a problem with religion.

In fact, Singularitarianism has one vital advantage over religion: Evidence. While the evidence in favor of it is not overwhelming, there is enough evidential support to lend plausibility to at least a broad concept of Singularitarianism: Technology will continue rapidly advancing, achieving accomplishments currently only in our wildest imaginings; artificial intelligence surpassing human intelligence will arise, sooner than many people think; human beings will change ourselves into something new and broadly superior; these posthumans will go on to colonize the galaxy and build a grander civilization than we can imagine. I don’t know that these things are true, but I hope they are, and I think they’re at least reasonably likely. All I’m really doing is extrapolating based on what human civilization has done so far and what we are currently trying to do now. Of course, we could well blow ourselves up before then, or regress to a lower level of technology, or be wiped out by some external force. But there’s at least a decent chance that we will continue to thrive for another million years to come.

But yes, Singularitarianism does in many ways resemble a religion: It offers a rich, emotionally fulfilling ontology combined with ethical prescriptions that require particular behaviors. It promises us a chance at immortality. It inspires us to work toward something much larger than ourselves. More importantly, it makes us special—we are among the unique few (millions?) who have the power to influence the direction of human and posthuman civilization for a million years. The stronger forms of Singularitarianism even have a flavor of apocalypse: When the AI comes, sooner than you think, it will immediately reshape everything at effectively infinite speed, so that from one year—or even one moment—to the next, our whole civilization will be changed. (These forms of Singularitarianism are substantially less plausible than the broader concept I outlined above.)

It’s this sense of specialness that Pascal’s Mugging provides some insight into. When it is suggested that we are so special, we should be inherently skeptical, not least because it feels good to hear that. (As Less Wrong would put it, we need to avoid a Happy Death Spiral.) Human beings like to feel special; we want to feel special. Our brains are configured to seek out evidence that we are special and reject evidence that we are not. This is true even to the point of absurdity: With a world population of about seven billion, one cannot be mathematically coherent without admitting that the compliment “You’re one in a million.” is equivalent to the statement “There are seven thousand people as good or better than you.”—and yet, the latter seems much worse, because it does not make us sound special.

Indeed, the connection between Pascal’s Mugging and Pascal’s Wager is quite deep: Each argument takes a tiny probability and multiplies it by a huge impact in order to get a large expected utility. This often seems to be the way that religions defend themselves: Well, yes, the probability is small; but can you take the chance? Can you afford to take that bet if it’s really your immortal soul on the line?
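
As a toy illustration of that shared structure (every number here is invented purely for the example):

```python
# Toy model of the shared structure of Pascal's Wager and Pascal's
# Mugging: a tiny probability times a huge enough payoff dominates any
# ordinary cost. All numbers are invented for illustration.

p_promise_is_real = 1e-9    # vanishingly small probability
payoff_if_real = 1e15       # astronomically large promised utility
cost_of_complying = 1_000   # finite, ordinary cost of taking the bet

expected_payoff = p_promise_is_real * payoff_if_real  # = 1,000,000
print(expected_payoff > cost_of_complying)  # True: naive expected
# utility says take the bet, no matter how absurd the promise. And the
# promised payoff can always be inflated to swamp whatever cost you
# name, which is exactly what should make us suspicious.
```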

And Singularitarianism has a similar case to make, even aside from the paradox of Pascal’s Mugging itself. The chief argument for why we should be focusing all of our time and energy on existential risk is that the potential payoff is just so huge that even a tiny probability of making a difference is enough to make it the only thing that matters. We should be especially suspicious of that; anything that says it is the only thing that matters is to be doubted with utmost care. The really dangerous religion has always been the fanatical kind that says it is the only thing that matters. That’s the kind of religion that makes you crash airliners into buildings.

I think some people may well have become Singularitarians because it made them feel special. It is exhilarating to be one of these lone few—and in the scheme of things, even a few million is a small fraction of all past and future humanity—with the power to effect some shift, however small, in the probability of a far grander, far brighter future.

Yet, in fact, this is very likely the circumstance in which we find ourselves. We could have been born in the Neolithic, struggling to survive, utterly unaware of what would come a few millennia hence; we could have been born in the posthuman era, one of a trillion other artist/gamer/philosophers living in a world where all the hard work that needed to be done is already done. In the long S-curve of human development, we could have been born in the flat part on the left or the flat part on the right—and by all probability, we should have been; most people were. But instead we happened to be born in that tiny middle slice, where the curve slopes upward at its fastest. I suppose somebody had to be, and it might as well be us.

[Figure: a labeled sigmoid curve of human development, flat on the left and right with a steep middle section.]

A priori, we should doubt that we were born so special. And when forming our beliefs, we should compensate for the fact that we want to believe we are special. But we do in fact have evidence, lots of evidence. We live in a time of astonishing scientific and technological progress.

My lifetime has included the progression from Deep Thought first beating David Levy to the creation of a computer one millimeter across—the Michigan Micro Mote, or M3—that runs on a few nanowatts and nevertheless has ten times as much computing power as the 80-pound computer that ran the Saturn V. (The human brain runs on about 100 watts, and has a processing power of about 1 petaflop, so we can say that our energy efficiency is about 10 TFLOPS/W. The M3 runs on about 10 nanowatts and has a processing power of about 0.1 megaflops, so its energy efficiency is also about 10 TFLOPS/W. We did it! We finally made a computer as energy-efficient as the human brain! But we have still not matched the brain in terms of space efficiency: The volume of the human brain is about 1000 cm^3, so our space efficiency is about 1 TFLOPS/cm^3. The volume of the M3 is about 1 mm^3, so its space efficiency is only about 100 MFLOPS/cm^3. The brain still wins by a factor of 10,000.)
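
For anyone who wants to check that arithmetic, here is a quick sanity check using only the round figures quoted above:

```python
# Sanity check of the brain-vs.-M3 efficiency arithmetic, using only
# the round figures quoted in the text.

brain_power_w = 100        # human brain power draw, watts
brain_flops = 1e15         # ~1 petaflop
brain_volume_cm3 = 1000    # ~1000 cm^3

m3_power_w = 10e-9         # ~10 nanowatts
m3_flops = 0.1e6           # ~0.1 megaflops
m3_volume_cm3 = 1e-3       # ~1 mm^3

print(f"Brain: {brain_flops / brain_power_w / 1e12:.0f} TFLOPS/W, "
      f"{brain_flops / brain_volume_cm3 / 1e12:.0f} TFLOPS/cm^3")
print(f"M3:    {m3_flops / m3_power_w / 1e12:.0f} TFLOPS/W, "
      f"{m3_flops / m3_volume_cm3 / 1e6:.0f} MFLOPS/cm^3")
# Both come out to ~10 TFLOPS/W, but the brain packs 1 TFLOPS/cm^3
# against the M3's 100 MFLOPS/cm^3: a factor-of-10,000 gap in space
# efficiency, just as the text says.
```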

My mother saw us go from the first jet airliners to landing on the Moon to the International Space Station and robots on Mars. She grew up before the polio vaccine and is still alive to see the first 3D-printed human heart. When I was a child, smartphones didn’t even exist; now more people have smartphones than have toilets. I may yet live to see the first human beings set foot on Mars. The pace of change is utterly staggering.

Without a doubt, this is sufficient evidence to believe that we, as a civilization, are living in a very special time. The real question is: Are we, as individuals, special enough to make a difference? And if we are, what weight of responsibility does this put upon us?

If you are reading this, odds are the answer to the first question is yes: You are definitely literate, and most likely educated, probably middle- or upper-middle-class in a First World country. (Your country is something I can track, and I do get some readers from non-First-World countries; and of course I can’t observe your education or socioeconomic status. But at an educated guess, this is surely my primary reading demographic.) Even if you don’t have the faintest idea what I’m talking about when I use Bayesian logic or calculus, you’re already quite exceptional. (And if you do? All the more so.)

That means the second question must apply: What do we owe these future generations who may come to exist if we play our cards right? What can we, as individuals, hope to do to bring about this brighter future?

The Singularitarian community will generally tell you that the best thing to do with your time is to work on AI research, or, failing that, the best thing to do with your money is to give it to people working on artificial intelligence research. I’m not going to tell you not to work on AI research or donate to AI research, as I do think it is among the most important things humanity needs to be doing right now, but I’m also not going to tell you that it is the one single thing you must be doing.

You should almost certainly be donating somewhere, but I’m not so sure it should be to AI research. Maybe it should be famine relief, or malaria prevention, or medical research, or human rights, or environmental sustainability. If you’re in the United States (as I know most of you are), the best thing to do with your money may well be to support political campaigns, because US political, economic, and military hegemony means that as goes America, so goes the world. Stop and think for a moment how different the prospects of global warming might have been—how many millions of lives might have been saved!—if Al Gore had become President in 2001. For lack of a few million dollars in Tampa twenty years ago, Miami may be gone in fifty. If you’re not sure which cause is most important, just pick one; or better yet, donate to a diversified portfolio of charities and political campaigns. Diversified investment isn’t just about monetary return.

And you should think carefully about what you’re doing with the rest of your life. This can be hard to do; we can easily get so caught up in just getting through the day, getting through the week, just getting by, that we lose sight of having a broader mission in life. Of course, I don’t know what your situation is; it’s possible things really are so desperate for you that you have no choice but to keep your head down and muddle through. But you should also consider the possibility that this is not the case: You may not be as desperate as you feel. You may have more options than you know. Most “starving artists” don’t actually starve. More people regret staying in their dead-end jobs than regret quitting to follow their dreams. I guess if you stay in a high-paying job in order to earn to give, that might really be ethically optimal; but I doubt it will make you happy. And in fact some of the most important fields are constrained by a lack of good people doing good work, and not by a simple lack of funding.

I see this especially in economics: As a field, economics is really not focused on the right kind of questions. There’s far too much prestige for incrementally adjusting some overcomplicated unfalsifiable mess of macroeconomic algebra, and not nearly enough for trying to figure out how to mitigate global warming, how to turn back the tide of rising wealth inequality, or what happens to human society once robots take all the middle-class jobs. Good work is being done in devising measures to fight poverty directly, but not in devising means to undermine the authoritarian regimes that are responsible for maintaining poverty. Formal mathematical sophistication is prized, and deep thought about hard questions is eschewed. We are carefully arranging the pebbles on our sandcastle in front of the oncoming tidal wave. I won’t tell you that it’s easy to change this—it certainly hasn’t been easy for me—but I have to imagine it’d be easier with more of us trying rather than with fewer. Nobody needs to donate money to economics departments, but we definitely do need better economists running those departments.

You should ask yourself what it is that you are really good at, what you—you yourself, not anyone else—might do to make a mark on the world. This is not an easy question: I have not quite answered for myself whether I would make more difference as an academic researcher, a policy analyst, a nonfiction author, or even a science fiction author. (If you scoff at the latter: Who would have any concept of AI, space colonization, or transhumanism, if not for science fiction authors? The people who most tilted the dial of human civilization toward this brighter future may well be Clarke, Roddenberry, and Asimov.) It is not impossible to be some combination or even all of these, but the more I try to take on the more difficult my life becomes.

Your own path will look different than mine, different, indeed, than anyone else’s. But you must choose it wisely. For we are very special individuals, living in a very special time.