The Parable of the Dishwasher

JDN 2456478

Much like free trade, technological unemployment is an issue where the consensus opinion among economists diverges quite sharply from that of the general population.

Enough people think that “robots taking our jobs” is something bad that I’ve seen a fair number of memes like this:

EVERY TIME you use the Self Checkout you are ELIMINATING JOBS!

But like almost all economists, I think that self-checkouts, robots, and automation in general are a pretty good thing. They do have a few downsides, chiefly in terms of forcing us to make transitions that are costly and painful; but in general I want more robots, not fewer.

To help turn you toward this view, I offer a parable.

Suppose we have a family, the (stereo)typical American family with a father, a mother, and two kids, a boy named Joe and a girl named Sue.

The kids do chores for their allowance, and split them as follows: Joe always does the dishes, and Sue always vacuums the carpet. They both spend about 1 hour per week and they both get paid $10 a week.

But one day, Dad decides to buy a dishwasher. This dramatically cuts down the time it takes Joe to do the dishes; where he used to spend 1 hour washing dishes, now he can load the dishwasher and get it done in 5 minutes.

  1. Mom suggests they just sell back the dishwasher to get rid of the problem.
  2. Dad says that Joe should now only be paid for the 5 minutes he works each week, so he would now be paid $0.83 per week. (He’s not buying a lot of video games on that allowance.)
  3. Joe protests that he gets the same amount of work done, so he should be paid the same $10 for doing it.
  4. Sue says it would be unfair for her to have to work so much more than Joe, and has a different solution: They’ll trade off the two sets of chores each week, and they should of course get paid the same amount of money for getting the same amount of work done—$10 per kid per week, for an average of 32.5 minutes of work each.
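The numbers in Dad's and Sue's proposals are easy to check; here's a quick sketch (the dollar and time figures are the ones from the parable):

```python
# Checking the allowance arithmetic from the parable.
WEEKLY_PAY = 10.0        # dollars per kid per week
OLD_MINUTES = 60         # each chore used to take an hour
DISHWASHER_MINUTES = 5   # loading the dishwasher
VACUUM_MINUTES = 60      # vacuuming is unchanged

# Dad's proposal: pay Joe only for time worked, at the old implicit wage.
old_wage_per_minute = WEEKLY_PAY / OLD_MINUTES
dads_pay_for_joe = old_wage_per_minute * DISHWASHER_MINUTES
print(f"Dad's proposal: ${dads_pay_for_joe:.2f}/week")  # $0.83/week

# Sue's proposal: alternate chores, same pay, so each kid averages
# (5 + 60) / 2 = 32.5 minutes of work per week.
avg_minutes = (DISHWASHER_MINUTES + VACUUM_MINUTES) / 2
print(f"Sue's proposal: ${WEEKLY_PAY:.2f}/week for {avg_minutes} minutes on average")
```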

Which of those solutions sounds the most sensible to you?

Mom’s solution is clearly the worst, right? It’s the Luddite solution, the one that throws away technological progress and makes everything less efficient. Yet that is the solution being offered by people who say “Don’t use the self-checkout machine!” Indeed, anyone who speaks of the virtues of “hard work” is really speaking Mom’s language here; they should be talking about the virtues of getting things done. The purpose of washing dishes is to have clean dishes, not to “work hard”. And likewise, when we construct bridges or make cars or write books or solve equations, our goal should be to get that thing done—not to fulfill some sense of moral obligation to prove our worthiness through hard work.

Joe’s solution is what neoclassical economics says should happen—higher productivity should yield higher wages, so the same amount of production should yield the same pay. This seems like it could work, but empirically it rarely happens. There’s also something vaguely unfair about it; if productivity increases in your industry but not in someone else’s, you get to cut your work hours dramatically while they are stuck working just as hard as before.

Dad’s “solution” is clearly terrible, and makes no sense at all. Yet this is what we actually tend to observe—capital owners appropriate all (or nearly all) the benefits of the new technology, and workers get displaced or get ever-smaller wages. (I talked about that in a recent post.)

It’s Sue’s solution that really seems to make the most sense, isn’t it? When one type of work becomes more efficient, people should shift into different types of labor so that people can work fewer hours—and wages should rise enough that incomes remain the same. “Baumol’s disease” is not a disease—it is the primary means by which capitalism raises human welfare. (That’s why I prefer to use the term “Baumol Effect” instead.)

One problem with this in practice is that sometimes people can’t switch into other industries. That’s a little hard to imagine in this case, but let’s stipulate that for some reason Joe can’t do the vacuuming. Maybe he has some sort of injury that makes it painful to use the vacuum cleaner, but doesn’t impair his ability to wash dishes. Or maybe he has a severe dust allergy, so bad that the dust thrown up by the vacuum cleaner sends him into fits of coughing.

In that case I think we’re back to Joe’s solution; he should get paid the same for getting the same amount of work done. I’m actually tempted to say that Sue should get paid more, to compensate her for the unfairness; but in the real world there is a pretty harsh budget constraint there, so we need to essentially pretend that Dad only has $20 per week to give out in allowances. A possible compromise would be to raise Sue up to $12 and cut Joe down to $8; Joe will probably still be better off than he was, because he has that extra 55 minutes of free time each week for which he only had to “pay” $2. This also makes the incentives work out better—Joe doesn’t have a reason to malinger and exaggerate his dust allergy just to get out of doing the vacuuming, since he would actually get paid more if he were willing to do the vacuuming; but if his allergy really is that bad, he can still do okay otherwise. (There’s a lesson here for the proper structure of Social Security Disability, methinks.)

But you know what really seems like the best solution? Buy a Roomba.

Buy a Roomba, make it Sue’s job to spend 5 minutes a week keeping the Roomba working at vacuuming the carpet, and continue paying both kids $10 per week. Give them both 55 minutes more per week to hang out with their friends or play video games. Whether you think of this $10 as a “higher wage” for higher productivity or simply an allowance they get anyway—a basic income—ultimately doesn’t matter all that much. The point is that everyone gets enough money and nobody has to work very much, because the robots do everything.

And now, hopefully you see why I think we need more robots, not fewer.

Of course, like any simple analogy, this isn’t perfect; it may be difficult to reduce the hours in some jobs or move more people into them. There are a lot of additional frictions and complications that go into the real-world problem of achieving equitable labor markets. But I hope I’ve gotten across the basic idea that robots are not the problem, and could in fact be the solution—not just to our current labor market woes, but to the very problem of wage labor itself.

My ultimate goal is a world where “work” itself is fundamentally redefined—so that it always means the creative sense of “This painting is some of my best work” and not the menial sense of “Sweeping this floor is so much work!”; so that human beings do things because we want to do them, because they are worth doing, and not because some employer is holding our food and housing hostage if we don’t.

But that will require our whole society to rethink a lot of our core assumptions about work, jobs, and economics in general. We’re so invested in the idea that “hard work” is inherently virtuous that we’ve forgotten that the purpose of an economy is to get things done. Work is not a benefit; work is a cost. Costs are to be reduced. Puritanical sexual norms have been extremely damaging to American society, but time will tell whether the Puritanical work ethic does even more damage to our long-term future.

What can we do to make the world a better place?

JDN 2457475

There are an awful lot of big problems in the world: war, poverty, oppression, disease, terrorism, crime… I could go on for a while, but I think you get the idea. Solving or even mitigating these huge global problems could improve or even save the lives of millions of people.

But precisely because these problems are so big, they can also make us feel powerless. What can one person, or even a hundred people, do against problems on this scale?

The answer is quite simple: Do your share.

No one person can solve any of these problems—not even someone like Bill Gates, though he at least can have a significant impact on poverty and disease because he is so spectacularly, mind-bogglingly rich; the Gates Foundation has a huge impact because its wealth rivals the annual budget of the NIH.

But all of us together can have an enormous impact. This post today is about helping you see just how cheap and easy it would be to end world hunger and cure poverty-related diseases, if we simply got enough people to contribute.

The Against Malaria Foundation releases annual reports for all their regular donors. I recently got a report that my donations personally account for 1/100,000 of their total assets. That’s terrible. The global population is 7 billion people; in the First World alone it’s over 1 billion. I am the 0.01%, at least when it comes to donations to the Against Malaria Foundation.

I’ve given them only $850. Their total assets are only $80 million. They shouldn’t have $80 million—they should have $80 billion. So, please, if you do nothing else as a result of this post, go make a donation to the Against Malaria Foundation. I am entirely serious; if you think you might forget or change your mind, do it right now. Even a dollar would be worth it. If everyone in the First World gave $1, they would get 12 times as much as they currently have.

GiveWell is an excellent source for other places you should donate; they rate charities around the world for their cost-effectiveness in the only way worth doing: Lives saved per dollar donated. They don’t just naively look at what percentage goes to administrative costs; they look at how everything is being spent and how many children have their diseases cured.

Until the end of April, UNICEF is offering an astonishing five times matching funds—meaning that if you donate $10, a full $50 goes to UNICEF projects. I have really mixed feelings about donors that offer matching funds (So what you’re saying is, you won’t give if we don’t?), but when they are being offered, use them.

All those charities are focused on immediate poverty reduction; if you’re looking for somewhere to give that fights Existential Risk, I highly recommend the Union of Concerned Scientists—one of the few Existential Risk organizations that uses evidence-based projections and recognizes that nuclear weapons and climate change are the threats we need to worry about.

And let’s not be too anthropocentrist; there are a lot of other sentient beings on this planet, and Animal Charity Evaluators can help you find which charities will best improve the lives of other animals.

I’ve just listed a whole bunch of ways you can give money—and that probably is the best thing for you to give; your time is probably most efficiently used working in your own profession whatever that may be—but there are other ways you can contribute as well.

One simple but important change you can make, if you haven’t already, is to become vegetarian. Even aside from the horrific treatment of animals in industrial farming, you don’t have to believe that animals deserve rights to understand that meat is murder: meat production is a larger contributor to global greenhouse gas emissions than transportation, so everyone becoming vegetarian would have a larger impact against climate change than taking literally every car and truck in the world off the road. Given that the world population is less than 10 billion, that meat accounts for about 18% of greenhouse emissions, and that the IPCC projects climate change will kill between 10 and 100 million people over the next century, every 500 to 5,000 new vegetarians saves, in expectation, one human life.
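That last estimate is a back-of-envelope calculation; here it is spelled out (the inputs are the rough figures quoted above, not precise data):

```python
# Back-of-envelope: vegetarians per human life saved via climate impact.
population = 10e9          # upper bound on world population
meat_share = 0.18          # meat's rough share of greenhouse emissions
deaths_low, deaths_high = 10e6, 100e6   # projected climate deaths, next century

# If deaths scale with emissions, universal vegetarianism averts ~18% of them.
lives_saved_low = meat_share * deaths_low
lives_saved_high = meat_share * deaths_high

# Vegetarians needed per life saved, at each end of the projection range.
veg_per_life_low = population / lives_saved_high    # optimistic: ~560
veg_per_life_high = population / lives_saved_low    # pessimistic: ~5,600
print(f"One life saved per {veg_per_life_low:,.0f} to {veg_per_life_high:,.0f} vegetarians")
```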

You can move your money from a bank to a credit union, as even the worst credit unions are generally better than the best for-profit banks, and the worst for-profit banks are very, very bad. The actual transition can be fairly inconvenient, but a good credit union will provide you with all the same services, and most credit unions link their networks and have online banking, so for example I can still deposit and withdraw from my University of Michigan Credit Union account while in California.

Another thing you can do is reduce your consumption of sweatshop products in favor of products manufactured under fair labor standards. This is harder than it sounds; it can be very difficult to tell what a company’s true labor conditions are like, as the worst companies work very hard to hide them (now, if they worked half as hard to improve them… it reminds me of how many students seem willing to do twice as much work to cheat as they would to simply learn the material in the first place).

You should not simply stop buying products that say “Made in China”; in fact, this could be counterproductive. We want products to be made in China; we need products to be made in China. What we have to do is improve labor standards in China, so that products made in China are like products made in Japan or Korea—skilled workers with high-paying jobs in high-tech factories. Presumably it doesn’t bother you when something says “Made in Switzerland” or “Made in the UK”, because you know their labor standards are at least as high as our own; that’s where I’d like to get with “Made in China”.

The simplest way to do this is of course to buy Fair Trade products, particularly coffee and chocolate. But most products are not available Fair Trade (there are no Fair Trade computers, and only loose analogues for clothing and shoes).

Moreover, we must not let the perfect be the enemy of the good; companies that have done terrible things in the past may still be the best companies to support, because there are no alternatives that are any better. In order to incentivize improvement, we must buy from the least of all evils for a while until the new competitive pressure makes non-evil corporations viable. With this in mind, the Fair Labor Association may not be wrong to endorse companies like Adidas and Apple, even though they surely have substantial room to improve. Similarly, few companies on the Ethisphere list are spotless, but they probably are genuinely better than their competitors. (Well, those that have competitors; Hasbro is on there. Name a well-known board game, and odds are it’s made by a Hasbro subsidiary: they own Parker Brothers, Milton Bradley, and Wizards of the Coast. Wikipedia has a whole category for Hasbro subsidiaries. Maybe they’ve been trying to tell us something with all those versions of Monopoly?)

I’m not very happy with the current state of labor standards reporting (much less labor standards enforcement), so I don’t want to recommend any of these sources too highly. But if you are considering buying from one of three companies and only one of them is endorsed by the Fair Labor Association, it couldn’t hurt to buy from that one instead of the others.

Buying from ethical companies will generally be more expensive—but rarely prohibitively so, and this is part of how we use price signals to incentivize better behavior. For about a year, BP gasoline was clearly cheaper than other gasoline, because nobody wanted to buy from BP and they were forced to sell at a discount after the Deepwater Horizon disaster. Their profits tanked as a result. That’s the kind of outcome we want—preferably for a longer period of time.

I suppose you could also save money by buying cheaper products and then donate the difference, and in the short run this would actually be most cost-effective for global utility; but (1) nobody really does that; people who buy Fair Trade also tend to donate more, maybe just because they are more generous in general, and (2) in the long run what we actually want is more ethical businesses, not a system where businesses exploit everyone and then we rely upon private charity to compensate us for our exploitation. For similar reasons, philanthropy is a stopgap—and a much-needed one—but not a solution.

Of course, you can vote. And don’t just vote in the big name elections like President of the United States. Your personal impact may actually be larger from voting in legislatures and even local elections and ballot proposals. Certainly your probability of being a deciding vote is far larger, though this is compensated by the smaller effect of the resulting policies. Most US states have a website where you can look up any upcoming ballots you’ll be eligible to vote on, so you can plan out your decisions well in advance.

You may even want to consider running for office at the local level, though I realize this is a very large commitment. But most local officials run uncontested, which means there is no real democracy at work there at all.

Finally, you can contribute in some small way to making the world a better place simply by spreading the word, as I hope I’m doing right now.

Efficient markets and the Wisdom of Crowds

JDN 2457471

There is a well-known principle in social science called wisdom of the crowd, popularized in a book called The Wisdom of Crowds by James Surowiecki. It basically says that a group of people who aggregate their opinions can be more accurate than any individual opinion, even that of an expert; it is one of the fundamental justifications for democracy and free markets.

It is also often used to justify what is called the efficient market hypothesis, which in its weak form is approximately true (financial markets are unpredictable, unless you’ve got inside information or really good tools), but in its strong form is absolutely ludicrous (no, financial markets do not accurately reflect the most rational expectation of future outcomes in the real economy).

This post is about what the wisdom of the crowd actually does—and does not—say, and why it fails to justify the efficient market hypothesis even in its weak form.

The wisdom of the crowd says that when a group of people with a moderate level of accuracy all get together and average their predictions, the resulting estimate is better, on average, than what they came up with individually. A group of people who all “sort of” know something can get together and create a prediction that is much better than any one of them could come up with.

This can actually be articulated as a mathematical theorem, the diversity prediction theorem:

Collective error = average individual error − prediction diversity

(\bar{x} - \mu)^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2 - \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2

Here each x_i is an individual estimate, \bar{x} is their sample mean, and \mu is the true value.

This is a mathematical theorem; it’s beyond dispute. It holds by pure algebra: write x_i - \mu = (x_i - \bar{x}) + (\bar{x} - \mu), expand the square, and the cross term averages to zero because deviations from the sample mean sum to zero.

But in applying it, we must be careful; it doesn’t simply say that adding diversity will improve our predictions. Adding diversity will improve our predictions provided that we don’t increase average individual error too much.

Here, I’ll give some examples. Suppose we are guessing the weight of a Smart car. Person A says 1500 pounds; person B says 3000 pounds. Suppose the true weight is 2000 pounds.

Our collective estimate is the average of 1500 and 3000, which is 2250. So it’s a bit high.

Suppose we add person C, who guesses the weight of the car as 1800 pounds. This is closer to the real value, so we’d expect our collective estimate to improve, and it does: It’s now 2100 pounds.

But where the theorem can be a bit counter-intuitive is that we can add someone who is not particularly accurate, and still improve the estimate: If we also add person D, who guessed 1400 pounds, this seems like it should make our estimate worse—but it does not. Our new estimate is now 1925 pounds, which is a bit closer to the truth than 2100—and furthermore better than any individual estimate.

However, the theorem does not say that adding someone new will always improve the estimate; if we add person E, who has no idea how cars work and says that the car must weigh 50 pounds, we throw off the estimate so that it is now 1550 pounds. If we add enough such people, we can make the entire estimate wildly inaccurate: Add four more copies of person E and our new estimate of the car’s weight is a mere 883 pounds.

In all cases the theorem holds, however. Let’s consider the case where adding person E ruined our otherwise very good estimate.

Before we added person E, we had four estimates:

A said 1500, B said 3000, C said 1800, and D said 1400.

Our collective estimate was 1925.

Thus, collective error is (1925 – 2000)^2 = 5625, uh, square pounds? (Variances often have weird units.)

The individual errors are, respectively:

A: (1500 – 2000)^2 = 250,000

B: (3000 – 2000)^2 = 1,000,000

C: (1800 – 2000)^2 = 40,000

D: (1400 – 2000)^2 = 360,000

Average individual error is 412,500. So our collective error is much smaller than our average individual error. The difference is accounted for by prediction diversity.

Prediction diversity is found as the squared distance between each individual estimate and the average estimate:

A: (1500 – 1925)^2 = 180,625

B: (3000 – 1925)^2 = 1,155,625

C: (1800 – 1925)^2 = 15,625

D: (1400 – 1925)^2 = 275,625

Thus, prediction diversity is the average of these, 406,875. And sure enough, 412,500 – 406,875 = 5625.

When we add on the fifth estimate of 50 and repeat the process, here’s what we get: The new collective estimate is 1550. The prediction diversity went way up; it’s now 888,000. But the average error rose even faster, and is now 1,090,500. As a result, the collective error got a lot worse, and is now 202,500. So adding more people does not always improve your estimates, if those people have no idea what they’re doing.
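Every number in this example is easy to verify; a few lines of Python recompute the whole decomposition:

```python
# Diversity prediction theorem: collective error equals average
# individual error minus prediction diversity, for any set of guesses.
def decompose(guesses, truth):
    n = len(guesses)
    mean = sum(guesses) / n
    collective = (mean - truth) ** 2
    avg_error = sum((x - truth) ** 2 for x in guesses) / n
    diversity = sum((x - mean) ** 2 for x in guesses) / n
    return collective, avg_error, diversity

truth = 2000  # actual weight of the car, in pounds

# Persons A, B, C, D
c, e, d = decompose([1500, 3000, 1800, 1400], truth)
print(c, e, d)  # 5625.0 412500.0 406875.0

# Adding person E's wild guess of 50 pounds
c, e, d = decompose([1500, 3000, 1800, 1400, 50], truth)
print(c, e, d)  # 202500.0 1090500.0 888000.0

# In both cases, collective error = avg_error - diversity.
```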

When it comes to the stock market, most people have no idea what they’re doing. Even most financial experts can forecast the market no better than chance.

The wisdom of the crowd holds when most people can basically get it right; maybe their predictions are 75% accurate for binary choices, or within a factor of 2 for quantitative estimates, something like that. Then, each guess is decent, but not great; and by combining a lot of decent estimates we get one really good estimate.

Of course, the diversity prediction theorem does still apply: Most individual investors underperform the stock market as a whole, just as the theorem would say—average individual prediction is worse than collective prediction.

Moreover, stock prices do have something to do with fundamentals, because fundamental analysis does often work, contrary to most forms of the efficient market hypothesis. (It’s a very oddly named hypothesis, really; what’s “efficient” about a market that is totally unpredictable?)

But in order for stock prices to actually be a good measure of the real value of a company, most of the people buying and selling stock would have to be using fundamental analysis. In order for stocks to reflect real values, stock choices must be based on real values—that’s the only mechanism by which real values could ever enter the equation.

While there are definitely a lot of people who use fundamental analysis, it really doesn’t seem like there are enough. At least for short-run ups and downs, most decisions seem to be made on a casual form of technical analysis: “It’s going up! Buy!” or “It just went down! Buy!” (Yes, you hear both of those; the latter is closer to true for short-run fluctuations, but the real pattern is a bit more complicated than that.)

For the wisdom of the crowd to work, the estimates need to be independent—each person makes a reasonable guess on their own, then we average over all the guesses. When you do this for simple tasks like the weight of a car or the number of jellybeans in a jar, you get some really astonishingly accurate results. Even for harder tasks where people have a vague idea, like the number of visible stars in the sky, you can do pretty well. But if you let people talk about their answers, the aggregate guess often gets much worse, especially if there are no experts in the group. And we definitely talk about stocks an awful lot; one of the best sources for utterly meaningless post hoc statements in the world is the financial news section, which will always find some explanation for any market change, often tenuous at best, and then offer some sort of prediction for what will happen next which is almost always wrong.

This lack of independence fundamentally changes the system. The main thing that people consider when choosing which stocks to buy is which stocks other people are buying. This is called a Keynesian beauty contest; apparently these beauty contests used to be a thing in the 1930s, where you’d send in pictures of your baby and then people would vote on which baby was the cutest—but the key part in Keynes’s version is that you win money not based on whether your baby wins, but based on whether the baby you vote for wins. So you don’t necessarily vote for the one you think is cutest; you vote for the one you think other people will vote for, which is based on what they think other people will vote for, and so on. There are ways to make that infinite series converge, but there are also lots of cases where it diverges, and in reality what I think happens here is our brains max out and give up. (According to Dennett, we can handle about 7 layers of intentionality before our brains max out.)
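A standard classroom cousin of the beauty contest, the “guess 2/3 of the average” game, shows how those layers of reasoning play out. (This is a common illustration, not Keynes’s original example.)

```python
# "Guess 2/3 of the average" game: each layer of reasoning about what
# others will guess is a best response to the previous layer, pushing
# the guess toward zero. With bounded depth, it stalls well above zero.
guess = 50.0  # a naive level-0 guess: the middle of [0, 100]
for depth in range(8):  # roughly the 7-layer limit before brains max out
    print(f"level {depth}: guess {guess:.2f}")
    guess *= 2 / 3      # best response to everyone else guessing `guess`
# Only in the limit of infinitely many layers does the guess reach 0,
# the game-theoretic equilibrium; real players stop after a few layers.
```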

A similar process is at work in the stock market, as well as with strategic voting—yet another reason why we should be designing our voting system to disincentivize strategic voting.

What we have then is a system with a feedback loop: We buy Apple because we buy Apple because we buy Apple. (Just as we use Facebook because we use Facebook because we use Facebook.)

Feedback loops can introduce chaotic behavior. Depending on the precise parameters involved, all of this guessing could turn out to converge to the real value of companies—or it could converge to something else entirely, or keep fluctuating all over the place indefinitely. Since the latter seems to be what happens, I think the real parameters are probably in that range of fluctuating instability. (I’ve actually programmed some simple computer models with parameters in that chaotic range, and they come out pretty darn close to the real behavior of stock markets—much better than the Black-Scholes model, for instance.) If you want a really in-depth analysis of the irrationality of financial markets, I highly recommend Robert Shiller, who after all won a Nobel for this sort of thing.
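To see how a feedback loop can generate that kind of instability, here is a deliberately minimal sketch using the textbook logistic map; it is far simpler than any serious market model (and not the model described above), but it shows the qualitative point about parameter ranges:

```python
# Logistic map x -> r*x*(1-x): a one-line feedback loop. Read x as a
# normalized "demand for the stock driven by demand for the stock".
def trajectory(r, x0=0.4, steps=10):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.8: the feedback converges to a stable value (a "fundamental").
stable = trajectory(2.8, steps=50)

# r = 3.9: the same rule never settles, and tiny differences in the
# starting point blow up, so the deterministic series looks random.
chaotic_a = trajectory(3.9, x0=0.400)
chaotic_b = trajectory(3.9, x0=0.401)
print(abs(chaotic_a[-1] - chaotic_b[-1]))  # the tiny initial gap has grown
```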

What does this mean for the efficient market hypothesis? That it’s basically a non-starter. We have no reason to believe that stock prices accurately integrate real fundamental information, and many reasons to think they do not. The unpredictability of stock prices could be just that—unpredictability, meaning that stock prices in the short run are simply random, and short-term trading is literally gambling. In the long run they seem to settle out into trends with some relation to fundamentals—but as Keynes said, in the long run we are all dead, and the market can remain irrational longer than you can remain solvent.

Free trade is not the problem. Billionaires are the problem.

JDN 2457468

One thing that really stuck out to me about the analysis of the outcome of the Michigan primary elections was that people kept talking about trade; when Bernie Sanders, a center-left social democrat, and Donald Trump, a far-right populist nationalist (and maybe even crypto-fascist) are the winners, something strange is at work. The one common element that the two victors seemed to have was their opposition to free trade agreements. And while people give many reasons to support Trump, many quite baffling, his staunch protectionism is one of the stronger ones. While Sanders is not as staunchly protectionist, he definitely has opposed many free-trade agreements.

Most of the American middle class feels as though they are running in place, working as hard as they can to stay where they are and never moving forward. The income statistics back them up on this; as you can see in this graph from FRED, real median household income in the US is actually lower than it was a decade ago; it never really did recover from the Second Depression:

[Figure: US real median household income, from FRED]

As I talk to people about why they think this is, one of the biggest reasons they always give is some variant of “We keep sending our jobs to China.” There is this deep-seated intuition most Americans seem to have that the degradation of the middle class is the result of trade globalization. Bernie Sanders speaks about ending this by changes in tax policy and stronger labor regulations (which actually makes some sense); Donald Trump speaks of ending this by keeping out all those dirty foreigners (which appeals to the worst in us); but ultimately, they both are working from the narrative that free trade is the problem.

But free trade is not the problem. Like almost all economists, I support free trade. Free trade agreements might be part of the problem—but that’s because a lot of free trade agreements aren’t really about free trade. Many trade agreements, especially the infamous TRIPS accord, were primarily about restricting trade—specifically on “intellectual property” goods like patented drugs and copyrighted books. They were about expanding the monopoly power of corporations over their products so that the monopoly applied not just to the United States, but indeed to the whole world. This is the opposite of free trade and everything that it stands for. The TPP was a mixed bag, with some genuinely free-trade provisions (removing tariffs on imported cars) and some awful anti-trade provisions (making patents on drugs even stronger).

Every product we buy as an import is another product we sell as an export. This is not quite true, as the US does run a trade deficit; but our trade deficit is small compared to our overall volume of trade (which is ludicrously huge). Total US exports for 2014, the last full year we’ve fully tabulated, were $3.306 trillion—roughly the entire budget of the federal government. Total US imports for 2014 were $3.578 trillion. This makes our trade deficit $272 billion, which is 7.6% of our imports, or about 1.5% of our GDP of $18.148 trillion. So to be more precise, every 100 products we buy as imports are 92 products we sell as exports.
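Those trade figures are easy to recompute:

```python
# 2014 US trade figures from the text, in trillions of dollars.
exports, imports, gdp = 3.306, 3.578, 18.148

deficit = imports - exports
print(f"Trade deficit: ${deficit * 1000:.0f} billion")       # $272 billion
print(f"...as a share of imports: {deficit / imports:.1%}")  # 7.6%
print(f"...as a share of GDP: {deficit / gdp:.1%}")          # 1.5%

# "Every 100 products we buy as imports are 92 we sell as exports":
print(f"Exports per 100 imports: {100 * exports / imports:.0f}")  # 92
```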

If we stopped making all these imports, what would happen? Well, for one thing, millions of people in China would lose their jobs and fall back into poverty. But even if you’re just looking at the US specifically, there’s no reason to think that domestic production would increase nearly as much as the volume of trade was reduced, because the whole point of trade is that it’s more efficient than domestic production alone. It is actually generous to think that by switching to autarky we’d have even half the domestic production that we’re currently buying in imports. And then of course countries we export to would retaliate, and we’d lose all those exports. The net effect of cutting ourselves off from world trade would be a loss of about $1.5 trillion in GDP—average income would drop by 8%.

Now, to be fair, there are winners and losers. Offshoring of manufacturing does destroy the manufacturing jobs that are offshored; but at least when done properly, it also creates new jobs by improved efficiency. These two effects are about the same size, so the overall effect is a small decline in the overall number of US manufacturing jobs. It’s not nearly large enough to account for the collapsing middle class.

Globalization may be one contributor to rising inequality, as may changes in technology that make some workers (software programmers) wildly more productive as they make other workers (cashiers, machinists, and soon truck drivers) obsolete. But those of us who have looked carefully at the causes of rising income inequality know that this is at best a small part of what’s really going on.

The real cause is what Bernie Sanders is always on about: The 1%. Gains in income in the US for the last few decades (roughly as long as I’ve been alive) have been concentrated in a very small minority of the population—in fact, even 1% may be too coarse. Most of the income gains have actually gone to more like the top 0.5% or top 0.25%, and the most spectacular increases in income have all been concentrated in the top 0.01%.

The story that we’ve been told—I dare say sold—by the mainstream media (which is, let’s face it, owned by a handful of corporations) is that new technology has made it so that anyone who works hard (or at least anyone who is talented and works hard and gets a bit lucky) can succeed or even excel in this new tech-driven economy.

I just gave up on a piece of drivel called Bold that was seriously trying to argue that anyone with a brilliant idea can become a billionaire if they just try hard enough. (It also seemed positively gleeful about the possibility of a cyberpunk dystopia in which corporations use mass surveillance on their customers and competitors—yes, seriously, this was portrayed as a good thing.) If you must read it, please, don’t give these people any more money. Find it in a library, or find a free ebook version, or something. Instead you should give money to the people who wrote the book I switched to, Raw Deal, whose authors actually understand what’s going on here (though I maintain that the book should in fact be called Uber Capitalism).

When you look at where all the money from the tech-driven “new economy” is going, it’s not to the people who actually make things run. A typical wage for a web developer is about $35 per hour, and that’s relatively good as far as entry-level tech jobs go. A typical wage for a social media intern is about $11 per hour, which is probably less than what the minimum wage ought to be. The “sharing economy” doesn’t produce outstandingly high incomes for workers, just outstandingly high income risk, because you aren’t given a full-time salary. Uber has claimed that its drivers earn $90,000 per year, but in fact their real take-home pay is about $25 per hour. A typical employee at Airbnb makes $28 per hour. If you do manage to find full-time hours at those rates, you can make a middle-class salary; but that’s a big “if”. “Sharing economy”? Robert Reich has aptly renamed it the “share the crumbs economy”.

So where’s all this money going? CEOs. The CEO of Uber has net wealth of $8 billion. The CEO of Airbnb has net wealth of $3.3 billion. But they are paupers compared to the true giants of the tech industry: Larry Page of Google has $36 billion. Jeff Bezos of Amazon has $49 billion. And of course who can forget Bill Gates, founder of Microsoft, and his mind-boggling $77 billion.

Can we seriously believe that this is because their ideas were so brilliant, or because they are so talented and skilled? Uber’s “brilliant” idea is just to monetize carpooling and automate linking people up. Airbnb’s “revolutionary” concept is an app to advertise your bed-and-breakfast. At least Google invented some very impressive search algorithms, Amazon created one of the most competitive product markets in the world, and Microsoft democratized business computing. Of course, none of these would be possible without the invention of the Internet by government and university projects.

As for what these CEOs do that is so skilled? At this point they basically don’t do… anything. Any real work they did was in the past, and now it’s all delegated to other people; they just rake in money because they own things. They can manage if they want, but most of them have figured out that the best CEOs do very little, while CEOs who micromanage typically fail. While I can see some argument for the idea that working hard in the past could merit you owning capital in the future, I have a very hard time seeing how being very good at programming and marketing makes you deserve to have so much money you could buy a new Ferrari every day for the rest of your life.

That’s the heuristic I like to tell people, to help them see the absolutely enormous difference between a millionaire and a billionaire: A millionaire is someone who can buy a Ferrari. A billionaire is someone who can buy a new Ferrari every day for the rest of their life. A high double-digit billionaire like Bezos or Gates could buy a new Ferrari every hour for the rest of their life. (Do the math; a Ferrari is about $250,000. Remember that they get a return on capital typically between 5% and 15% per year. With $1 billion, you get $50 to $150 million just in interest and dividends every year, and $100 million is enough to buy 365 Ferraris. As long as you don’t have several very bad years in a row on your stocks, you can keep doing this more or less forever—and that’s with only $1 billion.)
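If you want to check that arithmetic yourself, here is a quick sketch; the Ferrari price and return rates are the rough figures used above, nothing more precise.

```python
FERRARI = 250_000  # rough price of a new Ferrari, as quoted above

def ferraris_per_year(wealth, annual_return):
    """Ferraris purchasable each year from investment income alone."""
    return (wealth * annual_return) / FERRARI

# $1 billion at a middle-of-the-range 10% return:
per_day = ferraris_per_year(1e9, 0.10)    # 400 per year: one a day, with change
# $77 billion (Gates-level) at the same 10%:
per_hour = ferraris_per_year(77e9, 0.10)  # ~30,800 per year vs. 8,760 hours in a year
```

The point of the sketch is that the purchases never touch the principal; they come entirely out of the return on capital.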

Immigration and globalization are not what is killing the American middle class. Corporatization is what’s killing the American middle class. Specifically, the use of regulatory capture to enforce monopoly power and thereby appropriate almost all the gains of new technologies into the hands of a few dozen billionaires. Typically this is achieved through intellectual property, since corporate-owned patents basically just are monopolistic regulatory capture.

Since 1984, US real GDP per capita rose from $28,416 to $46,405 (in 2005 dollars). In that same time period, real median household income only rose from $48,664 to $53,657 (in 2014 dollars). That means that the total amount of income per person in the US rose by 49 log points (63%), while the amount of income that a typical family received rose only 10 log points (10%). If median income had risen at the same rate as per-capita GDP (and had inequality remained constant, it would have), it would now be over $79,000, instead of $53,657. That is, a typical family would have $25,000 more than they actually do. The poverty line for a family of 4 is $24,300; so if you’re a family of 4 or fewer, the billionaires owe you a poverty line. You should have three times the poverty line, and in fact you have only two—because they took the rest.
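Those log-point figures can be verified directly; the dollar amounts below are exactly the ones quoted above.

```python
from math import log

gdp_1984, gdp_now = 28_416, 46_405  # real GDP per capita, 2005 dollars
med_1984, med_now = 48_664, 53_657  # real median household income, 2014 dollars

gdp_lp = 100 * log(gdp_now / gdp_1984)  # ~49 log points (~63% growth)
med_lp = 100 * log(med_now / med_1984)  # ~10 log points (~10% growth)

# Median income if it had tracked per-capita GDP growth:
counterfactual = med_1984 * (gdp_now / gdp_1984)  # over $79,000
gap = counterfactual - med_now                    # about $25,000 per household
```

(Log points are convenient here precisely because they make growth rates additive; 49 lp corresponds to a 63% increase because e^0.49 ≈ 1.63.)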

And let me be very clear: I mean took. I mean stole, in a very real sense. This is not wealth that they created by their brilliance and hard work. This is wealth that they expropriated by exploiting people and manipulating the system in their favor. There is no way that the top 1% deserves to have as much wealth as the bottom 95% combined. They may be talented; they may work hard; but they are not that talented, and they do not work that hard. You speak of “confiscation of wealth” and you mean income taxes? No, this is the confiscation of our nation’s wealth.

Those of us who voted for Bernie Sanders voted for someone who is trying to stop it.

Those of you who voted for Donald Trump? Congratulations on supporting someone who epitomizes it.

This is why we must vote our consciences.

JDN 2457465

As I write, Bernie Sanders has just officially won the Michigan Democratic Primary. It was a close race—he was ahead by about 2% the entire time—so the delegates will be split; but he won.

This is notable because so many forecasters said it was impossible. Before the election, Nate Silver, one of the best political forecasters in the world (and he still deserves that title) had predicted a less than 1% chance Bernie Sanders could win. In fact, had he taken his models literally, he would have predicted a less than 1 in 10 million chance Bernie Sanders could win—I think it speaks highly of him that he was not willing to trust his models quite that far. I got into one of the wonkiest flamewars of all time earlier today debating whether this kind of egregious statistical error should call into question many of our standard statistical methods (I think it should; another good example is the total failure of the Black-Scholes model during the 2008 financial crisis).

Had we trusted the forecasters, held our noses and voted for the “electable” candidate, this would not have happened. But instead we voted our consciences, and the candidate we really wanted won.

It is an unfortunate truth that our system of plurality “first-past-the-post” voting does actually strongly incentivize strategic voting. Indeed, did it not, we wouldn’t need primaries in the first place. With a good range voting or even Condorcet voting system, you could basically just vote honestly among all candidates and expect a good outcome. Technically it’s still possible to vote strategically in range and Condorcet systems, but it’s not necessary the way it is in plurality vote systems.

The reason we need primaries is that plurality voting is not cloneproof; if two very similar candidates (“clones”) run that everyone likes, votes will be split between them and the two highly-favored candidates can lose to a less-favored candidate. Condorcet voting is cloneproof in most circumstances, and range voting is provably cloneproof everywhere and always. (Have I mentioned that we should really have range voting?)

Hillary Clinton and Bernie Sanders are not clones by any means, but they are considerably more similar to one another than either is to Donald Trump or Ted Cruz. If all the Republicans were to immediately drop out besides Trump while Clinton and Sanders stayed in the race, Trump could end up winning because votes were split between Clinton and Sanders. Primaries exist to prevent this outcome; either Sanders or Clinton will be in the final election, but not both (the #BernieOrBust people notwithstanding), so it will be a simple matter of whether they are preferred to Trump, which of course both Clinton and Sanders are. Don’t put too much stock in these polls, as polls this early are wildly unreliable. But I think they at least give us some sense of which direction the outcome is likely to be.
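The vote-splitting failure is easy to see numerically. Here is a toy electorate, with every number made up purely for illustration: two similar candidates A and B share a 60% majority, and a third candidate C holds the remaining 40%. Plurality elects C; range voting elects one of the majority’s candidates.

```python
# Toy electorate: (share of voters, {candidate: 0-10 score}).
# A and B are "clones" everyone in the majority likes; C is the opponent.
blocs = [
    (0.30, {"A": 10, "B": 9,  "C": 0}),
    (0.30, {"A": 8,  "B": 10, "C": 0}),
    (0.40, {"A": 0,  "B": 0,  "C": 10}),
]

# Plurality: each bloc votes only for its single favorite.
plurality = {}
for share, scores in blocs:
    top = max(scores, key=scores.get)
    plurality[top] = plurality.get(top, 0) + share

# Range voting: every voter scores every candidate; highest total wins.
range_totals = {c: sum(share * scores[c] for share, scores in blocs)
                for c in "ABC"}

plurality_winner = max(plurality, key=plurality.get)    # C, on just 40%
range_winner = max(range_totals, key=range_totals.get)  # B, the broad favorite
```

The same electorate, two different winners: plurality lets the 60% majority defeat itself by splitting between its two clones, which is exactly the failure primaries exist to paper over.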

Ideally, we wouldn’t need to worry about that, and we could just vote our consciences all the time. But in the general election, you really do need to vote a little strategically and choose the better (or less-bad) option among the two major parties. No third-party Presidential candidate has ever gotten close to actually winning an election, and the best they ever seem to do is acting as weak clones undermining other similar candidates, as Ross Perot and Ralph Nader did. (Still, if you were thinking of not voting at all, it is obviously preferable for you to vote for a third-party candidate. If everyone who didn’t vote had instead voted for Ralph Nader, Nader would have won by a landslide—and US climate policy would be at least a decade ahead of where it is now, and we might not be already halfway to the 2 C global warming threshold.)

But in the primary? Vote your conscience. Primaries exist to make this possible, and we just showed that it can work. When people actually turn out to vote and support candidates they believe in, they win elections. If the same thing happens in several other states that just happened in Michigan, Bernie Sanders could win this election. And even if he doesn’t, he’s already gone a lot further than most of the pundits ever thought he could. (Sadly, so has Trump.)

We do not benefit from economic injustice.

JDN 2457461

Recently I think I figured out why so many middle-class White Americans express so much guilt about global injustice: A lot of people seem to think that we actually benefit from it. Thus, they feel caught between a rock and a hard place; conquering injustice would mean undermining their own already precarious standard of living, while leaving it in place is unconscionable.

The compromise, apparently, is to feel really, really guilty about it, constantly tell people to “check their privilege” in this bizarre form of trendy autoflagellation, and then… never really get around to doing anything about the injustice.

(I guess that’s better than the conservative interpretation, which seems to be that since we benefit from this, we should keep doing it, and make sure we elect big, strong leaders who will make that happen.)

So let me tell you in no uncertain words: You do not benefit from this.

If anyone does—and as I’ll get to in a moment, that is not even necessarily true—then it is the billionaires who own the multinational corporations that orchestrate these abuses. Billionaires and billionaires only stand to gain from the exploitation of workers in the US, China, and everywhere else.

How do I know this with such certainty? Allow me to explain.

First of all, it is a common perception that prices of goods would be unattainably high if they were not produced on the backs of sweatshop workers. This perception is mistaken. The primary effect of the exploitation is simply to raise the profits of the corporation; there is a secondary effect of raising the price a moderate amount; and even this would be overwhelmed by the long-run dynamic effect of the increased consumer spending if workers were paid fairly.

Let’s take an iPad, for example. The price of iPads varies around the world in a combination of purchasing power parity and outright price discrimination; but the top model almost never sells for less than $500. The raw material expenditure involved in producing one is about $370—and the labor expenditure? Just $11. Not $110; $11. If it had been $110, the price could still be kept under $500 and turn a profit; the profit would simply be much smaller. That is, even if prices are really so elastic that Americans would refuse to buy an iPad at any more than $500, Apple could still afford to raise the wages they pay (or rather, their subcontractors pay) workers by an order of magnitude. A worker who currently works 50 hours a week for $10 per day could now make $10 per hour. And the price would not have to change; Apple would simply lose profit, which is why they don’t do this. In the absence of pressure to the contrary, corporations will do whatever they can to maximize profits.
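The unit economics are worth laying out explicitly; all figures below are the ones quoted above.

```python
# Per-unit iPad figures from the text.
price, materials, labor = 500, 370, 11

profit_now = price - materials - labor       # $119 per unit today
labor_10x = labor * 10                       # pay the workers 10x current wages
profit_fair = price - materials - labor_10x  # $20: much smaller, still positive

# Even passing the entire labor increase through to the consumer is modest:
price_passthrough = price + (labor_10x - labor)  # $599
```

So a tenfold wage increase can be absorbed entirely out of profit at the current price, and even full 1:1 pass-through only moves the price by about $100.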

Now, in fact, the price probably would go up, because Apple fans are among the most inelastic technology consumers in the world. But suppose it went up to $600, which would mean a 1:1 absorption of these higher labor expenditures into price. Does that really sound like “Americans could never afford this”? A few people right on the edge might decide they couldn’t buy it at that price, but it wouldn’t be very many—indeed, like any well-managed monopoly, Apple knows to stop raising the price at the point where they start losing more revenue than they gain.

Similarly, half the price of an iPhone is pure profit for Apple, and only 2% goes into labor. Once again, wages could be raised by an order of magnitude and the price would not need to change.

Apple is a particularly obvious example, but it’s quite simple to see why exploitative labor cannot be the source of improved economic efficiency. Paying workers less does not make them do better work. Treating people more harshly does not improve their performance. Quite the opposite: People work much harder when they are treated well. In addition, at the levels of income we’re talking about, small improvements in wages would result in substantial improvements in worker health, further improving performance. Finally, substitution effect dominates income effect at low incomes. At very high incomes, income effect can dominate substitution effect, so higher wages might result in less work—but it is precisely when we’re talking about poor people that it makes the least sense to say they would work less if you paid them more and treated them better.

At most, paying higher wages can redistribute existing wealth, if we assume that the total amount of wealth does not increase. So it’s theoretically possible that paying higher wages to sweatshop workers would result in them getting some of the stuff that we currently have (essentially by a price mechanism where the things we want get more expensive, but our own wages don’t go up). But in fact our wages are most likely too low as well—wages in the US became unlinked from productivity around the time of Reagan—so there’s reason to think that a more just system would improve our standard of living also. Where would all the extra wealth come from? Well, there’s an awful lot of room at the top.

The top 1% in the US own 35% of net wealth, about as much as the bottom 95%. The 400 billionaires of the Forbes list have more wealth than the entire African-American population combined. (We’re double-counting Oprah—but that’s it, she’s the only African-American billionaire in the US.) So even assuming that the total amount of wealth remains constant (which is too conservative, as I’ll get to in a moment), improving global labor standards wouldn’t need to pull any wealth from the middle class; it could get plenty just from the top 0.01%.

In surveys, most Americans are willing to pay more for goods in order to improve labor standards—and the amounts that people are willing to pay, while they may seem small (on the order of 10% to 20% more), are in fact clearly enough that they could substantially increase the wages of sweatshop workers. The biggest problem is that corporations are so good at covering their tracks that it’s difficult to know whether you are really supporting higher labor standards. The multiple layers of international subcontractors make things even more complicated; the people who directly decide the wages are not the people who ultimately profit from them, because subcontractors are competitive while the multinationals that control them are monopsonists.

But for now I’m not going to deal with the thorny question of how we can actually regulate multinational corporations to stop them from using sweatshops. Right now, I just really want to get everyone on the same page and be absolutely clear about cui bono. If there is a benefit at all, it’s not going to you and me.

Why do I keep saying “if”? As so many people will ask me: “Isn’t it obvious that if one person gets less money, someone else must get more?” If you’ve been following my blog at all, you know that the answer is no.

On a single transaction, with everything else held constant, that is true. But we’re not talking about a single transaction. We’re talking about a system of global markets. Indeed, we’re not really talking about money at all; we’re talking about wealth.

By paying their workers so little that those workers can barely survive, corporations are making it impossible for those workers to go out and buy things of their own. Since the costs of higher wages are concentrated in one corporation while the benefits of higher wages are spread out across society, there is a Tragedy of the Commons where each corporation acting in its own self-interest undermines the consumer base that would have benefited all corporations (not to mention people who don’t own corporations). It does depend on some parameters we haven’t measured very precisely, but under a wide range of plausible values, it works out that literally everyone is worse off under this system than they would have been under a system of fair wages.
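Here is a toy version of that Tragedy of the Commons. Every parameter is invented purely for illustration: identical firms choose a wage, workers spend what they earn, and each firm’s revenue is its even share of that aggregate spending.

```python
# Toy demand-spillover model; all numbers are made up for illustration.
N = 10                    # identical firms
LOW, HIGH = 1.0, 2.0      # per-firm wage bill options
REVENUE_SHARE = 1.5 / N   # each $1 of aggregate wages generates $1.50 of
                          # total revenue, split evenly among the N firms

def profit(own_wage, others_wage):
    """Profit of one firm given its own wage and everyone else's."""
    total_wages = own_wage + (N - 1) * others_wage
    return REVENUE_SHARE * total_wages - own_wage

all_low  = profit(LOW, LOW)    # everyone underpays
all_high = profit(HIGH, HIGH)  # everyone pays fair wages: better for every firm
defect   = profit(LOW, HIGH)   # one firm underpays while the rest don't: best for it
sucker   = profit(HIGH, LOW)   # one firm pays fairly alone: worst of all
```

With these parameters the ordering is defect > all_high > all_low > sucker, which is exactly a Prisoner’s Dilemma: every firm individually profits from cutting wages, yet every firm ends up poorer when they all do.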

This is not simply theoretical. We have empirical data about what happened when companies (in the US at least) stopped using an even more extreme form of labor exploitation: slavery.

Because we were on the classical gold standard, GDP growth in the US in the 19th century was extremely erratic, jumping up and down as high as 10 lp and as low as -5 lp. But if you try to smooth out this roller-coaster business cycle, you can see that our growth rate did not appear to be slowed by the ending of slavery:

[Figure: US real GDP growth rate, 19th century]

Looking at the level of real per capita GDP (on a log scale) shows a continuous growth trend as if nothing had changed at all:

[Figure: US real per-capita GDP (log scale), 19th century]

In fact, if you average the growth rates (in log points, averaging makes sense) from 1800 to 1860 as antebellum and from 1865 to 1900 as postbellum, you find that the antebellum growth rate averaged 1.04 lp, while the postbellum growth rate averaged 1.77 lp. Over a period of 50 years, that’s the difference between growing by a factor of 1.7 and growing by a factor of 2.4. Of course, there were a lot of other factors involved besides the end of slavery—but at the very least it seems clear that ending slavery did not reduce economic growth, which it would have if slavery were actually an efficient economic system.
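That averaging is easy to reproduce; the rates below are the ones computed above, and compounding a log-point rate over 50 years is just multiplication before exponentiating.

```python
from math import exp

antebellum_lp = 1.04  # average annual growth in log points, 1800-1860
postbellum_lp = 1.77  # average annual growth in log points, 1865-1900

# Over 50 years, total growth is 50x the annual log-point rate:
factor_ante = exp(50 * antebellum_lp / 100)  # ~1.7
factor_post = exp(50 * postbellum_lp / 100)  # ~2.4
```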

This is a different question from whether slaveowners were irrational in continuing to own slaves. Purely on the basis of individual profit, it was most likely rational to own slaves. But the broader effects on the economic system as a whole were strongly negative. I think that part of why the debate on whether slavery is economically inefficient has never been settled is a confusion between these two questions. One side says “Slavery damaged overall economic growth.” The other says “But owning slaves produced a rate of return for investors as high as manufacturing!” Yeah, those… aren’t answering the same question. They are in fact probably both true. Something can be highly profitable for individuals while still being tremendously damaging to society.

I don’t mean to imply that sweatshops are as bad as slavery; they are not. (Though there is still slavery in the world, and some sweatshops tread a fine line.) What I’m saying is that showing that sweatshops are profitable (no doubt there) or even that they are better than most of the alternatives for their workers (probably true in most cases) does not show that they are economically efficient. Sweatshops are beneficent exploitation: they make workers better off, but in an obviously unjust way. And they only make workers better off compared to the current alternatives; if they were replaced with industries paying fair wages, workers would obviously be much better off still.

And my point is, so would we. While the prices of goods would increase slightly in the short run, in the long run the increased consumer spending by people in Third World countries—which soon would cease to be Third World countries, as happened in Korea and Japan—would result in additional trade with us that would raise our standard of living, not lower it. The only people it is even plausible to think would be harmed are the billionaires who own our multinational corporations; and yet even they might stand to benefit from the improved efficiency of the global economy.

No, you do not benefit from sweatshops. So stop feeling guilty, stop worrying so much about “checking your privilege”—and let’s get out there and do something about it.

The real Existential Risk we should be concerned about

JDN 2457458

There is a rather large subgroup within the rationalist community (loosely defined because organizing freethinkers is like herding cats) that focuses on existential risks, also called global catastrophic risks. Prominent examples include Nick Bostrom and Eliezer Yudkowsky.

Their stated goal in life is to save humanity from destruction. And when you put it that way, it sounds pretty darn important. How can you disagree with wanting to save humanity from destruction?

Well, there are actually people who do (the Voluntary Human Extinction movement), but they are profoundly silly. It should be obvious to anyone with even a basic moral compass that saving humanity from destruction is a good thing.

It’s not the goal of fighting existential risk that bothers me. It’s the approach. Specifically, they almost all seem to focus on exotic existential risks, vivid and compelling existential risks that are the stuff of great science fiction stories. In particular, they have a rather odd obsession with AI.

Maybe it’s the overlap with Singularitarians, and their inability to understand that exponentials are not arbitrarily fast; if you just keep projecting the growth in computing power forever, surely eventually we’ll have a computer powerful enough to solve all the world’s problems, right? Well, yeah, I guess… if we can actually maintain the progress that long, which we almost certainly can’t, and if the problems turn out to be computationally tractable at all (the fastest possible computer that could fit inside the observable universe could not brute-force solve the game of Go, though a heuristic AI did just beat one of the world’s best players), and/or if we find really good heuristic methods of narrowing down the solution space… but that’s an awful lot of “if”s.
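To put rough numbers on the Go claim: the figures below are outside estimates I am importing for illustration, not from the original—Tromp’s count of legal 19×19 Go positions (about 2.1×10^170) and Lloyd’s estimate that the observable universe could have performed at most roughly 10^120 elementary operations since the Big Bang.

```python
# Rough scale comparison; both constants are outside published estimates,
# assumed here for illustration (Tromp's Go count, Lloyd's physical bound).
legal_go_positions = 2.1e170
universe_ops = 1e120

# Even granting one position evaluated per elementary operation, brute
# force falls short by an astronomical factor:
shortfall = legal_go_positions / universe_ops  # ~2e50
```

A factor of 10^50 is not a gap that Moore’s Law closes; that is why heuristic search, not raw computing power, is what beat a top human player.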

But AI isn’t what we need to worry about in terms of saving humanity from destruction. Nor is it asteroid impacts; NASA has been doing a good job watching for asteroids lately, and estimates the current risk of a serious impact (by which I mean something like a city-destroyer or global climate shock, not even a global killer) at around 1/10,000 per year. Alien invasion is right out; we can’t even find clear evidence of bacteria on Mars, and the skies are so empty of voices it has been called a paradox. Gamma ray bursts could kill us, and we aren’t sure about the probability of that (we think it’s small?), but much like brain aneurysms, there really isn’t a whole lot we can do to prevent them.

There is one thing that we really need to worry about destroying humanity, and one other thing that could potentially get close over a much longer timescale. The long-range threat is ecological collapse; as global climate change gets worse and the oceans become more acidic and the aquifers are drained, we could eventually reach the point where humanity cannot survive on Earth, or at least where our population collapses so severely that civilization as we know it is destroyed. This might not seem like such a threat, since we would see this coming decades or centuries in advance—but we are seeing it coming decades or centuries in advance, and yet we can’t seem to get the world’s policymakers to wake up and do something about it. So that’s clearly the second-most important existential risk.

But the most important existential risk, by far, no question, is nuclear weapons.

Nuclear weapons are the only foreseeable, preventable means by which humanity could be destroyed in the next twenty minutes.

Yes, that is approximately the time it takes an ICBM to hit its target after launch. There are almost 4,000 ICBMs currently deployed, mostly by the US and Russia. Once we include submarine-launched missiles and bombers, the total number of global nuclear weapons is over 15,000. I apologize for terrifying you by saying that these weapons could be deployed at a moment’s notice to wipe out most of human civilization within half an hour, followed by a global ecological collapse and fallout that would endanger the future of the entire human race—but it’s the truth. If you’re not terrified, you’re not paying attention.

I’ve intentionally linked the Union of Concerned Scientists as one of those sources. Now they are people who understand existential risk. They don’t talk about AI and asteroids and aliens (how alliterative). They talk about climate change and nuclear weapons.

We must stop this. We must get rid of these weapons. Next to that, literally nothing else matters.

“What if we’re conquered by tyrants?” It won’t matter. “What if there is a genocide?” It won’t matter. “What if there is a global economic collapse?” None of these things will matter, if the human race wipes itself out with nuclear weapons.

To speak like an economist for a moment, the utility of a global nuclear war must be set at negative infinity. Any detectable reduction in the probability of that event must be considered worth paying any cost to achieve. I don’t care if it costs $20 trillion and results in us being taken over by genocidal fascists—we are talking about the destruction of humanity. We can spend $20 trillion (actually the US as a whole does every 14 months!). We can survive genocidal fascists. We cannot survive nuclear war.

The good news is, we shouldn’t actually have to pay that sort of cost. All we have to do is dismantle our nuclear arsenal, and get other countries—particularly Russia—to dismantle theirs. In the long run, we will increase our wealth as our efforts are no longer wasted maintaining doomsday machines.

The main challenge is actually a matter of game theory. The surprisingly-sophisticated 1990s cartoon show the Animaniacs basically got it right when they sang: “We’d beat our swords into liverwurst / Down by the East Riverside / But no one wants to be the first!”

The thinking, anyway, is that this is basically a Prisoner’s Dilemma. If the US disarms and Russia doesn’t, Russia can destroy the US. Conversely, if Russia disarms and the US doesn’t, the US can destroy Russia. If neither disarms, we’re left where we are. Whether or not the other country disarms, you’re always better off not disarming. So neither country disarms.

But I contend that it is not, in fact, a Prisoner’s Dilemma. It could be a Stag Hunt; if that’s the case, then only multilateral disarmament makes sense, because the best outcome is if we both disarm, but the worst outcome is if we disarm and they don’t. Once we expect them to disarm, we have no temptation to renege on the deal ourselves; but if we think there’s a good chance they won’t, we might not want to either. Stag Hunts have two stable Nash equilibria; one is where both arm, the other where both disarm.

But in fact, I think it may be simply the trivial game.

There aren’t actually that many possible symmetric two-player nonzero-sum games (basically it’s a question of ordering 4 possibilities, and it’s symmetric, so 12 possible games), and one that we never talk about (because it’s sort of boring) is the trivial game: If I do the right thing and you do the right thing, we’re both better off. If you do the wrong thing and I do the right thing, I’m better off. If we both do the wrong thing, we’re both worse off. So, obviously, we both do the right thing, because we’d be idiots not to. Formally, we say that cooperation is a strictly dominant strategy. There’s no dilemma, no paradox; the self-interested strategy is the optimal strategy. (I find it kind of amusing that laissez-faire economics basically amounts to assuming that all real-world games are the trivial game.)
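Here is a sketch of all three games side by side. The payoff numbers are illustrative; only their ordering matters. Moves are “C” (disarm/cooperate) and “D” (stay armed/defect), and each table gives the row player’s payoff for (my move, their move).

```python
# Three symmetric 2x2 games; payoffs are ordinal illustrations only.
GAMES = {
    "prisoners_dilemma": {("C","C"): 3, ("C","D"): 0, ("D","C"): 4, ("D","D"): 1},
    "stag_hunt":         {("C","C"): 4, ("C","D"): 0, ("D","C"): 3, ("D","D"): 1},
    "trivial":           {("C","C"): 4, ("C","D"): 2, ("D","C"): 3, ("D","D"): 1},
}

def best_reply(payoffs, their_move):
    """The move that maximizes my payoff against a given move of theirs."""
    return max("CD", key=lambda m: payoffs[(m, their_move)])

def symmetric_equilibria(payoffs):
    """Moves that are best replies to themselves: symmetric pure Nash equilibria."""
    return [m for m in "CD" if best_reply(payoffs, m) == m]
```

Running `symmetric_equilibria` on each game recovers exactly the distinctions in the text: the Prisoner’s Dilemma has only mutual defection as an equilibrium, the Stag Hunt has two stable equilibria (both arm, both disarm), and in the trivial game cooperation is a best reply no matter what the other side does, i.e. strictly dominant.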

That is, I don’t think the US would actually benefit from nuking Russia, even if we could do so without retaliation. Likewise, I don’t think Russia would actually benefit from nuking the US. One of the things we’ve discovered—the hardest way possible—through human history is that working together is often better for everyone than fighting. Russia could nuke NATO, and thereby destroy all of their largest trading partners, or they could continue trading with us. Even if they are despicable psychopaths who think nothing of committing mass murder (Putin might be, but surely there are people under his command who aren’t?), it’s simply not in Russia’s best interest to nuke the US and Europe. Likewise, it is not in our best interest to nuke them.

Nuclear war is a strange game: The only winning move is not to play.

So I say, let’s stop playing. Yes, let’s unilaterally disarm, the thing that so many policy analysts are terrified of because they’re so convinced we’re in a Prisoner’s Dilemma or a Stag Hunt. “What’s to stop them from destroying us, if we make it impossible for us to destroy them!?” I dunno, maybe basic human decency, or failing that, rationality?

Several other countries have already done this—South Africa unilaterally disarmed, and nobody nuked them. Japan refused to build nuclear weapons in the first place—and I think it says something that they’re the only people to ever have them used against them.

Our conventional military is plenty large enough to defend us against all realistic threats, and could even be repurposed to defend against nuclear threats as well, by a method I call credible targeted conventional response. Instead of building ever-larger nuclear arsenals to threaten devastation in the world’s most terrifying penis-measuring contest, you deploy covert operatives (perhaps Navy SEALS in submarines, or double agents, or these days even stealth drones) around the world, with the standing order that if they have reason to believe a country initiated a nuclear attack, they will stop at nothing to hunt down and kill the specific people responsible for that attack. Not the country they came from; not the city they live in; those specific people. If a leader is enough of a psychopath to be willing to kill 300 million people in another country, he’s probably enough of a psychopath to be willing to lose 150 million people in his own country. He likely has a secret underground bunker that would allow him to survive, at least if humanity as a whole does. So you should be threatening the one thing he does care about—himself. You make sure he knows that if he pushes that button, you’ll find that bunker, drop in from helicopters, and shoot him in the face.

The “targeted conventional response” should be clear by now—you use non-nuclear means to respond, and you target the particular leaders responsible—but let me say a bit more about the “credible” part. The threat of mutually-assured destruction is actually not a credible one. It’s not what we call in game theory a subgame perfect Nash equilibrium. If you know that Russia has launched 1500 ICBMs to destroy every city in America, you actually have no reason at all to retaliate with your own 1500 ICBMs, and the most important reason imaginable not to. Your people are dead either way; you can’t save them. You lose. The only question now is whether you risk taking the rest of humanity down with you. If you have even the most basic human decency, you will not push that button. You will not “retaliate” in useless vengeance that could wipe out human civilization. Thus, your threat is a bluff—it is not credible.

But if your response is targeted and conventional, it suddenly becomes credible. It’s exactly reversed; you now have every reason to retaliate, and no reason not to. Your covert operation teams aren’t being asked to destroy humanity; they’re being tasked with finding and executing the greatest mass murderer in history. They don’t have some horrific moral dilemma to resolve; they have the opportunity to become the world’s greatest heroes. Indeed, they’d very likely have the whole world (or what’s left of it) on their side; even the population of the attacking country would rise up in revolt and the double agents could use the revolt as cover. Now you have no reason to even hesitate; your threat is completely credible. The only question is whether you can actually pull it off, and if we committed the full resources of the United States military to preparing for this possibility, I see no reason to doubt that we could. If a US President can be assassinated by a lone maniac (and yes, that is actually what happened), then the world’s finest covert operations teams can assassinate whatever leader pushed that button.
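
This reversal of incentives is exactly what backward induction captures. Here is a minimal sketch in Python of the two-stage game: the attacker moves first, the defender responds, and we solve for the subgame perfect equilibrium. The payoff numbers, move names, and the `solve` helper are all my own illustrative assumptions; only the ordering of the payoffs reflects the argument above.

```python
# Toy two-stage game solved by backward induction.
# Each entry maps an attacker move to the defender's possible responses,
# with payoffs as (attacker_payoff, defender_payoff); higher is better.

def best_response(responses):
    """The defender picks the response maximizing her own payoff."""
    return max(responses, key=lambda r: r[1][1])

def solve(game):
    """Subgame perfect equilibrium: anticipate the defender's best
    response to each attacker move, then let the attacker choose."""
    anticipated = {move: best_response(responses)
                   for move, responses in game.items()}
    move = max(anticipated, key=lambda m: anticipated[m][1][0])
    return move, anticipated[move][0]

# Under mutually assured destruction, retaliating destroys everyone,
# so a defender who has already been attacked prefers to stand down;
# knowing this, a ruthless attacker is not deterred.
mad = {
    "attack":  [("retaliate", (-100, -100)), ("stand down", (5, -10))],
    "refrain": [("do nothing", (0, 0))],
}

# Under a targeted conventional response, retaliating is cheap for the
# defender and fatal for the attacking leader, so the threat is
# credible and attacking becomes the attacker's worse option.
targeted = {
    "attack":  [("retaliate", (-100, -5)), ("stand down", (5, -10))],
    "refrain": [("do nothing", (0, 0))],
}

print(solve(mad))       # ('attack', 'stand down')
print(solve(targeted))  # ('refrain', 'do nothing')
```

The point is not the specific numbers but the structure: changing only the defender’s payoff from retaliating flips the equilibrium from “attack, no retaliation” to “no attack at all.”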

This is a policy that works both unilaterally and multilaterally. We could even assemble an international coalition—perhaps make the UN “peacekeepers” put their money where their mouth is and train the finest special operatives in the history of the world tasked with actually keeping the peace.

Let’s not wait for someone else to save humanity from destruction. Let’s be the first.

Is America uniquely… mean?

JDN 2457454

I read this article yesterday, which I found both very resonant and very disturbing: At least among First World countries, the United States really does seem uniquely, for lack of a better word, mean.

The formal psychological terminology is social dominance orientation; the political science term is authoritarianism. In economics, we notice the difference due to its effect on income inequality. But all of these concepts are capturing part of a deeper underlying reality that in the age of Trump I am finding increasingly hard to deny. The best predictor of support for Trump is authoritarianism.

Of course I’ve already talked about our enormous military budget; but then Tennessee had to make their official state rifle a .50-caliber weapon capable of destroying light tanks. There is something especially dominant, aggressive, and violent about American culture.

We are certainly not unique in the world as a whole—actually I think the amount of social dominance orientation, authoritarianism, and inequality in the US is fairly similar to the world average. We are unique in our gun ownership, but our military spending proportional to GDP is not particularly high by world standards—we’re just an extremely rich country. But in all these respects we are a unique outlier among First World countries; in many ways we resemble a rich authoritarian petrostate like Qatar rather than a European social democracy like France or the UK. (At least we’re not Saudi Arabia?)

More than other First World cultures, Americans believe in hierarchy; they believe that someone should be on top and other people should be on the bottom. More than that, they believe that people “like us” should be on top and people “not like us” should be on the bottom, however that is defined—often in terms of race or religion, but not necessarily.

Indeed, one of the things I find most baffling about this is that it is often more important to people that others be held down than that they themselves be lifted up. This is the only way I can make sense of the fact that people who have watched their wages be drained into the pockets of billionaires for a generation can think that the most important things to do right now are block out illegal immigrants and deport Muslims.

It seems that people become convinced that their own status, whatever it may be, is deserved: If they are rich, it is obviously because they are so brilliant and hard-working (something Trump clearly believes about himself, being a textbook example of Narcissistic Personality Disorder); if they are poor, it is obviously because they are so incompetent and lazy. Thus, being lifted up doesn’t make sense; why would you give me things I don’t deserve?

But then when they see people who are different from them, they know automatically that those people must be by definition inferior, as all who are Not of Our Tribe are by definition inferior. And therefore, any of them who are rich gained their position through corruption or injustice, and all of them who are poor deserve their fate for being so inferior. Thus, it is most vital to ensure that these Not of Our Tribe are held down from reaching high positions they so obviously do not deserve.

I’m fairly sure that most of this happens at a very deep unconscious level; it calls upon ancient evolutionary instincts to love our own tribe, to serve the alpha male, to fear and hate those of other tribes. These instincts may well have served us 200,000 years ago (then again, they may just have been the best our brains could manage at the time); but they are becoming a dangerous liability today.

As E.O. Wilson put it: “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.”

Yet this cannot be a complete explanation, for there is variation in these attitudes. A purely instinctual theory should say that all human cultures have this to an essentially equal degree; but I started this post by pointing out that the United States appears to have a particularly large amount relative to Europe.

So, there must be something in the cultures or institutions of different nations that makes them either enhance or suppress this instinctual tribalism. There must be something that Europe is doing right, the US is doing wrong, and Saudi Arabia is doing very, very wrong.

Well, the obvious one that sticks out at me is religion. It seems fairly obvious to me that Sweden is less religious than the US, which is less religious than Saudi Arabia.

Data does back me up on this. Religiosity isn’t easy to measure, but we have methods of doing so. If we ask people in various countries if religion is very important in their lives, the percentage of people who say yes gives us an indication of how religious that country is.

In Saudi Arabia, 93% say yes. In the United States, 65% say yes. In Sweden, only 17% say yes.

Religiosity tends to be highest in the poorest countries, but the US is an outlier, far too rich for our religion (or too religious for our wealth).

Religiosity also tends to be highest in countries with high inequality—this time, the US fits right in.

The link between religion and inequality is quite clear. It’s harder to say which way the causation runs. Perhaps high inequality makes people cling more to religion as a comfort, and getting rid of religion would only mean taking that comfort away. Or, perhaps religion actually makes people believe more in social dominance, and thus is part of what keeps that high inequality in place. It could also be a feedback loop, in which higher inequality leads to higher religiosity which leads to higher inequality.

That said, I think we actually have some evidence that causality runs from religion to inequality, rather than the other way around. The secularization of France took place around the same time as the French Revolution that overthrew the existing economic system and replaced it with one that had substantially less inequality. Iran’s government became substantially more based on religion in the latter half of the 20th century, and their inequality soared thereafter.

Above all, Donald Trump dominates the evangelical vote, which makes absolutely no sense if religion is a comfort against inequality—but perfect sense if religion solidifies the tendency of people to think in terms of hierarchy and authoritarianism.

This also makes sense in terms of the content of religion, especially Abrahamic religion; read the Bible and the Qur’an, and you will see that their primary goal seems to be to convince you that some people, namely people who believe in this book, are just better than other people, and we should be in charge because God says so. (And you wouldn’t try to argue with God, would you?) They really make no particular effort to convince you that God actually exists; they spend all their argumentative effort on what God wants you to do and who God wants you to put in charge—and for some strange reason it always seems to be the same guys who are writing down “God’s words” in the book! What a coincidence!

If religion is indeed the problem, or a large part of the problem, what can we do about it? That’s the most difficult part. We’ve been making absolutely conclusive rational arguments against religion since literally 300 years before Jesus was even born (there has never been a time in human history in which it was rational for an educated person to believe in Christianity or Islam, for the religions did not come into existence until well after the arguments to refute them were well-known!), and the empirical evidence against theism has only gotten stronger ever since; so that clearly isn’t enough.

I think what we really need to do at this point is confront the moral monopoly that religion has asserted for itself. The “Moral Majority” was neither, but its name still sort of makes sense to us because we so strongly associate being moral with being religious. We use terms like “Christian” and “generous” almost interchangeably. And whenever you get into a debate about religion, shortly after you have thoroughly demolished any shred of empirical credibility religion still had left, you can basically guarantee that the response will be: “But without God, how can you know right from wrong?”

What is perhaps most baffling about this concept of morality so commonplace in our culture is that not only is the command of a higher authority that rewards and punishes you not the highest level of moral development—it is literally the lowest. Of the six stages of moral thinking Kohlberg documented in children, the reward and punishment orientation exemplified by the Bible and the Qur’an is the very first. I think many of these people really truly haven’t gotten past level 1, which is why when you start trying to explain how you base your moral judgments on universal principles of justice and consequences (level 6) they don’t seem to have any idea what you’re talking about.

Perhaps this is a task for our education system (philosophy classes in middle school?), perhaps we need something more drastic than that, or perhaps it is enough that we keep speaking about it in public. But somehow we need to break up the monopoly that religion has on moral concepts, so that people no longer feel ashamed to say that something is morally wrong without being able to cite a particular passage from a particular book from the Iron Age. Perhaps once we can finally make people realize that morality does not depend on religion, we can finally free them from the grip of religion—and therefore from the grip of authoritarianism and social dominance.

If this is right, then the reason America is so mean is that we are so Christian—and people need to realize that this is not a paradoxical statement.

Will robots take our jobs?

JDN 2457451

I briefly discussed this topic before, but I thought it deserved a little more depth. Also, the SF author in me really likes writing this sort of post where I get to speculate about futures that are utopian, dystopian, or (most likely) somewhere in between.

The fear is quite widespread, but how realistic is it? Will robots in fact take all our jobs?

Most economists do not think so. Robert Solow famously quipped, “You can see the computer age everywhere but in the productivity statistics.” (It never quite seemed to occur to him that this might be a flaw in the way we measure productivity statistics.)

By the usual measure of labor productivity, robots do not appear to have had a large impact. Indeed, their impact appears to have been smaller than almost any other major technological innovation.

Using BLS data (which was formatted badly and thus a pain to clean, by the way—albeit not as bad as the World Bank data I used on my master’s thesis, which was awful), I made this graph of the growth rate of labor productivity as usually measured:

[Figure: growth rate of labor productivity]

The fluctuations are really jagged due to measurement errors, so I also made an annually smoothed version:

[Figure: annually smoothed growth rate of labor productivity]

Based on this standard measure, productivity has grown more or less steadily during my lifetime, fluctuating with the business cycle around a value of about 3.5% per year (3.4 log points). If anything, the growth rate seems to be slowing down; in recent years it’s been around 1.5% (1.5 lp).

This was clearly the time during which robots became ubiquitous—autonomous robots did not emerge until the 1970s and 1980s, and robots became widespread in factories in the 1980s. Then there’s the fact that computing power has been doubling every 1.5 years during this period, which is an annual growth rate of 59% (46 lp). So why hasn’t productivity grown at anywhere near that rate?
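
The conversion between doubling times, conventional growth rates, and log points used in these figures is simple arithmetic; here is a quick sketch (the function name is mine) that reproduces the quoted numbers:

```python
import math

def doubling_time_to_rates(years_to_double):
    """Convert a doubling time into the equivalent annual growth rate,
    both in percent and in log points (100 * ln of the annual factor)."""
    factor = 2 ** (1 / years_to_double)   # annual multiplier
    percent = (factor - 1) * 100          # conventional growth rate
    log_points = math.log(factor) * 100   # log points
    return percent, log_points

# Computing power doubling every 1.5 years:
percent, lp = doubling_time_to_rates(1.5)
print(round(percent), round(lp))          # 59 46

# For small rates the two measures nearly coincide:
# 3.5% growth is 100 * ln(1.035), or about 3.4 log points.
print(round(math.log(1.035) * 100, 1))    # 3.4
```

Log points are convenient precisely because they add across periods, whereas percent growth rates compound.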

I think the main problem is that we’re measuring productivity all wrong. We measure it in terms of money instead of in terms of services. Yes, we try to correct for inflation; but we fail to account for the fact that computers have allowed us to perform literally billions of services every day that could not have been performed without them. You can’t adjust that away by plugging into the CPI or the GDP deflator.

Think about it: Your computer provides you the services of all the following:

  1. A decent typesetter and layout artist
  2. A truly spectacular computer (remember, that used to be a profession!)
  3. A highly skilled statistician (who takes no initiative—you must tell her what calculations to do)
  4. A painting studio
  5. A photographer
  6. A video camera operator
  7. A professional orchestra of the highest quality
  8. A decent audio recording studio
  9. Thousands of books, articles, and textbooks
  10. Ideal seats at every sports stadium in the world

And that’s not even counting things like social media and video games that can’t even be readily compared to services that were provided before computers.

If you added up the value of all of those jobs, the amount you would have had to pay in order to hire all those people to do all those things for you before computers existed, your computer easily provides you with at least $1 million in professional services every year. Put another way, your computer has taken jobs that would have provided $1 million in wages. You do the work of a hundred people with the help of your computer.

This isn’t counted in our productivity statistics precisely because it’s so efficient. If we still had to pay that much for all these services, it would be included in our GDP and then our GDP per worker would properly reflect all this work that is getting done. But then… whom would we be paying? And how would we have enough to pay that? Capitalism isn’t actually set up to handle this sort of dramatic increase in productivity—no system is, really—and thus the market price for work has almost no real relation to the productive capacity of the technology that makes that work possible.

Instead it has to do with scarcity of work—if you are the only one in the world who can do something (e.g. write Harry Potter books), you can make an awful lot of money doing that thing, while something that is far more important but can be done by almost anyone (e.g. feed babies) will pay nothing or next to nothing. At best we could say it has to do with marginal productivity, but marginal in the sense of your additional contribution over and above what everyone else could already do—not in the sense of the value actually provided by the work that you are doing. Anyone who thinks that markets automatically reward hard work or “pay you what you’re worth” clearly does not understand how markets function in the real world.

So, let’s ask again: Will robots take our jobs?

Well, they’ve already taken many jobs. There isn’t even a clear high-skill/low-skill dichotomy here; robots are just as likely to make pharmacists obsolete as they are truck drivers, just as likely to replace surgeons as they are cashiers.

Labor force participation is declining, though slowly:

[Figure: labor force participation rate]

Yet I think this also underestimates the effect of technology. As David Graeber points out, most of the new jobs we’ve been creating seem to be, for lack of a better term, bullshit jobs—jobs that really don’t seem like they need to be done, other than to provide people with something to do so that we can justify paying them salaries.

As he puts it:

Again, an objective measure is hard to find, but one easy way to get a sense is to ask: what would happen were this entire class of people to simply disappear? Say what you like about nurses, garbage collectors, or mechanics, it’s obvious that were they to vanish in a puff of smoke, the results would be immediate and catastrophic. A world without teachers or dock-workers would soon be in trouble, and even one without science fiction writers or ska musicians would clearly be a lesser place. It’s not entirely clear how humanity would suffer were all private equity CEOs, lobbyists, PR researchers, actuaries, telemarketers, bailiffs or legal consultants to similarly vanish. (Many suspect it might markedly improve.)

The paragon of all bullshit jobs is sales. Sales is a job that simply should not exist. If something is worth buying, you should be able to present it to the market and people should choose to buy it. If there are many choices for a given product, maybe we could have some sort of independent product rating agencies that decide which ones are the best. But sales means trying to convince people to buy your product—you have an absolutely overwhelming conflict of interest that makes your statements to customers so utterly unreliable that they are literally not even information anymore. The vast majority of advertising, marketing, and sales is thus, in a fundamental sense, literally noise. Sales contributes absolutely nothing to our economy; indeed, because we spend so much effort on it and advertising occupies so much of our time and attention, it takes a great deal away. But sales is one of our most steadily growing labor sectors; once we figure out how to make things without people, we employ the people in trying to convince customers to buy the new things we’ve made. Sales is also absolutely miserable for many of the people who do it, as I know from personal experience in two different sales jobs that I had to quit before the end of the first week.

Fortunately we have not yet reached the point where sales is the fastest growing labor sector. Currently the fastest-growing jobs fall into three categories: Medicine, green energy, and of course computers—but actually mostly medicine. Yet even this is unlikely to last; one of the easiest ways to reduce medical costs would be to replace more and more medical staff with automated systems. A nursing robot may not be quite as pleasant as a real professional nurse—but if by switching to robots the hospital can save several million dollars a year, they’re quite likely to do so.

Certain tasks are harder to automate than others—particularly anything requiring creativity and originality is very hard to replace, which is why I believe that in the 2050s or so there will be a Revenge of the Humanities Majors as all the supposedly so stable and forward-thinking STEM jobs disappear and the only jobs that are left are for artists, authors, musicians, game designers and graphic designers. (Also, by that point, very likely holographic designers, VR game designers, and perhaps even neurostim artists.) Being good at math won’t mean anything anymore—frankly it probably shouldn’t right now. No human being, not even great mathematical savants, is anywhere near as good at arithmetic as a pocket calculator. There will still be a place for scientists and mathematicians, but it will be the creative aspects of science and math that persist—design of experiments, development of new theories, mathematical intuition to develop new concepts. The grunt work of cleaning data and churning through statistical models will be fully automated.

Most economists appear to believe that we will continue to find tasks for human beings to perform, and this improved productivity will simply raise our overall standard of living. As any ECON 101 textbook will tell you, “scarcity is a fundamental fact of the universe, because human needs are unlimited and resources are finite.”

In fact, neither of those claims is true. Human needs are not unlimited; indeed, on Maslow’s hierarchy of needs First World countries have essentially reached the point where we could provide the entire population with the whole pyramid, guaranteed, all the time—if we were willing and able to fundamentally reform our economic system.

Resources are not even finite; what constitutes a “resource” depends on technology, as does how accessible or available any given source of resources will be. When we were hunter-gatherers, our only resources were the plants and animals around us. Agriculture turned seeds and arable land into a vital resource. Whale oil used to be a major scarce resource, until we found ways to use petroleum. Petroleum in turn is becoming increasingly irrelevant (and cheap) as solar and wind power mature. Soon the waters of the oceans themselves will be our power source as we refine the deuterium for fusion. Eventually we’ll find we need something for interstellar travel that we used to throw away as garbage (perhaps it will in fact be dilithium!). I suppose that if the universe is finite or if FTL is impossible, we will be bound by what is available in the cosmic horizon… but even that is not finite, as the universe continues to expand! If the universe is open (as it probably is) and one day we can harness the dark energy that seethes through the ever-expanding vacuum, our total energy consumption can grow without bound just as the universe does. Perhaps we could even stave off the heat death of the universe this way—we after all have billions of years to figure out how.

If scarcity were indeed this fundamental law that we could rely on, then more jobs would always continue to emerge, producing whatever is next on the list of needs ordered by marginal utility. Life would always get better, but there would always be more work to be done. But in fact, we are basically already at the point where our needs are satiated; we continue to try to make more not because there isn’t enough stuff, but because nobody will let us have it unless we do enough work to convince them that we deserve it.

We could continue on this route, making more and more bullshit jobs, pretending that this is work that needs to be done so that we don’t have to adjust our moral framework, which requires that people be constantly working for money in order to deserve to live. It’s quite likely in fact that we will, at least for the foreseeable future. In this future, robots will not take our jobs, because we’ll make up excuses to create more.

But that future is more on the dystopian end, in my opinion; there is another way, a better way, the world could be. As technology makes it ever easier to produce as much wealth as we need, we could learn to share that wealth. As robots take our jobs, we could get rid of the idea of jobs as something people must have in order to live. We could build a new economic system: One where we don’t ask ourselves whether children deserve to eat before we feed them, where we don’t expect adults to spend most of their waking hours pushing papers around in order to justify letting them have homes, where we don’t require students to take out loans they’ll need decades to repay before we teach them history and calculus.

This second vision is admittedly utopian, and perhaps in the worst way—perhaps there’s simply no way to make human beings actually live like this. Perhaps our brains, evolved for the all-too-real scarcity of the ancient savannah, simply are not plastic enough to live without that scarcity, and so create imaginary scarcity by whatever means they can. It is indeed hard to believe that we can make so fundamental a shift. But for a Homo erectus in 500,000 BP, the idea that our descendants would one day turn rocks into thinking machines that travel to other worlds would be pretty hard to believe too.

Will robots take our jobs? Let’s hope so.