The right (and wrong) way to buy stocks

July 9, JDN 2457944

Most people don’t buy stocks at all. Stock equity is the quintessential form of financial wealth, and 42% of financial net wealth in the United States is held by the top 1%, while the bottom 80% owns essentially none.

Half of American households do not have any private retirement savings at all, and are depending either on employee pensions or Social Security for their retirement plans.

This is not necessarily irrational. In order to save for retirement, one must first have sufficient income to live on. Indeed, I got very annoyed at a “financial planning seminar” for grad students I attended recently, which tried to scare us about the fact that almost none of us had any meaningful retirement savings. No, we shouldn’t have meaningful retirement savings, because our income is currently much lower than what we can expect to get once we graduate and enter our professions. It doesn’t make sense for someone scraping by on a $20,000 per year graduate student stipend to be saving up for retirement, when they can quite reasonably expect to be making $70,000-$100,000 per year once they finally get that PhD and become a professional economist (or sociologist, psychologist, physicist, statistician, political scientist, materials, mechanical, chemical, or aerospace engineer, or college professor in general, etc.). Even social workers, historians, and archaeologists make a lot more money than grad students. If you are already in the workforce and only expect small raises in the future, maybe you should start saving for retirement in your 20s. If you’re a grad student, don’t bother. It’ll be a lot easier to save once your income triples after graduation. (Personally, I keep about $700 in stocks mostly to get a feel for owning and trading stocks, experience I will apply later, not out of any serious expectation of supporting a retirement fund. Even at Warren Buffett-level returns I wouldn’t make more than $200 a year this way.)

Total US retirement savings are over $25 trillion, which… does actually sound low to me. In a country with a GDP now over $19 trillion, that means we’ve only saved a year and change of total income. If we had a rapidly growing population this might be fine, but we don’t; our population is fairly stable. People seem to be relying on economic growth to provide for their retirement, and since we are almost certainly at steady-state capital stock and fairly near full employment, that means waiting for technological advancement.

So basically people are hoping that we get to the Wall-E future where the robots will provide for us. And hey, maybe we will; but assuming that we haven’t abandoned capitalism by then (as they certainly haven’t in Wall-E), maybe you should try to make sure you own some assets to pay for robots with?

But okay, let’s set all that aside, and say you do actually want to save for retirement. How should you go about doing it?

Stocks are clearly the way to go. A certain proportion of government bonds also makes sense as a hedge against risk, and maybe you should even throw in the occasional commodity future. I wouldn’t recommend oil or coal at this point—either we do something about climate change and those prices plummet, or we don’t and we’ve got bigger problems—but it’s hard to go wrong with corn or steel, and for this one purpose it also can make sense to buy gold as well. Gold is not a magical panacea or the foundation of all wealth, but its price does tend to correlate negatively with stock returns, so it’s not a bad risk hedge.

Don’t buy exotic derivatives unless you really know what you’re doing—they can make a lot of money, but they can lose it just as fast—and never buy non-portfolio assets as a financial investment. If your goal is to buy something to make money, make it something you can trade at the click of a button. Buy a house because you want to live in that house. Buy wine because you like drinking wine. Don’t buy a house in the hopes of making a financial return—you’ll have leveraged your entire portfolio 10 to 1 while leaving it completely undiversified. And the problem with investing in wine, ironically, is its lack of liquidity.

The core of your investment portfolio should definitely be stocks. The biggest reason for this is the equity premium: equities—that is, stocks—get returns so much higher than other assets that it’s actually baffling to most economists. Bond returns are currently terrible, while stock returns are fantastic: the former is near 0% in inflation-adjusted terms, while the latter is closer to 16%. If that continues for the next 10 years, $1000 put in bonds would be worth… $1000, while $1000 put in stocks would be worth about $4400. So, do you want to keep the same amount of money, or quadruple it? It’s up to you.
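If you want to check that compounding arithmetic yourself, here is a minimal Python sketch using the 0% and 16% figures above (which are recent returns, not a forecast):

```python
# Back-of-the-envelope compounding with the (optimistic) returns quoted above.
def future_value(principal, annual_return, years):
    """Compound a lump sum at a constant annual rate of return."""
    return principal * (1 + annual_return) ** years

print(round(future_value(1000, 0.00, 10)))  # bonds at ~0% real: still $1000
print(round(future_value(1000, 0.16, 10)))  # stocks at ~16%: about $4411
```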

Higher risk is generally associated with higher return, because rational investors will only accept additional risk when they get some additional benefit from it; and stocks are indeed riskier than most other assets, but not that much riskier. For this to be rational, people would need to be extremely risk-averse, to the point where they should never drive a car or eat a cheeseburger. (Of course, human beings are terrible at assessing risk, so what I really think is going on is that people wildly underestimate the risk of driving a car and wildly overestimate the risk of buying stocks.)

Next, you may be asking: How does one buy stocks? This doesn’t seem to be something people teach in school.

You will need a brokerage of some sort. There are many such brokerages, but they are basically all equivalent except for the fees they charge. Some of them will try to offer you various bells and whistles to justify whatever additional cut they get of your trades, but they are almost never worth it. You should choose one with as low a trade fee as possible, because even a few dollars here and there can add up surprisingly quickly.

Fortunately, there is now at least one well-established reliable stock brokerage available to almost anyone that has a standard trade fee of zero. They are called Robinhood, and I highly recommend them. If they have any downside, it is ironically that they make trading too easy, so you can be tempted to do it too often. Learn to resist that urge, and they will serve you well and cost you nothing.

Now, which stocks should you buy? There are a lot of them out there. The answer I’m going to give may sound strange: All of them. You should buy all the stocks.

All of them? How can you buy all of them? Wouldn’t that be ludicrously expensive?

No, it’s quite affordable in fact. In my little $700 portfolio, I own every single stock in the S&P 500 and the NASDAQ. If I get a little extra money to save, I may expand to own every stock in Europe and China as well.

How? A clever little arrangement called an exchange-traded fund, or ETF for short. An ETF is actually a form of mutual fund: the fund purchases shares in a huge array of stocks, and adjusts its holdings to precisely track the behavior of an entire stock market index (such as the S&P 500). What you buy is shares in that mutual fund, which are usually priced somewhere between $100 and $300 each. As the prices of stocks in the market rise, the price of shares in the mutual fund rises to match, and you reap the same capital gains they do.

A major advantage of this arrangement, especially for a typical person who isn’t well-versed in stock markets, is that it requires almost no attention at your end. You can buy into a few ETFs and then leave your money to sit there, knowing that it will grow as long as the overall stock market grows.

But there is an even more important advantage, which is that it maximizes your diversification. I said earlier that you shouldn’t buy a house as an investment, because it’s not at all diversified. What I mean by this is that the price of that house depends only on one thing—that house itself. If the price of that house changes, the full change is reflected immediately in the value of your asset. In fact, if you have 10% down on a mortgage, the full change is reflected ten times over in your net wealth, because you are leveraged 10 to 1.
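To see that leverage effect in numbers, here is a quick sketch with made-up figures (a $200,000 house bought with 10% down):

```python
# Hypothetical illustration of 10-to-1 leverage on a house bought with 10% down.
house_price = 200_000
down_payment = 0.10 * house_price        # your equity: $20,000
mortgage = house_price - down_payment    # the bank's money: $180,000

new_value = house_price * (1 - 0.05)     # suppose the house loses 5% of its value
new_equity = new_value - mortgage        # $190,000 - $180,000 = $10,000

print((new_equity - down_payment) / down_payment)  # -0.5: a 5% price drop cuts your equity in half
```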

An ETF is basically the opposite of that. Instead of its price depending on only one thing, it depends on a vast array of things, averaging over the prices of literally hundreds or thousands of different corporations. When some fall, others will rise. On average, as long as the economy continues to grow, they will rise.

The result is that you can get the same average return you would from owning stocks, while dramatically reducing the risk you bear.

To see how this works, consider the past year’s performance of Apple (AAPL), which has done very well, versus Fitbit (FIT), which has done very poorly, compared with the NASDAQ as a whole, of which they are both part.

AAPL has grown over 50% (40 log points) in the last year; so if you’d bought $1000 of their stock a year ago it would be worth $1500. FIT has fallen over 60% (84 log points) in the same time, so if you’d bought $1000 of their stock instead, it would be worth only $400. That’s the risk you’re taking by buying individual stocks.

Whereas, if you had simply bought a NASDAQ ETF a year ago, your return would be 35%, so that $1000 would be worth $1350.

Of course, that does mean you don’t get as high a return as you would if you had managed to choose the highest-performing stock on that index. But you’re unlikely to be able to do that, as even professional financial forecasters are worse than random chance. So, would you rather take a 50-50 shot between gaining $500 and losing $600, or would you prefer a guaranteed $350?
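For the curious, here is a small sketch of the arithmetic behind those numbers (the log-point conversion, plus the comparison between the 50-50 gamble and the index), using the rounded figures from the text:

```python
import math

def log_points(gross_return):
    """Convert a gross return (ending value / starting value) into log points: 100 * ln(ratio)."""
    return 100 * math.log(gross_return)

print(round(log_points(1.50), 1))  # AAPL up 50%: ~40.5 log points
print(round(log_points(1.35), 1))  # the NASDAQ ETF up 35%: ~30 log points

# The choice posed above: a 50-50 gamble between +$500 and -$600,
# versus the (nearly) sure +$350 from the index.
print(0.5 * 500 + 0.5 * (-600))  # expected value of the gamble: -$50
print(350)                       # the index return
```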

If higher return is not your only goal, and you want to be socially responsible in your investments, there are ETFs for that too. Instead of buying the whole stock market, these funds buy only a section of the market that is associated with some social benefit, such as lower carbon emissions or better representation of women in management. On average, you can expect a slightly lower return this way; but you are also helping to make a better world. And still your average return is generally going to be better than it would be if you tried to pick individual stocks yourself. In fact, certain classes of socially-responsible funds—particularly green tech and women’s representation—actually perform better than conventional ETFs, probably because most investors undervalue renewable energy and, well, also undervalue women. Women CEOs perform better at lower prices; why would you not want to buy their companies?

Of course, ETFs are not literally guaranteed—the market as a whole does move up and down, so it is possible to lose money even by buying ETFs. But because the risk is so much lower, your odds of losing money are considerably reduced. And by construction, an ETF will on average perform exactly as well as a randomly-chosen stock from that market.

Indeed, I am quite convinced that most people don’t take enough risk on their investment portfolios, because they confuse two very different types of risk.

The kind you should be worried about is idiosyncratic risk, which is risk tied to a particular investment—the risk of having chosen the Fitbit instead of Apple. But a lot of the time people seem to be avoiding market risk, which is the risk tied to changes in the market as a whole. Avoiding market risk does reduce your chances of losing money, but it does so at the cost of reducing your chances of making money even more.

Idiosyncratic risk is basically all downside. Yeah, you could get lucky; but you could just as well get unlucky. Far better if you could somehow average over that risk and get the average return. But with diversification, that is exactly what you can do. Then you are left only with market risk, which is the kind of risk that is directly tied to higher average returns.
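If you want to see how diversification averages away idiosyncratic risk, here is a toy Monte Carlo sketch; the 7% mean and 30% standard deviation are made up, and for simplicity it leaves out the common market factor entirely, so all of the risk shown here is idiosyncratic:

```python
import random
import statistics

random.seed(0)

def portfolio_stats(n_stocks, n_trials=10_000):
    """Simulate one-year returns for an equal-weighted portfolio of n_stocks,
    where each stock has the same expected return but lots of idiosyncratic noise."""
    outcomes = []
    for _ in range(n_trials):
        returns = [random.gauss(0.07, 0.30) for _ in range(n_stocks)]
        outcomes.append(sum(returns) / n_stocks)
    return statistics.mean(outcomes), statistics.stdev(outcomes)

for n in (1, 10, 100):
    mean, spread = portfolio_stats(n)
    print(n, round(mean, 3), round(spread, 3))
# The average return stays around 0.07 regardless of n,
# but the spread shrinks from ~0.30 to ~0.03 as the portfolio diversifies.
```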

Young people should especially be willing to take more risk in their portfolios. As you get closer to retirement, it becomes important to have more certainty about how much money will really be available to you once you retire. But if retirement is still 30 years away, the thing you should care most about is maximizing your average return. That means taking on a lot of market risk, which is then less risky overall if you diversify away the idiosyncratic risk.

I hope now that I have convinced you to avoid buying individual stocks. For most people most of the time, this is the advice you need to hear. Don’t try to forecast the market, don’t try to outperform the indexes; just buy and hold some ETFs and leave your money alone to grow.

But if you really must buy individual stocks, either because you think you are savvy enough to beat the forecasters or because you enjoy the gamble, here’s some additional advice I have for you.

My first piece of advice is that you should still buy ETFs. Even if you’re willing to risk some of your wealth on greater gambles, don’t risk all of it that way.

My second piece of advice is to buy primarily large, well-established companies (like Apple or Microsoft or Ford or General Electric). Their stocks certainly do rise and fall, but they are unlikely to completely crash and burn the way that young companies like Fitbit can.

My third piece of advice is to watch the price-earnings ratio (P/E for short). Roughly speaking, this is the number of years it would take for the profits of this corporation to pay off the value of its stock. If they pay most of their profits in dividends, it is approximately how many years you’d need to hold the stock in order to get as much in dividends as you paid for the shares.

Do you want P/E to be large or small? You want it to be small. This is called value investing, but it really should just be called “investing”. The alternatives to value investing are actually not investment but speculation and arbitrage. If you are actually investing, you are buying into companies that are currently undervalued; you want them to be cheap.

Of course, it is not always easy to tell whether a company is undervalued. A common rule-of-thumb is that you should aim for a P/E around 20 (20 years to pay off means about 5% return in dividends); if the P/E is below 10, it’s a fantastic deal, and if it is above 30, it might not be worth the price. But reality is of course more complicated than this. You don’t actually care about current earnings, you care about future earnings, and it could be that a company which is earning very little now will earn more later, or vice-versa. The more you can learn about a company, the better judgment you can make about their future profitability; this is another reason why it makes sense to buy large, well-known companies rather than tiny startups.
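As a quick illustration of that rule of thumb, here is a tiny sketch converting P/E into an implied “earnings yield” (treating all earnings as if they were paid out as dividends):

```python
def pe_ratio(price_per_share, earnings_per_share):
    """Years of current earnings needed to cover the share price."""
    return price_per_share / earnings_per_share

def earnings_yield(pe):
    """The rough annual return if all earnings were paid out as dividends."""
    return 1 / pe

print(pe_ratio(100, 5))  # a $100 share earning $5 per year has a P/E of 20
for pe in (10, 20, 30):
    print(pe, f"{earnings_yield(pe):.1%}")
# P/E 10 -> 10.0%, P/E 20 -> 5.0%, P/E 30 -> 3.3%: the rule of thumb above.
```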

My final piece of advice is not to trade too frequently. Especially with something like Robinhood where trades are instant and free, it can be tempting to try to ride every little ripple in the market. Up 0.5%? Sell! Down 0.3%? Buy! And yes, in principle, if you could perfectly forecast every such fluctuation, this would be optimal—and make you an almost obscene amount of money. But you can’t. We know you can’t. You need to remember that you can’t. You should only trade if one of two things happens: Either your situation changes, or the company’s situation changes. If you need the money, sell, to get the money. If you have extra savings, buy, to give those savings a good return. If something bad happened to the company and their profits are going to fall, sell. If something good happened to the company and their profits are going to rise, buy. Otherwise, hold. In the long run, those who hold stocks longer are better off.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien regime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: Anti-vaxxers, 9/11 truthers, climate denialists, they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.

Unpaid work and the double burden

Apr 16, JDN 2457860

When we say the word “work”, what leaps to mind is usually paid work in the formal sector—the work people do for employers. When you “go to work” each morning, you are going to do your paid work in the formal sector.

But a large quantity of the world’s labor does not take this form. First, there is the informal sector—work done for cash “under the table”, where there is no formal employment structure and often no reporting or payment of taxes. Many economists estimate that the majority of the world’s workers are employed in the informal sector. The ILO found that informal employment comprises as much as 70% of employment in some countries. However, it depends how you count: A lot of self-employment could be considered either formal or informal. If you base it on whether you do any work outside an employer-employee relationship, informal sector work is highly prevalent around the world. If you base it on not reporting to the government to avoid taxes, informal sector work is less common. If it must be your primary source of income, whether or not you pay taxes, informal sector work is uncommon. And if you only include informal sector work when it is your primary income source and not reported to the government, informal sector work is relatively rare and largely restricted to underdeveloped countries.

But that’s not really my focus for today, because you at least get paid in the informal sector. Nor am I talking about forced labor—that is, slavery, essentially—which is a serious human rights violation that sadly still goes on in many countries.

No, the unpaid work I want to talk about today is work that people willingly do for free.

I’m also excluding internships and student work, where (at least in theory) the idea is that instead of getting paid you are doing the work in order to acquire skills and experience that will be valuable to you later on. I’m talking about work that you do for its own sake.

Such work can be divided into three major categories.

First there is vocation—the artist who would paint even if she never sold a single canvas; the author who is compelled to write day and night and would give the books away for free. Vocation is work that you do for fun, or because it is fulfilling. It doesn’t even feel like “work” in quite the same sense. For me, writing and research are vocation, at least in part; even if I had $5 million in stocks I would still do at least some writing and research as part of what gives my life meaning.

Second there is volunteering—the soup kitchen, the animal shelter, the protest march. Volunteering is work done out of altruism, to help other people or work toward some greater public goal. You don’t do it for yourself, you do it for others.

Third, and really my main focus for this post, is domestic labor—vacuuming the rug, mopping the floor, washing the dishes, fixing the broken faucet, changing the baby’s diapers. This is generally not work that anyone finds particularly meaningful or fulfilling, nor is it done out of any great sense of altruism (perhaps toward your own family, but that’s about the extent of it). But you also don’t get paid to do it. You do it because it must be done.

There is also considerable overlap, of course: Many people find meaning in their activism or charitable work, and part of what motivates artists and authors is a desire to change the world.

Vocation is ultimately what I would like to see the world move towards. One of the great promises of a basic income is that it might finally free us from the grind of conventional employment that has gripped us ever since we first managed to escape the limitations of subsistence farming—which in turn gripped us ever since we escaped the desperation of hunter-gatherer survival. The fourth great stage in human prosperity might finally be a world where we can work not for food or for pay, but for meaning. A world of musicians and painters, of authors and playwrights, of sculptors and woodcutters, yes; but also a world of cinematographers and video remixers, of 3D modelers and holographers, of VR designers and video game modders. If you ever fret that no work would be done without the constant pressure of the wage incentive, spend some time on Stack Overflow or the Steam Workshop. People will spend hundreds of person-hours at extremely high-skill tasks—I’m talking AI programming and 3D modeling here—not for money but for fun.

Volunteering is frankly kind of overrated; as the Effective Altruism community will eagerly explain to you any chance they get, it’s usually more efficient for you to give money rather than time, because money is fungible while giving your time only makes sense if your skills are actually the ones that the project needs. If this criticism of so much well-intentioned work sounds petty, note that literally thousands of lives would be saved each year if instead of volunteering people donated an equivalent amount of money so that charities could hire qualified workers instead. Unskilled volunteers and donations of useless goods after a disaster typically cause what aid professionals call the “second disaster”. Still, people do find meaning in volunteering, and there is value in that; and also there are times when you really are the best one to do it, particularly when it comes to local politics.

But what should we do with domestic labor?

Some of it can and will be automated away—the Parable of the Dishwasher with literal dishwashers. But it will be a while before it all can, and right now it’s still a bit expensive. Maybe instead of vacuuming I should buy a Roomba—but $500 feels like a lot of money right now.

Much domestic labor we could hire out to someone else, but we simply choose not to. I could always hire someone to fix my computer, unclog my bathtub, or even mop my floors; I just don’t because it seems too expensive.

From the perspective of an economist, it’s actually a bit odd that it seems too expensive. I might have a comparative advantage in fixing my computer—it’s mine, after all, so I know its ins and outs, and while I’m no hotshot Google admin I am a reasonably competent programmer and debugger in my own right. And while for many people auto repair is a household chore, I do actually hire auto mechanics; I don’t even change my own oil, though partly that’s because my little Smart has an extremely compact design that makes it hard to work on. But I surely have no such comparative advantage in cleaning my floors or unclogging my pipes; so why doesn’t it seem worth it to hire someone else to do that?

Maybe I’m being irrational; hiring a cleaning service isn’t that expensive after all. I could hire a cleaning service to do my whole apartment for something like $80, and if I scheduled a regular maid it would probably be something like that per month. That’s what I would charge for two hours of tutoring, so maybe it would behoove me to hire a maid and spend that extra time tutoring or studying.

Or maybe it’s this grad student budget of mine; money is pretty tight at the moment, as I go through this strange societal ritual where young adults go through a period of near-poverty, overwhelming workload and constant anxiety not in spite but because we are so intelligent and hard-working. Perhaps if and when I get that $70,000 job as a professional economist my marginal utility of wealth will decrease and I will feel more inclined to hire maid services.

There are also transaction costs I save on by doing the work myself. A maid would have to commute here, first of all, reducing the efficiency gains from their comparative advantage in the work; but more than that, there’s a lot of effort I’d have to put in just to prepare for the maid and deal with any problems that might arise. There are scheduling issues, and the work probably wouldn’t get done as quickly unless I were to spend enough to hire a maid on a regular basis. There’s also a psychological cost in comfort and privacy to dealing with a stranger in one’s home, and a small but nontrivial risk that the maid might damage or steal something important.

But honestly it might be as simple as social norms (remember: to a first approximation, all human behavior is social norms). Regardless of whether or not it is affordable, it feels strange to hire a maid. That’s the sort of thing only rich, decadent people do. A responsible middle-class adult is supposed to mop their own floors and do their own laundry. Indeed, while hiring a plumber or an auto mechanic feels like paying for a service, hiring a maid crosses a line and feels like hiring a servant. (I honestly always feel a little awkward around the gardeners hired by our housing development for that reason. I’m only paying them indirectly, but there’s still this vague sense that they are somehow subservient—and surely, we are of quite distinct socioeconomic classes. Maybe it would help if I brushed up on my Spanish and got to know them better?)

And then there’s the gender factor. Being in a same-sex couple household changes the domestic labor dynamic quite a bit relative to the conventional opposite-sex couple household. Even in ostensibly liberal, feminist, egalitarian households, and even when both partners are employed full-time, it usually ends up being the woman who does most of the housework. This is true in the US; it is true in the UK; it is true in Europe; indeed it’s true in most if not all countries around the world, and, unsurprisingly, it is worst in India, where women spend a whopping five hours per day more on housework than men. (I was not surprised by the fact that Japan and China also do poorly, given their overall gender norms; but I’m a bit shocked at how badly Ireland and Italy do on this front.) And yes, while #ScandinaviaIsBetter, still in Sweden and Norway women spend half an hour to an hour more on housework on an average day than men.

Which, of course, supports the social norm theory. Any time you see both an overwhelming global trend against women and considerable cross-country variation within that trend, your first hypothesis should be sexism. Without the cross-country variation, maybe it could be biology—the sex differences in height and upper-body strength, for example, are pretty constant across countries. But women doing half an hour more in Norway but five hours more in India looks an awful lot like sexism.

This is called the double burden: To meet the social norms of being responsible middle-class adults, men are merely expected to work full-time at a high-paying job, but women are expected to do both the full effort of maintaining a household and the full effort of working at a full-time job. This is surely an improvement over the time when women were excluded from the formal workforce, not least because of the financial freedom that full-time work affords many women; but it would be very nice if we could also find a way to share some of that domestic burden as well. There has been some trend toward a less unequal share of housework as more women enter the workforce, but it still has a long way to go, even in highly-developed countries.

So, we can start by trying to shift the social norm that housework is gendered: Women clean the floors and change the diapers, while men fix the car and paint the walls. Childcare in particular is something that should be done equally by all parents, and while it’s plausible that one person may be better or worse at mopping or painting, it strains credulity to think that it’s always the woman who is better at mopping and the man who is better at painting.

Yet perhaps this is a good reason to try to shift away from another social norm as well, the one where only rich people hire maids and maids are servants. Unfortunately, it’s likely that most maids will continue to be women for the foreseeable future—cleaning services are gendered in much the same way that nursing and childcare are gendered. But at least by getting paid to clean, one can fulfill the “job” norm and the “housekeeping” norm in one fell swoop; and then women who are in other professions can carry only one burden instead of two. And if we can begin to think of cleaning services as more like plumbing and auto repair—buying a service, not hiring a servant—this is likely to improve the condition and social status of a great many maids. I doubt we’d ever get to the point where mopping floors is as prestigious as performing neurosurgery, but maybe we can at least get to the point where being a maid is as respectable as being a plumber. Cleaning needs to be done; it shouldn’t be shameful to be someone who is very good at doing it and gets paid to do so. (That is perhaps the most pernicious aspect of socioeconomic class, this idea that some jobs are “shameful” because they are done by workers with less education or involve more physical labor.)

This also makes good sense in terms of economic efficiency: Your comparative advantage is probably not in cleaning services, or if it is then perhaps you should do that as a career. So by selling your labor at whatever you are good at and then buying the services of someone who is especially good at cleaning, you should, at least in theory, be able to get the same cleaning done and maintain the same standard of living for yourself while also accomplishing more at whatever it is you do in your profession and providing income for whomever you hire to do the cleaning.

So, should I go hire a cleaning service after all? I don’t know, that still sounds pretty expensive.

How we sold our privacy piecemeal

Apr 2, JDN 2457846

The US Senate just narrowly voted to remove restrictions on the sale of user information by Internet Service Providers. Right now, your ISP can basically sell your information to whomever they like without even telling you. The new rule that the Senate struck down would have required them to at least make you sign a form with some fine print on it, which you probably would sign without reading it. So in practical terms maybe it makes no difference.

…or does it? Maybe that’s really the mistake we’ve been making all along.

In cognitive science we have a concept called the just-noticeable difference (JND); it is basically what it sounds like. If you have two stimuli—two colors, say, or sounds of two different pitches—that differ by an amount smaller than the JND, people will not notice it. But if they differ by more than the JND, people will notice. (In practice it’s a bit more complicated than that, as different people have different JND thresholds and even within a person they can vary from case to case based on attention or other factors. But there’s usually a relatively narrow range of JND values, such that anything below that is noticed by no one and anything above that is noticed by almost everyone.)

The JND seems like an intuitively obvious concept—of course you can’t tell the difference between a color of 432.78 nanometers and 432.79 nanometers!—but it actually has profound implications. In particular it undermines the possibility of having truly transitive preferences. If you prefer some colors to others—which most of us do—but you have a nonzero JND in color wavelengths—as we all do—then I can do the following: Find one color you like (for concreteness, say you like blue of 475 nm), and another color you don’t (say green of 510 nm). Let you choose between the blue you like and another blue, 475.01 nm. Will you prefer one to the other? Of course not, the difference is within your JND. So now compare 475.01 nm and 475.02 nm; which do you prefer? Again, you’re indifferent. And I can go on and on this way a few thousand times, until finally I get to 510 nanometers, the green you didn’t like. I have just found a chain of your preferences that is intransitive; you said A = B = C = D… all the way down the line to X = Y = Z… but then at the end you said A > Z. Your preferences aren’t transitive, and therefore aren’t well-defined rational preferences. And you could do the same to me, so neither are mine.
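Here is a schematic version of that chain argument; the 0.05 nm JND, the 0.01 nm step size, and the endpoints are all made-up numbers, and “indifference” is modeled crudely as “the difference is smaller than the JND”:

```python
# If you can't distinguish colors closer together than the JND, a chain of tiny
# steps forces intransitive preferences: indifferent at every step, yet not at the ends.
JND = 0.05        # hypothetical just-noticeable difference, in nanometers
step = 0.01       # each step is well below the JND
liked, disliked = 475.0, 510.0   # the blue you like, the green you don't

def indifferent(a, b):
    """You can't state a preference between wavelengths closer together than the JND."""
    return abs(a - b) < JND

n_steps = round((disliked - liked) / step)
chain = [liked + step * i for i in range(n_steps + 1)]

print(all(indifferent(chain[i], chain[i + 1]) for i in range(n_steps)))  # True: A = B = C = ... = Z
print(indifferent(chain[0], chain[-1]))                                  # False: yet A > Z
```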

Part of the reason we’ve so willingly given up our privacy in the last generation or so is our paranoid fear of terrorism, which no doubt triggers deep instincts about tribal warfare. Depressingly, the plurality of Americans think that our government has not gone far enough in its obvious overreaches of the Constitution in the name of defending us from a threat that has killed fewer Americans in my lifetime than die from car accidents each month.

But that doesn’t explain why we—and I do mean we, for I am as guilty as most—have so willingly sold our relationships to Facebook and our schedules to Google. Google isn’t promising to save me from the threat of foreign fanatics; they’re merely offering me a more convenient way to plan my activities. Why, then, am I so cavalier about entrusting them with so much personal data?

Well, I didn’t start by giving them my whole life. I created an email account, which I used on occasion. I tried out their calendar app and used it to remind myself when my classes were. And so on, and so forth, until now Google knows almost as much about me as I know about myself.

At each step, it didn’t feel like I was doing anything of significance; perhaps indeed it was below my JND. Each bit of information I was giving didn’t seem important, and perhaps it wasn’t. But all together, our combined information allows Google to make enormous amounts of money without charging most of its users a cent.

The process goes something like this. Imagine someone offering you a penny in exchange for telling them how many times you made left turns last week. You’d probably take it, right? Who cares how many left turns you made last week? But then they offer another penny in exchange for telling them how many miles you drove on Tuesday. And another penny for telling them the average speed you drive during the afternoon. This process continues hundreds of times, until they’ve finally given you say $5.00—and they know exactly where you live, where you work, and where most of your friends live, because all that information was encoded in the list of driving patterns you gave them, piece by piece.

Consider instead how you’d react if someone had offered, “Tell me where you live and work and I’ll give you $5.00.” You’d be pretty suspicious, wouldn’t you? What are they going to do with that information? And $5.00 really isn’t very much money. Maybe there’s a price at which you’d part with that information to a random suspicious stranger—but it’s probably at least $50 or even more like $500, not $5.00. But by asking it in 500 different questions for a penny each, they can obtain that information from you at a bargain price.

If you work out how much money Facebook and Google make from each user, it’s actually pitiful. Facebook has been increasing their revenue lately, but it’s still less than $20 per user per year. The stranger asks, “Tell me who all your friends are, where you live, where you were born, where you work, and what your political views are, and I’ll give you $20.” Do you take that deal? Apparently, we do. Polls find that most Americans are willing to exchange privacy for valuable services, often quite cheaply.

Of course, there isn’t actually an alternative social network that doesn’t sell data and instead just charges a subscription fee. I don’t think this is a fundamentally unfeasible business model, but it hasn’t succeeded so far, and it will have an uphill battle for two reasons.

The first is the obvious one: It would have to compete with Facebook and Google, who already have the enormous advantage of a built-in user base of hundreds of millions of people.

The second one is what this post is about: The social network based on conventional economics rather than selling people’s privacy can’t take advantage of the JND.

I suppose they could try—charge $0.01 per month at first, then after a while raise it to $0.02, $0.03 and so on until they’re charging $2.00 per month and actually making a profit—but that would be much harder to pull off, and it would provide the least revenue when it is needed most, at the early phase when the up-front costs of establishing a network are highest. Moreover, people would still feel that; it’s a good feature of our monetary system that you can’t break money into small enough denominations to really consistently hide under the JND. But information can be broken down into very tiny pieces indeed. Much of the revenue earned by these corporate giants is actually based upon indexing the keywords of the text we write; we literally sell off our privacy word by word.

What should we do about this? Honestly, I’m not sure. Facebook and Google do in fact provide valuable services, without which we would be worse off. I would be willing to pay them their $20 per year, if I could ensure that they’d stop selling my secrets to advertisers. But as long as their current business model keeps working, they have little incentive to change. There is in fact a huge industry of data brokering, corporations you’ve probably never heard of that make their revenue entirely from selling your secrets.

In a rare moment of actual journalism, TIME ran an article about a year ago arguing that we need new government policy to protect us from this kind of predation of our privacy. But they had little to offer in the way of concrete proposals.

The ACLU does better: They have specific proposals for regulations that should be made to protect our information from the most harmful prying eyes. But as we can see, the current administration has no particular interest in pursuing such policies—if anything they seem to do the opposite.

Why New Year’s resolutions fail

Jan 1, JDN 2457755

Last week’s post was on Christmas, so by construction this week’s post will be on New Year’s Day.

It is a tradition in many cultures, especially in the US and Europe, to start every new year with a New Year’s resolution, a promise to ourselves to change our behavior in some positive way.

Yet, over 80% of these resolutions fail. Why is this?

If we are honest, most of us would agree that there is something about our own behavior that could stand to be improved. So why do we so rarely succeed in actually making such improvements?

One possibility, which I’m guessing most neoclassical economists would favor, is to say that we don’t actually want to. We may pretend that we do in order to appease others, but ultimately our rational optimization has already chosen that we won’t actually bear the cost to make the improvement.

I think this is actually quite rare. I’ve seen too many people with resolutions they didn’t share with anyone, for example, to think that it’s all about social pressure. And I’ve seen far too many people try very hard to achieve their resolutions, day after day, and yet still fail.

Sometimes we make resolutions that are not entirely within our control, such as “get a better job” or “find a girlfriend” (last year I made a resolution to publish a work of commercial fiction or a peer-reviewed article—and alas, failed at that task, unless I somehow manage it in the next few days). Such resolutions may actually be unwise to make in the first place, as it can feel like breaking a promise to yourself when you’ve actually done all you possibly could.

So let’s set those aside and talk only about things we should be in control over, like “lose weight” or “save more money”. Even these kinds of resolutions typically fail; why? What is this “weakness of will”? How is it possible to really want something that you are in full control over, and yet still fail to accomplish it?

Well, first of all, I should be clear what I mean by “in full control over”. In some sense you’re not in full control, which is exactly the problem. Your conscious mind is not actually an absolute tyrant over your entire body; you’re more like an elected president who has to deal with a legislature in order to enact policy.

You do have a great deal of power over your own behavior, and you can learn to improve this control (much as real executive power in presidential democracies has expanded over the last century!); but there are fundamental limits to just how well you can actually consciously will your body to do anything, limits imposed by billions of years of evolution that established most of the traits of your body and nervous system millions of generations before there even was such a thing as rational conscious reasoning.

One thing that makes a surprisingly large difference lies in whether your goals are reduced to specific, actionable objectives. “Lose weight” is almost guaranteed to fail. “Lose 30 pounds” is still unlikely to succeed. “Work out for 2 hours per week,” on the other hand, might have a chance. “Save money” is never going to make it, but “move to a smaller apartment and set aside $200 per month” just might.

I think the government metaphor is helpful here; if you are President of the United States and you want something done, do you state some vague, broad goal like “Improve the economy”? No, you make a specific, actionable demand that allows you to enforce compliance, like “increase infrastructure spending by 24% over the next 5 years”. Even then it is possible to fail if you can’t push it through the legislature (in the metaphor, the “legislature” is your habits, instincts and other subconscious processes), but you’re much more likely to succeed if you have a detailed plan.

Another technique that helps is to visualize the benefits of succeeding and the costs of failing, and keep these in your mind. This counteracts the tendency for the costs of succeeding and the benefits of giving up to be more salient—losing 30 pounds sounds nice in theory, but that treadmill is so much work right now!

This salience effect has a lot to do with the fact that human beings are terrible at dealing with the future.

Rationally, we are supposed to use exponential discounting; each successive moment is supposed to be worth less to us than the previous by a fixed proportion, say 5% per year. This is actually a mathematical theorem; if you don’t discount this way, your decisions will be systematically irrational.

And yet… we don’t discount that way. Some behavioral economists argue that we use hyperbolic discounting, in which instead of discounting time by a fixed proportion, we use a different formula that drops off too quickly early on and not quickly enough later on.

But I am increasingly convinced that human beings don’t actually use discounting at all. We have a series of rough-and-ready heuristics for making future judgments, which can sort of act like discounting, but require far less computation than actually calculating a proper discount rate. (Recent empirical evidence seems to be tilting this direction.)

In any case, whatever we do is clearly not a proper rational discount rate. And this means that our behavior can be time-inconsistent: a choice that seems rational at one time can seem irrational at a later time. When we’re planning out our year and saying we will hit the treadmill more, it seems like a good idea; but when we actually get to the gym and feel our legs ache as we start running, we begin to regret our decision.
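A standard toy example makes that time-inconsistency concrete. The discounting parameters below are invented purely for illustration; the point is only that the hyperbolic discounter reverses its choice as the reward gets close, while the exponential discounter never does:

```python
def hyperbolic(value, delay, k=0.5):
    """Hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay)

def exponential(value, delay, daily_factor=0.99):
    """Exponential discounting: value * daily_factor ** delay."""
    return value * daily_factor ** delay

# The choice: $100 on day 10, or $110 on day 11.
# Planning on day 0, both rules say "wait for the $110":
print(hyperbolic(100, 10), hyperbolic(110, 11))    # ~16.7 vs ~16.9
print(exponential(100, 10), exponential(110, 11))  # ~90.4 vs ~98.5

# Deciding on day 10, the hyperbolic discounter flips and grabs the $100 now,
# while the exponential discounter still waits:
print(hyperbolic(100, 0), hyperbolic(110, 1))      # 100.0 vs ~73.3
print(exponential(100, 0), exponential(110, 1))    # 100.0 vs ~108.9
```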

The challenge, really, is determining which “version” of us is correct! A priori, we don’t actually know whether the view of our distant self contemplating the future or the view of our current self making the choice in the moment is the right one. Actually, when I frame it this way, it almost seems like the self that’s closer to the choice should have better information—and yet typically we think the exact opposite, that it is our past self making plans that really knows what’s best for us.

So where does that come from? Why do we think, at least in most cases, that the “me” which makes a plan a year in advance is the smart one, and the “me” that actually decides in the moment is untrustworthy?

Kahneman has a good explanation for this, in his model of System 1 and System 2. System 1 is simple and fast, but often gets the wrong answer. System 2 usually gets the right answer, but it is complex and slow. When we are making plans, we have a lot of time to think, and we can afford to expend the extra effort to engage the full power of System 2. But when we are living in the moment, choosing what to do right now, we don’t have that luxury of time, and we are forced to fall back on System 1. System 1 is easier—but it’s also much more likely to be wrong.

How, then, do we resolve this conflict? Commitment. (Perhaps that’s why it’s called a New Year’s resolution!)

We make promises to ourselves, commitments that we will feel bad about not following through.

If we rationally discounted, this would be a baffling thing to do; we’re just imposing costs on ourselves for no reason. But because we don’t discount rationally, commitments allow us to change the calculation for our future selves.

This brings me to one last strategy to use when making your resolutions: Include punishment.

“I will work out at least 2 hours per week, and if I don’t, I’m not allowed to watch TV all weekend.” Now that is a resolution you are actually likely to keep.

To see why, consider the decision problem for your System 2 self today versus your System 1 self throughout the year.

Your System 2 self has done the cost-benefit analysis and ruled that working out 2 hours per week is worthwhile for its health benefits.

If you left it at that, your System 1 self would each day find an excuse to procrastinate the workouts, because at least from where they’re sitting, working out for 2 hours looks a lot more painful than the marginal loss in health from missing just this one week. And of course this will keep happening, week after week—and then 52 weeks go by and you’ve had few if any workouts.

But by adding the punishment of “no TV”, you have imposed an additional cost on your System 1 self, something that they care about. Suddenly the calculation changes; it’s not just 2 hours of workout weighed against vague long-run health benefits, but 2 hours of workout weighed against no TV all weekend. That punishment is surely too much to bear; so you’d best do the workout after all.
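In toy numbers (entirely invented), the in-the-moment comparison looks something like this:

```python
# Made-up 'System 1' costs, all felt right now rather than in the distant future.
cost_of_workout = 10         # two hours of sweat, today
perceived_health_loss = 2    # "missing just this one week" barely registers
no_tv_punishment = 15        # a whole weekend without TV, also felt today

# Without the punishment clause, skipping looks cheaper, so you skip:
print(cost_of_workout > perceived_health_loss)                     # True  -> skip the workout
# With the punishment clause, skipping costs the health loss *and* the TV ban:
print(cost_of_workout > perceived_health_loss + no_tv_punishment)  # False -> do the workout
```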

Do it right, and you will rarely if ever have to impose the punishment. But don’t make it too large, or then it will seem unreasonable and you won’t want to enforce it if you ever actually need to. Your System 1 self will then know this, and treat the punishment as nonexistent. (Formally the equilibrium is not subgame perfect; I am gravely concerned that our nuclear deterrence policy suffers from precisely this flaw.) “If I don’t work out, I’ll kill myself” is a recipe for depression, not healthy exercise habits.

But if you set clear, actionable objectives and sufficient but reasonable punishments, there’s at least a good chance you will be among the minority of people who actually succeed in keeping their New Year’s resolutions.

And if not, there’s always next year.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.0001% should have no more effect in reality than a switch from 0% to 0.00001% would to a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, such games already exist and are quite profitable for the majority of US states; they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or about 1 in a billion (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to do that in order to show any difference at all. I have bolded where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY (about 4 nano-hectoQALY) over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
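
For concreteness, here is a minimal sketch of these calculations (the dollar figures and the 10^-9 odds are the rough guesses from above, not real Powerball parameters):

    from math import log

    BASELINE = 100_000            # dollars; utility is measured in hectoQALY
    LIFETIME_INCOME = 2_000_000   # dollars
    JACKPOT = 100_000_000         # dollars
    P_WIN = 1e-9                  # rough odds of winning with a single ticket

    def utility(wealth):
        """Logarithmic utility relative to the baseline, in hectoQALY."""
        return log(wealth / BASELINE)

    # One ticket versus no ticket
    u_no_ticket = utility(LIFETIME_INCOME)
    u_one_ticket = (1 - P_WIN) * utility(LIFETIME_INCOME) + P_WIN * utility(LIFETIME_INCOME + JACKPOT)
    print(u_no_ticket, u_one_ticket)    # 2.9957322736 vs. 2.9957322775

    # A ticket every week for 80 years: about $8,000 spent in total
    p_ever = 4e-6                       # 52 weeks * 80 years of tickets, rounded as in the text
    u_habit = (1 - p_ever) * utility(LIFETIME_INCOME - 8_000) + p_ever * utility(LIFETIME_INCOME + JACKPOT)
    print(u_habit)                      # ~2.9917399955: about 0.004 hQALY (0.4 QALY) worse off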

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or have a winning ticket bequeathed to you by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X for a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
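
To make the parameterization concrete, a hypothetical encoding of such conditions might look like this (the class and field names are mine, purely for illustration; X is cost, P is p_win, Y is multiplier):

    from dataclasses import dataclass
    import random

    @dataclass
    class Gamble:
        cost: float        # $X, the price to play
        p_win: float       # P, the probability of winning
        multiplier: float  # Y, so the prize is $Y * X
        repeatable: bool   # whether the participant may play more than once

        def prize(self) -> float:
            return self.multiplier * self.cost

        def play(self) -> float:
            """Simulate one play; returns the net payoff."""
            return self.prize() - self.cost if random.random() < self.p_win else -self.cost

    # The two example conditions quoted above:
    conditions = [
        Gamble(cost=2, p_win=0.01, multiplier=100, repeatable=True),   # $2 for a 1% chance of $200
        Gamble(cost=10, p_win=0.05, multiplier=25, repeatable=False),  # $10 for a 5% chance of $250
    ]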

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern (a rough sketch in code follows the list):

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%
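
As a crude illustration, these guesses might be encoded like this (the anchor values are my own intuition, as stated above, and the nearest-anchor rule is just one possible operationalization; where the edges between categories really fall is exactly what the experiment would need to pin down):

    ANCHORS = {
        "impossible": 0.0,
        "almost impossible": 0.001,
        "very unlikely": 0.01,
        "unlikely": 0.10,
        "fairly unlikely": 0.20,
        "roughly even odds": 0.50,
        "fairly likely": 0.80,
        "likely": 0.90,
        "very likely": 0.99,
        "almost certain": 0.999,
        "certain": 1.0,
    }

    def categorize(p):
        """Naive categorization: 0 and 1 are sharp; otherwise snap to the nearest anchor."""
        if p == 0.0:
            return "impossible"
        if p == 1.0:
            return "certain"
        interior = {label: v for label, v in ANCHORS.items() if 0.0 < v < 1.0}
        return min(interior, key=lambda label: abs(interior[label] - p))

    print(categorize(0.0001))   # "almost impossible": even 0.01% escapes "impossible"
    print(categorize(0.02))     # "very unlikely": moving from 1% to 2% changes nothing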

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
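
To illustrate that Bayesian point with a toy example (a conjugate Beta–binomial update, not the actual analysis I would run on this experiment): updating on two batches of 100 participants each gives exactly the same posterior as one batch of 200 with the same total data.

    def update(prior, successes, trials):
        """Conjugate update of a Beta(a, b) prior on a success probability."""
        a, b = prior
        return (a + successes, b + (trials - successes))

    prior = (1, 1)  # uniform Beta(1, 1) prior

    # Two experiments of 100 participants each (hypothetical counts)
    posterior_batched = update(update(prior, 23, 100), 31, 100)

    # One experiment of 200 participants with the same pooled data
    posterior_pooled = update(prior, 54, 200)

    print(posterior_batched == posterior_pooled)  # True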

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

Bigotry is more powerful than the market

Nov 20, JDN 2457683

If there’s one message we can take from the election of Donald Trump, it is that bigotry remains a powerful force in our society. A lot of autoflagellating liberals have been trying to explain how this election result really reflects our failure to help people displaced by technology and globalization (despite the fact that personal income and local unemployment had negligible correlation with voting for Trump), or Hillary Clinton’s “bad campaign” that nonetheless managed the same proportion of Democrat turnout that re-elected her husband in 1996.

No, overwhelmingly, the strongest predictor of voting for Trump was being White, and living in an area where most people are White. (Well, actually, that’s if you exclude authoritarianism as an explanatory variable—but really I think that’s part of what we’re trying to explain.) Trump voters were actually concentrated in areas less affected by immigration and globalization. Indeed, there is evidence that these people aren’t racist because they have anxiety about the economy—they are anxious about the economy because they are racist. How does that work? Obama. They can’t believe that the economy is doing well when a Black man is in charge. So all the statistics and even personal experiences mean nothing to them. They know in their hearts that unemployment is rising, even as the BLS data clearly shows it’s falling.

The wide prevalence and enormous power of bigotry should be obvious. But economists rarely talk about it, and I think I know why: Their models say it shouldn’t exist. The free market is supposed to automatically eliminate all forms of bigotry, because they are inefficient.

The argument for why this is supposed to happen actually makes a great deal of sense: If a company has the choice of hiring a White man or a Black woman to do the same job, but they know that the market wage for Black women is lower than the market wage for White men (which it most certainly is), and they will do the same quality and quantity of work, why wouldn’t they hire the Black woman? And indeed, if human beings were rational profit-maximizers, this is probably how they would think.

More recently some neoclassical models have been developed to try to “explain” this behavior, but always without daring to give up the precious assumption of perfect rationality. So instead we get the two leading neoclassical theories of discrimination, which are statistical discrimination and taste-based discrimination.

Statistical discrimination is the idea that under asymmetric information (and we surely have that), features such as race and gender can act as signals of quality because they are correlated with actual quality for various reasons (usually left unspecified), so it is not irrational after all to choose based upon them, since they’re the best you have.

Taste-based discrimination is the idea that people are rationally maximizing preferences that simply aren’t oriented toward maximizing profit or well-being. Instead, they have this extra term in their utility function that says they should also treat White men better than women or Black people. It’s just this extra thing they have.

A small number of studies have been done trying to discern which of these is at work.

The correct answer, of course, is neither.

Statistical discrimination, at least, could be part of what’s going on. Knowing that Black people are less likely to be highly educated than Asians (as they definitely are) might actually be useful information in some circumstances… then again, you list your degree on your resume, don’t you? Knowing that women are more likely to drop out of the workforce after having a child could rationally (if coldly) affect your assessment of future productivity. But shouldn’t the fact that women CEOs outperform men CEOs be incentivizing shareholders to elect women CEOs? Yet that doesn’t seem to happen. Also, in general, people seem to be pretty bad at statistics.

The bigger problem with statistical discrimination as a theory is that it’s really only part of a theory. It explains why not all of the discrimination has to be irrational, but some of it still does. You need to explain why there are these huge disparities between groups in the first place, and statistical discrimination is unable to do that. In order for the statistics to differ this much, you need a past history of discrimination that wasn’t purely statistical.

Taste-based discrimination, on the other hand, is not a theory at all. It’s special pleading. Rather than admit that people are failing to rationally maximize their utility, we just redefine their utility so that whatever they happen to be doing now “maximizes” it.

This is really what makes the Axiom of Revealed Preference so insidious; if you really take it seriously, it says that whatever you do, must by definition be what you preferred. You can’t possibly be irrational, you can’t possibly be making mistakes of judgment, because by definition whatever you did must be what you wanted. Maybe you enjoy bashing your head into a wall, who am I to judge?

I mean, on some level taste-based discrimination is what’s happening; people think that the world is a better place if they put women and Black people in their place. So in that sense, they are trying to “maximize” some “utility function”. (By the way, most human beings behave in ways that are provably inconsistent with maximizing any well-defined utility function—the Allais Paradox is a classic example.) But the whole framework of calling it “taste-based” is a way of running away from the real explanation. If it’s just “taste”, well, it’s an unexplainable brute fact of the universe, and we just need to accept it. If people are happier being racist, what can you do, eh?

So I think it’s high time to start calling it what it is. This is not a question of taste. This is a question of tribal instinct. This is the product of millions of years of evolution optimizing the human brain to act in the perceived interest of whatever it defines as its “tribe”. It could be yourself, your family, your village, your town, your religion, your nation, your race, your gender, or even the whole of humanity or beyond into all sentient beings. But whatever it is, the fundamental tribe is the one thing you care most about. It is what you would sacrifice anything else for.

And what we learned on November 9 this year is that an awful lot of Americans define their tribe in very narrow terms. Nationalistic and xenophobic at best, racist and misogynistic at worst.

But I suppose this really isn’t so surprising, if you look at the history of our nation and the world. Segregation was not outlawed in US schools until 1955, and there are women who voted in this election who were born before American women got the right to vote in 1920. The nationalistic backlash against sending jobs to China (which was one of the chief ways that we reduced global poverty to its lowest level ever, by the way) really shouldn’t seem so strange when we remember that over 100,000 Japanese-Americans were literally forcibly relocated into camps as recently as 1942. The fact that so many White Americans seem all right with the biases against Black people in our justice system may not seem so strange when we recall that systemic lynching of Black people in the US didn’t end until the 1960s.

The wonder, in fact, is that we have made as much progress as we have. Tribal instinct is not a strange aberration of human behavior; it is our evolutionary default setting.

Indeed, perhaps it is unreasonable of me to ask humanity to change its ways so fast! We had millions of years to learn how to live the wrong way, and I’m giving you only a few centuries to learn the right way?

The problem, of course, is that the pace of technological change leaves us with no choice. It might be better if we could wait a thousand years for people to gradually adjust to globalization and become cosmopolitan; but climate change won’t wait a hundred, and nuclear weapons won’t wait at all. We are thrust into a world that is changing very fast indeed, and I understand that it is hard to keep up; but there is no way to turn back that tide of change.

Yet “turn back the tide” does seem to be part of the core message of the Trump voter, once you get past the racial slurs and sexist slogans. People are afraid of what the world is becoming. They feel that it is leaving them behind. Coal miners fret that we are leaving them behind by cutting coal consumption. Factory workers fear that we are leaving them behind by moving the factory to China or inventing robots to do the work in half the time for half the price.

And truth be told, they are not wrong about this. We are leaving them behind. Because we have to. Because coal is polluting our air and destroying our climate, we must stop using it. Moving the factories to China has raised them out of the most dire poverty, and given us a fighting chance toward ending world hunger. Inventing the robots is only the next logical step in the process that has carried humanity forward from the squalor and suffering of primitive life to the security and prosperity of modern society—and it is a step we must take, for the progress of civilization is not yet complete.

They wouldn’t have to let themselves be left behind, if they were willing to accept our help and learn to adapt. That carbon tax that closes your coal mine could also pay for your basic income and your job-matching program. The increased efficiency from the automated factories could provide an abundance of wealth that we could redistribute and share with you.

But this would require them to rethink their view of the world. They would have to accept that climate change is a real threat, and not a hoax created by… uh… never was clear on that point actually… the Chinese maybe? But 45% of Trump supporters don’t believe in climate change (and that’s actually not as bad as I’d have thought). They would have to accept that what they call “socialism” (which really is more precisely described as social democracy, or tax-and-transfer redistribution of wealth) is actually something they themselves need, and will need even more in the future. But despite rising inequality, redistribution of wealth remains fairly unpopular in the US, especially among Republicans.

Above all, it would require them to redefine their tribe, and start listening to—and valuing the lives of—people that they currently do not.

Perhaps we need to redefine our tribe as well; many liberals have argued that we mistakenly—and dangerously—did not include people like Trump voters in our tribe. But to be honest, that rings a little hollow to me: We aren’t the ones threatening to deport people or ban them from entering our borders. We aren’t the ones who want to build a wall (though some have in fact joked about building a wall to separate the West Coast from the rest of the country, I don’t think many people really want to do that). Perhaps we live in a bubble of liberal media? But I make a point of reading outlets like The American Conservative and The National Review for other perspectives (I usually disagree, but I do at least read them); how many Trump voters do you think have ever read the New York Times, let alone Huffington Post? Cosmopolitans almost by definition have the more inclusive tribe, the more open perspective on the world (in fact, do I even need the “almost”?).

Nor do I think we are actually ignoring their interests. We want to help them. We offer to help them. In fact, I want to give these people free money—that’s what a basic income would do, it would take money from people like me and give it to people like them—and they won’t let us, because that’s “socialism”! Rather, we are simply refusing to accept their offered solutions, because those so-called “solutions” are beyond unworkable; they are absurd, immoral and insane. We can’t bring back the coal mining jobs, unless we want Florida underwater in 50 years. We can’t reinstate the trade tariffs, unless we want millions of people in China to starve. We can’t tear down all the robots and force factories to use manual labor, unless we want to trigger a national—and then global—economic collapse. We can’t do it their way. So we’re trying to offer them another way, a better way, and they’re refusing to take it. So who here is ignoring the concerns of whom?

Of course, the fact that it’s really their fault doesn’t solve the problem. We do need to take it upon ourselves to do whatever we can, because, regardless of whose fault it is, the world will still suffer if we fail. And that presents us with our most difficult task of all, a task that I fully expect to spend a career trying to do and yet still probably failing: We must understand the human tribal instinct well enough that we can finally begin to change it. We must know enough about how human beings form their mental tribes that we can actually begin to shift those parameters. We must, in other words, cure bigotry—and we must do it now, for we are running out of time.