Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists, the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe when committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. “What privilege gives you is the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also apply some sense of moral responsibility to whole races quite frequently. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, that is literally feudalist, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Benn Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me” (Exodus 20:5).

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of all human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations, but not all hope is lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we do ensure representation and reduce bias, but don’t ever end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in almost everyone, and social structures almost everywhere, that systematically discriminate against people because they are women.
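
To make that two-stage process concrete, here is a minimal sketch of how it might work (purely my own illustration, not an existing system; the field names like "group" and "merit_score" are hypothetical stand-ins for whatever a real hiring pipeline would record):

    import random

    def shortlist(applicants, quotas, n_hire=1):
        # Stage 1: build a representative pool by sampling each
        # demographic group up to its quota.
        pool = []
        for group, k in quotas.items():
            members = [a for a in applicants if a["group"] == group]
            pool += random.sample(members, min(k, len(members)))
        # Stage 2: anonymize, keeping only merit-relevant information.
        anonymized = [{"id": i, "merit": a["merit_score"]}
                      for i, a in enumerate(pool)]
        # Stage 3: rank purely on merit and take the top candidates.
        anonymized.sort(key=lambda a: a["merit"], reverse=True)
        return anonymized[:n_hire]

The design choice is that representation is enforced when the pool is built, while the final decision sees nothing but merit.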

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade so small that they are negligible.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually make their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Quite apart from its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mailroom has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Simply optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, we should assign jobs solely based on marginal productivity; there should be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you end up doing, because that’s what you’ll be paid the most to do. Actually for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to own a major financial firm—and I’m a lot closer than most people. I get to tick most of the boxes you need to be in that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors that people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t see them as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the tasks depends upon efficient coordination, as well as how skilled other people are at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder.” Your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, as well as for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

Why is our diet so unhealthy?

JDN 2457447

One of the most baffling facts about the world, particularly to a development economist, is that the leading causes of death around the world broadly cluster into two categories: Obesity, in First World countries, and starvation, in Third World countries. At first glance, it seems like the rich are eating too much and there isn’t enough left for the poor.

Yet in fact it’s not quite so simple as that, because in fact obesity is most common among the poor in First World countries, and in Third World countries obesity rates are rising rapidly and co-existing with starvation. It is becoming recognized that there are many different kinds of obesity, and that a past history of starvation is actually a major risk factor in future obesity.

Indeed, the really fundamental problem is malnutrition—people are not necessarily eating too much or too little, they are eating the wrong things. So, my question is: Why?

It is widely thought that foods which are nutritious are also unappetizing, and conversely that foods which are delicious are unhealthy. There is a clear kernel of truth here, as a comparison of Brussels sprouts versus ice cream will surely indicate. But this is actually somewhat baffling. We are an evolved organism; one would think that natural selection would shape us so that we enjoy foods which are good for us and avoid foods which are bad for us.

I think it did, actually; the problem is, we have changed our situation so drastically by means of culture and technology that evolution hasn’t had time to catch up. We have evolved significantly since the dawn of civilization, but we haven’t had any time to evolve since one event in particular: The Green Revolution. Indeed, many people are still alive today who were born while the Green Revolution was still underway.

The Green Revolution is the culmination of a long process of development in agriculture and industrialization, but it would be difficult to overstate its importance as an epoch in the history of our species. We now have essentially unlimited food.

Not literally unlimited, of course; we do still need land, and water, and perhaps most notably energy (oil-driven machines are a vital part of modern agriculture). But we can produce vastly more food than was previously possible, and food supply is no longer a binding constraint on human population. Indeed, we already produce enough food to feed 10 billion people. People who say that some new agricultural technology will end world hunger don’t understand what world hunger actually is. Food production is not the problem—distribution of wealth is the problem.

I often speak about the possibility of reaching post-scarcity in the future; but we have essentially already done so in the domain of food production. If everyone ate what would be optimally healthy, and we distributed food evenly across the world, there would be plenty of food to go around and no such thing as obesity or starvation.

So why hasn’t this happened? Well, the main reason, like I said, is distribution of wealth.

But that doesn’t explain why so many people who do have access to good foods nonetheless don’t eat them.

The first thing to note is that healthy food is more expensive. It isn’t a huge difference by First World standards—about $550 per year extra per person. But when we compare the cost of a typical nutritious diet to that of a typical diet, the nutritious diet is significantly more expensive. Worse yet, this gap appears to be growing over time.

But why is this the case? It’s actually quite baffling on its face. Nutritious foods are typically fruits and vegetables that one can simply pluck off plants. Unhealthy foods are typically complex processed foods that require machines and advanced technology. There should be “value added”, at least in the economic sense; additional labor must go in, additional profits must come out. Why is it cheaper?

In a word? Subsidies.

Somehow, huge agribusinesses have convinced governments around the world that they deserve to be paid extra money, either simply for existing or based on how much they produce. When I say “somehow”, I of course mean lobbying.

In the US, these subsidies overwhelmingly go toward corn, followed by cotton, followed by soybeans.

In fact, they don’t actually even go to corn as you would normally think of it, like sweet corn or corn on the cob. No, they go to feed corn—really awful stuff that includes the entire plant, is barely even recognizable as corn, and has its “quality” literally rated by scales and sieves. No living organism was ever meant to eat this stuff.

Humans don’t, of course. Cows do. But they didn’t evolve for this stuff either; they can’t digest it properly, and it’s because of this terrible food we force-feed them that they need so many antibiotics.

Thus, these corn subsidies are really primarily beef subsidies—they are a means of externalizing the cost of beef production and keeping the price of hamburgers artificially low. In all, 2/3 of US agricultural subsidies ultimately go to meat production. I haven’t been able to find any really good estimates, but as a ballpark figure it seems that meat would cost about twice as much if we didn’t subsidize it.

Fortunately a lot of these subsidies have been decreased under the Obama administration, particularly “direct payments” which are sort of like a basic income, but for agribusinesses. (That is not what basic incomes are for.) You can see the decline in US corn subsidies here.

Despite all this, however, subsidies cannot explain obesity. Removing them would have only a small effect.

An often overlooked consideration is that nutritious food can be more expensive for a family even if the actual pricetag is the same.

Why? Because kids won’t eat it.

To raise kids on a nutritious diet, you have to feed them small amounts of good food over a long period of time, until they acquire the taste. In order to do this, you need to be prepared to waste a lot of food, and that costs money. It’s cheaper to simply feed them something unhealthy, like ice cream or hot dogs, that you know they’ll eat.

And this brings me to what I think is the real ultimate cause of our awful diet: We evolved for a world of starvation, and our bodies cannot cope with abundance.

It’s important to be clear about what we mean by “unhealthy food”; people don’t enjoy consuming lead and arsenic. Rather, we enjoy consuming fat and sugar. Contrary to what fad diets will tell you, fat and sugar are not inherently bad for human health; indeed, we need a certain amount of fat and sugar in order to survive. What we call “unhealthy food” is actually food that we desperately need—in small quantities.

Under the conditions in which we evolved, fat and sugar were extremely scarce. Eating fat meant hunting a large animal, which required the cooperation of the whole tribe (a quite literal Stag Hunt) and carried risk of life and limb, not to mention simply failing and getting nothing. Eating sugar meant finding fruit trees and gathering fruit from them—and fruit trees are not all that common in nature. These foods also spoil quite quickly, so you eat them right away or not at all.

As such, we evolved to really crave these things, to ensure that we would eat them whenever they are available. Since they weren’t available all that often, this was just about right to ensure that we managed to eat enough, and rarely meant that we ate too much.

 

But now fast-forward to the Green Revolution. They aren’t scarce anymore. They’re everywhere. There are whole buildings we can go to with shelves upon shelves of them, which we ourselves can claim simply by swiping a little plastic card through a reader. We don’t even need to understand how that system of encrypted data networks operates, or what exactly is involved in maintaining our money supply (and most people clearly don’t); all we need to do is perform the right ritual and we will receive an essentially unlimited abundance of fat and sugar.

Even worse, this food is in processed form, so we can extract the parts that make it taste good, while separating them from the parts that actually make it nutritious. If fruits were our main source of sugar, that would be fine. But instead we get it from corn syrup and sugarcane, and even when we do get it from fruit, we extract the sugar instead of eating the whole fruit.

Natural selection had no particular reason to give us that level of discrimination; since eating apples and oranges was good for us, we evolved to like the taste of apples and oranges. There wasn’t a sufficient selection pressure to make us actually eat the whole fruit as opposed to extracting the sugar, because extracting the sugar was not an option available to our ancestors. But it is available to us now.

Vegetables, on the other hand, are also more abundant now, but were already fairly abundant. Indeed, it may be significant that we’ve had enough time to evolve since agriculture, but not enough time since fertilizer. Agriculture allowed us to make plenty of wheat and carrots; but it wasn’t until fertilizer that we could make enough hamburgers for people to eat them regularly. It could be that our hunter-gatherer ancestors actually did crave carrots in much the same way they and we crave sugar; but since agriculture we have had no further need to, because carrots have been widely available ever since.

One thing I do still find a bit baffling: Why are so many green vegetables so bitter? It would be one thing if they simply weren’t as appealing as fat and sugar; but it honestly seems like a lot of green vegetables, such as broccoli, spinach, and Brussels sprouts, are really quite actively aversive, at least until you acquire the taste for them. Given how nutritious they are, it seems like there should have been a selective pressure in favor of liking the taste of green vegetables; but there wasn’t. I wonder if it’s actually coevolution—if perhaps broccoli has been evolving to not be eaten as quickly as we were evolving to eat it. This wouldn’t happen with apples and oranges, because in an evolutionary sense apples and oranges “want” to be eaten; they spread their seeds in the droppings of animals. But for any given stalk of broccoli, becoming lunch is definitely bad news.

Yet even this is pretty weird, because broccoli has definitely evolved substantially since agriculture—indeed, broccoli as we know it would not exist otherwise. Ancestral Brassica oleracea was bred to become cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, savoy, kohlrabi and kai-lan—and looks like none of them.

It looks like I still haven’t solved the mystery. In short, we get fat because kids hate broccoli; but why in the world do kids hate broccoli?

The power of exponential growth

JDN 2457390

There’s a famous riddle: If the water in a lakebed doubles in volume every day, and the lakebed started filling on January 1, and is half full on June 17, when will it be full?

The answer is of course June 18—if it doubles every day, it will go from half full to full in a single day.

But most people assume that half the work takes about half the time, so they usually give answers in December. Others try to correct, but don’t go far enough, and say something like October.

Human brains are programmed to understand linear processes. We expect things to come in direct proportion: If you work twice as hard, you expect to get twice as much done. If you study twice as long, you expect to learn twice as much. If you pay twice as much, you expect to get twice as much stuff.

We tend to apply this same intuition to situations where it does not belong, processes that are not actually linear but exponential. As a result, when we extrapolate the slow growth early in the process, we wildly underestimate the total growth in the long run.

For example, suppose we have two countries. Arcadia has a GDP of $100 billion per year, and they grow at 4% per year. Berkland has a GDP of $200 billion, and they grow at 2% per year. Assuming that they maintain these growth rates, how long will it take for Arcadia’s GDP to exceed Berkland’s?

If we do this intuitively, we might sort of guess that at 4% you’d add 100% in 25 years, and at 2% you’d add 100% in 50 years; so it should be something like 75 years, because then Arcadia will have added $300 billion while Berkland added $200 billion. You might even just fudge the numbers in your head and say “about a century”.

In fact, it is only 35 years. You could solve this exactly by setting (100)(1.04^x) = (200)(1.02^x); but I have an intuitive method that I think may help you to estimate exponential processes in the future.

Divide the percentage into 69. (For some numbers it’s easier to use 70 or 72; remember, these are just approximations. The exact rule divides 100*ln(2) = 69.3147… not by the percentage p but by 100*ln(1+p/100); plot the two and you’ll see why simply using p works well for small growth rates.) This is the time it will take to double.

So at 4%, Arcadia will double in about 17.5 years, quadrupling in 35 years. At 2%, Berkland will double in about 35 years. Thus, in 35 years, Arcadia will quadruple and Berkland will double, so their GDPs will be equal.
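
If you want to check this yourself, here is a quick sketch in Python (my own illustration, not anyone’s canonical method) comparing the rule-of-69 shortcut to the exact answer:

    import math

    def doubling_time(p):
        # Exact doubling time for an annual growth rate of p percent.
        return math.log(2) / math.log(1 + p / 100)

    # Rule-of-69 shortcut vs. exact doubling time:
    for p in (2, 4, 10):
        print(p, round(69 / p, 1), round(doubling_time(p), 1))

    # Exact crossover for Arcadia ($100B at 4%) vs. Berkland ($200B at 2%):
    #   100 * 1.04**x = 200 * 1.02**x  =>  x = ln(2) / ln(1.04 / 1.02)
    x = math.log(2) / math.log(1.04 / 1.02)
    print(round(x, 1))  # about 35.7 years

At these growth rates the shortcut lands within a year or so of the exact doubling times, which is plenty for mental arithmetic.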

Economics is full of exponential processes: Compound interest is exponential, and over moderately long periods GDP and population both tend to grow exponentially. (In fact they grow logistically, which is similar to exponential until it gets very large and begins to slow down. If you smooth out our recessions, you can get a sense that since the 1940s, US GDP growth has slowed down from about 4% per year to about 2% per year.) It is therefore quite important to understand how exponential growth works.

Let’s try another one. If one account has $1 million, growing at 5% per year, and another has $1,000, growing at 10% per year, how long will it take for the second account to have more money in it?

69/5 is about 14, so the first account doubles in 14 years. 69/10 is about 7, so the second account doubles in 7 years. A factor of 1000 is about 10 doublings (2^10 = 1024), so the second account needs about 10 more doublings than the first account. Since it doubles twice as often, this means that it must have doubled 20 times while the other doubled 10 times. Therefore, it will take about 140 years.

In fact, the exact answer is about 148 years—so our quick approximation is still quite close.
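
And the same kind of check for this example (again just a sketch):

    import math

    # Exact crossover: 1_000_000 * 1.05**t = 1_000 * 1.10**t
    #   =>  (1.10 / 1.05)**t = 1000  =>  t = ln(1000) / ln(1.10 / 1.05)
    t_exact = math.log(1000) / math.log(1.10 / 1.05)
    print(round(t_exact, 1))   # about 148.5 years

    # Rule-of-69 shortcut: ~20 doublings at ~6.9 years each
    print(20 * 69 / 10)        # 138.0 years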

This example is instructive in another way; nearly 150 years is a pretty long time, isn’t it? You can’t just assume that exponential growth is “as fast as you want it to be”. Once people realize that exponential growth is very fast, they often overcorrect, assuming that exponential growth automatically means growth that is absurdly—or arbitrarily—fast. (XKCD made a similar point in this comic.)

I think the worst examples of this mistake are among Singularitarians. They—correctly—note that computing power has become exponentially greater and cheaper over time, doubling about every 18 months, which has been dubbed Moore’s Law. They assume that this will continue into the indefinite future (this is already problematic; the growth rate seems to be already slowing down). And therefore they conclude there will be a sudden moment, a technological singularity, at which computers will suddenly outstrip humans in every way and bring about a new world order of artificial intelligence basically overnight. They call it a “hard takeoff”; here’s a direct quote:

But many thinkers in this field including Nick Bostrom and Eliezer Yudkowsky worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a huge subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, one which can identify certain objects in pictures and navigate a complex environment, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

Wait… what? For someone like me who understands exponential growth, the last part is a baffling non sequitur. If computers start half as smart as us and double every 18 months, in 18 months, they will be as smart as us. In 36 months, they will be twice as smart as us. Twice as smart as us literally means that two people working together perfectly can match them—certainly a few dozen working realistically can. We’re not in danger of total AI domination from that. With millions of people working against the AI, we should be able to keep up with it for at least another 30 years. So are you assuming that this trend is continuing or not? (Oh, and by the way, we’ve had AIs that can identify objects and navigate complex environments for a couple years now, and so far, no ringworld around the Sun.)

That same essay makes a biological argument, which misunderstands human evolution in a way that is surprisingly subtle yet ultimately fundamental:

If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

No, actually, what makes humans what we are is not that we are 1% smarter than chimpanzees.

First of all, we’re actually more like 200% smarter than chimpanzees, measured by encephalization quotient; they clock in at 2.49 while we hit 7.44. If you simply measure by raw volume, they have about 400 mL to our 1300 mL, so again roughly 3 times as big. But that’s relatively unimportant; with Moore’s Law, tripling only takes about 2.5 years.

But even having triple the brain power is not what makes humans different. It was a necessary condition, but not a sufficient one. Indeed, it was so insufficient that for about 200,000 years we had brains just as powerful as we do now and yet we did basically nothing in technological or economic terms—total, complete stagnation on a global scale. This is a conservative estimate of when we had brains of the same size and structure as we do today.

What makes humans what we are? Cooperation. We are what we are because we are together.

The capacity of human intelligence today is not 1300 mL of brain. It’s more like 1.3 gigaliters of brain, where a gigaliter, a billion liters, is about the volume of the Empire State Building. We have the intellectual capacity we do not because we are individually geniuses, but because we have built institutions of research and education that combine, synthesize, and share the knowledge of billions of people who came before us. Isaac Newton didn’t understand the world as well as the average third-grader in the 21st century does today. Does the third-grader have more brain? Of course not. But they absolutely do have more knowledge.

(I recently finished my first playthrough of Legacy of the Void, in which a central point concerns whether the Protoss should detach themselves from the Khala, a psychic union which combines all their knowledge and experience into one. I won’t spoil the ending, but let me say this: I can understand their hesitation, for it is basically our equivalent of the Khala—first literacy, and now the Internet—that has made us what we are. It would no doubt be the Khala that made them what they are as well.)

Is AI still dangerous? Absolutely. There are all sorts of damaging effects AI could have, culturally, economically, militarily—and some of them are already beginning to happen. I even agree with the basic conclusion of that essay that OpenAI is a bad idea because the cost of making AI available to people who will abuse it or create one that is dangerous is higher than the benefit of making AI available to everyone. But exponential growth not only isn’t the same thing as instantaneous takeoff, it isn’t even compatible with it.

The next time you encounter an example of exponential growth, try this. Don’t just fudge it in your head, don’t overcorrect and assume everything will be fast—just divide the percentage into 69 to see how long it will take to double.

Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided, the question of “nature versus nurture”: Is it genetics or environment that makes us what we are?

Humans are probably the single entity in the universe for which this question makes least sense. Artificial constructs have no prior existence, so they are “all nurture”, made what we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one very big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: Being mammals, they have an advanced brain capable of flexibly responding to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We’re more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule key vibrations, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But there the analogy with humans is quite accurate as well: Written language is about 10,000 years old, while the human visual mind is at least 100,000. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs the other direction. Indeed, when I said other organisms were “all nature” that wasn’t right either; for even tightly-programmed instincts are evolved through millions of years of environmental pressure. Human beings have even been involved in cultural interactions long enough that it has begun to affect our genetic evolution; the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors’ nurture.

And then of course there’s the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised into a deficient environment will suffer a kind of mental atrophy, as when children raised feral lose their ability to speak.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn’t actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this population is caused by genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.

Yet, that isn’t the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences and can actually be used as a measure of a nation’s economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because there are a larger proportion of non-native English speakers in other countries. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.
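
To see why, here is a toy simulation (a sketch only, assuming a simple additive model in which each person’s trait is just an independent genetic value plus an environmental value):

    import random
    import statistics as stats

    def heritability(n, env_sd, gene_sd=1.0):
        # Trait = genetic value + environmental value (independent, additive).
        # Heritability is the share of trait variance explained by genes.
        genes = [random.gauss(0, gene_sd) for _ in range(n)]
        env = [random.gauss(0, env_sd) for _ in range(n)]
        trait = [g + e for g, e in zip(genes, env)]
        return stats.variance(genes) / stats.variance(trait)

    random.seed(0)
    print(round(heritability(100_000, env_sd=0.5), 2))  # homogeneous environment: ~0.8
    print(round(heritability(100_000, env_sd=2.0), 2))  # diverse environment: ~0.2

The genes behave identically in both runs; only the diversity of the environment changes, and the measured heritability swings from about 0.8 down to about 0.2.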

As children get older, their behavior gets more heritable, a result which probably seems completely baffling, until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed into the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There’s also an effect of choosing your own environment; people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: One, there usually isn’t a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn’t matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that’s because allele X does something useful, but even if it’s simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.

We can see this quite directly: White skin spread across the world not because it was useful (it’s actually terrible in any latitude other than subarctic), but because the cultures that conquered the world happened to be comprised mostly of people with White skin. In the 15th century you’d find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn’t take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We never needed to claim that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only such evidence; we can also observe the fact that college-educated Black people are no more likely to be employed than White people who didn’t even finish high school, for example, or the fact that otherwise identical resumes with stereotypically Black names (like “Jamal”) are less likely to receive callbacks than the same resumes with stereotypically White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look closer into the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think is best summarized as “nature via nurture”; but in most cases we should first think about why we are asking that question, and try to find the real question we actually meant to ask.

Love is rational

JDN 2457066 PST 15:29.

Since I am writing this the weekend of Valentine’s Day (actually by the time it is published it will be Valentine’s Day) and sitting across from my boyfriend, it seems particularly appropriate that today’s topic should be love. As I am writing it is in fact Darwin Day, so it is fitting that evolution will be a major topic as well.

Usually we cognitive economists are the ones reminding neoclassical economists that human beings are not always rational. Today however I must correct a misconception in the opposite direction: Love is rational, or at least it can be, should be, and typically is.

Lately I’ve been reading Tim Harford’s The Logic of Life, which actually makes much the same point, about love and many other things. I had expected it to be a dogmatic defense of economic rationality—published in 2008 no less, which would make it the scream of a dying paradigm as it carries us all down with it—but I was in fact quite pleasantly surprised. The book takes a nuanced position on rationality very similar to my own, and actually incorporates many of the insights from neuroeconomics and cognitive economics. I think Harford would basically agree with me that human beings are 90% rational (but woe betide the other 10%).

We have this romantic (Romantic?) notion in our society that love is not rational, it is “beyond” rationality somehow. “Love is blind”, they say; and this is often used as a smug reply to the notion that rationality is the proper guide to live our lives.

The argument would seem to follow: “Love is not rational, love is good, therefore rationality is not always good.”

But then… the argument would follow? What do you mean, follow? Follow logically? Follow rationally? Something is clearly wrong if we’ve constructed a rational argument intended to show that we should not live our lives by rational arguments.

And the problem of course is the premise that love is not rational. Whatever made you say that?

It’s true that love is not directly volitional, not in the way that it is volitional to move your arm upward or close your eyes or type the sentence “Jackdaws ate my big sphinx of quartz.” You don’t exactly choose to love someone, weighing the pros and cons and making a decision the way you might choose which job offer to take or which university to attend.

But then, you don’t really choose which university you like either, now do you? You choose which to attend. But your enjoyment of that university is not a voluntary act. And similarly you do in fact choose whom to date, whom to marry. And you might well consider the pros and cons of such decisions. So the difference is not as large as it might at first seem.

More importantly, to say that our lives should be rational is not the same as saying they should be volitional. You simply can’t live your life through completely volitional action, no matter how hard you try. You simply don’t have the cognitive resources to maintain constant awareness of every breath, every heartbeat. Yet there is nothing irrational about breathing or heartbeats—indeed they are necessary for survival and thus a precondition of anything rational you might ever do.

Indeed, in many ways it is our subconscious that is the most intelligent part of us. It is not as flexible as our conscious mind—that is why our conscious mind is there—but the human subconscious is unmatched in its efficiency and reliability among literally all known computational systems in the known universe. Walk across a room and it will solve inverse kinematics in real time. Throw a ball and it will solve three-dimensional nonlinear differential equations as well. Look at a familiar face and it will immediately identify it among a set of hundreds of faces with near-perfect accuracy regardless of the angle, lighting conditions, or even hairstyle. To see that I am not exaggerating the immense difficulty of these tasks, look at how difficult it is to make robots that can walk on two legs or throw balls. Face recognition is so difficult that it is still an unsolved problem with an extensive body of ongoing research.

And love, of course, is the subconscious system that has been most directly optimized by natural selection. Our very survival has depended upon it for millions of years. Indeed, it’s amazing how often it does seem to fail given those tight optimization constraints; I think this is for two reasons. First, natural selection optimizes for inclusive fitness, which is not the same thing as optimizing for happiness—what’s good for your genes may not be good for you per se. Many of the ways that love hurts us seem to be based around behaviors that probably did on average spread more genes on the African savannah. Second, the task of selecting an optimal partner is so mind-bogglingly complex that even the most powerful computational system in the known universe still can only do it so well. Imagine trying to construct a formal decision model that would tell you whom you should marry—all the variables you’d need to consider, the cost of sampling each of those variables sufficiently, the proper weightings on all the different terms in the utility function. Perhaps the wonder is that love is as rational as it is.
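
Just to see how hopeless that project is, here is a toy sketch of what such a model might look like; every attribute, weight, and score below is something I made up for illustration, and the real version would need thousands of noisily measured variables and weights that nobody knows how to set:

```python
# Toy partner-choice model: a weighted sum over a handful of invented attributes.
weights = {"kindness": 0.30, "shared_values": 0.25, "humor": 0.15,
           "attraction": 0.20, "financial_stability": 0.10}

def partner_utility(attributes: dict) -> float:
    """Weighted sum of (noisily observed) attribute scores on a 0-10 scale."""
    return sum(weights[k] * attributes.get(k, 0.0) for k in weights)

print(round(partner_utility({"kindness": 9, "shared_values": 7, "humor": 8,
                             "attraction": 6, "financial_stability": 5}), 2))  # 7.35
```

Five attributes and five weights already feel arbitrary; the genuine problem has vastly more of both, plus the cost of gathering the evidence in the first place.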

Indeed, love is evidence-based—and when it isn’t, this is cause for concern. The evidence is most often presented in small ways over long periods of time—a glance, a kiss, a gift, a meeting canceled to stay home and comfort you. Some ways are larger—a career move postponed to keep the family together, a beautiful wedding, a new house. We aren’t formally calculating the Bayesian probability at each new piece of evidence—though our subconscious brains might be, and whatever they’re doing the results aren’t far off from that mathematical optimum.
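
As a purely illustrative sketch (the likelihood ratios below are invented, not measured), here is what that kind of evidence accumulation looks like when you write the Bayesian update out explicitly:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(love) given evidence that is likelihood_ratio times more likely if they love you."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5  # agnostic prior
for evidence, lr in [("a glance", 1.5), ("a kiss", 4.0),
                     ("a meeting canceled to comfort you", 8.0),
                     ("a career move postponed for the family", 20.0)]:
    p = bayes_update(p, lr)
    print(f"After {evidence}: P(love) = {p:.3f}")
```

The exact numbers don’t matter; the point is that many small, individually weak pieces of evidence compound into near-certainty.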

The notion that you will never “truly know” if others love you is no more epistemically valid or interesting than the notion that you will never “truly know” if your shirt is grue instead of green or if you are a brain in a vat. Perhaps we’ve been wrong about gravity all these years, and on April 27, 2016 it will suddenly reverse direction! No, it won’t, and I’m prepared to literally bet the whole world on that (frankly I’m not sure I have a choice). To be fair, the proposition that your spouse of twenty years or your mother loves you is perhaps not that certain—but it’s pretty darn certain. Perhaps the proper comparison is the level of certainty that climate change is caused by human beings, or even less, the level of certainty that your car will not suddenly veer off the road and kill you. The latter is something that actually happens—but we all drive every day assuming it won’t. By the time you marry someone, you can and should be that certain that they love you.

Love without evidence is bad love. The sort of unrequited love that builds in secret based upon fleeting glimpses, hours of obsessive fantasy, and little or no interaction with its subject isn’t romantic—it’s creepy and psychologically unhealthy. The extreme of that sort of love is what drove John Hinckley Jr. to shoot Ronald Reagan in order to impress Jodie Foster.

I don’t mean to make you feel guilty if you have experienced such a love—most of us have at one point or another—but it disgusts me how much our society tries to elevate that sort of love as the “true love” to which we should all aspire. We encourage people—particularly teenagers—to conceal their feelings for a long time and then release them in one grand surprise gesture of affection, which is just about the opposite of what you should actually be doing. (Look at Love Actually, which is just about the opposite of what its title says.) I think a great deal of strife in our society would be eliminated if we taught our children how to build relationships gradually over time instead of constantly presenting them with absurd caricatures of love that no one can—or should—follow.

I am pleased to see that our cultural norms on that point seem to be changing. A corporation as absurdly powerful as Disney is both an influence upon and a barometer of our social norms, and the trope in the most recent Disney films (like Frozen and Maleficent) is that true love is not the fiery passion of love at first sight, but the deep bond between family members that builds over time. This is a much healthier concept of love, though I wouldn’t exclude romantic love entirely. Romantic love can be true love, but only by building over time through a similar process.

Perhaps there is another reason people are uncomfortable with the idea that love is rational: by definition, rational behaviors respond to incentives. And since we tend to think of incentives in purely selfish terms, this would seem to imply that love is selfish, which seems somewhere between painfully cynical and outright oxymoronic.

But while love certainly does carry many benefits for its users—being in love will literally make you live longer, by quite a lot, an effect size comparable to quitting smoking or exercising twice a week—it also carries many benefits for its recipients as well. Love is in fact the primary means by which evolution has shaped us toward altruism; it is the love for our family and our tribe that makes us willing to sacrifice so much for them. Not all incentives are selfish; indeed, an incentive is really just something that motivates you to action. If you could truly convince me that a given action I took would have even a reasonable chance of ending world hunger, I would do almost anything to achieve it; I can scarcely imagine a greater incentive, even though I would be harmed and the benefits would accrue to people I have never met.

Love evolved because it advanced the fitness of our genes, of course. And this bothers many people; it seems to make our altruism ultimately just a different form of selfishness: selfishness for our genes instead of ourselves. But this is a genetic fallacy, isn’t it? Yes, evolution by natural selection is a violent process, full of death and cruelty and suffering (as Tennyson wrote, “red in tooth and claw”); but that doesn’t mean that its outcome—namely ourselves—is so irredeemable. We are, in fact, altruistic, regardless of where that altruism came from. The fact that it advanced our genes can actually be comforting in a way, because it reminds us that the universe is nonzero-sum and benefiting others does not have to mean harming ourselves.

One question I like to ask when people suggest that some scientific fact undermines our moral status in this way is: “Well, what would you prefer?” If the causal determinism of neural synapses undermines our free will, then what should we have been made of? Magical fairy dust? If we were, fairy dust would be a real phenomenon, and it would obey laws of nature, and you’d just say that the causal determinism of magical fairy dust undermines free will all over again. If the fact that our altruistic emotions evolved by natural selection to advance our inclusive fitness makes us not truly altruistic, then where should altruism have come from? A divine creator who made us to love one another? But then we’re just following our programming! You can always make this sort of argument, which either means that life is necessarily empty of meaning, that no possible universe could ever assuage our ennui—or, what I believe, that life’s meaning does not come from such ultimate causes. It is not what you are made of or where you come from that defines what you are. We are best defined by what we do.

It seems to depend how you look at it: Romantics are made of stardust and the fabric of the cosmos, while cynics are made of the nuclear waste expelled in the planet-destroying explosions of dying balls of fire. Romantics are the cousins of all living things in one grand family, while cynics are apex predators evolved from millions of years of rape and murder. Both of these views are in some sense correct—but I think the real mistake is in thinking that they are incompatible. Human beings are both those things, and more; we are capable of both great compassion and great cruelty—and also great indifference. It is a mistake to think that only the dark sides—or for that matter only the light sides—of us are truly real.

Love is rational; love responds to incentives; love is an evolutionary adaptation. Love binds us together; love makes us better; love leads us to sacrifice for one another.

Love is, above all, what makes us not infinite identical psychopaths.

Beware the false balance

JDN 2457046 PST 13:47.

I am now back in Long Beach, hence the return to Pacific Time. Today’s post is a little less economic than most, though it’s certainly still within the purview of social science and public policy. It concerns a question that many academic researchers and in general reasonable, thoughtful people have to deal with: How do we remain unbiased and nonpartisan?

This would not be so difficult if the world were as the most devoted “centrists” would have you believe, and it were actually the case that both sides have their good points and bad points, and both sides have their scandals, and both sides make mistakes or even lie, so you should never take the side of the Democrats or the Republicans but always present both views equally.

Sadly, this is not at all the world in which we live. While Democrats are far from perfect—they are human beings after all, not to mention politicians—Republicans have become completely detached from reality. As Stephen Colbert has said, “Reality has a liberal bias.” You know it’s bad when our detractors call us the reality-based community. Treating both sides as equal isn’t being unbiased—it’s committing a balance fallacy.

Don’t believe me? Here is a list of objective, scientific facts that the Republican Party (and particularly its craziest subset, the Tea Party) has officially taken political stances against:

  1. Global warming is a real problem, and largely caused by human activity. (The Republican majority in the Senate voted down a resolution acknowledging this.)
  2. Human beings share a common ancestor with chimpanzees. (48% of Republicans think that we were created in our present form.)
  3. Animals evolve over time due to natural selection. (Only 43% of Republicans believe this.)
  4. The Earth is approximately 4.5 billion years old. (Marco Rubio said he thinks maybe the Earth was made in seven days a few thousand years ago.)
  5. Hydraulic fracturing can trigger earthquakes. (Republicans in Congress are trying to nullify local regulations on fracking because they insist it is so safe we don’t even need to keep track.)
  6. Income inequality in the United States is the worst it has been in decades and continues to rise. (Mitt Romney said that the concern about income inequality is just “envy”.)
  7. Progressive taxation reduces inequality without adversely affecting economic growth. (Here’s a Republican former New York Senator saying that the President “should be ashamed” for raising taxes on—you guessed it—”job creators”.)
  8. Moderate increases in the minimum wage do not yield significant losses in employment. (Republicans consistently vote against even small increases in the minimum wage, and Democrats consistently vote in favor.)
  9. The United States government has no reason to ever default on its debt. (John Boehner, now Speaker of the House, once said that “America is broke” and if we don’t stop spending we’ll never be able to pay the national debt.)
  10. Human embryos are not in any way sentient, and fetuses are not sentient until at least 17 weeks of gestation, probably more like 30 weeks. (Yet if I am to read it in a way that would make moral sense, “Life begins at conception”—which several Republicans explicitly endorsed at the National Right to Life Convention—would have to imply that even zygotes are sentient beings. If you really just meant “alive”, then that would equally well apply to plants or even bacteria. Sentience is the morally relevant category.)

And that’s not even counting the Republican Party’s association with Christianity and all of the objectively wrong scientific claims that necessarily entails—like the existence of an afterlife and the intervention of supernatural forces. Most Democrats also self-identify as Christian, though rarely with quite the same fervor (the last major Democrat I can think of who was a devout Christian was Jimmy Carter), probably because most Americans self-identify as Christian and are hesitant to elect an atheist President (despite the fact that 93% of the members of the National Academy of Sciences are atheists and the higher your IQ the more likely you are to be an atheist; we wouldn’t want to elect someone who agrees with smart people, now would we?).

It’s true, there are some other crazy ideas out there with a left-wing slant, like the anti-vaccination movement that has wrought epidemic measles upon us, the anti-GMO crowd that rejects basic scientific facts about genetics, and the 9/11 “truth” movement that refuses to believe that Al Qaeda actually caused the attacks. There are in fact far-left Marxists out there who want to tear down the whole capitalist system by glorious revolution and replace it with… er… something (they’re never quite clear on that last point). But none of these things are the official positions of standing members of Congress.

The craziest belief by a standing Democrat I can think of is Dennis Kucinich’s belief that he saw an alien spacecraft. And to be perfectly honest, alien spacecraft are about a thousand times more plausible than Christianity in general, let alone Creationism. There almost certainly are alien spacecraft somewhere in the universe—just most likely so far away we’ll need FTL to encounter them. Moreover, this is not Kucinich’s official position as a member of Congress and it’s not something he has ever made policy based upon.

Indeed, if you’re willing to include the craziest individuals with no real political power who identify with a particular side of the political spectrum, then we should include on the right-wing side people like the Bundy militia in Nevada, neo-Nazis in Detroit, and the dozens of KKK chapters across the US. Not to mention this pastor who wants to murder all gay people in the world (because he truly believes what Leviticus 20:13 actually and clearly says).

If you get to include Marxists on the left, then we get to include Nazis on the right. Or, we could be reasonable and say that only the official positions of elected officials or mainstream pundits actually count, in which case Democrats have views that are basically accurate and reasonable while the majority of Republicans have views that are still completely objectively wrong.

There’s no balance here. For every Democrat who is wrong, there is a Republican who is totally delusional. For every Democrat who distorts the truth, there is a Republican who blatantly lies about basic facts. Not to mention that for every Democrat who has had an ill-advised illicit affair there is a Republican who has committed war crimes.

Actually war crimes are something a fair number of Democrats have done as well, but the difference still stands out in high relief: Barack Obama has ordered double-tap drone strikes that are in violation of the Geneva Convention, but George W. Bush orchestrated a worldwide mass torture campaign and launched pointless wars that slaughtered hundreds of thousands of people. Bill Clinton ordered some questionable CIA operations, but George H.W. Bush was the director of the CIA.

I wish we had two parties that were equally reasonable. I wish there were two—or three, or four—proposals on the table in each discussion, all of which had merits and flaws worth considering. Maybe if we somehow manage to get the Green Party a significant seat in power, or the Social Democrat party, we can actually achieve that goal. But that is not where we are right now. Right now, we have the Democrats, who have some good ideas and some bad ideas; and then we have the Republicans, who are completely out of their minds.

There is an important concept in political science called the Overton window; it is the range of political ideas that are considered “reasonable” or “mainstream” within a society. Things near the middle of the Overton window are considered sensible, even “nonpartisan” ideas, while things near the edges are “partisan” or “political”, and things near but outside the window are seen as “extreme” and “radical”. Things far outside the window are seen as “absurd” or even “unthinkable”.

Right now, our Overton window is in the wrong place. Things like Paul Ryan’s plan to privatize Social Security and Medicare are seen as reasonable when they should be considered extreme. Progressive income taxes of the kind we had in the 1960s are seen as extreme when they should be considered reasonable. Cutting WIC and SNAP with nothing to replace them and letting people literally starve to death are considered at most partisan, when they should be outright unthinkable. Opposition to basic scientific facts like climate change and evolution is considered a mainstream political position—when in terms of empirical evidence Creationism should be more intellectually embarrassing than being a 9/11 truther or thinking you saw an alien spacecraft. And perhaps worst of all, military tactics like double-tap strikes that are literally war crimes are considered “liberal”, while the “conservative” position involves torture, worldwide surveillance and carpet bombing—if not outright full-scale nuclear devastation.

I want to restore reasonable conversation to our political system, I really do. But that really isn’t possible when half the politicians are totally delusional. We have but one choice: We must vote them out.

I say this particularly to people who say “Why bother? Both parties are the same.” No, they are not the same. They are deeply, deeply different, for all the reasons I just outlined above. And if you can’t bring yourself to vote for a Democrat, at least vote for someone! A Green, or a Social Democrat, or even a Libertarian or a Socialist if you must. It is only by the apathy of reasonable people that this insanity can propagate in the first place.

The irrationality of racism

JDN 2457039 EST 12:07.

I thought about making today’s post about the crazy currency crisis in Switzerland, but currency exchange rates aren’t really my area of expertise; this is much more in Krugman’s bailiwick, so you should probably read what Krugman says about the situation. There is one thing I’d like to say, however: I think there is a really easy way to create credible inflation and boost aggregate demand, but for some reason nobody is ever willing to do it: Give people money. Emphasis here on the people—not banks. Don’t adjust interest rates or currency pegs, don’t engage in quantitative easing. Give people money. Actually write a bunch of checks, presumably in the form of refundable tax rebates.

The only reason I can think of that economists don’t do this is they are afraid of helping poor people. They wouldn’t put it that way; maybe they’d say they want to avoid “moral hazard” or “perverse incentives”. But those fears didn’t stop them from loaning $2 trillion to banks or adding $4 trillion to the monetary base; they didn’t stop them from fighting for continued financial deregulation when what the world economy most desperately needs is stronger financial regulation. Our whole derivatives market practically oozes moral hazard and perverse incentives, but they aren’t willing to shut down that quadrillion-dollar con game. So that can’t be the actual fear. No, it has to be a fear of helping poor people instead of rich people, as though “capitalism” meant a system in which we squeeze the poor as tight as we can and heap all possible advantages upon those who are already wealthy. No, that’s called feudalism. Capitalism is supposed to be a system where markets are structured to provide free and fair competition, with everyone on a level playing field.

A basic income is a fundamentally capitalist policy, which maintains equal opportunity with a minimum of government intervention and allows the market to flourish. I suppose if you want to say that all taxation and government spending is “socialist”, fine; then every nation that has ever maintained stability for more than a decade has been in this sense “socialist”. Every soldier, firefighter and police officer paid by a government payroll is now part of a “socialist” system. Okay, as long as we’re consistent about that; but now you really can’t say that socialism is harmful; on the contrary, on this definition socialism is necessary for capitalism. In order to maintain security of property, enforcement of contracts, and equality of opportunity, you need government. Maybe we should just give up on the words entirely, and speak more clearly about what specific policies we want. If I don’t get to say that a basic income is “capitalist”, you don’t get to say financial deregulation is “capitalist”. Better yet, how about you can’t even call it “deregulation”? You have to actually argue in front of a crowd of people that it should be legal for banks to lie to them, and there should be no serious repercussions for any bank that cheats, steals, colludes, or even launders money for terrorists. That is, after all, what financial deregulation actually does in the real world.

Okay, that’s enough about that.

My birthday is coming up this Monday; thus completes my 27th revolution around the Sun. With birthdays come thoughts of ancestry: Though I appear White, I am legally one-quarter Native American, and my total ethnic mix includes English, German, Irish, Mohawk, and Chippewa.

Biologically, what exactly does that mean? Next to nothing.

Human genetic diversity is a real thing, and there are genetic links to not only dozens of genetic diseases and propensity toward certain types of cancer, but also personality and intelligence. There are also of course genes for skin pigmentation.

The human population does exhibit some genetic clustering, but the categories are not what you’re probably used to: Good examples of relatively well-defined genetic clusters include Ashkenazi, Papuan, and Mbuti. There are also many different haplogroups, such as mitochondrial haplogroups L3 and CZ.

Maybe you could even make a case for the “races” East Asian, South Asian, Pacific Islander, and Native American, since the indigenous populations of these geographic areas largely do come from the same genetic clusters. Or you could make a bigger category and call them all “Asian”—but if you include Papuan and Aborigine in “Asian” you’d pretty much have to include Chippewa and Navajo as well.

But I think it tells you a lot about what “race” really means when you realize that the two “race” categories which are most salient to Americans are in fact the categories that are genetically most meaningless. “White” and “Black” are totally nonsensical genetic categorizations.

Let’s start with “Black”; defining a “Black” race is like defining a category of animals by the fact that they are all tinted red—foxes yes, dogs no; robins yes, swallows no; ladybirds yes, cockroaches no. There is more genetic diversity within Africa than there is outside of it. There are African populations that are more closely related to European populations than they are to other African populations. The only thing “Black” people have in common is that their skin is dark, which is due to convergent evolution: It’s not due to common ancestry, but a common environment. Dark skin has a direct survival benefit in climates with intense sunlight.  The similarity is literally skin deep.

What about “White”? Well, there are some fairly well-defined European genetic populations, so if we clustered those together we might be able to get something worth calling “White”. The problem is, that’s not how it happened. “White” is a club. The definition of who gets to be “White” has expanded over time, and even occasionally contracted. Originally Hebrew, Celtic, Hispanic, and Italian were not included (and Hebrew, for once, is actually a fairly sensible genetic category, as long as you restrict it to Ashkenazi), but then later they were. But now that we’ve got a lot of poor people coming in from Mexico, we don’t quite think of Hispanics as “White” anymore. We actually watched Arabs lose their “White” card in real-time in 2001; before 9/11, they were “White”; now, “Arab” is a separate thing. And “Muslim” is even treated like a race now, which is like making a racial category of “Keynesians”—never forget that Islam is above all a belief system.

Actually, “White privilege” is almost a tautology—the privilege isn’t given to people who were already defined as “White”, the privilege is to be called “White”. The privilege is to have your ancestors counted in the “White” category so that they can be given rights, while people who are not in the category are denied those rights. There does seem to be a certain degree of restriction by appearance—to my knowledge, no population with skin as dark as Kenyans has ever been considered “White”, and Anglo-Saxons and Nordics have always been included—but the category is flexible to political and social changes.

But really I hate that word “privilege”, because it gets the whole situation backwards. When you talk about “White privilege”, you make it sound as though the problem with racism is that it gives unfair advantages to White people (or to people arbitrarily defined as “White”). No, the problem is that people who are not White are denied rights. It isn’t what White people have that’s wrong; it’s what Black people don’t have. Equating those two things creates a vision of the world as zero-sum, in which each gain for me is a loss for you.

Here’s the thing about zero-sum games: All outcomes are Pareto-efficient. Remember when I talked about Pareto-efficiency? As a quick refresher, an outcome is Pareto-efficient if there is no way for one person to be made better off without making someone else worse off. In general, it’s pretty hard to disagree that, other things equal, Pareto-efficiency is a good thing, and Pareto-inefficiency is a bad thing. But if racism were about “White privilege” and the game were zero-sum, racism would have to be Pareto-efficient.
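
Here is a quick sketch (a toy example of my own) verifying that claim in the simplest zero-sum setting, splitting a fixed pie between two players:

```python
from itertools import product

TOTAL = 10
# All ways to split a fixed pie of 10 between two players (a zero-sum setting).
outcomes = [(a, TOTAL - a) for a in range(TOTAL + 1)]

def pareto_dominates(x, y):
    """x dominates y if everyone is at least as well off and someone is strictly better off."""
    return all(xi >= yi for xi, yi in zip(x, y)) and any(xi > yi for xi, yi in zip(x, y))

dominated = [y for x, y in product(outcomes, outcomes) if pareto_dominates(x, y)]
print(dominated)  # [] -- no split of a fixed pie is Pareto-dominated
```

Because the payoffs always sum to the same total, no split can improve one player’s share without reducing the other’s, so every outcome is trivially Pareto-efficient.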

In fact, racism is Pareto-inefficient, and that is part of why it is so obviously bad. It harms literally billions of people, and benefits basically no one. Maybe there are a few individuals who are actually, all things considered, better off than they would have been if racism had not existed. But there are certainly not very many such people, and in fact I’m not sure there are any at all. If there are any, it would mean that technically racism is not Pareto-inefficient—but it is definitely very close. At the very least, the damage caused by racism is several orders of magnitude larger than any benefits incurred.

That’s why the “privilege” language, while well-intentioned, is so insidious; it tells White people that racism means taking things away from them. Many of these people are already in dire straits—broke, unemployed, or even homeless—so taking away what they have sounds particularly awful. Of course they’d be hostile to or at least dubious of attempts to reduce racism. You just told them that racism is the only thing keeping them afloat! In fact, quite the opposite is the case: Poor White people are, second only to poor Black people, those who stand the most to gain from a more just society. David Koch and Donald Trump should be worried; we will probably have to take most of their money away in order to achieve social justice. (Bill Gates knows we’ll have to take most of his money away, but he’s okay with that; in fact he may end up giving it away before we get around to taking it.) But the average White person will almost certainly be better off than they were.

Why does it seem like there are benefits to racism? Again, because people are accustomed to thinking of the world as zero-sum. One person is denied a benefit, so that benefit must go somewhere else right? Nope—it can just disappear entirely, and in this case typically does.

When a Black person is denied a job in favor of a White person who is less qualified, doesn’t that White person benefit? Uh, no, actually, not really. They have been hired for a job that isn’t an optimal fit for them; they aren’t working to their comparative advantage, and that Black person isn’t either and may not be working at all. The total output of the economy will be thereby reduced slightly. When this happens millions of times, the total reduction in output can be quite substantial, and as a result that White person was hired at $30,000 for an unsuitable job when in a racism-free world they’d have been hired at $40,000 for a suitable one. A similar argument holds for sexism; men don’t benefit from getting jobs women are denied if one of those women would have invented a cure for prostate cancer.
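
A stylized sketch (with productivity numbers invented for illustration) shows the mechanism: assigning workers against their comparative advantage doesn’t transfer the pie, it shrinks it.

```python
# output[worker][job] = value produced per year, in thousands of dollars (made-up figures)
output = {
    "white_applicant": {"manager": 30, "technician": 45},
    "black_applicant": {"manager": 40, "technician": 25},
}

def total_output(assignment: dict) -> int:
    return sum(output[worker][job] for worker, job in assignment.items())

discriminatory = {"white_applicant": "manager", "black_applicant": "technician"}  # better-fitting job denied
efficient      = {"white_applicant": "technician", "black_applicant": "manager"}

print(total_output(discriminatory), total_output(efficient))  # 55 vs. 85
```

Total output falls from 85 to 55 under the discriminatory assignment; nobody ends up with a bigger share of anything, everyone just shares a smaller pie.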

Indeed, the empowerment of women and minorities is kind of the secret cheat code for creating a First World economy. The great successes of economic development—Korea, Japan, China, the US in WW2—came precisely when those countries suddenly started including women in manufacturing, effectively doubling their total labor capacity. Moreover, it’s pretty clear that the causation ran in this direction. Periods of economic growth are associated with increases in solidarity with other groups—and downturns with decreased solidarity—but the increase in women in the workforce was sudden and early while the increase in growth and total output was prolonged.

Racism is irrational. Indeed it is so obviously irrational that for decades now neoclassical economists have been insisting that there is no need for civil rights policy, affirmative action, etc. because the market will automatically eliminate racism by the rational profit motive. A more recent literature has attempted to show that, contrary to all appearances, racism actually is rational in some cases. Inevitably it relies upon either the background of a racist society (maybe Black people are, on average, genuinely less qualified, but it would only be because they’ve been given poorer opportunities), or an assumption of “discriminatory tastes”, which is basically giving up and redefining the utility function so that people simply get direct pleasure from being racists. Of course, on that sort of definition, you can basically justify any behavior as “rational”: Maybe he just enjoys banging his head against the wall! (A similar slipperiness is used by egoists to argue that caring for your children is actually “selfish”; well, it makes you happy, doesn’t it? Yes, but that’s not why we do it.)

There’s a much simpler way to understand this situation: Racism is irrational, and so is human behavior.

That isn’t a complete explanation, of course; and I think one major misunderstanding neoclassical economists have of cognitive economists is that they think this is what we do—we point out that something is irrational, and then high-five and go home. No, that’s not what we do. Finding the irrationality is just the start; next comes explaining the irrationality, understanding the irrationality, and finally—we haven’t reached this point in most cases—fixing the irrationality.

So what explains racism? In short, the tribal paradigm. Human beings evolved in an environment in which the most important factor in our survival and that of our offspring was not food supply or temperature or predators, it was tribal cohesion. With a cohesive tribe, we could find food, make clothes, fight off lions. Without one, we were helpless. Millions of years in this condition shaped our brains, programming them to treat threats to tribal cohesion as the greatest possible concern. We even reached the point where solidarity for the tribe actually began to dominate basic survival instincts: For a suicide bomber the unity of the tribe—be it Marxism for the Tamil Tigers or Islam for Al-Qaeda—is more important than his own life. We will do literally anything if we believe it is necessary to defend the identities we believe in.

And no, we rationalists are no exception here. We are indeed different from other groups; the beliefs that define us, unlike the beliefs of literally every other group that has ever existed, are actually rationally founded. The scientific method really isn’t just another religion, for unlike religion it actually works. But still, if push came to shove and we were forced to kill and die in order to defend rationality, we would; and maybe we’d even be right to do so. Maybe the French Revolution was, all things considered, a good thing—but it sure as hell wasn’t nonviolent.

This is the background we need to understand racism. It actually isn’t enough to show people that racism is harmful and irrational, because they are programmed not to care. As long as racial identification is the salient identity, the tribe by which we define ourselves, we will do anything to defend the cohesion of that tribe. It is not enough to show that racism is bad; we must in fact show that race doesn’t matter. Fortunately, this is easy, for as I explained above, race does not actually exist.

That makes racism in some sense easier to deal with than sexism, because the very categories of races upon which it is based are fundamentally faulty. Sexes, on the other hand, are definitely a real thing. Males and females actually are genetically different in important ways. Exactly how different in what ways is an open question, and what we do know is that for most of the really important traits like intelligence and personality the overlap outstrips the difference. (The really big, categorical differences all appear to be physical: Anatomy, size, testosterone.) But conquering sexism may always be a difficult balance, for there are certain differences we won’t be able to eliminate without altering DNA. That no more justifies sexism than the fact that height is partly genetic would justify denying rights to short people (which, actually, is something we do); but it does make matters complicated, because it’s difficult to know whether an observed difference (for instance, most pediatricians are female, while most neurosurgeons are male) is due to discrimination or innate differences.

Racism, on the other hand, is actually quite simple: Almost any statistically significant difference in behavior or outcome between races must be due to some form of discrimination somewhere down the line. Maybe it’s not discrimination right here, right now; maybe it’s discrimination years ago that denied opportunities, or discrimination against their ancestors that led them to inherit less generations later; but it almost has to be discrimination against someone somewhere, because it is only by social construction that races exist in the first place. I do say “almost” because I can think of a few exceptions: Black people are genuinely less likely to use tanning salons and genuinely more likely to need vitamin D supplements, but both of those things are directly due to skin pigmentation. They are also more likely to suffer from sickle-cell anemia, which is another convergent trait that evolved in tropical climates as a response to malaria. But unless you can think of a reason why employment outcomes would depend upon vitamin D, the huge difference in employment between Whites and Blacks really can’t be due to anything but discrimination.

I imagine most of my readers are more sophisticated than this, but just in case you’re wondering about the difference in IQ scores between Whites and Blacks, that is indeed a real observation, but IQ isn’t entirely genetic. The reason IQ scores are rising worldwide (the Flynn Effect) is improvements in environmental conditions: Fewer environmental pollutants—particularly lead and mercury, the removal of which is responsible for most of the reduction in crime in America over the last 20 years—better nutrition, better education, less stress. Being stupid does not make you poor (or how would we explain Donald Trump?), but being poor absolutely does make you stupid. Combine that with the challenges and inconsistencies in cross-national IQ comparisons, and it’s pretty clear that the higher IQ scores in rich nations are an effect, not a cause, of their affluence. Likewise, the lower IQ scores of Black people in the US are entirely explained by their poorer living conditions, with no need for any genetic hypothesis—which would also be very difficult in the first place precisely because “Black” is such a weird genetic category.

Unfortunately, I don’t yet know exactly what it takes to change people’s concept of group identification. Obviously it can be done, for group identities change all the time, sometimes quite rapidly; but we simply don’t have good research on what causes those changes or how they might be affected by policy. That’s actually a major part of the experiment I’ve been trying to get funding to run since 2009, which I hope can now become my PhD thesis. All I can say is this: I’m working on it.

Are humans rational?

JDN 2456928 PDT 11:21.

The central point of contention between cognitive economists and neoclassical economists hinges upon the word “rational”: Are humans rational? What do we mean by “rational”?

Neoclassicists are very keen to insist that they think humans are rational, and often characterize the cognitivist view as saying that humans are irrational. (Dan Ariely has a habit of feeding this view, titling books things like Predictably Irrational and The Upside of Irrationality.) But I really don’t think this is the right way to characterize the difference.

Daniel Kahneman has a somewhat better formulation (from Thinking, Fast and Slow): “I often cringe when my work is credited as demonstrating that human choices are irrational, when in fact our research only shows that Humans are not well described by the rational-agent model.” (Yes, he capitalizes the word “Humans” throughout, which is annoying; but in general it is a great book.)

The problem is that saying “humans are irrational” has the connotation of a universal statement; it seems to be saying that everything we do, all the time, is always and everywhere utterly irrational. And this of course could hardly be further from the truth; we would not have even survived in the savannah, let alone invented the Internet, if we were that irrational. If we simply lurched about randomly without any concept of goals or response to information in the environment, we would have starved to death millions of years ago.

But at the same time, the neoclassical definition of “rational” obviously does not describe human beings. We aren’t infinite identical psychopaths. Particularly bizarre (and frustrating) is the continued insistence that rationality entails selfishness; apparently economists are getting all their philosophy from Ayn Rand (who barely even qualifies as such), rather than the greats such as Immanuel Kant and John Stuart Mill or even the best contemporary philosophers such as Thomas Pogge and John Rawls. All of these latter would be baffled by the notion that selfless compassion is irrational.

Indeed, Kant argued that rationality implies altruism, that a truly coherent worldview requires assent to universal principles that are morally binding on yourself and every other rational being in the universe. (I am not entirely sure he is correct on this point, and in any case it is clear to me that neither you nor I are anywhere near advanced enough beings to seriously attempt such a worldview. Where neoclassicists envision infinite identical psychopaths, Kant envisions infinite identical altruists. In reality we are finite diverse tribalists.)

But even if you drop selfishness, the requirements of perfect information and expected utility maximization are still far too strong to apply to real human beings. If that’s your standard for rationality, then indeed humans—like all beings in the real world—are irrational.

The confusion, I think, comes from the huge gap between ideal rationality and total irrationality. Our behavior is neither perfectly optimal nor hopelessly random, but somewhere in between.

In fact, we are much closer to the side of perfect rationality! Our brains are limited, so they operate according to heuristics: simplified, approximate rules that are correct most of the time. Clever experiments—or complex environments very different from how we evolved—can cause those heuristics to fail, but we must not forget that the reason we have them is that they work extremely well in most cases in the environment in which we evolved. We are about 90% rational—but woe betide that other 10%.

The most obvious example is phobias: Why are people all over the world afraid of snakes, spiders, falling, and drowning? Because those used to be leading causes of death. In the African savannah 200,000 years ago, you weren’t going to be hit by a car, shot with a rifle bullet or poisoned by carbon monoxide. (You’d probably die of malaria, actually; for that one, instead of evolving to be afraid of mosquitoes we evolved a biological defense mechanism—sickle-cell red blood cells.) Death in general was actually much more likely then, particularly for children.

A similar case can be made for other heuristics we use: We are tribal because the proper functioning of our 100-person tribe used to be the most important factor in our survival. We are racist because people physically different from us were usually part of rival tribes and hence potential enemies. We hoard resources even when our technology allows abundance, because a million years ago no such abundance was possible and every meal might be our last.

When asked how common something is, we don’t calculate a posterior probability based upon Bayesian inference—that’s hard. Instead we try to think of examples—that’s easy. That’s the availability heuristic. And if we didn’t have mass media constantly giving us examples of rare events we wouldn’t otherwise have known about, the availability heuristic would actually be quite accurate. Right now, people think of terrorism as common (even though it’s astoundingly rare) because it’s always all over the news; but if you imagine living in an ancient tribe—or even a medieval village!—anything you heard about that often would almost certainly be something actually worth worrying about. Our level of panic over Ebola is totally disproportionate; but in the 14th century that same level of panic about the Black Death would be entirely justified.

When we want to know whether something is a member of a category, again we don’t try to calculate the actual probability; instead we think about how well it seems to fit a model we have of the paradigmatic example of that category—the representativeness heuristic. You see a Black man on a street corner in New York City at night; how likely is it that he will mug you? Pretty small actually, because there were less than 200,000 crimes in all of New York City last year in a city of 8,000,000 people—meaning the probability any given person committed a crime in the previous year was only 2.5%; the probability on any given day would then be less than 0.01%. Maybe having those attributes raises the probability somewhat, but you can still be about 99% sure that this guy isn’t going to mug you tonight. But since he seemed representative of the category in your mind “criminals”, your mind didn’t bother asking how many criminals there are in the first place—an effect called base rate neglect. Even 200 years ago—let alone 1 million—you didn’t have these sorts of reliable statistics, so what else would you use? You basically had no choice but to assess based upon representative traits.
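
Here is that arithmetic spelled out, using the rough figures quoted above:

```python
crimes_per_year = 200_000
population = 8_000_000

p_year = crimes_per_year / population   # crude upper bound: one offender per crime, as in the text
p_day = p_year / 365                    # crude per-day rate, assuming crimes are spread evenly
print(f"P(committed a crime last year) = {p_year:.1%}")   # 2.5%
print(f"P(commits a crime tonight)     < {p_day:.4%}")    # about 0.007%, well under 0.01%
```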

As you probably know, people have trouble dealing with big numbers, and this is a problem in our modern economy where we actually need to keep track of millions or billions or even trillions of dollars moving around. And really I shouldn’t say it that way, because $1 million ($1,000,000) is an amount of money an upper-middle class person could have in a retirement fund, while $1 billion ($1,000,000,000) would put you among the 1,000 richest people in the world, and $1 trillion ($1,000,000,000,000) is enough to end world hunger for at least the next 15 years (it would only take about $1.5 trillion to do it forever, by paying only the interest on the endowment). It’s important to keep this in mind, because otherwise the natural tendency of the human mind is to say “big number” and ignore these enormous differences—it’s called scope neglect. But how often do you really deal with numbers that big? In ancient times, never. Even in the 21st century, not very often. You’ll probably never have $1 billion, and even $1 million is a stretch—so it seems a bit odd to say that you’re irrational if you can’t tell the difference. I guess technically you are, but it’s an error that is unlikely to come up in your daily life.
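
For what it’s worth, here is the back-of-the-envelope arithmetic behind that parenthetical; the annual cost is just the $1 trillion figure spread over 15 years (my simplification), and the required rate of return falls out of the other two numbers:

```python
annual_cost = 1e12 / 15   # ~$67 billion per year to end world hunger, implied by the 15-year figure
endowment = 1.5e12        # $1.5 trillion principal
required_return = annual_cost / endowment
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")
print(f"Return needed to fund it from interest alone: {required_return:.1%}")  # about 4.4%
```

Whether a return of that size is sustainable forever is its own debate, but the orders of magnitude are the point.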

Where it does come up, of course, is when we’re talking about national or global economic policy. Voters in the United States today have a level of power that for 99.99% of human existence no ordinary person has had. 2 million years ago you may have had a vote in your tribe, but your tribe was only 100 people. 2,000 years ago you may have had a vote in your village, but your village was only 1,000 people. Now you have a vote on the policies of a nation of 300 million people, and more than that really: As goes America, so goes the world. Our economic, cultural, and military hegemony is so total that decisions made by the United States reverberate through the entire human population. We have choices to make about war, trade, and ecology on a far larger scale than our ancestors could have imagined. As a result, the heuristics that served us well millennia ago are now beginning to cause serious problems.

[As an aside: This is why the “Downs Paradox” is so silly. If you’re calculating the marginal utility of your vote purely in terms of its effect on you—you are a psychopath—then yes, it would be irrational for you to vote. And really, by all means: psychopaths, feel free not to vote. But the effect of your vote is much larger than that; in a nation of N people, the decision will potentially affect N people. Your vote contributes 1/N to a decision that affects N people, making the marginal utility of your vote equal to N*1/N = 1. It’s constant. It doesn’t matter how big the nation is, the value of your vote will be exactly the same. The fact that your vote has a small impact on the decision is exactly balanced by the fact that the decision, once made, will have such a large effect on the world. Indeed, since larger nations also influence other nations, the marginal effect of your vote is probably larger in large elections, which means that people are being entirely rational when they go to greater lengths to elect the President of the United States (58% turnout) rather than the Wayne County Commission (18% turnout).]
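
The arithmetic in that aside is easy to check directly; here is a sketch with a placeholder value for the per-person stakes:

```python
stakes_per_person = 1.0   # utility at stake for each person affected (placeholder value)

for n in [100, 1_000, 300_000_000]:
    vote_weight = 1 / n             # your share of the decision
    people_affected = n             # everyone the policy touches
    marginal_impact = vote_weight * people_affected * stakes_per_person
    print(f"N = {n:>11,}: marginal impact of your vote = {marginal_impact:.6f}")  # 1.000000 for every N
```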

So that’s the problem. That’s why we have economic crises, why climate change is getting so bad, why we haven’t ended world hunger. It’s not that we’re complete idiots bumbling around with no idea what we’re doing. We simply aren’t optimized for the new environment that has been recently thrust upon us. We are forced to deal with complex problems unlike anything our brains evolved to handle. The truly amazing part is actually that we can solve these problems at all; most lifeforms on Earth simply aren’t mentally flexible enough to do that. Humans found a really neat trick (actually in a formal evolutionary sense a “Good Trick”, which we know because it also evolved in cephalopods): Our brains have high plasticity, meaning they are capable of adapting themselves to their environment in real-time. Unfortunately this process is difficult and costly; it’s much easier to fall back on our old heuristics. We ask ourselves: Why spend 10 times the effort to make it work 99% of the time when you can make it work 90% of the time so much easier?

Why? Because it’s so incredibly important that we get these things right.