Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?
This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution. And yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
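
To make the comparison concrete, here is a minimal sketch of that calculation in Python. All of the numbers are made up for illustration (a $2 ticket, a $10,000 prize at 1-in-100,000 odds, $30,000 of current wealth), and logarithmic utility is only a common stand-in for diminishing marginal utility of wealth, not a measured quantity.

```python
import math

def utility(wealth):
    # Hypothetical utility of wealth: logarithmic, a standard stand-in
    # for diminishing marginal utility (an assumption, not a measurement).
    return math.log(wealth)

wealth = 30_000       # current wealth (made up)
ticket_price = 2      # price of one ticket (made up)
prize = 10_000        # prize if you win (made up)
p_win = 1 / 100_000   # clearly spelled-out probability of winning (made up)

# Expected utility if you buy the ticket:
eu_buy = (p_win * utility(wealth - ticket_price + prize)
          + (1 - p_win) * utility(wealth - ticket_price))

# Utility if you just keep your money:
eu_skip = utility(wealth)

print(f"buy:  {eu_buy:.7f}")
print(f"skip: {eu_skip:.7f}")
print("worth buying?", eu_buy > eu_skip)   # False with these numbers
```

With these made-up numbers the ticket loses on expected utility, as it will for any risk-averse utility function whenever the expected payout is below the ticket price; the point is just that the whole computation fits in a dozen lines.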

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Chimpanzee brains look like what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit; human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply it by a probability, because that adds all sorts of extra computation and you have no idea what probability to assign anyway. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: You simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e. the availability heuristic. If you can remember a lot of examples of something you had filed under “almost never”, maybe you should move it to “unlikely” instead. If you accumulate a really large number of examples, you might even want to move it all the way to “likely”.

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
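
Here is a toy sketch in Python of what such a heuristic might look like. To be clear, this is my own illustration, not the formal model of categorical prospect theory: the category names, the cutoffs, the idea of counting remembered instances, and the “importance” weights are all placeholders chosen for clarity.

```python
def categorize(remembered_occurrences, remembered_opportunities):
    # Assign an event to a coarse frequency category based on how often you
    # can remember it happening (availability), not a computed probability.
    if remembered_opportunities == 0 or remembered_occurrences == 0:
        return "never"        # act as if it won't happen; don't worry about it
    rate = remembered_occurrences / remembered_opportunities
    if rate > 0.95:
        return "always"       # act as if it will happen
    elif rate > 0.25:
        return "likely"       # look for signs and prepare
    else:
        return "unlikely"     # stay vigilant; respond to any sign, just in case

def importance(category, cost_if_unprepared):
    # Collapse "probability" and "utility" into a single effort score.
    # The weights are arbitrary placeholders, one per category.
    weights = {"always": 1.0, "likely": 0.5, "unlikely": 0.05, "never": 0.0}
    return weights[category] * cost_if_unprepared

# A rare catastrophe and a common nuisance can come out equally "important":
print(importance(categorize(1, 400), cost_if_unprepared=1000))   # lion: 50.0
print(importance(categorize(150, 400), cost_if_unprepared=100))  # rain: 50.0
```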

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off: more categories give you more precision in tailoring your optimal behavior, but cost more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely which frequencies each category should cover.

How do we reach people with ridiculous beliefs?

Oct 16, JDN 2457678

One of the most unfortunate facts in the world—indeed, perhaps the most unfortunate fact, from which most other unfortunate facts follow—is that it is quite possible for a human brain to sincerely and deeply hold a belief that is, by any objective measure, totally and utterly ridiculous.

And to be clear, I don’t just mean false; I mean ridiculous. People having false beliefs is an inherent part of being finite beings in a vast and incomprehensible universe. Monetarists are wrong, but they are not ludicrous. String theorists are wrong, but they are not absurd. Multiregionalism is wrong, but it is not nonsensical. Indeed, I, like anyone else, am probably wrong about a great many things, though of course if I knew which ones I’d change my mind. (Indeed, I admit a small but nontrivial probability of being wrong about the three things I just listed.)

I mean ridiculous beliefs. I mean that any rational, objective assessment of the probability of that belief being true would be vanishingly small, 1 in 1 million at best. I’m talking about totally nonsensical beliefs, beliefs that go against overwhelming evidence; some of them are outright incoherent. Yet millions of people go on believing them.

For example, over 40% of Americans believe that human beings were created by God in their present form less than 10,000 years ago, and typically offer no evidence for this besides “The Bible says so.” (Strictly speaking, even that isn’t true—standard interpretations of the Bible say so. The Bible itself contains no clearly stated date for creation.) This despite the absolutely overwhelming body of evidence supporting the theory of evolution by Darwinian natural selection.

Over a third of Americans don’t believe in global warming, which is not only a complete consensus among all credible climate scientists based on overwhelming evidence, but one of the central threats facing human civilization over the 21st century. On a global scale this is rather like standing on a train track and saying you don’t believe in trains. (Or like the time my mother once told me about, when an alert went out to her office that there was a sniper in the area, indiscriminately shooting at civilians, and one of her co-workers refused to join the security protocol and declared smugly, “I don’t believe in snipers.” Fortunately, he was unharmed in the incident. This time.)

1/4 of Americans believe in astrology, and 1/4 of Americans believe that aliens have visited the Earth. (Not sure if it’s the same 1/4. Probably considerable but not total overlap.) The existence of extraterrestrial civilizations somewhere in this mind-bogglingly (perhaps infinitely) vast universe has probability 1. But visiting us is quite another matter, and there is absolutely no credible evidence of it. As for astrology? I shouldn’t have to explain why the position of Jupiter, much less Sirius, on your birthday is not a major influence on your behavior or life outcomes. Your obstetrician exerted more gravitational force on you than Jupiter did at the moment you were born.

The majority of Americans believe in telepathy or extrasensory perception. I confess that I actually did when I was very young, though I think I disabused myself of this around the time I stopped believing in Santa Claus.

I love the term “extrasensory perception” because it is such an oxymoron; if you’re perceiving, it is via senses. “Sixth sense” is better, except that we actually already have at least nine senses: The ones you probably know, vision (sight), audition (hearing), olfaction (smell), gustation (taste), and tactition (touch)—and the ones you may not know, thermoception (heat), proprioception (body position), vestibulation (balance), and nociception (pain). These can probably be subdivided further—vision and spatial reasoning are dissociated in blind people, heat and cold are separate nerve pathways, pain and itching are distinct systems, and there are a variety of different sensors used for proprioception. So we really could have as many as twenty senses, depending on how you’re counting.

What about telepathy? Well, that is not actually impossible in principle; it’s just that there’s no evidence that humans actually do it. Smartphones do it almost literally constantly, transmitting data via high-frequency radio waves back and forth to one another. We could have evolved some sort of radio transceiver organ (perhaps an offshoot of an electric defense organ such as that of electric eels), but as it turns out we didn’t. Actually in some sense—which some might say is trivial, but I think it’s actually quite deep—we do have telepathy; it’s just that we transmit our thoughts not via radio waves or anything more exotic, but via sound waves (speech) and marks on paper (writing) and electronic images (what you’re reading right now). Human beings really do transmit our thoughts to one another, and this truly is a marvelous thing we should not simply take for granted (it is one of our most impressive feats of Mundane Magic); but somehow I don’t think that’s what people mean when they say they believe in psychic telepathy.

And lest you think this is a uniquely American phenomenon: The particular beliefs vary from place to place, but bizarre beliefs abound worldwide, from conspiracy theories in the UK to 9/11 “truthers” in Canada to HIV denialism in South Africa (fortunately on the wane). The American examples are more familiar to me and most of my readers are Americans, but wherever you are reading from, there are probably ridiculous beliefs common there.

I could go on, listing more objectively ridiculous beliefs that are surprisingly common; but the more I do that, the more I risk alienating you, in case you should happen to believe one of them. When you add up the dizzying array of ridiculous beliefs one could hold, odds are that most people you’d ever meet will have at least one of them. (“Not me!” you’re thinking; and perhaps you’re right. Then again, I’m pretty sure that the 4% or so of people who believe in the Reptilians think the same thing.)

Which brings me to my real focus: How do we reach these people?

One possible approach would be to just ignore them, leave them alone, or go about our business with them as though they did not have ridiculous beliefs. This is in fact the right thing to do under most circumstances, I think; when a stranger on the bus starts blathering about how the lizard people are going to soon reveal themselves and establish the new world order, I don’t think it’s really your responsibility to persuade that person to realign their beliefs with reality. Nodding along quietly would be acceptable, and it would be above and beyond the call of duty to simply say, “Um, no… I’m fairly sure that isn’t true.”

But this cannot always be the answer, if for no other reason than the fact that we live in a democracy, and people with ridiculous beliefs frequently vote according to them. Then people with ridiculous beliefs can take office, and make laws that affect our lives. Actually this would be true even if we had some other system of government; there’s nothing in particular to stop monarchs, hereditary senates, or dictators from believing ridiculous things. If anything, the opposite; dictators are known for their eccentricity precisely because there are no checks on their behavior.

At some point, we’re going to need to confront the fact that over half of the Republicans in the US Congress do not believe in climate change, and are making policy accordingly, rolling drunk on petroleum and treating the hangover with the hair of the dog.

We’re going to have to confront the fact that school boards in Southern states, particularly Texas, continually vote to censor their dreaded Darwinian evolution out of biology textbooks.

So we really do need to find a way to talk to people who have ridiculous beliefs, and engage with them, understand why they think the way they do, and then—hopefully at least—tilt them a little bit back toward rational reality. You will not be able to change their mind completely right away, but if each of us can at least chip away at their edifice of absurdity, then all together perhaps we can eventually bring them to enlightenment.

Of course, a good start is probably not to say you think that their beliefs are ridiculous, because people get very defensive when you do that, even—perhaps especially—when it’s true. People invest their identity in beliefs, and decide what beliefs to profess based on the group identities they value most.

This is the link that we must somehow break. We must show people that they are not defined by their beliefs, that it is okay to change your mind. We must be patient and compassionate—sometimes heroically so, as people spout offensive nonsense in our faces, sometimes offensive nonsense that directly attacks us personally. (“Atheists deserve Hell”, taken literally, would constitute something like a death threat except infinitely worse. While to them it very likely is just reciting a slogan, to the atheist listening it says that you believe that they are so evil, so horrible that they deserve eternal torture for believing what they do. And you get mad when we say your beliefs are ridiculous?)

We must also remind people that even very smart people can believe very dumb things—indeed, I’d venture a guess that most dumb things are in fact believed by smart people. Even the most intelligent human beings can only glimpse a tiny fraction of the universe, and all human brains are subject to the same fundamental limitations, the same core heuristics and biases. Make it clear that you’re saying you think their beliefs are false, not that they are stupid or crazy. And indeed, make it clear to yourself that this is indeed what you believe, because it ought to be. It can be tempting to think that only an idiot would believe something so ridiculous—and you are safe, for you are no idiot!—but the truth is far more humbling: Human brains are subject to many flaws, and guarding the fortress of the mind against error and deceit is a 24-7 occupation. Indeed, I hope that you will ask yourself: “What beliefs do I hold that other people might find ridiculous? Are they, in fact, ridiculous?”

Even then, it won’t be easy. Most people are strongly resistant to any change in belief, however small, and it is in the nature of ridiculous beliefs that they require radical changes in order to restore correspondence with reality. So we must try in smaller steps.

Maybe don’t try to convince them that 9/11 was actually the work of Osama bin Laden; start by pointing out that yes, steel does bend much more easily at the temperature at which jet fuel burns. Maybe don’t try to persuade them that astrology is meaningless; start by pointing out the ways that their horoscope doesn’t actually seem to fit them, or could be made to fit anybody. Maybe don’t try to get across the real urgency of climate change just yet, and instead point out that the “study” they read showing it was a hoax was clearly funded by oil companies, who would perhaps have a vested interest here. And as for ESP? I think it’s a good start just to point out that we have more than five senses already, and there are many wonders of the human brain that actual scientists know about well worth exploring—so who needs to speculate about things that have no scientific evidence?

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and is therefore worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. What privilege gives you is “the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism, you know; you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also quite frequently apply some sense of moral responsibility to whole races. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t, that is literally feudalist, but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me,” (Exodus 20:5)

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.
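
The arithmetic behind “literally everyone is related” is just exponential growth run backward: the number of ancestor slots in your family tree doubles every generation, so within a few dozen generations it exceeds the entire population that was then alive, forcing everyone’s family trees to overlap. A quick sketch, where the 25-year generation length and the historical population figure are rough assumptions of mine:

```python
# How many generations back until your nominal ancestor count exceeds the
# entire human population alive at the time? (rough assumptions throughout)
world_population_then = 300_000_000   # very rough figure for antiquity
years_per_generation = 25             # rough assumption

generations = 0
while 2 ** generations <= world_population_then:
    generations += 1

print(generations, "generations, roughly", generations * years_per_generation, "years")
# 29 generations, roughly 725 years: long before 4,000 years ago, the slots in
# everyone's family tree vastly outnumber the people available to fill them.
```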

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” lumps together the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations; but all hope is not lost: we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases, found in people and social structures almost everywhere, that systematically discriminates against people because they are women.

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade so small that they are negligible.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually make their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Quite apart from its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mailroom has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Simply optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, we should assign jobs solely based on marginal productivity; there should be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you end up doing, because that’s what you’ll be paid the most to do. Actually, for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)
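
As a toy illustration of what assigning jobs purely by productivity would even mean, here is a brute-force sketch in Python: given a made-up table of how productive each person would be at each job, pick the assignment that maximizes total output. The names and numbers are invented, and a three-person economy sidesteps exactly the cost, reliability, and speed problems just described.

```python
from itertools import permutations

# Made-up productivity of each person at each job (arbitrary units).
people = ["Avery", "Blake", "Casey"]
jobs = ["mailroom", "research", "management"]
productivity = {
    "Avery": {"mailroom": 5, "research": 9, "management": 4},
    "Blake": {"mailroom": 7, "research": 3, "management": 8},
    "Casey": {"mailroom": 6, "research": 5, "management": 9},
}

def total_output(order):
    # Total output if person i is assigned to the i-th job in this ordering.
    return sum(productivity[p][j] for p, j in zip(people, order))

# Brute force over all assignments (fine for 3 people; hopeless for an economy).
best = max(permutations(jobs), key=total_output)
print(dict(zip(people, best)), "-> total output:", total_output(best))
```

Note that the output-maximizing assignment puts Blake in the mailroom even though Blake is absolutely better at management, because Casey’s comparative advantage in management is stronger; none of this has anything to do with climbing a ladder.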

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to own a major financial firm—and I’m a lot closer than most people. I get to tick most of the boxes you need to be in that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors that people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t see them as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the task depends upon efficient coordination, as well as how skilled other people are at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder.” Your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, as well as for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

Why is our diet so unhealthy?

JDN 2457447

One of the most baffling facts about the world, particularly to a development economist, is that the leading causes of death around the world broadly cluster into two categories: Obesity, in First World countries, and starvation, in Third World countries. At first glance, it seems like the rich are eating too much and there isn’t enough left for the poor.

Yet in fact it’s not quite so simple as that, because in fact obesity is most common among the poor in First World countries, and in Third World countries obesity rates are rising rapidly and co-existing with starvation. It is becoming recognized that there are many different kinds of obesity, and that a past history of starvation is actually a major risk factor in future obesity.

Indeed, the really fundamental problem is malnutrition—people are not necessarily eating too much or too little, they are eating the wrong things. So, my question is: Why?

It is widely thought that foods which are nutritious are also unappetizing, and conversely that foods which are delicious are unhealthy. There is a clear kernel of truth here, as a comparison of Brussels sprouts versus ice cream will surely indicate. But this is actually somewhat baffling. We are an evolved organism; one would think that natural selection would shape us so that we enjoy foods which are good for us and avoid foods which are bad for us.

I think it did, actually; the problem is, we have changed our situation so drastically by means of culture and technology that evolution hasn’t had time to catch up. We have evolved significantly since the dawn of civilization, but we haven’t had any time to evolve since one event in particular: The Green Revolution. Indeed, many people are still alive today who were born while the Green Revolution was still underway.

The Green Revolution is the culmination of a long process of development in agriculture and industrialization, but it would be difficult to overstate its importance as an epoch in the history of our species. We now have essentially unlimited food.

Not literally unlimited, of course; we do still need land, and water, and perhaps most notably energy (oil-driven machines are a vital part of modern agriculture). But we can produce vastly more food than was previously possible, and food supply is no longer a binding constraint on human population. Indeed, we already produce enough food to feed 10 billion people. People who say that some new agricultural technology will end world hunger don’t understand what world hunger actually is. Food production is not the problem—distribution of wealth is the problem.

I often speak about the possibility of reaching post-scarcity in the future; but we have essentially already done so in the domain of food production. If everyone ate what would be optimally healthy, and we distributed food evenly across the world, there would be plenty of food to go around and no such thing as obesity or starvation.

So why hasn’t this happened? Well, the main reason, like I said, is distribution of wealth.

But that doesn’t explain why so many people who do have access to good foods nonetheless don’t eat them.

The first thing to note is that healthy food is more expensive. It isn’t a huge difference by First World standards—about $550 per year extra per person. But when we compare the cost of a typical nutritious diet to that of a typical diet, the nutritious diet is significantly more expensive. Worse yet, this gap appears to be growing over time.

But why is this the case? It’s actually quite baffling on its face. Nutritious foods are typically fruits and vegetables that one can simply pluck off plants. Unhealthy foods are typically complex processed foods that require machines and advanced technology. There should be “value added”, at least in the economic sense; additional labor must go in, additional profits must come out. Why is it cheaper?

In a word? Subsidies.

Somehow, huge agribusinesses have convinced governments around the world that they deserve to be paid extra money, either simply for existing or based on how much they produce. When I say “somehow”, I of course mean lobbying.

In the US, these subsidies overwhelmingly go toward corn, followed by cotton, followed by soybeans.

In fact, they don’t actually even go to corn as you would normally think of it, like sweet corn or corn on the cob. No, they go to feed corn—really awful stuff that includes the entire plant, is barely even recognizable as corn, and has its “quality” literally rated by scales and sieves. No living organism was ever meant to eat this stuff.

Humans don’t, of course. Cows do. But they didn’t evolve for this stuff either; they can’t digest it properly, and it’s because of this terrible food we force-feed them that they need so many antibiotics.

Thus, these corn subsidies are really primarily beef subsidies—they are a means of externalizing the cost of beef production and keeping the price of hamburgers artificially low. In all, 2/3 of US agricultural subsidies ultimately go to meat production. I haven’t been able to find any really good estimates, but as a ballpark figure it seems that meat would cost about twice as much if we didn’t subsidize it.

Fortunately a lot of these subsidies have been decreased under the Obama administration, particularly “direct payments” which are sort of like a basic income, but for agribusinesses. (That is not what basic incomes are for.) You can see the decline in US corn subsidies here.

Despite all this, however, subsidies cannot explain obesity. Removing them would have only a small effect.

An often overlooked consideration is that nutritious food can be more expensive for a family even if the actual pricetag is the same.

Why? Because kids won’t eat it.

To raise kids on a nutritious diet, you have to feed them small amounts of good food over a long period of time, until they acquire the taste. In order to do this, you need to be prepared to waste a lot of food, and that costs money. It’s cheaper to simply feed them something unhealthy, like ice cream or hot dogs, that you know they’ll eat.

And this brings me to what I think is the real ultimate cause of our awful diet: We evolved for a world of starvation, and our bodies cannot cope with abundance.

It’s important to be clear about what we mean by “unhealthy food”; people don’t enjoy consuming lead and arsenic. Rather, we enjoy consuming fat and sugar. Contrary to what fad diets will tell you, fat and sugar are not inherently bad for human health; indeed, we need a certain amount of fat and sugar in order to survive. What we call “unhealthy food” is actually food that we desperately need—in small quantities.

Under the conditions in which we evolved, fat and sugar were extremely scarce. Eating fat meant hunting a large animal, which required the cooperation of the whole tribe (a quite literal Stag Hunt) and carried risk of life and limb, not to mention simply failing and getting nothing. Eating sugar meant finding fruit trees and gathering fruit from them—and fruit trees are not all that common in nature. These foods also spoil quite quickly, so you eat them right away or not at all.

As such, we evolved to really crave these things, to ensure that we would eat them whenever they are available. Since they weren’t available all that often, this was just about right to ensure that we managed to eat enough, and rarely meant that we ate too much.

 

But now fast-forward to the Green Revolution. They aren’t scarce anymore. They’re everywhere. There are whole buildings we can go to with shelves upon shelves of them, which we ourselves can claim simply by swiping a little plastic card through a reader. We don’t even need to understand how that system of encrypted data networks operates, or what exactly is involved in maintaining our money supply (and most people clearly don’t); all we need to do is perform the right ritual and we will receive an essentially unlimited abundance of fat and sugar.

Even worse, this food is in processed form, so we can extract the parts that make it taste good, while separating them from the parts that actually make it nutritious. If fruits were our main source of sugar, that would be fine. But instead we get it from corn syrup and sugarcane, and even when we do get it from fruit, we extract the sugar instead of eating the whole fruit.

Natural selection had no particular reason to give us that level of discrimination; since eating apples and oranges was good for us, we evolved to like the taste of apples and oranges. There wasn’t a sufficient selection pressure to make us actually eat the whole fruit as opposed to extracting the sugar, because extracting the sugar was not an option available to our ancestors. But it is available to us now.

Vegetables, on the other hand, are also more abundant now, but were already fairly abundant. Indeed, it may be significant that we’ve had enough time to evolve since agriculture, but not enough time since fertilizer. Agriculture allowed us to make plenty of wheat and carrots; but it wasn’t until fertilizer that we could make enough hamburgers for people to eat them regularly. It could be that our hunter-gatherer ancestors actually did crave carrots in much the same way they and we crave sugar; but since agriculture we have no further reason to do so because carrots have always been widely available.

One thing I do still find a bit baffling: Why are so many green vegetables so bitter? It would be one thing if they simply weren’t as appealing as fat and sugar; but it honestly seems like a lot of green vegetables, such as broccoli, spinach, and Brussels sprouts, are really quite actively aversive, at least until you acquire the taste for them. Given how nutritious they are, it seems like there should have been a selective pressure in favor of liking the taste of green vegetables; but there wasn’t. I wonder if it’s actually coevolution—if perhaps broccoli has been evolving to not be eaten as quickly as we were evolving to eat it. This wouldn’t happen with apples and oranges, because in an evolutionary sense apples and oranges “want” to be eaten; they spread their seeds in the droppings of animals. But for any given stalk of broccoli, becoming lunch is definitely bad news.

Yet even this is pretty weird, because broccoli has definitely evolved substantially since agriculture—indeed, broccoli as we know it would not exist otherwise. Ancestral Brassica oleracea was bred to become cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, savoy, kohlrabi and kai-lan—and looks like none of them.

It looks like I still haven’t solved the mystery. In short, we get fat because kids hate broccoli; but why in the world do kids hate broccoli?

The power of exponential growth

JDN 2457390

There’s a famous riddle: If the water in a lakebed doubles in volume every day, and the lakebed started filling on January 1, and is half full on June 17, when will it be full?

The answer is of course June 18—if it doubles every day, it will go from half full to full in a single day.

But most people assume that half the work takes about half the time, so they usually give answers in December. Others try to correct, but don’t go far enough, and say something like October.
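Just to see how extreme the mismatch is, here is a quick sketch in Python (purely illustrative) of how empty the lake still looks shortly before the end:

```python
# Fraction of the lake that is full k days before it fills completely,
# given that the volume doubles every day.
for days_before in (1, 7, 30):
    print(days_before, 1 / 2 ** days_before)
# One day before: 50% full. A week before: less than 1% full.
# A month before: about one part in a billion, which looks completely empty.
```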

Human brains are programmed to understand linear processes. We expect things to come in direct proportion: If you work twice as hard, you expect to get twice as much done. If you study twice as long, you expect to learn twice as much. If you pay twice as much, you expect to get twice as much stuff.

We tend to apply this same intuition to situations where it does not belong, processes that are not actually linear but exponential. As a result, when we extrapolate the slow growth early in the process, we wildly underestimate the total growth in the long run.

For example, suppose we have two countries. Arcadia has a GDP of $100 billion per year, and they grow at 4% per year. Berkland has a GDP of $200 billion, and they grow at 2% per year. Assuming that they maintain these growth rates, how long will it take for Arcadia’s GDP to exceed Berkland’s?

If we do this intuitively, we might sort of guess that at 4% you'd add 100% in 25 years, and at 2% you'd add 100% in 50 years; so it should be something like 75 years, because by then Arcadia would have added $300 billion while Berkland added $200 billion, putting both at roughly $400 billion. You might even just fudge the numbers in your head and say “about a century”.

In fact, it is only about 36 years (35.7, to be precise). You could solve this exactly by setting (100)(1.04^x) = (200)(1.02^x); but I have an intuitive method that I think may help you estimate exponential processes in the future.

Divide the percentage into 69. (For some numbers it's easier to use 70 or 72; remember, these are only rough approximations. The exact figure is 100*ln(2) = 69.3147…, and strictly speaking you would divide it not by the percentage p but by 100*ln(1+p/100); for small growth rates those two are nearly identical, which is why using plain p works well enough.) This is the time it will take to double.

So at 4%, Arcadia will double in about 17.5 years, quadrupling in 35 years. At 2%, Berkland will double in about 35 years. Thus, in 35 years, Arcadia will quadruple and Berkland will double, so their GDPs will be equal.
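If you want to check these numbers yourself, here is a minimal sketch in Python; the helper functions are my own names, not anything standard:

```python
import math

def doubling_time(pct_per_year):
    """Rule-of-69 approximation: years to double at a given percent growth rate."""
    return 69 / pct_per_year

def crossover_years(small_gdp, fast_rate, big_gdp, slow_rate):
    """Exact solution of small_gdp*(1+fast_rate)^t = big_gdp*(1+slow_rate)^t."""
    return math.log(big_gdp / small_gdp) / math.log((1 + fast_rate) / (1 + slow_rate))

print(doubling_time(4))                       # ~17.3 years for Arcadia
print(doubling_time(2))                       # ~34.5 years for Berkland
print(crossover_years(100, 0.04, 200, 0.02))  # ~35.7 years, matching the estimate above
```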

Economics is full of exponential processes: Compound interest is exponential, and over moderately long periods GDP and population both tend to grow exponentially. (In fact they grow logistically, which is similar to exponential until it gets very large and begins to slow down. If you smooth out our recessions, you can get a sense that since the 1940s, US GDP growth has slowed down from about 4% per year to about 2% per year.) It is therefore quite important to understand how exponential growth works.

Let’s try another one. If one account has $1 million, growing at 5% per year, and another has $1,000, growing at 10% per year, how long will it take for the second account to have more money in it?

69/5 is about 14, so the first account doubles in 14 years. 69/10 is about 7, so the second account doubles in 7 years. A factor of 1000 is about 10 doublings (2^10 = 1024), so the second account needs to have doubled 10 times more than the first account. Since it doubles twice as often, this means that it must have doubled 20 times while the other doubled 10 times. Therefore, it will take about 140 years.

In fact, the exact answer is about 148 years, so our quick approximation of 140 is still quite close.
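The same exact calculation, for anyone who wants to verify it (again, just an illustrative sketch):

```python
import math

# When does $1,000 growing at 10% overtake $1,000,000 growing at 5%?
years = math.log(1_000_000 / 1_000) / math.log(1.10 / 1.05)
print(years)  # ~148.5, so the smaller account pulls ahead during the 149th year
```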

This example is instructive in another way; 148 years is a pretty long time, isn't it? You can't just assume that exponential growth is “as fast as you want it to be”. Once people realize that exponential growth is very fast, they often overcorrect, assuming that exponential growth automatically means growth that is absurdly—or arbitrarily—fast. (XKCD made a similar point in this comic.)

I think the worst examples of this mistake are among Singularitarians. They—correctly—note that computing power has become exponentially greater and cheaper over time, doubling about every 18 months, which has been dubbed Moore’s Law. They assume that this will continue into the indefinite future (this is already problematic; the growth rate seems to be already slowing down). And therefore they conclude there will be a sudden moment, a technological singularity, at which computers will suddenly outstrip humans in every way and bring about a new world order of artificial intelligence basically overnight. They call it a “hard takeoff”; here’s a direct quote:

But many thinkers in this field including Nick Bostrom and Eliezer Yudkowsky worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a huge subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, one which can identify certain objects in pictures and navigate a complex environment, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

Wait… what? For someone like me who understands exponential growth, the last part is a baffling non sequitur. If computers start half as smart as us and double every 18 months, in 18 months, they will be as smart as us. In 36 months, they will be twice as smart as us. Twice as smart as us literally means that two people working together perfectly can match them—certainly a few dozen working realistically can. We’re not in danger of total AI domination from that. With millions of people working against the AI, we should be able to keep up with it for at least another 30 years. So are you assuming that this trend is continuing or not? (Oh, and by the way, we’ve had AIs that can identify objects and navigate complex environments for a couple years now, and so far, no ringworld around the Sun.)
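That “at least another 30 years” is not a rhetorical flourish; it falls straight out of the doubling arithmetic. A back-of-the-envelope sketch, using the same numbers as the paragraph above:

```python
import math

# Start at half a human's "smartness" and double every 18 months.
# How long until the AI outstrips the combined effort of a million people?
doublings_needed = math.log2(1_000_000 / 0.5)
print(doublings_needed * 1.5)  # ~31 years
```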

That same essay makes a biological argument, which misunderstands human evolution in a way that is surprisingly subtle yet ultimately fundamental:

If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

No, actually, what makes humans what we are is not that we are 1% smarter than chimpanzees.

First of all, we’re actually more like 200% smarter than chimpanzees, measured by encephalization quotient; they clock in at 2.49 while we hit 7.44. If you simply measure by raw volume, they have about 400 mL to our 1300 mL, so again roughly 3 times as big. But that’s relatively unimportant; with Moore’s Law, tripling only takes about 2.5 years.

But even having triple the brain power is not what makes humans different. It was a necessary condition, but not a sufficient one. Indeed, it was so insufficient that for about 200,000 years we had brains just as powerful as we do now and yet we did basically nothing in technological or economic terms—total, complete stagnation on a global scale. This is a conservative estimate of when we had brains of the same size and structure as we do today.

What makes humans what we are? Cooperation. We are what we are because we are together.

The capacity of human intelligence today is not 1300 mL of brain. It's more like 1.3 gigaliters of brain, where a gigaliter, a billion liters, is about the volume of the Empire State Building. We have the intellectual capacity we do not because we are individually geniuses, but because we have built institutions of research and education that combine, synthesize, and share the knowledge of billions of people who came before us. Isaac Newton didn't understand the world as well as the average 21st-century third-grader does. Does the third-grader have more brain? Of course not. But they absolutely do have more knowledge.

(I recently finished my first playthrough of Legacy of the Void, in which a central point concerns whether the Protoss should detach themselves from the Khala, a psychic union which combines all their knowledge and experience into one. I won’t spoil the ending, but let me say this: I can understand their hesitation, for it is basically our equivalent of the Khala—first literacy, and now the Internet—that has made us what we are. It would no doubt be the Khala that made them what they are as well.)

Is AI still dangerous? Absolutely. There are all sorts of damaging effects AI could have, culturally, economically, militarily—and some of them are already beginning to happen. I even agree with the basic conclusion of that essay that OpenAI is a bad idea because the cost of making AI available to people who will abuse it or create one that is dangerous is higher than the benefit of making AI available to everyone. But exponential growth not only isn’t the same thing as instantaneous takeoff, it isn’t even compatible with it.

The next time you encounter an example of exponential growth, try this. Don’t just fudge it in your head, don’t overcorrect and assume everything will be fast—just divide the percentage into 69 to see how long it will take to double.

Nature via Nurture

JDN 2457222 EDT 16:33.

One of the most common “deep questions” human beings have asked ourselves over the centuries is also one of the most misguided, the question of “nature versus nurture”: Is it genetics or environment that makes us what we are?

Humans are probably the single entity in the universe for which this question makes the least sense. Artificial constructs have no prior existence, so they are “all nurture”, made what we choose to make them. Most other organisms on Earth behave according to fixed instinctual programming, acting out a specific series of responses that have been honed over millions of years, doing only one thing, but doing it exceedingly well. They are in this sense “all nature”. As the saying goes, the fox knows many things, but the hedgehog knows one very big thing. Most organisms on Earth are in this sense hedgehogs, but we Homo sapiens are the ultimate foxes. (Ironically, hedgehogs are not actually “hedgehogs” in this sense: Being mammals, they have an advanced brain capable of flexibly responding to environmental circumstances. Foxes are a good deal more intelligent still, however.)

But human beings are by far the most flexible, adaptable organism on Earth. We live on literally every continent; despite being savannah apes we even live deep underwater and in outer space. Unlike most other species, we do not fit into a well-defined ecological niche; instead, we carve our own. This certainly has downsides; human beings are ourselves a mass extinction event.

Does this mean, therefore, that we are tabula rasa, blank slates upon which anything can be written?

Hardly. We're more like word processors. Staring (as I of course presently am) at the blinking cursor of a word processor on a computer screen, seeing that wide, open space where a virtual infinity of possible texts could be written, depending entirely upon a sequence of minuscule key vibrations, you could be forgiven for thinking that you are looking at a blank slate. But in fact you are looking at the pinnacle of thousands of years of technological advancement, a machine so advanced, so precisely engineered, that its individual components are one ten-thousandth the width of a human hair (Intel just announced that we can now do even better than that). At peak performance, it is capable of over 100 billion calculations per second. Its random-access memory stores as much information as all the books on a stacks floor of the Hatcher Graduate Library, and its hard drive stores as much as all the books in the US Library of Congress. (Of course, both libraries contain digital media as well, exceeding anything my humble hard drive could hold by a factor of a thousand.)

All of this, simply to process text? Of course not; word processing is an afterthought for a processor that is specifically designed for dealing with high-resolution 3D images. (Of course, nowadays even a low-end netbook that is designed only for word processing and web browsing can typically handle a billion calculations per second.) But here the analogy with humans is quite accurate as well: Written language is only about 5,000 years old, while the human visual mind is at least 100,000. We were 3D image analyzers long before we were word processors. This may be why we say “a picture is worth a thousand words”; we process each with about as much effort, even though the image necessarily contains thousands of times as many bits.

Why is the computer capable of so many different things? Why is the human mind capable of so many more? Not because they are simple and impinged upon by their environments, but because they are complex and precision-engineered to nonlinearly amplify tiny inputs into vast outputs—but only certain tiny inputs.

That is, it is because of our nature that we are capable of being nurtured. It is precisely the millions of years of genetic programming that have optimized the human brain that allow us to learn and adapt so flexibly to new environments and form a vast multitude of languages and cultures. It is precisely the genetically-programmed humanity we all share that makes our environmentally-acquired diversity possible.

In fact, causality also runs in the other direction. Indeed, when I said other organisms were “all nature” that wasn't right either, for even tightly-programmed instincts were shaped by millions of years of environmental pressure. Human beings have even been engaged in cultural interaction long enough that culture has begun to affect our genetic evolution; the reason I can digest lactose is that my ancestors about 10,000 years ago raised goats. We have our nature because of our ancestors' nurture.

And then of course there's the fact that we need a certain minimum level of environmental enrichment even to develop normally; a genetically-normal human raised in a deficient environment will suffer a kind of mental atrophy, as when feral children fail to properly acquire language.

Thus, the question “nature or nurture?” seems a bit beside the point: We are extremely flexible and responsive to our environment, because of innate genetic hardware and software, which requires a certain environment to express itself, and which arose because of thousands of years of culture and millions of years of the struggle for survival—we are nurture because nature because nurture.

But perhaps we didn't actually mean to ask about human traits in general; perhaps we meant to ask about some specific trait, like spatial intelligence, or eye color, or gender identity. This at least can be structured as a coherent question: How heritable is the trait? What proportion of the variance in this population is associated with genetic variation? Heritability analysis is a well-established methodology in behavioral genetics.

Yet that isn't the same question at all. For while height is extremely heritable within a given population (usually about 80%), human height worldwide has been increasing dramatically over time due to environmental influences and can actually be used as a measure of a nation's economic development. (Look at what happened to the height of men in Japan.) How heritable is height? You have to be very careful what you mean.

Meanwhile, the heritability of neurofibromatosis is actually quite low—as many people acquire the disease by new mutations as inherit it from their parents—but we know for a fact it is a genetic disorder, because we can point to the specific genes that mutate to cause the disease.

Heritability also depends on the population under consideration; speaking English is more heritable within the United States than it is across the world as a whole, because there are a larger proportion of non-native English speakers in other countries. In general, a more diverse environment will lead to lower heritability, because there are simply more environmental influences that could affect the trait.
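To make that concrete: heritability is just the genetic share of the total variance, so widening the range of environments while holding genetic variation fixed mechanically pushes it down. A toy calculation with made-up numbers:

```python
def heritability(var_genetic, var_environmental):
    """Share of total phenotypic variance attributable to genetic variance."""
    return var_genetic / (var_genetic + var_environmental)

# Same genetic variance, two populations with different environmental diversity:
print(heritability(4, 1))   # 0.8 in a homogeneous environment
print(heritability(4, 16))  # 0.2 in a highly diverse environment
```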

As children get older, their behavior gets more heritable, a result which probably seems completely baffling until you understand what heritability really means. Your genes become a more important factor in your behavior as you grow up, because you become separated from the environment of your birth and immersed in the general environment of your whole society. Lower environmental diversity means higher heritability, by definition. There's also an effect of choosing your own environment; people who are intelligent and conscientious are likely to choose to go to college, where they will be further trained in knowledge and self-control. This latter effect is called niche-picking.

This is why saying something like “intelligence is 80% genetic” is basically meaningless, and “intelligence is 80% heritable” isn’t much better until you specify the reference population. The heritability of intelligence depends very much on what you mean by “intelligence” and what population you’re looking at for heritability. But even if you do find a high heritability (as we do for, say, Spearman’s g within the United States), this doesn’t mean that intelligence is fixed at birth; it simply means that parents with high intelligence are likely to have children with high intelligence. In evolutionary terms that’s all that matters—natural selection doesn’t care where you got your traits, only that you have them and pass them to your offspring—but many people do care, and IQ being heritable because rich, educated parents raise rich, educated children is very different from IQ being heritable because innately intelligent parents give birth to innately intelligent children. If genetic variation is systematically related to environmental variation, you can measure a high heritability even though the genes are not directly causing the outcome.
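Here is a toy simulation of that last point (the setup and numbers are entirely my own, for illustration): a “gene” that does nothing by itself, but that tends to come bundled with an enriched environment, ends up strongly associated with the outcome anyway.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Gene G has no direct effect; it only predicts who gets the enriched environment."""
    carriers, non_carriers = [], []
    for _ in range(n):
        g = random.random() < 0.5                          # carries the gene?
        enriched = random.random() < (0.9 if g else 0.1)   # environment tracks the gene
        outcome = 100 + (15 if enriched else 0) + random.gauss(0, 5)
        (carriers if g else non_carriers).append(outcome)
    return sum(carriers) / len(carriers), sum(non_carriers) / len(non_carriers)

print(simulate())  # carriers average ~12 points higher, though the gene itself does nothing
```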

We do use twin studies to try to sort this out, but because identical twins raised apart are exceedingly rare, two very serious problems emerge: One, there usually isn't a large enough sample size to say anything useful; and two, more importantly, this is actually an inaccurate measure in terms of natural selection. The evolutionary pressure is based on the correlation with the genes—it actually doesn't matter whether the genes are directly causal. All that matters is that organisms with allele X survive and organisms with allele Y do not. Usually that's because allele X does something useful, but even if it's simply because people with allele X happen to mostly come from a culture that makes better guns, that will work just as well.
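For reference, the classic twin-study arithmetic is roughly Falconer's formula, which backs a “genetic” share out of the gap between identical-twin and fraternal-twin correlations. The correlations below are invented for illustration, and note that, as I just said, the result measures correlation with the genes, not direct causation.

```python
def falconer(r_mz, r_dz):
    """Crude ACE decomposition from twin correlations (Falconer's formula)."""
    h2 = 2 * (r_mz - r_dz)   # "additive genetic" share
    c2 = 2 * r_dz - r_mz     # shared-environment share
    e2 = 1 - r_mz            # non-shared environment and measurement error
    return h2, c2, e2

print(falconer(0.75, 0.45))  # roughly (0.6, 0.15, 0.25) with these made-up correlations
```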

We can see this quite directly: White skin spread across the world not because it was useful (it's actually terrible at any latitude other than subarctic), but because the cultures that conquered the world happened to be composed mostly of people with White skin. In the 15th century you'd find a very high heritability of “using gunpowder weapons”, and there was definitely a selection pressure in favor of that trait—but it obviously doesn't take special genes to use a gun.

The kind of heritability you get from twin studies is answering a totally different, nonsensical question, something like: “If we reassigned all offspring to parents randomly, how much of the variation in this trait in the new population would be correlated with genetic variation?” And honestly, I think the only reason people think that this is the question to ask is precisely because even biologists don’t fully grasp the way that nature and nurture are fundamentally entwined. They are trying to answer the intuitive question, “How much of this trait is genetic?” rather than the biologically meaningful “How strongly could a selection pressure for this trait evolve this gene?”

And if right now you’re thinking, “I don’t care how strongly a selection pressure for the trait could evolve some particular gene”, that’s fine; there are plenty of meaningful scientific questions that I don’t find particularly interesting and are probably not particularly important. (I hesitate to provide a rigid ranking, but I think it’s safe to say that “How does consciousness arise?” is a more important question than “Why are male platypuses venomous?” and “How can poverty be eradicated?” is a more important question than “How did the aircraft manufacturing duopoly emerge?”) But that’s really the most meaningful question we can construct from the ill-formed question “How much of this trait is genetic?” The next step is to think about why you thought that you were asking something important.

What did you really mean to ask?

For a bald question like, “Is being gay genetic?” there is no meaningful answer. We could try to reformulate it as a meaningful biological question, like “What is the heritability of homosexual behavior among males in the United States?” or “Can we find genetic markers strongly linked to self-identification as ‘gay’?” but I don’t think those are the questions we really meant to ask. I think actually the question we meant to ask was more fundamental than that: Is it legitimate to discriminate against gay people? And here the answer is unequivocal: No, it isn’t. It is a grave mistake to think that this moral question has anything to do with genetics; discrimination is wrong even against traits that are totally environmental (like religion, for example), and there are morally legitimate actions to take based entirely on a person’s genes (the obvious examples all coming from medicine—you don’t treat someone for cystic fibrosis if they don’t actually have it).

Similarly, when we ask the question “Is intelligence genetic?” I don’t think most people are actually interested in the heritability of spatial working memory among young American males. I think the real question they want to ask is about equality of opportunity, and what it would look like if we had it. If success were entirely determined by intelligence and intelligence were entirely determined by genetics, then even a society with equality of opportunity would show significant inequality inherited across generations. Thus, inherited inequality is not necessarily evidence against equality of opportunity. But this is in fact a deeply disingenuous argument, used by people like Charles Murray to excuse systemic racism, sexism, and concentration of wealth.

We didn't have to say that inherited inequality is necessarily or undeniably evidence against equality of opportunity—merely that it is, in fact, evidence of inequality of opportunity. Moreover, it is far from the only evidence against equality of opportunity; we can also observe the fact that college-educated Black people are no more likely to be employed than White people who didn't even finish high school, for example, or the fact that otherwise identical resumes with distinctively Black names (like “Jamal”) are less likely to receive callbacks than the same resumes with distinctively White names (like “Greg”). We can observe that the same is true for resumes with obviously female names (like “Sarah”) versus obviously male names (like “David”), even when the hiring is done by social scientists. We can directly observe that one-third of the 400 richest Americans inherited their wealth (and if you look more closely at the other two-thirds, all of them had some very unusual opportunities, usually due to their family connections—“self-made” is invariably a great exaggeration). The evidence for inequality of opportunity in our society is legion, regardless of how genetics and intelligence are related. In fact, I think that the high observed heritability of intelligence is largely due to the fact that educational opportunities are distributed in a genetically-biased fashion, but I could be wrong about that; maybe there really is a large genetic influence on human intelligence. Even so, that does not justify widespread and directly-measured discrimination. It does not justify a handful of billionaires luxuriating in almost unimaginable wealth as millions of people languish in poverty. Intelligence can be as heritable as you like and it is still wrong for Donald Trump to have billions of dollars while millions of children starve.

This is what I think we need to do when people try to bring up a “nature versus nurture” question. We can certainly talk about the real complexity of the relationship between genetics and environment, which I think are best summarized as “nature via nurture”; but in fact usually we should think about why we are asking that question, and try to find the real question we actually meant to ask.