How can we stop rewarding psychopathy?

Oct 1, JDN 2458028

A couple of weeks ago The New York Times ran an interesting article about how entrepreneurs were often juvenile delinquents, who often go on to become white-collar criminals. They didn’t quite connect the dots, though; they described the relevant trait driving this behavior as “rule-breaking”, when it is probably better defined as psychopathy. People like Martin Shkreli aren’t just “rule-breakers”; they are psychopaths. While only about 1% of humans in general are psychopaths, somewhere between 3% and 4% of business executives are psychopaths. I was unable to find any specific data assessing the prevalence of psychopathy among politicians, but if you just read the Hare checklist, it’s not hard to see that psychopathic traits are overrepresented among politicians as well.

This is obviously the result of selection bias; as a society, we are systematically appointing psychopaths to positions of wealth and power. Why are we doing this? How can we stop?

One very important factor here that may be especially difficult to deal with is desire. We generally think that in a free society, people should be allowed to seek out the sort of life they want to live. But one of the reasons that psychopaths are more likely to become rich and powerful is precisely that they want it more.

To most of us, being rich is probably something we want, but not the most important thing to us. We’d accept being poor if it meant we could be happy, surrounded by friends and family who love us, and made a great contribution to society. We would like to be rich, but it’s more important that we be good people. But to many psychopaths, being rich is the one single thing they care about. All those other considerations are irrelevant.

With power, matters are even more extreme: Most people actually seem convinced that they don’t want power at all. They associate power with corruption and cruelty (because, you know, so many of the people in power are psychopaths!), and they want no part of it.

So the saying goes: “Power tends to corrupt, and absolute power corrupts absolutely.” Does it, now? Did power corrupt George Washington and Abraham Lincoln? Did it corrupt Mahatma Gandhi and Nelson Mandela? I’m not saying that any of these men were without flaws, even serious ones—but was it power that made them so? Who would they have been, and more importantly, what would they have done, if they hadn’t had power? Would the world really have been better off if Abraham Lincoln and Nelson Mandela had stayed out of politics? I don’t think so.

Part of what we need, therefore, is to convince good people that wanting power is not inherently bad. Power just means the ability to do things; it’s what you do that matters. You should want power—the power to right wrongs, mend injustices, uplift humanity’s future. Thinking that the world would be better if you were in charge not only isn’t a bad thing—it is quite likely to be true. If you are not a psychopath, then the world would probably be better off if you were in charge of it.

Of course, that depends partly on what “in charge of the world” even means; it’s not like we have a global government, after all. But even suppose you were granted the power of an absolute dictatorship over all of humanity; what would you do with that power? My guess is that you’d probably do what I would do: Start by using that power to correct the greatest injustices, then gradually cede power to a permanent global democracy. That wouldn’t just be a good thing; it would be quite literally and without a doubt the best thing that ever happened. Of course, it would be all the better if we never built such a dictatorship in the first place; but mainly that’s because of the sort of people who tend to become dictators. A benevolent dictatorship really would be a wonderful thing; the problem is that dictators almost never remain benevolent. Dictatorship is simply too enticing to psychopaths.

And what if you don’t think you’re competent enough in policy to make such decisions? Simple: You don’t make them yourself, you delegate them to responsible and trustworthy people to make them for you. Recognizing your own limitations is one of the most important differences between a typical leader and a good leader.

Desire isn’t the only factor here, however. Even though psychopaths tend to seek wealth and power with more zeal than others, there are still a lot of good people trying to seek wealth and power. We need to look very carefully at the process of how we select our leaders.

Let’s start with the private sector. How are managers chosen? Mainly, by managers above them. What criteria do they use? Mostly, they use similarity. Managers choose other managers who are “like them”—middle-aged straight White men with psychopathic tendencies.

This is something that could be rectified with regulation; we could require businesses to choose a more diverse array of managers that is more representative of the population at large. While this would no doubt trigger many complaints of “government interference” and “inefficiency”, in fact it almost certainly would increase the long-term profitability of most corporations. Study after study after study shows that increased diversity, particularly including more equal representation of women, results in better business performance. A recent MIT study found that switching from an all-male or all-female management population to a 50-50 male/female split could increase profits by as much as forty percent. The reason boards of directors aren’t including more diversity is that they ultimately care more about protecting their old boys’ club (and increasing their own compensation, of course) than they do about maximizing profits for their shareholders.

I think it would actually be entirely reasonable to include regulations about psychopathy in particular; designate certain industries (such as lobbying and finance; I would not include medicine, as psychopaths actually seem to make pretty good neurosurgeons!) as “systemically vital” and require psychopathy screening tests as part of their licensing process. This is no small matter, and definitely does represent an incursion into civil liberties; but given the enormous potential benefits, I don’t think it can be dismissed out of hand. We do license professions; why shouldn’t at least a minimal capacity for empathy and ethical behavior be part of that licensing process?

Where the civil liberty argument becomes overwhelming is in politics. I don’t think we can justify any restrictions on who should be allowed to run for office. Frankly, I think even the age limits should be struck from the Constitution; you should be allowed to run for President at 18 if you want. Requiring psychological tests for political office borders on dystopian.

That means we need to somehow reform either the campaign system, the voting system, or the behavior of voters themselves.

Of course, we should reform all three. Let’s start with the voting system itself, as that is the simplest: We should be using range voting, and we should abolish the Electoral College. Districts should be replaced by proportional representation through reweighted range voting, eliminating gerrymandering once and for all.
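
For concreteness, here is a minimal sketch of how reweighted range voting elects a multi-member delegation. The 0–5 score scale, the candidate names, and the 60/40 electorate are all made up purely for illustration; this is one common formulation of the algorithm, not an official specification.

```python
# A minimal sketch of reweighted range voting (RRV) for proportional
# representation. Ballots, candidates, and the 0-5 scale are illustrative.

MAX_SCORE = 5  # each ballot scores each candidate from 0 to 5

def reweighted_range_voting(ballots, num_seats):
    """Elect num_seats candidates; ballots is a list of {candidate: score} dicts."""
    winners = []
    for _ in range(num_seats):
        totals = {}
        for ballot in ballots:
            # A ballot is down-weighted by how much it already "got its way".
            spent = sum(ballot.get(w, 0) for w in winners)
            weight = 1.0 / (1.0 + spent / MAX_SCORE)
            for candidate, score in ballot.items():
                if candidate not in winners:
                    totals[candidate] = totals.get(candidate, 0.0) + weight * score
        winners.append(max(totals, key=totals.get))
    return winners

# Three-seat example: a 60/40 split electorate ends up with mixed representation.
ballots = (
    [{"A1": 5, "A2": 4, "B1": 1, "B2": 0}] * 60 +
    [{"A1": 0, "A2": 1, "B1": 5, "B2": 4}] * 40
)
print(reweighted_range_voting(ballots, num_seats=3))  # e.g. ['A1', 'B1', 'A2']
```

The down-weighting step is what produces proportionality: once a faction has elected someone, its ballots count for less in later rounds, so a cohesive minority still wins roughly its share of seats.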

The campaign system is trickier. We could start by eliminating or tightly capping private and corporate campaign donations, and replace them with a system similar to the “Democracy Vouchers” being tested in Seattle. The basic idea is simple and beautiful: Everyone gets an equal amount of vouchers to give to whatever candidates they like, and then all the vouchers can be redeemed for campaign financing from public funds. It’s like everyone giving a donation (or monetary voting), but everyone has the same amount of “money”.
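
To make the mechanism concrete, here is a bare-bones sketch of voucher redemption. The voucher denomination, the per-person allotment, and the tiny three-resident “city” are illustrative assumptions, not Seattle’s official parameters.

```python
# A bare-bones sketch of the Democracy Voucher idea: everyone gets the same
# voucher budget, and candidates redeem whatever they collect from public funds.
# The $25 voucher value and four-voucher allotment are assumptions.
VOUCHER_VALUE = 25        # dollars of public funds per voucher
VOUCHERS_PER_PERSON = 4

# Each resident assigns their vouchers to candidates of their choice.
assignments = [
    {"Alice": 4},
    {"Alice": 2, "Bob": 2},
    {"Bob": 3, "Carol": 1},
]

funding = {}
for person in assignments:
    assert sum(person.values()) <= VOUCHERS_PER_PERSON  # nobody can give more than their share
    for candidate, n in person.items():
        funding[candidate] = funding.get(candidate, 0) + n * VOUCHER_VALUE

print(funding)  # {'Alice': 150, 'Bob': 125, 'Carol': 25}
```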

This would not solve all the problems, however. There is still an oligopoly of news media distorting our political discourse. There is still astonishingly bad journalism even in our most respected outlets, like the New York Times’s obsession with Comey’s letter and CNN’s wall-to-wall coverage of totally unfounded speculation about a missing airliner.

Then again, CNN’s ratings skyrocketed during that period. This shows that the problems run much deeper than a handful of bad journalists or corrupt media companies. These companies are, to a surprisingly large degree, just trying to cater to what their audience has said it wants, just “giving the people what they want”.

Our fundamental challenge, therefore, is to change what the people want. We have to somehow convince the public at large—or at least a big enough segment of the public at large—that they don’t really want TV news that spends hours telling them nothing and they don’t really want to elect the candidate who is the tallest or has the nicest hair. And we have to get them to actually change the way they behave accordingly.

When it comes to that part, I have no idea what to do. A voting population that is capable of electing Donald Trump—Electoral College nonsense notwithstanding, he won sixty million votes—is one that I honestly have no idea how to interface with at all. But we must try.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies attempt to replicate published scientific results, the success rate is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis—when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t reproduce your results by applying your statistical methods to your data, then your paper needs revision before it can be published. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.
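
Part of why even direct replication falls short of the nominal power of the studies is that some of the published “effects” were false positives to begin with; a perfect replication attempt will still fail on those. Here is a hedged back-of-the-envelope version of that reasoning—the prior, power, and alpha values are illustrative assumptions of mine, not estimates from the literature.

```python
# A hedged sketch of why direct replication rates fall short: if a chunk of
# the published positive results were false positives in the first place,
# replications of those will fail at roughly the alpha rate.
def expected_replication_rate(prior_true, power, alpha=0.05):
    # Among *published* significant results, what fraction reflect true effects?
    ppv = (prior_true * power) / (prior_true * power + (1 - prior_true) * alpha)
    # A direct replication succeeds with prob ~power on true effects,
    # and only with prob ~alpha on the false positives.
    return ppv * power + (1 - ppv) * alpha

print(f"optimistic  (50% of hypotheses true, 90% power): {expected_replication_rate(0.5, 0.9):.0%}")
print(f"pessimistic (30% of hypotheses true, 60% power): {expected_replication_rate(0.3, 0.6):.0%}")
```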

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability you would get the observed result if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value of 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.
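
To make that definition concrete, here is a toy permutation test on the wage-gap example: we ask how often a gap at least as large as the observed one shows up when we scramble the gender labels at random. All of the salary numbers are invented for illustration, and the test is one-sided for simplicity.

```python
# A rough illustration of what a p-value means: how often would a wage gap
# at least this large appear by chance if gender had nothing to do with pay?
# All salary figures are made up.
import random

random.seed(0)
men   = [52, 48, 55, 60, 47, 51, 58, 49, 53, 56]   # hypothetical salaries ($k)
women = [45, 50, 44, 47, 52, 43, 46, 49, 48, 45]

observed_gap = sum(men) / len(men) - sum(women) / len(women)

pooled = men + women
trials = 100_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)                              # relabel "men"/"women" at random
    fake_gap = (sum(pooled[:10]) - sum(pooled[10:])) / 10
    if fake_gap >= observed_gap:                        # one-sided, for simplicity
        count += 1

p_value = count / trials
print(f"observed gap = {observed_gap:.1f}k, p ≈ {p_value:.4f}")
```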

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
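
Here is a minimal sketch of that Bayesian point, treating a “significant” study as a noisy test of the hypothesis. The power and significance threshold are assumptions chosen just for illustration; the priors are stand-ins for “almost surely true”, “plausible”, and “almost surely false”.

```python
# A minimal sketch of the Bayesian point: what a "significant" result means
# depends heavily on how plausible the hypothesis was to begin with.
# Power and alpha are illustrative assumptions, not values from the post.

def posterior(prior, power=0.8, alpha=0.05):
    """P(hypothesis is true | the study came out statistically significant)."""
    true_pos  = prior * power          # true hypothesis, genuine positive result
    false_pos = (1 - prior) * alpha    # false hypothesis, fluke positive result
    return true_pos / (true_pos + false_pos)

for name, prior in [("gravity-like (almost surely true)",   0.99),
                    ("wage gap (plausible)",                 0.50),
                    ("precognition (almost surely false)",   0.001)]:
    print(f"{name:36s} prior={prior:<6} posterior={posterior(prior):.3f}")
```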

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
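
A quick simulation makes the file drawer vivid. The effect size, sample sizes, and study counts below are all invented: the point is only that a pile of “significant” published papers looks much the same whether the underlying effect is real or not, once the non-significant results vanish into the drawer.

```python
# A quick simulation of the file-drawer problem (all numbers illustrative):
# a modest real effect and a null effect each get studied 1,000 times, but
# only the p < 0.05 results are "published".
import math
import random

random.seed(1)
N = 30                                   # participants per group, per study

def one_study(true_effect):
    """Two-group study with unit-variance outcomes; returns a two-sided p-value."""
    treat = [random.gauss(true_effect, 1.0) for _ in range(N)]
    ctrl  = [random.gauss(0.0,         1.0) for _ in range(N)]
    z = (sum(treat) / N - sum(ctrl) / N) / math.sqrt(2.0 / N)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

for label, effect in [("real effect (d = 0.4)", 0.4), ("null effect  (d = 0.0)", 0.0)]:
    pvals = [one_study(effect) for _ in range(1000)]
    published = sum(p < 0.05 for p in pvals)
    print(f"{label}: {published} of 1000 studies end up 'publishable'")
```

In both cases the published record is a stack of significant results; only the unseen denominator distinguishes a real effect from a fluke.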

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions to force it to happen. Reviewers and editors shouldn’t even see the effect size and p-value before they make the decision to publish; all they should care about is that the experiment makes sense and the proper procedure was conducted.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

“But wait, there’s more!”: The clever tricks of commercials

JDN 2457565

I’m sure you’ve all seen commercials like this dozens of times:

A person is shown (usually in black-and-white) trying to use an ordinary consumer product, and failing miserably. Often their failure can only be attributed to the most abject incompetence, but the narrator will explain otherwise: “Old product is so hard to use. Who can handle [basic household activity] and [simple instructions]?”

“Struggle no more!” he says (it’s almost always a masculine narrator), and the video turns to full color as the same person is shown using the new consumer product effortlessly. “With innovative high-tech new product, you can do [basic household activity] with ease in no time!”

“Best of all, new product, a $400 value, can be yours for just five easy payments of $19.95. That’s five easy payments of $19.95!”

And then, here it comes: “But wait. There’s more! Order within the next 15 minutes and you will get two new products, for the same low price. That’s $800 in value for just five easy payments of $19.95! And best of all, your satisfaction is guaranteed! If you don’t like new product, return it within 30 days for your money back!” (A much quieter, faster voice says: “Just pay shipping and handling.”)

Call 555-1234. That’s 555-1234.

“CALL NOW!”

Did you ever stop and think about why so many commercials follow this same precise format?

In short, because it works. Indeed, it works a good deal better than simply presenting the product’s actual upsides and downsides and reporting a sensible market price—even if that sensible market price is lower than the “five easy payments of $19.95”.

We owe this style of marketing to one Ron Popeil; he was a prolific inventor, but none of his inventions have had so much impact as the marketing methods he used to sell them.

Let’s go through step by step. Why is the person using the old product so incompetent? Surely they could sell their product without implying that we don’t know how to do basic household activities like boiling pasta and cutting vegetables?

Well, first of all, many of these products do nothing but automate such simple household activities (like the famous Veg-O-Matic which cuts vegetables and “It slices! It dices!”), so if they couldn’t at least suggest that this is a lot of work they’re saving us, we’d have no reason to want their product.

But there’s another reason as well: Watching someone else fumble with basic household appliances is funny, as any fan of the 1950s classic I Love Lucy would attest (in fact, it may not be a coincidence that the one fumbling with the vegetables is often a woman who looks a lot like Lucy), and meta-analysis of humor in advertising has shown that it draws attention and triggers positive feelings.

Why use black-and-white for the first part? The switch to color enhances the feeling of contrast, and the color video is more appealing. You wouldn’t consciously say “Wow, that slicer changed the tomatoes from an ugly grey to a vibrant red!” but your subconscious mind is still registering that association.

Then they will hit you with appealing but meaningless buzzwords. For technology it will be things like “innovative”, “ground-breaking”, “high-tech” and “state-of-the-art”, while for foods and nutritional supplements it will be things like “all-natural”, “organic”, “no chemicals”, and “just like homemade”. It will generally be either so vague as to be unverifiable (what constitutes “innovative”?), utterly tautological (all carbon-based substances are “organic” and this term is not regulated), or transparently false but nonetheless not specific enough to get them in trouble (“just like homemade” literally can’t be true if you’re buying it from a TV ad). These give you positive associations without forcing the company to commit to making a claim they could actually be sued for breaking. It’s the same principle as the Applause Lights that politicians bring to every speech: “Three cheers for moms!” “A delicious slice of homemade apple pie!” “God Bless America!”

Occasionally you’ll also hear buzzwords that do have some meaning, but often not nearly as strong as people imagine: “Patent pending” means that they applied for the patent and it wasn’t summarily rejected—but not that they’ll end up getting it approved. “Certified organic” means that the USDA signed off on the farming standards, which is better than nothing but leaves a lot of wiggle room for animal abuse and irresponsible environmental practices.

And then we get to the price. They’ll quote some ludicrous figure for its “value”, which may be a price that no one has ever actually paid for a product of this kind, then draw a line through it and replace it with the actual price, which will be far lower.

Indeed, not just lower: The actual price is almost always $19.99 or $19.95. If the product is too expensive to make to sell at a single $19.95, they will sell it at several payments of $19.95, and emphasize that these are “easy” payments, as though the difficulty of writing the check were a major factor in people’s purchasing decisions. (That actually is a legitimate concern for micropayments, but not for buying kitchen appliances!) They’ll repeat the price because repetition improves memory and also makes statements more persuasive.

This is what we call psychological pricing, and it’s one of those enormous market distortions that once you realize it’s there, you see it everywhere and start to wonder how our whole market system hasn’t collapsed on itself from the sheer weight of our overwhelming irrationality. The price of a product sold on TV will almost always be just slightly less than $20.

In general, most prices will take the form of $X.95 or $X.99; Costco even has a code system they use in the least significant digit. Continuous substances like gasoline can even be sold at fractional pennies, so they’ll usually be priced at something like $X.X99—not even a full penny below the round figure. It really does seem to work; despite being an eminently trivial difference from the round number, and typically rounded up from what it actually should have been, it just feels like less to see $19.95 rather than $20.00.

Moreover, I have less data to support this particular hypothesis, but I think that $20 in particular is a psychologically important threshold, because $19.95 pops up so very, very often. I think most Americans have what we might call a “Jackson heuristic”, which is as follows: If something costs less than a Jackson (a $20 bill, though hopefully they’ll put Harriet Tubman on soon, so “Tubman heuristic”), you’re allowed to buy it on impulse without thinking too hard about whether it’s worth it. But if it costs more than a Jackson, you need to stop and think about it and weigh the alternatives before you come to a decision. Since these TV ads are almost always aiming for the thoughtless impulse buy, they try to scrape in just under the Jackson heuristic.

Of course, inflation will change the precise figure over time; in the 1980s it was probably a Hamilton heuristic, in the 1970s a Lincoln heuristic, in the 1940s a Washington heuristic. Soon enough it will be a Grant heuristic and then a Benjamin heuristic. In fact it’s probably something like “The closest commonly-used cash denomination to half a milliQALY”, but nobody does that calculation consciously; the estimate is made automatically, without thinking. The threshold probably sits where it does because you could make such a purchase every single day for only about 20% of your total income, and if you hold it to once a week you’re under 3% of your income. So if you follow the Jackson heuristic on impulse buys every week or so, your impulse spending is a “statistically insignificant” proportion of your income. (Why do we use that anyway? And suddenly we realize: The 95% confidence level is itself nothing more than a heuristic.)
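
The arithmetic behind those percentages checks out; here is the back-of-the-envelope version, where the annual income figure is my own rough assumption rather than anything official.

```python
# Back-of-the-envelope check on the "Jackson heuristic" figures above.
# The income number is an assumption (roughly a typical U.S. personal income),
# not something stated in the post.
impulse_price = 19.95
annual_income = 37_000          # assumed, for illustration

daily  = impulse_price * 365 / annual_income
weekly = impulse_price * 52  / annual_income
print(f"one impulse buy per day:  {daily:.0%} of income")   # ~20%
print(f"one impulse buy per week: {weekly:.1%} of income")  # ~2.8%
```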

Then they take advantage of our difficulty in discounting time rationally, by spreading it into payments; “five easy payments of $19.95” sounds a lot more affordable than “$100”, but they are in fact basically the same. (You save $0.25 by the payment plan, maybe as much as a few dollars if your cashflow is very bad and thus you have a high temporal discount rate.)
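
That claim is easy to verify; here is a sketch of the present value of the payment plan, where the discount rates are assumptions chosen to span “normal” and “very bad cashflow”, and the first payment is assumed to be due up front.

```python
# A quick check of the payment-plan arithmetic: five monthly payments of
# $19.95 versus $100 up front, at a few assumed annual discount rates.
def present_value(payment, n_payments, annual_rate):
    monthly = (1 + annual_rate) ** (1 / 12) - 1
    # payments at months 0 through n_payments - 1
    return sum(payment / (1 + monthly) ** t for t in range(n_payments))

for rate in (0.00, 0.05, 0.30):   # 30% ~ someone with very bad cashflow
    pv = present_value(19.95, 5, rate)
    print(f"annual discount rate {rate:.0%}: present value = ${pv:.2f}")
```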

And then, finally, “But wait. There’s more!” They offer you another of the exact same product, knowing full well you’ll probably have no use for the second one. They’ll multiply their previous arbitrary “value” by 2 to get an even more ludicrous number. Now it sounds like they’re doing you a favor, so you’ll feel obliged to do one back by buying the product. Gifts often have this effect in experiments: People are significantly more motivated to answer a survey if you give them a small gift beforehand, even if they get to keep it without taking the survey.

They’ll tell you to call in the next 15 minutes so that you feel like part of an exclusive club (when in reality you could probably call at any time and get the same deal). This also ensures that you’re staying in impulse-buy mode, since if you wait longer to think, you’ll miss the window!

They will offer a “money-back guarantee” to give you a sense of trust in the product, and this would be a rational response, except for that little disclaimer: “Just pay shipping and handling.” For many products, especially nutritional supplements (which cost basically nothing to make), the “handling” fee is high enough that they don’t lose much money, if any, even if you immediately send it back for a refund. Besides, they know that hardly anyone actually bothers to return products. Retailers are currently in a panic about “skyrocketing” rates of product returns that are still under 10%.

Then, they’ll repeat their phone number, followed by a remarkably brazen direct command: “Call now!” Personally I tend to bristle at direct commands, even from legitimate authorities; but apparently I’m unusual in that respect, and most people will in fact obey direct commands from random strangers as long as they aren’t too demanding. A famous demonstration of this you could try yourself if you’re feeling like a prankster is to walk into a room, point at someone, and say “You! Stand up!” They probably will. There’s a whole literature in social psychology about what makes people comply with commands of this sort.

And all, to make you buy a useless gadget you’ll try to use once and then leave in a cupboard somewhere. What untold billions of dollars in wealth are wasted this way?

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. There are some social scientists who have found empirical results showing some effectiveness of torture, however. We can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture is wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would make rejecting torture a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes ever, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and, at best, mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of heart disease and old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.

The Expanse gets the science right—including the economics

JDN 2457502

Despite constantly working on half a dozen projects at once (literally—preparing to start my PhD, writing this blog, working at my day job, editing a novel, preparing to submit a nonfiction book, writing another nonfiction book with three of my friends as co-authors, and creating a card game—that’s seven actually), I do occasionally find time to do things for fun. One I’ve been doing lately is catching up on The Expanse on DVR (I’m about halfway through the first season so far).

If you’re not familiar with The Expanse, it has been fairly aptly described as Battlestar Galactica meets Game of Thrones, though I think that particular comparison misrepresents the tone and attitudes of the series, because both BG and GoT are so dark and cynical (“It’s a nice day… for a… red wedding!”). I think “Star Trek meets Game of Thrones” might be better actually—the extreme idealism of Star Trek would cancel out the extreme cynicism of Game of Thrones, with the result being a complex mix of idealism and cynicism that more accurately reflects the real world (a world where Mahatma Gandhi and Adolf Hitler lived at the same time). That complex, nuanced world (or should I say worlds?) is where The Expanse takes place. ST is also more geopolitical than BG and The Expanse is nothing if not geopolitical.

But The Expanse is not just psychologically realistic—it is also scientifically and economically realistic. It may in fact be the hardest science fiction I have ever encountered, and is definitely the hardest science fiction I’ve seen in a television show. (There are a few books that might be slightly harder, as well as some movies based on them.)

The only major scientific inaccuracy I’ve been able to find so far is the use of sound effects in space, and actually even these can be interpreted as reflecting an omniscient narrator perspective that would hear any sounds that anyone would hear, regardless of what planet or ship they might be on. The sounds the audience hears all seem to be sounds that someone would hear—there’s simply no particular person who would hear all of them. When people are actually thrown into hard vacuum, we don’t hear them make any noise.

Like Firefly (and for once I think The Expanse might actually be good enough to deserve that comparison), there is no FTL, no aliens, no superhuman AI. Human beings are bound within our own solar system, and travel between planets takes weeks or months depending on your energy budget. They actually show holograms projecting the trajectory of various spacecraft and the trajectories actually make good sense in terms of orbital mechanics. Finally screenwriters had the courage to give us the terrifying suspense and inevitability of an incoming nuclear missile rounding a nearby asteroid and intercepting your trajectory, where you have minutes to think about it but not nearly enough delta-v to get out of its blast radius. That is what space combat will be like, if we ever have space combat (as awesome as it is to watch, I strongly hope that we will not ever actually do it). Unlike what Star Trek would have you believe, space is not a 19th century ocean.

They do have stealth in space—but it requires technology that even to them is highly advanced. Moreover it appears to only work for relatively short periods and seems most effective against civilian vessels that would likely lack state-of-the-art sensors, both of which make it a lot more plausible.

Computers are more advanced in the 2200s than they were in the 2000s, but not radically so—at most a million times faster, about what we gained since the 1980s. I’m guessing a smartphone in The Expanse runs at a few petaflops. Essentially they’re banking on Moore’s Law finally dying sometime in the mid-21st century, but then, so am I. Perhaps a bit harder to swallow is that no one has figured out good enough heuristics to match human cognition; but then, human cognition is very tightly optimized.
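
That “million times faster” figure is roughly consistent with a few decades of exponential growth; here is a quick check, where the doubling periods are rough Moore’s-law assumptions and nothing more.

```python
# Back-of-the-envelope check on "a million times faster": how long does a
# factor of a million take if performance doubles every 1.5 or 2 years?
import math

factor = 1_000_000
for doubling_years in (1.5, 2.0):
    years = doubling_years * math.log2(factor)
    print(f"doubling every {doubling_years} years: ~{years:.0f} years for a {factor:,}x speedup")
```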

Spacecraft don’t have artificial gravity except for the thrust of their engines, and people float around as they should when ships are freefalling. They actually deal with the fact that Mars and Ceres have lower gravity than Earth, and the kinds of health problems that result from this. (One thing I do wish they’d done is had the Martian cruiser set a cruising acceleration of Mars-g—about 38% Earth-g—that would feel awkward and dizzying to their Earther captives. Instead they basically seem to assume that Martians still like to use Earth-g for space transit, but that does make some sense in terms of both human health and simply transit time.) It doesn’t seem like people move around quite awkwardly enough in the very low gravity of Ceres—which should be only about 3% Earth-g—but they do establish that electromagnetic boots are ubiquitous and that could account for most of this.

They fight primarily with nuclear missiles and kinetic weapons, and the damage done by nuclear missiles is appropriately reduced by the fact that vacuum doesn’t transmit shockwaves. (Nuclear missiles would still be quite damaging in space by releasing large amounts of wide-spectrum radiation; but they wouldn’t cause the total devastation they do within atmosphere.) Oddly they decided not to go with laser weapons as far as I can tell, which actually seems to me like they’ve underestimated advancement; laser weapons have a number of advantages that would be particularly useful in space, once we can actually make them affordable and reliable enough for widespread deployment. There could also be a three-tier system, where missiles are used at long range, railguns at medium range, and lasers at short range. (Yes, short range—the increased speed of lasers would be only slight compared to a good railgun, and would be more than offset by the effect of diffraction. At orbital distances, a laser is a shotgun.) Then again, it could well work out that railguns are just better—depending on how vessels are structured, puncturing their hulls with kinetic rounds could well be more useful than burning them up with infrared lasers.
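
The “shotgun” claim follows from ordinary diffraction; here is a rough calculation, where the one-micron wavelength and one-meter aperture are my own illustrative assumptions, not anything from the show.

```python
# Rough diffraction numbers behind "at orbital distances, a laser is a shotgun".
# Wavelength and aperture are assumptions (a near-infrared laser, a 1 m mirror).
wavelength = 1.064e-6      # meters
aperture   = 1.0           # meters, emitter diameter
divergence = 1.22 * wavelength / aperture     # diffraction-limited half-angle, radians

for distance_km in (100, 1_000, 10_000):
    spot_diameter = 2 * divergence * distance_km * 1_000
    print(f"at {distance_km:>6,} km the beam has spread to ~{spot_diameter:.1f} m across")
```

At thousands of kilometers the beam is tens of meters wide, so the energy delivered to any one patch of hull is a tiny fraction of what left the emitter.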

But I think what really struck me about the realism of The Expanse is how it even makes the society realistic (in a way that, say, Firefly really doesn’t—we wanted a Western and we got a Western!).

The only major offworld colonies are Mars and Ceres, both of which seem to be fairly well-established, probably originally colonized as much as a century ago. Different societies have formed on each world; Earth has largely united under the United Nations (one of the lead characters is an undersecretary for the UN), but meanwhile Mars has split off into its own independent nation (“Martian” is now an ethnicity like “German” rather than meaning “extraterrestrial”), and the asteroid belt colonists, while formally still under Earth’s government, think of themselves as a different culture (“Belters”) and are seeking independence. There are some fairly obvious—but deftly managed rather than heavy-handed—parallels between the Belter independence movement and real-world independence movements, particularly Palestine (it’s hard not to think of the PLO when they talk about the OPA). Both Mars and the Belt have their own languages, while Earth’s languages have largely coalesced around English as the language of politics and commerce. (If the latter seems implausible, I remind you that the majority of the Internet and all international air traffic control are in English.) English is the world’s lingua franca (which is a really bizarre turn of phrase because it’s the Latin for French).

There is some of the conniving and murdering of Game of Thrones, but it is at a much more subdued level, and all of the major factions display both merits and flaws. There is no clear hero and no clear villain, just conflict and misunderstanding between a variety of human beings each with their own good and bad qualities. There does seem to be a sense that the most idealistic characters suffer for their idealism much as the Starks often do, but unlike the Starks they usually survive and learn from the experience. Indeed, some of the most cynical also seem to suffer for their cynicism—in the episode I just finished, the grizzled UN Colonel assumed the worst of his adversary and ended up branded “the butcher of Anderson Station”.

Cost of living on Ceres is extraordinarily high because of the limited living space (the apartments look a lot like the tiny studios of New York or San Francisco), and above all the need to constantly import air and water from Earth. A central plot point in the first episode is that a ship carrying comet ice—i.e., water—to Ceres is lost in a surprise attack by unknown adversaries with advanced technology, and the result is a deepening of an already dire water shortage, exacerbating the Belters’ craving for rebellion.

Air and water are recyclable, so it isn’t as though literally every drink and every breath must be supplied from outside—indeed, that would clearly be cost-prohibitive. But recycling is never perfect, and Ceres also appears to have a growing population, both of which would require a constant input of new resources to sustain. It makes perfect sense that the most powerful people on Ceres are billionaire tycoons who own water and air transport corporations.

The police on Ceres (of which another lead character is a detective) are well-intentioned but understaffed, underfunded and moderately corrupt, similar to what we seem to find in large inner-city police departments like the NYPD and LAPD. It felt completely right when they responded to an attempt to kill a police officer with absolutely overwhelming force and little regard for due process and procedure—for this is what real-world police departments almost always do.

But why colonize the asteroid belt at all? Mars is a whole planet; there is plenty there—and in The Expanse they are undergoing terraforming at a very plausible rate (there’s a moving scene where a Martian says to an Earther, “We’re trying to finish building our garden before you finish paving over yours.”). Mars has as much land as Earth, and it has water, abundant metals, and CO2 you could use to make air. Even just the frontier ambition could be enough to bring us to Mars.

But why go to Ceres? The explanation The Expanse offers is a very sensible one: Mining, particularly so-called “rare earth metals”. Gold and platinum might have been profitable to mine at first, but once they became plentiful the market would probably collapse or at least drop off to a level where they aren’t particularly expensive or interesting—because they aren’t useful for very much. But neodymium, scandium, and promethium are all going to be in extremely high demand in a high-tech future based on nuclear-powered spacecraft, and given that we’re already running out of easily accessible deposits on Earth, by the 2200s there will probably be basically none left. The asteroid belt, however, will have plenty for centuries to come.

As a result Ceres is organized like a mining town, or perhaps an extractive petrostate (metallostate?); but due to lightspeed interplanetary communication—very important in the series—and some modicum of free speech it doesn’t appear to have attained more than a moderate level of corruption. This also seems realistic; the “end-of-history” thesis is often overstated, but the basic idea that some form of democracy and welfare-state capitalism is fast becoming the only viable model of governance does seem to be true, and that is almost certainly the model of governance we would export to other planets. In such a system corruption can only get so bad before it is shown on the mass media and people won’t take it anymore.

The show doesn’t deal much with absolute dollar (or whatever currency) numbers, which is probably wise; but nominal incomes on Ceres are likely extremely high even though the standard of living is quite poor, because the tiny living space and need to import air and water would make prices (literally?) astronomical. Most people on Ceres seem to have grown up there, but the initial attraction could have been something like the California Gold Rush, where rumors of spectacularly high incomes clashed with similarly spectacular expenses incurred upon arrival. “Become a millionaire!” “Oh, by the way, your utility bill this month is $112,000.”

Indeed, even the poor on Ceres don’t seem that poor, which is a very nice turn toward realism that a lot of other science fiction shows seem unprepared to make. In Firefly, the poor are poor—they can barely afford food and clothing, and have no modern conveniences whatsoever. (“Jaynestown”, perhaps my favorite episode, depicts this vividly.) But even the poor in the US today are rarely that poor; our minimalistic and half-hearted welfare state has a number of cracks one can fall through, but as long as you get the benefits you’re supposed to get you should be able to avoid starvation and homelessness. Similarly I find it hard to believe that any society with high enough productivity to routinely build interplanetary spacecraft the way we build container ships would not have at least the kind of welfare state that provides for the most basic needs. Chronic dehydration is probably still a problem for Belters, because water would be too expensive to subsidize in this way; but they all seem to have fairly nice clothes, home appliances, and smartphones, and that seems right to me. At one point a character loses his arm, and the “cheap” solution is a cybernetic prosthetic—the “expensive” one would be to grow him a new arm. As today but perhaps even more so, poverty in The Expanse is really about inequality—the enormous power granted to those who have millions of times as much as others. (Another show that does this quite well, though it is considerably softer on the physics, is Continuum. If I recall correctly, Alec Sadler in 2079 is literally a trillionaire.)

Mars also appears to be a democracy, and actually quite a thriving one. In many ways Mars appears to be surpassing Earth economically and technologically. This suggests that Mars was colonized with our best and brightest, but not necessarily; Australians have done quite well for themselves despite their country being founded as a penal colony. Mars colonization would also have a way of justifying the colonists’ frontier idealism that no previous frontier has granted: No indigenous people to displace, no local ecology to despoil, and no gifts from the surrounding environment. You really are building entirely from your own hard work and know-how (and technology and funding from Earth, of course) to establish a truly new world on an open and unspoiled frontier. You’re not being naive or hypocritical; it’s the simple truth. That kind of realistic idealism could make the Martian Dream a success in ways even the American Dream never quite was.

All in all, it is a very compelling series, and it should appeal to people like me who crave geopolitical nuance in fiction. But it also has its moments of huge space battles with exploding star cruisers, so there’s that.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually start their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Quite apart from its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mailroom has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, jobs would be assigned solely based on marginal productivity; there would be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you’d end up doing, because that’s what you’d be paid the most to do. Actually, for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)
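
To make that concrete, here is a minimal sketch (my own toy illustration, not anything from actual economic practice) of what frictionless assignment by productivity would look like: choose whichever one-to-one matching of people to tasks maximizes total output, with no ladder to climb. The names and productivity numbers are made up.

```python
from itertools import permutations

# Hypothetical productivity of each person at each task (arbitrary units).
productivity = {
    "Avery":  {"mailroom": 4, "sales": 7, "management": 9},
    "Blake":  {"mailroom": 6, "sales": 5, "management": 4},
    "Carmen": {"mailroom": 3, "sales": 8, "management": 6},
}

people = list(productivity)
tasks = ["mailroom", "sales", "management"]

def total_output(assignment):
    """Sum of each person's productivity at the task they are assigned."""
    return sum(productivity[p][t] for p, t in zip(people, assignment))

# Brute force over every one-to-one assignment. Fine for three people;
# hopeless at the scale of a real labor market, which is exactly the
# cheap-reliable-fast friction problem noted above.
best = max(permutations(tasks), key=total_output)
for person, task in zip(people, best):
    print(f"{person} -> {task}")
print("Total output:", total_output(best))
```

Note that nobody in this toy “earns” the management slot by first excelling in the mailroom; the matching only cares about who produces the most where.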

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to run a major financial firm—and I’m a lot closer than most people. I tick most of the boxes you need to get into that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That, and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors whom people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t treat their advice as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the task depends upon efficient coordination, as well as how skilled other people are at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder”; your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)
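
That selection-effect reasoning is easy to check with a quick simulation. Here is a toy sketch (my own illustration with invented numbers, not data from any study): men and women draw competence from the same distribution, but women must clear a higher bar to be promoted.

```python
import random

random.seed(0)

def promoted(threshold, n=100_000):
    """Fraction promoted and their average competence, given a promotion bar."""
    scores = [random.gauss(0, 1) for _ in range(n)]
    selected = [s for s in scores if s > threshold]
    return len(selected) / n, sum(selected) / len(selected)

men_rate, men_comp = promoted(threshold=1.0)      # ordinary bar
women_rate, women_comp = promoted(threshold=1.5)  # higher bar = discrimination
print(f"men:   {men_rate:.1%} promoted, mean competence {men_comp:.2f}")
print(f"women: {women_rate:.1%} promoted, mean competence {women_comp:.2f}")
```

The group facing the higher bar comes out both rarer and more competent on average, which is exactly the pattern described above.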

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, as well as for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

Bet five dollars for maximum performance

JDN 2457433

One of the more surprising findings from the study of human behavior under stress is the Yerkes-Dodson curve:

[Figure: the original Yerkes-Dodson curve]

This curve shows how well humans perform at a given task, as a function of how high the stakes are for doing it properly.

For simple tasks, it says what most people intuitively expect—and what neoclassical economists appear to believe: The higher the stakes, the more highly incentivized you are, and the better you do.

But for complex tasks, it says something quite different: While increased stakes do raise performance up to a point—with nothing at stake at all, people hardly try—it is possible to become too incentivized. Formally, the curve is not monotonic; it rises to a maximum at some intermediate level of stakes and then falls.
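
Here is a minimal numerical sketch of that shape. Yerkes and Dodson reported an empirical pattern, not an equation, so the functional forms and constants below are invented purely to illustrate the qualitative claim: simple-task performance keeps rising with the stakes, while complex-task performance peaks and then collapses.

```python
import math

def arousal(stakes_dollars):
    """Stress/arousal grows with the stakes, with diminishing returns."""
    return math.log1p(stakes_dollars)

def simple_task_performance(stakes_dollars):
    """Simple tasks: more arousal just means more effort, so performance
    rises toward a ceiling as the stakes increase."""
    return 1.0 - math.exp(-arousal(stakes_dollars) / 3.0)

def complex_task_performance(stakes_dollars):
    """Complex tasks: performance peaks at moderate arousal, then falls
    as stress crowds out higher cognition (the inverted U)."""
    return math.exp(-((arousal(stakes_dollars) - 2.0) ** 2) / 2.0)

for stakes in [0, 5, 100, 10_000, 1_000_000]:
    print(f"${stakes:>9,}: simple {simple_task_performance(stakes):.2f}, "
          f"complex {complex_task_performance(stakes):.2f}")
```

Run it and the complex-task column rises, peaks at modest stakes, and then falls toward zero as the stakes climb into the thousands and millions, while the simple-task column just keeps climbing.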

This is one of many reasons why it’s ridiculous to say that top CEOs should make tens of millions of dollars a year on the rise and fall of their company’s stock price (as a great many economists do in fact say). Even if I believed that stock prices accurately reflect the company’s viability (they do not), and believed that the CEO has a great deal to do with the company’s success, it would still be a case of overincentivizing. When a million dollars rides on a decision, that decision is going to be worse than if the stakes had only been $100. With this in mind, it’s really not surprising that higher CEO pay is correlated with worse company performance. Stock options are terrible motivators, but do offer a subtle way of making wages adjust to the business cycle.

The reason for this is that as the stakes get higher, we become stressed, and that stress response inhibits our ability to use higher cognitive functions. The sympathetic nervous system evolved to make us very good at fighting or running away in the face of danger, which works well should you ever be attacked by a tiger. It did not evolve to make us good at complex tasks under high stakes, the sort of skill we’d need when calculating the trajectory of an errant spacecraft or disarming a nuclear warhead.

To be fair, most of us never have to worry about piloting errant spacecraft or disarming nuclear warheads—indeed, even in today’s world you’re about as likely to be attacked by a tiger as to fly on a spacecraft. (The rate of tiger attacks in the US is just under 2 per year, and the rate of manned space launches in the US was about 5 per year until the Space Shuttle was retired.)

There are certain professions, such as piloting and surgery, in which performing complex tasks under life-or-death pressure is commonplace; precisely for that reason, only a small fraction of people enter them. And if you’ve ever wondered why we use checklists for pilots, and why there is discussion of also using checklists for surgeons, this is why—checklists convert a single complex task into many simple tasks, allowing high performance even at extreme stakes.

But we do have to perform a fair number of quite complex tasks whose stakes, if not literally life-or-death, nonetheless substantially affect our long-term life prospects. In my tutoring business I encounter one in particular quite frequently: Standardized tests.

Tests like the SAT, ACT, GRE, LSAT, GMAT, and other assorted acronyms are not literally life-or-death, but they often feel that way to students because they really do have a powerful impact on where you’ll end up in life. Will you get into a good college? Will you get into grad school? Will you get the job you want? Even subtle deviations from the path of optimal academic success can make it much harder to achieve career success in the future.

Of course, these are hardly the only examples. Many jobs require us to complete tasks properly on tight deadlines, or else risk being fired. Working in academia infamously requires publishing in journals in time to rise up the tenure track, or else you fall off the track entirely. (This incentivizes the production of huge numbers of papers, whether they’re worth writing or not; yes, the number of papers published goes down after tenure, but is that a bad thing? What we need to know is whether the number of good papers goes down. My suspicion is that most if not all of the reduction in publications is due to not publishing things that weren’t worth publishing.)

So if you are faced with this sort of task, what can you do? If you realize that you are faced with a high-stakes complex task, you know your performance will be bad—which only makes your stress worse!

My advice is to pretend you’re betting five dollars on the outcome.

Ignore all other stakes, and pretend you’re betting five dollars. $5.00 USD. Do it right and you get a Lincoln; do it wrong and you lose one.

What this does is ensure that you care enough—you don’t want to lose $5 for no reason—but not too much—if you do lose $5, you don’t feel like your life is ending. We want to put you near the peak of the Yerkes-Dodson curve.

The great irony here is that you most want to do this when it is most untrue. If you actually do have a task for which you’ve bet $5 and nothing else rides on it, you don’t need this technique, and any technique to improve your performance is not particularly worthwhile. It’s when you have a standardized test to pass that you really want to use this—and part of me even hopes that people know to do this whenever they have nuclear warheads to disarm. It is precisely when the stakes are highest that you must put those stakes out of your mind.

Why five dollars? Well, the exact amount is arbitrary, but this is at least about the right order of magnitude for most First World individuals. If you really want to get precise, I think the optimal stakes level for maximum performance is something like 100 microQALY per task; assuming logarithmic utility of wealth, $5 at the US median household income of $53,600 is approximately 100 microQALY. If you have a particularly low or high income, feel free to adjust accordingly. Literally, you should be prepared to bet about an hour of your life; but we are not accustomed to thinking that way, so use $5. (I think most people, if asked outright, would radically overestimate what an hour of their life is worth to them. “I wouldn’t give up an hour of my life for $1,000!” Then why do you work for $20 an hour?)
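
For the curious, here is the back-of-envelope arithmetic. The normalization (one QALY treated as the utility of one year lived at the income in question) is my assumption, chosen to reproduce the figures above rather than something stated precisely in this post.

```python
import math

median_income = 53_600   # US median household income cited above, dollars/year
bet = 5                  # dollars at stake

# With logarithmic utility, losing $5 out of $53,600 costs about 5/53,600
# of a year's worth of utility, since ln(I) - ln(I - 5) is approximately 5/I.
utility_loss_years = math.log(median_income) - math.log(median_income - bet)
print(f"{utility_loss_years * 1e6:.0f} microQALY")   # roughly 93, i.e. about 100

# And "about an hour of your life": one hour as a fraction of a year.
print(f"{1e6 / (365.25 * 24):.0f} microQALY")        # roughly 114, also about 100
```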

It’s a simple heuristic, easy to remember, and sometimes effective. Give it a try.