The facts will not speak for themselves, so we must speak for them

August 3, JDN 2457604

I finally began to understand the bizarre and terrifying phenomenon that is the Donald Trump Presidential nomination when I watched this John Oliver episode:

https://www.youtube.com/watch?v=U-l3IV_XN3c

These lines in particular, near the end, finally helped me put it all together:

What is truly revealing is his implication that believing something to be true is the same as it being true. Because if anything, that was the theme of the Republican Convention this week; it was a four-day exercise in emphasizing feelings over facts.

The facts against Donald Trump are absolutely overwhelming. He is not even a competent businessman, just a spectacularly manipulative one—and even then, it’s not clear he made any more money than he would have by just keeping his inheritance in a diversified stock portfolio. His casinos were too fraudulent for Atlantic City. His university was fraudulent. He has the worst honesty rating PolitiFact has ever given a candidate. (Bernie Sanders, Barack Obama, and Hillary Clinton are statistically tied for some of the best.)

More importantly, almost every policy he has proposed or even suggested is terrible, and several of them could be truly catastrophic.

Let’s start with economic policy: His trade policy would set back decades of globalization and dramatically increase global poverty, while doing little or nothing to expand employment in the US, especially if it sparks a trade war. His fiscal policy would permanently balloon the deficit by giving one of the largest tax breaks to the rich in history. His infamous wall would probably cost about as much as the federal government currently spends on all basic scientific research combined, and his only proposal for funding it fundamentally misunderstands how remittances and trade deficits work. He doesn’t believe in climate change, and would roll back what little progress we have made at reducing carbon emissions, thereby endangering millions of lives. He could very likely cause a global economic collapse comparable to the Great Depression.

His social policy is equally terrible: He has proposed criminalizing abortion (in express violation of Roe v. Wade), which even many pro-life people find too extreme. He wants to deport all Muslims and ban Muslims from entering the country, which is not just a direct First Amendment violation but also literally involves jackbooted soldiers breaking into the homes of law-abiding US citizens to kidnap them and take them out of the country. He wants to deport 11 million undocumented immigrants, which would be the largest deportation in US history.

Yet it is in foreign policy above all that Trump is truly horrific. He has explicitly endorsed targeting the families of terrorists, which is a war crime (though not as bad as what Ted Cruz wanted to do, which is carpet-bombing cities). Speaking of war crimes, he thinks our torture policy wasn’t severe enough, and doesn’t even care if it is ineffective. He has made the literally mercantilist assertion that the purpose of military alliances is to create trade surpluses, and if European countries will not provide us with trade surpluses (read: tribute), he will no longer commit to defending them, thereby undermining decades of global stability that is founded upon America’s unwavering commitment to defend our allies. And worst of all, he will not rule out the first-strike deployment of nuclear weapons.

I want you to understand that I am not exaggerating when I say that a Donald Trump Presidency carries a nontrivial risk of triggering global nuclear war. Will this probably happen? No. It has a probability of perhaps 1%. But a 1% chance of a billion deaths is not a risk anyone should be prepared to take.

 

All of these facts scream at us that Donald Trump would be a catastrophe for America and the world. Why, then, are so many people voting for him? Why do our best election forecasts give him a good chance of winning the election?

Because facts don’t speak for themselves.

This is how the left, especially the center-left, has dropped the ball in recent decades. We joke that reality has a liberal bias, because so many of the facts are so obviously on our side. But meanwhile the right wing has nodded and laughed, even mockingly called us the “reality-based community”, because they know how to manipulate feelings.

Donald Trump has essentially no other skills—but he has that one, and it is enough. He knows how to fan the flames of anger and hatred and point them at his chosen targets. He knows how to rally people behind meaningless slogans like “Make America Great Again” and convince them that he has their best interests at heart.

Indeed, Trump’s persuasiveness is one of his many parallels with Adolf Hitler; I am not yet prepared to accuse Donald Trump of seeking genocide, yet at the same time I am not yet willing to put it past him. I don’t think it would take much of a spark at this point to trigger a conflagration of hatred that launches a genocide against Muslims in the United States, and I don’t trust Trump not to light such a spark.

Meanwhile, liberal policy wonks are looking on in horror, wondering how anyone could be so stupid as to believe him—and even publicly basically calling people stupid for believing him. Or sometimes we say they’re not stupid, they’re just racist. But people don’t believe Donald Trump because they are stupid; they believe Donald Trump because he is persuasive. He knows the inner recesses of the human mind and can harness our heuristics to his will. Do not mistake your unique position that protects you—some combination of education, intellect, and sheer willpower—for some inherent superiority. You are not better than Trump’s followers; you are more resistant to Trump’s powers of persuasion. Yes, statistically, Trump voters are more likely to be racist; but racism is a deep-seated bias in the human mind that to some extent we all share. Trump simply knows how to harness it.

Our enemies are persuasive—and therefore we must be as well. We can no longer act as though facts will automatically convince everyone by the power of pure reason; we must learn to stir emotions and rally crowds just as they do.

Or rather, not just as they do—not quite. When we see lies being so effective, we may be tempted to lie ourselves. When we see people being manipulated against us, we may be tempted to manipulate them in return. But in the long run, we can’t afford to do that. We do need to use reason, because reason is the only way to ensure that the beliefs we instill are true.

Therefore our task must be to make people see reason. Let me be clear: Not demand they see reason. Not hope they see reason. Not lament that they don’t. This will require active investment on our part. We must actually learn to persuade people in such a manner that their minds become more open to reason. This will mean using tools other than reason, but it will also mean treading a very fine line, using irrationality only when rationality is insufficient.

We will be tempted to take the easier, quicker path to the Dark Side, but we must resist. Our goal must be not to make people do what we want them to—but to do what they would want to if they were fully rational and fully informed. We will need rhetoric; we will need oratory; we may even need some manipulation. But as we fight our enemy, we must be vigilant not to become them.

This means not using bad arguments—strawmen and conmen—but pointing out the flaws in our opponents’ arguments even when they seem obvious to us—bananamen. It means not overstating our case about free trade or using implausible statistical results simply because they support our case.

But it also means not understating our case, not hiding in page 17 of an opaque technical report that if we don’t do something about climate change right now millions of people will die. It means not presenting our ideas as “political opinions” when they are demonstrated, indisputable scientific facts. It means taking the media to task for their false balance that must find a way to criticize a Democrat every time they criticize a Republican: Sure, he is a pathological liar and might trigger global economic collapse or even nuclear war, but she didn’t secure her emails properly. If you objectively assess the facts and find that Republicans lie three times as often as Democrats, maybe that’s something you should be reporting on instead of trying to compensate for by changing your criteria.

Speaking of the media, we should be pressuring them to include a regular—preferably daily, preferably primetime—segment on climate change, because yes, it is that important. How about after the weather report every day, you show a climate scientist explaining why we keep having record-breaking summer heat and more frequent natural disasters? If we suffer a global ecological collapse, this other stuff you’re constantly talking about really isn’t going to matter—that is, if it mattered in the first place. When ISIS kills 200 people in an attack, you don’t just report that a bunch of people died without examining the cause or talking about responses. But when a typhoon triggered by climate change kills 7,000, suddenly it’s just a random event, an “act of God” that nobody could have predicted or prevented. Having an appropriate caution about whether climate change caused any particular disaster should not prevent us from drawing the very real links between more carbon emissions and more natural disasters—and sometimes there’s just no other explanation.

It means demanding fact-checks immediately, not as some kind of extra commentary that happens after the debate, but as something the moderator says right then and there. (You have a staff, right? And they have Google access, right?) When a candidate says something that is blatantly, demonstrably false, they should receive a warning. After three warnings, their mic should be cut for that question. After ten, they should be kicked off the stage for the remainder of the debate. Donald Trump wouldn’t have lasted five minutes. But instead, they not only let him speak, they spent the next week repeating what he said in bold, exciting headlines. At least CNN finally realized that their headlines could actually fact-check Trump’s statements rather than just repeat them.

Above all, we will need to understand why people think the way they do, and learn to speak to them persuasively and truthfully but without elitism or condescension. This is one I know I’m not very good at myself; sometimes I get so frustrated with people who think the Earth is 6,000 years old (over 40% of Americans) or don’t believe in climate change (35% don’t think it is happening at all, another 30% don’t think it’s a big deal) that I come off as personally insulting them—and of course from that point forward they tune out. But irrational beliefs are not proof of defective character, and we must make that clear to ourselves as well as to others. We must not say that people are stupid or bad; but we absolutely must say that they are wrong. We must also remember that despite our best efforts, some amount of reactance will be inevitable; people simply don’t like having their beliefs challenged.

Yet even all this is probably not enough. Many people don’t watch mainstream media, or don’t believe it when they do (not without reason). Many people won’t even engage with friends or family members who challenge their political views, and will defriend or even disown them. We need some means of reaching these people too, and the hardest part may be simply getting them to listen to us in the first place. Perhaps we need more grassroots action—more protest marches, or even activists going door to door like Jehovah’s Witnesses. Perhaps we need to establish new media outlets that will be as widely accessible but held to a higher standard.

But we must find a way, and we have little time to waste.

Two terms in marginal utility of wealth

JDN 2457569

This post is going to be a little wonkier than most; I’m actually trying to sort out my thoughts and draw some public comment on a theory that has been dancing around my head for a while. The original idea of separating terms in the marginal utility of wealth was actually suggested by my boyfriend, and from there I’ve been trying to give it some more mathematical precision to see if I can come up with a way to test it experimentally. My thinking is also influenced by a paper Miles Kimball wrote about the distinction between happiness and utility.

There are lots of ways one could conceivably spend money—everything from watching football games to buying refrigerators to building museums to inventing vaccines. But insofar as we are rational (and we are after all about 90% rational), we’re going to try to spend our money in such a way that its marginal utility is approximately equal across various activities. You’ll buy one refrigerator, maybe two, but not seven, because the marginal utility of refrigerators drops off pretty fast; instead you’ll spend that money elsewhere. You probably won’t buy a house that’s twice as large if it means you can’t afford groceries anymore. I don’t think our spending is truly optimal at maximizing utility, but I think it’s fairly good.

Therefore, it doesn’t make much sense to break down marginal utility of wealth into all these different categories—cars, refrigerators, football games, shoes, and so on—because we already do a fairly good job of equalizing marginal utility across all those different categories. I could see breaking it down into a few specific categories, such as food, housing, transportation, medicine, and entertainment (and this definitely seems useful for making your own household budget); but even then, I don’t get the impression that most people routinely spend too much on one of these categories and not enough on the others.

However, I can think of two quite different fundamental motives behind spending money, which I think are distinct enough to be worth separating.

One way to spend money is on yourself, raising your own standard of living, making yourself more comfortable. This would include both football games and refrigerators, really anything that makes your life better. We could call this the consumption motive, or maybe simply the self-directed motive.

The other way is to spend it on other people, which, depending on your personality, can take either the form of philanthropy to help others or the form of self-aggrandizement to raise your own relative status. It’s also possible to do both at the same time in various combinations; while the Gates Foundation is almost entirely philanthropic and Trump Tower is almost entirely self-aggrandizing, Carnegie Hall falls somewhere in between, being at once a significant contribution to our society and an obvious attempt to bring praise and adulation to Carnegie himself. I would also include in this category spending on Veblen goods that exist mainly to show off your own wealth and status. We can call this spending the philanthropic/status motive, or simply the other-directed motive.

There is some spending which combines both motives: A car is surely useful, but a Ferrari is mainly for show—but then, a Lexus or a BMW could be either to show off or really because you like the car better. Some form of housing is a basic human need, and bigger, fancier houses are often better, but the main reason one builds mansions in Beverly Hills is to demonstrate to the world that one is fabulously rich. This complicates the theory somewhat, but basically I think the best approach is to try to separate a sort of “spending proportion” on such goods, so that say $20,000 of the Lexus is for usefulness and $15,000 is for show. Empirically this might be hard to do, but theoretically it makes sense.

One of the central mysteries in cognitive economics right now is the fact that self-reported happiness rises very little, if at all, as income increases (a finding which was recently replicated even in poor countries, where we might not expect it to hold), while self-reported satisfaction continues to rise indefinitely. A number of theories have been proposed to explain this apparent paradox.

This model might just be able to account for that, if by “happiness” we’re really talking about the self-directed motive, and by “satisfaction” we’re talking about the other-directed motive. Self-reported happiness seems to obey a rule that $100 is worth as much to someone with $10,000 as $25 is to someone with $5,000, or $400 to someone with $20,000.

Self-reported satisfaction seems to obey a different rule, such that each unit of additional satisfaction requires a roughly equal proportional increase in income.

By having a utility function with two terms, we can account for both of these effects. Total utility will be u(x), happiness h(x), and satisfaction s(x).

u(x) = h(x) + s(x)

To obey the above rule, happiness must obey harmonic utility, like this, for some constants h0 and r:

h(x) = h0 – r/x

Proof of this is straightforward, though to keep it simple I’ve hand-waved why it’s a power law:

Given

h'(2x) = 1/4 h'(x)

Let

h'(x) = r x^n

h'(2x) = r (2x)^n

r (2x)^n = 1/4 r x^n

n = -2

h'(x) = r/x^2

h(x) = – r x^(-1) + C

h(x) = h0 – r/x
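Since the whole argument rests on that doubling rule, here is a quick numerical check (my own sketch in Python, not part of the original derivation) that the harmonic form really does make a $100 gain at $10,000 of wealth worth about as much as a $25 gain at $5,000 or a $400 gain at $20,000:

```python
# Numerical check that h(x) = h0 - r/x reproduces the equivalence described above.

def happiness(x, h0=0.0, r=1.0):
    """Harmonic happiness term h(x) = h0 - r/x."""
    return h0 - r / x

for wealth, gain in [(5_000, 25), (10_000, 100), (20_000, 400)]:
    delta = happiness(wealth + gain) - happiness(wealth)
    print(f"Gain of ${gain} at wealth ${wealth}: delta h = {delta:.3e}")
# All three increments come out essentially equal (about 1e-6 times r),
# exactly as the rule requires.
```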

Miles Kimball also has some more discussion on his blog about how a utility function of this form works. (His statement about redistribution at the end is kind of baffling though; sure, dollar for dollar, redistributing wealth from the middle class to the poor would produce a higher gain in utility than redistributing wealth from the rich to the middle class. But neither is as good as redistributing from the rich to the poor, and the rich have a lot more dollars to redistribute.)

Satisfaction, however, must obey logarithmic utility, like this, for some constants s0 and k:

s(x) = s0 + k ln(x)

Proof of this is very simple, almost trivial:

Given

s'(x) = k/x

s(x) = k ln(x) + s0

Both of these functions share a serious problem: as x approaches zero, they go to negative infinity. For self-directed utility this almost makes sense (if your real consumption goes to zero, you die), but it makes no sense at all for other-directed utility; and since there are causes most of us would willingly die for, the disutility of dying should be large, but not infinite.

Therefore I think it’s probably better to use x + 1 in place of x. For the satisfaction term this means the proportional increase needed for the same effect shrinks slightly as wealth rises, but it allows both functions to take finite values at x = 0 instead of going to negative infinity:

h(x) = h0 – r/(x+1)

s(x) = s0 + k ln(x+1)

This makes s0 the baseline satisfaction of having no other-directed spending, though the baseline happiness of zero self-directed spending is actually h0 – r rather than just h0. If we want it to be h0, we could use this form instead:

h(x) = h0 + r x/(x+1)

This looks quite different, but it actually differs only by a constant, since r x/(x+1) = r – r/(x+1).

Therefore, my final answer for the utility of wealth (or possibly income, or spending? I’m not sure which interpretation is best just yet) is actually this:

u(x) = h(x) + s(x)

h(x) = h0 + r x/(x+1)

s(x) = s0 + k ln(x+1)

Marginal utility is then the derivatives of these:

h'(x) = r/(x+1)^2

s'(x) = k/(x+1)

Let’s assign some values to the constants so that we can actually graph these.

Let h0 = s0 = 0, so our baseline is just zero.

Furthermore, let r = k = 1, which would mean that the value of $1 is the same whether spent on yourself or on others, if $1 is all you have. (This is probably wrong, actually, but it’s the simplest place to start. Shortly I’ll discuss what happens as you vary the ratio r/k.)
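Here is a minimal sketch of how graphs like the ones below can be generated; Python with numpy and matplotlib is just my choice of tools, and nothing in the analysis depends on it:

```python
import numpy as np
import matplotlib.pyplot as plt

h0, s0, r, k = 0.0, 0.0, 1.0, 1.0      # baseline zero, r = k = 1 as above

x = np.linspace(0.01, 100, 1000)        # start just above zero so the log plot behaves
h = h0 + r * x / (x + 1)                # self-directed (happiness) term
s = s0 + k * np.log(x + 1)              # other-directed (satisfaction) term
u = h + s                               # total utility

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
for ax, scale in [(ax_lin, "linear"), (ax_log, "log")]:
    ax.plot(x, h, label="h(x): happiness")
    ax.plot(x, s, label="s(x): satisfaction")
    ax.plot(x, u, label="u(x): total utility")
    ax.set_xscale(scale)
    ax.set_xlabel("wealth x")
    ax.legend()
plt.tight_layout()
plt.show()
```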

Here is the result graphed on a linear scale:

Utility_linear

And now, graphed with wealth on a logarithmic scale:

Utility_log

As you can see, self-directed marginal utility drops off much faster than other-directed marginal utility, so the amount you spend on others relative to yourself rapidly increases as your wealth increases. If that doesn’t sound right, remember that I’m including Veblen goods as “other-directed”; when you buy a Ferrari, it’s not really for yourself. While proportional rates of charitable donation do not increase as wealth increases (it’s actually a U-shaped pattern, largely driven by poor people giving to religious institutions), they probably should (people should really stop giving to religious institutions! Even the good ones aren’t cost-effective, and some are very, very bad.). Furthermore, if you include spending on relative power and status as the other-directed motive, that kind of spending clearly does proportionally increase as wealth increases—gotta keep up with those Joneses.

If r/k = 1, that basically means you value others exactly as much as yourself, which I think is implausible (maybe some extreme altruists do that, and Peter Singer seems to think this would be morally optimal). r/k < 1 would mean you should never spend anything on yourself, which not even Peter Singer believes. I think r/k = 10 is a more reasonable estimate.

For any given value of r/k, there is an optimal ratio of self-directed versus other-directed spending, which can vary based on your total wealth.

Actually deriving what the optimal proportion would be requires a whole lot of algebra in a post that probably already has too much algebra, but the point is, there is one, and it will depend strongly on the ratio r/k, that is, the overall relative importance of self-directed versus other-directed motivation.

Take a look at this graph, which uses r/k = 10.

Utility_marginal

If you only have 2 to spend, you should spend it entirely on yourself, because up to that point the marginal utility of self-directed spending is always higher. If you have 3 to spend, you should spend most of it on yourself, but a little bit on other people, because after you’ve spent about 2.2 on yourself there is more marginal utility for spending on others than on yourself.

If your available wealth is W, you would spend some amount x on yourself, and then W-x on others:

u(x) = h(x) + s(W-x)

u(x) = r x/(x+1) + k ln(W – x + 1)

Then you take the derivative and set it equal to zero to find the local maximum. I’ll spare you the algebra, but this is the result of that optimization:

x = – 1 – r/(2k) + sqrt(r/k) sqrt(2 + W + r/(4k))

As long as k <= r (which more or less means that you care at least as much about yourself as about others—I think this is true of basically everyone) then as long as W > 0 (as long as you have some money to spend) we also have x > 0 (you will spend at least something on yourself).
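As a sanity check on that algebra, here is a small sketch (again my own, using scipy) that compares the closed-form solution against a brute-force numerical optimization, with r/k = 10 as in the graphs below:

```python
import numpy as np
from scipy.optimize import minimize_scalar

r, k = 10.0, 1.0     # r/k = 10

def utility(x, W):
    """Total utility when x goes to yourself and W - x goes to others."""
    return r * x / (x + 1) + k * np.log(W - x + 1)

def x_closed_form(W):
    """The closed-form optimum derived above."""
    return -1 - r / (2 * k) + np.sqrt(r / k) * np.sqrt(2 + W + r / (4 * k))

for W in [1, 3, 10, 100]:
    res = minimize_scalar(lambda x: -utility(x, W), bounds=(0, W), method="bounded")
    x_form = np.clip(x_closed_form(W), 0, W)   # constrain to the feasible range
    print(f"W = {W:>5}: numerical x = {res.x:.3f}, closed form x = {x_form:.3f}")
# The two columns agree; the W = 1 case is pinned at the boundary x = W,
# i.e. you spend everything on yourself.
```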

Below a certain threshold (depending on r/k), the optimal value of x is greater than W, which means that, if possible, you should be receiving donations from other people and spending them on yourself. (Otherwise, just spend everything on yourself). After that, x < W, which means that you should be donating to others. The proportion that you should be donating smoothly increases as W increases, as you can see on this graph (which uses r/k = 10, a figure I find fairly plausible):

Utility_donation

While I’m sure no one literally does this calculation, most people do seem to have an intuitive sense that you should donate an increasing proportion of your income to others as your income increases, and similarly that you should pay a higher proportion in taxes. This utility function would justify that—which is something that most proposed utility functions cannot do. In most models there is a hard cutoff: you should donate nothing up to the point where your own marginal utility falls to the marginal utility of donating, and from that point forward you should donate essentially every additional dollar. Maybe a case can be made for that ethically, but psychologically I think it’s a non-starter.

I’m still not sure exactly how to test this empirically. It’s already quite difficult to get people to answer questions about marginal utility in a way that is meaningful and coherent (people just don’t think about questions like “Which is worth more? $4 to me now or $10 if I had twice as much wealth?” on a regular basis). I’m thinking maybe they could play some sort of game where they have the opportunity to make money at the game, but must perform tasks or bear risks to do so, and can then keep the money or donate it to charity. The biggest problem I see with that is that the amounts would probably be too small to really cover a significant part of anyone’s total wealth, and therefore couldn’t cover much of their marginal utility of wealth function either. (This is actually a big problem with a lot of experiments that use risk aversion to try to tease out marginal utility of wealth.) But maybe with a variety of experimental participants, all of whom we get income figures on?

“But wait, there’s more!”: The clever tricks of commercials

JDN 2457565

I’m sure you’ve all seen commercials like this dozens of times:

A person is shown (usually in black-and-white) trying to use an ordinary consumer product, and failing miserably. Often their failure can only be attributed to the most abject incompetence, but the narrator will explain otherwise: “Old product is so hard to use. Who can handle [basic household activity] and [simple instructions]?”

“Struggle no more!” he says (it’s almost always a masculine narrator), and the video turns to full color as the same person is shown using the new consumer product effortlessly. “With innovative high-tech new product, you can do [basic household activity] with ease in no time!”

“Best of all, new product, a $400 value, can be yours for just five easy payments of $19.95. That’s five easy payments of $19.95!”

And then, here it comes: “But wait. There’s more! Order within the next 15 minutes and you will get two new products, for the same low price. That’s $800 in value for just five easy payments of $19.95! And best of all, your satisfaction is guaranteed! If you don’t like new product, return it within 30 days for your money back!” (A much quieter, faster voice says: “Just pay shipping and handling.”)

“Call 555-1234. That’s 555-1234.”

“CALL NOW!”

Did you ever stop and think about why so many commercials follow this same precise format?

In short, because it works. Indeed, it works a good deal better than simply presenting the product’s actual upsides and downsides and reporting a sensible market price—even if that sensible market price is lower than the “five easy payments of $19.95”.

We owe this style of marketing to one Ron Popeil; he was a prolific inventor, but none of his inventions have had as much impact as the marketing methods he used to sell them.

Let’s go through step by step. Why is the person using the old product so incompetent? Surely they could sell their product without implying that we don’t know how to do basic household activities like boiling pasta and cutting vegetables?

Well, first of all, many of these products do nothing but automate such simple household activities (like the famous Veg-O-Matic which cuts vegetables and “It slices! It dices!”), so if they couldn’t at least suggest that this is a lot of work they’re saving us, we’d have no reason to want their product.

But there’s another reason as well: Watching someone else fumble with basic household appliances is funny, as any fan of the 1950s classic I Love Lucy would attest (in fact, it may not be a coincidence that the one fumbling with the vegetables is often a woman who looks a lot like Lucy), and meta-analysis of humor in advertising has shown that it draws attention and triggers positive feelings.

Why use black-and-white for the first part? The switch to color enhances the feeling of contrast, and the color video is more appealing. You wouldn’t consciously say “Wow, that slicer changed the tomatoes from an ugly grey to a vibrant red!” but your subconscious mind is still registering that association.

Then they will hit you with appealing but meaningless buzzwords. For technology it will be things like “innovative”, “ground-breaking”, “high-tech” and “state-of-the-art”, while for foods and nutritional supplements it will be things like “all-natural”, “organic”, “no chemicals”, and “just like homemade”. It will generally be either so vague as to be unverifiable (what constitutes “innovative”?), utterly tautological (all carbon-based substances are “organic” and this term is not regulated), or transparently false but nonetheless not specific enough to get them in trouble (“just like homemade” literally can’t be true if you’re buying it from a TV ad). These give you positive associations without forcing the company to commit to making a claim they could actually be sued for breaking. It’s the same principle as the Applause Lights that politicians bring to every speech: “Three cheers for moms!” “A delicious slice of homemade apple pie!” “God Bless America!”

Occasionally you’ll also hear buzzwords that do have some meaning, but often not nearly as strong as people imagine: “Patent pending” means that they applied for the patent and it wasn’t summarily rejected—but not that they’ll end up getting it approved. “Certified organic” means that the USDA signed off on the farming standards, which is better than nothing but leaves a lot of wiggle room for animal abuse and irresponsible environmental practices.

And then we get to the price. They’ll quote some ludicrous figure for its “value”, which may be a price that no one has ever actually paid for a product of this kind, then draw a line through it and replace it with the actual price, which will be far lower.

Indeed, not just lower: The actual price is almost always $19.99 or $19.95. If the product is too expensive to make for them to sell it at $19.95, they will sell it at several payments of $19.95, and emphasize that these are “easy” payments, as though the difficulty of writing the check were a major factor in people’s purchasing decisions. (That actually is a legitimate concern for micropayments, but not for buying kitchen appliances!) They’ll repeat the price because repetition improves memory and also makes statements more persuasive.

This is what we call psychological pricing, and it’s one of those enormous market distortions that once you realize it’s there, you see it everywhere and start to wonder how our whole market system hasn’t collapsed on itself from the sheer weight of our overwhelming irrationality. The price of a product sold on TV will almost always be just slightly less than $20.

In general, most prices will take the form of $X.95 or $X.99; Costco even has a code system they use in the least significant digit. Continuous substances like gasoline can even be sold at fractional pennies, so they’ll usually be priced at $X.X99, which is not even a full penny less than the round number. It really does seem to work; despite being an eminently trivial difference from the round number, and typically rounded up from what it actually should have been, it just feels like less to see $19.95 rather than $20.00.

Moreover, I have less data to support this particular hypothesis, but I think the $20 threshold in particular is special, because $19.95 pops up so very, very often. I think most Americans have what we might call a “Jackson heuristic”, which is as follows: If something costs less than a Jackson (a $20 bill, though hopefully they’ll put Harriet Tubman on soon, so “Tubman heuristic”), you’re allowed to buy it on impulse without thinking too hard about whether it’s worth it. But if it costs more than a Jackson, you need to stop and think about it, weighing the alternatives before you come to a decision. Since these TV ads are almost always aiming for the thoughtless impulse buy, they try to scrape in just under the Jackson heuristic.

Of course, inflation will change the precise figure over time; in the 1980s it was probably a Hamilton heuristic, in the 1970s a Lincoln heuristic, in the 1940s a Washington heuristic. Soon enough it will be a Grant heuristic and then a Benjamin heuristic. In fact the threshold is probably something like “the closest commonly-used cash denomination to half a milliQALY”, but nobody does that calculation consciously; the estimate is made automatically, without thinking. That threshold probably works out the way it does because you could spend that much on impulse once a day, every single day, for only about 20% of your total income, and if you hold it to once a week you’re under 3% of your income. So if you follow the Jackson heuristic on impulse buys every week or so, your impulse spending is a “statistically insignificant” proportion of your income. (Why do we use that cutoff anyway? And suddenly we realize: The 95% confidence level is itself nothing more than a heuristic.)
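To see how those percentages pencil out, here is a quick back-of-the-envelope sketch; the dollar value of a QALY (about $40,000) and the annual income (about $36,500, roughly $100 a day) are my own assumptions, chosen only to be consistent with the figures quoted above:

```python
# Back-of-the-envelope check of the numbers above. The QALY value and the
# income figure are assumptions of mine, not figures from the post.
QALY_DOLLARS = 40_000      # assumed dollar value of one quality-adjusted life year
INCOME = 36_500            # assumed annual income (~$100/day)

threshold = 0.0005 * QALY_DOLLARS        # half a milliQALY
daily = 365 * threshold / INCOME         # impulse buy every day
weekly = 52 * threshold / INCOME         # impulse buy once a week

print(f"Half a milliQALY   ~ ${threshold:.2f}")        # ~ $20, one Jackson
print(f"Daily impulse buys  ~ {daily:.0%} of income")  # ~ 20%
print(f"Weekly impulse buys ~ {weekly:.1%} of income") # ~ 2.8%
```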

Then they take advantage of our difficulty in discounting time rationally, by spreading it into payments; “five easy payments of $19.95” sounds a lot more affordable than “$100”, but they are in fact basically the same. (You save $0.25 by the payment plan, maybe as much as a few dollars if your cashflow is very bad and thus you have a high temporal discount rate.)
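And to see just how little the payment plan is really worth, here is a quick present-value sketch; the monthly payment schedule and the discount rates are illustrative assumptions of mine, not figures from the post:

```python
# Present value of "five easy payments of $19.95" versus $100 up front,
# assuming the payments are made monthly.
def present_value(payment=19.95, n_payments=5, annual_rate=0.05):
    monthly_rate = (1 + annual_rate) ** (1 / 12) - 1
    return sum(payment / (1 + monthly_rate) ** t for t in range(n_payments))

for rate in [0.00, 0.05, 0.30]:   # 30% is roughly credit-card-level discounting
    pv = present_value(annual_rate=rate)
    print(f"Annual discount rate {rate:>4.0%}: PV of payments = ${pv:.2f} vs $100.00 up front")
# With no discounting you save $0.25; even with very steep discounting
# you save only a few dollars.
```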

And then, finally, “But wait. There’s more!” They offer you another of the exact same product, knowing full well you’ll probably have no use for the second one. They’ll multiply their previous arbitrary “value” by 2 to get an even more ludicrous number. Now it sounds like they’re doing you a favor, so you’ll feel obliged to do one back by buying the product. Gifts often have this effect in experiments: People are significantly more motivated to answer a survey if you give them a small gift beforehand, even if they get to keep it without taking the survey.

They’ll tell you to call in the next 15 minutes so that you feel like part of an exclusive club (when in reality you could probably call at any time and get the same deal). This also ensures that you’re staying in impulse-buy mode, since if you wait longer to think, you’ll miss the window!

They will offer a “money-back guarantee” to give you a sense of trust in the product, and this would be a rational response, except for that little disclaimer: “Just pay shipping and handling.” For many products, especially nutritional supplements (which cost basically nothing to make), the “handling” fee is high enough that they don’t lose much money, if any, even if you immediately send it back for a refund. Besides, they know that hardly anyone actually bothers to return products. Retailers are currently in a panic about “skyrocketing” rates of product returns that are still under 10%.

Then, they’ll repeat their phone number, followed by a remarkably brazen direct command: “Call now!” Personally I tend to bristle at direct commands, even from legitimate authorities; but apparently I’m unusual in that respect, and most people will in fact obey direct commands from random strangers as long as they aren’t too demanding. A famous demonstration of this you could try yourself if you’re feeling like a prankster is to walk into a room, point at someone, and say “You! Stand up!” They probably will. There’s a whole literature in social psychology about what makes people comply with commands of this sort.

And all of this to make you buy a useless gadget you’ll try to use once and then leave in a cupboard somewhere. What untold billions of dollars in wealth are wasted this way?

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr, saying that White people have no role to play in the liberation of Black people apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one simply is by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard–indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. The article says that what privilege gives you is “the freedom to be who you are”. Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. The article also tells us, “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism, you know: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also quite frequently apply some sense of moral responsibility to whole races. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t (that is literally feudal), but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me,” (Exodus 20:5)

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture merely responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” contains the majority of all human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations, but all hope is not lost; we still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we do ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found in almost everyone, and social structures found almost everywhere, that systematically discriminate against people because they are women.

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but about what policy we should have, we can finally make these biases disappear, or at least fade so much that they become negligible.

The unending madness of the gold standard

JDN 2457545

If you work in economics in any capacity (and much as with “How is the economy doing?”, you don’t even really need to be in macroeconomics), you will encounter many people who believe in the gold standard. Many of these people will be otherwise quite intelligent and educated; they often understand economics better than most people (not that this is saying a whole lot). Yet somehow they continue to hold—and fiercely defend—this incredibly bizarre and anachronistic view of macroeconomics.

They even bring it up at the oddest times; I recently encountered someone who wrote a long and rambling post arguing for drug legalization (which I largely agree with, by the way) and concluded it with #EndTheFed, not seeming to grasp the total and utter irrelevance of this juxtaposition. It seems like it was just a conditioned response, or maybe the sort of irrelevant but consistent coda originally perfected by Cato and his “Carthago delenda est.” Foederale Reservatum delendum est. Hey, maybe that’s why they’re called the Cato Institute.

So just how bizarre is the gold standard? Well, let’s look at what sort of arguments they use to defend it. I’ll use Charles Kadlec, a prominent Libertarian blogger on Forbes, as an example, with his “Top Ten Reasons That You Should Support the ‘Gold Commission’”:

  1. A gold standard is key to achieving a period of sustained, 4% real economic growth.
  2. A gold standard reduces the risk of recessions and financial crises.
  3. A gold standard would restore rising living standards to the middle-class.
  4. A gold standard would restore long-term price stability.
  5. A gold standard would stop the rise in energy prices.
  6. A gold standard would be a powerful force for restoring fiscal balance to federal state and local governments.
  7. A gold standard would help save Medicare and Social Security.
  8. A gold standard would empower Main Street over Wall Street.
  9. A gold standard would increase the liberty of the American people.
  10. Creation of a gold commission will provide the forum to chart a prudent path toward a 21st century gold standard.

Number 10 can be safely ignored, as Kadlec clearly just ran out of reasons and, to make a round number, tacked on the implicit assumption of the entire article, namely that this ‘gold commission’ would actually, realistically lead us toward a gold standard. (Without it, the other 9 reasons are just non sequiturs.)

So let’s look at the other 9, shall we? Literally none of them are true. Several are outright backward.

You know a policy is bad when even one of its most prominent advocates can’t think of a single real benefit it would have. A lot of quite bad policies do have perfectly real benefits; they’re just totally outweighed by their costs. For example, cutting the top income tax rate to 20% probably would contribute something to economic growth. Not a lot, and it would cut a swath through the federal budget and dramatically increase inequality—but it’s not all downside. Yet Kadlec couldn’t even think of one benefit of the gold standard that actually holds up. (I can do his work for him: I do know of one benefit of the gold standard, but as I’ll get to momentarily, it’s quite small and can easily be achieved in better ways.)

First of all, it’s quite clear that the gold standard did not increase economic growth. If you cherry-pick your years properly, you can make it seem like Nixon leaving the gold standard hurt growth, but if you look at the real long-run trends in economic growth it’s clear that we had really erratic growth up until about the 1910s (the surge of government spending in WW1 and the establishment of the Federal Reserve), at which point we went through a temporary surge recovering from the Great Depression and then during WW2; finally, if you smooth out the business cycle, our growth rates have slowly trended downward as growth in productivity has gradually slowed down.

Here’s GDP growth from 1800 to 1900, when we were on the classical gold standard:

US_GDP_growth_1800s

Here’s GDP growth from 1929 to today, using data from the Bureau of Economic Analysis:

US_GDP_growth_BEA

Also, both of these are total GDP growth (because that is what Kadlec said), which means that part of what you’re seeing here is population growth rather than growth in income per person. Here’s GDP per person in the 1800s:

US_GDP_growth_1800s

If you didn’t already know, I bet you can’t guess where on those graphs we left the gold standard, which you’d clearly be able to do if the gold standard had this dramatic “double your GDP growth” kind of effect. I can’t immediately rule out some small benefit to the gold standard just from this data, but don’t worry; more thorough economic studies have done that. Indeed, it is the mainstream consensus among economists today that the gold standard is what caused the Great Depression.

Indeed, there’s a whole subfield of historical economics research that basically amounts to “What were they thinking?” trying to explain why countries stayed on the gold standard for so long when it clearly wasn’t working. Here’s a paper trying to argue it was a costly signal of your “rectitude” in global bond markets, but I find much more compelling the argument that it was psychological: Their belief in the gold standard was simply too strong, so confirmation bias kept holding them back from what needed to be done. They were like my aforementioned #EndTheFed acquaintance.

Then we get to Kadlec’s second point: Does the gold standard reduce the risk of financial crises? Let’s also address point 4, which is closely related: Does the gold standard improve price stability? Tell that to 1929.

In fact, financial crises were more common on the classical gold standard; the period of pure fiat monetary policy was so stable that it was called the Great Moderation, until the crash in 2008 screwed it all up—and that crash occurred essentially outside the standard monetary system, in the “shadow banking system” of unregulated and virtually unlimited derivatives. Had we actually forced banks to stay within the light of the standard banking system, the Great Moderation might have continued indefinitely.

As for “price stability”, that’s sort of true if you look at the long run, because prices were as likely to go down as they were to go up. But that isn’t what we mean by “price stability”. A system with good price stability will have a low but positive and steady level of inflation, and will therefore exhibit some long-run increases in price levels; it won’t have prices jump up and down erratically and end up on average the same.

For jump up and down is what prices did on the gold standard, as you can see from FRED:

[Figure: US inflation rate, long run (FRED)]

This is something we could have predicted in advance; the price of any given product jumps up and down over time, and gold is just one product among many. Tying prices to gold makes no more sense than tying them to any other commodity.

As for stopping the rise in energy prices, energy prices aren’t rising. Even if they were (and they could at some point), the only way the gold standard would stop that is by triggering deflation (and therefore recession) in the rest of the economy.

Regarding number 6, I don’t see how the fiscal balance of federal and state governments is improved by periodic bouts of deflation that make their debt unpayable.

As for number 7, saving Medicare and Social Security, their payments out are tied to inflation and their payments in are tied to nominal GDP, so overall inflation has very little effect on their long-term stability. In any case, the problem with Medicare is spiraling medical costs (which Obamacare has done a lot to fix), and the problem with Social Security is just the stupid arbitrary cap on the income subject to payroll tax; the gold standard would do very little to solve either of those problems, though I guess it would make the nominal income cap less binding by triggering deflation, which is just about the worst way to get around a nominal cap I’ve ever heard.

Regarding 8 and 9, I don’t even understand why Kadlic thinks that going to a gold standard would empower individuals over banks (does it seem like individuals were empowered over banks in the “Robber Baron Era”?), or what in the world it has to do with giving people more liberty (all that… freedom… you lose… when the Fed… stabilizes… prices?), so I don’t even know where to begin on those assertions. You know what empowers people over banks? The Consumer Financial Protection Bureau. You know what would enhance liberty? Ending mass incarceration. Libertarians fight tooth and nail against the former; sometimes they get behind the latter, but sometimes they don’t; Gary Johnson for some bizarre reason believes in privatization of prisons, which are directly linked to the surge in US incarceration.

The only benefit I’ve been able to come up with for the gold standard is as a commitment mechanism, something the Federal Reserve could do to guarantee its future behavior and thereby reduce the fear that it will suddenly change course on its past promises. This would make forward guidance a lot more effective at changing long-term interest rates, because people would have reason to believe that the Fed means what it says when it projects its decisions 30 years out.

But there are much simpler and better commitment mechanisms the Fed could use. They could commit to a Taylor Rule or to nominal GDP targeting, both of which mainstream economists have been advocating for decades. There are some definite downsides to both proposals, but also some important upsides; and in any case they’re both obviously better than the gold standard and serve the same forward guidance function.
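
To make the commitment idea concrete, here is a minimal sketch of the textbook Taylor Rule. The 0.5 coefficients, the 2% inflation target, and the 2% neutral real rate are the conventional illustrative values, not anything the Fed has formally adopted:

    # Textbook Taylor Rule: a mechanical formula a central bank could commit to.
    # i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)
    # Coefficients and targets below are the standard illustrative values.

    def taylor_rule_rate(inflation, output_gap,
                         real_neutral_rate=2.0, inflation_target=2.0):
        """Recommended nominal policy rate, in percent."""
        return (real_neutral_rate + inflation
                + 0.5 * (inflation - inflation_target)
                + 0.5 * output_gap)

    # Example: 3% inflation and output 1% above potential
    print(taylor_rule_rate(inflation=3.0, output_gap=1.0))  # 6.0

The point is simply that such a rule is mechanical: plug in inflation and the output gap and the policy rate follows, leaving no discretion for the central bank to renege on its promises.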

Indeed, it’s really quite baffling that so many people believe in the gold standard. It cries out for some sort of psychological explanation, as to just what cognitive heuristic is failing when otherwise-intelligent and highly-educated people get monetary policy so deeply, deeply wrong. A lot of them don’t even seem to grasp when or how we left the gold standard; it really happened when FDR suspended gold convertibility in 1933. After that, under the Bretton Woods system, only national governments could exchange money for gold, and the Nixon shock that people normally think of as “ending the gold standard” was just the final nail in the coffin—and clearly necessary, since inflation was rapidly eating through our gold reserves.

A lot of it seems to come down to a deep distrust of government, especially federal government (I still do not grok why the likes of Ron Paul think state governments are so much more trustworthy than the federal government); the Federal Reserve is a government agency (sort of) and is therefore not to be trusted—and look, it has federal right there in the name.

But why do people hate government so much? Why do they think politicians are much less honest than they actually are? Part of it could have to do with the terrifying expansion of surveillance and weakening of civil liberties in the face of any perceived outside threat (Sedition Act, PATRIOT Act, basically the same thing), but often the same people defending those programs are the ones who otherwise constantly complain about Big Government. Why do polls consistently show that people don’t trust the government, but want it to do more?

I think a lot of this comes down to the vague meaning of the word “government” and the associations we make with particular questions about it. When I ask “Do you trust the government?” you think of the NSA and the Vietnam War and Watergate, and you answer “No.” But when I ask “Do you want the government to do more?” you think of the failure at Katrina, the refusal to expand Medicaid, the pitiful attempts at reducing carbon emissions, and you answer “Yes.” When I ask if you like the military, your conditioned reaction is to say the patriotic thing, “Yes.” But if I ask whether you like the wars we’ve been fighting lately, you think about the hundreds of thousands of people killed and the wanton destruction to achieve no apparent actual objective, and you say “No.” Most people don’t come to these polls with thought-out opinions they want to express; the questions evoke emotional responses in them and they answer accordingly. You can also evoke different responses by asking “Should we cut government spending?” (People say “Yes.”) versus asking “Should we cut military spending, Social Security, or Medicare?” (People say “No.”) The former evokes a sense of abstract government taking your tax money; the latter evokes the realization that this money is used for public services you value.

So, the gold standard has acquired positive emotional vibes, and the Fed has acquired negative emotional vibes.

The former is fairly easy to explain: “good as gold” is an ancient saying, and “the gold standard” is even a saying we use in general to describe the right way of doing something (“the gold standard in prostate cancer treatment”). Humans have always had a weird relationship with gold; something about its timeless and noncorroding shine mesmerizes us. That’s why you occasionally get proposals for a silver standard, but no one ever seems to advocate an oil standard, an iron standard, or a lumber standard, which would make about as much sense.

The latter is a bit more difficult to explain: What did the Fed ever do to you? But I think it might have something to do with the complexity of sound monetary policy, and the resulting air of technocratic mystery surrounding it. Moreover, the Fed actively cultivates this image, by using “open-market operations” and “quantitative easing” to “target interest rates”, instead of just saying, “We’re printing money.” There may be some good reasons to do it this way, but a lot of it really does seem to be intended to obscure the truth from the uninitiated and perpetuate the myth that they are almost superhuman. “It’s all very complicated, you see; you wouldn’t understand.” People are hoarding their money, so there’s not enough money in circulation, so prices are falling, so you’re printing more money and trying to get it into circulation. That’s really not that complicated. Indeed, if it were, we wouldn’t be able to write a simple equation like a Taylor Rule or nominal GDP targeting in order to automate it!

The reason so many people become gold bugs after taking a couple of undergraduate courses in economics, then, is that this teaches them enough that they feel they have seen through the veil; the curtain has been pulled open and the all-powerful Wizard revealed to be an ordinary man at a control panel. (Spoilers? The movie came out in 1939. Actually, it was kind of about the gold standard.) “What? You’ve just been printing money all this time? But that is surely madness!” They don’t actually understand why printing money is a perfectly sensible thing to do on many occasions, and it feels to them a lot like what would happen if they just went around printing money (counterfeiting) or what a sufficiently corrupt government could do if they printed unlimited amounts (which is why they keep bringing up Zimbabwe). They now grasp what is happening, but not why. A little learning is a dangerous thing.

Now as for why Paul Volcker wants to go back to Bretton Woods? That, I cannot say. He’s definitely got more than a little learning. At least he doesn’t want to go back to the classical gold standard.

Why it matters that torture is ineffective

JDN 2457531

Like “longest-ever-serving Speaker of the House sexually abuses teenagers” and “NSA spy program is trying to monitor the entire telephone and email system”, the news that the US government systematically tortures suspects is an egregious violation that goes to the highest levels of our government—that for some reason most Americans don’t particularly seem to care about.

The good news is that President Obama signed an executive order in 2009 banning torture domestically, reversing official policy under the Bush Administration, and then better yet in 2014 expanded the order to apply to all US interests worldwide. If this is properly enforced, perhaps our history of hypocrisy will finally be at its end. (Well, not if Trump wins…)

Yet as often seems to happen, there are two extremes in this debate and I think they’re both wrong.

The really disturbing side is “Torture works and we have to use it!” The preferred mode of argumentation for this is the “ticking time bomb scenario”, in which we have some urgent disaster to prevent (such as a nuclear bomb about to go off) and torture is the only way to stop it from happening. Surely then torture is justified? This argument may sound plausible, but as I’ll get to below, this is a lot like saying, “If aliens were attacking from outer space trying to wipe out humanity, nuclear bombs would probably be justified against them; therefore nuclear bombs are always justified and we can use them whenever we want.” If you can’t wait for my explanation, The Atlantic skewers the argument nicely.

Yet the opponents of torture have brought this sort of argument on themselves, by staking out a position so extreme as “It doesn’t matter if torture works! It’s wrong, wrong, wrong!” This kind of simplistic deontological reasoning is very appealing and intuitive to humans, because it casts the world into simple black-and-white categories. To show that this is not a strawman, here are several different people all making this same basic argument, that since torture is illegal and wrong it doesn’t matter if it works and there should be no further debate.

But the truth is, if it really were true that the only way to stop a nuclear bomb from leveling Los Angeles was to torture someone, it would be entirely justified—indeed obligatory—to torture that suspect and stop that nuclear bomb.

The problem with that argument is not just that this is not our usual scenario (though it certainly isn’t); it goes much deeper than that:

That scenario makes no sense. It wouldn’t happen.

To use the example the late Antonin Scalia used from an episode of 24 (perhaps the most egregious Fictional Evidence Fallacy ever committed), if there ever is a nuclear bomb planted in Los Angeles, that would literally be one of the worst things that ever happened in the history of the human race—literally a Holocaust in the blink of an eye. We should be prepared to cause extreme suffering and death in order to prevent it. But not only is that event (fortunately) very unlikely, torture would not help us.

Why? Because torture just doesn’t work that well.

It would be too strong to say that it doesn’t work at all; it’s possible that it could produce some valuable intelligence—though clear examples of such results are amazingly hard to come by. Some social scientists have found empirical results suggesting some effectiveness for torture, so we can’t say with any certainty that it is completely useless. (For obvious reasons, a randomized controlled experiment in torture would be wildly unethical, so none have ever been attempted.) But to justify torture it isn’t enough that it could work sometimes; it has to work vastly better than any other method we have.

And our empirical data is in fact reliable enough to show that that is not the case. Torture often produces unreliable information, as we would expect from the game theory involved—your incentive is to stop the pain, not provide accurate intel; the psychological trauma that torture causes actually distorts memory and reasoning; and as a matter of fact basically all the useful intelligence obtained in the War on Terror was obtained through humane interrogation methods. As interrogation experts agree, torture just isn’t that effective.

In principle, there are four basic cases to consider:

1. Torture is vastly more effective than the best humane interrogation methods.

2. Torture is slightly more effective than the best humane interrogation methods.

3. Torture is as effective as the best humane interrogation methods.

4. Torture is less effective than the best humane interrogation methods.

The evidence points most strongly to case 4, which would mean that torture is a no-brainer; if it doesn’t even work as well as other methods, it’s absurd to use it. You’re basically kicking puppies at that point—purely sadistic violence that accomplishes nothing. But the data isn’t clear enough for us to rule out case 3 or even case 2. There is only one case we can strictly rule out, and that is case 1.

But it was only in case 1 that torture could ever be justified!

If you’re trying to justify doing something intrinsically horrible, it’s not enough that it has some slight benefit.

People seem to have this bizarre notion that we have only two choices in morality:

Either we are strict deontologists, and wrong actions can never be justified by good outcomes, in which case apparently vaccines are morally wrong, because stabbing children with needles is wrong. To be fair, some people seem to actually believe this; but then, some people believe the Earth is less than 10,000 years old.

Or alternatively we are the bizarre strawman concept most people seem to have of utilitarianism, under which any wrong action can be justified by even the slightest good outcome, in which case all you need to do to justify slavery is show that it would lead to a 1% increase in per-capita GDP. Sadly, there honestly do seem to be economists who believe this sort of thing. Here’s one arguing that US chattel slavery was economically efficient, and some of the more extreme arguments for why sweatshops are good can take on this character. Sweatshops may be a necessary evil for the time being, but they are still an evil.

But what utilitarianism actually says (and I consider myself some form of nuanced rule-utilitarian, though actually I sometimes call it “deontological consequentialism” to emphasize that I mean to synthesize the best parts of the two extremes) is not that the ends always justify the means, but that the ends can justify the means—that it can be morally good or even obligatory to do something intrinsically bad (like stabbing children with needles) if it is the best way to accomplish some greater good (like saving them from measles and polio). But the good actually has to be greater, and it has to be the best way to accomplish that good.

To see why this latter proviso is important, consider the real-world ethical issues involved in psychology experiments. The benefits of psychology experiments are already quite large, and poised to grow as the science improves; one day the benefits of cognitive science to humanity may be even larger than the benefits of physics and biology are today. Imagine a world without mood disorders or mental illness of any kind; a world without psychopathy, where everyone is compassionate; a world where everyone is achieving their full potential for happiness and self-actualization. Cognitive science may yet make that world possible—and I haven’t even gotten into its applications in artificial intelligence.

To achieve that world, we will need a great many psychology experiments. But does that mean we can just corral people off the street and throw them into psychology experiments without their consent—or perhaps even their knowledge? That we can do whatever we want in those experiments, as long as it’s scientifically useful? No, it does not. We have ethical standards in psychology experiments for a very good reason, and while those ethical standards do slightly reduce the efficiency of the research process, the reduction is small enough that the moral choice is obviously to retain the ethics committees and accept the slight reduction in research efficiency. Yes, randomly throwing people into psychology experiments might actually be slightly better in purely scientific terms (larger and more random samples)—but it would be terrible in moral terms.

Along similar lines, even if torture works about as well or even slightly better than other methods, that’s simply not enough to justify it morally. Making a successful interrogation take 16 days instead of 17 simply wouldn’t be enough benefit to justify the psychological trauma to the suspect (and perhaps the interrogator!), the risk of harm to the falsely accused, or the violation of international human rights law. And in fact a number of terrorism suspects were waterboarded for months, so even the idea that it could shorten the interrogation is pretty implausible. If anything, torture seems to make interrogations take longer and give less reliable information—case 4.

A lot of people seem to have this impression that torture is amazingly, wildly effective, that a suspect who won’t crack after hours of humane interrogation can be tortured for just a few minutes and give you all the information you need. This is exactly what we do not find empirically; if he didn’t crack after hours of talk, he won’t crack after hours of torture. If you literally only have 30 minutes to find the nuke in Los Angeles, I’m sorry; you’re not going to find the nuke in Los Angeles. No adversarial interrogation is ever going to be completed that quickly, no matter what technique you use. Evacuate as many people to safe distances or underground shelters as you can in the time you have left.

This is why the “ticking time-bomb” scenario is so ridiculous (and so insidious); that’s simply not how interrogation works. The best methods we have for “rapid” interrogation of hostile suspects take hours or even days, and they are humane—building trust and rapport is the most important step. The goal is to get the suspect to want to give you accurate information.

For the purposes of the thought experiment, okay, you can stipulate that it would work (this is what the Stanford Encyclopedia of Philosophy does). But now all you’ve done is made the thought experiment more distant from the real-world moral question. The closest real-world examples we’ve ever had involved individual crimes, probably too small to justify the torture (as bad as a murdered child is, think about what you’re doing if you let the police torture people). But by the time the terrorism to be prevented is large enough to really be sufficient justification, it (1) hasn’t happened in the real world and (2) surely involves terrorists who are sufficiently ideologically committed that they’ll be able to resist the torture. If such a situation arises, of course we should try to get information from the suspects—but what we try should be our best methods, the ones that work most consistently, not the ones that “feel right” and maybe happen to work on occasion.

Indeed, the best explanation I have for why people use torture at all, given its horrible effects and at-best mediocre effectiveness, is that it feels right.

When someone does something terrible (such as an act of terrorism), we rightfully reduce our moral valuation of them relative to everyone else. If you are even tempted to deny this, suppose a terrorist and a random civilian are both inside a burning building and you only have time to save one. Of course you save the civilian and not the terrorist. And that’s still true even if you know that once the terrorist was rescued he’d go to prison and never be a threat to anyone else. He’s just not worth as much.

In the most extreme circumstances, a person can be so terrible that their moral valuation should be effectively zero: If the only person in a burning building is Stalin, I’m not sure you should save him even if you easily could. But it is a grave moral mistake to think that a person’s moral valuation should ever go negative, yet I think this is something that people do when confronted with someone they truly hate. The federal agents torturing those terrorists didn’t merely think of them as worthless—they thought of them as having negative worth. They felt it was a positive good to harm them. But this is fundamentally wrong; no sentient being has negative worth. Some may be so terrible as to have essentially zero worth; and we are often justified in causing harm to some in order to save others. It would have been entirely justified to kill Stalin (as a matter of fact he died of heart disease and old age), to remove the continued threat he posed; but to torture him would not have made the world a better place, and actually might well have made it worse.

Yet I can see how psychologically it could be useful to have a mechanism in our brains that makes us hate someone so much we view them as having negative worth. It makes it a lot easier to harm them when necessary, makes us feel a lot better about ourselves when we do. The idea that any act of homicide is a tragedy but some of them are necessary tragedies is a lot harder to deal with than the idea that some people are just so evil that killing or even torturing them is intrinsically good. But some of the worst things human beings have ever done ultimately came from that place in our brains—and torture is one of them.

The powerful persistence of bigotry

JDN 2457527

Bigotry has been a part of human society since the beginning—people have been hating people they perceive as different since as long as there have been people, and maybe even before that. I wouldn’t be surprised to find that different tribes of chimpanzees or even elephants hold bigoted beliefs about each other.

Yet it may surprise you that neoclassical economics has basically no explanation for this. There is a long-standing famous argument that bigotry is inherently irrational: If you hire based on anything aside from actual qualifications, you are leaving money on the table for your company. Because women CEOs are paid less and perform better, simply ending discrimination against women in top executive positions could save any typical large multinational corporation tens of millions of dollars a year. And yet, they don’t! Fancy that.

More recently there has been work on the concept of statistical discrimination, under which it is rational (in the sense of narrowly-defined economic self-interest) to discriminate because categories like race and gender may provide some statistically valid stereotype information. For example, “Black people are poor” is obviously not true across the board, but race is strongly correlated with wealth in the US; “Asians are smart” is not a universal truth, but Asian-Americans do have very high educational attainment. In the absence of more reliable information that might be your best option for making good decisions. Of course, this creates a vicious cycle where people in the positive stereotype group are better off and have more incentive to improve their skills than people in the negative stereotype group, thus perpetuating the statistical validity of the stereotype.
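
To see the bare logic of statistical discrimination (and only the logic—every number and name here is made up purely for illustration), here is a toy sketch of the inference involved, and of why individual information should swamp it:

    # Toy model of statistical discrimination, with entirely made-up numbers.
    # Absent individual information, the employer falls back on a group average;
    # once an individual signal (a resume, a test score) exists, it dominates.
    group_average_productivity = {"group_A": 0.7, "group_B": 0.5}  # hypothetical

    def estimated_productivity(group, individual_score=None):
        if individual_score is not None:
            return individual_score       # individual info is far more informative
        return group_average_productivity[group]

    print(estimated_productivity("group_B"))                        # 0.5: the stereotype fills the gap
    print(estimated_productivity("group_B", individual_score=0.9))  # 0.9: the stereotype is now irrelevant

Even in this caricature, the stereotype only matters when there is no better information—which is exactly the assumption challenged below.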

But of course that assumes that the stereotypes are statistically valid, and that employers don’t have more reliable information. Yet many stereotypes aren’t even true statistically: If “women are bad drivers”, then why do men cause 75% of traffic fatalities? Furthermore, in most cases employers have more reliable information—resumes with education and employment records. Asian-Americans are indeed more likely to have bachelor’s degrees than Latino Americans, but when it says right on Mr. Lorenzo’s resume that he has a B.A. and on Mr. Suzuki’s resume that he doesn’t, that racial stereotype no longer provides you with any further information. Yet even if the resumes are identical, employers will be more likely to hire a White applicant than a Black applicant, and more likely to hire a male applicant than a female applicant—we have directly tested this in experiments. In an experiment where employers had direct performance figures in front of them, they were still more likely to choose the man when both had the same scores—and sometimes even when the woman had a higher score!

Even our assessments of competence are often biased, probably subconsciously; given the same essay to review, most reviewers find more spelling errors and are more concerned about those errors if they are told that the author is Black. If they thought the author was White, they thought of the errors as “minor mistakes” by a student with “otherwise good potential”; but if they thought the author was Black, they “can’t believe he got into this school in the first place”. These reviewers were reading the same essay. The alleged author’s race was decided randomly. Most if not all of these reviewers were not consciously racist. Subconscious racial biases are all over the place; almost everyone exhibits some subconscious racial bias.

No, discrimination isn’t just rational inference based on valid (if unfortunate and self-reinforcing) statistical trends. There is a significant component of just outright irrational bigotry.

We’re seeing this play out in North Carolina; due to their arbitrary discrimination against lesbian, gay, bisexual and especially transgender people, they are now hemorrhaging jobs as employers pull out, and their federal funding for student loans is now in jeopardy due to the obvious Title IX violation. This is obviously not in the best interest of the people of North Carolina (even the ones who aren’t LGBT!); and it’s all being justified on the grounds of an epidemic of sexual assaults by people pretending to be trans that doesn’t even exist. It turns out that more Republican Senators have been arrested for sexual misconduct in bathrooms than transgender people—and while the number of transgender people in the US is surprisingly hard to measure, it’s clearly a lot larger than the number of Republican Senators!

In fact, discrimination is even more irrational than it may seem, because empirically the benefits of discrimination (such as they are—short-term narrow economic self-interest) fall almost entirely on the rich while the harms fall mainly on the poor, yet poor people are much more likely to be racist! Since income and education are highly correlated, education accounts for some of this effect. This is reason to be hopeful, for as educational attainment has soared, we have found that racism has decreased.

But education doesn’t seem to explain the full effect. One theory to account for this is what’s called last-place aversion: a highly pernicious heuristic where people are less concerned about their own absolute status than they are about not having the worst status. In economic experiments, people are usually more willing to give money to people worse off than them than to those better off than them—unless giving it to the worse-off would make those people better off than they themselves are. I think we actually need further study to see what happens if it would make those other people exactly as well-off as they are, because that turns out to be absolutely critical to whether people would be willing to support a basic income. In other words, does “tied for last” count as last? Would they rather play a game where everyone gets $100, or one where they get $50 but everyone else only gets $10?

I would hope that humanity is better than that—that we would want to play the $100 game, which is analogous to a basic income. But when I look at the extreme and persistent inequality that has plagued human society for millennia, I begin to wonder if perhaps there really are a lot of people who think of the world in such zero-sum, purely relative terms, and care more about being better than others than they do about doing well themselves. Perhaps the horrific poverty of Sub-Saharan Africa and Southeast Asia is, for many First World people, not a bug but a feature; we feel richer when we know they are poorer. Scarcity seems to amplify this zero-sum thinking; racism gets worse whenever we have economic downturns. Precisely because discrimination is economically inefficient, this can create a vicious cycle where poverty causes bigotry which worsens poverty.

There is also something deeper going on, something evolutionary; bigotry is part of what I call the tribal paradigm, the core aspect of human psychology that defines identity in terms of in-groups which are good and out-groups which are bad. We will probably never fully escape the tribal paradigm, but this is not a reason to give up hope; we have made substantial progress in reducing bigotry in many places. What seems to happen is that people learn to expand their mental tribe, so that it encompasses larger and larger groups—not just White Americans but all Americans, or not just Americans but all human beings. Peter Singer calls this the Expanding Circle (also the title of his book on it). We may one day be able to make our tribe large enough to encompass all sentient beings in the universe; at that point, it’s just fine if we are only interested in advancing the interests of those in our tribe, because our tribe would include everyone. Yet I don’t think any of us are quite there yet, and some people have a really long way to go.

But with these expanding tribes in mind, perhaps I can leave you with a fact that is as counter-intuitive as it is encouraging, and even easier still to take out of context: Racism was better than what came before it. What I mean by this is not that racism is good—of course it’s terrible—but that in order to be racism, to divide the whole world into a small number of “racial groups”, people already had to enormously expand their mental tribe from where it started. When we evolved on the African savannah millions of years ago, our tribe was 150 people; to this day, that’s about the number of people we actually feel close to and interact with on a personal level. We could have stopped there, and for millennia we did. But over time we managed to expand beyond that number, to a village of 1,000, a town of 10,000, a city of 100,000. More recently we attained mental tribes of whole nations, in some cases hundreds of millions of people. Racism is about that same scale, if not a bit larger; what most people (rather arbitrarily, and in a way that changes over time) call “White” constitutes about a billion people. “Asian” (including South Asian) is almost four billion. These are astonishingly huge figures, some seven orders of magnitude larger than what we originally evolved to handle. The ability to feel empathy for all “White” people is just a little bit smaller than the ability to feel empathy for all people period. Similarly, while today the gender in “all men are created equal” is jarring to us, the idea at the time really was an incredibly radical broadening of the moral horizon—Half the world? Are you mad?

Therefore I am confident that one day, not too far from now, the world will take that next step, that next order of magnitude, which many of us already have (or try to), and we will at last conquer bigotry, and if not eradicate it entirely then force it completely into the most distant shadows and deny it its power over our society.

What is the processing power of the human brain?

JDN 2457485

Futurists have been predicting that AI will “surpass humans” any day now for something like 50 years. Eventually they’ll be right, but it will be more or less purely by chance, since they’ve been making the same prediction longer than I’ve been alive. (Similarly, whenever someone projects the date at which immortality will be invented, it always seems to coincide with just slightly before the end of the author’s projected life expectancy.) Any technology that is “20 years away” will be so indefinitely.

There are a lot of reasons why this prediction keeps failing so miserably. One is an apparent failure to grasp the limitations of exponential growth. I actually think the most important is that a lot of AI fans don’t seem to understand how human cognition actually works—that it is primarily social cognition, where most of the processing has already been done and given to us as cached results, some of them derived centuries before we were born. We are smart enough to run a civilization with airplanes and the Internet not because any individual human is so much smarter than any other animal, but because all humans together are—and other animals haven’t quite figured out how to unite their cognition in the same way. We’re about 3 times smarter than any other animal as individuals—and several billion times smarter when we put our heads together.

A third reason is that even if you have sufficient computing power, that is surprisingly unimportant; what you really need are good heuristics to make use of your computing power efficiently. Any nontrivial problem is too complex to brute-force by any conceivable computer, so simply increasing computing power without improving your heuristics will get you nowhere. Conversely, if you have really good heuristics like the human brain does, you don’t even need all that much computing power. A chess grandmaster was once asked how many moves ahead he can see on the board, and he replied: “I only see one move ahead. The right one.” In cognitive science terms, people asked him how much computing power he was using, expecting him to say something far beyond normal human capacity, and he replied that he was using hardly any—it was all baked into the heuristics he had learned from years of training and practice.

Making an AI capable of human thought—a true artificial person—will require a level of computing power we can already reach (as long as we use huge supercomputers), but that is like having the right material. To really create the being we will need to embed the proper heuristics. We are trying to make David, and we have finally mined enough marble—now all we need is Michelangelo.

But another reason why so many futurists have failed in their projections is that they have wildly underestimated the computing power of the human brain. Reading 1980s cyberpunk is hilarious in hindsight; Neuromancer actually quite accurately projected the number of megabytes that would flow through the Internet at any given moment, but somehow thought that a few hundred megaflops would be enough to copy human consciousness. The processing power of the human brain is actually on the order of a few petaflops. So, you know, Gibson was only off by a factor of a few million.

We can now match petaflops—the world’s fastest supercomputer is actually about 30 petaflops. Of course, it cost hundreds of millions of dollars to build, and requires 24 megawatts to run and cool, which is about the output of a mid-sized solar power station. The human brain consumes only about 400 kcal per day, which is about 20 watts—roughly the consumption of a typical CFL lightbulb. Even if you count the rest of the human body as necessary to run the human brain (which I guess is sort of true), we’re still clocking in at about 100 watts—so even though supercomputers can now process at the same speed, our brains are still hundreds of thousands of times as energy-efficient.
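
Since the comparison is just arithmetic, here is the back-of-the-envelope version, using the rough figures quoted above (order-of-magnitude estimates, nothing more):

    # Back-of-the-envelope energy comparison, treating the supercomputer and the
    # brain as doing comparable petaflop-scale work (rough figures from above).
    supercomputer_watts = 24e6   # ~24 MW to run and cool
    brain_watts = 20             # ~400 kcal/day is roughly 20 W
    body_watts = 100             # if you charge the whole body to the brain

    print(supercomputer_watts / brain_watts)   # ~1,200,000x
    print(supercomputer_watts / body_watts)    # ~240,000x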

How do I know it’s a few petaflops?

Earlier this year a study was published showing that a conservative lower bound for the total capacity of human memory is about 4 bits per synapse, where previously some scientists thought that each synapse might carry only 1 bit (I’ve always suspected it was more like 10 myself).

So then we need to figure out how many synapses we have… which turns out to be really difficult actually. They are in a constant state of flux, growing, shrinking, and moving all the time; and when we die they fade away almost immediately (reason #3 I’m skeptical of cryonics). We know that we have about 100 billion neurons, and each one can have anywhere between 100 and 15,000 synapses with other neurons. The average seems to be something like 5,000 (but highly skewed in a power-law distribution), so that’s about 500 trillion synapses. If each one is carrying 4 bits to be as conservative as possible, that’s a total storage capacity of about 2 quadrillion bits, which is about 0.2 petabytes.
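
Putting those numbers together is simple arithmetic; here is the calculation, using the rough figures above:

    # Storage estimate from the figures above (all of them rough).
    neurons = 100e9             # ~100 billion neurons
    synapses_per_neuron = 5e3   # skewed distribution, but ~5,000 on average
    bits_per_synapse = 4        # conservative lower bound from the recent study

    total_bits = neurons * synapses_per_neuron * bits_per_synapse
    print(total_bits)              # 2e15 bits = 2 quadrillion bits
    print(total_bits / 8 / 1e15)   # ~0.25 petabytes, i.e. roughly 0.2 PB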

Of course, that’s assuming that our brains store information the same way as a computer—every bit flipped independently, each bit stored forever. Not even close. Human memory is constantly compressing and decompressing data, using a compression scheme that’s lossy enough that we not only forget things, we can systematically misremember and even be implanted with false memories. That may seem like a bad thing, and in a sense it is; but if the compression scheme is that lossy, it must be because it’s also that efficient—that our brains are compressing away the vast majority of the data to make room for more. Our best lossy compression algorithms for video are about 100:1; but the human brain is clearly much better than that. Our core data format for long-term memory appears to be narrative; more or less we store everything not as audio or video (that’s short-term memory, and quite literally so), but as stories.

How much compression can you get by storing things as narrative? Think about The Lord of the Rings. The extended edition of the films runs to 6 discs of movie (9 discs of other stuff), where a Blu-Ray disc can store about 50 GB. So that’s 300 GB. Compressed into narrative form, we have the books (which, if you’ve read them, are clearly not optimally compressed—no, we do not need five paragraphs about the trees, and I’m gonna say it, Tom Bombadil is totally superfluous and Peter Jackson was right to remove him), which run about 500,000 words altogether. If the average word is 10 letters (normally it’s less than that, but this is Tolkien we’re talking about), each word will take up about 10 bytes (because in ASCII—or UTF-8, for plain English text—a letter is one byte). So altogether the total content of the entire trilogy, compressed into narrative, can be stored in about 5 million bytes, that is, 5 MB. So the compression from HD video to narrative takes us all the way from 300 GB to 5 MB, which is a factor of 60,000. Sixty thousand. I believe that this is the proper order of magnitude for the compression capability of the human brain.
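
Here is that same estimate as explicit arithmetic, using the figures above (disc counts, word counts, and bytes per word are all rough):

    # The Lord of the Rings compression estimate, using the rough figures above.
    video_bytes = 6 * 50e9     # 6 Blu-Ray discs at ~50 GB each = 300 GB
    words = 500_000            # approximate word count of the trilogy
    bytes_per_word = 10        # ~10 letters per word (it's Tolkien), 1 byte per letter

    narrative_bytes = words * bytes_per_word   # 5,000,000 bytes = 5 MB
    print(video_bytes / narrative_bytes)       # 60,000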

Even more interesting is the fact that the human brain is almost certainly in some sense holographic storage; damage to a small part of your brain does not produce highly selective memory loss as if you had some bad sectors of your hard drive, but rather an overall degradation of your total memory processing as if you in some sense stored everything everywhere—that is, holographically. How exactly this is accomplished by the brain is still very much an open question; it’s probably not literally a hologram in the quantum sense, but it definitely seems to function like a hologram. (Although… if the human brain is a quantum computer that would explain an awful lot—it especially helps with the binding problem. The problem is explaining how a biological system at 37 C can possibly maintain the necessary quantum coherences.) The data storage capacity of holograms is substantially larger than what can be achieved by conventional means—and furthermore has similar properties to human memory in that you can more or less always add more, but then what you had before gradually gets degraded. Since neural nets are much closer to the actual mechanics of the brain as we know them, understanding human memory will probably involve finding ways to simulate holographic storage with neural nets.

With these facts in mind, the amount of information we can usefully take in and store is probably not 0.2 petabytes—it’s probably more like 10 exabytes. The human brain can probably hold just about as much as the NSA’s National Cybersecurity Initiative Data Center in Utah, which is itself more or less designed to contain the Internet. (The NSA is at once awesome and terrifying.)

But okay, maybe that’s not fair if we’re comparing human brains to computers; even if you can compress all your data by a factor of 100,000, that isn’t the same thing as having 100,000 times as much storage.

So let’s use that smaller figure, 0.2 petabytes. That’s how much we can store; how much can we process?

The next thing to understand is that our processing architecture is fundamentally different from that of computers.

Computers generally have far more storage than they have processing power, because they are bottlenecked through a CPU that can only process 1 thing at once (okay, like 8 things at once with a hyperthreaded quad-core; as you’ll see in a moment this is a trivial difference). So it’s typical for a new computer these days to have processing power in gigaflops (It’s usually reported in gigahertz, but that’s kind of silly; hertz just tells you clock cycles, while what you really want to know is calculations—and that you get from flops. They’re generally pretty comparable numbers though.), while they have storage in terabytes—meaning that it would take about 1000 seconds (about 17 minutes) for the computer to process everything in its entire storage once. In fact it would take a good deal longer than that, because there are further bottlenecks in terms of memory access, especially from hard-disk drives (RAM and solid-state drives are faster, but would still slow it down to a couple of hours).
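
As a crude sketch of that bottleneck (assuming, generously, that one byte gets processed per floating-point operation):

    # How long a typical desktop would take to touch everything in its storage once,
    # assuming (generously) one byte processed per floating-point operation.
    flops = 1e9            # ~1 gigaflop of processing power
    storage_bytes = 1e12   # ~1 terabyte of storage

    seconds = storage_bytes / flops
    print(seconds)         # 1000 seconds
    print(seconds / 60)    # ~17 minutes (longer in practice, due to memory access)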

The human brain, by contrast, integrates processing and memory into the same system. There is no clear distinction between “memory synapses” and “processing synapses”, and no single CPU bottleneck that everything has to go through. There is however something like a “clock cycle” as it turns out; synaptic firings are synchronized across several different “rhythms”, the fastest of which is about 30 Hz. No, not 30 GHz, not 30 MHz, not even 30 kHz; 30 hertz. Compared to the blazing speed of billions of cycles per second that goes on in our computers, the 30 cycles per second our brains are capable of may seem bafflingly slow. (Even more bafflingly slow is the speed of nerve conduction, which is not limited by the speed of light as you might expect, but is actually less than the speed of sound. When you trigger the knee-jerk reflex doctors often test, it takes about a tenth of a second for the reflex to happen—not because your body is waiting for anything, but because it simply takes that long for the signal to travel to your spinal cord and back.)

The reason we can function at all is because of our much more efficient architecture; instead of passing everything through a single bottleneck, we do all of our processing in parallel. All of those 100 billion neurons with 500 trillion synapses storing 2 quadrillion bits work simultaneously. So whereas a computer does 8 things at a time, 3 billion times per second, a human brain does 2 quadrillion things at a time, 30 times per second. Provided that the tasks can be fully parallelized (vision, yes; arithmetic, no), a human brain can therefore process 60 quadrillion bits per second—which, counting roughly ten bits handled as one calculation, works out to about 6 petaflops, somewhere around 6,000,000,000,000,000 calculations per second.
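
Spelling out that last step (the ten-bits-per-calculation conversion is my own rough assumption; the rest are the figures above):

    # Parallel throughput estimate from the figures above.
    total_bits = 2e15            # ~2 quadrillion bits across all synapses
    cycles_per_second = 30       # fastest synchronized neural rhythm, ~30 Hz
    bits_per_calculation = 10    # rough assumption for converting bits to "flops"

    bits_per_second = total_bits * cycles_per_second      # 6e16 bits per second
    print(bits_per_second / bits_per_calculation / 1e15)  # ~6 petaflops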

So, like I said, a few petaflops.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually make their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Even despite its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mail room has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Simply optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, we should assign jobs solely based on marginal productivity; there should be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you end up doing, because that’s what you’ll be paid the most to do. Actually for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to own a major financial firm—and I’m a lot closer than most people. I get to tick most of the boxes you need to be in that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors that people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t see them as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the tasks depended upon efficient coordination, as well as how skilled other people were at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder.” Your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, as well as for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

Why is our diet so unhealthy?

JDN 2457447

One of the most baffling facts about the world, particularly to a development economist, is that the leading causes of death around the world broadly cluster into two categories: Obesity, in First World countries, and starvation, in Third World countries. At first glance, it seems like the rich are eating too much and there isn’t enough left for the poor.

Yet in fact it’s not quite so simple as that, because in fact obesity is most common among the poor in First World countries, and in Third World countries obesity rates are rising rapidly and co-existing with starvation. It is becoming recognized that there are many different kinds of obesity, and that a past history of starvation is actually a major risk factor in future obesity.

Indeed, the really fundamental problem is malnutrition—people are not necessarily eating too much or too little, they are eating the wrong things. So, my question is: Why?

It is widely thought that foods which are nutritious are also unappetizing, and conversely that foods which are delicious are unhealthy. There is a clear kernel of truth here, as a comparison of Brussels sprouts versus ice cream will surely indicate. But this is actually somewhat baffling. We are an evolved organism; one would think that natural selection would shape us so that we enjoy foods which are good for us and avoid foods which are bad for us.

I think it did, actually; the problem is, we have changed our situation so drastically by means of culture and technology that evolution hasn’t had time to catch up. We have evolved significantly since the dawn of civilization, but we haven’t had any time to evolve since one event in particular: The Green Revolution. Indeed, many people are still alive today who were born while the Green Revolution was still underway.

The Green Revolution is the culmination of a long process of development in agriculture and industrialization, but it would be difficult to overstate its importance as an epoch in the history of our species. We now have essentially unlimited food.

Not literally unlimited, of course; we do still need land, and water, and perhaps most notably energy (oil-driven machines are a vital part of modern agriculture). But we can produce vastly more food than was previously possible, and food supply is no longer a binding constraint on human population. Indeed, we already produce enough food to feed 10 billion people. People who say that some new agricultural technology will end world hunger don’t understand what world hunger actually is. Food production is not the problem—distribution of wealth is the problem.
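A rough back-of-the-envelope check on that claim, using commonly cited figures (a global food supply of roughly 2,900 kilocalories per person per day, an average requirement of roughly 2,100 kilocalories, and a world population of about 7.4 billion; these are approximations, not precise estimates):

\[
\frac{2{,}900\ \text{kcal/person/day}}{2{,}100\ \text{kcal/person/day}} \times 7.4\ \text{billion people} \approx 10\ \text{billion people fed}.
\]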

I often speak about the possibility of reaching post-scarcity in the future; but we have essentially already done so in the domain of food production. If everyone ate what would be optimally healthy, and we distributed food evenly across the world, there would be plenty of food to go around and no such thing as obesity or starvation.

So why hasn’t this happened? Well, the main reason, like I said, is distribution of wealth.

But that doesn’t explain why so many people who do have access to good foods nonetheless don’t eat them.

The first thing to note is that healthy food is more expensive. It isn’t a huge difference by First World standards: about $550 per year extra per person. But when we compare the cost of a nutritious diet to that of a typical (less healthy) diet, the nutritious diet is significantly more expensive. Worse yet, this gap appears to be growing over time.
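To put that figure in everyday terms, $550 per year works out to only about a dollar and a half per day:

\[
\frac{\$550/\text{year}}{365\ \text{days/year}} \approx \$1.50/\text{day}.
\]

Small by First World standards, yes; but for a family of four living near the poverty line, that extra $1.50 per person per day adds up to over $2,000 per year.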

But why is this the case? It’s actually quite baffling on its face. Nutritious foods are typically fruits and vegetables that one can simply pluck off plants. Unhealthy foods are typically complex processed foods that require machines and advanced technology. There should be “value added”, at least in the economic sense; additional labor must go in, additional profits must come out. Why, then, is the processed food cheaper?

In a word? Subsidies.

Somehow, huge agribusinesses have convinced governments around the world that they deserve to be paid extra money, either simply for existing or based on how much they produce. Of course, when I say “somehow”, I mean lobbying.

In the US, these subsidies overwhelmingly go toward corn, followed by cotton, followed by soybeans.

In fact, they don’t even go to corn as you would normally think of it, like sweet corn or corn on the cob. No, they go to feed corn: really awful stuff that includes the entire plant, is barely even recognizable as corn, and has its “quality” literally rated by scales and sieves. No living organism was ever meant to eat this stuff.

Humans don’t, of course. Cows do. But they didn’t evolve for this stuff either; they can’t digest it properly, and it’s because of this terrible food we force-feed them that they need so many antibiotics.

Thus, these corn subsidies are really primarily beef subsidies: they are a means of externalizing the cost of beef production and keeping the price of hamburgers artificially low. In all, 2/3 of US agricultural subsidies ultimately go to meat production. I haven’t been able to find any really good estimates, but as a ballpark figure it seems that meat would cost about twice as much if we didn’t subsidize it.

Fortunately, a lot of these subsidies have been reduced under the Obama administration, particularly “direct payments”, which are sort of like a basic income, but for agribusinesses. (That is not what basic incomes are for.) You can see the decline in US corn subsidies here.

Despite all this, however, subsidies cannot explain obesity. Removing them would have only a small effect.

An often overlooked consideration is that nutritious food can be more expensive for a family even if the actual price tag is the same.

Why? Because kids won’t eat it.

To raise kids on a nutritious diet, you have to feed them small amounts of good food over a long period of time, until they acquire the taste. In order to do this, you need to be prepared to waste a lot of food, and that costs money. It’s cheaper to simply feed them something unhealthy, like ice cream or hot dogs, that you know they’ll eat.

And this brings me to what I think is the real ultimate cause of our awful diet: We evolved for a world of starvation, and our bodies cannot cope with abundance.

It’s important to be clear about what we mean by “unhealthy food”; people don’t enjoy consuming lead and arsenic. Rather, we enjoy consuming fat and sugar. Contrary to what fad diets will tell you, fat and sugar are not inherently bad for human health; indeed, we need a certain amount of fat and sugar in order to survive. What we call “unhealthy food” is actually food that we desperately need—in small quantities.

Under the conditions in which we evolved, fat and sugar were extremely scarce. Eating fat meant hunting a large animal, which required the cooperation of the whole tribe (a quite literal Stag Hunt) and carried risk of life and limb, not to mention the possibility of simply failing and getting nothing. Eating sugar meant finding fruit trees and gathering fruit from them—and fruit trees are not all that common in nature. These foods also spoil quite quickly, so you had to eat them right away or not at all.
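For anyone unfamiliar with the game theory reference: in a Stag Hunt, going after the big prize only pays off if everyone else commits too. A standard illustrative payoff matrix (the numbers are purely for illustration) looks like this:

\[
\begin{array}{c|cc}
 & \text{Hunt stag} & \text{Hunt hare} \\
\hline
\text{Hunt stag} & (3,\,3) & (0,\,1) \\
\text{Hunt hare} & (1,\,0) & (1,\,1)
\end{array}
\]

Both “everyone hunts the stag” and “everyone hunts hares” are stable outcomes; the stag is worth far more, but a lone stag hunter goes home with nothing, which is precisely the risk our ancestors faced going after big game.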

As such, we evolved to really crave these things, to ensure that we would eat them whenever they were available. Since they weren’t available all that often, this was just about right: it ensured that we managed to eat enough, and rarely meant that we ate too much.

But now fast-forward to the Green Revolution. They aren’t scarce anymore. They’re everywhere. There are whole buildings we can go to with shelves upon shelves of them, which we ourselves can claim simply by swiping a little plastic card through a reader. We don’t even need to understand how that system of encrypted data networks operates, or what exactly is involved in maintaining our money supply (and most people clearly don’t); all we need to do is perform the right ritual and we will receive an essentially unlimited abundance of fat and sugar.

Even worse, this food is in processed form, so we can extract the parts that make it taste good, while separating them from the parts that actually make it nutritious. If fruits were our main source of sugar, that would be fine. But instead we get it from corn syrup and sugarcane, and even when we do get it from fruit, we extract the sugar instead of eating the whole fruit.

Natural selection had no particular reason to give us that level of discrimination; since eating apples and oranges was good for us, we evolved to like the taste of apples and oranges. There wasn’t a sufficient selection pressure to make us actually eat the whole fruit as opposed to extracting the sugar, because extracting the sugar was not an option available to our ancestors. But it is available to us now.

Vegetables, on the other hand, are also more abundant now, but were already fairly abundant. Indeed, it may be significant that we’ve had enough time to evolve since agriculture, but not enough time since fertilizer. Agriculture allowed us to make plenty of wheat and carrots; but it wasn’t until fertilizer that we could make enough hamburgers for people to eat them regularly. It could be that our hunter-gatherer ancestors actually did crave carrots in much the same way they and we crave sugar; but since agriculture made carrots consistently available, there has been no further selection pressure to sustain that craving.

One thing I do still find a bit baffling: Why are so many green vegetables so bitter? It would be one thing if they simply weren’t as appealing as fat and sugar; but it honestly seems like a lot of green vegetables, such as broccoli, spinach, and Brussels sprouts, are really quite actively aversive, at least until you acquire the taste for them. Given how nutritious they are, it seems like there should have been a selective pressure in favor of liking the taste of green vegetables; but there wasn’t. I wonder if it’s actually coevolution—if perhaps broccoli has been evolving to not be eaten as quickly as we were evolving to eat it. This wouldn’t happen with apples and oranges, because in an evolutionary sense apples and oranges “want” to be eaten; they spread their seeds in the droppings of animals. But for any given stalk of broccoli, becoming lunch is definitely bad news.

Yet even this is pretty weird, because broccoli has definitely evolved substantially since agriculture—indeed, broccoli as we know it would not exist otherwise. Ancestral Brassica oleracea was bred to become cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, savoy, kohlrabi and kai-lan—and looks like none of them.

It looks like I still haven’t solved the mystery. In short, we get fat because kids hate broccoli; but why in the world do kids hate broccoli?