Why are humans so bad with probability?

Apr 29 JDN 2458238

In previous posts on deviations from expected utility and cumulative prospect theory, I’ve detailed some of the myriad ways in which human beings deviate from optimal rational behavior when it comes to probability.

This post is going to be a bit different: Yes, we behave irrationally when it comes to probability. Why?

Why aren’t we optimal expected utility maximizers?
This question is not as simple as it sounds. Some of the ways that human beings deviate from neoclassical behavior are simply because neoclassical theory requires levels of knowledge and intelligence far beyond what human beings are capable of; basically anything requiring “perfect information” qualifies, as does any game theory prediction that involves solving extensive-form games with infinite strategy spaces by backward induction. (Don’t feel bad if you have no idea what that means; that’s kind of my point. Solving infinite extensive-form games by backward induction is an unsolved problem in game theory; just this past week I saw a new paper presented that offered a partial potential solution. And yet we expect people to do it optimally every time?)

I’m also not going to include questions of fundamental uncertainty, like “Will Apple stock rise or fall tomorrow?” or “Will the US go to war with North Korea in the next ten years?” where it isn’t even clear how we would assign a probability. (Though I will get back to them, for reasons that will become clear.)

No, let’s just look at the absolute simplest cases, where the probabilities are all well-defined and completely transparent: Lotteries and casino games. Why are we so bad at that?

Lotteries are not a computationally complex problem. You figure out how much the prize is worth to you, multiply it by the probability of winning—which is clearly spelled out for you—and compare that to how much the ticket price is worth to you. The most challenging part lies in specifying your marginal utility of wealth—the “how much it’s worth to you” part—but that’s something you basically had to do anyway, to make any kind of trade-offs on how to spend your time and money. Maybe you didn’t need to compute it quite so precisely over that particular range of parameters, but you need at least some idea how much $1 versus $10,000 is worth to you in order to get by in a market economy.
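The calculation the paragraph describes can be sketched in a few lines. The logarithmic utility function and the specific wealth, jackpot, odds, and ticket price below are illustrative assumptions, not figures from any real lottery:

```python
import math

def expected_utility_of_ticket(wealth, prize, p_win, ticket_price):
    """Compare the expected utility of buying a lottery ticket vs. not.

    Uses logarithmic utility of wealth as an illustrative stand-in for
    "how much it's worth to you"; any concave function behaves similarly.
    """
    u = math.log  # diminishing marginal utility of wealth
    eu_buy = (p_win * u(wealth - ticket_price + prize)
              + (1 - p_win) * u(wealth - ticket_price))
    eu_skip = u(wealth)
    return eu_buy, eu_skip

# Hypothetical numbers: $10,000 in wealth, a $10 million jackpot at
# 1-in-10-million odds, and a $2 ticket.
eu_buy, eu_skip = expected_utility_of_ticket(10_000, 10_000_000, 1e-7, 2)
print(eu_buy < eu_skip)  # True: buying the ticket lowers expected utility
```

The same comparison with a risk-neutral (linear) utility gives the same verdict here, since the expected monetary value of the ticket is already negative.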

Casino games are a bit more complicated, but not much, and most of the work has been done for you; you can look on the Internet and find tables of probability calculations for poker, blackjack, roulette, craps and more. Memorizing all those probabilities might take some doing, but human memory is astonishingly capacious, and part of being an expert card player, especially in blackjack, seems to involve memorizing a lot of those probabilities.

Furthermore, by any plausible expected utility calculation, lotteries and casino games are a bad deal. Unless you’re an expert poker player or blackjack card-counter, your expected income from playing at a casino is always negative—and the casino set it up that way on purpose.
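To see the house edge concretely, take the standard textbook case of an even-money bet in American roulette (these probabilities are the well-known ones from the game's layout, not anything specific to this post):

```python
# A $1 even-money bet on red in American roulette: 18 red pockets win,
# 18 black plus 2 green (0 and 00) lose, out of 38 total.
p_win = 18 / 38
ev = p_win * 1 + (1 - p_win) * (-1)
print(round(ev, 4))  # -0.0526: about a 5.3-cent loss per dollar bet
```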

Why, then, can lotteries and casinos stay in business? Why are we so bad at such a simple problem?

Clearly we are using some sort of heuristic judgment in order to save computing power, and the people who make lotteries and casinos have designed formal models that can exploit those heuristics to pump money from us. (Shame on them, really; I don’t fully understand why this sort of thing is legal.)

In another previous post I proposed what I call “categorical prospect theory”, which I think is a decently accurate description of the heuristics people use when assessing probability (though I’ve not yet had the chance to test it experimentally).

But why use this particular heuristic? Indeed, why use a heuristic at all for such a simple problem?

I think it’s helpful to keep in mind that these simple problems are weird; they are absolutely not the sort of thing a tribe of hunter-gatherers is likely to encounter on the savannah. It doesn’t make sense for our brains to be optimized to solve poker or roulette.

The sort of problems that our ancestors encountered—indeed, the sort of problems that we encounter, most of the time—were not problems of calculable probability risk; they were problems of fundamental uncertainty. And they were frequently matters of life or death (which is why we’d expect them to be highly evolutionarily optimized): “Was that sound a lion, or just the wind?” “Is this mushroom safe to eat?” “Is that meat spoiled?”

In fact, many of the uncertainties most important to our ancestors are still important today: “Will these new strangers be friendly, or dangerous?” “Is that person attracted to me, or am I just projecting my own feelings?” “Can I trust you to keep your promise?” These sorts of social uncertainties are even deeper; it’s not clear that any finite being could ever totally resolve its uncertainty surrounding the behavior of other beings with the same level of intelligence, as the cognitive arms race continues indefinitely. The better I understand you, the better you understand me—and if you’re trying to deceive me, as I get better at detecting deception, you’ll get better at deceiving.

Personally, I think that it was precisely this sort of feedback loop that resulted in human beings getting such ridiculously huge brains in the first place. Chimpanzees are pretty good at dealing with the natural environment, maybe even better than we are; but even young children can outsmart them in social tasks any day. And once you start evolving for social cognition, it’s very hard to stop; basically you need to be constrained by something very fundamental, like, say, maximum caloric intake or the shape of the birth canal. Where chimpanzees look like their brains were what we call an “interior solution”, where evolution optimized toward a particular balance between cost and benefit, human brains look more like a “corner solution”, where the evolutionary pressure was entirely in one direction until we hit up against a hard constraint. That’s exactly what one would expect to happen if we were caught in a cognitive arms race.

What sort of heuristic makes sense for dealing with fundamental uncertainty—as opposed to precisely calculable probability? Well, you don’t want to compute a utility function and multiply by it, because that adds all sorts of extra computation and you have no idea what probability to assign. But you’ve got to do something like that in some sense, because that really is the optimal way to respond.

So here’s a heuristic you might try: Separate events into some broad categories based on how frequently they seem to occur, and what sort of response would be necessary.

Some things, like the sun rising each morning, seem to always happen. So you should act as if those things are going to happen pretty much always, because they do happen… pretty much always.

Other things, like rain, seem to happen frequently but not always. So you should look for signs that those things might happen, and prepare for them when the signs point in that direction.

Still other things, like being attacked by lions, happen very rarely, but are a really big deal when they do. You can’t go around expecting those to happen all the time, that would be crazy; but you need to be vigilant, and if you see any sign that they might be happening, even if you’re pretty sure they’re not, you may need to respond as if they were actually happening, just in case. The cost of a false positive is much lower than the cost of a false negative.

And still other things, like people sprouting wings and flying, never seem to happen. So you should act as if those things are never going to happen, and you don’t have to worry about them.

This heuristic is quite simple to apply once set up: It can simply slot in memories of when things did and didn’t happen in order to decide which category they go in—i.e. availability heuristic. If you can remember a lot of examples of “almost never”, maybe you should move it to “unlikely” instead. If you get a really big number of examples, you might even want to move it all the way to “likely”.

Another large advantage of this heuristic is that by combining utility and probability into one metric—we might call it “importance”, though Bayesian econometricians might complain about that—we can save on memory space and computing power. I don’t need to separately compute a utility and a probability; I just need to figure out how much effort I should put into dealing with this situation. A high probability of a small cost and a low probability of a large cost may be equally worth my time.
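A minimal sketch of this heuristic might look like the following; the category labels match the ones above, but the numeric thresholds and weights are invented for illustration, not fitted to any data:

```python
def categorize(occurrences, opportunities):
    """Slot an event into a frequency category from remembered examples
    (a crude availability heuristic). The thresholds are invented."""
    freq = occurrences / opportunities if opportunities else 0.0
    if freq > 0.95:
        return "always"    # act as if it will happen
    if freq > 0.25:
        return "likely"    # watch for signs and prepare
    if freq > 0.0:
        return "unlikely"  # stay vigilant when the stakes are high
    return "never"         # don't worry about it

def importance(category, cost):
    """Fold category and stakes into one number, in place of a separate
    probability-times-utility computation. The weights are invented."""
    weights = {"always": 1.0, "likely": 0.5, "unlikely": 0.05, "never": 0.0}
    return weights[category] * cost

# A rare catastrophe can outrank a common nuisance:
print(importance(categorize(1, 1000), cost=1000))  # lion attack: 50.0
print(importance(categorize(300, 1000), cost=1))   # rain: 0.5
```

Note that the final comparison never computes a probability at all; the category does double duty, which is exactly the savings in memory and computation described above.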

How might these heuristics go wrong? Well, if your environment changes sufficiently, the probabilities could shift and what seemed certain no longer is. For most of human history, “people walking on the Moon” would seem about as plausible as sprouting wings and flying away, and yet it has happened. Being attacked by lions is now exceedingly rare except in very specific places, but we still harbor a certain awe and fear before lions. And of course availability heuristic can be greatly distorted by mass media, which makes people feel like terrorist attacks and nuclear meltdowns are common and deaths by car accidents and influenza are rare—when exactly the opposite is true.

How many categories should you set, and what frequencies should they be associated with? This part I’m still struggling with, and it’s an important piece of the puzzle I will need before I can take this theory to experiment. There is probably a trade-off between more categories giving you more precision in tailoring your optimal behavior, but costing more cognitive resources to maintain. Is the optimal number 3? 4? 7? 10? I really don’t know. Even if I could specify the number of categories, I’d still need to figure out precisely what categories to assign.

What is the processing power of the human brain?

JDN 2457485

Futurists have been predicting that AI will “surpass humans” any day now for something like 50 years. Eventually they’ll be right, but it will be more or less purely by chance, since they’ve been making the same prediction longer than I’ve been alive. (Similarly, whenever someone projects the date at which immortality will be invented, it always seems to coincide with just slightly before the end of the author’s projected life expectancy.) Any technology that is “20 years away” will be so indefinitely.

There are a lot of reasons why this prediction keeps failing so miserably. One is an apparent failure to grasp the limitations of exponential growth. I actually think the most important is that a lot of AI fans don’t seem to understand how human cognition actually works—that it is primarily social cognition, where most of the processing has already been done and given to us as cached results, some of them derived centuries before we were born. We are smart enough to run a civilization with airplanes and the Internet not because any individual human is so much smarter than any other animal, but because all humans together are—and other animals haven’t quite figured out how to unite their cognition in the same way. We’re about 3 times smarter than any other animal as individuals—and several billion times smarter when we put our heads together.

A third reason is that even if you have sufficient computing power, that is surprisingly unimportant; what you really need are good heuristics to make use of your computing power efficiently. Any nontrivial problem is too complex to brute-force by any conceivable computer, so simply increasing computing power without improving your heuristics will get you nowhere. Conversely, if you have really good heuristics like the human brain does, you don’t even need all that much computing power. A chess grandmaster was once asked how many moves ahead he can see on the board, and he replied: “I only see one move ahead. The right one.” In cognitive science terms, people asked him how much computing power he was using, expecting him to say something far beyond normal human capacity, and he replied that he was using hardly any—it was all baked into the heuristics he had learned from years of training and practice.

Making an AI capable of human thought—a true artificial person—will require a level of computing power we can already reach (as long as we use huge supercomputers), but that is like having the right material. To really create the being we will need to embed the proper heuristics. We are trying to make David, and we have finally mined enough marble—now all we need is Michelangelo.

But another reason why so many futurists have failed in their projections is that they have wildly underestimated the computing power of the human brain. Reading 1980s cyberpunk is hilarious in hindsight; Neuromancer actually quite accurately projected the number of megabytes that would flow through the Internet at any given moment, but somehow thought that a few hundred megaflops would be enough to copy human consciousness. The processing power of the human brain is actually on the order of a few petaflops. So, you know, Gibson was only off by a factor of a few million.

We can now match petaflops—the world’s fastest supercomputer is actually about 30 petaflops. Of course, it cost half a month of China’s GDP to build, and requires 24 megawatts to run and cool, which is about the output of a mid-sized solar power station. The human brain consumes only about 400 kcal per day, which is about 20 watts—roughly the consumption of a typical CFL lightbulb. Even if you count the rest of the human body as necessary to run the human brain (which I guess is sort of true), we’re still clocking in at about 100 watts—so even though supercomputers can now process at the same speed, our brains are almost a million times as energy-efficient.
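The wattage figure is a one-line unit conversion, which we can check:

```python
# 400 kcal/day in watts: 1 kcal = 4184 J, 1 day = 86,400 s.
watts = 400 * 4184 / 86_400
print(round(watts, 1))  # 19.4 W, about one CFL bulb

# Efficiency ratio against a 24-megawatt supercomputer of comparable speed:
ratio = 24e6 / watts
print(round(ratio / 1e6, 1))  # roughly 1.2 million times more efficient
```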

How do I know it’s a few petaflops?

Earlier this year a study was published showing that a conservative lower bound for the total capacity of human memory is about 4 bits per synapse, where previously some scientists thought that each synapse might carry only 1 bit (I’ve always suspected it was more like 10 myself).

So then we need to figure out how many synapses we have… which turns out to be really difficult actually. They are in a constant state of flux, growing, shrinking, and moving all the time; and when we die they fade away almost immediately (reason #3 I’m skeptical of cryonics). We know that we have about 100 billion neurons, and each one can have anywhere between 100 and 15,000 synapses with other neurons. The average seems to be something like 5,000 (but highly skewed in a power-law distribution), so that’s about 500 trillion synapses. If each one is carrying 4 bits to be as conservative as possible, that’s a total storage capacity of about 2 quadrillion bits, which is about 0.2 petabytes.
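The storage estimate is straightforward multiplication, using the figures above:

```python
neurons = 100e9             # ~100 billion neurons
synapses_per_neuron = 5000  # skewed average across the 100-15,000 range
bits_per_synapse = 4        # conservative lower bound from the study
total_bits = neurons * synapses_per_neuron * bits_per_synapse
petabytes = total_bits / 8 / 1e15
print(total_bits)  # 2e+15: 2 quadrillion bits
print(petabytes)   # 0.25, i.e. about 0.2 petabytes
```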

Of course, that’s assuming that our brains store information the same way as a computer—every bit flipped independently, each bit stored forever. Not even close. Human memory is constantly compressing and decompressing data, using a compression scheme that’s lossy enough that we not only forget things, we can systematically misremember and even be implanted with false memories. That may seem like a bad thing, and in a sense it is; but if the compression scheme is that lossy, it must be because it’s also that efficient—that our brains are compressing away the vast majority of the data to make room for more. Our best lossy compression algorithms for video are about 100:1; but the human brain is clearly much better than that. Our core data format for long-term memory appears to be narrative; more or less we store everything not as audio or video (that’s short-term memory, and quite literally so), but as stories.

How much compression can you get by storing things as narrative? Think about The Lord of the Rings. The extended edition of the films runs to 6 discs of movie (9 discs of other stuff), where a Blu-Ray disc can store about 50 GB. So that’s 300 GB. Compressed into narrative form, we have the books (which, if you’ve read them, are clearly not optimally compressed—no, we do not need five paragraphs about the trees, and I’m gonna say it, Tom Bombadil is totally superfluous and Peter Jackson was right to remove him), which run about 500,000 words altogether. If the average word is 10 letters (normally it’s less than that, but this is Tolkien we’re talking about), each word will take up about 10 bytes (because in ASCII, or UTF-8 for English text, a letter is one byte). So altogether the total content of the entire trilogy, compressed into narrative, can be stored in about 5 million bytes, that is, 5 MB. So the compression from HD video to narrative takes us all the way from 300 GB to 5 MB, which is a factor of 60,000. Sixty thousand. I believe that this is the proper order of magnitude for the compression capability of the human brain.
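The back-of-the-envelope ratio checks out:

```python
video_bytes = 6 * 50e9     # six Blu-Ray discs at ~50 GB each: 300 GB
words = 500_000
bytes_per_word = 10        # ~10 letters per word, one byte per letter
narrative_bytes = words * bytes_per_word  # 5 MB
ratio = video_bytes / narrative_bytes
print(int(ratio))  # 60000
```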

Even more interesting is the fact that the human brain is almost certainly in some sense holographic storage; damage to a small part of your brain does not produce highly selective memory loss as if you had some bad sectors of your hard drive, but rather an overall degradation of your total memory processing as if you in some sense stored everything everywhere—that is, holographically. How exactly this is accomplished by the brain is still very much an open question; it’s probably not literally a hologram in the quantum sense, but it definitely seems to function like a hologram. (Although… if the human brain is a quantum computer that would explain an awful lot—it especially helps with the binding problem. The problem is explaining how a biological system at 37 C can possibly maintain the necessary quantum coherences.) The data storage capacity of holograms is substantially larger than what can be achieved by conventional means—and furthermore has similar properties to human memory in that you can more or less always add more, but then what you had before gradually gets degraded. Since neural nets are much closer to the actual mechanics of the brain as we know them, understanding human memory will probably involve finding ways to simulate holographic storage with neural nets.

With these facts in mind, the amount of information we can usefully take in and store is probably not 0.2 petabytes—it’s probably more like 10 exabytes. The human brain can probably hold just about as much as the NSA’s National Cybersecurity Initiative Data Center in Utah, which is itself more or less designed to contain the Internet. (The NSA is at once awesome and terrifying.)

But okay, maybe that’s not fair if we’re comparing human brains to computers; even if you can compress all your data by a factor of 100,000, that isn’t the same thing as having 100,000 times as much storage.

So let’s use that smaller figure, 0.2 petabytes. That’s how much we can store; how much can we process?

The next thing to understand is that our processing architecture is fundamentally different from that of computers.

Computers generally have far more storage than they have processing power, because they are bottlenecked through a CPU that can only process 1 thing at once (okay, like 8 things at once with a hyperthreaded quad-core; as you’ll see in a moment this is a trivial difference). So it’s typical for a new computer these days to have processing power in gigaflops (It’s usually reported in gigahertz, but that’s kind of silly; hertz just tells you clock cycles, while what you really wanted to know is calculations—and that you get from flops. They’re generally pretty comparable numbers though.), while they have storage in terabytes—meaning that it would take about 1000 seconds (about 17 minutes) for the computer to process everything in its entire storage once. In fact it would take a good deal longer than that, because there are further bottlenecks in terms of memory access, especially from hard-disk drives (RAM and solid-state drives are faster, but would still slow it down to a couple of hours).
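The 17-minute figure follows from treating roughly one “calculation” per byte of storage touched, which is the rough equivalence the paragraph is using:

```python
storage_bytes = 1e12  # 1 TB of storage
per_second = 1e9      # ~1 gigaflop, treating one calculation per byte
seconds = storage_bytes / per_second
print(seconds, seconds / 60)  # 1000.0 seconds, about 17 minutes
```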

The human brain, by contrast, integrates processing and memory into the same system. There is no clear distinction between “memory synapses” and “processing synapses”, and no single CPU bottleneck that everything has to go through. There is however something like a “clock cycle” as it turns out; synaptic firings are synchronized across several different “rhythms”, the fastest of which is about 30 Hz. No, not 30 GHz, not 30 MHz, not even 30 kHz; 30 hertz. Compared to the blazing speed of billions of cycles per second that goes on in our computers, the 30 cycles per second our brains are capable of may seem bafflingly slow. (Even more bafflingly slow is the speed of nerve conduction, which is not limited by the speed of light as you might expect, but is actually less than the speed of sound. When you trigger the knee-jerk reflex doctors often test, it takes about a tenth of a second for the reflex to happen—not because your body is waiting for anything, but because it simply takes that long for the signal to travel to your spinal cord and back.)

The reason we can function at all is because of our much more efficient architecture; instead of passing everything through a single bottleneck, we do all of our processing in parallel. All of those 100 billion neurons with 500 trillion synapses storing 2 quadrillion bits work simultaneously. So whereas a computer does 8 things at a time, 3 billion times per second, a human brain does 2 quadrillion things at a time, 30 times per second. Provided that the tasks can be fully parallelized (vision, yes; arithmetic, no), a human brain can therefore process 60 quadrillion bits per second—which turns out to be just over 6 petaflops, somewhere around 6,000,000,000,000,000 calculations per second.
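The final tally works out as follows; the division by ten bit-operations per “calculation” is my reading of how 60 quadrillion bits per second becomes “just over 6 petaflops,” not a conversion the text states explicitly:

```python
synapse_bits = 2e15  # 2 quadrillion bits across all synapses
clock_hz = 30        # fastest synchronized neural rhythm
bit_ops_per_second = synapse_bits * clock_hz  # 6e16: 60 quadrillion
flops = bit_ops_per_second / 10  # ~10 bit-operations per "calculation"
print(flops / 1e15)  # 6.0 petaflops
```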

So, like I said, a few petaflops.

Why is there a “corporate ladder”?

JDN 2457482

We take this concept for granted; there are “entry-level” jobs, and then you can get “promoted”, until perhaps you’re lucky enough or talented enough to rise to the “top”. Jobs that are “higher” on this “ladder” pay better, offer superior benefits, and also typically involve more pleasant work environments and more autonomy, though they also typically require greater skill and more responsibility.

But I contend that an alien lifeform encountering our planet for the first time, even one that somehow knew all about neoclassical economic theory (admittedly weird, but bear with me here), would be quite baffled by this arrangement.

The classic “rags to riches” story always involves starting work in some menial job like working in the mailroom, from which you then more or less magically rise to the position of CEO. (The intermediate steps are rarely told in the story, probably because they undermine the narrative; successful entrepreneurs usually make their first successful business using funds from their wealthy relatives, and if you haven’t got any wealthy relatives, that’s just too bad for you.)

Even despite its dubious accuracy, the story is bizarre in another way: There’s no reason to think that being really good at working in the mail room has anything at all to do with being good at managing a successful business. They’re totally orthogonal skills. They may even be contrary in personality terms; the kind of person who makes a good entrepreneur is innovative, decisive, and independent—and those are exactly the kind of personality traits that will make you miserable in a menial job where you’re constantly following orders.

Yet in almost every profession, we have this process where you must first “earn” your way to “higher” positions by doing menial and at best tangentially-related tasks.

This even happens in science, where we ought to know better! There’s really no reason to think that being good at taking multiple-choice tests strongly predicts your ability to do scientific research, nor that being good at grading multiple-choice tests does either; and yet to become a scientific researcher you must pass a great many multiple-choice tests (at bare minimum the SAT and GRE), and probably as a grad student you’ll end up grading some as well.

This process is frankly bizarre; worldwide, we are probably leaving tens of trillions of dollars of productivity on the table by instituting these arbitrary selection barriers that have nothing to do with actual skills. Simply optimizing our process of CEO selection alone would probably add a trillion dollars to US GDP.

If neoclassical economics were right, we should assign jobs solely based on marginal productivity; there should be some sort of assessment of your ability at each task you might perform, and whichever you’re best at (in the sense of comparative advantage) is what you end up doing, because that’s what you’ll be paid the most to do. Actually for this to really work the selection process would have to be extremely cheap, extremely reliable, and extremely fast, lest the friction of the selection system itself introduce enormous inefficiencies. (The fact that this never even seems to work even in SF stories with superintelligent sorting AIs, let alone in real life, is just so much the worse for neoclassical economics. The last book I read in which it actually seemed to work was Harry Potter and the Sorcerer’s Stone—so it was literally just magic.)

The hope seems to be that competition will somehow iron out this problem, but in order for that to work, we must all be competing on a level playing field, and furthermore the mode of competition must accurately assess our real ability. The reason Olympic sports do a pretty good job of selecting the best athletes in the world is that they obey these criteria; the reason corporations do a terrible job of selecting the best CEOs is that they do not.

I’m quite certain I could do better than the former CEO of the late Lehman Brothers (and, to be fair, there are others who could do better still than I), but I’ll likely never get the chance to own a major financial firm—and I’m a lot closer than most people. I get to tick most of the boxes you need to be in that kind of position: White, male, American, mostly able-bodied, intelligent, hard-working, with a graduate degree in economics. Alas, I was only born in the top 10% of the US income distribution, not the top 1% or 0.01%, so my odds are considerably reduced. (That and I’m pretty sure that working for a company as evil as the late Lehman Brothers would destroy my soul.) Somewhere in Sudan there is a little girl who would be the best CEO of an investment bank the world has ever seen, but she is dying of malaria. Somewhere in India there is a little boy who would have been a greater physicist than Einstein, but no one ever taught him to read.

Competition may help reduce the inefficiency of this hierarchical arrangement—but it cannot explain why we use a hierarchy in the first place. Some people may be especially good at leadership and coordination; but in an efficient system they wouldn’t be seen as “above” other people, but as useful coordinators and advisors that people consult to ensure they are allocating tasks efficiently. You wouldn’t do things because “your boss told you to”, but because those things were the most efficient use of your time, given what everyone else in the group was doing. You’d consult your coordinator often, and usually take their advice; but you wouldn’t see them as orders you were required to follow.

Moreover, coordinators would probably not be paid much better than those they coordinate; what they were paid would depend on how much the success of the tasks depends upon efficient coordination, as well as how skilled other people are at coordination. It’s true that if having you there really does make a company with $1 billion in revenue 1% more efficient, that is in fact worth $10 million; but that isn’t how we set the pay of managers. It’s simply obvious to most people that managers should be paid more than their subordinates—that with a “promotion” comes more leadership and more pay. You’re “moving up the corporate ladder.” Your pay reflects your higher status, not your marginal productivity.

This is not an optimal economic system by any means. And yet it seems perfectly natural to us to do this, and most people have trouble thinking any other way—which gives us a hint of where it’s probably coming from.

Perfectly natural. That is, instinctual. That is, evolutionary.

I believe that the corporate ladder, like most forms of hierarchy that humans use, is actually a recapitulation of our primate instincts to form a mating hierarchy with an alpha male.

First of all, the person in charge is indeed almost always male—over 90% of all high-level business executives are men. This is clearly discrimination, because women executives are paid less and yet show higher competence. Rare, underpaid, and highly competent is exactly the pattern we would expect in the presence of discrimination. If it were instead a lack of innate ability, we would expect that women executives would be much less competent on average, though they would still be rare and paid less. If there were no discrimination and no difference in ability, we would see equal pay, equal competence, and equal prevalence (this happens almost nowhere—the closest I think we get is in undergraduate admissions). Executives are also usually tall, healthy, and middle-aged—just like alpha males among chimpanzees and gorillas. (You can make excuses for why: Height is correlated with IQ, health makes you more productive, middle age is when you’re old enough to have experience but young enough to have vigor and stamina—but the fact remains, you’re matching the gorillas.)

Second, many otherwise-baffling economic decisions make sense in light of this hypothesis.

When a large company is floundering, why do we cut 20,000 laborers instead of simply reducing the CEO’s stock option package by half to save the same amount of money? Think back to the alpha male: Would he give himself less in a time of scarcity? Of course not. Nor would he remove his immediate subordinates, unless they had done something to offend him. If resources are scarce, the “obvious” answer is to take them from those at the bottom of the hierarchy—resource conservation is always accomplished at the expense of the lowest-status individuals.

Why are the very same poor people who would most stand to gain from redistribution of wealth often those who are most fiercely opposed to it? Because, deep down, they just instinctually “know” that alpha males are supposed to get the bananas, and if they are of low status it is their deserved lot in life. That is how people who depend on TANF and Medicaid to survive can nonetheless vote for Donald Trump. (As for how they can convince themselves that they “don’t get anything from the government”, that I’m not sure. “Keep your government hands off my Medicare!”)

Why is power an aphrodisiac, as well as for many an apparent excuse for bad behavior? I’ll let Cameron Anderson (a psychologist at UC Berkeley) give you the answer: “powerful people act with great daring and sometimes behave rather like gorillas”. With higher status comes a surge in testosterone (makes sense if you’re going to have more mates, and maybe even if you’re commanding an army—but running an investment bank?), which is directly linked to dominance behavior.

These attitudes may well have been adaptive for surviving in the African savannah 2 million years ago. In a world red in tooth and claw, having the biggest, strongest male be in charge of the tribe might have been the most efficient means of ensuring the success of the tribe—or rather I should say, the genes of the tribe, since the only reason we have a tribal instinct is that tribal instinct genes were highly successful at propagating themselves.

I’m actually sort of agnostic on the question of whether our evolutionary heuristics were optimal for ancient survival, or simply the best our brains could manage; but one thing is certain: They are not optimal today. The uninhibited dominance behavior associated with high status may work well enough for a tribal chieftain, but it could be literally apocalyptic when exhibited by the head of state of a nuclear superpower. Allocation of resources by status hierarchy may be fine for hunter-gatherers, but it is disastrously inefficient in an information technology economy.

From now on, whenever you hear “corporate ladder” and similar turns of phrase, I want you to substitute “primate status hierarchy”. You’ll quickly see how well it fits; and hopefully once enough people realize this, together we can all find a way to change to a better system.

The World Development Report is on cognitive economics this year!

JDN 2457013 EST 21:01.

On a personal note, I can now proudly report that I have successfully defended my thesis “Corruption, ‘the Inequality Trap,’ and ‘the 1% of the 1%’”, and I now have completed a master’s degree in economics. I’m back home in Michigan for the holidays (hence my use of Eastern Standard Time), and then, well… I’m not entirely sure. I have a gap of about six months before PhD programs start. I have a number of job applications out, but unless I get a really good offer (such as the position at the International Food Policy Research Institute in DC) I think I may just stay in Michigan for a while and work on my own projects, particularly publishing two of my books (my nonfiction magnum opus, The Mathematics of Tears and Joy, and my first novel, First Contact) and making some progress on a couple of research papers—ideally publishing one of them as well. But the future for me right now is quite uncertain, and that is now my major source of stress. Ironically, I’d probably be less stressed if I were working full-time, because I would have a clear direction and sense of purpose. If I could have any job in the world, it would be a hard choice between a professorship at UC Berkeley and a research position at the World Bank.

Which brings me to the topic of today’s post: The people who do my dream job have just released a report showing that they basically agree with me on how it should be done.

If you have some extra time, please take a look at the World Bank World Development Report. They put one out each year, and it provides a rigorous and thorough (236 pages) but quite readable summary of the most important issues in the world economy today. It’s not exactly light summer reading, but nor is it the usual morass of arcane jargon. If you like my blog, you can probably follow most of the World Development Report. If you don’t have time to read the whole thing, you can at least skim through all the sidebars and figures to get a general sense of what it’s all about. Much of the report is written in the form of personal vignettes that make the general principles more vivid; but these are not mere anecdotes, for the report rigorously cites an enormous volume of empirical research.

The title of the 2015 report? “Mind, Society, and Behavior”. In other words, cognitive economics. The world’s foremost international economic institution has just endorsed cognitive economics and rejected neoclassical economics, and their report on the subject provides a brilliant introduction to the subject replete with direct applications to international development.

For someone like me who lives and breathes cognitive economics, the report is pure joy. It’s all there, from the anchoring heuristic to social proof, corruption to discrimination. The report is broadly divided into three parts.

Part 1 explains the theory and evidence of cognitive economics, subdivided into “thinking automatically” (heuristics), “thinking socially” (social cognition), and “thinking with mental models” (bounded rationality). (If I wrote it I’d also include sections on the tribal paradigm and narrative, but of course I’ll have to publish that stuff in the actual research literature first.) Anyway, the report is so good as it is that I really can’t complain. It includes some truly brilliant deorbits of neoclassical economics, such as this one from page 47: “In other words, the canonical model of human behavior is not supported in any society that has been studied.”

Part 2 uses cognitive economic theory to analyze and improve policy. This is the core of the report, with chapters on poverty, childhood, finance, productivity, ethnography, health, and climate change. So many different policies are analyzed that I’m not sure I can summarize them with any justice, but a few particularly stood out. First, the high cognitive demands of poverty can basically explain the whole observed difference in IQ between rich and poor people—so contrary to the right-wing belief that people are poor because they are stupid, in fact people seem stupid because they are poor.

Simplifying the procedures for participation in social welfare programs (which is desperately needed, I say with a stack of incomplete Medicaid paperwork on my table—even I find these packets confusing, and I have a master’s degree in economics) not only increases their uptake but also makes people more satisfied with them—and of course a basic income could simplify social welfare programs enormously. “Are you a US citizen? Is it the first of the month? Congratulations, here’s $670.” Another finding that I found particularly noteworthy is that productivity is in many cases enhanced more by unconditional gifts than by incentives that are conditional on behavior—which goes against the very core of neoclassical economic theory. (It also gives us yet another item on the enormous list of benefits of a basic income: Far from reducing work incentives by the income effect, an unconditional basic income, as a shared gift from your society, may well motivate you even more than the same payment as a wage.)
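If you like to think in code, the simplification argument can be made vivid with a toy sketch. Everything here is purely illustrative: the means-tested thresholds are numbers I made up for the caricature, and only the $670 figure comes from the example above.

```python
# Toy illustration of why a basic income simplifies welfare eligibility.
# The means-tested rules and thresholds below are hypothetical caricatures;
# real programs involve far more paperwork, categories, and verification.

def means_tested_eligible(monthly_income, assets, household_size, paperwork_complete):
    """A (still drastically simplified) caricature of means-tested eligibility."""
    income_limit = 1000 + 350 * household_size  # hypothetical threshold
    asset_limit = 2250                          # hypothetical threshold
    return (paperwork_complete
            and monthly_income <= income_limit
            and assets <= asset_limit)

def basic_income_payment(is_citizen):
    """The one-line rule from the example: are you a citizen? Here's $670."""
    return 670 if is_citizen else 0
```

The point is not the particular numbers but the shape of the logic: one rule, checkable in a second, versus a conjunction of conditions each of which demands documentation someone can fail to file.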

Part 3 is a particularly bold addition: It turns the tables and applies cognitive economics to economists themselves, showing that human irrationality is by no means limited to idiots or even to poor people (as the report discusses in chapter 4, there are certain biases that poor people exhibit more—but there are also some they exhibit less); all human beings are limited by the same basic constraints, and economists are human beings. We like to think of ourselves as infallibly rational, but we are nothing of the sort. Even after years of studying cognitive economics, I still sometimes catch myself making mistakes based on heuristics, particularly when I’m stressed or tired. As a long-term example, I have a number of vague notions of entrepreneurial projects I’d like to do, but none for which I have been able to muster the effort and confidence to actually seek loans or investors. Rationally, I should either commit to them or abandon them, yet I cannot quite bring myself to do either. And then of course I’ve never met anyone who didn’t procrastinate to some extent—and actually those of us who are especially smart often seem especially prone—though we often adopt the strategy of “active procrastination”, in which we end up doing something else useful while procrastinating (my apartment becomes cleanest when I have an important project to work on), or purposefully choose to work under pressure because we are more effective that way.

And the World Bank pulled no punches here, showing experiments on World Bank economists clearly demonstrating confirmation bias, sunk-cost fallacy, and what the report calls “home team advantage”, more commonly called ingroup-outgroup bias—which is basically a form of the much more general principle that I call the tribal paradigm.

If there is one flaw in the report, it’s that it’s quite long and fairly exhausting to read, which means that many people won’t even try and many who do won’t make it all the way through. (The fact that it doesn’t seem to be available in hard copy makes it worse; it’s exhausting to read lengthy texts online.) We only have so much attention and processing power to devote to a task, after all—which is kind of the whole point, really.