The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies attempt to replicate published scientific results, the success rate is almost always dismal.

Psychology is the one everyone likes to pick on, because its record is particularly bad: only 39% of studies replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.

There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis—when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability you would get the observed result if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard threshold is a p-value of 0.05; indeed, it has become so enshrined that it is now almost an explicit condition of publication. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value below 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
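To see why the prior matters so much, consider what fraction of “significant” findings reflect real effects. Here is a back-of-the-envelope calculation, a sketch in Python with made-up power and prior values rather than data from any particular field:

```python
def prob_real_given_significant(prior, power=0.8, alpha=0.05):
    """P(effect is real | p < alpha), by Bayes' rule."""
    true_positives = power * prior          # real effects that reach significance
    false_positives = alpha * (1 - prior)   # null effects that reach it by chance
    return true_positives / (true_positives + false_positives)

# From a near-certain hypothesis (gravity) to a near-impossible one (precognition)
for prior in (0.99, 0.5, 0.1, 0.001):
    print(f"prior {prior:>5}: P(real | significant) = "
          f"{prob_real_given_significant(prior):.3f}")
```

With a prior of 0.001, even a significant result leaves the hypothesis almost certainly false, which is exactly the precognition case.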

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
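A quick simulation makes the file drawer problem vivid; this is a sketch with arbitrary sample sizes, since only the selection logic matters:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 1,000 studies of an effect that does not exist at all
pvalues = []
for _ in range(1000):
    treatment = rng.normal(0.0, 1.0, size=30)  # true effect size: zero
    control = rng.normal(0.0, 1.0, size=30)
    pvalues.append(stats.ttest_ind(treatment, control).pvalue)

published = [p for p in pvalues if p < 0.05]
print(f"{len(published)} of 1000 null studies cleared p < 0.05")
# Roughly 50 "significant" findings reach print; the other ~950 sit in
# the file drawer, and every single published result is a false positive.
```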

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching, for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions that force it to happen. Journal editors shouldn’t even see the effect size and p-value before they make the decision to publish; all they should care about is that the experiment makes sense and the proper procedure was followed.

If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

The facts will not speak for themselves, so we must speak for them

August 3, JDN 2457604

I finally began to understand the bizarre and terrifying phenomenon that is the Donald Trump Presidential nomination when I watched this John Oliver episode:

https://www.youtube.com/watch?v=U-l3IV_XN3c

These lines in particular, near the end, finally helped me put it all together:

What is truly revealing is his implication that believing something to be true is the same as it being true. Because if anything, that was the theme of the Republican Convention this week; it was a four-day exercise in emphasizing feelings over facts.

The facts against Donald Trump are absolutely overwhelming. He is not even a competent businessman, just a spectacularly manipulative one—and even then, it’s not clear he made any more money than he would have just by keeping his inheritance in a diversified stock portfolio. His casinos were too fraudulent for Atlantic City. His university was fraudulent. He has the worst honesty rating Politifact has ever given a candidate. (Bernie Sanders, Barack Obama, and Hillary Clinton are statistically tied for some of the best.)

More importantly, almost every policy he has proposed or even suggested is terrible, and several of them could be truly catastrophic.

Let’s start with economic policy: His trade policy would set back decades of globalization and dramatically increase global poverty, while doing little or nothing to expand employment in the US, especially if it sparks a trade war. His fiscal policy would permanently balloon the deficit by giving one of the largest tax breaks to the rich in history. His infamous wall would probably cost about as much as the federal government currently spends on all basic scientific research combined, and his only proposal for funding it fundamentally misunderstands how remittances and trade deficits work. He doesn’t believe in climate change, and would roll back what little progress we have made at reducing carbon emissions, thereby endangering millions of lives. He could very likely cause a global economic collapse comparable to the Great Depression.

His social policy is equally terrible: He has proposed criminalizing abortion (in express violation of Roe v. Wade), which even many pro-life people find too extreme. He wants to deport all Muslims and ban Muslims from entering, which is not just a direct First Amendment violation but also literally involves jackbooted soldiers breaking into the homes of law-abiding US citizens to kidnap them and take them out of the country. He wants to deport 11 million undocumented immigrants, which would be the largest deportation in US history.

Yet it is in foreign policy above all that Trump is truly horrific. He has explicitly endorsed targeting the families of terrorists, which is a war crime (though not as bad as what Ted Cruz wanted to do, which is carpet-bombing cities). Speaking of war crimes, he thinks our torture policy wasn’t severe enough, and doesn’t even care if it is ineffective. He has made the literally mercantilist assertion that the purpose of military alliances is to create trade surpluses, and if European countries will not provide us with trade surpluses (read: tribute), he will no longer commit to defending them, thereby undermining decades of global stability that is founded upon America’s unwavering commitment to defend our allies. And worst of all, he will not rule out the first-strike deployment of nuclear weapons.

I want you to understand that I am not exaggerating when I say that a Donald Trump Presidency carries a nontrivial risk of triggering global nuclear war. Will this probably happen? No. It has a probability of perhaps 1%. But a 1% chance of a billion deaths is not a risk anyone should be prepared to take.

All of these facts scream at us that Donald Trump would be a catastrophe for America and the world. Why, then, are so many people voting for him? Why do our best election forecasts give him a good chance of winning the election?

Because facts don’t speak for themselves.

This is how the left, especially the center-left, has dropped the ball in recent decades. We joke that reality has a liberal bias, because so many of the facts are so obviously on our side. But meanwhile the right wing has nodded and laughed, even mockingly called us the “reality-based community”, because they know how to manipulate feelings.

Donald Trump has essentially no other skills—but he has that one, and it is enough. He knows how to fan the flames of anger and hatred and point them at his chosen targets. He knows how to rally people behind meaningless slogans like “Make America Great Again” and convince them that he has their best interests at heart.

Indeed, Trump’s persuasiveness is one of his many parallels with Adolf Hitler; I am not yet prepared to accuse Donald Trump of seeking genocide, yet at the same time I am not yet willing to put it past him. I don’t think it would take much of a spark at this point to trigger a conflagration of hatred that launches a genocide against Muslims in the United States, and I don’t trust Trump not to light such a spark.

Meanwhile, liberal policy wonks are looking on in horror, wondering how anyone could be so stupid as to believe him—and even publicly basically calling people stupid for believing him. Or sometimes we say they’re not stupid, they’re just racist. But people don’t believe Donald Trump because they are stupid; they believe Donald Trump because he is persuasive. He knows the inner recesses of the human mind and can harness our heuristics to his will. Do not mistake your unique position that protects you—some combination of education, intellect, and sheer willpower—for some inherent superiority. You are not better than Trump’s followers; you are more resistant to Trump’s powers of persuasion. Yes, statistically, Trump voters are more likely to be racist; but racism is a deep-seated bias in the human mind that to some extent we all share. Trump simply knows how to harness it.

Our enemies are persuasive—and therefore we must be as well. We can no longer act as though facts will automatically convince everyone by the power of pure reason; we must learn to stir emotions and rally crowds just as they do.

Or rather, not just as they do—not quite. When we see lies being so effective, we may be tempted to lie ourselves. When we see people being manipulated against us, we may be tempted to manipulate them in return. But in the long run, we can’t afford to do that. We do need to use reason, because reason is the only way to ensure that the beliefs we instill are true.

Therefore our task must be to make people see reason. Let me be clear: Not demand they see reason. Not hope they see reason. Not lament that they don’t. This will require active investment on our part. We must actually learn to persuade people in such a manner that their minds become more open to reason. This will mean using tools other than reason, but it will also mean treading a very fine line, using irrationality only when rationality is insufficient.

We will be tempted to take the easier, quicker path to the Dark Side, but we must resist. Our goal must be not to make people do what we want them to—but to do what they would want to if they were fully rational and fully informed. We will need rhetoric; we will need oratory; we may even need some manipulation. But as we fight our enemy, we must be vigilant not to become them.

This means not using bad arguments—strawmen and conmen—but pointing out the flaws in our opponents’ arguments even when they seem obvious to us—bananamen. It means not overstating our case about free trade or using implausible statistical results simply because they support our case.

But it also means not understating our case, not hiding on page 17 of an opaque technical report the fact that if we don’t do something about climate change right now, millions of people will die. It means not presenting our ideas as “political opinions” when they are demonstrated, indisputable scientific facts. It means taking the media to task for the false balance by which they must find a way to criticize a Democrat every time they criticize a Republican: Sure, he is a pathological liar and might trigger global economic collapse or even nuclear war, but she didn’t secure her emails properly. If you objectively assess the facts and find that Republicans lie three times as often as Democrats, maybe that’s something you should be reporting on instead of trying to compensate for by changing your criteria.

Speaking of the media, we should be pressuring them to include a regular—preferably daily, preferably primetime—segment on climate change, because yes, it is that important. How about after the weather report every day, you show a climate scientist explaining why we keep having record-breaking summer heat and more frequent natural disasters? If we suffer a global ecological collapse, this other stuff you’re constantly talking about really isn’t going to matter—that is, if it mattered in the first place. When ISIS kills 200 people in an attack, you don’t just report that a bunch of people died without examining the cause or talking about responses. But when a typhoon triggered by climate change kills 7,000, suddenly it’s just a random event, an “act of God” that nobody could have predicted or prevented. Having an appropriate caution about whether climate change caused any particular disaster should not prevent us from drawing the very real links between more carbon emissions and more natural disasters—and sometimes there’s just no other explanation.

It means demanding fact-checks immediately, not as some kind of extra commentary that happens after the debate, but as something the moderator says right then and there. (You have a staff, right? And they have Google access, right?) When a candidate says something that is blatantly, demonstrably false, they should receive a warning. After three warnings, their mic should be cut for that question. After ten, they should be kicked off the stage for the remainder of the debate. Donald Trump wouldn’t have lasted five minutes. But instead, they not only let him speak, they spent the next week repeating what he said in bold, exciting headlines. At least CNN finally realized that their headlines could actually fact-check Trump’s statements rather than just repeat them.

Above all, we will need to understand why people think the way they do, and learn to speak to them persuasively and truthfully but without elitism or condescension. This is one I know I’m not very good at myself; sometimes I get so frustrated with people who think the Earth is 6,000 years old (over 40% of Americans) or don’t believe in climate change (35% don’t think it is happening at all, another 30% don’t think it’s a big deal) that I come off as personally insulting them—and of course from that point forward they turn off. But irrational beliefs are not proof of defective character, and we must make that clear to ourselves as well as to others. We must not say that people are stupid or bad; but we absolutely must say that they are wrong. We must also remember that despite our best efforts, some amount of reactance will be inevitable; people simply don’t like having their beliefs challenged.

Yet even all this is probably not enough. Many people don’t watch mainstream media, or don’t believe it when they do (not without reason). Many people won’t even engage with friends or family members who challenge their political views, and will defriend or even disown them. We need some means of reaching these people too, and the hardest part may be simply getting them to listen to us in the first place. Perhaps we need more grassroots action—more protest marches, or even activists going door to door like Jehovah’s Witnesses. Perhaps we need to establish new media outlets that will be as widely accessible but held to a higher standard.

But we must find a way—and we have little time to waste.

Moral responsibility does not inherit across generations

JDN 2457548

In last week’s post I made a sharp distinction between believing in human progress and believing that colonialism was justified. To make this argument, I relied upon a moral assumption that seems to me perfectly obvious, and probably would to most ethicists as well: Moral responsibility does not inherit across generations, and people are only responsible for their individual actions.

But in fact this principle is not uncontroversial in many circles. When I read utterly nonsensical arguments like this one from the aptly-named Race Baitr saying that White people have no role to play in the liberation of Black people, apparently because our blood is somehow tainted by the crimes of our ancestors, it becomes apparent to me that this principle is not obvious to everyone, and therefore is worth defending. Indeed, many applications of the concept of “White Privilege” seem to ignore this principle, speaking as though racism is not something one does or participates in, but something that one is simply by being born with less melanin. Here’s a Salon interview specifically rejecting the proposition that racism is something one does:

For white people, their identities rest on the idea of racism as about good or bad people, about moral or immoral singular acts, and if we’re good, moral people we can’t be racist – we don’t engage in those acts. This is one of the most effective adaptations of racism over time—that we can think of racism as only something that individuals either are or are not “doing.”

If racism isn’t something one does, then what in the world is it? It’s all well and good to talk about systems and social institutions, but ultimately systems and social institutions are made of human behaviors. If you think most White people aren’t doing enough to combat racism (which sounds about right to me!), say that—don’t make some bizarre accusation that simply by existing we are inherently racist. (Also: We? I’m only 75% White, so am I only 75% inherently racist?) And please, stop redefining the word “racism” to mean something other than what everyone uses it to mean; “White people are snakes” is in fact a racist sentiment (and yes, one I’ve actually heard—indeed, here is the late Muhammad Ali comparing all White people to rattlesnakes, and Huffington Post fawning over him for it).

Racism is clearly more common and typically worse when performed by White people against Black people—but contrary to the claims of some social justice activists the White perpetrator and Black victim are not part of the definition of racism. Similarly, sexism is more common and more severe committed by men against women, but that doesn’t mean that “men are pigs” is not a sexist statement (and don’t tell me you haven’t heard that one). I don’t have a good word for bigotry by gay people against straight people (“heterophobia”?) but it clearly does happen on occasion, and similarly cannot be defined out of existence.

I wouldn’t care so much that you make this distinction between “racism” and “racial prejudice”, except that it’s not the normal usage of the word “racism” and therefore confuses people, and also this redefinition clearly is meant to serve a political purpose that is quite insidious, namely making excuses for the most extreme and hateful prejudice as long as it’s committed by people of the appropriate color. If “White people are snakes” is not racism, then the word has no meaning.

Not all discussions of “White Privilege” are like this, of course; this article from Occupy Wall Street actually does a fairly good job of making “White Privilege” into a sensible concept, albeit still not a terribly useful one in my opinion. I think the useful concept is oppression—the problem here is not how we are treating White people, but how we are treating everyone else. What privilege gives you, on this account, is “the freedom to be who you are.” Shouldn’t everyone have that?

Almost all the so-called “benefits” or “perks” associated with “privilege” are actually forgone harms—they are not good things done to you, but bad things not done to you. As the article puts it: “But benefitting from racist systems doesn’t mean that everything is magically easy for us. It just means that as hard as things are, they could always be worse.” No, that is not what the word “benefit” means. The word “benefit” means you would be worse off without it—and in most cases that simply isn’t true. Many White people obviously think that it is true—which is probably a big reason why so many White people fight so hard to defend racism, you know: you’ve convinced them it is in their self-interest. But, with rare exceptions, it is not; most racial discrimination has literally zero long-run benefit. It’s just bad. Maybe if we helped people appreciate that more, they would be less resistant to fighting racism!

The only features of “privilege” that really make sense as benefits are those that occur in a state of competition—like being more likely to be hired for a job or get a loan—but one of the most important insights of economics is that competition is nonzero-sum, and fairer competition ultimately means a more efficient economy and thus more prosperity for everyone.

But okay, let’s set that aside and talk about this core question of what sort of responsibility we bear for the acts of our ancestors. Many White people clearly do feel deep shame about what their ancestors (or people the same color as their ancestors!) did hundreds of years ago. The psychological reactance to that shame may actually be what makes so many White people deny that racism even exists (or exists anymore)—though a majority of Americans of all races do believe that racism is still widespread.

We also quite frequently apply some sense of moral responsibility to whole races. We speak of a policy “benefiting White people” or “harming Black people” and quickly elide the distinction between harming specific people who are Black, and somehow harming “Black people” as a group. The former happens all the time—the latter is utterly nonsensical. Similarly, we speak of a “debt owed by White people to Black people” (which might actually make sense in the very narrow sense of economic reparations, because people do inherit money! They probably shouldn’t—that is literally feudalist—but in the existing system they in fact do), which makes about as much sense as a debt owed by tall people to short people. As Walter Benn Michaels pointed out in The Trouble with Diversity (which I highly recommend), because of this bizarre sense of responsibility we are often in the habit of “apologizing for something you didn’t do to people to whom you didn’t do it (indeed to whom it wasn’t done)”. It is my responsibility to condemn colonialism (which I indeed do), and to fight to ensure that it never happens again; it is not my responsibility to apologize for colonialism.

This makes some sense in evolutionary terms; it’s part of the all-encompassing tribal paradigm, wherein human beings come to identify themselves with groups and treat those groups as the meaningful moral agents. It’s much easier to maintain the cohesion of a tribe against the slings and arrows (sometimes quite literal) of outrageous fortune if everyone believes that the tribe is one moral agent worthy of ultimate concern.

This concept of racial responsibility is clearly deeply ingrained in human minds, for it appears in some of our oldest texts, including the Bible: “You shall not bow down to them or worship them; for I, the Lord your God, am a jealous God, punishing the children for the sin of the parents to the third and fourth generation of those who hate me” (Exodus 20:5).

Why is inheritance of moral responsibility across generations nonsensical? Any number of reasons, take your pick. The economist in me leaps to “Ancestry cannot be incentivized.” There’s no point in holding people responsible for things they can’t control, because in doing so you will not in any way alter behavior. The Stanford Encyclopedia of Philosophy article on moral responsibility takes it as so obvious that people are only responsible for actions they themselves did that they don’t even bother to mention it as an assumption. (Their big question is how to reconcile moral responsibility with determinism, which turns out to be not all that difficult.)

An interesting counter-argument might be that descent can be incentivized: You could use rewards and punishments applied to future generations to motivate current actions. But this is actually one of the ways that incentives clearly depart from moral responsibilities; you could incentivize me to do something by threatening to murder 1,000 children in China if I don’t, but even if it was in fact something I ought to do, it wouldn’t be those children’s fault if I didn’t do it. They wouldn’t deserve punishment for my inaction—I might, and you certainly would for using such a cruel incentive.

Moreover, there’s a problem with dynamic consistency here: Once the action is already done, what’s the sense in carrying out the punishment? This is why a moral theory of punishment can’t merely be based on deterrence—the fact that you could deter a bad action by some other less-bad action doesn’t make the less-bad action necessarily a deserved punishment, particularly if it is applied to someone who wasn’t responsible for the action you sought to deter. In any case, people aren’t thinking that we should threaten to punish future generations if people are racist today; they are feeling guilty that their ancestors were racist generations ago. That doesn’t make any sense even on this deterrence theory.

There’s another problem with trying to inherit moral responsibility: People have lots of ancestors. Some of my ancestors were most likely rapists and murderers; most were ordinary folk; a few may have been great heroes—and this is true of just about anyone anywhere. We all have bad ancestors, great ancestors, and, mostly, pretty good ancestors. 75% of my ancestors are European, but 25% are Native American; so if I am to apologize for colonialism, should I be apologizing to myself? (Only 75%, perhaps?) If you go back enough generations, literally everyone is related—and you may only have to go back about 4,000 years. That’s historical time.

Of course, we wouldn’t be different colors in the first place if there weren’t some differences in ancestry, but there is a huge amount of gene flow between different human populations. The US is a particularly mixed place; because most Black Americans are quite genetically mixed, it is about as likely that any randomly-selected Black person in the US is descended from a slaveowner as it is that any randomly-selected White person is. (Especially since there were a large number of Black slaveowners in Africa and even some in the United States.) What moral significance does this have? Basically none! That’s the whole point; your ancestors don’t define who you are.

If these facts do have any moral significance, it is to undermine the sense most people seem to have that there are well-defined groups called “races” that exist in reality, to which culture responds. No; races were created by culture. I’ve said this before, but it bears repeating: The “races” we hold most dear in the US, White and Black, are in fact the most nonsensical. “Asian” and “Native American” at least almost make sense as categories, though Chippewa are more closely related to Ainu than Ainu are to Papuans. “Latino” isn’t utterly incoherent, though it includes as much Aztec as it does Iberian. But “White” is a club one can join or be kicked out of, while “Black” encompasses the majority of human genetic diversity.

Sex is a real thing—while there are intermediate cases of course, broadly speaking humans, like most metazoa, are sexually dimorphic and come in “male” and “female” varieties. So sexism took a real phenomenon and applied cultural dynamics to it; but that’s not what happened with racism. Insofar as there was a real phenomenon, it was extremely superficial—quite literally skin deep. In that respect, race is more like class—a categorization that is itself the result of social institutions.

To be clear: Does the fact that we don’t inherit moral responsibility from our ancestors absolve us from doing anything to rectify the inequities of racism? Absolutely not. Not only is there plenty of present discrimination going on we should be fighting, there are also inherited inequities due to the way that assets and skills are passed on from one generation to the next. If my grandfather stole a painting from your grandfather and both our grandfathers are dead but I am now hanging that painting in my den, I don’t owe you an apology—but I damn well owe you a painting.

The further we get from the past discrimination, the harder it becomes to make reparations; but all hope is not lost. We still have the option of trying to reset everyone’s status to the same at birth and maintaining equality of opportunity from there. Of course we’ll never achieve total equality of opportunity—but we can get much closer than we presently are.

We could start by establishing an extremely high estate tax—on the order of 99%—because no one has a right to be born rich. Free public education is another good way of equalizing the distribution of “human capital” that would otherwise be concentrated in particular families, and expanding it to higher education would make it that much better. It even makes sense, at least in the short run, to establish some affirmative action policies that are race-conscious and sex-conscious, because there are so many biases in the opposite direction that sometimes you must fight bias with bias.

Actually, what I think we should do in hiring, for example, is assemble a pool of applicants based on demographic quotas to ensure a representative sample, and then anonymize the applications and assess them on merit. This way we ensure representation and reduce bias, but never end up hiring anyone other than the most qualified candidate. But nowhere should we think that this is something that White men “owe” to women or Black people; it’s something that people should do in order to correct the biases that otherwise exist in our society. Similarly with regard to sexism: Women exhibit just as much unconscious bias against other women as men do. This is not “men” hurting “women”—this is a set of unconscious biases found almost everywhere, and social structures almost everywhere, that systematically discriminate against people because they are women.
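Here is a minimal sketch of that hiring procedure; the names, groups, and scores are hypothetical placeholders, and real merit is of course far harder to quantify:

```python
import random

# Hypothetical applicant records: (name, demographic group, merit score)
applicants = [
    ("A. Adams", "group_1", 82), ("B. Baker", "group_1", 91),
    ("C. Chen",  "group_2", 88), ("D. Diaz",  "group_2", 79),
    ("E. Evans", "group_3", 85), ("F. Ford",  "group_3", 93),
]

def hire(applicants, per_group=2):
    # Step 1: assemble a representative pool via demographic quotas
    pool = []
    for group in {g for _, g, _ in applicants}:
        members = [a for a in applicants if a[1] == group]
        pool.extend(random.sample(members, min(per_group, len(members))))
    # Step 2: anonymize, so the evaluator sees only merit scores
    scored = [(score, i) for i, (_, _, score) in enumerate(pool)]
    # Step 3: select purely on merit; de-anonymize only the final choice
    best_score, best_index = max(scored)
    return pool[best_index][0]

print(hire(applicants))
```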

Perhaps by understanding that this is not about which “team” you’re on (which tribe you’re in), but what policy we should have, we can finally make these biases disappear, or at least fade so small that they are negligible.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

[Figure: US_inflation]

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

[Figure: US_GDP_growth]

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, it is likely that broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
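As a rough sanity check on that Great Depression figure, here is the back-of-the-envelope version of the rule of thumb (the 25% input is the standard estimate of peak narrow unemployment in 1933):

```python
# Rule of thumb: broad unemployment is roughly 1.8 times narrow unemployment
narrow_1933 = 0.25              # narrow unemployment peaked near 25% in 1933
broad_1933 = 1.8 * narrow_1933
print(f"estimated broad unemployment, 1933: {broad_1933:.0%}")  # ~45%
```

That lands in the same ballpark as the near-50% estimate above.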

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

[Figure: job_openings]

This graph shows hires from 2005 to 2015:

[Figure: job_hires]

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

[Figure: job_separations]

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credibility to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the many long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity; these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for awhile.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the information about the quality of something can only be determined by paying the cost of purchasing it, there is basically no way of assessing the quality of things before we purchase them.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

Drift-diffusion decision-making: The stock market in your brain

JDN 2456173 EDT 17:32.

Since I’ve been emphasizing the “economics” side of things a lot lately, I decided this week to focus more on the “cognitive” side. Today’s topic comes from cutting-edge research in cognitive science and neuroeconomics, so we still haven’t ironed out all the details.

The question we are trying to answer is an incredibly basic one: How do we make decisions? Given the vast space of possible behaviors human beings can engage in, how do we determine which ones we actually do?

There are actually two phases of decision-making.

The first phase is alternative generation, in which we come up with a set of choices. Some ideas occur to us, others do not; some are familiar and come to mind easily, others only appear after careful consideration. Techniques like brainstorming exist to help us with this task, but none of them are really very good; one of the most important bottlenecks in human cognition is the individual capacity to generate creative alternatives. The task is mind-bogglingly complex; the number of possible choices you could make at any given moment is already vast, and with each passing moment the number of possible behavioral sequences grows exponentially. Just think about all the possible sentences I could type right now, and then think about how incredibly narrow a space of possible behavioral options it is to assume that I’m typing sentences.

Most of the world’s innovation can ultimately be attributed to better alternative generation; particularly with regard to social systems, but in many cases even with regard to technologies, the capability existed for decades or even centuries but the idea simply never occurred to anyone. (You can see this by looking at the work of Heron of Alexandria and Leonardo da Vinci; the capacity to build these machines existed, and a handful of individuals were creative enough to actually try it, but it never occurred to anyone that there could be enormous, world-changing benefits to expanding these technologies for mass production.)

Unfortunately, we basically don’t understand alternative generation at all. It’s an almost complete gap in our understanding of human cognition. It actually has a lot to do with some of the central unsolved problems of cognitive science and artificial intelligence; if we could create a computer that is capable of creative thought, we would basically make human beings obsolete once and for all. (Oddly enough, physical labor is probably where human beings would still be necessary the longest; robots aren’t yet very good at climbing stairs or lifting irregularly-shaped objects, much less giving haircuts or painting on canvas.)

The second part is what most “decision-making” research is actually about, and I’ll call it alternative selection. Once you have a list of two, three or four viable options—rarely more than this, as I’ll talk about more in a moment—how do you go about choosing the one you’ll actually do?

This is a topic that has undergone considerable research, and we’re beginning to make progress. The leading models right now are variants of drift-diffusion (hence the title of the post), and these models have the very appealing property that they are neurologically plausible, predictively accurate, and yet close to rationally optimal.

Drift-diffusion models basically are, as I said in the subtitle, a stock market in your brain. Picture the stereotype of the trading floor of the New York Stock Exchange, with hundreds of people bustling about, shouting “Buy!” “Sell!” “Buy!” with the price going up with every “Buy!” and down with every “Sell!”; in reality the NYSE isn’t much like that, and hasn’t been for decades, because everyone is staring at a screen and most of the trading is automated and occurs in microseconds. (It’s kind of like how if you draw a cartoon of a doctor, they will invariably be wearing a head mirror, but if you’ve actually been to a doctor lately, they don’t actually wear those anymore.)

Drift-diffusion, however, is like that. Let’s say we have a decision to make, “Yes” or “No”. Thousands of neurons devoted to that decision start firing, some saying “Yes”, exciting other “Yes” neurons and inhibiting “No” neurons, while others say “No”, exciting other “No” neurons and inhibiting other “Yes” neurons. New information feeds in, triggering some to “Yes” and others to “No”. The resulting process behaves like a random walk, specifically a trend random walk, where the intensity of the trend is determined by whatever criteria you are feeding into the decision. The decision will be made when a certain threshold is reached, say, 95% agreement among all neurons.

I wrote a little R program to demonstrate drift-diffusion models; the images I’ll be showing are R plots from that program. The graphs represent the aggregated “opinion” of all the deciding neurons; as you go from left to right, time passes, and the opinions “drift” toward one side or the other. For these graphs, the top of the graph represents the better choice.
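That R program isn’t reproduced here, but the core of such a simulation is only a few lines. Here is a minimal sketch of the same idea in Python, with purely illustrative parameters:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def drift_diffusion(drift, threshold, noise=1.0, dt=0.01, max_time=10.0):
    """One decision: accumulate noisy evidence until a threshold is crossed.

    Returns (choice, decision_time, path); choice +1 is the top (better)
    option, -1 the bottom, 0 if no threshold is reached in time.
    """
    steps = int(max_time / dt)
    x, path = 0.0, [0.0]
    for step in range(1, steps + 1):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        path.append(x)
        if abs(x) >= threshold:
            return int(np.sign(x)), step * dt, path
    return 0, max_time, path  # still undecided at the time limit

# Strong evidence (large drift): decisions are fast and almost always correct
for _ in range(20):
    _, _, path = drift_diffusion(drift=1.5, threshold=2.0)
    plt.plot(np.arange(len(path)) * 0.01, path, alpha=0.4)
plt.axhline(2.0, color="k"); plt.axhline(-2.0, color="k")
plt.xlabel("time"); plt.ylabel("aggregate opinion")
plt.show()
```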

It may actually be easiest to understand if you imagine that we are choosing a belief; new evidence accumulates that pushes us toward the correct answer (top) or the incorrect answer (bottom), because even a true belief will have some evidence that seems to be against it. You encounter this evidence more or less randomly (or do you?), and which belief you ultimately form will depend upon both how strong the evidence is and how thoughtful you are in forming your beliefs.

If the evidence is very strong (or in general, the two choices are very different), the trend will be very strong, and you’ll almost certainly come to a decision very quickly:

[Figure: strong_bias]

If the evidence is weaker (the two choices are very similar), the trend will be much weaker, and it will take much longer to make a decision:

[Figure: weak_bias]

One way to make a decision faster would be to have a weaker threshold, like 75% agreement instead of 95%; but this has the downside that it can result in making the wrong choice. Notice how some of the paths go down to the bottom, which in this case is the worse choice:

[Figure: low_threshold]

But if there is actually no difference between the two options, a low threshold is good, because you don’t spend time waffling over a pointless decision. (I know that I’ve had a problem with that in real life, spending too long making a decision that ultimately is of minor importance; my drift thresholds are too high!) With a low threshold, you get it over with:

[Figure: indifferent]

With a high threshold, you can go on for ages:

[Figure: ambivalent]

This is the difference between being indifferent about a decision and being ambivalent. If you are indifferent, you are dealing with two small amounts of utility and it doesn’t really matter which one you choose. If you are ambivalent, you are dealing with two large amounts of utility and it’s very important to get it right—but you aren’t sure which one to choose. If you are indifferent, you should use a low threshold and get it over with; but if you are ambivalent, it actually makes sense to keep your threshold high and spend a lot of time thinking about the problem in order to be sure you get it right.

It’s also possible to set a higher threshold for one option than the other; I think this is actually what we’re doing when we exhibit many cognitive biases like confirmation bias. If the decision you’re making is between keeping your current beliefs and changing them to something else, your diffusion space actually looks more like this:

[Figure: confirmation_bias]

You’ll only make the correct choice (top) if you set equal thresholds (meaning you reason fairly instead of exhibiting cognitive biases) and high thresholds (meaning you spend sufficient time thinking about the question). If I may change to a sports metaphor, people tend to move the goalposts—the team “change your mind” has to kick a lot further than the team “keep your current belief”.

We can also extend drift-diffusion models to changing your mind (or experiencing regret such as “buyer’s remorse”) if we assume that the system doesn’t actually cut off once it reaches a threshold; the threshold makes us take the action, but then our neurons keep on arguing it out in the background. We may hover near the threshold or soar off into absolute certainty—but on the other hand we may waffle all the way back to the other decision:

[Figure: regret]

There are all sorts of generalizations and extensions of drift-diffusion models, but these basic ones should give you a sense of how useful they are. More importantly, they are accurate; drift-diffusion models produce very sharp mathematical predictions about human behavior, and in general these predictions are verified in experiments.

The main reason we started using drift-diffusion models is that they account very well for the fact that decisions become more accurate when we spend more time on them. The way they do that is quite elegant: Under harsher time pressure, we use lower thresholds, which speeds up the process but also introduces more errors. When we don’t have time pressure, we use high thresholds and take a long time, but almost always make the right decision.
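A quick sweep over thresholds makes that tradeoff concrete; this is the same kind of sketch as above, again with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(drift=0.5, threshold=1.0, noise=1.0, dt=0.01, max_time=50.0):
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_time:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t  # correct if the top boundary was hit

for threshold in (0.5, 1.0, 2.0, 3.0):
    trials = [decide(threshold=threshold) for _ in range(2000)]
    accuracy = np.mean([correct for correct, _ in trials])
    mean_time = np.mean([t for _, t in trials])
    print(f"threshold {threshold}: accuracy {accuracy:.2f}, "
          f"mean time {mean_time:.2f}")
```

Low thresholds finish quickly but err often; high thresholds are slow but reliable.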

Under certain (rather narrow) circumstances, drift-diffusion models can actually be equivalent to the optimal Bayesian model. These models can also be extended for use in purchasing choices, and one day we will hopefully have a stock-market-in-the-brain model of actual stock market decisions!
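
For the mathematically inclined, the standard route to that Bayesian equivalence (my summary of the usual argument, not a derivation from this post) goes through Wald’s sequential probability ratio test:

    % Wald's SPRT: accumulate the log-likelihood ratio of each evidence
    % sample x_t under the two hypotheses, and stop at fixed boundaries.
    \[
      L_T = \sum_{t=1}^{T} \log \frac{p(x_t \mid H_1)}{p(x_t \mid H_0)}
    \]
    \[
      L_T \ge \log \tfrac{1-\beta}{\alpha} \Rightarrow \text{choose } H_1,
      \qquad
      L_T \le \log \tfrac{\beta}{1-\alpha} \Rightarrow \text{choose } H_0
    \]
    % For i.i.d. Gaussian evidence the increments have constant mean and
    % variance, so L_T is a random walk with drift, i.e. a discrete
    % drift-diffusion process; the SPRT minimizes expected decision time
    % for given error rates alpha and beta.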

Drift-diffusion models are based on decisions between two alternatives with only one relevant attribute under consideration, but they are being expanded to decisions with multiple attributes and decisions with multiple alternatives; the fact that this is difficult is in my opinion not a bug but a feature—decisions with multiple alternatives and attributes are actually difficult for human beings to make. The fact that drift-diffusion models have difficulty with the very situations that human beings have difficulty with provides powerful evidence that drift-diffusion models are accurately representing the processes that go on inside a human brain. I’d be worried if it were too easy to extend the models to complex decisions—it would suggest that our model is describing a more flexible decision process than the one human beings actually use. Human decisions really do seem to be attempts to shoehorn two-choice single-attribute decision methods onto more complex problems, and a lot of mistakes we make are attributable to that.

In particular, the phenomena of analysis paralysis and the paradox of choice are easily explained this way. Why is it that when people are given more alternatives, they often spend far more time trying to decide and often end up less satisfied than they were before? This makes sense if, when faced with a large number of alternatives, we spend time trying to compare them pairwise on every attribute, and then get stuck with a whole bunch of incomparable pairwise comparisons that we then have to aggregate somehow: with n alternatives and k attributes, that is k·n(n−1)/2 pairwise comparisons, which grows quadratically as options are added. If we could simply assign a simple utility value to each attribute and sum them up (just k·n evaluations, growing linearly), adding new alternatives should only increase the time required by a small amount and should never result in a reduction in final utility.

When I have an important decision to make, I actually assemble a formal utility model, as I did recently when deciding on a new computer to buy (it should be in the mail any day now!). The hardest part, however, is assigning values to the coefficients in the model; just how much am I willing to spend for an extra gigabyte of RAM, anyway? How exactly do those CPU benchmarks translate into dollar value for me? I can clearly tell that this is not the native process of my mental architecture.
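
A formal utility model of this kind is just a weighted sum of attributes minus the price. Here is a toy version; every coefficient and candidate spec below is invented for illustration and has nothing to do with my actual purchase:

    # Hypothetical coefficients: dollars of value per unit of each attribute.
    WEIGHTS = {
        "ram_gb": 15.0,         # dollars per extra GB of RAM
        "cpu_benchmark": 0.05,  # dollars per benchmark point
        "battery_hours": 40.0,  # dollars per hour of battery life
    }

    def utility(laptop):
        # Linear utility: weighted sum of attributes minus price.
        value = sum(WEIGHTS[k] * laptop[k] for k in WEIGHTS)
        return value - laptop["price"]

    candidates = [
        {"name": "A", "ram_gb": 8,  "cpu_benchmark": 9000,  "battery_hours": 10, "price": 700},
        {"name": "B", "ram_gb": 16, "cpu_benchmark": 12000, "battery_hours": 6,  "price": 1100},
        {"name": "C", "ram_gb": 16, "cpu_benchmark": 8000,  "battery_hours": 12, "price": 900},
    ]

    # Adding another candidate costs one more evaluation, not a quadratic
    # pile of pairwise comparisons.
    best = max(candidates, key=utility)
    print(f"best choice: {best['name']} (net utility ${utility(best):.0f})")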

No, alas, we seem to be stuck with drift-diffusion, which is nearly optimal for choices with two alternatives on a single attribute, but actually pretty awful for multiple-alternative multiple-attribute decisions. But perhaps by better understanding our suboptimal processes, we can rearrange our environment to bring us closer to optimal conditions—or perhaps, one day, change the processes themselves!

Beware the false balance

JDN 2457046 PST 13:47.

I am now back in Long Beach, hence the return to Pacific Time. Today’s post is a little less economic than most, though it’s certainly still within the purview of social science and public policy. It concerns a question that many academic researchers and in general reasonable, thoughtful people have to deal with: How do we remain unbiased and nonpartisan?

This would not be so difficult if the world were as the most devoted “centrists” would have you believe, and it were actually the case that both sides have their good points and bad points, and both sides have their scandals, and both sides make mistakes or even lie, so you should never take the side of the Democrats or the Republicans but always present both views equally.

Sadly, this is not at all the world in which we live. While Democrats are far from perfect—they are human beings after all, not to mention politicians—Republicans have become completely detached from reality. As Stephen Colbert has said, “Reality has a liberal bias.” You know it’s bad when our detractors call us the reality-based community. Treating both sides as equal isn’t being unbiased—it’s committing a balance fallacy.

Don’t believe me? Here is a list of objective, scientific facts that the Republican Party (and particularly its craziest subset, the Tea Party) has officially taken political stances against:

  1. Global warming is a real problem, and largely caused by human activity. (The Republican majority in the Senate voted down a resolution acknowledging this.)
  2. Human beings share a common ancestor with chimpanzees. (48% of Republicans think that we were created in our present form.)
  3. Animals evolve over time due to natural selection. (Only 43% of Republicans believe this.)
  4. The Earth is approximately 4.5 billion years old. (Marco Rubio said he thinks maybe the Earth was made in seven days a few thousand years ago.)
  5. Hydraulic fracturing can trigger earthquakes. (Republicans in Congress are trying to nullify local regulations on fracking because they insist it is so safe we don’t even need to keep track.)
  6. Income inequality in the United States is the worst it has been in decades and continues to rise. (Mitt Romney said that the concern about income inequality is just “envy”.)
  7. Progressive taxation reduces inequality without adversely affecting economic growth. (Here’s a Republican former New York Senator saying that the President “should be ashamed” for raising taxes on—you guessed it—”job creators”.)
  8. Moderate increases in the minimum wage do not yield significant losses in employment. (Republicans consistently vote against even small increases in the minimum wage, and Democrats consistently vote in favor.)
  9. The United States government has no reason to ever default on its debt. (John Boehner, now Speaker of the House, once said that “America is broke” and if we don’t stop spending we’ll never be able to pay the national debt.)
  10. Human embryos are not in any way sentient, and fetuses are not sentient until at least 17 weeks of gestation, probably more like 30 weeks. (Yet if I am to read it in a way that would make moral sense, “Life begins at conception”—which several Republicans explicitly endorsed at the National Right to Life Convention—would have to imply that even zygotes are sentient beings. If you really just meant “alive”, then that would equally well apply to plants or even bacteria. Sentience is the morally relevant category.)

And that’s not even counting the Republican Party’s association with Christianity and all of the objectively wrong scientific claims that necessarily entails—like the existence of an afterlife and the intervention of supernatural forces. Most Democrats also self-identify as Christian, though rarely with quite the same fervor (the last major Democrat I can think of who was a devout Christian was Jimmy Carter), probably because most Americans self-identify as Christian and are hesitant to elect an atheist President (despite the fact that 93% of the members of the National Academy of Sciences are atheists and that the higher your IQ, the more likely you are to be an atheist; we wouldn’t want to elect someone who agrees with smart people, now would we?).

It’s true, there are some other crazy ideas out there with a left-wing slant, like the anti-vaccination movement that has wrought epidemic measles upon us, the anti-GMO crowd that rejects basic scientific facts about genetics, and the 9/11 “truth” movement that refuses to believe that Al Qaeda actually caused the attacks. There are in fact far-left Marxists out there who want to tear down the whole capitalist system by glorious revolution and replace it with… er… something (they’re never quite clear on that last point). But none of these things are the official positions of standing members of Congress.

The craziest belief by a standing Democrat I can think of is Dennis Kucinich’s belief that he saw an alien spacecraft. And to be perfectly honest, alien spacecraft are about a thousand times more plausible than Christianity in general, let alone Creationism. There almost certainly are alien spacecraft somewhere in the universe—just most likely so far away we’ll need FTL to encounter them. Moreover, this is not Kucinich’s official position as a member of Congress and it’s not something he has ever made policy based upon.

Indeed, if you’re willing to include the craziest individuals with no real political power who identify with a particular side of the political spectrum, then we should include on the right-wing side people like the Bundy militia in Nevada, neo-Nazis in Detroit, and the dozens of KKK chapters across the US. Not to mention this pastor who wants to murder all gay people in the world (because he truly believes what Leviticus 20:13 actually and clearly says).

If you get to include Marxists on the left, then we get to include Nazis on the right. Or, we could be reasonable and say that only the official positions of elected officials or mainstream pundits actually count, in which case Democrats have views that are basically accurate and reasonable while the majority of Republicans have views that are still completely objectively wrong.

There’s no balance here. For every Democrat who is wrong, there is a Republican who is totally delusional. For every Democrat who distorts the truth, there is a Republican who blatantly lies about basic facts. Not to mention that for every Democrat who has had an ill-advised illicit affair there is a Republican who has committed war crimes.

Actually, a fair number of Democrats have committed war crimes as well, but the difference still stands out in high relief: Barack Obama has ordered double-tap drone strikes that are in violation of the Geneva Convention, but George W. Bush orchestrated a worldwide mass torture campaign and launched pointless wars that slaughtered hundreds of thousands of people. Bill Clinton ordered some questionable CIA operations, but George H.W. Bush was the director of the CIA.

I wish we had two parties that were equally reasonable. I wish there were two—or three, or four—proposals on the table in each discussion, all of which had merits and flaws worth considering. Maybe if we somehow manage to get the Green Party a significant seat in power, or the Social Democrat party, we can actually achieve that goal. But that is not where we are right now. Right now, we have the Democrats, who have some good ideas and some bad ideas; and then we have the Republicans, who are completely out of their minds.

There is an important concept in political science called the Overton window; it is the range of political ideas that are considered “reasonable” or “mainstream” within a society. Things near the middle of the Overton window are considered sensible, even “nonpartisan” ideas, while things near the edges are “partisan” or “political”, and things near but outside the window are seen as “extreme” and “radical”. Things far outside the window are seen as “absurd” or even “unthinkable”.

Right now, our Overton window is in the wrong place. Things like Paul Ryan’s plan to privatize Social Security and Medicare are seen as reasonable when they should be considered extreme. Progressive income taxes of the kind we had in the 1960s are seen as extreme when they should be considered reasonable. Cutting WIC and SNAP with nothing to replace them and letting people literally starve to death are considered at most partisan, when they should be outright unthinkable. Opposition to basic scientific facts like climate change and evolution is considered a mainstream political position—when in terms of empirical evidence Creationism should be more intellectually embarrassing than being a 9/11 truther or thinking you saw an alien spacecraft. And perhaps worst of all, military tactics like double-tap strikes that are literally war crimes are considered “liberal”, while the “conservative” position involves torture, worldwide surveillance and carpet bombing—if not outright full-scale nuclear devastation.

I want to restore reasonable conversation to our political system, I really do. But that really isn’t possible when half the politicians are totally delusional. We have but one choice: We must vote them out.

I say this particularly to people who say “Why bother? Both parties are the same.” No, they are not the same. They are deeply, deeply different, for all the reasons I just outlined above. And if you can’t bring yourself to vote for a Democrat, at least vote for someone! A Green, or a Social Democrat, or even a Libertarian or a Socialist if you must. It is only by the apathy of reasonable people that this insanity can propagate in the first place.

The World Development Report is on cognitive economics this year!

JDN 2457013 EST 21:01.

On a personal note, I can now proudly report that I have successfully defended my thesis “Corruption, ‘the Inequality Trap’, and ‘the 1% of the 1%’”, and I have now completed a master’s degree in economics. I’m back home in Michigan for the holidays (hence my use of Eastern Standard Time), and then, well… I’m not entirely sure. I have a gap of about six months before PhD programs start. I have a number of job applications out, but unless I get a really good offer (such as the position at the International Food Policy Research Institute in DC) I think I may just stay in Michigan for a while and work on my own projects, particularly publishing two of my books (my nonfiction magnum opus, The Mathematics of Tears and Joy, and my first novel, First Contact) and making some progress on a couple of research papers—ideally publishing one of them as well. But the future for me right now is quite uncertain, and that is now my major source of stress. Ironically I’d probably be less stressed if I were working full-time, because I would have a clear direction and sense of purpose. If I could have any job in the world, it would be a hard choice between a professorship at UC Berkeley and a research position at the World Bank.

Which brings me to the topic of today’s post: The people who do my dream job have just released a report showing that they basically agree with me on how it should be done.

If you have some extra time, please take a look at the World Bank World Development Report. They put one out each year, and it provides a rigorous and thorough (236 pages) but quite readable summary of the most important issues in the world economy today. It’s not exactly light summer reading, but nor is it the usual morass of arcane jargon. If you like my blog, you can probably follow most of the World Development Report. If you don’t have time to read the whole thing, you can at least skim through all the sidebars and figures to get a general sense of what it’s all about. Much of the report is written in the form of personal vignettes that make the general principles more vivid; but these are not mere anecdotes, for the report rigorously cites an enormous volume of empirical research.

The title of the 2015 report? “Mind, Society, and Behavior”. In other words, cognitive economics. The world’s foremost international economic institution has just endorsed cognitive economics and rejected neoclassical economics, and their report on the subject provides a brilliant introduction to the subject replete with direct applications to international development.

For someone like me who lives and breathes cognitive economics, the report is pure joy. It’s all there, from the anchoring heuristic to social proof, from corruption to discrimination. The report is broadly divided into three parts.

Part 1 explains the theory and evidence of cognitive economics, subdivided into “thinking automatically” (heuristics), “thinking socially” (social cognition), and “thinking with mental models” (bounded rationality). (If I wrote it I’d also include sections on the tribal paradigm and narrative, but of course I’ll have to publish that stuff in the actual research literature first.) Anyway, the report is so amazing as it is that I really can’t complain. It includes some truly brilliant deorbits on neoclassical economics, such as this one from page 47: “In other words, the canonical model of human behavior is not supported in any society that has been studied.”

Part 2 uses cognitive economic theory to analyze and improve policy. This is the core of the report, with chapters on poverty, childhood, finance, productivity, ethnography, health, and climate change. So many different policies are analyzed I’m not sure I can summarize them with any justice, but a few particularly stuck out: First, the high cognitive demands of poverty can basically explain the whole observed difference in IQ between rich and poor people—so contrary to the right-wing belief that people are poor because they are stupid, in fact people seem stupid because they are poor. Simplifying the procedures for participation in social welfare programs (which is desperately needed, I say with a stack of incomplete Medicaid paperwork on my table—even I find these packets confusing, and I have a master’s degree in economics) not only increases their uptake but also makes people more satisfied with them—and of course a basic income could simplify social welfare programs enormously. “Are you a US citizen? Is it the first of the month? Congratulations, here’s $670.” Another finding that I found particularly noteworthy is that productivity is in many cases enhanced by unconditional gifts more than it is by incentives that are conditional on behavior—which goes against the very core of neoclassical economic theory. (It also gives us yet another item on the enormous list of benefits of a basic income: Far from reducing work incentives by the income effect, an unconditional basic income, as a shared gift from your society, may well motivate you even more than the same payment as a wage.)

Part 3 is a particularly bold addition: It turns the tables and applies cognitive economics to economists themselves, showing that human irrationality is by no means limited to idiots or even to poor people (as the report discusses in chapter 4, there are certain biases that poor people exhibit more—but there are also some they exhibit less); all human beings are limited by the same basic constraints, and economists are human beings. We like to think of ourselves as infallibly rational, but we are nothing of the sort. Even after years of studying cognitive economics I still sometimes catch myself making mistakes based on heuristics, particularly when I’m stressed or tired. As a long-term example, I have a number of vague notions of entrepreneurial projects I’d like to do, but none for which I have been able to muster the effort and confidence to actually seek loans or investors. Rationally, I should either commit to them or abandon them, yet I cannot quite bring myself to do either. And then of course I’ve never met anyone who didn’t procrastinate to some extent; actually those of us who are especially smart often seem especially prone—though we often adopt the strategy of “active procrastination”, in which you end up doing something else useful when procrastinating (my apartment becomes cleanest when I have an important project to work on), or purposefully choose to work under pressure because we are more effective that way.

And the World Bank pulled no punches here, showing experiments on World Bank economists clearly demonstrating confirmation bias, sunk-cost fallacy, and what the report calls “home team advantage”, more commonly called ingroup-outgroup bias—which is basically a form of the much more general principle that I call the tribal paradigm.

If there is one flaw in the report, it’s that it’s quite long and fairly exhausting to read, which means that many people won’t even try and many who do won’t make it all the way through. (The fact that it doesn’t seem to be available in hard copy makes it worse; it’s exhausting to read lengthy texts online.) We only have so much attention and processing power to devote to a task, after all—which is kind of the whole point, really.