Reflections on the Chinese Room

Jul 12 JDN 2459044

Perhaps the most famous thought experiment in the philosophy of mind, John Searle’s Chinese Room is the sort of argument that basically every expert knows is wrong, yet can’t quite explain what is wrong with it. Here’s a brief summary of the argument; for more detail you can consult Wikipedia or the Stanford Encyclopedia of Philosophy.

I am locked in a room. The only way to communicate with me is via a slot in the door, through which papers can be passed.

Someone on the other side of the door is passing me papers with Chinese writing on them. I do not speak any Chinese. Fortunately, there is a series of file cabinets in the room, containing instruction manuals which explain (in English) what an appropriate response in Chinese would be to any given input of Chinese characters. These instructions are simply conditionals like “After receiving input A B C, output X.”

I can follow these instructions and thereby ‘hold a conversation’ in Chinese with the person outside, despite never understanding Chinese.

This room is like a Turing Test. A computer is fed symbols and has instructions telling it to output symbols; it may ‘hold a conversation’, but it will never really understand language.

First, let me note that if this argument were right, it would pretty much doom the entire project of cognitive science. Searle seems to think that calling consciousness a “biological function” as opposed to a “computation” can somehow solve this problem; but this is not how functions work. We don’t say that a crane ‘isn’t really lifting’ because it’s not made of flesh and bone. We don’t say that an airplane ‘isn’t really flying’ because it doesn’t flap its wings like a bird. He often compares consciousness to digestion, which is unambiguously a biological function; but if you make a machine that processes food chemically in the same way as digestion, that is basically a digestion machine. (In fact there is a machine called a digester that basically does that.) If Searle is right that no amount of computation could ever get you to consciousness, then we basically have no idea how anything would ever get us to consciousness.

Second, I’m guessing that the argument sounds fairly compelling, especially if you’re not very familiar with the literature. Searle chose his examples very carefully to create a powerfully seductive analogy that tilts our intuitions in a particular direction.

There are various replies that have been made to the Chinese Room. Some have pointed out that the fact that I don’t understand Chinese doesn’t mean that the system doesn’t understand Chinese (the “Systems Reply”). Others have pointed out that in the real world, conscious beings interact with their environment; they don’t just passively respond to inputs (the “Robot Reply”).

Searle has his own counter-reply to these arguments: He insists that if instead of having all those instruction manuals, I memorized all the rules, and then went out in the world and interacted with Chinese speakers, it would still be the case that I didn’t actually understand Chinese. This seems quite dubious to me: For one thing, how is that different from what we would actually observe in someone who does understand Chinese? For another, once you’re interacting with people in the real world, they can do things like point to an object and say the word for it; in such interactions, wouldn’t you eventually learn to genuinely understand the language?

But I’d like to take a somewhat different approach, and instead attack the analogy directly. The argument I’m making here is very much in the spirit of Churchland’s Luminous Room reply, but a little more concrete.

I want you to stop and think about just how big those file cabinets would have to be.

For a proper Turing Test, you can’t have a pre-defined list of allowed topics and canned responses. You’re allowed to talk about anything and everything. There are thousands of symbols in Chinese. There’s no specified limit to how long the test needs to go, or how long each sentence can be.

After each 10-character sequence, the person in the room has to somehow sort through all those file cabinets and find the right set of instructions—not simply to find the correct response to that particular 10-character sequence, but to that sequence in the context of every other sequence that has occurred so far. “What do you think about that?” is a question that one answers very differently depending on what was discussed previously.

The key issue here is combinatoric explosion. Suppose we’re dealing with 100 statements, each 10 characters long, from a vocabulary of 10,000 characters. This means that there are ((10,000)^10)^100 = 10^4000 possible conversations. That’s a ludicrously huge number. It’s bigger than a googol. Even if each atom could store one instruction, there aren’t enough atoms in the known universe. After a few dozen sentences, simply finding the correct file cabinet would be worse than finding a needle in a haystack; it would be finding a hydrogen atom in the whole galaxy.

Even if you assume a shorter memory (which I don’t think is fair; human beings can absolutely remember 100 statements back), say only 10 statements, things aren’t much better: ((10,000)^10)^10 is 10^400, which is still vastly more than the number of atoms in the known universe (roughly 10^80).

In fact, even if I assume no memory at all, just a simple Markov chain that responds only to your previous statement (which can be easily tripped up by asking the same question in a few different contexts), that would still be 10,000^10 = 10^40 sequences, which is at least a quintillion times the total data storage of every computer currently on Earth.
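These figures are easy to check for yourself; here is the arithmetic in a few lines of Python, using the same stylized assumptions as above (10,000 distinct characters, 10-character statements):

```python
# Count the possible conversations in Searle's room, under the stylized
# assumptions above: 10,000 distinct characters, 10-character statements.
vocab = 10_000
statements = vocab ** 10               # possible single statements: 10^40

conversations_10 = statements ** 10    # with 10-statement memory:  10^400
conversations_100 = statements ** 100  # with 100-statement memory: 10^4000

# For a power of ten, (number of decimal digits - 1) gives the exponent.
print(len(str(statements)) - 1)         # 40
print(len(str(conversations_10)) - 1)   # 400
print(len(str(conversations_100)) - 1)  # 4000
```

Even the smallest of these, 10^40, dwarfs total global data storage (on the order of 10^22 bytes); the other two aren’t even in the same universe, almost literally.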

And I’m supposed to imagine that this can be done by hand, in real time, in order to carry out a conversation?

Note that I am not simply saying that a person in a room is too slow for the Chinese Room to work. You can use an exaflop quantum supercomputer if you like; it’s still utterly impossible to store and sort through all possible conversations.

This means that, whatever is actually going on inside the head of a real human being, it is nothing like a series of instructions that say “After receiving input A B C, output X.” A human mind cannot even fathom the total set of possible conversations, much less have a cached response to every possible sequence. This means that rules that simple cannot possibly mimic consciousness. This doesn’t mean consciousness isn’t computational; it means you’re doing the wrong kind of computations.

I’m sure Searle’s response would be to say that this is a difference only of degree, not of kind. But is it, really? Sometimes a sufficiently large difference of degree might as well be a difference of kind. (Indeed, perhaps all differences of kind are really very large differences of degree. Remember, there is a continuous series of common ancestors that links you and me to bananas.)

Moreover, Searle has claimed that his point was about semantics rather than consciousness: In an exchange with Daniel Dennett he wrote “Rather he [Dennett] misstates my position as being about consciousness rather than about semantics.” Yet semantics is exactly how we would solve this problem of combinatoric explosion.

Suppose that instead of simply having a list of symbol sequences, the file cabinets contained detailed English-to-Chinese dictionaries and grammars. After reading and memorizing those, then conversing for a while with the Chinese speaker outside the room, who would deny that the person in the room understands Chinese? Indeed, what other way is there to understand Chinese, if not reading dictionaries and talking to Chinese speakers?

Now imagine somehow converting those dictionaries and grammars into a form that a computer could directly apply. I don’t simply mean digitizing the dictionary; of course that’s easy, and it’s been done. I don’t even mean writing a program that translates automatically between English and Chinese; people are currently working on this sort of thing, and while still pretty poor, it’s getting better all the time.

No, I mean somehow coding the software so that the computer can respond to sentences in Chinese with appropriate responses in Chinese. I mean having some kind of mapping within the software of how different concepts relate to one another, with categorizations and associations built in.

I mean something like a searchable cross-referenced database, so that when asked the question, “What’s your favorite farm animal?” despite never having encountered this sentence before, the computer can go through a list of farm animals and choose one to designate as its ‘favorite’, and then store that somewhere so that later on when it is again asked it will give the same answer. And then when asked “Why do you like goats?” the computer can go through the properties of goats, choose some to be the ‘reason’ why it ‘likes’ them, and then adjust its future responses accordingly. If it decides that the reason is “horns are cute”, then when you mention some other horned animal, it updates to increase its probability of considering that animal “cute”.
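To make this a bit more concrete, here is a toy sketch in Python of that kind of associative store. Everything in it, from the class name to the ‘cute’ update rule, is invented purely for illustration; a real semantic network would need vastly more structure:

```python
import random

class ConceptStore:
    """Toy associative store: concepts mapped to properties, plus a memory
    of past answers so repeated questions get consistent responses.
    All entries are illustrative placeholders."""

    def __init__(self):
        self.properties = {
            "goat": {"farm animal", "horns"},
            "cow":  {"farm animal", "horns"},
            "pig":  {"farm animal"},
        }
        self.memory = {}  # question category -> answer already given

    def favorite(self, category):
        # Pick a 'favorite' once, then keep giving the same answer.
        if category not in self.memory:
            options = [c for c, props in self.properties.items()
                       if category in props]
            self.memory[category] = random.choice(options)
        return self.memory[category]

    def reason_for_liking(self, concept):
        # Choose a distinguishing property as the 'reason'...
        reason = sorted(self.properties[concept] - {"farm animal"})[0]
        # ...and generalize: if horns are why we like goats, then other
        # horned animals get tagged 'cute' as well.
        for props in self.properties.values():
            if reason in props:
                props.add("cute")
        return reason

store = ConceptStore()
fav = store.favorite("farm animal")
assert store.favorite("farm animal") == fav   # consistent over time
store.reason_for_liking("goat")               # returns "horns"
assert "cute" in store.properties["cow"]      # association generalized
```

The point is not that this toy is anywhere near a mind; it’s that even this crude mechanism already behaves in ways that no lookup table of input-output rules can.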

I mean something like a program that is programmed to follow conversational conventions, so that when you ask it its name, it will not only tell you; it will also ask your name in return, and store that information for later. And then it will map the sound of your name to known patterns of ethnic naming conventions, and so when you say your name is “Ling-Ling Xu” it asks “Is your family Chinese?” And then when you say “yes” it asks “What part of China are they from?” and then when you say “Shanghai” it asks “Did you grow up there?” and so on. It’s not that it has some kind of rule that says “Respond to ‘Shanghai’ with ‘Did you grow up there?’”; on the contrary, later in the conversation you may say “Shanghai” and get a different response because it was in a different context. In fact, if you were to keep spamming “Shanghai” over and over again, it would sound confused: “Why do you keep saying ‘Shanghai’? I don’t understand.”

In other words, I mean semantics. I mean something approaching how human beings actually seem to organize the meanings of words in their brains. Words map to other words and contexts, and some very fundamental words (like “pain” or “red”) map directly to sensory experiences. If you are asked to define what a word means, you generally either use a lot of other words, or you point to a thing and say “It means that.” Why can’t a robot do the same thing?

I really cannot emphasize enough how radically different that process would be from simply having rules like “After receiving input A B C, output X.” I think part of why Searle’s argument is so seductive is that most people don’t have a keen grasp of computer science, so a task that is O(N^2), like what I just outlined above, doesn’t sound that different to them from a task that is O(10^(10^N)), like the simple input-output rules Searle describes. With a fast enough computer it wouldn’t matter, right? Well, if by “fast enough” you mean “faster than could possibly be built in our known universe”, I guess so. But O(N^2) tasks with N in the thousands are done by your computer all the time; no O(10^(10^N)) task will ever be accomplished for such an N within the Milky Way in the next ten billion years.

I suppose you could still insist that this robot, despite having the same conceptual mappings between words as we do, and acquiring new knowledge in the same way we do, and interacting in the world in the same way we do, and carrying on conversations of arbitrary length on arbitrary topics in ways indistinguishable from the way we do, still nevertheless “is not really conscious”. I don’t know how I would conclusively prove you wrong.

But I have two things to say about that: One, how do I know you aren’t such a machine? This is the problem of zombies. Two, is that really how you would react, if you met such a machine? When you see Lieutenant Commander Data on Star Trek: The Next Generation, is your thought “Oh, he’s just a calculating engine that makes a very convincing simulation of human behavior”? I don’t think it is. I think the natural, intuitive response is actually to assume that anything behaving that much like us is in fact a conscious being.

And that’s all the Chinese Room was anyway: Intuition. Searle never actually proved that the person in the room, or the person-room system, or the person-room-environment system, doesn’t actually understand Chinese. He just feels that way, and expects us to feel that way as well. But I contend that if you ever did actually meet a machine that really, truly passed the strictest form of a Turing Test, your intuition would say something quite different: You would assume that machine was as conscious as you and I.

A better kind of patriotism

Jul 5 JDN 2459037

Yesterday was the Fourth of July, but a lot of us haven’t felt much like celebrating. When things are this bad—pandemic, economic crisis, corrupt government, police brutality, riots, and so on—it can be hard to find much pride in our country.

Perhaps this is why Republicans tend to describe themselves as more patriotic than Democrats. Republicans have always held our country to a far lower standard (indeed, do they hold it to any standard at all!?) and so they can be proud of it even in its darkest times.

Indeed, in some sense national pride in general is a weird concept: We weren’t even alive when our nation was founded, and even today there are hundreds of millions of people in our nation, so most of what it does has nothing to do with us. But human beings are tribal: We feel a deep need to align ourselves with groups larger than ourselves. In the current era, nations fill much of that role (though certainly not all of it, as we form many other types of groups as well). We identify so strongly with our nation that our pride or shame in it becomes pride or shame in ourselves.

As the toppling of statues extends beyond Confederate leaders (obviously those statues should come down! Would Great Britain put up statues of Napoleon?) and Christopher Columbus (who was recognized as a monster in his own time!) to more ambiguous cases like Ulysses Grant, George Washington and Thomas Jefferson, or even utterly nonsensical ones like Matthias Baldwin, one does begin to get the sense that the left wing doesn’t just hate racism; some of them really do seem to hate America.

Don’t get me wrong: The list of America’s sins is long and weighty. From the very beginning the United States was built by forcing out Native populations and importing African slaves. The persistent inequality between racial groups today suggests that reparations for these crimes may still be necessary.

But I think it is a mistake to look at a statue of George Washington or Thomas Jefferson and see only a slaveowner. They were slaveowners, certainly—and we shouldn’t sweep that under the rug. Perhaps it is wrong to idolize anyone, because our heroes never live up to our expectations and great men are almost always bad men. Even Martin Luther King was a sexual predator and Mahatma Gandhi abused his wife. Then again, people seem to need heroes: Without something to aspire to, some sense of pride in who they are, people rapidly become directionless or even hopeless.

While there is much to be appalled by in Washington or Jefferson, there is also much to admire. Indeed, specifically what we are celebrating on Independence Day strikes me as something particularly noteworthy, something truly worthy of the phrase “American exceptionalism”.

For most of human history, every major nation formed organically. Many were ruled by hereditary dynasties that extended to time immemorial. Others were aware that they had experienced coups and revolutions, but all of these were about the interests of one king (or prince, or duke) versus another. The Greek philosophers had debated what the best sort of government would be, but never could agree on anything; insofar as they did agree, they seemed to prefer benevolent autocracy. Even where democracies existed, they too had formed organically, and in practice rarely had suffrage beyond upper-class men. Nations had laws, but these laws were subordinate to the men who made and enforced them; one king’s sacred duty was another’s heinous crime.

Then came the Founding Fathers. After fighting their way out of the grip of the British Empire, they could easily have formed their own new monarchy and declared their own King George—and there were many who wanted to do this. They could have kept things running basically the same way they always had.

But they didn’t. Instead, they gathered together a group of experts and leaders from the revolution, all to ask the question: “What is the best way to run a country?” Of course there were many different ideas about the answer. A long series of impassioned arguments and bitter conflicts ensued. Different sides cited historians and philosophers back and forth at each other, often using the same source to entirely opposite conclusions. Great compromises were made that neither side was happy with (like the Three-Fifths Compromise and the Connecticut Compromise).

When all the dust cleared and all the signatures were collected, the result was a document that all involved knew was imperfect and incomplete—but nevertheless represented a remarkable leap forward for the very concept of what it means to govern a nation. However painfully and awkwardly, they came to some kind of agreement as to what was the best way to run a country—and then they made that country.

It’s difficult to overstate what a watershed moment this was in human history. With a few exceptions—mostly small communities—every other government on earth had been created to serve the interests of its rulers, with barely even a passing thought toward what would be ethical or in the best interests of the citizens. Of course some self-interest crept in even to the US Constitution, and in some ways we’ve been trying to fix that ever since. But even asking what sort of government would be best for the people was something deeply radical.

Today the hypocrisy of a slaveowner writing “all men are created equal” is jarring to us; but at the time the shock was not that he would own slaves, but that he would even give lip service to universal human equality. It seems bizarre to us that someone could announce “inalienable rights to life, liberty, and the pursuit of happiness” and then only grant voting rights to landowning White men—but to his contemporaries, the odd thing was citing philosophers (specifically John Locke) in your plan for a new government.

Indeed, perhaps the most radical thing of all about the Constitution of the United States is that they knew it was imperfect. The Founding Fathers built into the very text of the document a procedure for amending and improving it. And since then we have amended it 27 times (though to be fair the first 10 were more like “You know what? We should actually state clearly that people have free speech rather than assuming courts will automatically protect that.”)

Every nation has a founding myth that lionizes its founders. And certainly many, if not most, Americans believe a version of this myth that is as much fable as fact. But even the historical truth, with all of its hypocrisies, has plenty to be proud of.

Though we may not have had any control over how our nation was founded, we do have a role in deciding its future. If we feel nothing but pride in our nation, we will not do enough to mend and rectify its flaws. If we feel nothing but shame in our nation, we will not do enough to preserve and improve its strengths.

Thus, this Independence Day, I remind you to be ambivalent: There is much to be ashamed of, but also much to be proud of.

“The Patriarchy” is not a thing

Jun 28 JDN 2459030

It’s really mainly a coincidence that I am writing this post on Father’s Day; working at home and almost never going out due to the pandemic, I have become unmoored from the normal passage of time. It’s a wonder I can remember it’s Sunday. But it is at least a bit ironic, since the word “patriarchy” comes from the Latin word pater meaning “father”.

A great deal of feminist discourse references “The Patriarchy”; examples abound from a wide variety of sources.

This is a problem, because “The Patriarchy” plainly does not exist.

Am I saying that patriarchy doesn’t exist? Of course not. Patriarchy plainly exists. What I’m saying is that there is no one single source, “The Patriarchy”.

China and Japan are both extremely patriarchal societies. They have fought wars with one another dozens of times. Saudi Arabia and Iran are both extremely patriarchal. They hate each other and have likewise fought numerous wars.

Indeed, nearly every human society is to some degree patriarchal; and yet, somehow we seem to be in conflict with one another quite frequently. If patriarchy all stemmed from some common source “The Patriarchy”, such a result would be baffling: If we’re all following the same ruler, how can we fight each other so much? Whoever is running this conspiracy is doing a really awful job!

Yes, there are common elements between the various forms of patriarchy in different societies—otherwise, we wouldn’t recognize them all as patriarchy. But there are also substantial differences. Nearly all societies regulate how women must dress, but precisely what women are expected to wear varies a great deal. Nearly all societies put more men in positions of power than women, but the degree to which this is true runs a wide gamut.

Patriarchy is like authoritarianism, or fanaticism, or corruption; yes, obviously authoritarianism, fanaticism, and corruption exist, and are important forces in the world. But there are no such things as “The Authoritarianism”, “The Fanaticism”, or “The Corruption”. There is no single unified source of these things. Indeed, authoritarians are often at each other’s throats, fanatics fight with other fanatics all the time, and those who are corrupt have no qualms about exploiting others who are corrupt.

Is this important? Perhaps it’s just a provocative turn of phrase, and I’m being overly pedantic.

But I do think it’s important, for the following reasons.

Many feminists who use the phrase “The Patriarchy” really do seem to think that all patriarchal ideas, beliefs, norms, attitudes, and behaviors stem from some common root, as the following quotes attest:

Only “patriarchy” seems to capture the peculiar elusiveness of gendered power – the idea that it does not reside in any one site or institution, but seems spread throughout the world. Only “patriarchy” seems to express that it is felt in the way individual examples of gender inequality interact, reinforcing each other to create entire edifices of oppression.

~Charlotte Higgins, The Guardian

I’m not angry because I hate men. I’m not even angry at men. I’m angry at the system that, for the lack of a better term, most people refer to as the patriarchy.

~Anne Theriaut, The Good Men Project

Remember in “Terminator 2” how the bad terminator kept getting smashed and shattered and ripped apart, but it didn’t matter? He just kept re-emerging, rising from the ashes, as an unstoppable force. Now imagine that terminator is a vessel to keep power, wealth and status in the hands of men — that’s the patriarchy. It can feel indestructible, coming back ever stronger despite seemingly endless efforts to smash it.

~Maya Salam, The New York Times

If you imagine that there is such a thing as “The Patriarchy”, it gives you the sense that you have just one enemy to fight. It makes the world simple and comprehensible. There’s a lot of psychological appeal in that kind of worldview. But it also makes you miss a great deal of the real complexity and nuance in the world. You have reified the concept.

Such a simplistic worldview might motivate you to fight harder against patriarchy, which would be a good thing. But then again, it could actually sap your motivation, by making it seem like you have a single implacable enemy that controls the entire world and has throughout history. If there is such a thing as “The Patriarchy”, then its power must be tremendous; perhaps we have weakened its hold upon the world, but could we ever hope to completely defeat it? (I made a similar point in an old post about how acknowledging progress is vital in order to make more progress.)

Moreover, thinking that all patriarchy stems from the same source could cause you to misdiagnose problems and fail to notice solutions that would otherwise be readily available. If you go around thinking that any disparity between how men and women are treated must be the result of some global phenomenon called “The Patriarchy”, you may not think to try simple fixes like blinded auditions or revising or eliminating student evaluations. You may assume that sexism is around every corner when often the real causes are nepotism and network effects.

Slate Star Codex made a similar point about racism in an excellent post called “Murderism”. If your view of the world is that all bad things (or even all bad things in a broad class like “racism” or “sexism”) must stem from the same source, you will be unable to analyze the real nuances of what causes problems and thus be powerless to fix them.

Yes, of course patriarchy exists; and it’s important. But it comes in many different kinds, and many different degrees, and policies that ameliorate it in some contexts may be ineffective—or even counterproductive—in others. This is why I say that it’s dangerous to use a phrase like “The Patriarchy”—for patriarchy isn’t a thing; it’s many things.

How we measure efficiency affects our efficiency

Jun 21 JDN 2459022

Suppose we are trying to minimize carbon emissions, and we can afford one of the two following policies to improve fuel efficiency:

  1. Policy A will replace 10,000 cars that average 25 MPG with hybrid cars that average 100 MPG.
  2. Policy B will replace 5,000 diesel trucks that average 5 MPG with turbocharged, aerodynamic diesel trucks that average 10 MPG.

Assume that both cars and trucks last about 100,000 miles (in reality this of course depends on a lot of factors), and diesel and gas pollute about the same amount per gallon (this isn’t quite true, but it’s close). Which policy should we choose?

It seems obvious: Policy A, right? 10,000 vehicles, each increasing efficiency by 75 MPG or a factor of 4, instead of 5,000 vehicles, each increasing efficiency by only 5 MPG or a factor of 2.

And yet—in fact the correct answer is definitely policy B, because the use of MPG has distorted our perception of what constitutes efficiency. We should have been using the inverse: gallons per hundred miles.

  1. Policy A will replace 10,000 cars that average 4 GPHM with cars that average 1 GPHM.
  2. Policy B will replace 5,000 trucks that average 20 GPHM with trucks that average 10 GPHM.

This means that policy A will save (10,000)(100,000/100)(4-1) = 30 million gallons, while policy B will save (5,000)(100,000/100)(20-10) = 50 million gallons.

A gallon of gasoline produces about 9 kg of CO2 when burned. This means that by choosing the right policy here, we’ll have saved 450,000 tons of CO2—or by choosing the wrong one we would only have saved 270,000.
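Here is the same arithmetic as a short Python check; the lifetime mileage and the 9 kg per gallon figure are the same round numbers used above:

```python
LIFETIME_MILES = 100_000   # assumed vehicle lifetime, as above
KG_CO2_PER_GALLON = 9      # rough figure for burning one gallon of fuel

def gallons_saved(vehicles, gphm_before, gphm_after):
    """Lifetime gallons saved by replacing this many vehicles."""
    return vehicles * (LIFETIME_MILES // 100) * (gphm_before - gphm_after)

policy_a = gallons_saved(10_000, 4, 1)    # cars:   25 MPG -> 100 MPG
policy_b = gallons_saved(5_000, 20, 10)   # trucks:  5 MPG ->  10 MPG

print(policy_a)  # 30000000 gallons
print(policy_b)  # 50000000 gallons
print(policy_b * KG_CO2_PER_GALLON // 1000)  # 450000 metric tons of CO2
```

Note that the whole calculation is linear in GPHM, which is exactly why GPHM is the right measure: gallons burned, not miles driven, is what actually scales with emissions.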

The simple choice of which efficiency measure to use when making our judgment—GPHM versus MPG—has had a profound effect on the real impact of our choices.

Let’s try applying the same reasoning to charities. Again suppose we can choose one of two policies.

  1. Policy C will move $10 million that currently goes to local community charities which can save one QALY for $1 million to medical-research charities that can save one QALY for $50,000.
  2. Policy D will move $10 million that currently goes to direct-transfer charities which can save one QALY for $1000 to anti-malaria net charities that can save one QALY for $800.

Policy C means moving funds from charities that are almost useless ($1 million per QALY!?) to charities that meet a basic notion of cost-effectiveness (most public health agencies in the First World have a standard threshold of about $50,000 or $100,000 per QALY).

Policy D means moving funds from charities that are already highly cost-effective to other charities that are only a bit more cost-effective. It almost seems pedantic to even concern ourselves with the difference between $1000 per QALY and $800 per QALY.

It’s the same $10 million either way. So, which policy should we pick?

If the lesson you took from the MPG example is that we should always be focused on increasing the efficiency of the least efficient, you’ll get the wrong answer. The correct answer is based on actually using the right measure of efficiency.

Here, it’s not dollars per QALY we should care about; it’s QALY per million dollars.

  1. Policy C will move $10 million from charities which get 1 QALY per million dollars to charities which get 20 QALY per million dollars.
  2. Policy D will move $10 million from charities which get 1000 QALY per million dollars to charities which get 1250 QALY per million dollars.

Multiply that out, and policy C will gain (10)(20-1) = 190 QALY, while policy D will gain (10)(1250-1000) = 2500 QALY. Assuming that “saving a life” means about 50 QALY, this is the difference between saving 4 lives and saving 50 lives.
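The charity version works out the same way; the QALY-per-million figures and the 50-QALY-per-life convention are the ones from above:

```python
QALY_PER_LIFE = 50  # rough convention used above

def qaly_gained(millions_moved, qaly_per_million_before, qaly_per_million_after):
    """QALY gained by moving funds between charities of given efficiency."""
    return millions_moved * (qaly_per_million_after - qaly_per_million_before)

policy_c = qaly_gained(10, 1, 20)        # $1M/QALY -> $50k/QALY charities
policy_d = qaly_gained(10, 1000, 1250)   # $1000/QALY -> $800/QALY charities

print(policy_c, policy_c / QALY_PER_LIFE)  # 190 QALY, about 4 lives
print(policy_d, policy_d / QALY_PER_LIFE)  # 2500 QALY, 50 lives
```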

My intuition actually failed me on this one; before I actually did the math, I had assumed that it would be far more important to move funds from utterly useless charities to ones that meet a basic standard. But it turns out that it’s actually far more important to make sure that the funds being targeted at the most efficient charities are really the most efficient—even apparently tiny differences matter a great deal.

Of course, if we can move that $10 million from the useless charities to the very best charities, that’s the best of all; it would save (10)(1250-1) = 12,490 QALY. This is nearly 250 lives.

In the fuel economy example, there’s no feasible way to upgrade a semitrailer to get 100 MPG. If we could, we totally should; but nobody has any idea how to do that. Even an electric semi probably won’t be that efficient, depending on how the grid produces electricity. (Obviously if the grid were all nuclear, wind, and solar, it would be; but very few places are like that.)

But when we’re talking about charities, this is just money; it is by definition fungible. So it is absolutely feasible in an economic sense to get all the money currently going towards nearly-useless charities like churches and museums and move that money directly toward high-impact charities like anti-malaria nets and vaccines.

Then again, it may not be feasible in a practical or political sense. Someone who currently donates to their local church may simply not be motivated by the same kind of cosmopolitan humanitarianism that motivates Effective Altruism. They may care more about supporting their local community, or be motivated by genuine religious devotion. This isn’t even inherently a bad thing; nobody is a cosmopolitan in everything they do, nor should we be—we have good reasons to care more about our own friends, family, and community than we do about random strangers in foreign countries thousands of miles away. (And while I’m fairly sure Jesus himself would have been an Effective Altruist if he’d been alive today, I’m well aware that most Christians aren’t—and this doesn’t make them “false Christians”.) There might be some broader social or cultural change that could make this happen—but it’s not something any particular person can expect to accomplish.

Whereas, getting people who are already Effective Altruists giving to efficient charities to give to a slightly more efficient charity is relatively easy: Indeed, it’s basically the whole purpose for which GiveWell exists. And there are analysts working at GiveWell right now whose job it is to figure out exactly which charities yield the most QALY per dollar and publish that information. One person doing that job even slightly better can save hundreds or even thousands of lives.

Indeed, I’m seriously considering applying to be one myself—it sounds both more pleasant and more important than anything I’d be likely to get in academia.

No, unemployment doesn’t kill people

Jun 14 JDN 2459015

Some people have argued that lockdown measures were unnecessary, or ineffective. The data definitely leans the other direction, but there’s enough uncertainty in all this that I can at least consider that a serious possibility. That doesn’t mean we were wrong to use them; in the presence of high uncertainty, assuming the worst-case scenario is often the best strategy. Far better to overreact than underreact. And indeed, I’d say that right now we still can’t be confident enough that things are safe to really re-open most of the economy. Re-opening too early could make things far worse.

There’s another argument for re-opening the economy which seems far more seductive: What about the people harmed by the lockdowns? This massive unemployment is terrible too, isn’t it? In fact, what if we’re killing more people by unemployment than we are saving from the virus? The Mises Institute warns: “Unemployment Kills”. Others have speculated that the recession could cause more deaths than the virus.

But in fact, unemployment does not kill. The evidence on this is quite clear. Even in the Great Depression, with massive unemployment, terrible monetary policy, and only the most minimal social welfare measures in place, death rates did not increase. In fact, for all causes except suicide, death rates decrease during recessions—probably because pollution, traffic accidents, and work-related injury and illness go down. And the suicide rate increase isn’t enough to increase the overall death rate.

Of course, dying by suicide is not the same thing as dying from cancer—and indeed, they are most likely different people being affected in each case. So in that sense unemployment can kill people; but it typically saves more people than it kills. Almost any policy choice will cause some deaths and prevent others, so really the best we can do is look at the overall aggregate and see whether our QALYs have gone up or down.

This doesn’t mean that we should go out of our way to have recessions in order to save lives; the number of lives saved is small, and the loss in quality of life is probably large enough to outweigh it. (That’s why we use quality-adjusted life years, after all.) But this recession isn’t arbitrary; it’s the result of trying to stop a global pandemic, so that we don’t have a repeat of what influenza did in 1918.


When the CDC says it’s okay to open back up, by all means, let’s do that. They have issued guidelines for what we need to do in order to make that happen. But until then, let’s trust in the experts—the epidemiologists who say that we still need lockdown measures, and the economists who agree that it’s worth the cost.

Moral disagreement is not bad faith

Jun 7 JDN 2459008

One of the most dangerous moves to make in an argument is to accuse your opponent of bad faith. It’s a powerful, and therefore tempting, maneuver: If they don’t even really believe what they are saying, then you can safely ignore basically whatever comes out of their mouth. And part of why this is so tempting is that it is in fact occasionally true—people do sometimes misrepresent their true beliefs in various ways for various reasons. On the Internet especially, sometimes people are just trolling.

But unless you have really compelling evidence that someone is arguing in bad faith, you should assume good faith. You should assume that whatever they are asserting is what they actually believe. For if you assume bad faith and are wrong, you have just cut off any hope of civil discourse between the two of you. You have made it utterly impossible for either side to learn anything or change their mind in any way. If you assume good faith and are wrong, you may have been overly charitable; but in the end you are the one that is more likely to persuade any bystanders, not the one who was arguing in bad faith.

Furthermore, it is important to really make an effort to understand your opponent’s position as they understand it before attempting to respond to it. Far too many times, I have seen someone accused of bad faith by an opponent who simply did not understand their worldview—and did not even seem willing to try to understand their worldview.

In this post, I’m going to point out some particularly egregious examples of this phenomenon that I’ve found, all statements made by left-wing people in response to right-wing people. Why am I focusing on these? Well, for one thing, it’s as important to challenge bad arguments on your own side as it is to do so on the other side. I also think I’m more likely to be persuasive to a left-wing audience. I could find right-wing examples easily enough, but I think it would be less useful: It would be too tempting to think that this is something only the other side does.

Example 1: “Republicans Have Stopped Pretending to Care About Life”

The phrase “pro-life” means thinking that abortion is wrong. That’s all it means. It’s jargon at this point. The phrase has taken on this meaning independent of its constituent parts, just as a red herring need not be either red or a fish.

Stop accusing people of not being “truly pro-life” because they don’t adopt some other beliefs that are not related to abortion. Even if those would be advancing life in some sense (most people probably think that most things they think are good advance life in some sense!), they aren’t relevant to the concept of being “pro-life”. Moreover, being “pro-life” in the traditional conservative sense isn’t even about minimizing the harm of abortion or the abortion rate. It’s about emphasizing the moral wrongness of abortion itself, and often even criminalizing it.


I don’t think this is really so hard to understand. If someone truly, genuinely believes that abortion is murdering a child, it’s quite clear why they won’t be convinced by attempts at minimizing harm or trying to reduce the abortion rate via contraception or other social policy. Many policies are aimed at “reducing the demand for abortion”; would you want to “reduce the demand for murder”? No, you’d want murderers to be locked up. You wouldn’t care what their reasons were, and you wouldn’t be interested in using social policies to address those reasons. It’s not even hard to understand why this would be such an important issue to them, overriding almost anything else: If you thought that millions of people were murdering children you would consider that an extremely important issue too.

If you want to convince people to support Roe v. Wade, you’re going to have to change their actual belief that abortion is murder. You may even be able to convince them that they don’t really think abortion is murder—many conservatives support the death penalty for murder, but very few do so for abortion. But they clearly do think that abortion is a grave moral wrong, and you can’t simply end-run around that by calling them hypocrites because they don’t care about whatever other issue you think they should care about.

Example 2: “Stop pretending to care about human life if you support wars in the Middle East”

I had some trouble finding the exact wording of the meme I originally saw with this sentiment, but the gist of it was basically that if you support bombing Afghanistan, Libya, Iraq, and/or Syria, you have lost all legitimacy to claiming that you care about human life.

Say what you will about these wars (though to be honest I think what the US has done in Libya and Syria has done more good than harm), but simply supporting a war does not automatically undermine all your moral legitimacy. The kind of radical pacifism that requires us to never kill anyone ever is utterly unrealistic; the question is and has always been “Which people is it okay to kill, when and how and why?” Some wars are justified; we have to accept that.

It would be different if these were wars of genocidal extermination; I can see a case for saying that anyone who supported the Holocaust or the Rwandan Genocide has lost all moral legitimacy. But even then it isn’t really accurate to say that those people don’t care about human life; it’s much more accurate to say that they have assigned the group of people they want to kill to a subhuman status. Maybe you would actually get more traction by saying “They are human beings too!” rather than by accusing people of not believing in the value of human life.

And clearly these are not wars of extermination—if the US military wanted to exterminate an entire nation of people, they could do so much more efficiently than by using targeted airstrikes and conventional warfare. Remember: They have nuclear weapons. Even if you think that they wouldn’t use nukes because of fear of retaliation (Would Russia or China really retaliate using their own nukes if the US nuked Afghanistan or Iran?), it’s clear that they could have done a lot more to kill a lot more innocent people if that were actually their goal. It’s one thing to say they don’t take enough care not to kill innocent civilians—I agree with that. It’s quite another to say that they actively try to kill innocent civilians—that’s clearly not what is happening.

Example 3: “Stop pretending to be Christian if you won’t help the poor.”

This one I find a good deal more tempting: In the Bible, Jesus does spend an awful lot more words on helping the poor than he does on, well, almost anything else; and he doesn’t even once mention abortion or homosexuality. (The rest of the Bible does at least mention homosexuality, but it really doesn’t have any clear mentions of abortion.) So it really is tempting to say that anyone who doesn’t make helping the poor their number one priority can’t really be a Christian.

But the world is more complicated than that. People can truly and deeply believe some aspects of a religion while utterly rejecting others. They can do this more or less arbitrarily, in a way that may not even be logically coherent. They may even honestly believe that every single word of the Bible is the absolute perfect truth of an absolute perfect God, and yet there are still passages you could point them to that they would have to admit they don’t believe in. (There are literally hundreds of explicit contradictions in the Bible. Many are minor—though still undermine any claim to absolute perfect truth—but some are really quite substantial. Does God forgive and forget, or does he visit revenge upon generations to come? That’s kind of a big deal! And should we be answering fools or not?) In some sense they don’t really believe that every word is true, then; but they do seem to believe in believing it.

Yes, it’s true; people can worship a penniless son of a carpenter who preached peace and charity and at the same time support cutting social welfare programs and bombing the Middle East. Such a worldview may not be entirely self-consistent; it’s certainly not the worldview that Jesus himself espoused. But it nevertheless is quite sincerely believed by many millions of people.

It may still be useful to understand the Bible in order to persuade Christians to help the poor more. There are certainly plenty of passages you can point them to where Jesus talks about how important it is to help the poor. Likewise, Jesus doesn’t seem to much like the rich, so it is fair to ask: How Christian is it for Republicans to keep cutting taxes on the rich? (I literally laughed out loud when I first saw this meme: “Celebrate Holy Week By Flogging a Banker: It’s What Jesus Would Have Done!“) But you should not accuse people of “pretending to be Christian”. They really do strongly identify themselves as Christian, and would sooner give up almost anything else about their identity. If you accuse them of pretending, all that will do is shut down the conversation.

Now, after all that, let me give one last example that doesn’t fit the trend, one example where I really do think the other side is acting in bad faith.


Example 4: “#AllLivesMatter is a lie. You don’t actually think all lives matter.”

I think this one is actually true. If you truly believed that all lives matter, you wouldn’t post the hashtag #AllLivesMatter in response to #BlackLivesMatter protests against police brutality.

First of all, you’d probably be supporting those protests. But even if you didn’t for some reason, that isn’t how you would use the hashtag. As a genuine expression of caring, the hashtag #AllLivesMatter would only really make sense for something like Oxfam or UNICEF: Here are these human lives that are in danger and we haven’t been paying enough attention to them, and here, you can follow my hashtag and give some money to help them because all lives matter. If it were really about all lives mattering, then you’d see the hashtag pop up after a tsunami in Southeast Asia or a famine in central Africa. (For a while I tried actually using it that way; I quickly found that it was overwhelmed by the bad faith usage and decided to give up.)

No, this hashtag really seems to be trying to use a genuinely reasonable moral norm—all lives matter—as a weapon against a political movement. We don’t see #AllLivesMatter popping up asking people to help save some lives—it’s always as a way of shouting down other people who want to save some lives. It’s a glib response that lets you turn away and ignore their pleas, without ever actually addressing the substance of what they are saying. If you really believed that all lives matter, you would not be so glib; you would want to understand how so many people are suffering and want to do something to help them. Even if you ultimately disagreed with what they were saying, you would respect them enough to listen.

The counterpart #BlueLivesMatter isn’t in bad faith, but it is disturbing in a different way: What are ‘blue lives’? People aren’t born police officers. They volunteer for that job. They can quit if they want. No one can quit being Black. Working as a police officer isn’t even especially dangerous! But it’s not a bad faith argument: These people really do believe that the lives of police officers are worth more—apparently much more—than the lives of Black civilians.

I do admit, the phrasing “#BlackLivesMatter” is a bit awkward, and could be read to suggest that other lives don’t matter, but it takes about 2 minutes of talking to someone (or reading a blog by someone) who supports those protests to gather that this is not their actual view. Perhaps they should have used #BlackLivesMatterToo, but when your misconception is that easily rectified the responsibility to avoid it falls on you. (Then again, some people do seem to stoke this misconception: I was quite annoyed when a question was asked at a Democratic debate: “Do Black Lives Matter, or Do All Lives Matter?” The correct answer of course is “All lives matter, which is why I support the Black Lives Matter movement.”)

So, yes, bad faith arguments do exist, and sometimes we need to point them out. But I implore you, consider that a last resort, a nuclear option you’ll only deploy when all other avenues have been exhausted. Once you accuse someone of bad faith, you have shut down the conversation completely—preventing you, them, and anyone else who was listening from having any chance of learning or changing their mind.

The fable of the billionaires

May 31 JDN 2458999

There are a great many distortions in real-world markets that cause them to deviate from the ideal of perfectly competitive free markets, and economists rightfully spend much of their time locating, analyzing, and mitigating such distortions.

But I think there is a general perception among economists, and perhaps among others as well, that if we could somehow make markets perfectly competitive and efficient, we’d be done; the world, or at least the market, would be just and fair and all would be good. And this perception is gravely mistaken. To make that clear to you, I offer a little fable.

Once upon a time, widgets were made by hand. One person, working for one eight-hour day, could make 100 widgets. Most people were employed making widgets full-time. The wage for making widgets was $1 per widget.

Then, an inventor came up with a way to automate the production of widgets. For $100 per day, the same cost to hire a worker to make 100 widgets, the machine could instead make 101 widgets.

Because it was 1% more efficient, businesses began adopting the new machine, and now made slightly more widgets than before. But some workers who had previously made widgets were laid off, while others saw their wages fall to only $0.99 per widget.


If more widgets were being made, but fewer people were making them and being paid less to do so, where did the extra wealth go? To the inventor, of course, who now owns 10% of all widget production and has billions of dollars.

Later, another inventor came up with an even better machine, which could make 102 widgets in a day. And that inventor became a billionaire too, while more workers became unemployed and wages fell to $0.98 per widget.

And then there was another inventor, and another, and another; and today the machines can make 200 widgets in a day and wages are only $0.50 per widget. We now have twice as many widgets as we used to have, and hundreds of billionaires; yet only half as many people now work making widgets as once did, and those who remain make only half of what they once did.
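
The fable’s arithmetic can be sketched as a toy calculation (the numbers are the fable’s own, and the model is an illustration, not a real labor-market analysis): a machine that costs $100 per day and produces N widgets pins the competitive piece wage at $100/N.

```python
# Toy calculation for the widget fable (illustrative only).
# A $100/day machine that makes N widgets forces the competitive
# piece wage down to $100/N, since workers must match the machine's cost.

MACHINE_COST = 100  # dollars per day, same as hiring one worker

def wage_per_widget(widgets_per_day: float) -> float:
    """Competitive wage: workers must match the machine's cost per widget."""
    return MACHINE_COST / widgets_per_day

for output in [100, 101, 102, 200]:
    print(f"machine output {output}/day -> wage ${wage_per_widget(output):.2f} per widget")
```

Competition drives the piece wage to the machine’s cost per widget, which is how wages fall from $1.00 to $0.50 as the machines improve from 100 to 200 widgets per day.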

Was this market inefficient or uncompetitive? Not at all! In fact it was quite efficient: It delivered the most widgets for the least cost every step of the way. And the first round of billionaires didn’t get enough power to keep the next round from innovating even better and also becoming billionaires. No one stole or cheated to get where they are; the billionaires really made it to the top by being brilliant innovators who made the world more efficient.

Indeed, by the standard measures of economic surplus, the world has gotten better with each new machine. GDP has gone up, wealth has gone up. Yet millions of people are out of work, and millions more are making pitifully low wages. Overall the nation seems to be worse off, even though all the numbers keep saying things are getting better.

There are some relatively simple solutions to this problem: We could tax those billionaires, and use the money to provide public goods to everyone else; and then the added wealth from doubling our quantity of widgets would benefit everyone and not just the inventors who made it happen. Would that reduce the incentives to innovate? A little, perhaps; but it’s hard to believe that most people who would be willing to invent something for $1 billion wouldn’t be willing to do so for $500 million or even for $50 million. At some point that extra money really isn’t benefiting you all that much. And what’s the point of incentivizing innovation if it makes life worse for most of our population?

In the real world there are lots of other problems, of course. Corruption, regulatory capture, rent-seeking, collusion, and so on all make our markets less efficient than they could have been. But even if markets were efficient, it’s not clear that they would be fair or just, or that they would be making most people’s lives better.

Indeed, I’m not convinced that most billionaires really got where they are by being particularly innovative. I can appreciate the innovations made by Cisco and Microsoft, but what brilliant innovation underlies Facebook or Amazon? The Internet itself is a great innovation (largely created by DARPA and universities), but is using it to talk to people or sell things really such a great leap? Tesla and SpaceX are innovative, but they have largely been money pits for Elon Musk, who inherited a good chunk of his wealth and made most of the rest by owning shares in PayPal. Yet even if we suppose that all the billionaires got where they are by inventing things that made the economy more efficient, it’s still not clear that they deserve to keep that staggering wealth.

I think the fundamental problem is that we have mentally equated ‘value of marginal product’ with ‘what you rightfully earn’. But the former is dependent upon the rest of the market: Who you are competing with, what your customers want. You can work very hard and be very talented, but if you’re making something that people aren’t willing to pay for, you won’t make any money. And the fact that people won’t pay for something doesn’t mean it isn’t valuable: If you produce public goods, they could benefit many people a great deal but still not draw in profits. Conversely, the fact that something is profitable doesn’t necessarily make it valuable: It could just be a very effective method of rent-seeking.

I’m not saying we should do away with markets; they’re very useful, and they do have a lot of benefits. But we should acknowledge their limitations. We should be aware not only that real-world markets are not perfectly efficient, but also that even a perfectly efficient market wouldn’t make for the best possible world.

Failures of democracy or capitalism?

May 24 JDN 2458992

Blaming capitalism for the world’s woes is a common habit of the left wing in general, but it seems to have greatly increased in frequency and volume in the era of Trump. I don’t want to say that this is always entirely wrong; capitalism in its purest form certainly does have genuine flaws that need to be addressed (and that’s why we have taxes, regulations, the welfare state, etc.).

But I’ve noticed that a lot of the things people complain about most really don’t seem to have a lot to do with capitalism.

For instance: Forced labor in Third World countries? First of all, that’s been around for as long as civilization has existed, and quite probably longer. It’s certainly not new to capitalism. Second, the freedom to choose who you transact with—including who employs you—is a fundamental principle of capitalism. In that sense, forced labor is the very opposite of capitalism; it spits upon everything capitalism stands for.

It’s certainly the case that many multinational corporations are implicated in slavery, even today—usually through complex networks of subsidiaries and supply chains. But it’s not clear to me that socialism is any kind of solution to this problem; nationalized industries are perfectly capable of enslaving people. (You may have heard of a place called the Gulag?)

Or what about corporate welfare, the trillions of dollars in subsidies we give to the oil and coal industries? Well, that’s not very capitalist either; capitalism is supposed to be equal competition in a free market, not the government supporting particular businesses or industries at the expense of others. And it’s not like socialist Venezuela has any lack of oil subsidies—indeed it’s not quite clear to me where the government ends and PDVSA begins. We need a word for such policies that are neither capitalist nor socialist; perhaps “corporatist”?

And really, the things that worry me about America today are not flaws in our markets; they are flaws in our government. We are not witnessing a catastrophic failure of capitalism; we are witnessing a catastrophic failure of democracy.

As if the Electoral College weren’t bad enough (both Al Gore and Hillary Clinton should have won the Presidency, by any sensible notion of democratic voting!), we are now seeing extreme levels of voter suppression, including refusing to accept mail-in ballots in the middle of a historic pandemic. This looks disturbingly like how democracy has collapsed in other countries, such as Turkey and Hungary.

The first-past-the-post plurality vote is already basically the worst possible voting system that can still technically be considered democratic. But it is rendered far worse by a defective primary system, which was even more of a shambles this year than usual. The number of errors in the Iowa caucus was ridiculous, and the primaries as a whole suffered from so many flaws that many voters now consider them illegitimate.

And of course there’s Donald Trump himself. He is certainly a capitalist (though he’s not exactly a free-trade neoliberal; he’s honestly more like a mercantilist). But what really makes him dangerous is not his free-market ideology, which is basically consistent with the US right wing going back at least 30 years; it’s his willingness to flout basic norms of democracy and surround himself with corrupt, incompetent sycophants. Republicans have been cutting the upper tax brackets and subsidizing oil companies for quite some time now; but it’s only recently that they have so blatantly disregarded the guardrails of democracy.

I’m not saying it’s wrong to criticize capitalism. There certainly are things worth criticizing, particularly about the most extreme free-market ideology. But it’s important to be clear about where exactly problems lie if you want to fix them—and right now we desperately need to fix them. America is in a crisis right now, something much bigger than just this pandemic. We are not in this crisis because of an excessive amount of deregulation or tax-cutting; we are in this crisis because of an excessive amount of corruption, incompetence, and authoritarianism. We wouldn’t fix this by nationalizing industries or establishing worker co-ops. We need to fix it first by voting out those responsible, and second by reforming our system so that they won’t get back in.

Terrible but not likely, likely but not terrible

May 17 JDN 2458985

The human brain is a remarkably awkward machine. It’s really quite bad at organizing data, relying on associations rather than formal categories.

It is particularly bad at negation. For instance, if I tell you that right now, no matter what, you must not think about a yellow submarine, the first thing you will do is think about a yellow submarine. (You may even get the Beatles song stuck in your head, especially now that I’ve mentioned it.) A computer would never make such a grievous error.

The human brain is also quite bad at separation. Daniel Dennett coined a word “deepity” for a particular kind of deep-sounding but ultimately trivial aphorism that seems to be quite common, which relies upon this feature of the brain. A deepity has at least two possible readings: On one reading, it is true, but utterly trivial. On another, it would be profound if true, but it simply isn’t true. But if you experience both at once, your brain is triggered for both “true” and “profound” and yields “profound truth”. The example he likes to use is “Love is just a word”. Well, yes, “love” is in fact just a word, but who cares? Yeah, words are words. But love, the underlying concept it describes, is not just a word—though if it were that would change a lot.

One thing I’ve come to realize about my own anxiety is that it involves a wide variety of different scenarios I imagine in my mind, and broadly speaking these can be sorted into two categories: Those that are likely but not terrible, and those that are terrible but not likely.

In the former category we have things like taking an extra year to finish my dissertation; the mean time to completion for a PhD is over 8 years, so finishing in 6 instead of 5 can hardly be considered catastrophic.

In the latter category we have things like dying from COVID-19. Yes, I’m a male with type A blood and asthma living in a high-risk county; but I’m also a young, healthy nonsmoker living under lockdown. Even without knowing the true fatality rate of the virus, my chances of actually dying from it are surely less than 1%.

But when both of those scenarios are running through my brain at the same time, the first triggers a reaction for “likely” and the second triggers a reaction for “terrible”, and I get this feeling that something terrible is actually likely to happen. And indeed if my probability of dying were as high as my probability of needing a 6th year to finish my PhD, that would be catastrophic.

I suppose it’s a bit strange that the opposite doesn’t happen: I never seem to get the improbability of dying attached to the mildness of needing an extra year. The confusion never seems to trigger “neither terrible nor likely”. Or perhaps it does, and my brain immediately disregards that as not worthy of consideration? It makes a certain sort of sense: An event that is neither probable nor severe doesn’t seem to merit much anxiety.

I suspect that many other people’s brains work the same way, eliding distinctions between different outcomes and ending up with a sort of maximal product of probability and severity.
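
As a toy illustration of this conflation (all numbers invented for the sake of the example), compare a scenario-by-scenario expected harm with the “felt” harm you get by pairing the highest probability with the worst severity:

```python
# Toy model of the probability/severity conflation described above.
# All probabilities and severity scores are invented for illustration.

scenarios = {
    "extra year on the PhD": {"prob": 0.50, "severity": 1},    # likely, not terrible
    "dying of COVID-19":     {"prob": 0.01, "severity": 100},  # terrible, not likely
}

# Scenario-by-scenario assessment: each probability times its own severity.
rational = sum(s["prob"] * s["severity"] for s in scenarios.values())

# The anxious brain's shortcut: worst severity paired with highest probability.
conflated = (max(s["prob"] for s in scenarios.values())
             * max(s["severity"] for s in scenarios.values()))

print(f"scenario-by-scenario expected harm: {rational:.1f}")
print(f"conflated 'felt' harm:              {conflated:.1f}")
```

The conflated figure overstates the scenario-by-scenario one by more than an order of magnitude, which matches the felt experience that something terrible is actually likely.
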

The solution to this is not an easy one: It requires deliberate effort and extensive practice, and benefits greatly from formal training by a therapist. Counter-intuitively, you need to actually focus more on the scenarios that cause you anxiety, and accept the anxiety that such focus triggers in you. I find that it helps to actually write down the details of each scenario as vividly as possible, and review what I have written later. After doing this enough times, you can build up a greater separation in your mind, and more clearly categorize—this one is likely but not terrible, that one is terrible but not likely. It isn’t a cure, but it definitely helps me a great deal. Perhaps it could help you.

We still don’t know the fatality rate of COVID-19

May 10 JDN 2458978

You’d think after being in this pandemic for several weeks we would now have a clear idea of the fatality rate of the virus. Unfortunately, this is not the case.

The problem is that what we can track really doesn’t tell us what we need to know.

What we can track is how many people have tested positive versus how many people have died. As of this writing, 247,000 people have died and 3,504,000 have tested positive. If this were the true fatality rate, it would be horrifying: A death rate of 7% is clearly in excess of even the 1918 influenza pandemic.

Fortunately, this is almost certainly an overestimate. But it’s actually possible for it to be an underestimate, and here’s why: A lot of those people who currently have the virus could still die.

We really shouldn’t be dividing (total deaths)/(total confirmed infections). We should be dividing (total deaths)/(total deaths + total recoveries). If people haven’t recovered yet, it’s too soon to say whether they will live.

On that basis, this begins to look more like an ancient plague: The number of recoveries is only about four times the number of deaths, which would be a staggering fatality rate of 20%.

But as I said, it’s far more likely that this is an overestimate, because we don’t actually know how many people have been infected. We only know how many people have been infected and gotten tested. A large proportion have never been tested; many of these were simply asymptomatic.

We know this because of the few cases we have of rigorous testing of a whole population, such as the passengers on this cruise liner bound for Antarctica. On that cruise liner, 6 were hospitalized, but 128 tested positive for the virus, meaning the number of asymptomatic infections was roughly twenty times the number of symptomatic ones.

There have been several studies attempting to determine what proportion of infections are asymptomatic, because this knowledge is so vital. Unfortunately the results are wildly inconsistent. They seem to range from 5% asymptomatic and 95% symptomatic to 95% asymptomatic and 5% symptomatic. The figure I find most plausible is about 80%: This means that the number of asymptomatic infected is about four times the number of symptomatic infected.

This means that the true calculation we should be doing actually looks like this: (total deaths)/(total deaths + total recoveries + total asymptomatic).

The number of deaths seems to be about one fourth the number of recoveries. But when you add the fact that four times as many who get infected are asymptomatic, things don’t look quite so bad. This yields an overall fatality rate of about 4%. This is still very high, and absolutely comparable to the 1918 influenza pandemic.
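
For concreteness, the three estimates discussed here can be computed from the figures in this post (247,000 deaths; 3,504,000 confirmed cases; recoveries roughly four times deaths; asymptomatic infections roughly four times symptomatic ones). These are rough ratios, not precise epidemiological data:

```python
# Back-of-the-envelope fatality-rate estimates, using the rough figures
# quoted in this post (illustrative ratios, not precise epidemiological data).

deaths = 247_000
confirmed = 3_504_000
recoveries = 4 * deaths              # recoveries ~ four times deaths
symptomatic = deaths + recoveries    # all resolved symptomatic cases
asymptomatic = 4 * symptomatic       # ~80% of infections asymptomatic

naive = deaths / confirmed                        # deaths over confirmed cases
resolved_only = deaths / (deaths + recoveries)    # ignores unresolved and untested
with_asymptomatic = deaths / (symptomatic + asymptomatic)

print(f"naive:             {naive:.1%}")              # about 7%
print(f"resolved only:     {resolved_only:.1%}")      # 20%
print(f"with asymptomatic: {with_asymptomatic:.1%}")  # 4%
```

Each denominator is a different guess at the true number of infections, which is why the estimates span such a wide range.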

But the truth is, we just don’t know. South Korea’s fatality rate was only 0.7%, which would be a really bad flu season but nothing catastrophic. (A typical flu has a fatality rate of about 0.1%.) On the (deaths)/(deaths + recoveries) basis, it looks almost as bad as the Black Death.

With so much uncertainty, there’s really only one option: Prepare for the worst-case scenario. Assume that the real death rate is massive, and implement lockdown measures until you can confirm that it isn’t.