The afterlife

Dec 1 JDN 2460646

Super-human beings aren’t that strange a thing to posit, but they are the sort of thing we’d expect to see clear evidence of if they existed. Without them, prayer is a muddled concept that is difficult to distinguish from simply “things that don’t work”. That leaves the afterlife. Could there be an existence for human consciousness after death?

No. There isn’t. Once you’re dead, you’re dead. It’s really that unequivocal. It is customary in most discussions of this matter to hedge and fret and be “agnostic” about what might lie beyond the grave—but in fact the evidence is absolutely overwhelming.

Everything we know about neuroscience—literally everything—would have to be abandoned in order for an afterlife to make sense. The core of neuroscience, the foundation on which the entire field is built, is what I call the Basic Fact of Cognitive Science: you are your brain. It is your brain that feels, your brain that thinks, your brain that dreams, your brain that remembers. We do not yet understand most of these processes in detail—though some we actually do, such as the processing of visual images. But it doesn’t take an expert mechanic to know that removing the engine makes the car stop running. It doesn’t take a brilliant electrical engineer to know that smashing the CPU makes the computer stop working. Saying that your mind continues to work without your brain is like saying that you can continue to digest without having a stomach or intestines.

This fundamental truth underlies everything we know about the science of consciousness. It can even be directly verified in a piecemeal form: There are specific areas of your brain that, when damaged, will cause you to become blind, or unable to understand language, or unable to speak grammatically (those are two distinct areas), or destroy your ability to form new memories or recall old ones, or even eliminate your ability to recognize faces. Most terrifying of all—yet by no means surprising to anyone who really appreciates the Basic Fact—is the fact that damage to certain parts of your brain will even change your personality, often making you impulsive, paranoid, or cruel, literally making you a worse person. More surprising and baffling is the fact that cutting your brain down the middle into left and right halves can split you into two people, each of whom operates half of your body (the opposite half, oddly enough), who mostly agree on things and work together but occasionally don’t. All of these are people we can actually interact with in laboratories and (except in cases of language deficits, of course) talk to about their experiences. It’s true that we can’t ask people what it’s like when their whole brain is dead, but of course not; there’s nobody left to ask.

This means that if you take away all the functions that experiments have shown require certain brain parts to function, whatever “soul” is left that survives brain death cannot do any of the following: See, hear, speak, understand, remember, recognize faces, or make moral decisions. In what sense is that worth calling a “soul”? In what sense is that you? Those are just the ones we know for sure; as our repertoire expands, more and more cognitive functions will be mapped to specific brain regions. And of course there’s no evidence that anything survives whatsoever.

Nor are near-death experiences any kind of evidence of an afterlife. Yes, some people who were close to dying or briefly technically dead (“He’s only mostly dead!”) have had very strange experiences during that time. Of course they did! Of course you’d have weird experiences as your brain is shutting down or struggling to keep itself online. Think about a computer that has had a magnet run over its hard drive; all sorts of weird glitches and errors are going to occur. (In fact, powerful magnets can have an effect on humans not all that dissimilar from what weaker magnets can do to computers! Certain sections of the brain can be disrupted or triggered in this way; it’s called transcranial magnetic stimulation and it’s actually a promising therapy for some neurological and psychological disorders.) People also have a tendency to over-interpret these experiences as supporting their particular religion, when in fact it’s usually something no more complicated than “a bright light” or “a long tunnel” (another popular item is “positive feelings”). If you stop and think about all the different ways you might come to see “a bright light” and have “positive feelings”, it should be pretty obvious that this isn’t evidence of St. Paul and the Pearly Gates.

The evidence against an afterlife is totally overwhelming. The fact that when we die, we are gone, is among the most certain facts in science. So why do people cling to this belief? Probably because it’s comforting—or rather because the truth that death is permanent and irrevocable is terrifying. You’re damn right it is; it’s basically the source of all other terror, in fact. But guess what? “Terrifying” does not mean “false”. The idea of an afterlife may be comforting, but it’s still obviously not true.

While I was in the process of writing this book, my father died of a ruptured intracranial aneurysm. The event was sudden and unexpected, and by the time I was able to fly from California to Michigan to see him, he had already lost consciousness—for what would turn out to be forever. This event caused me enormous grief, grief from which I may never fully recover. Nothing would make me happier than knowing that he was not truly gone, that he lives on somewhere watching over me. But alas, I know it is not true. He is gone. Forever.

However, I do have a couple of things to say that might offer some degree of consolation:

First, because human minds are software, pieces of our loved ones do go on—in us. Our memories of those we have lost are tiny shards of their souls. When we tell stories about them to others, we make copies of those shards; or to use a more modern metaphor, we back up their data in the cloud. Were we to somehow reassemble all these shards together, we could not rebuild the whole person—there are always missing pieces. But it is also not true that nothing remains. What we have left is how they touched our lives. And when we die, we will remain in how we touch the lives of others. And so on, and so on, as the ramifications of our deeds in life and the generations after us ripple out through the universe at the speed of light, until the end of time.

Moreover, if there’s no afterlife there can be no Hell, and Hell is literally the worst thing imaginable. To subject even a single person—even the most horrible person who ever lived, Hitler, Stalin, Mao, whomever—to the experience of maximum possible suffering forever is an atrocity of incomparable magnitude. Hitler may have deserved a million years of suffering for what he did—but I’m not so sure about maximum suffering, and forever is an awful lot longer than a million years. Indeed, forever is so much longer than a million years that if your sentence is forever, then after serving a million years you still have as much left to go as when you began. But the Bible doesn’t even just say that the most horrible mass murderers will go to Hell; no, it says everyone will go to Hell by default, and deserve it, and can only be forgiven if we believe. No amount of good works will save us from this fate, only God’s grace.

If you believe this—or even suspect it—religion has caused you deep psychological damage. This is the theology of an abusive father—“You must do exactly as I say, or you are worthless and undeserving of love and I will hurt you and it will be all your fault.” No human being, no matter what they have done or failed to do, could ever possibly deserve a punishment as terrible as maximum possible suffering forever. Even if you’re a serial rapist and murderer—and odds are, you’re not—you still don’t deserve to suffer forever. You have lived upon this planet for only a finite time; you can therefore only have committed finitely many crimes and you can only deserve at most finite suffering. In fact, the vast majority of the world’s population consists of good, decent people who deserve joy, not suffering.

Indeed, many ethicists would say that nobody deserves suffering, it is simply a necessary evil that we use as a deterrent from greater harms. I’m actually not sure I buy this—if you say that punishment is all about deterrence and not about desert, then you end up with the result that anything which deters someone could count as a fair punishment, even if it’s inflicted upon someone else who did nothing wrong. But no ethicist worthy of the name believes that anybody deserves eternal punishment—yet this is what Jesus says we all deserve in the Bible. And Muhammad says similar things in the Qur’an, about lakes of eternal burning (4:56) and eternal boiling water to drink (47:15) and so on. It’s entirely understandable that such things would motivate you—indeed, they should motivate you completely to do just about anything—if you believed they were true. What I don’t get is why anybody would believe they are true. And I certainly don’t get why anyone would be willing to traumatize their children with these horrific lies.

Then there is Pascal’s Wager: An infinite punishment can motivate you if it has any finite probability, right? Theoretically, yes… but here’s the problem with that line of reasoning: Anybody can just threaten you with infinite punishment to make you do anything. Clearly something is wrong with your decision theory if any psychopath can just make you do whatever he wants because you’re afraid of what might happen just in case what he says might possibly be true. Beware of plausible-seeming theories that lead to such absurd conclusions; it may not be obvious what’s wrong with the argument, but it should be obvious that something is.

Against Moral Anti-Realism

Sep 22 JDN 2460576

Moral anti-realism is more philosophically sophisticated than relativism, but it is equally mistaken. It is what it sounds like: the negation of moral realism. Moral anti-realists hold that moral claims are meaningless because they rest upon presumptions about the world that fail to hold. To an anti-realist, “genocide is wrong” is meaningless because there is no such thing as “wrong”, much as to any sane person “unicorns have purple feathers” is meaningless because there are no such things as unicorns. They aren’t saying that genocide isn’t wrong—they’re saying that wrong itself is a defective concept.

The vast majority of people profess strong beliefs in moral truth, and indeed strong beliefs about particular moral issues, such as abortion, capital punishment, same-sex marriage, euthanasia, contraception, civil liberties, and war. There is at the very least a troubling tension here between academia and daily life.

This does not by itself prove that moral truths exist. Ordinary people could simply be wrong about these core beliefs. Indeed, I must acknowledge that most ordinary people clearly are deeply ignorant about certain things: only 55% of Americans believe that the theory of evolution is true, and only 66% of Americans agree that the majority of recent changes in Earth’s climate have been caused by human activity. In reality these are scientific facts, empirically demonstrable through multiple lines of evidence and verified beyond all reasonable doubt; both evolution and climate change are universally accepted within the scientific community. In scientific terms there is no more doubt about evolution or climate change than there is about the shape of the Earth or the structure of the atom.

If there were similarly compelling reasons to be moral anti-realists, then the fact that most people believe in morality would be little different: Perhaps most ordinary people are simply wrong about these issues. But when asked to provide similarly compelling evidence for why they reject the moral views of ordinary people, moral anti-realists have little to offer.

Many anti-realists will note the diversity of moral opinions in the world, as John Burgess did, which would be rather like noting the diversity of beliefs about the soul as an argument against neuroscience, or noting the diversity of beliefs about the history of life as an argument against evolution. Many people are wrong about many things that science has shown to be the case; this is worrisome for various reasons, but it is not an argument against the validity of scientific knowledge. Similarly, a diversity of opinions about morality is worrisome, but hardly evidence against the validity of morality.

In fact, when they talk about such fundamental disagreements in morality, anti-realists don’t have very compelling examples. It’s easy to find fundamental disagreements about biology—ask an evolutionary biologist and a Creationist whether humans share an ancestor with chimpanzees. It’s easy to find fundamental disagreements about cosmology—ask a physicist and an evangelical Christian how the Earth began. It’s easy to find fundamental disagreements about climate—ask a climatologist and an oil company executive whether human beings are causing global warming. But where are these fundamental disagreements in morality? Sure, on specific matters there is some disagreement. There are differences between cultures regarding what animals it is acceptable to eat, and differences between cultures about what constitutes acceptable clothing, and differences on specific political issues. But in what society is it acceptable to kill people arbitrarily? Where is it all right to steal whatever you want? Where is lying viewed as a good thing? Where is it obligatory to eat only dirt? In what culture has wearing clothes been a crime? Moral realists are by no means committed to saying that everyone agrees about everything—but it does support our case to point out that most people agree on most things most of the time.

There are a few compelling cases of moral disagreement, but they hardly threaten moral realism. How might we show one culture’s norms to be better than another’s? Compare homicide rates. Compare levels of poverty. Compare overall happiness, perhaps using surveys—or even brain scans. This kind of data exists, and it has a fairly clear pattern: people living in social democratic societies (such as Sweden and Norway) are wealthier, safer, longer-lived, and overall happier than people in other societies. Moreover, using the same publicly-available data, democratic societies in general do much better than authoritarian societies, by almost any measure. This is an empirical fact. It doesn’t necessarily mean that such societies are doing everything right—but they are clearly doing something right. And it really isn’t so implausible to say that what they are doing right is enforcing a good system of moral, political, and cultural norms.

Then again, perhaps some people would accept these empirical facts but still insist that their culture is superior; suppose the disagreement really is radical and intractable. This still leaves two possibilities for moral realism.

The most obvious answer would be to say that one group is wrong—that, objectively, one culture is better than another.

But even if that doesn’t work, there is another way: Perhaps both are right, or more precisely, perhaps these two cultural systems are equally good but incompatible. Is this relativism? Some might call it that, but if it is, it’s relativism of a very narrow kind. I am emphatically not saying that all existing cultures are equal, much less that all possible cultures are equal. Instead, I am saying that it is entirely possible to have two independent moral systems which prescribe different behaviors yet nonetheless result in equally-good overall outcomes.

I could make a mathematical argument involving local maxima of nonlinear functions, but instead I think I’ll use an example: Traffic laws.

In the United States, we drive on the right side of the road. In the United Kingdom, they drive on the left side. Which way is correct? Both are—both systems work well, and neither is superior in any discernible way. In fact, there are other systems that would be just as effective, like the system of all one-way roads that prevails in Manhattan.

Yet does this mean that we should abandon reason in our traffic planning, throw up our hands and declare that any traffic system is as good as any other? On the contrary—there are plenty of possible traffic systems that clearly don’t work. Pointing several one-way roads into one another with no exit is clearly not going to result in good traffic flow. Having each driver flip a coin to decide whether to drive on the left or the right would result in endless collisions. Moreover, our own system clearly isn’t perfect. Nearly 40,000 Americans die in car collisions every year; perhaps we can find a better system that will prevent some or all of these deaths. The mere fact that two, or three, or even 400 different systems of laws or morals are equally good does not entail that all systems are equally good. Even if two cultures really are equal, that doesn’t mean we need to abandon moral realism; it merely means that some problems have multiple solutions. “X² = 4; what is X?” has two perfectly correct answers (2 and -2), but it also has an infinite variety of wrong answers.

In fact, moral disagreement may not be evidence of anti-realism at all. In order to disagree with someone, you must think that there is an objective fact to be decided. If moral statements were seen as arbitrary and subjective, then people wouldn’t argue about them very much. Imagine an argument: “Chocolate is the best flavor of ice cream!” “No, vanilla is the best!” This sort of argument might happen on occasion between seven-year-olds, but it is definitely not the sort of thing we hear from mature adults. This is because as adults we realize that tastes in ice cream really are largely subjective. An anti-realist can, in theory, account for this, if they can explain why moral values are falsely perceived as objective while tastes are not; but if all values really are arbitrary and subjective, why is it that this is obvious to everyone in the one case and not the other? In fact, there are compelling reasons to think that we couldn’t perceive moral values as arbitrary even if we tried. Some people say “abortion is a right”, others say “abortion is murder”. Even if we were to say that these are purely arbitrary, we would still be left with the task of deciding what laws to make on abortion. Regardless of where the goals come from, some goals are just objectively incompatible.

Another common anti-realist argument rests upon the way that arguments about morality often become emotional and irrational. Charles Stevenson has made this argument; apparently Stevenson has never witnessed an argument about religion, science, or policy, certainly not one outside academia. Many laypeople will insist passionately that the free market is perfect, global warming is a lie, or the Earth is only 6,000 years old. (Often the same people, come to think of it.) People will grow angry and offended if such beliefs are disputed. Yet these are objectively false claims. Unless we want to be anti-realists about GDP, temperature and radiometric dating, emotional and irrational arguments cannot compel us to abandon realism.

Another frequent claim, commonly known as the “argument from queerness”, says that moral facts would need to be something very strange, usually imagined as floating obligations existing somewhere in space; but this is rather like saying that mathematical facts cannot exist because we do not see floating theorems in space and we have never met a perfect triangle. In fact, there is no such thing as a floating speed of light or a floating Schrödinger’s equation either, but no one thinks this is an argument against physics.

A subtler version of this argument, the original “argument from queerness” put forth by J.L. Mackie, says that moral facts are strange because they are intrinsically motivating, something no other kind of facts would be. This is no doubt true; but it seems to me a fairly trivial observation, since part of the definition of “moral fact” is that anything which has this kind of motivational force is a moral (or at least normative) fact. Any well-defined natural kind is subject to the same sort of argument. Spheres are perfectly round three-dimensional objects, something no other object is. Eyes are organs that perceive light, something no other organ does. Moral facts are indeed facts that categorically motivate action, which no other thing does—but so what? All this means is that we have a well-defined notion of what it means to be a moral fact.

Finally, it is often said that moral claims are too often based on religion, and religion is epistemically unfounded, so morality must fall as well. Now, unlike most people, I completely agree that religion is epistemically unfounded. Instead, the premise I take issue with is the idea that moral claims have anything to do with religion. A lot of people seem to think so; but in fact our most important moral values transcend religion and in many cases actually contradict it.

Now, it may well be that the majority of claims people make about morality are to some extent based in their religious beliefs. The majority of governments in history have been tyrannical; does that mean that government is inherently tyrannical, there is no such thing as a just government? The vast majority of human beings have never traveled in outer space; does that mean space travel is impossible? Similarly, I see no reason to say that simply because the majority of moral claims (maybe) are religious, therefore moral claims are inherently religious.

Generally speaking, moral anti-realists make a harsh distinction between morality and other domains of knowledge. They agree that there are such things as trucks and comets and atoms, but do not agree that there are such things as obligations and rights. Indeed, a typical moral anti-realist speaks as if they are being very rigorous and scientific while we moral realists are being foolish, romantic, even superstitious. Moral anti-realism has an attitude of superciliousness not seen in a scientific faction since behaviorism.

But in fact, I think moral anti-realism is the result of a narrow understanding of fundamental physics and cognitive science. It is a failure to drink deep enough of the Pierian springs. This is not surprising, since fundamental physics and cognitive science are so mind-bogglingly difficult that even the geniuses of the world barely grasp them. Quoth Feynman: “I think I can safely say that nobody understands quantum mechanics.” This was of course a bit overstated—Feynman surely knew that there are things we do understand about quantum physics, for he was among those who best understood them. Still, even the brightest minds in the world face total bafflement before problems like dark energy, quantum gravity, the binding problem, and the Hard Problem. It is no moral failing to have a narrow understanding of fundamental physics and cognitive science, for the world’s greatest minds have a scarcely broader understanding.

The failing comes from trying to apply this narrow understanding of fundamental science to moral problems without the humility to admit that the answers are never so simple. “Neuroscience proves we have no free will.” No it doesn’t! It proves we don’t have the kind of free will you thought we did. “We are all made of atoms, therefore there can be no such thing as right and wrong.” And what do you suppose we would have been made of if there were such things as right and wrong? Magical fairy dust?

Here is what I think moral anti-realists get wrong: They hear only part of what scientists say. Neuroscientists explain to them that the mind is a function of matter, and they hear it as if we had said there is only mindless matter. Physicists explain to them that we have much more precise models of atomic phenomena than we do of human behavior, and they hear it as if we had said that scientific models of human behavior are fundamentally impossible. They trust that we know very well what atoms are made of and very poorly what is right and wrong—when quite the opposite is the case.

In fact, the more we learn about physics and cognitive science, the more similar the two fields seem. There was a time when Newtonian mechanics ruled, when everyone thought that physical objects are made of tiny billiard balls bouncing around according to precise laws, while consciousness was some magical, “higher” spiritual substance that defied explanation. But now we understand that quantum physics is all chaos and probability, while cognitive processes can be mathematically modeled and brain waves can be measured in the laboratory. Something as apparently simple as a proton—let alone an extended, complex object, like a table or a comet—is fundamentally a functional entity, a unit of structure rather than substance. To be a proton is to be organized the way protons are and to do what protons do; and so to be human is to be organized the way humans are and to do what humans do. The eternal search for “stuff” of which everything is made has come up largely empty; eventually we may find the ultimate “stuff”, but when we do, it will already have long been apparent that substance is nowhere near as important as structure. Reductionism isn’t so much wrong as beside the point—when we want to understand what makes a table a table or what makes a man a man, it simply doesn’t matter what stuff they are made of. The table could be wood, glass, plastic, or metal; the man could be carbon, nitrogen and water like us, or else silicon and tantalum like Lieutenant Commander Data on Star Trek. Yes, structure must be made of something, and the substance does affect the structures that can be made out of it, but the structure is what really matters, not the substance.

Hence, I think it is deeply misguided to suggest that because human beings are made of molecules, this means that we are just the same thing as our molecules. Love is indeed made of oxytocin (among other things), but only in the sense that a table is made of wood. To know that love is made of oxytocin really doesn’t tell us very much about love; we need also to understand how oxytocin interacts with the bafflingly complex system that is a human brain—and indeed how groups of brains get together in relationships and societies. This is because love, like so much else, is not substance but function—something you do, not something you are made of.

It is not hard, rigorous science that says love is just oxytocin and happiness is just dopamine; it is naive, simplistic science. It is the sort of “science” that comes from overlaying old prejudices (like “matter is solid, thoughts are ethereal”) with a thin veneer of knowledge. To be a realist about protons but not about obligations is to be a realist about some functional relations and not others. It is to hear “mind is matter”, and fail to understand the is—the identity between them—instead acting as if we had said “there is no mind; there is only matter”. You may find it hard to believe that mind can be made of matter, as do we all; yet the universe cares not about our incredulity. The perfect correlation between neurochemical activity and cognitive activity has been verified in far too many experiments to doubt. Somehow, that kilogram of wet, sparking gelatin in your head is actually thinking and feeling—it is actually you.

And once we realize this, I do not think it is a great leap to realize that the vast collection of complex, interacting bodies moving along particular trajectories through space that was the Holocaust was actually wrong, really, objectively wrong.

What does “can” mean, anyway?

Apr 7 JDN 2460409

I don’t remember where, but I believe I once heard a “philosopher” defined as someone who asks the sort of question everyone knows the answer to, and doesn’t know the answer.

By that definition, I’m feeling very much a philosopher today.

“Can” is one of the most common words in the English language; the Oxford English Corpus lists it as the 53rd most common word. Similar words are found in essentially every language, and nearly always rank among their most common.

Yet when I try to precisely define what we mean by this word, it’s surprisingly hard.

Why, you might even say I can’t.

The very concept of “capability” is surprisingly slippery—just what is someone capable of?

My goal in this post is basically to make you as confused about the concept as I am.

I think that experiencing disabilities that include executive dysfunction has made me especially aware of just how complicated the concept of ability really is. This also relates back to my previous post questioning the idea of “doing your best”.

Here are some things that “can” might mean, or even sometimes seems to mean:

1. The laws of physics do not explicitly prevent it.

This seems far too broad. By this definition, you “can” do almost anything—as long as you don’t make free energy, reduce entropy, or exceed the speed of light.

2. The task is something that other human beings have performed in the past.

This is surely a lot better; it doesn’t say that I “can” fly to Mars or turn into a tree. But by this definition, I “can” sprint as fast as Usain Bolt and swim as long as Michael Phelps—which certainly doesn’t seem right. Indeed, not only would I say I can’t do that; I’d say I couldn’t do that, no matter how hard I tried.

3. The task is something that human beings in similar physical condition to my own have performed in the past.

Okay, we’re getting warmer. But just what do we mean, “similar condition”? No one else in the world is in exactly the same condition I am.

And even if those other people are in the same physical condition, their mental condition could be radically different. Maybe they’re smarter than I am, or more creative—or maybe they just speak Swahili. It doesn’t seem right to say that I can speak Swahili. Maybe I could speak Swahili, if I spent a lot of time and effort learning it. But at present, I can’t.

4. The task is something that human beings in similar physical and mental condition to my own have performed in the past.

Better still. This seems to solve the most obvious problems. It says that I can write blog posts (check), and I can’t speak Swahili (also check).

But it’s still not specific enough. For, even if we can clearly define what constitutes “people like me” (can we?), there are many different circumstances that people like me have been in, and what they did has varied quite a bit, depending on those circumstances.

People in extreme emergencies have performed astonishing feats of strength, such as lifting cars. Maybe I could do something like that, should the circumstance arise? But it certainly doesn’t seem right to say that I can lift cars.

5. The task is something that human beings in similar physical and mental condition to my own have performed in the past, in circumstances similar to my own.

That solves the above problems (provided we can sufficiently define “similar” for both people and circumstances). But it actually raises a different problem: If the circumstances were so similar, shouldn’t their behavior and mine be the same?

By that metric, it seems like the only way to know if I can do something is to actually do it. If I haven’t actually done it—in that mental state, in those circumstances—then I can’t really say I could have done it. At that point, “can” becomes a really funny way of saying “do”.

So it seems we may have narrowed down a little too much here.

And what about the idea that I could speak Swahili, if I studied hard? That seems to be something broader; maybe it’s this:

6. The task is something that human beings in a physical or mental condition attainable from my own have performed in the past.

But now we have to ask, what do we mean by “attainable”? We come right back to asking about capability again: What kind of effort can I make in order to learn Swahili, train as a pilot, or learn to SCUBA dive?

Maybe I could lift a car, if I had to do it to save my life or the life of a loved one. But without the adrenaline rush of such an emergency, I might be completely unable to do it, and even with that adrenaline rush, I’m sure the task would injure me severely. Thus, I don’t think it’s fair to say I can lift cars.

So how much can I lift? I have found that I can, as part of a normal workout, bench-press about 80 pounds. But I don’t think that is the limit of what I can lift; it’s more like what I can lift safely and comfortably for multiple sets of multiple reps without causing myself undue pain. For a single rep, I could probably do considerably more—though how much more is quite hard to say. 100 pounds? 120? (There are online calculators that supposedly will convert your multi-rep weight to a single-rep max, but for some reason, they don’t seem to be able to account for multiple sets. If I do 4 sets of 10 reps, is that 10 reps, or 40 reps? This is the difference between my one-rep max being 106 and it being 186. The former seems closer to the truth, but is probably still too low.)
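For the curious, here is a minimal sketch of the kind of formula those calculators use. The specific formula (Epley) is my assumption, not necessarily the one any given site uses, but fed the 80-pound working weight above it lands very close to the 106 and 186 figures I quoted:

```python
# A sketch of a standard one-rep-max estimate (the Epley formula), purely
# illustrative; actual online calculators may use different formulas.
def epley_one_rep_max(weight_lb: float, reps: int) -> float:
    """Estimated single-rep maximum from a multi-rep working weight."""
    return weight_lb * (1 + reps / 30)

print(round(epley_one_rep_max(80, 10)))  # ~107, treating 4 sets of 10 as 10 reps
print(round(epley_one_rep_max(80, 40)))  # ~187, treating 4 sets of 10 as 40 reps
```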

If I absolutely had to—say, something that heavy has fallen on me and lifting it is the only way to escape—could I bench-press my own weight of about 215 pounds? I think so. But I’m sure it would hurt like hell, and I’d probably be sore for days afterward.

Now, consider tasks that require figuring something out, something I don’t currently know but could conceivably learn or figure out. It doesn’t seem right to say that I can solve the P/NP problem or the Riemann Hypothesis. But it does seem right to say that I can at least work on those problems—I know enough about them that I can at least get started, if perhaps not make much real progress. Whereas most people, while they could theoretically read enough books about mathematics to one day know enough that they could do this, are not currently in a state where they could even begin to do that.

Here’s another question for you to ponder:

Can I write a bestselling novel?

Maybe that’s no fair. Making it a bestseller depends on all sorts of features of the market that aren’t entirely under my control. So let’s make it easier:

Can I write a novel?

I have written novels. So at first glance it seems obvious that I can write a novel.

But there are many days, especially lately, on which I procrastinate my writing and struggle to get any writing done. On such a day, can I write a novel? If someone held a gun to my head and demanded that I write the novel, could I get it done?

I honestly don’t know.

Maybe there’s some amount of pressure that would in fact compel me, even on the days of my very worst depression, to write the novel. Or maybe if you put that gun to my head, I’d just die. I don’t know.

But I do know one thing for sure: It would hurt.

Writing a novel on my worst days would require enormous effort and psychological pain—and honestly, I think it wouldn’t feel all that different from trying to lift 200 pounds.

Now we are coming to the real heart of the matter:

How much cost am I expected to pay, for it to still count as within my ability?

There are many things that I can do easily, that don’t really require much effort. But this varies too.

On most days, brushing my teeth is something I just can do—I remember to do it, I choose to do it, it happens; I don’t feel like I have exerted a great deal of effort or paid any substantial cost.

But there are days when even brushing my teeth is hard. Generally I do make it happen, so evidently I can do it—but it is no longer free and effortless the way it usually is.

There are other things which require effort, but are generally feasible, such as working out. Working out isn’t easy (essentially by design), but if I put in the effort, I can make it happen.

But again, some days are much harder than others.

Then there are things which require so much effort they feel impossible, even if they theoretically aren’t.

Right now, that’s where I’m at with trying to submit my work to journals or publishers. Each individual action is certainly something I should be physically able to take. I know the process of what to do—I’m not trying to solve the Riemann Hypothesis here. I have even done it before.

But right now, today, I don’t feel like I can do it. There may be some sense in which I “can”, but it doesn’t feel relevant.

And I felt the same way yesterday, and the day before, and pretty much every day for at least the past year.

I’m not even sure if there is an amount of pressure that could compel me to do it—e.g. if I had a gun to my head. Maybe there is. But I honestly don’t know for sure—and if it did work, once again, it would definitely hurt.

Others in the disability community have a way of describing this experience, which probably sounds strange if you haven’t heard it before:

“Do you have enough spoons?”

(For D&D fans, I’ve also heard others substitute “spell slots”.)

The idea is this: Suppose you are endowed with a certain number of spoons, which you can consume as a resource in order to achieve various tasks. The only way to replenish your spoons is rest.

Some tasks are cheap, requiring only 1 or 2 spoons. Others may be very costly, requiring 10, or 20, or perhaps even 50 or 100 spoons.

But the number of spoons you start with each morning may not always be the same. If you start with 200, then a task that requires 2 will seem trivial. But if you only started with 5, even those 2 will feel like a lot.

As you deplete your available spoons, you will find you need to ration which tasks you are able to complete; thus, on days when you wake up with fewer spoons, things that you would ordinarily do may end up not getting done.

I think submitting to a research journal is a 100-spoon task, and I simply haven’t woken up with more than 50 spoons in any given day within the last six months.

I don’t usually hear it formulated this way, but for me, I think the cost varies too.

I think that on a good day, brushing my teeth is a 0-spoon task (a “cantrip”, if you will); I could do it as many times as necessary without expending any detectable effort. But on a very bad day, it will cost me a couple of spoons just to do that. I’ll still get it done, but I’ll feel drained by it. I couldn’t keep doing it indefinitely. It will prevent me from being able to do something else, later in the day.

Writing is something that seems to vary a great deal in its spoon cost. On a really good day when I’m feeling especially inspired, I might get 5000 words written and feel like I’ve only spent 20 spoons; while on a really bad day, that same 20 spoons won’t even get me a single paragraph.
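To make the metaphor concrete, here is a toy sketch of the accounting involved. Every number in it is invented, and the real experience is of course nowhere near this tidy:

```python
# A toy sketch of the spoon metaphor (illustrative only; all numbers are
# made up). Each day starts with a budget of spoons; each task has a cost,
# and both the budget and the costs vary from day to day.
def tasks_completed(daily_spoons: int, tasks: list[tuple[str, int]]) -> list[str]:
    """Return the tasks that fit within today's spoon budget, in order."""
    done, remaining = [], daily_spoons
    for name, cost in tasks:
        if cost <= remaining:
            done.append(name)
            remaining -= cost
    return done

good_day = [("brush teeth", 0), ("work out", 10), ("write 5000 words", 20), ("submit to journal", 100)]
bad_day = [("brush teeth", 2), ("work out", 30), ("write a paragraph", 20), ("submit to journal", 100)]

print(tasks_completed(200, good_day))  # everything gets done, with spoons to spare
print(tasks_completed(50, bad_day))    # the writing and the submission don't fit
```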

It may occur to you to ask:

What is the actual resource being depleted here?

Just what are the spoons, anyway?

That, I really can’t say.

I don’t think it’s as simple as brain glucose, though there were a few studies that seemed to support such a view. If it were, drinking something sugary ought to fix it, and generally that doesn’t work (and if you do that too often, it’s bad for your health). Even weirder is that, for some people, just tasting sugar seems to help with self-control. My own guess is that if your particular problem is hypoglycemia, drinking sugar works, and otherwise, not so much.

There could be literally some sort of neurotransmitter reserves that get depleted, or receptors that get overloaded; but I suspect it’s not even that simple either. These are the models we use because they’re the best we have—but the brain is in reality far more complicated than any of our models.

I’ve heard people say “I ran out of serotonin today”, but I’m fairly sure they didn’t actually get their cerebrospinal fluid tested first. (And since most of your serotonin is actually in your gut, if they really ran out they should be having severe gastrointestinal symptoms.) (I had my cerebrospinal fluid tested once; most agonizing pain of my life. To say that I don’t recommend the experience is such an understatement, it’s rather like saying Hell sounds like a bad vacation spot. Indeed, if I believed in Hell, I would have to imagine it feels like getting a spinal tap every day for eternity.)

So for now, the best I can say is, I really don’t know what spoons are. And I still don’t entirely know what “can” means. But at least maybe now you’re as confused as I am.

The stochastic overload model

Mar 12 JDN 2460016

The next few posts are going to be a bit different, a bit more advanced and technical than usual. This is because, for the first time in several months at least, I am actually working on what could be reasonably considered something like theoretical research.

I am writing it up in the form of blog posts, because actually writing a paper is still too stressful for me right now. This also forces me to articulate my ideas in a clearer and more readable way, rather than dive directly into a morass of equations. It also means that even if I never actually get around to finishing a paper, the idea is out there, and maybe someone else could make use of it (and hopefully give me some of the credit).

I’ve written previously about the Yerkes-Dodson effect: On cognitively-demanding tasks, increased stress improves performance, but only up to a point, after which it begins to degrade performance again. The effect is well-documented, but the mechanism is poorly understood.

I am currently on the wrong side of the Yerkes-Dodson curve, which is why I’m too stressed to write this as a formal paper right now. But that also gave me some ideas about how it may work.

I have come up with a simple but powerful mathematical model that may provide a mechanism for the Yerkes-Dodson effect.

This model is clearly well within the realm of a behavioral economic model, but it is also closely tied to neuroscience and cognitive science.

I call it the stochastic overload model.

First, a metaphor: Consider an engine, which can run faster or slower. If you increase its RPMs, it will output more power, and provide more torque—but only up to a certain point. Eventually it hits a threshold where it will break down, or even break apart. In real engines, we often include safety systems that force the engine to shut down as it approaches such a threshold.

I believe that human brains function on a similar principle. Stress increases arousal, which activates a variety of processes via the sympathetic nervous system. This activation improves performance on both physical and cognitive tasks. But it has a downside: especially on cognitively demanding tasks that require sustained effort, I hypothesize that too much sympathetic activation can result in a kind of system overload, where your brain can no longer handle the stress and processes are forced to shut down.

This shutdown could be brief—a few seconds, or even a fraction of a second—or it could be prolonged—hours or days. That might depend on just how severe the stress is, or how much of your brain it requires, or how prolonged it is. For purposes of the model, this isn’t vital. It’s probably easiest to imagine it being a relatively brief, localized shutdown of a particular neural pathway. Then, your performance in a task is summed up over many such pathways over a longer period of time, and by the law of large numbers your overall performance is essentially the average performance of all your brain systems.

That’s the “overload” part of the model. Now for the “stochastic” part.

Let’s say that, in the absence of stress, your brain has a certain innate level of sympathetic activation, which varies over time in an essentially chaotic, unpredictable—stochastic—sort of way. It is never really completely deactivated, and may even have some chance of randomly overloading itself even without outside input. (Actually, a potential role in the model for the personality trait neuroticism is an innate tendency toward higher levels of sympathetic activation in the absence of outside stress.)

Let’s say that this innate activation is x, which follows some kind of known random distribution F(x).

For simplicity, let’s also say that added stress s adds linearly to your level of sympathetic activation, so your overall level of activation is x + s.

For simplicity, let’s say that activation ranges between 0 and 1, where 0 is no activation at all and 1 is the maximum possible activation and triggers overload.

I’m assuming that if a pathway shuts down from overload, it doesn’t contribute at all to performance on the task. (You can assume it’s only reduced performance, but this adds complexity without any qualitative change.)

Since sympathetic activation improves performance, but can result in overload, your overall expected performance in a given task can be computed as the product of two terms:

[expected value of x + s, provided overload does not occur] * [probability overload does not occur]

E[x + s | x + s < 1] P[x + s < 1]

The first term can be thought of as the incentive effect: Higher stress promotes more activation and thus better performance.

The second term can be thought of as the overload effect: Higher stress also increases the risk that activation will exceed the threshold and force shutdown.

This equation actually turns out to have a remarkably elegant form as an integral (and here’s where I get especially technical and mathematical):

\int_{0}^{1-s} (x+s) dF(x)

The integral subsumes both the incentive effect and the overload effect into one term; you can also think of the +s in the integrand as the incentive effect and the 1-s in the limit of integration as the overload effect.
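Before the pictures, for readers who prefer code, here is a minimal numerical sketch of the model. The particular distribution (a logistic density truncated to [0, 1]) and the parameter values are illustrative assumptions of mine, not part of the model itself:

```python
# Minimal numerical sketch of the stochastic overload model (illustrative
# assumptions: logistic innate-activation density truncated to [0, 1],
# arbitrary location and scale). Stress s shifts activation to x + s, and
# any pathway with x + s >= 1 overloads and contributes nothing, so
# expected performance is the integral of (x + s) dF(x) from 0 to 1 - s.
import numpy as np

def expected_performance(s, mu=0.3, scale=0.1, n=10_000):
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    z = np.exp(-(x - mu) / scale)
    pdf = z / (scale * (1.0 + z) ** 2)   # logistic PDF
    pdf /= pdf.sum() * dx                # renormalize on [0, 1]
    ok = x + s < 1.0                     # pathways that do not overload
    return float(np.sum((x[ok] + s) * pdf[ok]) * dx)

stresses = np.linspace(0.0, 0.8, 81)
curve = [expected_performance(s) for s in stresses]
print("Performance peaks near s =", round(float(stresses[np.argmax(curve)]), 2))
# Plotting curve against stresses traces out the inverted-U (Yerkes-Dodson) shape.
```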

For the uninitiated, this is probably just Greek. So let me show you some pictures to help with your intuition. These are all freehand sketches, so let me apologize in advance for my limited drawing skills. Think of them as something like Arthur Laffer’s famous cocktail napkin.

Suppose that, in the absence of outside stress, your innate activation follows a distribution like this (this could be a normal or logit PDF; as I’ll talk about next week, logit is far more tractable):

As I start adding stress, this shifts the distribution upward, toward increased activation:

Initially, this will improve average performance.

But at some point, increased stress actually becomes harmful, as it increases the probability of overload.

And eventually, the probability of overload becomes so high that performance becomes worse than it was with no stress at all:

The result is that overall performance, as a function of stress, looks like an inverted U-shaped curve—the Yerkes-Dodson curve:

The precise shape of this curve depends on the distribution that we use for the innate activation, which I will save for next week’s post.

Hypocrisy is underrated

Sep 12 JDN 2459470

Hypocrisy isn’t a good thing, but it isn’t nearly as bad as most people seem to think. Often accusing someone of hypocrisy is taken as a knock-down argument for everything they are saying, and this is just utterly wrong. Someone can be a hypocrite and still be mostly right.

Often people are accused of hypocrisy when they are not being hypocritical; for instance, the right wing seems to think that “They want higher taxes on the rich, but they are rich!” is hypocrisy, when in fact it’s simply altruism. (If they had wanted the rich guillotined, that would be hypocrisy. Maybe the problem is that the right wing can’t tell the difference?) Even worse, “They live under capitalism but they want to overthrow capitalism!” is not even close to hypocrisy—after all, how would someone overthrow a system they weren’t living under? (There are many things wrong with Marxists, but that is not one of them.)

But in fact I intend something stronger: Hypocrisy itself just isn’t that bad.


There are currently two classes of Republican politicians with regard to the COVID vaccines: Those who are consistent in their principles and don’t get the vaccines, and those who are hypocrites and get the vaccines while telling their constituents not to. Of the two, who is better? The hypocrites. At least they are doing the right thing even as they say things that are very, very wrong.

There are really four cases to consider. The principles you believe in could be right, or they could be wrong. And you could follow those principles, or you could be a hypocrite. These two factors are independent of each other.

If your principles are right and you are consistent, that’s the best case; if your principles are right and you are a hypocrite, that’s worse.

But if your principles are wrong and you are consistent, that’s the worst case; if your principles are wrong and you are a hypocrite, that’s better.

In fact I think for most things the ordering goes like this: Consistent Right > Hypocritical Wrong > Hypocritical Right > Consistent Wrong. Your behavior counts for more than your principles—so if you’re going to be a hypocrite, it’s better for your good actions to not match your bad principles.

Obviously if we could get people to believe good moral principles and then follow them, that would be best. And we should in fact be working to achieve that.

But if you know that someone’s moral principles are wrong, it doesn’t accomplish anything to accuse them of being a hypocrite. If it’s true, that’s a good thing.

Here’s a pretty clear example for you: Anyone who says that the Bible is infallible but doesn’t want gay people stoned to death is a hypocrite. The Bible is quite clear on this matter; Leviticus 20:13 really doesn’t leave much room for interpretation. By this standard, most Christians are hypocrites—and thank goodness for that. I owe my life to the hypocrisy of millions.

Of course if I could convince them that the Bible isn’t infallible—perhaps by pointing out all the things it says that contradict their most deeply-held moral and factual beliefs—that would be even better. But the last thing I want to do is make their behavior more consistent with their belief that the Bible is infallible; that would turn them into fanatical monsters. The Spanish Inquisition was very consistent in behaving according to the belief that the Bible is infallible.

Here’s another example: Anyone who thinks that cruelty to cats and dogs is wrong but is willing to buy factory-farmed beef and ham is a hypocrite. Any principle that would tell you that it’s wrong to kick a dog or cat would tell you that the way cows and pigs are treated in CAFOs is utterly unconscionable. But if you are really unwilling to give up eating meat and you can’t find or afford free-range beef, it still would be bad for you to start kicking dogs in a display of your moral consistency.

And one more example for good measure: The leaders of any country who resist human rights violations abroad but tolerate them at home are hypocrites. Obviously the best thing to do would be to fight human rights violations everywhere. But perhaps for whatever reason you are unwilling or unable to do this—one disturbing truth is that many human rights violations at home (such as draconian border policies) are often popular with your local constituents. Human-rights violations abroad are also often more severe—detaining children at the border is one thing, a full-scale genocide is quite another. So, for good reasons or bad, you may decide to focus your efforts on resisting human rights violations abroad rather than at home; this would make you a hypocrite. But it would still make you much better than a more consistent leader who simply ignores all human rights violations wherever they may occur.

In fact, there are cases in which it may be optimal for you to knowingly be a hypocrite. If you have two sets of competing moral beliefs, and you don’t know which is true but you know that as a whole they are inconsistent, your best option is to apply each set of beliefs in the domain for which you are most confident that it is correct, while searching for more information that might allow you to correct your beliefs and reconcile the inconsistency. If you are self-aware about this, you will know that you are behaving in a hypocritical way—but you will still behave better than you would if you picked the wrong beliefs and stuck to them dogmatically. In fact, given a reasonable level of risk aversion, you’ll be better off being a hypocrite than you would by picking one set of beliefs arbitrarily (say, at the flip of a coin). At least then you avoid the worst-case scenario of being the most wrong.

There is yet another factor to take into consideration. Sometimes following your own principles is hard.

Considerable ink has been spilled on the concept of akrasia, or “weakness of will”, in which we judge that A is better than B yet still find ourselves doing B. Philosophers continue to debate to this day whether this really happens. As a behavioral economist, I observe it routinely, perhaps even daily. In fact, I observe it in myself.

I think the philosophers’ mistake is to presume that there is one simple, well-defined “you” that makes all observations and judgments and takes actions. Our brains are much more complicated than that. There are many “you”s inside your brain, each with its own capacities, desires, and judgments. Yes, there is some important sense in which they are all somehow unified into a single consciousness—by a mechanism which still eludes our understanding. But it doesn’t take esoteric cognitive science to see that there are many minds inside you: Haven’t you ever felt an urge to do something you knew you shouldn’t do? Haven’t you ever succumbed to such an urge—drank the drink, eaten the dessert, bought the shoes, slept with the stranger—when it seemed so enticing but you knew it wasn’t really the right choice?

We even speak of being “of two minds” when we are ambivalent about something, and I think there is literal truth in this. The neural networks in your brain are forming coalitions, and arguing between them over which course of action you ought to take. Eventually one coalition will prevail, and your action will be taken; but afterward your reflective mind need not always agree that the coalition which won the vote was the one that deserved to.

The evolutionary reason for this is simple: We’re a kludge. We weren’t designed from the top down for optimal efficiency. We were the product of hundreds of millions of years of subtle tinkering, adding a bit here, removing a bit there, layering the mammalian, reflective cerebral cortex over the reptilian, emotional limbic system over the ancient, involuntary autonomic system. Combine this with the fact that we are built in pairs, with left and right halves of each kind of brain (and yes, they are independently functional when their connection is severed), and the wonder is that we ever agree with our own decisions.

Thus, there is a kind of hypocrisy that is not a moral indictment at all: You may genuinely and honestly agree that it is morally better to do something and still not be able to bring yourself to do it. You may know full well that it would be better to donate that money to malaria treatment rather than buy yourself that tub of ice cream—you may be on a diet and full well know that the ice cream won’t even benefit you in the long run—and still not be able to stop yourself from buying the ice cream.

Sometimes your feeling of hesitation at an altruistic act may be a useful insight; I certainly don’t think we should feel obliged to give all our income, or even all of our discretionary income, to high-impact charities. (For most people I encourage 5%. I personally try to aim for 10%. If all the middle-class and above in the First World gave even 1% we could definitely end world hunger.) But other times it may lead you astray, make you unable to resist the temptation of a delicious treat or a shiny new toy when even you know the world would be better off if you did otherwise.

Yet when following our own principles is so difficult, it’s not really much of a criticism to point out that someone has failed to do so, particularly when they themselves already recognize that they failed. The inconsistency between behavior and belief indicates that something is wrong, but it may not be any dishonesty or even anything wrong with their beliefs.

I wouldn’t go so far as to say you should stop ever calling out hypocrisy. Sometimes it is clearly useful to do so. But while hypocrisy is often the sign of a moral failing, it isn’t always—and even when it is, often as not the problem is the bad principles, not the behavior inconsistent with them.

The evolution of cuteness

Dec 20 JDN 2459204

I thought I’d go for something a little more light-hearted for this week’s post. It’s been a very difficult year for a lot of people, though with Biden winning the election and the recent FDA approval of a COVID vaccine for emergency use, the light at the end of the tunnel is now visible. I’ve also had some relatively good news in my job search; I now have a couple of job interviews lined up for tenure-track assistant professor positions.

So rather than the usual economic and political topics, I thought I would focus today on cuteness. First of all, this allows me the opportunity to present you with a bunch of photos of cute animals (free stock photos brought to you by pexels.com):

Beyond the joy I hope this brings you in a dark time, I have a genuine educational purpose here, which is to delve into the surprisingly deep evolutionary question: Why does cuteness exist?

Well, first of all, what is cuteness? We evaluate a person or animal (or robot, or alien) as cute based on certain characteristics like wide eyes, a large head, a posture or expression that evokes innocence. We feel positive feelings toward that which we identify as cute, and we want to help them rather than harm them. We often feel protective toward them.

It’s not too hard to provide an evolutionary rationale for why we would find our own offspring cute: We have good reasons to want to protect and support our own offspring, and given the substantial amounts of effort involved in doing so, it behooves us to have a strong motivation for committing to doing so.

But it’s less obvious why we would feel this way about so many other things that are not human. Dogs and cats have co-evolved along with us as they became domesticated, dogs starting about 40,000 years ago and cats starting around 8,000 years ago. So perhaps it’s not so surprising that we find them cute as well: Becoming domesticated is, in many ways, simply the process of maximizing your level of cuteness so that humans will continue to feed and protect you.

But why are non-domesticated animals also often quite cute? That red panda, penguin, owl, and hedgehog are not domesticated; this is what they look like in the wild. And yet I personally find the red panda to be probably the cutest among an already very cute collection.

Some animals we do not find cute, or at least most people don’t. Here’s a collection of “cute snakes” that I honestly am not getting much cuteness reaction from. These “cute snails” work a little better, but they’re assuredly not as cute as kittens or red pandas. But honestly these “cute spiders” are doing a remarkably good job of it, despite the general sense I have (and I think I share with most people) that spiders are not generally cute. And while tentacles are literally the stuff of Lovecraftian nightmares, this “adorable octopus” lives up to the moniker.

The standard theory is that animals that we find cute are simply those that most closely resemble our own babies, but I don’t really buy it. Naked mole rats have their moments, but they are certainly not as cute as puppies or kittens, despite clearly bearing a closer resemblance to the naked wrinkly blob that most human infants look like. Indeed, I think it’s quite striking that babies aren’t really that cute; yes, some are, but many are not, and even the cutest babies are rarely as cute as the average kitten or red panda.

It actually seems to me more that we have some idealized concept of what a cute creature should look like, and maybe it evolved to reflect some kind of “optimal baby” of perfect health and vigor—but most of our babies don’t quite manage to meet that standard. Perhaps the cuteness of penguins or red pandas is sheer coincidence; out of the millions of animal species out there, some of them were bound to send our cuteness-detectors into overdrive. Dogs and cats, then, started as such coincidence—and then through domestication they evolved to fit our cuteness standard better and better, because this was in fact the primary determinant of their survival. That’s how you can get the adorable abomination that is a pug:

Such a creature would never survive in the wild, but we created it because we liked it (or enough of us did, anyway).

There are actually important reasons why having such a strong cuteness response could be maladaptive—we’re apex predators, after all. If finding animals cute prevents us from killing and eating them, that’s an important source of nutrition we are passing up. So whatever evolutionary pressure molded our cuteness response, it must be strong enough to overcome that risk.

Indeed, perhaps the cuteness of cats and dogs goes beyond both coincidence and the co-opting of an impulse to protect our offspring. Perhaps it is something that co-evolved in us for the direct purpose of incentivizing us to care for cats and dogs. It has been long enough for that kind of effect—we evolved our ability to digest wheat and milk in roughly the same time period. Indeed, perhaps the very cuteness response that makes us hesitant to kill a rabbit ourselves actually made us better at hunting rabbits, by making us care for dogs who could do the hunting even better than we could. Perhaps the cuteness of a mouse is less relevant to how we relate to mice than the cuteness of the cat who will have that mouse for dinner.

This theory is much more speculative, and I admit I don’t have very clear evidence of it; but let me at least say this: A kitten wouldn’t get cuter by looking more like a human baby. The kitten already seems quite well optimized for us to see it as cute, and any deviation from that optimum is going to be downward, not upward. Any truly satisfying theory of cuteness needs to account for that.

I also think it’s worth noting that behavior is an important element of cuteness; while a kitten will pretty much look cute no matter what it’s doing, whether or not a snail or a bird looks cute often depends on the pose it is in.


There is an elegance and majesty to a lion or a tiger, but I wouldn’t call them cute; indeed, should you encounter either one in the wild, the correct response is to run for your life.

Cuteness is playful, innocent, or passive; aggressive and powerful postures rapidly undermine cuteness. A lion may look cute as it rubs against a tree—but not once it turns to you and roars.

The truth is, I’m not sure we fully grasp what is going on in our brains when we identify something as cute. But it does seem to brighten our days.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional psychotherapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as we certainly had, living in central Africa—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
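
For the quantitatively inclined, here is a minimal sketch of that dynamic in Python, using the standard one-locus selection model. The fitness values are made up purely for illustration (real sickle-cell genetics, let alone the genetics of mental illness, is far messier), but they show how an allele that is harmful in double dose can nonetheless settle at a stable frequency in the population:

    # Heterozygote advantage with illustrative (not real) relative fitnesses:
    # AA suffers from malaria, SS suffers from anemia, AS is fittest.
    w_AA, w_AS, w_SS = 0.8, 1.0, 0.2

    p = 0.01   # starting frequency of the S allele
    for generation in range(200):
        q = 1 - p
        # mean fitness under random mating (Hardy-Weinberg genotype frequencies)
        w_bar = q*q*w_AA + 2*p*q*w_AS + p*p*w_SS
        # standard selection update for the frequency of S
        p = (p*q*w_AS + p*p*w_SS) / w_bar

    print(round(p, 3))                                         # simulated equilibrium: 0.2
    print(round((w_AS - w_AA) / (2*w_AS - w_AA - w_SS), 3))    # analytic equilibrium: 0.2

The allele persists not because two copies are good, but because one copy is better than none.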

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

Moral luck: How it matters, and how it doesn’t

Feb 10 JDN 2458525

The concept of moral luck is now relatively familiar to most philosophers, but I imagine most other people haven’t heard it before. It sounds like a contradiction, which is probably why it drew so much attention.

The term “moral luck” seems to have originated in an essay by Thomas Nagel, but the intuition is much older, dating at least back to Greek philosophy (and really probably older than that; we just don’t have good records that far back).

The basic argument is this:

Most people would say that if you had no control over something, you can’t be held morally responsible for it. It was just luck.

But if you look closely, everything we do—including things we would conventionally regard as moral actions—depends heavily on things we don’t have control over.

Therefore, either we can be held responsible for things we have no control over, or we can’t be held responsible for anything at all!

Neither approach seems very satisfying; hence the conundrum.

For example, consider four drivers:

Anna is driving normally, and nothing of note happens.

Bob is driving recklessly, but nothing of note happens.

Carla is driving normally, but a child stumbles out into the street and she runs the child over.

Dan is driving recklessly, and a child stumbles out into the street and he runs the child over.

The presence or absence of a child in the street was not in the control of any of the four drivers. Yet I think most people would agree that Dan should be held more morally responsible than Bob, and Carla should be held more morally responsible than Anna. (Whether Bob should be held more morally responsible than Carla is not as clear.) Yet both Bob and Dan were driving recklessly, and both Anna and Carla were driving normally. The moral evaluation seems to depend upon the presence of the child, which was not under the drivers’ control.

Other philosophers have argued that the difference is an epistemic one: We know the moral character of someone who drove recklessly and ran over a child better than the moral character of someone who drove recklessly and didn’t run over a child. But do we, really?

Another response is simply to deny that we should treat Bob and Dan any differently, and say that reckless driving is reckless driving, and safe driving is safe driving. For this particular example, maybe that works. But it’s not hard to come up with better examples where that doesn’t work:

Ted is a psychopathic serial killer. He kidnaps, rapes, and murders people. Maybe he can control whether or not he rapes and murders someone. But the reason he rapes and murders someone is that he is a psychopath. And he can’t control that he is a psychopath. So how can we say that his actions are morally wrong?

Obviously, we want to say that his actions are morally wrong.

I have heard one alternative, which is to consider psychopaths as morally equivalent to viruses: Zero culpability, zero moral value, something morally neutral but dangerous that we should contain or eradicate as swiftly as possible. HIV isn’t evil; it’s just harmful. We should kill it not because it deserves to die, but because it will kill us if we don’t. On this theory, Ted doesn’t deserve to be executed; it’s just that we must execute him in order to protect ourselves from the danger he poses.

But this quickly becomes unsatisfactory as well:

Jonas is a medical researcher whose work has saved millions of lives. Maybe he can control the research he works on, but he only works on medical research because he was born with a high IQ and strong feelings of compassion. He can’t control that he was born with a high IQ and strong feelings of compassion. So how can we say his actions are morally right?

This is the line of reasoning that quickly leads to saying that all actions are outside our control, and therefore morally neutral; and then the whole concept of morality falls apart.

So we need to draw the line somewhere; there has to be a space of things that aren’t in our control, but nonetheless carry moral weight. That’s moral luck.

Philosophers have actually identified four types of moral luck, which turns out to be tremendously useful in drawing that line.

Resultant luck is luck that determines the consequences of your actions, how things “turn out”. Happening to run over the child because you couldn’t swerve fast enough is resultant luck.

Circumstantial luck is luck that determines the sorts of situations you are in, and what moral decisions you have to make. A child happening to stumble across the street is circumstantial luck.

Constitutive luck is luck that determines who you are, your own capabilities, virtues, intentions and so on. Having a high IQ and strong feelings of compassion is constitutive luck.

Causal luck is the inherent luck written into the fabric of the universe that determines all events according to the fundamental laws of physics. Causal luck is everything and everywhere; it is written into the universal wavefunction.

I have a very strong intuition that this list is ordered; going from top to bottom makes things “less luck” in a vital sense.

Resultant luck is pure luck, what we originally meant when we said the word “luck”. It’s the roll of the dice.

Circumstantial luck is still mostly luck, but maybe not entirely; there are some aspects of it that do seem to be under our control.

Constitutive luck is maybe luck, sort of, but not really. Yes, “You’re lucky to be so smart” makes sense, but “You’re lucky to not be a psychopath” already sounds pretty weird. We’re entering territory here where our ordinary notions of luck and responsibility really don’t seem to apply.

Causal luck is not luck at all. Causal luck is really the opposite of luck: Without a universe with fundamental laws of physics to maintain causal order, none of our actions would have any meaning at all. They wouldn’t even really be actions; they’d just be events. You can’t do something in a world of pure chaos; things only happen. And being made of physical particles doesn’t make you any less what you are; a table made of wood is still a table, and a rocket made of steel is still a rocket. Thou art physics.

And that, my dear reader, is the solution to the problem of moral luck. Forget “causal luck”, which isn’t luck at all. Then, draw a hard line at constitutive luck: regardless of how you became who you are, you are responsible for what you do.

You don’t need to have control over who you are (what would that even mean!?).

You merely need to have control over what you do.

This is how the word “control” is normally used, by the way; when we say that a manufacturing process is “under control” or a pilot “has control” of an airplane, we aren’t asserting some grand metaphysical claim of ultimate causation. We’re merely saying that the system is working as it’s supposed to; the outputs coming out are within the intended parameters. This is all we need for moral responsibility as well.

In some cases, maybe people’s brains really are so messed up that we can’t hold them morally responsible; they aren’t “under control”. Okay, we’re back to the virus argument then: Contain or eradicate. If a brain tumor makes you so dangerous that we can’t trust you around sharp objects, unless we can take out that tumor, we’ll need to lock you up somewhere where you can’t get any sharp objects. Sorry. Maybe you don’t deserve that in some ultimate sense, but it’s still obviously what we have to do. And this is obviously quite exceptional; most people are not suffering from brain tumors that radically alter their personalities—and even most psychopaths are otherwise neurologically normal.

Ironically, it’s probably my fellow social scientists who will scoff the most at this answer. “But so much of what we are is determined by our neurochemistry/cultural norms/social circumstances/political institutions/economic incentives!” Yes, that’s true. And if we want to change those things to make us and others better, I’m all for it. (Well, neurochemistry is a bit problematic, so let’s focus on the others first—but if you can make a pill that cures psychopathy, I would support mandatory administration of that pill to psychopaths in positions of power.)

When you make a moral choice, we have to hold you responsible for that choice.

Maybe Ted is psychopathic and sadistic because there was too much lead in his water as a child. That’s a good reason to stop putting lead in people’s water (like we didn’t already have plenty!); but it’s not a good reason to let Ted off the hook for all those rapes and murders.

Maybe Jonas is intelligent and compassionate because his parents were wealthy and well-educated. That’s a good reason to make sure people are financially secure and well-educated (again, did we need more?); but it’s not a good reason to deny Jonas his Nobel Prize for saving millions of lives.

Yes, “personal responsibility” has been used by conservatives as an excuse to not solve various social and economic problems (indeed, it has specifically been used to stop regulations on lead in water and public funding for education). But that’s not actually anything wrong with personal responsibility. We should hold those conservatives personally responsible for abusing the term in support of their destructive social and economic policies. No moral freedom is lost by preventing lead from turning children into psychopaths. No personal liberty is destroyed by ensuring that everyone has access to a good education.

In fact, there is evidence that telling people who are suffering from poverty or oppression that they should take personal responsibility for their choices benefits them. Self-perceived victimhood is linked to all sorts of destructive behaviors, even controlling for prior life circumstances. Feminist theorists have written about how taking responsibility even when you are oppressed can empower you to make your life better. Yes, obviously, we should be helping people when we can. But telling them that they are hopeless unless we come in to rescue them isn’t helping them.

This way of thinking may require a delicate balance at times, but it’s not inconsistent. You can both fight against lead pollution and support the criminal justice system. You can believe in both public education and the Nobel Prize. We should be working toward a world where people are constituted with more virtue for reasons beyond their control, and where people are held responsible for the actions they take that are under their control.

We can continue to talk about “moral luck” referring to constitutive luck, I suppose, but I think the term obscures more than it illuminates. The “luck” that made you a good or a bad person is very different from the “luck” that decides how things happen to turn out.

How personality makes cognitive science hard

August 13, JDN 2457614

Why is cognitive science so difficult? First of all, let’s acknowledge that it is difficult—that even those of us who understand it better than most are still quite baffled by it in quite fundamental ways. The Hard Problem still looms large over us all, and while I know that the Chinese Room Argument is wrong, I cannot precisely pin down why.

The recursive, reflexive character of cognitive science is part of the problem; can a thing understand itself without understanding understanding itself, understanding understanding understanding itself, and on in an infinite regress? But this recursiveness applies just as much to economics and sociology, and honestly to physics and biology as well. We are physical biological systems in an economic and social system, yet most people at least understand these sciences at the most basic level—which is simply not true of cognitive science.

One of the most basic facts of cognitive science (indeed I am fond of calling it The Basic Fact of Cognitive Science) is that we are our brains, that everything human consciousness does is done by and within the brain. Yet the majority of humans believe in souls (including the majority of Americans and even the majority of Brits), and just yesterday I saw a news anchor say “Based on a new study, that feeling may originate in your brain!” He seriously said “may”. “May”? Why, next you’ll tell me that when my arms lift things, maybe they do it with muscles! Other scientists are often annoyed by how many misconceptions the general public has about science, but this is roughly the equivalent of a news anchor saying, “Based on a new study, human bodies may be made of cells!” or “Based on a new study, diamonds may be made of carbon atoms!” The misunderstanding of many sciences is widespread, but the misunderstanding of cognitive science is fundamental.

So what makes cognitive science so much harder? I have come to realize that there is a deep feature of human personality that makes cognitive science inherently difficult in a way other sciences are not.

Decades of research have uncovered a number of consistent patterns in human personality, where people’s traits tend to lie along a continuum from one extreme to another, and usually cluster near either end. Most people are familiar with a few of these, such as introversion/extraversion and optimism/pessimism; but the one that turns out to be important here is empathizing/systematizing.

Empathizers view the world as composed of sentient beings, living agents with thoughts, feelings, and desires. They are good at understanding other people and providing social support. Poets are typically empathizers.

Systematizers view the world as composed of interacting parts, interlocking components that have complex inner workings which can be analyzed and understood. They are good at solving math problems and tinkering with machines. Engineers are typically systematizers.

Most people cluster near one end of the continuum or the other; they are either strong empathizers or strong systematizers. (If you’re curious, there’s an online test you can take to find out which you are.)

But a rare few of us, perhaps as few as 2% and no more than 10%, are both; we are empathizer-systematizers, strong on both traits (showing that it’s not really a continuum between two extremes after all, and only seemed to be because the two traits are negatively correlated). A comparable number are also low on both traits, which must quite frankly make the world a baffling place in general.

Empathizer-systematizers understand the world as it truly is: Composed of sentient beings that are made of interacting parts.

The very title of this blog shows I am among this group: “human” for the empathizer, “economics” for the systematizer!

We empathizer-systematizers can intuitively grasp that there is no contradiction in saying that a person is sad because he lost his job and he is sad because serotonin levels in his cingulate gyrus are low—because it was losing his job that triggered other thoughts and memories that lowered serotonin levels in his cingulate gyrus and thereby made him sad. No one fully understands the details of how low serotonin comes to feel like sadness—hence, the Hard Problem—but most people can’t even seem to grasp the connection at all. How can something as complex and beautiful as a human mind be made of… sparking gelatin?

Well, what would you prefer it to be made of? Silicon chips? We’re working on that. Something else? Magical fairy dust, perhaps? Pray tell, what material could the human mind be constructed from that wouldn’t bother you on a deep level?

No, what really seems to bother people is the very idea that a human mind can be constructed from material, that thoughts and feelings can be divisible into their constituent parts.

This leads people to adopt one of two extreme positions on cognitive science, both of which are quite absurd—frankly I’m not sure they are even coherent.

Pure empathizers often become dualists, saying that the mind cannot be divisible, cannot be made of material, but must be… something else, somehow, outside the material universe—whatever that means.

Pure systematizers instead often become eliminativists, acknowledging the functioning of the brain and then declaring proudly that the mind does not exist—that consciousness, emotion, and experience are all simply illusions that advanced science will one day dispense with—again, whatever that means.

I can at least imagine what a universe would be like if eliminativism were true and there were no such thing as consciousness—just a vast expanse of stars and rocks and dust, lifeless and empty. Of course, I know that I’m not in such a universe, because I am experiencing consciousness right now, and the illusion of consciousness is… consciousness. (You are not experiencing what you are experiencing right now, I say!) But I can at least visualize what such a universe would be like, and indeed it probably was our universe (or at least our solar system) up until about a billion years ago when the first sentient animals began to evolve.

Dualists, on the other hand, are speaking words, structured into grammatical sentences, but I’m not even sure they are forming coherent assertions. Sure, you can sort of imagine our souls being floating wisps of light and energy (à la the “ascended beings”, my least-favorite part of the Stargate series, which I otherwise love), but ultimately those have to be made of something, because nothing can be both fundamental and complex. Moreover, the fact that they interact with ordinary matter strongly suggests that they are made of ordinary matter (and to be fair to Stargate, at one point in the series Rodney, with his already-great intelligence vastly increased, declares confidently that ascended beings are indeed nothing more than “protons and electrons, protons and electrons”). Even if they were made of some different kind of matter like dark matter, they would need to obey a common system of physical laws, and ultimately we would come to think of them as matter. Otherwise, how do the two interact? If we are made of soul-stuff which is fundamentally different from other stuff, then how do we even know that other stuff exists? If we are not our bodies, then how do we experience pain when they are damaged and control them with our volition? The most coherent theory of dualism is probably Malebranche’s, which is quite literally “God did it”.

Epiphenomenalism, which says that thoughts are just sort of an extra thing that also happens but has no effect (an “epiphenomenon”) on the physical brain, is also quite popular for some reason. People don’t quite seem to understand that the Law of Conservation of Energy directly forbids an “epiphenomenon” in this sense, because anything that happens involves energy, and that energy (unlike, say, money) can’t be created out of nothing; it has to come from somewhere. Analogies are often used: The whistle of a train, the smoke of a flame. But the whistle of a train is a pressure wave that vibrates the train; the smoke from a flame is made of particulates that could be used to smother the flame. At best, there are some phenomena that don’t affect each other very much—but any causal interaction at all makes dualism break down.

How can highly intelligent, highly educated philosophers and scientists make such basic errors? I think it has to be personality. They have deep, built-in (quite likely genetic) intuitions about the structure of the universe, and they just can’t shake them.

And I confess, it’s very hard for me to figure out what to say in order to break those intuitions, because my deep intuitions are so different. Just as it seems obvious to them that the world cannot be this way, it seems obvious to me that it is. It’s a bit like living in a world where 45% of people can see red but not blue and insist the American Flag is red and white, another 45% of people can see blue but not red and insist the flag is blue and white, and I’m here in the 10% who can see all colors and I’m trying to explain that the flag is red, white, and blue.

The best I can come up with is to use analogies, and computers make for quite good analogies, not least because their functioning is modeled on our thinking.

Is this word processor program (LibreOffice Writer, as it turns out) really here, or is it merely an illusion? Clearly it’s really here, right? I’m using it. It’s doing things right now. Parts of it are sort of illusions—it looks like a blank page, but it’s actually an LCD screen lit up all the way; it looks like ink, but it’s actually where the LCD turns off. But there is clearly something here, an actual entity worth talking about which has properties that are usefully described without trying to reduce them to the constituent interactions of subatomic particles.

On the other hand, can it be reduced to the interactions of subatomic particles? Absolutely. A brief sketch is something like this: It’s a software program, running on an operating system, and these in turn are represented in the physical hardware as long binary sequences, stored by ever-so-slightly higher or lower voltages in particular hardware components, which in turn are due to electrons being moved from one valence to another. Those electrons move in precise accordance with the laws of quantum mechanics, I assure you; yet this in no way changes the fact that I’m typing a blog post on a word processor.

Indeed, it’s not even particularly useful to know that the electrons are obeying the laws of quantum mechanics, and quite literally no possible computer that could be constructed in our universe could ever be large enough to fully simulate all these quantum interactions within the amount of time since the dawn of the universe. If we are to understand it at all, it must be at a much higher level—and the “software program” level really seems to be the best one for most circumstances. The vast majority of problems I’m likely to encounter are either at the software level or the macro hardware level; it’s conceivable that a race condition could emerge in the processor cache or the voltage could suddenly spike or even that a cosmic ray could randomly ionize a single vital electron, but these scenarios are far less likely to affect my life than, say, accidentally deleting the wrong file or letting the battery run out of charge because I forgot to plug it in.

Likewise, when dealing with a relationship problem, or mediating a conflict between two friends, it’s rarely relevant that some particular neuron is firing in someone’s nucleus accumbens, or that one of my friends is very low on dopamine in his mesolimbic system today. It could be, particularly if some sort of mental or neurological illness is involved, but in most cases the real issues are better understood as higher-level phenomena—people being angry, or tired, or sad. These emotions are ultimately constructed of action potentials and neurotransmitters, but that doesn’t make them any less real, nor does it change the fact that it is at the emotional level that most human matters are best understood.

Perhaps part of the problem is that human emotions take on moral significance, which other higher-level entities generally do not? But they sort of do, really, in a more indirect way. It matters a great deal morally whether or not climate change is a real phenomenon caused by carbon emissions (it is). Ultimately this moral significance can be tied to human experiences, so everything rests upon human experiences being real; but they are real, in much the same way that rocks and trees and carbon emissions are real. No amount of neuroscience will ever change that, just as no amount of biological science would disprove the existence of trees.

Indeed, some of the world’s greatest moral problems could be better solved if people were better empathizer-systematizers, and thus more willing to do cost-benefit analysis.

What is the processing power of the human brain?

JDN 2457485

Futurists have been predicting that AI will “surpass humans” any day now for something like 50 years. Eventually they’ll be right, but it will be more or less purely by chance, since they’ve been making the same prediction longer than I’ve been alive. (Similarly, whenever someone projects the date at which immortality will be invented, it always seems to coincide with just slightly before the end of the author’s projected life expectancy.) Any technology that is “20 years away” will be so indefinitely.

There are a lot of reasons why this prediction keeps failing so miserably. One is an apparent failure to grasp the limitations of exponential growth. I actually think the most important is that a lot of AI fans don’t seem to understand how human cognition actually works—that it is primarily social cognition, where most of the processing has already been done and given to us as cached results, some of them derived centuries before we were born. We are smart enough to run a civilization with airplanes and the Internet not because any individual human is so much smarter than any other animal, but because all humans together are—and other animals haven’t quite figured out how to unite their cognition in the same way. We’re about 3 times smarter than any other animal as individuals—and several billion times smarter when we put our heads together.

A third reason is that even if you have sufficient computing power, that is surprisingly unimportant; what you really need are good heuristics to make use of your computing power efficiently. Any nontrivial problem is too complex to brute-force by any conceivable computer, so simply increasing computing power without improving your heuristics will get you nowhere. Conversely, if you have really good heuristics like the human brain does, you don’t even need all that much computing power. A chess grandmaster was once asked how many moves ahead he can see on the board, and he replied: “I only see one move ahead. The right one.” In cognitive science terms, people asked him how much computing power he was using, expecting him to say something far beyond normal human capacity, and he replied that he was using hardly any—it was all baked into the heuristics he had learned from years of training and practice.

Making an AI capable of human thought—a true artificial person—will require a level of computing power we can already reach (as long as we use huge supercomputers), but that is like having the right material. To really create the being we will need to embed the proper heuristics. We are trying to make David, and we have finally mined enough marble—now all we need is Michelangelo.

But another reason why so many futurists have failed in their projections is that they have wildly underestimated the computing power of the human brain. Reading 1980s cyberpunk is hilarious in hindsight; Neuromancer actually quite accurately projected the number of megabytes that would flow through the Internet at any given moment, but somehow thought that a few hundred megaflops would be enough to copy human consciousness. The processing power of the human brain is actually on the order of a few petaflops. So, you know, Gibson was only off by a factor of a few million.

We can now match petaflops—the world’s fastest supercomputer is actually about 30 petaflops. Of course, it cost hundreds of millions of dollars to build, and requires 24 megawatts to run and cool, which is about the output of a mid-sized solar power station. The human brain consumes only about 400 kcal per day, which is about 20 watts—roughly the consumption of a typical CFL lightbulb. Even if you count the rest of the human body as necessary to run the human brain (which I guess is sort of true), we’re still clocking in at about 100 watts—so even though supercomputers can now process at the same speed, our brains are hundreds of thousands to a million times as energy-efficient.
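
If you want to check that arithmetic, it only takes a few lines of Python (using the same rough, order-of-magnitude figures as above):

    # Rough energy-efficiency comparison: supercomputer vs. human brain.
    supercomputer_watts = 24e6                  # ~24 MW to run and cool
    brain_watts = 400 * 4184 / 86400            # 400 kcal/day converted to watts (~19 W)
    body_watts = 100                            # charging the whole body to the brain

    print(f"brain: ~{brain_watts:.0f} W")
    print(f"vs. brain alone: {supercomputer_watts / brain_watts:,.0f} times the power")
    print(f"vs. whole body:  {supercomputer_watts / body_watts:,.0f} times the power")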

How do I know it’s a few petaflops?

Earlier this year a study was published showing that a conservative lower bound for the total capacity of human memory is about 4 bits per synapse, where previously some scientists thought that each synapse might carry only 1 bit (I’ve always suspected it was more like 10 myself).

So then we need to figure out how many synapses we have… which turns out to be really difficult actually. They are in a constant state of flux, growing, shrinking, and moving all the time; and when we die they fade away almost immediately (reason #3 I’m skeptical of cryonics). We know that we have about 100 billion neurons, and each one can have anywhere between 100 and 15,000 synapses with other neurons. The average seems to be something like 5,000 (but highly skewed in a power-law distribution), so that’s about 500 trillion synapses. If each one is carrying 4 bits to be as conservative as possible, that’s a total storage capacity of about 2 quadrillion bits, which is about 0.2 petabytes.
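
Spelled out as a calculation (again, these are rough, order-of-magnitude figures):

    # Storage capacity estimate: neurons x synapses x bits per synapse.
    neurons = 100e9                # ~100 billion neurons
    synapses_per_neuron = 5000     # rough average (the real distribution is highly skewed)
    bits_per_synapse = 4           # conservative lower bound from the study cited above

    synapses = neurons * synapses_per_neuron        # ~5e14, i.e. 500 trillion
    total_bits = synapses * bits_per_synapse        # ~2e15, i.e. 2 quadrillion bits
    total_petabytes = total_bits / 8 / 1e15         # bits -> bytes -> petabytes

    print(f"{synapses:.0e} synapses, {total_bits:.0e} bits, {total_petabytes} PB")

That comes out to a quarter of a petabyte; call it 0.2 in round numbers.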

Of course, that’s assuming that our brains store information the same way as a computer—every bit flipped independently, each bit stored forever. Not even close. Human memory is constantly compressing and decompressing data, using a compression scheme that’s lossy enough that we not only forget things, we can systematically misremember and even be implanted with false memories. That may seem like a bad thing, and in a sense it is; but if the compression scheme is that lossy, it must be because it’s also that efficient—that our brains are compressing away the vast majority of the data to make room for more. Our best lossy compression algorithms for video are about 100:1; but the human brain is clearly much better than that. Our core data format for long-term memory appears to be narrative; more or less we store everything not as audio or video (that’s short-term memory, and quite literally so), but as stories.

How much compression can you get by storing things as narrative? Think about The Lord of the Rings. The extended edition of the films runs to 6 discs of movie (9 discs of other stuff), where a Blu-Ray disc can store about 50 GB. So that’s 300 GB. Compressed into narrative form, we have the books (which, if you’ve read them, are clearly not optimally compressed—no, we do not need five paragraphs about the trees, and I’m gonna say it, Tom Bombadil is totally superfluous and Peter Jackson was right to remove him), which run about 500,000 words altogether. If the average word is 10 letters (normally it’s less than that, but this is Tolkien we’re talking about), each word will take up about 10 bytes (because in ASCII, or UTF-8 for plain English text, a letter is a byte). So altogether the total content of the entire trilogy, compressed into narrative, can be stored in about 5 million bytes, that is, 5 MB. So the compression from HD video to narrative takes us all the way from 300 GB to 5 MB, which is a factor of 60,000. Sixty thousand. I believe that this is the proper order of magnitude for the compression capability of the human brain.
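
The arithmetic, for anyone who wants to check it (disc sizes and word counts are round numbers):

    # Film vs. narrative: The Lord of the Rings, in round numbers.
    video_bytes = 6 * 50e9          # 6 Blu-Ray discs at ~50 GB each
    text_bytes = 500_000 * 10       # ~500,000 words at ~10 bytes per word

    print(f"{video_bytes/1e9:.0f} GB of film vs. {text_bytes/1e6:.0f} MB of text")
    print(f"compression factor: {video_bytes / text_bytes:,.0f}")   # 60,000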

Even more interesting is the fact that the human brain is almost certainly in some sense holographic storage; damage to a small part of your brain does not produce highly selective memory loss as if you had some bad sectors of your hard drive, but rather an overall degradation of your total memory processing as if you in some sense stored everything everywhere—that is, holographically. How exactly this is accomplished by the brain is still very much an open question; it’s probably not literally a hologram in the quantum sense, but it definitely seems to function like a hologram. (Although… if the human brain is a quantum computer that would explain an awful lot—it especially helps with the binding problem. The problem is explaining how a biological system at 37 C can possibly maintain the necessary quantum coherences.) The data storage capacity of holograms is substantially larger than what can be achieved by conventional means—and furthermore has similar properties to human memory in that you can more or less always add more, but then what you had before gradually gets degraded. Since neural nets are much closer to the actual mechanics of the brain as we know them, understanding human memory will probably involve finding ways to simulate holographic storage with neural nets.

With these facts in mind, the amount of information we can usefully take in and store is probably not 0.2 petabytes—it’s probably more like 10 exabytes. The human brain can probably hold just about as much as the NSA’s National Cybersecurity Initiative Data Center in Utah, which is itself more or less designed to contain the Internet. (The NSA is at once awesome and terrifying.)

But okay, maybe that’s not fair if we’re comparing human brains to computers; even if you can compress all your data by a factor of 100,000, that isn’t the same thing as having 100,000 times as much storage.

So let’s use that smaller figure, 0.2 petabytes. That’s how much we can store; how much can we process?

The next thing to understand is that our processing architecture is fundamentally different from that of computers.

Computers generally have far more storage than they have processing power, because they are bottlenecked through a CPU that can only process 1 thing at once (okay, like 8 things at once with a hyperthreaded quad-core; as you’ll see in a moment this is a trivial difference). So it’s typical for a new computer these days to have processing power in gigaflops (It’s usually reported in gigahertz, but that’s kind of silly; hertz just tells you clock cycles, while what you really want to know is calculations—and that you get from flops. They’re generally pretty comparable numbers though.), while they have storage in terabytes—meaning that it would take about 1000 seconds (about 17 minutes) for the computer to process everything in its entire storage once. In fact it would take a good deal longer than that, because there are further bottlenecks in terms of memory access, especially from hard-disk drives (RAM and solid-state drives are faster than hard disks, but even then you’re looking at a couple of hours).
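
Here’s that estimate spelled out; treating one byte as one operation is the simplification the figure above already relies on, and real machines vary quite a bit:

    # How long would a desktop take to process everything in its storage once?
    storage_bytes = 1e12        # ~1 TB of storage
    ops_per_second = 1e9        # ~1 gigaflop of effective throughput

    seconds = storage_bytes / ops_per_second    # one byte per operation, as above
    print(f"{seconds:.0f} s, i.e. about {seconds/60:.0f} minutes")   # ~1000 s, ~17 min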

The human brain, by contrast, integrates processing and memory into the same system. There is no clear distinction between “memory synapses” and “processing synapses”, and no single CPU bottleneck that everything has to go through. There is however something like a “clock cycle” as it turns out; synaptic firings are synchronized across several different “rhythms”, the fastest of which is about 30 Hz. No, not 30 GHz, not 30 MHz, not even 30 kHz; 30 hertz. Compared to the blazing speed of billions of cycles per second that goes on in our computers, the 30 cycles per second our brains are capable of may seem bafflingly slow. (Even more bafflingly slow is the speed of nerve conduction, which is not limited by the speed of light as you might expect, but is actually less than the speed of sound. When you trigger the knee-jerk reflex doctors often test, it takes about a tenth of a second for the reflex to happen—not because your body is waiting for anything, but because it simply takes that long for the signal to travel to your spinal cord and back.)

The reason we can function at all is because of our much more efficient architecture; instead of passing everything through a single bottleneck, we do all of our processing in parallel. All of those 100 billion neurons with 500 trillion synapses storing 2 quadrillion bits work simultaneously. So whereas a computer does 8 things at a time, 3 billion times per second, a human brain does 2 quadrillion things at a time, 30 times per second. Provided that the tasks can be fully parallelized (vision, yes; arithmetic, no), a human brain can therefore process 60 quadrillion bits per second—which, if we treat a calculation as handling on the order of 10 bits, works out to about 6 petaflops, somewhere around 6,000,000,000,000,000 calculations per second.
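
Spelling out that estimate (the 10-bits-per-calculation conversion is the rough assumption here, not a measured quantity):

    # Parallel-processing estimate for the brain.
    bits = 2e15                  # ~2 quadrillion bits across all synapses
    hz = 30                      # fastest synchronized rhythm, ~30 Hz

    bit_ops_per_second = bits * hz          # 6e16, i.e. 60 quadrillion per second
    bits_per_calculation = 10               # assumed conversion to "calculations"
    flops = bit_ops_per_second / bits_per_calculation

    print(f"{bit_ops_per_second:.0e} bit-ops/s, about {flops/1e15:.0f} petaflops")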

So, like I said, a few petaflops.