Freedom and volition

Oct 13 JDN 2460597

Introduction

What freedom do we have to choose some actions over others, and how are we responsible for what we do? Without some kind of freedom and responsibility, morality becomes meaningless—what does it matter what we ought to do if what we will do is completely inevitable? Morality becomes a trivial exercise, trying to imagine fanciful worlds in which things were not only other than they are, but other than they ever could be.

Many people think that science and morality are incompatible precisely because science requires determinism—the causal unity of the universe, wherein all effects have causes and all systems obey conservation laws. This seems to limit our capacity for freedom, since all our actions are determined by physical causes, and could (in principle) be predicted far in advance from the state of the universe around us. In fact, quantum mechanics isn’t necessarily deterministic (though in my preferred version, the Bohm interpretation, it is), but a small amount of randomness at the level of atoms and molecules doesn’t seem to add much in the way of human freedom.

The fear is that determinism undermines human agency; if we are part of a closed causal system, how can we be free to make our own choices? In fact, this is a mistake. Determinism isn’t the right question to be asking at all. There are really four possibilities to consider:

  • Acausalism: Actions are uncaused and unfree; everything is ultimately random and meaningless.
  • Libertarianism: Actions are uncaused and free; we are the masters of our own destiny, independent of the laws of nature.
  • Fatalism: Actions are caused and inevitable; the universe is a clockwork machine of which we are components.
  • Compatibilism: Actions are caused but free; we are rational participants in the universe’s causal mechanism.

Acausalism

Hardly anyone holds to acausalism, but it is a logically coherent position. Perhaps the universe is ultimately random, meaningless—our actions are done neither by the laws of nature nor by our own wills, but simply by the random flutterings of molecular motion. In such a universe, we are not ultimately responsible for our actions, but nor can we stop ourselves from pretending that we are, for everything we think, say, and do is determined only by the roll of the dice. This is a hopeless, terrifying approach to reality, and it would drive one to suicide but for the fact that if it is true, suicide, just like everything else, must ultimately be decided by chance.

Libertarianism

Most people, if asked—including evolutionary biologists—seem to believe something like libertarianism. (This is metaphysical libertarianism, the claim that free will is real and intrinsically uncaused; it is not to be confused with political Libertarianism.) As human beings we have an intuitive sense that we are not like the rest of the universe. Leaves fall, but people climb; everything decays, but we construct. If this is right, then morality is unproblematic: Moral rules apply to agents with this sort of deep free will, and not to other things.

But libertarian free will runs into serious metaphysical problems. If I am infected by a virus, do I choose to become sick? If I am left without food, do I choose to starve? If I am hit by a car, do I choose to be injured? Anyone can see that this is not the case: No one chooses these things—they happen, as a result of the laws of nature—physics, chemistry, biology.

Yet, so much of our lives is determined by these kinds of events: How can Stephen Hawking be said to have chosen life as a physicist and not a basketball player when he spent his whole adult life crippled by amyotrophic lateral sclerosis? He could not possibly have been a professional basketball player, no matter how badly he might have desired to be. Perhaps he could have been an artist or a philosopher—but still, his options were severely limited by his biology.

Indeed, it is worse than this, for we do not choose our parents, our culture, our genes; yet all of these things strongly influence who we are. I have myopia and migraines not because I wanted them, nor because I did anything to cause them, but because I was born this way—and while myopia isn’t a serious problem with eyeglasses, migraines have adversely affected my life in many ways; treatment has helped me enormously, but a full cure remains elusive. Culture influences us even more: It is entirely beyond my control that I speak English and live in an upper-middle-class American family; though I’m fairly happy with this result, I was never given a choice in the matter. All of these things have influenced what schools I’ve attended, what friends I’ve made, even what ideas I have considered. My brain itself is a physical system bound to the determinism of the universe. In what sense, then, can anything I do be considered free?

Fatalism

This reasoning leads quickly to fatalism, the notion that because everything we do is controlled by laws of nature, nothing we do is free, and we cannot rightly be held responsible for any of our actions. If this is true, then we still can’t stop ourselves from acting the way we do. People who murder will murder, people who punish murderers will punish murderers—it’s all inevitable. There may be slightly more hope in fatalism than in acausalism, since it suggests that everything we do is done in some sense for a purpose, if not any purpose we would recognize or understand. Still, the thought that death and suffering, larceny and rape, starvation and genocide, are in all instances inevitable—this is the sort of idea that will keep a thoughtful person awake at night.

By way of reconciling determinism with libertarian free will, some thinkers (such as Michael Shermer) have suggested that free will is a “useful fiction”.

But the very concept of anything being useful presupposes at least a minimal degree of free will—the ability to choose actions based upon their usefulness. A fiction can only be useful if beliefs affect actions; so if there even is such a thing as a “useful fiction” (I’m quite dubious of the notion), free will is certainly not an example. The best one could say under fatalism would be something like “some people happen to believe in free will and can’t change that”; but that doesn’t make free will true, it just makes many people incorrigibly wrong.

Yet the inference to fatalism is not, itself, inevitable; it doesn’t follow from the fact that much or even most of what we do is beyond our control that all we do is beyond our control. Indeed, it makes intuitive sense to say that we are in control of certain things—what we eat, what we say, how we move our bodies. We feel at least that we are in control of these things, and we can operate quite effectively on this presumption.

On the other hand, different levels of analysis yield different results. At the level of the brain, at the level of biochemistry, and especially at the level of quantum physics, there is little difference between what we choose to do and what merely happens to us. In a powerful enough microscope, being hit by a car and punching someone in the face look the same: It’s all protons and electrons interacting by exchanging photons.

Compatibilism

But free will is not inherently opposed to causality. In order to exercise free will, we must act not from chance, but from character; someone whose actions are random is not choosing freely, and conversely someone can freely choose to be completely predictable. It can be rational to choose some degree of randomness, but it cannot be rational to choose total randomness. As John Baer convincingly argues, at least some degree of causal determinacy is necessary for free will—hence, libertarianism is not viable, and a lack of determinism would lead only to acausalism. In the face of this knowledge, compatibilism is the obvious choice.

One thing that humans do that only a few other things do—some animals, perhaps computers if we’re generous—is reason; we consider alternatives and select the one we consider best. When water flows down a hill, it never imagines doing otherwise. When asteroids collide, they don’t consider other options. Yet we humans behave quite differently; we consider possibilities, reflect on our desires, seek to choose the best option. This process we call volition, and it is central to our experience of choice and freedom.

Another thing we do that other things don’t—except animals again, but definitely not computers this time—is feel emotion; we love and hurt, feel joy and sorrow. It is our emotions that motivate our actions, give them purpose. Water flowing downhill not only doesn’t choose to do so, it doesn’t care whether it does so. Sometimes things happen to us that we do not choose, but we always care.

This is what I mean when I say “free will”: experiences, beliefs, and actions are part of the same causal system. What we are affects what we think, what we think affects what we do. What we do affects what we are, and the system feeds back into itself. From this realization I can make sense of claims that people are good and bad, that acts are right and wrong; and without it I don’t think we could make sense of anything at all.

It’s not that we have some magical soul that lives outside our bodies; we are our bodies. Our brains are our souls. (I call this the Basic Fact of Cognitive Science: We are our brains.) Nor is it that neuron firings somehow “make” our thoughts and feelings as some kind of extra bonus; the patterns of neuron firings and the information that they process are our thoughts and feelings. Free will isn’t some mystical dualism; it is a direct consequence of the fact that we have capacities for conscious volition. Yes, our actions can be ultimately explained by the patterns in our brains. Of course they can! The patterns in our brains comprise our personalities, our beliefs, our memories, our desires.

Yes, the software of human consciousness is implemented on the hardware of the human brain. Why should we have expected something different? Whatever stuff makes consciousness, it is still stuff, and it obeys the laws that stuff obeys. We can imagine that we might be made of invisible fairy dust, but if that were so, then invisible fairy dust would need to be a real phenomenon and hence obey physical laws like the conservation of energy. Cognition is not opposed to physics; it is a subset of physics. Just as a computer obeys Turing’s laws if you program it but also Newton’s laws if you throw it, so humans are both mental and physical beings.

In fact, the intuitive psychology of free will is among the most powerfully and precisely predictive scientific theories ever devised, right alongside Darwinian evolution and quantum physics.

Consider the following experiment, conducted about twenty years ago. In November of 2006, I planned a road trip with several of my friends from our home in Ann Arbor to the Secular Student Alliance conference in Boston, coming up in April 2007. Months in advance, we researched hotels, we registered for the conference, we planned out how much we would need to spend. When the time came, we gathered in my car and drove the 1300 kilometers to the conference.

Now, stop and think for a moment: How did I know, in November 2006, that in April 2007, on a particular date and time, E.O. Wilson would be in a particular room and so would I? Because that’s what the schedule said. Consider for a moment these two extremely complicated extended bodies in space, each interacting with thousands of other such bodies continuously; no physicist could possibly have gathered enough data to predict six months in advance that the two bodies would each travel hundreds of kilometers over the Earth’s surface in order to meet within 10 meters of one another, remain there for roughly an hour, and then split apart and henceforth remain hundreds of kilometers apart. Yet our simple intuitive psychology could, and did, make just that prediction correctly. Of course, in the face of incomplete data, no theory is perfect, and the prediction could have been wrong. Indeed, because Boston is exceedingly difficult to navigate (we got lost), the prediction that Steven Pinker and I would be in the same room at the same time the previous evening turned out not to be accurate. But even this is something that intuitive psychology could have taken into account better than any other scientific theory we have. Neither quantum physics nor stoichiometric chemistry nor evolutionary biology could have predicted that we’d get lost, nor could they have recommended that if we ever return to Boston we should bring a smartphone with a GPS uplink; yet intuitive psychology can.

Moreover, intuitive psychology explicitly depends upon rational volition. If you had thought that I didn’t want to go to the conference, or that I was mistaken about the conference’s location, then you would have predicted that I would not occupy that spatial location at that time; and had these indeed been the case, that prediction would have been completely accurate. And yet, these predictions insist upon such entities as desires (wanting to go) and beliefs (being mistaken) that eliminativists, behaviorists, and epiphenomenalists have been insisting for years are pseudoscientific. Quite the opposite is the case: Eliminativism, behaviorism, and epiphenomenalism are pseudosciences.

Understanding the constituent parts of a process does not make the process an illusion. Rain did not stop falling when we developed mathematical models of meteorology. Fire did not stop being hot when we formalized statistical mechanics. Thunder did not stop being loud when we explained the wave properties of sound. Advances in computer technology have now helped us realize how real information processing can occur in systems made of physical parts that obey physical laws; it isn’t too great a stretch to think that human minds operate on similar principles. Just as the pattern of electrical firings in my computer really is Windows, the pattern of electrochemical firings in my brain really is my consciousness.

There is a kind of naive theology called “God of the gaps”; it rests upon the notion that whenever a phenomenon cannot be explained by science, this leaves room for God as an explanation. This theology is widely rejected by philosophers, because it implies that whenever science advances, religion must retreat. Libertarianism and fatalism rest upon the presumption of something quite similar, what I would call “free will of the gaps”. As cognitive science advances, we will discover more and more about the causation of human mental states; if this is enough to make us doubt free will, then “free will” was just another name for ignorance of cognitive science. I defend a much deeper sense of free will than this, one that is not at all threatened by scientific advancement.

Yes, our actions are caused—caused by what we think about the world! We are responsible for what we do not because it lacks causation, but because it has causation, specifically causation in our own beliefs, desires, and intentions. These beliefs, desires, and intentions are themselves implemented upon physical hardware, and we don’t fully understand how this implementation operates; but nonetheless the hardware is real and the phenomena are real, at least as real as such things as rocks, rivers, clouds, trees, dogs, and televisions, all of which are also complex functional ensembles of many smaller, simpler parts.

Conclusion

Libertarianism is largely discredited; we don’t have the mystical sort of free will that allows us to act outside of causal laws. But this doesn’t mean that we must accept fatalism; compatibilism is the answer. We have discovered many surprising things about cognitive science, and we will surely need to discover many more; but the fundamental truth of rational volition remains untarnished.

We know, to a high degree of certainty, that human beings are capable of volitional action. I contend that this is all the freedom we need—perhaps even all we could ever have. When a comet collides with Jupiter, and we ask “Why?”, the only sensible answer involves happenstance and laws of physics. When a leaf falls from a tree, and we ask “Why?”, we can do better, talking about evolutionary adaptations in the phylogenetic history of trees. But when a human being robs a bank, starts a war, feeds a child, or writes a book, and we ask “Why?”, we can move away from simple causes and talk about reasons—desires, intentions, beliefs; reasons, unlike mere causes, can make more or less sense, be more or less justified.

Psychological and neurological experiments have shown that volition is more complicated than we usually think—it can be strongly affected by situational factors, and it has more to do with inhibiting and selecting actions than with generating them, what Sukhvinder Obhi and Patrick Haggard call “not free will but free won’t”; yet still we have volitional control over many of our actions, and hence responsibility for them.

In simple tasks, there is brain activity that predicts our behavior several seconds before we consciously experience the decision—but this is hardly surprising, since the brain needs processing power to generate a decision in the first place. Deliberation requires processing, not all of which can be conscious. It’s a little surprising that the activity can predict the decision in advance of the conscious experience of volition, but it can’t predict the decision perfectly, even in very simple tasks. (And in true real-life tasks, like choosing a college or a spouse, it basically can’t predict at all.) This shows that conscious volition is doing something—perhaps inhibiting undesired behaviors or selecting desired ones. No compatibilist needs to be committed to the claim that subconscious urges have nothing to do with our decisions—since at least Freud that kind of free will has been clearly discredited.

Indeed, evolutionary psychology would be hard-pressed to explain an illusion of free will that isn’t free will. It simply doesn’t make sense for conscious volition to evolve unless it does something that affects our behavior in some way. Illusions are a waste of brain matter, which in turn is a waste of metabolic energy. (The idea that we would want to have free will in order to feel like life is worth living is profoundly silly: If our beliefs didn’t affect our behavior, our survival would be unrelated to whether we thought life was worth living!) You can make excuses and say that conscious experience is just an epiphenomenon upon neurological processes—an effect but not a cause—but there is no such thing as an “epiphenomenon” in physics as we know it. The smoke of a flame can smother that flame; the sound of a train is a sonic pressure wave that shakes the metal of the track. Anything that moves has energy, and energy is conserved. Epiphenomenalism would require new laws of physics, by which consciousness can be created ex nihilo, a new entity that requires no energy to make and “just happens” whenever certain matter is arranged in the right way.

Windows is not an “epiphenomenon” upon the electrons running through my computer’s processor core; the functional arrangement of those electrons is Windows—it implements Windows. I don’t see why we can’t say the same thing about my consciousness—that it is a software implementation by the computational hardware of my brain. Epiphenomenalists will often insist that they are being tough-minded scientists accepting the difficult facts while the rest of us are being silly and mystical; but they are talking about mysterious new physics and I’m talking about software-hardware interaction—so really, who is being mystical here?

In the future it may be possible to predict people’s behavior relatively accurately based on their brain activity—but so what? This only goes to show that the brain is the source of our decisions, which is precisely what compatibilism says. One can easily predict that rain will fall from clouds of a certain composition; but rain still falls from clouds. The fact that I can sometimes predict your behavior doesn’t make your behavior any less volitional; it only makes me a better psychologist (and for that matter a more functional human being). Moreover, detailed predictions of long-term behaviors will probably always remain impossible, due to the deep computational complexity involved. (If it were simple to predict who you’d marry, why would your brain expend so much effort working on the problem?)

For all these reasons, I say: Yes, we do have free will.

We ignorant, incompetent gods

May 21 JDN 2460086

A review of Homo Deus

The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.

E.O. Wilson

Homo Deus is a very good read—and despite its length, a quick one; I read it cover to cover in a week. Yuval Noah Harari’s central point is surely correct: Our technology is reaching a threshold where it grants us unprecedented power and forces us to ask what it means to be human.

Biotechnology and artificial intelligence are now advancing so rapidly that progress in other domains, such as aerospace and nuclear energy, seems positively mundane. Who cares about making flight or electricity a bit cleaner when we will soon have the power to modify ourselves, or be replaced by machines entirely?

Indeed, we already have technology that would have seemed to ancient people like the powers of gods. We can fly; we can witness or even control events thousands of miles away; we can destroy mountains; we can wipe out entire armies in an instant; we can even travel into outer space.

Harari rightly warns us that our not-so-distant descendants are likely to have powers that we would see as godlike: Immortality, superior intelligence, self-modification, the power to create life.

And while it is scary to think about what they might do with that power if they think the way we do—as ignorant and foolish and tribal as we are—Harari points out that it is equally scary to think about what they might do if they don’t think the way we do—for then, how do they think? If their minds are genetically modified or even artificially created, who will they be? What values will they have, if not ours? Could they be better? What if they’re worse?

It is of course difficult to imagine values better than our own—if we thought those values were better, we’d presumably adopt them. But we should seriously consider the possibility, since presumably most of us believe that our values today are better than what most people’s values were 1000 years ago. If moral progress continues, does it not follow that people’s values will be better still 1000 years from now? Or at least that they could be?

I also think Harari overestimates just how difficult it is to anticipate the future. This may be a useful overcorrection; the world is positively infested with people making overprecise predictions about the future, often selling them for exorbitant fees (note that Harari was quite well-compensated for this book as well!). But our values are not so fundamentally alien to those of our forebears, and we have reason to suspect that our descendants’ values will be no more alien to ours.

For instance, do you think that medieval people thought suffering and death were good? I assure you they did not. Nor did they believe that the supreme purpose in life is eating cheese. (They didn’t even believe the Earth was flat!) They did not have the concept of GDP, but they could surely appreciate the value of economic prosperity.

Indeed, our world today looks very much like a medieval peasant’s vision of paradise. Boundless food in endless variety. Near-perfect security against violence. Robust health, free from nearly all infectious disease. Freedom of movement. Representation in government! The land of milk and honey is here; there they are, milk and honey on the shelves at Walmart.

Of course, our paradise comes with caveats: Not least, we are by no means free of toil, but instead have invented whole new kinds of toil they could scarcely have imagined. If anything, I would have to guess that coding a robot or recording a video lecture probably isn’t substantially more satisfying than harvesting wheat or smithing a sword; and reconciling receivables and formatting spreadsheets is surely less. Our tasks are physically much easier, but mentally much harder, and it’s not obvious which of those is preferable. And we are so very stressed! It’s honestly bizarre just how stressed we are, given the abundance in which we live; there is no reason for our lives to have stakes so high, and yet somehow they do. It is perhaps this stress and economic precarity that prevents us from feeling such joy as the medieval peasants would have imagined for us.

Of course, we don’t agree with our ancestors on everything. The medieval peasants were surely more religious, more ignorant, more misogynistic, more xenophobic, and more racist than we are. But projecting that trend forward mostly means less ignorance, less misogyny, less racism in the future; it means that future generations should see the world catch up to what the best of us already believe and strive for—hardly something to fear. The values I hold are surely not the values we as a civilization act upon, and I sorely wish they were. Perhaps someday they will be.

I can even imagine something that I myself would recognize as better than me: Me, but less hypocritical. Strictly vegan rather than lacto-ovo-vegetarian, or at least more consistent about only buying free range organic animal products. More committed to ecological sustainability, more willing to sacrifice the conveniences of plastic and gasoline. Able to truly respect and appreciate all life, even humble insects. (Though perhaps still not mosquitoes; this is war. They kill more of us than any other animal, including us.) Not even casually or accidentally racist or sexist. More courageous, less burnt out and apathetic. I don’t always live up to my own ideals. Perhaps someday someone will.

Harari fears something much darker, that we will be forced to give up on humanist values and replace them with a new techno-religion he calls Dataism, in which the supreme value is efficient data processing. I see very little evidence of this. If it feels like data is worshipped these days, it is only because data is profitable. Amazon and Google constantly seek out ever richer datasets and ever faster processing because that is how they make money. The real subject of worship here is wealth, and that is nothing new. Maybe there are some die-hard techno-utopians out there who long for us all to join the unified oversoul of all optimized data processing, but I’ve never met one, and they are clearly not the majority. (Harari also uses the word ‘religion’ in an annoyingly overbroad sense; he refers to communism, liberalism, and fascism as ‘religions’. Ideologies, surely; but religions?)

Harari in fact seems to think that ideologies are strongly driven by economic structures, so maybe he would even agree that it’s about profit for now, but thinks it will become religion later. But I don’t really see history fitting this pattern all that well. If monotheism is directly tied to the formation of organized bureaucracy and national government, then how did Egypt and Rome last so long with polytheistic pantheons? If atheism is the natural outgrowth of industrialized capitalism, then why are Africa and South America taking so long to get the memo? I do think that economic circumstances can constrain culture and shift what sort of ideas become dominant, including religious ideas; but there clearly isn’t this one-to-one correspondence he imagines. Moreover, there was never Coalism or Oilism aside from the greedy acquisition of these commodities as part of a far more familiar ideology: capitalism.

He also claims that all of science is now, or is close to, following a united paradigm under which everything is a data processing algorithm, which suggests he has not met very many scientists. Our paradigms remain quite varied, thank you; and if they do all have certain features in common, it’s mainly things like rationality, naturalism and empiricism that are more or less inherent to science. It’s not even the case that all cognitive scientists believe in materialism (though it probably should be); there are still dualists out there.

Moreover, when it comes to values, most scientists believe in liberalism. This is especially true if we use Harari’s broad sense (on which mainline conservatives and libertarians are ‘liberal’ because they believe in liberty and human rights), but even in the narrow sense of center-left. We are by no means converging on a paradigm where human life has no value because it’s all just data processing; maybe some scientists believe that, but definitely not most of us. If scientists ran the world, I can’t promise everything would be better, but I can tell you that Bush and Trump would never have been elected and we’d have a much better climate policy in place by now.

I do share many of Harari’s fears of the rise of artificial intelligence. The world is clearly not ready for the massive economic disruption that AI is going to cause all too soon. We still define a person’s worth by their employment, and think of ourselves primarily as a collection of skills; but AI is going to make many of those skills obsolete, and may make many of us unemployable. It would behoove us to think in advance about who we truly are and what we truly want before that day comes. I used to think that creative intellectual professions would be relatively secure; ChatGPT and Midjourney changed my mind. Even writers and artists may not be safe much longer.

Harari is so good at sympathetically explaining other views that he takes it to a fault. At times it is actually difficult to know whether he himself believes something and wants you to, or if he is just steelmanning someone else’s worldview. There’s a whole section on ‘evolutionary humanism’ where he details a worldview that is at best Nietzschean and at worst Nazi, but he makes it sound so seductive. I don’t think it’s what he believes, in part because he has similarly good things to say about liberalism and socialism—but it’s honestly hard to tell.

The weakest part of the book is when Harari talks about free will. Like most people, he just doesn’t get compatibilism. He spends a whole chapter talking about how science ‘proves we have no free will’, and it’s just the same old tired arguments hard determinists have always made.

He talks about how we can make choices based on our desires, but we can’t choose our desires; well of course we can’t! What would that even mean? If you could choose your desires, what would you choose them based on, if not your desires? Your desire-desires? Well, then, can you choose your desire-desires? What about your desire-desire-desires?

What even is this ultimate uncaused freedom that libertarian free will is supposed to consist in? No one seems capable of even defining it. (I’d say Kant got the closest: He defined it as the capacity to act based upon what ought rather than what is. But of course what we believe about ‘ought’ is fundamentally stored in our brains as a particular state, a way things are—so in the end, it’s an ‘is’ we act on after all.)

Maybe before you lament that something doesn’t exist, you should at least be able to describe that thing as a coherent concept? Woe is me, that 2 plus 2 is not equal to 5!

It is true that as our technology advances, manipulating other people’s desires will become more and more feasible. Harari overstates the case on so-called robo-rats; they aren’t really mind-controlled, it’s more like they are rewarded and punished. The rat chooses to go left because she knows you’ll make her feel good if she does; she’s still freely choosing to go left. (Dangling a carrot in front of a horse is fundamentally the same thing—and frankly, paying a wage isn’t all that different.) The day may yet come where stronger forms of control become feasible, and woe betide us when it does. Yet this is no threat to the concept of free will; we already knew that coercion was possible, and mind control is simply a more precise form of coercion.

Harari reports on a lot of interesting findings in neuroscience, which are important for people to know about, but they do not actually show that free will is an illusion. What they do show is that free will is thornier than most people imagine. Our desires are not fully unified; we are often ‘of two minds’ in a surprisingly literal sense. We are often tempted by things we know are wrong. We often aren’t sure what we really want. Every individual is in fact quite divisible; we literally contain multitudes.

We do need a richer account of moral responsibility that can deal with the fact that human beings often feel multiple conflicting desires simultaneously, and often experience events differently than we later go on to remember them. But at the end of the day, human consciousness is mostly unified, our choices are mostly rational, and our basic account of moral responsibility is mostly valid.

I think for now we should perhaps be less worried about what may come in the distant future, what sort of godlike powers our descendants may have—and more worried about what we are doing with the godlike powers we already have. We have the power to feed the world; why aren’t we? We have the power to save millions from disease; why don’t we? I don’t see many people blindly following this ‘Dataism’, but I do see an awful lot blindly following a 19th-century vision of capitalism.

And perhaps if we straighten ourselves out, the future will be in better hands.

There is no problem of free will, just a lot of really confused people

Jan 15, JDN 2457769

I was hoping for some sort of news item to use as a segue, but none in particular emerged, so I decided to go on with it anyway. I haven’t done any cognitive science posts in a while, and this is one I’ve been meaning to write for a long time—it’s actually the sort of thing that even a remarkable number of cognitive scientists get wrong, perhaps because the structure of human personality makes cognitive science inherently difficult.

Do we have free will?

The question has been asked so many times by so many people it is now a whole topic in philosophy. The Stanford Encyclopedia of Philosophy has an entire article on free will. The Information Philosopher has a gateway page “The Problem of Free Will” linking to a variety of subpages. There are even YouTube videos about “the problem of free will”.

The constant arguing back and forth about this would be problematic enough, but what really grates on me are the many, many people who write “bold” articles and books about how “free will does not exist”. Examples include Sam Harris and Jerry Coyne, and such pieces have been published everywhere from Psychology Today to the Chronicle of Higher Education. There’s even a TED talk.

The worst ones are those that follow with “but you should believe in it anyway”. In The Atlantic we have “Free will does not exist. But we’re better off believing in it anyway.” Scientific American offers a similar view, “Scientists say free will probably doesn’t exist, but urge: ‘Don’t stop believing!’”

This is a mind-bogglingly stupid approach. First of all, if you want someone to believe in something, you don’t tell them it doesn’t exist. Second, if something doesn’t exist, that is generally considered a pretty compelling reason not to believe in it. You’d need a really compelling counter-argument, and frankly I’m not even sure the whole idea is logically coherent. How can I believe in something if I know it doesn’t exist? Am I supposed to delude myself somehow?

But the really sad part is that it’s totally unnecessary. There is no problem of free will. There are just an awful lot of really, really confused people. (Fortunately not everyone is confused; there are those, such as Daniel Dennett, who actually understand what’s going on.)

The most important confusion is over what you mean by the phrase “free will”. There are really two core meanings here, and the conflation of them is about 90% of the problem.

1. Moral responsibility: We have “free will” if and only if we are morally responsible for our actions.

2. Noncausality: We have “free will” if and only if our actions are not caused by the laws of nature.

Basically, every debate over “free will” boils down to someone pointing out that noncausality doesn’t exist, and then arguing that this means that moral responsibility doesn’t exist. Then someone comes back and says that moral responsibility does exist, and infers that this means noncausality must exist. Or someone points out that noncausality doesn’t exist, realizes how horrible it would be if moral responsibility didn’t exist, and then tells people they should go on believing in noncausality so that they don’t have to give up moral responsibility.

Let me be absolutely clear here: Noncausality could not possibly exist.

Noncausality isn’t even a coherent concept. Actions, insofar as they are actions, must, necessarily, by definition, be caused by the laws of nature.

I can sort of imagine an event not being caused; perhaps virtual electron-positron pairs can really pop into existence without ever being caused. (Even then I’m not entirely convinced; I think quantum mechanics might actually be deterministic at the most fundamental level.)

But an action isn’t just a particle popping into existence. It requires the coordinated behavior of some 10^26 or more particles, all in a precisely organized, unified way, structured so as to move some other similarly large quantity of particles through space in a precise way so as to change the universe from one state to another state according to some system of objectives. Typically, it involves human muscles acting on human beings or inanimate objects. (Recently, a rather large share of the time, it has meant human fingers on computer keyboards!) If what you do is an action—not a muscle spasm, not a seizure, not a slip or a trip, but something you did on purpose—then it must be caused. And if something is caused, it must be caused according to the laws of nature, because the laws of nature are the laws underlying all causality in the universe!

And once you realize that, the “problem of free will” should strike you as one of the stupidest “problems” ever proposed. Of course our actions are caused by the laws of nature! Why in the world would you think otherwise?

If you think that noncausality is necessary—or even useful—for free will, what kind of universe do you think you live in? What kind of universe could someone live in, that would fit your idea of what free will is supposed to be?

It’s like I said in that much earlier post about The Basic Fact of Cognitive Science (we are our brains): If you don’t think a mind can be made of matter, what do you think minds are made of? What sort of magical invisible fairy dust would satisfy you? If you can’t even imagine something that would satisfy the constraints you’ve imposed, did it maybe occur to you that your constraints are too strong?

Noncausality isn’t worth fretting over for the same reason that you shouldn’t fret over the fact that pi is irrational and you can’t make a square circle. There is no possible universe in which that isn’t true. So if it bothers you, it’s not that there’s something wrong with the universe—it’s clearly that there’s something wrong with you. Your thinking on the matter must be too confused, too dependent on unquestioned intuitions, if you think that murder can’t be wrong unless 2+2=5.

In philosophical jargon I am called a “compatibilist” because I maintain that free will and determinism are “compatible”. But this is much too weak a term. I much prefer Eliezer Yudkowsky’s “requiredism”, which he explains in one of the greatest blog posts of all time (seriously, read it immediately if you haven’t before—I’m okay with you cutting off my blog post here and reading his instead, because it truly is that brilliant), entitled simply “Thou Art Physics”. This quote sums it up briefly:

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Free will requires a certain minimum level of determinism in the universe, because the universe must be orderly enough that actions make sense and there isn’t simply an endless succession of random events. Call me a “requiredist” if you need to call me something. I’d prefer you just realize the whole debate is silly because moral responsibility exists and noncausality couldn’t possibly.

We could of course use different terms besides “free will”. “Moral responsibility” is certainly a good one, but it is missing one key piece, which is the issue of why we can assign moral responsibility to human beings and a few other entities (animals, perhaps robots) and not to the vast majority of entities (trees, rocks, planets, tables), and why we are sometimes willing to say that even a human being does not have moral responsibility (infancy, duress, impairment).

This is why my favored term is actually “rational volition”. The characteristic that human beings have (at least most of us, most of the time), that many animals and possibly some robots share (if not now, then soon enough), and that justifies our moral responsibility is precisely our capacity to reason. Things don’t just happen to us the way they do to some 99.999999999% of the universe; we do things. We experience the world through our senses, have goals we want to achieve, and act in ways that are planned to make the world move closer to achieving those goals. We have causes, sure enough; but not just any causes. We have a specific class of causes, which are related to our desires and intentions—we call these causes reasons.

So if you want to say that we don’t have “free will” because that implies some mysterious nonsensical noncausality, sure; that’s fine. But then don’t go telling us that this means we don’t have moral responsibility, or that we should somehow try to delude ourselves into believing otherwise in order to preserve moral responsibility. Just recognize that we do have rational volition.

How do I know we have rational volition? That’s the best part, really: Experiments. While you’re off in la-la land imagining fanciful universes where somehow causes aren’t really causes even though they are, I can point to not only centuries of human experience but decades of direct, controlled experiments in operant conditioning. Human beings and most other animals behave quite differently in behavioral experiments than, say, plants or coffee tables. Indeed, it is precisely because of this radical difference that it seems foolish to even speak of a “behavioral experiment” about coffee tables—because coffee tables don’t behave, they just are. Coffee tables don’t learn. They don’t decide. They don’t plan or consider or hope or seek.
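That contrast can even be sketched as a toy simulation. This is purely illustrative (the Agent and CoffeeTable classes are invented for the example, not drawn from any actual experiment), but it shows the basic asymmetry: a reward-learning system changes its behavior under conditioning, while an inert object does not.

```python
class Agent:
    """A minimal reward-learner: action values shift toward rewards received."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}

    def choose(self):
        # Pick the action with the highest learned value.
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, rate=0.5):
        # Nudge the stored value for this action toward the observed reward.
        self.values[action] += rate * (reward - self.values[action])


class CoffeeTable:
    """An inert object: nothing you do to it changes what it does next."""
    def learn(self, action, reward):
        pass  # Coffee tables don't condition.


rat = Agent(["left", "right"])
for _ in range(10):
    # Condition the rat: reward turning left, punish turning right.
    rat.learn("left", 1.0)
    rat.learn("right", -1.0)

print(rat.choose())  # after conditioning, the rat reliably goes left
```

The point is not the particular update rule (which is just exponential smoothing), but that a lawful, fully caused mechanism can nonetheless learn, choose, and pursue goals; the coffee table has causes too, but no such mechanism.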

Japanese, as it turns out, may be a uniquely good language for cognitive science, because it has two fundamentally different verbs for “to be” depending on whether an entity is sentient. Humans and animals imasu, while inanimate objects merely arimasu. We have free will because and insofar as we imasu.

Once you get past that most basic confusion of moral responsibility with noncausality, there are a few other confusions you might run into as well. Another one is two senses of “reductionism”, which Dennett refers to as “ordinary” and “greedy”:

1. Ordinary reductionism: All systems in the universe are ultimately made up of components that always and everywhere obey the laws of nature.

2. Greedy reductionism: All systems in the universe just are their components, and have no existence, structure, or meaning aside from those components.

I actually had trouble formulating greedy reductionism as a coherent statement, because it’s such a nonsensical notion. Does anyone really think that a pile of two-by-fours is the same thing as a house? But people do speak as though they think this about human brains, when they say that “love is just dopamine” or “happiness is just serotonin”. But dopamine in a petri dish isn’t love, any more than a pile of two-by-fours is a house; and what I really can’t quite grok is why anyone would think otherwise.

Maybe they’re simply too baffled by the fact that love is made of dopamine (among other things)? They can’t quite visualize how that would work (nor can I, nor, I think, can anyone in the world at this level of scientific knowledge). You can see how the two-by-fours get nailed together and assembled into the house, but you can’t see how dopamine and action potentials would somehow combine into love.

But isn’t that a reason to say that love isn’t the same thing as dopamine, rather than that it is? I can understand why some people are still dualists who think that consciousness is somehow separate from the functioning of the brain. That’s wrong—totally, utterly, ridiculously wrong—but I can at least appreciate the intuition that underlies it. What I can’t quite grasp is why someone would go so far the other way and say that the consciousness they are currently experiencing does not exist.

Another thing that might confuse people is the fact that minds, as far as we know, are platform independent—that is, your mind could most likely be created out of a variety of different materials, from the gelatinous brain it currently is to some sort of silicon supercomputer, to perhaps something even more exotic. This independence follows from the widely-believed Church-Turing thesis, which essentially says that all computation is computation, regardless of how it is done. This may not actually be right, but I see many reasons to think that it is, and if so, this means that minds aren’t really what they are made of at all—they could be made of lots of things. What makes a mind a mind is how it is structured and above all what it does.

If this is baffling to you, let me show you how platform-independence works on a much simpler concept: Tables. Tables are also in fact platform-independent. You can make a table out of wood, or steel, or plastic, or ice, or bone. You could take out literally every single atom of a table and replace it with a completely different atom of a completely different element—carbon for iron, for example—and still end up with a table. You could conceivably even do so without changing the table’s weight, strength, size, etc., though that would be considerably more difficult.
Does this mean that tables somehow exist “beyond” their constituent matter? In some very basic sense, I suppose so—they are, again, platform-independent. But not in any deep, mysterious sense. Start with a wooden table, take away all the wood, and you no longer have a table. Take apart the table and you have a bunch of wood, which you could use to build something else. There is no “essence” comprising the table. There is no “table soul” that would persist when the table is deconstructed.
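The same point about platform-independence can be made with computation itself. Here is a minimal sketch (the class and function names are invented for illustration): two stacks built from completely different “materials”, a contiguous list in one case and nested pairs in the other, yet any program that uses them gets identical behavior.

```python
class ListStack:
    """A stack 'made of' a contiguous Python list."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


class LinkedStack:
    """The same stack behavior, 'made of' nested (head, rest) pairs."""
    def __init__(self):
        self._head = None

    def push(self, x):
        self._head = (x, self._head)

    def pop(self):
        x, self._head = self._head
        return x


def reverse(word, stack):
    # What makes a stack a stack is what it does, not what it is made of.
    for ch in word:
        stack.push(ch)
    return "".join(stack.pop() for _ in word)


print(reverse("stressed", ListStack()))    # desserts
print(reverse("stressed", LinkedStack()))  # desserts
```

And just as with the table, neither stack has an “essence” beyond its arrangement: discard the list or the pairs and the stack is gone, with no stack-soul left over.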

And—now for the hard part—so it is with minds. Your mind is your brain. The constituent atoms of your brain are gradually being replaced, day by day, but your mind is the same, because it exists in the arrangement and behavior, not the atoms themselves. Yet there is nothing “extra” or “beyond” that makes up your mind. You have no “soul” that lies beyond your brain. If your brain is destroyed, your mind will also be destroyed. If your brain could be copied, your mind would also be copied. And one day it may even be possible to construct your mind in some other medium—some complex computer made of silicon and tantalum, most likely—and it would still be a mind, and in all its thoughts, feelings and behaviors your mind, if not numerically identical to you.

Thus, when we engage in rational volition—when we use our “free will” if you like that term—there is no special “extra” process beyond what’s going on in our brains, but there doesn’t have to be. Those particular configurations of action potentials and neurotransmitters are our thoughts, desires, plans, intentions, hopes, fears, goals, beliefs. These mental concepts are not in addition to the physical material; they are made of that physical material. Your soul is made of gelatin.

Again, this is not some deep mystery. There is no “paradox” here. We don’t actually know the details of how it works, but that makes this no different from a Homo erectus who doesn’t know how fire works. Maybe he thinks there needs to be some extra “fire soul” that makes it burn, but we know better; and in far fewer centuries than separate that Homo erectus from us, our descendants will know precisely how the brain creates the mind.

Until then, simply remember that any mystery here lies in us—in our ignorance—and not in the universe. And take heart that the kind of “free will” that matters—moral responsibility—has absolutely no need for the kind of “free will” that doesn’t exist—noncausality. They’re totally different things.