Drift-diffusion decision-making: The stock market in your brain

JDN 2456173 EDT 17:32.

Since I’ve been emphasizing the “economics” side of things a lot lately, I decided this week to focus more on the “cognitive” side. Today’s topic comes from cutting-edge research in cognitive science and neuroeconomics, so we still haven’t ironed out all the details.

The question we are trying to answer is an incredibly basic one: How do we make decisions? Given the vast space of possible behaviors human beings can engage in, how do we determine which ones we actually do?

There are actually two phases of decision-making.

The first phase is alternative generation, in which we come up with a set of choices. Some ideas occur to us, others do not; some are familiar and come to mind easily, others only appear after careful consideration. Techniques like brainstorming exist to help us with this task, but none of them are really very good; one of the most important bottlenecks in human cognition is the individual capacity to generate creative alternatives. The task is mind-bogglingly complex; the number of possible choices you could make at any given moment is already vast, and with each passing moment the number of possible behavioral sequences grows exponentially. Just think about all the possible sentences I could type right now, and then think about how incredibly narrow a space of behavioral options it is to assume that I’m typing sentences at all.

Most of the world’s innovation can ultimately be attributed to better alternative generation; particularly with regard to social systems, but in many cases even with regard to technologies, the capability existed for decades or even centuries but the idea simply never occurred to anyone. (You can see this by looking at the work of Heron of Alexandria and Leonardo da Vinci; the capacity to build these machines existed, and a handful of individuals were creative enough to actually try it, but it never occurred to anyone that there could be enormous, world-changing benefits to expanding these technologies for mass production.)

Unfortunately, we basically don’t understand alternative generation at all. It’s an almost complete gap in our understanding of human cognition. It actually has a lot to do with some of the central unsolved problems of cognitive science and artificial intelligence; if we could create a computer that is capable of creative thought, we would basically make human beings obsolete once and for all. (Oddly enough, physical labor is probably where human beings would still be necessary the longest; robots aren’t yet very good at climbing stairs or lifting irregularly-shaped objects, much less giving haircuts or painting on canvas.)

The second phase is what most “decision-making” research is actually about, and I’ll call it alternative selection. Once you have a list of two, three or four viable options—rarely more than this, as I’ll talk about more in a moment—how do you go about choosing the one you’ll actually do?

This is a topic that has undergone considerable research, and we’re beginning to make progress. The leading models right now are variants of drift-diffusion (hence the title of the post), and these models have the very appealing property that they are neurologically plausible, predictively accurate, and yet close to rationally optimal.

Drift-diffusion models basically are, as I said in the subtitle, a stock market in your brain. Picture the stereotype of the trading floor of the New York Stock Exchange, with hundreds of people bustling about, shouting “Buy!” “Sell!” “Buy!” with the price going up with every “Buy!” and down with every “Sell!”; in reality the NYSE isn’t much like that, and hasn’t been for decades, because everyone is staring at a screen and most of the trading is automated and occurs in microseconds. (It’s kind of like how if you draw a cartoon of a doctor, they will invariably be wearing a head mirror, but if you’ve actually been to a doctor lately, they don’t actually wear those anymore.)

Drift-diffusion, however, is like that. Let’s say we have a decision to make, “Yes” or “No”. Thousands of neurons devoted to that decision start firing, some saying “Yes”, exciting other “Yes” neurons and inhibiting “No” neurons, while others say “No”, exciting other “No” neurons and inhibiting “Yes” neurons. New information feeds in, tipping some toward “Yes” and others toward “No”. The resulting process behaves like a random walk with drift, where the strength of the drift is determined by whatever evidence you are feeding into the decision. The decision is made when a certain threshold is reached, say, 95% agreement among all neurons.

I wrote a little R program to demonstrate drift-diffusion models; the images I’ll be showing are R plots from that program. The graphs represent the aggregated “opinion” of all the deciding neurons; as you go from left to right, time passes, and the opinions “drift” toward one side or the other. For these graphs, the top of the graph represents the better choice.
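The R source isn’t reproduced here, but the heart of any such program is only a few lines. Here is a minimal sketch in Python (rather than R) of a single drift-diffusion trial; the noise level, step size, and parameter values are all invented for illustration:

```python
import random

def drift_diffusion_trial(drift, threshold, noise=1.0, dt=0.01,
                          max_steps=100_000, seed=None):
    """Accumulate noisy evidence until it crosses +threshold ("Yes"/top)
    or -threshold ("No"/bottom). Returns (choice, decision_time)."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # The drift term carries the evidence toward the better option;
        # the Gaussian term is the neurons "shouting" at random.
        evidence += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
        if abs(evidence) >= threshold:
            return (1 if evidence > 0 else -1), step * dt
    return 0, max_steps * dt  # never reached threshold (extremely unlikely)

choice, t = drift_diffusion_trial(drift=2.0, threshold=1.0, seed=1)
```

Running many trials of this and plotting the evidence path over time gives pictures like the ones below: each path drifts toward the better option on average, but any single path can wander.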

It may actually be easiest to understand if you imagine that we are choosing a belief; new evidence accumulates that pushes us toward the correct answer (top) or the incorrect answer (bottom), because even a true belief will have some evidence that seems to be against it. You encounter this evidence more or less randomly (or do you?), and which belief you ultimately form will depend upon both how strong the evidence is and how thoughtful you are in forming your beliefs.

If the evidence is very strong (or in general, the two choices are very different), the trend will be very strong, and you’ll almost certainly come to a decision very quickly:

[Plot: strong_bias]

If the evidence is weaker (the two choices are very similar), the trend will be much weaker, and it will take much longer to make a decision:

[Plot: weak_bias]

One way to make a decision faster would be to have a weaker threshold, like 75% agreement instead of 95%; but this has the downside that it can result in making the wrong choice. Notice how some of the paths go down to the bottom, which in this case is the worse choice:

[Plot: low_threshold]

But if there is actually no difference between the two options, a low threshold is good, because you don’t spend time waffling over a pointless decision. (I know that I’ve had a problem with that in real life, spending too long making a decision that ultimately is of minor importance; my drift thresholds are too high!) With a low threshold, you get it over with:

[Plot: indifferent]

With a high threshold, you can go on for ages:

[Plot: ambivalent]

This is the difference between being indifferent about a decision and being ambivalent about it. If you are indifferent, you are dealing with two small amounts of utility and it doesn’t really matter which one you choose. If you are ambivalent, you are dealing with two large amounts of utility and it’s very important to get it right—but you aren’t sure which one to choose. If you are indifferent, you should use a low threshold and get it over with; but if you are ambivalent, it actually makes sense to keep your threshold high and spend a lot of time thinking about the problem in order to be sure you get it right.
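This speed/accuracy tradeoff is easy to check by simulation. A Python sketch (all numbers invented for illustration; positive drift means the top option really is better, so the fraction of trials ending at the top boundary is the accuracy):

```python
import random
import statistics

def simulate(drift, threshold, n_trials=2000, noise=1.0, dt=0.01, seed=0):
    """Run many drift-diffusion trials; return (mean decision time,
    fraction of trials choosing the top/better option)."""
    rng = random.Random(seed)
    times, correct = [], 0
    for _ in range(n_trials):
        evidence, step = 0.0, 0
        while abs(evidence) < threshold:
            evidence += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
            step += 1
        times.append(step * dt)
        correct += evidence > 0  # crossed the top (better) boundary
    return statistics.mean(times), correct / n_trials

low_time, low_acc = simulate(drift=0.5, threshold=0.5)    # hasty decider
high_time, high_acc = simulate(drift=0.5, threshold=2.0)  # deliberate decider
```

The low threshold decides roughly ten times faster but makes far more errors; and with zero drift (true indifference), the low threshold loses nothing by deciding quickly, which is exactly the point above.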

It’s also possible to set a higher threshold for one option than the other; I think this is actually what we’re doing when we exhibit many cognitive biases like confirmation bias. If the decision you’re making is between keeping your current beliefs and changing them to something else, your diffusion space actually looks more like this:

[Plot: confirmation_bias]

You’ll only make the correct choice (top) if you set equal thresholds (meaning you reason fairly instead of exhibiting cognitive biases) and high thresholds (meaning you spend sufficient time thinking about the question). If I may change to a sports metaphor, people tend to move the goalposts—the team “change your mind” has to kick a lot further than the team “keep your current belief”.
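Moving the goalposts can be modeled by simply making the two thresholds unequal. A Python sketch (parameters made up; note that the drift is genuinely upward, i.e., the evidence favors changing your mind):

```python
import random

def top_fraction(drift, up_threshold, down_threshold,
                 n_trials=2000, noise=1.0, dt=0.01, seed=0):
    """Fraction of trials ending at the upper ("change your mind") boundary
    when the two boundaries may sit at different distances from the start."""
    rng = random.Random(seed)
    ups = 0
    for _ in range(n_trials):
        evidence = 0.0
        while -down_threshold < evidence < up_threshold:
            evidence += drift * dt + rng.gauss(0.0, noise * dt ** 0.5)
        ups += evidence >= up_threshold
    return ups / n_trials

fair = top_fraction(drift=0.5, up_threshold=2.0, down_threshold=2.0)
biased = top_fraction(drift=0.5, up_threshold=2.0, down_threshold=0.5)
```

With equal thresholds the upward drift wins the vast majority of the time; with the “keep your current belief” boundary moved close, most trials end there even though the evidence genuinely favors the other side.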

We can also extend drift-diffusion models to changing your mind (or experiencing regret such as “buyer’s remorse”) if we assume that the system doesn’t actually cut off once it reaches a threshold; the threshold makes us take the action, but then our neurons keep on arguing it out in the background. We may hover near the threshold or soar off into absolute certainty—but on the other hand we may waffle all the way back to the other decision:

[Plot: regret]

There are all sorts of generalizations and extensions of drift-diffusion models, but these basic ones should give you a sense of how useful they are. More importantly, they are accurate; drift-diffusion models produce very sharp mathematical predictions about human behavior, and in general these predictions are verified in experiments.

The main reason we started using drift-diffusion models is that they account very well for the fact that decisions become more accurate when we spend more time on them. The way they do that is quite elegant: Under harsher time pressure, we use lower thresholds, which speeds up the process but also introduces more errors. When we don’t have time pressure, we use high thresholds and take a long time, but almost always make the right decision.

Under certain (rather narrow) circumstances, drift-diffusion models can actually be equivalent to the optimal Bayesian model. These models can also be extended for use in purchasing choices, and one day we will hopefully have a stock-market-in-the-brain model of actual stock market decisions!

Drift-diffusion models are based on decisions between two alternatives with only one relevant attribute under consideration, but they are being expanded to decisions with multiple attributes and decisions with multiple alternatives; the fact that this is difficult is in my opinion not a bug but a feature—decisions with multiple alternatives and attributes are actually difficult for human beings to make. The fact that drift-diffusion models have difficulty with the very situations that human beings have difficulty with provides powerful evidence that drift-diffusion models are accurately representing the processes that go on inside a human brain. I’d be worried if it were too easy to extend the models to complex decisions—it would suggest that our model is describing a more flexible decision process than the one human beings actually use. Human decisions really do seem to be attempts to shoehorn two-choice single-attribute decision methods onto more complex problems, and a lot of mistakes we make are attributable to that.

In particular, the phenomena of analysis paralysis and the paradox of choice are easily explained this way. Why is it that when people are given more alternatives, they often spend far more time trying to decide and often end up less satisfied than they were before? This makes sense if, when faced with a large number of alternatives, we spend time trying to compare them pairwise on every attribute, and then get stuck with a whole bunch of incomparable pairwise comparisons that we then have to aggregate somehow. If we could simply assign a utility value to each attribute and sum them up, adding new alternatives should only increase the time required by a small amount and should never result in a reduction in final utility.

When I have an important decision to make, I actually assemble a formal utility model, as I did recently when deciding on a new computer to buy (it should be in the mail any day now!). The hardest part, however, is assigning values to the coefficients in the model; just how much am I willing to spend for an extra gigabyte of RAM, anyway? How exactly do those CPU benchmarks translate into dollar value for me? I can clearly tell that this is not the native process of my mental architecture.
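The kind of formal utility model I mean is nothing fancy: give each attribute a dollar weight, sum, and subtract the price. A Python sketch with entirely made-up weights and made-up laptops:

```python
# Hypothetical dollar values per unit of each attribute (invented for illustration).
WEIGHTS = {"ram_gb": 15.0, "cpu_benchmark": 0.05, "battery_hours": 25.0}

def net_utility(option):
    """Weighted sum of attributes minus price, in dollars."""
    return sum(WEIGHTS[k] * option[k] for k in WEIGHTS) - option["price"]

laptops = [
    {"name": "A", "ram_gb": 8,  "cpu_benchmark": 9000,  "battery_hours": 10, "price": 700},
    {"name": "B", "ram_gb": 16, "cpu_benchmark": 12000, "battery_hours": 7,  "price": 950},
]
best = max(laptops, key=net_utility)
```

The hard part, as I said, is choosing the numbers in WEIGHTS; the arithmetic afterward is trivial, and crucially it scales linearly as you add alternatives, rather than blowing up the way pairwise comparison does.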

No, alas, we seem to be stuck with drift-diffusion, which is nearly optimal for choices with two alternatives on a single attribute, but actually pretty awful for multiple-alternative multiple-attribute decisions. But perhaps by better understanding our suboptimal processes, we can rearrange our environment to bring us closer to optimal conditions—or perhaps, one day, change the processes themselves!

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but that don’t involve significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding. A lot of this is already being done, though, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid thousands of years before him continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”

Why the Republican candidates like flat income tax—and we really, really don’t

JDN 2456160 EDT 13:55.

The Republican Party is scrambling to find viable Presidential candidates for next year’s election. The Democrats only have two major contenders: Hillary Clinton looks like the front-runner (and will obviously have the most funding), but Bernie Sanders is doing surprisingly well, and is particularly refreshing because he is running purely on his principles and ideas. He has no significant connections, no family dynasty (unlike Jeb Bush and, again, Hillary Clinton) and not a huge amount of wealth (Bernie’s net wealth is about $500,000, making him comfortably upper-middle class; compare to Hillary’s $21.5 million and her husband’s $80 million); but he has ideas that resonate with people. Bernie Sanders is what politics is supposed to be. Clinton’s campaign will certainly raise more than his; but he has already raised over $4 million, and if he makes it to about $10 million studies suggest that additional spending above that point is largely negligible. He actually has a decent chance of winning, and if he did it would be a very good sign for the future of America.

But the Republican field is a good deal more contentious, and the 19 candidates currently running have been scrambling to prove that they are the most right-wing in order to impress far-right primary voters. (When the general election comes around, whoever wins will of course pivot back toward the center, changing from, say, outright fascism to something more like reactionism or neo-feudalism. If you were hoping they’d pivot so far back as to actually be sensible center-right capitalists, think again; Hillary Clinton is the only one who will take that role, and they’ll go out of their way to disagree with her in every way they possibly can, much as they’ve done with Obama.) One of the ways that Republicans are hoping to prove their right-wing credentials is by proposing a flat income tax and eliminating the IRS.

Unlike most of their proposals, I can see why many people think this actually sounds like a good idea. It would certainly dramatically reduce bureaucracy, and that’s obviously worthwhile since excess bureaucracy is pure deadweight loss. (A surprising number of economists seem to forget that government does other things besides create excess bureaucracy, but I must admit it does in fact create excess bureaucracy.)

Though if they actually made the flat tax rate 20% or even—I can’t believe this is seriously being proposed—10%, there is no way the federal government would have enough revenue. The only options would be (1) massive increases in national debt, (2) total collapse of government services—including their beloved military, mind you—or (3) directly linking the Federal Reserve quantitative easing program to fiscal policy and funding the deficit with printed money. Of these, 3 might not actually be that bad (it would probably trigger some inflation, but actually we could use that right now), but it’s extremely unlikely to happen, particularly under Republicans. In reality, after getting a taste of 2, we’d clearly end up with 1. And then they’d complain about the debt and clamor for more spending cuts, more spending cuts, ever more spending cuts, but there would simply be no way to run a functioning government on 10% of GDP in anything like our current system. Maybe you could do it on 20%—maybe—but we currently spend more like 35%, and that’s already a very low amount of spending for a First World country. The UK is more typical at 47%, while Germany is a bit low at 44%; Sweden spends 52% and France spends a whopping 57%. Anyone who suggests we cut government spending from 35% to 20% needs to explain which 3/7 of government services are going to immediately disappear—not to mention which 3/7 of government employees are going to be immediately laid off.

And then they want to add investment deductions; in general investment deductions are a good thing, as long as you tie them to actual investments in genuinely useful things like factories and computer servers. (Or better yet, schools, research labs, or maglev lines, but private companies almost never invest in that sort of thing, so the deduction wouldn’t apply.) The kernel of truth in the otherwise ridiculous argument that we should never tax capital is that taxing real investment would definitely be harmful in the long run. As I discussed with Miles Kimball (a cognitive economist at Michigan and fellow econ-blogger I hope to work with at some point), we could minimize the distortionary effects of corporate taxes by establishing a strong deduction for real investment, and this would allow us to redistribute some of this enormous wealth inequality without dramatically harming economic growth.

But if you deduct things that aren’t actually investments—like stock speculation and derivatives arbitrage—then you reduce your revenue dramatically and don’t actually incentivize genuinely useful investments. This is the problem with our current system, in which GE can pay no corporate income tax on $108 billion in annual profit—and you know they weren’t using all that for genuinely productive investment activities. But then, if you create a strong enforcement system for ensuring it is real investment, you need bureaucracy—which is exactly what the flat tax was claimed to remove. At the very least, the idea of eliminating the IRS remains ridiculous if you have any significant deductions.

Thus, the benefits of a flat income tax are minimal if not outright illusory; and the costs, oh, the costs are horrible. In order to have remotely reasonable amounts of revenue, you’d need to dramatically raise taxes on the majority of people, while significantly lowering them on the rich. You would create a direct transfer of wealth from the poor to the rich, increasing our already enormous income inequality and driving millions of people into poverty.

Thus, it would be difficult to more clearly demonstrate that you care only about the interests of the top 1% than to propose a flat income tax. I guess Mitt Romney’s 47% rant actually takes the cake on that one though (Yes, all those freeloading… soldiers… and children… and old people?).

Many Republicans are insisting that a flat tax would create a surge of economic growth, but that’s simply not how macroeconomics works. If you steeply raise taxes on the majority of people while cutting them on the rich, you’ll see consumer spending plummet and the entire economy will be driven into recession. Rich people simply don’t spend their money in the same way as the rest of us, and the functioning of the economy depends upon a continuous flow of spending. There is a standard neoclassical economic argument about how reducing spending and increasing saving would lead to increased investment and greater prosperity—but that model basically assumes that we have a fixed amount of stuff we’re either using up or making more stuff with, which is simply not how money works; as James Kroeger cogently explains on his blog “Nontrivial Pursuits”, money is created as it is needed; investment isn’t determined by people saving what they don’t spend. Indeed, increased consumption generally leads to increased investment, because our economy is currently limited by demand, not supply. We could build a lot more stuff, if only people could afford to buy it.

And that’s not even considering the labor incentives; as I already talked about in my previous post on progressive taxation, there are two incentives involved when you increase someone’s hourly wage. On the one hand, they get paid more for each hour, which is a reason to work; that’s the substitution effect. But on the other hand, they have more money in general, which is a reason they don’t need to work; that’s the income effect. Broadly speaking, the substitution effect dominates at low incomes (about $20,000 or less), the income effect dominates at high incomes (about $100,000 or more), and the two effects cancel out at moderate incomes. Since a tax on your income hits you in much the same way as a reduction in your wage, this means that raising taxes on the poor makes them work less, while raising taxes on the rich makes them work more. But if you go from our currently slightly-progressive system to a flat system, you raise taxes on the poor and cut them on the rich, which would mean that the poor would work less, and the rich would also work less! This would reduce economic output even further. If you want to maximize the incentive to work, you want progressive taxes, not flat taxes.

Flat taxes sound appealing because they are so simple; even the basic formula for our current tax rates is complicated, and we combine it with hundreds of pages of deductions and credits—not to mention tens of thousands of pages of case law!—making it a huge morass of bureaucracy that barely anyone really understands and corporate lawyers can easily exploit. I’m all in favor of getting rid of that; but you don’t need a flat tax to do that. You can fit the formula for a progressive tax on a single page—indeed, on a single line: r = 1 – I^-p

That’s it. It’s simple enough to be plugged into any calculator that is capable of exponents, not to mention efficiently implemented in Microsoft Excel (more efficiently than our current system in fact).
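To make the single-line formula concrete, here is a minimal Python sketch. The particular progressivity parameter p and the income scale below are illustrative assumptions of mine, not part of the proposal; the formula itself is just r = 1 – I^-p.

```python
def tax_rate(income, p=0.1, scale=10000.0):
    """Average tax rate r = 1 - I^(-p), where I is income measured in
    units of `scale`. Both p and scale here are illustrative assumptions."""
    I = income / scale
    if I <= 1:
        return 0.0  # incomes at or below one unit of scale pay no tax
    return 1.0 - I ** (-p)

def tax_owed(income, p=0.1, scale=10000.0):
    """Total tax owed: the average rate times the income."""
    return tax_rate(income, p, scale) * income
```

With these assumed parameters the rate rises smoothly with income, approaching (but never reaching) 100%; someone making $500,000 pays a higher rate than someone making $50,000, which is all that “progressive” means.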

Combined with that simple formula, you could list all of the sensible deductions on a couple of additional pages (business investments and educational expenses, mostly—poverty should be addressed by a basic income, not by tax deductions on things like heating and housing, which are actually indirect corporate subsidies), along with a land tax (one line: $3000 per hectare), a basic income (one more line: $8,000 per adult and $4,000 per child), and some additional excise taxes on goods with negative externalities (like alcohol, tobacco, oil, coal, and lead), with a line for each; then you can provide a supplementary manual of maybe 50 pages explaining the detailed rules for applying each of those deductions in unusual cases. The entire tax code should be readable by an ordinary person in a single sitting no longer than a few hours. That means no more than 100 pages and no more than a 7th-grade reading level.

Why do I say this? Isn’t that a ridiculous standard? No, it is a Constitutional imperative. It is a fundamental violation of your liberty to tax you according to rules you cannot reasonably understand—indeed, bordering on Kafkaesque. While this isn’t taxation without representation—we do vote for representatives, after all—it is something very much like it; what good is the ability to change rules if you don’t even understand the rules in the first place? Nor would it be all that difficult: You first deduct these things from your income, then plug the result into this formula.

So yes, I absolutely agree with the basic principle of tax reform. The tax code should be scrapped and recreated from scratch, and the final product should be a primary form of only a few pages combined with a supplementary manual of no more than 100 pages. But you don’t need a flat tax to do that, and indeed for many other reasons a flat tax is a terrible idea, particularly if the suggested rate is 10% or 15%, less than half what we actually spend. The real question is why so many Republican candidates think that this will appeal to their voter base—and why they could actually be right about that.

Part of it is the entirely justified outrage at the complexity of our current tax system, and the appealing simplicity of a flat tax. Part of it is the long history of American hatred of taxes; we were founded upon resisting taxes, and we’ve been resisting taxes ever since. In some ways this is healthy; taxes per se are not a good thing, they are a bad thing, a necessary evil.

But those two things alone cannot explain why anyone would advocate raising taxes on the poorest half of the population while dramatically cutting them on the top 1%. If you are opposed to taxes in general, you’d cut them on everyone; and if you recognize the necessity of taxation, you’d be trying to find ways to minimize the harm while ensuring sufficient tax revenue, which in general means progressive taxation.

To understand why they would be pushing so hard for flat taxes, I think we need to say that many Republicans, particularly those in positions of power, honestly do think that rich people are better than poor people and we should always give more to the rich and less to the poor. (Maybe it’s partly halo effect, in which good begets good and bad begets bad? Or maybe just world theory, the ingrained belief that the world is as it ought to be?)

Romney’s 47% rant wasn’t an exception; it was what he honestly believes, what he says when he doesn’t know he’s on camera. He thinks that he earned every penny of his $250 million net wealth; yes, even the part he got from marrying his wife and the part he got from abusing tax laws, arbitraging assets and liquidating companies. He thinks that people who live on $4,000 or even $400 a year are simply lazy freeloaders, who could easily work harder, perhaps do some arbitrage and liquidation of their own (check out these alleged “rags to riches” stories including the line “tried his hand at mortgage brokering”), but choose not to, and as a result deserve what they get. (It’s important to realize just how bizarre this moral attitude truly is; even if I thought you were the laziest person on Earth, I wouldn’t let you starve to death.) He thinks that the social welfare programs which have reduced poverty but never managed to eliminate it are too generous—if he even thinks they should exist at all. And in thinking these things, he is not some bizarre aberration; he is representing an entire class of people, nearly all of whom vote Republican.

The good news is, these people are still in the minority. They hold significant sway over the Republican primary, but will not have nearly as much impact in the general election. And right now, the Republican candidates are so numerous and so awful that I have trouble seeing how the Democrats could possibly lose. (But please, don’t take that as a challenge, you guys.)

What you need to know about tax incidence

JDN 2457152 EDT 14:54.

I said in my previous post that I consider tax incidence to be one of the top ten things you should know about economics. If I actually try to make a top ten list, I think it goes something like this:

  1. Supply and demand
  2. Monopoly and oligopoly
  3. Externalities
  4. Tax incidence
  5. Utility, especially marginal utility of wealth
  6. Pareto-efficiency
  7. Risk and loss aversion
  8. Biases and heuristics, including the sunk-cost fallacy, scope neglect, herd behavior, anchoring and the representativeness heuristic
  9. Asymmetric information
  10. Winner-takes-all effect

So really tax incidence is in my top five things you should know about economics, and yet I still haven’t talked about it very much. Well, today I will. The basic principles of supply and demand I’m assuming you know, but I really should spend some more time on monopoly and externalities at some point.

Why is tax incidence so important? Because of one central fact: The person who really pays the tax is not necessarily the person who writes the check.

It doesn’t matter whether a tax is paid by the buyer or the seller; it matters what the buyer and seller can do to avoid the tax. If you can change your behavior in order to avoid paying the tax—buy less stuff, or buy somewhere else, or deduct something—you will not bear the tax as much as someone else who can’t do anything to avoid the tax, even if you are the one who writes the check. If you can avoid it and they can’t, other parties in the transaction will adjust their prices in order to eat the tax on your behalf.

Thus, if you have a good that you absolutely must buy no matter what—like, say, table salt—and then we make everyone who sells that good pay an extra $5 per kilogram, I can guarantee you that you will pay an extra $5 per kilogram, and the suppliers will make just as much money as they did before. (A salt tax would be an excellent way to redistribute wealth from ordinary people to corporations, if you’re into that sort of thing. Not that we have any trouble doing that in America.)

On the other hand, if you have a good that you’ll only buy at a very specific price—like, say, fast food—then we can make you write the check for a tax of an extra $5 per kilogram you use, and in real terms you’ll pay hardly any tax at all, because the sellers will either eat the cost themselves by lowering the prices or stop selling the product entirely. (A fast food tax might actually be a good idea as a public health measure, because it would reduce production and consumption of fast food—remember, heart disease is one of the leading causes of death in the United States, making cheeseburgers a good deal more dangerous than terrorists—but it’s a bad idea as a revenue measure, because rather than pay it, people are just going to buy and sell less.)

In the limit in which supply and demand are both completely fixed (perfectly inelastic), you can tax however you want and it’s just free redistribution of wealth however you like. In the limit in which supply and demand are both locked into a single price (perfectly elastic), you literally cannot tax that good—you’ll just eliminate production entirely. There aren’t a lot of perfectly elastic goods in the real world, but the closest I can think of is cash. If you instituted a 2% tax on all cash withdrawn, most people would stop using cash basically overnight. If you want a simple way to make all transactions digital, find a way to enforce a cash tax. When you have a perfect substitute available, taxation eliminates production entirely.
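The two limiting cases can be summarized by the standard competitive-incidence result: the fraction of a small unit tax borne by buyers is the supply elasticity divided by the sum of the two elasticities (in absolute value). A quick sketch, with the function name my own:

```python
def consumer_share(supply_elasticity, demand_elasticity):
    """Fraction of a small unit tax borne by buyers in a competitive
    market: eS / (eS + |eD|), the standard incidence formula."""
    eS = supply_elasticity
    eD = abs(demand_elasticity)
    return eS / (eS + eD)

# Salt: demand nearly inelastic, so buyers bear nearly all of the tax.
print(consumer_share(1.0, -0.01))  # close to 1
# Land: supply perfectly inelastic, so sellers bear all of it.
print(consumer_share(0.0, -1.0))   # exactly 0
```

When the two elasticities are equal, the burden splits exactly in half; as either side becomes more able to walk away, its share of the burden shrinks.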

To really make sense out of tax incidence, I’m going to need a lot of neoclassical economists’ favorite thing: Supply and demand curves. These things pop up everywhere in economics; and they’re quite useful. I’m not so sure about their application to things like aggregate demand and the business cycle, for example, but today I’m going to use them for the sort of microeconomic small-market stuff that they were originally designed for; and what I say here is going to be basically completely orthodox, right out of what you’d find in an ECON 301 textbook.

Let’s assume that things are linear, just to make the math easier. You’d get basically the same answers with nonlinear demand and supply functions, but it would be a lot more work. Likewise, I’m going to assume a unit tax on goods—like $2890 per hectare—as opposed to a proportional tax on sales—like 6% property tax—again, for mathematical simplicity.

The next concept I’m going to have to talk about is elasticity, which is the proportional amount that quantity sold changes relative to price. If price increases 2% and you buy 4% less, you have a demand elasticity of -2. If price increases 2% and you buy 1% less, you have a demand elasticity of -1/2. If price increases 3% and you sell 6% more, you have a supply elasticity of 2. If price decreases 5% and you sell 1% less, you have a supply elasticity of 1/5.

Elasticity doesn’t have any units of measurement, it’s just a number—which is part of why we like to use it. It also has some very nice mathematical properties involving logarithms, but we won’t be needing those today.
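In code, the definition is one line, and the four examples above can be checked directly; a trivial sketch, but it makes the sign conventions explicit:

```python
def elasticity(pct_price_change, pct_quantity_change):
    """Elasticity: percent change in quantity per percent change in price."""
    return pct_quantity_change / pct_price_change

print(elasticity(2, -4))   # demand elasticity: -2.0
print(elasticity(2, -1))   # demand elasticity: -0.5
print(elasticity(3, 6))    # supply elasticity: 2.0
print(elasticity(-5, -1))  # supply elasticity: 0.2
```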

The price that renters are willing and able to pay, the demand price PD, will start at their maximum price, the reserve price PR, and then decrease linearly with the quantity of land rented Q, according to a linear function (simply because we assumed that) that varies with a parameter e representing the elasticity of demand (it isn’t strictly equal to the elasticity, but it’s a sort of linearization).

We’re interested in what is called the consumer surplus; it is equal to the total amount of value that buyers get from their purchases, converted into dollars, minus the amount they had to pay for those purchases. This we add to the producer surplus, which is the amount paid for those purchases minus the cost of producing them—which is basically just the same thing as profit. Together the consumer surplus and producer surplus make the total economic surplus, which economists generally try to maximize. Because different people have different marginal utility of wealth, this is actually a really terrible idea for deep and fundamental reasons—taking a house from Mitt Romney and giving it to a homeless person would most definitely reduce economic surplus, even though it would obviously make the world a better place. Indeed, I think that many of the problems in the world, particularly those related to inequality, can be traced to the fact that markets maximize economic surplus rather than actual utility. But for now I’m going to ignore all that, and pretend that maximizing economic surplus is what we want to do.

You can read off the economic surplus straight from the supply and demand curves; it’s the area between the lines. (Mathematically, it’s an integral; but that’s equivalent to the area under a curve, and with straight lines they’re just triangles.) I’m going to call the consumer surplus just “surplus”, and producer surplus I’ll call “profit”.

Below the demand curve and above the price is the surplus, and below the price and above the supply curve is the profit:

[Figure: elastic supply, competitive market]

I’m going to be bold here and actually use equations! Hopefully this won’t turn off too many readers. I will give each equation in both a simple text format and in proper LaTeX. Remember, you can render LaTeX here.

PD = PR - 1/e * Q

P_D = P_R - \frac{1}{e} Q \\

The marginal cost that landlords have to pay, the supply price PS, is a bit weirder, as I’ll talk about more in a moment. For now let’s say that it is a linear function, starting at zero cost for some quantity Q0 and then increasing linearly according to a parameter n that similarly represents the elasticity of supply.

PS = 1/n * (Q - Q0)

P_S = \frac{1}{n} \left( Q - Q_0 \right) \\

Now, if you introduce a tax, there will be a difference between the price that renters pay and the price that landlords receive—namely, the tax, which we’ll call T. I’m going to assume that, on paper, the landlord pays the whole tax. As I said above, this literally does not matter. I could assume that on paper the renter pays the whole tax, and the real effect on the distribution of wealth would be identical. All we’d have to do is set PD = P and PS = P – T; the consumer and producer surplus would end up exactly the same. Or we could do something in between, with P’D = P + rT and P’S = P – (1 – r) T.

Then, if the market is competitive, we just set the prices equal, taking the tax into account:

P = PD - T = PR - 1/e * Q - T = PS = 1/n * (Q - Q0)

P = P_D - T = P_R - \frac{1}{e} Q - T = P_S = \frac{1}{n} \left(Q - Q_0 \right) \\

PR - 1/e * Q - T = 1/n * (Q - Q0)

P_R - \frac{1}{e} Q - T = \frac{1}{n} \left(Q - Q_0 \right) \\

Notice the equivalence here: if we instead set P’D = P + rT and P’S = P – (1 – r) T, so that the renter now pays a fraction r of the tax on paper, then:

P = P'D - rT = PR - 1/e * Q - rT = P'S + (1 - r) T = 1/n * (Q - Q0) + (1 - r) T

P = P^\prime_D - r T = P_R - \frac{1}{e} Q - r T = P^\prime_S + (1 - r) T = \frac{1}{n} \left(Q - Q_0 \right) + (1 - r) T \\

The result is exactly the same:

PR - 1/e * Q - T = 1/n * (Q - Q0)

P_R - \frac{1}{e} Q - T = \frac{1}{n} \left(Q - Q_0 \right) \\

I’ll spare you the algebra, but this comes out to:

Q = (PR - T)/(1/n + 1/e) + (Q0)/(1 + n/e)

Q = \frac{P_R - T}{\frac{1}{n} + \frac{1}{e}} + \frac{Q_0}{1 + \frac{n}{e}} \\

P = (PR - T)/(1 + n/e) - (Q0)/(e + n)

P = \frac{P_R - T}{1 + \frac{n}{e}} - \frac{Q_0}{e+n} \\

That’s if the market is competitive.

If the market is a monopoly, instead of setting the prices equal, we set the price the landlord receives equal to the marginal revenue—which takes into account the fact that increasing the amount they sell forces them to reduce the price they charge everyone else. Thus, the marginal revenue drops faster than the price as the quantity sold increases.

After a bunch of algebra (and just a dash of calculus), that comes out to these very similar, but not quite identical, equations:

Q = (PR - T)/(1/n + 2/e) + (Q0)/(1 + 2n/e)

Q = \frac{P_R - T}{\frac{1}{n} + \frac{2}{e}} + \frac{Q_0}{1 + \frac{2n}{e}} \\

P = (PR - T)*(1/n + 1/e)/(1/n + 2/e) - (Q0)/(e + 2n)

P = \left( P_R - T\right)\frac{\frac{1}{n} + \frac{1}{e}}{\frac{1}{n} + \frac{2}{e}} - \frac{Q_0}{e+2n} \\

Yes, it changes some 1s into 2s. That by itself accounts for the full effect of monopoly. That’s why I think it’s worthwhile to use the equations; they are deeply elegant and express in a compact form all of the different cases. They look really intimidating right now, but for most of the cases we’ll consider these general equations simplify quite dramatically.
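Here are both sets of closed-form solutions as a small Python sketch (my own transcription of the equations above; it assumes n > 0, so the perfectly inelastic case n = 0 has to be taken as a limit). P here is the price the seller receives net of the tax, so renters pay P + T:

```python
def competitive(PR, T, e, n, Q0):
    """Competitive equilibrium from the closed-form solution in the text."""
    Q = (PR - T) / (1/n + 1/e) + Q0 / (1 + n/e)
    P = (PR - T) / (1 + n/e) - Q0 / (e + n)
    return Q, P

def monopoly(PR, T, e, n, Q0):
    """Monopoly equilibrium: the same formulas with some 1s turned into 2s."""
    Q = (PR - T) / (1/n + 2/e) + Q0 / (1 + 2*n/e)
    P = (PR - T) * (1/n + 1/e) / (1/n + 2/e) - Q0 / (e + 2*n)
    return Q, P

# With PR = 10, e = n = 1, Q0 = 0: the monopolist sells less at a higher
# price, and a $1 tax raises the renters' price (P + T) by half the tax,
# matching the equal-elasticities incidence split.
Qc, Pc = competitive(10, 0, 1, 1, 0)  # Q = 5, P = 5
Qm, Pm = monopoly(10, 0, 1, 1, 0)     # Q = 10/3, P = 20/3
Qt, Pt = competitive(10, 1, 1, 1, 0)  # renters now pay Pt + 1 = 5.5
```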

There are several cases to consider.

Land has an extremely high cost to create—for practical purposes, we can consider its supply fixed, that is, perfectly inelastic. If the market is competitive, so that landlords have no market power, then they will simply rent out all the land they have at whatever price the market will bear:

[Figure: inelastic supply, competitive market]

This is like setting n = 0 and T = 0 in the above equations, the competitive ones.

Q = Q0

Q = Q_0 \\

P = PR - Q0/e

P = P_R - \frac{Q_0}{e} \\

If we now introduce a tax, it will fall completely on the landlords, because they have little choice but to rent out all the land they have, and they can only rent it at a price—including tax—that the market will bear.

[Figure: inelastic supply, competitive market, with tax]

Now we still have n = 0 but not T = 0.

Q = Q0

Q = Q_0 \\

P = PR - T - Q0/e

P = P_R - T - \frac{Q_0}{e} \\

The consumer surplus will be:

½ (Q)(PR - P - T) = 1/(2e) * Q0^2

\frac{1}{2}Q(P_R - P - T) = \frac{1}{2e}Q_0^2 \\

Notice how T isn’t in the result. The consumer surplus is unaffected by the tax.

The producer surplus, on the other hand, will be reduced by the tax:

(Q)(P) = (PR - T - Q0/e) Q0 = PR Q0 - 1/e * Q0^2 - T Q0

(Q)(P) = (P_R - T - \frac{Q_0}{e})Q_0 = P_R Q_0 - \frac{1}{e} Q_0^2 - T Q_0 \\

T appears linearly as TQ0, which is the same as the tax revenue. All the money goes directly from the landlord to the government, as we want if our goal is to redistribute wealth without raising rent.
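Numerically, using the n = 0 formulas above (the function name and numbers are my own, arbitrary choices): with supply fixed, introducing a tax leaves the quantity and the consumer surplus untouched, and the landlords’ profit falls by exactly the tax revenue T*Q0.

```python
def land_market(PR, T, e, Q0):
    """Perfectly inelastic supply (n = 0): all the land Q0 gets rented."""
    P = PR - T - Q0 / e          # price landlords receive, net of tax
    surplus = Q0**2 / (2 * e)    # consumer surplus: T does not appear
    profit = P * Q0              # producer surplus: falls by T*Q0
    return P, surplus, profit

P0, s0, pi0 = land_market(10, 0, 1, 4)  # no tax
P2, s2, pi2 = land_market(10, 2, 1, 4)  # $2 tax
# s0 == s2, and pi0 - pi2 == 2 * 4: the tax comes entirely out of profit.
```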

But now suppose that the market is not competitive, and by tacit collusion or regulatory capture the landlords can exert some market power; this is quite likely the case in reality. Actually in reality we’re probably somewhere in between monopoly and competition, either oligopoly or monopolistic competition, which I will talk about a good deal more in a later post, I promise.

It could be that demand is still sufficiently high that even with their market power, landlords have an incentive to rent out all their available land, in which case the result will be the same as in the competitive market.

[Figure: inelastic supply, monopolistic market]

A tax will then fall completely on the landlords as before:

[Figure: inelastic supply, monopolistic market, with tax]

Indeed, in this case it doesn’t really matter that the market is monopolistic; everything is the same as it would be under a competitive market. Notice how if you set n = 0, the monopolistic equations and the competitive equations come out exactly the same. The good news is, this is quite likely our actual situation! So even in the presence of significant market power the land tax can redistribute wealth in just the way we want.

But there are a few other possibilities. One is that demand is not sufficiently high, so that the landlords’ market power causes them to actually hold back some land in order to raise the price:

[Figure: zero-bound supply, monopolistic market]

This will create some of what we call deadweight loss, in which some economic value is wasted. By restricting the land they rent out, the landlords make more profit, but the harm they cause to tenants is greater than the profit they gain, so value is wasted.

Now instead of setting n = 0, we actually set n = infinity. Why? Because the reason that the landlords restrict the land they sell is that their marginal revenue is actually negative beyond that point—they would actually get less money in total if they sold more land. Instead of being bounded by their cost of production (because they have none, the land is there whether they sell it or not), they are bounded by zero. (Once again we’ve hit upon a fundamental concept in economics, particularly macroeconomics, that I don’t have time to talk about today: the zero lower bound.) Thus, they can change quantity all they want (within a certain range) without changing the price, which is equivalent to a supply elasticity of infinity.

Introducing a tax will then exacerbate this deadweight loss (adding DWL2 to the original DWL1), because it provides even more incentive for the landlords to restrict the supply of land:

[Figure: zero-bound supply, monopolistic market, with tax]

Q = e/2*(PR - T)

Q = \frac{e}{2} \left(P_R - T\right) \\

P = 1/2*(PR - T)

P = \frac{1}{2} \left(P_R - T\right) \\

The quantity Q0 completely drops out, because it doesn’t matter how much land is available (as long as it’s enough); it only matters how much land it is profitable to rent out.
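A quick numerical sketch of this case (the function name and numbers are my own): the monopolist sets marginal revenue net of tax to zero, so half of any tax shows up in the renters’ price, the other half comes out of the landlord’s margin, and the quantity shrinks further.

```python
def zero_bound_monopoly(PR, T, e):
    """Monopolist with zero marginal cost and ample land: setting net
    marginal revenue PR - 2Q/e - T = 0 gives Q = e*(PR - T)/2."""
    Q = e * (PR - T) / 2
    P_landlord = (PR - T) / 2    # price received, net of tax
    P_renter = P_landlord + T    # = (PR + T)/2
    return Q, P_landlord, P_renter

# With PR = 10 and e = 1: a $2 tax cuts Q from 5 to 4 and raises the rent
# renters pay from 5 to 6, i.e. renters eat exactly half the tax.
```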

We can then find the consumer and producer surplus, and see that they are both reduced by the tax. The consumer surplus is as follows:

½ (Q)(PR - P - T) = e/8*(PR - T)^2

\frac{1}{2}Q \left( P_R - P - T \right) = \frac{e}{8}\left( P_R - T \right)^2 \\

This time, the tax does have an effect on reducing the consumer surplus.

The producer surplus, on the other hand, will be:

(Q)(P) = 1/2*(PR - T)*e/2*(PR - T) = e/4*(PR - T)^2

(Q)(P) = \frac{1}{2}\left(P_R - T \right) \frac{e}{2} \left(P_R - T\right) = \frac{e}{4} \left(P_R - T\right)^2 \\

Notice how it is also reduced by the tax—and no longer in a simple linear way.

The tax revenue is now a function of the demand:

TQ = e/2*T(PR - T)

T Q = \frac{e}{2} T (P_R - T) \\

If you add all these up, you’ll find that the sum is this:

e/8 * (PR - T)(3PR + T)

\frac{e}{8} \left(P_R - T\right)\left(3 P_R + T\right) \\

Compared to the no-tax sum of 3e/8 * PR^2, the total is reduced by an amount equal to e/8 * T(2PR + T); that reduction is the additional deadweight loss created by the tax.

Finally there is an even worse scenario, in which the tax is so large that it actually creates an incentive to restrict land where none previously existed:

[Figure: zero-bound supply, monopolistic market, with a very large tax]

Notice, however, that because the supply of land is inelastic the deadweight loss is still relatively small compared to the huge amount of tax revenue.

But actually this isn’t the whole story, because a land tax provides an incentive to get rid of land that you’re not profiting from. If this incentive is strong enough, the monopolistic power of landlords will disappear, as the unused land gets sold to more landholders or to the government. This is a way of avoiding the tax, but it’s one that actually benefits society, so we don’t mind incentivizing it.

Now, let’s compare this to our current system of property taxes, which include the value of buildings. Buildings are expensive to create, but we build them all the time; the supply of buildings is strongly dependent upon the price at which those buildings will sell. This makes for a supply curve that is somewhat elastic.

If the market were competitive and we had no taxes, it would be optimally efficient:

[Figure: elastic supply, competitive market]

Property taxes create an incentive to produce fewer buildings, and this creates deadweight loss. Notice that this happens even if the market is perfectly competitive:

[Figure: elastic supply, competitive market, with tax]

Since both n and e are finite and nonzero, we’d need to use the full equations. The algebra is such a mess that I don’t see any reason to subject you to it; but suffice it to say, the T does not drop out. Tenants do see their consumer surplus reduced, and the larger the tax, the more this is so.

Now, suppose that the market for buildings is monopolistic, as it most likely is. This would create deadweight loss even in the absence of a tax:

[Figure: elastic supply, monopolistic market, no tax]

But a tax will add even more deadweight loss:

[Figure: elastic supply, monopolistic market, with tax]

Once again, we’d need the full equations, and once again it’s a mess; but the result is, as before, that the tax gets passed on to the tenants in the form of more restricted sales and therefore higher rents.

Because of the finite supply elasticity, there’s no way that the tax can avoid raising the rent. As long as landlords have to pay more taxes when they build more or better buildings, they are going to raise the rent in those buildings accordingly—whether the market is competitive or not.

If the market is indeed monopolistic, there may be ways to bring the rent down: suppose we know what the competitive market price of rent should be, and we can establish rent control to that effect. If we are truly correct about the price to set, this rent control can not only reduce rent, it can actually reduce the deadweight loss:

[Figure: effective rent control with tax]

But if we set the rent control too low, or don’t properly account for the varying cost of different buildings, we can instead introduce a new kind of deadweight loss, by making it too expensive to make new buildings.

[Figure: rent control set too low, with tax]

In fact, what actually seems to happen is more complicated than that: because applying rent control to every building would leave the number of buildings obviously far too small, rent control is usually set to affect some buildings and not others. So the rental market fragments into two markets: one, which is too small, but very good for those few who get the chance to use it; and another, which is unaffected by the rent control but is more monopolistic and therefore raises prices even further. This is why almost all economists are opposed to rent control (PDF); it doesn’t solve the problem of high rent and simply causes a whole new set of problems.

A land tax with a basic income, on the other hand, would help poor people at least as much as rent control presently does—probably a good deal more—without discouraging the production and maintenance of new apartment buildings.

But now we come to a key point: The land tax must be uniform per hectare.

If it is instead based on the value of the land, then this acts like a finite elasticity of supply; it provides an incentive to reduce the value of your own land in order to avoid the tax. As I showed above, this is particularly pernicious if the market is monopolistic, but even if it is competitive the effect is still there.

One exception I can see is if there are different tiers based on broad classes of land that it’s difficult to switch between, such as “land in Manhattan” versus “land in Brooklyn” or “desert land” versus “forest land”. But even this policy would have to be done very carefully, because any opportunity to substitute can create an opportunity to pass on the tax to someone else—for instance if land taxes are lower in Brooklyn developers are going to move to Brooklyn. Maybe we want that, in which case that is a good policy; but we should be aware of these sorts of additional consequences. The simplest way to avoid all these problems is to simply make the land tax uniform. And given the quantities we’re talking about—less than $3000 per hectare per year—it should be affordable for anyone except the very large landholders we’re trying to distribute wealth from in the first place.

The good news is, most economists would probably be on board with this proposal. After all, the neoclassical models themselves say it would be more efficient than our current system of rent control and property taxes—and the idea is at least as old as Adam Smith. Perhaps we can finally change the fact that the rent is too damn high.

What if you couldn’t own land?

JDN 2457145 EDT 20:49.

Today’s post puts us on the socialism scale somewhere near The Guess Who, but not quite all the way to John Lennon. I’d like to question one of the fundamental tenets of modern capitalism, but not the basic concept of private ownership itself:

What if you couldn’t own land?

Many things that you can own were more-or-less straightforwardly created by someone. A car, a computer, a television, a pair of shoes; for today let’s even take for granted intellectual property like books, movies, and songs; at least those things (“things”) were actually made by someone.

But land? We’re talking about chunks of the Earth here. They were here billions of years before us, and in all probability will be here billions of years after we’re gone. There’s no need to incentivize its creation; the vast majority of land was already here and did not need to be created. (I do have to say “the vast majority”, because in places like Japan, Hong Kong, and the Netherlands real estate has become so scarce that people do literally build land out into the sea. But this is something like 0.0001% of the world’s land.)

What we want to incentivize is land development; we want it to be profitable to build buildings and irrigate deserts, and yes, even cut down forests sometimes (though then there should be a carbon tax with credits for forested land to ensure that there isn’t too much incentive). Yet our current property tax system doesn’t do this very well; if you build bigger buildings, you end up paying more property taxes. Yes, you may also make some profit on the buildings—but it’s risky, and you may not get enough benefit to justify the added property taxes.

Moreover, we want to allocate land—we want some way of deciding who is allowed to use what land where and when (and perhaps why). Allowing land to be bought and sold is one way to do that, but it is not the only way.

Indeed, land ownership suffers from a couple of truly glaring flaws as an allocation system:

      1. It creates self-perpetuating inequality. Because land grows in value over time (due to population growth and urbanization, among other things), those who currently own land end up getting an ever-growing quantity of wealth while those who do not own land do not, and very likely end up having to pay ever-growing rents to the landlords. (I like calling them “landlords”; it really drives home the fact that our landholding system is still basically the same as it was under feudalism.) In fact, the recent rise in the share of income that goes to owners of capital rather than workers is almost entirely attributable to the rise in the price of real estate. As that post rightly recognizes, this does nothing to undermine Piketty’s central message of rising inequality due to capital income (pace The Washington Post); it merely tells us to focus on real estate instead of other forms of capital.
      2. It has no non-arbitrary allocation. If we want to decide who owns a car, we can ask questions like, “Who built it? Did someone buy it from them? Did they pay a fair price?”; if we want to decide who owns a book, we can ask questions like, “Who wrote it? Did they sell it to a publisher? What was the royalty rate?” That is, there is a clear original owner, and there is a sense of whether the transfer of ownership can be considered fair. But if we want to decide who owns a chunk of land, basically all we can ask is, “What does the deed say?” The owner is the owner because they are the owner; there’s no sense in which that ownership is fair. We certainly can’t go back to the original creation of the land, because that was due to natural forces gigayears ago. If we keep tracing the ownership backward, we will eventually end up with some guy (almost certainly a man, a White man in fact) with a gun who pointed that gun at other people and said, “This is mine.” This is true of basically all the land in the world (aside from those little bits of Japan and such); it was already there, and the only reason someone got to own it was because they said so and had a bigger gun. And a flag, perhaps: “Do you have a flag?” I suppose, in theory at least, there are a few ways of allocating land which seem less arbitrary: One would be to give everyone an equal amount. But this is practically very difficult: What do you do when the population changes? If you have 2% annual population growth, do you carve off 2% of everybody’s lot each year? Another would be to let people squat land, and automatically own the land that they live on—but again practical difficulties quickly become enormous. In any case, these two methods bear about as much resemblance to our actual allocation of land as a squirrel does to a Tyrannosaurus.

So, what else might we use? The system that makes the most sense to me is that we would own all land as a society. In practical terms this would mean that all land is Federal land, and if you want to use it for something, you need to pay rent to the government. There are many different ways the government could set the rent, but the most sensible might be to charge a flat rate per hectare regardless of where the land is or what it’s being used for, because that would maximize the incentive to develop the land. It would also make the burden fall entirely on the landowner, because the supply of land is perfectly inelastic—meaning that you can’t change the quantity you supply based on the price, because you aren’t making it; it’s just already sitting there.

Of course, this idea is obviously politically impossible in our current environment—or indeed any foreseeable political environment. I’m just fantasizing here, right?

Well, not quite. There is one thing we could do that would be economically quite similar to government-only land ownership; it’s called a land tax. The idea is incredibly simple: you just collect a flat tax per hectare of land. Economists have known that a land tax is efficient at providing revenue and reducing inequality since at least Adam Smith. So maybe ownership of land isn’t actually foundational to capitalism, after all; maybe we’ve just never fully gotten over feudalism. (I basically agree with Adam Smith, and for doing so I am often called a socialist.) The beautiful thing about a land tax is that it has a tax incidence in which the owners of the land end up bearing the full brunt of the tax.

Tax incidence is something it’s very important to understand; it would be on my list of the top ten economic principles that people should learn. We often have fierce political debates over who will actually write the check: Should employers pay the health insurance premium, or should employees? Will buyers pay sales tax, or sellers? Should we tax corporate profits or personal capital gains?

Please understand that I am not exaggerating when I say that these sorts of questions are totally irrelevant. It simply does not matter who actually writes the check; what matters is who bears the cost. Making the employer pay the health insurance premium doesn’t make the slightest difference if all they’re going to do is cut wages by the exact same amount. You can see the irrelevance of the fact that sellers pay sales tax every time you walk into a store—you always end up paying the price plus the tax, don’t you? (I found that the base price of most items was the same between Long Beach and Ann Arbor, but my total expenditure was always 3% more because of the 9% sales tax versus the 6%.) How do we determine who actually pays the tax? It depends on the elasticity—how easily can you change your behavior in order to avoid the tax? Can you find a different job because the health insurance premiums are too high? No? Then you’re probably paying that premium, even if your employer writes the check. If you can find a new job whenever you want, your employer might have to pay it for you even if you write the check.

The incidence of corporate taxes and capital gains taxes is even more complicated, because they could affect the behavior of corporations in many different ways; indeed, many economists argue that the corporate tax simply results in higher unemployment or lower wages for workers. I don’t think that’s actually true, but I honestly can’t rule it out completely, precisely because corporate taxes are so complicated. You need to know all sorts of things about the structure of stock markets, the freedom of trade, the mobility of labor… it’s a complete and total mess.

It’s because of tax incidence that a land tax makes so much sense; there’s no way for the landowner to escape it, other than giving up the land entirely. In particular, they can’t charge more for rent without being out-competed (unless landowners are really good at colluding—which might be true for large developers, but not individual landlords). Their elasticity is so low that they’re forced to bear the full cost of the tax.
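
The textbook rule of thumb makes this concrete: for a small per-unit tax in a competitive market, the share each side bears is (to first order) proportional to the other side’s elasticity. This sketch just encodes that standard formula:

```python
def tax_shares(elasticity_demand: float, elasticity_supply: float):
    """First-order incidence of a small per-unit tax in a competitive
    market: the less elastic side bears more of the tax, no matter
    who writes the check."""
    total = elasticity_demand + elasticity_supply
    buyer_share = elasticity_supply / total
    seller_share = elasticity_demand / total
    return buyer_share, seller_share

# Land: supply elasticity is essentially zero, so landowners bear the
# entire tax. (The demand elasticity here is an arbitrary illustrative
# value; the result holds for any nonzero choice.)
print(tax_shares(elasticity_demand=1.0, elasticity_supply=0.0))  # (0.0, 1.0)
```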

If the land tax were high enough, it could eliminate the automatic growth in wealth that comes from holding land, thereby reducing long-run inequality dramatically. The revenue could be used for my other favorite fiscal policy, the basic income—and real estate is a big enough part of our nation’s wealth that it’s actually entirely realistic to fund an $8,000 per person per year basic income entirely on land tax revenue. The total value of US land is about $14 trillion, and an $8,000 basic income for 320 million people would cost about $2.6 trillion; that’s only about 19% of the land’s value. You’d actually want to make it a flat tax per hectare, so how much would that be? Total US land area is about 9 million square kilometers, so spreading $2.6 trillion across all of it works out to about $289,000 per square kilometer, or $2,890 per hectare. (About 60% of US land is privately owned at present; if we exempted the land the government already owns, the rate on the remainder would have to be proportionally higher, roughly $4,800 per hectare.) If you own a hectare—which is bigger than most single-family lots—you’d only pay $2,890 per year in land tax, well within what most middle-class families could handle. But if you own 290,000 acres (that’s about 117,000 hectares) like Jeff Bezos, you’re paying $338 million per year. Since Jeff Bezos has about $38 billion in net wealth, he can actually afford to pay that ($338 million per year is about one-tenth of what he makes automatically on dividends), though he might consider selling off some of the land to avoid the taxes, which is exactly the sort of incentive we wanted to create.
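
Here is that arithmetic as a quick sketch, using the unrounded inputs (which come out slightly below the rounded figures in the text); all the numbers are the post’s own rough estimates, not authoritative data:

```python
# Back-of-the-envelope land tax arithmetic; all inputs are the rough
# estimates from the text, not authoritative data.
population = 320e6        # people
basic_income = 8_000      # dollars per person per year
total_land_km2 = 9e6      # approximate US land area in km^2

revenue_needed = population * basic_income      # $2.56 trillion
tax_per_km2 = revenue_needed / total_land_km2   # about $284,000 per km^2
tax_per_hectare = tax_per_km2 / 100             # 100 hectares per km^2

print(f"${revenue_needed / 1e12:.2f} trillion needed")
print(f"${tax_per_km2:,.0f} per km^2 = ${tax_per_hectare:,.0f} per hectare")
```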

Indeed, when I contemplate this policy I’m struck by the fact that it has basically no downside—usually in public policy you’re forced to make hard compromises and tradeoffs, but a land tax plus basic income is a system that carries almost no downsides at all. It won’t disincentivize investment, it won’t disincentivize working, it will dramatically reduce inequality, it will save the government a great deal of money on social welfare spending, and best of all it will eliminate poverty immediately and forever. The only people it would hurt at all are extremely rich, and they wouldn’t even be hurt very much, while it would benefit millions of people including some of the most needy.

Why aren’t we doing this already!?

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). People are given competing offers that contain an amount of money and a number of shocks to be delivered, either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Equally perhaps, and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

Happy Capybara Day! Or the power of culture

JDN 2457131 EDT 14:33.

Did you celebrate Capybara Day yesterday? You didn’t? Why not? We weren’t able to find any actual capybaras this year, but maybe next year we’ll be able to plan better and find a capybara at a zoo; unfortunately the nearest zoo with a capybara appears to be in Maryland. But where would we be without a capybara to consult annually on the stock market?

Right now you are probably rather confused, perhaps wondering if I’ve gone completely insane. This is because Capybara Day is a holiday of my own invention, one which only a handful of people have even heard about.

But if you think we’d never have a holiday so bizarre, think again: For all I did was make some slight modifications to Groundhog Day. Instead of consulting a groundhog about the weather every February 2, I proposed that we consult a capybara about the stock market every April 17. And if you think you have some reason why groundhogs are better at predicting the weather (perhaps because they at least have some vague notion of what weather is) than capybaras are at predicting the stock market (since they have no concept of money or numbers), think about this: Capybara Day could produce extremely accurate predictions, provided only that people actually believed it. The prophecy of rising or falling stock prices could very easily become self-fulfilling. If it were a cultural habit of ours to consult capybaras about the stock market, capybaras would become good predictors of the stock market.

That might seem a bit far-fetched, but think about this: Why is there a January Effect? (To be fair, some researchers argue that there isn’t, and the apparent correlation between higher stock prices and the month of January is simply an illusion, perhaps the result of data overfitting.)

But I think it probably is real, and moreover has some very obvious reasons behind it. In this I’m in agreement with Richard Thaler, a founder of cognitive economics who wrote about such anomalies in the 1980s. December is a time when two very culturally-important events occur: The end of the year, during which many contracts end, profits are assessed, and tax liabilities are determined; and Christmas, the greatest surge of consumer spending and consumer debt.

The first effect means that corporations are very likely to liquidate assets—particularly assets that are running at a loss—in order to minimize their tax liabilities for the year, which will drive down prices. The second effect means that consumers are in search of financing for extravagant gift purchases, and those who don’t run up credit cards may instead sell off stocks. This is if anything a more rational way of dealing with the credit constraint, since interest rates on credit cards are typically far in excess of stock returns. But this surge of selling due to credit constraints further depresses prices.

In January, things return to normal; assets are repurchased, debt is repaid. This brings prices back up to where they were, which results in a higher than normal return for January.

Neoclassical economists are loath to admit that such a seasonal effect could exist, because it violates their concept of how markets work—and to be fair, the January Effect is actually weak enough to be somewhat ambiguous. But actually it doesn’t take much deviation from neoclassical models to explain the effect: Tax policies and credit constraints are basically enough to do it, so you don’t even need to go that far into understanding human behavior. It’s perfectly rational to behave this way given the distortions that are created by taxes and credit limits, and the arbitrage opportunity is one that you can only take advantage of if you have large amounts of credit and aren’t worried about minimizing your tax liabilities. It’s important to remember just how strong the assumptions of models like CAPM truly are; in addition to the usual infinite identical psychopaths, CAPM assumes there are no taxes, no transaction costs, and unlimited access to credit. I’d say it’s amazing that it works at all, but actually, it doesn’t—check out this graph of risk versus return and tell me if you think CAPM is actually giving us any information at all about how stock markets behave. It frankly looks like you could have drawn a random line through a scatter plot and gotten just as good a fit. Knowing how strong its assumptions are, we would not expect CAPM to work—and sure enough, it doesn’t.
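
For reference, the model being criticized is just a one-line formula: expected return is the risk-free rate plus beta times the market’s excess return. A minimal sketch (the input values are arbitrary illustrative numbers):

```python
def capm_expected_return(risk_free: float, beta: float, market_return: float) -> float:
    """CAPM's security market line: expected return grows linearly in
    beta. The derivation bakes in the assumptions noted above: no
    taxes, no transaction costs, unlimited credit, and identical
    rational investors."""
    return risk_free + beta * (market_return - risk_free)

# A stock 1.5x as sensitive to the market (in the beta sense):
print(capm_expected_return(risk_free=0.02, beta=1.5, market_return=0.08))
```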

Of course, that leaves the question of why our tax policy would be structured in this way—why make the year end on December 31 instead of some other date? And for that, you need to go back through hundreds of years of history, the Gregorian calendar, which in turn was influenced by Christianity, and before that the Julian calendar—in other words, culture.

Culture is one of the most powerful forces that influences human behavior—and also one of the strangest and least-understood. Economic theory is basically silent on the matter of culture. Typically it is ignored entirely, assumed to be irrelevant against the economic incentives that are the true drivers of human action. (There’s a peculiar emotion many neoclassical economists express that I can best describe as self-righteous cynicism, the attitude that we alone—i.e., economists—understand that human beings are not the noble and altruistic creatures many imagine us to be, nor beings of art and culture, but simply cold, calculating machines whose true motives are reducible to profit incentives—and all who think otherwise are being foolish and naïve; true enlightenment is understanding that human beings are infinite identical psychopaths. This is the attitude epitomized by the economist who once sent me an email with “altruism” written in scare quotes.)

Occasionally culture will be invoked as an external (in jargon, exogenous) force, to explain some aspect of human behavior that is otherwise so totally irrational that even invoking nonsensical preferences won’t make it go away. When a suicide bomber blows himself up in a crowd of people, it’s really pretty hard to explain that in terms of rational profit incentives—though I have seen it tried. (It could be self-interest at a larger scale, like families or nations—but then, isn’t that just the tribal paradigm I’ve been arguing for all along?)

But culture doesn’t just motivate us to do extreme or wildly irrational things. It motivates us all the time, often in quite beneficial ways; we wait in line, hold doors for people walking behind us, tip waiters who serve us, and vote in elections, not because anyone pressures us directly to do so (unlike, say, Australia, we do not have compulsory voting) but because it’s what we feel we ought to do. There is a sense of altruism—and altruism provides the ultimate justification for why it is right to do these things—but the primary motivator in most cases is culture: that’s what people do, and are expected to do, around here.

Indeed, even when there is a direct incentive against behaving a certain way—like criminal penalties against theft—the probability of actually suffering a direct penalty is generally so low that it really can’t be our primary motivation. Instead, the reason we don’t cheat and steal is that we think we shouldn’t, and a major part of why we think we shouldn’t is that we have cultural norms against it.

We can actually observe differences in cultural norms across countries in the laboratory. In this 2008 study by Massimo Castro (PDF), comparing British and Italian people playing an economic game called the public goods game—in which you can pay a cost yourself to benefit the group as a whole—it was found not only that people were less willing to benefit groups of foreigners than groups of compatriots, but also that British people were overall more generous than Italian people. This 2010 study by Gächter et al. (actually Joshua Greene talked about it last week) compared how people play the game in various cities and found three basic patterns: In Western European and American cities such as Zurich, Copenhagen, and Boston, cooperation started out high and remained high throughout; people were just cooperative in general. In Asian cities such as Chengdu and Seoul, cooperation started out low, but if people were punished for not cooperating, cooperation would improve over time, eventually reaching about the same place as in the highly cooperative cities. And in Mediterranean cities such as Istanbul, Athens, and Riyadh, cooperation started low and stayed low—even when people could be punished for not cooperating, nobody actually punished them. (These patterns are broadly consistent with the World Bank corruption ratings of these regions, by the way; Western Europe shows very low corruption, while Asia and the Mediterranean show high corruption. Of course this isn’t all that’s going on—and Asia isn’t much less corrupt than the Middle East, though this experiment might make you think it is.)
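The incentive structure of the public goods game can be written down in a few lines. Here is a minimal sketch in Python; the endowment and multiplier are my own illustrative numbers, not the parameters used in these studies.

```python
# Minimal public goods game; parameters are illustrative, not from the cited studies.
def payoffs(contributions, endowment=20, multiplier=1.6):
    """Each player keeps (endowment - contribution); the pooled
    contributions are multiplied and split equally among all players."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    return [endowment - c + share for c in contributions]

# Everyone contributing fully beats everyone contributing nothing...
full = payoffs([20, 20, 20, 20])   # 32 each
none = payoffs([0, 0, 0, 0])       # 20 each

# ...but any one player does better by free-riding on the rest.
defect = payoffs([0, 20, 20, 20])  # defector gets 44, cooperators get 24
```

The tension is visible immediately: universal cooperation beats universal defection, but whatever the others do, each individual does better by contributing nothing. That is why the punishment option in these experiments matters so much.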

Interestingly, these cultural patterns showed Melbourne as behaving more like an Asian city than a Western European one—perhaps being in the Pacific has rubbed off on Australia more than they realize.

This is very preliminary, cutting-edge research I’m talking about, so be careful about drawing too many conclusions. But in general we’ve begun to find some fairly clear cultural differences in economic behavior across different societies. While this would not be at all surprising to a sociologist or anthropologist, it’s the sort of thing that economists have insisted for years is impossible.

This is the frontier of cognitive economics, in my opinion. We know that culture is a very powerful motivator of our behavior, and it is time for us to understand how it works—and then, how it can be changed. We know that culture can be changed—cultural norms do change over time, sometimes remarkably rapidly; but we have only a faint notion of how or why they change. Changing culture has the power to do things that simply changing policy cannot, however; policy requires enforcement, and when the enforcement is removed the behavior will often disappear. But if a cultural norm can be imparted, it could sustain itself for a thousand years without any government action at all.

The cognitive science of morality part I: Joshua Greene

JDN 2457124 EDT 15:33.

Thursday and Friday of this past week there was a short symposium at the University of Michigan called “The Cognitive Science of Moral Minds“, sponsored by the Weinberg Cognitive Science Institute, a new research institute at Michigan. It was founded by a former investment banker, because those are the only people who actually have money these days—and Michigan, like most universities, will pretty much take money from whoever offers it (including naming buildings after those people and not even changing the name after it’s revealed that the money was obtained in a $550-million fraud scheme, for which he was fined $200 million, because that’s apparently how our so-called “justice” system so-called “works”. A hint for the SEC: If the fine paid divided by the amount defrauded would be a sensible rate for a marginal income tax, that’s not a punishment). So far as I know Weinberg isn’t a white-collar criminal the way Wyly is, so that’s good at least. Still, why are we relying upon investment bankers to decide what science institutes we’ll found?

The Weinberg Institute was founded just last year. Yes, four years after I got my bachelor’s degree in cognitive science from Michigan, they decide to actually make that a full institute instead of an awkward submajor of the psychology department. Oh, and did I mention how neither the psychology nor the economics department would support my thesis research in behavioral economics but then called in Daniel Kahneman as the keynote speaker at my graduation? Yeah, sometimes I think I’m a little too cutting-edge for my own good.

The symposium had Joshua Greene of Harvard and Molly Crockett of Oxford, both of whom I’d been hoping to meet for a few years now. I finally got the chance! (It also had Peter Railton—likely not hard to get, seeing as he works at our own philosophy department, but still has some fairly interesting ideas—and some law professor I’d never heard of named John Mikhail, whose talk was really boring.) I asked Greene how I could get in on his research, and he said I should do a PhD at Harvard… which is something I’ve been trying to convince Harvard of for three years now—they keep not letting me in.

Anyway… the symposium was actually quite good, and the topic of moral cognition is incredibly fascinating and of course incredibly relevant to Infinite Identical Psychopaths.

Let’s start with Greene’s work. His basic research program is studying what our brains are doing when we try to resolve moral dilemmas. Normally I’m not a huge fan of fMRI research, because it’s just so damn coarse; I like to point out that it is basically equivalent to trying to understand how your computer works by running a voltmeter over the motherboard. But Greene does a good job of not over-interpreting results and combining careful experimental methods to really get a better sense of what’s going on.

There are basically two standard moral dilemmas people like to use in moral cognition research, and frankly I think this is a problem, because they don’t only differ in the intended way but also in many other ways; also once you’ve heard them, they no longer surprise you, so if you ever are a subject in one moral cognition experiment, it’s going to color your responses in any others from then on. I think we should come up with a much more extensive list of dilemmas that differ in various different dimensions; this would also make it much less likely for someone to already have seen them all before. A few weeks ago I made a Facebook post proposing a new dilemma of this sort, and the response, while an entirely unscientific poll, at least vaguely suggested that something may be wrong with the way Greene and others interpret the two standard dilemmas.

What are the standard dilemmas? They are called the trolley dilemma and the footbridge dilemma respectively; collectively they are trolley problems, of which there are several—but most aren’t actually used in moral cognition research for some reason.

In the trolley dilemma, there is, well, a trolley, hurtling down a track on which, for whatever reason, five people are trapped. There is another track, and you can flip a switch to divert the trolley onto that track, which will save those five people; but alas there is one other person trapped on that other track, who will now die. Do you flip the switch? Like most people, I say “Yes”.

In the footbridge dilemma, the trolley is still hurtling toward five people, but now you are above the track, standing on a footbridge beside an extremely fat man. The man is so fat, in fact, that if you push him in front of the trolley he will cause it to derail before it hits the five other people. You yourself are not fat enough to achieve this. Do you push the fat man? Like most people, I say “No.”

I actually hope you weren’t familiar with those dilemmas before, because your first impression is really useful to what I’m about to say next: Aren’t those really weird?

I mean, really weird, particularly the second one—what sort of man is fat enough to stop a trolley, yet nonetheless light enough or precariously balanced enough that I can reliably push him off a footbridge? These sorts of dilemmas are shades of the plugged-in-violinist; well, if the Society of Violin Enthusiasts ever does that, I suppose you can unplug the violinist—but what the hell does that have to do with abortion? (At the end of this post I’ve made a little appendix about the plugged-in-violinist and why it fails so miserably as an argument, but since it’s tangential I’ll move on for now.)

Even the first trolley problem, which seems a paragon of logical causality by comparison, is actually pretty bizarre. What are these people doing on the tracks? Why can’t they get off the tracks? Why is the trolley careening toward them? Why can’t the trolley be stopped some other way? Why is nobody on the trolley? What is this switch doing here, and why am I able to switch tracks despite having no knowledge, expertise or authority in trolley traffic control? Where are the proper traffic controllers? (There’s actually a pretty great sequence in Stargate: Atlantis where they have exactly this conversation.)

Now, if your goal is only to understand the core processes of human moral reasoning, using bizarre scenarios actually makes some sense; you can precisely control the variables—though, as I already said, they really don’t usually—and see what exactly it is that makes us decide right from wrong. Would you do it for five? No? What about ten? What about fifty? Just what is the marginal utility of pushing a fat man off a footbridge? What if you could flip a switch to drop him through a trapdoor instead of pushing him? (Actually Greene did do that one, and the result is that more people do it than would push him, but not as many as would flip the switch to shift the track.) You’d probably do it if he willingly agreed, right? What if you had to pay his family $100,000 in life insurance as part of the deal? Does it matter if it’s your money or someone else’s? Does it matter how much you have to pay his family? $1,000,000? $1,000? Only $10? If he only needs $1 of enticement, is that as good as giving free consent?

You can go the other way as well: So you’d flip the switch for five? What about three? What about two? Okay, you strict act-utilitarian you: Would you do it for only one? Would you flip a coin because the expected marginal utility of two random strangers is equal? You wouldn’t, would you? So now your intervention does mean something, even if you think it’s less important than maximizing the number of lives saved. What if it were 10,000,001 lives versus 10,000,000 lives? Would you nuke a slightly smaller city to save a slightly larger one? Does it matter to you which country the cities are in? Should it matter?

Greene’s account is basically the standard one, which is that the reason we won’t push the fat man off the footbridge is that we have an intense emotional reaction to physically manhandling someone, but in the case of flipping the switch we don’t have that reaction, so our minds are clearer and we can simply rationally assess that five lives matter more than one. Greene maintains that this emotional response is irrational, an atavistic holdover from our evolutionary history, and we would make society better by suppressing it and going with the “rational”, (act-)utilitarian response. (I know he knows the difference between act-utilitarian and rule-utilitarian, because he has a PhD in philosophy. Why he didn’t mention it in the lecture, I cannot say.)

He does make a pretty good case for that, including the fMRIs showing that emotion centers light up a lot more for the footbridge dilemma than for the trolley dilemma; but I must say, I’m really not quite convinced.

Does flipping the switch to drop him through a trapdoor yield more support because it’s emotionally more distant? Or because it makes a bit more sense? We’ve solved the “Why can I push him hard enough?” problem, albeit not the “How is he heavy enough to stop a trolley?” problem.

I’ve also thought about ways to make the gruesome manhandling happen but nonetheless make more logical sense, and the best I’ve come up with is what we might call the lion dilemma: There is a hungry lion about to attack a group of five children and eat them all. You are standing on a ridge above, where the lion can’t easily get to you; if he eats the kids you’ll easily escape. Beside you is a fat man who weighs as much as the five children combined. If you push him off the ridge, he’ll be injured and unable to run, so the lion will attack him first, and then after eating him the lion will no longer be hungry and will leave the children alone. You yourself aren’t fat enough to make this work, however; you only weigh as much as two of the kids, not all five. You don’t have any weapons to kill the lion or anyone you could call for help, but you are sure you can push the fat man off the ridge quickly enough. Do you push the fat man off the ridge? I think I do—as did most of my friends in my aforementioned totally unscientific Facebook poll—though I’m not as sure of that as I was about flipping the switch. Yet nobody can deny the physicality of my action; not only am I pushing him just as before, he’s not going to be merely run over by a trolley, he’s going to be mauled and eaten by a lion. Of course, I might actually try something else, like yelling, “Run, kids!” and sliding down with the fat man to try to wrestle the lion together; and again we can certainly ask what the seven of us are doing out here unarmed and alone with lions about. But given the choice between the kids being eaten, myself and three of the kids being eaten, or the fat man being eaten, the last one does actually seem like the least-bad option.

Another good one, actually by the same Judith Thomson of plugged-in-violinist fame (for once her dilemma actually makes some sense; seriously, read A Defense of Abortion and you’ll swear she was writing it on psilocybin), is the transplant dilemma: You’re a doctor in a hospital where there are five dying patients with different organ failures—two kidneys, one liver, one heart, and one lung, let’s say. You are one of the greatest transplant surgeons of all time, and there is no doubt in your mind that if you had a viable organ for each of them, you could save their lives—but you don’t. Yet as it so happens, a young man is visiting town and came to the hospital after severely breaking his leg in a skateboarding accident. He is otherwise in perfect health, and what’s more, he’s an organ donor and actually a match for all five of your dying patients. You could quietly take him into the surgical wing, give him a little too much anesthesia “by accident” as you operate on his leg, and then take his organs and save all five other patients. Nobody would ever know. Do you do it? Of course you don’t, you’re not a monster. But… you could save five by killing one, right? Is it just your irrational emotional aversion to cutting people open? No, you’re a surgeon—and I think you’ll be happy to know that actual surgeons agree that this is not the sort of thing they should be doing, despite the fact that they obviously have no problem cutting people open for the greater good all the time. The aversion to harming your own patient may come from (or be the source of) the Hippocratic Oath—are we prepared to say that the Hippocratic Oath is irrational?

I also came up with another really interesting one I’ll call the philanthropist assassin dilemma. One day, as you are walking past a dark alley, a shady figure pops out and makes you an offer: If you take this little vial of cyanide and pour it in the coffee of that man across the street while he’s in the bathroom, a donation of $100,000 will be made to UNICEF. If you refuse, the shady character will keep the $100,000 for himself. Never mind the weirdness—they’re all weird, and unlike the footbridge dilemma this one actually could happen even though it probably won’t. Assume that despite being a murderous assassin this fellow really intends to make the donation if you help him carry out this murder. $100,000 to UNICEF would probably save the lives of over a hundred children. Furthermore, you can give the empty vial back to the philanthropist assassin, and since there’s no logical connection between you and the victim, there’s basically no chance you’d ever be caught even if he is. (Also, how can you care more about your own freedom than the lives of a hundred children?) How can you justify not doing it? It’s just one man you don’t know, who apparently did something bad enough to draw the ire of philanthropist assassins, against the lives of a hundred innocent children! Yet I’m sure you share my strong intuition that you should not take the offer. It doesn’t require manhandling anybody—just a quick little pour into a cup of coffee—so that can’t be it. A hundred children! And yet I still don’t see how I could carry out this murder. Is that irrational, as Greene claims? Should we be prepared to carry out such a murder if the opportunity ever arises?

Okay, how about this one then, the white-collar criminal dilemma? You are a highly-skilled hacker, and you could hack into the accounts of a major bank and steal a few dollars from each account, gathering a total of $1 billion that you can then immediately donate to UNICEF, covering their entire operating budget for this year and possibly next year as well, saving the lives of countless children—perhaps literally millions of children. Should you do it? Honestly in this case I think maybe you should! (Maybe Sam Wyly isn’t so bad after all? He donated his stolen money to a university, which isn’t nearly as good as UNICEF… also he stole $550 million and donated $10 million, so there’s that.) But now suppose that you can only get into the system if you physically break into the bank and kill several of the guards. What are a handful of guards against millions of children? Yet you sound like a Well-Intentioned Extremist in a Hollywood blockbuster (seriously, someone should make this movie), and your action certainly doesn’t seem as unambiguously heroic as one might think of any act that saves the lives of a million children and only kills a handful of people. Why is it that I think we should lobby governments and corporations to make these donations voluntarily, even if it takes a decade longer, rather than finding someone who can steal the money by force? Children will die in the meantime! Don’t those children matter?

I don’t have a good answer, actually. Maybe Greene is right and it’s just this atavistic emotional response that prevents me from seeing that these acts would be justified. But then again, maybe it’s not—maybe there’s something more here that Greene is missing.

And that brings me back to the act-utilitarian versus rule-utilitarian distinction, which Greene ignored in his lecture. In act-utilitarian terms, obviously you save the children; it’s a no-brainer, 100 children > 1 hapless coffee-drinker and 1,000,000 children >> 10 guards. But in rule-utilitarian terms, things come out a bit different. What kind of society would we live in, if at any moment we could fear the wrath of philanthropist assassins? Right now, there’s plenty of money in the bank for anyone to steal, but what would happen to our financial system if we didn’t punish bank robbers so long as they spent the money on the right charities? All of it, or just most of it? And which charities are the right charities? What would our medical system be like if we knew that our organs might be harvested at any time so long as there were two or more available recipients? Despite these dilemmas actually being a good deal more realistic than the standard trolley problems, the act-utilitarian response still relies upon assuming that this is an exceptional circumstance which will never be heard about or occur again. Yet those are by definition precisely the sort of moral principles we can’t live our lives by.

This post has already gotten really long, so I won’t even get into Molly Crockett’s talk until a later post. I probably won’t do it as the next post either, but the one after that, because next Friday is Capybara Day (“What?” you say? Stay tuned).

Appendix: The plugged-in-violinist


What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets, precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go and Von Neumann and Morgenstern quite literally proved mathematically that anything else is irrational.

The second is uncertainty, a distinction which was most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, but nor can we clearly assign probabilities either. Either P = NP or not, as a mathematical theorem (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill.

You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!) because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says, then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false, I contend that it is possible that it is true. Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.
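Just to put numbers on “few-shot”: even a dedicated player cannot meaningfully sample a 1-in-400-million event. A quick check, where the number of plays is my own illustrative figure:

```python
# Odds as quoted above; 'plays' is an illustrative lifetime of ticket-buying.
p_win = 1 / 400_000_000
plays = 200

# Probability of winning even once across all those plays:
p_ever_win = 1 - (1 - p_win) ** plays  # about 5e-7, one in two million
```

Even a lifetime of ticket-buying leaves you almost surely in the single world where you never win; the “average” world in which tickets pay off is never actually experienced.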

If we were playing a classical risk game, expected utility would obviously be right. Suppose, for example, that you know you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income, or a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; that one or two years where you’ll experience 8 QALY per year isn’t worth dropping from 4.602060 QALY per year to 4.602049 QALY per year for the other nine hundred and ninety-eight million years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year over and over again for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
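The QALY figures above are consistent with valuing a year at the base-10 logarithm of its income, an assumption I am inferring from the numbers rather than something stated explicitly in the post. On that scale the comparison takes one line:

```python
from math import log10

# Assumption (my inference from the figures quoted): QALY per year = log10(income).
p_win = 0.0000000025  # 0.000,000,25%
u_sure = log10(40_000)  # about 4.602060

# Expected per-year utility of the gamble: almost always $39,999, very rarely $100M.
u_gamble = (1 - p_win) * log10(39_999) + p_win * log10(100_000_000)

# The guaranteed income wins: the rare 8-QALY jackpot year cannot offset
# a billion years of earning one dollar less.
assert u_sure > u_gamble
```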

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; your 99.9% confidence interval of wealth at the end of the two days runs from $850,000 to $6,350,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
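That range can be reconstructed from the exact binomial distribution: with 48 independent 50/50 plays, the tightest symmetric interval holding at least 99.9% of the probability runs from 13 to 35 wins, which maps to total winnings of $850,000 to $6,350,000. A sketch of the computation:

```python
from math import comb

PLAYS = 48

def total_winnings(wins):
    """Net result of 48 plays: win $200,000 or lose $50,000 each time."""
    return 200_000 * wins - 50_000 * (PLAYS - wins)

# Exact probability of each possible number of wins in a fair 50/50 game.
pmf = [comb(PLAYS, k) / 2**PLAYS for k in range(PLAYS + 1)]

# Shrink the interval symmetrically while the interior still holds >= 99.9%.
lo, hi = 0, PLAYS
while sum(pmf[lo + 1:hi]) >= 0.999:
    lo, hi = lo + 1, hi - 1

print(lo, hi, total_winnings(lo), total_winnings(hi))  # 13 35 850000 6350000
```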

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason why then comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college) and gaining $200,000 might actually be enough better to justify the risk. Then the effect can be averaged over your lifetime; let’s say you make $50,000 per year over 40 years. Losing $50,000 means making your average income $48,750, while gaining $200,000 means making your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74; that raises your expected utility from 4.70 to 4.715.
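Reading the utility figures as the base-10 logarithm of average annual income (again, an assumption I am inferring from the numbers in the text), the arithmetic here checks out:

```python
from math import log10

YEARS = 40
BASE = 50_000  # annual income

# One-time gains/losses averaged over a 40-year working life;
# utility scale assumed to be log10(average income).
u_status_quo = log10(BASE)                   # about 4.70
u_after_loss = log10(BASE - 50_000 / YEARS)  # average 48,750 -> about 4.69
u_after_gain = log10(BASE + 200_000 / YEARS) # average 55,000 -> about 4.74

u_gamble = 0.5 * u_after_loss + 0.5 * u_after_gain  # about 4.71 > 4.70
```

So with credit to smooth the loss over a lifetime, the gamble is (just barely) worth taking; without credit, the 50% chance of utility zero swamps everything, exactly as the next paragraph says.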

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income by $200,000 this year aren’t nearly great enough to take that chance. Your expected utility goes from 4.70 to a 50% chance of 5.30 and a 50% chance of zero.

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.

How following the crowd can doom us all

JDN 2457110 EDT 21:30

Humans are nothing if not social animals. We like to follow the crowd, do what everyone else is doing—and many of us will continue to do so even if our own behavior doesn’t make sense to us. There is a very famous experiment in cognitive science that demonstrates this vividly.

People are given a very simple task to perform several times: We show you line X and lines A, B, and C. Now tell us which of A, B or C is the same length as X. Couldn’t be easier, right? But there’s a trick: seven other people are in the same room performing the same experiment, and they all say that B is the same length as X, even though you can clearly see that A is the correct answer. Do you stick with what you know, or say what everyone else is saying? Typically, you say what everyone else is saying. Over 18 trials, 75% of people followed the crowd at least once, and some people followed the crowd every single time. Some people even began to doubt their own perception, wondering if B really was the right answer—there are four lights, anyone?

Given that our behavior can be distorted by others in such simple and obvious tasks, it should be no surprise that it can be distorted even more in complex and ambiguous tasks—like those involved in finance. If everyone is buying up Beanie Babies or Tweeter stock, maybe you should too, right? Can all those people be wrong?

In fact, matters are even worse with the stock market, because it is in a sense rational to buy into a bubble if you know that other people will as well. As long as you aren’t the last to buy in, you can make a lot of money that way. In speculation, you try to predict the way that other people will cause prices to move and base your decisions on that—but then everyone else is doing the same thing. Keynes called this a “beauty contest”; apparently in his day it was common to hold contests for picking the most beautiful photo—but how was beauty assessed? By how many people picked it! So you actually don’t want to choose the one you think is most beautiful; you want to choose the one you think most people will think is the most beautiful—or the one you think most people will think most people will think….
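Keynes’s regress of predictions about predictions can be made concrete with a standard toy model from behavioral game theory (my illustration, not from this post): in the “guess 2/3 of the average” game, a player who reasons k levels deep shrinks a naive guess by another factor of 2/3 at each level.

```python
def level_k_guess(k, level0=50.0, factor=2/3):
    """Guess of a player who reasons k steps deep in the
    'guess 2/3 of the average' game: level 0 guesses naively
    (here, 50); level k best-responds to level k-1."""
    guess = level0
    for _ in range(k):
        guess *= factor
    return guess

for k in (0, 1, 2, 5, 20):
    print(f"level {k:2d}: guess {level_k_guess(k):.2f}")
```

Each extra layer of “what will they think others will think” pushes the guess further down, toward zero in the limit; real players typically stop after only a level or two, which is exactly why anticipating the crowd, rather than the fundamentals, pays.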

Our herd behavior probably made a lot more sense when we evolved it millennia ago; when most threats are external and human beings don’t have much influence over their environment, the majority opinion is quite likely to be right, and can often give you an answer much faster than you could figure it out on your own. (If everyone else thinks a lion is hiding in the bushes, there’s probably a lion hiding in the bushes—and if there is, the last thing you want is to be the only one who didn’t run.) The problem arises when this tendency to follow the crowd feeds back on itself, and our behavior becomes driven not by the external reality but by an attempt to predict each other’s predictions of each other’s predictions. Yet this is exactly how financial markets are structured.

With this in mind, the surprise is not why markets are unstable—the surprise is why markets are ever stable. I think the main reason markets ever manage price stability is actually something most economists think of as a failure of markets: Price rigidity and so-called “menu costs”. If it’s costly to change your price, you won’t be constantly trying to adjust it to the mood of the hour—or the minute, or the microsecond—but instead trying to tie it to the fundamental value of what you’re selling, so that the price will stay close to that value for a long time to come. You may get shortages in times of high demand and gluts in times of low demand, but as long as those two things roughly balance out you’ll leave the price where it is. But if you can instantly and costlessly change the price however you want, you can raise it when people seem particularly interested in buying and lower it when they don’t, and then people can start trying to buy when your price is low and sell when it is high. If people were completely rational and had perfect information, this arbitrage would stabilize prices—but since they’re not, arbitrage attempts can over- or under-compensate, and thus result in cyclical or even chaotic changes in prices.
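The over- and under-compensation story can be caricatured in a few lines: a price is repeatedly pushed toward an assumed fundamental value, and the size of the push determines whether it settles smoothly, oscillates, or blows up. This is a deliberately crude sketch, not a model of any actual market:

```python
def adjust(price, fundamental=100.0, gain=1.0, steps=8):
    """Each step, move the price toward an (assumed) fundamental value,
    over- or under-shooting according to `gain`. gain < 1: smooth
    convergence; 1 < gain < 2: damped oscillation; gain > 2: each
    over-correction exceeds the last and the price diverges."""
    path = [price]
    for _ in range(steps):
        price = price + gain * (fundamental - price)
        path.append(round(price, 2))
    return path

print(adjust(120, gain=0.5))   # under-compensation: glides down to 100
print(adjust(120, gain=1.9))   # over-compensation: oscillates around 100
print(adjust(120, gain=2.1))   # too aggressive: oscillations keep growing
```

The only difference between the stable and unstable cases is how hard traders correct; nothing about the fundamental value changes at all.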

Our herd behavior then makes this worse, as more people buying leads to, well, more people buying, and more people selling leads to more people selling. If there were no other causes of behavior, the result would be prices that explode outward exponentially; but even with other forces trying to counteract them, prices can move suddenly and unpredictably.
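The pure feedback loop, with buying pressure proportional to the last price move and nothing pulling the price back, does indeed explode exponentially. A toy illustration (my own, with made-up parameters):

```python
def herd_prices(price=100.0, momentum=0.1, shock=1.0, steps=10):
    """Buying pressure proportional to the last price move: every rise
    recruits more buyers, every fall more sellers. With no stabilizing
    force at all, each move is (1 + momentum) times the previous one,
    so moves grow geometrically."""
    path = [price, price + shock]   # a small initial move
    for _ in range(steps):
        move = path[-1] - path[-2]
        path.append(path[-1] + (1 + momentum) * move)
    return path

print([round(p, 1) for p in herd_prices()])
```

Run it with a negative `shock` and the same dynamic produces an accelerating crash instead of a bubble; the feedback loop has no preference for direction.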

If most traders are irrational or under-informed while a handful are rational and well-informed, the latter can exploit the former for enormous amounts of money; this fact is often used to argue that irrational or under-informed traders will simply drop out, but it should only take you a few moments of thought to see why that isn’t necessarily true. The incentive isn’t just to be well-informed but also to keep others from being well-informed. If everyone were rational and had perfect information, stock trading would be the most boring job in the world, because the prices would never change except perhaps to grow with the growth rate of the overall economy. Wall Street therefore has every incentive in the world not to let that happen. And now perhaps you can see why they are so opposed to regulations that would require them to improve transparency or slow down market changes. Without the ability to deceive people about the real value of assets or trigger irrational bouts of mass buying or selling, Wall Street would make little or no money at all. Not only are markets inherently unstable by themselves, in addition we have extremely powerful individuals and institutions who are driven to ensure that this instability is never corrected.

This is why as our markets have become ever more streamlined and interconnected, instead of becoming more efficient as expected, they have actually become more unstable. They were never stable—and the gold standard made that instability worse—but despite monetary policy that has provided us with very stable inflation in the prices of real goods, the prices of assets such as stocks and real estate have continued to fluctuate wildly. Real estate isn’t as bad as stocks, again because of price rigidity—houses rarely have their values re-assessed multiple times per year, let alone multiple times per second. But real estate markets are still unstable, because of so many people trying to speculate on them. We think of real estate as a good way to make money fast—and if you’re lucky, it can be. But in a rational and efficient market, real estate would be almost as boring as stock trading; your profits would be driven entirely by population growth (increasing the demand for land without changing the supply) and the value added in construction of buildings. In fact, the population growth effect should be sapped by a land tax, and then you should only make a profit if you actually build things. Simply owning land shouldn’t be a way of making money—and the reason for this should be obvious: You’re not actually doing anything. I don’t like patent rents very much, but at least inventing new technologies is actually beneficial for society. Owning land contributes absolutely nothing, and yet it has been one of the primary means of amassing wealth for centuries and continues to be today.

But (so-called) investors and the banks and hedge funds they control have little reason to change their ways, as long as the system is set up so that they can keep profiting from the instability that they foster. Particularly when we let them keep the profits when things go well, but immediately rush to bail them out when things go badly, they have basically no incentive at all not to take maximum risk and seek maximum instability. We need a fundamentally different outlook on the proper role and structure of finance in our economy.

Fortunately one is emerging, summarized in a slogan among economically-savvy liberals: Banking should be boring. (Elizabeth Warren has said this, as have Joseph Stiglitz and Paul Krugman.) And indeed it should, for all banks are supposed to do is channel money from people who have it and don’t need it to people who need it but don’t have it. They aren’t supposed to be making large profits of their own, because they aren’t the ones actually adding value to the economy. Indeed it was never quite clear to me why banks should be private in the first place, though I guess it makes more sense than, oh, say, prisons.

Unfortunately, the majority opinion right now, at least among those who make policy, seems to be that banks don’t need to be restructured or even placed on a tighter leash; no, they need to be set free so they can work their magic again. Even otherwise reasonable, intelligent people quickly become unshakeable ideologues when it comes to the idea of raising taxes or tightening regulations. And as much as I’d like to think that it’s just a small but powerful minority of people who thinks this way, I know full well that a large proportion of Americans believe in these views and intentionally elect politicians who will act upon them.

All the more reason to break from the crowd, don’t you think?