What behavioral economics needs

Apr 16 JDN 2460049

The transition from neoclassical to behavioral economics has been a vital step forward in science. But lately we seem to have reached a plateau, with no major advances in the paradigm in quite some time.

It could be that there is work already being done which will, in hindsight, turn out to be significant enough to make that next step forward. But my fear is that we are getting bogged down by our own methodological limitations.

Neoclassical economics passed its obsession with mathematical sophistication on to us. To some extent this was inevitable; in order to impress neoclassical economists enough to convert some of them, we had to use fancy math. We had to show that we could do it their way in order to convince them why we shouldn’t—otherwise, they’d just have dismissed us the way they had dismissed psychologists for decades, as too “fuzzy-headed” to do the “hard work” of putting everything into equations.

But the truth is, putting everything into equations was never the right approach. Because human beings clearly don’t think in equations. Once we write down a utility function and get ready to take its derivative and set it equal to zero, we have already distanced ourselves from how human thought actually works.

When dealing with a simple physical system, like an atom, equations make sense. Nobody thinks that the electron knows the equation and is following it intentionally. That equation simply describes how the forces of the universe operate, and the electron is subject to those forces.

But human beings do actually know things and do things intentionally. And while an equation could be useful for analyzing human behavior in the aggregate—I’m certainly not objecting to statistical analysis—it really never made sense to say that people make their decisions by optimizing the value of some function. Most people barely even know what a function is, much less remember calculus well enough to optimize one.

Yet right now, behavioral economics is still all based in that utility-maximization paradigm. We don’t use the same simplistic utility functions as neoclassical economists; we make them more sophisticated and realistic. Yet in that very sophistication we make things more complicated, more difficult—and thus in at least that respect, even further removed from how actual human thought must operate.

The worst offender here is surely Prospect Theory. I recognize that Prospect Theory predicts human behavior better than conventional expected utility theory; nevertheless, it makes absolutely no sense to suppose that human beings actually do some kind of probability-weighting calculation in their heads when they make judgments. Most of my students—who are well-trained in mathematics and economics—can’t even do that probability-weighting calculation on paper, with a calculator, on an exam. (There’s also absolutely no reason to do it! All it does is make your decisions worse!) This is a totally unrealistic model of human thought.
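
To see just how unrealistic, here is a minimal sketch in Python of the standard Tversky-Kahneman (1992) version of the calculation, simplified to a single two-outcome gamble and a single weighting curve for gains and losses (the full cumulative version is messier still), using their published parameter estimates. Consider how much machinery this is for one coin-flip decision:

```python
# Prospect theory, sketched: Tversky-Kahneman (1992) parameters.
# Simplification (mine): one weighting curve for both gains and losses;
# the full cumulative theory uses rank-dependent weights and a separate
# curve for losses.

ALPHA = 0.88    # diminishing sensitivity to outcomes
LAMBDA = 2.25   # loss aversion: losses loom ~2.25x larger than gains
GAMMA = 0.61    # curvature of the probability weighting function

def value(x):
    """S-shaped value function: concave for gains, steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Inverse-S weighting: overweights small probabilities, underweights large ones."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect_value(gamble):
    """Subjective value of a list of (outcome, probability) pairs."""
    return sum(weight(p) * value(x) for x, p in gamble)

# A 50/50 bet: win $100 or lose $50. Expected value is +$25,
# yet the prospect value comes out negative, predicting refusal.
print(prospect_value([(100, 0.5), (-50, 0.5)]))   # about -5.4
```

Nobody is doing that in their head; and yet, as a description of aggregate behavior, it fits.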

This is not to say that human beings are stupid. We are still smarter than any other entity in the known universe—computers are rapidly catching up, but they haven’t caught up yet. It is just that whatever makes us smart must not be easily expressible as an equation that maximizes a function. Our thoughts are bundles of heuristics, each of which may be individually quite simple, but all of which together make us capable of not only intelligence, but something computers still sorely, pathetically lack: wisdom. Computers optimize functions better than we ever will, but we still make better decisions than they do.

I think that what behavioral economics needs now is a new unifying theory of these heuristics, which accounts for not only how they work, but how we select which one to use in a given situation, and perhaps even where they come from in the first place. This new theory will of course be complex; there are a lot of things to explain, and human behavior is a very complex phenomenon. But it shouldn’t be—mustn’t be—reliant on sophisticated advanced mathematics, because most people can’t do advanced mathematics (almost by construction—we would call it something different otherwise). If your model assumes that people are taking derivatives in their heads, your model is already broken. 90% of the world’s people can’t take a derivative.

I guess it could be that our cognitive processes in some sense operate as if they are optimizing some function. This is commonly posited for the human motor system, for instance; clearly baseball players aren’t actually solving differential equations when they throw and catch balls, but the trajectories that balls follow do in fact obey such equations, and the reliability with which baseball players can catch and throw suggests that they are in some sense acting as if they can solve them.

But I think that a careful analysis of even this classic example reveals some deeper insights that should call this whole notion into question. How do baseball players actually do what they do? They don’t seem to be calculating at all—in fact, if you asked them to try to calculate while they were playing, it would destroy their ability to play. They learn. They engage in practiced motions, acquire skills, and notice patterns. I don’t think there is anywhere in their brains that is actually doing anything like solving a differential equation. It’s all a process of throwing and catching, throwing and catching, over and over again, watching and remembering and subtly adjusting.

One thing that is particularly interesting to me about that process is that it is astonishingly flexible. It doesn’t really seem to matter what physical process you are interacting with; as long as it is sufficiently orderly, such a method will allow you to predict and ultimately control that process. You don’t need to know anything about differential equations in order to learn in this way—and, indeed, I really can’t emphasize this enough, baseball players typically don’t.

In fact, learning is so flexible that it can even perform better than calculation. The usual differential equations most people would think to use to predict the throw of a ball would assume ballistic motion in a vacuum, which is absolutely not how a curveball moves. In order to throw a curveball, the ball must interact with the air, and it must be launched with spin; curving a baseball relies very heavily on the Magnus Effect. I think it’s probably possible to construct an equation that would fully predict the motion of a curveball, but it would be a tremendously complicated one, and might not even have an exact closed-form solution. In fact, I think it would require solving the Navier-Stokes equations, for which there is an outstanding Millennium Prize. Since the viscosity of air is very low, maybe you could get away with approximating using the Euler fluid equations.
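
To give a flavor of what even the crude version looks like, here is a minimal numerical sketch (the drag and lift coefficients are rough assumed values, not measurements). Once you add air resistance and Magnus lift, there is no tidy closed-form trajectory; you have to integrate the motion step by step:

```python
import math

# A rough sketch of a pitched ball with air drag and Magnus lift,
# integrated step by step, versus the vacuum trajectory.
# Cd and Cl are assumed ballpark coefficients, for illustration only.

m, r = 0.145, 0.0366      # baseball mass (kg) and radius (m)
rho, g = 1.2, 9.81        # air density (kg/m^3), gravity (m/s^2)
A = math.pi * r ** 2      # cross-sectional area (m^2)
Cd, Cl = 0.35, 0.25       # assumed drag and Magnus (lift) coefficients

def pitch(speed=40.0, spin_sign=-1, air=True, plate=18.4):
    """Integrate a 2D pitch to the plate; return vertical displacement (m, negative = drop)."""
    vx, vy, x, y, dt = speed, 0.0, 0.0, 0.0, 1e-4
    while x < plate:
        v = math.hypot(vx, vy)
        ax, ay = 0.0, -g
        if air:
            Fd = 0.5 * rho * Cd * A * v * v   # drag, opposing the velocity
            Fm = 0.5 * rho * Cl * A * v * v   # Magnus, perpendicular to it
            ax += (-Fd * vx / v + spin_sign * Fm * -vy / v) / m
            ay += (-Fd * vy / v + spin_sign * Fm * vx / v) / m
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    return y

print(f"in a vacuum:        {pitch(air=False):+.2f} m at the plate")
print(f"with drag and spin: {pitch(air=True):+.2f} m at the plate")
```

And this is still a cartoon: a real simulation has to worry about the seams, the spin axis, turbulence. The pitcher worries about none of it.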

To be fair, a learning process that is adapting to a system that obeys an equation will yield results that become an ever-closer approximation of that equation. And it is in that sense that a baseball player can be said to be acting as if solving a differential equation. But this relies heavily on the system in question being one that obeys an equation—and when it comes to economic systems, is that even true?

What if the reason we can’t find a simple set of equations that accurately describe the economy (as opposed to equations of ever-escalating complexity that still utterly fail to describe the economy) is that there isn’t one? What if the reason we can’t find the utility function people are maximizing is that they aren’t maximizing anything?

What behavioral economics needs now is a new approach, something less constrained by the norms of neoclassical economics and more aligned with psychology and cognitive science. We should be modeling human beings based on how they actually think, not some weird mathematical construct that bears no resemblance to human reasoning but is designed to impress people who are obsessed with math.

I’m of course not the first person to have suggested this. I probably won’t be the last, or even the one who most gets listened to. But I hope that I might get at least a few more people to listen to it, because I have gone through the mathematical gauntlet and earned my bona fides. It is too easy to dismiss this kind of reasoning from people who don’t actually understand advanced mathematics. But I do understand differential equations—and I’m telling you, that’s not how people think.

The importance of encryption

Feb 6 JDN 2459617

In last week’s post I told you of the compounding failures of cryptocurrency, which largely follow from the fact that it is very bad at being, well, currency. It doesn’t have a steady, predictable value, it isn’t easy to make payments with, and it isn’t accepted for most purchases.

But I realized that I haven’t ever gotten around to explaining anything about the crypto side of things—just what is encryption, and why does it matter?

At its core, encryption is any technique designed to disguise information so that it can be seen only by its intended viewers. Humans have been using some form of encryption since shortly after we invented writing—though, like any technology, our encryption has greatly improved over time.

Encryption involves converting a plaintext, the information you want to keep secret, into a ciphertext, a disguised form, using a key, a piece of secret information that controls the conversion. Decryption is the opposite process: using a key to recover the plaintext from the ciphertext.

Some of the first forms of encryption were simple substitution ciphers: Have a set of rules that substitutes different letters for the original letters, such as “A becomes D, B becomes Q” and so on. This works pretty well, actually; but if each letter in the ciphertext always corresponds to the same letter in the plaintext, then you can look for patterns that would show up in text. For instance, E is usually the most common letter, so if you see a lot of three-letter sequences like BFP and P is a really common letter, odds are good that BFP is really THE and so you can guess that B=T, F=H, P=E.
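
Here is a toy sketch of such a cipher in Python, along with the leak that makes frequency analysis possible (the message and the randomly drawn key are just illustrations):

```python
import random
import string
from collections import Counter

# A toy substitution cipher: shuffle the alphabet once, substitute
# letter by letter. The fatal flaw is that the mapping never changes,
# so the letter-frequency pattern of English survives encryption.

alphabet = string.ascii_uppercase
shuffled = random.sample(alphabet, len(alphabet))
enc = dict(zip(alphabet, shuffled))
dec = {v: k for k, v in enc.items()}

def encrypt(plaintext):
    return "".join(enc.get(c, c) for c in plaintext.upper())

def decrypt(ciphertext):
    return "".join(dec.get(c, c) for c in ciphertext)

msg = "ATTACK THE EAST WALL OF THE CASTLE AT DAWN"
ct = encrypt(msg)
print(ct)
print(decrypt(ct))

# The leak: the most common ciphertext letters almost certainly
# stand for E, T, A, and friends.
print(Counter(c for c in ct if c.isalpha()).most_common(5))
```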

More sophisticated ciphers tried to solve this problem by changing the substitution pattern as they go. The Enigma used by Nazi Germany was essentially this: It had a complex electrical and mechanical apparatus dedicated to changing the substitution rules with each key-press, in a manner that would be unpredictable to an outside observer but could be readily reproduced by using another Enigma machine. (Of course, it wasn’t actually as secure as they thought.)

For most of history, people have used what’s called private-key encryption, where there is a single key used for both encryption and decryption. In that case, you need to keep the key secret: If someone were to get their hands on it, they could easily decrypt all of your messages.

This is a problem, because with private-key encryption, you need to give the key to the person you want to read the message. And if there is a safe way to send the key, well… why couldn’t you send the message that way?

In the late 20th century mathematicians figured out an alternative, public-key encryption, which uses two keys: A private key, used to decrypt, and a new, public key, which can be used to encrypt. The public key is called “public” because you don’t need to keep it secret. You can hand it out to anyone, and they can encrypt messages with it. Those messages will be readable by you and you alone—for only you have the private key.

With most methods of public-key encryption, senders can even use their own private key to prove to you that they are the ones who sent the message, a feature known as authentication. They encrypt the message using their private key and your public key, and then you decrypt it using their public key and your private key.

This is great! It means that anyone can send messages to anyone else, and everyone will know not only that their information is safe, but also who it came from. You never have to transmit the private keys at all. Problem solved.
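
If you want to see the asymmetry with your own eyes, here is a toy sketch of RSA, the classic public-key method, with deliberately tiny primes so you can follow the arithmetic. (Real keys use primes hundreds of digits long plus padding schemes; this version is utterly insecure and for illustration only.)

```python
# Toy RSA (requires Python 3.8+ for pow(e, -1, phi)).

p, q = 61, 53              # two secret primes
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120, kept secret
e = 17                     # public exponent (coprime with phi)
d = pow(e, -1, phi)        # private exponent: 2753

def encrypt(m):            # anyone holding the public key (n, e) can do this
    return pow(m, e, n)

def decrypt(c):            # only the holder of the private key d can do this
    return pow(c, d, n)

m = 65                     # a tiny "message" (must be less than n)
c = encrypt(m)
print(c, decrypt(c))       # 2790 65

# Authentication is the mirror image: transform with the private key,
# and anyone can check the result with the public key.
sig = pow(m, d, n)
print(pow(sig, e, n) == m)  # True: only the key holder could have signed this
```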


We now use public-key encryption for all sorts of things, particularly online: Online shopping, online banking, online tax filing. It’s not hard to imagine how catastrophic it could be if all of these forms of encryption were to suddenly fail.

In next week’s post, I’m going to talk about why I’m worried that something like that could one day happen, and what we might do in order to make sure it doesn’t. Stay tuned.

The power of exponential growth

JDN 2457390

There’s a famous riddle: If the water in a lakebed doubles in volume every day, and the lakebed started filling on January 1, and is half full on June 17, when will it be full?

The answer is of course June 18—if it doubles every day, it will go from half full to full in a single day.

But most people assume that half the work takes about half the time, so they usually give answers in December. Others try to correct, but don’t go far enough, and say something like October.

Human brains are programmed to understand linear processes. We expect things to come in direct proportion: If you work twice as hard, you expect to get twice as much done. If you study twice as long, you expect to learn twice as much. If you pay twice as much, you expect to get twice as much stuff.

We tend to apply this same intuition to situations where it does not belong, processes that are not actually linear but exponential. As a result, when we extrapolate the slow growth early in the process, we wildly underestimate the total growth in the long run.

For example, suppose we have two countries. Arcadia has a GDP of $100 billion per year, and they grow at 4% per year. Berkland has a GDP of $200 billion, and they grow at 2% per year. Assuming that they maintain these growth rates, how long will it take for Arcadia’s GDP to exceed Berkland’s?

If we do this intuitively, we might sort of guess that at 4% you’d add 100% in 25 years, and at 2% you’d add 100% in 50 years; so it should be something like 75 years, because then Arcadia will have added $300 billion while Berkland added $200 billion. You might even just fudge the numbers in your head and say “about a century”.

In fact, it is only about 35 years. You could solve this exactly by setting (100)(1.04^x) = (200)(1.02^x); but I have an intuitive method that I think may help you to estimate exponential processes in the future.

Divide the percentage into 69. (For some numbers it’s easier to use 70 or 72; remember, these are just approximations. The exact figure is 100*ln(2) = 69.3147…, and strictly you should divide it not by the percentage p but by 100*ln(1+p/100); for small p the two are nearly equal, which is why just using p works.) This is the time it will take to double.

So at 4%, Arcadia will double in about 17.5 years, quadrupling in 35 years. At 2%, Berkland will double in about 35 years. Thus, in 35 years, Arcadia will quadruple and Berkland will double, so their GDPs will be equal.
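
A few lines of Python will verify both the rule and the example (the exact doubling time at a growth rate of p% is ln(2)/ln(1+p/100) years):

```python
import math

# The rule of 69 versus the exact doubling time.
for p in (1, 2, 4, 7, 10):
    exact = math.log(2) / math.log(1 + p / 100)
    print(f"{p:2d}%: rule of 69 says {69 / p:5.1f} years; exact is {exact:5.1f}")

# Arcadia ($100B at 4%) catching Berkland ($200B at 2%):
# solve 100 * 1.04**x == 200 * 1.02**x.
x = math.log(2) / math.log(1.04 / 1.02)
print(f"Arcadia catches Berkland after {x:.1f} years")   # about 35.7
```

Notice that the rule drifts a little at higher rates (at 10%, the true doubling time is 7.3 years, not 6.9); that will matter in a moment.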

Economics is full of exponential processes: Compound interest is exponential, and over moderately long periods GDP and population both tend to grow exponentially. (In fact they grow logistically, which is similar to exponential until it gets very large and begins to slow down. If you smooth out our recessions, you can get a sense that since the 1940s, US GDP growth has slowed down from about 4% per year to about 2% per year.) It is therefore quite important to understand how exponential growth works.

Let’s try another one. If one account has $1 million, growing at 5% per year, and another has $1,000, growing at 10% per year, how long will it take for the second account to have more money in it?

69/5 is about 14, so the first account doubles in 14 years. 69/10 is about 7, so the second account doubles in 7 years. A factor of 1000 is about 10 doublings (2^10 = 1024), so the second account needs to have doubled 10 times more than the first account. Since it doubles twice as often, this means that it must have doubled 20 times while the other doubled 10 times. Therefore, it will take about 140 years.

In fact, it takes about 148 years (the smaller account finally pulls ahead during year 149), so our quick approximation lands in the right ballpark. The residual error comes from the rule of 69 itself: at 10% per year the true doubling time is 7.3 years rather than 6.9, and that small slippage compounds over twenty doublings.
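
You can check this one directly as well:

```python
import math

# $1,000 at 10% overtakes $1,000,000 at 5% when (1.10/1.05)**t == 1000.
t = math.log(1_000_000 / 1_000) / math.log(1.10 / 1.05)
print(f"crossover after {t:.1f} years")   # about 148.5
```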

This example is instructive in another way; 141 years is a pretty long time, isn’t it? You can’t just assume that exponential growth is “as fast as you want it to be”. Once people realize that exponential growth is very fast, they often overcorrect, assuming that exponential growth automatically means growth that is absurdly—or arbitrarily—fast. (XKCD made a similar point in this comic.)

I think the worst examples of this mistake are among Singularitarians. They—correctly—note that computing power has become exponentially greater and cheaper over time, doubling about every 18 months, which has been dubbed Moore’s Law. They assume that this will continue into the indefinite future (this is already problematic; the growth rate seems to be already slowing down). And therefore they conclude there will be a sudden moment, a technological singularity, at which computers will suddenly outstrip humans in every way and bring about a new world order of artificial intelligence basically overnight. They call it a “hard takeoff”; here’s a direct quote:

But many thinkers in this field including Nick Bostrom and Eliezer Yudkowsky worry that AI won’t work like this at all. Instead there could be a “hard takeoff”, a huge subjective discontinuity in the function mapping AI research progress to intelligence as measured in ability-to-get-things-done. If on January 1 you have a toy AI as smart as a cow, one which can identify certain objects in pictures and navigate a complex environment, and on February 1 it’s proved the Riemann hypothesis and started building a ring around the sun, that was a hard takeoff.

Wait… what? For someone like me who understands exponential growth, the last part is a baffling non sequitur. If computers start half as smart as us and double every 18 months, in 18 months, they will be as smart as us. In 36 months, they will be twice as smart as us. Twice as smart as us literally means that two people working together perfectly can match them—certainly a few dozen working realistically can. We’re not in danger of total AI domination from that. With millions of people working against the AI, we should be able to keep up with it for at least another 30 years. So are you assuming that this trend is continuing or not? (Oh, and by the way, we’ve had AIs that can identify objects and navigate complex environments for a couple years now, and so far, no ringworld around the Sun.)
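
Just to make that arithmetic explicit, here is the multiple implied by “starts half as smart as us, doubles every 18 months” (the starting point and the doubling period are the quoted assumptions, not mine):

```python
# Intelligence multiple after t years: 0.5 * 2**(t / 1.5).
for years in (1.5, 3, 10, 30):
    multiple = 0.5 * 2 ** (years / 1.5)
    print(f"after {years:4} years: {multiple:>9,.0f}x human intelligence")
```

Growth like that is fast, but it is the opposite of a discontinuity: it gives you years of machines that are merely somewhat smarter than us before you get machines that are vastly smarter.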

That same essay makes a biological argument, which misunderstands human evolution in a way that is surprisingly subtle yet ultimately fundamental:

If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

No, actually, what makes humans what we are is not that we are 1% smarter than chimpanzees.

First of all, we’re actually more like 200% smarter than chimpanzees, measured by encephalization quotient; they clock in at 2.49 while we hit 7.44. If you simply measure by raw volume, they have about 400 mL to our 1300 mL, so again roughly 3 times as big. But that’s relatively unimportant; with Moore’s Law, tripling only takes about 2.5 years.

But even having triple the brain power is not what makes humans different. It was a necessary condition, but not a sufficient one. Indeed, it was so insufficient that for about 200,000 years we had brains just as powerful as we do now and yet we did basically nothing in technological or economic terms—total, complete stagnation on a global scale. (And 200,000 years is the conservative estimate of when we first had brains of the same size and structure as we have today.)

What makes humans what we are? Cooperation. We are what we are because we are together.

The capacity of human intelligence today is not 1300 mL of brain. It’s more like 1.3 gigaliters of brain, where a gigaliter, a billion liters, is about the volume of the Empire State Building. We have the intellectual capacity we do not because we are individually geniuses, but because we have built institutions of research and education that combine, synthesize, and share the knowledge of billions of people who came before us. Isaac Newton didn’t understand the world as well as the average 21st-century third-grader does. Does the third-grader have more brain? Of course not. But they absolutely do have more knowledge.

(I recently finished my first playthrough of Legacy of the Void, in which a central point concerns whether the Protoss should detach themselves from the Khala, a psychic union which combines all their knowledge and experience into one. I won’t spoil the ending, but let me say this: I can understand their hesitation, for it is basically our equivalent of the Khala—first literacy, and now the Internet—that has made us what we are. It would no doubt be the Khala that made them what they are as well.)

Is AI still dangerous? Absolutely. There are all sorts of damaging effects AI could have, culturally, economically, militarily—and some of them are already beginning to happen. I even agree with the basic conclusion of that essay that OpenAI is a bad idea because the cost of making AI available to people who will abuse it or create one that is dangerous is higher than the benefit of making AI available to everyone. But exponential growth not only isn’t the same thing as instantaneous takeoff, it isn’t even compatible with it.

The next time you encounter an example of exponential growth, try this. Don’t just fudge it in your head, don’t overcorrect and assume everything will be fast—just divide the percentage into 69 to see how long it will take to double.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity.

If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but that don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may not simply see the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as was whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression nonetheless had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
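
To see how spectacular that failure is, here is a quick back-of-the-envelope calculation of how often various “sigma” moves should occur if daily returns really were normally distributed (the move sizes are just illustrative):

```python
import math

# Under a standard normal, P(|X| > k sigma) = erfc(k / sqrt(2)).
# Convert that into "one such daily move every N trading days."
for k in (3, 5, 10):
    p = math.erfc(k / math.sqrt(2))
    days = 1 / p
    print(f"{k:2d}-sigma move: once every {days:,.0f} trading days "
          f"(~{days / 252:,.0f} years)")
```

A 10-sigma daily move should happen roughly never in the lifetime of the universe; actual markets have produced moves that size within living memory.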

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”