Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for a while, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because for the last four years she had been fighting with Big Machine Records over the rights to her previous work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she becomes part of the bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

“Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. That is, the most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Taylor Swift for a concert), or even labor in the ordinary sense. The most important factor is so-called (and honestly poorly named) “human capital.”

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now the largest video game company in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the jobs which are most human-capital-intensive—like doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader (mostly by buying ETFs). But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)
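
To be concrete, here is a deliberately trivial sketch of the kind of algorithm I mean (my own illustration, not a real trading system; the ticker and dollar amounts are placeholders, and none of this is investment advice): buy a broad-market ETF on a fixed schedule and never sell.

```python
# A minimal sketch, well under 50 lines: dollar-cost average into one broad
# index ETF and hold. "BROAD_MARKET_ETF" and the dollar amounts are placeholders.

def trading_strategy(cash_per_month: float, months: int,
                     ticker: str = "BROAD_MARKET_ETF"):
    """Generate one buy order per month; never sell, never time the market."""
    return [{"month": m, "action": "BUY", "ticker": ticker, "amount": cash_per_month}
            for m in range(months)]

orders = trading_strategy(cash_per_month=500.0, months=120)
print(f"{len(orders)} orders over 10 years: buy every month, sell never.")
```

The point is that this requires no expertise whatsoever, which is exactly why it’s strange that stock trading gets treated as if it were human-capital-intensive.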

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have captured only a small fraction of this increase in the form of higher wages; by far the bigger effect has been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, then through whatever paradigm comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, even she may not be for much longer.)

What am I without you?

Jul 16 JDN 2460142

When this post goes live, it will be my husband’s birthday. He will probably read it before that, as he follows my Patreon. In honor of his birthday, I thought I would make romance the topic of today’s post.

In particular, there’s a certain common sentiment that is usually viewed as romantic, which I in fact think is quite toxic. This is the notion that “Without you, I am nothing”—that in the absence of the one we love, we would be empty or worthless.

Here is this sentiment being expressed by various musicians:

I’m all out of love,
I’m so lost without you.
I know you were right,
Believing for so long.
I’m all out of love,
What am I without you?

– “All Out of Love”, Air Supply

Well what am I, what am I without you?
What am I without you?
Your love makes me burn.
No, no, no
Well what am I, what am I without you?
I’m nothing without you.
So let love burn.

– “What am I without you?”, Suede

Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing.
Without you, I’m nothing at all.

– “Without you I’m nothing”, Placebo

I’ll be nothin’, nothin’, nothin’, nothin’ without you.
I’ll be nothin’, nothin’, nothin’, nothin’ without you.
Yeah
I was too busy tryna find you with someone else,
The one I couldn’t stand to be with was myself.
‘Cause I’ll be nothin’, nothin’, nothin’, nothin’ without you.

– “Nothing without you”, The Weeknd

You were my strength when I was weak.
You were my voice when I couldn’t speak.
You were my eyes when I couldn’t see.
You saw the best there was in me!
Lifted me up when I couldn’t reach,
You gave me faith ’cause you believed!
I’m everything I am,
Because you loved me.


– “Because You Loved Me”, Celine Dion

Hopefully that’s enough to convince you that this is not a rare sentiment. Moreover, these songs do seem quite romantic, and there are parts of them that still resonate quite strongly for me (particularly “Because You Loved Me”).

Yet there is still something toxic here: They make us lose sight of our own self-worth independently of our relationships with others. Humans are deeply social creatures, so of course we want to fill our lives with relationships with others, and as well we should. But you are more than your relationships.

Stranded alone on a deserted island, you would still be a person of worth. You would still have inherent dignity. You would still deserve to live.

It’s also unhealthy even from a romantic perspective. Yes, once you’ve found the love of your life and you really do plan to live together forever, tying your identity so tightly to the relationship may not be disastrous—though it could still be unhealthy and promote a cycle of codependency. But what about before you’ve made that commitment? If you are nothing without the one you love, what happens when you break up? Who are you then?

And even if you are with the love of your life, what happens if they die?

Of course our relationships do change who we are. To some degree, our identity is inextricably tied to those we love, and this would probably still be desirable even if it weren’t inevitable. But there must always be part of you that isn’t bound to anyone in particular other than yourself—and if you can’t find that part, it’s a very bad sign.

Now compare a quite different sentiment:

If I didn’t have you to hold me tight,
If I didn’t have you to lie with at night,
If I didn’t have you to share my sighs,
And to kiss me and dry my tears when I cry…
Well, I…
Really think that I would…
Probably…
Have somebody else.

– “If I Didn’t Have You”, Tim Minchin

Tim Minchin is a comedy musician, and the song is very much written in that vein. He doesn’t want you to take it too seriously.

Another song Tim Minchin wrote for his wife, “Beautiful Head”, reflects upon the inevitable chasm that separates any two minds—he knows all about her, but not what goes on inside that beautiful head. He also has another sort-of love song, called “I’ll Take Lonely Tonight”, about rejecting someone because he wants to remain faithful to his wife. It’s bittersweet despite the humor within, and honestly I think it shows a deeper sense of romance than the vast majority of love songs I’ve heard.

Yet I must keep coming back to one thing: This is a much healthier attitude.

The factual claim is almost certainly objectively true: In all probability, should you find yourself separated from your current partner, you would, sooner or later, find someone else.

None of us began our lives in romantic partnerships—so who were we before then? No doubt our relationships change us, and losing them would change us yet again. But we were something before, and should it end, we will continue to be something after.

And the attitude that our lives would be empty and worthless without the one we love is dangerously close to the sort of self-destructive self-talk I know all too well from years of depression. “I’m worthless without you, I’m nothing without you” is really not so far from “I’m worthless, I’m nothing” simpliciter. If you hollow yourself out for love, you have still hollowed yourself out.

Why, then, do we only see this healthier attitude expressed as comedy? Why can’t we take seriously the idea that love doesn’t define your whole identity? Why does the toxic self-deprecation of “I am nothing without you” sound more romantic to our ears than the honest self-respect of “I would probably have somebody else”? Why is so much of what we view as “romantic” so often unrealistic—or even harmful?

Tim Minchin himself seems to wonder, as the song alternates between serious expressions of love and ironic jabs:

And if I may conjecture a further objection,
Love is nothing to do with destined perfection.
The connection is strengthened,
The affection simply grows over time,

Like a flower,
Or a mushroom,
Or a guinea pig,
Or a vine,
Or a sponge,
Or bigotry…
…or a banana.

And love is made more powerful
By the ongoing drama of shared experience,
And the synergy of a kind of symbiotic empathy, or… something.

I believe that a healthier form of love is possible. I believe that we can unite ourselves with others in a way that does not sacrifice our own identity and self-worth. I believe that love makes us more than we were—but not that we would be nothing without it. I am more than I was because you loved me—but not everything I am.

This is already how most of us view friendship: We care for our friends, we value our relationships with them—but we would recognize it as toxic to declare that we’d be nothing without them. Indeed, there is a contradiction in our usual attitude here: If part of who I am is in my friendships, then how can losing my romantic partner render me nothing? Don’t I still at least have my friends?

I can now answer this question: What am I without you? An unhappier me. But still, me.

So, on your birthday, let me say this to you, my dear husband:

But with all my heart and all my mind,
I know one thing is true:
I have just one life and just one love,
And my love, that love is you.
And if it wasn’t for you,
Darling, you…
I really think that I would…
Possibly…
Have somebody else.

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional psychotherapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—which our ancestors in central Africa certainly faced—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
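
To see how that balance works, here is a minimal sketch of the standard one-locus selection model (my own illustration; the selection coefficients are made up for the example, not real estimates for sickle-cell or malaria):

```python
# Heterozygote advantage: carriers (Aa) are fittest, so allele 'a' settles at an
# intermediate frequency instead of vanishing or taking over the population.
# Hypothetical fitnesses: AA = 1-s (malaria-susceptible), Aa = 1, aa = 1-t (anemia).

def next_generation(q, s, t):
    """Return the frequency of allele 'a' after one generation of selection."""
    p = 1.0 - q
    w_bar = p * p * (1 - s) + 2 * p * q * 1.0 + q * q * (1 - t)  # mean fitness
    return (p * q * 1.0 + q * q * (1 - t)) / w_bar

q, s, t = 0.01, 0.15, 0.80      # allele starts rare; s and t are made-up costs
for _ in range(500):
    q = next_generation(q, s, t)

print(f"simulated equilibrium: {q:.3f}, predicted s/(s+t): {s/(s+t):.3f}")
# Both come out near 0.16: the allele persists even though aa is far worse than AA.
```

The closed-form equilibrium s/(s+t) is the standard population-genetics result: as long as the carrier is the fittest genotype, selection holds the allele at a stable intermediate frequency rather than weeding it out.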

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back.

I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity.

If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier.

Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding; on the other hand, a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, link us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long-since refuted, but even the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me now explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may see not simply the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as whatever evolutionary change first brought us out of the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, why some cultures assimilate and others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or decides credit constraints, or motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely-used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model that caused the Second Depression could easily have been predicted to do so, because it contains a term that assumes normal distributions when we know for a fact that financial markets are fat-tailed; simply put, it claims certain events will never happen that actually occur several times a year.
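
To put rough numbers on that last claim, here is a minimal sketch (my own illustration, not a piece of Black-Scholes itself; the Student-t distribution, its 3 degrees of freedom, and the 6-sigma threshold are arbitrary stand-ins for “fat-tailed” and “extreme”):

```python
# How often should a 6-sigma daily move happen? A normal model says essentially
# never; a fat-tailed model (Student-t, rescaled to unit variance) says routinely.
import math
from scipy.stats import norm, t

df = 3
scale = math.sqrt((df - 2) / df)   # rescale the t so its variance matches the normal
threshold = 6                      # a "six-sigma" daily move

p_normal = norm.sf(threshold)             # tail probability under the normal model
p_fat = t.sf(threshold, df, scale=scale)  # tail probability under the fat-tailed model

trading_days = 252
print(f"Normal model: about once every {1 / (p_normal * trading_days):,.0f} years")
print(f"Fat-tailed model: about once every {1 / (p_fat * trading_days):.1f} years")
# Roughly millions of years vs. a handful of years: a gap of many orders of magnitude.
```

The exact rates depend on the threshold and on how heavy you make the tails; the point is the orders-of-magnitude gap between “will never happen” and “happens within a single trader’s career.”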

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food, we have the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or where the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than tell you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”