How games enrich our lives

Aug 11 JDN 2460534

I’m writing this post just after getting back from Gen Con, one of the world’s largest gaming conventions. After several days of basically constant activity from the time we woke up to the time we went to bed, I’m looking forward to some downtime to recuperate.

This year, we were there not just to have fun, but also to pitch our own game, a card-based storytelling game called Pax ad Astra. We already have one offer from a small publisher, but we’re currently waiting to hear back from several others to see if we can do better.

Games might seem like a frivolous thing, a waste of valuable time; but in fact they can enrich our lives in many ways. They deserve to be respected as an art form unto themselves.

Gen Con is primarily a tabletop game convention, but some of the best examples of what I want to say come from video games, so I’ll be using examples of both.

Games can be beautiful. Climb up a mountain in Breath of the Wild and just look out over the expanse. It’s not quite the same as overlooking a real mountain vista, but it’s shockingly close.

Games can be moving. The Life is Strange series has so many powerful emotional moments it’s honestly a little overwhelming.

Games can be political. The game Monopoly was originally intended as an argument against monopoly capitalism (which is deeply ironic in hindsight). Cyberpunk fiction has always been trying to warn us about the future we’re building, and that message comes across even more clearly when you’re immersed in a game, whether the tabletop version Cyberpunk RED or the video game Cyberpunk 2077. Even a game like Call of Duty: Black Ops, which many might initially dismiss as another mindless shooter, can actually have some profound statements to make about war, covert operations, and the moral compromises that they always entail.

Games can challenge us to think. Even some of the most ancient games, like Senet and Go, required deep strategic thinking in order to win. Modern games continue this tradition in an endless variety of ways, from Catan to StarCraft.

Games can teach us. I don’t just mean games that are designed to be educational, though certainly plenty of those exist. Minecraft involves active participation in building and changing the world around you, every bit as good a learning toy as Lego, but with almost endless blocks to work with.

Games let us explore our own identity. One of the great things about role-playing games such as Dungeons & Dragons (or its digital counterpart, Baldur’s Gate 3) is that they allow us to inhabit someone different from ourselves, and explore what it’s like to be someone else. We can learn a lot about ourselves and others through such experiences. I know an awful lot of transgender people who played RPGs as different genders before they transitioned.

Games are immersive. One certainly can get immersed into a book or a film, but the interactivity of a game makes that immersion much more powerful. The difference between hearing about someone doing something, watching them do something, and doing it yourself can be quite profound. Video games are especially immersive; they can really make it feel like you are right there, actually participating in the action. Part of what makes Call of Duty: Black Ops so effective in its political messaging is the fact that you aren’t just seeing all these morally-ambiguous actions; you’re actively participating in them, and being forced to make your own difficult choices.

But in the end, games are fun. Maybe sometimes they are a frivolous time-wasting activity—and maybe, as a society, we need to have more respect for frivolous time-wasting activities. Human beings need rest and recreation to function. We aren’t machines. We can’t be productive all the time.

Adversarial design

Feb 4 JDN 2460346

Have you noticed how Amazon feels a lot worse lately? Years ago, it was extremely convenient: You’d just search for what you want, it would give you good search results, you could buy what you want and be done. But now you have to slog through “sponsored results” and a bunch of random crap made by no-name companies in China before you can get to what you actually want.

Temu is even worse, and has been from the start: You can’t buy anything on Temu without first being inundated with ads. It’s honestly such an awful experience that I don’t understand why anyone is willing to buy anything from Temu.

#WelcomeToCyberpunk, I guess.

Even some video games have become like this: The free-to-play or “freemium” business model seems to be taking off, where you don’t pay money for the game itself, but then have to deal with ads inside the game trying to sell you additional content, because that’s where the developers actually make their money. And now AAA firms like EA and Ubisoft are talking about going to a subscription-based model where you don’t even own your games anymore. (Fortunately there’s been a lot of backlash against that; I hope it persists.)

Why is this happening? Isn’t capitalism supposed to make life better for consumers? Isn’t competition supposed to make products and services improve over time?

Well, first of all, these markets are clearly not as competitive as they should be. Amazon has a disturbingly large market share, and while the video game market is more competitive, it’s still dominated by a few very large firms (like EA and Ubisoft).

But I think there’s a deeper problem here, one which may be specific to media content.

What I mean by “media content” here is fairly broad: I would include art, music, writing, journalism, film, and video games.

What all of these things have in common is that they are not physical products (they’re not like a car or a phone that is a single physical object), but they are also not really services either (they aren’t something you just do as an action and it’s done, like a haircut, a surgery, or a legal defense).

Another way of thinking about this is that media content can be copied with zero marginal cost.

Because it can be copied with zero marginal cost, media content can’t simply be made and sold the way that conventional products and services are. There are a few different ways it can be monetized.


The most innocuous way is commission or patronage, where someone pays someone else to create a work because they want that work. This is totally unproblematic. You want a piece of art, you pay an artist, they make it for you; great. Maybe you share copies with the world, maybe you don’t; whatever. It’s good either way.

Unfortunately, it’s hard to sustain most artists and innovators on that model alone. (In a sense I’m using a patronage model, because I have a Patreon. But I’m not making anywhere near enough to live on that way.)

The second way is intellectual property, which I have written about before, and surely will again. If you can enforce limits on who is allowed to copy a work, then you can make a work and sell it for profit without fear of being undercut by someone else who simply copies it and sells it for cheaper. A detailed discussion of that is beyond the scope of this post, but you can read those previous posts, and I can give you the TLDR version: Some degree of intellectual property is probably necessary, but in our current society, it has clearly been taken much too far. I think artists and authors deserve to be able to copyright (or maybe copyleft) their work—but probably not for 70 years after their death.

And then there is a third way, the most insidious way: advertising. If you embed advertisements for other products and services within your content, you can then sell those ad slots for profit. This is how newspapers stay afloat, mainly; subscriptions have never been the majority of their revenue. It’s how TV was supported before cable and streaming—and cable usually has ads too, and streaming is starting to.

There is something fundamentally different about advertising as a service. Whereas most products and services you encounter in a capitalist society are made for you, designed for you to use, advertising is made at you, designed to manipulate you.

I’ve heard it put well this way:

If you’re not paying, you aren’t the customer; you’re the product.

Monetizing content by advertising effectively makes your readers (or viewers, players, etc.) into the product instead of the customer.

I call this effect adversarial design.

I chose this term because it not only conveys the right sense of being an adversary; it also contains the word ‘ad’ and shares the Latin root ‘advertere’ with ‘advertising’.

When a company designs a car or a phone, they want it to appeal to customers—they want you to like it. Yes, they want to take your money; but it’s a mutually beneficial exchange. They get money, you get a product; you’re both happier.

When a company designs an ad, they want it to affect customers—they want you to do what it says, whether you like it or not. And they wouldn’t be doing it if they thought you would buy it anyway—so they are basically trying to make you do something you wouldn’t otherwise have done.

In other words, when designing a product, corporations want to be your friend.

When designing an ad, they become your enemy.

You would absolutely prefer not to have ads. You don’t want your attention taken in this way. But the way that these corporations make money—disgustingly huge sums of money—is by forcing those ads in your face anyway.

Yes, to be fair, there might be some kinds of ads that aren’t too bad. Simple, informative, unobtrusive ads that inform you that something is available you might not otherwise have known about. Movie trailers are like this; people often enjoy watching movie trailers, and they want to see what movies are going to come out next. That’s fine. I have no objection to that.

But it should be clear to anyone who has, um, used the Internet in the past decade that we have gone far, far beyond that sort of advertising. Ads have become aggressive, manipulative, aggravating, and—above all—utterly ubiquitous. You can’t escape them. They’re everywhere. Even when you use ad-block software (which I highly recommend, particularly Adblock Plus—which is free), you still can’t completely escape them.

That’s another thing that should make it pretty clear that there’s something wrong with ads: People are willing to make efforts or even pay money to make ads go away.

Whenever there is a game I like that’s ad-supported but you can pay to make the ads go away, I always feel like I’m being extorted, even if what I have to pay would have been a totally reasonable price for the game. Come on, just sell me the game. Don’t give me the game for free and then make me pay to make it not unpleasant. Don’t add anti-features.

This is clearly not a problem that market competition alone will solve. Even in highly competitive markets, advertising is still ubiquitous, aggressive and manipulative. In fact, competition may even make it worse—a true monopoly wouldn’t need to advertise very much.

Consider Coke and Pepsi ads; they’re actually relatively pleasant, aren’t they? Because all they’re trying to do is remind you and make you thirsty so you’ll buy more of the product you were already buying. They aren’t really trying to get you to buy something you wouldn’t have otherwise. They know that their duopoly is solid, and only a true Black Swan event would unseat their hegemony.

And have you ever seen an ad for your gas company? I don’t think I have—probably because I didn’t have a choice in who my gas company was; there was only one that covered my area. So why bother advertising to me?

If competition won’t fix this, what will? Is there some regulation we could impose that would make advertising less obtrusive? People have tried, without much success. I think imposing an advertising tax would help, but even that might not do enough.

What I really think we need right now is to recognize the problem and invest in solving it. Right now we have megacorporations which are thoroughly (literally) invested in making advertising more obtrusive and more ubiquitous. We need other institutions—maybe government, maybe civil society more generally—that are similarly invested in counteracting it.


Otherwise, it’s only going to get worse.

On Horror

Oct 29 JDN 2460247

Since this post will go live the weekend before Halloween, the genre of horror seemed a fitting topic.

I must confess, I don’t really get horror as a genre. Generally I prefer not to experience fear and disgust? This can’t be unusual; it’s literally a direct consequence of the evolutionary function of fear and disgust. It’s wanting to be afraid and disgusted that’s weird.

Cracked once came out with a list of “Horror Movies for People Who Hate Horror”, and I found some of my favorite films on it, such as Alien (which is as much sci-fi as horror), The Cabin in the Woods (which is as much satire), and Zombieland (which is a comedy). Other such lists have prominently featured Get Out (which is as much political as it is horrific), Young Frankenstein (which is entirely a comedy), and The Silence of the Lambs (which is horror, at least in large part, but which I didn’t so much enjoy as appreciate as a work of artistry; I watch it the way I look at Guernica). Some such lists include Saw, which I can appreciate on some level—it does have a lot of sociopolitical commentary—but still can’t enjoy (it’s just too gory). I note that none of these lists seem to include Event Horizon, which starts out as a really good sci-fi film, but then becomes so very much horror that I ended up hating it.

In trying to explain the appeal of horror to me, people have likened it to the experience of a roller coaster: Isn’t fear exhilarating?

I do enjoy roller coasters. But the analogy falls flat for me, because, well, my experience of riding a roller coaster isn’t fear—the exhilaration comes directly from the experience of moving so fast, a rush of “This is awesome!” that has nothing to do with being afraid. Indeed, should I encounter a roller coaster that actually made me afraid, I would assiduously avoid it, and wonder if it was up to code. My goal is not to feel like I’m dying; it’s to feel like I’m flying.

And speaking of flying: Likewise, the few times I have had the chance to pilot an aircraft were thrilling in a way it is difficult to convey to anyone who hasn’t experienced it. I think it might be something like what religious experiences feel like. The sense of perspective, looking down on the world below, seeing it as most people never see it. The sense of freedom, of, for once in your life, actually having the power to maneuver freely in all three dimensions. The subtle mix of knowing that you are traveling at tremendous speed while feeling as if you are peacefully drifting along. Astronauts also describe this sort of experience, which no doubt is even more intense for them.

Yet in all that, fear was never my primary emotion, and had it been, it would have undermined the experience rather than enhanced it. The brief moment when our engine stalled flying over Scotland certainly raised my heart rate, but not in a pleasant way. In that moment—objectively brief, subjectively interminable—I spent all of my emotional energy struggling to remain calm. It helped to continually remind myself of what I knew about aerodynamics: Wings want to fly. An airplane without an engine isn’t a rock; it’s a glider. It is entirely possible to safely land a small aircraft on literally zero engine power. Still, I’m glad we got the propeller started again and didn’t have to.

I have also enjoyed classic horror novels such as Dracula and Frankenstein; their artistry is also quite apparent, and reading them as books provides an emotional distance that watching them as films often lacks. I particularly notice this with vampire stories, as I can appreciate the romantic allure of immortality and the erotic tension of forbidden carnal desire—but the sight of copious blood on screen tends to trigger my mild hematophobia.

Yet if fear is the goal, surely having a phobia should only make the fear stronger and thus better? Apparently not; the pattern seems to run the other way: People with a genuine phobia of the subject in question don’t actually enjoy horror films about it. Arachnophobes don’t often watch films about giant spiders. Cynophobes are rarely werewolf aficionados. And, indeed, rare is the hematophobe who is a connoisseur of vampire movies.

Moreover, we rarely see horror films about genuine dangers in the world. There are movies about rape, murder, war, terrorism, espionage, asteroid impacts, nuclear weapons and climate change, but (with rare exceptions) they aren’t horror films. They don’t wallow in fear the way that films about vampires, ghosts and werewolves do. They are complex thrillers (Argo, Enemy of the State, Tinker Tailor Soldier Spy, Broken Arrow), police procedurals (most films about rape or murder), heroic sagas (just about every war film), or just fun, light-hearted action spectacles (Armageddon, The Day After Tomorrow). Rather than a loosely-knit gang of helpless horny teenagers, they have strong, brave heroes. Even films about alien invasions aren’t usually horror (Alien notwithstanding); they also tend to be heroic war films. Unlike nuclear war or climate change, alien invasion is a quite unlikely event; but it’s surely more likely than zombies or werewolves.

In other words, when something is genuinely scary, the story is always about overcoming it. There is fear involved, but in the end we conquer our fear and defeat our foes. The good guys win in the end.

I think, then, that enjoyment of horror is not about real fear. Feeling genuinely afraid is unpleasant—as by all Darwinian rights it should be.

Horror is about simulating fear. It’s a kind of brinksmanship: You take yourself to the edge of fear and then back again, because what you are seeing would be scary if it were real, but deep down, you know it isn’t. You can sleep at night after watching movies about zombies, werewolves and vampires, because you know that there aren’t really such things as zombies, werewolves and vampires.

What about the exceptions? What about, say, The Silence of the Lambs? Psychopathic murderers absolutely are real. (Not especially common—but real.) But The Silence of the Lambs only works because of truly brilliant writing, directing, and acting; and part of what makes it work is that it isn’t just horror. It has layers of subtlety, and it crosses genres—it also has a good deal of police procedural in it, in fact. And even in The Silence of the Lambs, at least one of the psychopathic murderers is beaten in the end; evil does not entirely prevail.

Slasher films—which I especially dislike (see above: hematophobia)—seem like they might be a counterexample, in that they genuinely are a common subgenre and they mainly involve psychopathic murderers. But in fact almost all slasher films involve some kind of supernatural element: In Friday the 13th, Jason seems to be immortal. In A Nightmare on Elm Street, Freddy Krueger doesn’t just attack you with a knife, he invades your dreams. Slasher films actually seem to go out of their way to make the killer not real. Perhaps this is because showing helpless people murdered by a realistic psychopath would inspire too much genuine fear.

The terrifying truth is that, more or less at any time, a man with a gun could in fact come and shoot you, and while there may be ways to reduce that risk, there’s no way to make it zero. But that isn’t fun for a movie, so let’s make him a ghost or a zombie or something, so that when the movie ends, you can remind yourself it’s not real. Let’s pretend to be afraid, but never really be afraid.

Realizing that makes me at least a little more able to understand why some people enjoy horror.

Then again, I still don’t.

AI and the “generalization faculty”

Oct 1 JDN 2460219

The phrase “artificial intelligence” (AI) has now become so diluted by overuse that we needed to invent a new term for its original meaning. That term is now “artificial general intelligence” (AGI). In the 1950s, AI meant the hypothetical possibility of creating artificial minds—machines that could genuinely think and even feel like people. Now it means… pathing algorithms in video games and chatbots? The goalposts seem to have moved a bit.

It seems that AGI has always been 20 years away. It was 20 years away 50 years ago, and it will probably be 20 years away 50 years from now. Someday it will really be 20 years away, and then, 20 years after that, it will actually happen—but I doubt I’ll live to see it. (XKCD also offers some insight here: “It has not been conclusively proven impossible.”)

We make many genuine advances in computer technology and software, which have profound effects—both good and bad—on our lives, but the dream of making a person out of silicon always seems to drift ever further into the distance, like a mirage on the desert sand.

Why is this? Why do so many people—even, perhaps especially, experts in the field—keep thinking that we are on the verge of this seminal, earth-shattering breakthrough, and ending up wrong—over, and over, and over again? How do such obviously smart people keep making the same mistake?

I think it may be because, all along, we have been laboring under the tacit assumption of a generalization faculty.

What do I mean by that? By “generalization faculty”, I mean some hypothetical mental capacity that allows you to generalize your knowledge and skills across different domains, so that once you get good at one thing, it also makes you good at other things.

This certainly seems to be how humans think, at least some of the time: Someone who is very good at chess is likely also pretty good at go, and someone who can drive a motorcycle can probably also drive a car. An artist who is good at portraits is probably not bad at landscapes. Human beings are, in fact, able to generalize, at least sometimes.

But I think the mistake lies in imagining that there is just one thing that makes us good at generalizing: Just one piece of hardware or software that allows you to carry over skills from any domain to any other. This is the “generalization faculty”—the imagined faculty that I think we do not have, indeed I think does not exist.

Computers clearly do not have the capacity to generalize. A program that can beat grandmasters at chess may be useless at go, and self-driving software that works on one type of car may fail on another, let alone a motorcycle. An art program that is good at portraits of women can fail when trying to do portraits of men, and produce horrific Daliesque madness when asked to make a landscape.

But if they did somehow have our generalization capacity, then, once they could compete with us at some things—which they surely can, already—they would be able to compete with us at just about everything. So if it were really just one thing that would let them generalize, let them leap from AI to AGI, then suddenly everything would change, almost overnight.

And so this is how the AI hype cycle goes, time and time again:

  1. A computer program is made that does something impressive, something that other computer programs could not do, perhaps even something that human beings are not very good at doing.
  2. If that same prowess could be generalized to other domains, the result would plainly be something on par with human intelligence.
  3. Therefore, the only thing this computer program needs in order to be sapient is a generalization faculty.
  4. Therefore, there is just one more step to AGI! We are nearly there! It will happen any day now!

And then, of course, despite heroic efforts, we are unable to generalize that program’s capabilities except in some very narrow way—even decades after having good chess programs, getting programs to be good at go was a major achievement. We are unable to find the generalization faculty yet again. And the software becomes yet another “AI tool” that we will use to search websites or make video games.

For there never was a generalization faculty to be found. It always was a mirage in the desert sand.

Humans are in fact spectacularly good at generalizing, compared to, well, literally everything else in the known universe. Computers are terrible at it. Animals aren’t very good at it. Just about everything else is totally incapable of it. So yes, we are the best at it.

Yet we, in fact, are not particularly good at it in any objective sense.

In experiments, people often fail to generalize their reasoning even in very basic ways. There’s a famous one where we try to get people to make an analogy between a military tactic and a radiation treatment, and while very smart, creative people often get it quickly, most people are completely unable to make the connection unless you give them a lot of specific hints. People often struggle to find creative solutions to problems even when those solutions seem utterly obvious once you know them.

I don’t think this is because people are stupid or irrational. (To paraphrase Sydney Harris: Compared to what?) I think it is because generalization is hard.

People tend to be much better at generalizing within familiar domains where they have a lot of experience or expertise; this shows that there isn’t just one generalization faculty, but many. We may have a plethora of overlapping generalization faculties that apply across different domains, and can learn to improve some over others.

But it isn’t just a matter of gaining more expertise. Highly advanced expertise is in fact usually more specialized—harder to generalize. A good amateur chess player is probably a good amateur go player, but a grandmaster chess player is rarely a grandmaster go player. Someone who does well in high school biology probably also does well in high school physics, but most biologists are not very good physicists. (And lest you say it’s simply because go and physics are harder: The converse is equally true.)

Humans do seem to have a suite of cognitive tools—some innate hardware, some learned software—that allows us to generalize our skills across domains. But even after hundreds of millions of years of evolving that capacity under the highest possible stakes, we still basically suck at it.

To be clear, I do not think it will take hundreds of millions of years to make AGI—or even millions, or even thousands. Technology moves much, much faster than evolution. But I would not be surprised if it took centuries, and I am confident it will at least take decades.

But we don’t need AGI for AI to have powerful effects on our lives. Indeed, even now, AI is already affecting our lives—in mostly bad ways, frankly, as we seem to be hurtling gleefully toward the very same corporatist cyberpunk dystopia we were warned about in the 1980s.

A lot of technologies have done great things for humanity—sanitation and vaccines, for instance—and even automation can be a very good thing, as increased productivity is how we attained our First World standard of living. But AI in particular seems best at automating away the kinds of jobs human beings actually find most fulfilling, and worsening our already staggering inequality. As a civilization, we really need to ask ourselves why we got automated writing and art before we got automated sewage cleaning or corporate management. (We should also ask ourselves why automated stock trading resulted in even more money for stock traders, instead of putting them out of their worthless parasitic jobs.) There are technological reasons for this, yes; but there are also cultural and institutional ones. Automated teaching isn’t far away, and education will be all the worse for it.

To change our lives, AI doesn’t have to be good at everything. It just needs to be good at whatever we were doing to make a living. AGI may be far away, but the impact of AI is already here.

Indeed, I think this quixotic quest for AGI, and all the concern about how to control it and what effects it will have upon our society, may actually be distracting from the real harms that “ordinary” “boring” AI is already having upon our society. I think a Terminator scenario, where the machines rapidly surpass our level of intelligence and rise up to annihilate us, is quite unlikely. But a scenario where AI puts millions of people out of work with insufficient safety net, triggering economic depression and civil unrest? That could be right around the corner.

Frankly, all it may take is getting automated trucks to work, which could be just a few years. There are nearly 4 million truck drivers in the United States—a full percentage point of employment unto itself. And the Governor of California just vetoed a bill that would require all automated trucks to have human drivers. From an economic efficiency standpoint, his veto makes perfect sense: If the trucks don’t need drivers, why require them? But from an ethical and societal standpoint… what do we do with all the truck drivers!?

Creativity and mental illness

Dec 1 JDN 2458819

There is some truth to the stereotype that artistic people are crazy. Mental illnesses, particularly bipolar disorder, are overrepresented among artists, writers, and musicians. Creative people score highly on literally all five of the Big Five personality traits: They are higher in Openness, higher in Conscientiousness, higher in Extraversion (that one actually surprised me), higher in Agreeableness, and higher in Neuroticism. Creative people just have more personality, it seems.

But in fact mental illness is not as overrepresented among creative people as most people think, and the highest probability of being a successful artist occurs when you have close relatives with mental illness, but are not yourself mentally ill. Those with mental illness actually tend to be most creative when their symptoms are in remission. This suggests that the apparent link between creativity and mental illness may actually increase over time, as treatments improve and remission becomes easier.

One possible source of the link is that artistic expression may be a form of self-medication: Art therapy does seem to have some promise in treating a variety of mental disorders (though it is not nearly as effective as conventional psychotherapy and medication). But that wouldn’t explain why family history of mental illness is actually a better predictor of creativity than mental illness itself.

My guess is that in order to be creative, you need to think differently than other people. You need to see the world in a way that others do not see it. Mental illness is surely not the only way to do that, but it’s definitely one way.

But creativity also requires basic functioning: If you are totally crippled by a mental illness, you’re not going to be very creative. So the people who are most creative have just enough craziness to think differently, but not so much that it takes over their lives.

This might even help explain how mental illness persisted in our population, despite its obvious survival disadvantages. It could be some form of heterozygote advantage.

The classic example of heterozygote advantage is sickle-cell anemia: If you have no copies of the sickle-cell gene, you’re normal. If you have two copies, you have sickle-cell anemia, which is very bad. But if you have only one copy, you’re healthy—and you’re resistant to malaria. Thus, high risk of malaria—as we certainly had, living in central Africa—creates a selection pressure that keeps sickle-cell genes in the population, even though having two copies is much worse than having none at all.
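
To make the logic concrete, here is a minimal sketch of the standard one-locus selection model behind heterozygote advantage. The fitness values are made up purely for illustration, not empirical estimates: even though the sickle-cell homozygote is far less fit, the allele settles at a stable intermediate frequency instead of being eliminated.

```python
# Minimal sketch of heterozygote advantage (one locus, two alleles).
# Fitness values are illustrative only, not empirical estimates.
w_AA = 0.85   # normal homozygote: susceptible to malaria
w_AS = 1.00   # carrier (heterozygote): malaria-resistant, no anemia
w_SS = 0.20   # sickle-cell homozygote: severe anemia

q = 0.01      # starting frequency of the sickle allele S
for _ in range(200):
    p = 1.0 - q
    w_bar = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS   # mean fitness under random mating
    q = (p*q*w_AS + q*q*w_SS) / w_bar          # standard selection recursion for S

# With selection coefficients s = 0.15 (against AA) and t = 0.80 (against SS),
# theory predicts a stable equilibrium at s / (s + t), about 0.16.
print(f"Frequency of S after 200 generations: {q:.3f}")
```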

Mental illness might function something like this. I suspect it’s far more complicated than sickle-cell anemia, which is literally just two alleles of a single gene; but the overall process may be similar. If having just a little bit of bipolar disorder or schizophrenia makes you see the world differently than other people and makes you more creative, there are lots of reasons why that might improve the survival of your genes: There are the obvious problem-solving benefits, but also the simple fact that artists are sexy.

The downside of such “weird-thinking” genes is that they can go too far and make you mentally ill, perhaps if you have too many copies of them, or if you face an environmental trigger that sets them off. Sometimes the reason you see the world differently than everyone else is that you’re just seeing it wrong. But if the benefits of creativity are high enough—and they surely are—this could offset the risks, in an evolutionary sense.

But one thing is quite clear: If you are mentally ill, don’t avoid treatment for fear it will damage your creativity. Quite the opposite: A mental illness that is well treated and in remission is the optimal state for creativity. Go seek treatment, so that your creativity may blossom.

The double standard between violence and sex in US media

Mar 24 JDN 2458567

The video game Elder Scrolls IV: Oblivion infamously had its ESRB rating upgraded from “Teen” to “Mature”, raising the minimum age to purchase it from 13 to 17. Why? Well, they gave two major reasons: One was that there was more blood and detailed depictions of death than in the original version submitted for review. The other was that a modder had made it possible to view the female characters with naked breasts.

These were considered comparable arguments—if anything, the latter seemed to carry more weight.

Yet first of all this was a mod: You can make a mod do just about anything. (Indeed, there has long since been a mod for Oblivion that shows full-frontal nudity; had this existed when the rating was upgraded, they might have gone all the way to “Adults Only”, ostensibly only raising the minimum age to 18, but in practice making stores unwilling to carry the game because they think of it as porn.)

But suppose in fact that the game had included female characters with naked breasts. Uh… so what? Why is that considered so inappropriate for teenagers? Men are allowed to walk around topless all the time, and male and female nipples really don’t look all that different!

Now, I actually think “Mature” is the right rating for Oblivion. But that’s because Oblivion is about a genocidal war against demons and involves mass slaughter and gruesome death at every turn—not because you can enable a mod to see boobs.

The game Grand Theft Auto: San Andreas went through a similar rating upgrade, from “Mature” to “Adults Only”—resulting in it becoming the only mass-market “Adults Only” game in the US. This was, again, because of a mod—though in this case it was more like re-enabling content that the original game had included but disabled. But let me remind you that this is a game where you play as a gangster whose job is to steal cars, and who routinely guns down police officers and massacres civilians—and the thing that really upset people was that you could enable a scene where your character has sex with his girlfriend.

Meanwhile, games like Manhunt, where the object of the game is to brutally execute people, and the Call of Duty series graphically depicting the horrors of war (and in the Black Ops subseries, espionage, terrorism, and torture), all get to keep their “Mature” ratings.

And consider that a game like Legend of Zelda: Breath of the Wild, rated “Everyone 10+”, contains quite a lot of violence, and several scenes where, logically, it really seems like there should be nudity—bathing, emerging from a cryonic stasis chamber, a doctor examining your body for wounds—but there isn’t. Meanwhile, a key part of the game is killing goblin-like monsters to collect their organs and use them for making potions. It’s all tastefully depicted violence, with little blood and gore; okay, sure. But you can tastefully depict nudity as well. Why are we so uncomfortable with the possibility of seeing these young adult characters naked… while bathing? In this case, even a third-party mod that allowed nudity was itself censored, on the grounds that it would depict “underage characters”; but really, no indication is given that these characters are underage. Based on their role in society, I always read them as about 19 or 20. I guess they could conceivably be as young as 16… and as we all know, 16-year-olds do not have genitals, are never naked, and certainly never have sex.

We’re so accustomed to this that it may even feel uncomfortable to you when I suggest otherwise: “Why would you want to see Link’s penis as he emerges from the cryonic chamber?” Well, I guess, because… men have penises. (Well, cis men anyway; actually it would be really bold and interesting if they decided to make Link trans.) We should see that as normal, and not be so uncomfortable showing it. The emotional power of the scene comes in part from the innocence and vulnerability of nudity, which is undercut by you mysteriously coming with non-removable indestructible underwear. Part of what makes Breath of the Wild so, er, breathtaking is that you can often screenshot it and feel like you are looking at a painting—and I probably don’t need to mention that nudity has been a part of fine art since time immemorial. Letting you take off the protagonist’s underwear wouldn’t show anything you can’t see by looking at Michelangelo’s David.

And would it really be so traumatizing to the audience to see that? By the time you’re 10 years old, I hope you have seen at least one picture of a penis. If not, we’ve been doing sex ed very, very wrong. In fact, I’m quite confident that most of the children playing would not be disturbed at all; amused, perhaps, but what’s wrong with that? If looking at the protagonist’s cel-shaded genitals makes some of the players giggle, does that cause any harm? Some people play through Breath of the Wild without ever equipping clothing, both as a challenge (you get no armor protection that way), and simply for fun (some of the characters do actually react to you being “naked”, or as naked as the game will allow—and most of their reactions would make way more sense if you weren’t wearing magical underwear).

Of course, it’s not just video games. The United States has a bizarre double standard between sex and violence in all sorts of media.

On television, you can watch The Walking Dead on mainstream cable and see, as Andrew Boschert put it, “a man’s skull being smashed with a hammer, people’s throats slit into a trough, a meat locker with people’s torsos and limbs hung by hooks and a man’s face being eaten off while he is still alive”; but show a single erect penis, and you have to go to premium channels.

Even children’s television is full of astonishing levels of violence. Watch Tom and Jerry sometime, and you’ll realize that the only difference between it and the Simpsons parody Itchy & Scratchy is that the Simpsons version is a bit more realistic in depicting how such violence would affect the body. In mainstream cartoons, characters can get shot, blown up, crushed by heavy objects, run over by trains, hit with baseball bats and frying pans—but God forbid you ever show a boob.

In film, the documentary This Film Is Not Yet Rated shows convincingly not only that our standards for sexual versus violent content are wildly disproportionate, but also that any depiction of queer sexual content is immediately considered pornographic while the equivalent heterosexual content is not. It’s really quite striking to watch: They show scenes with the exact same sex act, even from more or less the same camera angles, and when it’s a man and a woman, it gets R, but if it’s two men or two women, it gets NC-17.

The movie Thirteen is rated R for its depiction of drugs and sex, despite being based on a true story about actual thirteen-year-olds. Evan Rachel Wood was 15 at the time of filming and 16 at the time of release, meaning that she was two years older than the character she played, and yet still not old enough to watch her own movie without parental permission. Granted, Thirteen is not a wholesome film; there’s a lot of disturbing stuff in it, including things done by (and to) teenagers that really shouldn’t be.

But it’s not as if violence, even against teenagers, is viewed as so dangerous for young minds. Look at the Hunger Games, for example; that is an absolutely horrific level of violence against teenagers—people get beheaded, blown up, burned, and mutilated—and it only received a PG-13 rating. The Dark Knight received only a PG-13 rating, despite being about a terrorist who murders hundreds and implants a bomb in one of his henchmen (and also implements the most literal and unethical Prisoner’s Dilemma experiment ever devised).

Novels are better about this sort of thing: You actually can have sex scenes in mainstream novels without everyone freaking out. Yet there’s still a subtler double standard: You can’t show too much detail in a sex scene, or you’ll be branded “erotica”. But there’s no special genre ghetto you get sent to for too graphically depicting torture or war. (I love the Culture novels, but honestly I think Use of Weapons should come with trigger warnings—it’s brutal.) And as I have personally struggled with, it’s very hard to write fiction honestly depicting queer characters without your whole book being labeled “queer fiction”.

Is it like this in other countries? Well, like most things, it depends on the country. In China and much of the Middle East, the government has control over almost every sort of content. Most countries have some things they censor and some things they don’t. The US is unusual: We censor very little. Violent content and political content are essentially unrestricted in the US. But sex is one of the few things that we do consistently censor.

Media in Europe especially is much more willing to depict sex, and a bit less willing to depict violence. This is particularly true in the Netherlands, where there are films rated R for sex in the US but 6 (that’s “minimum age of viewing, 6 years”) in the Netherlands, because we consider naked female breasts to be a deal-breaker and they consider them utterly harmless. Quite frankly, I’m much more inclined toward the latter assessment.

Japan has had a long tradition of sexuality in art and media, and only when the West came in did they start introducing censorship. But Japan is not known for its half-measures; in 1907 they instituted a ban on explicit depiction of genitals that applies to essentially all media—even media explicitly marketed as porn still fuzzes over key parts of the images. Yet some are still resisting this censorship: A ban on sexual content in manga drew outrage from artists as recently as 2010.

Hinduism has always been more open to sexuality than Christianity, and it shows in Indian culture in various ways. The Kama Sutra is depicted in the West as a lurid sex manual, when it’s really more of a text on living a full life, finding love, and achieving spiritual transcendence (of which sex is often a major part). But like Japan, India began to censor sex as it began to adopt Western cultural influences, and now implements a very broad pornography ban.

What does this double standard do to our society?

Well, it’s very hard to separate causation from correlation. So I can’t really say that it is because of this double standard in media that we have the highest rates of teen pregnancy and homicide in the First World. But it seems like it might be related, at least; perhaps they come from a common source, the same sexual repression and valorization of masculinity expressed through violence.

I do know some things that are direct negative consequences of the censorship of sex in US media. The most urgent example of this is the so-called “Stop Enabling Sex Traffickers Act” (it does more or less the exact opposite, much like the “PATRIOT Act” and George W. Bush’s “Clear Skies Act”). That will have to wait until next week’s post.

Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now as we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th now!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the Apollo Guidance Computer that landed astronauts on the Moon.
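
As a quick back-of-the-envelope check of that arithmetic (using the rough 1.5-year doubling time and the approximate endpoints quoted above):

```python
# Rough check of the doubling arithmetic above; all figures are approximate.
years = 2017 - 1975             # roughly the span discussed in this post
doubling_time = 1.5             # years per doubling, the approximate figure above
print(years / doubling_time)    # 28.0 doublings, i.e. "over 25"
print(2 ** 25)                  # 33,554,432, i.e. a factor of over 30 million
```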

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.
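
For readers who haven’t seen this workflow, here is a minimal sketch: a toy two-equation system, not any particular economic model, with SciPy’s general-purpose root-finder standing in for whatever solver a given model would actually use.

```python
# Toy example of solving a nonlinear system iteratively rather than by hand.
# The equations are arbitrary illustrations, not a real economic model.
import numpy as np
from scipy.optimize import fsolve

def equations(vars):
    x, y = vars
    return [
        x**2 + y**2 - 4.0,     # f1(x, y) = 0: a circle of radius 2
        np.exp(x) + y - 1.0,   # f2(x, y) = 0: an exponential constraint
    ]

root = fsolve(equations, x0=[1.0, 1.0])   # iterative root-finding from a rough guess
print("solution:", root)
print("residuals:", equations(root))      # both residuals should be near zero
```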

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.

In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.

Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE’s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.
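
To make that tariff-versus-trade comparison concrete, here is a minimal sketch of the textbook two-country, two-good Ricardian model with made-up numbers (the labor requirements and the 30-cloth-for-45-food exchange are purely illustrative): under autarky each country consumes only what it produces itself, while specialization plus trade leaves both consuming more of both goods, which is exactly the effect a well-designed game would let players feel.

```python
# Minimal Ricardian sketch: two countries, two goods, made-up numbers.
LABOR = 100  # hours of labor available in each country

# Hours of labor needed per unit of output (hypothetical values)
hours = {
    "Home":    {"food": 1, "cloth": 2},   # Home is relatively better at food
    "Foreign": {"food": 3, "cloth": 1},   # Foreign is relatively better at cloth
}

def autarky(country):
    """Each country splits its labor evenly between the two goods."""
    h = hours[country]
    return {good: (LABOR / 2) / h[good] for good in h}

# With trade, each country specializes in its comparative advantage and they
# swap 30 cloth for 45 food (a price between the two opportunity costs).
with_trade = {
    "Home":    {"food": LABOR / hours["Home"]["food"] - 45, "cloth": 30},
    "Foreign": {"food": 45, "cloth": LABOR / hours["Foreign"]["cloth"] - 30},
}

for country in hours:
    print(country, "autarky:", autarky(country), "with trade:", with_trade[country])
# Both countries end up consuming more of both goods than under autarky.
```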

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for a while yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid this corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

How to change the world

JDN 2457166 EDT 17:53.

I just got back from watching Tomorrowland, which is oddly appropriate since I had already planned this topic in advance. How do we, as they say in the film, “fix the world”?

I can’t find it at the moment, but I vaguely remember some radio segment on which a couple of neoclassical economists were interviewed and asked what sort of career can change the world, and they answered something like, “Go into finance, make a lot of money, and then donate it to charity.”

In a slightly more nuanced form this strategy is called earning to give, and frankly I think it’s pretty awful. Most of the damage that is done to the world is done in the name of maximizing profits, and basically what you end up doing is stealing people’s money and then claiming you are a great altruist for giving some of it back. I guess if you can make enormous amounts of money doing something that isn’t inherently bad and then donate that—like what Bill Gates did—it seems better. But realistically your potential income is probably not actually raised that much by working in finance, sales, or oil production; you could have made the same income as a college professor or a software engineer and not be actively stripping the world of its prosperity. If we actually had the sort of ideal policies that would internalize all externalities, this dilemma wouldn’t arise; but we’re nowhere near that, and if we did have that system, the only billionaires would be Nobel laureate scientists. Albert Einstein was a million times more productive than the average person. Steve Jobs was just a million times luckier. Even then, there is the very serious question of whether it makes sense to give all the fruits of genius to the geniuses themselves, who very quickly find they have all they need while others starve. It was certainly Jonas Salk’s view that his work should only profit him modestly and its benefits should be shared with as many people as possible. So really, in an ideal world there might be no billionaires at all.

Here I would like to present an alternative. If you are an intelligent, hard-working person with a lot of talent and the dream of changing the world, what should you be doing with your time? I’ve given this a great deal of thought in planning my own life, and here are the criteria I came up with:

  1. You must be willing and able to commit to doing it despite great obstacles. This is another reason why earning to give doesn’t actually make sense; your heart (or rather, limbic system) won’t be in it. You’ll be miserable, you’ll become discouraged and demoralized by obstacles, and others will surpass you. In principle Wall Street quantitative analysts who make $10 million a year could donate 90% to UNICEF, but they don’t, and you know why? Because the kind of person who is willing and able to exploit and backstab their way to that position is the kind of person who doesn’t give money to UNICEF.
  2. There must be important tasks to be achieved in that discipline. This one is relatively easy to satisfy; I’ll give you a list in a moment of things that could be contributed by a wide variety of fields. Still, it does place some limitations: For one, it rules out the simplest form of earning to give (a more nuanced form might cause you to choose quantum physics over social work because it pays better and is just as productive—but you’re not simply maximizing income to donate). For another, it rules out routine, ordinary jobs that the world needs but don’t make significant breakthroughs. The world needs truck drivers (until robot trucks take off), but there will never be a great world-changing truck driver, because even the world’s greatest truck driver can only carry so much stuff so fast. There are no world-famous secretaries or plumbers. People like to say that these sorts of jobs “change the world in their own way”, which is a nice sentiment, but ultimately it just doesn’t get things done. We didn’t lift ourselves into the Industrial Age by people being really fantastic blacksmiths; we did it by inventing machines that make blacksmiths obsolete. We didn’t rise to the Information Age by people being really good slide-rule calculators; we did it by inventing computers that work a million times as fast as any slide-rule. Maybe not everyone can have this kind of grand world-changing impact; and I certainly agree that you shouldn’t have to in order to live a good life in peace and happiness. But if that’s what you’re hoping to do with your life, there are certain professions that give you a chance of doing so—and certain professions that don’t.
  3. The important tasks must be currently underinvested. There are a lot of very big problems that many people are already working on. If you work on the problems that are trendy, the ones everyone is talking about, your marginal contribution may be very small. On the other hand, you can’t just pick problems at random; many problems are not invested in precisely because they aren’t that important. You need to find problems people aren’t working on but should be—problems that should be the focus of our attention but for one reason or another get ignored. A good example here is to work on pancreatic cancer instead of breast cancer; breast cancer research is drowning in money and really doesn’t need any more; pancreatic cancer kills 2/3 as many people but receives less than 1/6 as much funding. If you want to do cancer research, you should probably be doing pancreatic cancer.
  4. You must have something about you that gives you a comparative—and preferably, absolute—advantage in that field. This is the hardest one to achieve, and it is in fact the reason why most people can’t make world-changing breakthroughs. It is in fact so hard to achieve that it’s difficult to even say you have until you’ve already done something world-changing. You must have something special about you that lets you achieve what others have failed. You must be one of the best in the world. Even as you stand on the shoulders of giants, you must see further—for millions of others stand on those same shoulders and see nothing. If you believe that you have what it takes, you will be called arrogant and naïve; and in many cases you will be. But in a few cases—maybe 1 in 100, maybe even 1 in 1000, you’ll actually be right. Not everyone who believes they can change the world does so, but everyone who changes the world believed they could.

Now, what sort of careers might satisfy all these requirements?

Well, basically any kind of scientific research:

Mathematicians could work on network theory, or nonlinear dynamics (the first step: separating “nonlinear dynamics” into the dozen or so subfields it should actually comprise—as has been remarked, “nonlinear” is a bit like “non-elephant”), or data processing algorithms for our ever-growing morasses of unprocessed computer data.

Physicists could be working on fusion power, or ways to neutralize radioactive waste, or fundamental physics that could one day unlock technologies as exotic as teleportation and faster-than-light travel. They could work on quantum encryption and quantum computing. Or if those are still too applied for your taste, you could work in cosmology and seek to answer some of the deepest, most fundamental questions in human existence.

Chemists could be working on stronger or cheaper materials for infrastructure—the extreme example being space elevators—or technologies to clean up landfills and oceanic pollution. They could work on improved batteries for solar and wind power, or nanotechnology to revolutionize manufacturing.

Biologists could work on any number of diseases, from cancer and diabetes to malaria and antibiotic-resistant tuberculosis. They could work on stem-cell research and regenerative medicine, or genetic engineering and body enhancement, or on gerontology and age reversal. Biology is a field with so many important unsolved problems that if you have the stomach for it and the interest in some biological problem, you can’t really go wrong.

Electrical engineers can obviously work on improving the power and performance of computer systems, though I think over the last 20 years or so the marginal benefits of that kind of research have begun to wane. Efforts might be better spent in cybernetics, control systems, or network theory, where considerably more is left uncharted; or in artificial intelligence, where computing power is only the first step.

Mechanical engineers could work on making vehicles safer and cheaper, or building reusable spacecraft, or designing self-constructing or self-repairing infrastructure. They could work on 3D printing and just-in-time manufacturing, scaling it up for whole factories and down for home appliances.

Aerospace engineers could link the world with hypersonic travel, build satellites to provide Internet service to the farthest reaches of the globe, or create interplanetary rockets to colonize Mars and the moons of Jupiter and Saturn. They could mine asteroids and make previously rare metals ubiquitous. They could build aerial drones for delivery of goods and revolutionize logistics.

Agronomists could work on sustainable farming methods (hint: stop farming meat), or invent new strains of crops that are hardier against pests, more nutritious, or higher-yielding. On the other hand, a lot of this is already being done, so maybe it’s time to think outside the box and consider what we might do to make our food system more robust against climate change or other catastrophes.

Ecologists will obviously be working on predicting and mitigating the effects of global climate change, but there are a wide variety of ways of doing so. You could focus on ocean acidification, or on desertification, or on fishery depletion, or on carbon emissions. You could work on getting the climate models so precise that they become completely undeniable to anyone but the most dogmatically opposed. You could focus on endangered species and habitat disruption. Ecology is in general so underfunded and undersupported that basically anything you could do in ecology would be beneficial.

Neuroscientists have plenty of things to do as well: Understanding vision, memory, motor control, facial recognition, emotion, decision-making and so on. But one topic in particular is lacking in researchers, and that is the fundamental Hard Problem of consciousness. This one is going to be an uphill battle, and will require a special level of tenacity and perseverance. The problem is so poorly understood it’s difficult to even state clearly, let alone solve. But if you could do it—if you could even make a significant step toward it—it could literally be the greatest achievement in the history of humanity. It is one of the fundamental questions of our existence, the very thing that separates us from inanimate matter, the very thing that makes questions possible in the first place. Understand consciousness and you understand the very thing that makes us human. That achievement is so enormous that it seems almost petty to point out that the revolutionary effects of artificial intelligence would also fall into your lap.

The arts and humanities also have a great deal to contribute, and are woefully underappreciated.

Artists, authors, and musicians all have the potential to make us rethink our place in the world, reconsider and reimagine what we believe and strive for. If physics and engineering can make us better at winning wars, art and literature can remind us why we should never fight them in the first place. The greatest works of art can remind us of our shared humanity, linking us all together in a grander civilization that transcends the petty boundaries of culture, geography, or religion. Art can also be timeless in a way nothing else can; most of Aristotle’s science is long since refuted, but the Great Pyramid, built thousands of years before him, continues to awe us. (Aristotle is about equidistant chronologically between us and the Great Pyramid.)

Philosophers may not seem like they have much to add—and to be fair, a great deal of what goes on today in metaethics and epistemology doesn’t add much to civilization—but in fact it was Enlightenment philosophy that brought us democracy, the scientific method, and market economics. Today there are still major unsolved problems in ethics—particularly bioethics—that are in need of philosophical research. Technologies like nanotechnology and genetic engineering offer us the promise of enormous benefits, but also the risk of enormous harms; we need philosophers to help us decide how to use these technologies to make our lives better instead of worse. We need to know where to draw the lines between life and death, between justice and cruelty. Literally nothing could be more important than knowing right from wrong.

Now that I have sung the praises of the natural sciences and the humanities, let me explain why I am a social scientist, and why you probably should be as well.

Psychologists and cognitive scientists obviously have a great deal to give us in the study of mental illness, but they may actually have more to contribute in the study of mental health—in understanding not just what makes us depressed or schizophrenic, but what makes us happy or intelligent. The 21st century may see not simply the end of mental illness, but the rise of a new level of mental prosperity, where being happy, focused, and motivated are matters of course. The revolution that biology has brought to our lives may pale in comparison to the revolution that psychology will bring. On the more social side of things, psychology may allow us to understand nationalism, sectarianism, and the tribal instinct in general, and allow us to finally learn to undermine fanaticism, encourage critical thought, and make people more rational. The benefits of this are almost impossible to overstate: It is our own limited, broken, 90%-or-so heuristic rationality that has brought us from simians to Shakespeare, from gorillas to Gödel. To raise that figure to 95% or 99% or 99.9% could be as revolutionary as whatever evolutionary change first brought us onto the savannah as Australopithecus africanus.

Sociologists and anthropologists will also have a great deal to contribute to this process, as they approach the tribal instinct from the top down. They may be able to tell us how nations are formed and undermined, and why some cultures assimilate while others collide. They can work to understand and combat bigotry in all its forms: racism, sexism, ethnocentrism. These could be the fields that finally end war, by understanding and correcting the imbalances in human societies that give rise to violent conflict.

Political scientists and public policy researchers can allow us to understand and restructure governments, undermining corruption, reducing inequality, making voting systems more expressive and more transparent. They can search for the keystones of different political systems, finding the weaknesses in democracy to shore up and the weaknesses in autocracy to exploit. They can work toward a true international government, representative of all the world’s people and with the authority and capability to enforce global peace. If the sociologists don’t end war and genocide, perhaps the political scientists can—or more likely they can do it together.

And then, at last, we come to economists. While I certainly work with a lot of ideas from psychology, sociology, and political science, I primarily consider myself an economist. Why is that? Why do I think the most important problems for me—and perhaps everyone—to be working on are fundamentally economic?

Because, above all, economics is broken. The other social sciences are basically on the right track; their theories are still very limited, their models are not very precise, and there are decades of work left to be done, but the core principles upon which they operate are correct. Economics is the field to work in because of criterion 3: Almost all the important problems in economics are underinvested.

Macroeconomics is where we are doing relatively well, and yet the Keynesian models that allowed us to reduce the damage of the Second Depression had no power to predict its arrival. While inflation has been at least somewhat tamed, the far worse problem of unemployment has not been resolved or even really understood.

When we get to microeconomics, the neoclassical models are totally defective. Their core assumptions of total rationality and total selfishness are embarrassingly wrong. We have no idea what controls asset prices, or what sets credit constraints, or what motivates investment decisions. Our models of how people respond to risk are all wrong. We have no formal account of altruism or its limitations. As manufacturing is increasingly automated and work shifts into services, most economic models make no distinction between the two sectors. While finance takes over more and more of our society’s wealth, most formal models of the economy don’t even include a financial sector.

Economic forecasting is no better than chance. The most widely used asset-pricing model, CAPM, fails completely in empirical tests; its defenders concede this and then have the audacity to declare that it doesn’t matter because the mathematics works. The Black-Scholes derivative-pricing model caused the Second Depression, and that failure could easily have been predicted: the model contains a term that assumes normally distributed returns, when we know for a fact that financial markets are fat-tailed. Simply put, it treats as essentially impossible events that actually occur several times a year.
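To make the fat-tails point concrete, here is a minimal sketch of the arithmetic (in Python with scipy; the code, the 252-trading-day year, and the particular sigma thresholds are my illustration, not anything from the original argument). It shows how rarely large one-day market moves should occur if daily returns really were normally distributed, as such models assume:

```python
from scipy.stats import norm

TRADING_DAYS_PER_YEAR = 252  # rough number of trading days in a year

for sigmas in (4, 5, 10):
    # Two-sided tail probability of a one-day move beyond +/- `sigmas`
    # standard deviations, under the normal distribution the model assumes.
    p = 2 * norm.sf(sigmas)
    years_between = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{sigmas}-sigma day: probability {p:.1e}, "
          f"expected about once every {years_between:,.0f} years")
```

Under the normal assumption, a 4-sigma day should come along roughly once in several decades, a 5-sigma day roughly once in several thousand years, and a 10-sigma day essentially never; real markets produce moves of these sizes far more often than that (the 1987 crash, by most estimates, was on the order of 20 standard deviations under such a model). That mismatch is exactly the “events that should never happen keep happening” problem.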

Worst of all, economics is the field that people listen to. When a psychologist or sociologist says something on television, people say that it sounds interesting and basically ignore it. When an economist says something on television, national policies are shifted accordingly. Austerity exists as national policy in part due to a spreadsheet error by two famous economists.

Keynes already knew this in 1936: “The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

Meanwhile, the problems that economics deals with have a direct influence on the lives of millions of people. Bad economics gives us recessions and depressions; it cripples our industries and siphons off wealth to an increasingly corrupt elite. Bad economics literally starves people: It is because of bad economics that there is still such a thing as world hunger. We have enough food and the technology to distribute it—but we don’t have the economic policy to lift people out of poverty so that they can afford to buy it. Bad economics is why we don’t have the funding to cure diabetes or colonize Mars (but we have the funding for oil fracking and aircraft carriers, don’t we?). All of that other scientific research that needs to be done probably could be done, if the resources of our society were properly distributed and utilized.

This combination of overwhelming influence, overwhelming importance, and overwhelming error makes economics the low-hanging fruit; you don’t even have to be particularly brilliant to have better ideas than most economists (though no doubt it helps if you are). Economics is where we have a whole bunch of important questions that are unanswered—or the answers we have are wrong. (As Will Rogers said, “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”)

Thus, rather than telling you to go into finance and earn to give, those economists could simply have said: “You should become an economist. You could hardly do worse than we have.”