Do we always want to internalize externalities?

JDN 2457437

I often talk about the importance of externalities; there is a full discussion in this earlier post, and one of their important implications, the tragedy of the commons, is covered in another. Briefly, externalities are consequences of actions imposed on people who did not perform those actions. Anything I do that affects you, and that you had no say in, is an externality.

Usually I’m talking about how we want to internalize externalities, meaning that we set up a system of incentives to make it so that the consequences fall upon the people who chose the actions instead of anyone else. If you pollute a river, you should have to pay to clean it up. If you assault someone, you should serve jail time as punishment. If you invent a new technology, you should be rewarded for it. These are all attempts to internalize externalities.

But today I’m going to push back a little, and ask whether we really always want to internalize externalities. If you think carefully, it’s not hard to come up with scenarios where it actually seems fairer to leave the externality in place, or perhaps reduce it somewhat without eliminating it.

For example, suppose indeed that someone invents a great new technology. To be specific, let’s think about Jonas Salk, inventing the polio vaccine. This vaccine saved the lives of thousands of people and saved millions more from pain and suffering. Its value to society is enormous, and of course Salk deserved to be rewarded for it.

But we did not actually fully internalize the externality. If we had, every family whose child was saved from polio would have had to pay Jonas Salk an amount equal to what they saved on medical treatments as a result, or even an amount somehow equal to the value of their child’s life (imagine how offended people would get if you asked that on a survey!). Those millions of people spared from suffering would need to each pay, at minimum, thousands of dollars to Jonas Salk, making him of course a billionaire.

And indeed this is more or less what would have happened, if he had been willing and able to enforce a patent on the vaccine. The inability of some to pay for the vaccine at its monopoly prices would add some deadweight loss, but even that could be removed if Salk Industries had found a way to offer targeted price vouchers that let them precisely price-discriminate so that every single customer paid exactly what they could afford to pay. If that had happened, we would have fully internalized the externality and therefore maximized economic efficiency.

But doesn’t that sound awful? Doesn’t it sound much worse than what we actually did, where Jonas Salk received a great deal of funding and support from governments and universities, and lived out his life comfortably upper-middle class as a tenured university professor?

Now, perhaps he should have been awarded a Nobel Prize—I take that back, there’s no “perhaps” about it, he definitely should have been awarded a Nobel Prize in Medicine, it’s absurd that he did not—which means that I at least do feel the externality should have been internalized a bit more than it was. But a Nobel Prize is only 10 million SEK, about $1.1 million. That’s about enough to be independently wealthy and live comfortably for the rest of your life; but it’s a small fraction of the roughly $7 billion he could have gotten if he had patented the vaccine. Yet while the possible world in which he wins a Nobel is better than this one, I’m fairly well convinced that the possible world in which he patents the vaccine and becomes a billionaire is considerably worse.

Internalizing externalities makes sense if your goal is to maximize total surplus (a concept I explain further in the linked post), but total surplus is actually a terrible measure of human welfare.

Total surplus counts every dollar of willingness-to-pay exactly the same across different people, regardless of whether they live on $400 per year or $4 billion.

It also takes no account whatsoever of how wealth is distributed. Suppose a new technology adds $10 billion in wealth to the world. As far as total surplus, it makes no difference whether that $10 billion is spread evenly across the entire planet, distributed among a city of a million people, concentrated in a small town of 2,000, or even held entirely in the bank account of a single man.

Particularly apropos of the Salk example, total surplus makes no distinction between these two scenarios: a perfectly-competitive market where everything is sold at a fair price, and a perfectly price-discriminating monopoly, where everything is sold at the very highest possible price each person would be willing to pay.

This is a perfectly-competitive market, where the benefits are divided more or less equally (in this case exactly equally, though that need not be true in real life) between sellers and buyers:

elastic_supply_competitive_labeled

This is a perfectly price-discriminating monopoly, where the benefits accrue entirely to the corporation selling the good:

elastic_supply_price_discrimination

In the former case, the company profits, consumers are better off, everyone is happy. In the latter case, the company reaps all the benefits and everyone else is left exactly as they were. In real terms those are obviously very different outcomes—the former being what we want, the latter being the cyberpunk dystopia we seem to be hurtling mercilessly toward. But in terms of total surplus, and therefore the kind of “efficiency” that is maximized by internalizing all externalities, they are indistinguishable.

In fact (as I hope to publish a paper about at some point), the way willingness-to-pay works, it weights rich people more. Redistributing goods from the poor to the rich will typically increase total surplus.

Here’s an example. Suppose there is a cake, which is sufficiently delicious that it offers 2 milliQALY in utility to whoever consumes it (this is a truly fabulous cake). Suppose there are two people to whom we might give this cake: Richie, who has $10 million in annual income, and Hungry, who has only $1,000 in annual income. How much will each of them be willing to pay?

Well, assuming logarithmic marginal utility of wealth (which is itself probably biasing slightly in favor of the rich), 1 milliQALY is about $1 to Hungry, so Hungry will be willing to pay $2 for the cake. To Richie, however, 1 milliQALY is about $10,000; so he will be willing to pay a whopping $20,000 for this cake.

What this means is that the cake will almost certainly be sold to Richie; and if we proposed a policy to redistribute the cake from Richie to Hungry, economists would emerge to tell us that we have just reduced total surplus by $19,998 and thereby committed a great sin against economic efficiency. They will cajole us into returning the cake to Richie and thus raising total surplus by $19,998 once more.

This despite the fact that I stipulated that the cake is worth just as much in real terms to Hungry as it is to Richie; the difference is due to their wildly differing marginal utility of wealth.

Indeed, it gets worse, because even if we suppose that the cake is worth much more in real utility to Hungry—because he is in fact hungry—it can still easily turn out that Richie’s willingness-to-pay is substantially higher. Suppose that Hungry actually gets 20 milliQALY out of eating the cake, while Richie still only gets 2 milliQALY. Hungry’s willingness-to-pay is now $20, but Richie is still going to end up with the cake.
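The arithmetic in these two cake scenarios can be sketched in a few lines. This is a minimal sketch under the post’s assumptions: logarithmic utility of wealth, calibrated so that 1 milliQALY is worth about $1 at $1,000 of annual income; the function name and the linear approximation are mine.

```python
def willingness_to_pay(milli_qaly, annual_income, reference_income=1000.0):
    """Dollar value of a utility gain under logarithmic utility of wealth.

    With log utility, marginal utility of wealth is proportional to 1/income,
    so the dollar value of a fixed utility gain scales linearly with income.
    Calibration: 1 milliQALY is worth about $1 at $1,000 of annual income.
    """
    return milli_qaly * annual_income / reference_income

# Hungry ($1,000/year) values the 2-milliQALY cake at $2:
print(willingness_to_pay(2, 1_000))        # 2.0
# Richie ($10 million/year) values the same cake at $20,000:
print(willingness_to_pay(2, 10_000_000))   # 20000.0
# Even if the cake is worth 20 milliQALY to Hungry, he bids only $20:
print(willingness_to_pay(20, 1_000))       # 20.0
```

The linear approximation is reasonable here because each stake is small relative to the buyer’s income; the qualitative conclusion (Richie outbids Hungry by orders of magnitude) does not depend on it.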

Now, if your thought is, “Why would Richie pay $20,000, when he can go to another store and get another cake that’s just as good for $20?” Well, he wouldn’t. But in the sense we mean for total surplus, willingness-to-pay isn’t what you’d actually pay given the going prices of goods; it is the absolute maximum price you’d be willing to pay to get that good under any circumstances, which is the marginal utility of the good divided by your marginal utility of wealth. In this sense the cake is “worth” $20,000 to Richie, and “worth” substantially less to Hungry—not because it’s actually worth less to Hungry in real terms, but simply because Richie has so much more money.

Even economists often equate these two, implicitly assuming that we are spending our money up to the point where our marginal willingness-to-pay is the actual price we choose to pay; but in general our willingness-to-pay is higher than the price if we are willing to buy the good at all. The consumer surplus we get from goods is in fact equal to the difference between willingness-to-pay and actual price paid, summed up over all the goods we have purchased.
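That definition of consumer surplus translates directly into code. A minimal sketch; the list-of-pairs data format is my own illustration, not a standard representation.

```python
def consumer_surplus(purchases):
    """Total consumer surplus over a list of (willingness_to_pay, price_paid)
    pairs for goods actually purchased: the sum of the gaps between what
    each buyer would have been willing to pay and what they actually paid."""
    return sum(wtp - price for wtp, price in purchases)

# Someone who would pay up to $30 for a book bought at $12, and up to
# $5 for a coffee bought at $3, enjoys $20 of consumer surplus:
print(consumer_surplus([(30, 12), (5, 3)]))  # 20
```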

Internalizing all externalities would definitely maximize total surplus—but would it actually maximize happiness? Probably not.

If you asked most people what their marginal utility of wealth is, they’d have no idea what you’re talking about. But most people do actually have an intuitive sense that a dollar is worth more to a homeless person than it is to a millionaire, and that’s really all we mean by diminishing marginal utility of wealth.

I think the reason we’re uncomfortable with the idea of Jonas Salk getting $7 billion from selling the polio vaccine, rather than the same number of people getting the polio vaccine and Jonas Salk only getting the $1.1 million from a Nobel Prize, is that we intuitively grasp that after that $1.1 million makes him independently wealthy, the rest of the money is just going to sit in some stock account and continue making even more money, while if we’d let the families keep it they would have put it to much better use raising their children who are now protected from polio. We do want to reward Salk for his great accomplishment, but we don’t see why we should keep throwing cash at him when it could obviously be spent in better ways.

And indeed I think this intuition is correct; great accomplishments—which is to say, large positive externalities—should be rewarded, but not in direct proportion. Maybe there should be some threshold above which we say, “You know what? You’re rich enough now; we can stop giving you money.” Or maybe it should simply damp down very quickly, so that a contribution which is worth $10 billion to the world pays only slightly more than one that is worth $100 million, but a contribution that is worth $100,000 pays considerably more than one which is only worth $10,000.

What it ultimately comes down to is that if we make all the benefits accrue to the person who produced them, there aren’t any benefits anymore. The whole point of Jonas Salk inventing the polio vaccine (or Einstein discovering relativity, or Darwin figuring out natural selection, or any great achievement) is that it will benefit the rest of humanity, ideally on into future generations. If you managed to fully internalize that externality, this would no longer be true; Salk and Einstein and Darwin would have become fabulously wealthy, and then somehow we’d all have to continue paying into their estates or something an amount equal to the benefits we received from their discoveries. (Every time you use your GPS, pay a royalty to the Einsteins. Every time you take a pill, pay a royalty to the Darwins.) At some point we’d probably get fed up and decide we’re no better off with them than without them—which is exactly, by construction, how we should feel if the externality were fully internalized.

Internalizing negative externalities is much less problematic—it’s your mess, clean it up. We don’t want other people to be harmed by your actions, and if we can pull that off that’s fantastic. (In reality, we usually can’t fully internalize negative externalities, but we can at least try.)

But maybe internalizing positive externalities really isn’t so great after all.

Bet five dollars for maximum performance

JDN 2457433

One of the more surprising findings from the study of human behavior under stress is the Yerkes-Dodson curve:

OriginalYerkesDodson

This curve shows how well humans perform at a given task, as a function of how high the stakes are on whether or not they do it properly.

For simple tasks, it says what most people intuitively expect—and what neoclassical economists appear to believe: As the stakes rise, the more highly incentivized you are to do it, and the better you do it.

But for complex tasks, it says something quite different: While increased stakes do raise performance to a point—with nothing at stake at all, people hardly work at all—it is possible to become too incentivized. Formally we say the curve is not monotonic; it has a local maximum.
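One simple way to capture that shape in code is a curve that rises with the stakes, peaks, and then declines, with the peak arriving at lower stakes as the task gets more complex. This functional form is purely illustrative (my own sketch, not the empirical Yerkes-Dodson fit):

```python
import math

def performance(stakes, complexity=1.0):
    """Illustrative Yerkes-Dodson-style curve: performance rises with the
    stakes at first, peaks, then declines as stress takes over. Higher
    task complexity moves the peak to lower stakes."""
    optimal = 1.0 / complexity  # stakes level at which performance peaks
    return stakes * math.exp(-stakes / optimal)

# For a complex task, modest stakes beat both trivial and enormous stakes:
assert performance(0.5, complexity=2.0) > performance(0.1, complexity=2.0)
assert performance(0.5, complexity=2.0) > performance(5.0, complexity=2.0)
```

Note the curve is not monotonic: differentiating shows the maximum sits exactly at the `optimal` stakes level, after which more incentive strictly hurts.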

This is one of many reasons why it’s ridiculous to say that top CEOs should make tens of millions of dollars a year on the rise and fall of their company’s stock price (as a great many economists do in fact say). Even if I believed that stock prices accurately reflect the company’s viability (they do not), and believed that the CEO has a great deal to do with the company’s success, it would still be a case of overincentivizing. When a million dollars rides on a decision, that decision is going to be worse than if the stakes had only been $100. With this in mind, it’s really not surprising that higher CEO pay is correlated with worse company performance. Stock options are terrible motivators, but do offer a subtle way of making wages adjust to the business cycle.

The reason for this is that as the stakes get higher, we become stressed, and that stress response inhibits our ability to use higher cognitive functions. The sympathetic nervous system evolved to make us very good at fighting or running away in the face of danger, which works well should you ever be attacked by a tiger. It did not evolve to make us good at complex tasks under high stakes, the sort of skill we’d need when calculating the trajectory of an errant spacecraft or disarming a nuclear warhead.

To be fair, most of us never have to worry about piloting errant spacecraft or disarming nuclear warheads—indeed, even in today’s world you’re about as likely to be attacked by a tiger as to pilot a spacecraft. (The rate of tiger attacks in the US is just under 2 per year, and the rate of manned space launches in the US was about 5 per year until the Space Shuttle program was terminated.)

There are certain professions, such as pilots and surgeons, where performing complex tasks under life-or-death pressure is commonplace, but only a small fraction of people take such professions for precisely that reason. And if you’ve ever wondered why we use checklists for pilots and there is discussion of also using checklists for surgeons, this is why—checklists convert a single complex task into many simple tasks, allowing high performance even at extreme stakes.

But we do have to do a fair number of quite complex tasks with stakes that are, if not urgent life-or-death scenarios, then at least actions that affect our long-term life prospects substantially. In my tutoring business I encounter one in particular quite frequently: Standardized tests.

Tests like the SAT, ACT, GRE, LSAT, GMAT, and other assorted acronyms are not literally life-or-death, but they often feel that way to students because they really do have a powerful impact on where you’ll end up in life. Will you get into a good college? Will you get into grad school? Will you get the job you want? Even subtle deviations from the path of optimal academic success can make it much harder to achieve career success in the future.

Of course, these are hardly the only examples. Many jobs require us to complete tasks properly on tight deadlines, or else risk being fired. Working in academia infamously requires publishing in journals in time to rise up the tenure track, or else falling off the track entirely. (This incentivizes the production of huge numbers of papers, whether they’re worth writing or not; yes, the number of papers published goes down after tenure, but is that a bad thing? What we need to know is whether the number of good papers goes down. My suspicion is that most if not all of the reduction in publications is due to not publishing things that weren’t worth publishing.)

So if you are faced with this sort of task, what can you do? If you realize that you are faced with a high-stakes complex task, you know your performance will be bad—which only makes your stress worse!

My advice is to pretend you’re betting five dollars on the outcome.

Ignore all other stakes, and pretend you’re betting five dollars. $5.00 USD. Do it right and you get a Lincoln; do it wrong and you lose one.

What this does is ensure that you care enough—you don’t want to lose $5 for no reason—but not too much—if you do lose $5, you don’t feel like your life is ending. We want to put you near that peak of the Yerkes-Dodson curve.

The great irony here is that you most want to do this when it is most untrue. If you actually do have a task for which you’ve bet $5 and nothing else rides on it, you don’t need this technique, and any technique to improve your performance is not particularly worthwhile. It’s when you have a standardized test to pass that you really want to use this—and part of me even hopes that people know to do this whenever they have nuclear warheads to disarm. It is precisely when the stakes are highest that you must put those stakes out of your mind.

Why five dollars? Well, the exact amount is arbitrary, but this is at least about the right order of magnitude for most First World individuals. If you really want to get precise, I think the optimal stakes level for maximum performance is something like 100 microQALY per task, and assuming logarithmic utility of wealth, $5 at the US median household income of $53,600 is approximately 100 microQALY. If you have a particularly low or high income, feel free to adjust accordingly. Literally you should be prepared to bet about an hour of your life; but we are not accustomed to thinking that way, so use $5. (I think most people, if asked outright, would radically overestimate what an hour of life is worth to them. “I wouldn’t give up an hour of my life for $1,000!” Then why do you work at $20 an hour?)
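Under the same assumptions (log utility of wealth, a 100-microQALY target, and the calibration that 1 milliQALY is worth about income/1,000 dollars), the income-adjusted stake works out to about one ten-thousandth of annual income. The function below is my own sketch of that adjustment:

```python
def optimal_stake(annual_income):
    """Dollar stake worth roughly 100 microQALY under log utility of wealth,
    calibrated so 1 milliQALY ~ (annual income / 1,000) dollars.
    100 microQALY = 0.1 milliQALY, hence annual income / 10,000."""
    return annual_income / 10_000

print(round(optimal_stake(53_600), 2))   # 5.36 -- roughly $5 at US median income
print(round(optimal_stake(200_000), 2))  # 20.0 -- bet a twenty if you earn more
```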

It’s a simple heuristic, easy to remember, and sometimes effective. Give it a try.

Why Millennials feel “entitled”

JDN 2457064

I’m sure you’ve already heard this plenty of times before, but just in case here are a few particularly notable examples: “Millennials are entitled.” “Millennials are narcissistic.” “Millennials expect instant gratification.”

Fortunately there are some more nuanced takes as well: One survey shows that we are perceived as “entitled” and “self-centered” but also “hardworking” and “tolerant”. This article convincingly argues that Baby Boomers show at least as much ‘entitlement’ as we do. Another article points out that young people have been called these sorts of names for decades—though actually the proper figure is centuries.

Though some of the ‘defenses’ leave a lot to be desired: “OK, admittedly, people do live at home. But that’s only because we really like our parents. And why shouldn’t we?” Uh, no, that’s not it. Nor is it that we’re holding off on getting married. The reason we live with our parents is that we have no money and can’t pay for our own housing. And why aren’t we getting married? Because we can’t afford to pay for a wedding, much less buy a home and start raising kids. (Since the time I drafted this for Patreon and it went live, yet another article hand-wringing over why we’re not getting married was published in Scientific American, of all places.)

Are we not buying cars because we don’t like cars? No, we’re not buying cars because we can’t afford to pay for them.

The defining attributes of the Millennial generation are that we are young (by definition) and broke (with very few exceptions). We’re not uniquely narcissistic or even tolerant; younger generations always have these qualities.

But there may be some kernel of truth here, which is that we were promised a lot more than we got.

Educational attainment in the United States is the highest it has ever been. Take a look at this graph from the US Department of Education:

Percentage of 25- to 29-year-olds who completed a bachelor’s or higher degree, by race/ethnicity: Selected years, 1990–2014

education_attainment_race

More young people of every demographic except American Indians now have college degrees (and those figures fluctuate a lot because of small samples—whether my high school had an achievement gap for American Indians depended upon how I self-identified on the form, because there were only two others and I was tied for the highest GPA).

Even the IQ of Millennials is higher than that of our parents’ generation, which is higher than their parents’ generation: (measured) intelligence rises over time in what is called the Flynn Effect. IQ tests have to be adjusted to be harder by about 3 points every 10 years because otherwise the average score would stop being 100.

As your level of education increases, your income tends to go up and your unemployment tends to go down. In 2014, while people with doctorates or professional degrees had about 2% unemployment and made a median income of $1590 per week, people without even high school diplomas had about 9% unemployment and made a median income of only $490 per week. The Bureau of Labor Statistics has a nice little bar chart of these differences:

education_employment_earnings

Now the difference is not quite as stark. With the most recent data, the unemployment rate is 6.7% for people without a high school diploma and 2.5% for people with a bachelor’s degree or higher.

But that’s for the population as a whole. What about the population of people 18 to 35, those of us commonly known as Millennials?

Well, first of all, our unemployment rate overall is much higher. With the most recent data, unemployment among people ages 20-24 is a whopping 9.4%. For ages 25 to 34 it gets better, 5.3%; but it’s still much worse than unemployment at ages 35-44 (4.0%), 45-54 (3.6%), or 55+ (3.2%). Overall, unemployment among Millennials is about 6.7% while unemployment among Baby Boomers is about 3.2%, half as much. (Gen X is in between, but a lot closer to the Boomers at around 3.8%.)

It was hard to find data specifically breaking it down by both age and education at the same time, but the hunt was worth it.

Among people age 20-24 not in school:

Without a high school diploma, 328,000 are unemployed, out of 1,501,000 in the labor force. That’s an unemployment rate of 21.9%. Not a typo, that’s 21.9%.

With only a high school diploma, 752,000 are unemployed, out of 5,498,000 in the labor force. That’s an unemployment rate of 13.7%.

With some college but no bachelor’s degree, 281,000 are unemployed, out of 3,620,000 in the labor force. That’s an unemployment rate of 7.7%.

With a bachelor’s degree, 90,000 are unemployed, out of 2,313,000 in the labor force. That’s an unemployment rate of 3.9%.
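The rates above follow directly from the counts: unemployed divided by labor force. A quick check (counts in thousands, as given):

```python
def unemployment_rate(unemployed, labor_force):
    """Unemployment rate in percent: unemployed as a share of the labor force."""
    return 100.0 * unemployed / labor_force

# Ages 20-24, not in school:
for label, u, lf in [
    ("No high school diploma", 328, 1_501),
    ("High school diploma only", 752, 5_498),
    ("Some college, no degree", 281, 3_620),
    ("Bachelor's degree", 90, 2_313),
]:
    print(f"{label}: {unemployment_rate(u, lf):.1f}%")
```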

What this means is that someone 24 or under needs to have a bachelor’s degree in order to have the same overall unemployment rate that people from Gen X have in general, and even with a bachelor’s degree, people under 24 still have a higher unemployment rate than what Baby Boomers simply have by default. If someone under 24 doesn’t even have a high school diploma, forget it; their unemployment rate is comparable to the population unemployment rate at the trough of the Great Depression.

In other words, we need to have college degrees just to match the general population older than us, of whom only 20% have a college degree; and there is absolutely nothing a Millennial can do in terms of education to ever have the tiny unemployment rate (about 1.5%) of Baby Boomers with professional degrees. (Be born White, be in perfect health, have a professional degree, have rich parents, and live in a city with very high employment, and you just might be able to pull it off.)

So, why do Millennials feel like a college degree should “entitle” us to a job?

Because it does for everyone else.

Why do we feel “entitled” to a higher standard of living than the one we have?

Take a look at this graph of GDP per capita in the US:

US_GDP_per_capita

You may notice a rather sudden dip in 2009, around the time most Millennials graduated from college and entered the labor force. On the next graph, I’ve added a curve approximating what it would look like if the previous trend had continued:

US_GDP_per_capita_trend

(There’s a lot on this graph for wonks like me. You can see how the unit-root hypothesis seemed to fail in the previous four recessions, where economic output rose back up to potential; but it clearly held in this recession, and there was a permanent loss of output. It also failed in the recession before that. So what’s the deal? Why do we recover from some recessions and take a permanent blow from others?)

If the Great Recession hadn’t happened, instead of per-capita GDP being about $46,000 in 2005 dollars, it would instead be closer to $51,000 in 2005 dollars. In today’s money, that means our current $56,000 would be instead closer to $62,000. If we had simply stayed on the growth trajectory we were promised, we’d be about 10 log points richer (roughly 11%, for the uninitiated).
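For readers unfamiliar with log points: a log point is 100 times the natural log of the ratio, which for small changes is close to the percentage change. A quick sketch with the figures above:

```python
import math

def log_points(new, old):
    """Difference in log points: 100 * ln(new/old)."""
    return 100.0 * math.log(new / old)

def percent_change(new, old):
    """Ordinary percentage change, for comparison."""
    return 100.0 * (new / old - 1.0)

# Roughly $51,000 on the pre-recession trend vs. $46,000 actual (2005 dollars):
print(round(log_points(51_000, 46_000), 1))      # 10.3 log points
print(round(percent_change(51_000, 46_000), 1))  # 10.9 percent
```

The advantage of log points is that they add up cleanly: a 10-log-point gain followed by a 10-log-point loss returns you exactly to where you started, which is not true of percentages.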

So, why do Millennials feel “entitled” to things we don’t have? In a word, macroeconomics.

People anchored their expectations of what the world would be like on forecasts. The forecasts said that the skies were clear and economic growth would continue apace; so naturally we assumed that this was true. When the floor fell out from under our economy, only a few brilliant and/or lucky economists saw it coming; even people who were paying quite close attention were blindsided. We were raised in a world where economic growth promised rising standard of living and steady employment for the rest of our lives. And then the storm hit, and we were thrown into a world of poverty and unemployment—and especially poverty and unemployment for us.

We are angry about how we had been promised more than we were given, angry about how the distribution of what wealth we do have gets ever more unequal. We are angry that our parents’ generation promised what they could not deliver, and angry that it was their own blind worship of the corrupt banking system that allowed the crash to happen.

And because we are angry and demand a fairer share, they have the audacity to call us “narcissistic”.

Why is it so hard to get a job?

JDN 2457411

The United States is slowly dragging itself out of the Second Depression.

Unemployment fell from almost 10% to about 5%.

Core inflation has been kept between 0% and 2% most of the time.

Overall inflation has been within a reasonable range:

US_inflation

Real GDP has returned to its normal growth trend, though with a permanent loss of output relative to what would have happened without the Great Recession.

US_GDP_growth

Consumption spending is also back on trend, tracking GDP quite precisely.

The Federal Reserve even raised the federal funds interest rate above the zero lower bound, signaling a return to normal monetary policy. (As I argued previously, I’m pretty sure that was their main goal actually.)

Employment remains well below the pre-recession peak, but is now beginning to trend upward once more.

The only thing that hasn’t recovered is labor force participation, which continues to decline. This is how we can have unemployment go back to normal while employment remains depressed; people leave the labor force by retiring, going back to school, or simply giving up looking for work. By the formal definition, someone is only unemployed if they are actively seeking work. No, this is not new, and it is certainly not Obama rigging the numbers. This is how we have measured unemployment for decades.

Actually, it’s kind of the opposite: Since the Clinton administration we’ve also kept track of “broad unemployment”, which includes people who’ve given up looking for work or people who have some work but are trying to find more. But we can’t directly compare it to anything that happened before 1994, because the BLS didn’t keep track of it before then. All we can do is estimate based on what we did measure. Based on such estimation, it is likely that broad unemployment in the Great Depression may have gotten as high as 50%. (I’ve found that one of the best-fitting models is actually one of the simplest; assume that broad unemployment is 1.8 times narrow unemployment. This fits much better than you might think.)
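That simple model is literally just a constant multiplier. To be clear, the 1.8 factor is the post’s empirical rule of thumb, not an official BLS formula:

```python
def broad_unemployment_estimate(narrow_rate):
    """Rough estimate of 'broad' unemployment (discouraged workers plus the
    underemployed) from the headline ('narrow') rate, using the simple
    best-fitting model described in the post: a constant factor of 1.8."""
    return 1.8 * narrow_rate

# Headline unemployment peaked near 25% in the Great Depression, implying
# broad unemployment in the neighborhood of 45-50%:
print(broad_unemployment_estimate(25.0))  # 45.0
```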

So, yes, we muddle our way through, and the economy eventually heals itself. We could have brought the economy back much sooner if we had better fiscal policy, but at least our monetary policy was good enough that we were spared the worst.

But I think most of us—especially in my generation—recognize that it is still really hard to get a job. Overall GDP is back to normal, and even unemployment looks all right; but why are so many people still out of work?

I have a hypothesis about this: I think a major part of why it is so hard to recover from recessions is that our system of hiring is terrible.

Contrary to popular belief, layoffs do not actually substantially increase during recessions. Quits are substantially reduced, because people are afraid to leave current jobs when they aren’t sure of getting new ones. As a result, rates of job separation actually go down in a recession. Job separation does predict recessions, but not in the way most people think. One of the things that made the Great Recession different from other recessions is that most layoffs were permanent, instead of temporary—but we’re still not sure exactly why.

Here, let me show you some graphs from the BLS.

This graph shows job openings from 2005 to 2015:

job_openings

This graph shows hires from 2005 to 2015:

job_hires

Both of those show the pattern you’d expect, with openings and hires plummeting in the Great Recession.

But check out this graph, of job separations from 2005 to 2015:

job_separations

Same pattern!

Unemployment in the Second Depression wasn’t caused by a lot of people losing jobs. It was caused by a lot of people not getting jobs—either after losing previous ones, or after graduating from school. There weren’t enough openings, and even when there were openings there weren’t enough hires.

Part of the problem is obviously just the business cycle itself. Spending drops because of a financial crisis, then businesses stop hiring people because they don’t project enough sales to justify it; then spending drops even further because people don’t have jobs, and we get caught in a vicious cycle.

But we are now recovering from the cyclical downturn; spending and GDP are back to their normal trend. Yet the jobs never came back. Something is wrong with our hiring system.

So what’s wrong with our hiring system? Probably a lot of things, but here’s one that’s been particularly bothering me for a long time.

As any job search advisor will tell you, networking is essential for career success.

There are so many different places you can hear this advice, it honestly gets tiring.

But stop and think for a moment about what that means. One of the most important determinants of what job you will get is… what people you know?

It’s not what you are best at doing, as it would be if the economy were optimally efficient.

It’s not even what you have credentials for, as we might expect as a second-best solution.

It’s not even how much money you already have, though that certainly is a major factor as well.

It’s what people you know.

Now, I realize, this is not entirely beyond your control. If you actively participate in your community, attend conferences in your field, and so on, you can establish new contacts and expand your network. A major part of the benefit of going to a good college is actually the people you meet there.

But a good portion of your social network is more or less beyond your control, and above all, says almost nothing about your actual qualifications for any particular job.

There are certain jobs, such as marketing, that actually directly relate to your ability to establish rapport and build weak relationships rapidly. These are a tiny minority. (Actually, most of them are the sort of job that I’m not even sure needs to exist.)

For the vast majority of jobs, your social skills are a tiny, almost irrelevant part of the actual skill set needed to do the job well. This is true of jobs from writing science fiction to teaching calculus, from diagnosing cancer to flying airliners, from cleaning up garbage to designing spacecraft. Social skills are rarely harmful, and even often provide some benefit, but if you need a quantum physicist, you should choose the recluse who can write down the Dirac equation by heart over the well-connected community leader who doesn’t know what an integral is.

At the very least, it strains credulity to suggest that social skills are so important for every job in the world that they should be one of the defining factors in who gets hired. And make no mistake: Networking is as beneficial for landing a job at a local bowling alley as it is for becoming Chair of the Federal Reserve. Indeed, for many entry-level positions networking is literally all that matters, while advanced positions at least exclude candidates who don’t have certain necessary credentials, and then make the decision based upon who knows whom.

Yet, if networking is so inefficient, why do we keep using it?

I can think of a couple reasons.

The first reason is that this is how we’ve always done it. Indeed, networking strongly pre-dates capitalism or even money; in ancient tribal societies there were certainly jobs to assign people to: who will gather berries, who will build the huts, who will lead the hunt. But there were no colleges, no certifications, no resumes—there was only your position in the social structure of the tribe. I think most people simply automatically default to a networking-based system without even thinking about it; it’s just the instinctual System 1 heuristic.

One of the few things I really liked about Debt: The First 5000 Years was the discussion of how similar the behavior of modern CEOs is to that of ancient tribal chieftains, for reasons that make absolutely no sense in terms of neoclassical economic efficiency—but perfect sense in light of human evolution. I wish Graeber had spent more time on that, instead of on the many long digressions about international debt policy that he clearly does not understand.

But there is a second reason as well, a better reason, a reason that we can’t simply give up on networking entirely.

The problem is that many important skills are very difficult to measure.

College degrees do a decent job of assessing our raw IQ, our willingness to persevere on difficult tasks, and our knowledge of the basic facts of a discipline (as well as a fantastic job of assessing our ability to pass standardized tests!). But when you think about the skills that really make a good physicist, a good economist, a good anthropologist, a good lawyer, or a good doctor—they really aren’t captured by any of the quantitative metrics that a college degree provides. Your capacity for creative problem-solving, your willingness to treat others with respect and dignity: these things don’t appear in a GPA.

This is especially true in research: The degree tells how good you are at doing the parts of the discipline that have already been done—but what we really want to know is how good you’ll be at doing the parts that haven’t been done yet.

Nor are skills precisely aligned with the content of a resume; the best predictor of doing something well may in fact be whether you have done so in the past—but how can you get experience if you can’t get a job without experience?

These so-called “soft skills” are difficult to measure—but not impossible. Basically the only reliable measurement mechanisms we have require knowing and working with someone for a long span of time. You can’t read it off a resume, and you can’t see it in an interview (interviews are actually a horribly biased hiring mechanism, particularly biased against women). In effect, the only way to really know if someone will be good at a job is to work with them at that job for a while.

There’s a fundamental information problem here I’ve never quite been able to resolve. It pops up in a few other contexts as well: How do you know whether a novel is worth reading without reading the novel? How do you know whether a film is worth watching without watching the film? When the quality of something can only be determined by paying the cost of purchasing it, there is basically no way of assessing the quality of things before we purchase them.

Networking is an attempt to get around this problem. To decide whether to read a novel, ask someone who has read it. To decide whether to watch a film, ask someone who has watched it. To decide whether to hire someone, ask someone who has worked with them.

The problem is that this is such a weak measure that it’s not much better than no measure at all. I often wonder what would happen if businesses were required to hire people based entirely on resumes, with no interviews, no recommendation letters, and any personal contacts treated as conflicts of interest rather than useful networking opportunities—a world where the only thing we use to decide whether to hire someone is their documented qualifications. Could it herald a golden age of new economic efficiency and job fulfillment? Or would it result in widespread incompetence and catastrophic collapse? I honestly cannot say.

The challenges of a global basic income

JDN 2457404

In the previous post I gave you the good news. Now for the bad news.

So we are hoping to implement a basic income of $3,000 per person per year worldwide, eliminating poverty once and for all.

There is no global government to implement this system. There is no global income tax to be collected or refunded. The United Nations and the World Bank, for all the good work that they do, are nowhere near powerful enough (or well-funded enough) to accomplish this feat.

Worse, the people we need to help the most, not coincidentally, live in the countries that are worst-managed. They are surrounded not only by squalor, but also by corruption, war, ethnic tension. Most of the people are underfed, uneducated, and dying from diseases such as malaria and schistosomiasis that we could treat in a day for pocket change. Their infrastructure is either crumbling or nonexistent. Their water is unsafe to drink. And worst of all, many of their governments don’t care. Tyrants like Robert Mugabe, Kim Jong-un, King Salman (of our lovely ally Saudi Arabia), and Isaias Afwerki care nothing for the interests of the people they rule, and are interested only in maximizing their own wealth and power. If we arranged to provide grants to these countries in an amount sufficient to provide the basic income, there’s no reason to think they’d actually provide it; they’d simply deposit the check in their own personal bank accounts, and use it to buy ever more extravagant mansions or build ever greater monuments to themselves. They really do seem to follow a utility function based entirely upon their own consumption; witness your neoclassical rational agent and despair.

There are ways for international institutions and non-governmental organizations to intervene to help people in these countries, and indeed many have done so to considerable effect. As bad as things are, they are much better than they used to be, and they promise to be even better tomorrow. But there is only so much they can do without the force of law at their backs, without the power to tax incomes and print currency.

We will therefore need a new kind of institutional framework, if not a true world government then something very much like it. Establishing this new government will not be easy, and worst of all I see no way to do it other than military force. Tyrants will not give up their power willingly; it will need to be taken from them. We will need to capture and imprison tyrants like Robert Mugabe and Kim Jong-un in the same way that we once did gangsters like John Dillinger and Al Capone, for ultimately a tyrant is nothing but a mob boss with an army.

Unless we can find some way to target them precisely and smoothly replace their regimes with democracies, this will mean nothing less than war, and it could kill thousands, even millions of people—but millions of people are already dying, and will continue to die as long as we leave these men in power. Sanctions might help (though sanctions kill people too), and perhaps a few can be persuaded to step down, but the rest must be overthrown, by some combination of local revolutions and international military coalitions. The best model I’ve seen for how this might be pulled off is Libya, where Qaddafi was at last removed by an international military force supporting a local revolution—but even Libya is not exactly sunshine and rainbows right now. One of the first things we need to do is seriously plan a strategy for removing repressive dictators with a minimum of collateral damage.

To many, I suspect this sounds like imperialism, colonialism redux. Didn’t so many imperialistic powers say that they were doing it to help the local population? Yes, they did; and one of the facts that we must face up to is that it was occasionally true. Or if helping the local population was not their primary motivation, it was nonetheless a consequence. Countries colonized by the British Empire in particular are now the most prosperous, free nations in the world: The United States, Canada, Australia. South Africa and India might seem like exceptions (GDP PPP per capita of $12,400 and $5,500 respectively) but they really aren’t, compared to what they were before—or even compared to what is next to them today: Angola has a per capita GDP PPP of $7,546 while Bangladesh has only $2,991. Zimbabwe is arguably an exception (per capita GDP PPP of $1,773), but their total economic collapse occurred after the British left. To include Zimbabwe in this basic income program would literally triple the income of most of their population. But to do that, we must first get through Robert Mugabe.

Furthermore, I believe that we can avoid many of the mistakes of the past. We don’t have to do exactly the same thing that countries used to do when they invaded each other and toppled governments. Of course we should not enslave, subjugate, or murder the local population—one would hope that would go without saying, but history shows it doesn’t. We also shouldn’t annex the territory and claim it as our own, nor should we set up puppet governments that are only democratic as long as it serves our interests. (And make no mistake, we have done this, all too recently.) The goal must really be to help the people of countries like Zimbabwe and Eritrea establish their own liberal democracy, including the right to make policies we don’t like—or even policies we think are terrible ideas. If we can do so without war, of course we should. But right now what is usually called “pacifism” leaves millions of people to starve while we do nothing.

The argument that we have previously supported (or even continue to support, ahem, Saudi Arabia) many of these tyrants is sort of beside the point. Yes, that is clearly true; and yes, that is clearly terrible. But do you think that if we simply leave the situation alone they’ll go away? We should never have propped up Saddam Hussein or supported the mujahideen who became the Taliban; and yes, I do think we could have known that at the time. But once they are there, what do you propose to do now? Wait for them to die? Hope they collapse on their own? Give our #thoughtsandprayers to revolutionaries? When asked what you think we should do, “We shouldn’t have done X” is not a valid response.

Imagine there is a mob boss who had kidnapped several families and is holding them in a warehouse. Suppose that at some point the police supported the mob boss in some way; in a deal to undermine a worse rival mafia family, they looked the other way on some things he did, or even gave him money that he used to strengthen his mob. (With actual police, the former is questionable, but actually done all the time; the latter would be definitely illegal. In the international analogy, both are ubiquitous.) Even suppose that the families who were kidnapped were previously from a part of town that the police would regularly shake down for petty crimes and incessant stop-and-frisks. The police definitely have a lot to answer for in all this; their crimes should not be forgotten. But how does it follow in any way that the police should not intervene to rescue the families from the warehouse? Suppose we even know that the warehouse is heavily guarded, and the resulting firefight may kill some of the hostages we are hoping to save. This gives us reason to negotiate, or to find the swiftest, most precise means to deploy the SWAT teams; but does it give us reason to do nothing?

Once again I think Al Capone is the proper analogy; when the FBI captured Al Capone, they didn’t bomb Chicago to the ground, nor did they attempt to enslave the population of Illinois. They thought of themselves as targeting one man and his lieutenants and re-establishing order and civil government to a free people; that is what we must do in Eritrea and Zimbabwe. (In response to all this, no doubt someone will say: “You just want the US to be the world’s police.” Well, no, I want an international coalition; but yes, given our military and economic hegemony, the US will take a very important role. Above all, yes, I want the world to have police. Why don’t you?)

For everything we did wrong in the recent wars in Afghanistan and Iraq, I think we actually did this part right: Afghanistan’s GDP PPP per capita has risen over 70% since 2002, and Iraq’s is now 17% higher than its pre-war peak. It’s a bit early to say whether we have really established stable liberal democracies there, and the Iraq War surely contributed to the rise of Daesh; but when the previous condition was the Taliban and Saddam Hussein it’s hard not to feel that things are at least somewhat improving. In a generation or two maybe we really will say “Iraq” in the same breath as “Korea” as one of the success stories of prosperous democracies set up after US wars. Or maybe it will all fall apart; it’s hard to say at this point.

So, we must find a way to topple the tyrants. Once that is done, we will need to funnel huge amounts of resources—at least one if not two orders of magnitude larger than our current level of foreign aid—into building infrastructure, educating people, and establishing sound institutions. Our current “record high” foreign aid is less than 0.3% of the world’s GDP. We have a model for this as well: It’s what we did in West Germany and Japan after WW2, as well as what we did in South Korea after the Korean War. It is not a coincidence that Germany soon regained its status as a world power while Japan and Korea were the first of the “Asian Tigers”, East Asian nations that rose up to join us at a First World standard of living.

Will all of this be expensive? Absolutely. By assuming $3,000 per person per year I am already figuring in an expenditure of $21 trillion per year, indefinitely. This would be the most expensive project upon which humanity has ever embarked. But it could also be the most important—an end to poverty, everywhere, forever. And we have that money, we’re simply using it for other things. At purchasing power parity the world spends over $100 trillion per year. Using 20% of the world’s income to eliminate poverty forever doesn’t seem like such a bad deal to me. (It’s not like it would disappear; it would be immediately spent back into the economy anyway. We might even see growth as a result.)

When dealing with events on this scale, it’s easy to get huge numbers that sound absurd. But even if we assumed that only the US, Europe, and China supported this program, it would only take 37% of our combined income—roughly what we currently spend on housing.
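These figures can be checked with a quick back-of-the-envelope calculation. The population and GDP numbers below are rough mid-2010s approximations of my own choosing, not official statistics:

```python
# Back-of-the-envelope check of the basic income arithmetic above.
# Population and GDP figures are rough approximations, not official data.

world_population = 7.0e9   # people (approximate)
basic_income = 3_000       # dollars per person per year

total_cost = world_population * basic_income
print(f"Total cost: ${total_cost / 1e12:.0f} trillion per year")   # $21 trillion

world_gdp_ppp = 105e12     # world income at purchasing power parity (approximate)
print(f"Share of world income: {total_cost / world_gdp_ppp:.0%}")  # 20%

# If only the US, Europe, and China funded it (GDP PPP, rough figures):
us, europe, china = 18e12, 19e12, 20e12
funders = us + europe + china
print(f"Share of US+Europe+China income: {total_cost / funders:.0%}")  # 37%
```

Even with the inputs nudged around within plausible ranges, the conclusion is stable: the cost is on the order of a fifth of world income.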

Whenever people complain, “We spend billions of dollars a year on aid, and we haven’t solved world hunger!” the proper answer is, “That’s right; we should be spending trillions.”

The Tragedy of the Commons

JDN 2457387

In a previous post I talked about one of the most fundamental—perhaps the most fundamental—problem in game theory, the Prisoner’s Dilemma, and how neoclassical economic theory totally fails to explain actual human behavior when faced with this problem in both experiments and the real world.

As a brief review, the essence of the game is that both players can either cooperate or defect; if they both cooperate, the outcome is best overall; but it is always in each player’s interest to defect. So a neoclassically “rational” player would always defect—resulting in a bad outcome for everyone. But real human beings typically cooperate, and thus do better. The “paradox” of the Prisoner’s Dilemma is that being “rational” results in making less money at the end.
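For concreteness, the game's structure can be sketched with an illustrative payoff table (these particular numbers are a standard textbook example, not from the original post):

```python
# A minimal Prisoner's Dilemma sketch. Each entry maps the two players'
# choices to (player 1's payoff, player 2's payoff); higher is better.
# The payoff numbers are illustrative, chosen to satisfy the PD structure.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays you strictly more...
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")]
print("Defection is dominant, but mutual cooperation pays more.")
```

The two assertions are exactly the "paradox": each player's dominant strategy produces the outcome both would least prefer between the two symmetric ones.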

Obviously, this is not actually a good definition of rational behavior. Being short-sighted and ignoring the impact of your behavior on others doesn’t actually produce good outcomes for anybody, including yourself.

But the Prisoner’s Dilemma only has two players. If we expand to a larger number of players, the expanded game is called a Tragedy of the Commons.

When we do this, something quite surprising happens: As you add more people, their behavior starts converging toward the neoclassical solution, in which everyone defects and we get a bad outcome for everyone.
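One standard way to formalize this many-player version is a public-goods game. Here is a minimal sketch; the multiplier and group size are illustrative assumptions of mine, not parameters from the text:

```python
# An n-player public-goods game, a standard model of the Tragedy of the
# Commons. Each player starts with 1 unit; all contributions are multiplied
# by r and shared equally among the n players. Parameters are illustrative.

def payoff(my_contribution, total_contribution, n, r=2.0):
    """Payoff to one player: what they kept plus their share of the pot."""
    return (1 - my_contribution) + r * total_contribution / n

n = 100

# If everyone cooperates (contributes their full unit), each player gets r = 2.0:
everyone_cooperates = payoff(1, n, n)

# But the marginal return on your own contribution is only r/n = 0.02,
# so unilaterally defecting while the other 99 cooperate pays more:
defect_alone = payoff(0, n - 1, n)
assert defect_alone > everyone_cooperates

# And if everyone reasons that way, each player keeps just their 1 unit—
# worse for everybody than full cooperation:
everyone_defects = payoff(0, 0, n)
assert everyone_defects < everyone_cooperates
```

Note that the dominance of defection here is the same logic as the two-player case; what changes with large n is that your own contribution's return to you (r/n) shrinks toward zero, which matches the behavioral drift toward defection described below.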

Indeed, people in general become less cooperative, less courageous, and more apathetic the more of them you put together. Agent K of Men in Black was quite apt when he said, “A person is smart; people are dumb, panicky, dangerous animals and you know it.” There are ways to counteract this effect, as I’ll get to in a moment—but there is a strong effect that needs to be counteracted.

We see this most vividly in the bystander effect. If someone is walking down the street and sees someone fall and injure themselves, there is about a 70% chance that they will go try to help the person who fell—humans are altruistic. But if there are a dozen people walking down the street who all witness the same event, there is only a 40% chance that any of them will help—humans are irrational.

The primary reason appears to be diffusion of responsibility. When we are alone, we are the only one who could help, so we feel responsible for helping. But when there are others around, we assume that someone else could take care of it for us, so if it isn’t done that’s not our fault.
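The strength of this effect is easy to see numerically. If each of twelve bystanders helped independently at the 70% solo rate cited above, someone would almost surely help; the observed 40% group rate implies a far lower per-person rate:

```python
# Diffusion of responsibility, quantified using the 70% (alone) and 40%
# (group of 12) intervention rates cited above.

solo_rate = 0.70
group_rate = 0.40
n = 12

# If the 12 bystanders each acted independently at the solo rate,
# the chance that at least one helps would be essentially certain:
predicted = 1 - (1 - solo_rate) ** n
print(f"Predicted with no diffusion of responsibility: {predicted:.5%}")

# The per-person rate actually implied by the 40% group figure:
implied = 1 - (1 - group_rate) ** (1 / n)
print(f"Implied per-person rate in a group: {implied:.1%}")  # ~4%
```

In other words, putting people in a group doesn't just shave a bit off each individual's willingness to help; on this simple independence model it cuts the per-person rate from 70% to roughly 4%.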

There also appears to be a conformity effect: We want to conform our behavior to social norms (as I said, to a first approximation, all human behavior is social norms). The mere fact that there are other people who could have helped but didn’t suggests the presence of an implicit social norm that we aren’t supposed to help this person for some reason. It never occurs to most people to ask why such a norm would exist or whether it’s a good one—it simply never occurs to most people to ask those questions about any social norms. In this case, by hesitating to act, people actually end up creating the very norm they think they are obeying.

This can lead to what’s called an Abilene Paradox, in which people simultaneously try to follow what they think everyone else wants and also try to second-guess what everyone else wants based on what they do, and therefore end up doing something that none of them actually wanted. I think a lot of the weird things humans do can actually be attributed to some form of the Abilene Paradox. (“Why are we sacrificing this goat?” “I don’t know, I thought you wanted to!”)

Autistic people are not as good at following social norms (though some psychologists believe this is simply because our social norms are optimized for the neurotypical population). My suspicion is that autistic people are therefore less likely to suffer from the bystander effect, and more likely to intervene to help someone even if they are surrounded by passive onlookers. (Unfortunately I wasn’t able to find any good empirical data on that—it appears no one has ever thought to check before.) I’m quite certain that autistic people are less likely to suffer from the Abilene Paradox—if they don’t want to do something, they’ll tell you so (which sometimes gets them in trouble).

Because of these psychological effects that blunt our rationality, in large groups human beings often do end up behaving in a way that appears selfish and short-sighted.

Nowhere is this more apparent than in ecology. Recycling, becoming vegetarian, driving less, buying more energy-efficient appliances, insulating buildings better, installing solar panels—none of these things are particularly difficult or expensive to do, especially when weighed against the tens of millions of people who will die if climate change continues unabated. Every recyclable can we throw in the trash is a silent vote for a global holocaust.

But as no doubt immediately occurred to you to respond: No single one of us is responsible for all that. There’s no way I myself could possibly save enough carbon emissions to significantly reduce climate change—indeed, probably not even enough to save a single human life (though maybe). This is certainly true; the error lies in thinking that this somehow absolves us of the responsibility to do our share.

I think part of what makes the Tragedy of the Commons so different from the Prisoner’s Dilemma, at least psychologically, is that the latter has an identifiable victim: we know we are specifically hurting that person more than we are helping ourselves. We may even know their name (and if we don’t, we’re more likely to defect—simply being on the Internet makes people more aggressive because they don’t interact face-to-face). In the Tragedy of the Commons, it is often the case that we don’t know who any of our victims are; moreover, it’s quite likely that we harm each one less than we benefit ourselves—even though we harm everyone overall more.

Suppose that driving a gas-guzzling car gives me 1 milliQALY of happiness, but takes away an average of 1 nanoQALY from everyone else in the world. A nanoQALY is tiny! Negligible, even, right? One billionth of a year, a mere 30 milliseconds! Literally less than the blink of an eye. But take away 30 milliseconds from everyone on Earth and you have taken away 7 years of human life overall. Do that 10 times, and statistically one more person is dead because of you. And you have gained only 10 milliQALY, roughly the value of $300 to a typical American. Would you kill someone for $300?
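The arithmetic in that paragraph can be checked directly, using the same illustrative figures:

```python
# Checking the nanoQALY arithmetic above. A QALY is one quality-adjusted
# life-year; the figures here are the illustrative ones from the text.

world_population = 7e9
harm_per_person = 1e-9      # 1 nanoQALY taken from everyone else, per trip

total_harm = world_population * harm_per_person
print(f"Total harm per trip: {total_harm:.0f} QALY")   # 7 years of life

# Ten such trips cost the world about 70 life-years: one statistical death.
print(f"Ten trips: {10 * total_harm:.0f} QALY")

# Meanwhile you gained 10 trips at 1 milliQALY each:
my_gain = 10 * 1e-3
print(f"Your gain: {my_gain * 1000:.0f} milliQALY")    # ~$300 to a typical American
```

The asymmetry is the whole point: each victim loses an imperceptible 30 milliseconds, but summed over seven billion people the losses dwarf the driver's gain by a factor of thousands.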

Peter Singer has argued that we should in fact think of it this way—when we cause a statistical death by our inaction, we should call it murder, just as if we had left a child to drown to keep our clothes from getting wet. I can’t agree with that. When you think seriously about the scale and uncertainty involved, it would be impossible to live at all if we were constantly trying to assess whether every action would lead to statistically more or less happiness to the aggregate of all human beings through all time. We would agonize over every cup of coffee, every new video game. In fact, the global economy would probably collapse because none of us would be able to work or willing to buy anything for fear of the consequences—and then whom would we be helping?

That uncertainty matters. Even the fact that there are other people who could do the job matters. If a child is drowning and there is a trained lifeguard right next to you, the lifeguard should go save the child, and if they don’t it’s their responsibility, not yours. Maybe if they don’t you should try; but really they should have been the one to do it.

But we must also not allow ourselves to simply fall into apathy, to do nothing simply because we cannot do everything. We cannot assess the consequences of every specific action into the indefinite future, but we can find general rules and patterns that govern the consequences of actions we might take. (This is the difference between act utilitarianism, which is unrealistic, and rule utilitarianism, which I believe is the proper foundation for moral understanding.)

Thus, I believe the solution to the Tragedy of the Commons is policy. It is to coordinate our actions together, and create enforcement mechanisms to ensure compliance with that coordinated effort. We don’t look at acts in isolation, but at policy systems holistically. The proper question is not “What should I do?” but “How should we live?”

In the short run, this can lead to results that seem deeply suboptimal—but in the long run, policy answers lead to sustainable solutions rather than quick-fixes.

People are starving! Why don’t we just steal money from the rich and use it to feed people? Well, think about what would happen if we said that the property system can simply be unilaterally undermined if someone believes they are achieving good by doing so. The property system would essentially collapse, along with the economy as we know it. A policy answer to that same question might involve progressive taxation enacted by a democratic legislature—we agree, as a society, that it is justified to redistribute wealth from those who have much more than they need to those who have much less.

Our government is corrupt! We should launch a revolution! Think about how many people die when you launch a revolution. Think about past revolutions. While some did succeed in bringing about more just governments (e.g. the French Revolution, the American Revolution), they did so only after a long period of strife; and other revolutions (e.g. the Russian Revolution, the Iranian Revolution) have made things even worse. Revolution is extremely costly and highly unpredictable; we must use it only as a last resort against truly intractable tyranny. The policy answer is of course democracy; we establish a system of government that elects leaders based on votes, and then if they become corrupt we vote to remove them. (Sadly, we don’t seem so good at that second part—the US Congress has a 14% approval rating but a 95% re-election rate.)

And in terms of ecology, this means that berating ourselves for our sinfulness in forgetting to recycle or not buying a hybrid car does not solve the problem. (Not that it’s bad to recycle, drive a hybrid car, and eat vegetarian—by all means, do these things. But it’s not enough.) We need a policy solution, something like a carbon tax or cap-and-trade that will enforce incentives against excessive carbon emissions.

In case you don’t think politics makes a difference, all of the Democratic candidates for President have proposed such plans—Bernie Sanders favors a carbon tax, Martin O’Malley supports an aggressive cap-and-trade plan, and Hillary Clinton favors heavily subsidizing wind and solar power. The Republican candidates on the other hand? Most of them don’t even believe in climate change. Chris Christie and Carly Fiorina at least accept the basic scientific facts, but (1) they are very unlikely to win at this point and (2) even they haven’t announced any specific policy proposals for dealing with it.

This is why voting is so important. We can’t do enough on our own; the coordination problem is too large. We need to elect politicians who will make policy. We need to use the systems of coordination enforcement that we have built over generations—and that is fundamentally what a government is, a system of coordination enforcement. Only then can we overcome the tendency among human beings to become apathetic and short-sighted when faced with a Tragedy of the Commons.

No, advertising is not signaling

JDN 2457373

A while ago, I wrote a post arguing that advertising is irrational, that at least with advertising as we know it, no real information is conveyed and thus either consumers are being irrational in their purchasing decisions, or advertisers are irrational for buying ads that don’t work.

One of the standard arguments neoclassical economists make to defend the rationality of advertising is that advertising is signaling—that even though the content of the ads conveys no useful information, the fact that there are ads is a useful signal of the real quality of goods being sold.

The idea is that by spending on advertising, a company shows that they have a lot of money to throw around, and are therefore a stable and solvent company that probably makes good products and is going to stick around for a while.

Here are a number of different papers all making this same basic argument, often with sophisticated mathematical modeling. This paper takes an even bolder approach, arguing that people benefit from ads and would therefore pay to get them if they had to. Does that sound even remotely plausible to you? It sure doesn’t to me. Some ads are fairly entertaining, but generally if someone is willing to pay money for a piece of content, they charge money for that content.

Could spending on advertising offer a signal of the quality of a product or the company that makes it? Yes. That is something that actually could happen. The reason this argument is ridiculous is not that advertising signaling couldn’t happen—it’s that advertising is clearly nowhere near the best way to do that. The content of ads is clearly nothing remotely like what it would be if advertising were meant to be a costly signal of quality.

Look at this ad for Orangina. Look at it. Look at it.

Now, did that ad tell you anything about Orangina? Anything at all?

As far as I can tell, the thing it actually tells you isn’t even true—it strongly implies that Orangina is a form of aftershave when in fact it is an orange-flavored beverage. It’d be kind of like having an ad for the iPad that involves scantily-clad dog-people riding the iPad like it’s a hoverboard. (Now that I’ve said it, Apple is probably totally working on that ad.)

This isn’t an isolated incident for Orangina, who have a tendency to run bizarre and somewhat suggestive (let’s say PG-13) TV spots involving anthropomorphic animals.

But more than that, it’s endemic to the whole advertising industry.

Look at GEICO, for instance; without them specifically mentioning that this is car insurance, you’d never know what they were selling from all the geckos,

and Neanderthals,

and… golf Krakens?

Progressive does slightly better, talking about some of their actual services while also including an adorably-annoying spokesperson (she’s like Jar Jar, but done better):

State Farm also includes at least a few tidbits about their insurance amidst the teleportation insanity:

But honestly the only car insurance commercials I can think of that are actually about car insurance are Allstate’s, and even then they’re mostly about Dennis Haysbert’s superhuman charisma. I would buy bacon cheeseburgers from this man, and I’m vegetarian.

Esurance is also relatively informative (and owned by Allstate, by the way); they talk about their customer service and low prices (in other words, the only things you actually care about with car insurance). But even so, what reason do we have to believe their bald assertions of good customer service? And what’s the deal with the whole money-printing thing?

And of course I could deluge you with examples from other companies, from Coca-Cola’s polar bears and Santa Claus to this commercial, which is literally the most American thing I have ever seen:

If you’re from some other country and are going, “What!?” right now, that’s totally healthy. Honestly I think we would too if constant immersion in this sort of thing hadn’t deadened our souls.

Do these ads signal that their companies have a lot of extra money to burn? Sure. But there are plenty of other ways to do that which would also serve other valuable functions. I honestly can’t imagine any scenario in which the best way to tell me the quality of an auto insurance company is to show me 30-second spots about geckos and Neanderthals.

If a company wants to signal that they have a lot of money, they could simply report their financial statement. That’s even regulated so that we know it has to be accurate (and this is one of the few financial regulations we actually enforce). The amount you spent on an ad is not obvious from the result of the ad, and doesn’t actually prove that you’re solvent, only that you have enough access to credit. (Pets.com famously collapsed the same year they ran a multi-million-dollar Super Bowl ad.)

If a company wants to signal that they make a good product, they could pay independent rating agencies to rate products on their quality (you know, like credit rating agencies and reviewers of movies and video games). Paying an independent agency is far more reliable than the signaling provided by advertising. Consumers could also pay their own agencies, which would be even more reliable; credit rating agencies and movie reviewers do sometimes have a conflict of interest, which could be resolved by making them report to consumers instead of producers.

If a company wants to establish that they are both financially stable and socially responsible, they could make large public donations to important charities. (This is also something that corporations do on occasion, such as Subaru’s recent campaign.) Or they could publicly announce a raise for all their employees. This would not only provide us with the information that they have this much money to spend—it would actually have a direct positive social effect, thus putting their money where their mouth is.

Signaling theory in advertising is based upon the success of signaling theory in evolutionary biology, which is beyond dispute; but evolution is tightly constrained in what it can do, so wasteful costly signals make sense. Human beings are smarter than that; we can find ways to convey information that don’t involve ludicrous amounts of waste.

If we were anywhere near as rational as these neoclassical models assume us to be, we would take the constant bombardment of meaningless ads not as a signal of a company’s quality but as a personal assault—they are needlessly attacking our time and attention when all the genuinely-valuable information they convey could have been conveyed much more easily and reliably. We would not buy more from them; we would refuse to buy from them. And indeed, I’ve learned to do just that; the more a company bombards me with annoying or meaningless advertisements, the more I make a point of not buying their product if I have a viable substitute. (For similar reasons, I make a point of never donating to any charity that uses hard-sell tactics to solicit donations.)

But of course the human mind is limited. We only have so much attention, and by bombarding us frequently and intensely enough they can overcome our mental defenses and get us to make decisions we wouldn’t if we were optimally rational. I can feel this happening when I am hungry and a food ad appears on TV; my autonomic hunger response combined with their expert presentation of food in the perfect lighting makes me want that food, if only for the few seconds it takes my higher cognitive functions to kick in and make me realize that I don’t eat meat and I don’t like mayonnaise.

Car commercials have always been particularly baffling to me. Who buys a car based on a commercial? A decision to spend $20,000 should not be made based upon 30 seconds of obviously biased information. But either people do buy cars based on commercials or they don’t; if they do, consumers are irrational, and if they don’t, car companies are irrational.

Advertising isn’t the source of human irrationality, but it feeds upon human irrationality, and is specifically designed to exploit our own stupidity to make us spend money in ways we wouldn’t otherwise. This means that markets will not be efficient, and huge amounts of productivity can be wasted because we spent it on what they convinced us to buy instead of what would truly have made our lives better. Those companies then profit more, which encourages them to make even more stuff nobody actually wants and sell it that much harder… and basically we all end up buying lots of worthless stuff and putting it in our garages and wondering what happened to our money and the meaning in our lives. Neoclassical economists really need to stop making ridiculous excuses for this damaging and irrational behavior—and maybe then we could actually find a way to make it stop.

Why building more roads doesn’t stop rush hour

JDN 2457362

The topic of this post was selected based on the very first Patreon vote (which was admittedly limited because I only had three patrons eligible to vote and only one of them actually did vote; but these things always start small, right?). It is what you (well, one of you) wanted to see. In future months there will be more such posts, and hopefully more people will vote.

Most Americans face an economic paradox every morning and every evening. Our road network is by far the largest in the world (for three reasons: We’re a huge country geographically, we have more money than anyone else, and we love our cars), and we continue to expand it; yet every morning around 8:00-9:00 and every evening around 17:00-18:00 we face rush hour, in which our roads become completely clogged by commuters and it takes two or three times as long to get anywhere.

Indeed, rush hour is experienced around the world, though it often takes the slightly different form of clogged public transit instead of clogged roads. In most countries, there are two specific one-hour periods in the morning and the evening in which all transportation is clogged to a standstill.

This is probably such a familiar part of your existence you never stopped to question it. But in fact it is quite bizarre; the natural processes of economic supply and demand should have solved this problem decades ago, so why haven’t they?

There are a number of important forces at work here, all of which conspire to doom our transit systems.

The first is the Tragedy of the Commons, which I’ll likely write about in the future (but since it didn’t win the vote, not just yet). The basic idea of the Tragedy of the Commons is similar to the Prisoner’s Dilemma, but expanded to a large number of people. A Tragedy of the Commons is a situation in which there are many people, each of whom has the opportunity to either cooperate with the group and help everyone a small amount, or defect from the group and help themselves a larger amount. If everyone cooperates, everyone is better off; but holding everyone else’s actions fixed, it is in each person’s self-interest to defect.
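To make that structure concrete, here is a minimal sketch in Python. The payoff numbers are illustrative assumptions, not from the post: cooperating costs you 1, and each cooperator adds 0.5 to everyone's shared benefit.

```python
# A minimal N-player Tragedy of the Commons (payoff numbers are illustrative).
# Cooperating costs you 1 but adds 0.5 to the benefit of all N players;
# defecting costs nothing. Defecting is always individually better, yet
# universal cooperation beats universal defection.

def payoff(cooperate, n_cooperators):
    benefit = 0.5 * n_cooperators       # shared benefit from all cooperators
    cost = 1.0 if cooperate else 0.0    # private cost of cooperating
    return benefit - cost

N = 100

# Holding everyone else's actions fixed, defecting is always better for you...
for others in range(N):
    assert payoff(False, others) > payoff(True, others + 1)

# ...yet everyone cooperating beats everyone defecting.
assert payoff(True, N) > payoff(False, 0)
```

The `for` loop is the defining feature: no matter how many other people cooperate, your own best response is to defect, even though the all-cooperate outcome dominates the all-defect outcome.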

As it turns out, people do act closer to the neoclassical prediction in the Tragedy of the Commons—which is something I’d definitely like to get into at some point. Two different psychological mechanisms counter one another, and result in something fairly close to the prediction of neoclassical rational self-interest, at least when the number of people involved is very large. It’s actually a good example of how real human beings can deviate from neoclassical rationality both in a good way (we are altruistic) and in a bad way (we are irrational).

The large-scale way roads are a Tragedy of the Commons is that they are a public good, something that we share as a society. Except for toll roads (which I’ll get to in a moment), roads are set up so that once they are built, anyone can use them; so the best option for any individual person is to get everyone else to pay to build them and then quite literally free-ride on the roads everyone else built. But if everyone tries to do that, nobody is going to pay for the roads at all.

And indeed, our roads are massively underfunded. Simply to maintain currently-existing roads we need to spend about an additional $100 billion per year over what we’re already spending. Yet once you factor in all the extra costs of damaged vehicles, increased accidents, time wasted, and the fact that fixing things is cheaper than replacing them, in fact the cost to not maintain our roads is about 3 times as large as that. This is exactly what you expect to see in a Tragedy of the Commons; there’s a huge benefit for everyone just sitting there, not getting done, because nobody wants to pay for it themselves. Michigan saw this quite dramatically when we voted down increased road funding because it would have slightly increased sales taxes. (Granted, we should be funding roads with fuel taxes, not general sales taxes—but those are hardly any more popular.)

Toll roads can help with this, because they internalize the externality: When you have to pay for the roads that you use, you either use them less (creating less wear and tear) or pay more; either way, the gap between what is paid and what is needed is closed. And indeed, toll roads are better maintained than other roads. There are downsides, however; the additional effort to administer the tolls is expensive, and traffic can be slowed down by toll booths (though modern transponder systems mitigate this effect substantially). Also, it’s difficult to fully privatize roads, because there is a large up-front cost and it takes a long time for a toll road to become profitable; most corporations don’t want to wait that long.

But we do build a lot of roads, and yet still we have rush hour. So that isn’t the full explanation.

The small-scale way that roads are a Tragedy of the Commons is that when you decide to drive during rush hour, you are in a sense defecting in a Tragedy of the Commons. You will get to your destination sooner than if you had waited until traffic clears; but by adding one more car to the congestion you have slowed everyone else down just a little bit. When we sum up all these little delays, we get the total gridlock that is rush hour. If you had instead waited to drive on clear roads, you would get to your destination without inconveniencing anyone else—but you’d get there a lot later.
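That "summing up of little delays" can be sketched directly. The delay figure here is an illustrative assumption, not from the post:

```python
# Sketch: rush-hour congestion as a sum of small externalities.
# Assume each car on the road slows every other car by 0.1 minutes.

def total_delay(n_cars, delay_per_car=0.1):
    # each of the n cars is delayed by the (n - 1) others
    return n_cars * (n_cars - 1) * delay_per_car

def marginal_private_delay(n_cars, delay_per_car=0.1):
    # what the n-th driver personally experiences: delay from everyone else
    return (n_cars - 1) * delay_per_car

def marginal_social_delay(n_cars, delay_per_car=0.1):
    # what the n-th driver adds to total delay: their own delay plus the
    # delay they impose on every other driver
    return total_delay(n_cars, delay_per_car) - total_delay(n_cars - 1, delay_per_car)
```

Under these assumptions the marginal driver bears only half the true social cost of their trip; the other half is the externality they impose on everyone else, which is exactly why each individual decision to drive at 8:30 looks rational while the collective result is gridlock.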

The second major reason why we have rush hour is what is called induced demand. When you widen a road or add a parallel route, you generally fail to reduce traffic congestion on that route in the long run. What happens instead is that driving during rush hour becomes more convenient for a little while, which makes more people start driving during rush hour—they buy a car when they used to take the bus, or they don’t leave as early to go to work. Eventually enough people shift over that the equilibrium is restored—and the equilibrium is gridlock.

But if you think carefully, that can’t be the whole explanation. There are only so many people who could start driving during rush hour, so what if we simply built enough roads to accommodate them all? And if our public transit systems were better, people would feel no need to switch to driving, even if driving had in fact been made more convenient. And indeed, transportation economists have found that adding more capacity does reduce congestion—it just isn’t enough unless you also improve public transit. So why aren’t we improving public transit? See above, Tragedy of the Commons.

Yet we still don’t have a complete explanation, because of something that’s quite obvious in hindsight: Why do we all work 09:00 to 17:00!? There’s no reason for that. There’s nothing inherent about the angle of sunlight or something which requires us to work these hours—indeed, if there were, Daylight Saving Time wouldn’t work (which is not to say that it works well—Daylight Saving Time kills).

There should be a competitive market pressure to work different hours, which should ultimately lead to an equilibrium where traffic is roughly constant throughout the day, at least during the time when a large swath of the population is awake and outside. Congestion should spread itself out over time, because it is to the advantage of all involved if each driver tries to drive at a time when other drivers aren’t. Driving outside of rush hour gives us an opportunity for something like “temporal arbitrage”, where you can pay a small amount of time here to get a larger amount of time there. And if there’s one thing a competitive economy is supposed to get rid of, it’s arbitrage.

But no, we keep almost all our working hours aligned at 09:00-17:00, and thus we get rush hour.

In fact, a lot of jobs would function better if they weren’t aligned in this way—retail sales, for example, is most successful during the “off hours”, because people only shop when they aren’t working. (Well, except for online shopping, and even then they’re not supposed to.) Banks continually insist on making their hours 09:00 to 17:00 when they know that on most days they’d actually get more business from 17:00 to 19:00 than they did from 09:00 to 17:00. Some banks are at least figuring that out enough to be open from 17:00 to 19:00—but they still don’t seem to grasp that retail banking services have no reason to be open during normal business hours. Commercial banking services do; but that’s a small portion of their overall customers (albeit not of their overall revenue). There’s no reason to have so many full branches open so many hours with most of the tellers doing nothing most of the time.

Education would be better off being later in the day, when students—particularly teenagers—have a chance to sleep in the way their brains are evolved to. The benefits of later school days in terms of academic performance and public health are actually astonishingly large. When you move the start of high school from 07:00 to 09:00, auto collisions involving teenagers drop 70%. Perhaps this should be the new slogan: “Early classes cause car crashes.” Since 25% of auto collisions occur during rush hour, here’s another: “Always working nine to five? Vehicular homicide.”

Other jobs could have whatever hours they please. There’s no reason for most forms of manufacturing to be done at any particular hour of the day. Most clerical and office work could be done at any time (and thanks to the Internet, any place; though there are real benefits to working in an office). Writing can be done whenever it is convenient for the author—and when you think about it, an awful lot of jobs basically amount to writing.

Finance is only handled 09:00-17:00 because we force it to be. The idea of “opening” and “closing” the stock market each day is profoundly anachronistic, and actually amounts to granting special arbitrage privileges to the small number of financial institutions that are allowed to do so-called “after hours” trading.

And then there’s the fact that different people have different circadian rhythms, require different amounts of sleep and prefer to sleep at different times—it’s genetic. (My boyfriend and I are roughly three hours phase-shifted relative to one another, which made it surprisingly convenient to stay in touch when I lived in California and he lived in Michigan.)

Why do we continue to accept such absurdity?

Whenever you find yourself asking that question, try this answer first, for it is by far the most likely:

Social norms.

Social norms will make human beings do just about anything, from eating cockroaches to murdering elephants, from kilts to burqas, from waving giant foam hands to throwing octopus onto ice rinks, from landing on the moon to crashing into the World Trade Center, from bombing Afghanistan to marching on Washington, from eating only raw foods to using dead pigs as sex toys. Our basic mental architecture is structured around tribal identity, and to preserve that identity we will follow almost any rule imaginable. To a first approximation, all human behavior is social norms.

And indeed I can find no other explanation for why we continue to work on a “nine-to-five” 09:00-17:00 schedule (or for that matter why it probably feels weird to you that I say “17:00” instead of the far less efficient and more confusion-prone “5:00 PM”). Our productivity has skyrocketed, increasing by a factor of 4 just since 1950 (and these figures dramatically underestimate the gains in productivity from computer technology, because so much is in the form of free content, which isn’t counted in GDP). We could do the same work in a quarter the time, or twice as much in half the time. Yet still we continue to work the same old 40-hour work week, nine-to-five work day. We each do the work of a dozen previous workers, yet we still find a way to fill the same old work week, and the rich who grow ever richer still pay us more or less the same real wages. It’s all basically social norms at this point; this is how things have always been done, and we can’t imagine any other way. When you get right down to it, capitalism is fundamentally a system of social norms—a very successful one, but far from the only possibility and perhaps not the best.

Thus, why does building more roads not solve the problem of rush hour? Because we have a social norm that says we are all supposed to start work at 09:00 and end work at 17:00.

And that, dear readers, is what we must endeavor to change. Change our thinking, and we will change the norms. Change the norms, and we will change the world.

Tax incidence revisited, part 4: Surplus and deadweight loss

JDN 2457355

I’ve already mentioned the fact that taxation creates deadweight loss, but in order to understand tax incidence it’s important to appreciate exactly how this works.

Deadweight loss is usually measured in terms of total economic surplus, which is a strange and deeply-flawed measure of value but relatively easy to calculate.

Surplus is based upon the concept of willingness-to-pay; the value of something is determined by the maximum amount of money you would be willing to pay for it.

This is bizarre for a number of reasons, and I think the most important one is that people differ in how much wealth they have, and therefore in their marginal utility of wealth. $1 is worth more to a starving child in Ghana than it is to me, and worth more to me than it is to a hedge fund manager, and worth more to a hedge fund manager than it is to Bill Gates. So when you try to set what something is worth based on how much someone will pay for it, which someone are you using?

People also vary, of course, in how much real value a good has to them: Some people like dark chocolate, some don’t. Some people love spicy foods and others despise them. Some people enjoy watching sports, others would rather read a book. A meal is worth a lot more to you if you haven’t eaten in days than if you just ate half an hour ago. That’s not actually a problem; part of the point of a market economy is to distribute goods to those who value them most. But willingness-to-pay is really the product of two different effects: The real effect, how much utility the good provides you; and the wealth effect, how your level of wealth affects how much you’d pay to get the same amount of utility. By itself, willingness-to-pay has no means of distinguishing these two effects, and actually I think one of the deepest problems with capitalism is that ultimately capitalism has no means of distinguishing these two effects. Products will be sold to the highest bidder, not the person who needs it the most—and that’s why Americans throw away enough food to end world hunger.

But for today, let’s set that aside. Let’s pretend that willingness-to-pay is really a good measure of value. One thing that is really nice about it is that you can read it right off the supply and demand curves.

When you buy something, your consumer surplus is the difference between your willingness-to-pay and how much you actually did pay. If a sandwich is worth $10 to you and you pay $5 to get it, you have received $5 of consumer surplus.

When you sell something, your producer surplus is the difference between how much you were paid and your willingness-to-accept, which is the minimum amount of money you would accept to part with it. If making that sandwich cost you $2 to buy ingredients and $1 worth of your time, your willingness-to-accept would be $3; if you then sell it for $5, you have received $2 of producer surplus.

Total economic surplus is simply the sum of consumer surplus and producer surplus. One of the goals of an efficient market is to maximize total economic surplus.
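In code, the sandwich example above works out like this (a trivial sketch, just to pin down the definitions):

```python
# The sandwich example: surplus is willingness-to-pay/accept versus price.
willingness_to_pay = 10.00     # the sandwich is worth $10 to the buyer
price = 5.00                   # what the buyer actually pays
willingness_to_accept = 3.00   # $2 of ingredients + $1 of the seller's time

consumer_surplus = willingness_to_pay - price        # $5
producer_surplus = price - willingness_to_accept     # $2
total_surplus = consumer_surplus + producer_surplus  # $7
```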

Let’s return to our previous example, where a 20% tax raised the wage paid by the employer from $20 to $22.50 and reduced the after-tax wage received by the worker to $18.

Before the tax, the supply and demand curves looked like this:

equilibrium_notax

Consumer surplus is the area below the demand curve, above the price, up to the total number of goods sold. The basic reasoning behind this is that the demand curve gives the willingness-to-pay for each good, which decreases as more goods are sold because of diminishing marginal utility. So what this curve is saying is that the first hour of work was worth $40 to the employer, but each following hour was worth a bit less, until the 10th hour of work was only worth $35. Thus the first hour gave $40-$20 = $20 of surplus, while the 10th hour only gave $35-$20 = $15 of surplus.

Producer surplus is the area above the supply curve, below the price, again up to the total number of goods sold. The reasoning is the same: If the first hour of work cost $4 worth of time but the 10th hour cost $8 worth of time, the first hour provided $20-$4 = $16 in producer surplus, but the 10th hour only provided $20-$8 = $12 in producer surplus.

Imagine drawing a little 1-pixel-wide line straight down from the demand curve to the price for each hour and then adding up all those little lines into the total area under the curve, and similarly drawing little 1-pixel-wide lines straight up from the supply curve.

surplus

The employer was paying $20 * 40 = $800 for an amount of work that they actually valued at $1200 (the total area under the demand curve up to 40 hours), so they benefit by $400. The worker was being paid $800 for an amount of work that they would have been willing to accept $480 to do (the total area under the supply curve up to 40 hours), so they benefit $320. The sum of these is the total surplus $720.

equilibrium_notax_surplus

After the tax, the employer is paying $22.50 * 35 = $787.50, but for an amount of work that they only value at $1093.75, so their new surplus is only $306.25. The worker is receiving $18 * 35 = $630, for an amount of work they’d have been willing to accept $385 to do, so their new surplus is $245. Even when you add back in the government revenue of $4.50 * 35 = $157.50, the total surplus is still only $708.75. What happened to that extra $11.25 of value? It simply disappeared. It’s gone. That’s what we mean by “deadweight loss”. That’s why there is a downside to taxation.
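These figures can be checked numerically. The post never writes down the supply and demand curves explicitly, so the linear forms below (demand 40 − 0.5q, supply 4 + 0.4q) are my reconstruction from the dollar figures; treat them as an assumption, albeit one that reproduces every number in the example:

```python
# Verifying the surplus figures with reconstructed linear curves
# (the functional forms are my assumption, chosen to match the post's numbers).

def demand(q):
    # employer's willingness-to-pay for the q-th hour of work
    return 40 - 0.5 * q

def supply(q):
    # worker's willingness-to-accept for the q-th hour of work
    return 4 + 0.4 * q

def area_under(curve, hours, steps=100_000):
    # midpoint Riemann sum: the "1-pixel-wide lines" described above, made thin
    dq = hours / steps
    return sum(curve((i + 0.5) * dq) for i in range(steps)) * dq

# Before the tax: 40 hours at $20/hour.
value_to_employer = area_under(demand, 40)         # ≈ $1200
cost_to_worker = area_under(supply, 40)            # ≈ $480
total_before = value_to_employer - cost_to_worker  # ≈ $720

# After the 20% tax: 35 hours; employer pays $22.50, worker keeps $18.
employer_surplus = area_under(demand, 35) - 22.50 * 35  # ≈ $306.25
worker_surplus = 18.00 * 35 - area_under(supply, 35)    # ≈ $245
tax_revenue = 4.50 * 35                                 # $157.50
total_after = employer_surplus + worker_surplus + tax_revenue  # ≈ $708.75

deadweight_loss = total_before - total_after            # ≈ $11.25
```

The $11.25 that vanishes is the surplus from the five hours of work that simply never happen once the tax is in place.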

equilibrium_tax_surplus

How large the deadweight loss is depends on the precise shape of the supply and demand curves, specifically on how elastic they are. Remember that elasticity is the proportional change in the quantity sold relative to the change in price. If increasing the price 1% makes you want to buy 2% less, you have a demand elasticity of -2. (Some would just say “2”, but then how do we say it if raising the price makes you want to buy more? The Law of Demand is more like what you’d call a guideline.) If increasing the price 1% makes you want to sell 0.5% more, you have a supply elasticity of 0.5.

If supply and demand are highly elastic, deadweight loss will be large, because even a small tax causes people to stop buying and selling a large amount of goods. If either supply or demand is inelastic, deadweight loss will be small, because people will more or less buy and sell as they always did regardless of the tax.

I’ve filled in the deadweight loss with brown in each of these graphs. They are designed to have the same tax rate, and the same price and quantity sold before the tax.

When supply and demand are elastic, the deadweight loss is large:

equilibrium_elastic_tax_surplus

But when supply and demand are inelastic, the deadweight loss is small:

equilibrium_inelastic_tax_surplus

Notice that despite the original price and the tax rate being the same, the tax revenue is also larger in the case of inelastic supply and demand. (The total surplus is also larger, but it’s generally thought that we don’t have much control over the real value and cost of goods, so we can’t generally make something more inelastic in order to increase total surplus.)

Thus, all other things equal, it is better to tax goods that are inelastic, because this will raise more tax revenue while producing less deadweight loss.
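The dependence on elasticity can be sketched with two pairs of linear curves sharing the same pre-tax equilibrium; the slopes here are illustrative assumptions, not taken from the graphs:

```python
# Deadweight loss for elastic (flat) vs. inelastic (steep) linear curves.
# Both markets start at the same equilibrium and face the same per-unit tax;
# the slopes are illustrative assumptions.

def deadweight_loss(tax, demand_slope, supply_slope):
    # With linear curves, quantity falls by tax / (sum of slopes), and the
    # lost surplus is the triangle between the curves over that drop.
    quantity_drop = tax / (demand_slope + supply_slope)
    return 0.5 * tax * quantity_drop

elastic = deadweight_loss(4.50, 0.2, 0.2)    # flat curves: loss ≈ 25.31
inelastic = deadweight_loss(4.50, 2.0, 2.0)  # steep curves: loss ≈ 2.53
```

Same tax, same starting equilibrium, yet the deadweight loss is ten times larger when the curves are elastic, because far more trades get priced out of existence.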

But that’s not all that elasticity does!

At last, the end of our journey approaches: In the next post in this series, I will explain how elasticity affects who actually ends up bearing the burden of the tax.

Tax incidence revisited, part 3: Taxation and the value of money

JDN 2457352

Our journey through the world of taxes continues. I’ve already talked about how taxes have upsides and downsides, as well as how taxes directly affect prices and why “before-tax” prices are almost meaningless.

Now it’s time to get into something that even a lot of economists don’t quite seem to grasp, yet which turns out to be fundamental to what taxes truly are.

In the usual way of thinking, it works something like this: We have an economy, through which a bunch of money flows, and then the government comes in and takes some of that money in the form of taxes. They do this because they want to spend money on a variety of services, from military defense to public schools, and in order to afford doing that they need money, so they take in taxes.

This view is not simply wrong—it’s almost literally backwards. Money is not something the economy had that the government comes in and takes. Money is something that the government creates and then adds to the economy to make it function more efficiently. Taxes are not the government taking out money that they need to use; taxes are the government regulating the quantity of money in the system in order to stabilize its value. The government could spend as much money as they wanted without collecting a cent in taxes (not should, but could—it would be a bad idea, but definitely possible); taxes do not exist to fund the government, but to regulate the money supply.

Indeed—and this is the really vital and counter-intuitive point—without taxes, money would have no value.

There is an old myth of how money came into existence that involves bartering: People used to trade goods for other goods, and then people found that gold was particularly good for trading, and started using it for everything, and then eventually people started making paper notes to trade for gold, and voila, money was born.

In fact, such a “barter economy” has never been documented to exist. It probably did once or twice, just given the enormous variety of human cultures; but it was never widespread. Ancient economies were based on family sharing, gifts, and debts of honor.

It is true that gold and silver emerged as the first forms of money, “commodity money”, but they did not emerge endogenously out of trading that was already happening—they were created by the actions of governments. The real value of the gold or silver may have helped things along, but it was not the primary reason why people wanted to hold the money. Money has been based upon government for over 3000 years—the history of money and civilization as we know it. “Fiat money” is basically a redundancy; almost all money, even in a gold standard system, is ultimately fiat money.

The primary reason why people wanted the money was so that they could use it to pay taxes.

It’s really quite simple, actually.

When there is a rule imposed by the government that you will be punished if you don’t turn up on April 15 with at least $4,287 pieces of green paper marked “US Dollar”, you will try to acquire $4,287 pieces of green paper marked “US Dollar”. You will not care whether those notes are exchangeable for gold or silver; you will not care that they were printed by the government originally. Because you will be punished if you don’t come up with those pieces of paper, you will try to get some.

If someone else has some pieces of green paper marked “US Dollar”, and knows that you need them to avoid being punished on April 15, they will offer them to you—provided that you give them something they want in return. Perhaps it’s a favor you could do for them, or something you own that they’d like to have. You will be willing to make this exchange, in order to avoid being punished on April 15.

Thus, taxation gives money value, and allows purchases to occur.

Once you establish a monetary system, it becomes self-sustaining. If you know other people will accept money as payment, you are more willing to accept money as payment because you know that you can go spend it with those people. “Legal tender” also helps this process along—the government threatens to punish people who refuse to accept money as payment. In practice, however, this sort of law is rarely enforced, and doesn’t need to be, because taxation by itself is sufficient to form the basis of the monetary system.

It’s deeply ironic that people who complain about printing money often say we are “debasing” the currency; when you think carefully about what debasement was, it clearly shows that the value of money never really resided in the gold or silver itself. If a government can successfully extract revenue from its monetary system by changing the amount of gold or silver in each coin, then the value of those coins can’t be in the gold and silver—it has to be in the power of the government. You can’t make a profit by dividing a commodity into smaller pieces and then selling the pieces. (Okay, you sort of can, by buying in bulk and selling at retail. But that’s not what we’re talking about. You can’t make money by buying 100 50-gallon barrels of oil and then selling them as 125 40-gallon barrels of oil; it’s the same amount of oil.)

Similarly, the fact that there is such a thing as seigniorage—the value of currency in excess of its cost to create—shows that governments impart value to their money. Indeed, one of the reasons for debasement was to realign the value of coins with the value of the metals in the coins, which wouldn’t be necessary if those were simply by definition the same thing.

Taxation serves another important function in the monetary system, which is to regulate the supply of money. The government adds money to the economy by spending, and removes it by taxing; if they add more than they remove—a deficit—the money supply increases, while if they remove more than they add—a surplus—the money supply decreases. In order to maintain stable prices, you want the money supply to increase at approximately the rate of growth; for moderate inflation (which is probably better than actual price stability), you want the money supply to increase slightly faster than the rate of growth. Thus, in general we want the government deficit as a portion of GDP to be slightly larger than the growth rate of the economy. Thus, our current deficit of 2.8% of GDP is actually about where it should be, and we have no particular reason to want to decrease it. (This is somewhat oversimplified, because it ignores the contribution of the Federal Reserve, interest rates, and bank-created money. Most of the money in the world is actually not created by the government, but by banks which are restrained to greater or lesser extent by the government.)
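The bookkeeping described above can be sketched in a few lines. All figures are illustrative, and (as noted) this ignores the Federal Reserve, interest rates, and bank-created money:

```python
# Sketch of money-supply bookkeeping via spending and taxes (figures illustrative).
money_supply = 1000.0    # money currently circulating
growth_rate = 0.03       # real growth of the economy: 3% per year
target_inflation = 0.02  # mild inflation on top of real growth

# To keep prices roughly stable, the money supply should grow at about
# growth + inflation—here, about $50.
target_increase = money_supply * (growth_rate + target_inflation)

# Spending adds money to the economy; taxes remove it. To hit the target,
# taxes must remove everything spending added except the target increase.
spending = 300.0
taxes = spending - target_increase  # ≈ $250

deficit = spending - taxes          # ≈ $50: 5% of the money supply
```

The deficit here is not "borrowing we must pay back"; it is precisely the amount by which the money supply grows this year, which is why a deficit slightly above the growth rate is the healthy steady state rather than a failure of discipline.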

Even a lot of people who try to explain modern monetary theory mistakenly speak as though there was a fundamental shift when we fully abandoned the gold standard in the 1970s. (This is a good explanation overall, but it makes this very error.) But in fact a gold standard really isn’t money “backed” by anything—gold is not what gives the money value, gold is almost worthless by itself. It’s pretty and it doesn’t corrode, but otherwise, what exactly can you do with it? Being tied to money is what made gold valuable, not the other way around. To see this, imagine a world where you have 20,000 tons of gold, but you know that you can never sell it. No one will ever purchase a single ounce. Would you feel particularly rich in that scenario? I think not. Now suppose you have a virtually limitless quantity of pieces of paper that you know people will accept for anything you would ever wish to buy. They are backed by nothing, they are just pieces of paper—but you are now rich, by the standard definition of the word. I can even strip the exchange value of money out of the analogy and rely on taxation alone: if you know that in two days you will be imprisoned if you don’t have this particular piece of paper, for the next two days you will guard that piece of paper with your life. It won’t bother you that you can’t exchange that piece of paper for anything else—you wouldn’t even want to. If instead someone else has it, you’ll be willing to do some rather large favors for them in order to get it.

Whenever people try to tell me that our money is “worthless” because it’s based on fiat instead of backed by gold (this happens surprisingly often), I always make them an offer: If you truly believe that our money is worthless, I’ll gladly take any you have off your hands. I will even provide you with something of real value in return, such as an empty aluminum can or a pair of socks. If they truly believe that fiat money is worthless, they should eagerly accept my offer—yet oddly, nobody ever does.

This does actually create a rather interesting argument against progressive taxation: If the goal of taxation is simply to control inflation, shouldn’t we tax people based only on their spending? Well, if that were the only goal, maybe. But we also have other goals, such as maintaining employment and controlling inequality. Progressive taxation may actually take a larger amount of money out of the system than would be necessary simply to control inflation; but it does so in order to ensure that the super-rich do not become even more rich and powerful.
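The claim that progressive taxation removes more money than inflation control alone would require, and concentrates that removal at the top, can be illustrated with a toy calculation. The brackets, rates, and incomes below are entirely hypothetical, invented for illustration, and resemble no real tax code:

```python
# Hypothetical marginal brackets: (upper bound of bracket, marginal rate).
BRACKETS = [(10_000, 0.0), (50_000, 0.2), (float("inf"), 0.4)]

def progressive_tax(income: float) -> float:
    """Tax each slice of income at its bracket's marginal rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

incomes = [20_000, 60_000, 1_000_000]  # hypothetical three-person economy
flat_rate = 0.15

removed_progressive = sum(progressive_tax(i) for i in incomes)
removed_flat = sum(i * flat_rate for i in incomes)
print(removed_progressive, removed_flat)
```

In this toy economy the progressive schedule pulls out well over twice as much money as the flat rate, and nearly all of the difference comes from the highest earner—which is precisely the anti-inequality function described above, over and above what inflation control alone would demand.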

Governments are limited by real constraints of power and resources, but they have no monetary constraints other than those they impose themselves. There is definitely something strongly coercive about taxation, and therefore about a monetary system which is built upon taxation. Unfortunately, I don’t know of any good alternatives. We might be able to come up with one: Perhaps people could donate to public goods in a mutually-enforced way similar to Kickstarter, but nobody has yet made that practical; or maybe the government could restructure itself to make a profit by selling private goods at the same time as it provides public goods, but then we have all the downsides of nationalized businesses. For the time being, the only system which has been shown to work to provide public goods and maintain long-term monetary stability is a system in which the government taxes and spends.

A gold standard is just a fiat monetary system in which the central bank arbitrarily decides that their money supply will be directly linked to the supply of an arbitrarily chosen commodity. At best, this could be some sort of commitment strategy to ensure that they don’t create vastly too much or too little money; but at worst, it prevents them from actually creating the right amount of money—and the gold standard was basically what caused the Great Depression. A gold standard is no more sensible a means of backing your currency than would be a standard requiring only prime-numbered interest rates, or one which requires you to print exactly as much money per minute as the price of a Ferrari.

No, the real thing that backs our money is the existence of the tax system. Far from taxation being “taking your hard-earned money”, without taxes money itself could not exist.