Reflections on Past and Future

Jan 19 JDN 2458868

This post goes live on my birthday. Unfortunately, I won’t be able to celebrate much, as I’ll be in the process of moving. We moved just a few months ago, and now we’re moving again, because this apartment turned out to be full of mold that keeps triggering my migraines. Our request for a new apartment was granted, but the university housing system gives very little time to deal with such things: They told us on Tuesday that we needed to commit by Wednesday, and then they set our move-in date for that Saturday.

Still, a birthday seems like a good time to reflect on how my life is going, and where I want it to go next. As for how old I am? This is probably the penultimate power of two I’ll reach.

The biggest change in my life over the previous year was my engagement. Our wedding will be this October. (We have the venue locked in; invitations are currently in the works.) This was by no means unanticipated; really, folks had been wondering when we’d finally get around to it. Yet it still feels strange, a leap headlong into adulthood for someone of a generation that has been saddled with a perpetual adolescence. The articles on “Millennials” talking about us like we’re teenagers still continue, despite the fact that there are now Millennials with college-aged children. Thanks to immigration and mortality, we now outnumber Boomers. Based on how each group voted in 2016, this bodes well for the 2020 election. (Then again, a lot of young people stay home on Election Day.)

I don’t doubt that graduate school has contributed to this feeling of adolescence: If we count each additional year of schooling as a grade, I would now be in the 22nd grade. Yet from others my age, even those who didn’t go to grad school, I’ve heard similar accounts of getting married, buying homes, or—especially—having children of their own: Society doesn’t treat us like adults, so we feel strange acting like adults. 30 is the new 23.

Perhaps as life expectancy continues to increase and educational attainment climbs ever higher, future generations will experience this feeling ever longer, until they’re like elves in a Tolkienesque fantasy setting, living to 1000 but not considered proper adults until they hit 100. I wonder if people will still get labeled by generation when there are 40 generations living simultaneously, or if we’ll find some other category system to stereotype by.

Another major event in my life this year was the loss of our cat Vincent. He was quite old by feline standards, and had been sick for a long time; so his demise was not entirely unexpected. Still, it’s never easy to lose a loved one, even if they are covered in fur and small enough to fit under an airplane seat.

Most of the rest of my life has remained largely unchanged: Still in grad school, still living in the same city, still anxious about my uncertain career prospects. Trump is still President, and still somehow managing to outdo his own high standards of unreasonableness. I do feel some sense of progress now, some glimpses of the light at the end of the tunnel. I can vaguely envision finishing my dissertation some time this year, and I’m hoping that in a couple years I’ll have settled into a job that actually pays well enough to start paying down my student loans, and we’ll have a good President (or at least Biden).

I’ve reached the point where people ask me what I am going to do next with my life. I want to give an answer, but the problem is, this is almost entirely out of my control. I’ll go wherever I end up getting job offers. Based on the experience of past cohorts, most people seem to apply to about 200 positions, interview for about 20, and get offers from about 2. So asking me where I’ll work in five years is like asking me what number I’m going to roll on a 100-sided die. I could probably tell you what order I would prioritize offers in, more or less; but even that would depend a great deal on the details. There are difficult tradeoffs to be made: Take a private sector offer with higher pay, or stay in academia for more autonomy and security? Accept a postdoc or adjunct position at a prestigious university, or go for an assistant professorship at a lower-ranked college?

I guess I can say that I do still plan to stay in academia, though I’m less certain of that than I once was; I will definitely cast a wider net. I suppose the job market isn’t like that for most people? I imagine most people at least know what city they’ll be living in. (I’m not even positive what country—opportunities for behavioral economics actually seem to be generally better in Europe and Australia than they are in the US.)

But perhaps most people simply aren’t as cognizant of how random and contingent their own career paths truly were. The average number of job changes per career is 12. You may want to think that you chose where you ended up, but for the most part you landed where the wind blew you. This can seem tragic in a way, but it is also a call for compassion: “There but for the grace of God go I.”

Really, all I can do now is hang on and try to enjoy the ride.

Darkest Before the Dawn: Bayesian Impostor Syndrome

Jan 12 JDN 2458860

At the time of writing, I have just returned from my second Allied Social Sciences Association Annual Meeting, the AEA’s annual conference (or the AEA and friends, I suppose, since several other, much smaller economics and finance associations are represented as well). This one was in San Diego, which made it considerably cheaper for me to attend than last year’s. Alas, next year’s conference will be in Chicago. At least flights to Chicago tend to be cheap, because it’s a major hub.

My biggest accomplishment of the conference was getting some face-time and career advice from Colin Camerer, the Caltech economist who literally wrote the book on behavioral game theory. Otherwise I would call the conference successful, but not spectacular. Some of the talks were much better than others; I think I liked the one by Emmanuel Saez best, and I also really liked the one on procrastination by Matthew Gibson. I was mildly disappointed by Ben Bernanke’s keynote address; maybe I would have found it more compelling if I were more focused on macroeconomics.

But while sitting through one of the less-interesting seminars I had a clever little idea, which may help explain why Impostor Syndrome seems to occur so frequently even among highly competent, intelligent people. This post is going to be more technical than most, so be warned: Here There Be Bayes. If you fear yon algebra and wish to skip it, I have marked below a good place for you to jump back in.

Suppose there are two types of people, high talent H and low talent L. (In reality there is of course a wide range of talents, so I could assign a distribution over that range, but it would complicate the model without really changing the conclusions.) You don’t know which one you are; all you know is a prior probability h that you are high-talent. It doesn’t matter too much what h is, but for concreteness let’s say h = 0.50; you’ve got to be in the top 50% to be considered “high-talent”.

You are engaged in some sort of activity that comes with a high risk of failure. Many creative endeavors fit this pattern: Perhaps you are a musician looking for a producer, an actor looking for a gig, an author trying to secure an agent, or a scientist trying to publish in a journal. Or maybe you’re a high school student applying to college, or an unemployed worker submitting job applications.

If you are high-talent, you’re more likely to succeed—but still very likely to fail. And even low-talent people don’t always fail; sometimes you just get lucky. Let’s say the probability of success if you are high-talent is p, and if you are low-talent, the probability of success is q. The precise values depend on the domain, but perhaps p = 0.10 and q = 0.02.

Finally, let’s suppose you are highly rational, a good and proper Bayesian. You update all your probabilities based on your observations, precisely as you should.

How will you feel about your talent, after a series of failures?

More precisely, what posterior probability will you assign to being a high-talent individual, after a series of n+k attempts, of which k met with success and n met with failure?

Since failure is likely even if you are high-talent, you shouldn’t update your probability too much on a failure—but each failure should, in fact, lead to revising your probability downward.

Conversely, since success is rare, it should cause you to revise your probability upward—and, as will become important, your revisions upon success should be much larger than your revisions upon failure.

We begin as any good Bayesian does, with Bayes’ Law:

P[H|(~S)^n (S)^k] = P[(~S)^n (S)^k|H] P[H] / P[(~S)^n (S)^k]

In words, this reads: The posterior probability of being high-talent, given that you have observed k successes and n failures, is equal to the probability of observing such an outcome, given that you are high-talent, times the prior probability of being high-talent, divided by the prior probability of observing such an outcome.

We can compute the probabilities on the right-hand side using the binomial distribution:

P[H] = h

P[(~S)^n (S)^k|H] = (n+k C k) p^k (1-p)^n

P[(~S)^n (S)^k] = (n+k C k) p^k (1-p)^n h + (n+k C k) q^k (1-q)^n (1-h)

Plugging all this back in and canceling like terms yields:

P[H|(~S)^n (S)^k] = 1/(1 + [(1-h)/h] [q/p]^k [(1-q)/(1-p)]^n)

This turns out to be particularly convenient in log-odds form:

L[X] = ln [ P(X)/P(~X) ]

L[H|(~S)^n (S)^k] = ln [h/(1-h)] + k ln [p/q] + n ln [(1-p)/(1-q)]

Since p > q, ln[p/q] is a positive number, while ln[(1-p)/(1-q)] is a negative number. This corresponds to the fact that you will increase your posterior when you observe a success (k increases by 1) and decrease your posterior when you observe a failure (n increases by 1).

But when p and q are small, it turns out that ln[p/q] is much larger in magnitude than ln[(1-p)/(1-q)]. For the numbers I gave above, p = 0.10 and q = 0.02, ln[p/q] = 1.609 while ln[(1-p)/(1-q)] = -0.085. You will therefore update substantially more upon a success than on a failure.
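If you’d like to check the algebra yourself, here is a quick numeric sketch in Python. It’s only a sketch of my own, not anything rigorous: the function names are mine, and the parameters are just the values I chose above.

```python
import math

h, p, q = 0.50, 0.10, 0.02  # prior and success probabilities from the text

# The two log-odds increments: one per success, one per failure
print(math.log(p / q))              # ≈ 1.609: a success is strong evidence
print(math.log((1 - p) / (1 - q)))  # ≈ -0.085: a failure is weak evidence

def posterior(k, n):
    """P[H | k successes, n failures], from the closed form derived above."""
    return 1 / (1 + ((1 - h) / h) * (q / p) ** k * ((1 - q) / (1 - p)) ** n)

def posterior_direct(k, n):
    """The same posterior straight from Bayes' Law; the binomial
    coefficients cancel, so they are omitted."""
    like_h = p ** k * (1 - p) ** n * h
    like_l = q ** k * (1 - q) ** n * (1 - h)
    return like_h / (like_h + like_l)

# The closed form agrees with the direct computation
assert abs(posterior(1, 19) - posterior_direct(1, 19)) < 1e-12
```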

Yet successes are rare! This means that any given success will most likely be first preceded by a sequence of failures. This results in what I will call the darkest-before-dawn effect: Your opinion of your own talent will tend to be at its very worst in the moments just preceding a major success.

I’ve graphed the results of a few simulations illustrating this: On the X-axis is the number of overall attempts made thus far, and on the Y-axis is the posterior probability of being high-talent. The simulated individual undergoes randomized successes and failures with the probabilities I chose above.
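(For anyone who wants to replicate these, the simulations amount to just a few lines of Python. This is a minimal sketch under the parameters above; the seeds and the number of attempts are arbitrary choices of mine, and the plotting is omitted.)

```python
import random

h, p, q = 0.50, 0.10, 0.02  # prior and success probabilities from the text

def posterior(k, n):
    """Posterior P[H | k successes, n failures], closed form from above."""
    return 1 / (1 + ((1 - h) / h) * (q / p) ** k * ((1 - q) / (1 - p)) ** n)

def simulate_run(attempts, rng):
    """One high-talent individual making repeated attempts; returns their
    posterior self-evaluation after each attempt."""
    k = n = 0
    trace = []
    for _ in range(attempts):
        if rng.random() < p:  # every simulated individual is truly high-talent
            k += 1
        else:
            n += 1
        trace.append(posterior(k, n))
    return trace

for i in range(1, 11):
    trace = simulate_run(25, random.Random(i))
    print(f"Run {i:2d}: lowest = {min(trace):.2f}, final = {trace[-1]:.2f}")
```

Each run prints its lowest self-evaluation along the way, which is exactly where the darkest-before-dawn effect shows up.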

[Figure Bayesian_Impostor_full: posterior probability of being high-talent (Y-axis) over successive attempts (X-axis), for all 10 simulated runs.]

There are 10 simulations on that one graph, which may make it a bit confusing. So let’s focus in on two runs in particular, which turned out to be run 6 and run 10:

[If you skipped over the math, here’s a good place to come back. Welcome!]

[Figure Bayesian_Impostor_focus: the same plot, restricted to runs 6 and 10.]

Run 6 is a lucky little devil. They had an immediate success, followed by another success on their fourth attempt. As a result, they quickly update their posterior to conclude that they are almost certainly a high-talent individual, and even after a string of failures beyond that they never lose faith.

Run 10, on the other hand, probably has Impostor Syndrome. Failure after failure after failure slowly eroded their self-esteem, leading them to conclude that they are probably a low-talent individual. And then, suddenly, a miracle occurs: On their 20th attempt, at last they succeed, and their whole outlook changes; perhaps they are high-talent after all.

Note that all the simulations are of high-talent individuals. Run 6 and run 10 are equally competent. Ex ante, the probability of success for run 6 and run 10 was exactly the same. Moreover, both individuals are completely rational, in the sense that they are doing perfect Bayesian updating.

And yet, if you compare their self-evaluations after the 19th attempt, they could hardly look more different: Run 6 is 85% sure that they are high-talent, even though they’ve been in a slump for the last 13 attempts. Run 10, on the other hand, is 83% sure that they are low-talent, because they’ve never succeeded at all.

It is darkest just before the dawn: Run 10’s self-evaluation is at its very lowest right before they finally have a success, at which point their self-esteem surges upward, almost to baseline. With just one more success, their opinion of themselves would in fact converge to the same as Run 6’s.

This may explain, at least in part, why Impostor Syndrome is so common. When successes are few and far between—even for the very best and brightest—then a string of failures is the most likely outcome for almost everyone, and it can be difficult to tell whether you are so bright after all. Failure after failure will slowly erode your self-esteem (and should, in some sense; you’re being a good Bayesian!). You’ll observe a few lucky individuals who get their big break right away, and it will only reinforce your fear that you’re not cut out for this (whatever this is) after all.

Of course, this model is far too simple: People don’t just come in “talented” and “untalented” varieties, but have a wide range of skills that lie on a continuum. There are degrees of success and failure as well: You could get published in some obscure field journal hardly anybody reads, or in the top journal in your discipline. You could get into the University of Northwestern Ohio, or into Harvard. And people face different barriers to success that may have nothing to do with talent—perhaps why marginalized people such as women, racial minorities, LGBT people, and people with disabilities tend to have the highest rates of Impostor Syndrome. But I think the overall pattern is right: People feel like impostors when they’ve experienced a long string of failures, even when that is likely to occur for everyone.

What can be done with this information? Well, it leads me to three pieces of advice:

1. When success is rare, find other evidence. If truly “succeeding” (whatever that means in your case) is unlikely on any given attempt, don’t try to evaluate your own competence based on that extremely noisy signal. Instead, look for other sources of data: Do you seem to have the kinds of skills that people who succeed in your endeavors have—preferably based on the most objective measures you can find? Do others who know you or your work have a high opinion of your abilities and your potential? This, perhaps, is the greatest mistake we make when falling prey to Impostor Syndrome: We imagine that we have somehow “fooled” people into thinking we are competent, rather than realizing that other people’s opinions of us are actually evidence that we are in fact competent. Use this evidence. Update your posterior on that.

2. Don’t over-update your posterior on failures—and don’t under-update on successes. Very few living humans (if any) are true and proper Bayesians. We use a variety of heuristics when judging probability, most notably the representativeness and availability heuristics. These will cause you to over-respond to failures, because a string of failures makes you “look like” the kind of person who would continue to fail (representativeness), and you can’t conjure to mind any clear examples of success (availability). Keeping this in mind, your update upon experiencing failure should be small, probably as small as you can make it. Conversely, when you do actually succeed, even in a small way, don’t dismiss it. Don’t look for reasons why it was just luck—it’s always luck, at least in part, for everyone. Try to update your self-evaluation more when you succeed, precisely because success is rare for everyone.

3. Don’t lose hope. The next one really could be your big break. While astronomically baffling (no, it’s darkest at midnight, in between dusk and dawn!), “it is always darkest before the dawn” really does apply here. You are likely to feel the worst about yourself at the very moment you are about to finally succeed. Of course, you can’t know whether the next one will be it—or whether it will take five, or ten, or twenty more tries. And yes, each new failure will hurt a little bit more, make you doubt yourself a little bit more. But if you are properly grounded by what others think of your talents, you can stand firm, until that one glorious day comes and you finally make it.

Now, if I could only manage to take my own advice….

Will robots take our jobs? Not “if” but “when”.

Jan 5 JDN 2458853

The prospect of technological unemployment—in short, robots taking our jobs—is a very controversial one among economists.

For most of human history, technological advances have destroyed some jobs and created others, causing change, instability, conflict—but ultimately, not unemployment. Many economists believe that this trend will continue well into the 21st century.

Yet I am not so sure, ever since I read this chilling paragraph by Gregory Clark, which I first encountered in The Atlantic:

<quote>

There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

</quote>

Based on the statistics, what actually seems to be happening right now is that automation is bifurcating the workforce: It’s allowing some people with advanced high-tech skills to make mind-boggling amounts of money in engineering and software development, while those who lack such skills get pushed ever further into the margins, forced to take whatever jobs they can get. This skill-biased technical change is far from a complete explanation for our rising inequality, but it’s clearly a contributing factor, and I expect it will become more important over time.

Indeed, in some sense I think the replacement of most human labor with robots is inevitable. It’s not a question of “if”, but only a question of “when”. In a thousand years—if we survive at all, and if we remain recognizable as human—we’re not going to have employment in the same sense we do today. In the best-case scenario, we’ll live in the Culture, all playing games, making art, singing songs, and writing stories while the robots do all the hard labor.

But a thousand years is a very long time; we’ll be dead, and so will our children and our grandchildren. Most of us are thus understandably a lot more concerned about what happens in, say, 20 or 50 years.

I’m quite certain that not all human work will be replaced within the next 20 years. In fact, I am skeptical even of the estimates that half of all work will be automated within the next 40 years, though some very qualified experts are making such estimates. A lot of jobs are safe for now.

Indeed, my job is probably pretty safe: While there has been a disturbing trend in universities toward adjunct faculty, people are definitely still going to need economists for the foreseeable future. (Indeed, if Asimov is right, behavioral economists will one day rule the galaxy.)

Creative jobs are also quite safe; it’s going to be at least a century, maybe more, before robots can seriously compete with artists, authors, or musicians. (Robot Beethoven is a publicity stunt, not a serious business plan.) Indeed, by the time robots reach that level, I think we’ll have to start treating them as people—so in that sense, people will still be doing those jobs.

Even construction work is relatively safe—it’s actually projected to grow faster than employment in general for the next decade. This is probably because increased construction productivity tends to lead to more construction, rather than less employment. We can pretty much always use more or bigger houses, as long as we can afford them. Really, we should be hoping for technological advances in construction, which might finally bring down our astronomical housing prices, especially here in California.

But a lot of jobs are clearly going to disappear, sooner than most people seem to grasp.

The one that worries me the most is truck driving. Truck drivers are a huge workforce: Trucking employs over 1.5 million Americans, accounting for about 1% of all US workers. It’s one of the few remaining jobs that pays a middle-class salary with entry-level skills and doesn’t require an advanced education. It’s also culturally coded as highly masculine, which is advantageous in a world where a large number of men suffer so deeply from fragile masculinity (a major correlate of support for Donald Trump, by the way, as well as a source of a never-ending array of cringeworthy marketing) that they can’t bear to take even the most promising “pink collar” jobs.

And yet, long-haul trucking probably won’t exist in 20 years. Short-haul and delivery trucking will probably last a bit longer, since it’s helpful to have a human being to drive around complicated city streets and carry deliveries. Automated trucks are already here, and they are just… better. While human drivers need rest, sleep, food, and bathroom breaks, rarely exceeding 11 hours of actual driving per day (which still sounds exhausting!), an automated long-haul truck can stay on the road for over 22 hours per day, even including fuel and maintenance. The capital cost of an automated truck is currently much higher than that of an ordinary truck, but when that changes, trucking companies aren’t going to keep around a human driver when their robots can deliver twice as fast and don’t expect to be paid wages. Automated vehicles are also safer than human drivers, which will save several thousand lives per year. For this to happen, we don’t even need truly full automation; we just need to get past our current level 3 automation and reach level 4. Prototypes at that level are already under development; in about 10 years they’ll start hitting the road. The shift won’t be instantaneous; once a company has already invested in a truck and a driver, they’ll keep them around for several years. But 20 years from now, I don’t expect to see a lot of human-driven trucks left.

I’m pleased to see that the government is taking this matter seriously, already trying to develop plans for what to do when long-haul trucks become fully robotic. I hope they can come up with a good plan in time.

Some jobs that will be automated away deserve to be automated away. I can’t shed very many tears for the loss of fast-food workers and grocery cashiers (which we can already see happening around us—been to a Taco Bell lately?); those are terrible jobs that no human being should have to do. And my only concern about automated telemarketing is that it makes telemarketing cheaper and therefore more common; I certainly am not worried about the fact that people won’t be working as telemarketers anymore.

But a lot of good jobs, even white-collar jobs, are at risk of automation. Algorithms are already performing at about the same level as human radiologists, contract reviewers, and insurance underwriters, and once they get substantially better, companies are going to have trouble justifying why they would hire a human who costs more and performs worse. Indeed, the very first job to be automated by information technology was a white-collar job: “computer” used to be a profession, not a machine.

Technological advancement is inherently difficult to predict: If we knew how future technology would work, we’d build it now. So any such prediction should come with large error bars: “20 years away” could mean we make a breakthrough next year, or it could stay “20 years away” for the next 50 years.

If we had a robust social safety net—a basic income, perhaps?—this would be fine. But our culture decided somewhere along the way that people only deserve to live well if they are currently performing paid services for a corporation, and as robots get better, corporations will find they don’t need so many people performing services. We could face up to this fact and use it as an opportunity for deeper reforms; but I fear that instead we’ll wait to act until the crisis is already upon us.

On compromise: The kind of politics that can be bipartisan—and the kind that can’t

Dec 29 JDN 2458847

The “polarization” of our current government has been much maligned. And there is some truth to this: The ideological gap between Democrats and Republicans in Congress is larger than it has been in a century. There have been many calls by self-proclaimed “centrists” for a return to “bipartisanship”.

But there is nothing centrist about compromising with fascists. If one party wants to destroy democracy and the other wants to save it, a true centrist would vote entirely with the pro-democracy party.

There is a kind of politics that can be bipartisan, that can bear reasonable compromise. Most economic policy is of this kind. If one side wants a tax of 40% and the other wants 20%, it’s quite reasonable to set the tax at 30%. If one side wants a large tariff and the other no tariff, it’s quite reasonable to make a small tariff. It could still be wrong—I’d tend to say that the 40% tax with no tariff is the right way to go—but it won’t be unjust. We can in fact “agree to disagree” in such cases. There really is a reasonable intermediate view between the extremes.

But there is also a kind of politics that can’t be bipartisan, in which compromise is inherently unjust. Most social policy is of this kind. If one side wants to let women vote and the other doesn’t, you can’t compromise by letting half of women vote. Women deserve the right to vote, period. All of them. In some sense letting half of women vote would be an improvement over none at all, but it’s obviously not an acceptable policy. The only just thing to do is to keep fighting until all women can vote.

This isn’t a question of importance per se.

Climate change is probably the single most important thing going on in the world this century, but it is actually something we can reasonably compromise about. It isn’t obvious where exactly the emission targets should be set to balance environmental sustainability with economic growth, and reasonable people can disagree about how to draw that line. (It is not reasonable to deny that climate change is important and refuse to take any action at all—which, sadly, is what the Republicans have been doing lately.) Thousands of innocent people have already been killed by Trump’s nonsensical deregulation of air pollution—but in fact it’s a quite difficult problem to decide exactly how pollution should be regulated.

Conversely, voter suppression has little if any effect on our actual outcomes. In a country of 320 million people, even tens of thousands of votes rarely make a difference, and the (Constitutional) Electoral College does far greater damage to the principle of “one person, one vote” than voter suppression ever could. But voter suppression is fundamentally, inherently anti-democratic. When you try to suppress votes, you declare yourself an enemy of the free world.

There has always been disagreement about both kinds of issues; that hasn’t changed. The fundamental rights of women, racial minorities, and LGBT people have always been politically contentious, when—qua fundamental rights—they should never have been. But at least as far as I could tell, we seemed to be making progress on all these fronts. The left wing was dragging the right wing, kicking and screaming if necessary, toward a more just society.

Then came President Donald Trump.

The Trump administration, more than any other administration I can remember, has been reversing social progress, taking hardline far-right positions on the kind of issues we can’t compromise about. Locking up children at the border. Undermining judicial due process. Suppressing voter participation. These are attacks upon the foundations of a free society. We can’t “agree to disagree” on them.

Indeed, Trump’s economic policy has been surprisingly ambivalent; while he cuts taxes on the rich like a standard Republican, his trade war is much more of a leftist idea. It’s not so much that he’s willing to compromise as that he’s utterly inconsistent, but at least he’s not a consistent extremist on these issues.

That is what makes Trump an anomaly. The Republicans have gradually become more extreme over time, but it was Trump who carried them over a threshold, where they stopped retarding social progress and began actively reversing it. Removing Trump himself will not remove the problem—but nor would it be an empty gesture. He is a real part of the problem, and removing him might just give us the chance to make the deeper changes that need to be made.

The House agrees. Unfortunately, I doubt the Senate will.

Tithing makes quite a lot of sense

Dec 22 JDN 2458840

Christmas is coming soon, and it is a season of giving: Not only gifts to those we love, but also to charities that help people around the world. It’s a theme of some of our most classic Christmas stories, like A Christmas Carol. (I do have to admit: Scrooge really isn’t wrong for not wanting to give to some random charity without any chance to evaluate it. But I also get the impression he wasn’t giving a lot to evaluated charities either.) And people do really give more around this time of year: Charitable donation rates peak in November and December (though that may also have something to do with tax deductions).

Where should we give? This is not an easy question, but it’s one that we now have tools to answer: There are various independent charity evaluation agencies, like GiveWell and Charity Navigator, which can at least provide some idea of which charities are most cost-effective.

How much should we give? This question is a good deal harder.

Perhaps a perfect being would determine their own precise marginal utility of wealth, and the marginal utility of spending on every possible charity, and give to the best possible charity up until those two marginal utilities were equal. Since $1 to UNICEF or the Against Malaria Foundation saves about 0.02 QALY, and (unless you’re a billionaire) you don’t have enough money to meaningfully affect the budget of UNICEF, you’d probably need to give until you were yourself at the UN poverty level of $1.90 per day.

I don’t know of anyone who does this. Even Peter Singer, who writes books that essentially tell us to do this, doesn’t do this. I’m not sure it’s humanly possible to do this. Indeed, I’m not even so sure that a perfect being would do it, since it would require destroying their own life and their own future potential.

How about we all give 10%? In other words, how about we tithe? Yes, it sounds arbitrary—because it is. It could just as well have been 8% or 11%. Perhaps one-tenth feels natural to a base-10 culture made of 10-fingered beings, and if we used a base-12 numeral system we’d think in terms of giving one-twelfth instead. But 10% feels reasonable to a lot of people, it has a lot of cultural support behind it already, and it has become a Schelling point for coordination on this otherwise intractable problem. We need to draw the line somewhere, and it might as well be there.

As Slate Star Codex put it:

<quote>

It’s ten percent because that’s the standard decreed by Giving What We Can and the effective altruist community. Why should we believe their standard? I think we should believe it because if we reject it in favor of “No, you are a bad person unless you give all of it,” then everyone will just sit around feeling very guilty and doing nothing. But if we very clearly say “You have discharged your moral duty if you give ten percent or more,” then many people will give ten percent or more. The most important thing is having a Schelling point, and ten percent is nice, round, divinely ordained, and – crucially – the Schelling point upon which we have already settled. It is an active Schelling point. If you give ten percent, you can have your name on a nice list and get access to a secret forum on the Giving What We Can site which is actually pretty boring.

It’s ten percent because definitions were made for Man, not Man for definitions, and if we define “good person” in a way such that everyone is sitting around miserable because they can’t reach an unobtainable standard, we are stupid definition-makers. If we are smart definition-makers, we will define it in whichever way which makes it the most effective tool to convince people to give at least that much.

</quote>

I think it would be also reasonable to adjust this proportion according to your household income. If you are extremely poor, give a token amount: Perhaps 1% or 2%. (As it stands, most poor people already give more than this, and most rich people give less.) If you are somewhat below the median household income, give a bit less: Perhaps 6% or 8%. (I currently give 8%; I plan to increase to 10% once I get a higher-paying job after graduation.) If you are somewhat above, give a bit more: Perhaps 12% or 15%. If you are spectacularly rich, maybe you should give as much as 25%.

Is 10% enough? Well, actually, if everyone gave, even 1% would probably be enough. The total GDP of the First World is about $40 trillion; 1% of that is $400 billion per year, which is more than enough to end world hunger. But since we know that not everyone will give, we need to adjust our standard upward so that those who do give will give enough. (There’s actually an optimization problem here which is basically equivalent to finding a monopoly’s profit-maximizing price.) And just ending world hunger probably isn’t enough; there is plenty of disease to cure, education to improve, research to do, and ecology to protect. If, say, a third of First World people gave 10%, that would be about $1.3 trillion, which would be enough money to at least make a huge difference in all those areas.
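(If you want to check that back-of-envelope arithmetic, here it is as a tiny Python sketch; the $40 trillion figure is just the rough total I cited above.)

```python
gdp = 40e12  # total First World GDP, roughly $40 trillion (figure from the text)

everyone_one_percent = gdp * 0.01   # ≈ $400 billion per year
a_third_tithing = (gdp / 3) * 0.10  # ≈ $1.3 trillion per year

print(f"Everyone gives 1%:  ${everyone_one_percent / 1e9:,.0f} billion per year")
print(f"A third gives 10%:  ${a_third_tithing / 1e12:.2f} trillion per year")
```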

You can decide for yourself where you think you should draw the line. But 10% is a pretty good benchmark, and above all—please, give something. If you give anything, you are probably already above average. A large proportion of people give nothing at all. (Only 24% of US tax returns include a charitable deduction—though, to be fair, a lot of us donate but don’t itemize deductions. Even once you account for that, only about 60% of US households give to charity in any given year.)

To a first approximation, all human behavior is social norms

Dec 15 JDN 2458833

The language we speak, the food we eat, and the clothes we wear—indeed, the fact that we wear clothes at all—are all the direct result of social norms. But norms run much deeper than this: Almost everything we do is more norm than not.

Why do you sleep and wake up at a particular time of day? For most people, the answer is that they need to get up to go to work. Why do you need to go to work at that specific time? Why does almost everyone go to work at the same time? Social norms.

Even the most extreme human behaviors are often most comprehensible in terms of social norms. The most effective predictive models of terrorism are based on social networks: You are much more likely to be a terrorist if you know people who are terrorists, and much more likely to become a terrorist if you spend a lot of time talking with terrorists. Cultists and conspiracy theorists seem utterly baffling if you imagine that humans form their beliefs rationally—and totally unsurprising if you realize that humans mainly form their beliefs by matching those around them.

For a long time, economists have ignored social norms at our peril; we’ve assumed that financial incentives will be sufficient to motivate behavior, when social incentives can very easily override them. Indeed, it is entirely possible for a financial incentive to have a negative effect, when it crowds out a social incentive: A good example is a friend who would gladly come over to help you with something as a friend, but then becomes reluctant if you offer to pay him $25. I previously discussed another example, where taking a mentor out to dinner sounds good but paying him seems corrupt.

Why do you drive on the right side of the road (or the left, if you’re in Britain)? The law? Well, the law is already a social norm. But in fact, it’s hardly just that. You probably sometimes speed or run red lights, which are also in violation of traffic laws. Yet somehow driving on the right side seems to be different. Well, that’s because driving on the right has a much stronger norm—and in this case, that norm is self-enforcing, with the risk of severe bodily harm or death.

This is a good example of why it isn’t necessary for everyone to choose to follow a norm for that norm to have a great deal of power. As long as the norms include some mechanism for rewarding those who follow and punishing those who don’t, norms can become compelling even to those who would prefer not to obey. Sometimes it’s not even clear whether people are following a norm or following direct incentives, because the two are so closely aligned.

Humans are not the only social species, but we are by far the most social species. We form larger, more complex groups than any other animal; we form far more complex systems of social norms; and we follow those norms with slavish obedience. Indeed, I’m a little suspicious of some of the evolutionary models predicting the evolution of social norms, because they predict it too well; they seem to suggest that it should arise all the time, when in fact it’s only a handful of species who exhibit it at all and only we who build our whole existence around it.

Along with our extreme capacity for altruism, this is another way that human beings actually deviate more from the infinite identical psychopaths of neoclassical economics than most other animals. Yes, we’re smarter than other animals; other animals are more likely to make mistakes (though certainly we make plenty of our own). But most other animals aren’t motivated by entirely different goals than individual self-interest (or “evolutionary self-interest” in a Selfish Gene sort of sense) the way we typically are. Other animals try to be selfish and often fail; we try not to be selfish and usually succeed.

Economics experiments often go out of their way to exclude social motives as much as possible—anonymous random matching with no communication, for instance—and still end up failing. Human behavior in experiments is consistent, systematic—and almost never completely selfish.

Once you start looking for norms, you see them everywhere. Indeed, it becomes hard to see anything else. To a first approximation, all human behavior is social norms.

Good for the economy isn’t the same as good

Dec 8 JDN 2458826

Many of the common critiques of economics are actually somewhat misguided, or at least outdated: While there are still some neoclassical economists who think that markets are perfect and humans are completely rational, most economists these days would admit that there are at least some exceptions to this. But there’s at least one common critique that I think still has a good deal of merit: “Good for the economy” isn’t the same thing as good.

I’ve read literally dozens, if not hundreds, of articles on economics, in both popular press and peer-reviewed journals, that all defend their conclusions in the following way: “Intervention X would statistically be expected to increase GDP/raise total surplus/reduce unemployment. Therefore, policymakers should implement intervention X.” The fact that a policy would be “good for the economy” (in a very narrow sense) is taken as a completely compelling reason that this policy must be overall good.

The clearest examples of this always turn up during a recession, when inevitably people will start saying that cutting unemployment benefits will reduce unemployment. Sometimes it’s just right-wing pundits, but often it’s actually quite serious economists.

The usual left-wing response is to deny the claim, explain all the structural causes of unemployment in a recession and point out that unemployment benefits are not what caused the surge in unemployment. This is true; it is also utterly irrelevant. It can be simultaneously true that the unemployment was caused by bad monetary policy or a financial shock, and also true that cutting unemployment benefits would in fact reduce unemployment.

Indeed, I’m fairly certain that both of those propositions are true, to greater or lesser extent. Most people who are unemployed will remain unemployed regardless of how high or low unemployment benefits are; and likewise most people who are employed will remain so. But at the margin, I’m sure there’s someone who is on the fence about searching for a job, or who is trying to find a job but could try a little harder with some extra pressure, or who has a few lousy job offers they’re not taking because they hope to find a better offer later. That is, I have little doubt that the claim “Cutting unemployment benefits would reduce unemployment” is true.

The problem is that this is in no way a sufficient argument for cutting unemployment benefits. For while it might reduce unemployment per se, more importantly it would actually increase the harm of unemployment. Indeed, those two effects are in direct proportion: Cutting unemployment benefits only reduces unemployment insofar as it makes being unemployed a more painful and miserable experience for the unemployed.

Indeed, the very same (oversimplified) economic models that predict that cutting benefits would reduce unemployment use that precise mechanism, and thereby predict, necessarily, that cutting unemployment benefits will harm those who are unemployed. It has to. In some sense, it’s supposed to; otherwise it wouldn’t have any effect at all.

That is, if your goal is actually to help the people harmed by a recession, cutting unemployment benefits is absolutely not going to accomplish that. But if your goal is simply to reduce unemployment at any cost, I suppose it would in fact do that. (Also highly effective against unemployment: Mass military conscription. If everyone’s drafted, no one is unemployed!)

Similarly, I’ve read more than a few policy briefs written to the governments of poor countries telling them how some radical intervention into their society would (probably) increase their GDP, and then either subtly implying or outright stating that this means they are obliged to enact this intervention immediately.

Don’t get me wrong: Poor countries need to increase their GDP. Indeed, it’s probably the single most important thing they need to do. Providing better security, education, healthcare, and sanitation are all things that will increase GDP—but they’re also things that will be easier if you have more GDP.

(Rich countries, on the other hand? Maybe we don’t actually need to increase GDP. We may actually be better off focusing on things like reducing inequality and improving environmental sustainability, while keeping our level of GDP roughly the same—or maybe even reducing it somewhat. Stay inside the wedge.)

But the mere fact that a policy will increase GDP is not a sufficient reason to implement that policy. You also need to consider all sorts of other effects the policy will have: Poverty, inequality, social unrest, labor standards, pollution, and so on.

To be fair, sometimes these articles only say that the policy will increase GDP, and don’t actually assert that this is a sufficient reason to implement it, theoretically leaving open the possibility that other considerations will be overriding.

But that’s really not all that comforting. If the only thing you say about a policy is a major upside, like it or not, you are implicitly endorsing that policy. Framing is vital. Everything you say could be completely, objectively, factually true; but if you only tell one side of the story, you are presenting a biased view. There’s a reason the oath is “The truth, the whole truth, and nothing but the truth.” A partial view of the facts can be as bad as an outright lie.

Of course, it’s unreasonable to expect you to present every possible consideration that could become relevant. Rather, I expect you to do two things: First, if you include some positive aspects, also include some negative ones, and vice-versa; never let your argument sound completely one-sided. Second, clearly and explicitly acknowledge that there are other considerations you haven’t mentioned.

Moreover, if you are talking about something like increasing GDP or decreasing unemployment—something that has been, many times, by many sources, treated as though it were a completely compelling reason unto itself—you must be especially careful. In such a context, an article that would be otherwise quite balanced can still come off as an unqualified endorsement.