Housing should be cheap

Sep 1 JDN 2460555

We are of two minds about housing in our society. On the one hand, we recognize that shelter is a necessity, and we want it to be affordable for all. On the other hand, we see real estate as an asset, and we want it to appreciate in value and thereby provide a store of wealth. So on the one hand we want it to be cheap, but on the other hand we want it to be expensive. And of course it can’t be both.

This is not a uniquely American phenomenon. As Noah Smith points out, it seems to be how things are done in almost every country in the world. It may be foolish for me to try to turn such a tide. But I’m going to try anyway.

Housing should be cheap.

For some reason, inflation is seen as a bad thing for every other good, necessity and luxury alike; but when it comes to housing in particular—the single biggest expense for almost everyone—suddenly we are conflicted about it, and think that maybe inflation is a good thing actually.

This is because owning a home that appreciates in value provides the illusion of increasing wealth.

Yes, I said illusion. In certain circumstances it can genuinely increase real wealth, but when housing is getting more expensive everywhere at once (which is basically what has been happening), it doesn’t actually increase real wealth—because you still need to have a home. You’d get more money if you sold your current home, but you’d have to go buy another home that would be just as expensive. That extra wealth is largely imaginary.

In fact, what isn’t an illusion is your increased property tax bill. If you aren’t planning on selling your home any time soon, you should really see its appreciation as a bad thing; now you suddenly owe more in taxes.

Home equity lines of credit complicate this a bit; for some reason we let people collateralize part of the home—even though the whole home is already collateralized with a mortgage to someone else—and thereby turn that largely-imaginary wealth into actual liquid cash. This is just one more way that our financial system is broken; we shouldn’t be offering these lines of credit, just as we shouldn’t be creating mortgage-backed securities. Cleverness is not a virtue in finance; banking should be boring.

But you’re probably still not convinced. So I’d like you to consider a simple thought experiment, where we take either view to the extreme: Make housing 100 times cheaper or 100 times more expensive.

Currently, a typical US house costs about $400,000. So in Cheap World, houses cost $4,000. In Expensive World, they cost $40 million.

In Cheap World, there is no homelessness. Seriously, zero. It would make no sense at all for the government not to simply buy everyone a house. If you want to also buy your own house—or a dozen—go ahead, that’s fine; but you get one for free, paid for by tax dollars, because that’s cheaper than a year of schooling for a high-school student; it’s in fact not much more than what we’d currently spend to house someone in a homeless shelter for a year. So given the choice of offering someone two years at a shelter versus never homeless ever again, it’s pretty obvious we should choose the latter. Thus, in Cheap World, we all have a roof over our heads. And instead of storing their wealth in their homes in Cheap World, people store their wealth in stocks and bonds, which have better returns anyway.

In Expensive World, the top 1% are multi-millionaires who own homes, maybe the top 10% can afford rent, and the remaining 89% of the population are homeless. There’s simply no way to allocate the wealth of our society such that a typical middle class household has $40 million. We’re just not that rich. We probably never will be that rich. It may not even be possible to make a society that rich. In Expensive World, most people live in tents on the streets, because housing has been priced out of reach for all but the richest families.

Cheap World sounds like an amazing place to live. Expensive World is a horrific dystopia. The only thing I changed was the price of housing.


Yes, I changed it a lot; but that was to make the example as clear as possible, and it’s not even as extreme as it probably sounds. At 10% annual growth, 100 times more expensive only takes 49 years. At the current growth rate of housing prices of about 5% per year, it would take 95 years. A century from now, if we don’t fix our housing market, we will live in Expensive World. (Yes, we’ll most likely be richer then too; but will we be that much richer? Median income has not been rising nearly as fast as median housing price. If current trends continue, median income will be 5 times bigger and housing prices will be 100 times bigger—that’s still terrible.)
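If you want to check that compound-growth arithmetic yourself, here is a minimal Python sketch. The 100× target and the 5% and 10% growth rates are just the figures from the example above; nothing else is assumed:

```python
import math

def years_to_multiply(factor, annual_growth):
    """Years of compound growth needed for a price to multiply by `factor`."""
    return math.log(factor) / math.log(1 + annual_growth)

# How long until housing costs 100 times as much?
for rate in (0.10, 0.05):
    print(f"At {rate:.0%} annual growth: {years_to_multiply(100, rate):.1f} years")

# At 10% annual growth: 48.3 years
# At 5% annual growth: 94.4 years
# Rounding up to the first whole year past the threshold gives the 49 and 95 quoted above.
```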

We’re already seeing something that feels a lot like Expensive World in some of our most expensive cities. San Francisco has ludicrously expensive housing and also a massive homelessness crisis—this is not a coincidence. Homelessness does still exist in more affordable cities, but clearly not at the same crisis level.

I think part of the problem is that people don’t really understand what wealth is. They see the number go up, and they think that means there is more wealth. But real wealth consists in goods, not in prices: the homes, food, and machines we actually have. Prices merely decide how that wealth is allocated.

A home is wealth, yes. But it’s the same amount of real wealth regardless of what price it has, because what matters is what it’s good for. If you become genuinely richer by selling an appreciated home, you gained that extra wealth from somewhere else; it was not contained within your home. You have appropriated wealth that someone else used to have. You haven’t created wealth; you’ve merely obtained it.

For you as an individual, that may not make a difference; you still get richer. But as a society, it makes all the difference: Moving wealth around doesn’t make our society richer, and all higher prices can do is move wealth around.

This means that rising housing prices simply cannot make our whole society richer. Better houses could do that. More houses could do that. But simply raising the price tag isn’t making our society richer. If it makes anyone richer—which, again, typically it does not—it does so by moving wealth from somewhere else. And since homeowners are generally richer than non-homeowners (even aside from their housing wealth!), more expensive homes means moving wealth from poorer people to richer people—increased inequality.

We used to have affordable housing, just a couple of generations ago. But we may never have truly affordable housing again, because people really don’t like to see that number go down, and they vote for policies accordingly—especially at the local level. Our best hope right now seems to be to keep it from going up faster than the growth rate of income, so that homes don’t become any more unaffordable than they already are.

But frankly I’m not optimistic. I think part of the cyberpunk dystopia we’re careening towards is Expensive World.

How to detect discrimination, empirically

Aug 25 JDN 2460548

For concreteness, I’ll use men and women as my example, though the same principles would apply for race, sexual orientation, and so on. Suppose we find that there are more men than women in a given profession; does this mean that women are being discriminated against?

Not necessarily. Maybe women are less interested in that kind of work, or innately less qualified. Is there a way we can determine empirically that it really is discrimination?

It turns out that there is. All we need is a reliable measure of performance in that profession. Then, we compare performance between men and women, and that comparison can tell us whether discrimination is happening or not. The key insight is that workers in a job are not a random sample; they are a selected sample. The results of that selection can tell us whether discrimination is happening.

Here’s a simple model to show how this works.

Suppose there are five different skill levels in the job, from 1 to 5, where 5 is the most skilled. And suppose there are 5 women and 5 men in the population.

1. Baseline

The baseline case to consider is when innate talents are equal and there is no discrimination. In that case, we should expect men and women to be equally represented in the profession.

For the simplest case, let’s say that there is one person at each skill level:

Men   Women
1     1
2     2
3     3
4     4
5     5

Now suppose that everyone above a certain skill threshold gets hired. Since we’re assuming no discrimination, the threshold should be the same for men and women. Let’s say it’s 3; then these are the people who get hired:

Hired Men   Hired Women
3           3
4           4
5           5

The result is that not only are there the same number of men and women in the job, their skill levels are also the same. There are just as many highly-competent men as highly-competent women.

2. Innate Differences

Now, suppose there is some innate difference in talent between men and women for this job. For most jobs this claim seems dubious, but consider pro sports: Men really are better at basketball, in general, than women, and this is pretty clearly genetic. So it’s not absurd to suppose that for at least some jobs, there might be some innate differences. What would that look like?


Again suppose a population of 5 men and 5 women, but now the women are a bit less qualified: There are two 1s and no 5s among the women.

Men   Women
1     1
2     1
3     2
4     3
5     4

Then, this is the group that will get hired:

Hired Men   Hired Women
3           3
4           4
5

The result will be fewer women who are on average less qualified. The most highly-qualified individuals at that job will be almost entirely men. (In this simple model, entirely men; but you can easily extend it so that there are a few top-qualified women.)

This is in fact what we see for a lot of pro sports; in a head-to-head match, even the best WNBA teams would generally lose against most NBA teams. That’s what it looks like when there are real innate differences.

But it’s hard to find clear examples outside of sports. The genuine, large differences in size and physical strength between the sexes just don’t seem to be associated with similar differences in mental capabilities or even personality. You can find some subtler effects, but nothing very large—and certainly nothing large enough to explain the huge gender gaps in various industries.

3. Discrimination

What does it look like when there is discrimination?

Now assume that men and women are equally qualified, but it’s harder for women to get hired, because of discrimination. The key insight here is that this amounts to women facing a higher threshold. Where men only need to have level 3 competence to get hired, women need level 4.

So if the population looks like this:

Men   Women
1     1
2     2
3     3
4     4
5     5

The hired employees will look like this:

Hired Men   Hired Women
3
4           4
5           5

Once again we’ll have fewer women in the profession, but they will be on average more qualified. The top-performing individuals will be as likely to be women as they are to be men, while the lowest-performing individuals will be almost entirely men.

This is the kind of pattern we observe when there is discrimination. Do we see it in real life?

Yes, we see it all the time.

Corporations with women CEOs are more profitable.

Women doctors have better patient outcomes.

Startups led by women are more likely to succeed.

This shows that there is some discrimination happening, somewhere in the process. Does it mean that individual firms are actively discriminating in their hiring process? No, it doesn’t. The discrimination could be happening somewhere else; maybe it happens during education, or once women get hired. Maybe it’s a product of sexism in society as a whole, that isn’t directly under the control of employers. But it must be in there somewhere. If women are both rarer and more competent, there must be some discrimination going on.

What if there is also innate difference? We can detect that too!

4. Both

Suppose now that men are on average more talented, but there is also discrimination against women. Then the population might look like this:

Men   Women
1     1
2     1
3     2
4     3
5     4

And the hired employees might look like this:

Hired Men   Hired Women
3
4
5           4

In such a scenario, you’ll see a large gender imbalance, but there may not be a clear difference in competence. The tiny fraction of women who get hired will perform about as well as the men, on average.

Of course, this assumes that the two effects are of equal strength. In reality, we might see a whole spectrum of possibilities, from very strong discrimination with no innate differences, all the way to very large innate differences with no discrimination. The outcomes will then be similarly along a spectrum: When discrimination is much larger than innate difference, women will be rare but more competent. When innate difference is much larger than discrimination, women will be rare and less competent. And when there is a mix of both, women will be rare but won’t show as much difference in competence.

Moreover, if you look closer at the distribution of performance, you can still detect the two effects independently. If the lowest-performing workers are almost all men, that’s evidence of discrimination against women; while if the highest-performing workers are almost all men, that’s evidence of innate difference. And if you look at the table above, that’s exactly what we see: Both the 3 and the 5 are men, indicating the presence of both effects.
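To make the selection logic concrete, here is a minimal Python sketch of the toy model above. The populations and thresholds are exactly the ones in the tables; none of this is empirical data:

```python
def hired(population, threshold):
    """Everyone at or above the skill threshold gets hired."""
    return [skill for skill in population if skill >= threshold]

def average(xs):
    return sum(xs) / len(xs)

def report(label, men, women, men_threshold, women_threshold):
    hm, hw = hired(men, men_threshold), hired(women, women_threshold)
    print(f"{label}: {len(hm)} men (avg skill {average(hm):.1f}), "
          f"{len(hw)} women (avg skill {average(hw):.1f})")

equal  = [1, 2, 3, 4, 5]   # one person at each skill level
weaker = [1, 1, 2, 3, 4]   # innately less talented population (two 1s, no 5)

report("1. Baseline",       equal, equal,  3, 3)  # same numbers, same average
report("2. Innate diff.",   equal, weaker, 3, 3)  # fewer women, LOWER average
report("3. Discrimination", equal, equal,  3, 4)  # fewer women, HIGHER average
report("4. Both",           equal, weaker, 3, 4)  # far fewer women, similar average
```

The signature to look for is in the averages: rarer but more competent points to discrimination, rarer and less competent points to innate difference, and rarer with no clear difference in competence suggests both at once.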

What does affirmative action do?

Effectively, affirmative action lowers the threshold for hiring women (or minorities) in order to equalize representation in the workplace. In the presence of discrimination raising that threshold, this is exactly what we need! It can take us from case 3 (discrimination) to case 1 (equality), or from case 4 (both discrimination and innate difference) to case 2 (innate difference only).

Of course, it’s possible to overshoot and apply more affirmative action than the discrimination actually warrants. If we achieve better representation of women, but the lowest performers at the job are women, then we have overshot, effectively discriminating against men. Fortunately, there is very little evidence of this in practice. In general, even with affirmative action programs in place, we tend to find that the lowest performers are still men—so there is still discrimination against women that we’ve failed to compensate for.

What if we can’t measure competence?

Of course, it’s possible that we don’t have good measures of competence in a given industry. (One must wonder how firms decide who to hire, but frankly I’m prepared to believe they’re just really bad at it.) Then we can’t observe discrimination statistically in this way. What do we do then?

Well, there is at least one avenue left for us to detect discrimination: We can do direct experiments comparing resumes with male names versus female names. These sorts of experiments typically don’t find very much, though—at least for women. For different races, they absolutely do find strong results. They also find evidence of discrimination against people with disabilities, older people, and people who are physically unattractive. There’s also evidence of intersectional effects, where women of particular ethnic groups get discriminated against even when women in general don’t.

But this will only pick up discrimination if it occurs during the hiring process. The advantage of having a competence measure is that it can detect discrimination that occurs anywhere—even outside employer control. Of course, if we don’t know where the discrimination is happening, that makes it very hard to fix; so the two approaches are complementary.

And there is room for new methods too; right now we don’t have a good way to detect discrimination in promotion decisions, for example. Many of us suspect that it occurs, but unless you have a good measure of competence, you can’t really distinguish promotion discrimination from innate differences in talent. We don’t have a good method for testing that in a direct experiment, either, because unlike hiring, we can’t just use fake resumes with masculine or feminine names on them.

Why are groceries so expensive?

Aug 18 JDN 2460541

There has been unusually high inflation over the past few years, mostly attributable to the COVID pandemic and its aftermath. But groceries in particular seem to have gotten especially expensive. We’ve all felt it: Eggs, milk, and toilet paper soared to extreme prices and then, even when they came back down, never came down all the way.

Why would this be?

Did it involve supply chain disruptions? Sure. Was it related to the war in Ukraine? Probably.

But it clearly wasn’t just those things—because, as the FTC recently found, grocery stores have been colluding and price-gouging. Large grocery chains like Walmart and Kroger have a lot of market power, and they used that power to raise prices considerably faster than was necessary to keep up with their increased costs; as a result, they made record profits. Their costs did genuinely increase, but they increased their prices even more, and ended up being better off.

The big chains were also better able to protect their own supply chains than smaller companies, and so the effects of the pandemic further entrenched the market power of a handful of corporations. Some of them also imposed strict delivery requirements on their suppliers, pressuring them to prioritize the big companies over the small ones.

This kind of thing is what happens when we let oligopolies take control. When only a few companies control the market, prices go up, quality goes down, and inequality gets worse.

For far too long, institutions like the FTC have failed to challenge the ever tighter concentration of our markets in the hands of a small number of huge corporations.

And it’s not just grocery stores.

Our media is dominated by five corporations: Disney, WarnerMedia, NBCUniversal, Sony, and Paramount.

Our cell phone service is 99% controlled by three corporations: T-Mobile, Verizon, and AT&T.

Our music industry is dominated by three corporations: Sony, Universal, and Warner.

Two-thirds of US airline traffic is carried by just four airlines: American, Delta, Southwest, and United.

Nearly 40% of US commercial banking assets are controlled by just three banks: JPMorgan Chase, Bank of America, and Citigroup.

Do I even need to mention the incredible market share Google has in search—over 90%—or Facebook has in social media—over 50%?

And most of these lists used to be longer. Disney recently acquired 21st Century Fox. Viacom recently merged with CBS and then became Paramount. Universal recently acquired EMI. Our markets aren’t simply alarmingly concentrated; they have also been getting more concentrated over time.

Institutions like the FTC are supposed to be protecting us from oligopolies, by ensuring that corporations can’t merge and acquire each other once they reach a certain market share. But decades of underfunding and laissez-faire ideology have weakened these institutions. So many mergers that obviously shouldn’t have been allowed were allowed, because no regulatory agency had the will and the strength to stop them.

The good news is that this is finally beginning to change: The Justice Department has recently (finally!) sued Google for maintaining a monopoly on Internet search. And among grocery stores in particular, the FTC is challenging Kroger’s acquisition of Albertsons—though it remains unclear whether that challenge will succeed.

Hopefully this is a sign that the FTC has found its teeth again, and will continue to prosecute anti-trust cases against oligopolies. A lot of that may depend on who ends up in the White House this November.

Adverse selection and all-you-can-eat

Jul 7 JDN 2460499

The concept of adverse selection is normally associated with finance and insurance, and it certainly has a lot of important applications there. But finance and insurance are complicated (possibly intentionally?) and a lot of people are intimidated by them. It turns out there’s a much simpler example of this phenomenon, one most people should find familiar:

All-you-can-eat meals.

At most restaurants, you buy a specific amount of food: One cheeseburger, one large order of fries. But at some, you have another option: You can buy an indeterminate amount of food, as much as you are able to eat at one sitting.

Now think about this from the restaurant’s perspective: How do you price an all-you-can-eat meal and turn a profit? Your cost obviously depends on how much food you need to prepare, but you don’t know exactly how much each customer is going to eat.

Fortunately, you don’t need to! You only need to know how much people will eat on average. As long as the average customer’s meal costs less to provide than what they paid for it, you will continue to make a profit, even though some customers end up eating more than what they paid for.

Insurance works the same way: Some people will cash in on their insurance, costing the company money; but most will not, providing the company with revenue. In fact, you could think of an all-you-can-eat meal as a form of food insurance.

So, all you need to do is figure out how much an average person eats in one meal, and price based on that, right?

Wrong. Here’s the problem: The people who eat at your restaurant aren’t a random sample of people. They are specifically the kind of people who eat at all-you-can-eat restaurants.

Someone who eats very little probably won’t want to go to your restaurant very much, because they’ll have to pay a high price for very little food. But someone with a big appetite will go to your restaurant frequently, because they get to eat a large amount of food for that same price.

This means that, on average, your customers will end up eating more than what an average restaurant customer eats. You’ll have to raise the price accordingly—which will make the effect even stronger.

This can end in one of two ways: Either an equilibrium is reached where the price is pretty high and most of the customers have big appetites, or no equilibrium is reached, and the restaurant either goes bankrupt or gets rid of its all-you-can-eat policy.

But there’s basically no way to get the outcome that seems the best, which is a low price and a wide variety of people attending the restaurant. Those who eat very little just won’t show up.

That’s adverse selection. Because there’s no way to charge people who eat more a higher price (other than, you know, not being all-you-can-eat), people will self-select by choosing whether or not to attend, and the people who show up at your restaurant will be the ones with big appetites.
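Here is a minimal Python sketch of that self-selection spiral, with made-up appetites chosen purely for illustration. Each potential customer knows how much it would cost the restaurant to feed them; only people whose appetite exceeds the price bother to show up; and the restaurant keeps re-pricing to the average cost of whoever remains, plus a margin:

```python
# Hypothetical cost (in $) of feeding each potential customer one meal:
appetites = [5, 8, 10, 12, 15, 18, 22, 25, 30, 35]

price = 12.0   # initial price: roughly the population-average appetite
margin = 1.10  # 10% markup over average cost

for round_number in range(1, 8):
    customers = [a for a in appetites if a >= price]  # light eaters stay home
    if not customers:
        print(f"Round {round_number}: at ${price:.2f}, nobody comes -- market unravels")
        break
    average_cost = sum(customers) / len(customers)
    print(f"Round {round_number}: ${price:.2f} -> {len(customers)} customers, "
          f"average cost ${average_cost:.2f}")
    price = margin * average_cost  # re-price to stay profitable
```

With these particular numbers the spiral never stops: each price hike drives out the lightest remaining eaters until no one is left, which is the bankruptcy outcome. Other numbers settle into the other outcome, a high price paid only by big eaters; what you can’t get is a low price and a broad mix of customers.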

The same thing happens with insurance. Say we’re trying to price health insurance: It isn’t enough to know the average medical expenses of our population, even if we know a lot of specific demographic information. People who are very healthy may choose not to buy insurance, leaving us with only the less-healthy people buying our insurance—which will force us to raise the price of our insurance.

Once again, you’re not getting a random sample; you’re getting a sample of the kind of people who buy health insurance.

Obamacare was specifically designed to prevent this, by imposing a small fine on people who choose not to buy health insurance. The goal was to get more healthy people buying insurance, in order to bring the cost down. It worked, at least for a while—but that individual mandate has since been nullified, so adverse selection will once again rear its ugly head. Had our policymakers better understood this concept, they might not have removed the individual mandate.

Another option might occur to you, analogous to the restaurant: What if we just didn’t offer insurance, and made people pay for all their own healthcare? This would be like the restaurant ending its all-you-can-eat policy and charging for each new serving. Most restaurants do that, so maybe it’s the better option in general?

There are two problems here, one ethical, one economic.

The ethical problem is that people don’t deserve to be sick or injured. They didn’t choose those things. So it isn’t fair to let them suffer or bear all the costs of getting better. As a society, we should share in those costs. We should help people in need. (If you don’t already believe this, I don’t know how to convince you of it. But hopefully most people do already believe this.)

The economic problem is that some healthcare is rarely needed, but very expensive. That’s exactly the sort of situation where insurance makes sense, to spread the cost around. If everyone had to pay for their own care with no insurance at all, then most people who get severe illnesses simply wouldn’t be able to afford it. They’d go massively into debt, go bankrupt—people already do, even with insurance!—and still not even get much of the care they need. It wouldn’t matter that we have good treatments for a lot of cancers now; they are all very expensive, so most people with cancer would be unable to pay for them, and they’d just die anyway.

In fact, the net effect of such a policy would probably be to make us all poorer, because a lot of illness and disability would go untreated, making our workforce less productive. Even if you are very healthy and never need health insurance, it may still be in your own self-interest to support a policy of widespread health insurance, so that sick people get treated and can go back to work.

A world without all-you-can-eat restaurants wouldn’t be so bad. But a world without health insurance would be one in which millions of people suffer needlessly because they can’t afford healthcare.

Why does everyone work full-time?

Jun 30 JDN 2460492

Over 70% of US workers work “full-time”, that is, at least 40 hours a week. The average number of hours worked per week is 33.8, and the average number of overtime hours is only 3.6. So basically, about 2/3 of workers work almost exactly 40 hours per week.

We’re accustomed to this situation, so it may not seem strange to you. But stop and think for a moment: What are the odds that across every industry, exactly 40 hours per week is the most efficient arrangement?

Indeed, there is mounting evidence that in many industries, 40 hours is too much, and something like 35 or even 30 would actually be more efficient. Yet we continue to work 40-hour weeks.

This looks like a corner solution: Rather than choosing an optimal amount, we’re all up against some kind of constraint.


What’s the constraint? Well, the government requires (for most workers) that anything above 40 hours per week be paid as overtime, that is, at a higher wage rate. So it looks like we would all be working more than 40 hours per week, but we hit this de facto upper limit created by the regulation.
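As a concrete illustration of that kink, here is a minimal sketch of US-style time-and-a-half overtime pay (the $20 wage is just an example number):

```python
def weekly_pay(hours, base_wage, threshold=40, overtime_multiplier=1.5):
    """Straight time up to the threshold, overtime pay above it."""
    regular_hours = min(hours, threshold)
    overtime_hours = max(hours - threshold, 0)
    return regular_hours * base_wage + overtime_hours * base_wage * overtime_multiplier

for h in (35, 40, 45, 50):
    print(f"{h} hours at $20/hr -> ${weekly_pay(h, 20):,.2f}")

# 35 hours -> $700.00
# 40 hours -> $800.00
# 45 hours -> $950.00   (the 41st through 45th hours cost $30 each, not $20)
# 50 hours -> $1,100.00
```

Every hour past 40 costs the employer 50% more, so demand for hours piles up right at the threshold: exactly the corner solution described above.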

Does this mean we would be better off without the regulations? Clearly not. As I just pointed out, the evidence is mounting that 40 hours is too much, not too little. But why, then, would we all be trying to work so many hours?

I believe this is yet another example of hyper-competition, where competition drives us to an inefficient outcome.

Employers value employees who work a lot of hours. Indeed, I contend that they do so far more than makes any rational sense; they seem to care more about how many hours you work than about the actual quality or quantity of your output. Maybe this is because hours worked is easier to measure, or because it seems like a fairer estimate of your effort; but for whatever reason, employers really seem to reward employees who work a lot of hours, regardless of almost everything else.

In the absence of a limit on hours worked, then, employers are going to heap rewards on whoever works the most hours, and so people will be pressured to work more and more hours. Then we would all work ourselves to death, and it’s not even clear that this would be good for GDP.

Indeed, this seems to be what happened, before the 40-hour work week became the standard. In the 1800s, the average American worked over 60 hours per week. It wasn’t until the 1940s that 40-hour weeks became the norm.

But speaking of norms, that also seems to be a big factor here. The truth is, overtime isn’t really that expensive, and employers could be smarter about rewarding good work rather than more hours. But once a norm establishes itself in a society, it can be very hard to change. And right now, the norm is that 40 hours is a “normal” “standard” “full” work week—any more is above and beyond, and any less is inferior.

This is a problem, because a lot of people can’t work 40-hour weeks. Our standard for what makes someone “disabled” isn’t that you can’t work at all; it’s that you can’t work as much as society expects. I wonder how many people are currently living on disability who could have been working part-time, but there just weren’t enough part-time jobs available. The employment rate among people with a disability is only 41%, compared to 77% of the general population.

And it’s not that we need to work this much. Our productivity is now staggeringly high: We produce more than five times as much wealth per hour of work as we did as recently as the 1940s. So in theory, we should be able to live just as well while working one-fifth as much… but that’s clearly not what happened.

Keynes accurately predicted our high level of productivity; but he wrongly predicted that we would work less, when instead we just kept right on working almost as hard as before.

Indeed, it doesn’t even seem like we live five times as well while working just as much. Many things are better now—healthcare, entertainment, and of course electronics—but somehow, we really don’t feel like we are living better lives than our ancestors.

The Economic Policy Institute offers an explanation for this phenomenon: Our pay hasn’t kept up with our productivity.


Up until about 1980, productivity and pay rose in lockstep. But then they started to diverge, and they never again converged. Productivity continued to soar, while real wages only barely increased. The result is that since then, productivity has grown by 64%, and hourly pay has only grown 15%.

This is definitely part of the problem, but I think there’s more to it as well. Housing and healthcare have become so utterly unaffordable in this country that it really doesn’t matter that our cars are nice and our phones are dirt cheap. We are theoretically wealthier now, but most of that extra wealth goes into simply staying healthy and having a home. Our consumption has been necessitized.

If we can solve these problems, maybe people won’t feel a need to work so many hours. Or, maybe competition will continue to pressure them to work those hours… but at least we’ll actually feel richer when we do it.

No, the system is not working as designed

You say you’ve got a real solution…

Well, you know,

We’d all love to see the plan.

“Revolution”, the Beatles


Jun 16 JDN 2460478


There are several different versions of the meme, but they all follow the same basic format: Rejecting the statement “the system is broken and must be fixed”, they endorse the statement “the system is working exactly as intended and must be destroyed”.


This view is not just utterly wrong; it’s also incredibly dangerous.

First of all, it should be apparent to anyone who has ever worked in any large, complex organization—a corporation, a university, even a large nonprofit—that no human system works exactly as intended. Some obviously function better than others, and most function reasonably well most of the time (probably because those that don’t fail and disappear, so there is a sort of natural selection process at work); but even with apparently simple goals and extensive resources, no complex organization will ever be able to coordinate its actions perfectly toward those goals.

But when we’re talking about “the system”, well, first of all:

What exactly is “the system”?

Is it government? Society as a whole? The whole culture, or some subculture? Is it local, national, or international? Are we talking about democracy, or maybe capitalism? The world isn’t just one system; it’s a complex network of interacting systems. So to be quite honest with you, I don’t even know what people are complaining about when they complain about “the system”. All I know is that there is some large institution that they don’t like.

Let’s suppose we can pin that down—say we’re talking about capitalism, for instance, or the US government. Then, there is still the obvious fact that any real-world implementation of a system is going to have failures. Particularly when millions of people are involved, no system is ever going to coordinate exactly toward achieving its goals as efficiently as possible. At best it’s going to coordinate reasonably well and achieve its goals most of the time.

But okay, let’s try to be as charitable as possible here.

What are people trying to say when they say this?

I think that fundamentally this is meant as an expression of Conflict Theory over Mistake Theory: The problems with the world aren’t due to well-intentioned people making honest mistakes, they are due to people being evil. The response isn’t to try to correct their mistakes; it’s to fight them (kill them?), because they are evil.

Well, it is certainly true that evil people exist. There are mass murderers and tyrants, rapists and serial killers. And though they may be less extreme, it is genuinely true that billionaires are disproportionately likely to be psychopaths and that those who aren’t typically share a lot of psychopathic traits.

But does this really look like the sort of system that was designed to optimize payoffs for a handful of psychopaths? Really? You can’t imagine any way that the world could be more optimized for that goal?

How about, say… feudalism?

Not that long ago, historically—less than a millennium—the world was literally ruled by those same sorts of uber-rich psychopaths, and they wielded absolute power over their subjects. In medieval times, your king could confiscate your wealth whenever he chose, or even have you executed on a whim. That system genuinely looks like it’s optimized for the power of a handful of evil people.

Democracy, on the other hand, actually looks like it’s trying to be better. Maybe sometimes it isn’t better—or at least isn’t enough better. But why would they even bother letting us vote, if they were building a system to optimize their own power over us? Why would we have these free speech protections—that allow you to post those memes without going to prison?

In fact, there are places today where near-absolute power really is concentrated in a handful of psychopaths, where authoritarian dictators still act very much like kings of yore. In North Korea or Russia or China, there really is a system in place that’s very well optimized to maximize the power of a few individuals over everyone else.

But in the United States, we don’t have that. Not yet, anyway. Our democracy is flawed and imperiled, but so far, it stands. It needs our constant vigilance to defend it, but so far, it stands.

This is precisely why these ideas are so dangerous.

If you tell people that the system is already as bad as it’s ever going to get, that the only hope now is to burn it all down and build something new, then those people aren’t going to stand up and defend what we still have. They aren’t going to fight to keep authoritarians out of office, because they don’t believe that their votes or donations or protests actually do anything to control who ends up in office.

In other words, they are acting exactly as the authoritarians want them to.

Short of your actual support, the best gift you can give your enemy is apathy.

If all the good people give up on democracy, then it will fail, and we will see something worse in its place. Your belief that the world can’t get any worse can make the world much, much worse.

I’m not saying our system of government couldn’t be radically improved. It absolutely could, even by relatively simple reforms, such as range voting and a universal basic income. But there are people who want to tear it all down, and if they succeed, what they put in its place is almost certainly going to be worse, not better.

That’s what happened in Communist countries, after all: They started with bad systems, they tore them down in the name of making something better—and then they didn’t make something better. They made something worse.

And I don’t think it’s an accident that Marxists are so often Conflict Theorists; Marx himself certainly was. Marx seemed convinced that all we needed to do was tear down the old system, and a new, better system would spontaneously emerge. But that isn’t how any of this works.

Good governance is actually really hard.

Life isn’t simple. People aren’t easy to coordinate. Conflicts of interest aren’t easy to resolve. Coordination failures are everywhere. If you tear down the best systems we have for solving these problems, with no vision at all of what you would replace them with, you’re not going to get something better.

Different people want different things. We have to resolve those disagreements somehow. There are lots of ways we could go about doing that. But so far, some variation on voting seems to be the best method we have for resolving disagreements fairly.

It’s true; some people out there are really just bad people. Some of what even good people want is ultimately not reasonable, or based on false presumptions. (Like people who want to “cut” foreign aid to 5% of the budget—when it is in fact about 1%.) Maybe there is some alternative system out there that could solve these problems better, ensure that only the reasonable voices with correct facts actually get heard.

If so, well, you know:

We’d all love to see the plan.

It’s not enough to recognize that our current system is flawed and posit that something better could exist. You need to actually have a clear vision of what that better system looks like. For if you go tearing down the current system without any idea of what to replace it with, you’re going to end up with something much worse.

Indeed, if you had a detailed plan of how to improve things, it’s quite possible you could convince enough people to get that plan implemented, without tearing down the whole system first.

We’ve done it before, after all:

We ended slavery, then racial segregation. We gave women the right to vote, then integrated them into the workforce. We decriminalized homosexuality, and then legalized same-sex marriage.


We have a very clear track record of reform working. Things are getting better, on a lot of different fronts. (Maybe not all fronts, I admit.) When the moral case becomes overwhelming, we really can convince people to change their minds and then vote to change our policies.

We do not have such a track record when it comes to revolutions.

Yes, some revolutions have worked out well, such as the one that founded the United States. (But I really cannot emphasize this: they had a plan!) But plenty more have worked out very badly. Even France, which turned out okay in the end, had to go through a Napoleon phase first.

Overall, it seems like our odds are better when we treat the system as broken and try to fix it, than when we treat it as evil and try to tear it down.

The world could be a lot better than it is. But never forget: It could also be a lot worse.

Wrongful beneficence

Jun 9 JDN 2460471

One of the best papers I’ve ever read—one that in fact was formative in making me want to be an economist—is Wrongful Beneficence by Chris Meyers.

This paper opened my eyes to a whole new class of unethical behavior: Acts that unambiguously make everyone better off, but nevertheless are morally wrong. Hence, wrongful beneficence.

A lot of economists don’t even seem to believe in such things. They seem convinced that as long as no one is made worse off by a transaction, that transaction must be ethically defensible.

Chris Meyers convinced me that they are wrong.

The key insight here is that it’s still possible to exploit someone even if you make them better off. This happens when they are in a desperate situation and you take advantage of that to get an unfair payoff.


Here is one of the cases Meyers offers to demonstrate this:

Suppose Carole is driving across the desert on a desolate road when her car breaks down. After two days and two nights without seeing a single car pass by, she runs out of water and feels rather certain that she will perish if not rescued soon. Now suppose that Jason happens to drive down this road and finds Carole. He sees that her situation is rather desperate and that she needs (or strongly desires) to get to the nearest town as soon as possible. So Jason offers her a ride but only on the condition that […] [she gives him] her entire net worth, the title to her house and car, all of her money in the bank, and half of her earnings for the next ten years.

Carole obviously is better off than she would be if Jason hadn’t shown up—she might even have died. She freely consented to this transaction—again, because if she didn’t, she might die. Yet it seems absurd to say that Jason has done nothing wrong by making such an exorbitant demand. If he had asked her to pay for gas, or even to compensate him for his time at a reasonable rate, we’d have no objection. But to ask for her life savings, all her assets, and half her earnings for ten years? Obviously unfair—and obviously unethical. Jason is making Carole (a little) better off while making himself (a lot) better off, so everyone is benefited; but what he’s doing is obviously wrong.

Once you recognize that such behavior can exist, you start to see it all over the place, particularly in markets, where corporations are quite content to gouge their customers with high prices and exploit their workers with low wages—but still, technically, we’re better off than we would be with no products and no jobs at all.

Indeed, the central message of Wrongful Beneficence is actually about sweatshop labor: It’s not that the workers are worse off than they would have been (in general, they aren’t); it’s that they are so desperate that corporations can get away with exploiting them with obviously unfair wages and working conditions.

Maybe it would be easier just to move manufacturing back to First World countries?

Right-wingers are fond of making outlandish claims that making products at First World wages would be utterly infeasible; here’s one claiming that an iPhone would need to cost $30,000 if it were made in the US. In fact, the truth is that it would only need to cost about $40 more—because hardly any of its cost is actually going to labor. Most of its price is pure monopoly profit for Apple; most of the rest is components and raw materials. (Of course, if those also had to come from the US, the price would go up more; but even so, we’re talking something like double its original price, not thirty times. Workers in the US are indeed paid a lot more than workers in China; they are also more productive.)
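To see why the labor share matters so much, here is a purely illustrative decomposition; the round numbers below are made up for the sketch, not Apple’s actual cost structure:

```python
price          = 1000  # hypothetical retail price
components     = 450   # hypothetical parts and raw materials
assembly_labor = 10    # hypothetical final-assembly labor at overseas wages
# Everything else (margin, R&D, marketing) is price - components - assembly_labor.

wage_ratio = 5  # suppose US assembly wages are ~5x overseas wages
extra_cost = assembly_labor * (wage_ratio - 1)
print(f"Extra cost of US assembly: ${extra_cost}")         # $40
print(f"New price: ${price + extra_cost} (not $30,000)")   # $1,040
```

Because assembly labor is a tiny slice of the price, even multiplying its cost several times over barely moves the total; only if components and raw materials also had to be sourced domestically would the price rise substantially.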

It’s true that actually moving manufacturing from other countries back to the US would be a substantial undertaking, requiring retooling factories, retraining engineers, and so on; but it’s not like we’ve never done that sort of thing before. I’m sure it could not be done overnight; but of course it could be done. We do this sort of thing all the time.

Ironically, this sort of right-wing nonsense actually seems to feed the far left as well, supporting their conviction that all this prosperity around us is nothing more than an illusion, that all our wealth only exists because we steal it from others. But this could scarcely be further from the truth; our wealth comes from technology, not theft. If we offered a fairer bargain to poorer countries, we’d be a bit less rich, but they would be much less poor—the overall wealth in the world would in fact probably increase.

A better argument for not moving manufacturing back to the First World is that many Third World economies would collapse if they stopped manufacturing things for other countries, and that would be disastrous for millions of people.

And free trade really does increase efficiency and prosperity for all.

So, yes; let’s keep on manufacturing goods wherever it is cheapest to do so. But when we decide what’s cheapest, let’s evaluate that based on genuinely fair wages and working conditions, not the absolute cheapest that corporations think they can get away with.

Sometimes they may even decide that it’s not really cheaper to manufacture in poorer countries, because they need advanced technology and highly-skilled workers that are easier to come by in First World countries. In that case, bringing production back here is the right thing to do.

Of course, this raises the question:

What would be fair wages and working conditions?

That’s not so easy to answer. Since workers in Third World countries are less educated than workers in First World countries, and have access to less capital and worse technology, we should in fact expect them to be less productive and therefore get paid less. That may be unfair in some cosmic sense, but it’s not anyone’s fault, and it’s not any particular corporation’s responsibility to fix it.

But when there are products for which less than 1% of the sales price of the product goes to the workers who actually made the product, something is wrong. When the profit margin is often wildly larger than the total amount spent on labor, something is wrong.

It may be that we will never have precise thresholds we can set to decide what definitely is or is not exploitative; but that doesn’t mean we can’t ever recognize it when we see it. There are various institutional mechanisms we could use to enforce better wages and working conditions without ever making such a sharp threshold.

One of the simplest, in fact, is Fair Trade.

Fair Trade is by no means a flawless system; in fact there’s a lot of research debating how effective it is at achieving its goals. But it does seem to be accomplishing something. And it’s a system that we already have in place, operating successfully in many countries; it simply needs to be scaled up (and hopefully improved along the way).

One of the clearest pieces of evidence that it’s helping, in fact, is that farmers are willing to participate in it. That shows that it is beneficent.

Of course, that doesn’t mean that it’s genuinely fair! This could just be another kind of wrongful beneficence. Perhaps Fair Trade is really just less exploitative than all the available alternatives.

If so, then we need something even better still, some new system that will reliably pass on the increased cost for customers all the way down to increased wages for workers.

Fair Trade shows us something else, too: A lot of customers clearly are willing to pay a bit more in order to see workers treated better. Even if they weren’t, maybe they should be forced to. But the fact is, they are! Even those who are most adamantly opposed to Fair Trade can’t deny that people really are willing to pay more to help other people. (Yet another example of obvious altruism that neoclassical economists somehow manage to ignore.) They simply deny that it’s actually helping, which is an empirical matter.

But if this isn’t helping enough, fine; let’s find something else that does.

How Effective Altruism hurt me

May 12 JDN 2460443

I don’t want this to be taken the wrong way. I still strongly believe in the core principles of Effective Altruism. Indeed, it’s shockingly hard to deny them, because basically they come out to this:

Doing more good is better than doing less good.

Then again, most people want to do good. Basically everyone agrees that more good is better than less good. So what’s the big deal about Effective Altruism?

Well, in practice, most people put shockingly little effort into trying to ensure that they are doing the most good they can. A lot of people just try to be nice people, without ever concerning themselves with the bigger picture. Many of these people don’t give to charity at all.

Even those who do give to charity typically give more or less at random—or worse, in proportion to how much mail those charities send them begging for donations. (Surely you can see how that is a perverse incentive?) They donate to religious organizations, which sometimes do good things, but fundamentally are founded upon ignorance, patriarchy, and lies.

Effective Altruism is a movement intended to fix this, to get people to see the bigger picture and focus their efforts on where they will do the most good. Vet charities not just for their honesty, but also their efficiency and cost-effectiveness:

Just how many mQALY can you buy with that $1?

That part I still believe in. There is a lot of value in assessing which charities are the most effective, and trying to get more people to donate to those high-impact charities.
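To unpack that question: a mQALY is a milli-QALY, one thousandth of a quality-adjusted life year. Here is a back-of-the-envelope sketch; the $3,000-per-life figure echoes the estimate quoted later in this post, but the 30-QALYs-per-life and the second charity’s numbers are hypothetical placeholders, not real evaluations:

```python
def mqaly_per_dollar(cost_per_intervention, qalys_per_intervention):
    """Milli-QALYs purchased per dollar donated."""
    return qalys_per_intervention / cost_per_intervention * 1000

# Averting a child's death for ~$3,000, worth ~30 QALYs (hypothetical):
print(f"{mqaly_per_dollar(3000, 30):.1f} mQALY per $1")   # 10.0
# A much less cost-effective program (hypothetical):
print(f"{mqaly_per_dollar(50000, 5):.1f} mQALY per $1")   # 0.1
```

A hundredfold gap like this between charities is exactly the kind of difference Effective Altruism set out to measure.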

But there is another side to Effective Altruism, which I now realize has severely damaged my mental health.

That is the sense of obligation to give as much as you possibly can.

Peter Singer is the most extreme example of this. He seems to have mellowed—a little—in more recent years, but in some of his most famous books he uses the following thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Basically everyone agrees with this particular decision: Even if you are wearing a very expensive suit that will be ruined, even if you’ll miss something really important like a job interview or even a wedding—most people agree that if you ever come across a drowning child, you should save them.

(Oddly enough, when contemplating this scenario, nobody ever seems to consider the advice that most lifeguards give, which is to throw a life preserver and then go find someone qualified to save the child—because saving someone who is drowning is a lot harder and a lot riskier than most people realize. (“Reach or throw, don’t go.”) But that’s a bit beside the point.)

But Singer argues that we are basically in this position all the time. For somewhere between $500 and $3000, you—yes, you—could donate to a high-impact charity, and thereby save a child’s life.

Does it matter that many other people are better positioned to donate than you are? Does it matter that the child is thousands of miles away and you’ll never see them? Does it matter that there are actually millions of children, and you could never save them all by yourself? Does it matter that you’ll only save a child in expectation, rather than saving some specific child with certainty?

Singer says that none of this matters. For a long time, I believed him.

Now, I don’t.

For, if you actually walked by a drowning child that you could save, only at the cost of missing a wedding and ruining your tuxedo, you clearly should do that. (If it would risk your life, maybe not—and as I alluded to earlier, that’s more likely than you might imagine.) If you wouldn’t, there’s something wrong with you. You’re a bad person.

But most people don’t donate everything they could to high-impact charities. Even Peter Singer himself doesn’t. So if donating is the same as saving the drowning child, it follows that we are all bad people.

(Note: In general, if an ethical theory results in the conclusion that the whole of humanity is evil, there is probably something wrong with that ethical theory.)

Singer has tried to get out of this by saying we shouldn’t “sacrifice things of comparable importance”, and then somehow cash out what “comparable importance” means in such a way that it doesn’t require you to live on the street and eat scraps from trash cans. (Even though the people you’d be donating to largely do live that way.)

I’m not sure that really works, but okay, let’s say it does. Even so, it’s pretty clear that anything you spend money on purely for enjoyment would have to go. You would never eat out at restaurants, unless you could show that the time saved allowed you to get more work done and therefore donate more. You would never go to movies or buy video games, unless you could show that it was absolutely necessary for your own mental functioning. Your life would be work, work, work, then donate, donate, donate, and then do the absolute bare minimum to recover from working and donating so you can work and donate some more.

You would enslave yourself.

And all the while, you’d believe that you were never doing enough, you were never good enough, you are always a terrible person because you try to cling to any personal joy in your own life rather than giving, giving, giving all you have.

I now realize that Effective Altruism, as a movement, had been basically telling me to do that. And I’d been listening.

I now realize that Effective Altruism has given me this voice in my head, which I hear whenever I want to apply for a job or submit work for publication:

If you try, you will probably fail. And if you fail, a child will die.

The “if you try, you will probably fail” is just an objective fact. It’s inescapable. Any given job application or writing submission will probably fail.

Yes, maybe there’s some sort of bundling we could do to reframe that, as I discussed in an earlier post. But basically, this is correct, and I need to accept it.

Now, what about the second part? “If you fail, a child will die.” To most of you, that probably sounds crazy. And it is crazy. It’s way more pressure than any ordinary person should have in their daily life. This kind of pressure should be reserved for neurosurgeons and bomb squads.

But this is essentially what Effective Altruism taught me to believe. It taught me that every few thousand dollars I don’t donate is a child I am allowing to die. And since I can’t donate what I don’t have, it follows that every few thousand dollars I fail to get is another dead child.

And since Effective Altruism is so laser-focused on results above all else, it taught me that it really doesn’t matter whether I apply for the job and don’t get it, or never apply at all; the outcome is the same, and that outcome is that children suffer and die because I had no money to save them.

I think part of the problem here is that Effective Altruism is utilitarian through and through, and utilitarianism has very little place for good enough. There is better and there is worse; but there is no threshold at which you can say that your moral obligations are discharged and you are free to live your life as you wish. There is always more good that you could do, and therefore always more that you should do.

Do we really want to live in a world where to be a good person is to owe your whole life to others?

I do not believe in absolute selfishness. I believe that we owe something to other people. But I no longer believe that we owe everything. Sacrificing my own well-being at the altar of altruism has been incredibly destructive to my mental health, and I don’t think I’m the only one.

By all means, give to high-impact charities. But give a moderate amount—at most, tithe—and then go live your life. You don’t owe the world more than that.

Everyone includes your mother and Los Angeles

Apr 28 JDN 2460430

What are the chances that artificial intelligence will destroy human civilization?

A bunch of experts were surveyed on that question and similar questions, and half of respondents gave a probability of 5% or more; some gave probabilities as high as 99%.

This is incredibly bizarre.

Most AI experts are people who work in AI. They are actively participating in developing this technology. And yet half of them think that the technology they are working on right now has at least a 5% chance of destroying human civilization!?

It feels to me like they honestly don’t understand what they’re saying. They can’t really grasp at an intuitive level just what a 5% or 10% chance of global annihilation means—let alone a 99% chance.

If something has a 5% chance of killing everyone, we should consider that at least as bad as something that is guaranteed to kill 5% of people.

Probably worse, in fact, because you can recover from losing 5% of the population (we have, several times throughout history). But you cannot recover from losing everyone. So really, it’s like losing 5% of all future people who will ever live—which could be a very large number indeed.

But let’s be a little conservative here, and just count people who already, currently exist, and use 5% of that number.

5% of 8 billion people is 400 million people.

So anyone who is working on AI and also says that AI has a 5% chance of causing human extinction is basically saying: “In expectation, I’m supporting 20 Holocausts.” (The Holocaust killed roughly 17 million people; 400 million is more than 20 times that.)
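To make that arithmetic concrete, here it is as a minimal Python sketch. It is purely illustrative: the population figure is the rough 8 billion used above, and I’ve included the lower probability estimates I’ll get to shortly.

    # A rough sketch of the expected-value arithmetic above (illustrative only).
    # Treat "a probability p of killing everyone" as at least as bad as
    # "guaranteed to kill a fraction p of people", counting only people alive today.

    WORLD_POPULATION = 8_000_000_000  # roughly 8 billion

    for p in (0.05, 0.01, 0.001):  # 5%, 1%, and 0.1%
        print(f"{p:.1%} chance of extinction "
              f"~ {p * WORLD_POPULATION:,.0f} expected deaths")

    # 5.0% chance of extinction ~ 400,000,000 expected deaths
    # 1.0% chance of extinction ~ 80,000,000 expected deaths
    # 0.1% chance of extinction ~ 8,000,000 expected deaths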

If you really think the odds are that high, why aren’t you demanding that any work on AI be tried as a crime against humanity? Why aren’t you out there throwing Molotov cocktails at data centers?

(To be fair, Eliezer Yudkowsky is actually calling for a global ban on AI that would be enforced by military action. That’s the kind of thing you should be doing if indeed you believe the odds are that high. But most AI doomsayers don’t call for such drastic measures, and many of them even continue working in AI as if nothing is wrong.)

I think this must be scope neglect, or something even worse.

If you thought a drug had a 99% chance of killing your mother, you would never let her take the drug, and you would probably sue the company for making it.

If you thought a technology had a 99% chance of destroying Los Angeles, you would never even consider working on that technology, and you would want that technology immediately and permanently banned.

So I would like to remind anyone who says they believe the danger is this great and yet continues working in the industry:

Everyone includes your mother and Los Angeles.

If AI destroys human civilization, that means AI destroys Los Angeles. However shocked and horrified you would be if a nuclear weapon were detonated in the middle of Hollywood, you should be at least that shocked and horrified by anyone working on advancing AI, if indeed you truly believe that there is at least a 5% chance of AI destroying human civilization.

But people just don’t seem to think this way. Their minds seem to take on a totally different attitude toward “everyone” than they would take toward any particular person or even any particular city. The notion of total human annihilation is just so remote, so abstract, they can’t even be afraid of it the way they are afraid of losing their loved ones.

This despite the fact that everyone includes all your loved ones.

If a drug had a 5% chance of killing your mother, you might let her take it—but only if that drug was the best way to treat some very serious disease. Chemotherapy can be about that risky—but you don’t go on chemo unless you have cancer.

If a technology had a 5% chance of destroying Los Angeles, I’m honestly having trouble thinking of scenarios in which we would be willing to take that risk. But the closest I can come to it is the Manhattan Project. If you’re currently fighting a global war against fascist imperialists, and they are also working on making an atomic bomb, then being the first to make an atomic bomb may in fact be the best option, even if you know that it carries a serious risk of utter catastrophe.

In any case, I think one thing is clear: You don’t take that kind of serious risk unless there is some very large benefit. You don’t take chemotherapy on a whim. You don’t invent atomic bombs just out of curiosity.

Where’s the huge benefit of AI that would justify taking such a huge risk?

Some forms of automation are clearly beneficial, but so far AI per se seems to have largely made our society worse. ChatGPT lies to us. Robocalls inundate us. Deepfakes endanger journalism. What’s the upside here? It makes a ton of money for tech companies, I guess?

Now, fortunately, I think 5% is too high an estimate.

(Scientific American agrees.)

My own estimate is that, over the next two centuries, there is about a 1% chance that AI destroys human civilization, and only a 0.1% chance that it results in human extinction.

This is still really high.

People seem to have trouble with that too.

“Oh, there’s a 99.9% chance we won’t all die; everything is fine, then?” No. There are plenty of other scenarios that would also be very bad, and a total extinction scenario is so terrible that even a 0.1% chance is not something we can simply ignore.

0.1% of people is still 8 million people.

I find myself in a very odd position: On the one hand, I think the probabilities that doomsayers are giving are far too high. On the other hand, I think the actions that are being taken—even by those same doomsayers—are far too small.

Most of them don’t seem to consider a 5% chance to be worthy of drastic action, while I consider a 0.1% chance to be well worthy of it. I would support a complete ban on all AI research immediately, just from that 0.1%.

The only research we should be doing that is in any way related to AI should involve how to make AI safer—absolutely no one should be trying to make it more powerful or apply it to make money. (Yet in reality, almost the opposite is the case.)

Because 8 million people is still a lot of people.

Is it fair to treat a 0.1% chance of killing everyone as equivalent to killing 0.1% of people?

Well, first of all, we have to consider the uncertainty. The difference between a 0.05% chance and a 0.15% chance is millions of people, but there’s probably no way we can actually measure it that precisely.

But it seems to me that something expected to kill between 4 million and 12 million people would still generally be considered very bad.

More importantly, there’s also a chance that AI will save people, or have similarly large benefits. We need to factor that in as well. Something that will kill 4-12 million people but also save 15-30 million people is probably still worth doing (but we should also be trying to find ways to minimize the harm and maximize the benefit).
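Here is that cost-benefit logic as a minimal sketch, using the purely hypothetical harm and benefit ranges above; taking the midpoint of each range is a crude simplification, not a real risk model.

    # Weighing uncertain harms against uncertain benefits (hypothetical numbers).
    # Midpoints of the ranges are a crude simplification, not a serious model.

    harm_low, harm_high = 4_000_000, 12_000_000          # people killed
    benefit_low, benefit_high = 15_000_000, 30_000_000   # people saved

    expected_harm = (harm_low + harm_high) / 2           # 8 million
    expected_benefit = (benefit_low + benefit_high) / 2  # 22.5 million

    print(f"Expected net lives saved: {expected_benefit - expected_harm:,.0f}")
    # Expected net lives saved: 14,500,000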

The biggest problem is that we are deeply uncertain about both the upsides and the downsides. There are a vast number of possible outcomes from inventing AI. Many of those outcomes are relatively mundane; some are moderately good, others are moderately bad. But the moral question seems to be dominated by the big outcomes: With some small but non-negligible probability, AI could lead to either a utopian future or an utter disaster.

The way we are leaping directly into applying AI without even being anywhere close to understanding it seems to me especially likely to lean toward disaster. No other technology has ever become so immediately widespread while also being so poorly understood.

So far, I’ve yet to see any convincing arguments that the benefits of AI are anywhere near large enough to justify this kind of existential risk. In the near term, AI really only promises economic disruption that will largely be harmful. Maybe one day AI could lead us into a glorious utopia of automated luxury communism, but we really have no way of knowing that will happen—and it seems pretty clear that Google is not going to do that.

Artificial intelligence technology is moving too fast. Even if it doesn’t become powerful enough to threaten our survival for another 50 years (which I suspect it won’t), if we continue on our current path of “make money now, ask questions never”, it’s still not clear that we would actually understand it well enough to protect ourselves by then—and in the meantime it is already causing us significant harm for little apparent benefit.

Why are we even doing this? Why does halting AI research feel like stopping a freight train?

I dare say it’s because we have handed over so much power to corporations.

The paperclippers are already here.

Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept it as a part of our lives. But this is not something people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or read an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.

The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it protects so well that, if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and charities only send them out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.