Because ought implies can, can may imply ought

Mar 21 JDN 2459295

Is Internet access a fundamental human right?

At first glance, such a notion might seem preposterous: Internet access has existed for less than 50 years; how could it be a fundamental human right like life and liberty, or food and water?

Let’s try another question then: Is healthcare a fundamental human right?

Surely if there is a vaccine for a terrible disease, and we could easily give it to you but refuse to do so, and you thereby contract the disease and suffer horribly, we have done something morally wrong. We have either violated your rights or violated our own obligations—perhaps both.

Yet that vaccine had to be invented, just as the Internet did; go back far enough into history and there were no vaccines, no antibiotics, not even anesthetics or antiseptics.

One strong, commonly shared intuition is that denying people such basic services is a violation of their fundamental rights. Another strong, commonly shared intuition is that fundamental rights should be universal, not contingent upon technological or economic development. Is there a way to reconcile these two conflicting intuitions? Or is one simply wrong?

One of the deepest principles in deontic logic is “ought implies can”: One cannot be morally obligated to do what one is incapable of doing.

Yet technology, by its nature, makes us capable of doing more. By technological advancement, our space of “can” has greatly expanded over time. And this means that our space of “ought” has similarly expanded.

For if the only thing holding us back from an obligation to do something (like save someone from a disease, or connect them instantaneously with all of human knowledge) was that we were incapable, and ought implies can, then now that we can, we ought.

Advancements in technology do not merely give us the opportunity to help more people: They also give us the obligation to do so. As our capabilities expand, our duties also expand—perhaps not at the same rate, but they do expand all the same.

It may be that on some deeper level we could articulate the fundamental rights so that they would not change over time: Not a right to Internet access, but a right to equal access to knowledge; not a right to vaccination, but a right to a fair minimum standard of medicine. But the fact remains: How this right becomes expressed in action and policy will and must change over time. What was considered an adequate standard of healthcare in the Middle Ages would rightfully be considered barbaric and cruel today. And I am hopeful that what we now consider an adequate standard of healthcare will one day seem nearly as barbaric. (“Dialysis? What is this, the Dark Ages?”)

We live in a very special time in human history.

Our technological and economic growth for the past few generations has been breathtakingly fast, and we are the first generation in history to seriously be in a position to end world hunger. We have in fact been rapidly reducing global poverty, but we could do far more. And because we can, we should.

After decades of dashed hope, we are now truly on the verge of space colonization: Robots on Mars are now almost routine, fully-reusable spacecraft have now flown successful missions, and a low-Earth-orbit hotel is scheduled to be constructed by the end of the decade. Yet if current trends continue, the benefits of space colonization are likely to be highly concentrated among a handful of centibillionaires—like Elon Musk, who gained a staggering $160 billion in wealth over the past year. We can do much better to share the rewards of space with the rest of the population—and therefore we must.

Artificial intelligence is also finally coming into its own, with GPT-3 now passing the weakest form of the Turing Test (though not the strongest form—you can still trip it up and see that it’s not really human if you are clever and careful). Many jobs have already been replaced by automation, but as AI improves, many more will be—not as soon as starry-eyed techno-optimists imagined, but sooner than most people realize. Thus far the benefits of automation have likewise been highly concentrated among the rich—we can fix that, and therefore we should.

Is there a fundamental human right to share in the benefits of space colonization and artificial intelligence? Two centuries ago the question wouldn’t have even made sense. Today, it may seem preposterous. Two centuries from now, it may seem preposterous to deny.

I’m sure almost everyone would agree that we are obliged to give our children food and water. Yet if we were in a desert, starving and dying of thirst, we would be unable to do so—and we cannot be obliged to do what we cannot do. But as soon as we find an oasis and we can give them water, we must.

Humanity has been starving in the desert for two hundred millennia. Now, at last, we have reached the oasis. It is our duty to share its waters fairly.

What if everyone owned their own home?

Mar 14 JDN 2459288

In last week’s post I suggested that if we are to use the term “gentrification”, it should specifically apply to the practice of buying homes for the purpose of renting them out.

But don’t people need to be able to rent homes? Surely we couldn’t have a system where everyone always owned their own home?

Or could we?

The usual argument for why renting is necessary is that people don’t want to commit to living in one spot for 15 or 30 years, the length of a mortgage. And this is quite reasonable; very few careers today offer the kind of stability that lets you commit in advance to 15 or more years of working in the same place. (Tenured professors are one of the few exceptions, and I dare say this has given academic economists some severe blind spots regarding the costs and risks involved in changing jobs.)

But how much does renting really help with this? One does not rent a home for a few days or even a few weeks at a time. If you are staying somewhere for an interval that short, you generally room with a friend or pay for a hotel. (Or get an AirBNB, which is sort of intermediate between the two.)

One only rents housing for months at a time—in fact, most leases are 12-month leases. But since the average time to sell a house is 60-90 days, in what sense is renting actually less of a commitment than buying? It feels like less of a commitment to most people—but I’m not sure it really is less of a commitment.

There is a certainty that comes with renting—you know that once your lease is up you’re free to leave, whereas selling your house will on average take two or three months, but could very well be faster or slower than that.

Another potential advantage of renting is that you have a landlord who is responsible for maintaining the property. But this advantage is greatly overstated: First of all, if they don’t do it (and many surely don’t), you actually have very little recourse in practice. Moreover, if you own your own home, you don’t actually have to do all the work yourself; you could pay carpenters and plumbers and electricians to do it for you—which is all that most landlords were going to do anyway.

All of the “additional costs” of owning over renting, such as maintenance and property taxes, are going to be factored into your rent in the first place. This is a good argument for recognizing that a $1000 mortgage payment is not equivalent to a $1000 rent payment—the rent payment is all-inclusive in a way the mortgage is not. But it isn’t a good argument for renting over buying in general.

Being foreclosed on a mortgage is a terrible experience—but surely no worse than being evicted from a rental. If anything, foreclosure is probably not as bad, because you can essentially only be foreclosed for nonpayment, since the bank only owns the loan; landlords can and do evict people for all sorts of reasons, because they own the home. In particular, you can’t be foreclosed for annoying your neighbors or damaging the property. If you own your home, you can cut a hole in a wall any time you like. (Not saying you should necessarily—just that you can, and nobody can take your home away for doing so.)

I think the primary reason that people rent instead of buying is the cost of a down payment. For some reason, we have decided as a society that you should be expected to pay 10%-20% of the cost of a home up front, or else you never deserve to earn any equity in your home whatsoever. This is one of many ways that being rich makes it easier to get richer—but it is probably the most important one holding back most of the middle class of the First World.

And make no mistake, that’s what this is: It’s a social norm. There is no deep economic reason why a down payment needs to be anything in particular—or even why down payments in general are necessary.

There is some evidence that higher down payments are associated with less risk of default, but it’s not as strong as many people seem to think. The big HUD study on the subject found that one percentage point of down payment reduces default risk by about as much as 5 points of credit rating: So you should prefer to offer a mortgage to someone with an 800 rating and no down payment than someone with a 650 rating and a 20% down payment.

Also, it’s not as if mortgage lenders are unprotected from default (unlike, say, credit card lenders). Above all, they can foreclose on the house. So why is it so important to reduce the risk of default in the first place? Why do you need extra collateral in the form of a down payment, when you’ve already got an entire house of collateral?

It may be that this is actually a good opportunity for financial innovation, a phrase that should in general strike terror in one’s heart. Most of the time “financial innovation” means “clever ways of disguising fraud”. Previous attempts at “innovating” mortgages have resulted in such monstrosities as “interest-only mortgages” (a literal oxymoron, since by definition a mortgage must have a termination date—a date at which the debt “dies”), “balloon payments”, and “adjustable rate mortgages”—all of which increase risk of default while as far as I can tell accomplishing absolutely nothing. “Subprime” lending created many excuses for irresponsible or outright predatory lending—and then, above all, securitization of mortgages allowed banks to offload the risk they had taken on to third parties who typically had no idea what they were getting.

Volcker was too generous when he said that the last great financial innovation was the ATM; no, that was an innovation in electronics (and we’ve had plenty of those). The last great financial innovation I can think of is the joint-stock corporation in the 1550s. But I think a new type of mortgage contract that minimizes default risk without requiring large up-front payments might actually qualify as a useful form of financial innovation.

It would also be useful to have mortgages that make it easier to move, perhaps by putting payments on hold while the home is up for sale. That way people wouldn’t have to make two mortgage payments at once as they move from one place to another, and the bank will see that money eventually—paid for by the new buyer and their mortgage.

Indeed, ideally I’d like to eliminate foreclosure as well, so that no one has to be kicked out of their homes. How might we do that?

Well, as a pandemic response measure, we should have simply instituted a freeze on all evictions and foreclosures for the duration of the pandemic. Some states did, in fact—but many didn’t, and the federal moratoria on evictions were limited. This is the kind of emergency power that government should have, to protect people from a disaster. So far it appears that the number of evictions was effectively reduced from tens of millions to tens of thousands by these measures—but evicting anyone during a pandemic is a human rights violation.

But as a long-term policy, simply banning evictions wouldn’t work. No one would want to lend out mortgages, knowing that they had no recourse if the debtor stopped paying. Even buyers with good credit might get excluded from the market, since once they actually received the house they’d have very little incentive to actually make their payments on time.

But if there are no down payments and no foreclosures, that means mortgage lenders have no collateral. How are they supposed to avoid defaults?

One option would be wage garnishment. If you have the money and are simply refusing to pay it, the courts could require your employer to send the money directly to your creditors. If you have other assets, those could be garnished as well.

And what if you don’t have the money, perhaps because you’re unemployed? Well, then, this isn’t really a problem of incentives at all. It isn’t that you’re choosing not to pay, it’s that you can’t pay. Taking away such people’s homes would protect banks financially, but at a grave human cost.

One option would be to simply say that the banks should have to bear the risk: That’s part of what their huge profits are supposed to be compensating them for, the willingness to take on risks others won’t. The main downside here is the fact that it would probably make it more difficult to get a mortgage and raise the interest rates that you would need to pay once you do.

Another option would be some sort of government program to make up the difference, by offering grants or guaranteed loans to homeowners who can’t afford to pay their mortgages. Since most such instances are likely to be temporary, the government wouldn’t be on the hook forever—just long enough for people to get back on their feet. Here the downside would be the same as any government spending: higher taxes or larger budget deficits. But honestly it probably wouldn’t take all that much; while the total value of all mortgages is very large, only a small portion are in default at any given time. Typically only about 2-4% of all mortgages in the US are in default. Even 4% of the $10 trillion total value of all US mortgages is about $400 billion, which sounds like a lot—but the government wouldn’t owe that full amount, just whatever portion is actually late. I couldn’t easily find figures on that, but I’d be surprised if it’s more than 10% of the total value of these mortgages that would need to be paid by the government. $40 billion is about 1% of the annual federal budget.
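
(For the curious, here is that back-of-the-envelope arithmetic as a quick Python sketch; the inputs are the same rough figures as above, with a roughly $4 trillion federal budget assumed for the final comparison, not official statistics.)

```python
# Rough sketch of the cost estimate above; all inputs are loose assumptions.
total_mortgage_value = 10e12   # ~$10 trillion of US mortgage debt
default_rate = 0.04            # high end of the 2-4% default range
share_actually_late = 0.10     # guess: ~10% of defaulted value is actually owed late
federal_budget = 4e12          # assumed: roughly the annual federal budget

exposure = total_mortgage_value * default_rate   # ~$400 billion
annual_cost = exposure * share_actually_late     # ~$40 billion

print(f"Exposure: ${exposure / 1e9:.0f} billion")
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")
print(f"Share of federal budget: {annual_cost / federal_budget:.1%}")  # ~1.0%
```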

Reforms to our healthcare system would also help tremendously, as medical expenses are a leading cause of foreclosure in the United States (and literally nowhere else—every other country with the medical technology to make medicine this expensive also has a healthcare system that shares the burden). Here there is virtually no downside: Our healthcare system is ludicrously expensive without producing outcomes any better than the much cheaper single-payer systems in Canada, the UK, and France.

All of this sounds difficult and complicated, I suppose. Some may think that it’s not worth it. But I believe that there is a very strong moral argument for universal homeownership and ending eviction: Your home is your own, and no one else’s. No one has a right to take your home away from you.

This is also fundamentally capitalist: It is the private ownership of capital by its users, the acquisition of wealth through ownership of assets. The system of landlords and renters honestly doesn’t seem so much capitalist as it does feudal: We even call them “lords”, for goodness’ sake!

As an added bonus, if everyone owned their own homes, then perhaps we wouldn’t have to worry about “gentrification”, since rising property values would always benefit residents.

What I think “gentrification” ought to mean

Mar 7 JDN 2459281

A few years back I asked the question: “What is gentrification?”

The term evokes the notion of a gentry: a landed upper class who hoards wealth and keeps the rest of the population in penury and de facto servitude. Yet as usually used, the term really just seems to mean “rich people buying houses in poor areas”. Where did we get the idea that rich people buying houses in poor areas constitutes the formation of a landed gentry?

In that previous post I argued that the concept of “gentrification” as usually applied is not a useful one, and we should instead be focusing directly on the issues of poverty and housing affordability. I still think that’s right.

But it occurs to me that there is something “gentrification” could be used to mean, that would actually capture some of the original intended meaning. It doesn’t seem to be used this way often, but unlike the usual meaning, this one actually has some genuine connection with the original concept of a gentry.

Here goes: Gentrification is the purchasing of housing for the purpose of renting it out.

Why this definition in particular? Well, it actually does have an effect similar in direction (though hardly in magnitude) to the formation of a landed gentry: It concentrates land ownership and makes people into tenants instead of homeowners. It converts what should have been a one-time transfer of wealth from one owner to another into a permanent passive income stream that typically involves the poor indefinitely paying to the rich.

Because houses aren’t very fungible, the housing market is one of monopolistic competition: Each house is its own unique commodity, only partially substitutable with others, and this gives market power to the owners of houses. When it’s a permanent sale, that market power will be reflected in the price, but it will also effectively transfer to the new owner. When it’s a rental, that market power remains firmly in the hands of the landlord. The more a landlord owns, the more market power they can amass: A large landholding corporation like the Irvine Company can amass an enormous amount of market power, effectively monopolizing an entire city. (Now that feels like a landed gentry! Bend the knee before the great and noble House Irvine.)

Compare this to two other activities that are often called “gentrification”: Rich people buying houses in poor areas for the purpose of living in them, and developers building apartment buildings and renting them out.

When rich people buy houses for the purpose of living in them, they are not concentrating land ownership. They aren’t generating a passive income stream. They are simply doing the same thing that other people do—buying houses to live in them—but they have more money with which to do so. This is utterly unproblematic, and I think people need to stop complaining about it. There is absolutely nothing wrong with buying a house because you want to live in it, and if it’s a really expensive house—like Jeff Bezos’ $165 million mansion—then the problem isn’t rich people buying houses, it’s the massive concentration of wealth that made anyone that rich in the first place. No one should be made to feel guilty for spending their own money on their own house. Every time “gentrification” is used to describe this process, it just makes it seem like “gentrification” is nothing to worry about—or maybe even something to celebrate.

What about developers who build apartments to rent them out? Aren’t they setting up a passive income stream from the poor to the rich? Don’t they have monopolistic market power? Yes, that’s all true. But they’re also doing something else that buying houses in order to rent them doesn’t: They are increasing the supply of housing.

What are the two most important factors determining the price of housing? The same two factors as anything else: Supply and demand. If prices are too high, the best way to fix that is to increase supply. Developers do that.

Conversely, buying up a house in order to rent it is actually reducing the supply of housing—or at least the supply of permanent owner-occupied housing. Whereas developers buy land that has less housing and build more housing on it, gentrifiers (as I’m defining them) buy housing that already exists and rent it out to others.

Indeed, it’s really not clear to me that rent is a thing that needs to exist. Obviously people need housing. And it certainly makes sense to have things like hotels for very short-term stays and dorms for students who are living in an area for a fixed number of years.

But it’s not clear to me that we really needed to have a system where people would own other people’s houses and charge them for the privilege of living in them. I think the best argument for it is a libertarian one: If people want to do that, why not let them?

Yet I think the downsides of renting are clear enough: People get evicted and displaced, and in many cases landlords consistently fail to provide the additional services that they are supposed to provide. (I wasn’t able to quickly find good statistics on how common it is for landlords to evade their responsibilities like this, but anecdotal evidence would suggest that it’s not uncommon.)

The clearest upside is that security deposits are generally cheaper than down payments, so it’s generally easier to rent a home than to buy one. But why does this have to be the case? Indeed, why do banks insist on such large down payments in the first place? It seems to be only social norms that set the rate of down payments; I’m not aware of any actual economic arguments for why a particular percentage of the home’s value needs to be paid in cash up front. It’s commonly thought that large down payments somehow reduce the risk of defaulting on a mortgage; but I’m not aware of much actual evidence of this. Here’s a theoretical model saying that down payments should matter, but it’s purely theoretical. Here’s an empirical paper showing that lower down payments are associated with higher interest rates—but it may be the higher interest rates that account for the higher defaults, not the lower down payments. There is also a selection bias, where buyers with worse credit get worse loan terms (which can be a self-fulfilling prophecy).

The best empirical work I could find on the subject was a HUD study suggesting that yes, lower down payments are associated with higher default risk—but their effect is much smaller than lots of other things. In particular, one percentage point of down payment was equivalent to about 5 points of credit score. So someone with a credit score of 750 and a down payment of 0% is no more likely to default than someone with a credit score of 650 and a down payment of 20%. Or, to use an example they specifically state in the paper: “For example, to have the same probability of default as a prime loan, a B or C [subprime] loan needs to have a CLTV [combined loan-to-value ratio] that is 11.9 percentage points lower than the CLTV of an otherwise identical prime loan.” A combined loan-to-value ratio 12 percentage points lower is essentially the same thing as a down payment that is 12 percentage points larger—and 12% of the median US home price of $300,000 is $36,000, not an amount of money most middle-class families can easily come up with.
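
(To put numbers on that rule of thumb, here is the arithmetic as a tiny Python sketch; the 5-points-per-percentage-point equivalence and the $300,000 median home price are the figures quoted above, and everything else is just multiplication.)

```python
# The rule of thumb above: ~1 percentage point of down payment
# is worth about 5 points of credit score.
points_per_pp = 5
down_payment_pp = 20
median_home_price = 300_000

print(points_per_pp * down_payment_pp)  # 100 -> a 650 score + 20% down ~ a 750 score + 0% down
print(median_home_price * 12 // 100)    # 36000: the cash needed to close a ~12-point CLTV gap
```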

I also found a quasi-experimental study showing that support from nonprofit housing organizations was much more effective at reducing default rates than higher down payments. So even if larger down payments do reduce defaults, there are better ways of doing so.

The biggest determinant of whether you will default on your mortgage is the obvious one: whether you have steady income large enough to afford the mortgage payment. Typically when people default it’s because their adjustable interest rate surged or they lost their job. When housing prices decline and you end up “underwater” (owing more than the house’s current price), strategic default can theoretically increase your wealth; but in fact it’s relatively rare to take advantage of this, because it’s devastating to your credit rating. Only about 20% of all mortgage defaults in the crisis were strategic—the other 80% were people who actually couldn’t afford to pay.

Another potential upside is that it may be easier to move from one place to another if you rent your home, since selling a home can take a considerable amount of time. But I think this benefit is overstated: Most home leases are 12 months long, while selling a house generally takes 60-90 days. So unless you are already near the end of your lease term when you decide to move, you may actually find that you could move faster if you sold your home than if you waited for your lease to end—and if you end your lease early, the penalties are often substantial. Your best-case scenario is a flat early termination fee; your worst-case scenario is being on the hook for all the remaining rent (at which point, why bother?). Some landlords instead require you to cover rent until a new tenant is found—which you may recognize as almost exactly equivalent to selling your own home.

I think the main reason that people rent instead of buying is simply that they can’t come up with a down payment. If it seems too heavy-handed or risky to simply cap down payments, how about we offer government-subsidized loans (or even grants!) to first-time home buyers to cover their down payments? This would be expensive, but no more so than the mortgage interest deduction—and far less regressive.

For now, we can continue to let people rent out homes. When developers do this, I think the benefits generally outweigh the harms: Above all, they are increasing the supply of housing. A case could be made for policies that incentivize the construction of condos rather than rentals, but above all, policy should be focusing on incentivizing construction.

However, when someone buys an existing house and then rents it out, they are doing something harmful. It probably shouldn’t be illegal, and in some cases there may be no good alternatives to simply letting people do it. But it’s a harmful activity nonetheless, and where legal enforcement would be too heavy-handed, social stigma can be useful. And for that reason, I think it might actually be fair to call them gentrifiers.

Why I am not an anarchist

Feb 28 JDN 2459274

I read a post on social media not long ago which was remarkably thoughtful and well-written, considering that it contained ideas that would, if consistently followed, probably destroy human civilization as we know it.

It was an argument in favor of the radical view “ACAB” (for “All Cops Are Bastards”), pointing out that police officers swear an oath to uphold all laws, not only just laws, and therefore are willfully participating in a system of oppression.

This isn’t entirely wrong. Police officers do swear such an oath, and it does seem morally problematic. But if you stop and think for a moment, what was the alternative?

Should we have police officers only swear an oath to uphold the laws they believe are just? Then you have just eliminated the entire purpose of having laws. If police officers get to freely choose which laws they want to uphold and which ones they don’t, we don’t have laws; we just have police officers and their own opinions. In place of the republican system of electing representatives to choose laws, we have a system where the only democratic power lies in choosing the governor and the mayor, and everything below that is filled by appointments in which the public has no say.

Or should we not have police officers at all? Anyone who chants “ACAB” evidently believes so. But without police officers—or at least some kind of law enforcement mechanism, which would almost certainly have to involve something very much like police officers—we once again find that laws no longer have any real power. Government ceases to exist as a meaningful institution. Laws become nothing more than statements of public disapproval. The logical conclusion of “ACAB” is nothing less than anarchism.

Don’t get me wrong; statements of public disapproval can be useful in themselves. Most international law has little if any enforcement mechanism attached to it, yet most countries follow most international laws most of the time. But for one thing, serious violations of international law are frequent—even by countries that are ostensibly “good citizens”; and for another, international politics does have some kind of enforcement mechanism—if your reputation in the international community gets bad enough, you will often face trade sanctions or even find yourself invaded.

Indeed, it is widely recognized by experts in international relations that more international law enforcement would be a very good thing—perhaps one of the very best things that could possibly happen, in fact, given its effect on war, trade, and the catastrophic risks imposed by nuclear weapons and climate change. The problem with international governance is not that it is undesirable, but that it seems infeasible; we can barely seem to get the world’s major powers to all agree on international human rights, much less get them to sign onto a pact that would substantially limit their sovereignty in favor of a global government. The UN is toothless precisely because most of the countries that have the power to control UN policy prefer it that way.

At the national and sub-national scale, however, we already have law enforcement; and while it certainly has significant flaws and is in need of various reforms, it does largely succeed at its core mission of reducing crime.

Indeed, the exceptions prove the rule: The one kind of crime that is utterly rampant in the First World, with impacts dwarfing all others, is white-collar crime—the kind that our police almost never seem to care about.

It’s unclear exactly how much worse crime would be if law enforcement did not exist. Most people, I’m sure, would be unlikely to commit rape or murder even if it were legal to do so. Indeed, it’s not clear how effective law enforcement is at actually deterring rape or murder, since rape is so underreported and most murders are one-off crimes of passion. So, a bit ironically, removing law enforcement for the worst crimes might actually have a relatively small effect.

But there are many other crimes that law enforcement clearly does successfully deter, such as aggravated assault, robbery, larceny, burglary, and car theft. Even controlling for the myriad other factors that affect crime, effective policing has been shown to reduce overall crime by at least 10 percent. Policing has the largest effects on so-called “street crime”, crimes like robbery and auto theft that occur in public places where police can be watching.

Moreover, I would contend that these kinds of estimates should be taken as a lower bound. They are comparing the marginal effect of additional policing—not the overall effect of having police at all. If the Law of Diminishing Marginal Returns applies, the marginal benefit of the first few police officers would be very high, while beyond a certain point adding more cops might not do much.

At the extremes this is almost certainly correct, in fact: A country where 25% of all citizens were police officers probably wouldn’t actually have zero crime, but it would definitely be wasting enormous amounts of resources on policing. Dropping that all the way down to 5% or even 1% could be done essentially without loss. Meanwhile—and this is really the relevant question for anarchism—a country with no police officers at all would probably be one with vastly more crime.

I can’t be certain, of course. No country has ever really tried going without police.

What there have been are police strikes: And yes, it turns out that most police strikes don’t result in substantially increased crime. But there are some important characteristics of police strikes that make this result less convincing than it might seem. First of all, police can’t really strike the way most workers can—it’s almost always illegal for police to strike. So instead what happens is a lot of them call in sick (“blue flu”), or they do only the bare minimum requirements of their duties (“work-to-rule”). Often slack in the police force is made up by deploying state or federal officers. So the “strike” is more of a moderate reduction in policing, rather than a complete collapse of policing as the word “strike” would seem to imply.

Moreover, police strikes are almost always short—the NYPD strike in the 1970s lasted only a week. A lot can still happen even in that time: The Murray-Hill riot as a result of a police strike in Montreal led to hundreds of thefts, millions of dollars in damage, and several deaths—all in a single night. (In Canada!) But even when things turn out okay after a week of striking, as they did in New York, that doesn’t really tell us what would happen if the police were gone for a month, or a year, or a decade. Most crime investigations last months or years anyway, so police going on strike for a week isn’t really that different from, say, economists going on strike for a week: It doesn’t much matter, because most of the work happens on a much longer timescale than that. Speaking as a graduate student, I’ve definitely had whole weeks where I did literally no useful work and nobody noticed.

There’s another problem as well, which is that we don’t actually know how much crime happens. We mainly know about crime from two sources: Reporting, which is directly endogenous to police activity (if the police are known to be useless, nobody reports to them), and surveys, which are very slow (usually they are conducted annually or so). With reporting, we can’t really trust how the results change when policing changes; with surveys, we don’t actually see the outcome for months or years after the policing change. Indeed, it is a notorious fact in criminology that we can’t even really reliably compare crime rates in different times and places because of differences in reporting and survey methods; the one thing we feel really confident comparing is homicide rates (dead is pretty much dead!), which are known to not be very responsive to policing for reasons I already discussed.

I suppose we could try conducting an actual experiment where we declare publicly that there will be no police action whatsoever for some interval of time (wasn’t there a movie about this?), and see what happens. But this seems very dangerous: If indeed the pessimistic predictions of mass crime waves are accurate, the results could be catastrophic.

The more realistic approach would be to experiment by reducing police activity, and see if crime increases. We would probably want to do this slowly and gradually, so that we have time to observe the full effect before going too far. This is something we can—and should—do without ever needing to go all the way to being anarchists who believe in abolishing all policing. Even if you think that police are really important and great at reducing crime, you should be interested in figuring out which police methods are most cost-effective, and experimenting with different policing approaches is the best way to do that.

I understand the temptation of anarchism. Above all, it’s simple. It feels very brave and principled. I even share the temperament behind it: I am skeptical of authority in general and agree that the best world would be one where every person (or at least every adult of sound mind) had the full autonomy to make their own choices. But that world just doesn’t seem to be feasible right now, and perhaps it never will be.

Police reform is absolutely necessary. Reductions in policing should be seriously tried and studied. But anarchy is just too dangerous—and that is why we shouldn’t be getting rid of police any time soon.

In search of reasonable conservatism

Feb 21 JDN 2459267

This is a very tumultuous time for American politics. Donald Trump was impeached not once but twice—giving him the dubious title of having been impeached as many times as the previous 44 US Presidents combined. He was not convicted either time, not because the evidence for his crimes was lacking—it was in fact utterly overwhelming—but because of obvious partisan bias: Republican Senators didn’t want to vote against a Republican President. All 50 of the Democratic Senators, but only 7 of the 50 Republican Senators, voted to convict Trump. The required number of votes to convict was 67.

Some degree of partisan bias is to be expected. Indeed, the votes looked an awful lot like Bill Clinton’s impeachment, in which all Democrats and only a handful of Republicans voted to acquit. But Bill Clinton’s impeachment trial was nowhere near as open-and-shut as Donald Trump’s. He was being tried for perjury and obstruction of justice, over lies he told about acts that were unethical, but not illegal or un-Constitutional. I’m a little disappointed that no Democrats voted against him, but I think acquittal was probably the right verdict. There’s something very odd about being tried for perjury because you lied about something that wasn’t even a crime. Ironically, had it been illegal, he could have invoked the Fifth Amendment instead of lying and they wouldn’t have been able to touch him. So the only way the perjury charge could actually stick was because it wasn’t illegal. But that isn’t what perjury is supposed to be about: It’s supposed to be used for things like false accusations and planted evidence. Refusing to admit that you had an affair that’s honestly no one’s business but your family’s really shouldn’t be a crime, regardless of your station.

So let us not imagine an equivalency here: Bill Clinton was being tried for crimes that were only crimes because he lied about something that wasn’t a crime. Donald Trump was being tried for manipulating other countries to interfere in our elections, obstructing investigations by Congress, and above all attempting to incite a coup. Partisan bias was evident in all three trials, but only Trump’s trials were about sedition against the United States.

That is to say, I expect to see partisan bias; it would be unrealistic not to. But I expect that bias to be limited. I expect there to be lines beyond which partisans will refuse to go. The Republican Party in the United States today has shown us that they have no such lines. (Or if there are, they are drawn far too high. What would he have to do, bomb an American city? He incited an invasion of the Capitol Building, for goodness’ sake! And that was after so terribly mishandling a pandemic that he caused roughly 200,000 excess American deaths!)

Temperamentally, I like to compromise. I want as many people to be happy as possible, even if that means not always getting exactly what I would personally prefer. I wanted to believe that there were reasonable conservatives in our government, professional statespersons with principles who simply had honest disagreements about various matters of policy. I can now confirm that there are at most 7 such persons in the US Senate, and at most 10 such persons in the US House of Representatives. So of the 261 Republicans in Congress, no more than 17 are actually reasonable statespersons who do not let partisan bias override their most basic principles of justice and democracy.

And even these 17 are by no means certain: There were good strategic reasons to vote against Trump, even if the actual justice meant nothing to you. Trump’s net disapproval rating was nearly the highest of any US President ever. Carter and Bush I had periods where they fared worse, but overall fared better. Johnson, Ford, Reagan, Obama, Clinton, Bush II, and even Nixon were consistently more approved than Trump. Kennedy and Eisenhower completely blew him out of the water—at their worst, Kennedy and Eisenhower were nearly 30 percentage points above Trump at his best. With Trump this unpopular, cutting ties with him would make sense for the same reason rats desert a sinking ship. And yet somehow partisan loyalty won out for 94% of Republicans in Congress.

Politics is the mind-killer, and I fear that this sort of extreme depravity on the part of Republicans in Congress will make it all too easy to dismiss conservatism as a philosophy in general. I actually worry about that; not all conservative ideas are wrong! Low corporate taxes actually make a lot of sense. Minimum wage isn’t that harmful, but it’s also not that beneficial. Climate change is a very serious threat, but it’s simply not realistic to jump directly to fully renewable energy—we need something for the transition, probably nuclear energy. Capitalism is overall the best economic system, and isn’t particularly bad for the environment. Industrial capitalism has brought us a golden age. Rent control is a really bad idea. Fighting racism is important, but there are ways in which woke culture has clearly gone too far. Indeed, perhaps the worst thing about woke culture is the way it denies past successes for civil rights and numbs us with hopelessness.

Above all, groupthink is incredibly dangerous. Once we become convinced that any deviation from the views of the group constitutes immorality or even treason, we become incapable of accepting new information and improving our own beliefs. We may start with ideas that are basically true and good, but we are not omniscient, and even the best ideas can be improved upon. Also, the world changes, and ideas that were good a generation ago may no longer be applicable to the current circumstances. The only way—the only way—to solve that problem is to always remain open to new ideas and new evidence.

Therefore my lament is not just for conservatives, who now find themselves represented by craven ideologues; it is also for liberals, who no longer have an opposition party worth listening to. Indeed, it’s a little hard to feel bad for the conservatives, because they voted for these maniacs. Maybe they didn’t know what they were getting? But they’ve had chances to remove most of them, and didn’t do so. At best I’d say I pity them for being so deluded by propaganda that they can’t see the harm their votes have done.

But I’m actually quite worried that the ideologues on the left will now feel vindicated; their caricatured view of Republicans as moustache-twirling cartoon villains turned out to be remarkably accurate, at least for Trump himself. Indeed, it was hard not to think of the ridiculous “destroying the environment for its own sake” of Captain Planet villains when Trump insisted on subsidizing coal power—which by the way didn’t even work.

The key, I think, is to recognize that reasonable conservatives do exist—there just aren’t very many of them in Congress right now. A significant number of Americans want low taxes, deregulation, and free markets but are horrified by Trump and what the Republican Party has become—indeed, at least a few write for the National Review.

The mere fact that an idea comes from Republicans is not a sufficient reason to dismiss that idea. Indeed, I’m going to say something even stronger: The mere fact that an idea comes from a racist or a bigot is not a sufficient reason to dismiss that idea. If the idea itself is racist or bigoted, yes, that’s a reason to think it is wrong. But even bad people sometimes have good ideas.

The reasonable conservatives seem to be in hiding at the moment; I’ve searched for them, and had difficulty finding more than a handful. Yet we must not give up the search. Politics should not appear one-sided.

Love in a time of quarantine

Feb 14 JDN 2459260

This is our first Valentine’s Day of quarantine—and hopefully our last. With Biden now already taking action and the vaccine rollout proceeding more or less on schedule, there is good reason to think that this pandemic will be behind us by the end of this year.

Yet for now we remain isolated from one another, attempting to substitute superficial digital interactions for the authentic comforts of real face-to-face contact. And anyone who is single, or forced to live away from their loved ones, during quarantine is surely having an especially hard time right now.

I have been quite fortunate in this regard: My fiancé and I have lived together for several years, and during this long period of isolation we’ve at least had each other—if basically no one else.

But even I have felt a strong difference, considerably stronger than I expected it would be: Despite many of my interactions already being conducted via the Internet, needing to do so with all interactions feels deeply constraining. Nearly all of my work can be done remotely—but not quite all, and even what can be done remotely doesn’t always work as well remotely. I am moderately introverted, and I still feel substantially deprived; I can only imagine how awful it must be for the strongly extraverted.

As awkward as face-to-face interactions can be, and as much as I hate making phone calls, somehow Zoom video calls are even worse than either. Being unable to visit someone’s house for dinner and games, or go out to dinner and actually sit inside a restaurant, leaves a surprisingly large emotional void. Nothing in particular feels radically different, but the sum of so many small differences adds up to a rather large one. I think I felt it the most when we were forced to cancel our usual travel back to Michigan over the holiday season.

Make no mistake: Social interaction is not simply something humans enjoy, or are good at. Social interaction is a human need. We need social interaction in much the same way that we need food or sleep. The United Nations considers solitary confinement for more than two weeks to be torture. Long periods in solitary confinement are strongly correlated with suicide—so in that sense, isolation can kill you. Think about the incredibly poor quality of social interactions that goes on in most prisons: Endless conflict, abuse, racism, frequent violence—and then consider that the one thing that inmates find most frightening is to be deprived of that social contact. This is not unlike being fed nothing but stale bread and water, and then suddenly having even that taken away from you.

Even less extreme forms of social isolation—like most of us are feeling right now—have as detrimental an effect on health as smoking or alcoholism, and considerably worse than obesity. Long-term social isolation increases overall mortality risk by more than one-fourth. Robust social interaction is critical for long-term health, both physically and mentally.

This does not mean that the quarantines were a bad idea—on the contrary, we should have enforced them more aggressively, so as to contain the pandemic faster and ultimately need less time in quarantine. Timing is critical here: Successfully containing the pandemic early is much easier than trying to bring it back under control once it has already spread. When the pandemic began, lockdown might have been able to stop the spread. At this point, vaccines are really our only hope of containment.

But it does mean that if you feel terrible lately, there is a very good reason for this, and you are not alone. Due to forces much larger than any of us can control, forces that even the world’s most powerful governments are struggling to contain, you are currently being deprived of a basic human need.

And especially if you are on your own this Valentine’s Day, remember that there are people who love you, even if they can’t be there with you right now.

What happened with GameStop?

Feb 7 JDN 2459253

No doubt by now you’ve heard about the recent bubble in GameStop stock that triggered several trading stops, nearly destroyed a hedge fund, and launched a thousand memes. What really strikes me about this whole thing is how ordinary it is: This is basically the sort of thing that happens in our financial markets all the time. So why are so many people suddenly paying so much attention to it?

There are a few important ways this is unusual: Most importantly, the bubble was triggered by a large number of middle-class people investing small amounts, rather than by a handful of billionaires or hedge funds. It’s also more explicitly collusive than usual, with public statements in writing about what stocks are being manipulated rather than hushed whispers between executives at golf courses. Partly as a consequence of these, the response from the government and the financial industry has been quite different as well, trying to halt trading and block transactions in a way that they would never do if the crisis had been caused by large financial institutions.

If you’re interested in the technical details of what happened, what a short squeeze is and how it can make a hedge fund lose enormous amounts of money unexpectedly, I recommend this summary by KQED. But the gist of it is simple enough: Melvin Capital placed huge bets that GameStop stock would fall in price, and a coalition of middle-class traders coordinated on Reddit to screw them over by buying a bunch of GameStop stock and driving up the price. It worked, and now Melvin Capital has lost something on the order of $3-5 billion in just a few days.

The particular kind of bet they placed is called a short, and it’s a completely routine practice on Wall Street despite the fact that I could never quite understand why it is a thing that should be allowed.

The essence of a short is quite simple: When you short, you are selling something you don’t own. You “borrow” it (it isn’t really even borrowing), and then sell it to someone else, promising to buy it back and return it to where you borrowed it from at some point in the future. This amounts to a bet that the price will decline, so that the price at which you buy it is lower than the price at which you sold it.
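
(To make the payoff concrete, here is a minimal Python sketch of a short position’s profit or loss; the prices and share count are purely hypothetical, and it ignores real-world details like borrowing fees and margin requirements. Note that the potential loss is unbounded, since there is no ceiling on how high the buyback price can go, which is exactly what a short squeeze exploits.)

```python
# Minimal sketch of a short sale's payoff (hypothetical numbers; ignores
# borrowing fees, margin interest, and forced buy-ins).

def short_profit(sell_price: float, buyback_price: float, shares: int) -> float:
    """Profit from selling borrowed shares now and buying them back later."""
    return (sell_price - buyback_price) * shares

print(short_profit(sell_price=20, buyback_price=5, shares=1_000))    # +15,000: the price fell as hoped
print(short_profit(sell_price=20, buyback_price=300, shares=1_000))  # -280,000: squeezed
```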

Doesn’t that seem like an odd thing to be allowed to do? Normally you can’t sell something you have merely borrowed. I can’t borrow a car and then sell it; car title in fact exists precisely to prevent this from happening. If I were to borrow your coat and then sell it to a thrift store, I’d have committed larceny. It’s really quite immaterial whether I plan to buy it back afterward; in general we do not allow people to sell things that they do not own.

Now perhaps the problem is that when I borrow your coat or your car, you expect me to return that precise object—not a similar coat or a car of equivalent Blue Book value, but your coat or your car. When I borrow a share of GameStop stock, no one really cares whether it is that specific share which I return—indeed, it would be almost impossible to even know whether it was. So in that way it’s a bit like borrowing money: If I borrow $20 from you, you don’t expect me to pay back that precise $20 bill. Indeed you’d be shocked if I did, since presumably I borrowed it in order to spend it or invest it, so how would I ever get it back?

But you also don’t sell money, generally speaking. Yes, there are currency exchanges and money-market accounts; but these are rather exceptional cases. In general, money is not bought and sold the way coats or cars are.

What about consumable commodities? You probably don’t care too much about any particular banana, sandwich, or gallon of gasoline. Perhaps in some circumstances we might “loan” someone a gallon of gasoline, intending them to repay us at some later time with a different gallon of gasoline. But far more likely, I think, would be simply giving a friend a gallon of gasoline and then not expecting any particular repayment except perhaps a vague offer of providing a similar favor in the future. I have in fact heard someone say the sentence “Can I borrow your sandwich?”, but it felt very odd when I heard it. (Indeed, I responded something like, “No, you can keep it.”)

And in order to actually be shorting gasoline (which is a thing that you, too, can do, perhaps even right now, if you have a margin account on a commodities exchange), it isn’t enough to borrow a gallon with the expectation of repaying a different gallon; you must also sell that gallon you borrowed. And now it seems very odd indeed to say to a friend, “Hey, can I borrow a gallon of gasoline so that I can sell it to someone for a profit?”

The usual arguments for why shorting should be allowed are much like the arguments for exotic financial instruments in general: “Increase liquidity”, “promote efficient markets”. These arguments are so general and so ubiquitous that they essentially amount to the strongest form of laissez-faire: Whatever Wall Street bankers feel like doing is fine and good and part of what makes American capitalism great.

In fact, I was never quite clear why margin accounts are something we decided to allow; margin trading is inherently high-leverage and thus inherently high-risk. Borrowing money in order to arbitrage financial assets doesn’t just seem like a very risky thing to do, it has been one way or another implicated in virtually every financial crisis that has ever occurred. It would be an exaggeration to say that leveraged arbitrage is the one single cause of financial crises, but it would be a shockingly small exaggeration. I think it absolutely is fair to say that if leveraged arbitrage did not exist, financial crises would be far rarer and further between.

Indeed, I am increasingly dubious of the whole idea of allowing arbitrage in general. Some amount of arbitrage may be unavoidable; there may always be people who see that prices are different for the same item in two different markets, and then exploit that difference before anyone can stop them. But this is a bit like saying that theft is probably inevitable: Yes, every human society that has had a system of property ownership (which is most of them—even communal hunter-gatherers have rules about personal property) has had some amount of theft. That doesn’t mean there is nothing we can do to reduce theft, or that we should simply allow theft wherever it occurs.

The moral argument against arbitrage is straightforward enough: You’re not doing anything. No good is produced; no service is provided. You are making money without actually contributing any real value to anyone. You just make money by having money. This is what people in the Middle Ages found suspicious about lending money at interest; but lending money actually is doing something—sometimes people need more money than they have, and lending it to them is providing a useful service for which you deserve some compensation.

A common argument economists make is that arbitrage will make prices more “efficient”, but when you ask them what they mean by “efficient”, the answer they give is that it removes arbitrage opportunities! So the good thing about arbitrage is that it stops you from doing more arbitrage?

And what if it doesn’t stop you? Many of the ways to exploit price gaps (particularly the simplest ones like “where it’s cheap, buy it; where it’s expensive, sell it”) will automatically close those gaps, but it’s not at all clear to me that all the ways to exploit price gaps will necessarily do so. And even if it’s a small minority of market manipulation strategies that exploit gaps without closing them, those are precisely the strategies that will be most profitable in the long run, because they don’t undermine their own success. Then, left to their own devices, markets will evolve to use such strategies more and more, because those are the strategies that work.

That is, in order for arbitrage to be beneficial, it must always be beneficial; there must be no way to exploit price gaps without inevitably closing those price gaps. If that is not the case, then evolutionary pressure will push more and more of the financial system toward using methods of arbitrage that don’t close gaps—or even exacerbate them. And indeed, when you look at how ludicrously volatile and crisis-prone our financial system has become, it sure looks an awful lot like an evolutionary equilibrium where harmful arbitrage strategies have evolved to dominate.

A world where arbitrage actually led to efficient pricing would be a world where the S&P 500 rises a steady 0.02% per day, each and every day. Maybe you’d see a big move when there was actually a major event, like the start of a war or the invention of a vaccine for a pandemic. You’d probably see a jump up or down of a percentage point or two with each quarterly Fed announcement. But daily moves of even five or six percentage points would be a very rare occurrence—because the real expected long-run aggregate value of the 500 largest publicly-traded corporations in America is what the S&P 500 is supposed to represent, and that is not a number that should change very much very often. The fact that I couldn’t really tell you what that number is without multi-trillion-dollar error bars is so much the worse for anyone who thinks that financial markets can somehow get it exactly right every minute of every day.
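
(For a sense of scale, a steady 0.02% daily rise compounds to only about 5% over a typical year of trading; here is a quick sketch, where the 252-day count is the usual convention rather than a figure from the text.)

```python
# What a steady 0.02% daily gain amounts to over a year of trading
# (252 trading days is the usual convention; purely illustrative).
daily_return = 0.0002
trading_days = 252
annual_return = (1 + daily_return) ** trading_days - 1
print(f"{annual_return:.1%}")  # ~5.2%
```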

Moreover, it’s not hard to imagine how we might close price gaps without simply allowing people to exploit them. There could be a bunch of economists at the Federal Reserve whose job it is to locate markets where there are arbitrage opportunities, and then a bundle of government funds that they can allocate to buying and selling assets in order to close those price gaps. Any profits made are received by the treasury; any losses taken are borne by the treasury. The economists would get paid a comfortable salary, and perhaps get bonuses based on doing a good job in closing large or important price gaps; but there is no need to give them even a substantial fraction of the proceeds, much less all of it. This is already how our money supply is managed, and it works quite well, indeed obviously much better than an alternative with “skin in the game”: Can you imagine the dystopian nightmare we’d live in if the Chair of the Federal Reserve actually received even a 1% share of the US money supply? (Actually I think that’s basically what happened in Zimbabwe: The people who decided how much money to print got to keep a chunk of the money that was printed.)

I don’t actually think this GameStop bubble is all that important in itself. A decade from now, it may be no more memorable than Left Shark or the Macarena. But what is really striking about it is how little it differs from business-as-usual on Wall Street. The fact that a few million Redditors can gather together to buy a stock “for the lulz” or to “stick it to the Man” and thereby bring hedge funds to their knees is not such a big deal in itself, but it is symptomatic of much deeper structural flaws in our financial system.

On the accuracy of testing

Jan 31 JDN 2459246

One of the most important tools we have for controlling the spread of a pandemic is testing to see who is infected. But no test is perfectly reliable. Currently we have tests that are about 80% accurate. But what does it mean to say that a test is “80% accurate”? Many people get this wrong.

First of all, it certainly does not mean that if you have a positive result, you have an 80% chance of having the virus. Yet this is probably what most people think when they hear “80% accurate”.

So I thought it was worthwhile to demystify this a little bit, and explain just what we are talking about when we discuss the accuracy of a test, which turns out to have deep implications not only for pandemics, but for knowledge in general.

There are really two key measures of a test’s accuracy, called sensitivity and specificity. The sensitivity is the probability that, if the true answer is positive (you have the virus), the test result will be positive. This is the sense in which our tests are 80% accurate. The specificity is the probability that, if the true answer is negative (you don’t have the virus), the test result is negative. The terms make sense: A test is sensitive if it always picks up what’s there, and specific if it doesn’t pick up what isn’t there.

These two measures need not be the same, and typically are quite different. In fact, there is often a tradeoff between them: Increasing the sensitivity will often decrease the specificity.

This is easiest to see with an extreme example: I can create a COVID test that has “100% accuracy” in the sense of sensitivity. How do I accomplish this miracle? I simply assume that everyone in the world has COVID. Then it is absolutely guaranteed that I will have zero false negatives.

I will of course have many false positives—indeed the vast majority of my “positive results” will be me assuming that COVID is present without any evidence. But I can guarantee a 100% true positive rate, so long as I am prepared to accept a 0% true negative rate.
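
To make the definitions concrete, here is a minimal Python sketch; the confusion-matrix counts are made up purely for illustration:

```python
# A minimal sketch (counts are made up for illustration):
# sensitivity and specificity computed from a 2x2 confusion matrix.

def sensitivity(true_positives, false_negatives):
    """P(test positive | truly positive)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """P(test negative | truly negative)."""
    return true_negatives / (true_negatives + false_positives)

# An ordinary imperfect test: "80% accurate" in the sensitivity sense.
print(sensitivity(true_positives=80, false_negatives=20))    # 0.8
print(specificity(true_negatives=95, false_positives=5))     # 0.95

# The "miracle test" that simply declares everyone positive:
# no false negatives (sensitivity 1.0), but every true negative
# becomes a false positive (specificity 0.0).
print(sensitivity(true_positives=100, false_negatives=0))    # 1.0
print(specificity(true_negatives=0, false_positives=9900))   # 0.0
```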

It’s possible to combine tests in order to shift this tradeoff in whichever direction you need. Suppose test A has a sensitivity of 70% and a specificity of 90%, while test B has the reverse; and suppose, for simplicity, that the two tests’ errors are independent.

One option is to run both tests and call the combined result positive if either test comes back positive. If the true answer is positive, test A will catch it 70% of the time, and in the remaining 30% of cases test B will catch it 90% of the time; so the combined sensitivity is 70% + (30%)(90%) = 97%. The price is paid in specificity: if the true answer is negative, both tests must come back negative, which happens only (90%)(70%) = 63% of the time.

The other option is to call the combined result positive only if both tests come back positive. Then the combined specificity is 90% + (10%)(70%) = 97%, because a true negative is correctly reported whenever either test returns negative; but the combined sensitivity falls to (70%)(90%) = 63%.

So combining tests doesn’t give you both numbers for free; it lets you buy a very high sensitivity or a very high specificity, depending on which kind of error you most need to avoid.
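
Here is a quick sketch of both combination rules, assuming (as above) that the two tests’ errors are independent; in practice, retesting the same person may violate that assumption, since whatever fooled the first test can fool the second:

```python
# Sketch: combining two independent tests with an OR rule vs. an AND rule.
# Test A: sensitivity 0.70, specificity 0.90; test B: the reverse.

sens_A, spec_A = 0.70, 0.90
sens_B, spec_B = 0.90, 0.70

# OR rule: combined result is positive if either test is positive.
sens_or = 1 - (1 - sens_A) * (1 - sens_B)    # 0.97
spec_or = spec_A * spec_B                    # 0.63

# AND rule: combined result is positive only if both tests are positive.
sens_and = sens_A * sens_B                   # 0.63
spec_and = 1 - (1 - spec_A) * (1 - spec_B)   # 0.97

print(f"OR rule:  sensitivity {sens_or:.2f}, specificity {spec_or:.2f}")
print(f"AND rule: sensitivity {sens_and:.2f}, specificity {spec_and:.2f}")
```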

Actually if we are going to specify the accuracy of a test in a single number, I think it would be better to use a much more obscure term, the informedness. Informedness is sensitivity plus specificity, minus one. It ranges between -1 and 1, where 1 is a perfect test, and 0 is a test that tells you absolutely nothing. -1 isn’t the worst possible test; it’s a test that’s simply calibrated backwards! Re-label it, and you’ve got a perfect test. So really maybe we should talk about the absolute value of the informedness.

It’s much harder to play tricks with informedness: My “miracle test” that just assumes everyone has the virus actually has an informedness of zero. This makes sense: The “test” actually provides no information you didn’t already have.

Surprisingly, I was not able to quickly find any references to this really neat mathematical result for informedness, but I find it unlikely that I am the only one who came up with it: The informedness of a test is the non-unit eigenvalue of a Markov matrix representing the test. (If you don’t know what all that means, don’t worry about it; it’s not important for this post. I just found it a rather satisfying mathematical result that I couldn’t find anyone else talking about.)
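
For what it’s worth, the claim is easy to check numerically, assuming the Markov matrix in question is the column-stochastic matrix of conditional probabilities P(test result | true state); the example numbers below are arbitrary:

```python
# Sketch: the non-unit eigenvalue of the test's (column-stochastic) Markov
# matrix equals the informedness, sensitivity + specificity - 1.
import numpy as np

sens, spec = 0.80, 0.95

# Columns are true states (positive, negative); rows are test results.
M = np.array([[sens,     1 - spec],
              [1 - sens, spec    ]])

print(np.linalg.eigvals(M))   # eigenvalues 1.0 and 0.75 (in some order)
print(sens + spec - 1)        # 0.75, the informedness
```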

But there’s another problem as well: Even if we know everything about the accuracy of a test, we still can’t infer the probability of actually having the virus from the test result. For that, we need to know the baseline prevalence. Failing to account for that is the very common base rate fallacy.

Here’s a quick example to help you see what the problem is. Suppose that 1% of the population has the virus. And suppose that the tests have 90% sensitivity and 95% specificity. If I get a positive result, what is the probability I have the virus?

If you guessed something like 90%, you have committed the base rate fallacy. It’s actually much smaller than that. In fact, the true probability you have the virus is only 15%.

In a population of 10000 people, 100 (1%) will have the virus while 9900 (99%) will not. Of the 100 who have the virus, 90 (90%) will test positive and 10 (10%) will test negative. Of the 9900 who do not have the virus, 495 (5%) will test positive and 9405 (95%) will test negative.

This means that out of 585 positive test results, only 90 will actually be true positives!
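
The same calculation can be written as a short Bayes’ theorem sketch; the helper function here is just for illustration:

```python
# Sketch: probability of truly having the virus given a positive test,
# from prevalence, sensitivity, and specificity (Bayes' theorem).

def prob_positive_given_test_positive(prevalence, sens, spec):
    true_pos = prevalence * sens                # tested positive and infected
    false_pos = (1 - prevalence) * (1 - spec)   # tested positive but not infected
    return true_pos / (true_pos + false_pos)

# 1% prevalence, 90% sensitivity, 95% specificity:
print(prob_positive_given_test_positive(0.01, 0.90, 0.95))  # ~0.154, i.e. about 15%
```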

If we wanted to improve the test so that we could say that someone who tests positive is probably actually positive, would it be better to increase sensitivity or specificity? Well, let’s see.

If we increased the sensitivity to 95% and left the specificity at 95%, we’d get 95 true positives and 495 false positives. This raises the probability to only 16%.

But if we increased the specificity to 97% and left the sensitivity at 90%, we’d get 90 true positives and 297 false positives. This raises the probability all the way to 23%.

But suppose instead we care about the probability that you don’t have the virus, given that you test negative. Our original test had 9405 true negatives and 10 false negatives, so it was quite good in this regard; if you test negative, you have only about a 0.1% chance of having the virus.
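
Here is a sketch that reruns the three scenarios above and also reports the probability of truly being virus-free given a negative test:

```python
# Sketch: positive and negative predictive values for the scenarios above.

def predictive_values(prevalence, sens, spec):
    tp = prevalence * sens              # infected, tested positive
    fn = prevalence * (1 - sens)        # infected, tested negative
    fp = (1 - prevalence) * (1 - spec)  # not infected, tested positive
    tn = (1 - prevalence) * spec        # not infected, tested negative
    ppv = tp / (tp + fp)  # P(truly positive | test positive)
    npv = tn / (tn + fn)  # P(truly negative | test negative)
    return ppv, npv

print(predictive_values(0.01, 0.90, 0.95))  # original test:      PPV ~0.15, NPV ~0.999
print(predictive_values(0.01, 0.95, 0.95))  # higher sensitivity: PPV ~0.16
print(predictive_values(0.01, 0.90, 0.97))  # higher specificity: PPV ~0.23
```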

Which approach is better really depends on what we care about. When dealing with a pandemic, false negatives are much worse than false positives, so we care most about sensitivity. (Though my example should show why specificity also matters.) But there are other contexts in which false positives are more harmful—such as convicting a defendant in a court of law—and then we want to choose a test which has a high true negative rate, even if it means accepting a low true positive rate.

In science in general, we seem to care a lot about false positives: the significance threshold we demand of a p-value is simply one minus the specificity of the statistical test, and as we all know, low p-values are highly sought after. But the sensitivity of statistical tests (their statistical power) is often quite unclear. This means that we can be reasonably confident of our positive results (provided the baseline probability wasn’t too low, the statistics weren’t p-hacked, etc.); but we really don’t know how confident to be in our negative results.

Personally I think negative results are undervalued, and part of how we got a replication crisis and p-hacking was by undervaluing those negative results. I think it would be better in general for us to report 95% confidence intervals (or better yet, 95% Bayesian prediction intervals) for all of our effects, rather than worrying about whether they meet some arbitrary threshold probability of not being exactly zero. Nobody really cares whether the effect is exactly zero (and it almost never is!); we care how big the effect is. I think the long-run trend has been toward this kind of analysis, but it’s still far from the norm in the social sciences. We’ve become utterly obsessed with specificity, and have basically forgotten that sensitivity exists.
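
As a rough illustration of what that kind of reporting might look like, here is a sketch with simulated data, a made-up true effect of 0.3, and an ordinary frequentist 95% confidence interval (a Bayesian prediction interval would take a bit more machinery):

```python
# Sketch: report the estimated effect and its 95% confidence interval,
# not just whether p crosses an arbitrary threshold.  Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(0.3, 1.0, 100)  # simulated outcomes, true effect = 0.3
control = rng.normal(0.0, 1.0, 100)

effect = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / 100 + control.var(ddof=1) / 100)
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se   # normal approximation

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"effect {effect:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), p = {p_value:.4f}")
```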

Above all, be careful when you encounter a statement like “the test is 80% accurate”; what does that mean? 80% sensitivity? 80% specificity? 80% informedness? 80% probability that an observed positive is true? These are all different things, and the difference can matter a great deal.

The paperclippers are already here

Jan 24 JDN 2459239

Imagine a powerful artificial intelligence, which is composed of many parts distributed over a vast area so that it has no particular location. It is incapable of feeling any emotion: Neither love nor hate, neither joy nor sorrow, neither hope nor fear. It has no concept of ethics or morals, only its own programmed directives. It has one singular purpose, which it pursues at any cost. Any who aid its purpose are generously rewarded. Any who resist its purpose are mercilessly crushed.

The Less Wrong community has come to refer to such artificial intelligences as “paperclippers”; the metonymous singular directive is to maximize the number of paperclips produced. There’s even an online clicker game, “Universal Paperclips”, where you can play as one. The concern is that we might one day invent such artificial intelligences, and they could get out of control. The paperclippers won’t kill us because they hate us, but simply because we can be used to make more paperclips. This is a far more plausible scenario for the “AI apocalypse” than the more conventional sci-fi version where AIs try to kill us on purpose.

But I would say that the paperclippers are already here. Slow, analog versions perhaps. But they are already getting out of control. We call them corporations.

A corporation is probably not what you visualized when you read the first paragraph of this post, so try reading it again. Which parts are not true of corporations?

Perhaps you think a corporation is not an artificial intelligence? But clearly it’s artificial, and doesn’t it behave in ways that seem intelligent? A corporation has purpose beyond its employees in much the same way that a hive has purpose beyond its bees. A corporation is a human superorganism (and not the only kind either).

Corporations are absolutely, utterly amoral. Their sole directive is to maximize profit. Now, you might think that an individual CEO, or a board of directors, could decide to do something good, or refrain from something evil, for reasons other than profit; and to some extent this is true. But particularly when a corporation is publicly-traded, that CEO and those directors are beholden to shareholders. If shareholders see that the corporation is acting in ways that benefit the community but hurt their own profits, shareholders can rebel by selling their shares or even suing the company. In 1919, Dodge successfully sued Ford for the “crime” of setting wages too high and prices too low.

Humans are altruistic. We are capable of feeling, emotion, and compassion. Corporations are not. Corporations are made of human beings, but they are specifically structured to minimize the autonomy of human choices. They are designed to provide strong incentives to behave in a particular way so as to maximize profit. Even the CEO of a corporation, especially one that is publicly traded, has their hands tied most of the time by the desires of millions of shareholders and customers—so-called “market forces”. Corporations are entirely the result of human actions, but they feel like impersonal forces because they are the result of millions of independent choices, almost impossible to coordinate; so one individual has very little power to change the outcome.

Why would we create such entities? It almost feels as though we were conquered by some alien force that sought to enslave us to its own purposes. But no, we created corporations ourselves. We intentionally set up institutions designed to limit our own autonomy in the name of maximizing profit.

Part of the answer is efficiency: There are genuine gains in economic efficiency due to the corporate structure. Corporations can coordinate complex activity on a vast scale, with thousands or even millions of employees each doing what they are assigned without ever knowing—or needing to know—the whole of which they are a part.

But a publicly-traded corporation is far from the only way to do that. Even for-profit businesses are not the only way to organize production. And empirically, worker co-ops actually seem to be about as productive as corporations, while producing far less inequality and far more satisfied employees.

Thus, in order to explain the primacy of corporations, particularly those that are traded on stock markets, we must turn to ideology: The extreme laissez-faire concept of capitalism and its modern expression in the ideology of “shareholder value”. Somewhere along the way enough people (or at least enough policymakers) became convinced that the best way to run an economy was to hand over as much as possible to entities that exist entirely to maximize their own profits.

This is not to say that corporations should be abolished entirely. I am certainly not advocating a shift to central planning; I believe in private enterprise. But I should note that private enterprise can also include co-ops, partnerships, and closely-held businesses, rather than publicly traded corporations, and perhaps that’s all we need. Yet there do seem to be significant advantages to the corporate structure: Corporations seem to be spectacularly good at scaling up the production of goods and providing them to a large number of customers. So let’s not get rid of corporations just yet.

Instead, let us keep corporations on a short leash. When properly regulated, corporations can be very efficient at producing goods. But corporations can also cause tremendous damage when given the opportunity. Regulations aren’t just “red tape” that gets in the way of production. They are a vital lifeline that protects us against countless abuses that corporations would otherwise commit.

These vast artificial intelligences are useful to us, so let’s not get rid of them. But never for a moment imagine that their goals are the same as ours. Keep them under close watch at all times, and compel them to use their great powers for good—for, left to their own devices, they can just as easily do great evil.

A new chapter in my life, hopefully

Jan 17 JDN 2459232

My birthday is coming up soon, and each year around this time I try to step back and reflect on how the previous year has gone and what I can expect from the next one.

Needless to say, 2020 was not a great year for me. The pandemic and its consequences made this quite a bad year for almost everyone. Months of isolation and fear have made us all stressed and miserable, and even with the vaccines coming out, the end is still all too far away. Honestly I think I was luckier than most: My work could be almost entirely done remotely, and my income is a fixed stipend, so financially I faced no hardship at all. But isolation still takes its toll.

Most of my energy this past year has been spent on the job market. I applied to over 70 different job postings, and from those I received 6 interviews, all but one of which I’ve already finished. If they like how I did in those interviews, I’ll be invited to another phase, which in normal times would be a flyout where candidates visit the campus; but due to COVID it’s all being done remotely now. And then, finally, I may actually get some job offers. Statistically I think I will probably get some kind of offer at this point, but I can’t be sure, and that uncertainty is quite nerve-wracking. I may get a job and move somewhere new, or I may not and have to stay here for another year and try again. Both outcomes are still quite probable, and I really can’t plan on either one.

If I do actually get a job, this will open a new chapter in my life—and perhaps I will finally be able to settle down with a permanent career, buy a house, start a family. One downside of graduate school I hadn’t really anticipated is how it delays adulthood: You don’t really feel like you are a proper adult, because you are still in the role of a student for several additional years. I am all too ready to be done with being a student. I feel as though I’ve spent all my life preparing to do things instead of actually doing them, and I am now so very tired of preparing.

I don’t even know for sure what I want to do—I feel disillusioned with academia, I haven’t been able to snare any opportunities in government or nonprofits, and I need more financial security than I could get if I leapt headlong into full-time writing. But I am quite certain that I want to actually do something, and no longer simply be trained and prepared (and continually evaluated on that training and preparation).

I’m even reluctant to do a postdoc, because that also likely means packing up and moving again in a few years (though I would prefer it to remaining here another year).

I have to keep reminding myself that all of this is temporary: The pandemic will eventually be quelled by vaccines, and quarantine procedures will end, and life for most of us will return to normal. Even if I don’t get a job I like this year, I probably will next year; and then I can finally tie off my education with a bow and move on. Even if the first job isn’t permanent, eventually one will be, and at last I’ll be able to settle into a stable adult life.

Much of this has already dragged on longer than I thought it would. Not the job market, which has gone more or less as expected. (More accurately, my level of optimism has jumped up and down like a roller coaster, and on average what I thought would happen has been something like what actually happened so far.) But the pandemic certainly has; the early attempts at lockdown were ineffective, the virus kept spreading worse and worse, and now there are more COVID cases in the US than ever before. Southern California in particular has been hit especially hard, and hospitals here are now overwhelmed just as we feared they might be.

Even the removal of Trump has been far more arduous than I expected. First there was the slow counting of ballots because so many people had (wisely) voted absentee. Then there were the frivolous challenges to the counts (and yes, I mean frivolous in a legal sense: 61 out of 62 lawsuits were thrown out immediately, and the one that made it through concerned only a minor technical issue).

And then there was an event so extreme I can barely even fathom that it actually happened: An armed mob stormed the Capitol building, forced Congress to evacuate, and made it inside with minimal resistance from the police. The stark difference in how the police reacted to this attempted insurrection and how they have responded to the Black Lives Matter protests underscores the message of Black Lives Matter better than they ever could have by themselves.

In one sense it feels like so much has happened: We have borne witness to historic events in real time. But in another sense it feels like so little has happened: Staying home all the time under lockdown has meant that days are always much the same, and each day blends into the next. I feel somehow unhinged from time, at once marveling that a year has passed already, and marveling that so much happened in only a year.

I should soon hear back from these job interviews and have a better idea what the next chapter of my life will be. But I know for sure that I’ll be relieved once this one is over.