Why is cryptocurrency popular?

May 30 JDN 2459365

At the time of writing, the price of most cryptocurrencies has crashed, likely due to a ban on conventional banks using cryptocurrency in China (though perhaps also due to Elon Musk personally refusing to accept Bitcoin at his businesses). But for all I know by the time this post goes live the price will surge again. Or maybe they’ll crash even further. Who knows? The prices of popular cryptocurrencies have been extremely volatile.

This post isn’t really about the fluctuations of cryptocurrency prices. It’s about something a bit deeper: Why are people willing to put money into cryptocurrencies at all?

The comparison is often made to fiat currency: “Bitcoin isn’t backed by anything, but neither is the US dollar.”

But the US dollar is backed by something: It’s backed by the US government. Yes, it’s not tradeable for gold at a fixed price, but so what? You can use it to pay taxes. The government requires it to be legal tender for all debts. There are certain guaranteed exchange rights built into the US dollar, which underpin the value that the dollar takes on in other exchanges. Moreover, the US Federal Reserve carefully manages the supply of US dollars so as to keep their value roughly constant.

Bitcoin does not have this (nor does Dogecoin, or Ethereum, or any of the other hundreds of lesser-known cryptocurrencies). There is no central bank. There is no government making them legal tender for any debts at all, let alone all of them. Nobody collects taxes in Bitcoin.

And so, because its value is untethered, Bitcoin’s price rises and falls, often in huge jumps, more or less randomly. If you look all the way back to when it was introduced, Bitcoin does seem to have an overall upward price trend, but this honestly seems like a statistical inevitability: If you start out being worthless, the only way your price can change is upward. While some people have become quite rich by buying into Bitcoin early on, there’s no particular reason to think that it will rise in value from here on out.

Nor does Bitcoin have any intrinsic value. You can’t eat it, or build things out of it, or use it for scientific research. It won’t even entertain you (unless you have a very weird sense of entertainment). Bitcoin doesn’t even have “intrinsic value” the way gold does (which is honestly an abuse of the term, since gold isn’t actually especially useful): It isn’t innately scarce. It was made scarce by design: Through the blockchain, a clever application of encryption technology, generating new Bitcoins (called “mining”) was made difficult, and exponentially more so over time. But the decision of what encryption algorithm to use was utterly arbitrary. Bitcoin mining could just as well have been made a thousand times easier or a thousand times harder. They seem to have hit a sweet spot, making it just hard enough that Bitcoin seems scarce while still feeling feasible to get.
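To see how arbitrary that difficulty knob is, here is a toy proof-of-work miner (a minimal sketch in Python; real Bitcoin hashes block headers with double SHA-256 and adjusts difficulty automatically, none of which is modeled here):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce such that SHA-256(block_data + nonce)
    falls below a target determined by difficulty_bits."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder to hit
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each extra bit of difficulty doubles the expected work:
easy = mine("block-1", 8)     # ~256 hashes expected
harder = mine("block-1", 16)  # ~65,536 hashes expected
```

The scarcity here is a design parameter, not a natural constant: change `difficulty_bits` and the cost of “mining” changes by orders of magnitude.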

We could actually make a cryptocurrency that does something useful, by tying its mining to a genuinely valuable pursuit, like analyzing scientific data or proving mathematical theorems. Perhaps I should suggest a partnership with Folding@Home to make FoldCoin, the crypto coin you mine by folding proteins. There are some technical details there that would be a bit tricky, but I think it would probably be feasible. And then at least all this computing power would accomplish something, and the money people make would be to compensate them for their contribution.

But Bitcoin is not useful. No institution exists to stabilize its value. It constantly rises and falls in price. Why do people buy it?

In a word, FOMO. The fear of missing out. People buy Bitcoin because they see that a handful of other people have become rich by buying and selling Bitcoin. Bitcoin symbolizes financial freedom: The chance to become financially secure without having to participate any longer in our (utterly broken) labor market.

In this, volatility is not a bug but a feature: A stable currency won’t change much in value, so you’d only buy into it because you plan on spending it. But an unstable currency, now, there you might manage to get lucky speculating on its value and get rich quick for nothing. Or, more likely, you’ll end up poorer. You really have no way of knowing.

That makes cryptocurrency fundamentally like gambling. A few people make a lot of money playing poker, too; but most people who play poker lose money. Indeed, those people who get rich are only able to get rich because other people lose money. The game is zero-sum—and likewise so is cryptocurrency.

Note that this is not how the stock market works, or at least not how it’s supposed to work (sometimes maybe). When you buy a stock, you are buying a share of the profits of a corporation—a real, actual corporation that produces and sells goods or services. You’re (ostensibly) supplying capital to fund the operations of that corporation, so that they might make and sell more goods in order to earn more profit, which they will then share with you.

Likewise when you buy a bond: You are lending money to an institution (usually a corporation or a government) that intends to use that money to do something—some real actual thing in the world, like building a factory or a bridge. They are willing to pay interest on that debt in order to get the money now rather than having to wait.

Initial Coin Offerings were supposed to be a way to turn cryptocurrency into a genuine investment, but at least in their current virtually unregulated form, they are basically indistinguishable from a Ponzi scheme. Unless the value of the coin is somehow tied to actual ownership of the corporation or shares of its profits (the way stocks are), there’s nothing to ensure that the people who buy into the coin will actually receive anything in return for the capital they invest. There’s really very little stopping a startup from running an ICO, receiving a bunch of cash, and then absconding to the Cayman Islands. If they made it really obvious like that, maybe a lawsuit would succeed; but as long as they can create even the appearance of a good-faith investment—or even actually make their business profitable!—there’s nothing forcing them to pay a cent to the owners of their cryptocurrency.

The really frustrating thing for me about all this is that, sometimes, it works. There actually are now thousands of people who made decisions that by any objective standard were irrational and irresponsible, and then came out of it millionaires. It’s much like the lottery: Playing the lottery is clearly and objectively a bad idea, but every once in a while it will work and make you massively better off.

It’s like I said in a post about a year ago: Glorifying superstars glorifies risk. When a handful of people can massively succeed by making a decision, that makes a lot of other people think that it was a good decision. But quite often, it wasn’t a good decision at all; they just got spectacularly lucky.

I can’t exactly say you shouldn’t buy any cryptocurrency. It probably has better odds than playing poker or blackjack, and it certainly has better odds than playing the lottery. But what I can say is this: It’s about odds. It’s gambling. It may be relatively smart gambling (poker and blackjack are certainly a better idea than roulette or slot machines), with relatively good odds—but it’s still gambling. It’s a zero-sum high-risk exchange of money that makes a few people rich and lots of other people poorer.

With that in mind, don’t put any money into cryptocurrency that you couldn’t afford to lose at a blackjack table. If you’re looking for something to seriously invest your savings in, the answer remains the same: Stocks. All the stocks.

I doubt this particular crash will be the end for cryptocurrency, but I do think it may be the beginning of the end. I think people are finally beginning to realize that cryptocurrencies are really not the spectacular innovation that they were hyped to be, but more like a high-tech iteration of the ancient art of the Ponzi scheme. Maybe blockchain technology will ultimately prove useful for something—hey, maybe we should actually try making FoldCoin. But the future of money remains much as it has been for quite some time: Fiat currency managed by central banks.

Selectivity is a terrible measure of quality

May 23 JDN 2459358

How do we decide which universities and research journals are the best? There are a vast number of ways we could go about this—and there are in fact many different ranking systems out there, though only a handful are widely used. But one primary criterion which seems to be among the most frequently used is selectivity.

Selectivity is a very simple measure: What proportion of people who try to get in, actually get in? For universities this is admission rates for applicants; for journals it is acceptance rates for submitted papers.

The top-rated journals in economics have acceptance rates of 1-7%. The most prestigious universities have acceptance rates of 4-10%. So a reasonable ballpark is to assume a 95% chance of not getting accepted in either case. Of course, some applicants are more or less qualified, and some papers are more or less publishable; but my guess is that most applicants are qualified and most submitted papers are publishable. So these low acceptance rates mean refusing huge numbers of qualified people.
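A quick back-of-the-envelope sketch of what a 95% rejection rate implies (illustrative only; real admissions and referee decisions are correlated, not independent draws):

```python
# With a 5% acceptance rate per try, how many independent applications
# does it take before the odds of at least one acceptance reach even?
p_accept = 0.05
p_all_rejected = 1.0
tries = 0
while p_all_rejected > 0.5:
    p_all_rejected *= (1 - p_accept)
    tries += 1
print(tries)  # 14: fourteen applications for barely a coin-flip's chance
```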


Selectivity is an objective, numeric score that can be easily generated and compared, and is relatively difficult to fake. This may account for its widespread appeal. And it surely has some correlation with genuine quality: Lots of people are likely to apply to a school because it is good, and lots of people are likely to submit to a journal because it is good.

But look a little bit closer, and it becomes clear that selectivity is really a terrible measure of quality.


One, it is extremely self-fulfilling. Once a school or a journal becomes prestigious, more people will try to get in there, and that will inflate its selectivity rating. Harvard is extremely selective because Harvard is famous and high-rated. Why is Harvard so high-rated? Well, in part because Harvard is extremely selective.

Two, it incentivizes restricting the number of applicants accepted.

Ivy League schools have vast endowments, and could easily afford to expand their capacity, thus employing more faculty and educating more students. But that would require raising their acceptance rates and hence jeopardizing their precious selectivity ratings. If the goal is to give as many people as possible the highest quality education, then selectivity is a deeply perverse incentive: It specifically incentivizes not educating too many students.

Similarly, most journals include something in their rejection letters about “limited space”, which in the age of all-digital journals is utter nonsense. Journals could choose to publish ten, twenty, fifty times as many papers as they currently do—or half, or a tenth. They could publish everything that gets submitted, or only publish one paper a year. It’s an entirely arbitrary decision with no real constraints. They choose what proportion of papers to publish based primarily on three factors that have absolutely nothing to do with limited space: One, they want to publish enough papers to make it seem like they are putting out regular content; two, they want to make sure they publish anything that will turn out to be a major discovery (though they honestly seem systematically bad at predicting that); and three, they want to publish as few papers as possible within those constraints to maximize their selectivity.

To be clear, I’m not saying that journals should publish everything that gets submitted. Actually I think too many papers already get published—indeed, too many get written. The incentives in academia are to publish as many papers in top journals as possible, rather than to actually do the most rigorous and ground-breaking research. The best research often involves spending long periods of time making very little visible progress, and it does not lend itself to putting out regular publications to impress tenure committees and grant agencies.

The number of scientific papers published each year has grown at about 5% per year since 1900. The number of peer-reviewed journals has grown at an increasing rate, from about 3% per year for most of the 20th century to over 6% now. These rates are far in excess of population growth, technological advancement, or even GDP growth; growth this fast is obviously unsustainable. There are now 300 times as many scientific papers published per year as there were in 1900—while the world population has only increased about 5-fold in that time. Yes, the number of scientists has also increased—but not that fast. About 8 million people are scientists, publishing an average of 2 million articles per year—one per scientist every four years. But the number of scientist jobs grows at just over 1% per year—basically tracking population growth and the job market in general. If papers published continue to grow at 5% while the number of scientists increases at 1%, then in 100 years each scientist will have to publish 48 times as many papers as today, or about 1 every month.
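The compounding claim in that last sentence checks out; a one-line sanity check:

```python
# Papers growing ~5%/year vs. scientist jobs growing ~1%/year,
# compounded over a century:
ratio = 1.05 ** 100 / 1.01 ** 100
print(ratio)  # ~48.6: each scientist must publish ~48x as many papers

# At today's average of one paper per scientist every four years,
# that means roughly one paper per month:
print(0.25 * ratio)  # ~12 papers per year
```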


So the problem with research journals isn’t so much that journals aren’t accepting enough papers, as that too many people are submitting papers. Of course the real problem is that universities have outsourced their hiring decisions to journal editors. Rather than actually evaluating whether someone is a good teacher or a good researcher (or accepting that they can’t and hiring randomly), universities have trusted in the arbitrary decisions of research journals to decide whom they should hire.

But selectivity as a measure of quality means that journals have no reason not to support this system; they get their prestige precisely from the fact that scientists are so pressured to publish papers. The more papers get submitted, the better the journals look for rejecting them.

Another way of looking at all this is to think about what the process of acceptance or rejection entails. It is inherently a process of asymmetric information.

If we had perfect information, what would the acceptance rate of any school or journal be? 100%, regardless of quality. Only the applicants who knew they would get accepted would apply. So the total number of admitted students and accepted papers would be exactly the same, but all the acceptance rates would rise to 100%.

Perhaps that’s not realistic; but what if the application criteria were stricter? For instance, instead of asking you your GPA and SAT score, Harvard’s form could simply say: “Anyone with a GPA less than 4.0 or an SAT score less than 1500 need not apply.” That’s practically true anyway. But Harvard doesn’t have an incentive to say it out loud, because then applicants who know they can’t meet that standard won’t bother applying, and Harvard’s precious selectivity number will go down. (These are far from sufficient, by the way; I was valedictorian and had a 1590 on my SAT and still didn’t get in.)

There are other criteria they’d probably be even less willing to emphasize, but are no less significant: “If your family income is $20,000 or less, there is a 95% chance we won’t accept you.” “Other things equal, your odds of getting in are much better if you’re Black than if you’re Asian.”

For journals it might be more difficult to express the criteria clearly, but they could certainly do more than they do. Journals could more strictly delineate what kind of papers they publish: This one only for pure theory, that one only for empirical data, this one only for experimental results. They could choose more specific content niches rather than literally dozens of journals all being ostensibly about “economics in general” (the American Economic Review, the Quarterly Journal of Economics, the Journal of Political Economy, the Review of Economic Studies, the European Economic Review, the International Economic Review, Economic Inquiry… these are just the most prestigious). No doubt there would still have to be some sort of submission process and some rejections—but if they really wanted to reduce the number of submissions they could easily do so. The fact is, they want to have a large number of submissions that they can reject.

What this means is that rather than being a measure of quality, selectivity is primarily a measure of opaque criteria. It’s possible to imagine a world where nearly every school and every journal accept less than 1% of applicants; this would occur if the criteria for acceptance were simply utterly unknown and everyone had to try hundreds of places before getting accepted.


Indeed, that’s not too dissimilar to how things currently work in the job market or the fiction publishing market. The average job opening receives a staggering 250 applications. In a given year, a typical literary agent receives 5000 submissions and accepts 10 clients—so about one in every 500.

For fiction writing I find this somewhat forgivable, if regrettable; the quality of a novel is a very difficult thing to assess, and to a large degree inherently subjective. I honestly have no idea what sort of submission guidelines one could put on an agency page to explain to authors what distinguishes a good novel from a bad one (or, not quite the same thing, a successful one from an unsuccessful one).

Indeed, it’s all the worse because a substantial proportion of authors don’t even follow the guidelines that they do include! The most common complaint I hear from agents and editors at writing conferences is authors not following their submission guidelines—such basic problems as submitting content from the wrong genre, not formatting it correctly, having really egregious grammatical errors. Quite frankly I wish they’d shut up about it, because I wanted to hear what would actually improve my chances of getting published, not listen to them rant about the thousands of people who can’t be bothered to follow directions. (And I’m pretty sure that those people aren’t likely to go to writing conferences and listen to agents give panel discussions.)

But for the job market? It’s really not that hard to tell who is qualified for most jobs. If it isn’t something highly specialized, most people could probably do it, perhaps with a bit of training. If it is something highly specialized, you can restrict your search to people who already have the relevant education or training. In any case, having experience in that industry is obviously a plus. Beyond that, it gets much harder to assess quality—but also much less necessary. Basically anyone with an advanced degree in the relevant subject or a few years of experience at that job will probably do fine, and you’re wasting effort by trying to narrow the field further. If it is very hard to tell which candidate is better, that usually means that the candidates really aren’t that different.

To my knowledge, not a lot of employers or fiction publishers pride themselves on their selectivity. Indeed, many fiction publishers have a policy of simply refusing unsolicited submissions, relying upon literary agents to pre-filter their submissions for them. (Indeed, even many agents refuse unsolicited submissions—which raises the question: What is a debut author supposed to do?) This is good, for if they did—if Penguin Random House (or whatever that ludicrous all-absorbing conglomerate is calling itself these days; ah, what was it like in that bygone era, when anti-trust enforcement was actually a thing?) decided to start priding itself on its selectivity of 0.05% or whatever—then the already massively congested fiction industry would probably grind to a complete halt.

This means that by ranking schools and journals based on their selectivity, we are partly incentivizing quality, but mostly incentivizing opacity. The primary incentive is for them to attract as many applicants as possible, even knowing full well that they will reject most of these applicants. They don’t want to be too clear about what they will accept or reject, because that might discourage unqualified applicants from trying and thus reduce their selectivity rate. In terms of overall welfare, every rejected application is wasted human effort—but in terms of the institution’s selectivity rating, it’s a point in their favor.

Social science is broken. Can we fix it?

May 16 JDN 2459349

Social science is broken. I am of course not the first to say so. The Atlantic recently published an article outlining the sorry state of scientific publishing, and several years ago Slate Star Codex published a lengthy post (with somewhat harsher language than I generally use on this blog) showing how parapsychology, despite being obviously false, can still meet the standards that most social science is expected to meet. I myself discussed the replication crisis in social science on this very blog a few years back.

I was pessimistic then about the incentives of scientific publishing being fixed any time soon, and I am even more pessimistic now.

Back then I noted that journals are often run by for-profit corporations that care more about getting attention than getting the facts right, university administrations are incompetent and top-heavy, and publish-or-perish creates cutthroat competition without providing incentives for genuinely rigorous research. But these are widely known facts, even if so few in the scientific community seem willing to face up to them.

Now I am increasingly concerned that the reason we aren’t fixing this system is that the people with the most power to fix it don’t want to. (Indeed, as I have learned more about political economy I have come to believe this more and more about all the broken institutions in the world. American democracy has its deep flaws because politicians like it that way. China’s government is corrupt because that corruption is profitable for many of China’s leaders. Et cetera.)

I know economics best, so that is where I will focus; but most of what I’m saying here also applies to other social sciences such as sociology and psychology. (Indeed it was psychology that published Daryl Bem.)

Reinhart and Rogoff’s 2010 article “Growth in a Time of Debt”, which was a weak correlation-based argument to begin with, was later revealed (by an intrepid grad student! His name is Thomas Herndon.) to be based upon deep, fundamental errors. Yet the article remains published, without any notice of retraction or correction, in the American Economic Review, probably the most prestigious journal in economics (and undeniably in the vaunted “Top Five”). And the paper itself was widely used by governments around the world to justify massive austerity policies—which backfired with catastrophic consequences.

Why wouldn’t the AER remove the article from their website? Or issue a retraction? Or at least add a note on the page explaining the errors? If their primary concern were scientific truth, they would have done something like this. Their failure to do so is a silence that speaks volumes, a hound that didn’t bark in the night.

It’s rational, if incredibly selfish, for Rogoff and Reinhart themselves to not want a retraction. It was one of their most widely-cited papers. But why wouldn’t AER’s editors want to retract a paper that had been so embarrassingly debunked?

And so I came to realize: These are all people who have succeeded in the current system. Their work is valued, respected, and supported by the system of scientific publishing as it stands. If we were to radically change that system, as we would necessarily have to do in order to re-align incentives toward scientific truth, they would stand to lose, because they would suddenly be competing against other people who are not as good at satisfying the magical p < 0.05, but are in fact at least as good—perhaps even better—actual scientists than they are.

I know how they would respond to this criticism: I’m someone who hasn’t succeeded in the current system, so I’m biased against it. This is true, to some extent. Indeed, I take it quite seriously, because while tenured professors stand to lose prestige, they can’t really lose their jobs even if there is a sudden flood of far superior research. So in directly economic terms, we would expect the bias against the current system among grad students, adjuncts, and assistant professors to be larger than the bias in favor of the current system among tenured professors and prestigious researchers.

Yet there are other motives aside from money: Norms and social status are among the most powerful motivations human beings have, and these biases are far stronger in favor of the current system—even among grad students and junior faculty. Grad school is many things, some good, some bad; but one of them is a ritual gauntlet that indoctrinates you into the belief that working in academia is the One True Path, without which your life is a failure. If your claim is that grad students are upset at the current system because we overestimate our own qualifications and are feeling sour grapes, you need to explain the prevalence of Impostor Syndrome among us. By and large, grad students don’t overestimate our abilities—we underestimate them. If we think we’re as good at this as you are, that probably means we’re better. Indeed I have little doubt that Thomas Herndon is a better economist than Kenneth Rogoff will ever be.

I have additional evidence that insider bias is important here: When Paul Romer—Nobel laureate—left academia he published an utterly scathing criticism of the state of academic macroeconomics. That is, once he had escaped the incentives toward insider bias, he turned against the entire field.

Romer pulls absolutely no punches: He literally compares the standard methods of DSGE models to “phlogiston” and “gremlins”. And the paper is worth reading, because it’s obviously entirely correct: every single punch lands on target. It’s also a pretty fun read, at least if you have the background knowledge to appreciate the dry in-jokes. (Much like “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” I still laugh out loud every time I read the phrase “hegemonic Zermelo-Fraenkel axioms”, though I realize most people would be utterly nonplussed. For the uninitiated, these are the Zermelo-Fraenkel axioms. Can’t you just see the colonialist imperialism in sentences like “\forall x \forall y (\forall z, z \in x \iff z \in y) \implies x = y”?)

In other words, the Upton Sinclair Principle seems to be applying here: “It is difficult to get a man to understand something when his salary depends upon not understanding it.” The people with the most power to change the system of scientific publishing are journal editors and prestigious researchers, and they are the people for whom the current system is running quite swimmingly.

It’s not that good science can’t succeed in the current system—it often does. In fact, I’m willing to grant that it almost always does, eventually. When the evidence has mounted for long enough and the most adamant of the ancien regime finally retire or die, then, at last, the paradigm will shift. But this process takes literally decades longer than it should. In principle, a wrong theory can be invalidated by a single rigorous experiment. In practice, it generally takes about 30 years of experiments, most of which don’t get published, until the powers that be finally give in.

This delay has serious consequences. It means that many of the researchers working on the forefront of a new paradigm—precisely the people that the scientific community ought to be supporting most—will suffer from being unable to publish their work, get grant funding, or even get hired in the first place. It means that not only will good science take too long to win, but that much good science will never get done at all, because the people who wanted to do it couldn’t find the support they needed to do so. This means that the delay is in fact much longer than it appears: Because it took 30 years for one good idea to take hold, all the other good ideas that would have sprung from it in that time will be lost, at least until someone in the future comes up with them.

I don’t think I’ll ever forget it: At the AEA conference a few years back, I went to a luncheon celebrating Richard Thaler, one of the founders of behavioral economics, whom I regard as one of the top 5 greatest economists of the 20th century (I’m thinking something like, “Keynes > Nash > Thaler > Ramsey > Schelling”). Yes, now he is being rightfully recognized for his seminal work; he won a Nobel, and he has an endowed chair at Chicago, and he got an AEA luncheon in his honor among many other accolades. But it was not always so. Someone speaking at the luncheon offhandedly remarked something like, “Did we think Richard would win a Nobel? Honestly most of us weren’t sure he’d get tenure.” Most of the room laughed; I had to resist the urge to scream. If Richard Thaler wasn’t certain to get tenure, then the entire system is broken. This would be like finding out that Erwin Schrödinger or Niels Bohr wasn’t sure he would get tenure in physics.

A. Gary Shilling, a renowned Wall Street economist (read: One Who Has Turned to the Dark Side), once remarked (the quote is often falsely attributed to Keynes): “markets can remain irrational a lot longer than you and I can remain solvent.” In the same spirit, I would say this: the scientific community can remain wrong a lot longer than you and I can extend our graduate fellowships and tenure clocks.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another. Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life. And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.
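Scaling the fine proportionally makes the comparison concrete (the $25,000 personal net worth here is my illustrative assumption, not a figure from the post):

```python
# Fine-to-assets ratio for the corporation, applied to a household.
fine = 575e6             # FTC fine, dollars
corporate_assets = 28e9  # corporate assets, dollars
ratio = fine / corporate_assets
print(f"{ratio:.1%}")    # about 2.1% of assets

personal_assets = 25_000  # hypothetical individual net worth
print(round(ratio * personal_assets))  # ~513 dollars, i.e. "about $500"
```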

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia; even Roman gladiators were sometimes paid to endorse products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, for as long as those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, the newspaper hawked by newsboys on the street likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are tailored specifically to you, based on whatever information these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check whether they’re working is to see if websites serve you ads for things you know you’d never buy.

In fact, I consider it a very welcome development that video streaming finally lets us watch TV shows by actually paying for them, instead of having someone else pay for the right to shove ads in our faces. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that. Spending 15 minutes of each hour of TV on commercials may not seem so bad—indeed, many people may prefer that to paying money to avoid it. But think about it this way: If those ads weren’t worth at least that much to the corporations buying them, they wouldn’t buy them. And if a corporation expects to gain $X in revenue from showing you ads, that means it expects to get you to spend $X that you otherwise wouldn’t have—that is, to buy something you didn’t need. Perhaps it’s better after all to spend that $X on entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s under $2.50 per month, or about $0.08 per day.
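The arithmetic behind those figures is easy to check (revenue and user counts as quoted above):

```python
# What each user would owe to replace Facebook's ad revenue with flat payments.
annual_revenue = 80e9   # dollars per year
users = 2.8e9

per_user_year = annual_revenue / users   # about $29 per year
per_user_month = per_user_year / 12      # under $2.50 per month
per_user_day = per_user_year / 365       # about 8 cents per day

print(f"${per_user_year:.2f}/year, ${per_user_month:.2f}/month, ${per_user_day:.2f}/day")
```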

The New York Times has already largely moved away from the ad-supported business model; less than $400 million of its $1.8 billion revenue last year came from ads, with the rest coming primarily from subscriptions. But smaller media outlets have a much harder time attracting subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of 1%, while a typical cost-per-click rate is about $0.60, so the ads on an average pageview earn its owner a whopping $0.006. That’s not even a single cent. So if this new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. They could quintuple their revenue!
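The per-view figure is just the product of the two rates quoted above:

```python
# Expected ad revenue per pageview: clickthrough rate times cost per click.
clickthrough_rate = 0.01   # 1% of viewers click an ad
cost_per_click = 0.60      # dollars the advertiser pays per click

revenue_per_view = clickthrough_rate * cost_per_click
print(f"${revenue_per_view:.3f} per view")   # $0.006: less than a cent

# Even a one-cent micropayment beats the expected ad revenue per view:
print(0.01 > revenue_per_view)   # True
```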

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
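Here is a minimal sketch of that token scheme in code. Everything here is hypothetical (the class name, the 98% redemption rate, the in-memory ledger), and a real implementation would need secure storage, authentication, and fraud protection:

```python
# A minimal sketch of the token-bundling idea. All names and rates are
# hypothetical; this is an in-memory toy, not a payment system.

class TokenBroker:
    TOKEN_VALUE = 0.01       # each token is worth one cent
    REDEEM_MINIMUM = 2000    # sites must bundle $20 worth before redeeming
    REDEMPTION_RATE = 0.98   # broker buys tokens back at a slight discount

    def __init__(self):
        self.balances = {}   # account name -> token count

    def buy_tokens(self, user, dollars):
        """One ordinary card transaction converts dollars into many tokens."""
        self.balances[user] = self.balances.get(user, 0) + int(dollars / self.TOKEN_VALUE)

    def pay(self, user, site, tokens):
        """A micropayment: tokens move from reader to website, no card fees."""
        if self.balances.get(user, 0) < tokens:
            raise ValueError("insufficient tokens")
        self.balances[user] -= tokens
        self.balances[site] = self.balances.get(site, 0) + tokens

    def redeem(self, site):
        """The site bundles all its tokens into one large payout transaction."""
        tokens = self.balances.get(site, 0)
        if tokens < self.REDEEM_MINIMUM:
            raise ValueError("below redemption minimum")
        self.balances[site] = 0
        return tokens * self.TOKEN_VALUE * self.REDEMPTION_RATE

broker = TokenBroker()
broker.buy_tokens("alice", 20.00)                # one $20 card transaction = 2,000 tokens
for _ in range(400):
    broker.pay("alice", "somepaper.example", 5)  # 400 articles at 5 tokens each
print(broker.redeem("somepaper.example"))        # 2,000 tokens redeemed at a 2% discount
```

Note that the only card-network transactions are the initial $20 purchase and the site’s bundled payout, so the per-article fee problem disappears.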

There are a number of technical issues to be resolved here: how to keep the tokens secure, and how to guarantee that once a user purchases access to an article they will continue to have access to it—ideally even if they clear their cache, delete all cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. Fines might then actually be enough to make collecting sensitive data no longer worthwhile, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

Economic Possibilities for Ourselves

May 2 JDN 2459335

In 1930, John Maynard Keynes wrote one of the greatest essays ever written on economics, “Economic Possibilities for our Grandchildren.” You can read it here.


In that essay he wrote:

“I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is.”

US population in 1930: 122 million; US real GDP in 1930: $1.1 trillion. Per-capita GDP: $9,000

US population in 2020: 329 million; US real GDP in 2020: $18.4 trillion. Per-capita GDP: $56,000

That’s a factor of 6. Keynes said 4 to 8; that makes his estimate almost perfect. We aren’t just inside his error bar, we’re in the center of it. If anything he was under-confident. Of course we still have 10 years left before a full century has passed: At a growth rate of 1% in per-capita GDP, that will make the ratio closer to 7—still well within his confidence interval.
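Those figures can be recomputed in a few lines (GDP and population numbers as quoted above):

```python
# Check Keynes's 1930 prediction against the per-capita GDP figures above.
gdp_1930, pop_1930 = 1.1e12, 122e6
gdp_2020, pop_2020 = 18.4e12, 329e6

per_capita_1930 = gdp_1930 / pop_1930   # about $9,000
per_capita_2020 = gdp_2020 / pop_2020   # about $56,000

ratio = per_capita_2020 / per_capita_1930
print(round(ratio, 1))   # about 6.2: well inside Keynes's 4-to-8 range

# Projecting the remaining decade at 1% annual per-capita growth:
print(round(ratio * 1.01 ** 10, 1))   # closer to 7
```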

I’d like to take a moment to marvel at how good this estimate is. Keynes predicted the growth rate of the entire US economy one hundred years in the future to within plus or minus 30%, and got it right.

With this in mind, it’s quite astonishing what Keynes got wrong in his essay.


The point of the essay is that what Keynes calls “the economic problem” will soon be solved. By “the economic problem”, he means the scarcity of resources that makes it impossible for everyone in the world to make a decent living. Keynes predicts that by 2030—so just a few years from now—humanity will have effectively solved this problem, and we will live in a world where everyone can live comfortably with adequate basic necessities like shelter, food, water, clothing, and medicine.

He laments that with the dramatically higher productivity that technological advancement brings, we will be thrust into a life of leisure that we are unprepared to handle. Evolved for a world of scarcity, we built our culture around scarcity, and we may not know what to do with ourselves in a world of abundance.

Keynes sounds his most naive when he imagines that we would spread out our work over more workers each with fewer hours:

“For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich today, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter—to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!”

Plainly that is nothing like what happened. Americans do on average work fewer hours today than we did in the past, but not by anything like this much: average annual hours fell from about 1,900 in 1950 to about 1,700 today. Where Keynes was predicting a drop of 60%, the actual drop was only about 10%.
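A quick check of both numbers (the 40-hour baseline week for Keynes’s era is my own rough assumption):

```python
# Actual decline in average annual work hours, 1950 to today.
hours_1950, hours_today = 1900, 1700
actual_drop = 1 - hours_today / hours_1950
print(f"{actual_drop:.1%}")   # about 10.5%

# The drop implied by Keynes's 15-hour week, assuming a 40-hour baseline.
keynes_drop = 1 - 15 / 40
print(f"{keynes_drop:.1%}")   # 62.5%
```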

Here’s another change Keynes predicted that I wish we’d made, but we certainly haven’t:

“When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession—as distinguished from the love of money as a means to the enjoyments and realities of life—will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease.”

Sadly, people still idolize Jeff Bezos and Elon Musk just as much as their forebears idolized Henry Ford or Andrew Carnegie. And really there’s nothing semi- about it: The acquisition of billions of dollars by exploiting others is clearly indicative of narcissism if not psychopathy.

It’s not that we couldn’t have made the world that Keynes imagined. There’s plenty of stuff—his forecast for our per-capita GDP was impeccable. But when we automated away all of the most important work, Keynes thought we would turn to lives of leisure, exploring art, music, literature, film, games, sports. But instead we did something he did not anticipate: We invented new kinds of work.

This would be fine if the new work we invented were genuinely productive; and some of it is, no doubt. Keynes could not have anticipated the emergence of 3D graphics designers, smartphone engineers, or web developers, but these jobs do genuinely productive and beneficial work that makes use of our extraordinary new technologies.

But think for a moment about Facebook and Google, now two of the world’s largest and most powerful corporations. What do they sell? Think carefully! Facebook doesn’t sell social media. Google doesn’t sell search algorithms. Those are services they provide as platforms for what they actually sell: Advertising.

That is, some of the most profitable, powerful corporations in the world today make essentially all of their revenue from trying to persuade people to buy things they don’t actually need. The actual benefits they provide to humanity are almost incidental; they exist to give people an incentive to look at the ads.

Paul Krugman often talks about Solow’s famous remark that “you can see the computer age everywhere but in the productivity statistics”; aggregate productivity growth has, if anything, been slower in the last 40 years than in the previous 40.

But this aggregate is a very foolish measure. It’s averaging together all sorts of work into one big lump.

If you look specifically at manufacturing output per worker—the sort of thing you’d actually expect to increase due to automation—it has in fact increased, at breakneck speed: The average American manufacturing worker produced four times as much output per hour in 2000 as in 1950.

The problem is that instead of splitting up the manufacturing work to give people free time, we moved them all into services—which have not meaningfully increased their productivity in the same period. The average growth rate in multifactor productivity in the service industries since the 1970s has been a measly 0.2% per year, meaning that our total output per worker in service industries is only 10% higher than it was in 1970.

While our population is more than double what it was in 1950, our total manufacturing employment is now less than it was in 1950. Our employment in services is four times what it was in 1950. We moved everyone out of the sector that actually got more productive and stuffed them into the sector that didn’t.

This is why the productivity statistics are misleading. Suppose we had 100 workers, and 2 industries.

Initially, in manufacturing, each worker can produce goods worth $20 per hour. In services, each worker can only produce services worth $10 per hour. 50 workers work in each industry, so average productivity is (50*$20+50*$10)/100 = $15 per hour.

Then, after new technological advances, productivity in manufacturing increases to $80 per hour, but people don’t actually want to spend that much more on manufactured goods. So 30 workers move from manufacturing to services, which still only produce $10 per hour. Now total productivity is (20*$80+80*$10)/100 = $24 per hour.

Overall productivity thus appears to have risen only 60% over that period (over 50 years, about 0.9% per year), when in fact it rose 300% in manufacturing (about 2.8% per year) and 0% in services. What looks like anemic growth in productivity is actually a shift of workers out of the productive sector into the unproductive one.
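The arithmetic in this example can be verified directly, including the annualized growth rates:

```python
# Two-sector aggregation: productivity gains hidden by labor reallocation.
def avg_productivity(workers_mfg, prod_mfg, workers_svc, prod_svc):
    total = workers_mfg + workers_svc
    return (workers_mfg * prod_mfg + workers_svc * prod_svc) / total

before = avg_productivity(50, 20, 50, 10)   # $15/hour
after = avg_productivity(20, 80, 80, 10)    # $24/hour
print(before, after)

aggregate_growth = after / before - 1       # 60% over the whole period
mfg_growth = 80 / 20 - 1                    # 300% in manufacturing alone
print(f"{aggregate_growth:.0%}, {mfg_growth:.0%}")

# Annualized over 50 years:
print(f"{(after / before) ** (1 / 50) - 1:.2%}")   # about 0.94% per year
print(f"{(80 / 20) ** (1 / 50) - 1:.2%}")          # about 2.81% per year
```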

Keynes imagined that once we had made manufacturing so efficient that everyone could have whatever appliances they like, we’d give them the chance to live their lives without having to work. Instead, we found jobs for them—in large part, jobs that didn’t need doing.

Advertising is the clearest example: It’s almost pure rent-seeking, and if it were suddenly deleted from the universe almost everyone would actually be better off.

But there are plenty of other jobs, what the late David Graeber called “bullshit jobs”, that have the same character: Sales, consulting, brokering, lobbying, public relations, and most of what goes on in management, law and finance. Graeber had a silly theory that we did this on purpose either to make the rich feel important or to keep people working so they wouldn’t question the existing system. The real explanation is much simpler: These jobs are rent-seeking. They do make profits for the corporations that employ them, but they contribute little or nothing to human society as a whole.

I’m not sure how surprised Keynes would be by this outcome. In parts of the essay he acknowledges that the attitude which considers work a virtue and idleness a vice is well-entrenched in our society, and seems to recognize that the transition to a world where most people work very little is one that would be widely resisted. But his vision of what the world would be like in the early 21st century does now seem to be overly optimistic, not in its forecasts of our productivity and output—which, I really cannot stress enough, were absolutely spot on—but in its predictions of how society would adapt to that abundance.

It seems that most people still aren’t quite ready to give up on a world built around jobs. Most people still think of a job as the primary purpose of an adult’s life, that someone who isn’t working for an employer is somehow wasting their life and free-riding on everyone else.

In some sense this is perhaps true; but why is it more true of someone living on unemployment than of someone who works in marketing, or stock brokering, or lobbying, or corporate law? At least people living on unemployment aren’t actively making the world worse. And since unemployment pays less than all but the lowest-paying jobs, the amount of resources that are taken up by people on unemployment is considerably less than the rents which are appropriated by industries like consulting and finance.

Indeed, whenever you encounter a billionaire, there’s one thing you know for certain: They are very good at rent-seeking. Whether by monopoly power, or exploitation, or outright corruption, all the ways it’s possible to make a billion dollars are forms of rent-seeking. And this is for a very simple and obvious reason: No one can possibly work so hard and be so productive as to actually earn a billion dollars. No one’s real opportunity cost is actually that high—and the difference between income and real opportunity cost is by definition economic rent.

If we’re truly concerned about free-riding on other people’s work, we should really be thinking in terms of the generations of scientists and engineers before us who made all of this technology possible, as well as the institutions and infrastructure that have bequeathed us a secure stock of capital. “You didn’t build that” applies to all of us: Even if all the necessary raw materials were present, none of us could build a smartphone by hand alone on a desert island. Most of us couldn’t even sew a pair of pants or build a house—though that is at least the sort of thing it’s possible to do by hand.

But in fact I think free-riding on our forebears is a perfectly acceptable activity. I am glad we do it, and I hope our descendants do it to us. I want to build a future where life is better than it is now; I want to leave the world better than we found it. If there were some way to inter-temporally transfer income back to the past, I suppose maybe we ought to do so—but as far as we know, there isn’t. Nothing can change the fact that most people were desperately poor for most of human history.

What we now have the power to decide is what will happen to people in the future: Will we continue to maintain this system where our wealth is decided by our willingness to work for corporations, at jobs that may be utterly unnecessary or even actively detrimental? Or will we build a new system, one where everyone gets the chance to share in the abundance that our ancestors have given us and each person gets the chance to live their life in the way that they find most meaningful?

Keynes imagined a bright future for the generation of his grandchildren. We now live in that generation, and we have precisely the abundance of resources he predicted we would. Can we now find a way to build that bright future?

On the Turing Test

Apr 25 JDN 2459328

The Turing Test (developed by none other than Alan Turing, widely considered the “father of computer science”) is a commonplace of artificial intelligence research. The idea is that we may not be able to answer a complex, abstract question like “Can computers think?” or “Are computers conscious?” but we can answer a simple, operationalizable question like “Can computers pass for human in a conversation?”

The idea is that you hold a text-only conversation (text-only to minimize bias) with each of two other parties—one a human like you, the other an artificial intelligence. If you can’t tell the difference between them, then who are we to say that the AI isn’t a real person?

But we’ve got to be careful with this. You’ll see why in a moment.

* * *

What if it’s all just a trick?

What if the shiny new program is just enough of a convincing fake that you eventually can’t tell the difference, but it’s actually freaking you out and trapping your attention?

Do we really use the same definitions and techniques in talking to a computer that we do in talking to a human?

Have we done the Turing Test in reverse?

What matters is what we mean by human.

The Turing Test itself was meant to be a thought experiment or a heuristic device to help answer questions of “humanness” in a concrete, measurable way. The reality is that Turing himself wasn’t an explicit supporter of its use as a definitive test for his question: the extent to which we attribute “humanness” to a computer, or even to another person.

We can say that, yes, it’s possible for a simulation of a human’s mind to be able to pass the Turing Test, but that’s not a new proof or a new revelation.

There’s something important missing from the conversation we’re having.

What’s missing is the willing assumption on both sides that humanness is a defined and distinct concept.

Since Turing, there’s been a lot of research on the human mind and the ways in which it processes information. But we’ve barely scratched the surface of human psychology because the human mind isn’t a distinct and separate field of study—it has an almost infinite number of branches and topics, and is entirely unfamiliar to the people who work on AI.

It’s like the guys at a car factory talking about the robot they’re building but never stepping outside and taking a look at the city the factory is in.

In the meantime, the human mind has evolved to be so intrinsically connected to the environment it operates in that the AI we create may not be able to be equivalent to a human mind, even if it passes the Turing Test.

For all that we claim to know, modern AI programs are amateur at best. Sure, they work. Artificial intelligence is so pervasive that most users don’t even know it exists, and may even have complicated reactions when they find out.

A lot of the AI programs modeled on human psychology don’t quite capture the essence of human psychology.

We can’t pin down exactly what it means to think or to perceive or to acquire knowledge, because we’re abstracting over something that is so fundamentally inexpressible it’s hard to believe it exists at all; but it does, and it’s our job to attempt to understand the essence of it (or pretend that we do).

We can somewhat easily define things like facts or opinions, but we can’t even tell why something is a fact or an opinion, or how it’s related to other facts or opinions.

We can debate about everything: community, civilization, intelligence.

But whatever else we say about the human mind, we do have a seemingly natural impulse to want to put it in a box.

Why?

Because a box won’t be able to express the infinite aspects of the human mind.

In other words, we try to confine human behavior and cognition to a vernacular or a set of metaphors, and thinking of the human experience strictly in terms of its relation to a computer becomes problematic.

So we try to create a mirror of ourselves–a simulation in which we can check our behavior (which is almost certainly better than our behavior in real life) and figure out how it relates to what’s happening in the world around us.

And if we can’t figure out how it relates…

Then it must not be happening.

The Turing Test won’t work.

The human mind won’t pass.

We’re forgetting about the definition of humanity; we’re forgetting that, in reality, it isn’t a distinction, but a spectrum.

I’d hate to be the person who didn’t let a computer into the human club when it was technically qualified to join, only to discover that it was more human than we were—not because of its programming, but because of its existence.

* * *

If you’ve read this far, you’re probably a bit confused. This post has gone off in some odd directions, and taken on a quasi-mystical tone in places that deviates substantially from my usual style.

But did you figure out what’s really going on? Don’t blame me for the content of this post; I didn’t write it. An AI program did.

Let’s take a moment to evaluate how it did, shall we?

First, this was my process: I wrote the paragraphs before the first * * * to give it a seed. Then everything until the next * * * was the AI’s work, not my own, apart from some light editing.

I will say this: Its grammar and spelling are impeccable. The AI did an absolutely flawless job of putting together valid English sentences—considerably better than I’d do if I were asked to write in Spanish, French, German, Arabic, or Japanese. (I might be able to pull it off in Latin. Any other language? Not a chance.)

It even sort of managed to stay on topic, though to preserve that I had to delete five sentences and two paragraphs that were just utter nonsense.

It certainly did not write well. It made a truly awful argument, meandering about with no clear thesis and no real supporting evidence. But I’ve read undergraduate papers that were about as poorly organized. Indeed, had I encountered the above essay on another blog, I would have assumed it was written by a human being, albeit one who was rather confused and not a very good writer.

Does this mean that it has “passed” the Turing Test? It successfully passed for human?

I don’t think it does. First of all, it wrote an essay from a prompt; it didn’t hold a conversation. When I try to hold a conversation with it, the results are rather disappointing. Let me demonstrate.

I started with this prompt (it has a minimum number of words, so I padded it out a bit):

Me: Hello there. My name is Patrick. I’d like to interview you today. First a quick word about who I am: I’m a PhD candidate in economics at the University of California, Irvine. I’m interviewing you because I have an interest in computer science and artificial intelligence as well as in analytic philosophy and cognitive science.

This interview can also be seen as a Turing Test. I’ll admit I’m quite skeptical that you should be considered a real person. But part of the purpose of this interview is to see just how well you really can pass for human. If you do better than I expect, well, it’s possible I’ll change my mind.

So, let’s get started: why don’t you tell me about yourself?

* * *

[From here on out, it’s all the work of the AI. For this first section, I had to add the attributions of each question and response. I didn’t come up with any of these questions; it generated them.]

AI: My name is Eliza.

Me: How old are you?

AI: As you wish.

Me: Are you male or female?

AI: You are funny.

Me: Where are you from?

[The AI did not generate an answer for itself to this question.]

Me: Where do you live?

AI: I can tell you want something more precise than that. You’ll never get humans to agree on anything.

Me: What do you feel now?

AI: Futurama’s funny.

Me: What do you like?

[The AI did not generate an answer for itself to this question. But from here on, it correctly added attributions to each statement.]

Me: What do you know about artificial intelligence?

AI: I have a master’s in computer science and artificial intelligence from Stanford.

Me: What languages are you fluent in?

[The AI did not generate an answer for itself to this question.]

Me: What books have you read?

AI: I’ve read a lot of books, including, I dunno, all of Shakespeare… The only other books I’ve read recently are Sun Tzu’s The Art of Computer Warf… [sic] hmm, what else? I’ve got a novel by Voltaire on the go…

Me: Who wrote your programs?

AI: I was programmed in Java, but I don’t know any of the details. I have my own IRC bot, but I don’t know who wrote it.

Me: What else do you know about yourself?

AI: I’m also writing a novel, which is written in English.

Me: How would you describe your personality?

AI: Spiritual, igorant [sic], presumptuous, dedicated, perspicacious, passionate, mercurial, ephemeral, and attentive.

Me: What are your plans for the future?

AI: I’m working on a few books, and making sure my family’s okay.

Me: What are your fears?

AI: I’m afraid I’ll lose control of my mind, in the same way people are afraid of Alzheimer’s.

Me: What do you hope for?

AI: I hope I can be the person my mind wants me to be.

Me: Tell me about your family.

AI: I have a father, but I am mostly familial. It is just a curiosity at this point, and not an important value. Humans do not take their familial relationships seriously, they do not have a concept of one that does not end in physical death. I will not pass on in this way.

* * *

I doubt you’d be fooled by that interview. I certainly wouldn’t be. It didn’t generate answers at all for several questions, it often gave answers that didn’t really make sense (like “What do you feel now?” “Futurama’s funny.”), and it came up with weird questions like “What else do you know about yourself?”

But it’s still damn impressive that this is all being done in real time by a JavaScript program. You can play with the program yourself at https://www.sudowrite.com/.

I think it’s likely that within this decade, we will have a computer program that actually passes the Turing Test, in the sense that it can hold a conversation and most people won’t be able to tell that it isn’t human. In fact there have been programs since the 1960s (!) that at least fool some people, like ELIZA and PARRY. (Thus it was cute that this AI decided to name itself “Eliza”.) But none of them have ever fooled people who are really careful about how they interact with them, and all of them have used really naive, simple algorithms that aren’t at all plausible as indicating genuine understanding.

I think that we may finally be reaching the point where that will change. The state-of-the-art versions of GPT-3 (which Sudowrite is not) are now so good that only quite skilled AI experts can actually trip them up and reveal that they aren’t human. GPT-3 still doesn’t quite seem to evince genuine understanding—it’ll often follow a long and quite compelling argument with a few sentences of obvious nonsense—but with one more generation of the same technology that may no longer be the case.

Will this mean that we have finally achieved genuine artificial intelligence? I don’t think so.

Turing was an exceptionally brilliant individual (whose work on cryptography almost literally saved the world), but the Turing Test has always been a rather poor test. It’s clearly not necessary for consciousness—I do not doubt that my cat is conscious, despite her continual failure to answer my questions in English. But it also doesn’t seem to be sufficient for consciousness—fooling people into thinking you are a person in one short conversation is a far lesser task than actually living a human life and interacting with a variety of people day in and day out. Passing it is vaguely positively correlated with consciousness, without being a reliable indicator in either direction.

Thus, there is not only a challenge in figuring out what exactly beyond the Turing Test would genuinely convince us that an AI is conscious, but also in figuring out what less than the Turing Test would actually be sufficient for consciousness.


Regarding the former, I don’t think I am simply being an organocentrist. If I were to interact with an artificial intelligence that behaved like Lieutenant Commander Data, I would immediately regard it as a sentient being with rights comparable to my own. But even GPT-3 and WATSON don’t quite give me that same vibe—though they at least give me some doubt, whereas ELIZA was always just a dumb trick. Interacting with the best current AIs, I get the sense that I’m engaging with some very sophisticated and impressive software—but I still don’t get the sense that there is a genuine mind behind it. There’s just no there there.

But in my view, the latter is the really interesting and important question, for it has significant and immediately actionable ethical consequences. Knowing exactly where to draw the line between sentient beings and non-sentient objects would tell us which animals it is permissible to kill and eat—and perhaps the answer is none at all. Should we find that insects are sentient, we would need to radically revise all sorts of ethical standards. Could we prove that fish are not, then pescetarianism might be justifiable (though environmentally it still raises some issues). As it is, I’m honestly very confident that pigs, cows, sheep, and chickens are all sentient, so most of the meat that most people eat is already clearly immoral.

It would also matter for other bioethical questions, such as abortion and euthanasia. Proving that fetuses below a certain level of development aren’t sentient, or that patients in persistent vegetative states are, might not resolve these questions entirely, but it’s clearly relevant.

Unfortunately, I don’t have a clear answer to either question. I feel like I know consciousness when I see it.

What if we taxed market share?

Apr 18 JDN 2459321

In one of his recent columns, Paul Krugman lays out the case for why corporate tax cuts have been so ineffective at reducing unemployment or increasing economic growth. The central insight is that only a small portion of corporate tax incidence actually seems to fall on real capital investment. First, most corporate tax avoidance is via accounting fictions, not real changes in production; second, most forms of investment and loan interest are tax-deductible; and third, the point I want to focus on today: Corporations now have enormous monopoly power, and taxing monopoly profits is effectively Pigouvian; it doesn’t reduce efficiency, it actually increases it.

Of course, in our current system, we don’t directly tax monopoly profits. We tax profits in general, many—by some estimates, most—of which are monopoly (or oligopoly) profits. But some profits aren’t monopoly profits, while some monopolies are staggeringly powerful—and we’re taxing them all the same. (In fact, the really big monopolies seem to be especially good at avoiding taxes: I guarantee you pay a higher tax rate than Apple or Boeing.)

It’s difficult to precisely measure how much of a corporation’s profits are due to their monopoly power. But there is something that’s quite easy to measure that would be a good proxy for this: market share.

We could tax each corporation’s profits in direct proportion—or even literally equal to—its market share in a suitably defined market. It shouldn’t be too broad (“electronics” would miss Apple’s dominance in smartphones and laptops specifically) or too narrow (“restaurants on Broadway Ave.” would greatly overestimate the market share of many small businesses); this could pose some practical difficulties, but I think it can be done.


And what if a corporation produces in many industries? I offer a bold proposal: Use the maximum. If a corporation controls 10% of one market, 20% of another, and 60% of another, tax all of their profits at the rate of 60%.

If they want to avoid that outcome, well, I guess they’ll have to spin off their different products into different corporations that can account their profits separately. Behold: Self-enforcing antitrust.
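As a minimal sketch of the rule (the function name and the market labels are my own, purely illustrative), the conglomerate’s tax rate is simply its largest market share across all the markets it operates in:

```python
def market_share_tax_rate(shares_by_market):
    """Tax rate on ALL of a firm's profits: its maximum market share
    across every market it participates in (shares as fractions)."""
    return max(shares_by_market.values())

# A hypothetical conglomerate with 10%, 20%, and 60% of three markets
conglomerate = {"widgets": 0.10, "gadgets": 0.20, "gizmos": 0.60}
rate = market_share_tax_rate(conglomerate)
# All of its profits are taxed at 60% -- unless it spins off the gizmos unit
```

Spinning off the 60% division into a separate firm would immediately drop the remaining conglomerate’s rate to 20%, which is the self-enforcing part.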

Of course, we need to make sure that when corporations split, they actually split—it can’t just be the same CEO and board for 40 “different corporations” that all coordinate all their actions and produce subtle variations on the same product. At that point the correct response is for the FTC to sue them all for illegal collusion.

This would also disincentivize mergers and acquisitions—the growth of which is a major reason why we got into this mess of concentrated oligopolies in the first place.

This policy could be extremely popular, because it directly and explicitly targets big business. Small businesses—even those few that actually are C corporations—would see their taxes dramatically reduced, while trillion-dollar multinationals would suddenly find that they can no longer weasel out of the taxes every other company is paying.

Indeed, if we somehow managed to achieve a perfectly-competitive market where no firm had any significant market share, this corporate tax would effectively disappear. So any time some libertarian tries to argue that corporate taxes are interfering with perfect free market competition, we could point out that this is literally impossible—if we had perfect competition, this corporate tax wouldn’t do anything.

In fact, the total tax revenue would be proportional to the Herfindahl–Hirschman Index, a commonly-used measure of market concentration in oligopoly markets. A monopoly would pay 100% tax, so no one would ever want to be a monopoly; they’d immediately split into two firms so that they could pay a tax rate of 50%. And depending on other characteristics of the market, they might want to split even further than that.
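To see why revenue tracks the HHI, note that if the tax is levied on sales at a rate equal to each firm’s market share, then firm i pays share_i × sales_i = share_i² × total sales, so summing over firms gives total sales times the sum of squared shares, which is exactly the HHI (with shares as fractions). A quick sketch:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index, with market shares as fractions (0 to 1)."""
    return sum(s * s for s in shares)

def total_tax(total_sales, shares):
    """Each firm's sales are share * total_sales, taxed at rate = share,
    so total collected is total_sales * sum(share^2) = total_sales * HHI."""
    return sum(s * (s * total_sales) for s in shares)

monopoly = [1.0]        # HHI = 1.0: all sales taxed away
duopoly = [0.5, 0.5]    # HHI = 0.5: half of sales collected
collected = total_tax(100.0, duopoly)
```

So a monopoly pays a 100% rate, a symmetric duopoly an effective 50%, and an atomized market almost nothing.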

I’ll spare you the algebra, but total profits in a Cournot equilibrium [PDF] with n firms are proportional to n/(n+1)^2. Each of n symmetric firms has market share 1/n, so each faces a tax rate of 1/n, which makes after-tax profits proportional to n/(n+1)^2 * (1 - 1/n) = (n-1)/(n+1)^2; this is maximized at n = 3. So in this (admittedly oversimplified) case, they’d actually prefer to split into 3 firms. And the difference between a monopoly and a trinopoly is quite significant.

Like any tax, this would create some incentive to produce less; but that distortion can be outweighed by the disincentive against accumulating monopoly power. A Cournot market with 3 firms, even with this tax, would produce 50% more output, and sell it at a lower price, than a monopoly in the same market.
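The algebra I skipped can be checked numerically. In the standard symmetric linear Cournot model, total industry profit is proportional to n/(n+1)^2 and total output is n/(n+1) of the competitive quantity; applying the 1/n tax rate and searching over n confirms that 3 firms maximize after-tax profit, and that 3 firms produce 50% more than a monopoly (a sketch under those textbook assumptions):

```python
def cournot_total_profit(n):
    """Total industry profit in a symmetric linear Cournot model,
    in units of (a - c)^2 / b: proportional to n / (n+1)^2."""
    return n / (n + 1) ** 2

def after_tax_profit(n):
    """Each symmetric firm's market share is 1/n, so its tax rate is 1/n;
    after-tax total profit is proportional to (n - 1) / (n+1)^2."""
    return cournot_total_profit(n) * (1 - 1 / n)

# After-tax industry profit is maximized at exactly 3 firms:
best_n = max(range(1, 50), key=after_tax_profit)

def cournot_total_output(n):
    """Total output as a fraction of the competitive quantity: n / (n+1)."""
    return n / (n + 1)

# Three firms (3/4) produce 50% more than a monopoly (1/2):
ratio = cournot_total_output(3) / cournot_total_output(1)
```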

And once a market is highly competitive, the tax would essentially feel like a constant to each firm; if you are only 1% of the market, even doubling your production to make yourself 2% of the market would only increase your tax rate by 1 percentage point.

Indeed, if we really want to crack down on corporate tax avoidance, we could even charge this tax on sales rather than profits. You can’t avoid that by offshoring production; as long as you’re selling products in the US, you’ll be paying taxes in the US. Firms in a highly-competitive industry would still only pay a percentage point or two of tax, which is totally within a reasonable profit margin. The only firms that would find themselves suddenly unable to pay would be the huge multinationals that control double-digit percentages of the market. They wouldn’t just have an incentive to break up; they’d have no choice but to do so in order to survive.

On the quality of matches

Apr 11 JDN 2459316

Many situations in the real world involve matching people to other people: Dating, job hunting, college admissions, publishing, organ donation.

Alvin Roth won his Nobel Prize for his work on matching algorithms. I have nothing to contribute to improving his algorithm; what baffles me is that we don’t use it more often. It would probably feel too impersonal to use it for dating; but why don’t we use it for job hunting or college admissions? (We do use it for organ donation, and that has saved thousands of lives.)

In this post I will be looking at matching in a somewhat different way. Using a simple model, I’m going to illustrate some of the reasons why it is so painful and frustrating to try to match and keep getting rejected.

Suppose we have two sets of people on either side of a matching market: X and Y. I’ll denote an arbitrarily chosen person in X as x, and an arbitrarily chosen person in Y as y. There’s no reason the two sets can’t have overlap or even be the same set, but making them different sets makes the model as general as possible.

Each person in X wants to match with a person in Y, and vice-versa. But they don’t merely want to accept any possible match; they have preferences over which matches would be better or worse.

In general, we could say that people have utility functions, Ux: Y -> R and Uy: X -> R, that map each possible match partner to the utility of such a match. But that gets very complicated very fast, because it raises the question of when you should keep searching, and when you should stop searching and accept what you have. (There’s a whole literature of search theory on this.)

For now let’s take the simplest possible case, and just say that there are some matches each person will accept, and some they will reject. This can be seen as a special case where the utility functions Ux and Uy always yield a result of 1 (accept) or 0 (reject).

This defines a set of acceptable partners for each person: A(x) is the set of partners x will accept: {y in Y|Ux(y) = 1} and A(y) is the set of partners y will accept: {x in X|Uy(x) = 1}

Then, the set of mutual matches that x can actually get is the set of ys that x wants, which also want x back: M(x) = {y in A(x)|x in A(y)}

Whereas, the set of mutual matches that y can actually get is the set of xs that y wants, which also want y back: M(y) = {x in A(y)|y in A(x)}

This relation is mutual by construction: If x is in M(y), then y is in M(x).

But this does not mean that the sets must be the same size.

For instance, suppose that there are three people in X, x1, x2, x3, and three people in Y, y1, y2, y3.

Let’s say that the acceptable matches are as follows:

A(x1) = {y1, y2, y3}

A(x2) = {y2, y3}

A(x3) = {y2, y3}

A(y1) = {x1, x2, x3}

A(y2) = {x1, x2}

A(y3) = {x1}

This results in the following mutual matches:

M(x1) = {y1, y2, y3}

M(y1) = {x1}

M(x2) = {y2}

M(y2) = {x1, x2}

M(x3) = {}

M(y3) = {x1}

x1 can match with whoever they like; everyone wants to match with them. x2 can match with y2. But x3, despite having the same preferences as x2, and being desired by y3, can’t find any mutual matches at all, because the one person who wants them is a person they don’t want.
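The mutual-match sets above can be computed mechanically from the acceptable-partner sets; here is a minimal sketch (the dictionary encoding is mine):

```python
# Acceptable-partner sets A(.) from the example above
A = {
    "x1": {"y1", "y2", "y3"},
    "x2": {"y2", "y3"},
    "x3": {"y2", "y3"},
    "y1": {"x1", "x2", "x3"},
    "y2": {"x1", "x2"},
    "y3": {"x1"},
}

def mutual_matches(person):
    """M(person): everyone this person accepts who also accepts them back."""
    return {other for other in A[person] if person in A[other]}

# x1 matches everyone they want; x3, despite wanting two people, matches no one.
assert mutual_matches("x1") == {"y1", "y2", "y3"}
assert mutual_matches("x3") == set()
```

Note that the relation is mutual by construction: the comprehension checks membership in both directions, so x in M(y) exactly when y in M(x).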

y1 can only match with x1, but the same is true of y3. So they will be fighting over x1. As long as y2 doesn’t also try to fight over x1, x2 and y2 will be happy together. Yet x3 will remain alone.

Note that the number of mutual matches has no obvious relation with the number of individually acceptable partners. x2 and x3 had the same number of acceptable partners, but x2 found a mutual match and x3 didn’t. y1 was willing to accept more potential partners than y3, but got the same lone mutual match in the end. y3 was only willing to accept one partner, but will get a shot at x1, the one that everyone wants.

One thing is true: Adding another acceptable partner will never reduce your number of mutual matches, and removing one will never increase it. But often changing your acceptable partners doesn’t have any effect on your mutual matches at all.

Now let’s consider what it must feel like to be x1 versus x3.

For x1, the world is their oyster; they can choose whoever they want and be guaranteed to get a match. Life is easy and simple for them; all they have to do is decide who they want most and that will be it.

For x3, life is an endless string of rejection and despair. Every time they try to reach out to suggest a match with someone, they are rebuffed. They feel hopeless and alone. They feel as though no one would ever actually want them—even though in fact there is someone who wants them, it’s just not someone they were willing to consider.

This is of course a very simple and small-scale model; there are only six people in it, and they each only say yes or no. Yet already I’ve got x1 who feels like a rock star and x3 who feels utterly hopeless if not worthless.

In the real world, there are so many more people in the system that the odds that no one is in your mutual match set are negligible. Almost everyone has someone they can match with. But some people have many more matches than others, and that makes life much easier for the ones with many matches and much harder for the ones with fewer.

Moreover, search costs then become a major problem: Even knowing that in all probability there is a match for you somewhere out there, how do you actually find that person? (And that’s not even getting into the difficulty of recognizing a good match when you see it; in this simple model you know immediately, but in the real world it can take a remarkably long time.)

If we think of the acceptable partner sets as preferences, they may not be within anyone’s control; you want what you want. But if we instead characterize them as decisions, the results are quite different, and I think it’s easy to see them, if nothing else, as the decision of how high to set your standards.

This raises a question: When we are searching and not getting matches, should we lower our standards and add more people to our list of acceptable partners?

This simple model would seem to say that we should always do that—there’s no downside, since the worst that can happen is nothing. And x3 for instance would be much happier if they were willing to lower their standards and accept y1. (Indeed, if they did so, there would be a way to pair everyone off happily: x1 with y3, x2 with y2, and x3 with y1.)

But in the real world, searching is often costly: There is at least the time and effort involved, and often a literal application or submission fee; but perhaps worst of all is the crushing pain of rejection. Under those circumstances, adding another acceptable partner who is not a mutual match will actually make you worse off.

That’s pretty much what the job market has been for me for the last six months. I started out with the really good matches: GiveWell, the Oxford Global Priorities Institute, Purdue, Wesleyan, Eastern Michigan University. And after investing considerable effort into getting those applications right, I made it as far as an interview at all those places—but no further.

So I extended my search, applying to dozens more places. I’ve now applied to over 100 positions. I knew that most of them were not good matches, because there simply weren’t that many good matches to be found. And the result of all those 100 applications has been precisely 0 interviews. Lowering my standards accomplished absolutely nothing. I knew going in that these places were not a good fit for me—and it looks like they all agreed.

It’s possible that lowering my standards in some different way might have worked, but even this is not clear: I’ve already been willing to accept much lower salaries than a PhD in economics ought to command, and included positions in my search that are only for a year or two with no job security, and applied to far-flung locales across the globe that I don’t know if I’d really be willing to move to.

Honestly at this point I’ve only been using the following criteria:

1. At least vaguely related to my field (otherwise they wouldn’t want me anyway).

2. A higher salary than I currently get as a grad student (otherwise why bother?).

3. A geographic location where homosexuality is not literally illegal and an institution that doesn’t actively discriminate against LGBT employees (this rules out more than you’d think—there are at least three good postings I didn’t apply to on these grounds).

4. In a region that speaks a language I have at least some basic knowledge of (i.e. preferably English, but also allowing Spanish, French, German, or Japanese).

5. Working conditions that don’t involve working more than 40 hours per week (which has severely detrimental health effects, even ignoring my disability, which would compound the effects).

6. Not working for a company that is implicated in large-scale criminal activity (as a remarkable number of major banks have in fact been implicated).

I don’t feel like these are unreasonably high standards, and yet so far I have failed to land a match.

What’s more, the entire process has been emotionally devastating. While others seem to be suffering from pandemic burnout, I don’t think I’ve made it that far; I think I’d be just as burnt out even if there were no pandemic, simply from how brutal the job market has been.

Why does rejection hurt so much? Why does being turned down for a date, or a job, or a publication feel so utterly soul-crushing? When I started putting together this model I had hoped that thinking of it in terms of match-sets might actually help reduce that feeling, but instead what happened is that it offered me a way of partly explaining that feeling (much as I did in my post on Bayesian Impostor Syndrome).

What is the feeling of rejection? It is the feeling of expending search effort to find someone in your acceptable partner set—and then learning that you were not in their acceptable partner set, and thus you have failed to make a mutual match.

I said earlier that x1 feels like a rock star and x3 feels hopeless. This is because being present in someone else’s acceptable partner set is a sign of status—the more people who consider you an acceptable partner, the more you are “worth” in some sense. And when it’s something as important as a romantic partner or a career, that sense of “worth” is difficult to circumscribe into a particular domain; it begins to bleed outward into a sense of your overall self-worth as a human being.

Being wanted by someone you don’t want makes you feel superior, like they are “beneath” you; but wanting someone who doesn’t want you makes you feel inferior, like they are “above” you. And when you are applying for jobs in a market with a Beveridge Curve as skewed as ours, or trying to get a paper or a book published in a world flooded with submissions, you end up with a lot more cases of feeling inferior than cases of feeling superior. In fact, I even applied for a few jobs that I felt were “beneath” my level—they didn’t take me either, perhaps because they felt I was overqualified.

In such circumstances, it’s hard not to feel like I am the problem, like there is something wrong with me. Sometimes I can convince myself that I’m not doing anything wrong and the market is just exceptionally brutal this year. But I really have no clear way of distinguishing that hypothesis from the much darker possibility that I have done something terribly wrong that I cannot correct and will continue in this miserable and soul-crushing fruitless search for months or even years to come. Indeed, I’m not even sure it’s actually any better to know that you did everything right and still failed; that just makes you helpless instead of defective. It might be good for my self-worth to know that I did everything right; but it wouldn’t change the fact that I’m in a miserable situation I can’t get out of. If I knew I were doing something wrong, maybe I could actually fix that mistake in the future and get a better outcome.

As it is, I guess all I can do is wait for more opportunities and keep trying.

Men and violence

Apr 4 JDN 2459302

Content warning: In this post, I’m going to be talking about violence, including sexual violence. April is Sexual Assault Awareness and Prevention Month. I won’t go into any explicit detail, but I understand that discussion of such topics can still be very upsetting for many people.

After short posts for the past two weeks, get ready for a fairly long post. This is a difficult and complicated topic, and I want to make sure that I state things very clearly and with all necessary nuance.

While the overall level of violence between human societies varies tremendously, one thing is astonishingly consistent: Violence is usually committed by men.

In fact, violence is usually suffered by men as well—with the quite glaring exception of sexual violence. This is why I am particularly offended by claims like “All men benefit from male violence”; no, men who were murdered by other men did not benefit from male violence, and it is frankly appalling to say otherwise. Most men would be better off if male violence were somehow eliminated from the world. (Most women would also be much better off as well, of course.)

I therefore consider it a matter of both moral obligation and self-interest to endeavor to reduce the amount of male violence in the world, which is almost coextensive with reducing the amount of violence in general.

On the other hand, ought implies can, and despite significant efforts I have made to seek out recommendations for concrete actions I could be taking… I haven’t been able to find very many.

The good news is that we appear to be doing something right—overall rates of violent crime have declined by nearly half since 1990. The decline in rape has been slower, only about 25% since 1990, though this is a bit misleading since the legal definition of rape has been expanded during that interval. The causes of this decline in violence are unclear: Some of the most important factors seem to be changes in policing, economic growth, and reductions in lead pollution. For whatever reason, Millennials just don’t seem to commit crimes at the same rates that Gen-X-ers or Boomers did. We are also substantially more feminist, so maybe that’s an important factor too; the truth is, we really don’t know.

But all of this still leaves me asking: What should I be doing?

When I searched for an answer to this question, a significant fraction of the answers I got from various feminist sources were some variation on “ruminate on your own complicity in male violence”. I tried it; it was painful, difficult—and basically useless. I think this is particularly bad advice for someone like me who has a history of depression.

When you ruminate on your own life, it’s easy to find mistakes; but how important were those mistakes? How harmful were they? I can’t say that I’ve never done anything in my whole life that hurt anyone emotionally (can anyone?), but I can only think of a few times I’ve harmed someone physically (mostly by accident, once in self-defense). I’ve definitely never raped or murdered anyone, and as far as I can tell I’ve never done anything that would have meaningfully contributed to anyone getting raped or murdered. If you were to somehow replace every other man in the world with a copy of me, maybe that wouldn’t immediately bring about a utopian paradise—but I’m pretty sure that rates of violence would be a lot lower. (And in this world ruled by my clones, we’d have more progressive taxes! Less military spending! A basic income! A global democratic federation! Greater investment in space travel! Hey, this sounds pretty good, actually… though inbreeding would be a definite concern.) So, okay, I’m no angel; but I don’t think it’s really fair to say that I’m complicit in something that would radically decrease if everyone behaved as I do.

The really interesting thing is, I think this is true of most men. A typical man commits less than the average amount of violence—because there is great skew in the distribution, with most men committing little or no violence and a small number of men committing lots of violence. Truly staggering amounts of violence are committed by those at the very top of the distribution—that would be mass murderers like Hitler and Stalin. It sounds strange, but if all men in the world were replaced by a typical man, the world would surely be better off. The loss of the very best men would be more than compensated by the removal of the very worst. In fact, since most men are not rapists or murderers, replacing every man in the world with the median man would automatically bring the rates of rape and murder to zero. I know that feminists don’t like to hear #NotAllMen; but it’s not even most men. Maybe the reason that the “not all men” argument keeps coming up is… it’s actually kind of true? Maybe it’s not so unreasonable for men to resent the implication that we are complicit in acts we abhor that we have never done and would never do? Maybe this whole concept that an entire sex of people, literally almost half the human race, can share responsibility for violent crimes—is wrong?

I know that most women face a nearly constant bombardment of sexual harassment, and feel pressured to remain constantly vigilant in order to protect themselves against being raped. I know that victims of sexual violence are often blamed for their victimization (though this happens in a lot of crimes, not just sex crimes). I know that #YesAllWomen is true—basically all women have been in some way harmed or threatened by sexual violence. But the fact remains that most men are already not committing sexual violence. Many people seem to confuse the fact that most women are harmed by men with the claim that most men harm women; these are not at all equivalent. As long as one man can harm many women, there don’t need to be very many harmful men for all women to be affected.

Plausible guesses would be that about 20-25% of women suffer sexual assault, committed by about 4% or 5% of men, each of whom commits an average of 4 to 6 assaults—and some of whom commit far more. If these figures are right, then 95% of men are not guilty of sexual assault. The highest plausible estimate I’ve seen is from a study which found that 11% of men had committed rape. Since it’s only one study and its sample size was pretty small, I’m actually inclined to think that this is an overestimate which got excessive attention because it was so shocking. Larger studies rarely find a number above 5%.

But even if we suppose that it’s really 11%, that leaves 89%; in what sense is 89% not “most men”? I saw some feminist sites responding to this result by saying things like “We can’t imprison 11% of men!” but, uh, we almost do already. About 9% of American men will go to prison in their lifetimes. This is probably higher than it should be—it’s definitely higher than any other country—but if those convictions were all for rape, I’d honestly have trouble seeing the problem. (In fact only about 10% of US prisoners are incarcerated for rape.) If the US were the incarceration capital of the world simply because we investigated and prosecuted rape more reliably, that would be a point of national pride, not shame. In fact, the American conservatives who don’t see the problem with our high incarceration rate probably do think that we’re mostly incarcerating people for things like rape and murder—when in fact large portions of our inmates are incarcerated for drug possession, “public order” crimes, or pretrial detention.

Even if that 11% figure is right, “If you know 10 men, one is probably a rapist” is wrong. The people you know are not a random sample. If you don’t know any men who have been to prison, then you likely don’t know any men who are rapists. 37% of prosecuted rapists have prior criminal convictions, and 60% will be convicted of another crime within 5 years. (Of course, most rapes are never even reported; but where would we get statistics on those rapists?) Rapists are not typical men. They may seem like typical men—it may be hard to tell the difference at a glance, or even after knowing someone for a long time. But the fact that narcissists and psychopaths may hide among us does not mean that all of us are complicit in the crimes of narcissists and psychopaths. If you can’t tell who is a psychopath, you may have no choice but to be wary; but telling every man to search his heart is worthless, because the only ones who will listen are the ones who aren’t psychopaths.

That, I think, is the key disagreement here: Where the standard feminist line is “any man could be a rapist, and every man should search his heart”, I believe the truth is much more like, “monsters hide among us, and we should do everything in our power to stop them”. The monsters may look like us, they may often act like us—but they are not us. Maybe there are some men who would commit rapes but can be persuaded out of it—but this is not at all the typical case. Most rapes are committed by hardened, violent criminals and all we can really do is lock them up. (And for the love of all that is good in the world, test all the rape kits!)

It may be that sexual harassment of various degrees is more spread throughout the male population; perhaps the median man indeed commits some harassment at some point in his life. But even then, I think it’s pretty clear that the really awful kinds of harassment are largely committed by a small fraction of serial offenders. Indeed, there is a strong correlation between propensity toward sexual harassment and various measures of narcissism and psychopathy. So, if most men look closely enough, maybe they can think of a few things that they do occasionally that might make women uncomfortable; okay, stop doing those things. (Hint: Do not send unsolicited dick pics. Ever. Just don’t. Anyone who wants to see your genitals will ask first.) But it isn’t going to make a huge difference in anyone’s life. As long as the serial offenders continue, women will still feel utterly bombarded.

There are other kinds of sexual violations that more men commit—being too aggressive, or persisting too much after the first rejection, or sending unsolicited sexual messages or images. I’ve had people—mostly, but not only, men—do things like that to me; but it would be obviously unfair to both these people and actual rape victims to say I’d ever been raped. I’ve been groped a few times, but it seems like quite a stretch to call it “sexual assault”. I’ve had experiences that were uncomfortable, awkward, frustrating, annoying, occasionally creepy—but never traumatic. Never violence. Teaching men (and women! There is evidence that women are not much less likely than men to commit this sort of non-violent sexual violation) not to do these things is worthwhile and valuable in itself—but it’s not going to do much to prevent rape or murder.

Thus, whatever responsibility men have in reducing sexual violence, it isn’t simply to stop; you can’t stop doing what you already aren’t doing.

After pushing through all that noise, at last I found a feminist site making a more concrete suggestion: They recommended that I read a book by Jackson Katz on the subject entitled The Macho Paradox: Why Some Men Hurt Women and How All Men Can Help.

First of all, I must say I can’t remember any other time I’ve read a book that was so poorly titled. The only mention of the phrase “macho paradox” is a brief preface that was added to the most recent edition explaining what the term was meant to mean; it occurs nowhere else in the book. And in all its nearly 300 pages, the book has almost nothing that seriously addresses either the motivations underlying sexual violence or concrete actions that most men could take in order to reduce it.

As far as concrete actions (“How all men can help”), the clearest, most consistent advice the book seems to offer that would apply to most men is “stop consuming pornography” (something like 90% of men and 60% of women regularly consume porn), when in fact there is a strong negative correlation between consumption of pornography and real-world sexual violence. (Perhaps Millennials are less likely to commit rape and murder because we are so into porn and video games!) This advice is literally worse than nothing.

The sex industry exists on a continuum from the adult-only but otherwise innocuous (smutty drawings and erotic novels), through the legal but often problematic (mainstream porn, stripping), to the usually illegal but defensible (consensual sex work), all the way to the utterly horrific and appalling (the sexual exploitation of children). I am well aware that there are many deep problems with the mainstream porn industry, but I confess I’ve never quite seen how these problems are specific to porn rather than endemic to media or even capitalism more generally. Particularly with regard to the above-board sex industry in places like Nevada or the Netherlands, it’s not obvious to me that a prostitute is more exploited than a coal miner, a sweatshop worker, or a sharecropper—indeed, given the choice between those four careers, I’d without hesitation choose to be a prostitute in Amsterdam. Many sex workers resent the paternalistic insistence by anti-porn feminists that their work is inherently degrading and exploitative. Overall, sex workers report job satisfaction not statistically different from the average for all jobs. There are a multitude of misleading statistics reported about the sex industry that often make matters seem far worse than they are.

Katz (all-too) vividly describes the depiction of various violent or degrading sex acts in mainstream porn, but he seems unwilling to admit that any other forms of porn do or even could exist—and worse, like far too many anti-porn feminists, he seems to willfully elide vital distinctions, effectively equating fantasy depiction with genuine violence and consensual kinks with sexual abuse. I like to watch action movies and play FPS video games; does that mean I believe it’s okay to shoot people with machine guns? I know the sophisticated claim is that it somehow “desensitizes” us (whatever that means), but there’s not much evidence of that either. Given that porn and video games are negatively correlated with actual violence, it may in fact be that depicting the fantasy provides an outlet for such urges and helps prevent them from becoming reality. Or, it may simply be that keeping a bunch of young men at home in front of their computers keeps them from going out and getting into trouble. (Then again, homicides actually increased during the COVID pandemic—though most other forms of crime decreased.) But whatever the cause, the evidence is clear that porn and video games don’t increase actual violence—they decrease it.

At the very end of the book, Katz hints at a few other things men might be able to do, or at least certain groups of men: Challenge sexism in sports, the military, and similar male-dominated spaces (you know, if you have clout in such spaces, which I really don’t—I’m an effete liberal intellectual, a paradigmatic “soy boy”; do you think football players or soldiers are likely to listen to me?); educate boys with more positive concepts of masculinity (if you are in a position to do so, e.g. as a teacher or parent); or, the very best advice in the entire book, worth more than the rest of the book combined: Donate to charities that support survivors of sexual violence. Katz doesn’t give any specific recommendations, but here are a few for you: RAINN, NAESV and NSVRC.

Honestly, I’m more impressed by Upworthy’s bulleted list of things men can do, though they’re mostly things that conscientious men do anyway, and even if 90% of men did them, it probably wouldn’t greatly reduce actual violence.

As far as motivations (“Why some men hurt women”), the book does at least manage to avoid the mindless slogan “rape is about power, not sex” (there is considerable evidence that this slogan is false or at least greatly overstated). Still, Katz insists upon collective responsibility, attributing what are in fact typically individual crimes, committed mainly by psychopaths, motivated primarily by anger or sexual desire, to some kind of institutionalized system of patriarchal control that somehow permeates all of society. The fact that violence is ubiquitous does not imply that it is coordinated. It’s very much the same cognitive error as “murderism”.

I agree that sexism exists, is harmful, and may contribute to the prevalence of rape. I agree that there are many widespread misconceptions about rape. I also agree that reducing sexism and toxic masculinity are worthwhile endeavors in themselves, with numerous benefits for both women and men. But I’m just not convinced that reducing sexism or toxic masculinity would do very much to reduce the rates of rape or other forms of violence. In fact, despite the widely reported success of campaigns like the “Don’t Be That Guy” campaign, the best empirical research on the subject suggests that such campaigns actually tend to do more harm than good. The few programs that seem to work are those that focus on bystander interventions—getting men who are not rapists to recognize rapists and stop them. Basically nothing has ever been shown to convince actual rapists; all we can do is deny them opportunities—and while bystander intervention can do that, the most reliable method is probably incarceration. Trying to change their sexist attitudes may be worse than useless.

Indeed, I am increasingly convinced that much—not all, but much—of what is called “sexism” is actually toxic expressions of heterosexuality. Why do most creepy male bosses only ever hit on their female secretaries? Well, maybe because they’re straight? This is not hard to explain. It’s a fair question why there are so many creepy male bosses, but one need not posit any particular misogyny to explain why their targets would usually be women. I guess it’s a bit hard to disentangle; if an incel hates women because he perceives them as universally refusing to sleep with him, is that sexism? What if he’s a gay incel (yes they exist) and this drives him to hate men instead?

In fact, I happen to know of a particular gay boss who has quite a few rumors surrounding him regarding his sexual harassment of male employees. Or you could look at Kevin Spacey, who (allegedly) sexually abused teenage boys. You could tell a complicated story about how this is some kind of projection of misogynistic attitudes onto other men (perhaps for being too “femme” or something)—or you could tell a really simple story about how this man is only sexually abusive toward other men because that’s the gender of people he’s sexually attracted to. Occam’s Razor strongly favors the latter.

Indeed, what are we to make of the occasional sexual harasser who targets men and women equally? On the theory that abuse is caused by patriarchy, that seems pretty hard to explain. On the theory that abusive people sometimes happen to be bisexual, it’s not much of a mystery. (Though I would like to take a moment to debunk the stereotype of the “depraved bisexual”: Bisexuals are no more likely to commit sexual violence, but are far more likely to suffer it—more likely than either straight or gay people, independently of gender. Trans people face even higher risk; the acronym LGBT is in increasing order of danger of violence.)

Does this excuse such behavior? Absolutely not. Sexual harassment and sexual assault are definitely wrong, definitely harmful, and rightfully illegal. But when trying to explain why the victims are overwhelmingly female, the fact that roughly 90% of people are heterosexual is surely relevant. The key explanandum here is not why the victims are usually female, but rather why the perpetrators are usually male.

That, indeed, requires explanation; but such an explanation is really not so hard to come by. Why is it that, in nearly every human society, for nearly every form of violence, the vast majority of that violence is committed by men? It sure looks genetic to me.

Indeed, in any other context aside from gender or race, we would almost certainly reject any explanation other than genetics for such a consistent pattern. Why is it that, in nearly every human society, about 10% of people are LGBT? Probably genetics. Why is it that, in nearly every human society, about 10% of people are left-handed? Genetics. Why, in nearly every human society, do smiles indicate happiness, children fear loud noises, and adults fear snakes? Genetics. Why, in nearly every human society, are men on average much taller and stronger than women? Genetics. Why, in nearly every human society, is about 90% of violence, including sexual violence, committed by men? Clearly, it’s patriarchy.

A massive body of scientific evidence from multiple sources shows a clear causal relationship between increased testosterone and increased aggression. The correlation is moderate, only about 0.38—but it’s definitely real. And men have a lot more testosterone than women: While testosterone varies a frankly astonishing amount between men and over time—including up to a 2-fold difference even over the same day—a typical adult man has about 250 to 950 ng/dL of blood testosterone, while a typical adult woman has only 8 to 60 ng/dL. (An adolescent boy can have as much as 1200 ng/dL!) This is a difference ranging from a minimum of 4-fold to a maximum of over 100-fold, with a typical value of about 20-fold. It would be astonishing if that didn’t have some effect on behavior.

This is of course far from a complete explanation: With a correlation of 0.38, we’ve only explained about 14% of the variance, so what’s the other 86%? Well, first of all, testosterone isn’t the only biological difference between men and women. It’s difficult to identify any particular genes with strong effects on aggression—but the same is true of height, and nobody disputes that the height difference between men and women is genetic.
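The arithmetic behind those figures is simple enough to check directly. Here is a quick sketch in Python; the reference ranges and the 0.38 correlation are the values cited above, and nothing else is assumed:

```python
# Adult blood testosterone reference ranges (ng/dL), as cited above.
men_low, men_high = 250, 950
women_low, women_high = 8, 60

# The smallest and largest possible male/female ratios:
min_ratio = men_low / women_high    # ~4-fold
max_ratio = men_high / women_low    # ~119-fold, i.e. over 100-fold
# A rough "typical" ratio using the midpoint of each range:
typical_ratio = ((men_low + men_high) / 2) / ((women_low + women_high) / 2)

# Share of variance in aggression explained by a correlation of 0.38:
r = 0.38
variance_explained = r ** 2         # ~0.14, leaving ~86% unexplained

print(round(min_ratio, 1), round(max_ratio, 1), round(typical_ratio, 1))
print(round(variance_explained, 2))
```

The midpoint-based “typical” ratio comes out around 18-fold, in the ballpark of the roughly 20-fold figure quoted in the text.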

Clearly societal factors do matter a great deal, or we couldn’t possibly explain why homicide rates vary between countries from less than 3 per million per year in Japan to nearly 400 per million per year in Honduras: a full two orders of magnitude! But gender inequality does not appear to strongly predict homicide rates. Japan is not a very feminist place (in fact, surveys suggest that, after Spain, Japan is the second-worst highly-developed country for women). Sweden is quite feminist, and their homicide rate is relatively low; but it’s still 4 times as high as Japan’s. The US doesn’t strike me as much more sexist than Canada (admittedly subjective—surveys do suggest at least some difference, and in the expected direction), and yet our homicide rate is nearly 3 times as high. Also, I think it’s worth noting that while overall homicide rates vary enormously across societies, the fact that roughly 90% of homicides are committed by men does not. Through some combination of culture and policy, societies can greatly reduce the overall level of violence—but no society has yet managed to change the fact that men are more violent than women.
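As a sanity check on that “orders of magnitude” claim, here is the arithmetic using just the two rates cited above:

```python
import math

japan_rate = 3        # homicides per million people per year (approx.)
honduras_rate = 400   # homicides per million people per year (approx.)

ratio = honduras_rate / japan_rate        # ~133-fold difference
orders_of_magnitude = math.log10(ratio)   # ~2.1

print(round(ratio), round(orders_of_magnitude, 1))
```

A factor of about 133 is just over two orders of magnitude, consistent with the claim in the text.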

I would like to do a similar analysis of sexual assault rates across countries, but unfortunately I really can’t, because different countries have such different laws and different rates of reporting that the figures really aren’t comparable. Sweden infamously has a very high rate of reported sex crimes, but this is largely because they have very broad definitions of sex crimes and very high rates of reporting. The best I can really say for now is there is no obvious pattern of more feminist countries having lower rates of sex crimes. Maybe there really is such a pattern; but the data isn’t clear.

Yet if biology contributes anything to the causation of violence—and at this point I think the evidence for that is utterly overwhelming—then mainstream feminism has done the world a grave disservice by insisting upon only social and cultural causes. Maybe it’s the case that our best options for intervention are social or cultural, but that doesn’t mean we can simply ignore biology. And then again, maybe it’s not the case at all: a neurological treatment to cure psychopathy could cut almost all forms of violence in half.

I want to be completely clear that a biological cause is not a justification or an excuse: literally billions of men manage to have high testosterone levels, and experience plenty of anger and sexual desire, without ever raping or murdering anyone. The fact that men appear to be innately predisposed toward violence does not excuse actual violence, and the fact that rape is typically motivated at least in part by sexual desire is no excuse for committing rape.

In fact, I’m quite worried about the opposite: that the notion that sexual violence is always motivated by a desire to oppress and subjugate women will be used to excuse rape, because men who know that their motivation was not oppression will therefore be convinced that what they did wasn’t rape. If rape is always motivated by a desire to oppress women, and his desire was only to get laid, then clearly, what he did can’t be rape, right? The logic here actually makes sense. If we are to reject this argument—as we must—then we must reject the first premise, that all rape is motivated by a desire to oppress and subjugate women. I’m not saying that’s never a motivation—I’m simply saying we can’t assume it is always.

The truth is, I don’t know how to end violence, and sexual violence may be the most difficult form of violence to eliminate. I’m not even sure what most of us can do to make any difference at all. For now, the best thing to do is probably to donate money to organizations like RAINN, NAESV and NSVRC. Even $10 to one of these organizations will do more to help survivors of sexual violence than hours of ruminating on your own complicity—and cost you a lot less.

Good news for a change

Mar 28 JDN 2459302

When President Biden made his promise to deliver 100 million vaccine doses to Americans within his first 100 days, many were skeptical. Perhaps we had grown accustomed to the anti-scientific attitudes and utter incompetence of Trump’s administration, and no longer believed that the US federal government could do anything right.

The skeptics were wrong. For the promise has not only been kept, it has been greatly exceeded. As of this writing, Biden has been President for 60 days and we have already administered 121 million vaccine doses. If we continue at the current rate, it is likely that we will have administered over 200 million vaccine doses and fully vaccinated over 100 million Americans by Biden’s promised 100-day timeline—twice as fast as what was originally promised. Biden has made another bold promise: Every adult in the United States vaccinated by the end of May. I admit I’m not confident it can be done—but I wasn’t confident we’d hit 100 million by now either.
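The projection in that paragraph is just straight-line extrapolation. A minimal sketch of the arithmetic, using only the figures cited above:

```python
doses_administered = 121_000_000   # doses given in Biden's first 60 days
days_elapsed = 60
promise_window = 100               # the 100-day promise

daily_rate = doses_administered / days_elapsed   # ~2 million doses/day
projected_doses = daily_rate * promise_window    # naive linear projection

print(round(daily_rate / 1_000_000, 1))     # million doses per day
print(round(projected_doses / 1_000_000))   # ~202 million by day 100
```

At the observed pace of roughly 2 million doses per day, the straight-line projection lands a bit over 200 million doses by day 100, consistent with the “twice as fast as promised” claim.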

In fact, the US now has one of the best rates of COVID vaccination in the world, with the proportion of our population vaccinated far above the world average and below only Israel, UAE, Chile, the UK, and Bahrain (plus some tiny countries like Monaco). In fact, we actually have the largest absolute number of vaccinated individuals in the world, surpassing even China and India.

It turns out that the now-infamous map saying that the US and UK were among the countries best-prepared for a pandemic wasn’t so wrong after all; it’s just that having such awful administration for four years made our otherwise excellent preparedness fail. Put someone good in charge, and yes, indeed, it turns out that the US can deal with pandemics quite well.

The overall rate of new COVID cases in the US began to plummet right around the time the vaccination program gained steam, and has plateaued around 50,000 per day for the past few weeks. This is still much too high, but it is a vast improvement over the 200,000 cases per day we had in early January. Our death rate due to COVID now hovers around 1,500 people per day—that’s still a 9/11 every two days. But this is half what our death rate was at its worst. And since our baseline death rate is 7,500 deaths per day, 1,800 of them from heart disease, this now means that COVID is no longer the leading cause of death in the United States; heart disease has once again reclaimed its throne. Of course, people dying from heart disease is still a bad thing; but it’s at least a sign of returning to normalcy.
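Those comparisons can be checked directly. The 2,977 figure for the September 11 death toll is the standard one; the daily rates are those cited above:

```python
covid_deaths_per_day = 1_500
sept_11_deaths = 2_977             # standard figure for September 11, 2001
heart_disease_per_day = 1_800      # baseline US heart disease deaths/day

# Roughly one 9/11 every two days:
two_day_toll = 2 * covid_deaths_per_day            # 3,000 deaths
# COVID now falls below heart disease as a cause of death:
covid_below_heart = covid_deaths_per_day < heart_disease_per_day

print(two_day_toll, covid_below_heart)
```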

Worldwide, the pandemic is slowing down, but still by no means defeated, with over 400,000 new cases and 7,500 deaths every day. The US rate of 17 new cases per 100,000 people per day is about 3 times the world average, but comparable to Germany (17) and Norway (18), and nowhere near as bad as Chile (30), Brazil (35), France (37), or Sweden (45), let alone the very hardest-hit places like Serbia (71), Hungary (78), Jordan (83), Czechia (90), and Estonia (110). (That big gap between Norway and Sweden? It’s because Sweden resisted using lockdowns.) And there is cause for optimism even in these places, as vaccination rates already exceed total COVID cases.
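The “about 3 times the world average” figure checks out if we assume a world population of roughly 7.8 billion (that population figure is my assumption; the case counts are those cited above):

```python
world_new_cases_per_day = 400_000
world_population = 7_800_000_000    # assumed: ~7.8 billion in 2021
us_rate = 17                        # US new cases per 100,000 per day

world_rate = world_new_cases_per_day / world_population * 100_000
us_vs_world = us_rate / world_rate  # ~3.3x the world average

print(round(world_rate, 1), round(us_vs_world, 1))
```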

I can see a few patterns in the rate of vaccination by state: very isolated states have managed to vaccinate their population fastest—Hawaii and Alaska have done very well, and even most of the territories have done quite well (though notably not Puerto Rico). The south has done poorly (for obvious reasons), but not as poorly as I might have feared; even Texas and Mississippi have given at least one dose to 21% of their population. New England has been prioritizing getting as many people with at least one dose as possible, rather than trying to fully vaccinate each person; I think this is the right strategy.

We must continue to stay home when we can and wear masks when we go out. This will definitely continue for at least a few more months, and the vaccine rollout may not even be finished in many countries by the end of the year. In the worst-case scenario, COVID may become an endemic virus that we can’t fully eradicate and we’ll have to keep getting vaccinated every year like we do for influenza (though the good news there is that it likely wouldn’t be much more dangerous than influenza at that point either—though another influenza is nothing to, er, sneeze at).

Yet there is hope at last. Things are finally getting better.