Imagine a powerful artificial intelligence, composed of many parts distributed over a vast area so that it has no particular location. It is incapable of feeling any emotion: Neither love nor hate, neither joy nor sorrow, neither hope nor fear. It has no concept of ethics or morals, only its own programmed directives. It has one singular purpose, which it pursues at any cost. Any who aid its purpose are generously rewarded. Any who resist its purpose are mercilessly crushed.
This is the classic nightmare of AI risk: the “paperclip maximizer” that relentlessly converts the world into whatever it was programmed to produce. But I would say that the paperclippers are already here. Slow, analog versions, perhaps. But they are already getting out of control. We call them corporations.
A corporation is probably not what you visualized when you read the first paragraph of this post, so try reading it again. Which parts are not true of corporations?
Perhaps you think a corporation is not an artificial intelligence? But clearly it’s artificial, and doesn’t it behave in ways that seem intelligent? A corporation has purpose beyond its employees in much the same way that a hive has purpose beyond its bees. A corporation is a human superorganism (and not the only kind either).
Corporations are absolutely, utterly amoral. Their sole directive is to maximize profit. Now, you might think that an individual CEO, or a board of directors, could decide to do something good, or refrain from something evil, for reasons other than profit; and to some extent this is true. But particularly when a corporation is publicly-traded, that CEO and those directors are beholden to shareholders. If shareholders see that the corporation is acting in ways that benefit the community but hurt their own profits, shareholders can rebel by selling their shares or even suing the company. In 1919, Dodge successfully sued Ford for the “crime” of setting wages too high and prices too low.
Humans are altruistic. We are capable of feeling, emotion, and compassion. Corporations are not. Corporations are made of human beings, but they are specifically structured to minimize the autonomy of human choices. They are designed to provide strong incentives to behave in a particular way so as to maximize profit. Even the CEO of a corporation, especially one that is publicly traded, has their hands tied most of the time by the desires of millions of shareholders and customers—so-called “market forces”. Corporations are entirely the result of human actions, but they feel like impersonal forces because they are the result of millions of independent choices, almost impossible to coordinate; so one individual has very little power to change the outcome.
Why would we create such entities? It almost feels as though we were conquered by some alien force that sought to enslave us to its own purposes. But no, we created corporations ourselves. We intentionally set up institutions designed to limit our own autonomy in the name of maximizing profit.
Part of the answer is efficiency: There are genuine gains in economic efficiency due to the corporate structure. Corporations can coordinate complex activity on a vast scale, with thousands or even millions of employees each doing what they are assigned without ever knowing—or needing to know—the whole of which they are a part.
But efficiency alone cannot explain the primacy of corporations, particularly those that are traded on stock markets. For that, we must turn to ideology: The extreme “laissez-faire” conception of capitalism and its modern expression in the ideology of “shareholder value”. Somewhere along the way enough people—or at least enough policymakers—became convinced that the best way to run an economy was to hand over as much as possible to entities that exist entirely to maximize their own profits.
This is not to say that corporations should be abolished entirely. I am certainly not advocating a shift to central planning; I believe in private enterprise. But I should note that private enterprise can also include co-ops, partnerships, and closely-held businesses, rather than publicly traded corporations, and perhaps that’s all we need. Yet there do seem to be significant advantages to the corporate structure: Corporations seem to be spectacularly good at scaling up the production of goods and providing them to a large number of customers. So let’s not get rid of corporations just yet.
Instead, let us keep corporations on a short leash. When properly regulated, corporations can be very efficient at producing goods. But corporations can also cause tremendous damage when given the opportunity. Regulations aren’t just “red tape” that gets in the way of production. They are a vital lifeline that protects us against countless abuses that corporations would otherwise commit.
These vast artificial intelligences are useful to us, so let’s not get rid of them. But never for a moment imagine that their goals are the same as ours. Keep them under close watch at all times, and compel them to use their great powers for good—for, left to their own devices, they can just as easily do great evil.
My birthday is coming up soon, and each year around this time I try to step back and reflect on how the previous year has gone and what I can expect from the next one.
Needless to say, 2020 was not a great year for me. The pandemic and its consequences made this quite a bad year for almost everyone. Months of isolation and fear have made us all stressed and miserable, and even with the vaccines coming out the end is still all too far away. Honestly I think I was luckier than most: My work could be almost entirely done remotely, and my income is a fixed stipend, so financially I faced no hardship at all. But isolation still takes its toll.
Most of my energy this past year has been spent on the job market. I applied to over 70 different job postings, and from that I received 6 interviews, all but one of which I’ve already finished. If they like how I did in those interviews, I’ll be invited to another phase, which in normal times would be a flyout where candidates visit the campus; due to COVID it’s all being done remotely now. And then, finally, I may actually get some job offers. Statistically I think I will probably get some kind of offer at this point, but I can’t be sure—and that uncertainty is quite nerve-wracking. I may get a job and move somewhere new, or I may not and have to stay here for another year and try again. Both outcomes are still quite probable, and I really can’t plan on either one.
If I do actually get a job, this will open a new chapter in my life—and perhaps I will finally be able to settle down with a permanent career, buy a house, start a family. One downside of graduate school I hadn’t really anticipated is how it delays adulthood: You don’t really feel like you are a proper adult, because you are still in the role of a student for several additional years. I am all too ready to be done with being a student. I feel as though I’ve spent all my life preparing to do things instead of actually doing them, and I am now so very tired of preparing.
I don’t even know for sure what I want to do—I feel disillusioned with academia, I haven’t been able to snare any opportunities in government or nonprofits, and I need more financial security than I could get if I leapt headlong into full-time writing. But I am quite certain that I want to actually do something, and no longer simply be trained and prepared (and continually evaluated on that training and preparation).
I’m even reluctant to do a postdoc, because that also likely means packing up and moving again in a few years (though I would prefer it to remaining here another year).
I have to keep reminding myself that all of this is temporary: The pandemic will eventually be quelled by vaccines, and quarantine procedures will end, and life for most of us will return to normal. Even if I don’t get a job I like this year, I probably will next year; and then I can finally tie off my education with a bow and move on. Even if the first job isn’t permanent, eventually one will be, and at last I’ll be able to settle into a stable adult life.
Even the removal of Trump has been far more arduous than I expected. First there was the slow counting of ballots because so many people had (wisely) voted absentee. Then there were the frivolous challenges to the counts—and yes, I mean frivolous in a legal sense, as 61 out of 62 lawsuits were thrown out immediately and the 1 that made it through was a minor technical issue.
In one sense it feels like so much has happened: We have borne witness to historic events in real-time. But in another sense it feels like so little has happened: Staying home all the time under lockdown has meant that days are always much the same, and each day blends into the next. I feel somehow unhinged from time, at once marveling that a year has passed already, and marveling that so much happened in only a year.
I should soon hear back from these job interviews and have a better idea what the next chapter of my life will be. But I know for sure that I’ll be relieved once this one is over.
I was originally planning on titling this post “I hate overstatement”, but I thought that might be itself an overstatement; then I considered leaning into the irony with something like “Overstatement is the worst thing ever”. But no, I think my point best comes across if I exemplify it, rather than present it ironically.
It’s a familiar formula: “[Widespread belief] is wrong! [Extreme alternative view] is true! [Obvious exception]. [Further qualifications]. [Revised, nuanced view that is only slightly different from the widespread belief].”
Here are some examples of the formula (these are not direct quotes but paraphrases of their general views). Note that these are all people I basically agree with, and yet I still find their overstatement annoying:
Bernie Sanders: “Capitalism is wrong! Socialism is better! Well, not authoritarian socialism like the Soviet Union. And some industries clearly function better when privatized. Scandinavian social democracy seems to be the best system.”
Richard Dawkins: “Religion is a delusion! Only atheists are rational! Well, some atheists are also pretty irrational. And most religious people are rational about most things most of the time, and don’t let their religious beliefs interfere too greatly with their overall behavior. Really, what I mean to say is that God doesn’t exist and organized religion is often harmful.”
Black Lives Matter: “Abolish the police! All cops are bastards! Well, we obviously still need some kind of law enforcement system for dealing with major crimes; we can’t just let serial killers go free. In fact, while there are deep-seated flaws in police culture, we could solve a lot of the most serious problems with a few simple reforms like changing the rules of engagement.”
Sam Harris is particularly fond of this formula, so here is a direct quote that follows the pattern precisely:
“The link between belief and behavior raises the stakes considerably. Some propositions are so dangerous that it may even be ethical to kill people for believing them. This may seem an extraordinary claim, but it merely enunciates an ordinary fact about the world in which we live. Certain beliefs place their adherents beyond the reach of every peaceful means of persuasion, while inspiring them to commit acts of extraordinary violence against others. There is, in fact, no talking to some people. If they cannot be captured, and they often cannot, otherwise tolerant people may be justified in killing them in self-defense. This is what the United States attempted in Afghanistan, and it is what we and other Western powers are bound to attempt, at an even greater cost to ourselves and to innocents abroad, elsewhere in the Muslim world. We will continue to spill blood in what is, at bottom, a war of ideas.”
Somehow in a single paragraph he started with the assertion “It is permissible to punish thoughtcrime with death” and managed to qualify it down to “The Afghanistan War was largely justified”. This is literally the difference between a proposition fundamentally antithetical to everything America stands for, and an utterly uncontroversial statement most Americans agree with. Harris often complains that people misrepresent his views, and to some extent this is true, but honestly I think he does this on purpose because he knows that controversy sells. There’s taking things out of context—and then there’s intentionally writing in a style that will maximize opportunities to take you out of context.
I think the idea behind overstating your case is that you can then “compromise” toward your actual view, and thereby seem more reasonable.
If there is some variable X that we want to know the true value of, and I currently believe it is some value x1 while you believe it is some larger value x2, then when I ask what you think, you may not want to tell me x2. Instead you might report a number even larger than x2, chosen to make me adjust all the way to the belief you want me to hold.
For instance, suppose I think the probability of your view being right is p and the probability of my view being right is 1-p. But you think that the probability of your view being right is q > p and the probability of my view being right is 1-q < 1-p.
I tell you that my view is x1. Then I ask you what your view is. What answer should you give?
Well, you can expect that I’ll revise my belief to a new value px + (1-p)x1, where x is whatever answer you give me. The belief you want me to hold is qx2 + (1-q)x1. So your optimal choice is as follows:
qx2 + (1-q)x1 = px + (1-p)x1
x = x1 + (q/p)(x2 - x1)
Since q > p, q/p > 1 and the x you report to me will be larger than your true value x2. You will overstate your case to try to get me to adjust my beliefs more. (Interestingly, if you were less confident in your own beliefs, you’d report a smaller difference. But this seems like a rare case.)
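Here is a quick numerical check of that formula, with made-up values for x1, x2, p, and q (the numbers are mine, purely for illustration):

```python
# Numerical check of the optimal-overstatement formula.
x1 = 10.0   # my current belief
x2 = 20.0   # your true belief
p = 0.5     # the weight I place on your answer when I update
q = 0.8     # the weight you think your answer deserves (q > p)

# Your optimal report: x = x1 + (q/p) * (x2 - x1)
x = x1 + (q / p) * (x2 - x1)
print(x)  # 26.0 -- an overstatement, larger than your true belief x2 = 20

# After hearing x, I update to p*x + (1-p)*x1 ...
my_new_belief = p * x + (1 - p) * x1
# ... which (up to floating-point rounding) is exactly the belief
# you wanted me to hold: q*x2 + (1-q)*x1
target = q * x2 + (1 - q) * x1
print(round(my_new_belief, 6), round(target, 6))  # 18.0 18.0
```

By exaggerating from 20 up to 26, you steer my updated belief precisely to where you wanted it.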
In a simple negotiation over dividing some resource (e.g. over a raise or a price), this is quite reasonable. When you’re a buyer and I’m a seller, our intentions are obvious enough: I want to sell high and you want to buy low. Indeed, the Nash Equilibrium of this game seems to be that we both make extreme offers then compromise on a reasonable offer, all the while knowing that this is exactly what we’re doing.
But when it comes to beliefs about the world, things aren’t quite so simple.
In particular, we have reasons for our beliefs. (Or at least, we’re supposed to!) And evidence isn’t linear. Even when propositions can be placed on a one-dimensional continuum in this way (and quite frankly we shoehorn far too many complex issues onto a simple “left/right” continuum!), evidence that X = x isn’t partial evidence that X = 2x. A strong argument that the speed of light is 3*10^8 m/s isn’t a weak argument that the speed of light is 3*10^9 m/s. A compelling reason to think that taxes should be over 30% isn’t even a slight reason to think that taxes should be over 90%.
To return to my specific examples: Seeing that Norway is a very prosperous country doesn’t give us reasons to like the Soviet Union. Recognizing that religion is empirically false doesn’t justify calling all religious people delusional. Reforming the police is obviously necessary, and diverting funds to other social services is surely a worthwhile goal; but law enforcement is necessary and cannot simply be abolished. And defending against the real threat of Islamist terrorism in no way requires us to institute the death penalty for thoughtcrime.
I don’t know how most people respond to overstatement. Maybe it really does cause them to over-adjust their beliefs. Hyperbole is a very common rhetorical tactic, and for all I know perhaps it is effective on many people.
But personally, here is my reaction: At the very start, you stated something implausible. That has reduced your overall credibility.
If I continue reading and you then deal with various exceptions and qualifications, resulting in a more reasonable view, I do give you some credit for that; but now I am faced with a dilemma: Either (1) you were misrepresenting your view initially, or (2) you are engaging in a motte-and-bailey doctrine, trying to get me to believe the strong statement while you can only defend the weak statement. Either way I feel like you are being dishonest and manipulative. I trust you less. I am less interested in hearing whatever else you have to say. I am in fact less likely to adopt your nuanced view than I would have been if you’d simply presented it in the first place.
And that’s assuming I have the opportunity to hear your full nuanced version. If all I hear is the sound-bite overstatement, I will come away with an inaccurate assessment of your beliefs. I will have been presented with an implausible claim and evidence that doesn’t support that claim. I will reject your view out of hand, without ever actually knowing what your view truly was.
Furthermore, I know that many others who are listening are not as thoughtful as I am about seeking out detailed context, so even if I know the nuanced version I know—and I think you know—that some people are going to only hear the extreme version.
Maybe what it really comes down to is a moral question: Is this a good-faith discussion where we are trying to reach the truth together? Or is this a psychological manipulation to try to get me to believe what you believe? Am I a fellow rational agent seeking knowledge with you? Or am I a behavior machine that you want to control by pushing the right buttons?
I won’t say that overstatement is always wrong—because that would be an overstatement. But please, make an effort to avoid it whenever you can.
I received several books for Christmas this year, and the one I was most excited to read first was The Sense of Style by Steven Pinker. Pinker is exactly the right person to write such a book: He is both a brilliant linguist and cognitive scientist and also an eloquent and highly successful writer. There are two other books on writing that I rate at the same tier: On Writing by Stephen King, and The Art of Fiction by John Gardner. Don’t bother with style manuals from people who only write style manuals; if you want to learn how to write, learn from people who are actually successful at writing.
Indeed, I knew I’d love The Sense of Style as soon as I read its preface, containing some truly hilarious takedowns of Strunk & White. And honestly Strunk & White are among the best standard style manuals; they at least actually manage to offer some useful advice while also being stuffy, pedantic, and often outright inaccurate. Most style manuals only do the second part.
One of Pinker’s central focuses in The Sense of Style is The Curse of Knowledge, an all-too-common bias in which knowing things makes us unable to appreciate that other people don’t already know them. I think I succumbed to this failing most greatly in my first book, Special Relativity from the Ground Up, in which my concept of “the ground” was above most people’s ceilings. I was trying to write for high school physics students, and I think the book ended up mostly being read by college physics professors.
The problem is surely a real one: After years of gaining expertise in a subject, we are all liable to forget the difficulty of reaching our current summit and automatically deploy concepts and jargon that only a small group of experts actually understand. But I think Pinker underestimates the difficulty of escaping this problem, because it’s not just a cognitive bias that we all suffer from time to time. It’s also something that our society strongly incentivizes.
Pinker points out that a small but nontrivial proportion of published academic papers are genuinely well written, using this to argue that obscurantist jargon-laden writing isn’t necessary for publication; but he doesn’t seem to even consider the fact that nearly all of those well-written papers were published by authors who already had tenure or even distinction in the field. I challenge you to find a single paper written by a lowly grad student that could actually get published without being full of needlessly technical terminology and awkward passive constructions: “A murine model was utilized for the experiment, in an acoustically sealed environment” rather than “I tested using mice and rats in a quiet room”. This is not because grad students are more thoroughly entrenched in the jargon than tenured professors (quite the contrary), nor because grad students are worse writers in general (that one could really go either way), but because grad students have more to prove. We need to signal our membership in the tribe, whereas once you’ve got tenure—or especially once you’ve got an endowed chair or something—you have already proven yourself.
Pinker seems to briefly touch on this insight (p. 69) without fully appreciating its significance: “Even when we have an inkling that we are speaking in a specialized lingo, we may be reluctant to slip back into plain speech. It could betray to our peers the awful truth that we are still greenhorns, tenderfoots, newbies. And if our readers do know the lingo, we might be insulting their intelligence while spelling it out. We would rather run the risk of confusing them while at least appearing to be sophisticated than take a chance at belaboring the obvious while striking them as naive or condescending.”
What we are dealing with here is a signaling problem. The fact that one can write better once one is well-established is the phenomenon of countersignaling, where one who has already established their status stops investing in signaling.
Here’s a simple model for you. Suppose each person has a level of knowledge x, which they are trying to demonstrate. They know their own level of knowledge, but nobody else does.
Suppose that when we observe someone’s knowledge, what we get is an imperfect observation x+e: the real value of x plus some amount of error e. Nobody knows exactly what the error is. To keep the model as simple as possible, I’ll assume that e is drawn from a uniform distribution between -1 and 1.
Finally, assume that we are trying to select people above a certain threshold: Perhaps we are publishing in a journal, or hiring candidates for a job. Let’s call that threshold z. If x < z-1, then since e can never be larger than 1, we will immediately observe that they are below the threshold and reject them. If x > z+1, then since e can never be smaller than -1, we will immediately observe that they are above the threshold and accept them.
But when z-1 < x < z+1, we may think they are above the threshold when they actually are not (if e is positive), or think they are not above the threshold when they actually are (if e is negative).
So then let’s say that they can invest in signaling by putting in some amount of visible work y (like citing obscure papers or using complex jargon). This additional work may be costly and provide no real value in itself, but it can still be useful so long as one simple condition is met: It’s easier to do if your true knowledge x is high.
In fact, for this very simple model, let’s say that you are strictly limited by the constraint that y <= x. You can’t show off what you don’t know.
If your true value x > z, then you should choose y = x. Then, upon observing your signal, we know immediately that you must be above the threshold.
But if your true value x < z, then you should choose y = 0, because there’s no point in signaling that you were almost at the threshold. You’ll still get rejected.
Yet remember from before that only those with z-1 < x < z+1 actually need to bother signaling at all. Those with x > z+1 can actually countersignal, by also choosing y = 0. Since you already have tenure, nobody doubts that you belong in the club.
This means we’ll end up with three groups: Those with x < z, who don’t signal and don’t get accepted; those with z < x < z+1, who signal and get accepted; and those with x > z+1, who don’t signal but get accepted. Then life will be hardest for those who are just above the threshold, who have to spend enormous effort signaling in order to get accepted—and that sure does sound like grad school.
You can make the model more sophisticated if you like: Perhaps the error isn’t uniformly distributed, but follows some other distribution with wider support (like a normal distribution, or a logistic distribution); perhaps the signaling isn’t perfect, but itself has some error; and so on. With such additions, you can get a result where the least-qualified still signal a little bit so they get some chance, and the most-qualified still signal a little bit to avoid a small risk of being rejected. But it’s a fairly general phenomenon that those closest to the threshold will be the ones who have to spend the most effort in signaling.
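The cutoff logic of the simple model can be sketched in a few lines (a toy illustration of my own, using the variable names from the text; the numeric examples are arbitrary):

```python
def signal_effort(x, z):
    """Optimal visible signaling effort y for true knowledge x and threshold z,
    in the simple model: y <= x, observation error uniform on [-1, 1]."""
    if x < z:
        return 0.0   # below threshold: no amount of signaling can save you
    if x > z + 1:
        return 0.0   # far above threshold: countersignal, since acceptance is certain
    return x         # just above threshold: signal at full strength, y = x

# Effort is concentrated on those just above the threshold:
z = 5.0
for x in [4.0, 5.5, 7.0]:
    print(x, signal_effort(x, z))
# 4.0 0.0  (rejected anyway, so doesn't bother)
# 5.5 5.5  (must signal hard to be distinguished)
# 7.0 0.0  (countersignals: already safely above z+1)
```

The three branches correspond exactly to the three groups described above: the rejected, the strivers, and the secure.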
This reveals a disturbing overlap between the Curse of Knowledge and Impostor Syndrome: We write in impenetrable obfuscationist jargon because we are trying to conceal our own insecurity about our knowledge and our status in the profession. We’d rather you not know what we’re talking about than have you realize that we don’t know what we’re talking about.
For the truth is, we don’t know what we’re talking about. And neither do you, and neither does anyone else. This is the agonizing truth of research that nearly everyone doing research knows, but one must be either very brave, very foolish, or very well-established to admit out loud: It is in the nature of doing research on the frontier of human knowledge that there is always far more that we don’t understand about our subject than that we do understand.
I would like to be more open about that. I would like to write papers saying things like “I have no idea why it turned out this way; it doesn’t make sense to me; I can’t explain it.” But to say that the profession disincentivizes speaking this way would be a grave understatement. It’s more accurate to say that the profession punishes speaking this way to the full extent of its power. You’re supposed to have a theory, and it’s supposed to work. If it doesn’t actually work, well, maybe you can massage the numbers until it seems to, or maybe you can retroactively change the theory into something that does work. Or maybe you can just not publish that paper and write a different one.
The distribution of z-scores reported in published papers looks like a bell curve, except that almost all the values between -2 and 2 are mysteriously missing.
If we were actually publishing all the good science that gets done, it would in fact be a very nice bell curve. All those missing values are papers that never got published, or results that were excluded from papers, or statistical analyses that were massaged, in order to get a p-value less than the magical threshold for publication of 0.05. (For the statistically uninitiated, a z-score less than -2 or greater than +2 generally corresponds to a p-value less than 0.05, so these are effectively the same constraint.)
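A toy simulation makes the filter vivid (my own illustration, not from the original analysis): generate z-scores for studies of true null effects, keep only the “significant” ones, and the middle of the bell curve vanishes.

```python
import random

random.seed(0)
# Simulate z-scores from 10,000 studies of true null effects:
all_z = [random.gauss(0, 1) for _ in range(10_000)]
# Publication filter: only |z| > 2 (i.e., p < 0.05) gets published.
published = [z for z in all_z if abs(z) > 2]

print(len(all_z), len(published))      # roughly 5% survive the filter
print(min(abs(z) for z in published))  # no published |z| is below 2:
                                       # the middle of the bell curve is gone
```

The published literature then shows two tails with a hollow center, exactly the shape described above.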
I have literally never read a single paper published in an academic journal in the last 50 years that said in plain language, “I have no idea what’s going on here.” And yet I have read many papers—probably most of them, in fact—where that would have been an appropriate thing to say. It’s actually quite a rare paper, at least in the social sciences, that actually has a theory good enough to really precisely fit the data and not require any special pleading or retroactive changes. (Often the bar for a theory’s success is lowered to “the effect is usually in the right direction”.) Typically results from behavioral experiments are bizarre and baffling, because people are a little screwy. It’s just that nobody is willing to stake their career on being that honest about the depth of our ignorance.
This is a deep shame, for the greatest advances in human knowledge have almost always come from people recognizing the depth of their ignorance. Paradigms never shift until people recognize that the one they are using is defective.
This is why it’s so hard to beat the Curse of Knowledge: You need to signal that you know what you’re talking about, and the truth is you probably don’t, because nobody does. So you need to sound like you know what you’re talking about in order to get people to listen to you. You may be offering nothing more than educated guesses based on extremely limited data, but that’s actually the best anyone can do; those other people saying they have it all figured out are either doing the same thing, or they’re doing something even less reliable than that. So you’d better sound like you have it all figured out, and that’s a lot more convincing when you “utilize a murine model” than when you “use rats and mice”.
Perhaps we can at least push a little bit toward plainer language. It helps to be addressing a broader audience: It is both blessing and curse that whatever I put on this blog is what you will read, without any gatekeepers in my path. I can use plainer language here if I so choose, because no one can stop me. But of course there’s a signaling risk here as well: The Internet is a public place, and potential employers can read this as well, and perhaps decide they don’t like me speaking so plainly about the deep flaws in the academic system. Maybe I’d be better off keeping my mouth shut, at least for a while. I’ve never been very good at keeping my mouth shut.
Once we get established in the system, perhaps we can switch to countersignaling, though even this doesn’t always happen. I think there are two reasons this can fail: First, you can almost always try to climb higher. Once you have tenure, aim for an endowed chair. Once you have that, try to win a Nobel. Second, once you’ve spent years of your life learning to write in a particular stilted, obscurantist, jargon-ridden way, it can be very difficult to change that habit. People have been rewarding you all your life for writing in ways that make your work unreadable; why would you want to take the risk of suddenly making it readable?
I don’t have a simple solution to this problem, because it is so deeply embedded. It’s not something that one person or even a small number of people can really fix. Ultimately we will need to, as a society, start actually rewarding people for speaking plainly about what they don’t know. Admitting that you have no clue will need to be seen as a sign of wisdom and honesty rather than a sign of foolishness and ignorance. And perhaps even that won’t be enough: For the fact will still remain that knowing which of the things you know other people don’t know is itself a very difficult thing to do.