Games as economic simulations—and education tools

Mar 5, JDN 2457818 [Sun]

Moore’s Law is a truly astonishing phenomenon. Now that we are well into the 21st century (I’ve lived more of my life in the 21st century than the 20th at this point!) it may finally be slowing down a little bit, but it has had quite a run, and even this could be a temporary slowdown due to economic conditions or the lull before a new paradigm (quantum computing?) matures. Since at least 1975, the computing power of an individual processor has doubled approximately every year and a half; that means it has doubled over 25 times—or in other words that it has increased by a factor of over 30 million. I now have in my pocket a smartphone with several thousand times the processing speed of the Apollo Guidance Computer that navigated astronauts to the Moon.

This meteoric increase in computing power has had an enormous impact on the way science is done, including economics. Simple theoretical models that could be solved by hand are now being replaced by enormous simulation models that have to be processed by computers. It is now commonplace to devise models with systems of dozens of nonlinear equations that are literally impossible to solve analytically, and just solve them iteratively with computer software.
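To make that concrete, here is a minimal sketch of the iterative approach in Python. The two equations are toy stand-ins I invented for illustration, not any particular economic model:

```python
# A toy nonlinear system with no closed-form solution, solved iteratively.
# Both equations are illustrative placeholders, not an actual economic model.
import numpy as np
from scipy.optimize import fsolve

def equations(v):
    x, y = v
    return [x**2 + np.exp(y) - 5,   # stand-in for one equilibrium condition
            np.sin(x) + x * y - 1]  # stand-in for another

solution = fsolve(equations, x0=[1.0, 1.0])  # iterate from an initial guess
print(solution)             # the root the solver converged to
print(equations(solution))  # residuals; both should be ~0
```

Real simulation models do exactly this with dozens or hundreds of equations; only the scale changes.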

But one application of this technology that I believe is currently underutilized is video games.

As a culture, we still have the impression that video games are for children; even games like Dragon Age and Grand Theft Auto that are explicitly for adults (and really quite inappropriate for children!) are viewed as in some sense “childish”—that no serious adult would be involved with such frivolities. The same cultural critics who treat Shakespeare’s vagina jokes as the highest form of art are liable to dismiss the poignant critique of war in Call of Duty: Black Ops or the reflections on cultural diversity in Skyrim as mere puerility.

But video games are an art form with a fundamentally greater potential than any other. Now that graphics are almost photorealistic, there is really nothing you can do in a play or a film that you can’t do in a video game—and there is so, so much more that you can only do in a game.
In what other medium can we witness the spontaneous emergence and costly aftermath of a war? Yet EVE Online has this sort of event every year or so—just today there was a surprise attack involving hundreds of players that destroyed thousands of hours’—and dollars’—worth of starships, something that has more or less become an annual tradition. A few years ago there was a massive three-faction war that destroyed over $300,000 in ships and has now been commemorated as “the Bloodbath of B-R5RB”.
Indeed, the immersion and interactivity of games present an opportunity to do nothing less than experimental macroeconomics. For generations it has been impossible, or at least absurdly unethical, to ever experimentally manipulate an entire macroeconomy. But in a video game like EVE Online or Second Life, we can now do so easily, cheaply, and with little or no long-term harm to the participants—and we can literally control everything in the experiment. Forget the natural resource constraints and currency exchange rates—we can change the laws of physics if we want. (Indeed, EVE’s whole trade network is built around FTL jump points, and in Second Life it’s a basic part of the interface that everyone can fly like Superman.)

This provides untold potential for economic research. With sufficient funding, we could build a game that would allow us to directly test hypotheses about the most fundamental questions of economics: How do governments emerge and maintain security? How is the rule of law sustained, and when can it be broken? What controls the value of money and the rate of inflation? What is the fundamental cause of unemployment, and how can it be corrected? What influences the rate of technological development? How can we maximize the rate of economic growth? What effect does redistribution of wealth have on employment and output? I envision a future where we can directly simulate these questions with thousands of eager participants, varying the subtlest of parameters and carrying out events over any timescale we like from seconds to centuries.

Nor is the potential of games in economics limited to research; it also has enormous untapped potential in education. I’ve already seen in my classes how tabletop-style games with poker chips can teach a concept better in a few minutes than hours of writing algebra derivations on the board; but custom-built video games could be made that would teach economics far better still, and to a much wider audience. In a well-designed game, people could really feel the effects of free trade or protectionism, not just on themselves as individuals but on entire nations that they control—watch their GDP numbers go down as they scramble to produce in autarky what they could have bought for half the price if not for the tariffs. They could see, in real time, how in the absence of environmental regulations and Pigovian taxes the actions of millions of individuals could despoil our planet for everyone.

Of course, games are fundamentally works of fiction, subject to the Fictional Evidence Fallacy and only as reliable as their authors make them. But so it is with all forms of art. I have no illusions about the fact that we will never get the majority of the population to regularly read peer-reviewed empirical papers. But perhaps if we are clever enough in the games we offer them to play, we can still convey some of the knowledge that those papers contain. We could also update and expand the games as new information comes in. Instead of complaining that our students are spending time playing games on their phones and tablets, we could actually make education into games that are as interesting and entertaining as the ones they would have been playing. We could work with the technology instead of against it. And in a world where more people have access to a smartphone than to a toilet, we could finally bring high-quality education to the underdeveloped world quickly and cheaply.

Rapid growth in computing power has given us a gift of great potential. But soon our capacity will widen even further. Even if Moore’s Law slows down, computing power will continue to increase for a while yet. Soon enough, virtual reality will finally take off and we’ll have even greater depth of immersion available. The future is bright—if we can avoid the corporatist cyberpunk dystopia we seem to be hurtling toward, of course.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)
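To see why a bounded slope matters, here is a quick numerical check using an inverse-S cubic of my own choosing (endpoints fixed at w(0) = 0 and w(1) = 1, maximum slope 3 at the ends); this is a toy stand-in, not Kahneman’s calibrated weighting function:

```python
# With a smooth weighting function whose slope is bounded, a move from
# 0 to 1e-8 can only shift the decision weight by a comparably tiny amount.
def w(p):
    # Inverse-S cubic: w(0) = 0, w(1) = 1, steepest (slope 3) at the endpoints.
    return 4 * (p - 0.5) ** 3 + 0.5

print(w(1e-8) - w(0))    # ~3e-8: essentially nothing
print(w(0.1) - w(0.01))  # ~0.21: the function only moves at moderate probabilities
```

A bounded-slope function simply has no room for a dramatic jump right at zero.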

If that’s the case, then switching from 0% to 0.00001% should have no more effect on real behavior than a switch from 0% to 0.0001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or about 10^-9 (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to include them in order to show any difference at all. The two values first deviate at the ninth decimal place.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up four months of your lifetime income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
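For anyone who wants to check the arithmetic, here it is as a short Python script, under the same assumptions as above (log utility, $100,000 baseline, $2 million lifetime income):

```python
from math import log

BASELINE = 100_000   # dollars; utility = ln(income / BASELINE), in hQALY
INCOME = 2_000_000   # lifetime income
PRIZE = 100_000_000
p_one = 1e-9         # chance of winning with a single ticket
p_life = 4e-6        # chance of ever winning, buying $2 tickets weekly for 80 years
spent = 8_000        # lifetime spending on tickets

u_none = log(INCOME / BASELINE)
u_ticket = (1 - p_one) * u_none + p_one * log((INCOME + PRIZE) / BASELINE)
# The win branch ignores the $8,000 spent, matching the post's ln(1020).
u_habit = ((1 - p_life) * log((INCOME - spent) / BASELINE)
           + p_life * log((INCOME + PRIZE) / BASELINE))

print(u_none)            # ~2.9957322736
print(u_ticket)          # ~2.9957322775
print(u_habit)           # ~2.9917399955
print(u_none - u_habit)  # ~0.004 hQALY, i.e. 0.4 QALY lost
```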

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands? It’s not uncommon for an economics experiment to cost thousands, it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a P probability of $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
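In code, the design is just a grid of (X, P, Y) conditions; the specific values below are placeholders that a pilot study would refine:

```python
# Candidate conditions: pay $X for a probability P of winning $Y*X.
# These particular numbers are placeholders, not a final design.
conditions = [
    # (X: cost in $,  P: win probability,  Y: payout multiplier,  max plays)
    (2,  0.01, 100, None),  # "$2 for a 1% chance of $200; play as often as you like"
    (10, 0.05, 25,  1),     # "$10 for a 5% chance of $250; play once or not at all"
]

for x, p, y, max_plays in conditions:
    ev = p * (y * x) - x  # expected net gain of a single play
    print(f"Pay ${x} for a {p:.0%} chance of ${y * x}: expected net ${ev:+.2f}")
```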

So I think the first step is to find some dilemmas, cases where people feel ambivalent, and different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern:

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%
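In computational terms, the hypothesis is that people apply something like the following lookup and then throw away everything but the label. The percentages above are category prototypes; the boundary values in this sketch are my own rough guesses, and finding the real edges (if they exist) is the whole point of the proposed experiments:

```python
# Categorical subjective probability: map p to a label, discard the rest.
# Interior boundaries are rough guesses; only 0 and 1 are (I suspect) sharp.
def category(p):
    if p <= 0.0:
        return "impossible"
    if p >= 1.0:
        return "certain"
    edges = [(0.005, "almost impossible"), (0.05, "very unlikely"),
             (0.15, "unlikely"), (0.35, "fairly unlikely"),
             (0.65, "roughly even odds"), (0.85, "fairly likely"),
             (0.95, "likely"), (0.995, "very likely")]
    for edge, label in edges:
        if p < edge:
            return label
    return "almost certain"

print(category(0.0))   # impossible
print(category(1e-9))  # almost impossible: anything above zero escapes "impossible"
print(category(0.01))  # very unlikely
print(category(0.02))  # very unlikely: double the probability, same label
```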

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining if their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
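To sketch what that Bayesian procedure might look like: put a discrete prior over where one category edge sits, then update it with each observed choice. Everything here is a placeholder, especially the crude 90%/10% choice model:

```python
# Toy Bayesian update for the location of one category edge.
# Placeholder choice model: a subject accepts the gamble with probability
# 0.9 if the stated probability is above their edge, 0.1 if it is below.
import numpy as np

edges = np.linspace(0.01, 0.10, 10)           # candidate edge locations
posterior = np.ones_like(edges) / len(edges)  # start from a flat prior

def update(posterior, stated_p, accepted):
    likelihood = np.where(stated_p > edges, 0.9, 0.1)
    if not accepted:
        likelihood = 1.0 - likelihood
    posterior = posterior * likelihood
    return posterior / posterior.sum()

posterior = update(posterior, stated_p=0.05, accepted=True)  # one hypothetical trial
print(np.round(posterior, 3))  # mass shifts toward edges below 0.05
```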

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

The replication crisis, and the future of science

Aug 27, JDN 2457628 [Sat]

After settling in a little bit in Irvine, I’m now ready to resume blogging, but for now it will be on a reduced schedule. I’ll release a new post every Saturday, at least for the time being.

Today’s post was chosen by Patreon vote, though only one person voted (this whole Patreon voting thing has not been as successful as I’d hoped). It’s about something we scientists really don’t like to talk about, but definitely need to: We are in the middle of a major crisis of scientific replication.

Whenever large studies attempt to replicate published scientific results, the success rate is almost always dismal.

Psychology is the one everyone likes to pick on, because their record is particularly bad. Only 39% of studies were really replicated with the published effect size, though a further 36% were at least qualitatively but not quantitatively similar. Yet economics has its own replication problem, and even medical research is not immune to replication failure.

It’s important not to overstate the crisis; the majority of scientific studies do at least qualitatively replicate. We are doing better than flipping a coin, which is better than one can say of financial forecasters.
There are three kinds of replication, and only one of them should be expected to give near-100% results. That kind is reanalysis: when you take the same data and use the same methods, you absolutely should get the exact same results. I favor making reanalysis a routine requirement of publication; if we can’t get your results by applying your statistical methods to your data, then your paper needs revision before we can entrust it to publication. A number of papers have failed on reanalysis, which is absurd and embarrassing; the worst offender was probably Reinhart-Rogoff, which was used in public policy decisions around the world despite having spreadsheet errors.

The second kind is direct replication—when you do the exact same experiment again and see if you get the same result within error bounds. This kind of replication should work something like 90% of the time, but in fact works more like 60% of the time.

The third kind is conceptual replication—when you do a similar experiment designed to test the same phenomenon from a different perspective. This kind of replication should work something like 60% of the time, but actually only works about 20% of the time.

Economists are well equipped to understand and solve this crisis, because it’s not actually about science. It’s about incentives. I facepalm every time I see another article by an aggrieved statistician about the “misunderstanding” of p-values; no, scientists aren’t misunderstanding anything. They know damn well how p-values are supposed to work. So why do they keep using them wrong? Because their jobs depend on doing so.

The first key point to understand here is “publish or perish”; academics in an increasingly competitive system are required to publish their research in order to get tenure, and frequently required to get tenure in order to keep their jobs at all. (Or they could become adjuncts, who are paid one-fifth as much.)

The second is the fundamentally defective way our research journals are run (as I have discussed in a previous post). As private for-profit corporations whose primary interest is in raising more revenue, our research journals aren’t trying to publish what will genuinely advance scientific knowledge. They are trying to publish what will draw attention to themselves. It’s a similar flaw to what has arisen in our news media; they aren’t trying to convey the truth, they are trying to get ratings to draw advertisers. This is how you get hours of meaningless fluff about a missing airliner and then a single chyron scroll about a war in Congo or a flood in Indonesia. Research journals haven’t fallen quite so far because they have reputations to uphold in order to attract scientists to read them and publish in them; but still, their fundamental goal is and has always been to raise attention in order to raise revenue.

The best way to do that is to publish things that are interesting. But if a scientific finding is interesting, that means it is surprising. It has to be unexpected or unusual in some way. And above all, it has to be positive; you have to have actually found an effect. Except in very rare circumstances, the null result is never considered interesting. This adds up to making journals publish what is improbable.

In particular, it creates a perfect storm for the abuse of p-values. A p-value, roughly speaking, is the probability you would get the observed result if there were no effect at all—for instance, the probability that you’d observe this wage gap between men and women in your sample if in the real world men and women were paid the exact same wages. The standard heuristic is a p-value of 0.05; indeed, it has become so enshrined that it is almost an explicit condition of publication now. Your result must be less than 5% likely to happen if there is no real difference. But if you will only publish results that show a p-value of 0.05, then the papers that get published and read will only be the ones that found such p-values—which renders the p-values meaningless.

It was never particularly meaningful anyway; as we Bayesians have been trying to explain since time immemorial, it matters how likely your hypothesis was in the first place. For something like wage gaps where we’re reasonably sure, but maybe could be wrong, the p-value is not too unreasonable. But if the theory is almost certainly true (“does gravity fall off as the inverse square of distance?”), even a high p-value like 0.35 is still supportive, while if the theory is almost certainly false (“are human beings capable of precognition?”—actual study), even a tiny p-value like 0.001 is still basically irrelevant. We really should be using much more sophisticated inference techniques, but those are harder to do, and don’t provide the nice simple threshold of “Is it below 0.05?”
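A back-of-envelope Bayesian calculation makes the point. As a crude stand-in for a real Bayes factor, treat the likelihood ratio of a significant result under the alternative versus the null as the study’s power divided by the p-value:

```python
# How much should a "significant" result move your beliefs?
# The Bayes factor (power / p) is a crude approximation, for illustration only.
def posterior(prior, p_value, power=0.8):
    bayes_factor = power / p_value
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

print(posterior(prior=0.5,  p_value=0.05))   # plausible hypothesis: ~0.94
print(posterior(prior=1e-6, p_value=0.001))  # precognition: still only ~0.0008
```

With a sufficiently implausible hypothesis, even p = 0.001 leaves the posterior below a tenth of a percent.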

But okay, p-values can be useful in many cases—if they are used correctly and you see all the results. If you have effect X with p-values 0.03, 0.07, 0.01, 0.06, and 0.09, effect X is probably a real thing. If you have effect Y with p-values 0.04, 0.02, 0.29, 0.35, and 0.74, effect Y is probably not a real thing. But I’ve just set it up so that these would be published exactly the same. They each have two published papers with “statistically significant” results. The other papers never get published and therefore never get seen, so we throw away vital information. This is called the file drawer problem.
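The file drawer problem is easy to demonstrate by simulation; here a hundred hypothetical labs all test an effect that does not exist:

```python
# The file drawer in miniature: 100 labs test a nonexistent effect,
# but only the "significant" results get published.
import numpy as np

rng = np.random.default_rng(0)
n_labs, n = 100, 30

t_stats = []
for _ in range(n_labs):
    sample = rng.normal(0.0, 1.0, size=n)  # the true effect is exactly zero
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    t_stats.append(t)

published = [t for t in t_stats if abs(t) > 2.045]  # two-sided p < 0.05, df = 29
print(f"{len(published)} of {n_labs} labs publish a 'significant' effect")
# Expect about 5 spurious findings in print, while ~95 null results vanish.
```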

Researchers often have a lot of flexibility in designing their experiments. If their only goal were to find truth, they would use this flexibility to test a variety of scenarios and publish all the results, so they can be compared holistically. But that isn’t their only goal; they also care about keeping their jobs so they can pay rent and feed their families. And under our current system, the only way to ensure that you can do that is by publishing things, which basically means only including the parts that showed up as statistically significant—otherwise, journals aren’t interested. And so we get huge numbers of papers published that tell us basically nothing, because we set up such strong incentives for researchers to give misleading results.

The saddest part is that this could be easily fixed.

First, reduce the incentives to publish by finding other ways to evaluate the skill of academics—like teaching, for goodness’ sake. Working papers are another good approach. Journals already get far more submissions than they know what to do with, and most of these papers will never be read by more than a handful of people. We don’t need more published findings, we need better published findings—so stop incentivizing mere publication and start finding ways to incentivize research quality.

Second, eliminate private for-profit research journals. Science should be done by government agencies and nonprofits, not for-profit corporations. (And yes, I would apply this to pharmaceutical companies as well, which should really be pharmaceutical manufacturers who make cheap drugs based off of academic research and carry small profit margins.) Why? Again, it’s all about incentives. Corporations have no reason to want to find truth and every reason to want to tilt it in their favor.

Third, increase the number of tenured faculty positions. Instead of building so many new grand edifices to please your plutocratic donors, use your (skyrocketing) tuition money to hire more professors so that you can teach more students better. You can find even more funds if you cut the salaries of your administrators and football coaches. Come on, universities; you are the one industry in the world where labor demand and labor supply are the same people a few years later. You have no excuse for not having the smoothest market clearing in the world. You should never have gluts or shortages.

Fourth, require pre-registration of research studies (as some branches of medicine already do). If the study is sound, an optimal rational agent shouldn’t care in the slightest whether it had a positive or negative result, and if our ape brains won’t let us think that way, we need to establish institutions to force it to happen. Journals shouldn’t even see the effect size and p-value before they make the decision to publish; all they should care about is whether the experiment makes sense and the proper procedure was conducted.
If we did all that, the replication crisis could be almost completely resolved, as the incentives would be realigned to more closely match the genuine search for truth.

Alas, I don’t see universities or governments or research journals having the political will to actually make such changes, which is very sad indeed.

What’s wrong with academic publishing?

JDN 2457257 EDT 14:23.

I just finished expanding my master’s thesis into a research paper that is, I hope, suitable for publication in an economics journal. As part of this process I’ve been looking into the process of submitting articles for publication in academic journals… and what I’ve found has been disgusting and horrifying. It is astonishingly bad, and my biggest question is why researchers put up with it.

Thus, the subject of this post is what’s wrong with the system—and what we might do instead.

Before I get into it, let me say that I don’t actually disagree with “publish or perish” in principle—as SMBC points out, it’s a lot like “do your job or get fired”. Researchers should publish in peer-reviewed journals; that’s a big part of what doing research means. The problem is how most peer-reviewed journals are currently operated.

First of all, in case you didn’t know, most scientific journals are owned by for-profit corporations. The largest, Elsevier, owns The Lancet and all of ScienceDirect, and has net income of over 1 billion Euros a year. Then there’s Springer and Wiley-Blackwell; between the three of them, these publishers account for over 40% of all scientific publications. These for-profit publishers retain the full copyright to most of the papers they publish, and tightly control access with paywalls; the cost to get through these paywalls is generally thousands of dollars a year for individuals and millions of dollars a year for universities. Their monopoly power is so great it “makes Rupert Murdoch look like a socialist.”

For-profit journals do often offer an “open-access” option in which you basically buy back your own copyright, but the price is high—the most common I’ve seen are $1800 or $3000 per paper—and very few researchers do this, for obvious financial reasons. In fact I think for a full-time tenured faculty researcher it’s probably worth it, given the alternatives. (Then again, full-time tenured faculty are becoming an endangered species lately; what might be worth it in the long run can still be very difficult for a cash-strapped adjunct to afford.) Open-access means people can actually read your paper and potentially cite your paper. Closed-access means it may languish in obscurity.

And of course it isn’t just about the benefits for the individual researcher. The scientific community as a whole depends upon the free flow of information; the reason we publish in the first place is that we want people to read papers, discuss them, replicate them, challenge them. Publication isn’t the finish line; it’s at best a checkpoint. Actually one thing that does seem to be wrong with “publish or perish” is that there is so much pressure for publication that we publish too many pointless papers and nobody has time to read the genuinely important ones.

These prices might be justifiable if the for-profit corporations actually did anything. But in fact they are basically just aggregators. They don’t do the peer-review, they farm it out to other academic researchers. They don’t even pay those other researchers; they just expect them to do it. (And they do! Like I said, why do they put up with this?) They don’t pay the authors who have their work published (on the contrary, they often charge submission fees—about $100 seems to be typical—simply to look at them). It’s been called “the world’s worst restaurant”, where you pay to get in, bring your own ingredients and recipes, cook your own food, serve other people’s food while they serve yours, and then have to pay again if you actually want to be allowed to eat.

They pay for the printing of paper copies of the journal, which basically no one reads; and they pay for the electronic servers that host the digital copies that everyone actually reads. They also provide some basic copyediting services (copyediting APA style is a job people advertise on Craigslist—so you can guess how much they must be paying).

And even supposing that they actually provided some valuable and expensive service, the fact would remain that we are making for-profit corporations the gatekeepers of the scientific community. Entities that exist only to make money for their owners are given direct control over the future of human knowledge. If you look at Cracked’s “reasons why we can’t trust science anymore”, all of them have to do with the for-profit publishing system. p-hacking might still happen in a better system, but publishers that really had the best interests of science in mind would be more motivated to fight it than publishers that are simply trying to raise revenue by getting people to buy access to their papers.

Then there’s the fact that most journals do not allow authors to submit to multiple journals at once, yet take 30 to 90 days to respond and only publish a fraction of what is submitted—it’s almost impossible to find good figures on acceptance rates (which is itself a major problem!), but the highest figures I’ve seen are 30% acceptance, a more typical figure seems to be 10%, and some top journals go as low as 3%. In the worst-case scenario you are locked into a journal for 90 days with only a 3% chance of it actually publishing your work. At that rate publishing an article could take years.
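The expected delay compounds quickly. If each submission were an independent draw (a simplification, since fit and quality matter), the expected number of attempts is one over the acceptance rate:

```python
# Expected time to acceptance under one-journal-at-a-time submission,
# treating each submission as an independent draw (a simplification).
def expected_years(acceptance_rate, response_days):
    attempts = 1 / acceptance_rate  # mean of a geometric distribution
    return attempts * response_days / 365

for rate in (0.30, 0.10, 0.03):
    low, high = expected_years(rate, 30), expected_years(rate, 90)
    print(f"{rate:.0%} acceptance: {low:.1f} to {high:.1f} years on average")
```

At a 3% acceptance rate and 90-day turnarounds, the expected wait is over eight years.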

Is open-access the solution? Yes… well, part of it, anyway.

There are a large number of open-access journals, some of which do not charge submission fees, but very few of them are prestigious, and many are outright predatory. Predatory journals charge exorbitant fees, often after accepting papers for publication; many do little or no real peer review. There are almost seven hundred known predatory open-access journals; over one hundred have even been caught publishing hoax papers. These predatory journals are corrupting the process of science.

There are a few reputable open-access journals, such as BMC Biology and PLOS ONE. Though not actually a journal, arXiv serves a similar role. These will be part of the solution, most definitely. Yet even legitimate open-access journals often charge each author over $1000 to publish an article. There is a small but significant positive correlation between publication fees and journal impact factor.

We need to found more open-access journals which are funded by either governments or universities, so that neither author nor reader ever pays a cent. Science is a public good and should be funded as such. Even if copyright makes sense for other forms of content (I’m not so sure about that), it most certainly does not make sense for scientific knowledge, which by its very nature is only doing its job if it is shared with the world.

These journals should be specifically structured to be method-sensitive but results-blind. (It’s a very good thing that medical trials are usually registered before they are completed, so that publication is assured even if the results are negative—the same should be done with other sciences. Unfortunately, even in medicine there is significant publication bias.) If you could sum up the scientific method in one phrase, it might just be that: Method-sensitive but results-blind. If you think you know what you’re going to find beforehand, you may not be doing science. If you are certain what you’re going to find beforehand, you’re definitely not doing science.

The process should still be highly selective, but it should be possible—indeed, expected—to submit to multiple journals at once. If journals want to start paying their authors to entice them to publish in that journal rather than take another offer, that’s fine with me. Researchers are the ones who produce the content; if anyone is getting paid for it, it should be us.

This is not some wild and fanciful idea; it’s already the way that book publishing works. Very few literary agents or book publishers would ever have the audacity to say you can’t submit your work elsewhere; those that try are rapidly outcompeted as authors stop submitting to them. It’s fundamentally unreasonable to expect anyone to hang all their hopes on a particular buyer months in advance—and that is what you are, publishers, you are buyers. You are not sellers, you did not create this content.

But new journals face a fundamental problem: Good researchers will naturally want to publish in journals that are prestigious—that is, journals that are already prestigious. When all of the prestige is in journals that are closed-access and owned by for-profit companies, the best research goes there, and the prestige becomes self-reinforcing. Journals are prestigious because they are prestigious; welcome to tautology club.

Somehow we need to get good researchers to start boycotting for-profit journals and start investing in high-quality open-access journals. If Elsevier and Springer can’t get good researchers to submit to them, they’ll change their ways or wither and die. Research should be funded and published by governments and nonprofit institutions, not by for-profit corporations.

This may in fact highlight a much deeper problem in academia, the very concept of “prestige”. I have no doubt that Harvard is a good university, a better university than most; but is it actually the best, as it is in most people’s minds? Might Stanford or UC Berkeley be better, or University College London, or even the University of Michigan? How would we tell? Are the students better? Even if they are, might that just be because all the better students went to the schools that had better reputations? Controlling for the quality of the student, more prestigious universities are almost uncorrelated with better outcomes. Those who get accepted to Ivies but attend other schools do just as well in life as those who actually attend Ivies. (Good news for me, getting into Columbia but going to Michigan.) Yet once a university acquires such a high reputation, it can be very difficult for it to lose that reputation, and even more difficult for others to catch up.

Prestige is inherently zero-sum; for me to get more prestige you must lose some. For one university or research journal to rise in rankings, another must fall. Aside from simply feeding on other prestige, the prestige of a university is largely based upon the students it rejects—its “selectivity” score. What does it say about our society that we value educational institutions based upon the number of people they exclude?

Zero-sum ranking is always easier to do than nonzero-sum absolute scoring. Actually that’s a mathematical theorem, and one of the few good arguments against range voting (still not nearly good enough, in my opinion); if you have a list of scores you can always turn them into ranks (potentially with ties); but from a list of ranks there is no way to turn them back into scores.
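The asymmetry is easy to demonstrate: scores always determine ranks, but two completely different sets of scores can produce identical ranks, so there is no way to invert the mapping:

```python
# Scores -> ranks is always well-defined; ranks -> scores is not.
def ranks(scores):
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

close_race = {"X": 9.8, "Y": 9.7, "Z": 9.6}
blowout    = {"X": 9.8, "Y": 2.0, "Z": 1.9}

print(ranks(close_race))  # {'X': 1, 'Y': 2, 'Z': 3}
print(ranks(blowout))     # {'X': 1, 'Y': 2, 'Z': 3} -- same ranks, very different scores
```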

Yet ultimately it is absolute scores that must drive humanity’s progress. If life were simply a matter of ranking, then progress would be by definition impossible. No matter what we do, there will always be top-ranked and bottom-ranked people.

There is simply no way mathematically for more than 1% of human beings to be in the top 1% of the income distribution. (If you’re curious where exactly that lies today, I highly recommend this interactive chart by the New York Times.) But we could raise the standard of living for the majority of people to a level that only the top 1% once had—and in fact, within the First World we have already done this. We could in fact raise the standard of living for everyone in the First World to a level that only the top 1%—or less—had as recently as the 16th century, by the simple change of implementing a basic income.

There is no way for more than 0.14% of people to have an IQ above 145, because IQ is defined to have a mean of 100 and a standard deviation of 15, regardless of how intelligent people are. People could get dramatically smarter over time (and in fact have), and yet it would still be the case that by definition, only 0.14% can be above 145.
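That 0.14% is nothing deep about intelligence; it is just the upper tail of a normal distribution three standard deviations out:

```python
# IQ 145 is three standard deviations above the mean, by definition.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(1 - iq.cdf(145))  # ~0.00135, about 0.14%, no matter how smart people get
```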

Similarly, there is no way for much more than 1% of people to go to the top 1% of colleges. There is no way for more than 1% of people to be in the highest 1% of their class. But we could increase the number of college degrees (which we have); we could dramatically increase literacy rates (which we have).

We need to find a way to think of science in the same way. I wouldn’t suggest simply using number of papers published or even number of drugs invented; both of those are skyrocketing, but I can’t say that most of the increase is actually meaningful. I don’t have a good idea of what an absolute scale for scientific quality would look like, even at an aggregate level; and it is likely to be much harder still to make one that applies on an individual level.

But I think that ultimately this is the only way, the only escape from the darkness of cutthroat competition. We must stop thinking in terms of zero-sum rankings and start thinking in terms of nonzero-sum absolute scales.

What does it mean to “own” an idea?

JDN 2457195 EDT 11:29.

For a long time I’ve been suspicious of intellectual property as currently formulated, but I’m never quite sure what to replace it with. I recently finished reading a surprisingly compelling little book called Against Intellectual Monopoly, which offered some more direct empirical support for many of my more philosophical concerns. (Fitting their opposition to copyright law, the authors, Michele Boldrin and David Levine, offer the full text of the book for free online.)

Boldrin and Levine argue that they are not in fact opposed to intellectual property, but intellectual monopoly. I think this is a bit of a silly distinction myself, and in fact muddles the issue a little because most of what we currently call “intellectual property” is in fact what they call “intellectual monopoly”.

The problems with intellectual property are well documented within the book, but I think it’s worth repeating at least the basic form of the argument. Intellectual property is supposed to incentivize innovation by rewarding innovators for their investment, and thereby increase the total amount of innovation.

This requires three conditions to hold: First, the intellectual property must actually reward the innovators. Second, innovation must be increased when innovators seek rewards. And third, the costs of implementing the policy must be exceeded by the benefits provided by it.

As it turns out, none of those three conditions holds. For intellectual property to make sense, all three would need to hold; in fact, none do.

First—and worst—of all, intellectual property does not actually reward innovators. It instead rewards those who manipulate the intellectual property system. Intellectual property is why Thomas Edison was wealthy and Nikola Tesla was poor. Intellectual property is why we keep getting new versions of the same pills for erectile dysfunction instead of an AIDS vaccine. Intellectual property is how we get patent troll corporations, submarine patents, and Samsung owing Apple $1 billion for making its smartphones the wrong shape. Intellectual property is how Worlds.com is proposing to sue an entire genre of video games.

Second, the best innovators are not motivated by individual rewards. This has always been true; the people who really contribute the most to the world in knowledge or creativity are those who do it out of an insatiable curiosity, or a direct desire to improve the world. People who are motivated primarily by profit only innovate as a last resort, instead preferring to manipulate laws, undermine competitors, or simply mass-produce safe, popular products.

I can think of no more vivid an example here than Hollywood. Why is it that every single new movie that comes out is basically a more expensive rehash of the exact same 5 movies that have been coming out for the last 50 years? Because big corporations don’t innovate. It’s too risky to try to make a movie that’s fundamentally new and different, because, odds are, that new movie would fail. It’s much safer to make an endless series of superhero movies and keep coming out with yet another movie about a heroic dog. It’s not even that these movies are bad—they’re often pretty good, and when done well (like Avengers) they can be quite enjoyable. But thousands of original screenplays are submitted to Hollywood every year, and virtually none of them are actually made into films. It’s impossible to know what great works of film we might have seen on the big screen if not for the stranglehold of media companies.

This is not how Hollywood began; it started out wildly innovative and new. But do you know why it started in Los Angeles and not somewhere else? It was to evade patent laws. Thomas Edison, the greatest patent troll in history, held a stranglehold on motion picture technology on the East Coast, so filmmakers fled to California to get as far away from him as possible, during a time when Federal enforcement was much more lax. The innovation that created Los Angeles as we know it not only was not incentivized by intellectual property protection—it was only possible in its absence.

And then of course there is the third condition, that the benefits be worth the costs—but it’s trivially obvious that this is not the case, since the benefits are in fact basically zero. We divert billions of dollars from consumers to huge corporations, monopolize the world’s ideas, create a system of surveillance and enforcement that makes basically everyone a criminal (I’ll admit it; I have pirated music, software, and most recently the film My Neighbor Totoro, and I often copy video games I own on CD or DVD to digital images so I don’t need the CD or DVD every time to play—which should be fair use but has been enforced as copyright violation). When everyone is a criminal, enforcement becomes capricious, a means of control that can be used and abused by those in power.

Intellectual property even allows corporations to undermine our more basic sense of property ownership—they can prevent us from making use of our own goods as we choose. They can punish us for modifying the software in our computers, our video game systems—or even our cars. They can install software on our computers that compromises our security in order to protect their copyright. This is a point that Boldrin and Levine repeat several times; in place of what we call “intellectual property” (and they call “intellectual monopoly”), they offer a system which would protect our ordinary property rights, our rights to do what we choose with the goods that we purchase—goods that include books, computers, and DVDs.

That brings me to where I think their argument is weakest—their policy proposal. Basically the policy they propose is that we eliminate all intellectual property rights (except trademarks, which they rightly point out are really more about honesty than they are about property—trademark violation typically amounts to fraudulently claiming that your product was made by someone it wasn’t), and then do nothing else. The only property rights would be ordinary property rights, which would now apply in full to products such as books and DVDs. When you buy a DVD, you would have the right to do whatever you please with it, up to and including copying it a hundred times and selling the copies. You bought the DVD, you bought the blank discs, you bought the burner; so (goes their argument), why shouldn’t you be able to do what you want with them?

For patents, I think their argument is basically correct. I’ve tried to make lists of the greatest innovations in science and technology, and virtually none of them were in any way supported by patents. We needn’t go as far back as fire, writing, and the wheel; think about penicillin, the smallpox vaccine, electricity, digital computing, superconductors, lasers, the Internet. Airplanes might seem like they were invented under patent, but in fact the Wright brothers made a relatively small contribution and most of the really important development in aircraft was done by the military. Important medicines are almost always funded by the NIH, while private pharmaceutical companies give us Viagra at best and Vioxx at worst. Private companies have an incentive to skew their trials in various ways, ranging from simply questionable (p-value hacking) to the outright fraudulent (tampering with data). We know they do, because meta-analyses have found clear biases in the literature. The NIH has much less incentive to bias results in this way, and as a result more of the drugs released will be safe and effective. Boldrin and Levine recommend that all drug trials be funded by the NIH instead of drug companies, and I couldn’t agree more. What basis would drug companies have for complaining? We’re giving them something they previously had to pay for. But of course they will complain, because now their drugs will be subject to unbiased scrutiny. Moreover, it undercuts much of the argument for their patent; without the initial cost of large-scale drug trials, it’s harder to see why they need patents to make a profit.

Major innovations have been the product of individuals working out of curiosity, or random chance, or university laboratories, or government research projects; but they are rarely motivated by patents and they are almost never created by corporations. Corporations do invent incremental advancements, but many of these they keep as trade secrets, or go ahead and share, knowing that reverse-engineering takes time and investment. The great innovations of the computer industry (like high-level programming languages, personal computers, Ethernet, USB ports, and windowed operating systems) were all invented before software could be patented—and since then, what have we really gotten? In fact, it can be reasonably argued that patents reduce innovation; most innovations are built on previous innovations, and patents hinder that process of assimilation and synthesis. Patent pools can mitigate this effect, but only for oligopolistic insiders, which almost by definition are less innovative than disruptive outsiders.

And of course, patents on software and biological systems should be invalidated yesterday. If we must have patents, they should be restricted only to entities that cannot self-replicate, which means no animals, no plants, no DNA, nothing alive, no software, and for good measure, no grey goo nanobots. (It also makes sense at a basic level: How can you stop people from copying it, when it can copy itself?)

It’s when we get to copyright that I’m not so convinced. I certainly agree that the current copyright system suffers from deep problems. When your photos can be taken without your permission and turned into works of art but you can’t make a copy of a video game onto your hard drive to play it more conveniently, clearly something is wrong with our copyright system. I also agree that there is something fundamentally problematic about saying that one “owns” a text in such a way that they can decide what others do with it. When you read my work, copies of the information I convey to you are stored inside your brain; do I now own a piece of your brain? If you print out my blog post on a piece of paper and then photocopy it, how can I own something you made with your paper on your printer?

I release all my blog posts under a “by-sa” copyleft, “attribution-share-alike”, which requires that my work be shared without copyright protection and properly attributed to me. You are however free to sell them, modify them, or use them however you like, given those constraints. I think that something like this may be the best system for protecting authors against plagiarism without unduly restricting the rights of readers to copy, modify, and otherwise use the content they buy. Applied to software, the Free Software Foundation basically agrees.

Boldrin and Levine do not, however; they think that even copyleft is too much, because it imposes restrictions upon buyers. They do agree that plagiarism should be illegal (because it is fraudulent), but they disagree with the “share-alike” part, the requirement that content be licensed according to what the author demands. As far as they are concerned, you bought the book, and you can do whatever you damn well please with it. In practice there probably isn’t a whole lot of difference between these two views, since in the absence of copyright there isn’t nearly as much need for copyleft. I don’t really need to require you to impose a free license if you can’t impose any license at all. (When I say “free” I mean libre, not gratis; free as in speech, not as in beer. Red Hat Linux is free software you pay for, and Zynga games are horrifically predatory proprietary software you get for free.)

One major difference is that under copyleft we could impose requirements to release information under certain circumstances—I have in mind particularly scientific research papers and associated data. To maximize the availability of knowledge and facilitate peer review, it could be a condition of publication for scientific research that the paper and data be made publicly available under a free license—already this is how research done directly for the government works (at least the stuff that isn’t classified). But under a strict system of physical property only this sort of licensing would be a violation of the publishers’ property rights to do as they please with their servers and hard drives.

But there are legitimate concerns to be had even about simply moving to a copyleft system. I am a fiction author, and I submit books for publication. (This is not hypothetical; I actually do this.) Under the current system, I own the copyright to those books, and if the publisher decides to use them (thus far, only JukePop Serials, a small online publisher, has ever done so), they must secure my permission, presumably by means of a royalty contract. They can’t simply take whatever manuscripts they like and publish them. But if I submitted under a copyleft, they absolutely could. As long as my name were on the cover, they wouldn’t have to pay me a dime. (Charles Darwin certainly didn’t get a dime from Ray Comfort’s edition of The Origin of Species—yes, that is a thing.)

Now the question becomes, would they? There might be a competitive equilibrium where publishers are honest and do in fact pay their authors. If they fail to do so, authors are likely to stop submitting to that publisher once it acquires its shady reputation. If we can reach the equilibrium where authors get paid, that’s almost certainly better than today; the only people I can see it hurting are major publishing houses like Pearson PLC and superstar authors like J.K. Rowling; and even then it wouldn’t hurt them all that much. (Rowling might only be a millionaire instead of a billionaire, and Pearson PLC might see its net income drop from over $500 million to say $10 million.) The average author would most likely benefit, because publishers would have more incentive to invest in their midlist when they can’t crank out hundreds of millions of dollars from their superstars. Books would proliferate at bargain prices, and we could all double the size of our libraries. The net effect on the book market would be to reduce the winner-takes-all effect, which can only be a good thing.

But that isn’t the only possibility. The incentive to steal authors’ work when they submit it could instead create an equilibrium where hardly anyone publishes fiction anymore; and that world is surely worse than the one we live in today. We would want to think about how we can ensure that authors are adequately paid for their work in a copyleft system. Maybe some can make their money from speaking tours and book signings, but I’m not confident that enough can.

I do have one idea, similar to what Thomas Pogge came up with in his “public goods system”, though he primarily intended that to apply to medicine. The basic concept is that there would be a fund, either gathered from donations or supported by taxes, that supports artists. (Actually we already have the National Endowment for the Arts, but it isn’t nearly big enough.) This support would be doled out based on some metric of the artists’ popularity or artistic importance. The details of that are quite tricky, but I think one could arrange some sort of voting system where people use range voting to decide how much to give to each author, musician, painter, or filmmaker. Potentially even research funding could be set this way, with people voting to decide how important they think a particular project is—though I fear that people may be too ignorant to accurately gauge the importance of certain lines of research, as when Sarah Palin mocked studies of “fruit flies in Paris”, otherwise known as literally the foundation of modern genetics. Maybe we could vote instead on research goals like “eliminate cancer” and “achieve interstellar travel” and then the scientific community could decide how to allocate funds toward those goals? The details are definitely still fuzzy in my mind.

The general principle, however, would be that if we want to support investment in innovation, we do that—instead of devising this bizarre system of monopoly that gives corporations growing power over our lives. Subsidize investment by subsidizing investment. (I feel similarly about capital taxes; we could incentivize investment in this vague roundabout way by doing nothing to redistribute wealth and hoping that all the arbitrage and speculation somehow translates into real investment… or, you know, we could give tax credits to companies that build factories.) As Boldrin and Levine point out, intellectual property laws were not actually created to protect innovation; they were an outgrowth of the general power of kings and nobles to enforce monopolies on various products during the era of mercantilism. They were weakened to be turned into our current system, not strengthened. They are, in fact, fundamentally mercantilist—and nothing could make that clearer than the TRIPS accord, which literally allows millions of people to die from treatable diseases in order to increase the profits of pharmaceutical companies. Far from being this modern invention that brought upon the scientific revolution, intellectual property is an atavistic policy borne from the age of colonial kings. I think it’s time we try something new.
(Oh, and one last thing: “Piracy”? Really? I can’t believe the linguistic coup it was for copyright holders to declare that people who copy music might as well be slavers and murderers—somehow people went along with this ridiculous terminology. No, there is no such thing as “music piracy” or “software piracy”; there is music copyright violation and software copyright violation.)