Surviving in an ad-supported world

Apr 21 JDN 2460423

Advertising is as old as money—perhaps even older. Scams have likewise been a part of human society since time immemorial.

But I think it’s fair to say that recently, since the dawn of the Internet at least, both advertising and scams have been proliferating, far beyond what they used to be.

We live in an ad-supported world.

News sites are full of ads. Search engines are full of ads. Even shopping sites are full of ads now; we literally came here planning to buy something, but that wasn’t good enough for you; you want us to also buy something else. Most of the ads are for legitimate products; but some are for scams. (And then there’s multi-level marketing, which is somewhere in between: technically not a scam, but not far from one.)

We’re so accustomed to getting spam emails, phone calls, and texts full of ads and scams that we just accept them as a part of our lives. But these are not things people had to live with even 50 years ago. This is a new, fresh Hell we have wrought for ourselves as a civilization.

AI promises to make this problem even worse. AI still isn’t very good at doing anything particularly useful; you can’t actually trust it to drive a truck or diagnose an X-ray. (There are people working on this sort of thing, but they haven’t yet succeeded.) But it’s already pretty good at making spam texts and phone calls. It’s already pretty good at catfishing people. AI isn’t smart enough to really help us, but it is smart enough to hurt us, especially those of us who are most vulnerable.

I think that this causes a great deal more damage to our society than is commonly understood.

It’s not just that ads are annoying (though they are), or that they undermine our attention span (though they do), or that they exploit the vulnerable (though they do).

I believe that an ad-supported world is a world where trust goes to die.

When the vast majority of your interactions with other people involve those people trying to get your money, some of them by outright fraud—but none of them really honestly—you have no choice but to ratchet down your sense of trust. It begins to feel as if financial transactions are the only form of interaction there is in the world.

But in fact most people can be trusted, and should be trusted—you are missing out on a great deal of what makes life worth living if you do not know how to trust.

The question is whom you trust. You should trust people you know, people you interact with personally and directly. Even strangers are more trustworthy than any corporation will ever be. And never are corporations more dishonest than when they are sending out ads.

The more the world fills with ads, the less room it has for trust.

Is there any way to stem this tide? Or are we simply doomed to live in the cyberpunk dystopia our forebears warned about, where everything is for sale and all available real estate is used for advertising?

Ads and scams only exist because they are profitable; so our goal should be to make them no longer profitable.

Here is one very simple piece of financial advice that will help protect you. Indeed, I believe it can protect you so well that, if everyone followed it consistently, we would stem the tide.

Only give money to people you have sought out yourself.

Only buy things you already knew you wanted.

Yes, of course you must buy things. We live in a capitalist society. You can’t survive without buying things. But this is how buying things should work:

You check your fridge and see you are out of milk. So you put “milk” on your grocery list, you go to the grocery store, you find some milk that looks good, and you buy it.

Or, your car is getting old and expensive to maintain, and you decide you need a new one. You run the numbers on your income and expenses, and come up with a budget for a new car. You go to the dealership, they help you pick out a car that fits your needs and your budget, and you buy it.

Your tennis shoes are getting frayed, and it’s time to replace them. You go online and search for “tennis shoes”, looking up sizes and styles until you find a pair that suits you. You order that pair.

You should be the one to decide that you need a thing, and then you should go out looking for it.

It’s okay to get help searching, or even listen to some sales pitches, as long as the whole thing was your idea from the start.

But if someone calls you, texts you, or emails you, asking for your money for something?

Don’t give them a cent.

Just don’t. Don’t do it. Even if it sounds like a good product. Even if it is a good product. If the product they are selling sounds so great that you decide you actually want to buy it, go look for it on your own. Shop around. If you can, go out of your way to buy it from a competing company.

Your attention is valuable. Don’t reward them for stealing it.

This applies to donations, too. Donation asks aren’t as awful as ads, let alone scams, but they are pretty obnoxious, and they only send those things out because people respond to them. If we all stopped responding, they’d stop sending.

Yes, you absolutely should give money to charity. But you should seek out the charities to donate to. You should use trusted sources (like GiveWell and Charity Navigator) to vet them for their reliability, transparency, and cost-effectiveness.

If you just receive junk mail asking you for donations, feel free to take out any little gifts they gave you (it’s often return address labels, for some reason), and then recycle the rest.

Don’t give to the ones who ask for it. Give to the ones who will use it the best.

Reward the charities that do good, not the charities that advertise well.

This is the rule to follow:

If someone contacts you—if they initiate the contact—refuse to give them any money. Ever.

Does this rule seem too strict? It is quite strict, in fact. It requires you to pass up many seemingly-appealing opportunities, and the more ads there are, the more opportunities you’ll need to pass up.

There may even be a few exceptions; no great harm befalls us if we buy Girl Scout cookies or donate to the ASPCA because the former knocked on our doors and the latter showed us TV ads. (Then again, you could just donate to feminist and animal rights charities without any ads or sales pitches.)

But in general, we live in a society that is absolutely inundated with people accosting us and trying to take our money, and they’re only ever going to stop trying to get our money if we stop giving it to them. They will not stop it out of the goodness of their hearts—no, not even the charities, who at least do have some goodness in their hearts. (And certainly not the scammers, who have none.)

They will only stop if it stops working.

So we need to make it stop working. We need to draw this line.

Trust the people around you, who have earned it. Do not trust anyone who seeks you out asking for money.

Telemarketing calls? Hang up. Spam emails? Delete. Junk mail? Recycle. TV ads? Mute and ignore.

And then, perhaps, future generations won’t have to live in an ad-supported world.

Is privacy dead?

May 9 JDN 2459342

It is the year 2021, and while we don’t yet have flying cars or human-level artificial intelligence, our society is in many ways quite similar to what cyberpunk fiction predicted it would be. We are constantly connected to the Internet, even linking devices in our homes to the Web when that is largely pointless or actively dangerous. Oligopolies of fewer and fewer multinational corporations that are more and more powerful have taken over most of our markets, from mass media to computer operating systems, from finance to retail.

One of the many dire predictions of cyberpunk fiction is that constant Internet connectivity will effectively destroy privacy. There is reason to think that this is in fact happening: We have televisions that listen to our conversations, webcams that can be hacked, sometimes invisibly, and the operating system that runs the majority of personal and business computers is built around constantly tracking its users.

The concentration of oligopoly power and the decline of privacy are not unconnected. It’s the oligopoly power of corporations like Microsoft and Google and Facebook that allows them to present us with absurdly long and virtually unreadable license agreements as an ultimatum: “Sign away your rights, or else you can’t use our product. And remember, we’re the only ones who make this product and it’s increasingly necessary for your basic functioning in society!” This is of course exactly as cyberpunk fiction warned us it would be.

Giving up our private information to a handful of powerful corporations would be bad enough if that information were securely held only by them. But it isn’t. There have been dozens of major data breaches of major corporations, and there will surely be many more. In an average year, several billion data records are exposed through data breaches. Each person produces many data records, so it’s difficult to say exactly how many people have had their data stolen; but it isn’t implausible to say that if you are highly active on the Internet, at least some of your data has been stolen in one breach or another.

Corporations have strong incentives to collect and use your data—data brokerage is a hundred-billion-dollar industry—but very weak incentives to protect it from prying eyes. The FTC does impose fines for negligence in the event of a major data breach, but as usual the scale of the fines simply doesn’t match the scale of the corporations responsible. $575 million sounds like a lot of money, but for a corporation with $28 billion in assets it’s a slap on the wrist. It would be equivalent to fining me about $500 (about what I’d get for driving without a passenger in the carpool lane). Yeah, I’d feel that; it would be unpleasant and inconvenient. But it’s certainly not going to change my life.

And typically these fines only impact shareholders, and don’t even pass through to the people who made the decisions: The man who was CEO of Equifax when it suffered its catastrophic data breach retired with a $90 million pension.

While most people seem either blissfully unaware or fatalistically resigned to its inevitability, a few people have praised the trend of reduced privacy, usually by claiming that it will result in increased transparency. Yet, ironically, a world with less privacy can actually mean a world with less transparency as well: When you don’t know what information you reveal will be stolen and misused, you will constantly endeavor to protect all your information, even things that you would normally not hesitate to reveal. When even your face and name can be used to track you, you’ll be more hesitant to reveal them. Cyberpunk fiction predicted this too: Most characters in cyberpunk stories are known by their hacker handles, not their real given names.

There is some good news, however. People are finally beginning to notice that they have been pressured into giving away their privacy rights, and demanding to get them back. The United Nations has recently passed resolutions defending digital privacy, governments have taken action against the worst privacy violations with increasing frequency, courts are ruling in favor of stricter protections, think tanks are demanding stricter regulations, and even corporate policies are beginning to change. While the major corporations all want to take your data, there are now many smaller businesses and nonprofit organizations that will sell you tools to help protect it.

This does not mean we can be complacent: The war is far from won. But it does mean that there is some hope left; we don’t simply have to surrender and accept a world where anyone with enough money can know whatever they want about anyone else. We don’t need to accept what the CEO of Sun Microsystems infamously said: “You have zero privacy anyway. Get over it.”

I think the best answer to the decline of privacy is to address the underlying incentives that make it so lucrative. Why is data brokering such a profitable industry? Because ad targeting is such a profitable industry. So profitable, indeed, that huge corporations like Facebook and Google make almost all of their money that way, and the useful services they provide to users are offered for free simply as an enticement to get them to look at more targeted advertising.

Selling advertising is hardly new—we’ve been doing it for literally millennia, as Roman gladiators were often paid to hawk products. It has been the primary source of revenue for most forms of media, from newspapers to radio stations to TV networks, for as long as those media have existed. What has changed is that ad targeting is now a lucrative business: In the 1850s, the newspaper hawked by newsboys on the street corner likely had ads in it, but they were the same ads for every single reader. Now when you log in to CNN.com or nytimes.com, the ads on that page are specific to you, based on whatever information these media giants have been able to glean from your past Internet activity. If you do try to protect your online privacy with various tools, a quick-and-dirty way to check whether it’s working is to see if websites give you ads for things you know you’d never buy.

In fact, I consider it a very welcome recent development that video streaming finally lets us watch TV shows by actually paying for them, instead of having someone else pay for the right to shove ads in our faces. I can’t remember the last time I heard a TV ad jingle, and I’m very happy about that fact. Spending 15 minutes of each hour of TV on commercials may not seem so bad; indeed, many people may feel they’d rather do that than pay money to avoid it. But think about it this way: If those ads weren’t worth at least that much to the corporations buying them, they wouldn’t buy them. And if a corporation expects its ads to extract $X from you that you wouldn’t otherwise have spent, that means they’re getting you to buy something you didn’t need. Perhaps it’s better after all to spend that $X on entertainment that doesn’t try to get you to buy things you don’t need.

Indeed, I think there is an opportunity to restructure the whole Internet this way. What we need is a software company—maybe a nonprofit organization, maybe a for-profit business—that is set up to let us make micropayments for online content in lieu of having our data collected or being force-fed advertising.

How big would these payments need to be? Well, Facebook has about 2.8 billion users and takes in revenue of about $80 billion per year, so the average user would have to pay about $29 a year for the use of Facebook, Instagram, and WhatsApp. That’s about $2.50 per month, or $0.08 per day.

The New York Times is already moving away from the ad-supported business model; less than $400 million of its $1.8 billion revenue last year came from ads, with the rest coming primarily from subscriptions. But smaller media outlets have a much harder time gaining subscribers; often people just want to read a single article and aren’t willing to pay for a whole month or year of the periodical. If we could somehow charge for individual articles, how much would we have to charge? Well, a typical webpage has an ad clickthrough rate of about 1%, while a typical cost-per-click rate is about $0.60, so the ads on an average webpage make its owner a whopping $0.006 per view. That’s not even a single cent. So if a new micropayment system allowed you to pay one cent to read an article without the annoyance of ads or the pressure to buy something you don’t need, would you pay it? I would. In fact, I’d pay five cents. That would multiply their revenue roughly eightfold!
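The back-of-envelope arithmetic above is easy to check. Here is a minimal sketch using only the approximate figures cited in the text (these are rough public numbers, not precise financial data):

```python
# Back-of-envelope checks on the figures cited above.

fb_revenue = 80e9   # Facebook annual revenue: ~$80 billion
fb_users = 2.8e9    # ~2.8 billion users

per_user_year = fb_revenue / fb_users
print(f"per user per year:  ${per_user_year:.2f}")        # ~$28.57
print(f"per user per month: ${per_user_year / 12:.2f}")   # ~$2.38
print(f"per user per day:   ${per_user_year / 365:.3f}")  # ~$0.078

# Display-ad revenue from a single page view:
clickthrough_rate = 0.01   # ~1% of viewers click an ad
cost_per_click = 0.60      # advertiser pays ~$0.60 per click
revenue_per_view = clickthrough_rate * cost_per_click
print(f"ad revenue per page view: ${revenue_per_view:.4f}")  # ~$0.0060
```

So the entire ad apparatus is worth well under a cent per article view, which is what makes even a one-cent micropayment competitive.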

The main problem is that we currently don’t have an efficient way to make payments that small. Processing a credit card transaction typically costs at least $0.05, so a five-cent transaction would yield literally zero revenue for the website. I’d have to pay ten cents to give the website five, and I admit I might not always want to do that—I’d also definitely be uncomfortable with half the money going to credit card companies.

So what’s needed is software to bundle the payments at each end: In a single credit card transaction, you add say $20 of tokens to an account. Each token might be worth $0.01, or even less if we want. These tokens can then be spent at participating websites to pay for access. The websites can then collect all the tokens they’ve received over say a month, bundle them together, and sell them back to the company that originally sold them to you, for slightly less than what you paid for them. These bundled transactions could actually be quite large in many cases—thousands or millions of dollars—and thus processing fees would be a very small fraction. For smaller sites there could be a minimum amount of tokens they must collect—perhaps also $20 or so—before they can sell them back. Note that if you’ve bought $20 in tokens and you are paying $0.05 per view, you can read 400 articles before you run out of tokens and have to buy more. And they don’t all have to be from the same source, as they would with a traditional subscription; you can read articles from any outlet that participates in the token system.
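The bundling scheme described above can be sketched in a few dozen lines. Everything here (the one-cent token price, the 97-cents-on-the-dollar buyback rate, the class names) is an illustrative assumption, not a description of any real payment system:

```python
# A minimal sketch of the proposed token scheme. The token price, buyback
# rate, and class names are illustrative assumptions, not a real system.

TOKEN_VALUE = 0.01   # each token is worth one cent
BUYBACK_RATE = 0.97  # broker redeems tokens at 97 cents on the dollar

class Wallet:
    """A user's prepaid token balance."""
    def __init__(self):
        self.tokens = 0

    def top_up(self, dollars):
        # One card transaction buys a whole batch of tokens, so the
        # flat card-processing fee is amortized over many micropayments.
        self.tokens += round(dollars / TOKEN_VALUE)

    def pay(self, site, n_tokens):
        if self.tokens < n_tokens:
            raise ValueError("insufficient tokens")
        self.tokens -= n_tokens
        site.received += n_tokens

class Site:
    """A participating website accumulating tokens for bulk redemption."""
    def __init__(self):
        self.received = 0

    def redeem(self):
        # Bundle everything received into one large buyback transaction,
        # again keeping processing fees a small fraction of the total.
        gross = self.received * TOKEN_VALUE
        self.received = 0
        return gross * BUYBACK_RATE

wallet, site = Wallet(), Site()
wallet.top_up(20.00)       # one $20 card charge -> 2000 one-cent tokens
for _ in range(400):       # read 400 articles at 5 tokens ($0.05) each
    wallet.pay(site, 5)
print(wallet.tokens)             # 0 tokens left, as in the text
print(f"${site.redeem():.2f}")   # 2000 tokens redeemed for $19.40
```

The point of the sketch is the shape of the cash flows: card fees are paid only twice (once on top-up, once on the bundled redemption), no matter how many one-cent or five-cent payments happen in between.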

There are a number of technical issues to be resolved here: how to keep the tokens secure, and how to guarantee that once a user purchases access to an article they will continue to have access to it, ideally even if they clear their cache, delete all cookies, or log in from another computer. I can’t literally set up this website today, and even if I could, I don’t know how I’d attract a critical mass of both users and participating websites (it’s a major network externality problem). But it seems well within the purview of what the tech industry has done in the past—indeed, it’s quite comparable to the impressive (and unsettling) infrastructure that has been laid down to support ad targeting and data brokerage.

How would such a system help protect privacy? If micropayments for content became the dominant model of funding online content, most people wouldn’t spend much time looking at online ads, and ad targeting would be much less profitable. Data brokerage, in turn, would become less lucrative, because there would be fewer ways to use that data to make profits. With the incentives to take our data thus reduced, it would be easier to enforce regulations protecting our privacy. Those fines might actually be enough to make it no longer worth the while to take sensitive data, and corporations might stop pressuring people to give it up.

No, privacy isn’t dead. But it’s dying. If we want to save it, we have a lot of work to do.

The fable of the billionaires

May 31 JDN 2458999

There are a great many distortions in real-world markets that cause them to deviate from the ideal of perfectly competitive free markets, and economists rightfully spend much of their time locating, analyzing, and mitigating such distortions.

But I think there is a general perception among economists, and perhaps among others as well, that if we could somehow make markets perfectly competitive and efficient, we’d be done; the world, or at least the market, would be just and fair and all would be good. And this perception is gravely mistaken. To make that clear to you, I offer a little fable.

Once upon a time, widgets were made by hand. One person, working for one eight-hour day, could make 100 widgets. Most people were employed making widgets full-time. The wage for making widgets was $1 per widget.

Then, an inventor came up with a way to automate the production of widgets. For $100 per day, the same cost to hire a worker to make 100 widgets, the machine could instead make 101 widgets.

Because it was 1% more efficient, businesses began adopting the new machine, and now made slightly more widgets than before. But some workers who had previously made widgets were laid off, while others saw their wages fall to only $0.99 per widget.

If there were more widgets, but fewer people were making them, and those who remained were being paid less, where did the extra wealth go? To the inventor, of course, who now owns 10% of all widget production and has billions of dollars.

Later, another inventor came up with an even better machine, which could make 102 widgets in a day. And that inventor became a billionaire too, while more workers became unemployed and wages fell to $0.98 per widget.

And then there was another inventor, and another, and another; and today the machines can make 200 widgets in a day and wages are only $0.50 per widget. We now have twice as many widgets as we used to have, and hundreds of billionaires; yet only half as many people now work making widgets as once did, and those who remain make only half of what they once did.
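The fable's arithmetic can be laid out explicitly. In this sketch (just the story's own numbers, not a real labor-market model), a worker-day and a machine-day both cost $100, so competition pushes the piece wage down to $100 divided by a machine's daily output:

```python
# The fable's own arithmetic: a worker-day and a machine-day both cost
# $100, so once machines make N widgets per day, the competitive piece
# wage falls to $100 / N.

DAILY_COST = 100.0  # cost of one worker-day or one machine-day

for output in (100, 101, 102, 150, 200):
    wage = DAILY_COST / output  # competitive piece wage
    print(f"{output} widgets/day -> ${wage:.2f} per widget")
# 100 widgets/day -> $1.00 per widget
# 101 widgets/day -> $0.99 per widget
# ...
# 200 widgets/day -> $0.50 per widget
```

Each step is a strict efficiency gain, and each step transfers the difference between the old wage bill and the new one to whoever owns the machines.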

Was this market inefficient or uncompetitive? Not at all! In fact it was quite efficient: It delivered the most widgets for the least cost every step of the way. And the first round of billionaires didn’t get enough power to keep the next round from innovating even better and also becoming billionaires. No one stole or cheated to get where they are; the billionaires really made it to the top by being brilliant innovators who made the world more efficient.

Indeed, by the standard measures of economic surplus, the world has gotten better with each new machine. GDP has gone up, wealth has gone up. Yet millions of people are out of work, and millions more are making pitifully low wages. Overall the nation seems to be worse off, even though all the numbers keep saying things are getting better.

There are some relatively simple solutions to this problem: We could tax those billionaires, and use the money to provide public goods to everyone else; and then the added wealth from doubling our quantity of widgets would benefit everyone and not just the inventors who made it happen. Would that reduce the incentives to innovate? A little, perhaps; but it’s hard to believe that most people who would be willing to invent something for $1 billion wouldn’t be willing to do so for $500 million or even for $50 million. At some point that extra money really isn’t benefiting you all that much. And what’s the point of incentivizing innovation if it makes life worse for most of our population?

In the real world there are lots of other problems, of course. Corruption, regulatory capture, rent-seeking, collusion, and so on all make our markets less efficient than they could have been. But even if markets were efficient, it’s not clear that they would be fair or just, or that they would be making most people’s lives better.

Indeed, I’m not convinced that most billionaires really got where they are by being particularly innovative. I can appreciate the innovations made by Cisco and Microsoft, but what brilliant innovation underlies Facebook or Amazon? The Internet itself is a great innovation (largely created by DARPA and universities), but is using it to talk to people or sell things really such a great leap? Tesla and SpaceX are innovative, but they have largely been money pits for Elon Musk, who inherited a good chunk of his wealth and made most of the rest by owning shares in PayPal. Yet even if we suppose that all the billionaires got where they are by inventing things that made the economy more efficient, it’s still not clear that they deserve to keep that staggering wealth.

I think the fundamental problem is that we have mentally equated ‘value of marginal product’ with ‘what you rightfully earn’. But the former is dependent upon the rest of the market: Who you are competing with, what your customers want. You can work very hard and be very talented, but if you’re making something that people aren’t willing to pay for, you won’t make any money. And the fact that people won’t pay for something doesn’t mean it isn’t valuable: If you produce public goods, they could benefit many people a great deal but still not draw in profits. Conversely, the fact that something is profitable doesn’t necessarily make it valuable: It could just be a very effective method of rent-seeking.

I’m not saying we should do away with markets; they’re very useful, and they do have a lot of benefits. But we should acknowledge their limitations. We should be aware not only that real-world markets are not perfectly efficient, but also that even a perfectly efficient market wouldn’t make for the best possible world.

SESTA/FOSTA: oppression through moral panic

Mar 31 JDN 2458574

The road to Hell is paved with good intentions.

The “Stop Enabling Sex Traffickers Act” and the “Allow States and Victims to Fight Online Sex Trafficking Act”. Who could disagree with those? Nobody wants to be on the side of sex traffickers.

Beware bills with such one-sided names; they are almost never what they seem. The “USA PATRIOT Act” was one of the most authoritarian and un-American pieces of legislation ever produced.

The bill was originally two bills, SESTA and FOSTA, which were then merged. For the rest of this post I’m just going to call it SESTA for short.

SESTA passed with overwhelming support; the vote totals were 388 to 25 in the House and 97 to 2 in the Senate. Apparently members of Congress fall for that sort of one-sided naming, because they also passed the USA PATRIOT Act 357 to 66 in the House and 98 to 1 in the Senate. This is the easiest way to take away our freedoms: Make it sound like you are doing something obviously good that no sane person could disagree with. If fascism comes to America, it will be called the “Puppies Are Cute Act” and it will pass with overwhelming support.

Of course I’m against sex trafficking. Almost everyone is against sex trafficking. The problem is that SESTA doesn’t actually do much to fight sex trafficking, may in some cases make sex trafficking worse, and sets a precedent that could undermine fundamental civil liberties.

So, what does SESTA actually do? At its core, it changes the way the Internet is regulated. It has been a basic principle of Internet regulation from the beginning that websites aren’t responsible for the actions of their users; unless a site is actively designed for an illegal purpose, the fact that it is used for illegal purposes is not the website’s fault.

This is how we regulate other forms of communication. If a mob boss calls a hit man on the phone, we don’t sue the phone company. If banks exchange emails to collude on manipulating interest rates, we don’t put the email sysadmin in jail. If terrorists send messages through the mail, we don’t arrest the postal workers.

There may be some grey areas, but generally courts have leaned toward greater liberty: The Anarchist Cookbook sure looks an awful lot like a manual for conducting acts of sabotage and terrorism, but we’re so loath to ban books that we allow it to be sold.

That is, until now. Because it’s about sex crime, we went into moral panic mode and stopped thinking clearly about the real implications of the policy. We don’t react the same way to gun crime—if we did, we’d have re-instituted the assault weapons ban a decade ago. This is clearly part of our double standard between sex and violence. Sex trafficking is horrible, to be sure; but I think that gun homicides are clearly worse. (Yes, it is horrible to be forced into sexual slavery; but would you rather be shot to death?) Yet we wildly overreact to the former and do basically nothing to stop the latter. And the scale of the two problems is not just comparable, it’s almost identical: About 18,000-20,000 people are trafficked into the US each year for sex, and there were precisely 19,362 homicides in the US last year, of which 14,415 were committed with firearms.

Indeed, why focus specifically on sex trafficking? The majority of human trafficking is for other forced labor, not sex. I suppose it seems worse to become a sex slave than to become a slave at a diamond mine or a tobacco plantation… but not that much worse. Not so much worse that the former merits an overwhelming response and the latter barely any response at all.

SESTA breaks the usual principles applied to regulating communication and instead allows the government to penalize websites that are used to facilitate any kind of sale of sexual services.

Note, first of all, that this suddenly changes the topic from sex trafficking to sex work; the vast majority of sex workers are not trafficking victims but voluntary participants. In countries where brothels are legal and regulated, job satisfaction of sex workers is not statistically different from median job satisfaction overall.

Second, the bill doesn’t really do anything to target sex trafficking. It was already illegal to use websites for sex trafficking and already illegal to advertise sex trafficking via the Web. In fact, there is reason to think that pushing sex trafficking further into the Dark Web will only make the job of law enforcement harder.

The liability protections for websites that are now being stripped away included a “right to moderate”: using moderation tools to remove illegal content would not result in additional liability. Under SESTA, this has changed; sites will now want to avoid moderating illegal content, because in doing so they would be effectively admitting that they knew it existed. Since there can never be any guarantee of removing 100% of all illegal content without shutting the entire site down, sites may choose instead not to moderate at all, so that they have more plausible deniability when illegal content is ultimately found.

Instead, the main effect of SESTA will be to put more sex workers in danger. Where previously they could use websites to screen clients, they now have to return to in-person contact that is much more dangerous. It pushes them from the relative security of working indoors and online to the extreme danger of walking the street or working for a pimp. SESTA also removes the opportunity for sex workers to communicate with each other, because now any content related to sex work is banned; and these kinds of communication networks can be literally a matter of life and death.

Make no mistake: People will die over this. Mostly women and queer men (because the vast majority of sex work clients are male). The homicide rate of female victims dropped an astonishing 17 percent after Craigslist implemented its “Erotic Services” section, which allowed sex workers to use the Internet to screen clients. SESTA takes that away, so we can expect homicides of female victims to rise.

SESTA is also blatantly unconstitutional. The original form of the bill included an ex post facto clause, which violates one of the most basic principles of the Constitution.

Even with that removed, SESTA is obviously in violation of the First Amendment; this is censorship. It has already been used to justify Tumblr’s purge of all sexual content, which caused a 20% drop in user base and an exodus of erotic artists and sex workers to other platforms, and will disproportionately harm queer youth because Tumblr had previously been one of the Internet’s safest spaces for exploring sexual identity.

And now that the precedent has been set to hold websites responsible for their users, expect to see more of this. We already see sites being held responsible for copyright infringement; but we could soon see similar laws passed punishing sites for “facilitating” illegal drug use, hacking, or hate speech. Operators of communication platforms will be forced to become arms of law enforcement or face prison themselves.

Of course we all want to stop sex trafficking. Everyone agrees on that. But a bill that targets bad things can still be a bad bill.

In honor of Pi Day, I for one welcome our new robot overlords

JDN 2457096 EDT 16:08

Despite my preference to use the Julian Date Number system, it has not escaped my attention that this weekend was Pi Day of the Century, 3/14/15. Yesterday morning we had the Moment of Pi: 3/14/15 9:26:53.58979… We arguably got an encore that evening if we allow 9:00 PM instead of 21:00.

Perhaps it’s a stereotype, or a cheesy segue, but pi and associated mathematical concepts are often associated with computers and robots. Robots are an increasing part of our lives, from the industrial robots that manufacture our cars to the precision-timed satellites that provide our GPS navigation. When you want to know how to get somewhere, you pull out your pocket thinking machine and ask it to commune with the space robots who will guide you to your destination.

There are obvious upsides to these robots—they are enormously productive, and allow us to produce great quantities of useful goods at astonishingly low prices, including computers themselves, creating a positive feedback loop that has literally lowered the price of a given amount of computing power by a factor of one trillion in the latter half of the 20th century. We now very much live in the early parts of a cyberpunk future, and it is due almost entirely to the power of computer automation.
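That trillion-fold figure is easier to grasp as a doubling rate. A quick back-of-the-envelope check (the fifty-year window and the trillion-fold factor are from the paragraph above; the rest is arithmetic):

```python
import math

factor = 1e12  # trillion-fold drop in the price of a unit of computing
years = 50     # roughly the latter half of the 20th century

doublings = math.log2(factor)                 # halvings of price needed
months_per_doubling = years * 12 / doublings  # implied pace

print(round(doublings, 1))           # -> 39.9
print(round(months_per_doubling, 1)) # -> 15.1
```

About forty halvings of price in fifty years, i.e. the cost of a unit of computation halving roughly every fifteen months, slightly faster than the classic eighteen-month Moore’s-law rhythm usually quoted for transistor counts.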

But if you know your SF you may also remember another major part of cyberpunk futures aside from their amazing technology; they also tend to be dystopias, largely because of their enormous inequality. In the cyberpunk future corporations own everything, governments are virtually irrelevant, and most individuals can barely scrape by—and that sounds all too familiar, doesn’t it? This isn’t just something SF authors made up; there really are a number of ways that computer technology can exacerbate inequality and give more power to corporations.

Why? The reason that seems to get the most attention among economists is skill-biased technological change, which is odd, because it’s almost certainly the least important. The idea is that computers can automate many routine tasks (no one disputes that part) and that routine tasks tend to be the sort of thing that uneducated workers generally do more often than educated ones (already this is looking fishy; think about accountants versus artists). But educated workers are better at using computers, and the computers need people to operate them (clearly true). Hence while uneducated workers are substitutes for computers—you can use the computers instead—educated workers are complements for computers—you need programmers and engineers to make the computers work. As computers get cheaper, their substitutes also get cheaper—and thus wages for uneducated workers go down. But their complements get more valuable—and so wages for educated workers go up. Thus, we get more inequality, as high wages get higher and low wages get lower.
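The substitute-versus-complement logic can be made concrete with a toy production function (purely illustrative; the functional form and numbers here are my own invention, not from any empirical study). Computing can be supplied either by machines or by uneducated workers, who are perfect substitutes for them, while educated labor is a complement to both:

```python
# Toy model: output = sqrt(E) * sqrt(C), where computing C can come from
# machines (at price p) or uneducated workers (perfect substitutes), and
# educated labor E is a complement. Illustrative numbers only.

def wages(p, E=100.0):
    # The firm buys computing until its marginal product equals its price:
    #   d(output)/dC = 0.5 * (E / C) ** 0.5 = p   =>   C = E / (4 * p ** 2)
    C = E / (4 * p ** 2)
    w_uneducated = p                    # a perfect substitute can never earn more
    w_educated = 0.5 * (C / E) ** 0.5   # marginal product of E rises with C
    return C, w_uneducated, w_educated

for p in (1.0, 0.5, 0.1):
    C, wu, we = wages(p)
    print(f"computer price {p:4.1f}: uneducated wage {wu:.2f}, educated wage {we:.2f}")
```

As the price of computing falls tenfold, the uneducated wage falls with it (from 1.00 to 0.10) while the educated wage rises (from 0.25 to 2.50): cheaper substitutes, dearer complements, exactly the mechanism described above.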

Or, to put it more succinctly, robots are taking our jobs. Not all our jobs—actually they’re creating jobs at the top for software programmers and electrical engineers—but a lot of our jobs, like welders and metallurgists and even nurses. As the technology improves more and more jobs will be replaced by automation.

The theory seems plausible enough—and in some form is almost certainly true—but as David Card has pointed out, this fails to explain most of the actual variation in inequality in the US and other countries. Card is one of my favorite economists; he is also famous for completely revolutionizing the economics of minimum wage, showing that prevailing theory that minimum wages must hurt employment simply doesn’t match the empirical data.

If it were just that college education is getting more valuable, we’d see a rise in income for roughly the top 40%, since over 40% of American adults have at least an associate’s degree. But we don’t actually see that; in fact contrary to popular belief we don’t even really see it in the top 1%. The really huge increases in income for the last 40 years have been at the top 0.01%—the top 1% of 1%.

Many of the jobs that are now automated also haven’t seen a fall in income; despite the fact that high-frequency trading algorithms do what stockbrokers do a thousand times better (“better” at making markets more unstable and siphoning wealth from the rest of the economy, that is), stockbrokers have seen no such loss in income. Indeed, they simply appropriate the additional income from those computer algorithms—which raises the question of why welders couldn’t do the same thing. And indeed, I’ll explain in a moment why that is exactly what we must do: the robot revolution must also come with a revolution in property rights and income distribution.

No, the real reasons why technology exacerbates inequality are twofold: Patent rents and the winner-takes-all effect.

In an earlier post I already talked about the winner-takes-all effect, so I’ll just briefly summarize it this time around. Under certain competitive conditions, a small fraction of individuals can reap a disproportionate share of the rewards despite being only slightly more productive than those beneath them. This often happens when we have network externalities, in which a product becomes more valuable when more people use it, thus creating a positive feedback loop that makes the products which are already successful wildly so and the products that aren’t successful resigned to obscurity.

Computer technology—more specifically, the Internet—is particularly good at creating such situations. Facebook, Google, and Amazon are all examples of companies that (1) could not exist without Internet technology and (2) depend almost entirely upon network externalities for their business model. They are the winners who take all; thousands of other software companies that were just as good or nearly so are now long forgotten. The winners are not always the same, because the system is unstable; for instance MySpace used to be much more important—and much more profitable—until Facebook came along.

But the fact that a different handful of upper-middle-class individuals can find themselves suddenly and inexplicably thrust into fame and fortune while the rest of us toil in obscurity really isn’t much comfort, now is it? While technically the rise and fall of MySpace can be called “income mobility”, it’s clearly not what we actually mean when we say we want a society with a high level of income mobility. We don’t want a society where the top 10% can by little more than chance find themselves becoming the top 0.01%; we want a society where you don’t have to be in the top 10% to live well in the first place.

Even without network externalities the Internet still nurtures winner-takes-all markets, because digital information can be copied infinitely. When it comes to sandwiches or even cars, each new one is costly to make and costly to transport; it can be more cost-effective to choose the ones that are made near you even if they are of slightly lower quality. But with books (especially e-books), video games, songs, or movies, each individual copy costs nothing to create, so why would you settle for anything but the best? This may well increase the overall quality of the content consumers get—but it also ensures that the creators of that content are in fierce winner-takes-all competition. Hence J.K. Rowling and James Cameron on the one hand, and millions of authors and independent filmmakers barely scraping by on the other. Compare a field like engineering; you probably don’t know a lot of rich and famous engineers (unless you count engineers who became CEOs like Bill Gates and Thomas Edison), but nor is there a large segment of “starving engineers” barely getting by. Though the richest engineers (CEOs excepted) are not nearly as rich as the richest authors, the typical engineer is much better off than the typical author, because engineering is not nearly as winner-takes-all.

But the main topic for today is actually patent rents. These are a greatly underappreciated segment of our economy, and they grow more important all the time. A patent rent is more or less what it sounds like; it’s the extra money you get from owning a patent on something. You can get that money either by literally renting it—charging license fees for other companies to use it—or simply by being the only company who is allowed to manufacture something, letting you sell it at monopoly prices. It’s surprisingly difficult to assess the real value of patent rents—there’s a whole literature on different econometric methods of trying to tackle this—but one thing is clear: Some of the largest, wealthiest corporations in the world are built almost entirely upon patent rents. Drug companies, R&D companies, software companies—even many manufacturing companies like Boeing and GM obtain a substantial portion of their income from patents.

What is a patent? It’s a rule that says you “own” an idea, and anyone else who wants to use it has to pay you for the privilege. The very concept of owning an idea should trouble you—ideas aren’t limited in number, you can easily share them with others. But now think about the fact that most of these patents are owned by corporations, not by the inventors themselves—and you’ll realize that our system of property rights is built around the notion that an abstract entity can own an idea—that one idea can own another.

The rationale behind patents is that they are supposed to provide incentives for innovation—in exchange for investing the time and effort to invent something, you receive a certain amount of time where you get to monopolize that product so you can profit from it. But how long should we give you? And is this really the best way to incentivize innovation?

I contend it is not; when you look at the really important world-changing innovations, very few of them were done for patent rents, and virtually none of them were done by corporations. Jonas Salk was indignant at the suggestion he should patent the polio vaccine; it might have made him a billionaire, but only by letting thousands of children die. (To be fair, here’s a scholar arguing that he probably couldn’t have gotten the patent even if he wanted to—but going on to admit that even then the patent incentive had basically nothing to do with why penicillin and the polio vaccine were invented.)

Who landed on the moon? Hint: It wasn’t Microsoft. Who built the Hubble Space Telescope? Not Sony. The Internet that made Google and Facebook possible was originally invented by DARPA. Even when corporations seem to do useful innovation, it’s usually by profiting from the work of individuals: Edison’s corporation stole most of its good ideas from Nikola Tesla, and by the time the Wright Brothers founded a company their most important work was already done (though at least then you could argue that they did it in order to later become rich, which they ultimately did). Universities and nonprofits brought you the laser, light-emitting diodes, fiber optics, penicillin and the polio vaccine. Governments brought you liquid-fuel rockets, the Internet, GPS, and the microchip. Corporations brought you, uh… Viagra, the Snuggie, and Furbies. Indeed, even Google’s vaunted search algorithms were originally developed at a university with NSF funding. I can think of literally zero examples of a world-changing technology that was actually invented by a corporation in order to secure a patent. I’m hesitant to say that none exist, but clearly the vast majority of seminal inventions have been created by governments and universities.

This has always been true throughout history. Rome’s fire departments were notorious for shoddy service—and wholly privately-owned—but their great aqueducts that still stand today were built as government projects. When China invented paper, turned it into paper money, and built the Great Wall, it was all done on government funding.

The whole idea that patents are necessary for innovation is simply a lie; and even the idea that patents lead to more innovation is quite hard to defend. Imagine if instead of letting Google and Facebook patent their technology all the money they receive in patent rents were instead turned into tax-funded research—frankly is there even any doubt that the results would be better for the future of humanity? Instead of better ad-targeting algorithms we could have had better cancer treatments, or better macroeconomic models, or better spacecraft engines.

When they feel their “intellectual property” (stop and think about that phrase for awhile, and it will begin to seem nonsensical) has been violated, corporations become indignant about “free-riding”; but who is really free-riding here? The people who copy music albums for free—because they cost nothing to copy—or the corporations who make hundreds of billions of dollars selling zero-marginal-cost products using government-invented technology over government-funded infrastructure? (Many of these companies also continue to receive tens or hundreds of millions of dollars in subsidies every year.) In the immortal words of Barack Obama, “you didn’t build that!”

Strangely, most economists seem to be supportive of patents, despite the fact that their own neoclassical models point strongly in the opposite direction. There’s no logical connection between the fixed cost of inventing a technology and the monopoly rents that can be extracted from its patent. There is some connection—albeit a very weak one—between the benefits of the technology and its monopoly profits, since people are likely to be willing to pay more for more beneficial products. But most of the really great benefits are either in the form of public goods that are unenforceable even with patents (go ahead, try charging everyone who benefits from a space telescope’s astronomical discoveries!) or else apply to people who are so needy they can’t possibly pay you (like anti-malaria drugs in Africa), so that willingness-to-pay link really doesn’t get you very far.

I guess a lot of neoclassical economists still seem to believe that willingness-to-pay is actually a good measure of utility, so maybe that’s what’s going on here; if it were, we could at least say that patents are a second-best solution to incentivizing the most important research.

But even then, why use second-best when you have best? Why not devote more of our society’s resources to governments and universities that have centuries of superior track record in innovation? When this is proposed the deadweight loss of taxation is always brought up, but somehow the deadweight loss of monopoly rents never seems to bother anyone. At least taxes can be designed to minimize deadweight loss—and democratic governments actually have incentives to do that; corporations have no interest whatsoever in minimizing the deadweight loss they create so long as their profit is maximized.

I’m not saying we shouldn’t have corporations at all—they are very good at one thing and one thing only, and that is manufacturing physical goods. Cars and computers should continue to be made by corporations—but their technologies are best invented by government. Will this dramatically reduce the profits of corporations? Of course—but I have difficulty seeing that as anything but a good thing.

Why am I talking so much about patents, when I said the topic was robots? Well, it’s largely because of the way patents are assigned that robots taking people’s jobs becomes a bad thing. The patent is owned by the company, which is owned by the shareholders; so when the company makes more money by using robots instead of workers, the workers lose.

If, when a robot took your job, you simply received the income produced by the robot as capital income, you’d probably be better off—you get paid more and you also don’t have to work. (Of course, if you define yourself by your career or can’t stand the idea of getting “handouts”, you might still be unhappy losing your job even though you still get paid for it.)

There’s a subtler problem here though; robots could have a comparative advantage without having an absolute advantage—that is, they could produce less than the workers did before, but at a much lower cost. Where it cost $5 million in wages to produce $10 million in products, it might cost only $3 million in robot maintenance to produce $9 million in products. Hence you can’t just say that we should give the extra profits to the workers; in some cases those extra profits only exist because we are no longer paying the workers.
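The arithmetic of that example is worth spelling out, because it shows exactly where the surplus comes from:

```python
# Numbers from the example above: automation lowers output but raises profit.
workers_output, workers_cost = 10_000_000, 5_000_000  # $10M products, $5M wages
robots_output, robots_cost = 9_000_000, 3_000_000     # $9M products, $3M maintenance

profit_with_workers = workers_output - workers_cost   # $5M
profit_with_robots = robots_output - robots_cost      # $6M
gain = profit_with_robots - profit_with_workers       # $1M extra profit

print(profit_with_workers, profit_with_robots, gain)  # -> 5000000 6000000 1000000
```

The firm gains $1 million by switching even though output falls by $1 million; and that gain is far smaller than the $5 million in wages that disappeared, which is why you can’t simply hand the “extra profits” back to the displaced workers.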

As a society, we still want those transactions to happen, because producing less at lower cost can still make our economy more efficient and more productive than it was before. Those displaced workers can—in theory at least—go on to other jobs where they are needed more.

The problem is that this often doesn’t happen, or it takes such a long time that workers suffer in the meantime. Hence the Luddites; they don’t want to be made obsolete even if it does ultimately make the economy more productive.

But this is where patents become important. The robots were probably invented at a university, but then a corporation took them and patented them, and is now selling them to other corporations at a monopoly price. The manufacturing company that buys the robots now has to spend more in order to use the robots, which drives their profits down unless they stop paying their workers.

If instead those robots were cheap because there were no patents and we were only paying for the manufacturing costs, the workers could be shareholders in the company and the increased efficiency would allow both the employers and the workers to make more money than before.

But should we make the workers into shareholders who can keep their shares after they leave the company? There is a real downside here: once you have your shares, why stay at the company? We call that a “golden parachute” when CEOs do it, which they do all the time; yet most economists are in favor of stock-based compensation for CEOs, and once again I’m having trouble seeing why it’s okay when rich people do it but not when middle-class people do.

Another alternative would be my favorite policy, the basic income: If everyone knows they can depend on a basic income, losing your job to a robot isn’t such a terrible outcome. If the basic income is designed to grow with the economy, then the increased efficiency also raises everyone’s standard of living, as economic growth is supposed to do—instead of simply increasing the income of the top 0.01% and leaving everyone else where they were. (There is a good reason not to make the basic income track economic growth too closely, namely the business cycle; you don’t want the basic income payments to fall in a recession, because that would make the recession worse. Instead they should be smoothed out over multiple years or designed to follow a nominal GDP target, so that they continue to rise even in a recession.)
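The smoothing idea can be sketched simply: tie the payment to a multi-year average of GDP rather than the latest reading, so that a one-year recession doesn’t cut anyone’s check. (The payout share and the GDP series below are invented for illustration, not a policy proposal.)

```python
# Basic income indexed to a trailing average of GDP, so payments keep
# rising through a recession instead of falling with it. Illustrative only.

gdp = [100, 104, 108, 103, 105, 110, 115]  # hypothetical GDP, recession in year 3
share = 0.10                               # fraction of GDP paid out as basic income

def smoothed_income(gdp, share, window=3):
    payments = []
    for i in range(len(gdp)):
        recent = gdp[max(0, i - window + 1): i + 1]  # trailing window of GDP
        payments.append(share * sum(recent) / len(recent))
    return payments

naive = [share * g for g in gdp]  # tracks GDP exactly: falls in the recession year
smooth = smoothed_income(gdp, share)

print([round(p, 2) for p in naive])
print([round(p, 2) for p in smooth])
```

In the naive series the payment drops along with GDP in year 3; the smoothed series keeps rising straight through the recession, which is the point. An explicit nominal-GDP-target rule would do the same job with more machinery.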

We could also combine this with expanded unemployment insurance (explain to me again why you can’t collect unemployment if you weren’t working full-time before being laid off, even if you wanted to be or you’re a full-time student?) and active labor market policies that help people re-train and find new and better jobs. These policies also help people who are displaced for reasons other than robots making their jobs obsolete—obviously there are all sorts of market conditions that can lead to people losing their jobs, and many of these we actually want to happen, because they involve reallocating the resources of our society to more efficient ends.

Why aren’t these sorts of policies on the table? I think it’s largely because we don’t think of it in terms of distributing goods—we think of it in terms of paying for labor. Since the worker is no longer laboring, why pay them?

This sounds reasonable at first, but consider this: Why give that money to the shareholder? What did they do to earn it? All they do is own a piece of the company. They may not have contributed to the goods at all. Honestly, on a pay-for-work basis, we should be paying the robot!

If it bothers you that the worker collects dividends even when he’s not working—why doesn’t it bother you that shareholders do exactly the same thing? By definition, a shareholder is paid according to what they own, not what they do. All this reform would do is make workers into owners.

If you justify the shareholder’s wealth by his past labor, again you can do exactly the same to justify worker shares. (And as I said above, if you’re worried about the moral hazard of workers collecting shares and leaving, you should worry just as much about golden parachutes.)

You can even justify a basic income this way: You paid taxes so that you could live in a society that would protect you from losing your livelihood—and if you’re just starting out, your parents paid those taxes and you will soon enough. Theoretically there could be “welfare queens” who live their whole lives on the basic income, but empirical data shows that very few people actually want to do this, and when given opportunities most people try to find work. Indeed, even those who don’t, rarely seem to be motivated by greed (even though, capitalists tell us, “greed is good”); instead they seem to be de-motivated by learned helplessness after trying and failing for so long. They don’t actually want to sit on the couch all day and collect welfare payments; they simply don’t see how they can compete in the modern economy well enough to actually make a living from work.

One thing is certain: We need to detach income from labor. As a society we need to get over the idea that a human being’s worth is decided by the amount of work they do for corporations. We need to get over the idea that our purpose in life is a job, a career, in which our lives are defined by the work we do that can be neatly monetized. (I admit, I suffer from the same cultural blindness at times, feeling like a failure because I can’t secure the high-paying and prestigious employment I want. I feel this clear sense that my society does not value me because I am not making money, and it damages my ability to value myself.)

As robots do more and more of our work, we will need to redefine the way we live by something else, like play, or creativity, or love, or compassion. We will need to learn to see ourselves as valuable even if nothing we do ever sells for a penny to anyone else.

A basic income can help us do that; it can redefine our sense of what it means to earn money. Instead of the default being that you receive nothing because you are worthless unless you work, the default is that you receive enough to live on because you are a human being of dignity and a citizen. This is already the experience of people who have substantial amounts of capital income; they can fall back on their dividends if they ever can’t or don’t want to find employment. A basic income would turn us all into capital owners, shareholders in the centuries of established capital that has been built by our forebears in the form of roads, schools, factories, research labs, cars, airplanes, satellites, and yes—robots.