The right (and wrong) way to buy stocks

July 9, JDN 2457944

Most people don’t buy stocks at all. Stock equity is the quintessential form of financial wealth, and 42% of financial net wealth in the United States is held by the top 1%, while the bottom 80% owns essentially none.

Half of American households do not have any private retirement savings at all, and are depending either on employee pensions or Social Security for their retirement plans.

This is not necessarily irrational. In order to save for retirement, one must first have sufficient income to live on. Indeed, I got very annoyed at a “financial planning seminar” for grad students I attended recently, which tried to scare us about the fact that almost none of us had any meaningful retirement savings. No, we shouldn’t have meaningful retirement savings, because our income is currently much lower than what we can expect to get once we graduate and enter our professions. It doesn’t make sense for someone scraping by on a $20,000 per year graduate student stipend to be saving up for retirement, when they can quite reasonably expect to be making $70,000-$100,000 per year once they finally get that PhD and become a professional economist (or sociologist, or psychologist or physicist or statistician or political scientist or material, mechanical, chemical, or aerospace engineer, or college professor in general, etc.). Even social workers, historians, and archaeologists make a lot more money than grad students. If you are already in the workforce and only expect to be getting small raises in the future, maybe you should start saving for retirement in your 20s. If you’re a grad student, don’t bother. It’ll be a lot easier to save once your income triples after graduation. (Personally, I keep about $700 in stocks mostly to get a feel for owning and trading stocks, experience I will apply later, not out of any serious expectation of supporting a retirement fund. Even at Warren Buffett-level returns I wouldn’t make more than $200 a year this way.)

Total US retirement savings are over $25 trillion, which… does actually sound low to me. In a country with a GDP now over $19 trillion, that means we’ve only saved a year and change of total income. If we had a rapidly growing population this might be fine, but we don’t; our population is fairly stable. People seem to be relying on economic growth to provide for their retirement, and since we are almost certainly at steady-state capital stock and fairly near full employment, that means waiting for technological advancement.

So basically people are hoping that we get to the WALL-E future where the robots will provide for us. And hey, maybe we will; but assuming that we haven’t abandoned capitalism by then (as they certainly haven’t in WALL-E), maybe you should try to make sure you own some assets to pay for those robots?

But okay, let’s set all that aside, and say you do actually want to save for retirement. How should you go about doing it?

Stocks are clearly the way to go. A certain proportion of government bonds also makes sense as a hedge against risk, and maybe you should even throw in the occasional commodity future. I wouldn’t recommend oil or coal at this point—either we do something about climate change and those prices plummet, or we don’t and we’ve got bigger problems—but it’s hard to go wrong with corn or steel, and for this one purpose it also can make sense to buy gold as well. Gold is not a magical panacea or the foundation of all wealth, but its price does tend to correlate negatively with stock returns, so it’s not a bad risk hedge.

Don’t buy exotic derivatives unless you really know what you’re doing—they can make a lot of money, but they can lose it just as fast—and never buy non-portfolio assets as a financial investment. If your goal is to buy something to make money, make it something you can trade at the click of a button. Buy a house because you want to live in that house. Buy wine because you like drinking wine. Don’t buy a house in the hopes of making a financial return—you’ll have leveraged your entire portfolio 10 to 1 while leaving it completely undiversified. And the problem with investing in wine, ironically, is its lack of liquidity.

The core of your investment portfolio should definitely be stocks. The biggest reason for this is the equity premium; equities—that is, stocks—get returns so much higher than other assets that it’s actually baffling to most economists. Bond returns are currently terrible, while stock returns are currently fantastic. The former is currently near 0% in inflation-adjusted terms, while the latter is closer to 16%. If this continues for the next 10 years, that means that $1000 put in bonds would be worth… $1000, while $1000 put in stocks would be worth $4400. So, do you want to keep the same amount of money, or quadruple your money? It’s up to you.
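The arithmetic behind that comparison is just compound growth. Here is a quick sketch in Python, using the roughly 0% and 16% real-return figures quoted above (illustrative, not a forecast):

```python
def future_value(principal, annual_rate, years):
    """Value of principal after compounding annual_rate for the given years."""
    return principal * (1 + annual_rate) ** years

bonds = future_value(1000, 0.00, 10)   # ~0% real return on bonds
stocks = future_value(1000, 0.16, 10)  # ~16% real return on stocks

print(round(bonds))   # 1000
print(round(stocks))  # 4411
```

The stock figure comes out to about $4,411, which is where the quadruple-your-money claim comes from.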

Higher risk is generally associated with higher return, because rational investors will only accept additional risk when they get some additional benefit from it; and stocks are indeed riskier than most other assets, but not that much riskier. For this to be rational, people would need to be extremely risk-averse, to the point where they should never drive a car or eat a cheeseburger. (Of course, human beings are terrible at assessing risk, so what I really think is going on is that people wildly underestimate the risk of driving a car and wildly overestimate the risk of buying stocks.)

Next, you may be asking: How does one buy stocks? This doesn’t seem to be something people teach in school.

You will need a brokerage of some sort. There are many such brokerages, but they are basically all equivalent except for the fees they charge. Some of them will try to offer you various bells and whistles to justify whatever additional cut they get of your trades, but they are almost never worth it. You should choose one with as low a trade fee as possible, because even a few dollars here and there can add up surprisingly quickly.

Fortunately, there is now at least one well-established reliable stock brokerage available to almost anyone that has a standard trade fee of zero. They are called Robinhood, and I highly recommend them. If they have any downside, it is ironically that they make trading too easy, so you can be tempted to do it too often. Learn to resist that urge, and they will serve you well and cost you nothing.

Now, which stocks should you buy? There are a lot of them out there. The answer I’m going to give may sound strange: All of them. You should buy all the stocks.

All of them? How can you buy all of them? Wouldn’t that be ludicrously expensive?

No, it’s quite affordable in fact. In my little $700 portfolio, I own every single stock in the S&P 500 and the NASDAQ. If I get a little extra money to save, I may expand to own every stock in Europe and China as well.

How? A clever little arrangement called an exchange-traded fund, or ETF for short. An ETF is actually a form of mutual fund, where the fund purchases shares in a huge array of stocks, and adjusts what they own to precisely track the behavior of an entire stock market (such as the S&P 500). Then what you can buy is shares in that mutual fund, which are usually priced somewhere between $100 and $300 each. As the price of stocks in the market rises, the price of shares in the mutual fund rises to match, and you can reap the same capital gains they do.
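As a rough illustration of how tracking works, here is a toy index fund in Python; the tickers, prices, and weights are all made up, and real funds use far more careful replication than this:

```python
# Toy index fund: hold each stock in proportion to its index weight,
# so the fund's value moves with the index as a whole.
# All tickers, prices, and weights here are hypothetical.
prices = {"AAA": 150.0, "BBB": 80.0, "CCC": 40.0}
weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}  # fractions of the index

def index_level(prices, weights):
    """Weighted average price, the way a capitalization-weighted index is quoted."""
    return sum(weights[s] * prices[s] for s in prices)

before = index_level(prices, weights)
prices["AAA"] *= 1.10  # AAA rises 10%
after = index_level(prices, weights)

print(round(after / before - 1, 4))  # 0.0701
```

Note that the fund share rises by the index’s roughly 7%, not by any single stock’s 10%; that is the whole point of tracking.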

A major advantage of this arrangement, especially for a typical person who isn’t well-versed in stock markets, is that it requires almost no attention at your end. You can buy into a few ETFs and then leave your money to sit there, knowing that it will grow as long as the overall stock market grows.

But there is an even more important advantage, which is that it maximizes your diversification. I said earlier that you shouldn’t buy a house as an investment, because it’s not at all diversified. What I mean by this is that the price of that house depends only on one thing—that house itself. If the price of that house changes, the full change is reflected immediately in the value of your asset. In fact, if you have 10% down on a mortgage, the full change is reflected ten times over in your net wealth, because you are leveraged 10 to 1.
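The leverage arithmetic is worth making explicit; a minimal sketch, with hypothetical numbers:

```python
def equity_change(house_price, down_fraction, price_change_fraction):
    """Fractional change in your equity from a given change in the house price,
    when you put down only down_fraction of the price (hypothetical figures)."""
    equity = house_price * down_fraction
    gain = house_price * price_change_fraction
    return gain / equity

# With 10% down, a 10% price drop wipes out 100% of your equity:
print(equity_change(200_000, 0.10, -0.10))  # -1.0
```

A 10% price drop costing you 100% of your equity is exactly what being leveraged 10 to 1 means in practice.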

An ETF is basically the opposite of that. Instead of its price depending on only one thing, it depends on a vast array of things, averaging over the prices of literally hundreds or thousands of different corporations. When some fall, others will rise. On average, as long as the economy continues to grow, they will rise.

The result is that you can get the same average return you would from owning stocks, while dramatically reducing the risk you bear.
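You can see this averaging effect in a small Monte Carlo simulation. The return distribution here is made up purely for illustration, and I assume the stocks are independent, which real stocks are not; the correlated part is exactly the market risk discussed later in this post:

```python
import random

random.seed(0)

def portfolio_returns(n_stocks, n_trials, mean=0.16, sd=0.40):
    """Average return of an equal-weight portfolio of n_stocks independent
    stocks, simulated n_trials times (parameters are purely illustrative)."""
    results = []
    for _ in range(n_trials):
        draws = [random.gauss(mean, sd) for _ in range(n_stocks)]
        results.append(sum(draws) / n_stocks)
    return results

def std_dev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

single = portfolio_returns(1, 10_000)    # one randomly-chosen stock
basket = portfolio_returns(500, 2_000)   # a 500-stock index

# Nearly identical average returns, but far less spread for the basket:
print(round(std_dev(single), 3))  # close to 0.40
print(round(std_dev(basket), 3))  # close to 0.40 / sqrt(500), about 0.018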

To see how this works, consider the past year’s performance of Apple (AAPL), which has done very well, versus Fitbit (FIT), which has done very poorly, compared with the NASDAQ as a whole, of which they are both part.

AAPL has grown over 50% (40 log points) in the last year; so if you’d bought $1000 of their stock a year ago it would be worth $1500. FIT has fallen over 60% (over 90 log points) in the same time, so if you’d bought $1000 of their stock instead, it would be worth only $400. That’s the risk you’re taking by buying individual stocks.
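For anyone unfamiliar with the unit: log points are 100 times the natural logarithm of the price ratio, a convenient measure because gains and losses add up across periods. A quick check of those conversions:

```python
import math

def log_points(ratio):
    """Convert a price ratio into log points: 100 times the natural log."""
    return 100 * math.log(ratio)

print(round(log_points(1.5), 1))  # 40.5, for a 50% gain
print(round(log_points(0.4), 1))  # -91.6, for a 60% fall
```

Note the asymmetry: a 50% gain is only about 41 log points, while a 60% loss is about 92.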

By contrast, if you had simply bought a NASDAQ ETF a year ago, your return would have been 35%, so that $1000 would be worth $1350.

Of course, that does mean you don’t get as high a return as you would if you had managed to choose the highest-performing stock on that index. But you’re unlikely to be able to do that, as even professional financial forecasters are worse than random chance. So, would you rather take a 50-50 shot between gaining $500 and losing $600, or would you prefer a guaranteed $350?
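Spelled out as expected values, using the figures above:

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gamble = expected_value([(0.5, 500), (0.5, -600)])  # pick one of the two stocks
sure_thing = expected_value([(1.0, 350)])           # buy the index instead

print(gamble)      # -50.0
print(sure_thing)  # 350.0
```

Of course, AAPL and FIT are only two stocks out of the whole index, which is why the index return here is not simply their average; the point is just how lopsided the gamble is.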

If higher return is not your only goal, and you want to be socially responsible in your investments, there are ETFs for that too. Instead of buying the whole stock market, these funds buy only a section of the market that is associated with some social benefit, such as lower carbon emissions or better representation of women in management. On average, you can expect a slightly lower return this way; but you are also helping to make a better world. And still your average return is generally going to be better than it would be if you tried to pick individual stocks yourself. In fact, certain classes of socially-responsible funds—particularly green tech and women’s representation—actually perform better than conventional ETFs, probably because most investors undervalue renewable energy and, well, also undervalue women. Companies with women CEOs perform better, yet their stocks trade at lower prices; why would you not want to buy them?

Of course, ETFs are not literally guaranteed—the market as a whole does move up and down, so it is possible to lose money even by buying ETFs. But because the risk is so much lower, your odds of losing money are considerably reduced. And on average, an ETF will, by construction, perform exactly as well as the average performance of a randomly-chosen stock from that market.

Indeed, I am quite convinced that most people don’t take enough risk on their investment portfolios, because they confuse two very different types of risk.

The kind you should be worried about is idiosyncratic risk, which is risk tied to a particular investment—the risk of having chosen Fitbit instead of Apple. But a lot of the time people seem to be avoiding market risk, which is the risk tied to changes in the market as a whole. Avoiding market risk does reduce your chances of losing money, but it does so at the cost of reducing your chances of making money even more.

Idiosyncratic risk is basically all downside. Yeah, you could get lucky; but you could just as well get unlucky. Far better if you could somehow average over that risk and get the average return. But with diversification, that is exactly what you can do. Then you are left only with market risk, which is the kind of risk that is directly tied to higher average returns.

Young people should especially be willing to take more risk in their portfolios. As you get closer to retirement, it becomes important to have more certainty about how much money will really be available to you once you retire. But if retirement is still 30 years away, the thing you should care most about is maximizing your average return. That means taking on a lot of market risk, which is then less risky overall if you diversify away the idiosyncratic risk.

I hope that I have now convinced you to avoid buying individual stocks. For most people most of the time, this is the advice you need to hear. Don’t try to forecast the market, don’t try to outperform the indexes; just buy and hold some ETFs and leave your money alone to grow.

But if you really must buy individual stocks, either because you think you are savvy enough to beat the forecasters or because you enjoy the gamble, here’s some additional advice I have for you.

My first piece of advice is that you should still buy ETFs. Even if you’re willing to risk some of your wealth on greater gambles, don’t risk all of it that way.

My second piece of advice is to buy primarily large, well-established companies (like Apple or Microsoft or Ford or General Electric). Their stocks certainly do rise and fall, but they are unlikely to completely crash and burn the way that young companies like Fitbit can.

My third piece of advice is to watch the price-earnings ratio (P/E for short). Roughly speaking, this is the number of years it would take for the profits of this corporation to pay off the value of its stock. If they pay most of their profits in dividends, it is approximately how many years you’d need to hold the stock in order to get as much in dividends as you paid for the shares.

Do you want P/E to be large or small? You want it to be small. This is called value investing, but it really should just be called “investing”. The alternatives to value investing are actually not investment but speculation and arbitrage. If you are actually investing, you are buying into companies that are currently undervalued; you want them to be cheap.

Of course, it is not always easy to tell whether a company is undervalued. A common rule-of-thumb is that you should aim for a P/E around 20 (20 years to pay off means about 5% return in dividends); if the P/E is below 10, it’s a fantastic deal, and if it is above 30, it might not be worth the price. But reality is of course more complicated than this. You don’t actually care about current earnings, you care about future earnings, and it could be that a company which is earning very little now will earn more later, or vice-versa. The more you can learn about a company, the better judgment you can make about their future profitability; this is another reason why it makes sense to buy large, well-known companies rather than tiny startups.
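The rule of thumb is just the reciprocal relationship between P/E and earnings yield. A small sketch in Python; the verdict thresholds are the ones quoted above, not any kind of industry standard:

```python
def earnings_yield(pe_ratio):
    """Annual earnings as a fraction of the share price: the reciprocal of P/E."""
    return 1 / pe_ratio

def rough_verdict(pe_ratio):
    """The rule of thumb from the text: below 10 is a bargain, above 30 is pricey."""
    if pe_ratio < 10:
        return "fantastic deal"
    elif pe_ratio > 30:
        return "might not be worth the price"
    return "reasonable"

print(earnings_yield(20))  # 0.05
print(rough_verdict(8))    # fantastic deal
```

A P/E of 20 is an earnings yield of 5%, which is where the “about 5% return in dividends” figure comes from, assuming most earnings are paid out.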

My final piece of advice is not to trade too frequently. Especially with something like Robinhood where trades are instant and free, it can be tempting to try to ride every little ripple in the market. Up 0.5%? Sell! Down 0.3%? Buy! And yes, in principle, if you could perfectly forecast every such fluctuation, this would be optimal—and make you an almost obscene amount of money. But you can’t. We know you can’t. You need to remember that you can’t. You should only trade if one of two things happens: Either your situation changes, or the company’s situation changes. If you need the money, sell, to get the money. If you have extra savings, buy, to give those savings a good return. If something bad happened to the company and their profits are going to fall, sell. If something good happened to the company and their profits are going to rise, buy. Otherwise, hold. In the long run, those who hold stocks longer are better off.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (Note that the video is on HBO, so there is foul language.)

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they are proven corrupt. It’s a matter of public record.

Standard & Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns on you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Such a country’s real fears should be inflation and unemployment—its monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about). If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
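That argument amounts to a one-line formula: if a fraction p of borrowers default and repay nothing, a risk-neutral lender must charge the survivors a rate r satisfying (1 + r)(1 - p) = 1 + r_f, where r_f is the risk-free rate. A stylized sketch (real defaults involve partial recovery, which this ignores):

```python
def breakeven_rate(risk_free_rate, default_prob):
    """Rate a risk-neutral lender must charge so that expected repayment
    matches the risk-free return, assuming defaulters repay nothing."""
    return (1 + risk_free_rate) / (1 - default_prob) - 1

# A 5% default probability pushes the required rate from 2% to about 7.4%:
print(round(breakeven_rate(0.02, 0.05), 4))  # 0.0737
```

The self-fulfilling prophecy I mentioned is visible in the formula: raise the assumed default probability and the required rate rises, which in turn makes default more likely.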

But if a country is “high risk” in the sense of macroeconomic instability undermining the real value of its debt, we want to ensure that it can restore macroeconomic stability. But we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman writes about frequently, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one, because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). People are given competing offers that contain an amount of money and a number of shocks to be delivered, either to you or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.
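To make the “solidarity coefficient” concrete, here is a toy calculation; the dollar and shock numbers are entirely hypothetical, not Crockett’s data:

```python
def price_per_shock(money, shocks):
    """Implied price per shock from an accepted (money, shocks) offer."""
    return money / shocks

# Hypothetical responses in a Crockett-style task:
self_rate = price_per_shock(10.0, 20)   # accepts $10 to bear 20 shocks yourself
other_rate = price_per_shock(15.0, 20)  # demands $15 to deliver 20 shocks to another

# A ratio above 1 means others' pain is priced above your own:
print(other_rate / self_rate)  # 1.5
```

With these hypothetical numbers the solidarity coefficient would be 1.5: the subject treats a stranger’s shock as half again as costly as their own.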

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble dealing with shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves something else out that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you do believe that the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

What do we mean by “risk”?

JDN 2457118 EDT 20:50.

In an earlier post I talked about how, empirically, expected utility theory can’t explain the fact that we buy both insurance and lottery tickets, and how, normatively, it really doesn’t make a lot of sense to buy lottery tickets precisely because of what expected utility theory says about them.

But today I’d like to talk about one of the major problems with expected utility theory, which I consider one of the major unexplored frontiers of economics: Expected utility theory treats all kinds of risk exactly the same.

In reality there are three kinds of risk: The first is what I’ll call classical risk, which is like the game of roulette; the odds are well-defined and known in advance, and you can play the game a large number of times and average out the results. This is where expected utility theory really shines; if you are dealing with classical risk, expected utility is obviously the way to go and Von Neumann and Morgenstern quite literally proved mathematically that anything else is irrational.

The second is uncertainty, a distinction most famously expounded by Frank Knight, an economist at the University of Chicago. (Chicago is a funny place; on the one hand they are a haven for the madness that is Austrian economics; on the other hand they have led the charge in behavioral and cognitive economics. Knight was a perfect fit, because he was a little of both.) Uncertainty is risk under ill-defined or unknown probabilities, where there is no way to play the game twice. Most real-world “risk” is actually uncertainty: Will the People’s Republic of China collapse in the 21st century? How many deaths will global warming cause? Will human beings ever colonize Mars? Is P = NP? None of those questions have known answers, but neither can we clearly assign probabilities; either P = NP or it isn’t, as a matter of mathematical fact (or, like the continuum hypothesis, it’s independent of ZFC, the most bizarre possibility of all), and it’s not as if someone is rolling dice to decide how many people global warming will kill. You can think of this in terms of “possible worlds”, though actually most modal theorists would tell you that we can’t even say that P=NP is possible (nor can we say it isn’t possible!) because, as a necessary statement, it can only be possible if it is actually true; this follows from the S5 axiom of modal logic, and you know what, even I am already bored with that sentence. Clearly there is some sense in which P=NP is possible, and if that’s not what modal logic says then so much the worse for modal logic. I am not a modal realist (not to be confused with a moral realist, which I am); I don’t think that possible worlds are real things out there somewhere. I think possibility is ultimately a statement about ignorance, and since we don’t know that P=NP is false, I contend that it is possible that it is true.
Put another way, it would not be obviously irrational to place a bet that P=NP will be proved true by 2100; but if we can’t even say that it is possible, how can that be?

Anyway, that’s the mess that uncertainty puts us in, and almost everything is made of uncertainty. Expected utility theory basically falls apart under uncertainty; it doesn’t even know how to give an answer, let alone one that is correct. In reality what we usually end up doing is waving our hands and trying to assign a probability anyway—because we simply don’t know what else to do.

The third one is not one that’s usually talked about, yet I think it’s quite important; I will call it one-shot risk. The probabilities are known or at least reasonably well approximated, but you only get to play the game once. You can also generalize to few-shot risk, where you can play a small number of times, where “small” is defined relative to the probabilities involved; this is a little vaguer, but basically what I have in mind is that even though you can play more than once, you can’t play enough times to realistically expect the rarest outcomes to occur. Expected utility theory almost works on one-shot and few-shot risk, but you have to be very careful about taking it literally.

I think an example makes things clearer: Playing the lottery is a few-shot risk. You can play the lottery multiple times, yes; potentially hundreds of times in fact. But hundreds of times is nothing compared to the 1 in 400 million chance you have of actually winning. You know that probability; it can be computed exactly from the rules of the game. But nonetheless expected utility theory runs into some serious problems here.

If we were playing a classical risk game, expected utility would obviously be right. So for example, suppose you know that you will live one billion years, and you are offered the chance to play a game (somehow compensating for the mind-boggling levels of inflation, economic growth, transhuman transcendence, and/or total extinction that will occur during that vast expanse of time) in which each year you can either have a guaranteed $40,000 of inflation-adjusted income or a 99.999,999,75% chance of $39,999 of inflation-adjusted income and a 0.000,000,25% chance of $100 million in inflation-adjusted income—which will disappear at the end of the year, along with everything you bought with it, so that each year you start afresh. Should you take the second option? Absolutely not, and expected utility theory explains why; the one or two years where you’ll experience 8 QALY per year aren’t worth dropping from 4.602056 QALY per year to 4.602049 QALY per year for the other nine hundred and ninety-eight million years. (Can you even fathom how long that is? From here, one billion years is all the way back to the Mesoproterozoic Era, which we think is when single-celled organisms first began to reproduce sexually. The gain is to be Mitt Romney for a year or two; the loss is the value of a dollar each year over and over again for the entire time that has elapsed since the existence of gamete meiosis.) I think it goes without saying that this whole situation is almost unimaginably bizarre. Yet that is implicitly what we’re assuming when we use expected utility theory to assess whether you should buy lottery tickets.
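
Those QALY figures are consistent with a simple base-10 logarithmic utility of income (log10($40,000) is about 4.60206), which I’ll assume here; a quick sketch of the comparison:

```python
import math

def utility(income):
    # Assumed log10 utility: log10(40000) ~ 4.60206, matching the figures above
    return math.log10(income)

p_win = 0.0000000025  # the 0.000,000,25% annual jackpot chance

eu_safe = utility(40_000)
eu_gamble = (1 - p_win) * utility(39_999) + p_win * utility(100_000_000)

print(f"Guaranteed: {eu_safe:.6f} QALY/yr; gamble: {eu_gamble:.6f} QALY/yr")
# Over a billion years, that tiny per-year gap adds up:
print(f"Total QALY lost by gambling: {(eu_safe - eu_gamble) * 1_000_000_000:,.0f}")
```

The per-year difference is microscopic, which is exactly why it takes a billion-year frame to make averaging over it meaningful.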

The real situation is more like this: There’s one world you can end up in, and almost certainly will, in which you buy lottery tickets every year and end up with an income of $39,999 instead of $40,000. There is another world, so unlikely as to be barely worth considering, yet not totally impossible, in which you get $100 million and you are completely set for life and able to live however you want for the rest of your life. Averaging over those two worlds is a really weird thing to do; what do we even mean by doing that? You don’t experience one world 0.000,000,25% as much as the other (whereas in the billion-year scenario, that is exactly what you do); you only experience one world or the other.

In fact, it’s worse than this, because if a classical risk game is such that you can play it as many times as you want as quickly as you want, we don’t even need expected utility theory—expected money theory will do. If you can play a game where you have a 50% chance of winning $200,000 and a 50% chance of losing $50,000, which you can play up to once an hour for the next 48 hours, and you will be extended any credit necessary to cover any losses, you’d be insane not to play; your 99.9% confidence level of wealth at the end of the two days is from $850,000 to $6,180,000. While you may lose money for a while, it is vanishingly unlikely that you will end up losing more than you gain.
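
Those confidence bounds are easy to sanity-check by simulation; here is a quick Monte Carlo sketch (the seed and trial count are arbitrary choices for illustration):

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility

def two_days():
    """48 hourly plays: 50% chance of +$200,000, 50% chance of -$50,000."""
    return sum(200_000 if random.random() < 0.5 else -50_000 for _ in range(48))

trials = sorted(two_days() for _ in range(100_000))
low, high = trials[50], trials[-51]  # empirical central 99.9% range
losers = sum(t < 0 for t in trials) / len(trials)

print(f"99.9% of outcomes fall between ${low:,} and ${high:,}")
print(f"Fraction who end up net losers: {losers:.5f}")
```

The fraction of net losers over the full two days comes out vanishingly small, just as the text claims.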

Yet if you are offered the chance to play this game only once, you probably should not take it, and the reason comes back to expected utility. If you have good access to credit you might consider it, because going $50,000 into debt is bad but not unbearably so (I did, going to college) and gaining $200,000 might actually be enough of an improvement to justify the risk. Then the effect can be averaged over your lifetime; let’s say you make $50,000 per year over 40 years. Losing $50,000 means making your average income $48,750, while gaining $200,000 means making your average income $55,000; so your QALY per year go from a guaranteed 4.70 to a 50% chance of 4.69 and a 50% chance of 4.74; that raises your expected utility from 4.70 to 4.715.

But if you don’t have good access to credit and your income for this year is $50,000, then losing $50,000 means losing everything you have and living in poverty or even starving to death. The benefits of raising your income by $200,000 this year aren’t nearly great enough to take that chance. Your expected utility goes from 4.70 to a 50% chance of 5.30 and a 50% chance of zero.
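
Assuming the same log10 utility that fits the 4.70/4.69/4.74 figures, the with-credit arithmetic works out like so:

```python
import math

def utility(income):
    # Assumed log10 utility, matching the QALY figures in the text
    return math.log10(income)

years, base = 40, 50_000

# With good credit, a one-time loss or gain is spread across a career:
avg_if_lose = base - 50_000 / years    # $48,750 per year
avg_if_win  = base + 200_000 / years   # $55,000 per year

eu_decline = utility(base)
eu_gamble = 0.5 * utility(avg_if_lose) + 0.5 * utility(avg_if_win)
print(f"Decline: {eu_decline:.3f} QALY/yr; gamble: {eu_gamble:.3f} QALY/yr")

# Without credit, this year's outcomes are $250,000 or $0, and log utility
# of zero income diverges to negative infinity: the formal version of
# "you might starve."
```

With credit the gamble comes out ahead; without it, no finite gain can offset the bottom falling out.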

So expected utility theory only seems to properly apply if we can play the game enough times that the improbable events are likely to happen a few times, but not so many times that we can be sure our money will approach the average. And that’s assuming we know the odds and we aren’t just stuck with uncertainty.

Unfortunately, I don’t have a good alternative; so far expected utility theory may actually be the best we have. But it remains deeply unsatisfying, and I like to think we’ll one day come up with something better.

Prospect Theory: Why we buy insurance and lottery tickets

JDN 2457061 PST 14:18.

Today’s topic is called prospect theory. Prospect theory is basically what put cognitive economics on the map; it was the knock-down argument that Kahneman used to show that human beings are not completely rational in their economic decisions. It all goes back to a 1979 paper by Kahneman and Tversky that now has 34,000 citations (yes, we’ve been having this argument for a rather long time now). In the 1990s it was refined into cumulative prospect theory, which is more mathematically precise but basically the same idea.

What was that argument? People buy both insurance and lottery tickets.

The “both” is very important. Buying insurance can definitely be rational—indeed, typically is. Buying lottery tickets could theoretically be rational, under very particular circumstances. But they cannot both be rational at the same time.

To see why, let’s talk some more about marginal utility of wealth. Recall that a dollar is not worth the same to everyone; to a billionaire a dollar is a rounding error, to most of us it is a bottle of Coke, but to a starving child in Ghana it could be life itself. We typically observe diminishing marginal utility of wealth—the more money you have, the less another dollar is worth to you.

If we sketch a graph of your utility versus wealth it would look something like this:

Marginal_utility_wealth

Notice how it increases as your wealth increases, but at a rapidly diminishing rate.

If you have diminishing marginal utility of wealth, you are what we call risk-averse. If you are risk-averse, you’ll (sometimes) want to buy insurance. Let’s suppose the units on that graph are tens of thousands of dollars. Suppose you currently have an income of $50,000. You are offered the chance to pay $10,000 a year to buy unemployment insurance, so that if you lose your job, instead of making $10,000 on welfare you’ll make $30,000 on unemployment. You think you have about a 20% chance of losing your job.

If you had constant marginal utility of wealth, this would not be a good deal for you. Your expected value of money would be reduced if you buy the insurance: Before, you had an 80% chance of $50,000 and a 20% chance of $10,000, so your expected amount of money is $42,000. With the insurance you have an 80% chance of $40,000 and a 20% chance of $30,000, so your expected amount of money is $38,000. Why would you take such a deal? That’s like giving up $4,000, isn’t it?

Well, let’s look back at that utility graph. At $50,000 your utility is 1.80, uh… units, er… let’s say QALY. 1.80 QALY per year, meaning you live 80% better than the average human. Maybe, I guess? Doesn’t seem too far off. In any case, the units of measurement aren’t that important.

Insurance_options

By buying insurance your effective income goes down to $40,000 per year, which lowers your utility to 1.70 QALY. That’s a fairly significant hit, but it’s not unbearable. If you lose your job (20% chance), you’ll fall down to $30,000 and have a utility of 1.55 QALY. Again, noticeable, but bearable. Your overall expected utility with insurance is therefore 1.67 QALY.

But what if you don’t buy insurance? Well then you have a 20% chance of taking a big hit and falling all the way down to $10,000 where your utility is only 1.00 QALY. Your expected utility is therefore only 1.64 QALY. You’re better off going with the insurance.
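
Putting the two comparisons side by side, using the utility values read off the graph:

```python
# Utility values read off the graph above (units: QALY per year)
utility = {50_000: 1.80, 40_000: 1.70, 30_000: 1.55, 10_000: 1.00}
p_loss = 0.20  # chance of losing your job

# Expected money favors skipping the insurance...
money_insured = (1 - p_loss) * 40_000 + p_loss * 30_000    # $38,000
money_uninsured = (1 - p_loss) * 50_000 + p_loss * 10_000  # $42,000

# ...but expected utility favors buying it.
eu_insured = (1 - p_loss) * utility[40_000] + p_loss * utility[30_000]
eu_uninsured = (1 - p_loss) * utility[50_000] + p_loss * utility[10_000]

print(f"Expected money:   ${money_uninsured:,.0f} uninsured vs ${money_insured:,.0f} insured")
print(f"Expected utility: {eu_uninsured:.2f} uninsured vs {eu_insured:.2f} insured")
```

Same numbers, opposite recommendations: that gap between $4,000 of expected money and 30 milliQALY of expected utility is the whole business model of insurance.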

And this is how insurance companies make a profit (well, the legitimate way, anyway; they also like to gouge people and deny cancer patients’ claims, of course); on average, they make more from each customer than they pay out, but customers are still better off because they are protected against big losses. In this case, the insurance company profits $4,000 per customer per year, customers each get 30 milliQALY per year (about the same utility as an extra $2,000, more or less), everyone is happy.

But if this is your marginal utility of wealth—and it most likely is, approximately—then you would never want to buy a lottery ticket. Let’s suppose you actually have pretty good odds; it’s a 1 in 1 million chance of $1 million for a ticket that costs $2. This means that the state is going to take in about $2 million for every $1 million they pay out to a winner.

That’s about as good as your odds for a lottery are ever going to get; usually it’s more like a 1 in 400 million chance of $150 million for $1, which is an even bigger difference than it sounds, because $150 million is nowhere near 150 times as good as $1 million. It’s a bit better from the state’s perspective though, because they get to receive $400 million for every $150 million they pay out.

For your convenience I have zoomed out the graph so that you can see 100, which is an income of $1 million (which you’ll have this year if you win; to get it next year, you’ll have to play again). You’ll notice I did not have to zoom out the vertical axis, because 20 times as much money only ends up being about 2 times as much utility. I’ve marked with lines the utility of $50,000 (1.80, as we said before) versus $1 million (3.30).

Lottery_utility

What about the utility of $49,998 which is what you’ll have if you buy the ticket and lose? At this number of decimal places you can’t see the difference, so I’ll need to go out a few more. At $50,000 you have 1.80472 QALY. At $49,998 you have 1.80470 QALY. That $2 only costs you 0.00002 QALY, 20 microQALY. Not much, really; but of course not, it’s only $2.

How much does the 1 in 1 million chance of $1 million give you? Even less than that. Remember, the utility gain for going from $50,000 to $1 million is only 1.50 QALY. So you’re adding one one-millionth of that in expected utility, which is of course 1.5 microQALY, or 0.0000015 QALY.

That $2 may not seem like it’s worth much, but that 1 in 1 million chance of $1 million is worth less than one tenth as much. Again, I’ve tried to make these figures fairly realistic; they are by no means exact (I don’t actually think $49,998 corresponds to exactly 1.804699 QALY), but the order of magnitude difference is right. You gain about ten times as much utility from spending that $2 on something you want than you do on taking the chance at $1 million.
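
Here is the whole lottery comparison in a few lines, using the approximate graph-read values from above:

```python
# Approximate values read off the utility graph above
u_at_50000 = 1.80472    # QALY at $50,000
u_at_49998 = 1.80470    # QALY at $49,998 (after buying the $2 ticket)
u_at_million = 3.30     # QALY at $1,000,000

p_win = 1 / 1_000_000

utility_cost = u_at_50000 - u_at_49998               # 20 microQALY
expected_gain = p_win * (u_at_million - u_at_50000)  # ~1.5 microQALY

print(f"Utility cost of the ticket: {utility_cost * 1e6:.1f} microQALY")
print(f"Expected utility of the jackpot chance: {expected_gain * 1e6:.1f} microQALY")
print(f"The ticket costs about {utility_cost / expected_gain:.0f}x what it buys")
```

With these figures the ratio comes out around thirteen to one, consistent with the order-of-magnitude claim above.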

I said before that it is theoretically possible for you to have a utility function for which the lottery would be rational. For that you’d need to have increasing marginal utility of wealth, so that you could be what we call risk-seeking. Your utility function would have to look like this:

Weird_utility

There’s no way marginal utility of wealth looks like that. This would be saying that it would hurt Bill Gates more to lose $1 than it would hurt a starving child in Ghana, which makes no sense at all. (It certainly makes you wonder why he’s so willing to give it to them.) So frankly even if we didn’t buy insurance the fact that we buy lottery tickets would already look pretty irrational.

But in order for it to be rational to buy both lottery tickets and insurance, our utility function would have to be totally nonsensical. Maybe it could look like this or something; marginal utility decreases normally for awhile, and then suddenly starts going upward again for no apparent reason:

Weirder_utility

Clearly it does not actually look like that. Not only would this mean that Bill Gates is hurt more by losing $1 than the child in Ghana, we have this bizarre situation where the middle class are the people who have the lowest marginal utility of wealth in the world. Both the rich and the poor would need to have higher marginal utility of wealth than we do. This would mean that apparently yachts are just amazing and we have no idea. Riding a yacht is the pinnacle of human experience, a transcendence beyond our wildest imaginings; and riding a slightly bigger yacht is even more amazing and transcendent. Love and the joy of a life well-lived pale in comparison to the ecstasy of adding just one more layer of gold plate to your Ferrari collection.

Whereas increasing marginal utility was merely ridiculous, this is outright special pleading. You’re just making up bizarre utility functions that perfectly line up with whatever behavior people happen to have so that you can still call it rational. It’s like saying, “It could be perfectly rational! Maybe he enjoys banging his head against the wall!”

Kahneman and Tversky had a better idea. They realized that human beings aren’t so great at assessing probability, and furthermore tend not to think in terms of total amounts of wealth or annual income at all, but in terms of losses and gains. Through a series of clever experiments they showed that we are not so much risk-averse as we are loss-averse; we are actually willing to take more risk if it means that we will be able to avoid a loss.

In effect, we seem to be acting as if our utility function looks like this, where the zero no longer means “zero income”, it means “whatever we have right now“:

Prospect_theory

We tend to weight losses about twice as much as gains, and we tend to assume that losses also diminish in their marginal effect the same way that gains do. That is, we would only take a 50% chance to lose $1000 if it meant a 50% chance to gain $2000; but we’d take a 10% chance at losing $10,000 to save ourselves from a guaranteed loss of $1000.
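
This value function has a standard parametric form; here is a sketch using the median parameter estimates from Tversky and Kahneman’s 1992 paper (loss-aversion λ = 2.25, curvature 0.88). Note that with those exact parameters the break-even 50/50 gain against a $1,000 loss comes out near $2,500, a bit above the 2:1 rule of thumb:

```python
def value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: diminishing sensitivity on both sides
    of the reference point, with losses weighted lam times as heavily."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# How big must a 50/50 gain be to offset a possible $1,000 loss?
# Solving value(g) = -value(-1000) gives g = 1000 * lam**(1/alpha):
breakeven = 1_000 * 2.25 ** (1 / 0.88)
print(f"Break-even 50/50 gain against a $1,000 loss: ${breakeven:,.0f}")

# Risk-seeking over losses: a sure -$1,000 vs. a 10% chance of -$10,000
sure_loss = value(-1_000)
risky_loss = 0.9 * value(0) + 0.1 * value(-10_000)
print(f"Sure loss: {sure_loss:.0f}; risky gamble: {risky_loss:.0f}")
```

The risky gamble scores as less bad than the sure loss, which is exactly the loss-side risk-seeking described above.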

This can explain why we buy insurance, provided that you frame it correctly. One of the things about prospect theory—and about human behavior in general—is that it exhibits framing effects: The answer we give depends upon the way you ask the question. That’s so totally obviously irrational it’s honestly hard to believe that we do it; but we do, and sometimes in really important situations. Doctors—doctors—will decide a moral dilemma differently based on whether you describe it as “saving 400 out of 600 patients” or “letting 200 out of 600 patients die”.

In this case, you need to frame insurance as the default option, and not buying insurance as an extra risk you are taking. Then saving money by not buying insurance is a gain, and therefore less important, while a higher risk of a bad outcome is a loss, and therefore important.

If you frame it the other way, with not buying insurance as the default option, then buying insurance is taking a loss by making insurance payments, only to get a gain if the insurance pays out. Suddenly the exact same insurance policy looks less attractive. This is a big part of why Obamacare has been effective but unpopular. It was set up as a fine—a loss—if you don’t buy insurance, rather than as a bonus—a gain—if you do buy insurance. The latter would be more expensive, but we could just make it up by taxing something else; and it might have made Obamacare more popular, because people would see the government as giving them something instead of taking something away. But the fine does a better job of framing insurance as the default option, so it motivates more people to actually buy insurance.

But even that would still not be enough to explain how it is rational to buy lottery tickets (Have I mentioned how it’s really not a good idea to buy lottery tickets?), because buying a ticket is a loss and winning the lottery is a gain. You actually have to get people to somehow frame not winning the lottery as a loss, making winning the default option despite the fact that it is absurdly unlikely. But I have definitely heard people say things like this: “Well if my numbers come up and I didn’t play that week, how would I feel then?” Pretty bad, I’ll grant you. But how much you wanna bet that never happens? (They’ll bet… the price of the ticket, apparently.)

In order for that to work, people either need to dramatically overestimate the probability of winning, or else ignore it entirely. Both of those things totally happen.

First, we overestimate the probability of rare events and underestimate the probability of common events—this is actually the part that makes it cumulative prospect theory instead of just regular prospect theory. If you make a graph of perceived probability versus actual probability, it looks like this:

cumulative_prospect

We don’t make much distinction between 40% and 60%, even though that’s actually pretty big; but we make a huge distinction between 0% and 0.00001% even though that’s actually really tiny. I think we basically have categories in our heads: “Never, almost never, rarely, sometimes, often, usually, almost always, always.” Moving from 0% to 0.00001% is going from “never” to “almost never”, but going from 40% to 60% is still in “often”. (And that for some reason reminded me of “Well, hardly ever!”)
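
Cumulative prospect theory captures that compression with an explicit probability weighting function; this sketch uses the functional form and γ = 0.61 estimate (for gains) from Tversky and Kahneman’s 1992 paper:

```python
def weight(p, gamma=0.61):
    # Tversky-Kahneman (1992) probability weighting function
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A 1-in-10-million chance feels hundreds of times larger than it is...
print(f"w(0.0000001) = {weight(1e-7):.7f}  (actual: 0.0000001)")
# ...while the difference between 40% and 60% gets compressed
print(f"w(0.40) = {weight(0.40):.3f}, w(0.60) = {weight(0.60):.3f}")
```

Tiny probabilities get inflated by orders of magnitude while the middle of the range flattens out: “never” to “almost never” is a huge jump, 40% to 60% barely registers.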

But that’s not even the worst of it. After all that work to explain how we can make sense of people’s behavior in terms of something like a utility function (albeit a distorted one), I think there’s often a simpler explanation still: Regret aversion under total neglect of probability.

Neglect of probability is self-explanatory: You totally ignore the probability. But what’s regret aversion, exactly? Unfortunately I’ve had trouble finding any good popular sources on the topic; it’s all scholarly stuff. (Maybe I’m more cutting-edge than I thought!)

The basic idea is that you minimize regret, where regret can be formalized as the difference in utility between the outcome you got and the best outcome you could have gotten. In effect, it doesn’t matter whether something is likely or unlikely; you only care how bad it is.

This explains insurance and lottery tickets in one fell swoop: With insurance, you have the choice of risking a big loss (big regret), which you can avoid by paying a small amount (small regret). You take the small regret, and buy insurance. With lottery tickets, you have the chance of getting a large gain (big regret if you miss it), which you get a shot at by paying a small amount (small regret). You take the small regret, and buy the ticket.
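
A minimax-regret chooser takes only a few lines; the payoff numbers here are my own illustrative stand-ins, not figures from above:

```python
def max_regret(states, action):
    """Worst-case regret of an action: across all possible states of the
    world, the biggest gap between the best available payoff and yours."""
    return max(max(payoffs.values()) - payoffs[action] for payoffs in states)

# States of the world: (no fire, fire), for a $200,000 house, $1,000 premium
insurance_states = [
    {"insure": -1_000, "skip": 0},
    {"insure": -1_000, "skip": -200_000},
]
# States of the world: (lose, win), for a $2 ticket and a $1,000,000 jackpot
lottery_states = [
    {"buy": -2, "skip": 0},
    {"buy": 999_998, "skip": 0},
]

choices = {}
for name, states in [("insurance", insurance_states), ("lottery", lottery_states)]:
    actions = list(states[0])
    choices[name] = min(actions, key=lambda a: max_regret(states, a))
    print(f"{name}: minimax-regret choice is {choices[name]!r}")
```

Notice that the probabilities never appear anywhere in the calculation, yet it recommends both buying insurance and buying the ticket, matching the behavior we actually observe.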

This can also explain why a typical American’s fears go in the order terrorists > Ebola > sharks > > cars > cheeseburgers, while the actual risk of dying goes in almost the opposite order, cheeseburgers > cars > > terrorists > sharks > Ebola. (Terrorists are scarier than sharks and Ebola and actually do kill more Americans! Yay, we got something right! Other than that it is literally reversed.)

Dying from a terrorist attack would be horrible; in addition to your own death you have all the other likely deaths and injuries, and the sheer horror and evil of the terrorist attack itself. Dying from Ebola would be almost as bad, with gruesome and agonizing symptoms. Dying of a shark attack would be still pretty awful, as you get dismembered alive. But dying in a car accident isn’t so bad; it’s usually over pretty quick and the event seems tragic but ordinary. And dying of heart disease and diabetes from your cheeseburger overdose will happen slowly over many years, you’ll barely even notice it coming and probably die rapidly from a heart attack or comfortably in your sleep. (Wasn’t that a pleasant paragraph? But there’s really no other way to make the point.)

If we try to estimate the probability at all—and I don’t think most people even bother—it isn’t by rigorous scientific research; it’s usually by availability heuristic: How many examples can you think of in which that event happened? If you can think of a lot, you assume that it happens a lot.

And that might even be reasonable, if we still lived in hunter-gatherer tribes or small farming villages and the 150 or so people you knew were the only people you ever heard about. But now that we have live TV and the Internet, news can get to us from all around the world, and the news isn’t trying to give us an accurate assessment of risk, it’s trying to get our attention by talking about the biggest, scariest, most exciting things that are happening around the world. The amount of news attention an item receives is in fact in inverse proportion to the probability of its occurrence, because things are more exciting if they are rare and unusual. Which means that if we are estimating how likely something is based on how many times we heard about it on the news, our estimates are going to be almost exactly reversed from reality. Ironically it is the very fact that we have more information that makes our estimates less accurate, because of the way that information is presented.

It would be a pretty boring news channel that spent all day saying things like this: “82 people died in car accidents today, and 1657 people had fatal heart attacks, 11.8 million had migraines, and 127 million played the lottery and lost; in world news, 214 countries did not go to war, and 6,147 children starved to death in Africa…” This would, however, be vastly more informative.

In the meantime, here are a couple of counter-heuristics I recommend to you: Don’t think about losses and gains, think about where you are and where you might be. Don’t say, “I’ll gain $1,000”; say “I’ll raise my income this year to $41,000.” Definitely do not think in terms of the percentage price of things; think in terms of absolute amounts of money. Cheap expensive things, expensive cheap things is a motto of mine; go ahead and buy the $5 toothbrush instead of the $1, because that’s only $4. But be very hesitant to buy the $22,000 car instead of the $21,000, because that’s $1,000. If you need to estimate the probability of something, actually look it up; don’t try to guess based on what it feels like the probability should be. Make this unprecedented access to information work for you instead of against you. If you want to know how many people die in car accidents each year, you can literally ask Google and it will tell you that (I tried it—it’s 1.3 million worldwide). The fatality rate of a given disease versus the risk of its vaccine, the safety rating of a particular brand of car, the number of airplane crash deaths last month, the total number of terrorist attacks, the probability of becoming a university professor, the average functional lifespan of a new television—all these things and more await you at the click of a button. Even if you think you’re pretty sure, why not look it up anyway?

Perhaps then we can make prospect theory wrong by making ourselves more rational.

The winner-takes-all effect

JDN 2457054 PST 14:06.

As I write there is some sort of mariachi band playing on my front lawn. It is actually rather odd that I have a front lawn, since my apartment is set back from the road; yet there is the patch of grass, and there is the band playing upon it. This sort of thing is part of the excitement of living in a large city (and Long Beach would seem like a large city were it not right next to the sprawling immensity that is Los Angeles—there are more people in Long Beach than in Cleveland, but there are more people in greater Los Angeles than in Sweden); with a certain critical mass of human beings comes unexpected pieces of culture.

The fact that people agglomerate in this way is actually relevant to today’s topic, which is what I will call the winner-takes-all effect. I actually just finished reading a book called The Winner-Take-All Society, which is particularly horrifying to read because it came out in 1996. That’s almost twenty years ago, and things were already bad; and since then everything it describes has only gotten worse.

What is the winner-takes-all effect? It is the simple fact that in competitive capitalist markets, a small difference in quality can yield an enormous difference in return. The third most popular soda drink company probably still makes drinks that are pretty good, but do you have any idea what it is? There’s Coke, there’s Pepsi, and then there’s… uh… Dr. Pepper, apparently! But I didn’t know that before today and I bet you didn’t either. Now think about what it must be like to be the 15th most popular soda drink company, or the 37th. That’s the winner-takes-all effect.

I don’t generally follow football, but since tomorrow is the Super Bowl I feel some obligation to use that example as well. The highest-paid quarterback is Russell Wilson of the Seattle Seahawks, who is signing onto a five-year contract worth $110 million ($22 million a year). In annual income that will make him pass Jay Cutler of the Chicago Bears who has a seven-year contract worth $127 million ($18.5 million a year). This shift may have something to do with the fact that the Seahawks are in the Super Bowl this year and the Bears are not (they haven’t been since 2007). Now consider what life is like for most football players; the median income of football players is most likely zero (at least as far as football-related income), and the median income of NFL players—the cream of the crop already—is $770,000; that’s still very good money of course (more than Krugman makes, actually! But he could make more, if he were willing to sell out to Wall Street), but it’s barely 1/30 of what Wilson is going to be making. To make that million-dollar salary, you need to be the best, of the best, of the best (sir!). That’s the winner-takes-all effect.

To go back to the example of cities, it is for similar reasons that the largest cities (New York, Los Angeles, London, Tokyo, Shanghai, Hong Kong, Delhi) become packed with tens of millions of people while others (Long Beach, Ann Arbor, Cleveland) get hundreds of thousands and most (Petoskey, Ketchikan, Heber City, and hundreds of others you’ve never heard of) get only a few thousand. Beyond that there are thousands of tiny little hamlets that many don’t even consider cities. The median city probably has about 10,000 people in it, and that only because we’d stop calling it a city if it fell below 1,000. If we include every tiny little village, the median town size is probably about 20 people. Meanwhile the largest city in the world is Tokyo, with a greater metropolitan area that holds almost 38 million people—or to put it another way almost exactly as many people as California. Huh, LA doesn’t seem so big now does it? How big is a typical town? Well, that’s the thing about this sort of power-law distribution; the concept of “typical” or “average” doesn’t really apply anymore. Each little piece of the distribution has basically the same shape as the whole distribution, so there isn’t a “typical” size or scale. That’s the winner-takes-all effect.
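
You can see the “no typical scale” property in a toy simulation; the Pareto exponent and minimum size here are arbitrary choices for illustration, not estimates from real city-size data:

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility

def pareto_city(alpha=1.1, minimum=20):
    """Draw a 'town size' from a Pareto distribution via inverse-CDF sampling."""
    return minimum * random.random() ** (-1 / alpha)

cities = sorted((pareto_city() for _ in range(100_000)), reverse=True)
mean = sum(cities) / len(cities)
median = cities[len(cities) // 2]
top_share = sum(cities[:1000]) / sum(cities)

print(f"Largest: {cities[0]:,.0f}  Mean: {mean:,.1f}  Median: {median:,.1f}")
print(f"Share of everyone living in the top 1% of cities: {top_share:.0%}")
```

The mean lands far above the median, the top 1% of towns hold a huge share of the whole population, and re-running with a bigger sample mostly just finds bigger outliers: the signature of a power law.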

As they freely admit in the book, it isn’t literally that a single winner takes everything. That is the theoretical maximum level of wealth inequality, and fortunately no society has ever quite reached it. The closest we get in today’s society is probably Saudi Arabia, which recently lost its king—and yes I do mean king in the fullest sense of the word, a man of virtually unlimited riches and near-absolute power. His net wealth was estimated at $18 billion, which frankly sounds low; still, even if that’s half the true amount, it’s oddly comforting to know that he was still not quite as rich as Bill Gates ($78 billion), who earned his wealth at least semi-legitimately in a basically free society. Say what you will about intellectual property rents and market manipulation—and you know I do—but they are worlds away from what Abdullah’s family did, which was literally and directly to rob millions of people by the power of the sword. Mostly he just inherited all that, and he did implement some minor reforms, but make no mistake: He was ruthless and by no means willing to give up his absolute power—he beheaded dozens of political dissidents, for example. Saudi Arabia does spread its wealth around a little, such that basically no one is below the UN poverty lines of $1.25 and $2 per day, but about a fourth of the population is below the national poverty line—which is just about the same distribution of wealth as what we have in the US, which actually makes me wonder just how free and legitimate our markets really are.

The winner-takes-all effect would really be more accurately described as the “top small fraction takes the vast majority” effect, but that isn’t nearly as catchy, now is it?

There are several different causes that can all lead to this same result. In the book, Robert Frank and Philip Cook argue that we should not attribute the cause to market manipulation, but in fact to the natural functioning of competitive markets. There’s something to be said for this—I used to buy the whole idea that competitive markets are the best, but increasingly I’ve been seeing ways that less competitive markets can make better overall outcomes.

Where they lose me is in arguing that the skyrocketing compensation packages for CEOs are due to their superior performance, and corporations are just being rational in competing for the best CEOs. If that were true, we wouldn’t find that the rank correlation between the CEO’s pay and the company’s stock performance is statistically indistinguishable from zero. Actually even a small positive correlation wouldn’t prove that the CEOs are actually performing well; it could just be that companies that perform well are willing to pay their CEOs more—and stock option compensation will do this automatically. But in fact the correlation is so tiny as to be negligible; corporations would be better off hiring a random person off the street and paying them $50,000, for all the good the CEO does for their stock performance. If you adjust for the size of the company, you find that having a higher-paid CEO is positively correlated with performance for small startups, but negatively correlated for large, well-established corporations. No, clearly there’s something going on here besides competitive pay for high performance—corruption comes to mind, which you’ll remember was the subject of my master’s thesis.

But in some cases there isn’t any apparent corruption, and yet we still see these enormously unequal distributions of income. Another good example of this is the publishing industry, in which J.K. Rowling can make over $1 billion (she donated enough to charity to officially lose her billionaire status) but most authors make little or nothing, particularly those who can’t get published in the first place. I have no reason to believe that J.K. Rowling acquired this massive wealth by corruption; she just sold an awful lot of books—over 100 million copies of the first Harry Potter book alone.

But why would she be able to sell 100 million copies, while thousands of authors who write books that are probably just as good, or nearly so, make nothing? Am I just bitter and envious, as Mitt Romney would say? Is J.K. Rowling actually a million times as good an author as I am?

Obviously not, right? She may be better, but she’s not that much better. So how is it that she ends up making a million times as much as I do from writing? It feels like squaring the circle: How can markets be efficient and competitive, yet some people are being paid millions of times as much as others despite being only slightly more productive?

The answer is simple but enormously powerful: positive feedback. Once you start doing well, it’s easier to do better. You have what economists call an economy of scale. The first 10,000 books sold are the hardest; then the next 10,000 are a little easier; the next 10,000 a little easier still. In fact I suspect that in many cases the first 10% growth is harder than the second 10% growth and so on—which is actually a much stronger claim. For my sales to grow 10% I’d need to add like 20 people. For J.K. Rowling’s sales to grow 10% she’d need to add 10 million. Yet it might actually be easier for J.K. Rowling to add 10 million than for me to add 20. If not, it isn’t much harder. Suppose we tried by just sending out enticing tweets. I have about 100 Twitter followers, so I’d need 0.2 sales per follower; she has about 4 million, so she’d need an average of 2.5 sales per follower. That’s an advantage for me, percentage-wise—but if we have the same uptake rate I sell 20 books and she sells 800,000.
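
Just to make that concrete, here is the arithmetic of the Twitter example. All the figures are the rough, illustrative assumptions from above (follower counts, current sales, a guessed 20% uptake rate), not real data:

```python
# Back-of-the-envelope version of the Twitter example.
# All figures are rough assumptions from the text, not real data.
my_followers, jkr_followers = 100, 4_000_000
my_base, jkr_base = 200, 100_000_000      # rough current sales

# Sales each of us needs for 10% growth:
my_target = 0.10 * my_base                # 20 books
jkr_target = 0.10 * jkr_base              # 10 million books

# Required sales per follower -- the percentage advantage is mine...
print(my_target / my_followers)           # 0.2 per follower for me
print(jkr_target / jkr_followers)         # 2.5 per follower for her

# ...but at the same uptake rate, the absolute numbers are all hers.
uptake = 0.20
print(my_followers * uptake)              # 20 books for me
print(jkr_followers * uptake)             # 800,000 for J.K. Rowling
```

The percentage hurdle is lower for me, but the same per-follower behavior produces wildly different absolute sales.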

If you have only a handful of book sales like I do, those sales are static; but once you cross that line into millions of sales, it’s easy for that to spread into tens or even hundreds of millions. In the particular case of books, this is because it spreads by word-of-mouth; say each person who reads a book recommends it to 10 friends, and you only read a book if at least 2 of your friends recommended it. In a city of 100,000 people, if you start with 50 people reading it, odds are that most of those people don’t have friends that overlap and so you stop at 50. But if you start at 50,000, there is bound to be a great deal of overlap; so then that 50,000 recruits another 10,000, then another 10,000, and pretty soon the whole 100,000 have read it. In this case we have what are called network externalities—you’re more likely to read a book if your friends have read it, so the more people there are who have read it, the more people there are who want to read it. There’s a very similar effect at work in social networks; why does everyone still use Facebook, even though it’s actually pretty awful? Because everyone uses Facebook. Less important than the quality of the software platform (Google Plus is better, and there are some third-party networks that are likely better still) is the fact that all your friends and family are on it. We all use Facebook because we all use Facebook? We all read Harry Potter books because we all read Harry Potter books? The first rule of tautology club is…
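
That word-of-mouth story is what network theorists call a threshold contagion, and it’s easy to simulate. Here’s a minimal sketch, with the city scaled down to 10,000 people for speed (the friendship graph is random and the seed counts are scaled accordingly; all of that is assumption, purely for illustration):

```python
import random

def word_of_mouth(n_people, n_seeds, friends_per_person=10,
                  threshold=2, seed=0):
    """Threshold contagion: you read the book once at least
    `threshold` of your friends have read it."""
    rng = random.Random(seed)
    # Random friendship graph (directed recommendations, for simplicity).
    friends = [rng.sample(range(n_people), friends_per_person)
               for _ in range(n_people)]
    has_read = [False] * n_people
    for i in rng.sample(range(n_people), n_seeds):
        has_read[i] = True
    changed = True
    while changed:
        changed = False
        for i in range(n_people):
            if not has_read[i]:
                if sum(has_read[f] for f in friends[i]) >= threshold:
                    has_read[i] = True
                    changed = True
    return sum(has_read)

# A scaled-down "city" of 10,000:
print(word_of_mouth(10_000, 5))      # small seed: stays tiny
print(word_of_mouth(10_000, 5_000))  # big seed: sweeps nearly the whole city
```

With 5 initial readers almost nobody has two reading friends, so the book goes nowhere; with 5,000 the overlap is everywhere and the cascade takes over almost the entire population.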

Languages are also like this, which is why I can write this post in English and yet people can still read it around the world. English is the winner of the language competition (we call it the lingua franca, as weird as that is—French is not the lingua franca anymore). The losers are those hundreds of New Guinean languages you’ve never heard of, many of which are dying. And their distribution obeys, once again, a power-law. (Individual words actually obey a power-law as well, which makes the whole business all the more delightfully fractal.)
Network externalities are not the only way that the winner-takes-all effect can occur, though I think it is the most common. You can also have economies of scale from the supply side, particularly in the case of information: Recording a song is a lot of time and effort, but once you record a song, it’s trivial to make more copies of it. So that first recording costs a great deal, while every subsequent recording costs next to nothing. This is probably also at work in the case of J.K. Rowling and the NFL; the two phenomena are by no means mutually exclusive. But clearly the sizes of cities are due to network externalities: It’s quite expensive to live in a big city—no supply-side economy of scale—but you want to live in a city where other people live because that’s where friends and family and opportunities are.

The most worrisome kind of winner-takes-all effect is what Frank and Cook call deep pockets: Once you have concentration of wealth in a few hands, those few individuals can now choose their own winners in a much more literal sense: the rich can commission works of art from their favorite artists, exacerbating the inequality among artists; worse yet they can use their money to influence politicians (as the Kochs are planning on spending $900 million—$3 for every person in America—to do in 2016) and exacerbate the inequality in the whole system. That gives us even more positive feedback on top of all the other positive feedbacks.

Sure enough, if you run the standard neoclassical economic models of competition and just insert the assumption of economies of scale, the result is concentration of wealth—in fact, if nothing about the rules prevents it, the result is a complete monopoly. Nor is this result in any sense economically efficient; it’s just what naturally happens in the presence of economies of scale.
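
You can watch that concentration emerge from a toy model: take a handful of near-identical firms, let each firm’s unit cost fall with its market share (an economy of scale), and let customers drift toward cheaper firms. To be clear, this is only an illustrative replicator-style sketch I’m constructing here, not a full neoclassical model, but the endpoint is the same: one firm takes the whole market.

```python
import random

def compete(n_firms=10, steps=500, seed=1):
    """Toy market: each firm's unit cost falls with its market share
    (an economy of scale), and customers drift toward cheaper firms."""
    rng = random.Random(seed)
    # Start nearly symmetric, with tiny random differences.
    shares = [1.0 / n_firms + rng.uniform(-0.01, 0.01)
              for _ in range(n_firms)]
    total = sum(shares)
    shares = [s / total for s in shares]
    for _ in range(steps):
        costs = [1.0 / (1.0 + s) for s in shares]      # bigger -> cheaper
        pull = [s / c for s, c in zip(shares, costs)]  # customers flow to cheap firms
        total = sum(pull)
        shares = [p / total for p in pull]
    return sorted(shares, reverse=True)

shares = compete()
print(round(shares[0], 3))  # the top firm ends up with essentially everything
```

The firms start within a percentage point of each other; the tiny initial edge compounds until the leader’s share approaches 1 and everyone else’s approaches 0.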

Frank and Cook seem most concerned about the fact that these winner-take-all incomes will tend to push too many people to seek those careers, leaving millions of would-be artists, musicians and quarterbacks with dashed dreams when they might have been perfectly happy as electrical engineers or high school teachers. While this may be true—next week I’ll go into detail about prospect theory and why human beings are terrible at making judgments based on probability—it isn’t really what I’m most concerned about. For all the cost of frustrated ambition there is also a good deal of benefit; striving for greatness does not just make the world better if we succeed, it can make ourselves better even if we fail. I’d strongly encourage people to have backup plans; but I’m not going to tell people to stop painting, singing, writing, or playing football just because they’re unlikely to make a living at it. The one concern I do have is that the competition is so fierce that we are pressured to go all in, to not have backup plans, to use performance-enhancing drugs—they may carry awful risks, but they also work. And it’s probably true, actually, that you’re a bit more likely to make it all the way to the top if you don’t have a backup plan. You’re also vastly more likely to end up at the bottom. Is raising your probability of being a bestselling author from 0.00011% to 0.00012% worth giving up all other career options? Skipping chemistry class to practice football may improve your chances of being an NFL quarterback from 0.000013% to 0.000014%, but it will also drop your chances of being a chemical engineer from 95% (a degree in chemical engineering almost guarantees you a job eventually) to more like 5% (it’s hard to get a degree when you flunk all your classes).

Frank and Cook offer a solution that I think is basically right; they call it positional arms control agreements. By analogy with arms control agreements between nations—and what is war, if not the ultimate winner-takes-all contest?—they propose that we use taxation and regulation policy to provide incentives to make people compete less fiercely for the top positions. Some of these we already do: Performance-enhancing drugs are banned in professional sports, for instance. Even where there are no regulations, we can use social norms: That’s why it’s actually a good thing that your parents rarely support your decision to drop out of school and become a movie star.

That’s yet another reason why progressive taxation is a good idea, as if we needed another; by paring down those top incomes it makes the prospect of winning big less enticing. If NFL quarterbacks only made 10 times what chemical engineers make instead of 300 times, people would be a lot more hesitant to give up on chemical engineering to become a quarterback. If top Wall Street executives only made 50 times what normal people make instead of 5000, people with physics degrees might go back to actually being physicists instead of speculating on stock markets.

There is one case where we might not want fewer people to try, and that is entrepreneurship. Most startups fail, and only a handful go on to make mind-bogglingly huge amounts of money (often for no apparent reason, like the Snuggie and Flappy Bird), yet entrepreneurship is what drives the dynamism of a capitalist economy. We need people to start new businesses, and right now they do that mainly because of a tiny chance of a huge benefit. Yet we don’t want them to be too unrealistic in their expectations: Entrepreneurs are much more optimistic than the general population, but the most successful entrepreneurs are a bit less optimistic than other entrepreneurs. The most successful strategy is to be optimistic but realistic; this outperforms both unrealistic optimism and pessimism. That seems pretty intuitive; you have to be confident you’ll succeed, but you can’t be totally delusional. Yet it’s precisely the realistic optimists who are most likely to be disincentivized by a reduction in the top prizes.

Here’s my solution: Let’s change it from a tiny chance of a huge benefit to a large chance of a moderately large benefit. Let’s reward entrepreneurs for trying—with standards for what constitutes a really serious, good attempt rather than something frivolous that was guaranteed to fail. Use part of the funds from the progressive tax as a fund for angel grants, provided to a large number of the most promising entrepreneurs. It can’t be a million-dollar prize for the top 100. It needs to be more like a $50,000 prize for the top 100,000 (which would cost $5 billion a year, affordable for the US government). It should be paid at the proposal phase; the top 100,000 business plans receive the funding and are under no obligation to repay it. It has to be enough money that someone can rationally commit themselves to years of dedicated work without throwing themselves into poverty, and it has to be confirmed money so that they don’t have to worry about throwing themselves into debt. As for the upper limit, it only needs to be small enough that there is still an incentive for the business to succeed; but even with a 99% tax Mark Zuckerberg would still be a millionaire, so the rewards for success are high indeed.

The good news is that we actually have such a system to some extent. For research scientists rather than entrepreneurs, NSF grants are pretty close to what I have in mind, but at present they are a bit too competitive: 8,000 research grants with a median of $130,000 each and a 20% acceptance rate doesn’t reach quite enough people—the acceptance rate should be higher, since most of these proposals are quite worthy. Still, it’s close, and definitely a much better incentive system than what we have for entrepreneurs; there are almost 12 million entrepreneurs in the United States, starting 6 million businesses a year, 75% of which fail before they can return their venture capital. Those that succeed have incomes higher than the general population, with a median income of around $70,000 per year, but most of this is accounted for by the fact that entrepreneurs are more educated and talented than the general population. Once you factor that in, successful entrepreneurs have about 50% more income on average, but their standard deviation of income is also 60% higher—so some are getting a lot and some are getting very little. Since 75% fail, we’re talking about a 25% chance of entering an income distribution that’s higher on average but much more variable, and a 75% chance of going through a period with little or no income at all—is it worth it? Maybe, maybe not. But if you could get a guaranteed $50,000 for having a good idea—and let me be clear, only serious proposals that have a good chance of success should qualify—that deal sounds an awful lot better.

How do we measure happiness?

JDN 2457028 EST 20:33.

No, really, I’m asking. I strongly encourage my readers to offer in the comments any ideas they have about the measurement of happiness in the real world; this has been a stumbling block in one of my ongoing research projects.

In one sense the measurement of happiness—or more formally utility—is absolutely fundamental to economics; in another it’s something most economists are astonishingly afraid of even trying to do.

The basic question of economics has nothing to do with money, and is really only incidentally related to “scarce resources” or “the production of goods” (though many textbooks will define economics in this way—apparently implying that a post-scarcity economy is not an economy). The basic question of economics is really this: How do we make people happy?

This must always be the goal in any economic decision, and if we lose sight of that fact we can make some truly awful decisions. Other goals may work sometimes, but they inevitably fail: If you conceive of the goal as “maximize GDP”, then you’ll try to do any policy that will increase the amount of production, even if that production comes at the expense of stress, injury, disease, or pollution. (And doesn’t that sound awfully familiar, particularly here in the US? 40% of Americans report their jobs as “very stressful” or “extremely stressful”.) If you were to conceive of the goal as “maximize the amount of money”, you’d print money as fast as possible and end up with hyperinflation and total economic collapse à la Zimbabwe. If you were to conceive of the goal as “maximize human life”, you’d support methods of increasing population to the point where we had a hundred billion people whose lives were barely worth living. Even if you were to conceive of the goal as “save as many lives as possible”, you’d find yourself investing in whatever would extend lifespan even if it meant enormous pain and suffering—which is a major problem in end-of-life care around the world. No, there is one goal and one goal only: Maximize happiness.

I suppose technically it should be “maximize utility”, but those are in fact basically the same thing as long as “happiness” is broadly conceived as eudaimonia—the joy of a life well-lived—and not a narrow concept of just adding up pleasure and subtracting out pain. The goal is not to maximize the quantity of dopamine and endorphins in your brain; the goal is to achieve a world where people are safe from danger, free to express themselves, with friends and family who love them, who participate in a world that is just and peaceful. We do not want merely the illusion of these things—we want to actually have them. So let me be clear that this is what I mean when I say “maximize happiness”.

The challenge, therefore, is how we figure out if we are doing that. Things like money and GDP are easy to measure; but how do you measure happiness?
Early economists like Adam Smith and John Stuart Mill tried to deal with this question, and while they were not very successful I think they deserve credit for recognizing its importance and trying to resolve it. But sometime around the rise of modern neoclassical economics, economists gave up on the project and instead sought a narrower task, to measure preferences.

This is often called technically ordinal utility, as opposed to cardinal utility; but this terminology obscures the fundamental distinction. Cardinal utility is actual utility; ordinal utility is just preferences.

(The notion that cardinal utility is defined “up to a linear transformation” is really an eminently trivial observation, and it shows just how little physics the physics-envious economists really understand. All we’re talking about here is units of measurement—the same distance is 10.0 inches or 25.4 centimeters, so is distance only defined “up to a linear transformation”? It’s sometimes argued that there is no clear zero—like Fahrenheit and Celsius—but actually it’s pretty clear to me that there is: Zero utility is not existing. So there you go, now you have Kelvin.)

Preferences are a bit easier to measure than happiness, but not by as much as most economists seem to think. If you imagine a small number of options, you can just put them in order from most to least preferred and there you go; and we could imagine asking someone to do that, or—the technique of revealed preference—use the choices they make to infer their preferences by assuming that when given the choice of X and Y, choosing X means you prefer X to Y.

Like much of neoclassical theory, this sounds good in principle and utterly collapses when applied to the real world. Above all: How many options do you have? It’s not easy to say, but the number is definitely huge—and both of those facts pose serious problems for a theory of preferences.

The fact that it’s not easy to say means that we don’t have a well-defined set of choices; even if Y is theoretically on the table, people might not realize it, or they might not see that it’s better even though it actually is. Much of our cognitive effort in any decision is actually spent narrowing the decision space—when deciding who to date or where to go to college or even what groceries to buy, simply generating a list of viable options involves a great deal of effort and extremely complex computation. If you have a true utility function, you can satisfice—choosing the first option that is above a certain threshold—or engage in constrained optimization—choosing whether to continue searching or accept your current choice based on how good it is. Under preference theory, there is no such “how good it is” and no such thresholds. You either search forever or choose a cutoff arbitrarily.

Even if we could decide how many options there are in any given choice, in order for this to form a complete guide for human behavior we would need an enormous amount of information. Suppose there are 10 different items I could have or not have; then there are 10! = 3.6 million possible preference orderings. If there were 100 items, there would be 100! = 9e157 possible orderings. It won’t do simply to decide on each item whether I’d like to have it or not. Some things are complements: I prefer to have shoes, but I probably prefer to have $100 and no shoes at all rather than $50 and just a left shoe. Other things are substitutes: I generally prefer eating either a bowl of spaghetti or a pizza, rather than both at the same time. No, the combinations matter, and that means that we have an exponentially increasing decision space every time we add a new option. If there really is no more structure to preferences than this, we have an absurd computational task to make even the most basic decisions.
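
The combinatorics here are easy to check:

```python
from math import factorial

# Strict preference orderings over n items:
print(f"{factorial(10):,}")     # 3,628,800 -- about 3.6 million
print(f"{factorial(100):.1e}")  # about 9e+157

# And since combinations matter, the real objects of preference are
# *bundles*: even 10 yes/no items give 2**10 = 1024 bundles to rank,
# and the orderings over those bundles are more numerous still.
print(2 ** 10)
```

That exponential blow-up is the computational task a pure preference-ordering theory quietly assumes away.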

This is in fact most likely why we have happiness in the first place. Happiness did not emerge from a vacuum; it evolved by natural selection. Why make an organism have feelings? Why make it care about things? Wouldn’t it be easier to just hard-code a list of decisions it should make? No, on the contrary, it would be exponentially more complex. Utility exists precisely because it is more efficient for an organism to like or dislike things by certain amounts rather than trying to define arbitrary preference orderings. Adding a new item means assigning it an emotional value and then slotting it in, instead of comparing it to every single other possibility.

To illustrate this: I like Coke more than I like Pepsi. (Let the flame wars begin?) I also like getting massages more than I like being stabbed. (I imagine less controversy on this point.) But the difference in my mind between massages and stabbings is an awful lot larger than the difference between Coke and Pepsi. Yet according to preference theory (“ordinal utility”), that difference is not meaningful; instead I have to say that I prefer the pair “drink Pepsi and get a massage” to the pair “drink Coke and get stabbed”. There’s no such thing as “a little better” or “a lot worse”; there is only what I prefer over what I do not prefer, and since these can be assigned arbitrarily there is an impossible computational task before me to make even the most basic decisions.

Real utility also allows you to make decisions under risk, to decide when it’s worth taking a chance. Is a 50% chance of $100 worth giving up a guaranteed $50? Probably. Is a 50% chance of $10 million worth giving up a guaranteed $5 million? Not for me. Maybe for Bill Gates. How do I make that decision? It’s not about what I prefer—I do in fact prefer $10 million to $5 million. It’s about how much difference there is in terms of my real happiness—$5 million is almost as good as $10 million, but $100 is a lot better than $50. My marginal utility of wealth—as I discussed in my post on progressive taxation—is a lot steeper at $50 than it is at $5 million. There’s actually a way to use revealed preferences under risk to estimate true (“cardinal”) utility, developed by Von Neumann and Morgenstern. In fact they proved a remarkably strong theorem: If you don’t have a cardinal utility function that you’re maximizing, you can’t make rational decisions under risk. (In fact many of our risk decisions clearly aren’t rational, because we aren’t actually maximizing an expected utility; what we’re actually doing is something more like cumulative prospect theory, the leading cognitive economic theory of risk decisions. We overrespond to extreme but improbable events—like lightning strikes and terrorist attacks—and underrespond to moderate but probable events—like heart attacks and car crashes. We play the lottery but still buy health insurance. We fear Ebola—which has never killed a single American—but not influenza—which kills 10,000 Americans every year.)
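
Here’s a sketch of that certainty-equivalent calculation. The logarithmic utility of wealth and the $10,000 starting wealth are both assumptions I’m making purely for illustration:

```python
from math import log, exp

def certainty_equivalent(p, prize, wealth=10_000):
    """Sure gain that yields the same expected log-utility as a gamble
    paying `prize` with probability p (and nothing otherwise)."""
    eu = p * log(wealth + prize) + (1 - p) * log(wealth)
    return exp(eu) - wealth

# 50% chance of $100 vs. a sure $50: nearly indifferent.
print(round(certainty_equivalent(0.5, 100), 2))      # just under 50

# 50% chance of $10 million vs. a sure $5 million: no contest.
print(round(certainty_equivalent(0.5, 10_000_000)))  # a few hundred thousand
```

At small stakes the gamble is worth almost its expected value; at stakes large relative to your wealth, the concavity of utility crushes the gamble’s value far below the sure thing.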

A lot of economists would argue that it’s “unscientific”—Kenneth Arrow said “impossible”—to assign this sort of cardinal distance between our choices. But assigning distances between preferences is something we do all the time. Amazon.com lets us vote on a 5-star scale, and very few people send in error reports saying that cardinal utility is meaningless and only preference orderings exist. In 2000 I would have said “I like Gore best, Nader is almost as good, and Bush is pretty awful; but of course they’re all a lot better than the Fascist Party.” If we had simply been able to express those feelings on the 2000 ballot according to a range vote, either Nader would have won and the United States would now have a three-party system (and possibly a nationalized banking system!), or Gore would have won and we would be a decade ahead of where we currently are in preventing and mitigating global warming. Either one of these things would benefit millions of people.

This is extremely important because of another thing that Arrow said was “impossible”—namely, “Arrow’s Impossibility Theorem”. It should be called Arrow’s Range Voting Theorem, because simply by restricting preferences to a well-defined utility and allowing people to make range votes according to that utility, we can fulfill all the requirements that are supposedly “impossible”. The theorem doesn’t say—as it is commonly paraphrased—that there is no fair voting system; it says that range voting is the only fair voting system. A better claim is that there is no perfect voting system, which is true if you mean that there is no voting system in which your best strategic vote always accurately reflects your true beliefs. The Myerson-Satterthwaite Theorem is then the proper theorem to use; if you could design a voting system that would force you to reveal your beliefs, you could design a market auction that would force you to reveal your optimal price. But the least expressive way to vote in a range vote is to pick your favorite and give them 100% while giving everyone else 0%—which is identical to our current plurality vote system. The worst-case scenario in range voting is our current system.
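
A range vote is trivial to tally, and a toy electorate shows both points at once. The ballot numbers below are entirely made up (loosely inspired by 2000): honest range ballots can elect a broadly-liked candidate, while all-or-nothing ballots collapse back into plurality voting.

```python
def range_vote(ballots):
    """Each ballot scores every candidate from 0 to 100;
    the highest total score wins."""
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + score
    return max(totals, key=totals.get)

# A hypothetical 100-voter electorate (made-up numbers):
ballots = (
      [{"Gore": 100, "Nader": 90, "Bush": 10}] * 48  # Gore-first voters
    + [{"Gore": 90, "Nader": 100, "Bush": 10}] * 5   # Nader-first voters
    + [{"Gore": 20, "Nader": 40, "Bush": 100}] * 47  # Bush-first voters
)
print(range_vote(ballots))        # the broadly-liked candidate wins

# The least expressive ballot -- 100 for your favorite, 0 for the rest --
# reduces range voting to our current plurality system:
plurality = (
      [{"Gore": 100, "Nader": 0, "Bush": 0}] * 48
    + [{"Gore": 0, "Nader": 100, "Bush": 0}] * 5
    + [{"Gore": 0, "Nader": 0, "Bush": 100}] * 47
)
print(range_vote(plurality))
```

In this made-up electorate the honest range ballots elect Nader (everyone’s acceptable second choice), while the degenerate all-or-nothing ballots hand it to the plurality winner.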

But the fact that utility exists and matters, unfortunately doesn’t tell us how to measure it. The current state-of-the-art in economics is what’s called “willingness-to-pay”, where we arrange (or observe) decisions people make involving money and try to assign dollar values to each of their choices. This is how you get disturbing calculations like “the lives lost due to air pollution are worth $10.2 billion.”

Why are these calculations disturbing? Because they have the whole thing backwards—people aren’t valuable because they are worth money; money is valuable because it helps people. It’s also really bizarre because it has to be adjusted for inflation. Finally—and this is the point that far too few people appreciate—the value of a dollar is not constant across people. Because different people have different marginal utilities of wealth, something that I would only be willing to pay $1000 for, Bill Gates might be willing to pay $1 million for—and a child in Africa might only be willing to pay $10, because that is all he has to spend. This makes the “willingness-to-pay” a basically meaningless concept independent of whose wealth we are spending.

Utility, on the other hand, might differ between people—but, at least in principle, it can still be added up between them on the same scale. The problem is that “in principle” part: How do we actually measure it?

So far, the best I’ve come up with is to borrow from public health policy and use the QALY, or quality-adjusted life year. By asking people macabre questions like “What is the maximum number of years of your life you would give up to not have a severe migraine every day?” (I’d say about 20—that’s where I feel ambivalent. At 10 I definitely would; at 30 I definitely wouldn’t.) or “What chance of total paralysis would you take in order to avoid being paralyzed from the waist down?” (I’d say about 20%.) we assign utility values: 80 years of migraines is worth giving up 20 years to avoid, so chronic migraine is a quality of life factor of 0.75. Total paralysis is 5 times as bad as paralysis from the waist down, so if waist-down paralysis is a quality of life factor of 0.90 then total paralysis is 0.50.
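
Those two elicitation methods (the time trade-off and the standard gamble, as they are called in health economics) are just arithmetic, and these little functions encode exactly the calculations in the paragraph above:

```python
def time_tradeoff(total_years, years_given_up):
    """Quality-of-life factor implied by indifference between
    `total_years` with a condition and
    (`total_years` - `years_given_up`) in full health."""
    return (total_years - years_given_up) / total_years

def standard_gamble(milder_factor, accepted_risk):
    """Factor for the worse condition, given indifference between the
    milder condition (quality `milder_factor`) and an `accepted_risk`
    chance of the worse one:
        disutility(milder) = accepted_risk * disutility(worse)."""
    return 1 - (1 - milder_factor) / accepted_risk

print(time_tradeoff(80, 20))                  # chronic migraine: 0.75
print(round(standard_gamble(0.90, 0.20), 2))  # total paralysis: 0.5
```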

You can probably already see that there are lots of problems: What if people don’t agree? What if due to framing effects the same person gives different answers to slightly different phrasing? Some conditions will directly bias our judgments—depression being the obvious example. How many years of your life would you give up to not be depressed? Suicide means some people say all of them. How well do we really know our preferences on these sorts of decisions, given that most of them are decisions we will never have to make? It’s difficult enough to make the actual decisions in our lives, let alone hypothetical decisions we’ve never encountered.

Another problem is often suggested as well: How do we apply this methodology outside questions of health? Does it really make sense to ask you how many years of your life drinking Coke or driving your car is worth?
Well, actually… it better, because you make that sort of decision all the time. You drive instead of staying home, because you value where you’re going more than the risk of dying in a car accident. You drive instead of walking because getting there on time is worth that additional risk as well. You eat foods you know aren’t good for you because you think the taste is worth the cost. Indeed, most of us aren’t making most of these decisions very well—maybe you shouldn’t actually drive or drink that Coke. But in order to know that, we need to know how many years of your life a Coke is worth.

As a very rough estimate, I figure you can convert from willingness-to-pay to QALY by dividing by your annual consumption spending. Say you spend annually about $20,000—pretty typical for a First World individual. Then $1 is worth about 50 microQALY, or about 26 quality-adjusted life-minutes. Now suppose you are in Third World poverty; your consumption might be only $200 a year, so $1 becomes worth 5 milliQALY, or 1.8 quality-adjusted life-days. The very richest individuals might spend as much as $10 million on consumption, so $1 to them is only worth 100 nanoQALY, or 3 quality-adjusted life-seconds.
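
The conversion itself is a one-liner; this just reproduces the three cases above under the same rough rule:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes in a year

def dollar_in_qa_minutes(annual_consumption):
    """Rough rule from above: $1 is worth 1/consumption of a QALY,
    expressed here in quality-adjusted life-minutes."""
    return MINUTES_PER_YEAR / annual_consumption

print(round(dollar_in_qa_minutes(20_000), 1))           # ~26.3 minutes
print(round(dollar_in_qa_minutes(200) / 60 / 24, 1))    # ~1.8 days
print(round(dollar_in_qa_minutes(10_000_000) * 60, 1))  # ~3.2 seconds
```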

That’s an extremely rough estimate, of course; it assumes you are in perfect health, all your time is equally valuable, and all your purchases are priced at exactly their marginal utility to you. Don’t take it too literally; based on the above estimate, an hour to you is worth about $2.30, so it would be worth your while to work for even $3 an hour. Here’s a simple correction we should probably make: if only a third of your time is really usable for work, you should expect at least $6.90 an hour—and hey, that’s a little less than the US minimum wage. So I think we’re in the right order of magnitude, but the details have a long way to go.

So let’s hear it, readers: How do you think we can best measure happiness?