What really happened in Greece

JDN 2457506

I said I’d get back to this issue, so here goes.

Let’s start with what is uncontroversial: Greece is in trouble.

Their per-capita GDP PPP has fallen from a peak of over $32,000 in 2007 to a trough of just over $24,000 in 2013, and only just began to recover over the last 2 years. That’s a fall of 29 log points. Put another way, the average person in Greece has about the same real income now that they had in the year 2000—a decade and a half of economic growth disappeared.
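For the curious, the 29-log-point figure follows directly from the two GDP numbers quoted above; here's a quick sketch (the dollar amounts are the approximate ones from the text):

```python
import math

# Per-capita GDP (PPP) figures quoted above, in dollars (approximate).
peak_2007 = 32_000
trough_2013 = 24_000

# A "log point" is 1/100 of a natural-log unit, so the fall in log points
# is 100 * ln(peak / trough).
fall_log_points = 100 * math.log(peak_2007 / trough_2013)
print(round(fall_log_points, 1))  # → 28.8, i.e. about 29 log points
```

Log points are handy here because, unlike percentage changes, they add up cleanly across periods of growth and decline.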

Their unemployment rate surged from about 7% in 2007 to almost 28% in 2013. It remains over 24%. That is, almost one quarter of all adults in Greece are seeking jobs and not finding them. The US has not seen an unemployment rate that high since the Great Depression.

Most shocking of all, over 40% of the population in Greece is now below the national poverty line. They define poverty as 60% of the inflation-adjusted average income in 2009, which works out to 665 Euros per person ($756 at current exchange rates) per month, or about $9000 per year. They also have an absolute poverty line, which 14% of Greeks now fall below, but only 2% did before the crash.
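As a sanity check on those figures (the exchange rate here is implied by the text's numbers, not quoted directly):

```python
monthly_eur = 665  # national poverty line, euros per person per month
monthly_usd = 756  # dollar equivalent quoted above

annual_usd = monthly_usd * 12
print(annual_usd)  # → 9072, i.e. about $9,000 per year

implied_rate = monthly_usd / monthly_eur
print(round(implied_rate, 2))  # → 1.14 dollars per euro
```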

So now, let’s talk about why.

There’s a standard narrative you’ve probably heard many times, which goes something like this:

The Greek government spent too profligately, heaping social services on the population without the tax base to support them. Unemployment insurance was too generous; pensions were too large; it was too hard to fire workers or cut wages. Thus, work incentives were too weak, and there was no way to sustain a high GDP. But they refused to cut back on these social services, and as a result went further and further into debt until it finally became unsustainable. Now they are cutting spending and raising taxes like they needed to, and it will eventually allow them to repay their debt.

Here’s a fellow of the Cato Institute spreading this narrative on the BBC. Here’s ABC with a five bullet-point list: Pension system, benefits, early retirement, “high unemployment and work culture issues” (yes, seriously), and tax evasion. Here the Telegraph says that Greece “went on a spending spree” and “stopped paying taxes”.

That story is almost completely wrong. Cato and the Telegraph got basically everything wrong; of ABC’s five points, the only one with any truth to it is tax evasion.

Here’s someone else arguing that Greece has a problem with corruption and failed governance; there is something to be said for this, as Greece is fairly corrupt by European standards—though hardly by world standards. For being only a generation removed from an authoritarian military junta, they’re doing quite well actually. They’re about as corrupt as a typical upper-middle income country like Libya or Botswana; and Botswana is widely regarded as the shining city on a hill of transparency as far as Sub-Saharan Africa is concerned. So corruption may have made things worse, but it can’t be the whole story.

First of all, social services in Greece were not particularly extensive compared to the rest of Europe.

Before the crisis, Greece’s government spending was about 44% of GDP.

That was about the same as Germany. It was slightly more than the UK. It was less than Denmark and France, both of which have government spending of about 50% of GDP.

Greece even tried to cut spending to pay down their debt—it didn’t work, because they simply ended up worsening the economic collapse and undermining the tax base they needed to do that.

Europe has fairly extensive social services by world standards—but that’s a major part of why it’s the First World. Even the US, despite spending far less than Europe on social services, still spends a great deal more than most countries—about 36% of GDP.

Second, if work incentives were a problem, you would not have high unemployment. People don’t seem to grasp what the word unemployment actually means, which is part of why I can’t stand it when news outlets just arbitrarily substitute “jobless” to save a couple of syllables. Unemployment does not mean simply that you don’t have a job. It means that you don’t have a job and are trying to get one.

The word you’re looking for to describe simply not having a job is nonemployment, and it’s such a rarely used term that my spell-checker complains about it. Economists rarely use it precisely because it doesn’t matter much; a high nonemployment rate is not a symptom of a failing economy but a result of high productivity moving us toward the post-scarcity future (kicking and screaming, evidently). If the problem with Greece were that they were too lazy and they retire too early (which is basically what ABC was saying in slightly more polite language), there would be high nonemployment, but there would not be high unemployment. “High unemployment and work culture issues” is actually a contradiction.

Before the crisis, Greece had an employment-to-population ratio of 49%, meaning a nonemployment rate of 51%. If that sounds ludicrously high, you’re not accustomed to nonemployment figures. During the same time, the United States had an employment-to-population ratio of 52% and thus a nonemployment rate of 48%. So the number of people in Greece who were voluntarily choosing to drop out of work before the crisis was just slightly larger than the number in the US—and actually when you adjust for the fact that the US is full of young immigrants and Greece is full of old people (their median age is 10 years older than ours), it begins to look like it’s we Americans who are lazy. (Actually, it’s that we are studious—the US has an extremely high rate of college enrollment and the best colleges in the world. Full-time students are nonemployed, but they are certainly not unemployed.)
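To make the distinction concrete, here is a hypothetical population of 100 working-age adults, loosely calibrated to the pre-crisis Greek figures quoted above:

```python
# Hypothetical population of 100 adults, loosely matching pre-crisis Greece.
population = 100
employed = 49    # employment-to-population ratio of 49%
unemployed = 4   # jobless AND actively seeking work

labor_force = employed + unemployed             # working or seeking work
unemployment_rate = unemployed / labor_force    # the figure news outlets cite
nonemployment_rate = 1 - employed / population  # simply "not employed"

print(round(100 * unemployment_rate, 1))   # → 7.5, close to the ~7% quoted
print(round(100 * nonemployment_rate, 1))  # → 51.0
```

Students, retirees, and stay-at-home parents all raise nonemployment without touching unemployment, which is exactly the point.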

But Greece does have an enormously high debt, right? Yes—but it was actually not nearly as bad before the crisis. Their government debt surged from 105% of GDP to almost 180% today. 105% of GDP is about what we have right now in the US; it’s less than what we had right after WW2. This is a little high, but really nothing to worry about, especially if you’ve incurred the debt for the right reasons. (The famous paper by Reinhart and Rogoff arguing that 90% of GDP is a horrible point of no return was literally based on spreadsheet errors.)

Moreover, Ireland and Spain suffered much the same fate as Greece, despite running primary budget surpluses.

So… what did happen? If it wasn’t their profligate spending that put them in this mess, what was it?

Well, first of all, there was the Second Depression, a worldwide phenomenon triggered by the collapse of derivatives markets in the United States. (You want unsustainable debt? Try 20 to 1 leveraged CDO-squareds and one quadrillion dollars in notional value. Notional value isn’t everything, but it’s a lot.) So it’s mainly our fault, or rather the fault of our largest banks. As for us voters, it’s “our fault” in the way that if your car gets stolen it’s “your fault” for not locking the doors and installing a LoJack. We could have regulated against this and enforced those regulations, but we didn’t. (Fortunately, Dodd-Frank looks like it might be working.)

Greece was hit particularly hard because they are highly dependent on trade, particularly in services like tourism that are highly sensitive to the business cycle. Before the crash they imported 36% of GDP and exported 23% of GDP. Now they import 35% of GDP and export 33% of GDP—but it’s a much smaller GDP. Their exports have only slightly increased while their imports have plummeted. (This has reduced their “trade deficit”, but that has always been a silly concept. I guess it’s less silly if you don’t control your own currency, but it’s still silly.)
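To see how exports can be nearly flat in absolute terms while their share of GDP jumps, index pre-crisis GDP to 100 (these index numbers are illustrative, built from the percentages quoted above):

```python
# Index numbers: pre-crisis GDP = 100; post-crisis GDP ≈ 75 after the
# roughly 25% real contraction described above.
gdp_before, gdp_after = 100, 75

exports_before = 23 * gdp_before / 100  # exported 23% of GDP
imports_before = 36 * gdp_before / 100  # imported 36% of GDP
exports_after = 33 * gdp_after / 100    # 33% of a much smaller GDP
imports_after = 35 * gdp_after / 100    # 35% of a much smaller GDP

print(exports_before, exports_after)  # → 23.0 24.75 (only slightly up)
print(imports_before, imports_after)  # → 36.0 26.25 (sharply down)
```

The "improvement" in the trade balance is almost entirely collapsing imports, not booming exports.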

Once the crash happened, the US had sovereign monetary policy and the wherewithal to actually use that monetary policy effectively, so we weathered the crash fairly well, all things considered. Our unemployment rate barely went over 10%. But Greece did not have sovereign monetary policy—they are tied to the Euro—and that severely limited their options for expanding the money supply as a result of the crisis. Raising spending and cutting taxes was the best thing they could do.

But the bank(st?)ers and their derivatives schemes caused the Greek debt crisis a good deal more directly than just that. Part of the condition of joining the Euro was that countries must limit their fiscal deficit to no more than 3% of GDP (which is a totally arbitrary figure with no economic basis in case you were wondering). Greece was unwilling or unable to do so, but wanted to look like they were following the rules—so they called up Goldman Sachs and got them to make some special derivatives that Greece could use to continue borrowing without looking like they were borrowing. The bank could have refused; they could have even reported it to the European Central Bank. But of course they didn’t; they got their brokerage fee, and they knew they’d sell it off to some other bank long before they had to worry about whether Greece could ever actually repay it. And then (as I said I’d get back to in a previous post) they paid off the credit rating agencies to get them to rate these newfangled securities as low-risk.

In other words, Greece is not broke; they are being robbed.

Like homeowners in the US, Greece was offered loans they couldn’t afford to pay, but the banks told them they could, because the banks had lost all incentive to actually bother with the question of whether loans can be repaid. They had “moved on”; their “financial innovation” of securitization and collateralized debt obligations meant that they could collect origination fees and brokerage fees on loans that could never possibly be repaid, then sell them off to some Greater Fool down the line who would end up actually bearing the default. As long as the system was complex enough and opaque enough, the buyers would never realize the garbage they were getting until it was too late. The entire concept of loans was thereby broken: The basic assumption that you only loan money you expect to be repaid no longer held.

And it worked, for a while, until finally the unpayable loans tried to create more money than there was in the world, and people started demanding repayment that simply wasn’t possible. Then the whole scheme fell apart, and banks began to go under—but of course we saved them, because you’ve got to save the banks, how can you not save the banks?

Honestly I don’t even disagree with saving the banks, actually. It was probably necessary. What bothers me is that we did nothing to save everyone else. We did nothing to keep people in their homes, nothing to stop businesses from collapsing and workers losing their jobs. Precisely because of the absurd over-leveraging of the financial system, the cost to simply refinance every mortgage in America would have been less than the amount we loaned out in bank bailouts. The banks probably would have done fine anyway, but if they didn’t, so what? The banks exist to serve the people—not the other way around.

We can stop this from happening again—here in the US, in Greece, in the rest of Europe, everywhere. But in order to do that we must first understand what actually happened; we must stop blaming the victims and start blaming the perpetrators.

The credit rating agencies to be worried about aren’t the ones you think

JDN 2457499

John Oliver is probably the best investigative journalist in America today, despite being neither American nor officially a journalist; last week he took on the subject of credit rating agencies, a classic example of his mantra “If you want to do something evil, put it inside something boring.” (note that it’s on HBO, so there is foul language):

As ever, his analysis of the subject is quite good—it’s absurd how much power these agencies have over our lives, and how little accountability they have for even assuring accuracy.

But I couldn’t help but feel that he was kind of missing the point. The credit rating agencies to really be worried about aren’t Equifax, Experian, and TransUnion, the ones that assess credit ratings on individuals. They are Standard & Poor’s, Moody’s, and Fitch (which would have been even easier to skewer the way John Oliver did—perhaps we can get them confused with Standardly Poor, Moody, and Filch), the agencies which assess credit ratings on institutions.

These credit rating agencies have almost unimaginable power over our society. They are responsible for rating the risk of corporate bonds, certificates of deposit, stocks, derivatives such as mortgage-backed securities and collateralized debt obligations, and even municipal and government bonds.

S&P, Moody’s, and Fitch don’t just rate the creditworthiness of Goldman Sachs and J.P. Morgan Chase; they rate the creditworthiness of Detroit and Greece. (Indeed, they played an important role in the debt crisis of Greece, which I’ll talk about more in a later post.)

Moreover, they are proven corrupt. It’s a matter of public record.

Standard and Poor’s is the worst; they have been successfully sued for fraud by small banks in Pennsylvania and by the State of New Jersey; they have also settled fraud cases with the Securities and Exchange Commission and the Department of Justice.

Moody’s has also been sued for fraud by the Department of Justice, and all three have been prosecuted for fraud by the State of New York.

But in fact this underestimates the corruption, because the worst conflicts of interest aren’t even illegal, or weren’t until Dodd-Frank was passed in 2010. The basic structure of this credit rating system is fundamentally broken; the agencies are private, for-profit corporations, and they get their revenue entirely from the banks that pay them to assess their risk. If they rate a bank’s asset as too risky, the bank stops paying them, and instead goes to another agency that will offer a higher rating—and simply the threat of doing so keeps them in line. As a result their ratings are basically uncorrelated with real risk—they failed to predict the collapse of Lehman Brothers or the failure of mortgage-backed CDOs, and they didn’t “predict” the European debt crisis so much as cause it by their panic.

Then of course there’s the fact that they are obviously an oligopoly, and furthermore one that is explicitly protected under US law. But then it dawns upon you: Wait… US law? US law decides the structure of credit rating agencies that set the bond rates of entire nations? Yes, that’s right. You’d think that such ratings would be set by the World Bank or something, but they’re not; in fact here’s a paper published by the World Bank in 2004 about how rather than reform our credit rating system, we should instead tell poor countries to reform themselves so they can better impress the private credit rating agencies.

In fact the whole concept of “sovereign debt risk” is fundamentally defective; a country that borrows in its own currency should never have to default on debt under any circumstances. National debt is almost nothing like personal or corporate debt. Their fears should be inflation and unemployment—their monetary policy should be set to minimize the harm of these two basic macroeconomic problems, understanding that policies which mitigate one may inflame the other. There is such a thing as bad fiscal policy, but it has nothing to do with “running out of money to pay your debt” unless you are forced to borrow in a currency you can’t control (as Greece is, because they are on the Euro—their debt is less like the US national debt and more like the debt of Puerto Rico, which is suffering an ongoing debt crisis you may not have heard about).

If you borrow in your own currency, you should be worried about excessive borrowing creating inflation and devaluing your currency—but not about suddenly being unable to repay your creditors. The whole concept of giving a sovereign nation a credit rating makes no sense. You will be repaid on time and in full, in nominal terms; if inflation or currency exchange has devalued the currency you are repaid in, that’s sort of like a partial default, but it’s a fundamentally different kind of “default” than simply not paying back the money—and credit ratings have no way of capturing that difference.

In particular, it makes no sense for interest rates on government bonds to go up when a country is suffering some kind of macroeconomic problem.

The basic argument for why interest rates go up when risk is higher is that lenders expect to be paid more by those who do pay to compensate for what they lose from those who don’t pay. This is already much more problematic than most economists appreciate; I’ve been meaning to write a paper on how this system creates self-fulfilling prophecies of default and moral hazard from people who pay their debts being forced to subsidize those who don’t. But it at least makes some sense.
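The standard argument can be written as a break-even condition: a risk-neutral lender is indifferent between a safe rate and a risky rate r satisfying (1 − p)(1 + r) = 1 + r_safe, where p is the probability of total default. A quick sketch (the rates and probabilities are hypothetical):

```python
def breakeven_rate(r_safe: float, p_default: float) -> float:
    """Risky rate at which a risk-neutral lender breaks even against a
    safe rate, assuming a default loses everything: (1-p)(1+r) = 1+r_safe."""
    return (1 + r_safe) / (1 - p_default) - 1

# With a 2% safe rate, a 10% default probability pushes the rate to ~13.3%.
print(round(100 * breakeven_rate(0.02, 0.10), 1))  # → 13.3
# If the higher rate itself raises perceived default risk to 20%, the rate
# must rise again -- the self-fulfilling spiral described above.
print(round(100 * breakeven_rate(0.02, 0.20), 1))  # → 27.5
```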

But if a country is “high risk” in the sense that macroeconomic instability is undermining the real value of its debt, what we should want is to help it restore macroeconomic stability. Yet we know that when there is a surge in interest rates on government bonds, instability gets worse, not better. Fiscal policy is suddenly shifted away from real production into higher debt payments, and this creates unemployment and makes the economic crisis worse. As Paul Krugman frequently writes, these policies of “austerity” cause enormous damage to national economies and ultimately benefit no one, because they destroy the source of wealth that would have been used to repay the debt.

By letting credit rating agencies decide the rates at which governments must borrow, we are effectively treating national governments as a special case of corporations. But corporations, by design, act for profit and can go bankrupt. National governments are supposed to act for the public good and persist indefinitely. We can’t simply let Greece fail as we might let a bank fail (and of course we’ve seen that there are serious downsides even to that). We have to restructure the sovereign debt system so that it benefits the development of nations rather than detracting from it. The first step is removing the power of private for-profit corporations in the US to decide the “creditworthiness” of entire countries. If we need to assess such risks at all, they should be done by international institutions like the UN or the World Bank.

But right now people are so stuck in the idea that national debt is basically the same as personal or corporate debt that they can’t even understand the problem. For after all, one must repay one’s debts.

The Warren Rule is a good start

JDN 2457243 EDT 10:40.

As far back as 2010, Elizabeth Warren proposed a simple regulation on the reporting of CEO compensation that was then built into Dodd-Frank—but the SEC has resisted actually applying that rule for five years; only now will it actually take effect (and by “now” I mean over the next two years). For simplicity I’ll refer to that rule as the Warren Rule, though I don’t see a lot of other people doing that (most people don’t give it a name at all).

Two things are important to understand about this rule, which both undercut its effectiveness and make all the right-wing whinging about it that much more ridiculous.

1. It doesn’t actually place any limits on CEO compensation or employee salaries; it merely requires corporations to consistently report the ratio between them. Specifically, the rule says that every publicly-traded corporation must report the ratio between the “total compensation” of their CEO and the median salary (with benefits) of their employees; wisely, it includes foreign workers (with a few minor exceptions—lobbyists fought for more but fortunately Warren stood firm), so corporations can’t simply outsource everything but management to make it look like they pay their employees more. Unfortunately, it does not include contractors, which is awful; expect to see corporations working even harder to outsource their work to “contractors” who are actually employees without benefits (not that they weren’t already). The greatest victory here will be for economists, who now will have more reliable data on CEO compensation; and for consumers, who will now find it more salient just how overpaid America’s CEOs really are.

2. While it does wisely cover “total compensation”, that isn’t actually all the money that CEOs receive for owning and operating corporations. It includes salaries, bonuses, benefits, and newly granted stock options—it does not include the value of stock options previously exercised or dividends received from stock the CEO already owns.

TIME screwed this up; they took it at face value when Larry Page reported a $1 “total compensation”, which technically is true by how “total compensation” is defined; he received a $1 token salary and no new stock awards. But Larry Page has a net wealth of over $38 billion; about half of that is Google stock, so even if we ignore all his other assets, at Google’s P/E ratio of about 25, Larry Page received at least $700 million in Google retained earnings alone. (In my personal favorite unit of wealth, Page receives about 3 romneys a year in retained earnings.) No, TIME, he is not the lowest-paid CEO in the world; he has simply structured his income so that it comes entirely from owning shares instead of receiving a salary. Most top CEOs do this, so be wary when it says a Fortune 500 CEO received only $2 million, and completely ignore it when it says a CEO received only $1. Probably in the former case and definitely in the latter, their real money is coming from somewhere else.
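The back-of-the-envelope arithmetic behind that $700 million figure, using the rough numbers in the paragraph above:

```python
net_wealth = 38e9              # Larry Page's net wealth, as quoted above
google_stake = net_wealth / 2  # roughly half of it is Google stock
pe_ratio = 25                  # Google's approximate price-to-earnings ratio

# A P/E of 25 means each dollar of stock corresponds to 1/25 of a dollar
# of annual earnings attributable to the shareholder.
implied_earnings = google_stake / pe_ratio
print(f"${implied_earnings / 1e9:.2f} billion per year")  # → $0.76 billion per year
```

All three inputs are approximations, so "at least $700 million" is the honest way to state the result.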

Of course, the complaints about how this is an unreasonable demand on businesses are totally absurd. Most of them keep track of all this data anyway; it’s simply a matter of porting it from one spreadsheet to another. (I also love the argument that only “idiosyncratic investors” will care; yeah, what sort of idiot would care about income inequality or be concerned how much of their investment money is going directly to line a single person’s pockets?) They aren’t complaining because it will be a large increase in bureaucracy or a serious hardship on their businesses; they’re complaining because they think it might work. Corporations are afraid that if they have to publicly admit how overpaid their CEOs are, they might actually be pressured to pay them less. I hope they’re right.

CEO pay is set in a very strange way; instead of being based on an estimate of how much they are adding to the company, a CEO’s pay is typically set as a certain margin above what the average CEO is receiving. But then as the process iterates and everyone tries to be above average, pay keeps rising, more or less indefinitely. Anyone with a basic understanding of statistics could have seen this coming, but somehow thousands of corporations didn’t—or else simply didn’t care.
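The ratchet is easy to simulate; if every board targets, say, 10% above the current average, the average itself compounds (the 10% margin is hypothetical):

```python
average_pay = 1.0  # normalized starting average CEO pay
margin = 0.10      # hypothetical: each board pays "average plus 10%"

for year in range(10):
    average_pay *= 1 + margin  # everyone jumps to 110% of the old average

print(round(average_pay, 2))  # → 2.59: pay has more than doubled in a decade
```

Any positive margin produces the same exponential runaway; only the speed changes.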

Most people around the world want the CEO-to-employee pay ratio to be dramatically lower than it is. Indeed, unrealistically lower, in my view. Survey respondents in most countries say the ideal ratio is only 6 to 1, while in Scandinavia they say only 2 to 1. I want you to think about that for a moment: if the average employee at a corporation makes $50,000, people in Scandinavia think the CEO should only make $100,000, and people elsewhere think the CEO should only make $300,000? I’m honestly not sure what would happen to our economy if we made such a rule. There would be very little incentive to want to become a CEO; why bear all that fierce competition and get blamed for everything to make only twice as much as you would as an average employee?

On the other hand, most CEOs don’t actually do all that much; CEO pay is basically uncorrelated with company performance. Maybe it would be better if they weren’t paid very much, or even if we didn’t have them at all. But under our current system, capping CEO pay also caps the pay of basically everyone else; the CEO is almost always the highest-paid individual in any corporation.

I guess that’s really the problem. We need to find ways to change the overall attitude of our society that higher authority necessarily comes with higher pay; that isn’t a rational assessment of marginal productivity, it’s a recapitulation of our primate instincts for a mating hierarchy. He’s the alpha male, of course he gets all the bananas.

The president of a university should make next to nothing compared to the top scientists at that university, because the president is a useless figurehead and scientists are the foundation of universities—and human knowledge in general. Scientists are actually the one example I can think of where one individual truly can be one million times as productive as another—though even then I don’t think that justifies paying them one million times as much.

Most corporations should be structured so that managers make moderate incomes and the highest incomes go to engineers and designers, the people who have the highest skills and do the most important work. A car company without managers seems like an interesting experiment in employee ownership. A car company without engineers seems like an oxymoron.

Finally, people who work in finance should make very low incomes, because they don’t actually do very much. Bank tellers are probably paid about what they should be; stock traders and hedge fund managers should be paid like bank tellers. (Or rather, there shouldn’t be stock traders and hedge funds as we know them; this is all pure waste. A really efficient financial system would be extremely simple, because finance actually is very simple—people who have money loan it to people who need it, and in return receive more money later. Everything else is just elaborations on that, and most of these elaborations are really designed to obscure, confuse, and manipulate.)

Oddly enough, the place where we do this best is the nation as a whole; the President of the United States would be astonishingly low-paid if we thought of him as a CEO. Only about $450,000 including expense accounts, for a “corporation” with revenue of nearly $3 trillion? (Suppose instead we gave the President 1% of tax revenue; that would be $30 billion per year. Think about how absurdly wealthy our leaders would be if we gave them stock options, and be glad that we don’t do that.)

But placing a hard cap at 2 or even 6 strikes me as unreasonable. Even during the 1950s the ratio was about 20 to 1, and it’s been rising ever since. I like Robert Reich’s proposal of a sliding scale of corporate taxes; I also wouldn’t mind a hard cap at a higher figure, like 50 or 100. Currently the average CEO makes about 350 times as much as the average employee, so even a cap of 100 would substantially reduce inequality.

A pay ratio cap could actually be a better alternative to a minimum wage, because it can adapt to market conditions. If the economy is really so bad that you must cut the pay of most of your workers, well, you’d better cut your own pay as well. If things are going well and you can afford to raise your own pay, your workers should get a share too. We never need to set some arbitrary amount as the minimum you are allowed to pay someone—but if you want to pay your employees that little, you won’t be paid very much yourself.
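A sketch of how such a (hypothetical) cap would work as a de facto wage floor:

```python
def implied_pay_floor(ceo_total_comp: float, cap_ratio: float) -> float:
    """Median worker pay floor implied by a hypothetical CEO-to-median
    pay ratio cap: the CEO's own pay divided by the cap."""
    return ceo_total_comp / cap_ratio

# Under a 100:1 cap, a CEO taking $10 million must pay the median worker
# at least $100,000 -- and the floor falls automatically if the CEO takes
# a pay cut in a downturn, unlike a fixed minimum wage.
print(implied_pay_floor(10_000_000, 100))  # → 100000.0
```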

The biggest reason to support the Warren Rule, however, is awareness. Most people simply have no idea of how much CEOs are actually paid. When asked to estimate the ratio between CEO and employee pay, most people around the world underestimate by a full order of magnitude.

Here are some graphs from a sampling of First World countries. I used data from this paper in Perspectives on Psychological Science; the fact that it’s published in a psychology journal tells you a lot about the academic turf wars involved in cognitive economics.

The first shows the absolute amount of average worker pay (not adjusted for purchasing power) in each country. Notice how the US is actually near the bottom, despite having one of the strongest overall economies and not particularly high purchasing power:


The second shows the absolute amount of average CEO pay in each country; I probably don’t even need to mention how the US is completely out of proportion with every other country.


And finally, the ratio of the two. One of these things is not like the other ones…


So obviously the ratio in the US is far too high. But notice how even in Poland, the ratio is still 28 to 1. In order to drop to the 6 to 1 ratio that most people seem to think would be ideal, we would need to dramatically reform even the most equal nations in the world. Denmark and Norway should particularly think about whether they really believe that 2 to 1 is the proper ratio, since they are currently some of the most equal (not to mention happiest) nations in the world, but their current ratios are still 48 and 58 respectively. You can sustain a ratio that high and still have universal prosperity; every adult citizen in Norway is a millionaire in local currency. (Adjusting for purchasing power, it’s not quite as impressive; instead the guaranteed wealth of a Norwegian citizen is “only” about $100,000.)

Most of the world’s population simply has no grasp of how extreme economic inequality has become. Putting the numbers right there in people’s faces should help with this, though if the figures only need to be reported to investors that probably won’t make much difference. But hey, it’s a start.

The Cognitive Science of Morality Part II: Molly Crockett

JDN 2457140 EDT 20:16.

This weekend has been very busy for me, so this post is going to be shorter than most—which is probably a good thing anyway, since my posts tend to run a bit long.

In an earlier post I discussed the Weinberg Cognitive Science Conference and my favorite speaker in the lineup, Joshua Greene. After a brief interlude from Capybara Day, it’s now time to talk about my second-favorite speaker, Molly Crockett. (Is it just me, or does the name “Molly” somehow seem incongruous with a person of such prestige?)

Molly Crockett is a neuroeconomist, though you’d never hear her say that. She doesn’t think of herself as an economist at all, but purely as a neuroscientist. I suspect this is because when she hears the word “economist” she thinks of only mainstream neoclassical economists, and she doesn’t want to be associated with such things.

Still, what she studies is clearly neuroeconomics—I in fact first learned of her work by reading the textbook Neuroeconomics, though I really got interested in her work after watching her TED Talk. It’s one of the better TED talks (they put out so many of them now that the quality is mixed at best); she talks about news reporting on neuroscience, how it is invariably ridiculous and sensationalist. This is particularly frustrating because of how amazing and important neuroscience actually is.

I could almost forgive the sensationalism if they were talking about something that’s actually fantastically boring, like, say, tax codes, or financial regulations. Of course, even then there is the Oliver Effect: You can hide a lot of evil by putting it in something boring. But Dodd-Frank is 2300 pages long; I read an earlier draft that was only (“only”) 600 pages, and it literally contained a three-page section explaining how to define the word “bank”. (Assuming direct proportionality, I would infer that there is now a twelve-page section defining the word “bank”. Hopefully not?) It doesn’t get a whole lot more snoozeworthy than that. So if you must be a bit sensationalist in order to get people to see why eliminating margin requirements and the swaps pushout rule are terrible, terrible ideas, so be it.

But neuroscience is not boring, and so sensationalism only means that news outlets are making up exciting things that aren’t true instead of saying the actually true things that are incredibly exciting.

Here, let me express without sensationalism what Molly Crockett does for a living: Molly Crockett experimentally determines how psychoactive drugs modulate moral judgments. The effects she observes are small, but they are real; and since these experiments are done using small doses for a short period of time, if these effects scale up they could be profound. This is the basic research component—when it comes to technological fruition it will be literally A Clockwork Orange. But it may be A Clockwork Orange in the best possible way: It could be, at last, a medical cure for psychopathy, a pill to make us not just happier or healthier, but better. We are not there yet by any means, but this is clearly the first step: Molly Crockett is to A Clockwork Orange roughly as Michael Faraday is to the Internet.

In one of the experiments she talked about at the conference, Crockett found that serotonin reuptake inhibitors enhance harm aversion. Serotonin reuptake inhibitors are very commonly used drugs—you are likely familiar with one called Prozac. So basically what this study means is that Prozac makes people more averse to causing pain in themselves or others. It doesn’t necessarily make them more altruistic, let alone more ethical; but it does make them more averse to causing pain. (To see the difference, imagine a 19th-century field surgeon dealing with a wounded soldier; there is no anesthetic, but an amputation must be made. Sometimes being ethical requires causing pain.)

The experiment is actually what Crockett calls “the honest Milgram Experiment”; under Milgram, the experimenters told their subjects they would be causing shocks, but no actual shocks were administered. Under Crockett, the shocks are absolutely 100% real (though they are restricted to a much lower voltage, of course). People are given competing offers that pair an amount of money with a number of shocks, to be delivered either to themselves or to the other subject. They decide how much it’s worth to them to bear the shocks—or to make someone else bear them. It’s a classic willingness-to-pay paradigm, applied to the Milgram Experiment.

What Crockett found did not surprise me, nor do I expect it will surprise you if you imagine yourself in the same place; but it would totally knock the socks off of any neoclassical economist. People are much more willing to bear shocks for money than they are to give shocks for money. They are what Crockett terms hyper-altruistic; I would say that they are exhibiting an apparent solidarity coefficient greater than 1. They seem to be valuing others more than they value themselves.

Normally I’d say that this makes no sense at all—why would you value some random stranger more than yourself? Valuing them equally, perhaps; and obviously only a psychopath would value them not at all; but more? And there’s no way you can actually live this way in your daily life; you’d give away all your possessions and perhaps even starve yourself to death. (I guess maybe Jesus lived that way.) But Crockett came up with a model that explains it pretty well: We are morally risk-averse. If we knew we were dealing with someone very strong who had no trouble bearing shocks, we’d be willing to shock them a fairly large amount. But we might actually be dealing with someone very vulnerable who would suffer greatly; and we don’t want to take that chance.

I think there’s some truth to that. But her model leaves out something else that I think is quite important: We are also averse to unfairness. We don’t like the idea of raising one person while lowering another. (Obviously not so averse as to never do it—we do it all the time—but without a compelling reason we consider it morally unjustified.) So if the two subjects are in roughly the same condition (being two undergrads at Oxford, they probably are), then helping one while hurting the other is likely to create inequality where none previously existed. But if you hurt yourself in order to help yourself, no such inequality is created; all you do is raise yourself up, provided that you believe the money is good enough to be worth the shocks. It’s actually quite Rawlsian; lifting one person up while not affecting the other is exactly the sort of inequality you’re allowed to create according to the Difference Principle.

There’s also the fact that the subjects can’t communicate; I think if I could make a deal to share the money afterward, I’d feel better about shocking someone more in order to get us both more money. So perhaps with communication people would actually be willing to shock others more. (And the sensationalist headline would of course be: “Talking makes people hurt each other.”)

But all of these ideas are things that could be tested in future experiments! And maybe I’ll do those experiments someday, or Crockett, or one of her students. And with clever experimental paradigms we might find out all sorts of things about how the human mind works, how moral intuitions are structured, and ultimately how chemical interventions can actually change human moral behavior. The potential for both good and evil is so huge, it’s both wondrous and terrifying—but can you deny that it is exciting?

And that’s not even getting into the Basic Fact of Cognitive Science, which undermines all concepts of afterlife and theistic religion. I already talked about it before—as the sort of thing that I sort of wish I could say when I introduce myself as a cognitive scientist—but I think it bears repeating.

As Patricia Churchland said on the Colbert Report: Colbert asked, “Are you saying I have no soul?” and she answered, “Yes.” I actually prefer Daniel Dennett’s formulation: “Yes, we have a soul, but it’s made of lots of tiny robots.”

We don’t have a magical, supernatural soul (whatever that means); we don’t have an immortal soul that will rise into Heaven or be reincarnated in someone else. But we do have something worth preserving: We have minds that are capable of consciousness. We love and hate, exalt and suffer, remember and imagine, understand and wonder. And yes, we are born and we die. Once the unique electrochemical pattern that defines your consciousness is sufficiently degraded, you are gone. Nothing remains of what you were—except perhaps the memories of others, or things you have created. But even this legacy is unlikely to last forever. One day it is likely that all of us—and everything we know, and everything we have built, from the Great Pyramids to Hamlet to Beethoven’s Ninth to Principia Mathematica to the US Interstate Highway System—will be gone. I don’t have any consolation to offer you on that point; I can’t promise you that anything will survive a thousand years, much less a million. There is a chance—even a chance that at some point in the distant future, whatever humanity has become will find a way to reverse the entropic decay of the universe itself—but nothing remotely like a guarantee. In all probability you, and I, and all of this will be gone someday, and that is absolutely terrifying.

But it is also undeniably true. The fundamental link between the mind and the brain is one of the basic facts of cognitive science; indeed I like to call it The Basic Fact of Cognitive Science. We know specifically which kinds of brain damage will make you unable to form memories, comprehend language, speak language (a totally different area), see, hear, smell, feel anger, integrate emotions with logic… do I need to go on? Everything that you are is done by your brain—because you are your brain.

Now why can’t the science journalists write about that? Instead we get “The Simple Trick That Can Boost Your Confidence Immediately” and “When it Comes to Picking Art, Men & Women Just Don’t See Eye to Eye.” HuffPo is particularly awful of course; the New York Times is better, but still hardly as good as one might like. They keep trying to find ways to make it exciting—but so rarely seem to grasp how exciting it already is.

The terrible, horrible, no-good very-bad budget bill

JDN 2457005 PST 11:52.

I would have preferred to write about something a bit cheerier (like the fact that by the time I write my next post I expect to be finished with my master’s degree!), but this is obviously the big news in economic policy today. The new House budget bill was unveiled Tuesday, and then passed in the House on Thursday by a narrow vote. It has now stalled in the Senate, thanks in part to fierce—and entirely justified—opposition from Elizabeth Warren. Obama has actually urged his fellow Democrats to pass it, in order to avoid another government shutdown. Here’s why Warren is right and Obama is wrong.

You know the saying “You can’t negotiate with terrorists!”? Well, in practice that’s not actually true—we negotiate with terrorists all the time; the FBI has special hostage negotiators for this purpose, because sometimes it really is the best option. But the saying has an underlying kernel of truth, which is that once someone is willing to hold hostages and commit murder, they have crossed a line, a Rubicon from which it is impossible to return; negotiations with them can never again be good-faith honest argumentation, but must always be a strategic action to minimize collateral damage. Everyone knows that if you had the chance you’d just as soon put bullets through all their heads—because everyone knows they’d do the same to you.

Well, right now, the Republicans are acting like terrorists. Emotionally a fair comparison would be with two-year-olds throwing tantrums, but two-year-olds do not control policy on which thousands of lives hang in the balance. This budget bill is designed—quite intentionally, I’m sure—to ensure that Democrats are left with only two options: Give up on every major policy issue and abandon all the principles they stand for, or fail to pass a budget and allow the government to shut down, canceling vital services and costing billions of dollars. They are holding the American people hostage.

But here is why you must not give in: They’re going to shoot the hostages anyway. This so-called “compromise” would not only add $479 million in spending on fighter jets that don’t work and that the Pentagon hasn’t even asked for; not only cut $93 million from WIC—a 3.5% budget cut adjusted for inflation, literally denying food to starving mothers and children; not only dramatically increase the amount of money individuals can give in campaign donations (because apparently the unlimited corporate money of Citizens United wasn’t enough!); but would also remove two of the central provisions of Dodd-Frank financial regulation that are the only thing standing between us and a full reprise of the Great Recession. And even if the Democrats in the Senate cave to the demands just as the spineless cowards in the House already did, there is nothing to stop Republicans from using the same scorched-earth tactics next year.

I wouldn’t literally say we should put bullets through their heads, but we definitely need to get these Republicans out of office immediately at the next election—and that means that all the left-wing people who insist they don’t vote “on principle” need to grow some spines of their own and vote. Vote Green if you want—the benefits of having a substantial Green coalition in Congress would be enormous, because the Greens favor three really good things in particular: Stricter regulation of carbon emissions, nationalization of the financial system, and a basic income. Or vote for some other obscure party that you like even better. But for the love of all that is good in the world, vote.

The two most obscure—and yet most important—measures in the bill are the elimination of the swaps pushout rule and the margin requirements on derivatives. Compared to these, the cuts in WIC are small potatoes (literally, they include a stupid provision about potatoes). They also really aren’t that complicated, once you boil them down to their core principles. This is however something Wall Street desperately wants you to never, ever do, for otherwise their global crime syndicate will be exposed.

The swaps pushout rule says quite simply that if you’re going to place bets on the failure of other companies—these are called credit default swaps, but they are really quite literally a bet that a given company will go bankrupt—you can’t do so with deposits that are insured by the FDIC. This is the absolute bare minimum regulatory standard that any reasonable economist (or for that matter sane human being!) would demand. Honestly I think credit default swaps should be banned outright. If you want insurance, you should have to buy insurance—and yes, deal with the regulations involved in buying insurance, because those regulations are there for a reason. There’s a reason you can’t buy fire insurance on other people’s houses, and that exact same reason applies a thousandfold for why you shouldn’t be able to buy credit default swaps on other people’s companies. Most people are not psychopaths who would burn down their neighbor’s house for the insurance money—but even when their executives aren’t psychopaths (as many are), most companies are specifically structured so as to behave as if they were psychopaths, as if no interests in the world mattered but their own profit.

But the swaps pushout rule does not by any means ban credit default swaps. Honestly, it doesn’t even really regulate them in any real sense. All it does is require that these bets have to be made with the banks’ own money and not with everyone else’s. You see, bank deposits—the regular kind, “commercial banking”, where you have your checking and savings accounts—are secured by government funds in the event a bank should fail. This makes sense, at least insofar as it makes sense to have private banks in the first place (if we’re going to insure with government funds, why not just use government funds?). But if you allow banks to place whatever bets they feel like using that money, they have basically no downside; heads they win, tails we lose. That’s why the swaps pushout rule is absolutely indispensable; without it, you are allowing banks to gamble with other people’s money.

What about margin requirements? This one is even worse. Margin requirements are literally the only thing that keeps banks from printing unlimited money. If there was one single cause of the Great Recession, it was the fact that there were no margin requirements on over-the-counter derivatives. Because there were no margin requirements, there was no limit to how much money banks could print, and so print they did; the result was a still mind-blowing quadrillion dollars in nominal value of outstanding derivatives. Not million, not billion, not even trillion; quadrillion. $1e15. $1,000,000,000,000,000. That’s how much money they printed. The total world money supply is about $70 trillion, which is 1/14 of that. (If you read that blog post, he makes a rather telling statement: “They demonstrate quite clearly that those who have been lending the money that we owe can’t possibly have had the money they lent.” No, of course they didn’t! They created it by lending it. That is what our system allows them to do.)

And yes, at its core, it was printing money. A lot of economists will tell you otherwise, about how that’s not really what’s happening, because it’s only “nominal” value, and nobody ever expects to cash them in—yeah, but what if they do? (These are largely the same people who will tell you that quantitative easing isn’t printing money, because, uh… er… squirrel!) A tiny fraction of these derivatives were cashed in in 2007, and I think you know what happened next. They printed this money and now they are holding onto it; but woe betide us all if they ever decide to spend it. Honestly we should invalidate all of these derivatives and force them to start over with strict margin requirements, but short of that we must at least, again at the bare minimum, have margin requirements.

Why are margin requirements so important? There’s actually a very simple equation that explains it. If the margin requirement is m—meaning that each bank must retain a fraction m (between 0 and 1) of the deposits it receives as reserves, which is mathematically the same as a reserve requirement—then the total money supply that can be created from the current amount of base money M is just M/m. So if margin requirements were 100%—full-reserve banking—the total money supply would be M, and therefore in full control of the central bank. This is how it should be, in my opinion. But usually m is set around 10%, so the total money supply is 10M, meaning that 90% of the money in the system was created by banks. And if you ever let that margin requirement go to zero, you end up dividing by zero—and the total amount of money that can be created is infinite.

To see how this works, suppose we start with $1000 and put it in bank A. Bank A then creates a loan; how big they can make the loan depends on the margin requirement. Let’s say it’s 10%. They can make a loan of $900, because they must keep $100 (10% of $1000) in reserve. So they do that, and then it gets placed in bank B. Then bank B can make a loan of $810, keeping $90. The $810 gets deposited in bank C, which can make a loan of $729, and so on. The total amount of money in the system is the sum of all these: $1000 in bank A (remember, that deposit doesn’t disappear when it’s loaned out!), plus the $900 in bank B, plus $810 in bank C, plus $729 in bank D. After 4 steps we are at $3,439. As we go through more and more steps, the money supply gets larger at an exponentially decaying rate and we converge toward the maximum at $10,000.
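This chain is easy to simulate. Here is a minimal Python sketch of it (the function name and the 10% figure are just this example’s assumptions, nothing official):

```python
# Simulate repeated deposit-and-loan cycles under a reserve fraction m.
def money_supply(initial_deposit, m, steps):
    """Total deposits in the system after `steps` deposits,
    each bank keeping fraction m in reserve and lending the rest."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(steps):
        total += deposit      # the deposit stays on the books when loaned out
        deposit *= (1 - m)    # the rest is loaned out and redeposited
    return total

print(money_supply(1000, 0.10, 4))    # banks A through D: 3439.0
print(money_supply(1000, 0.10, 100))  # approaching the limit 1000/0.10 = 10000
```

Each deposit is (1-m) times the previous one, which is exactly what makes the total a geometric series.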

The original amount is M, and then we add M(1-m), M(1-m)^2, M(1-m)^3, and so on. That produces the following sum of the first n+1 terms (below is LaTeX, which I can’t render for you without a plugin, which requires me to pay for a WordPress subscription I cannot presently afford; you can copy-paste and render it yourself here):

\sum_{k=0}^{n} M (1-m)^k = M \frac{1 - (1-m)^{n+1}}{m}

And then as you let the number of terms grow arbitrarily large, the sum converges to a finite limit:

\sum_{k=0}^{\infty} M (1-m)^k = \frac{M}{m}

To be fair, we never actually go through infinitely many steps, so even with a margin requirement of zero we don’t literally end up with infinite money. Instead, we just end up with nM, the number of deposit steps times the initial money supply. Start with $1000 and go through 4 steps: $4000. Go through 10 steps: $10,000. Go through 100 steps: $100,000. It just keeps getting bigger and bigger, until that money has nowhere to go and the whole house of cards falls down.
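Both the closed-form partial sum and its limit are easy to check numerically; here is a quick Python sketch under the same assumptions (n counts the deposits after the first one, as in the sum above):

```python
# Closed form for the partial sum of the deposit chain:
# sum_{k=0}^{n} M*(1-m)^k = M * (1 - (1-m)^(n+1)) / m
def partial_sum(M, m, n):
    """Total money after n+1 deposits with reserve fraction m (0 < m <= 1)."""
    return M * (1 - (1 - m) ** (n + 1)) / m

M, m = 1000, 0.10
print(partial_sum(M, m, 3))  # 4 deposits, matching the chain above: 3439.0
print(M / m)                 # the limit as n grows: 10000.0
# With m = 0 the closed form divides by zero; n+1 deposits just give (n+1)*M.
```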

Honestly, I’m not even sure why Wall Street banks would want to get rid of margin requirements. It’s basically putting your entire economy on the counterfeiting standard. Fiat money is often accused of this, but the government has both (a) the legitimate authority empowered by the electorate and (b) incentives to maintain macroeconomic stability, neither of which private banks have. There is no reason other than altruism (and we all know how much altruism Citibank and HSBC have—it is approximately equal to the margin requirement they are trying to get passed—and yes, they wrote the bill) that would prevent them from simply printing as much money as they possibly can, thus maximizing their profits; and they can even excuse the behavior by saying that everyone else is doing it, so it’s not like they could prevent the collapse all by themselves. But by lobbying for a regulation to specifically allow this, they no longer have that excuse; no, everyone won’t be doing it, not unless you pass this law to let them. Despite the global economic collapse that was just caused by this sort of behavior only seven years ago, they now want to return to doing it. At this point I’m beginning to wonder if calling them an international crime syndicate is actually unfair to international crime syndicates. These guys are so totally evil it actually goes beyond the bounds of rational behavior; they’re turning into cartoon supervillains. I would honestly not be that surprised if there were a video of one of these CEOs caught on camera cackling maniacally, “Muahahahaha! The world shall burn!” (Then again, I was pleasantly surprised to see the CEO of Goldman Sachs talking about the harms of income inequality, though it’s not clear he appreciated his own contribution to that inequality.)

And that is why Democrats must not give in. The Senate should vote it down. Failing that, Obama should veto. I wish he still had the line-item veto so he could just remove the egregious riders without allowing a government shutdown, but no, the Supreme Court struck the line-item veto down back in 1998. And honestly the reasoning makes sense; there is supposed to be a balance of power between Congress and the President. I just wish we had a Congress that would use its power responsibly, instead of holding the American people hostage to the villainous whims of Wall Street banks.