What would a new macroeconomics look like?

Dec 9 JDN 2458462

In previous posts I have extensively criticized the current paradigm of macroeconomics. But it’s always easier to tear the old edifice down than to build a better one in its place. So in this post I thought I’d try to be more constructive: What sort of new directions could macroeconomics take?

The most important change we need to make is to abandon the assumption of dynamic optimization. This will be a very hard sell, as most macroeconomists have become convinced that the Lucas Critique means we need to always base everything on the dynamic optimization of a single representative agent. I don’t think this was actually what Lucas meant (though maybe we should ask him; he’s still at Chicago), and I certainly don’t think it is what he should have meant. He had a legitimate point about the way macroeconomics was operating at that time: It was ignoring the feedback loops that occur when we start trying to change policies.

Goodhart’s Law is probably a better formulation: Once you make an indicator into a target, you make it less effective as an indicator. So while inflation does seem to be negatively correlated with unemployment, that doesn’t mean we should try to increase inflation to extreme levels in order to get rid of unemployment; sooner or later the economy is going to adapt and we’ll just have both inflation and unemployment at the same time. (Campbell’s Law provides a specific example that I wish more people in the US understood: Test scores would be a good measure of education if we didn’t use them to target educational resources.)

The reason we must get rid of dynamic optimization is quite simple: No one behaves that way.

It’s often computationally intractable even in our wildly oversimplified models that experts spend years working on; now you’re imagining that everyone does this constantly?

The most fundamental part of almost every DSGE model is the Euler equation; this equation comes directly from the dynamic optimization. It’s supposed to predict how people will choose to spend and save based upon their plans for an infinite sequence of future income and spending—and if this sounds utterly impossible, that’s because it is. Euler equations don’t fit the data at all, and even extreme attempts to save them by adding a proliferation of additional terms have failed. (It reminds me very much of the epicycles that astronomers used to add to the geocentric model of the universe to try to squeeze in anomalies like the retrograde motion of Mars, before they had the heliocentric model.)
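For readers who haven’t seen one, here is roughly what I mean, written out in LaTeX; this is one common textbook form with CRRA utility, and the exact terms vary from model to model:

    % One common form of the consumption Euler equation, assuming CRRA utility
    % u(c) = c^{1-\gamma}/(1-\gamma); details differ across DSGE models.
    \[
      c_t^{-\gamma} \;=\; \beta\, \mathbb{E}_t\!\left[(1 + r_{t+1})\, c_{t+1}^{-\gamma}\right]
    \]
    % beta is the discount factor, gamma the coefficient of relative risk aversion,
    % and r_{t+1} the real return. The household is assumed to satisfy this
    % condition at every date t, over an infinite horizon.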

We should instead start over: How do people actually choose their spending? Well, first of all, it’s not completely rational. But it’s also not totally random. People spend on necessities before luxuries; they try to live within their means; they shop for bargains. There is a great deal of data from behavioral economics that could be brought to bear on understanding the actual heuristics people use in deciding how to spend and save. There have already been successful policy interventions using this knowledge, like Save More Tomorrow.

The best thing about this is that it should make our models simpler. We’re no longer asking each agent in the model to solve an impossible problem. However people actually make these decisions, we know it can be done, because it is being done. Most people don’t really think that hard, even when they probably should; so the heuristics really can’t be that complicated. My guess is that you can get a good fit—certainly better than an Euler equation—just by assuming that people set a target for how much they’re going to save (which is also probably pretty small for most people), and then spend the rest.
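Purely as an illustration of how simple such a rule could be (the 5% target and the income numbers below are made up, not estimates of anything), a rule-of-thumb saver looks something like this:

    # A minimal sketch of a rule-of-thumb saver: pick a savings target,
    # spend the rest. All numbers here are invented for illustration.
    import random

    def simulate_rule_of_thumb(months=120, save_rate=0.05, seed=1):
        random.seed(seed)
        wealth = 0.0
        history = []
        for t in range(months):
            income = 3000 * (1 + random.uniform(-0.1, 0.1))  # noisy monthly income
            saving = save_rate * income                      # fixed target, e.g. 5%
            spending = income - saving                       # spend the rest
            wealth += saving
            history.append((t, round(spending, 2), round(wealth, 2)))
        return history

    if __name__ == "__main__":
        for month, spending, wealth in simulate_rule_of_thumb()[:3]:
            print(month, spending, wealth)

There is no infinite-horizon optimization anywhere in that loop, and it still produces a perfectly coherent consumption path; the empirical question is just which rule of thumb actually fits the data.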

The second most important thing we need to add is inequality. Some people are much richer than others; this is a very important fact about economics that we need to understand. Yet it has taken the economics profession decades to figure this out, and even now I’m only aware of one class of macroeconomic models that seriously involves inequality, the Heterogeneous Agent New Keynesian (HANK) models which didn’t emerge until the last few years (the earliest publication I can find is 2016!). And these models are monsters; they are almost always computationally intractable and have a huge number of parameters to estimate.

Understanding inequality will require more parameters, that much is true. But if we abandon dynamic optimization, we won’t need as many as the HANK models have, and most of the new parameters are actually things we can observe, like the distribution of wages and years of schooling.

Observability of parameters is a big deal. Another problem with the way the Lucas Critique has been used is that we’ve been told we need to be using “deep structural parameters” like the intertemporal elasticity of substitution and the coefficient of relative risk aversion—but we have no idea what those actually are. We can’t observe them, and all of our attempts to measure them indirectly have yielded inconclusive or even inconsistent results. This is probably because these parameters are based on assumptions about human rationality that are simply not realistic. Most people probably don’t have a well-defined intertemporal elasticity of substitution, because their day-to-day decisions simply aren’t consistent enough over time for that to make sense. Sometimes they eat salad and exercise; sometimes they loaf on the couch and drink milkshakes. Likewise with risk aversion: many moons ago I wrote about how people will buy both insurance and lottery tickets, which no one with a consistent coefficient of relative risk aversion would ever do.
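To see why, here’s a quick sketch with invented dollar figures: it checks, for a range of candidate coefficients, whether a CRRA agent would buy an actuarially unfair lottery ticket and whether they would buy actuarially unfair insurance. No single value of the coefficient says yes to both.

    # Sketch: no single CRRA coefficient makes an agent buy both an actuarially
    # unfair lottery ticket and actuarially unfair insurance. Dollar figures invented.
    import math

    def crra(c, gamma):
        return math.log(c) if abs(gamma - 1) < 1e-9 else c ** (1 - gamma) / (1 - gamma)

    def expected_utility(outcomes, gamma):
        # outcomes: list of (probability, consumption) pairs
        return sum(p * crra(c, gamma) for p, c in outcomes)

    wealth = 50_000
    # Lottery: $2 ticket, one-in-a-million chance of $1,000,000 (expected value ~$1).
    lottery_buy = [(1e-6, wealth - 2 + 1_000_000), (1 - 1e-6, wealth - 2)]
    lottery_skip = [(1.0, wealth)]
    # Insurance: $500 premium against a 1% chance of losing $40,000 (expected loss $400).
    insure_buy = [(1.0, wealth - 500)]
    insure_skip = [(0.01, wealth - 40_000), (0.99, wealth)]

    for gamma in (-2.0, -0.5, 0.0, 0.5, 1.0, 2.0, 5.0):
        buys_lottery = expected_utility(lottery_buy, gamma) > expected_utility(lottery_skip, gamma)
        buys_insurance = expected_utility(insure_buy, gamma) > expected_utility(insure_skip, gamma)
        print(f"gamma={gamma:5.1f}  buys lottery: {buys_lottery}  buys insurance: {buys_insurance}")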

So if we are interested in deep structural parameters, we need to base those parameters on behavioral experiments so that we can understand actual human behavior. And frankly I don’t think we need deep structural parameters; I think this is a form of greedy reductionism, where we assume that the way to understand something is always to look at smaller pieces. Sometimes the whole is more than the sum of its parts. Economists obviously feel a lot of envy for physics; but they don’t seem to understand that aerodynamics would never have (ahem) gotten off the ground if we had first waited for an exact quantum mechanical solution of the oxygen atom (which we still don’t have, by the way). Macroeconomics may not actually need “microfoundations” in the strong sense that most economists intend; it needs to be consistent with small-scale behavior, but it doesn’t need to be derived from small-scale behavior.

This means that the new paradigm in macroeconomics does not need to be computationally intractable. Using heuristics instead of dynamic optimization and worrying less about microfoundations will make the models simpler; adding inequality need not make them so much more complicated.

The sausage of statistics being made

 

Nov 11 JDN 2458434

“Laws, like sausages, cease to inspire respect in proportion as we know how they are made.”

~ John Godfrey Saxe, not Otto von Bismarck

Statistics are a bit like laws and sausages. There are a lot of things in statistical practice that don’t align with statistical theory. The most obvious example is the fact that many results in statistics are asymptotic: they only strictly apply for infinitely large samples, and in any finite sample they will be some sort of approximation (we often don’t even know how good an approximation).

But the problem runs deeper than this: The whole framework of p-values was originally designed to assess one single hypothesis, the only one you test in your entire study.

That’s frankly a ludicrous expectation: Why would you write a whole paper just to test one parameter?

This is why I don’t actually think this so-called multiple comparisons problem is a problem with researchers doing too many hypothesis tests; I think it’s a problem with statisticians being fundamentally unreasonable about what statistics is useful for. We have to do multiple comparisons, so you should be telling us how to do it correctly.
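Here’s the core of the problem in a toy simulation (with a Bonferroni-corrected threshold thrown in just to show the flavor of a formal fix; I’m not claiming it’s the right one):

    # Sketch: run many independent tests where the null is true everywhere,
    # and count how often at least one "significant" result appears.
    import random

    def false_positive_rate(n_tests, alpha, n_trials=2000, seed=0):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_trials):
            # Under the null, each p-value is uniform on [0, 1].
            if any(rng.random() < alpha for _ in range(n_tests)):
                hits += 1
        return hits / n_trials

    for m in (1, 5, 20, 100):
        naive = false_positive_rate(m, 0.05)
        bonferroni = false_positive_rate(m, 0.05 / m)   # Bonferroni-adjusted threshold
        print(f"{m:4d} tests: P(any false positive) naive={naive:.2f}, Bonferroni={bonferroni:.2f}")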

Statisticians have this beautiful pure mathematics that generates all these lovely asymptotic results… and then they stop, as if they were done. But we aren’t dealing with infinite or even “sufficiently large” samples; we need to know what happens when your sample is 100, not when your sample is 10^29. We can’t assume that our variables are independent and identically distributed; we don’t know their distribution, and we’re pretty sure they’re going to be somewhat dependent.
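For example, here is a quick sketch (with deliberately skewed, made-up data) of how the textbook 95% confidence interval, whose justification is asymptotic, behaves when the sample is only 100:

    # Sketch: coverage of the textbook 95% confidence interval for a mean
    # when the data are heavily skewed (lognormal) and the sample is small.
    import math
    import random

    def coverage(n, n_trials=2000, seed=0):
        rng = random.Random(seed)
        sigma = 2.0
        true_mean = math.exp(sigma ** 2 / 2)          # mean of a lognormal(0, sigma^2)
        covered = 0
        for _ in range(n_trials):
            xs = [math.exp(rng.gauss(0, sigma)) for _ in range(n)]
            xbar = sum(xs) / n
            s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
            half_width = 1.96 * s / math.sqrt(n)      # the asymptotic normal interval
            if xbar - half_width <= true_mean <= xbar + half_width:
                covered += 1
        return covered / n_trials

    print("n = 100:  coverage =", coverage(100))      # noticeably below the nominal 0.95
    print("n = 2000: coverage =", coverage(2000))     # closer, but still not exact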

Even in an experimental context where we can randomly and independently assign some treatments, we can’t do that with lots of variables that are likely to matter, like age, gender, nationality, or field of study. And applied econometricians are in an even tighter bind; they often can’t randomize anything. They have to rely upon “instrumental variables” that they hope are “close enough to randomized” relative to whatever they want to study.

In practice what we tend to do is… fudge it. We use the formal statistical methods, and then we step back and apply a series of informal norms to see if the result actually makes sense to us. This is why almost no psychologists were actually convinced by Daryl Bem’s precognition experiments, despite his standard experimental methodology and perfect p < 0.05 results; he couldn’t pass any of the informal tests, particularly the most basic one of not violating any known fundamental laws of physics. We knew he had somehow cherry-picked the data, even before looking at it; nothing else was possible.

This is actually part of where the “hierarchy of sciences” notion is useful: One of the norms is that you’re not allowed to break the rules of the sciences above you, but you can break the rules of the sciences below you. So psychology has to obey physics, but physics doesn’t have to obey psychology. I think this is also part of why there’s so much enmity between economists and anthropologists; really we should be on the same level, cognizant of each other’s rules, but economists want to be above anthropologists so we can ignore culture, and anthropologists want to be above economists so they can ignore incentives.

Another informal norm is the “robustness check”, in which the researcher runs a dozen different regressions approaching the same basic question from different angles. “What if we control for this? What if we interact those two variables? What if we use a different instrument?” In terms of statistical theory, this doesn’t actually make a lot of sense; the probability distributions f(y|x) of y conditional on x and f(y|x, z) of y conditional on x and z are not the same thing, and wouldn’t in general be closely tied, depending on the distribution f(x|z) of x conditional on z. But in practice, most real-world phenomena are going to continue to show up even as you run a bunch of different regressions, and so we can be more confident that something is a real phenomenon insofar as that happens. If an effect drops out when you switch out a couple of control variables, it may have been a statistical artifact. But if it keeps appearing no matter what you do to try to make it go away, then it’s probably a real thing.
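In practice a robustness check is literally just a loop over specifications; here is a stripped-down sketch with simulated data (the variable names and coefficients are invented, and I’m using plain least squares rather than any particular econometrics package):

    # Sketch of a "robustness check": re-run the same regression with
    # different control sets and see whether the coefficient on x survives.
    # Data are simulated; numbers are for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    z = rng.normal(size=n)                 # a control variable
    x = 0.5 * z + rng.normal(size=n)       # regressor correlated with z
    y = 2.0 * x + 1.0 * z + rng.normal(size=n)

    def ols_coef_on_x(columns):
        X = np.column_stack([np.ones(n)] + columns)   # intercept + regressors
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1]                                # coefficient on x

    print("y ~ x:          ", round(ols_coef_on_x([x]), 2))
    print("y ~ x + z:      ", round(ols_coef_on_x([x, z]), 2))
    print("y ~ x + z + z^2:", round(ols_coef_on_x([x, z, z ** 2]), 2))

If the estimate collapsed toward zero the moment we added a control, we would worry; since it merely shifts a little (the first specification has some omitted-variable bias by construction), we would call it robust.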

Because of the powerful career incentives toward publication and the strange obsession among journals with a p-value less than 0.05, another norm has emerged: Don’t actually trust p-values that are close to 0.05. The vast majority of the time, a p-value of 0.047 was the result of publication bias. Now if you see a p-value of 0.001, maybe then you can trust it—but you’re still relying on a lot of assumptions even then. I’ve seen some researchers argue that because of this, we should tighten our standards for publication to something like p < 0.01, but that’s missing the point; what we need to do is stop publishing based on p-values. If you tighten the threshold, you’re just going to get more rejected papers and then the few papers that do get published will now have even smaller p-values that are still utterly meaningless.

These informal norms protect us from the worst outcomes of bad research. But they are almost certainly not optimal. It’s all very vague and informal, and different researchers will often disagree vehemently over whether a given interpretation is valid. What we need are formal methods for solving these problems, so that we can have the objectivity and replicability that formal methods provide. Right now, our existing formal tools simply are not up to that task.

There are some things we may never be able to formalize: If we had a formal algorithm for coming up with good ideas, the AIs would already rule the world, and this would be either Terminator or The Culture depending on whether we designed the AIs correctly. But I think we should at least be able to formalize the basic question of “Is this statement likely to be true?” that is the fundamental motivation behind statistical hypothesis testing.

I think the answer is likely to be in a broad sense Bayesian, but Bayesians still have a lot of work left to do in order to give us really flexible, reliable statistical methods we can actually apply to the messy world of real data. In particular, tell us how to choose priors please! Prior selection is a fundamental make-or-break problem in Bayesian inference that has nonetheless been greatly neglected by most Bayesian statisticians. So, what do we do? We fall back on informal norms: Try maximum likelihood, which is like using a very flat prior. Try a normally-distributed prior. See if you can construct a prior from past data. If all those give the same thing, that’s a “robustness check” (see previous informal norm).
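For the simplest possible case, estimating a mean with known noise, that informal procedure amounts to something like this (conjugate normal-normal updating; every number here is a toy value, not a recommendation):

    # Sketch: compare the posterior for a normal mean under a very flat prior
    # (essentially maximum likelihood) and under an informative normal prior.
    import math
    import random

    random.seed(42)
    true_mu, noise_sd, n = 1.0, 2.0, 25
    data = [random.gauss(true_mu, noise_sd) for _ in range(n)]
    xbar = sum(data) / n

    def posterior_mean_and_sd(prior_mu, prior_sd):
        prior_prec = 1.0 / prior_sd ** 2
        data_prec = n / noise_sd ** 2
        post_var = 1.0 / (prior_prec + data_prec)
        post_mu = post_var * (prior_prec * prior_mu + data_prec * xbar)
        return post_mu, math.sqrt(post_var)

    print("MLE (flat prior):  ", round(xbar, 3))
    print("Nearly flat prior: ", tuple(round(v, 3) for v in posterior_mean_and_sd(0.0, 100.0)))
    print("Informative prior: ", tuple(round(v, 3) for v in posterior_mean_and_sd(0.0, 0.5)))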

Informal norms are also inherently harder to teach and learn. I’ve seen a lot of other grad students flail wildly at statistics, not because they don’t know what a p-value means (though maybe that’s also sometimes true), but because they don’t really quite grok the informal underpinnings of good statistical inference. This can be very hard to explain to someone: They feel like they followed all the rules correctly, but you are saying their results are wrong, and now you can’t explain why.

In fact, some of the informal norms that are in wide use are clearly detrimental. In economics, norms have emerged that certain types of models are better simply because they are “more standard”, such as the dynamic stochastic general equilibrium models that can basically be fit to everything and have never actually usefully predicted anything. In fact, the best ones just predict what we already knew from Keynesian models. But without a formal norm for testing the validity of models, it’s been “DSGE or GTFO”. At present, it is considered “nonstandard” (read: “bad”) not to assume that your agents are either a single unitary “representative agent” or a continuum of infinitely-many agents—modeling the actual fact of finitely-many agents is just not done. Yet it’s hard for me to imagine any formal criterion that wouldn’t at least give you some points for correctly including the fact that there is more than one person in the world, but fewer than infinitely many (obviously your model could still be bad in other ways).

I don’t know what these new statistical methods would look like. Maybe it’s as simple as formally justifying some of the norms we already use; maybe it’s as complicated as taking a fundamentally new approach to statistical inference. But we have to start somewhere.

What we lose by aggregating

Jun 25, JDN 2457930

One of the central premises of current neoclassical macroeconomics is the representative agent: Rather than trying to keep track of all the thousands of firms, millions of people, and billions of goods and services in a national economy, we aggregate everything up into a single worker/consumer and a single firm producing and consuming a single commodity.

This sometimes goes under the baffling misnomer of microfoundations, which would seem to suggest that it carries detailed information about the microeconomic behavior underlying it; in fact what this means is that the large-scale behavior is determined by some sort of (perfectly) rational optimization process as if there were just one person running the entire economy optimally.

First of all, let me say that some degree of aggregation is obviously necessary. Literally keeping track of every single transaction by every single person in an entire economy would require absurd amounts of data and calculation. We might have enough computing power to theoretically try this nowadays, but then again we might not—and in any case such a model would very rapidly lose sight of the forest for the trees.

But it is also clearly possible to aggregate too much, and most economists don’t seem to appreciate this. They cite a couple of famous theorems (like the Gorman Aggregation Theorem) involving perfectly-competitive firms and perfectly-rational identical consumers that offer a thin veneer of justification for aggregating everything into one, and then go on with their work as if this meant everything were fine.

What’s wrong with such an approach?

Well, first of all, a representative agent model can’t talk about inequality at all. It’s not even that a representative agent model says inequality is good, or not a problem; it lacks the capacity to even formulate the concept. Trying to talk about income or wealth inequality in a representative agent model would be like trying to decide whether your left hand is richer than your right hand.

It’s also nearly impossible to talk about poverty in a representative agent model; the best you can do is talk about a country’s overall level of development, and assume (not without reason) that a country with a per-capita GDP of $1,000 probably has a lot more poverty than a country with a per-capita GDP of $50,000. But two countries with the same per-capita GDP can have very different poverty rates—and indeed, the cynic in me wonders if the reason we’re reluctant to use inequality-adjusted measures of development is precisely that many American economists fear where this might put the US in the rankings. The Human Development Index was a step in the right direction because it includes things other than money (and as a result Saudi Arabia looks much worse and Cuba much better), but it still aggregates and averages everything, so as long as your rich people are doing well enough they can compensate for how badly your poor people are doing.

Nor can you talk about oligopoly in a representative agent model, as there is always only one firm, which for some reason chooses to act as if it were facing competition instead of rationally behaving as a monopoly. (This is not quite as nonsensical as it sounds, as the aggregation actually does kind of work if there truly are so many firms that they are all forced down to zero profit by fierce competition—but then again, what market is actually like that?) There is no market share, no market power; all are at the mercy of the One True Price.

You can still talk about externalities, sort of; but in order to do so you have to set up this weird doublethink phenomenon where the representative consumer keeps polluting their backyard and then can’t figure out why their backyard is so darn polluted. (I suppose humans do seem to behave like that sometimes; but wait, I thought you believed people were rational?) I think this probably confuses many an undergrad, in fact; the models we teach them about externalities generally use this baffling assumption that people consider one set of costs when making their decisions and then bear a different set of costs from the outcome. If you can conceptualize the idea that we’re aggregating across people and thinking “as if” there were a representative agent, you can ultimately make sense of this; but I think a lot of students get really confused by it.

Indeed, what can you talk about with a representative agent model? Economic growth and business cycles. That’s… about it. These are not minor issues, of course; indeed, as Robert Lucas famously said:

The consequences for human welfare involved in questions like these [on economic growth] are simply staggering: once one starts to think about them, it is hard to think about anything else.

I certainly do think that studying economic growth and business cycles should be among the top priorities of macroeconomics. But then, I also think that poverty and inequality should be among the top priorities, and they haven’t been—perhaps because the obsession with representative agent models makes that basically impossible.

I want to be constructive here; I appreciate that aggregating makes things much easier. So what could we do to include some heterogeneity without too much cost in complexity?

Here’s one: How about we have p firms, making q types of goods, sold to n consumers? If you want you can start by setting all these numbers equal to 2; simply going from 1 to 2 has an enormous effect, as it allows you to at least say something about inequality. Getting them as high as 100 or even 1000 still shouldn’t be a problem for computing the model on an ordinary laptop. (There are “econophysicists” who like to use these sorts of agent-based models, but so far very few economists take them seriously. Partly that is justified by their lack of foundational knowledge in economics—the arrogance of physicists taking on a new field is legendary—but partly it is also interdepartmental turf war, as economists don’t like the idea of physicists treading on their sacred ground.) One thing that really baffles me about this is that economists routinely use computers to solve models that can’t be calculated by hand, but it never seems to occur to them that they could have started at the beginning planning to make the model solvable only by computer, and that would spare them from making the sort of heroic assumptions they are accustomed to making—assumptions that only made sense when they were used to make a model solvable that otherwise wouldn’t be.
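To show how little machinery this requires, here is a deliberately crude sketch of such a model; every behavioral rule and number in it is invented purely to demonstrate that the bookkeeping is easy on a laptop, not to claim realism:

    # A deliberately tiny agent-based market: a few firms selling to many
    # consumers with unequal incomes. Rules and numbers invented for illustration.
    import random

    random.seed(0)
    P_FIRMS, N_CONSUMERS, ROUNDS = 3, 100, 50

    prices = [random.uniform(8, 12) for _ in range(P_FIRMS)]
    incomes = [random.paretovariate(2.0) * 20 for _ in range(N_CONSUMERS)]  # unequal incomes
    profits = [0.0] * P_FIRMS

    for _ in range(ROUNDS):
        sales = [0.0] * P_FIRMS
        for income in incomes:
            budget = 0.9 * income                        # rule of thumb: spend 90% of income
            cheapest = min(range(P_FIRMS), key=lambda j: prices[j])
            quantity = budget / prices[cheapest]
            sales[cheapest] += quantity
            profits[cheapest] += (prices[cheapest] - 5.0) * quantity   # unit cost of 5
        # Crude price adjustment: cut if you sold nothing, nudge up otherwise.
        for j in range(P_FIRMS):
            prices[j] *= 0.95 if sales[j] == 0 else 1.01

    print("final prices:", [round(p, 2) for p in prices])
    print("firm profits:", [round(pi, 1) for pi in profits])
    print("90th/10th percentile income ratio:",
          round(sorted(incomes)[-10] / sorted(incomes)[9], 1))

Nothing in that loop strains a laptop even with thousands of agents; the hard part is choosing sensible behavioral rules, not computing them.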

You could also assign a probability distribution over incomes; that can get messy quickly, but we actually are fortunate that the constant relative risk aversion utility function and the Pareto distribution over incomes seem to fit the data quite well—as the product of those two things is integrable by hand. As long as you can model how your policy affects this distribution without making that integral impossible (which is surprisingly tricky), you can aggregate over utility instead of over income, which is a lot more reasonable as a measure of welfare.
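For concreteness, this is the integral I mean, with CRRA utility (coefficient gamma) and a Pareto income distribution (minimum income x_m, tail index alpha); the closed form requires alpha + gamma > 1:

    % Expected CRRA utility under a Pareto income distribution.
    % Utility: u(x) = x^{1-\gamma}/(1-\gamma);
    % Pareto density: f(x) = \alpha x_m^{\alpha} x^{-\alpha-1}, for x \ge x_m.
    \[
      \mathbb{E}[u(x)]
      = \int_{x_m}^{\infty} \frac{x^{1-\gamma}}{1-\gamma}\,\alpha x_m^{\alpha} x^{-\alpha-1}\,dx
      = \frac{\alpha\, x_m^{\,1-\gamma}}{(1-\gamma)(\alpha+\gamma-1)},
      \qquad \alpha + \gamma > 1 .
    \]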

And really I’m only scratching the surface here. There are a vast array of possible new approaches that would allow us to extend macroeconomic models to cover heterogeneity; the real problem is an apparent lack of will in the community to make such an attempt. Most economists still seem very happy with representative agent models, and reluctant to consider anything else—often arguing, in fact, that anything else would make the model less microfounded when plainly the opposite is the case.

 

Who are you? What is this new blog? Why “Infinite Identical Psychopaths”?

My name is Patrick Julius. I am about halfway through a master’s degree in economics, specializing in the new subfield of cognitive economics (closely related to the also quite new fields of cognitive science and behavioral economics). This makes me in one sense heterodox; I disagree adamantly with most things that typical neoclassical economists say. But in another sense, I am actually quite orthodox. All I’m doing is bringing the insights of psychology, sociology, history, and political science—not to mention ethics—to the study of economics. The problem is simply that economists have divorced themselves so far from the rest of social science.

Another way I differ from most critics of mainstream economics (I’m looking at you, Peter Schiff) is that, for lack of a better phrase, I’m good at math. (As Bill Clinton said, “It’s arithmetic!”) I understand things like partial differential equations and subgame perfect equilibria, and therefore I am equipped to criticize them on their own terms. In this blog I will do my best to explain the esoteric mathematical concepts in terms most readers can understand, but it’s not always easy. The important thing to keep in mind is that fancy math can’t make a lie true; no matter how sophisticated its equations, a model that doesn’t fit the real world can’t be correct.

This blog, which I plan to update every Saturday, is about the current state of economics, both as it is and how economists imagine it to be. One of my central points is that these two are quite far apart, which has exacerbated if not caused the majority of economic problems in the world today. (Economists didn’t invent world hunger, but for over a decade now we’ve had the power to end it and haven’t done so. You’d be amazed how cheap it would be; we’re talking about 1% of First World GDP at most.)

The reason I call it “infinite identical psychopaths” is that this is what neoclassical economists appear to believe human beings are, at least if we judge by the models they use. These are the typical assumptions of a neoclassical economic model:

      1. Perfect information: All individuals know everything they need to know about the state of the world and the actions of other individuals.
      2. Rational expectations: Predictions about the future can only be wrong within a normal distribution, and in the long run are on average correct.
      3. Representative agents: All individuals are identical and interchangeable; a single type represents them all.
      4. Perfect competition: There are infinitely many agents in the market, and none of them ever collude with one another.
      5. “Economic rationality”: Individuals act according to a monotonic increasing utility function that is only dependent upon their own present and future consumption of goods.

I put the last one in scare quotes because it is the worst of the bunch. What economists call “rationality” has only a distant relation to actual rationality, either as understood by common usage or by formal philosophical terminology.

Don’t be scared by the terminology; a “utility function” is just a formal model of the things you care about when you make decisions. Things you want have positive utility; things you don’t want have negative utility. Larger numbers reflect stronger feelings: a bar of chocolate has much less positive utility than a decade of happy marriage; a pinched finger has much less negative utility than a year of continual torture. Utility maximization just means that you try to get the things you want and avoid the things you don’t. By talking about expected utility, we make some allowance for an uncertain future—but not much, because we have so-called “rational expectations”.

Since any action taken by an “economically rational” agent maximizes expected utility, it is impossible for such an agent to ever make a mistake in the usual sense. Whatever they do is always the best idea at the time. This is already an extremely strong assumption that doesn’t make a whole lot of sense applied to human beings; who among us can honestly say they’ve never done anything they later regretted?

The worst part, however, is the assumption that an individual’s utility function depends only upon their own consumption. What this means is that the only thing anyone cares about is how much stuff they have; considerations like family, loyalty, justice, honesty, and fairness cannot factor into their decisions. The “monotonic increasing” part means that more stuff is always better; if they already have twelve private jets, they’d still want a thirteenth; and even if children had to starve for it, they’d be just fine with that. They are, in other words, psychopaths. So that’s one word of my title.

I think “identical” is rather self-explanatory; by using representative agent models, neoclassicists effectively assume that there is no variation between human beings whatsoever. They all have the same desires, the same goals, the same capabilities, the same resources. Implicit in this assumption is the notion that there is no such thing as poverty or wealth inequality, not to mention diversity, disability, or even differences in taste. (One wonders why you’d even bother with economics if that were the case.)

As for “infinite”, that comes from the assumptions of perfect information and perfect competition. In order to really have perfect information, one would need a brain with enough storage capacity to contain the state of every particle in the visible universe. Maybe not quite infinite, but pretty darn close. Likewise, in order to have true perfect competition, there must be infinitely many individuals in the economy, all of whom are poised to instantly take any opportunity offered that allows them to make even the tiniest profit.

Now, you might be thinking this is a strawman; surely neoclassicists don’t actually believe that people are infinite identical psychopaths. They just model that way to simplify the mathematics, which is of course necessary because the world is far too vast and interconnected to analyze in its full complexity.

This is certainly true: Suppose it took you one microsecond to consider each possible position on a Go board; how long would it take you to go through them all? More time than we have left before the universe fades into heat death. A Go board has two colors (plus empty) and 361 spaces. Now imagine trying to understand a global economy of 7 billion people by brute-force analysis. Simplifying heuristics are unavoidable.
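The arithmetic, in case you want to check it (ignoring the fact that most of those positions are illegal, which only shaves off a few orders of magnitude):

    # Back-of-the-envelope: a Go board has 361 points, each black, white, or empty,
    # so roughly 3^361 configurations. At one microsecond per position:
    import math

    log10_positions = 361 * math.log10(3)                      # about 172
    log10_seconds = log10_positions - 6                        # 10^6 positions per second
    log10_years = log10_seconds - math.log10(60 * 60 * 24 * 365.25)
    print(f"~10^{log10_positions:.0f} positions, ~10^{log10_years:.0f} years")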

And some neoclassical economists—for example Paul Krugman and Joseph Stiglitz—generally use these heuristics correctly; they understand the limitations of their models and don’t apply them in cases where they don’t belong. In that sort of case, there’s nothing particularly bad about these simplifying assumptions; they are like when a physicist models the trajectory of a spacecraft by assuming frictionless vacuum. Since outer space actually is close to a frictionless vacuum, this works pretty well; and if you need to make minor corrections (like the Pioneer Anomaly) you can.

However, this explanation already seems weird for the “economically rational” assumption (the psychopath part), because that doesn’t really make things much simpler. Why would we exclude the fact that people care about each other, they like to cooperate, they have feelings of loyalty and trust? And don’t tell me it’s because that’s impossible to quantify; behavioral geneticists already have a simple equation (C < r B) designed precisely to quantify altruism. (C is cost, B is benefit, r is relatedness.) I’d make only one slight modification; instead of r for relatedness, use p for psychological closeness, or as I like to call it, solidarity. For humans, solidarity is usually much higher than relatedness, though the two are correlated. C < p B.

Worse, there are other neoclassical economists—those of the most fanatically “free-market” bent—who really don’t seem to do this. I don’t know if they honestly believe that people are infinite identical psychopaths, but they make policy as if they did.

We have people like Stephen Moore saying that unemployment is “like a paid vacation” because obviously anyone who truly wants a job can immediately find one, or people like N. Gregory Mankiw arguing—in a published paper no less!—that the reason Steve Jobs was a billionaire was that he was actually a million times as productive as the rest of us, and therefore it would be inefficient (and, he implies but does not say outright, immoral) to take the fruits of those labors from him. (Honestly, I think I could concede the point and still argue for redistribution, on the grounds that people do not deserve to starve to death simply because they aren’t productive; but that’s the sort of thing never even considered by most neoclassicists, and anyway it’s a topic for another time.)

These kinds of statements would only make sense if markets were really as efficient and competitive as neoclassical models—that is, if people were infinite identical psychopaths. Allow even a single monopoly or just a few bits of imperfect information, and that whole edifice collapses.

And indeed if you’ve ever been unemployed or known someone who was, you know that our labor markets just ain’t that efficient. If you want to cut unemployment payments, you need a better argument than that. Similarly, it’s obvious to anyone who isn’t wearing the blinders of economic ideology that many large corporations exert monopoly power to increase their profits at our expense (How can you not see that Apple is a monopoly!?).

This sort of reasoning is more like plotting the trajectory of an aircraft on the assumption of frictionless vacuum; you’d be baffled as to where the oxidizer comes from, or how the craft manages to lift itself off the ground when the exhaust vents are pointed sideways instead of downward. And then you’d be telling the aerospace engineers to cut off the wings because they’re useless mass.

Worst of all, if we continue this analogy, the engineers would listen to you—they’d actually be convinced by your differential equations and cut off the wings just as you requested. Then the plane would never fly, and they’d ask if they could put the wings back on—but you’d adamantly insist that it was just coincidence, you just happened to be hit by a random problem at the very same moment as you cut off the wings, and putting them back on will do nothing and only make things worse.

No, seriously; so-called “Real Business Cycle” theory, while thoroughly obfuscated in esoteric mathematics, ultimately boils down to the assertion that financial crises have nothing to do with recessions, which are actually caused by random shocks to the real economy—the actual production of goods and services. The fact that a financial crisis always seems to happen just beforehand is, apparently, sheer coincidence, or at best some kind of forward-thinking response investors make as they see the storm coming. I want you to think for a minute about the idea that the kind of people who make computer programs that accidentally collapse the Dow, who made Bitcoin the first example in history of hyperdeflation, and who bought up Tweeter thinking it was Twitter are forward-thinking predictors of future events in real production.

And yet, it is on this sort of basis that our policy is made.

Can otherwise intelligent people really believe that these insane models are true? I’m not sure.
Sadly I think they may really believe that all people are psychopaths—because they themselves may be psychopaths. Economics students score higher on various psychopathic traits than other students. Part of this is self-selection—psychopaths are more likely to study economics—but the terrifying part is that part of it isn’t—studying economics may actually make you more like a sociopath. As I study for my master’s degree, I actually am somewhat afraid of being corrupted by this; I make sure to periodically disengage from their ideology and interact with normal people with normal human beliefs to recalibrate my moral compass.

Of course, it’s still pretty hard to imagine that anyone could honestly believe that the world economy is in a state of perfect information. But if they can’t really believe this insane assumption, why do they keep using models based on it?

The more charitable possibility is that they don’t appreciate just how sensitive the models are to the assumptions. They may think, for instance, that the General Welfare Theorems still basically apply if you relax the assumption of perfect information; maybe it’s not always Pareto-efficient, but it probably is most of the time, right? Or at least close? Actually, no. The Myerson-Satterthwaite Theorem says that once you give up perfect information, the whole theorem collapses; even a small amount of asymmetric information is enough to make it so that a Pareto-efficient outcome is impossible. And as you might expect, the more asymmetric the information is, the further the result deviates from Pareto-efficiency. And since we always have some asymmetric information, it looks like the General Welfare Theorems really aren’t doing much for us. They apply only in a magical fantasy world. (In case you didn’t know, Pareto-efficiency is a state in which it’s impossible to make any person better off without making someone else worse off. The real world is not in a Pareto-efficient state, which means that by smarter policy we could improve some people’s lives without hurting anyone else.)

The more sinister possibility is that they know full well that the models are wrong, they just don’t care. The models are really just excuses for an underlying ideology, the unshakeable belief that rich people are inherently better than poor people and private corporations are inherently better than governments. Hence, it must be bad for the economy to raise the minimum wage and good to cut income taxes, even though the empirical evidence runs exactly the opposite way; it must be good to subsidize big oil companies and bad to subsidize solar power research, even though that makes absolutely no sense.

One should normally be hesitant to attribute to malice what can be explained by stupidity, but the “I trust the models” explanation just doesn’t work for some of the really extreme privatizations that the US has undergone since Reagan.

No neoclassical model says that you should privatize prisons; prisons are a classic example of a public good, which would be underfunded in a competitive market and basically has to be operated or funded by the government.

No neoclassical model would support the idea that the EPA is a terrorist organization (yes, a member of the US Congress said this). In fact, the economic case for environmental regulations is unassailable. (What else are we supposed to do, privatize the air?) The question is not whether to regulate and tax pollution, but how and how much.

No neoclassical model says that you should deregulate finance; in fact, most neoclassical models don’t even include a financial sector (as bizarre and terrifying as that is), and those that do generally assume it is in a state of perfect equilibrium with zero arbitrage. If the financial sector were actually in a state of zero arbitrage, no banks would make a profit at all.

In case you weren’t aware, arbitrage is the practice of making money off of money without actually making any goods or doing any services. Unlike manufacturing (which, oddly enough, almost all neoclassical models are based on—despite the fact that it is now a minority sector in First World GDP), there’s no value added. Under zero arbitrage, the interest rate a bank charges should be almost exactly the same as the interest rate it receives, with just enough gap between to barely cover their operating expenses—which should in turn be minimal, especially in a modern electronic system. If financial markets were at zero arbitrage equilibrium, it would be sensible to speak of a single “real interest rate” in the economy, the one that everyone pays and everyone receives. Of course, those of us who live in the real world know that not only do different people pay radically different rates, most people have multiple outstanding lines of credit, each with a different rate. My savings account is 0.5%, my car loan is 5.5%, and my biggest credit card is 19%. These basically span the entire range of sensible interest rates (frankly 19% may even exceed that; that’s a doubling time of 3.6 years), and I know I’m not the exception but the rule.
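(For the curious, those doubling times come straight from the compound interest formula; the exact figure depends on how often the interest compounds, roughly four years at 19% with annual compounding and about 3.6 years if it compounds continuously. A quick check across the three rates I mentioned:)

    # Doubling times for the three rates mentioned above, under two
    # compounding conventions. This is just the standard formula, nothing fancy.
    import math

    for rate in (0.005, 0.055, 0.19):
        annual = math.log(2) / math.log(1 + rate)   # compounded once a year
        continuous = math.log(2) / rate             # continuous (roughly daily) compounding
        print(f"{rate:.1%}: {annual:.1f} years (annual), {continuous:.1f} years (continuous)")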

So that’s the mess we’re in. Stay tuned; in future weeks I’ll talk about what we can do about it.