What we lose by aggregating

Jun 25, JDN 2457930

One of the central premises of current neoclassical macroeconomics is the representative agent: Rather than trying to keep track of all the thousands of firms, millions of people, and billions of goods and services in a national economy, we aggregate everything up into a single worker/consumer and a single firm producing and consuming a single commodity.

This sometimes goes under the baffling misnomer of microfoundations, which would seem to suggest that it carries detailed information about the microeconomic behavior underlying it; in fact what this means is that the large-scale behavior is determined by some sort of (perfectly) rational optimization process as if there were just one person running the entire economy optimally.

First of all, let me say that some degree of aggregation is obviously necessary. Literally keeping track of every single transaction by every single person in an entire economy would require absurd amounts of data and calculation. We might have enough computing power to theoretically try this nowadays, but then again we might not—and in any case such a model would very rapidly lose sight of the forest for the trees.

But it is also clearly possible to aggregate too much, and most economists don’t seem to appreciate this. They cite a couple of famous theorems (like the Gorman Aggregation Theorem) involving perfectly-competitive firms and perfectly-rational identical consumers that offer a thin veneer of justification for aggregating everything into one, and then go on with their work as if this meant everything were fine.

What’s wrong with such an approach?

Well, first of all, a representative agent model can’t talk about inequality at all. It’s not even that a representative agent model says inequality is good, or not a problem; it lacks the capacity to even formulate the concept. Trying to talk about income or wealth inequality in a representative agent model would be like trying to decide whether your left hand is richer than your right hand.

It’s also nearly impossible to talk about poverty in a representative agent model; the best you can do is talk about a country’s overall level of development, and assume (not without reason) that a country with a per-capita GDP of $1,000 probably has a lot more poverty than a country with a per-capita GDP of $50,000. But two countries with the same per-capita GDP can have very different poverty rates—and indeed, the cynic in me wonders if the reason we’re reluctant to use inequality-adjusted measures of development is precisely that many American economists fear where this might put the US in the rankings. The Human Development Index was a step in the right direction because it includes things other than money (and as a result Saudi Arabia looks much worse and Cuba much better), but it still aggregates and averages everything, so as long as your rich people are doing well enough they can compensate for how badly your poor people are doing.

Nor can you talk about oligopoly in a representative agent model, as there is always only one firm, which for some reason chooses to act as if it were facing competition instead of rationally behaving as a monopoly. (This is not quite as nonsensical as it sounds, as the aggregation actually does kind of work if there truly are so many firms that they are all forced down to zero profit by fierce competition—but then again, what market is actually like that?) There is no market share, no market power; all are at the mercy of the One True Price.

You can still talk about externalities, sort of; but in order to do so you have to set up this weird doublethink phenomenon where the representative consumer keeps polluting their backyard and then can’t figure out why their backyard is so darn polluted. (I suppose humans do seem to behave like that sometimes; but wait, I thought you believed people were rational?) I think this probably confuses many an undergrad, in fact; the models we teach them about externalities generally use this baffling assumption that people consider one set of costs when making their decisions and then bear a different set of costs from the outcome. If you can conceptualize the idea that we’re aggregating across people and thinking “as if” there were a representative agent, you can ultimately make sense of this; but I think a lot of students get really confused by it.

Indeed, what can you talk about with a representative agent model? Economic growth and business cycles. That’s… about it. These are not minor issues, of course; indeed, as Robert Lucas famously said:

The consequences for human welfare involved in questions like these [on economic growth] are simply staggering: once one starts to think about them, it is hard to think about anything else.

I certainly do think that studying economic growth and business cycles should be among the top priorities of macroeconomics. But then, I also think that poverty and inequality should be among the top priorities, and they haven’t been—perhaps because the obsession with representative agent models makes that basically impossible.

I want to be constructive here; I appreciate that aggregating makes things much easier. So what could we do to include some heterogeneity without too much cost in complexity?

Here’s one: How about we have p firms, making q types of goods, sold to n consumers? If you want you can start by setting all these numbers equal to 2; simply going from 1 to 2 has an enormous effect, as it allows you to at least say something about inequality. Getting them as high as 100 or even 1000 still shouldn’t be a problem for computing the model on an ordinary laptop.

(There are “econophysicists” who like to use these sorts of agent-based models, but so far very few economists take them seriously. Partly that is justified by their lack of foundational knowledge in economics—the arrogance of physicists taking on a new field is legendary—but partly it is also interdepartmental turf war, as economists don’t like the idea of physicists treading on their sacred ground.)

One thing that really baffles me about this is that economists routinely use computers to solve models that can’t be calculated by hand, but it never seems to occur to them to design the model from the start to be solved by computer. Doing so would spare them the sort of heroic assumptions they are accustomed to making: assumptions that only ever made sense as a way of rendering an otherwise intractable model solvable by hand.
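As a minimal illustration of what even a handful of heterogeneous agents buys you, here is a sketch in Python. The income numbers are invented purely for illustration: two toy economies share the same mean income, which is all a representative-agent model can see, but only the disaggregated version can measure inequality.

```python
# Minimal sketch: two toy economies with the same average income.
# A representative-agent model sees only the mean; keeping n agents
# also lets us measure inequality (here, the Gini coefficient).

def gini(incomes):
    """Mean absolute difference over all ordered pairs, divided by 2*mean."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

equal = [50, 50, 50, 50]     # everyone earns the same
unequal = [5, 15, 30, 150]   # same mean income of 50

print(sum(equal) / 4, sum(unequal) / 4)  # both 50.0
print(gini(equal))                       # 0.0
print(gini(unequal))                     # 0.5625
```

The representative agent corresponds to collapsing each list to its mean, at which point the second number simply cannot be computed.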

You could also assign a probability distribution over incomes; that can get messy quickly, but we actually are fortunate that the constant relative risk aversion utility function and the Pareto distribution over incomes seem to fit the data quite well—as the product of those two things is integrable by hand. As long as you can model how your policy affects this distribution without making that integral impossible (which is surprisingly tricky), you can aggregate over utility instead of over income, which is a lot more reasonable as a measure of welfare.
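For instance, with CRRA utility u(y) = y^(1-eta)/(1-eta) and incomes distributed Pareto(alpha, y_m), the expected utility has a closed form whenever alpha + eta > 1 (and eta is not 1). Here is a quick sketch checking that closed form against Monte Carlo sampling; the parameter values are illustrative, not calibrated to any data:

```python
import random

# Aggregating CRRA utility over a Pareto income distribution.
# Closed form (for eta != 1 and alpha + eta > 1):
#   E[u(Y)] = alpha * y_m**(1 - eta) / ((1 - eta) * (alpha + eta - 1))

def crra(y, eta):
    return y ** (1 - eta) / (1 - eta)

def expected_crra_closed(alpha, y_m, eta):
    return alpha * y_m ** (1 - eta) / ((1 - eta) * (alpha + eta - 1))

def expected_crra_mc(alpha, y_m, eta, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Inverse-CDF sampling: Y = y_m / U**(1/alpha), U ~ Uniform(0,1)
        y = y_m / rng.random() ** (1 / alpha)
        total += crra(y, eta)
    return total / n

alpha, y_m, eta = 2.0, 1.0, 2.0
print(expected_crra_closed(alpha, y_m, eta))  # exactly -2/3 for these values
print(expected_crra_mc(alpha, y_m, eta))      # Monte Carlo estimate, close to -2/3
```

The point is that the closed form lets you aggregate welfare analytically, while the Monte Carlo version is what you fall back on once a policy change makes the integral intractable.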

And really I’m only scratching the surface here. There are a vast array of possible new approaches that would allow us to extend macroeconomic models to cover heterogeneity; the real problem is an apparent lack of will in the community to make such an attempt. Most economists still seem very happy with representative agent models, and reluctant to consider anything else—often arguing, in fact, that anything else would make the model less microfounded when plainly the opposite is the case.


Financial fraud is everywhere

Jun 4, JDN 2457909

When most people think of “crime”, they probably imagine petty thieves, pickpockets, drug dealers, street thugs. In short, we think of crime as something poor people do. And certainly, that kind of crime is more visible, and typically easier to investigate and prosecute. It may be more traumatic to be victimized by it (though I’ll get back to that in a moment).

The statistics on this matter are some of the fuzziest I’ve ever come across, so estimates could be off by as much as an order of magnitude. But there is some reason to believe that, within most highly-developed countries, financial fraud may actually be more common than any other type of crime. It is definitely among the most common, and the only serious contenders for exceeding it are other forms of property crime such as petty theft and robbery.

It also appears that financial fraud is the one type of crime that isn’t falling over time. Violent crime and property crime are both at record lows; the average American’s probability of being victimized by a thief or a robber in any given year has fallen from 35% to 11% in the last 25 years. But the rate of financial fraud appears to be roughly constant, and the rate of high-tech fraud in particular is definitely rising. (This isn’t too surprising, given that the technology required is becoming cheaper and more widely available.)

In the UK, the rate of credit card fraud rose during the Great Recession, fell a little during the recovery, and has been holding steady since 2010; it is estimated that about 5% of people in the UK suffer credit card fraud in any given year.

About 1% of US car loans are estimated to contain fraudulent information (such as overestimated income or assets). As there are over $1 trillion in outstanding US car loans, that’s on the order of $10 billion in fraudulent loans, and plausibly about $5 billion in actual fraud losses every year.

Using DOJ data, Statistic Brain found that over 12 million Americans suffer credit card fraud in any given year; based on the UK data, this is probably an underestimate. They also found that higher household income only slightly increased the probability of suffering such fraud.

The Office for Victims of Crime estimates that total US losses due to financial fraud are between $40 billion and $50 billion per year—which is to say, the GDP of Honduras or the military budget of Japan. The National Center for Victims of Crime estimated that over 10% of Americans suffer some form of financial fraud in any given year.

Why is fraud so common? Well, first of all, it’s profitable. Indeed, it appears to be the only type of crime that is. Most drug dealers live near the poverty line. Most bank robberies make off with less than $10,000.

But Bernie Madoff made over $50 billion before he was caught. Of course he was an exceptional case; the median Ponzi scheme only makes off with… $2.1 million. That’s over 200 times the median bank robbery.

Second, I think financial fraud allows the perpetrator a certain psychological distance from their victims. Just as it’s much easier to push a button telling a drone to launch a missile than to stab someone to death, it’s much easier to move some numbers between accounts than to point a gun at someone’s head and demand their wallet. Construal level theory is all about how making something seem psychologically more “distant” can change our attitudes toward it; toward things we perceive as “distant”, we think more abstractly, we accept more risks, and we are more willing to engage in violence to advance a cause. (It also makes us care less about outcomes, which may be a contributing factor in the collective apathy toward climate change.)

Perhaps related to this psychological distance, we also generally have a sense that fraud is not as bad as violent crime. Even judges and juries often act as though white-collar criminals aren’t real criminals. Often the argument seems to be that the behavior involved in committing financial fraud is not so different, after all, from the behavior of for-profit business in general; are we not all out to make an easy buck?

But no, it is not the same. (And if it were, this would be more an indictment of capitalism than it is a justification for fraud. So this sort of argument makes a lot more sense coming from socialists than it does from capitalists.)

One of the central justifications for free markets lies in the assumption that all parties involved are free, autonomous individuals acting under conditions of informed consent. Under those conditions, it is indeed hard to see why we have a right to interfere, as long as no one else is being harmed. Even if I am acting entirely out of my own self-interest, as long as I represent myself honestly, it is hard to see what I could be doing that is morally wrong. But take that away, as fraud does, and the edifice collapses; there is no such thing as a “right to be deceived”. (Indeed, it is quite common for Libertarians to say they allow any activity “except by force or fraud”, never quite seeming to realize that without the force of government we would all be surrounded by unending and unstoppable fraud.)

Indeed, I would like to present to you for consideration the possibility that large-scale financial fraud is worse than most other forms of crime, that someone like Bernie Madoff should be viewed as on a par with a rapist or a murderer. (To its credit, our justice system agrees—Madoff was given the maximum sentence of 150 years in maximum security prison.)

Suppose you were given the following terrible choice: Either you will be physically assaulted and beaten until several bones are broken and you fall unconscious—or you will lose your home and all the money you put into it. If the choice were between death and losing your home, obviously, you’d lose your home. But when it is a question of injury, that decision isn’t so obvious to me. If there is a risk of being permanently disabled in some fashion—particularly mentally disabled, as I find that especially terrifying—then perhaps I accept losing my home. But if it’s just going to hurt a lot and I’ll eventually recover, I think I prefer the beating. (Of course, if you don’t have health insurance, recovering from a concussion and several broken bones might also mean losing your home—so in that case, the dilemma is a no-brainer.) So when someone commits financial fraud on the scale of hundreds of thousands of dollars, we should consider them as having done something morally comparable to beating someone until they have broken bones.

But now let’s scale things up. What if terrorist attacks, or acts of war by a foreign power, had destroyed over one million homes, killed tens of thousands of Americans by one way or another, and cut the wealth of the median American family in half? Would we not count that as one of the greatest acts of violence in our nation’s history? Would we not feel compelled to take some overwhelming response—even be tempted toward acts of brutal vengeance? Yet that is the scale of the damage done by the Great Recession—much, if not all, of it preventable if our regulatory agencies had not been asleep at the wheel, lulled into a false sense of security by the unending refrain of laissez-faire. Most of the harm was done by actions that weren’t illegal, yes; but some of it actually was illegal (20% of direct losses are attributable to fraud), and most of the rest should have been illegal but wasn’t. The repackaging and selling of worthless toxic assets as AAA bonds may not legally have been “fraud”, but morally I don’t see how it was different. With this in mind, the actions of our largest banks are not even comparable to murder—they are comparable to invasion or terrorism. No mere individual shooting here; this is mass murder.

I plan to make this a bit of a continuing series. I hope that by now I’ve at least convinced you that the problem of financial fraud is a large and important one; in later posts I’ll go into more detail about how it is done, who is doing it, and what perhaps can be done to stop them.

Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-highest firm productivity times the worker’s productivity. That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.
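The matching process just described is simple enough to compute directly. Here is a minimal sketch in Python; the $0.50 welfare floor for worker 1 is the assumption made above:

```python
# Assortative matching: firm n (n units of capital) hires worker n
# (n units of human capital); output = capital * human capital.
# Each worker's wage is what the next-best firm could credibly offer,
# (n - 1) * n. Worker 1 has no competing firm, so they get only the
# assumed welfare floor of $0.50.
N = 10
WELFARE_FLOOR = 0.50

wages = {}
profits = {}
for n in range(1, N + 1):
    output = n * n
    wage = (n - 1) * n if n > 1 else WELFARE_FLOOR
    wages[n] = wage
    profits[n] = output - wage

print(wages[10], wages[1])      # 90 and 0.5
print(wages[10] / wages[1])     # 180.0: a 10x skill gap, a 180x income gap
print(profits[10], profits[1])  # 10 and 0.5
```

Note that the loop also confirms the profit pattern discussed below: every firm except firm 1 earns a profit of exactly $n.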

What does inequality look like in this society?

Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the ratio of average CEO pay to average worker pay in the US, by the way.) The worker is 10 times as productive, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect that wages for the same person should only vary slightly if that person were to work in different places. In particular, to have a 2-fold increase in wage for the same worker you’d need more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.

If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
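A quick numerical check of that claim, using the conventional Cobb-Douglas form Y = K^(1/3) L^(2/3) (the exponents are the standard rough values, not estimates of mine). The wage is the marginal product of labor, which works out to (2/3)(K/L)^(1/3):

```python
# Under constant returns to scale with Y = K**(1/3) * L**(2/3),
# the wage (marginal product of labor) is (2/3) * (K/L)**(1/3).
def crs_wage(K, L=1.0):
    return (2 / 3) * (K / L) ** (1 / 3)

print(crs_wage(2.0) / crs_wage(1.0))  # 2**(1/3) ~ 1.26: doubling capital barely moves the wage
print(crs_wage(8.0) / crs_wage(1.0))  # 2.0: it takes 8x the capital to double the wage
```

Contrast this with the increasing-returns production function Y = K*H from the model above, where the worker’s pay scales one-for-one with capital.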

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.
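That arithmetic can be laid out explicitly. These are the rough normalizations from the text (India’s combined productivity set to 1 at $80/month), not real estimates:

```python
# Back-of-the-envelope: under constant returns to scale, the wage
# should be proportional to combined productivity. Normalize India
# to productivity 1 at $80/month and compare predictions to the
# rough actual figures quoted in the text.
productivity = {"India": 1, "China": 3, "US": 10}
actual = {"India": 80, "China": 250, "US": 1160}

for country, p in productivity.items():
    predicted = 80 * p
    print(country, predicted, actual[country], round(actual[country] / predicted, 2))
```

China comes out close to the constant-returns prediction, but the US wage is 1160/800 = 1.45 times what constant returns would imply.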

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien regime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: Anti-vaxxers, 9/11 truthers, climate denialists, they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.

What is the point of democracy?

Apr 9, JDN 2457853

[This topic was chosen by Patreon vote.]

“Democracy” is the sort of word that often becomes just an Applause Light (indeed it was the original example Less Wrong used). Like “freedom” and “liberty” (and for much the same reasons), it’s a good thing, that much we know; but it’s often unclear what is even meant by the word, much less why it should be so important to us.

From another angle, it is strangely common for economists and political scientists to argue that democracy is not all that important; they at least tend to use a precise formal definition of “democracy”, but are oddly quick to dismiss it as pointless or even harmful when it doesn’t line up precisely with their models of an efficient economy or society. I think the best example of this is the so-called “Downs paradox”, where political scientists were so steeped in the tradition of defining all rationality as psychopathic self-interest that they couldn’t even explain why it would occur to anyone to vote. (And indeed, rumor has it that most economists don’t bother to vote, much less campaign politically—which perhaps begins to explain why our economic policy is so terrible.)

Yet especially for Americans in the Trump era, I think it is vital to understand what “democracy” is supposed to mean, and why it is so important.

So, first of all, what is democracy? It is nothing more or less than government by popular vote.

This comes in degrees, of course: The purest direct democracy would have the entire population vote on even the most mundane policies and decisions. You could actually manage something like a monastery or a social club in such a fashion, but this is clearly unworkable on any large scale. Even once you get to hundreds of people, much less thousands or millions, it becomes unviable. The closest example I’ve seen is Switzerland, where there are always numerous popular referenda on ballots that are voted on by entire regions or the entire country—and even then, Switzerland does have representatives that make many of the day-to-day decisions.

So in practice all large-scale democratic systems are some degree of representative democracy, or republic, where some especially important decisions may be made by popular vote, but most policies are made by elected representatives, staff appointed by those representatives, or even career civil servants who are appointed in a nominally apolitical process not so different from private-sector hiring. In the most extreme cases such civil servants can become so powerful that you get a deep state, where career bureaucrats exercise more power than elected officials—at that point I think you have actually lost the right to really call yourself a “democracy” and have become something more like a technocracy.
Yet of course a country can get even more undemocratic than that, and many are, governed by an aristocracy or oligarchy that vests power in a small number of wealthy and powerful individuals, or monarchy or autocracy that gives near-absolute power to a single individual.

Thus, there is a continuum from most to least democratic, with popular vote at one end, followed by elected representatives, followed by appointed civil servants, followed by a handful of oligarchs, and ultimately the most undemocratic system is an autocracy controlled by a single individual.

I also think it’s worth mentioning that constitutional monarchies with strong parliamentary systems, like the United Kingdom and Norway, are also “democracies” in the sense I intend. Yes, technically they have these hereditary monarchs—but in practice, the vast majority of the state’s power is vested in the votes of its people. Indeed, if we separate out parliamentary constitutional monarchy from presidential majoritarian democracy and compare them, the former might actually turn out to be better. Certainly, some of the world’s most prosperous nations are governed that way.

As I’ve already acknowledged, the very far extreme of pure direct democracy is unfeasible. But why would we want to get closer to that end? Why be like Switzerland or Denmark rather than like Turkey or Russia—or for that matter why be like California rather than like Mississippi?
Well, if you know anything about the overall welfare of these states, it almost seems obvious—Switzerland and Denmark are richer, happier, safer, healthier, more peaceful, and overall better in almost every way than Turkey and Russia. The gap between California and Mississippi is not as large, but it is larger than most people realize. Median household income in California is $64,500; in Mississippi it is only $40,593. Both are still well within the normal range of a highly-developed country, but that effectively makes California richer than Luxembourg but Mississippi poorer than South Korea. But perhaps the really stark comparison to make is life expectancy: Life expectancy at birth in California is almost 81 years, while in Mississippi it’s only 75.

Of course, there are a lot of other differences between states besides how much of their governance is done by popular referendum. Simply making Mississippi decide more things by popular vote would not turn it into California—much less would making Turkey more democratic turn it into Switzerland. So we shouldn’t attribute these comparisons entirely to differences in democracy. Indeed, a pair of two-way comparisons is only in the barest sense a statistical argument; we should be looking at dozens if not hundreds of comparisons if we really want to see the effects of democracy. And we should of course be trying to control for other factors, adjust for country fixed-effects, and preferably use natural experiments or instrumental variables to tease out causality.

Yet such studies have in fact been done. Stronger degrees of democracy appear to improve long-run economic growth, as well as reduce corruption, increase free trade, protect peace, and even improve air quality.

Subtler analyses have compared majoritarian versus proportional systems (where proportional seems, to me, at least, more democratic), as well as different republican systems with stronger or weaker checks and balances (stronger is clearly better, though whether that is “more democratic” is at least debatable). The effects of democracy on income distribution are more complicated, probably because there have been some highly undemocratic socialist regimes.

So, the common belief that democracy is good seems to be pretty well supported by the data. But why is democracy good? Is it just a practical matter of happening to get better overall results? Could it one day be overturned by some superior system such as technocracy or a benevolent autocratic AI?

Well, I don’t want to rule out the possibility of improving upon existing systems of government. Clearly new systems of government have in fact emerged over the course of history—Greek “democracy” and Roman “republic” were both really aristocracies, and anything close to universal suffrage didn’t really emerge on a large scale until the 20th century. So the 21st (or 22nd) century could well devise a superior form of government we haven’t yet imagined.
However, I do think there is good reason to believe that any new system of government that actually manages to improve upon democracy will still resemble democracy, because there are three key features democracy has that other systems of government simply can’t match. It is these three features that make democracy so important and so worth fighting for.

1. Everyone’s interests are equally represented.

Perhaps no real system actually manages to represent everyone’s interests equally, but the more democratic a system is, the better it will conform to this ideal. A well-designed voting system can aggregate the interests of an entire population and choose the course of action that creates the greatest overall benefit.

Markets can also be a good system for allocating resources, but while markets represent everyone’s interests, they do so highly unequally. Rich people are quite literally weighted more heavily in the sum.
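To make that weighting concrete, here is a toy sketch in Python comparing the same set of preferences aggregated two ways: one person, one vote, versus dollars as votes. The people, wealth levels, and utility numbers are entirely hypothetical illustrations.

```python
# Four hypothetical people choosing between two options, A and B.
# Each person's "u" entries are how much they'd benefit from each option.
people = [
    {"wealth": 1_000_000, "u": {"A": 10, "B": 2}},  # one rich person prefers A
    {"wealth": 20_000,    "u": {"A": 1,  "B": 8}},  # three poorer people prefer B
    {"wealth": 20_000,    "u": {"A": 1,  "B": 8}},
    {"wealth": 20_000,    "u": {"A": 1,  "B": 8}},
]

def democratic_choice(people):
    """One person, one vote: maximize the unweighted sum of benefits."""
    total = {"A": 0.0, "B": 0.0}
    for p in people:
        for option, u in p["u"].items():
            total[option] += u
    return max(total, key=total.get)

def market_choice(people):
    """Dollar-weighted: each person's benefit counts in proportion to wealth."""
    total = {"A": 0.0, "B": 0.0}
    for p in people:
        for option, u in p["u"].items():
            total[option] += p["wealth"] * u
    return max(total, key=total.get)

print(democratic_choice(people))  # B — the majority's larger benefit wins
print(market_choice(people))      # A — the rich person's dollars win
```

Same people, same preferences; only the weighting rule differs, and the outcome flips.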

Most systems of government do even worse, by completely silencing the voices of the majority of the population. The notion of a “benevolent autocracy” is really a conceit; what makes you think you could possibly keep the autocrat benevolent?

This is also why any form of disenfranchisement is dangerous and a direct attack upon democracy. Even if people are voting irrationally, against their own interests and yours, by silencing their voice you are undermining the most fundamental tenet of democracy itself. All voices must be heard, no exceptions. That is democracy’s fundamental strength.

2. The system is self-correcting.

This may more accurately describe a constitutional republican system with strong checks and balances, but that is what most well-functioning democracies have and it is what I recommend. If you conceive of “more democracy” as meaning that people can vote their way into fascism by electing a sufficiently charismatic totalitarian, then I do not want us to have “more democracy”. But just as contracts and regulations that protect you can make you in real terms more free because you can now safely do things you otherwise couldn’t risk, I consider strong checks and balances that maintain the stability of a republic against charismatic fascists to be in a deeper sense more democratic. This is ultimately semantic; I think I’ve made it clear enough that I want strong checks and balances.

With such checks and balances in place, democracies may move slower than autocracies; they may spend more time in deliberation or even bitter, polarized conflict. But this also means that their policies do not lurch from one emperor’s whim to another, and they are stable against being overtaken by corruption or fascism. Their policies are stable and predictable; their institutions are strong and resilient.

No other system of government yet devised by humans has this kind of stability, which may be why democracies are gradually taking over the world. Charismatic fascism fails when the charismatic leader dies; hereditary monarchy collapses when the great-grandson of the great king is incompetent; even oligarchy and aristocracy, which have at least some staying power, ultimately fall apart when the downtrodden peasants revolt. But democracy abides, for where monarchy and aristocracy are made of families and autocracy and fascism are made of a single man, democracy is made of principles and institutions. Democracy is evolutionarily stable, and thus in Darwinian terms we can predict it will eventually prevail.

3. The coercion that government requires is justified.

All government is inherently coercive. Libertarians are not wrong about this. Taxation is coercive. Regulation is coercive. Law is coercive. (The ones who go on to say that all government is “death threats” or “slavery” are bonkers, mind you. But it is in fact coercive.)

The coercion of government is particularly terrible if that coercion is coming from a system like an autocracy, where the will of the people is minimally if at all represented in the decisions of policymakers. Then that is a coercion imposed from outside, a coercion in the fullest sense, one person who imposes their will upon another.

But when government coercion comes from a democracy, it takes on a fundamentally different meaning. Then it is not they who coerce us—it is we who coerce ourselves. Now, why in the world would you coerce yourself? It seems ridiculous, doesn’t it?

Not if you know any game theory. There are in fact all sorts of reasons why one might want to coerce oneself, and two of them are particularly important for the justification of democratic government.

The first and most important is collective action: There are many situations in which people all working together to accomplish a goal can be beneficial to everyone, but nonetheless any individual person who found a way to shirk their duty and not contribute could benefit even more. Anyone who has done a group project in school with a couple of lazy students in it will know this experience: You end up doing all the work, but they still get a good grade at the end. If everyone had taken the rational, self-interested action of slacking off, everyone in the group would have failed the project.

Now imagine that the group project we’re trying to achieve is, say, defending against an attack by Imperial Japan. We can’t exactly afford to risk that project falling through. So maybe we should actually force people to support it—in the form of taxes, or even perhaps a draft (as ultimately we did in WW2). Then it is no longer rational to try to shirk your duty, so everyone does their duty, the project gets done, and we’re all better off. How do we decide which projects are important enough to justify such coercion? We vote, of course. This is the most fundamental justification of democratic government.
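The group-project story above is the classic collective action problem of game theory, often modeled as a public goods game. Here is a minimal sketch; the number of players, the endowment, and the multiplier are arbitrary illustrative choices, not anything from the post.

```python
# A minimal public goods game: everyone's contributions are multiplied
# and shared equally, but contributing is individually costly.
N = 4            # number of players (hypothetical)
ENDOWMENT = 10   # what each player starts with
MULTIPLIER = 2   # contributions are doubled, then split equally

def payoff(my_contribution, others_contributions):
    """My payoff: what I kept, plus my equal share of the multiplied pot."""
    pot = MULTIPLIER * (my_contribution + sum(others_contributions))
    return (ENDOWMENT - my_contribution) + pot / N

# If everyone contributes everything, each player ends up with:
all_in = payoff(ENDOWMENT, [ENDOWMENT] * (N - 1))  # 20.0

# But a lone shirker, while everyone else contributes, does even better:
shirker = payoff(0, [ENDOWMENT] * (N - 1))         # 25.0

# And if everyone follows the shirker's logic, each player gets only:
all_out = payoff(0, [0] * (N - 1))                 # 10.0

print(all_in, shirker, all_out)
```

Shirking is individually rational whatever the others do, yet universal shirking leaves everyone worse off than universal contribution—which is exactly the gap that enforced taxation (or a draft) closes.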

The second reason that is relevant for government is commitment. There are many circumstances in which we want to accomplish something in the future, and from a long-run perspective it makes sense to achieve that goal—but then when the time comes to take action, we are tempted to procrastinate or change our minds. How can we resolve such a dilemma? Well, one way is to tie our own hands—to coerce ourselves into carrying out the necessary task we are tempted to avoid or delay.

This applies to many types of civil and criminal law, particularly regarding property ownership. Murder is a crime that most people would not commit even if it were completely legal. But shoplifting? I think if most people knew there would be no penalty for petty theft and retail fraud they would be tempted into doing it at least on occasion. I doubt it would be frequent enough to collapse our entire economic system, but it would introduce a lot of inefficiency, and make almost everything more expensive. By having laws in place that punish us for such behavior, we have a way of defusing such temptations, at least for most people most of the time. This is not as important for the basic functioning of government as is collective action, but I think it is still important enough to be worthy of mention.

Of course, there will always be someone who disagrees with any given law, regardless of how sensible and well-founded that law may be. And while in some sense “we all” agreed to pay these taxes, when the IRS actually demands that specific dollar amount from you, it may well be an amount that you would not have chosen if you’d been able to set our entire tax system yourself. But this is a problem of aggregation that I think may be completely intractable; there’s no way to govern by consensus, because human beings just can’t achieve consensus on the scale of millions of people. Governing by popular vote and representation is the best alternative we’ve been able to come up with. If and when someone devises a system of government that solves that problem and represents the public will even better than voting, then we will have a superior alternative to democracy.

Until then, it is as Churchill said: “Democracy is the worst form of government, except for all the others.”

Markets value rich people more

Feb 26, JDN 2457811

Competitive markets are optimal at maximizing utility, as long as you value rich people more.

That is literally a theorem in neoclassical economics. I had previously thought that this was something most economists didn’t realize; I had delusions of grandeur that maybe I could finally convince them that this is the case. But no, it turns out this is actually a well-known finding; it’s just that somehow nobody seems to care. Or if they do care, they never talk about it. For all the thousands of papers and articles about the distortions created by minimum wage and capital gains tax, you’d think someone could spare the time to talk about the vastly larger fundamental distortions created by the structure of the market itself.

It’s not as if this is something completely hopeless we could never deal with. A basic income would go a long way toward correcting this distortion, especially if coupled with highly progressive taxes. By creating a hard floor and a soft ceiling on income, you can reduce the inequality that makes these distortions so large.

The basics of the theorem are quite straightforward, so I think it’s worth explaining them here. It’s extremely general; it applies anywhere that goods are allocated by market prices and different individuals have wildly different amounts of wealth.

Suppose that each person has a certain amount of wealth W to spend. Person 1 has W1, person 2 has W2, and so on. They all have some amount of happiness, defined by a utility function, which I’ll assume is only dependent on wealth; this is a massive oversimplification of course, but it wouldn’t substantially change my conclusions to include other factors—it would just make everything more complicated. (In fact, including altruistic motives would make the whole argument stronger, not weaker.) Thus I can write each person’s utility as a function U(W). The rate of change of this utility as wealth increases, the marginal utility of wealth, is denoted U'(W).

By the law of diminishing marginal utility, the marginal utility of wealth U'(W) is decreasing. That is, the more wealth you have, the less each new dollar is worth to you.

Now suppose people are buying goods. Each good C provides some amount of marginal utility U'(C) to the person who buys it. This can vary across individuals; some people like Pepsi, others Coke. This marginal utility is also decreasing; a house is worth a lot more to you if you are living in the street than if you already have a mansion. Ideally we would want the goods to go to the people who want them the most—but as you’ll see in a moment, markets systematically fail to do this.

If people are making their purchases rationally, each person’s willingness-to-pay P for a given good C will be equal to their marginal utility of that good, divided by their marginal utility of wealth:

P = U'(C)/U'(W)
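As a quick numerical illustration, suppose (as this post assumes later on) that utility of wealth is logarithmic, so U'(W) = 1/W and willingness-to-pay reduces to P = W·U'(C). The wealth figures and marginal utilities below are made up for the example.

```python
# Willingness-to-pay P = U'(C) / U'(W), under the logarithmic assumption
# U(W) = ln(W), which gives U'(W) = 1/W and hence P = W * U'(C).
def willingness_to_pay(marginal_utility_of_good, wealth):
    marginal_utility_of_wealth = 1 / wealth  # U'(W) = 1/W for U = ln(W)
    return marginal_utility_of_good / marginal_utility_of_wealth

# A rich person who barely wants the good...
p_rich = willingness_to_pay(marginal_utility_of_good=0.001, wealth=10_000_000)
# ...versus a poor person who wants it a thousand times more:
p_poor = willingness_to_pay(marginal_utility_of_good=1.0, wealth=5_000)

print(p_rich, p_poor)  # 10000.0 5000.0
```

The rich buyer gets a thousand times less utility from the good, yet still outbids the poor buyer two to one.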

Now consider this from the perspective of society as a whole. If you wanted to maximize utility, you’d equalize marginal utility across individuals (this is the first-order condition for maximizing total utility when allocating a fixed amount of wealth). The idea is that if marginal utility is higher for one person, you should give that person more, because the benefit of what you give them will be larger that way; and if marginal utility is lower for another person, you should give that person less, because the benefit of what you give them will be smaller. When everyone is equal, you are at the maximum.

But market prices don’t actually do this. Instead they equalize over willingness-to-pay. So if you’ve got two individuals 1 and 2, instead of having this:

U'(C1) = U'(C2)

you have this:

P1 = P2

which translates to:

U'(C1)/U'(W1) = U'(C2)/U'(W2)

If the marginal utilities were the same, U'(W1) = U'(W2), we’d be fine; these would give the same results. But that would only happen if W1 = W2, that is, if the two individuals had the same amount of wealth.

Now suppose we were instead maximizing weighted utility, where each person gets a weighting factor A based on how “important” they are or something. If your A is higher, your utility matters more. If we maximized this new weighted utility, we would end up like this:

A1*U'(C1) = A2*U'(C2)

Because person 1’s utility counts for more, their marginal utility also counts for more. This seems very strange; why are we valuing some people more than others? On what grounds?

Yet this is effectively what we’ve already done by using market prices.
Just set:
A = 1/U'(W)

Since marginal utility of wealth is decreasing, 1/U'(W) is higher precisely when W is higher.

How much higher? Well, that depends on the utility function. The two utility functions I find most plausible are logarithmic and harmonic. (Actually I think both apply, one to other-directed spending and the other to self-directed spending.)

If utility is logarithmic:

U = ln(W)

Then marginal utility is inversely proportional:

U'(W) = 1/W

In that case, your value as a human being, as spoken by the One True Market, is precisely equal to your wealth:

A = 1/U'(W) = W

If utility is harmonic, matters are even more severe.

U(W) = 1-1/W

Marginal utility goes as the inverse square of wealth:

U'(W) = 1/W^2

And thus your value, according to the market, is equal to the square of your wealth:

A = 1/U'(W) = W^2
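Both of these weightings can be checked numerically: approximate U'(W) with a small central difference and confirm that 1/U'(W) tracks W under logarithmic utility and W^2 under harmonic utility. A quick sketch:

```python
import math

def weight(U, W):
    """A = 1/U'(W), with U'(W) approximated by a central difference."""
    h = W * 1e-6  # step scaled to W for numerical accuracy
    marginal = (U(W + h) - U(W - h)) / (2 * h)
    return 1 / marginal

log_utility = lambda W: math.log(W)     # U = ln(W)    ->  A = W
harmonic_utility = lambda W: 1 - 1 / W  # U = 1 - 1/W  ->  A = W^2

for W in [10.0, 100.0, 1000.0]:
    # Prints approximately W and W^2 respectively
    print(weight(log_utility, W), weight(harmonic_utility, W))
```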

What are we really saying here? Hopefully no one actually believes that Bill Gates is really morally worth 400 trillion times as much as a starving child in Malawi, as the calculation from harmonic utility would imply. (Bill Gates himself certainly doesn’t!) Even the logarithmic utility estimate saying that he’s worth 20 million times as much is pretty hard to believe.

But implicitly, the market “believes” that, because when it decides how to allocate resources, something that is worth 1 microQALY to Bill Gates (about the value of a nickel dropped on the floor to you or me) but worth 20 QALY (twenty years of life!) to the Malawian child, will in either case be priced at $8,000, and since the child doesn’t have $8,000, it will probably go to Mr. Gates. Perhaps a middle-class American could purchase it, provided it was worth some 0.3 QALY to them.

Now consider that this is happening in every transaction, for every good, in every market. Goods are not being sold to the people who get the most value out of them; they are being sold to the people who have the most money.

And suddenly, the entire edifice of “market efficiency” comes crashing down like a house of cards. A global market that quite efficiently maximizes willingness-to-pay is so thoroughly out of whack when it comes to actually maximizing utility that massive redistribution of wealth could enormously increase human welfare, even if it turned out to cut our total output in half—if utility is harmonic, even if it cut our total output to one-tenth its current value.

The only way to escape this is to argue that marginal utility of wealth is not decreasing, or at least decreasing very, very slowly. Suppose for instance that utility goes as the 0.9 power of wealth:

U(W) = W^0.9

Then marginal utility goes as the -0.1 power of wealth:

U'(W) = 0.9 W^(-0.1)

On this scale, Bill Gates is only worth about 5 times as much as the Malawian child, which in his particular case might actually be too small—if a trolley is about to kill either Bill Gates or 5 Malawian children, I think I save Bill Gates, because he’ll go on to save many more than 5 Malawian children. (Of course, substitute Donald Trump or Charles Koch and I’d let the trolley run over him without a second thought if even a single child is at stake, so it’s not actually a function of wealth.) In any case, a 5 to 1 range across the whole range of human wealth is really not that big a deal. It would introduce some distortions, but not enough to justify any redistribution that would meaningfully reduce overall output.

Of course, that commits you to saying that $1 to a Malawian child is only worth about $1.50 to you or me and $5 to Bill Gates. If you can truly believe this, then perhaps you can sleep at night accepting the outcomes of neoclassical economics. But can you, really, believe that? If you had the choice between an intervention that would give $100 to each of 10,000 children in Malawi, and another that would give $50,000 to each of 100 billionaires, would you really choose the billionaires? Do you really think that the world would be better off if you did?

We don’t have precise measurements of marginal utility of wealth, unfortunately. At the moment, I think logarithmic utility is the safest assumption; it’s about the slowest decrease that is consistent with the data we have and it is very intuitive and mathematically tractable. Perhaps I’m wrong and the decrease is even slower than that, say W^(-0.5) (then the market only values billionaires as worth thousands of times as much as starving children). But there’s no way you can go as far as it would take to justify our current distribution of wealth. W^(-0.1) is simply not a plausible value.
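One way to see how much rides on that exponent: if U'(W) goes as W^(-r), the market’s implied weight is A = 1/U'(W), proportional to W^r, so two people whose wealth differs by a factor K carry weights differing by K^r. Taking K = 20 million as the Gates-to-child wealth ratio (the figure implied by the logarithmic-utility comparison above) reproduces each of the ratios quoted in this post:

```python
# Sweep the marginal-utility exponent r, where U'(W) ~ W^(-r), so the
# implied weight ratio between two people with wealth ratio K is K^r.
K = 20_000_000  # assumed wealth ratio between Bill Gates and the child

for r, label in [(2.0, "harmonic: U'(W) = 1/W^2"),
                 (1.0, "logarithmic: U'(W) = 1/W"),
                 (0.5, "U'(W) = W^(-0.5)"),
                 (0.1, "U'(W) = W^(-0.1)")]:
    print(f"{label:26s} weight ratio = {K ** r:,.0f}")
# harmonic -> 400 trillion; logarithmic -> 20 million;
# r = 0.5 -> thousands; r = 0.1 -> about 5
```

Those are exactly the four figures in the argument: 400 trillion, 20 million, “thousands”, and “about 5 times”.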

And this means that free markets, left to their own devices, will systematically fail to maximize human welfare. We need redistribution—a lot of redistribution. Don’t take my word for it; the math says so.

What good are macroeconomic models? How could they be better?

Dec 11, JDN 2457734

One thing that I don’t think most people know, but which is immediately obvious to any student of economics at the college level or above, is that there is a veritable cornucopia of different macroeconomic models. There are growth models (the Solow model, the Harrod-Domar model, the Ramsey model), monetary policy models (IS-LM, aggregate demand-aggregate supply), trade models (the Mundell-Fleming model, the Heckscher-Ohlin model), large-scale computational models (dynamic stochastic general equilibrium, agent-based computational economics), and I could go on.

This immediately raises the question: What are all these models for? What good are they?

A cynical view might be that they aren’t useful at all, that this is all false mathematical precision which makes economics persuasive without making it accurate or useful. And with such a proliferation of models and contradictory conclusions, I can see why such a view would be tempting.

But many of these models are useful, at least in certain circumstances. They aren’t completely arbitrary. Indeed, one of the litmus tests of the last decade has been how well the models held up against the events of the Great Recession and following Second Depression. The Keynesian and cognitive/behavioral models did rather well, albeit with significant gaps and flaws. The Monetarist, Real Business Cycle, and most other neoclassical models failed miserably, as did Austrian and Marxist notions so fluid and ill-defined that I’m not sure they deserve to even be called “models”. So there is at least some empirical basis for deciding what assumptions we should be willing to use in our models. Yet even if we restrict ourselves to Keynesian and cognitive/behavioral models, there are still a great many to choose from, which often yield inconsistent results.

So let’s compare with a science that is uncontroversially successful: Physics. How do mathematical models in physics compare with mathematical models in economics?

Well, there are still a lot of models, first of all. There’s the Bohr model, the Schrodinger equation, the Dirac equation, Newtonian mechanics, Lagrangian mechanics, Bohmian mechanics, Maxwell’s equations, Faraday’s law, Coulomb’s law, the Einstein field equations, the Minkowski metric, the Schwarzschild metric, the Rindler metric, Feynman-Wheeler theory, the Navier-Stokes equations, and so on. So a cornucopia of models is not inherently a bad thing.

Yet, there is something about physics models that makes them more reliable than economics models.

Partly it is that the systems physicists study are literally two dozen orders of magnitude or more smaller and simpler than the systems economists study. Their task is inherently easier than ours.

But it’s not just that; their models aren’t just simpler—actually they often aren’t. The Navier-Stokes equations are a lot more complicated than the Solow model. They’re also clearly a lot more accurate.

The feature that models in physics seem to have that models in economics do not is something we might call nesting, or maybe consistency. Models in physics don’t come out of nowhere; you can’t just make up your own new model based on whatever assumptions you like and then start using it—which you very much can do in economics. Models in physics are required to fit consistently with one another, and usually inside one another, in the following sense:

The Dirac equation strictly generalizes the Schrodinger equation, which strictly generalizes the Bohr model. Bohmian mechanics is consistent with quantum mechanics, which strictly generalizes Lagrangian mechanics, which generalizes Newtonian mechanics. The Einstein field equations are consistent with Maxwell’s equations and strictly generalize the Minkowski, Schwarzschild, and Rindler metrics. Maxwell’s equations strictly generalize Faraday’s law and Coulomb’s law.
In other words, there are a small number of canonical models—the Dirac equation, Maxwell’s equations and the Einstein field equations, essentially—inside which all other models are nested. The simpler models like Coulomb’s law and Newtonian mechanics are not contradictory with these canonical models; they are contained within them, subject to certain constraints (such as macroscopic systems far below the speed of light).

This is something I wish more people understood (I blame Kuhn for confusing everyone about what paradigm shifts really entail): Einstein did not overturn Newton’s laws; he extended them to domains where they previously had failed to apply.

This is why it is sensible to say that certain theories in physics are true; they are the canonical models that underlie all known phenomena. Other models can be useful, but not because we are relativists about truth or anything like that; Newtonian physics is a very good approximation of the Einstein field equations at the scale of many phenomena we care about, and is also much more mathematically tractable. If we ever find ourselves in situations where Newton’s equations no longer apply—near a black hole, traveling near the speed of light—then we know we can fall back on the more complex canonical model; but when the simpler model works, there’s no reason not to use it.

There are still very serious gaps in the knowledge of physics; in particular, there is a fundamental gulf between quantum mechanics and the Einstein field equations that has been unresolved for decades. A solution to this “quantum gravity problem” would be essentially a guaranteed Nobel Prize. So even a canonical model can be flawed, and can be extended or improved upon; the result is then a new canonical model which we now regard as our best approximation to truth.

Yet the contrast with economics is still quite clear. We don’t have one or two or even ten canonical models to refer back to. We can’t say that the Solow model is an approximation of some greater canonical model that works for these purposes—because we don’t have that greater canonical model. We can’t say that agent-based computational economics is approximately right, because we have nothing to approximate it to.

I went into economics thinking that neoclassical economics needed a new paradigm. I have now realized something much more alarming: Neoclassical economics doesn’t really have a paradigm. Or if it does, it’s a very informal paradigm, one that is expressed by the arbitrary judgments of journal editors, not one that can be written down as a series of equations. We assume perfect rationality, except when we don’t. We assume constant returns to scale, except when that doesn’t work. We assume perfect competition, except when that doesn’t get the results we wanted. The agents in our models are infinite identical psychopaths, and they are exactly as rational as needed for the conclusion we want.

This is quite likely why there is so much disagreement within economics. When you can permute the parameters however you like with no regard to a canonical model, you can more or less draw whatever conclusion you want, especially if you aren’t tightly bound to empirical evidence. I know a great many economists who are sure that raising the minimum wage results in large disemployment effects, because the models they believe in say that it must, even though the empirical evidence has been quite clear that these effects are small if they are present at all. If we had a canonical model of employment that we could calibrate to the empirical evidence, that couldn’t happen anymore; there would be a coefficient I could point to that would refute their argument. But when every new paper comes with a new model, there’s no way to do that; one set of assumptions is as good as another.
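To illustrate what “a coefficient I could point to” might look like, here is a minimal sketch of estimating a disemployment elasticity by ordinary least squares. The numbers are entirely made up for illustration; in a real calibration they would come from actual panel data on minimum-wage changes and employment.

```python
# Hypothetical data: log changes in the minimum wage vs. log changes
# in employment (made-up numbers purely for illustration, not real data).
dlog_minwage = [0.05, 0.10, 0.02, 0.08, 0.12, 0.03]
dlog_employment = [-0.002, -0.006, 0.001, -0.003, -0.008, 0.000]

n = len(dlog_minwage)
mean_x = sum(dlog_minwage) / n
mean_y = sum(dlog_employment) / n

# OLS slope: the elasticity of employment with respect to the minimum wage.
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(dlog_minwage, dlog_employment))
var_x = sum((x - mean_x) ** 2 for x in dlog_minwage)
elasticity = cov_xy / var_x

print(f"estimated disemployment elasticity: {elasticity:.3f}")
```

With these invented numbers the fitted elasticity comes out small and negative, which is the shape of the empirical literature’s typical finding; the point is only that a single calibrated coefficient like this, anchored to a shared canonical model, would be the thing one could point to in a dispute.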

Indeed, as I mentioned in an earlier post, a remarkable number of economists seem to embrace this relativism. “There is no true model,” they say; “We do what is useful.” Recently I encountered a book by the eminent economist Deirdre McCloskey which, though I confess I haven’t read it in its entirety, appears to be trying to argue that economics is just a meaningless language game that doesn’t have or need to have any connection with actual reality. (If any of you have read it and think I’m misunderstanding it, please explain. As it is I haven’t bought it for a reason any economist should respect: I am disinclined to incentivize such writing.)

Creating such a canonical model would no doubt be extremely difficult. Indeed, it is a task that would require the combined efforts of hundreds of researchers and could take generations to achieve. The true equations that underlie the economy could be totally intractable even for our best computers. But quantum mechanics wasn’t built in a day, either. The key challenge here lies in convincing economists that this is something worth doing—that if we really want to be taken seriously as scientists we need to start acting like them. Scientists believe in truth, and they are trying to find it out. While not immune to tribalism or ideology or other human limitations, they resist them as fiercely as possible, always turning back to the evidence above all else. And in their combined strivings, they attempt to build a grand edifice, a universal theory to stand the test of time—a canonical model.