What we lose by aggregating

Jun 25, JDN 2457930

One of the central premises of current neoclassical macroeconomics is the representative agent: Rather than trying to keep track of all the thousands of firms, millions of people, and billions of goods and services in a national economy, we aggregate everything up into a single worker/consumer and a single firm producing and consuming a single commodity.

This sometimes goes under the baffling misnomer of microfoundations, which would seem to suggest that it carries detailed information about the microeconomic behavior underlying it; in fact what this means is that the large-scale behavior is determined by some sort of (perfectly) rational optimization process as if there were just one person running the entire economy optimally.

First of all, let me say that some degree of aggregation is obviously necessary. Literally keeping track of every single transaction by every single person in an entire economy would require absurd amounts of data and calculation. We might have enough computing power to theoretically try this nowadays, but then again we might not—and in any case such a model would very rapidly lose sight of the forest for the trees.

But it is also clearly possible to aggregate too much, and most economists don’t seem to appreciate this. They cite a couple of famous theorems (like the Gorman Aggregation Theorem) involving perfectly-competitive firms and perfectly-rational identical consumers that offer a thin veneer of justification for aggregating everything into one, and then go on with their work as if this meant everything were fine.

What’s wrong with such an approach?

Well, first of all, a representative agent model can’t talk about inequality at all. It’s not even that a representative agent model says inequality is good, or not a problem; it lacks the capacity to even formulate the concept. Trying to talk about income or wealth inequality in a representative agent model would be like trying to decide whether your left hand is richer than your right hand.

It’s also nearly impossible to talk about poverty in a representative agent model; the best you can do is talk about a country’s overall level of development, and assume (not without reason) that a country with a per-capita GDP of $1,000 probably has a lot more poverty than a country with a per-capita GDP of $50,000. But two countries with the same per-capita GDP can have very different poverty rates—and indeed, the cynic in me wonders if the reason we’re reluctant to use inequality-adjusted measures of development is precisely that many American economists fear where this might put the US in the rankings. The Human Development Index was a step in the right direction because it includes things other than money (and as a result Saudi Arabia looks much worse and Cuba much better), but it still aggregates and averages everything, so as long as your rich people are doing well enough they can compensate for how badly your poor people are doing.

Nor can you talk about oligopoly in a representative agent model, as there is always only one firm, which for some reason chooses to act as if it were facing competition instead of rationally behaving as a monopoly. (This is not quite as nonsensical as it sounds, as the aggregation actually does kind of work if there truly are so many firms that they are all forced down to zero profit by fierce competition—but then again, what market is actually like that?) There is no market share, no market power; all are at the mercy of the One True Price.

You can still talk about externalities, sort of; but in order to do so you have to set up this weird doublethink phenomenon where the representative consumer keeps polluting their backyard and then can’t figure out why their backyard is so darn polluted. (I suppose humans do seem to behave like that sometimes; but wait, I thought you believed people were rational?) I think this probably confuses many an undergrad, in fact; the models we teach them about externalities generally use this baffling assumption that people consider one set of costs when making their decisions and then bear a different set of costs from the outcome. If you can conceptualize the idea that we’re aggregating across people and thinking “as if” there were a representative agent, you can ultimately make sense of this; but I think a lot of students get really confused by it.

Indeed, what can you talk about with a representative agent model? Economic growth and business cycles. That’s… about it. These are not minor issues, of course; indeed, as Robert Lucas famously said:

The consequences for human welfare involved in questions like these [on economic growth] are simply staggering: once one starts to think about them, it is hard to think about anything else.

I certainly do think that studying economic growth and business cycles should be among the top priorities of macroeconomics. But then, I also think that poverty and inequality should be among the top priorities, and they haven’t been—perhaps because the obsession with representative agent models makes that basically impossible.

I want to be constructive here; I appreciate that aggregating makes things much easier. So what could we do to include some heterogeneity without too much cost in complexity?

Here’s one: How about we have p firms, making q types of goods, sold to n consumers? If you want you can start by setting all these numbers equal to 2; simply going from 1 to 2 has an enormous effect, as it allows you to at least say something about inequality. Getting them as high as 100 or even 1000 still shouldn’t be a problem for computing the model on an ordinary laptop. (There are “econophysicists” who like to use these sorts of agent-based models, but so far very few economists take them seriously. Partly that is justified by their lack of foundational knowledge in economics—the arrogance of physicists taking on a new field is legendary—but partly it is also interdepartmental turf war, as economists don’t like the idea of physicists treading on their sacred ground.) One thing that really baffles me about this is that economists routinely use computers to solve models that can’t be calculated by hand, but it never seems to occur to them that they could have planned from the start to make the model solvable only by computer, sparing themselves the heroic assumptions they are accustomed to making—assumptions that only ever made sense as a way of rendering an otherwise-intractable model solvable by hand.
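To make this concrete, here is a minimal sketch in Python (assuming NumPy; the incomes and the Pareto shape parameter are arbitrary illustrative choices) of the very first step: once you have even two agents instead of one, an inequality measure like the Gini coefficient becomes a meaningful quantity, and scaling up to a thousand agents is trivial on a laptop.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient, computed from the mean absolute difference."""
    x = np.asarray(incomes, dtype=float)
    n = len(x)
    # Sum of |x_i - x_j| over all ordered pairs, normalized by 2 * n^2 * mean
    diff_sum = np.abs(x[:, None] - x[None, :]).sum()
    return diff_sum / (2 * n * n * x.mean())

# One representative agent: the Gini is 0 by construction;
# the concept of inequality has no content at all.
print(gini([50_000]))               # 0.0
# Two agents: inequality is suddenly a real, measurable quantity.
print(gini([20_000, 80_000]))       # 0.3
# A thousand agents with Pareto-distributed incomes: still instantaneous.
rng = np.random.default_rng(42)
print(gini(rng.pareto(2.0, size=1_000) * 10_000))
```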

You could also assign a probability distribution over incomes; that can get messy quickly, but we actually are fortunate that the constant relative risk aversion utility function and the Pareto distribution over incomes seem to fit the data quite well—as the product of those two things is integrable by hand. As long as you can model how your policy affects this distribution without making that integral impossible (which is surprisingly tricky), you can aggregate over utility instead of over income, which is a lot more reasonable as a measure of welfare.
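For the record, here is that integral (assuming CRRA utility with relative risk aversion ρ ≠ 1 and a Pareto income distribution with shape α and minimum income y_m; the integral converges whenever α + ρ > 1):

```latex
E[u(y)] = \int_{y_m}^{\infty} \frac{y^{1-\rho}}{1-\rho}
  \cdot \frac{\alpha\, y_m^{\alpha}}{y^{\alpha+1}}\, dy
= \frac{\alpha\, y_m^{\alpha}}{1-\rho} \int_{y_m}^{\infty} y^{-\alpha-\rho}\, dy
= \frac{\alpha}{\alpha+\rho-1} \cdot \frac{y_m^{1-\rho}}{1-\rho}.
```

So aggregate expected utility has a clean closed form, and a policy’s welfare effect reduces to how it moves α and y_m.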

And really I’m only scratching the surface here. There are a vast array of possible new approaches that would allow us to extend macroeconomic models to cover heterogeneity; the real problem is an apparent lack of will in the community to make such an attempt. Most economists still seem very happy with representative agent models, and reluctant to consider anything else—often arguing, in fact, that anything else would make the model less microfounded when plainly the opposite is the case.


What you can do to protect against credit card fraud

JDN 2457923

This is the second post in my ongoing series on financial fraud, but it’s also some useful personal financial advice. One of the most common forms of fraud is credit card fraud; I have experienced it myself, and most Americans will experience it at some point in their lives. The US leads the world in credit card fraud, accounting for 47% of all money stolen by this means. In most countries credit card fraud is declining, but not here.

The good news is that there are several things you can do to reduce both the probability of being victimized and the harm you will suffer if you are. I am of course not the first to make such recommendations; similar lists have been made by the Wall Street Journal, Consumer Reports, and even the FTC itself.

1. The first and simplest is to use fewer credit cards.

It is a good idea to have at least one credit card, because you can build a credit history this way which will help you get larger loans such as car loans and home loans later. The best thing to do is to use it for regular purchases and then pay it off as quickly as you can. The higher the interest rate, the more imperative it is to pay it quickly.
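To see just how imperative, here is a quick back-of-the-envelope calculation (a simplification that assumes monthly compounding, no payments, and no further purchases; the APRs are illustrative round numbers):

```python
def balance_after(principal, apr, months):
    """Balance after carrying a debt at the given APR, compounded monthly."""
    return principal * (1 + apr / 12) ** months

# Carrying $1,000 for a year at a typical ~20% credit card APR:
print(round(balance_after(1_000, 0.20, 12), 2))   # 1219.39
# The same $1,000 at a 5% rate more typical of car loans:
print(round(balance_after(1_000, 0.05, 12), 2))   # 1051.16
```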

More credit cards means that you have more to keep track of, and more that can be stolen; it also generally means that you have larger total credit limits, which is a mixed blessing at best. You have more liquidity that way, to buy things you need; but you also have more temptation to buy things you don’t actually need, and more risk of losing a great deal should any of your cards be stolen.

2. Buy fewer things online, and always from reputable merchants.

This is one I certainly preach more than I practice; I probably buy as much online now as I do in person. It’s hard to beat the combination of higher convenience, wider selection, and lower prices. But buying online is the most likely way to have your credit card stolen (and it is certainly how mine was stolen a few years ago).

The US is unusual among developed countries because we still mainly use magnetic-stripe cards, whereas most countries have switched to the EMV system of chip-based cards that provide more security. But this security measure is really quite overrated; it can’t protect against “card not present” fraud, which is by far the most common. Unless and until you can somehow link up the encrypted chips to your laptop in order to use them to pay online, the chips will do little to protect against fraud.

3. Monitor your bank and credit card statements regularly.

This is something you should be doing anyway. Online statements are available from just about every major bank and credit union, and you can check them at any time, any day. Watching these online statements will help you keep track of your spending, manage your budget, and, yes, protect against fraud, because the sooner you see and report a suspicious transaction the more likely you are to recover the money.

4. Use secure passwords, don’t re-use passwords, and use a secure password manager.

Most people still use remarkably insecure passwords for their online accounts. Hacking your online accounts—especially your online retail accounts, like Amazon—typically means being able to steal your credit cards. As we move into the cyberpunk future, personal security will increasingly be coextensive with online security, and until we find something better, that means good passwords.

Passwords should be long, complicated, and not easily tied to anything about you. To remember them, I highly recommend the following technique: Write a sentence of several words, and then convert the words of that sentence into letters and numbers. For example (obviously don’t use this particular example; the whole point is for passwords to be unique), the sentence “Passwords should be long, complicated, and not easily tied to anything about you.” could become the password “Psblcanet2aau”.

Human long-term memory is encoded in something very much like narrative, so you can make a password much more memorable by making it tell a story. (Literally a story if you like: “Once upon a time, in a land far away, there were seven dwarves who lived in a forest.” could form the password “1uatialfatw7dwliaf”.) If you used the whole words, it would be far too long to fit in most password systems; but by condensing it into letters, you keep it memorable while allowing it to fit. The first letters of English words are not quite random—some letters are much more common than others, for example—but as long as the password is long enough this doesn’t make it substantially easier to guess.
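Here is a minimal sketch of that condensation step in Python (the substitution table is my own illustrative choice; extend it however you like, since idiosyncrasy only helps):

```python
import re

# Illustrative substitutions: number-like words become digits.
DIGITS = {"one": "1", "once": "1", "to": "2", "two": "2", "too": "2",
          "three": "3", "for": "4", "four": "4", "seven": "7"}

def condense(sentence):
    """Condense a sentence into a password: first letter of each word,
    with number-like words replaced by digits."""
    words = re.findall(r"[A-Za-z']+", sentence)
    return "".join(DIGITS.get(w.lower(), w[0]) for w in words)

print(condense("Passwords should be long, complicated, and not easily "
               "tied to anything about you."))      # Psblcanet2aau
print(condense("Once upon a time, in a land far away, there were seven "
               "dwarves who lived in a forest."))   # 1uatialfatw7dwliaf
```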

If you have any doubts about the security of your password, do the following: Generate a new password by the same method you used to generate that one, and then try the new password—not the old password—in an entropy checking utility such as https://howsecureismypassword.net/. The utility will tell you approximately how long it would take to guess your password by guessing random characters using current technology. This is really an upper limit—computers will get faster, and by knowing things about you, hackers can improve upon random guessing substantially—but a good password should at least be in the thousands or millions of years, while a very bad password (like the word “password” itself) can literally be in the nanoseconds. (Actually if you play around you can generate passwords that can take far longer, even “12 tredecillion years” and the like, but they are generally too long to actually use.) The reason not to use your actual password is that there is a chance, however remote, that it could be intercepted while you were doing the check. But by checking the method, you can ensure that you are generating passwords in an effective way.
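If you are curious what such a utility is doing under the hood, here is a crude version of the same estimate (the guess rate of ten billion per second is an assumed round number, and this models only blind brute force, not the smarter dictionary and personal-information attacks real crackers use):

```python
import string

def brute_force_years(password, guesses_per_second=1e10):
    """Upper-bound time to exhaust the keyspace implied by the
    character classes present in the password."""
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(c in string.punctuation for c in password):
        charset += len(string.punctuation)
    keyspace = charset ** len(password)
    return keyspace / guesses_per_second / (3600 * 24 * 365)

print(brute_force_years("password"))            # ~7e-7 years: about 20 seconds
print(brute_force_years("Psblcanet2aau"))       # ~6e5 years
print(brute_force_years("1uatialfatw7dwliaf"))  # ~3e10 years
```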

After you’ve generated all these passwords, how do you remember them all? It’s unreasonable to expect you to keep them all in your head. Instead, you can just keep a few of the most important ones in your head, including a master password that you then use for a password manager like LastPass or Keeper. Password managers are frequently rated by sites like PC Mag, CNET, Consumer Affairs, and CSO. Get one that is free and top-rated; there’s no reason to pay when the free ones are just as good, and no excuse for getting any less than the best when the best ones are free.

The idea of a password manager makes some people uncomfortable—aren’t you handing your passwords over to someone else?—so let me explain it a little. You aren’t actually handing over your passwords, first of all; a reputable password manager will actually encrypt your passwords locally, and then only transmit encrypted versions of them to the site that operates the password manager. This means that no one—not the company, not even you—can access those passwords without knowing the master password, so definitely make sure you remember that master password.

In theory, it would be better to just remember different 27-character alphanumeric passwords for each site you use online. This is indisputable. Encryption isn’t perfect, and theoretically someone might be able to recover your passwords even from Keeper or LastPass. But that is astronomically unlikely, and what’s far more likely is that if you don’t use a password manager, you will forget your passwords, or re-use them and get them stolen, or else make them too simple and allow them to be guessed. A password manager allows you to maintain dozens of distinct, very complex passwords, and even update them regularly, all while remembering only one or a few. In practice, this is what provides the best security.

5. Above all, report any suspicious activity immediately.

This one I cannot emphasize enough. If you do nothing else, do this. If you ever have any reason to suspect that your credit card might have been compromised, call your bank immediately. Get them to cancel the card, send you a new one, and check any recent transactions.

Do this if you lose your wallet. Do it if you see something weird on your online statement. Do it if you bought something from an online retailer that seemed a little sketchy. Do it if you just have a weird hunch and something doesn’t feel right. The cost of doing this is a minor inconvenience; the benefit could be thousands of dollars.

If you do report a stolen card, in most cases you won’t be held liable for a penny—the credit card company will have to cover any losses. But if you don’t, you could end up paying interest on a balance that a thief ran up in your name.

If we all do this, credit card fraud could become a thing of the past. Now, about those interest rates…

Financial fraud is everywhere

Jun 4, JDN 2457909

When most people think of “crime”, we probably imagine petty thieves, pickpockets, drug dealers, street thugs. In short, we think of crime as something poor people do. And certainly, that kind of crime is more visible, and typically easier to investigate and prosecute. It may be more traumatic to be victimized by it (though I’ll get back to that in a moment).

The statistics on this matter are some of the fuzziest I’ve ever come across, so estimates could be off by as much as an order of magnitude. But there is some reason to believe that, within most highly-developed countries, financial fraud may actually be more common than any other type of crime. It is definitely among the most common, and the only serious contenders for exceeding it are other forms of property crime such as petty theft and robbery.

It also appears that financial fraud is the one type of crime that isn’t falling over time. Violent crime and property crime are both at record lows; the average American’s probability of being victimized by a thief or a robber in any given year has fallen from 35% to 11% in the last 25 years. But the rate of financial fraud appears to be roughly constant, and the rate of high-tech fraud in particular is definitely rising. (This isn’t too surprising, given that the technology required is becoming cheaper and more widely available.)

In the UK, the rate of credit card fraud rose during the Great Recession, fell a little during the recovery, and has been holding steady since 2010; it is estimated that about 5% of people in the UK suffer credit card fraud in any given year.

About 1% of US car loans are estimated to contain fraudulent information (such as overestimated income or assets). With over $1 trillion in outstanding US car loans, the resulting losses come to about $5 billion every year.

Using DOJ data, Statistic Brain found that over 12 million Americans suffer credit card fraud in any given year; based on the UK data, this is probably an underestimate. They also found that higher household income only slightly increases the probability of suffering such fraud.

The Office for Victims of Crime estimates that total US losses due to financial fraud are between $40 billion and $50 billion per year—which is to say, the GDP of Honduras or the military budget of Japan. The National Center for Victims of Crime estimated that over 10% of Americans suffer some form of financial fraud in any given year.

Why is fraud so common? Well, first of all, it’s profitable. Indeed, it appears to be the only type of crime that is. Most drug dealers live near the poverty line. Most bank robberies make off with less than $10,000.

But Bernie Madoff made over $50 billion before he was caught. Of course he was an exceptional case; the median Ponzi scheme only makes off with… $2.1 million. That’s over 200 times the median bank robbery.

Second, I think financial fraud allows the perpetrator a certain psychological distance from their victims. Just as it’s much easier to push a button telling a drone to launch a missile than to stab someone to death, it’s much easier to move some numbers between accounts than to point a gun at someone’s head and demand their wallet. Construal level theory is all about how making something seem psychologically more “distant” can change our attitudes toward it; toward things we perceive as “distant”, we think more abstractly, we accept more risks, and we are more willing to engage in violence to advance a cause. (It also makes us care less about outcomes, which may be a contributing factor in the collective apathy toward climate change.)

Perhaps related to this psychological distance, we also generally have a sense that fraud is not as bad as violent crime. Even judges and juries often act as though white-collar criminals aren’t real criminals. Often the argument seems to be that the behavior involved in committing financial fraud is not so different, after all, from the behavior of for-profit business in general; are we not all out to make an easy buck?

But no, it is not the same. (And if it were, this would be more an indictment of capitalism than it is a justification for fraud. So this sort of argument makes a lot more sense coming from socialists than it does from capitalists.)

One of the central justifications for free markets lies in the assumption that all parties involved are free, autonomous individuals acting under conditions of informed consent. Under those conditions, it is indeed hard to see why we have a right to interfere, as long as no one else is being harmed. Even if I am acting entirely out of my own self-interest, as long as I represent myself honestly, it is hard to see what I could be doing that is morally wrong. But take that away, as fraud does, and the edifice collapses; there is no such thing as a “right to be deceived”. (Indeed, it is quite common for Libertarians to say they allow any activity “except by force or fraud”, never quite seeming to realize that without the force of government we would all be surrounded by unending and unstoppable fraud.)

Indeed, I would like to present to you for consideration the possibility that large-scale financial fraud is worse than most other forms of crime, that someone like Bernie Madoff should be viewed as on a par with a rapist or a murderer. (To its credit, our justice system agrees—Madoff was given the maximum sentence of 150 years in maximum security prison.)

Suppose you were given the following terrible choice: Either you will be physically assaulted and beaten until several bones are broken and you fall unconscious—or you will lose your home and all the money you put into it. If the choice were between death and losing your home, obviously, you’d lose your home. But when it is a question of injury, that decision isn’t so obvious to me. If there is a risk of being permanently disabled in some fashion—particularly mentally disabled, as I find that especially terrifying—then perhaps I accept losing my home. But if it’s just going to hurt a lot and I’ll eventually recover, I think I prefer the beating. (Of course, if you don’t have health insurance, recovering from a concussion and several broken bones might also mean losing your home—so in that case, the dilemma is a no-brainer.) So when someone commits financial fraud on the scale of hundreds of thousands of dollars, we should consider them as having done something morally comparable to beating someone until they have broken bones.

But now let’s scale things up. What if terrorist attacks, or acts of war by a foreign power, had destroyed over one million homes, killed tens of thousands of Americans by one way or another, and cut the wealth of the median American family in half? Would we not count that as one of the greatest acts of violence in our nation’s history? Would we not feel compelled to take some overwhelming response—even be tempted toward acts of brutal vengeance? Yet that is the scale of the damage done by the Great Recession—much, if not all, preventable if our regulatory agencies had not been asleep at the wheel, lulled into a false sense of security by the unending refrain of laissez-faire. Most of the harm was done by actions that weren’t illegal, yes; but some of it actually was illegal (20% of direct losses are attributable to fraud), and most of the rest should have been illegal but wasn’t. The repackaging and selling of worthless toxic assets as AAA bonds may not legally have been “fraud”, but morally I don’t see how it was different. With this in mind, the actions of our largest banks are not even comparable to murder—they are comparable to invasion or terrorism. No mere individual shooting here; this is mass murder.

I plan to make this a bit of a continuing series. I hope that by now I’ve at least convinced you that the problem of financial fraud is a large and important one; in later posts I’ll go into more detail about how it is done, who is doing it, and what perhaps can be done to stop them.

Why “marginal productivity” is no excuse for inequality

May 28, JDN 2457902

In most neoclassical models, workers are paid according to their marginal productivity—the additional (market) value of goods that a firm is able to produce by hiring that worker. This is often used as an excuse for inequality: If someone can produce more, why shouldn’t they be paid more?

The most extreme example of this is people like Maura Pennington writing for Forbes about how poor people just need to get off their butts and “do something”; but there is a whole literature in mainstream economics, particularly “optimal tax theory”, arguing based on marginal productivity that we should tax the very richest people the least and never tax capital income. The Chamley-Judd Theorem famously “shows” (by making heroic assumptions) that taxing capital just makes everyone worse off because it reduces everyone’s productivity.

The biggest reason this is wrong is that there are many, many reasons why someone would have a higher income without being any more productive. They could inherit wealth from their ancestors and get a return on that wealth; they could have a monopoly or some other form of market power; they could use bribery and corruption to tilt government policy in their favor. Indeed, most of the top 0.01% do literally all of these things.

But even if you assume that pay is related to productivity in competitive markets, the argument is not nearly as strong as it may at first appear. Here I have a simple little model to illustrate this.

Suppose there are 10 firms and 10 workers. Suppose that firm 1 has 1 unit of effective capital (capital adjusted for productivity), firm 2 has 2 units, and so on up to firm 10 which has 10 units. And suppose that worker 1 has 1 unit of so-called “human capital”, representing their overall level of skills and education, worker 2 has 2 units, and so on up to worker 10 with 10 units. Suppose each firm only needs one worker, so this is a matching problem.

Furthermore, suppose that productivity is equal to capital times human capital: That is, if firm 2 hired worker 7, they would make 2*7 = $14 of output.

What will happen in this market if it converges to equilibrium?

Well, first of all, the most productive firm is going to hire the most productive worker—so firm 10 will hire worker 10 and produce $100 of output. What wage will they pay? Well, they need a wage that is high enough to keep worker 10 from trying to go elsewhere. They should therefore pay a wage of $90—the next-highest firm productivity times the worker’s productivity. That’s the highest wage any other firm could credibly offer; so if they pay this wage, worker 10 will not have any reason to leave.

Now the problem has been reduced to matching 9 firms to 9 workers. Firm 9 will hire worker 9, making $81 of output, and paying $72 in wages.

And so on, until worker 1 at firm 1 produces $1 and receives… $0. Because there is no way for worker 1 to threaten to leave, in this model they actually get nothing. If I assume there’s some sort of social welfare system providing say $0.50, then at least worker 1 can get that $0.50 by threatening to leave and go on welfare. (This, by the way, is probably the real reason firms hate social welfare spending; it gives their workers more bargaining power and raises wages.) Or maybe they have to pay that $0.50 just to keep the worker from starving to death.
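For concreteness, here is the whole toy model worked out in a few lines of Python (my own sketch, taking the $0.50 welfare payment as each worker’s outside option):

```python
def matching_equilibrium(n=10, outside_option=0.50):
    """Assortative matching: firm k hires worker k, producing k * k dollars.
    Competition bids each worker's wage up to the best credible outside
    offer, namely the next-best firm's capital times the worker's human
    capital, (k - 1) * k, floored at the welfare payment."""
    for k in range(1, n + 1):
        output = k * k
        wage = max((k - 1) * k, outside_option)
        yield k, output, wage, output - wage

rows = list(matching_equilibrium())
for k, output, wage, profit in rows:
    print(f"firm {k:2d}: output ${output:6.2f}  wage ${wage:6.2f}  profit ${profit:5.2f}")

print(f"wage ratio, top to bottom: {rows[-1][2] / rows[0][2]:.0f}")  # 180
```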

What does inequality look like in this society?

Well, the most-productive firm only has 10 times as much capital as the least-productive firm, and the most-educated worker only has 10 times as much skill as the least-educated worker, so we might think that incomes would vary only by a factor of 10.

But in fact they vary by a factor of over 100.

The richest worker makes $90, while the poorest worker makes $0.50. That’s a ratio of 180. (Still lower than the pay ratio between the average CEO and their average employee in the US, by the way.) The richest worker is 10 times as productive as the poorest, but receives 180 times as much income.

The firm profits vary along a more reasonable scale in this case; firm 1 makes a profit of $0.50 while firm 10 makes a profit of $10. Indeed, except for firm 1, firm n always makes a profit of $n. So that’s very nearly a linear scaling in productivity.

Where did this result come from? Why is it so different from the usual assumptions? All I did was change one thing: I allowed for increasing returns to scale.

If you make the usual assumption of constant returns to scale, this result can’t happen. Multiplying all the inputs by 10 should just multiply the output by 10, by assumption—since that is the definition of constant returns to scale.

But if you look at the structure of real-world incomes, it’s pretty obvious that we don’t have constant returns to scale.

If we had constant returns to scale, we should expect that wages for the same person should only vary slightly if that person were to work in different places. In particular, to have a 2-fold increase in wage for the same worker you’d need more than a 2-fold increase in capital.

This is a bit counter-intuitive, so let me explain a bit further. If a 2-fold increase in capital results in a 2-fold increase in wage for a given worker, that’s increasing returns to scale—indeed, it’s precisely the production function I assumed above.

If you had constant returns to scale, a 2-fold increase in wage would require something like an 8-fold increase in capital. This is because you should get a 2-fold increase in total production by doubling everything—capital, labor, human capital, whatever else. So doubling capital by itself should produce a much weaker effect. For technical reasons I’d rather not get into at the moment, usually it’s assumed that production is approximately proportional to capital to the one-third power—so to double production you need to multiply capital by 2^3 = 8.
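In symbols (a standard Cobb-Douglas illustration, assuming the conventional capital share of one-third; the increasing-returns line is just the production function from my model above):

```latex
\text{Constant returns: } Y = K^{1/3} L^{2/3}
  \;\Rightarrow\; w = \frac{\partial Y}{\partial L}
    = \frac{2}{3}\left(\frac{K}{L}\right)^{1/3}
  \;\Rightarrow\; \text{doubling } w \text{ requires } K \to 2^3 K = 8K.

\text{Increasing returns: } Y = K H
  \;\Rightarrow\; w \propto K
  \;\Rightarrow\; \text{doubling } K \text{ doubles } w.
```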

I wasn’t able to quickly find really good data on wages for the same workers across different countries, but this should at least give a rough idea. In Mumbai, the minimum monthly wage for a full-time worker is about $80. In Shanghai, it is about $250. If you multiply out the US federal minimum wage of $7.25 per hour by 40 hours by 4 weeks, that comes to $1160 per month.

Of course, these are not the same workers. Even an “unskilled” worker in the US has a lot more education and training than a minimum-wage worker in India or China. But it’s not that much more. Maybe if we normalize India to 1, China is 3 and the US is 10.

Likewise, these are not the same jobs. Even a minimum wage job in the US is much more capital-intensive and uses much higher technology than most jobs in India or China. But it’s not that much more. Again let’s say India is 1, China is 3 and the US is 10.

If we had constant returns to scale, what should the wages be? Well, for India at productivity 1, the wage is $80. So for China at productivity 3, the wage should be $240—it’s actually $250, close enough for this rough approximation. But the US wage should be $800—and it is in fact $1160, 45% larger than we would expect by constant returns to scale.

Let’s try comparing within a particular industry, where the differences in skill and technology should be far smaller. The median salary for a software engineer in India is about 430,000 INR, which comes to about $6,700. If that sounds rather low for a software engineer, you’re probably more accustomed to the figure for US software engineers, which is $74,000. That is a factor of 11 to 1. For the same job. Maybe US software engineers are better than Indian software engineers—but are they that much better? Yes, you can adjust for purchasing power and shrink the gap: Prices in the US are about 4 times as high as those in India, so the real gap might be 3 to 1. But these huge price differences themselves need to be explained somehow, and even 3 to 1 for the same job in the same industry is still probably too large to explain by differences in either capital or education, unless you allow for increasing returns to scale.

In most industries, we probably don’t have quite as much increasing returns to scale as I assumed in my simple model. Workers in the US don’t make 100 times as much as workers in India, despite plausibly having both 10 times as much physical capital and 10 times as much human capital.

But in some industries, this model might not even be enough! The most successful authors and filmmakers, for example, make literally thousands of times as much money as the average author or filmmaker in their own country. J.K. Rowling has almost $1 billion from writing the Harry Potter series; this is despite having literally the same amount of physical capital and probably not much more human capital than the average author in the UK who makes only about 11,000 GBP—which is about $14,000. Harry Potter and the Philosopher’s Stone is now almost exactly 20 years old, which means that Rowling made an average of $50 million per year, some 3500 times as much as the average British author. Is she better than the average British author? Sure. Is she three thousand times better? I don’t think so. And we can’t even make the argument that she has more capital and technology to work with, because she doesn’t! They’re typing on the same laptops and using the same printing presses. Either the return on human capital for British authors is astronomical, or something other than marginal productivity is at work here—and either way, we don’t have anything close to constant returns to scale.

What can we take away from this? Well, if we don’t have constant returns to scale, then even if wage rates are proportional to marginal productivity, they aren’t proportional to the component of marginal productivity that you yourself bring. The same software developer makes more at Microsoft than at some Indian software company, the same doctor makes more at a US hospital than a hospital in China, the same college professor makes more at Harvard than at a community college, and J.K. Rowling makes three thousand times as much as the average British author—therefore we can’t speak of marginal productivity as inhering in you as an individual. It is an emergent property of a production process that includes you as a part. So even if you’re entirely being paid according to “your” productivity, it’s not really your productivity—it’s the productivity of the production process you’re involved in. A myriad of other factors had to snap into place to make your productivity what it is, most of which you had no control over. So in what sense, then, can we say you earned your higher pay?

Moreover, this problem becomes most acute precisely when incomes diverge the most. The differential in wages between two welders at the same auto plant may well be largely due to their relative skill at welding. But there’s absolutely no way that the top athletes, authors, filmmakers, CEOs, or hedge fund managers could possibly make the incomes they do by being individually that much more productive.

No, this isn’t like Watergate. It’s worse.

May 21, JDN 2457895

Make no mistake: This is a historic moment. This may be the greatest corruption scandal in the history of the United States. Donald Trump has fired the director of the FBI in order to block an investigation—and he said so himself.

It has become cliche to compare scandals to Watergate—to the point where we even stick the suffix “-gate” on things to indicate scandals. “Gamergate”, “Climategate”, and so on. So any comparison to Watergate is bound to draw some raised eyebrows.

But just as it’s not Godwin’s Law when you’re really talking about fascism and genocide, it’s not the “-gate” cliche when we are talking about a corruption scandal that goes all the way up to the President of the United States. And The Atlantic is right: this isn’t Watergate; it’s worse.

First of all, let’s talk about the crime of which Trump is accused. Nixon was accused of orchestrating burglary and fraud. These are not minor offenses, to be sure. But they are ordinary criminal offenses, felonies at worst. Trump is accused of fundamental Constitutional violations (particularly the First Amendment and the Emoluments Clause), and above all, Trump is accused of treason. This is the highest crime recognized by the Constitution of the United States. It is the only crime with a specifically listed Constitutional punishment—and that punishment is execution.

Donald Trump is being investigated not for stealing something or concealing information, but for colluding with foreign powers in the attempt to undermine American democracy. Is he guilty? I don’t know; that’s why we’re investigating. But let me say this: If he isn’t guilty of something, it’s quite baffling that he would fight so hard to stop the investigation.

Speaking of which: Trump’s intervention to stop Comey is much more direct, and much more sudden, than anything Nixon did to stop the Watergate investigations. Nixon of course tried to stonewall the investigations, but he did so subtly, cautiously, always trying to at least appear like he valued due process and rule of law. Trump made no such efforts, openly threatening Comey personally on Twitter and publicly declaring on national television that he had fired him to block the investigation.

But perhaps what makes the Trump-Comey affair most terrifying is how the supposedly “mainstream” Republican Party has reacted. The Republicans of Nixon had some honor left in them; several resigned rather than follow Nixon’s illegal orders, and dozens of Republicans in Congress supported the investigations and called for Nixon’s impeachment. Apparently that honor is gone now, as GOP leaders like Mitch McConnell and Lindsey Graham have expressed support for the President’s corrupt and illegal actions citing no principle other than party loyalty. If we needed any more proof that the Republican Party of the United States is no longer a mainstream political party, this is it. They don’t believe in democracy or rule of law anymore. They believe in winning at any cost, loyalty at any price. They have become a radical far-right organization—indeed, if they continue down this road of supporting the President in undermining the freedom of the press and consolidating his own power, I think it is fair to call them literally neo-fascist.

We are about to see whether American institutions can withstand such an onslaught, whether liberty and justice can prevail against corruption and tyranny. So far, there have been reasons to be optimistic: In particular, the judicial branch has proudly and bravely held the line, blocking Trump’s travel ban (multiple times), resisting his order to undermine sanctuary cities, and standing up to direct criticisms and even threats from the President himself. Our system of checks and balances is being challenged, but so far it is holding up against that challenge. We will find out soon enough whether the American system truly is robust enough to survive.

Our government just voted to let thousands of people die for no reason

May 14, JDN 2457888

The US House of Representatives just voted to pass a bill that will let thousands of Americans die for no reason. At the time of writing it hasn’t yet passed the Senate, but it may yet do so. And if it does, there can be little doubt that President Trump (a phrase I still feel nauseous saying) will sign it.

Some already call it Trumpcare (or “Trump-doesn’t-care”); but officially they call it the American Health Care Act. I think we should use the formal name, because it is a name which is already beginning to take on a dark irony; yes, only in America would such a terrible health care act be considered. Every other highly-developed country has a universal healthcare system; most of them have single-payer systems (and this has been true for over two decades).

The Congressional Budget Office estimates that the AHCA will increase the number of uninsured Americans by 24 million. Of these, 14 million will be people near the poverty line who lose access to Medicaid.

In 2009, a Harvard study estimated that 45,000 Americans die each year because they don’t have health insurance. This is on the higher end; other studies have estimated more like 20,000. But based on the increases in health insurance rates under Obamacare, somewhere between 5,000 and 10,000 American lives have been saved each year since it was enacted. That reduction came from insuring about 10 million people who weren’t insured before.

Making a linear projection, we can roughly estimate the number of additional Americans who will die every year if this American Health Care Act is implemented. (24 million/10 million)(5,000 to 10,000) = 12,000 to 24,000 deaths per year. For comparison, there are about 14,000 total homicides in the United States each year (and we have an exceptionally high homicide rate for a highly-developed country).

Indeed, morally, it might make sense to count these deaths as homicides (by the principle of “depraved indifference”); Trump therefore intends to double our homicide rate.

Of course, it will not be prosecuted this way. And one can even make an ethical case for why it shouldn’t be, why it would be impossible to make policy if every lawmaker had to face the consequences of every policy choice. (Start a war? A hundred thousand deaths. Fail to start a war in response to a genocide? A different hundred thousand deaths.)

But for once, I might want to make an exception. Because these deaths will not be the result of a complex policy trade-off with merits and demerits on both sides. They will not be the result of honest mistakes or unforeseen disasters. These people will die out of pure depraved indifference.

We had a healthcare bill that was working. Indeed, Obamacare was remarkably successful. It increased insurance rates and reduced mortality rates while still managing to slow the growth in healthcare expenditure.

The only real cost was an increase in taxes on the top 5% (and particularly the top 1%) of the income distribution. But the Republican Party—and make no mistake, the vote was on almost completely partisan lines, and not a single Democrat supported it—has now made it a matter of official policy that they care more about cutting taxes on millionaires than they do about poor people dying from lack of healthcare.

Yet there may be a silver lining in all of this: Once people saw that Obamacare could work, the idea of universal healthcare in the United States began to seem like a serious political position. The Overton Window has grown. Indeed, it may even have shifted to the left for once; the responses to the American Health Care Act have been almost uniform shock and outrage, when really what the bill does is go back to the same awful system we had before. Going backward and letting thousands of people die for no reason should appall people—but I feared that it might not, because it would seem “normal”. We in America have grown very accustomed to letting poor people die in order to slightly increase the profits of billionaires, and I thought this time might be no different—but it was different. Once Obamacare actually passed and began to work, people really saw what was happening—that all this suffering and death wasn’t necessary, it wasn’t an inextricable part of having a functioning economy. And now that they see that, they aren’t willing to go back.

Argumentum ab scientia is not argumentum baculo: The difference between authority and expertise

May 7, JDN 2457881

Americans are, on the whole, suspicious of authority. This is a very good thing; it shields us against authoritarianism. But it comes with a major downside, which is a tendency to forget the distinction between authority and expertise.

Argument from authority is an informal fallacy, argumentum baculo. The fact that something was said by the Pope, or the President, or the General Secretary of the UN, doesn’t make it true. (Aside: You’re probably more familiar with the phrase argumentum ad baculum, which is terrible Latin. That would mean “argument toward a stick”, when clearly the intended meaning was “argument by means of a stick”, which is argumentum baculo.)

But argument from expertise, argumentum ab scientia, is something quite different. The world is much too complicated for any one person to know everything about everything, so we have no choice but to specialize our knowledge, each of us becoming an expert in only a few things. So if you are not an expert in a subject, when someone who is an expert in that subject tells you something about that subject, you should probably believe them.

You should especially be prepared to believe them when the entire community of experts is in consensus or near-consensus on a topic. The scientific consensus on climate change is absolutely overwhelming. Is this a reason to believe in climate change? You’re damn right it is. Unless you have years of education and experience in understanding climate models and atmospheric data, you have no basis for challenging the expert consensus on this issue.

This confusion has created a deep current of anti-intellectualism in our culture, as Isaac Asimov famously recognized:

There is a cult of ignorance in the United States, and there always has been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that “my ignorance is just as good as your knowledge.”

This is also important to understand if you have heterodox views on any scientific topic. The fact that the whole field disagrees with you does not prove that you are wrong—but it does make it quite likely that you are wrong. Cranks often want to compare themselves to Galileo or Einstein, but here’s the thing: Galileo and Einstein didn’t act like cranks. They didn’t expect the scientific community to respect their ideas before they had gathered compelling evidence in their favor.

When behavioral economists found that neoclassical models of human behavior didn’t stand up to scrutiny, did they shout from the rooftops that economics is all a lie? No, they published their research in peer-reviewed journals, and talked with economists about the implications of their results. There may have been times when they felt ignored or disrespected by the mainstream, but they pressed on, because the data was on their side. And ultimately, the mainstream gave in: Daniel Kahneman won the Nobel Prize in Economics.

Experts are not always right, that is true. But they are usually right, and if you think they are wrong you’d better have a good reason to think so. The best reasons are the sort that come about when you yourself have spent the time and effort to become an expert, able to challenge the consensus on its own terms.

Admittedly, that is a very difficult thing to do—and more difficult than it should be. I have seen firsthand how difficult and painful the slow grind toward a PhD can be, and how many obstacles will get thrown in your way, ranging from nepotism and interdepartmental politics, to discrimination against women and minorities, to mismatches of interest between students and faculty, all the way to illness, mental health problems, and the slings and arrows of outrageous fortune in general. If you have particularly heterodox ideas, you may face particularly harsh barriers, and sometimes it behooves you to hold your tongue and toe the line awhile.

But this is no excuse not to gain expertise. Even if academia itself is not available to you, we live in an age of unprecedented availability of information—it’s not called the Information Age for nothing. A sufficiently talented and dedicated autodidact can challenge the mainstream, if their ideas are truly good enough. (Perhaps the best example of this is the mathematician savant Srinivasa Ramanujan. But he’s… something else. I think he is about as far from the average genius as the average genius is from the average person.) No, that won’t be easy either. But if you are really serious about advancing human understanding rather than just rooting for your political team (read: tribe), you should be prepared to either take up the academic route or attack it as an autodidact from the outside.

In fact, most scientific fields are actually quite good about admitting what they don’t know. A total consensus that turns out to be wrong is actually a very rare phenomenon; much more common is a clash of multiple competing paradigms where one ultimately wins out, or they end up replaced by a totally new paradigm or some sort of synthesis. In almost all cases, the new paradigm wins not because it becomes fashionable or the ancien regime dies out (as Planck cynically claimed) but because overwhelming evidence is observed in its favor, often in the form of explaining some phenomenon that was previously impossible to understand. If your heterodox theory doesn’t do that, then it probably won’t win, because it doesn’t deserve to.

(Right now you might think of challenging me: Does my heterodox theory do that? Does the tribal paradigm explain things that either total selfishness or total altruism cannot? I think it’s pretty obvious that it does. I mean, you are familiar with a little thing called “racism”, aren’t you? There is no explanation for racism in neoclassical economics; to understand it at all you have to just impose it as an arbitrary term on the utility function. But at that point, why not throw in whatever you please? Maybe some people enjoy bashing their heads against walls, and other people take great pleasure in the taste of arsenic. Why would this particular self- (not to mention other-) destroying behavior be universal to all human societies?)

In practice, I think most people who challenge the mainstream consensus aren’t genuinely interested in finding out the truth—certainly not enough to actually go through the work of doing it. It’s a pattern you can see in a wide range of fringe views: Anti-vaxxers, 9/11 truthers, climate denialists, they all think the same way. The mainstream disagrees with my preconceived ideology, therefore the mainstream is some kind of global conspiracy to deceive us. The overwhelming evidence that vaccination is safe and (wildly) cost-effective, 9/11 was indeed perpetrated by Al Qaeda and neither planned nor anticipated by anyone in the US government, and the global climate is being changed by human greenhouse gas emissions—these things simply don’t matter to them, because it was never really about the truth. They knew the answer before they asked the question. Because their identity is wrapped up in that political ideology, they know it couldn’t possibly be otherwise, and no amount of evidence will change their mind.

How do we reach such people? That, I don’t know. I wish I did. But I can say this much: We can stop taking them seriously when they say that the overwhelming scientific consensus against them is just another “appeal to authority”. It’s not. It never was. It’s an argument from expertise—there are people who know this a lot better than you, and they think you’re wrong, so you’re probably wrong.