Social construction is not fact—and it is not fiction

July 30, JDN 2457965

With the possible exception of politically charged issues (especially lately in the US), most people are fairly good at distinguishing between true and false, fact and fiction. But there are certain types of ideas that can’t be neatly categorized as fact or fiction.

First, there are subjective feelings. You can feel angry, or afraid, or sad—and really, truly feel that way—despite having no objective basis for the emotion coming from the external world. Such emotions are usually irrational, but even knowing that doesn’t make them automatically disappear. Distinguishing subjective feelings from objective facts is simple in principle, but often difficult in practice: A great many things simply “feel true” despite being utterly false. (Ask an average American which is more likely to kill them, a terrorist or the car in their garage; I bet quite a few will get the wrong answer. Indeed, if you ask them whether they’re more likely to be shot by someone else or to shoot themselves, almost literally every gun owner is going to get that answer wrong—or they wouldn’t be gun owners.)

The one I really want to focus on today is social constructions. This is a term that has been so thoroughly overused and abused by postmodernist academics (“science is a social construction”, “love is a social construction”, “math is a social construction”, “sex is a social construction”, etc.) that it has almost lost its meaning. Indeed, many people now react with automatic aversion to the term; upon hearing it, they immediately assume—understandably—that whatever is about to follow is nonsense.

But there is actually a very important core meaning to the term “social construction” that we stand to lose if we throw it away entirely. A social construction is something that exists only because we all believe in it.

Every part of that definition is important:

First, a social construction is something that exists: It’s really there, objectively. If you think it doesn’t exist, you’re wrong. It even has objective properties; you can be right or wrong in your beliefs about it, even once you agree that it exists.

Second, a social construction only exists because we all believe in it: If everyone in the world suddenly stopped believing in it, like Tinker Bell it would wink out of existence. The “we all” is important as well; a social construction doesn’t exist simply because one person, or a few people, believe in it—it requires a certain critical mass of society to believe in it. Of course, almost nothing is literally believed by everyone, so it’s more that a social construction exists insofar as people believe in it—and thus can attain a weaker or stronger kind of existence as beliefs change.

The combination of these two features makes social constructions a very weird sort of entity. They aren’t merely subjective beliefs; you can’t be wrong about what you are feeling right now (though you can certainly lie about it), but you can definitely be wrong about the social constructions of your society. But we can’t all be wrong about the social constructions of our society; once enough of our society stops believing in them, they will no longer exist. And when we have conflict over a social construction, its existence can become weaker or stronger—indeed, it can exist to some of us but not to others.

If all this sounds very bizarre and reminds you of postmodernist nonsense that might come from the Wisdom of Chopra randomizer, allow me to provide a concrete and indisputable example of a social construction that is vitally important to economics: Money.

The US dollar is a social construction. It has all sorts of well-defined objective properties, from its purchasing power in the market to its exchange rate with other currencies (also all social constructions). The markets in which it is spent are social constructions. The laws which regulate those markets are social constructions. The government which makes those laws is a social construction.

But it is not social constructions all the way down. The paper upon which the dollar was printed is a physical object with objective factual existence. It is an artifact—it was made by humans, and wouldn’t exist if we hadn’t made it—but now that we’ve made it, it exists and would continue to exist regardless of whether we believe in it or even whether we continue to exist. The cotton from which it was made is also partly artificial, bred over centuries from a lifeform that evolved over millions of years. But the carbon atoms inside that cotton were made in a star, and that star existed and fused its carbon billions of years before any life on Earth existed, much less humans in particular. This is why the statements “math is a social construction” and “science is a social construction” are so ridiculous. Okay, sure, the institutions of science and mathematics are social constructions, but that’s trivial; nobody would dispute that, and it’s not terribly interesting. (What, you mean if everyone stopped going to MIT, there would be no MIT!?) The truths of science and mathematics were true long before we were even here—indeed, the fundamental truths of mathematics could not have failed to be true in any possible universe.

But the US dollar did not exist before human beings created it, and unlike the physical paper, the purchasing power of that dollar (which is, after all, mainly what we care about) is entirely socially constructed. If everyone in the world suddenly stopped accepting US dollars as money, the US dollar would cease to be money. If even a few million people in the US suddenly stopped accepting dollars, its value would become much more precarious, and inflation would be sure to follow.

Nor is this simply because the US dollar is a fiat currency. That makes it more obvious, to be sure; a fiat currency attains its value solely through social construction, as the physical object itself has negligible value. But even when we were on the gold standard, our currency was representative; the paper itself was still just as worthless. If you wanted gold, you’d have to exchange for it; and that process of exchange is entirely a social construction.

And what about gold coins, one of the oldest forms of money? Here the physical object might actually be useful for something, but not all that much. It’s shiny, you can make jewelry out of it, it doesn’t corrode, it can be used to replace lost teeth, it has anti-inflammatory properties—and millennia later we found out that its dense nucleus is useful for particle accelerator experiments and that it is a very reliable electrical conductor useful for making microchips. But all in all, gold is really not that useful. If gold were priced based on its true usefulness, it would be extraordinarily cheap; cheaper than water, for sure, as it’s much less useful than water. Yet very few cultures have ever used water as currency (though some have used salt). Thus, most of the value of gold is itself socially constructed; you value gold not to use it, but to impress other people with the fact that you own it (or indeed to sell it to them). Stranded alone on a desert island, you’d do anything for fresh water, but gold means nothing to you. And a gold coin actually takes on additional socially-constructed value; gold coins almost always had seignorage, additional value the government received from minting them over and above the market price of the gold itself.

Economics, in fact, is largely about social constructions; or rather I should say it’s about the process of producing and distributing artifacts by means of social constructions. Artifacts like houses, cars, computers, and toasters; social constructions like money, bonds, deeds, policies, rights, corporations, and governments. Of course, there are also services, which are not quite artifacts since they stop existing when we stop doing them—though, crucially, not when we stop believing in them; your waiter still delivered your lunch even if you persist in the delusion that the lunch is not there. And there are natural resources, which existed before us (and may or may not exist after us). But these are corner cases; mostly economics is about using laws and money to distribute goods, which means using social constructions to distribute artifacts.

Other very important social constructions include race and gender. Not melanin and sex, mind you; human beings have real, biological variation in skin tone and body shape. But the concept of a race—especially the race categories we ordinarily use—is socially constructed. Nothing biological forced us to regard Kenyan and Burkinabe as the same “race” while Ainu and Navajo are different “races”; indeed, the genetic data is screaming at us in the opposite direction. Humans are sexually dimorphic, with some rare exceptions (only about 0.02% of people are intersex; about 0.3% are transgender; and no more than 5% have sex chromosome abnormalities). But the much thicker concept of gender that comes with a whole system of norms and attitudes is all socially constructed.

It’s one thing to say that perhaps males are, on average, more genetically predisposed to be systematizers than females, and thus men are more attracted to engineering and women to nursing. That could, in fact, be true, though the evidence remains quite weak. It’s quite another to say that women must not be engineers, even if they want to be, and men must not be nurses—yet the latter was, until very recently, the quite explicit and enforced norm. Standards of clothing are even more obviously socially-constructed; in Western cultures (except the Celts, for some reason), flared garments are “dresses” and hence “feminine”; in East Asian cultures, flared garments such as kimono are gender-neutral, and gender is instead expressed through clothing by subtler aspects such as being fastened on the left instead of the right. In a thousand different ways, we mark our gender by what we wear, how we speak, even how we walk—and what’s more, we enforce those gender markings. It’s not simply that males typically speak in lower pitches (which does actually have a biological basis); it’s that males who speak in higher pitches are seen as less of a man, and that is a bad thing. We have a very strict hierarchy, which is imposed in almost every culture: It is best to be a man, worse to be a woman who acts like a woman, worse still to be a woman who acts like a man, and worst of all to be a man who acts like a woman. What it means to “act like a man” or “act like a woman” varies substantially; but the core hierarchy persists.

Social constructions like these are in fact some of the most important things in our lives. Human beings are uniquely social animals, and we define our meaning and purpose in life largely through social constructions.

It can be tempting, therefore, to be cynical about this, and say that our lives are built around what is not real—that is, fiction. But while this may be true for religious fanatics who honestly believe that some supernatural being will reward them for their acts of devotion, it is not a fair or accurate description of someone who makes comparable sacrifices for “the United States” or “free speech” or “liberty”. These are social constructions, not fictions. They really do exist. Indeed, it is only because we are willing to make sacrifices to maintain them that they continue to exist. Free speech isn’t maintained by us saying things we want to say; it is maintained by us allowing other people to say things we don’t want to hear. Liberty is not protected by us doing whatever we feel like, but by not doing things we would be tempted to do that impose upon other people’s freedom. If in our cynicism we act as though these things are fictions, they may soon become so.

But it would be a lot easier to get this across to people, I think, if folks would stop saying idiotic things like “science is a social construction”.

Experimentally testing categorical prospect theory

Dec 4, JDN 2457727

In last week’s post I presented a new theory of probability judgments, which doesn’t rely upon people performing complicated math even subconsciously. Instead, I hypothesize that people try to assign categories to their subjective probabilities, and throw away all the information that wasn’t used to assign that category.

The way to most clearly distinguish this from cumulative prospect theory is to show discontinuity. Kahneman’s smooth, continuous function places fairly strong bounds on just how much a shift from 0% to 0.000001% can really affect your behavior. In particular, if you want to explain the fact that people do seem to behave differently around 10% compared to 1% probabilities, you can’t allow the slope of the smooth function to get much higher than 10 at any point, even near 0 and 1. (It does depend on the precise form of the function, but the more complicated you make it, the more free parameters you add to the model. In the most parsimonious form, which is a cubic polynomial, the maximum slope is actually much smaller than this—only 2.)

If that’s the case, then switching from 0% to 0.00001% should have no more effect on your behavior than a switch from 0% to 0.0001% would have on a rational expected utility optimizer. But in fact I think I can set up scenarios where it would have a larger effect than a switch from 0.001% to 0.01%.

Indeed, these games are already quite profitable for the majority of US states, and they are called lotteries.

Rationally, it should make very little difference to you whether your odds of winning the Powerball are 0 (you bought no ticket) or 10^-9 (you bought a ticket), even when the prize is $100 million. This is because your utility of $100 million is nowhere near 100 million times as large as your marginal utility of $1. A good guess would be that your lifetime income is about $2 million, your utility is logarithmic, the units of utility are hectoQALY, and the baseline level is about $100,000.

I apologize for the extremely large number of decimals, but I had to include them in order to show any difference at all. Watch for where the decimals first deviate from the baseline.

Your utility if you don’t have a ticket is ln(20) = 2.9957322736 hQALY.

Your utility if you have a ticket is (1-10^-9) ln(20) + 10^-9 ln(1020) = 2.9957322775 hQALY.

You gain a whopping 0.4 microQALY over your whole lifetime, roughly twelve seconds of healthy life. I highly doubt you could even perceive such a difference.

And yet, people are willing to pay nontrivial sums for the chance to play such lotteries. Powerball tickets sell for about $2 each, and some people buy tickets every week. If you do that and live to be 80, you will spend some $8,000 on lottery tickets during your lifetime, which results in this expected utility: (1-4*10^-6) ln(20-0.08) + 4*10^-6 ln(1020) = 2.9917399955 hQALY.
You have now sacrificed 0.004 hectoQALY, which is to say 0.4 QALY—that’s months of happiness you’ve given up to play this stupid pointless game.

Which shouldn’t be surprising, as (with 99.9996% probability) you have given up nearly four months of your income with nothing to show for it. Lifetime income of $2 million / lifespan of 80 years = $25,000 per year; $8,000 / $25,000 = 0.32 years. You’ve actually sacrificed slightly more than this, which comes from your risk aversion.
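If you want to check these numbers yourself, here is a minimal sketch of the calculation in Python, under the same assumptions as above (log utility, income measured in units of $100,000, a 10^-9 chance per ticket); the variable names are mine, not anything standard:

```python
import math

UNIT = 100_000               # baseline: income measured in units of $100,000
LIFETIME_INCOME = 2_000_000  # assumed lifetime income
JACKPOT = 100_000_000        # Powerball-sized prize
P_WIN = 1e-9                 # rough chance of winning with a single ticket

def utility(income):
    """Logarithmic lifetime utility, in hectoQALY."""
    return math.log(income / UNIT)

u_none = utility(LIFETIME_INCOME)  # ln(20) = 2.9957322736...

# One ticket: the expected utility gain is invisible.
u_one = (1 - P_WIN) * u_none + P_WIN * utility(LIFETIME_INCOME + JACKPOT)

# A $2 ticket every week for a lifetime: about $8,000 spent, and about
# a 4-in-a-million chance of ever winning.
u_weekly = (1 - 4e-6) * utility(LIFETIME_INCOME - 8_000) \
    + 4e-6 * utility(LIFETIME_INCOME + JACKPOT)

print(u_none, u_one, u_weekly)
# Rounds to 2.9957322736, 2.9957322775, and 2.9917399955, as above.
```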

Why would anyone do such a thing? Because while the difference between 0 and 10^-9 may be trivial, the difference between “impossible” and “almost impossible” feels enormous. “You can’t win if you don’t play!” they say, but they might as well say “You can’t win if you do play either.” Indeed, the probability of winning without playing isn’t zero; you could find a winning ticket lying on the ground, or win due to an error that is then upheld in court, or be given the winnings bequeathed by a dying family member or gifted by an anonymous donor. These are of course vanishingly unlikely—but so was winning in the first place. You’re talking about the difference between 10^-9 and 10^-12, which in proportional terms sounds like a lot—but in absolute terms is nothing. If you drive to a drug store every week to buy a ticket, you are more likely to die in a car accident on the way to the drug store than you are to win the lottery.

Of course, these are not experimental conditions. So I need to devise a similar game, with smaller stakes but still large enough for people’s brains to care about the “almost impossible” category; maybe thousands of dollars? It’s not uncommon for an economics experiment to cost thousands; it’s just usually paid out to many people instead of randomly to one person or nobody. Conducting the experiment in an underdeveloped country like India would also effectively amplify the amounts paid, but at the fixed cost of transporting the research team to India.

But I think in general terms the experiment could look something like this. You are given $20 for participating in the experiment (we treat it as already given to you, to maximize your loss aversion and endowment effect and thereby give us more bang for our buck). You then have a chance to play a game, where you pay $X to get a probability P of winning $Y*X, and we vary these numbers.

The actual participants wouldn’t see the variables, just the numbers and possibly the rules: “You can pay $2 for a 1% chance of winning $200. You can also play multiple times if you wish.” “You can pay $10 for a 5% chance of winning $250. You can only play once or not at all.”
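As a sketch (not a finished protocol), a single trial might be coded like this; the endowment, the offers, and the stand-in decision rule are placeholders of my own invention:

```python
import random

def run_trial(endowment, cost, prob, payoff, subject_chooses):
    """One trial: the subject may pay `cost` for a `prob` chance of `payoff`.

    `subject_chooses(cost, prob, payoff) -> bool` stands in for however the
    real participant decides; we record the choice, then resolve the gamble.
    """
    plays = subject_chooses(cost, prob, payoff)
    if not plays:
        return endowment, plays
    won = random.random() < prob
    return endowment - cost + (payoff if won else 0), plays

# The two example offers from the text:
offers = [
    {"cost": 2, "prob": 0.01, "payoff": 200},   # may be played repeatedly
    {"cost": 10, "prob": 0.05, "payoff": 250},  # one shot only
]

# A risk-neutral stand-in subject, just to test the harness:
risk_neutral = lambda cost, prob, payoff: prob * payoff > cost

for offer in offers:
    print(run_trial(20, offer["cost"], offer["prob"], offer["payoff"], risk_neutral))
```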

So I think the first step is to find some dilemmas: cases where people feel ambivalent, and where different people differ in their choices. That’s a good role for a pilot study.

Then we take these dilemmas and start varying their probabilities slightly.

In particular, we try to vary them at the edge of where people have mental categories. If subjective probability is continuous, a slight change in actual probability should never result in a large change in behavior, and furthermore the effect of a change shouldn’t vary too much depending on where the change starts.

But if subjective probability is categorical, these categories should have edges. Then, when I present you with two dilemmas that are on opposite sides of one of the edges, your behavior should radically shift; while if I change it in a different way, I can make a large change without changing the result.

Based solely on my own intuition, I guessed that the categories roughly follow this pattern (a small code sketch of this mapping follows the list):

Impossible: 0%

Almost impossible: 0.1%

Very unlikely: 1%

Unlikely: 10%

Fairly unlikely: 20%

Roughly even odds: 50%

Fairly likely: 80%

Likely: 90%

Very likely: 99%

Almost certain: 99.9%

Certain: 100%
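To make the idea concrete, here is a minimal sketch of the “categorize and discard” judgment; the prototype values are just my guesses from above, and the nearest-in-log-odds matching rule is purely an assumption of the sketch:

```python
import math

PROTOTYPES = {
    "impossible": 0.0,
    "almost impossible": 0.001,
    "very unlikely": 0.01,
    "unlikely": 0.10,
    "fairly unlikely": 0.20,
    "roughly even odds": 0.50,
    "fairly likely": 0.80,
    "likely": 0.90,
    "very likely": 0.99,
    "almost certain": 0.999,
    "certain": 1.0,
}

def judged_category(p):
    # "Impossible" and "certain" seem to be sharp: anything strictly
    # between 0 and 1 lands in an interior category.
    if p <= 0.0:
        return "impossible"
    if p >= 1.0:
        return "certain"
    logit = lambda q: math.log(q / (1 - q))
    interior = {k: v for k, v in PROTOTYPES.items() if 0.0 < v < 1.0}
    # Match in log-odds space, where 0.1% vs. 1% is as big a step as 10% vs. 50%.
    return min(interior, key=lambda k: abs(logit(interior[k]) - logit(p)))

print(judged_category(1e-9))  # "almost impossible", even at 10^-9
print(judged_category(0.01))  # "very unlikely"
print(judged_category(0.02))  # still "very unlikely": 1% -> 2% barely registers
```

Any rule with sharp edges behaves this way; the specific matching rule here is only for illustration.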

So for example, if I switch from 0% to 0.01%, it should have a very large effect, because I’ve moved you out of your “impossible” category (indeed, I think the “impossible” category is almost completely sharp; literally anything above zero seems to be enough for most people, even 10^-9 or 10^-10). But if I move from 1% to 2%, it should have a small effect, because I’m still well within the “very unlikely” category. Yet the latter change is literally one hundred times larger than the former. It is possible to define continuous functions that would behave this way to an arbitrary level of approximation—but they get a lot less parsimonious very fast.

Now, immediately I run into a problem, because I’m not even sure those are my categories, much less that they are everyone else’s. If I knew precisely which categories to look for, I could tell whether or not I had found them. But the process of both finding the categories and determining whether their edges are truly sharp is much more complicated, and requires a lot more statistical degrees of freedom to get beyond the noise.

One thing I’m considering is assigning these values as a prior, and then conducting a series of experiments which would adjust that prior. In effect I would be using optimal Bayesian probability reasoning to show that human beings do not use optimal Bayesian probability reasoning. Still, I think that actually pinning down the categories would require a large number of participants or a long series of experiments (in frequentist statistics this distinction is vital; in Bayesian statistics it is basically irrelevant—one of the simplest reasons to be Bayesian is that it no longer bothers you whether someone did 2 experiments of 100 people or 1 experiment of 200 people, provided they were the same experiment of course). And of course there’s always the possibility that my theory is totally off-base, and I find nothing; a dissertation replicating cumulative prospect theory is a lot less exciting (and, sadly, less publishable) than one refuting it.
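For concreteness, the Bayesian step might begin as a toy grid update like the following, where the candidate grid, the threshold decision rule, and the 5% lapse rate are all illustrative assumptions rather than a real design:

```python
import numpy as np

# Candidate locations for the edge of the "impossible" category.
edges = np.linspace(0.0001, 0.01, 100)
prior = np.full(len(edges), 1.0 / len(edges))  # start roughly uniform

def likelihood(edge, offered_p, played):
    # Toy decision rule: the subject plays iff the offered probability
    # crosses their category edge, with a 5% lapse rate for noise.
    predicted = offered_p > edge
    return 0.95 if played == predicted else 0.05

# One hypothetical observation: a subject declined a 0.1% gamble.
posterior = prior * np.array([likelihood(e, 0.001, played=False) for e in edges])
posterior /= posterior.sum()
# Posterior mass shifts toward edges above 0.1%; each further trial,
# at a different offered probability, sharpens the estimate.
```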

Still, I think something like this is worth exploring. I highly doubt that people are doing very much math when they make most probabilistic judgments, and using categories would provide a very good way for people to make judgments usefully with no math at all.

How I wish we measured percentage change

JDN 2457415

For today’s post I’m taking a break from issues of global policy to discuss a bit of a mathematical pet peeve. It is an opinion I share with many economists—for instance Miles Kimball has a very nice post about it, complete with some clever analogies to music.

I hate when we talk about percentages in asymmetric terms.

What do I mean by this? Well, here are a few examples.

If my stock portfolio loses 10% one year and then gains 11% the following year, have I gained or lost money? I’ve lost money. Only a little bit—I’m down 0.1%—but still, a loss.

In 2003, Venezuela suffered a depression of -26.7% growth, followed by an economic boom of 36.1% growth the next year. What was their new GDP, relative to what it was before the depression? Very slightly less than before. (99.8% of its pre-recession value, to be precise.) You would think that falling 27% and rising 36% would leave you about 9% ahead; in fact it leaves you behind.

Would you rather live in a country with 11% inflation and have constant nominal pay, or live in a country with no inflation and take a 10% pay cut? You should prefer the inflation; in that case your real income only falls by 9.9%, instead of 10%.

We often say that the real interest rate is simply the nominal interest rate minus the rate of inflation, but that’s actually only an approximation. If you have 7% inflation and a nominal interest rate of 11%, your real interest rate is not actually 4%; it is 3.74%. If you have 2% inflation and a nominal interest rate of 0%, your real interest rate is not actually -2%; it is -1.96%.
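The exact relation, known as the Fisher equation, divides instead of subtracting; a quick sketch:

```python
def real_rate(nominal, inflation):
    # Exact: (1 + real) = (1 + nominal) / (1 + inflation)
    return (1 + nominal) / (1 + inflation) - 1

print(real_rate(0.11, 0.07))  # 0.0374..., not the 0.04 the approximation gives
print(real_rate(0.00, 0.02))  # -0.0196..., not -0.02
```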

This is what I mean by asymmetric:

Rising 10% and falling 10% do not cancel each other out. To cancel out a fall of 10%, you must actually rise 11.1%.

Gaining 20% and losing 20% do not cancel each other out. To cancel out a loss of 20%, you need a gain of 25%.

Is it starting to bother you yet? It sure bothers me.

Worst of all is the fact that the way we usually measure percentages, losses are bounded at 100% while gains are unbounded. To cancel a loss of 100%, you’d need a gain of infinity.

There are two basic ways of solving this problem: The simple way, and the good way.

The simple way is to just start measuring percentages symmetrically, by including both the starting and ending values in the calculation and averaging them.
That is, instead of using this formula:

% change = 100% * (new – old)/(old)

You use this one:

% change = 100% * (new – old)/((new + old)/2)

In this new system, percentage changes are symmetric.
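Both measures are one-liners, if you want to play with them:

```python
def pct_change(old, new):
    """The usual, asymmetric percentage change."""
    return 100 * (new - old) / old

def sym_pct_change(old, new):
    """Symmetric percentage change, relative to the average of the two values."""
    return 100 * (new - old) / ((new + old) / 2)

print(pct_change(5, 6), pct_change(6, 5))          # 20.0 and -16.67: asymmetric
print(sym_pct_change(5, 6), sym_pct_change(6, 5))  # 18.18 and -18.18: symmetric
```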

Suppose a country’s GDP rises from $5 trillion to $6 trillion.

In the old system we’d say it has risen 20%:

100% * ($6 T – $5 T)/($5 T) = 20%

In the symmetric system, we’d say it has risen 18.2%:

100% * ($6 T – $5 T)/($5.5 T) = 18.2%

Suppose it falls back to $5 trillion the next year.

In the old system we’d say it has only fallen 16.7%:

100% * ($5 T – $6 T)/($6 T) = -16.7%

But in the symmetric system, we’d say it has fallen 18.2%.

100% * ($5 T – $6 T)/($5.5 T) = -18.2%

In the old system, the gain of 20% was somehow canceled by a loss of 16.7%. In the symmetric system, the gain of 18.2% was canceled by a loss of 18.2%, just as you’d expect.

This also removes the problem of losses being bounded but gains being unbounded. Now both losses and gains are bounded, at the rather surprising value of 200%.

Formally, that’s because of these limits:
lim_{x → ∞} (x – 1) / ((x + 1)/2) = 2

lim_{x → ∞} (0 – x) / ((x + 0)/2) = –2

It might be easier to intuit these limits with an example. Suppose something explodes from a value of 1 to a value of 10,000,000. In the old system, this means it rose 1,000,000,000%. In the symmetric system, it rose 199.9999%. Like the speed of light, you can approach 200%, but never quite get there.

100% * (10^7 – 1)/(5*10^6 + 0.5) = 199.9999%

Gaining 200% in the symmetric system is gaining an infinite amount. That’s… weird, to say the least. Also, losing everything is now losing… 200%?

This is simple to explain and compute, but it’s ultimately not the best way.

The best way is to use logarithms.

As you may vaguely recall from math classes past, logarithms are the inverse of exponents.

Since 2^4 = 16, log_2 (16) = 4.

The natural logarithm ln() is the most fundamental for deep mathematical reasons I don’t have room to explain right now. It uses the base e, a transcendental number that starts 2.718281828459045…

To the uninitiated, this probably seems like an odd choice—no rational number has a natural logarithm that is itself a rational number (well, other than 1, since ln(1) = 0).

But perhaps it will seem a bit more comfortable once I show you that natural logarithms are remarkably close to percentages, particularly for the small changes in which percentages make sense.

We define something called log points such that the change in log points is 100 times the natural logarithm of the ratio of the new value to the old:

log points = 100 * ln(new / old)

This is symmetric because of the following property of logarithms:

ln(a/b) = – ln(b/a)

Let’s return to the country that saw its GDP rise from $5 trillion to $6 trillion.

The logarithmic change is 18.2 log points:

100 * ln($6 T / $5 T) = 100 * ln(1.2) = 18.2

If it falls back to $5 T, the change is -18.2 log points:

100 * ln($5 T / $6 T) = 100 * ln(0.833) = -18.2

Notice how in the symmetric percentage system, it rose and fell 18.2%; and in the logarithmic system, it rose and fell 18.2 log points. They are almost interchangeable, for small percentages.
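In code, using the definition above:

```python
import math

def log_points(old, new):
    """Change in log points: 100 times the natural log of the ratio."""
    return 100 * math.log(new / old)

print(log_points(5, 6))  # +18.23
print(log_points(6, 5))  # -18.23: exactly symmetric, since ln(a/b) = -ln(b/a)
```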

In this graph, the old value is assumed to be 1. The horizontal axis is the new value, and the vertical axis is the percentage change we would report by each method.

[Figure: percentage_change_small, comparing the three measures for small changes]

The green line is the usual way we measure percentages.

The red curve is the symmetric percentage method.

The blue curve is the logarithmic method.

For percentages within +/- 10%, all three methods are about the same. Then both new methods give about the same answer all the way up to changes of +/- 40%. Since most real changes in economics are within that range, the symmetric method and the logarithmic method are basically interchangeable.

However, for very large changes, even these two methods diverge, and in my opinion the logarithm is to be preferred.

[Figure: percentage_change_large, comparing the three measures for large changes]

The symmetric percentage never gets above 200% or below -200%, while the logarithm is unbounded in both directions.

If you lose everything, the old system would say you have lost 100%. The symmetric system would say you have lost 200%. The logarithmic system would say you have lost infinity log points. If infinity seems a bit too extreme, think of it this way: You have in fact lost everything. No finite proportional gain can ever bring it back. A loss that requires a gain of infinity percent seems like it should be called a loss of infinity percent, doesn’t it? Under the logarithmic system it is.

If you gain an infinite amount, the old system would say you have gained infinity percent. The logarithmic system would also say that you have gained infinity log points. But the symmetric percentage system would say that you have gained 200%. 200%? Counter-intuitive, to say the least.

Log points also have another very nice property that neither the usual system nor the symmetric percentage system has: You can add them.

If you gain 25 log points, lose 15 log points, then gain 10 log points, you have gained 20 log points.

25 – 15 + 10 = 20

Just as you’d expect!

But if you gain 25%, then lose 15%, and then gain 10%, you have gained… 16.9%.

(1 + 0.25)*(1 – 0.15)*(1 + 0.10) = 1.169
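A quick numerical check of both claims:

```python
import math

# Ordinary percentages compound rather than add:
print(100 * (1.25 * 0.85 * 1.10 - 1))  # 16.87...: about 16.9%, not 20%

# Log points add, because multiplying ratios adds their logarithms:
ratio = math.exp(0.25) * math.exp(-0.15) * math.exp(0.10)
print(100 * math.log(ratio))           # 20.0 log points (up to floating point)
```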

If you gain 25% symmetric, lose 15% symmetric, then gain 10% symmetric, that calculation is really a pain. To find the value y that is p symmetric percentage points from the starting value x, you end up needing to solve this equation:

p = 100 * (y – x)/((x+y)/2)

This can be done; it comes out like this:

y = (200 + p)/(200 – p) * x

(This also gives a bit of insight into why it is that the bounds are +/- 200%.)

So by chaining those, we can in fact find out what happens after gaining 25%, losing 15%, then gaining 10% in the symmetric system:

(200 + 25)/(200 – 25)*(200 – 15)/(200 + 15)*(200 + 10)/(200 – 10) = 1.223

Then we can put that back into the symmetric system:

100% * (1.223 – 1)/((1+1.223)/2) = 20.1%

So after all that work, we find out that you have gained 20.1% symmetric. We could almost just add them—because they are so similar to log points—but we can’t quite.
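Or, doing the chaining in code with the formula just derived:

```python
def apply_sym(p, x):
    # Move p symmetric percentage points from x: y = (200 + p)/(200 - p) * x
    return (200 + p) / (200 - p) * x

value = 1.0
for p in (25, -15, 10):
    value = apply_sym(p, value)

print(value)                                  # 1.2228 (the 1.223 above)
print(100 * (value - 1) / ((1 + value) / 2))  # 20.04...: the roughly 20.1% computed above
```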

Log points actually turn out to be really convenient, once you get the hang of them. The problem is that there’s a conceptual leap for most people to grasp what a logarithm is in the first place.

In particular, the hardest part to grasp is probably that a doubling is not 100 log points.

It is in fact 69 log points, because ln(2) = 0.69.

(Doubling in the symmetric percentage system is gaining 67%—much closer to the log points than to the usual percentage system.)

Calculation of the new value is a bit more difficult than in the usual system, but not as difficult as in the symmetric percentage system.

If you have a change of p log points from a starting point of x, the ending point y is:

y = e^{p/100} * x

The fact that you can add log points ultimately comes from the way exponents add:

e^{p1/100} * e^{p2/100} = e^{(p1+p2)/100}

Suppose US GDP grew 2% in 2007, then 0% in 2008, then fell 8% in 2009 and rose 4% in 2010 (this is approximately true). Where was it in 2010 relative to 2006? Who knows, right? It turns out to be a net loss of 2.4%; so if it was $15 T before, it’s now $14.64 T. If you had just added, you’d think it was only down 2%; you’d have underestimated the loss by about $60 billion.

But if it had grown 2 log points, then 0 log points, then fell 8 log points, then rose 4 log points, the answer is easy: It’s down 2 log points. If it was $15 T before, it’s now $14.70 T. Adding gives the correct answer this time.
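The same comparison in code, with the growth figures above:

```python
import math

gdp = 15.0  # trillions of dollars, taken as the 2006 level

# Percentage changes compound:
for growth in (0.02, 0.00, -0.08, 0.04):
    gdp *= 1 + growth
print(gdp)  # 14.639...: down about 2.4%, not 2%

# Log-point changes just add: 2 + 0 - 8 + 4 = -2 log points.
print(15.0 * math.exp(-2 / 100))  # 14.703...: exactly "down 2 log points"
```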

Thus, instead of saying that the stock market fell 4.3%, we should say it fell 4.4 log points. Instead of saying that GDP is up 1.9%, we should say it is up 1.8 log points. For small changes it won’t even matter; if inflation is 1.4%, it is in fact also 1.4 log points. Log points are a bit harder to conceptualize; but they are symmetric and additive, which other methods are not.

Is this a matter of life and death on a global scale? No.

But I can’t write about those every day, now can I?