Why are so many famous people so awful?

Oct 12 JDN 2460961

J.K. Rowling is a transphobic bigot. H.P. Lovecraft was an overt racist. Orson Scott Card is homophobic, and so was Frank Herbert. Robert Heinlein was a misogynist. Isaac Asimov was a serial groper and sexual harasser. Neil Gaiman has been credibly accused of multiple sexual assaults.

That’s just among sci-fi and fantasy authors whose work I admire. I could easily go on with lots of other famous people and lots of other serious allegations. (I suppose Bill Cosby and Roman Polanski seem like particularly apt examples.)

Some of these are worse than others; since they don’t seem to be guilty of any actual crimes, we might even cut some slack to Lovecraft, Herbert, and Heinlein for being products of their times. (It seems very hard to make that defense for Asimov and Gaiman, with Rowling and Card somewhere in between because they aren’t criminals, but ‘their time’ is now.)

There are of course exceptions: Among sci-fi authors, for instance, Ursula Le Guin, Becky Chambers, Alastair Reynolds, and Andy Weir all seem to be ethically unimpeachable. (As far as I know? To be honest, I still feel blindsided by Neil Gaiman.)

But there really does seem to be a pattern here:

Famous people are often bad people.

I guess I’m not quite sure what the baseline rate of being racist, sexist, or homophobic is (and frankly maybe it’s pretty high); but the baseline rate of committing multiple sexual assaults is definitely lower than the rate at which famous men get credibly accused of such.

Lord Acton famously remarked similarly:

Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority; still more when you superadd the tendency or the certainty of corruption by authority.

I think this account is wrong, however. Abraham Lincoln, Mahatma Gandhi, and Nelson Mandela were certainly powerful—and certainly flawed—but they do not seem corrupt to me. I don’t think that Gandhi beat his wife because he led the Indian National Congress, and Mandela supported terrorists precisely during the period when he had the least power and the fewest options. (It’s almost tautologically true that Lincoln couldn’t have suspended habeas corpus if he weren’t extremely powerful—but that doesn’t mean that it was the power that shaped his character.)

I don’t think the problem is that power corrupts. I think the problem is that the corrupt seek power, and are very good at obtaining it.

In fact, I think the reason that so many famous people are such awful people is that our society rewards being awful. People will flock to you if you are overconfident and good at self-promoting, and as long as they like your work, they don’t seem to mind who you hurt along the way; this makes a perfect recipe for rewarding narcissists and psychopaths with fame, fortune, and power.

If you doubt that this is the case:

How else do you explain Donald Trump?

The man has absolutely no redeeming qualities. He is incompetent, willfully ignorant, deeply incurious, arrogant, manipulative, and a pathological liar. He’s also a racist, misogynist, and admitted sexual assaulter. He has been doing everything in his power to prevent the release of the Epstein Files, which strongly suggests he has in fact sexually assaulted teenagers. He’s also a fascist, and now that he has consolidated power, he is rapidly pushing the United States toward becoming a fascist state—complete with masked men with guns who break into your home and carry you away without warrants or trials.

Yet tens of millions of Americans voted for him to become President of the United States—twice.

Basically, it seems that Trump said he was great, and they believed him. Simply projecting confidence—however utterly unearned that confidence might be—was good enough.

When it comes to the authors I started this post with, one might ask whether their writing talents were what brought them fame, independently of or in spite of their moral flaws. To some extent that is probably true. But we also don’t really know how good they are, compared to all the other writers whose work never got published or never got read. Especially during times—all too recently—when writers who were women, queer, or people of color simply couldn’t get their work published, who knows what genius we might have missed out on? The first Dune book is a masterpiece, but by the time we get to Heretics of Dune the series has definitely lost its luster; maybe there were other authors with better books that could have been published, but never were, because Herbert had the clout and the privilege and those authors didn’t.

I do think genuine merit has some correlation with success. But I think the correlation is much weaker than is commonly supposed. A lot of very obviously terrible and/or incompetent people are extremely successful in life. Many of them were born with advantages—certainly true of Elon Musk and Donald Trump—but not all of them.

Indeed, there are so many awful successful people that I am led to conclude that moral behavior has almost nothing to do with success. I don’t think people actively go out of their way to support authors, musicians, actors, business owners or politicians who are morally terrible; but it’s difficult for me to reject the hypothesis that they literally don’t care. Indeed, when evidence emerges that someone powerful is terrible, usually their supporters will desperately search for reasons why the allegations can’t be true, rather than seriously considering no longer supporting them.

I don’t know what to do about this.

I don’t know how to get people to believe allegations more, or care about them more; and that honestly seems easier than changing the fundamental structure of our society so that narcissists and psychopaths are no longer rewarded with power. The basic ways that we decide who gets jobs, who gets published, and who gets elected seem to be deeply, fundamentally broken; they are selecting all the wrong people, and our whole civilization is suffering the consequences.


We are so far from a just world that I honestly can’t see how to get there from here, or even how to move substantially closer.

But I think we still have to try.

Taylor Swift and the means of production

Oct 5 JDN 2460954

This post is one I’ve been meaning to write for a while, but current events keep taking precedence.

In 2023, Taylor Swift did something very interesting from an economic perspective, which turns out to have profound implications for our economic future.

She re-recorded an entire album and released it through a different record company.

The album was called 1989 (Taylor’s Version), and she created it because she had spent the previous four years fighting with Big Machine Records over the rights to her earlier work, including the original album 1989.

A Marxist might well say she seized the means of production! (How rich does she have to get before she becomes part of the bourgeoisie, I wonder? Is she already there, even though she’s one of a handful of billionaires who can truly say they were self-made?)

But really she did something even more interesting than that. It was more like she said:

“Seize the means of production? I am the means of production.”

Singing and songwriting are what is known as a human-capital-intensive industry. The most important factor of production is not land, or natural resources, or physical capital (yes, you need musical instruments, amplifiers, recording equipment and the like—but these are a small fraction of what it costs to get Taylor Swift for a concert), or even labor in the ordinary sense. It’s the so-called (and honestly poorly named) “human capital” that matters most.

A labor-intensive industry is one where you just need a lot of work to be done, but you can get essentially anyone to do it: Cleaning floors is labor-intensive. A lot of construction work is labor-intensive (though excavators and the like also make it capital-intensive).

No, for a human-capital-intensive industry, what you need is expertise or talent. You don’t need a lot of people doing back-breaking work; you need a few people who are very good at doing the specific thing you need to get done.

Taylor Swift was able to re-record and re-release her songs because the one factor of production that couldn’t be easily substituted was herself. Big Machine Records overplayed their hand; they thought they could control her because they owned the rights to her recordings. But she didn’t need her recordings; she could just sing the songs again.

But now I’m sure you’re wondering: So what?

Well, Taylor Swift’s story is, in large part, the story of us all.

For most of the 18th, 19th, and 20th centuries, human beings in developed countries saw a rapid increase in their standard of living.

Yes, a lot of countries got left behind until quite recently.

Yes, this process seems to have stalled in the 21st century, with “real GDP” continuing to rise but inequality and cost of living rising fast enough that most people don’t feel any richer (and I’ll get to why that may be the case in a moment).

But for millions of people, the gains were real, and substantial. What was it that brought about this change?

The story we are usually told is that it was capital; that as industries transitioned from labor-intensive to capital-intensive, worker productivity greatly increased, and this allowed us to increase our standard of living.

That’s part of the story. But it can’t be the whole thing.

Why not, you ask?

Because very few people actually own the capital.

When capital ownership is so heavily concentrated, any increases in productivity due to capital-intensive production can simply be captured by the rich people who own the capital. Competition was supposed to fix this, compelling them to raise wages to match productivity, but we often haven’t actually had competitive markets; we’ve had oligopolies that consolidate market power in a handful of corporations. We had Standard Oil before, and we have Microsoft now. (Did you know that Microsoft not only owns more than half the consumer operating system industry, but after acquiring Activision Blizzard, is now the largest video game company in the world?) In the presence of an oligopoly, the owners of the capital will reap the gains from capital-intensive productivity.

But standards of living did rise. So what happened?

The answer is that production didn’t just become capital-intensive. It became human-capital-intensive.

More and more jobs required skills that an average person didn’t have. This created incentives for expanding public education, making workers not just more productive, but also more aware of how things work and in a stronger bargaining position.

Today, it’s very clear that the most human-capital-intensive jobs—those of doctors, lawyers, researchers, and software developers—are the ones with the highest pay and the greatest social esteem. (I’m still not 100% sure why stock traders are so well-paid; it really isn’t that hard to be a stock trader. I could write you an algorithm in 50 lines of Python that would beat the average trader, mostly by buying ETFs. But they pretend to be human-capital-intensive by hiring Harvard grads, and they certainly pay as if they are.)
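That claim about beating the average trader is tongue-in-cheek, but the logic behind it—that passive ETF investing wins mostly by avoiding trading costs—can be shown with a toy simulation. This is a hypothetical sketch, not a real trading algorithm: the 7% mean return, 15% volatility, and 2% active-trading cost drag are illustrative assumptions, not market data.

```python
import random

def compare_strategies(years=30, mean_return=0.07, volatility=0.15,
                       active_drag=0.02, seed=42):
    """Toy model: a passive buy-and-hold ETF portfolio vs. an 'active'
    trader who faces the same market returns but loses a fixed annual
    drag to fees, spreads, and churn. All parameters are illustrative."""
    rng = random.Random(seed)
    passive = active = 1.0  # starting wealth, normalized to 1
    for _ in range(years):
        r = rng.gauss(mean_return, volatility)  # same market draw for both
        passive *= 1 + r                # the index fund keeps the market return
        active *= 1 + r - active_drag   # active trading pays the cost drag
    return passive, active

passive, active = compare_strategies()
# With identical market returns, the passive portfolio ends ahead:
# the only difference between the two is the cost drag.
```

The point is not that any particular simulation is realistic, but that the “skill” being paid for often amounts to little more than not subtracting trading costs from the market return.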

The most capital-intensive industries—like factory work—are reasonably well-paid, but not that well-paid, and actually seem to be rapidly disappearing as the capital simply replaces the workers. Factory worker productivity is now staggeringly high thanks to all this automation, but the workers themselves have captured only a small fraction of this increase as higher wages; by far the bigger effect has been increased profits for the capital owners and reduced employment in manufacturing.

And of course the real money is all in capital ownership. Elon Musk doesn’t have $400 billion because he’s a great engineer who works very hard. He has $400 billion because he owns a corporation that is extremely highly valued (indeed, clearly overvalued) in the stock market. Maybe being a great engineer or working very hard helped him get there, but it was neither necessary nor sufficient (and I’m sure that his dad’s emerald mine also helped).

Indeed, this is why I’m so worried about artificial intelligence.

Most forms of automation replace labor, in the conventional labor-intensive sense: Because you have factory robots, you need fewer factory workers; because you have mountaintop removal, you need fewer coal miners. It takes fewer people to do the same amount of work. But you still need people to plan and direct the process, and in fact those people need to be skilled experts in order to be effective—so there’s a complementarity between automation and human capital.

But AI doesn’t work like that. AI substitutes for human capital. It doesn’t just replace labor; it replaces expertise.

So far, AI is too unreliable to replace any but entry-level workers in human-capital-intensive industries (though there is some evidence it’s already doing that). But it will most likely get more reliable over time, if not via the current LLM paradigm, then through the next one that comes after. At some point, AI will come to replace experienced software developers, and then veteran doctors—and I don’t think we’ll be ready.

The long-term pattern here seems to be transitioning away from human-capital-intensive production to purely capital-intensive production. And if we don’t change the fact that capital ownership is heavily concentrated and so many of our markets are oligopolies—which we absolutely do not seem poised to do anything about; Democrats do next to nothing and Republicans actively and purposefully make it worse—then this transition will be a recipe for even more staggering inequality than before, where the rich will get even more spectacularly mind-bogglingly rich while the rest of us stagnate or even see our real standard of living fall.

The tech bros promise us that AI will bring about a utopian future, but that would only work if capital ownership were equally shared. If they continue to own all the AIs, they may get a utopia—but we sure won’t.

We can’t all be Taylor Swift. (And if AI music catches on, even she may not be for much longer.)

Reflections on the Charlie Kirk assassination

Sep 28 JDN 2460947

No doubt you are well aware that Charlie Kirk was shot and killed on September 10. His memorial service was held on September 21, and filled a stadium in Arizona.

There have been a lot of wildly different takes on this event. It’s enough to make you start questioning your own sanity. So while what I have to say may not be that different from what Krugman (or for that matter Jacobin) had to say, I still thought I would try to contribute to the small part of the conversation that’s setting the record straight.

First of all, let me say that this is clearly a political assassination, and as a matter of principle, that kind of thing should not be condoned in a democracy.

The whole point of a democratic system is that we don’t win by killing or silencing our opponents, we win by persuading or out-voting them. As long as someone is not engaging in speech acts that directly command or incite violence (like, say, inciting people to attack the Capitol), they should be allowed to speak in peace; even abhorrent views should not be met with violence.

Free speech isn’t just about government censorship (though that is also a major problem right now); it’s a moral principle that underlies the foundation of liberal democracy. We don’t resolve conflicts with violence unless absolutely necessary.

So I want to be absolutely clear about this: Killing Charlie Kirk was not acceptable, and the assassin should be tried in a court of law and, if duly convicted, imprisoned for a very long time.

Second of all, we still don’t know the assassin’s motive, so stop speculating until we do.

At first it looked like the killer was left-wing. Then it looked like maybe he was right-wing. Now it looks like maybe he’s left-wing again. Maybe his views aren’t easily categorized that way; maybe he’s an anarcho-capitalist, or an anarcho-communist, or a Scientologist. I won’t say it doesn’t matter; it clearly does matter. But we simply do not know yet.

There is an incredibly common and incredibly harmful thing that people do after any major crime: They start spreading rumors and speculating about things that we actually know next to nothing about. Stop it. Don’t contribute to that.


The whole reason we have a court system is to actually figure out the real truth, which takes a lot of time and effort. The courts are one American institution that’s actually still functioning pretty well in this horrific cyberpunk/Trumpistan era; let them do their job.

It could be months or years before we really fully understand what happened here. Accept that. You don’t need to know the answer right now, and it’s far more dangerous to think you know the answer when you actually don’t.

But finally, I need to point out that Charlie Kirk was an absolutely abhorrent, despicable husk of a human being and no one should be honoring him.

First of all, he himself advocated for political violence against his opponents. I won’t say anyone deserves what happened to him—but if anyone did, it would be him, because he specifically rallied his followers to do exactly this sort of thing to other people.

He was also bigoted in almost every conceivable way: Racist, sexist, ableist, homophobic, and of course transphobic. He maintained a McCarthy-esque list of college professors whom he encouraged people to harass for being too left-wing. He was a covert White supremacist, and only a little bit covert. And he was not covert at all about his blatant sexism and misogyny, which seemed like it came from the 1950s instead of the 2020s.

He encouraged his—predominantly White, male, straight, cisgender, middle-class—audience to hate every marginalized group you can think of: women, people of color, LGBT people, poor people, homeless people, people with disabilities. Not content to merely be an abhorrent psychopath himself, he actively campaigned against the concept of empathy.

Charlie Kirk deserves no honors. The world is better off without him. He made his entire career out of ruining the lives of innocent people and actively making the world a worse place.

It was wrong to kill Charlie Kirk. But if you’re sad he’s gone, what is wrong with you!?

For my mother, on her 79th birthday

Sep 21 JDN 2460940

When this post goes live, it will be my mother’s 79th birthday. I think birthdays are not a very happy time for her anymore.

I suppose nobody really likes getting older; children are excited to grow up, but once you hit about 25 or 26 (the age at which you can rent a car at the normal rate and the age at which you have to get your own health insurance, respectively) and it becomes “getting older” instead of “growing up”, the excitement rapidly wears off. Even by 30, I don’t think most people are very enthusiastic about their birthdays. Indeed, for some people, I think it might be downhill past 21—you wanted to become an adult, but you had no interest in aging beyond that point.

But I think it gets worse as you get older. As you get into your seventies and eighties, you begin to wonder which birthday will finally be your last; actually I think my mother has been wondering about this even earlier than that, because her brothers died in their fifties, her sister died in her sixties, and my father died at 63. At this point she has outlived a lot of people she loved. I think there is a survivor’s guilt that sets in: “Why do I get to keep going, when they didn’t?”

These are also very hard times in general; Trump and the people who enable him have done tremendous damage to our government, our society, and the world at large in a shockingly short amount of time. It feels like all the safeguards we were supposed to have suddenly collapsed and we gave free rein to a madman.

But while there are many loved ones we have lost, there are many we still have; nor need our set of loved ones be fixed, only to dwindle with each new funeral. We can meet new people, and they can become part of our lives. New children can be born into our family, and they can make our family grow. It is my sincere hope that my mother still has grandchildren yet to meet; in my case they would probably need to be adopted, as the usual biological route is pretty much out of the question, and surrogacy seems beyond our budget for the foreseeable future. But we would still love them, and she could still love them, and it is worth sticking around in this world in order to be a part of their lives.

I also believe that this is not the end for American liberal democracy. This is a terrible time, no doubt. Much that we thought would never happen already has, and more still will. It must be so unsettling, so uncanny, for someone who grew up in the triumphant years after America helped defeat fascism in Europe, to grow older and then see homegrown American fascism rise ascendant here. Even those of us who knew history all too well still seem doomed to repeat it.

At this point it is clear that victory over corruption, racism, and authoritarianism will not be easy, will not be swift, may never be permanent—and is not even guaranteed. But it is still possible. There is still enough hope left that we can and must keep fighting for an America worth saving. I do not know when we will win; I do not even know for certain that we will, in fact, win. But I believe we will.

I believe that while it seems powerful—and does everything it can to both promote that image and abuse what power it does have—fascism is a fundamentally weak system, a fundamentally fragile system, which simply cannot sustain itself once a handful of critical leaders are dead, deposed, or discredited. Liberal democracy is kinder, gentler—and also slower, at times even clumsier—than authoritarianism, and so it may seem weak to those whose view of strength is that of the savanna ape or the playground bully; but this is an illusion. Liberal democracy is fundamentally strong, fundamentally resilient. There is power in kindness, inclusion, and cooperation that the greedy and cruel cannot see. Fascism in Germany arrived and disappeared within a generation; democracy in America has stood for nearly 250 years.

We don’t know how much more time we have, Mom; none of us do. I have heard it said that you should live your life as though you will live both a short life and a long one; but honestly, you should probably live your life as though you will live a randomly-decided amount of time that is statistically predicted by actuarial tables—because you will. Yes, the older you get, the less time you have left (almost tautologically); but especially in this age of rapid technological change, none of us really know whether we’ll die tomorrow or live another hundred years.

I think right now, you feel like there isn’t much left to look forward to. But I promise you there is. Maybe it’s hard to see right now; indeed, maybe you—or I, or anyone—won’t even ever get to see it. But a brighter future is possible, and it’s worth it to keep going, especially if there’s any way that we might be able to make that brighter future happen sooner.

Passion projects and burnout

Sep 14 JDN 2460933

I have seen a shockingly precipitous decline in my depression and anxiety scores over the last couple of weeks, from average Burns scores of about 15 (depression) and 29 (anxiety) down to about 7 and 20. This represents a decline from “mild depression” and “moderate anxiety” to “normal but unhappy” and “mild anxiety”; but under the circumstances (Trump is still President, I’m still unemployed), I think it may literally mean a complete loss of pathological symptoms.

I’m not on any new medications. I did recently change therapists, but I don’t think this one is substantially better than the last one. My life situation hasn’t changed. The political situation in the United States is if anything getting worse. So what happened?

I found a passion project.

A month and a half ago, I started XBOX Game Camp, and was assigned to a team of game developers to make a game over the next three months (so we’re about halfway there now). I was anxious at first, because I have limited experience in video game development (a few game jams, some Twine games, and playing around with RenPy and Unity) and absolutely no formal training in it; but once we got organized, I found myself Lead Producer on the project and also the second-best programmer. I also got through a major learning curve barrier in Unreal Engine, which is what the team decided to use.

But that wasn’t my real passion project; instead, it enabled me to create one. With that boost in confidence and increased comfort with Unreal, I soon realized that, with the help of some free or cheap 3D assets from Fab and Sketchfab, I now had the tools I needed to make my own 3D video game all by myself—something that I would never have thought possible.

And having this chance to create more or less whatever I want (constrained by the availability of assets and my own programming skills, though both are far less constraining than I had previously believed) has had an extremely powerful effect on my mood. I not only feel less depression and anxiety, I also feel more excitement, more joie de vivre. I finally feel like I’m recovering from the years of burnout I got from academia.

That got me wondering: How unusual is this?

The empirical literature on burnout doesn’t generally talk about this; it’s mostly about conventional psychiatric interventions like medication and cognitive behavioral therapy. There are also some studies on mindfulness.

But there are more than a few sources of anecdotal reports and expert advice suggesting that passion projects can make a big difference. A lot of what burnout seems to be is disillusionment with your work, a loss of passion for it. Finding other work that you can be passionate about can go a long way toward fixing that problem.

Of course, like anything else, I’m sure this is no miracle cure. (Indeed, I’m feeling much worse today in particular, but I think that’s because I went through a grueling six-hour dental surgery yesterday—awake the whole time—and now I’m in pain and it was hard to sleep.) But it has made a big difference for me the last few weeks, so if you are going through anything similar, it might be worth a try to find a passion project of your own.

The AI bubble is going to crash hard

Sep 7 JDN 2460926

Based on the fact that it only sort of works and yet corps immediately put it in everything, I had long suspected that the current wave of AI was a bubble. But after reading Ed Zitron’s epic takedowns of the entire industry, I am not only convinced it’s a bubble; I’m convinced it is probably the worst bubble we’ve had in a very long time. This isn’t the dot-com crash; it’s worse.

The similarity to the dot-com crash is clear, however: This is a huge amount of hype over a new technology that genuinely could be a game-changer (the Internet certainly was!), but won’t be within the time horizon the most optimistic investors have assumed. The gap between “it sort of works” and “it radically changes our economy” is… pretty large, actually. It’s not something you close in a few years.


The headline figure here is that based on current projections, US corporations will have spent $560 billion on capital expenditure, for anticipated revenue of only $35 billion.

They won’t pay it off for 16 years!? That kind of payoff rate would make sense for large-scale physical infrastructure, like a hydroelectric dam. It absolutely does not make sense in an industry that is dependent upon cutting-edge technology that wears out fast and becomes obsolete even faster. They must think that revenue is going to increase to something much higher, very soon.
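As a quick sanity check, here is the naive payback arithmetic using the post’s figures (ignoring revenue growth, depreciation, and financing costs):

```python
capex = 560e9    # projected AI capital expenditure (figure from the post)
revenue = 35e9   # anticipated annual AI revenue (figure from the post)

payback_years = capex / revenue  # naive payback period: 560 / 35 = 16 years
# That is far longer than the few-year useful life typically assumed
# for the data-center GPUs that dominate this spending.
```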

The corps seem to be banking on the most optimistic view of AI: That it will soon—very soon—bring about a radical increase in productivity that brings GDP surging to new heights, or even a true Singularity where AI fundamentally changes the nature of human existence.

Given the kind of errors I’ve seen LLMs make when I tried to use them to find research papers or help me with tedious coding, this is definitely not what’s going to happen. Claude gives an impressive interview, and (with significant guidance and error-correction) it also managed pretty well at making some simple text-based games; but it often recommended papers to me that didn’t exist, and through further experimentation, I discovered that it could not write me a functional C++ GUI if its existence depended on it. Somewhere on the Internet I heard someone describe LLMs as answering not the question you asked directly, but the question, “What would a good answer to this question look like?” and that seems very accurate. It always gives an answer that looks valid—but not necessarily one that is valid.

AI will find some usefulness in certain industries, I’m sure; and maybe the next paradigm (or the one after that) will really, truly, effect a radical change on our society. (Right now the best thing to use LLMs for seems to be cheating at school—and it also seems to be the most common use. Not exactly the great breakthrough we were hoping for.) But LLMs are just not reliable enough to actually use for anything important, and sooner or later, most of the people using them are going to figure that out.

Of course, by the Efficient Roulette Hypothesis, it’s extremely difficult to predict exactly when a bubble will burst, and it could well be that NVIDIA stock will continue to grow at astronomical rates for several years yet—or it could be that the bubble bursts tomorrow and NVIDIA stock collapses, if not to worthless, then to far below its current price.

Krugman has an idea of what might be the point that bursts the bubble: Energy costs. There is a clear mismatch between the anticipated energy needs of these ever-growing data centers and the actual energy production we’ve been installing—especially now that Trump and his ilk have gutted subsidies for solar and wind power. That’s definitely something to watch out for.

But the really scary thing is that the AI bubble actually seems to be the only thing holding the US economy above water right now. It’s the reason why Trump’s terrible policies haven’t been as disastrous as economists predicted they would; our economy is being sustained by this enormous amount of capital investment.

US GDP is about $30 trillion right now, but $500 billion of that is just AI investment. That’s over 1.6%, and last quarter our annualized GDP growth rate was 3.3%—so roughly half of our GDP growth was just due to building more data centers that probably won’t even be profitable.
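The arithmetic behind that “roughly half” claim, using the post’s round numbers (a crude comparison of an investment level against a growth rate, not a proper growth accounting):

```python
gdp = 30e12            # approximate US GDP (figure from the post)
ai_investment = 500e9  # annual AI capital investment (figure from the post)
growth_rate = 0.033    # last quarter's annualized GDP growth (from the post)

ai_share = ai_investment / gdp  # about 0.0167, i.e. a bit over 1.6% of GDP
# If most of that investment is new spending rather than a substitute for
# other spending, it accounts for roughly ai_share / growth_rate, or about
# half, of the measured growth rate.
```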

Between that, the tariffs, the loss of immigrants, and rising energy costs, a crashing AI bubble could bring down the whole stock market with it.

So I guess what I’m saying is: Don’t believe the AI hype, and you might want to sell some stocks.

Grief, a rationalist perspective

Aug 31 JDN 2460919

This post goes live on the 8th anniversary of my father’s death. Thus it seems an appropriate time to write about grief—indeed, it’s somewhat difficult for me to think about much else.

Far too often, the only perspectives on grief we hear are religious ones. Often, these take the form of consolation: “He’s in a better place now.” “You’ll see him again someday.”

Rationalism doesn’t offer such consolations. Technically one can be an atheist and still believe in an afterlife; but rationalism is stronger than mere atheism. It requires that we believe in scientific facts, and the permanent end of consciousness at death is a scientific fact. We know from direct experiments and observations in neuroscience that a destroyed brain cannot think, feel, see, hear, or remember—when your brain shuts down, whatever you are now will be gone.

It is the Basic Fact of Cognitive Science: There is no soul but the brain.

Moreover, I think, deep down, we all know that death is the end. Even religious people grieve. Their words may say that their loved one is in a better place, but their tears tell a different story.

Maybe it’s an evolutionary instinct, programmed deep into our minds like an ancestral memory, a voice that screams in our minds, insistent on being heard:

“Death is bad!”

If there is one crucial instinct a lifeform needs in order to survive, surely it is something like that one: The preference for life over death. In order to live in a hostile world, you have to want to live.

There are some people who don’t want to live, people who become suicidal. Sometimes even the person we are grieving was someone who chose to take their own life. Generally this is because they believe that their life from then on would be defined only by suffering. Usually, I would say they are wrong about that; but in some cases, maybe they are right, and choosing death is rational. Most of the time, life is worth living, even when we can’t see that.

But aside from such extreme circumstances, most of us feel most of the time that death is one of the worst things that could happen to us or our loved ones. And it makes sense that we feel that way. It is right to feel that way. It is rational to feel that way.

This is why grief hurts so much.

This is why you are not okay.

If the afterlife were real—or even plausible—then grief would not hurt so much. A loved one dying would be like a loved one traveling away to somewhere nice; bittersweet perhaps, maybe even sad—but not devastating the way that grief is. You don’t hold a funeral for someone who just booked a one-way trip to Hawaii, even if you know they aren’t ever coming back.

Religion tries to be consoling, but it typically fails. Because that voice in our heads is still there, repeating endlessly: “Death is bad!” “Death is bad!” “Death is bad!”

But what if religion does give people some comfort in such a difficult time? What if supposing something as nonsensical as Heaven numbs the pain for a little while?

In my view, you’d be better off using drugs. Drugs have side effects and can be addictive, but at least they don’t require you to fundamentally abandon your ontology. Mainstream religion isn’t simply false; it’s absurd. It’s one of the falsest things anyone has ever believed about anything. It’s obviously false. It’s ridiculous. It has never deserved any of the respect and reverence it so often receives.

And in a great many cases, religion is evil. Religion teaches people to be obedient to authoritarians, and to oppress those who are different. Some of the greatest atrocities in history were committed in the name of religion, and some of the worst oppression going on today is done in the name of religion.

Rationalists should give religion no quarter. It is better for someone to find solace in alcohol or cannabis than for them to find solace in religion.

And maybe, in the end, it’s better if they don’t find solace at all.

Grief is good. Grief is healthy. Grief is what we should feel when something as terrible as death happens. That voice screaming “Death is bad!” is right, and we should listen to it.

What we need is not to be paralyzed by grief, destroyed by grief. We need to withstand our grief and get through it. We must learn to be strong enough to bear what seems unbearable, not to console ourselves with lies.

If you are a responsible adult, then when something terrible happens to you, you don’t pretend it isn’t real. You don’t conjure up a fantasy world in which everything is fine. You face your terrors. You learn to survive them. You make yourself strong enough to carry on. The death of a loved one is a terrible thing; you shouldn’t pretend otherwise. But it doesn’t have to destroy you. You can grow, and heal, and move on.

Moreover, grief has a noble purpose. From our grief we must find motivation to challenge death, to fight death wherever we find it. Those we have already lost are gone; it’s too late for them. But it’s not too late for the rest of us. We can keep fighting.

And through economic development and medical science, we do keep fighting.

In fact, little by little, we are winning the war on death.

Death has already lost its hold upon our children. For most of human history, nearly a third of children died before the age of 5. Now less than 1% do, in rich countries, and even in the poorest countries, it’s typically under 10%. With a little more development—development that is already happening in many places—we can soon bring everyone in the world to the high standard of the First World. We have basically won the war on infant and child mortality.

And death is losing its hold on the rest of us, too. Life expectancy at adulthood is also increasing, and more and more people are living into their nineties and even past a hundred.

It’s true, there still aren’t many people living to be 120 (and some researchers believe it will be a long time before this changes). But living to be 85 instead of 65 is already an extra 20 years of life—and these can be happy, healthy years too, not years of pain and suffering. They say that 60 is the new 50; physiologically, we are so much healthier than our ancestors that it’s as if we were ten years younger.

My sincere hope is that our grief for those we have lost and fear of losing those we still have will drive us forward to even greater progress in combating death. I believe that one day we will finally be able to slow, halt, perhaps even reverse aging itself, rendering us effectively immortal.

Religion promises us immortality, but it isn’t real.

Science offers us the possibility of immortality that’s real.

It won’t be easy to get there. It won’t happen any time soon. In all likelihood, we won’t live to see it ourselves. But one day, our descendants may achieve the grandest goal of all: Finally conquering death.

And even long before that glorious day, our lives are already being made longer and healthier by science. We are pushing death back, step by step, day by day. We are fighting, and we are winning.

Moreover, we as individuals are not powerless in this fight: you can fight death a little harder yourself, by becoming an organ donor, or by donating to organizations that fight global poverty or advance medical science. Let your grief drive you to help others, so that they don’t have to grieve as you do.

And if you need consolation from your grief, let it come from this truth: Death is rarer today than it was yesterday, and will be rarer still tomorrow. We can’t bring back those we have lost, but we can keep ourselves from losing more so soon.

Solving the student debt problem

Aug 24 JDN 2460912

A lot of people speak about student debt as a “crisis”, which makes it sound like the problem is urgent and will have severe consequences if we don’t soon intervene. I don’t think that’s right. While it’s miserable to be unable to pay your student loans, student loans don’t seem to be driving people to bankruptcy or homelessness the way that medical bills do.

Instead I think what we have here is a long-term problem, something that’s been building for a long time and will slowly but surely continue getting worse if we don’t change course. (I guess you can still call it a “crisis” if you want; climate change is also like this, and arguably a crisis.)

But there is a problem here: Student loan balances are rising much faster than other kinds of debt, and the burden falls worst on Black women and students who went to for-profit schools. A big part of the problem seems to be predatory schools that charge high prices and make big promises but deliver poor results.

Making all this worse is the fact that some of the most important income-based repayment plans were overturned by a federal court, forcing everyone who was on them into forbearance. Income-based repayment was a big reason why student loans actually weren’t as bad a burden as their high balances might suggest: unlike a personal loan or a mortgage, if you didn’t have enough income to repay your student loans in full, you could get on a plan that let you make smaller payments, and if you paid on that plan for long enough—even if it didn’t add up to the full balance—your loans would be forgiven.

Now the forbearance is ending for a lot of borrowers, and so they are going into default; and most of that loan forgiveness has been ruled illegal. (Supposedly this is because Congress didn’t approve it. I’ll believe that was the reason when the courts strike down Trump’s tariffs, which clearly have just as thin a legal justification and will cause far more harm to us and the rest of the world.)

In theory, student loans don’t really seem like a bad idea.

College is expensive, because it requires highly-trained professors, who demand high salaries. (The tuition money also goes other places, of course….)

College is valuable, because it provides you with knowledge and skills that can improve your life and also increase your long-term earnings. It’s a big difference: Median salary for someone with a college degree is about $60k, while median salary for someone with only a high school diploma is about $34k.

Most people don’t have enough liquidity to pay for college.

So, we provide loans, so that people can pay for college, and then when they make more money after graduating, they can pay the loans back.

That’s the theory, anyway.

The problem is that average or even median salaries obscure a lot of variation. Some college graduates become doctors, lawyers, or stockbrokers and make huge salaries. Others can’t find jobs at all. In the absence of income-based repayment plans, all students have to pay back their loans in full, regardless of their actual income after graduation.

There is inherent risk in trying to build a career. Our loan system—especially with the recent changes—puts most of this risk on the student. We treat it as their fault they can’t get a good job, and then punish them with loans they can’t afford to repay.

In fact, right now the job market is pretty bad for recent graduates—while usually unemployment for recent college grads is lower than that of the general population, since about 2018 it has actually been higher. (It’s no longer sky-high like it was during COVID; 4.8% is not bad in the scheme of things.)

The job market may be even worse than it looks, because the hiring rate is actually the lowest it has been since 2020. Our relatively low unemployment currently seems to reflect a lack of layoffs, not a healthy churn of people entering and leaving jobs. People seem to be locked into their jobs, and if they do leave them, finding another is quite difficult.

What I think we need is a system that makes the government take on more of the risk, instead of the students.

There are lots of ways to do this. Actually, the income-based repayment systems we used to have weren’t too bad.

But there is actually a way to do it without student loans at all. College could be free, paid for by taxes.


Now, I know what you’re thinking: Isn’t this unfair to people who didn’t go to college? Why should they have to pay?

Who said they were paying?

There could simply be a portion of the income tax that you only pay if you have a bachelor’s degree. Then you would only pay this tax if you both graduated from college and make a lot of money.

I don’t think this would create a strong incentive not to get a bachelor’s degree; the benefits of doing so remain quite large, even if your taxes were a bit higher as a result.
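To make that concrete, here’s a toy calculation. The 3% surtax rate is purely hypothetical, chosen for illustration; the salary figures are the medians quoted earlier:

```python
median_grad = 60_000   # median salary, bachelor's degree (from the text)
median_hs = 34_000     # median salary, high school only (from the text)
surtax_rate = 0.03     # hypothetical graduate surtax rate

surtax = median_grad * surtax_rate
premium_after_tax = (median_grad - surtax) - median_hs
print(f"Surtax paid: ${surtax:,.0f}")                              # $1,800
print(f"College premium after surtax: ${premium_after_tax:,.0f}")  # $24,200
```

Even after the hypothetical surtax, the median graduate keeps the overwhelming majority of the college earnings premium, so the incentive to get the degree survives.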

It might create incentives to major in subjects that aren’t as closely linked to higher earnings—liberal arts instead of engineering, medicine, law, or business. But this I see as fundamentally a public good: The world needs people with liberal arts education. If the market fails to provide for them, the government should step in.

This plan is not as progressive as Elizabeth Warren’s proposal to use wealth taxes to fund free college; but it might be more politically feasible. The argument that people who didn’t go to college shouldn’t have to pay for people who did actually seems reasonable to me; but this system would ensure that in fact they don’t.

The transfer of wealth here would be from people who went to college and make a lot of money to people who went to college and don’t make a lot of money. It would be the government bearing some of the financial risk of taking on a career in an uncertain world.

Conflict without shared reality

Aug 17 JDN 2460905

Donald Trump has federalized the police in Washington D.C. and deployed the National Guard. He claims he is doing this in response to a public safety emergency and crime that is “out of control”.

Crime rates in Washington, D.C. are declining and overall at their lowest level in 30 years. Its violent crime rate has not been this low since the 1960s.

By any objective standard, there is no emergency here. Crime in D.C. is not by any means out of control.

Indeed, across the United States, homicide rates are as low as they have been in 60 years.

But we do not live in a world where politics is based on objective truth.

We live in a world where the public perception of reality itself is shaped by the political narrative.

One of the first things that authoritarians do to control these narratives is try to make their followers distrust objective sources. I watch in disgust as not simply the Babylon Bee (which is a right-wing satire site that tries really hard to be funny but never quite manages it) but even the Atlantic (a mainstream news outlet generally considered credible) feeds—in multiple articles—into this dangerous lie that crime is increasing and the official statistics are somehow misleading us about that.

Of course the Atlantic’s take is much more nuanced; but quite frankly, now is not the time for nuance. A fascist is trying to take over our government, and he needs to be resisted at every turn by every means possible. You need to be calling him out on every single lie he tells—yes, every single one, I know there are a lot of them, and that’s kind of the point—rather than trying to find alternative framings on which maybe part of what he said could somehow be construed as reasonable from a certain point of view. Every time you make Trump sound more reasonable than he is—and mainstream news outlets have done this literally hundreds of times—you are pushing America closer to fascism.

I really don’t know what to do here.

It is impossible to resolve conflicts when they are not based on shared reality.

No policy can solve a crime wave that doesn’t exist. No trade agreement can stop unfair trading practices that aren’t happening. Nothing can stop vaccines from causing autism that they already don’t cause. There is no way to fix problems when those problems are completely imaginary.

I used to think that political conflict was about different values which had to be balanced against one another: Liberty versus security, efficiency versus equality, justice versus mercy. I thought that we all agreed on the basic facts and even most of the values, and were just disagreeing about how to weigh certain values over others.

Maybe I was simply naive; maybe it’s never been like that. But it certainly isn’t right now. We aren’t disagreeing about what should be done; we are disagreeing about what is happening in front of our eyes. We don’t simply have different priorities or even different values; it’s like we are living in different worlds.

I have read (from Jonathan Haidt, among others) that conservatives largely understand what liberals want, but liberals don’t really understand what conservatives want. (I would like to take one of the tests they use in these experiments and see how I actually do; but I’ve never been able to find one.)

Haidt’s particular argument seems to be that liberals don’t “understand” the “moral dimensions” of loyalty, authority, and sanctity, because we only “understand” harm and fairness as the basis of morality. But just because someone says something is morally relevant, that doesn’t mean it is morally relevant! And indeed, based on more or less the entirety of ethical philosophy, I can say that harm and fairness are morality, and the others simply aren’t. They are distortions of morality, they are inherently evil, and we are right to oppose them at every turn. Loyalty, authority, and sanctity are what fed Nazi Germany and the Spanish Inquisition.

This claim that liberals don’t understand conservatives has always seemed very odd to me: I feel like I have a pretty clear idea what conservatives want, it’s just that what they want is terrible: Kick out the immigrants, take money from the poor and give it to the rich, and put rich straight Christian White men back in charge of everything. (I mean, really, if that’s not what they want, why do they keep voting for people who do it? Revealed preferences, people!)

Or, more sympathetically: They want to go back to a nostalgia-tinted vision of the 1950s and 1960s in which it felt like things were going well for our country—because they were blissfully ignorant of all the violence and injustice in the world. No, thank you, Black people and queer people do not want to go back to how we were treated in the 1950s—when segregation was legal and Alan Turing was chemically castrated. (And they also don’t seem to grasp that among the things that did make some things go relatively well in that period were unions, antitrust law and progressive taxes, which conservatives now fight against at every turn.)

But I think maybe part of what’s actually happening here is that a lot of conservatives actually “want” things that literally don’t make sense, because they rest upon assumptions about the world that simply aren’t true.

They want to end “out of control” crime that is the lowest it’s been in decades.

They want to stop schools from teaching things that they already aren’t teaching.

They want the immigrants to stop bringing drugs and crime that they aren’t bringing.

They want LGBT people to stop converting their children, which we already don’t and couldn’t. (And then they want to do their own conversions in the other direction—which also don’t work, but cause tremendous harm.)

They want liberal professors to stop indoctrinating their students in ways we already aren’t and can’t. (If we could indoctrinate our students, don’t you think we’d at least make them read the syllabus?)

They want to cut government spending by eliminating “waste” and “fraud” that are trivial amounts, without cutting the things that are actually expensive, like Social Security, Medicare, and the military. They think we can balance the budget without cutting these things or raising taxes—which is just literally mathematically impossible.

They want to close off trade to bring back jobs that were sent offshore—but those jobs weren’t sent offshore, they were replaced by robots. (US manufacturing output is near its highest ever, even though manufacturing employment is half what it once was.)


And meanwhile, there’s a bunch of real problems that aren’t getting addressed: Soaring inequality, a dysfunctional healthcare system, climate change, the economic upheaval of AI—and they either don’t care about those, aren’t paying attention to them, or don’t even believe they exist.

It feels a bit like this:

You walk into a room and someone points a gun at you, shouting “Drop the weapon!” but you’re not carrying a weapon. And you show your hands, and try to explain that you don’t have a weapon, but they just keep shouting “Drop the weapon!” over and over again. Someone else has already convinced them that you have a weapon, and they expect you to drop that weapon, and nothing you say can change their mind about this.

What exactly should you do in that situation?

How do you avoid getting shot?

Do you drop something else and say it’s the weapon (make some kind of minor concession that looks vaguely like what they asked for)? Do you try to convince them that you have a right to the weapon (accept their false premise but try to negotiate around it)? Do you just run away (leave the country?)? Do you double down and try even harder to convince them that you really, truly, have no weapon?

I’m not saying that everyone on the left has a completely accurate picture of reality; there are clearly a lot of misconceptions on this side of the aisle as well. But at least among the mainstream center left, there seems to be a respect for objective statistics and a generally accurate perception of how the world works—the “reality-based community”. Sometimes liberals make mistakes, have bad ideas, or even tell lies; but I don’t hear a lot of liberals trying to fix problems that don’t exist or asking for the government budget to be changed in ways that violate basic arithmetic.

I really don’t know what to do here, though.

How do you change people’s minds when they won’t even agree on the basic facts?

On foxes and hedgehogs, part II

Aug 3 JDN 2460891

In last week’s post I described Philip E. Tetlock’s experiment showing that “foxes” (people who are open-minded and willing to consider alternative views) make more accurate predictions than “hedgehogs” (people who are dogmatic and conform strictly to a single ideology).

As I explained at the end of the post, he, uh, hedges on this point quite a bit, coming up with various ways that the hedgehogs might be able to redeem themselves, but still concluding that in most circumstances, the foxes seem to be more accurate.

Here are my thoughts on this:

I think he went too easy on the hedgehogs.

I consider myself very much a fox, and I would never assign a probability of 0% or 100% to any physically possible event. Honestly, I consider it a flaw in Tetlock’s design that he included those as options but didn’t include probabilities I would assign, like 1%, 0.1%, or 0.01%.

He only let people assign probabilities in 10% increments. So I guess if you thought something was 3% likely, you’re supposed to round to 0%? That still feels terrible. I’d probably still write 10%. There weren’t any questions like “Aliens from the Andromeda Galaxy arrive to conquer our planet, thus rendering all previous political conflicts moot”, but man, had there been, I’d still be tempted to not put 0%. I guess I would put 0% for that though? Because in 99.999999% of cases, I’d get it right—it wouldn’t happen—and I’d get more points. But man, even single-digit percentages? I’d mash the 10% button. I am pretty much allergic to overconfidence.

In fact, I think in my mind I basically try to use a logarithmic score, which unlike a Brier score, severely (technically, infinitely) punishes you for saying that something impossible happened or something inevitable didn’t. Like, really, if you’re doing it right, that should never, ever happen to you. If you assert that something has 0% probability and it happens, you have just conclusively disproven your worldview. (Admittedly it’s possible you could fix it with small changes—but a full discussion of that would get us philosophically too far afield. “outside the scope of this paper”.)

So I think he was too lenient on overconfidence by using a Brier score, which does penalize this kind of catastrophic overconfidence, but only by a moderate amount. If you say that something has a 0% chance and then it happens, you get a Brier score of -1. But if you say that something has a 50% chance and then it happens (which it would, you know, 50% of the time), you’d get a Brier score of -0.25. So even absurd overconfidence isn’t really penalized that badly.

Compare this to a logarithmic rule: Say 0% and it happens, and you get negative infinity. You lose. You fail. Go home. Your worldview is bad and you should feel bad. This should never happen to you if you have a coherent worldview (modulo the fact that he didn’t let you say 0.01%).

So if I had designed this experiment, I would have given finer-grained options at the extremes, and then brought the hammer down on anybody who actually asserted a 0% chance of an event that actually occurred. (There’s no need for the finer-grained options elsewhere; over millennia of history, the difference between 0% and 0.1% is whether it won’t happen or it will—quite relevant for, say, full-scale nuclear war—while the difference between 40% and 42.1% is whether it’ll happen every 2 to 3 years or… every 2 to 3 years.)
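The contrast between the two scoring rules can be sketched in a few lines, following the sign convention used above (scores are penalties, with 0 as the best possible):

```python
import math

def brier_penalty(p, outcome):
    # Negative squared error: 0 is a perfect forecast, -1 the worst possible.
    return -(outcome - p) ** 2

def log_penalty(p, outcome):
    # Log score: the log of the probability you assigned to what happened.
    q = p if outcome == 1 else 1 - p
    return math.log(q) if q > 0 else float("-inf")

# Suppose the event happens (outcome = 1):
for p in [0.0, 0.1, 0.5, 0.9]:
    print(f"p={p}: Brier={brier_penalty(p, 1):.2f}, log={log_penalty(p, 1):.2f}")
```

Under the Brier rule, a maximally confident miss (-1) costs only four times as much as a coin-flip guess (-0.25); under the log rule it costs infinitely more.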

But okay, let’s say we stick with the Brier score, because infinity is scary.

  1. About the adjustments:
    1. The “value adjustments” are just absolute nonsense. Those would be reasons to adjust your policy response, via your utility function—they are not a reason to adjust your probability. Yes, a nuclear terrorist attack would be a really big deal if it happened and we should definitely be taking steps to prevent that; but that doesn’t change the fact that the probability of one happening is something like 0.1% per year and none have ever happened. Predicting things that don’t happen is bad forecasting, even if the things you are predicting would be very important if they happened.
    2. The “difficulty adjustments” are sort of like applying a different scoring rule, so that I’m more okay with; but that wasn’t enough to make the hedgehogs look better than the foxes.
    3. The “fuzzy set” adjustments could be legitimate, but only under particular circumstances. Being “almost right” is only valid if you clearly showed that the result was anomalous because of some other unlikely event, and—because the timeframe was clearly specified in the questions—“might still happen” should still get fewer points than accurately predicting that it hasn’t happened yet. Moreover, it was very clear that people only ever applied these sort of changes when they got things wrong; they rarely if ever said things like “Oh, wow, I said that would happen and it did, but for completely different reasons that I didn’t expect—I was almost wrong there.” (Crazy example, but if the Soviet Union had been taken over by aliens, “the Soviet Union will fall” would be correct—but I don’t think you could really attribute that to good political prediction.)
  2. The second exercise shows that even the foxes are not great Bayesians, and that some manipulations can make people even more inaccurate than before; but the hedgehogs also perform worse and also make some of the same crazy mistakes and still perform worse overall than the foxes, even in that experiment.
  3. I guess he’d call me a “hardline neopositivist”? Because I think that an experiment asking people to predict things should require people to, um, actually predict things? The task was not to get the predictions wrong and then come up with clever excuses for why they were wrong that don’t challenge your worldview; the task was to not get the predictions wrong. Apparently this very basic level of scientific objectivity is now considered “hardline neopositivism”.

I guess we can reasonably acknowledge that making policy is about more than just prediction, and indeed maybe being consistent and decisive is advantageous in a game-theoretic sense (in much the same way that the way to win a game of Chicken is to very visibly throw away your steering wheel). So you could still make a case for why hedgehogs are good decision-makers or good leaders.

But I really don’t see how you weasel out of the fact that hedgehogs are really bad predictors. If I were running a corporation, or a government department, or an intelligence agency, I would want accurate predictions. I would not be interested in clever excuses or rich narratives. Maybe as leaders one must assemble such narratives in order to motivate people; so be it, there’s a division of labor there. Maybe I’d have a separate team of narrative-constructing hedgehogs to help me with PR or something. But the people who are actually analyzing the data should be people who are good at making accurate predictions, full stop.

And in fact, I don’t think hedgehogs are good decision-makers or good leaders. I think they are good politicians. I think they are good at getting people to follow them and believe what they say. But I do not think they are actually good at making the decisions that would be the best for society.

Indeed, I think this is a very serious problem.

I think we systematically elect people to higher office—and hire them for jobs, and approve them for tenure, and so on—because they express confidence rather than competence. We pick the people who believe in themselves the most, who (by regression to the mean if nothing else) are almost certainly the people who are most over-confident in themselves.

Given that confidence is easier to measure than competence in most areas, it might still make sense to choose confident people if confidence were really positively correlated with competence, but I’m not convinced that it is. I think part of what Tetlock is showing us is that the kind of cognitive style that yields high confidence—a hedgehog—simply is not the kind of cognitive style that yields accurate beliefs—a fox. People who are really good at their jobs are constantly questioning themselves, always open to new ideas and new evidence; but that also means that they hedge their bets, say “on the other hand” a lot, and often suffer from Impostor Syndrome. (Honestly, testing someone for Impostor Syndrome might be a better measure of competence than a traditional job interview! Then again, Goodhart’s Law.)

Indeed, I even see this effect within academic science; the best scientists I know are foxes through and through, but they’re never the ones getting published in top journals and invited to give keynote speeches at conferences. The “big names” are always hedgehog blowhards with some pet theory they developed in the 1980s that has failed to replicate but somehow still won’t die.

Moreover, I would guess that trustworthiness is actually pretty strongly inversely correlated to confidence—“con artist” is short for “confidence artist”, after all.

Then again, I tried to find rigorous research comparing openness (roughly speaking “fox-ness”) or humility to honesty, and it was surprisingly hard to find. Actually maybe the latter is just considered an obvious consensus in the literature, because there is a widely-used construct called honesty-humility. (In which case, yeah, my thinking on trustworthiness and confidence is an accepted fact among professional psychologists—but then, why don’t more people know that?)

But that still doesn’t tell me if there is any correlation between honesty-humility and openness.

I did find these studies showing that honesty-humility and openness are both positively correlated with well-being, both positively correlated with cooperation in experimental games, and both positively correlated with being left-wing; but that doesn’t actually prove they are positively correlated with each other. I guess it provides weak evidence in that direction, but only weak evidence. It’s entirely possible for A to be positively correlated with both B and C while B and C are uncorrelated or negatively correlated. (Living in Chicago is positively correlated with being a White Sox fan and positively correlated with being a Cubs fan, but being a White Sox fan is not positively correlated with being a Cubs fan!)
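The White Sox/Cubs point is easy to verify with a toy dataset (the eight people below are made up purely to illustrate the structure of the claim):

```python
def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Eight hypothetical people: 1 = yes, 0 = no.
chicago = [1, 1, 1, 1, 0, 0, 0, 0]   # lives in Chicago
sox     = [1, 1, 0, 0, 0, 0, 0, 0]   # White Sox fan
cubs    = [0, 0, 1, 1, 0, 0, 0, 0]   # Cubs fan

print(pearson(chicago, sox))   # positive
print(pearson(chicago, cubs))  # positive
print(pearson(sox, cubs))      # negative
```

Here living in Chicago is positively correlated with each fandom, while the two fandoms are negatively correlated with each other—exactly the pattern that makes “A correlates with B and with C” uninformative about B versus C.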

I also found studies showing that higher openness predicts less right-wing authoritarianism and higher honesty predicts less social conformity; but that wasn’t the question either.

Here’s a factor analysis specifically arguing for designing measures of honesty-humility so that they don’t correlate with other personality traits, so it can be seen as its own independent personality trait. There are some uncomfortable degrees of freedom in designing new personality metrics, which may make this sort of thing possible; and then by construction honesty-humility and openness would be uncorrelated, because any shared components were parceled out to one trait or the other.

So, I guess I can’t really confirm my suspicion here; maybe people who think like hedgehogs aren’t any less honest, or are even more honest, than people who think like foxes. But I’d still bet otherwise. My own life experience has been that foxes are honest and humble while hedgehogs are deceitful and arrogant.

Indeed, I believe that in systematically choosing confident hedgehogs as leaders, the world economy loses tens of trillions of dollars a year in inefficiencies. In fact, I think that we could probably end world hunger if we only ever put leaders in charge who were both competent and trustworthy.

Of course, in some sense that’s a pipe dream; we’re never going to get all good leaders, just as we’ll never get zero death or zero crime.

But based on how otherwise-similar countries have taken wildly different trajectories based on differences in leadership, I suspect that even relatively small changes in that direction could have quite large impacts on a society’s outcomes: South Korea isn’t perfect at picking its leaders; but surely it’s better than North Korea, and indeed that seems like one of the primary things that differentiates the two countries. Botswana is not a utopian paradise, but it’s a much nicer place to live than Nigeria, and a lot of the difference seems to come down to who is in charge, or who has been in charge for the last few decades.

And I could put in a jab here about the current state of the United States, but I’ll resist. If you read my blog, you already know my opinions on this matter.