# Billionaires bear the burden of proof

Sep 15 JDN 2458743

A king sits atop a golden throne, surrounded by a thousand stacks of gold coins six feet high. A hundred starving peasants beseech him for just one gold coin each, so that they might buy enough food to eat and clothes for the winter. The king responds: “How dare you take my hard-earned money!”

This is essentially the world we live in today. I really cannot emphasize enough how astonishingly, horrifically, mind-bogglingly rich billionaires are. I am writing this sentence at 13:00 PDT on September 8, 2019. A thousand seconds ago was 12:43, about when I started this post. A million seconds ago was Wednesday, August 28. A billion seconds ago was 1987. I will be a billion seconds old this October.

Jeff Bezos has \$170 billion. 170 billion seconds ago was a thousand years before the construction of the Great Pyramid. To get as much money as he has gaining one dollar per second (that’s \$3600 an hour!), Jeff Bezos would have had to work for as long as human civilization has existed.

At a more sensible wage like \$30 per hour (still better than most people get), how long would it take to amass \$170 billion? Oh, just about 600,000 years of working around the clock, every hour of every day—or about twice the length of time that Homo sapiens has existed on Earth.
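The arithmetic is easy to check. A minimal Python sketch, using the post's own assumptions (a \$170 billion fortune, \$30 an hour, working around the clock with no breaks):

```python
# All inputs are the post's own assumptions.
fortune = 170e9          # dollars
wage = 30.0              # dollars per hour

hours = fortune / wage           # ~5.7 billion hours of work
years = hours / (24 * 365)       # working 24/7, no sleep, no weekends
print(f"{years:,.0f} years")     # roughly 650,000 years
```

Even rounding generously, the answer stays in the hundreds of thousands of years.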

How does this compare to my fictional king with a thousand stacks of gold? A typical gold coin is worth about \$500, depending on its age and condition. Coins are about 2 millimeters thick. So a thousand stacks, each 2 meters high, would be about \$500 × 1,000 × 1,000 = \$500 million. This king isn’t even a billionaire! Jeff Bezos has three hundred times as much as him.

Coins are about 30 millimeters in diameter, so assuming they are packed in neat rows, these thousand stacks of gold coins would fill a square about 0.9 meters to a side—in our silly Imperial units, that’s 3 feet wide, 3 feet deep, 6 feet tall. If Jeff Bezos’s stock portfolio were liquidated into gold coins (which would require about 2% of the world’s entire gold supply and surely tank the market), the neat rows of coins stacked a thousand high would fill a square over 16 meters to a side—that’s a 50-foot-wide block of gold coins. Smaug’s hoard in The Hobbit was probably about the same amount of money as what Jeff Bezos has.
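The geometry is just as easy to verify. A quick sketch using the same assumed coin dimensions (\$500 per coin, 2 mm thick, 30 mm across, stacks of 1,000 coins):

```python
import math

# All figures are the post's assumptions about a "typical" gold coin.
coin_value = 500        # dollars per coin
coin_diameter = 0.030   # meters (30 mm)
coins_per_stack = 1000  # a 2 m stack of 2 mm coins

def hoard_side(total_dollars):
    """Side length (m) of a square block of 2 m stacks worth total_dollars."""
    coins = total_dollars / coin_value
    stacks = coins / coins_per_stack
    return math.sqrt(stacks) * coin_diameter

king_side = hoard_side(500e6)    # the king's hoard: ~0.9 m
bezos_side = hoard_side(170e9)   # Bezos's hoard: ~17.5 m
print(king_side, bezos_side)
```

The king's hoard fits in a closet; Bezos's fills a small building.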

And yet, somehow there are still people who believe that he deserves this money, that he earned it, that to take even a fraction of it away would be a crime tantamount to theft or even slavery.

Their arguments can be quite seductive: How would you feel about the government taking your hard-earned money? Entrepreneurs are brilliant, dedicated, hard-working people; why shouldn’t they be rewarded? What crime do CEOs commit by selling products at low prices?

The way to cut through these arguments is to never lose sight of the numbers. In defense of a man who had \$5 million or even \$20 million, such an argument might make sense. I can imagine how someone could contribute enough to humanity to legitimately deserve \$20 million. I can understand how a talented person might work hard enough to earn \$5 million. But it’s simply not possible for any human being to be so brilliant, so dedicated, so hard-working, or make such a contribution to the world, that they deserve to have more dollars than there have been seconds since the Great Pyramid.

It’s not necessary to find specific unethical behaviors that brought a billionaire to where he (and yes, it’s nearly always he) is. They are generally there to be found: At best, one becomes a billionaire by sheer luck. Typically, one becomes a billionaire by exerting monopoly power. At worst, one can become a billionaire by ruthless exploitation or even mass murder. But it’s not our responsibility to point out a specific crime for every specific billionaire.

The burden of proof is on billionaires: Explain how you can possibly deserve that much money.

It’s not enough to point to some good things you did, or emphasize what a bold innovator you are: You need to explain what you did that was so good that it deserves to be rewarded with Smaug-level hoards of wealth. Did you save the world from a catastrophic plague? Did you end world hunger? Did you personally prevent a global nuclear war? I could almost see the case for Norman Borlaug or Jonas Salk earning a billion dollars (neither did, by the way). But Jeff Bezos? You didn’t save the world. You made a company that sells things cheaply and ships them quickly. Get over yourself.

Where exactly do we draw that line? That’s a fair question. \$20 million? \$100 million? \$500 million? Maybe there shouldn’t even be a hard cap. There are many other approaches we could take to reducing this staggering inequality. Previously I have proposed a tax system that gets continuously more progressive forever, as well as a CEO compensation cap based on the pay of the lowliest employees. We could impose a wealth tax, as Elizabeth Warren has proposed. Or we could simply raise the top marginal rate on income tax to something more like what it was in the 1960s. Or as Republicans today would call it, radical socialism.

# What is the processing power of the human brain?

JDN 2457485

Futurists have been predicting that AI will “surpass humans” any day now for something like 50 years. Eventually they’ll be right, but it will be more or less purely by chance, since they’ve been making the same prediction longer than I’ve been alive. (Similarly, whenever someone projects the date at which immortality will be invented, it always seems to coincide with just slightly before the end of the author’s projected life expectancy.) Any technology that is “20 years away” will be so indefinitely.

There are a lot of reasons why this prediction keeps failing so miserably. One is an apparent failure to grasp the limitations of exponential growth. I actually think the most important is that a lot of AI fans don’t seem to understand how human cognition actually works—that it is primarily social cognition, where most of the processing has already been done and given to us as cached results, some of them derived centuries before we were born. We are smart enough to run a civilization with airplanes and the Internet not because any individual human is so much smarter than any other animal, but because all humans together are—and other animals haven’t quite figured out how to unite their cognition in the same way. We’re about 3 times smarter than any other animal as individuals—and several billion times smarter when we put our heads together.

A third reason is that even if you have sufficient computing power, that is surprisingly unimportant; what you really need are good heuristics to make use of your computing power efficiently. Any nontrivial problem is too complex to brute-force by any conceivable computer, so simply increasing computing power without improving your heuristics will get you nowhere. Conversely, if you have really good heuristics like the human brain does, you don’t even need all that much computing power. A chess grandmaster was once asked how many moves ahead he can see on the board, and he replied: “I only see one move ahead. The right one.” In cognitive science terms, people asked him how much computing power he was using, expecting him to say something far beyond normal human capacity, and he replied that he was using hardly any—it was all baked into the heuristics he had learned from years of training and practice.

Making an AI capable of human thought—a true artificial person—will require a level of computing power we can already reach (as long as we use huge supercomputers), but that is like having the right material. To really create the being we will need to embed the proper heuristics. We are trying to make David, and we have finally mined enough marble—now all we need is Michelangelo.

But another reason why so many futurists have failed in their projections is that they have wildly underestimated the computing power of the human brain. Reading 1980s cyberpunk is hilarious in hindsight; Neuromancer actually quite accurately projected the number of megabytes that would flow through the Internet at any given moment, but somehow thought that a few hundred megaflops would be enough to copy human consciousness. The processing power of the human brain is actually on the order of a few petaflops. So, you know, Gibson was only off by a factor of a few million.

We can now match petaflops—the world’s fastest supercomputer is actually about 30 petaflops. Of course, it cost hundreds of millions of dollars to build, and requires 24 megawatts to run and cool, which is about the output of a mid-sized solar power station. The human brain consumes only about 400 kcal per day, which is about 20 watts—roughly the consumption of a typical CFL lightbulb. Even if you count the rest of the human body as necessary to run the human brain (which I guess is sort of true), we’re still clocking in at about 100 watts—so even though supercomputers can now process at the same speed, our brains are still hundreds of thousands of times as energy-efficient.
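The wattage conversion and the efficiency ratio can be sketched in a few lines (using the post's figures: 400 kcal/day for the brain, ~100 W for the whole body, 24 MW for the supercomputer):

```python
# 400 kcal/day converted to watts: kcal -> joules, divided by seconds/day.
kcal_per_day = 400
watts_brain = kcal_per_day * 4184 / 86400   # ~19 W

supercomputer_watts = 24e6   # 24 MW to run and cool
watts_body = 100             # counting the whole body

print(watts_brain)                          # ~19.4 W
print(supercomputer_watts / watts_body)     # ~240,000x more efficient
print(supercomputer_watts / watts_brain)    # ~1.2 million x, brain alone
```

So "hundreds of thousands of times" counts the whole body; the brain alone clears a factor of a million.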

How do I know it’s a few petaflops?

Earlier this year a study was published showing that a conservative lower bound for the total capacity of human memory is about 4 bits per synapse, where previously some scientists thought that each synapse might carry only 1 bit (I’ve always suspected it was more like 10 myself).

So then we need to figure out how many synapses we have… which turns out to be really difficult actually. They are in a constant state of flux, growing, shrinking, and moving all the time; and when we die they fade away almost immediately (reason #3 I’m skeptical of cryonics). We know that we have about 100 billion neurons, and each one can have anywhere between 100 and 15,000 synapses with other neurons. The average seems to be something like 5,000 (but highly skewed in a power-law distribution), so that’s about 500 trillion synapses. If each one is carrying 4 bits to be as conservative as possible, that’s a total storage capacity of about 2 quadrillion bits, which is about 0.2 petabytes.
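Putting those estimates together (100 billion neurons, ~5,000 synapses each, 4 bits per synapse as the conservative bound):

```python
# All inputs are the estimates from the text above.
neurons = 100e9
synapses_per_neuron = 5000   # skewed power-law average
bits_per_synapse = 4         # conservative lower bound

synapses = neurons * synapses_per_neuron    # ~500 trillion synapses
bits = synapses * bits_per_synapse          # ~2 quadrillion bits
petabytes = bits / 8 / 1e15                 # ~0.25 PB
print(petabytes)
```

Which is where the ~0.2 petabyte figure comes from.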

Of course, that’s assuming that our brains store information the same way as a computer—every bit flipped independently, each bit stored forever. Not even close. Human memory is constantly compressing and decompressing data, using a compression scheme that’s lossy enough that we not only forget things, we can systematically misremember and even be implanted with false memories. That may seem like a bad thing, and in a sense it is; but if the compression scheme is that lossy, it must be because it’s also that efficient—that our brains are compressing away the vast majority of the data to make room for more. Our best lossy compression algorithms for video are about 100:1; but the human brain is clearly much better than that. Our core data format for long-term memory appears to be narrative; more or less we store everything not as audio or video (that’s short-term memory, and quite literally so), but as stories.

How much compression can you get by storing things as narrative? Think about The Lord of the Rings. The extended edition of the films runs to 6 discs of movie (9 discs of other stuff), where a Blu-Ray disc can store about 50 GB. So that’s 300 GB. Compressed into narrative form, we have the books (which, if you’ve read them, are clearly not optimally compressed—no, we do not need five paragraphs about the trees, and I’m gonna say it, Tom Bombadil is totally superfluous and Peter Jackson was right to remove him), which run about 500,000 words altogether. If the average word is 10 letters (normally it’s less than that, but this is Tolkien we’re talking about), each word will take up about 10 bytes (in ASCII or UTF-8, a letter is one byte). So altogether the total content of the entire trilogy, compressed into narrative, can be stored in about 5 million bytes, that is, 5 MB. So the compression from HD video to narrative takes us all the way from 300 GB to 5 MB, which is a factor of 60,000. Sixty thousand. I believe that this is the proper order of magnitude for the compression capability of the human brain.
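The Tolkien estimate, as a sketch:

```python
# The post's assumptions: 6 Blu-Ray discs of film vs. the trilogy as text.
video_bytes = 6 * 50e9           # 300 GB of HD video
words = 500_000                  # the trilogy's word count, roughly
bytes_per_word = 10              # generous: Tolkien-length words, 1 byte/letter

text_bytes = words * bytes_per_word   # 5 MB
ratio = video_bytes / text_bytes
print(ratio)                          # 60,000
```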

Even more interesting is the fact that the human brain is almost certainly in some sense holographic storage; damage to a small part of your brain does not produce highly selective memory loss as if you had some bad sectors of your hard drive, but rather an overall degradation of your total memory processing as if you in some sense stored everything everywhere—that is, holographically. How exactly this is accomplished by the brain is still very much an open question; it’s probably not literally a hologram in the quantum sense, but it definitely seems to function like a hologram. (Although… if the human brain is a quantum computer that would explain an awful lot—it especially helps with the binding problem. The problem is explaining how a biological system at 37 C can possibly maintain the necessary quantum coherences.) The data storage capacity of holograms is substantially larger than what can be achieved by conventional means—and furthermore has similar properties to human memory in that you can more or less always add more, but then what you had before gradually gets degraded. Since neural nets are much closer to the actual mechanics of the brain as we know them, understanding human memory will probably involve finding ways to simulate holographic storage with neural nets.

With these facts in mind, the amount of information we can usefully take in and store is probably not 0.2 petabytes—it’s probably more like 10 exabytes. The human brain can probably hold just about as much as the NSA’s National Cybersecurity Initiative Data Center in Utah, which is itself more or less designed to contain the Internet. (The NSA is at once awesome and terrifying.)

But okay, maybe that’s not fair if we’re comparing human brains to computers; even if you can compress all your data by a factor of 100,000, that isn’t the same thing as having 100,000 times as much storage.

So let’s use that smaller figure, 0.2 petabytes. That’s how much we can store; how much can we process?

The next thing to understand is that our processing architecture is fundamentally different from that of computers.

Computers generally have far more storage than they have processing power, because they are bottlenecked through a CPU that can only process one thing at once (okay, like 8 things at once with a hyperthreaded quad-core; as you’ll see in a moment this is a trivial difference). So it’s typical for a new computer these days to have processing power in gigaflops (it’s usually reported in gigahertz, but that’s kind of silly: hertz just tells you clock cycles, while what you really want to know is calculations—and that you get from flops; the numbers are generally pretty comparable, though), while they have storage in terabytes—meaning that it would take about 1,000 seconds (about 17 minutes) for the computer to process everything in its entire storage once. In fact it would take a good deal longer than that, because there are further bottlenecks in memory access, especially from hard-disk drives (RAM and solid-state drives are faster, but would still slow it down to a couple of hours).
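The 17-minute figure is just the ratio of the two numbers. A sketch, assuming the round figures above (1 gigaflop of processing, 1 terabyte of storage, one byte handled per operation, and no memory-access bottlenecks):

```python
# Idealized: ignores memory bandwidth, which makes the real number worse.
flops = 1e9        # ~1 gigaflop per second
storage = 1e12     # ~1 terabyte, one byte per operation

seconds = storage / flops
minutes = seconds / 60
print(minutes)     # ~17 minutes
```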

The human brain, by contrast, integrates processing and memory into the same system. There is no clear distinction between “memory synapses” and “processing synapses”, and no single CPU bottleneck that everything has to go through. There is however something like a “clock cycle” as it turns out; synaptic firings are synchronized across several different “rhythms”, the fastest of which is about 30 Hz. No, not 30 GHz, not 30 MHz, not even 30 kHz; 30 hertz. Compared to the blazing speed of billions of cycles per second that goes on in our computers, the 30 cycles per second our brains are capable of may seem bafflingly slow. (Even more bafflingly slow is the speed of nerve conduction, which is not limited by the speed of light as you might expect, but is actually less than the speed of sound. When you trigger the knee-jerk reflex doctors often test, it takes about a tenth of a second for the reflex to happen—not because your body is waiting for anything, but because it simply takes that long for the signal to travel to your spinal cord and back.)

The reason we can function at all is because of our much more efficient architecture; instead of passing everything through a single bottleneck, we do all of our processing in parallel. All of those 100 billion neurons with 500 trillion synapses storing 2 quadrillion bits work simultaneously. So whereas a computer does 8 things at a time, 3 billion times per second, a human brain does 2 quadrillion things at a time, 30 times per second. Provided that the tasks can be fully parallelized (vision, yes; arithmetic, no), a human brain can therefore process 60 quadrillion bits per second—which, if we count roughly ten bits as one calculation, turns out to be just over 6 petaflops, somewhere around 6,000,000,000,000,000 calculations per second.
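The final throughput estimate in one place. The ten-bits-per-calculation conversion is an assumption on my part to reconcile the bit rate with the petaflop figure; at eight bits per calculation you get ~7.5 petaflops instead, which is the same order of magnitude either way:

```python
# Figures from the text: every synapse-bit updated on every cycle
# of the fastest (~30 Hz) synchronized brain rhythm.
bits = 2e15               # ~2 quadrillion bits across all synapses
hz = 30                   # fastest rhythm, cycles per second

bits_per_second = bits * hz         # 6e16 bits/s
flops = bits_per_second / 10        # ASSUMED: ~10 bits per calculation
print(flops)                        # ~6e15: a few petaflops
```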

So, like I said, a few petaflops.