Modeling the Human Trajectory

In arriving at our funding priorities—including criminal justice reform, farm animal welfare, pandemic preparedness, health-related science, and artificial intelligence safety—Open Philanthropy has pondered profound questions. How much should we care about people who will live far in the future? Or about chickens today? What events could extinguish civilization? Could artificial intelligence (AI) surpass human intelligence?

One strand of analysis that has caught our attention is about the pattern of growth of human society over many millennia, as measured by number of people or value of economic production. Perhaps the mathematical shape of the past tells us about the shape of the future. I dug into that subject. A draft of my technical paper is here. (Comments welcome.) In this post, I’ll explain in less technical language what I learned.

It’s extraordinary that the larger the human economy has become—the more people and the more goods and services they produce—the faster it has grown on average. Now, especially if you’re reading quickly, you might think you know what I mean. And you might be wrong, because I’m not referring to exponential growth. That happens when, for example, the number of people carrying a virus doubles every week. Then the growth rate (100% increase per week) holds fixed. The human economy has grown super-exponentially. The bigger it has gotten, the faster it has doubled, on average. The global economy churned out $74 trillion in goods and services in 2019, twice as much as in 2000.1 Such a quick doubling was unthinkable in the Middle Ages and ancient times. Perhaps our earliest doublings took millennia.

If global economic growth keeps accelerating, the future will differ from the present to a mind-boggling degree. The question is whether there might be some plausibility in such a prospect. That is what motivated my exploration of the mathematical patterns in the human past and how they could carry forward. Having now labored long on the task, I doubt I’ve gained much perspicacity. I did come to appreciate that any system whose rate of growth rises with its size is inherently unstable. The human future might be one of explosion, perhaps an economic upwelling that eclipses the industrial revolution as thoroughly as it eclipsed the agricultural revolution. Or the future could be one of implosion, in which environmental thresholds are crossed or the creative process that drives growth runs amok, as in an AI dystopia. More likely, these impulses will mix.

I now understand more fully a view that shapes the work of Open Philanthropy. The range of possible futures is wide. So it is our task as citizens and funders, at this moment of potential leverage, to lower the odds of bad paths and raise the odds of good ones.

The human past, coarsely quantified

Humans are better than viruses at multiplying. If a coronavirus particle sustains an advantageous mutation (lowering the virulence of the virus, one hopes), it cannot transmit that innovation to particles around the world. But humans have language, which is the medium of culture. When someone hits upon a new idea in science or political philosophy (lowering the virulence of humans, one hopes) that intellectual mutation can disseminate quickly. And some new ideas, such as the printing press and the World Wide Web, let other ideas spread even faster. Through most of human history, new insights about how to grow wheat or raise sheep ultimately translated into population increases. The material standard of living did not improve much and may even have declined. In the last century or so, the pattern has flipped. In most of the world, women are having fewer children while material standards are higher for many, enough that human economic activity, in aggregate, has continued to swell. When the global economy is larger, it has more capacity to innovate, and potentially to double even faster.

To the extent that superexponential growth is a good model for history, it comes with a strange corollary when projected into the future: the human system will go infinite in finite time. Cyberneticist Heinz Von Foerster and colleagues highlighted this implication in 1960. They graphed world population since the birth of Jesus, fit a line to the data, projected it, and foretold an Armageddon of infinite population in 2026. They evidently did so tongue in cheek, for they dated the end times to Friday the 13th of November. As we close in on 2026, the impossible prophecy is not looking more possible. In fact, the world population growth rate peaked at 2.1%/year in 1968 and has since fallen by half.

That a grand projection went off track so fast should instill humility in anyone trying to predict the human trajectory. And it’s fine to laugh at the absurdity of an infinite doomsday. Nevertheless, those responses seem incomplete. What should we make of the fact that good models of the past project an impossible future? While population growth has slowed, growth in aggregate economic activity has not slackened as much. Historically poor countries such as China are catching up with wealthier ones, adding to the global totals. Of course, there is only so much catching up to do. And economically important ideas may be getting harder to find. For instance, keeping up with Moore’s law of computer chip improvement is getting more expensive. But history records other slowdowns, each of which ended with a burst of innovation such as the European Enlightenment. Is this time different? It’s possible, to be sure. But it’s impossible to be sure.

Since 1960, when Von Foerster and colleagues published, other analysts have worked the same vein—now including me. I was influenced by writings of Michael Kremer in 1993 and Robin Hanson in 2000. Building on work by demographer Ronald Lee, Kremer brought ideas about “endogenous technology” (explained below) to population data like that of Von Foerster and his coauthors. Except Kremer’s population numbers went back not 2,000 years, but a million years. Hanson was the first to look at economic output, rather than population, over such a stretch, relying mainly on numbers from Brad De Long.

You might wonder how anyone knows how many people lived in 5000 BCE and how much “gross product” they produced. Scholars have formed rough ideas from the available evidence. Ancient China and Rome conducted censuses, for example. McEvedy and Jones, whose historical population figures are widely used, put it this way:

[T]here is something more to statements about the size of classical and early medieval populations than simple speculation….[W]e wouldn’t attempt to disguise the hypothetical nature of our treatment of the earlier periods. But we haven’t just pulled numbers out of the sky. Well, not often.

Meanwhile, until 1800 most people lived barely above subsistence; before then the story of GWP growth was mostly the story of population growth, which simplifies the task of estimating GWP through most of history.

I focused on GWP from 10,000 BCE to 2019. I chose GWP over population because I think economic product is a better indicator of capacity for innovation, which seems central to economic history. And I prefer to start in 10,000 BCE rather than 1 million or 2 million years ago because the numbers become especially conjectural that far back. In addition, it seems problematic to start before the evolution of language 40,000–50,000 years ago. Arguably, it was then that the development of human society took on its modern character. Before, hominins had developed technologies such as handaxes, intellectual mutations that may have spread no faster than the descendants of those who wrought them. After, innovations could diffuse through human language, a novel medium of arbitrary expressiveness—one built on a verbal “alphabet” whose letters could be strung together in limitless, meaningful ways. Human language is the first new, arbitrarily expressive medium on Earth since DNA.2

Here is the data series I studied the most3:

[Figure: GWP, 10,000 BCE–2019 (Roodman_GWP_10,000_BCE-2019_1.png)]
The series looks like a hockey stick. It starts at $1.6 billion in 10,000 BCE, in inflation-adjusted dollars of 1990: that is 4 million people times $400 per person per year, Angus Maddison’s quantification of subsistence living.

For clarity, here is the same graph but with $1 billion, $10 billion, $100 billion, etc., equally spaced. When the vertical axis is scaled this way, exponentially growing quantities—ones with fixed doubling times—follow straight lines. So to show how poorly human history corresponds to exponential growth, I’ve also drawn a best-fit line:

[Figure: GWP, 10,000 BCE–2019, logarithmic vertical axis, with best-fit exponential line (Roodman_GWP_10,000_BCE-2019_2.png)]
Finally, just as in that 1960 paper, I do something similar to the horizontal axis, so that 10,000, 1,000, 100, and 10 years before 2047 are equally spaced. (Below, I’ll explain how I chose 2047.) The horizontal stretching and compression changes the contour of the data once again. And it bends the line that represented exponential growth. But I’ve fit another line under the new scaling:

[Figure: GWP, 10,000 BCE–2019, both axes rescaled as described, with best-fit power law line (Roodman_GWP_10,000_BCE-2019_3.png)]
The new “power law” line follows the data points remarkably well. The most profound developments since language—the agricultural and industrial revolutions—shrink to gentle ripples on a long-term climb.

This graph raises two important questions. First, did those economic revolutions constitute major breaks with the past, which is how we usually think of them, or were they mere statistical noise within the longer-term pattern? Second, where does that straight line take us if we follow it forward?

I’ll tackle the second question here and return to the other later. I’ve already extended the line on the graph to 10 years before 2047, i.e., 2037, at which time it has GWP reaching a stupendous $500 trillion. That is ten times the level of 2007. If like Harold with his purple crayon you extend the line across your computer screen, off the edge, and into the ether, you will come to 1 year before 2047, then 0.1 before, then 0.01…. Meanwhile GWP will grow horrifically: to $30.7 quadrillion at the start of 2046, to $1.9 quintillion 11 months later, and so on. Striving to reach 2047, you will drive GWP to infinity.4 That was Von Foerster’s point back in 1960: explosion is an inevitable implication of the straight-line model of history in that last graph.
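For readers who want the arithmetic behind that claim, here is a minimal sketch, using the simplest deterministic form of superexponential growth rather than the full model in the paper. If the growth of GWP, $Y$, rises more than proportionally with its level,

$$\frac{dY}{dt} = a\,Y^{1+b}, \qquad a, b > 0,$$

then integrating from a starting level $Y_0$ gives

$$Y(t) = \left(Y_0^{-b} - a\,b\,t\right)^{-1/b} \propto (t^* - t)^{-1/b}, \qquad t^* = \frac{Y_0^{-b}}{a\,b}.$$

In words, $Y$ is a power of the time remaining until $t^*$. Plot the logarithm of GWP against the logarithm of time-until-$t^*$ and you get a straight line (the power law line above, with $t^*$ in the vicinity of 2047), and $Y$ climbs without bound as $t$ approaches $t^*$. Setting $b = 0$ recovers ordinary exponential growth, with no singularity.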

Yet the line fits so well. To grapple with this paradox, I took two main analytical approaches. I gained insight from each. But in the end the paradox essentially remained, and I think now that it is best interpreted in a non-mathematical way. I will discuss these ideas in turn.

Capturing the randomness of history

An old BBC documentary called The Midas Formula (transcript) tells how three economists in the early 1970s developed the E = mc² of finance. It is a way to estimate the value of options, such as the right to buy a stock at a set price by a set date. Fischer Black and Myron Scholes first arrived, tentatively, at the formula, then consulted Robert Merton. Watch till 27:40, then keep reading this post!

The BBC documented the work of Black, Scholes, and Merton not only because they discovered an important formula, but also because they co-founded the hedge fund Long-Term Capital Management to apply some of their ideas, and the fund imploded spectacularly in 1998.

In thinking about the evolution of GWP over thousands of years, I experienced something like what Merton experienced, except for the bits about winning a Nobel and almost bringing down the global financial system. I realized I needed a certain kind of math, then discovered that it exists and is called the Itô calculus.

The calculus of Isaac Newton and Gottfried Leibniz excels at describing smooth arcs, such as the path of Halley’s Comet. Like the rocket in the BBC documentary, the comet’s mathematical situation is always changing. As it boomerangs across the solar system, it experiences a smoothly varying pull from the sun, strongest at the perihelion, weakest when the comet is out beyond Neptune. If at some moment the comet is hurtling by the sun at 50 kilometers per second, then a second later, or a nanosecond later, it won’t be, not exactly. And the rate at which the comet’s speed is changing is itself always changing.

One way to approximate the comet’s path is to program a computer. We could feed in a starting position and velocity, code formulas for where the object will be a nanosecond later given its velocity now, update its velocity at the new location to account for the sun’s pull, and repeat. This method is widely used. The miracle in calculus lies in passing to the limit, treating paths through time and space as accumulations of infinitely many, infinitely small steps, which no computer could simulate because no computer is infinitely fast. Yet passing to the infinite limit often simplifies the math. For example, plotting the smooth lines and curves in the graphs above required no heavy-duty number crunching even though the contours represent growth processes in which the absolute increment, additional dollars of GWP, is always changing.
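For the programmers: the stepping scheme just described is essentially Euler's method. Here is a minimal sketch in Python, with toy units and a point mass pulled toward the origin by inverse-square gravity; nothing in it comes from the paper.

```python
import numpy as np

def simulate_orbit(pos, vel, gm=1.0, dt=1e-3, steps=50_000):
    """Crude Euler-style integration of a body attracted to the origin
    by inverse-square gravity. Units and constants are arbitrary."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    path = [pos.copy()]
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -gm * pos / r**3   # acceleration at the current position
        vel += acc * dt          # update velocity for the pull of gravity
        pos += vel * dt          # then take a small step at the new velocity
        path.append(pos.copy())
    return np.array(path)

path = simulate_orbit(pos=[1.0, 0.0], vel=[0.0, 0.8])  # an eccentric orbit
```

Shrink dt and the computed path approaches the smooth arc that the calculus describes exactly; no finite computer can take the limit of infinitely many, infinitely small steps.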

But classical calculus ignores randomness. It is great for modeling the fall of apples; not so much for the price of Apple. And not so much for rockets buffeted by turbulence, nor for the human trajectory, which has sustained shocks such as the fall of Rome, the Black Death, industrial take-off, world wars, depressions, and financial crises. It was Kiyosi Itô who in the mid-20th century, more than anyone else, found a way to infuse randomness into the calculus of Newton and Leibniz. The result is called the stochastic calculus, or the Itô calculus. (Though to listen to the BBC narrator, you’d think he invented the classical calculus rather than adding randomness to it.)

Think of an apple falling toward the surface of a planet whose gravity is perpetually, randomly fluctuating, jiggling the apple’s acceleration as it descends. Or think of a trillion molecules of dry ice vapor released to scatter and careen across a stage. Each drop of an apple or release of a molecule would initiate a unique course through space and time. We cannot predict the exact paths but we can estimate the distribution of possibilities. The apple, for example, might more likely land in the first second than in the 100th.

I devised a stochastic model for the evolution of GWP. I borrowed ideas from John Cox, who as a young Ph.D. followed in the footsteps of Black, Scholes, and Merton. The stochastic approach intrigued me because it can express the randomness of human history, including the way that unexpected events send ripples into the future. Also, for technical reasons, stochastic models are better for data series with unevenly spaced data points. (In my GWP data, the first two numbers are 5,000 years apart, for 10,000 and 5,000 BCE, while the last two are nine apart, for 2010 and 2019.) Finally, I hoped that a stochastic model would soften the paradox of infinity: perhaps after fitting to the data, it would imply that infinite GWP in finite time was possible but not inevitable.

The equation for this stochastic model generalizes that implied by the straight “power law” line in the third graph above, the one we followed toward infinity in 2047.5 It preserves the possibility that growth can rise more than proportionally with the level of GWP, so that doublings will tend to come faster and faster. Here, I’ll skip the equations and stick to graphs.

The first graph shows twenty “rollouts” of the model after it has been calibrated to match the GWP history.6 All twenty paths start where the real data series starts, at $1.6 billion in 10,000 BCE. The real GWP series is in red. Arguably the rollouts meet the Goldilocks test: they resemble the original data series, but not so perfectly as to look contrived. Each represents an alternative history of humanity. Like the real series, the rollouts experience random ups and downs, woven into an overall tendency to rise at a gathering pace. I think of the downs as statistical Black Deaths. The randomness suffices to greatly affect the timing of economic takeoff: one rollout explodes by 3000 BCE while others do not do so even by 5000 CE. In a path that explodes early, I imagine, the wheel was invented a thousand years sooner, and the breakthroughs snowballed from there.
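For readers who want a taste of the mechanics: here is a deliberately simplified stand-in, not the specification in my paper. It gives GWP a superexponential drift, a·Y^(1+b), plus multiplicative noise, and steps it forward with the Euler–Maruyama method, halting a path once it blows past a huge threshold. The parameter values are invented for illustration.

```python
import numpy as np

def rollout(y0=1.0, a=0.02, b=0.2, sigma=0.15, dt=1.0, steps=2_000,
            cap=1e12, rng=None):
    """One toy GWP path: superexponential drift a*Y**(1+b) plus
    multiplicative noise, via Euler-Maruyama. Parameters are invented."""
    rng = rng or np.random.default_rng()
    y, path = y0, [y0]
    for _ in range(steps):
        drift = a * y ** (1 + b) * dt
        shock = sigma * y * np.sqrt(dt) * rng.standard_normal()
        y = max(y + drift + shock, 1e-12)   # keep the path positive
        path.append(y)
        if y > cap:                          # treat this as "explosion"
            break
    return np.array(path)

paths = [rollout(rng=np.random.default_rng(seed)) for seed in range(20)]
```

Each call is one alternative history: the same underlying tendency to accelerate, a different sequence of shocks, and a very different takeoff date.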

[Figure: 20 simulated GWP paths from 10,000 BCE, actual GWP in red (BernouPathsGWP12KDecnovBlog.png)]
The second graph introduces a few changes. Instead of 20 rollouts, I run 10,000. Since that is too many to plot and perceive, I show percentiles. The black curve in the middle shows the median simulated GWP at each moment—the 50th percentile. Boundaries between grey bands mark the 5th, 10th, 15th, etc., percentiles. I also run 10,000 rollouts from the end of the data series, $73.6 trillion in 2019, and depict them in the same way. And to take account of the uncertainty in the fitting of my model to the data, each path is generated under a slightly different version of the model.7 So this graph contains two kinds of randomness: the randomness of history itself, and the imprecision in our measurement of it.8
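Continuing the toy sketch above, the grey bands are just pointwise percentiles across many such paths at each date. (For brevity this skips the second layer of randomness, drawing slightly different parameters for each path, though that would add only a few lines.)

```python
import numpy as np

def percentile_bands(paths, qs=(5, 25, 50, 75, 95)):
    """Pointwise percentiles across rollouts. Exploded paths are shorter,
    so they are padded with their final value, which makes the upper
    bands read as 'at least this large'."""
    horizon = max(len(p) for p in paths)
    padded = np.array([np.pad(p, (0, horizon - len(p)), mode="edge")
                       for p in paths])
    return {q: np.percentile(padded, q, axis=0) for q in qs}

bands = percentile_bands([rollout(rng=np.random.default_rng(s))
                          for s in range(10_000)])
```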

[Figure: percentiles of 10,000 simulated GWP paths, actual GWP in red (BernouDistGWP12KDecLogBlog.png)]
The actual GWP series, still in red, meanders mainly between the 40th and 60th percentiles. This good fit is the stochastic analog to the good fit of the power law line in the third graph in the earlier triplet. As a result, this model is the best statistical representation I have seen of world economic history, as proxied by GWP. That and a dollar will buy you an apple.

Through the Itô calculus, I quantified the probability and timing of escalation to infinity. The probability that a path like those in the first of the two graphs just above will not eventually explode is a mere 1 in 100 million. The median year of explosion is 1527. Applying the same calculations starting from 2019—that is, incorporating the knowledge that GWP reached $73.6 trillion last year—the probability of no eventual explosion falls to 1 in 10⁶⁹, which is a number-of-atoms-in-the-universe sort of figure. (OK, there may be more like 10⁸⁶ atoms. But who’s counting?) The estimate of the median explosion year sharpens to 2047 (95% confidence range is ±16 years), which is why I used that year in the third graph of the post. In the mathematical world of the best-fit model, explosion is all but inevitable by the end of the century.9

Incorporating randomness into the modeling does not after all soften the paradox of infinity. An even better mathematical description of the past still predicts an impossible future.

I will put that conundrum back on hold for the moment and address the other question inspired by the power law’s excellent fit to GWP history. Should the agricultural and industrial revolutions be viewed as ruptures in history or as routine, modest deviations around a longer-term trend? To assess whether GWP was surprisingly high in 1820, by which time the industrial revolution had built a head of steam, I fitted the model just to the data before 1820, i.e., through 1700. Then I generated many paths wiggling forward from 1700 to 1820. The 1820 GWP value of $741 billion places it in the 95th percentile of these simulated paths: the model is “surprised,” going by previous history, at how big GWP was in 1820. I repeat the whole exercise for other time points, back to 1600 and forward to 2019. This graph contains the results:

[Figure: percentile of actual GWP within the distribution predicted from earlier data, by year (BernouDiffPredGWP12KDecBlog.png)]
The model is also surprised by the next data point, for 1870, despite “knowing” about the fast GWP growth before 1820. And it is surprised again in 1913. Now, if my stochastic model for GWP is correct, then the 14 dots in this graph should be distributed roughly evenly across the 0–100% range, with no correlation from one dot to the next. That’s not what we see. The three dots in a row above the 90th percentile strongly suggest that the economic growth of the 19th century broke with the past. The same goes for the four low values since 1990: recent global growth has been slower and steadier than the model predicts from previous history.
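In essence, each dot is a rank: refit the model to the truncated sample, simulate many paths forward to the target date, and ask what share of the simulated values at that date fall below the actual one. The refitting is the hard part and is omitted here; the final step, sketched in the same toy spirit as above, is a one-liner:

```python
import numpy as np

def surprise_percentile(simulated_values, actual_value):
    """Share of simulated end-of-period values falling below the actual
    outcome: near 100 means the model is surprised by how big GWP got,
    near 0 by how small."""
    return 100.0 * np.mean(np.asarray(simulated_values) < actual_value)
```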

In sum, my stochastic model succeeds in expressing some of the randomness of history, along with the long-term propensity for growth to accelerate. But it is not accurate or flexible enough to fully accommodate events as large and sudden as the industrial revolution. Nevertheless, I think it is a virtue, and perhaps an inspiration for further work, that this rigorous model can quantify its own shortcomings.

Land, labor, capital, and more

To this point I have represented economic growth as univariate. A single quantity, GWP, determines the rate of its own growth, if with randomness folded in. I have radically caricatured human history—the billions of people who have lived, and how they have made their livings. That is how models work, simplifying matters in order to foreground aspects few enough for the mind to embrace.

A longstanding tradition in the study of economic growth is to move one notch in the direction of complexity, from one variable to several. Economic activity is cast as combining “factors of production.” Thus we have inherited from classical economists such as Adam Smith and David Ricardo the triumvirate of land, labor, and capital. Modern factor lists may include other ingredients, such as “human capital,” the investment in skills and education that can raise the value of one’s labor. A stimulus to one of these factors can boost economic output, which can be reinvested in some or all of the factors: more office buildings, more college degrees, more kids even. In this way, factors can propel their own growth and each other’s, in a richer version of the univariate feedback loop contemplated above. And just as in the univariate model that fits GWP history so well, the percentage growth rate of the economy can increase with output.10
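In symbols, a generic version of this setup (a textbook-style sketch, not the exact specification in my paper) is a production function over factor stocks, each stock partly replenished out of output:

$$Y = K^{\alpha} H^{\beta} L^{\gamma}, \qquad \dot K = s_K Y - \delta_K K, \quad \dot H = s_H Y - \delta_H H, \quad \dot L = s_L Y - \delta_L L,$$

where the $s$'s are reinvestment rates and the $\delta$'s are depreciation rates. Whether such a system settles into steady growth or takes off depends on the exponents: if output rises more than proportionally when all the accumulable factors rise together (here, if $\alpha + \beta + \gamma > 1$), growth feeds on itself ever faster.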

I studied multivariate models too,11 though I left for another day the technically daunting step of injecting them with randomness. I learned a few things.

First, the single-variable “power law” model—that straight line in my third graph up top—is, mathematically, a special case of standard models in economics, models that won at least one Nobel (for Robert Solow) and are taught to students every day somewhere on this Earth.12 In this sense, fitting the power law model to the GWP data and projecting forward is not as naive as it might appear.

To appreciate the concern about naiveté, think of the IHME model of the spread of coronavirus in the United States. It received much attention—including criticism that it is an atheoretical “curve-fitting” exercise. The IHME model worked by synthesizing a hump-shaped contour from the experiences of Wuhan and Italy, fitting the early section of the contour to U.S. data, then projecting forward. The IHME exercise did not try to mathematically reconstruct what underlay the U.S. data, the speed at which the virus hopped from person to person, community to community. If “the IHME projections are based not on transmission dynamics but on a statistical model with no epidemiologic basis,” the analogous charge cannot so easily be brought against the power law model for GWP. It is in a certain way rooted in established economics.

The second thing I learned constitutes a caveat that I just glossed over. By the mid-20th century, it became clear to economists that reinvestment alone had not generated the economic growth of the industrial era. Yes, there were more workers and factories, but from any given amount of labor and capital, industrial countries extracted more value in 1950 than in 1870. As Paul Romer put it in 1990,

The raw materials that we use have not changed, but…the instructions that we follow for combining raw materials have become vastly more sophisticated. One hundred years ago, all we could do to get visual stimulation from iron oxide was to use it as a pigment. Now we put it on plastic tape and use it to make videocassette recordings.

So in the 1950s economists inserted another input into their models: technology. As meant here, technology is knowledge rather than the physical manifestations thereof, the know-how to make a smartphone, not the phone itself.

The ethereal character of technology brings with it another distinctive trait: non-rivalry. One person’s use of a drill or farm plot tends to exclude others’ use of the same, while one person’s use of an idea does not. So a single discovery can raise the productivity of the entire global economy. I love Thomas Jefferson’s explanation, which I got from Charles Jones:

Its peculiar character … is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point.

That ideas can spread like flames from candle to candle seems to lie at the heart of the long-term speed-up of growth.

And the tendency to speed up, expressed in a short equation, is also what generates the strange, superexponential implication that economic output could spiral to infinity in decades. Yet that implication is not conventional within economics, unsurprisingly. Since the 1950s, macroeconomic modeling has emphasized the achievement of “steady state,” meaning a constant economic growth rate such as 3% per year. Granted, even such exponential growth seems implausible if we look far enough ahead, just as the coronavirus case count couldn’t keep doubling forever. But, in their favor, models predicting steady growth cohered with the relative stability of per-person economic growth over the previous century in industrial countries (contrasting with the acceleration we see over longer stretches). And under exponential growth the economy merely keeps expanding; it does not reach infinity in finite time. “It is one thing to say that a quantity will eventually exceed any bound,” Solow jabbed in 1994. “It is quite another to say that it will exceed any stated bound before Christmas.”

The power law model that fits history so well, yet explodes before Christmas, is mathematical kin with Solow’s influential models. So how did he avoid the explosive tendencies? To understand, step over to my whiteboard, where I’ll diagram a typical version of Solow’s model. The economy is conceived as a giant factory with four inputs: labor, capital, human capital, and technology. It produces output, much of which is immediately eaten, drunk, watched, or otherwise consumed:
[Diagram: production from labor, capital, human capital, and technology, with most output consumed (blog_economy_1.png)]
Some of the output is not consumed, and is instead invested in factors—here, the capital of businesses, and the human capital that is skills in our brains:
[Diagram: reinvestment of output into capital and human capital (blog_economy_2.png)]
A final dynamic is depreciation: factories wear out, skills fade. And the more there are, the more wear out each year. So just below I’ve drawn little purple loops to the left of these factors with minus signs inside them. Fortunately, the reinvestment flowing in through the orange arrows can compensate for depreciation by effecting repairs and refreshing skills.

Now, labor and technology can also depreciate, since workers age and die and innovations are occasionally lost too. But Solow put the sources of their replenishment outside his model. From the standpoint of the Solow model, they grow for opaque reasons. So they receive no orange arrows. And to convey their unexplained tendency to grow, I’ve drawn plus signs in their purple feedback loops:
[Diagram: depreciation loops added; labor and technology grow exogenously (blog_economy_3.png)]
In the language of economics, Solow made technology and labor exogenous. This choice constitutes another caveat to my claim that the power law model is rooted in standard economic models. For Solow, the choice had two virtues and a drawback. I’ll explain them with reference to technology, the more fateful factor.

One virtue was humility: it left for future research the mystery of what sets the pace of technological advance.13

The other virtue was that defaulting to the simple assumption that technology—the efficiency of turning inputs into output—improved at a constant rate such as 1% per year led to the comfortable prediction that a market economy would converge to a “steady state” of constant growth. It was as if economic output were a ship and technology its anchor; and as if the anchor were not heavy enough to moor the ship, but its abrasion against the seabed capped the ship’s speed. In effect, Solow built the desired outcome of constant growth into his model.
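In the standard textbook rendering (a sketch, with technology entering as labor-augmenting):

$$Y = K^{\alpha}\,(A L)^{1-\alpha}, \qquad \dot K = s\,Y - \delta K, \qquad \frac{\dot A}{A} = g, \quad \frac{\dot L}{L} = n,$$

with $g$ and $n$ fixed from outside. From any starting point the economy converges to a balanced-growth path on which output grows at the constant rate $g + n$: the anchor doing its work.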

In general, the drawback of casting technology as exogenous is that it leaves a story of long-term economic development incomplete. It does not explain or examine where technical advance comes from, nor its mathematical character, despite its centrality to history. On its face, taking the rate of technological advance as fixed implies, implausibly, that a society’s wealth has zero effect on its rate of technical advance. There is no orange arrow from Production to Technology. Yet in general, when societies become richer, they do invest more in research and development and other kinds of innovation. It was this observation that motivated Romer, among others, to reconfigure economic models to make technological advance endogenous (which eventually earned a Nobel too). Just as people can invest earnings into capital, people can invest in technology, not to mention labor (in the number, longevity, and health of workers). Making these links in the model merely requires writing the same equations for technology and labor as for capital. It is like drawing the sixth branch of a snowflake just like the other five. It looks like this:
[Diagram: reinvestment arrows to all four factors, technology and labor now endogenous (blog_economy_4.png)]
I discovered that when you do this—when you allow technology and all the other factors to affect economic output and be affected by it—the modeled system is unstable. (I was hardly the first to discover this.) As time passes, the amount of each factor either explodes to infinity in finite time or decays to zero in infinite time. And under broadly plausible (albeit rigid) assumptions about the rates at which that Production diamond transforms inputs into output and reinvestment, explosion is the norm. It can even happen when ideas are getting harder to find. For example, even though it is getting expensive to squeeze more speed out of silicon chips, the global capacity to invest in the pursuit has never been greater.14

Here’s a demonstration of how endogenous technology creates explosive potential. Imagine an economy that begins with 1 unit each of labor, capital, human capital, and technology. Define the “units” how you please. A unit of capital could be a handaxe or a million factories. Suppose the economy then produces 1 unit of output per year. I’ve diagrammed that starting point by writing a 1 next to each factor as well as to the right of the Production diamond. To simplify, I’ve removed the purple depreciation loops:
[Diagram: starting point, 1 unit of each factor and 1 unit of output per year (blog_economy_5.png)]
Now suppose that over a generation, enough output is reinvested in each factor other than technology that the stock of each increases to 2 units. Technology doesn’t change. Doubling the number of factories, workers, and diplomas they collectively hold is like duplicating the global economy: with all the inputs doubled, output should double too:
[Diagram: all factors except technology doubled; output doubles (blog_economy_6.png)]
Now suppose that in addition over this same generation, the world invests enough in R&D to double technology. Now the world economy extracts twice the economic value from given inputs—which themselves have doubled. So output instead quadruples in the first generation:
[Diagram: technology doubled as well; output quadruples (blog_economy_7.png)]
What happens when the process repeats? Since output starts at 4 per year, instead of 1 as in the previous generation, total reinvestment into each input also quadruples. So where each factor stock climbed by 1 unit in the first generation, now each climbs by 4, from 2 to 6. In other words, each input triples by the end of this generation. And just as doubling each input, including technology, multiplied output by 2×2 = 4, the new cycle multiplies output by 3×3 = 9, raising it from 4 to 36:
[Diagram: second generation; each factor rises to 6 and output to 36 (blog_economy_8.png)]
The growth rate accelerates. The doubling time drops. And it drops ever more in succeeding generations.
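The whiteboard arithmetic is easy to hand-crank in a few lines. Keeping all the factor stocks equal for simplicity, output each generation is the square of the common stock (one factor from the rival inputs under constant returns, one from technology), and reinvestment adds the previous generation's output to every stock:

```python
stock, output = 1.0, 1.0       # 1 unit of each factor, 1 unit of output per year
for generation in range(1, 6):
    stock += output            # reinvest last generation's output in every factor
    output = stock ** 2        # rival inputs contribute one factor of stock, technology another
    print(generation, stock, output)
# generation 1: stock 2, output 4
# generation 2: stock 6, output 36
# generation 3: stock 42, output 1764 -- each doubling arrives sooner than the last
```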

Again, it is technology that drives this acceleration. If technology were stagnant, or if, as in Solow’s model, its growth rate were locked down, the system could not spiral upward so.

In the paper, I carry out a more intense version of this exercise, with 100 million steps, each representing 10 minutes. I imagine the economy to start in the Stone Age, so I endow it with a lot of labor (people) but primitive technology and little capital or human capital. I start population at 1 (which could represent 1 million) and the other factors lower. This graph shows how factor stocks and economic output (GWP) evolve over time:
[Figure: simulated factor stocks and GWP over time (lny_v_t_r0Blog.png)]
Apparently my simulated economy could not support all the people I gave it at the start, at least given the fraction of its income I allowed it to invest in creating and sustaining life. So population falls at first, until after about 500 years the economy settles into something close to stasis. But it is not quite stasis, for eventually the economy starts to grow perceptibly, and within a few centuries its scale ascends to infinity. The sharp acceleration resembles history.
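Here is a scaled-down sketch of that kind of run, far cruder than the one in the paper (a short horizon, invented parameters, no depreciation, no calibration), but it reproduces the qualitative shape of a long crawl followed by takeoff:

```python
import numpy as np

def simulate_takeoff(years=1000, dt=0.1, s=0.05,
                     A=0.1, K=0.01, H=0.01, L=0.01, cap=1e12):
    """Toy endogenous-growth run: Y = A * (K*H*L)**(1/3), with a share s
    of output reinvested in every factor, technology included.
    All numbers are invented; only the shape of the path matters."""
    times, gwp = [], []
    for step in range(int(years / dt)):
        Y = A * (K * H * L) ** (1 / 3)
        times.append(step * dt)
        gwp.append(Y)
        if Y > cap:                      # the run has effectively exploded
            break
        K += s * Y * dt                  # reinvestment into each factor
        H += s * Y * dt
        L += s * Y * dt
        A += s * Y * dt
    return np.array(times), np.array(gwp)

t, y = simulate_takeoff()   # y crawls for centuries, then shoots upward
```

Because technology sits inside the reinvestment loop, returns to the accumulable factors taken together exceed one, and the crawl eventually becomes a spike.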

It turns out that a superexponential growth process not only fits the past well. It is rooted in conventional economic theory, once that theory is naturally generalized to allow for investment in technology.

Interpreting infinity

How then are we to make sense of the fact that good models of the past predict an impossible future?

One explanation is simply that history need not repeat itself. The best model for the past may not be the best for the future. Perhaps technology can only progress so far. It has been half a century since men first stepped on the moon and the 747 entered commercial service; contrast that with the previous half century of progress in aeronautics. As we saw, the world economy has grown more slowly and steadily in the last 50 years than the univariate model predicts. But it is hard to know whether any slowdown is permanent or merely a century-scale pause.

A deeper take is that infinities are a sign not that a model is flatly wrong but that it loses accuracy outside a certain realm of possible states of the world. Beyond that realm, some factor once neglected no longer can be. Einstein used the fact that the speed of light is the same in all inertial reference frames to crack open classical physics. It turned out that when such great speeds were involved, the old equations become wrong. As Anders Johansen and Didier Sornette have written,

Singularities are always mathematical idealisations of natural phenomena: they are not present in reality but foreshadow an important transition or change of regime. In the present context, they must be interpreted as a kind of ‘critical point’ signaling a fundamental and abrupt change of regime similar to what occurs in phase transitions.

What might be that factor once neglected that no longer can be? One candidate is a certain unrealism in calculus-based economic models. Calculus is great for predicting the path of comets, along which the sun’s pull really does change in each picosecond. All the simulations I’ve graphed here treat innovation analogously, as something that happens in infinitely many steps, each of infinitely small size, each diffused around the globe at infinite speed. But real innovations take time to adopt, and time lags forestall infinities. If you keep hand-cranking the model on my whiteboard, you won’t get to infinity by Christmas. You will just get really big numbers. That is because the simulation will take a finite number of chunky steps, not an infinite number of infinitely small steps.

The upshot of recognizing the unrealism of calculus, however, seems only to be that while GWP won’t go to infinity, it could still get stupendously big. How might that happen? We have in hand machines whose fundamental operations proceed a million times faster than those of the brain. And researchers are getting better at making such machines work like brains. Artificial intelligence might open major new production possibilities. More radically, if AI is doing the economic accounting a century from now, it may include the welfare of artificial minds in GWP. Their number would presumably dwarf the human population. As absurd as that may sound, a rise of AI could be seen as the next unfolding of possibilities that began with the evolution of talkative, toolmaking apes.

A more profound neglected factor is the flow of energy (more precisely, negative entropy) from the sun and the earth’s interior. As economists Nicholas Georgescu-Roegen and Herman Daly have emphasized, depictions of the economic process like my whiteboard diagrams obscure the role of energy and natural resources in converting capital and labor into output. For this reason, at the end of my paper, I add natural resources to the model, rather as the classical economists included land. Since sunlight is constantly replenishing the biosphere, I have natural resources appreciate rather than depreciate. And to capture how economic activity can deplete natural resources, I cast the “reinvestment” in resources as negative.15 This is conceptually awkward, but I don’t see a better way within this modeling structure. I indicate these dynamics with a positive sign in the purple loop for natural resources and a minus sign on its orange reinvestment arrow:
[Diagram: natural resources added, appreciating on their own but depleted by production (blog_economy_10.png)]
In the simulation, the stock of resources is taken as initially plentiful, so it too starts at 1 rather than a lower value. The slow, solar-powered increase in this economic input (in green) hastens the explosion by about 1000 years. But because the growing economy depletes natural resources more rapidly, the take-off initiates a plunge in natural resources, which brings GWP down with it. In a flash, explosion turns into implosion.
[Figure: simulated factor stocks and GWP with natural resources; explosion followed by implosion (lny_v_t_r1Blog.png)]
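In the same toy spirit (again, invented parameters, qualitative shape only): give the sketch from the previous section a resource stock that appreciates slowly on its own but is drawn down in proportion to output, and let it enter production. Takeoff then sows its own reversal:

```python
import numpy as np

def simulate_boom_bust(years=1500, dt=0.1, s=0.05, grow=0.001, deplete=0.02,
                       A=0.1, K=0.01, H=0.01, L=0.01, R=1.0, cap=1e12):
    """Same toy endogenous-growth run, plus a natural-resource stock R that
    slowly appreciates (sunlight replenishing the biosphere) and is depleted
    in proportion to output -- "negative reinvestment." Invented parameters."""
    times, gwp = [], []
    for step in range(int(years / dt)):
        Y = A * (K * H * L) ** (1 / 3) * R ** 0.25
        times.append(step * dt)
        gwp.append(Y)
        if Y > cap or R <= 1e-9:         # explosion, or resources exhausted
            break
        K += s * Y * dt
        H += s * Y * dt
        L += s * Y * dt
        A += s * Y * dt
        R = max(R + (grow * R - deplete * Y) * dt, 1e-9)
    return np.array(times), np.array(gwp)
```

With these numbers the resource stock rises gently for centuries, then the accelerating economy burns through it within a few years, and output collapses with it.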
The scenario is, one hopes, unrealistic. Its realism will depend on whether the human enterprise ultimately undermines itself by depleting a natural endowment such as safe water supplies or the greenhouse gas absorptive capacity of the atmosphere; or whether we skirt such limits by, for example, switching to climate-safe energy sources and using them to clean the water and store the carbon.

…which points up another factor the model neglects: how people respond to changing circumstances by changing their behavior. While the model allows the amount of labor, capital, etc., to gyrate, it locks down the numbers that shape that evolution, such as the rate at which economic output translates into environmental harm. This is another reason to interpret the model’s behavior directionally, as suggesting a tendency to diverge, not as gesturing all the way to utopia or dystopia.

Still, this run suffices to demonstrate that an accelerating-growth model can capture the explosiveness of long-term GWP history without predicting a permanently spiraling ascent. Thus the presence of infinities in the model neglecting natural resource degradation does not justify dismissing superexponential models as a group. This too I learned through multivariate modeling.

Conclusion

I do not know whether most of the history of technological advance on Earth lies behind or ahead of us. I do know that it is far easier to imagine what has happened than what hasn’t. I think it would be a mistake to laugh off or dismiss the predictions of infinity emerging from good models of the past. Better to take them as stimulants to our imaginations. I believe the predictions of infinity tell us two key things. First, if the patterns of history continue, then some sort of economic explosion will take place again, the most plausible channel being AI. It wouldn’t reach infinity, but it could be big. Second, and more generally, I take the propensity for explosion as a sign of instability in the human trajectory. Gross world product, as a rough proxy for the scale of the human enterprise, might someday spike or plunge or follow a complicated path in between. The projections of explosion should be taken as indicators of the long-run tendency of the human system to diverge. They are hinting that realistic models of long-term development are unstable, and stable models of long-term development unrealistic. The credible range of future paths is indeed wide.

Data and code for the paper and for this post are on GitHub. A comment draft of the paper is here.

Comments

Some personal reactions on this piece:

First, a note on how it came about and what I think the relevance to our work is. I asked David to evaluate Robin Hanson’s work on long-term growth as a sequence of exponential growth modes. I found it interesting that an attempt to extrapolate future economic growth from the past (with very little reasoning other than attempting to essentially trend-extrapolate) implied a strong chance of explosive growth in the next few decades, but I wasn’t convinced that Hanson’s approach was the best method of doing such trend extrapolation. I asked David how he would extrapolate future growth based on the past, and this is the result. The model is very different from Hanson’s (and I prefer it), but it too has an implication of explosive growth in the next few decades.

On its own, seeing trend extrapolation exercises with this implication doesn’t necessarily mean much. However, I independently have a view (based on other reasoning) that transformative AI could plausibly be developed in the next couple of decades. I think one of the best reasons to be skeptical of this view about transformative AI is that it seemingly implies a major “trend break”: it seems that it would, one way or another, have to mean world economic growth well outside the 1-3% range that it’s been pretty steady in for the last couple of centuries. However, Hanson’s and Roodman’s work both imply that a broader look at economic history demonstrates accelerating growth, and that in this sense, expecting that “the future will be like the past” could be entirely consistent with expecting radically world-changing technology to be developed in the coming decades.

Like David, I wouldn’t take the model discussed in this piece literally, but I tentatively agree with what I see as the central themes: that a sufficiently broad view of history shows accelerating growth, that this dynamic is inherently unstable, and that we therefore have little reason to “rule out” explosive growth in the coming decades.

We are working on a number of other analyses regarding the likelihood of transformative AI being developed in the coming decades. One topic we’re exploring is a potential followup on this piece in which we would try to understand the degree to which growth economists find this piece’s central themes reasonable, and what objections are most common.

Now a few comments on ways in which I see things differently from how David sees them. I should start by saying that any model makes simplifications, and this is a case where extreme simplifications are particularly called for. However, if I’d written this post I would’ve called out the following non-modeled dynamics as particularly worth noting.

1 - this post’s multivariate model does not match the way I intuitively model what I call the “technological landscape.” Some discoveries and technological developments enable others, so there is in some sense an “order” in which we’ve developed new technologies that might be fairly stable across multiple possible versions of history. And some technologies are more impactful than others. There may thus be important natural structure that leads to inevitable (as opposed to stochastic) acceleration and deceleration, as the world hits phases where there are more vs. less impactful technologies being discovered. The most obvious way in which I expect the “technological landscape” to matter is that at some point, I think the world could “run out of new findings” - at which point technology could stop improving. I see this as a likely way that real-world growth could avoid going to infinity in finite time, without needing to invoke natural resource limits.

2 - it seems to me that a more realistic multivariate model would have natural resource shortages leading to growth “leveling off” rather than spiking and imploding. E.g., at the point where natural resources are foreseeably going to be a bottleneck on growth, I expect them to become more expensive and hence more carefully conserved. I’m not sure whether this would apply to a long enough time frame to make a big visual difference to the charts in this post, but I still thought it was worth mentioning.

3 - I’m interested in the hypothesis that the recent “stagnation” this model sees is largely driven by the fact that population growth has slowed, which in turn limits the rate of technological advance. Advances in AI could later lead to a dynamic in which capital can more efficiently substitute for labor’s (and/or human capital’s) role in technological advance. This is an example of how the shape of the “technological landscape” could explain some of the “surprises” seen in David’s tests of the model.

4 - regarding this statement in David’s piece:

> The scenario is, one hopes, unrealistic. Its realism will depend on whether the human enterprise ultimately undermines itself by depleting a natural endowment such as safe water supplies or the greenhouse gas absorptive capacity of the atmosphere; or whether we skirt such limits by, for example, switching to climate-safe energy sources and using them to clean the water and store the carbon.

In worlds where explosive growth of the kind predicted by David’s model occurred, I’d anticipate radical changes to the way the world looks (for example, civilization expanding outside of Earth), which could significantly change the picture of what resources are scarce.

Regarding the estimate of the explosion year:

> The estimate of the median explosion year sharpens to 2047, which is why I used that year in the third graph of the post (95% confidence range is ±16 years).

I’m curious what the predicted explosion year (and CI range) would have been if the same analysis had been run in the past. For example, would a similar report published 100 years ago have expected the explosion sooner or later, and how narrow would the CI have been? It would be nice to have a table or chart that shows how the date/CI would have changed over time.

Great question, Louis. You inspired me to look into that. This graph shows median predicted take-off date (where 0 means 1 CE) as a function of the last date allowed into the sample that is the basis of the projection. All samples start in 10,000 BCE. I’ve also indicated 95% confidence intervals in grey, but just take those as rough indications:
[Figure: median predicted take-off date vs. data stop date, with 95% confidence intervals (TakeOffvsDataStop1.png)]

I labeled the data points with the stop date, not the take-off date.

Here, I zoom in on the part of that graph starting in 1820:
[Figure: the same graph, zoomed to data stop dates from 1820 on (TakeOffvsDataStop1.png)]

See what you think.

Interesting, thank you! This is exactly the kind of response I was hoping for.

It’s hard to make out what the takeoff dates are from reading the graph, could you post another version with takeoff dates also labelled please?

Hi Greg. Sure thing.
[Figure: take-off date vs. data stop date, with take-off dates labeled (TakeOffvsDataStop1.png)]
[Figure: zoomed version, with take-off dates labeled (TakeOffvsDataStop1.png)]

Fascinating! Is there some other explanation for why the confidence intervals collapsed to such a small grey range for the past 100 years–maybe something to do with how much more frequent the data points are? That is, some explanation besides the first one that comes to mind, which is that starting around then the data really did give evidence for less variance/more certainty of trajectory.

I think it’s two effects: more data, and more proximate explosion date. To elaborate on the latter, I think it’s inherent in the mathematical process that’s posited here, that the more distant the median explosion date under given parameter choices, the greater the spread in years around that date. If the median explosion date is implied to be a million years from now, it makes sense that the uncertainty band around that point estimate would be quite large as measured in years–say, plus or minus ten millennia.

Interesting, thanks! So it’s not quite like fusion, but the time to the singularity has remained “a few decades away” over the last few decades.

| Timeseries end year | Predicted year of “Singularity” | Predicted years to singularity |
|---|---|---|
| 1970 | 2010 | 40 |
| 1980 | 2014 | 34 |
| 1990 | 2022 | 32 |
| 2000 | 2031 | 31 |
| 2010 | 2038 | 28 |
| 2020 | 2047 | 27 |

(although this is relatively stable compared to the large-confidence interval region preceding it)

Extrapolating forward very naively:

| Timeseries end year | Predicted year of “Singularity” | Predicted years to singularity | Second difference |
|---|---|---|---|
| 1970 | 2010 | 40 | |
| 1980 | 2014 | 34 | 6 |
| 1990 | 2022 | 32 | 2 |
| 2000 | 2031 | 31 | 1 |
| 2010 | 2038 | 28 | 3 |
| 2020 | 2047 | 27 | 1 |

Average second difference: 3. Projecting forward with that average:

| Timeseries end year | Predicted year of “Singularity” | Predicted years to singularity | Second difference |
|---|---|---|---|
| 2030 | 2054 | 24 | 3 |
| 2040 | 2061 | 21 | 3 |
| 2050 | 2068 | 18 | 3 |
| 2060 | 2075 | 15 | 3 |
| 2070 | 2082 | 12 | 3 |
| 2080 | 2089 | 9 | 3 |
| 2090 | 2096 | 6 | 3 |
| 2100 | 2103 | 3 | 3 |
| 2110 | 2110 | 0 | 3 |

Interesting!

I’m looking at the paper, and I’m getting some characters that are not displaying properly in the paragraph following equation (1). Is there a web version of the paper? Is the paper intended to display properly on a particular platform?

Hi Rick. Thanks for bringing that to my attention. It was just a glitch in Word. I think it’s fixed now.

An s-curve looks like an exponential until you’ve passed the inflection point. Might there be something like an s-curve, but for the hyperexponential trend you are using? Might the mid-twentieth-century be the inflection point? I’d love to see you redo the simulations using this hyper-s-curve model, and thereby get evidence about whether the mid-twentieth-century slowdown is more likely to be the inflection point or just noise. (Presumably which is more likely in an absolute sense would depend on your prior for how high the inflection point is.)

Hi Daniel,
Yes, that’s entirely possible to do. One reason I didn’t go there is that in my experience, until the inflection is fairly far along, it’s very hard to identify the additional parameter, the one that brings about deceleration, with any precision. You can get equally good fits that imply long-term ceilings that differ by a factor of 100 or 1,000.
In a sense, the final graph is my attempt to do that, but while staying within the mathematical framework I’d already set up. I realize it’s not the same thing as explicitly working in a limit.
–David

Anything written by David Roodman is worth reading, and this is no exception. I did start to wonder whether there is some way to include a conception of production that is not captured in the usual way – that is, the value of care and other non-monetized dimensions of the economy (and society). I get that the main point of the work is that we cannot rule out explosive growth in material wellbeing in the future. But I think it would be even more interesting if, in the range of futures we could envision, we included an exploration of the trajectory/ies of non-material wellbeing. Maybe you don’t have enough empirical basis to work from, although there are a bunch of people who do work on that (e.g., https://research.american.edu/careworkeconomy/?_ga=2.36238048.1339677417.1592331992-593210144.158986…).

In any case, thanks, David, for another mind-expanding paper. Looking forward to more!

Thanks Ruth! (Where’s the Like button on this blog? Really seems like this blog should have them….)

I think that the sorts of historical figures used here capture something real and huge, and to that extent are useful. But I completely agree that there are more things in our past and future than are dreamt of in my GWP–and they matter much more as we think about and work for the future we want.

Shouldn’t this be plotted per capita?

Ray Kurzweil, I believe in his book The Singularity is Near (2005), does some very similar historical plotting and predicts the singularity by 2045. Did I miss where you cited him?

No, you didn’t. There are no citations of him. As far as I can see, Kurzweil reaches that number by projecting Moore’s law for processing power, not by projecting population or GWP. I think the 1960 population paper I highlight has priority and is the more relevant precedent here.

He is better known for his Moore’s Law projections, but he has a more general law of accelerating returns (2001). He talks about positive feedback loops and why progress is super exponential here: https://www.kurzweilai.net/the-law-of-accelerating-returns . See the second figure where he talks about paradigm shifts in history. This plot is similar to yours, but it is not GDP.

I’ve seen these ideas in a bunch of places. Francois Meyer wrote about a Law of Accelerating Evolution in 1947. Before his death in 1957, in conversation with Stanislaw Ulam, John von Neumann spoke of the possibility of a technological singularity. Contemporaneous with that 1960 article in Science, Scientific American published a piece on the history of human population—the source for my pre-10,000 BCE figures—with a graph akin to those seen here. Economists in the 1960s understood and were uncomfortable with the way the standard models could, with slight changes, lead to explosion. I.J. Good speculated on self-improving, ultraintelligent machines in 1966. I think Hans Moravec’s Mind Children (1988) also projects Moore’s Law–type curves.

This is super interesting! I’m looking forward to diving into it.

I’m wondering if you know William Nordhaus’ 2015 paper ‘Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth’? I haven’t engaged with it yet but it seems relevant and I don’t see it mentioned in the technical report.

Yes, I have read that and I like it. He makes a pretty convincing case that there’s no sign in the last ~century of U.S. data of the kinds of economic changes one would expect if a surge toward a singularity had started. This is consistent with the way my model is “surprised” by the relative stability and slowness of growth since about 1950.

Quite possibly I will cite this paper and others related to it on AI and growth. That said, AI gets almost no mention in the paper, so this literature is not central.

Thanks for an interesting discussion. Two small points; most people living through the collapse of the Roman Empire in the late 470s were not aware that this event, arguably one of the focal points in western history, was taking place. In the same way, we may be going through a transformation, enabled by the internet, that we can’t quite see.

Economics finds it difficult to quantify measures that include quality of life. Urban cycling improves aggregate quality of life more than urban car transport, even though it results in a drop in economic growth. It is arguable that policies designed to maximise unsustainable economic growth are probably the single greatest risk to human and mammalian survival over the coming centuries…

Amazing thoughts and insight into the future of humanity!!!…The only thing I’d like to add (and it’s quite trivial) is that Black and Scholes are probably not the originators of their options pricing model. Edward Thorp makes a case that he invented the model years before and kept quiet about it since he was profiting handsomely from it. Ed is a legend who also invented blackjack card counting, the world’s first wearable computer that helped him beat casinos at roulette, and quant hedge funds.

Amazing.

This is way beyond me. Brain at explosion stage. Anyway, I was wondering, when you have four boxes of inputs into production: labor, capital, human capital, technology, are you not missing a fifth one: connectedness? Advances in production didn’t just come from labor, capital, human capital, and technology, but also from our ability to communicate with each other, and spread the knowledge of how to use those inputs better - a point you make higher up in your article.

It’s an exponential result of a feedback loop: the fruits of our production improve communications, and our improved communications mean we can produce better, including better communications, which means… ad infinitum.

But perhaps there is a limit to the speed at which ideas can spread and therefore the exponential growth will stop. When our ideas spread at the speed of light they will be able to go no faster, so at some point the growth derived from improving communications will stop, and production growth increases overall will flatten.

[I’m too slow to understand if this is answered in your article. Sorry if nonsense.]

Hi Tom. It’s not nonsense. Two responses. First, as I mention, economists have added and deleted items in the list of factors, to suit their particular interests. So one can certainly add another factor. That said, it wouldn’t change the logical structure of the whiteboard model, thus its overall behavior. So I’d probably just consider connectedness part of technology. Second, and by the same token, within the rigid and limited structure of the model, adding “connectedness” wouldn’t impose a speed limit. Connectedness would just spiral up along with everything else. So I’d just say that an absolute bound on connectedness is absent from the model, and that’s one of its limitations. I do sort of get at that near the end, talking about the assumption that innovations are disseminated with infinite speed.

Great, thanks for your quick and kind response. I shall have to have another, slower read. I must say, it’s beautifully written, but it’s just a bit difficult to get my head around.

How much of a stretch is the following interpretation of the percentiles in the “Percentile of GWP distribution” graph:

1) 50% would represent a sort of average timeline?

2) A higher percentile would be one in which the randomness of man-made events (i.e., unpredictable twists and turns of human history) had a positive impact that accelerated GWP growth?

For example, in our specific timeline 1620 is the year Francis Bacon publishes Novum Organum Scientiarum promoting the new method of science, which inspires the Royal Society, founded in 1660, and Isaac Newton, who publishes Principia Mathematica in 1687.
The spark of science sets the backdrop to the industrial revolution and the technological age.

I suspect these “random” developments are actually secondary effects of anti-fragile structural social changes that provided the fertile soil for them to take root. Society must be investing in lots of cheap options for inordinate black-swan gains. Most of these cheap options will amount to nothing, so this type of activity will only happen under conditions of diversity and abundance.

For example, the Black Death (1347–1351) destabilizes feudal power structures and creates enough economic slack for survivors to enable the more freewheeling artistic and intellectual inquiries during the Renaissance and beyond.

3) A lower percentile would be one in which the randomness of man-made events has an inordinate negative effect (e.g., world wars, bad policies applied at scale due to political centralization, etc.)

For example, World War I is 1914–1918, followed by the Great Depression, followed by World War II. Afterwards, GWP bounces back until the 1970s, the height of the Cold War. From 1970 onwards, Nixon abolishes Bretton Woods and ushers in the current fiat monetary regime that facilitates the invisible confiscation of the bulk of productivity gains from increased automation in the service of parasitic special interests and inefficient public monopolies funded by perpetually expanding debt.

I suspect the underlying structure for these negative “random” events is the fragile mirror image of the anti-fragile structure that leads to large unexpected positive outcomes. Massive man-made disasters seem to require sufficient centralization of power to amplify the consequences of small random mistakes. For example, the interlocking military alliances in Europe prior to World War I, or the consolidation of power into two competing superpowers afterwards (e.g., without the US hegemony over the West, the sweeping changes to the Western monetary system would probably have been harder to install).

Hi Liraz. This is all very interesting. I think it is not much of stretch to say that the percentile graph is compatible with this narrative. But I think it is also compatible with other narratives, so I wouldn’t say that it obviously supports one such narrative over another. To go beyond that would lead into a massive discussion about the causes of modernity, rather separate from what I have to offer here.
