Discover more from The Fitzwilliam
Response to the Comments on Stagnation
And, are ideas getting harder to find?
There were many great replies to my essay ‘We Were Promised Flying Cars…’. It is one of the broadest and most speculative pieces I have written, so objections and additions were welcome. I’ve responded to some of them below:
Are ideas getting harder to find?
Onno Eric Blom writes:
This is a fantastic overview of the stagnation debate, and almost fully on the nose. I must disagree with one thing, namely the “ideas are harder to find” section. How could one even judge such a thing without knowing what discoveries lie ahead (which we can’t, by definition)? I forgot who it was that lamented that he wished he had lived before Newton, since the laws of physics could only be discovered once. Of course, he was proven wrong by Einstein. There is no reason to assume that reality is not infinitely complex, and there’s always much more to discover. Any lamentation such as the above seems ridiculous in hindsight, and people lamenting it now will seem ridiculous in 500 (or even 100) years. The same thing goes for economic growth. The thing which degrowthers consistently get wrong with the “infinite growth is impossible on a finite planet” quip is that growth does not need to be quantitative, but can be qualitative as well (e.g. colour television instead of black and white television). Since we can find better ways of producing things with fewer resources, and since people pay more money for better products, we can keep improving the human condition infinitely on a finite planet.
Re ideas getting harder to find, you prompted me to write up some brief thoughts here: https://progressforum.org/posts/oTrktskpBncRkpPFa/where-i-have-landed-on-ideas-getting-harder-to-find-in-a. Ideas do get harder to find, but we also get better at finding them, and growth rates have to be understood as the balance of these opposing forces.
Reality might be infinitely complex, but if we are discovering very slight wrinkles in basically correct theories (or, at least, experimentally accurate ones), you would say that it’s getting harder to find important ideas, right?
Something like Bell’s theorem only gets discovered once. Either local hidden variable theories of quantum mechanics are possible, or they are not. That’s a foundational discovery about reality that you can only make once. I think it would be misleading to imply that reality has a kind of fractal structure, in which we’re constantly making and un-making discoveries. I don’t think this is even an accurate description of the cases you mentioned: Newtonian mechanics was not wrong per se, but incomplete outside the normal range of speeds and masses.
Suppose we asked a committee of wise experts who intimately knew a field to rate which advances were the most important. My claim is this: achieving a given level of ‘importance’ will require a dramatically increasing number of quality-adjusted researcher hours. This fits with Michael Nielsen and Patrick Collison’s finding that Nobel Prizes are being awarded for progressively less significant research, as rated by experts in the field.
The idea that the low-hanging ideas have already been picked implies a tree-like theory of ideas, whereby new ideas are somehow derived from previous ideas, making each successive idea further out of reach.
This is contradicted by the simple example of the theory of general relativity, which does not extend or build on Newton's theory. Rather, general relativity replaces Newton's theory and provides a better explanation of our universe.
I think this is a non sequitur. I can easily imagine situations in which ideas get harder to find, but where they don’t necessarily derive from one another. For example, suppose that general relativity is just inherently more complex than Newtonian mechanics, in a way that requires more researcher time to figure out. Newtonianism is the more low-hanging fruit, in the sense that it’s the more “obvious” set of ideas. But relativity is qualitatively different and does not in any important sense “derive” from Newton.
Pradyumna Prasad (Bretton Goods) writes to me:
I feel that there are two methods for measuring the value of items produced by the internet. One is to ask consumers how much they’d pay for the now nearly free / very cheap good: run a survey asking people what they’d pay to see so many thousand movies on Netflix, and infer what it would be for the population overall. The second method is to see what the equivalent bundle of goods would have cost you N years before, and see what the reduction in price is. While the first method says that the internet’s addition to GDP is not very much, I think the second one would indicate a much higher addition to GDP.

So when people say “consumer surveys indicate that they would pay only $y for Google”, my reaction is that I don't think $y is the maximum at all! People might be accustomed to free access, and they might not know the value of it in their lives until it is actually gone.

The second thing is that as time goes on, the willingness to pay increases because of lock-in effects. Maybe in 2005 people could have survived without search engines. But now they have become such an important part of our lives that we can’t function without them, and that would increase the WTP number – which would indicate that whatever value of GDP increase you get from WTP surveys done many years ago, you should increase your estimate of it with time. I’m not sure if this speaks to a flaw in using WTP numbers to estimate the internet’s output, or to our actual economic value from internet goods increasing over time.
Re GDP being mismeasured, Gordon has an answer to this: GDP has *always* been mismeasured, and it's not clear that it is *more* mismeasured now. Yes, the Internet provides lots of consumer surplus. But so does penicillin: the consumer value is saving your life, which is a lot of surplus over the cost of the pills.
(One argument for “increasing mismeasurement” that I have heard suggested is the shift from manufacturing to services. This is an interesting hypothesis, but I haven't seen it developed.)
The Economist article I linked to in my original essay discusses four different methods of measuring the internet’s unmeasured contribution to GDP. All of them produce paltry numbers, which continues to puzzle me. Pradyumna correctly points out that the value of having internet access is increasing over time. I also share his scepticism of any form of direct willingness-to-pay survey. If you told me that you would only pay €20/month for access to the internet, I wouldn’t believe you.
The background to Prad’s comment is that many goods cost dramatically less than the maximum amount you would be willing to pay for them (the willingness-to-pay). But for most goods, I have no idea what the absolute maximum amount of money I would pay is. To get a better picture of economic welfare you would need to know WTP, which is inherently tricky to measure.
It would be surprising if the willingness to pay for the internet was not increasing, but I would advise caution in interpreting it. Higher WTP could be because of an actual increase in value, or it could be a network effect.
Alexey Guzey and others argue that the GDP mismeasurement problem is being understated by Gordon and others:
Thinking really hard about this, would you rather:
1. have your $200k salary but give up the internet (i.e. give up virtually all indexed and searchable knowledge ever produced by humanity via Google, Sci-Hub and Libgen; give up email and social media; give up free high-quality video calls with anyone on the planet; give up StackOverflow; give up online gaming; give up econjobrumors; etc. etc.)?
2. have the internet but earn the salary of “only” $100k?
If everyone would take the second option, then we’ve been underestimating GDP by a factor of two since the internet was invented! But it’s important to remember that blog readers are disproportionately sampled from the subset of the population for whom the internet is highly valuable. People who love and crave information – the ‘infovores’ – have their lives immeasurably enhanced by Wikipedia, Google search, and now ChatGPT. But there are also people, in, say, my mum’s generation, for whom Facebook is just replacing leisure time that would otherwise be spent watching TV. If anything, that seems like a downgrade to me.
The better question is this: If you lived in a world in which no one had access to the internet, how much of your salary would you give up to transplant yourself and everyone you know to a world where everyone has access? The answer to that question is much trickier, and I think a sizable minority of people would pay money to go the other way (i.e. from the internet world to the no internet world).
“I don’t think even this is enough to account for how much researcher productivity has declined since the days of Victorian gentlemen making earth-shattering discoveries in their spare time.”
Did they though? I think the earth-shattering discoveries were largely made by professors (Sedgwick, Maxwell, etc) or people like Darwin who were working full-time but independently as they were self-funded. “Spare time” misrepresents this... How many of the important discoveries were genuinely made in evenings and weekends by people with unrelated jobs?
I admit my original comment was exaggerated. Pierre de Fermat might be an example (though not a Victorian). It’s unthinkable that a modern figure could contribute as much to mathematics in his spare time as Fermat did while working full-time as a judge. I’m not sure there’s any living mathematician who has made breakthroughs as significant as Fermat’s. This alone does not reflect badly on modern mathematicians! The mathematical problems which remain unsolved inherently rely on more background knowledge.
Consider an extreme case: it’s not difficult to notice Pythagoras’ theorem. It’s the kind of thing that a bright kid might independently stumble upon or even prove.
Am I saying that I would have figured out Pythagoras’ theorem if I lived in Ancient Greece? Of course not. There are clearly many trends which have made ideas easier to find: the knowledge we have already accumulated, and the availability of learning tools like books and the internet.
A crux of disagreement is whether we have strong reason to believe that the forces pushing in favour of, and against, ideas getting harder to find should move at the same rate. I say no.
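The balance-of-forces framing can be made concrete with a toy model. The sketch below is my own illustration with invented growth rates, not anything from the essay or the linked post: treat research progress as the ratio of our compounding "search capacity" (tools, accumulated knowledge) to the compounding difficulty of the remaining problems.

```python
# Toy model (my illustration, with made-up rates): research progress as
# the ratio of growing search capacity to the growing difficulty of the
# remaining problems. Nothing forces the two growth rates to be equal.

def progress(t, tool_growth, difficulty_growth):
    search_capacity = (1 + tool_growth) ** t
    difficulty = (1 + difficulty_growth) ** t
    return search_capacity / difficulty

# If tools compound faster than difficulty, measured progress accelerates:
assert progress(50, 0.04, 0.02) > progress(10, 0.04, 0.02)

# If difficulty compounds faster, progress decays even as tools improve:
assert progress(50, 0.02, 0.04) < progress(10, 0.02, 0.04)
```

Whether observed growth rises or falls then hinges entirely on which exponent is larger, which is exactly the point of contention: there is no a priori reason the two rates should cancel.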
Problems with PhDs
Robert Tolan writes:
One thing I have noticed is that there seems to have been an ‘ageing’ of the typical science PhD graduate … Controlling for cultural factors, to what extent do you think demographics might feed into the [total factor productivity] data? Internet and universities aside, the great leaps in science you mention were made by comparatively young people: Penzias and Wilson were just 31 and 28 years old respectively when they stumbled upon cosmic microwave background radiation. Notably, structured PhD programmes are a relatively recent phenomenon and might feed into the university bureaucracy you mention – maybe if budding science PhD students could do sufficient work in one or two years, rather than having to stick to a four-year PhD track, they might be more likely to do frontier defining work. A flaw in my reasoning here is that it has become more time consuming to understand a subfield given the breadth of knowledge that now exists, which means it takes longer to gain the expertise a PhD graduate is expected to have in their area, so feel free to critique my thoughts on this.
I would imagine that, in many ways, having an older population makes culture more ossified and less predisposed to innovation. At the same time, one has to imagine that structural factors are more important: Kenya has an average age of 20, and it is doing hardly any science.
You are correct that scientists are getting older (and as far as I can tell, this isn’t accounted for by PhDs taking longer).
I have heard mixed views from my friends doing PhDs about how structured they are. Someone in my university’s philosophy department told me that, among his friends, it was taken for granted that everyone’s doctorate would take 1-2 years longer than necessary, as you inevitably would get depressed in the middle and feel completely lost. That sounds like too little structure to me. Ben Kuhn on his blog gives another example of students at top PhD programmes being ineffectual because of a lack of structure.
Presumably, both things can be true. There must be a way to get budding scholars learning how to do actual frontier research sooner, but also, if you have a multi-year structureless programme, I predict students will accomplish almost nothing.
Culture and regulation
Callum McCreadie writes:
I would agree with 5. Cultural decline and 3. Excessive regulation (at least in the west). In 5 minutes I could find several examples where regulation is (insanely) stifling innovation
1. SpaceX Starship being delayed by nearly a year because of “Environmental Review”. (It’s in the middle of the desert?)
Is the so-called “Faustian Spirit” still alive and well in Europe? I often think of the Tibetans being confused as to why anyone would attempt to climb Everest. “Because it's there.” - Mallory… I saw Fergus “restacked” your post. I would be inclined to agree w/ him that universities and how research is conducted are significantly to blame. In short:
+ Peer review (Einstein was famously insulted by peer review) sucks. Watch this video: a maths prof writes a paper in ~2 months, then it takes nearly 2 years to get published. Compare this to how much work was achieved in mere weeks when the superconductor paper was published on arXiv. We need to shorten feedback loops.
+ Research agencies / grant agencies. See the effect … Patrick [Collison]/Tyler [Cowen] had with Fast Grants vs the NIH, NHS etc… (Anything groundbreaking in the past 100 years is near guaranteed to have been funded by DARPA. Its budget is only ~$4 billion!)
+ Wrong incentives in academia: the only way to climb is to become an administrator and empire-build, taking on grad students doing iterative work.
+ More places nowadays for the insanely talented to go vs the Victorian Era. Would Maxwell be working in a Hedge Fund nowadays?
I don’t necessarily disagree with any of this. I wrote, “There is room for all your pet theories to be correct, and more.” Perhaps that was an unnecessarily sarcastic way to refer to what are, in many cases, tremendously important ideas that are likely to be correct.
One point here I would like to pick out is that the opportunity cost of very smart people has gone up. This would only explain scientific stagnation in less applied areas, not economic stagnation: if Maxwell had gone to work for a hedge fund, that would have contributed to GDP.
Your view is highly retrospective. It assumes that because the low-hanging fruit has been plucked in the basic sciences and elsewhere, stagnation is somehow inevitable. Even assuming no further paradigmatic breakthroughs here (and this seems unlikely), there is potential for any one breakthrough in R&D to propel us onto a more general S curve. E.g. cheap, abundant energy through cheaper renewables (perhaps with better storage through hydrogen or some other medium) combined with robotics and decent (not necessarily AGI-level) AI could reduce the cost of setting up a new factory by orders of magnitude. The design and architecture would be faster and more bulletproof, with real-world-level simulation ironing out problems at multiple levels in advance; the build and materials would be cheaper; the machines and their operation would be robotic. With transport costs heading to zero, the world market is at your doorstep. And so on.

If a lack of basic science was the bottleneck in the early 19th century, the infusion of AI into the physical realm (combining Thiel’s bits and atoms) may be the one to translate the tech S curve to other areas. But there are several other potential breakthrough points. Yes, organizational factors are sticky obstacles, but it may only take one small principality to find a better method of regulating innovation for the laggards to start catching up.

In summary, our first innovation bounty came from breakthroughs in basic science. The next could come through more prosaic innovation using the basic science we already have. Throw in the possibilities of fusion, working nanotech etc. – real fundamental breakthroughs – and the future gets brighter yet.
I am aware of, and excited about, the possibility of being pushed onto new S curves – that’s why I had a whole section about it! The things you mentioned would indeed be transformative. I just think it’s reasonable to have a general presumption that they would also require an increasing number of quality-adjusted researcher hours to achieve. Your first two sentences also misstate my view: I am not assuming that stagnation is inevitable. I wrote that “For economic growth to persist for more than a few hundred or thousand years requires some very futuristic changes, like artificial general intelligence or emulated minds. I’m not taking a stance on whether or not that will happen, but I struggle to see a future with persistent economic growth that is remotely recognisable.” I am deliberately remaining agnostic about those scenarios – and I would say I’m probably more open to them than, say, the average economist.
Sam Street comments:
Nice article! I just have 3 more thoughts: 1. Have you considered whether it might be status-oriented? E.g. Elon Musk went from building a rocket company to...running Twitter. Maybe intelligent/wealthy people who previously would have invented something, aren't enticed to create/build nowadays because it’s seen as lower status? 2. Because of the low-hanging fruit + bureaucracy argument, research teams are growing in size. These teams are less disruptive than the smaller ones (https://www.nature.com/articles/s41586-019-0941-9) 3. Lead poisoning??
Many disagreements I have with my friends end at an impasse, when we realise we are making different foundational assumptions about what the average person believes, and have no way to verify one way or the other. I have no idea whether creating and building things is seen as lower status than it used to be. Lower status by whom? I can tell you that if I started a rocket company, my friends would be very supportive.
I would also avoid drawing lessons from Elon. He came across in the Ashlee Vance biography as impulsive, and as exactly the kind of person who would spend billions of dollars to buy a company because he was addicted to the website and thought he could do a better job running it.
At one point I heard Alex Tabarrok say that all the theories for why American healthcare is screwed up are correct, because American healthcare is screwed up in all the possible ways. I think something similar might be true of academia.
I have heard of that Nature paper but haven’t looked into it. Obviously, there are worries about causality: There are many perfectly innocent cases of large groups working on problems that they are better suited to (data collection, perhaps) and small teams working on problems they are better suited to. Also, are the authors (or anyone else) claiming that larger group size is imposed on academics from the outside? If scientists are themselves choosing to aggregate into the group sizes that they are, then I would be sceptical of any attempt to break them up to increase innovation, like the research equivalent of antitrust.
I’m not sure how much of the lead poisoning comment is a joke, but there is a serious point to be made here about environment. From the Flynn effect, we know that IQ scores have been rising each generation. We have removed lead from petrol, along with many other environmental contaminants that were holding back the cognitive abilities of previous generations. From this, you might have expected a greater flowering of scientific and economic progress than what we have actually seen.
Is exponential growth the baseline?
Jason Crawford (Roots of Progress):
Re: “If TFP is going up and to the right, then what’s the problem? … Should I expect technology to improve by a similar amount each year, or should I expect it to improve by a similar percentage?”
IMO, exponential growth is sort of the baseline. Utility functions are ~logarithmic, therefore improvement is relative, therefore what matters is growth rates rather than absolute growth, and therefore exponential growth is basically constant growth and linear growth is deceleration. More: https://rootsofprogress.org/exponential-growth-is-the-baseline
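Jason's chain of reasoning can be checked numerically. The sketch below is my own illustration with arbitrary numbers, not taken from his post: with utility proportional to the logarithm of consumption, an exponential consumption path delivers the same utility gain every year, while a linear path's utility gains shrink over time, which is what "linear growth is deceleration" means in felt terms.

```python
import math

# Sketch (my illustration, arbitrary numbers): with ~logarithmic utility,
# exponential growth in consumption yields a constant utility gain each
# year, while linear growth yields a shrinking gain.

def utility(c):
    return math.log(c)

years = range(1, 6)
exp_path = [100 * 1.03 ** t for t in years]  # 3% exponential growth
lin_path = [100 + 3 * t for t in years]      # +3 units linear growth

exp_gains = [utility(b) - utility(a) for a, b in zip(exp_path, exp_path[1:])]
lin_gains = [utility(b) - utility(a) for a, b in zip(lin_path, lin_path[1:])]

# Exponential path: every year's utility gain is log(1.03), forever.
assert all(abs(g - math.log(1.03)) < 1e-12 for g in exp_gains)

# Linear path: utility gains shrink year over year -- felt deceleration.
assert all(b < a for a, b in zip(lin_gains, lin_gains[1:]))
```

Under these assumptions, constant *proportional* growth is the path that feels like steady improvement, which is why the exponential trend, not the linear one, is the natural baseline.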
Jason also points out that the historical norm is acceleration, not deceleration. It was easier to find new ideas in ancient Athens than in the Stone Age, despite the latter having occurred earlier in time.
In 1960: The Year the Singularity Was Cancelled, Scott Alexander argues that the defining growth pattern of civilisation is not exponential but hyperbolic growth – wherein the rate of growth is itself accelerating. Hyperbolic growth hits a point at which the quantity shoots off to infinity in finite time, and the maths breaks down. For millennia, there was a remarkably stable trend of the doubling time of world population shrinking. If you had extrapolated this forward, you would have concluded that world population would hit infinity in 2026. This led to von Foerster’s famous spoof paper in Science, predicting that the world would end on Friday, November 13th, 2026.
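The difference between exponential and hyperbolic growth is easy to see in a toy calculation. The constants below are invented for illustration, not drawn from the historical data: if the growth rate rises with the level itself, say dx/dt = k·x², the closed-form solution blows up at a finite time, which is where a finite "doomsday" date like 2026 comes from.

```python
import math

# Toy comparison (invented constants): exponential growth never reaches
# infinity, but hyperbolic growth -- dx/dt = k*x^2 -- does, at the finite
# time t* = 1/(k*x0). That is the singularity where the maths breaks down.

def hyperbolic(x0, k, t):
    """Closed-form solution of dx/dt = k*x^2; valid only for t < 1/(k*x0)."""
    return x0 / (1 - k * x0 * t)

def exponential(x0, r, t):
    return x0 * math.exp(r * t)

x0, k = 1.0, 0.01
t_star = 1 / (k * x0)  # blow-up time: t* = 100

# Just before t*, the hyperbolic path dwarfs even a fast exponential one:
assert hyperbolic(x0, k, 99.9) > exponential(x0, 0.05, 99.9)

# And it keeps multiplying without bound as t approaches t*:
assert hyperbolic(x0, k, 99.99) > 5 * hyperbolic(x0, k, 99.9)
```

Falling off the hyperbolic trajectory, as the essay describes, means the real series stopped tracking a curve like this one sometime before its singularity.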
As the essay’s name implies, sometime in the 1960s or 70s, humanity fell off the hyperbolic growth trajectory. Instead of growth rates increasing, they stayed the same or even shrank.
If this is correct, then I need to significantly amend the arguments in my original post. Ideas became easier to find for all of human history until, one generation ago, we fell off a pattern of growth in ideas and output which had held for millennia. I remain unconvinced whether exponential growth is a sensible baseline – in economics or in life.