117 Comments
David Hugh-Jones:

A couple of comments.

1. I think you’re too quick to dismiss what you call “oral tradition”. I’d suggest that *stories* are a kind of zeroth technology. Before written history, humans had already (for millennia!) found ways to embody their knowledge in forms they could pass down. The basic vehicle was narrative: folk tales, myths, and so on. By AD 1 there were already 100-300 million humans on the planet, many orders of magnitude more than the chimpanzees, our nearest relatives. That suggests our cumulative knowledge was already doing a lot of work before writing was widespread enough to have much effect.

2. AI may really be discovering deep principles. It just doesn’t know how to explain them to us yet. If that’s true, then work on interpretable AI may help these models tell us the new laws they have discovered. After all, there must be some regularity there; otherwise, how could they make their predictions?

Noah Smith:

1. I didn't dismiss oral tradition; I counted it as part of the accumulation of knowledge. It is a much lossier medium than writing, which makes it hard for oral tradition to disseminate information across long stretches of time and space. But it is important (still!).

2. Regularities in nature that allow repeated, generalizable, accurate predictions do not have to be things that are expressed in simple mathematical formulas. They can be very large complex sets of rules!

David Hugh-Jones:

Yeah, right. I think it’s an open question whether these AI guys are spotting deep underlying principles, or if it’s more a complicated prediction, like “is this a picture of a dolphin?”, which just involves putting together many different aspects.

Noah Smith:

I think we may never know! And that's the interesting thing...

Dmitrii Zelenskii:

On the other hand, there are reasons to think we will know: e.g., https://www.astralcodexten.com/p/god-help-us-lets-try-to-understand

Karkar:

Came to the comments to discuss these same things; glad to see this raised and answered already.

Almost always with big-concept Noah posts, I get a lot out of the post itself, but then my mind goes to these tangents, which maybe aren't totally tangents.

In this case: human knowledge and advancement, what counts as science versus magic versus data interpretation, and how modern tech and communication affect our knowledge, culture, advancement, or regression.

DxS:

"Trade" was as critical as "history". Consider how many separate expert products go into a semiconductor factory. Or how the potter's wheel disappeared in Britain after Rome's fall. Why? Because there was no longer enough population density and social trust to support full-time potters.

Without specialization, humans have a hard limit on their tools and productivity.

Without trade, there's not much room for specialization.

redbirdcurry:

How is wisdom applied with AI? We are swimming in data that might build a pyramid, but for what purpose other than to house a corpus?

Anna Labhran:

This is the first email I read in 2023 and my year is off to a good start.

I’m a mathematician and I always appreciate your portrayal of my field as powerful and relevant and not, as some seem to believe, an instrument of torture.

The distinction you draw between “understanding” and “control” helped me think about my relation to my own field. I (and many like me) see mathematics as something to be understood; “revel in the beauty of axiomatics”, as one of my students put it. I feel good when I can understand the infinity of primes, or why some sequences converge to some magic number and others do not. The experience itself is satisfying. Most of the rest of the world views my field as a tool that can help attain the control you describe, including my engineer husband and physicist daughter. I guess this is roughly the difference between pure and applied mathematics.

I had a professor in a graduate mathematical modeling class who told us that someday we would be able to model human behavior as accurately as we could then model science. We needed only to learn a sufficient amount about human behavior, which he believed was a certainty.

I gave the aforementioned daughter a subscription to this blog. I think she will like it.

Marc:

Have you read Infinite Powers by Steven Strogatz? I just started it, and I have zero background in math (I even failed HS calculus), but the book still intrigued me. Feynman always described calculus as the language God speaks, and without it we would not have many of today's technological achievements and things we take for granted, like GPS and wifi.

Lee Drake:

I very much like the point about history being a form of magic, though I would note that chimpanzees do indeed pass tool-making down generations. Goualougo chimpanzees not only pass down tool traditions, but also use them in specific orders for step-wise problem solving. https://www.pnas.org/doi/full/10.1073/pnas.1907476116

Which brings me to the main thought I had on reading this article. The main distinction to be found isn’t dividing AI from science, but rather dividing certainty science from uncertainty science. To date, the biggest improvements in our lives have come from certainty calculations: the precise use of calculus or other elegant solutions to come to conclusions. This is most important in engineering and other applied fields. To me, the natural end point of this type of math was string theory, the proposed reconciliation between general relativity and quantum mechanics. It is the end point because it is not falsifiable by its own rules; it may be correct, but we lack the ability to determine that scientifically.

The other tradition, uncertainty, is messier, and for a long time was looked down upon. It has its roots with Bayes, Galton (Darwin’s cousin), and Ronald Fisher. Rather than try to precisely calculate solutions, it tries to draw lines around our ignorance. For a long time uncertainty was treated as a cheap trick to sell certainty in exchange for academic job security (see p-hacking). But the advent of computers supplied the calculation power to explore the boundaries of that ignorance in ways that became a creative force. That is the origin of AI. The raw tools of AI are tools for managing uncertainty. With sufficient complexity, it can be generative.

Which brings us back to the magic of history. The world was fundamentally chaotic. An old professor of mine once pointed out that if you took the most conservative population growth rate for any observed hunter-gatherer population and applied it from the start of human migration into Eurasia, you’d expect 13 trillion people today (compound interest is a bitch). He reconciled the contradiction between comparatively low population and the compounding effects of observed growth rates with a small probability of cataclysm every generation: if something like 85% of the population was lost to tragedy every once in a blue moon, the scales balance again. This non-linear impact of selective sweeps is also a big driver of evolution. If a species normally eats soft nuts but resorts to hard nuts when times are tough, then the beaks for cracking the refugia food become important, and would confuse researchers who observe the birds eating food that their beaks are over-prepared for.
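The professor's back-of-the-envelope argument is easy to check with a quick compound-growth sketch. The specific numbers below (10,000 founders, a 0.05% annual growth rate, a 40,000-year horizon, a die-off interval of ~3,800 years) are illustrative assumptions of mine, not his figures, but they show the shape of the argument: even tiny growth rates explode over millennia unless periodic catastrophes cancel them out.

```python
def project(pop0: float, annual_rate: float, years: int) -> float:
    """Compound population growth: pop0 * (1 + r)^years."""
    return pop0 * (1 + annual_rate) ** years

# Unchecked growth: a very conservative 0.05%/year rate over 40,000 years
# turns 10,000 founders into trillions of people -- far above the ~8 billion
# actually alive today, so something must have repeatedly culled numbers.
projected = project(10_000, 0.0005, 40_000)
print(f"unchecked projection: {projected:.3g}")  # on the order of 10^12

# An 85% die-off roughly every 3,800 years brings the net growth factor
# per catastrophe cycle back to about 1.0, i.e., near-zero long-run growth.
per_cycle = 0.15 * (1 + 0.0005) ** 3_800
print(f"net factor per catastrophe cycle: {per_cycle:.2f}")
```

The exact interval between catastrophes is just whatever makes the survival fraction and the compounding cancel; the point is that very rare, very large losses are enough to flatten exponential growth.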

History, agriculture, and the scientific revolution each solved a cause of these precipitous population collapses. And each contributed to a non-linear change in population numbers. I do not know if AI will do the same. But if it does, it will be because statistics solves the same problem history did - how to confront our ignorance in a terrifyingly dynamic environment.

Lee Drake:

TBH, if history works because it tells us how we solved problems in one iteration of history, then statistics/AI can generate counterfactual histories that let us robustness-test strategies. Inductive alternate-history simulation can differentiate between flukes and meaningful actions.

Noah Smith:

That's interesting about the chimps!!

Michael Kelly:

I think what's missing from the human population growth argument is the limited area of the pockets of habitat suitable for pre-mechanized agricultural peoples.

Sinity:

"Do we need to understand things in order to predict and control them?"

For some reason, that makes me think of von Neumann's quote, “All stable processes we shall predict. All unstable processes we shall control”.

Dr Dawood Mamoon:

Dear Noah,

What a treat to start the year 2023. This article is a kind of philosophy of logic and knowledge, and a great treatise on human evolution: evolution is never explicitly mentioned, but you address the most fundamental issues and plausibly argue the clear relevance of the foundations of evolution to the modern world. You are absolutely right, and I guess the long-understood message of the evolutionary sciences has unfortunately been lost in the white noise that the so-called MAGA movement of Trump has started. For example, you are absolutely right that the difference in human social, economic, and scientific evolution is directly tied to our ability to record history, through the stories of our forefathers or by noting down techniques, and that is what makes us different from the animal kingdom. That is exactly what I have understood too. What I mean by that, and what animal rights advocates have understood, is that animals are not stupid in terms of their cognitive evolution. A dog, for example, that friendly and amazing animal, or any animal for that matter, has a cognitive grasp of basic mathematical notions: a puppy understands zero versus more when its mother has gone to fetch food, or when it is among the other puppies. Qualitative complexity is also fed into animals by nature, as they can differentiate between other species and between things like food and shelter. Why humans are different comes down to the physical characteristics rightly proposed by evolutionary theories: thanks to our superior physical characteristics, we humans can make complex sounds, leading to complex words and languages, and we can write.

This is also a lesson for a lot of fanciful science fiction and religious dogma, where superiority is attributed to physical strength without adhering to the evolutionary logic that rules out an intelligent deity or species that, for example, has ten hands or the face of an elephant yet invents spaceships that travel faster than light. This really narrows the possible evolutionary characteristics of intelligent species in the universe as members of intergalactic civilizations. That is why, in many of my writings, I claim that humans are the most aesthetic life form in the universe and plausibly the best outcome of nature.

Once your readers grasp the power of the opening lines of this post, the brilliance of the article unravels in its later parts.

Nature and humans are, in the first place, the marvel exhibit and, as you also suggest, a more scientific exhibit than anything created by humans themselves. So despite the wonders of the modern world, the argument for preserving nature in perfect balance is supreme, and it is the responsibility of the scientific world to understand and model the interactions between science itself and the natural world.

If you and I and many others can construct this argument, then, as I claim in a few of my articles, the third party, AI, should only be considered intelligent if future 'intelligent' AI also understands this argument. In other words, only if AI, through us, understands the natural world and its rules, which we have been capturing in scientific and mathematical algorithms that may eventually model the complexity of the natural world with its contextual controls, can that AI be considered an intelligent extension of evolution. The first and most foundational piece of information in the DNA of such an AI is that nature must be preserved in the right balance, with humans still at center stage in the natural life cycles of biology.

Almost everything humans have created so far, whether in economics or science, has been for the benefit of humanity. Yet the aware segments of both ancient and modern societies talk about preserving nature and living in harmony with it, and that is the central message of science as well, one that modern urban life has usually disavowed, because economic models have simply failed to capture the complexity required to create policy that can universally ensure it. We are still at very early stages in economics and sociology, even as new cultures form around the rapid pace of human scientific conquest and the consumer applications it introduces.

All in all, a great way to start 2023 with this article, and I hope it restarts a debate among your readers that has been somewhat lost amid the futile race-centric movements that have been started in the US.

Noah Smith:

Thanks, Dawood!

Deborah Richardson Evans:

Thanks for sharing, I appreciate you.

Robert Brockman:

Fascinating ideas. I especially agree with the notion that humans can accumulate and share innovation, whether technology or schools of thought. However, I propose stepping back to recognize two even more basic human abilities. I got these ideas from observing my very smart two year old grandson.

First is the ability to generalize. This involves humanity’s ability to extend a particular experience or thought and understand it as being relevant to seemingly unrelated experiences. This ability allows us to exploit those insights even when entirely new experiences are encountered.

The second is the ability to form abstractions. You cover this concept very well when you describe how much we depend upon our ability to conduct “thought experiments”. Crucially, the insights realized from these virtual experiments can be efficiently documented and communicated across time and cultures.

Xim:

Great post; it seems really plausible for AI and humans to go this way. Thanks!

John Hardman:

I appreciate your enthusiasm, but reading your post I am reminded of the 'sorcerer's apprentice' scene in Disney's Fantasia, where the apprentice attempts to use his new power without understanding it, creating all sorts of mayhem.

As a social scientist, I see a problem with our technology getting ahead of our sociology and psychology. Unleashing great powers without understanding and incorporating the consequences of these new powers is, frankly, horrifying.

All these bright, shiny "magics" can be a blessing or a curse. How do we 'humanize' or make these technologies humane before releasing them on the world?

"We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely."

E. O. Wilson

Steersman:

Quite agree that the "sorcerer's apprentice" is a durable parable for the problems of technology. Though one might argue that the myth of Prometheus is an even earlier one on the same topic.

But, ICYMI, another favorite and relevant quote of Wilson's is his:

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

https://www.goodreads.com/quotes/9770741-the-real-problem-of-humanity-is-the-following-we-have

John Hardman:

Yes, the ancient Greeks considered hubris, attempting to emulate the gods, to be the worst sin. So here we are with god-like technology and neither the biology nor the sociology to properly use it. We are like toddlers given a loaded handgun.

John Hardman:

The Atlantic just posted an article doing a deep dive on the perils of opening the Pandora's box of AI before understanding the implications. "Creating AI will be the last act of human creation."

https://www.theatlantic.com/newsletters/archive/2023/01/is-this-the-start-of-an-ai-takeover/672628/?utm_source=newsletter&utm_medium=email&utm_campaign=up-for-debate&utm_content=20230103&utm_term=Up%20for%20Debate

Phil Tanny:

Everybody is all wound up about AI right now, but...

We should shift some focus from the various tools rolling off the end of the knowledge-explosion assembly line to the assembly line itself. For example...

As a thought experiment, imagine for a moment that we somehow magically resolved all concerns with AI and made it 100% safe. Would this really matter if the knowledge explosion continues to generate ever more, ever larger powers, at an ever accelerating pace?

So long as the knowledge explosion continues to accelerate, it will create new challenges faster than we can figure out how to manage them. For example, after 75 years we still don't have the slightest clue what to do about nuclear weapons. And while we've been wondering about that, the knowledge explosion has added AI and genetic engineering to the list of things we don't know how to make safe. And the 21st century is still young. AI and genetic engineering are just the beginning, not the end, of the challenge parade.

It's not this or that technology which is the real problem, but rather the process generating all the threats. Until we address that, we are playing a game of whack-a-mole that we will inevitably lose sooner or later.

https://www.tannytalk.com/p/our-relationship-with-knowledge

John Hardman:

Your term - knowledge explosion - is an apt description, but do you not see the violence behind it? Is there some other way to grow humanely, without having "explosions" shatter our minds and organizations? This is violence, a war by technology.

No, technology is not the real problem, the slow pace of psychology and sociology is, but technology is simply a 'thing' and humanity is not. We are chasing bright shiny 'things' and forgetting the humans who are being left behind. We are at a breaking point yet we still ignore the warning signs and chase the bling.

Phil Tanny:

As I see it, it's a case of trying to map a "more is better" philosophy of knowledge, left over from the long era of knowledge scarcity, onto a new and very different era characterized by knowledge growing in every direction at an ever-accelerating rate. "More is better" used to make perfect sense, and we don't realize that it no longer does. We aren't adapting to a new environment.

Phil Tanny:

Wow, could someone please show me how to use AI to like this post 42 times? What you write is so true, and you wrote it so well. Yes, the relationship between knowledge and wisdom is way out of whack. I wrote about this just this morning, and would welcome your input if interested. https://www.tannytalk.com/p/knowledge-knowledge-and-wisdom

John Hardman:

Thanks for the compliment. I will do the 'wise' thing and chew on your tannytalk article for a while before I answer. No, I don't have a substack yet, but I guess it is time. Ciao...

Phil Tanny:

PS: Here's hoping you have a substack somewhere, or other journal of your insights.

Curated Notes:

It’s great that you mentioned Google’s AlphaFold. Few understand how powerful it is to accurately predict protein folding. It will bring drug development into a new era and much more.

Can’t wait to see the next iterations of AI.

Phil Tanny:

Yes, I didn't know about AlphaFold and was interested to learn about it. An AI writer friend of mine said that AI isn't yet developing new knowledge; I'll point him to this example.

Joel McKinnon:

Noah, I found this piece lucid and inspiring, and I'm so glad I resubscribed recently. As the host of a podcast about Isaac Asimov's Foundation called Seldon Crisis, I've been pondering the concept of psychohistory for quite a while. In the novel, this was a mathematical approach to predicting the future based on the huge dataset available at the time the story is set - 20,000 years in the future or so - with a quintillion human beings and 30 millennia of past events as input. Most commentators since Asimov's time have dismissed this idea as impossibly naive considering complexity theory, but the principles you're describing here give me some hope that something like psychohistory may become more and more possible with the help of AI systems that act as a black box. In other words, we might not need to understand why the future can be predicted to a high degree of probability in order to do so.

I'd actually love to have you on the podcast sometime to chat about this. Please take a look at https://seldoncrisis.net and let me know if you're interested.

Fernando Pereira:

The following needs more elaboration, but as an AI researcher and practitioner, I don’t see the possibility of such long-range predictions, any more than we can do extremely long-range weather predictions (even with the benefit of physical theory), because observable variables contain much less information than the process being modeled, and small observation errors blow up into completely different futures (the butterfly effect).

james mawson:

Yes, Fernando, good points. But taking @Noah's example of the economic impacts of AI-augmented R&D, getting to better estimates of the future is directionally in line with Asimov's psychohistory. Too many variables, of course, and the longer the time horizon, the less certainty, perhaps; but if some factors outweigh others, then AI could help uncover them, so more effort can be focused there. I'm interested in Joel's podcast as a result (disclosure: my work is more about capital flows and financial investment in innovation).

https://arxiv.org/pdf/2212.08198.pdf

Michael Francis:

Thank you for a wonderfully articulate essay on a very important topic. I suppose there are downsides, but I see your analysis as a most optimistic contribution to knowledge expansion. I enjoyed reading it and, I anticipate, will enjoy reading it many more times. Happy New Year.

Benjamin Clark:

This is an excellent essay, Noah. For me, it stitches together a few observations from very different places into a more cohesive narrative.

It's useful to actively think about what we can realistically hope to know, understand and control about complex systems. The natural sciences are a good demonstration that even when you do know the underlying rules and they are simple, applying them directly can be useless (e.g. one doesn't use the Standard Model to do chemistry, let alone model cells). Our expectations for understanding complex systems which are the result of human civilization (without such simple underlying rules) should be set correspondingly. Viewing AI as an approach to that fundamental challenge is a real change in perspective.

Charlie Rose:

“The authors use deep neural nets (i.e., AI) to look at daytime satellite imagery, in order to predict future economic growth at the hyper-local level.”

From my reading of the paper by Khachiyan et al., they didn’t “predict” anything. They built a model that fit well with previous data. They built a model that describes something that already occurred. That’s not a prediction.

IF this model happened to also be an excellent predictor of economic growth over the next 10 years, and enough people knew about it, then behaviors would change and destroy the model. For example, elected officials might move resources to an area that shows negative economic growth to try to improve it. Other speculators might begin to invest heavily in areas with high potential for economic growth, which could accelerate the growth, but could also have the opposite effect by raising the cost of local resources like land.

In order to be even mildly successful, predictive models that people can affect must be kept secret. Simply publishing the model destroys its predictive ability, because behaviors change.

Bobson:

There are some analog-thinking adages that are great for revealing the stress and pressure points of digital technology.

The observer effect in physics states that the mere act of observation changes the state of the property being observed. Psychology has a similar principle: people behave differently when they are aware of being watched.

Campbell's law is the adage that any tool or method of evaluation will be subject to political pressure, which corrupts the outcomes it was designed to evaluate.

Goodhart's law is the adage of "When a measure becomes a target, it ceases to be a good measure". In other words, when a measure becomes a basis for decision-making, it fundamentally alters not only what decisions are going to be made, but also the consequences of the decision itself and the alternatives not chosen.

Charlie Rose:

Fundamentally, I was addressing the false claim that this predicted future growth, when it was actually a model of what already happened, not a prediction.

But yes…to everything you said, and Thank You for the references!

Karkar:

This post really has me thinking about where we are going.

I really like the last part, on understanding versus blindly predicting and knowing from data, and on how economics has evolved, because it gets at something that was nagging me about assertions earlier in the post.

One frustration I've often had with modern scientists, like medical researchers and economists, is their inability to adapt to compelling new evidence and data in a timely fashion, simply because they can't know the exact causes. The only thing real to them is something with a defined cause that a simple algorithm can be written for, and they don't change even when real-world data is staring them in the face (say, minimum-wage effects in the economy, or masking working to reduce the spread of airborne disease, to name some recent examples).

It feels like all of human time has been this struggle to balance and define "science" and "magic": verifiable data versus legends, myths, and stories.

History was an early magic power for humans, long before it was written. Now we may have a new "magic" that is not that different.

More isolated cultures in specific locations developed highly accurate and seemingly impossibly long oral traditions that recorded astronomical data, geologic events, and climate changes from thousands to 10,000+ years ago. These accurate records were likely necessary for survival, for hunting, fishing, and farming knowledge, and it may be that only the cultures that made such high-priority oral traditions survived, given how common these rigorous oral records are in so many places around the world. Highly disciplined oral histories were similar to what we later did with books, and gave us humans an advantage.

And this old way of observing and recording really has me thinking where we are going with our modern tech.

Oral traditions can be seen as messy by modern Western eyes, just as AI and data-driven, non-understandable predictions are, and as social media memes are. Oral traditions used stories, songs, and myths to aid memory of how stars moved or animals behaved, and also reinforced culture for better results and survival: stay away from that Crater Lake, because volcano gods fight there; or run away from the coast to the inland after an earthquake when the sea goes out, because it's a sign an enemy tribe has cast a spell to create a tsunami. That improves survival rates immensely compared to new settlers in the area who are uneducated about tsunami dangers. See even modern people getting needlessly killed in Thailand for lack of this education; maybe a scary myth would have transmitted this knowledge better than our books? Obviously, places like Japan actually recorded tsunamis from 1000+ years ago in writing and had wide preparations because of that and science, which is another way.

So after humans populated the whole world and sailed to remote places, we had the period Noah describes, with written history and science. And now, with new tech, we seem to be going back (maybe we never really left that much) to observation and extrapolation without knowing causes, and to stories and myths shaping culture more effectively than science and books.

We now have AI, and also memes doing more education than schools, and TikTok and YouTube videos transferring more knowledge (or misinformation) than books. Tell people stories, raise their fears, or give them characters, and you influence them.

And AI starts to act like magic.

When we don't know how reliable AI is, and yet it amazes us most of the time, is it that different from Joseph telling the pharaoh his dreams mean there will be 7 years of plenty followed by 7 years of famine?
