Regarding your First Magic, passing down/transmission/tradition for millennia hardly ever involved writing down. As a professional historian I must protest that technological processes are just about the last thing that ever gets written down. Instead, in a near-universal pattern, technological processes are passed down orally within a small group of people who safeguard the “secret” of their art for their chosen successors. Of course, these groups that try to control succession usually have some amount of leakage and the secrets of the craft spread, but almost always in oral rather than written form.
It is interesting that many of these processes and inventions were written down for the first time to obtain a patent, which is another way of safeguarding technological secrets.
One day I will finish this slavery-and-abolition stack before Trump makes me, but something like "The Reinvention of Atlantic Slavery" is entirely post-Industrial Revolution, and it was a kind of magic that you could write down how the machines worked and adapt them to tropical conditions they were not designed for. So the idea that you could write it down, and that in a very different time and place (the SCA) it could be invented again, is a concept different enough from "science".
The stochastic nature of LLM responses is entirely by design. If you would prefer a deterministic response, just set the temperature parameter to zero and it will regurgitate the same response each time.
I came to say precisely this. All the network parameters are deterministic, as well as the sequence of input tokens. That there is any randomness in the input-output relationship is a pure design choice, not an indispensable aspect of these systems.
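To make the mechanism concrete, here is a minimal sketch (my own illustration, not from the post) of why temperature zero removes the randomness: an LLM turns token logits into sampling weights via softmax(logits / T), and at T = 0 this collapses to a plain argmax, so "sampling" becomes fully deterministic.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits, softmax-sampled at the given temperature."""
    if temperature == 0:
        # Greedy decoding: the highest-logit token, every single time.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]
rng = random.Random()
# At temperature 0, a hundred "samples" all land on the same token:
print({sample_token(logits, 0, rng) for _ in range(100)})  # {0}
```

At any positive temperature the same call draws from the softmax distribution, which is the randomness the commenters describe as a design choice rather than something intrinsic.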
The essay was fascinating until the commentary by ChatGPT. I find that no matter what the subject, 1) My own conversations with GenAI are fascinating to me, 2) Anyone else's conversations with GenAI are incredibly boring. Is that the case for other people? If so, it suggests something about AI's limits.
Small quibble: I think it's a mistake to say GPT is parametric but some ML models (especially the neural networks being referenced circa 2009 aka RNNs) aren't, because basically every machine learning model that can be stored and run by a computer is parametric (since it has to be in order to be stored as 1s and 0s).
IMHO neither you nor Noah is correctly describing parametric vs. non-parametric models.
A non-parametric model is not a model that has "no parameters", as you and Noah imply.
By that logic, every model with parameters would be parametric, which you then claim covers all models stored on computers, but that's not correct.
A non-parametric model just doesn't have a *fixed number* of parameters. It still has parameters. Examples:
Non-parametric: decision trees, support vector machines. Neither of these has a fixed number of parameters. In decision trees, we can always grow additional subtrees, which increases the number of parameters. In support vector machines, we can add more support vectors to increase the number of parameters. Crucially, in both cases we are progressively refining both the number of parameters, and their values, during training/learning.
Parametric: neural networks, linear models. Fixed number of parameters. The values of the parameters evolve during training/learning, but the number of parameters does not change.
From Wikipedia (https://en.wikipedia.org/wiki/Nonparametric_statistics#Non-parametric_models): "Non-parametric models differ from parametric models in that the model structure is not specified a priori but is instead determined from data. The term non-parametric is not meant to imply that such models completely lack parameters but that the number and nature of the parameters are flexible and not fixed in advance."
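The distinction can be made concrete with a toy sketch (my own illustration, not from the comment): a 1-nearest-neighbour model's "parameters" are the stored training points themselves, so their number grows with the data, while ordinary least squares always has exactly two parameters (slope and intercept) no matter how much data it sees.

```python
def fit_1nn(xs, ys):
    # Non-parametric: the "parameters" of 1-NN are the training data itself,
    # so the parameter count grows with the number of training points.
    return list(zip(xs, ys))

def fit_linear(xs, ys):
    # Parametric: ordinary least squares for y = a*x + b.
    # Always exactly two parameters, regardless of how much data we fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return (a, my - a * mx)

small = fit_1nn([1, 2, 3], [2, 4, 6])
big = fit_1nn(list(range(100)), list(range(100)))
print(len(small), len(big))  # 3 100  <- parameter count grows with the data
print(len(fit_linear([1, 2, 3], [2, 4, 6])))  # 2
print(len(fit_linear(list(range(100)), [2 * i for i in range(100)])))  # 2
```

This matches the Wikipedia definition quoted above: the non-parametric model's structure is determined by the data, while the parametric model's is fixed in advance.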
As we do not seem to fully understand how either AI or the human brain works (how do we get from everything we can see and measure in the brain to a thought in language?), my limited knowledge of AI is that it's a lot of math and statistics. That makes me wonder: is getting a thought in my brain not also mainly math and statistics, though through an organic rather than a digital process? If that were true, is there actually a fundamental difference between a human and an AI robot? Is the idea that we are a someone just an illusion, or have I seen too much science fiction?
I continue to enjoy reposts, particularly where they show your personal evolution of thought (the reality of a person, not an idée fixe). I also thought that the ChatGPT commentary was a great way to expand the discussion, and wondered if you checked more than one AI to see if there was much overlap in their commentaries.
Obligatory reference to Snow Crash. The "me" re "history" is probably not what ChatGPT was being asked, but that it knew the AI literature especially well, and knew how to soothe econometricians, is a legitimately impressive thing.
ChatGPT's writing is giving me tics; it's just as enjoyable as going to a dental clinic: functional but not fun.
It’s like protoculture from Robotech.
Nobody understands how it works, but it works. And it gives us a future full of awesome robots and warp drives and big hair.
I’m so here for it.