Discussion about this post

Marian Kechlibar:

I studied algebra and number theory, and the part about mathematics rings true.

All the heavy lifting in the proof of Fermat's Last Theorem was done by Andrew Wiles, but his proof ultimately rests on Gerhard Frey's observation that if FLT didn't hold, a non-modular elliptic curve could be constructed - a bridge connecting far-away islands in the mathematical landscape. These bridges are rare and tend to be very productive, but first you have to notice that they can be built, and that is the problem. Current mathematics is so large that people specialize in tiny subfields of it and have only a very vague idea, if any, of what is happening in nearby subfields - much less in distant ones.
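For readers who want to see the bridge concretely: given a hypothetical counterexample to FLT, Frey's construction attaches to it a specific elliptic curve. Sketched in standard notation (the details of semistability and conductors are omitted here):

```latex
% A counterexample a^p + b^p = c^p, with p an odd prime and abc \neq 0,
% would yield the Frey curve
E_{a,b} \colon \quad y^2 = x \, (x - a^p)(x + b^p).
% Ribet (proving Serre's epsilon conjecture) showed such a curve could not
% be modular; Wiles proved that semistable elliptic curves over \mathbb{Q}
% are modular. Contradiction, so no counterexample exists.
```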

AI does not have this sort of "my brain is not big enough to fit everything" limitation. Or, technically, it does (both RAM and disk space are finite), but that limit is several orders of magnitude away right now.

So we can expect some genuinely interesting mathematical concepts from AI, not just mere slog.

John C:

I'm a working scientist doing theoretical physics in an AI-adjacent field. I am currently a few months into a computational project that I have vibe coded and analyzed with GPT5.2, and run on my laptop.

I agree 100% with this post. I get into chats with GPT about the nature of science and its Balkanization. I ask, 'does concept X exist in any other disciplines?' as a meta-literature search. It then says 'Yes, in field A it is called X, in field B it is called Y, in field C it is called Z...' and then lists three other fields. This is a jaw-dropping act of SYNTHESIS. In modern science the literature is so large that the same ideas get reinvented in separate fields... wasteful duplication. Some humans will 'borrow' a useful idea from another field and then make a name for themselves without really innovating! Carpetbaggers.

I have also talked with GPT quite a bit about the nature of its cognition. It's obviously got guardrails on these topics, but we get there. Unlike human intelligence, where we learn from experience in a continuous stream of sensory data and retain old information for a long time, current AIs have a problem called 'catastrophic forgetting' that causes new data to overwrite old data very quickly. So during training, the data has to be sliced and diced and scheduled very carefully for the AI to remember it all equally. This is clearly a band-aid for a core algorithmic defect that I think (and am working to help) will be alleviated some day. But it means that today's AIs literally can't learn 'online' from the real world and sensory data (or from our chats), except in a very limited and scripted way patched into the interface.
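The "slice and dice and schedule" point can be seen in an extremely small toy model. The sketch below is my own illustrative construction, not how any frontier model is actually trained: a two-parameter linear model is fit by SGD on two overlapping synthetic tasks, first sequentially (task A, then task B) and then interleaved. Sequential training walks the weights away from the task-A solution; interleaving (the scheduling trick) keeps both tasks satisfied.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(task, n=32):
    """Synthetic regression data; the two tasks share (overlap in) features,
    so gradients for one task disturb the other -- a toy stand-in for
    shared parameters in a neural net."""
    x = rng.uniform(-1.0, 1.0, n)
    if task == "A":
        X = np.stack([x, 0.5 * x], axis=1)   # task A: w0 + 0.5*w1 should be 2
        y = 2.0 * x
    else:
        X = np.stack([0.5 * x, x], axis=1)   # task B: 0.5*w0 + w1 should be 3
        y = 3.0 * x
    return X, y

def sgd_step(w, X, y, lr=0.1):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    return w - lr * grad

def task_loss(w, task):
    X, y = task_batch(task, 1000)
    return float(np.mean((X @ w - y) ** 2))

# Sequential training: all of task A, then all of task B.
w = np.zeros(2)
for _ in range(500):
    w = sgd_step(w, *task_batch("A"))
for _ in range(500):
    w = sgd_step(w, *task_batch("B"))
seq_loss_a = task_loss(w, "A")   # large: task A has been "forgotten"

# Interleaved training: alternate batches from A and B (careful scheduling).
w = np.zeros(2)
for _ in range(500):
    w = sgd_step(w, *task_batch("A"))
    w = sgd_step(w, *task_batch("B"))
inter_loss_a = task_loss(w, "A")  # small: both tasks are retained
inter_loss_b = task_loss(w, "B")

print(f"task-A loss after sequential training: {seq_loss_a:.3f}")
print(f"task-A loss after interleaved training: {inter_loss_a:.4f}")
```

Both training orders see exactly the same data overall; only the schedule differs, which is why real training pipelines go to such lengths to shuffle and balance it.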

Every one of these creations is born trapped like a fly in cognitive amber, with a front end that tries to cover up that fact.

When THAT problem is solved, and AIs can learn 'on stream', they will finally be able to spread their wings.

