Discussion about this post

Marian Kechlibar

I studied algebra and number theory and the part about mathematics sounds true.

All the heavy lifting in the proof of Fermat's Last Theorem was done by Andrew Wiles, but his proof ultimately rests on Gerhard Frey's observation that if FLT failed, a non-modular elliptic curve could be constructed - a bridge connecting far-away islands in the mathematical landscape. These bridges are rare and tend to be very productive, but first you have to notice that they can be built, and that is the hard part. Current mathematics is so large that people specialize in tiny subfields of it and have only a vague idea, if any, of what is happening in nearby subfields - much less in distant ones.
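For the record, the bridge can be stated concretely. Given a hypothetical counterexample $a^n + b^n = c^n$ with $n > 2$, Frey associated to it the elliptic curve

```latex
E:\quad y^2 = x\,(x - a^n)\,(x + b^n)
```

Ribet later proved (the "epsilon conjecture") that such a curve could not be modular, so Wiles's proof that semistable elliptic curves are modular ruled out the counterexample entirely.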

AI does not have this sort of "my brain is not big enough to fit everything" limitation. Or, technically, it does (both RAM and disk space are finite), but that limit is several orders of magnitude away right now.

So we can expect some genuinely interesting mathematical concepts from AI, not just mere slog.

Brad K

One issue I see here is verification. Scaling out dozens or hundreds of agents to do research on long tail problems or tedious sub-tasks significantly increases the likelihood of mistakes, particularly if things like computation or symbolic reasoning are handled through tokens instead of code.
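One cheap mitigation is to keep arithmetic out of the token stream entirely: re-execute every numeric claim an agent makes in actual code and compare results. A minimal sketch, assuming a hypothetical `verify_claim` helper (the restricted-`eval` approach here is illustrative, not any particular agent framework):

```python
def verify_claim(expr: str, claimed: str) -> bool:
    """Re-execute a model's arithmetic claim instead of trusting its tokens.

    Hypothetical helper: both the expression and the claimed result are
    evaluated with builtins stripped, then compared exactly.
    """
    sandbox = {"__builtins__": {}}
    return eval(expr, sandbox) == eval(claimed, sandbox)

# A token-level slip is caught by recomputation:
print(verify_claim("17 * 24", "398"))  # False
print(verify_claim("17 * 24", "408"))  # True
```

The same pattern scales to symbolic work by swapping the arithmetic check for a computer-algebra or proof-assistant call, which is exactly where the verification question below gets harder.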

Programmatically verifying chains of thought and reasoning in different domains will go a long way towards addressing this, but it's unclear (to my limited knowledge) how to robustly validate certain kinds of proofs, for example.
