took me a while but i read it. thought it was actually a pretty good, well-researched paper that does a good job arguing its thesis. thanks for sharing
Is this AI paper written by a reputable subject-matter expert? It seems to be written by a physicist, and it appears to be the only academic work by this author in English.
How many articles on this topic do we imagine there are? Thousands? Hundreds of thousands? It is hopeless to read every one by every author, no matter how unrelated to the domain, and judge them individually on their merits. Being a domain expert is not a perfect measure of paper quality, but it's the only feasible way to make a first pass at filtering.
Even if I did read it, I have no hope of telling whether it has made a fundamental mistake, because I don't have the subject-matter expertise either.
(I imagine it has made a fundamental mistake anyway: for LLMs to be useful progress toward AGI, they don't have to be a feasible way to create AGI by themselves. Innovation very often involves stepping through technologies that end up being only a component of the final solution, or inspiration for it. This was always going to be a problem with trying to prove a negative.)
The core claim, as quoted from the abstract: "AGI predictions fail not from insufficient compute, but from fundamental misunderstanding of what intelligence demands structurally."
It then goes into detail about what that is and why LLMs don't fit it. There are plenty of other similar papers out there.
https://www.arxiv.org/pdf/2511.18517