
> LLM's do not think, understand, reason, reflect, comprehend and they never shall. ... It's amazing the results that LLM's are able to achieve. ... it also makes sense as to why it would, just look at the volume of human knowledge

Not so much amazing as bewildering that certain results are possible in spite of a lack of thinking etc. I find it highly counterintuitive that simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.





> simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.

What is a concrete example of this?


What problems have LLMs (models like ChatGPT, Claude, Gemini, etc., not special-purpose systems like the human-tuned MCTS in AlphaGo or AlphaFold) solved that thousands of humans worked on for decades and didn't solve (so, as OP said, novel)? Can you name 1-3 of them?

Wait, you're redefining novel to mean something else.

If I prove a new math theorem, it's novel - even though it's unlikely that thousands of humans have worked on that specific theorem for decades.

LLMs have proven novel math theorems and solved novel math problems. There are more than three examples already.


I’m not redefining anything; that's the definition of "novel" in science. Otherwise, this comment would be "novel" too, because I bet you won't find it anywhere on Google, but no one would call it novel.

Show me these novel problems that were solved by LLMs; name more than three, then.


You're seriously insisting that the definition of novel in science only includes things that thousands of people have worked on for decades and haven't solved?

One example is the Erdős problems (see problem 124).

But also, LLMs have solved Olympiad problems; see the results of IMO 2025. You can say that these are not interesting or challenging problems, but in the context of the original discussion, I don't think you can deny that they are "novel". This is what the original comment said:

> Not so much amazing as bewildering that certain results are possible in spite of a lack of thinking etc. I find it highly counterintuitive that simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.

I think in this context, it's clear that IMO problems are "novel" - they are applying knowledge in some way to solve something that isn't in-distribution. It is surprising that this is possible without "true understanding"; or, alternatively, LLMs do have understanding, whatever that means, which is also surprising.


Coding seems like the most prominent example.

Can you tell us more?

Unless everybody is writing the exact same code to solve the exact same problems over and over again, LLMs are by definition solving novel problems every time somebody prompts them for code. Sure, the fundamental algorithms, data structures, and dependencies would be the same, but they would be composed in novel ways to address unique use-cases, which describes approximately all of software engineering.

If you want to define "novel problems" as those requiring novel algorithms and data structures etc, well, how often do humans solve those in their day-to-day coding?
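To make "standard pieces composed in novel ways for a bespoke use-case" concrete, here is a toy Python sketch; the use-case and names are invented for illustration, not taken from any real codebase. Every ingredient is textbook (a dict, a heap, a dataclass); the only thing specific to the problem is how they're wired together:

    import heapq
    from collections import defaultdict
    from dataclasses import dataclass, field

    # Hypothetical use-case: per-tenant API quotas over a sliding
    # one-hour window. No new algorithm anywhere, just composition.
    @dataclass
    class QuotaTracker:
        limit: int
        used: defaultdict = field(default_factory=lambda: defaultdict(int))
        expiry: list = field(default_factory=list)  # min-heap of (expires_at, tenant)

        def record(self, tenant: str, now: float, ttl: float = 3600.0) -> bool:
            # Evict requests whose window has passed.
            while self.expiry and self.expiry[0][0] <= now:
                _, old = heapq.heappop(self.expiry)
                self.used[old] -= 1
            if self.used[tenant] >= self.limit:
                return False  # over quota
            self.used[tenant] += 1
            heapq.heappush(self.expiry, (now + ttl, tenant))
            return True

Whether that counts as a "novel problem" is exactly the definitional question here, but it is the kind of composition people prompt coding assistants for every day.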


Based on my experience, LLMs don't solve novel problems. They're good at generating common solutions.

This goes back to how we define "novel problems." Is a dev building a typical CRUD webapp for some bespoke business purpose a "novel problem" or not? Reimplementing a well-known standard in a different language and infrastructure environment (e.g. https://github.com/cloudflare/workers-oauth-provider/)?

I'm probably just rephrasing what you mean, but LLMs are very good at applying standard techniques ("common solutions"?) to new use-cases. My take is, in many cases, these new use-cases are unique enough to be a "novel problem."

Otherwise, this pushes the definition of "novel problems" to something requiring entirely new techniques. If so, I doubt that LLMs can solve these, but I am also pretty sure that 99.99999% of engineers cannot either.
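As a sketch of what I mean by "standard technique, new use-case" (the schema and the business rule below are invented purely for illustration), this Python is as textbook CRUD as it gets, yet the particular constraint is specific to one application:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE charges (id INTEGER PRIMARY KEY, amount_cents INTEGER NOT NULL)")
    conn.execute("CREATE TABLE refunds (charge_id INTEGER REFERENCES charges(id), amount_cents INTEGER NOT NULL)")

    def create_refund(charge_id: int, amount_cents: int) -> None:
        # Textbook reads and writes; the only "domain" part is the
        # (invented) rule that refunds can't exceed the original charge.
        (charged,) = conn.execute(
            "SELECT amount_cents FROM charges WHERE id = ?", (charge_id,)
        ).fetchone()
        refunded = conn.execute(
            "SELECT COALESCE(SUM(amount_cents), 0) FROM refunds WHERE charge_id = ?",
            (charge_id,),
        ).fetchone()[0]
        if refunded + amount_cents > charged:
            raise ValueError("refund exceeds original charge")
        conn.execute(
            "INSERT INTO refunds (charge_id, amount_cents) VALUES (?, ?)",
            (charge_id, amount_cents),
        )

Nothing here requires a new technique; the question is whether the bespoke constraint is enough to make it "novel".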



