Comment by ds_opseeker

14 days ago

> a system we use to ask questions expecting truthful answers.

yes, I still wonder how LLMs managed to generate this expectation, given that they have no innate sense of "truth" nor are they designed to return the most truthful next token.

That expectation emerged because truthful answers have largely been the goal of the field of AI research since its inception.

LLMs stepped into a field that has existed in the popular consciousness for decades, and the companies running LLMs for public use *sell* them on the idea that they're useful as more than just expensive text-suggestion machines.