Comment by musicale

11 days ago

> An LLM is just a reflection of the text that humans write, and humans seem very far off from having world models and reasoning that accurately reflect reality

The original sin of LLMs is that they are trained to imitate human language output.

Passing the Turing test isn't necessarily a good thing; it means we have trained machines to imitate humans (including our biases, errors, and other undesirable qualities) so well that they can deceptively pose as human.