
Comment by throwaway22032

12 days ago

Okay, so as a thought experiment, let's say we get a superintelligent LLM, one capable of connecting the dots and knowing more than we do as humans.

How do we avoid interpreting its correct results as bias? I mean, what do we do when it tells us that (fake example) IQ is correlated with height and that people above 6ft are more intelligent?

I'm sure you can think of spicier examples. Will we try to "debias" it by encouraging it to spit out incorrect information or just ignore certain topics?