Comment by theptip

3 months ago

I’m gently optimistic on the subject. When bias is encoded in the synaptic weights of a human brain, it’s extremely hard to quantify. You can’t run an ablation experiment, or try many combinations of similar inputs, to determine whether a human judge is biased, for example.
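
To make that concrete, here’s a minimal sketch of the matched-input experiment in Python; `score_application` is a hypothetical stand-in for whatever model you’re auditing, not a real API:

```python
# A minimal sketch of the matched-input experiment, assuming a hypothetical
# score_application() that stands in for the model being audited.

def score_application(text: str) -> float:
    """Stand-in for a real model call; returns a score in [0, 1]."""
    # Deliberately biased so the demo below shows a nonzero gap.
    return 0.6 if "Group A" in text else 0.4

# Inputs identical except for the sensitive attribute.
TEMPLATE = "Applicant from {group}, 10 years experience, no prior defaults."
GROUPS = ["Group A", "Group B"]

def counterfactual_gap(template: str, groups: list[str]) -> float:
    """Largest score difference across otherwise-identical inputs."""
    scores = {g: score_application(template.format(group=g)) for g in groups}
    return max(scores.values()) - min(scores.values())

print(f"max counterfactual gap: {counterfactual_gap(TEMPLATE, GROUPS):.2f}")
# Prints 0.20 here; a persistently large gap across many such templates is
# quantified evidence of bias -- exactly the measurement you can't make on
# a human judge.
```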

AI is materializing existing biases, and perhaps amplifying some in the short term. This is object-level bad if we start hooking up important systems without building guardrails; e.g., I’d be worried about a “legal advice from AI” service right now.

At the meta-level, this is an opportunity to run those experiments and root out some of the areas where bias does creep in. I think a lot of the coverage of the issue (not referring to OP here) fails to look past the object level, and in doing so misses the big opportunity.

Of course, when you actually start having these conversations, you get to some very tricky discussions about what “fixing bias” actually means. In so many areas it’s a lot easier to throw around fuzzy rhetoric than to commit to a quantitative model that encodes a specific solution. But AI systems require precisely that.
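
As a toy illustration of why the quantitative question is hard: two standard formalizations of “unbiased”, demographic parity and equal opportunity, can disagree about the same classifier. The records below are made up purely to show the divergence:

```python
# Toy illustration that two standard definitions of "unbiased" can disagree
# on the same classifier: demographic parity (equal positive-prediction
# rates across groups) vs. equal opportunity (equal true-positive rates).
# The records are synthetic, purely to show the divergence.

def rate(preds: list[int]) -> float:
    """Fraction of positive predictions."""
    return sum(preds) / len(preds)

# (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

for group in ("A", "B"):
    preds = [p for g, y, p in records if g == group]
    tp_preds = [p for g, y, p in records if g == group and y == 1]
    print(f"group {group}: positive rate={rate(preds):.2f}, "
          f"TPR={rate(tp_preds):.2f}")

# group A: positive rate=0.50, TPR=1.00
# group B: positive rate=0.50, TPR=0.50
# Demographic parity holds (0.50 == 0.50) while equal opportunity fails
# (1.00 != 0.50) -- so "fix the bias" is underspecified until you pick a metric.
```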