Comment by SirMaster

12 days ago

I feel like "information systems" have always struggled with bias, and the latest AI/ML systems seem to be no different.

It doesn't really seem like a problem that can or will ever be "solved", only mitigated to various extents. Some underlying biases will likely remain that are never fully or effectively filtered out, because to adjust for a bias you first have to detect and understand it.

It feels like it would be a full-time job just to make sure an evolving model stays "neutral".

Considering that bias is in the eye of the beholder, a biasless language model is a beholderless language model.

The nomenclature is poor, IMO; we should be talking about bias-aligned models: models aligned to our specific sets of biases. That would be truer to what's actually happening.