Comment by seb314

8 months ago

The state space of chess is so huge that even a giant training set would be a _very_ sparse sample of the Stockfish-computed value function.

So the network still needs to do some impressive generalization to "interpolate" between those samples.
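A quick back-of-envelope sketch of just how sparse that sample is. The ~4.8e44 figure is Tromp's published estimate of the number of legal chess positions; the training-set size is a hypothetical round number for illustration, not a figure from the paper:

```python
import math

# Tromp's estimate of the number of legal chess positions (~4.8 * 10^44).
LEGAL_POSITIONS = 4.8e44

# Hypothetical "giant" dataset: ten billion Stockfish-labeled positions.
# (Illustrative round number, not taken from the paper.)
TRAINING_SAMPLES = 1e10

coverage = TRAINING_SAMPLES / LEGAL_POSITIONS
print(f"fraction of state space sampled: {coverage:.1e}")
print(f"orders of magnitude short of full coverage: {-math.log10(coverage):.0f}")
```

Even under these generous assumptions, the training set covers roughly one position in 10^35, so the network is essentially never evaluated near a training point and has to generalize from structure it has learned, not from nearby samples.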

I think so, anyway (I didn't read the paper, but I worked on AlphaZero-like algorithms for a few years).