Comment by simonw

4 hours ago

Do you know why OpenAI are unable to provide a "seed" parameter that's fully deterministic? I had assumed it was for the reason I described, but I'm not confident in my assertion there.

The original question was about LLMs, and what OpenAI provides goes beyond the definition of an LLM (they have caches, "memory", the undocumented o1 chain-of-thought, and so on). I'm also pretty sure they've collected a zoo of hardware, with autoscaling and per-client parallelism. And if you change the number of threads, even basic building blocks like matrix multiplication produce different results.
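The matrix-multiplication point boils down to floating-point addition not being associative: split the same reduction across a different number of threads and the accumulation order changes, so the result can differ in its last bits. A minimal sketch in plain Python (simulating the per-thread split rather than spawning real threads):

```python
# Sketch only: floating-point addition is not associative, so a reduction
# that splits its work differently (as a different thread count would)
# can accumulate in a different order and yield a slightly different sum.
import random

random.seed(0)
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# One accumulation order: a single sequential pass.
sequential = sum(xs)

# Another order: 4 interleaved chunks summed separately, then combined,
# mimicking how 4 worker threads might partition the same reduction.
chunks = [xs[i::4] for i in range(4)]
split_four_ways = sum(sum(chunk) for chunk in chunks)

# The two results agree to many digits but typically differ in the
# last few bits -- enough to break bit-for-bit determinism downstream.
print(sequential, split_four_ways, abs(sequential - split_four_ways))
```

The same effect, amplified across every matmul in a forward pass (and across heterogeneous GPUs with different kernel tilings), is why identical inputs with an identical seed can still diverge.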