How Do I Make Ray.tune.run Reproducible?
Solution 1:
(This answer focuses on the class API and Ray version 0.8.7. The function API does not support reproducibility due to implementation specifics.)
There are two main sources of nondeterministic results.
1. Search algorithm
Every search algorithm supports a random seed, although the interface to it may vary. This seed initializes the hyperparameter-space sampling.
For example, if you're using AxSearch, it looks like this:
from ax.service.ax_client import AxClient
from ray.tune.suggest.ax import AxSearch

# Fixing the Ax client's seed makes the suggested configurations deterministic.
client = AxClient(..., random_seed=42)
client.create_experiment(...)
algo = AxSearch(client)
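The parameter name differs between search algorithms. As a minimal sketch for a HyperOpt-based search (assuming the random_state_seed parameter of HyperOptSearch, which exists in recent Ray releases; check that your version supports it), it might look like this:

from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch

# Hypothetical search space for illustration only.
space = {'lr': hp.loguniform('lr', -10, -1)}

# random_state_seed fixes the sampling of hyperparameter configurations.
algo = HyperOptSearch(space, metric='loss', mode='min', random_state_seed=42)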
2. Trainable API
Trial execution is distributed among worker processes, which requires seeding within the tune.Trainable class. Depending on the tune.Trainable.train logic that you implement, you need to manually seed numpy, tf, or whatever other framework you use inside tune.Trainable.setup, passing the seed through the config argument of tune.run.
The following example is based on RLlib PR 5197, which addressed the same issue:
from ray import tune
import numpy as np
import random

class Tuner(tune.Trainable):
    def setup(self, config):
        # Seed every library whose randomness affects the trial.
        seed = config['seed']
        np.random.seed(seed)
        random.seed(seed)
        ...

...
seed = 42
tune.run(Tuner, config={'seed': seed})
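If your trainable also uses PyTorch or TensorFlow, the same pattern applies. A minimal sketch, not part of the original answer: torch.manual_seed and tf.random.set_seed are the standard framework calls, with tf.random.set_seed assuming TensorFlow 2.x.

import random
import numpy as np
import torch
import tensorflow as tf
from ray import tune

class Tuner(tune.Trainable):
    def setup(self, config):
        seed = config['seed']
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)    # seeds PyTorch's CPU RNG
        tf.random.set_seed(seed)   # seeds TensorFlow's global RNG (TF 2.x)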