AutonomousExperimenter


class AutonomousExperimenterGP(parameter_bounds, instrument_func, hyperparameters,
    hyperparameter_bounds, init_dataset_size=None, acq_func='covariance',
    cost_func=None, cost_update_func=None, cost_func_params={},
    kernel_func=None, prior_mean_func=None, run_every_iteration=None,
    x=None, y=None, v=None, dataset=None, append_data=False,
    compute_device='cpu', sparse=False, training_dask_client=None,
    acq_func_opt_dask_client=None)

Parameters:

  • parameter_bounds

  • instrument_func

  • hyperparameters

  • hyperparameter_bounds

Optional Parameters:

  • init_dataset_size = None: int or None; if None, initial data has to be provided via x, y, v or dataset

  • acq_func = "covariance": acquisition function to be maximized in the search for new measurements

  • cost_func = None

  • cost_update_func = None

  • cost_func_params = {}

  • kernel_func = None

  • prior_mean_func = None

  • run_every_iteration = None

  • x = None, y = None, v = None: initial data can be supplied here

  • append_data = False: append data or communicate the entire dataset

  • compute_device = "cpu"

  • sparse = False

  • training_dask_client = None

  • acq_func_opt_dask_client = None
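
A minimal instantiation sketch, assuming the import path gpcam.autonomous_experimenter; the simulated instrument function, its data-dict convention, and all numerical values below are illustrative assumptions, not part of this reference:

import numpy as np
from gpcam.autonomous_experimenter import AutonomousExperimenterGP

# Hypothetical instrument: receives a list of data-point dicts and fills
# in the measured "value" for each suggested "position" (assumed convention).
def instrument(data):
    for entry in data:
        entry["value"] = np.sin(np.linalg.norm(entry["position"]))  # simulated measurement
    return data

parameter_bounds = np.array([[0.0, 10.0], [0.0, 10.0]])  # 2d search space
hyperparameters = np.ones(3)                             # e.g. signal variance + 2 length scales
hyperparameter_bounds = np.array([[0.01, 100.0]] * 3)

ae = AutonomousExperimenterGP(parameter_bounds, instrument, hyperparameters,
                              hyperparameter_bounds, init_dataset_size=20)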


Class Methods


train(pop_size=10, tol=1e-6, max_iter=20, method="global")


train_async(max_iter=20, dask_client=None)


kill_training()  # kills the training but leaves the client alive to be reused


kill_client()  # kills the training and the client


update_hps()
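
Continuing the sketch above, an asynchronous-training workflow using the methods listed here; the dask setup is an assumption (any dask.distributed client should do), and the comments describe assumed behavior:

from dask.distributed import Client

client = Client()  # local dask cluster (assumption: any distributed client works)
ae.train_async(max_iter=10000, dask_client=client)  # training runs in the background

# ... keep measuring / analyzing while training proceeds ...

ae.update_hps()     # pull in the best hyperparameters found so far (assumed behavior)
ae.kill_training()  # stop training but keep the client for reuse
ae.kill_client()    # or: stop training and shut the client down as well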


go(N=1e15, breaking_error=1e-50,
   retrain_globally_at=[100, 400, 1000],
   retrain_locally_at=[20, 40, 60, 80, 100, 200, 400, 1000],
   retrain_async_at=[1000, 2000, 5000, 10000],
   retrain_callable_at=[], update_cost_func_at=[],
   acq_func_opt_setting=lambda number: "global" if number % 2 == 0 else "local",
   training_opt_callable=None, training_opt_max_iter=20,
   training_opt_pop_size=10, training_opt_tol=1e-06,
   acq_func_opt_max_iter=20, acq_func_opt_pop_size=20, acq_func_opt_tol=1e-06,
   acq_func_opt_tol_adjust=[True, 0.1], number_of_suggested_measurements=1)

go() starts the autonomous data-acquisition loop.

Optional Parameters:

  • N = 1e15: run N iterations

  • breaking_error = 1e-50: run until breaking_error is reached

  • retrain_globally_at = [100,400,1000]

  • retrain_locally_at = [20,40,60,80,100,200,400,1000]

  • retrain_async_at = [1000,2000,5000,10000]

  • retrain_callable_at = []: if this list is not empty, training_opt_callable has to be provided (see the sketch after this list)

  • update_cost_func_at = []: iterations at which the user-defined cost function is updated

  • acq_func_opt_setting: callable (e.g. a lambda) that receives the iteration number and returns how the acquisition function is optimized in that iteration ("global", "local", "hgdl", other ask() settings; see above)

  • training_opt_callable = None: callable of the form def func(obj), has to return a 1d numpy array of hyperparameters (see the sketch after this list)

  • training_opt_max_iter = 20

  • training_opt_pop_size = 10

  • training_opt_tol = 1e-6

  • acq_func_opt_max_iter = 20

  • acq_func_opt_pop_size = 20

  • acq_func_opt_tol = 1e-6

  • acq_func_opt_tol_adjust = [True, 0.1]

  • number_of_suggested_measurements = 1
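
A sketch of a custom training_opt_callable as described above; the attribute access on obj is a guess for illustration only, the documented contract is just that the callable returns a 1d numpy array of hyperparameters:

import numpy as np

def my_trainer(obj):
    # Placeholder strategy: draw hyperparameters uniformly from the bounds.
    # (Assumption: obj exposes the bounds as obj.hyperparameter_bounds.)
    lower = obj.hyperparameter_bounds[:, 0]
    upper = obj.hyperparameter_bounds[:, 1]
    return np.random.uniform(lower, upper)  # 1d numpy array, as required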

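Continuing the sketch above, a usage example of go(); the iteration count, retraining schedules, and alternation lambda are illustrative choices:

ae.go(N=200,
      retrain_globally_at=[20, 50, 100],
      retrain_locally_at=[30, 60, 90],
      acq_func_opt_setting=lambda number: "global" if number % 2 == 0 else "local",
      number_of_suggested_measurements=1)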


class AutonomousExperimenterFvGP(parameter_bounds, output_number, output_dim,
    instrument_func, hyperparameters, hyperparameter_bounds,
    init_dataset_size=None, acq_func='covariance', cost_func=None,
    cost_update_func=None, cost_func_params={}, kernel_func=None,
    prior_mean_func=None, run_every_iteration=None, x=None, y=None, v=None,
    dataset=None, append_data=False, compute_device='cpu', sparse=False,
    training_dask_client=None, acq_func_opt_dask_client=None)

The AutonomousExperimenterFvGP class inherits its functionality from the AutonomousExperimenterGP class; the method go() is therefore equivalent (see above). The additional parameters are:

  • output_number

  • output_dim
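
A minimal instantiation sketch for the multi-output case; the vector-valued instrument convention and all numerical values are assumptions for illustration:

import numpy as np
from gpcam.autonomous_experimenter import AutonomousExperimenterFvGP

# Hypothetical instrument returning two outputs per measured position.
def instrument(data):
    for entry in data:
        x = entry["position"]
        entry["value"] = np.array([np.sin(x[0]), np.cos(x[1])])  # assumed convention
    return data

ae = AutonomousExperimenterFvGP(
    np.array([[0.0, 10.0], [0.0, 10.0]]),  # parameter_bounds (2d input space)
    2, 1,                                  # output_number, output_dim (illustrative)
    instrument,
    np.ones(3),                            # hyperparameters (illustrative)
    np.array([[0.01, 100.0]] * 3),         # hyperparameter_bounds
    init_dataset_size=20)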
