Changes by Version

7.0.0 --> 7.2.2

Minor changes to asynchronous training in the autonomous experimenter and gp_optimizer classes. Have a look at the docs for details.
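
For orientation, a minimal sketch of the asynchronous training workflow. The constructor arguments and the update/stop method names below are assumptions for illustration only; consult the docs for the authoritative signatures.

```python
import numpy as np
from gpcam.gp_optimizer import GPOptimizer

# Illustrative setup; the constructor arguments are assumptions.
gp = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
gp.tell(np.random.rand(20, 2), np.random.rand(20))
gp.init_gp(np.ones(3))

# Start asynchronous training (note the rename from async_train_gp()).
opt_obj = gp.train_gp_async(np.array([[0.001, 100.0]] * 3))

# ...do other work, then pull in the latest hyperparameters and stop
# the training; both method names here are assumptions, check the docs.
gp.update_hyperparameters(opt_obj)
gp.stop_async_train(opt_obj)
```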

6.0.5 --> 7.0.0

In version 7, the autonomous loop is included in the API, so it no longer has to be implemented by the user via the gp_optimizer class.

New:

Changes to gp_optimizer (and therefore fvgp_optimizer):

  • reminder: async_train_gp() --> train_gp_async()

  • added parameter optimization_x0 to ask(); it gives the option to provide starting positions for the local and hgdl optimization (see the sketch after this list)
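
A minimal sketch of the new parameter; the surrounding setup and the expected shape of the starting-position array are assumptions.

```python
import numpy as np
from gpcam.gp_optimizer import GPOptimizer

gp = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
gp.tell(np.random.rand(20, 2), np.random.rand(20))
gp.init_gp(np.ones(3))

# Starting positions for the local/hgdl acquisition-function optimization
# (shape assumed: number of starts x input-space dimension).
res = gp.ask(optimization_x0=np.array([[0.2, 0.3], [0.7, 0.9]]))
```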

Additional Notes:

The AutonomousExperimenter accepts an optional lambda function that decides which method ask() should use to suggest new data.
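
A sketch of what such a function could look like; the keyword name acq_func_opt_setting and the argument handed to the lambda (the iteration count) are assumptions, so check the docs.

```python
import numpy as np
from gpcam.autonomous_experimenter import AutonomousExperimenterGP

def instrument(data):
    # Toy instrument: measure the sin of the norm of each suggested position.
    for entry in data:
        entry["value"] = np.sin(np.linalg.norm(entry["position"]))
    return data

ae = AutonomousExperimenterGP(np.array([[0.0, 10.0], [0.0, 10.0]]),
                              np.ones(3), np.array([[0.01, 100.0]] * 3),
                              instrument_func=instrument, init_dataset_size=20)
ae.train()

# Alternate between global and local acquisition-function optimization;
# the keyword name and the lambda's argument are assumptions here.
ae.go(N=50, acq_func_opt_setting=lambda n: "global" if n % 2 == 0 else "local")
```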

ask() has to receive the cost function explicitly for it to be used.
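
A sketch of handing a cost function to ask(); the keyword name cost_function and the cost function's signature are assumptions for illustration.

```python
import numpy as np
from gpcam.gp_optimizer import GPOptimizer

gp = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
gp.tell(np.random.rand(20, 2), np.random.rand(20))
gp.init_gp(np.ones(3))

def cost(origin, x, cost_func_parameters=None):
    # Toy movement cost: distance from the current position to each candidate.
    return np.linalg.norm(x - origin, axis=1)

# Without this keyword the cost function is ignored (name assumed).
res = gp.ask(cost_function=cost)
```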

tell() no longer appends data; it only accepts full datasets.
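
In practice that means re-supplying the complete dataset on every call; a minimal sketch, with the same illustrative setup as above.

```python
import numpy as np
from gpcam.gp_optimizer import GPOptimizer

gp = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))

x_all = np.random.rand(30, 2)   # the full input set, not just the new points
y_all = np.random.rand(30)      # the full set of measured values
gp.tell(x_all, y_all)           # replaces, rather than appends to, the data
```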

Communicating an old dataset can now be done via numpy arrays or by providing the data structure from a previous run. In the latter case, only one filename has to be supplied; the hyperparameters are defined in the data.
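
A sketch of both options for the AutonomousExperimenter; the keyword names x, y, and dataset, as well as the file name, are assumptions, so check the docs for the exact signature.

```python
import numpy as np
from gpcam.autonomous_experimenter import AutonomousExperimenterGP

def instrument(data):
    for entry in data:
        entry["value"] = np.sin(np.linalg.norm(entry["position"]))
    return data

bounds = np.array([[0.0, 10.0], [0.0, 10.0]])
hps, hps_bounds = np.ones(3), np.array([[0.01, 100.0]] * 3)

# Option 1: supply an old dataset as numpy arrays (keyword names assumed).
old_x, old_y = np.random.rand(25, 2) * 10.0, np.random.rand(25)
ae = AutonomousExperimenterGP(bounds, hps, hps_bounds,
                              instrument_func=instrument, x=old_x, y=old_y)

# Option 2: point to the data structure saved by a previous run; the
# hyperparameters are then read from the file (keyword name assumed).
ae = AutonomousExperimenterGP(bounds, hps, hps_bounds,
                              instrument_func=instrument,
                              dataset="previous_run.npy")
```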

6.0.4 --> 6.0.5

async_train_gp() --> train_gp_async()