# gpOptimizer

The API is new and often updated, so make heavy use of Python's built-in help function. For instance:

```python
from gpcam.gp_optimizer import GPOptimizer

gpo = GPOptimizer(...)
help(gpo.tell)
```

class GPOptimizer(input_space_dimensions, input_space_bounds)

Arguments:

input_space_dimensions (int): the number of dimensions in the input space

input_space_bounds (2d np.array): bounds of the combined index set

Class Methods:

tell(x, y, variances = None)

tell() communicates new data to the GP. Note that the entire dataset has to be communicated; the function will NOT append.

x ... the data points

y ... the measured values at those points

variances ... the measurement variances, an array of the same shape as y
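As a minimal sketch of the dataset shapes tell() expects (the two points, bounds, and variances below are made-up values; the gpcam calls are guarded so the snippet also runs where gpcam is not installed):

```python
import numpy as np

# Made-up dataset in a 2d input space: x is (N, dim), y is (N,),
# and variances has the same shape as y.
x = np.array([[0.1, 0.2],
              [0.4, 0.5]])
y = np.array([1.0, 2.0])
variances = np.full(y.shape, 0.01)

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    # tell() replaces the stored dataset; it does NOT append,
    # so always pass the complete x, y (and optional variances).
    gpo.tell(x, y, variances=variances)
except Exception:
    pass  # gpcam not installed, or its API has drifted
```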

init_gp(init_hyperparameters, compute_device = "cpu", gp_kernel_function = None, gp_mean_function = None, sparse = False)

init_gp() initializes a GP. For this, a guess for the hyperparameters is required; depending on the subsequent training, that guess may or may not be taken into account. This is also the call that gives the user the opportunity to inject physical knowledge into the GP via the kernel and the mean function. The matrix solves will happen on the specified compute device ("cpu"/"gpu").
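A sketch of initialization with a custom kernel. The squared-exponential kernel below and its `(x1, x2, hyperparameters, obj)` signature are illustrative assumptions, not the library's required form; check help(gpo.init_gp) for the signature your gpcam version expects:

```python
import numpy as np

def sq_exp_kernel(x1, x2, hyperparameters, obj):
    # HYPOTHETICAL kernel signature. By our own convention here,
    # hyperparameters[0] is a signal variance and
    # hyperparameters[1] a length scale.
    d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
    return hyperparameters[0] * np.exp(-d**2 / (2.0 * hyperparameters[1] ** 2))

init_hyperparameters = np.array([1.0, 0.5])  # rough guess; training refines it

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    gpo.init_gp(init_hyperparameters,
                compute_device="cpu",
                gp_kernel_function=sq_exp_kernel)  # inject physics here
except Exception:
    pass  # gpcam not installed, or its API has drifted
```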

ask(position = None, n = 1, acquisition_function = "covariance", bounds = None, method = "global", pop_size = 20, max_iter = 20, tol = 10e-6, x0 = None, dask_client = False)

ask() asks for a new optimal point, given (optional) acquisition and cost functions. If an optimization method with a local component is used, x0 serves as the starting point. n specifies the number of returned suggestions, but only if "hgdl" is used as the optimizer. The position parameter tells the cost function where the motion through the parameter space has to start.
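A sketch of requesting a suggestion, using only the calls documented above (made-up data and bounds, guarded so it also runs where gpcam is not installed):

```python
import numpy as np

bounds = np.array([[0.0, 1.0], [0.0, 1.0]])  # one [low, high] row per dimension

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, bounds)
    gpo.tell(np.array([[0.1, 0.2], [0.4, 0.5]]), np.array([1.0, 2.0]))
    gpo.init_gp(np.ones(2))
    # One suggestion from the default acquisition function; n > 1
    # suggestions are only honored when method="hgdl".
    suggestion = gpo.ask(n=1, acquisition_function="covariance",
                         method="global", bounds=bounds)
except Exception:
    suggestion = None  # gpcam not installed, or its API has drifted
```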

train_gp(hyperparameter_bounds, method = "global", pop_size = 20, tolerance = 1e-6, optimization_dict = None, max_iter = 120)

train_gp() trains the initialized GP via a global, local, or user-defined callable optimization algorithm.
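A sketch of synchronous training (the hyperparameter bounds are made-up values; one [low, high] row per hyperparameter the kernel uses):

```python
import numpy as np

hyperparameter_bounds = np.array([[0.001, 100.0],
                                  [0.001, 100.0]])

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    gpo.tell(np.array([[0.1, 0.2], [0.4, 0.5]]), np.array([1.0, 2.0]))
    gpo.init_gp(np.ones(2))
    # Blocks until the optimizer finishes or hits max_iter.
    gpo.train_gp(hyperparameter_bounds, method="global",
                 pop_size=20, tolerance=1e-6, max_iter=120)
except Exception:
    pass  # gpcam not installed, or its API has drifted
```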

train_gp_async(hyperparameter_bounds, max_iter = 10000, dask_client = None)

train_gp_async() trains the initialized GP asynchronously using HGDL.

get_data()

Displays the data in the object.

evaluate_acquisition_function(x, acquisition_function = "covariance", cost_function = None, origin = None)

Evaluates the current acquisition function, taking costs into account.

update_hyperparameters()

If the training is performed asynchronously, this updates the hyperparameters in fvGP.

stop_async_train()

This function stops the async training but leaves the client alive.

kill_async_train()

This function stops the async training and kills the client.
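The asynchronous path sketched end to end, assuming dask's distributed package is available for the client (made-up data and bounds; guarded so the snippet also runs where gpcam or dask is not installed):

```python
import numpy as np

hyperparameter_bounds = np.array([[0.001, 100.0], [0.001, 100.0]])

try:
    from gpcam.gp_optimizer import GPOptimizer
    from distributed import Client

    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    gpo.tell(np.array([[0.1, 0.2], [0.4, 0.5]]), np.array([1.0, 2.0]))
    gpo.init_gp(np.ones(2))

    client = Client()  # local dask client for HGDL
    gpo.train_gp_async(hyperparameter_bounds, max_iter=10000,
                       dask_client=client)
    # ... while measurements keep coming in, pull in the latest result:
    gpo.update_hyperparameters()
    gpo.stop_async_train()   # stop training, keep the client alive
    gpo.kill_async_train()   # or: stop training and shut the client down
except Exception:
    pass  # gpcam/dask not installed, or the API has drifted
```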

update_gp()

Updates the fvGP; this is called automatically when the data is updated via tell().

init_cost(cost_function, cost_function_parameters, cost_update_function = None)

Initializes the costs. Note that the cost function still has to be passed to ask() to activate the costs in the search.

update_cost_function(measurement_costs)

Calls a user-provided callable that updates the cost function.
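A sketch of a cost setup. The `(origin, x, cost_function_parameters)` signature and the parameter dictionary below are illustrative assumptions, not the library's required form; check help(gpo.init_cost) for the exact interface:

```python
import numpy as np

def cost_function(origin, x, cost_function_parameters):
    # HYPOTHETICAL signature. Here cost = travel distance from the
    # current position (origin) to each candidate point, scaled by a
    # user-chosen factor.
    factor = cost_function_parameters["factor"]
    return factor * np.linalg.norm(np.asarray(x) - np.asarray(origin), axis=-1)

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    gpo.init_cost(cost_function, {"factor": 2.0})
    # Costs only take effect once the cost function is passed to ask().
except Exception:
    pass  # gpcam not installed, or its API has drifted
```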

posterior_mean(x)

Computes the posterior mean at a set of points x.

posterior_covariance(x)

Computes the posterior covariance at a set of points x.

shannon_information_gain(x)

Computes the Shannon information gain of a measurement at the points x.

posterior_probability(x, comparison_mean, comparison_cov)

Computes the probability of the posterior at x relative to a normal distribution with the given mean and covariance.

gp_kl_div(x, comparison_mean, comparison_cov)

Computes the KL divergence between the GP posterior at x and a normal distribution with the given mean and covariance.
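A sketch of querying the posterior after data has been supplied (made-up data, bounds, and query points; guarded so the snippet also runs where gpcam is not installed):

```python
import numpy as np

x_pred = np.array([[0.25, 0.25],
                   [0.75, 0.75]])  # query points, shape (N, dim)

try:
    from gpcam.gp_optimizer import GPOptimizer
    gpo = GPOptimizer(2, np.array([[0.0, 1.0], [0.0, 1.0]]))
    gpo.tell(np.array([[0.1, 0.2], [0.4, 0.5]]), np.array([1.0, 2.0]))
    gpo.init_gp(np.ones(2))
    mean = gpo.posterior_mean(x_pred)        # prediction at x_pred
    cov = gpo.posterior_covariance(x_pred)   # uncertainty at x_pred
except Exception:
    pass  # gpcam not installed, or its API has drifted
```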

### Have suggestions for the API or found a bug?

Write a brief description of what you saw or what you'd like to see.