fvGPOptimizer for multi-tasking

This example shows how to do multi-task learning with the fvGPOptimizer. 

Make sure to install the right gpCAM version for this example: pip install gpcam==7.4.10


The most important thing to remember is that a multi-task GP is just a single-task GP over a transformed input space. This only works well when flexible non-stationary kernels are used. Let's look at an example of this transformation. If my input domain is two-dimensional and I have three tasks (i.e., three scalar outputs), the resulting new input domain is 3-dimensional.


Input points 

1,2

3,2

becomes new inputs 

1,2,0

1,2,1

1,2,2

3,2,0

3,2,1

3,2,2

so the Cartesian product of the two sets. Here we assumed that the tasks are indexed by (0,1,2). This is a really important distinction because the underlying GP is not even aware that there are several outputs; it only sees a single output over this new space. That makes calling the posterior a little more intricate: in the example above, the posterior mean and variance have to be evaluated on points in 3d, not 2d. This is why the (multi-task) fvGPOptimizer needs a user-defined acquisition function to work. An example is below.
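To make the transformation concrete, here is a minimal sketch in plain NumPy (this is an illustration of the Cartesian-product construction, not part of the gpCAM API) that builds the 3-d input set above from the 2-d input points and the task indices:

```python
import numpy as np

# Original 2-d input points
x = np.array([[1., 2.],
              [3., 2.]])

# Task indices; here we assume the three tasks are indexed 0, 1, 2
tasks = np.array([0., 1., 2.])

# Cartesian product: every input point is paired with every task index,
# giving the 3-d input set the single-task GP actually operates on
new_x = np.vstack([np.append(point, t) for point in x for t in tasks])

print(new_x)
# [[1. 2. 0.]
#  [1. 2. 1.]
#  [1. 2. 2.]
#  [3. 2. 0.]
#  [3. 2. 1.]
#  [3. 2. 2.]]
```

Any point you pass to the posterior, and therefore any point your acquisition function evaluates, has to live in this 3-d space, with the last coordinate selecting the task.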