What is Under the Hood
gpCAM is an advanced Gaussian-process tool combined with high-performance mathematical optimization.
Its power comes from the fact that most parts of the Gaussian process and of the active-learning loop can be defined by the user as they become more familiar with the underlying mathematics. One could imagine a car engine: the motor block is the core code of gpCAM, and all other parts that make a well-working car are provided, but they can be exchanged to create the perfect vehicle for a particular purpose. Some of the building blocks that can be defined by the user and imported into gpCAM are:
A data-acquisition function that tells gpCAM where to send measurement requests and how to receive the resulting data.
A kernel function to constrain the set of model functions, for extra flexibility and more accurate uncertainty quantification.
A parametric prior-mean function to encapsulate a physics-based model.
An optimizer that replaces the standard optimizers, for instance, for constrained training.
A flexible block-MCMC sampler for advanced, fully Bayesian training.
An acquisition function to inject patterns and characteristics of interest.
A cost function to ensure that measured points minimize cost while maximizing knowledge gain.
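To make the building blocks above concrete, here is a minimal sketch of two user-definable components: a kernel function and an acquisition function. The function names and the exact call signatures (point sets plus a hyperparameter vector for the kernel) are illustrative assumptions, not gpCAM's verbatim API; consult the documentation of your gpCAM version for the interfaces it expects.

```python
import numpy as np

# Hypothetical user-defined kernel: a squared-exponential (RBF) covariance.
# Assumed shape: two arrays of points (n1, d) and (n2, d) plus a
# hyperparameter vector; returns the (n1, n2) covariance matrix.
def my_kernel(x1, x2, hyperparameters):
    signal_var, length_scale = hyperparameters[0], hyperparameters[1]
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=-1)
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

# Hypothetical acquisition function: upper confidence bound (UCB),
# which trades off the predicted mean against posterior uncertainty.
def my_acquisition(posterior_mean, posterior_std, beta=2.0):
    return posterior_mean + beta * posterior_std
```

Swapping in a different kernel changes the assumptions about the functions the model can represent; swapping in a different acquisition function changes which measurements the active-learning loop considers valuable.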
All this flexibility means that gpCAM naturally supports physics-aware modeling and multi-task learning.
Torch- and DASK-based high-performance computing means that gpCAM can take full advantage of supercomputers.
Read More Here:
A unifying perspective on non-stationary kernels for deeper Gaussian processes
Advanced Stationary and Non-Stationary Kernel Designs for Domain-Aware Gaussian Processes
High-Performance Hybrid-Global-Deflated-Local Optimization with Applications to Active Learning
Gaussian processes for autonomous data acquisition at large-scale synchrotron and neutron facilities
Hybrid genetic deflated Newton method for global optimisation
Advances in Kriging-Based Autonomous X-Ray Scattering Experiments
A Kriging-Based Approach to Autonomous Experimentation with Applications to X-Ray Scattering
Many of the recent features are not yet published. The papers will be linked here as they appear.