πŸ§ͺ Implemented Algorithms

The core concept in adaptive is the learner. A learner samples a function at the most interesting locations within its parameter space, so that each new evaluation yields as much information about the function as possible. As the function is evaluated at more points, the learner improves its estimate of which locations are best to sample next.

The definition of the β€œbest locations” depends on your application domain. While adaptive provides sensible default choices, the adaptive sampling process can be fully customized.
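For example, Learner1D takes a loss_per_interval function that assigns a score to each interval between sampled points; the interval with the highest loss is subdivided next. Here is a minimal sketch with a made-up custom loss (the loss function below is purely illustrative and not part of adaptive itself):

import numpy as np

import adaptive

def width_times_jump_loss(xs, ys):
    # Hypothetical custom loss: score an interval by its width, boosted when
    # the function value jumps a lot across it, so steep regions get refined.
    dx = xs[1] - xs[0]
    dy = abs(ys[1] - ys[0])
    return dx * (1 + dy)

learner = adaptive.Learner1D(
    np.sin, bounds=(0, 2 * np.pi), loss_per_interval=width_times_jump_loss
)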

The following learners are implemented:

  • Learner1D, for 1D functions f: ℝ β†’ ℝ^N,

  • Learner2D, for 2D functions f: ℝ^2 β†’ ℝ^N,

  • LearnerND, for ND functions f: ℝ^N β†’ ℝ^M,

  • AverageLearner, for random variables where you want to average the result over many evaluations,

  • AverageLearner1D, for stochastic 1D functions where you want to estimate the mean value of the function at each point,

  • IntegratorLearner, for when you want to integrate a 1D function f: ℝ β†’ ℝ.

Meta-learners (to be used with other learners):

  • BalancingLearner, for when you want to run several learners at once, selecting the β€œbest” one each time you get more points,

  • DataSaver, for when your function doesn’t just return a scalar or a vector.
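As a rough sketch of how some of these learners are constructed (the test functions and tolerances below are made up for illustration; see each learner's documentation for the full set of parameters):

import random

import adaptive

def peak(x):
    # an arbitrary 1D test function with a narrow peak at x = 0
    return x + 0.01**2 / (0.01**2 + x**2)

def noisy(seed):
    # AverageLearner passes an integer seed to the function it averages
    random.seed(seed)
    return random.gauss(0, 1)

learner_1d = adaptive.Learner1D(peak, bounds=(-1, 1))
learner_2d = adaptive.Learner2D(lambda xy: xy[0] * xy[1], bounds=[(-1, 1), (-1, 1)])
averager = adaptive.AverageLearner(noisy, rtol=0.01)
integrator = adaptive.IntegratorLearner(peak, bounds=(-1, 1), tol=1e-8)

# A meta-learner that divides new points among the learners it balances;
# the learners it wraps should be of the same type.
balancer = adaptive.BalancingLearner(
    [adaptive.Learner1D(peak, bounds=(-1, 1)), adaptive.Learner1D(peak, bounds=(1, 3))]
)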

In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for concurrent.futures, mpi4py, loky, ipyparallel, and distributed.
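For example, in a Jupyter notebook a minimal sketch of running a learner in parallel might look like this (the loss goal is an arbitrary choice for illustration; by default the Runner spreads the evaluations over the cores of your machine, and an explicit executor can be passed via the executor argument):

import adaptive

adaptive.notebook_extension()

def peak(x):
    # an arbitrary test function with a narrow peak at x = 0
    return x + 0.01**2 / (0.01**2 + x**2)

learner = adaptive.Learner1D(peak, bounds=(-1, 1))

# Sample in parallel until the estimated loss drops below 0.01.
runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
runner.live_info()  # display progress in the notebook

Outside a notebook, adaptive.BlockingRunner can be used instead, so the call blocks until the goal is reached.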

πŸ’‘ Examples

Here are some examples of how adaptive sampling compares to homogeneous sampling. Click on the Play button or move the sliders.

import itertools

import holoviews as hv
import numpy as np

import adaptive
from adaptive.learner.learner1D import default_loss, uniform_loss

adaptive.notebook_extension()  # load adaptive's Jupyter/holoviews integration
hv.output(holomap="scrubber")  # render HoloMaps with a scrubber (Play) widget
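Continuing with the imports above, a sketch of this kind of comparison (the peak function, the point budget, and the plotting choices are illustrative, not the exact code behind the interactive figures):

def f(x, offset=0.123):
    # a smooth function with a narrow peak near x = offset
    a = 0.01
    return x + a**2 / (a**2 + (x - offset) ** 2)

def run_learner(loss_per_interval, npoints=100):
    learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval)
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= npoints)
    return learner

# Same number of points, different sampling strategies: default_loss
# concentrates points near the peak, uniform_loss spreads them evenly.
adaptive_learner = run_learner(default_loss)
homogeneous_learner = run_learner(uniform_loss)

adaptive_learner.plot().relabel("adaptive") + homogeneous_learner.plot().relabel("homogeneous")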