## 🧪 Implemented Algorithms
The core concept in `adaptive` is the *learner*. A learner samples a function at the most interesting locations within its parameter space, allowing for optimal sampling of the function. As the function is evaluated at more points, the learner improves its understanding of the best locations to sample next. The definition of the "best locations" depends on your application domain. While `adaptive` provides sensible default choices, the adaptive sampling process can be fully customized.
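To make this concrete, here is a minimal sketch of the basic workflow; the peaked test function and the loss goal are illustrative values, not taken from the documentation. You construct a learner for your function and its bounds, then let a runner evaluate the function at the points the learner suggests.

```python
import adaptive

def peak(x, a=0.01):
    # Illustrative test function: smooth background with a narrow peak at x = 0,
    # where adaptive sampling pays off compared to a uniform grid.
    return x + a**2 / (a**2 + x**2)

# The learner decides *where* to sample within the given bounds.
learner = adaptive.Learner1D(peak, bounds=(-1, 1))

# The simple (blocking, single-core) runner keeps asking the learner for new
# points and feeding back the results until the goal is reached.
adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)

print(learner.npoints, "points sampled")
```

Under the hood the runner repeatedly calls the learner's `ask` and `tell` methods; the learners listed below differ mainly in how they choose points and in the loss that drives this choice.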
The following learners are implemented:

- `Learner1D`, for 1D functions `f: ℝ → ℝ^N`,
- `Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N` (see the sketch after this list),
- `LearnerND`, for ND functions `f: ℝ^N → ℝ^M`,
- `AverageLearner`, for random variables where you want to average the result over many evaluations,
- `AverageLearner1D`, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
- `IntegratorLearner`, for when you want to integrate a 1D function `f: ℝ → ℝ`,
- `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points.
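As an illustration of a 2D learner, the sketch below samples a made-up ring-shaped function with `Learner2D`; the function, bounds, and loss goal are chosen only for demonstration.

```python
import numpy as np
import adaptive

def ring(xy):
    # Illustrative 2D function with a sharp feature along a circle of radius 0.5.
    x, y = xy
    return x + np.exp(-((x**2 + y**2 - 0.5**2) ** 2) / 0.01)

# Learner2D takes the function and one (min, max) bound per dimension.
learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])

# Run single-core until the estimated loss is small enough.
adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)

# In a notebook, learner.plot() returns a holoviews object of the sampled data.
plot = learner.plot()
```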
Meta-learners (to be used with other learners):

- `BalancingLearner`, for when you want to run several learners at once, selecting the "best" one each time you get more points,
- `DataSaver`, for when your function doesn't just return a scalar or a vector (both meta-learners appear in the sketch after this list).
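A rough sketch of how these meta-learners wrap regular learners is given below; the peaked test functions, offsets, and goals are invented for illustration.

```python
from functools import partial
from operator import itemgetter

import adaptive

def peak(x, offset=0.0):
    # Illustrative 1D test function with a narrow peak at x = offset.
    return (x - offset) + 0.01**2 / (0.01**2 + (x - offset) ** 2)

# BalancingLearner: one Learner1D per offset; new points go to whichever
# learner currently benefits the most from an extra evaluation.
learners = [
    adaptive.Learner1D(partial(peak, offset=o), bounds=(-1, 1))
    for o in (-0.5, 0.0, 0.5)
]
balancer = adaptive.BalancingLearner(learners)
adaptive.runner.simple(balancer, goal=lambda l: l.loss() < 0.05)

# DataSaver: wrap a learner whose function returns a dict; the scalar picked
# by arg_picker drives the sampling, while the full dict is kept in .extra_data.
def peak_with_metadata(x):
    return {"y": peak(x), "metadata": "anything else you want to keep"}

saver = adaptive.DataSaver(
    adaptive.Learner1D(peak_with_metadata, bounds=(-1, 1)),
    arg_picker=itemgetter("y"),
)
adaptive.runner.simple(saver, goal=lambda l: l.loss() < 0.05)
```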
In addition to the learners, `adaptive` also provides primitives for running the sampling across several cores and even several machines, with built-in support for `concurrent.futures`, `mpi4py`, `loky`, `ipyparallel`, and `distributed`.
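As a sketch of what this looks like in practice (the executor choice, worker count, and goal here are illustrative), a learner can be handed to a runner together with a `concurrent.futures`-compatible executor. For process-based executors the sampled function must be defined at module level so it can be pickled.

```python
from concurrent.futures import ProcessPoolExecutor

import adaptive

def peak(x, a=0.01):
    # Must be importable/picklable for a process-based executor.
    return x + a**2 / (a**2 + x**2)

if __name__ == "__main__":
    learner = adaptive.Learner1D(peak, bounds=(-1, 1))
    # BlockingRunner farms evaluations out to the executor's workers and
    # blocks until the goal is reached (use adaptive.Runner in a notebook).
    adaptive.BlockingRunner(
        learner,
        goal=lambda l: l.loss() < 0.01,
        executor=ProcessPoolExecutor(max_workers=4),
    )
```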
## 💡 Examples
Here are some examples of how Adaptive's sampling compares to homogeneous sampling. Click on the Play button or move the sliders.
```python
import itertools

import holoviews as hv
import numpy as np

import adaptive
from adaptive.learner.learner1D import default_loss, uniform_loss

# Enable adaptive's notebook widgets and render holoviews HoloMaps with a
# scrubber widget instead of individual sliders.
adaptive.notebook_extension()
hv.output(holomap="scrubber")
```
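Building on the imports in the cell above, a comparison like the ones shown here can be set up by running two otherwise identical `Learner1D` instances, one with the default loss and one with `uniform_loss` (which mimics homogeneous sampling). The test function and the point budget below are illustrative.

```python
def peak(x, a=0.01):
    # Illustrative function with a sharp feature near x = 0.
    return x + a**2 / (a**2 + x**2)

learners = {
    "adaptive (default_loss)": adaptive.Learner1D(
        peak, bounds=(-1, 1), loss_per_interval=default_loss
    ),
    "homogeneous (uniform_loss)": adaptive.Learner1D(
        peak, bounds=(-1, 1), loss_per_interval=uniform_loss
    ),
}

# Give both learners the same budget of points and compare the sampled curves.
plots = []
for label, learner in learners.items():
    adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 100)
    plots.append(learner.plot().relabel(label))

plots[0] + plots[1]  # side-by-side holoviews layout in a notebook
```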