# Implemented algorithms

The core concept in adaptive is that of a learner. A learner samples a function at the best places in its parameter space to get maximum “information” about the function. As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next.

Of course, what qualifies as the “best places” will depend on your application domain! adaptive makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.
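To make the idea concrete, here is a self-contained sketch of loss-driven 1D sampling. This is *not* adaptive's actual implementation — it is a minimal illustration that repeatedly bisects the interval whose graph segment is longest, using segment arc length as a stand-in loss; the function `adaptive_sample` and the test function `f` are hypothetical names chosen for this example.

```python
import math

OFFSET = 0.07357338543088588

def f(x, a=0.01, offset=OFFSET):
    # Sharply peaked test function: a smooth background plus a narrow Lorentzian.
    return x + a**2 / (a**2 + (x - offset)**2)

def adaptive_sample(f, a, b, n):
    """Greedy sketch of loss-based sampling: repeatedly bisect the
    interval whose graph segment is longest (a simple 'loss')."""
    data = {a: f(a), b: f(b)}
    while len(data) < n:
        xs = sorted(data)
        # Loss of each interval = Euclidean length of its graph segment.
        loss, x0, x1 = max(
            (math.hypot(x1 - x0, data[x1] - data[x0]), x0, x1)
            for x0, x1 in zip(xs[:-1], xs[1:])
        )
        xm = 0.5 * (x0 + x1)  # sample the midpoint of the worst interval
        data[xm] = f(xm)
    return data

points = adaptive_sample(f, -1.0, 1.0, 100)
# The samples cluster around the narrow peak instead of spreading uniformly.
near_peak = sum(abs(x - OFFSET) < 0.1 for x in points)
```

With homogeneous sampling only about 10 of the 100 points would land within 0.1 of the peak; the loss-driven loop places far more there, which is the "information" gain described above.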

The following learners are implemented:

- `Learner1D`, for one-dimensional functions
- `Learner2D`, for two-dimensional functions
- `LearnerND`, for N-dimensional functions
- `AverageLearner`, for averaging stochastic functions
- `IntegratorLearner`, for integrating a one-dimensional function

Meta-learners (to be used with other learners):

- `BalancingLearner`, for running several learners at once and choosing the "best" one to sample from
- `DataSaver`, for functions that return more than a scalar

In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for `concurrent.futures`, `ipyparallel`, and `distributed`.
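The pattern these runners automate can be sketched with nothing but the standard library: ask for a batch of points, evaluate them concurrently, and feed the results back. This is *not* adaptive's `Runner` — it is a stdlib-only illustration, and `evaluate_batch` is a hypothetical helper name.

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # Stand-in for an expensive function (hypothetical example).
    return x * x

def evaluate_batch(f, xs, max_workers=4):
    """Evaluate a batch of suggested points concurrently and return
    {x: f(x)}, the shape of data a learner is fed back."""
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return dict(zip(xs, ex.map(f, xs)))

results = evaluate_batch(f, [0.0, 0.5, 1.0, 2.0])
```

A real runner would loop: ask the learner for new points, evaluate them with an executor like this, tell the learner the results, and stop once a goal (e.g. a loss threshold) is reached.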

# Examples

Here are some examples of how adaptive sampling compares with homogeneous sampling. Click the Play button or move the sliders.

```python
import itertools

import adaptive
import holoviews as hv
import numpy as np

adaptive.notebook_extension()
%output holomap='scrubber'
```


## adaptive.Learner1D

```python
%%opts Layout [toolbar=None]
from adaptive.learner.learner1D import uniform_loss, default_loss

def f(x, offset=0.07357338543088588):
    a = 0.01
    return x + a**2 / (a**2 + (x - offset)**2)

def plot_loss_interval(learner):
    # Highlight the interval with the largest loss, i.e. the one
    # that will be subdivided next.
    if learner.npoints >= 2:
        x_0, x_1 = max(learner.losses, key=learner.losses.get)
        y_0, y_1 = learner.data[x_0], learner.data[x_1]
        x, y = [x_0, x_1], [y_0, y_1]
    else:
        x, y = [], []
    return hv.Scatter((x, y)).opts(style=dict(size=6, color='r'))

def plot(learner, npoints):
    adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
    return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1]

def get_hm(loss_per_interval, N=101):
    learner = adaptive.Learner1D(f, bounds=(-1, 1),
                                 loss_per_interval=loss_per_interval)
    plots = {n: plot(learner, n) for n in range(N)}
    return hv.HoloMap(plots, kdims=['npoints'])

(get_hm(uniform_loss).relabel('homogeneous sampling')
 + get_hm(default_loss).relabel('with adaptive'))
```



## adaptive.Learner2D

```python
def ring(xy):
    import numpy as np

    x, y = xy
    a = 0.2
    return x + np.exp(-(x**2 + y**2 - 0.75**2)**2 / a**4)

def plot(learner, npoints):
    adaptive.runner.simple(learner, lambda l: l.npoints == npoints)
    # Build a second learner holding a homogeneous grid with the
    # same number of points, for comparison.
    learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)
    xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5))
    xys = list(itertools.product(xs, ys))
    learner2.tell_many(xys, map(ring, xys))
    return (learner2.plot().relabel('homogeneous grid')
            + learner.plot().relabel('with adaptive'))
```