adaptive.Learner1D#

class adaptive.Learner1D(function: Callable[[Real], Float | np.ndarray], bounds: tuple[Real, Real], loss_per_interval: Callable[[XsTypeN, YsTypeN], Float] | None = None)[source]#

Bases: BaseLearner

Learns and predicts a function ‘f:ℝ → ℝ^N’.

Parameters:
  • function (callable) – The function to learn. Must take a single real parameter and return a real number or 1D array.

  • bounds (pair of reals) – The bounds of the interval on which to learn ‘function’.

  • loss_per_interval (callable, optional) – A function that returns the loss for a single interval of the domain. If not provided, then a default is used, which uses the scaled distance in the x-y plane as the loss. See the notes for more details.

data#

Sampled points and values.

Type:

dict

pending_points#

Points that still have to be evaluated.

Type:

set

Notes

loss_per_interval takes 2 parameters, xs and ys, and returns a scalar: the loss over the interval.

xs : tuple of floats

The x-values of the interval. If nth_neighbors is greater than zero, it also contains the x-values of the neighbors of the interval, in ascending order; the interval we want to know the loss of is then the middle interval. If no neighbor is available (at the edges of the domain), then None takes the place of the x-value of the neighbor.

ys : tuple of function values

The output values of the function when evaluated at the xs. Each element is a float, or a tuple of floats in the case of vector output.

The loss_per_interval function may also have an attribute nth_neighbors that indicates how many neighboring intervals of the interval are used. If loss_per_interval doesn’t have such an attribute, it is assumed that it uses no neighboring intervals. Also see the uses_nth_neighbors decorator for more information.
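
For illustration, a minimal custom loss that returns the plain interval width; the function name and learner setup are illustrative, not part of the API:

>>> import adaptive
>>>
>>> def interval_width_loss(xs, ys):
...     # Without an nth_neighbors attribute, xs = (x_left, x_right)
...     # and ys holds the function values at those two points.
...     return xs[1] - xs[0]
>>>
>>> learner = adaptive.Learner1D(lambda x: x**2, bounds=(0, 1),
...                              loss_per_interval=interval_width_loss)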

ask(n: int, tell_pending: bool = True) tuple[list[float], list[float]][source]#

Return ‘n’ points that are expected to maximally reduce the loss.
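
A sketch of the manual ask/tell cycle, using a toy function (illustrative):

>>> import adaptive
>>> learner = adaptive.Learner1D(lambda x: x**3, bounds=(-1, 1))
>>> points, loss_improvements = learner.ask(3)  # 3 suggested x-values
>>> for x in points:
...     learner.tell(x, x**3)  # evaluate and report each point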

load_dataframe(df: DataFrame, with_default_function_args: bool = True, function_prefix: str = 'function.', x_name: str = 'x', y_name: str = 'y') None[source]#

Load data from a pandas.DataFrame.

If with_default_function_args is True, then learner.function’s default arguments are set (using functools.partial) from the values in the pandas.DataFrame.

Parameters:
  • df (pandas.DataFrame) – The data to load.

  • with_default_function_args (bool, optional) – The with_default_function_args used in to_dataframe(), by default True

  • function_prefix (str, optional) – The function_prefix used in to_dataframe, by default “function.”

  • x_name (str, optional) – The x_name used in to_dataframe, by default “x”

  • y_name (str, optional) – The y_name used in to_dataframe, by default “y”
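
A round-trip sketch with to_dataframe (assumes pandas is installed and the learner holds data):

>>> df = learner.to_dataframe()   # columns "x" and "y" by default
>>> fresh = learner.new()         # same function and bounds, no data
>>> fresh.load_dataframe(df)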

loss(real: bool = True) float[source]#

Return the loss for the current state of the learner.

Parameters:

real (bool, default: True) – If False, return the “expected” loss, i.e. the loss including the as-yet unevaluated points (possibly by interpolation).
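
For example, on a learner with both evaluated and pending points (illustrative):

>>> learner.loss()            # loss over evaluated points only
>>> learner.loss(real=False)  # also accounts for pending points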

new() Learner1D[source]#

Create a copy of Learner1D without the data.

property npoints: int#

Number of evaluated points.

plot(*, scatter_or_line: str = 'scatter')[source]#

Returns a plot of the evaluated data.

Parameters:

scatter_or_line (str, default: "scatter") – Plot as a scatter plot (“scatter”) or a line plot (“line”).

Returns:

plot – Plot of the evaluated data.

Return type:

holoviews.Overlay
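
For example (requires holoviews; the learner is assumed to hold data):

>>> overlay = learner.plot(scatter_or_line="line")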

remove_unfinished() None[source]#

Remove uncomputed data from the learner.

tell(x: float, y: Union[float, float64, Sequence[Union[float, float64]], ndarray]) None[source]#

Tell the learner about a single value.

Parameters:
  • x (A value from the function domain) –

  • y (A value from the function image) –

tell_many(xs: collections.abc.Sequence[Union[float, numpy.float64]] | numpy.ndarray, ys: collections.abc.Sequence[Union[float, numpy.float64]] | collections.abc.Sequence[collections.abc.Sequence[Union[float, numpy.float64]]] | collections.abc.Sequence[numpy.ndarray] | numpy.ndarray, *, force: bool = False) None[source]#

Tell the learner about some values.

Parameters:
  • xs (Iterable of values from the function domain) –

  • ys (Iterable of values from the function image) –

tell_pending(x: float) None[source]#

Tell the learner that ‘x’ has been requested such that it’s not suggested again.

to_dataframe(with_default_function_args: bool = True, function_prefix: str = 'function.', x_name: str = 'x', y_name: str = 'y') DataFrame[source]#

Return the data as a pandas.DataFrame.

Parameters:
  • with_default_function_args (bool, optional) – Include the learner.function’s default arguments as a column, by default True

  • function_prefix (str, optional) – Prefix to the learner.function’s default arguments’ names, by default “function.”

  • x_name (str, optional) – Name of the input value, by default “x”

  • y_name (str, optional) – Name of the output value, by default “y”

Return type:

pandas.DataFrame

Raises:

ImportError – If pandas is not installed.

to_numpy()[source]#

Data as NumPy array of size (npoints, 2) if learner.function returns a scalar and (npoints, 1+vdim) if learner.function returns a vector of length vdim.
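
For example, with a scalar-valued learner (illustrative):

>>> data = learner.to_numpy()     # shape (npoints, 2)
>>> xs, ys = data[:, 0], data[:, 1]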

property vdim: int#

Length of the output of learner.function. If the output is unsized (when it’s a scalar) then vdim = 1.

As long as no data is known, vdim = 1.
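
For example, with a vector-valued function of length 2 (illustrative):

>>> import adaptive
>>> import numpy as np
>>>
>>> learner = adaptive.Learner1D(lambda x: np.array([x, x**2]), bounds=(0, 1))
>>> learner.tell(0.5, np.array([0.5, 0.25]))
>>> learner.vdim
2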

Custom loss functions#

adaptive.learner.learner1D.default_loss(xs: XsType0, ys: YsType0) Float[source]#

Calculate loss on a single interval.

Currently returns the rescaled length of the interval. If one of the y-values is missing, returns 0 (so that intervals with missing data are never touched; this behavior should be improved later).
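
Conceptually, the loss is the length of the interval in the x-y plane after both axes are rescaled to a comparable size; a simplified sketch, not the exact implementation (x_scale and y_scale stand in for the scales the learner tracks internally):

>>> import math
>>>
>>> def scaled_distance_loss(xs, ys, x_scale=1.0, y_scale=1.0):
...     # Rescale the interval, then take its Euclidean length.
...     dx = (xs[1] - xs[0]) / x_scale
...     dy = (ys[1] - ys[0]) / y_scale
...     return math.hypot(dx, dy)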

adaptive.learner.learner1D.uniform_loss(xs: XsType0, ys: YsType0) Float[source]#

Loss function that samples the domain uniformly.

Works with Learner1D only.

Examples

>>> import adaptive
>>> from adaptive.learner.learner1D import uniform_loss
>>>
>>> def f(x):
...     return x**2
>>>
>>> learner = adaptive.Learner1D(f,
...                              bounds=(-1, 1),
...                              loss_per_interval=uniform_loss)

adaptive.learner.learner1D.uses_nth_neighbors(n: int)[source]#

Decorator to specify how many neighboring intervals the loss function uses.

Wraps loss functions to indicate that they expect intervals together with n nearest neighbors.

The loss function will then receive the data of the N nearest neighbors (nth_neighbors) along with the data of the interval itself in the xs and ys tuples. The Learner1D will also make sure that the loss is updated whenever one of the nth_neighbors changes.

Examples

The next function is part of the curvature_loss_function; the imports below pull its helpers from adaptive’s internals, as found in the adaptive source.

>>> from collections.abc import Iterable
>>> from adaptive.learner.learnerND import volume
>>> from adaptive.learner.triangulation import simplex_volume_in_embedding
>>>
>>> @uses_nth_neighbors(1)
... def triangle_loss(xs, ys):
...     # Drop the None placeholders that appear at the domain edges.
...     xs = [x for x in xs if x is not None]
...     ys = [y for y in ys if y is not None]
...
...     if len(xs) == 2:  # we do not have enough points for a triangle
...         return xs[1] - xs[0]
...
...     N = len(xs) - 2  # number of constructed triangles
...     if isinstance(ys[0], Iterable):  # vector output
...         pts = [(x, *y) for x, y in zip(xs, ys)]
...         vol = simplex_volume_in_embedding
...     else:
...         pts = [(x, y) for x, y in zip(xs, ys)]
...         vol = volume
...     return sum(vol(pts[i:i + 3]) for i in range(N)) / N

Or you may define a loss that favours the (local) minima of a function, assuming that you know your function will have a single float as output.

>>> @uses_nth_neighbors(1)
... def local_minima_resolving_loss(xs, ys):
...     dx = xs[2] - xs[1]  # the width of the interval of interest
...
...     # Amplify the loss when the function decreases into the interval
...     # from the left or increases out of it on the right, i.e. when a
...     # local minimum may be nearby.
...     if ((ys[0] is not None and ys[0] > ys[1])
...             or (ys[3] is not None and ys[3] > ys[2])):
...         return dx * 100
...
...     return dx

adaptive.learner.learner1D.triangle_loss(xs: XsType1, ys: YsType1) Float[source]#

Compute the average area of the triangles formed by an interval and its nearest-neighbor points, as illustrated by the triangle_loss example above.

adaptive.learner.learner1D.curvature_loss_function(area_factor: Real = 1, euclid_factor: Real = 0.02, horizontal_factor: Real = 0.02) Callable[[XsType1, YsType1], Float][source]#
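
Factory that returns a loss combining triangle_loss, the default scaled distance, and the interval width, weighted by the given factors. A typical usage sketch (the function and bounds are illustrative):

>>> import adaptive
>>> from adaptive.learner.learner1D import curvature_loss_function
>>>
>>> loss = curvature_loss_function()
>>> learner = adaptive.Learner1D(lambda x: x**3, bounds=(-1, 1),
...                              loss_per_interval=loss)
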
adaptive.learner.learner1D.abs_min_log_loss(xs: XsType0, ys: YsType0) Float[source]#

Calculate loss of a single interval that prioritizes the absolute minimum.

adaptive.learner.learner1D.resolution_loss_function(min_length: Real = 0, max_length: Real = 1) Callable[[XsType0, YsType0], Float][source]#

Loss function that is similar to the default_loss function, but you can set the maximum and minimum size of an interval.

Works with Learner1D only.

The arguments min_length and max_length should be between 0 and 1 because the total size is normalized to 1.

Returns:

loss_function

Return type:

callable

Examples

>>> import adaptive
>>> from adaptive.learner.learner1D import resolution_loss_function
>>>
>>> def f(x):
...     return x**2
>>>
>>> loss = resolution_loss_function(min_length=0.01, max_length=1)
>>> learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss)