adaptive.Learner1D

class adaptive.Learner1D(*args, **kwargs)
Bases: adaptive.learner.base_learner.BaseLearner

Learns and predicts a function 'f:ℝ → ℝ^N'.
Parameters

- function (callable) – The function to learn. Must take a single real parameter and return a real number or 1D array.
- bounds (pair of reals) – The bounds of the interval on which to learn 'function'.
- loss_per_interval (callable, optional) – A function that returns the loss for a single interval of the domain. If not provided, a default is used that takes the scaled distance in the x-y plane as the loss. See the notes for more details.
Notes

loss_per_interval takes two parameters, xs and ys, and returns a scalar: the loss over the interval.

- xs : tuple of floats
  The x-values of the interval. If nth_neighbors is greater than zero, it also contains the x-values of the neighbors of the interval, in ascending order; the interval whose loss is computed is then the middle interval. If no neighbor is available (at the edges of the domain), None takes the place of the missing x-value.
- ys : tuple of function values
  The output values of the function when evaluated at the xs. Each is either a float or a tuple of floats in the case of vector output.

The loss_per_interval function may also have an attribute nth_neighbors that indicates how many neighboring intervals of the interval are used. If loss_per_interval doesn't have such an attribute, it is assumed to use no neighboring intervals. Also see the uses_nth_neighbors decorator for more information.
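Under these conventions, a custom loss_per_interval can be sketched like this (both function names are hypothetical illustrations, not part of adaptive):

```python
import numpy as np

def width_loss(xs, ys):
    """Hypothetical loss: the plain width of the interval.

    Without an ``nth_neighbors`` attribute, ``xs`` and ``ys`` are simply
    the two endpoints of one interval.  Using only the width makes the
    learner sample the domain uniformly.
    """
    return xs[1] - xs[0]

def variation_loss(xs, ys):
    """Hypothetical loss: interval width plus the largest change in y.

    Works for both scalar and vector output, since ``np.asarray``
    accepts a float or a tuple of floats.
    """
    dx = xs[1] - xs[0]
    dy = np.max(np.abs(np.asarray(ys[1]) - np.asarray(ys[0])))
    return dx + dy
```

Either function could be passed as loss_per_interval; neither declares nth_neighbors, so the learner treats them as using no neighboring intervals.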
ask(n: int, tell_pending: bool = True) → Tuple[List[float], List[float]]
Return 'n' points that are expected to maximally reduce the loss.
loss(real: bool = True) → float
Return the loss for the current state of the learner.

Parameters

- real (bool, default: True) – If False, return the "expected" loss, i.e. the loss including the as-yet unevaluated points (possibly by interpolation).
property npoints
Number of evaluated points.
plot(*, scatter_or_line: str = 'scatter')
Returns a plot of the evaluated data.

Parameters

- scatter_or_line (str, default: "scatter") – Plot as a scatter plot ("scatter") or a line plot ("line").

Returns

plot – Plot of the evaluated data.

Return type

holoviews.Overlay
tell(x: float, y: Union[float, numpy.float64, Sequence[Union[float, numpy.float64]], numpy.ndarray]) → None
Tell the learner about a single value.

Parameters

- x – A value from the function domain.
- y – A value from the function image.
tell_many(xs: Sequence[Union[float, numpy.float64]], ys: Union[Sequence[Union[float, numpy.float64]], Sequence[Sequence[Union[float, numpy.float64]]], Sequence[numpy.ndarray]], *, force: bool = False) → None
Tell the learner about some values.

Parameters

- xs – Iterable of values from the function domain.
- ys – Iterable of values from the function image.
tell_pending(x: float) → None
Tell the learner that 'x' has been requested, so that it is not suggested again.
to_numpy()
Data as a NumPy array of size (npoints, 2) if learner.function returns a scalar, and (npoints, 1 + vdim) if learner.function returns a vector of length vdim.
property vdim
Length of the output of learner.function. If the output is unsized (when it is a scalar), then vdim = 1. As long as no data is known, vdim = 1.
Custom loss functions
adaptive.learner.learner1D.default_loss(xs: Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], ys: Union[Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], Tuple[numpy.ndarray, numpy.ndarray]]) → Union[float, numpy.float64]
Calculate the loss on a single interval.

Currently returns the rescaled length of the interval. If one of the y-values is missing, returns 0 (so that intervals with missing data are never chosen). This behavior should be improved later.
adaptive.learner.learner1D.uniform_loss(xs: Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], ys: Union[Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], Tuple[numpy.ndarray, numpy.ndarray]]) → Union[float, numpy.float64]
Loss function that samples the domain uniformly.

Works with Learner1D only.

Examples

>>> def f(x):
...     return x**2
>>>
>>> learner = adaptive.Learner1D(f,
...                              bounds=(-1, 1),
...                              loss_per_interval=uniform_loss)
adaptive.learner.learner1D.uses_nth_neighbors(n: int)
Decorator to specify how many neighboring intervals the loss function uses.

Wraps loss functions to indicate that they expect intervals together with the n nearest neighbors. The loss function will then receive the data of the n nearest neighbors (nth_neighbors) along with the data of the interval itself in a dict. The Learner1D will also make sure that the loss is updated whenever one of the nth_neighbors changes.

Examples

The following function is part of the curvature_loss_function:

>>> @uses_nth_neighbors(1)
... def triangle_loss(xs, ys):
...     xs = [x for x in xs if x is not None]
...     ys = [y for y in ys if y is not None]
...
...     if len(xs) == 2:  # we do not have enough points for a triangle
...         return xs[1] - xs[0]
...
...     N = len(xs) - 2  # number of constructed triangles
...     if isinstance(ys[0], Iterable):
...         pts = [(x, *y) for x, y in zip(xs, ys)]
...         vol = simplex_volume_in_embedding
...     else:
...         pts = [(x, y) for x, y in zip(xs, ys)]
...         vol = volume
...     return sum(vol(pts[i:i+3]) for i in range(N)) / N
Or you may define a loss that favors the (local) minima of a function, assuming that you know your function will have a single float as output.

>>> @uses_nth_neighbors(1)
... def local_minima_resolving_loss(xs, ys):
...     dx = xs[2] - xs[1]  # the width of the interval of interest
...
...     if not ((ys[0] is not None and ys[0] > ys[1])
...             or (ys[3] is not None and ys[3] > ys[2])):
...         return dx * 100
...
...     return dx
adaptive.learner.learner1D.triangle_loss(xs: Tuple[Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]]], ys: Union[Tuple[Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]]], Tuple[Optional[numpy.ndarray], Optional[numpy.ndarray], Optional[numpy.ndarray], Optional[numpy.ndarray]]]) → Union[float, numpy.float64]
Calculate the loss of an interval as the average area of the triangles formed by the points of the interval and its nearest neighbors (see the triangle_loss example under uses_nth_neighbors above).
adaptive.learner.learner1D.curvature_loss_function(area_factor: Union[float, numpy.float64, int, numpy.int64] = 1, euclid_factor: Union[float, numpy.float64, int, numpy.int64] = 0.02, horizontal_factor: Union[float, numpy.float64, int, numpy.int64] = 0.02) → Callable[[Tuple[Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]]], Union[Tuple[Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]], Optional[Union[float, numpy.float64]]], Tuple[Optional[numpy.ndarray], Optional[numpy.ndarray], Optional[numpy.ndarray], Optional[numpy.ndarray]]]], Union[float, numpy.float64]]
adaptive.learner.learner1D.abs_min_log_loss(xs: Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], ys: Union[Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], Tuple[numpy.ndarray, numpy.ndarray]]) → Union[float, numpy.float64]
Calculate the loss of a single interval in a way that prioritizes the absolute minimum.
adaptive.learner.learner1D.resolution_loss_function(min_length: Union[float, numpy.float64, int, numpy.int64] = 0, max_length: Union[float, numpy.float64, int, numpy.int64] = 1) → Callable[[Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], Union[Tuple[Union[float, numpy.float64], Union[float, numpy.float64]], Tuple[numpy.ndarray, numpy.ndarray]]], Union[float, numpy.float64]]
Loss function factory that is similar to the default_loss function, but lets you set the maximum and minimum size of an interval.

Works with Learner1D only.

The arguments min_length and max_length should be between 0 and 1 because the total size is normalized to 1.

Returns

loss_function

Return type

callable

Examples

>>> def f(x):
...     return x**2
>>>
>>> loss = resolution_loss_function(min_length=0.01, max_length=1)
>>> learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss)