adaptive.Learner1D#
- class adaptive.Learner1D(function: Callable[[Real], Float | np.ndarray], bounds: tuple[Real, Real], loss_per_interval: Callable[[XsTypeN, YsTypeN], Float] | None = None)[source]#
Bases: BaseLearner
Learns and predicts a function ‘f:ℝ → ℝ^N’.
- Parameters:
function (callable) – The function to learn. Must take a single real parameter and return a real number or 1D array.
bounds (pair of reals) – The bounds of the interval on which to learn ‘function’.
loss_per_interval (callable, optional) – A function that returns the loss for a single interval of the domain. If not provided, then a default is used, which uses the scaled distance in the x-y plane as the loss. See the notes for more details.
Notes
- loss_per_interval takes two parameters, xs and ys, and returns a scalar: the loss over the interval.
- xs : tuple of floats
The x-values of the interval. If nth_neighbors is greater than zero it also contains the x-values of the neighbors of the interval, in ascending order; the interval whose loss we want is then the middle interval. If no neighbor is available (at the edges of the domain), None takes the place of the x-value of the neighbor.
- ys : tuple of function values
The output values of the function when evaluated at the xs. Each is either a float or, in the case of vector output, a tuple of floats.
The loss_per_interval function may also have an attribute nth_neighbors that indicates how many of the intervals neighboring the interval are used. If loss_per_interval doesn't have such an attribute, it is assumed that it uses no neighboring intervals. See also the uses_nth_neighbors decorator for more information.
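As a sketch of this interface, here is a hypothetical loss_per_interval (interval_length_loss is an illustrative name, not part of adaptive) that uses no neighboring intervals, so it carries no nth_neighbors attribute. It returns the interval's x-extent, which reproduces uniform sampling for scalar output:

```python
# Hypothetical loss_per_interval for scalar output; it uses no neighboring
# intervals, so no nth_neighbors attribute is needed.
def interval_length_loss(xs, ys):
    """Return the x-extent of the interval as its loss."""
    if any(y is None for y in ys):
        return 0.0  # a point in this interval is still pending
    x_left, x_right = xs
    return x_right - x_left
```

Pass it as `Learner1D(f, bounds, loss_per_interval=interval_length_loss)`.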
- ask(n: int, tell_pending: bool = True) tuple[list[float], list[float]] [source]#
Return ‘n’ points that are expected to maximally reduce the loss.
- load_dataframe(df: DataFrame, with_default_function_args: bool = True, function_prefix: str = 'function.', x_name: str = 'x', y_name: str = 'y') None [source]#
Load data from a pandas.DataFrame. If with_default_function_args is True, then learner.function's default arguments are set (using functools.partial) from the values in the pandas.DataFrame.
- Parameters:
df (pandas.DataFrame) – The data to load.
with_default_function_args (bool, optional) – The with_default_function_args used in to_dataframe(), by default True.
function_prefix (str, optional) – The function_prefix used in to_dataframe(), by default "function.".
x_name (str, optional) – The x_name used in to_dataframe(), by default "x".
y_name (str, optional) – The y_name used in to_dataframe(), by default "y".
- loss(real: bool = True) float [source]#
Return the loss for the current state of the learner.
- Parameters:
real (bool, default: True) – If False, return the “expected” loss, i.e. the loss including the as-yet unevaluated points (possibly by interpolation).
- plot(*, scatter_or_line: str = 'scatter')[source]#
Returns a plot of the evaluated data.
- Parameters:
scatter_or_line (str, default: "scatter") – Plot as a scatter plot (“scatter”) or a line plot (“line”).
- Returns:
plot – Plot of the evaluated data.
- Return type:
holoviews.Overlay
- tell(x: float, y: Union[float, float64, Sequence[Union[float, float64]], ndarray]) None [source]#
Tell the learner about a single value.
- Parameters:
x (A value from the function domain) –
y (A value from the function image) –
- tell_many(xs: collections.abc.Sequence[Union[float, numpy.float64]] | numpy.ndarray, ys: collections.abc.Sequence[Union[float, numpy.float64]] | collections.abc.Sequence[collections.abc.Sequence[Union[float, numpy.float64]]] | collections.abc.Sequence[numpy.ndarray] | numpy.ndarray, *, force: bool = False) None [source]#
Tell the learner about some values.
- Parameters:
xs (Iterable of values from the function domain) –
ys (Iterable of values from the function image) –
- tell_pending(x: float) None [source]#
Tell the learner that ‘x’ has been requested such that it’s not suggested again.
- to_dataframe(with_default_function_args: bool = True, function_prefix: str = 'function.', x_name: str = 'x', y_name: str = 'y') DataFrame [source]#
Return the data as a
pandas.DataFrame
.- Parameters:
with_default_function_args (bool, optional) – Include the
learner.function
’s default arguments as a column, by default Truefunction_prefix (str, optional) – Prefix to the
learner.function
’s default arguments’ names, by default “function.”x_name (str, optional) – Name of the input value, by default “x”
y_name (str, optional) – Name of the output value, by default “y”
- Return type:
pandas.DataFrame
- Raises:
ImportError – If
pandas
is not installed.
Custom loss functions#
- adaptive.learner.learner1D.default_loss(xs: XsType0, ys: YsType0) Float [source]#
Calculate loss on a single interval.
Currently returns the rescaled length of the interval. If one of the y-values is missing, returns 0 (so that intervals with missing data are never touched; this behavior should be improved later).
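Conceptually, this loss is the Euclidean length of the interval's segment after x and y have been rescaled to the current data ranges. A standalone sketch (scaled_distance_loss, x_scale, and y_scale are hypothetical stand-ins for the learner's internal bookkeeping, not part of adaptive's API):

```python
import math

def scaled_distance_loss(xs, ys, x_scale, y_scale):
    """Sketch of the default loss: segment length in the rescaled x-y plane."""
    if any(y is None for y in ys):
        return 0.0  # missing data: never touch this interval
    dx = (xs[1] - xs[0]) / x_scale
    dy = (ys[1] - ys[0]) / y_scale if y_scale else 0.0
    return math.hypot(dx, dy)
```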
- adaptive.learner.learner1D.uniform_loss(xs: XsType0, ys: YsType0) Float [source]#
Loss function that samples the domain uniformly.
Works with Learner1D only.
Examples
>>> def f(x):
...     return x**2
>>>
>>> learner = adaptive.Learner1D(f,
...                              bounds=(-1, 1),
...                              loss_per_interval=uniform_loss)
- adaptive.learner.learner1D.uses_nth_neighbors(n: int)[source]#
Decorator to specify how many neighboring intervals the loss function uses.
Wraps loss functions to indicate that they expect intervals together with n nearest neighbors. The loss function will then receive the data of the N nearest neighbors (nth_neighbors) along with the data of the interval itself in a dict. The Learner1D will also make sure that the loss is updated whenever one of the nth_neighbors changes.
Examples
The next function is a part of the curvature_loss_function function.
>>> @uses_nth_neighbors(1)
... def triangle_loss(xs, ys):
...     xs = [x for x in xs if x is not None]
...     ys = [y for y in ys if y is not None]
...
...     if len(xs) == 2:  # we do not have enough points for a triangle
...         return xs[1] - xs[0]
...
...     N = len(xs) - 2  # number of constructed triangles
...     if isinstance(ys[0], Iterable):
...         pts = [(x, *y) for x, y in zip(xs, ys)]
...         vol = simplex_volume_in_embedding
...     else:
...         pts = [(x, y) for x, y in zip(xs, ys)]
...         vol = volume
...     return sum(vol(pts[i:i + 3]) for i in range(N)) / N
Or you may define a loss that favours the (local) minima of a function, assuming that you know your function will have a single float as output.
>>> @uses_nth_neighbors(1)
... def local_minima_resolving_loss(xs, ys):
...     dx = xs[2] - xs[1]  # the width of the interval of interest
...
...     if not ((ys[0] is not None and ys[0] > ys[1])
...             or (ys[3] is not None and ys[3] > ys[2])):
...         return dx * 100
...
...     return dx
- adaptive.learner.learner1D.curvature_loss_function(area_factor: Real = 1, euclid_factor: Real = 0.02, horizontal_factor: Real = 0.02) Callable[[XsType1, YsType1], Float] [source]#
- adaptive.learner.learner1D.abs_min_log_loss(xs: XsType0, ys: YsType0) Float [source]#
Calculate loss of a single interval that prioritizes the absolute minimum.
- adaptive.learner.learner1D.resolution_loss_function(min_length: Real = 0, max_length: Real = 1) Callable[[XsType0, YsType0], Float] [source]#
Loss function that is similar to the default_loss function, but you can set the maximum and minimum size of an interval.
Works with Learner1D only.
The arguments min_length and max_length should be between 0 and 1 because the total size is normalized to 1.
- Returns:
loss_function
- Return type:
callable
Examples
>>> def f(x):
...     return x**2
>>>
>>> loss = resolution_loss_function(min_length=0.01, max_length=1)
>>> learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss)