adaptive.BalancingLearner#

class adaptive.BalancingLearner(*args, **kwargs)[source]#

Bases: adaptive.learner.base_learner.BaseLearner

Choose the optimal points from a set of learners.

Parameters
  • learners (sequence of BaseLearners) – The learners from which to choose. These must all have the same type.

  • cdims (sequence of dicts, or (keys, iterable of values), optional) –

    Constant dimensions; the parameters that label the learners. Used in plot. Example inputs that all give identical results:

    • sequence of dicts:

      >>> cdims = [{'A': True, 'B': 0},
      ...          {'A': True, 'B': 1},
      ...          {'A': False, 'B': 0},
      ...          {'A': False, 'B': 1}]
      
    • tuple with (keys, iterable of values):

      >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
      >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
      ...                       (False, 0), (False, 1)])
      

learners#

The sequence of BaseLearners.

Type

list

function#

A function that calls the functions of the underlying learners. Its signature is function(learner_index, point).

Type

callable

strategy#

The points that the BalancingLearner chooses can be based on: the best ‘loss_improvements’, the smallest total ‘loss’ of the child learners, the number of points per learner (‘npoints’), or cycling through the learners one by one (‘cycle’). The strategy can be changed dynamically while the simulation is running by setting the learner.strategy attribute.

Type

‘loss_improvements’ (default), ‘loss’, ‘npoints’, or ‘cycle’.

Notes

This learner compares the loss calculated from the “child” learners. This requires that the ‘loss’ from different learners can be meaningfully compared. For the moment we enforce this restriction by requiring that all learners are the same type, but (depending on the internals of the learner) the losses may not be comparable even between learners of the same type. In that case the BalancingLearner will behave in an undefined way; change the strategy in that case.
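
Example

A minimal construction sketch that matches the parameters above; the child function, bounds, and cdims values are illustrative only:

>>> import adaptive
>>> def f(x):
...     return x**2
>>> learners = [adaptive.Learner1D(f, bounds=(-1, 1)) for _ in range(4)]
>>> cdims = [{'A': True, 'B': 0}, {'A': True, 'B': 1},
...          {'A': False, 'B': 0}, {'A': False, 'B': 1}]
>>> learner = adaptive.BalancingLearner(learners, cdims=cdims)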

ask(n: int, tell_pending: bool = True) → tuple[list[tuple[numbers.Integral, typing.Any]], list[float]][source]#

Choose points for the learners.
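
Example

A small sketch of the return format, assuming learner is a BalancingLearner wrapping several child learners:

>>> points, loss_improvements = learner.ask(3)
>>> # every requested point is a (learner_index, point) tuple
>>> learner_index, point = points[0]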

property data: dict[tuple[int, typing.Any], typing.Any]#

classmethod from_product(f, learner_type: adaptive.learner.base_learner.BaseLearner, learner_kwargs: dict[str, typing.Any], combos: dict[str, typing.Sequence[typing.Any]]) → adaptive.learner.balancing_learner.BalancingLearner[source]#

Create a BalancingLearner with learners of all combinations of named variables’ values. The cdims will be set correctly, so calling learner.plot will be a holoviews.core.HoloMap with the correct labels.

Parameters
  • f (callable) – Function to learn; it must take the arguments provided in combos.

  • learner_type (BaseLearner) – The learner that should wrap the function. For example Learner1D.

  • learner_kwargs (dict) – Keyword arguments for the learner_type. For example dict(bounds=[0, 1]).

  • combos (dict (mapping individual fn arguments -> sequence of values)) – A learner will be instantiated for every combination of the argument values.

Returns

learner – A BalancingLearner with child learners for all combinations in combos.

Return type

BalancingLearner

Example

>>> import numpy as np
>>> import scipy.special
>>> from adaptive import BalancingLearner, Learner1D
>>>
>>> def f(x, n, alpha, beta):
...     return scipy.special.eval_jacobi(n, alpha, beta, x)
>>> combos = {
...     'n': [1, 2, 4, 8, 16],
...     'alpha': np.linspace(0, 2, 3),
...     'beta': np.linspace(0, 1, 5),
... }
>>> learner = BalancingLearner.from_product(
...     f, Learner1D, dict(bounds=(0, 1)), combos)

Notes

The order of the child learners inside learner.learners is the same as adaptive.utils.named_product(**combos).

load(fname: Callable[[BaseLearner], str] | Sequence[str], compress: bool = True) → None[source]#

Load the data of the child learners from pickle files in a directory.

Parameters
  • fname (callable or sequence of strings) – Given a learner, returns the filename from which to load its data. Alternatively, a list (or other iterable) of filenames.

  • compress (bool, default True) – If the data was saved with compression, it must be loaded with compression as well.

Example

See the example in the BalancingLearner.save doc-string.

load_dataframe(df: pandas.core.frame.DataFrame, index_name: str = 'learner_index', **kwargs)[source]#

Load the data from a pandas.DataFrame into the child learners.

Parameters
  • df (pandas.DataFrame) – DataFrame with the data to load.

  • index_name (str, optional) – The index_name used in to_dataframe, by default “learner_index”.

  • **kwargs (dict) – Keyword arguments passed to each child_learner.load_dataframe(**kwargs).
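
Example

A hedged round-trip sketch; it assumes pandas is installed and that learner already contains data:

>>> df = learner.to_dataframe()
>>> new_learner = learner.new()  # same parameters, but without data
>>> new_learner.load_dataframe(df)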

loss(real: bool = True) → float[source]#

Return the loss for the current state of the learner.

Parameters

real (bool, default: True) – If False, return the “expected” loss, i.e. the loss including the as-yet unevaluated points (possibly by interpolation).
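
Example

A minimal sketch, assuming learner already has evaluated and pending points:

>>> learner.loss()            # loss based on the evaluated points only
>>> learner.loss(real=False)  # "expected" loss, including pending points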

new() → adaptive.learner.balancing_learner.BalancingLearner[source]#

Create a new BalancingLearner with the same parameters.

property npoints: int#
property nsamples#
property pending_points: set[tuple[int, typing.Any]]#
plot(cdims: CDIMS_TYPE = None, plotter: Callable[[BaseLearner], Any] | None = None, dynamic: bool = True)[source]#

Returns a DynamicMap with sliders.

Parameters
  • cdims (sequence of dicts, or (keys, iterable of values), optional) –

    Constant dimensions; the parameters that label the learners. Example inputs that all give identical results:

    • sequence of dicts:

      >>> cdims = [{'A': True, 'B': 0},
      ...          {'A': True, 'B': 1},
      ...          {'A': False, 'B': 0},
      ...          {'A': False, 'B': 1}]
      
    • tuple with (keys, iterable of values):

      >>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
      >>> cdims = (['A', 'B'], [(True, 0), (True, 1),
      ...                       (False, 0), (False, 1)])
      

  • plotter (callable, optional) – A function that takes the learner as an argument and returns a holoviews object. By default learner.plot() will be called.

  • dynamic (bool, default True) – Return a holoviews.core.DynamicMap if True, else a holoviews.core.HoloMap. The DynamicMap is rendered as the sliders change and therefore cannot be exported to HTML; the HoloMap does not have this problem.

Returns

dm – A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with sliders that are defined by cdims.

Return type

holoviews.core.DynamicMap (default) or holoviews.core.HoloMap
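
Example

A sketch of exporting a static plot; it assumes holoviews is installed and reuses the illustrative cdims from the class parameters above:

>>> hmap = learner.plot(cdims=(['A', 'B'],
...                            [(True, 0), (True, 1), (False, 0), (False, 1)]),
...                     dynamic=False)  # a HoloMap, which can be exported to HTML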

remove_unfinished() → None[source]#

Remove uncomputed data from the learners.

save(fname: Callable[[BaseLearner], str] | Sequence[str], compress: bool = True) → None[source]#

Save the data of the child learners into pickle files in a directory.

Parameters
  • fname (callable or sequence of strings) – Given a learner, returns the filename into which to save its data. Alternatively, a list (or other iterable) of filenames.

  • compress (bool, default True) – Compress the data upon saving using gzip. When saving using compression, one must load it with compression too.

Example

>>> import functools
>>> import adaptive
>>> from adaptive import BalancingLearner, Learner1D
>>>
>>> def combo_fname(learner):
...     val = learner.function.keywords  # because functools.partial
...     fname = '__'.join([f'{k}_{v}.pickle' for k, v in val.items()])
...     return 'data_folder/' + fname
>>>
>>> def f(x, a, b): return a * x**2 + b
>>>
>>> learners = [Learner1D(functools.partial(f, **combo), (-1, 1))
...             for combo in adaptive.utils.named_product(a=[1, 2], b=[1])]
>>>
>>> learner = BalancingLearner(learners)
>>> # Run the learner
>>> runner = adaptive.Runner(learner)
>>> # Then save
>>> learner.save(combo_fname)  # use 'load' in the same way
property strategy: Literal['loss_improvements', 'loss', 'npoints', 'cycle']#

Can be either ‘loss_improvements’ (default), ‘loss’, ‘npoints’, or ‘cycle’. The points that the BalancingLearner chooses can be based on: the best ‘loss_improvements’, the smallest total ‘loss’ of the child learners, the number of points per learner (‘npoints’), or going through all learners one by one (‘cycle’). The strategy can be changed dynamically while the simulation is running by setting the learner.strategy attribute.
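
Example

A minimal sketch of switching the strategy while a runner is active; the runner setup mirrors the save example above and is otherwise illustrative:

>>> learner.strategy = 'npoints'  # keep the number of points per child learner equal
>>> runner = adaptive.Runner(learner)
>>> learner.strategy = 'loss_improvements'  # applies to subsequent ask calls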

tell(x: tuple[numbers.Integral, typing.Any], y: Any) → None[source]#

Tell the learner about a single value.

Parameters
  • x (A value from the function domain) –

  • y (A value from the function image) –
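
Example

For a BalancingLearner, x is a (learner_index, point) tuple; a minimal sketch with illustrative values:

>>> x = (0, 0.5)           # the point 0.5, belonging to child learner 0
>>> learner.tell(x, 0.25)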

tell_pending(x: tuple[numbers.Integral, typing.Any]) → None[source]#

Tell the learner that ‘x’ has been requested such that it’s not suggested again.

to_dataframe(index_name: str = 'learner_index', **kwargs)[source]#

Return the data as a concatenated pandas.DataFrame from child learners.

Parameters
  • index_name (str, optional) – The name of the index column indicating the learner index, by default “learner_index”.

  • **kwargs (dict) – Keyword arguments passed to each child_learner.to_dataframe(**kwargs).

Return type

pandas.DataFrame

Raises

ImportError – If pandas is not installed.
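
Example

A hedged sketch; it assumes pandas is installed and that the child learners already hold data:

>>> df = learner.to_dataframe()                      # default index column: 'learner_index'
>>> df = learner.to_dataframe(index_name='learner')  # or pick another column name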