adaptive.BalancingLearner#
- class adaptive.BalancingLearner(*args, **kwargs)[source]#
Bases:
adaptive.learner.base_learner.BaseLearner
Choose the optimal points from a set of learners.
- Parameters
learners (sequence of BaseLearner s) – The learners from which to choose. These must all have the same type.
cdims (sequence of dicts, or (keys, iterable of values), optional) – Constant dimensions; the parameters that label the learners. Used in plot. Example inputs that all give identical results:
sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
...          {'A': True, 'B': 1},
...          {'A': False, 'B': 0},
...          {'A': False, 'B': 1}]
tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
...                       (False, 0), (False, 1)])
- function#
A function that calls the functions of the underlying learners. Its signature is function(learner_index, point).
- Type
callable
- strategy#
The points that the BalancingLearner chooses can be based on: the best ‘loss_improvements’; the smallest total ‘loss’ of the child learners; the number of points per learner, using ‘npoints’; or cycling through the learners one by one, using ‘cycle’. One can dynamically change the strategy while the simulation is running by setting the learner.strategy attribute.
- Type
‘loss_improvements’ (default), ‘loss’, ‘npoints’, or ‘cycle’
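The four strategies can be illustrated with a small self-contained sketch. This is a toy stand-in, not adaptive's actual implementation, and the dict keys below are hypothetical placeholders for the corresponding child-learner quantities:

```python
def pick_child(children, strategy):
    """Toy illustration of which child learner each strategy asks next.

    ``children`` is a list of dicts with hypothetical keys
    'loss_improvement', 'loss', and 'npoints'.
    """
    indices = range(len(children))
    if strategy == 'loss_improvements':
        # Ask the learner whose next point would improve the loss the most.
        return max(indices, key=lambda i: children[i]['loss_improvement'])
    if strategy == 'loss':
        # Ask the learner that currently has the largest loss.
        return max(indices, key=lambda i: children[i]['loss'])
    if strategy == 'npoints':
        # Keep the number of points balanced: ask the learner with the fewest.
        return min(indices, key=lambda i: children[i]['npoints'])
    raise ValueError(f"unknown strategy: {strategy}")

children = [
    {'loss_improvement': 0.1, 'loss': 0.5, 'npoints': 10},
    {'loss_improvement': 0.4, 'loss': 0.2, 'npoints': 5},
]
assert pick_child(children, 'loss_improvements') == 1
assert pick_child(children, 'loss') == 0
assert pick_child(children, 'npoints') == 1
# 'cycle' is stateful and simply alternates over the learner indices.
```

Note that ‘loss_improvements’ and ‘loss’ pick different children here: the second child has little remaining loss but a promising next point.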
Notes
This learner compares the loss calculated from the “child” learners. This requires that the losses of different learners can be meaningfully compared. For the moment we enforce this restriction by requiring that all learners are of the same type, but (depending on the internals of the learner) it may be that losses cannot be compared even between learners of the same type. In that case the BalancingLearner will behave in an undefined way; change the strategy if this happens.
- property data#
- classmethod from_product(f, learner_type, learner_kwargs, combos)[source]#
Create a BalancingLearner with learners for all combinations of the named variables’ values. The cdims will be set correctly, so calling learner.plot will return a holoviews.core.HoloMap with the correct labels.
- Parameters
f (callable) – Function to learn; must take the arguments provided in combos.
learner_type (BaseLearner) – The learner type that should wrap the function. For example Learner1D.
learner_kwargs (dict) – Keyword arguments for the learner_type. For example dict(bounds=[0, 1]).
combos (dict (mapping individual fn arguments -> sequence of values)) – A learner will be instantiated for every combination of the argument values.
- Returns
learner – A BalancingLearner with learners for all combinations of combos.
- Return type
BalancingLearner
Example
>>> def f(x, n, alpha, beta):
...     return scipy.special.eval_jacobi(n, alpha, beta, x)
>>> combos = {
...     'n': [1, 2, 4, 8, 16],
...     'alpha': np.linspace(0, 2, 3),
...     'beta': np.linspace(0, 1, 5),
... }
>>> learner = BalancingLearner.from_product(
...     f, Learner1D, dict(bounds=(0, 1)), combos)
Notes
The order of the child learners inside learner.learners is the same as in adaptive.utils.named_product(**combos).
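That ordering can be reproduced with itertools.product. The sketch below is a minimal re-implementation of named_product for illustration, not the library's own code:

```python
import itertools

def named_product(**items):
    """Yield one dict per combination; the last keyword varies fastest."""
    names = list(items)
    return [dict(zip(names, values))
            for values in itertools.product(*items.values())]

combos = named_product(a=[1, 2], b=['x', 'y'])
assert combos == [{'a': 1, 'b': 'x'}, {'a': 1, 'b': 'y'},
                  {'a': 2, 'b': 'x'}, {'a': 2, 'b': 'y'}]
```

So learner.learners[1] corresponds to the combination {'a': 1, 'b': 'y'}, and so on.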
- load(fname, compress=True)[source]#
Load the data of the child learners from pickle files in a directory.
- Parameters
fname (callable or sequence of strings) – Given a learner, returns a filename from which to load the data. Or a list (or iterable) with filenames.
compress (bool, default True) – If the data is compressed when saved, one must load it with compression too.
Example
See the example in the BalancingLearner.save doc-string.
- load_dataframe(df: pandas.core.frame.DataFrame, index_name: str = 'learner_index', **kwargs)[source]#
Load the data from a pandas.DataFrame into the child learners.
- Parameters
df (pandas.DataFrame) – DataFrame with the data to load.
index_name (str, optional) – The index_name used in to_dataframe, by default “learner_index”.
**kwargs (dict) – Keyword arguments passed to each child_learner.load_dataframe(**kwargs).
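As a sketch of the data layout, the ‘learner_index’ column is what lets a concatenated frame be split back per child learner. The data below is hypothetical and uses pandas only:

```python
import pandas as pd

# Hypothetical concatenated frame: one row per point, tagged with the
# index of the child learner it belongs to.
df = pd.DataFrame({
    'learner_index': [0, 0, 1],
    'x': [0.0, 1.0, 0.5],
    'y': [0.0, 1.0, 0.25],
})

# Splitting it back per child, as loading the frame must do internally:
per_child = {index: group.drop(columns='learner_index')
             for index, group in df.groupby('learner_index')}
assert len(per_child[0]) == 2
assert len(per_child[1]) == 1
```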
- loss(real=True)[source]#
Return the loss for the current state of the learner.
- Parameters
real (bool, default: True) – If False, return the “expected” loss, i.e. the loss including the as-yet unevaluated points (possibly by interpolation).
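For a BalancingLearner the natural aggregation is the worst loss over the child learners; a one-line sketch, assuming that max-over-children aggregation:

```python
def balancing_loss(child_losses):
    # Assumed aggregation: the balancing loss is the largest child loss.
    return max(child_losses)

assert balancing_loss([0.1, 0.7, 0.3]) == 0.7
```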
- new() adaptive.learner.balancing_learner.BalancingLearner [source]#
Create a new BalancingLearner with the same parameters.
- property npoints#
- property nsamples#
- property pending_points#
- plot(cdims=None, plotter=None, dynamic=True)[source]#
Returns a DynamicMap with sliders.
- Parameters
cdims (sequence of dicts, or (keys, iterable of values), optional) –
Constant dimensions; the parameters that label the learners. Example inputs that all give identical results:
sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
...          {'A': True, 'B': 1},
...          {'A': False, 'B': 0},
...          {'A': False, 'B': 1}]
tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
...                       (False, 0), (False, 1)])
plotter (callable, optional) – A function that takes the learner as an argument and returns a holoviews object. By default learner.plot() will be called.
dynamic (bool, default True) – Return a holoviews.core.DynamicMap if True, else a holoviews.core.HoloMap. The DynamicMap is rendered as the sliders change and can therefore not be exported to html. The HoloMap does not have this problem.
- Returns
dm – A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with sliders that are defined by cdims.
- Return type
holoviews.core.DynamicMap (default) or holoviews.core.HoloMap
- save(fname, compress=True)[source]#
Save the data of the child learners into pickle files in a directory.
- Parameters
fname (callable or sequence of strings) – Given a learner, returns a filename into which to save the data. Or a list (or iterable) with filenames.
compress (bool, default True) – Compress the data upon saving using gzip. Data saved with compression must be loaded with compression too.
Example
>>> def combo_fname(learner):
...     val = learner.function.keywords  # because functools.partial
...     fname = '__'.join([f'{k}_{v}.pickle' for k, v in val.items()])
...     return 'data_folder/' + fname
>>>
>>> def f(x, a, b):
...     return a * x**2 + b
>>>
>>> learners = [Learner1D(functools.partial(f, **combo), (-1, 1))
...             for combo in adaptive.utils.named_product(a=[1, 2], b=[1])]
>>>
>>> learner = BalancingLearner(learners)
>>> # Run the learner
>>> runner = adaptive.Runner(learner)
>>> # Then save
>>> learner.save(combo_fname)  # use 'load' in the same way
- property strategy#
Can be either ‘loss_improvements’ (default), ‘loss’, ‘npoints’, or ‘cycle’. The points that the BalancingLearner chooses can be based on: the best ‘loss_improvements’; the smallest total ‘loss’ of the child learners; the number of points per learner, using ‘npoints’; or going through all learners one by one, using ‘cycle’. One can dynamically change the strategy while the simulation is running by setting the learner.strategy attribute.
- tell(x, y)[source]#
Tell the learner about a single value.
- Parameters
x (A value from the function domain) –
y (A value from the function image) –
- tell_pending(x)[source]#
Tell the learner that ‘x’ has been requested, so that it is not suggested again.
- to_dataframe(index_name: str = 'learner_index', **kwargs)[source]#
Return the data of the child learners concatenated into one pandas.DataFrame.
- Parameters
index_name (str, optional) – Name of the column that contains the index of the child learner, by default “learner_index”.
**kwargs (dict) – Keyword arguments passed to each child_learner.to_dataframe(**kwargs).
- Returns
The concatenated data of the child learners.
- Return type
pandas.DataFrame
- Raises
ImportError – If pandas is not installed.