adaptive.runner.BaseRunner#

class adaptive.runner.BaseRunner(learner, goal, *, executor=None, ntasks=None, log=False, shutdown_executor=False, retries=0, raise_if_retries_exceeded=True)[source]#

Bases: object

Base class for runners that use concurrent.futures.Executors.

Parameters
  • learner (BaseLearner instance) – The learner to be run.

  • goal (callable) – The end condition for the calculation. This function must take the learner as its sole argument, and return True when we should stop requesting more points.

  • executor (concurrent.futures.Executor, distributed.Client, mpi4py.futures.MPIPoolExecutor, ipyparallel.Client, or loky.get_reusable_executor, optional) – The executor in which to evaluate the function to be learned. If not provided, a new ProcessPoolExecutor is used on Linux, and a loky.get_reusable_executor on MacOS and Windows.

  • ntasks (int, optional) – The number of concurrent function evaluations. Defaults to the number of cores available in executor.

  • log (bool, default: False) – If True, record the method calls made to the learner by this runner.

  • shutdown_executor (bool, default: False) – If True, shutdown the executor when the runner has completed. If executor is not provided then the executor created internally by the runner is shut down, regardless of this parameter.

  • retries (int, default: 0) – Maximum number of retries of a certain point x in learner.function(x). After retries is reached for x, the point is present in runner.failed.

  • raise_if_retries_exceeded (bool, default: True) – If True, re-raise the error once a point x has failed retries times.
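As a sketch of how the goal argument works: it is any callable that takes the learner and returns True when no more points should be requested. The FakeLearner class below is a hypothetical stand-in for a real BaseLearner subclass, used only for illustration.

```python
# Sketch: a goal is any callable taking the learner and returning True to stop.
# FakeLearner is a hypothetical stand-in for a real BaseLearner subclass.

class FakeLearner:
    def __init__(self):
        self.npoints = 0

    def loss(self):
        # Pretend the loss shrinks as more points are evaluated.
        return 1.0 / (1 + self.npoints)


def goal(learner):
    """Stop requesting points once the loss drops below 0.05."""
    return learner.loss() < 0.05


learner = FakeLearner()
print(goal(learner))   # loss is 1.0, so the runner would keep going
learner.npoints = 50
print(goal(learner))   # loss is ~0.02, so the runner would stop
```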

learner#

The underlying learner. May be queried for its state.

Type

BaseLearner instance

log#

Record of the method calls made to the learner, in the format (method_name, *args).

Type

list or None
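Because each log entry is (method_name, *args), a recorded log can be replayed onto a fresh learner by calling each method in order. A minimal sketch of that idea, with a hypothetical RecordingLearner and a hand-written replay helper (adaptive ships a real version of this as adaptive.runner.replay_log):

```python
# Sketch: replaying a runner log onto a fresh learner.
# Each entry is (method_name, *args); RecordingLearner is hypothetical.

class RecordingLearner:
    def __init__(self):
        self.data = {}

    def ask(self, n):
        # Return n points and their (dummy) loss improvements.
        return list(range(n)), [0.0] * n

    def tell(self, x, y):
        self.data[x] = y


def replay_log(learner, log):
    """Call each logged method on the learner, in order."""
    for method, *args in log:
        getattr(learner, method)(*args)


log = [("ask", 2), ("tell", 0, 1.5), ("tell", 1, 2.5)]
fresh = RecordingLearner()
replay_log(fresh, log)
print(fresh.data)  # {0: 1.5, 1: 2.5}
```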

to_retry#

List of (point, n_fails) tuples. When a point has failed runner.retries times, it is removed from this list but remains present in runner.tracebacks.

Type

list of tuples
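The retry bookkeeping described above can be sketched as follows: each failure bumps a point's n_fails counter, and once the limit is exceeded the point moves to the failed set. All names below are illustrative, not adaptive's internals.

```python
# Sketch of the retry bookkeeping: each failure bumps n_fails; once it
# exceeds `retries` the point moves to `failed`. Names are illustrative.

retries = 2
to_retry = {}   # point -> n_fails
failed = set()

def record_failure(point):
    n = to_retry.get(point, 0) + 1
    if n > retries:
        # Out of retries: stop tracking the point and mark it as failed.
        to_retry.pop(point, None)
        failed.add(point)
    else:
        to_retry[point] = n

for _ in range(3):  # one initial attempt plus two retries, all failing
    record_failure(0.5)

print(to_retry)  # {}
print(failed)    # {0.5}
```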

tracebacks#

List of (point, tb) tuples for the points that failed.

Type

list of tuples

pending_points#

A list of tuples with (concurrent.futures.Future, point).

Type

list of tuples

overhead#

The overhead in percent of using Adaptive. Essentially, this is 100 * (1 - total_elapsed_function_time / self.elapsed_time()).

Type

callable

property do_log#

abstract elapsed_time()[source]#

Return the total time elapsed since the runner was started.

This is called in overhead().

property failed#

Set of points that failed runner.retries times.

overhead()[source]#

Overhead of using Adaptive and the executor in percent.

This is measured as 100 * (1 - t_function / t_elapsed).

Notes

This includes the overhead of the executor that is being used. The slower your function is, the lower the overhead will be. The learners take ~5-50 ms to suggest a point and sending that point to the executor also takes about ~5 ms, so you will benefit from using Adaptive whenever executing the function takes longer than 100 ms. This of course depends on the type of executor and the type of learner but is a rough rule of thumb.
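The formula can be checked with simple numbers. A sketch computing the overhead for two hypothetical runs (the timings below are made up for illustration):

```python
# Sketch: overhead = 100 * (1 - t_function / t_elapsed).
# Timings below are made up for illustration.

def overhead(t_function, t_elapsed):
    return 100 * (1 - t_function / t_elapsed)

# Fast function: of 5 s wall time, only 1 s is spent in the function,
# so most of the run is Adaptive + executor bookkeeping.
print(overhead(t_function=1.0, t_elapsed=5.0))      # 80.0 -> 80% overhead

# Slow function: the same bookkeeping cost becomes negligible.
print(overhead(t_function=100.0, t_elapsed=104.0))  # ~3.8% overhead
```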

property pending_points#
property to_retry#
property tracebacks#