adaptive.BlockingRunner

class adaptive.BlockingRunner(learner, goal, *, executor=None, ntasks=None, log=False, shutdown_executor=False, retries=0, raise_if_retries_exceeded=True)

Bases: adaptive.runner.BaseRunner

Run a learner synchronously in an executor.
- Parameters
  - learner (BaseLearner instance)
  - goal (callable) – The end condition for the calculation. This function must take the learner as its sole argument, and return True when we should stop requesting more points.
  - executor (concurrent.futures.Executor, distributed.Client, mpi4py.futures.MPIPoolExecutor, or ipyparallel.Client, optional) – The executor in which to evaluate the function to be learned. If not provided, a new ProcessPoolExecutor is used on Unix systems, while on Windows a distributed.Client is used if distributed is installed.
  - ntasks (int, optional) – The number of concurrent function evaluations. Defaults to the number of cores available in the executor.
  - log (bool, default: False) – If True, record the method calls made to the learner by this runner.
  - shutdown_executor (bool, default: False) – If True, shut down the executor when the runner has completed. If executor is not provided, then the executor created internally by the runner is shut down regardless of this parameter.
  - retries (int, default: 0) – Maximum number of retries of a given point x in learner.function(x). After retries is reached for x, the point is present in runner.failed.
  - raise_if_retries_exceeded (bool, default: True) – Raise the error after a point x has failed retries times.
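The synchronous run loop that such a runner performs can be sketched as follows. This is a simplified illustration built on `concurrent.futures`, not adaptive's actual implementation; `blocking_run` and `ToyLearner` are hypothetical names standing in for the real runner and a `BaseLearner`.

```python
import concurrent.futures

def blocking_run(learner, goal, executor, ntasks):
    """Simplified sketch of a synchronous run loop: keep asking the
    learner for points and evaluating them in the executor until the
    goal callable returns True."""
    while not goal(learner):
        # ask() returns (points, loss_improvements); only the points matter here.
        points, _ = learner.ask(ntasks)
        futures = {executor.submit(learner.function, x): x for x in points}
        for fut in concurrent.futures.as_completed(futures):
            learner.tell(futures[fut], fut.result())

class ToyLearner:
    """Minimal stand-in for a learner (hypothetical, for illustration)."""
    def __init__(self, function, candidates):
        self.function = function
        self._candidates = list(candidates)
        self.data = {}

    def ask(self, n):
        points = self._candidates[:n]
        self._candidates = self._candidates[n:]
        return points, None

    def tell(self, x, y):
        self.data[x] = y

learner = ToyLearner(lambda x: x * x, range(8))
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
    blocking_run(learner, goal=lambda l: len(l.data) >= 8, executor=ex, ntasks=2)
print(sorted(learner.data.items())[:3])  # → [(0, 0), (1, 1), (2, 4)]
```

The loop blocks in `as_completed` between batches, which is what makes this runner synchronous: control only returns to the caller once the goal is reached.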
- learner
  The underlying learner. May be queried for its state.
  Type: BaseLearner instance
- log
  Record of the method calls made to the learner, in the format (method_name, *args).
- to_retry
  Mapping of {point: n_fails, ...}. When a point has failed runner.retries times it is removed, but it will be present in runner.tracebacks.
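The retry bookkeeping described above can be sketched like this. All names below (`record_failure`, the module-level `to_retry` and `failed`) are illustrative stand-ins, not adaptive's internals:

```python
# Hypothetical sketch of the to_retry / failed bookkeeping described above.
retries = 2    # corresponds to runner.retries
to_retry = {}  # {point: n_fails, ...}
failed = set() # points that have exhausted their retry budget

def record_failure(point):
    """Count a failure; once a point has failed `retries` times,
    drop it from to_retry and mark it as failed."""
    to_retry[point] = to_retry.get(point, 0) + 1
    if to_retry[point] >= retries:
        del to_retry[point]
        failed.add(point)

record_failure(0.5)
print(to_retry)  # → {0.5: 1}
record_failure(0.5)
print(to_retry, failed)  # → {} {0.5}
```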
- elapsed_time : callable
  A method that returns the time elapsed since the runner was started.
- overhead : callable
  The overhead, in percent, of using Adaptive. This includes the overhead of the executor. Essentially, this is 100 * (1 - total_elapsed_function_time / self.elapsed_time()).
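The overhead formula can be checked with a small worked calculation (illustrative only; the `overhead` helper below is not the runner's method, just the formula restated):

```python
def overhead(total_elapsed_function_time, elapsed_time):
    """Overhead in percent, as defined above:
    100 * (1 - total_elapsed_function_time / elapsed_time)."""
    return 100 * (1 - total_elapsed_function_time / elapsed_time)

# If the learned function accounted for 8 s of a 10 s run,
# everything else (scheduling, pickling, the executor itself)
# took the remaining 2 s, i.e. about 20 percent overhead.
print(round(overhead(8.0, 10.0), 6))  # → 20.0
```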