Implemented algorithms

The core concept in adaptive is that of a learner. A learner samples a function at the best places in its parameter space to get maximum “information” about the function. As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next.

Of course, what qualifies as the “best places” will depend on your application domain! adaptive makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.
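
To make this concrete, here is a minimal, hand-driven sketch of the ask/tell pattern that learners share; the function f is just a placeholder example, and in practice a Runner (see below) drives this loop for you:

import adaptive

def f(x):
    return x**3 - x  # placeholder example function

# Learner1D suggests where to sample next within the given bounds.
learner = adaptive.Learner1D(f, bounds=(-2, 2))

for _ in range(20):
    points, _ = learner.ask(1)    # ask for the next best point(s)
    for x in points:
        learner.tell(x, f(x))     # feed the evaluated result back

print(learner.loss())  # a lower loss means the function is better known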

The following learners are implemented:

  • Learner1D, for 1D functions f: ℝ → ℝ^N,

  • Learner2D, for 2D functions f: ℝ^2 → ℝ^N,

  • LearnerND, for ND functions f: ℝ^N → ℝ^M,

  • AverageLearner, for stochastic functions where you want to average the result over many evaluations,

  • IntegratorLearner, for when you want to integrate a 1D function f: ℝ → ℝ.
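
For illustration, constructing a few of these learners could look like the following sketch; the example functions and the tolerance values are placeholders, not part of adaptive:

import random

import adaptive

def f_1d(x):
    return x**2          # placeholder 1D function

def f_2d(xy):
    x, y = xy
    return x * y         # placeholder 2D function

def noisy(seed):
    random.seed(seed)    # placeholder stochastic function of a random seed
    return random.gauss(0, 1)

learner_1d = adaptive.Learner1D(f_1d, bounds=(-1, 1))
learner_2d = adaptive.Learner2D(f_2d, bounds=[(-1, 1), (-1, 1)])
average_learner = adaptive.AverageLearner(noisy, rtol=0.01)
integrator_learner = adaptive.IntegratorLearner(f_1d, bounds=(0, 1), tol=1e-8)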

Meta-learners (to be used with other learners):

  • BalancingLearner, for when you want to run several learners at once, selecting the “best” one each time you get more points,

  • DataSaver, for when your function doesn’t just return a scalar or a vector.
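
As a rough sketch of how the meta-learners wrap other learners (again with placeholder functions and values):

from operator import itemgetter

import adaptive

def f(x):
    # Returns a dict instead of a scalar; DataSaver keeps the full result
    # while the wrapped Learner1D only sees the 'y' value.
    return {'y': x**2, 'details': 'anything else you want to keep'}

data_saver = adaptive.DataSaver(
    adaptive.Learner1D(f, bounds=(-1, 1)),
    arg_picker=itemgetter('y'),
)

# BalancingLearner runs several learners at once and picks points for
# whichever learner is expected to improve the most.
learners = [
    adaptive.Learner1D(lambda x, a=a: a * x**2, bounds=(-1, 1))
    for a in (1, 2, 3)
]
balancing_learner = adaptive.BalancingLearner(learners)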

In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for concurrent.futures, ipyparallel and distributed.
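
As a hedged sketch (the exact goal and executor arguments may differ between versions), running a learner in parallel with a concurrent.futures executor could look like:

from concurrent.futures import ProcessPoolExecutor

import adaptive

def f(x):
    return x**3 - x  # placeholder example function

if __name__ == '__main__':
    learner = adaptive.Learner1D(f, bounds=(-2, 2))

    # BlockingRunner evaluates f in parallel until the goal is reached;
    # inside a Jupyter notebook you would typically use adaptive.Runner,
    # which runs in the background instead of blocking.
    adaptive.BlockingRunner(
        learner,
        goal=lambda learner: learner.loss() < 0.01,
        executor=ProcessPoolExecutor(max_workers=4),
    )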

Examples

Here are some examples of how Adaptive samples a function compared to homogeneous sampling. The documentation contains interactive animations (click the Play button or move the sliders) for:

  • adaptive.Learner1D

  • adaptive.Learner2D

  • adaptive.AverageLearner

  • adaptive.LearnerND

See more in the Tutorial Adaptive.

Installation

adaptive works with Python 3.6 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using conda:

conda install -c conda-forge adaptive

adaptive is also available on PyPI:

pip install adaptive[notebook]

The [notebook] above will also install the optional dependencies for running adaptive inside a Jupyter notebook.

Development

Clone the repository and run setup.py develop, which adds a link to the cloned repository to your Python path:

git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
python3 setup.py develop

We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed packages while working on adaptive.

To avoid polluting the history with notebook output, please set up the git filter by executing

python ipynb_filter.py

in the repository.

Credits

We would like to give credit to the following people:

  • Pedro Gonnet for his implementation of CQUAD, “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.

  • Pauli Virtanen for his AdaptiveTriSampling script (no longer available online since SciPy Central went down) which served as inspiration for the Learner2D.

Authors

Below is a list of the contributors to Adaptive:

For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions, please file a GitHub issue or submit a pull request.