adaptive.runner.BaseRunner

- class adaptive.runner.BaseRunner(learner: LearnerType, goal: Optional[Callable[[LearnerType], bool]] = None, *, loss_goal: Optional[float] = None, npoints_goal: Optional[int] = None, end_time_goal: Optional[datetime] = None, duration_goal: Optional[Union[timedelta, int, float]] = None, executor: Optional[Union[ProcessPoolExecutor, ThreadPoolExecutor, SequentialExecutor, _ReusablePoolExecutor]] = None, ntasks: Optional[int] = None, log: bool = False, shutdown_executor: bool = False, retries: int = 0, raise_if_retries_exceeded: bool = True, allow_running_forever: bool = False)
Bases: object

Base class for runners that use concurrent.futures.Executors.

- Parameters:
- learner (BaseLearner instance)
- goal (callable, optional) – The end condition for the calculation. This function must take the learner as its sole argument, and return True when we should stop requesting more points.
- loss_goal (float, optional) – Convenience argument, use instead of goal. The end condition for the calculation. Stop when the loss is smaller than this value.
- npoints_goal (int, optional) – Convenience argument, use instead of goal. The end condition for the calculation. Stop when the number of points is larger than or equal to this value.
- end_time_goal (datetime, optional) – Convenience argument, use instead of goal. The end condition for the calculation. Stop when the current time is later than or equal to this value.
- duration_goal (timedelta or number, optional) – Convenience argument, use instead of goal. The end condition for the calculation. Stop when the current time is later than or equal to start_time + duration_goal. duration_goal can be a number indicating a number of seconds.
- executor (concurrent.futures.Executor, distributed.Client, mpi4py.futures.MPIPoolExecutor, ipyparallel.Client, or loky.get_reusable_executor, optional) – The executor in which to evaluate the function to be learned. If not provided, a new ProcessPoolExecutor is used on Linux, and a loky.get_reusable_executor on macOS and Windows.
- ntasks (int, optional) – The number of concurrent function evaluations. Defaults to the number of cores available in the executor.
- log (bool, default: False) – If True, record the method calls made to the learner by this runner.
- shutdown_executor (bool, default: False) – If True, shut down the executor when the runner has completed. If the executor is not provided, then the executor created internally by the runner is shut down regardless of this parameter.
- retries (int, default: 0) – Maximum number of retries of a certain point x in learner.function(x). After retries is reached for x, the point is present in runner.failed.
- raise_if_retries_exceeded (bool, default: True) – Raise the error after a point x failed retries times.
- allow_running_forever (bool, default: False) – Allow the runner to run forever when the goal is None.
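The convenience arguments are mutually interchangeable with a hand-written goal callable. As a hedged sketch (illustrative only, not adaptive's internal code), loss_goal and npoints_goal can be reduced to a single goal predicate like this; the FakeLearner class is a hypothetical stand-in for a real learner:

```python
def make_goal(loss_goal=None, npoints_goal=None):
    """Combine convenience goals into one stopping predicate (sketch)."""
    def goal(learner):
        # Stop when the loss is smaller than loss_goal ...
        if loss_goal is not None and learner.loss() < loss_goal:
            return True
        # ... or when the number of points reaches npoints_goal.
        if npoints_goal is not None and learner.npoints >= npoints_goal:
            return True
        return False
    return goal


class FakeLearner:
    """Minimal stand-in exposing the attributes the goal inspects."""
    def __init__(self, loss, npoints):
        self._loss, self.npoints = loss, npoints

    def loss(self):
        return self._loss


goal = make_goal(loss_goal=0.01, npoints_goal=100)
print(goal(FakeLearner(loss=0.5, npoints=10)))    # False: neither goal met
print(goal(FakeLearner(loss=0.005, npoints=10)))  # True: loss goal met
```

Passing an explicit goal gives you full control (e.g. combining conditions with custom logic), while the keyword arguments cover the common cases.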
- learner
The underlying learner. May be queried for its state.
- Type: BaseLearner instance
- log
Record of the method calls made to the learner, in the format (method_name, *args).
- Type: list or None
- to_retry
List of (point, n_fails) tuples. When a point has failed runner.retries times it is removed, but it will be present in runner.tracebacks.
- Type: list of tuples
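The retry bookkeeping described above can be sketched as follows (a hedged illustration under stated assumptions, not adaptive's actual implementation — record_failure is a hypothetical helper): each failure increments a point's count, and once the count exceeds retries the point is dropped from the retry list and marked as permanently failed:

```python
def record_failure(to_retry, point, retries):
    """Increment a point's failure count; drop it once retries is exceeded.

    Returns the updated list of (point, n_fails) tuples and the set of
    points that have permanently failed.
    """
    counts = dict(to_retry)
    counts[point] = counts.get(point, 0) + 1
    # Points that failed more than `retries` times are no longer retried.
    failed = {p for p, n in counts.items() if n > retries}
    to_retry = [(p, n) for p, n in counts.items() if n <= retries]
    return to_retry, failed


to_retry, failed = record_failure([], point=0.5, retries=2)
print(to_retry, failed)  # [(0.5, 1)] set()
```

With retries=2, a point stays in to_retry after its first and second failures and moves to the failed set on the third.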
- overhead
The overhead in percent of using Adaptive. Essentially, this is 100 * (1 - total_elapsed_function_time / self.elapsed_time()).
- Type: callable
- abstract elapsed_time()
Return the total time elapsed since the runner was started.
Is called in overhead.
- overhead() → float
Overhead of using Adaptive and the executor, in percent.
This is measured as 100 * (1 - t_function / t_elapsed).

Notes

This includes the overhead of the executor that is being used. The slower your function is, the lower the overhead will be. The learners take ~5-50 ms to suggest a point, and sending that point to the executor also takes about ~5 ms, so you will benefit from using Adaptive whenever executing the function takes longer than 100 ms. This of course depends on the type of executor and the type of learner, but it is a rough rule of thumb.