
pySOT.auxiliary_problems module

Module:auxiliary_problems
Author:David Eriksson <dme65@cornell.edu>
pySOT.auxiliary_problems.candidate_dycors(num_pts, opt_prob, surrogate, X, fX, weights, prob_perturb, Xpend=None, sampling_radius=0.2, subset=None, dtol=0.001, num_cand=None)

Select new evaluations using DYCORS.

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • weights (list or numpy.array) – num_pts weights in [0, 1] for merit function
  • prob_perturb (list or numpy.array) – Probability to perturb a given coordinate
  • Xpend (numpy.array) – Pending evaluations
  • sampling_radius (float) – Perturbation radius
  • subset (list or numpy.array) – Coordinates that should be perturbed, use None for all
  • dtol (float) – Minimum distance between evaluated and pending points
  • num_cand (int) – Number of candidate points
Returns:

The num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim
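
A minimal usage sketch (the problem, sample size, weight, and perturbation probability are arbitrary; the surrogate and test problem are the ones documented in this package):

    import numpy as np

    from pySOT.auxiliary_problems import candidate_dycors
    from pySOT.optimization_problems import Ackley
    from pySOT.surrogate import RBFInterpolant, CubicKernel, LinearTail

    ackley = Ackley(dim=5)
    X = np.random.uniform(ackley.lb, ackley.ub, (20, ackley.dim))   # evaluated points
    fX = np.array([ackley.eval(x) for x in X]).reshape(-1, 1)

    surrogate = RBFInterpolant(dim=ackley.dim, kernel=CubicKernel(),
                               tail=LinearTail(ackley.dim))
    surrogate.add_points(X, fX)

    # One weight per requested point; perturb each coordinate with probability 0.5
    x_new = candidate_dycors(num_pts=1, opt_prob=ackley, surrogate=surrogate,
                             X=X, fX=fX, weights=[0.8],
                             prob_perturb=0.5 * np.ones(ackley.dim))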

pySOT.auxiliary_problems.candidate_srbf(num_pts, opt_prob, surrogate, X, fX, weights, Xpend=None, sampling_radius=0.2, subset=None, dtol=0.001, num_cand=None)

Select new evaluations using Stochastic RBF (SRBF).

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • weights (list or numpy.array) – num_pts weights in [0, 1] for merit function
  • Xpend (numpy.array) – Pending evaluations, of size k x dim
  • sampling_radius (float) – Perturbation radius
  • subset (list or numpy.array) – Coordinates that should be perturbed, use None for all
  • dtol (float) – Minimum distance between evaluated and pending points
  • num_cand (int) – Number of candidate points
Returns:

The num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim

pySOT.auxiliary_problems.candidate_uniform(num_pts, opt_prob, surrogate, X, fX, weights, Xpend=None, subset=None, dtol=0.001, num_cand=None)

Select new evaluations from uniform candidate points.

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • weights (list or numpy.array) – num_pts weights in [0, 1] for merit function
  • Xpend (numpy.array) – Pending evaluations
  • subset (list or numpy.array) – Coordinates that should be perturbed, use None for all
  • dtol (float) – Minimum distance between evaluated and pending points
  • num_cand (int) – Number of candidate points
Returns:

The num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim

pySOT.auxiliary_problems.ei_merit(X, surrogate, fX, XX=None, dtol=0)

Compute the expected improvement merit function.

Parameters:
  • X (numpy.array) – Points where to compute EI, of size n x dim
  • surrogate (object) – Surrogate model object, must implement predict_std
  • fX (numpy.array) – Values at previously evaluated points, of size m x 1
  • XX (numpy.array) – Previously evaluated points, of size m x dim
  • dtol (float) – Minimum distance between evaluated and pending points
Returns:

The expected improvement values at the points X

Return type:

numpy.array of length X.shape[0]
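
For reference, under a GP surrogate the expected improvement has the standard closed form; a hand-rolled sketch (hypothetical helper, not part of pySOT) that relies only on a surrogate implementing predict and predict_std:

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(X, surrogate, f_best):
        # Closed-form EI for minimization: E[max(f_best - f(x), 0)]
        mu = surrogate.predict(X).ravel()
        sigma = np.maximum(surrogate.predict_std(X).ravel(), 1e-12)  # guard sigma = 0
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)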

pySOT.auxiliary_problems.expected_improvement_ga(num_pts, opt_prob, surrogate, X, fX, Xpend=None, dtol=0.001, ei_tol=1e-06)

Maximize EI using a genetic algorithm.

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • Xpend (numpy.array) – Pending evaluations
  • dtol (float) – Minimum distance between evaluated and pending points
  • ei_tol (float) – Return None if we don’t find an EI of at least this value
Returns:

num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim

pySOT.auxiliary_problems.expected_improvement_uniform(num_pts, opt_prob, surrogate, X, fX, Xpend=None, dtol=0.001, ei_tol=1e-06, num_cand=None)

Maximize EI from a uniform set of points.

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • Xpend (numpy.array) – Pending evaluations
  • dtol (float) – Minimum distance between evaluated and pending points
  • ei_tol (float) – Return None if we can’t reach this threshold
  • num_cand (int) – Number of candidate points
Returns:

num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim

pySOT.auxiliary_problems.lcb_merit(X, surrogate, fX, XX=None, dtol=0.0, kappa=2.0)

Compute the lower confidence bound (LCB) merit function.

Parameters:
  • X (numpy.array) – Points where to compute LCB, of size n x dim
  • surrogate (object) – Surrogate model object, must implement predict_std
  • fX (numpy.array) – Values at previously evaluated points, of size m x 1
  • XX (numpy.array) – Previously evaluated points, of size m x dim
  • dtol (float) – Minimum distance between evaluated and pending points
  • kappa (float) – Constant in front of the standard deviation. Default: 2.0
Returns:

The lower confidence bound values at the points X

Return type:

numpy.array of length X.shape[0]

pySOT.auxiliary_problems.lower_confidence_bound_ga(num_pts, opt_prob, surrogate, X, fX, Xpend=None, kappa=2.0, dtol=0.001, lcb_target=None)

Minimize the LCB using a genetic algorithm.

Parameters:
  • num_pts (int) – Number of points to generate
  • opt_prob (object) – Optimization problem
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • Xpend (numpy.array) – Pending evaluations
  • dtol (float) – Minimum distance between evaluated and pending points
  • lcb_target (float) – Return None if we don’t find an LCB value <= lcb_target
Returns:

num_pts new points to evaluate

Return type:

numpy.array of size num_pts x dim

pySOT.auxiliary_problems.weighted_distance_merit(num_pts, surrogate, X, fX, cand, weights, Xpend=None, dtol=0.001)

Compute the weighted distance merit function.

Parameters:
  • num_pts (int) – Number of points to generate
  • surrogate (object) – Surrogate model object
  • X (numpy.array) – Previously evaluated points, of size n x dim
  • fX (numpy.array) – Values at previously evaluated points, of size n x 1
  • cand (numpy.array) – Candidate points to select from, of size m x dim
  • weights (list or numpy.array) – num_pts weights in [0, 1] for merit function
  • Xpend (numpy.array) – Pending evaluation, of size k x dim
  • dtol (float) – Minimum distance between evaluated and pending points
Returns:

The num_pts new points chosen from the candidate points

Return type:

numpy.array of size num_pts x dim

pySOT.controller module

Module:controller
Author:David Eriksson <dme65@cornell.edu>
class pySOT.controller.CheckpointController(controller, fname='checkpoint.pysot')

Checkpoint controller

Controller that uses dill to take snapshots of the strategy each time an evaluation is completed, killed, or the run is terminated. We assume that the strategy can be pickled, or this won’t work. We currently do not respect potential termination callbacks and failed evaluation callbacks. The strategy needs to implement a resume method that is called when a run is resumed. The strategy object can assume that all pending evaluations have been killed and that their respective callbacks won’t be executed.

Parameters:
  • controller (Controller) – POAP controller
  • fname (string) – Filename for checkpoint file (file cannot exist for new run)
Variables:
  • controller – POAP controller
  • fname – Filename for snapshot
on_cancel(record)

Handle a cancelled record.

Parameters:record (EvalRecord) – Evaluation record
on_complete(record)

Handle feval completion.

Parameters:record (EvalRecord) – Evaluation record
on_kill(record)

Handle a killed record.

Parameters:record (EvalRecord) – Evaluation record
on_new_feval(record)

Handle new function evaluation request.

Parameters:record (EvalRecord) – Evaluation record
on_terminate()

Handle termination.

on_update(record)

Handle feval update.

Parameters:record (EvalRecord) – Evaluation record
resume()

Resume an optimization run.

Returns:The record corresponding to the best solution
Return type:EvalRecord
run()

Start the optimization run.

Make sure we do not overwrite any existing checkpointing files

Returns:The record corresponding to the best solution
Return type:EvalRecord
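
A minimal sketch of a checkpointed run (problem, strategy, and budget are arbitrary; SerialController comes from POAP, and SRBFStrategy provides the required resume method via SurrogateBaseStrategy):

    import os.path

    from poap.controller import SerialController
    from pySOT.controller import CheckpointController
    from pySOT.experimental_design import SymmetricLatinHypercube
    from pySOT.optimization_problems import Sphere
    from pySOT.strategy import SRBFStrategy
    from pySOT.surrogate import RBFInterpolant, CubicKernel, LinearTail

    sphere = Sphere(dim=3)
    controller = SerialController(sphere.eval)
    controller.strategy = SRBFStrategy(
        max_evals=50, opt_prob=sphere,
        exp_design=SymmetricLatinHypercube(dim=3, num_pts=8),
        surrogate=RBFInterpolant(dim=3, kernel=CubicKernel(), tail=LinearTail(3)))

    fname = "checkpoint.pysot"
    wrapped = CheckpointController(controller, fname=fname)
    # run() requires that fname does not already exist; resume() picks up a saved run
    best = wrapped.resume() if os.path.isfile(fname) else wrapped.run()
    print(best.value, best.params)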

pySOT.experimental_design module

Module:experimental_design
Author:David Eriksson <dme65@cornell.edu>, Yi Shen <ys623@cornell.edu>
class pySOT.experimental_design.ExperimentalDesign

Base class for experimental designs.

Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in the experimental design
generate_points()
class pySOT.experimental_design.LatinHypercube(dim, num_pts, criterion='c')

Latin Hypercube experimental design.

Parameters:
  • dim (int) – Number of dimensions
  • num_pts (int) – Number of desired sampling points
  • criterion (string) –

    Sampling criterion

    • “center” or “c”
      Center the points within the sampling intervals
    • “maximin” or “m”
      Maximize the minimum distance between points, but place the point in a randomized location within its interval
    • “centermaximin” or “cm”
      Same as “maximin”, but centered within the intervals
    • “correlation” or “corr”
      Minimize the maximum correlation coefficient
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in the experimental design
  • criterion – Criterion for generating the design
generate_points()

Generate a Latin hypercube design in the unit hypercube.

Returns:Latin hypercube design in unit hypercube of size num_pts x dim
Return type:numpy.array
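
A small sketch that maps a generated design from the unit hypercube to the problem domain (the bounds are arbitrary; from_unit_box is documented in pySOT.utils below):

    import numpy as np

    from pySOT.experimental_design import LatinHypercube
    from pySOT.utils import from_unit_box

    lhd = LatinHypercube(dim=2, num_pts=5)
    unit_pts = lhd.generate_points()                       # 5 x 2 design in [0, 1]^2
    pts = from_unit_box(unit_pts, np.array([-5.0, 0.0]),   # lower bounds
                        np.array([5.0, 10.0]))             # upper bounds
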
class pySOT.experimental_design.SymmetricLatinHypercube(dim, num_pts)

Symmetric Latin hypercube experimental design.

Parameters:
  • dim (int) – Number of dimensions
  • num_pts (int) – Number of desired sampling points
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in the experimental design
generate_points()

Generate a symmetric Latin hypercube design in the unit hypercube.

Returns:Symmetric Latin hypercube design in the unit hypercube of size num_pts x dim
Return type:numpy.array
class pySOT.experimental_design.TwoFactorial(dim)

Two-factorial experimental design.

The two-factorial experimental design consists of the corners of the unit hypercube, and hence \(2^{dim}\) points.

Parameters:

dim (int) – Number of dimensions

Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in the experimental design
Raises:

ValueError – If dim >= 15

generate_points()

Generate a two-factorial design in the unit hypercube.

Returns:Two-factorial design in unit hypercube of size num_pts x dim
Return type:numpy.array

pySOT.optimization_problems module

Module:optimization_problems
Author:David Eriksson <dme65@cornell.edu>, David Bindel <bindel@cornell.edu>
class pySOT.optimization_problems.Ackley(dim=10)

Ackley function

\[f(x_1,\ldots,x_n) = -20\exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{j=1}^n x_j^2} \right) -\exp \left( \frac{1}{n} \sum_{j=1}^n \cos(2 \pi x_j) \right) + 20 + e\]

subject to

\[-15 \leq x_i \leq 20\]

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Ackley function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
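
A quick sanity check of the documented optimum (sketch):

    import numpy as np
    from pySOT.optimization_problems import Ackley

    ackley = Ackley(dim=4)
    print(ackley.eval(np.zeros(4)))   # ~0.0, the global minimum
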
class pySOT.optimization_problems.Branin

Branin function

Details: http://www.sfu.ca/~ssurjano/branin.html

Global optimum: \(f(-\pi,12.275)=0.397887\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Branin function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Exponential(dim=10)

Exponential function

\[f(x_1,\ldots,x_n) = \sum_{j=1}^n e^{jx_j} - \sum_{j=1}^n e^{-5.12 j}\]

subject to

\[-5.12 \leq x_i \leq 5.12\]

Global optimum: \(f(-5.12,-5.12,...,-5.12)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Exponential function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.GoldsteinPrice

Goldstein-Price function

Details: https://www.sfu.ca/~ssurjano/goldpr.html

Global optimum: \(f(0,-1)=3\)

eval(x)

Evaluate the Goldstein-Price function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Griewank(dim=10)

Griewank function

\[f(x_1,\ldots,x_n) = 1 + \frac{1}{4000} \sum_{j=1}^n x_j^2 - \prod_{j=1}^n \cos \left( \frac{x_j}{\sqrt{j}} \right)\]

subject to

\[-512 \leq x_i \leq 512\]

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Griewank function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Hartman3

Hartman 3 function

Details: http://www.sfu.ca/~ssurjano/hart3.html

Global optimum: \(f(0.114614,0.555649,0.852547)=-3.86278\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Hartman 3 function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Hartman6

Hartman 6 function

Details: http://www.sfu.ca/~ssurjano/hart6.html

Global optimum: \(f(0.201,0.150,0.476,0.275,0.311,0.657)=-3.322\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Hartman 6 function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Himmelblau(dim=10)

Himmelblau function

\[f(x_1,\ldots,x_n) = \frac{1}{2n} \sum_{i=1}^n (x_i^4 - 16x_i^2 + 5x_i)\]

Global optimum: \(f(-2.903,...,-2.903)=-39.166\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Himmelblau function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Levy(dim=10)

Levy function

Details: https://www.sfu.ca/~ssurjano/levy.html

Global optimum: \(f(1,1,...,1)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Levy function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Michalewicz(dim=10)

Michalewicz function

\[f(x_1,\ldots,x_n) = -\sum_{i=1}^n \sin(x_i) \sin^{20} \left( \frac{ix_i^2}{\pi} \right)\]

subject to

\[0 \leq x_i \leq \pi\]
Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Michalewicz function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.OptimizationProblem

Base class for optimization problems.

eval(record)
class pySOT.optimization_problems.Perm(dim=10)

Perm function

Global optimum: \(f(1,1/2,1/3,...,1/n)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Perm function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Rastrigin(dim=10)

Rastrigin function

\[f(x_1,\ldots,x_n)=10n+\sum_{i=1}^n (x_i^2 - 10 \cos(2 \pi x_i))\]

subject to

\[-5.12 \leq x_i \leq 5.12\]

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Rastrigin function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Rosenbrock(dim=10)

Rosenbrock function

\[f(x_1,\ldots,x_n) = \sum_{j=1}^{n-1} \left( 100(x_j^2-x_{j+1})^2 + (1-x_j)^2 \right)\]

subject to

\[-2.048 \leq x_i \leq 2.048\]

Global optimum: \(f(1,1,...,1)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Rosenbrock function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Schwefel(dim=10)

Schwefel function

\[f(x_1,\ldots,x_n) = \sum_{j=1}^{n} \left( -x_j \sin(\sqrt{|x_j|}) \right) + 418.982997 n\]

subject to

\[-512 \leq x_i \leq 512\]

Global optimum: \(f(420.968746,420.968746,...,420.968746)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Schwefel function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.SixHumpCamel

Six-hump camel function

Details: https://www.sfu.ca/~ssurjano/camel6.html

Global optimum: \(f(0.0898,-0.7126)=-1.0316\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Six Hump Camel function at x

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Sphere(dim=10)

Sphere function

\[f(x_1,\ldots,x_n) = \sum_{j=1}^n x_j^2\]

subject to

\[-5.12 \leq x_i \leq 5.12\]

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Sphere function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.SumOfSquares(dim=10)

Sum of squares function

\[f(x_1,\ldots,x_n)=\sum_{i=1}^n ix_i^2\]

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Sum of squares function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Weierstrass(dim=10)

Weierstrass function

eval(x)

Evaluate the Weierstrass function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float
class pySOT.optimization_problems.Zakharov(dim=10)

Zakharov function

Global optimum: \(f(0,0,...,0)=0\)

Variables:
  • dim – Number of dimensions
  • lb – Lower variable bounds
  • ub – Upper variable bounds
  • int_var – Integer variables
  • cont_var – Continuous variables
  • min – Global minimum value
  • minimum – Global minimizer
  • info – String with problem info
eval(x)

Evaluate the Zakharov function at x.

Parameters:x (numpy.array) – Data point
Returns:Value at x
Return type:float

pySOT.strategy module

Module:strategy
Author:David Eriksson <dme65@cornell.edu>, David Bindel <bindel@cornell.edu>
class pySOT.strategy.DYCORSStrategy(max_evals, opt_prob, exp_design, surrogate, asynchronous=True, batch_size=None, extra_points=None, extra_vals=None, weights=None, num_cand=None)

DYCORS optimization strategy.

This is an implementation of the DYCORS strategy by Regis and Shoemaker:

Rommel G Regis and Christine A Shoemaker. Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization. Engineering Optimization, 45(5): 529–555, 2013.

This is an extension of the SRBF strategy that changes how the candidate points are generated. The main idea is that many objective functions depend only on a few directions so it may be advantageous to perturb only a few directions. In particular, we use a perturbation probability to perturb a given coordinate and decrease this probability after each function evaluation so fewer coordinates are perturbed later in the optimization.

Parameters:
  • max_evals (int) – Evaluation budget
  • opt_prob (OptimizationProblem) – Optimization problem object
  • exp_design (ExperimentalDesign) – Experimental design object
  • surrogate (Surrogate) – Surrogate object
  • asynchronous (bool) – Whether or not to use asynchrony (True/False)
  • batch_size (int) – Size of the batch (use 1 for serial, ignored if async)
  • extra_points (numpy.array of size n x dim) – Extra points to add to the experimental design
  • extra_vals (numpy.array of size n x 1) – Values for extra_points (np.nan/np.inf if unknown)
  • weights (list of np.array) – Weights for merit function, default = [0.3, 0.5, 0.8, 0.95]
  • num_cand (int) – Number of candidate points, default = 100*dim
generate_evals(num_pts)

Generate the next adaptive sample points.
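
A minimal serial run of this strategy, following the usual pySOT pattern of pairing a POAP controller with a strategy (problem, budget, and design size are arbitrary):

    from poap.controller import SerialController
    from pySOT.experimental_design import SymmetricLatinHypercube
    from pySOT.optimization_problems import Ackley
    from pySOT.strategy import DYCORSStrategy
    from pySOT.surrogate import RBFInterpolant, CubicKernel, LinearTail

    ackley = Ackley(dim=10)
    controller = SerialController(ackley.eval)
    controller.strategy = DYCORSStrategy(
        max_evals=200, opt_prob=ackley,
        exp_design=SymmetricLatinHypercube(dim=ackley.dim,
                                           num_pts=2 * (ackley.dim + 1)),
        surrogate=RBFInterpolant(dim=ackley.dim, kernel=CubicKernel(),
                                 tail=LinearTail(ackley.dim)),
        asynchronous=False, batch_size=1)   # serial: one evaluation at a time
    result = controller.run()
    print("Best value found:", result.value)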

class pySOT.strategy.EIStrategy(max_evals, opt_prob, exp_design, surrogate, asynchronous=True, batch_size=None, extra_points=None, extra_vals=None, reset_surrogate=True, ei_tol=None, dtol=None)

Expected Improvement strategy.

This is an implementation of Expected Improvement (EI), arguably the most popular acquisition function in Bayesian optimization. Under a Gaussian process (GP) prior, the expected value of the improvement:

I(x) := max(f_best - f(x), 0) and EI(x) := E[I(x)]

can be computed analytically, where f_best is the best observed function value. EI is one-step optimal in the sense that selecting the maximizer of EI is the optimal action if we have exactly one function evaluation remaining and must return a solution with a known function value.

When using parallelism, we constrain each new evaluation to be a distance dtol away from previous and pending evaluations to avoid that the same point is being evaluated multiple times. We use a default value of dtol = 1e-3 * norm(ub - lb), but note that this value has not been tuned carefully and may be far from optimal.

The optimization strategy terminates when the evaluation budget has been exceeded or when the EI of the next point falls below some threshold, where the default threshold is 1e-6 * (max(fX) - min(fX)).

Parameters:
  • max_evals (int) – Evaluation budget
  • opt_prob (OptimizationProblem) – Optimization problem object
  • exp_design (ExperimentalDesign) – Experimental design object
  • surrogate (Surrogate) – Surrogate object
  • asynchronous (bool) – Whether or not to use asynchrony (True/False)
  • batch_size (int) – Size of the batch (use 1 for serial, ignored if async)
  • extra_points (numpy.array of size n x dim) – Extra points to add to the experimental design
  • extra_vals (numpy.array of size n x 1) – Values for extra_points (np.nan/np.inf if unknown)
  • reset_surrogate (bool) – Whether or not to reset surrogate model
  • ei_tol (float) – Terminate if the largest EI falls below this threshold Default: 1e-6 * (max(fX) - min(fX))
  • dtol (float) – Minimum distance between new and pending/finished evaluations Default: 1e-3 * norm(ub - lb)
check_input()

Check the inputs to the optimization strategy.

generate_evals(num_pts)

Generate the next adaptive sample points.

class pySOT.strategy.LCBStrategy(max_evals, opt_prob, exp_design, surrogate, asynchronous=True, batch_size=None, extra_points=None, extra_vals=None, reset_surrogate=True, kappa=2.0, dtol=None, lcb_tol=None)

Lower confidence bound strategy.

This is an implementation of Lower Confidence Bound (LCB), a popular acquisition function in Bayesian optimization. The main idea is to minimize:

LCB(x) := E[x] - kappa * sqrt(V[x])

where E[x] is the predicted function value, V[x] is the predicted variance, and kappa is a constant that balances exploration and exploitation. We use a default value of kappa = 2.
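
A hand-rolled sketch of this merit (hypothetical helper; the documented implementation is lcb_merit in pySOT.auxiliary_problems), built on a surrogate's predict and predict_std:

    def lcb(X, surrogate, kappa=2.0):
        # LCB(x) = E[x] - kappa * sqrt(V[x]); predict_std returns sqrt(V[x])
        mu = surrogate.predict(X).ravel()
        sigma = surrogate.predict_std(X).ravel()
        return mu - kappa * sigma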

When using parallelism, we constrain each new evaluation to be a distance dtol away from previous and pending evaluations to avoid that the same point is being evaluated multiple times. We use a default value of dtol = 1e-3 * norm(ub - lb), but note that this value has not been tuned carefully and may be far from optimal.

The optimization strategy terminates when the evaluation budget has been exceeded or when the predicted improvement of the best LCB point, min(fX) - min(LCB(x)), falls below some threshold, where the default threshold is 1e-6 * (max(fX) - min(fX)).

Parameters:
  • max_evals (int) – Evaluation budget
  • opt_prob (OptimizationProblem) – Optimization problem object
  • exp_design (ExperimentalDesign) – Experimental design object
  • surrogate (Surrogate) – Surrogate object
  • asynchronous (bool) – Whether or not to use asynchrony (True/False)
  • batch_size (int) – Size of the batch (use 1 for serial, ignored if async)
  • extra_points (numpy.array of size n x dim) – Extra points to add to the experimental design
  • extra_vals (numpy.array of size n x 1) – Values for extra_points (np.nan/np.inf if unknown)
  • reset_surrogate (bool) – Whether or not to reset surrogate model
  • kappa (float) – Constant in the LCB merit function
  • dtol (float) – Minimum distance between new and pending evaluations Default: 1e-3 * norm(ub - lb)
  • lcb_tol (float) – Terminate if min(fX) - min(LCB(x)) < lcb_tol Default: 1e-6 * (max(fX) - min(fX))
check_input()

Check the inputs to the optimization strategy.

generate_evals(num_pts)

Generate the next adaptive sample points.

class pySOT.strategy.RandomSampling(max_evals, opt_prob)

Random sampling strategy.

We generate and evaluate a fixed number of points using all resources. The optimization problem must implement OptimizationProblem and max_evals must be a positive integer.

Parameters:
  • max_evals (int) – Evaluation budget
  • opt_prob (OptimizationProblem) – Optimization problem
propose_action()

Propose an action based on outstanding points.

class pySOT.strategy.SRBFStrategy(max_evals, opt_prob, exp_design, surrogate, asynchronous=True, batch_size=None, extra_points=None, extra_vals=None, reset_surrogate=True, weights=None, num_cand=None)

Stochastic RBF (SRBF) optimization strategy.

This is an implementation of the SRBF strategy by Regis and Shoemaker:

Rommel G Regis and Christine A Shoemaker. A stochastic radial basis function method for the global optimization of expensive functions. INFORMS Journal on Computing, 19(4): 497–509, 2007.

Rommel G Regis and Christine A Shoemaker. Parallel stochastic global optimization using radial basis functions. INFORMS Journal on Computing, 21(3):411–426, 2009.

The main idea is to pick the new evaluations from a set of candidate points where each candidate point is generated as an N(0, sigma^2) distributed perturbation from the current best solution. The value of sigma is modified based on progress and follows the same logic as in many trust region methods; we increase sigma if we make a lot of progress (the surrogate is accurate) and decrease sigma when we aren’t able to make progress (the surrogate model is inaccurate). More details about how sigma is updated are given in the original papers.

After generating the candidate points we predict their objective function value and compute the minimum distance to previously evaluated points. Let the candidate points be denoted by C and let the function value predictions be s(x_i) and the distance values be d(x_i), both rescaled through a linear transformation to the interval [0,1]. This is done to put the values on the same scale. The next point selected for evaluation is the candidate point x that minimizes the weighted-distance merit function:

merit(x) := w * s(x) + (1 - w) * (1 - d(x))

where 0 <= w <= 1. That is, we want a small predicted function value and a large minimum distance from previously evaluated points. The weight w is commonly cycled between a few values to achieve both exploitation and exploration. When w is close to zero we do pure exploration, while w close to 1 corresponds to exploitation.
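
An illustrative sketch of this selection rule for a single weight (the documented implementation is weighted_distance_merit in pySOT.auxiliary_problems; unit_rescale is documented in pySOT.utils):

    import numpy as np
    from scipy.spatial.distance import cdist

    from pySOT.utils import unit_rescale

    def select_candidate(cand, X, surrogate, w):
        # merit(x) = w * s(x) + (1 - w) * (1 - d(x)) over the candidate set
        s = unit_rescale(surrogate.predict(cand).ravel())   # predictions -> [0, 1]
        d = unit_rescale(np.min(cdist(cand, X), axis=1))    # min distances -> [0, 1]
        merit = w * s + (1.0 - w) * (1.0 - d)
        return cand[np.argmin(merit)]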

This strategy takes two additional arguments compared to the base class:

  • weights – List of weights to cycle through when selecting candidate points. Default = [0.3, 0.5, 0.8, 0.95]
  • num_cand – Number of candidate points to use when generating new evaluations. Default = 100 * dim
Parameters:
  • max_evals (int) – Evaluation budget
  • opt_prob (OptimizationProblem) – Optimization problem object
  • exp_design (ExperimentalDesign) – Experimental design object
  • surrogate (Surrogate) – Surrogate object
  • asynchronous (bool) – Whether or not to use asynchrony (True/False)
  • batch_size (int) – Size of the batch (use 1 for serial, ignored if async)
  • extra_points (numpy.array of size n x dim) – Extra points to add to the experimental design
  • extra_vals (numpy.array of size n x 1) – Values for extra_points (np.nan/np.inf if unknown)
  • reset_surrogate (bool) – Whether or not to reset surrogate model
  • weights (list of np.array) – Weights for merit function, default = [0.3, 0.5, 0.8, 0.95]
  • num_cand (int) – Number of candidate points, default = 100*dim
adjust_step()

Adjust the sampling radius sigma.

After succtol successful steps, we double the sampling radius; after failtol failed steps, we halve the sampling radius.

check_input()

Check inputs.

generate_evals(num_pts)

Generate the next adaptive sample points.

get_weights(num_pts)

Generate the next weights.

on_adapt_completed(record)

Handle completed evaluation.

class pySOT.strategy.SurrogateBaseStrategy(max_evals, opt_prob, exp_design, surrogate, asynchronous=True, batch_size=None, extra_points=None, extra_vals=None, reset_surrogate=True)

Base class for surrogate optimization strategies.

adapt_proposal()

Propose a point from the batch_queue.

check_input()

Check the inputs to the optimization strategy.

generate_evals(num_pts)
init_proposal()

Propose a point from the initial experimental design.

log_completion(record)

Record a completed evaluation to the log.

Parameters:record (EvalRecord) – Record of the function evaluation
make_proposal(x)

Create proposal and update counters and budgets.

on_adapt_aborted(record)

Handle aborted feval from sampling phase.

on_adapt_accept(proposal)

Handle accepted proposal from sampling phase.

on_adapt_completed(record)

Handle completion of feval from sampling phase.

on_adapt_proposal(proposal)

Handle accept/reject of proposal from sampling phase.

on_adapt_reject(proposal)

Handle rejected proposal from sampling phase.

on_adapt_update(record)

Handle update of feval from sampling phase.

on_initial_aborted(record)

Handle aborted feval from initial design.

on_initial_accepted(proposal)

Handle proposal accept from initial design.

on_initial_completed(record)

Handle successful completion of feval from initial design.

on_initial_proposal(proposal)

Handle accept/reject of proposal from initial design.

on_initial_rejected(proposal)

Handle proposal rejection from initial design.

on_initial_update(record)

Handle update of feval from initial design.

propose_action()

Propose an action.

NB: We allow workers to continue to the adaptive phase if the initial queue is empty. This implies that we need enough points in the experimental design for us to construct a surrogate.

remove_pending(x)

Delete a pending point from self.Xpend.

resume()

Resume a terminated run.

sample_initial()

Generate and queue an initial experimental design.

save(fname)

Save the state of the strategy.

We do this in a 3-step procedure, sketched after this entry:
  1. Save to a temp file
  2. Copy the temp file to the save file
  3. Remove the temp file
Parameters:fname (string) – Filename
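
A minimal sketch of the atomic-save pattern described above (hypothetical standalone helper; the actual method lives on the strategy object):

    import os
    import shutil

    import dill

    def save_strategy(strategy, fname):
        temp = fname + ".tmp"
        with open(temp, "wb") as fh:    # 1. save to a temp file
            dill.dump(strategy, fh)
        shutil.copy2(temp, fname)       # 2. copy the temp file to the save file
        os.remove(temp)                 # 3. remove the temp file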

pySOT.surrogate module

Module:surrogate
Author:David Eriksson <dme65@cornell.edu>
class pySOT.surrogate.ConstantTail(dim)

Constant polynomial tail.

Constant polynomial in d dimensions, built from the basis \(\{ 1 \}\).

deriv(x)

Evaluate the derivative of the constant polynomial tail.

Parameters:x (numpy.array) – Point to evaluate, of size (1, dim) or (dim,)
Returns:A numpy.array of size dim_tail x dim
Return type:numpy.array
eval(X)

Evaluate the constant polynomial tail.

Parameters:X (numpy.array) – Points to evaluate, of size num_pts x dim
Returns:A numpy.array of size num_pts x dim_tail
Return type:numpy.array
class pySOT.surrogate.CubicKernel

Cubic RBF kernel

This is a class for the Cubic RBF kernel: \(\varphi(r) = r^3\) which is conditionally positive definite of order 2.

deriv(dists)

Evaluates the derivative of the Cubic kernel for a distance matrix.

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is \(3 \| x_i - x_j \|^2\)
Return type:numpy.array
eval(dists)

Evaluates the Cubic kernel for a distance matrix

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is \(\|x_i - x_j \|^3\)
Return type:numpy.array
class pySOT.surrogate.GPRegressor(dim, gp=None, n_restarts_optimizer=3)

Gaussian process (GP) regressor.

Wrapper around scikit-learn's GaussianProcessRegressor.

Parameters:
  • dim (int) – Number of dimensions
  • gp (object) – GaussianProcessRegressor model
  • n_restarts_optimizer (int) – Number of restarts in hyperparam fitting
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Points incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • model – GaussianProcessRegressor object
predict(xx)

Evaluate the GP regressor at the points xx.

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

TODO: Not implemented

predict_std(xx)

Predict standard deviation at points xx.

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Predicted standard deviation, of size num_pts x 1
Return type:numpy.ndarray
class pySOT.surrogate.Kernel

Base class for a radial kernel.

Variables:order – Order of the conditionally positive definite kernel
deriv(dists)

Evaluate derivatives of radial kernel wrt distance.

Parameters:dists (numpy.ndarray) – Array of size n x n with pairwise distances
Returns:Array of size n x n with kernel derivatives
Return type:numpy.ndarray
eval(dists)

Evaluate the radial kernel.

Parameters:dists (numpy.ndarray) – Array of size n x n with pairwise distances
Returns:Array of size n x n with kernel values
Return type:numpy.ndarray
class pySOT.surrogate.LinearKernel

Linear RBF kernel.

This is a basic class for the Linear RBF kernel: \(\varphi(r) = r\) which is conditionally positive definite of order 1.

deriv(dists)

Evaluate the derivative of the Linear kernel.

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is 1
Return type:numpy.array
eval(dists)

Evaluate the Linear kernel.

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is \(\|x_i - x_j \|\)
Return type:numpy.array
class pySOT.surrogate.LinearTail(dim)

Linear polynomial tail.

This is a standard linear polynomial in d dimensions, built from the basis \(\{1,x_1,x_2,\ldots,x_d\}\).

deriv(x)

Evaluate the derivative of the linear polynomial tail.

Parameters:x (numpy.array) – Point to evaluate, of size (1, dim) or (dim,)
Returns:A numpy.array of size dim_tail x dim
Return type:numpy.array
eval(X)

Evaluate the linear polynomial tail.

Parameters:X (numpy.array) – Points to evaluate, of size num_pts x dim
Returns:A numpy.array of size num_pts x dim_tail
Return type:numpy.array
class pySOT.surrogate.MARSInterpolant(dim)

Compute and evaluate a MARS interpolant.

MARS builds a model of the form

\[\hat{f}(x) = \sum_{i=1}^{k} c_i B_i(x).\]

The model is a weighted sum of basis functions \(B_i(x)\). Each basis function \(B_i(x)\) takes one of the following three forms:

  1. a constant 1.
  2. a hinge function of the form \(\max(0, x - const)\) or \(\max(0, const - x)\). MARS automatically selects variables and values of those variables for knots of the hinge functions.
  3. a product of two or more hinge functions. These basis functions can model interactions between two or more variables.
Parameters:

dim (int) – Number of dimensions

Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Point incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • model – Earth object
predict(xx)

Evaluate the MARS interpolant at the points xx

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

Evaluate the derivative of the MARS interpolant at points xx

Parameters:xx (numpy.array) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Derivative of the MARS interpolant at xx
Return type:numpy.array
class pySOT.surrogate.PolyRegressor(dim, degree=2)

Multi-variate polynomial regression with cross-terms

Parameters:
  • dim (int) – Number of dimensions
  • degree (int) – Polynomial degree
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Point incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • model – scikit-learn pipeline for polynomial regression
predict(xx)

Evaluate the polynomial regressor at the points xx

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

TODO: Not implemented

class pySOT.surrogate.RBFInterpolant(dim, kernel=None, tail=None, eta=1e-06)

Compute and evaluate RBF interpolant.

Manages an expansion of the form

\[s(x) = \sum_j c_j \phi(\|x-x_j\|) + \sum_j \lambda_j p_j(x)\]

where the functions \(p_j(x)\) are low-degree polynomials. The fitting equations are

\[\begin{split}\begin{bmatrix} \eta I & P^T \\ P & \Phi+\eta I \end{bmatrix} \begin{bmatrix} \lambda \\ c \end{bmatrix} = \begin{bmatrix} 0 \\ f \end{bmatrix}\end{split}\]

where \(P_{ij} = p_j(x_i)\) and \(\Phi_{ij}=\phi(\|x_i-x_j\|)\). The regularization parameter \(\eta\) allows us to avoid problems with potential poor conditioning of the system. Consider using the SurrogateUnitBox wrapper or manually scaling the domain to the unit hypercube to avoid issues with the domain scaling.

We add k new points to the RBFInterpolant in \(O(kn^2)\) flops by updating the LU factorization of the old RBF system. This is better than computing the RBF coefficients from scratch, which costs \(O(n^3)\) flops.
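
For illustration, a naive dense solve of the system above for a cubic kernel and linear tail (a sketch only; RBFInterpolant updates a factorization rather than re-solving from scratch):

    import numpy as np
    from scipy.spatial.distance import cdist

    def fit_rbf(X, f, eta=1e-6):
        # X: n x d evaluated points, f: function values of shape (n,)
        n, d = X.shape
        Phi = cdist(X, X) ** 3                    # cubic kernel: phi(r) = r^3
        P = np.hstack([np.ones((n, 1)), X])       # linear tail basis {1, x_1, ..., x_d}
        m = d + 1
        A = np.block([[eta * np.eye(m), P.T],
                      [P, Phi + eta * np.eye(n)]])
        rhs = np.concatenate([np.zeros(m), f])
        sol = np.linalg.solve(A, rhs)
        return sol[:m], sol[m:]                   # tail coefficients, kernel coefficients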

Parameters:
  • dim (int) – Number of dimensions
  • kernel (Kernel) – RBF kernel object
  • tail (Tail) – RBF polynomial tail object
  • eta (float) – Regularization parameter
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Point incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • kernel – RBF kernel
  • tail – RBF tail
  • eta – Regularization parameter
predict(xx)

Evaluate the RBF interpolant at the points xx

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

Evaluate the derivative of the RBF interpolant at points xx

Parameters:xx (numpy.array) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Derivative of the RBF interpolant at xx
Return type:numpy.array
reset()

Reset the RBF interpolant.

class pySOT.surrogate.Surrogate

Base class for a surrogate model.

Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Point incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
add_points(xx, fx)

Add new function evaluations.

This method SHOULD NOT trigger a new fit; it just updates X and fX and leaves the original surrogate object intact.

Parameters:
  • xx (numpy.ndarray) – Points to add
  • fx (numpy.array or float) – The function values at the points to add
predict(xx)

Evaluate the surrogate at points xx.

Parameters:xx (numpy.ndarray) – xx must be of size num_pts x dim or (dim, )
Returns:Surrogate predictions, of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

Evaluate derivative of interpolant at points xx.

Parameters:xx (numpy.ndarray) – xx must be of size num_pts x dim or (dim, )
Returns:Surrogate derivative predictions, of size num_pts x dim
Return type:numpy.ndarray
reset()

Reset the surrogate.

class pySOT.surrogate.SurrogateCapped(model, transformation=None)

Wrapper for transformation of function values.

This adapter takes an existing surrogate model and replaces it with a modified version where the function values are mapped through some transformation. A common transformation replaces all values above the median by the median, to reduce the influence of large function values (sketched below).
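
A small sketch of the median-capping transformation mentioned above (the wrapped RBF model and dimension are arbitrary):

    import numpy as np

    from pySOT.surrogate import (CubicKernel, LinearTail, RBFInterpolant,
                                 SurrogateCapped)

    def median_cap(fX):
        # Replace every value above the median by the median
        return np.minimum(fX, np.median(fX))

    model = SurrogateCapped(
        RBFInterpolant(dim=3, kernel=CubicKernel(), tail=LinearTail(3)),
        transformation=median_cap)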

Parameters:
  • model (object) – Original surrogate model (must implement Surrogate)
  • transformation (function) – Function that transforms the function values
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Points incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • model – Original surrogate model
  • transformation – Transformation function
add_points(xx, fx)

Add new function evaluations.

This method SHOULD NOT trigger a new fit; it just updates X and fX and leaves the original surrogate object intact.

Parameters:
  • xx (numpy.ndarray) – Points to add
  • fx (numpy.array or float) – The function values at the points to add
predict(xx)

Evaluate the surrogate model at the points xx

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(xx)

Evaluate the derivative of the surrogate model at points xx.

Parameters:xx (numpy.array) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Derivative of the surrogate model at xx
Return type:numpy.array
predict_std(xx)

Predict standard deviation at points xx.

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Predicted standard deviation, of size num_pts x 1
Return type:numpy.ndarray
reset()

Reset the surrogate.

class pySOT.surrogate.SurrogateUnitBox(model, lb, ub)

Unit box adapter for surrogate models.

This adapter takes an existing surrogate model and replaces it with a modified version where the domain is rescaled to the unit hypercube. This is useful for surrogate models that are sensitive to scaling, such as RBFs.

Parameters:
  • model (object) – Original surrogate model (must implement Surrogate)
  • lb (numpy.array) – Lower variable bounds, of size 1 x dim
  • ub (numpy.array) – Upper variable bounds, of size 1 x dim
Variables:
  • dim – Number of dimensions
  • num_pts – Number of points in surrogate model
  • X – Points incorporated in surrogate model (num_pts x dim)
  • fX – Function values in surrogate model (num_pts x 1)
  • updated – True if model is up-to-date (no refit needed)
  • model – Original surrogate model
  • lb – Lower variable bounds
  • ub – Upper variable bounds
add_points(xx, fx)

Add new function evaluations.

This method SHOULD NOT trigger a new fit; it just updates X and fX and leaves the original surrogate object intact.

Parameters:
  • xx (numpy.ndarray) – Points to add
  • fx (numpy.array or float) – The function values at the points to add
predict(xx)

Evaluate the surrogate model at the points xx

Parameters:xx (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Prediction of size num_pts x 1
Return type:numpy.ndarray
predict_deriv(x)

Evaluate the derivative of the surrogate model at points x.

Remember the chain rule:
f’(x) = (d/dx) g((x-a)/(b-a)) = g’((x-a)/(b-a)) * 1/(b-a)
Parameters:x (numpy.array) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Derivative of the surrogate model at x
Return type:numpy.array
predict_std(x)

Predict standard deviation at points x.

Parameters:x (numpy.ndarray) – Prediction points, must be of size num_pts x dim or (dim, )
Returns:Predicted standard deviation, of size num_pts x 1
Return type:numpy.ndarray
reset()

Reset the surrogate model.

class pySOT.surrogate.TPSKernel

Thin-plate spline RBF kernel.

This is a basic class for the TPS RBF kernel: \(\varphi(r) = r^2 \log(r)\) which is conditionally positive definite of order 2.

deriv(dists)

Evaluate the derivative of the TPS kernel.

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is \(\|x_i - x_j \|(1 + 2 \log (\|x_i - x_j \|) )\)
Return type:numpy.array
eval(dists)

Evaluate the TPS kernel.

Parameters:dists (numpy.array) – Distance input matrix
Returns:a matrix where element \((i,j)\) is \(\|x_i - x_j \|^2 \log (\|x_i - x_j \|)\)
Return type:numpy.array
class pySOT.surrogate.Tail

Base class for a polynomial tail.

Variables:
  • dim – Dimensionality of the original space
  • dim_tail – Dimensionality of the polynomial space (number of basis functions)

deriv(x)

Evaluate derivative of the polynomial tail.

Parameters:x (numpy.ndarray) – Array of size 1 x dim or (dim,)
Returns:Array of size dim_tail x dim
Return type:numpy.ndarray
eval(X)

Evaluate the polynomial tail.

Parameters:X (numpy.ndarray) – Array of size num_pts x dim
Returns:Array of size num_pts x dim_tail
Return type:numpy.ndarray

pySOT.utils module

Module:utils
Author:David Eriksson <dme65@cornell.edu>
class pySOT.utils.GeneticAlgorithm(function, dim, lb, ub, int_var=None, pop_size=100, num_gen=100, start='SLHD')

Genetic algorithm.

Implementation of the real-valued genetic algorithm. The mutations are normally distributed perturbations, the selection mechanism is tournament selection, and the crossover operation is the standard linear combination taken at a randomly generated cutting point.

The total number of evaluations is pop_size x num_gen.

Parameters:
  • function (Object) – Function that can be used to evaluate the entire population. It needs to take an input of size pop_size x dim and return a numpy.array of size pop_size x 1
  • dim (int) – Number of dimensions
  • lb (numpy.array) – Lower variable bounds, of length dim
  • ub (numpy.array) – Upper variable bounds, of length dim
  • int_var (list) – List of indices with the integer valued variables (e.g., [0, 1, 5])
  • pop_size (int) – Population size
  • num_gen (int) – Number of generations
  • start (string) – Method for generating the initial population
Variables:
  • nvariables – Number of variables (dimensions)
  • nindividuals – Population size
  • lower_boundary – Lower bounds for the optimization problem
  • upper_boundary – Upper bounds for the optimization problem
  • integer_variables – List of variables that are integer valued
  • start – Method for generating the initial population
  • sigma – Perturbation radius. Each perturbation is N(0, sigma)
  • p_mutation – Mutation probability (1/dim)
  • tournament_size – Size of the tournament (5)
  • p_cross – Cross-over probability (0.9)
  • ngenerations – Number of generations
  • function – Object that can be used to evaluate the objective function
optimize()

Run the genetic algorithm.

Returns:The best individual and its function value
Return type:numpy.array, float
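
A usage sketch on a vectorized sphere objective (bounds and sizes are arbitrary; per the parameter description above, the function is applied to the whole population at once):

    import numpy as np

    from pySOT.utils import GeneticAlgorithm

    def sphere_pop(pop):
        # pop has shape pop_size x dim; return one objective value per individual
        return np.sum(pop ** 2, axis=1)

    ga = GeneticAlgorithm(sphere_pop, dim=5,
                          lb=-5 * np.ones(5), ub=5 * np.ones(5),
                          pop_size=100, num_gen=100)
    x_best, f_best = ga.optimize()
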
pySOT.utils.check_opt_prob(obj)

Check an implementation of the optimization problem.

This method checks everything, but can’t make sure that the objective function returns values of the correct type since this would involve actually evaluating the objective function, which isn’t feasible when the evaluations are expensive. If some test fails, an exception is raised.

Parameters:obj (Object) – Optimization problem

Raises:An appropriate error if the object doesn’t follow the pySOT standard

pySOT.utils.from_unit_box(x, lb, ub)

Maps a set of points from the unit box to the original domain.

Parameters:
  • x (numpy.array) – Points to be mapped from the unit box, of size npts x dim
  • lb (numpy.array) – Lower bounds, of size 1 x dim
  • ub (numpy.array) – Upper bounds, of size 1 x dim
Returns:

Points mapped to the original domain

Return type:

numpy.array

pySOT.utils.progress_plot(controller, title='', interactive=False)

Makes a progress plot from a POAP controller.

This method requires matplotlib and will terminate if matplotlib.pyplot is unavailable.

Parameters:
  • controller (Object) – POAP controller object
  • title (string) – Title of the plot
  • interactive (bool) – True if the plot should be interactive
pySOT.utils.round_vars(x, int_var, lb, ub)

Round integer variables to closest integer in the domain.

Parameters:
  • x (numpy.array) – Set of points, of size npts x dim
  • int_var (numpy.array) – Set of indices of integer variables
  • lb (numpy.array) – Lower bounds, of size 1 x dim
  • ub (numpy.array) – Upper bounds, of size 1 x dim
Returns:

The set of points with the integer variables rounded to the closest integer in the domain

Return type:

numpy.array

pySOT.utils.to_unit_box(x, lb, ub)

Maps a set of points to the unit box.

Parameters:
  • x (numpy.array) – Points to be mapped to the unit box, of size npts x dim
  • lb (numpy.array) – Lower bounds, of size 1 x dim
  • ub (numpy.array) – Upper bounds, of size 1 x dim
Returns:

Points mapped to the unit box

Return type:

numpy.array
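
A tiny round-trip sketch pairing to_unit_box with from_unit_box (the values are arbitrary):

    import numpy as np

    from pySOT.utils import from_unit_box, to_unit_box

    lb, ub = np.zeros(2), np.array([10.0, 4.0])
    x = np.array([[2.0, 1.0], [7.5, 3.0]])
    u = to_unit_box(x, lb, ub)                     # [[0.20, 0.25], [0.75, 0.75]]
    assert np.allclose(from_unit_box(u, lb, ub), x)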

pySOT.utils.unit_rescale(x)

Shift and rescale elements of a vector to the unit interval.

Parameters:x (numpy.ndarray) – array that should be rescaled to the unit interval
Returns:array scaled to the unit interval
Return type:numpy.ndarray