FASO

class viabel.FASO(sgo, *, mcse_threshold=0.1, W_min=200, ESS_min=None, k_check=None)[source]

Fixed-learning-rate stochastic optimization meta-algorithm (FASO)

This algorithm runs stochastic optimization with a fixed learning rate using a user-specified optimization method. It detects convergence at that fixed learning rate using the potential scale reduction factor (R̂) combined with a Monte Carlo standard error cutoff.

For more details, see https://arxiv.org/pdf/2203.15945.pdf

Parameters:
sgo : StochasticGradientOptimizer object

Optimization method to use.

mcse_threshold : float, optional

Monte Carlo standard error threshold for convergence. The default is 0.1.

W_min : int, optional

Minimum window size for checking convergence. The default is 200.

ESS_min : int, optional

Minimum effective sample size (ESS) for computing the iterate average. The default is W_min / 8.

k_check : int, optional

Frequency with which to check convergence. The default is W_min.
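As an illustration, a minimal construction sketch. It assumes viabel.optimization exposes an RMSProp subclass of StochasticGradientOptimizer; that import path and the learning_rate argument are assumptions, not guaranteed API.

    from viabel import FASO
    from viabel.optimization import RMSProp  # assumed StochasticGradientOptimizer subclass

    # Assumed constructor argument: the step size that FASO holds fixed while
    # it monitors the iterates for convergence.
    sgo = RMSProp(learning_rate=0.01)

    # FASO declares convergence at this fixed learning rate once the R-hat and
    # Monte Carlo standard error criteria are satisfied over a window of at
    # least W_min iterates.
    faso = FASO(sgo, mcse_threshold=0.1, W_min=200)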

Methods

optimize(n_iters, objective, init_param, **kwargs)


__init__(sgo, *, mcse_threshold=0.1, W_min=200, ESS_min=None, k_check=None)[source]
optimize(n_iters, objective, init_param, **kwargs)[source]
Parameters:
n_iters : int

Number of iterations of the optimization.

objective : function

Function for computing the objective and its gradient.

init_param : numpy.ndarray, shape (var_param_dim,)

Initial values of the variational parameters.

**kwargs

Keyword arguments to pass to the optimization (for example, smoothed_prop).

Returns:
results : dict

Contains at least opt_param, the estimate of the optimal variational parameter.
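A hedged end-to-end sketch of optimize, continuing the construction sketch above. The toy objective assumes the convention that objective(var_param) returns the objective value together with its gradient; the quadratic target is purely illustrative.

    import numpy as np

    def objective(var_param):
        # Toy stochastic objective: a noisy quadratic centered at 3.0.
        # Returns (value, gradient), per the assumed convention.
        value = 0.5 * np.sum((var_param - 3.0) ** 2)
        grad = (var_param - 3.0) + 0.1 * np.random.randn(*var_param.shape)
        return value, grad

    init_param = np.zeros(2)  # shape (var_param_dim,)
    results = faso.optimize(5000, objective, init_param)
    opt_param = results['opt_param']  # estimate of the optimal variational parameter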