RAABBVI

class viabel.RAABBVI(sgo, *, rho=0.5, iters0=1000, accuracy_threshold=0.1, inefficiency_threshold=1.0, init_rmsprop=False, **kwargs)[source]

A robust, automated, and accurate BBVI optimizer (RAABBVI)

This algorithm combines the FASO algorithm with a termination rule that determines when optimization should stop. The rule is based on a trade-off between the improvement in accuracy of the variational approximation that would result from reducing the current learning rate by an adaptation factor \(\rho \in (0,1)\), and the additional runtime required to reach that improved accuracy. If the accuracy improvement is large relative to the runtime, the algorithm adaptively decreases the learning rate; otherwise, it terminates.

For more details, see https://arxiv.org/pdf/2203.15945.pdf
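The adaptive schedule can be sketched as an illustrative toy (this is NOT viabel's actual implementation; the hypothetical `worth_reducing` callback stands in for the paper's accuracy-versus-runtime criterion):

```python
# Illustrative toy of RAABBVI's adaptive schedule: starting from lr0, the
# learning rate is multiplied by rho whenever the supplied criterion judges
# a reduction worthwhile, and the loop terminates otherwise.
def adaptive_decay(lr0, rho, worth_reducing, max_stages=10):
    """Return the sequence of learning rates visited."""
    lrs = [lr0]
    for _ in range(max_stages):
        if not worth_reducing(lrs[-1]):
            break
        lrs.append(lrs[-1] * rho)
    return lrs

# Example: keep halving while the learning rate exceeds an accuracy floor.
print(adaptive_decay(0.1, 0.5, lambda lr: lr > 0.01))
# -> [0.1, 0.05, 0.025, 0.0125, 0.00625]
```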

Parameters:
sgo : StochasticGradientOptimizer object

Optimizer to use for computing descent direction.

rho : float, optional

Learning rate reduction factor. The default is 0.5.

iters0 : int, optional

Number of iterations to run after each learning rate decrease. The default is 1000.

accuracy_threshold : float, optional

Absolute SKL accuracy threshold. The default is 0.1.

inefficiency_threshold : float, optional

Threshold for the inefficiency index. The default is 1.0.

init_rmsprop : bool, optional

Indicates whether to use the RMSProp optimization method for the initial learning rate. The default is False.

**kwargs:

Keyword arguments passed to FASO.

Methods

convg_iteration_trend_detection(slope)

Detect the trend of the relationship between the learning rate and the number of iterations needed to reach convergence.

optimize(K_max, objective, init_param)

Run the optimization.

weighted_linear_regression(model, y, x[, s, ...])

Weighted regression with a weighted likelihood term.

wls(x, y[, s, a])

Weighted least squares.

__init__(sgo, *, rho=0.5, iters0=1000, accuracy_threshold=0.1, inefficiency_threshold=1.0, init_rmsprop=False, **kwargs)[source]
convg_iteration_trend_detection(slope)[source]

Detect the trend of the relationship between the learning rate and the number of iterations needed to reach convergence.

Parameters:
slope : float

Slope of the fitted regression model.

Returns:
bool

True if the relationship is negative, False otherwise.

optimize(K_max, objective, init_param)[source]
Parameters:
K_max : int

Number of iterations of the optimization

objective : function

Function for constructing the objective and gradient function

init_param : numpy.ndarray, shape (var_param_dim,)

Initial values of the variational parameters

weighted_linear_regression(model, y, x, s=9, a=0.25, n_chains=4)[source]

Weighted regression with a weighted likelihood term.

Parameters:
model : pystan model

Pystan model to conduct the sampling

y : numpy.ndarray

Response variable

x : numpy.ndarray

Predictor variable

s : int, optional

Observation weight parameter. The default is 9.

a : float, optional

Power of the weight parameter. The default is 0.25.

n_chains : int, optional

Number of sampling chains. The default is 4.

Returns:
kappa : float

Power

c : float

Constant

fit : Pystan object

Pystan fit object if results = True
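viabel's method samples a Bayesian weighted regression with a PyStan model; as a rough frequentist stand-in, the power `kappa` and constant `c` of a relationship \(y \approx c x^{\kappa}\) can be recovered by ordinary least squares in log-log space. The sketch below (all names and data are illustrative assumptions) only shows what the two returned quantities mean:

```python
import numpy as np

# Frequentist stand-in for the quantities this method estimates: fitting
# y ~ c * x**kappa by ordinary least squares in log-log space. (viabel's
# actual method samples a Bayesian weighted regression with PyStan.)
def fit_power_law(x, y):
    kappa, log_c = np.polyfit(np.log(x), np.log(y), 1)  # slope, intercept
    return kappa, np.exp(log_c)

x = np.array([0.5, 0.25, 0.125, 0.0625])  # e.g. decreasing learning rates
y = 3.0 * x ** -1.2                       # synthetic iterations-to-converge
kappa, c = fit_power_law(x, y)
print(round(kappa, 6), round(c, 6))  # -> -1.2 3.0
```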

wls(x, y, s=9, a=0.25)[source]

Weighted least squares.

Parameters:
x : numpy.ndarray

Predictor variable

y : numpy.ndarray

Response variable

s : int, optional

Observation weight parameter. The default is 9.

a : float, optional

Power of the weight parameter. The default is 0.25.

Returns:
b0 : float

Intercept

b1 : float

Slope
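The underlying WLS computation can be sketched in NumPy. The weights `w` are left as an explicit argument here, since how viabel's `wls` constructs them from `s` and `a` follows the RAABBVI paper and is not reproduced:

```python
import numpy as np

# Weighted least squares for the simple linear model y = b0 + b1 * x,
# minimizing sum_i w_i * (y_i - b0 - b1 * x_i)**2. The weights w are an
# explicit input in this sketch.
def weighted_least_squares(x, y, w):
    W = np.sum(w)
    xbar = np.sum(w * x) / W   # weighted mean of x
    ybar = np.sum(w * y) / W   # weighted mean of y
    b1 = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    b0 = ybar - b1 * xbar
    return b0, b1

# With uniform weights and exactly linear data, the true line is recovered.
x = np.array([1.0, 2.0, 3.0, 4.0])
b0, b1 = weighted_least_squares(x, 2.0 + 0.5 * x, w=np.ones_like(x))
print(b0, b1)  # -> 2.0 0.5
```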