StochasticGradientOptimizer

class viabel.StochasticGradientOptimizer(learning_rate, *, weight_decay=0, iterate_avg_prop=0.2, diagnostics=False)

Stochastic gradient descent.

Methods

descent_direction(grad)

Compute descent direction for optimization.

optimize(n_iters, objective, init_param[, ...])

Run the optimization and return a results dictionary.

reset_state()

Reset the internal state of the optimizer.

__init__(learning_rate, *, weight_decay=0, iterate_avg_prop=0.2, diagnostics=False)
Parameters:
learning_rate : float

Tuning parameter that determines the step size.

weight_decay : float, optional

L2 regularization weight. The default is 0.

iterate_avg_prop : float, optional

Proportion of iterates to use for computing the iterate average. None means no iterate averaging. The default is 0.2.

diagnostics : bool, optional

Record diagnostic information if True. The default is False.
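
For concreteness, a minimal construction sketch (the hyperparameter values are arbitrary illustrations, not tuned recommendations):

    from viabel import StochasticGradientOptimizer

    opt = StochasticGradientOptimizer(
        0.01,                  # learning_rate: step size
        weight_decay=1e-4,     # L2 penalty on the variational parameters
        iterate_avg_prop=0.2,  # average the final 20% of iterates
        diagnostics=True,      # record per-iteration diagnostics
    )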

descent_direction(grad)

Compute descent direction for optimization.

Default implementation returns grad.

Parameters:
grad : numpy.ndarray, shape (var_param_dim,)

(Stochastic) gradient of the objective function.

Returns:
descent_dir : numpy.ndarray, shape (var_param_dim,)

Descent direction of the optimization algorithm.
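
Subclasses can override this method to change the update direction. A hypothetical sketch (PreconditionedSGD and its precond argument are illustrative names, not part of viabel):

    import numpy as np

    from viabel import StochasticGradientOptimizer

    class PreconditionedSGD(StochasticGradientOptimizer):
        """Hypothetical subclass: elementwise diagonal preconditioning."""

        def __init__(self, learning_rate, precond, **kwargs):
            super().__init__(learning_rate, **kwargs)
            self._precond = np.asarray(precond)

        def descent_direction(self, grad):
            # Scale each gradient coordinate instead of returning grad unchanged
            return self._precond * grad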

optimize(n_iters, objective, init_param, init_hamflow_model_param=None, init_hamflow_rho_param=None)
Parameters:
n_iters : int

Number of iterations of the optimization.

objective : function

Function that computes the objective value and its (stochastic) gradient.

init_param : numpy.ndarray, shape (var_param_dim,)

Initial values of the variational parameters.

**kwargs

Additional keyword arguments to pass to the optimizer (for example, smoothed_prop).

Returns:
results : dict

Must contain at least opt_param, the estimate of the optimal variational parameter.
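
A toy end-to-end sketch. It assumes, as the objective description above suggests, that objective(var_param) returns an (objective value, gradient) pair; the deterministic quadratic here is a stand-in for a real variational objective:

    import numpy as np

    from viabel import StochasticGradientOptimizer

    def toy_objective(var_param):
        # Quadratic stand-in: objective value and gradient at var_param
        value = 0.5 * np.sum(var_param ** 2)
        grad = var_param
        return value, grad

    opt = StochasticGradientOptimizer(learning_rate=0.05)
    results = opt.optimize(1000, toy_objective, np.ones(4))
    print(results['opt_param'])  # estimate of the optimal variational parameter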

reset_state()

Reset the internal state of the optimizer.
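
This is useful when reusing a single optimizer instance for several independent runs. A hypothetical continuation of the sketch above, assuming internal state (such as iterate history) should be cleared between runs:

    opt.reset_state()
    results2 = opt.optimize(1000, toy_objective, np.zeros(4))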