RMSProp
- class viabel.RMSProp(learning_rate, *, weight_decay=0, iterate_avg_prop=0.2, beta=0.9, jitter=1e-08, diagnostics=False)
RMSProp optimization method (Tieleman and Hinton, 2012)
Tracks the exponential moving average of the squared gradient:
\[\nu^{(k+1)} = \beta \nu^{(k)} + (1-\beta)\, \hat{g}^{(k)} \cdot \hat{g}^{(k)}\]
and uses \(\nu^{(k)}\) to rescale the current stochastic gradient:
\[\hat{g}^{(k+1)}/\sqrt{\nu^{(k)}}.\]
A minimal NumPy sketch of this update appears after the parameter list below.
- Parameters:
- beta : float, optional
Squared-gradient moving-average hyperparameter. The default is 0.9.
- jitter : float, optional
Small value used for numerical stability. The default is 1e-8.
- Returns:
- descent_dir : numpy.ndarray, shape (var_param_dim,)
Descent direction of the optimization algorithm.
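For concreteness, here is a minimal, self-contained sketch of the update above in plain NumPy. It illustrates the equations rather than reproducing viabel's implementation; the function name `rmsprop_descent_direction` and the placement of jitter inside the square root are assumptions.

```python
import numpy as np

def rmsprop_descent_direction(grad, nu, beta=0.9, jitter=1e-8):
    # nu^{(k+1)} = beta * nu^{(k)} + (1 - beta) * g^{(k)} * g^{(k)}
    nu_next = beta * nu + (1 - beta) * grad * grad
    # Rescale the gradient by the root of the moving average; jitter
    # guards against division by zero (its exact placement is assumed).
    descent_dir = grad / np.sqrt(nu_next + jitter)
    return descent_dir, nu_next

# One step on a toy gradient, starting the moving average at zero.
grad = np.array([0.5, -2.0])
direction, nu = rmsprop_descent_direction(grad, np.zeros_like(grad))
```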
Methods
- descent_direction(grad)
Compute descent direction for optimization.
- optimize(n_iters, objective, init_param[, ...])
Run the optimizer, resetting \(\nu\), the exponential moving average of the squared gradient.
- __init__(learning_rate, *, weight_decay=0, iterate_avg_prop=0.2, beta=0.9, jitter=1e-08, diagnostics=False)
- Parameters:
- learning_rate : float
Tuning parameter that determines the step size.
- weight_decay : float
L2 regularization weight.
- iterate_avg_prop : float
Proportion of iterates to use for computing the iterate average. None means no iterate averaging. The default is 0.2.
- diagnostics : bool, optional
Record diagnostic information if True. The default is False.
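To show how these constructor parameters interact, here is a self-contained sketch of a full RMSProp run in plain NumPy. It is not viabel's implementation: the function name `rmsprop_minimize`, the placement of weight_decay as an additive L2 gradient term, and averaging exactly the final iterate_avg_prop fraction of iterates are all assumptions for illustration.

```python
import numpy as np

def rmsprop_minimize(grad_fn, init_param, n_iters, learning_rate,
                     weight_decay=0.0, iterate_avg_prop=0.2,
                     beta=0.9, jitter=1e-8):
    param = np.asarray(init_param, dtype=float).copy()
    nu = np.zeros_like(param)
    history = []
    for _ in range(n_iters):
        # weight_decay enters as an additive L2 term (assumed placement).
        grad = grad_fn(param) + weight_decay * param
        nu = beta * nu + (1 - beta) * grad * grad
        param = param - learning_rate * grad / np.sqrt(nu + jitter)
        history.append(param.copy())
    # Average the final `iterate_avg_prop` fraction of iterates.
    n_avg = max(1, int(iterate_avg_prop * n_iters))
    return np.mean(history[-n_avg:], axis=0)

# Smoke test: minimize f(x) = ||x||^2, whose gradient is 2x.
print(rmsprop_minimize(lambda x: 2 * x, [5.0, -3.0],
                       n_iters=2000, learning_rate=0.05))
```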
- descent_direction(grad)
Compute descent direction for optimization.
The default implementation returns grad unchanged; RMSProp overrides it to rescale the gradient as described above.
- Parameters:
- grad : numpy.ndarray, shape (var_param_dim,)
(Stochastic) gradient of the objective function.
- Returns:
- descent_dir : numpy.ndarray, shape (var_param_dim,)
Descent direction of the optimization algorithm.
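As a standalone numeric illustration of what this rescaling does (assuming \(\nu\) starts at zero, which is an assumption about viabel's initialization), a single RMSProp step puts coordinates with very different gradient magnitudes on the same scale, unlike the default implementation, which returns grad unchanged:

```python
import numpy as np

beta, jitter = 0.9, 1e-8
grad = np.array([10.0, 0.1])   # coordinates with very different scales

# First moving-average update, assuming nu is initialized to zero.
nu = beta * 0.0 + (1 - beta) * grad * grad
descent_dir = grad / np.sqrt(nu + jitter)

print(descent_dir)  # ~[3.162, 3.162]: both coordinates now on the same scale
```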