API

Convenience Methods

bbvi(dimension, *[, n_iters, ...])

Fit a model using black-box variational inference.

vi_diagnostics(var_param, *[, objective, ...])

Check variational inference diagnostics.
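
For orientation, a minimal usage sketch follows. Only the positional arguments match the signatures above; the keyword names and the structure of the returned results are assumptions to verify against the full documentation:

    import numpy as np
    from viabel import bbvi, vi_diagnostics

    # Toy target: an unnormalized standard-normal log density in 2 dimensions.
    def log_density(x):
        return -0.5 * np.sum(x**2, axis=-1)

    # Assumption: bbvi accepts the target via a log_density keyword and
    # returns a dict holding the fitted variational parameters and the
    # objective that was optimized.
    results = bbvi(2, log_density=log_density, n_iters=5000)

    # Assumption: the result keys 'opt_param' and 'objective' exist.
    diagnostics = vi_diagnostics(results['opt_param'],
                                 objective=results['objective'])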

Approximation Families

ApproximationFamily(dim, var_param_dim, ...)

An abstract class for a variational approximation family.

MFGaussian(dim[, seed])

A mean-field Gaussian approximation family.

MFStudentT(dim, df[, seed])

A mean-field Student's t approximation family.

MultivariateT(dim, df[, seed])

A full-rank multivariate t approximation family.

NeuralNet(layers_shapes[, nonlinearity, ...])

A feed-forward neural network (a building block for flow-based families).

NVPFlow(layers_t, layers_s, mask, prior, ...)

A real NVP (non-volume-preserving) normalizing flow approximation family.
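
A short construction sketch, assuming (per the base-class signature above) that each family exposes a variational-parameter dimension and a sampling method; the attribute and method names used here are illustrative:

    import numpy as np
    from viabel import MFGaussian, MFStudentT

    dim = 2
    gaussian = MFGaussian(dim)
    student_t = MFStudentT(dim, 40)  # 40 degrees of freedom

    # Assumption: var_param_dim gives the length of the variational
    # parameter vector, and sample(var_param, n_samples) draws from the
    # approximation at that parameter.
    var_param = np.zeros(student_t.var_param_dim)
    draws = student_t.sample(var_param, 100)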

Models

Model(log_density)

Base class for representing a model.

StanModel(fit)

Class that encapsulates a PyStan model.
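
A minimal sketch of wrapping a hand-written log density in a Model; the assumption that the resulting model is callable on parameter values is illustrative:

    import numpy as np
    from viabel import Model

    # Unnormalized standard-normal log density; x has shape (..., dim).
    def log_density(x):
        return -0.5 * np.sum(x**2, axis=-1)

    model = Model(log_density)

    # Assumption: the model is callable and evaluates the wrapped log
    # density at a batch of parameter values.
    values = model(np.zeros((5, 2)))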

Variational Objectives

VariationalObjective(approx, model)

A class representing a variational objective to minimize.

ExclusiveKL(approx, model, num_mc_samples[, ...])

Exclusive Kullback-Leibler divergence.

DISInclusiveKL(approx, model, ...[, ...])

Inclusive Kullback-Leibler divergence using Distilled Importance Sampling.

AlphaDivergence(approx, model, ...)

Log of the alpha-divergence.
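
A sketch of pairing an approximation with a model through an objective. The positional arguments follow the signatures above; the assumption that the objective is callable, returning a value and a gradient, is illustrative:

    import numpy as np
    from viabel import ExclusiveKL, MFGaussian, Model

    def log_density(x):
        return -0.5 * np.sum(x**2, axis=-1)

    dim = 2
    approx = MFGaussian(dim)
    model = Model(log_density)

    # Exclusive KL (the standard ELBO-based objective), estimated with
    # 10 Monte Carlo samples per evaluation.
    objective = ExclusiveKL(approx, model, 10)

    # Assumption: calling the objective at a variational parameter returns
    # the (stochastic) objective value and its gradient.
    value, grad = objective(np.zeros(approx.var_param_dim))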

Diagnostics

all_diagnostics(log_weights, *[, samples, ...])

Compute all VI diagnostics.

divergence_bound(log_weights, *[, alpha, ...])

Compute a bound on the alpha-divergence.

error_bounds(*[, W1, W2, q_var, p_var])

Compute error bounds on posterior mean and variance estimates from Wasserstein distances.

wasserstein_bounds(d2, *[, samples, ...])

Compute all Wasserstein distance bounds.
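
The diagnostics operate on importance log-weights, i.e., the log target density minus the log approximation density evaluated at draws from the approximation. A toy sketch, assuming all_diagnostics returns a dict of named diagnostics:

    import numpy as np
    from viabel import all_diagnostics

    # Toy log-weights standing in for log p(x) - log q(x) at draws x ~ q.
    rng = np.random.default_rng(0)
    log_weights = rng.standard_normal(1000)

    # Assumption: the return value is a dict mapping diagnostic names
    # (e.g., divergence and Wasserstein bounds) to their values.
    diagnostics = all_diagnostics(log_weights)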

Optimization

Optimizer()

An abstract base class for optimizers.

StochasticGradientOptimizer(learning_rate, *)

Stochastic gradient descent.

Adam(learning_rate, *[, beta1, beta2, ...])

Adam optimization method (Kingma and Ba, 2015).

AveragedAdam(learning_rate, *[, beta1, ...])

Averaged Adam optimization method (Mukkamala and Hein, 2017, §4).

RMSProp(learning_rate, *[, weight_decay, ...])

RMSProp optimization method (Hinton and Tieleman, 2012).

AveragedRMSProp(learning_rate, *[, jitter, ...])

Averaged RMSProp optimization method (Mukkamala and Hein, 2017, §4).

Adagrad(learning_rate, *[, weight_decay, ...])

Adagrad optimization method (Duchi et al., 2011).

WindowedAdagrad(learning_rate, *[, ...])

Windowed Adagrad optimization method (the default optimizer in PyMC3).

FASO(sgo, *[, mcse_threshold, W_min, ...])

Fixed-learning-rate stochastic optimization meta-algorithm (FASO).

RAABBVI(sgo, *[, rho, iters0, ...])

A robust, automated, and accurate BBVI optimizer (RAABBVI).
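
A construction sketch: a base stochastic gradient optimizer wrapped in the RAABBVI meta-algorithm. The constructors follow the signatures above; the optimization method named in the final comment is an assumption:

    from viabel import Adam, RAABBVI

    # Base stochastic gradient optimizer with learning rate 0.01.
    sgo = Adam(0.01)

    # RAABBVI adapts the learning rate and decides when to terminate, so
    # the base optimizer's fixed learning rate is a starting point rather
    # than a tuning burden.
    optimizer = RAABBVI(sgo)

    # Assumption (illustrative): optimization is driven by a method such as
    #   optimizer.optimize(n_iters, objective, init_param)
    # taking a variational objective and an initial parameter vector.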