# Models

## Bayesian Neural Network

class pysgmcmc.models.bayesian_neural_network.BayesianNeuralNetwork(session, sampling_method=<Sampler.SGHMC: 'SGHMC'>, get_net=<function get_default_net>, batch_generator=<function generate_batches>, batch_size=20, stepsize_schedule=<pysgmcmc.stepsize_schedules.ConstantStepsizeSchedule object>, n_nets=100, n_iters=50000, burn_in_steps=1000, sample_steps=100, normalize_input=True, normalize_output=True, seed=None, dtype=tf.float64, **sampler_kwargs)[source]
__init__(session, sampling_method=<Sampler.SGHMC: 'SGHMC'>, get_net=<function get_default_net>, batch_generator=<function generate_batches>, batch_size=20, stepsize_schedule=<pysgmcmc.stepsize_schedules.ConstantStepsizeSchedule object>, n_nets=100, n_iters=50000, burn_in_steps=1000, sample_steps=100, normalize_input=True, normalize_output=True, seed=None, dtype=tf.float64, **sampler_kwargs)[source]

Bayesian Neural Networks use Bayesian methods to estimate the posterior distribution of a neural network’s weights. This makes it possible to predict uncertainties at test points as well, which makes Bayesian Neural Networks suitable for Bayesian optimization.

This module uses stochastic gradient MCMC methods to sample from the posterior distribution.

See [1] for more details.

[1] J. T. Springenberg, A. Klein, S. Falkner, F. Hutter
Bayesian Optimization with Robust Bayesian Neural Networks. In Advances in Neural Information Processing Systems 29 (2016).
Parameters:

- **session** (tensorflow.Session) – A tensorflow.Session object used to delegate computations performed in this network over to tensorflow.
- **sampling_method** (Sampler, optional) – Method used to sample networks for this BNN. Defaults to Sampler.SGHMC.
- **n_nets** (int, optional) – Number of nets to sample during training (and to use for prediction). Defaults to 100.
- **stepsize_schedule** (pysgmcmc.stepsize_schedules.StepsizeSchedule) – Iterator class that produces a stream of stepsize values to use during sampling. See also: pysgmcmc.stepsize_schedules.
- **mdecay** (float, optional) – Momentum decay per time step (parameter for SGHMCSampler). Defaults to 0.05.
- **n_iters** (int, optional) – Total number of iterations of the sampler to perform. Defaults to 50000.
- **batch_size** (int, optional) – Number of datapoints to include in each minibatch. Defaults to 20 datapoints per minibatch.
- **burn_in_steps** (int, optional) – Number of burn-in steps to perform. Defaults to 1000.
- **sample_steps** (int, optional) – Number of sample steps to perform. Defaults to 100.
- **normalize_input** (bool, optional) – Specifies whether or not input data should be normalized. Defaults to True.
- **normalize_output** (bool, optional) – Specifies whether or not outputs should be normalized. Defaults to True.
- **get_net** (callable, optional) – Callable that returns a network specification. Expected inputs are a tensorflow.Placeholder object that serves as feedable input to the network and an integer random seed. Expected return value is the network’s final output. Defaults to get_default_net.
- **batch_generator** (callable, optional) – Generator callable with a signature like generate_batches that yields feedable dicts of minibatches.
- **seed** (int, optional) – Random seed to use in this BNN. Defaults to None.
- **dtype** (tf.DType, optional) – Tensorflow datatype to use for internal representation. Defaults to tf.float64.
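The core idea behind n_nets can be sketched without tensorflow: the sampler keeps several weight samples from the (approximate) posterior, and the predictions of the networks they parameterize are combined into a predictive mean and variance. The snippet below is a minimal NumPy illustration under assumed names (`sampled_net_prediction` is a hypothetical stand-in for one sampled network, here just a linear model), not the library’s actual code path.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_net_prediction(weights, x):
    # Hypothetical stand-in for one sampled network: a linear model
    # whose weight vector was drawn from the approximate posterior.
    return x @ weights

# Pretend we kept n_nets = 5 weight samples from the SGHMC chain.
x_test = np.array([[1.0, 2.0], [0.5, -1.0]])           # (N=2, D=2)
weight_samples = [rng.normal(size=2) for _ in range(5)]

predictions = np.stack([sampled_net_prediction(w, x_test)
                        for w in weight_samples])       # (n_nets, N)

mean = predictions.mean(axis=0)         # predictive mean, shape (N,)
uncertainty = predictions.var(axis=0)   # predictive variance, shape (N,)
```

Averaging over sampled networks is what distinguishes this from a single point-estimate network: the spread of the ensemble is the uncertainty estimate used for Bayesian optimization.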
__weakref__

list of weak references to the object (if defined)

compute_network_output(params, input_data)[source]

Compute and return the output of the network when parameterized with params on input_data.

Parameters:

- **params** (list of ndarray) – List of parameter values (ndarray) for each tensorflow.Variable parameter of our network.
- **input_data** (ndarray (N, D)) – Input points to compute the network output for.

Returns:

- **network_output** (ndarray (N,)) – Output of the network parameterized with params on the given input_data points.
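To make the parameter-list convention concrete, here is a NumPy analogue of feeding a list of ndarrays into a small two-layer network. The layer shapes and tanh activation are illustrative assumptions, not the architecture built by get_default_net.

```python
import numpy as np

def compute_network_output(params, input_data):
    # Hypothetical NumPy analogue: params is [W1, b1, W2, b2], one
    # ndarray per network parameter, mirroring how ndarray values are
    # fed into the network's tensorflow.Variable parameters.
    W1, b1, W2, b2 = params
    hidden = np.tanh(input_data @ W1 + b1)
    return (hidden @ W2 + b2).ravel()   # shape (N,)

rng = np.random.default_rng(1)
params = [rng.normal(size=(3, 4)), np.zeros(4),
          rng.normal(size=(4, 1)), np.zeros(1)]
output = compute_network_output(params, rng.normal(size=(10, 3)))
# output has shape (10,): one scalar prediction per input point
```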
negative_log_likelihood(X, Y)[source]

Compute the negative log likelihood of the current network parameters with respect to inputs X with labels Y.

Parameters:

- **X** (tensorflow.Placeholder) – Placeholder for input datapoints.
- **Y** (tensorflow.Placeholder) – Placeholder for input labels.

Returns:

- **neg_log_like** (tensorflow.Tensor) – Negative log likelihood of the current network parameters with respect to inputs X with labels Y.
- **mse** (tensorflow.Tensor) – Mean squared error of the current network parameters with respect to inputs X with labels Y.
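The relationship between the two returned quantities can be sketched for a Gaussian likelihood with fixed noise variance: the negative log likelihood then decomposes into the mean squared error plus normalization terms. This is an assumed form for illustration only, not the exact graph the library builds.

```python
import numpy as np

def negative_log_likelihood(predictions, targets, noise_variance=1.0):
    # Gaussian-likelihood sketch (assumption): NLL is the sum over n
    # points of 0.5 * (log(2*pi*sigma^2) + (y - y_hat)^2 / sigma^2),
    # which rewrites in terms of the mean squared error.
    mse = np.mean((predictions - targets) ** 2)
    n = len(targets)
    nll = 0.5 * n * (np.log(2 * np.pi * noise_variance)
                     + mse / noise_variance)
    return nll, mse
```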

## Base Model

class pysgmcmc.models.base_model.BaseModel[source]
__init__()[source]

Abstract base class for all models

__metaclass__

alias of ABCMeta

__weakref__

list of weak references to the object (if defined)

get_incumbent()[source]

Returns the best observed point and its function value

Returns:

- **incumbent** (ndarray (D,)) – Current incumbent.
- **incumbent_value** (ndarray (N,)) – The observed value of the incumbent.
get_json_data()[source]

JSON getter function.

Returns: dictionary
predict(X_test)[source]

Predict the mean and variance of the target values for a given set of test data points.

Parameters:

- **X_test** (np.ndarray (N, D)) – N test data points with D input dimensions.

Returns:

- **mean** (ndarray (N,)) – Predictive mean at the test data points.
- **var** (ndarray (N,)) – Predictive variance at the test data points.
train(X, y)[source]

Trains the model on the provided data.

Parameters:

- **X** (np.ndarray (N, D)) – Input data points, with N the number of points and D the number of input dimensions.
- **y** (np.ndarray (N,)) – The corresponding target values of the input data points.
update(X, y)[source]

Update the model with new additional data. Override this function if your model can do something smarter than simple retraining.

Parameters:

- **X** (np.ndarray (N, D)) – Input data points, with N the number of points and D the number of input dimensions.
- **y** (np.ndarray (N,)) – The corresponding target values of the input data points.
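The BaseModel contract above can be illustrated with a minimal NumPy-only subclass-style sketch. `MeanModel` is a hypothetical toy model (it simply predicts the mean of the training targets); it shows the expected shapes for train, predict, update, and get_incumbent, and its update implements the documented fallback of simple retraining on the concatenated data.

```python
import numpy as np

class MeanModel:
    # Toy sketch of the BaseModel contract (assumed, not from the
    # library): train stores data, predict returns a (mean, var) pair
    # per test point, update falls back to simple retraining.
    def __init__(self):
        self.X, self.y = None, None

    def train(self, X, y):
        self.X, self.y = X, y

    def update(self, X, y):
        # Default behaviour: retrain on old + new data.
        self.train(np.vstack([self.X, X]), np.concatenate([self.y, y]))

    def predict(self, X_test):
        n = X_test.shape[0]
        mean = np.full(n, self.y.mean())   # (N,)
        var = np.full(n, self.y.var())     # (N,)
        return mean, var

    def get_incumbent(self):
        best = np.argmin(self.y)           # assuming minimization
        return self.X[best], self.y[best]
```

A real model would replace the bodies of train and predict, and would only override update if incremental fitting beats retraining.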