Models

Abstract Base Class Model

class hebel.models.Model

Abstract base class for a Hebel model.

evaluate(input_data, targets, return_cache=False, prediction=True)

Evaluate the loss function without computing gradients.

feed_forward(input_data, return_cache=False, prediction=True)

Get predictions from the model.

test_error(input_data, targets, average=True, cache=None)

Evaluate performance on a test set.

training_pass(input_data, targets)

Perform a full forward and backward pass through the model
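
These four methods define the call pattern shared by every concrete subclass. A minimal usage sketch (not part of Hebel itself; model is assumed to be a concrete subclass such as NeuralNet, and x, y to be pycuda GPUArray mini-batches):

# Call-pattern sketch for the Model interface. `model`, `x`, and `y`
# are assumed to exist (e.g. a NeuralNet and two GPUArrays).
predictions = model.feed_forward(x, prediction=True)
loss = model.evaluate(x, y)                   # loss only, no gradients
loss, gradients = model.training_pass(x, y)   # loss plus gradients
error = model.test_error(x, y, average=True)  # task-specific error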

Neural Network

class hebel.models.NeuralNet(layers, top_layer=None, activation_function='sigmoid', dropout=0.0, input_dropout=0.0, n_in=None, n_out=None, l1_penalty_weight=0.0, l2_penalty_weight=0.0, **kwargs)

A neural network for classification using the cross-entropy loss function.

Parameters:

layers : array_like
An array of either integers or instances of hebel.layers.HiddenLayer objects. If integers are given, they represent the number of hidden units in each layer, and new HiddenLayer objects are created. If HiddenLayer instances are given, the user must make sure that each HiddenLayer has n_in set to the preceding layer’s n_units. If HiddenLayer instances are passed, then activation_function, dropout, n_in, l1_penalty_weight, and l2_penalty_weight are ignored.
top_layer : hebel.layers.TopLayer instance, optional
If top_layer is given, it is used as the output layer; otherwise a LogisticLayer instance is created.
activation_function : {‘sigmoid’, ‘tanh’, ‘relu’, or ‘linear’}, optional
The activation function to be used in the hidden layers.
dropout : float in [0, 1)
Probability of dropping out each hidden unit during training. Default is 0.
input_dropout : float in [0, 1)
Probability of dropping out each input during training. Default is 0.
n_in : integer, optional
The dimensionality of the input. Must be given if the first hidden layer is not passed as a hebel.layers.HiddenLayer instance.
n_out : integer, optional
The number of classes to predict. Must be given if top_layer is not passed as a hebel.layers.TopLayer instance.
l1_penalty_weight : float, optional
Weight for L1 regularization
l2_penalty_weight : float, optional
Weight for L2 regularization
kwargs : optional
Any additional arguments are passed on to top_layer.

See also:

hebel.models.LogisticRegression, hebel.models.NeuralNetRegression, hebel.models.MultitaskNeuralNet

Examples:

# Simple form
from hebel.models import NeuralNet

model = NeuralNet(layers=[1000, 1000],
                  activation_function='relu',
                  dropout=0.5,
                  n_in=784, n_out=10,
                  l1_penalty_weight=.1)

# Extended form, initializing with ``HiddenLayer`` and ``TopLayer`` objects
from hebel.layers import HiddenLayer, LogisticLayer

hidden_layers = [HiddenLayer(784, 1000, 'relu', dropout=0.5,
                             l1_penalty_weight=.2),
                 HiddenLayer(1000, 1000, 'relu', dropout=0.5,
                             l1_penalty_weight=.1)]
softmax_layer = LogisticLayer(1000, 10, l1_penalty_weight=.1)

model = NeuralNet(hidden_layers, softmax_layer)

TopLayerClass

alias of SoftmaxLayer

checksum()

Returns an MD5 digest of the model.

This can be used to easily identify whether two models have the same architecture.
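
For example, two models can be compared by their digests (a sketch; both models are freshly constructed here):

from hebel.models import NeuralNet

model_a = NeuralNet(layers=[1000], n_in=784, n_out=10)
model_b = NeuralNet(layers=[1000], n_in=784, n_out=10)
# Identical architectures yield identical digests.
if model_a.checksum() == model_b.checksum():
    print('The two models have the same architecture.')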

evaluate(input_data, targets, return_cache=False, prediction=True)

Evaluate the loss function without computing gradients.

Parameters:

input_data : GPUArray
Data to evaluate
targets : GPUArray
Targets
return_cache : bool, optional
Whether to return intermediary variables from the computation and the hidden activations.
prediction : bool, optional
Whether to run in prediction mode. Only relevant when using dropout. If true, weights are multiplied by 1 - dropout in layers that use dropout.

Returns:

loss : float
The value of the loss function.
hidden_cache : list, only returned if return_cache == True
Cache as returned by hebel.models.NeuralNet.feed_forward().
activations : list, only returned if return_cache == True
Hidden activations as returned by hebel.models.NeuralNet.feed_forward().
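
A usage sketch (x and y are assumed GPUArrays holding a mini-batch of inputs and its targets):

# Loss only.
loss = model.evaluate(x, y)

# Loss plus intermediary results, e.g. for debugging.
loss, hidden_cache, activations = model.evaluate(
    x, y, return_cache=True, prediction=False)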

feed_forward(input_data, return_cache=False, prediction=True)

Run data forward through the model.

Parameters:

input_data : GPUArray
Data to run through the model.
return_cache : bool, optional
Whether to return the intermediary results.
prediction : bool, optional
Whether to run in prediction mode. Only relevant when using dropout. If true, weights are multiplied by 1 - dropout. If false, hidden units are dropped at random with probability dropout, and the dropout mask is returned when return_cache == True.

Returns:

prediction : GPUArray
Predictions from the model.
cache : list of GPUArray, only returned if return_cache == True
Results of intermediary computations.
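
A usage sketch (x is an assumed GPUArray of inputs; .get() copies the result back into a NumPy array):

# Predicted class for each example, taken from the output layer.
output = model.feed_forward(x, prediction=True)
predicted_classes = output.get().argmax(axis=1)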

parameters

A property that returns all of the model’s parameters.
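
For example, the total number of parameters can be counted like this (a sketch; each entry in model.parameters is a GPUArray):

n_params = sum(p.size for p in model.parameters)
print('Number of parameters: %d' % n_params)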

test_error(test_data, average=True)

Evaluate performance on a test set.

Parameters:

test_data : hebel.data_providers.DataProvider
A DataProvider instance to evaluate the model on.
average : bool, optional
Whether to divide the loss function by the number of examples in the test data set.

Returns:

test_error : float
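
A usage sketch, assuming the test set is wrapped in one of Hebel's data providers (MiniBatchDataProvider is assumed here; x_test and y_test are GPUArrays):

from hebel.data_providers import MiniBatchDataProvider

test_data = MiniBatchDataProvider(x_test, y_test, batch_size=100)
error = model.test_error(test_data, average=True)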

training_pass(input_data, targets)

Perform a full forward and backward pass through the model.

Parameters:

input_data : GPUArray
Data to train the model with.
targets : GPUArray
Training targets.

Returns:

loss : float
Value of loss function as evaluated on the data and targets.
gradients : list of GPUArray
Gradients obtained from backpropagation in the backward pass.
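
The returned gradients can drive a parameter update directly. A minimal sketch of a plain SGD step (normally one of Hebel's optimizers performs this update; learning_rate is a hypothetical step size):

learning_rate = 0.1
loss, gradients = model.training_pass(x, y)
# Assumes model.parameters exposes the live parameter arrays.
for param, grad in zip(model.parameters, gradients):
    param -= learning_rate * grad  # in-place GPUArray update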

Neural Network Regression

class hebel.models.NeuralNetRegression(layers, top_layer=None, activation_function='sigmoid', dropout=0.0, input_dropout=0.0, n_in=None, n_out=None, l1_penalty_weight=0.0, l2_penalty_weight=0.0, **kwargs)

A neural network for regression using the squared error loss function.

This class exists for convenience. The same results can be achieved by creating a hebel.models.NeuralNet instance and passing a hebel.layers.LinearRegressionLayer instance as the top_layer argument.

Parameters:

layers : array_like
An array of either integers or instances of hebel.layers.HiddenLayer objects. If integers are given, they represent the number of hidden units in each layer, and new HiddenLayer objects are created. If HiddenLayer instances are given, the user must make sure that each HiddenLayer has n_in set to the preceding layer’s n_units. If HiddenLayer instances are passed, then activation_function, dropout, n_in, l1_penalty_weight, and l2_penalty_weight are ignored.
top_layer : hebel.layers.TopLayer instance, optional
If top_layer is given, it is used as the output layer; otherwise a LinearRegressionLayer instance is created.
activation_function : {‘sigmoid’, ‘tanh’, ‘relu’, or ‘linear’}, optional
The activation function to be used in the hidden layers.
dropout : float in [0, 1)
Probability of dropping out each hidden unit during training. Default is 0.
input_dropout : float in [0, 1)
Probability of dropping out each input during training. Default is 0.
n_in : integer, optional
The dimensionality of the input. Must be given if the first hidden layer is not passed as a hebel.layers.HiddenLayer instance.
n_out : integer, optional
The dimensionality of the output (the number of regression targets). Must be given if top_layer is not passed as a hebel.layers.TopLayer instance.
l1_penalty_weight : float, optional
Weight for L1 regularization
l2_penalty_weight : float, optional
Weight for L2 regularization
kwargs : optional
Any additional arguments are passed on to top_layer.

See also:

hebel.models.NeuralNet, hebel.models.MultitaskNeuralNet, hebel.layers.LinearRegressionLayer
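
By analogy with the NeuralNet example above, construction might look like this (a sketch; the layer sizes and the single regression target are illustrative):

from hebel.models import NeuralNetRegression

model = NeuralNetRegression(layers=[500, 500],
                            activation_function='relu',
                            dropout=0.5,
                            n_in=100, n_out=1,
                            l2_penalty_weight=.001)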

TopLayerClass

alias of LinearRegressionLayer

Logistic Regression

class hebel.models.LogisticRegression(n_in, n_out, test_error_fct='class_error')

A logistic regression model.
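
A construction sketch (in effect, a NeuralNet with no hidden layers: the input feeds the logistic output layer directly):

from hebel.models import LogisticRegression

# 784-dimensional inputs, 10 classes; classification error is the
# default test metric (test_error_fct='class_error').
model = LogisticRegression(n_in=784, n_out=10)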

Multi-Task Neural Net

class hebel.models.MultitaskNeuralNet(layers, top_layer=None, activation_function='sigmoid', dropout=0.0, input_dropout=0.0, n_in=None, n_out=None, l1_penalty_weight=0.0, l2_penalty_weight=0.0, **kwargs)

TopLayerClass

alias of MultitaskTopLayer