Network

class network.Network(batch, input_shape=None, train=True)[source]

Bases: object

Neural Network object

Parameters
  • batch (int) – Batch size

  • input_shape (tuple) – Input dimensions

  • train (bool (default=True)) – Enable/disable parameter tuning (training mode)

Notes

Warning

For the time being the train flag has no effect, since the layers do not take it into account.

LAYERS = {
  'activation'    : NumPyNet.layers.activation_layer.Activation_layer,
  'avgpool'       : NumPyNet.layers.avgpool_layer.Avgpool_layer,
  'batchnorm'     : NumPyNet.layers.batchnorm_layer.BatchNorm_layer,
  'connected'     : NumPyNet.layers.connected_layer.Connected_layer,
  'convolutional' : NumPyNet.layers.convolutional_layer.Convolutional_layer,
  'cost'          : NumPyNet.layers.cost_layer.Cost_layer,
  'dropout'       : NumPyNet.layers.dropout_layer.Dropout_layer,
  'input'         : NumPyNet.layers.input_layer.Input_layer,
  'l1norm'        : NumPyNet.layers.l1norm_layer.L1Norm_layer,
  'l2norm'        : NumPyNet.layers.l2norm_layer.L2Norm_layer,
  'logistic'      : NumPyNet.layers.logistic_layer.Logistic_layer,
  'lstm'          : NumPyNet.layers.lstm_layer.LSTM_layer,
  'maxpool'       : NumPyNet.layers.maxpool_layer.Maxpool_layer,
  'rnn'           : NumPyNet.layers.rnn_layer.RNN_layer,
  'route'         : NumPyNet.layers.route_layer.Route_layer,
  'shortcut'      : NumPyNet.layers.shortcut_layer.Shortcut_layer,
  'shuffler'      : NumPyNet.layers.shuffler_layer.Shuffler_layer,
  'simplernn'     : NumPyNet.layers.simple_rnn_layer.SimpleRNN_layer,
  'softmax'       : NumPyNet.layers.softmax_layer.Softmax_layer,
  'upsample'      : NumPyNet.layers.upsample_layer.Upsample_layer,
  'yolo'          : NumPyNet.layers.yolo_layer.Yolo_layer,
}
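
A minimal construction sketch, assuming the class is importable as NumPyNet.network.Network and that input_shape follows a (width, height, channels) layout:

  from NumPyNet.network import Network

  # batch of 16 samples, hypothetical 32x32 RGB input (shape layout is an assumption)
  model = Network(batch=16, input_shape=(32, 32, 3))
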
add(layer)[source]

Add a new layer to the network model. Layers are progressively appended to the tail of the model.

Parameters

layer (Layer object) – Layer object to append to the current architecture

Return type

self

Notes

Note

If the architecture is empty, a default Input_layer is used to start the model.

Warning

The type of the given layer must be one of the types stored in the LAYERS dict, otherwise a LayerError is raised.
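
Continuing the sketch above, layers are appended one after another. The constructor arguments of Connected_layer and Activation_layer below are assumptions; only the class names and module paths come from the LAYERS dict:

  from NumPyNet.layers.connected_layer import Connected_layer
  from NumPyNet.layers.activation_layer import Activation_layer

  model.add(Connected_layer(outputs=10))          # hypothetical constructor argument
  model.add(Activation_layer(activation='Relu'))  # hypothetical constructor argument

Since add returns self, the calls can also be chained.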

compile(optimizer=<class 'NumPyNet.optimizer.Optimizer'>, metrics=None)[source]

Compile the neural network model, assigning the optimizer to each layer and setting the evaluation metrics

Parameters
  • optimizer (Optimizer) – Optimizer object to use during the training

  • metrics (list (default=None)) – List of metric functions to use for model evaluation.

Notes

Note

The optimizer is copied into each layer object that requires parameter optimization.
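
Continuing the sketch, a compilation example. The base Optimizer class comes from the signature above; whether an instance or the class itself should be passed, and the (y_true, y_pred) metric signature, are assumptions:

  import numpy as np
  from NumPyNet.optimizer import Optimizer

  # hypothetical user-defined metric: fraction of matching argmax predictions
  def accuracy(y_true, y_pred):
    return np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1))

  model.compile(optimizer=Optimizer(), metrics=[accuracy])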

evaluate(X, truth, verbose=False)[source]

Return the output and loss of the model

Parameters
  • X (array-like) – Input data

  • truth (array-like) – Ground truth or labels

  • verbose (bool (default=False)) – Turn on/off the verbosity of the evaluation progress bar

Returns

  • loss (float) – The current loss of the model

  • output (array-like) – Output of the model as numpy array
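
A short usage sketch, continuing the example above; X_test and y_test are assumed to be numpy arrays with a batch-first layout matching the network input:

  loss, output = model.evaluate(X_test, truth=y_test, verbose=False)
  print('validation loss: {:.3f}'.format(loss))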

fit(X, y, max_iter=100, shuffle=True, verbose=True)[source]

Fit/training function

Parameters
  • X (array-like) – Input data

  • y (array-like) – Ground truth or labels

  • max_iter (int (default=100)) – Maximum number of iterations/epochs to perform

  • shuffle (bool (default=True)) – Turn on/off the random shuffling of the data

  • verbose (bool (default=True)) – Turn on/off the verbosity of the training progress bar

Return type

self
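
A training sketch on hypothetical random data, assuming a (batch, width, height, channels) input layout and one-hot encoded labels:

  import numpy as np

  X_train = np.random.uniform(size=(512, 32, 32, 3))            # hypothetical data
  y_train = np.eye(10)[np.random.randint(0, 10, size=(512,))]   # hypothetical one-hot labels

  model.fit(X_train, y_train, max_iter=10, shuffle=True, verbose=True)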

fit_generator(Xy_generator, max_iter=100)[source]

Fit/training function using a data generator

Parameters
  • Xy_generator (DataGenerator) – Data generator object

  • max_iter (int (default=100)) – Maximum number of iterations/epochs to perform

Return type

self

References

DataGenerator object in data.py
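
A sketch only; the DataGenerator constructor is documented in data.py and its arguments are not reproduced here:

  # `generator` is assumed to be a DataGenerator instance built as described in data.py
  model.fit_generator(Xy_generator=generator, max_iter=10)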

property input_shape

Get the input shape

load(cfg_filename, weights=None)[source]

Load the network model from a configuration file in INI format

Parameters
  • cfg_filename (str) – Filename or path of the neural network configuration file in INI format

  • weights (str (default=None)) – Filename of the weights file

Return type

self
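
A loading sketch; the file names below are hypothetical:

  model = Network(batch=16)
  model.load(cfg_filename='model.cfg', weights='model.weights')   # hypothetical file names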

load_model(model_filename)[source]

Load the network model object from a pickle file

Parameters

model_filename (str) – Filename or path of the model (binary) file

Return type

self

Notes

Note

The model loading is performed using pickle. If the model was previously dumped with the save_model function, it will be restored correctly.
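
A sketch with a hypothetical file name, assuming the file was produced by save_model:

  model = Network(batch=16)
  model = model.load_model('model.pkl')   # hypothetical file name; returns self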

load_weights(weights_filename)[source]

Load weights from file in binary format

Parameters

weights_filename (str) – Filename of the input weights file

Return type

self

Notes

Note

The weights are read and assigned to each layer that has a load_weights member function.
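
A sketch with a hypothetical file name, assuming the architecture has already been built (e.g. via add or load):

  model.load_weights('model.weights')   # hypothetical file name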

next()[source]

Get the next layer

Notes

This method provides Python 2 compatibility for the __iter__/__next__ iteration protocol.
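
Since the class exposes the iteration protocol, the layers can be traversed directly; a sketch:

  for layer in model:
    print(type(layer).__name__)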

property num_layers

Get the number of layers in the model

property out_shape

Get the output shape

predict(X, truth=None, verbose=True)[source]

Predict the given input

Parameters
  • X (array-like) – Input data

  • truth (array-like (default=None)) – Ground truth or labels

  • verbose (bool (default=True)) – Turn on/off the verbosity of the prediction progress bar

Returns

output – Output of the model as numpy array

Return type

array-like
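
A prediction sketch; X_test is an assumed numpy array and the argmax decoding assumes a classification-style output:

  output = model.predict(X_test, truth=None, verbose=False)
  labels = output.argmax(axis=-1)   # assumes a classification output layout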

save_model(model_filename)[source]

Dump the current network model as a pickle file

Parameters

model_filename (str) – Filename or path for the model dumping

Return type

self

Notes

Note

The model is dumped using pickle binary format.
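
A sketch with a hypothetical file name:

  model.save_model('model.pkl')   # hypothetical file name; the dump uses pickle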

save_weights(filename)[source]

Dump the current network weights to file

Parameters

filename (str) – Filename of the output weights file

Return type

self

Notes

Note

The weights are extracted from each layer that has a save_weights member function.
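
A sketch with a hypothetical file name, mirroring the load_weights example above:

  model.save_weights('model.weights')   # hypothetical file name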

summary()[source]

Print the network model summary

Return type

None
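
A one-line usage sketch:

  model.summary()   # prints the model summary and returns None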