Connected layer

class layers.connected_layer.Connected_layer(outputs, activation=<class 'NumPyNet.activations.Activations'>, input_shape=None, weights=None, bias=None, **kwargs)[source]

Bases: NumPyNet.layers.base.BaseLayer

Connected layer

It is the equivalent of a Dense layer in Keras, or of a single layer of an MLP in scikit-learn.

Parameters
  • outputs (int) – Number of outputs of the layer, i.e. the number of neurons of the layer.

  • activation (str or Activation object) – Activation function of the layer.

  • input_shape (tuple (default=None)) – Shape of the input in the format (batch, w, h, c). None is used when the layer is part of a Network model.

  • weights (array-like (default=None)) –

    Array of shape (w * h * c, outputs), default is None. Weights of the dense layer. If None, the weights initialization is random and follows a uniform distribution in the range [-scale, scale], where:

    scale = sqrt(2 / (w * h * c))

    A minimal sketch of this default initialization is given right after this parameter list.

  • bias (array-like (default=None)) – Array of shape (outputs,). Bias of the fully-connected layer. If None, the bias is initialized to zeros.
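
When weights and bias are left to None, the default initialization can be sketched as follows (a minimal illustration of the formula above, not the library's internal code; the shapes are illustrative):

>>> import numpy as np
>>>
>>> w, h, c, outputs = (100, 100, 3, 10)
>>> scale = np.sqrt(2. / (w * h * c))
>>> # uniform weights in [-scale, scale] and zero bias, as used when weights=None and bias=None
>>> weights = np.random.uniform(low=-scale, high=scale, size=(w * h * c, outputs))
>>> bias = np.zeros(shape=(outputs,), dtype=float)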

Example

>>> import os
>>>
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from PIL import Image
>>>
>>> from NumPyNet import activations
>>> from NumPyNet.layers.connected_layer import Connected_layer
>>>
>>> img_2_float = lambda im : ((im - im.min()) * (1./(im.max() - im.min()) * 1.)).astype(float)
>>> float_2_img = lambda im : ((im - im.min()) * (1./(im.max() - im.min()) * 255.)).astype(np.uint8)
>>>
>>> filename = os.path.join(os.path.dirname(__file__), '..', '..', 'data', 'dog.jpg')
>>> inpt = np.asarray(Image.open(filename), dtype=float)
>>> inpt.setflags(write=1)
>>> inpt = img_2_float(inpt)
>>>
>>> # from (w, h, c) to shape (1, w, h, c)
>>> inpt = np.expand_dims(inpt, axis=0)  # just to add the 'batch' dimension
>>>
>>> # Number of outputs
>>> outputs = 10
>>> layer_activation = activations.Relu()
>>> batch, w, h, c = inpt.shape
>>>
>>> # Random initialization of weights with shape (w * h * c, outputs) and bias with shape (outputs,)
>>> np.random.seed(123)  # only to always get the same set of weights
>>> weights = np.random.uniform(low=-1., high=1., size=(np.prod(inpt.shape[1:]), outputs))
>>> bias = np.random.uniform(low=-1., high=1., size=(outputs,))
>>>
>>> # Model initialization
>>> layer = Connected_layer(outputs, input_shape=inpt.shape,
...                         activation=layer_activation, weights=weights, bias=bias)
>>> print(layer)
>>>
>>> # FORWARD
>>>
>>> layer.forward(inpt)
>>> forward_out = layer.output.copy()
>>>
>>> # BACKWARD
>>>
>>> layer.delta = np.ones(shape=(layer.out_shape), dtype=float)
>>> delta = np.zeros(shape=(batch, w, h, c), dtype=float)
>>> layer.backward(inpt, delta=delta, copy=True)
>>>
>>> # print('Output: {}'.format(', '.join( ['{:.3f}'.format(x) for x in forward_out[0]] ) ) )
>>>
>>> # Visualizations
>>>
>>> fig, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(10, 5))
>>> fig.subplots_adjust(left=0.1, right=0.95, top=0.95, bottom=0.15)
>>> fig.suptitle('Connected Layer activation : {}'.format(layer_activation.name))
>>>
>>> ax1.imshow(float_2_img(inpt[0]))
>>> ax1.set_title('Original Image')
>>> ax1.axis('off')
>>>
>>> ax2.matshow(forward_out[:, 0, 0, :], cmap='bwr')
>>> ax2.set_title('Forward', y=4)
>>> ax2.axes.get_yaxis().set_visible(False)         # no y axis tick
>>> ax2.axes.get_xaxis().set_ticks(range(outputs))  # set x axis tick for every output
>>>
>>> ax3.imshow(float_2_img(delta[0]))
>>> ax3.set_title('Backward')
>>> ax3.axis('off')
>>>
>>> fig.tight_layout()
>>> plt.show()


backward(inpt, delta=None, copy=False)[source]

Backward function of the Connected layer: it computes the global delta of the network to be backpropagated, together with the weights and bias updates.

Parameters
  • inpt (array-like) – Input batch in format (batch, w, h, c), the same one passed to the forward function.

  • delta (array-like) – delta array of shape (batch, w, h, c). Global delta to be backpropagated.

  • copy (bool (default=False)) – States if the activation function has to return a copy of its input or not.

Return type

self
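
As a rough sketch of the quantities involved in this backward pass (a simplified illustration of standard dense-layer backpropagation, not the library's exact code; x, z, weights and gradient are illustrative names, with gradient standing for the derivative of the chosen activation):

>>> # x: the layer input reshaped to (batch, w * h * c); z = x @ weights + bias
>>> # layer.delta: error coming from the next layer, of shape (batch, 1, 1, outputs)
>>> delta_out = layer.delta.reshape(x.shape[0], -1) * gradient(z)  # derivative through the activation
>>> bias_update = delta_out.sum(axis=0)                            # gradient w.r.t. the bias
>>> weights_update = x.T @ delta_out                               # gradient w.r.t. the weights
>>> delta += (delta_out @ weights.T).reshape(delta.shape)          # global delta to backpropagate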

forward(inpt, copy=False)[source]

Forward function of the connected layer. It computes the matrix product between inpt and the weights, adds the bias and activates the result with the chosen activation function.

Parameters
  • inpt (array-like) – Input batch in format (batch, in_w, in_h, in_c)

  • copy (bool (default=False)) – If False the activation function modifies its input, if True it makes a copy instead.

Return type

self
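
A simplified sketch of the computation described above (illustrative only; the variable names are not the library's internals):

>>> # inpt: (batch, w, h, c) -> flatten every sample to a row vector
>>> x = inpt.reshape(inpt.shape[0], -1)            # shape (batch, w * h * c)
>>> z = x @ weights + bias                         # shape (batch, outputs)
>>> out = activation(z)                            # element-wise activation function
>>> out = out.reshape(inpt.shape[0], 1, 1, -1)     # stored as (batch, 1, 1, outputs), cfr. out_shape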

property inputs

Number of inputs of the layer (i.e. w * h * c), not considering the batch size.

load_weights(chunck_weights, pos=0)[source]

Load weights from full array of model weights

Parameters
  • chunck_weights (array-like) – Full array of the model weights and biases.

  • pos (int (default=0)) – Current reading position inside the array.

Returns

pos – Updated stream position.

Return type

int
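
A hedged sketch of a positional load from a flat parameter array (the bias-then-weights ordering is an assumption made for illustration, not a statement about the binary format actually used by NumPyNet):

>>> # chunck_weights: flat array holding all the model parameters; pos: current offset
>>> bias = chunck_weights[pos : pos + outputs]
>>> pos += outputs
>>> size = layer.inputs * outputs
>>> weights = chunck_weights[pos : pos + size].reshape(layer.inputs, outputs)
>>> pos += size   # updated position, returned to the caller for the next layer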

property out_shape

Returns the output shape in the format (batch, 1, 1, outputs)

save_weights()[source]

Return the biases and weights raveled into a single array, in a format suitable for saving to a binary file.
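
A usage sketch of the save/load round trip between two layers with the same geometry (assuming that load_weights accepts the flat array returned by save_weights):

>>> flat = layer.save_weights()       # biases and weights raveled into a single array
>>> layer2 = Connected_layer(outputs, input_shape=inpt.shape, activation=layer_activation)
>>> layer2.load_weights(flat, pos=0)  # restore the parameters from the flat array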

update()[source]

Update function for the Connected layer. The optimizer must be assigned externally as an optimizer object.

Return type

self
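
A short usage sketch, assuming an SGD object from NumPyNet.optimizer; the attribute used to attach the optimizer to the layer is an assumption here:

>>> from NumPyNet.optimizer import SGD
>>>
>>> layer.optimizer = SGD()      # assign the optimizer externally (attribute name assumed)
>>> layer.backward(inpt, delta=delta)
>>> layer.update()               # apply the computed weights and bias updates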