Shortcut layer

class layers.shortcut_layer.Shortcut_layer(activation=<class 'NumPyNet.activations.Activations'>, alpha=1.0, beta=1.0, **kwargs)[source]

Bases: NumPyNet.layers.base.BaseLayer

Shortcut layer: activation of the linear combination of the outputs of two layers

layer1 * alpha + layer2 * beta = output

Currently it works only with inputs of the same shape.
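
As a rough sketch of the operation in plain NumPy (assuming equal input shapes and a ReLU activation; the names x1 and x2 are illustrative, not part of the API):

>>> import numpy as np
>>>
>>> alpha, beta = 0.75, 0.5
>>> x1 = np.random.uniform(low=-1., high=1., size=(2, 4, 4, 3))  # output of the first layer
>>> x2 = np.random.uniform(low=-1., high=1., size=x1.shape)      # output of the second layer
>>>
>>> linear = alpha * x1 + beta * x2   # weighted sum of the two inputs
>>> out = np.maximum(linear, 0.)      # element-wise ReLU as the example activation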

Parameters
  • activation (str or Activation object) – Activation function of the layer.

  • alpha (float, (default = 1.)) – First weight of the combination.

  • beta (float, (default = 1.)) – Second weight of the combination.

>>> import numpy as np
>>> import pylab as plt
>>>
>>> from NumPyNet import activations
>>> from NumPyNet.layers.shortcut_layer import Shortcut_layer
>>>
>>> img_2_float = lambda im : ((im - im.min()) * (1./(im.max() - im.min()) * 1.)).astype(float)
>>> float_2_img = lambda im : ((im - im.min()) * (1./(im.max() - im.min()) * 255.)).astype(np.uint8)
>>>
>>> # Set seed to have same input
>>> np.random.seed(123)
>>>
>>> layer_activ = activations.Relu()
>>>
>>> batch = 2
>>>
>>> alpha = 0.75
>>> beta  = 0.5
>>>
>>> # Random input
>>> inpt1      = np.random.uniform(low=-1., high=1., size=(batch, 100, 100, 3))
>>> inpt2      = np.random.uniform(low=-1., high=1., size=inpt1.shape)
>>> b, w, h, c = inpt1.shape
>>>
>>>
>>> # model initialization
>>> layer = Shortcut_layer(activation=layer_activ,
>>>                        alpha=alpha, beta=beta)
>>>
>>> # FORWARD
>>>
>>> layer.forward(inpt1, inpt2)
>>> forward_out = layer.output.copy()
>>>
>>> print(layer)
>>>
>>> # BACKWARD
>>>
>>> delta      = np.zeros(shape=inpt1.shape, dtype=float)
>>> delta_prev = np.zeros(shape=inpt2.shape, dtype=float)
>>>
>>> layer.delta = np.ones(shape=layer.out_shape, dtype=float)
>>> layer.backward(delta, delta_prev)
>>>
>>> # Visualizations
>>>
>>> fig, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(10, 5))
>>> fig.subplots_adjust(left=0.1, right=0.95, top=0.95, bottom=0.15)
>>> fig.suptitle('Shortcut Layer\nalpha : {}, beta : {}, activation : {}'.format(alpha, beta, layer_activ.name))
>>>
>>> ax1.imshow(float_2_img(inpt1[0]))
>>> ax1.set_title('Original Image')
>>> ax1.axis('off')
>>>
>>> ax2.imshow(float_2_img(forward_out[0]))
>>> ax2.set_title('Forward')
>>> ax2.axis('off')
>>>
>>> ax3.imshow(float_2_img(delta[0]))
>>> ax3.set_title('Backward')
>>> ax3.axis('off')
>>>
>>> fig.tight_layout()
>>> plt.show()


backward(delta, prev_delta, copy=False)[source]

Backward function of the Shortcut layer

Parameters
  • delta (array-like) – Delta array of shape (batch, w, h, c). Global delta to be backpropagated.

  • prev_delta (array-like) – Second delta to be backpropagated.

  • copy (bool (default=False)) – States whether the activation function has to return a copy of its input or not.

Return type

self
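
In plain NumPy terms, the backward pass does roughly the following (a minimal sketch, assuming a ReLU activation and that the two deltas are scaled by alpha and beta respectively, consistently with the weighted combination above; the actual implementation may differ in details):

>>> # derivative of the (assumed) ReLU activation, evaluated on the layer output
>>> grad = (layer.output > 0.).astype(float)
>>> layer_delta = layer.delta * grad    # delta brought through the activation
>>>
>>> delta      += alpha * layer_delta   # share routed to the first input
>>> prev_delta += beta  * layer_delta   # share routed to the second input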

forward(inpt, prev_output, copy=False)[source]

Forward function of the Shortcut layer: activation of the linear combination of the two inputs.

Parameters
  • inpt (array-like) – Input batch of images in format (batch, in_w, in_h, in_c).

  • prev_output (array-like) – Second input of the layer, with the same shape as inpt.

  • copy (bool (default=False)) – States whether the activation function has to return a copy of its input or not.

Return type

self
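
Since forward returns self, the call can be chained; e.g., reusing the inputs from the example above:

>>> forward_out = layer.forward(inpt1, inpt2).output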

property out_shape

Get the output shape

Returns

out_shape – Tuple as (batch, out_w, out_h, out_c)

Return type

tuple
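
For same-shape inputs the output shape matches the input shape; e.g., after running forward on the (2, 100, 100, 3) batches from the example above:

>>> layer.forward(inpt1, inpt2)
>>> layer.out_shape
(2, 100, 100, 3)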