Activation layer

class layers.activation_layer.Activation_layer(input_shape=None, activation=<class 'NumPyNet.activations.Activations'>, **kwargs)[source]

Bases: NumPyNet.layers.base.BaseLayer

Activation layer

Parameters
  • input_shape (tuple (default=None)) – Input dimensions as tuple of 4 integers

  • activation (str or Activation object) – Activation function applied by the layer; it can be given either as the name of the function (str) or as an Activation object (see the sketch below).
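
For example, the two accepted forms of the activation argument can be used interchangeably (a minimal sketch; the string spelling 'Relu' and the input shape are illustrative assumptions):

>>>  from NumPyNet import activations
>>>  from NumPyNet.layers.activation_layer import Activation_layer
>>>
>>>  # build the layer from an Activation object ...
>>>  layer_obj = Activation_layer(input_shape=(1, 224, 224, 3), activation=activations.Relu())
>>>  # ... or from the activation name as a string
>>>  layer_str = Activation_layer(input_shape=(1, 224, 224, 3), activation='Relu')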

Example

>>>  import os
>>>  import numpy as np
>>>  import pylab as plt
>>>  from PIL import Image
>>>  from NumPyNet import activations
>>>  from NumPyNet.layers.activation_layer import Activation_layer
>>>
>>>  activation_func = activations.Relu()
>>>
>>>  img_2_float = lambda im : ((im - im.min()) / (im.max() - im.min())).astype(float)
>>>  float_2_img = lambda im : ((im - im.min()) / (im.max() - im.min()) * 255.).astype(np.uint8)
>>>
>>>  filename = os.path.join(os.path.dirname(__file__), '..', '..', 'data', 'dog.jpg')
>>>  inpt = np.asarray(Image.open(filename), dtype=float)
>>>  inpt.setflags(write=1)
>>>  inpt = img_2_float(inpt)
>>>  # rescale the input to [-1, 1] so the ReLU cut-off is visible
>>>  inpt = inpt * 2 - 1
>>>
>>>  # add batch = 1
>>>  inpt = np.expand_dims(inpt, axis=0)
>>>
>>>  layer = Activation_layer(input_shape=inpt.shape, activation=activation_func)
>>>
>>>  # FORWARD
>>>
>>>  layer.forward(inpt)
>>>  forward_out = layer.output
>>>  print(layer)
>>>
>>>  # BACKWARD
>>>
>>>  layer.delta = np.ones(shape=inpt.shape, dtype=float)
>>>  delta = np.zeros(shape=inpt.shape, dtype=float)
>>>  layer.backward(delta, copy=True)
>>>
>>>  # Visualizations
>>>
>>>  fig, (ax1, ax2, ax3) = plt.subplots(nrows=1, ncols=3, figsize=(10, 5))
>>>  fig.subplots_adjust(left=0.1, right=0.95, top=0.95, bottom=0.15)
>>>
>>>  fig.suptitle('Activation Layer : {}'.format(activation_func.name))
>>>
>>>  ax1.imshow(float_2_img(inpt[0]))
>>>  ax1.set_title('Original image')
>>>  ax1.axis('off')
>>>
>>>  ax2.imshow(float_2_img(forward_out[0]))
>>>  ax2.set_title("Forward")
>>>  ax2.axis("off")
>>>
>>>  ax3.imshow(float_2_img(delta[0]))
>>>  ax3.set_title('Backward')
>>>  ax3.axis('off')
>>>
>>>  fig.tight_layout()
>>>  plt.show()
[figure activation_relu.png: 'Activation Layer : Relu' — original image, forward output, backward delta]
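
As a quick numerical check of the same mechanics, independent of the image data (a minimal sketch relying only on the ReLU definition max(0, x)):

>>>  x = np.linspace(-1., 1., 4).reshape(1, 2, 2, 1)  # batch of one 2x2 single-channel image
>>>  layer = Activation_layer(input_shape=x.shape, activation=activations.Relu())
>>>  layer.forward(x)
>>>  assert np.all(layer.output[x <= 0.] == 0.)           # negative inputs are zeroed
>>>  assert np.allclose(layer.output[x > 0.], x[x > 0.])  # positive inputs pass through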

backward(delta, copy=False)[source]

Backward function of the activation layer: applies the derivative of the activation function to the back-propagated error.

Parameters
  • delta (array-like) – Global error to be backpropagated.

  • copy (bool (default=False)) – If True, make a copy of the arrays before the backward computation.

Return type

self
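
A sketch of a typical call sequence (assuming layer.forward has already been run, as in the example above):

>>>  layer.delta = np.ones(shape=layer.out_shape, dtype=float)   # error coming from the next layer
>>>  prev_delta = np.zeros(shape=layer.out_shape, dtype=float)   # buffer of the previous layer
>>>  layer.backward(prev_delta, copy=True)  # prev_delta now holds the propagated error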

forward(inpt, copy=True)[source]

Forward function of the activation layer: applies the selected activation function to the input.

Parameters
  • inpt (array-like) – Input array to activate.

  • copy (bool (default=True)) – If True, make a copy of the input before applying the activation.

Return type

self
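
For instance (a minimal sketch; the random input array is a placeholder):

>>>  import numpy as np
>>>  from NumPyNet import activations
>>>
>>>  x = np.random.uniform(low=-1., high=1., size=(1, 8, 8, 3))
>>>  layer = Activation_layer(input_shape=x.shape, activation=activations.Relu())
>>>  layer.forward(x, copy=True)
>>>  out = layer.output  # activated array, same shape as x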

property out_shape

Get the output shape

Returns

out_shape – Output shape as tuple (batch, out_w, out_h, out_c)

Return type

tuple
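
Since the activation is applied element-wise, the output shape matches the input shape (a minimal sketch):

>>>  layer = Activation_layer(input_shape=(8, 32, 32, 3), activation=activations.Relu())
>>>  layer.out_shape
(8, 32, 32, 3)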