
Keras Core Layers

Dense

The Dense layer is a densely-connected (fully-connected) neural network layer. It implements the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (applicable only if use_bias is True).

Note that if the input to the layer has a rank greater than two, it is flattened prior to the initial dot product with the kernel.

Example
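
A minimal sketch of Dense layers in a Sequential model (the layer sizes and input shape here are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    # As the first layer in a model, Dense needs an input shape:
    # here the model takes input arrays of shape (*, 16).
    model.add(Dense(32, input_shape=(16,), activation='relu'))
    # After the first layer, input shapes are inferred automatically.
    model.add(Dense(10))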

Arguments

  • units: A positive integer specifying the dimensionality of the output space.
  • activation: The element-wise activation function used by the layer. If nothing is specified, no activation is applied (i.e., the linear activation a(x) = x).
  • use_bias: An optional Boolean indicating whether the layer uses a bias vector.
  • kernel_initializer: The initializer for the kernel weights matrix.
  • bias_initializer: The initializer for the bias vector, for which Keras uses the zeros initializer by default, i.e., the bias vector starts as all zeros.
  • kernel_regularizer: A regularizer function applied to the kernel weights matrix.
  • bias_regularizer: A regularizer function applied to the bias vector.
  • activity_regularizer: A regularizer function applied to the output of the layer (its activation).
  • kernel_constraint: A constraint function applied to the kernel weights matrix.
  • bias_constraint: A constraint function applied to the bias vector.

Input shape

The Dense layer accepts an nD tensor of shape (batch_size, …, input_dim); the most common situation is a 2D input of shape (batch_size, input_dim).

Output shape

It outputs an nD tensor of shape (batch_size, …, units). For instance, for a 2D input of shape (batch_size, input_dim), the output will have shape (batch_size, units).

Activation

This layer applies an activation function to an output.

Arguments

  • activation: The name of the activation function to use, or alternatively a Theano or TensorFlow operation.

Input shape

The input shape is arbitrary. When using this layer as the first layer in a model, use the input_shape keyword argument, a tuple of integers that does not include the samples axis.

Output shape

The output shape is the same as that of the input shape.
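
Example

A minimal sketch of Activation as a standalone layer (the sizes are illustrative); it is equivalent to passing activation='relu' to the preceding Dense layer directly:

    from keras.models import Sequential
    from keras.layers import Dense, Activation

    model = Sequential()
    model.add(Dense(64, input_shape=(16,)))
    # Apply ReLU to the output of the Dense layer above.
    model.add(Activation('relu'))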

Dropout

Dropout is applied to the input; it randomly sets a fraction rate of the input units to 0 at each update during training, which helps prevent overfitting.

Arguments

  • rate: A float between 0 and 1, representing the fraction of the input units to drop.
  • noise_shape: A one-dimensional integer tensor representing the shape of the binary dropout mask that will be multiplied with the input. For instance, if the input has shape (batch_size, timesteps, features) and you wish the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).
  • seed: A Python integer to be used as the random seed.
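
Example

A minimal sketch of Dropout between two Dense layers (the rate of 0.5 and the layer sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense, Dropout

    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(20,)))
    # Randomly zero out half of the 64 units during training only.
    model.add(Dropout(0.5))
    model.add(Dense(10, activation='softmax'))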

Flatten

The Flatten layer flattens the input without affecting the batch size.

Arguments

  • data_format: A string, one of channels_last (default) or channels_first, giving the ordering of the input dimensions. Its purpose is to preserve weight ordering when switching a model from one data format to another. Here channels_last corresponds to inputs of shape (batch, …, channels), whereas channels_first corresponds to inputs of shape (batch, channels, …). It defaults to the image_data_format value found in the Keras config file at ~/.keras/keras.json; if it has never been set there, it will be "channels_last".

Example
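
A minimal sketch of Flatten after a 2D convolution (the filter count and input shape are illustrative):

    from keras.models import Sequential
    from keras.layers import Conv2D, Flatten

    model = Sequential()
    model.add(Conv2D(64, (3, 3), input_shape=(32, 32, 3)))
    # model.output_shape == (None, 30, 30, 64)
    model.add(Flatten())
    # model.output_shape == (None, 57600), i.e. 30 * 30 * 64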

Input

The Input layer makes use of Input() to instantiate a Keras tensor, which is simply a tensor object from the backend (Theano, TensorFlow, or CNTK), augmented with certain attributes that let us build a Keras model just by knowing the inputs and outputs of the model.

For instance, if we have Keras tensors m, n, and o, then we can do model = Model(input=[m, n], output=o).

The added Keras attributes are: _keras_shape, an integer shape tuple propagated via Keras-side shape inference, and _keras_history, the last layer applied to the tensor, from which the entire layer graph can be retrieved recursively.

Arguments

  • shape: A shape tuple of integers that does not include the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
  • batch_shape: A shape tuple of integers that includes the batch size. For instance, batch_shape=(10, 32) indicates that the expected input will be batches of ten 32-dimensional vectors, and batch_shape=(None, 32) indicates batches of an arbitrary number of 32-dimensional vectors.
  • name: An optional string name for the layer. It must be unique in the model, and it is generated automatically if not provided.
  • dtype: The data type expected by the input, as a string (float32, float64, int32, …).
  • sparse: A Boolean specifying whether the placeholder to be created is sparse.
  • tensor: An optional existing tensor to wrap into the Input layer. If set, the layer will not create a placeholder tensor.

Returns

It returns a tensor.

Example
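
A minimal sketch of the functional API built around Input() (the layer sizes are illustrative):

    from keras.models import Model
    from keras.layers import Input, Dense

    # A 32-dimensional input feeding a 10-way softmax output.
    x = Input(shape=(32,))
    y = Dense(10, activation='softmax')(x)
    model = Model(inputs=x, outputs=y)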

Reshape

The Reshape layer reshapes its input to a given target shape.

Arguments

  • target_shape: A tuple of integers specifying the output shape, excluding the batch axis.

Input shape

The input shape is arbitrary, although all dimensions of the input shape must be fixed. Use the input_shape keyword argument when using this layer as the first layer in a model.

Output shape

(batch_size,) + target_shape

Example
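
A minimal sketch of Reshape, including the -1 wildcard for an inferred dimension (the shapes are illustrative):

    from keras.models import Sequential
    from keras.layers import Reshape

    model = Sequential()
    # Reshape 12-dimensional vectors into (3, 4) matrices.
    model.add(Reshape((3, 4), input_shape=(12,)))
    # model.output_shape == (None, 3, 4); None is the batch dimension.
    # -1 acts as a wildcard for a single inferred dimension:
    model.add(Reshape((-1, 2, 2)))
    # model.output_shape == (None, 3, 2, 2)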

Permute

It permutes the dimensions of the input according to a given pattern. It is useful, for instance, for connecting RNNs and convnets together.

Example
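
A minimal sketch of Permute swapping the first and second input dimensions (the input shape is illustrative):

    from keras.models import Sequential
    from keras.layers import Permute

    model = Sequential()
    # Swap dimensions 1 and 2; the batch dimension stays in place.
    model.add(Permute((2, 1), input_shape=(10, 64)))
    # model.output_shape == (None, 64, 10)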

Arguments

  • dims: A tuple of integers giving the permutation pattern; it does not include the samples dimension. Indexing starts at 1, so, for instance, (2, 1) permutes the first and second dimensions of the input.

Input shape

The input shape is arbitrary. When using this layer as the first layer in a model, use the input_shape keyword argument, a tuple of integers that does not include the samples axis.

Output shape

The output shape is the same as the input shape, but with the dimensions re-ordered according to the specified pattern.

RepeatVector

The RepeatVector layer repeats the input n times.

Example
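
A minimal sketch of RepeatVector turning a 2D output into a 3D one (the sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense, RepeatVector

    model = Sequential()
    model.add(Dense(32, input_dim=32))
    # model.output_shape == (None, 32)
    model.add(RepeatVector(3))
    # model.output_shape == (None, 3, 32)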

Arguments

  • n: It can be defined as an integer that signifies the repetition factor.

Input shape

A 2D tensor of shape (num_samples, features).

Output shape

A 3D tensor of shape (num_samples, n, features).

Lambda

The Lambda layer wraps an arbitrary expression as a Layer object.

Examples
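
A minimal sketch of Lambda wrapping an element-wise expression (the expression and input shape are illustrative):

    from keras.models import Sequential
    from keras.layers import Lambda

    model = Sequential()
    # Wrap a simple element-wise squaring expression as a layer.
    model.add(Lambda(lambda x: x ** 2, input_shape=(10,)))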



Arguments

  • function: The function to be evaluated. It takes the input tensor (or a list of tensors) as its first argument.
  • output_shape: The expected output shape from the function, which is only relevant if Theano is used. It can be a tuple or a function. If a tuple, it only specifies the shape from the first dimension onward; the sample dimension is assumed to be either the same as the input, i.e., output_shape = (input_shape[0],) + output_shape, or None when the input is None, i.e., output_shape = (None,) + output_shape. If a function, it specifies the entire shape as a function of the input shape: output_shape = f(input_shape).
  • mask: Either None, indicating no masking, or a tensor indicating the input mask for embedding.
  • arguments: An optional dictionary of keyword arguments to be passed to the function.

Input shape

The input shape is arbitrary. When using this layer as the first layer in a model, use the input_shape keyword argument, a tuple of integers that does not include the samples axis.

Output shape

It is either specified by an output_shape argument or auto-inferred when TensorFlow or CNTK is in use.

ActivityRegularization

The ActivityRegularization layer applies an update to the cost function based on the input activity.

Arguments

  • l1: The L1 regularization factor (a positive float).
  • l2: The L2 regularization factor (a positive float).

Input shape

The input shape is arbitrary. When using this layer as the first layer in a model, use the input_shape keyword argument, a tuple of integers that does not include the samples axis.

Output shape

The output shape is the same as the input shape.
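
Example

A minimal sketch of ActivityRegularization adding an activity penalty after a Dense layer (the factors and sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense, ActivityRegularization

    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(16,)))
    # Add an L1/L2 penalty on this layer's output to the loss.
    model.add(ActivityRegularization(l1=0.001, l2=0.001))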

Masking

The Masking layer masks a sequence by using a mask value to skip timesteps. For a given sample timestep, if all features are equal to mask_value, then the timestep will be masked (skipped) in all downstream layers, as long as they support masking.

An exception will be raised if any downstream layer does not support masking yet receives an input mask.

Example

Let x be a NumPy data array of shape (samples, timesteps, features) to be fed to an LSTM layer. Suppose you want to mask sample #0 at timestep #3 and sample #2 at timestep #5 because you lack features for these sample timesteps. You can then do the following (see the sketch after this list):

  • Set x[0, 3, :] = 0 and x[2, 5, :] = 0.
  • Insert a Masking layer with mask_value=0 before the LSTM layer.
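
A minimal sketch of that setup (the array sizes and the LSTM width are illustrative):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Masking, LSTM

    samples, timesteps, features = 4, 8, 16
    x = np.random.random((samples, timesteps, features))
    # Zero out the timesteps that lack features...
    x[0, 3, :] = 0.
    x[2, 5, :] = 0.

    model = Sequential()
    # ...and mask them so downstream layers skip all-zero timesteps.
    model.add(Masking(mask_value=0., input_shape=(timesteps, features)))
    model.add(LSTM(32))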

Arguments

  • mask_value: Either None or the mask value to skip.

SpatialDropout1D

This is the spatial 1D version of dropout. It performs the same function as Dropout; however, it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just result in a decrease of the effective learning rate. In this case, SpatialDropout1D helps promote independence between feature maps and should be used instead.

Arguments

  • rate: A float between 0 and 1, the fraction of the input units to drop.

Input shape

A 3D tensor of shape (samples, timesteps, channels).

Output shape

The output shape is the same as the input shape.
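
Example

A minimal sketch of SpatialDropout1D after a 1D convolution (the filter count, kernel size, and rate are illustrative):

    from keras.models import Sequential
    from keras.layers import Conv1D, SpatialDropout1D

    model = Sequential()
    model.add(Conv1D(64, 3, input_shape=(100, 16)))
    # Drop entire 64-channel feature maps rather than single elements.
    model.add(SpatialDropout1D(0.5))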

SpatialDropout2D

This is the spatial 2D version of dropout. It also performs the same function as Dropout; however, it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just result in a decrease of the effective learning rate. In this case, SpatialDropout2D promotes independence between feature maps and should be used instead.

Arguments

  • rate: A float between 0 and 1, the fraction of the input units to drop.
  • data_format: One of 'channels_first' or 'channels_last'. In channels_first mode, the channels dimension (the depth) is at index 1; in channels_last mode, it is at index 3. It defaults to the image_data_format value found in the Keras config file at ~/.keras/keras.json; if it has never been set there, it will be "channels_last".

Input shape

A 4D tensor of shape (samples, channels, rows, cols) if data_format='channels_first', or (samples, rows, cols, channels) if data_format='channels_last'.

Output shape

The output shape is the same as the input shape.

SpatialDropout3D

This is the spatial 3D version of dropout. It performs the same function as Dropout; however, it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated, as is usually the case in early convolution layers, then regular dropout will not regularize the activations and will otherwise just result in a decrease of the effective learning rate. In this case, SpatialDropout3D also promotes independence between feature maps and should be used instead.

Arguments

  • rate: A float between 0 and 1, the fraction of the input units to drop.
  • data_format: One of 'channels_first' or 'channels_last'. In channels_first mode, the channels dimension is at index 1; in channels_last mode, it is at index 4. It defaults to the image_data_format value found in the Keras config file at ~/.keras/keras.json; if it has never been set there, it will be "channels_last".

Input shape

A 5D tensor of shape (samples, channels, dim1, dim2, dim3) if data_format='channels_first', or (samples, dim1, dim2, dim3, channels) if data_format='channels_last'.

Output shape

The output shape is the same as that of the input shape.






