Artificial Neural Network Tutorial


Artificial Neural Network Tutorial provides basic and advanced concepts of ANNs. Our Artificial Neural Network tutorial is developed for beginners as well as professionals.

The term "Artificial neural network" refers to a biologically inspired sub-field of artificial intelligence modeled after the brain. An artificial neural network is a computational network based on the biological neural networks that form the structure of the human brain. Just as the human brain has neurons interconnected with one another, an artificial neural network has neurons interconnected with one another in the various layers of the network. These neurons are known as nodes.

This Artificial Neural Network tutorial covers all the aspects related to artificial neural networks. In this tutorial, we will discuss ANNs, adaptive resonance theory, the Kohonen self-organizing map, building blocks, unsupervised learning, genetic algorithms, etc.

What is Artificial Neural Network?

The term "Artificial Neural Network" is derived from biological neural networks, which form the structure of the human brain. Just as the human brain has neurons interconnected with one another, artificial neural networks have neurons interconnected with one another in the various layers of the network. These neurons are known as nodes.

[Figure: Biological Neural Network]

The given figure illustrates the typical diagram of a biological neural network.

The typical Artificial Neural Network looks something like the given figure.

[Figure: Artificial Neural Network]

Dendrites from the biological neural network represent inputs in artificial neural networks, the cell nucleus represents nodes, synapses represent weights, and the axon represents output.

Relationship between Biological neural network and artificial neural network:

Biological Neural Network    Artificial Neural Network
Dendrites                    Inputs
Cell nucleus                 Nodes
Synapse                      Weights
Axon                         Output

An artificial neural network is an attempt, in the field of artificial intelligence, to mimic the network of neurons that makes up the human brain, so that computers can understand things and make decisions in a human-like manner. An artificial neural network is created by programming computers to behave simply like interconnected brain cells.

There are around 100 billion neurons in the human brain. Each neuron forms connections with somewhere between 1,000 and 100,000 other neurons. In the human brain, data is stored in a distributed manner, and we can extract more than one piece of this data from memory in parallel when necessary. We can say that the human brain is made up of incredibly powerful parallel processors.

We can understand the artificial neural network with the example of a digital logic gate that takes an input and gives an output. Consider an "OR" gate, which takes two inputs. If one or both inputs are "On," the output is "On." If both inputs are "Off," the output is "Off." Here, the output is a fixed function of the input. Our brain does not work this way: the relationship between outputs and inputs keeps changing, because the neurons in our brain are "learning." A minimal sketch of such a gate built from a single artificial neuron is shown below.
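
As a minimal illustration of the OR-gate example above, the sketch below hand-codes a single artificial neuron (a perceptron) whose weights and bias reproduce the OR truth table. The weight and bias values are illustrative assumptions chosen by hand, not learned parameters.

```python
# A single neuron with a step activation that reproduces the OR gate.
# The weights and bias below are illustrative values chosen by hand.

def step(z):
    """Binary (threshold) activation: fire if the net input is positive."""
    return 1 if z > 0 else 0

def or_neuron(x1, x2, w1=1.0, w2=1.0, bias=-0.5):
    """Weighted sum of the inputs plus a bias, passed through the step function."""
    return step(w1 * x1 + w2 * x2 + bias)

for a in (0, 1):
    for b in (0, 1):
        print(f"OR({a}, {b}) = {or_neuron(a, b)}")  # matches the OR truth table
```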

The architecture of an artificial neural network:

To understand the architecture of an artificial neural network, we first have to understand what a neural network consists of. A neural network is made up of a large number of artificial neurons, termed units, which are arranged in a sequence of layers. Let us look at the various types of layers available in an artificial neural network.

Artificial Neural Network primarily consists of three layers:

[Figure: Input, hidden, and output layers of an artificial neural network]

Input Layer:

As the name suggests, it accepts inputs in several different formats provided by the programmer.

Hidden Layer:

The hidden layer sits between the input and output layers. It performs all the calculations needed to find hidden features and patterns.

Output Layer:

The input goes through a series of transformations in the hidden layer, which finally produces the output that is conveyed by this layer.

The artificial neural network takes the inputs, computes their weighted sum, and adds a bias. This computation is represented in the form of a transfer function.


The weighted total is then passed as input to an activation function to produce the output. Activation functions decide whether a node should fire or not. Only the nodes that fire pass their values on to the output layer. There are distinct activation functions available, chosen according to the kind of task we are performing. A minimal sketch of a single node's computation is shown below.
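
To make the computation above concrete, here is a minimal sketch of a single node: it forms the weighted sum of its inputs, adds a bias, and passes the total through an activation function. The choice of a sigmoid and the example input, weight, and bias values are assumptions made purely for illustration.

```python
import math

def sigmoid(z):
    """A commonly used activation function; squashes z into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through the activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Example values are assumptions for illustration only.
print(neuron_output([0.5, 0.3], [0.4, 0.7], bias=0.1))
```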

Advantages of Artificial Neural Network (ANN)

Parallel processing capability:

Artificial neural networks have a parallel structure, so they can perform more than one task simultaneously.

Storing data on the entire network:

Unlike traditional programming, where data is stored in a database, in an ANN the data is stored across the entire network. The disappearance of a few pieces of data in one place does not prevent the network from working.

Capability to work with incomplete knowledge:

After training, an ANN may produce output even with incomplete data. The loss of performance here depends on the importance of the missing data.

Having a memory distribution:

For an ANN to be able to adapt, it is important to select the examples and to train the network on them according to the desired output. The success of the network is directly proportional to the chosen instances, and if the event cannot be shown to the network in all its aspects, the network can produce false output.

Having fault tolerance:

Corruption of one or more cells of the ANN does not prevent it from generating output, and this feature makes the network fault-tolerant.

Disadvantages of Artificial Neural Network:

Assurance of proper network structure:

There is no particular guideline for determining the structure of an artificial neural network. An appropriate network structure is arrived at through experience and trial and error.

Unrecognized behavior of the network:

This is the most significant issue with ANNs. When an ANN produces a solution, it gives no insight into why and how that solution was reached, which reduces trust in the network.

Hardware dependence:

Artificial neural networks require processors with parallel processing power, in accordance with their structure, which makes them dependent on suitable hardware.

Difficulty of showing the issue to the network:

ANNs can only work with numerical data. Problems must be converted into numerical values before being introduced to the ANN. The representation mechanism chosen here directly affects the performance of the network, and it depends on the user's abilities.

The duration of the network is unknown:

The network is trained until the error falls to a specific value, but this value does not guarantee optimum results.

Artificial neural networks, which entered the scientific world in the mid-20th century, are developing exponentially. Here we have reviewed the advantages of artificial neural networks and the issues encountered in the course of their use. It should not be overlooked that the disadvantages of ANNs, a flourishing branch of science, are being eliminated one by one, while their advantages increase day by day. This means that artificial neural networks will progressively become an irreplaceable part of our lives.

How do artificial neural networks work?

An artificial neural network can best be represented as a weighted directed graph, where the artificial neurons form the nodes and the connections between neuron outputs and neuron inputs can be viewed as directed edges with weights. The artificial neural network receives an input signal from an external source in the form of a pattern or image, represented as a vector. These inputs are then mathematically denoted by the notation x(n) for every n-th input.


Afterward, each input is multiplied by its corresponding weight (these weights are the details the artificial neural network uses to solve a specific problem). In general terms, these weights represent the strength of the interconnections between neurons inside the artificial neural network. All the weighted inputs are then summed inside the computing unit.

If the weighted sum is zero, a bias is added to make the output non-zero, or otherwise to scale up the system's response. The bias can be thought of as an extra input fixed at 1 with its own weight. The total of the weighted inputs can range from 0 to positive infinity. To keep the response within the limits of the desired value, a certain maximum value is benchmarked, and the total of the weighted inputs is passed through the activation function.

The activation function refers to the set of transfer functions used to achieve the desired output. There are different kinds of activation functions, but they are primarily either linear or non-linear. Some of the commonly used activation functions are the binary, linear, and tan hyperbolic sigmoidal activation functions. Let us take a look at each of them in detail:

Binary:

In a binary activation function, the output is either a 1 or a 0. To accomplish this, a threshold value is set up. If the net weighted input of the neuron exceeds the threshold, the activation function returns 1; otherwise, it returns 0.

Sigmoidal Hyperbolic:

The sigmoidal hyperbolic function is generally seen as an "S"-shaped curve. Here the tan hyperbolic function is used to approximate the output from the actual net input. The function is defined as:

F(x) = 1 / (1 + exp(-βx))

Where β is the steepness parameter.
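
The sketch below implements the two activation functions described above: the binary (threshold) function and the sigmoid with a steepness parameter β. The threshold value and the β value used here are assumptions for illustration.

```python
import math

def binary_activation(net_input, threshold=0.5):
    """Returns 1 when the net weighted input exceeds the threshold, otherwise 0."""
    return 1 if net_input > threshold else 0

def sigmoid(net_input, beta=1.0):
    """F(x) = 1 / (1 + exp(-beta * x)); beta is the steepness parameter."""
    return 1.0 / (1.0 + math.exp(-beta * net_input))

# Compare the two activations on a few illustrative net inputs.
for x in (-2.0, 0.0, 2.0):
    print(x, binary_activation(x), round(sigmoid(x, beta=1.5), 4))
```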

Types of Artificial Neural Network:

There are various types of artificial neural networks, which differ in how closely they mimic the neurons and network functions of the human brain, and which perform tasks accordingly. Most artificial neural networks share some similarities with their more complex biological counterpart and are very effective at their intended tasks, such as segmentation or classification.

Feedback ANN:

In this type of ANN, the output is fed back into the network to achieve the best-evolved results internally. According to the University of Massachusetts, Lowell Centre for Atmospheric Research, feedback networks feed information back into themselves and are well suited to solving optimization problems. Feedback ANNs are used for internal system error corrections.

Feed-Forward ANN:

A feed-forward network is a basic neural network consisting of an input layer, an output layer, and at least one hidden layer of neurons. By assessing its output against its input, the strength of the network can be observed from the group behavior of the associated neurons, and the output is decided. The primary advantage of this network is that it learns to evaluate and recognize input patterns. A rough sketch of a single feed-forward pass is given below.
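
As a rough sketch of the feed-forward idea rather than a definitive architecture, the code below pushes an input vector through one hidden layer and an output layer. The layer sizes, random weights, and choice of ReLU and sigmoid activations are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: 3 inputs -> 4 hidden units -> 2 outputs.
W_hidden = rng.normal(size=(3, 4))
b_hidden = np.zeros(4)
W_output = rng.normal(size=(4, 2))
b_output = np.zeros(2)

def relu(z):
    """Rectified linear unit, commonly used in hidden layers."""
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One feed-forward pass: input layer -> hidden layer -> output layer."""
    hidden = relu(x @ W_hidden + b_hidden)
    return sigmoid(hidden @ W_output + b_output)

print(forward(np.array([0.2, 0.5, -0.1])))
```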

 

Prerequisite

No specific expertise is needed as a prerequisite before starting this tutorial.

Audience

Our Artificial Neural Network Tutorial is developed for beginners as well as professionals, to help them understand the basic concept of ANNs.

Problems

We assure you that you will not find any problem in this Artificial Neural Network tutorial. But if there is any problem or mistake, please post the problem in the contact form so that we can further improve it.


Multiple Choice Questions on Artificial Neural Networks

1. What is the main purpose of the activation function in an artificial neural network?

  1. To calculate the weighted sum of the inputs
  2. To introduce non-linearity into the network
  3. To update the weights during backpropagation
  4. To re-scale the output back to its original range

Answer: b) To introduce non-linearity into the network

Explanation: Activation functions are applied to introduce non-linearity into the network so that it can learn various kinds of patterns. They determine the output level of a neuron from the weighted sum of the inputs passing through it.


2. Which of the following is not an example of an artificial neural network?

  1. Feedforward neural network
  2. Recurrent neural network
  3. Convolutional neural network
  4. Logistic regression

Answer: d) Logistic regression

Explanation: Logistic regression is a classification model, not a neural network.


3. What is the function of the backpropagation algorithm in training neural networks?

  1. To set the initial weights of the network
  2. To compute the output value of the network
  3. To modify the weights of the network in order to reduce the error
  4. To scale down the input data

Answer: c) To modify the weights of the network in order to reduce the error

Explanation: Backpropagation computes the error of the output produced by the network and then propagates it backward through the connections to adjust the weights accordingly. A minimal sketch of one such weight update is given below.
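
As a minimal sketch of the idea described in the explanation above, the code trains a single sigmoid neuron with a squared-error loss: the gradient of the error with respect to each weight is computed via the chain rule, and the weights are nudged in the opposite direction. All input, weight, target, and learning-rate values are assumed purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed toy values: two inputs, a target output, and a learning rate.
x = [0.5, 0.8]
w = [0.2, -0.4]
bias, target, lr = 0.1, 1.0, 0.5

for step in range(3):
    z = sum(xi * wi for xi, wi in zip(x, w)) + bias
    y = sigmoid(z)
    error = y - target
    # Chain rule for squared error: dE/dw_i = (y - target) * y * (1 - y) * x_i
    grad_z = error * y * (1 - y)
    w = [wi - lr * grad_z * xi for wi, xi in zip(w, x)]
    bias -= lr * grad_z
    print(f"step {step}: output={y:.4f}, squared error={error**2:.4f}")
```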


4. Which layer of a neural network receives the input data?

  1. The hidden layer
  2. The output layer
  3. The input layer
  4. A standalone layer that does not connect to other layers and only 'activates' the signal

Answer: c) The input layer

Explanation: The input layer is the first layer of a neural network and is responsible for receiving the input data to be analysed.


5. What is the purpose of the bias term in a neuron?

  1. To make the relationship between inputs and weights non-linear
  2. To shift the activation function
  3. To standardize the input data
  4. To alter the weights

Answer: b) To shift the activation function

Explanation: The bias term shifts the activation curve up or down, giving the network the flexibility to learn additional patterns.


6. Which activation function is typically used in the output layer for a binary classification problem?

  1. ReLU
  2. Sigmoid
  3. Tanh
  4. Linear

Answer: b) Sigmoid

Explanation: The sigmoid activation function is used in the output layer for binary classification problems because its output ranges between 0 and 1 and can be interpreted as a probability.


7. What is the primary distinction between supervised and unsupervised learning in neural networks?

  1. Supervised learning trains the algorithm on labelled data sets, while unsupervised learning does not use labelled data.
  2. Supervised learning is applied when the result to be predicted is categorical, while unsupervised learning is applied when the result is continuous.
  3. Supervised learning is more sophisticated than unsupervised learning.
  4. Supervised learning takes less time than unsupervised learning.

Answer: a) Supervised learning works on labelled data sets, while unsupervised learning does not.

Explanation: In supervised learning, both the input values and the target output values are provided to the network. In unsupervised learning, the network looks for relationships within the data without any labels.


8. Which neural network architecture is most commonly used for image recognition?

  1. RNN
  2. CNN
  3. Perceptron
  4. Autoencoder

Answer: b) Convolutional neural network

Explanation: CNNs are best suited for spatial data such as images, for example when classifying an image or detecting an object in an image.


9. What exactly is the vanishing gradient problem in deep neural networks?

  1. If the gradients are too big
  2. If the gradients are too small
  3. If weights are too big
  4. If the weights are too small

Answer: b) When the gradients become too small

Explanation: During backpropagation, the gradients can become extremely small as they pass back through many layers, so the earlier layers receive almost no weight updates and learning stalls.


10. Which technique helps to avoid overfitting in a neural network?

  1. Regularization
  2. Dropout
  3. Early stopping
  4. All of the above

Answer: d) All of the above

Explanation: Several methods are used to control overfitting, among them regularization, dropout, and early stopping. A rough sketch combining the three is given below.
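
A rough Keras sketch combining the three techniques mentioned in the explanation above: L2 regularization, dropout, and early stopping. The layer sizes, regularization rate, dropout rate, and patience value are assumptions for illustration, and the training data is left out, so the fit call is shown only as a commented example.

```python
import tensorflow as tf

# Illustrative model: L2 regularization and dropout in the hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(0.01)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Early stopping halts training when the validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                              restore_best_weights=True)

# model.fit(x_train, y_train, validation_split=0.2, epochs=50,
#           callbacks=[early_stop])  # x_train / y_train are assumed data
```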


11. What is the purpose of the hidden layers in a neural network?

  1. To receive the input data
  2. To produce the output data
  3. To learn complex non-linear patterns and features
  4. To generate new weights

Answer: c) To learn complex patterns and features

Explanation: A neural network has one or more hidden layers, which are responsible for extracting complex, high-level features from the input data for use in predictions or classifications.


12. Which activation function is most often applied in the hidden layers of an artificial neural network?

  1. ReLU
  2. Sigmoid
  3. Tanh
  4. Linear

Answer: a) ReLU

Explanation: The rectified linear unit (ReLU) is widely used as the activation function in the hidden layers of neural networks because of its computational efficiency and its reduced susceptibility to the vanishing gradient problem.


13. What is the principal drawback of neural networks?

  1. Neural networks are computationally expensive
  2. Neural networks are not easy to interpret
  3. Large amounts of data are required for neural networks
  4. All the above

Answer: d) All the above.

Explanation: Training deep neural networks can be computationally demanding. They can also be hard to interpret, because the inner workings of the network are difficult to understand. In addition, neural networks are data-intensive and may require large amounts of data for accurate training.



 
 
