Validation of Neural Network for Image Recognition
In the training section, we trained our model on the MNIST dataset, and it reached a reasonable loss and accuracy. The true test of its performance, however, is whether it can take what it has learned and generalize to data it has never seen. This will be done with the help of the following steps:
We will create our validation set with the help of the same MNIST data from which we created the training dataset in the training section, but this time we set train equal to False:
Just as we declared a training loader in the training section, we will now declare a validation loader in the validation section. The validation loader is created in the same way as the training loader, except that this time we pass the validation dataset rather than the training dataset, and we set shuffle equal to False because we will not be training on the validation data. There is no need to shuffle it, since it is used only for evaluation.
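A sketch of the validation loader; a dummy in-memory dataset stands in for the real MNIST validation set here so the snippet runs on its own (the name validation_loader and the batch size of 100 are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the MNIST validation dataset: 100 fake 1x28x28 images
validation_dataset = TensorDataset(torch.randn(100, 1, 28, 28),
                                   torch.randint(0, 10, (100,)))

# shuffle=False: the validation data is only evaluated, never trained on,
# so its order does not matter
validation_loader = DataLoader(validation_dataset, batch_size=100, shuffle=False)
```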
Our next step is to analyze the validation loss and accuracy at every epoch. For this purpose, we create two more lists: one for the validation running loss and one for the validation running corrects.
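Assuming history-list names of the following form (one value is appended per epoch), this might look like:

```python
# Per-epoch history of validation metrics (hypothetical names)
val_running_loss_history = []
val_running_corrects_history = []
```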
In the next step, we will validate the model. The model is validated within the same epoch: after we finish iterating through the entire training set to train the model, we iterate through the validation set to test it.
We will measure two things: the performance of our model, i.e., how many correct classifications it makes on the validation set, and the validation loss, which lets us check for overfitting. We initialize the running loss and running corrects for validation as:
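A sketch of the per-epoch accumulators (the names are illustrative); both are reset at the start of each epoch:

```python
# Accumulators for the current epoch's validation loss and correct predictions
val_running_loss = 0.0
val_running_corrects = 0.0
```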
We can now loop through our validation data. So, after the else statement, we define a loop over the inputs and labels as:
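The loop skeleton might look as follows; a dummy loader stands in for the real validation_loader so the snippet runs on its own (the variable names val_inputs and val_labels are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy validation_loader stand-in (100 fake 1x28x28 images in one batch)
validation_loader = DataLoader(
    TensorDataset(torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))),
    batch_size=100, shuffle=False)

for val_inputs, val_labels in validation_loader:
    # each iteration yields one batch of images and their labels
    pass
```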
As we iterate through our batches of images, we must flatten them, reshaping them with the help of the view method.
Note: The shape of each image tensor is (1, 28, 28), which means a total of 784 pixels.
According to the structure of the neural network, our input values are going to be multiplied by the weight matrix connecting the input layer to the first hidden layer. To conduct this multiplication, we must make our images one-dimensional: instead of each image being 28 rows by 28 columns, we must flatten it into a single row of 784 pixels.
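The flattening step can be sketched as follows, using a random batch in place of real MNIST images:

```python
import torch

# One batch of 100 MNIST-sized images: (batch, channels, height, width)
val_inputs = torch.randn(100, 1, 28, 28)

# view keeps the batch dimension and flattens each 1x28x28 image into a
# single row of 784 pixels; -1 lets PyTorch infer the second dimension
flattened = val_inputs.view(val_inputs.shape[0], -1)
print(flattened.shape)  # torch.Size([100, 784])
```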
Now, with the help of these inputs, we get the outputs as:
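A sketch of the forward pass; the layer sizes here (784 to 125 to 65 to 10) are a hypothetical stand-in for the trained classifier from the training section, not necessarily the author's exact architecture:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained model: 784 -> 125 -> 65 -> 10
model = nn.Sequential(nn.Linear(784, 125), nn.ReLU(),
                      nn.Linear(125, 65), nn.ReLU(),
                      nn.Linear(65, 10))

val_inputs = torch.randn(100, 1, 28, 28).view(100, -1)  # flattened batch
val_outputs = model(val_inputs)  # one score per class for each image
```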
With the help of the outputs, we will calculate the total categorical cross-entropy loss, as the outputs are ultimately compared with the actual labels.
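A sketch of this step with dummy scores and labels standing in for the real batch; nn.CrossEntropyLoss is PyTorch's categorical cross-entropy over raw class scores, and torch.max recovers the predicted class per row:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()  # categorical cross-entropy

val_outputs = torch.randn(100, 10)         # dummy class scores
val_labels = torch.randint(0, 10, (100,))  # dummy ground-truth labels

val_loss = criterion(val_outputs, val_labels)
# the predicted class is the index of the highest score in each row
_, val_preds = torch.max(val_outputs, 1)
```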
We are not training our neural network, so there is no need to call zero_grad(), backward(), or any of that, and there is no need to compute derivatives anymore. To save memory during this operation, we call torch's no_grad() method before the for loop as:
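A small self-contained demonstration of the effect: inside the no_grad() block, no computation graph is recorded, so the resulting tensors do not require gradients:

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2  # no computation graph is recorded here

z = x * 2      # outside the block, autograd tracks the operation again

print(y.requires_grad, z.requires_grad)  # False True
```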
It temporarily disables gradient tracking, so every tensor computed inside the block has its requires_grad flag set to False.
Now, we will calculate the validation loss and accuracy in the same way as we calculated the training loss and training accuracy:
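The accumulation step can be sketched as follows, with hypothetical per-batch values standing in for the real loss and predictions:

```python
import torch

val_running_loss = 0.0
val_running_corrects = 0.0

# Hypothetical results for one batch
val_loss = torch.tensor(0.25)
val_preds = torch.tensor([3, 1, 4, 1])
val_labels = torch.tensor([3, 1, 4, 0])

# Add this batch's loss and its number of correct predictions to the totals
val_running_loss += val_loss.item()
val_running_corrects += torch.sum(val_preds == val_labels).item()
```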
Now, we will calculate the validation epoch loss. This is done the same way as the training epoch loss: we divide the total running loss by the number of batches in the loader (each batch's loss is already averaged over that batch). It is written as:
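A sketch with hypothetical totals; the loss is averaged over batches, while the accuracy divides the correct count by the number of validation images (in practice these would be len(validation_loader) and len(validation_loader.dataset)):

```python
val_running_loss = 12.5      # sum of per-batch losses (hypothetical)
val_running_corrects = 9500  # total correct predictions (hypothetical)
num_batches = 100            # len(validation_loader)
num_images = 10000           # len(validation_loader.dataset)

# Mean loss per batch and overall accuracy for this epoch
val_epoch_loss = val_running_loss / num_batches
val_epoch_acc = val_running_corrects / num_images
```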
We will print the validation loss and validation accuracy as:
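For example, with hypothetical epoch values the print statement might look like:

```python
val_epoch_loss = 0.125  # hypothetical values from the epoch above
val_epoch_acc = 0.95

msg = 'validation loss: {:.4f}, validation acc: {:.4f}'.format(
    val_epoch_loss, val_epoch_acc)
print(msg)  # validation loss: 0.1250, validation acc: 0.9500
```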
Now, for better understanding, we will plot these values for visualization purposes. We plot them as:
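A sketch of the plot, assuming matplotlib and using hypothetical per-epoch history values in place of the ones gathered during training and validation:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical per-epoch histories (illustrative values)
running_loss_history = [0.9, 0.5, 0.3, 0.2]
val_running_loss_history = [0.8, 0.5, 0.4, 0.35]

# Training loss falling while validation loss rises would indicate overfitting
plt.plot(running_loss_history, label='training loss')
plt.plot(val_running_loss_history, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.savefig('loss_curves.png')  # or plt.show() in an interactive session
```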