Linear regression is a way to find the linear relationship between a dependent and an independent variable by minimizing the distance between the predicted and the observed values.
Linear regression is a supervised machine learning approach. It is used to predict a continuous output value, not a discrete category. In this section, we will understand how to build a model with which a user can predict the relationship between a dependent and an independent variable.
In simple terms, the relationship between the two variables, the independent variable X and the dependent variable Y, is linear. The linear regression relationship between them is Y = AX + b, where A is the weight (the slope of the line) and b is the bias (the intercept).
There are three basic concepts that are essential to understand in order to create or learn a basic linear model.
1. Model class
It is tempting to code everything ourselves and write every function from scratch, but that is not our goal. It is better to build on top of prewritten numeric optimization libraries to get things done, which also adds business value. For this purpose, we use the implementation in the nn package of PyTorch. For this, we first have to create a single linear layer.
Using the Linear layer
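As a minimal sketch, a single linear layer with one input feature and one output feature can be created with `torch.nn.Linear`:

```python
import torch

# A single linear layer mapping one input feature to one output feature
linear = torch.nn.Linear(in_features=1, out_features=1)

# The layer holds its weight and bias as internal tensors
print(linear.weight.shape)  # torch.Size([1, 1])
print(linear.bias.shape)    # torch.Size([1])
```

The weight tensor has shape (out_features, in_features), and the bias tensor has shape (out_features,).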
Each linear module computes its output from the input, and it holds internal tensors for its weight and bias.
There are several other standard modules. We will use a model class format with two main methods: __init__, which initializes the layers, and forward, which computes the output from the input.
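A minimal sketch of such a model class, with __init__ building the layer and forward computing the output, might look like this (the class name is illustrative):

```python
import torch

class LinearRegressionModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # One input feature, one output feature
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        # Compute the prediction from the input
        return self.linear(x)

model = LinearRegressionModel()
```

Subclassing torch.nn.Module lets PyTorch track the layer's parameters automatically, so they can later be handed to an optimizer via model.parameters().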
2. Optimizer
The optimizer is one of the important concepts in PyTorch. It is used to adjust the weights so that the model fits the dataset. Optimization algorithms such as gradient descent use the gradients computed by backpropagation to update the weight values and fit the model best.
Various optimization algorithms are implemented by the torch.optim package. To use torch.optim, you have to construct an optimizer object, which will hold the current state and update the parameters based on the computed gradients. The object is created as follows:
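As a sketch, constructing a stochastic gradient descent optimizer over a model's parameters looks like this (the learning rate of 0.01 is an illustrative choice):

```python
import torch

# A stand-in model; any torch.nn.Module works the same way
model = torch.nn.Linear(1, 1)

# Construct an SGD optimizer over the model's parameters;
# lr is the learning rate used for each update step
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```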
A step() method, which updates the parameters, is implemented by every optimizer. There are two ways to use it.
This is a very simple method, and it is supported by most optimizers. After computing the gradients with the backward() method, we can call the optimizer.step() function.
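A minimal sketch of this pattern inside one training step (the data values here are illustrative):

```python
import torch

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Illustrative data following y = 2x
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

optimizer.zero_grad()          # clear gradients from the previous step
loss = criterion(model(x), y)  # forward pass and loss
loss.backward()                # compute gradients with autograd
optimizer.step()               # update weight and bias in place
```

Calling optimizer.zero_grad() first matters because PyTorch accumulates gradients across backward() calls by default.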
Some optimization algorithms, such as LBFGS and Conjugate Gradient, need to re-evaluate the function many times, so we have to pass a closure that allows them to recompute the model.
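A sketch of the closure pattern with LBFGS (data values are illustrative): the closure clears the gradients, recomputes the loss, calls backward(), and returns the loss, so the optimizer can invoke it as many times as it needs.

```python
import torch

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters())

# Illustrative data following y = 2x
x = torch.tensor([[1.0], [2.0], [3.0]])
y = torch.tensor([[2.0], [4.0], [6.0]])

def closure():
    # LBFGS may call this several times per step to re-evaluate the loss
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)
```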
3. Criterion
The criterion is our loss function, which is used to measure the loss. This function comes from the torch.nn module.
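For linear regression, the usual criterion is mean squared error; a minimal sketch with illustrative values:

```python
import torch

# Mean squared error is the usual loss for linear regression
criterion = torch.nn.MSELoss()

pred = torch.tensor([2.5])
target = torch.tensor([3.0])
loss = criterion(pred, target)  # (2.5 - 3.0)**2 = 0.25
```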
Functions and objects that are required
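The original import listing is not shown here; at a minimum, a sketch would import torch, whose nn and optim subpackages provide the layers, loss functions, and optimizers used below:

```python
import torch               # core tensor library
import torch.nn as nn      # layers and loss functions
import torch.optim as optim  # optimization algorithms
```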
And we need to define some data and assign it to variables in the following way:
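The exact dataset is not shown in the original; a sketch consistent with the printed prediction below (input 4 giving roughly 8) is a small set of pairs following y = 2x:

```python
import torch

# Training pairs following y = 2x; the model should learn this mapping.
# Each row is one sample with a single feature.
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])
```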
Following is the code that gives us the prediction from training a complete regression model. It is just to understand how we implement the code and which functions we use to train a regression model.
epoch 0, loss 1.7771836519241333
epoch 1, loss 1.0423388481140137
epoch 2, loss 0.7115973830223083
epoch 3, loss 0.5608030557632446
...
epoch 499, loss 0.0003389564517419785
predict (after training) 4 tensor(7.9788)
The concepts used to train a complete regression model are the model class, the optimizer, and the criterion, together with the training loop that combines them. All of these points are essential to understanding how a regression model is trained.