
Linear Regression

Linear regression is a way to find the linear relationship between a dependent variable and an independent variable by minimizing the distance between the predicted and the observed values.

Linear regression is a supervised machine learning approach. It is used to predict a continuous, ordered output value. In this section, we will understand how to build a model with which a user can predict the relationship between the dependent and the independent variable.

In simple terms, the relationship between the two variables, i.e., the independent and the dependent one, is assumed to be linear. Suppose Y is the dependent and X the independent variable; then the linear regression relationship between these two variables is

Y = aX + b

  • a is the slope.
  • b is the y-intercept.

[Figure: the regression line in its initial state (before training) and in its final state (after fitting the data)]

There are three basic concepts which are essential to understand in order to create or train a basic linear model.

1. Model class

It is tedious to code everything and write every function from scratch whenever it is required, and that is not our goal.

It is always better to use prewritten numeric optimization libraries rather than writing all the code and functions ourselves; building on top of such libraries to get things done also adds business value. For this purpose, we use the implementation in the nn package of PyTorch. First, we have to create a single layer.

Using a linear layer

Each linear module computes the output from the input, and it holds internal tensors for its weight and bias.
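As a minimal sketch, a single linear layer can be created with torch.nn.Linear; the shapes below (one input feature, one output feature) are chosen for illustration:

```python
import torch

# A single linear layer: one input feature, one output feature
linear = torch.nn.Linear(in_features=1, out_features=1)

x = torch.Tensor([[3.0]])
y = linear(x)  # computes y = x * weight + bias

# the module holds its weight and bias as internal tensors
print(linear.weight, linear.bias)
```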

There are several other standard modules. We will use a model class format, which has two main methods:

  1. __init__: used for defining a linear module.
  2. forward: with the help of the forward method, predictions are made; on that basis we will train our linear regression model.
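A minimal sketch of this model class format (the class name and layer sizes are illustrative assumptions):

```python
import torch

class LinearRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # __init__ defines the linear module
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        # forward produces predictions from the input
        return self.linear(x)

model = LinearRegression()
prediction = model(torch.Tensor([[2.0]]))
```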

2. Optimizer

The optimizer is one of the important concepts in PyTorch. It is used to optimize our weights so that the model fits the dataset. Optimization algorithms such as gradient descent use the gradients computed by backpropagation to adjust the weight values and fit the model best.

Various optimization algorithms are implemented in the torch.optim package. To use torch.optim, you have to construct an optimizer object, which will hold the current state and update the parameters based on the computed gradients. The object is created as follows:
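For example, with stochastic gradient descent (the choice of SGD and the learning rate here are illustrative):

```python
import torch

model = torch.nn.Linear(1, 1)
# the optimizer object holds state and updates model.parameters()
# based on the computed gradients
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```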

A step() method, which updates the parameters, is implemented by every optimizer. There are two ways to use it:

1) Optimizer.step()

This is a very simple method and is supported by most optimizers. After computing the gradients with the backward() method, we can call the optimizer.step() function.
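A minimal sketch of one training step (the data and learning rate are illustrative):

```python
import torch

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.Tensor([[1.0]])
y = torch.Tensor([[2.0]])

optimizer.zero_grad()            # clear gradients from the previous step
loss = criterion(model(x), y)
loss.backward()                  # compute gradients
optimizer.step()                 # update the parameters
```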


2) Optimizer.step(closure)

Some optimization algorithms, such as LBFGS and Conjugate Gradient, need to re-evaluate the function multiple times, so we have to pass a closure that allows them to recompute the model.
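A sketch of the closure pattern with LBFGS (the data and learning rate are assumptions for illustration):

```python
import torch

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss()
# LBFGS may re-evaluate the loss several times, so step() takes a closure
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)

x = torch.Tensor([[1.0], [2.0]])
y = torch.Tensor([[2.0], [4.0]])

def closure():
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)
```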



3. Criterion

The criterion is our loss function, which is used to measure the loss. This function is taken from the torch.nn module.
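For a regression model, the mean squared error loss from torch.nn is a typical choice; the values below are illustrative:

```python
import torch

criterion = torch.nn.MSELoss()  # mean squared error loss from torch.nn

prediction = torch.Tensor([[3.0]])
target = torch.Tensor([[5.0]])
loss = criterion(prediction, target)  # (3 - 5)^2 = 4
```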


Required functions and objects

  1. import torch
  2. from torch.autograd import Variable

We also need to define some data and assign it to variables, in the following way:
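A sketch of such data; the y = 2x relationship is an assumption chosen to match the sample output below (an input of 4 predicting roughly 8):

```python
import torch
from torch.autograd import Variable

# sample data following y = 2x
x_data = Variable(torch.Tensor([[1.0], [2.0], [3.0]]))
y_data = Variable(torch.Tensor([[2.0], [4.0], [6.0]]))
```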

Following is the code which gives us a prediction after training a complete regression model. It is just to understand how we implement the code and which functions we use to train a regression model.
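The original snippet is not reproduced here; the following is a minimal sketch that puts the pieces above together. The y = 2x data, epoch count, and learning rate are assumptions chosen so that the prediction for an input of 4 approaches 8, consistent with the sample output shown below:

```python
import torch
from torch.autograd import Variable

torch.manual_seed(1)  # for reproducible results

# training data following y = 2x (assumed)
x_data = Variable(torch.Tensor([[1.0], [2.0], [3.0]]))
y_data = Variable(torch.Tensor([[2.0], [4.0], [6.0]]))

class LinearRegression(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearRegression()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(500):
    y_pred = model(x_data)            # forward pass: make predictions
    loss = criterion(y_pred, y_data)  # measure the loss
    optimizer.zero_grad()
    loss.backward()                   # compute gradients
    optimizer.step()                  # update weight and bias

test_value = Variable(torch.Tensor([[4.0]]))
print("predict (after training)", 4, model(test_value).item())
```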


predict (after training) 4 tensor(7.9788)


The following concepts are used to train a complete regression model:

  1. Making Predictions
  2. Linear Class
  3. Custom Modules
  4. Creating Dataset
  5. Loss Function
  6. Gradient Descent
  7. Mean squared error
  8. Training

All the above points are essential to understand how a regression model will be trained.
