Implementation of Linear Regression using Python

Linear regression is a statistical technique for describing the relationship between a dependent variable and one or more independent variables. This tutorial covers the basic concepts of linear regression and its implementation in Python. To build up the basics of the concept, we begin with the simplest form of the technique, simple linear regression.

Simple Linear Regression

Simple linear regression (SLR) predicts a response using a single feature, under the assumption that the two variables are linearly related. The goal is to find a linear equation that predicts the response value (y) as accurately as possible from the feature, or independent variable (x).

Consider a dataset in which we have one response y for each value of a feature x. For n observations (in the example below, n = 10), we define the feature vector x = [x_1, x_2, x_3, ..., x_n] and the response vector y = [y_1, y_2, y_3, ..., y_n].

[Figure: scatter plot of the dataset]

The next step is to find the line that best fits this scatter plot, so that we can predict the response for any new feature value (i.e., a value of x that is not in the dataset). This line is called the regression line. The equation of the regression line can be written as:

h(x_i) = b_0 + b_1 * x_i

Here, h(x_i) is the predicted response for the i-th observation, b_0 is the y-intercept, and b_1 is the slope of the regression line.
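As a quick illustration of finding such a best-fit line, here is a minimal sketch using NumPy's polyfit. The dataset is made up for illustration, since the original data table is not reproduced here:

```python
import numpy as np

# Illustrative dataset (assumed; the tutorial's original data table is not shown)
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
y = np.array([2, 4, 5, 4, 5, 7, 8, 9, 9, 11])

# Fit a degree-1 polynomial (a straight line) by least squares;
# np.polyfit returns coefficients from the highest degree down
slope, intercept = np.polyfit(x, y, 1)

print(f"Regression line: h(x) = {intercept:.3f} + {slope:.3f} * x")

# Predict the response for a new feature value not in the dataset
x_new = 12
print("Predicted response at x = 12:", intercept + slope * x_new)
```

Once the line is fitted, predicting a new response is just a matter of evaluating the linear equation, as in the last two lines.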
In order to build our model, we must "learn" or estimate the values of the regression coefficients b_0 and b_1. Once we have estimated these coefficients, we can use the model to forecast responses. In this tutorial, we employ the principle of least squares.

Let:

y_i = b_0 + b_1 * x_i + e_i = h(x_i) + e_i  =>  e_i = y_i - h(x_i)

Here, e_i is the residual error of the i-th observation. Our goal is to minimize the total residual error. We define the cost function, or squared error, J as:

J(b_0, b_1) = (1/2n) * sum_{i=1..n} e_i^2

and our task is to find the values of b_0 and b_1 for which J(b_0, b_1) is minimum. Without going into the mathematical details, the result is:

b_1 = SS_xy / SS_xx
b_0 = y_mean - b_1 * x_mean

where SS_xy is the sum of cross-deviations of y and x:

SS_xy = sum_{i=1..n} (x_i - x_mean)(y_i - y_mean)

and SS_xx is the sum of squared deviations of x:

SS_xx = sum_{i=1..n} (x_i - x_mean)^2

Code:

Output:

Estimated coefficients are:
b_0 = 0.4606060606060609
b_1 = 1.1696969696969697

Multiple Linear Regression

Multiple linear regression models the relationship between two or more features and a response by fitting a linear equation to the observed data. It is simply an extension of simple linear regression. Consider a dataset with p features (independent variables) and one response (dependent variable), consisting of n rows/observations. We define:

X (feature matrix): a matrix of size n x p, where x_ij denotes the value of the j-th feature for the i-th observation.

y (response vector): a vector of size n, where y_i denotes the value of the response for the i-th observation.

The regression line for p features is represented as:

h(x_i) = b_0 + b_1 * x_i1 + b_2 * x_i2 + ... + b_p * x_ip

where h(x_i) is the predicted response for the i-th observation and b_0, b_1, b_2, ..., b_p are the regression coefficients. We can also write:

y_i = h(x_i) + e_i

where e_i is the residual error of the i-th observation.
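The closed-form simple-regression estimates derived above (b_1 = SS_xy / SS_xx, b_0 = y_mean - b_1 * x_mean) can be implemented in a few lines of NumPy. The dataset below is illustrative, since the original code listing and its data are not reproduced here, so the printed coefficients will differ from the output quoted in the simple-regression section:

```python
import numpy as np

def estimate_coefficients(x, y):
    """Return (b_0, b_1) for the least-squares regression line y = b_0 + b_1 * x."""
    n = x.size
    x_mean, y_mean = np.mean(x), np.mean(y)

    # Sum of cross-deviations of y and x, and sum of squared deviations of x
    ss_xy = np.sum(x * y) - n * x_mean * y_mean
    ss_xx = np.sum(x * x) - n * x_mean ** 2

    b_1 = ss_xy / ss_xx           # slope
    b_0 = y_mean - b_1 * x_mean   # intercept
    return b_0, b_1

# Illustrative dataset (assumed; not the tutorial's original data)
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])

b_0, b_1 = estimate_coefficients(x, y)
print(f"Estimated coefficients: b_0 = {b_0}, b_1 = {b_1}")
```

Note that the expressions for ss_xy and ss_xx use the algebraically equivalent "computational" forms of the deviation sums, which avoid an explicit loop over centered values.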
We can generalize our linear model a little further by augmenting the feature matrix X with a column of ones, so that the intercept b_0 is absorbed into the coefficient vector. The linear model can then be expressed in matrix terms as:

y = Xb + e

We now determine an estimate of b, call it b', using the least squares method: b' is chosen so that the total residual error is minimized. The result is:

b' = (X'X)^(-1) X'y

where X' denotes the transpose of the matrix X and (X'X)^(-1) denotes the inverse of X'X. Using the least-squares estimate b', the multiple linear regression model predicts:

y' = Xb'

where y' is the estimated response vector.

Code:

Output:

Regression coefficients: [8.95714048e-02 6.73132853e-02 5.04649248e-02 2.18579583e+00 1.72053975e+01 3.63606995e+00 2.05579939e-03 1.36602886e+00 2.89576718e-01 1.22700072e-02 8.34881849e-01 9.40360790e-03 5.04008320e-01]

Variance score: 0.7209056672661751

In the above example, we measure accuracy using the explained variance score, defined as:

explained_variance_score = 1 - Var{y - y'} / Var{y}

where y' is the estimated target output, y is the corresponding (correct) target output, and Var is the variance, i.e., the square of the standard deviation. The best possible score is 1.0; lower values are worse.

Assumptions

The linear regression model rests on the following main assumptions about the dataset to which it is applied:

Linearity: the relationship between the features and the response is linear.
Little or no multicollinearity: the features are not highly correlated with one another.
Homoscedasticity: the residuals have roughly constant variance across all values of the features.
Independence of errors: the residual errors are independent of one another (no autocorrelation).
Normality of errors: the residuals are approximately normally distributed.
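For the multiple-regression case, here is a minimal sketch using scikit-learn on a synthetic dataset. The data here is assumed for illustration, since the original listing (which evidently used a real dataset with 13 features) is not reproduced; the sketch also verifies the normal-equation formula b' = (X'X)^(-1) X'y against scikit-learn's fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import explained_variance_score
from sklearn.model_selection import train_test_split

# Synthetic dataset (assumed): 200 observations, 3 features, linear signal plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y = 4.0 + X @ true_beta + rng.normal(scale=0.1, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit y = b_0 + b_1*x_1 + ... + b_p*x_p by least squares
model = LinearRegression()
model.fit(X_train, y_train)
print("Regression coefficients:", model.coef_)
print("Intercept:", model.intercept_)

# Explained variance score: 1 - Var{y - y'} / Var{y}; the best value is 1.0
y_pred = model.predict(X_test)
print("Variance score:", explained_variance_score(y_test, y_pred))

# The same estimate via the normal equation b' = (X'X)^(-1) X'y,
# with a column of ones prepended to X so the intercept is included in b'
Xa = np.hstack([np.ones((X_train.shape[0], 1)), X_train])
b = np.linalg.inv(Xa.T @ Xa) @ Xa.T @ y_train
print("Normal-equation estimate:", b)
```

On this synthetic data the two estimates agree to numerical precision; in practice, library implementations prefer numerically stabler solvers (e.g., QR or SVD factorizations) over forming the inverse explicitly.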
To close the tutorial, let us look at some applications of linear regression.

Applications

Linear regression is applied in fields such as:

Trend analysis and forecasting: estimating how a quantity changes over time.
Economics and finance: modelling relationships such as consumer spending versus income, or asset returns versus market factors.
Machine learning: serving as a simple, interpretable baseline model for regression tasks.
