58. PyTorch Implementation of Linear Regression#

58.1. Introduction#

The previous experiments introduced PyTorch in detail, so you should now be familiar with its tensor types, common operations, and the process of building neural networks. In this challenge, you are required to use PyTorch to implement the well-known linear regression. Although linear regression is simple, the goal of the challenge is to become comfortable with PyTorch's workflow.

58.2. Key Points#

  • Principles and Usage of PyTorch

  • Implementing Linear Regression with the nn.Module Class

Linear regression is an old acquaintance by now; it was covered in depth at the beginning of the course. Summarized in one sentence, linear regression models the relationship between input and output data with a linear function.
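Concretely, in the single-feature case used in this challenge, that means fitting a function of the form

y = w * x + b

where the weight w and bias b are the parameters to be learned. The sample data below is generated from exactly this kind of relationship (y = 2x plus uniform noise).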

First, we generate the sample data required for this challenge. Here, we use the APIs provided by PyTorch to operate.

import torch
from matplotlib import pyplot as plt

%matplotlib inline

torch.manual_seed(10)  # random number seed for reproducibility
x = torch.linspace(1, 10, 50)  # 50 evenly spaced points between 1 and 10
y = 2 * x + 3 * torch.rand(50)  # linear signal plus uniform noise

plt.style.use("ggplot")  # use the ggplot plotting style
plt.scatter(x, y)
[Output: scatter plot of the generated sample data]

58.3. Implementing the Linear Regression Model#

As mentioned in the previous experiment, the torch.nn.Module class is the base class for all neural networks. It can represent a single layer of a network or a network composed of multiple layers. Next, you will implement the LinearRegressionModel() linear regression class required for the challenge by inheriting from torch.nn.Module.

Exercise 58.1

Challenge: Implement the LinearRegressionModel() linear regression class required for the challenge by inheriting from the torch.nn.Module class.

Requirement: Only use the classes and methods provided by PyTorch.

Hint: You may use the nn.Linear() linear transformation layer.

import torch.nn as nn

### Code start ### (> 5 lines of code)

### Code end ###

Run the tests

LinearRegressionModel()

Expected output

LinearRegressionModel(
  (linear): Linear(in_features=1, out_features=1, bias=True)
)
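For reference, the sketch below shows one possible way to complete this exercise. It assumes a single input feature and a single output, with the layer attribute named linear so that printing the module matches the expected output above; your own implementation may differ.

import torch.nn as nn


class LinearRegressionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One linear transformation layer computing y = w * x + b
        self.linear = nn.Linear(in_features=1, out_features=1)

    def forward(self, x):
        # Forward pass: apply the linear layer to the input
        return self.linear(x)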

In this challenge, the linear regression parameters will not be solved with the least squares method; instead, they will be found iteratively. Therefore, a loss function and an optimizer need to be defined first. The challenge uses the mean squared error (MSE) as the loss function and optimizes it with the SGD algorithm.

Exercise 58.2

Challenge: Define the MSE loss function and the Stochastic Gradient Descent optimizer.

Requirement: Set the learning rate of the Stochastic Gradient Descent optimizer to 0.01, and use the default parameters for the rest.

Hint: You may use the loss function and optimizer mentioned in the experiment.

model = LinearRegressionModel()  # 实例化模型

### Code start ### (≈ 2 lines of code)
loss_fn = None
opt = None
### Code end ###

Run the test

loss_fn, opt

Expected output

(MSELoss(), SGD (
 Parameter Group 0
     dampening: 0
     lr: 0.01
     momentum: 0
     nesterov: False
     weight_decay: 0
 ))
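One possible way to complete this exercise, assuming the standard nn.MSELoss loss and the torch.optim.SGD optimizer with a learning rate of 0.01 and all other parameters left at their defaults:

loss_fn = nn.MSELoss()  # mean squared error loss
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # SGD over the model parameters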

Everything is ready. Next is to train the model and solve for the linear regression parameters.

Exercise 58.3

Challenge: Complete the iterative process of optimizing the linear regression parameters.

Requirement: The number of iterations is 100.

Hint: Pay attention to the shape of the input data. The idea is to perform a forward pass to obtain the predicted values, compute the loss against the targets, and then update the parameters through the optimizer at each iteration.

### Code start ### (> 5 lines of code)

### Code end ###

Expected output (your exact loss values may differ):

Iteration [ 10/100], Loss: 0.791
Iteration [ 20/100], Loss: 0.784
Iteration [ 30/100], Loss: 0.778
Iteration [ 40/100], Loss: 0.772
Iteration [ 50/100], Loss: 0.767
Iteration [ 60/100], Loss: 0.762
Iteration [ 70/100], Loss: 0.757
Iteration [ 80/100], Loss: 0.753
Iteration [ 90/100], Loss: 0.749
Iteration [100/100], Loss: 0.745
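A minimal training-loop sketch for this exercise is shown below. It assumes the x and y tensors generated earlier are reshaped to (50, 1), since nn.Linear expects a 2-D input of shape (num_samples, num_features); the printing format is just one possible choice.

x_train = x.reshape(-1, 1)  # reshape to (50, 1) for nn.Linear
y_train = y.reshape(-1, 1)

num_iters = 100
for i in range(num_iters):
    y_pred = model(x_train)          # forward pass: predictions
    loss = loss_fn(y_pred, y_train)  # MSE between predictions and targets
    opt.zero_grad()                  # clear gradients from the previous step
    loss.backward()                  # backpropagate the loss
    opt.step()                       # update the parameters
    if (i + 1) % 10 == 0:
        print(f"Iteration [{i + 1:3d}/{num_iters}], Loss: {loss.item():.3f}")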

Finally, plot the fitted line on top of the original scatter plot to check how well the model fits the data.

Exercise 58.4

Challenge: Plot the fitted line onto the image according to the fitted parameters.

Hint: Read the fitted parameters via model.state_dict().

### Code start ### (≈ 4 lines of code)

### Code end ###

Expected output

[Output: scatter plot with the fitted regression line]
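A plotting sketch for this exercise, assuming the fitted layer is named linear (as in the model definition above) so that its weight and bias can be read from model.state_dict():

w = model.state_dict()["linear.weight"].item()  # learned weight
b = model.state_dict()["linear.bias"].item()    # learned bias

plt.scatter(x, y)  # original sample data
plt.plot(x.numpy(), (w * x + b).numpy(), "b")  # fitted line y = w * x + b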

