# 线性回归的从零开始实现

`params` is a list containing the parameters to be optimized.
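For context, the book's minibatch SGD helper takes such a list and updates each parameter in place. A minimal sketch of that function (assuming PyTorch; the body matches the book's `sgd`):

```python
import torch

def sgd(params, lr, batch_size):
    """Minibatch stochastic gradient descent.

    Updates every tensor in `params` in place using its .grad,
    then zeroes the gradient for the next iteration.
    """
    with torch.no_grad():
        for param in params:
            param -= lr * param.grad / batch_size
            param.grad.zero_()
```

Because the update happens inside `torch.no_grad()`, the parameter change itself is not tracked by autograd.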

In `from d2l import torch as d2l`, what does `torch` refer to here?
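Here `torch` is not the standalone PyTorch package but a submodule of the `d2l` package (`d2l/torch.py`) that collects the book's PyTorch-based utilities; the alias binds the name `d2l` to that submodule. The same import pattern, shown with a standard-library package so it runs anywhere:

```python
# `from PKG import SUB as NAME` binds NAME to the submodule PKG.SUB.
# This is exactly what `from d2l import torch as d2l` does: the name
# `d2l` ends up pointing at the module `d2l.torch`, not at PyTorch.
from urllib import parse as u

print(u.__name__)  # prints "urllib.parse" -- the alias is the submodule
```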

```python
# Plot the figures
d2l.plt.figure(figsize=(14, 10))

# Plot for feature 1
d2l.plt.subplot(2, 2, 1)
features0 = features[:, 0].detach().numpy()
labels1 = labels.detach().numpy()
d2l.plt.scatter(features0, labels1, 1)
# Combine the first feature and the first weight into a linear function
rlt0 = features[:, 0] * w[0] + b
rlt0z = rlt0.reshape(features[:, 0].shape)
d2l.plt.plot(features0, rlt0z.detach().numpy())

# Plot for feature 2
d2l.plt.subplot(2, 2, 2)
features1 = features[:, 1].detach().numpy()
d2l.plt.scatter(features1, labels1, 1)
# Combine the second feature and the second weight into a linear function
rlt1 = features[:, 1] * w[1] + b
rlt1z = rlt1.reshape(features[:, 1].shape)
d2l.plt.plot(features1, rlt1z.detach().numpy())
```
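One thing worth noting about this plot: each line uses only one feature and drops the other feature's contribution, so the scatter points lie around the line, not on it, and the vertical spread is exactly the omitted term. A NumPy sketch of that relationship (all names and values here are illustrative stand-ins for the notebook's tensors):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 2))   # stand-in for the notebook's inputs
w = np.array([2.0, -3.4])              # stand-in for the learned weights
b = 4.2
labels = features @ w + b              # noise-free labels for the sketch

# Line plotted against feature 1 only: feature 2's term is dropped.
rlt0 = features[:, 0] * w[0] + b

# The gap between each label and the line is the omitted feature's term.
residual = labels - rlt0
```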

1 Like

```
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
```
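This error usually means `backward()` ran twice on the same computation graph, e.g. the loss tensor was reused instead of being recomputed inside the training loop. A minimal reproduction and the `retain_graph=True` workaround the message suggests (assuming PyTorch; in the book's loop the right fix is recomputing the loss each iteration, not retaining the graph):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = (x * x).sum()

y.backward(retain_graph=True)  # keep saved tensors for a second pass
y.backward()                   # OK: the graph was retained
# x.grad now holds 4 + 4 = 8 because gradients accumulate

z = (x * x).sum()
z.backward()                   # frees the graph's saved tensors
try:
    z.backward()               # second pass without retain_graph
except RuntimeError as e:
    print("RuntimeError:", e)  # the error from the post
```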

`true_w = [2, -3.4]` simply sets the ground-truth model weights to `[2, -3.4]`; the particular values have no special meaning.
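Those ground-truth values only serve to generate the synthetic dataset, and training should recover numbers close to them. A NumPy sketch of that round trip (`synth_data` is an illustrative stand-in for the book's `synthetic_data`, solved here with least squares instead of SGD):

```python
import numpy as np

def synth_data(w, b, n, sigma=0.01, seed=0):
    """Generate y = Xw + b + Gaussian noise."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, len(w)))
    y = X @ np.asarray(w, dtype=float) + b + rng.normal(scale=sigma, size=n)
    return X, y

X, y = synth_data([2, -3.4], 4.2, 1000)

# Append a column of ones so the intercept b is fitted too,
# then recover the parameters by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[:2] should be close to [2, -3.4] and coef[2] close to 4.2
```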

```python
a = torch.tensor([[1, 2]])
a, a.T
```
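Here `a` has shape `(1, 2)` (a row vector) and `a.T` has shape `(2, 1)` (a column vector). The distinction matters in the linear model because `X @ w` needs `w` as a column vector to yield one prediction per row. A NumPy sketch of the same shapes:

```python
import numpy as np

a = np.array([[1, 2]])     # shape (1, 2): a row vector
at = a.T                   # shape (2, 1): a column vector

X = np.ones((3, 2))        # 3 examples, 2 features
w = at.astype(float)       # weights as a column vector
y = X @ w                  # shape (3, 1): one prediction per example
```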