GPUs

https://d2l.ai/chapter_builders-guide/use-gpu.html

I don’t have an NVIDIA GPU, so I didn’t install the CUDA-enabled build of PyTorch.
torch.cuda.device('cuda')
AssertionError: Torch not compiled with CUDA enabled
torch.cuda.device_count()
0


Why doesn’t an AssertionError occur when I run the following code?
torch.cuda.device('cuda:1')
<torch.cuda.device at 0x16349fb2488>
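
As far as I can tell, torch.cuda.device(...) is just a context manager, and constructing it does not initialize CUDA. With the bare 'cuda' string PyTorch has to look up the current device index, which touches CUDA and raises immediately; with an explicit 'cuda:1' the index is simply recorded, and the error only surfaces once CUDA is actually used, e.g. on entering the context. A minimal sketch on a CPU-only build (the exact exception type may vary across PyTorch versions):

import torch

ctx = torch.cuda.device('cuda:1')  # only records index 1; CUDA is not touched
try:
    with ctx:                      # entering the context initializes CUDA
        pass
except (AssertionError, RuntimeError) as err:
    print(err)                     # e.g. "Torch not compiled with CUDA enabled"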

5.6.5. Exercises

  1. “You should see almost linear scaling”? I’m confused about what linear scaling means.

‘Linear scaling’ means that computation speed is proportional to the number of GPUs used.
For example, if two tasks take 2 s on a single GPU, then with two GPUs (one task per GPU) they take only about 1 s.
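
A rough way to check this, assuming a machine with at least two CUDA devices (the 4000 × 4000 size is an arbitrary choice):

import time
import torch

assert torch.cuda.device_count() >= 2

a0 = torch.randn(4000, 4000, device='cuda:0')
a1 = torch.randn(4000, 4000, device='cuda:1')
a0 @ a0; a1 @ a1                      # warm-up
for d in range(2):
    torch.cuda.synchronize(d)

t = time.time()                       # in sequence on one GPU
b = a0 @ a0
c = a0 @ a0
torch.cuda.synchronize(0)
print('one GPU, sequential:', time.time() - t)

t = time.time()                       # at the same time on two GPUs:
b = a0 @ a0                           # kernels launch asynchronously,
c = a1 @ a1                           # so both devices run concurrently
for d in range(2):
    torch.cuda.synchronize(d)
print('two GPUs, parallel:', time.time() - t)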


awsblog – Getting the Most out of GPU Instances: System Tools – NVIDIA Edition

I am confused about most of the questions here.

Exercises and my silly answers

  1. Try a larger computation task, such as the multiplication of large matrices, and see the difference in speed between the CPU and the GPU. What about a task with a small amount of computation? (See the first sketch after this list.)
  2. How should we read and write model parameters on the GPU?
  • By using net.to(device=torch.device('cuda')); a save/load sketch follows the list.
  3. Measure the time it takes to compute 1000 matrix-matrix multiplications of 100 × 100 matrices, and log the Frobenius norm of the output matrix one result at a time vs. keeping a log on the GPU and transferring only the final result.
  • I did not fully get this one, but the difference in time I measured between the CPU and the GPU was about 40 ms. (The two logging strategies are sketched after the list.)
  4. Measure how much time it takes to perform two matrix-matrix multiplications on two GPUs at the same time vs. in sequence on one GPU. Hint: you should see almost linear scaling.
  • I only have one GPU, so I could not try this; see the two-GPU sketch earlier in the thread.
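
For exercise 1, a minimal timing sketch; the sizes 100 and 4000 are arbitrary choices. torch.cuda.synchronize() is needed because GPU kernels are launched asynchronously, and for small matrices the launch overhead tends to dominate, so the CPU can come out ahead:

import time
import torch

def bench(device, n, reps=10):
    x = torch.randn(n, n, device=device)
    y = x @ x                          # warm-up
    if device == 'cuda':
        torch.cuda.synchronize()
    t = time.time()
    for _ in range(reps):
        y = x @ x
    if device == 'cuda':
        torch.cuda.synchronize()       # wait for the asynchronous kernels
    return (time.time() - t) / reps

for n in (100, 4000):                  # small vs. large task
    print('n =', n, 'cpu:', bench('cpu', n))
    if torch.cuda.is_available():
        print('n =', n, 'cuda:', bench('cuda', n))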
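For exercise 2, a sketch of keeping parameters on the GPU and saving/loading them (the file name params.pt is made up; map_location decides where the tensors land when reading):

import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1)).to(device)
print(net[0].weight.device)            # the parameters now live on `device`

torch.save(net.state_dict(), 'params.pt')   # writing works straight from the GPU

clone = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
clone.load_state_dict(torch.load('params.pt', map_location=device))
clone.to(device)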
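For exercise 3, a sketch of the two logging strategies. Calling .item() after every multiplication forces a device-to-host copy and a synchronization at each step; keeping the norms on the GPU and copying once at the end avoids that, which is the point of the exercise (torch.norm computes the Frobenius norm of a matrix by default):

import time
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
mats = [torch.randn(100, 100, device=device) for _ in range(1000)]

t = time.time()                                  # one result at a time
log = [torch.norm(m @ m).item() for m in mats]   # .item() syncs every step
print('per-step transfer:', time.time() - t)

t = time.time()                                  # keep the log on the GPU
log = [torch.norm(m @ m) for m in mats]          # norms stay on the device
final = torch.stack(log).cpu()                   # one transfer at the end
print('single final transfer:', time.time() - t)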