Use GPUs

http://d2l.ai/chapter_deep-learning-computation/use-gpu.html

I don’t have an NVIDIA GPU, so I didn’t install the GPU version of PyTorch or CUDA.
torch.cuda.device('cuda')
AssertionError : Torch not compiled with CUDA enabled
torch.cuda.device_count()
0
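
A common pattern from this chapter (the book’s try_gpu helper) is to fall back to the CPU when no GPU is available, so the same code also runs on a CPU-only install. A minimal sketch:

import torch

def try_gpu(i=0):
    # Return cuda:i if that GPU exists, otherwise fall back to the CPU.
    if torch.cuda.device_count() >= i + 1:
        return torch.device(f'cuda:{i}')
    return torch.device('cpu')

device = try_gpu()
x = torch.randn(2, 3, device=device)
print(x.device)  # prints 'cpu' on a CPU-only build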


Why didn’t an AssertionError happen when I ran the following code?
torch.cuda.device('cuda:1')
<torch.cuda.device at 0x16349fb2488>
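
My understanding (hedged, not an official explanation): torch.cuda.device(...) only constructs a context-manager object and stores the device index; it doesn’t initialize CUDA until the context is entered or a tensor is actually placed on the device. With an explicit index like 'cuda:1' the constructor succeeds even on a CPU-only build, whereas plain 'cuda' has to resolve the current device, which triggers CUDA initialization and therefore the AssertionError. A small sketch of the difference:

import torch

ctx = torch.cuda.device('cuda:1')   # no error: only stores the index
try:
    with ctx:                        # entering the context initializes CUDA
        pass
except (AssertionError, RuntimeError) as e:
    print(e)  # "Torch not compiled with CUDA enabled" on a CPU-only build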

5.6.5. Exercises

  1. “You should see almost linear scaling”? I’m confused about what linear scaling means.

‘Linear scaling’ means that computation speed is proportional to the number of GPUs you use.
e.g., with one GPU, two tasks take 2 sec; with two GPUs, the two tasks (one on each GPU) take only 1 sec.
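
To see this in code, here is a rough timing sketch (just an illustration, assuming at least two CUDA devices): run the same independent matrix multiplications on one GPU and then spread them over two GPUs; with near-linear scaling the two-GPU run should take roughly half the time.

import time
import torch

def timed_run(devices, num_tasks=20, size=4096):
    # Spread independent matrix multiplications round-robin over the given devices.
    xs = [torch.randn(size, size, device=devices[i % len(devices)])
          for i in range(num_tasks)]
    for d in devices:
        torch.cuda.synchronize(d)
    start = time.time()
    ys = [x @ x for x in xs]  # work queued on different GPUs runs concurrently
    for d in devices:
        torch.cuda.synchronize(d)
    return time.time() - start

if torch.cuda.device_count() >= 2:
    t1 = timed_run(['cuda:0'])
    t2 = timed_run(['cuda:0', 'cuda:1'])
    print(f'1 GPU: {t1:.3f}s  2 GPUs: {t2:.3f}s')  # expect roughly t2 ≈ t1 / 2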


awsblog – Mastering GPU Instances: System Tools – NVIDIA Edition