A warm-up experiment to get familiar with PyTorch tensors.
By default, ordinary computation happens on the CPU. If a tensor lives on the CPU, it can be converted to a NumPy array with x.numpy().
To compute on the GPU, call x.cuda(); to bring the data back to host memory afterwards, call .cpu() first.
All CPU tensors except CharTensor support conversion to NumPy.
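The conversion goes both ways, and for CPU tensors it shares memory rather than copying. A minimal sketch (variable names are illustrative):

```python
import numpy as np
import torch

# NumPy -> tensor: torch.from_numpy shares memory with the source array
arr = np.ones(3, dtype=np.float32)
t = torch.from_numpy(arr)
arr[0] = 5.0
print(t)  # the tensor sees the change: tensor([5., 1., 1.])

# tensor -> NumPy: .numpy() also shares memory for CPU tensors
t2 = torch.zeros(3)
n = t2.numpy()
t2[0] = 7.0
print(n)  # the array sees the change: [7. 0. 0.]
```

Because the two objects view the same buffer, in-place edits on either side are visible on the other; use .clone() or np.copy() when an independent copy is needed.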
import torch as tor
import numpy as np

a = tor.ones(5)
a
tensor([1., 1., 1., 1., 1.])

a.device
device(type='cpu')

a1 = a.cuda()
a1
tensor([1., 1., 1., 1., 1.], device='cuda:0')

b = a.numpy()
b
array([1., 1., 1., 1., 1.], dtype=float32)

# need to call .cpu() first
b1 = a1.numpy()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
----> 1 b1 = a1.numpy()

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
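The fix the error message suggests is to route the tensor through host memory before converting. A small sketch of the correct round trip (guarded so it also runs on CPU-only machines):

```python
import torch

a = torch.ones(5)
if torch.cuda.is_available():
    a1 = a.cuda()          # copy to GPU
    b1 = a1.cpu().numpy()  # must come back to host memory first
else:
    b1 = a.numpy()         # already on the CPU, no copy needed
print(b1)  # [1. 1. 1. 1. 1.]
```

Note that .cpu() on a CUDA tensor returns a new host-side copy, so the subsequent .numpy() is safe.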
Check GPU memory usage (830 MiB already? There must be some fixed overhead.)
!nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.73.01 Driver Version: 460.73.01 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 306... Off | 00000000:01:00.0 Off | N/A |
| 0% 45C P8 14W / 200W | 830MiB / 7979MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
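The number nvidia-smi reports includes the CUDA context and PyTorch's caching allocator, not just live tensors, which is why it is much larger than the five floats actually allocated. PyTorch exposes its own counters for comparison; a minimal sketch:

```python
import torch

if torch.cuda.is_available():
    # bytes held by live tensors vs. bytes reserved by the caching allocator;
    # nvidia-smi additionally counts the CUDA context's fixed overhead
    print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
    print(torch.cuda.memory_reserved() / 2**20, "MiB reserved")
else:
    print("no CUDA device available")
```

memory_reserved() is typically larger than memory_allocated(), since freed blocks are cached for reuse instead of being returned to the driver.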