Tensors: a data structure similar to arrays and matrices (used to encode model inputs, outputs, and parameters).
Tensors can run on GPUs and are optimized for automatic differentiation.
Importing
import torch
import numpy as np
4 examples to initialize tensors
(1) Directly from data
Data type is automatically inferred.
data = [[1, 2],[3, 4]]
x_data = torch.tensor(data)
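For example, integer input produces an int64 tensor, while float input would produce float32 (a quick check using the `data` defined above):
print(x_data.dtype)  # torch.int64, inferred from the Python ints in `data`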
(2) From a NumPy array
np_array = np.array(data)
x_np = torch.from_numpy(np_array)
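Note that `torch.from_numpy` shares memory with the source array rather than copying it (more on this below), so a change to one is visible in the other:
np_array[0, 0] = 99
print(x_np)  # the 99 written into np_array also appears in x_np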
(3) From another tensor
x_ones = torch.ones_like(x_data) # retains the properties of x_data
print(f"Ones Tensor: \n {x_ones} \n")
x_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data
print(f"Random Tensor: \n {x_rand} \n")
(4) With random or constant values
`shape` : a tuple of tensor dimensions.
shape = (2,3,)
rand_tensor = torch.rand(shape)
ones_tensor = torch.ones(shape)
zeros_tensor = torch.zeros(shape)
print(f"Random Tensor: \n {rand_tensor} \n")
print(f"Ones Tensor: \n {ones_tensor} \n")
print(f"Zeros Tensor: \n {zeros_tensor}")
Tensor attributes describe a tensor's (1) shape, (2) datatype, and (3) the device it is stored on.
tensor = torch.rand(3,4)
print(f"Shape of tensor: {tensor.shape}")
print(f"Datatype of tensor: {tensor.dtype}")
print(f"Device tensor is stored on: {tensor.device}")
There are many more operations. Refer to: torch — PyTorch 1.8.1 documentation
Default: created on CPU
Explicitly move to GPU: `.to`
# We move our tensor to the GPU if available
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
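A common idiom is to pick the device once and reuse it everywhere (a sketch of that pattern):
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor = tensor.to(device)
print(f"Device tensor is stored on: {tensor.device}")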
Familiar with NumPy? Good.
Standard numpy-like indexing and slicing:
tensor = torch.ones(4, 4)
print('First row: ', tensor[0])
print('First column: ', tensor[:, 0])
print('Last column:', tensor[..., -1])
tensor[:,1] = 0
print(tensor)
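Since `tensor` starts as all ones and the second column is then zeroed, the final print shows:
tensor([[1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.],
        [1., 0., 1., 1.]])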
Joining Tensors
Use `torch.cat` (`torch.stack` is a related option that is subtly different; see the comparison below).
t1 = torch.cat([tensor, tensor, tensor], dim=1)
print(t1)
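`torch.cat` joins along an existing dimension, while `torch.stack` adds a new one, so the resulting shapes differ (a quick comparison with the 4x4 `tensor` above):
t_cat = torch.cat([tensor, tensor, tensor], dim=1)
t_stack = torch.stack([tensor, tensor, tensor], dim=0)
print(t_cat.shape)    # torch.Size([4, 12])  - columns concatenated
print(t_stack.shape)  # torch.Size([3, 4, 4]) - new leading dimension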
Arithmetic operations
# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value
y1 = tensor @ tensor.T
y2 = tensor.matmul(tensor.T)
y3 = torch.rand_like(tensor)
torch.matmul(tensor, tensor.T, out=y3)
# This computes the element-wise product. z1, z2, z3 will have the same value
z1 = tensor * tensor
z2 = tensor.mul(tensor)
z3 = torch.rand_like(tensor)
torch.mul(tensor, tensor, out=z3)
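You can verify that the three variants of each operation really produce the same values:
print(torch.equal(y1, y2), torch.equal(y1, y3))  # True True
print(torch.equal(z1, z2), torch.equal(z1, z3))  # True True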
Single-element tensors
Example: aggregate all values of a tensor into one value.
Then, convert it to a Python numerical value using `item()`.
agg = tensor.sum()
agg_item = agg.item()
print(agg_item, type(agg_item))
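`item()` only works when the tensor contains exactly one element; calling it on a larger tensor raises an error (a small illustration, using the 4x4 `tensor` from above):
try:
    tensor.item()  # 16 elements, so it cannot be converted to a single Python number
except (RuntimeError, ValueError) as e:
    print(e)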
In-place operations
Operations that store the result into the operand are called in-place. They are denoted by a `_` suffix. For example, `x.copy_(y)` and `x.t_()` will change `x`.
print(tensor, "\n")
tensor.add_(5)
print(tensor)
※ In-place operations can save memory, but avoid them when computing derivatives: the history is immediately lost.
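For example, an in-place update on a leaf tensor that requires gradients fails immediately (a minimal sketch):
x = torch.ones(3, requires_grad=True)
try:
    x.add_(5)  # the in-place write would destroy the history autograd needs
except RuntimeError as e:
    print(e)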
Tensors on the CPU and NumPy arrays can share their underlying memory locations, and changing one will change the other.
Tensor → NumPy array
t = torch.ones(5)
print(f"t: {t}")
n = t.numpy()
print(f"n: {n}")
Change the tensor? The NumPy array changes too.
t.add_(1)
print(f"t: {t}")
print(f"n: {n}")
NumPy array → Tensor
n = np.ones(5)
t = torch.from_numpy(n)
Change the NumPy array? The tensor changes too.
np.add(n, 1, out=n)
print(f"t: {t}")
print(f"n: {n}")
Reference: Tensors — PyTorch Tutorials 1.8.1+cu102 documentation