Abstract: A Tensor can be a 0-dimensional, 1-dimensional, or multi-dimensional array. You can think of it as the NumPy of the neural network world: the two are similar, can share memory, and converting between them is very convenient.
This article is shared from the Huawei Cloud community post "Pytorch Neural Network Circle", author: Choosing a city to die.
Tensor
A Tensor can be a 0-dimensional, 1-dimensional, or multi-dimensional array. You can think of it as the NumPy of the neural network world: the two are similar, can share memory, and converting between them is very convenient.
But they are not the same. The biggest difference is that a NumPy ndarray can only run on the CPU, while a Torch Tensor can also be placed on the GPU for acceleration.
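Both points are easy to verify. Below is a minimal sketch, assuming only that PyTorch and NumPy are installed; it falls back to the CPU when no GPU is available:
import torch
import numpy as np

# Conversion between ndarray and Tensor; torch.from_numpy() shares memory with its source
nd = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(nd)       # no copy: t and nd use the same underlying buffer
nd[0] = 100.0
print(t)                       # the change made through NumPy is visible: [100., 2., 3.]
print(t.numpy())               # converting back is just as convenient

# Move the Tensor to the GPU when one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
t_gpu = t.to(device)
print(t_gpu.device)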
For Tensor, the interface can roughly be divided into two categories:
- torch.function: such as torch.sum, torch.add, etc.
- tensor.function: such as tensor.view, tensor.add, etc.
And according to whether an operation modifies the Tensor itself, operations can be divided into the following two categories:
- Operations that do not modify their own data, such as x.add(y): the data of x remains unchanged, and a new Tensor is returned.
- Operations that modify their own data, such as x.add_(y): the result is stored in x, and x itself is modified.
The simple rule is that methods whose names end with an underscore modify the Tensor in place, and methods without the underscore do not.
Now, let's add the corresponding positions of two arrays and see the effect:
import torch
x = torch.tensor([1, 2])
y = torch.tensor([3, 4])
print(x + y)       # tensor([4, 6])
print(x.add(y))    # tensor([4, 6]); a new Tensor is returned
print(x)           # tensor([1, 2]); x is unchanged
print(x.add_(y))   # tensor([4, 6]); the result is written back into x
print(x)           # tensor([4, 6]); x has been modified
After running, the output confirms the rule: add() returns a new Tensor and leaves x unchanged, while add_() modifies x in place.
Next, we will formally explain how to use Tensor.
Create Tensor
Like NumPy, there are many ways to create a Tensor: it can be generated by Torch's own factory functions, converted from a list or an ndarray, or created with specified dimensions. Commonly used creation functions include torch.tensor(), torch.Tensor(), torch.zeros(), torch.ones(), torch.eye(), torch.arange(), torch.linspace(), torch.rand(), torch.randn(), torch.randperm() and torch.from_numpy().
It should be noted here that Tensor creation has both an uppercase method (torch.Tensor) and a lowercase one (torch.tensor), and they behave differently. Let's look at the code first:
import torch
t1 = torch.tensor(1)   # interprets 1 as data: a 0-dimensional tensor holding the value 1
t2 = torch.Tensor(1)   # interprets 1 as a shape: an uninitialized FloatTensor with one element
print("value {0}, type {1}".format(t1, t1.type()))
print("value {0}, type {1}".format(t2, t2.type()))
After running, you can see the difference: torch.tensor(1) produces a 0-dimensional torch.LongTensor holding the value 1, while torch.Tensor(1) produces a torch.FloatTensor of shape (1,) whose value is uninitialized.
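To make the distinction concrete, here is a minimal sketch (the contents of the uninitialized tensor are arbitrary):
import torch

t1 = torch.tensor([2, 3])   # always treats the argument as data: tensor([2, 3]), dtype int64
t2 = torch.Tensor(2, 3)     # treats the integers as a shape: uninitialized FloatTensor of shape (2, 3)
print(t1, t1.dtype)
print(t2.shape, t2.dtype)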
Other examples are as follows:
import torch
import numpy as np
t1 = torch.zeros(1, 2)          # a 1x2 tensor filled with zeros
print(t1)
t2 = torch.arange(4)            # tensor([0, 1, 2, 3])
print(t2)
t3 = torch.linspace(10, 5, 6)   # 6 evenly spaced values from 10 down to 5
print(t3)
nd = np.array([1, 2, 3, 4])
t4 = torch.from_numpy(nd)       # converted from an ndarray, sharing its memory
print(t4)
Other examples are basically the same as above, so I won’t repeat them here.
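Still, for quick reference, here is a minimal sketch of a few of the remaining creation functions (the random outputs will differ between runs):
import torch

print(torch.ones(2, 2))    # 2x2 tensor of ones
print(torch.eye(2, 2))     # 2x2 identity matrix
print(torch.rand(2, 2))    # uniform random values in [0, 1)
print(torch.randn(2, 2))   # standard normal random values
print(torch.randperm(4))   # a random permutation of 0..3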
Modify Tensor dimensions
As with NumPy, Tensor provides functions for modifying dimensions. Commonly used ones include view(), reshape(), size(), dim(), unsqueeze(), squeeze(), numel(), t() and permute().
The sample code is as follows:
import torch
t1 = torch.Tensor([[1, 2]])
print(t1)                      # tensor([[1., 2.]])
print(t1.size())               # torch.Size([1, 2])
print(t1.dim())                # 2
print(t1.view(2, 1))           # reshaped to 2 rows and 1 column
print(t1.view(-1))             # flattened to one dimension: tensor([1., 2.])
print(torch.unsqueeze(t1, 0))  # adds a dimension at position 0: shape (1, 1, 2)
print(t1.numel())              # total number of elements: 2
After running, each printed result matches the comment next to it.
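The counterparts of unsqueeze() and view() are worth a quick look as well: squeeze() removes size-1 dimensions, and t()/permute() reorder them. A minimal sketch:
import torch

t = torch.unsqueeze(torch.Tensor([[1, 2]]), 0)  # shape (1, 1, 2)
print(t.shape)
print(t.squeeze().shape)        # removes all size-1 dimensions: shape (2,)
m = torch.arange(6).view(2, 3)
print(m.t().shape)              # transpose: shape (3, 2)
print(m.permute(1, 0).shape)    # the same reordering expressed via permute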
Select elements
Of course, we create a Tensor in order to use the data inside it, so retrieving that data is unavoidable. Commonly used selection methods include indexing and slicing (e.g. t[0, :]), masked_select(), nonzero(), gather(), scatter_() and index_select().
The sample code is as follows:
import torch
# Set the random seed so that every run gives the same result
torch.manual_seed(100)
t1 = torch.randn(2, 3)
# Print t1
print(t1)
# Print row 0
print(t1[0, :])
# Print the elements of t1 that are greater than 0
print(torch.masked_select(t1, t1 > 0))
# Print the indices of the non-zero elements of t1
print(torch.nonzero(t1))
# Row 0 of the result takes row 0 of column 0, row 1 of column 1 and row 1 of column 2;
# row 1 of the result takes row 1 of column 0, row 1 of column 1 and row 1 of column 2
index = torch.LongTensor([[0, 1, 1], [1, 1, 1]])
# dim=0 means the index values select rows
a = torch.gather(t1, 0, index)
print(a)
# scatter_ is the counterpart of gather: it writes a's values into z at the positions given by index (here along dim=1)
z = torch.zeros(2, 3)
print(z.scatter_(1, index, a))
After running, you can verify the gather result element by element against the comments above.
To make a = torch.gather(t1, 0, index) easier to understand: each entry of index selects, for its column, which row of t1 to read. Of course, there is a formula that computes this directly, since tracing so much data by hand is tedious. The conversion formulas are listed here for reference:
When dim=0: out[i][j] = input[index[i][j]][j]
When dim=1: out[i][j] = input[i][index[i][j]]
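As a quick check of the dim=1 formula, here is a small sketch you can verify by hand:
import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
index = torch.LongTensor([[2, 0], [1, 2]])
# dim=1: out[i][j] = t[i][index[i][j]]
# row 0 -> t[0][2], t[0][0]  =>  3, 1
# row 1 -> t[1][1], t[1][2]  =>  5, 6
print(torch.gather(t, 1, index))   # tensor([[3, 1], [5, 6]])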
Simple math
Like NumPy, Tensor also supports mathematical operations. Commonly used element-wise functions include abs(), add(), addcdiv(), addcmul(), ceil()/floor(), clamp(), exp()/log()/pow(), mul()/neg(), sigmoid()/tanh()/softmax() and sqrt().
It should be noted that all of the functions above create a new Tensor. If you do not need a new Tensor, use the underscore ("_") versions of these functions, which operate in place.
Examples are as follows:
import torch
t = torch.Tensor([[1, 2]])
t1 = torch.Tensor([[3], [4]])
t2 = torch.Tensor([5, 6])
# t + 0.1 * (t1 / t2)
# (value is passed as a keyword; the old positional form is deprecated)
print(torch.addcdiv(t, t1, t2, value=0.1))
# t + 0.1 * (t1 * t2)
print(torch.addcmul(t, t1, t2, value=0.1))
print(torch.pow(t, 3))   # element-wise cube
print(torch.neg(t))      # element-wise negation
After running, note that t1 / t2 and t1 * t2 broadcast the (2, 1)- and (2,)-shaped tensors to shape (2, 2), so both results are 2x2 tensors.
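As mentioned above, each of these functions also has an in-place underscore version; a minimal sketch:
import torch

t = torch.Tensor([[1, 2]])
t.neg_()      # in-place negation: no new Tensor is created
print(t)      # tensor([[-1., -2.]])
t.abs_()      # in-place absolute value
print(t)      # tensor([[1., 2.]])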
The functions above are easy to understand; only one of them may be hard to grasp without exposure to machine learning, namely the sigmoid() activation function. Its formula is as follows:
sigmoid(x) = 1 / (1 + e^(-x))
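You can check the formula directly against torch.sigmoid():
import torch

x = torch.Tensor([-1.0, 0.0, 1.0])
print(torch.sigmoid(x))          # tensor([0.2689, 0.5000, 0.7311])
print(1 / (1 + torch.exp(-x)))   # the same values, computed from the formula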
Merge (reduction) operations
Simply put, these operations aggregate a tensor, for example by summing or averaging along a dimension. The input and output dimensions of such operations generally differ, with the input usually having more dimensions than the output. Commonly used reduction functions include sum(), mean(), median(), std(), var(), norm(), max()/min() and the cumulative variants cumsum()/cumprod().
The sample code is as follows:
import torch
t = torch.linspace(0, 10, 6)    # tensor([0., 2., 4., 6., 8., 10.])
a = t.view((2, 3))
print(a)                        # tensor([[0., 2., 4.], [6., 8., 10.]])
b = a.sum(dim=0)                # column-wise sum: tensor([6., 10., 14.])
print(b)
b = a.sum(dim=0, keepdim=True)  # same sums, but dim 0 is kept: shape (1, 3)
print(b)
After running, note the difference in shape: after sum() reduces along dim, that dimension would hold only one element, so it is removed by default. If you want to keep it, pass keepdim=True; the default is False.
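The same dim/keepdim pattern applies to the other reduction functions. A minimal sketch, reusing the tensor from above:
import torch

a = torch.linspace(0, 10, 6).view(2, 3)
print(a.mean(dim=0))     # column-wise mean: tensor([3., 5., 7.])
print(a.max(dim=1))      # row-wise max: returns both values and indices
print(a.cumsum(dim=1))   # cumulative sum along each row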
Comparison operation
In quantitative trading, for example, we often compare stock prices. Tensors also support comparison operations, generally element by element. Commonly used functions include eq(), equal(), ge()/gt()/le()/lt(), max()/min() and topk().
The sample code is as follows:
import torch
t = torch.Tensor([[1, 2], [3, 4]])
t1 = torch.Tensor([[1, 1], [4, 4]])
# Get the maximum value of the whole tensor
print(torch.max(t))        # tensor(4.)
# Compare whether two tensors are equal:
# equal() directly returns True or False
print(torch.equal(t, t1))  # False
# eq() returns element-wise booleans with the same shape as the inputs
print(torch.eq(t, t1))
# Take the largest element along dim 0, returning both the values and their indices
print(torch.topk(t, 1, dim=0))
After running, note that topk() returns both the values and the indices of the selected elements.
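The element-wise comparison functions listed above but not shown in the code work the same way; a brief sketch:
import torch

t = torch.Tensor([[1, 2], [3, 4]])
t1 = torch.Tensor([[1, 1], [4, 4]])
print(torch.gt(t, t1))       # element-wise "greater than"
print(torch.ge(t, t1))       # element-wise "greater than or equal"
print(torch.max(t, dim=1))   # row-wise max: values and indices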
Matrix Operations
In machine learning and deep learning, matrix operations are everywhere. As with NumPy, the commonly used multiplications come in two kinds: element-wise multiplication and dot-product (matrix) multiplication. Commonly used functions include mul() (element-wise), dot(), mm(), bmm(), mv(), matmul(), t() and svd().
Three dot-product functions need to be distinguished: dot() can only compute 1-dimensional tensors, mm() can only compute 2-dimensional tensors, and bmm() performs batched matrix multiplication on 3-dimensional tensors. Examples are as follows:
import torch
# 1-dimensional dot product
a = torch.Tensor([1, 2])
b = torch.Tensor([3, 4])
print(torch.dot(a, b))   # 1*3 + 2*4 = tensor(11.)
# 2-dimensional matrix multiplication
a = torch.randint(10, (2, 3))
b = torch.randint(6, (3, 4))
print(torch.mm(a, b))    # (2, 3) @ (3, 4) -> shape (2, 4)
# 3-dimensional (batched) matrix multiplication
a = torch.randint(10, (2, 2, 3))
b = torch.randint(6, (2, 3, 4))
print(torch.bmm(a, b))   # a batch of 2: each (2, 3) @ (3, 4) -> shape (2, 2, 4)
After running, note that randint() produces integer (int64) tensors, so the results are integer matrices, and the random values differ between runs.
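Finally, to make the element-wise vs. dot-product distinction concrete, a minimal sketch; torch.matmul() is the general entry point that dispatches on the dimensionality of its inputs:
import torch

a = torch.Tensor([[1, 2], [3, 4]])
b = torch.Tensor([[5, 6], [7, 8]])
print(torch.mul(a, b))     # element-wise (Hadamard) product
print(torch.mm(a, b))      # matrix (dot-product) multiplication
print(torch.matmul(a, b))  # general version: handles 1-D, 2-D and batched inputs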