Still love to tinker...
I bought a new laptop a while ago, the Magic 13 (3050TI-1T version), an all-around notebook. The CPU is the standard AMD Ryzen 9 5900HS with 8 cores and 16 threads, the graphics card is an NVIDIA RTX 3050 Ti, and at about 1.4 kg it weighs the same as a MacBook: portable, and it can change shape.
It looks like this:
It can switch between three forms (inexplicably exciting). Some people may ask why I didn't buy a MacBook. There are two reasons:
- MacBooks do not support NVIDIA graphics cards. There is no way around this: you cannot run AI code locally and can only work on a remote server
- I'm a bit tired of MacBooks, some aesthetic fatigue; the new MacBook Pro is too thick and inconvenient to carry; and mainly, it's expensive...
So now I have an all-around notebook with a discrete GPU that is close to a MacBook in size and weight, which is enough for simple daily development and debugging. The CPU is the R9 5900HS and the graphics card is the 3050 Ti with 4 GB. The CPU's compilation speed is acceptable, and as for the GPU, apart from the video memory being a bit small, all the features of the 30 series are there, so it's fine for experimenting.
People who work on AI are of course more sensitive to the GPU. This 3050 Ti is based on the GA107 core, with 2560 CUDA cores and 80 Tensor cores, which is basically enough to play with. The compute capability is 8.6. At present (this sentence was up to date at the time of writing, but then Lao Huang promptly launched the H100 in March), the latest features should all be there:
- FP32, FP16, BF16 and INT8 precision support
- 3rd-generation Tensor Cores, and more
The GA102 white paper is attached; you can look through it if you're interested.
With the laptop in hand, the next step is to set up the development environment.
I didn't want to use Windows before, mainly because I was used to a Linux environment; macOS and Linux are not very different to operate, while Windows takes getting used to in all sorts of ways.
So the initial plan was a Win10 + Ubuntu dual-boot setup:
Ubuntu + Win10/Win11 dual-boot solution
This is how most programmers set things up. Of course development has to be on Ubuntu: Windows for entertainment, Ubuntu for work.
Here I first installed Ubuntu alongside Win10, then upgraded Win10 to Win11 with the Win10 + Ubuntu dual boot already in place. The whole upgrade went very smoothly, and nothing broke (the bootloader was intact and the Ubuntu system untouched); after the upgrade and restart, Ubuntu worked normally. At that point the dual boot became Win11 + Ubuntu.
Installing Ubuntu is a well-worn topic; basically:
- Download the Ubuntu ISO and write it to a USB drive to make a bootable installer (a sketch of this step follows the list)
- Carve out part of the disk in Win10 for Ubuntu to use
- Reboot into the BIOS, set the boot device to the USB drive, and install
I installed Ubuntu 20.04.
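As a minimal sketch of the USB step, assuming a Linux machine is at hand and the drive shows up as /dev/sdX (the ISO file name and device are placeholders; check with lsblk first, since dd overwrites whatever you point it at):

```bash
# Identify the USB drive first; /dev/sdX below is a placeholder
lsblk
# Write the ISO to the raw device (this erases the drive)
sudo dd if=ubuntu-20.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```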
That's roughly the process; there's plenty of information online and a search will turn it up. One thing was different from when I installed 16.04: installing NVIDIA's driver went much more smoothly than expected. Four years ago, when I installed Ubuntu on a laptop, it took a lot of hardship (see: steps and problems installing the NVIDIA driver/CUDA for a GTX 965M under Ubuntu 16.04). This time the NVIDIA driver had few pitfalls; a normal, logical installation just worked, and there was no need to disable anything.
Once inside the fresh Ubuntu system, the basic steps are the usual ones:
First switch the APT sources. Note that the source release must match your Ubuntu version (a mismatch makes all kinds of packages incompatible, i.e. you get lots of "unmet dependencies" errors; I didn't pay attention to this at first and wasted a long time on it). Use the Tsinghua or Aliyun mirrors:
- https://mirrors.tuna.tsinghua.edu.cn/help/ubuntu/
- https://developer.aliyun.com/mirror/ubuntu?spm=a2c6h.13651102.0.0.3d151b117MRP65
Then replace the sources.list under /etc/apt with the domestic mirror, and run:
```bash
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential
```
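For reference, a rough sketch of the mirror swap itself, assuming the Tsinghua mirror and Ubuntu 20.04 (focal); the sed patterns are illustrative, and copying the list from the mirror's help page works just as well:

```bash
# Back up the stock list before touching it
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
# Point the archive and security entries at the Tsinghua mirror
sudo sed -i -e 's|http://.*archive.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' \
            -e 's|http://security.ubuntu.com|https://mirrors.tuna.tsinghua.edu.cn|g' \
            /etc/apt/sources.list
sudo apt-get update
```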
After installing Ubuntu's essentials, the next step is to upgrade the kernel (my Ubuntu 20.04 install came with a 5.10-series kernel). Why upgrade the kernel? Because my Magic 13 is a fairly new laptop, and some features are not supported by older kernels (flipping the screen, the keyboard backlight, fingerprint unlock, and so on), so I upgraded the kernel first.
There are some pitfalls in upgrading the kernel. I followed an article on installing Ubuntu 20.04 on the ROG Magic 13 and solving the various driver problems, and it really helped a lot. Roughly: if you grab updates from the official mainline kernel builds, the libc6 version they depend on may be incompatible, which makes `sudo apt-get update` error out from time to time with all sorts of problems. You need to download an appropriate kernel version.
If you have any doubts about the kernel, read that article. In short, be cautious about upgrading the kernel. Also note that the newest Ubuntu releases ship with newer kernels out of the box.
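For the record, a hedged sketch of what installing a downloaded kernel build looks like; the 5.16 version number and file names are placeholders, not the exact build I used:

```bash
# Install the matching header/image/module packages together
sudo dpkg -i linux-headers-5.16.0-*.deb \
            linux-image-unsigned-5.16.0-*.deb \
            linux-modules-5.16.0-*.deb
sudo update-grub
sudo reboot
# After rebooting, confirm which kernel is running
uname -r
```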
Next, let's talk about WSL2.
Win11 + WSL2 + Docker
WSL (Windows Subsystem for Linux) is something I came across while researching Win11. I had only heard of it before and never actually used it; now that I need to run Ubuntu on Windows, it occurred to me to give it a try.
What WSL does is let you use a Linux system on Windows. It makes developing under Windows easy for people like me who are used to the command line. After all, developing directly under Ubuntu makes slacking off and chatting a bit of a hassle; I want something like the macOS experience, where entertainment and work both fit and the development experience isn't compromised.
Because Macs and NVIDIA are incompatible, people like me who depend heavily on NVIDIA GPUs for deep learning can only use a Mac to remote into a server for development, which is painful when the network is bad.
WSL2 also appears to be more capable than a VMware virtual machine (those who know, feel free to chime in): you can run an Ubuntu image directly in Windows, then attach VSCode to it for development. That directly doubles efficiency, and it won me over.
The latest version of WSL is WSL2, and the difference between WSL1 and WSL2 is quite big. A man's sixth sense says to use the new, not the old, so I chose WSL2. There is actually another reason: the Linux kernel under WSL2 can call CUDA.
First upgrade to Win11, then install the WSL-dedicated driver `510.06_gameready_win11_win10-dch_64bit_international`, and then enter this directly in the Windows terminal:
```
wsl --set-default-version 2
```
From this point on, WSL2 is the default.
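To double-check which version each installed distribution is actually using, `wsl` can list them; just a quick sanity check:

```
wsl --list --verbose
```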
If you're eager to see whether WSL2 works, you can run this directly in Windows:
```
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```
The result:
```
GPU Device 0: "Ampere" with compute capability 8.6
> Compute 8.6 CUDA device: [NVIDIA GeForce RTX 3050 Ti Laptop GPU]
20480 bodies, total time for 10 iterations: 40.139 ms
= 104.495 billion interactions per second
= 2089.903 single-precision GFLOP/s at 20 flops per interaction
```
If this output appears, it proves the WSL NVIDIA driver is working and your graphics card is detected correctly.
Install Ubuntu
Next, install Ubuntu. Guides online generally recommend searching for it in the Microsoft Store and installing from there. However, searching for Ubuntu in the Win11 store installs it straight onto the C drive, which is very annoying, and I did accidentally install the Ubuntu image to C. I had no choice but to delete it, unbind Docker's data in WSL2, and move it to another drive (here I moved it to D):
```
wsl --export docker-desktop-data D:\Docker\wsl\docker-desktop-data\docker-desktop-data.tar
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\Docker\wsl\docker-desktop-data\ D:\Docker\wsl\docker-desktop-data\docker-desktop-data.tar --version 2
```
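The same export/unregister/import dance works for any WSL distribution, not just Docker's data distro. A sketch for moving an Ubuntu distro, with the names and paths purely illustrative:

```
wsl --export Ubuntu D:\wsl\ubuntu.tar
wsl --unregister Ubuntu
wsl --import Ubuntu D:\wsl\Ubuntu\ D:\wsl\ubuntu.tar --version 2
```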
With Docker's image storage moved to another drive, you can pull images without worry!
Docker images on WSL2
Since everything is an image anyway, why not just grab one that already has a CUDA environment? You can find `nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04` on the Docker official site or via NVIDIA-docker, and then `docker pull` it.
So I unregistered the previous Ubuntu distribution in WSL2 with `wsl --unregister Ubuntu` and deleted the old image, then pulled the new one with `docker pull nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04`.
Try running it: execute `docker run -it --gpus all 42a32a65aa9d /usr/bin/bash`. Be sure to add `--gpus all`, otherwise the graphics card will not be detected.
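For day-to-day use, a slightly fuller `run` command is handy; this is just a sketch, with the container name and mounted path as placeholders:

```
docker run -it --gpus all --name cuda-dev -v D:/code:/root/code nvidia/cuda:11.4.3-cudnn8-devel-ubuntu20.04 /usr/bin/bash
```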
Enter the container and run `nvidia-smi`:
```
root@304af4811a38:/# nvidia-smi
Sun Jan 30 10:37:28 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.39.01 Driver Version: 511.23 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A |
| N/A 51C P8 8W / N/A | 0MiB / 4096MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
```
Hmm, no problem.
Test CUDA
Next, compile the code in https://github.com/NVIDIA/cuda-samples. As a rough sketch of the build inside the container (the repo layout and required packages can differ between releases, so treat this as illustrative):
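```bash
# Grab the samples and build everything; -j8 matches the 8-core CPU
apt-get update && apt-get install -y git build-essential
git clone https://github.com/NVIDIA/cuda-samples.git
cd cuda-samples
make -j8
# Binaries land under bin/x86_64/linux/release/
```

Then run `deviceQuery`: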
```
root@0b09ee5e9284:~/code/cuda-samples/bin/x86_64/linux/release# ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA GeForce RTX 3050 Ti Laptop GPU"
CUDA Driver Version / Runtime Version 11.6 / 11.4
CUDA Capability Major/Minor version number: 8.6
Total amount of global memory: 4096 MBytes (4294443008 bytes)
(020) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1035 MHz (1.03 GHz)
Memory Clock rate: 5501 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total shared memory per multiprocessor: 102400 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Managed Memory: Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.6, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS
```
The card is detected correctly and everything runs normally.
Run another simple matrix multiplication:
```
root@0b09ee5e9284:~/code/cuda-samples/bin/x86_64/linux/release# ./matrixMul
[Matrix Multiply Using CUDA] - Starting...
GPU Device 0: "Ampere" with compute capability 8.6
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 294.72 GFlop/s, Time= 0.445 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: Result = PASS
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
```
Hmm, no problem.
Compile TVM as a test
As a simple test, I compiled TVM under both WSL2 and the dual-boot Ubuntu. The TVM version tested is commit 779dc51e1332f417fa4c304b595ce76891dfc33a; the Windows side was set to ROG's performance mode, the Ubuntu system had no extra settings, the CMake settings were identical, and compilation used `ninja -j8`.
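For context, roughly what that build looks like; a sketch assuming the standard TVM build flow, with the config.cmake options left the same on both systems:

```bash
git clone --recursive https://github.com/apache/tvm.git
cd tvm && git checkout 779dc51e1332f417fa4c304b595ce76891dfc33a
# Standard TVM out-of-tree build: copy the config, generate Ninja files, time the build
mkdir build && cp cmake/config.cmake build/
cd build && cmake -G Ninja ..
time ninja -j8
```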
The compile-time difference between WSL2 and native Ubuntu was about 30 s, nearly 2%, which is not much. To be fair, the comparison isn't rigorous; the maximum CPU frequency of the two systems wasn't unified. It's just a quick test~
With VSCode
Developing with VSCode is very common. VSCode's Remote-SSH plugin lets us connect to a remote server and develop on it as comfortably as on a local machine.
Similarly, there is a VSCode plugin that connects directly to Docker under WSL2. After `docker run` on Windows, you can find the container in VSCode on the Windows side:
After executing Attach VSCode, you enter the container's environment inside VSCode:
Development is exactly the same as VSCode under Ubuntu: you have root privileges, you can install extensions, debug code, and do whatever you like.
So far, developing with WSL2 on Win11 has gone smoothly~
Is Win11 good or not?
When I first got this laptop it ran Win10, and I had no plans to upgrade to Win11. But using WSL2 on Win10 is more troublesome, while Win11 ships with WSL2, so I upgraded.
The upgrade was smoother than expected: you upgrade directly in Settings, download the update, and restart in one go. After restarting you have the new system, and all the old software still works.
It is said that Win11's CPU scheduling is worse than Win10's and that games suffer more, but I can't feel it. In use it looks better than Win10, the core operations are not much different, and it's friendlier to touch-screen users.
What pleasantly surprised me is that Win11 has a built-in feature similar to the paid Mac app Paste: Win+V brings up the recent clipboard history and lets you choose what to paste, images included.
problems encountered
One rather nasty problem: the Win11 + Ubuntu 20.04 dual boot was working fine until one day ASUS pushed a BIOS upgrade (407 -> 408). I upgraded it without a second thought. Afterward I was dumbfounded to find I could no longer boot into Ubuntu, though Win11 was fine.
I thought it was a bootloader problem, so I spent a long time fiddling with GRUB, but even repairing it from the Ubuntu installer on the USB drive didn't help.
Finally, I happened to find on Reddit that BIOS version 408 apparently isn't compatible with Ubuntu 20.04, so the fix is simply to downgrade the BIOS.
Related question links:
ASUS bios download official website:
use lldb
If you want to debug a clang-compiled binary in VSCode, install the CodeLLDB extension and then configure it in launch.json:
```json
{
    "type": "lldb",
    "request": "launch",
    "name": "lldb launch",
    "program": "/path/to/a.out",
},
```
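One assumption worth stating: the binary needs debug symbols for breakpoints to bind, so compile along these lines (the flags are the usual ones; `main.c` is just a placeholder):

```bash
# -g emits debug info; -O0 keeps the code un-optimized so stepping matches the source
clang -g -O0 main.c -o a.out
```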
That's it~
3080 docking station
Because the Magic 13 can connect ASUS's own expansion dock graphics card through a dedicated PCIe expansion port, the transfer rate is much higher than Thunderbolt 4 and the external GPU is nearly lossless. So I bought the 3080 graphics dock on Amazon Japan, 7300 plus about 800 in tax, and it finally arrived after a month's wait.
First of all, this is not a true desktop RTX 3080. It is essentially a full-power desktop RTX 3070's GA104 core with a few more CUDA cores, and because of the power limit it actually runs slightly slower than a desktop 3070. But I bought it mainly for the 16 GB of video memory, which is genuinely suited to alchemy (i.e. model training)~
Judging from Big Stone's review, with this 3080 dock and the Magic 13 in enhanced mode under a dual CPU+GPU stress test, the GPU can run at 150 W at 82 °C and the CPU at 45 W at 95 °C. Honestly, the temperatures are a bit high; if you use it regularly, I'd recommend dialing it down, since this thing is rather delicate.
One thing must be clear: a graphics card's performance is directly proportional to its power budget. Whether desktop, laptop, or embedded, the more power it can draw, the stronger the performance.
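If you want to see the power limit on your own card, `nvidia-smi` can report it; just a quick check:

```bash
# Show current draw, default limit, and enforced limit for the GPU
nvidia-smi -q -d POWER
```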
Also, after switching to the 3080 dock on the Win11 side, rebooting into Ubuntu picks up the 3080 directly, which is cool.
Desktop configuration
By the way, let me talk about the rather uncommon monitor light bar. At first I figured the natural use case for a light bar was an e-ink monitor, because e-ink doesn't emit light: fine during the day, unusable at night. A desk lamp can illuminate it, but the light is uneven or incomplete, like this:
That's how it looks with only the desk lamp on and the light bar above switched off; working like that is just too uncomfortable.
With the light bar turned on:
You can see the e-ink screen is far better lit... and the desktop is illuminated too. With the desk lamp off and only the light bar on, reading is no problem at all.
The lamp is BenQ's ScreenBar Halo. Having paired such an expensive e-ink monitor, I figured it deserved a decent light bar. The Xiaomi one I used before never felt bright enough, so I simply got a better one. Compared with the Xiaomi it is certainly much more refined, and the brightness is much improved. Maximum brightness needs a hefty current supply (1 A is barely enough; 1.5 to 2 A is about right); the old Xiaomi was less demanding (because it wasn't as bright) and even ran fine off a power bank.
To be honest, though, this isn't how a light bar is meant to be used. It's meant to cast ambient light on the desktop so you can read documents without a lamp taking up space. The key design point is that the light is asymmetric (we mortals wouldn't get it): it deliberately avoids hitting an ordinary screen, to prevent glare (an ordinary screen emits its own light, unlike my e-ink one). But I actually want the bar to shine on my screen (precisely because e-ink doesn't emit light). Used normally, the BenQ won't illuminate the screen at all, and that is genuinely good design, since a light bar isn't supposed to hit the screen and cause reflections. So all I can do is forcibly tilt the bar to shine on my screen as much as possible, which is a bit of an off-label use.
I also tried it with a normal screen:
Neither the upper nor the lower screen shows any reflection, and everything is clear to see, which is great~ Reading code and playing games are both fine, just like any ordinary screen. The light bar fills in the surrounding light so your eyes don't tire as much.
It counts as productivity gear after all; once you've used a screen light bar, there's no going back.
That's all for this article; the next one will be the kind of technical piece everyone is familiar with~
References
- https://docs.nvidia.com/cuda/wsl-user-guide/index.html
- https://zhuanlan.zhihu.com/p/455979556
- https://www.codetd.com/fr/article/13628610
- https://developer.nvidia.com/blog/leveling-up-cuda-performance-on-wsl2-with-new-enhancements/
- https://l-zb.com/?id=49#
tease me
- If you're like-minded, Lao Pan is more than willing to chat with you!
- If you like Lao Pan's content, please follow and support~
- If you have any questions and want to contact me, you can follow the official account and send a private message directly. Click here!