Hi everyone. I am a fan of deep reinforcement learning. To break the computing-power bottleneck, I tried to add GPU support to the quantconnect/research image. Please follow these steps:
Could someone edit the lean package on pip? When we run lean research "project new", it needs to run something like the following command in the background:
docker run -t -i --gpus all quantconnect/research:gpu /bin/bash
We need to add "--gpus all" in Lean's code (\lean-1.0.71\lean\commands\research.py). Can someone edit the code?
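A hypothetical sketch of what that change amounts to, assuming lean-cli launches the research container through the Docker Python SDK (the names below are illustrative, not lean-cli's actual code): the `--gpus all` flag corresponds to adding a device request to the container's run options. The dict is the plain form of `docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])`.

```python
def gpu_device_request():
    """Return a device-request dict equivalent to `docker run --gpus all`."""
    return {
        "Driver": "",
        "Count": -1,                # -1 requests all available GPUs
        "DeviceIDs": [],
        "Capabilities": [["gpu"]],  # require the `gpu` capability
        "Options": {},
    }

# Illustrative container run options, as they might be assembled in research.py.
run_options = {"tty": True, "stdin_open": True}
run_options["device_requests"] = [gpu_device_request()]
print(run_options["device_requests"])
```

Note the resemblance to the `[{'Driver': '', 'Count': -1, ...}]` value reported later in this thread, which is exactly how Docker represents an all-GPUs request.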
OTreeWEN
1. Install Windows 11 and Docker
2. Install WSL: https://docs.microsoft.com/zh-cn/windows/wsl/install
3. Install Lean
4. docker pull quantconnect/research:latest
5. Change the apt update source to the Aliyun cloud mirror:
cp /etc/apt/sources.list /etc/apt/sources.list.bak
rm /etc/apt/sources.list
vim /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
apt-get update
apt-get upgrade
6. Install the NVIDIA GPU driver for Windows Subsystem for Linux (WSL):
https://developer.nvidia.com/cuda/wsl
7. Install CUDA:
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.4.2/local_installers/cuda-repo-wsl-ubuntu-11-4-local_11.4.2-1_amd64.deb
dpkg -i cuda-repo-wsl-ubuntu-11-4-local_11.4.2-1_amd64.deb
apt-key add /var/cuda-repo-wsl-ubuntu-11-4-local/7fa2af80.pub
apt-get update
apt-get -y install cuda
8. Commit the changes to the image (d239f9553fef is the container's ID):
docker commit -m="try to add gpu" -a="otw" d239f9553fef quantconnect/research:gpu
9. Run the new version of the image with bash:
docker run -t -i --gpus all quantconnect/research:gpu /bin/bash
10. Test the result:
cd /usr/local/cuda-11.4/samples/4_Finance/BlackScholes
make BlackScholes
./BlackScholes
The output:
[./BlackScholes] - Starting...
GPU Device 0: "Ampere" with compute capability 8.6
Initializing data...
...allocating CPU memory for options.
...allocating GPU memory for options.
...generating input data in CPU mem.
...copying input data to GPU mem.
Data init done.
Executing Black-Scholes GPU kernel (512 iterations)...
Options count : 8000000
BlackScholesGPU() time : 0.258305 msec
Effective memory bandwidth: 309.711765 GB/s
Gigaoptions per second : 30.971176
BlackScholes, Throughput = 30.9712 GOptions/s, Time = 0.00026 s, Size = 8000000 options, NumDevsUsed = 1, Workgroup = 128
Reading back GPU results...
Checking the results...
...running CPU calculations.
Comparing the results...
L1 norm: 1.741792E-07
Max absolute error: 1.192093E-05
Shutting down...
...releasing GPU memory.
...releasing CPU memory.
Shutdown done.
[BlackScholes] - Test Summary
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Test passed
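As an aside, the sample's "Checking the results" step compares the GPU kernel against a CPU reference of the closed-form Black-Scholes price. A minimal Python sketch of that formula (my own illustration, not the CUDA sample's code):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Example: at-the-money call, 1 year to expiry, 5% rate, 20% vol -> about 10.45
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.2), 4))
```

The CUDA sample evaluates this same closed form for millions of options in parallel, which is why the "Max absolute error" line in the output is tiny.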
OTreeWEN
docker run -t -i --gpus all quantconnect/research:gpu /bin/bash
root@755e27a624ad:/Lean/Launcher/bin/Debug# python
Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda.is_available())
True
>>>
OTreeWEN
I followed these steps and updated the lean package. Now I can use the lean command to run the research Jupyter environment with GPU support.
Louis Szeto
Hi OTreeWEN
Thank you for bringing up this issue and for the help. We highly appreciate this input.
For anyone who needs it: to set the quantconnect/research:gpu image as the default research image for the CLI, run the following in a terminal/PowerShell session in the LeanCLI root directory:
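The original command did not survive in this thread; assuming a recent lean-cli, the research image is configured via the `research-image` config key, likely something like:

```shell
lean config set research-image quantconnect/research:gpu
```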
Note that in order to keep receiving updates, you'll need to redo the whole GPU setup process after lean-cli updates.
To revert to our default research image, please run:
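This command is also missing from the thread; under the same assumption, pointing the `research-image` key back at the standard tag would look like:

```shell
lean config set research-image quantconnect/research:latest
```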
Best
Louis
The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by QuantConnect. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. QuantConnect makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. All investments involve risk, including loss of principal. You should consult with an investment professional before making any investment decisions.
Jared Broad
We added GPU versions of our cloud accessible here. Hopefully it does well, and we can justify open-sourcing the base image.
Mihir Verma
Hi, I have found this to be the only guide/thread about using a local GPU for LeanCLI backtests/research, so if I'm mistaken and there are more thorough guides out there, please point me to them.
However, even with this, I have followed the steps (to my understanding) and have been able to make the Docker image find the GPU with torch:
But when using the same image to run my backtests, it does not find my GPU.
Even when I do
It does not find my GPU.
Why is this happening and how can I solve this?
One difference I found in my case vs OP's case was that when he does:
He gets "[{'Driver': '', 'Count': -1, 'DeviceIDs': [], 'Capabilities': [['gpu']], 'Options': {}}]", but I get "/LeanCLI is not mounted, skipping compilation…". I tried to search online for what the reason could be, but I could not find anything. I tried pulling a fresh image of LeanCLI, running "lean build", and running "lean init" in the same directory, but it did not help. It still does not find a GPU.
Please guide me here on how to use my GPU with LeanCLI locally!
OTreeWEN