Shared weights whenever multiple instances #18

Open

DavidLangworthy opened this issue Nov 30, 2020 · 10 comments

@DavidLangworthy

Also CPU

@DavidLangworthy changed the title from "Shared weights whenever multiple instances on the same GPU." to "Shared weights whenever multiple instances" on Dec 7, 2020
@Jackiexiao

any progress?

@robertbagge commented Jul 12, 2022

+1 for this. I did some benchmarking on this today.

This is with 1 instance each of 3 ONNX models:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 74%   64C    P2   235W / 350W |   4416MiB / 24576MiB |     40%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2054      G   /usr/lib/xorg/Xorg                  8MiB |
|    0   N/A  N/A      2270      G   /usr/bin/gnome-shell                6MiB |
|    0   N/A  N/A   1498503      C   tritonserver                     4397MiB |
+-----------------------------------------------------------------------------+

This is with 2 instances each of the same 3 ONNX models:

Tue Jul 12 15:19:46 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.48.07    Driver Version: 515.48.07    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 73%   65C    P2   236W / 350W |   7238MiB / 24576MiB |     50%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2054      G   /usr/lib/xorg/Xorg                  8MiB |
|    0   N/A  N/A      2270      G   /usr/bin/gnome-shell                6MiB |
|    0   N/A  N/A   1653466      C   tritonserver                     7219MiB |
+-----------------------------------------------------------------------------+
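
For context, the only difference between the two runs is the instance count in each model's config.pbtxt; the tritonserver footprint grows from 4397MiB to 7219MiB, consistent with each instance loading its own copy of the weights. A sketch of the relevant config (the model name is illustrative):

```
# config.pbtxt (sketch) -- the instance count is the only knob changed
# between the two runs above.
name: "model_a"            # illustrative name
platform: "onnxruntime_onnx"
instance_group [
  {
    count: 2               # 1 in the first run, 2 in the second
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]
```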

@GuanLuo commented Jul 13, 2022

CC @pranavsharma: does ORT provide an API for doing so? Or can an ORT session be run for different inferences in parallel?

@pranavsharma commented Jul 13, 2022

> CC @pranavsharma: does ORT provide an API for doing so? Or can an ORT session be run for different inferences in parallel?

I'm not fully following. What API are you looking for? I believe Triton already creates a separate session for each instance, and these instances (sessions) can be used to run inferences in parallel. The drawback is that each session has its own copy of the weights, thereby replicating the memory consumption. Someone has submitted code changes to share a session between different instances. We're reviewing the changes. This should fix the memory consumption problem.
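
To make the distinction concrete, here is a minimal sketch (not Triton's actual backend code) of the session-sharing idea: `Ort::Session::Run()` is thread-safe, so a single session, and therefore a single copy of the weights, can serve concurrent inferences. The model path is a placeholder and tensor setup is elided:

```cpp
// Sketch only: one Ort::Session shared by several workers, so the weights
// are loaded once. Per-instance sessions would each load their own copy.
#include <onnxruntime_cxx_api.h>
#include <thread>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "shared-session");
  Ort::SessionOptions opts;
  Ort::Session session(env, "model.onnx", opts);  // placeholder model path

  auto worker = [&session] {
    // Each worker builds its own input tensors, then calls
    // session.Run(...) concurrently; Run() is thread-safe.
    // (Tensor construction elided for brevity.)
  };

  std::vector<std::thread> pool;
  for (int i = 0; i < 2; ++i) pool.emplace_back(worker);
  for (auto& t : pool) t.join();
}
```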

@GuanLuo commented Jul 13, 2022

> Someone has submitted code changes to share a session between different instances. We're reviewing the changes. This should fix the memory consumption problem.

Yes, this is what I was looking for. Sorry for not being clear in my previous question; I was just musing on the different ways to share a single copy of the weights across multiple instances that I have seen in different frameworks. For example, TRT stores weights in an "engine" and can create multiple "context"s that map to the same "engine".
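
A rough sketch of that TRT pattern, for illustration (assumes a deserialized engine is already in hand; plain `delete` on a context is valid in TensorRT 8+, where `destroy()` is deprecated):

```cpp
// Sketch of the TensorRT pattern: weights live once in the ICudaEngine;
// each execution context only adds per-invocation state (activations).
#include <NvInfer.h>
#include <memory>

void run_two_instances(nvinfer1::ICudaEngine& engine) {
  // Two contexts mapping to the same engine, i.e. the same weights.
  std::unique_ptr<nvinfer1::IExecutionContext> ctx0(engine.createExecutionContext());
  std::unique_ptr<nvinfer1::IExecutionContext> ctx1(engine.createExecutionContext());
  // ctx0 and ctx1 can now serve inferences in parallel, e.g. enqueued
  // on separate CUDA streams.
}
```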

@heliqi commented Nov 1, 2022

@pranavsharma any progress on "sharing a session between different instances of ONNXRuntime"?

@pranavsharma

> @pranavsharma any progress on "sharing a session between different instances of ONNXRuntime"?

I should be able to get to it this week.

@heliqi commented Nov 2, 2022

> > @pranavsharma any progress on "sharing a session between different instances of ONNXRuntime"?
>
> I should be able to get to it this week.

GOOD! I look forward to hearing from you soon.

@heliqi commented Nov 9, 2022

@pranavsharma any progress?

@FabianSchuetze commented Feb 1, 2023

Is there any news about sharing GPU memory? Is it the PR you mentioned, #141, @pranavsharma?

We have to switch models regularly and sharing memory would be very beneficial.
