Describe the bug
I’m running into a strange issue with my self-hosted GitHub Actions runner where the listener process is using an enormous amount of virtual memory—around 700GB—even when it’s idle and no workflows are running. This is causing problems on my private VPS, as other applications are being terminated due to insufficient swap space.
What's not working?
Enormous VMEM usage. Is it possible to limit the VMEM usage for the runner? Is there a config setting somewhere where I could specify a maximum for virtual memory?
Main part of the workflow used for building the Docker images and running the containers:
jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Add GitHub to the SSH known hosts file
        run: |
          [...]
      - name: Collect Git and SSH config files in a directory that is part of the Docker build context
        run: |
          mkdir root-config
          cp -r ~/.gitconfig ~/.ssh root-config/
      - name: Stop and Remove Existing Container (if running)
        run: |
          docker stop app-test || true
          docker rm app-test || true
      - name: Remove Old Docker Image (if exists)
        run: |
          docker rmi app:test || true
      - name: Build docker image
        run: |
          docker build -f test.dockerfile -t app:test --ssh default=${{ env.SSH_AUTH_SOCK }} .
      - name: Run the container
        run: |
          docker run \
            -e LOG_DIR=/home/app/logs \
            -e WORK_DIR=/home/app \
            --cpus="8.0" \
            --memory="100g" \
            -d \
            --name app-test \
            app:test
Runner Version and Platform
Current runner version: '2.322.0'
Ubuntu 24.04.1 LTS
I’m running the self-hosted runner directly on a private VPS with Ubuntu 20.04. This machine has 346 GB of RAM.
The runner is set up to process workflows for my repositories (building Docker images of my app), but this high memory usage happens even when no jobs are active.
When I check with htop, the VIRT (virtual memory) for the listener process shows ~700GB, while the RES (physical memory) stays low, below 100 MB.
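For reference, the same numbers can be read without htop; assuming the listener process shows up as Runner.Listener, something like:
# VSZ/RSS are reported in KiB; VSZ corresponds to htop's VIRT, RSS to RES
ps -o pid,vsz,rss,comm -C Runner.Listener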
I tried using ulimit -v, but after setting it to 1 GB (or even 50 GB) the runner won't start (./run.sh):
devops-user:~/actions-runner$ ./run.sh
GC heap initialization failed with error 0x8007000E
Failed to create CoreCLR, HRESULT: 0x8007000E
Exiting with unknown error code: 137
Exiting runner...
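Error 0x8007000E is E_OUTOFMEMORY, so my guess is that ulimit -v blocks the large address-space reservation CoreCLR's GC makes at startup, rather than any real allocation. As an untested sketch of an alternative, the .NET GC itself could be capped through runtime environment variables set before launching the runner (the numeric value is interpreted as hexadecimal):
# Untested workaround sketch: force workstation GC and cap the GC heap at ~10% of RAM
export DOTNET_gcServer=0
export DOTNET_GCHeapHardLimitPercent=0xA   # hex 0xA = 10%
./run.sh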