Running out of disk space on GitLab runners

Hammond, Glenn E

Jul 3, 2023, 8:11:22 PM
to pflotran-dev (pflotran-dev@googlegroups.com), Madsen, Calvin Franklin
Heeho/Calvin,

I have been researching the issue of running out of disk space this afternoon.

We are using "shared runners" to execute our GitLab pipelines in the cloud. Think of a GitLab runner as a virtual machine (VM) that executes the GitLab pipeline. Runners pull Docker images and execute them (layering of VMs). These runners are running on GitLab's servers, and the hardware resources are "shared" with other users in the cloud. Another option is to execute "runners" on our own machines where the resources are much larger, but then we have to maintain the machines. Therefore, we prefer the free, shared resource.

Shared runners are limited to ~20 GB of disk space, which should be sufficient for what you are attempting. However, I can see that often ~50% of the disk space is already occupied at the beginning of the job, and in many cases far more than 50%. It varies from run to run, and that randomness is likely what Calvin is seeing in the CI results: runners scheduled on "cleaner" machines (disks with less clutter) run just fine. Bottom line: due to clutter, we are not receiving our full share of the resource, and the causes are beyond our control.
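One way to see this non-uniformity directly is to print the runner's disk usage at the start of each pipeline. A minimal sketch of such a diagnostic job (the job name is illustrative, not something already in our pipeline; `.pre` is GitLab's built-in stage that runs before all others):

```yaml
# Hypothetical diagnostic job: report free disk space on whichever
# shared runner picks up the job, before any real work starts.
check-disk:
  stage: .pre
  script:
    - df -h                       # per-filesystem usage on this runner
    - docker system df || true    # image/layer usage, if the docker CLI is present
```

Comparing this output across pipeline runs would confirm whether failures correlate with runners that start out nearly full.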

Perhaps our best option is to clean up each runner at the beginning of each stage (since each stage can land on different hardware). We need to look into how to accomplish this. Others have run into the same problem and developed workarounds; we need to figure out which one resolves the issue for us.
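For example, one commonly cited workaround is to shrink what each job pulls onto the runner and to clean the workspace aggressively between jobs. A sketch of what that might look like in `.gitlab-ci.yml` (untested here; `GIT_DEPTH` and `GIT_CLEAN_FLAGS` are standard GitLab CI variables, but the paths in `before_script` are purely illustrative):

```yaml
variables:
  GIT_DEPTH: "1"            # shallow clone: fetch only the latest commit
  GIT_CLEAN_FLAGS: -ffdx    # scrub untracked/ignored files left by earlier jobs

default:
  before_script:
    # Remove any large outputs a previous job on this runner left behind.
    # (Paths are hypothetical; adjust to whatever our stages actually produce.)
    - rm -rf build/ || true
```

Whether this is enough depends on how much of the clutter lives outside the job workspace, which the shared-runner configuration may not let us touch.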

Glenn