No, all the archiving does is remove the pointer. What Slurm does right now is create a hash of the job_script/job_env and then check whether that hash matches one already on record. If not, it adds the script/env to the record; if it does match, it adds a pointer to the existing record. So you can think of the job_script/job_env storage as an internal database of all the various scripts and envs that Slurm has ever seen, with the job record holding only a pointer into that database. This way Slurm can deduplicate scripts/envs that are identical. This works great for job_scripts, since many jobs share functionally identical scripts and thus point to the same stored copy, but less so for job_envs, which tend to differ from job to job.
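
To illustrate the idea, here is a minimal sketch of that content-addressed deduplication in Lua. This is not Slurm's actual code; the table names and the toy hash function are made up for illustration:

    -- Conceptual sketch only, not Slurm source: store each distinct
    -- script/env once, keyed by its hash, and give each job a pointer.
    local store = {}          -- hash -> script/env text (the internal database)
    local job_pointer = {}    -- job id -> hash (what lands in the job record)

    -- Toy hash for illustration; a real implementation would use a
    -- proper cryptographic hash.
    local function toy_hash(s)
      local h = 0
      for i = 1, #s do h = (h * 31 + s:byte(i)) % 2^32 end
      return h
    end

    local function record_script(job_id, text)
      local h = toy_hash(text)
      if store[h] == nil then
        store[h] = text       -- first time this content is seen: store it
      end
      job_pointer[job_id] = h -- duplicates just reuse the existing entry
    end

In this picture, archiving a job removes only its job_pointer entry; the shared copy in store stays put.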
-Paul Edmon-
Yes, it was later than that. If you are on 23.02 you are good. We've been running with storing job_scripts on for years at this point, and that part of the database only uses up 8.4G. Our entire database takes up 29G on disk, so it's about 1/3 of the database. We also have database compression, which helps with the on-disk size; raw and uncompressed, our database would be about 90G. We keep 6 months of data in our active database.
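
The 6-month window is just slurmdbd's purge settings. A hedged sketch of the relevant slurmdbd.conf lines follows; the parameter names are real, but the values and archive path are illustrative, not necessarily exactly what we run:

    # slurmdbd.conf sketch: keep ~6 months of job/step data live and
    # archive the rest instead of discarding it
    ArchiveJobs=yes
    ArchiveSteps=yes
    ArchiveDir=/var/spool/slurmdbd/archive
    PurgeJobAfter=6month
    PurgeStepAfter=6month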
-Paul Edmon-
At least in our setup, users can see their own scripts by running: sacct -B -j JOBID
I would make sure that the scripts are actually being stored, and check how you have PrivateData set.
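
Concretely, that means checking something like the following (parameter names are real, values illustrative):

    # slurm.conf: enable shipping batch scripts (and envs) to slurmdbd
    AccountingStoreFlags=job_script,job_env

    # slurmdbd.conf: with "jobs" set, users can only view their own job
    # records, and thus only their own scripts via sacct -B
    PrivateData=jobs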
-Paul Edmon-
You will probably need to.
The way we handle it is that we add users when they first submit a job, via the job_submit.lua script. This way the database autopopulates with active users.
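
A rough sketch of what that can look like in job_submit.lua; this is not our production script, and the account name, the shell-out to sacctmgr, and the availability of job_desc.user_name are all assumptions to keep the example short:

    -- Hypothetical job_submit.lua sketch: auto-add the submitting user
    -- to the accounting database on their first job submission.
    -- Assumes sacctmgr is in slurmctld's PATH and job_desc.user_name
    -- is populated; "default_acct" is a placeholder account name.
    function slurm_job_submit(job_desc, part_list, submit_uid)
        local user = job_desc.user_name
        if user ~= nil then
            -- "-i" answers yes to prompts; re-adding an existing user
            -- effectively does nothing
            os.execute(string.format(
                "sacctmgr -i add user name=%s account=default_acct >/dev/null 2>&1",
                user))
        end
        return slurm.SUCCESS
    end

    function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
        return slurm.SUCCESS
    end

In practice you would want to guard this, e.g. cache users already seen, since it shells out on every single submission.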
-Paul Edmon-