Hi,
I am looking to compare a set of downsampled vcf files to a truth set using rtg-tools vcfeval. I am running the command:
rtg vcfeval --baseline $truth_set --calls $downsampled_calls --template $output_sdf --output $output_dir
The data is genome-wide human data in gVCF format. The truth set is 1.6 GB, and the downsampled call sets vary in size with coverage; the largest is around 846 MB.
I am requesting a large amount of memory for the job on the compute node (via LSF), up to 248 GB.
Each time I run this, a number of the jobs fail with the following Java error:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f0c00480000, 17997234176, 0) failed; error='Cannot allocate memory' (errno=12)
The operating system did not make requested memory available to the JVM. Try removing other jobs on this machine, adjusting allocated memory appropriate to currently available memory, or adjusting command parameters to reduce memory requirements. More information is contained in the file: ./hs_err_pid30143.log
I have tried increasing the requested memory up to 248 GB, but the error persists. Strangely, the runs against the larger downsampled call sets tend to finish, while the smaller ones do not.
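One thing I have not yet tried is pinning the JVM heap explicitly. As far as I understand, the rtg launcher script reads an RTG_MEM environment variable, so something like the sketch below should cap the heap (32G is an arbitrary trial value, and the file names are placeholders rather than my real paths):

```shell
# Sketch: cap the JVM heap via RTG_MEM (the rtg launcher script reads this
# variable) instead of letting it default based on node memory.
export RTG_MEM=32G   # arbitrary trial value, well under the LSF allocation

# Placeholder invocation; only run if rtg is actually on the PATH
if command -v rtg >/dev/null; then
    rtg vcfeval --baseline truth.vcf.gz --calls downsampled.vcf.gz \
        --template template_sdf --output eval_out
fi
```

Would capping the heap this way make the commit_memory failures more or less likely?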
I suspect the error comes from how the JVM is requesting or committing memory, rather than from a genuine shortage of memory on the node. Or perhaps it is related to the amount of memory each thread uses?
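Is there a way to tell whether the kernel itself is refusing the commit (for example strict overcommit accounting or a per-process limit)? I could check the node with something like:

```shell
# Inspect what the node will actually commit, independent of the LSF reservation
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always, 2 = strict accounting
free -g                              # total / used / available memory in GiB
ulimit -v                            # per-process virtual memory limit, if any
```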
Could you suggest some ways I could fix the error?
Thanks.