Error: *** caught segfault ***


Sajal Sthapit

Nov 10, 2022, 9:17:38 AM
to STITCH imputation
Hello Dr Davies,

I have successfully run STITCH, with and without --genfile, on 100 Mbp and 200 Mbp segments of the chromosome with 1557 to 1617 samples. But when I attempt to run it on the entire chromosome (574 Mbp), I get a "caught segfault" error after about 4 hours, just after the downsample step for sample #309 of 1617. I had allocated 1000 GB of RAM for the job. Can you help me understand what I am doing wrong? Thank you.

[2022-11-09 20:25:52] downsample sample C7_EMS2025 - 7 of 427062 reads removed

*** caught segfault ***
address 0xafb2c8000, cause 'memory not mapped'

Traceback:
1: cpp_read_reassign(ord = ord, qnameInteger_ord = qnameInteger_ord, bxtagInteger_ord = bxtagInteger_ord, bxtag_bad_ord = bxtag_bad_ord, qname = qname, bxtag = bxtag, strand = strand, sampleReadsRaw = sampleReadsRaw, readStart_ord = readStart_ord, readEnd_ord = readEnd_ord, readStart = readStart, readEnd = readEnd, iSizeUpperLimit = iSizeUpperLimit, bxTagUpperLimit = bxTagUpperLimit, use_bx_tag = use_bx_tag, save_sampleReadsInfo = save_sampleReadsInfo)
2: merge_reads_from_sampleReadsRaw(sampleReadsRaw = sampleReadsRaw, qname = qname, bxtag = bxtag, strand = strand, readStart = readStart, readEnd = readEnd, iSizeUpperLimit = iSizeUpperLimit, use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit, save_sampleReadsInfo = save_sampleReadsInfo, qname_all = qname_all, readStart_all = readStart_all, readEnd_all = readEnd_all)
3: loadBamAndConvert(iBam = iBam, L = L, pos = pos, nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files, reference = reference, iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr, N = N, downsampleToCov = downsampleToCov, sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases, regionName = regionName, tempdir = tempdir, chrStart = chrStart, chrEnd = chrEnd, chrLength = chrLength, save_sampleReadsInfo = save_sampleReadsInfo, use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
4: FUN(X[[i]], ...)
5: lapply(X = X, FUN = FUN, ...)
6: mclapply(1:length(sampleRanges), mc.cores = nCores, FUN = loadBamAndConvert_across_a_range, sampleRanges = sampleRanges, bundling_info = bundling_info, L = L, pos = pos, nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files, reference = reference, iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr, N = N, downsampleToCov = downsampleToCov, sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases, regionName = regionName, tempdir = tempdir, chrStart = chrStart, chrEnd = chrEnd, chrLength = chrLength, save_sampleReadsInfo = save_sampleReadsInfo, use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
7: generate_input(bundling_info = bundling_info, L = L, pos = pos, nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files, reference = reference, iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr, N = N, downsampleToCov = downsampleToCov, sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases, regionName = regionName, tempdir = tempdir, chrStart = chrStart, chrEnd = chrEnd, nCores = nCores, save_sampleReadsInfo = save_sampleReadsInfo, use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
8: generate_or_refactor_input(regenerateInput = regenerateInput, bundling_info = bundling_info, L = L, pos = pos, nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files, reference = reference, iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr, outputdir = outputdir, N = N, downsampleToCov = downsampleToCov, sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases, regionName = regionName, tempdir = tempdir, chrStart = chrStart, chrEnd = chrEnd, generateInputOnly = generateInputOnly, nCores = nCores, save_sampleReadsInfo = save_sampleReadsInfo, use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
9: STITCH(chr = opt$chr, posfile = opt$posfile, K = opt$K, S = opt$S, nGen = opt$nGen, outputdir = opt$outputdir, tempdir = opt$tempdir, bamlist = opt$bamlist, cramlist = opt$cramlist, sampleNames_file = opt$sampleNames_file, reference = opt$reference, genfile = opt$genfile, method = opt$method, output_format = opt$output_format, B_bit_prob = opt$B_bit_prob, outputInputInVCFFormat = opt$outputInputInVCFFormat, downsampleToCov = opt$downsampleToCov, downsampleFraction = opt$downsampleFraction, readAware = opt$readAware, chrStart = opt$chrStart, chrEnd = opt$chrEnd, regionStart = opt$regionStart, regionEnd = opt$regionEnd, buffer = opt$buffer, maxDifferenceBetweenReads = opt$maxDifferenceBetweenReads, maxEmissionMatrixDifference = opt$maxEmissionMatrixDifference, alphaMatThreshold = opt$alphaMatThreshold, emissionThreshold = opt$emissionThreshold, iSizeUpperLimit = opt$iSizeUpperLimit, bqFilter = opt$bqFilter, niterations = opt$niterations, shuffleHaplotypeIterations = eval(parse(text = opt$shuffleHaplotypeIterations)), splitReadIterations = eval(parse(text = opt$splitReadIterations)), nCores = opt$nCores, expRate = opt$expRate, maxRate = opt$maxRate, minRate = opt$minRate, Jmax = opt$Jmax, regenerateInput = opt$regenerateInput, originalRegionName = opt$originalRegionName, keepInterimFiles = opt$keepInterimFiles, keepTempDir = opt$keepTempDir, switchModelIteration = opt$switchModelIteration, generateInputOnly = opt$generateInputOnly, restartIterations = opt$restartIterations, refillIterations = eval(parse(text = opt$refillIterations)), downsampleSamples = opt$downsampleSamples, downsampleSamplesKeepList = opt$downsampleSamplesKeepList, subsetSNPsfile = opt$subsetSNPsfile, useSoftClippedBases = opt$useSoftClippedBases, outputBlockSize = opt$outputBlockSize, outputSNPBlockSize = opt$outputSNPBlockSize, inputBundleBlockSize = opt$inputBundleBlockSize, genetic_map_file = opt$genetic_map_file, reference_haplotype_file = opt$reference_haplotype_file, reference_legend_file 
= opt$reference_legend_file, reference_sample_file = opt$reference_sample_file, reference_populations = eval(parse(text = opt$reference_populations)), reference_phred = opt$reference_phred, reference_iterations = opt$reference_iterations, reference_shuffleHaplotypeIterations = eval(parse(text = opt$reference_shuffleHaplotypeIterations)), output_filename = opt$output_filename, initial_min_hapProb = opt$initial_min_hapProb, initial_max_hapProb = opt$initial_max_hapProb, regenerateInputWithDefaultValues = opt$regenerateInputWithDefaultValues, plotHapSumDuringIterations = opt$plotHapSumDuringIterations, plot_shuffle_haplotype_attempts = opt$plot_shuffle_haplotype_attempts, plotAfterImputation = opt$plotAfterImputation, save_sampleReadsInfo = opt$save_sampleReadsInfo, gridWindowSize = opt$gridWindowSize, shuffle_bin_nSNPs = opt$shuffle_bin_nSNPs, shuffle_bin_radius = opt$shuffle_bin_radius, keepSampleReadsInRAM = opt$keepSampleReadsInRAM, useTempdirWhileWriting = opt$useTempdirWhileWriting, output_haplotype_dosages = opt$output_haplotype_dosages, use_bx_tag = opt$use_bx_tag, bxTagUpperLimit = opt$bxTagUpperLimit)
An irrecoverable exception occurred. R is aborting now ...
/var/spool/slurm/d/job8549000/slurm_script: line 21: 36620 Segmentation fault (core dumped) ./STITCH.R --chr=V01 --chrStart=1 --chrEnd=574115619 --bamlist=/fastscratch/sthapit/chops/Nov09a/bam_list.txt --posfile=/fastscratch/sthapit/chops/Nov09a/V01_hqSNPs_pos.txt --sampleNames_file=/fastscratch/sthapit/chops/Nov09a/sample_names.txt --genfile=/fastscratch/sthapit/chops/Nov09a/V01_gen.txt --outputBlockSize=500 --outputSNPBlockSize=100000 --gridWindowSize=100000 --plotAfterImputation=FALSE --outputdir=/fastscratch/sthapit/chops/Nov09a/ --K=4 --nGen=25 --nCores=1

Robbie Davies

Nov 17, 2022, 5:00:32 AM
to Sajal Sthapit, STITCH imputation
Hi,

Sorry for my slow reply. 

I've seen this error multiple times recently and haven't been able to reproduce it reliably. Does yours go away if you re-run?

One potential explanation is a mismatch between the libraries on the machine where you compiled STITCH and the one where you run it, for instance if you're on a compute cluster with heterogeneous machines. Is that the case here?

Best,
Robbie




Sajal Sthapit

Feb 1, 2023, 8:44:23 AM
to STITCH imputation
Hi Robbie,

Just wanted to give you an update. I have not been able to resolve the "caught segfault" error, but I have been able to brute-force my way through it. When running imputation on all the variants in a chromosome (ranging from 3 to 9 million) using 16 cores and 16 GB per core, roughly a third of the chromosomes succeed on a given attempt. I then simply repeated the same job on the chromosomes that failed. After 3-4 rounds, I had all 21 chromosomes imputed. Not an elegant solution, but it worked.

Robbie Davies

Feb 1, 2023, 10:12:11 AM
to Sajal Sthapit, STITCH imputation
Hi,

Sorry I never answered you. I saw similar messages from other people, and then, most usefully, I hit the error on my own data. I was able to track it down and fix it for my case, and pushed 1.6.7 in early December. Sorry I didn't reply to this thread about that.

Sorry you had to brute-force your way through. The bug, or at least the one whose fix resolved things for me, was an off-by-one out-of-bounds access in the C++ code. That is never a good thing, but it doesn't always lead to a crash; it depends on what the adjacent memory location is being used for.

Best,
Robbie



zhiqiang chen

May 10, 2023, 10:56:45 AM
to STITCH imputation
Hi Robbie,

Have you managed to solve this problem yet? I am imputing a ca. 20 Gbp conifer genome for more than 1000 individuals, and I see this error far too often. It forces many repeated runs of the same job.

Cheers
zhiqiang 

Robbie Davies

May 11, 2023, 7:33:30 AM
to zhiqiang chen, STITCH imputation
Hi,

Did you try the newer version with the bug fix, as mentioned in that earlier email?

If you're still getting errors with the latest version, can you reconfirm with the error and log message, etc.?

Thanks
Robbie


zhiqiang chen

May 12, 2023, 4:31:32 AM
to STITCH imputation
Hi Robbie,

Yes, I used the newer version released in February 2023. I have recently found that the error may be related to the region of the genome. To facilitate SNP calling, we split the whole genome into 100 Mbp chunks, and each 100 Mbp chunk into 250 regions, so I submit 250 jobs per chunk to our computing cluster. For some 100 Mbp chunks, the 250 jobs never produce any errors; for others, the error comes up quite often and I have to resubmit. For example, yesterday I resubmitted 8 failed jobs, but only 4 were successful.

# One of the examples
[2023-05-12 07:01:48] Running STITCH(chr = PA_chr01_9, nGen = 100, posfile = pos.tabulated.txt, K = 40, S = 1, outputdir = /proj/uppstore2017145/V3/users/chen/imputation/PA_chr01_9, nStarts = , tempdir = /proj/uppstore2017145/V3/users/chen/imputation/temp.PA_chr01_9, bamlist = /crex/proj/uppstore2017145/V3/users/chen/bin/STITCH/bamlist.KAW1063_new.txt, cramlist = , sampleNames_file = , reference = , genfile = , method = pseudoHaploid, output_format = bgvcf, B_bit_prob = 16, outputInputInVCFFormat = FALSE, downsampleToCov = 50, downsampleFraction = 1, readAware = TRUE, chrStart = NA, chrEnd = NA, regionStart = 42000001, regionEnd = 42400000, buffer = 10000, maxDifferenceBetweenReads = 1000, maxEmissionMatrixDifference = 1e+10, alphaMatThreshold = 1e-04, emissionThreshold = 1e-04, iSizeUpperLimit = 600, bqFilter = 17, niterations = 40, shuffleHaplotypeIterations = c(4, 8, 12, 16), splitReadIterations = -1, nCores = 15, expRate = 0.5, maxRate = 100, minRate = 0.1, Jmax = 1000, regenerateInput = TRUE, originalRegionName = NA, keepInterimFiles = FALSE, keepTempDir = FALSE, outputHaplotypeProbabilities = FALSE, switchModelIteration = 39, generateInputOnly = FALSE, restartIterations = NA, refillIterations = c(6, 10, 14, 18), downsampleSamples = 1, downsampleSamplesKeepList = NA, subsetSNPsfile = NA, useSoftClippedBases = FALSE, outputBlockSize = 1000, outputSNPBlockSize = 10000, inputBundleBlockSize = NA, genetic_map_file = , reference_haplotype_file = , reference_legend_file = , reference_sample_file = , reference_populations = NA, reference_phred = 20, reference_iterations = 40, reference_shuffleHaplotypeIterations = c(4, 8, 12, 16), output_filename = NULL, initial_min_hapProb = 0.2, initial_max_hapProb = 0.8, regenerateInputWithDefaultValues = FALSE, plotHapSumDuringIterations = FALSE, plot_shuffle_haplotype_attempts = FALSE, plotAfterImputation = TRUE, save_sampleReadsInfo = FALSE, gridWindowSize = NA, shuffle_bin_nSNPs = NULL, shuffle_bin_radius = 5000, 
keepSampleReadsInRAM = FALSE, useTempdirWhileWriting = FALSE, output_haplotype_dosages = FALSE, use_bx_tag = TRUE, bxTagUpperLimit = 50000)
[2023-05-12 07:01:48] Program start
[2023-05-12 07:01:48] Get and validate pos and gen
[2023-05-12 07:03:02] Done get and validate pos and gen
[2023-05-12 07:03:04] There are 1619 variants in the left buffer region 41990001 <= position < 42000001
[2023-05-12 07:03:04] There are 80469 variants in the central region 42000001 <= position <= 42400000
[2023-05-12 07:03:04] There are 2252 variants in the right buffer region 42400000 < position <= 42410000
[2023-05-12 07:03:04] Get BAM sample names
[2023-05-12 07:10:45] Done getting BAM sample names
[2023-05-12 07:10:45] Generate inputs
[2023-05-12 07:10:49] Load and convert BAM 500 of 1063
[2023-05-12 07:10:57] downsample sample Diploid - 9851 of 65253 reads removed
[2023-05-12 07:10:57] Load and convert BAM 1000 of 1063
[2023-05-12 07:11:06] downsample sample P_engelmannii - 6853 of 39375 reads removed
[2023-05-12 07:11:10] downsample sample P_glauca - 592 of 23525 reads removed
[2023-05-12 07:11:15] Load and convert BAM 300 of 1063
[2023-05-12 07:11:19] downsample sample Haploid_ERX242654 - 11425 of 72798 reads removed
[2023-05-12 07:11:19] downsample sample P21002-133 - 1198 of 46850 reads removed
[2023-05-12 07:11:20] Load and convert BAM 800 of 1063
[2023-05-12 07:11:25] downsample sample P21002-134 - 634 of 39746 reads removed
[2023-05-12 07:11:29] downsample sample P24354-103 - 1 of 21828 reads removed
[2023-05-12 07:11:29] downsample sample P15502-103 - 3 of 29694 reads removed
[2023-05-12 07:11:31] Load and convert BAM 600 of 1063
[2023-05-12 07:11:32] downsample sample P21002-135 - 1452 of 42061 reads removed
[2023-05-12 07:11:33] downsample sample P15502-104 - 124 of 26229 reads removed
[2023-05-12 07:11:37] downsample sample P21002-136 - 531 of 45823 reads removed
[2023-05-12 07:11:37] downsample sample P15502-105 - 48 of 31097 reads removed
[2023-05-12 07:11:42] downsample sample P21002-137 - 760 of 47196 reads removed
[2023-05-12 07:11:45] downsample sample P15502-110 - 28 of 30869 reads removed
[2023-05-12 07:11:51] downsample sample P21002-138 - 536 of 45249 reads removed
[2023-05-12 07:11:52] Load and convert BAM 400 of 1063
[2023-05-12 07:11:57] downsample sample P21002-139 - 720 of 45478 reads removed

 *** caught segfault ***
address 0xd4d11000, cause 'memory not mapped'

Traceback:
 1: cpp_read_reassign(ord = ord, qnameInteger_ord = qnameInteger_ord,     bxtagInteger_ord = bxtagInteger_ord, bxtag_bad_ord = bxtag_bad_ord,     qname = qname, bxtag = bxtag, strand = strand, sampleReadsRaw = sampleReadsRaw,     readStart_ord = readStart_ord, readEnd_ord = readEnd_ord,     readStart = readStart, readEnd = readEnd, iSizeUpperLimit = iSizeUpperLimit,     bxTagUpperLimit = bxTagUpperLimit, use_bx_tag = use_bx_tag,     save_sampleReadsInfo = save_sampleReadsInfo, maxnSNPInRead = maxnSNPInRead)

 2: merge_reads_from_sampleReadsRaw(sampleReadsRaw = sampleReadsRaw,     qname = qname, bxtag = bxtag, strand = strand, readStart = readStart,     readEnd = readEnd, iSizeUpperLimit = iSizeUpperLimit, use_bx_tag = use_bx_tag,     bxTagUpperLimit = bxTagUpperLimit, save_sampleReadsInfo = save_sampleReadsInfo,     qname_all = qname_all, readStart_all = readStart_all, readEnd_all = readEnd_all)
 3: loadBamAndConvert(iBam = iBam, L = L, pos = pos, nSNPs = nSNPs,     bam_files = bam_files, cram_files = cram_files, reference = reference,     iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr,     N = N, downsampleToCov = downsampleToCov, sampleNames = sampleNames,     inputdir = inputdir, useSoftClippedBases = useSoftClippedBases,     regionName = regionName, tempdir = tempdir, chrStart = chrStart,     chrEnd = chrEnd, chrLength = chrLength, save_sampleReadsInfo = save_sampleReadsInfo,     use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
 4: FUN(X[[i]], ...)
 5: lapply(X = S, FUN = FUN, ...)
 6: doTryCatch(return(expr), name, parentenv, handler)
 7: tryCatchOne(expr, names, parentenv, handlers[[1L]])
 8: tryCatchList(expr, classes, parentenv, handlers)
 9: tryCatch(expr, error = function(e) {    call <- conditionCall(e)    if (!is.null(call)) {        if (identical(call[[1L]], quote(doTryCatch)))             call <- sys.call(-4L)        dcall <- deparse(call)[1L]        prefix <- paste("Error in", dcall, ": ")        LONG <- 75L        sm <- strsplit(conditionMessage(e), "\n")[[1L]]        w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w")        if (is.na(w))             w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L],                 type = "b")        if (w > LONG)             prefix <- paste0(prefix, "\n  ")    }    else prefix <- "Error : "    msg <- paste0(prefix, conditionMessage(e), "\n")    .Internal(seterrmessage(msg[1L]))    if (!silent && isTRUE(getOption("show.error.messages"))) {        cat(msg, file = outFile)        .Internal(printDeferredWarnings())    }    invisible(structure(msg, class = "try-error", condition = e))})
10: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
11: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
12: FUN(X[[i]], ...)
13: lapply(seq_len(cores), inner.do)
14: mclapply(1:length(sampleRanges), mc.cores = nCores, FUN = loadBamAndConvert_across_a_range,     sampleRanges = sampleRanges, bundling_info = bundling_info,     L = L, pos = pos, nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files,     reference = reference, iSizeUpperLimit = iSizeUpperLimit,     bqFilter = bqFilter, chr = chr, N = N, downsampleToCov = downsampleToCov,     sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases,     regionName = regionName, tempdir = tempdir, chrStart = chrStart,     chrEnd = chrEnd, chrLength = chrLength, save_sampleReadsInfo = save_sampleReadsInfo,     use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
15: generate_input(bundling_info = bundling_info, L = L, pos = pos,     nSNPs = nSNPs, bam_files = bam_files, cram_files = cram_files,     reference = reference, iSizeUpperLimit = iSizeUpperLimit,     bqFilter = bqFilter, chr = chr, N = N, downsampleToCov = downsampleToCov,     sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases,     regionName = regionName, tempdir = tempdir, chrStart = chrStart,     chrEnd = chrEnd, nCores = nCores, save_sampleReadsInfo = save_sampleReadsInfo,     use_bx_tag = use_bx_tag, bxTagUpperLimit = bxTagUpperLimit)
16: generate_or_refactor_input(regenerateInput = regenerateInput,     bundling_info = bundling_info, L = L, pos = pos, nSNPs = nSNPs,     bam_files = bam_files, cram_files = cram_files, reference = reference,     iSizeUpperLimit = iSizeUpperLimit, bqFilter = bqFilter, chr = chr,     outputdir = outputdir, N = N, downsampleToCov = downsampleToCov,     sampleNames = sampleNames, inputdir = inputdir, useSoftClippedBases = useSoftClippedBases,     regionName = regionName, tempdir = tempdir, chrStart = chrStart,     chrEnd = chrEnd, generateInputOnly = generateInputOnly, nCores = nCores,     save_sampleReadsInfo = save_sampleReadsInfo, use_bx_tag = use_bx_tag,     bxTagUpperLimit = bxTagUpperLimit)
17: STITCH(chr = opt$chr, posfile = opt$posfile, K = opt$K, S = opt$S,     nGen = opt$nGen, outputdir = opt$outputdir, tempdir = opt$tempdir,     bamlist = opt$bamlist, cramlist = opt$cramlist, sampleNames_file = opt$sampleNames_file,     reference = opt$reference, genfile = opt$genfile, method = opt$method,     output_format = opt$output_format, B_bit_prob = opt$B_bit_prob,     outputInputInVCFFormat = opt$outputInputInVCFFormat, downsampleToCov = opt$downsampleToCov,     downsampleFraction = opt$downsampleFraction, readAware = opt$readAware,     chrStart = opt$chrStart, chrEnd = opt$chrEnd, regionStart = opt$regionStart,     regionEnd = opt$regionEnd, buffer = opt$buffer, maxDifferenceBetweenReads = opt$maxDifferenceBetweenReads,     maxEmissionMatrixDifference = opt$maxEmissionMatrixDifference,     alphaMatThreshold = opt$alphaMatThreshold, emissionThreshold = opt$emissionThreshold,     iSizeUpperLimit = opt$iSizeUpperLimit, bqFilter = opt$bqFilter,     niterations = opt$niterations, shuffleHaplotypeIterations = eval(parse(text = opt$shuffleHaplotypeIterations)),     splitReadIterations = eval(parse(text = opt$splitReadIterations)),     nCores = opt$nCores, expRate = opt$expRate, maxRate = opt$maxRate,     minRate = opt$minRate, Jmax = opt$Jmax, regenerateInput = opt$regenerateInput,     originalRegionName = opt$originalRegionName, keepInterimFiles = opt$keepInterimFiles,     keepTempDir = opt$keepTempDir, switchModelIteration = opt$switchModelIteration,     generateInputOnly = opt$generateInputOnly, restartIterations = opt$restartIterations,     refillIterations = eval(parse(text = opt$refillIterations)),     downsampleSamples = opt$downsampleSamples, downsampleSamplesKeepList = opt$downsampleSamplesKeepList,     subsetSNPsfile = opt$subsetSNPsfile, useSoftClippedBases = opt$useSoftClippedBases,     outputBlockSize = opt$outputBlockSize, outputSNPBlockSize = opt$outputSNPBlockSize,     inputBundleBlockSize = opt$inputBundleBlockSize, genetic_map_file = 
opt$genetic_map_file,     reference_haplotype_file = opt$reference_haplotype_file,     reference_legend_file = opt$reference_legend_file, reference_sample_file = opt$reference_sample_file,     reference_populations = eval(parse(text = opt$reference_populations)),     reference_phred = opt$reference_phred, reference_iterations = opt$reference_iterations,     reference_shuffleHaplotypeIterations = eval(parse(text = opt$reference_shuffleHaplotypeIterations)),     output_filename = opt$output_filename, initial_min_hapProb = opt$initial_min_hapProb,     initial_max_hapProb = opt$initial_max_hapProb, regenerateInputWithDefaultValues = opt$regenerateInputWithDefaultValues,     plotHapSumDuringIterations = opt$plotHapSumDuringIterations,     plot_shuffle_haplotype_attempts = opt$plot_shuffle_haplotype_attempts,     plotAfterImputation = opt$plotAfterImputation, save_sampleReadsInfo = opt$save_sampleReadsInfo,     gridWindowSize = opt$gridWindowSize, shuffle_bin_nSNPs = opt$shuffle_bin_nSNPs,     shuffle_bin_radius = opt$shuffle_bin_radius, keepSampleReadsInRAM = opt$keepSampleReadsInRAM,     useTempdirWhileWriting = opt$useTempdirWhileWriting, output_haplotype_dosages = opt$output_haplotype_dosages,     use_bx_tag = opt$use_bx_tag, bxTagUpperLimit = opt$bxTagUpperLimit)

An irrecoverable exception occurred. R is aborting now ...
[2023-05-12 07:12:00] downsample sample P15502-115 - 26 of 31024 reads removed
[2023-05-12 07:12:01] downsample sample P21002-140 - 553 of 39790 reads removed
[2023-05-12 07:12:01] Load and convert BAM 100 of 1063
[2023-05-12 07:12:06] downsample sample P21002-141 - 705 of 44286 reads removed
[2023-05-12 07:12:10] downsample sample P21002-142 - 322 of 36800 reads removed
[2023-05-12 07:12:18] Load and convert BAM 700 of 1063
[2023-05-12 07:12:33] Load and convert BAM 900 of 1063
[2023-05-12 07:12:42] Load and convert BAM 200 of 1063
[2023-05-12 07:13:05] downsample sample P15502-146 - 17 of 17858 reads removed
[2023-05-12 07:13:19] downsample sample P15502-154 - 3 of 15635 reads removed
[2023-05-12 07:13:38] downsample sample P15502-164 - 4 of 15582 reads removed
[2023-05-12 07:13:50] Done generating inputs
[2023-05-12 07:13:50] Copying files onto tempdir
[2023-05-12 07:15:08] Done copying files onto tempdir
[2023-05-12 07:15:08] Generate allele count
[2023-05-12 07:15:40] Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection

Error in check_mclapply_OK(out2) :
  An error occured during STITCH. The first such error is above
Calls: STITCH -> buildAlleleCount -> check_mclapply_OK
In addition: Warning messages:
1: In mclapply(1:length(sampleRanges), mc.cores = nCores, FUN = loadBamAndConvert_across_a_range,  :
  scheduled core 15 did not deliver a result, all values of the job will be affected
2: In mclapply(sampleRanges, mc.cores = nCores, FUN = buildAlleleCount_subfunction,  :
