Hello there,
My colleagues and I have been testing our new BlackPearl system over the past week or so and have encountered several issues that we have questions about.
1) We sent a job through five days ago that reached the BP cache just fine but has not yet been written to tape. The job is 141GB in total and blobbing is enabled in the data policy (if that is a factor). We tried sending other jobs through, thinking perhaps this job was too small; however, those jobs were written to tape while this one remains in the cache (screenshot attached; a rough sketch of how we check the job from the SDK side is below, after question 3). Any idea why this could be?
2) After sending several jobs through to the same bucket, we see a discrepancy in the amount of space used on tape. Our data policy is 'Dual copy on tape', and the jobs totaled approximately 1.14TB. Each of the tapes the jobs were written to shows ~600GB used when I look under Tape Management in the RMI, but when I query the bucket with the DS3 Java SDK (roughly the query sketched below, after question 3) it returns the correct size of 1.14TB. Any idea why the interface shows ~600GB per tape?
3) Does the object offset refer to the 'chunk' size or the 'blob' size? Hovering over the question mark in the interface (see screenshot) says chunk size, but the documentation says the preferred blob size is 64GB, which is the value shown as the object offset. So we are wondering whether this actually means the blob size rather than the chunk size.
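
For question 1, this is roughly how we are checking the stuck job from the SDK side. It is only a sketch: the endpoint, credentials, and job ID are placeholders for our test system, and we are using the GetJobSpectraS3 call and the MasterObjectList size fields as we understand them from the SDK docs, so apologies if we have a detail wrong.

import java.util.UUID;

import com.spectralogic.ds3client.Ds3Client;
import com.spectralogic.ds3client.Ds3ClientBuilder;
import com.spectralogic.ds3client.commands.spectrads3.GetJobSpectraS3Request;
import com.spectralogic.ds3client.commands.spectrads3.GetJobSpectraS3Response;
import com.spectralogic.ds3client.models.Credentials;
import com.spectralogic.ds3client.models.MasterObjectList;

public class CheckStuckJob {
    public static void main(final String[] args) throws Exception {
        // Placeholder endpoint and credentials for our test BlackPearl.
        try (final Ds3Client client = Ds3ClientBuilder
                .create("blackpearl.example.com", new Credentials("accessId", "secretKey"))
                .withHttps(false)
                .build()) {

            // Job ID copied from the job's detail page in the interface (placeholder here).
            final UUID jobId = UUID.fromString("00000000-0000-0000-0000-000000000000");

            final GetJobSpectraS3Response response =
                    client.getJobSpectraS3(new GetJobSpectraS3Request(jobId));
            final MasterObjectList job = response.getMasterObjectListResult();

            // Compare how much of the job is sitting in cache vs. fully persisted.
            System.out.println("Status:          " + job.getStatus());
            System.out.println("Original bytes:  " + job.getOriginalSizeInBytes());
            System.out.println("Cached bytes:    " + job.getCachedSizeInBytes());
            System.out.println("Completed bytes: " + job.getCompletedSizeInBytes());
        }
    }
}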
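
For question 2, this is roughly the query we run to total the bucket size. Again just a sketch with placeholder endpoint, credentials, and bucket name; we are relying on the SDK's Ds3ClientHelpers.listObjects helper to page through the listing for us.

import com.spectralogic.ds3client.Ds3Client;
import com.spectralogic.ds3client.Ds3ClientBuilder;
import com.spectralogic.ds3client.helpers.Ds3ClientHelpers;
import com.spectralogic.ds3client.models.Contents;
import com.spectralogic.ds3client.models.Credentials;

public class TotalBucketSize {
    public static void main(final String[] args) throws Exception {
        // Placeholder endpoint and credentials for our test BlackPearl.
        try (final Ds3Client client = Ds3ClientBuilder
                .create("blackpearl.example.com", new Credentials("accessId", "secretKey"))
                .withHttps(false)
                .build()) {

            long totalBytes = 0L;
            // listObjects pages through the full bucket listing for us.
            for (final Contents object : Ds3ClientHelpers.wrap(client).listObjects("our-test-bucket")) {
                totalBytes += object.getSize();
            }

            // This is the figure that comes back as ~1.14TB for our bucket.
            System.out.printf("Bucket size: %d bytes (%.2f TB)%n",
                    totalBytes, totalBytes / 1e12);
        }
    }
}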
Thanks, and sorry for so many questions!
Tiffany Holden