Hello group, I seem to have hit a bit of an impasse, and I hope others may have experienced this before and can offer some suggestions.
I've created a test setup based on the Nextflow AWS Batch documentation.
What I've done:
- Built my own AMI starting from an "Amazon ECS-Optimized Amazon Linux" image, installed the awscli, and modified the Docker storage via the '/etc/sysconfig/docker-storage' file.
- Duplicated the nf config file found here, but modified it so it looks like the following:
aws {
    accessKey = '******'
    secretKey = '*******'
    region = 'us-west-2'
}

profiles {
    awsbatch {
        aws.region = 'us-west-2'
        process.queue = 'testQ'
        executor.name = 'awsbatch'
        executor.awscli = '/home/ec2-user/miniconda/bin/aws'
    }
}

process {
    $getVersion {
        container = 'quay.io/biocontainers/salmon'    // <-- testing with nf container
    }
}
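(One thing I wasn't sure about while writing this: most Nextflow examples I've seen select the executor per-process with 'process.executor' rather than through an 'executor.name' setting. If I've read the docs right, an equivalent profile, reusing the same queue and CLI path as above, would be something like:

profiles {
    awsbatch {
        process.executor = 'awsbatch'
        process.queue = 'testQ'
        aws.region = 'us-west-2'
        executor.awscli = '/home/ec2-user/miniconda/bin/aws'
    }
}

If 'executor.name' isn't actually picked up inside a profile, that alone could explain why jobs never reach the Batch queue — corrections welcome.)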
With an equally simple nf script:
#!/usr/bin/env nextflow
// ------------------------------------- //
process testconnect {
    """
    touch myfile.txt
    echo "hello world" > myfile.txt
    """
}
// ------------------------------------- //
I also created a simple AWS Batch job queue and compute environment to launch into.
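(For anyone trying to reproduce this, the queue and compute environment can be sanity-checked from the CLI — 'testQ' is just the name I used:

$> aws batch describe-job-queues --job-queues testQ
$> aws batch describe-compute-environments

I verified the queue reports state ENABLED / status VALID, and the compute environment likewise.)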
==============================================================>
When I launch the job, Nextflow accepts the process and config and then holds.
$> nextflow run aws.pipeline.nf -c aws.config -w s3://testbucket
...
When I jump over to the AWS console, I can see Nextflow has created the .command.sh and .command.run files in S3, but the 'job queue' in AWS Batch never changes and no EC2 instances are launched. Basically, the job just holds until I cancel it via the command line.
I thought that maybe the instance wasn't configured correctly, but based on my reading of the AWS docs, when you use an "Amazon ECS-Optimized Amazon Linux" instance all the necessary environment variables are set correctly, including 'awslogs'.
Another avenue I'm currently exploring is the possibility that my VPC was not configured correctly, so if anyone thinks this could be causing the issue, I'll dig further.
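(In case it helps with diagnosis: so far I've only thought to check the subnets attached to the compute environment, e.g.

$> aws ec2 describe-subnets --subnet-ids subnet-xxxxxxxx

where 'subnet-xxxxxxxx' is a placeholder for my subnet ID, and to confirm 'MapPublicIpOnLaunch' is true, since my understanding is the instances need outbound internet access to pull containers and talk to ECS. If there are other VPC settings worth inspecting, please point me at them.)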
Thanks!
--Shawn