modify docker run command with -v option?


Ajay Kurani

Nov 2, 2023, 1:01:13 PM
to xnat_discussion
To whom it may concern:
   I am running the Container Service plugin in a non-traditional way for an HPC setup. Instead of running a container for a full pipeline, all the container does is output a .json file of parameters. The issue is that I would like the container to write that file to a folder on the docker host that I have access to, such as /tmp.

Typically I would just map it in the docker run command using the -v option, but since the containers are run non-interactively this is not possible. Another option I was considering was modifying the command.json, but I'm not sure that will work either, since it seems that all outputs get redirected to the XNAT_HOME/build folder. I assume this is the part of the container plugin where the mount is auto-set. Is there any way, either during the docker build or by modifying the command.json, to expose the -v option and mount additional volumes?
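For reference, this is the kind of -v mapping I mean. The host path, container path, and image name below are hypothetical, and the script only prints the docker run command rather than executing it:

```shell
# Hypothetical host path, container path, and image name.
HOST_DIR=/tmp/pipeline-output
CONTAINER_DIR=/output
IMAGE=my-pipeline-image

# Print the docker run command rather than executing it,
# so it can be reviewed before use on a real docker host.
echo "docker run --rm -v ${HOST_DIR}:${CONTAINER_DIR} ${IMAGE}"
```

This is exactly the mapping that is unavailable to me when the Container Service launches the container non-interactively.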

If this isn't possible through the Container Service plugin, the command.json, or the Dockerfile, is there a way to add this option permanently to the docker instance we are running, for example via docker-compose.yml or other means? I know how to do this with scp, but it's not ideal to keep an ssh key in the container for passwordless data transfers.

Thanks,
Ajay

John Flavin

Nov 2, 2023, 1:35:45 PM
to xnat_di...@googlegroups.com
Ajay,

The Container Service intentionally controls which file system locations get mounted into the containers it launches as a way to maintain security and data access controls. We don't allow mounting arbitrary file locations as that would allow uncontrolled access to XNAT's data, system files and configuration, and the docker socket (which is tantamount to root access). So we can't present any direct volume mounting options.

And I'm not aware of any docker configuration option that would auto-mount a volume into all containers, though that doesn't mean it's impossible.

I can't give you a precise recommendation on how to proceed because I don't really know what you’re trying to do. If you can modify your workflow to pull this output file from an XNAT resource rather than a file path then it would be pretty easy to write a command that would automatically create a resource and put your output file into it. There are lots of other options, too, but it depends on what you want to accomplish.
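To sketch what the resource route might look like: below is a very rough fragment of a Container Service command.json with an output and an output handler that creates a resource on the session and uploads the file into it. All names ("params-json", "PARAMS", the mount name "out") are placeholders, and the exact field names should be checked against the Container Service command documentation:

```json
{
  "outputs": [
    {
      "name": "params-json",
      "mount": "out",
      "path": "params.json"
    }
  ],
  "xnat": [
    {
      "output-handlers": [
        {
          "name": "params-resource",
          "accepts-command-output": "params-json",
          "as-a-child-of": "session",
          "type": "Resource",
          "label": "PARAMS"
        }
      ]
    }
  ]
}
```

With something like this in place, the Container Service handles the upload itself and no host path outside the build space is ever needed.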

John Flavin
Backend Team Lead
He/Him



akluiber

Nov 3, 2023, 1:29:16 PM
to xnat_discussion
Ajay,

One way to accomplish your task might be to run a command within the container/script that copies the output json to your desired location via ssh/scp, using key authentication.
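Something along these lines, run from inside the container at the end of the script. The key path, source path, and destination are all placeholders, and the command is printed rather than executed here:

```shell
# Hypothetical key path, source file, and destination host/path.
KEY=/path/to/xnat_scp_key
SRC=/output/params.json
DEST=user@hpc-host:/tmp/params.json

# Print the scp command rather than executing it.
echo "scp -i ${KEY} ${SRC} ${DEST}"
```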
- Alex

Ajay Kurani

Nov 7, 2023, 12:44:09 AM
to xnat_discussion
Hi Alex and John,
Thanks for your replies. We are currently using the scp command with ssh keypairs, but as far as I know the only way to get the key into the container is at build time, since, as John mentioned, we cannot mount external volumes for security reasons. While this method works well for the task, from a security perspective I wasn't a fan of building a container with the key installed. Do you or John have any suggestions on how to mount the key at run time, perhaps through one of the XNAT folders that gets auto-mounted?

John, to give you more context: we are currently writing out a json file that contains all relevant pipeline parameters plus the container ID/workflow ID, so that we can transfer it to our compute resource and use it to run jobs, since our infrastructure does not support docker directly outside of the one VM used with the Container Service. I am not a docker expert by any means, but I was hoping to use docker cp to copy the file from the container to the docker host; however, that command seems to work only when run from the host side, not from inside the running container. I may be wrong on this point, but I have been unsuccessful so far. I was looking for a way to avoid scp, since it requires installing the key pair at build time, unless there is an XNAT folder that does not get auto-cataloged into the database where I can store the key and have it auto-mounted in the way you described.
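For what it's worth, my understanding is that docker cp talks to the docker daemon and therefore has to be invoked on the host, not from inside the container, which would explain what I'm seeing. A host-side sketch (container name and paths are hypothetical; printed rather than executed):

```shell
# Hypothetical container name for a Container Service launch.
CONTAINER=pipeline-run-01

# docker cp copies container -> host, but must be run on the
# docker host (or wherever the docker CLI can reach the daemon).
echo "docker cp ${CONTAINER}:/output/params.json /tmp/params.json"
```

Run from the host after the container finishes, this would pull the file out without any volume mount, though it still requires something on the host side to trigger it.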

I am not opposed to using a resource, as we already upload pipeline outputs there, but since we are able to copy the file currently, fixing the scp/keygen issue seemed like lower-hanging fruit than adding a separate API call.
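If we do go the resource route, I gather the extra API call is small: a single authenticated PUT of the file to the resource. A sketch of what I believe the XNAT REST call looks like (host, session ID, resource label, and credentials are all placeholders, and I'd verify the endpoint against the XNAT REST docs; printed rather than executed):

```shell
# Hypothetical XNAT host and session/experiment ID.
XNAT=https://xnat.example.org
SESSION=XNAT_E00001

# Print the curl upload command; inbody=true sends the request
# body as the file content. Credentials here are placeholders.
echo "curl -u user:pass -X PUT ${XNAT}/data/experiments/${SESSION}/resources/PARAMS/files/params.json?inbody=true --data-binary @params.json"
```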

Any suggestions or thoughts would be appreciated.

Thanks,
Ajay
