Help with XNAT container outputs on NFS when the compute backend runs Docker remotely


Carmen Giugliano

Nov 19, 2025, 8:48:41 AM
to xnat_discussion

Hello everyone,

Here’s my setup:

  • XNAT: 1.9.2.1

  • Container Service: 3.7.2

  • OS: AlmaLinux

  • Installation: via Docker 

I've connected the XNAT compute backend to a Docker daemon running on a different machine than the XNAT host. Jobs run inside containers correctly, and output files are generated in the staging area (build/<job-id>), but XNAT fails to copy them to the final resource folder.
Here's what I observe:
  • The container runs as UID/GID 1002:1002 (corresponding to xnat:xnat)
  • Files inside the container's /output folder are owned by 1002:1002 and have drwxr-x--- permissions
  • The staging folder on the NFS share is drwxrwx--- 1002:1002
  • On the host, ls -l shows the same permissions and ownership
It seems that the XNAT service on the host cannot access the staging folder.
What should I do?
Thanks in advance,
Best.

More details follow:

Here's my path translation:
XNAT path prefix: /data/xnat
Server path prefix: /mnt/xnat_shared
Container user: 1002:1002
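
As I understand it, this setting just swaps the XNAT prefix for the server prefix on the mount paths before launch; a minimal sketch of that substitution in bash (example-job-id is a made-up placeholder, not a real job):

xnat_prefix="/data/xnat"
server_prefix="/mnt/xnat_shared"
xnat_path="$xnat_prefix/build/example-job-id"
# Swap the leading XNAT prefix for the server prefix, as I believe CS does:
container_host_path="$server_prefix${xnat_path#"$xnat_prefix"}"
echo "$container_host_path"   # -> /mnt/xnat_shared/build/example-job-id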

Here's my JSON:
{
  "name": "ciao-gpu-writer",
  "label": "Write ciao.txt to Project from GPU",
  "description": "Runs ciao-image and saves results to a Project Resource.",
  "version": "1.0",
  "schema-version": "1.0",
  "image": "my-xnat-app:latest",
  "type": "docker",
  "command-line": "pwd && python /app/hello_from_gpu.py",
  "override-entrypoint": true,
  "mounts": [
    {
      "name": "out",
      "writable": true,
      "path": "/output"
    }
  ],
  "environment-variables": {},
  "ports": {},
  "inputs": [],
  "outputs": [
    {
      "name": "txt_out",
      "description": "All text files written to /output",
      "required": true,
      "mount": "out",
      "path": "",
      "glob": "*.txt"
    }
  ],
  "xnat": [
    {
      "name": "project",
      "label": "Ciao Writer Project Launch",
      "description": "Lancia il container su un Progetto",
      "contexts": [
        "xnat:projectData"
      ],
      "external-inputs": [
        {
          "name": "project",
          "description": "Target project",
          "type": "Project",
          "required": true,
          "load-children": true
        }
      ],
      "derived-inputs": [],
      "output-handlers": [
        {
          "name": "save_to_project_resource",
          "accepts-command-output": "txt_out",
          "as-a-child-of": "project",
          "type": "Resource",
          "label": "ciao-output-resource",
          "tags": []
        }
      ]
    }
  ],
  "container-labels": {},
  "generic-resources": {},
  "ulimits": {},
  "secrets": [],
  "visibility": "public"
}
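
For completeness, this is roughly how I register the command definition with the Container Service REST API (a sketch; $XNAT_URL, $USER, $PASS and the file name are placeholders for my real values):

# POST the command definition above to the Container Service.
curl -u "$USER:$PASS" \
     -H "Content-Type: application/json" \
     -X POST "$XNAT_URL/xapi/commands" \
     -d @ciao-gpu-writer.json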

Here's my Dockerfile:
FROM python:3.12-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

RUN mkdir -p /output && chown 1002:1002 /output

COPY --chown=1002:1002 hello_from_gpu.py /app/hello_from_gpu.py

USER 1002:1002
ENTRYPOINT ["sh", "-c", "chmod 775 /output && chown 1002:1002 /output && exec python /app/hello_from_gpu.py"]
CMD ["python", "/app/hello_from_gpu.py"]

Here's my Python script:
import os
import time
import stat
import sys
import pwd
import grp


try:
    current_uid = os.getuid()
    current_gid = os.getgid()

    try:
        username = pwd.getpwuid(current_uid).pw_name
    except KeyError:
        username = f"UID {current_uid} "

    try:
        groupname = grp.getgrgid(current_gid).gr_name
    except KeyError:
        groupname = f"GID {current_gid}"

    print(f"[DEBUG] Process running as: {username} (UID: {current_uid}) / {groupname} (GID: {current_gid})")

 

    output_stat = os.stat('/output')

    print(f"[DEBUG] /output exists: {os.path.exists('/output')}")
    print(f"[DEBUG] /output writable: {os.access('/output', os.W_OK)}")
    print(f"[DEBUG] /output permissions: {stat.filemode(output_stat.st_mode)}")
    print(f"[DEBUG] /output owned by: UID {output_stat.st_uid} / GID {output_stat.st_gid}")

   
    out_dir = "/output"
    os.makedirs(out_dir, exist_ok=True)

    # 1. Write file
    path = f"{out_dir}/ciao.txt"
    with open(path, "w") as f:
        f.write("HELLO FROM GPU \n")
 

except Exception as e:
    
    print(f"[ERRORE] Si è verificato un errore: {e}", file=sys.stderr)

Here's my log:
/app
[DEBUG] Process running as: UID 1002 (UID: 1002) / GID 1002 (GID: 1002)
[DEBUG] /output exists: True
[DEBUG] /output writable: True
[DEBUG] /output permissions: drwxr-x---
[DEBUG] /output owned by: UID 1002 / GID 1002

John Flavin

Nov 19, 2025, 5:22:32 PM
to xnat_di...@googlegroups.com
My first thought on reading the description was that the file permissions don't match. However, you have checked all the permissions, and everything looks like I would expect it to look. I don't see any reason it wouldn't work. I don't have any other immediate thoughts just based on what you've said, so we'll need to find some more information.

What do you mean that XNAT fails to copy them to the resource? Is there an error message? Are there any relevant entries in the XNAT or container service logs around the time the container finishes and CS tries to finalize it?
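
If it helps, something like this on the XNAT host should surface anything logged around finalization (a sketch; the log paths here are placeholders for wherever your XNAT home logs live):

# Look for finalization/upload activity and errors around the time the container died.
grep -iE "finaliz|upload" /path/to/xnat/home/logs/containers.log | tail -n 50
grep -iE "ERROR|Exception" /path/to/xnat/home/logs/application.log | tail -n 50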

John Flavin


Carmen Giugliano

Nov 21, 2025, 11:31:22 AM
to xnat_discussion
Dear John, 

thanks a lot for your prompt reply. 


>What do you mean that XNAT fails to copy them to the resource?
I mean that nothing appears in Manage Files, as you can see from the screenshots below:
[Screenshots attached: Screenshot 2025-11-21 alle 11.46.09.png, Screenshot 2025-11-21 alle 11.46.23.png]


>Is there an error message? Are there any relevant entries in the XNAT or container service logs around the time the container finishes and CS tries to finalize it?


I also want to add that in the docker-compose.yml for xnat-web I've added:
    environment:
      - XNAT_DATASERVER_UMASK=000       
      - XNAT_DATASERVER_DIRECTORY_PERMS=0777 


Below is my containers.log:
2025-11-21 11:59:22,123 [http-nio-8080-exec-34] DEBUG org.nrg.containers.rest.LaunchRestApi - Creating launch UI.
2025-11-21 11:59:22,125 [http-nio-8080-exec-34] DEBUG org.nrg.containers.model.command.auto.LaunchUi - ROOT project - Populating input relationship tree.
2025-11-21 11:59:22,125 [http-nio-8080-exec-34] DEBUG org.nrg.containers.model.command.auto.LaunchUi - ROOT project - Populating input value tree.
2025-11-21 11:59:23,574 [http-nio-8080-exec-19] INFO  org.nrg.containers.rest.LaunchRestApi - Launch requested for wrapper id 13
2025-11-21 11:59:24,156 [http-nio-8080-exec-19] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Created workflow 435.
2025-11-21 11:59:24,156 [http-nio-8080-exec-19] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Adding to staging queue: count [not computed], project newstorage, wrapperId 13, commandId 0, wrapperName null, inputValues {project=/archive/projects/newstorage}, username admin, workflowId 435
2025-11-21 11:59:24,306 [stagingQueueListener-43] DEBUG org.nrg.containers.jms.listeners.ContainerStagingRequestListener - Consuming staging queue: count [not computed], project newstorage, wrapperId 13, commandId 0, wrapperName null, inputValues {project=/archive/projects/newstorage}, username admin, workflowId 435
2025-11-21 11:59:24,306 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - consumeResolveCommandAndLaunchContainer wfid 435
2025-11-21 11:59:24,377 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Configuring command for wfid 435
2025-11-21 11:59:24,392 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Resolving command for wfid 435
2025-11-21 11:59:24,401 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Launching command for wfid 435
2025-11-21 11:59:24,401 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.ContainerServiceImpl - Preparing to launch resolved command.
2025-11-21 11:59:24,436 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Checking input values to find root XNAT input object.
2025-11-21 11:59:24,436 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Input "project".
2025-11-21 11:59:24,436 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Getting input value as XFTItem.
2025-11-21 11:59:24,436 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Found a valid root XNAT input object: project.
2025-11-21 11:59:24,436 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Update workflow for Wrapper project - Command ciao-gpu-writer - Image my-xnat-app:latest.
2025-11-21 11:59:24,477 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Updated workflow 435.
2025-11-21 11:59:24,477 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.ContainerServiceImpl - Creating container from resolved command.
2025-11-21 11:59:24,479 [stagingQueueListener-43] DEBUG org.nrg.containers.api.DockerControlApi - Creating container:
server docker_gpu tcp://gctd-gpu.epiccloud:2376
image my-xnat-app:latest
command "pwd && python /app/hello_from_gpu.py"
working directory "null"
containerUser "0:0"
volumes [/mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8:/output]
environment variables [XNAT_USER=e62d8ahgcjfxjtrzsaje, XNAT_EVENT_ID=435, XNAT_WORKFLOW_ID=435, XNAT_HOST=XXXXX, XNAT_PASS=XXXXXX]
exposed ports: {}
2025-11-21 11:59:25,056 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.ContainerServiceImpl - Recording container launch.
2025-11-21 11:59:25,056 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Updating workflow for Container b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2
2025-11-21 11:59:26,029 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Updated workflow 435.
2025-11-21 11:59:26,038 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.HibernateContainerEntityService - Adding new history item to container entity 224
2025-11-21 11:59:26,040 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.HibernateContainerEntityService - Acquiring lock for the container 224
2025-11-21 11:59:26,040 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.HibernateContainerEntityService - Acquired lock for the container 224
2025-11-21 11:59:26,040 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.HibernateContainerEntityService - Setting container entity 224 status to "Created", based on history entry status "Created".
2025-11-21 11:59:26,040 [stagingQueueListener-43] DEBUG org.nrg.containers.utils.ContainerUtils - Updating status of workflow 435.
2025-11-21 11:59:26,044 [stagingQueueListener-43] DEBUG org.nrg.containers.utils.ContainerUtils - Found workflow 435.
2025-11-21 11:59:26,044 [stagingQueueListener-43] INFO  org.nrg.containers.utils.ContainerUtils - Updating workflow 435 pipeline "project" from "Queued" to "Created" (details: ).
2025-11-21 11:59:26,334 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.HibernateContainerEntityService - Releasing lock for the container 224
2025-11-21 11:59:26,363 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.ContainerServiceImpl - Starting container.
2025-11-21 11:59:26,364 [stagingQueueListener-43] INFO  org.nrg.containers.api.DockerControlApi - Starting container b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2
2025-11-21 11:59:27,002 [stagingQueueListener-43] INFO  org.nrg.containers.services.impl.ContainerServiceImpl - Launched command for wfid 435: command 7, wrapper 13 project. Produced container 224.
2025-11-21 11:59:27,003 [stagingQueueListener-43] DEBUG org.nrg.containers.services.impl.ContainerServiceImpl - Container for wfid 435: Container{databaseId=224, commandId=7, status=Created, statusTime=Fri Nov 21 11:59:26 CET 2025, wrapperId=13, containerId=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, workflowId=435, userId=admin, project=newstorage, backend=docker, serviceId=null, taskId=null, nodeId=null, dockerImage=my-xnat-app:latest, containerName=null, commandLine=pwd && python /app/hello_from_gpu.py, overrideEntrypoint=true, workingDirectory=null, subtype=docker, parent=null, parentSourceObjectName=null, environmentVariables={XNAT_USER=e62d8a0e-b185-401e-a13b-d515bfeef45e, XNAT_EVENT_ID=435, XNAT_WORKFLOW_ID=435, XNAT_HOST=XXXX:443, XNAT_PASS=XXXXXX}, ports={}, mounts=[ContainerMount{databaseId=251, name=out, writable=true, xnatHostPath=/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8, containerHostPath=/mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8, containerPath=/output, mountPvcName=null, inputFiles=[]}], inputs=[ContainerInput{databaseId=668, type=RAW, name=project, value=/archive/projects/newstorage, sensitive=false}, ContainerInput{databaseId=669, type=WRAPPER_EXTERNAL, name=project, value=/archive/projects/newstorage, sensitive=false}], outputs=[ContainerOutput{databaseId=219, name=txt_out:save_to_project_resource, fromCommandOutput=txt_out, fromOutputHandler=save_to_project_resource, type=Resource, required=true, mount=out, path=, glob=*.txt, label=ciao-output-resourcecarmen, format=null, description=null, content=null, tags=[], created=null, handledBy=project, viaWrapupContainer=null}], history=[ContainerHistory{databaseId=1305, status=Created, entityType=user, entityId=admin, timeRecorded=Fri Nov 21 11:59:26 CET 2025, externalTimestamp=null, message=null, exitCode=null}], logPaths=[], reserveMemory=null, limitMemory=null, limitCpu=null, swarmConstraints=null, runtime=null, ipcMode=null, autoRemove=false, shmSize=null, network=null, containerLabels={XNAT_USER_EMAIL=txxx.it, XNAT_PROJECT=newstorage, XNAT_ID=newstorage, XNAT_USER_ID=admin, XNAT_DATATYPE=Project}, gpus=null, genericResources=null, ulimits=null, secrets=[]}
... (repeats in a loop)
2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=create, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=create, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXt, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722598, timeNano=1763722598657450172)
2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=start, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=start, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXX, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722600, timeNano=1763722600604767564)
2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=die, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=die, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXX, XNAT_USER_ID=admin, execDuration=0, exitCode=0, image=my-xnat-app:latest, name=distracted_greider}), time=1763722601, timeNano=1763722601506361086)


again, thanks for any advice, 
best, 
Carmen 

John Flavin

Nov 24, 2025, 11:58:18 AM
to xnat_di...@googlegroups.com
Thanks for including that log. I think what I'm looking for would be right after the logs you included, though. You've given the logs from when you launched the container, through command resolution, and the docker "create", "start", and "die" messages. Once the container finishes and CS gets the "die" status, that triggers a few finalization steps, one of which is uploading the output files to resources. So I'm curious to know if there is anything in the CS log right after it gets that last status message. 

But more importantly, I want to know if there is anything at that time in the XNAT logs, specifically the application.log or xdat.log files. When CS determines that it needs to upload resources, it hands that operation off to some XNAT internal code. That means if there is a problem in the file upload or resource creation process, any error messages that get logged would not show up in the containers.log file. (Or, well, maybe they would, depending on what exactly went wrong. But more likely they would be in one of the XNAT log files.)

And lastly, I know you already said in your initial message that "output files are generated in the staging area (build/<job-id>)", but can you confirm that all the output files exist where you expect them to be? On the docker / execution node the output files for this run should be in /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8, and on the XNAT node they should be in /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8. 
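
For example, something like this on each node (just a sketch of the check):

# On the docker / execution node:
ls -l /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8
# On the XNAT node:
ls -l /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8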

John Flavin

Carmen Giugliano

Nov 27, 2025, 10:34:20 AM
to xnat_discussion
Dear John, 
sorry for the late reply, 

>So I'm curious to know if there is anything in the CS log right after it gets that last status message. 
Nothing. It gets stuck and repeats more or less every 15 seconds:


2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=create, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=create, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXt, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722598, timeNano=1763722598657450172)
2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=start, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=start, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXX, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722600, timeNano=1763722600604767564)
2025-11-21 12:02:24,948 [docker-java-stream--2126463612] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=die, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=die, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXX, XNAT_USER_ID=admin, execDuration=0, exitCode=0, image=my-xnat-app:latest, name=distracted_greider}), time=1763722601, timeNano=1763722601506361086)

2025-11-21 12:02:39,531 [docker-java-stream-1163728552] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=create, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=create, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXXX, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722598, timeNano=1763722598657450172)
2025-11-21 12:02:39,532 [docker-java-stream-1163728552] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=start, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=start, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXXX, XNAT_USER_ID=admin, image=my-xnat-app:latest, name=distracted_greider}), time=1763722600, timeNano=1763722600604767564)
2025-11-21 12:02:39,532 [docker-java-stream-1163728552] DEBUG org.nrg.containers.api.DockerControlApi - Received event: Event(status=die, id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, from=my-xnat-app:latest, node=null, type=CONTAINER, action=die, actor=EventActor(id=b8b1a87188c270468309e9d7e199a0a0888c342d6a6d8f3f49c39b5c2ec3a4d2, attributes={XNAT_DATATYPE=Project, XNAT_ID=newstorage, XNAT_PROJECT=newstorage, XNAT_USER_EMAIL=XXXXX, XNAT_USER_ID=admin, execDuration=0, exitCode=0, image=my-xnat-app:latest, name=distracted_greider}), time=1763722601, timeNano=1763722601506361086)
2025-11-21 12:02:54,100 [taskScheduler-3] INFO  org.nrg.containers.api.DockerControlApi - Requesting events from 1763561187 to 1763722974


>But more importantly, I want to know if there is anything at that time in the XNAT logs, specifically the application.log 
I don't see anything in xdat.log, and here's what I read in application.log:

2025-11-21 12:02:29,450 [docker-java-stream--2126463612] ERROR com.github.dockerjava.api.async.ResultCallbackTemplate - Error during callback
java.net.SocketTimeoutException: timeout
        at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:147)
        at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:158)
        at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:337)
        at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
        at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
        at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
        at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
        at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
        at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
        at okio.RealBufferedSource$inputStream$1.read(RealBufferedSource.kt:158)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._loadMore(UTF8StreamJsonParser.java:257)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd2(UTF8StreamJsonParser.java:3086)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd(UTF8StreamJsonParser.java:3081)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:756)
        at com.fasterxml.jackson.databind.MappingIterator.hasNextValue(MappingIterator.java:246)
        at com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:314)
        at com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298)
        at com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275)
        at java.lang.Thread.run(Thread.java:750)
Caused by: javax.net.ssl.SSLException: Socket closed
        at sun.security.ssl.Alert.createSSLException(Alert.java:127)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:331)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:274)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
        at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1572)
        at sun.security.ssl.SSLSocketImpl.access$400(SSLSocketImpl.java:73)
        at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:982)
        at okio.InputStreamSource.read(JvmOkio.kt:94)
        at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:125)
        ... 16 common frames omitted
Caused by: java.net.SocketException: Socket closed
        at java.net.SocketInputStream.read(SocketInputStream.java:204)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:464)
        at sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68)
        at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1350)
        at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73)
        at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:966)
        ... 18 common frames omitted

2025-11-21 12:02:44,035 [docker-java-stream-1163728552] ERROR com.github.dockerjava.api.async.ResultCallbackTemplate - Error during callback
java.net.SocketTimeoutException: timeout
        at okio.SocketAsyncTimeout.newTimeoutException(JvmOkio.kt:147)
        at okio.AsyncTimeout.access$newTimeoutException(AsyncTimeout.kt:158)
        at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:337)
        at okio.RealBufferedSource.request(RealBufferedSource.kt:206)
        at okio.RealBufferedSource.require(RealBufferedSource.kt:199)
        at okio.RealBufferedSource.readHexadecimalUnsignedLong(RealBufferedSource.kt:381)
        at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.readChunkSize(Http1ExchangeCodec.kt:429)
        at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:408)
        at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
        at okio.RealBufferedSource$inputStream$1.read(RealBufferedSource.kt:158)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._loadMore(UTF8StreamJsonParser.java:257)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd2(UTF8StreamJsonParser.java:3086)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd(UTF8StreamJsonParser.java:3081)
        at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:756)
        at com.fasterxml.jackson.databind.MappingIterator.hasNextValue(MappingIterator.java:246)
        at com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:314)
        at com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298)
        at com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275)
        at java.lang.Thread.run(Thread.java:750)
Caused by: javax.net.ssl.SSLException: Socket closed
        at sun.security.ssl.Alert.createSSLException(Alert.java:127)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:331)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:274)
        at sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
        at sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1572)
        at sun.security.ssl.SSLSocketImpl.access$400(SSLSocketImpl.java:73)
        at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:982)
        at okio.InputStreamSource.read(JvmOkio.kt:94)
        at okio.AsyncTimeout$source$1.read(AsyncTimeout.kt:125)
        ... 16 common frames omitted
Caused by: java.net.SocketException: Socket closed
        at java.net.SocketInputStream.read(SocketInputStream.java:204)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:464)
        at sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:68)
        at sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1350)
        at sun.security.ssl.SSLSocketImpl.access$300(SSLSocketImpl.java:73)
        at sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:966)
        ... 18 common frames omitted
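
(For reference, this is how the connection to the remote daemon can be tested by hand from the XNAT host; a sketch, with placeholder cert paths for the keys we keep under remote_docker_keys:)

# Query the TLS-protected remote Docker API directly.
docker --tlsverify \
       --tlscacert=/path/to/ca.pem \
       --tlscert=/path/to/cert.pem \
       --tlskey=/path/to/key.pem \
       -H tcp://gctd-gpu.epiccloud:2376 version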

>And lastly, I know you already said in your initial message that "output files are generated in the staging area (build/<job-id>)", but can you confirm that all the output files exist where you expect them to be? On the docker / execution node the output files for this run should be in /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8, and on the XNAT node they should be in /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8. 

Yes, I confirm that all the output files are in /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8 on the execution node.
Regarding the XNAT node: since I'm running XNAT with these docker-compose volume definitions:
    volumes:
      - ./xnat/plugins:${XNAT_HOME}/plugins
      - ./xnat-data/home/logs:${XNAT_HOME}/logs
      - /mnt/disk1/data/xnat/archive:${XNAT_ROOT}/archive    
      - /mnt/disk1/data/xnat/build:${XNAT_ROOT}/build 
      - /mnt/disk1/data/xnat/cache:${XNAT_ROOT}/cache  
      - ./xnat/xnat-conf.properties:${XNAT_HOME}/conf/xnat-conf.properties
      - ./xnat/server.xml:/usr/local/tomcat/conf/server.xml
      - ./xnat-data/home/config/auth:${XNAT_HOME}/config/auth 
      - ./xnat-data/remote_docker_keys:${XNAT_ROOT}/remote_docker_keys

where disk1 is an NFS-shared HDD, the output files are in:
/mnt/disk1/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8


Thanks again for your support.
Best, 
Carmen 

John Flavin

Nov 29, 2025, 11:35:53 AM
to xnat_di...@googlegroups.com
> Regarding the XNAT node: since I'm running XNAT with these docker-compose volume definitions:
>     volumes:
>       - ./xnat/plugins:${XNAT_HOME}/plugins
>       - ./xnat-data/home/logs:${XNAT_HOME}/logs
>       - /mnt/disk1/data/xnat/archive:${XNAT_ROOT}/archive
>       - /mnt/disk1/data/xnat/build:${XNAT_ROOT}/build
>       - /mnt/disk1/data/xnat/cache:${XNAT_ROOT}/cache
>       - ./xnat/xnat-conf.properties:${XNAT_HOME}/conf/xnat-conf.properties
>       - ./xnat/server.xml:/usr/local/tomcat/conf/server.xml
>       - ./xnat-data/home/config/auth:${XNAT_HOME}/config/auth
>       - ./xnat-data/remote_docker_keys:${XNAT_ROOT}/remote_docker_keys
>
> where disk1 is an NFS-shared HDD, the output files are in:
> /mnt/disk1/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8

That's the path that exists on the machine where you run the XNAT docker compose. But what's the path that XNAT sees from within its container? As in, what is the value of the ${XNAT_ROOT} environment variable that gets set in that container?
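
A quick way to check, assuming printenv is available in the image (<xnat-web-container-id> being whatever docker ps shows for the xnat-web service):

docker exec <xnat-web-container-id> printenv XNAT_ROOT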

Based on what you've shared above and from your CS logs, all of these paths need to contain the same files:
  1. Inside execution container: /output
  2. On execution node: /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8 
  3. Inside XNAT container: /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8
  4. On XNAT node: /mnt/disk1/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8

You've confirmed 2 and 4, and it makes sense that those both have the files, because you know they are NFS mounts on those nodes. Number 1 we can assume is correct, because otherwise how would the files have been written in the first place? The last place to check is 3. I expect that the files are indeed there, because I'm guessing that XNAT_ROOT=/data/xnat. But if they're not there, that would explain why the resources haven't been created.
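
If you want to be extra thorough, you could also compare checksums at locations 2-4 to confirm they are really the same files and not stale copies (a sketch; <xnat-web-id> is your xnat-web container):

# 2. On the execution node:
md5sum /mnt/xnat_shared/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8/ciao.txt
# 3. Inside the XNAT container:
docker exec <xnat-web-id> md5sum /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8/ciao.txt
# 4. On the XNAT node:
md5sum /mnt/disk1/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8/ciao.txt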

The reason I'm poking at this is that I'm trying to figure out if the CS path translation setting is correct. That setting controls, more or less, the mapping from 3 to 2 (in the numbering system above). CS finds the root build directory from XNAT (which is, crucially, from XNAT's perspective inside its container if you’re running it in docker compose, not from the outside filesystem on the XNAT node), then it makes a new subdirectory to hold these outputs. Before launch, it looks up the path translation setting to change the /data/xnat path prefix that XNAT sees to /mnt/xnat_shared that the container execution node sees. I'm just guessing those values from looking at the paths you've given.

If that path translation setting isn't correct, it can cause a mismatch between where the output files actually are and where XNAT thinks it can find those files.

So, given all that preamble, can you confirm that the files you expect are at /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8 inside the XNAT container? (Or if you need to launch a new job, whatever the value of /data/xnat/build/<some id> is.)

John Flavin

Pasquale

Nov 29, 2025, 7:29:06 PM
to xnat_discussion
I can confirm point 3 for her: the file is in /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8

Carmen Giugliano

Dec 2, 2025, 8:20:34 AM
to xnat_discussion
Dear John, 
thanks again for your support.


>That's the path that exists on the machine where you run the XNAT docker compose. But what's the path that XNAT sees from within its container? As in, what is the value of the ${XNAT_ROOT} environment variable that gets set in that container?

In default.env I've defined ${XNAT_ROOT} as:
XNAT_ROOT=/data/xnat


>I'm just guessing those values from looking at the paths you've given.
You're right.

>So, given all that preamble, can you confirm that the files you expect are at /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8 inside the XNAT container? (Or if you need to launch a new job, whatever the value of /data/xnat/build/<some id> is.)
I confirm!

xnat-docker-compose]$ sudo docker exec -it fac8c40XXXXX7 bash
xnat@fac8c40XXXXX7:/usr/local/tomcat$ cd /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8
xnat@fac8c40XXXXX7:/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8$ ls
ciao.txt
xnat@fac8c40XXXXX7:/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8$

I am at a loss. I hope you can help.
Best, 
Carmen 

kel...@wustl.edu

Dec 2, 2025, 4:53:23 PM
to xnat_discussion
Hi Carmen,
I wonder if you might be running into an issue caused by your SELinux configuration on the AlmaLinux host.  
On the host console, you can check:

sestatus

Assuming this is not a production system, temporarily disable SELinux and run your process again.
setenforce 0
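
If SELinux is interfering, you should also see AVC denials in the audit log (a sketch, assuming auditd and the audit tools are installed):

# Look for recent SELinux denials:
ausearch -m avc -ts recent | grep -i denied
# or directly:
grep "avc:  denied" /var/log/audit/audit.log | tail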

Also, to follow up on the experiment you mentioned above:

xnat-docker-compose]$ sudo docker exec -it fac8c40XXXXX7 bash
xnat@fac8c40XXXXX7:/usr/local/tomcat$ cd /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8
xnat@fac8c40XXXXX7:/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8$ ls
ciao.txt
xnat@fac8c40XXXXX7:/data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8$


Could you try this same thing as the user running XNAT (UID/GID 1002:1002)?

sudo docker exec --user 1002 -it fac8c40XXXXX7 bash
ls -al /data/xnat/build/3eac0fb3-1082-4e28-bca0-d05d4de5f2c8

*Be sure to re-enable SELinux after testing:
setenforce 1

Thanks,
Matt

Carmen Giugliano

Dec 4, 2025, 10:58:09 AM
to xnat_discussion

Dear Matt, 

thanks for the advice.
I've also tried setenforce 0, but it didn't help.
I attach the output you requested:

[almalinux@xnatvm xnat-docker-compose]$ sudo docker exec --user 1002 -it 64c3XXXXXX66 bash
xnat@64XXXXXXXXXX6:/usr/local/tomcat$ ls
bin           conf             filtered-KEYS  LICENSE  native-jni-lib  README.md      RUNNING.txt  upstream-KEYS  webapps.dist
BUILDING.txt  CONTRIBUTING.md  lib            logs     NOTICE          RELEASE-NOTES  temp         webapps        work
xnat@XXXXXX66:/usr/local/tomcat$ cd /data/xnat/build/
xnat@6XXXXXX6:/data/xnat/build$ ls -ltrha
xnat@6XXXXXXX6:/data/xnat/build/65ecddf9-a1f5-4b84-ad7a-29184614fb92$ ls -lthra
total 28K
drwxrwxr-x. 277 xnat xnat  20K Dec  4 16:23 ..
drwxr-x---.   2 xnat xnat 4.0K Dec  4 16:23 .
-rw-r--r--.   1 xnat xnat   16 Dec  4 16:23 ciao.txt

I would be grateful for any advice.

Best, 
Carmen 




Kelsey, Matt

Dec 5, 2025, 11:24:14 AM
to xnat_di...@googlegroups.com

Hi Carmen,

I wonder if your XNAT instance is running as root inside of the Docker container and being root squashed by NFS. You can check this (and poke around for other things) by starting a shell session on the running tomcat container as before.

Find your container id, as before:

Kelsey$ docker ps

CONTAINER ID   IMAGE                          COMMAND                  CREATED       STATUS       PORTS                                                                                              NAMES
7e61a0229c01   nginx:1.19-alpine-perl         "/docker-entrypoint.…"   2 weeks ago   Up 2 weeks   0.0.0.0:80->80/tcp                                                                                 xnat-docker-compose-xnat-nginx-1
6564dc25351c   xnat-docker-compose-xnat-web   "wait-for-postgres.s…"   2 weeks ago   Up 2 weeks   0.0.0.0:8000->8000/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8104->8104/tcp, 0.0.0.0:10001->10001/tcp   xnat-docker-compose-xnat-web-1
84ddc76e24c1   postgres:12.2-alpine           "docker-entrypoint.s…"   2 weeks ago   Up 2 weeks   5432/tcp                                                                                           xnat-docker-compose-xnat-db-1

Run bash on the xnat-web container:

Kelsey$ docker exec -ti 6564dc25351c bash

Find the user running java/tomcat:

root@6564dc25351c:/usr/local/tomcat# ps -aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  1.0  9.2 14785044 1860596 ?    Ssl  Nov24 157:23 /usr/local/openjdk-8/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logg
root     37889  0.0  0.0   5628  3212 pts/0    Ss   10:01   0:00 bash
root     37898  0.0  0.0   7136  2576 pts/0    R+   10:01   0:00 ps -aux

*My system shows root because I'm running locally on a dev instance. If you also see that root is running java/tomcat, the XNAT process may be getting root squashed on NFS access attempts.

Change to this user and try running a command on the file in your build space (NFS):

java_user@6564dc25351c:/usr/local/tomcat# su user_from_above
java_user@6564dc25351c:/usr/local/tomcat# cat /data/xnat/build/some-build-folder-uid-on-your-system/ciao.txt
01_CT_2345678
91_CT_2345678
java_user@6564dc25351c:/usr/local/tomcat# cp /data/xnat/build/some-build-folder-uid-on-your-system/ciao.txt /tmp/

Are you able to read and manipulate the ciao.txt file manually, as the user running tomcat/java? That should emulate the permissions XNAT has to do the same, with the exception being that you are not performing this inside of the XNAT JVM.

-Matt


Carmen Giugliano

Dec 9, 2025, 11:26:04 AM
to xnat_discussion
Dear Matt, 
many thanks for your reply!


>I wonder if your XNAT instance is running as root inside of the Docker container and being root squashed by NFS.  You can check this (and poke around for other things) by starting a shell session on the running tomcat container as before.

I've already set no_root_squash:
[almalinux@xnatvm xnat-docker-compose]$ cat /etc/exports
/mnt/disk1/data/xnat/build    IP(rw,sync,no_subtree_check,no_root_squash)
/mnt/disk1/data/xnat/archive  IP(rw,sync,no_subtree_check,no_root_squash)
/mnt/disk1/data/xnat/cache    IP(rw,sync,no_subtree_check,no_root_squash)
/mnt/disk1/data/xnat          IP(rw,sync,no_subtree_check,no_root_squash)
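
(For reference, this is the usual way to apply and verify these export options; a sketch:)

# On the NFS server, re-export after editing /etc/exports:
sudo exportfs -ra
# On the NFS client, check the mount options actually in effect:
nfsstat -m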


My container ID:

[almalinux@xnatvm xnat-docker-compose]$ sudo docker ps -a

CONTAINER ID   IMAGE                          COMMAND                  CREATED      STATUS      PORTS                                                                                      NAMES
f634ab7c4bdc   nginx:1.29.0-alpine-perl       "/docker-entrypoint.…"   4 days ago   Up 4 days   0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp               xnat-nginx
64c3b35b9266   xnat-docker-compose-xnat-web   "wait-for-postgres.s…"   4 days ago   Up 4 days   0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp, 0.0.0.0:8443->8443/tcp, [::]:8443->8443/tcp   xnat-web
35097fb037f1   postgres:16.9-alpine           "docker-entrypoint.s…"   4 days ago   Up 4 days   5432/tcp                                                                                   xnat-db


Run bash on the xnat-web container:

[almalinux@xnatvm xnat-docker-compose]$ sudo docker exec -ti 64c3b35b9266 bash
xnat@64c3b35b9266:/usr/local/tomcat$ whoami
xnat
xnat@64c3b35b9266:/usr/local/tomcat$ ps -aux

USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
xnat           1  0.8  7.7 10421260 1252104 ?    Ssl  Dec04  58:24 /opt/java/openjdk/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.ma
xnat      185924  0.1  0.0   7604  4224 pts/0    Ss   11:01   0:00 bash
xnat      185946  0.0  0.0  10880  4480 pts/0    R+   11:01   0:00 ps -aux

>Are you able read and manipulate the ciao.txt file manually, as the user running tomcat/java? That should emulate permissions XNAT has to do the same, with the exception being that you are not performing this inside of the XNAT JVM.

Yes!

xnat@64c3b35b9266:/usr/local/tomcat$ cat /data/xnat/build/65ecddf9-a1f5-4b84-ad7a-29184614fb92/ciao.txt
HELLO FROM GPU
xnat@64c3b35b9266:/usr/local/tomcat$ cp /data/xnat/build/65ecddf9-a1f5-4b84-ad7a-29184614fb92/ciao.txt /tmp/
xnat@64c3b35b9266:/usr/local/tomcat$ ls -ltrha /tmp
total 4.0K
-rw-r--r--. 1 xnat xnat 16 Dec  9 11:02 ciao.txt
xnat@64c3b35b9266:/usr/local/tomcat$ cat /tmp/ciao.txt
HELLO FROM GPU
xnat@64c3b35b9266:/usr/local/tomcat$ cat > /tmp/ciao.txt <<EOF
HELLO FROM GPU
new line
another line
EOF
xnat@64c3b35b9266:/usr/local/tomcat$ cat /tmp/ciao.txt
HELLO FROM GPU
new line
another line
xnat@64c3b35b9266:/usr/local/tomcat$

Here are more details of my backend configuration:
Host path: tcp://gctd-gpu.xxxxx:2376

Could it be something related to the JSON? For example this line:
      "label": "CT Scan Simulation",
Thanks again!
Best, 
Carmen 

John Flavin

Dec 9, 2025, 12:13:00 PM
to xnat_di...@googlegroups.com
The command looks to me like it should work. It's relatively simple. Send in a project, run the command, upload everything in the command output to a project resource.

The only unnecessary part I see is the "glob": "*.txt" on the output. I would expect it to work, but I think it should be fine to remove, given that there is only the one file in the output.

Sorry that we haven't been able to solve this! It is very strange that there just seems to be nothing happening, no errors, no logs, but also no files uploaded.

John Flavin
