I have been managing the XNAT infrastructure at our site for about a year and a half now.
We primarily use XNAT as a platform to store and provide controlled access to MRI data collected from our scanner. We also use it to run basic preprocessing steps such as DICOM-to-NIfTI conversion, FreeSurfer, MRIQC, etc., through the container service plugin. These containers interface with our HPC environment and offload more computationally demanding jobs there.
One of XNAT’s biggest strengths is that it is open source, extensible, and well documented. Even when there isn’t a built-in or “official” way to do something, there are usually multiple ways to implement a solution manually.
A good example is BIDS. We initially struggled to implement BIDS in a way that fit our workflow. It required careful planning around how data would be stored, downloaded, and remain compatible with the preprocessing pipelines we wanted to run. Ultimately, we found a workable solution by storing BIDS outputs as session-level resources. These were generated via the container service plugin, which took configuration files from the project level and ran a session-level dcm2bids. This gave us a BIDS-like structure that worked well for our use case; the data could be downloaded directly or passed cleanly into downstream preprocessing pipelines.
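To make the approach concrete, the mapping from XNAT session metadata to a session-level BIDS resource path can be sketched in a few lines. This is a simplified illustration, not our production code; the label formats and the `anat` default are hypothetical:

```python
import re

def bids_session_path(subject_label: str, session_label: str,
                      modality: str = "anat") -> str:
    """Map XNAT subject/session labels to a BIDS-style rawdata path
    where session-level dcm2bids output could be stored as a resource.

    BIDS labels must be alphanumeric, so any other characters in the
    XNAT labels are stripped.
    """
    sub = re.sub(r"[^a-zA-Z0-9]", "", subject_label)
    ses = re.sub(r"[^a-zA-Z0-9]", "", session_label)
    return f"rawdata/sub-{sub}/ses-{ses}/{modality}"
```

For example, an XNAT subject `P01_control` with session `MR1` would resolve to `rawdata/sub-P01control/ses-MR1/anat`, which is the shape of path the session-level dcm2bids container writes into.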
This example highlights both a strength and a weakness of XNAT. On one hand, it is flexible enough that you can implement workflows exactly the way you want. On the other hand, almost everything can feel like a workaround. There are many ways to achieve a requirement, but rarely one “perfect” or canonical way. Continuing with the BIDS example: we can achieve a clean rawdata structure at the session level, but this isn’t fully aligned with how BIDS is intended to be organized across an entire project. Ideally, we would store a complete BIDS dataset in a comprehensive, project-level manner rather than fragmented across sessions. That said, we were able to make BIDS work satisfactorily by bending some of its rules to better match XNAT’s DICOM-focused data model. A counter-argument is that XNAT is open source, and with enough time and development resources, it would be possible to implement a more “ideal” BIDS solution. However, this would require a dedicated development team -- something we do not have the resources to commit.
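To illustrate the kind of workaround this involves: session-level BIDS resources can be stitched back into a single project-level dataset with a small script after download. A rough sketch, assuming a hypothetical layout in which each downloaded session resource contains its own `sub-*/ses-*` tree:

```python
import json
import shutil
from pathlib import Path

def merge_session_bids(session_dirs: list[Path], project_dir: Path,
                       dataset_name: str) -> None:
    """Combine per-session BIDS resources into one project-level dataset.

    Each entry in session_dirs is assumed to hold a sub-*/ses-* tree
    produced by a session-level dcm2bids run.
    """
    project_dir.mkdir(parents=True, exist_ok=True)
    for sess in session_dirs:
        for sub_dir in sess.glob("sub-*"):
            # dirs_exist_ok merges subjects that span multiple sessions
            shutil.copytree(sub_dir, project_dir / sub_dir.name,
                            dirs_exist_ok=True)
    # BIDS requires a dataset_description.json at the dataset root
    desc = {"Name": dataset_name, "BIDSVersion": "1.8.0"}
    (project_dir / "dataset_description.json").write_text(
        json.dumps(desc, indent=2))
```

This is the sort of glue code XNAT leaves to the site: straightforward to write, but it is your responsibility to maintain it and keep it consistent with the BIDS specification.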
This leads to another important point: XNAT requires dedicated personnel, depending on the complexity of your use case. At our site, the university IT team manages the underlying hardware and networking infrastructure. That still leaves significant operational responsibility on the XNAT side -- training users, troubleshooting issues, managing uptime, developing and maintaining containers, and continuously optimizing workflows. I do have support when needed, but this is not something that runs itself. That said, this can still be a reasonable trade-off, especially when compared to licensed commercial alternatives, which are often expensive and may not meet all custom requirements.
Another pain point is the legacy stack. Personally, I feel XNAT is designed to be robust and comprehensive rather than modern, fast, or especially user-friendly. It works reliably, but it does not feel lightweight or intuitive, particularly for new or non-technical users.
Overall, my advice would be to understand your current and future requirements, as well as your resource constraints -- hardware, personnel, and dedicated time -- before committing to XNAT as your primary data platform. There is much more that could be said about XNAT, both positive and negative, but this is a concise breakdown based on our real-world experience, using our BIDS implementation as an example.