Digital Reference Objects


Aaron Oliver-Taylor

Aug 25, 2021, 7:21:29 AM
to niQC
Hi All,

Further to my presentation yesterday, here's some info on ASLDRO, the digital reference object software we have developed:


We are quite close to putting out a new release (v2.3.0), which has loads more features than the current (2.2.0), so I would recommend downloading from the develop branch rather than installing the package. You can do this with pip using:
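Something along these lines should work (the repository path here assumes our GitHub mirror, so adjust it if needed):

    pip install git+https://github.com/gold-standard-phantoms/asldro.git@develop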


If you're interested, you can also check out the development documentation, which we have made publicly available to view:


However, because of permissions you won't quite see everything - it won't pull the issues from our Jira server, because that requires being logged in.

As you'll see, we put a lot of emphasis on ensuring that everything is well tested and documented. We actually use GitLab for development, and just have GitHub mirror the development and master branches.

DROs are of course useful for testing/benchmarking image analysis software, as well as for probing the underlying physical models. We used the DRO to perform sensitivity and uncertainty analyses for an ASL measurement; this is described in an ISMRM abstract from this year: https://submissions2.mirasmart.com/ISMRM2021/ViewSubmission.aspx?sbmID=1108&validate=false
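To give a flavour of the sensitivity-analysis idea (this is just a minimal sketch, not the ASLDRO implementation), here is the standard single-PLD pCASL quantification model with the assumed blood T1 perturbed by +/-5 % to see how the estimated CBF shifts; the signal values and parameters are placeholders:

    import numpy as np

    def cbf_white_paper(delta_m, m0, t1b=1.65, alpha=0.85, lam=0.9, tau=1.8, pld=1.8):
        # Single-PLD pCASL quantification ("white paper" model), CBF in ml/100g/min
        return (6000.0 * lam * delta_m * np.exp(pld / t1b)
                / (2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))))

    delta_m, m0 = 10.0, 1000.0                      # arbitrary signal values, for illustration only
    baseline = cbf_white_paper(delta_m, m0)
    for t1b in (1.65 * 0.95, 1.65, 1.65 * 1.05):    # +/-5 % error in the assumed blood T1
        cbf = cbf_white_paper(delta_m, m0, t1b=t1b)
        print(f"T1b = {t1b:.3f} s -> CBF = {cbf:.1f} ml/100g/min ({100 * (cbf / baseline - 1):+.1f} %)")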

Perhaps others can share DROs they have made/are aware of here as well?

Cheers,
Aaron

Raamana, Pradeep Reddy

Aug 25, 2021, 12:11:42 PM
to Aaron Oliver-Taylor, niQC, Richard Mallozzi

Thanks, Aaron and Richard, for the very interesting talks and useful discussion – I uploaded the video here, which also contains our previous discussion on “phantom data matter (beyond scanner QA)”: https://www.youtube.com/playlist?list=PLIa3r7AIaTinx9aVjhozUaUd2gpU5HTgn

 

An important realization (for me at least) is that we need to identify or develop guidelines and/or criteria for what would be an acceptable phantom, and I hope this group will help develop them.

 

Below are some rough incomplete notes from me (please feel free to share your thoughts):

 

Aaron Oliver-Taylor / Gold Standard Phantoms:

  • Developed a Digital Reference Object (DRO)
    • Useful for testing pipelines, etc.
  • We may need to develop acceptance criteria for hardware phantoms
    • They must allow easy identification of correct orientation
    • Stability
      • Characterize it?
      • Way to identify when things (such as stability) change!
    • Standards to compare different phantoms?
    • Approval process for hardware?
    • Need good guidelines to reproduce hardware?
    • Safe materials?
      • Not using sodium azide
    • Portability for easy sharing?
      • Traveling phantom
    • Main issue for hardware
      • Getting NIST certification. Need their hardware characterization
    • Acceptance testing!
      • Benchmarks etc
  • Cyril: QA often doesn’t follow the application/study!!
    • Should it? When does it not need to?

Richard Mallozzi / Phantom Lab

  • Helped develop the original ADNI phantom
  • General aim: develop phantoms and analysis software together to provide a thorough MR QC solution
  • Two different faces of MR QC
    • Scanner working well or not?
    • Is there a sequence related problem? User error?
  • Their phantom allows estimation of distortion down to millimeters
    • Important in cancer/radiation therapy!
  • Edge spread function!
    • Resolution measurement (a small sketch of the ESF-to-MTF calculation follows these notes)
    • To help qualify a pulse sequence for contouring, e.g. in cancer therapy
  • Signal uniformity within virtual/background spheres
    • Detects failure of phased array element
  • We need to measure the stability of the phantoms themselves over time!
    • Phantom drift besides system drift

Questions for the speakers:

  • Is a standard possible, for basic scanner QA? Even under limited circumstances
    • It will be very application specific!
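A minimal sketch of the edge spread function idea from Richard's talk, using a synthetic blurred edge in place of real phantom data: differentiate the ESF to get the line spread function, then Fourier transform it to get the MTF. The pixel size and the 50 % threshold are purely illustrative:

    import numpy as np

    pixel_mm = 0.5                                   # assumed in-plane pixel size (mm)
    x = np.arange(256) * pixel_mm
    # Synthetic edge profile: an ideal step blurred smoothly (stand-in for a measured ESF)
    esf = 0.5 * (1.0 + np.tanh((x - x.mean()) / 1.0))

    lsf = np.gradient(esf, pixel_mm)                 # line spread function = d(ESF)/dx
    lsf /= lsf.sum()                                 # normalise so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))                   # modulation transfer function (magnitude)
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)    # spatial frequency in cycles/mm

    f50 = freqs[np.argmax(mtf < 0.5)]                # first frequency where the MTF drops below 50 %
    print(f"MTF50 ~ {f50:.2f} cycles/mm")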

 

 


Raamana, Pradeep Reddy

Aug 25, 2021, 1:05:24 PM
to Aaron Oliver-Taylor, niQC, Richard Mallozzi

Also, it’d be helpful if those at Imaging Centres can share what sort of phantoms they currently use, for which modalities/sequences, and what their experience has been so far (pros and cons etc) and what they would ideally like to do going forward (which is what we here at Pitt trying to decide on)? Thanks.

Dr Cyril Pernet

Aug 25, 2021, 1:48:37 PM
to Raamana, Pradeep Reddy, Aaron Oliver-Taylor, niQC, Richard Mallozzi

Quick follow-up - QC is seen first and foremost as a way to detect scanner issues, and phantoms are designed mostly for that; what we have been discussing is how to use QC in our group analyses (beyond spotting problems). As Aaron pointed out, manufacturer/model-invariant metrics are what is needed here.

Cyril

-- 
Dr Cyril Pernet, PhD, OHBM fellow, SSI fellow
Neurobiology Research Unit, 
Building 8057, Blegdamsvej 9
Copenhagen University Hospital, Rigshospitalet
DK-2100 Copenhagen, Denmark

wamc...@gmail.com
https://cpernet.github.io/
https://orcid.org/0000-0003-4010-4632

Dr Cyril Pernet

Aug 25, 2021, 1:57:05 PM
to Raamana, Pradeep Reddy, niQC

Our plan is either to scan this .. or a ball with the same solution - with the usual SNR, tSNR, and some homogeneity stuff.

Now, I have no good knowledge of diffusion - I'm guessing some spatially related metrics could be useful to regress out in group analyses? And that's where another phantom (Lego?) could be better?
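For the basic SNR/tSNR side, a minimal sketch of a voxelwise tSNR map from a phantom fMRI run (the filename and the crude intensity mask are placeholders, not a recommendation of a particular ROI strategy):

    import numpy as np
    import nibabel as nib

    img = nib.load("phantom_bold.nii.gz")      # hypothetical filename for a 4D phantom run
    data = img.get_fdata()                     # array of shape (x, y, z, t)

    mean_t = data.mean(axis=-1)                # voxelwise temporal mean
    std_t = data.std(axis=-1)                  # voxelwise temporal standard deviation
    tsnr = np.divide(mean_t, std_t, out=np.zeros_like(mean_t), where=std_t > 0)

    mask = mean_t > 0.2 * mean_t.max()         # crude intensity threshold as a stand-in ROI
    print(f"median tSNR inside the phantom: {np.median(tsnr[mask]):.1f}")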

Antoine Lutti

Aug 26, 2021, 3:49:41 AM
to Dr Cyril Pernet, Raamana, Pradeep Reddy, niQC
For diffusion-related QA, gradient non-linearities would definitely be an important aspect. There's already been a lot done in this area. For their effect on group-level analyses, see here for example:

Best
Antoine
 

Aaron Oliver-Taylor

Aug 26, 2021, 5:02:34 AM
to Antoine Lutti, Dr Cyril Pernet, Raamana, Pradeep Reddy, niQC
Hi All,

Pradeep, thanks for the summary. Here are my comments/elaboration:

  • We may need to develop acceptance criteria for hardware phantoms: I think we can come up with a good set of guidelines for sharing phantom designs here, although some things will be application specific!
    • They must allow easy identification of correct orientation: this is more of a nice-to-have than essential, but there are relatively easy ways to implement it. Alternatively, the phantom could sit in a holder/cradle that enforces a specific orientation.
    • Stability
        • Characterize it?
        • Way to identify when things (such as stability) change!
        • It is dependent on the conditions a phantom is kept in (temperature, humidity, etc.), so really this might need to be done on a per-phantom basis. A procedure to assess the phantom itself is going to be important; however, if the phantom is used to check that the MR system is good for a certain purpose, maybe it's not such a good idea to assess the phantom using the same method. Gets a bit meta...
      • Standards to compare different phantoms? Phantoms of different design but for the same purpose can be compared against a specification of requirements, which is defined by the application.
      • Approval process for hardware?
      • Need good guidelines to reproduce hardware?
        • Define: materials and consumables, manufacturing methods, assembly instructions, instructions for use.
        • Should be shared and communicated in an effective way - a scientific article is usually insufficient due to brevity.
        • Perhaps a template could be produced to facilitate.
      • Safe materials?
        • Not using sodium azide
        • I highly advise people to stop using this; it is extremely dangerous. Recommend ProClin 150 at 0.1% w/w for water-based liquids and gels. Also ensure mixing vessels, tools, and the phantom itself are clean beforehand - hot water and washing-up liquid is fine. We rinse with deionised water so there's no residue when dried.
      • Portability for easy sharing?
        • Traveling phantom: being able to transport is of course very important, and so things like foam-lined peli cases should be considered. Safety data sheets should be supplied in case of spillages - even if the liquid is perfectly harmless these are the standard way to communicate this information.
      • Main issue for hardware
        • Getting NIST certification. Need their hardware characterization: this is application specific - really only for qMRI, and I wouldn't say it is absolutely necessary for every situation. It comes down to defining a gold standard that you can then compare against - this could be NIST's characterisation, or it might be performing the characterisation using a 'gold standard' pulse sequence, for example T1 mapping using a long TR inversion recovery (a small fitting sketch follows this list). This kind of sequence is impractical clinically (long acquisition times, single slice) but will give you values that more clinical T1 mapping sequences (e.g. VFA or MOLLI) can be compared against.
        • Other issues are the compatibility of materials - chemical compatibility of plastics etc. Plastics tend to have lots of additives (plasticisers, dyes etc) which can leach into the solutions over time and might affect the sample's stability. Again this depends on the application as to whether this is a problem.
      • Acceptance testing!
        • Benchmarks etc
    • Cyril: QA often doesn’t follow the application/study!!
      • Should it? When does it not need to?
      • It comes down to measuring performance for an application. If you run very high gradient duty cycles in your studies, then QA'ing the gradient system with low gradient duty cycles is clearly not going to be sufficient. However, if it is shown (and the jury is out on this, but it could be tested relatively easily) that a standard fBIRN protocol is sensitive enough to pick up issues that affect the more state-of-the-art multiband type sequences with parallel imaging etc then there's no need to use the same protocol as your study for the fBIRN test. The advantage of doing this is you are doing something that is standardised and can be compared with other sites/systems. This does need addressing, but it comes down to establishing the performance requirements of your application and then determining how to measure them in your QA test.
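    As a concrete illustration of the long TR inversion recovery 'gold standard' mentioned above, here is a minimal single-voxel fitting sketch on synthetic data (magnitude IR model assuming complete recovery between inversions; the TI values and noise level are arbitrary):

        import numpy as np
        from scipy.optimize import curve_fit

        def ir_magnitude(ti, m0, t1):
            # Magnitude inversion-recovery signal, assuming TR >> T1 (complete recovery)
            return np.abs(m0 * (1.0 - 2.0 * np.exp(-ti / t1)))

        ti = np.array([50, 100, 200, 400, 800, 1600, 3200, 6400], dtype=float)    # ms
        rng = np.random.default_rng(0)
        signal = ir_magnitude(ti, 1000.0, 900.0) + rng.normal(0.0, 5.0, ti.size)  # synthetic data

        (m0_fit, t1_fit), _ = curve_fit(ir_magnitude, ti, signal, p0=[signal.max(), 1000.0])
        print(f"fitted T1 = {t1_fit:.0f} ms (true value 900 ms)")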

    A more general comment, which Richard also emphasised, is that phantoms can be used in different ways:
    • QA to check that the MR system is in spec, picking up issues, which then usually requires the MRI vendor to send a service engineer.
    • Validation - checking that protocols or new pulse sequences are valid and produce the correct results. This is perhaps where things like calibrated phantoms (i.e. NIST certification) are useful.
    • Acceptance testing - checking that a particular site (comprising MR system + staff) is able to acquire data of a specified quality. Both phantoms and volunteers can be used here, and phantoms are particularly useful because they do not drink coffee, have a bad night's sleep etc so in principle results are more comparable.
    • Harmonisation/standardisation: this is something of a combination of the above three. Again it is application specific. The ACR guidelines and phantom harmonise sites that seek accreditation, it shows they are able to acquire MR data of sufficient quality, and this is really just based on some acceptance testing (although it involves more than just phantom imaging and is only for quite basic T1w, T2w type imaging). Likewise the fBIRN test means that fMRI data from different sites can be checked to ensure it is of sufficient quality throughout a study, however at least in the implementation as it is in the paper this is just to act as a canary in the coal mine, rather than provide data that can be incorporated into group analyses. At the other end of the spectrum we discussed how potentially phantom data could be used to "correct" images on a per-subject basis - the issue here is that it would need to be strongly validated that the differences requiring correction are purely systematic biases due to the MRI hardware and not subject dependent. And if that is the case, maybe it would be better to address the hardware issues through QA to check that the system is within spec...
    As for defining a specification for a phantom, I would propose the following:
    1. Establish what your application is and what you want to provide QC for - fMRI, diffusion, spectroscopy, for example.
    2. Determine what the essential performance requirements are for this.
    3. Find/develop methods that can measure this performance quantitatively, resulting in performance metrics. Ideally you want each metric to reflect just a single effect, although in practice this may not be possible.
    4. This should then establish the requirements for the phantom (at least in the sense of what sort of signal it should provide).
    5. The signal requirements then need to be translated into a specification for a physical phantom, here focusing on what it needs to do, not what the actual phantom is.
    With this it should then be possible to design one or more types of phantoms that fulfil these requirements.
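    To make that concrete, here is a rough sketch of how such a specification could be captured in a structured way (the field names and the fMRI numbers are purely illustrative placeholders, not recommendations):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class PerformanceRequirement:
            # One measurable requirement derived from the application (steps 2-3)
            metric: str        # e.g. "tSNR", "geometric distortion"
            method: str        # how the metric is measured
            acceptance: str    # pass/fail criterion

        @dataclass
        class PhantomSpecification:
            # Signal-level specification (steps 4-5): what the phantom must do, not what it is
            application: str
            requirements: List[PerformanceRequirement] = field(default_factory=list)

        spec = PhantomSpecification(
            application="fMRI stability QA",
            requirements=[
                PerformanceRequirement("tSNR", "mean/std of voxel time series in a central ROI", ">= 200"),
                PerformanceRequirement("signal drift", "linear fit to the ROI mean over the run", "<= 1% per run"),
            ],
        )
        print(f"{spec.application}: {len(spec.requirements)} requirements")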

    Cheers,
    Aaron

    -- 
    Aaron Oliver-Taylor, PhD
    Chief Technical Officer



Raamana, Pradeep Reddy

Aug 31, 2021, 8:55:19 AM
to Aaron Oliver-Taylor, Antoine Lutti, Dr Cyril Pernet, niQC

Thank you, Aaron, for the detailed and insightful comments – I feel like there is enough meat for us to consider writing a paper (even if short) on “what makes a good phantom?” or “desirable characteristics of an MR phantom”, with some discussion of how to monitor the phantoms themselves while they help us monitor MR scanners, etc. What say?

Todd Constable

Aug 31, 2021, 9:37:15 AM
to Raamana, Pradeep Reddy, Todd Constable, Aaron Oliver-Taylor, Antoine Lutti, Dr Cyril Pernet, niQC
Or maybe “Principal Considerations for Phantom QA”.
I think in the applications section there needs to be a broader scope of body vs head (phantom size, receiver coils), and then, depending on the sequences, things like temporal stability (of the phantom and the magnet) may or may not need to be considered.
How to use a phantom (just plopping it down and scanning, or waiting a period of time for it to stabilize in the bore) might need a few words too.
But this is a good outline.
tc


R. Todd Constable, Ph.D.
Professor of Radiology and Biomedical Imaging, BME, Neurosurgery
Director MRI Research
Yale University School of Medicine
The Anlyan Center
300 Cedar Street
PO Box 208043
New Haven, CT 06520-8043
Website: http://mri.med.yale.edu



Christoph Vogelbacher

Aug 31, 2021, 11:05:28 AM
to niQC
Very good idea to write a paper on this topic. I know that many don't know a thing about phantoms and why they are needed. I know that the positioning of the phantom in the scanner is important, too.