Re: Biomedisa research


Philipp Loesel

Apr 14, 2024, 11:30:59 PM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


I'll address all your questions within the text:


On 13/4/24 20:45, Yi Qing Low (22718812) wrote:


Hi!

An update, it gave an error

Same for another set of example data I tried:
While I'm not 100% sure, it looks like you pressed the start button, which is solely for smart interpolation. Please select your label and image TAR file and press the AI button for training.

Would this be the way to do training with my files: (this is just one patient)
Instead of putting all files in a TAR file, you can use individual files for training and upload them to different projects. In this case, only one image and one label file are allowed in each project. Unfortunately, you cannot upload different labels in different files. In your case, both "Blood" and "AAA" must be in the same file and have different values (e.g. 1, 2, ...). In 3D Slicer you can add multiple labels to a segmentation project. This is necessary in all cases, regardless of whether you use TAR files or individual files.

Thank you so much, I will patiently wait for your response!


From: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>
Sent: Saturday, 13 April 2024 3:33 PM
To: Philipp Loesel <philipp...@anu.edu.au>
Subject: Re: Biomedisa research
 
Hello Philipp,

Hope you had a great week and are currently doing well.

I just have a few questions about Biomedisa for deep learning, as I am not familiar with coding.
I followed the instructions on GitHub for deep learning.

Something like this showed up:

I need help locating a way to visualise the trained data, i.e. how do I know if the deep learning succeeded?
That's fine. Most Biomedisa users are unfamiliar with coding; Biomedisa was developed with exactly this in mind. Although you didn't train a network here but rather used a trained network for automated segmentation, everything worked fine. You have installed Biomedisa properly, your GPU has been detected, and the segmentation completed successfully. The result “final.testing_axial_crop_pat13.nii.gz” should be located where your image data is located, i.e. in your Downloads directory.

On the Biomedisa website, I attempted to perform deep learning to see how the app works:
I uploaded the two files (one uploaded as image, another as label), then ticked the check box before clicking the AI button, is it normal for the processing duration to take more than 2 hours or have I made a mistake somewhere?
You did everything absolutely right. Yes, it is normal for training to take several hours. The computation time mainly depends on the number of volumetric training images, the number of training epochs used, and the number of GPUs available. This dataset from the Biomedisa gallery contains 10 volumetric training images, the default number of epochs is 100 (it can be changed in the settings), and Biomedisa currently uses two NVIDIA V100 GPUs for training.

For my research project, I have 35 patients' CT scans (.nrrd) with the AAA and its lumen labelled using 3D Slicer (also .nrrd), as shown here:

Each patient has an AAA of different shape and size.
From my understanding, do I upload the CT scan and its corresponding labels to the app, click the AI button, repeat for the next 24 patients, then use 5 patients for validation and the last 5 for prediction (via the Prediction button)?
Since you have fully segmented 35 CT scans, you basically have two valid options:

1. The evaluation approach:

You could use 21 of your scans for training, 7 for validation during training (please also upload both the 7 images and 7 labels to Biomedisa as a TAR file, but in a separate project and enable "Validation Data (AI)" in the label settings) and the remaining 7 for predicting and testing how well the trained network works.


2. The production approach:

Use 28 for training and 7 for validation during training (same as in the first approach). Then simply apply the network to new data sets that you want to segment.

In both cases, validation during training not only monitors the improvement of the learning process, but also ensures that only the best performing network is ultimately used.
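For illustration, the evaluation approach's 21/7/7 split can be scripted before packing the files. This is a generic sketch, not part of Biomedisa itself; the file names are an assumed naming scheme:

```python
import random

# Hypothetical patient file names (the naming scheme is an assumption)
patients = [f"patient_{i}.nrrd" for i in range(1, 36)]  # 35 scans

random.seed(0)  # fixed seed so the split is reproducible
random.shuffle(patients)

train = patients[:21]    # training images
val = patients[21:28]    # validation during training ("Validation Data (AI)")
test = patients[28:]     # held out for prediction/testing

print(len(train), len(val), len(test))  # prints: 21 7 7
```

For the production approach, the same idea applies with a 28/7 split and no held-out test set.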

And lastly, with the above two methods (command line and on the website), which method do you reckon is more suitable for my data?
In general, you are much more flexible using the command line: not all features are currently available online, you avoid uploading your data to the server, and predicting multiple images online is a bit annoying because you have to upload them all individually (I'm in the process of changing that). On the other hand, with Biomedisa online you don't need to install anything and you don't even need a GPU (though neither seems to be a problem in your particular case). Additionally, if you use Biomedisa online, I can help you figure out what's going on and give you recommendations on what to try to improve your results.

Thank you so much and I look forward to your input. 

With regards,
Yi Qing.


From: Philipp Loesel <philipp...@anu.edu.au>
Sent: Friday, 13 October 2023 2:51 PM
To: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>
Subject: Re: Biomedisa research
 

Hi YiQing,


thank you for notifying me. The server crashed. I have to check what went wrong.

But it should be fine now. Please let me know if you recognize any further problems.


Cheers,
Philipp


Am 13/10/23 um 16:30 schrieb Yi Qing Low (22718812):


Hi Dr Philipp,

Hope you are well.

I am a student pursuing a Master's in Biomedical Engineering at The University of Western Australia. I am interested in working with your Biomedisa platform for my research project (fully automated segmentation of abdominal aortic aneurysm medical images), as I read that your software can perform deep learning for fully automated segmentation.
You may have heard from my classmate Anushree two months ago.

I attempted to enter the Biomedisa website at https://biomedisa.org/ but encountered this issue and just wanted to let you know.


Kind regards,
YiQing.


_________________________________________________

Dr. Philipp Loesel

Department of Materials Physics
Research School of Physics (RSPhys)
The Australian National University (ANU)

58 Mills Road, Cockcroft, Room C3.24
Acton ACT 2601
Australia

phone: +61 2612 57583
email: philipp...@anu.edu.au
https://physics.anu.edu.au/contact/people/profile.php?ID=3160
https://biomedisa.org
_________________________________________________

Dr. Philipp Loesel

Department of Materials Physics
Research School of Physics (RSPhys)
The Australian National University (ANU)

58 Mills Road, Cockcroft, Room C4.40
Acton ACT 2601
Australia

email: philipp...@anu.edu.au
https://physics.anu.edu.au/contact/people/profile.php?ID=3160
https://biomedisa.info/

Yi Qing Low (22718812)

Apr 15, 2024, 1:57:00 AM
to Philipp Loesel, biom...@googlegroups.com
Hi Philipp,

"Instead of putting all files in a TAR file, you can use individual files for training and upload them to different projects. In this case, only one image and one label file are allowed in each project. Unfortunately, you cannot upload different labels in different files. In your case, both "Blood" and "AAA" must be in the same file and have different values (e.g. 1, 2, ...). In 3D Slicer you can add multiple labels to a segmentation project. This is necessary in all cases, regardless of whether you use TAR files or individual files."

  1. Just to clarify, I am training a neural network to segment different patients. To make the training process more efficient, I would have to convert the patient scans into a TAR file and their label scans into a TAR file as well:
    • one TAR file would have the 21 patients for training (if I am using the evaluation approach)
    • a TAR file for their corresponding label files (Blood and AAA in the same file)
    • another TAR with 7 patients + their labels for validation
  • And this can be done on the app and on the command line?

  2. Is there a way to save the final trained files, e.g. final.testing_axial_crop_pat13, so I can view them in 3D as well? Viewing the file in 3D Slicer shows a black and white image.

Thank you!

From: Philipp Loesel <philipp...@anu.edu.au>
Sent: Monday, 15 April 2024 11:30 AM
To: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>; biom...@googlegroups.com <biom...@googlegroups.com>
Subject: Re: Biomedisa research
 

Philipp Loesel

Apr 15, 2024, 3:49:24 AM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


On 15/4/24 15:56, Yi Qing Low (22718812) wrote:
Hi Philipp, 

"Instead of putting all files in a TAR file, you can use individual files for training and upload them to different projects. In this case, only one image and one label file are allowed in each project. Unfortunately, you cannot upload different labels in different files. In your case, both "Blood" and "AAA" must be in the same file and have different values (e.g. 1, 2, ...). In 3D Slicer you can add multiple labels to a segmentation project. This is necessary in all cases, regardless of whether you use TAR files or individual files."

  1. Just to clarify, I am training a neural network to segment different patients. To make the training process more efficient, I would have to convert the patient scans into a TAR file and their label scans into a TAR file as well:
    • one TAR file would have the 21 patients for training (if I am using the evaluation approach)
    • a TAR file for their corresponding label files (Blood and AAA in the same file)
Correct, 21 patients/images for training in one TAR file and 21 corresponding labels in one TAR file. It is also worth mentioning that the file names from the image/patient data and the corresponding label files should match, as Biomedisa needs to figure out which files correspond to each other. So you could name the files exactly the same or do something like patient_1.nrrd, patient_2.nrrd, ..., patient_21.nrrd for your image files and labels.patient_1.nrrd, labels.patient_2.nrrd, ... , labels.patient_21.nrrd for your label files.
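The packing step described above can be done with Python's standard-library tarfile module. This is a minimal sketch under the assumption of flat directories of NRRD files following the matching-name scheme; the directory layout and file names are illustrative, and dummy empty files stand in for real scans:

```python
import os
import tarfile
import tempfile

# Assumed layout: flat directories, e.g. images/patient_1.nrrd and
# labels/labels.patient_1.nrrd, so Biomedisa can match files by name
workdir = tempfile.mkdtemp()
images_dir = os.path.join(workdir, "images")
labels_dir = os.path.join(workdir, "labels")
os.makedirs(images_dir)
os.makedirs(labels_dir)

# Create empty placeholder files for three patients (stand-ins for real scans)
for i in range(1, 4):
    open(os.path.join(images_dir, f"patient_{i}.nrrd"), "wb").close()
    open(os.path.join(labels_dir, f"labels.patient_{i}.nrrd"), "wb").close()

def pack(src_dir, tar_path):
    """Pack every file in src_dir into a flat (no subdirectories) TAR file."""
    with tarfile.open(tar_path, "w") as tar:
        for name in sorted(os.listdir(src_dir)):
            # arcname keeps the archive flat, without parent directories
            tar.add(os.path.join(src_dir, name), arcname=name)

# Images and labels go into two separate TAR files
pack(images_dir, os.path.join(workdir, "training_images.tar"))
pack(labels_dir, os.path.join(workdir, "training_labels.tar"))

with tarfile.open(os.path.join(workdir, "training_images.tar")) as tar:
    image_names = tar.getnames()
print(image_names)  # prints: ['patient_1.nrrd', 'patient_2.nrrd', 'patient_3.nrrd']
```

The same function would be used again for the validation images and labels, producing two more TAR files.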


    • another TAR with 7 patients + their labels for validation
Yes, but here too the 7 patient/image files must be in a separate “Image” TAR file and the 7 corresponding label files in another “Label” TAR file (just like for the training data).


  • And this can be done on the app and on the command line?
Yes, both can be done online or locally using the command-line interface. As mentioned earlier, you need to enable “Validation Data (AI)” in your validation data label settings. For the command line, please use -vi="path_to_your_validation_images" and -vl="path_to_your_validation_labels"; the location can be either a TAR file or a directory.


  2. Is there a way to save the final trained files, e.g. final.testing_axial_crop_pat13, so I can view them in 3D as well? Viewing the file in 3D Slicer shows a black and white image.

Yes, now things are getting a little more advanced. Only when Amira/Avizo label files are used to train the neural network is the header information saved and automatically included in the prediction result. In all other cases, the result is saved as a TIFF without header information. However, in both cases, online and from the command line, you can specify another file with the header information, which will then be included in your result.


Online:
Upload one of the training labels to a separate project and enter the file name in the “Header File” field in the trained network settings. Here called “My_reference_file.nrrd”.


Command-line:

Use -hf="C:\full_path_to\My_reference_file.nrrd"


Your segmentation result should then also be an NRRD file (or a .nii.gz file for the heart examples from the gallery). When you import the result into 3D Slicer, select “Segmentation” as description. Then it should look like this:

Yi Qing Low (22718812)

Apr 15, 2024, 8:20:10 PM
to Philipp Loesel, biom...@googlegroups.com
Hi Philipp!

I changed the Header File field online and saved the changes:

However, after training finished, this showed up:

From: Philipp Loesel <philipp...@anu.edu.au>
Sent: Monday, 15 April 2024 3:49 PM
To: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>; biom...@googlegroups.com <biom...@googlegroups.com>
Subject: Re: Biomedisa research

Yi Qing Low (22718812)

Apr 15, 2024, 8:21:53 PM
to Philipp Loesel, biom...@googlegroups.com
Correction: I meant Predict, not training

From: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>
Sent: Tuesday, 16 April 2024 8:19 AM
To: Philipp Loesel <philipp...@anu.edu.au>; biom...@googlegroups.com <biom...@googlegroups.com>
Subject: Re: Biomedisa research
 

Philipp Loesel

Apr 15, 2024, 8:31:04 PM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


You need to extract a single label file from the training labels, e.g. training_axial_crop_pat0-label.nii.gz, and upload it as "label" to a Biomedisa project. Then replace final_heart_file.nrrd with training_axial_crop_pat0-label.nii.gz in the "Header file" field of the neural network.

--
You received this message because you are subscribed to the Google Groups "Biomedisa" group.
To unsubscribe from this group and stop receiving emails from it, send an email to biomedisa+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/biomedisa/ME2PR01MB26737D7137F4521378668E40BB082%40ME2PR01MB2673.ausprd01.prod.outlook.com.

Yi Qing Low (22718812)

Apr 15, 2024, 9:24:52 PM
to Philipp Loesel, biom...@googlegroups.com
Hi Philipp,

To clarify, as I am a little confused about the instructions, here is an example:

I have a label.nii.gz of the image to be predicted, an already trained network from Biomedisa network.h5, and the image to be predicted image.nii.gz

I would upload a label file, e.g. example-label.nii.gz, to Project 1

And then the network and image to Project 2 > change header file in the neural network to label.nii.gz > Predict?

From: Philipp Loesel <philipp.da...@gmail.com>
Sent: Tuesday, 16 April 2024 8:30 AM
Subject: Re: [Biomedisa] Re: Biomedisa research
 

Philipp Loesel

Apr 15, 2024, 10:39:28 PM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


In this case, you need to change the header file in the neural network settings to example-label.nii.gz. Your label.nii.gz does not exist yet and the filename of the predicted file will be automatically set to final.image.nii.gz. You need to specify the header file in order to transfer the header information from the already existing label file example-label.nii.gz to your result. If you do not specify it, your result will be final.image.tif. But then you won't be able to import it as a Segmentation into 3D Slicer.

Yi Qing Low (22718812)

Apr 16, 2024, 10:30:30 AM
to Philipp Loesel, biom...@googlegroups.com
Hi Philipp,

I tried to train on the command line with validation data and it seems it did not run.


From: Philipp Loesel <philipp.da...@gmail.com>
Sent: Tuesday, 16 April 2024 10:39 AM

Yi Qing Low (22718812)

Apr 17, 2024, 1:26:52 AM
to Philipp Loesel, biom...@googlegroups.com
Hi,

I realised my mistake: I forgot to put "-t -bs 12" at the end. Putting just "-t" does not work, as the GPU ran out of memory.

The TAR files contained NRRD files of the patient images; that may be the reason why a "Warning!" message is shown:

From: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>
Sent: Tuesday, 16 April 2024 10:30 PM
To: Philipp Loesel <philipp.da...@gmail.com>; biom...@googlegroups.com <biom...@googlegroups.com>

Philipp Lösel

Apr 17, 2024, 5:31:43 AM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


You're right, the default value for -bs (--batch_size) is 24, but that's usually too large. Since most users only have a single GPU, I decided to change the default to -bs=12. I'm just a little confused that you didn't get an error message at all. Also, I'll probably replace the warning message with a note that users must use the -hf (--header_file) flag for prediction if they want to save the result as NRRD, as discussed in this chat previously.

Thank you for your feedback!

Yi Qing Low (22718812)

Apr 17, 2024, 9:49:15 PM
to Philipp Lösel, biom...@googlegroups.com
Hi Philipp,

I summarised the steps to train a network using validation data and prediction on Biomedisa online for easier understanding; please point out if I made any mistakes:

Upload Files to a Project e.g. Project 1 for training: patient.tar (files to train with), label_patient.tar (label)

Upload Files to a separate Project for validation: val.tar (files for validation), label_val.tar (label)
"By activating "Validation data (AI)" in the settings of the labels, Biomedisa knows that everything in this project should be considered as validation data. But it needs to be in a separate project because otherwise Biomedisa does not know which image TAR file belongs to training and which to validation."

  1. Use the Settings icon to enable 'Validation Data' for val.tar and input the recommended 0.8 as the Validation split.
    (What happens if it stays 0?)
  2. Click the AI button after checking the boxes for patient.tar and label_patient.tar in Project 1 to start training.

Output: a final .h5 file

For Prediction:
  1. Upload one or more image files (as a TAR file) and the .h5 file to the same project.
  2. If a specific output format is required (e.g. NRRD), upload a label file in the required format (e.g. .nrrd) and, in the Settings, input the file name in the 'Header File' field.



From: Philipp Lösel <philipp.da...@gmail.com>
Sent: Wednesday, 17 April 2024 5:31 PM

Yi Qing Low (22718812)

Apr 17, 2024, 9:51:29 PM
to Philipp Lösel, biom...@googlegroups.com
Correction:

2. Click the AI button after checking the boxes for patient.tar, label_patient.tar, val.tar, and label_val.tar to start training.

From: Yi Qing Low (22718812) <2271...@student.uwa.edu.au>
Sent: Thursday, 18 April 2024 9:49 AM
To: Philipp Lösel <philipp.da...@gmail.com>; biom...@googlegroups.com <biom...@googlegroups.com>

Philipp Loesel

Apr 18, 2024, 2:07:43 AM
to Yi Qing Low (22718812), biom...@googlegroups.com

That's pretty much correct, except that you don't need a "Validation split" if you use separate validation data. Validation splitting would split your training data into training and validation data. However, once you specify dedicated validation data, any validation split will be ignored. Dedicated validation data is prioritized.


The prediction of TAR files is not yet available. You must upload all images individually. But that will change in the near future. Also, the image files and the network for prediction do not need to be in the same project.


Just to clarify, "Validation data" must be enabled in the "label_val.tar" settings and the "Header file" field is in the trained network settings.

Yi Qing Low (22718812)

May 6, 2024, 1:40:51 AM
to Philipp Loesel, biom...@googlegroups.com
Hi Philipp,

Does the online Biomedisa have a way to compute the Dice score and ASSD of the ground truth and result?

I know that on the command line (according to the GitHub), we use this:
      from biomedisa_features.biomedisa_helper import Dice_score, ASSD
      dice = Dice_score(ground_truth, result)
      assd = ASSD(ground_truth, result)
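For reference, the Dice score itself is easy to check with plain NumPy, independent of the Biomedisa helper. This sketch assumes binary masks; Biomedisa's Dice_score may handle multi-label volumes differently, and ASSD additionally requires surface-distance computations, which are better left to the library:

```python
import numpy as np

def dice_score(ground_truth, result):
    """Dice coefficient of two binary masks: 2*|A∩B| / (|A| + |B|)."""
    gt = np.asarray(ground_truth, dtype=bool)
    res = np.asarray(result, dtype=bool)
    denom = gt.sum() + res.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(gt, res).sum() / denom

# Sanity check on toy 3D masks
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True   # 8 voxels
b[1:3, 1:3, 1:4] = True   # 12 voxels, 8 of them overlapping with a
print(dice_score(a, b))   # 2*8 / (8 + 12) = 0.8
```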

Thank you.

Regards,
Yi Qing.

From: Philipp Loesel <philipp.da...@gmail.com>
Sent: Thursday, 18 April 2024 2:07 PM

Philipp Loesel

May 6, 2024, 9:19:12 PM
to Yi Qing Low (22718812), biom...@googlegroups.com

Hi Yi Qing,


The online version of Biomedisa does not currently have the ability to calculate Dice Score and ASSD like the local command-line version. However, there are plans to include them, and there are also plans to add the img_resize function. But I can't say when that will happen. Any support is warmly welcome.


Best wishes,
Philipp
