[JHU] Learning Segmentation From Radiology Reports


Zongwei Zhou

Sep 22, 2025, 9:00:05 PM
to body...@googlegroups.com, Zongwei Zhou, Pedro Salvador Bassi

Dear Colleagues,

 

We are excited to introduce R-Super (Report Supervision), a new AI training method that learns tumor segmentation from radiology reports. This work has been short-listed for MICCAI 2025 Best Paper and Young Scientist Awards.

 

📍 MICCAI Presentation

 

If you are in Korea, please visit us on Wednesday, September 24, 15:30–17:00, Poster Hall (DCC 2), Poster C36: “Learning Segmentation from Radiology Reports.”

 

You can read our paper and poster. A full demo is available on GitHub (MrGiovanni/R-Super), built on the public datasets PanTS (10K CTs + masks, JHU) and Merlin (25K CTs + reports, Stanford).

 

R-Super has two unique benefits:

 

  • Scales segmentation datasets: Adding CT–Report pairs to existing CT–Mask datasets improves segmentation performance significantly (e.g., +16% F1, +8.4% DSC for pancreatic tumors).
  • Works with few or no masks: R-Super enables training for many tumor types that currently have no public CT–Mask dataset.
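For readers curious how mask and report supervision might be combined in a single training objective, here is a minimal, hypothetical sketch. This is not R-Super's actual loss (see the paper for the real formulation); the volume-based presence proxy and all function names below are illustrative assumptions only.

```python
import numpy as np

def dice_loss(pred, mask, eps=1e-6):
    # Soft Dice loss for fully annotated (CT-Mask) samples.
    inter = (pred * mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + mask.sum() + eps)

def report_presence_loss(pred, tumor_reported):
    # Weak supervision for CT-Report samples: if the report mentions a
    # tumor, penalize near-zero predicted foreground volume; if it does
    # not, penalize any predicted foreground. (Illustrative proxy only.)
    vol = pred.mean()
    return -np.log(vol + 1e-6) if tumor_reported else vol

def combined_loss(pred, mask=None, tumor_reported=None, w=0.5):
    # Use mask supervision when a mask exists; otherwise fall back to
    # report-derived weak supervision, weighted by w.
    if mask is not None:
        return dice_loss(pred, mask)
    return w * report_presence_loss(pred, tumor_reported)
```

In practice, a training batch could mix both sample types, so CT–Report pairs contribute gradient signal alongside the fully masked cases.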

 

 


 

 

Best regards,

Pedro Bassi and Zongwei Zhou

on behalf of the BodyMaps team

 

Johns Hopkins University
