Help needed for suitable feature extraction method for Surgical Image Analysis

Basil Sunny

Jun 22, 2015, 1:08:45 PM6/22/15
to jfeat...@googlegroups.com

Dear Dr. Franz Graf,

Thank you for providing an excellent library.

I am a research scholar currently working on surgical image analysis. My objective is to detect an instrument inside a surgical video frame using an instrument template. I have tried general object detection methods (SURF, SIFT), edge detection methods, etc. for feature extraction. The problem is that most of the feature points calculated by these methods do not belong to the instrument that is to be detected. Can you please suggest an appropriate method for performing the instrument detection?
Please see the attachment to get some more information.

Thank you in advance

instrument-detection.pdf

Johannes Niedermayer

Jun 23, 2015, 7:16:46 AM6/23/15
to jfeat...@googlegroups.com
Hi,

The problem with the features you used is that they respond mostly to corners and textured regions in general, and the instrument you are aiming to detect contains neither interesting textures nor a lot of gradients.

Despite this, you could try the following approaches, ordered by simplicity:

1) Extract color features locally, e.g. in a grid, such as color histograms; as the instrument has more of a grayish tone and flesh is red, this might work. However, I see difficulties with this approach, as the instrument seems to reflect the red light, making it appear in a similar color.
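As an illustration of this first approach (not JFeatureLib code, just a minimal numpy sketch with assumed grid and bin sizes), one could split each frame into cells and compute a per-channel color histogram per cell:

```python
import numpy as np

def grid_color_histograms(image, grid=(8, 8), bins=8):
    """Split an RGB image into grid cells and compute a normalized
    per-channel color histogram for each cell.
    Returns an array of shape (rows, cols, 3 * bins)."""
    h, w, _ = image.shape
    rows, cols = grid
    feats = np.zeros((rows, cols, 3 * bins))
    for r in range(rows):
        for c in range(cols):
            cell = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            hists = [np.histogram(cell[..., ch], bins=bins,
                                  range=(0, 256), density=True)[0]
                     for ch in range(3)]
            feats[r, c] = np.concatenate(hists)
    return feats

# With a real frame, grayish (instrument) cells and reddish (tissue)
# cells should yield clearly different histograms; here we only check
# the mechanics on random data.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
f = grid_color_histograms(img)
print(f.shape)  # (8, 8, 24)
```

Cells whose red-channel histogram mass sits well above the other channels would then be candidate tissue regions, and the rest candidate instrument regions.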

2) Extract local textural features (one instance of these is Haralick). The resulting feature vectors should differ between flesh and object, as the instrument barely contains any texture while the tissue does. Given training patches of background and instrument, you could train a classifier (SVM, random forest) on this training data, and classify each grid cell of the test images with that classifier.

3) There also exist features based on oriented gradients similar to SIFT, but without interest point detection, that can be used for object detection (e.g. Histogram of Oriented Gradients, HOG). However, these mostly work when objects always appear in a similar pose, such as pedestrians (who are usually upright). This means these features are not generally rotation invariant.
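To make the HOG idea concrete, here is a minimal numpy sketch: per-cell histograms of unsigned gradient orientation, weighted by gradient magnitude. It omits the block normalization of the full Dalal-Triggs pipeline, and the cell size and bin count are assumptions:

```python
import numpy as np

def hog_descriptor(gray, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude. No block normalization, so this
    is a sketch of the idea rather than the full pipeline."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)                   # gradients along rows, cols
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    h, w = gray.shape
    rows, cols = h // cell, w // cell
    desc = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            m = mag[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            a = ang[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            desc[r, c], _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
    return desc.ravel()

# A vertical edge concentrates all gradient energy in the 0-degree bin:
img = np.zeros((16, 16))
img[:, 8:] = 255.0
d = hog_descriptor(img)
print(d.shape)  # (36,) — 2x2 cells of 8 pixels, 9 bins each
```

The lack of rotation invariance is visible here: rotating the edge by 90 degrees would shift the energy into a different orientation bin, producing a very different descriptor.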

4) A recent advancement in detection is based on Convolutional Neural Networks, which are, however, not included in JFeatureLib. While they often work nicely, they require a large amount of training images for building a sufficient object model. This would allow detecting the instrument in different poses, but requires specialized hardware (fast GPUs).

Cheers

Johannes

Basil Sunny

Jun 23, 2015, 11:27:34 AM6/23/15
to jfeat...@googlegroups.com
Thank you for your help, Mr. Johannes. I will try out your suggestions.