
Neural Network feature importance map


andymc...@gmail.com

Jun 15, 2015, 2:17:59 PM

Hello

I'm currently doing a machine learning project, and it is the first time I'm working with a Deep Belief Net / neural net. I have some problems and hope that somebody can help me.

I'm using the DeepLearnToolbox (https://github.com/rasmusbergpalm/DeepLearnToolbox).

I have an fMRI image dataset and extracted one feature vector for each image. In the end I have 40 feature vectors (I know that is few, but I still want to try a neural net). There are 4 classes in total (images are labeled from 1 to 4), and I want to use a DBN / NN for classification.

With the above toolbox I first learn weights with the DBN and then use these weights to initialize a NN.
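Concretely, my pipeline roughly follows the DBN example from the toolbox README (just a sketch; the layer sizes, learning rate and epoch counts below are placeholders, not my real settings):

-------------------------------------------------
% pre-train a DBN on the feature vectors (train_x: samples x features)
dbn.sizes = [100 100];            % placeholder hidden layer sizes
opts.numepochs = 10;
opts.batchsize = 10;
opts.momentum  = 0;
opts.alpha     = 1;               % RBM learning rate
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);

% unfold the DBN into a feed-forward NN with 4 output units (my 4 classes)
nn = dbnunfoldtonn(dbn, 4);
nn.activation_function = 'sigm';

% fine-tune with backpropagation on the labels (train_y: one-hot, samples x 4)
opts.numepochs = 10;
opts.batchsize = 10;
nn = nntrain(nn, train_x, train_y, opts);
-------------------------------------------------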

Now my two questions, hoping that somebody can help:

1. I would like to derive a feature importance map as described at https://code.google.com/p/princeton-mvpa-toolbox/wiki/TutorialImportanceMaps.

In detail, it is about the following section from the above website:

-------------------------------------------------
Determining the importance of a given input unit on a given output unit (for a given condition/category) for a 3-layer network.

- calculate the average activity of the input unit for that condition (avg_input)
- calculate the average activity for each of the hidden units for that condition (avg_hidden)

ALTERNATE VERSION - for loop over i hidden units: total_importance = total_importance + (avg_input * w_input_to_hidd(i) * avg_hidden(i) * w_hidd_to_out(i))
-------------------------------------------------

I don't know which weights and activations I have to take from the DBN or the NN. In the end, I just want the importance of each input feature.

Could somebody perhaps briefly help me with this?
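
To make the question concrete, this is how I currently read the formula for a 3-layer net (just a sketch; W1, W2, x_cond and h_cond are my own names here, with W1 the input-to-hidden and W2 the hidden-to-output weights without the bias terms):

-------------------------------------------------
% W1: (n_hidden x n_input)  input-to-hidden weights, bias removed
% W2: (n_out    x n_hidden) hidden-to-output weights, bias removed
% x_cond: (n_samples x n_input)  inputs of one condition/category
% h_cond: (n_samples x n_hidden) hidden activations for the same samples
% k: index of the output unit belonging to that condition

avg_input  = mean(x_cond, 1);     % average activity of each input unit
avg_hidden = mean(h_cond, 1);     % average activity of each hidden unit

importance = zeros(1, size(W1, 2));
for j = 1:size(W1, 2)             % importance of input unit j on output unit k
    total = 0;
    for i = 1:size(W1, 1)         % loop over the hidden units
        total = total + avg_input(j) * W1(i, j) * avg_hidden(i) * W2(k, i);
    end
    importance(j) = total;
end
-------------------------------------------------

Is that the right reading, and do I take these weights from the fine-tuned NN (I guess nn.W{1} and nn.W{2} without the bias column) or from the DBN?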

2. I'm completely unsure about some parameters, i.e. which values are reasonable or over which range I should search. It concerns the following parameters (my current settings are sketched after the list):

- Dropout level
- Learning rate
- Momentum
- Scaling factor for the learning rate (each epoch)
- weightPenaltyL2 (L2 regularization)
- Non-sparsity penalty
- Sparsity target
- numepochs
- batchsize

See also https://github.com/rasmusbergpalm/DeepLearnToolbox/blob/master/NN/nnsetup.m
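
As mentioned above, this is roughly how I set those parameters at the moment (field names as I read them from nnsetup.m / nntrain.m; the values are just my first guesses, which is exactly what I'm unsure about):

-------------------------------------------------
nn.dropoutFraction      = 0.5;    % dropout level
nn.learningRate         = 1;      % learning rate
nn.momentum             = 0.5;    % momentum
nn.scaling_learningRate = 0.99;   % scaling of the learning rate each epoch
nn.weightPenaltyL2      = 1e-4;   % L2 regularization
nn.nonSparsityPenalty   = 0;      % non-sparsity penalty
nn.sparsityTarget       = 0.05;   % sparsity target

opts.numepochs = 50;
opts.batchsize = 10;
nn = nntrain(nn, train_x, train_y, opts);
-------------------------------------------------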


Thank you so much for the help.