How to train multiple datasets/faces_training-*.pkl files with the train_test_mlp.py script?

G10DRAS

Feb 16, 2016, 9:44:05 PM2/16/16
to OpenCV with Python Blueprints

Hello Michael,

Is it possible to train MLP (params/mlp.xml) using multiple datasets/faces_training-*.pkl files?
Basically I want to do continuous Training and Testing in multiple sessions.

Thanks

Michael Beyeler

Feb 17, 2016, 8:43:18 PM2/17/16
to OpenCV with Python Blueprints
Hi,

You should be able to simply run homebrew.load_data() on different pickle files from different sessions and feed the resulting data one-by-one to MLP.fit():

import cv2
import numpy as np
from datasets import homebrew

for idx in xrange(10):
    # load all samples from one session's pickle file into X_train
    (X_train, y_train) = homebrew.load_data(
        "datasets/faces-training-" + str(idx) + ".pkl", test_split=0)

    # convert to NumPy arrays
    X_train = np.squeeze(np.array(X_train)).astype(np.float32)
    y_train = np.array(y_train)

    # train for 10 epochs with backpropagation
    params = dict(term_crit=(cv2.TERM_CRITERIA_COUNT, 10, 0.01),
                  train_method=cv2.ANN_MLP_TRAIN_PARAMS_BACKPROP,
                  bp_dw_scale=0.001, bp_moment_scale=0.9)
    MLP.fit(X_train, y_train, params=params)


Does that answer your question?

G10DRAS

Feb 18, 2016, 3:00:56 AM2/18/16
to OpenCV with Python Blueprints
Thanks for the code, Michael.

I quickly did it another way. In the main GUI, I load the previously trained samples and labels as follows:

    print "Loading data from previously trained data_file", training_file
    f = open(training_file, 'rb')
    self.samples = pickle.load(f)
    self.labels = pickle.load(f)
    print "Loaded", len(self.samples), "samples from previously trained data_file"
    f.close()

and then continue adding more training data to samples and labels. On exit, the app saves all the data, including the data loaded from the previous training file.
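For reference, this load-then-append pattern can be sketched end to end. The two-object pickle layout (samples first, then labels) follows the snippet above; the function names and the demo data are illustrative, and the sketch uses modern Python 3 syntax rather than the book's Python 2:

```python
import os
import pickle
import tempfile

def load_training_data(path):
    """Load previously saved samples and labels (two pickled objects)."""
    if not os.path.isfile(path):
        return [], []
    with open(path, 'rb') as f:
        samples = pickle.load(f)
        labels = pickle.load(f)
    return samples, labels

def save_training_data(path, samples, labels):
    """Dump samples and labels in the same two-object layout."""
    with open(path, 'wb') as f:
        pickle.dump(samples, f)
        pickle.dump(labels, f)

# demo: start a new session on top of an old one
path = os.path.join(tempfile.mkdtemp(), "faces_training.pkl")
save_training_data(path, [[0.1, 0.2]], ["happy"])  # previous session
samples, labels = load_training_data(path)          # resume
samples.append([0.3, 0.4])                          # new sample this session
labels.append("sad")
save_training_data(path, samples, labels)           # save everything on exit

samples, labels = load_training_data(path)
print(len(samples), len(labels))  # 2 2
```

Because the combined data is re-pickled on exit, each session's file always contains the full history, so train_test_mlp.py only ever needs the latest file.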

Paskl SUNNY

May 3, 2018, 5:17:22 PM5/3/18
to OpenCV with Python Blueprints
Hi Michael,

Thank you very much for your code. Actually, I have a problem: how can I get the datasets/faces-training.pkl file? On your page (https://github.com/mbeyeler/opencv-python-blueprints/tree/master/chapter7/datasets) I can only find the following files in the datasets directory: homebrew.py and __init__.py (which is empty), and no faces-training.pkl or faces_preprocessed.pkl.

Thank you in advance for your help

Michael Beyeler

May 4, 2018, 4:07:16 PM5/4/18
to OpenCV with Python Blueprints
Hi,

In short, you need to assemble a training set first using the app. The .pkl file will then be created from the face images you record. This is all explained in my book in Chapter 7 and in the source code of "chapter7/chapter7.py" (see the first step below):

"""OpenCV with Python Blueprints Chapter 7: Learning to Recognize Emotion in Faces
An app that combines both face detection and face recognition, with a
focus on recognizing emotional expressions in the detected faces.
The process flow is as follows:
* Run the GUI in Training Mode to assemble a training set. Upon exiting
the app will dump all assembled training samples to a pickle file
"datasets/faces_training.pkl".
* Run the script train_test_mlp.py to train a MLP classifier on the
dataset. This file will store the parameters of the trained MLP in
a file "params/mlp.xml" and dump the preprocessed dataset to a
pickle file "datasets/faces_preprocessed.pkl".
* Run the GUI in Testing Mode to apply the pre-trained MLP classifier
to the live stream of the webcam.
"""
Best,
Michael