spear, question on custom data


Márton Makrai

Aug 19, 2016, 6:31:59 AM8/19/16
to bob-...@googlegroups.com
Dear All,

I would like to use bob.bio.spear for language recognition by letting languages play the role of speakers. I would like to use i-vectors, but I'm totally new to the topic (I have learned about GMMs, but have only read a little about i-vectors; sorry for asking this on the devel list, I couldn't find a help list, and sorry if I confuse things).
My question is what the different parts of the data are used for and how much speech they should contain.
I compare the overview of experiment design at
https://pypi.python.org/pypi/bob.spear/1.1.2
with the directory structure of file lists at
http://pythonhosted.org/bob.db.verification.filelist/guide.html#creating-file-lists.
I guess I need the following files (as a minimum):

basedir -- norm -- train_world.lst
       |
       |-- dev -- for_models.lst
       |      |-- for_probes.lst
       |
       |-- eval -- for_models.lst
               |-- for_probes.lst
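
To make this concrete, here is a small sketch of how those lists could be generated. The column layouts are an assumption based on the bob.db.verification.filelist convention (world/probe lists: "path client_id"; model lists: "path model_id client_id") — please check the linked guide for the exact format of your version. The utterance paths and language ids are purely hypothetical.

```python
# Sketch: writing the minimal file lists for a language-ID protocol,
# where each language plays the role of a "client" (speaker).
import os

def write_list(path, rows):
    """Write one whitespace-separated row per utterance."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        for row in rows:
            f.write(" ".join(row) + "\n")

# Hypothetical utterances: (relative audio path without extension, language id)
utterances = [
    ("hun/utt_001", "hungarian"),
    ("hun/utt_002", "hungarian"),
    ("vie/utt_001", "vietnamese"),
    ("vie/utt_002", "vietnamese"),
]

base = "basedir"
write_list(os.path.join(base, "norm", "train_world.lst"),
           [(p, lang) for p, lang in utterances[:2]])
# In the model lists each language is enrolled as one model,
# so model_id == client_id == the language label.
write_list(os.path.join(base, "dev", "for_models.lst"),
           [(p, lang, lang) for p, lang in utterances[2:3]])
write_list(os.path.join(base, "dev", "for_probes.lst"),
           [(p, lang) for p, lang in utterances[3:]])
```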


I was told that I should use half of the data I have for training the UBM (step 3 in the overview, file train_world.lst).
The remaining steps from the overview are

4. Subspace Training and Projection (subspaces needed by ISV, JFA and I-Vector)
5. Conditioning and Compensation; this step is used by the I-Vector toolchain (Whitening, Length Normalization, LDA and WCCN projection).
6. Model Enrollment.
7. Scoring.
8. Fusion (The fusion of scores from different systems is done using logistic regression that should be trained normally on the development scores.) and
9. Evaluation.

I've read that there is no enrollment in the i-vector technique. But how are the remaining five steps (4, 5, 7-9) related to the four files dev/eval for_models.lst/for_probes.lst?
I was also told that the half of the data remaining after UBM training should be split equally between dev and eval. But what should the ratio of for_models.lst to for_probes.lst be within either dev or eval?
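
For concreteness, the split I was told about could look like the sketch below (file ids are hypothetical; in practice one would probably split by speaker rather than by utterance, so the same voice does not appear in both dev and eval):

```python
# Minimal sketch of the split described above: half of the utterances for
# UBM/subspace training, the rest divided equally between dev and eval.
# The models/probes ratio within each set is the open question.
import random

utterances = [f"utt_{i:03d}" for i in range(100)]  # hypothetical file ids
random.seed(0)
random.shuffle(utterances)

half = len(utterances) // 2
train_world = utterances[:half]           # -> norm/train_world.lst
rest = utterances[half:]
dev_set = rest[: len(rest) // 2]          # -> dev/for_models.lst + for_probes.lst
eval_set = rest[len(rest) // 2 :]         # -> eval/for_models.lst + for_probes.lst
```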

Some background on my project: We are creating a Hungarian speech archive (http://hlt.bme.hu/en/projects/speech). There is not much speech in the archive yet, so I'm doing experiments with the IARPA Babel Language Packs.

Thank you very much
Márton Makrai

Tiago Freitas Pereira

Aug 19, 2016, 1:05:51 PM8/19/16
to bob-...@googlegroups.com
Hey Márton,

First, don't worry, you are in the right place :-)

Second, try to use the most recent software (https://pypi.python.org/pypi/bob.bio.spear). We also have all these packages available for conda (https://github.com/idiap/bob/wiki/Installation-with-conda).

I'm not a speaker recognition specialist and I've never worked with language detection before, so I don't have much to say (where are the speaker specialists on this list :-P ??).
About the amount of data needed to train the background models (UBM, TV matrix, etc.), there is no precise answer.
It depends on the conditions in which you want to operate your language detection system.
Roughly speaking, you should have a good amount of utterances from many different speakers.

Regarding your question about i-vectors, I suggest you have a look at the paper "Front-End Factor Analysis for Speaker Verification" (Dehak et al.).
In short, from step 4 onward you deal with the i-vectors (which are 1-D feature vectors).
So the Whitening, LDA, WCCN, etc. are done with the i-vectors extracted from the training set ('train_world.lst' in your example).
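
To illustrate the idea (this is a plain-numpy sketch, not bob's API): the whitening transform is learned on the training-set i-vectors and then applied to dev/eval i-vectors, and after length normalization the scoring step can be simple cosine similarity. All the data here is random and for illustration only.

```python
# Sketch of step 5 (conditioning) and step 7 (scoring) on i-vectors.
import numpy as np

rng = np.random.default_rng(0)
train_ivecs = rng.normal(size=(200, 50))  # pretend: 200 training i-vectors
test_ivecs = rng.normal(size=(10, 50))    # pretend: dev/eval i-vectors

# Learn whitening on the training set only: zero mean, decorrelated,
# unit variance (ZCA form).
mean = train_ivecs.mean(axis=0)
cov = np.cov(train_ivecs - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T

def project(ivecs):
    whitened = (ivecs - mean) @ W
    # Length normalization: scale every i-vector to unit Euclidean norm.
    return whitened / np.linalg.norm(whitened, axis=1, keepdims=True)

# Scoring: for unit-norm vectors, cosine similarity is just a dot product.
model = project(test_ivecs[:1])   # "model": i-vector of one language
probe = project(test_ivecs[1:2])  # "probe": an unknown utterance
score = (model @ probe.T).item()
```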


Cheers
Tiago


--
-- You received this message because you are subscribed to the Google Groups bob-devel group. To post to this group, send email to bob-...@googlegroups.com. To unsubscribe from this group, send email to bob-devel+unsubscribe@googlegroups.com. For more options, visit this group at https://groups.google.com/d/forum/bob-devel or directly the project website at http://idiap.github.com/bob/



--
Tiago