On the Projector.hdf5 file for algorithms: ISV, TV-cosine and TV-PLDA


Alain Komaty

unread,
Nov 14, 2016, 5:01:01 AM11/14/16
to bob-devel

Hey all,

This discussion is about the temporary files created when running a verification experiment, that is, Projector.hdf5, Enroller.hdf5, etc.

If one runs ./bin/verify.py using ubm-gmm, a file named Projector.hdf5 is created in /idiap/temp/username/YOUR_DATABASE_NAME/YOUR_SUB_DIR/. This Projector.hdf5 is the UBM, and it contains a number of Gaussians (depending on how many Gaussians were used for the GMM model).
Now if one wants to run another experiment, like ISV or i-vector, using the same database and the same protocol, it is judicious to reuse the pre-calculated UBM model as well as the preprocessed files, instead of recalculating them all. In order to do that, we can specify the options: --skip-preprocessing --skip-extraction --projector-file '/idiap/temp/akomaty/AMI-test1/isv/Projector.hdf5'. The script will then look for the UBM file directly in the specified location.
However, if one runs the whole verify.py command without any skip and without specifying the already-trained Projector file, a file also named Projector.hdf5 will again be created, but this file is different from the one previously discussed: it does not contain only the GMM-UBM. In fact, if you open it, you'll see two groups, Projector and Enroller, as described below:



  • for the ISV Projector.hdf5 file: the group Projector is the UBM, and the group Enroller contains the trained matrices of the ISV model (m_{s,h} = m_0 + D z_s + U x_h);
  • for the ivector_PLDA Projector.hdf5 file: the group Projector is the UBM, and the group Enroller contains the trained matrices of the i-vector model (TV), m_{s,h} = m_0 + T w_{s,h}, plus the rest of the data, such as the LDA projection and the PLDA classifier.
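You can check this layout yourself. Here is a minimal sketch using h5py (bob's own HDF5 tools would work the same way; the group names below mimic the layout described above, while the file is a stand-in created on the fly):

```python
import h5py

# Build a tiny stand-in for an ISV-style Projector.hdf5, so the
# two-group layout can be inspected (contents are made up).
with h5py.File("Projector_demo.hdf5", "w") as f:
    f.create_group("Projector")   # would hold the UBM
    f.create_group("Enroller")    # would hold the ISV/TV matrices

# Listing the top-level groups shows why a plain UBM file (no groups)
# cannot be read where this two-group layout is expected.
with h5py.File("Projector_demo.hdf5", "r") as f:
    print(sorted(f.keys()))  # ['Enroller', 'Projector']
```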
Now it is time to talk about the issue. The problem arises when we want to use a pre-trained UBM for ISV (or i-vector): we specify the path to the Projector.hdf5 file, but the script will not find the Projector group inside it, and you'll get an error like this one:
ERROR: During the execution, an exception was raised: HDF5File - cd ('/idiap/temp/username/YOUR_DATABASE_NAME/YOUR_UBM_DIR/Projector.hdf5'): C++ exception caught: 'Cannot find group `Projector' at `/idiap/temp/username/YOUR_DATABASE_NAME/YOUR_UBM_GMM_DIR/Projector.hdf5:'

This is normal because, as already mentioned above, the Projector.hdf5 file created for the ubm_gmm experiment does not contain any group, while the one generated by verify.py for ISV (or i-vector) contains more than one group.
The solution to this problem is to train the ISV (or i-vector) parameters separately using train_isv.py and train_ivector.py, and then run verify.py on the trained matrices. An example for each algorithm is given in the following paper package documentation:
http://pythonhosted.org/xspear.btas2015/  

The experiments done in the BTAS2015 paper are enough to run a similar experiment and to remedy the problem described above, but my question is: is there any reason why verify.py creates different types of Projector.hdf5 depending on the algorithm? Why does it not create a separate file called Enroller.hdf5 alongside the Projector.hdf5 file? I guess this would make it easier to re-run experiments with already-trained models by using only verify.py and the options provided by this script.

Please correct me if I missed or misunderstood something.

Best,
Alain

Manuel Günther

unread,
Nov 14, 2016, 12:03:34 PM11/14/16
to bob-devel
Dear Alain,

indeed, the `Projector.hdf5` is different for each algorithm. This is by design. Here, you are only talking about UBM-GMM-based algorithms, while the possible types of algorithms are so varied that it is impossible to assume any particular structure inside a `Projector.hdf5`. In your case, this means that the `Projector.hdf5` files for the UBM-GMM and ISV algorithms are also different and incompatible. Hence, it is impossible to train a `Projector.hdf5` with one algorithm and use it with another one.

I am not sure if Elie has implemented a clever way to re-use the UBM trained with another algorithm. From looking at the code, it does not seem to be the case.
However, it should be straightforward to implement this. For example, you should be able to specify a pre-trained UBM file in the GMM constructor: https://gitlab.idiap.ch/bob/bob.bio.gmm/blob/master/bob/bio/gmm/algorithm/GMM.py#L21 and use it instead of training a new one. For example, you might check whether `self.ubm` is already set here: https://gitlab.idiap.ch/bob/bob.bio.gmm/blob/master/bob/bio/gmm/algorithm/GMM.py#L101 and skip the UBM training.

Then, you can specify the UBM in your configuration file. Note that you might need some manual work to get the UBM file into the right format, i.e., you might need to extract some sub-directory of `Projector.hdf5` into its own file.
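For that manual step, something along these lines should work (a sketch with h5py; the dataset name `means` is illustrative, not the layout bob actually writes):

```python
import h5py

# Hypothetical combined Projector.hdf5 with "Projector" (UBM) and
# "Enroller" groups, as produced for ISV/i-vector experiments.
with h5py.File("combined.hdf5", "w") as src:
    src.create_group("Projector").create_dataset("means", data=[[0.0, 1.0]])
    src.create_group("Enroller")

# Copy the contents of the "Projector" group into the ROOT of a new
# file, which then looks like the group-less UBM file that the plain
# UBM-GMM algorithm writes.
with h5py.File("combined.hdf5", "r") as src, \
     h5py.File("ubm_only.hdf5", "w") as dst:
    grp = src["Projector"]
    for name in grp:
        src.copy(grp[name], dst, name=name)

with h5py.File("ubm_only.hdf5", "r") as f:
    print(list(f.keys()))  # ['means']
```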

Let me know in case of problems.
Manuel

Amir Mohammadi

unread,
Nov 15, 2016, 11:00:13 AM11/15/16
to bob-devel
Hi Alain,

Thank you for raising this issue. Generally, I think your problem is that you want to use a cascade of classifiers and also share some classifiers between different experiments.

In my understanding, bob.bio.base currently supports only two layers of cascading, through the "projection" and "enrollment" steps. This is somewhat limited; usually you may want N classifiers cascaded. But to be more realistic: in the case of GMM-based algorithms, you want to share the UBM/GMM between experiments.
Hence, I think the GMM should be trained and saved in the projection step, and the other steps such as ISV and i-vector should be done in the enrollment step and saved there.

This way, you can use the vanilla 'verify.py' to provide the trained UBM with the '--projector-file' option and it should just work. This change, I think, will break the current API of bob.bio.gmm, but the changes can be minimal.

P.S. I really don't know why we have 'verify_gmm.py', 'verify_isv.py', and 'verify_ivector.py' written separately, but I think it may be time to merge them into 'verify.py', since these scripts lag behind the original one in terms of bug fixes and new features.

Best,
Amir

--
-- You received this message because you are subscribed to the Google Groups bob-devel group. To post to this group, send email to bob-...@googlegroups.com. To unsubscribe from this group, send email to bob-devel+...@googlegroups.com. For more options, visit this group at https://groups.google.com/d/forum/bob-devel or directly the project website at http://idiap.github.com/bob/
---
You received this message because you are subscribed to the Google Groups "bob-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bob-devel+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Tiago Freitas Pereira

unread,
Nov 15, 2016, 11:26:17 AM11/15/16
to bob-...@googlegroups.com
Just to add some words in the discussion,

We have verify_gmm.py, verify_ivector.py, etc. separated from verify.py because we hack the bob.bio.base toolchain in order to fit the distributed training of the UBM (k-means and ML) and the TV matrix (plus the linear transformations with the i-vectors).
This issue with bob.bio.base is deeper than we think it is.
We basically have one toolchain, and we "squeeze" every possible biometric recognition problem into it.
Well, we should discuss this another time.

What we can do, as a palliative, is to make the `ubm-gmm` training write the Gaussians into the `Projector` group of the Projector.hdf5 file.
For the time being, Alain, you can do this by hand (using bob or an HDF5 viewer).
It is not the best way, but at least you don't need to train everything from scratch.
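For instance, the by-hand fix could look like this (a sketch with h5py; the dataset name `means` is illustrative, not the actual layout bob writes):

```python
import h5py

# A flat UBM file, as written by the plain ubm-gmm experiment
# (the dataset name "means" is made up for this sketch).
with h5py.File("ubm_flat.hdf5", "w") as f:
    f.create_dataset("means", data=[[0.0, 1.0]])

# Wrap everything under a "Projector" group, so that ISV/i-vector
# style code doing cd('/Projector') can find the UBM.
with h5py.File("ubm_flat.hdf5", "r") as src, \
     h5py.File("ubm_wrapped.hdf5", "w") as dst:
    grp = dst.create_group("Projector")
    for name in src:
        src.copy(src[name], grp, name=name)

with h5py.File("ubm_wrapped.hdf5", "r") as f:
    print(list(f["Projector"].keys()))  # ['means']
```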

Tiago



Amir Mohammadi

unread,
Nov 15, 2016, 11:32:04 AM11/15/16
to bob-...@googlegroups.com
Hi Tiago,

We are discussing this here because we don't want to do this by hand. My suggestion is that if we separate it into projection and enrollment, then the gmm, isv, and ivector algorithms can easily pick up the shared UBM without manual modification.

Best,
Amir


Manuel Günther

unread,
Nov 15, 2016, 11:52:50 AM11/15/16
to bob-devel
I think there was a reason to have all the steps together in the projector training. For ISV, for example, we pre-compute the parts of the ISV equation that only rely on the probes during projection: https://gitlab.idiap.ch/bob/bob.bio.gmm/blob/master/bob/bio/gmm/algorithm/ISV.py#L129 -- in order to avoid computing them every time during scoring. This saves a lot of scoring time.
Hence, moving ISV to the enroller training might be doable, but at the cost of a much longer scoring time -- which is already pretty long.
I think the same applies to i-vector; for JFA I am not sure.

I think we should go with Tiago's proposal, i.e., make the UBM-GMM projector write the UBM into the "Projector" HDF5 subgroup. In this way, we can specify `ubm_file='...hdf5'` in the UBMGMM, ISV, IVector and JFA constructors and skip UBM training here: https://gitlab.idiap.ch/bob/bob.bio.gmm/blob/master/bob/bio/gmm/algorithm/GMM.py#L103 and here: https://gitlab.idiap.ch/bob/bob.bio.gmm/blob/master/bob/bio/gmm/script/verify_gmm.py#L76
Actually, we could even have a switch to read both formats, so that we do not break the API.
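As a sketch of that switch (all names here, `GMMSketch`, `ubm_file`, `_load_ubm`, `_train_ubm`, are hypothetical, not the real bob.bio.gmm API):

```python
# Sketch: skip UBM training when a pre-trained file is given.
class GMMSketch:
    def __init__(self, number_of_gaussians, ubm_file=None):
        self.number_of_gaussians = number_of_gaussians
        self.ubm_file = ubm_file
        self.ubm = None

    def _load_ubm(self, path):
        return f"ubm-from-{path}"     # stand-in for reading the HDF5 file

    def _train_ubm(self, train_features):
        return "freshly-trained-ubm"  # stand-in for k-means + ML training

    def train_projector(self, train_features):
        # The key switch: a pre-trained UBM short-circuits training.
        if self.ubm is None and self.ubm_file is not None:
            self.ubm = self._load_ubm(self.ubm_file)
        elif self.ubm is None:
            self.ubm = self._train_ubm(train_features)
        return self.ubm

print(GMMSketch(512, ubm_file="ubm.hdf5").train_projector([]))  # ubm-from-ubm.hdf5
print(GMMSketch(512).train_projector([]))                       # freshly-trained-ubm
```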


I agree that providing a single toolchain for all algorithms is a little bit difficult, but I tried my best, and I think the latest versions of the bob.bio packages are very adaptable. The bob.bio.gmm package is a little bit odd, as it combines different steps into the projector training, which works both with the linear script `bin/verify.py` and with the parallelized training scripts `bin/verify_gmm.py` and the like. In fact, I thought that Elie might have implemented the `ubm_file` in the constructor already, as he had the same issue as Alain before. Apparently this is not the case, but that should not hinder us from implementing it now.

Manuel

Manuel Günther

unread,
Nov 15, 2016, 12:01:33 PM11/15/16
to bob-devel
Now, thinking about it, maybe we should add another (optional) step to process probes after enrollment training has completed.

I currently have a similar issue. I am using a database (IJB-A) where several images are combined into one probe (i.e., implementing the `FileSet` protocol). During projection, only single files are handled, while I need to combine all probe features of a FileSet to be able to score. Hence, at the moment I do this during scoring, i.e., many times for the same probe FileSet. It might be a good idea to have an additional step that allows doing this once, which we could also use for the ISV probe file processing (currently done during projection, see above). Yet, this would complicate the toolchain even more, is useful only in rare cases, and needs to be carefully designed.

Amir Mohammadi

unread,
Jan 12, 2017, 10:33:36 AM1/12/17
to bob-devel
Another problem with having separate verify_gmm, verify_isv, etc. is that these scripts usually lag behind verify.py and do not contain the latest bug fixes and features that go into verify.py.

Best,
Amir


Manuel Günther

unread,
Jan 12, 2017, 12:35:29 PM1/12/17
to bob-devel
I agree that we need to take care to port bug fixes from `verify.py` into `verify_gmm.py` and the like. So far, this has been done, and I don't see that those scripts lag behind. Could you please show me which bug fix of `verify.py` was not implemented in `verify_gmm.py`? Note that most of the fixes in `verify.py` are automatically included in `verify_gmm.py`, as the latter relies on functionality of the former.

The main reason why we created the `bob.bio.gmm` package in the first place was to have these specialized scripts. It took us (me and Elie, maybe also Laurent and Tiago) a while to get those scripts running and working together with the raw `verify.py` script. Please do not remove them. If you do not want to use the parallelized training, you can always use `verify.py` and get (at least almost) the same result.

Priyanka Das

unread,
Oct 31, 2019, 12:54:14 PM10/31/19
to bob-devel
Hello All! 

I am a new user of the bob.bio.base platform and of Linux. I am trying to run a speaker recognition experiment using my own database. I followed all the instructions for installing and setting up bob.bio.spear. When I run the following,

verify.py --database Data_voice.py --preprocessor energy-thr --extractor mfcc20 --algorithm gmm --sub-directory Voice_bob_analysis --force 

I get an error: 

RuntimeError: HDF5File - hdf5 constructor: C++ exception caught: 'cannot open file `/home/prianka/Data_Voice_data/All_6_Collections/Voice_data_All_6_coll/20160104002/20160104002.hdf5'' 

I am unable to understand the cause and find a solution to the issue. The files have both read and write access.

I would really appreciate any help regarding this.

Thank you!

Amir Mohammadi

unread,
Oct 31, 2019, 1:38:19 PM10/31/19
to bob-devel
Hi Priyanka,

Could you please let us know how that .hdf5 file is created?
It is not clear from what you have provided where the error is happening.
Please run verify.py with `-vvv` and provide the full log of the command.

Thank you,
Amir

Priyanka Das

unread,
Oct 31, 2019, 2:36:47 PM10/31/19
to bob-...@googlegroups.com
Hi Amir!

Thank you very much for your swift response. 

The .hdf5 files are created in my data location, i.e., where my .wav files are located. I am attaching an example of the files and location.

The response from verify.py -vvv:

(bob_py3) prianka@CA2011PRDas-MS-7623:~/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/spear/config/database$ verify.py -vvv
bob.bio.base@2019-10-31 14:32:20,892 -- ERROR: During the execution, an exception was raised: Please specify 'database' either on command line (via '--database') or in a configuration file
Traceback (most recent call last):
  File "/home/prianka/anaconda3/envs/bob_py3/bin/verify.py", line 6, in <module>
    sys.exit(bob.bio.base.script.verify.main())
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/script/verify.py", line 432, in main
    args = parse_arguments(command_line_parameters)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/script/verify.py", line 34, in parse_arguments
    skips = ['preprocessing', 'extractor-training', 'extraction', 'projector-training', 'projection', 'enroller-training', 'enrollment', 'score-computation', 'concatenation', 'calibration'])
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/tools/command_line.py", line 422, in initialize
    args = parse_config_file(parsers, args, args_dictionary, keywords, skips)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/tools/command_line.py", line 291, in parse_config_file
    parser.get_default(keyword))
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/tools/command_line.py", line 259, in take_from_config_or_command_line
    (keyword, keyword.replace("_","-")))
ValueError: Please specify 'database' either on command line (via '--database') or in a configuration file


Sincerely,
Priyanka Das
Research Assistant
Centre for Identification Technology Research(CITeR)
Electrical and Computer Science Department
Clarkson University
Potsdam,NY,USA



hdf5 location

Priyanka Das

unread,
Oct 31, 2019, 2:42:29 PM10/31/19
to bob-...@googlegroups.com
Hi Amir!

I think what you expected me to run was:
verify.py --database Child_voice.py --preprocessor energy-thr --extractor mfcc20 --algorithm ivector-plda --sub-directory Voice_bob_analysis --force --skip-projector-training -vvv

In that case the log is attached pdf.

Thank you!

Regards,
Priyanka


Sincerely,
Priyanka Das
Research Assistant
Centre for Identification Technology Research(CITeR)
Electrical and Computer Science Department
Clarkson University
Potsdam,NY,USA


log.pdf

Amir Mohammadi

unread,
Nov 1, 2019, 11:26:38 AM11/1/19
to bob-devel
Hi Priyanka,

- You don't need to put `Child_voice.py` inside bob's source code. It can be anywhere; you just give its path to verify.py.
- Your database's original directory has brackets (`[]`) around it; please remove them. I wonder how the code is working with that.

The log does not make sense: the extracted and preprocessed files end up in strange places.
Have you modified Bob's source code?
Please also share `Child_voice.py`.

Thanks,
Amir

Priyanka Das

unread,
Nov 1, 2019, 2:46:54 PM11/1/19
to bob-...@googlegroups.com
Hi Amir!

- You don't need to put `Child_voice.py` inside bob's source code. It can be anywhere. You just give its path to verify.py.
When I try to run Child_voice.py by providing its path, it does not even run. I am copying the log below for this case.

(bob_py3) prianka@CA2011PRDas-MS-7623:~$ verify.py /home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/spear/config/database/Child_voice.py
bob.bio.base@2019-11-01 14:24:45,666 -- WARNING: The variable 'Child_voice_wav_directory' in a configuration file is not known or not supported by this application; use a '_' prefix to the variable name (e.g., '_Child_voice_wav_directory') to suppress this warning
/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/scipy/io/wavfile.py:273: WavFileWarning: Chunk (non-data) not understood, skipping it.
  WavFileWarning)
bob.bio.spear@2019-11-01 14:24:45,753 -- INFO: After thresholded Energy-based VAD there are 2719 frames remaining over 2719
bob.bio.base@2019-11-01 14:24:45,776 -- ERROR: During the execution, an exception was raised: need at least one array to concatenate

Traceback (most recent call last):
  File "/home/prianka/anaconda3/envs/bob_py3/bin/verify.py", line 6, in <module>
    sys.exit(bob.bio.base.script.verify.main())
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/script/verify.py", line 435, in main
    verify(args, command_line_parameters)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/script/verify.py", line 415, in verify
    if not execute(args):
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/script/verify.py", line 290, in execute
    force = args.force)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/base/tools/algorithm.py", line 56, in train_projector
    algorithm.train_projector(train_features, fs.projector_file)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/bob/bio/gmm/algorithm/GMM.py", line 146, in train_projector
    array = numpy.vstack(train_features)
  File "/home/prianka/anaconda3/envs/bob_py3/lib/python3.6/site-packages/numpy/core/shape_base.py", line 237, in vstack
    return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
ValueError: need at least one array to concatenate
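For reference, this last ValueError can be reproduced in isolation: it is what numpy raises whenever `vstack` receives an empty sequence, which suggests that no training features were collected at all (e.g., the database query returned no training files, an assumption, not a confirmed diagnosis):

```python
import numpy

# numpy.vstack requires at least one array; an empty list of training
# features therefore fails exactly like the traceback above.
try:
    numpy.vstack([])
except ValueError as e:
    print(e)  # need at least one array to concatenate
```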

- Your database's original directory has brackets around it `[]`, please remove that. I wonder how the code is working with that.
I made the changes.

 
Have you modified the Bob's source code? 
No, I only followed instructions from the links below. I do not have enough knowledge to make changes to the source code. 

PFA the Child_voice.py file.

Thank you!

Regards,
Priyanka

Sincerely,
Priyanka Das
Research Assistant
Centre for Identification Technology Research(CITeR)
Electrical and Computer Science Department
Clarkson University
Potsdam,NY,USA


Child_voice.py

Amir Mohammadi

unread,
Nov 4, 2019, 12:56:56 PM11/4/19
to bob-devel
Hi Priyanka,

I cannot understand where the problem is in your setup.
I will let others take a look at your problem and see if they can help you.

Best regards,
Amir

Priyanka Das

unread,
Nov 8, 2019, 3:30:34 PM11/8/19
to bob-...@googlegroups.com
Hi Amir!

I am still stuck with the same problem. Do you have any suggestions that I can do to check/solve the issue?

Thanks!

Regards,
Priyanka 

Sincerely,
Priyanka Das
Research Assistant
Centre for Identification Technology Research(CITeR)
Electrical and Computer Science Department
Clarkson University
Potsdam,NY,USA

