Questions about UBM training in SPEAR


Qiongqiong Wang

Aug 31, 2014, 11:57:25 PM
to bob-...@googlegroups.com
Hi, 
 

I have two questions about UBM training in SPEAR.

 

The first question is that training the UBM took an extremely long time. When I used 600 hours of speech data to train a 256-Gaussian UBM with SPEAR, it took about five days. Is this normal for everyone, or am I making a mistake in how I use it?

 

The second question is whether I can use a UBM trained with HTK in SPEAR. I think training the UBM on the same data with HTK would take less than five days. I know that in the feature extraction step SPEAR provides an HTK feature reader, so I would like to ask whether there is a similar function for the UBM training step, such as an HTK UBM reader.

 

Looking forward to your reply! Thank you!

 

 

Regards,

Qiongqiong

Elie Khoury

Sep 1, 2014, 2:51:22 AM
to bob-...@googlegroups.com
Hello,

Five days to train a UBM on 600 hours of speech data looks normal if you are running the script on your local machine with a single node. One thing you could do is use the parallel implementation scripts (for example, bin/para_ubm_spkverif_gmm.py), which divide the time by roughly the number of nodes you have.

For now we don't have a Python wrapper to read a GMM trained by HTK. This feature could be considered in the future.
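
In the meantime, a possible workaround (not part of SPEAR) is to parse the HTK model file yourself and copy its parameters into a bob GMM machine. Here is a minimal sketch, assuming you have already extracted the weights, means and variances from the HTK MMF into numpy arrays (the MMF parsing itself is not shown); the bob.learn.em / bob.io.base names follow the bob 2.x API and may differ in your installation:

import numpy
import bob.io.base
import bob.learn.em

# hypothetical pre-parsed HTK parameters for a 256-Gaussian, 60-dimensional UBM:
# weights (256,), means (256, 60), variances (256, 60)
weights = numpy.load('htk_weights.npy')
means = numpy.load('htk_means.npy')
variances = numpy.load('htk_variances.npy')

# build an equivalent bob GMM machine and copy the parameters over
ubm = bob.learn.em.GMMMachine(means.shape[0], means.shape[1])
ubm.weights = weights
ubm.means = means
ubm.variances = variances

# save it to HDF5; the exact file name and location that SPEAR expects
# depend on your experiment configuration
ubm.save(bob.io.base.HDF5File('ubm.hdf5', 'w'))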

Best regards,
Elie

Qiongqiong Wang

Sep 4, 2014, 4:08:10 AM
to bob-...@googlegroups.com
 Thanks for the prompt reply! 

 

I have actually tried to use the parallel GMM implementation, "$ bin/para_ubm_spkverif_gmm.py", but I ran into some problems. I would appreciate it if you could help me with this issue as well.

 

I tried simply replacing “spkverif_gmm.py” with “para_ubm_spkverif_gmm.py” and kept the options the same, as in the following command. It ran, but the processing time was almost the same as with “spkverif_gmm.py”, so I think something must be wrong and the processing was not actually done in parallel.

     $ ./bin/para_ubm_spkverif_gmm.py \

     -d config/database/voxforge.py \

     -p config/preprocessing/energy.py \

     -f config/features/mfcc_60.py \

     -t config/tools/ubm_gmm/ubm_gmm_256G.py \

     -b ubm_gmm -z \

     --user-directory PATH/TO/USER/DIR --temp-directory PATH/TO/TEMP/DIR

 

By comparing the usage of the two commands, spkverif_gmm.py and para_ubm_spkverif_gmm.py, I found the following options that exist only in “para_ubm_spkverif_gmm.py”:

      [-l LIMIT_TRAINING_EXAMPLES]

     [-K KMEANS_TRAINING_ITERATIONS]

     [-k KMEANS_START_ITERATION]

     [-M GMM_TRAINING_ITERATIONS]

     [-m GMM_START_ITERATION] [-n] [-C]

     [--skip-normalization] [--skip-k-means]

     [--skip-gmm]

 

I think they might be necessary for the parallel training, but I am not clear on how to use them. Could you give a brief explanation and an example? Sorry for taking your time! Thank you!

 

Best Regards,

Qiongqiong

Elie Khoury

Sep 4, 2014, 8:53:43 AM
to bob-...@googlegroups.com
Hello,

Please check the details of the different options using the help:

$ bin/para_ubm_spkverif_gmm.py --help

You may want to check their default values and modify them if needed.

In addition, I updated the config files and the instructions on how to run the parallel implementation (for Voxforge).

In summary, on a local machine:
$ ./bin/para_ubm_spkverif_gmm.py -d config/database/voxforge.py -p config/preprocessing/energy.py \
 -f config/features/mfcc_60.py -t config/tools/ubm_gmm/ubm_gmm_256G.py -b ubm_gmm -z \
 --user-directory PATH/TO/USER/DIR --temp-directory PATH/TO/TEMP/DIR -g config/grid/para_training_local.py
$ bin/jman --local -vv run-scheduler --parallel 6
In this example, the number of nodes is 6
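
If the parallel run seems to hang or do nothing, you can also inspect the submitted jobs and their log files with jman, e.g. (the sub-command names come from gridtk; see bin/jman --help for the exact options in your version):

$ bin/jman --local list
$ bin/jman --local report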

On SGE, you only need to type this command:
$ ./bin/para_ubm_spkverif_gmm.py -d config/database/voxforge.py -p config/preprocessing/energy.py \
 -f config/features/mfcc_60.py -t config/tools/ubm_gmm/ubm_gmm_256G.py -b ubm_gmm -z \
 --user-directory PATH/TO/USER/DIR --temp-directory PATH/TO/TEMP/DIR -g config/grid/para_training_sge.py

On the Voxforge database, which is a 'toy' database, you probably won't see the advantage of the parallel implementation, but it will be much clearer when you run it on large databases such as the NIST SREs.

Best regards,
Elie


-- 
-------------------
Dr. Elie Khoury
Postdoctoral Researcher
Biometric Person Recognition Group
IDIAP Research Institute (Switzerland)
Tel : +41 27 721 77 23

Márton Makrai

Aug 25, 2016, 3:47:14 AM
to bob-devel, elie....@idiap.ch
Dear Elie and All,

I would also like to use SPEAR on a single machine, in parallel, on a custom database (created from the IARPA Babel language packs, for which I wrote a config file: https://github.com/juditacs/hunspeech/blob/master/emLid/babel.py). Could you please tell me where I can find the config/grid/para_training_local.py file?

Thanks
Márton Makrai

Manuel Günther

Aug 25, 2016, 11:14:48 AM
to bob-devel
Hi Marton,

If you are using ``bob.bio.spear`` (which I would recommend), you can run in parallel on the local machine with the ``--parallel N`` option (where N is the number of parallel processes), which automatically creates a local grid queue internally.
Otherwise you would need to create your own local grid configuration file, similar to what you get in config/grid/para_training_local.py.
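
For reference, such a local "grid" is essentially just a ``bob.bio.base.grid.Grid`` object; a minimal sketch of what a custom configuration file could contain (the keyword names below follow the bob.bio.base 2.x API as I recall it, so please double-check them against your installed version):

# my_local_grid.py -- hypothetical local grid configuration file
import bob.bio.base

# run the jobs as (up to) 4 parallel processes on the local machine
grid = bob.bio.base.grid.Grid(
  grid_type = 'local',
  number_of_parallel_processes = 4
)

You would then pass it to the experiment with ``--grid my_local_grid.py --run-local-scheduler``, which is essentially what ``--parallel 4`` sets up for you automatically.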

BTW: next time, please do not dig out two-year-old threads; open a new one instead, if possible.

Cheers
Manuel

Márton Makrai

Aug 26, 2016, 4:24:15 AM
to bob-devel

Dear Manuel,

Yes, I use bob.bio.spear. I tried the --parallel N option and got the error below.
I read somewhere that this may happen because the program tries to open a directory for writing where I don't have permission, but I don't know which directory that might be, since I do have write access to the "sub-directory".
Could you tell me what the problem is?
And what is config/grid/para_training_local.py? Where can I find it?

Thanks
Márton

(venv)makrai@ron:/mnt/permanent/Language$ time python -m bob.bio.base.script.verify -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm -vvs /mnt/store/makrai/work/emLid/spear_babel/ --grid local-p4 --run-local-scheduler --nice 10
bob.bio.base@2016-08-26 10:10:34,358 -- ERROR: During the execution, an exception was raised: (sqlite3.OperationalError) unable to open database file
Traceback (most recent call last):
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
    return fn()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 713, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 1151, in _do_get
    return self._create_connection()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 607, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 385, in connect
    return self.dbapi.connect(*cargs, **cparams)
sqlite3.OperationalError: unable to open database file

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.4/runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.4/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py", line 453, in <module>
    main()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py", line 446, in main
    verify(args, command_line_parameters)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py", line 399, in verify
    retval = add_jobs(args, submitter)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py", line 57, in add_jobs
    **args.grid.preprocessing_queue)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/tools/grid.py", line 111, in submit
    **kwargs
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/gridtk/local.py", line 48, in submit
    self.lock()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/gridtk/manager.py", line 61, in lock
    self._create()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/gridtk/manager.py", line 86, in _create
    Base.metadata.create_all(self._engine)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/sql/schema.py", line 3695, in create_all
    tables=tables)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1855, in _run_visitor
    with self._optional_conn_ctx_manager(connection) as conn:
  File "/usr/lib/python3.4/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1848, in _optional_conn_ctx_manager
    with self.contextual_connect() as conn:
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 2039, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 2078, in _wrap_pool_connect
    e, dialect, self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception_noconnection
    exc_info
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 185, in reraise
    raise value.with_traceback(tb)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 2074, in _wrap_pool_connect
    return fn()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 713, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 1151, in _do_get
    return self._create_connection()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/pool.py", line 607, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/home/makrai/tool/python/venv/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 385, in connect
    return self.dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file

Tiago Freitas Pereira

Aug 26, 2016, 4:36:14 AM
to bob-...@googlegroups.com
Hi Márton,

Usually the sqlite file is written in the same directory as the bob.bio.spear project.
Just in case, try specifying a location where you have write permission, using the option `--gridtk-database-file` or `-G`.
Have a look at ./bin/verify.py --help for more information.
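
For example (the path below is only a placeholder; any directory where you have write permission will do):

$ ./bin/verify.py -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm \
  -vvs /mnt/store/makrai/work/emLid/spear_babel/ --parallel 4 \
  --gridtk-database-file /path/where/you/can/write/jobs.sql3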

Cheers

Tiago


Márton Makrai

Aug 26, 2016, 6:34:22 AM
to bob-devel
Thank you very much. From the help I learned that the default location of the gridtk database file is the working directory (where the python command is run), so that issue is solved.
Unfortunately, now I get another error.
I tried
`python -m bob.bio.base.script.verify -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm -vvs /mnt/store/makrai/work/emLid/spear_babel/ --nice 10 --parallel 4`
and 
`python -m bob.bio.base.script.verify -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm -vvs /mnt/store/makrai/work/emLid/spear_babel/ --grid local-p4 --run-local-scheduler --nice 10 --parallel 4`
and got a similar error in both cases:
```bash
(venv)makrai@ron:/mnt/store/makrai/work/emLid/spear_babel$ python -m bob.bio.base.script.verify -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm -vvs /mnt/store/makrai/work/emLid/spear_babel/ --nice 10 --parallel 4
gridtk@2016-08-26 12:32:04,245 -- INFO: Added job '<Job: 15 (15) [1-4:1] - 'preprocess'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'preprocess' ' to the database
bob.bio.base@2016-08-26 12:32:04,263 -- INFO: submitted: job 'preprocess' with id '15' and dependencies '[]'
gridtk@2016-08-26 12:32:04,291 -- INFO: Added job '<Job: 16 (16) [1-4:1] - 'extract'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'extract' ' to the database
bob.bio.base@2016-08-26 12:32:04,309 -- INFO: submitted: job 'extract' with id '16' and dependencies '[15]'
gridtk@2016-08-26 12:32:04,334 -- INFO: Added job '<Job: 17 (17)  - 'train-p'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'train-projector' ' to the database
bob.bio.base@2016-08-26 12:32:04,351 -- INFO: submitted: job 'train-p' with id '17' and dependencies '[15, 16]'
gridtk@2016-08-26 12:32:04,383 -- INFO: Added job '<Job: 18 (18) [1-4:1] - 'project'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'project' ' to the database
bob.bio.base@2016-08-26 12:32:04,402 -- INFO: submitted: job 'project' with id '18' and dependencies '[15, 16, 17]'
gridtk@2016-08-26 12:32:04,637 -- INFO: Added job '<Job: 19 (19) [1-4:1] - 'enr-N-dev'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'enroll' --group 'dev' --model-type 'N' ' to the database
bob.bio.base@2016-08-26 12:32:04,668 -- INFO: submitted: job 'enr-N-dev' with id '19' and dependencies '[15, 16, 17, 18]'
gridtk@2016-08-26 12:32:04,707 -- INFO: Added job '<Job: 20 (20) [1-4:1] - 'score-A-dev'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'compute-scores' --group 'dev' --score-type 'A' ' to the database
bob.bio.base@2016-08-26 12:32:04,726 -- INFO: submitted: job 'score-A-dev' with id '20' and dependencies '[15, 16, 17, 18, 19]'
gridtk@2016-08-26 12:32:04,953 -- INFO: Added job '<Job: 21 (21)  - 'concat-dev'> | local : submitted -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'concatenate' --group 'dev' ' to the database
bob.bio.base@2016-08-26 12:32:04,971 -- INFO: submitted: job 'concat-dev' with id '21' and dependencies '[20]'
bob.bio.base@2016-08-26 12:32:04,974 -- INFO: Starting jman deamon to run the jobs on the local machine.
gridtk@2016-08-26 12:32:05,021 -- INFO: Starting execution of Job 'preprocess' (15 (1/4))
gridtk@2016-08-26 12:32:05,239 -- INFO: Starting execution of Job 'preprocess' (15 (2/4))
gridtk@2016-08-26 12:32:05,249 -- INFO: Starting execution of Job 'preprocess' (15 (3/4))
gridtk@2016-08-26 12:32:05,258 -- INFO: Starting execution of Job 'preprocess' (15 (4/4))
gridtk@2016-08-26 12:32:06,295 -- INFO: Job 'preprocess' (15 (4)) finished execution with result 'failure (69)'
gridtk@2016-08-26 12:32:06,312 -- INFO: Job 'preprocess' (15 (3)) finished execution with result 'failure (69)'
gridtk@2016-08-26 12:32:06,532 -- INFO: Job 'preprocess' (15 (2)) finished execution with result 'failure (69)'
gridtk@2016-08-26 12:32:06,549 -- INFO: Job 'preprocess' (15 (1)) finished execution with result 'failure (69)'
gridtk@2016-08-26 12:32:06,587 -- INFO: Stopping task scheduler since there are no more jobs running.
bob.bio.base@2016-08-26 12:32:06,601 -- ERROR: The jobs with the following IDS did not finish successfully: '15'.
<Job: 15 (15) [1-4:1] - 'preprocess'> | local : failure (69) -- '/home/makrai/tool/python/venv/lib/python3.4/site-packages/bob/bio/base/script/verify.py' -d '/home/makrai/repo/hunspeech/emLid/babel.py' -p 'energy-2gauss' -e 'mfcc-60' -a 'gmm' -vvs '/mnt/store/makrai/work/emLid/spear_babel/' --nice '10' --parallel '4' --sub-task 'preprocess'
Array Job 1 (ron) :
gridtk@2016-08-26 12:32:06,618 -- INFO: Contents of output file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.o15.1'
gridtk@2016-08-26 12:32:06,618 -- INFO: Contents of error file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.e15.1'
2016-08-26 12:32:05,503 ERROR gridtk: The job with id '15' could not be executed: [Errno 13] Permission denied
----------------------------------------
Array Job 2 (ron) :
gridtk@2016-08-26 12:32:06,619 -- INFO: Contents of output file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.o15.2'
gridtk@2016-08-26 12:32:06,619 -- INFO: Contents of error file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.e15.2'
2016-08-26 12:32:05,691 ERROR gridtk: The job with id '15' could not be executed: [Errno 13] Permission denied
----------------------------------------
Array Job 3 (ron) :
gridtk@2016-08-26 12:32:06,620 -- INFO: Contents of output file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.o15.3'
gridtk@2016-08-26 12:32:06,620 -- INFO: Contents of error file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.e15.3'
2016-08-26 12:32:05,538 ERROR gridtk: The job with id '15' could not be executed: [Errno 13] Permission denied
----------------------------------------
Array Job 4 (ron) :
gridtk@2016-08-26 12:32:06,621 -- INFO: Contents of output file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.o15.4'
gridtk@2016-08-26 12:32:06,621 -- INFO: Contents of error file: '/mnt/store/makrai/work/emLid/spear_babel/gridtk_logs/preprocess/preprocess.e15.4'
2016-08-26 12:32:05,717 ERROR gridtk: The job with id '15' could not be executed: [Errno 13] Permission denied
----------------------------------------
------------------------------------------------------------
```
Could you please help me figure out what may be causing this error?

Thanks
Márton


Márton Makrai

Aug 26, 2016, 6:38:21 AM
to bob-devel

Sorry, I re-posted the error above with proper formatting.


Manuel Günther

Aug 29, 2016, 12:55:44 PM
to bob-devel
Dear Marton,

sorry for the late reply.

I think your problem has a simple solution: you are calling the script in the wrong way. With ``buildout``, we create a script file inside the **bin** directory, so your command line should be:
./bin/verify.py -d ~/repo/hunspeech/emLid/babel.py -p energy-2gauss -e mfcc-60 -a gmm -vvs /mnt/store/makrai/work/emLid/spear_babel/ --nice 10 --parallel 4

I thought that this was clearly documented here: http://pythonhosted.org/bob.bio.base/experiments.html#running-experiments-part-i
but apparently we need to be even more precise than that.

Márton Makrai

Aug 30, 2016, 4:42:47 AM
to bob-devel
Thank you very much, now it works (with the verify.py in my virtual environment).
Márton