PeptideShaker ran out of memory!


Pino

Jan 23, 2014, 1:00:30 PM
to peptide...@googlegroups.com
Hello,
I am trying to analyze some data and I keep getting the message "PeptideShaker ran out of memory!". However, I have monitored the memory used by PeptideShaker and it never reaches the limit (12 GB). Moreover, when I reanalyze the same data I processed a few months ago, I get the same error message. I attach the log file in case it helps.

Is there anything I can do to overcome this problem?  Are there any settings I should change?  Thanks in advance for your help.

Best Regards,

Pino
log.txt

Marc Vaudel

Jan 23, 2014, 2:25:43 PM
to peptide...@googlegroups.com
Hi Pino and sorry that the new version does not work as expected.

Can you verify that you indeed gave 12 GB of memory to the software under "Edit" → "Java Options"? It seems that PeptideShaker gets stuck while processing your database, which should not require that much memory. What database is it? Can you make it available to us?
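For reference, the GUI "Java Options" setting corresponds to the standard JVM `-Xmx` flag. A command-line sketch (the jar file name is a placeholder for your installed version):

```shell
# Launch PeptideShaker with a 12 GB maximum Java heap.
# A heap this large requires a 64-bit JVM; PeptideShaker-X.Y.Z.jar
# is a placeholder for the jar shipped with your installation.
java -Xmx12G -jar PeptideShaker-X.Y.Z.jar
```

If the JVM silently falls back to a smaller heap (e.g. on a 32-bit Java), the GUI setting may claim 12 GB while the process actually gets much less.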

Thank you, hopefully we will get everything sorted out very fast :)

Marc


2014/1/23 Pino <mspi...@gmail.com>

--
You received this message because you are subscribed to the Google Groups "PeptideShaker" group.
To unsubscribe from this group and stop receiving emails from it, send an email to peptide-shake...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

"Manuel M. Sánchez del Pino"

Jan 24, 2014, 3:30:39 AM
to peptide...@googlegroups.com
Hi Marc,
yes, the memory given to the software is 12 GB. I have used the same database in the past without any problem. I tried to attach the database to a previous mail, but it was rejected because of its size (8 MB). How can I send you the database? In the meantime, I will try a different one, just in case.

Cheers,

Pino
Manuel Sánchez del Pino

Depto de Bioquímica y Biología Molecular
Facultad de Ciencias Biológicas
Universidad de Valencia
Dr. Moliner, 50
46100 Burjassot (Valencia)
Tel. 34-96-3543464
Fax 34-96-3544635

Marc Vaudel

Jan 24, 2014, 4:29:35 AM
to peptide...@googlegroups.com
Hi Again!

In past versions, PeptideShaker used the proteins inferred by the search engines. However, this led to many issues because the search engines disagreed on which protein a peptide could come from. Now we remap every peptide on the fly, and this leads to much more reliable results: the peptide-to-protein mapping is more reproducible and the number of one-hit wonders is reduced. To do so, the software needs to index your database. This should not take that much memory, though: when testing, I could index the whole of UniProt (20 GB) with 8 GB of RAM on my laptop.
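The remapping idea can be sketched in a few lines: each peptide sequence is matched against every protein sequence in the database, independently of which protein the search engines reported. This is only a toy illustration of the concept; PeptideShaker's real implementation builds an index over the FASTA database rather than doing a linear substring scan, and the accessions and sequences below are made up.

```python
def map_peptides(proteins, peptides):
    """Return {peptide: sorted accessions whose sequence contains it}.

    proteins: dict mapping accession -> amino-acid sequence.
    peptides: iterable of peptide sequences to remap.
    """
    mapping = {}
    for pep in peptides:
        # A peptide maps to every protein whose sequence contains it,
        # regardless of which protein a search engine assigned it to.
        mapping[pep] = sorted(acc for acc, seq in proteins.items() if pep in seq)
    return mapping

# Toy database: P1 and P3 share an N-terminal stretch, so the first
# peptide is degenerate (maps to both), which is exactly the situation
# where search engines disagree on the protein.
proteins = {
    "P1": "MKWVTFISLLLLFSSAYSRGV",
    "P2": "MDEKTTGWRGGHVVEGLAGEL",
    "P3": "MKWVTFISLLAAFSSAYSRGV",
}
print(map_peptides(proteins, ["MKWVTFISLL", "GGHVVEGL"]))
# → {'MKWVTFISLL': ['P1', 'P3'], 'GGHVVEGL': ['P2']}
```

Remapping every peptide this way makes the protein inference step deterministic with respect to the database, instead of depending on each search engine's own (and mutually inconsistent) protein assignment.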
If you send your database (zipped) to my personal address (mva...@gmail.com), I will take a look at it. If it is too big, can you put it on Google Drive or Dropbox?

Thank you!

Marc


2014/1/24 "Manuel M. Sánchez del Pino" <mspi...@gmail.com>

Nazrath Nawaz

Feb 12, 2018, 11:35:41 AM
to PeptideShaker
Hey Marc,

I am replying here because I have the same issue when analysing the PRIDE dataset PXD000654 (against the human proteome).

I am using the 64-bit Java version and reduced the memory allocation to less than 3/4 of my PC's RAM; however, I still keep running out of memory.

I also see this when using PeptideShaker on a cluster with a lot of memory, on the same dataset. Is it a problem with the dataset itself? (It is almost 10 GB in size.)

If it helps, the "ran out of memory" message always appears while parsing the t.xml identification file.
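One common reason parsing a very large XML identification file exhausts memory is loading it as a single in-memory tree. As an illustration only (this is not PeptideShaker's actual parser, and the element names are made up), a streaming parser keeps memory roughly flat by discarding each element once it has been processed:

```python
import io
import xml.etree.ElementTree as ET

def count_hits(xml_stream):
    """Count <hit> elements while streaming, never holding the full tree."""
    count = 0
    for event, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "hit":
            count += 1
        elem.clear()  # drop the processed element's contents to free memory
    return count

# Tiny stand-in for a multi-gigabyte identification file.
sample = io.BytesIO(b"<results><hit/><hit/><hit/></results>")
print(count_hits(sample))  # → 3
```

With a DOM-style parse, peak memory scales with the file size; with `iterparse` plus `clear()`, it scales with the size of a single record, which is why streaming is the usual fix for files in the 10 GB range.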

Appreciate your time taken to reply.

Naz

Harald Barsnes

Feb 13, 2018, 9:03:32 AM
to PeptideShaker

Hi Naz,

The PRIDE dataset you are trying to reanalyze is relatively large, but it should be possible to analyze, at least on your cluster setup.

We usually run our bigger searches on a machine with 64 GB of RAM. But I'm afraid there is no guarantee that you will not experience the same issues even with more memory.

That is why we are in the process of refactoring the PeptideShaker back end to handle memory better and to speed up the overall processing. This has, however, been a long process, and I cannot at this time give you a time frame for when the new version will be available.

Best regards,
Harald

Björn Grüning

Feb 13, 2018, 9:19:24 AM
to peptide...@googlegroups.com, Harald Barsnes

Hi Naz,

if you don't have enough memory yourself, you could use a service like https://usegalaxy.eu, which has PeptideShaker and SearchGUI installed.

Ciao,
Bjoern


