I thought it was a memory problem.
Unfortunately, the only PC that has an original English 64-bit Windows
installed has only 8 GB of RAM. Moreover, we cannot use the 64-bit
version of Java because it interferes with other software that we
need to keep running (this PC is our Proteinscape server and it handles
data processing).
We also tried the Keep DB option, but the result was the same: Abacus
crashed.
Therefore, we tried a slightly different approach: using the combined
protein XML file, our colleague extracted all identified sequences
from the database we use for identifications (the NCBInr database
mentioned earlier) and saved them as FASTA. We then used this reduced
database for Abacus, and the analysis ran through. We were also able
to get the results for Qspec this way; the only limitation concerned
the calculation of NSAF, which Abacus could not do. It reported the
following error message:
2011-09-09T10:12:15.593+0100 SEVERE null
java.sql.SQLException: user lacks privilege or object not found: NAN
    at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
    at org.hsqldb.jdbc.JDBCStatement.fetchResult(Unknown Source)
    at org.hsqldb.jdbc.JDBCStatement.executeUpdate(Unknown Source)
    at abacus.hyperSQLObject.getNSAF_values_prot(hyperSQLObject.java:2738)
    at abacus.abacusUI.workThread.run(abacusUI.java:2626)
Caused by: org.hsqldb.HsqlException: user lacks privilege or object not found: NAN
    at org.hsqldb.error.Error.error(Unknown Source)
    at org.hsqldb.error.Error.error(Unknown Source)
    at org.hsqldb.ExpressionColumn.checkColumnsResolved(Unknown Source)
    at org.hsqldb.ParserDML.resolveUpdateExpressions(Unknown Source)
    at org.hsqldb.ParserDML.compileUpdateStatement(Unknown Source)
    at org.hsqldb.ParserCommand.compilePart(Unknown Source)
    at org.hsqldb.ParserCommand.compileStatements(Unknown Source)
    at org.hsqldb.Session.executeDirectStatement(Unknown Source)
    at org.hsqldb.Session.execute(Unknown Source)
    ... 4 more
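For context, the NSAF value that getNSAF_values_prot computes is, by its
standard published definition, the length-normalized spectral count of a
protein divided by the sum of those values over all proteins. A minimal
Python sketch of that formula (our own illustration, not Abacus code):

```python
def nsaf(spectral_counts, lengths):
    """NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j).

    spectral_counts, lengths: parallel per-protein lists.
    """
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    # If a protein's length is missing/zero, or total is zero, this
    # division produces NaN or raises -- our guess (an assumption) is
    # that such a NaN ends up as a bare NAN token in the SQL UPDATE,
    # which HSQLDB then rejects as "object not found: NAN".
    return [s / total for s in saf]
```

We wonder whether our reduced database could trigger such a case.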
So, could you tell us what the problem is this time? Do you think that
the approach we used (generating an artificial database from the
combined protein XML) is O.K.? If not, why is it necessary to use the
whole database originally used for the protein identifications?
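To make the approach concrete, the extraction step was roughly the
following (a simplified sketch of what our colleague's script did; the
protein_name attribute and the header format are assumptions about our
files, not a general recipe):

```python
import re

def extract_protein_names(protxml_path):
    """Collect protein accessions from a combined protXML file.

    Assumes accessions appear in protein_name="..." attributes.
    """
    names = set()
    with open(protxml_path) as f:
        for line in f:
            names.update(re.findall(r'protein_name="([^"]+)"', line))
    return names

def write_reduced_fasta(names, full_fasta, out_fasta):
    """Copy only FASTA records whose first header token was identified."""
    keep = False
    with open(full_fasta) as src, open(out_fasta, "w") as dst:
        for line in src:
            if line.startswith(">"):
                accession = line[1:].split()[0]
                keep = accession in names
            if keep:
                dst.write(line)
```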
And what databases and database sizes work best with Abacus? I am
asking because we usually use quite large databases for protein
identifications (Swiss-Prot, NCBInr or Human IP) and I have never seen
a FASTA database smaller than 1 GB. So, if you could give us some
recommendation, it would be really very helpful for us.
Ivo