
What does AceDB "Error status = 139" mean?


Jing Yu

Feb 13, 2006, 12:30:31 PM
to ac...@magpie.bio.indiana.edu, jin...@algodon.tamu.edu
Hello,

I got the error message "Error status = 139" while loading data into
acedb. Does anyone know what it means?

- The error never appears when I load each dataset independently into an
empty database, but it does appear when I load them together, or add
more data to a database that already holds a large part of the data.

- I've tried ace_4.9f, ace_4.9p, etc.
- The systems I've tried so far are:
1. SunOS baumwolle 5.10 Generic sun4u sparc SUNW,Ultra-4
2. SunOS algodon 5.8 Generic_108528-21 sun4u sparc SUNW,Ultra-60
3. Linux ceres 2.2.16-22smp

They all gave me the same error message. And my dataset is not very
large yet, much smaller than that of graingenes (8 block*.wrm files of
mine compared to 69 block*.wrm files for graingenes, with each
block*.wrm in /database being the same size).

Can anyone help me on this? Thank you very much.
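
For reference: an exit status of 139 is usually not an acedb-specific
code. Shells report 128 + signal number when a child process is killed by
a signal, and 139 - 128 = 11 is SIGSEGV, i.e. the process crashed with a
segmentation fault. A quick way to check that convention from bash, using
a throwaway command rather than acedb:

=====================
bash -c 'kill -SEGV $$'   # subshell sends SIGSEGV to itself
echo $?                   # prints 139 = 128 + 11
kill -l 11                # prints SEGV: signal 11 is the segfault signal
=====================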


Jing


Jing Yu

Feb 14, 2006, 9:45:00 AM
to Nicolas Thierry-Mieg, ac...@magpie.bio.indiana.edu
> have you tried
> loading the data with tace instead of xace?

Yes. The error message came from tace. :(

Jing

On Tue, 14 Feb 2006, Nicolas Thierry-Mieg wrote:

> hello
>
> this is a long shot, but just in case it's gtk-related: have you tried
> loading the data with tace instead of xace?
>
> good luck
> nicolas

>
> --
> ------------------------------------------------------------
> Problems with my digital signature? visit:
> http://igc.services.cnrs.fr/Doc/General/trust.html
> --------------------
> Nicolas Thierry-Mieg
> Laboratoire LSR-IMAG, BP 53, 38041 Grenoble Cedex 9, France
> tel : (33/0)4-76-63-55-79, fax : (33/0)4-76-63-55-50
> ------------------------------------------------------------
>

Jing Yu

Feb 14, 2006, 11:21:49 AM
to Nicolas Thierry-Mieg, ac...@magpie.bio.indiana.edu
> alright, how about stack size limits etc...?
> if you have very large objects I think this could be a problem

That sounds like the right track! I do have very large objects, and that's
where the problem starts to appear ... I checked the ulimit on my server
(see below), but I'm not sure what it means. What is my current limit?
Could you give me a hint? By the way, I run tace under bash; should I try
another shell?

Thank you.

Jing

=====================
baumwolle:/usr/bin$ more ulimit
#!/bin/ksh -p
#
#ident "@(#)alias.sh 1.2 00/02/15 SMI"
#
# Copyright (c) 1995 by Sun Microsystems, Inc.
#
cmd=`basename $0`
$cmd "$@"
=====================
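
That /usr/bin/ulimit is just a small ksh wrapper that re-invokes the
shell's own ulimit builtin, so it doesn't tell you much by itself; the
limits that matter for tace are the ones reported by the bash builtin in
the shell you start it from. A minimal check, assuming tace is run from
that same bash session:

=====================
ulimit -a   # list all current limits; "stack size" is the -s line
ulimit -s   # show just the stack limit (in kilobytes, or "unlimited")
=====================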

On Tue, 14 Feb 2006, Nicolas Thierry-Mieg wrote:

> alright, how about stack size limits etc...?
> if you have very large objects I think this could be a problem
>
> (on bash, the relevant command is ulimit; on csh it's limit)

Jing Yu

Feb 14, 2006, 2:28:03 PM
to Nicolas Thierry-Mieg, ac...@magpie.bio.indiana.edu
Nicolas,

No errors appear after setting ulimit -s unlimited, and
everything looks fine. Thank you very much!!

Jing

On Tue, 14 Feb 2006, Nicolas Thierry-Mieg wrote:

>
> bash is fine
>
> ...
> ulimit -a
> to see your current limits
>
> then, e.g., to set the stack size to the maximum, type:
> ulimit -s unlimited
>
>
> this will only change the limits for your current shell, hence you
> should call tace from it
>
> when satisfied you can put the correct ulimit commands in your
> .bash_profile to make the changes affect every new shell
>
>
> try setting every value to unlimited, at least the memory-related ones
> (except core, which you can set to 0 since you probably don't read core
> dumps anyway)
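
A minimal sketch of what those .bash_profile lines might look like,
following the advice above; the exact set of limits worth raising is an
assumption here, not a tested acedb recommendation:

=====================
# ~/.bash_profile -- limits for shells that will run tace/xace
ulimit -s unlimited   # stack size (the limit at issue in this thread)
ulimit -d unlimited   # data segment size
ulimit -c 0           # no core dumps, as suggested above
=====================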
