Memory requirements for reading a 400 MB OP2; options to avoid memory faulting


Eric Roulo

Dec 2, 2012, 1:06:20 PM
to pynast...@googlegroups.com
Everyone:

I'm trying to read a production .op2 file that is 362 MB in size, and it crashes after consuming about 2 GB of memory on my 8 GB laptop running python(x,y). It has four subcases and a lot of CQUAD4 stress output.

I'm going to install a 64-bit Python and see if that solves the problem, but I'm afraid I'll run out of memory on my laptop when I use larger output files. I have a couple of questions:

1) Is there a recommended 64-bit version of Python to install (is there more than one)?
2) What is the pyNastran strategy for dealing with memory-restricted computers? Can we swap to disk or use a database file?
3) Is there a known relationship between output file size and memory requirements, for both the OP2 and the BDF?

Thanks.

-ejr

-- 
Eric J. Roulo
Owner, Roulo Consulting, Inc.
Engineering Analysis, Design, & Training


Steve Doyle

Dec 2, 2012, 2:11:21 PM
to pynast...@googlegroups.com
1.  Enthought's Python distribution (http://www.enthought.com/products/epd.php) is very well done.  They check all the dependencies, but it costs money.  Beyond that, I'd use python.org.

2.  The easiest way to handle your file is to read the subcases one at a time and clear them out afterwards.  Use the op2.setSubcases(iSubcases=None) method, where iSubcases is a list of the subcases to read (e.g. iSubcases = [100]).  I've pasted a rough sketch of the subcase-at-a-time loop after this list.  Beyond that, I'd reduce the dt on the output (assuming it's transient) or try the op2.setTransientTimes(times) method; I haven't tested it in a while, so it may not work.  You could also modify the code to reduce the amount of data that is saved (e.g. save only von Mises stress).  There is no cache/database support.


3.  No, but my guess is that a 280 MB OP2 will read properly in 2 GB (assuming no other large objects in memory in your external program), and I've never seen a BDF take up more than a few hundred MB of memory; then again, I don't go past 120,000-node models.
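
Here's the rough sketch of the subcase-at-a-time loop mentioned in point 2.  It's untested; the OP2 constructor and readOP2() names are from memory and may differ in your pyNastran version, and 'model.op2' plus the subcase IDs are placeholders for your file:

# Sketch: read one subcase at a time so only one subcase's results
# live in memory at once.  Assumes the old-style API:
# OP2(op2_filename), setSubcases(), readOP2().
from pyNastran.op2.op2 import OP2

op2_filename = 'model.op2'      # placeholder path to your file
subcase_ids = [1, 2, 3, 4]      # the four subcases in your file

for isubcase in subcase_ids:
    op2 = OP2(op2_filename)
    op2.setSubcases([isubcase])  # restrict reading to this subcase
    op2.readOP2()

    # ...post-process the results you need here
    # (e.g. the CQUAD4 stresses for this subcase)...

    del op2                      # drop the results before the next pass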

Steve Doyle