Memory Issues


Luis Pinto

Jan 23, 2009, 11:06:49 AM
to AIMMS - The Modeling System
Hello All,

I am having memory issues due to a very large model and a very large data set.
I have addressed this with some techniques, such as the memory emphasis option in CPLEX, which helped but did not resolve the problem.
Besides this, I tried some of the better-documented AIMMS options, such as the Auto Rebuild function (very little improvement) and the Garbage Collector (it had issues with my system and therefore could not be tested).
Does anyone know of any other possible approaches, apart from simplifying the model or buying more RAM?
I have tried to find out whether AIMMS and/or CPLEX has any memory paging functionality, but so far no luck.

Cheers,

Luis

-- 
Luis Franco de Campos Pinto
Unisoma Matemática para a Produtividade S.A.
Tel: +55 (19) 3208 0006   Ext: 248
Skype: Luis.Unisoma
www.unisoma.com.br

ck

Jan 24, 2009, 7:48:14 AM
to AIMMS - The Modeling System
Hello Luis,

There are some alternatives to the options settings you've tried.
You may want to consider the following ones:

1) You've found your way to the memory management options.
You may also want to consider switching
'use multiple memory managers' to 'off'.
Normally the use of multiple memory managers helps to keep
memory from fragmenting, especially when doing what if
analysis or running batches of cases, but switching it off
might help in your case.
In addition, you may want to bypass a lot of internal
functionality of AIMMS memory management by setting
the option 'memory bin limit' to 0. This may, but need not,
significantly slow down your application, but you already
indicated that you were willing to pay such a penalty.
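For reference, those two settings could be set from within the model like this (a sketch only, assuming the usual convention that option names in an AIMMS option statement replace spaces with underscores):

```aimms
! Sketch; the option identifiers are assumed to follow the
! spaces-to-underscores convention used for AIMMS option names.
option use_multiple_memory_managers := 'off';
option memory_bin_limit := 0;
```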

2) Sometimes only one or a few identifiers are responsible for
taking up most of the memory occupied by an application.
In order to find out which identifiers are responsible for
most of the memory usage you can use the identifier
cardinalities tool (AIMMS menu - Tools - Diagnostic tools -
Identifier cardinalities). With a lot of memory usage this
tool will require some time to open as it will scan each and
every identifier for its memory usage (AIMMS doesn't cache
this information). Be sure to check both
'Show Predeclared/Hidden Identifiers' and
'Show Identifier Suffices' shown at the bottom of this tool.

3) When your application is organized as follows:
   a) Read in the data
   b) Analyze and verify the correctness of the data
   c) Determine minimal sets for the domains / index domains
      of the model parameters
   d) Compute the model parameters
   e) Generate and solve the mathematical program
You may want to consider using
CleanDependents
between steps c) and d).
In addition, it usually pays off to put some effort into
step c) in order to reduce memory usage later on.
Of course this alternative will require some work on your
part and may not simplify your model.
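A sketch of this phased setup (all procedure and identifier names here are invented for illustration):

```aimms
! Hypothetical sketch of steps a) through e); pr_ReadData,
! s_ActiveProducts, p_Demand etc. are made-up names.
pr_ReadData;                ! a) read in the data
pr_VerifyData;              ! b) analyze and verify the data
! c) restrict a root set to the elements actually used, then let
!    CleanDependents drop data of identifiers declared over it
s_ActiveProducts := { p | p_Demand(p) > 0 };
CleanDependents;
pr_ComputeModelParameters;  ! d) compute the model parameters
solve MathProgram;          ! e) generate and solve
```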

4) Inspect a constraint listing with a limit on the number
of rows listed per symbolic constraint
(option 'Number of rows per constraint in listing') in
category Solvers general - Standard reports - Constraints.
When there are a lot of constraints of the form
SingleVariable = SomeValue
you may want to reformulate such constraints, as AIMMS doesn't
do this for you.
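One common reformulation is to fix such a variable via its bound suffixes instead of generating a row for it (a sketch with hypothetical names):

```aimms
! Instead of a symbolic constraint that generates rows of the form
!   v_Level(i) = p_GivenLevel(i)
! fix the variable through its bounds, so no row is generated:
v_Level.lower(i) := p_GivenLevel(i);
v_Level.upper(i) := p_GivenLevel(i);
```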

I'm rather curious regarding the sizes (rows/columns/nonzeros)
of the mathematical program you're generating. I hope you don't
mind me asking.

Best regards,

Chris Kuip
AIMMS Software Developer

Luis Pinto

Jan 26, 2009, 6:52:14 AM
to ck, AIMMS - The Modeling System
Hello Chris,

No problem at all. The current largest case we are working with has:
Variables: ~10 million
Constraints: ~5 million

I don't have the nonzero counts on me right now, but the problem isn't very dense.

I do have to say this should be considered a medium case. Our objective is to work with cases 4-8x larger. But I'm not sure if this will be possible.

Just so you know, the current machine has 16GB ram. But it can't solve this case, so far (runs out of memory).

Thanks for the input,

Luis Franco
www.unisoma.com



2009/1/24 ck <Chri...@gmail.com>

Sergio Bruno

Jan 26, 2009, 7:10:32 AM
to AIMMS - The Modeling System
Luis,


I understand that you are using AIMMS x64 to generate this math
program, considering the sizes mentioned above. Is that right?
Try to work out whether the memory usage comes from the math program
itself (variables and constraints) or from the parameters (using the
identifier cardinalities function). Then try breaking the data down
and spotting redundant data.
Sometimes AIMMS just leaks memory for some strange reasons, like
reading huge amounts of data from a database. Try looking at the
report generated by the function MemoryStatistics(). Maybe you can get
some info there. If the peak memory usage is very different from the
regular usage, there is some operation that is causing you trouble.

best,
Sergio Bruno
Operations Research Analyst


Luis Pinto

Jan 26, 2009, 7:45:08 AM
to Sergio Bruno, AIMMS - The Modeling System
Bruno,

A regular memory monitoring program (OS-level, not AIMMS-specific) doesn't indicate many peaks, but I'll take a closer look.
And, to answer your question, yes we are using AIMMS x64.

Cheers,

Luis

2009/1/26 Sergio Bruno <svbb...@gmail.com>

Sergio Bruno

Jan 26, 2009, 7:55:36 AM
to AIMMS - The Modeling System
Luis,

AIMMS sometimes won't free allocated memory, in order to reuse it
later. This may skew the results of your monitoring program.

best,
Sergio


ck

Jan 26, 2009, 9:13:06 AM
to AIMMS - The Modeling System
Hi Luis,

One additional note:
You said you are using a machine with 16 GB, right?
Is that 16 GB of physical memory, or is the 16 GB a combination
of some physical memory and the paging file size?


Occasionally we're doing tests with a machine with 8GB and a paging
file size of 24 GB. This enables us to do tests up to 24 GB of
memory in use.


The paging file size can be set as follows (Windows XP):
- Start
- Settings
- Control panel
- System
- Advanced tab
- Performance block
- Settings
- Advanced tab
- Virtual memory block
- Change

So, if you've got access to a machine with 16 GB, you may want to
specify its paging file as well.

There is one big but: applications that need a big page file are
not fast, but eventually they get there.

Best regards,

Chris Kuip
AIMMS Software Developer




On Jan 26, 2:49 pm, ck <ChrisK...@gmail.com> wrote:
> Hello Luis, Bruno,
>
> I've never worked with a model of 10M variables / 5M constraints
> before.  1.5M variables is the max I've seen.  So this is huge in
> my apparently limited experience.  And Luis is planning to go way
> over that. Wow.
>
> As for the memory usage of the mathematical program you're
> generating, you may want to look at the intrinsic function
> GMP::Instance::GetMemoryUsed().
>
> As for Bruno's comment regarding the read from database statement:
> You may also consider using the intrinsic function MemoryInUse()
> before and after the reading of the data. You can then compare the
> difference with the amount of data read. You can compute the amount
> of data read in by summing over the identifiers read in using the
> intrinsic function IdentifierMemory().
>
> Bruno couldn't have known about the intrinsic function MemoryInUse(),
> as it is not publicly visible yet. So here is a preliminary function
> reference page for MemoryInUse:
>
> The function MemoryInUse() returns the amount of memory currently
> in use in [Mb].
>     MemoryInUse(
>            ) ! No arguments.
> Return value:
>     A (fractional) number representing the amount of memory in use
>     in [Mb].
> Remarks:
>     This function uses an operating system function in order
>     to avoid missing any memory usage not taken into account.
>     See also the functions:
>     - MemoryStatistics
>     - IdentifierMemory
>     - GMP::Instance::GetMemoryUsed
>
> I'm going to document this function now in AIMMS 3.9.
> It will remain a hidden function in AIMMS 3.8.
>
> Bruno is right about AIMMS keeping memory in store in order to
> be *quickly* able to reuse it later on.  If that is a problem,
> and the GMP you're generating is (roughly) 20-40% of the total
> memory usage, you may want to consider the following division
> of your application:
> - Data phase:
>   - Read in data
>   - Compute model parameters
>   - Empty 'no more relevant parameters'
>   - Save case
> - close project
> - open project
> - Solve phase
>   - Load case
>   - Generate and solve the math program
>
> I'm very interested in any progress you make /
> hurdles you may encounter / facts  you may discover.
>
> Hope this helps and good luck,
>
> Chris Kuip
> AIMMS Software Developer
>
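The before/after bookkeeping described above could look something like this (a sketch only; the read statement, db_DemandTable, and the identifier names are made up):

```aimms
! Sketch; db_DemandTable, p_Demand and p_Supply are hypothetical.
p_MemBefore := MemoryInUse();    ! Mb in use before the read
read from table db_DemandTable;
p_MemAfter := MemoryInUse();     ! Mb in use after the read
! Compare the growth with the net size of the identifiers read in:
p_DataRead := IdentifierMemory('p_Demand') + IdentifierMemory('p_Supply');
display p_MemAfter - p_MemBefore, p_DataRead;
```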

Luis Pinto

Jan 26, 2009, 10:46:28 AM
to ck, AIMMS - The Modeling System
16 GB of physical memory. We have tried to limit the use of paging files due to their lower speed.
Are you having good results with the large paging file?
I believe that our server is running Windows Server 2003 x64 (if I'm not mistaken).

Cheers,

Luis

2009/1/26 ck <Chri...@gmail.com>


Luis Pinto

Jan 26, 2009, 11:32:28 AM
to ck, AIMMS - The Modeling System
Hello Chris,

You stated earlier that:

"Bruno is right about AIMMS keeping memory in store in order to
be *quickly* able to reuse it later on."

Is there any way to stop AIMMS from doing this? Or to limit the amount of memory it can keep ready for this quick reuse?
I had the impression that the auto rebuild and/or garbage collector function would take care of this.

Cheers,

Luis

2009/1/26 Luis Pinto <luis...@gmail.com>

ck

Jan 27, 2009, 1:40:15 AM
to AIMMS - The Modeling System
Hello Luis, Bruno,

10M variables / 5M constraints is indeed a considerable model.
On 26 jan, 19:28, ck <ChrisK...@gmail.com> wrote:
> Hi Luis,
>
> I honestly don't remember. Sorry. These runs were of the form:
> start today - look at results tomorrow or after the weekend.
> So I wasn't too concerned with the performance degradation due
> to a heavy use of page files.
>
> As far as I know, if a computer has a large paging file, yet the
> application fits completely or almost completely within the
> physical memory of the computer, the paging file is hardly used.
> If that's true (though I could be mistaken), creating a large page
> file doesn't hurt your application. Maybe your IT people can verify
> or falsify this statement of mine.
>
> It's just that if you've got a valid run, you can look at the
> results in the data cardinalities dialog.
>
> Another option to obtain results in the data cardinalities dialog is
> to reduce one or two root sets to 70 % of their elements and then
> use CleanDependents before doing the actual computations.
> The results in the data cardinalities dialog are, in that case,
> only indicative, but might be a good starting point for further
> investigation.
>
> > You stated earlier that:
> > "Bruno is right about AIMMS keeping memory in store in order to
> > be *quickly* able to reuse it later on."
> > Is there anyway to stop AIMMS from doing this? Or limiting the amount of
> > memory it can keep ready for this quick reuse?
> > I had the impression that the auto rebuild and/or garbage collector function
> > would take care of this.
>
> The AIMMS 3 engine works as follows (for an assignment / Data read /
> Case read):
> A) Compute the result as a linked list for each identifier read in /
>    computed (such a list contains values for the indices and the
>    actual value).
> B) Do range checking and unit conversions (if applicable).
> C) Work this list into the tree that contains the data of the
>    identifier(s) at hand.
> D) Reclaim this list into the memory waiting to be reused, but do
>    not deallocate that memory to the operating system.
> When many values are computed / read in, these lists may become
> very long. Therefore the steps A, B, C and D are done interleaved
> (with a limited buffer size) for the assignment statement and the
> Case read action. But this technique has not yet been extended to
> Data read (from database). Maybe Bruno has stumbled on this
> 'feature'. If so, I apologize to Bruno for the inconvenience.
> Once these lists have been created, AIMMS can use them for other
> intermediate results, but not for identifier data storage or for
> storage of rows in a mathematical program. The multiple memory
> manager option can be switched off, and then there is some reuse
> possible, at the expense of losing locality and losing the rebuild
> feature.
>
> I don't intend to tell you how to do your work, but imho I think
> that the best way to continue is to get an overview of how much
> memory is used in each phase of your application (does your
> application have different phases?). In addition, which identifiers
> are the most expensive, and how expensive are they? How much memory
> is used by the mathematical program, and how much is that relative
> to the total memory used by your application?
>
> Once the above figures are available, the road for further analysis
> should become clear, possibly followed by a solution (or a
> work-around if the problem is indeed in the data read, as Bruno
> suggests). I think that this analysis of your application is by far
> the best option you have.
>
> The other option you have is indeed to play with the memory manager
> options. However, I would be surprised if the gain you can obtain
> in this way is more than a few percent. If you want to play with
> these options, I suggest you start with the option 'use multiple
> memory managers' and turn it 'off'. But on the other hand, it might
> be counter-productive. Don't get your hopes up until you have any
> positive results. In addition, if you observe any improvements in
> memory usage, how do they affect the throughput times?
>
> BTW I'm curious about the memory size figures you obtain.
>
> Best regards,
>
> Chris Kuip
> AIMMS Software Developer

ck

Jan 27, 2009, 2:45:32 AM
to AIMMS - The Modeling System
Hi Luis,

The suggestion regarding the use of page files was to get
some results. The influence of page files on the
performance of applications is a bit beyond the scope
of this forum.

It's just that if you've got a valid run, you can look at
the results in the data cardinalities dialog.

Another option to obtain results in the data cardinalities
dialog is to reduce one or two root sets to 70 % of their
elements and then using CleanDependents before doing the
actual computations. The results in the data cardinalities
dialog are, in that case, only indicative, but might be a
good starting point for further investigation.

> You stated earlier that:
> "Bruno is right about AIMMS keeping memory in store in order to
> be *quickly* able to reuse it later on."
> Is there anyway to stop AIMMS from doing this? Or limiting the amount of
> memory it can keep ready for this quick reuse?
> I had the impression that the auto rebuild and/or garbage collector function
> would take care of this.

AIMMS already tries to limit the amount of memory in
'reuse' store. The auto rebuild and garbage collector
functionality are only effective if cardinalities of
identifiers also decrease significantly, for example in
what if analysis or in runs of cases with batch files.

I don't intend to tell you how to do your work, but imho
I think that the best way to continue is to get an overview
of how much memory is used in each phase of your
application (does your application have different phases?)
In addition, which identifiers are the most expensive and
how expensive are they? How much memory is used by the
mathematical program and how much is that relative to the
total memory used by your application?

Once the above figures are available, the road for further
analysis should become clear and possibly followed by a
solution or prove that none exists. I think that this
analysis of your application is by far the best option you
have.

AIMMS is designed to avoid letting you worry about memory
issues. Only in rare cases is there a need to change the
default memory management settings. You still have, of
course, the option to play with the memory manager options.
However, I would be surprised if the gain you can obtain in
this way is more than a few percent. If you want to play
with these options, I suggest you start with the option
'use multiple memory managers' and turn it 'off'.
But on the other hand, it might be counter-productive.
Don't get your hopes up until you have any positive results.
In addition, if you observe any improvements in memory
usage, how do they affect the throughput times?

BTW I'm curious about the memory size figures you obtain.

Best regards,


Chris Kuip
AIMMS Software Developer



Luis Pinto

Jan 27, 2009, 7:20:30 AM
to ck, AIMMS - The Modeling System
Hello Chris,

My intention is to take a close look at the identifiers, because I believe the bigger gain is there. But for immediate purposes I need the model to run, so any kind of "work around" that can save me memory is valid.

Unfortunately, I don't have access on a daily basis to this server, so my feedback will take a while. Rather, I am trying to compile all possible functionalities and options I should explore, when I do have access.

Anyway, thanks for the good suggestions and help.

On another note, is there any indication of whether AIMMS x64 will allow STUB optimization in the future? I know this could help a little.

Cheers,

Luis


2009/1/26 ck <Chri...@gmail.com>


ck

Jan 27, 2009, 8:20:05 AM
to AIMMS - The Modeling System
Hello Luis,

> My intention is to take a close look at the identifiers, because I believe
> the bigger gain is there. But for immediate purposes I need the model to
> run, so any kind of "work around" that can save me memory is valid.

I've got no new suggestions.

> Unfortunately, I don't have access on a daily basis to this server, so my
> feedback will take a while. Rather, I am trying to compile all possible
> functionalities and options I should explore, when I do have access.

I'm curious to see your results when they become available.

> Anyway, thanks for the good suggestions and help.

You're welcome, I'm glad to be of help.
I always find large scale applications interesting.

> On another note, is there any prediction if AIMMS x64 will allow STUB
> optimization in the future? I know this could help a little.

If you want to generate a mathematical program without solving it,
you may want to use GMP::Instance::Generate.
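A minimal sketch of that call (the math program name 'MathProgram' and the element parameter ep_gmp are placeholders, and the exact signatures should be checked against the function reference):

```aimms
! Sketch only; 'MathProgram' and ep_gmp are placeholder names.
! Generate the GMP without invoking a solver, then inspect its size:
ep_gmp := GMP::Instance::Generate( MathProgram );
display GMP::Instance::GetMemoryUsed( ep_gmp );
```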

Best regards,

Chris Kuip
AIMMS Software Developer



Luis Pinto

Jan 27, 2009, 8:37:57 AM
to ck, AIMMS - The Modeling System
Hello,

"STUB optimization" as in using CPLEX as a new process in windows instead of in AIMMS, like you can already do by using the PS solver (Proxy/Stub solver). It is currently n.a. but it would nice if it were available in the future.

Cheers,

Luis

2009/1/27 ck <Chri...@gmail.com>