While coding a client/server application (on several Win98 boxes), I came
across the following problem: During debug sessions I write messages to a log
file. In order to be able to access the file at any time, I open the file
every time a string is to be added, write the string to it and then close it
again. While doing so, I observed system resources (unreserved memory, as
displayed in the System Monitor of Win98) going down to zero after some
hours. This results in very unstable system behaviour once memory is down to
a few KB.
Finally, I wrote a simple programme that reproduces exactly this behaviour in
a shorter time. At first, I used a sequence of fopen(), fwrite() and
fclose(). Since I read in newsgroups that rapid calls of this sequence will
"eat" memory, I switched to a sequence of CreateFile(), WriteFile() and
CloseHandle() that looks as follows (this is now the whole programme...):
#include <stdio.h>
#include <string.h>
#include <windows.h>

int main(void)
{
    char   OutBuffer[256];
    HANDLE LogFile;
    DWORD  BytesWritten;

    strcpy(OutBuffer, "Hello World!\r\n");
    DeleteFile("c:\\test.log");

    while (1) {
        /* open (or create) the log, append one line, close it again */
        LogFile = CreateFile("c:\\test.log", GENERIC_WRITE, 0,
                             NULL, OPEN_ALWAYS,
                             FILE_ATTRIBUTE_NORMAL, NULL);
        SetFilePointer(LogFile, 0, NULL, FILE_END);
        WriteFile(LogFile, OutBuffer, strlen(OutBuffer), &BytesWritten, NULL);
        CloseHandle(LogFile);
    }

    return 0; /* ...will never be reached - anyway... */
}
When executed, the memory is decreased in "steps": the programme starts and
memory appears untouched for a long time; then Windows seems to reserve
another, rather large block of memory. The size of this block depends on the
overall amount of free memory. After this reservation, the amount of
unreserved memory stays untouched again for a long time until another block
is taken. As Windows seems to act in an "intelligent" way, the step size
decreases with runtime (with respect to the continuously smaller amount of
available memory) while the time periods between reservations decrease as
well; the average usage over a long time is thus roughly constant.
Consequently, it usually takes between 500 and 5000 seconds until the first
"bite is eaten" on my three machines, where I start with 180/100/40 MB of
available memory. Furthermore, I think it is interesting that some memory is
freed when the programme is terminated. It is, however, only a small amount
of ca. 10 MB (this is about the same if the while-loop is replaced by a
for-loop that lets the programme terminate properly instead of being stopped
with Ctrl-C).
Based on information about related problems from the newsgroups, I tried
switching from FILE_ATTRIBUTE_NORMAL to FILE_FLAG_WRITE_THROUGH and
FILE_FLAG_SEQUENTIAL_SCAN. The effect on memory usage, however, was the
same; the only difference was that in the first case the hard drive was
spinning constantly.
I compiled the full programme above with Borland C++ 5.02 (static build) and
Visual C++ 6.0 SP3 (linked against both the dynamic and the static runtime
libraries), with identical results.
I can certainly work around the problem in my own application by simply not
opening and closing the file all the time (roughly as sketched below).
Nevertheless, does anybody have a solution to the problem itself?
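What I mean by that workaround is roughly the following sketch (the helper
names OpenLog/LogMessage/CloseLog are made up for illustration): open the
log once, keep the handle, and only write and flush per message.

#include <string.h>
#include <windows.h>

static HANDLE g_LogFile = INVALID_HANDLE_VALUE;

/* Open (or create) the log once and position at the end for appending. */
BOOL OpenLog(const char *path)
{
    g_LogFile = CreateFile(path, GENERIC_WRITE, FILE_SHARE_READ,
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (g_LogFile == INVALID_HANDLE_VALUE)
        return FALSE;
    SetFilePointer(g_LogFile, 0, NULL, FILE_END);
    return TRUE;
}

/* Append one message and flush so it survives a crash of the debuggee. */
void LogMessage(const char *msg)
{
    DWORD written;
    WriteFile(g_LogFile, msg, (DWORD)strlen(msg), &written, NULL);
    FlushFileBuffers(g_LogFile);
}

void CloseLog(void)
{
    if (g_LogFile != INVALID_HANDLE_VALUE)
        CloseHandle(g_LogFile);
}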
Best regards,
John
In addition to yesterday's problem, I found out that the problem really does
seem to be linked to fwrite() and WriteFile(), respectively. On a machine
with 40 MB of free RAM at programme start, available memory is decreased to
absolutely 0 (!) bytes according to the System Monitor if the loop is reduced
to just the WriteFile() call (open and close moved outside):
LogFile = CreateFile("c:\\test.log", GENERIC_WRITE, 0,
                     NULL, OPEN_ALWAYS,
                     FILE_ATTRIBUTE_NORMAL, NULL);
SetFilePointer(LogFile, 0, NULL, FILE_END);

while (1) {
    WriteFile(LogFile, OutBuffer, strlen(OutBuffer), &BytesWritten, NULL);
}

CloseHandle(LogFile);
Ideas, anybody?
Cheers,
John
John Lafrowda <laa...@laa.com> wrote in message
<amai2o$8cv$1...@news.uni-stuttgart.de>...
It's a weird bug, that's for sure. Have you tried flushing the file after
the write?
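Just a rough sketch of what I mean - a FlushFileBuffers() after each write,
before the handle is closed:

    WriteFile(LogFile, OutBuffer, strlen(OutBuffer), &BytesWritten, NULL);
    FlushFileBuffers(LogFile);   /* force cached write data out to disk */
    CloseHandle(LogFile);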
Stephen
In message <amc1re$p3$1...@news.uni-stuttgart.de>, John Lafrowda
<laa...@laa.com> writes
--
Stephen Kellett http://www.objmedia.demon.co.uk
Object Media Limited C++/Java/Windows NT/Unix/X Windows/Multimedia
I find it interesting that the effect disappears if FILE_FLAG_NO_BUFFERING is
used in CreateFile() and blocks of the destination drive's sector size (in my
case 512 bytes) are written. This, however, is to be expected, since the
system should not buffer anything in that scenario. The really interesting
fact is that FILE_FLAG_WRITE_THROUGH does not help, although it should
provide >almost< the same mechanism.
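For reference, the unbuffered variant looks roughly like this (a sketch only,
assuming the same declarations as in the full programme above; with
FILE_FLAG_NO_BUFFERING the buffer address, the write size and the file offset
must all be multiples of the drive's sector size, here assumed to be 512
bytes):

/* VirtualAlloc() returns page-aligned memory, which also satisfies the
   sector-alignment requirement of FILE_FLAG_NO_BUFFERING.              */
char *SectorBuffer = (char *)VirtualAlloc(NULL, 512, MEM_COMMIT, PAGE_READWRITE);
memset(SectorBuffer, 0, 512);
memcpy(SectorBuffer, "Hello World!\r\n", 14);

LogFile = CreateFile("c:\\test.log", GENERIC_WRITE, 0,
                     NULL, OPEN_ALWAYS,
                     FILE_FLAG_NO_BUFFERING, NULL);
SetFilePointer(LogFile, 0, NULL, FILE_END);  /* offset stays sector-aligned */

while (1) {
    /* write exactly one sector; the data bypasses the file cache */
    WriteFile(LogFile, SectorBuffer, 512, &BytesWritten, NULL);
}

CloseHandle(LogFile);
VirtualFree(SectorBuffer, 0, MEM_RELEASE);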
From my last tests and from newsgroup information I finally found out that
the "memory-eating" effect is not necessarily a thing to worry about: other
people have seen similar effects on Win98 and WinNT 4.0. The memory is,
however, not locked up, but is freed when another application needs it. I
verified this by starting another programme (Matlab) and allocating a huge
array (ca. 20 MB) while the "memory-eater" was still running. After I freed
the array and closed the second application, available memory jumped up to
70 MB and then decreased slowly again. Consequently, the effect seems to be
related more to the flushing mechanisms of the Windows memory manager and
less to the management of the file system.
Anyway - I'm pretty astonished that not more people have seen similar
effects, or at least thought this would be a problem. I would be interested
to know whether the effects of my sample code are the same on other Windows
versions (2000, XP, Me, NT).
Cheers
John
Stephen Kellett <sn...@objmedia.demon.co.uk> wrote in message
moKk+2DR...@objmedia.demon.co.uk...
I was just checking. I was pretty sure it was the main thread (in which
case I had no answer).
It sounds like Windows is trying to optimise file access times by delaying
the write to the disk as long as possible, keeping the data in memory or,
somewhat bizarrely, in the page file. I think accesses to the page file are
faster than ordinary disk accesses (although the page file is itself on
disk), which might explain this. I expect someone who knows more about the
low-level details of the page file will correct me if I'm wrong.
Stephen
Hello. I tried your code on my XP/256 MB laptop and didn't observe the same
behaviour. In Task Manager, the program's size stayed the same. In the Task
Manager "Performance" tab, the "committed" and "physical memory free"
figures fluctuated a little, but not much, and after five minutes I hadn't
observed any systematic gobbling-up of memory. I was using Borland
C++Builder 5 (but as a plain Win32 API non-console program).
--
Lucian
People have had the same effects. You threw everyone off
when you said, in your first post:
"I observed system resources (unreserved memory, displayed
in system monitor of Win98) going down to zero after some
hours. This results in very unstable system behavior when
memory is down to some kbyte."
"Furthermore, I think it is interesting that some memory is freed
when the program is terminated. It is, however, only a small
amount of ca. 10MB"
You indicated the system never reclaimed memory and
became unstable due to lost resources.
Check the documentation on "Process Working Set", and
the SetProcessWorkingSetSize() API.
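For instance, a minimal sketch (NT-family API; passing -1 for both sizes asks
the system to remove as many pages as possible from the process's working
set):

#include <windows.h>

void TrimWorkingSet(void)
{
    /* Hint to the memory manager that our pages may be reclaimed. */
    SetProcessWorkingSetSize(GetCurrentProcess(),
                             (SIZE_T)-1, (SIZE_T)-1);
}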
Thank you for the advice.
I'm sorry that I "threw everyone off" in the first mail. It was just that my
system (or, to be more exact, the applications coded by me) becomes unstable
at the point where memory goes to zero. This is, however, obviously not
directly related to the memory problem, but to some other bug in _my_
software.
John
Ron Ruble <raff...@att.net> wrote in message
M5li9.41395$jG2.2...@bgtnsc05-news.ops.worldnet.att.net...
OK, here's what's happening:
You open a file, extend it, and close it. Windows normally keeps the
data around for a while before flushing it to disk. This consumes
memory. By default, Win95/98/Me will swap random other stuff out to
disk in order to allow the buffer to grow, but you can control the
maximum size of the buffer using lines like these in SYSTEM.INI:
[vcache]
MinFileCache=2048
MaxFileCache=300000
The numbers (taken above from my own machine) are in KBytes, so I am
"reserving" 2 MBytes to nearly 300 MBytes for the buffered data. (My
machine has 512MB of RAM, and I want to control just how big the buffer
gets...)
On NT/2K/XP, the allocation strategy is different, and they seem less
keen to swap everything and its brother out in order to grow the buffer.
Data goes in the buffer for either read or write, and writes are
normally delayed, but:
FILE_FLAG_NO_BUFFERING keeps read and write data out of the buffer and just
reads/writes directly to/from the disk. (Logically, it ought to discard the
affected part of the buffer if data is written with this flag to an area
that is already buffered. Does it? Beats me, but presumably it does, or data
could be corrupted.)
FILE_FLAG_WRITE_THROUGH uses the buffer as normal but guarantees that
write data is written immediately rather than later.
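In code terms, the difference boils down to which flags you pass to
CreateFile() (rough sketch):

/* No buffering: data bypasses the cache; buffer address, write size and
   file offset must all be multiples of the sector size.                 */
h = CreateFile("c:\\test.log", GENERIC_WRITE, 0, NULL, OPEN_ALWAYS,
               FILE_FLAG_NO_BUFFERING, NULL);

/* Write-through: data still goes into the cache, but WriteFile() does
   not return until the data has been written to the disk.               */
h = CreateFile("c:\\test.log", GENERIC_WRITE, 0, NULL, OPEN_ALWAYS,
               FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH, NULL);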
>From my last tests and from newsgroup information I finally found out that
>the "memory-eating" effect is not necessarily a thing to worry about: other
>people have seen similar effects on Win98 and WinNT 4.0. The memory is,
>however, not locked up, but is freed when another application needs it. I
>verified this by starting another programme (Matlab) and allocating a huge
>array (ca. 20 MB) while the "memory-eater" was still running. After I freed
>the array and closed the second application, available memory jumped up to
>70 MB and then decreased slowly again. Consequently, the effect seems to be
>related more to the flushing mechanisms of the Windows memory manager and
>less to the management of the file system.
Not exactly. The flushing mechanisms here are part of the file-system
management code, but they work closely with the memory manager so as not to
interfere too much with the user's work.
--
"Eagle-eyed" Steve