
ReadFile (win32 API) cannot read more than 64 MB in one chunk


serge

Feb 1, 2005, 12:29:05 PM
Hi to all. Hope I'm not wasting your time with this.
I have a little C program to read bytes from a hard drive into a memory
buffer. It works perfectly fine as long as the "nNumberOfBytesToRead"
parameter in the ReadFile function (win32 API) is less than 64 MB. Anything
bigger than that results in ReadFile returning 0 (zero), indicating
failure.
Could anybody shed some light on this?
Many thanks to all,
Serge Matovic.

Gabriel Bogdan

Feb 1, 2005, 12:59:58 PM
Yes, it probably can't read a buffer that large, probably because behind a
read there is a read IRP that has an MDL that can't map more than 64 MB into
logical memory.

Then again, not being able to read a buffer that large in a single call, is
it really a problem?
You didn't really expect to be able to read a file in a single call, no
matter how big it was, did you?


"serge" <se...@discussions.microsoft.com> wrote in message
news:64A1F7F5-0561-48B4...@microsoft.com...

serge

Feb 1, 2005, 1:31:03 PM
Thanks Gabriel: I definitely can read smaller chunks (<64 MB) from the hard
drive and concatenate them into one large file (>64 MB), but I'm really
curious where/why this limitation of 64 MB is coming from, at the win32 API
level. Anything "deeper" than that is beyond me.

Just for interest, here are some details of my code. (I can post the whole
source, but don't want to bother you with all that):

1) The error code, reported by the GetLastError() function, is 1450, and the
Microsoft site says this about it: "ERROR_NO_SYSTEM_RESOURCES: Insufficient
system resources exist to complete the requested service."

2) My memory allocation code is:
lpSectBuff = (LPBYTE)VirtualAlloc(NULL, 102400000, MEM_COMMIT, PAGE_READWRITE);

3) My ReadFile call is:
fResult = ReadFile(hDev, lpSectBuff, nNumberOfBytesToRead, lpNumberOfBytesRead, NULL);

My PC has 512 MB RAM, 80 GB hard drive, and Windows 2000 Prof.

As I mentioned, my code works perfectly as long as nNumberOfBytesToRead < 64 MB.

regards,
Serge Matovic.

Gabriel Bogdan

Feb 1, 2005, 1:59:16 PM
A read operation is translated by the OS into an IRP (I/O request packet)
that is sent to a driver servicing the actual read operation.

In the case of a read from a file, the driver responsible will probably be
the file system driver; this in turn will send appropriate IRPs to the disk
driver to carry out the actual read.

The IRP describes an operation that is to be carried out (read/write). The
field that describes the memory buffer is an MDL (memory descriptor list).
The MDL is used to map physical memory into logical memory, and the MDL
can't be used to map more than 64 MB, so the IRP can't work on buffers
bigger than 64 MB, so read/write operations can't work on larger buffers.

If you are interested in the inner workings of Windows you can read about
this in the DDK (rather cryptic for beginners), or in:
Programming the Microsoft Windows Driver Model by Walter Oney
Inside Microsoft Windows 2000, Third Edition

"serge" <se...@discussions.microsoft.com> wrote in message

news:4634BCD5-7918-4CF7...@microsoft.com...

serge

Feb 1, 2005, 4:15:09 PM
Wow, thanks Gabriel. I guess I'll have to "concatenate 64 MB chunks" read
from hard drive into the resulting big file.
serge.

m

Feb 1, 2005, 10:34:45 PM
Out of curiosity, have you tried ReadFileGather?

"serge" <se...@discussions.microsoft.com> wrote in message

news:34E6ABE0-8CCE-49A1...@microsoft.com...

serge

Feb 1, 2005, 11:27:02 PM
No, but will try, and report back. Thanks.

Jeff F

Feb 2, 2005, 7:54:21 AM
serge wrote:
> Thanks Gabriel: I definitely can read smaller chunks (<64 MB) from the
> hard drive and concatenate them into one large file (>64 MB), but I'm
> really curious where/why this limitation of 64 MB is coming from, at
> the win32 API level. [...]
>
> As I mentioned, my code works perfectly as long as
> nNumberOfBytesToRead < 64 MB


Why not just use a memory-mapped file and forgo the VirtualAlloc and
ReadFile calls? The class below can be used to get the begin and end
'iterators' of your mapped memory.

CFileMapped lFileMapped("c:\\somefile.dat");  // note the escaped backslash

const char* lBegPtr = lFileMapped.Begin();

================
class CFileMapped
{
    DWORD       mSize;
    const char* mMemPtr;
    HANDLE      mMapHdl;
    HANDLE      mHdl;

    CFileMapped();                          // not copyable
    CFileMapped( CFileMapped& aFileMapped );
public:
    ~CFileMapped() { Close(); }

    CFileMapped( LPCTSTR aFileName )
        : mSize( 0 ), mMemPtr( NULL ), mMapHdl( NULL )
        , mHdl( INVALID_HANDLE_VALUE )
    {
        mHdl = CreateFile( aFileName, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_RANDOM_ACCESS,
                           0 );
        if( mHdl != INVALID_HANDLE_VALUE )
        {
            mSize = GetFileSize( mHdl, NULL );
            if( mSize == 0xFFFFFFFF ) mSize = 0;

            if( mSize && (mMapHdl = CreateFileMapping( mHdl, NULL,
                                        PAGE_READONLY, 0, 0, 0 )) )
            {
                mMemPtr = (const char*)MapViewOfFile( mMapHdl, FILE_MAP_READ,
                                                      0, 0, 0 );
                if( mMemPtr )
                    return;
            }
        }
        Close();
    }

    const char* Begin() const { return mMemPtr;         }
    const char* End  () const { return mMemPtr + mSize; }

    DWORD Size() const { return mSize; }

private:
    void Close()
    {
        // Start cleanup by unmapping the view, then close the handles.
        if( mMemPtr ) UnmapViewOfFile( mMemPtr );
        mMemPtr = NULL;
        if( mMapHdl ) CloseHandle( mMapHdl );
        mMapHdl = NULL;
        if( mHdl != INVALID_HANDLE_VALUE ) CloseHandle( mHdl );
        mHdl = INVALID_HANDLE_VALUE;
        mSize = 0;
    }
};

Jeff


serge

Feb 2, 2005, 10:37:08 AM
Thanks Jeff: I'll try it. To answer your question of "why am I doing it my
way": well, I am really not an experienced C programmer, but I know enough
of the C syntax to attempt to write a little C program to help a friend
recover some (large) files from a corrupted hard drive. Creating a large
buffer and reading a large chunk of hard disk space into it seemed like the
quickest thing to do. And it worked, except for this 64 MB limit.
Then I got really curious about this limit: where is it defined, and why is
it there? After all, a user's process is allotted 2 GB of virtual memory
space; shouldn't this process be allowed to work with a meager 120 MB buffer
in it?

Thanks Jeff,
serge.

Alexander Grigoriev

Feb 2, 2005, 10:49:16 AM
Check FINDPART and other tools by Olaffson.

"serge" <se...@discussions.microsoft.com> wrote in message

news:2B0E8A7E-F9B7-4102...@microsoft.com...

Alexander Grigoriev

Feb 2, 2005, 10:48:20 AM
A memory-mapped file is inherently slower than FILE_FLAG_NO_BUFFERING
access. With an MM file, each I/O operation will only bring in 4 KB, and
there is also the problem of VM thrashing.

"Jeff F" <n...@anywhere.com> wrote in message
news:e%23XiSZSC...@tk2msftngp13.phx.gbl...

Slava M. Usov

Feb 2, 2005, 11:22:18 AM
"Alexander Grigoriev" <al...@earthlink.net> wrote in message
news:uXHah6T...@TK2MSFTNGP12.phx.gbl...

> Memory mapped file is inherently slower than FILE_FLAG_NO_BUFFERING
> access.

Not true in general. There are some qualifying conditions when it is true,
but there are also conditions when the opposite is true.

> With a MM file, each I/O operation will only bring 4 KB,

Not true. See "page fault clustering".

> and there is also problem of VM thrashing.

And there is a problem of system resource availability when dealing with
large IO requests.

Generally, it all depends on what you have to do. If you need to read a few
bytes from a file with complex structure, which might require jumping back
and forth, then unbuffered IO may not automatically be the right answer.
Conversely, streaming compressed video through memory mapped files may be
awkward.

S


Phil Barila

Feb 9, 2005, 12:13:38 AM
"serge" <se...@discussions.microsoft.com> wrote in message
news:2B0E8A7E-F9B7-4102...@microsoft.com...

> Thanks Jeff: I'll try it. To answer your question of "why am I doing it
> my way", well I am really not an experienced C programmer, but I know
> enough of the C syntax to attempt to write a little C program to help a
> friend recover some (large) files from a corrupted hard drive. [...]
> Then I got really curious about this limit and where it is defined, and
> why is it there? After all, a user's process is allotted 2 GB of virtual
> memory space; shouldn't this process be allowed to work with a meager
> 120 MB buffer in it?

Nobody has yet mentioned that you are limited to, at *most*, 32 MB on the
interface, since the maximum transfer ATA can describe is 65536 blocks,
which multiplied by 512 bytes yields 32 MiB.

However, you won't get anywhere close to that, since Windows won't attempt
to use transfers that large. The theoretical limit Windows imposes on a
single transfer on your 80 GB disk is 128 KB, and Windows is free to further
break up the transfer into smaller chunks. It breaks your 64 MB up into a
bunch of smaller transfers, though you have little control (none) over how
many, or how big. So while you might like it if you could fill a giant
buffer in one transfer, it's just not going to happen.

Phil
--
Philip D. Barila Windows DDK MVP
Seagate Technology LLC
(720) 684-1842
As if I need to say it: Not speaking for Seagate.
E-mail address is pointed at a domain squatter. Use reply-to instead.

