
CreateFileMapping failing for large files - ERROR_NOT_ENOUGH_QUOTA


Alex Sheneal

Jun 13, 2008, 3:21:00 PM
I am dealing with potentially very large pre-existing files (2 TB). These
files are proprietary high definition video files. I am trying to create a
file mapping object using CreateFileMapping for these files. The idea is to
use MapViewOfFile to map the index of the file into virtual address space of
the process. I am seeing a problem when creating the file mapping. Let’s
say I have 2 files (F1 and F2). Here is some pseudocode of an experiment
that I did to exhibit the problem:

OpenFile( F1 )
OpenFile( F2 )
CreateFileMapping( F1 )
CreateFileMapping( F2 )

The second CreateFileMapping call will fail with error 1816
(ERROR_NOT_ENOUGH_QUOTA). I don’t even have to map any views to get this
problem. The code to duplicate the problem is literally as simple as the
pseudocode above. I have also tried to create each file mapping such that it
was small (64K) just to see what would happen. The second call still fails
with the same error.

Please note that I am *not* trying to map the entire file into memory, only
a small index within the file. So the file size should be irrelevant.

Has anyone ever seen this behaviour in Windows? Does anyone have a
solution? Any ideas what quota is exceeded? Any help would be greatly
appreciated.

Alex

Pavel A.

Jun 13, 2008, 5:14:55 PM
Which OS? NTFS or exFAT?

--PA

"Alex Sheneal" <Alex She...@discussions.microsoft.com> wrote in message
news:D527AE3C-1199-4A43...@microsoft.com...

> I am dealing with potentially very large pre-existing files (2 TB). These
> files are proprietary high definition video files. I am trying to create a
> file mapping object using CreateFileMapping for these files. The idea is to
> use MapViewOfFile to map the index of the file into virtual address space of
> the process. I am seeing a problem when creating the file mapping. Let's
> say I have 2 files (F1 and F2). Here is some pseudocode of an experiment
> that I did to exhibit the problem:
>
> OpenFile( F1 )
> OpenFile( F2 )
> CreateFileMapping( F1 )
> CreateFileMapping( F2 )
>
> The second CreateFileMapping call will fail with error 1816
> (ERROR_NOT_ENOUGH_QUOTA). I don't even have to map any views to get

Alex Sheneal

Jun 13, 2008, 5:51:01 PM
Sorry I should have mentioned that. This is under Windows XP on an NTFS drive.

Thanks,
Alex

"Pavel A." wrote:

> Which OS? NTFS or exFAT?
>
> --PA
>
>
>
> "Alex Sheneal" <Alex She...@discussions.microsoft.com> wrote in message
> news:D527AE3C-1199-4A43...@microsoft.com...
> > I am dealing with potentially very large pre-existing files (2 TB). These
> > files are proprietary high definition video files. I am trying to create a
> > file mapping object using CreateFileMapping for these files. The idea is to
> > use MapViewOfFile to map the index of the file into virtual address space of
> > the process. I am seeing a problem when creating the file mapping. Let's
> > say I have 2 files (F1 and F2). Here is some pseudocode of an experiment
> > that I did to exhibit the problem:
> >
> > OpenFile( F1 )
> > OpenFile( F2 )
> > CreateFileMapping( F1 )
> > CreateFileMapping( F2 )
> >
> > The second CreateFileMapping call will fail with error 1816
> > (ERROR_NOT_ENOUGH_QUOTA). I don't even have to map any views to get

Alexander Grigoriev

Jun 13, 2008, 10:22:54 PM
What parameters are you passing to CreateFile and CreateFileMapping? Are you
actually using CreateFile or the obsolete OpenFile? Are you opening the files
with the FILE_FLAG_NO_BUFFERING flag (good) or without that flag (BAD)?

"Alex Sheneal" <Alex She...@discussions.microsoft.com> wrote in message
news:D527AE3C-1199-4A43...@microsoft.com...

Hugh Moran

Jun 14, 2008, 7:21:00 AM

"Alex Sheneal" wrote:

Alex, drop me a line at Morantex. We have considerable experience with this
subject. I can't suggest an answer as I write, but drop me an e-mail reminder
and I will be happy to see if I can shed light on this; we have numerous
libraries and in-house tools that allow us to examine this kind of issue in
depth.

Hugh Moran

Jun 14, 2008, 7:36:00 AM

"Alex Sheneal" wrote:

Alex

I assume you are running XP64?

Hugh

Hugh Moran

Jun 16, 2008, 8:39:00 AM

"Alex Sheneal" wrote:

Hi Alex

The sizes you are using are very large; 2 TB is some 2000 GB. Can you possibly
establish at what size you begin to see this error code? Does it happen at
10 GB? 50 GB? 100 GB? 500 GB?

If you can tell me the smallest size that exhibits the issue, I will
try to see if we encounter it at these large sizes.

Hugh

http://www.morantex.com

Hugh Moran

Jun 16, 2008, 9:39:01 AM

"Alex Sheneal" wrote:

Hi Alex

I now strongly suspect that this error code arises due to the exhaustion of
OS heaps. Both the paged and non-paged heap play a role here; page tables are
held within the paged heap, and with files of this size a considerable demand
is placed upon the OS heaps.

Your problem may be alleviated by increasing the size of your paging file or
by upping the paged and non-paged pool quota sizes.

By default the OS computes the sizes of these two regions at boot time;
these sizes can be manually overridden by changing registry settings. In
addition, the registry may also be used to adjust the per-process quota
allowances for paged and non-paged pool; a process that attempts to use
more than its quota will find that the allocation fails (inside the OS).

You need to discover what the per-process quotas are for your system.

This article will help you pursue this:

http://support.microsoft.com/kb/177415

Regards
Hugh


Alex Sheneal

Jun 16, 2008, 3:00:12 PM
"Alexander Grigoriev" wrote:

> What parameters are you passing to CreateFile and CreateFileMapping?

Here are the parameters we're using:

hFile = CreateFile(
    pFilename,              // lpFileName
    GENERIC_READ,           // dwDesiredAccess
    FILE_SHARE_READ,        // dwShareMode
    NULL,                   // lpSecurityAttributes
    OPEN_EXISTING,          // dwCreationDisposition
    FILE_FLAG_NO_BUFFERING, // dwFlagsAndAttributes
    NULL                    // hTemplateFile
);

hFileSection = CreateFileMapping(
    hFile,                      // hFile
    NULL,                       // lpFileMappingAttributes
    PAGE_READONLY | SEC_COMMIT, // flProtect
    0,                          // dwMaximumSizeHigh
    65536,                      // dwMaximumSizeLow
    NULL                        // lpName
);

> Are you
> actually using CreateFile or obsolete OpenFile? Are you opening the files
> with FILE_FLAG_NO_BUFFERING flag (good) or without that flag (BAD)?

As you can see, everything seems correct.

Thanks,
Alex

Alex Sheneal

Jun 16, 2008, 3:02:01 PM
No this is under 32-bit Windows XP.

Thanks,
Alex

Hugh Moran

Jun 16, 2008, 3:13:05 PM

"Alex Sheneal" wrote:

Well, it is not possible to map anything much bigger than 1 or 1.5 GB under
32-bit XP. When you say you have 2 TB files, I take it you are mapping
portions of them?

If you actually want to map files in the hundreds or thousands of GB range,
you cannot do so unless you move to a 64-bit OS.

Hugh
Hugh

Alex Sheneal

Jun 16, 2008, 4:18:04 PM
"Hugh Moran" wrote:
> Alex, drop me a line at Morantex. We have considerable experience with this
> subject. I can't suggest an answer as I write, but drop me an e-mail reminder
> and I will be happy to see if I can shed light on this; we have numerous
> libraries and in-house tools that allow us to examine this kind of issue in
> depth.
>
> The sizes you are using are very large; 2 TB is some 2000 GB. Can you possibly
> establish at what size you begin to see this error code? Does it happen at
> 10 GB? 50 GB? 100 GB? 500 GB?
>
> If you can tell me the smallest size that exhibits the issue, I will
> try to see if we encounter it at these large sizes.
>
> Hugh
>
> http://www.morantex.com

Hi Hugh,

Thanks very much for your offer of help!

From what we've seen, the files have to be fairly big, somewhere around 1984
GB. In our test app, the first CreateFileMapping call works, and the second
one fails, so I assume it's when it's close to 4 TB total that the problem
occurs. (Again, keep in mind we're not mapping the entire file into memory.)

> I now strongly suspect that this error code arises due to the exhaustion of
> OS heaps. Both the paged and non-paged heap play a role here; page tables are
> held within the paged heap, and with files of this size a considerable demand
> is placed upon the OS heaps.

The question is, why would simply creating a file mapping use so many
resources? There's no reason for so many page tables to be created. We're
not mapping the whole file. To me it really seems like there's some
fundamental design flaw in Windows (or in NTFS) in which a call to
CreateFileMapping chews up a ton of resources related to the file size.

> Your problem may be alleviated by increasing the size of your paging file or
> by upping the paged and non-paged pool quota sizes.
>
> By default the OS computes the sizes of these two regions at boot time;
> these sizes can be manually overridden by changing registry settings. In
> addition, the registry may also be used to adjust the per-process quota
> allowances for paged and non-paged pool; a process that attempts to use
> more than its quota will find that the allocation fails (inside the OS).

This would be a last resort solution for us, since it implies everyone
running our software would be required to make this registry change.

> You need to discover what the per-process quotas are for your system.
>
> This article will help you pursue this:
>
> http://support.microsoft.com/kb/177415

That's a good tip! We actually ran OSR's version of PoolMon (PoolTag.exe)
and we found that when CreateFileMapping() was called on a 1984 GB file, the
non-paged memory used by "MmSc" jumped from 784 to 7112000 bytes. The second
call to CreateFileMapping(), if it had succeeded, would presumably have
chewed up another 7 MB. The description for MmSc is "nt!mm - subsections
used to map data files" so this does seem to indicate that we are exhausting
system resources with our calls to CreateFileMapping(). This is unfortunate,
because it also means that CreateFileMapping() is poorly implemented by
Microsoft, making it essentially useless when dealing with very large files.
Perhaps this will be addressed in a future version of Windows, but for now it
looks like we're out of luck. (We'll likely implement a different solution,
presumably reading our entire index into memory when the file is first
opened.)

Thanks a lot for your help...

Alex

Alex Sheneal

Jun 16, 2008, 4:25:02 PM
Hi Hugh,

I just replied to your previous posts.

We're not trying to map a large section of these large files into memory.
Quite the opposite, we wish to only map a very small section, typically 6 MB.
So address space is not an issue. Rather, it's the implementation of
CreateFileMapping that is the problem. Even when you tell CreateFileMapping
that you'll be mapping a small amount into memory (via the dwMaximumSizeHigh
and dwMaximumSizeLow parameters), it still gets overwhelmed by the size of
the underlying file and dies. Pity.

Alex

Jochen Kalmbach [MVP]

Jun 16, 2008, 4:41:25 PM
Hi Alex!

> That's a good tip! We actually ran OSR's version of PoolMon (PoolTag.exe)
> and we found that when CreateFileMapping() was called on a 1984 GB file, the
> non-paged memory used by "MmSc" jumped from 784 to 7112000 bytes. The second
> call to CreateFileMapping(), if it had succeeded, would presumably have
> chewed up another 7 MB. The description for MmSc is "nt!mm - subsections
> used to map data files" so this does seem to indicate that we are exhausting
> system resources with our calls to CreateFileMapping(). This is unfortunate,
> because it also means that CreateFileMapping() is poorly implemented by
> Microsoft, making it essentially useless when dealing with very large files.
> Perhaps this will be addressed in a future version of Windows, but for now it
> looks like we're out of luck. (We'll likely implement a different solution,
> presumably reading our entire index into memory when the file is first
> opened.)

The problem is: it will only be fixed if you officially report a bug.
So my suggestion is: contact MS product support and open a bug...

--
Greetings
Jochen

My blog about Win32 and .NET
http://blog.kalmbachnet.de/

Alexander Grigoriev

Jun 16, 2008, 10:56:20 PM
Are these files local or remote?

"Alex Sheneal" <AlexS...@discussions.microsoft.com> wrote in message
news:AA3A180B-4207-497B...@microsoft.com...

Hugh Moran

Jun 17, 2008, 2:10:00 AM

"Alex Sheneal" wrote:

OK, I understand now. Well, this isn't necessarily a show-stopper. If you
examine the system's use of memory using PoolMon, you may be able to determine
the cause and make an adjustment to the registry fields that govern
per-process quotas for paged and non-paged pool.

With some luck a small tweak and a reboot may be all that you need to overcome
this.

Hugh


Alex Sheneal

Jun 17, 2008, 11:29:07 AM
They are local.

Alex Sheneal

Jun 17, 2008, 12:13:00 PM
"Hugh Moran" wrote:

Hi Hugh,

The problem is, any solution we come up with along those lines is still
going to be a band-aid solution. Nothing is going to get around the poor
implementation of CreateFileMapping() in the OS. Even if we doubled the
amount of non-paged quota, all it would take is a few of these HD video files
to be open at once, and we'd be stuck again.

The solution is to simply avoid using CreateFileMapping() altogether, at
least on large files. In fact my colleague has already re-written the code:
We now create a shared memory region (backed by the paging file) that is the
size of the index we wish to read. When we open the HD video file, we read this
index into that region, and we're done. This solution works great, with the
only downside being a small delay caused by reading the entire index when the
file is first opened. But in practice this delay is negligible.

Thanks again for all your help, in particular pointing us to PoolMon to
determine exactly what resources were being used.

Alex
