
Memory allocation failed during fsck of large EXT4 filesystem


Reiner Buehl
Jul 5, 2021, 4:30:04 AM
Hi all,

I have a corrupt EXT4 filesystem where fsck.ext4 fails with the error message:

Error storing directory block information (inode=366740508, block=0, num=406081): Memory allocation failed

/dev/vg_data/lv_mpg: ***** FILE SYSTEM WAS MODIFIED *****
e2fsck: aborted

/dev/vg_data/lv_mpg: ***** FILE SYSTEM WAS MODIFIED *****

The system has 4GB of memory and an 8GB swap partition. The filesystem is 7TB. Is there a quick way to enlarge the swap space to help fsck.ext4 finish the repair? I do not have any unused partitions, but I have space for swap on other filesystems if that is possible.
 

IL Ka
Jul 5, 2021, 5:50:05 AM
> Error storing directory block information (inode=366740508, block=0, num=406081): Memory allocation failed

Try the ``scratch_files`` stanza of e2fsck.conf(5):
"This stanza controls when e2fsck will attempt to use scratch files to reduce the need for memory."
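
For example (a sketch: the directory is just the path suggested in the man
page, and it must exist on a filesystem with a few GB of free space):

mkdir -p /var/cache/e2fsck

and in /etc/e2fsck.conf:

[scratch_files]
directory = /var/cache/e2fsck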

 
> The system has 4GB of memory and an 8GB swap partition. The filesystem is 7TB. Is there a quick way to enlarge the swap space to help fsck.ext4 finish the repair?
> I do not have any unused partitions, but I have space for swap on other filesystems if that is possible.

You can create swap on any free partition with mkswap and swapon, see the sketch below.
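
A minimal sketch, with /dev/sdXN standing in for an unused partition:

mkswap /dev/sdXN
swapon /dev/sdXN
# later, to stop using it:
swapoff /dev/sdXN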

7TB seems like too much for one partition imho.
Consider splitting it into smaller parts.
 
 

Thomas Schmitt
Jul 5, 2021, 6:00:05 AM
Hi,

Reiner Buehl wrote:
> Is there a quick way to enlarge the swap space

According to old memories of mine, you may create a large, non-sparse file
as you would do for a virtual disk, e.g. with mkfile (which seems not to be
in Debian) or qemu-img (from qemu-utils):

qemu-img create "$swap_file_path" 8G

Set ownership and access rights of "$swap_file_path" so that no
unprivileged users can spy.

Then you tell the system to use it for swapping:

swapon "$swap_file_path"
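
Note that swapon expects a swap signature on the file, so mkswap has to be
run in between. A complete minimal sequence without qemu-img, with path and
size only as examples:

dd if=/dev/zero of=/var/tmp/extra_swap bs=1M count=8192   # 8 GiB, fully allocated
chmod 600 /var/tmp/extra_swap
mkswap /var/tmp/extra_swap
swapon /var/tmp/extra_swap

# when no longer needed:
swapoff /var/tmp/extra_swap
rm /var/tmp/extra_swap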


> fsck.ext4 fails with the error message:
> Error storing directory block information (inode=366740508, block=0,
> num=406081): Memory allocation failed

According to
https://codesearch.debian.net/search?q=package%3Ae2fsprogs+Memory+allocation+failed
this message is emitted if error code EXT2_ET_NO_MEMORY was returned.
This error code indeed occurs if memory allocating system calls fail.
In these cases i would expect that more virtual memory could help.

----------------------------------------------------------------------------

But i see questionable occurrences of EXT2_ET_NO_MEMORY which get triggered
by bad data. In these cases no extra memory can help:

Halfway correct is its use to mark an insane request for a memory array
which would exceed the maximum number that can be stored in an unsigned
long integer variable.

There are possible misattributions of that error code if get_icount_el()
returns 0 to set_inode_count() because of bad data.
https://sources.debian.org/src/e2fsprogs/1.46.2-2/lib/ext2fs/icount.c/?hl=461#L461
https://sources.debian.org/src/e2fsprogs/1.46.2-2/lib/ext2fs/icount.c/?hl=388#L388
(in line 496 the same return value leads to ENOENT.)

In
https://sources.debian.org/src/e2fsprogs/1.46.2-2/lib/ext2fs/hashmap.c/?hl=33#L33
i see a potential memory fault by using the calloc(3) return without
checking it for NULL. (A caller of ext2fs_hashmap_create() would later
throw EXT2_ET_NO_MEMORY if the program has not crashed yet.)

Return value 0 from get_refcount_el() is converted to EXT2_ET_NO_MEMORY
in ea_refcount_increment(), although get_refcount_el() did not attempt to
allocate memory.

It remains a riddle from where e2fsprogs links sparse_file_new(). I find it
only as an Android C++ call. Whatever, if it fails, then EXT2_ET_NO_MEMORY
can be returned by its caller io_manager_configure(), which seems not
restricted to Android.


Have a nice day :)

Thomas

Reiner Buehl
Jul 5, 2021, 10:20:04 AM
It seems swap is not the solution: even after adding a 50G swap file, I still get the same error message, and the swap usage stats from collectd show that the maximum swap usage was no more than about 2G.

I will now try whether the scratch_files stanza makes a difference.

IL Ka
Jul 5, 2021, 10:40:04 AM
On Mon, Jul 5, 2021 at 5:17 PM Reiner Buehl <reiner...@gmail.com> wrote:
> It seems swap is not the solution: even after adding a 50G swap file, I still get the same error message, and the swap usage stats from collectd show that the maximum swap usage was no more than about 2G.


By the way, do you use a 32-bit or a 64-bit OS?

Stefan Monnier
Jul 5, 2021, 11:20:04 AM
Reiner Buehl [2021-07-05 10:21:13] wrote:
> Hi all,
> I have a corrupt EXT4 filesystem where fsck.ext4 fails with the error
> message:
>
> Error storing directory block information (inode=366740508, block=0,
> num=406081): Memory allocation failed
[...]
> The system has 4GB of memory and an 8GB swap partition. The filesystem is
> 7TB. Is there a quick way to enlarge the swap space to help fsck.ext4
> finish the repair? I do not have any unused partitions, but I have space
> for swap on other filesystems if that is possible.

I think you should report this as a bug in e2fsck. While 7TB is
significantly larger than the partitions I have, 8GB of swap should
still be plenty for that (my first 1TB disk was connected to a machine
with 64MB of RAM, an Asus WL-700gE, and fsck was slow but it worked), so
I suspect the problem is not actually a lack of memory.
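
On Debian the report could be filed against the e2fsprogs package with the
reportbug tool (assuming it is installed):

reportbug e2fsprogs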


Stefan

Marc Auslander
Jul 5, 2021, 12:40:05 PM
Are you sure it's not a ulimit issue? Does the ulimit command return
unlimited?
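
A quick check in the shell that launches fsck (ulimit is a shell builtin;
-v is the virtual memory limit, -d the data segment size):

ulimit -v
ulimit -d
ulimit -a   # all limits at once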

Michael Stone
Jul 5, 2021, 5:00:04 PM
On Mon, Jul 05, 2021 at 12:53:39PM +0300, IL Ka wrote:
>7TB seems like too much for one partition imho.
>Consider splitting it into smaller parts

That's silly. It's 2021; 7TB isn't particularly large and there's no
value in breaking things into multiple partitions for no reason.

Michael Stone
Jul 5, 2021, 5:30:04 PM
>Maybe to have the ability to restore or reinstall the system without
>bothering /home?

I've never found that to be of much practical value. YMMV. Anyway, if
you have reasons to partition, go for it--but partitioning because a
filesystem is arbitrarily "too big" isn't a reason. (Especially since,
in context, it's a logical volume so presumably the OP has already
decided how to allocate space and if they wanted a separate home they
could have just made an LV for it.)

Thomas D. Dean
Jul 5, 2021, 5:30:04 PM
On 7/5/21 1:54 PM, Michael Stone wrote:

Reiner Buehl
Jul 6, 2021, 3:20:07 AM
/ and /home are fine on the system. The data on the affected filesystem is a collection of data from different remote sites, so it could be restored but that would take a lot of time. That's why I would like to fix the filesystem so that I can then use more intelligent recovery methods that do not need to copy every file.

Thomas Schmitt
Jul 6, 2021, 7:30:05 AM
Hi,

Reiner Buehl wrote:
> I would like to fix the filesystem
> so that I can then use more intelligent recovery methods that do not need to
> copy every file.

Maybe the old workaround proposed by Ted Ts'o in
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=614082
would be worth a try.

Bug reporter:

"Error storing directory block information
(inode=169246423, block=0, num=3966024): Memory allocation failed"

Ted Ts'o:

"The way to work around this problem should it occur again is to note
the inode number, and then zap it using debugfs:
debugfs -w /dev/md1
debugfs: clri <169246423>
debugfs: quit
Then e2fsck should be able to complete its work."

If possible i would make any experiments on a plain copy of the filesystem.
I.e. a copy of the bytes which can be read from the partition device.
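
Adapted to the device and inode number from the first message of this
thread, that would be roughly (clri zeroes the inode, so e2fsck afterwards
has to clean up whatever referenced it):

debugfs -w /dev/vg_data/lv_mpg
debugfs: clri <366740508>
debugfs: quit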

-------------------------------------------------------------------------
Far fetched approach:

If this does not help and i deem it worth the effort, then i'd get the
newest source of e2fsprogs and try to build it. Then i'd run the new
fsck.ext4 program on a copy of the filesystem. If it still fails with
"Memory allocation failed", then i'd try to let the program print a message
at each place where EXT2_ET_NO_MEMORY occurs, saying which one was
triggered.

This point in the source and, if possible, a stack trace would be hands-on
information for the developers of e2fsprogs.

I'm not sure how good an idea it is to run fsck under gdb to get a stack
trace.
There are glibc functions backtrace(3) and backtrace_symbols_fd(3).
https://www.gnu.org/software/libc/manual/html_node/Backtraces.html
Their usage is shown in the handler function of
https://stackoverflow.com/questions/77005/how-to-automatically-generate-a-stacktrace-when-my-program-crashes
In your case no signal would be received, of course. You'd call the trace
printer from the various EXT2_ET_NO_MEMORY occasions.

As said: far fetched and open ended ...

IL Ka
Jul 6, 2021, 7:40:04 AM

> I use a 32bit OS

A 32-bit OS cannot give a single process more than 4GB of address space, and a 32-bit application usually gets even less (about 3GB of user space on Linux). This is why the 50GB swap file didn't help.
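
To check which one is actually running (kernel versus Debian userland
architecture):

uname -m
dpkg --print-architecture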



Andy Smith
Jul 6, 2021, 9:20:04 AM
Hello,

On Tue, Jul 06, 2021 at 02:34:30PM +0300, IL Ka wrote:
> > I use a 32bit OS

Is the hardware capable of 64-bit? If so, then it should be possible
to install an amd64 kernel and e2fsprogs without completely
converting your system to amd64.

https://wiki.debian.org/CrossGrading

(Stop after booting into the new kernel and then install
e2fsprogs:amd64)

You should then be able to fsck larger ext* filesystems.
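
Roughly, following the wiki page (only a sketch; the exact steps depend on
the release and the state of the system):

dpkg --add-architecture amd64
apt update
apt install linux-image-amd64    # 64-bit kernel, installable on an i386 system
# reboot into the new kernel, then:
apt install e2fsprogs:amd64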

There is also the "crossgrader":

https://packages.debian.org/bullseye/crossgrader

(though it is only in testing and unstable, it is intended to work on
stable as well)

You may or may not find that simpler.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting