
Bug#990560: Error message "Value too large for defined data type"


Bernhard
Jul 2, 2021, 2:50:03 AM

Package: subversion
Version: 1.14.1-3
Severity: important

Hello,

I updated my server to Debian 11.
The hardware is a Banana Pi M3 with an Allwinner A83T processor: https://linux-sunxi.org/A83T

The repository was set up completely fresh with "dump" and "load" using this version 1.14.1-3.
The filesystem is XFS.
I run the server with an FSFS repository and svnserve.

Now I get this error message on the client after "svn commit":

<-- Snip -->
> Transmitting file data ...done
> Committing transaction...
> Committed revision 2338.
>
> Warnung: post commit FS processing had error:
> Can't read directory '/storage/subversion/svn/db/transactions/2337-1sx.txn': Value too large for defined data type
>

On the server side, I get this message:

> $ svnadmin lstxns /storage/subversion/svn
> svnadmin: E000075: Can't read directory '/storage/subversion/svn/db/transactions': Value too large for defined data type
>

This is the first time the error message appeared, after 4 successful commits.

The commit itself was successful and there is no data loss,
but the error message appears nonetheless.

Best regards
Bernhard


James McCoy
Jul 12, 2021, 11:30:04 PM

On Fri, Jul 02, 2021 at 06:44:13AM +0000, Bernhard wrote:
> On the server side, I get this message:
>
> > $ svnadmin lstxns /storage/subversion/svn
> > svnadmin: E000075: Can't read directory '/storage/subversion/svn/db/transactions': Value too large for defined data type
> >

Looking at APR's source, it looks like this is coming from a call to
readdir_r, but APR should be using readdir.
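
(For reference: as far as I can tell, svn's E000075 prefix just wraps the OS
errno, and on Linux/armhf errno 75 is EOVERFLOW, whose strerror() text is
exactly the message above. A standalone check, nothing APR-specific:

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    /* On armhf (asm-generic errno values) EOVERFLOW is 75; other
       architectures may use a different number for the same error. */
    printf("EOVERFLOW = %d: %s\n", EOVERFLOW, strerror(EOVERFLOW));
    return 0;
}

This prints "EOVERFLOW = 75: Value too large for defined data type".)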

Could you run "strace -o lstxns.log -e trace=file svnadmin lstxns
/storage/subversion/svn" and attach lstxns.log?

Cheers,
--
James
GPG Key: 4096R/91BF BF4D 6956 BD5D F7B7 2D23 DFE6 91AE 331B A3DB

Bernhard
Jul 13, 2021, 10:00:03 AM

Hello James

Thanks for working on this topic.
The log file is attached.

Interestingly, on x86 (x86_64 and i386) there are no such problems;
the problem occurs only on the armhf architecture.

Best regards
Bernhard
[Attachment: lstxns.log]

James McCoy
Jul 15, 2021, 12:30:04 AM

On Tue, Jul 13, 2021 at 01:54:20PM +0000, Bernhard wrote:
> Hello James
>
> Thanks for working on this topic.
> The log file is attached.

Guess that was too limiting. Can you run it again without "-e trace=file"?

> Interestingly, on x86 (x86_64 and i386) there are no such problems;
> the problem occurs only on the armhf architecture.

Thanks for the clarification on the arch. That is likely relevant.

Bernhard
Jul 15, 2021, 6:50:04 AM

Hello James

The log is attached.

Best regards
Bernhard
[Attachment: lstxns.log]

Helge Deller
Dec 13, 2022, 1:20:04 PM

tag: hppa, lfs, patch

This bug usually indicates that a 32-bit application uses
functions like readdir() which (by default on a 32-bit platform)
can only handle 32-bit values for inode numbers.
You can overcome the issue by recompiling the code with
"-D_FILE_OFFSET_BITS=64" on the gcc command line.

In this specific case I suggest adding the "future=+lfs" option
to debian/rules like this (copy/pasted here; it may not apply cleanly, but you get the idea):

--- debian/rules.org 2022-12-13 17:56:43.086922122 +0000
+++ debian/rules 2022-12-13 17:30:09.631676544 +0000
@@ -14,9 +14,9 @@

 # Workaround an issue with PIC/PIE on certain architectures (c.f., #942798)
 ifneq (,$(filter x32,$(DEB_HOST_ARCH)))
-  DEB_BUILD_MAINT_OPTIONS=hardening=+all,-pie
+  DEB_BUILD_MAINT_OPTIONS=hardening=+all,-pie future=+lfs
 else
-  DEB_BUILD_MAINT_OPTIONS=hardening=+all
+  DEB_BUILD_MAINT_OPTIONS=hardening=+all future=+lfs
 endif

This option enables large file support (LFS) for the whole build; via
dpkg's build-flag machinery it adds "-D_FILE_OFFSET_BITS=64" to the
compiler flags. On 64-bit platforms the above future flag is a no-op.

By the way, I think you couldn't reproduce the issue on i386 because
you probably didn't use a huge hard disc there. The issue only arises
when the searched/read file happens to sit above the 32-bit boundary on disc.

Dear subversion maintainers:
Please add the future option.

(I noticed this bug too, because the "git" package failed to build for me:
it uses subversion in its tests, and subversion there ran into
this "Value too large" problem.)

Helge

James McCoy
Dec 16, 2022, 5:30:03 PM

On Tue, Dec 13, 2022 at 07:08:29PM +0100, Helge Deller wrote:
> tag: hppa, lfs, patch
>
> This bug usually indicates that a 32-bit application uses
> functions like readdir() which (by default on a 32-bit platform)
> can only handle 32-bit values for inode numbers.
> You can overcome the issue by recompiling the code with
> "-D_FILE_OFFSET_BITS=64" on the gcc command line.

Thanks for the investigation. Subversion is using libapr to perform the
directory listing, which builds with -D_LARGEFILE64_SUPPORT but not
-D_FILE_OFFSET_BITS=64.

Subversion itself also builds with -D_LARGEFILE64_SUPPORT and (for the
Perl bindings) -D_FILE_OFFSET_BITS=64. It should probably be consistent
about that, which your suggestion would enforce.
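
If I read the glibc semantics right (generic behaviour, not APR's actual
code), support of the -D_LARGEFILE64_SUPPORT kind (glibc's
_LARGEFILE64_SOURCE) only adds explicit 64-bit variants next to the
32-bit defaults, while -D_FILE_OFFSET_BITS=64 transparently makes the
plain names 64-bit capable. Roughly:

#define _LARGEFILE64_SOURCE   /* exposes readdir64/struct dirent64 next to readdir */
#include <stdio.h>
#include <dirent.h>

int main(void)
{
    DIR *dir = opendir(".");
    if (dir == NULL)
        return 1;

    /* The explicit 64-bit variant has a 64-bit d_ino and cannot hit
       EOVERFLOW; plain readdir() in the same program still can. */
    struct dirent64 *entry;
    while ((entry = readdir64(dir)) != NULL)
        printf("%llu %s\n", (unsigned long long)entry->d_ino, entry->d_name);

    closedir(dir);
    return 0;
}

So code that sticks to the plain names only becomes safe with
-D_FILE_OFFSET_BITS=64.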

> In this specific case I suggest adding the "future=+lfs" option
> to debian/rules like this (copy/pasted here; it may not apply cleanly, but you get the idea):

I'll need to double check whether this affects the ABI of subversion's
libraries. Hopefully not, since it tends to defer to APR for OS-specific
things.

However, I'm not sure changing subversion's build alone will address the
problem. APR may need a similar change.

Helge Deller
Dec 17, 2022, 5:10:04 AM

On 12/16/22 23:19, James McCoy wrote:
> However, I'm not sure changing subversion's build alone will address the
> problem. APR may need a similar change.

I'll look into apr as well.
Note that #1026235 is relevant too: it changed the behaviour of
some calls, which is why this problem pops up now.
In any case, it would be great (and correct) to build subversion with +lfs.

Thanks!
Helge

Helge Deller
Feb 10, 2023, 10:50:04 AM

The "devscripts" package FTBFS too because subversion isn't compiled with LFS support:
https://buildd.debian.org/status/fetch.php?pkg=devscripts&arch=hppa&ver=2.23.1&stamp=1676035013&raw=0
The log shows:
svnadmin: E000072: Repository creation failed
svnadmin: E000072: Could not create top-level directory
svnadmin: E000072: Can't check directory '.': Value too large for defined data type

Please add this patch:
--- debian/rules.org 2022-12-13 17:56:43.086922122 +0000
+++ debian/rules 2022-12-13 17:30:09.631676544 +0000
@@ -14,9 +14,9 @@

 # Workaround an issue with PIC/PIE on certain architectures (c.f., #942798)
 ifneq (,$(filter x32,$(DEB_HOST_ARCH)))
-  DEB_BUILD_MAINT_OPTIONS=hardening=+all,-pie
+  DEB_BUILD_MAINT_OPTIONS=hardening=+all,-pie future=+lfs
 else
-  DEB_BUILD_MAINT_OPTIONS=hardening=+all
+  DEB_BUILD_MAINT_OPTIONS=hardening=+all future=+lfs
 endif

Bernhard
Mar 27, 2023, 1:20:04 AM

Hello Helge

Thanks for investigating this topic.

My very first test on i386 was on an Acer netbook with a 64 GB SSD.
The failure did not show up there.

If I understand it right, this failure doesn't happen with a small
partition (< 1 TB, since 2^31 * 512 bytes = 1 TiB).
Can you please confirm?

But this failure does happen on my Banana Pi with a 4 TB hard drive.

So a small partition can be a workaround until subversion/apr is compiled with LFS support. Correct?

Best regards and thanks for support
Bernhard

Helge Deller
Mar 27, 2023, 4:50:06 AM

On 3/27/23 07:13, Bernhard wrote:
> My very first test on i386 was on an Acer netbook with a 64 GB SSD.
> The failure did not show up there.
>
> If I understand it right, this failure doesn't happen with a small
> partition (< 1 TB, since 2^31 * 512 bytes = 1 TiB).
> Can you please confirm?
>
> But this failure does happen on my Banana Pi with a 4 TB hard drive.
>
> So a small partition can be a workaround until subversion/apr is compiled with LFS support. Correct?

The limiting factor is how many inodes a filesystem allows.
This depends on the "inode size" and can be specified when formatting the filessystem.
32-bit applications can only address 2^32-1 inodes, which is ~ 4 million.

Run "df -i /your_filesystem",
e.g. on my 300GB disc I see:
Filesystem          Inodes   IUsed    IFree IUse% Mounted on
/dev/mapper/xxxxx 26091520 2247818 23843702    9% /home

See the "Inodes" field. In this example it's ~26 million inodes, so the
chance that a file is stored above the 4th million inode is pretty high,
in which case the 32-bit application may fail.

Another option is to use the XFS filesystem, which tries to work around that
problem (see its inode32 mount option)....

Helge

Bernhard
Mar 29, 2023, 11:30:05 AM

Hello Helge

You wrote:

>>>>>
The limiting factor is how many inodes a filesystem allows.
This depends on the "inode size" and can be specified when formatting the filessystem.
32-bit applications can only address 2^32-1 inodes, which is ~ 4 million.
<<<<<

2^32 is ~4 billion.
Why is ~4 million a limiting factor?
Mistake?

>>>>>
Another option is to use the XFS filesystem, which tries to work around that
problem (see its inode32 mount option)....
<<<<<

I use the XFS filesystem on the 4 TB hard drive, which was formatted under the 32-bit OS.
This is the output for that ~4 TB XFS filesystem:

> Filesystem    Inodes IUsed     IFree IUse% Mounted on
> /dev/sda1  390701632  9608 390692024    1% /storage

Thank you for your support and answering my questions.
Bernhard


Helge Deller
Mar 29, 2023, 3:40:04 PM

Hello Bernhard,

On 3/29/23 17:17, Bernhard wrote:
> The limiting factor is how many inodes a filesystem allows.
> This depends on the "inode size" and can be specified when formatting the filesystem.
> 32-bit applications can only address 2^32-1 inodes, which is ~ 4 million.
> <<<<<
>
> 2^32 is ~4 billion.

:-)
To be specific, citing from https://en.m.wikipedia.org/wiki/4,294,967,295:
"In computing, 4,294,967,295 is the highest unsigned (that is, not negative) 32-bit integer,
which makes it the highest possible number a 32-bit system can store in memory."

> Why is ~4 million a limiting factor?

The errors you see:
  Can't read directory '/storage/subversion/svn/db/transactions/2337-1sx.txn': Value too large for defined data type

can have (at least) 2 reasons:
a) The inode number of that directory is bigger than 4,294,967,295 and as such doesn't fit into glibc's 32-bit ino_t type.
b) The date of the directory is beyond year ~2038 and doesn't fit into glibc's 32-bit time_t type.
A small checker for case a) follows below.
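
Here is that sketch for case a); compile it with -D_FILE_OFFSET_BITS=64 so
that stat() itself cannot be the call that overflows, and pass it any path
you want to inspect:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat sb;
    if (argc < 2 || stat(argv[1], &sb) != 0) {
        perror("stat");
        return 1;
    }
    /* An inode above 4,294,967,295 is what makes a plain 32-bit
       readdir()/stat() fail with EOVERFLOW. */
    printf("%s: inode %" PRIuMAX " (%s 32-bit ino_t)\n", argv[1],
           (uintmax_t)sb.st_ino,
           sb.st_ino > UINT32_MAX ? "overflows" : "fits into");
    return 0;
}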

glibc checks in various functions whether a value is bigger than what its
32-bit variable can hold; see e.g. sysdeps/unix/sysv/linux/getdents64.c:
ssize_t
__old_getdents64 (int fd, char *buf, size_t nbytes)
{
  ...
      /* Copy out the fixed-size data.  */
      __ino_t ino = source->d_ino;
      __off64_t offset = source->d_off;
      unsigned int reclen = source->d_reclen;
      unsigned char type = source->d_type;

      /* Check for ino_t overflow.  */
      if (__glibc_unlikely (ino != source->d_ino))
        return handle_overflow (fd, previous_offset, p - buf);

Here ino (ino_t) is 32-bit while source->d_ino is a 64-bit variable.
If the value doesn't fit, your application receives an EOVERFLOW error
from the function you called.
The same is done for the time_t type.
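
The effect of the recommended flag itself is easy to see with a trivial
program compiled twice on a 32-bit system:

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* gcc sizes.c                         -> sizeof(ino_t)/sizeof(off_t) == 4
       gcc -D_FILE_OFFSET_BITS=64 sizes.c  -> sizeof(ino_t)/sizeof(off_t) == 8 */
    printf("sizeof(ino_t) = %zu\n", sizeof(ino_t));
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    return 0;
}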

> Another option is to use the XFS filesystem, which tries to work around that
> problem (see its inode32 mount option)....
> <<<<<
>
> I use the XFS filesystem on the 4 TB hard drive, which was formatted under the 32-bit OS.
> This is the output for that ~4 TB XFS filesystem:
>
>> Filesystem    Inodes IUsed     IFree IUse% Mounted on
>> /dev/sda1  390701632  9608 390692024    1% /storage

I mixed up inodes and blocks in my last mail.
https://adil.medium.com/ext4-filesystem-data-blocks-super-blocks-inode-structure-1afb95c8e4ab
What I wanted to express is that the inode number can be bigger than
4,294,967,295, in which case the application receives an overflow error code.
This can happen more easily on bigger drives with many files.
Filesystems differ; I think XFS has some workarounds to cope better
with 32-bit apps than ext3/ext4.

Helge