
>2Gb files


david....@ogt.co.uk

Apr 10, 2001, 1:09:40 PM
Hi, I'm looking for hints on using > 2GB files....

I'm using a 2.4.3 kernel-image compiled from debian kernel-source
package. After installing the corresponding kernel-headers deb I
created, I have recompiled glibc (2.2.2-4) with a
#apt-get --build source glibc
then installed the created debs...
#dpkg -i ./libc6-dev_2.2.2-4_i386.deb ./libc6_2.2.2-4_i386.deb ./locales_2.2.2-4_all.deb
I then built fileutils,
#apt-get -t testing --build source fileutils
and install them,
#dpkg -i ./fileutils_4.0.43-1_i386.deb

Unfortunately a
#/bin/dd if=/dev/zero of=file2 bs=1024 count=3553600
gives me a
"File size limit exceeded"

I've tried this on ext2 and reiserfs, rebuilt and installed their
respective progs packages from source, and recreated the filesystems,
but I still hit the 2GB limit.

Any ideas? What should I be checking?
thanks,
david


--
To UNSUBSCRIBE, email to debian-us...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listm...@lists.debian.org


Mario Olimpio de Menezes

Apr 11, 2001, 7:13:29 AM

Hi,
	Maybe you've already received a better suggestion, but anyway I'll
offer my $0.02.
	When you recompiled libc, where does the compiler look for the
kernel headers?
	I mean: it might be the case that gcc is using
/usr/include/linux (2.2.18) for the kernel headers instead of your
/usr/src/linux/include/linux (2.4.3).
	So I think that to enable >2GB files you should copy everything
from /usr/src/linux/include/linux to /usr/include/linux and then try to
recompile glibc.
	Just my $0.02

[]s
Mario O.de Menezes "Many are the plans in a man's heart, but
IPEN-CNEN/SP is the Lord's purpose that prevails"
http://curiango.ipen.br/~mario Prov. 19.21

david....@ogt.co.uk

Apr 11, 2001, 11:32:30 AM
On Wed, Apr 11, 2001 at 08:13:11AM -0300, Mario Olimpio de Menezes wrote:
> Maybe you've already received a better suggestion, but anyway I'll offer
> my $0.02.
Appreciated!

> When you recompiled libc, where does the compiler look for the
> kernel headers?
Well the debian glibc packaging is quite cunning: it looks for a
directory like /usr/src/kernel-header (installed by a kernel-headers
package, which can be created at the same time as your kernel-image
using make-kpkg) and uses that (or stops if there is more than one such
directory), or, if a LINUX_SOURCE variable is set, uses that. This seems
to be passed to gcc via an -isystem flag (for prepending to the system
include path, I think). So I was pretty sure it was looking in the
right place.

> I mean: it might be the case that gcc is using
> /usr/include/linux (2.2.18) for the kernel headers instead of your
> /usr/src/linux/include/linux (2.4.3)
To make sure this wasn't the case I did a
#dpkg --force-depends -r libc6-dev
so that there was no /usr/include/linux or /usr/include/asm .
I then rebuilt and reinstalled libc6 and libc6-dev debs (no problems).

> So I think that to enable >2GB files you should copy everything
> from /usr/src/linux/include/linux to /usr/include/linux and then try to
> recompile glibc.
Just in case the libc6-dev package I'd created had done something evil
to the header files, I linked include/linux and asm directly to the
corresponding kernel header directories, then rebuilt and installed
fileutils (which includes dd). But still no luck creating >2GB files.
> Just my $0.02
cheers,
david

Karsten M. Self

Apr 11, 2001, 3:55:29 PM


on Tue, Apr 10, 2001 at 06:06:36PM +0100, david....@ogt.co.uk (david.jack...@ogt.co.uk) wrote:
> Hi, I'm looking for hints on using > 2GB files....
>
> I'm using a 2.4.3 kernel-image compiled from debian kernel-source
> package. After installing the corresponding kernel-headers deb I
> created, I have recompiled glibc (2.2.2-4) with a
> #apt-get --build source glibc
> then installed the created debs...
> #dpkg -i ./libc6-dev_2.2.2-4_i386.deb ./libc6_2.2.2-4_i386.deb ./locales_2.2.2-4_all.deb
> I then build fileutils,
> #apt-get -t testing --build source fileutils
> and install them,
> #dpkg -i ./fileutils_4.0.43-1_i386.deb
>
> Unfortunately a
> #/bin/dd if=/dev/zero of=file2 bs=1024 count=3553600
> gives me a
> "File size limit exceeded"
>
> I've tried this in ext2fs and reiserfs and have tried a --build
> source and install of their respective progs packages, recreation
> of filesystems but I still have the 2GB limit.
>
> Any ideas? What I should be checking?

The following is pulled from VA's Knowledgebase. This is currently
offline, but kick me periodically and I'll post a URL when it's
available.

The short answer is that there is no easy fix for large file support
under GNU/Linux. As is often the case, I've no direct experience
attempting to implement any of this and would appreciate informed
opinion on either side of the debate.

------------------------------------------------------------------------
Copyright (c) VA Linux, Inc.
------------------------------------------------------------------------

Q: Is there any way around the 2GB file-size limit in Linux? Are there
any stable patches to fix it?

A: Short answer: In a practical sense, no. The 2GB limit is deeply
embedded in the versions of Linux for 32-bit CPUs: in GNU libc, in the
Linux kernel's filesystem drivers and VFS layer, in the fundamental
design of some filesystems, and in the function calls used in the
compiled applications and utilities furnished in (and for) typical Linux
systems. There is thus currently no VA-supported configuration. Long
answer: All Linux ports for 32-bit CPUs (i.e., all architectures other
than Alpha and IA-64) use 32-bit integers for file access and locking,
yielding a maximum size of 2^31 - 1 = 2GB. Creating a workaround for
this situation without breaking existing code is an obstacle for all
32-bit Unixes[1], whose creators met to design a standard 64-bit
file-access programming interface, at a meeting sponsored by X/Open,
called the Large File Summit (LFS)[2]. The resulting large-file
standard and related system calls are likewise called LFS.

To implement LFS (and remove the 2GB limit) on any of the Linux system
loads VA Linux has used as of March 2001, you would have to:

1. Install glibc v. 2.2 or later, compiled against kernel headers
from kernel 2.4.0test7 or later. At this date, all software in
VA's distributions is compiled using glibc 2.1.x and kernel
2.2.x. glibc 2.2 is considered experimental.

2. Install kernel 2.4.0test7 or later, or recent 2.2.x kernels with
unofficial LFS patches, such as VA Linux's 2.2.18 kernel
(ftp://ftp.valinux.com/pub/kernel/2.2.18pre11-va2.0/), or Red
Hat Software's "enterprise" kernel. Earlier versions omitted
the necessary routines in the middle kernel layers (e.g., VFS)
and improvements to the filesystem drivers. A number of build
problems have been reported with experimental kernel 2.2.x LFS
efforts, including incompatibility with the much-requested NFS
v. 3 and ext3 patches -- but VA Linux's 2.2.18 kernel
nonetheless does include LFS, ext3, LVM, and David Higgen and
H.J. Lu's NFSv3 patches. (That kernel, however, may or may not
include the necessary filesystem-driver improvements discussed
below. That aspect of the code is untested.)

3. Use suitable filesystems. These include ext2, ReiserFS v. 3.6.x
and up (available as a third-party patch), IBM's JFS, SGI's XFS,
ext3 (a driver for which, it should be noted, is not yet
included in kernel 2.4.0), and very recent NFSv3 client drivers
(http://nfs.sourceforge.net/). Problematic filesystems include
NFS v. 2, earlier ReiserFS, AFS, Coda, InterMezzo, Minix, UFS,
SCO/SysV, msdos/umsdos/vfat, smbfs, and NCPfs.

The new maximum size of an individual file is 8 terabytes on
ext2 or ReiserFS filesystems, on LFS-compatible hosts. For
other suitable filesystems, a different limit applies in each,
but reported values are all quite large.

4. Revise and recompile all programs and utilities that will
manipulate large files, to use 64-bit file-access and
file-locking system calls, instead of 32-bit ones. Since you
will not have access to source code for proprietary applications
you may use (such as Oracle), this fix may be effectively
impossible for them, and very daunting to carry out on the large
number of relevant open-source userspace programs.

Please be aware that, even if you implement these fixes without
error, and even if the new 64-bit functions work perfectly, you are
likely to encounter performance losses and greater memory usage,
when using the revised and recompiled code.

Since implementing the above-described LFS regime isn't very
practical in most cases, sites having problems with excessively
large files should generally find other ways around the problem:
Large ongoing records can be written to multiple files branched off
periodically (e.g., daily logfiles), or can be back-ended into one
of the SQL databases, such as Oracle, that handle (work around)
filesystem limits within their table-storage mechanisms.

[1] I have heard it alleged that some 32-bit Unixes implemented fixes
different from those recommended by the Large File Summit, in some cases
earlier.

[2] The X/Open Large File Summit's standards are documented here:
http://ftp.sas.com/standards/large.file/

------------------------------------------------------------------------

-- 
Karsten M. Self <kms...@ix.netcom.com> http://kmself.home.netcom.com/
What part of "Gestalt" don't you understand? There is no K5 cabal
http://gestalt-system.sourceforge.net/ http://www.kuro5hin.org

