
cnfs file size > 2Gb problem


bill

Sep 4, 2002, 4:42:25 AM
Hi,

I am using inn-2.3.3 on RedHat 7.3.

cycbuff.conf
=======
cycbuff:BUFF:/usr/local/news/spool/buff:2200000
metacycbuff:BIGAREA:BUFF

storage.conf
=======
method cnfs {
class: 0
newsgroups: alt.*
options: BIGAREA
}

method tradspool {
class: 1
newsgroups: *
}

When I start the server, I got the error from news.notice:
Sep 4 16:13:00 ccz404 innd: news opened news:16:file
Sep 4 16:13:00 ccz404 innd: CNFS-sm: file '/usr/local/news/spool/buff' :
Value too large for defined data type, ignoring 'BUFF' cycbuff
Sep 4 16:13:00 ccz404 innd: SM storage method 'cnfs' failed initialization
Sep 4 16:13:01 ccz404 innd: SM one or more storage methods failed
initialization

When I reduce the file size to 2000000, everything works fine.
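(Doing the arithmetic: cycbuff sizes in cycbuff.conf are in kilobytes, so 2200000 KB is 2,252,800,000 bytes, just over the 2^31-byte limit of a 32-bit off_t, while 2000000 KB is 2,048,000,000 bytes and squeezes under it.)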

From the documentation (README), Linux has a limitation of cycbuffs < 2GB.
However, RedHat 7.3's 2.4 kernel should overcome the file size problem. I read
on some site that it may be a problem with the libc library. Any idea? Is
the 2GB cycbuff size limitation still valid for RedHat 7.3? Any help would
be appreciated.

Thanks.

Bill

Jeffrey M. Vinocur

Sep 4, 2002, 6:28:20 AM
In article <al4h1g$gkl$1...@news.ust.hk>, bill <ccbil...@ust.hk> wrote:
>
>From the documentation (README), Linux has a limitation of cycbuffs < 2GB.
>However, RedHat 7.3's 2.4 kernel should overcome the file size problem. I read
>on some site that it may be a problem with the libc library. Any idea? Is
>the 2GB cycbuff size limitation still valid for RedHat 7.3?

Did you compile with large file support?

It doesn't really matter though -- you can get the same effect by
making two 1 GB cycbuffs and combining them into the same
metacycbuff.
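In cycbuff.conf that would look roughly like this (names, paths and the KB sizes are just an illustration; each buffer stays safely under the 2 GB limit):

cycbuff:B1:/usr/local/news/spool/buff1:1000000
cycbuff:B2:/usr/local/news/spool/buff2:1000000
metacycbuff:BIGAREA:B1,B2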


--
Jeffrey M. Vinocur
je...@litech.org

Miquel van Smoorenburg

Sep 4, 2002, 6:57:14 AM
In article <al4h1g$gkl$1...@news.ust.hk>, bill <ccbil...@ust.hk> wrote:
>From the documentation (README), Linux has a limitation of cycbuffs < 2GB.
>However, RedHat 7.3's 2.4 kernel should overcome the file size problem. I read
>on some site that it may be a problem with the libc library. Any idea? Is
>the 2GB cycbuff size limitation still valid for RedHat 7.3? Any help would
>be appreciated.

With glibc 2.2 and up, and a 2.4 kernel, LFS support works with INN.
However, the last time I tried it on our transit server, using a
2.4.16 or so kernel, the kernel fell flat on its face with VM problems.

The history file indexes became quite a bit bigger because of the
64-bit file offsets, and somehow that triggered a VM imbalance.
The machine had 1 GB of memory, 500 MB of it free, and yet it went > 300 MB
into swap as well. It thrashed itself to death.

It may well be that later kernels solve this problem (esp. the
latest 2.4.20-pre kernels, which have some VM fixes applied), or that
a RedHat kernel doesn't exhibit the problem at all, since custom
RedHat kernels have a different VM (the latest use rmap, I think).

I'm now toying with the idea of hacking storage/cnfs/cnfs.c so
that I can compile it with -D_LARGEFILE_SOURCE but NOT with
-D_FILE_OFFSET_BITS=64, i.e. adding support for mmap64(),
pread64(), stat64() and so on so that INN can be compiled
without LFS yet cnfs.c knows about files > 2GB. Seems to be
the best solution for now.

Mike.

bill

Sep 4, 2002, 7:23:52 AM
Thanks. I made a big mistake: I didn't compile it with large file
support. It now works fine.
I didn't want to use two 1GB cycbuffs because I have to create a large
spool, 100 GB, so it would be tedious to create 100 cycbuffs...
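For the record, the recompile with large file support boils down to passing the getconf LFS flags to configure; roughly like this (just a sketch - the prefix here is only INN's default, and the exact flags depend on your glibc):

CFLAGS="$(getconf LFS_CFLAGS)" LDFLAGS="$(getconf LFS_LDFLAGS)" \
    ./configure --prefix=/usr/local/news
make
make install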

Thanks again for your kind help.

Bill

"Jeffrey M. Vinocur" <je...@litech.org> wrote in message
news:al4n84$gh8$2...@puck.litech.org...

bill

Sep 4, 2002, 7:26:14 AM
Btw, after compiling with large file support, it should also be no problem if
the history file grows > 2 GB, right?
Does anyone know how fast the history and the respective index file
grow?

Thanks.

Bill

"bill" <ccbil...@ust.hk> wrote in message news:al4qg7$jou$1...@news.ust.hk...

Thomas

Sep 4, 2002, 11:42:07 AM
bill wrote:

> I didn't want to use two 1GB cycbuffs because I have to create a large
> spool, 100 GB, so it would be tedious to create 100 cycbuffs...

If you have readers, that would also eat into your open-files limit, as every nnrpd
opens all cycbuffs when it starts (strange but true).

BTW, for me it works with device files as cycbuffs. Saves on file system
overhead too.
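In cycbuff.conf that just means putting the device node where the file path would go, for example (the device name and KB size here are made up, and the news user needs read/write access to the device):

cycbuff:RAW1:/dev/sdb1:17000000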


Thomas

Russ Allbery

Sep 4, 2002, 12:13:14 PM
bill <ccbil...@ust.hk> writes:

> Btw, after compiling with large file support, it should also be no problem
> if the history file grows > 2 GB, right?

Right.

> Does anyone know how fast the history and the respective index file
> grow?

It depends very heavily on what the traffic load on your server is. It
takes something in the rough vicinity of 120 bytes per article on your
spool, maybe half that for articles that aren't on your spool but haven't
passed /remember/ yet.
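To put purely illustrative numbers on that (the traffic figures are invented): at 100,000 articles a day, 120 bytes each works out to about 12 MB of history growth a day, or roughly 360 MB a month; at a million articles a day it's closer to 120 MB a day. How long that accumulates before expire trims it depends on your expire.ctl settings, including /remember/.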

--
Russ Allbery (r...@stanford.edu) <http://www.eyrie.org/~eagle/>

Please post questions rather than mailing me directly.
<http://www.eyrie.org/~eagle/faqs/questions.html> explains why.

M. Buchenrieder

Sep 4, 2002, 2:29:34 PM
"bill" <ccbil...@ust.hk> writes:


[...]

>When I start the server, I got the error from news.notice:
>Sep 4 16:13:00 ccz404 innd: news opened news:16:file
>Sep 4 16:13:00 ccz404 innd: CNFS-sm: file '/usr/local/news/spool/buff' :
>Value too large for defined data type, ignoring 'BUFF' cycbuff
>Sep 4 16:13:00 ccz404 innd: SM storage method 'cnfs' failed initialization
>Sep 4 16:13:01 ccz404 innd: SM one or more storage methods failed
>initialization

>When I reduce the file size to 2000000, everything works fine.

You have hit the underlying operating system's filesize limit.

>From the documentation (README), Linux has a limitation of
>cycbuffs < 2GB.

Actually, the problem is much deeper than just the filesystem
type used.

>However, RedHat 7.3's 2.4 kernel should overcome the file size problem.

It does. Unfortunately, all the other programs and components of the
OS and its tools have to be rewritten to fully support >2GB
files, regardless of the fact that the kernel has overcome that
limit.

>I read
>on some site that it may be a problem with the libc library. Any
>idea?

AFAIR, the libc/glibc has already been updated accordingly, but
it might depend on the version your system is presently using.

>Is
>the 2GB cycbuff size limitation still valid for RedHat 7.3?

[...]

You might be better off using another OS if you really need
support for files >2GB at this time. OTOH, you might want
to ask that in comp.os.linux.setup - or look up the actual
status of LFS on Google.


Michael
--
Michael Buchenrieder * mi...@scrum.greenie.muc.de * http://www.muc.de/~mibu
Lumber Cartel Unit #456 (TINLC) & Official Netscum
Note: If you want me to send you email, don't munge your address.

Miquel van Smoorenburg

Sep 4, 2002, 5:12:05 PM
In article <al4oua$1e3$2...@ncc1701.cistron.net>,
Miquel van Smoorenburg <miquels.at.cist...@netscum.nl> wrote:
>I'm now toying with the idea of hacking storage/cnfs/cnfs.c so
>that I can compile it with -D_LARGEFILE_SOURCE but NOT with
>-D_FILE_OFFSET_BITS=64, i.e. adding support for mmap64(),
>pread64(), stat64() and so on so that INN can be compiled
>without LFS yet cnfs.c knows about files > 2GB. Seems to be
>the best solution for now.

The patch below compiles; perhaps you can try whether it works.
Apply it with patch -p0 < cnfs64.patch in storage/cnfs.

diff --exclude 00-orig --exclude *.o -ruN 00-orig/Makefile ./Makefile
--- 00-orig/Makefile Mon Jul 9 11:14:45 2001
+++ ./Makefile Wed Sep 4 18:47:27 2002
@@ -2,7 +2,7 @@

include ../../Makefile.global

-CFLAGS = $(GCFLAGS) -I../../include -I..
+CFLAGS = $(GCFLAGS) -I../../include -I.. -D_LARGEFILE64_SOURCE

SOURCES = cnfs.c
OBJECTS = cnfs.o
diff --exclude 00-orig --exclude *.o -ruN 00-orig/cnfs-private.h ./cnfs-private.h
--- 00-orig/cnfs-private.h Wed Apr 4 16:18:10 2001
+++ ./cnfs-private.h Wed Sep 4 18:47:12 2002
@@ -13,8 +13,14 @@
/* These values may have to be changed for 64-bit support on systems that
don't support compilation options that increase the size of off_t and the
size lseek() can handle (like Solaris does). */
-typedef off_t CYCBUFF_OFF_T;
-#define CNFSseek lseek
+#if _LARGEFILE64_SOURCE && _FILE_OFFSET_BITS != 64
+ typedef off64_t CYCBUFF_OFF_T;
+# define CNFSseek lseek64
+# define CNFS64
+#else
+ typedef off_t CYCBUFF_OFF_T;
+# define CNFSseek lseek
+#endif

/* Page boundary on which to mmap() the CNFS article usage header. Should
be a multiple of the pagesize for all the architectures you expect might
diff --exclude 00-orig --exclude *.o -ruN 00-orig/cnfs.c ./cnfs.c
--- 00-orig/cnfs.c Mon Oct 22 15:39:50 2001
+++ ./cnfs.c Wed Sep 4 18:44:01 2002
@@ -27,6 +27,14 @@
#include "cnfs.h"
#include "cnfs-private.h"

+#ifdef CNFS64
+# define mmap mmap64
+# define pread pread64
+# define pwrite pwrite64
+# define stat stat64
+# define open open64
+#endif
+
typedef struct {
/**** Stuff to be cleaned up when we're done with the article */
char *base; /* Base of mmap()ed art */

Kjetil Torgrim Homme

Sep 5, 2002, 6:56:13 AM
[M. Buchenrieder]:

>
> You might be better off using another OS if you really need support
> for files >2GB at this time. OTOH, you might want to ask that in
> comp.os.linux.setup - or look up the actual status of LFS on
> Google.

why are you spreading misinformation about Linux? the issue is
(usually) that Red Hat ships INN without largefile support, for a good
reason: INN+Linux wasn't largefile ready for Red Hat 7.0, and they
can't flip the switch now since the old datafiles would be rendered
unreadable for someone upgrading his system.

it's very easy to download the src.rpm and edit the spec-file. look
for

./configure --prefix=/usr \
--sysconfdir=/etc/news --mandir=%{_mandir} \

and add

CFLAGS=$(getconf LFS_CFLAGS) LDFLAGS=$(getconf LFS_LDFLAGS) \

on the line above it. the result should be something like this:

with_tmp_path=/var/lib/news/tmp \
CFLAGS=$(getconf LFS_CFLAGS) LDFLAGS=$(getconf LFS_LDFLAGS) \
./configure --prefix=/usr \
--sysconfdir=/etc/news --mandir=%{_mandir} \
--with-log-dir=/var/log/news --with-spool-dir=/var/spool/news\
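then rebuild the package and install the result; with the rpm that ships in Red Hat 7.3 that should be something like (from memory; newer rpm versions split this out into rpmbuild --rebuild):

rpm --rebuild inn-*.src.rpm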

--
Kjetil T. ==. ,,==. ,,==. ,,==. ,,==. ,,==
::://:::://:::://:::://:::://::::
=='' `=='' `=='' `=='' `=='' `== http://folding.stanford.edu

bill

Sep 5, 2002, 9:33:13 PM
Hi,

Another question about VERY BIG cycbuffs. If I want to create a 100GB buffer,
I have the following options:
1) one 100GB cycbuff and one metacycbuff that contains it
2) four 25GB cycbuffs and one metacycbuff that contains these four cycbuffs
3) many small cycbuffs and one metacycbuff that contains all of them

Which one is better in terms of performance?
The assumption is that all of them are on a 100GB hardware RAID 5 disk.

Thanks.

Bill

"Russ Allbery" <r...@stanford.edu> wrote in message
news:yld6rtk...@windlord.stanford.edu...

Kjetil Torgrim Homme

Sep 5, 2002, 10:14:40 PM
[bill]:

>
> Hi,
>
> Another question about VERY BIG cycbuffs. If I want to create a 100GB buffer,
> I have the following options:
> 1) one 100GB cycbuff and one metacycbuff that contains it
> 2) four 25GB cycbuffs and one metacycbuff that contains these four cycbuffs
> 3) many small cycbuffs and one metacycbuff that contains all of them
>
> Which one is better in terms of performance?
> The assumption is that all of them are on a 100GB hardware RAID 5 disk.

one huge cycbuff probably gives best performance (best locality for
writes). you can get the same effect by telling the metacycbuff to
use the cycbuffs sequentially. this gives you a little more
flexibility wrt. changing retention policy, ie. it's painless to
reduce the size of the metacycbuff by 25%.
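in cycbuff.conf that's the SEQUENTIAL flag on the metacycbuff line, if your INN version has it; a sketch with invented names and KB sizes:

cycbuff:B1:/usr/local/news/spool/buff1:25000000
cycbuff:B2:/usr/local/news/spool/buff2:25000000
cycbuff:B3:/usr/local/news/spool/buff3:25000000
cycbuff:B4:/usr/local/news/spool/buff4:25000000
metacycbuff:BIGAREA:B1,B2,B3,B4:SEQUENTIAL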

Russ Allbery

Sep 5, 2002, 10:33:04 PM
bill <ccbil...@ust.hk> writes:

> Another question about VERY BIG cycbuffs. If I want to create a 100GB buffer,
> I have the following options:
> 1) one 100GB cycbuff and one metacycbuff that contains it
> 2) four 25GB cycbuffs and one metacycbuff that contains these four cycbuffs
> 3) many small cycbuffs and one metacycbuff that contains all of them

> Which one is better in terms of performance?
> The assumption is that all of them are on a 100GB hardware RAID 5 disk.

If you're creating these as large files, smaller ones are probably more
efficient due to the way that Unix file systems handle large files
(they're not optimized for it, and it tends to involve a lot of indirection
through multiple levels of indirect blocks). However, at some point you hit
inefficiencies due to consumption of resources from all the open files, so
there's probably a local minimum in the curve somewhere.

I don't know exactly where it is, though.

Mark Hittinger

Sep 5, 2002, 10:45:36 PM
"bill" <ccbil...@ust.hk> writes:
>Another question about VERY BIG cycbuffs. If I want to create a 100GB buffer,
>I have the following options:
>1) one 100GB cycbuff and one metacycbuff that contains it
>2) four 25GB cycbuffs and one metacycbuff that contains these four cycbuffs
>3) many small cycbuffs and one metacycbuff that contains all of them
>Which one is better in terms of performance?

I built a server with 10 cycbuffs of around 24gb each under solaris 2.6 and
veritas. The cycbuffs were files rather than raw devices.

I did notice that article write times tended to increase the further "out"
in the 24gb we went. I assumed this was because of the triple indirect
block overhead.

Being able to use the raw partitions would probably have gotten around this
but at the time I would have had to make some patches to INN in order to
do that (solaris 2.6 wanted to do block i/o only to large raw devices).

I'd assume based on this experience that there wouldn't be much difference
between the triple indirect overhead of a 100gb cycbuff vs four 25 gb cycbuffs.

I had wanted to go back and redo that server with 100 2gb cycbuffs but never
had the time. I was working on the overview overhead problems and in
particular the expiration of overview overhead PROBLEM. :-)

Another server with several 2gb cycbuffs produced much less volatile article
write times.

Something else to consider is that your OS may not be particularly good at
files > 2gb or even more importantly may be buggy at mmap'ing files > 2gb.

Later

Mark Hittinger
bu...@pu.net

Thomas

Sep 6, 2002, 2:54:29 AM
Mark Hittinger wrote:

> I built a server with 10 cycbuffs of around 24gb each under solaris 2.6 and
> veritas. The cycbuffs were files rather than raw devices.
>
> I did notice that article write times tended to increase the further "out"
> in the 24gb we went. I assumed this was because of the triple indirect
> block overhead.

The Veritas file system comes with some admin commands which allow you to create a
file and set its extents at the same time. This prevents the multiple
indirection, as the file on disk will be contiguous and the directory entry (or
whatever it is on VxFS) will point to just the beginning and end of that range of
disk blocks.


Thomas

Mark Hittinger

Sep 6, 2002, 10:08:43 AM
Thomas <z...@spam.invalid> writes:
>Veritas file system comes with some admin commands which allow you to create a
>file and set the extents at the same time. This will prevent the multiple
>indirection, as the file on disk will be contiguous and the directory or
>whatever it is on VxFS will point to just the beginning and end of that range of
>disk blocks.

Cool but unfortunately I was using UFS on top of the Veritas volume manager.
It might have been a license expense issue that kept us from using VxFS.

I wonder if we could develop our own "really dumb" file system just for use
with CNFS that kept everything contiguous - similar to the old RT-11 file
system for example. A "dumbfs" loadable kernel module? :-)

Later

Mark Hittinger
bu...@pu.net

Miquel van Smoorenburg

Sep 6, 2002, 11:45:02 AM
In article <LF2e9.210334$On.82...@bin3.nnrp.aus1.giganews.com>,
Mark Hittinger <bu...@pu.net> wrote:
>I wonder if we could develop our own "really dumb" file system just for use
>with CNFS that kept everything contiguous - similar to the old RT-11 file
>system for example. A "dumbfs" loadable kernel module? :-)

I did that for linux-2.2 kernels. Blockdevices in 2.2 kernels don't
have mmap() support, so you can't use them directly for CNFS.
So I wrote a 'rawfs' that simply maps partitions to fixed-size files.
See ftp://ftp.cistron.nl/pub/people/miquels/kernel/v2.2/rawfs-0.6.tar.gz

With linux-2.4 kernels this isn't needed anymore since you can
just use /dev/sdb1 directly.

Mike.

Mark Hittinger

Sep 6, 2002, 1:11:26 PM
"Miquel van Smoorenburg" <miquels.at.cist...@netscum.nl> writes:
>I did that for linux-2.2 kernels. Blockdevices in 2.2 kernels don't
>have mmap() support, so you can't use them directly for CNFS.
>So I wrote a 'rawfs' that simply maps partitions to fixed-size files.
>See ftp://ftp.cistron.nl/pub/people/miquels/kernel/v2.2/rawfs-0.6.tar.gz

Hey thanks Mike for posting the url - I'll give it a look.

Later

Mark Hittinger
bu...@pu.net

Ed Clarke

Sep 6, 2002, 1:33:07 PM
On Fri, 06 Sep 2002 14:08:43 GMT, Mark Hittinger <bu...@pu.net> wrote:
>I wonder if we could develop our own "really dumb" file system just for use
>with CNFS that kept everything contiguous - similar to the old RT-11 file
>system for example. A "dumbfs" loadable kernel module? :-)

Been done already; Miquels at cistron.nl has had rawfs out for a long
time on linux 2.2.x. I don't think he's ported it to the 2.4 kernel
yet. It maps raw disk partitions into CNFS files.

This filesystem is why my own news server is still using 2.2 instead
of 2.4 as all the other machines are. Google for "rawfs"...

Mark Hittinger

Sep 6, 2002, 7:09:50 PM
cla...@news.cilia.org (Ed Clarke) writes:
>Been done already; Miquels at cistron.nl has had rawfs out for a long
>time on linux 2.2.x. I don't think he's ported it to the 2.4 kernel
>yet. It maps raw disk partitions into CNFS files.

It's close to what I was looking for - pretty neat.

What I was really looking for is a dumbfs that would allow multiple very large
contiguous files on a single big device. I would use multiple files in order
to have more flexibility with allocating storage to text, binaries, etc.
I would also like to play with some positional optimization of cycbuffs where
the cycbuffs for, say, control would be at the center of the spindle. We
used to focus a lot on such optimization in the old days but I'm not sure
it has any relevance today. Still it would be neat to experiment with it.

For example if I had several 100gb drives I could put cycbuffs in the center
of the drives for control/jobs/etc and put cycbuffs for very infrequently used
groups towards the beginning of the drive and towards the end of the drive.

rawfs would sort of let me do this if I set up multiple partitions, but is
there not a limit on the number of partitions?

Later

Mark Hittinger
bu...@pu.net

Ed Clarke

Sep 6, 2002, 11:03:12 PM
On Fri, 06 Sep 2002 23:09:50 GMT, Mark Hittinger <bu...@pu.net> wrote:
>For example if I had several 100gb drives I could put cycbuffs in the center
>of the drives for control/jobs/etc and put cycbuffs for very infrequently used
>groups towards the beginning of the drive and towards the end of the drive.
>
>rawfs would sort of let me do this if I set up multiple partitions, but is
>there not a limit on the number of partitions?

15 per drive - I think. That's the limit on a SCSI drive anyway.
That'd limit you to 30GB per drive (15 partitions of 2GB each) unless you use
some of the partitions for something else. I'm using 17GB SCSI drives with 2GB partitions
on each data spindle. Drive sda contains the system and /var/lib,/usr/lib
stuff. sdb,sdc contain cycbuffs and sdd,sde are waiting to be activated.

I can get 15 drives into that box on the SCSI channel.

Miquel van Smoorenburg

Sep 7, 2002, 4:28:58 AM
In article <2Bae9.483893$2p2.19...@bin4.nnrp.aus1.giganews.com>,
Mark Hittinger <bu...@pu.net> wrote:
>cla...@news.cilia.org (Ed Clarke) writes:
>>Been done already; Miquels at cistron.nl has had rawfs out for a long
>>time on linux 2.2.x. I don't think he's ported it to the 2.4 kernel
>>yet. It maps raw disk partitions into CNFS files.
>
>It's close to what I was looking for - pretty neat.

However since it's a 2.2 kernel you're limited to 2 GB per file/partition.

>What I was really looking for is a dumbfs that would allow multiple very large
>contiguous files on a single big device. I would use multiple files in order
>to have more flexibility with allocating storage to text, binaries, etc.
>I would also like to play with some positional optimization of cycbuffs where
>the cycbuffs for, say, control would be at the center of the spindle. We
>used to focus a lot on such optimization in the old days but I'm not sure
>it has any relevance today. Still it would be neat to experiment with it.
>
>For example if I had several 100gb drives I could put cycbuffs in the center
>of the drives for control/jobs/etc and put cycbuffs for very infrequently used
>groups towards the beginning of the drive and towards the end of the drive.

What you probably want is an OS that allows you to use a partition
directly in large-file mode. Linux 2.4 does this (I've migrated most
servers from linux-2.2+rawfs to linux-2.4). Then you need to hack
the cnfs driver in innd so that you can specify parts of a file
as cycbuff instead of an entire file.

Say

cycbuff:sdb1-1:/dev/sdb1:1023M:0-1023M # Need some fudge at the end
cycbuff:sdb1-2:/dev/sdb1:1023M:1024-2047M

.. or something similar. I think it's easy enough to do.

Mike.

Remco Blaakmeer

Sep 7, 2002, 6:21:38 AM

15 partitions is the traditional limit for SCSI disks, because of the way the
8-bit minor numbers are divided up in the Linux kernel. But using devfs, which
does things differently, you can create far more partitions than that.

There is no real limit on the number of partitions per disk, other than
the limits imposed on you by the kernel.

Remco
--
remco3: 11:55:01 up 2 days, 10:10, 8 users, load average: 2.17, 2.16, 2.09

M. Buchenrieder

Sep 7, 2002, 12:58:13 PM
Kjetil Torgrim Homme <kjet...@haey.ifi.uio.no> writes:

>[M. Buchenrieder]:
>>
>> You might be better off using another OS if you really need support
>> for files >2GB at this time. OTOH, you might want to ask that in
>> comp.os.linux.setup - or look up the actual status of LFS on
>> Google.

>why are you spreading misinformation about Linux?

[...]

I don't. LFS in Linux is only partially working - and the fact that
it _does_ indeed seem to work at least for INN is a surprise to me.
It's good news, but doesn't change the fact that whether or not a
certain set of application(s) will work with files > 2GB is still
uncertain, as long as not all parts of the standard Linux system
(which is simply more than just the kernel and the libs) have been
rewritten to support it. Though e.g. my current system
claims to have LFS built in (this is SuSE 7.2), it's not seriously
usable due to the above-mentioned limitations. YMMV.

Miquel van Smoorenburg

Sep 8, 2002, 7:18:47 AM
In article <H22v5...@scrum.muc.de>,
M. Buchenrieder <mi...@scrum.muc.de> wrote:
>It's good news, but doesn't change the fact that whether or not a
>certain set of application(s) will work with files > 2GB is still
>uncertain, as long as not all parts of the standard Linux system
>(which is simply more than just the kernel and the libs) have been
>rewritten to support it. Though e.g. my current system
>claims to have LFS built in (this is SuSE 7.2), it's not seriously
>usable due to the above-mentioned limitations. YMMV.

You can't blame your SuSE-specific problems on "Linux". My
Debian GNU/Linux 3.0 system works just fine with LFS.

Mike.

M. Buchenrieder

Sep 8, 2002, 4:12:53 PM
"Miquel van Smoorenburg" <miquels.at.cist...@netscum.nl> writes:

[...]

>You can't blame your SuSE-specific problems on "Linux". My
>Debian GNU/Linux 3.0 system works just fine with LFS.

Oh well. A quick glance at Google tells a different story, though.

Never mind. I'm glad to hear that you are able to use files > 2GB
without obvious problems on Linux (not that I'd particularly
care about it; although I've been running Linux since 1993 I never
had to deal with files of that size).
