
ftruncate reserve space, or just set limits?


Michael Flanagan

Apr 30, 2004, 6:01:05 PM
One of my clients is running RedHat 7.1 on 2.4.3-6Enterprise. One of
the apps wants to create two or three large (2GB) temp files. Their
code works, but is beastly slow. They use fopen, fseek, fwrite, etc.
The program first writes enough records (with some initial data)
serially (fseek(); fwrite();) to fill the files. I suspect the
problem is that the file gets created small, and as each new extent of
the file gets filled, the OS has to get another extent.

I've modified their code to do an ftruncate(2GB) right after the fopen
for each file. This seems to modify the file's attributes to the
correct size (as reported by 'ls -l'), but I suspect that the
underlying disk space hasn't really been reserved. As I run their
program, I can see more and more of the disk get allocated, until the
program is done spinning through the full extent of the files.

1. Am I going about this all wrong?

2. How can I get the os to actually reserve all the space I need up
front? (Or am I foolish to think this will solve my problem?)

Thanks.
mfla...@MJFlanagan.nospam.com

Scott Lurndal

Apr 30, 2004, 6:50:23 PM
mfla...@MJFlanagan.delete.this.nospam.com (Michael Flanagan) writes:
>One of my clients is running RedHat 7.1 on 2.4.3-6Enterprise. One of
>the apps wants to create two or three large (2GB) temp files. Their
>code works, but is beastly slow. They use fopen, fseek, fwrite, etc.
>The program first writes enough records (with some initial data)
>serially (fseek(); fwrite();) to fill the files. I suspect the
>problem is that the file gets created small, and as each new extent of
>the file gets filled, the OS has to get another extent.
>
>I've modified their code to do an ftruncate(2GB) right after the fopen
>for each file. This seems to modify the file's attributes to the
>correct size (as reported by 'ls -l'), but I suspect that the
>underlying disk space hasn't really been reserved. As I run their
>program, I can see more and more of the disk get allocated, until the
>program is done spinning through the full extent of the files.

That is correct. ftruncate(2) will set the end-of-file pointer in the
inode. It will not allocate blocks to back the new space until written
to.

>
>1. Am I going about this all wrong?

Probably. Using stdio to do the file I/O will not perform well
in many situations; stdio is double buffered - once in libc and
once in the kernel. The most efficient way to write files is to
use mmap(2) - however, you may have issues mmap'ing 2GB files in
the standard ia32 address space (kernel changes can be made to
provide a 3GB user virtual address space if required).
(You can always mmap sequential frames of the file into a smaller
portion of memory - for example, 4MB at a shot, unmapping the older
portion when you map the subsequent one - i.e. windowing.)
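
In rough outline, the windowing approach might look something like this
(just a sketch, not tested code - the 4 MB window and the memset() are
placeholders for the real record writes; compile with
-D_FILE_OFFSET_BITS=64 if the file can reach 2GB):

#include <string.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

#define WINDOW (4 * 1024 * 1024)

int fill_by_windows(const char *path, off_t total)
{
    off_t pos;
    int fd = open(path, O_RDWR | O_CREAT, 0644);

    if (fd < 0)
        return -1;
    if (ftruncate(fd, total) != 0) {    /* set the final length up front */
        close(fd);
        return -1;
    }
    for (pos = 0; pos < total; pos += WINDOW) {
        size_t len = (total - pos < WINDOW) ? (size_t)(total - pos) : WINDOW;
        char *win = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, pos);

        if (win == MAP_FAILED) {
            close(fd);
            return -1;
        }
        memset(win, 0, len);    /* write this window's records here */
        munmap(win, len);       /* drop the old window before mapping the next */
    }
    close(fd);
    return 0;
}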

Most unix filesystems are _NOT_ extent based. Allocation of blocks
to files is very efficient since all blocks are the same size (typically
4096 bytes). Exceptions to this are Veritas VxFS and SGI's XFS.

>
>2. How can I get the os to actually reserve all the space I need up
>front? (Or am I foolish to think this will solve my problem?)

First, why do you need to preallocate the space?

If you do, you'll need to write each block in the file.

(check your filesystem properties for the appropriate
block size - 4kb is typical).

Note that you can't just write the last block - intervening blocks will
not be allocated (reads on the non-existent portion of the file will just
return a buffer of zero bytes).
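
A bare-bones sketch of that idea (assuming the usual 4096-byte block
size - substitute whatever the filesystem reports, and compile with
-D_FILE_OFFSET_BITS=64 for 2GB files): seek to each block boundary and
write a single byte, which forces that block to be allocated.

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

int preallocate(const char *path, off_t size, long blksize)
{
    off_t pos;
    char zero = 0;
    int fd = open(path, O_WRONLY | O_CREAT, 0644);

    if (fd < 0)
        return -1;
    for (pos = 0; pos < size; pos += blksize) {
        if (lseek(fd, pos, SEEK_SET) == (off_t) -1 ||
                write(fd, &zero, 1) != 1) {
            close(fd);
            return -1;
        }
    }
    if (ftruncate(fd, size) != 0) {     /* set the exact length afterwards */
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}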

scott

Robert Nichols

Apr 30, 2004, 11:45:31 PM
In article <4092cc98....@news.viawest.net>,
Michael Flanagan <mfla...@MJFlanagan.delete.this.nospam.com> wrote:
:One of my clients is running RedHat 7.1 on 2.4.3-6Enterprise. One of

For starters, define "beastly slow" and describe the hardware you are
running on. On a reasonably up-to-date but basic (i.e., single ATA
disk, no RAID) system, it's going to take several minutes to fill three
2 GB files.

To answer the question posed in the subject header, ftruncate(3) neither
reserves space nor sets limits. It just marks the current end-of-file,
and does nothing to prevent that file from later growing beyond that
point. When you call ftruncate() immediately after creating a file, you've
made what is called a "sparse file" with no data blocks allocated.
AFAIK the only way to force a block to be allocated is by writing to it.
Blocks that have never been written are internally mmap()-ed to
/dev/zero and do not occupy space on the disk.
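
You can see this for yourself by comparing st_size with st_blocks right
after the ftruncate() - a quick throwaway test (file name and size are
arbitrary):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    int fd = open("sparse.tmp", O_RDWR | O_CREAT | O_TRUNC, 0644);

    if (fd < 0)
        return 1;
    ftruncate(fd, 100L * 1024L * 1024L);    /* 100 MB, but nothing written */
    fstat(fd, &st);
    printf("size = %ld bytes, blocks = %ld (512-byte units)\n",
           (long) st.st_size, (long) st.st_blocks);
    close(fd);
    unlink("sparse.tmp");
    return 0;
}

'ls -l' reports st_size; 'du' reports the allocated blocks, which is why
the two disagree for a sparse file.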

Disk space allocation is a pretty efficient process, so there is little
reason to pre-allocate the space.* It takes my machine about 90 seconds
to create a 2 GB file with

dd if=/dev/zero of=big_file bs=32k count=64k; sync

If I then overwrite that file without reallocating the space

dd conv=notrunc if=/dev/zero of=big_file bs=32k count=64k; sync

that process actually takes a few seconds _longer_. Incidentally,
reducing the transfer blocksize to 4K (same as the file system block
size) makes no difference at all in the time. If I go down to 512
bytes, though, the process gets "beastly slow" (4 minutes).

* One reason to pre-allocate is that it ensures that the space is
actually available. Programs tend to react badly if they run
out of disk space while inserting data into the middle of an
existing file.

If you're going to be accessing the files randomly, using stdio calls
may be hurting you. Stdio imposes an extra layer of buffering that
might be of no benefit to you. If you're going to be jumping from one
4 KB block to another and accessing only a little data from each, you're
better off just letting the kernel do the caching and not forcing stdio
to be constantly flushing and refilling its internal buffer. That
really shouldn't make a huge difference, however.
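
If you do decide to drop stdio for the random-access part, a rough
sketch with plain file descriptors might look like this (RECSZ and the
function names are made up for illustration; open the file yourself with
open() and pass the descriptor in):

#define _XOPEN_SOURCE 500       /* for pread()/pwrite() */
#include <sys/types.h>
#include <unistd.h>

#define RECSZ 4096              /* substitute the app's real record size */

int put_record(int fd, long recno, const char *rec)
{
    off_t where = (off_t) recno * RECSZ;
    return (pwrite(fd, rec, RECSZ, where) == RECSZ) ? 0 : -1;
}

int get_record(int fd, long recno, char *rec)
{
    off_t where = (off_t) recno * RECSZ;
    return (pread(fd, rec, RECSZ, where) == RECSZ) ? 0 : -1;
}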

--
Bob Nichols AT interaccess.com I am "rnichols"

Michael Flanagan

May 1, 2004, 4:20:27 PM
On Sat, 01 May 2004 03:45:31 GMT, Robert Nichols
<SEE_SI...@localhost.localdomain.invalid> wrote:

>In article <4092cc98....@news.viawest.net>,
>Michael Flanagan <mfla...@MJFlanagan.delete.this.nospam.com> wrote:
>:One of my clients is running RedHat 7.1 on 2.4.3-6Enterprise. One of
>:the apps wants to create two or three large (2GB) temp files. Their
>:code works, but is beastly slow. They use fopen, fseek, fwrite, etc.
>:The program first writes enough records (with some initial data)
>:serially (fseek(); fwrite();) to fill the files. I suspect the
>:problem is that the file gets created small, and as each new extent of
>:the file gets filled, the OS has to get another extent.
>:
>:I've modified their code to do an ftruncate(2GB) right after the fopen
>:for each file. This seems to modify the file's attributes to the
>:correct size (as reported by 'ls -l'), but I suspect that the
>:underlying disk space hasn't really been reserved. As I run their
>:program, I can see more and more of the disk get allocated, until the
>:program is done spinning through the full extent of the files.
>:
>:1. Am I going about this all wrong?
>:
>:2. How can I get the os to actually reserve all the space I need up
>:front? (Or am I foolish to think this will solve my problem?)
>
>For starters, define "beastly slow" and describe the hardware you are
>running on. On a reasonably up-to-date but basic (i.e., single ATA
>disk, no RAID) system, it's going to take several minutes to fill three
>2 GB files.

The client's machine is an Intel Xeon (sp?) 2-processor w/ 4GB ram.
Disk is non-Raid, 2x80GB SCSI. I have a test C program that writes
413,000 successive records, each 28,800 bytes long (the record size is
specified by the app's requirements). It takes about an hour to do so
(a little short of five 2GB files, I think). /tmp is about 50GB.

Am I hurting myself by doing my sequential writes: fseek();fwrite();?
I assume (uh-oh!) that the seeks don't add any appreciable time.

>
>To answer the question posed in the subject header, ftruncate(3) neither
>reserves space nor sets limits. It just marks the current end-of-file,
>and does nothing to prevent that file from later growing beyond that
>point. When you call ftruncate() immediately after creating a file, you
>made a what is called a "sparse file" with no data blocks allocated.
>AFAIK the only way to force a block to be allocated is by writing to it.
>Blocks that have never been written are internally mmap()-ed to
>/dev/zero and do not occupy space on the disk.
>
>Disk space allocation is a pretty efficient process, so there is little
>reason to pre-allocate the space.* It takes my machine about 90 seconds
>to create a 2 GB file with
>
> dd if=/dev/zero of=big_file bs=32k count=64k; sync
>
>If I then overwrite that file without reallocating the space
>
> dd conv=notrunc if=/dev/zero of=big_file bs=32k count=64k; sync
>

I'll try both these on the client's system.

Thanks for the comment about allocation being pretty efficient. The
reason I wanted to preallocate was that I assumed that the allocation
was inefficient, and was killing the response in a "death by a
thousand cuts" manner.

>that process actually takes a few seconds _longer_. Incidentally,
>reducing the transfer blocksize to 4K (same as the file system block
>size) makes no difference at all in the time. If I go down to 512
>bytes, though, the process gets "beastly slow" (4 minutes).
>
>* One reason to pre-allocate is that it ensures that the space is
> actually available. Programs tend to react badly if they run
> out of disk space while inserting data into the middle of an
> existing file.
>
>If you're going to be accessing the files randomly, using stdio calls
>may be hurting you. Stdio imposes an extra layer of buffering that
>might be of no benefit to you. If you're going to be jumping from one
>4 KB block to another and accessing only a little data from each, you're
>better off just letting the kernel do the caching and not forcing stdio
>to be constantly flushing and refilling its internal buffer. That
>really shouldn't make a huge difference, however.

In several places the app runs through the file(s) sequentially. In
one other place, it access them randomly.


>
>--
>Bob Nichols AT interaccess.com I am "rnichols"

mfla...@MJFlanagan.nospam.com

Michael Flanagan

May 1, 2004, 4:25:04 PM
On Fri, 30 Apr 2004 22:50:23 GMT, sc...@slp53.sl.home (Scott Lurndal)
wrote:

If I trip up on the mmap technique, would changing fread/fwrite to
read/write get rid of one level of buffering?

>Most unix filesystems are _NOT_ extent based. Allocation of blocks
>to files is very efficient since all blocks are the same size (typically
>4096 bytes). Exceptions to this are Veritas VxFS and SGI's XFS.
>
>>
>>2. How can I get the os to actually reserve all the space I need up
>>front? (Or am I foolish to think this will solve my problem?)
>
>First, why do you need to preallocate the space?
>
>If you do, you'll need to write each block in the file.

I assumed (!) that allocation was inefficient, particularly in many
small allocations, as opposed to one, large allocation. I understand
from you and others that my assumption is incorrect.

>
>(check your filesystem properties for the appropriate
>block size - 4kb is typical).

Smaller than 4kb or larger than ?kb would slow things down?

>
>Note that you can't just write the last block - intervening blocks will
>not be allocated (reads on the non-existent portion of the file will just
>return a buffer of zero bytes).
>
>scott

mfla...@MJFlanagan.nospam.com

Robert Nichols

May 2, 2004, 12:12:46 PM
In article <40940438...@news.viawest.net>,
Michael Flanagan <mfla...@MJFlanagan.delete.this.nospam.com> wrote:
:
:The client's machine is an Intel Xeon (sp?) 2-processor w/ 4GB ram.

:Disk is non-Raid, 2x80GB SCSI. I have a test C program that writes
:413,000 successive records, each 28,800 bytes long (the record size if
:specified by the app's requirements). It takes about an hour to do so
:(a little short of five 2GB files, I think). /tmp is about 50GB.
:
:Am I hurting myself by doing my sequential writes: fseek();fwrite();?
:I assume (uh-oh!) that the seeks don't add any appreciable time.

Post some sample code that demonstrates this timing. Something is
seriously wrong. It takes me about 90 seconds to fill a 2 GB file with
a record size of 28800 bytes using fwrite(). Adding an fseek() before
each fwrite() makes no measurable difference. Just for laughs, I tried
writing the file backwards, and that also took 90 seconds.

My machine is new, but pretty basic: Intel Celeron 2400 MHz, 1/2 GB RAM,
5400 RPM ATA disk drive.

Have you verified that your SCSI adapter is using DMA and not PIO? If
your CPU usage during the test is pegged at 100% "system", then you are
very likely not using DMA, and that _would_ account for your lengthy
times. If I turn off DMA to my hard disk, my time to write 2 GB goes up
to about 10 minutes, which is pretty consistent with what you are
seeing.

Michael Flanagan

May 2, 2004, 3:11:36 PM
On Sun, 02 May 2004 16:12:46 GMT, Robert Nichols
<SEE_SI...@localhost.localdomain.invalid> wrote:

>In article <40940438...@news.viawest.net>,
>Michael Flanagan <mfla...@MJFlanagan.delete.this.nospam.com> wrote:
>:
>:The client's machine is an Intel Xeon (sp?) 2-processor w/ 4GB ram.
>:Disk is non-Raid, 2x80GB SCSI. I have a test C program that writes
>:413,000 successive records, each 28,800 bytes long (the record size if
>:specified by the app's requirements). It takes about an hour to do so
>:(a little short of five 2GB files, I think). /tmp is about 50GB.
>:
>:Am I hurting myself by doing my sequential writes: fseek();fwrite();?
>:I assume (uh-oh!) that the seeks don't add any appreciable time.
>
>Post some sample code that demonstrates this timing. Something is
>seriously wrong. It takes me about 90 seconds to fill a 2 GB file with
>a record size of 28800 bytes using fwrite(). Adding an fseek() before
>each fwrite() makes no measurable difference. Just for laughs, I tried
>writing the file backwards, and that also took 90 seconds.

Posted at the end.

>
>My machine is new, but pretty basic: Intel Celeron 2400 MHz, 1/2 GB RAM,
>5400 RPM ATA disk drive.
>
>Have you verified that your SCSI adapter is using DMA and not PIO? If
>your CPU using during the test is pegged at 100% "system", then you are
>very likely not using DMA, and that _would_ account for your lengthy
>times. If I turn off DMA to my hard disk, my time to write 2 GB goes up
>to about 10 minutes, which is pretty consistent with what you are
>seeing.

No, I haven't verified this, but will do. The machine seems blazingly
fast (sorry for the precise, technical terms <g>) for all other things
it does.

>
>--
>Bob Nichols AT interaccess.com I am "rnichols"

Notes on code:

This apparently evolved from trying to use fread/fwrite on one huge
file; ftruncate works only on fd's, though, so they started by opening
an fd, then a stream over that. Then, we thought ftruncate wasn't
working on large files; hence the split into several files.

Please feel free to comment on any and all of the code. I think there
are enough authors that there's little pride of authorship, so
comments along the lines of, "You're doing WHAT?!" are welcome.

/* Copyright (c) Lauren Geophysical, 2004. */
/* All rights reserved. */

/**
 * Allocate large arrays into a file; allowing access to them. */

#include <stdio.h>
#include <errno.h>
//#include "sw.h" // in place of this include:
/* Virtual Files. */
#define VF_MAX_FILES 10
typedef struct _virtFile {
    int fd[VF_MAX_FILES];
    FILE *pF[VF_MAX_FILES];
    int bytesPerRecord;
    int nrRecords;
    int recordsPerFile;
    int nrFiles;
    int currRecordNum;          // starts at 0
    char *fileName[VF_MAX_FILES];
    char *buffer;
} VIRT_FILE;

#ifdef TEST
#define NR_RECORDS 413218
#define NR_BYTES 12400
#endif

static void CloseAllVirtFiles(VIRT_FILE *pvf)
{
    int i;

    for (i=0; i<pvf->nrFiles; i++) {
        eclose(pvf->fd[i]);
        fclose(pvf->pF[i]);
        if (NULL != pvf->fileName[i])
            free(pvf->fileName[i]);
    }
}

static int GenerateSeek(VIRT_FILE *pvf, int recordNum, int *pFileNum,
        off_t *pOffset)
{
    int fileNum, recNumInFile;
    static int firstWarn = 1;

    if (NULL != pFileNum)
        *pFileNum = 0;
    else
        return 0;

    if (NULL != pOffset)
        *pOffset = 0;
    else
        return 0;

    if (NULL == pvf)
        return 0;

    *pFileNum = recordNum / pvf->recordsPerFile;
    recNumInFile = recordNum % pvf->recordsPerFile;
    *pOffset = (off_t) ((off_t) recNumInFile * (off_t) pvf->bytesPerRecord);
    return 1;
}

#define MAX_INT_32 ((int) 0x7fffffff)
//#define MAX_INT_32 ((int) NR_RECORDS * NR_BYTES / 4)

VIRT_FILE *GetVirtFile(unsigned int bytesPerRecord,
        unsigned int nrRecords, char *pDir, int singleFileOnly)
{
    int i, nrFiles, truncStat;
    VIRT_FILE *pvf;
    char aTemplate[] = "VirtFileXXXXXX";

    if ((0 >= bytesPerRecord) || (0 >= nrRecords))
        return NULL;
    if (NULL == pDir)
        return NULL;

    pvf = (VIRT_FILE *) malloc(sizeof (VIRT_FILE));
    if (NULL == pvf)
        return NULL;
    for (i=0; i<VF_MAX_FILES; i++) {
        pvf->fd[i] = -1;
        pvf->pF[i] = NULL;
        pvf->fileName[i] = NULL;
    }
    pvf->bytesPerRecord = bytesPerRecord;
    pvf->nrRecords = nrRecords;
    pvf->currRecordNum = -1;
    pvf->buffer = NULL;
    pvf->nrFiles = 0;
    if (singleFileOnly) {
        pvf->recordsPerFile = pvf->nrRecords;
        nrFiles = 1;
    }
    else {
        pvf->recordsPerFile = MAX_INT_32 / pvf->bytesPerRecord;
        nrFiles = pvf->nrRecords / pvf->recordsPerFile;
        if (0 < (pvf->nrRecords % pvf->recordsPerFile))
            nrFiles++;
    }

    for (i=0; i<nrFiles; i++) {
        pvf->fileName[i] = (char *) malloc(strlen(pDir) + strlen(aTemplate));
        if (NULL == pvf->fileName[i]) {
            CloseAllVirtFiles(pvf);
            free(pvf);
            return NULL;
        }

        strcpy(pvf->fileName[i], pDir);
        strcat(pvf->fileName[i], aTemplate);
        if (-1 == (pvf->fd[i] = mkstemp(pvf->fileName[i]))) {
            CloseAllVirtFiles(pvf);
            free(pvf);
            return NULL;
        }
        if (NULL == (pvf->pF[i] = fdopen(pvf->fd[i], "w+"))) {
            eclose(pvf->fd[i]);
            CloseAllVirtFiles(pvf);
            free(pvf);
            return NULL;
        }
        if (i == (nrFiles-1))
            nrRecords = pvf->nrRecords - (i * pvf->recordsPerFile);
        else
            nrRecords = pvf->recordsPerFile;

        truncStat = ftruncate(pvf->fd[i],
            (off_t) ((off_t) nrRecords * (off_t) pvf->bytesPerRecord));

        pvf->nrFiles++;
        if (0 != unlink(pvf->fileName[i])) {
            CloseAllVirtFiles(pvf);
            free(pvf);
            return NULL;
        }
    }

    if (NULL == (pvf->buffer = (char *) calloc(1, pvf->bytesPerRecord))) {
        CloseAllVirtFiles(pvf);
        free(pvf);
        return NULL;
    }

    return pvf;
}

char *GetRecord(VIRT_FILE *pvf, unsigned int recordNr, void *pCopyTo,
        int writeCurrent)
{
    int fileNum;
    off_t offset;

    if (NULL == pvf)
        return NULL;
    if ((0 > recordNr) || (recordNr >= pvf->nrRecords))
        return NULL;

    // if record already here, just return it
    if (recordNr == pvf->currRecordNum)
        return pvf->buffer;

    // asking for a different record; so write out current one, if there is one
    if (writeCurrent &&
            (0 <= pvf->currRecordNum) &&
            (pvf->nrRecords > pvf->currRecordNum)) {
        GenerateSeek(pvf, pvf->currRecordNum, &fileNum, &offset);
        efseek(pvf->pF[fileNum], offset, SEEK_SET);
        efwrite(pvf->buffer, pvf->bytesPerRecord, 1, pvf->pF[fileNum]);
    }

    GenerateSeek(pvf, recordNr, &fileNum, &offset);
    efseek(pvf->pF[fileNum], offset, SEEK_SET);
    efread(pvf->buffer, pvf->bytesPerRecord, 1, pvf->pF[fileNum]);
    if (NULL != pCopyTo)
        memcpy(pCopyTo, pvf->buffer, pvf->bytesPerRecord);

    pvf->currRecordNum = recordNr;
    return pvf->buffer;
}

int PutRecord(VIRT_FILE *pvf, unsigned int recordNr, void *pCopyFrom)
{
    int fileNum;
    off_t seekVal;

    if (NULL == pvf)
        return 0;
    if ((0 > recordNr) || (recordNr >= pvf->nrRecords))
        return 0;

    if (NULL != pCopyFrom)
        memcpy(pvf->buffer, pCopyFrom, pvf->bytesPerRecord);

    GenerateSeek(pvf, recordNr, &fileNum, &seekVal);
    if (-1 == efseek(pvf->pF[fileNum], seekVal, SEEK_SET)) {
        perror(NULL);
        return 0;
    }
    efwrite(pvf->buffer, pvf->bytesPerRecord, 1, pvf->pF[fileNum]);
    pvf->currRecordNum = recordNr;
    return 1;
}

int CloseVirtFile(VIRT_FILE *pvf)
{
    if (NULL == pvf)
        return 0;
    CloseAllVirtFiles(pvf);
    free(pvf->buffer);
    free(pvf);

    return 1;
}


#ifdef TEST

main()
{
    int nrErrs = 0;
    VIRT_FILE *pvf;
    char *pbuff;
    int i, j;
    int nrRecords = NR_RECORDS, nrBytes = NR_BYTES;

    if (NULL == (pvf = GetVirtFile(nrBytes, nrRecords, "/usr1/ibis/", 0))) {
        printf("Unable to create VirtFile with %d bytes\n", nrBytes);
        return EXIT_FAILURE;
    }
    printf("MAX_INT_32: %d; nrFiles: %d; recordsPerFile: %d\n",
        MAX_INT_32, pvf->nrFiles, pvf->recordsPerFile);
    printf("recs/file: %d; file size: %d\n",
        pvf->recordsPerFile,
        (off_t) ((off_t) pvf->nrRecords * (off_t) pvf->bytesPerRecord));

    pbuff = GetRecord(pvf, 0, NULL, 1);
    for (i = 0; i < nrRecords; i++) {
        for (j = 0; j < nrBytes; j++)
            *(pbuff+j) = (char) ('a' + (i*10) + j);
        if (0 == (i % 1000))
            printf("Putting record %d (%d%%): %d(%#x)\n",
                i, ((i * 100) / nrRecords),
                (off_t) ((off_t) i * (off_t) nrBytes),
                (off_t) ((off_t) i * (off_t) nrBytes));
        PutRecord(pvf, i, NULL);
    }
    printf("***All records put\n");

    // see if values are correct, but access records backwards
    for (i=nrRecords-1; i >= 0; i--) {
        pbuff = GetRecord(pvf, i, NULL, 0);
        if (NULL == pbuff) {
            printf("Unable to GetRecord for record %d; err: %s\n",
                i, strerror(errno));
            exit(0);
        }
        if (0 == (i % 1000))
            printf("Getting record %d (%d%%)\n", i, ((i * 100) / nrRecords));
        for (j=0; j<nrBytes; j++)
            if (*(pbuff+j) != (char) ('a' + (i*10) + j)) {
                if ((++nrErrs) > 10)
                    exit(0);
                printf("Record %d; byte %d should be %c(%#x), but is %c(%#x)\n",
                    i, j,
                    (char) ('a' + (i*10) + j), (char) ('a' + (i*10) + j),
                    *(pbuff+j), *(pbuff+j));
            }
    }

    CloseVirtFile(pvf);

    return EXIT_SUCCESS;
}
#endif

mfla...@MJFlanagan.nospam.com

Robert Nichols

May 2, 2004, 11:03:56 PM
In article <40954312....@news.viawest.net>,
Michael Flanagan <mfla...@MJFlanagan.delete.this.nospam.com> wrote:
:On Sun, 02 May 2004 16:12:46 GMT, Robert Nichols
:<SEE_SI...@localhost.localdomain.invalid> wrote:
:>
:>Post some sample code that demonstrates this timing. Something is

:>seriously wrong. It takes me about 90 seconds to fill a 2 GB file with
:>a record size of 28800 bytes using fwrite(). Adding an fseek() before
:>each fwrite() makes no measurable difference. Just for laughs, I tried
:>writing the file backwards, and that also took 90 seconds.
:
:Posted at the end.

My conclusion: You've got DMA problems.

On my system your code takes about 90 seconds till it prints "All
records put" and runs to completion in a little over 6 minutes, writing
and then reading about 5 GB of data. I didn't investigate the reason
why the read/verify takes longer. Even at ~4.5 minutes it's still
reasonable since reading a file backward breaks all of the kernel's
caching.

I did have to insert definitions for some missing macros:

#define eclose(d) do { if(d>=0) close(d), d = -1; } while(0)
#define efseek(s,o,w) fseek(s,o,w)
#define efwrite(p,z,n,s) fwrite(p,z,n,s)
#define efread(p,z,n,s) fread(p,z,n,s)

You've also got an error in the malloc() call around line 115. That line
should be:

pvf->fileName[i] = (char *) malloc(strlen(pDir) + strlen(aTemplate) +1);

You are missing the "+1" to allow for the terminal null character.

Michael Flanagan

May 3, 2004, 10:16:53 AM

Bob, Thanks much for all your advice. I don't think the sysadmin
will know where to look for DMA problems, and I sure don't. Can you
give me any insights into where I should be looking? What commands
should I look at? Config options? Something on the SCSI controller?

Thanks again.

Michael

On Mon, 03 May 2004 03:03:56 GMT, Robert Nichols
<SEE_SI...@localhost.localdomain.invalid> wrote:

mfla...@MJFlanagan.nospam.com

Robert Nichols

May 4, 2004, 5:45:37 AM
Moving this discussion to email.

Nils O. Selåsdal

unread,
May 4, 2004, 7:21:36 AM5/4/04
to

The key question here is whether you write to the file *sequentially*. Linux
is very good at that.
If, on the other hand, you place data at "random" locations in the file,
it can get heavily fragmented, which can severely decrease performance.

The original poster should probably create the file sequentially *first*.
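
Something along those lines, just as a sketch (the 1 MB chunk size is
arbitrary; compile with -D_FILE_OFFSET_BITS=64 for 2GB files):

#include <stdlib.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (1024 * 1024)

int create_sequentially(const char *path, off_t size)
{
    off_t done = 0;
    char *chunk = calloc(1, CHUNK);     /* one zero-filled megabyte */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0 || chunk == NULL) {
        free(chunk);
        if (fd >= 0)
            close(fd);
        return -1;
    }
    while (done < size) {
        size_t n = (size - done < CHUNK) ? (size_t)(size - done) : CHUNK;

        if (write(fd, chunk, n) != (ssize_t) n) {
            free(chunk);
            close(fd);
            return -1;
        }
        done += n;
    }
    free(chunk);
    close(fd);
    return 0;
}

The writes go to the end of the file in order, so the allocator can hand
out mostly contiguous blocks; the later random updates then just overwrite
space that already exists.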

--
Nils Olav Selåsdal
System Engineer
w w w . u t e l s y s t e m s . c o m


Michael Flanagan

May 6, 2004, 1:23:01 PM
Nils, thanks for the info. I do create the file sequentially, not
randomly. Any ideas why it takes so long?

mfla...@MJFlanagan.nospam.com
