
reading an Int Array from a Binary file?


Jack

Dec 28, 2000, 9:20:23 AM
Hi, should be a simple question
how would i write an int array to a binary file then read it back out and
print it out again?

Thanks

Jack

#include <stdio.h>


int main(void)
{
int i,b[10],a[10];


FILE *fptr = fopen ("Boo.bin","w+b");
if (fptr==NULL)
{
printf("Error");
exit(1);
}
for (i=0;i<10;i++)
a[i]=i+5000;


fwrite(a, sizeof(int),1,fptr);
fseek(fptr,0,SEEK_SET);
fread(b,sizeof(int),1,fptr);
for (i=0;i<10;i++)
printf("b = %d\n",b[i]);
return 0;
}


cja

Dec 28, 2000, 11:33:45 AM
In article <01c070da$c00c8000$33889fd4@iankelly>,
"Jack" <jac...@currentbun.com> wrote:

> how would i write an int array to a binary file then read it back out
> and print it out again?
>

If you write and read 10 ints instead of 1, I think your code will do
what you want. Had you used a #define instead of hardcoding 10s and 1s
you could have avoided this problem.
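For illustration, with a #define and the count argument fixed, the round
trip might be sketched like this (the ARRAYLEN name, the roundtrip helper,
and the error-return convention are my own choices, not the only way to
do it):

```c
#include <stdio.h>

#define ARRAYLEN 10

/* Write all ARRAYLEN ints to the named file, seek back to the start,
 * and read them into b.  Returns 0 on success, -1 on any failure. */
int roundtrip(const char *name, const int *a, int *b)
{
    FILE *fptr = fopen(name, "w+b");

    if (fptr == NULL)
        return -1;

    /* the count argument is ARRAYLEN, not 1 -- that was the bug */
    if (fwrite(a, sizeof *a, ARRAYLEN, fptr) != ARRAYLEN ||
        fseek(fptr, 0L, SEEK_SET) != 0 ||
        fread(b, sizeof *b, ARRAYLEN, fptr) != ARRAYLEN) {
        fclose(fptr);
        return -1;
    }
    return fclose(fptr) == EOF ? -1 : 0;
}
```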


> #include <stdio.h>
>
> int main(void)
> {
> int i,b[10],a[10];
>
> FILE *fptr = fopen ("Boo.bin","w+b");
> if (fptr==NULL)
> {
> printf("Error");
> exit(1);
> }
> for (i=0;i<10;i++)
> a[i]=i+5000;
>
> fwrite(a, sizeof(int),1,fptr);
> fseek(fptr,0,SEEK_SET);
> fread(b,sizeof(int),1,fptr);
> for (i=0;i<10;i++)
> printf("b = %d\n",b[i]);
> return 0;
> }
>
>

--
\\//
Chris Adamo
ad...@computer.org


Sent via Deja.com
http://www.deja.com/

Peter Shaggy Haywood

Jan 3, 2001, 9:49:34 AM
Groovy hepcat Jack was jivin' on 28 Dec 2000 14:20:23 GMT in
comp.lang.c.
reading an Int Array from a Binary file?'s a cool scene! Dig it!

>Hi, should be a simple question
>how would i write an int array to a binary file then read it back out and
>print it out again?

You could try something like this:

#include <stdio.h>
#include <stdlib.h>

#define FILENAME "boo.bin"
#define ARRAYLEN 10

int main(void)
{
FILE *fp;
int a[ARRAYLEN], b[ARRAYLEN];
int i;

/* fill an array with data */
for(i = 0; i < ARRAYLEN; i++)
{
a[i] = i + 5000;
}

/* open file for writing */
fp = fopen(FILENAME, "wb");
if(!fp)
{
fprintf(stderr, "Error opening file in write mode.\n");
return EXIT_FAILURE;
}

/* write array to file */
if(ARRAYLEN != fwrite(a, sizeof *a, ARRAYLEN, fp))
{
fprintf(stderr, "Error writing to file.\n");
fclose(fp);
return EXIT_FAILURE;
}

/* done writing to file */
fclose(fp);

/* open file for reading */
fp = fopen(FILENAME, "rb");
if(!fp)
{
fprintf(stderr, "Error opening file in read mode.\n");
return EXIT_FAILURE;
}

/* read array from file */
if(ARRAYLEN != fread(b, sizeof *b, ARRAYLEN, fp))
{
fprintf(stderr, "Error reading from file.\n");
fclose(fp);
return EXIT_FAILURE;
}

/* done reading from file */
fclose(fp);

/* display read array */
for(i = 0; i < ARRAYLEN; i++)
{
printf("b[%d] = %d\n", i, b[i]);
}

return 0;
}

I'll point out some things that are wrong with your code below.

>#include <stdio.h>

You call exit() in this program, but have failed to provide a
prototype for it. You should include stdlib.h here.

>int main(void)
>{
>int i,b[10],a[10];
>
>FILE *fptr = fopen ("Boo.bin","w+b");

I don't recommend this. I suggest you keep your reading and writing
operations completely separate, rather than opening the file in a
"mixed" mode here. Open the file for writing, write to it, then close
it. Then open it for reading, read from it, then close it. This is
much clearer and more straightforward, IMHO. That's the way I'd do
it, unless there were some compelling reason to do otherwise.

>if (fptr==NULL)
>{
>printf("Error");

Error messages traditionally go to stderr, and are usually more
descriptive than that.

>exit(1);

You need to include stdlib.h, as stated above, for this. Also,
passing 1 to exit() (or returning it from main()) is not portable. I
suggest you use the standard macro EXIT_FAILURE for portability. This
macro is defined in stdlib.h.
Also, I find it more natural, more elegant to return from main()
rather than call exit(). That's a matter of style, and you're free to
ignore it. But I recommend it.

>}
>for (i=0;i<10;i++)
>a[i]=i+5000;

Your indentation is non-existent. This is very poor. Indentation is
important for code legibility.

>fwrite(a, sizeof(int),1,fptr);

What are you writing to the file here? Think about it for a moment.
What, exactly are you writing? Have a close look at the function call.
Understand every argument passed to fwrite() here. (This also goes for
the fread() call below, because you made the same mistake there.)
Let's run through these arguments, one by one, and see what we have:

1) a, an array of 10 integers, decays to address of first element.

2) sizeof(int), the size of each "object" to be written to the file.
The objects to be written to the file are of type int, hence this
argument.

3) 1, the number of "objects" to write. AH HA!!! Here we've found
something interesting. You wanted to write the whole array to the
file, but here you are telling fwrite() to write only one element. No
doubt, this is your error.

4) fptr, pointer representing the file you want to write to.

>fseek(fptr,0,SEEK_SET);
>fread(b,sizeof(int),1,fptr);

See above.

>for (i=0;i<10;i++)
>printf("b = %d\n",b[i]);
>return 0;
>}

I don't see where you closed the file.
--

Dig the even newer still, yet more improved, sig!

http://alphalink.com.au/~phaywood/
"Ain't I'm a dog?" - Ronny Self, Ain't I'm a Dog, written by G. Sherry & W. Walker.
I know it's not "technically correct" English; but since when was rock & roll "technically correct"?

Eric Sosman

Jan 3, 2001, 11:24:54 AM
Peter Shaggy Haywood wrote:
> [an example of reading and writing a binary stream, complete with
> lots of error-checking, except for:]

>
> /* done writing to file */
> fclose(fp);

The fclose() call can fail, and should be error-checked just as
assiduously as all the other file operations. Sounds pedantic, right?
Wrong. In a former life, my company's product managed to destroy a
customer's data by omitting a check for failure when closing a file.
The sequence went something like (paraphrased):

stream = fopen(tempfile, "w");
if (stream == NULL) ...
while (more_to_write)
if (fwrite(buffer, 1, buflen, stream) != buflen) ...
fclose (stream);

/* The new version has been written successfully. Delete
* the old one and rename.
*/
remove (realfile);
rename (tempfile, realfile);

Of course, what happened was that fclose() ran out of disk space
trying to write the last couple blocks of data, so the `tempfile'
was truncated and unusable. And since the fclose() failure wasn't
detected, the program went right ahead and destroyed the best extant
version of the data in favor of the damaged version. And, as Murphy
would have it, the victim in this particular incident was the person
in charge of the customer's department, the person with authority to
buy more of our product or replace it with a competitor's product --
and, natch, a person who was already unhappy with us for other reasons.

It'd be a stretch to ascribe all of the ensuing misery to this
single omission, but it may be worth pointing out that both the customer
and my former company have since vanished from the corporate ecology.

CHECK THOSE FAILURE CODES!
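For concreteness, a version of the paraphrased sequence above with the
close checked might look like this (the file names, the save_file helper,
and the return convention are placeholders, not the actual product code):

```c
#include <stdio.h>

/* Sketch only: write the new data to a temp file, and slide it into
 * place only if every step -- including the fclose() -- succeeded.
 * Returns 0 on success, -1 on failure (realfile is left untouched). */
int save_file(const char *tempfile, const char *realfile,
              const char *buffer, size_t buflen)
{
    FILE *stream = fopen(tempfile, "wb");

    if (stream == NULL)
        return -1;

    if (fwrite(buffer, 1, buflen, stream) != buflen) {
        fclose(stream);
        remove(tempfile);         /* don't keep a damaged temp file */
        return -1;
    }

    /* fclose() flushes buffered data, so a full disk can surface here */
    if (fclose(stream) == EOF) {
        remove(tempfile);
        return -1;
    }

    remove(realfile);             /* may fail if realfile doesn't exist yet */
    return rename(tempfile, realfile) == 0 ? 0 : -1;
}
```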

--
Eric....@east.sun.com

mike burrell

Jan 3, 2001, 1:36:29 PM
Eric Sosman <Eric....@east.sun.com> wrote:
> Peter Shaggy Haywood wrote:
>> [an example of reading and writing a binary stream, complete with
>> lots of error-checking, except for:]
>>
>> /* done writing to file */
>> fclose(fp);

> The fclose() call can fail, and should be error-checked just as
> assiduously as all the other file operations. Sounds pedantic, right?

indeed it is. generally speaking, there's no way to intelligently handle a
failed closing of a file. generally, if i were to check for return values,
my code would look like:
if (fclose(f) == EOF)
carry_on;
else
carry_on;
boy, good thing i checked the return value!

> Wrong. In a former life, my company's product managed to destroy a
> customer's data by omitting a check for failure when closing a file.
> The sequence went something like (paraphrased):

> Of course, what happened was that fclose() ran out of disk space
> trying to write the last couple blocks of data, so the `tempfile'
> was truncated and unusable. And since the fclose() failure wasn't
> detected, the program went right ahead and destroyed the best extant
> version of the data in favor of the damaged version.

you've just demonstrated an extremely strange situation, and, i might add, a
very poorly designed one (why would you remove the file even if it did
succeed? this should be a job for the implementation, not the program).
you should be conscious, whenever you use fclose(), of what might happen if
it fails, but that does not imply that you should check to see whether it
failed or not.

> CHECK THOSE FAILURE CODES!

there was a good troll on this newsgroup not too far back about (i think)
making a good hello world program. it went something like this:

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
if (printf("hello world!\n") < 0) {
if (fprintf(stderr, "printf failed\n") < 0) {
if (fprintf(stderr, "fprintf failed to print "
"failure of printf\n") < 0) {
/* ... ad infinitum ... */
}
return EXIT_FAILURE;
}
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}

checking the return code is not always the best approach.

--
/"\ m i k e b u r r e l l
\ / ASCII RIBBON CAMPAIGN mik...@home.com
X AGAINST HTML MAIL,
/ \ AND NEWS TOO, dammit finger mik...@mikpos.dyndns.org for GPG key

Kaz Kylheku

Jan 3, 2001, 1:59:13 PM
On Wed, 03 Jan 2001 18:36:29 GMT, mike burrell <mik...@home.com> wrote:
>Eric Sosman <Eric....@east.sun.com> wrote:
>
>> The fclose() call can fail, and should be error-checked just as
>> assiduously as all the other file operations. Sounds pedantic, right?
>
>indeed it is. generally speaking, there's no way to intelligently handle a
>failed closing of a file. generally, if i were to check for return values,
>my code would look like:
> if (fclose(f) == EOF)
> carry_on;
> else
> carry_on;
>boy, good thing i checked the return value!

You could return an indication to the caller that the last bit of data
in the stream was not successfully written to the underlying file.
The program may need to take some action, such as terminating
unsuccessfully, or notifying an interactive user that something went
wrong.

Or are you saying that it's acceptable to just leave a corrupt output
file and chug merrily along?

Eric Sosman

Jan 3, 2001, 2:33:01 PM
mike burrell wrote:
>
> Eric Sosman <Eric....@east.sun.com> wrote:
> > [concerning the need to check for fclose() failure:]

> > In a former life, my company's product managed to destroy a
> > customer's data by omitting a check for failure when closing a file.
> > The sequence went something like (paraphrased):
>
> > Of course, what happened was that fclose() ran out of disk space
> > trying to write the last couple blocks of data, so the `tempfile'
> > was truncated and unusable. And since the fclose() failure wasn't
> > detected, the program went right ahead and destroyed the best extant
> > version of the data in favor of the damaged version.
>
> you've just demonstrated an extremely strange situation, and, i might add, a
> very poorly designed one (why would you remove the file even if it did
> succeed? this should be a job for the implementation, not the program).
> you should be conscious, whenever you use fclose(), of what might happen if
> it fails, but that does not imply that you should check to see whether it
> failed or not.

"Poor design" is in the eye of the beholder, I guess, but I
think I can refute the "extremely strange" claim. The application
was a structured document editor with the usual load/massage/write
outline. When saving the new version of a document, the program
wrote it to a temporary file and then replaced the original with
the newly-written copy; this meant that if the write failed for
some reason, the original was still available as a reasonably
recent backup. (Actually, it kept an additional level of backup:
the sequence was "write temp, remove backup, rename real file to
backup, rename temp to real file" -- the same dance I illustrated,
just with one more do-si-do.) It's not a particularly strange
dance at all; Emacs users will recognize it as closely akin to
the mechanism that creates "foo.c~" files.
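That dance could be sketched as a helper like this (names are
illustrative; the caller is assumed to have already written and
successfully fclose()d the temp file before rotating anything):

```c
#include <stdio.h>

/* Sketch of the backup cycle described above: "remove backup, rename
 * real file to backup, rename temp to real file".  Run this only after
 * the temp file has been written and closed without error, so a failed
 * save leaves the existing files undisturbed. */
int rotate_save(const char *temp, const char *real, const char *backup)
{
    remove(backup);                  /* drop the oldest copy */
    rename(real, backup);            /* real file becomes the backup */
    return rename(temp, real) == 0 ? 0 : -1;
}
```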

I agree that there's little point in checking for an error for
which no recovery action is possible. But in this case there *was*
an appropriate recovery action: namely, to leave the pre-existing
files undisturbed instead of sliding them along the backup cycle.
Even when the program isn't going to "do" anything about an error,
it's very often a good idea to record the fact that all is not well.
Call perror() or put a failure message in your log or cause the
eventual exit status to be EXIT_FAILURE or ... The mechanisms are
widely variable, but there's almost always something better to do
than to pretend everything's hunky-dory.

--
Eric....@east.sun.com

B. van Ingen Schenau

Jan 3, 2001, 3:19:02 PM
On Wed, 03 Jan 2001 18:36:29 GMT, mike burrell <mik...@home.com>
wrote:

>Eric Sosman <Eric....@east.sun.com> wrote:
<snip>


>> Wrong. In a former life, my company's product managed to destroy a
>> customer's data by omitting a check for failure when closing a file.
>> The sequence went something like (paraphrased):
>
>> Of course, what happened was that fclose() ran out of disk space
>> trying to write the last couple blocks of data, so the `tempfile'
>> was truncated and unusable. And since the fclose() failure wasn't
>> detected, the program went right ahead and destroyed the best extant
>> version of the data in favor of the damaged version.
>
>you've just demonstrated an extremely strange situation, and, i might add, a
>very poorly designed one (why would you remove the file even if it did
>succeed? this should be a job for the implementation, not the program).

I don't think the situation is that strange. How about a tool to
physically delete records that are marked for deletion from a
database. As you can't update a file in that way, the best (and most
robust) solution is to copy all records that should remain to a
second file, delete the first file and make sure the new file is
recognised correctly by the other programs.
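Such a tool might be sketched like this (the record layout -- one
deleted-flag byte plus fifteen data bytes -- is an invented example, as
are the function and file names):

```c
#include <stdio.h>

#define RECLEN 16   /* invented fixed-size record: 1 flag byte + 15 data */

/* Sketch of the compaction tool described above: copy every record
 * whose deleted-flag byte is clear into a new file, checking the
 * fclose() of the output as this thread recommends.  The caller would
 * then delete the old file and rename the new one into place. */
int compact(const char *oldname, const char *newname)
{
    FILE *in = fopen(oldname, "rb");
    FILE *out = fopen(newname, "wb");
    unsigned char rec[RECLEN];
    int ok = (in != NULL && out != NULL);

    while (ok && fread(rec, 1, RECLEN, in) == RECLEN) {
        if (rec[0] != 0)                    /* marked for deletion: skip */
            continue;
        if (fwrite(rec, 1, RECLEN, out) != RECLEN)
            ok = 0;
    }
    if (in != NULL)
        fclose(in);
    if (out != NULL && fclose(out) == EOF)  /* the close can fail too */
        ok = 0;
    return ok ? 0 : -1;
}
```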



>you should be conscious, whenever you use fclose(), of what might happen if
>it fails, but that does not imply that you should check to see whether it
>failed or not.
>
>> CHECK THOSE FAILURE CODES!

I would state it somewhat differently:
When you use a function that returns error values, check those values
if you depend on the correct execution of the function.

>
>there was a good troll on this newsgroup not too far back about (i think)
>making a good hello world program. it went something like this:
>
>#include <stdio.h>
>#include <stdlib.h>
>
>int
>main(void)
>{
> if (printf("hello world!\n") < 0) {
> if (fprintf(stderr, "printf failed\n") < 0) {

I would not perform this test, because the correct functioning of the
program does not depend on the success of reporting an error to the
user.


> if (fprintf(stderr, "fprintf failed to print "
> "failure of printf\n") < 0) {
> /* ... ad infinitum ... */
> }
> return EXIT_FAILURE;
> }
> return EXIT_FAILURE;
> }
> return EXIT_SUCCESS;
>}
>
>checking the return code is not always the best approach.

Checking error codes is usually a very good approach, but sometimes
you don't (or shouldn't) care whether a function succeeded.

Bart v Ingen Schenau
--
Remove NOSPAM to mail me directly
FAQ for clc: http://www.eskimo.com/~scs/C-faq/top.html

mike burrell

Jan 3, 2001, 3:38:26 PM
Kaz Kylheku <k...@ashi.footprints.net> wrote:
> On Wed, 03 Jan 2001 18:36:29 GMT, mike burrell <mik...@home.com> wrote:
>>Eric Sosman <Eric....@east.sun.com> wrote:
>>
>>> The fclose() call can fail, and should be error-checked just as
>>> assiduously as all the other file operations. Sounds pedantic, right?
>>
>>indeed it is. generally speaking, there's no way to intelligently handle a
>>failed closing of a file. generally, if i were to check for return values,
>>my code would look like:
>> if (fclose(f) == EOF)
>> carry_on;
>> else
>> carry_on;
>>boy, good thing i checked the return value!

> You could return an indication to the caller that the last bit of data
> in the stream was not successfully written to the underlying file.

that's quite likely not true, though. there's nothing about fclose()
failing that says that the file is corrupt, or that it wasn't closed, or
that there was anything wrong with the stream at all. at best you could
print a message that says "fclose failed", but neither the user nor the
programmer really knows what that means, as there are no semantics carried
along with it.

to check whether something really did go wrong, you have to rely on the
implementation, so i think it's best left to the implementation to check the
validity of the file. you *could* return something (an error code or return
status) that says "something might be wrong with this file, but i'm not
entirely sure; please check it out". in many cases, though, fclose()
failing doesn't corrupt the file, and even if it did, it's not vital (or
relevant at all) to the execution of the rest of the program, so returning
to the implementation immediately would be an error, i think.

> The program may need to take some action, such as terminating
> unsuccessfully, or notifying an interactive user that something went
> wrong.

i wouldn't have any great problems with printing a message on stderr.
i think the proper practice would be for the implementation (script or human
or otherwise) to test the validity of output, regardless of what intuition
the programmer thinks he might have. in that case, the message is redundant
(though admittedly not harmful).

> Or are you saying that it's acceptable to just leave a corrupt output
> file and chug merrily along?

it's not corrupt (necessarily). but yes, there's not much you can do about it.
it's best to leave it to the implementation to clean up afterwards.

mike burrell

Jan 3, 2001, 3:41:53 PM
B. van Ingen Schenau <bvisN...@universalmail.com> wrote:
> On Wed, 03 Jan 2001 18:36:29 GMT, mike burrell <mik...@home.com>
> wrote:
>>Eric Sosman <Eric....@east.sun.com> wrote:
> <snip>
>>> Wrong. In a former life, my company's product managed to destroy a
>>> customer's data by omitting a check for failure when closing a file.
>>> The sequence went something like (paraphrased):
>>
>>> Of course, what happened was that fclose() ran out of disk space
>>> trying to write the last couple blocks of data, so the `tempfile'
>>> was truncated and unusable. And since the fclose() failure wasn't
>>> detected, the program went right ahead and destroyed the best extant
>>> version of the data in favor of the damaged version.
>>
>>you've just demonstrated an extremely strange situation, and, i might add, a
>>very poorly designed one (why would you remove the file even if it did
>>succeed? this should be a job for the implementation, not the program).

> I don't think the situation is that strange. How about a tool to
> physically delete records that are marked for deletion from a
> database. As you can't update a file in that way, the best (and most
> robust) solution is to copy all records that should remain to a
> second file, delete the first file and make sure the new file is
> recognised correctly by the other programs.

i don't think that's the job of the program; it's the job of the
implementation. an implementation script (such as a shell script) has a lot
more knowledge about the implementation than a C program does, and is better
suited to deal with implementation-specific issues, such as file management.
filtering "in place" is a bad practice as it is, but if you think you must
do it, best leave it to the experts (the implementation).

Eric Sosman

Jan 3, 2001, 4:23:33 PM
mike burrell wrote:
>
> Kaz Kylheku <k...@ashi.footprints.net> wrote:
> > On Wed, 03 Jan 2001 18:36:29 GMT, mike burrell <mik...@home.com> wrote:
> >>Eric Sosman <Eric....@east.sun.com> wrote:
> >>
> >>> The fclose() call can fail, and should be error-checked just as
> >>> assiduously as all the other file operations. Sounds pedantic, right?
>
> > You could return an indication to the caller that the last bit of data
> > in the stream was not successfully written to the underlying file.
>
> that's quite likely not true, though. there's nothing about fclose()
> failing that says that the file is corrupt, or that it wasn't closed, or
> that there was anything wrong with the stream at all. at best you could
> print a message that says "fclose failed", but neither the user nor the
> programmer really knows what that means, as there are no semantics carried
> along with it.

Equally, there's nothing about fwrite() failing that says the
data wasn't actually written to the file -- maybe the fwrite() failure
just means the implementation couldn't be *certain* the write worked,
and it's warning you of possible trouble. So: would you recommend
ignoring fwrite() failures?

I'll admit that circumstances do exist where it makes sense to
ignore fclose() failures. But I'm at a loss to imagine a situation
where fclose() failures are unimportant *and* fwrite() failures are
worth checking for -- they're either both unimportant (so all checking
can be omitted) or both important (so both should be checked). It
simply doesn't make sense to mix'n'match.

> it's not corrupt (necessarily). but yes, there's not much you can do about.
> it's best to leave it to the implementation to clean up afterwards.

Usually, returning the failure indication is all the "cleaning
up" the implementation is going to undertake. It made some kind of
"best effort" to do your bidding, and then reported that your bidding
couldn't be done. There is no reason to expect that it has also used
your credit card to order a bigger disk, or E-mailed your system
administrator with a petition for a larger quota. Recovery is the
job of the application, even if the recovery action consists of using
assorted implementation-supplied tools -- the selection of which tools
to use and whether to use them at all is in the realm of application-
specific knowledge, which the underlying implementation lacks.

--
Eric....@east.sun.com

mike burrell

Jan 3, 2001, 5:59:49 PM
Eric Sosman <Eric....@east.sun.com> wrote:
> I'll admit that circumstances do exist where it makes sense to
> ignore fclose() failures. But I'm at a loss to imagine a situation
> where fclose() failures are unimportant *and* fwrite() failures are
> worth checking for -- they're either both unimportant (so all checking
> can be omitted) or both important (so both should be checked). It
> simply doesn't make sense to mix'n'match.

yes that's a good point.

nais...@enteract.com

Jan 3, 2001, 11:24:25 PM
mike burrell <mik...@home.com> wrote:
> i don't think that's the job of the program; it's the job of the
> implementation. an implementation script (such as a shell script)
> has a lot more knowledge about the implementation than a C program
> does, and is better suited to deal with implementation-specific
> issues, such as file management. filtering "in place" is a bad
> practice as it is, but if you think you must do it, best leave it
> to the experts (the implementation).

Four problems:

* Who says my implementation knows any better than C does?
Maybe my program runs on a computer with no OS, and the
C program is the only program running.

* Who says I want to tie my program down to a specific
implementation when a C-based solution works fine?

* Why should my text editor written in C suddenly require
non-C parts in order to save a file? I certainly don't
want to deal with calling shell scripts with system().

* Why should I incur the overhead of multiple processes
when I can just as easily deal with the problem without
doing so?

--
nais...@enteract.com

Steven Huang

Jan 4, 2001, 1:56:10 PM
In article <l8M46.271498$76.68...@news1.rdc1.ab.home.com>,

mike burrell <mik...@home.com> wrote:
> B. van Ingen Schenau <bvisN...@universalmail.com> wrote:
[...]

> > How about a tool to
> > physically delete records that are marked for deletion from a
> > database. As you can't update a file in that way, the best (and
> > most robust) solution is to copy all reecords that should remain
> > to a second file, delete the first file and make sure the new
> > file is recognised correctly by the othe programs.

> i don't think that's the job of the program;

What's not the job of the program? Several tasks were listed
above, and in particular I don't know how to write a shell script
that can recognize that a particular record in a database was
logically deleted.

> it's the job of the
> implementation. an implementation script (such as a shell script)
> has a lot more knowledge about the implementation than a C
> program does, and is better suited to deal with
> implementation-specific issues, such as file management.

Which part of creating a new file, copying valid contents of
old file to new file, and then deleting the old file, is
implementation-specific?

Besides, why require yet another language (assuming the rest of
the program is already in C)? That only decreases portability.

> filtering "in place" is a bad practice as it is, but if you
> think you must do it, best leave it to the experts (the
> implementation).

"Filtering in place"?
