On 09 November 2012 at 22:17:17, Guillermo Varona Silupú
<gvar...@hotmail.com> wrote:
hi,
in the GOTOC (Good Old Times Of Clipper) I had the very same problem.
I imposed the following guideline:
- no RAM-dependent solution (no matter how big the txt file or how tight
the memory limits)
I know that now with Harbour you could load a couple of gigabytes of a txt
file directly into RAM and then parse it, but no programmer coming from the
past can even conceive of such an obscenity. :-)
so my solution was:
- an external loop reading a chunk of the txt file (FOpen(), FRead()). This
was the reading buffer.
- an inner loop parsing the buffer and locating the delimiters of a
"paragraph" (CR+LF or LF, OS dependent).
If an end-of-paragraph was found, pass the "paragraph" to the parser,
subtract it from the buffer, and keep scanning; if not, save the remainder
of the buffer, read another chunk (if EOF was not reached), append it and
re-parse the buffer, and so on.
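In today's Harbour terms, a minimal sketch of those two loops could look
like this (the chunk size, the LF handling and the ParseParagraph() name
are my assumptions, not the original code; ParseParagraph() is the
word-splitting example further down):

#include "fileio.ch"

PROCEDURE Main( cFile )
   LOCAL nHandle, cBuffer, cChunk, nRead, nPos, cPara
   LOCAL nChunkSize := 4096             // assumed chunk size

   IF Empty( cFile )
      ? "Usage: parser <textfile>"
      RETURN
   ENDIF

   nHandle := FOpen( cFile, FO_READ )
   IF nHandle == F_ERROR
      ? "Cannot open", cFile
      RETURN
   ENDIF

   cBuffer := ""
   DO WHILE .T.
      // external loop: read a chunk of the txt file into the buffer
      cChunk := Space( nChunkSize )
      nRead  := FRead( nHandle, @cChunk, nChunkSize )
      IF nRead == 0                     // EOF reached
         EXIT
      ENDIF
      cBuffer += Left( cChunk, nRead )

      // inner loop: locate the end-of-paragraph delimiters in the buffer
      DO WHILE ( nPos := At( Chr( 10 ), cBuffer ) ) > 0
         cPara   := Left( cBuffer, nPos - 1 )
         cBuffer := SubStr( cBuffer, nPos + 1 )  // subtract the paragraph
         IF Right( cPara, 1 ) == Chr( 13 )       // CR+LF vs LF, OS dependent
            cPara := Left( cPara, Len( cPara ) - 1 )
         ENDIF
         ParseParagraph( cPara )
      ENDDO
   ENDDO

   IF ! Empty( cBuffer )                // last paragraph, no trailing EOL
      ParseParagraph( cBuffer )
   ENDIF

   FClose( nHandle )
   RETURN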
BTW, the "paragraph" parser in some implementations then located "words"
(character strings between spaces, commas, periods, etc.) and made further
evaluations on the extracted "words"; in other implementations it looked
for keywords or substrings; in another one it replaced characters,
translating them... etc.
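For the word-splitting variant, a possible ParseParagraph() could be as
simple as this (the delimiter set and the ? output are just placeholders):

STATIC PROCEDURE ParseParagraph( cPara )
   LOCAL cDelims := " ,.;:!?" + Chr( 9 )  // assumed delimiter set
   LOCAL cWord := ""
   LOCAL i, c

   FOR i := 1 TO Len( cPara )
      c := SubStr( cPara, i, 1 )
      IF c $ cDelims                    // a delimiter ends the current word
         IF ! Empty( cWord )
            ? cWord                     // evaluate/collect the "word" here
         ENDIF
         cWord := ""
      ELSE
         cWord += c
      ENDIF
   NEXT
   IF ! Empty( cWord )                  // word ending with the paragraph
      ? cWord
   ENDIF
   RETURN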
I used such a schema to read a variety of files, ranging from EBCDIC files
coming from mainframes (with EBCDIC->OEM conversion routines) to any other
file you can imagine.
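The EBCDIC->OEM conversion can be done with a 256-byte lookup table applied
character by character, roughly like this (the identity table here is a
dummy; the real one depends on the code pages involved):

STATIC FUNCTION EbcdicToOem( cIn )
   LOCAL cOut := ""
   LOCAL cTable := ""
   LOCAL i

   // dummy identity table; a real EBCDIC->OEM table maps each of the
   // 256 source codes to the proper target character
   FOR i := 0 TO 255
      cTable += Chr( i )
   NEXT

   FOR i := 1 TO Len( cIn )
      cOut += SubStr( cTable, Asc( SubStr( cIn, i, 1 ) ) + 1, 1 )
   NEXT
   RETURN cOut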
It is not very easy, eh. Anyway, I remember I started from the Clipper
sample programs, IIRC copyfile.prg or something similar.
So before posting tons of old and badly written code, the question is: what
do you need exactly?
Dan