On 16/01/2015 21:45, Claudio Voskian wrote:
> Daniel
>
> Use copy to ... for !deleted() instead.
> You won't go crazy anymore (at least not with "pack").
>
> Regards
> ---
> Claudio Voskian
I have another program (derived from an old one and still including some
very old code) in which I used that technique (though no memo fields were
involved there), but it dates back to the Clipper era. For many years now
I have apparently had no problem with pack and memo fields (I had some
hiccups here and there with indexes). I suppose anyway that when something
goes wrong at a level below the application level (the network level)
there is no way to avoid data corruption. But you are probably right:
some operations are safer than others. To begin with, a "copy to" opens
the target file in exclusive mode, so no sharing-related problems can occur.
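To make the suggestion concrete, here is a minimal sketch of the
copy-based alternative; the file names, the exclusive reopen and the
rename sequence are my assumptions, not a tested routine:

   // sketch: replace PACK with COPY TO ... FOR !Deleted()
   // "data" is an illustrative alias; adapt paths to your app
   USE data EXCLUSIVE                 // nobody else may hold the files
   COPY TO data_tmp FOR !Deleted()    // writes a fresh .dbf/.fpt pair
   CLOSE DATABASES
   // swap in the freshly written pair, keeping the originals as backup
   FRename( "data.dbf", "data_bak.dbf" )
   FRename( "data.fpt", "data_bak.fpt" )
   FRename( "data_tmp.dbf", "data.dbf" )
   FRename( "data_tmp.fpt", "data.fpt" )

Since COPY TO rebuilds both the .dbf and its memo file from scratch, any
stale block pointers are rewritten along the way.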
Here I had a problem clearly caused by the locking of records/files. It
seems that the .fpt file remained locked, so the pack operation was unable
to update it and ended up with the .dbf file packed and the .fpt file
unchanged (silently).
Curious, I took a tour through the sources to discover how PACK is
implemented. In \include\std.ch we see that PACK is translated into
__dbPack(), and in \source\rdd\dbcmd.c __dbPack calls in turn SELF_PACK.
In dbf1.c we have hb_dbfPack, which contains this comment:
/* This is bad hack but looks that people begins to use it :-( so I'll
add workaround to make it more safe */
Then there is a check whether the valResult field of the data structure
(related to the file) is an array or not. It seems that someone stores in
this field a 2-element array containing a codeblock and a numeric value.
I miss the whole point, so I move on.
Next comes the conditional execution of a codeblock for every record
processed (a feature I was completely unaware of) and a final rewriting
of the file header.
It seems that the key is the call to SELF_PACKREC(), which writes the
next record over the current one (if the current one is deleted).
SELF_PACKREC should read the memo field too, and write it into the
current record, leaving unchanged the pointers that connect the dbf
record to the position in the .fpt binary file where the text/binary
data are stored; but apparently, under some conditions, it fails.
Memo fields can get corrupted in two ways, IME. They can contain NIL,
which makes them non-editable although the program doesn't throw errors;
or they can be messy (or, third possibility, the pointer in the DBF is
inconsistent) and the program goes straight to the errorsys.
Anybody having a large database with memo fields could then benefit from
a service routine that checks and repairs memo fields.
Here is an example:
   <open file>
   memoflds := {}
   for j := 1 to FCount()
      if Type( FieldName( j ) ) == "M"
         AAdd( memoflds, { FieldName( j ), j } )
      endif
   next j
   do while .not. Eof()
      // skip deleted (optional)
      do while Deleted()
         SKIP
      enddo
      if Eof()
         exit
      endif
      for j := 1 to Len( memoflds )
         lInval := .f.
         oldeb := ErrorBlock( {| e | Break() } )
         begin sequence
            testo := HardCR( FieldGet( memoflds[ j, 2 ] ) )
         recover
            testo := ""
            lInval := .t.
         end sequence
         ErrorBlock( oldeb )
         if .not. lInval .and. FieldGet( memoflds[ j, 2 ] ) == NIL
            lInval := .t.
         endif
         // error
         if lInval
            MyLog( "Memo field invalid!" + " rec. n. " + Str( RecNo() ) )
            // if set exclusive off:
            // if REC_LOCK( 3 )  // from LOCKS.PRG - simply locks a record
            //                   // (retries for 3 seconds, returns lSuccess)
            FieldPut( memoflds[ j, 2 ], testo )
            // UNLOCK
            // endif
         endif
      next j
      SKIP
   enddo
MyLog() is a simple function that writes to a logfile, so I don't think
it needs to be attached.
HTH
Dan