one (indirect and slow) workaround would be to calculate the index key value
from the current record of the dbf, then perform a seek and check that the
index is positioned at the same record.
for non-unique key occurrences there would have to be additional processing.
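That workaround can be sketched in a language-neutral way. Below is a minimal Python model of the idea (a sorted list of (key, recno) pairs standing in for the NTX, a dict standing in for the DBF; all names are invented for illustration): recompute each record's key, "seek" it in the index, and verify that the index positions on the same record, scanning forward over duplicates for non-unique keys.

```python
import bisect

def check_index(records, index):
    """records: {recno: key} as recomputed from the DBF.
    index: sorted list of (key, recno) pairs, as stored in the index.
    For each record, 'seek' its recomputed key and verify that some
    entry with that key carries the record number (scanning forward
    over duplicates handles non-unique keys)."""
    keys = [k for k, _ in index]
    bad = []
    for recno, key in sorted(records.items()):
        i = bisect.bisect_left(keys, key)          # the "seek"
        found = False
        while i < len(index) and index[i][0] == key:
            if index[i][1] == recno:               # positioned at same record?
                found = True
                break
            i += 1                                 # non-unique key: keep scanning
        if not found:
            bad.append(recno)
    return bad
```

A record whose key was changed in the dbf without the index open shows up in the returned list, because its recomputed key no longer seeks to an entry carrying its record number.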
On Tue, 24 Jun 2003 23:45:22 +0200, "gabor salai"
<gabor...@euroherc.hr> wrote:
>is there some regular (easy) method to retrieve value of index key "burned"
>into index file of current record?
Assuming the 5.2e version, Ordkey() returns the string representation of
the current key. Unfortunately, I am not aware of the existence of an
OrdKeyBlock() which would return the index key block.
>of course, it is supposed that both dbf and ntx are opened.
>purpose is to check the index integrity.
>idea is to skip through the dbf with the index open and compare the index
>key value from the index file with the value recalculated from the dbf
>using the indexkey() expression.
With a corrupted index, it is likely the application crashes before you
do the key comparison.
--
Bambang P
http://bpranoto.tripod.com
thanks for your attention, but the problem is slightly different.
i'll try to explain.
i am not looking for the index key *expression* but the *value* of the
index key which is stored in the index file, specific to each record of the dbf.
i want to check whether the stored value of the index key corresponds to a
newly recalculated value (using the index key expression) from the dbf.
if it doesn't correspond, there is a good chance that somebody (mistakenly)
changed the dbf asynchronously from the index file.
an application crash is a kind of "safe failure". at least the crash will
warn the user and the app will not produce wrong results.
with an index file just a little bit out of sync, the app may work, but give
wrong results. when the index is out of sync, it simply doesn't point to the
record that contains the changed key.
the idea was to have some independent tool that would constantly monitor
all indices on the file server, maybe even launched as a task on the server.
No, you don't have any method to get the index key value, at least not unless
you evaluate the indexkey / ordkey expression with the macro operator or by
"blockifying" it.
AFAIK, that's true under plain Clipper 5.2x (with its RDDs), but, if you use a
different RDD, then:
* for SIX3:
Sx_KeyData([[nTagNo | cTagName,] nOrder | cIndexName])
nTagNo = Position of tag within compound index file
cTagName = Name of the tag
nOrder = Position of index in list of indexes
cIndexName = Name of the index
* for Comix3:
cmxKeyVal() -> xVal
(*) extracted from its ng...
These two are the only RDDs I use, so I can't tell you anything about the rest.
HTH
--
Saluten
Claudio
Buenos Aires - Argentina
--
"Lo importante no es saber, sino tener el telefono del que sabe"
("What matters is not knowing, but having the phone number of the one who knows")
"gabor salai" <gabor...@euroherc.hr> wrote in message
news:bdbhsm$2u3e$1...@as201.hinet.hr...
>> >....
>> >is there some regular (easy) method to retrieve value of index key
>> >"burned" into index file of current record?
>>
>> Assumed 5.2e version, Ordkey() returns the string representation of
>> the current key. ....
>>
>thanks for your attention, but problem is slightly different.
>i'll try to explain. i am not looking for the index key *expression* but the *value* of
>index key which is stored in index file, specific for each record of dbf.
>.....
Hoping to be able to find some low-level backdoor, I played a bit with
the RDD API in C. Unfortunately, the best I could get was
SELFORDEXPR(), which returns just the same thing as the Clipper-level
OrdKey() function. No luck there :(, so....
Another idea is to compare the key expression of the current record
with the records above and below it, something like this:
FUNCTION IndexSanity()
   local cExp, bExp, nRecno
   local cThisValue, cPrecValue, cNextValue
   local lReturn := .F.
   begin sequence
      cExp := OrdKey()
      bExp := &( "{||" + cExp + "}" )
      //
      go top
      do while ! eof()
         nRecno := recno()
         cThisValue := eval( bExp )
         // Compare with the preceding record
         skip -1
         if ! bof()
            cPrecValue := eval( bExp )
            if cPrecValue > cThisValue
               break
            endif
         endif
         go nRecno
         // Compare with the next record
         skip
         if ! eof()
            cNextValue := eval( bExp )
            if cNextValue < cThisValue
               break
            endif
         endif
         go nRecno
         // Next record
         skip
      enddo
      lReturn := .T.
   end
RETURN lReturn
A solution is in DBU (shipped with Clipper). DBU reads the index
expression of an NTX file by opening and reading it with low level file
functions.
The code is in DBUUTIL.PRG, in the function ntx_key.
Stefan Neuhauser
"Clipper is not dead, it just smells funny"
> i am not looking for the index key *expression* but the *value* of the
> index key which is stored in the index file, specific to each record of
> the dbf.
> i want to check whether the stored value of the index key corresponds to a
> newly recalculated value (using the index key expression) from the dbf.
> if it doesn't correspond, there is a good chance that somebody
> (mistakenly) changed the dbf asynchronously from the index file.
I use a method somewhat like this to test each index
file for corruption -- in an initial open/test procedure
at app startup.
As far as I remember, I used IndexKey() -- created a
CodeBlock from the generated character string, and then evaluated
that CodeBlock for each record in the test loop.
I'll have to dig deep for this code, so have some patience and
I'll come back ... if it's still relevant for you?
yes, still very interested.
but it seems that the ntx driver has some weakness.
there is a function in my c53 named ordkeyval() which is exactly what i
need, but, for ntx, which is my internal historical standard, like many
other ord* functions it simply returns NIL.
one of the solutions is to make the index sanity checking tool using harbour,
with its enhanced ntx driver (i have to re-check my harbour doc)
thanks for the suggestion, but
that ntx_key returns the index key *expression*, common to the whole index
file, not the index key *value*, specific to each record
I use Summer '87 and if I understand your question.... This is what I use:
Mindex = indexkey(0)   && the string expression defining the currently active index
Mvalue = &Mindex       && the value of the index key for the current record
I hope this helps (or I apologize if I misunderstood your question)
Fred Zuckerman
San Diego, CA, USA
I get the sense that the original poster wanted to know how to retrieve
the actual key values from the index file. There used to be some C code
floating around that Bri wrote back in the Nantucket days that would walk
the index tree and return the values. That was for .NTX indexes, of
course. CDX indexes have a different structure. Sorry, but I don't have a
copy of that code handy.
This sort of thing typically used to come up in the context of "I want to
build an index-file tester to tell me if the index is corrupted", and the
conclusion was generally that it was faster and easier just to recreate
the index. In this case, I think the o.p. wanted to use it to verify if
the records in the table had changed, which is sort of the reverse. At
best, it would only tell you if the fields used in the index expression
had changed, however.
just an idea.
Scott
"Frank" <NoSp...@online.no> wrote in message
news:3EF9F760...@online.no...
thanks for your code example.
currently i have three sanity checking (as you have named it) methods:
1) skipping through the index and checking that the newly recalculated index
key expression from the dbf results in non-descending values (V[n+1] >= V[n]);
this is exactly the method you have mentioned above.
2) having the same dbf opened in two areas, once with the index, once without
(natural order); skipping through the natural order, seeking in the indexed
area, and checking that the found record corresponds to the reference record
in the naturally ordered area
3) making a temporary index with the same key as the checked one, opening the
database in two areas, once with the real index to be checked, once with the
temporary one, and checking that skipping through both areas follows the same
record number pattern
thanks, i have also found ordkeyval() for my c53, but it doesn't work
with the ntx driver, which i am forced to use (the function simply returns NIL)
thanks for your attention, what you wrote is correct,
but my idea is a little bit more complex.
i don't want just to *recalculate* the index key value from the index key
expression,
i want also to *retrieve* the index key value as stored in the index file.
then i want to *compare* these two values, to check that the index file
points correctly to the record.
yes, you are reading my mind!
reindex is the best, but it needs exclusive access, so i have to
interrupt all users, which is not so convenient.
the idea was to construct an index checker that will permanently monitor the
database, maybe even as a task on the file server.
yes, in some cases the method will show the corruption, but it says
nothing about a field from the index key that was changed without the
index file open.
Still digging :-)
I've read this thread -- and wonder how deep you are willing
to check each index file. Extensive testing could be done,
but at some point it's easier and faster to recreate the
whole index ... On databases with few records, this is
both faster and safer; to avoid lock problems a temp
directory/name should be used. It's with larger databases
that a different approach is needed.
---
The method I referred to in my first message works fine to
detect if .dbf and .ntx are out of sync (_almost_
foolproof). Before I do this test, I check for obvious sync
problems -- by stepping forward and back with the index:
dbSetOrder( 0 )
dbGoTop(), dbSkip( 1 ), if Eof() or Bof() -> Out of sync
dbGoBottom(), dbSkip( -1 ), if Eof() or Bof() -> Out of sync
dbSetOrder( 1 )
dbGoTop(), dbSkip( 1 ), if Eof() or Bof() -> Out of sync
dbGoBottom(), dbSkip( -1 ), if Eof() or Bof() -> Out of sync
dbSetOrder( 0 )
dbGoBottom(), if Recn()<>LastRec() -> Out of sync
---
I've also seen that the .dbf header _could_ contain the
wrong number of records, hence any index could get into
trouble. This can be discovered by counting the
records with brute force and comparing the result with
LastRec(), and also with simple math relating the
file size to the number of records.
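The "simple math" mentioned above can be made concrete. Assuming the standard dBASE III header layout (record count at bytes 4-7, header length at bytes 8-9, record length at bytes 10-11, all little-endian), a sketch of the cross-check might look like this (the function name is invented for illustration):

```python
import struct

def dbf_counts(header: bytes, file_size: int):
    """Cross-check the record count stored in a DBF header against
    the actual file size. Offsets per the standard dBASE III header:
    bytes 4-7 record count, 8-9 header length, 10-11 record length
    (all little-endian)."""
    reccount, header_len, rec_len = struct.unpack_from("<IHH", header, 4)
    # size implied by the header; the trailing 0x1A EOF byte
    # may or may not be present, so accept both
    implied = header_len + reccount * rec_len
    ok = file_size in (implied, implied + 1)
    return reccount, ok
```

If the file size disagrees with the implied size, either the header record count is wrong or the file was truncated, and any index built over the table is suspect.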
Another problem in the .dbf which causes sync trouble
is when a record contains the End_Of_File ASCII code...
This _could_ happen in a system crash (on a LAN), or through sloppy
use of low level file functions.
---
How deep will you search ?
> purpose is to check the index integrity.
What I use in all my applications is a general routine that compares
the date and time stamps of the DBF with those of the index files.
This is from S87 days, so please disregard the bad coding style...
I don't have time to correct things that are working :)
HTH
António Vila-Chã
Viana do Castelo
Portugal
x-------------------
// PROTO IndiceOk( dbf, [ntx1][, 2][, 3][, 4]) -> lOk
FUNCTION IndiceOk( NomeDbf, Indice1, Indice2, Indice3, Indice4, Indice5 )
LOCAL Indices := PCOUNT() - 1
LOCAL lVerifHora := !("HORANTX" $ GETENV("AVC"))
// this ENV var is used to disable the checking when,
// due to server configuration, the reindexing is called
// too many times. I've got 1 site that needs this.
PRIVATE DataDbf, HoraDbf   // used in TestaNtx()
if Indices < 1
   RETURN(.y.)
endif
DECLARE n1[1], t1[1], d1[1], h1[1]
ADIR( NomeDbf, n1, t1, d1, h1 )
DataDbf := d1[1]
HoraDbf := h1[1]
if .NOT. TestaNTX( Indice1, lVerifHora )
   RETURN(.n.)
endif
if Indices > 1
   if .NOT. TestaNTX( Indice2, lVerifHora )
      RETURN(.n.)
   endif
   if Indices > 2
      if .NOT. TestaNTX( Indice3, lVerifHora )
         RETURN(.n.)
      endif
      if Indices > 3
         if .NOT. TestaNTX( Indice4, lVerifHora )
            RETURN(.n.)
         endif
         if Indices > 4
            if .NOT. TestaNTX( Indice5, lVerifHora )
               RETURN(.n.)
            endif
         endif
      endif
   endif
endif
RETURN(.y.)

// PROTO TestaNTX(ntx, lVerifHora) -> lOk
FUNCTION TestaNTX( Indice, lVerifHora )
default lVerifHora to .y.
if .not. FILE(Indice)
   RETURN(.n.)
endif
if !lVerifHora   // .n. means do not verify the stamps,
   return(.y.)   // just verify that the NTX exists
endif
ADIR( Indice, n1, t1, d1, h1 )
// If it is being created by another client, the size is zero.
if t1[1] == 0
   RETURN(.n.)
endif
if d1[1] = DataDbf .AND. h1[1] = HoraDbf
   RETURN(.y.)
endif
if d1[1] < DataDbf
   RETURN(.n.)
else   // the date is the same... and the time?
   IF d1[1] = DataDbf
      if LEFT( h1[1], 5 ) < LEFT( HoraDbf, 5 )
         // same date, but the time is lower
         RETURN(.n.)
      endif
   endif
endif
RETURN(.y.)
thanks for your advice.
everything that may be done at the prg level is just a matter of time, needs
and, naturally, ideas. since index integrity is not maintained by the dbms,
there is no ideal solution.
at this moment i am looking for that particular piece of information
mentioned above -> the stored value of the index key.
since i have found (somewhere) the structure of the ntx index file, and it is
basically just a linked list of key values and corresponding record numbers,
maybe a solution would be to write a low level file i/o function to read the
index file and retrieve info from it.
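As a starting point for such a low-level reader, here is a sketch of decoding the NTX header. The field offsets follow the commonly published NTX format description (signature, version, root-page offset, next free page, item size = key size + 8, key size, decimals, max items per page, half-page count, a 256-byte NUL-padded key expression, and a unique flag); they are an assumption based on those public descriptions, not verified against every driver version, and the function name is invented:

```python
import struct

def read_ntx_header(buf: bytes) -> dict:
    """Decode the fixed part of an NTX header, per the commonly
    published layout. All integers are little-endian; the key
    expression occupies bytes 22..277, NUL padded, followed by
    the unique flag at byte 278."""
    (sig, version, root, next_page,
     item_size, key_size, key_dec,
     max_item, half_page) = struct.unpack_from("<HHIIHHHHH", buf, 0)
    key_expr = buf[22:278].split(b"\x00", 1)[0].decode("ascii")
    unique = buf[278] != 0
    return {"signature": sig, "root": root, "item_size": item_size,
            "key_size": key_size, "key_expr": key_expr, "unique": unique}
```

With the key size and root-page offset in hand, the reader can then walk pages starting from the root.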
recreating the index is of course the best, but it needs the database to be
"disconnected".
yes, it is a very good solution, but surely not ideal (100% foolproof).
but in real life, an ideal solution is not a must.
everybody is just looking for a *good enough* solution.
|
| I've also seen that the .dbf header _could_ contain the
| wrong number of records, hence any index could get into
| trouble. This can be discovered by counting the
| records with brute force and comparing the result with
| LastRec(), and also with simple math relating the
| file size to the number of records.
|
It can easily be fixed with something like:
dbappend()
dbdelete()
provided that the database is opened shared (important).
SIx: use sx_KeyData()
ADS: use ax_KeyVal()
NTX... Best solution - kill DBFNTX.LIB.
More:
1. open dbf+ntx
2. eval index key expression via &(OrdKey())
3. Try to find this key + this record in the index file
4. repeat for all records
use .... new shared index ....
AllKeysIsOk(1)
func AllKeysIsOk( nOrder )
   local Rec, Key
   ordsetfocus(0)
   while !eof()
      Rec := recno()
      while !rlock()
      end
      Key := &( OrdKey( nOrder ) )
      OrdSetFocus( nOrder )
      dbSeek( Key )
      while recno() # Rec .and. &( OrdKey() ) == Key
         skip
      enddo
      if recno() # Rec
         return .F.
      endif
      ordsetfocus(0) ; skip
   enddo
return .T.
Best regards.
How about this:
Simple: have a field named "TEST" and fill it with a test pattern, let's say
"99".
This way you can have several records that can be used for testing (if
needed).
The field does not have to be large.
This should be fast and simple.
can you describe your idea in a little more depth?
i am not sure your point is clear to me ...
My suggestion was to use the RECCOUNT() function. Count the records before
an index is open and then after an index is open. It's fast and simple, and
it works very well for me. When I screw something up, it is usually when I
am adding a new record. When servers are overloaded they tend to flush the
network buffers. That will mess up an index too. This is a good general
catchall idea.
Most of the ideas submitted here basically want to compare the index key to
the current record.
I can't see this being a very good solution either. When the index is
corrupted, Clipper can't detect that the record does not match the current
index key value. Think about it this way: if I want to find
HISTORY->NAME = "Scott" and the record that comes up says "Derek", then
Clipper can't see a problem. That is why it gave you the wrong record! If it
could, then Clipper would tell you that the index key is wrong and exit the
program. To use the same system variables to try to verify that the index is
correct is a waste of time. If you could do that, then the NTX driver would
have this error checking built into it.
The only way I can see to test an index is to have a test record. This
record will have a known value. Then seek for that value. You can use an
entire record or you can use a field. I suggested a field called TEST. Make
it char 2, then pick a record in the DBF and give it TEST = "99". To test
the index, seek TEST = "99" and then check to see if NAME = "Scott". This
was just a half-baked idea that I suggested. I was tired and I was not
thinking straight. Now that I have taken some time to think about it, I am
asking everyone to ignore this idea.
HOWEVER, I would suggest to the original poster that he write a small
program that simply deletes all indexes and then builds new ones. I suggest
that he run this program on Friday @ 5:00; that way it should be done by
Monday morning. He should do this until he finds whatever is causing the
index problems. Then fix it.
you are right. such a scenario is dangerous, but unfortunately not
impossible ...
there are many ways for the index file to become corrupted.
my original post was oriented to the case when the dbf is changed without the
index file open (when somebody changes a field in the dbf, perhaps using dbu,
forgetting that there is an associated index file).
i was looking for some kind of "indexkeyvalue()" function in the ntx driver,
but it seems there is no such thing.
and, according to many posts on this theme, there is no absolutely safe
method of keeping an index file (ntx) in sync with the dbf ...
as already posted by you and the others, it is easy to test
seek/found by recalculating the index key on the "found()" record, but
what to do with:
seek something
while something == &( indexkey() )
   counter++
   skip
enddo
if the index is out of sync, it will simply exit the loop before all the
expected occurrences (present in the dbf) are encountered.
sometimes it is not convenient to have all occurrences sequentially
enumerated; instead, they are just held in a group by strength of the index
rule.
knowing all of this, i may rearrange new apps, but what to do with the
old ones?
thanks for your example!
the descending test is good, but there is still a small gap. let it be:
(aab) the original indexkey value
(abb) the second a mistakenly changed to b
as you see, in this particular case the descending test may not find the
corruption
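The gap is easy to demonstrate: a one-character corruption can leave the key sequence still ascending, so a pure monotonicity scan (method 1 above) passes. A tiny Python illustration:

```python
def is_ascending(keys):
    """The descending test: pass if no key is smaller than its
    predecessor (i.e. the sequence is non-descending)."""
    return all(a <= b for a, b in zip(keys, keys[1:]))

healthy   = ["aab", "abc", "acc"]   # original index order
corrupted = ["abb", "abc", "acc"]   # 'aab' mistakenly changed to 'abb'
# both sequences are non-descending, so this test alone misses the change
```

This is why the seek-back verification (recompute the key, seek it, check the record number) catches cases the ordering check cannot.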
"gabor salai" <gabor...@euroherc.hr> wrote in message news:<bdrmgk$a7hc$1...@as201.hinet.hr>...
> if the index is out of sync, it will simply exit the loop before all the
> expected occurrences (present in the dbf) are encountered.
It is wrong to check an index and do "dbSkip()" with the active (and
potentially corrupt) index.
See my sample. You must use ordSetFocus(0) to access all records.
Bye.
sorry, i was not making a *compilation* of all the posted answers and
examples, i was just (trying to) answer particular posts and examples, since
people spent time reading my original post.
so i simply missed your answer. in fact, i put it aside for later reading
and forgot it.
and, yes, skipping through the dbf in natural order while checking the seek()
seems good.
if it is established that the index file has exactly the same number of
entries as there are records in the dbf (somebody posted an example checking
that aspect), and skipping in natural order finds each record's entry in the
index file (as your example shows), that may finally prove that the index
file is *in* sync with the dbf, and that later skipping through the index
file will produce correct results.
yes, as somebody also posted yesterday, it seems that
"seeking myself" is a powerful solution
On a Novell server the "NDIR" command will give you the owner's name and the
time and date of the files.
<<Memory lane>>
I had a problem with a manager. You know the type: she had 15 years of
expertise. Everyone else makes the mistakes but her. The company would die
in one month if she were to leave.
.....
She used her one (1) week of dBASE III training and dBASE III to alter the
DBF. Then blamed me for MY poor programming skills.
I had some fun with her at the Friday meeting. :)
we (the IT staff ourselves) sometimes used to repair/maintain dbf data using
the dbu utility.
i blame myself when, after repairing data on which an index key depends, i
forget to reindex the application. i have not found any case where a user
would change the data with dbu. they are aware that those are their data and
they are responsible for them, and they don't want to destroy them, so they
don't play with dbu.
by the way, i found the ntx file description on the internet. it is
extremely simple!
i wrote a prg function that reads the ntx file at a low level, performs a
simulated skip and does the indexkeyval() substitution, allowing me to check
index sanity.
the whole subsystem is no longer than a few dozen lines!
although written as pure clipper functions, it is not more than 50% slower
than the built-in ntx subsystem.
in this function, i am missing only one part:
how to find the index page corresponding to a record number after dbgoto()?
it seems that i have to check all pages, which means scanning the whole
ntx file, but on the other hand, clipper seems to do it instantly.
does clipper have some index file buffering, or is there some additional
info in the ntx file, hidden from me?
Gabor,
Evaluate the Index Expression against the current record to obtain its key
value, then (seek) the key value using your routine, and scan the matching
Index Page entries for the entry matching ( RECNO() ).
To simulate (SEEK), compare the (last) key value of the Index Page to
the (seek value), and jump to the next Page if not in range, e.g.:
cSeekKey := "SALAI"
(read Index Header)
(read Index Page) ( cBuffer )
FOR nItemNum := 1 TO (nPageItems)
   IF ( nItemNum == 1 )
      // calculate last Item Pointer pos (nLastPtr)
      // read Item Pointer pos to obtain Item Offset (nLastPos)
      // read (key value) using Item Offset (xKeyVal)
      IF ( LEFT( xKeyVal, LEN( cSeekKey ) ) < cSeekKey )
         nItemNum := nPageItems
         nItemPtr := nLastPtr
         nItemPos := nLastPos
         nTotKeys += nPageItems
      ENDIF   // bypass (skip) page !
   ENDIF   // Index Page seek (bypass) testing at start of Page scan
   // -- process each page entry -- (code fragment)
   // obtain current Item Pointer pos (nItemPtr)
   // obtain current Item Offset (nItemPos)
   xKeyVal := SUBSTR( cBuffer, nItemPos + 9, nKeySize )
   nRecNo  := BIN2L( SUBSTR( cBuffer, nItemPos + 5, 4 ) )
NEXT   // nItemNum
The above will position quickly to the correct page; from there just
process each entry ( xKeyVal / nRecNo ) and compare the (xKeyVal) values
to (cSeekKey) and the (nRecNo) to the expected ( DBF recno() ).
(Processing hint: if (xKeyVal) is greater than cSeekKey, you have found
(all) the key(s) - you can stop scanning the Index file.)
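Following the offsets in the fragment above (record number at item offset +4, key bytes at +8, with a 16-bit entry count and a table of 16-bit item pointers at the start of the page), decoding one page's entries might be sketched like this. Python is used for illustration; the page layout is an assumption based on public NTX format descriptions, and the synthetic test page is invented:

```python
import struct

def page_items(page: bytes, key_size: int):
    """Walk one NTX page: a 16-bit entry count, then a table of
    16-bit item offsets; each item holds a 32-bit child-page
    pointer, a 32-bit record number, and the key bytes."""
    (count,) = struct.unpack_from("<H", page, 0)
    items = []
    for n in range(count):
        (pos,) = struct.unpack_from("<H", page, 2 + 2 * n)   # item pointer table
        child, recno = struct.unpack_from("<II", page, pos)  # child page, recno
        key = page[pos + 8 : pos + 8 + key_size]             # key bytes at +8
        items.append((recno, key))
    return items
```

Each (recno, key) pair can then be compared against the key recomputed from the DBF record, which is exactly the stored-value check the thread is after.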
HTH & Regards,
Bob
your explanation is logical, thank you.
but my experiment with dbu from c87 shows something different:
without the index open, i changed a key value on one record of the dbf, to
be completely out of order, then opened the index, did a goto to the tested
record number, and dbu jumped to the proper record.
since the (new) key value from the dbf and the one stored in the ntx were
different, what helped dbu to do the seek as you suggest?
that test makes me think that clipper does some buffering of the index file
which helps it optimize goto with the index active?
i am going to repeat this test on some very large dbf/index where buffering
would be physically impossible due to memory restrictions ...
This is due to Clipper 'tolerating' mismatched key values on read
operations (AFAIK the RDD does detect the key as mismatched); try
(editing) the record/field(s) comprising the key value in DBU with the
Index assigned, and the "Index Corrupted" message should magically appear.
:)
For even more fun, try the same operation using a Clipper 5.x version
of DBU -after- doing the above: it seems that Clipper 5.x will attempt
to "auto-repair" the Index by inserting a (new) key into the Index.
At this point the Index file's (key count) will exceed the
Database's (record count), and the Index will contain (2) entries for the
same record, one correct, one not. Do this enough and at some point
the 5.x RDD will report an Index Corrupted Error, although I am not
sure at exactly what point - potentially your routine can catch the
corruption -before- Clipper does, if you test for this condition.
Regards,
Bob
thanks for sharing your experience with us ...
but, just to confirm:
with a changed index key field in the dbf, when the key value in the record
becomes different from the stored value in the index file, seek *may not*
work, since the newly recalculated key value from the changed dbf doesn't
point anywhere in the index file.
so, after dbgoto(), if the seek fails, as i forced it to fail, it seems that
clipper *must* do a full index file scan to find the index page which
contains the record's recno(). on a large database (250000 rec, 5mb ntx
file), i noticed significant file read activity on my win98 system monitor
at the moment of performing dbgoto() in dbu.
the tolerating behaviour is just a strategy (maybe it should be
configurable): when to raise an error, when to tolerate, and when to
auto-repair.
looking at the ntx file structure, it seems easy to generate custom built
orders?
it is not clear why this is not supported in the ntx rdd.
it is a very funny thing (in a custom built order) when a few key values
point to the same record number. it allows you (for example) to have an
index of a person's data on both the name and the first name (assuming the
char fields are of the same length), or to have a single record of some
movie indexed on a few actors simultaneously (assuming you have char fields
actor1, actor2, etc), so whatever actor you type in the seek request, you
will get the same movie. without a custom built order, you need complicated
relations and support code to achieve the same effect.
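The multi-key idea is essentially a multimap: several key values pointing at the same record number, so a seek on any of them lands on the one record. As an illustration only (a Python dict standing in for the custom order; the record numbers and fixed-width actor keys are invented):

```python
# toy "custom order": multiple key values mapped to the same record number
order = {}

def add_keys(recno, *keys):
    """Register several keys for one record, as a custom-built
    order with duplicate record pointers would."""
    for k in keys:
        order.setdefault(k, []).append(recno)

def seek(key):
    """Return the record numbers whose keys match the seek value."""
    return order.get(key, [])

# movie record 12 is indexed under two actor keys (fixed-width char fields);
# record 34 shares one of the actors
add_keys(12, "WAYNE     ", "STEWART   ")
add_keys(34, "STEWART   ")
```

Seeking either actor key reaches record 12, without any of the extra relations the post mentions.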
another thing coming to mind is malicious: create looped index page
pointers, so that skip, on some parts of the dbf, falls into an endless loop
(just as an idea)
My pleasure.
>
> but, just to confirm:
> with a changed index key field in the dbf, when the key value in the
> record becomes different from the stored value in the index file, seek
> *may not* work, since the newly recalculated key value from the changed
> dbf doesn't point anywhere in the index file.
> so, after dbgoto(), if the seek fails, as i forced it to fail, it seems
> that clipper *must* do a full index file scan to find the index page which
> contains the record's recno(). on a large database (250000 rec, 5mb ntx
> file), i noticed significant file read activity on my win98 system monitor
> at the moment of performing dbgoto() in dbu.
Yes, it is most likely scanning for the Index key value -> recno().
>
> the tolerating behaviour is just a strategy (maybe it should be
> configurable): when to raise an error, when to tolerate, and when to
> auto-repair.
That is where your index check rtn. comes in??
re: "autorepair", that was a poor choice of words on my part.
What I meant to say is that while the Clipper 5.x DBFNTX RDD attempted
to "repair" the Index automatically, what it ended up doing was
inserting (another) key value into the Index, in turn further corrupting
the Index file (the key count now exceeds RECCOUNT() AND it has a bad key
value), -further- compromising the integrity of the parent Database.
Ouch.
For a frequently recommended solution, see: www.advantagedabase.com
(I think this site is still available, though I can't view it anymore
- Proxomitron's (Ad List) filtering auto-kills the Browser
connection, and I ain't turnin' no filterin' off, it stays
unavailable. :)
> looking at the ntx file structure, it seems easy to generate custom built
> orders?
> it is not clear why this is not supported in the ntx rdd.
It seems the FoxPro Index structure is better suited for this.
(IIRC that is what both Comix/Clipmore & the SIX Driver use.)
> it is a very funny thing (in a custom built order) when a few key values
> point to the same record number. it allows you (for example) to have an
> index of a person's data on both the name and the first name (assuming the
> char fields are of the same length), or to have a single record of some
> movie indexed on a few actors simultaneously (assuming you have char
> fields actor1, actor2, etc), so whatever actor you type in the seek
> request, you will get the same movie. without a custom built order, you
> need complicated relations and support code to achieve the same effect.
>
I suppose, though I've always considered any Index that points (2)
entries at the same record as corrupted (see above).
You could always scan via (n) Indexes & write the matching (parent)
records found to a (temporary) child database & relate that database
back against the parent; only (1) relation is needed there. Requires
more coding though.
re: custom Indexes to do the same - don't know, I don't use them.
> another thing coming to mind is malicious: create looped index page
> pointers, so that skip, on some parts of the dbf, falls into an endless
> loop (just as an idea)
It's your data (I hope) .. I wouldn't recommend doing it though. :)
Regards,
Bob