Under Linux, check file permissions for the index and memo files.
Also check which RDD you are using (NTX? CDX?), and check the file-name
casing: under Linux, data.cdx is not the same as DATA.CDX (unless you
use SET_FILECASE).
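A minimal sketch of that setting, assuming Harbour's set.ch constants (the procedure name is mine; call it once at startup):

#include "set.ch"

PROCEDURE SetupLinuxCase()
   // Force lower case so "DATA.NTX" and "data.ntx" resolve to the
   // same file on a case-sensitive file system
   Set( _SET_FILECASE, "LOWER" )
   Set( _SET_DIRCASE, "LOWER" )   // same idea for directory names
   RETURN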
HTH
Dan
My experience on Windows computers:
Error DBFNTX 1012 "corruption detected <index name>" is very rare but rough.
I cannot remember when I last saw it, but it was almost 30 years ago. As far as I can remember, this type of error is always related to the quality of the computer hardware and software (power-supply stability, operating system, quality of the hard disk, quality of the memory, etc.). In my case the cause of that error was never in the PRGs.
I have an application in which up to 30 DBF files can be open at the same time, plus their .NTX index files, and I cannot remember ever having a 1012 error. I do not work with memo files.
I suggest you keep a backdoor function to reindex a selected DBF so the problem can be solved quickly. Then you can analyze the situation on similar computer configurations.
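A hedged sketch of such a backdoor; the function name and parameters are illustrative only:

// Illustrative backdoor: open one table exclusively and rebuild its index.
FUNCTION ReindexOne( cDbf, cNtx )
   USE ( cDbf ) EXCLUSIVE NEW
   IF NetErr()
      RETURN .F.            // table is in use elsewhere; try again later
   ENDIF
   SET INDEX TO ( cNtx )    // open the suspect .ntx
   REINDEX                  // rebuild every index open in this work area
   USE                      // close table and index
   RETURN .T.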
Regards,
Simo.
Hello!
We have experience with a large number of computers (about 3,000) running very demanding processing with many files and indexes open. Corrupted indexes were always related to some hardware problem. The final proof: we moved about half of the users to our servers with remote access (the programs run directly on the server, so there is no data transfer over the network), and in 10 years we have not had a single case of a corrupted index.
Regards, NB
After years of struggling with corrupted indexes under the native NTX driver, I moved to client/server by changing only the RDD (Advantage Database Server), and the problem was solved with very few changes to the code.
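For reference, a sketch of how small such an RDD switch can be in Harbour with the ADS RDD from contrib; the server-type constant is an assumption about the setup:

#include "ads.ch"

REQUEST ADS                    // link the Advantage RDD (contrib rddads)

PROCEDURE Main()
   rddSetDefault( "ADS" )      // every USE now goes through ADS
   AdsSetServerType( ADS_REMOTE_SERVER )   // client/server, not local files
   // ... the rest of the application stays almost unchanged
   RETURN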
Regards
Daniel Goldberg
La Reja
Bs As
1) Always open all files, with their indexes, in the same order.
2) After a write/update, use SKIP 0 then UNLOCK; do not change this order (or COMMIT, then UNLOCK).
3) Old CLIPPER problem (I never tried to reproduce it on Harbour): the mere existence of a memo field could cause an index corruption message.
4) Old CLIPPER problem: on W95 OSR2, W98 OSR2 and W2000, a wrong network driver caused problems; the same with old Novell 3.11.
5) Old CLIPPER problem: PACK could create duplicate records.
6) Old programmer error: DO WHILE ! RLock(); ENDDO. Insert a wait time: DO WHILE ! RLock(); Inkey( 0.3 ); ENDDO (on CLIPPER, OL_Yield() inside the DO WHILE).
For item 2, I created RecLock() and RecUnlock(), so that the same steps always run in the same order.
FUNCTION RecLock()
   DO WHILE ! RLock()   // retry until the record lock is granted
      Inkey( 0.3 )      // short wait so other stations can proceed
   ENDDO
   RETURN Nil

FUNCTION RecUnlock()
   SKIP 0               // flush the current record buffer first
   UNLOCK               // then release the lock
   RETURN Nil
These can be used on another alias too: alias->( RecLock() ), alias->( RecUnlock() ).
I think the best is to use CDX.
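Switching a DBFNTX application over is often just this (the existing .ntx files still have to be recreated as .cdx):

REQUEST DBFCDX

PROCEDURE Main()
   rddSetDefault( "DBFCDX" )   // tables now use compound .cdx indexes
   // ...
   RETURN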
Note:
In the CLIPPER days I used SIXCDX on Clipper 5.2, and ADS Local from Visual Basic, with no problems with simultaneous access.
At that time the limit for ADS Local was 20 users, local or terminal service.
At a current client, a reindex is needed once a year, or less.
José M. C. Quintas
Your part of the program is old-style programming, maybe with syntax from the Clipper Summer '87 version. I do not want to say that it is wrong, only that you can use newer language constructs which can be more readable or even more efficient.
You do not need to use FLOCK just to add one record. As others suggested, just test the NETERR() function. Or you can write your own network functions to open files, add records, and lock records or files. Maybe use BEGIN SEQUENCE / END SEQUENCE to manage network errors better.
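A minimal sketch of both ideas together; AddRecord() is an illustrative name, not an existing function:

// Append without FLOCK: DBAppend() takes the record lock itself and
// NetErr() reports whether the append failed.
FUNCTION AddRecord()
   LOCAL lOk := .F.
   BEGIN SEQUENCE
      DBAppend()
      IF NetErr()
         BREAK            // append failed; unwind to END SEQUENCE
      ENDIF
      // ... fill the fresh record here ...
      DBCommit()          // flush buffers before releasing the lock
      DBUnlock()
      lOk := .T.
   END SEQUENCE
   RETURN lOk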
I am also not sure that FLOCK is part of your index corruption problem; it worked for you for years before the current 1012 error.
Regards,
Simo.

Hi All,
You can minimise the need for FLock() when appending single new records by creating a cache of blank records whose index keys place them at the end of the current record set, where they stay hidden. The cache can be, say, 1%-2% of the number of active records, depending on the size of the database, and can be built during times of low or zero activity. Any deleted records can be added to the cache. When a new record is required, just retrieve it from the cache and update its index key. If the cache is empty, it can be rebuilt by adding the appropriate number of records under a single FLock().
If records are deleted regularly, this maintains the cache automatically. If the cache gets too large, it can also be trimmed automatically to control the total space used.
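A rough sketch of the retrieval side, assuming an index on a character field KEY and a sentinel value that sorts after every real key; the names are mine, not Bob's:

#define CACHE_KEY Replicate( "~", 10 )   // sorts after all real keys

// Fetch one hidden blank record from the cache and lock it.
FUNCTION GetCached()
   SEEK CACHE_KEY
   DO WHILE ! Eof() .AND. FIELD->KEY == CACHE_KEY
      IF RLock()
         RETURN .T.       // caller fills the record and rewrites KEY
      ENDIF
      SKIP                // this one is taken; try the next cached record
   ENDDO
   RETURN .F.             // cache empty: rebuild it under a single FLock()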
Regards
Bob
Ok, I'm aware of that. When adding just one record, there's really no need to use FLock().
But if a complex document containing many records in multiple tables is being inserted into the database, it is necessary to ensure that all the tables are available before the insertion begins. If one of the tables is not available, the operation does not start. FLock() is the simplest way to ensure this: all the necessary tables are locked, and if that succeeds, the whole transaction (with its multiple DBAppend() calls) will also succeed.
Regards, NB
I'll try to explain how I do it. For example, adding an invoice looks like this. First, the user enters the invoice into temporary tables. When they click "Save", the program runs this procedure:
T_BEGIN
T_Flock( "Invoice" )
T_Flock( "Items" )
Invoice->( DBAppend() )
// write invoice data
FOR EACH Items
   Items->( DBAppend() )
   // write one item
NEXT
T_END
"T_..." are my commands that ensure the entire transaction succeeds. First, the INVOICE and ITEMS tables are locked. If either of them is unavailable, the entire transaction is aborted. If both tables are locked, I continue: save the invoice header, then all the items. Since both tables are locked, there is no possibility for another user to interrupt the transaction.
If any of the required tables is unavailable, the user receives a "please wait" message and the program retries after a short pause. In practice this rarely happens, because the transaction is very short. I have installations running 50+ users without any problems.
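NB's actual T_ commands are not shown in the thread, but with T_BEGIN / T_END mapped onto BEGIN SEQUENCE / END SEQUENCE, the locking helper could look roughly like this sketch:

// Sketch only: lock a whole table for the transaction, or abort it.
FUNCTION T_Flock( cAlias )
   IF ! ( cAlias )->( FLock() )
      dbUnlockAll()               // release locks already taken
      BREAK cAlias + " is busy"   // unwind to the transaction's END SEQUENCE
   ENDIF
   RETURN .T.

A failed FLock() then aborts the whole transaction before any DBAppend() has run, which is exactly the guarantee described above.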
Regards, NB