Error DBFNTX/1012 corruption detected


Michael Green

Oct 30, 2025, 5:48:32 AM
to Harbour Users
Has anyone encountered this error:
Error DBFNTX/1012 corruption detected <index name>

If so, what was the cause? I'm running on FreeBSD 13.2, if that matters.
Sorry about the weak description. Anyway, thanks for looking.

Daniele Campagna

Oct 30, 2025, 5:54:31 AM
to harbou...@googlegroups.com

Under Linux, check file permissions for the index and memo files.

Also check which RDD you are using (NTX? CDX?). Check the file-name casing as well: under Linux, cdx is not the same as CDX (unless you set _SET_FILECASE).
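A minimal sketch of the casing setting mentioned above (the table and index names are hypothetical; accepted values for these settings are "LOWER", "UPPER" and "MIXED"):

```harbour
#include "set.ch"

REQUEST DBFNTX

PROCEDURE Main()
   rddSetDefault( "DBFNTX" )
   // Force lower-case file names for tables/indexes so "CUSTOMER.NTX"
   // and "customer.ntx" resolve to the same file on a case-sensitive
   // file system such as Linux or FreeBSD.
   Set( _SET_FILECASE, "LOWER" )
   Set( _SET_DIRCASE, "LOWER" )
   USE customer SHARED NEW       // hypothetical table
   SET INDEX TO customer         // resolved as customer.ntx
   USE
   RETURN
```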

HTH

Dan

--
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: https://groups.google.com/group/harbour-users

Michael Green

Oct 30, 2025, 9:42:04 AM
to Harbour Users
Hmmm. I thought it might be an issue with the number of files opened, so I checked; each instance (up to a maximum of 5) opens a total of 47 tables/indexes. I'm using FreeBSD and accessing the program over ssh, so no issues with Samba. 

Michael Green

Nov 1, 2025, 10:40:33 AM
to Harbour Users
I reduced the number of files opened and the problem seems cured. Odd. Has anyone seen anything similar? How would I investigate whatever resource is being used up?

cod...@outlook.com

Nov 1, 2025, 12:42:59 PM
to Harbour Users

My experience on Windows machines:

Error DBFNTX/1012 corruption detected <index name> is a very rare but rough error.

I can't remember when I last saw it, but it was almost 30 years ago. As far as I remember, this type of error is always related to the quality of the computer hardware and software (power-supply stability, operating system, quality of the hard disk, quality of the memory, etc.). In my case the cause of that error was never in the PRGs.

I have an application in which up to 30 DBF files can be open at the same time, plus their .NTX index files. I can't remember ever having a 1012 error. I don't work with memo files.

I suggest having some backdoor function to reindex a selected DBF, so you can quickly solve the problem. Then you can analyze the situation on similar computer configurations.
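A minimal sketch of such a backdoor, assuming the table can be briefly opened exclusively (the function and parameter names are hypothetical):

```harbour
// Hedged sketch: rebuild the given NTX files for one table. Requires a
// moment of exclusive access, so all other users must close the table.
FUNCTION ReindexDbf( cDbf, aNtx )
   LOCAL cNtx
   USE ( cDbf ) EXCLUSIVE NEW
   IF NetErr()                    // someone else still has it open
      RETURN .F.
   ENDIF
   FOR EACH cNtx IN aNtx
      SET INDEX TO ( cNtx )
      REINDEX                     // rebuild from the stored key expression
   NEXT
   USE                            // close the table and its indexes
   RETURN .T.
```

Usage would be something like `ReindexDbf( "customers", { "customers" } )`.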

Regards,

Simo.


Michael Green

Nov 2, 2025, 10:37:31 AM
to Harbour Users
thanks for the insight

hmpaquito

Nov 3, 2025, 5:28:14 AM
to Harbour Users
Hi Michael,

I suggest two things:
- First, do your operations in batches, and make saving the last step.
- Second, before starting the processes, check that the indexes are correct. I have a function somewhere that appends a blank record to the end of the .dbf and then deletes it; it does a few more things to verify the indexes are fine. What I gain from this is that if the program is going to break with a run-time error, it does so before the process runs. This has saved me a lot of time because it keeps processes from being left half-finished.

I hope this helps.

Michael Green

Nov 3, 2025, 7:47:27 AM
to Harbour Users
I'm currently writing a test program to open a large number of files to see if I can replicate the error. Details soon.

Michael Green

Nov 3, 2025, 8:57:53 AM
to Harbour Users
Well, no luck in replicating the message when opening 100 tables/indexes. A mystery...

Nenad Batoćanin

Nov 3, 2025, 9:11:29 AM
to harbou...@googlegroups.com

Hello!

 

We have experience with a large number of computers (about 3000) with very demanding processing with a large number of files and indexes open. Corrupted indexes were always related to some hardware problem. Final proof: we transferred about half of the users to our servers with remote access (programs run directly on the server, there is no data transfer over the network). In 10 years we have not had a single case of corrupted indexes.

 

Regards, NB


Michael Green

Nov 3, 2025, 11:01:19 AM
to Harbour Users
Thank you for the input. I'm not getting the error now. However all information is useful. Tar </northern English vernacular>

danr...@gmail.com

Nov 6, 2025, 3:10:30 PM
to harbou...@googlegroups.com

After years of struggling with corrupt indexes under the native NTX driver, I moved to client/server by changing only the RDD (Advantage Database Server), and the problem was solved with very few changes to the code.

Regards

Daniel Goldberg

La Reja

Bs As


Michael Green

Nov 11, 2025, 7:39:57 AM
to Harbour Users
As you were: I'm still getting the error. It's not a 'too many files open' issue; it's something else. Has anyone seen the error when the index wasn't corrupted? I know the index is OK because a previous version works fine without any reindexing. All ideas welcome.

Mario H. Sabado

Nov 11, 2025, 8:27:47 AM
to harbou...@googlegroups.com
Hi Michael,

What's the create mask setting in your samba configuration?  Please test also the latency of your connection from workstation to server.  It should be consistent <1ms if your application is being accessed directly from the samba shared folder.  Fluctuating latency usually results in frequent index corruption in my experience.  Otherwise, you may try LetoDB/LetoDBf server to access your database and index files via TCP/IP that is more tolerant to higher network latency.

Best regards,
Mario


José M. C. Quintas

Nov 11, 2025, 9:10:28 AM
to harbou...@googlegroups.com

1) Always open files, with all their indexes, in the same order.

2) After a write/update, use SKIP 0 then UNLOCK; don't change this order (or COMMIT, then UNLOCK).

3) Old CLIPPER problem (I never tried to reproduce it on Harbour): the mere existence of a memo field could cause an index corruption message.

4) Old CLIPPER problem: on W95 OSR2, W98 OSR2 and W2000, a wrong network driver caused problems; the same with old Novell 3.11.

5) Old CLIPPER problem: PACK could cause duplicate records.

6) Old programmer error: DO WHILE ! RLock(); ENDDO. Insert a wait time: DO WHILE ! RLock(); Inkey(0.3); ENDDO (on CLIPPER, OL_Yield() inside the DO WHILE).

For option 2, I created RecLock() and RecUnlock(), so the same steps always happen in the same order.

FUNCTION RecLock()
   DO WHILE ! RLock()
      Inkey( 0.3 )
   ENDDO
   RETURN Nil

FUNCTION RecUnlock()
   SKIP 0
   UNLOCK
   RETURN Nil

These can be used with another alias: alias->( RecLock() ), alias->( RecUnlock() )

I think the best is to use CDX.


Note:

In the CLIPPER days, I used SIXCDX on Clipper 5.2 and local ADS on Visual Basic, with no problems with simultaneous access.

At that time, the limit for local ADS was 20 users, local or via terminal services.

At a current client, a reindex is needed once a year, or less.


José M. C. Quintas

Michael Green

Nov 11, 2025, 11:06:32 AM
to Harbour Users
So no Samba for me. I use ssh in terminals to access the application running on a FreeBSD server. 

Michael Green

Nov 11, 2025, 11:08:59 AM
to Harbour Users
I don't think there's any actual index corruption, as an earlier version runs fine.

Francesco Perillo

Nov 11, 2025, 2:11:08 PM
to harbou...@googlegroups.com
Hi Michael
just to be sure I understand correctly:
you have a FreeBSD server that accepts incoming ssh connections. The ssh daemon spawns a Harbour program that accesses files (dbf and ntx) as local files.

So your users are working and this error appears:
- to all users ?
- just opening the file ?
- doing a seek ?

What do you mean when you say "an earlier version runs fine"?

I sometimes get a "silent" index corruption. By silent I mean that I search for a record and can't find it; after creating new indexes, the record appears. The index is corrupted but all the internals are OK. Lately we have had some problems with docking stations losing connectivity while writing records.
I've never had anything as serious as your message in the last 30 years (Clipper 87, Clipper 5, and various Harbour versions).

If I had to investigate I'd move in several directions:
- file system cache, mount options
- hard disk cache: is it an array, does it have a battery-backed cache, can it be set to write-through (slower, I know)
- search for zombie processes, halted, segfault, core dumps
- search for OOM killer

I'd write a producer/consumer software to test the full stack and in particular the RLOCK/FLOCK adherence under stress.

I'd look for programs running by cron that may be written with different harbour versions or different style of LOCK

I'd also look for backup software that may open the files in a different way.

Are the dbf/ntx reachable in some other way (samba, nfs,...)


I'm really curious, since I'd like to port my program to Linux to hide all the files from Samba. I want to create a setup like yours, but I'm afraid of a different locking scheme, so I'm thinking of migrating to netio; but if I migrate to netio I can hide the files anyway...

Sorry for the long message. Please check, as already suggested, that you SKIP 0/COMMIT as soon as possible after writing, so that the data can be pushed to the lower layers. And a fsck may help sometimes.... :-)
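The "write, flush, then unlock" pattern suggested above can be sketched as follows (the field and variable names are hypothetical):

```harbour
// After changing a record, flush before releasing the lock so other
// processes never see a half-written data or index page.
IF RLock()
   REPLACE amount WITH nNewAmount   // hypothetical field and variable
   SKIP 0                           // flush the record buffer
   dbCommit()                       // push data and index updates down
   dbUnlock()
ENDIF
```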

Michael Green

Nov 12, 2025, 4:06:17 AM
to Harbour Users
Yes, a FreeBSD system accessed via ssh. It isn't a load situation, as the error can appear with a single user. The problem is intermittent and appears during an unlock command in the same program. An earlier version of the same system doesn't have the problem, even without a reindex of all files. The file system is ZFS, and routine testing with the scrub command doesn't show any errors. I've never seen the error before or in any other circumstance. The problem only started to show, intermittently, after I made a number of modifications to the source and recompiled. I made a few changes to the source in the problem area and included an altd(), just before the unlock, to review the situation. I haven't had the problem again yet.

Michael Green

Nov 14, 2025, 9:32:59 AM
to Harbour Users
Problem re-occurred just now, this time on a commit command, which I put in just before the unlock command that triggered the problem before.

Francesco Perillo

Nov 15, 2025, 7:24:29 AM
to harbou...@googlegroups.com
Michael,
this problem smells like a cache/locking or hardware error.

The cache can also be in the OS or the file system. How is it mounted?

I can't believe that changes in Harbour code can produce this kind of low-level error. It simply cannot be.

The error comes from internal data structures that are wrong, so the low-level RDD code stops, since it can't understand the situation. This means that one process writes the index and another process has a different view of it, as with a locking mismatch.

DBF files are really simple structure, really basic.

NTX/CDX files are really different beasts, adding/modifying one record can produce tens of page writes. If the writer process and another reader process have different values for the same data page.... booom !


How was your previous system configured? Did you upgrade the Harbour compiler?

Francesco


Michael Green

Nov 15, 2025, 8:14:46 AM
to Harbour Users
> How is it mounted?
The system runs FreeBSD 14.2. It has a ZFS filesystem, which checks clean using the scrub command. I access the system using ssh from Linux desktops.

I haven't changed or updated FreeBSD or Harbour since before the problem began. I have an earlier version of the system which doesn't have the intermittent problem, and which does not complain if I complete the same tasks that cause the problem, without a reindex. Here's a section of the code. The problem occurred at the point after the altd(). I have commented out the 'commit' and 'unlock' commands and moved the 'unlock' into the two branches of the if/else/endif section:
      if storev = 'Y'
         if treatv[prapptrtv,1] > 0
            go treatv[prapptrtv,1]
            store locker('R') to nul
            replace date with date(), code with treatv[prapptrtv,2],;
               comment with treatv[prapptrtv,3], kiroku with kirokuv
            unlock && <== unlock moved here
         else
            store locker('F') to nul
            append blank
            replace client with cnov, date with date(),;
               code with treatv[prapptrtv,2],;
               comment with treatv[prapptrtv,3], kiroku with kirokuv
            unlock && <== unlock moved here
         endif
         altd()
&&       commit
&&       inkey(1)
&&       unlock

The function locker() is in a procedure file:
proc locker
parameters L_TYPEV

if len(trim(dbf())) = 0
   set color to i
   @ 0,0 say "No file open in area, press a key..."
   wait ""
   set color to
&& return to master
else
   if L_TYPEV$"Rr"
      do while .not. rlock()
         @ 0,0 say "Attempting record lock..."
         store inkey(1) to NUL
         @ 0,0 say "                          "
         store inkey(1) to NUL
      enddo
   endif

   if L_TYPEV$"Ff"
      do while .not. flock()
         @ 0,0 say "Attempting file lock..."
         store inkey(1) to NUL
         @ 0,0 say "                       "
         store inkey(1) to NUL
      enddo
   endif
endif && (file not open)

return .t.

hmpaquito

Nov 15, 2025, 2:47:06 PM
to Harbour Users
Hi,

Have you thought about comparing the Harbour source code of the version that works with the Harbour source code of the version that crashes? There could be something there that, while not the cause of the problem, is triggering it.

Regards

Michael Green

Nov 17, 2025, 4:59:22 AM
to Harbour Users
Yes, I reviewed the changes I made as soon as the error first occurred, but I still can't identify the cause of the problem. I'm sure it must be something I did, but the new version has many additional features. Luckily, the earlier version still works.

hmpaquito

Nov 17, 2025, 5:11:10 AM
to Harbour Users
Hi Michael,

I meant the changes made to Harbour itself.

Michael Green

Nov 22, 2025, 10:40:11 AM
to Harbour Users
I've eliminated the index which seems to be at the center of the problem. The index expression is:
str(client,5,0)+dtos(date)
It seems innocuous enough. Any insights? Tar.
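For reference, the expression does build a well-behaved fixed-width key: Str(client,5,0) is always 5 characters and DToS(date) always 8 (YYYYMMDD), so every key is a constant 13 characters, as NTX key expressions require. A hedged sketch of how such an index would be (re)built, with a hypothetical table name:

```harbour
// Open exclusively while (re)building the index; every key produced by
// the expression is a constant 13 characters (5 client + 8 date).
USE visits EXCLUSIVE NEW          // hypothetical table
INDEX ON Str( field->client, 5, 0 ) + DToS( field->date ) TO visits
USE
```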

Daniel Lopes Filho

Nov 22, 2025, 10:54:35 AM
to harbou...@googlegroups.com
I had problems using fcreate() and managed to solve them using the TTxtFile() class, but unfortunately I encountered a very serious error when reading txt files. However, I'm stuck on FiveWin 805 + gtwvw with xHarbour 1.1.0 because of the GT change...




--
Lopes Informática
67-9-9202-9422 (tim)
67-9-9676-8637 (vivo/whatsapp)

Itamar Lins

Nov 22, 2025, 8:11:26 PM
to Harbour Users
Hi!
Do not use flock() for the APPEND BLANK command on DBF files.
flock() should not be used casually on a network; first verify that the file is not in use by any other process.
Avoid flock() whenever possible.
The best way to copy, restructure, or index a DBF file is to open it in exclusive mode.

Function AddRec(cAlias)
LOCAL lRet := .T.
hb_default(@cAlias, Alias())
(cAlias)->(DbAppend())      // dbappend() locks the new record itself
If neterr()
   hwg_Msginfo('Could not append to file ' + cAlias)
   lRet := .F.
EndIf
Return lRet

It is incorrect to use flock() for APPEND BLANK/DbAppend() when adding a record.

Best regards,
Itamar M. Lins Jr.

Michael Green

Nov 24, 2025, 10:13:03 AM
to Harbour Users
> It is incorrect to use flock() for APPEND BLANK/DbAppend() when adding a record.

That's a problem: I have used it all over my programs. How should I change the following code?

set exclusive off
use customers && structure= name,c,30; address,c,30
store space(30) to namev, addressv
store ' ' to storev
@ 1,1 say 'name' get namev
@ 2,1 say 'address' get addressv
@ 3,1 say 'store' get storev pict '!' valid storev$'YN'
read
if storev = 'N'
   return
endif
do while .not. flock()
   @ 0,0 say 'attempting to lock file'
   inkey(1)
   @ 0,0 say '                                       '
   inkey(1)
enddo
append blank
replace name with namev, address with addressv
unlock

Thanks.

Angel Pais

Nov 24, 2025, 11:54:52 AM
to harbou...@googlegroups.com
Delete the do while loop and use neterr() after the append blank.
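Applied to the snippet in the previous post, that advice would look roughly like this (same variables as that snippet):

```harbour
// dbAppend() attempts to lock the newly added record itself, so no
// flock() loop is needed; just test NetErr() afterwards.
append blank
if neterr()
   @ 0,0 say 'could not append, try again later'
else
   replace name with namev, address with addressv
   unlock
endif
```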

Itamar Lins

Nov 24, 2025, 7:28:14 PM
to Harbour Users
Hi!
/* this block is not necessary:

do while .not. flock()
   @ 0,0 say 'attempting to lock file'
   inkey(1)
   @ 0,0 say '                                       '
   inkey(1)
enddo
*/
APPEND BLANK does not need rlock() or flock().
Just unlock or DbUnlock() after the replace ... with ...
My pattern is:

(cAlias)->(DbAppend())  // or append blank; append already locks the new record, so flock()/rlock() is not necessary
// check whether the append succeeded
IF (cAlias)->(NetErr())
   Alert("Append failed, etc...")
ENDIF
// do the replaces
(cAlias)->MyField_a := xMyVar
(cAlias)->MyField_b := yMyVar  // ...etc.
// finally unlock the record
(cAlias)->(DbUnlock())

I always use an ALIAS.


Best regards,
Itamar M. Lins Jr.

Nenad Batoćanin

Nov 24, 2025, 8:59:47 PM
to harbou...@googlegroups.com
> Do not use flock() to "append blank" command in files DBF.
> It is incorrect to use flock() to APPEND BLANK/DbAppend() to include a record.

Why? Is there any technical reason why this is not correct?

I've been using the flock/append blank combination for decades and haven't noticed any problems with it. Of course, it is always better to have an exclusively open table, but in a multiuser environment this is not possible in most cases.

Regards, NB

cod...@outlook.com

Nov 25, 2025, 1:56:43 AM
to Harbour Users

Your part of the program is old-style programming, maybe with syntax from the Clipper Summer '87 version. I don't want to say it is wrong, only that you can use newer language constructs which can be more readable or even more efficient.

You do not need FLOCK just to add one record. As others suggested, just test the NETERR() function. Or you can write your own network functions to open files, add records, and lock records or files. Maybe use BEGIN SEQUENCE / END SEQUENCE to manage network errors better.

I am also not sure that FLOCK is part of your index corruption problem. It worked for you for years before the current 1012 error.
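A hedged sketch of the BEGIN SEQUENCE idea, using the variables from the earlier snippet (the RECOVER path is only one assumed way of reporting failure):

```harbour
// Bail out of the whole update on the first network error.
begin sequence
   append blank
   if neterr()
      break                      // jump to the RECOVER section
   endif
   replace name with namev, address with addressv
   dbcommit()
recover
   alert('update failed, nothing saved')
end sequence
dbunlock()
```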

Regards,

Simo.
 

Bob G3OOU Gmail

Nov 25, 2025, 4:48:38 AM
to harbou...@googlegroups.com

Hi All

You can minimise the need for flock() when appending single new records by creating a cache of blank records, with index keys that place them at the end of the current record set or table where they can be hidden. The cache can be, say, 1% - 2% of the number of active records, depending on the size of the database, and created during times of low or zero activity. Any deleted records can be added to the cache. When a new record is required, just retrieve it from the cache and update the index key. If the cache is empty, it can be rebuilt by adding the appropriate number of records using a single flock().

If records are regularly deleted, this will maintain the cache automatically. If the cache gets too large, it can also be maintained automatically to control the total space used.
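A minimal sketch of one way to implement the idea, assuming SET DELETED OFF and reusing deleted records as the cache (the function name is hypothetical):

```harbour
// Return with a locked, reusable record in the current work area:
// recycle the first deleted record if there is one, otherwise append.
FUNCTION GetFreeRec()
   LOCAL lOk := .F.
   LOCATE FOR Deleted()           // needs SET DELETED OFF to see them
   IF Found() .AND. RLock()
      RECALL                      // un-delete and reuse this record
      lOk := .T.
   ELSE
      APPEND BLANK                // fall back to a real append
      lOk := ! NetErr()
   ENDIF
   RETURN lOk
```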

Regards

Bob

oleksa

Nov 25, 2025, 5:11:27 AM
to harbou...@googlegroups.com
Hi,


Quoting the dbAppend() documentation:

    Under a networking environment, dbAppend() performs an additional
    operation: it attempts to lock the newly added record. If the
    database file is currently locked, or if a locking assignment is
    made to `LastRec() + 1`, NetErr() will return a logical true (.T.)
    immediately after the dbAppend() function. Also, by default this
    function unlocks previously locked records.

So there is no need to call fLock() when appending a new record.

Regards,
Oleksii

On November 24, 2025, at 17:13, "Michael Green" <michael...@gmail.com> wrote:

Otto Haldi GMAIL

Nov 25, 2025, 7:46:39 AM
to harbou...@googlegroups.com
// Here is a very brief example of how I proceed to add or modify a record in a table in network mode. 
// Perhaps this will help you.

// Example to ADD a Record
//
...
if ADRESSE->(AddRec())
   ADRESSE->GET_DATE  := date()
   if ADRESSE->(RecLock())
      GetADR("S")   // S to add a record
   end   
end
...


// Example to CHANGE a Record
//
...
if ADRESSE->(RecLock())
   GetADR("M")    // M to change a record
end
...



STATIC FUNCTION GetADR(mode)
@  3,17 get ADRESSE->NAME
// Another gets
// If I am in add mode and I exit using the Esc key, the record will be deleted.
if ReadEsc() .and. mode = "S"
   delete
end
Return nil



FUNCTION AddRec(secondes)
Local resume := .f.
if secondes = nil
   secondes := 4
end
DbAppend()
if !Neterr()
   DbCommit()
   resume := .t.
else
   do While secondes >= 0
      DbAppend()
      if !Neterr()
         DbCommit()
         resume := .t.
         exit        // stop retrying once the append succeeds
      end
      InKey(1)
      secondes--
   end
end
Return resume



FUNCTION RecLock(secondes)
Local resume := .f.
if secondes = nil
   secondes := 2
end
if NetRecLock(secondes)
   resume := .t.
else
   Alert( "Cannot lock record # "+ltrim(Str(RecNo(),8)) + CRLF+ ;
          "in the data file "+alias()+", because" + CRLF+ ;
          "it is already in use by someone else." )
end
Return resume



FUNCTION ReadEsc()
Read
Return (LastKey() = K_ESC)

Itamar Lins

Nov 25, 2025, 8:38:26 AM
to Harbour Users
Hi!
This code is not correct. If flock() fails and the user breaks the ssh/telnet connection, a zombie process will keep trying to lock the entire file, even though the DBF header was already locked by "append blank":

do while .not. flock()
   @ 0,0 say "Attempting file lock..."
   store inkey(1) to NUL
   @ 0,0 say "                       "
   store inkey(1) to NUL
enddo

flock() locks the entire file just to add one record, something that append blank does automatically without the need for rlock() or flock().


Best regards,
Itamar M. Lins Jr.

Gerald Drouillard

Nov 25, 2025, 8:51:11 AM
to harbou...@googlegroups.com
Try doing a dbcommit() after the successful append (!neterr()). You will want to dbcommit() whenever you update a field that is an index key or part of an index expression.
For heavily used tables it is good practice to periodically add "blank" records when traffic is low and use them instead of a true dbappend(). Some SQL backends do the same.

Michael Green

Nov 25, 2025, 9:29:55 AM
to Harbour Users
Thanks all. The problem (Error DBFNTX/1012 corruption detected <someindex.ntx>) has been resolved for now by the simple expedient of removing the index and sorting an array instead. Not, I'm sure you'll all agree, the perfect solution, but the best I can do. I haven't given up on solving the problem, but I've put it on the back burner for now. Still, if anyone gets an 'Error DBFNTX/1012' and identifies the cause, I'd like to know.

Nenad Batoćanin

Nov 25, 2025, 6:22:12 PM
to harbou...@googlegroups.com

OK, I'm aware of that. When adding just one record, there's really no need to use flock().

But if a complex document containing many records across multiple tables is being inserted into the database, it is necessary to ensure that all the tables are available before the insertion begins. If one of the tables is unavailable, the operation does not begin. Flock() is the simplest way to ensure this: all necessary tables are locked, and if that succeeds, the whole transaction (with multiple DBAppend() calls) will also succeed.

 

Regards, NB

Itamar Lins

Nov 25, 2025, 6:36:07 PM
to Harbour Users
Hi!
In a network environment with many users, I don't use flock().
Why? I always open the file shared and treat it that way all the time. I only use rlock() to modify and dbappend() to add.

If the file is open in shared mode, there's no point in using flock(). What about the other users who are working with the same file? How would that work?


Best regards,
Itamar M. Lins Jr.

Nenad Batoćanin

Nov 25, 2025, 7:18:19 PM
to harbou...@googlegroups.com

I'll try to explain how I do it. For example, adding an invoice looks like this. First the user enters the invoice into temporary tables. When they click "Save", the program runs a procedure:

T_BEGIN
   T_Flock( "Invoice" )
   T_Flock( "Items" )

   Invoice->( DBAppend() )
   // write invoice data

   FOR EACH Items
      Items->( DBAppend() )
      // write one item
   NEXT
T_END

The "T_" commands are my own; they ensure that the entire transaction succeeds. First, the INVOICE and ITEMS tables are locked. If either of them is unavailable, the entire transaction is aborted. If both tables are locked, I continue: save the invoice header, then all the items. Since both tables are locked, there is no way for another user to interrupt the transaction.

If any of the required tables are unavailable, the user receives a "please wait" message and the program tries again after a short pause. In practice this rarely happens because the transaction is very short. I have installations running 50+ users without any problems.

Regards, NB

Itamar Lins

Nov 25, 2025, 7:33:08 PM
to Harbour Users
Hi!
Here, the only "please wait" message the user ever sees is when they request a report covering a large date range, for example a report of a product's annual sales.

The table is open and shared by everyone, and no one locks it completely. Everyone adds, removes, and updates records without anyone waiting.

Best regards,
Itamar M. Lins Jr.

Itamar Lins

Nov 25, 2025, 7:36:34 PM
to Harbour Users
Hi!
Yes, I use LetoDB (Kresin/Pavel) and now LetoDBf (Elch).


Best regards,
Itamar M. Lins Jr.