Conversion of an application from Clipper to Harbour + NetIO


Ash

unread,
Feb 7, 2014, 4:26:23 PM2/7/14
to harbou...@googlegroups.com
Hello Everybody,
 
I have just completed the conversion of an accounting application written in Clipper to Harbour. The immediate benefit is that the application runs faster. I have also used GTWVW to provide various Windows controls (windows, buttons, status bars and scroll bars) and HBNetIO for local/remote access. Printing is handled through the Win32Prn class. I will be testing the application over the next few weeks before installing it in a production environment.
 
I would like to thank all of you for your help and guidance.
 
Regards.
Ash
 
 
 

Ash

unread,
Feb 9, 2014, 9:59:20 AM2/9/14
to harbou...@googlegroups.com
Hello,
 
Testing begins and I have hit the first issue.
 
Test Environment
Server CentOS 6.5
Workstation Windows 7 Pro
in a LAN setting
NetIO is started at bootup - no RPC
 
I ran the 'Build Index' program to create index files for the whole system. I then tried to run the accounts application, but it would not run; it kept telling me to build the index files. Further investigation showed that it is the dreaded rights issue, as shown below:
 
-rw-r--r-- 1 root   nobody    3072 Feb  9 04:13 salesrep.cdx
-rwxrwxrwx 1 nobody nobody     314 Nov 12  2011 salesrep.dbf
 
I am not sure how to overcome this issue.
 
The whole application works very well in a Samba setting.
 
Regards.
Ash

elch

unread,
Feb 9, 2014, 12:02:42 PM2/9/14
to harbou...@googlegroups.com

Hi Ash,


You started the HBNETIO server as root, so files created by it are writable only by root himself; all others ( group / world ) can only read ..

[ r(ead) w(rite) x(ecute), in the order: owner group world ]

-rw-rw-rw- or -rw-rw---- should be fine for data files; -rw-rw-rw- == 666 == umask 111 ?

IMHO running as root is not recommended anyway; better to start HBNETIO as a normal user -- but that alone won't solve the problem.


So you have to search the CentOS documentation for how to:

# auto-login a default user and have it automatically start at least one task: the HBNETIO server

# modify the 'default umask' for this user


here a first hint:

http://daddy-linux.blogspot.de/2012/02/how-to-setup-default-umask-under-linux.html

note that this /etc/bashrc applies to ALL users, so better try the other file mentioned there
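For completeness, a quick demonstration (a sketch; the file name is hypothetical) of why umask 111 yields the 666 data-file permissions mentioned above - files are created with mode 0666 minus the masked bits:

```shell
# With umask 111 only the execute bits are masked out, so a newly
# created file gets mode 666 (rw-rw-rw-) - writable by group and world.
umask 111
touch demo.cdx            # stands in for a file the server would create
stat -c '%a' demo.cdx     # prints: 666
rm demo.cdx
```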


best regards

Rolf

Francesco Perillo

unread,
Feb 9, 2014, 3:47:53 PM2/9/14
to harbou...@googlegroups.com
Don't you have a check like:

IF ! File( "salesrep.cdx" )
   ? "Please create indexes"
   QUIT
ENDIF




--
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to harbour-user...@googlegroups.com.
Web: http://groups.google.com/group/harbour-users

elch

unread,
Feb 9, 2014, 5:01:10 PM2/9/14
to harbou...@googlegroups.com
Ash,


Francesco is right - I had only skimmed your mail ...
So forget my reply, except that I still would not suggest this 'run as root' ..

best regards
Rolf

Francesco Perillo

unread,
Feb 9, 2014, 5:11:24 PM2/9/14
to harbou...@googlegroups.com
Never run anything as root on a Linux box... just as you should never run anything as administrator on Windows unless you are forced to...


On Linux you can use the command su to start a program under another user... please read THE LAST ANSWER here:
http://unix.stackexchange.com/questions/65328/should-i-use-sudo-or-su-in-a-startup-script

Imagine you have a user called "harbour" and a script that user harbour can execute:
su -s /bin/sh -c "start_leto.sh my_args" harbour
Please note that start_leto.sh must be able to detach; I mean, it must exit while the leto server keeps running as a daemon (I didn't see the script).




Ash

unread,
Feb 9, 2014, 6:27:42 PM2/9/14
to harbou...@googlegroups.com
Francesco,
 
No, I use functions I wrote during my testing. They cover Leto, NetIO and plain Harbour/xHarbour, based on the type of server I am using. They are included below; the code could be tidier, though. My application code looks like this:
 
   IF !PVDC_File( filePath + "\ar.cdx" )
      errmsg( 'Build company index files.' )
      RETURN .F.
   ENDIF

I have just tested the system with letodb and all is well. As well, I can also share the database with Windows users.
 
Regards.
Ash
 
FUNCTION PVDC_Directory( cDirSpec )
#ifdef __XHARBOUR__
   RETURN Directory( cDirSpec )
#else
   DO CASE
   CASE currSrvrType = 'L'
      RETURN leto_Directory( cDirSpec )
   CASE currSrvrType = 'N'
      RETURN netio_FuncExec( "DIRECTORY", AtRepl( "\", cDirSpec, "/" ) )
   OTHERWISE
      RETURN Directory( cDirSpec )
   ENDCASE  
#endif
 
FUNCTION PVDC_fErase( cFilename )
#ifdef __XHARBOUR__
   RETURN fErase( cFilename )
#else
   DO CASE
   CASE currSrvrType = 'L'
      RETURN leto_fErase( cFilename )
   CASE currSrvrType = 'N'
      RETURN netio_FuncExec( "FERASE", AtRepl( "\", cFilename, "/" ) )
   OTHERWISE
      RETURN fErase( cFilename )
   ENDCASE  
#endif    

FUNCTION PVDC_File( cFilename )
#ifdef __XHARBOUR__
   RETURN File( cFilename )
#else
   DO CASE
   CASE currSrvrType = 'L'
      RETURN leto_File( cFilename )
   CASE currSrvrType = 'N'
      RETURN dbExists( AtRepl( "\", cFilename, "/" ) )
   OTHERWISE
      RETURN File( cFilename )
   ENDCASE  
#endif 
 
FUNCTION PVDC_MakeDir( cDirectory )
#ifdef __XHARBOUR__
   RETURN MakeDir( cDirectory )
#else
   DO CASE
   CASE currSrvrType = 'L'
      RETURN leto_MakeDir( cDirectory )
   CASE currSrvrType = 'N'
      RETURN netio_FuncExec( "MAKEDIR", AtRepl( "\", cDirectory, "/" ) )
   OTHERWISE
      RETURN MakeDir( cDirectory )
   ENDCASE  
#endif     
 
FUNCTION PVDC_CopyFile( cFromFile, cToFile )
#ifdef __XHARBOUR__
   RETURN __CopyFile( cFromFile, cToFile )
#else
LOCAL cBuff
   DO CASE
   CASE currSrvrType = 'L'
      cBuff := leto_memoread( AtRepl( "\", cFromFile, "/" ) )
      leto_memoWrite( AtRepl( "\", cToFile, "/" ), cBuff )
      RETURN .T.
   CASE currSrvrType = 'N'
      RETURN netio_FuncExec("__COPYFILE", AtRepl( "\", cFromFile, "/" ), AtRepl( "\", cToFile, "/" ) )
   OTHERWISE
      RETURN __CopyFile( cFromFile, cToFile )
   ENDCASE  
#endif

elch

unread,
Feb 9, 2014, 7:08:03 PM2/9/14
to harbou...@googlegroups.com
Hi Ash,

 
   IF !PVDC_File( filePath + "\ar.cdx" )

   CASE currSrvrType = 'N'
      RETURN dbExists( AtRepl( "\", cFilename, "/" ) )

What is the content of filePath for netio?

Is there a "net:" at the start? Otherwise the search will be local ..

The path is RELATIVE to the HBNETIO server root directory, e.g.:

hb_dbexists( "net:ar.cdx" )

and, astonishingly, hb_dbexists( "net:/ar.cdx" ) seems to work as well ..
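In other words (a small sketch; the file name is taken from the thread, the rest is illustrative):

```harbour
// Sketch: with HBNETIO, the "net:" prefix routes the check to the server,
// relative to its -rootdir; without it the check runs on the local client.
? hb_dbExists( "net:ar.cdx" )   // looked up on the HBNETIO server
? hb_dbExists( "ar.cdx" )       // looked up in the local current directory
```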


regards

Rolf

Ash

unread,
Feb 9, 2014, 7:12:15 PM2/9/14
to harbou...@googlegroups.com
Francesco,
 
Thanks for the information. I need to research some more.
 
As you suggested, I started hbnetio like this, as user nobody:
 
su -s /bin/sh -c "/data/accounts/comp/hbnetio -iface=nnn.nnn.nnn.nnn -rootdir=/data/accounts/comp/ -rpc &" nobody

But I still get the error message telling me to build the index files.
 
-rw-r--r-- 1 nobody nobody 3072 Feb  9 14:02 salesrep.cdx
-rwsrwsr-x 1 nobody nobody  194 Feb  9 07:01 salesrep.dbf

Regards.
Ash

Ash

unread,
Feb 9, 2014, 7:33:35 PM2/9/14
to harbou...@googlegroups.com
Hello Rolf,
 
Thank you. It was my typo; the code should have been:
 
   IF !PVDC_File( mainPath + "\ar.cdx" )

      errmsg( 'Build company index files.' )
      RETURN .F.
   ENDIF
 
Changed to mainPath and everything works. And hbnetio is no longer started by root.
 
I think I need a break.
 
Regards.
Ash

Ash

unread,
Feb 9, 2014, 7:44:24 PM2/9/14
to harbou...@googlegroups.com
 
 
This is how I log in to the various servers. It will help explain how filenames are passed to the PVDC_* functions.
 
Regards.
Ash
 
FUNCTION srvrlogin ( )
   rddSetDefault( "DBFCDX" )
   SET AUTORDER TO 1
  
   mainPath := ".\"        //Path of data tree                           
   filePath := ".\"        //Path of comp folder for managing all files
   compPath := ""          //Path of company tables relative to mainPath
                           //Gets set when a company is selected 
   currSrvrType := ' '     //No server
  
#ifdef __XHARBOUR__
   // No server for xHarbour
   RETURN nil
  
#else
   // Determine the server type and login
   USE ( workdir + 'wsdata' ) NEW VIA "DBFCDX"  //Workstation setup table (local)
  
   IF ! Empty( srvrType )
  
      DO CASE
          CASE srvrType = 'L'
            // LetoDB Server
            mainPath := "//" + allTrim( srvrname ) + ":" + iif( Empty( srvrport), '2812', allTrim( srvrport ) ) + "/"
            filePath := mainPath
           
            IF leto_Connect( mainPath ) == -1
               Alert( 'Unable to connect to server.' )
               QUIT
            ENDIF
           
            REQUEST LETO
            RDDSETDEFAULT( "LETO" )
           
         CASE srvrType = 'N'
            // HBNetIO Server
            IF ! Empty( srvrname )
               IF ! netio_connect( alltrim( srvrname ), iif( Empty( srvrport), '2941', allTrim( srvrport ) ) )
                  Alert( 'Unable to connect to server.' )
                  QUIT
               ENDIF
     
               mainPath := "net:\"     // -rootdir
               filePath := ".\"       
              
            ENDIF
           
         OTHERWISE
        
      ENDCASE
     
      currSrvrType := srvrType
   ENDIF
  
   USE 
  
   RETURN  nil

elch

unread,
Feb 9, 2014, 8:01:02 PM2/9/14
to harbou...@googlegroups.com
Fine!

> Changed to mainPath and everything works. And hbnetio is no longer started by root.

Two additional remarks:
A maybe unwanted alternative to hb_dbExists( "net: ..." ) can be:
netio_FuncExec( "FILE", "/absolute!_path/" + cFile )
Unwanted, because part of the aim of netio is to hide the exact full path.

And I am unsure about the trailing "/" on your server root dir when starting the HBNETIO server, but if it now works ...

best regards
Rolf

Ash

unread,
Feb 9, 2014, 9:25:17 PM2/9/14
to harbou...@googlegroups.com
Hello Rolf,
 
Agreed and tested; the application works well. Thanks.
 
I have two variables: mainPath contains 'net:/' and filePath contains './'. All PVDC_* functions use the filePath variable - a relative path.
The updated function looks like:
 
FUNCTION PVDC_File( cFilename )
#ifdef __XHARBOUR__
   RETURN File( cFilename )
#else
   DO CASE
   CASE currSrvrType = 'L'
      RETURN leto_File( cFilename )
   CASE currSrvrType = 'N'
      RETURN netio_FuncExec( "FILE", AtRepl( "\", cFilename, "/" ) )  
   OTHERWISE
      RETURN File( cFilename )
   ENDCASE  
#endif
 
 
Regards.
Ash

Ash

unread,
Feb 10, 2014, 6:51:16 AM2/10/14
to harbou...@googlegroups.com
Correction:
 
On Sunday, February 9, 2014 7:44:24 PM UTC-5, Ash wrote:
  
               mainPath := "net:\"     // -rootdir
               filePath := ".\"
 
should be:
 
               mainPath := "net:\"     // -rootdir
               filePath := netio_funcexec( "hb_dirbase" )

 Regards.
Ash
 

elch

unread,
Feb 10, 2014, 7:19:25 AM2/10/14
to harbou...@googlegroups.com
Yep, Ash.

Addendum, checked out of my own interest:
a '/' will be added internally if the given root directory for the HBNETIO server does *not* end with a path separator (e.g. "/" for Linux) -- so your first version was completely correct.

---
There is deliberately no way to retrieve this server root directory..

One workaround you just found, as another user showed me earlier:
put the hbnetio executable into the data directory on the server.
Then we can use netio_FuncExec( "HB_DIRBASE" ) to retrieve the path of the executable - in that case, the server root directory.

But then we don't even need to know the absolute path on the server, as we can use just the filename without any path prefix,
e.g. for opening a DBF with RPC commands to create an index ...

This hb_dbExists( "net: ... " ) seems the most flexible way, as you can then easily change that directory ...

best regards
Rolf

elch

unread,
Feb 10, 2014, 7:55:14 AM2/10/14
to harbou...@googlegroups.com
to clarify,

I meant this here:

su -s /bin/sh -c "/data/accounts/comp/hbnetio -iface=nnn.nnn.nnn.nnn -rootdir=/data/accounts/comp/ -rpc &" nobody

Addendum, checked out of my own interest:
a '/' will be added internally if the given root directory for the HBNETIO server does *not* end with a path separator (e.g. "/" for Linux) -- so your first version was completely correct.

regards

Francesco Perillo

unread,
Feb 10, 2014, 2:04:57 PM2/10/14
to harbou...@googlegroups.com

Can you please check whether it correctly handles locks when using a Samba share and letodb on the same DB?

Nenad Batocanin

unread,
Feb 10, 2014, 2:12:23 PM2/10/14
to harbou...@googlegroups.com

Maybe this helps:

Share_Tables = 0   -   If 0 (the default; this mode was the only one from the
                       start of the letodb project), letodb opens all tables
                       in exclusive mode, which allows the speed to be
                       increased. If 1 (a new mode, added June 11, 2009),
                       tables are opened in the same mode as the client
                       applications open them, exclusive or shared, which
                       allows letodb to coexist with other types of
                       applications.

elch

unread,
Feb 10, 2014, 2:51:15 PM2/10/14
to harbou...@googlegroups.com
Hi Nenad,


> Share_Tables  = 0        -    if 0 (default, this mode server was the only from the

With value = 1:
would you be up for checking how much impact this has on your previous LETODB test results?

best regards
Rolf

Francesco Perillo

unread,
Feb 10, 2014, 2:52:19 PM2/10/14
to harbou...@googlegroups.com
The most important thing, apart from speed, is record/file locking...




--

Ash

unread,
Feb 10, 2014, 3:18:44 PM2/10/14
to harbou...@googlegroups.com
My application is a multiuser one, and the record locking part works as expected for Samba, NetIO and LetoDB.
 
So far, I have been testing the application in a LAN setting. My next job is to see how well it performs via the internet.
 
Regards.
Ash

Francesco Perillo

unread,
Feb 10, 2014, 3:29:20 PM2/10/14
to harbou...@googlegroups.com
On Mon, Feb 10, 2014 at 9:18 PM, Ash <jun...@gmail.com> wrote:

My application is a multiuser one and record locking part works as expected for Samba, NetIO and LetoDB.
 

I mean, using an app that uses LetoDB *together* with an app that uses Samba....

Ash

unread,
Feb 10, 2014, 4:11:10 PM2/10/14
to harbou...@googlegroups.com
Yes, any combination of all three. Just to confirm: I have tested the xHarbour version with the LetoDB version and both respect each other's locks.
 
Regards.
Ashfaq

Nenad Batocanin

unread,
Feb 10, 2014, 9:10:56 PM2/10/14
to harbou...@googlegroups.com

I have no such cases, but I can try if you want. In fact, I prefer that LetoDB has exclusive access to the database.

 

Regards, NB

 


Ash

unread,
Feb 12, 2014, 9:12:02 AM2/12/14
to harbou...@googlegroups.com
I have just found that LetoDB does not respect file locks even between two instances of a letodb-based application. My testing is based on USE ... EXCLUSIVE commands.
 
Regards.
Ash

Nenad Batocanin

unread,
Feb 12, 2014, 11:13:40 AM2/12/14
to harbou...@googlegroups.com

I just tried the locks and they work quite normally. I tried two "leto" applications, and a combination of a Leto and a DBFNTX app. The apps open the table as:

USE Test NEW SHARED

And then:

FLock()

Everything is functioning normally.

Regards, NB

 

 

 


Ash

unread,
Feb 12, 2014, 1:44:27 PM2/12/14
to harbou...@googlegroups.com
Try:
 
USE test EXCLUSIVE NEW
 
and see if it works.
 
Regards.
Ash

Nenad Batocanin

unread,
Feb 12, 2014, 4:50:53 PM2/12/14
to harbou...@googlegroups.com

Yes, I've tried. Other programs cannot open the table (the USE command reports an error).

Ash

unread,
Feb 12, 2014, 7:21:39 PM2/12/14
to harbou...@googlegroups.com
I ran the following program and left it running.
 
// lt.prg
// hbmk2 lt.prg -lrddleto
Function Main()

   REQUEST LETO
   RDDSETDEFAULT( "LETO" )
   USE ("//192.168.0.200:2812/compk/ar") EXCLUSIVE
   browse()
  
   RETURN nil
 
Then I ran it again while the first instance was still running; I got the following error from the second run:
 
lt.exe has stopped working
-> Check online for a solution...
-> Close the program

Dušan D. Majkić

unread,
Feb 13, 2014, 4:31:49 AM2/13/14
to Harbour Users
> I run the following program and left it running.

Which LetoDB version are you using?

Regards,
Dusan Majkic
Wings Software

elch

unread,
Feb 13, 2014, 6:21:45 AM2/13/14
to harbou...@googlegroups.com
Hi Ash,


Function Main()
   REQUEST LETO
   RDDSETDEFAULT( "LETO" )
   USE ("//192.168.0.200:2812/compk/ar") EXCLUSIVE

Please, where is the check for NETERR()?? USE or DBUSEAREA() can always fail, as it just did ...

 
   browse()

With no active work area, .F. should be returned -- and so the 2nd run of the test snippet should just blink and be done.

*BUT* the failed USE EXCLUSIVE results in an Exception SIGSEGV before browse()
( LetoDB Server v 2.12 [ Leto_GetServerVersion() ] )


Regards
Rolf

elch

unread,
Feb 13, 2014, 6:56:20 AM2/13/14
to harbou...@googlegroups.com

Hi,


something really scary:

# Share_Tables = 1
( but this doesn't matter; it also happens with '= 0' )
# DBFNTX, the default for LetoDB
# using multiple index files per DBF

--
The first instance of my app runs fine, all as expected.
When I start a second instance of the application on the <same client machine>:
the DBF[s] are loaded error free, and the [more than one] index files for some DBFs are also opened without error.

*BUT* all index keys point to the _first_ index,
e.g.: INDEXKEY(1) == INDEXKEY(2) == INDEXKEY(3) ..

That I like to call <scary>, as it looks like we can't use a DBF with multiple index keys from two different applications on the same client.

Can someone else confirm?


regards

Rolf

Ash

unread,
Feb 13, 2014, 7:48:44 AM2/13/14
to harbou...@googlegroups.com

On Thursday, February 13, 2014 4:31:49 AM UTC-5, dmajkic wrote:

Which LetoDB version are you using?
 
Leto DB Server v.2.12
 
Contents of LetoDB.ini.
 
[Main]
DataPath = /data/accounts/comp
EnableFileFunc = 1
Share_Tables = 1
Debug = 1
 
Regards.
Ash
 

Ash

unread,
Feb 13, 2014, 7:57:00 AM2/13/14
to harbou...@googlegroups.com
Hello Rolf,
 
I have tried the following code but end up with the same error. I believe the USE command never completes.
 
// lt.prg

Function Main()
   REQUEST LETO
   RDDSETDEFAULT( "LETO" )
   USE ("//192.168.0.200:2812/compk/ar") EXCLUSIVE
   IF ! NetErr()
      browse()
   ENDIF
  
   RETURN nil
 
Regards.
Ash

Ash

unread,
Feb 13, 2014, 8:48:10 AM2/13/14
to harbou...@googlegroups.com
When I have lt.exe running, my main application - the one I have just converted to Harbour - gives the same error. Two or more instances of my application handle record locking very well when lt.exe is not running. I think it is file locking, or the SET EXCLUSIVE mode, that is broken in LetoDB.
 
There are two places where I require exclusive use of a table in my application. I will just have to find a workaround.
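One possible workaround for the exclusive-use case (a sketch, not from the thread; the function name is hypothetical) is to open the table SHARED and then take a full file lock:

```harbour
// Sketch of a possible workaround: open SHARED and take a full file lock
// instead of USE ... EXCLUSIVE. Other stations can still open the table,
// but no one else can lock or write records while we hold the file lock.
FUNCTION UsePseudoExclusive( cTable )
   USE ( cTable ) NEW SHARED
   IF NetErr()
      RETURN .F.                  // open itself failed
   ENDIF
   IF ! FLock()
      USE                         // someone else holds locks; give up
      RETURN .F.
   ENDIF
   // ... maintenance work protected by the file lock ...
   DBUnlock()
   USE
   RETURN .T.
```

Note this is weaker than true exclusive mode: readers can still see the table while the work runs.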
 
Pavel: there is a wealth of testing being done here. Do you have any comments that might help us? Thanks.
 
Regards.
Ash

elch

unread,
Feb 13, 2014, 9:38:05 AM2/13/14
to harbou...@googlegroups.com
hello Ash,


When I have lt.exe running, my main application that I have just converted to Harbour gives the same error. Two or more instance of my application handle record locking very well when lt.exe is not running.  I think it is the file locking or SET EXCLUSIVE mode that is broken in LetoDB.

Exactly - as I already wrote.
The 'Exception SIGSEGV' error arises on Linux; in such cases Windows will only report that the 'app has a problem ..'

I can find this in the LetoDB log file:
-004:21-1023-0-0 ...
and if I search the LetoDB source code, it seems that an already EXCLUSIVEly opened DBF is correctly detected -- but then something unplanned happens ...

So this problem can perhaps be corrected relatively easily by the developers of LetoDB.

But my other problem report, about the index files, gives me real headaches; that one sounds like the k.o. for LetoDB.
And it is certainly not my task to fiddle around in server code ;-)

best regards
Rolf

Ash

unread,
Feb 13, 2014, 10:03:27 AM2/13/14
to harbou...@googlegroups.com
Hello Rolf,
 
Agreed.  Let us hope Pavel and Alexander are visiting this list.
 
Regards.
Ash

Itamar M. Lins Jr. Lins

unread,
Feb 13, 2014, 10:36:48 AM2/13/14
to harbou...@googlegroups.com
Yes, I get the error when Share_Tables = 1 and I open the dbf in exclusive mode.


Best regards,
Itamar M. Lins Jr.




Nenad Batocanin

unread,
Feb 13, 2014, 10:45:52 AM2/13/14
to harbou...@googlegroups.com

I confirm this - when the table is opened exclusively, the USE command breaks with an error. I do not know why, but I agree it's not good. Actually, an error occurs in some unexpected cases. For example, if you write this simple program:

RddSetDefault("LETO")
? OrdBagExt()

An error occurs.

Nenad Batocanin

unread,
Feb 13, 2014, 11:13:36 AM2/13/14
to harbou...@googlegroups.com

A very serious error. Can you please send the piece of code that opens the indexes?

 

Regards, NB

 


elch

unread,
Feb 13, 2014, 12:25:17 PM2/13/14
to harbou...@googlegroups.com
Hi Nenad,



Very serious error. Can you please send piece of code that opens the index?

A snippet needs some time -- and I will also check further whether it is only *I* who is doing something wrong ..

letodb.ini:
DataPath = /absolute_path_to_data_directory
Share_Tables = 1
; but Share_Tables = 0 makes no difference
Default_Driver = NTX
Memo_Type = DBT

--
At application start is done:

REQUEST LETO
RDDSETDEFAULT( "LETO" )
..
leto_Connect( "//" + cServerIPAdress + ":" + ALLTRIM( STR( nPort ) ) + cDataPath )
Without the above line I cannot use functions like LETO_MGGETINFO(), which I need in one place.

To open DBFs, a *relative* path for files is used:
DBUseArea( lNewsele, "LETO", cDatabase, cAlias, lOpenMode, lReado )
where cDatabase is e.g.: "/elch.dbf"

Followed by multiple calls to:
DBSETINDEX( cIndex )
where cIndex is e.g.: "/elch0.ntx"; "/elch1.ntx"; "/elch2.ntx"

---
This works with NO problem for the first instance of my application,
but not for a second instance of the application on the same machine.

After a deeper inspection I must correct myself a bit:
ALL index keys in the second instance are equal, as reported, but they point to the last opened index key from the FIRST APPLICATION INSTANCE.
That index expression is returned for all indexes by IndexKey( x ) -- but *not* for the last index, which here is empty.

Example for the *second* instance of the application (using the debugger and a step-wise run):
DBUseArea( ... )
opening "/elch0.ntx", then asking immediately with IndexKey( 1 )
=> index key from "/elch2.ntx"
opening "/elch1.ntx", then asking IndexKey( 2 )
=> index key from "/elch2.ntx"
opening "/elch2.ntx", then asking IndexKey( 3 )
=> an *EMPTY* index key: ""
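Condensed into a runnable sketch (a reconstruction based on the description above, not verified code; the file names follow the example, the server address is illustrative):

```harbour
// Minimal reproduction sketch for the reported LetoDB multi-NTX issue.
// Run once and keep it running, then start a second instance on the
// same client machine and compare the IndexKey() results.
REQUEST LETO

FUNCTION Main()
   RDDSETDEFAULT( "LETO" )
   leto_Connect( "//192.168.0.200:2812/" )           // example address
   DBUseArea( .T., "LETO", "/elch.dbf", "ELCH", .T. )
   DBSetIndex( "/elch0.ntx" )
   DBSetIndex( "/elch1.ntx" )
   DBSetIndex( "/elch2.ntx" )
   ? IndexKey( 1 ), IndexKey( 2 ), IndexKey( 3 )     // 2nd instance: keys collapse
   Inkey( 0 )
   RETURN NIL
```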

best regards
Rolf

Nenad Batocanin

unread,
Feb 13, 2014, 8:41:11 PM2/13/14
to harbou...@googlegroups.com

I suppose it would be good to send this example in the LetoDB forum:

 

http://sourceforge.net/apps/phpbb/letodb/

 

Regards, NB

 

 

From: harbou...@googlegroups.com [mailto:harbou...@googlegroups.com] On Behalf Of Ash


Sent: Thursday, February 13, 2014 1:22 AM

Ash

unread,
Feb 13, 2014, 9:58:47 PM2/13/14
to harbou...@googlegroups.com
Please do.
Regards.
Ash

elch

unread,
Feb 14, 2014, 5:08:26 PM2/14/14
to harbou...@googlegroups.com

Hi,


that looks like NO bug ! -- it is just a missing capability:

FunnyDB is not capable of handling multiple NTX files for a single DBF; such a DBF can only be opened a single time on the 'server'.


--

I tried without leto_Connect() at application start, with absolute paths, with and without DataPath in letodb.ini,

with and without the "//cServerIp:nPort" prefix, ....

I am somewhat annoyed after all this senselessly wasted lifetime.


Before someone suggests it:

NJET, I definitely will not change to use CDX -- whether there is a similar problem for CDX, I leave for others to check.

I will not even try that ..


A workaround seems to be to put all index keys in one file, as is common for CDX - and great Harbour is declared able to do so.

But then I lose compatibility with 3rd-party tools, which I want to keep in the backhand.

( And I do not mean a 'handful' of index files; there are *MANY* for my main project )


BTW: some index fundamentals, explained best (as always a pleasure to read) by Przemek:

https://groups.google.com/d/msg/harbour-devel/9nT9lZmtztk/Q3X-s81UpYYJ


BEST regards

Rolf

elch

unread,
Feb 15, 2014, 8:08:14 AM2/15/14
to harbou...@googlegroups.com

Hi,


I posted an error description at the place Nenad pointed to.

Meanwhile I have looked a bit deeper into the source code: it could be 'only' a bug.

The basic structures for maintaining multiple index files seem to be there.

I just guess it will also happen when someone tries to open multiple CDX index files.


But with this USE EXCLUSIVE SIGSEGV crash, plus the error I described -- at least two very basic and heavy problems just during the first round of testing --

I do not have such a good feeling ...


I will focus first on HBNETIO - and all the work needed to exchange the 'server' is already done.


best regards

Rolf

Ash

unread,
Feb 15, 2014, 10:47:23 AM2/15/14
to harbou...@googlegroups.com
Hello Rolf,
 
My converted application is working fine using LetoDB, except for a couple of places where I open a table in exclusive mode. Also, I had to use the default locking scheme, which makes the Clipper and xHarbour versions of my application respect each other's record locks.
 
I am also leaning towards NetIO, since neither Pavel nor Alexander is responding to requests for help. I fear the hard-earned test results will be wasted.
 
I have found no issues with using NetIO so far. It is a fully finished product, with assistance from Przemek to boot.
 
Regards.
Ash

Nenad Batocanin

unread,
Feb 15, 2014, 12:01:42 PM2/15/14
to harbou...@googlegroups.com

I share your concerns. Of course I'd prefer to use a rock-solid product like NetIO, but unfortunately it does not solve my main problem.

As I see on clipper.borda.ru, Kresin and Pavel are still working on LetoDB; they just completed the PHP client and corrected some bugs, so I think with a little patience LetoDB can become quite a usable product.

 

Regards, NB

 


Massimo Belgrano

unread,
Feb 15, 2014, 12:19:47 PM2/15/14
to harbou...@googlegroups.com

2014-02-15 18:01 GMT+01:00 Nenad Batocanin <nbato...@wings.rs>:
I share your concerns. Of course, I'd prefer to use rock-solid product like NetIO, but unfortunately it does not solve my main problem.

Which main problem is not resolved by netio?



--
Massimo Belgrano
Delta Informatica S.r.l.

Nenad Batocanin

unread,
Feb 15, 2014, 12:42:18 PM2/15/14
to harbou...@googlegroups.com

I have a very large program with many standard REPLACE/SKIP/SEEK/TBrowse commands. I need a way to speed up all such operations without code changes (because it is about a million lines). I was not able to accomplish this with NetIO, but LetoDB and ADS give much better results.

 

Regards, NB

 


Francesco Perillo

unread,
Feb 15, 2014, 12:43:31 PM2/15/14
to harbou...@googlegroups.com

You can, but you need to change the filters to run on the server... it all depends on how your code is organized...

Massimo Belgrano

unread,
Feb 15, 2014, 12:48:20 PM2/15/14
to harbou...@googlegroups.com
Can you share the part of the code that does not run with netio without changes - or better, a reduced sample?

The problem could be in your code, so better to share.

Francesco Perillo

unread,
Feb 15, 2014, 12:58:53 PM2/15/14
to harbou...@googlegroups.com

The problem is that ALL records are sent over the LAN, while with letodb only the filtered ones are...

Ash

unread,
Feb 15, 2014, 1:10:59 PM2/15/14
to harbou...@googlegroups.com
Hello Nenad,
 
One thing I found while converting my application is that some of the code had to be reorganized/rewritten, especially the browses. In my case the browses were very primitive (about 20-year-old Clipper Summer '87 code). So I don't mind changing the code as long as there is the benefit of improved code and a better database manager.
 
I think LetoDB has the advantage over NetIO in that the indexes are built on the server by default - hardly any network traffic. It is faster than NetIO when creating reports.
 
When building indexes via RPC in NetIO, the network traffic is reduced by half - still a fair advantage.
 
Again, reliability trumps all else. Hence NetIO for me, for the time being, until the LetoDB issues are resolved.
 
Regards.
Ash

Massimo Belgrano

unread,
Feb 15, 2014, 1:12:50 PM2/15/14
to harbou...@googlegroups.com
IMO an optimized (Rushmore-like) and centralized SET FILTER
may be an important area for most of us,
for better Harbouring - as in SQL: select nomi.* where city='Tucson'
set filter to city="Tucson"
should not scan the entire database if you have an index on city



Nenad Batocanin

unread,
Feb 15, 2014, 1:15:03 PM2/15/14
to harbou...@googlegroups.com

The best approximation is the test program I sent earlier. For instance:

WHILE !Eof()
   x := some_field
   SKIP
ENDDO

SET FILTER TO some_cond
Browse()

LetoDB does this about 10 times faster without any code change.

Francesco Perillo

unread,
Feb 15, 2014, 1:17:38 PM2/15/14
to harbou...@googlegroups.com

> When building indexes via RPC in NetIO, the network traffic is reduced by half, however - a fair advantage.

You should have no network traffic at all if you are really using RPC.

Nenad Batocanin

unread,
Feb 15, 2014, 1:31:03 PM2/15/14
to harbou...@googlegroups.com

Agreed. But I must admit I still do not know how to do the optimization in some cases. For example, I have a table with a numeric field ABC, and the table has an index on that field. How do I speed up the following:

SET FILTER TO AScan( {123, 456, 789, ...}, ABC ) <> 0
Browse()

This is equivalent to:

SET FILTER TO ABC = 123 .OR. ABC = 456 .OR. ABC = 789 ...
Browse()

I know about bit-map filters, but I could not use them in Harbour.
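One index-based alternative (a sketch of the general technique, not from the thread; the field and function names are hypothetical): instead of a filter, DBSEEK each wanted value on the ABC index and collect the matching record numbers:

```harbour
// Sketch: collect record numbers for a set of key values via the active
// index on ABC, instead of evaluating a SET FILTER on every record.
FUNCTION RecnosForValues( aValues )
   LOCAL aRecs := {}
   LOCAL xVal
   FOR EACH xVal IN aValues
      IF DBSeek( xVal )                       // position on the first match
         DO WHILE !Eof() .AND. FIELD->ABC == xVal
            AAdd( aRecs, RecNo() )
            DBSkip()
         ENDDO
      ENDIF
   NEXT
   RETURN aRecs                               // e.g. feed into a custom browse
```

This touches only the matching index ranges, at the cost of restructuring the browse around a record list.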

Francesco Perillo

unread,
Feb 15, 2014, 1:36:49 PM2/15/14
to harbou...@googlegroups.com

Create an RPC function on the server: pass the array as a parameter and return the name of a temporary file created by the server. Browse that file.

BEWARE: this is a read-only solution...
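The suggestion above can be sketched as a server-side function loaded into the NetIO server from an .hrb and invoked with netio_FuncExec(). All names here (the table DB001, the field ABC, the temp-file naming) are hypothetical:

```harbour
// Sketch of the read-only RPC helper: filter on the server, write a
// snapshot, and hand the temp file's name back to the client.
FUNCTION MakeFiltered( aValues )
   LOCAL cTmp := "tmp_" + hb_ntos( hb_MilliSeconds() % 100000 )  // hypothetical naming scheme
   USE db001 SHARED NEW
   SET FILTER TO AScan( aValues, FIELD->ABC ) > 0
   COPY TO ( cTmp )        // snapshot created locally on the server, no LAN traffic
   USE
   RETURN cTmp             // client opens it afterwards via the "net:" prefix
```

The client would then do something like `cFile := netio_FuncExec( "MakeFiltered", { 123, 456, 789 } )` and browse `"net:" + cFile`; as noted, the result is a snapshot, not a live view.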

elch

unread,
Feb 15, 2014, 1:50:06 PM2/15/14
to harbou...@googlegroups.com
Hello Ash,

 
When building indexes via RPC in NetIO, the network traffic is reduced by half, however - a fair advantage.
 
search for the error - you are doing something wrong somehow.
The NIC LED will blink two times: once when starting the RPC that creates the index, once when the index is ready.
In between: ZERO network traffic ..

best regards
Rolf

elch

unread,
Feb 15, 2014, 1:57:38 PM2/15/14
to harbou...@googlegroups.com
Ash,

could some kind of progress bar be disturbing the request?

elch

unread,
Feb 15, 2014, 3:08:52 PM2/15/14
to harbou...@googlegroups.com
\o/

Bug in LetoDB: opening multiple NTX for one DBF from multiple application instances is so-so 'solved'.
It needed only 8 hours for 10 new short rows, not bad, hey? 8-)
And it is only 'trampled' correct for NTX [ for CDX, additional work may be needed ]

Good enough to do now some stability check ...


---

With 'Share_Tables=1' LetoDB loses only ~ xx%.
Don't nail me down to an exact %, let's say 50.
So that is not the big bang ..

With Nenad's test DBF, I ran the following loop
with *concurrent* access from LetoDB and HBNETIO on the same shared DBF/NTX:

dbGoTop()
dbSeek( xFixedValue )                  // both clients land on the same RecNo()
DO WHILE ! RLock()                     // retry with a 0.01 s delay until locked
   hb_idleSleep( 0.01 )
ENDDO
REPLACE field->value WITH xNewValue    // replace one value
dbUnlock()

Here LetoDB also wins; let me estimate: 25% better time.
Altogether nothing that blows me off the chair.

What is incredibly fast (10? times) with LetoDB:
DBSKIP()
And here maybe lies the secret of why dbSetFilter() is so fast with LetoDB - with an active filter, many 'hidden rows' must be skipped ..


#
Everybody has their own needs; for my main project I need 24/7 reliability.
The apps won't even stop for the daily backup - only a short release and exact restoring of the DBF areas is done at that moment.
In such a scenario a single workstation may temporarily fail, but not the server ..

best regards
Rolf

Nenad Batocanin

unread,
Feb 15, 2014, 4:27:28 PM2/15/14
to harbou...@googlegroups.com

The problem is: there is no time to process. For example, the user may select ALL the records (50,000). In fact, I need exactly the SET FILTER functionality, but much faster. With LetoDB/ADS a simple SET FILTER works fine.

 

Regards, NB

Nenad Batocanin

unread,
Feb 15, 2014, 4:49:18 PM2/15/14
to harbou...@googlegroups.com

I'm not sure that this test shows the true situation. In my case, the improvement is clearly evident in the real application. For example, I have a report that runs 4 minutes 15 seconds over the LAN. When I switch to LetoDB, the time is 13 sec! Without _any_ code change!

 

But: it does not even cross my mind to use LetoDB or any other product before being 100% sure that it will work properly!

 

NB

 

 

 

 

From: harbou...@googlegroups.com [mailto:harbou...@googlegroups.com] On Behalf Of elch
Sent: Saturday, February 15, 2014 9:09 PM
To: harbou...@googlegroups.com
Subject: Re: [harbour-users] Re: Conversion of an application from Clipper to Harbour + NetIO

 


Francesco Perillo

unread,
Feb 15, 2014, 6:00:48 PM2/15/14
to harbou...@googlegroups.com
Please try it if you want... I think that the new file may be created in no more than 2 seconds... 50,000 records... unless they are very, very long records...

Client: please create a file from DB001 to a temp file with the following filter
Server: use DB001 SHARED; SET FILTER TO <filter>; COPY TO temp; RETURN "temp" to client
Client: use letodb:temp; brow

As I said, this is a COPY of the data: if someone else changes some data, it will not be updated... it is a snapshot. That is exactly how it would work in an SQL world, but not in a shared DBF world.

Of course, if the user wants to browse the whole database, you can just open DB001.


Francesco Perillo

unread,
Feb 15, 2014, 6:38:56 PM2/15/14
to harbou...@googlegroups.com
Hi Nenad,
I strongly believe that it is impossible to have "the perfect general optimization", and that every use case must be treated differently and the query must be dissected and very well understood.

Let's say you have these lines:

SET FILTER TO AScan( {123, 456, 789, ...}, ABC ) <> 0
Browse()

You also told us that there is an index on field ABC.
We don't know if the user is allowed to edit the data in the browse and if the edit may change the ABC field.
You don't say how many records are in the source DBF.
You don't say how many records are *typically* shown in the browse after the filter.
You don't say how the file was opened, whether an index is active, and which one.
You don't say how many items are in the array.
You write Browse() but don't say if you display all fields or just a subset...

Anyway, my idea in a NetIO read-only scenario would be to create an RPC function that creates the temporary file, where you can also limit the fields to the ones you need to display (as I wrote in the other post).


In a Samba or NetIO setup I'd try to do something like this (this is pseudo-code for something I have in mind and never really tried... but I strongly believe it is possible):

a := { 123, 456, 789, ... }
rec := {}
for i := 1 to len( a )
   ordScope( 0, a[ i ] ); ordScope( 1, a[ i ] )   // or a classical seek a[i]; do while ABC = a[i]
   dbGoTop()                                      // reposition inside the new scope
   do while ! eof()
      aAdd( rec, recno() )
      skip
   enddo
next
ordScope( 0, NIL )   // clear the scopes
ordScope( 1, NIL )
In the variable rec you now have all the record numbers you have filtered; you just have to adapt the browse to use rec[ i ] as the pointer to the record to display on row i.

If you need to display data rows in a different order, you may change to:

aAdd( rec, { recno(), <keyexprs> } )

and then sort on <keyexprs> (that shouldn't be a real ntx/cdx index, but a valid sortable expression)...

It is possible (it must be tested) that if you don't use <keyexprs>, Harbour just uses indexes and doesn't retrieve full record data to fill the rec array, so the speedup would be really great.

In a NetIO/RPC world, the server-side function may return the rec array and you browse on the real file; or, instead of adding to rec, it may add the record number or the full record to a temp file to browse...
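The sorting step Francesco describes can be sketched client-side with ASort() on the collected pairs (the array values below are made up for illustration):

```harbour
// Sketch: sort the collected { RecNo(), key } pairs by the key
// expression, then drive the browse through the record numbers.
LOCAL aRec := { { 10, "BETA" }, { 3, "ALPHA" }, { 7, "GAMMA" } }
ASort( aRec, , , {| x, y | x[ 2 ] < y[ 2 ] } )
// aRec is now { { 3, "ALPHA" }, { 10, "BETA" }, { 7, "GAMMA" } };
// a TBrowse skip-block can then dbGoto( aRec[ nRow ][ 1 ] ).
```

Only record numbers and a sortable expression travel through the array, so the per-row cost stays at a single dbGoto() during display.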

Francesco Perillo

unread,
Feb 15, 2014, 6:51:01 PM2/15/14
to harbou...@googlegroups.com
On Sat, Feb 15, 2014 at 10:49 PM, Nenad Batocanin <nbato...@wings.rs> wrote:

I'm not sure that this test shows the true situation. In my case, the improvement is clearly evident in the real application. For example, I have a report that runs 4 minutes 15 seconds over the LAN. When switch to LetoDB, time is 13 sec! Without _any_ code change!


I'm quite sure you may get better response times with a proper netio/rpc setup. Usually a report has some input values and outputs a... PDF? A printer-on-paper report? Whatever you create, you can do it from the server.
I use harupdf to create PDF reports. Almost all my report coding is done in two functions: the first gathers filter values from the user, the second uses the parameters passed from the first function to create the report. Some reports are long, but moving the second function onto the server can shrink the time a lot.

I have a report that performs a full table scan on the "clients" database. It takes 7 seconds in samba/shared mode. It takes 0,5 seconds if run (the exact same executable) on files stored on the local disk.
Tomorrow I will try to test moving the report generator into server side...

Ash

unread,
Feb 16, 2014, 8:51:46 AM2/16/14
to harbou...@googlegroups.com
Hello Rolf,
 
In the spirit of Harbour, here is a simple example to show that building indexes at the server with NetIO does require network resources - about 50% of what it would take with Samba. I made a change at line 182 of hbnetio.prg as shown below:
 
//                netiosrv[ _NETIOSRV_hRPCFHRB ] := hb_hrbLoad( HB_HRB_BIND_FORCELOCAL, cFile )
               netiosrv[ _NETIOSRV_hRPCFHRB ] := hb_hrbLoad( HB_HRB_BIND_DEFAULT, cFile )

 
// Test Program
// Build index file at the server
#require "hbnetio"
PROCEDURE Main()
   netio_Connect( "192.168.0.200" )
   netio_FuncExec( "My_UDF" )
   netio_Disconnect( )
   RETURN
 
// rpcdemo.prg
// RPC Demo module
STATIC FUNCTION HBNETIOSRV_RPCMAIN( sFunc, ... )
   OutStd( "DO", sFunc:name, "WITH", ..., hb_eol() ) 
   RETURN sFunc:exec( ... )
  
FUNCTION My_UDF( )
   ? 'Start...'
   USE ('frsgjour')
   INDEX ON field->accountno TO ('frstest')
   USE
   ? 'End.', hb_eol()
   RETURN .T.
 
//Fire up the Server and server activity.
 

// Network activity

 
Regards.
Ash

Nenad Batocanin

unread,
Feb 16, 2014, 12:47:29 PM2/16/14
to harbou...@googlegroups.com

In fact, this system is already in use :)

 

If the user selects a smaller part of the table, then I create a temporary table that contains only a few fields from the main table plus the RecNo() in the main table. Then I browse the temporary table, use the RecNo() to find the appropriate record, and read the data as MainTable->Field. It is a solid system, but I'm not completely satisfied, because the filter condition contains a function that is very slow (I did not mention it for simplicity) and execution may take a while, even on the server.

 

Regards, NB

 

 


Nenad Batocanin

unread,
Feb 16, 2014, 1:03:20 PM2/16/14
to harbou...@googlegroups.com

You're probably right. But that means I have to change all the reports and everything, and it's not possible.

 

Regards, NB

 

 


Francesco Perillo

unread,
Feb 16, 2014, 1:18:51 PM2/16/14
to harbou...@googlegroups.com
On Sun, Feb 16, 2014 at 6:47 PM, Nenad Batocanin <nbato...@wings.rs> wrote:
 It is a solid system, but I'm not completely satisfied, because the filter condition contains a function that is very slow (I did not mention it because of the simplicity) and execution may take a while, even on the server.

Another thing you did not mention before :-)))
If, in a standard Samba setup, the function takes 50% of the CPU time and LAN traffic takes the other 50%, you can't make the filter really quick...

... and now I'm curious about this function....

Francesco Perillo

unread,
Feb 16, 2014, 1:19:34 PM2/16/14
to harbou...@googlegroups.com


On Sun, Feb 16, 2014 at 7:03 PM, Nenad Batocanin <nbato...@wings.rs> wrote:

You're probably right. But that means I have to change all the reports and everything, and it's not possible.

 

On my test report I went from 7 seconds to 0,1 running on the server as RPC.

Nenad Batocanin

unread,
Feb 16, 2014, 1:43:28 PM2/16/14
to harbou...@googlegroups.com

OK, I'll explain in more detail. This table is a list of items, filtered by item type. To display it I use the TBrowse class, and the filter is executed in one procedure. Somewhere in the code exists this piece:

 

CASE Ch == K_ALT_F
   SetFilter()

 

This happens in exactly 56 very different procedures, but the table of items is the same. Some procedures change data, some are read-only, in some the user defines columns... In these procedures there are thousands of different functions and no way to create one procedure that does everything. Here's a view of one of them (screenshot image003.jpg):

 

 

In some tables, the user can edit the information and, more importantly, some information may be changed by other users (for example, the price and quantity of an item)! The number of records is difficult to estimate. We have more than 7,000 users and each of them has its own items and types of items. These tables may hold from 20 to 200,000+ items. The selection is even worse: the user can choose anywhere from 1 to 49,999 records out of 50,000.

 

It would be the perfect solution to stay within the SetFilter function, because changing all the procedures is a _very_ big task. This is the reason ((plus a few thousand reports)) why I'm looking for a solution without code change. A "universal" data server is certainly not an ideal solution, but it gives me the most with no change of source code.

 

I'm sorry to bother you, I hope this discussion will be helpful to someone else :)

 

Regards, NB

 


Nenad Batocanin

unread,
Feb 16, 2014, 1:57:14 PM2/16/14
to harbou...@googlegroups.com

> On my test report I went from 7 seconds to 0,1 running on server as RPC.

 

No doubt - this is the difference between "local" and "LAN" in the test that I did. However, I hope you now understand why I still cannot use NetIO.

 

I still hope that Przemek at some point decides to add to NetIO some features of a "dumb" data server :))

 

Regards, NB

Ash

unread,
Feb 16, 2014, 3:37:05 PM2/16/14
to harbou...@googlegroups.com
Hello Nenad,
 
I don't fully understand the complexity of your system, but one message is clear - you are filtering large tables. I have a similar situation where a table has over 1.5 million records (general ledger journal entries) and users want to get to the payment entries from, let us say, account 1200. Below is how I was able to speed up the browse process.
 
SET SCOPE TO 1200
SET FILTER TO transactiontype == "payment"
GO TOP
Browse()
SET FILTER TO
SET SCOPE TO
 
In other words, I limit the number of records (group) using scope and then filter the ones I need from that group. You might have to create additional indexes to speed up the system.
 
Excuse me if I have missed the point.
 
Regards.
Ash

Francesco Perillo

unread,
Feb 16, 2014, 4:39:07 PM2/16/14
to harbou...@googlegroups.com
Hi Nenad,
I see that English is not our main language, so there may be some misunderstanding that keeps us from understanding each other. For example, when you say you have 7000 users, do you mean 7000 people working ON ONE database, or that your program is installed at 7000 different companies, each with its own copy of the database (each one having from 20 to 200,000+ records)?

Let's see if I understand: in 56 different source code files you display a TBrowse with a list of products. In each of these 56 you may need to list the products as a readonly, in some you must be able to edit, and you may need to have the values updated if some other user updates the value (of course you have to refresh the rows). Since there may be more than 200000 records you must be able somehow to filter that list and show only a subset of the records... for example only the products of a producer (that may reduce the rows a lot) or all the products that have the letter A in their name (and there may be 100000+). Since the filter is "Query by example" style you can't easily apply optimization and SET FILTER TO... is the quickest and easiest way for the programmer...
It would be good and great in SQL (and quite the only way to do it) but in Harbour it may be "expensive" especially when you have to browse the list back and forth.

LetoDB was written just for these cases: moving all the work from the clients to the server when using RDD features... but it is not as easy as you may think... (or I don't understand)
For example, what happens when you use a variable in the filter ?
SET FILTER TO FIELD->SURNAME = mSurname
Can LetoDB handle this ? Or should you modify the source code ?

You may go the commercial route with ADS, or you may hire Alex and/or Pavel to support development of LetoDB (as done in several open source projects)

I think this thread is becoming more interesting now that real code is used to test differences between different solutions, and I'm very very interested in code and dbf optimizations, so you may continue to bother me if you want, also with private mail.
But without looking at some of the code it is really difficult to go further than guesses...

Francesco

elch

unread,
Feb 16, 2014, 5:35:23 PM2/16/14
to harbou...@googlegroups.com

Nenad,


according to my personal checks:

LetoDB applications have memory leaks :-((

checked with Sir Valgrind on Linux.

I counter-checked with the same source and hbnetio.hbc,
only without the LetoDB functions and without linking '-lrddleto':

Crystal clear with HBNETIO: not a single byte going wild.

You may ask why this check with Valgrind was made?

Because in an old Windows version I saw this message multiple times:

'application had problems .....'

[ no known 'exclusive use' problem, it seems to happen unpredictably ... ]


best regards

Rolf

Nenad Batocanin

unread,
Feb 16, 2014, 6:32:55 PM2/16/14
to harbou...@googlegroups.com

It is not possible to apply the SCOPE, because the records are not consecutive. For instance (index on Type):

 

Rec   Type
----------
  1    11
  2    11
  3    15
  4    20
  5    20
  6    21
  7    21
  8    22
  9    23

 

I need the records with type = 11 and 21. As I recall, the old SIx RDD read data directly from the index, and it "knew" how to optimize this query.
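What the SIx driver optimized can be approximated by hand: seek each wanted type and walk only its index range - a sketch, assuming an active index on the Type field (names taken from the table above):

```harbour
// Sketch: collect only the records whose Type is in the wanted set,
// walking the index range for each value instead of filtering all rows.
LOCAL aWanted := { 11, 21 }, nType
FOR EACH nType IN aWanted
   dbSeek( nType )                        // jump to the first key == nType
   DO WHILE ! Eof() .AND. FIELD->Type == nType
      ? RecNo(), FIELD->Type              // records 1, 2, 6, 7 in the example
      dbSkip()
   ENDDO
NEXT
```

Each value costs one seek plus its matching keys, so records 3, 4, 5, 8 and 9 are never touched.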

 

regards, NB

Nenad Batocanin

unread,
Feb 16, 2014, 7:09:07 PM2/16/14
to harbou...@googlegroups.com

Sorry, English is definitely not my strongest side :)

 

Our users are 2,232 companies with a total of 7,292 workstations. Each company has its own database and its own table of items, which typically holds 1-2,000 records, but sometimes it can be 50,000, 200,000 or more records.

 

You understand the concept fully. But the filter cannot be "letter X in the name" - let's say you can only filter by group or type of items.

 

I've done some experiments with the filter in LetoDB. For example, this command is executed very efficiently on the server side:

 

SET FILTER TO At (I2Bin(Artikli->a_vrs), "AF 0F AA ..." ) <> 0

 

I first convert the array of type IDs into the string "AF 0F AA...", then form the query and finally send it all to Leto. I do not expect total optimization; it is sufficient for me to execute the query on the server side.
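The array-to-binary-string step described here can be sketched as follows (the ID values are hypothetical, and a_vrs is assumed to be a 16-bit type code):

```harbour
// Sketch: build a compact filter string from an array of type IDs.
LOCAL aTypes := { 0xAF, 0x0F, 0xAA }
LOCAL cSet := "", nId
FOR EACH nId IN aTypes
   cSet += I2Bin( nId )           // each ID becomes exactly 2 bytes
NEXT
// Keys start at odd positions 1, 3, 5, ..., so requiring an odd hit
// rejects most accidental matches straddling a 2-byte boundary.
SET FILTER TO At( I2Bin( Artikli->a_vrs ), cSet ) % 2 == 1
```

Strictly, one would scan all occurrences with hb_At() because the first hit could land on an even position; the single At() shown keeps the expression simple enough to ship to the server as text.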

 

I'll send a piece of code when I come across a particular problem - there's no point now, it's too complicated.

 

Many thanks for your efforts and valuable information. I'll keep your offer in mind.

 

Regards, NB

 

 

 


Nenad Batocanin

unread,
Feb 16, 2014, 8:04:59 PM2/16/14
to harbou...@googlegroups.com

Very bad news :((

 

NB

 


Przemyslaw Czerpak

unread,
Feb 17, 2014, 5:31:29 AM2/17/14
to harbou...@googlegroups.com
On Sat, 15 Feb 2014, Ash wrote:

Hi,

> When building indexes via RPC in NetIO, the network traffic is reduced by
> half, however - a fair advantage.

Via RPC _EVERYTHING_ is done on the server side. Only the function call
request is sent from the client to the server, and then the final
result is sent from the server to the client.
The cost is constant and does not depend on the table size at all.
Your information wrongly suggests that there is network traffic
during indexing via RPC in NETIO. That is false. The whole
operation is done on the server side only, without any network calls.

best regards,
Przemek

Ash

unread,
Feb 17, 2014, 7:44:57 AM2/17/14
to harbou...@googlegroups.com
Hello,
 
I believe I have found the reason for the network traffic during the indexing process.
 
I use the /data/accounts/comp folder on my Linux server for NetIO testing. This folder is also shared via Samba and is mapped as the z: drive when I log on to my workstation - easier to move files around. The network traffic during the NetIO test was the chatter between Windows and Samba.
 
When I ran the same test without the z: drive, there was no network traffic.
 
Regards.
Ash

Przemyslaw Czerpak

unread,
Feb 17, 2014, 8:58:48 AM2/17/14
to harbou...@googlegroups.com
On Mon, 17 Feb 2014, Ash wrote:

Hi,

> I believe I have found the reason for the network traffic during the
> indexing process.
> I use /data/accounts/comp folder on my Linux server for NetIO testing.
> This folder is also being shared via Samba and is mapped as z: drive when I
> logon to my workstation - easier to move files around. The network traffic
> during the NetIO test was the chatter between Windows and Samba.
> When I ran the same test without the z: drive, there was no network traffic.

Yes, that explains the network traffic problem.
Anyhow, in such a case you should also repeat your tests with pure NETIO
file access, because the configuration you used doubled the number of
network messages.

best regards,
Przemek

elch

unread,
Feb 17, 2014, 10:26:27 AM2/17/14
to harbou...@googlegroups.com

Hi Przemek,


can we avoid that 'doubled' network traffic with a tricky setup, and get Samba AND HBNETIO working together?

A different IP address?

An extra NIC in the server on another subnet?

Because I need to access many 'extra' files ( text, pictures ) and would need Samba for that,
as these are accessed with commands like FILE(), MemoRead() ..
And they are not in the data path of HBNETIO: some here, some there ..
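For extra files that live under the NetIO server's file root, the "net:" filename prefix may already cover cases like FILE() and MemoRead() without Samba - a sketch, assuming the server at 192.168.0.200 exports the (hypothetical) paths shown:

```harbour
#require "hbnetio"

PROCEDURE Main()
   // After a successful connect, this server becomes the default
   // target for "net:" paths used by ordinary file functions.
   IF netio_Connect( "192.168.0.200" )
      ? FILE( "net:pictures/logo.bmp" )      // existence checked on the server
      ? MemoRead( "net:notes/readme.txt" )   // contents read from the server
      netio_Disconnect()
   ENDIF
   RETURN
```

This only helps for files reachable from the NetIO root directory; files scattered elsewhere would still need Samba or a second NetIO instance with a different root.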


best regards

Rolf

elch

unread,
Feb 17, 2014, 10:57:00 AM2/17/14
to harbou...@googlegroups.com

btw Ash,


I believe you are misusing this loadable .hrb file: in different places it is called an RPC *filter*.

As far as I have understood, it is intended to exclude some RPC functions for security reasons.

Maybe that is no topic for a 'trusted' local network, but you mentioned internet access to the server.

I am thinking here of functions like Directory(), MemoRead(), MemoWrit(), FErase() ...

I am unsure how to exclude such functions: by overlaying them with a new dummy?


best regards

Rolf


 

Ash

unread,
Feb 17, 2014, 12:24:42 PM2/17/14
to harbou...@googlegroups.com
Hello, Rolf,
 
I think the additional network traffic is the communication between Samba and my workstation resulting from index file creation and changes in file size as the index is being built on the NetIO server. This traffic occurs for all folders that are being shared with Samba. Therefore, NetIO and Samba can co-exist without any issues.
 
Regards.
Ash

Przemyslaw Czerpak

unread,
Feb 17, 2014, 1:25:26 PM2/17/14
to harbou...@googlegroups.com
Hi,

I think that you missed the configuration details.
The indexes were stored on a different computer than the HBNETIO server,
so the HBNETIO server was receiving requests from the client and
accessing files on the other server via the SMB protocol.
I do not know why you would need such a configuration. Simply
access the files on the other server directly from the client by
installing HBNETIO on that server.
I do not see anything that can be changed in HBNETIO.
If I'm missing something, please let me know what you need.

best regards,
Przemek

Ash

unread,
Feb 17, 2014, 1:26:43 PM2/17/14
to harbou...@googlegroups.com
Hello Rolf,
 
I have divided my application into two parts: the Main application, used by LAN/WAN users, and the Admin application, used by the system administrator to manage the system in a LAN setting - to build index files or create a new company, for example. As I will use standard hbnetio with default settings and no RPC, the functions you mentioned will not be available to the main application. Why mess with something that works?

Now, the Admin application is different. It will require hbnetio with my code changes and this .hrb file to allow for the functionality provided by this application - let us say pvdc-netio on a different port. If it becomes too complicated, I will implement it using Samba.

I am still learning about this amazing piece of software. Am I misusing it? Possibly, but only during testing.
 
Regards.
Ash

elch

unread,
Feb 17, 2014, 6:15:21 PM2/17/14
to harbou...@googlegroups.com

Okay, Ash!


I respect you for your experience, and even more Nenad for his roughly one million source rows:

WoW - I chose a different way, as my main application is the same for all potential users ..

# because more than half of my app's ~ 170K lines is my own library, which will always be needed by all

# pushing the needs of e.g. 50 workstations onto one single server, I would need a 'full-grown mainframe' bastard ;-)

So all workstations are planned to use HBNETIO for DBF access, as it seems faster than Samba.
And all stations will have the option of RPC remote execution in the background for 'special forces' ;-),
where such a thing is very rarely needed ..


---

Further, I have some additional tools of my own around that, like my own database management:
this utility runs on the server itself without HBNETIO or anything else, but with Harbour's impressively fast local access to the data.
This tool has [long since] been responsible for maintenance ( i.e. PACK / updating DBF structures / RARELY! reindexing if needed )
[ and all the needed data is stored in 3 DBFs ].
I created this tool decades ago, as we are actually talking about 80 DBFs and 200+ NTXs ...
( sure!! not to boast to you or anyone else, only FYI, to give a better picture of my worries .. )


very best regards

Rolf

Ash

unread,
Feb 17, 2014, 9:13:02 PM2/17/14
to harbou...@googlegroups.com
Fresh tests results are as follows:
 
Browse (20,000 records) standing on page down key till the last record - 17 records per page.
NetIO  Network 0.59% Time 53 Seconds
LetoDB Network 0.13% Time 59 Seconds
Samba  Network 0.64% Time 65 Seconds
 
Build Indexes (48 tables 80 indexes) Collective table size 300MB
NetIO  Network 5-25% Time 75 Seconds
LetoDB Network    0% Time 48 Seconds (by default indexes are built at the server)
Samba  Network 5-20% Time 83 Seconds
 
Network use when building a smaller index with:
 z: drive present  12.5 %
 z: drive disconnected   8.5 %
 
A difference of 4% in this case. It translates to between 45-50% Samba overhead when the z: drive is present, and indexing also takes a little longer. This is about the same number that I assigned _incorrectly_ to NetIO in my earlier posts.
 
Regards.
Ash

Ash

unread,
Feb 17, 2014, 10:01:37 PM2/17/14
to harbou...@googlegroups.com
Hello Rolf,
 
Thank you for sharing your thoughts.
 
I think unless we are able to strike a balance between loading the server and loading the network, the user will suffer the consequences. Perhaps we can throw more capable equipment into the mix to improve performance, as IT shops encouraged by IBM did in the last century.
 
I take comfort in the design of my application in that its interface and business logic are separated from the database. This is how I am able to work with Samba, NetIO and LetoDB with the same code base. I need to go one step further and separate the application interface from the business logic to finally have a 3-tier application. But that is a long way away. Stability and security of the application are my concerns for now.
 
I have decided to use NetIO for three reasons: It is stable and secure, it is part of Harbour distribution, and the application can be used via the internet without having to deal with the remote desktop. 
 
Regards.
Ash

elch

unread,
Feb 18, 2014, 11:01:48 AM2/18/14
to harbou...@googlegroups.com
Nenad,

perhaps you would like to quickly test my browse example in the other thread ...

best regards
Rolf

Nenad Batocanin

unread,
Feb 18, 2014, 6:36:10 PM2/18/14
to harbou...@googlegroups.com

Of course, as soon as I have some free time. Currently I need 48 hours in a day :))

 

NB

 

From: harbou...@googlegroups.com [mailto:harbou...@googlegroups.com] On Behalf Of elch
Sent: Tuesday, February 18, 2014 5:02 PM
To: harbou...@googlegroups.com
Subject: Re: [harbour-users] Re: Conversion of an application from Clipper to Harbour + NetIO

 

Nenad,

--

Itamar M. Lins Jr. Lins

unread,
Mar 7, 2014, 9:59:45 AM3/7/14
to harbou...@googlegroups.com
Hi!
The latest LetoDB update comes with some fixes for the memory leaks!


2014-03-06 18:15 UTC+0200 Pavel Tsarenko (tpe2/at/mail.ru)
  * include/funcleto.h
    * version number increased (2.13)
  * include/srvleto.h
    + added last datetime read and write fields into AREASTRU and TABLESTRU
    + added bBufKeyNo and bBufKeyCount fields into USERSTRU
  * include/rddleto.ch
    + added RDDI_BUFKEYNO and RDDI_BUFKEYCOUNT commands
  * include/letocl.h
    + added ulKeyNo and ulKeyCount fields into LETOTAGINFO structure
    + added LetoSet() declaration
  * source/server/letofunc.c
  * source/client/leto1.c
  * source/client/letocl.c
  * source/client/letomgmn.c
  * readme_rus.txt
  * readme_pt_br.txt
    * changed protocol for lock commands: if the table is modified by another
      user after reading the record, together with the result of the lock
      passed the changed record.
    + added RDDI_BUFKEYCOUNT and RDDI_BUFKEYNO commands, for buffering
      ordKeyCount() and ordKeyNo() calls. Protocol for record data has been
      changed: added KeyCount and KeyNo values.

2014-02-28 18:10 UTC+0200 Pavel Tsarenko (tpe2/at/mail.ru)
  * source/server/letofunc.c
    ! memory leak
  * source/client/letocl.c
    * warnings

2014-02-27 17:55 UTC+0200 Pavel Tsarenko (tpe2/at/mail.ru)
  * source/client/letocl.c
    ! fixed memory leak

2014-02-26 12:45 UTC+0300 Alexander Kresin (alex/at/belacy.ru)
  * source/client/leto1.c
    ! letoTrans() was fixed to work correctly with an old sever version

Francesco Perillo

unread,
Mar 7, 2014, 10:03:26 AM3/7/14
to harbou...@googlegroups.com
I still have to understand which LetoDb repository is the correct one: sourceforge or github...


Itamar M. Lins Jr. Lins

unread,
Mar 7, 2014, 10:30:00 AM3/7/14
to harbou...@googlegroups.com
Hi!

Fwd, of Alexander.

Hello,

thanks for the interest to LetoDb.
The current working repository is the SourceForge CVS, rel-1-mt branch; it isn't outdated, the last update was today ( the information on SourceForge's letodb main page isn't correct - that's the result of a bug in the SourceForge software ).
You may use the following console command to download it:

cvs checkout -r rel-1-mt -P letodb

The github repository is my personal, I use it for testing purposes.

Regards, Alexander.

I used 

cvs -d:pserver:anon...@letodb.cvs.sourceforge.net:/cvsroot/letodb checkout -r rel-1-mt letodb

and it works fine

Regards 





Francesco Perillo

unread,
Mar 7, 2014, 10:33:45 AM3/7/14
to harbou...@googlegroups.com
Ok thanks,
since I saw some recent commits on github... and also on cvs... I didn't know which one to use...

Perhaps officially moving the project to GitHub and marking the "test" repository as a "test fork" would create fewer problems - and git is 1000 times better than CVS...

Itamar M. Lins Jr. Lins

unread,
Mar 7, 2014, 10:40:45 AM3/7/14
to harbou...@googlegroups.com
Yes, but that is a decision for Pavel or Alexander.
Git is simpler.
CVS is an old tool.

The letodb CVS branch should become the master, and the root thrown away, since no one is using it.

Best regards,
Itamar M. Lins Jr.

Dušan D. Majkić

unread,
Mar 7, 2014, 12:42:14 PM3/7/14
to Harbour Users
> I still have to understand which LetoDb repository is the correct one:

This is the link that directly downloads the latest letodb source as a compressed tar:

http://letodb.cvs.sourceforge.net/viewvc/letodb/letodb/?view=tar&pathrev=rel-1-mt

For the record, it is the CVS pathrev=rel-1-mt from sf.net.

Regards,
Dusan Majkic
Wings Software