LetoDBf UDF


Ottó Trapp

Apr 12, 2019, 3:58:01 AM
to Harbour Users
Hello!

After connecting from the client side I tried to call a UDF function built into letoudf.hrb on the server. In letodbf_00.log I get this error: Error BASE/1001  Undefined function: HRB_PROBA
letodbf.log says the .hrb file is loaded, and I have Allow_UDF = 1 set in letodb.ini.

The call is: LETO_UDF("HRB_Proba", 5, "Hello!", StoD("20190411"))
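For reference, a matching server-side function compiled into letoudf.hrb might look roughly like this sketch (the body and return value are hypothetical; only the name and parameter list mirror the call above):

```harbour
// Hypothetical sketch of a UDF inside letoudf.hrb; LETO_UDF() passes the
// arguments given after the function name on to this function.
FUNCTION HRB_Proba( nNumber, cText, dDate )
   // do some server-side work and hand a value back to the client
   RETURN cText + " " + hb_ntos( nNumber ) + " " + DToC( dDate )
```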

What am I missing? Thanks for the help!

Best Regards,
Otto


elch

Apr 12, 2019, 5:34:03 PM
to Harbour Users
Hi Otto,

Using server-side UDF functions in LetoDB[f] ...
... is for the special needs of experts!

LetoDBf newbies who, right after testing whether they can connect to the server,
switch with their first question to the topic of UDFs ...
... are, based on my experience, on 'the very wrong path'.

There are a few occasions when it is inevitably needed, e.g.:
a 'homemade' function in an index key.

---
Before I answer your question [ I think I know the answer ],
you have to describe what the aim is,
because I can possibly show you a 'better' way.

[ I noticed your question about HbNetIO UDF usage ... ;-) ]

best regards
Rolf

Ottó Trapp

Apr 15, 2019, 5:34:31 AM
to Harbour Users
Hello Rolf,

I am evaluating LetoDBf, and my aim is to be able to replace SMB with a client / server database (see my post from 04.07.2017 titled 'desktop application split into...'). Only recently did I have the ability and time to take the 'harbour' steps on this road and test the speed and capabilities of LetoDBf more thoroughly. In this context I wanted to call a UDF to see how it works. Work is in progress to overcome the 'obstacle' that the app is in Xb***++.

I most certainly do not want to be on the wrong path! Please advise and guide me; I am interested in the 'better' ways, as you called them, and in having your support! (Now, and very hopefully in the future as well.)

Thanks very much!

Best Regards,
Otto

elch

Apr 20, 2019, 5:32:57 PM
to Harbour Users
Hi Ottó,

Well, HbNetIO is a 'storage re-director':
low-level file requests are redirected over the network to another storage.
That is similar to, but better than!, using the SMB network protocol.

In contrast! LetoDB[f] is a buffering! 'client - server' RDD,
capable of processing a few thousand records -- per second!
[ RDD == Replaceable Database Driver ]
And the 'client - server' term is quite different from what you seem to mean by it;
you perhaps mean: 'remote code execution'.

A classic example is a FILTER: to filter 2 records out of 1000,
# HbNetIO sends all 1000 records, one by one!, to the local machine,
and at the local machine 998 are discarded
# LETO sends one single request to the server for the next valid records,
and only *two* records are transmitted; all filtering work happens at the server.

So HbNetIO users try to off-load some database actions to the server side.

That heavily contradicts! the client-server concept of LetoDB[f],
and the buffering client can easily get out of sync.
It can do on-demand actions at the server side, but this should be the exception,
as such is a serious task with multiple trapdoors
-- expert territory! for very experienced developers who know what happens 'under the hood'.

---
I have read about your intention, and one of my main questions would be:
how do the clients get the result of some data action at the server?!

The easy option:
you encourage the annoying, pure-Windows-only 'lost souls' of the xBase++ team
to temporarily hire me as a freelancer to create an RDD for their environment.
[ I would need some 'non-disclosure'! insights into their RDD environment ]

The complex option:
stay away from LetoDBf! -- and use HbNetIO.

best regards,
Rolf

Ottó Trapp

Apr 25, 2019, 9:43:28 AM
to Harbour Users
Hello Rolf,

Thanks for your explanation!

By 'server/client' I tried to mean something similar: that instead of a 'simple' low-level storage redirector, a server process local to the database does centralized 'higher-level' jobs (indexing, filtering as you described, seeks, serving records efficiently). Then I wondered whether it is possible, within LetoDBf's UDF functionality, to do more complex collection of data for reports (processing /reading/ a high number of related records and giving back an array holding summed values, so the client can write the actual report, or writing an output report file that gets copied back to the client). What I did not think through was that it may be very problematic for the server to react to a server-side UDF that can do almost 'anything' and still keep the clients in sync. But this latter need is not that important; I was just trying to map the possibilities.

I'll write to the Alaska team to tell them that you are willing to write an RDD for Xbase++ and that I think it would be most beneficial to have a lightweight yet very powerful RDD (or DBE, as they call them) for network use among the present ones. Alaska Software has a so-called 'Technology Partner Program', see: http://alaska-software.com/partners/tpartner.cxp .

Best Regards,
Ottó

elch

Apr 25, 2019, 4:42:18 PM
to Harbour Users
Hello Ottó,

... to do more complex collection of data for reports ... and give back an array holding summed values
Such I would label: 'manually on foot', and there maybe won't even be an 'array',
but a long string that then has to be converted to an array -- phew! ...
This really sounds to me like you had better use HbNetIO for your task, if that works for you, instead of 'misusing' LetoDBf.

---
LetoDBf follows this philosophy:
make the Harbour application RDD-independent ( generally a good 'idea' ),
then just 'switch' to LETO as the default RDD -- and you are done.
[ Even that is not needed: upon connecting to the server it's done automatically. ]
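That 'switch' can be sketched like this (table name hypothetical; the server address is taken from the test log later in this thread; the `//host:port/` path syntax is the one LetoDBf uses):

```harbour
REQUEST LETO                 // link the LETO RDD into the binary

PROCEDURE Main()
   rddSetDefault( "LETO" )   // make LETO the default RDD
   // open the table straight from the server (table name is hypothetical)
   USE "//192.168.2.47:2812/test1" ALIAS t1 SHARED NEW
   ? RecCount()
   CLOSE ALL
   RETURN
```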


I'll write to the Alaska team to tell them that you are willing to write an RDD for Xbase++ and that I think it would be most beneficial to have a lightweight yet very powerful RDD (or DBE, as they call them) for network use among the present ones. Alaska Software has a so-called 'Technology Partner Program' ...
[ know xBase++ since their early times, but discarded them ... ]
 
Correct!, xBase++ labels it 'DBE', so it should look like:
DbeLoad( "LETODBE" )
and this will further need an "rddleto.dll", which is ready (an hbmk2 script is missing)

If I look with a DLL inspector into ADSDBE.dll, I see 12 class definitions, e.g.:
_ADSTBLClassData
which after DbeSetDefault( "ADSDBE" ) is used instead of the one from DBFDBE.dll:
_DBFTBLClassData

So I just need to know what's in these classes;
easiest would be a 'blank' pattern to fill in with LETO API functions ...
... or at least that is how I think about it ;-)
Sure, it would be a new experience for me, OOP in C++
[ opposite to the pure! ANSI C used in LetoDBf ],
but maybe easier for me than studying the open-source API 8-)

My personal bet to lose: 'they' have no interest ...

best regards
Rolf

Ottó Trapp

Apr 26, 2019, 4:02:14 AM
to Harbour Users
Hello Rolf,

For server-side-run functions, another process (A) may be used that runs on the same host as letodb and connects like a normal client (if that is possible). Mr. Aleksander Czajczynski has very good tools for communication / data exchange between processes (in this case between the client and the server process (A)).

Yesterday I wrote the mail to Alaska. I told them that you have a ready and well-working RDD, that (your) porting of its client side to Xbase++ may be a very viable option, and that such a new DBE has a place among the available DBEs. (I think I would not be here if it had not.) I'll inform you if I get any reply.

[Yes, as far as I know all DBEs reside in separate DLL files, and they must be very similar to RDDs.]

Best Regards,
Ottó

Angel Pais

Apr 26, 2019, 9:46:45 AM
to harbou...@googlegroups.com
The RDD architecture is waaaay different from the DBEs'.
They are CORBA-based and VEEEERY hard to hack.
Many people have tried over the years, and nobody has succeeded.

Regards
Angel Pais

--
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: http://groups.google.com/group/harbour-users

elch

Apr 27, 2019, 6:16:56 PM
to Harbour Users
THANK you for the info, Angel!

Seems to be CORBA 2.0 in 32-bit for xBase++ ...
... SOUPer OMG! ;-)
[ Who of their leftover team can manage it? -- and they would never provide the needed info ]

... hard to hack
Many people have tried ...

Not me! -- here is an xBase++ app logged into Harbour LetoDBf:
the serious rest would need 'some' weeks of experimenting to test the edges ...

best regards
Rolf
moneyback.jpg

Angel Pais

Apr 29, 2019, 6:39:35 AM
to harbou...@googlegroups.com
interesting !

--

Rob S

Apr 29, 2019, 10:10:07 AM
to harbou...@googlegroups.com
Fe

elch

Apr 29, 2019, 5:46:56 PM
to Harbour Users
Hi Angel,

interesting !

An xBase++ 1.9 console app at work with LetoDBf:
---snip---

connect to //192.168.2.47:2812/ at 1508576420
opened table TEST1 SHARED
RecCount 252
RecNo 1
Skip1, RecNo 2
Fieldname1 NAME length 10
FieldValue Elch
MemoValue ' 1 2'
Fieldput NAME success
Fieldvalue Alexander
100 K! * fieldget 0,06 s
100 K! * fget() 0,44 s
100 K! * fput() 48,97 s
................................................
Skip records 19908 in 0,25 s
table closed ...
connection closed

---snip---

Comments:
## this is a 'local' network, as I want to see the overhead:
the overhead is much less than expected and will give good results in a real network.
Tests showed a ~ 50% decrease for fput() and skipping ...
## fput() includes 100K! RLock() + Unlock() calls for 100K single-field changes
## impressive skipping speed, as expected -- that is what LetoDBf is famous for
## the above is done with the LetoDBf C-API, which means there is no "workarea",
so we have to 'emulate' a workarea environment.
Tests are done 'procedurally': we have to query the "field position"
for a given "field name" on each call -- this can be nicely 'cached' with objects in OOP style.

For sure we cannot access FIELDs just by their name or by aliasing them
[ again: there is no 'workarea'! ] -- so field access actually looks like:
fget( "Table", "field" ),
where "table" is a pointer! to an allocated struct [ think of a FileHandle
pointer to be used in further calls -- like the connection: 1508576420 ];
with objects it may look like: oTable:oField ??

---
As said, these are only early tests to check whether it's worth working on further ...

And as the topic is xBase++: I would expect to invest a lot of time
and then stumble over a K.O. -- so it ever was with that compiler.
Best to keep that in mind, so the work does not get too specific to §$%&/() ...

best regards
Rolf

elch

Apr 29, 2019, 6:06:49 PM
to Harbour Users
Hi Ottó,

Sorry, I don't understand your comment.
We seem to be talking about different topics.

I need no 'process communication' between a LetoDBf client and the server;
that's [hopefully] fine and, moreover, not your topic.

I asked you how you get the information out of a Harbour app into your xBase app --
that is not my task, but possibly of interest.

best regards
Rolf

Ottó Trapp

Apr 30, 2019, 3:53:13 AM
to Harbour Users
Hello Rolf,
 
Sorry, I don't understand your comment.
We seem to be talking about different topics.

I need no 'process communication' between a LetoDBf client and the server;
that's [hopefully] fine and, moreover, not your topic.

I asked my initial question about UDFs because I wanted to know whether it is possible to put some business logic on the server side into letoudf.hrb. You said letoudf.hrb is not really for this purpose. Then I thought it may be possible to run a Harbour process (A) on the same computer that hosts the letodb server process (B). This process (A) could then connect to the letodb server as a 'normal' letodb client but also act as a server towards the Xbase++ client: it could execute data-intensive business logic and send back data (e.g. that long array) to the Xbase++ client (C). So the Xbase++ client would have 2 connections: one to process (A) and one to the server (B). I was referring to data exchange between (C) and (A), not between LetoDBf components.

Those Harbour developers who actively use LetoDBf and generate big reports could tell whether the need to move business code to the server side exists at all, or whether clients are so fast that it is a waste of time even to think of such setups. (With SMB-shared dbf/cdx there are reports in my program that take 40 minutes to generate if the database is bigger... It may be a few minutes with Leto, even from the client side...)

I asked you how you get the information out of a Harbour app into your xBase app --
that is not my task, but possibly of interest.

I do it with Mr. Aleksander Czajczynski's lib, codenamed HBIO, but he could tell you more about its capabilities in detail.

Best Regards,
Otto

Ash

Apr 30, 2019, 5:38:54 AM
to Harbour Users
Hello Otto,

In a LAN, report production using LetoDBf is around 8-10 times faster than over an SMB share.

Regards.
Ash

Francesco Perillo

Apr 30, 2019, 6:30:57 AM
to harbou...@googlegroups.com
8-10 times faster can be a little misleading...

Imagine the following snippets of code that calculate the sum of a field based on a condition on another field. All 3 snippets will calculate the same total value!
S1:
nValue := 0
GO TOP
DO WHILE ! eof()
   IF FIELD->TEST == "A"
       nValue += FIELD->VAL
   ENDIF
   SKIP
ENDDO

S2:
nValue := 0
SET FILTER TO TEST == "A"
GO TOP
DO WHILE ! eof()
   nValue += FIELD->VAL
   SKIP
ENDDO

S3: (I don't remember exact syntax of SUM....)
SUM FIELD->VAL TO nValue FOR TEST == "A"


In a standard Harbour program, the time spent is the same: ALL the records from the DBF are read from disk, transferred to the client and examined at the client. HBNETIO is supposed to be quicker than SMB...

What happens when LetoDbf is used?
From what I understand, Rolf please correct me, is this:
S1: ALL records are transferred from the server to the client as in standard harbour RDDs... the logic is in the application and there is no way for LetoDBf to know anything... It can be quicker than SMB and HBNETIO because it may use compression and doesn't need to obey native SMB locking
S2: LetoDBf knows there is a filter. The filter is transferred to the server that sends to the clients ONLY the records that match the filter. So if 10% of the records match, only 10% of the records are transferred. And the skip done locally on the server is way... way... way quicker... In this case LetoDBf still doesn't know what you want to do with the data, it just gives you the records...
S3: LetoDBf knows the filter and the job you want to do on the filtered data... it does the complete job on the server and transmits to the client only a few bytes, the nValue... Even if the filter matches 100% of the data, only the result is returned...

So, if your code for the report is in S1 style... well, you may have a speedup, yes, but all the hard work is still on the client and on the LAN. If your code is in S2 style, it depends on how many records your filter matches. Code in S3 style is the most efficiently sped up by LetoDBf.


Now, why is Otto asking for RPC? Why does he refer to HBIO?
Several months ago I was asked to present a dashboard with some data aggregated in a special way. The user needs to have a look at the data only once every X days, but when he needs them, he needs them NOW. He could not wait the 8 minutes the report took to generate. So I created a cron script on the server that at 6 AM calculates the totals up to that moment and stores them in a dedicated DBF... from 8 minutes to... less than 3 seconds!

I decided to go the cron route, but I also investigated HBNETIO RPC and a call to an HTTP server, hosted on the same Linux server as the Samba share with the DBFs, that would fork the script... aka HBIO...

From what I understand, Otto wants to run a program/function/procedure on the server so that the data is local, and he has one mandatory requirement: access must be coordinated with the clients! They must use the same way of accessing the data, so as to share the same locking mechanism, because he needs access to live data.

I hope this message can help the discussion.

Francesco


--

elch

Apr 30, 2019, 10:14:07 AM
to Harbour Users
Hi Francesco,


8-10 time faster can be a little misleading...
yep!, for sure there is space left  ;-)

 
S1: 
GO TOP
DO WHILE ! eof()
   IF FIELD->TEST == "A"
       nValue += FIELD->VAL
   ENDIF
   SKIP
ENDDO
the rough way ..

S2:
SET FILTER TO TEST == "A"
GO TOP
DO WHILE ! eof()
   nValue += FIELD->VAL
   SKIP
ENDDO
the classic way ..

S3: (I don't remember exact syntax of SUM....)
SUM FIELD->VAL TO nValue FOR TEST == "A"
the ultimate way,
translated in "letostd.ch" to: Leto_Sum(), which uses the *blockbuster* Leto_DbEvil()
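So the SUM command could also be issued as a direct call; a sketch, assuming Leto_Sum() takes the field and the FOR condition as strings (the argument order is my assumption, not confirmed in this thread):

```harbour
// hypothetical direct call; the SUM command form is the safe way,
// as "letostd.ch" does the translation for you
nValue := Leto_Sum( "VAL", "TEST == 'A'" )
```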

 
S1: ALL records are transferred from the server to the client as in standard harbour RDDs... the logic is in the application and there is no way for LetoDBf to know anything... It can be quicker than SMB and HBNETIO because it may use compression and doesn't need to obey native SMB locking
!! Records are sent! **in bunches**!, not one by one!! as with SMB / HbNetIO.

Default value: 10 records -- the distributed sample 'letodb.ini' suggests 21 *as default*.
And such a bunch of records can be nicely compressed before being transferred over the network.
The amount of 'Cache_Records' can be temporarily changed with: Leto_SetSkipBuffer().
Estimated speed improvement versus SMB: xx! times -- please check! yourself.
The example "test_mem.prg" is for benchmarking -- it can skip > 100 K! (*) records per second.
[ (*) 1 GBit fibre network with a high-end switch, stone-age old hardware -- with a copper network maybe 30% less ]
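A usage sketch of Leto_SetSkipBuffer() (assuming it simply takes the new number of records to cache per server request; the value 100 is an arbitrary example):

```harbour
// enlarge the skip buffer before a long sequential scan ...
Leto_SetSkipBuffer( 100 )   // assumption: cache 100 records per request
GO TOP
DO WHILE ! Eof()
   // ... process the current record ...
   SKIP
ENDDO
```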
 
S2: LetoDBf knows there is a filter. The filter is transferred to the server that sends to the clients ONLY the records that match the filter. So if 10% of the records match, only 10% of the records are transferred. And the skip done locally on the server is way... way... way quicker... In this case LetoDBf still doesn't know what you want to do with the data, it just gives you the records...
further increasing the performance: the bigger the DBF table, the more records are filtered out.
We hopefully aren't discussing tables with only a thousand records ;-( ...
 
S3: LetoDBf knows the filter and the job you want to do on the filtered data... it does the complete job on the server and transmit to the client only few bytes, the nValue... Also if the filter matches 100% of the data, only the result is returned...
Nearly as fast as doing it locally, plus the time for the request to and the response from the server with the result.
Plus this happens down at C level, with only one 'PRG' (*) function call ...
[ (*) not quite true: LetoDBf has its own implementation of that function and commonly doesn't execute it at PRG level ]

Now, why Otto is asking for RPC?
Because he has only connected once?? to a LetoDBf server,
and has no more real experience with the skipping monster LetoDBf :-)
"RPC RPC" burned into the mind :-)
 
Several months ago I've been asked to present a dashboard with some data aggregated in a special way. The user will need to have a look at the data once every X days, and when he needed he needed to have them NOW. He could not wait the 8 minutes the report needed to be generated. So I created a cron script on the server that at 6AM would calculate the totals up to that moment and store them in a dedicated DBF... from 8 minutes to... less than 3 seconds !
Dirty tricks for a special occasion -- and why not sum the values at 6 AM with a LetoDBf client?
If I understand correctly, you then have to sum from 6 AM onwards?
-- and that is done conventionally, with HbNetIO??
Then it should be a fraction of a second if done with LETOooo 8-)

 
From what I understand, Otto wants to run a program/function/procedure on the server
No one hinders Ottó from writing a Harbour application running on the 'server' machine.
But I won't give support to the nonsense! that this has to run inside the LetoDBf server.

And I am on the way to testing LetoDBf access from 32-bit ;-)) xBase++;
I posted the *very* first results just a few hours ago ...

best regards
Rolf

Ash

Apr 30, 2019, 10:58:19 AM
to Harbour Users
Hello Francesco,

8-10 time faster can be a little misleading...
These numbers come from a customer of mine. One report, a complicated one, took close to 50 minutes the old way but takes less than 6 minutes with LetoDBf. However, efficient programming does count.

Regards.
Ash

Mario H. Sabado

Apr 30, 2019, 11:35:21 AM
to 'elch' via Harbour Users
Hi,

In my case, the performance of accessing (read/write) a database in the AWS cloud from a local (on-prem) client application via a LetoDBf connection is comparable to accessing my application on a Windows server over the LAN through a shared folder. Wireless LAN access used to be intolerable in my environment, but using LetoDBf I have overcome this burden that I had carried for a long time (in the form of user complaints).

Regards,
Mario

Ottó Trapp

May 2, 2019, 11:02:59 AM
to Harbour Users
Hello!

Ash, Francesco, Mario, Rolf: thank you for sharing your experiences, and thanks for the detailed explanations! Yes, Francesco guessed right what I was thinking of: that for some jobs a process local to the database should be the fastest. BUT I'll put that thinking aside now and concentrate on testing 'the skipping monster'.

In their first reply to my recommendation, Alaska showed little interest in a new DBE. In spite of this, I made some samples with a large DBF for them to test the speed of the existing DBEs and LetoDBf (I sent them the GitHub link to it).

Best Regards,
Otto


elch

May 7, 2019, 5:20:11 PM
to Harbour Users
Hi,

Really! interesting,
to work without a "workarea".
[ You know what you miss once it's gone ... ]

Beforehand an explanation: the LetoDB[f] C-API
is a P-rogramming I-nterface for the language 'C' -- for LetoDB[f].
It is not a separate effort; it is the 'underlying layer' below the Harbour RDD -- i.e. below a "workarea".
The RDD methods (mostly) use these functions to communicate with the server.
And it is this C-API that I am using in my tests for an xBase app to work with LetoDBf.

The most obvious consequence is that we cannot use 'pure' field names,
but have to call a function which queries the C-API for the field content.
In the attached example ( test_xpp.prg, for xBase!! ) you see these:
g() == fieldGet() -- p() == fieldPut().
In line 54ff we see a classic REPLACE command
-- that works because of the PP rule in line 5+.
( We may notice that I can use an ALIAS in the field name. )
[ Such a PP rule I cannot add generally, as it interferes with other DBEs. ]
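Such a PP rule could look roughly like the #xtranslate sketch below (the real rule in test_xpp.prg may differ; p() is the fieldPut() wrapper named above, and the example field name is hypothetical):

```harbour
// hypothetical preprocessor rule: rewrite a simple REPLACE into the
// p() == fieldPut() wrapper, passing the field name as a string
#xtranslate REPLACE <fld> WITH <val> => p( <(fld)>, <val> )

// so that:   REPLACE NAME WITH "Elch"
// becomes:   p( "NAME", "Elch" )
```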

But there are more consequences: FILTERs and RELATIONs
 -- they also would need "workareas", which we don't have at the client side
 -- but fortunately we have real workareas at the server side!
That means we can set 'optimized' FILTER and RELATION expressions,
which the server can evaluate because they are 'self-contained', without
references to variables of the client.
That is a limitation!, but as we want 'performance' it is the way to go,
because then the client needs to know of a FILTER only for academic purposes;
the server will handle all that is needed.
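The 'self-contained' requirement can be illustrated like this (field name and value are hypothetical):

```harbour
cWanted := "A"

// NOT server-evaluable: the filter expression refers to a client variable
// SET FILTER TO TEST == cWanted

// self-contained: the current value is expanded into the expression text,
// so the server can evaluate the filter entirely on its own
cExpr := "TEST == '" + cWanted + "'"
SET FILTER TO &cExpr
```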

The attached example snippet does not show the actual ~ 1500 lines of wrapper functions
-- i.e. what is behind a 'LetoRecCount()' etc. ... [ yet all done at PRG! level ].

It looks convenient, IMO?
And it only needs 6 'internal'! STATIC variables:
two arrays, plus two pointers to which item in each array is the active one,
plus one STATIC indicating that the system is initialized ...
I thought about OOP style, but it would take much more effort -- proof to
myself that well-designed 'modular' code can be superior ;-)

---
If xBase has no 'interest' [ or 'capacities' ] to earn money,
I should make it for free -- but that is a pity, because:
# XPP users are used to PAYING, and something free may seem worthless to them
# XPP users are NOT used to building system parts themselves
-- and they may get very scared if they can look down into the source 8-)

=> maybe I upload it as a 'binary'? 8-)

---
Hefty work in progress ...
-- I have made a lot of changes to the C-API, all still not uploaded ...

stay tuned
elch
report.zip

Ottó Trapp

May 9, 2019, 12:27:17 PM
to Harbour Users
Hello Rolf,

I had a look at your sample -- very nice! You are able to connect to and use LetoDBf from Xbase++ through your Leto..() functions and to access fields with g() and p()!
Yes, a workarea is really missed if your code relies heavily on ISAM and is full of ALIAS->(Db..()), ALIAS->FIELDNAME and FIELDNAME references. (Like ours.)
I also have cases where client-side SET FILTER, SET RELATION, REPLACE ALL FOR and DELETE ALL FOR conditions contain variables defined on the client side.

If xBase has no 'interest' [ or 'capacities' ] to earn money,
I should make it for free -- but that is a pity, because:
# XPP users are used to PAYING, and something free may seem worthless to them
# XPP users are NOT used to building system parts themselves
-- and they may get very scared if they can look down into the source 8-)

=> maybe I upload it as a 'binary'? 8-)
Alaska, in reply to my recommendation, argued that they had earlier helped 3 DBE attempts that unfortunately did not materialize into real DBEs.
It is true that we are used to paying for a tool or a 3rd-party lib. Parallel to this, I also admire those who are gifted enough to build their own tools. I would never consider free software (and its freedom) worthless -- just the opposite: I like GNU/Linux and admire Harbour, its development and the open philosophy. But users have to either take part in the community's work and/or reward working members somehow. I'll attempt to write to you soon on this. I am scared, but vote for the 'source-code-available' version. :-)

stay tuned
I do!

Best Regards,
Otto
 