[LetoDBf] Encountered Server Crash


Mario H. Sabado

Feb 10, 2019, 10:46:30 AM
to 'elch' via Harbour Users
Hi Rolf,

I have recently experienced frequent LetoDBf server crashes.  I'm using
the latest version under Ubuntu 16.04.  The Default_Driver is NTX,
Share_Tables=1, No_Save_WA=1.  Users_Max was left commented out until I
reached 100 users.  When I changed it to 1000, that's when I started to
encounter the frequent abnormal terminations of the server.  I can't say
this is the culprit, so I looked at the other possible factors (RAM,
etc.) and adjusted them subsequently, but I still experience the
problem.  As an experiment I set Users_Max to 9 and made more than 9
connections to the server, but got no alert or error indicating that I
had already reached the maximum allowed users.  I tried to look at the
log file, but no relevant information was captured.

Any other hint what I might be doing wrong?

Thanks,
Mario

elch

Feb 10, 2019, 11:36:38 AM
to Harbour Users
Hi Mario,

... I experimented to set the Users_Max to 9

This would be without effect, as the allowed minimum in "server.prg" is: > 10 !
This value is responsible for the initial allocation of memory,
which does not grow or shrink during the server's lifetime.

Yes, it should be set to 'enough', as it is not really expected to be too low.
I will have a look at that ...

But that many users may exceed another limit; check also:
TABLES_MAX [ default is 999 ]

best regards
Rolf 

Mario H. Sabado

Feb 10, 2019, 6:20:29 PM
to harbou...@googlegroups.com
Hi Rolf,

Yes, my previous TABLES_MAX=128000 and I set it to 256000 while the USERS_MAX=1000.

Regards,
Mario
--
--
You received this message because you are subscribed to the Google
Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: http://groups.google.com/group/harbour-users

---
You received this message because you are subscribed to the Google Groups "Harbour Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to harbour-user...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Mario H. Sabado

Feb 10, 2019, 9:03:16 PM
to harbou...@googlegroups.com
Hi Rolf,

I have just encountered the termination of the Leto server again.  I commented out Users_Max to get the default of 99 users.  I watched in the console as it reached 99 users, and on an attempt at a new connection, the console and the letodbf server terminated.

Thanks,
Mario

elch

Feb 11, 2019, 3:07:48 PM
to Harbour Users
Hi Mario,

... I have watched in the console that it had reached the 99 users and when an attempt for new connection, the console and letodbf server terminates.
this worked for me -- and got feedback in "letodbf.log":
' ERROR configured mamimum number of users reached ... '

[ mamimum !! :-) -- excellent typo -- fixed in latest upload ... ]

 
... TABLES_MAX=128000 and I set it to 256000 while the USERS_MAX=1000.
I have just seen that working: with 250 users and 500 K ! active tables,
using a modified "/tests/ron.prg" ..
[ -- and again, Ron got me: if the *same* table is opened more than 65535 times -- fixed! ]

a) there are security limits in Linux,
about max open files per process( == all threads )
Show me: ulimit -s  // ulimit -n
I have e.g. in: "/etc/security/limits.conf"
* soft nofile 4096
* hard nofile 131070

b) each user connection is a THREAD, so we have *per* thread:
OS stacksize
HVM + stacksize
64 KB send + receive buffer
structures about user, tables, index, workareas, ...
...

** How much RAM does the server have ? **
Choose a task-manager of your choice and check the usage of LetoDBf.

Background: LetoDBf uses hb_xgrab():
on failure to allocate RAM it exits with a non-recoverable error;
it then tries to write this into the log -- which again needs a few bytes/ a file to open.
That would be one possible explanation why you do not see 'leto_errInternal' in "letodbf.log".

[ Do you use Leto_Udf() functions ? -- only the REQUESTed in "server.prg" or the full command set ? ]


----
( This is not for production usage, adapted on the fly:
 configured with #define at top for 110 users * 50 tables

hbmk2 ron letodb.hbc
ron IP-Address
ESC + ENTER to stop  ... )


best regards
Rolf
ron.zip

Mario H. Sabado

Feb 11, 2019, 5:59:59 PM
to harbou...@googlegroups.com
Hi Rolf,

Here is my result:

ulimit -s = 8192
ulimit -n = 1024

The server has 15GB of available RAM.

Thanks,
Mario


Mario H. Sabado

Feb 11, 2019, 6:37:56 PM
to 'elch' via Harbour Users
Hi Rolf,

For the TABLES_MAX, should I consider only the .dbf files and not .ntx?

I'm specifically calling Leto_UDF() functions in my code but I have Optimize=1 and ForceOpt=1 in my letodb.ini.

Thanks,
Mario

Mario H. Sabado

Feb 12, 2019, 12:28:52 AM
to 'elch' via Harbour Users
Hi Rolf,

This is to confirm that after your last update, I have already exceeded 100+ users without experiencing a server crash, with the following config:

USERS_MAX=999
TABLES_MAX=199999
Optimize=1
ForceOpt=1
ShareTables=1
No_Save_WA=1
Default_Driver=NTX

Many thanks for your usual support!

Regards,
Mario

Mario H. Sabado

Feb 12, 2019, 2:12:11 AM
to 'elch' via Harbour Users
Hi Rolf,

I ran your ron.prg and experimented with various values for THREADS, with TABLES=100.  The safe value that does not crash or hang up is THREADS=299.  I'll just set USERS_MAX=299 to be safe in my production.

What is the other option to increase the user connections?  Is it possible to run multiple instances of LetoDBf on one server pointing to the same DataPath?

Thanks,
Mario

elch

Feb 12, 2019, 2:28:06 AM
to Harbour Users
Hi Mario,

...
ulimit -n = 1024
 
one process (one LetoDBf server) is allowed to open 1024 files.
You should increase that for a DBF server with all the index files.
It will not crash the server, but the application cannot open a table/index once the limit is reached.
It is IMO done in "/etc/security/limits.conf" ...

 

The server have 15GB available RAM.
That should be enough -- I noticed ~ 4 GB in my largest stress tests,
and those were for many more files + users than you need.

---
"TABLES_MAX" is for DBF tables opened by *all* user in server mode "No_Save_WA = 1",
e.g. 100 user * 100 DBF need 10000.
[ 'opened' == actively used DBF in a workarea, e.g. with DbUseArea() ]
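[ Editor's note: the sizing rule above is just multiplication; a minimal sketch, with illustrative numbers (100 users, 100 tables each -- assumptions, not from any real config): ]

```shell
# Hedged sketch: estimate TABLES_MAX for server mode No_Save_WA = 1,
# where every user's open workarea counts separately.
users=100            # expected simultaneous connections (assumption)
tables_per_user=100  # DBFs each user keeps open at once (assumption)
tables_max=$((users * tables_per_user))
echo "TABLES_MAX should be at least: $tables_max"
```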

Index orders are handled dynamically -- they need no config option.

---
"Optimize = 1" is left from old times, just without effect in letodb.ini,
and related to rushmore index type ...

best regards
Rolf

Mario H. Sabado

Feb 12, 2019, 2:43:51 AM
to harbou...@googlegroups.com
Many thanks Rolf for this info and guidance.

Best regards,
Mario

elch

Feb 12, 2019, 2:48:16 AM
to Harbour Users
Hi Mario,

...
Is it possible to run multiple instance of LetoDBf in one server pointing to the same DataPath?
Surely!

But if they use the same IP address, they need to use different ports.
The first uses the default 2812, the next server(s) + 2 ! --> 2814, 2816

Or use multiple network cards, aka multiple IP addresses ...

The more open tables one LetoDBf server must handle,
the slower open/ close/ create DBF actions get.
Aka it needed a few seconds to DbUseArea() 500K tables by 250 users ;-)

best regards
Rolf

elch

Feb 12, 2019, 3:01:20 AM
to Harbour Users
Addendum, Mario

multiple servers can use the same server executable,
aka you just create a new "letodb2.ini" and use the 3rd parameter, e.g.:
letodb config letodb2
letodb stop letodb2
to start/ stop another server.

Mario H. Sabado

Feb 12, 2019, 3:20:11 AM
to harbou...@googlegroups.com
Hi Rolf,

Thanks for this hint!  At least I have an option if the need arises for the maximum possible users.

Best regards,
Mario

Mario H. Sabado

Feb 12, 2019, 3:22:57 AM
to 'elch' via Harbour Users
Great!  Thanks again Rolf!

Mario H. Sabado

Feb 20, 2019, 4:34:54 AM
to harbou...@googlegroups.com
Hi Rolf,

I have cases where I encounter 0 RAM as reflected in the letodbf console, but when I query the available RAM, I still have 16GB.  The total RAM of the Ubuntu 16 server running in a VM is 20GB.  The application can no longer connect to the leto server when this is encountered.  The failed connection was not in the log file, but the letodb process id is still active.  Any hint?

Thanks,
Mario




elch

Feb 20, 2019, 6:21:29 AM
to Harbour Users
Hi Mario,

so I assume it does not show zero at all times.
Tell me about the behaviour of that value:
# it is constantly decreasing until it reaches '0'
# it jumps from a good value to '0'

A zero value sounds scary -- as does the fact that there is no log entry for the connection in 'letodbf.log'.
Not allowed to open more files ?

What is shown under Linux ! is the result of a query of: /proc/meminfo
-- that file is searched for two lines, and then this is calculated:
MemFree: - Cached: = result
What is in those two lines when the console shows '0' ?

cat /proc/meminfo | grep MemFree: > mem.txt
cat /proc/meminfo | grep Cached: >> mem.txt

best regards
Rolf

Mario H. Sabado

Feb 20, 2019, 6:37:16 AM
to 'elch' via Harbour Users
Hi Rolf,



On Wed, Feb 20, 2019, 7:21 PM 'elch' via Harbour Users <harbou...@googlegroups.com wrote:
Hi Mario,

so I assume it does not show zero at all times.
- it shows a good value for up to 8 hrs with 50+ connections, then drops to 0

Tell me about the behaviour of that value:
# it is constantly decreasing until it reaches '0',
- this happens after prolonged use, 8+ hrs with 50+ connections

# it jumps from a good value to '0'
-yes

Zero value sounds scary -- also that there is no log entry for connection in 'letodbf.log'.
Not allowed to open more files ?
-yes

Shown in Linux ! is the result of a query into: /proc/meminfo
and in that file it is searched for two lines, and then calculated:
MemFree: - Cached: = result
What is in these both lines when console shows '0' ?
- plenty, i.e. 16 GB when running free, while the console value is 0

elch

Feb 20, 2019, 7:35:59 AM
to Harbour Users
Hi Mario,

zero [ '0' ] free RAM would be reported if e.g. LetoDBf can't open '/proc/meminfo'.

---
"pidof" can be used to find the ID of a process
"lsof" is a Linux tool, means: list opened files
"wc" counts words or lines

The above 'classic commands' should be available, and there are other approaches.
We want to inspect the opened files:

lsof -p $(pidof letodb)
--> lists all opened files of letodb

lsof -p $(pidof letodb) | wc -l
--> counts the files

Do not post the above lists, just have a look at them.
Is the number of open files constantly increasing ??
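[ Editor's note: the counting step can also be done without lsof, via /proc. A minimal sketch; it uses the current shell's PID ($$) as a stand-in, since a letodb process may not be running where you test this: ]

```shell
# Count open file descriptors of a process via /proc (Linux only).
# $$ is the current shell; replace with $(pidof letodb) for the server.
pid=$$
nfiles=$(ls "/proc/$pid/fd" | wc -l)
echo "PID $pid has $nfiles open file descriptors"
```

Watching this number over time shows the same leak that repeated `lsof | wc -l` calls would.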

best regards
Rolf

Ash

Feb 20, 2019, 7:36:53 AM
to Harbour Users
Hello Mario,

The culprit could be the VM.  Test your server on real hardware and see if you get the same results.

I have been using Synology DS713+ with 2GB of memory for over a year now and have never come across this issue.

Regards.
Ash

elch

Feb 20, 2019, 8:04:41 AM
to Harbour Users
addendum, Mario:

if:
pidof letodb
gives no result, search for letodb with:
ps aux | grep letodb
Then maybe a path is needed in front of 'letodb' ...

---
BTW, in the header of the 'console', the right value in the "Tables current" line
is 'Tables Max:';
that value is the very minimum needed for the 'letodb.ini' option: TABLES_MAX =
So you have ~ 100 tables per user open simultaneously, which is quite impressive !

best regards
Rolf

Mario H. Sabado

Feb 20, 2019, 8:40:49 AM
to harbou...@googlegroups.com
Thanks Rolf! 

Will try these tests and info gathering and give you feedback.

Best regards,
Mario

Mario H. Sabado

Feb 20, 2019, 2:13:36 PM
to 'elch' via Harbour Users
Hi Rolf,

The pidof result is 5477.

Regards,
Mario



elch

Feb 20, 2019, 2:45:28 PM
to Harbour Users
Hi Mario,


The pidof result is 5477.
:-)
please carefully ! re-read my question from 13:35;
the process-ID was just the base for what to do next -- at your place!

**
We wanted to know how many files the server has open at the moment
when free RAM is reported to be zero.
Not the value in the LetoDBf console, but what the Linux OS reports.
**

I have a slight 'suspicion' there:
that your UDF function opens a file which it does not close afterwards.
That would explain a lot -- or else it's an unknown bug in the LetoDBf server.

That was the point of my instruction:
to detect such a leak ... -- or to exclude a problem of this kind.

best regards
Rolf

Mario H. Sabado

Feb 20, 2019, 2:56:39 PM
to harbou...@googlegroups.com
Hi Rolf,

Sorry, it's 3am here and I'm still lost, hehe.  Will get back to you with more info.

Regards,
Mario

Mario H. Sabado

Feb 21, 2019, 2:38:10 AM
to harbou...@googlegroups.com
Hi Rolf,

I have run the test while the RAM in the console is at 0.

I get 5 as the result of wc -l, and lsof gives "permission denied".

Thanks,
Mario

elch

Feb 22, 2019, 11:45:55 AM
to Harbour Users
Hi Mario,


I have run the test while the RAM in console is at 0.
I have 5 result for wc -l and lsof have "permission denied"


then you perhaps need 'sudo', as the list of filenames is 'serious info',
aka try to execute those commands as super-user 'root' ...

---
Let me explain my 'suspicion':
if the LetoDBf console **suddenly** reports zero free RAM available,
it may be because the file '/proc/meminfo' can not be opened.

[ Harbour can't do that for Linux !; LetoDBf uses values in that file to calculate:
  "MemFree:" - "Cached:" == available RAM ]

As I am fairly sure the server closes all *registered* files when a connection ends,
aka all the DbUseArea(), Leto_FOpen(), etc.,
the question is up to you:
do you open any file in a UDF function and not close it ?

The Linux commands I showed should help to detect files which are still open
but expected to be closed.
And how many files are 'correctly' opened,
and whether the amount exceeds the recently discussed security limits allowed for one process.

best regards
Rolf

elch

Feb 22, 2019, 11:51:55 AM
to Harbour Users
it may be because the file: '/proc/meminfo' can not be opened.
...

  "MemFree:" - "Cached:" == available RAM
"MemFree:" + "Cached:" = available RAM

( the 'cached' is RAM used for caching e.g. files, given free on demand )
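[ Editor's note: the corrected calculation can be reproduced with a short awk one-liner over /proc/meminfo; a sketch of the idea only, not LetoDBf's actual code: ]

```shell
# Available RAM as the console estimates it: MemFree + Cached (values in kB).
memfree=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "available RAM: $(( (memfree + cached) / 1024 )) MB"
```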

Mario H. Sabado

Feb 22, 2019, 6:30:17 PM
to harbou...@googlegroups.com
Hi Rolf,

I started letodb with sudo.  When the RAM in the console is 0, the application can no longer connect to the server and falls back to direct access on the database.  There are no log entries, though, indicating a failed user connection.  When I disconnect an existing user during this state, the RAM suddenly increases in size.  When I run my application again, the RAM drops to 0 and it can no longer proceed with the letodb connection.  I added RAM, from 16GB to 24GB, on one of the servers encountering this, and it has now been running smoothly for almost 3 days with over 90 max connections, without yet experiencing the sudden drop in RAM size.  This is still being experienced, though, on one server with 16GB RAM.

Thanks,
Mario

Mario H. Sabado

Feb 22, 2019, 8:39:59 PM
to 'elch' via Harbour Users
Hi Rolf,

Let me add that, as of now, this only happens in a VM environment.  Other non-virtual installations with max 50 connections did not encounter this.

Thanks,
Mario

elch

Feb 22, 2019, 11:45:14 PM
to Harbour Users
Hi Mario,

do not run the LetoDBf server as sudo,
but the OS tools which show/ count the opened files.

Your description is another piece of evidence:
it is *not* the RAM, but the allowed number of open files per process.

You still have not shown the number of opened files;
we would like to see the number of opened files the Linux OS reports.
If the LetoDBf server works for a while, and then stops working,
a file is not closed when a connection ends -- which file ?

Search the web for:
'Linux opened files per process'
and you will find instructions on how to increase it.

best regards
Rolf

Mario H. Sabado

Feb 22, 2019, 11:52:25 PM
to 'elch' via Harbour Users
Hi Rolf,

I will get back to you on this with the data.  

Thanks,
Mario

elch

Feb 23, 2019, 12:21:54 AM
to Harbour Users
Hi Mario,

to repeat:

if the LetoDBf server works for a while, and then stops working,
a file is not closed when a connection ends -- which file ?

As I do not know your application, this info is best gathered by you,
as you will know where in your source such a file is used:
'not closed at connection end'.

Then I can help with why that specific file is not closed when the connection ends.
So I do not ! want a long list of open files; it does not help.

best regards
Rolf

Mario H. Sabado

Feb 23, 2019, 1:45:40 AM
to 'elch' via Harbour Users
Hi Rolf,

When I closed all connections from the console, I observed that one dbf file was still active.  Could this be the source of the problem in my case?

Thanks,
Mario

Mario H. Sabado

Feb 23, 2019, 3:49:49 AM
to 'elch' via Harbour Users
Hi Rolf,

This is one of my cases after closing all connections in the console.  In this case, there was no server termination and no new connections were rejected.  The ldbc.exe is the same as console.exe; I just renamed it to resemble the LetoDBf Console.

Regards,
Mario

Mario H. Sabado

Feb 24, 2019, 8:10:56 PM
to harbou...@googlegroups.com
Hi Rolf,

As the LetoDbf server has been running smoothly for more than 4 days after upgrading the memory from 16GB to 24GB, without experiencing the sudden drop in console RAM, I ran the ron.prg test on the other machine.  As you can see from the image, I captured the result of free while the RAM in the console is at 0, and the physical free memory is ~150MB.  Could this be the logical explanation of the problem?

After closing/terminating ron.prg, the console RAM jumps to ~7GB, but the free result is ~133MB?

lsof result:

wc -l result

When I run ron.prg again, the RAM in the console immediately drops to 0.  This can be resolved by restarting the letodb server; it gets back to normal until this limit is encountered again.

I also have instances where I stop letodb and restart, and need to run it multiple times, since it sometimes takes time to start:

Best regards,
Mario

Mario H. Sabado

Feb 24, 2019, 8:29:59 PM
to harbou...@googlegroups.com
Hi Rolf,

Just now I experienced the sudden termination of the letodbf server again, just by running the console program.  The abnormal termination is unpredictable, and I'm not even running my main application.  Our net admin reduced the RAM back to 20GB from 24GB, perhaps based on his resource monitoring showing that I'm only utilizing <20GB.  Now I have this problem back again after 4+ days non-stop of very good performance.



Best regards,
Mario

Mario H. Sabado

Feb 24, 2019, 9:33:34 PM
to harbou...@googlegroups.com

Hi Rolf,

Sorry, my bad.  The RAM size was not actually reduced to 20GB.

Regards,

Mario

elch

Feb 26, 2019, 6:32:53 AM
to Harbour Users
Hi Mario,

once more explained: RAM in Linux is used for [ securely ! ] caching files,
and all RAM lazily hanging around in the machine is used for that.

*Such used RAM is given back to applications [ LetoDBf ] when they request it.*
You buy RAM *when* nearly all of it is in use *and* only little is used for cache,
as that means applications need the RAM.
Having one GB ( or even more ) in use for caching is good for performance.

==> the LetoDBf console quite correctly shows *free* RAM


I also have this instance when I stop letodb and restart and need to run it multiple times since it sometimes takes time to run:

It is a well-known 'problem' in secure Linux
that a delay ! ( ~ 1 min ) is needed to re-start the server after a shutdown.

For this there is the bash script "leto.sh":
it waits until a restart is possible -- then ! it starts the server.
( It is easy to adapt/ extend -- and I may improve the example )

If you start the server as superuser 'root', newly created files are owned by 'root'
-- and can later be accessed only by administrators ( if handled manually, e.g. by an app not using the LetoDBf server ).
It can be done this way, but it is commonly not what people want.
For such a scenario, *if* the server is started by root, we have in 'letodb.ini':
"Server_User" or "Server_UID+GID" to set the user for the server in action.
Your net admin will like them ! ;-)


-----
You should increase the number of files a single process is allowed to open,
as earlier suggested, by putting two lines into the file:
/etc/security/limits.conf

*                soft    nofile          8192
root             soft    nofile          8192

After a server restart, the effect can be verified with:
ulimit -n

The above '8192' should be enough for:
1000 users  ( calculates * 2 == 2000 files )
1000 DBFs  ( in the directory ! )
5192 NTXs  ( in the directory ! )
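[ Editor's note: a minimal sketch for verifying the limit and reproducing the file budget above; the figures are the ones from this message: ]

```shell
# Check the per-process open-file limit, then compute the estimated need:
# 1000 users (* 2 sockets each) + 1000 DBFs + 5192 NTXs.
nofile=$(ulimit -n)
budget=$((1000 * 2 + 1000 + 5192))
echo "soft nofile limit: $nofile, estimated need: $budget"
```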


-----
My ugly ! *bug* which caused a steadily increasing number of open files
-- to be precise, they were 'zombie file-handles' left over after a socket failure --
seems found after an intense search.
I have just uploaded; it hopefully fixes some problems ... ;-)

best regards
Rolf

Mario H. Sabado

Feb 26, 2019, 7:03:11 AM
to 'elch' via Harbour Users
Thanks Rolf!  Will test this and give you feedback.

Best regards,
Mario


Mario H. Sabado

Feb 26, 2019, 11:28:08 AM
to 'elch' via Harbour Users
Hi Rolf,

Here's my observation with the latest letodbf and the ron.prg you provided.  Please note that I have set the open-files limit to 8192 as you suggested.

1.  When Users_Max is reached, no new connection is allowed by the LetoDBf server, and the server does not terminate
2.  When Tables_Max is exceeded, an open error is raised by ron.exe, and the LetoDBf server does not terminate

Thanks,
Mario

Ash

Feb 26, 2019, 1:36:12 PM
to Harbour Users
Hello Rolf,

Can you please add Users_Max to log file as you did for Tables_Max?

Regards.
Ash

Mario H. Sabado

Feb 26, 2019, 3:10:11 PM
to 'elch' via Harbour Users
Hi Rolf,

I am also requesting, if possible, a column in the console for the name of the logged-in account.  As in the case of a Remote Terminal server, the IP address and workstation name are the same for all connected users, and it would be helpful to know the profile name.

Thanks,
Mario

Ash

Feb 26, 2019, 3:38:53 PM
to Harbour Users
Hello Rolf,

The Console is not respecting the debug level set in letodb.ini.

'Debug = 0' in letodb.ini;
8 reported by the Console.

Regards.
Ash

elch

Feb 27, 2019, 11:55:14 AM
to Harbour Users

--- Mario ---
you can rock'n'roll in the browses ;-) -- meaning there are a few more columns.

The rightmost column in the "Connections" browse is the user-name:
that is the second parameter of Leto_Connect( .. , cUsername, ... ).
"cUsername" can be given to Leto_Connect() without active user-management
-- more info about the connection is not available, so did you mean that ?

If your screen is large enough, the 'user-name' column may be viewable without scrolling
when you stretch ( grab a window edge ) or maximize ( button in the title-bar ) the console window.
That works in my test environment with a 1600x1200 screen ...


--- Ash ---
"MAX_USERS" feedback along server-start will be included in next ! upload.
Even i guess that many LetoDBf admin are happy with the defaults:
*simultaneous* 99 logged-in user with each 10 active opened workareas.
Mario so far seem the first one who hit the user limit ...

LetoDbf console *query the server* for active 'Debug-Level',
it does not look for that into 'letodb.ini'.
The LetoDBf server set the debug-level out of 'letodb.ini' when it starts,
so who changed when the debug level to '8' ? ;-)


best regards
Rolf

Mario H. Sabado

Feb 27, 2019, 2:07:27 PM
to 'elch' via Harbour Users
Hi Rolf,

Thanks for the hint.  Will add this to the parameter.

Best regards,
Mario

Ash

Feb 28, 2019, 7:44:06 AM
to Harbour Users
Hello Rolf,

so who changed when the debug level to '8' ? ;-)

I have checked my sources, and nowhere am I setting the debug level to 8.  I'm at a loss.

Using 2019-02-26 12:09 UTC+0100 Rolf 'elch' Beckmann (elchs users.noreply.github.com)

Regards.
Ash

elch

Feb 28, 2019, 9:06:32 AM
to Harbour Users
Hello Ash,



I have checked my sources and find nowhere I am setting debug level to 8. I'm at a loss.

there are only two occasions in the LetoDBf server:
# LETO_SETAPPOPTIONS()
that is at server start ...

# RDDI_DEBUGLEVEL
used as: RddInfo( RDDI_DEBUGLEVEL, nLevel ), e.g. in 'console.prg' ...

But ... don't you have two 'letodb.ini', aka one for the SAMBA ?

best regards
Rolf

elch

Feb 28, 2019, 9:14:08 AM
to Harbour Users
addendum, Ash

But ... didn't you have two 'letodb.ini', aka one for the SAMBA ?
 
and the LetoDBf server running on Linux looks first in "/etc" for a config file;
only if it is not ! found there is the directory with the executable searched.

Ash

Feb 28, 2019, 11:39:55 AM
to Harbour Users
Hello Rolf,

I am not using SAMBA for the time being in my development environment.

I have checked the whole server.  There is only one letodb.ini file, in the /etc folder, with the following contents.

[Main]
DataPath = /volume1/bms/comp
LogPath = /var/log/letodb         
EnableFileFunc = 1
EnableUDF = 1 
Cache_Records = 30
Debug = 0
TimeOut = -1
No_Save_WA = 1
Share_Tables = 1
Users_Max = 32
Tables_Max = 1999

The last two entries were added recently to control the number of users.  Each instance of my application opens around 50 tables, and each user could open a number of instances of it (up to four, generally) on a workstation.

Regards.
Ash

elch

Feb 28, 2019, 1:01:56 PM
to Harbour Users
Hello Ash,

THANKS for your patience,
there was one silly single EXIT missing in the CASE switch, exactly there.
Damn ! ... -- just uploaded 'the fix'.

---
Limiting max users to '32' makes no big difference in RAM usage versus leaving it at the default max of '99',
but you can do so.

---
BTW, on 2019-02-04 I added the new [ old ] option in letodb.ini:
CRYPT_TRAFFIC = 1

I had this in mind especially for the scenario where the server is accessible over a *public* network.
The note in the 'ChangeLog.txt' from that day, plus the Readme.txt: 4.5 Security
-- there the 2nd chapter, about the [ default ] hard-coded password --
may be of interest to you.
It may possibly decrease performance by a very few single %, but it hardens the server.
All client applications that are not yours !, and any not using encryption, are barred from the server.

So you may like to give it a practical test ...

best regards
Rolf

elch

Mar 1, 2019, 10:20:22 AM
to Harbour Users
Hello Ash,

is the problem with the 'Debug-Level' fixed ?
Because when I look again at your 'letodb.ini', the bug should have set it to '32', not '8'.

best regards
Rolf

...
Debug = 0
...

Ash

Mar 1, 2019, 12:21:53 PM
to Harbour Users
Hello Rolf,

Test results with the following settings in letodb.ini.

[Main]
...
Debug = 0
...
Users_Max = 8
Tables_Max = 1999

Log File

03.01.2019 12:08:43 INFO: LetoDBf Server 3.00, will run at port :2812 ( internal also used :2813 )
03.01.2019 12:08:43 INFO: DataPath=/volume1/bms/comp, ShareTables=1, NoSaveWA=1, Max Tables=1999
03.01.2019 12:08:43 INFO: LoginPassword=0, CacheRecords=30, LockExtended=0, Max Users=99

Console shows debug level correctly set to 0.

Regards.
Ash

elch

Mar 1, 2019, 1:24:02 PM
to Harbour Users
Hello Ash,

aha, there is the 'eight' ... ;-)

That user limit is by intention, as given in server.prg:
"IF nTmp >= 9 .AND. nTmp < 65535"
else the default 99 remains active.

The already updated 'Readme.txt' will follow soon;
I was recently working on a different topic and that description isn't ready ...
best regards
Rolf

Ash

Mar 2, 2019, 3:28:57 PM
to Harbour Users
Hello Rolf,

Console does not show the correct number of users connected.

╔═[ Statistics ]═══════════════════════╦
║ Server MB disk:  800932 RAM:     677 ║
║ Users  current:       1 Max:       8 ║
║ Tables current:       0 Max:     298 ║
║ Indexs current:       0 Max:     646 ║
║ Transact   All:       0 Bad:       0 ║
╚══════════════════════════════════════╩

Opening my application many times does not increase the highlighted counters above.  However, when I invoke another instance of the Console, the user count goes to 2.

Regards.
Ash

Mario H. Sabado

Mar 2, 2019, 8:34:00 PM
to harbou...@googlegroups.com
Hi Ash,

Did you verify in your application that leto_connect() was successful?  In my case, if the max allowed connections has been reached, it defaults to direct file access.

Regards,
Mario

Ash

Mar 3, 2019, 11:29:31 AM
to Harbour Users
Please ignore this post. Problem appears to be elsewhere.

Regards.
Ash