... I experimented with setting the Users_Max to 9
--
You received this message because you are subscribed to the Google
Groups "Harbour Users" group.
Unsubscribe: harbour-user...@googlegroups.com
Web: http://groups.google.com/group/harbour-users
... I watched in the console as it reached 99 users, and on an attempt at a new connection the console and the letodbf server terminated.
... TABLES_MAX=128000 and I set it to 256000 while the USERS_MAX=1000.
Hi Mario,
... I watched in the console as it reached 99 users, and on an attempt at a new connection the console and the letodbf server terminated.
this worked for me -- and got feedback in "letodbf.log":
' ERROR configured mamimum number of users reached ... '
[ mamimum !! :-) -- excellent typo -- fixed in latest upload ... ]
... TABLES_MAX=128000 and I set it to 256000 while the USERS_MAX=1000.
Have just seen that working: with 250 users and 500 K (!) active tables,
used a modified "/tests/ron.prg" ..
[ -- and again, Ron got me: if the *same* table is opened more than 65535 times -- fixed! ]
a) There are security limits in Linux about max open files per process ( == all threads ). Show me:
ulimit -s
ulimit -n
ulimit -s = 8192
ulimit -n = 1024
I have e.g. in "/etc/security/limits.conf":
* soft nofile 4096
* hard nofile 131070
b) Each user connection is a THREAD; for this we have *per* thread:
- OS stacksize
- HVM + stacksize
- 64 KB send + receive buffer
- structures about user, tables, index, workareas, ...
** How much RAM does the server have ? **
Choose a task manager of your choice and check the usage of LetoDBf.
Background: LetoDBf uses hb_xgrab():
in case of failure to allocate RAM it exits with a non-recoverable error;
it then tries to write about this into the log -- which again needs a few bytes and a file to open.
That would be one possible explanation why you do not see 'leto_errInternal' in "letodbf.log".
[ Do you use Leto_Udf() functions ? -- only the REQUESTed ones in "server.prg" or the full command set ? ]
----
( This is not for production usage, adapted on the fly:
configured with #define at top for 110 users * 50 tables
hbmk2 ron letodb.hbc
ron IP-Address
ESC + ENTER to stop ... )
best regards
Rolf
--
...
ulimit -n = 1024
The server has 15 GB of available RAM.
--
...
Is it possible to run multiple instances of LetoDBf on one server pointing to the same DataPath?
--

--
Hi Mario,
so I assume it does not at any time show zero.
- it shows a good value until 8 hrs with 50+ connections, then drops to 0
Tell me about the behaviour of that value:
# it is constantly decreasing until it reaches '0',
# it jumps from a good value to '0'
Zero value sounds scary -- also that there is no log entry for connection in 'letodbf.log'.
Not allowed to open more files ?
What is shown in Linux (!) is the result of a query into /proc/meminfo:
in that file two lines are searched for, and then calculated:
MemFree: - Cached: = result
What is in those two lines when the console shows '0' ?
--

--
The pidof result is 5477.
--
I have run the test while the RAM in the console is at 0.
I got 5 as the result for wc -l, and lsof gave "permission denied".
it may be because the file '/proc/meminfo' cannot be opened.
...
"MemFree:" - "Cached:" == available RAM

--
Hi Rolf,
Sorry, my bad. The RAM size was not actually reduced to 20 GB.
Regards,
Mario
I also have this situation when I stop letodb and restart it: I need to run it multiple times, since it sometimes takes time to start:
--
--
so who then changed the debug level to '8' ? ;-)
I have checked my sources and nowhere do I find that I am setting the debug level to 8. I'm at a loss.
But ... didn't you have two 'letodb.ini', aka one for the SAMBA ?
...
Debug = 0
...
--