Hi,
I have a couple of doubts about a recommendation from IBM related to the
NETTYPE parameter:
NETTYPE protocol,poll_threads,connections,VP_class
** The first one is about "connections". Is this per poll thread or
for all connections?
For example, I set:
NETTYPE soctcp,6,500,NET
and
NETTYPE soctcp,1,500,NET
In both cases, onstat -g ntd shows the same pair of values in the
q-limits column:
450/ 10
This suggests that "connections" is the total for all connections and
not per poll thread, but the performance guide indicates it is per poll
thread. Which is correct?
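Just to make my doubt concrete (this is only my reading of the two
possibilities, not something taken from the docs):

  If "connections" is per poll thread:
    NETTYPE soctcp,6,500,NET  ->  up to 6 x 500 = 3000 connections
    NETTYPE soctcp,1,500,NET  ->  up to 1 x 500 =  500 connections
  If "connections" is the total:
    both settings             ->  up to 500 connections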
** The second one is about the recommendation (from the performance
guide) of "do not exceed NUMCPUVPS" for the poll_threads parameter.
In my case, I have NUMCPUVPS=3 and poll_threads=6.
Are there any possible problems with this configuration?
Thanks in advance
Sources:
Reference:
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.adref.doc/adref122.htm
Performance guide:
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.perf.doc/perf83.htm
Looks like "an older engine" :P (Early 10? 9.40?)
Version of Engine and O/S?
1. There was a defect where the 3rd parameter was not multiplied by the 2nd parameter ... that is the defect you are seeing
regarding the "450" value not changing.
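Roughly what you would expect without the defect (the 450 is taken from
your output; the scaling by poll threads is the part the defect breaks):

  NETTYPE soctcp,1,500,NET  ->  q-limits  450/ 10
  NETTYPE soctcp,6,500,NET  ->  q-limits 2700/ 10   (450 x 6)

With the defect, both settings report 450/ 10.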
2. Really, a single poll thread should be able to handle up to about 500 connections. How many persistent connections do you have?
IF you have a very high number of connects coming in (more than 10 to 20 a second), then having more poll threads will help.
As regards "not exceeding", I suppose it would be based on "can the engine CPU VPs deal with everything the poll threads (and
listener) are throwing at it?"
Proof will be in testing and production obviously.
Yes, I have an old engine (7.31). O/S = HP-UX 11.11
The upgrade is not an option.
1. Are you saying that was a bug? Do you remember in what version
this defect was fixed?
2. I have about 900 connections, because I have a legacy system that
does not use connection pooling.
onstat -g ntd shows high values in the q-exceed column: 3186/ 671
Furthermore, I see high CPU usage in the PIDs of the soc virtual
processors:
CPU TTY   PID USERNAME PRI NI  SIZE   RES STATE   TIME  %WCPU  %CPU COMMAND
  0   ? 14209 informix 180 20 2720M 1312K run   149:09  46.82 46.74 oninit
  2   ? 14208 informix 180 20 2720M 1312K sleep 180:03  34.02 33.96 oninit
..
If I decrease the number of poll threads, will this CPU usage increase?
Thanks for your help
Well, you could try NETTYPE soctcp,2,1000 and that q/exceed should disappear.
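Something like this, keeping your NET VP class (the values are only a
suggestion to try):

  NETTYPE soctcp,2,1000,NET   # 2 poll threads, 1000 connections each

then, after a restart, watch the q-exceed column of onstat -g ntd; it
should stop growing.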
High CPU usage could reflect a lot of data being transferred from server to client, or perhaps the allocation / deallocation of
net_pvt_free (i.e. when you hit a queue exceed, memory is allocated and then potentially de-allocated).
As regards the defect, it was addressed in later 9.40.FC7 (or 8?) and 10.00.FC4 (or 5?). I doubt whether it was addressed in 7 :(
OK, thanks.
If I increase the number of connections, are there issues with the global
pool?
I'm researching a serious problem with an instance. The log shows
several alerts, and I suspect the network connections related to the
NETTYPE parameter and the fragments in the global pool.
As I said before, the upgrade is not an option (current version =
7.31).
Example:
11:20:27 Found during mt_shm_free_pool 3
11:20:27 Pool '°(' (0xc0000000a4693028)
11:20:27 Bad free block 0xc0000000aba8f028
...
base: 0xc0000000a493d000
len: 49152
pc: 0x0000000000000000
tos: 0xc0000000a493ebe0
state: running
vp: 4
...
( 0) 0x4000000000507f90 legacy_hp_afstack + 0x250 [/opt/7.31FD3W1/bin/oninit]
( 1) 0x40000000005074c0 afstack + 0x78 [/opt/7.31FD3W1/bin/oninit]
( 2) 0x400000000050689c afhandler + 0x61c [/opt/7.31FD3W1/bin/oninit]
( 3) 0x40000000005061a0 affail_interface + 0x58 [/opt/7.31FD3W1/bin/oninit]
( 4) 0x40000000004fa98c recover_pool_bad_free_block + 0x354 [/opt/7.31FD3W1/bin/oninit]
( 5) 0x40000000004f65b8 mt_shm_free_pool + 0x420 [/opt/7.31FD3W1/bin/oninit]
( 6) 0x40000000004e7940 destroy_session + 0x2c0 [/opt/7.31FD3W1/bin/oninit]
( 7) 0x4000000000133e70 sqsetconerr + 0x48 [/opt/7.31FD3W1/bin/oninit]
( 8) 0x400000000023a5a8 asf_recv + 0xa0 [/opt/7.31FD3W1/bin/oninit]
( 9) 0x400000000023a624 _iread + 0x64 [/opt/7.31FD3W1/bin/oninit]
(10) 0x4000000000239794 _igetint + 0x1c [/opt/7.31FD3W1/bin/oninit]
(11) 0x40000000002346f4 sqmain + 0x3ac [/opt/7.31FD3W1/bin/oninit]
(12) 0x40000000004e3014 startup + 0xd4 [/opt/7.31FD3W1/bin/oninit]
(13) 0x40000000004fc5fc resume + 0x10c [/opt/7.31FD3W1/bin/oninit]
Thanks for your help
As per http://www-01.ibm.com/support/docview.wss?uid=swg21250847 you
can set
IFX_NETBUF_PVTPOOL_SIZE to increase the private network buffer pool
for each session,
and IFX_NETBUF_SIZE to increase the size of the individual network
buffers.
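For example (the values below are only placeholders; both are environment
variables set for the user that starts the server, before bringing the
instance online):

  IFX_NETBUF_PVTPOOL_SIZE=2   # buffers in each session's private network pool
  IFX_NETBUF_SIZE=8192        # size of each network buffer
  export IFX_NETBUF_PVTPOOL_SIZE IFX_NETBUF_SIZE
  # restart the instance so oninit picks them up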