Issue 116 in lusca-cache: Huge memory usage (cbdata clientHttpRequest (1013))


codesite...@google.com

Jun 22, 2010, 9:03:14 AM
to lusc...@googlegroups.com
Status: New
Owner: ----
Labels: Type-Defect Priority-Medium Version-1.0

New issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

We have Lusca running with tproxy, but its memory usage grows over time:


12251 squid 20 0 7456m 6.2g 1240 R 23 19.7 10602:48 0 1.1g squid
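A quick way to quantify "grows over time" is to log the process RSS at a fixed interval and line the jumps up against cache.log later. A minimal sketch, assuming a Linux procps environment and a one-minute interval (both assumptions, not from the report):

```shell
# Sample the squid process RSS once a minute; exits cleanly if squid
# isn't running. Output: timestamp followed by resident set size in KB.
pid=$(pidof squid 2>/dev/null | awk '{print $1}')
while [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; do
    printf '%s %s KB\n' "$(date '+%F %T')" "$(ps -o rss= -p "$pid" | tr -d ' ')"
    sleep 60
done
```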

Cachemgr memory stats:


Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB)
high (KB) (%num) (number)
2K Buffer (no-zero) 2048 346281 692562 692606 0.01 10
346281 692562 692606 100 3680052033
4K Buffer (no-zero) 4096 8660 34640 89160 36.48 0 8660
34640 89160 100 529606060
8K Buffer (no-zero) 8192 65 520 1920 36.51 0 65 520
1920 100 258375799
16K Buffer (no-zero) 16384 4 64 560 640.20 0 4 64 560
100 56383
32K Buffer (no-zero) 32768 1 32 288 70.83 0 1 32 288
100 1487
64K Buffer (no-zero) 65536 0 0 192 523.59 0 0 0 192
-1 20
Short Strings (no-zero) 36 4122237 144923 145296 1.64 2
4122237 144923 145296 100 46889689188
Medium Strings (no-zero) 128 980440 122555 122643 0.23 2
980440 122555 122643 100 7612882039
Long Strings (no-zero) 512 370338 185169 185300 0.34 3
370338 185169 185300 100 1519438858
event 48 10 1 9 665.43 0 10 1 9 100 96444999
close_handler 24 10346 243 725 17.08 0 10346 243 725
100 2253217601
acl 64 11 1 1 1078.14 0 11 1 1 100 22
acl_ip_data 24 18 1 1 1077.99 0 18 1 1 100 28
acl_list 24 21 1 1 1078.14 0 21 1 1 100 42
dwrite_q 48 0 0 1 1078.14 0 0 0 1 -1 404230713
FwdServer 24 850 20 218 662.27 0 850 20 218 100
706082101
HttpReply 168 25755 4226 5512 662.27 0 25755 4226
5512 100 2242148024
mem_node (no-zero) 4112 65724 263923 266939 68.58 4
65724 263923 266939 100 5454473575
StoreEntry 88 1353897 116351 137405 1025.54 2 1353897
116351 137405 100 832136272
MemObject 272 25179 6689 8796 662.27 0 25179 6689
8796 100 1171346199
netdbEntry 104 902 92 102 1077.92 0 902 92 102 100
7388150
net_db_name 32 80564 2518 2690 9.57 0 80564 2518
2690 100 12030746
request_t 1360 508580 675458 675551 0.08 10 508580
675458 675551 100 1064254693
ClientInfo 352 2870 987 1012 133.09 0 2870 987 1012
100 98686
storeSwapLogData 72 0 0 1 1078.14 0 0 0 1 -1 404230713
buf_t 80 0 0 2 785.82 0 0 0 2 -1 4552050534
AUFS IO State data 48 223 11 42 36.49 0 223 11 42 100
568372945
AUFS Queued read data 64 0 0 7 43.47 0 0 0 7 -1 365724429
AUFS Queued write data 56 0 0 320 12.01 0 0 0 320 -1
883954526
aio_ctrl 104 0 0 33 86.13 0 0 0 33 -1 3610297643
wordlist 16 8 1 1 1078.14 0 8 1 1 100 22
cbdata acl_address (1001) 48 1 1 1 1078.14 0 1 1 1 100 2
intlist 16 1 1 1 1078.14 0 1 1 1 100 2
cbdata acl_access (1002) 56 17 1 1 1078.14 0 17 1 1 100 34
cbdata http_port_list (1003) 136 4 1 1 1077.99 0 4 1 1 100 6
LRU policy node 24 1376148 32254 37867 1025.62 0 1376148
32254 37867 100 436791601
cbdata RemovalPolicy (1004) 104 2 1 1 1078.14 0 2 1 1 100 2
cbdata body_size (1005) 64 3 1 1 1078.14 0 3 1 1 100 6
ipcache_entry 128 994 125 172 643.14 0 994 125 172 100
35484162
fqdncache_entry 160 3 1 1 1078.14 0 3 1 1 100 12
cbdata idns_query (1006) 8680 0 0 5451 741.76 0 0 0 5451 -1
35484156
HttpHeaderEntry 40 4785813 186946 187194 0.23 3 4785813
186946 187194 100 40269498432
HttpHdrRangeSpec 16 4 1 10 159.53 0 4 1 10 100 27246157
HttpHdrRange 16 4 1 4 117.71 0 4 1 4 100 25838805
HttpHdrContRange 24 30 1 7 131.01 0 30 1 7 100 44247402
HttpHdrCc 40 126902 4958 5087 11.58 0 126902 4958
5087 100 2275902426
MD5 digest 16 1353897 21155 24983 1025.54 0 1353897
21155 24983 100 1094434172
aio_thread 40 16 1 1 1078.14 0 16 1 1 100 16
aio_request 96 0 0 31 86.13 0 0 0 31 -1 3610297643
cbdata RebuildState (1010) 112 0 0 1 1078.14 0 0 0 1 -1 1
pconn_data 32 1861 59 162 36.55 0 1861 59 162 100
313888294
pconn_fds 32 1859 59 162 36.55 0 1859 59 162 100
313888294
cbdata ConnStateData (1012) 336 8660 2842 7313 36.48 0 8660
2842 7313 100 430693723
cbdata clientHttpRequest (1013) 1136 3968488 4402542 4402606 0.01
63 3968488 4402542 4402606 100 1064240401
cbdata aclCheck_t (1014) 352 1 1 1 1078.13 0 1 1 1 100
4242619218
cbdata store_client (1015) 152 955 142 1396 662.27 0 955 142
1396 100 1308634079
cbdata RemovalPurgeWalker (1016) 72 0 0 1 1078.00 0 0 0 1 -1
7247373
cbdata storeIOState (1017) 136 223 30 118 36.49 0 223 30 118
100 568372945
cbdata FwdState (1018) 112 850 93 1014 662.27 0 850 93 1014
100 706071291
cbdata ps_state (1019) 200 0 0 1 1077.98 0 0 0 1 -1 706076957
cbdata ConnectStateData (1020) 96 135 13 771 662.27 0 135 13
771 100 402446406
cbdata generic_cbdata (1021) 32 138 5 25 741.76 0 138 5 25
100 411905923
cbdata HttpStateData (1022) 136 116998 15539 15562 12.46 0
116998 15539 15562 100 704165180
cbdata LocateVaryState (1023) 144 0 0 3 143.09 0 0 0 3 -1
40850058
VaryData 32 3604 113 114 0.70 0 3604 113 114 100
40850058
cbdata ErrorState (1024) 160 392155 61275 61282 0.12 1 392155
61275 61282 100 19277707
cbdata AddVaryState (1025) 160 0 0 2 16.35 0 0 0 2 -1 14487674
cbdata Logfile (1026) 4192 0 0 5 1077.47 0 0 0 5 -1 1
cbdata clientAsyncRefreshRequest (1027) 88 0 0 1 499.18 0 0 0
1 -1 152
cbdata SslStateData (1028) 120 0 0 2 20.01 0 0 0 2 -1 5666
cbdata RemovalPolicyWalker (1029) 56 0 0 1 1020.70 0 0 0 1
-1 44
Total 20042750 6979130 6996814 13.19 100 20042750
6979130 6996814 100 144269531012
Cumulative allocated volume: 46.79 TB
Current overhead: 13748 bytes (0.000%)
Idle pool limit: 5.00 MB
memPoolAlloc calls: -1759357052
memPoolFree calls: -1779399803
String Pool Impact
(%strings) (%volume)
Short Strings 74 25
Medium Strings 18 21
Long Strings 7 32
Other Strings 2 21

Large buffers: 0 (0 KB)


codesite...@google.com

Jun 22, 2010, 9:10:22 AM
to lusc...@googlegroups.com

Comment #1 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Forgot to add some details:

openSUSE 11.2 - 64Bit

proxy:~ # uname -a
Linux proxy 2.6.31.12-0.2-default #1 SMP 2010-03-16 21:25:39 +0100 x86_64
x86_64 x86_64 GNU/Linux
proxy:~ # squid -v
Squid Cache: Version LUSCA_HEAD-r14535

codesite...@google.com

Jul 5, 2010, 2:02:31 PM
to lusc...@googlegroups.com

Comment #2 on issue 116 by arewall: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

It's a long shot, but this may be related to a bug that I recently posted
on the Squid bugzilla system (seemingly caused by HTTP requests that do not
contain a path element):
* http://bugs.squid-cache.org/show_bug.cgi?id=2973

Check your logs for TCP_DENIED messages and see if they correspond to the
sudden increase in Squid / Lusca memory use.

I'd be interested to know if it is the same problem.
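One way to check that correlation is to bucket the TCP_DENIED entries by minute and compare the buckets against the times the memory jumped. A hedged sketch, assuming the native access.log format (epoch timestamp in field 1, result code in field 4); the log path and the demo lines below are illustrative only:

```shell
# Count TCP_DENIED hits per one-minute bucket (bucket key = epoch
# seconds rounded down to the minute).
bucket_denied() {
    awk '$4 ~ /^TCP_DENIED/ { print int($1 / 60) * 60 }' "$@" | sort -n | uniq -c
}

# Demo on fabricated sample lines; on a live proxy run e.g.:
#   bucket_denied /var/log/squid/access.log
printf '%s\n' \
  '1282600000.123 10 10.0.0.1 TCP_DENIED/403 0 GET http://example.com/ - NONE/- -' \
  '1282600001.500 12 10.0.0.1 TCP_MISS/200 515 GET http://example.com/a - DIRECT/192.0.2.1 -' |
  bucket_denied
```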

codesite...@google.com

Jul 8, 2010, 3:23:36 AM
to lusc...@googlegroups.com
Updates:
Labels: -Priority-Medium -Version-1.0 Priority-Critical Version-Head

Comment #3 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Thank you for bringing this to my attention. I've fixed this in r14723.
Please try it and let me know!


codesite...@google.com

Jul 8, 2010, 3:27:44 AM
to lusc...@googlegroups.com
Updates:
Status: Started
Owner: adrian.chadd

Comment #4 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

(No comment was entered for this change.)

codesite...@google.com

Aug 8, 2010, 9:59:29 AM
to lusc...@googlegroups.com

Comment #5 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

renato, have you verified the latest LUSCA_HEAD fixes the memory leak?

codesite...@google.com

Aug 20, 2010, 3:26:46 PM
to lusc...@googlegroups.com

Comment #6 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

No, it's still leaking. I will post the version and cachemgr output later.

codesite...@google.com

Aug 20, 2010, 3:34:47 PM
to lusc...@googlegroups.com

Comment #7 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Adrian,

Sorry for the slow response. It's still leaking in the following version;
only the cbdata type code changed from 1013 to 1014...


proxy:~ # squid -v
Squid Cache: Version LUSCA_HEAD-r14733
configure
options: '--prefix=/usr' '--sysconfdir=/etc/squid' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--localstatedir=/var' '--libexecdir=/usr/sbin' '--datadir=/usr/share/squid' '--libdir=/usr/lib' '--enable-largefiles' '--with-maxfd=65536' '--with-default-user=squid' '--enable-storeio=aufs' '--enable-disk-io=AIO,Blocking,DiskDaemon,DiskThreads' '--enable-removal-policies=heap,lru' '--enable-icmp' '--enable-linux-tproxy4' '--enable-snmp'


proxy:~ # squidclient mgr:mem
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD-r14733
Date: Fri, 20 Aug 2010 18:51:46 GMT
Content-Type: text/plain
Expires: Fri, 20 Aug 2010 18:51:46 GMT
X-Cache: MISS from proxy.itake.com.br
Via: 1.0 proxy.itake.com.br:3129 (Lusca/LUSCA_HEAD-r14733)
Connection: close

Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB)
high (KB) (%num) (number)
2K Buffer (no-zero) 2048 2 4 1824 44.38 0 2 4 1824
100 1363980963
4K Buffer (no-zero) 4096 12787 51148 91952 187.85 1
12787 51148 91952 100 206672603
8K Buffer (no-zero) 8192 67 536 1480 44.38 0 67 536
1480 100 96770213
16K Buffer (no-zero) 16384 1 16 288 92.91 0 1 16 288
100 36019
32K Buffer (no-zero) 32768 1 32 96 262.93 0 1 32 96
100 271
64K Buffer (no-zero) 65536 0 0 64 196.91 0 0 0 64 -1 2
Short Strings (no-zero) 36 2379127 83642 85951 1.27 1
2379127 83642 85951 100 17090917365
Medium Strings (no-zero) 128 386717 48340 48484 0.27 1
386717 48340 48484 100 2796019360
Long Strings (no-zero) 512 137155 68578 68947 1.20 1
137155 68578 68947 100 559146338
event 48 9 1 3 100.96 0 9 1 3 100 26854698
close_handler 24 15895 373 663 187.84 0 15895 373 663
100 840799863
acl 64 11 1 1 269.28 0 11 1 1 100 11
acl_ip_data 24 14 1 1 269.28 0 14 1 1 100 14
acl_list 24 21 1 1 269.28 0 21 1 1 100 21
dwrite_q 48 0 0 1 269.28 0 0 0 1 -1 116084250
FwdServer 24 1580 38 99 100.96 0 1580 38 99 100
264002569
HttpReply 168 19673 3228 4688 171.92 0 19673 3228
4688 100 818677672
mem_node (no-zero) 4112 65669 263703 267509 91.90 4
65669 263703 267509 100 1919711465
StoreEntry 88 1229505 105661 129365 267.76 2 1229505
105661 129365 100 309494647
MemObject 272 18678 4962 7313 171.93 0 18678 4962
7313 100 422654029
netdbEntry 104 963 98 102 269.13 0 963 98 102 100
2944683
net_db_name 32 27208 851 1066 112.36 0 27208 851
1066 100 4706026
request_t 1360 313885 416879 416964 0.01 7 313885
416879 416964 100 391100141
ClientInfo 352 3120 1073 1091 67.45 0 3120 1073 1091
100 24588
storeSwapLogData 72 0 0 1 269.28 0 0 0 1 -1 116084250
buf_t 80 0 0 1 2.48 0 0 0 1 -1 1357228015
AUFS IO State data 48 321 16 40 260.39 0 321 16 40 100
181397983
AUFS Queued read data 64 1 1 6 19.63 0 1 1 6 100 124502703
AUFS Queued write data 56 0 0 146 115.10 0 0 0 146 -1
280635361
aio_ctrl 104 1 1 41 44.38 0 1 1 41 100 1160745460
wordlist 16 8 1 1 269.28 0 8 1 1 100 11
cbdata acl_address (1001) 48 1 1 1 269.28 0 1 1 1 100 1
intlist 16 1 1 1 269.28 0 1 1 1 100 1
cbdata acl_access (1002) 56 17 1 1 269.28 0 17 1 1 100 17
cbdata http_port_list (1003) 136 3 1 1 269.28 0 3 1 1 100 3
LRU policy node 24 1231857 28872 35634 197.72 0 1231857
28872 35634 100 140936958
cbdata RemovalPolicy (1004) 104 2 1 1 269.28 0 2 1 1 100 2
cbdata body_size (1005) 64 3 1 1 269.28 0 3 1 1 100 3
ipcache_entry 128 1124 141 199 198.59 0 1124 141 199
100 13854896
fqdncache_entry 160 3 1 1 269.28 0 3 1 1 100 6
cbdata idns_query (1006) 8680 0 0 9715 198.59 0 0 0 9715 -1
13854893
HttpHeaderEntry 40 2512704 98153 100169 1.19 2 2512704
98153 100169 100 14693352533
HttpHdrRangeSpec 16 9 1 5 49.05 0 9 1 5 100 15867412
HttpHdrRange 16 9 1 3 48.95 0 9 1 3 100 15387325
HttpHdrContRange 24 96 3 9 261.15 0 96 3 9 100 27848871
HttpHdrCc 40 49500 1934 2196 4.07 0 49500 1934
2196 100 843478615
MD5 digest 16 1229505 19212 23521 267.76 0 1229505
19212 23521 100 402241344
aio_thread 40 16 1 1 269.28 0 16 1 1 100 16
aio_request 96 1 1 38 44.38 0 1 1 38 100 1160745460
cbdata RebuildState (1010) 112 0 0 1 269.28 0 0 0 1 -1 1
pconn_data 32 3001 94 173 43.50 0 3001 94 173 100
115690712
pconn_fds 32 2998 94 173 43.50 0 2998 94 173 100
115690712
cbdata ConnStateData (1012) 336 12775 4192 7504 187.85 0 12775
4192 7504 100 166136683
cbdata RemovalPurgeWalker (1013) 72 0 0 1 269.28 0 0 0 1 -1
1842519
cbdata clientHttpRequest (1014) 1136 4536862 5033082 5033132 0.00
80 4536862 5033082 5033132 100 391098464
cbdata aclCheck_t (1015) 352 1 1 1 269.16 0 1 1 1 100
1566905793
cbdata ErrorState (1016) 160 157023 24535 24536 0.00 0 157023
24535 24536 100 8983102
cbdata store_client (1017) 152 1791 266 649 100.96 0 1791 266
649 100 473871420
cbdata storeIOState (1018) 136 321 43 112 260.39 0 321 43 112
100 181397983
cbdata FwdState (1019) 112 1580 173 460 100.96 0 1580 173 460
100 263780534
cbdata ps_state (1020) 200 0 0 1 269.16 0 0 0 1 -1 264001127
cbdata ConnectStateData (1021) 96 215 21 296 100.96 0 215 21
296 100 144145152
cbdata generic_cbdata (1022) 32 208 7 42 198.59 0 208 7 42
100 216378428
cbdata HttpStateData (1023) 136 157355 20899 20907 0.04 0 157355
20899 20907 100 263574171
cbdata LocateVaryState (1024) 144 0 0 2 2.27 0 0 0 2 -1
14017539
VaryData 32 412 13 25 22.34 0 412 13 25 100
14017539
cbdata AddVaryState (1025) 160 0 0 2 52.71 0 0 0 2 -1 5341040
cbdata SslStateData (1026) 120 0 0 3 188.58 0 0 0 3 -1 220593
cbdata Logfile (1027) 4192 0 0 5 268.62 0 0 0 5 -1 1
cbdata clientAsyncRefreshRequest (1028) 88 0 0 1 266.52 0 0 0
1 -1 22
cbdata RemovalPolicyWalker (1029) 56 0 0 1 250.86 0 0 0 1 -1
11
Total 14511812 6280906 6290533 0.05 100 14511812
6280906 6290533 100 51985853497
Cumulative allocated volume: 16.87 TB
Current overhead: 13748 bytes (0.000%)
Idle pool limit: 5.00 MB
memPoolAlloc calls: 446245945
memPoolFree calls: 431734132
String Pool Impact
(%strings) (%volume)
Short Strings 81 36
Medium Strings 13 21
Long Strings 5 29
Other Strings 1 14

codesite...@google.com

Aug 20, 2010, 3:38:51 PM
to lusc...@googlegroups.com

Comment #8 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Do you think release 14756 addresses the issue?

codesite...@google.com

Aug 20, 2010, 9:37:42 PM
to lusc...@googlegroups.com

Comment #9 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Hm! You could try it. I doubt it though.


codesite...@google.com

Aug 21, 2010, 11:26:25 AM
to lusc...@googlegroups.com

Comment #10 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

I'll run it for a few days and let you know if the memory usage keeps
growing without limits.

codesite...@google.com

Aug 23, 2010, 8:59:17 PM
to lusc...@googlegroups.com

Comment #11 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Same memory usage with the latest release:


proxy:~ # squidclient mgr:mem
HTTP/1.0 200 OK

Server: Lusca/LUSCA_HEAD-r14756
Date: Tue, 24 Aug 2010 00:17:52 GMT
Content-Type: text/plain
Expires: Tue, 24 Aug 2010 00:17:52 GMT
Connection: close

Current memory usage:
Pool Obj Size Allocated In Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs) impact (%total) (#) (KB)
high (KB) (%num) (number)

2K Buffer (no-zero) 2048 1 2 4928 1.35 0 1 2 4928
100 332442356
4K Buffer (no-zero) 4096 2231 8924 98908 0.83 0 2231
8924 98908 100 51582727
8K Buffer (no-zero) 8192 39 312 6248 0.77 0 39 312
6248 100 23167970
16K Buffer (no-zero) 16384 0 0 240 33.30 0 0 0 240 -1
9912
32K Buffer (no-zero) 32768 0 0 64 75.86 0 0 0 64 -1 23
64K Buffer (no-zero) 65536 0 0 64 62.62 0 0 0 64 -1 2
Short Strings (no-zero) 36 783576 27548 31150 1.35 1
783576 27548 31150 100 4169421974
Medium Strings (no-zero) 128 108355 13545 15279 1.35 1
108355 13545 15279 100 681282320
Long Strings (no-zero) 512 39791 19896 22207 0.84 1 39791
19896 22207 100 135124259
event 48 9 1 44 1.35 0 9 1 44 100 6156643
close_handler 24 5365 126 900 0.84 0 5365 126 900 100
211224143
acl 64 11 1 1 77.22 0 11 1 1 100 11
acl_ip_data 24 14 1 1 77.22 0 14 1 1 100 14
acl_list 24 21 1 1 77.22 0 21 1 1 100 21
dwrite_q 48 0 0 1 77.22 0 0 0 1 -1 38056236
FwdServer 24 1632 39 177 1.35 0 1632 39 177 100
65779756
HttpReply 168 23565 3867 5673 1.35 0 23565 3867
5673 100 198694295
mem_node (no-zero) 4112 64433 258739 275613 1.19 13
64433 258739 275613 100 458617517
StoreEntry 88 1357472 116658 129847 5.81 6 1357472
116658 129847 100 78654572
MemObject 272 22563 5994 8502 1.35 0 22563 5994
8502 100 102372890
netdbEntry 104 951 97 102 77.19 0 951 97 102 100
694653
net_db_name 32 26148 818 841 44.86 0 26148 818 841
100 1121893
request_t 1360 75816 100694 106448 0.84 5 75816
100694 106448 100 95012705
ClientInfo 352 3151 1084 1090 1.01 0 3151 1084 1090
100 11066
storeSwapLogData 72 0 0 1 77.22 0 0 0 1 -1 38056236
buf_t 80 0 0 1 25.69 0 0 0 1 -1 245118380
AUFS IO State data 48 424 20 36 24.03 0 424 20 36 100
44921640
AUFS Queued read data 64 0 0 6 9.00 0 0 0 6 -1 28561271
AUFS Queued write data 56 0 0 128 26.04 0 0 0 128 -1 81410752
aio_ctrl 104 0 0 49 1.22 0 0 0 49 -1 290157214
wordlist 16 8 1 1 77.22 0 8 1 1 100 11
cbdata acl_address (1001) 48 1 1 1 77.22 0 1 1 1 100 1
intlist 16 1 1 1 77.22 0 1 1 1 100 1
cbdata acl_access (1002) 56 17 1 1 77.22 0 17 1 1 100 17
cbdata http_port_list (1003) 136 3 1 1 77.22 0 3 1 1 100 3
LRU policy node 24 1372926 32178 35777 5.86 2 1372926
32178 35777 100 40369990
cbdata RemovalPolicy (1004) 104 2 1 1 77.22 0 2 1 1 100 2
cbdata body_size (1005) 64 3 1 1 77.22 0 3 1 1 100 3
ipcache_entry 128 920 115 1240 1.35 0 920 115 1240 100
4044254
fqdncache_entry 160 3 1 1 77.22 0 3 1 1 100 6
cbdata idns_query (1006) 8680 0 0 80087 1.35 0 0 0 80087 -1
4044251
HttpHeaderEntry 40 799734 31240 35372 1.35 2 799734 31240
35372 100 3587427031
HttpHdrRangeSpec 16 14 1 4 6.94 0 14 1 4 100 3216791
HttpHdrRange 16 14 1 2 0.79 0 14 1 2 100 3127758
HttpHdrContRange 24 135 4 8 0.78 0 135 4 8 100 5787568
HttpHdrCc 40 28394 1110 1314 1.35 0 28394 1110
1314 100 208000280
MD5 digest 16 1357472 21211 23609 5.81 1 1357472
21211 23609 100 104509435
aio_thread 40 16 1 1 77.22 0 16 1 1 100 16
aio_request 96 0 0 45 1.22 0 0 0 45 -1 290157214
cbdata RebuildState (1010) 112 0 0 1 77.22 0 0 0 1 -1 1
pconn_data 32 0 0 173 24.01 0 0 0 173 -1 28697852
pconn_fds 32 0 0 172 24.01 0 0 0 172 -1 28697852
cbdata ConnStateData (1012) 336 2101 690 7990 0.84 0 2101 690
7990 100 41452858
cbdata clientHttpRequest (1013) 1144 1183414 1322096 1323136 0.07
67 1183414 1322096 1323136 100 95012242
cbdata aclCheck_t (1014) 352 1 1 1 77.22 0 1 1 1 100 382344428
cbdata store_client (1015) 152 1975 294 1294 1.35 0 1975 294
1294 100 114552408
cbdata FwdState (1016) 112 1632 179 824 1.35 0 1632 179 824
100 65778585
cbdata ps_state (1017) 200 0 0 1 77.22 0 0 0 1 -1 65779586
cbdata ConnectStateData (1018) 96 0 0 912 1.35 0 0 0 912 -1
37485292
cbdata generic_cbdata (1019) 32 312 10 304 1.35 0 312 10 304
100 38421796
cbdata ErrorState (1020) 160 40306 6298 6622 1.35 0 40306 6298
6622 100 2499117
cbdata HttpStateData (1021) 136 35815 4757 5008 0.83 0 35815
4757 5008 100 65662447
cbdata storeIOState (1022) 136 424 57 101 24.03 0 424 57 101
100 44921640
cbdata AddVaryState (1023) 160 0 0 5 1.73 0 0 0 5 -1 1289754
cbdata LocateVaryState (1024) 144 0 0 4 8.38 0 0 0 4 -1
3281690
VaryData 32 120 4 6 1.29 0 120 4 6 100 3281690
cbdata RemovalPurgeWalker (1025) 72 0 0 1 77.18 0 0 0 1 -1
523706
cbdata SslStateData (1026) 120 0 0 2 30.15 0 0 0 2 -1 1001
cbdata Logfile (1027) 4192 0 0 5 76.55 0 0 0 5 -1 1
cbdata clientAsyncRefreshRequest (1028) 88 0 0 1 75.42 0 0 0
1 -1 9
cbdata RemovalPolicyWalker (1029) 56 0 0 1 64.30 0 0 0 1 -1 3
Total 7341331 1978602 2157721 0.84 100 7341331
1978602 2157721 100 12648022042
Cumulative allocated volume: 4.08 TB
Current overhead: 13748 bytes (0.001%)


Idle pool limit: 5.00 MB

memPoolAlloc calls: -236879846
memPoolFree calls: -244221178


String Pool Impact
(%strings) (%volume)

Short Strings 83 37
Medium Strings 11 18
Long Strings 4 27
Other Strings 1 17

codesite...@google.com

Aug 23, 2010, 9:20:40 PM
to lusc...@googlegroups.com

Comment #12 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Can you please try this:

* start up lusca
* pass traffic through it; see the usage blow out
* then without shutting lusca down, stop passing new connections to it and
let it time out the current connections
* -then- compare memory usage
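In script form the comparison might look like this (squidclient on PATH, cachemgr reachable on the default port, and the /tmp paths are all assumptions); a drained proxy that still shows millions of clientHttpRequest objects would confirm the leak:

```shell
# Snapshot the pool table under load and again after draining, then
# compare the suspect rows. Guarded so it is a no-op without squidclient.
if command -v squidclient >/dev/null 2>&1; then
    squidclient mgr:mem > /tmp/mem.loaded      # while traffic is flowing
    # ... stop passing new connections, let the current ones time out ...
    squidclient mgr:mem > /tmp/mem.drained
    grep 'cbdata clientHttpRequest' /tmp/mem.loaded /tmp/mem.drained
fi
```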

Thanks,


Adrian

codesite...@google.com

Aug 23, 2010, 10:00:31 PM
to lusc...@googlegroups.com

Comment #13 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

I'm going to make the hashed cbdata and cbdata debugging code work again so
we can get a list of what has locked/unlocked each of those objects. I have
a feeling something is just being done subtly wrong in your particular use
case. I certainly don't see memory leaks on my currently live proxies (or,
honestly, I haven't noticed them yet).


codesite...@google.com

Aug 23, 2010, 10:39:21 PM
to lusc...@googlegroups.com

Comment #14 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Try this patch against LUSCA_HEAD, and compile with:

env CFLAGS="-g -O -DHASHED_CBDATA -DCBDATA_DEBUG" ./configure ...

Then check "cbdata" cachemgr page.

Please note this will make your proxy run quite a bit slower, so be careful.


Attachments:
cbdata.diff 7.0 KB
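Spelled out end-to-end, the debug rebuild might look like this (the working directory, the bare ./configure call, and the follow-up squidclient command are placeholders; keep your own configure arguments):

```shell
# Rebuild with hashed/debug cbdata enabled; guarded so it is a no-op
# unless the attached patch file is actually present in the source tree.
if [ -f cbdata.diff ]; then
    patch -p0 < cbdata.diff
    env CFLAGS="-g -O -DHASHED_CBDATA -DCBDATA_DEBUG" ./configure
    make
fi
```

Afterwards the cbdata cachemgr page can be fetched, e.g. with `squidclient mgr:cbdata`.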

codesite...@google.com

Aug 24, 2010, 11:00:57 AM
to lusc...@googlegroups.com

Comment #15 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

I've patched a system which exhibits the problem, and this is an example
that I've seen from one of the leaking proxies:

2010/08/24 18:39:32| cbdataAlloc: 0x80a449900 client_side_request_parse.c:377 parseHttpRequest() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: acl.c:2508 aclNBCheck() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: acl.c:2508 aclNBCheck() 2
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: comm.c:1485 comm_write() 3
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 acl.c:2355 aclCheckCallback() 2
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 acl.c:2355 aclCheckCallback() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 2
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 comm.c:144 commWriteStateCallbackAndFree() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 client_side_body.c:90 clientProcessBody() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 1
2010/08/24 18:39:32| cbdataValid: 0x80a449900
2010/08/24 18:39:32| cbdataUnlock: 0x80a449900 client_side_body.c:90 clientProcessBody() 0
2010/08/24 18:39:32| cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 1
2010/08/24 18:39:32| cbdataFree: 0x80a449900
2010/08/24 18:39:32| cbdataFree: 0x80a449900 has 1 locks, not freeing 1! aiee!
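The trace above can be tallied mechanically: count cbdataLock against cbdataUnlock for the object, and whatever is left over at cbdataFree time is exactly the "has 1 locks" that keeps the clientHttpRequest alive. A sketch over the excerpt (the heredoc stands in for grepping cache.log on a live proxy):

```shell
# Net lock count for object 0x80a449900: 6 locks against 5 unlocks
# leaves 1, matching the "has 1 locks, not freeing" complaint.
awk '/cbdataLock: 0x80a449900/   { locks++ }
     /cbdataUnlock: 0x80a449900/ { locks-- }
     END { print "net locks at free:", locks }' <<'EOF'
cbdataLock: 0x80a449900: acl.c:2508 aclNBCheck() 1
cbdataLock: 0x80a449900: acl.c:2508 aclNBCheck() 2
cbdataLock: 0x80a449900: comm.c:1485 comm_write() 3
cbdataUnlock: 0x80a449900 acl.c:2355 aclCheckCallback() 2
cbdataUnlock: 0x80a449900 acl.c:2355 aclCheckCallback() 1
cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 2
cbdataUnlock: 0x80a449900 comm.c:144 commWriteStateCallbackAndFree() 1
cbdataUnlock: 0x80a449900 client_side_body.c:90 clientProcessBody() 0
cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 1
cbdataUnlock: 0x80a449900 client_side_body.c:90 clientProcessBody() 0
cbdataLock: 0x80a449900: client_side_body.c:16 clientEatRequestBodyHandler() 1
EOF
```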


codesite...@google.com

Aug 24, 2010, 11:12:29 AM
to lusc...@googlegroups.com

Comment #16 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

.. so the point here is: something enters the clientEatRequestBody() path
via client_side.c, and it results in that locked-up situation.

I've been staring at the code and it seems like clientEatRequestBody()
calls clientEatRequestBodyHandler(), which is locking the clientHttpRequest
(ie, 'http'), creating a blank buffer, setting the body callback to itself,
and calling clientProcessBody(). clientProcessBody() then eats some data,
calls cbdataUnlock() above, then calls the callback - which is
clientEatRequestBodyHandler() again.

The thing is, that last clientEatRequestBodyHandler() should've found the
http pointer fine; but I wonder if conn->in.offset is 0 at this point (ie,
there's no further data in the incoming socket buffer to read) and it's
thus not getting a chance to call the callback to indicate as much.

This code doesn't look like it's changed in a while. I wonder if I can
isolate the specific case where this is happening - but I also wonder
whether it's been a problem in Squid-2.x too, and something new/unique to
some site is triggering the leak more often.

A close inspection of this code makes me more unhappy. Line 56 in
client_side_body.c (inside clientProcessBody()) makes me think there's
another leak there - if the data isn't valid, comm_close() (and the comm
read handlers) aren't going to undo this - eg, it won't be undone by the
client http free path. So in that case, a clientHttpRequest ref would also
leak.

Anyway. More to come tomorrow.

codesite...@google.com

Aug 24, 2010, 11:47:16 AM
to lusc...@googlegroups.com

Comment #17 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Adrian,

I'm running 3 Luscas right now. Two of them leak, and both are 64-bit.

I'll try running it on the busiest cache (about 850 req/s) to see if we can
track the request that leaks.


codesite...@google.com

Aug 24, 2010, 8:29:33 PM
to lusc...@googlegroups.com

Comment #18 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Try this. Then check cache.log. Oh, and see if Lusca crashes, leaks memory,
or behaves correctly. :)

Index: client_side_body.c
===================================================================
--- client_side_body.c (revision 14762)
+++ client_side_body.c (working copy)
@@ -50,6 +50,14 @@
     request_t *request = conn->body.request;
     /* Note: request is null while eating "aborted" transfers */
     debug(33, 2) ("clientProcessBody: start fd=%d body_size=%lu in.offset=%ld cb=%p req=%p\n", conn->fd, (unsigned long int) conn->body.size_left, (long int) conn->in.offset, callback, request);
+    if (conn->in.offset == 0) {
+        /* This typically will only occur when some recursive call through the body eating path has occured -adrian */
+        /* XXX so no need atm to call the callback handler; the original code didn't! -adrian */
+        debug(33, 1) ("clientProcessBody: cbdata %p: would've leaked; conn->in.offset=0 here\n", cbdata);
+        cbdataUnlock(conn->body.cbdata);
+        conn->body.cbdata = conn->body.callback = NULL;
+        return;
+    }
     if (conn->in.offset) {
         int valid = cbdataValid(conn->body.cbdata);
         if (!valid) {


codesite...@google.com

Aug 24, 2010, 9:14:01 PM
to lusc...@googlegroups.com

Comment #19 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

It looks like it's caused by an aborted request body (ie, POST) which isn't
correctly being "eaten".

I wonder if I can craft a test case that reproduces it - I'd like to see
whether Squid-2.x / Squid-3 has the same issue.


codesite...@google.com

Aug 25, 2010, 3:56:35 AM
to lusc...@googlegroups.com

Comment #20 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Please try the following patch against -HEAD.

Attachments:
post-diff.1.diff 2.1 KB

codesite...@google.com

Aug 25, 2010, 11:56:11 AM
to lusc...@googlegroups.com

Comment #21 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Should I try it with the debug flags?

codesite...@google.com

Aug 25, 2010, 11:27:44 PM
to lusc...@googlegroups.com

Comment #22 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

No need. Just apply post-diff.1.diff to the latest lusca-head and run. You
don't need the cbdata debugging.

codesite...@google.com

Aug 26, 2010, 3:35:45 AM
to lusc...@googlegroups.com

Comment #23 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

OK.

It's been running for about 12h with no memory increase so far.

I'll keep you updated.

codesite...@google.com

Aug 29, 2010, 7:19:25 PM
to lusc...@googlegroups.com

Comment #24 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Adrian,

The caches are fine! Thank you very much.

I get the following warnings on the log file:


Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 5716: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 9309: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 13580: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 26855: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 6963: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 4904: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:49 proxy squid[13393]: clientEatRequestBodyHandler: FD 8925: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 25527: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 3723: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 20651: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 33182: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 5155: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 16487: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 8485: no more data left in socket; but request header says there should be; aborting for now
Aug 29 19:36:50 proxy squid[13393]: clientEatRequestBodyHandler: FD 2438: no more data left in socket; but request header says there should be; aborting for now


Regards,

Renato

codesite...@google.com
Aug 29, 2010, 8:36:04 PM
to lusc...@googlegroups.com

Comment #25 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Ok. So it does look like the bug is in eating request bodies from aborted
requests.

I need to drill down and make sure all the use cases are handled correctly so
the FDs are properly dealt with (in both the keepalive case and the
close-connection case).

Would you mind trialling out another patch or two, whilst I slowly figure
out what is actually going down?

Thanks,


Adrian
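The bookkeeping Adrian describes can be sketched roughly as follows (illustrative Python with hypothetical names, not Lusca's actual C code):

```python
def finish_aborted_request(keepalive, body_bytes_left, data_available):
    """Decide what to do with a client FD whose request has aborted.

    Illustrative sketch only (the function and labels are hypothetical).
    The leak discussed in this thread appeared when the drain path never
    completed, so per-request state (cbdata clientHttpRequest) piled up.
    """
    if not keepalive:
        return "close"   # connection won't be reused: just close the FD
    if body_bytes_left == 0:
        return "reuse"   # request body fully consumed: safe to keep alive
    if data_available:
        return "drain"   # eat the leftover request body, then reuse
    # The header promised more body but the socket has nothing left:
    # this is the "no more data left in socket" case from the log above,
    # and waiting here indefinitely is where state can accumulate.
    return "close"
```

In the keepalive case the FD can only be reused once the leftover body has been drained; otherwise the next request on that connection would start mid-body.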

codesite...@google.com
Aug 29, 2010, 8:58:11 PM
to lusc...@googlegroups.com

Comment #26 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Adrian,

Just send me the patch!


codesite...@google.com
Oct 18, 2010, 9:55:34 PM
to lusc...@googlegroups.com
Updates:
Status: Fixed

Comment #27 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Committed in r14805. Please test!

codesite...@google.com
Oct 21, 2010, 7:48:20 AM
to lusc...@googlegroups.com

Comment #28 on issue 116 by maher.kassem: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

renato,

I downloaded the latest snapshot 14805 and compiled it. It STILL leaks
memory, and yes, I see the same "clientEatRequestBodyHandler: FD 20651: no
more data left in socket; but request header says there should be; aborting
for now" messages in cache.log. If you were able to fix it, could you please
advise me how.

Thanks,
Maher

codesite...@google.com
Oct 21, 2010, 8:12:53 AM
to lusc...@googlegroups.com

Comment #29 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

After the fix, I see lots of the "clientEatRequestBodyHandler..." messages,
but the memory doesn't increase any more.

My biggest cache has been running since:
Start Time: Wed, 25 Aug 2010 15:14:05 GMT
Current Time: Thu, 21 Oct 2010 11:14:26 GMT

I didn't try this new release, only the HEAD (from late August) plus the
patch on this page.

Can you provide your config file/cachemgr mem usage stats?

codesite...@google.com
Oct 21, 2010, 10:20:21 AM
to lusc...@googlegroups.com

Comment #30 on issue 116 by maher.kassem: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Renato,

Here is my config:


max_stale 1 week
minimum_expiry_time 3600 seconds
incoming_rate 4
http_port 172.16.99.100:3128 transparent
hierarchy_stoplist Really-Some-Bullshit
acl PURGE method PURGE
acl deny_to_apache url_regex -i (fujitsu|avg|symant|packages|sources|
crownforex|saxobank|CGI_LivePrices)
acl deny_apache urlpath_regex -i (\?|ytimg|\=|packages|sources|crownforex|
saxobank|CGI_LivePrices)
acl to_apache urlpath_regex -i \.(dwg|upd|tar|mp4|mp4u|vpu|dmg|msi|avi|avc|
kfb|rup|mpg|cab|exe|zip|mp3|wmv|mov|wav|ps|psf|pdf|gz|bz2|deb|tgz|rpm|rm|
ram|rar|$
url_rewrite_program /usr/local/etc/upxl.sh
url_rewrite_children 200
url_rewrite_access deny PURGE
url_rewrite_access deny deny_apache
url_rewrite_access deny deny_to_apache
url_rewrite_access allow to_apache
acl all src XXX
acl all src XXX
acl all src XXX
acl all src XXX
cache_vary on
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 6144 MB
maximum_object_size 4194304 KB
maximum_object_size_in_memory 2 MB
ipcache_size 102400
cache_replacement_policy heap LFUDA
memory_replacement_policy heap LFUDA
#Not a bad idea to try GDSF as well.
cache_dir aufs /cache/squida 911805 128 256
cache_dir aufs /cache/squidb 911805 128 256
cache_dir aufs /cache/squidc 911805 128 256
access_log /logs/squid/access.log
cache_log /logs/squid/cache.log
cache_swap_log /logs/squid/%s
cache_store_log none
request_header_max_size 30 KB
refresh_pattern -i (212.30.40.10|isppro-visp|slide|kaspersky|mail.ru|aramex|
mtctouch|softarchive|odnoklassniki) 0 0 2
refresh_pattern -i (aspx|asp) 0 10 1440 reload-into-ims
refresh_pattern ^http://(.*?)/get_video\? 999999 190% 999999
override-expire ignore-no-cache ignore-private
refresh_pattern ^http://(.*?)/videodownload\? 999999 190% 999999
override-expire ignore-no-cache ignore-private
refresh_pattern -i \.(dwg|upd|tar|mp4|mp4u|vpu|dmg|msi|cfm|avi|avc|kfb|rup|
dif|swf|mpg|png|jpg|bmp|gif|giff|cab|exe|zip|mp3|wmv|jpe|tiff|mov|wav|ps|
psf|pdf|$
refresh_pattern -i \.(dwg|upd|tar|mp4|mp4u|vpu|dmg|msi|cfm|avi|avc|kfb|rup|
dif|swf|mpg|png|jpg|bmp|gif|giff|cab|exe|zip|mp3|wmv|jpe|tiff|mov|wav|ps|
psf|pdf|$
refresh_pattern -i \.(gif|jpeg|jpg|swf|png|bmp|pic)$ 999999 100% 999999
override-expire override-lastmod ignore-reload ignore-no-cache
ignore-private ignore$
refresh_pattern -i \.(exe|gz|tar|tgz|zip|arj|ace|bin|cab|msi)(\?.*|)$
999999 100% 999999 override-expire override-lastmod ignore-reload
ignore-no-cache igno$
refresh_pattern -i \.(mid|mp[234]|wav|ram|rm|au)(\?.*|)$ 999999 100% 999999
override-expire override-lastmod ignore-reload ignore-no-cache
ignore-private ig$
refresh_pattern -i \.(mpg|mpeg|avi|asf|wmv|wma)(\?.*|)$ 999999 100% 999999
override-expire override-lastmod ignore-reload ignore-no-cache
ignore-private ign$
refresh_pattern -i (dwg|upd|tar|mp4|mp4u|vpu|dmg|msi|cfm|avi|avc|kfb|rup|
dif|swf|mpg|png|jpg|bmp|gif|giff|cab|exe|zip|mp3|wmv|jpe|tiff|mov|wav|ps|
psf|pdf|gz$
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (get_video\?|videoplayback\?|videodownload\?) 5259487
99999999% 5259487 override-expire ignore-reload
refresh_pattern -i (/cgi-bin/|\?|asp) 0 0% 0
refresh_pattern . 1440 100% 10080 reload-into-ims

quick_abort_min 512 KB
quick_abort_max 1024 KB
read_ahead_gap 512 KB
negative_ttl 1 seconds
positive_dns_ttl 60 hours
range_offset_limit 0 KB
collapsed_forwarding off #not a bad idea to give this a try with 'on'
refresh_stale_hit 300 seconds
read_timeout 180 seconds
persistent_request_timeout 30 seconds
half_closed_clients off
shutdown_lifetime 60 seconds
acl localnet src XX
acl localnetreal src XX
acl manager proto cache_object
acl localhost src XX
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker

acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|
videoplayback.*id)
storeurl_access allow store_rewrite_list
storeurl_access deny all
storeurl_rewrite_program /usr/local/etc/store_url_rewrite
storeurl_rewrite_children 100
storeurl_rewrite_concurrency 100

cache allow all
http_access allow manager all
http_access deny manager
http_access allow PURGE all
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access allow localnetreal
http_access allow all
http_reply_access allow all
icp_access allow all
cache_effective_user proxy
cache_effective_group proxy
logfile_rotate 10
memory_pools off # If system becomes short of memory, set to off
via off

forwarded_for off
reload_into_ims on
header_replace Accept */*
header_replace Accept-Encoding gzip
header_replace Accept-Language en
uri_whitespace allow
coredump_dir /logs/squid
ignore_unknown_nameservers off
client_persistent_connections on
server_persistent_connections on
pipeline_prefetch on # if there are problems (slow pages, etc.), set this to off
store_dir_select_algorithm least-load
vary_ignore_expire on
max_filedescriptors 100000
pid_filename /var/run/squid.pid
visible_hostname Lusca-Cache
update_headers off

#tcp_outgoing_tos 0x30 localnet
zph_mode tos
zph_local 0x30
#zph_parent 0

------------------------------

As for my cachemgr mem usage, I don't have cachemgr running. What I can tell
you, though, is that in 2 hours I consumed 20 GB of my 24 GB and it's
increasing. Sometimes 4 MB/sec, sometimes 6, sometimes 2, but it's constantly
increasing. I even tried downgrading to version 14371 and the message
in cache.log disappeared, but I am still gobbling up the RAM. Any help is
much appreciated.

Thanks,
Maher


codesite...@google.com
Oct 21, 2010, 10:26:24 AM
to lusc...@googlegroups.com

Comment #31 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Try running:

squidclient mgr:mem

on the Squid machine.
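When reading the resulting dump, a short script can rank the pools by current in-use memory. This is a sketch that assumes each data row ends with the 11 numeric columns shown in the dumps on this page (unwrap any lines your mailer or terminal folded first):

```python
# Sketch: rank pools from `squidclient mgr:mem` output by in-use memory.
SAMPLE = """\
2K Buffer (no-zero) 2048 346281 692562 692606 0.01 10 346281 692562 692606 100 3680052033
mem_node (no-zero) 4112 65724 263923 266939 68.58 4 65724 263923 266939 100 5454473575
StoreEntry 88 1353897 116351 137405 1025.54 2 1353897 116351 137405 100 832136272
"""

def parse_pools(text):
    """Yield (pool_name, in_use_kb) for each data row of mgr:mem output."""
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 12 or fields[0] == "Total":
            continue                     # header, blank, or summary line
        try:
            in_use_kb = int(fields[-4])  # the "In Use (KB)" column
        except ValueError:
            continue                     # not a data row
        yield " ".join(fields[:-11]), in_use_kb

for name, kb in sorted(parse_pools(SAMPLE), key=lambda p: p[1], reverse=True):
    print(f"{name:28s} {kb:>9d} KB in use")
```

With the sample rows above, this prints the 2K Buffer pool first, which is exactly the pool dominating the dump in the original report.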

codesite...@google.com
Oct 21, 2010, 10:31:27 AM
to lusc...@googlegroups.com

Comment #32 on issue 116 by adrian.chadd: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

It's still leaking clientHttpRequest structs?

codesite...@google.com
Oct 21, 2010, 10:36:31 AM
to lusc...@googlegroups.com

Comment #33 on issue 116 by maher.kassem: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD
Date: Thu, 21 Oct 2010 14:35:41 GMT
Content-Type: text/plain
Expires: Thu, 21 Oct 2010 14:35:41 GMT
X-Cache: MISS from Lusca-Cache
X-Cache-Lookup: MISS from Lusca-Cache:3128
Connection: close

Current memory usage:
Pool Obj Size Allocated In
Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs)
impact (%total) (#) (KB) high (KB) (%num) (number)

2K Buffer (no-zero) 2048 582 1164 1684 0.10
0 582 1164 1684 100 7887767
4K Buffer (no-zero) 4096 3377 13508 15016 0.09
0 3377 13508 15016 100 895207
8K Buffer (no-zero) 8192 331 2648 2800 0.10
0 331 2648 2800 100 360086
16K Buffer (no-zero) 16384 0 0 64 1.99
0 0 0 64 -1 51
Short Strings (no-zero) 36 3702012 130149 130153
0.00 2 3702012 130149 130153 100 73255550
Medium Strings (no-zero) 128 79846 9981 9989 0.00
0 79846 9981 9989 100 6284515
Long Strings (no-zero) 512 20423 10212 10251 0.00
0 20423 10212 10251 100 2975204
event 48 11 1 2 0.10 0
11 1 2 100 145929
close_handler 24 5232 123 133 0.09 0
5232 123 133 100 3113882
acl 64 15 1 1 2.11 0
15 1 1 100 15
acl_ip_data 24 8 1 1 2.11 0
8 1 1 100 8
acl_list 24 31 1 1 2.11 0
31 1 1 100 31
relist 80 5 1 1 2.11 0
5 1 1 100 5
CacheDigest 32 1 1 1 2.11 0
1 1 1 100 1
dwrite_q 48 0 0 1 2.11 0
0 0 1 -1 6379542
FwdServer 24 768 18 22 0.10 0
768 18 22 100 1002487
HttpReply 168 393853 64617 64618 0.00 1
393853 64617 64618 100 3584638
mem_node (no-zero) 4112 1153338 4631373
4631373 0.00 81 1153338 4631373
4631373 100 6583568
StoreEntry 88 6327395 543761 543762 0.00
9 6327395 543761 543762 100 7141185
MemObject 272 393605 104552 104554 0.00 2
393605 104552 104554 100 1807330
request_t 1384 1479 1999 2674 0.10 0
1479 1999 2674 100 1869516
helper_request 64 0 0 2 0.10 0
0 0 2 -1 228897
ClientInfo 352 18 7 7 0.00 0
18 7 7 100 18
storeSwapLogData 72 0 0 1 2.11 0
0 0 1 -1 6379542
buf_t 80 0 0 1 0.69 0
0 0 1 -1 6152878
AUFS IO State data 48 112 6 15 2.06 0
112 6 15 100 1052642
AUFS Queued read data 64 0 0 7 2.06 0
0 0 7 -1 651062
AUFS Queued write data 56 0 0 247 2.07
0 0 0 247 -1 1586093
aio_ctrl 104 0 0 43 2.07 0
0 0 43 -1 7237523
wordlist 16 11 1 1 2.11 0
11 1 1 100 14
cbdata http_port_list (1001) 136 1 1 1 2.11

0 1 1 1 100 1

cbdata acl_access (1002) 56 28 2 2 2.11
0 28 2 2 100 28
cbdata RemovalPolicy (1003) 104 4 1 1 2.11
0 4 1 1 100 4
intlist 16 1 1 1 2.11 0

1 1 1 100 1

cbdata body_size (1004) 64 3 1 1 2.11

0 3 1 1 100 3

ipcache_entry 128 15858 1983 1983 0.00 0
15858 1983 1983 100 17168
fqdncache_entry 160 3 1 1 2.11 0

3 1 1 100 3

cbdata idns_query (1005) 8680 0 0 501 0.84
0 0 0 501 -1 17165
cbdata helper (1006) 136 2 1 1 2.11 0

2 1 1 100 2

cbdata helper_server (1007) 152 300 45 45 2.11
0 300 45 45 100 300
cbdata redirectStateData (1008) 72 0 0 2
0.10 0 0 0 2 -1 107792
cbdata storeurlStateData (1009) 72 0 0 1
1.82 0 0 0 1 -1 121105
HttpHeaderEntry 40 3075421 120134 120139 0.00
2 3075421 120134 120139 100 60327902
HttpHdrRangeSpec 16 4 1 1 0.32 0
4 1 1 100 107994
HttpHdrRange 16 4 1 1 0.32 0
4 1 1 100 107479
HttpHdrContRange 24 89 3 3 0.21 0
89 3 3 100 206139
HttpHdrCc 40 228858 8940 8940 0.00 0
228858 8940 8940 100 3311669
cbdata Logfile (1012) 4192 1 5 5 2.11 0
1 5 5 100 1
MD5 digest 16 6327395 98866 98866 0.00
2 6327395 98866 98866 100 7614332
aio_thread 40 32 2 2 2.11 0
32 2 2 100 32
aio_request 96 0 0 40 2.07 0
0 0 40 -1 7237523
cbdata RebuildState (1014) 112 0 0 1 2.11

0 0 0 1 -1 3

pconn_data 32 359 12 15 0.59 0
359 12 15 100 296768
pconn_fds 32 354 12 15 0.59 0
354 12 15 100 296768
cbdata generic_cbdata (1016) 32 72 3 6 0.15
0 72 3 6 100 433630
cbdata RemovalPurgeWalker (1017) 72 0 0 1
2.10 0 0 0 1 -1 22677
cbdata ConnStateData (1018) 336 3396 1115 1238 0.09
0 3396 1115 1238 100 730321
cbdata clientHttpRequest (1019) 1152 2566 2887 3333
0.10 0 2566 2887 3333 100 1787808
cbdata aclCheck_t (1020) 352 3 2 3 2.09
0 3 2 3 100 12319722
cbdata store_client (1021) 152 896 133 208 0.10
0 896 133 208 100 2224074
cbdata storeIOState (1022) 136 112 15 43 2.06
0 112 15 43 100 1052642
cbdata FwdState (1023) 112 768 84 102 0.10
0 768 84 102 100 1002487
cbdata ps_state (1024) 200 0 0 1 2.10
0 0 0 1 -1 1002487
cbdata ConnectStateData (1025) 96 71 7 16
0.10 0 71 7 16 100 356608
cbdata HttpStateData (1026) 136 1045 139 151 0.10
0 1045 139 151 100 1002339
cbdata ErrorState (1027) 160 282 45 46 0.09
0 282 45 46 100 35267
cbdata AddVaryState (1028) 160 0 0 3 2.08
0 0 0 3 -1 25183
cbdata LocateVaryState (1029) 144 0 0 3 2.10
0 0 0 3 -1 93919
VaryData 32 2 1 1 2.10 0
2 1 1 100 93919
Total 21740383 5748545
5748766 0.00 100 21740383 5748545
5748766 100 248528452
Cumulative allocated volume: 72.63 GB
Current overhead: 14153 bytes (0.000%)
Idle pool limit: 0.00 MB
memPoolAlloc calls: 248528452
memPoolFree calls: 226788068


String Pool Impact
(%strings) (%volume)

Short Strings 97 86
Medium Strings 2 7
Long Strings 1 7
Other Strings 0 1

codesite...@google.com
Oct 21, 2010, 10:40:35 AM
to lusc...@googlegroups.com

Comment #34 on issue 116 by maher.kassem: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

Adrian,

In 14809, apparently yes???

codesite...@google.com
Oct 21, 2010, 10:53:47 AM
to lusc...@googlegroups.com

Comment #42 on issue 116 by renato.ornelas: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

mem_node is consuming 81% of your memory.

It isn't the same problem I had.
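For what it's worth, the 81% figure can be sanity-checked against the posted config. mem_node holds the in-memory object data pages, so a large pool is expected while the memory cache fills toward cache_mem (a rough check using the numbers from comment #33; it assumes mem_node is the memory-cache page pool):

```python
# Rough check: compare the mem_node pool size against cache_mem.
mem_node_in_use = 1_153_338   # "In Use (#)" for mem_node in the dump above
obj_size_bytes = 4112         # mem_node object size from the same dump
cache_mem_mb = 6144           # cache_mem 6144 MB from the posted config

in_use_mb = mem_node_in_use * obj_size_bytes / 2**20
print(f"mem_node in use: {in_use_mb:.0f} MB of cache_mem {cache_mem_mb} MB")
```

That works out to roughly 4.4 GB, which a 6 GB cache_mem can legitimately account for, so mem_node growth by itself is not proof of a leak.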

codesite...@google.com
Oct 21, 2010, 12:11:47 PM
to lusc...@googlegroups.com

Comment #44 on issue 116 by maher.kassem: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

root@Cache-Lusca1:~# squidclient -h 172.16.99.101 mgr:mem
HTTP/1.0 200 OK
Server: Lusca/LUSCA_HEAD-r14809
Date: Thu, 21 Oct 2010 16:08:58 GMT
Content-Type: text/plain
Expires: Thu, 21 Oct 2010 16:08:58 GMT


X-Cache: MISS from Lusca-Cache
X-Cache-Lookup: MISS from Lusca-Cache:3128
Connection: close

Current memory usage:
Pool Obj Size Allocated In
Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs)
impact (%total) (#) (KB) high (KB) (%num) (number)

2K Buffer (no-zero) 2048 2 4 18 0.01
0 2 4 18 100 241042
4K Buffer (no-zero) 4096 1787 7148 7768 0.01
3 1787 7148 7768 100 23767
8K Buffer (no-zero) 8192 316 2528 2544 0.00
1 316 2528 2544 100 5611
16K Buffer (no-zero) 16384 0 0 16 0.02
0 0 0 16 -1 2
Short Strings (no-zero) 36 159045 5592 5593 0.00
2 159045 5592 5593 100 2085443
Medium Strings (no-zero) 128 3656 457 464 0.00
0 3656 457 464 100 173017
Long Strings (no-zero) 512 1078 539 553 0.00
0 1078 539 553 100 113680
event 48 12 1 1 0.01 0
12 1 1 100 5775
close_handler 24 2807 66 71 0.00 0
2807 66 71 100 94795
acl 64 15 1 1 0.16 0

15 1 1 100 15

acl_ip_data 24 8 1 1 0.16 0

8 1 1 100 8

acl_list 24 31 1 1 0.16 0

31 1 1 100 31

relist 80 5 1 1 0.16 0

5 1 1 100 5

CacheDigest 32 1 1 1 0.16 0

1 1 1 100 1

dwrite_q 48 0 0 1 0.16 0
0 0 1 -1 47015
FwdServer 24 353 9 10 0.01 0
353 9 10 100 30688
HttpReply 168 15578 2556 2556 0.00 1
15578 2556 2556 100 95806
mem_node (no-zero) 4112 49165 197429 197429 0.00
85 49165 197429 197429 100 124099
StoreEntry 88 43922 3775 3775 0.00 2
43922 3775 3775 100 62203
MemObject 272 15425 4098 4098 0.00 2
15425 4098 4098 100 41799
request_t 1384 447 605 661 0.00 0
447 605 661 100 55139
helper_request 64 0 0 1 0.05 0
0 0 1 -1 18504
ClientInfo 352 3 2 2 0.02 0
3 2 2 100 3
storeSwapLogData 72 0 0 1 0.16 0
0 0 1 -1 47015
buf_t 80 1 1 1 0.02 0
1 1 1 100 161220
AUFS IO State data 48 43 3 3 0.03 0
43 3 3 100 26606
AUFS Queued read data 64 0 0 1 0.03 0
0 0 1 -1 10174
AUFS Queued write data 56 0 0 7 0.05
0 0 0 7 -1 43926
aio_ctrl 104 0 0 2 0.01 0
0 0 2 -1 159654
wordlist 16 11 1 1 0.16 0

11 1 1 100 14

cbdata http_port_list (1001) 136 1 1 1 0.16

0 1 1 1 100 1

cbdata acl_access (1002) 56 28 2 2 0.16

0 28 2 2 100 28

cbdata RemovalPolicy (1003) 104 4 1 1 0.16

0 4 1 1 100 4

intlist 16 1 1 1 0.16 0

1 1 1 100 1

cbdata body_size (1004) 64 3 1 1 0.16

0 3 1 1 100 3

ipcache_entry 128 1579 198 198 0.00 0
1579 198 198 100 1687
fqdncache_entry 160 3 1 1 0.16 0

3 1 1 100 3

cbdata idns_query (1005) 8680 0 0 111 0.15
0 0 0 111 -1 1684
cbdata helper (1006) 136 2 1 1 0.16 0

2 1 1 100 2

cbdata helper_server (1007) 152 300 45 45 0.16

0 300 45 45 100 300

cbdata redirectStateData (1008) 72 0 0 1
0.15 0 0 0 1 -1 1250


cbdata storeurlStateData (1009) 72 0 0 1

0.05 0 0 0 1 -1 17254
HttpHeaderEntry 40 133600 5219 5220 0.00 2
133600 5219 5220 100 1733196
HttpHdrRangeSpec 16 7 1 1 0.00 0
7 1 1 100 1195
HttpHdrRange 16 7 1 1 0.00 0
7 1 1 100 1188
HttpHdrContRange 24 59 2 2 0.00 0
59 2 2 100 2287
HttpHdrCc 40 10386 406 406 0.00 0
10386 406 406 100 100852
cbdata Logfile (1012) 4192 1 5 5 0.16 0

1 5 5 100 1

MD5 digest 16 43922 687 687 0.00 0
43922 687 687 100 82283
aio_thread 40 32 2 2 0.16 0

32 2 2 100 32

aio_request 96 0 0 1 0.09 0
0 0 1 -1 159654
cbdata RebuildState (1014) 112 0 0 1 0.16

0 0 0 1 -1 3

pconn_data 32 294 10 10 0.00 0
294 10 10 100 8699
pconn_fds 32 289 10 10 0.00 0
289 10 10 100 8699
cbdata generic_cbdata (1016) 32 33 2 2 0.03
0 33 2 2 100 18109
cbdata ConnStateData (1017) 336 1801 591 641 0.01
0 1801 591 641 100 19691
cbdata RemovalPurgeWalker (1018) 72 0 0 1
0.16 0 0 0 1 -1 1767
cbdata clientHttpRequest (1019) 1160 437 496 544
0.00 0 437 496 544 100 55150
cbdata aclCheck_t (1020) 360 3 2 2 0.15
0 3 2 2 100 379543
cbdata store_client (1021) 152 454 68 75 0.00
0 454 68 75 100 64196
cbdata FwdState (1022) 112 353 39 43 0.01
0 353 39 43 100 30688
cbdata ps_state (1023) 200 0 0 1 0.16
0 0 0 1 -1 30688
cbdata ConnectStateData (1024) 96 75 8 11
0.02 0 75 8 11 100 12947
cbdata HttpStateData (1025) 136 282 38 43 0.00
0 282 38 43 100 29718
cbdata storeIOState (1026) 136 43 6 9 0.03
0 43 6 9 100 26606
cbdata AddVaryState (1027) 160 0 0 1 0.14
0 0 0 1 -1 1283
cbdata LocateVaryState (1028) 144 0 0 1 0.01
0 0 0 1 -1 2247
VaryData 32 4 1 1 0.01 0
4 1 1 100 2247
cbdata ErrorState (1029) 160 18 3 7 0.01
0 18 3 7 100 1855
Total 487732 232640 232834 0.00 100
487732 232640 232834 100 6464099
Cumulative allocated volume: 1.80 GB
Current overhead: 14153 bytes (0.006%)


Idle pool limit: 0.00 MB

memPoolAlloc calls: 6464099
memPoolFree calls: 5976366


String Pool Impact
(%strings) (%volume)

Short Strings 97 83
Medium Strings 2 7
Long Strings 1 8
Other Strings 0 2

Large buffers: 0 (0 KB)


------------

This is the dump on 14809. I am now once again getting the "2010/10/21
12:09:56| clientEatRequestBodyHandler: FD 1307: no more data left in
socket; but request header says there should be; aborting for now" message
in cache.log.

Thanks,
Maher

codesite...@google.com
Dec 18, 2011, 11:14:41 PM
to lusc...@googlegroups.com

Comment #45 on issue 116 by hedy...@gmail.com: Huge memory usage (cbdata
clientHttpRequest (1013))
http://code.google.com/p/lusca-cache/issues/detail?id=116

I have the same issue on:
uname -a
Linux localhost.localdomain 2.6.32.26-175.fc12.x86_64 #1 SMP Wed Dec 1
21:39:34 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux


Current memory usage:
Pool Obj Size Allocated In
Use Hit Rate
(bytes) (#) (KB) high (KB) high (hrs)
impact (%total) (#) (KB) high (KB) (%num) (number)

2K Buffer (no-zero) 2048 2 4 586 210.06
0 2 4 586 100 107383617
4K Buffer (no-zero) 4096 411 1644 6936 165.40
0 411 1644 6936 100 28321410
8K Buffer (no-zero) 8192 17 136 1008 84.81
0 17 136 1008 100 9332453
16K Buffer (no-zero) 16384 0 0 96 11.12
0 0 0 96 -1 443
32K Buffer (no-zero) 32768 0 0 64 142.72
0 0 0 64 -1 151
64K Buffer (no-zero) 65536 0 0 192 142.72
0 0 0 192 -1 144
Short Strings (no-zero) 36 529616 18620 18657 0.36
1 529616 18620 18657 100 1203370620
Medium Strings (no-zero) 128 36170 4522 4591 1.27
0 36170 4522 4591 100 199360375
Long Strings (no-zero) 512 20150 10075 10098 0.01
1 20150 10075 10098 100 29131637
event 48 8 1 4 63.86 0
8 1 4 100 2429315
close_handler 24 805 19 110 165.40 0
805 19 110 100 87442652
acl 64 59 4 4 218.36 0
59 4 4 100 118
acl_ip_data 24 23 1 1 218.36 0
23 1 1 100 46
acl_list 24 189 5 5 218.36 0
189 5 5 100 378
relist 80 4832 378 378 218.36 0
4832 378 378 100 9664
dwrite_q 48 0 0 1 218.36 0
0 0 1 -1 23835966
FwdServer 24 194 5 35 165.40 0
194 5 35 100 20193563
HttpReply 168 27627 4533 4547 0.01 0
27627 4533 4547 100 54997311
mem_node (no-zero) 4112 31465 126352 128492 0.71
10 31465 126352 128492 100 149972212
StoreEntry 88 10082748 866487 988390 218.34
68 10082748 866487 988390 100 33709630
MemObject 272 27503 7306 7322 0.01 1
27503 7306 7322 100 27770986
request_t 1384 31742 42902 42961 0.01 3
31742 42902 42961 100 26306369
helper_request 64 0 0 5 73.29 0
0 0 5 -1 11626510
storeSwapLogData 72 0 0 1 218.36 0
0 0 1 -1 23835966
buf_t 80 0 0 2 1.26 0
0 0 2 -1 120131943
AUFS IO State data 48 33 2 7 84.82 0
33 2 7 100 11089475
AUFS Queued read data 64 0 0 2 12.67 0
0 0 2 -1 5617799
AUFS Queued write data 56 0 0 7 69.39
0 0 0 7 -1 33736706
aio_ctrl 104 1 1 18 218.34 0
1 1 18 100 115263020
wordlist 16 20 1 1 218.36 0
20 1 1 100 48
cbdata http_port_list (1001) 136 2 1 1 116.21
0 2 1 1 100 4
cbdata RemovalPolicy (1002) 104 4 1 1 218.36

0 4 1 1 100 4

intlist 16 2 1 1 218.36 0
2 1 1 100 4
cbdata acl_access (1003) 56 181 10 10 218.36
0 181 10 10 100 362
cbdata body_size (1004) 64 3 1 1 218.36
0 3 1 1 100 6
ipcache_entry 128 7372 922 952 163.48 0
7372 922 952 100 525024
fqdncache_entry 160 1776 278 278 0.11 0
1776 278 278 100 11242
cbdata idns_query (1005) 8680 0 0 424 12.67
0 0 0 424 -1 542765
cbdata helper (1006) 136 1 1 1 218.36 0

1 1 1 100 1

cbdata helper_server (1007) 152 5 1 1 218.36
0 5 1 1 100 25
cbdata storeurlStateData (1008) 72 0 0 6
73.29 0 0 0 6 -1 11626510
HttpHeaderEntry 40 454078 17738 17773 0.01 1
454078 17738 17773 100 1012522342
HttpHdrRangeSpec 16 1 1 2 84.82 0
1 1 2 100 710826
HttpHdrRange 16 1 1 2 84.82 0
1 1 2 100 702925
HttpHdrContRange 24 156 4 37 165.29 0
156 4 37 100 930582
HttpHdrCc 40 23521 919 923 0.36 0
23521 919 923 100 75354286
cbdata Logfile (1011) 4192 1 5 5 218.36 0
1 5 5 100 2
MD5 digest 16 10082748 157543 179708 218.34
12 10082748 157543 179708 100 79439640
aio_thread 40 64 3 3 218.36 0
64 3 3 100 64
aio_request 96 1 1 17 218.34 0
1 1 17 100 115263020
cbdata RebuildState (1013) 112 0 0 1 218.36

0 0 0 1 -1 3

cbdata RemovalPurgeWalker (1015) 72 0 0 1
218.34 0 0 0 1 -1 2942370
cbdata ConnStateData (1016) 336 412 136 572 165.40
0 412 136 572 100 26772342
cbdata clientHttpRequest (1017) 1144 214 240 1683
165.40 0 214 240 1683 100 26300165
cbdata aclCheck_t (1018) 344 2 1 9 12.67
0 2 1 9 100 190065341
cbdata store_client (1019) 152 73429 10900 10907 0.01
1 73429 10900 10907 100 27175284
cbdata generic_cbdata (1020) 32 18 1 4 92.12
0 18 1 4 100 6007521
cbdata FwdState (1021) 112 194 22 161 165.40
0 194 22 161 100 20165074
cbdata ps_state (1022) 200 0 0 1 218.21
0 0 0 1 -1 20193563
cbdata ConnectStateData (1023) 96 11 2 92
143.07 0 11 2 92 100 20269438
cbdata storeIOState (1024) 136 33 5 20 84.82
0 33 5 20 100 11089475
cbdata HttpStateData (1025) 136 4448 591 623 92.36
0 4448 591 623 100 20102920
cbdata AddVaryState (1026) 168 0 0 1 46.53
0 0 0 1 -1 880081
cbdata ErrorState (1027) 160 2 1 106 143.07
0 2 1 106 100 1214268
cbdata LocateVaryState (1028) 144 0 0 1 208.06
0 0 0 1 -1 1057
VaryData 32 1 1 1 164.31 0
1 1 1 100 1057
cbdata SslStateData (1029) 120 0 0 2 194.56
0 0 0 2 -1 28489
cbdata RemovalPolicyWalker (1030) 56 0 0 1
199.46 0 0 0 1 -1 9
Total 21442296 1272306
1274605 0.71 100 21442296 1272306
1274605 100 3995708619
Cumulative allocated volume: 1.38 TB
Current overhead: 14338 bytes (0.001%)


Idle pool limit: 0.00 MB

memPoolAlloc calls: -299258677
memPoolFree calls: -320700974


String Pool Impact
(%strings) (%volume)

Short Strings 90 55
Medium Strings 6 13
Long Strings 3 30
Other Strings 0 2

Large buffers: 0 (0 KB)

Why are the memPoolAlloc calls and memPoolFree calls values negative?
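The negative values look like 32-bit counter overflow when printed as signed integers: the pool totals above report 3995708619 allocations, and that value reinterpreted as a signed 32-bit int is exactly the -299258677 shown for memPoolAlloc calls. A quick consistency check on the numbers (an observation about this dump, not about Lusca's source):

```python
def as_signed32(n):
    """Reinterpret an unsigned 32-bit value the way C's %d would print it."""
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

total_allocs = 3_995_708_619      # Total "hit rate (number)" in the dump above
print(as_signed32(total_allocs))  # prints -299258677, matching memPoolAlloc calls
```

The free counter matches too: -320700974 corresponds to 3974266322 frees, and allocations minus frees is about the 21442296 objects currently allocated.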

