I didn't check it over the weekend, but when I checked
this week, innd had stopped on the test server. I
searched through the log files but couldn't find an
answer; it looks as if innd simply stopped (maybe I'm
looking in the wrong place).
I checked the disks; they weren't full. innwatch hadn't
crashed, and there were no messages on the system
console. So I restarted innd. It ran for a while, but
when the production server tried to send a batch of
messages to the test server, innd crashed again. The
production server tries to send a batch every 20
minutes, and every time innd is running when a batch
arrives (I restarted innd several times), it crashes.
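In case it's relevant, here is roughly how I've been confirming
innd is up between batches (ctlinnd comes with INN; the path is
from my install):
---
/usr/local/news/bin/ctlinnd mode
---
When innd is down, ctlinnd just reports that it can't connect.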
I ran truss on the working innd process, and when it
crashed, I saw the following:
------
write(17, " 2 3 8 < 8 r 6 a h l $".., 36) = 36
getcontext(0xFFBEF2F8)
write(17, " 2 3 8 < t b n a t s c".., 50) = 50
getcontext(0xFFBEF2F8)
write(17, " 2 3 8 < 3 c y B 5 . 5".., 47) = 47
getcontext(0xFFBEF2F8)
write(17, " 2 3 8 < 3 4 c d t s 4".., 50) = 50
poll(0xFFBEFA48, 4, 300000) = 1
read(17, " t a k e t h i s < 3 9".., 1023) = 855
Incurred fault #6, FLTBOUNDS %pc = 0x0006973C
siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
Received signal #11, SIGSEGV [default]
siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
*** process killed ***
------
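For reference, I attached to the running process with something
along these lines (Solaris truss; the innd.pid location is from
my layout and may differ on yours):
---
truss -p `cat /usr/local/news/run/innd.pid`
---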
I looked in /usr/include/sys/fault.h and saw the following:
---
#define FLTBOUNDS 6 /* Memory bounds (invalid address) */
---
So it's a memory bounds error. It could be a bug in the INN
source code, but I'm more inclined to think I've misconfigured
one or more of the .conf files.
Here is my cycbuff.conf (comments removed).
---
cycbuff:ONE:/usr/local/news/spool/CNFS/buff001:32000
cycbuff:TWO:/usr/local/news/spool/CNFS/buff002:32000
cycbuff:THREE:/usr/local/news/spool/CNFS/buff003:32000
cycbuff:FOUR:/usr/local/news/spool/CNFS/buff004:32000
cycbuff:FIVE:/usr/local/news/spool/CNFS/buff005:32000
cycbuff:SIX:/usr/local/news/spool/CNFS/buff006:32000
metacycbuff:BINARIES:ONE,TWO,THREE,FOUR
metacycbuff:LARGE:FIVE,SIX
---
Here is an ls -l of /usr/local/news/spool/CNFS.
---
-rw-rw-r-- 1 news news 32768000 Sep 30 22:31 buff001
-rw-rw-r-- 1 news news 32768000 Sep 30 22:11 buff002
-rw-rw-r-- 1 news news 32768000 Sep 30 22:11 buff003
-rw-rw-r-- 1 news news 32768000 Sep 30 22:31 buff004
-rw-rw-r-- 1 news news 32768000 Sep 29 15:16 buff005
-rw-rw-r-- 1 news news 32768000 Sep 28 14:24 buff006
---
Am I off a decimal point somewhere?
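(If I'm reading the cycbuff.conf documentation right, that last
field is in kilobytes, so 32000 * 1024 = 32,768,000 bytes per
buffer, which does match the ls output above.)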
I wanted alt.binaries.* articles to go into the meta-cyclic
buffer BINARIES, articles larger than 50K to go into the
buffer LARGE, and the rest of the messages to go into the
regular traditional spool.
I realize these buffers aren't very large; right now I'm
only feeding two alt.binaries groups and five "regular"
groups to the test server. If/when I get it working, I'll
mount some large disk arrays, create larger CNFS files,
and reconfigure cycbuff.conf.
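For what it's worth, I created each buffer with something like
the following (from memory, so the exact invocation may have
differed):
---
dd if=/dev/zero of=/usr/local/news/spool/CNFS/buff001 bs=1024 count=32000
chown news /usr/local/news/spool/CNFS/buff001
chgrp news /usr/local/news/spool/CNFS/buff001
chmod 664 /usr/local/news/spool/CNFS/buff001
---
which accounts for the 32768000-byte files above.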
Here is my storage.conf file (minus comments).
---
method cnfs {
    class: 1
    newsgroups: alt.binaries.*
    options: BINARIES
}

method cnfs {
    class: 2
    newsgroups: *
    size: 50000
    options: LARGE
}

method tradspool {
    class: 3
    newsgroups: *
}
---
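My understanding is that storage.conf entries are matched
top-down, so an article in alt.binaries.* should land in class 1
(BINARIES) before the size test is ever consulted. For example,
a 60K article posted to rec.humor should skip class 1, match
class 2 via size: 50000, and go to LARGE, while a 2K article in
the same group should fall through to tradspool.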
I'm digging through the man pages, but I'm clearly missing
something, and I would appreciate any help.
Thanks!
--
David Swanger
Office of Information Technology
Auburn University, AL
swa...@mail.auburn.edu