Repeated alerts on netstat output changing?


Kevin Kelly

Apr 3, 2013, 12:04:33 PM
to ossec...@googlegroups.com
On numerous servers, I get OSSEC alerts several times per day that the netstat output has changed. Looking at the files in the /opt/ossec/queue/diff/ldap1/533 directory, I note that the diff command thinks the files are binary, not text.

[root@ossec 533]# diff last-entry state.1364980478
Binary files last-entry and state.1364980478 differ

Looking at the state.1364980478 file with vi, I see the netstat output is truncated and the last line ends with a "^@" (a NUL character). Any ideas what is going on and how I can fix it?
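For anyone else digging into these files: a NUL byte anywhere in a file (shown as ^@ in vi) is enough for GNU diff to treat it as binary. Forcing a text comparison with GNU diff's -a/--text flag, or stripping the NULs with tr first, shows what actually changed (a sketch assuming GNU diff/tr and bash, using the paths from the session above):

```shell
# A NUL byte (vi shows it as ^@) makes GNU diff treat a file as binary.
# Force a text diff to see the real change between the snapshots:
cd /opt/ossec/queue/diff/ldap1/533
diff -a last-entry state.1364980478

# Equivalent, stripping NULs first (bash process substitution):
diff <(tr -d '\0' < last-entry) <(tr -d '\0' < state.1364980478)
```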

--
Kevin Kelly
Director, Network Technology
Whitman College


Brenden Walker

Apr 3, 2013, 12:10:04 PM
to ossec...@googlegroups.com
On Wed, 3 Apr 2013 09:04:33 -0700 (PDT) Kevin Kelly <ke...@whitman.edu> wrote:
> On numerous servers, I get Ossec alerts that the netstat output has
> changed several times per day. In looking at the files in
> the /opt/ossec/queue/diff/ldap1/533 directory, I note that the diff
> command thinks the files are binary and not text.
>
> [root@ossec 533]# diff last-entry state.1364980478
> Binary files last-entry and state.1364980478 differ
>
> Looking at the state.1364980478 file with vi, I see netstat output is
> truncated and the last line has a "^@" symbol as the last character.
> Any ideas what is going on and how I can fix it?

Are you sure nothing changed? I get these on occasion because, for some reason, Apache restarts and gets a new PID.

Other than the above, the netstat checking is working fine for me on v2.7.

Kevin Kelly

Apr 3, 2013, 12:41:09 PM
to ossec...@googlegroups.com
The alerts show a big difference in the listening ports, like the following:

ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.56:10050             0.0.0.0:*                   LISTEN
tcp        0      0 :::22                       :::*                        LISTEN
tcp        0      0 :::2600                     :::*                        LISTEN
tcp        0      0 :::389                      :::*                        LISTEN
tcp        0      0 :::5432                     :::*                        LISTEN
tcp        0      0 :::636                      :::*                        LISTEN
Previous output:
ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.56:10050             0.0.0.0:*                   LISTEN

The listening ports have not changed that much on these servers, and the output of the previous command looks like it was truncated for some reason. It always seems to be the same three servers that generate these alerts, sometimes multiple times per day.

--
Kevin Kelly
Director, Network Technology
Whitman College



بول

Apr 3, 2013, 12:46:57 PM
to ossec...@googlegroups.com


On Wed, Apr 3, 2013 at 5:41 PM, Kevin Kelly <ke...@whitman.edu> wrote:
The alerts show a big different in the listening ports like the following:

ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.56:10050             0.0.0.0:*                   LISTEN
tcp        0      0 :::22                       :::*                        LISTEN
tcp        0      0 :::2600                     :::*                        LISTEN
tcp        0      0 :::389                      :::*                        LISTEN
tcp        0      0 :::5432                     :::*                        LISTEN
tcp        0      0 :::636                      :::*                        LISTEN
Previous output:
ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.56:10050             0.0.0.0:*                   LISTEN

The listening ports have not changed that much on these servers and the output of the previous command looks like it was truncated for some reason?  It always seems to be the same three servers that generate these alerts, sometimes multiple times per day.

That looks like an IPv6 interface going down. (And coming back up if you get the reverse.)

--
PJH

Kevin Kelly

Apr 5, 2013, 1:39:01 PM
to ossec...@googlegroups.com
I got five of these alerts yesterday, but the "Previous output" is always the same. Should it be changing each time? Is it stored in /opt/ossec/queue/diff/ldap1/533? If so, I can't seem to find a match for the previous output in any of the files stored there.

ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.56:10050             0.0.0.0:*                   LISTEN
tcp        0      0 ::1:8389                    :::*                        LISTEN
tcp        0      0 :::22                       :::*                        LISTEN
tcp        0      0 :::2600                     :::*                        LISTEN
tcp        0      0 :::389                      :::*                        LISTEN
tcp        0      0 :::5432                     :::*                        LISTEN
tcp        0      0 :::636                      :::*                        LISTEN
Previous output:
ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN

[root@ossec 533]# ls -l
total 40
drwxr-x---. 2 ossec ossec 4096 Apr 5 08:45 .
drwxr-----. 3 ossec ossec 4096 Apr 3 09:22 ..
-rw-r-----. 1 ossec ossec 1049 Apr 5 08:45 last-entry
-rw-r-----. 1 ossec ossec 1078 Apr 3 09:22 state.1365006174
-rw-r-----. 1 ossec ossec 1049 Apr 3 09:45 state.1365007531
-rw-r-----. 1 ossec ossec 1138 Apr 4 21:24 state.1365135886
-rw-r-----. 1 ossec ossec 1049 Apr 4 21:30 state.1365136246
-rw-r-----. 1 ossec ossec 1138 Apr 5 04:44 state.1365162273
-rw-r-----. 1 ossec ossec 1049 Apr 5 04:50 state.1365162633
-rw-r-----. 1 ossec ossec 1138 Apr 5 08:39 state.1365176375
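Since the email alerts are truncated, the full "previous output" can still be recovered from these state files directly. A sketch (assumes bash and GNU tools; the tr step strips the trailing NUL that makes diff treat the snapshots as binary):

```shell
# Compare the two newest snapshots in full, without email truncation.
cd /opt/ossec/queue/diff/ldap1/533
newest=$(ls -t state.* | head -1)
prev=$(ls -t state.* | head -2 | tail -1)
# Strip NUL bytes so diff treats the snapshots as text.
diff <(tr -d '\0' < "$prev") <(tr -d '\0' < "$newest")
```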

Kevin Kelly

Apr 5, 2013, 5:38:23 PM
to ossec...@googlegroups.com
From what I can tell, both the "output:" and the "Previous output:" sections are getting truncated in the email message, which is why the previous output never seems to change. The last-entry file is 34 lines and 3096 characters long, and state.1365197432 is 33 lines and 3007 characters long. I see the same truncation in alerts.log as well. I assume there is a hard-coded limit in the source code somewhere?

ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:32770               0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:8089                0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:886                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:9999                0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.218:443              0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.218:80               0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.234:10050            0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.234:443              0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.234:80               0.0.0.0:*                   LISTEN
tcp        0      0 10.1.1.24
Previous output:
ossec: output: 'netstat -tan |grep LISTEN |grep -v 127.0.0.1 | sort':
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:199                 0.0.0.0:*                   LISTEN
tcp        0      0 0.0.0.0:20031               0.0.0.0:*                   LISTEN

Kevin Kelly

Apr 5, 2013, 6:06:35 PM
to ossec...@googlegroups.com

Blake Johnson

Jun 25, 2013, 11:51:31 AM
to ossec...@googlegroups.com
Kevin,

Did you ever find a solution to this?

I'm running into the same problem on Windows machines, which seem to have a lot of listeners running at any given time. Any feedback on how you have addressed this problem in your environment is appreciated.

Blake Johnson
IT Security Analyst
Alliant Energy

Michael Starks

Jun 25, 2013, 12:59:18 PM
to ossec...@googlegroups.com
On 25.06.2013 10:51, Blake Johnson wrote:
> Kevin,
>
> Did you ever find a solution to this?
>
> I'm running into the same problem on Windows machines, who seem to
> have a lot of listeners running at any given time. Any feedback is
> appreciated on how you have addressed this problem in your
> environment.

Alerts do have a hard-coded limit for the email messages. That's the
crux of the problem. You have to modify the source to change this
behavior.
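I don't remember the exact spot offhand; grepping the source tree for the usual buffer-size constants is the quickest way to find it. A sketch (the constant names and path below are from memory and may differ in your version):

```shell
# Look for hard-coded string/buffer limits in the OSSEC source tree.
# OS_MAXSTR / OS_SIZE_* are educated guesses; verify against your checkout.
grep -rn 'OS_MAXSTR\|OS_SIZE_' ossec-hids-2.7/src/headers/
```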

FSoyer

Aug 12, 2013, 6:19:46 AM
to ossec...@googlegroups.com
Hi Michael,
have you found a solution for this?
I have the problem on some servers, where I've just found that it was caused by connections from passive-mode FTP clients, which open a random port on the server. The problem was that this port never appeared in the email, so I had to trace netstat with my own script to find it.
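One way to quiet this is to exclude the FTP server's configured passive port range from the monitored command itself. A sketch of the ossec.conf localfile entry, assuming a passive range of 50000-50999 (an example range only; adjust the regex to whatever your FTP server uses):

```xml
<localfile>
  <log_format>full_command</log_format>
  <!-- ignore loopback and the passive-FTP range 50000-50999 (example) -->
  <command>netstat -tan | grep LISTEN | egrep -v '127.0.0.1|:50[0-9]{3} ' | sort</command>
</localfile>
```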
Many thanks
Frank

dan (ddp)

Aug 12, 2013, 9:13:56 AM
to ossec...@googlegroups.com
On Mon, Aug 12, 2013 at 6:19 AM, FSoyer <frank...@gmail.com> wrote:
> Hi Michael,
> have you found a solution for this ?
> I've the problem on some servers on which I've just found that this was
> raised by connections from FTP passive clients, opening a random port on the
> server. The problem was that this port never appears in the email so I
> needed to trace with my own script netstat to find it.
> Many thanks
> Frank
>


> On Tuesday, June 25, 2013, 6:59:18 PM UTC+2, Michael Starks wrote:
>>
>>
>> Alerts do have a hard-coded limit for the email messages. That's the
>> crux of the problem. You have to modify the source to change this
>> behavior.
>