OSSEC & Logstash


Joshua Garnett

Mar 8, 2014, 5:02:35 PM
All,

I'll probably write a blog post on this, but I wanted to share some work I've done today.  http://vichargrave.com/ossec-log-management-with-elasticsearch/ shows how to use OSSEC's syslog output to route messages to Elasticsearch.  The problem with this method is that it uses UDP.  Even when sending packets to a local process, UDP is by definition unreliable.  Garbage collections and other system events can cause packets to be lost.  I've found this approach tends to cap out at around 1,500 messages per minute.
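
For context, the syslog route described in that post is driven by an ossec.conf block along these lines (the server and port here are illustrative, not taken from the article):

<ossec_config>
  <syslog_output>
    <server>127.0.0.1</server>   <!-- host running the UDP syslog listener -->
    <port>9000</port>            <!-- UDP port that listener is bound to -->
  </syslog_output>
</ossec_config>

<!-- then enable the forwarder and restart: -->
<!-- /var/ossec/bin/ossec-control enable client-syslog && /var/ossec/bin/ossec-control restart -->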

To address this issue, I've put together a Logstash config that reads the alerts from /var/ossec/logs/alerts/alerts.log.  On top of solving the reliability issue, it also fixes the problem of multi-line alerts being lost, and adds GeoIP lookups for src_ip.  I tested it against approximately 1 GB of alerts (about 3 million events).

input {
  file {
    type => "ossec"
    path => "/var/ossec/logs/alerts/alerts.log"
    sincedb_path => "/opt/logstash/"
    codec => multiline {
      pattern => "^\*\*"
      negate => true
      what => "previous"
    }
  }
}

filter {
  if [type] == "ossec" {
    # Parse the header of the alert
    grok {
      # Matches  2014 Mar 08 00:57:49 (some.server.com) 10.1.2.3->ossec
      # (?m) fixes issues with multi-lines see https://logstash.jira.com/browse/LOGSTASH-509
      match => ["message", "(?m)\*\* Alert %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} \(%{DATA:reporting_host}\) %{IP:reporting_ip}\-\>%{DATA:reporting_source}\nRule: %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
      
      # Matches  2014 Mar 08 00:00:00 ossec-server01->/var/log/auth.log
      match => ["message", "(?m)\*\* Alert %{DATA:timestamp_seconds}:%{SPACE}%{WORD}?%{SPACE}\- %{DATA:ossec_group}\n%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:reporting_host}\-\>%{DATA:reporting_source}\nRule: %{NONNEGINT:rule_number} \(level %{NONNEGINT:severity}\) \-\> '%{DATA:signature}'\n%{GREEDYDATA:remaining_message}"]
    }

    # Attempt to parse additional data from the alert
    grok {
      match => ["remaining_message", "(?m)(Src IP: %{IP:src_ip}%{SPACE})?(Src Port: %{NONNEGINT:src_port}%{SPACE})?(Dst IP: %{IP:dst_ip}%{SPACE})?(Dst Port: %{NONNEGINT:dst_port}%{SPACE})?(User: %{USER:acct}%{SPACE})?%{GREEDYDATA:real_message}"]
    }

    geoip {
      source => "src_ip"
    }

    mutate {
      convert      => [ "severity", "integer"]
      replace      => [ "@message", "%{real_message}" ]
      replace      => [ "@fields.hostname", "%{reporting_host}"]
      add_field    => [ "@fields.product", "ossec"]
      add_field    => [ "raw_message", "%{message}"]
      add_field    => [ "ossec_server", "%{host}"]
      remove_field => [ "type", "syslog_program", "syslog_timestamp", "reporting_host", "message", "timestamp_seconds", "real_message", "remaining_message", "path", "host", "tags"]
    }
  }
}

output {
   elasticsearch {
     host => "10.0.0.1"
     cluster => "mycluster"
   }
}

Here are a few examples of the output this generates.

{
   "@timestamp":"2014-03-08T20:34:08.847Z",
   "@version":"1",
   "ossec_group":"syslog,sshd,invalid_login,authentication_failed,",
   "reporting_ip":"10.1.2.3",
   "reporting_source":"/var/log/auth.log",
   "rule_number":"5710",
   "severity":5,
   "signature":"Attempt to login using a non-existent user",
   "src_ip":"112.65.211.164",
   "geoip":{
      "ip":"112.65.211.164",
      "country_code2":"CN",
      "country_code3":"CHN",
      "country_name":"China",
      "continent_code":"AS",
      "region_name":"23",
      "city_name":"Shanghai",
      "latitude":31.045600000000007,
      "longitude":121.3997,
      "timezone":"Asia/Shanghai",
      "real_region_name":"Shanghai",
      "location":[
         121.3997,
         31.045600000000007
      ]
   },
   "@message":"Mar  8 01:00:59 someserver sshd[22874]: Invalid user oracle from 112.65.211.164\n",
   "@fields.hostname":"someserver.somedomain.com",
   "@fields.product":"ossec",
   "raw_message":"** Alert 1394240459.2305861: - syslog,sshd,invalid_login,authentication_failed,\n2014 Mar 08 01:00:59 (someserver.somedomain.com) 10.1.2.3->/var/log/auth.log\nRule: 5710 (level 5) -> 'Attempt to login using a non-existent user'\nSrc IP: 112.65.211.164\nMar  8 01:00:59 someserver sshd[22874]: Invalid user oracle from 112.65.211.164\n",
   "ossec_server":"ossec-server.somedomain.com"
}

and 

{
   "@timestamp":"2014-03-08T21:15:23.278Z",
   "@version":"1",
   "ossec_group":"syslog,sudo",
   "reporting_source":"/var/log/auth.log",
   "rule_number":"5402",
   "severity":3,
   "signature":"Successful sudo to ROOT executed",
   "acct":"nagios",
   "@message":"Mar  8 00:00:03 ossec-server sudo:   nagios : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/lib/some/command",
   "@fields.hostname":"ossec-server",
   "@fields.product":"ossec",
   "raw_message":"** Alert 1394236804.1451: - syslog,sudo\n2014 Mar 08 00:00:04 ossec-server->/var/log/auth.log\nRule: 5402 (level 3) -> 'Successful sudo to ROOT executed'\nUser: nagios\nMar 8 00:00:03 ossec-server sudo: nagios : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/lib/some/command",
   "ossec_server":"ossec-server.somedomain.com"
}

If you combine the above with a custom Elasticsearch template, you can put together some really nice Kibana dashboards.


--Josh


Nick Turley

Mar 9, 2014, 1:50:43 AM
This is awesome. Thanks for posting. I recently updated our OSSEC environment to utilize ElasticSearch/Logstash/Kibana. Everything has been working great, but the one annoyance has been multi-line messages being lost. I've considered switching over to monitoring alerts.log directly, but haven't had time. I'll have to try out your config. :)

Nick

Jeremy Rossi

Mar 9, 2014, 7:47:39 AM
This is great.  We have started to add JSON and ZeroMQ output in git to make things like this even simpler.  I don't think the JSON format is perfect for Logstash, but it might be worth checking out.  Also, please let us know if there are ways to make this even better.

Zeromq output:

Json format:


Sent from my iPhone
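
If the JSON output lands as a global ossec.conf toggle, consuming it from Logstash would look roughly like this; the jsonout_output option name and the alerts.json path are assumptions here, not something confirmed in this thread:

input {
  # assumption: <jsonout_output>yes</jsonout_output> in the <global> section of ossec.conf
  # writes one JSON alert per line to /var/ossec/logs/alerts/alerts.json
  file {
    type => "ossec-json"
    path => "/var/ossec/logs/alerts/alerts.json"
    codec => "json"
  }
}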

Michael Starks

Mar 9, 2014, 9:50:48 AM
On 03/09/2014 12:50 AM, Nick Turley wrote:
> This is awesome. Thanks for posting. I recently updated our OSSEC
> environment to utilize ElasticSearch/Logstash/Kibana. Everything has
> been working great, but the one annoyance has been multi-line messages
> being lost. I've considered switching over to monitoring alerts.log
> directly, but haven't had time. I'll have to try out your config. :)
>
> Nick

Joshua's work is very nice. Also, don't forget that alerts.log can be
set to write in a non-multiline way:
http://ossec-docs.readthedocs.org/en/latest/syntax/head_ossec_config.global.html

Chris H

Mar 19, 2014, 10:54:30 AM
Hi, Joshua. 

I'm using a very similar technique.  Are you applying a mapping template, or using the default?  I'm using the default automatic templates, because frankly I don't fully understand templates.  What this means, though, is that my daily indexes are larger than the uncompressed alerts.log, between 2 and 4 GB per day, and I'm rapidly running out of disk space.  I gather that this can be optimised by enabling compression and excluding the _source and _all fields through the mapping template, but I'm not sure exactly how this works.  Just wondered if you've come across the same problem.

Thanks.

Joshua Garnett

Mar 19, 2014, 3:37:41 PM
Chris,

Yeah, digging into the templates was another big win for me.  For instance, if you try to do a top-N query on signature with the default template, you end up with words like "the" and "and" as your top hits.  Setting signature to not_analyzed ensures the field isn't tokenized.  Below is my template.

--Josh

Logstash settings:

output {
   elasticsearch {
     host => "10.0.0.1"
     cluster => "mycluster"
     index => "logstash-ossec-%{+YYYY.MM.dd}"
     index_type => "ossec"
     template_name => "template-ossec"
     template => "/etc/logstash/elasticsearch_template.json"
     template_overwrite => true
   }
}

elasticsearch_template.json

{
  "template":"logstash-ossec-*",
  "settings":{
    "index.analysis.analyzer.default.stopwords":"_none_",
    "index.refresh_interval":"5s",
    "index.analysis.analyzer.default.type":"standard"
  },
  "mappings":{
    "ossec":{
      "properties":{
        "@fields.hostname":{
          "type":"string",
          "index":"not_analyzed"
        },
        "@fields.product":{
          "type":"string",
          "index":"not_analyzed"
        },
        "@message":{
          "type":"string",
          "index":"not_analyzed"
        },
        "@timestamp":{
          "type":"date"
        },
        "@version":{
          "type":"string",
          "index":"not_analyzed"
        },
        "acct":{
          "type":"string",
          "index":"not_analyzed"
        },
        "ossec_group":{
          "type":"string",
          "index":"not_analyzed"
        },
        "ossec_server":{
          "type":"string",
          "index":"not_analyzed"
        },
        "raw_message":{
          "type":"string",
          "index":"analyzed"
        },
        "reporting_ip":{
          "type":"string",
          "index":"not_analyzed"
        },
        "reporting_source":{
          "type":"string",
          "index":"analyzed"
        },
        "rule_number":{
          "type":"integer"
        },
        "severity":{
          "type":"integer"
        },
        "signature":{
          "type":"string",
          "index":"not_analyzed"
        },
        "src_ip":{
          "type":"string",
          "index":"not_analyzed"
        },
        "geoip":{
          "type" : "object",
          "dynamic": true,
          "path": "full",
          "properties" : {
            "location" : { "type" : "geo_point" }
          }
        }
      },
      "_all":{
        "enabled":true
      }
    }
  }
}


Chris H

Mar 20, 2014, 6:43:48 AM
Thanks, I'll have a look.  For me the default template created each field as a multi-field, with the regular, analysed field and an additional "raw" un-analysed field.  I'm extracting quite a lot of fields from the different log types, which is something I was doing in Splunk before trying elasticsearch.

        "Alert_Level" : {
          "type" : "multi_field",
          "fields" : {
            "Alert_Level" : {
              "type" : "string",
              "omit_norms" : true
            },
            "raw" : {

              "type" : "string",
              "index" : "not_analyzed",
              "omit_norms" : true,
              "index_options" : "docs",
              "include_in_all" : false,
              "ignore_above" : 256
            }
          }
        },


I created a new default template in elasticsearch:

curl -XPUT 'http://localhost:9200/_template/template_logstash/' -d '{
  "template": "logstash-*",
  "settings": {
    "index.store.compress.stored": true
  },
  "mappings": {
    "_default_": {
      "_source": { "compress": "true" },
      "_all" : {
        "enabled" : false
      }
    }
  }
}'


This has been applied, but the compression doesn't seem to do much.  I'm at the point where I might only be able to store a limited amount of data in Elasticsearch :(

Chris
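
If retention ends up being the limit, one stopgap (a sketch only, assuming the default logstash-YYYY.MM.dd daily index naming) is to check index sizes and delete the oldest days:

# list indices with their sizes
curl 'http://localhost:9200/_cat/indices?v'
# drop a specific day's index once it's no longer needed
curl -XDELETE 'http://localhost:9200/logstash-2014.03.01'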

Vic Hargrave

Mar 20, 2014, 1:02:52 PM
Since writing my blog on using Elasticsearch for OSSEC log management, I've upgraded to Elasticsearch 1.0.1, which does not seem to be able to get log data from Logstash 1.3.2 or 1.3.3.  The solution is to use "elasticsearch_http" in the "output" section of the Logstash configuration file.  When you do that, all is well.
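
A minimal sketch of that change (the host and index values here are illustrative, following Joshua's earlier config):

output {
  elasticsearch_http {
    host => "10.0.0.1"
    port => 9200
    index => "logstash-ossec-%{+YYYY.MM.dd}"
  }
}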

For more information on better log ingestion rates, check out Brad Lhotsky's article - http://edgeofsanity.net/article/2012/12/26/elasticsearch-for-logging.html.

sercan acar

Apr 7, 2014, 1:31:45 PM
Thank you, Joshua Garnett. I've switched from sending syslog to localhost over to reading the log file directly.

A few questions:
  • Is there a way to filter on "Alert Level X or above"? (This is more of a generic Kibana question.)
  • Which field did you use for the Bettermap panel? I've added the panel with geoip.latitude, but the panel fails to load without any errors.
  • Is there a reason why you chose to remove fields? For me, syslog_timestamp is much cleaner than @timestamp.
Cheers

Joshua Garnett

Apr 8, 2014, 12:24:36 AM
Hi Sercan,
  • Kibana/Elasticsearch uses Lucene syntax by default.  To filter on alert level 5 or above, use:  severity:[5 TO *]
  • geoip.location is the correct field for Bettermap.
  • @timestamp is the standard field used for the event date/time.  I didn't see the need to have the extra field.  It'd be easy to add back in if you prefer it.
--Josh


sercan acar

Apr 10, 2014, 4:33:49 PM
Thank you, Josh. Not sure why I thought filtering would be more complicated; the Lucene syntax is simple enough, and it is very easy to add the timestamp field back in.

I'm having difficulties with the Bettermap. The panel loads with values in different colour codes and the number of alerts (so far so good), however the background is blank and the loading bar is stuck in a loop. What have I done wrong?

Sercan

Joshua Garnett

Apr 11, 2014, 1:59:54 PM
Sercan,

The Bettermap map provider has been cranky lately. I've seen issues over the past few days with loading the actual map.

--Josh

Denis

May 6, 2014, 7:01:09 AM
I was trying to configure everything the way Joshua described, and I see all the data coming in via stdin{}, but when I switch to the Elasticsearch index, the index is empty.
How do I debug why the data is not making it into the index? Are there any more debug options available?

thank you

Denis

May 7, 2014, 4:40:23 AM
OK, the problem is that OSSEC rotates its logs every midnight and the rotated file's permissions are 740, so I'll have to deal with that.
Cheers

sercan acar

May 9, 2014, 5:26:11 AM
Hi,

Is there a way to control which alert levels get stored in Elasticsearch? I know you can do this through rsyslog, but is it possible through logstash.conf?

We have 200+ clients and they are generating around 2 GB of data a day!

Regards,

Joshua Garnett

May 12, 2014, 9:28:27 AM
Sercan,

There are a few ways you can handle this.  2 GB a day seems a little on the high side for 200+ clients, so you may want to look at creating rules that mark noisy, non-security-related messages as severity 0, which essentially /dev/nulls them.  The other option is to use the log_alert_level setting under <alerts>, which lets you configure which severity levels are written to the file.
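
A minimal ossec.conf sketch of that setting (the threshold here is illustrative):

<ossec_config>
  <alerts>
    <!-- only alerts at level 5 or higher are written to alerts.log -->
    <log_alert_level>5</log_alert_level>
  </alerts>
</ossec_config>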

All of that said, be very careful about throwing away even low severity log messages.  You never know what will be useful after a security incident.

--Josh
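
For the logstash.conf side of the question, one option not covered above is to drop low-severity events in the filter stage. This is a sketch only; it assumes severity has already been converted to an integer by the mutate in the earlier config:

filter {
  # runs after the ossec filter above, so severity is already an integer
  if [severity] and [severity] < 5 {
    drop { }    # discard anything below level 5 before it reaches the output
  }
}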




Villiers Tientcheu Ngandjeuu

Aug 12, 2014, 9:18:29 AM

Hi Joshua,
Thank you for your post. I'm also looking at OSSEC and Logstash for a business.
I used your configuration in my test environment, with some changes to the cluster name and IP address.
This is my test environment: I have two virtual hosts on the same network that can ping each other, one (let's call it A) running Logstash and the other (let's call it B) running Elasticsearch.
On host A, I copied some OSSEC logs and aggregated them into a single file, to get the equivalent of your alerts.log.
The other parameters in the configuration file remain the same as what you described.
But this is the issue I have: Logstash doesn't create any index in the Elasticsearch cluster, and I don't know why. Have you run into this issue?
The Elasticsearch instance does detect the Logstash instance, and when I configure Logstash to send its output to stdout, I get results like yours.
So why can't Logstash send the parsed results to Elasticsearch?
I'm using logstash-1.4.0 and Elasticsearch-1.3.0.
Thank you for any help!

Joshua Garnett

Aug 13, 2014, 10:12:07 AM
We just did the upgrade to logstash 1.4 and Elasticsearch 1.2 a few weeks ago.  Everything appears to still be working.  

My updated output config:

output {
  elasticsearch {
    node_name => "ossec-server"
    host => "10.0.0.1"
    cluster => "mycluster"
    protocol => "transport"
    index => "logstash-ossec-%{+YYYY.MM.dd}"
    index_type => "ossec"
    template_name => "template-ossec"
    template => "/etc/logstash/elasticsearch_template.json"
    template_overwrite => true
  }
}

You should make sure that host has been changed to the IP of your Elasticsearch instance.  Also, cluster should match the name you've specified in the Elasticsearch config.

Example /etc/elasticsearch/elasticsearch.yml:

---
cluster:
  name: mycluster
  routing:
    allocation:
      concurrent_streams: 6
      node_concurrent_recoveries: 6

... (more config) ...


--Josh 


Villiers Tientcheu Ngandjeuu

Aug 14, 2014, 11:08:32 AM
Hi Josh,
Everything is OK now! In fact, I had to remove the if [type] == "ossec" condition from Logstash's config file. However, I have a question: is there any problem with using that condition in Logstash's output, as below?
output {
  if [type] == "ossec" {
    elasticsearch {
      host => "127.0.0.1"
      cluster => "ossec"
      index => "logstash-ossec-%{+YYYY.MM.dd}"
      index_type => "ossec"
      template_name => "template-ossec"
      template => "/usr/local/share/logstash/elasticsearch_template.json"
      template_overwrite => true
    }
  }
}


Glenn Ford

Dec 30, 2014, 8:56:54 AM
That was my bad on the setup of the output parameters, please ignore. Not up and running yet, but closer.

On Monday, December 29, 2014 3:13:17 PM UTC-5, Glenn Ford wrote:
Hi Joshua,

When I do this I get this error:

./logstash agent -f ./logstash.conf
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones {:level=>:warn}
log4j, [2014-12-29T15:10:20.039]  WARN: org.elasticsearch.discovery: [logstash-xxx-xxxxxxx.xxx-5946-4022] waited for 30s and no initial state was set by the discovery

Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
        at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
        at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(java/lang/Thread.java:745)

Any ideas what's wrong here?

Glenn Ford

Dec 30, 2014, 9:27:56 AM
How did you securely configure things to get around the fact that OSSEC's permissions don't allow access to that file?

I believe the reason this isn't working for me is that the file is not accessible (Logstash shows no errors while running, which is aggravating).

I temporarily modified the logstash user to allow login and tried this:

]# su - logstash
-bash-4.1$ pwd
/opt/logstash
-bash-4.1$ stat /var/ossec/logs/alerts/alerts.log
stat: cannot stat `/var/ossec/logs/alerts/alerts.log': Permission denied





dan (ddp)

Dec 31, 2014, 6:51:25 AM
On Mon, Dec 29, 2014 at 3:13 PM, Glenn Ford <gmfp...@gmail.com> wrote:
> Hi Joshua,
>
> When I do this I get this error:
>
> ./logstash agent -f ./logstash.conf
> Using milestone 2 input plugin 'file'. This plugin should be stable, but if
> you see strange behavior, please let us know! For more information on plugin
> milestones, see http://logstash.net/docs/1.4.2-modified/plugin-milestones
> {:level=>:warn}
> log4j, [2014-12-29T15:10:20.039] WARN: org.elasticsearch.discovery:
> [logstash-xxx-xxxxxxx.xxx-5946-4022] waited for 30s and no initial state was
> set by the discovery
>
> Exception in thread ">output"
> org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
> at
> org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
> at
> org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(java/lang/Thread.java:745)
>
> Any ideas whats wrong here?
>

Something's wrong in your output section? Elasticsearch isn't running?


Slobodan Aleksić

Jan 22, 2015, 7:52:52 AM
I managed it by putting the logstash user in the ossec group. Not nice, but it works.
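
A minimal sketch of that approach, assuming the group is called "ossec" and the Logstash service user is "logstash" (names and paths may differ on your install):

# add the logstash service user to the ossec group
usermod -a -G ossec logstash
# restart logstash so the new group membership is picked up, then verify access:
sudo -u logstash stat /var/ossec/logs/alerts/alerts.log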

jp1...@gmail.com

Feb 18, 2015, 3:12:45 PM
So, this works OK for me on alerts.log. Does anyone have a Logstash conf that works on archives.log, if you have OSSEC saving all logs to that file?

mangaso...@gmail.com

Sep 22, 2016, 1:13:27 PM
Hi JP1, did you find a pattern for the archives.log file?


Patrick Rogne

Dec 20, 2018, 9:19:21 AM
Thank you for your work on this awesome conf file.  I have been working with it lately, but I noticed today that the new version of Logstash (6.6) looks like it will not support the multiline codec anymore? I hope I am wrong; can you confirm this?