Logstash or Filebeat Configuration for Wazuh


Saddique Khan

Sep 4, 2023, 11:00:39 AM
to Wazuh | Mailing List
Hello Team,

     I have installed Wazuh on a Kubernetes server. I want to ingest my ASA logs into Wazuh. These logs are in Logstash, but I don't know what configuration is needed to pass them from Logstash to Wazuh. In addition, even if I ingest the logs, where would I see them in the Wazuh dashboard?

   I don't see the Elasticsearch configuration that forwards logs to the Wazuh dashboard. I am lost with these settings. If you could point me in the right direction, I would appreciate it.

Regards,

Saddique

tomas....@wazuh.com

Sep 4, 2023, 3:38:52 PM
to Wazuh | Mailing List

Hello Saddique,


To ingest your ASA logs from Logstash into Wazuh, you need to configure a Logstash output plugin to send the logs to the Wazuh manager. You can use the syslog Logstash output plugin for this purpose. The configuration should include the Wazuh manager's IP address and port.
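A minimal logstash.conf sketch of what this could look like (the host value is a placeholder for your Wazuh manager address, and port 514 assumes the syslog port you configure in the manager's remote settings):

```conf
output {
  syslog {
    host => "wazuh-manager-ip"   # placeholder: your Wazuh manager address
    port => 514                  # must match the syslog port in ossec.conf
    protocol => "tcp"
  }
}
```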


Once the logs are injected into Wazuh (check this documentation), you can view them in the Wazuh dashboard. The Wazuh dashboard is integrated with Kibana, so you can access it through the Kibana interface. In the dashboard, you will find various visualizations and logs related to your ASA logs.


Regarding the Elasticsearch configuration, Wazuh uses Elasticsearch as the backend for storing and indexing logs. The Elasticsearch configuration is managed by the Wazuh indexer cluster, which is responsible for forwarding logs to the Wazuh dashboard. You don't need to configure Elasticsearch separately for forwarding logs to the Wazuh dashboard.


If you need further assistance, please let us know.


Best regards,


Tomás Turina

Saddique Khan

Sep 5, 2023, 7:50:38 AM
to Wazuh | Mailing List
Hello tomas,

      I have proceeded with everything you suggested, but I can't see any agents connected to Wazuh for ASA to analyze the logs. Since every agent has its own logs, I am confused about where I would see these logs.

Greetings,

Saddique

Saddique Khan

Sep 6, 2023, 9:36:47 AM
to Wazuh | Mailing List
Hello Tomas,

                  I have configured logstash.conf within Logstash. I checked it against my Elasticsearch and it works perfectly fine, but it is not working with Wazuh. Could you please verify whether I should send the syslogs to the indexer port or the manager port?

output {
  elasticsearch {
    hosts => [ "http://manager-ip:1514" ]
    manage_template => false
    index => "logstash-syslog-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}


Regards,

Saddique  

Saddique Khan

Sep 6, 2023, 9:50:03 AM
to Wazuh | Mailing List
Hello Tomas,

            Here is the port 9200 indexer error in Logstash.

          [2023-09-06T13:40:42,368][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://Service-Ip:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://Service-Ip:9200][Manticore::ClientProtocolException] service-Ip:9200 failed to respond"}

      This is the indexer pod log:

[2023-09-06T12:59:16,190][ERROR][o.o.s.a.BackendRegistry ] [wazuh-indexer-0] Cannot retrieve roles for User [name=kibanaserver, backend_roles=[], requestedTenant=null] from ldap due to OpenSearchSecurityException[OpenSearchSecurityException[No user kibanaserver found]]; nested: OpenSearchSecurityException[No user kibanaserver found];

org.opensearch.OpenSearchSecurityException: OpenSearchSecurityException[No user kibanaserver found]

Caused by: org.opensearch.OpenSearchSecurityException: No user kibanaserver found

Greetings,

Saddique 

tomas....@wazuh.com

Sep 6, 2023, 12:30:00 PM
to Wazuh | Mailing List
Hi Saddique,

I think we are not understanding each other.

You have two options for ingesting your Logstash logs, depending on what you want to do:
  • If you want to send these logs to the Wazuh manager for analysis by the rules engine and to generate alerts, you can do what I recommended above: configure Logstash with a syslog output and send it to the Wazuh manager, which you'll need to configure to listen for these logs. A second option is to configure Logstash to write to a file and use the logcollector module (available on both the Wazuh agent and the manager) to read these logs and send them to the rules engine.
  • If you only want to view these logs in your Wazuh dashboard instance, you'll need to configure Logstash to ingest them directly into the Wazuh indexer. To clarify: this option lets you see the logs in the Wazuh dashboard (in the Discover section, and you can also build as many visualizations as you want on top of them), but they will not be analyzed by the Wazuh manager, so no alerts will be generated. To do this, you can follow this documentation.
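For this second option, a sketch of what the Logstash output could look like, assuming the logstash-output-opensearch plugin is installed (the host, index name, and credentials are placeholders; adjust the TLS settings to your deployment):

```conf
output {
  opensearch {
    hosts => ["https://wazuh-indexer-ip:9200"]   # placeholder: your Wazuh indexer address
    index => "asa-logs-%{+YYYY.MM.dd}"           # example index name
    user => "admin"                              # placeholder credentials
    password => "changeme"
    ssl => true
    ssl_certificate_verification => false        # or point cacert at your indexer CA
  }
}
```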
Best regards,

Tomás Turina

Saddique Khan

Sep 11, 2023, 10:09:57 AM
to Wazuh | Mailing List
Hello Tomas,

             Yes, I got your point. I am trying to configure the system following your first option.

             I followed the configuration you suggested in the message above. Let me explain what I did.
            
            1. I configured this in the ossec.conf file on the manager pod and restarted it.

             <remote>
               <connection>syslog</connection>
               <port>514</port>
               <protocol>tcp</protocol>
               <allowed-ips>0.0.0.0/0</allowed-ips>
             </remote>
 
       2. I configured this in the ossec.conf file on the worker pod and restarted it.

             <remote>
               <connection>syslog</connection>
               <port>514</port>
               <protocol>tcp</protocol>
               <allowed-ips>0.0.0.0/0</allowed-ips>
               <local_ip>manager-listening-IP</local_ip>
             </remote>

So now when I telnet to my manager IP ("telnet my-manager-IP 514"), it connects and does not disconnect. Also, I can't end the telnet session; it remains open.

  3. These are the logstash.conf settings in my Docker container:

output {
  elasticsearch {
    hosts => [ "http://manager-ip:514" ]
    manage_template => false
    index => "khan-logstash-syslog-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

When I start the container, it doesn't connect from the container to the manager. Below is the error log from my pod when connecting to the Wazuh manager on port 514.

[2023-09-11T14:01:10,774][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://Manager-Ip:514/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://Manager-IP:514/][Manticore::SocketTimeout] Read timed out"}.

If you could help me with this, I will appreciate it.

Greetings,

Saddique

tomas....@wazuh.com

Sep 12, 2023, 9:37:12 AM
to Wazuh | Mailing List
Hi Saddique,

What seems to be wrong is the configuration you are using in logstash.conf. It should be something like this:

output {
  syslog {
    host => "manager-ip"
    port => 514
  }
}

You could also try the second option and configure a file output to be monitored with logcollector. Example:

output {
  file {
    path => "/tmp/test"
  }
}

With this configuration in the ossec.conf file:

<localfile>
  <location>/tmp/test</location>
  <log_format>json</log_format>
</localfile>


Let me know how this goes.

Tomás Turina

Saddique Khan

Sep 15, 2023, 10:47:48 AM
to Wazuh | Mailing List
Hello Tomas,

         We are running Logstash from the pod, and it is throwing a syslog plugin error.

Regards,

Saddique

tomas....@wazuh.com

Sep 20, 2023, 4:27:39 PM
to Wazuh | Mailing List
Hi Saddique,

As you mentioned to me in private, it seems that you already have the syslog output working.

Now, you should verify that the manager is receiving these logs. To do this, you can enable logall / logall_json so that every message the manager receives is stored in the archives.log / archives.json files. This is the configuration you need:

<global>
  <logall>yes</logall> <!-- archives.log -->
  <logall_json>yes</logall_json> <!-- archives.json -->
</global>


If you can see these logs there, you'll need to create decoders and rules so they match, allowing them to be visualized in your dashboard alongside the other alerts. This documentation will help you. You can also test these decoders/rules with our logtest tool, which receives a log as input and shows you whether it matches any decoder or rule.
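As a rough illustration only (the decoder name, prematch string, file paths, and rule ID here are placeholders to adapt to your actual ASA log format), a custom decoder and rule could look like this:

```xml
<!-- e.g. /var/ossec/etc/decoders/local_decoder.xml -->
<decoder name="asa-example">
  <prematch>%ASA-</prematch>
</decoder>

<!-- e.g. /var/ossec/etc/rules/local_rules.xml -->
<group name="asa,syslog,">
  <rule id="100100" level="5">
    <decoded_as>asa-example</decoded_as>
    <description>Cisco ASA message received.</description>
  </rule>
</group>
```

You can then feed a sample log line to the wazuh-logtest tool on the manager to check which decoder and rule it matches.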

Let us know if this information helps you.

Tomás Turina

Saddique Khan

Sep 26, 2023, 8:48:39 AM
to Wazuh | Mailing List
Hello Tomas,

           Thanks for the reply. Yes, I am able to receive the logs on port 514 now. I am going to use this port for ASA and send some other network flows to other ports. I will use the default ASA rules for the time being, and later I will create rules and decoders for the new logs. I appreciate your help.

Regards,
Saddique