Distributed Logstash deployment


Robert H

Oct 29, 2018, 1:22:46 AM
to Wazuh mailing list
Hi all,
I'm trying to test something in my lab, and my first attempt failed yesterday.  I have a 3-node Elasticsearch cluster.  In the past I ran Logstash on 2 of the 3 nodes, receiving SSL-encrypted alerts via Filebeat from 2 Wazuh managers (in a cluster), and it worked fine.  Right now I have only one Logstash instance, on one of the Elasticsearch nodes, with both Wazuh managers sending Filebeat data to it.  I'd like to test running Logstash on a dedicated node (not on the same host as Elasticsearch).  Yesterday I installed a new node with only Logstash on it, and followed the same configuration the documentation shows, which worked before.  This time, however, even though alerts flow from the Wazuh managers to the Logstash node, and the Logstash node opens a connection on port 9200 to one of the Elasticsearch nodes, I did not see any new data in Kibana/the Wazuh app.  I also disabled SSL/TLS, and that still did not result in data in Kibana.

In this lab setup I'm not using X-Pack security.  I wonder, if Logstash is on its own host, whether a Logstash user configuration is needed?  Could you describe how this would work?

Best regards,
Robert

Juanjo Jiménez

Oct 29, 2018, 6:58:18 AM
to rhe...@proficio.com, wa...@googlegroups.com

Hello Robert,

Let me help you with this. Our current documentation covers distributed architectures where Logstash is installed on the same machine as Elasticsearch, so we should consider adding documentation for properly configuring separate Logstash instances.

Ok, now let’s see if we can fix your problem.

After installing Logstash, I assume that you configured it using the distributed configuration file, as shown in this step (Logstash.2.b). Keep in mind that you need to specify the Elasticsearch IP address at the bottom of the file:

output {
    elasticsearch {
        hosts => ["<PUT_HERE_ELASTICSEARCH_IP>:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}
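As a side note, the index option expands %{+YYYY.MM.dd} from the event's @timestamp (in UTC), so alerts are written to one index per day. A quick sketch of the resulting name for a given date, assuming GNU date is available:

```shell
# Print the index name Logstash would use for a given day (GNU date assumed).
# %{+YYYY.MM.dd} in the Logstash config corresponds to %Y.%m.%d here.
date -u -d 2018-10-29 +wazuh-alerts-3.x-%Y.%m.%d
# → wazuh-alerts-3.x-2018.10.29
```

This is why a working pipeline shows up in Kibana as daily wazuh-alerts-3.x-* indices.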

After saving the file and restarting the Logstash service, you may see this kind of log message in /var/log/logstash/logstash-plain.log:

Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://192.168.56.104:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://192.168.56.104:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

I discovered that we need to edit the Elasticsearch configuration file and modify the network.host setting. In my test environment, this setting appears commented out, like this:

#network.host: 192.168.0.1

And I changed it to this:

network.host: 0.0.0.0

(Notice that I removed the # at the beginning of the line.) Binding to 0.0.0.0 makes Elasticsearch listen on all network interfaces.
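If you'd rather not expose every interface, network.host also accepts a list of addresses, so you can bind to loopback plus one routable interface instead (a sketch; the IP here is an example from this thread's lab network):

```yaml
# /etc/elasticsearch/elasticsearch.yml
# Listen on loopback (for local tools) plus one routable interface:
network.host: ["localhost", "192.168.1.252"]
```

Either form works for a remote Logstash, as long as the bound address is reachable from the Logstash host on port 9200.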

After that, I restarted the Elasticsearch service with systemctl restart elasticsearch, and then I started to see the alerts being indexed in Elasticsearch. Please try these steps, and let's see if everything works properly now.

Let me know if you need more help with this; I'll be glad to assist you.

Regards,
Juanjo


--
You received this message because you are subscribed to the Google Groups "Wazuh mailing list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wazuh+un...@googlegroups.com.
To post to this group, send email to wa...@googlegroups.com.
Visit this group at https://groups.google.com/group/wazuh.
To view this discussion on the web visit https://groups.google.com/d/msgid/wazuh/eeb1bbaf-cc7a-4dbb-a042-e0f5e58c47e1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Robert H

Oct 29, 2018, 11:33:45 PM
to Wazuh mailing list
Hi Juanjo,
Here's what I've done.
## Note: I have previously reloaded the template, modifying the number of shards using the method described by Jesus.

Tonight, I repeated these steps.

[root@node5 ~]# systemctl stop logstash
[root@node5 ~]# vi /etc/logstash/conf.d/01-wazuh.conf 
[root@node5 ~]# cat /etc/logstash/conf.d/01-wazuh.conf 

output {
    elasticsearch {
        hosts => ["192.168.1.252:9200"]
        index => "wazuh-alerts-3.x-%{+YYYY.MM.dd}"
        document_type => "wazuh"
    }
}

[root@node5 ~]# systemctl start logstash
[root@node5 ~]# tail -f /var/log/logstash/logstash-plain.log 
[2018-10-27T20:43:58,216][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-27T20:43:58,602][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5000"}
[2018-10-27T20:43:58,685][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x3e27cafb run>"}
[2018-10-27T20:43:58,724][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-27T20:43:58,820][INFO ][org.logstash.beats.Server] Starting server on port: 5000
[2018-10-27T20:43:59,127][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-10-27T20:53:42,321][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-10-27T20:53:47,577][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>22, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-beats-5.1.6-java/lib/logstash/inputs/beats.rb:212:in `run'"}, {"thread_id"=>20, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:316:in `read_batch'"}]}}
[2018-10-27T20:53:47,579][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-10-27T20:53:49,868][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x3e27cafb run>"}


[root@node3 ~]# cat /etc/elasticsearch/elasticsearch.yml
network.host: ["localhost", "192.168.1.252"]


Regards,
Robert

Juanjo Jiménez

Oct 30, 2018, 6:04:29 AM
to rhe...@proficio.com, wa...@googlegroups.com

Hello again Robert,

Thanks a lot for sharing your configuration files. Your Logstash and Elasticsearch configurations seem correct, and they should work properly.

Also, I can see some error messages in your Logstash log file from when you tried to restart the service. Are you still having problems starting the service? Try the systemctl restart logstash command, and after that, the systemctl status logstash command. Keep an eye on the log file to see if the same problem appears.

Let me know if you need more help with this; we'll be glad to assist you.

Regards,
Juanjo



Robert H

Oct 31, 2018, 12:31:18 AM
to Wazuh mailing list
Hi Juanjo,
Thanks for your ideas and suggestions.  Doh!!  It turned out that I forgot to add the source (network range) to my home zone in firewalld.  I had added the port, 5000, but I had not added the network.  After adding the network it worked, and it is now working again with SSL encryption.  After SSL was enabled, I had to restart Filebeat and the Wazuh manager to verify that alerts were still flowing to Kibana.
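For anyone hitting the same wall: the fix amounts to the firewalld zone accepting both the destination port and the source network, typically via firewall-cmd --permanent --zone=home --add-source=... (plus --add-port=5000/tcp) followed by firewall-cmd --reload. The resulting zone file looks roughly like this (a sketch; the 192.168.1.0/24 range is an example, not necessarily this lab's network):

```xml
<!-- /etc/firewalld/zones/home.xml (sketch; example source range) -->
<zone>
  <short>home</short>
  <source address="192.168.1.0/24"/>
  <port protocol="tcp" port="5000"/>
</zone>
```

With only the port rule and no matching source, traffic from the Wazuh managers never falls into the home zone, so port 5000 stays closed to them.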

It's all working now, with one exception: monitoring in Elastic X-Pack.  I'm using the free Basic license.  For the Logstash that was on the same host as Elasticsearch, I just uncommented these lines:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.sniffing: false

Since this new Logstash is not on the same host (it is by itself on a separate VM), I tried to also add the lines below.  Do I need to add (uncomment) the username and password?  The documentation says to use a username/password if X-Pack security is enabled, but it is not enabled here.  I can ping the node1 and node2 names.
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
xpack.monitoring.elasticsearch.url: ["http://node1:9200", "http://node2:9200"]
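(For reference: with X-Pack security disabled, the monitoring credentials can stay commented out; Logstash only needs plain HTTP access to the cluster. A minimal sketch of the monitoring section under that assumption, using the 6.x setting names quoted above:)

```yaml
# /etc/logstash/logstash.yml (sketch; assumes X-Pack security is disabled)
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.sniffing: false
xpack.monitoring.elasticsearch.url: ["http://node1:9200", "http://node2:9200"]
# username/password are only needed when X-Pack security is enabled
```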

Do you have any ideas?

Thanks,
Robert
