How to install the Wazuh app in a multi-Kibana-instance environment


Robert H

Nov 7, 2018, 11:56:53 AM
to Wazuh mailing list
Hi,
I'm trying to install the Wazuh app in an environment with multiple Kibana instances. First, I just followed the typical installation instructions. The bundle optimization command ran all night but didn't seem to finish. Currently the Wazuh plugin does show up in the left navigation, but when I click on it and then try to click the "gear" icon to set up the API, it errors.

There are multiple Kibanas installed in non-standard directories, say /opt/kibana1, /opt/kibana2, etc. They are all isolated from each other.

First I should also say that the Wazuh managers are running version 3.5. The Elastic stack is version 6.4.0. I ran this command originally, but then realized it assumes the /usr/share directory.


I cd'd to bin, ran cd .. to move back a level, and then ran the following:

  389  cd /opt/kibana2/bin/
  390  ll
  391  pwd
  392  sudo export NODE_OPTIONS="--max-old-space-size=3072"    <---  (this errored; maybe I should have run it without sudo. The system has a lot of RAM, so I just ran the plugin install.)
  393  cd ..
  394  sudo -u kibana bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.6.1_6.4.0.zip
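For what it's worth, the failing `sudo export` step above is expected behavior: `export` is a shell builtin, so sudo cannot execute it. A sketch of the workaround follows; the paths and package URL are the ones used in this thread, and whether the variable survives sudo depends on your sudoers policy.

```shell
# `export` is a shell builtin, so `sudo export ...` fails: sudo can run
# programs but not builtins. Set the variable in the current shell first.
export NODE_OPTIONS="--max-old-space-size=3072"
echo "$NODE_OPTIONS"

# sudo resets the environment by default, so pass the variable on the
# sudo command line when installing (paths/URL as used in this thread):
#   cd /opt/kibana2
#   sudo NODE_OPTIONS="--max-old-space-size=3072" -u kibana \
#       bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.6.1_6.4.0.zip
```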

///////////////////////

Checking the file/directory permissions

drwxrwxr-x.    4 kibana kibana      63 Nov  2 08:17 optimize
-rw-rw-r--.    1 kibana kibana     748 Aug 17 16:35 package.json
drwxrwxr-x.    3 kibana kibana      19 Nov  1 15:58 plugins


Here's a screenshot of the errors.

Screenshot from 2018-11-07 07-59-35.png


Can you help me get this worked out?

Regards,
Robert

Javier Castro

Nov 7, 2018, 8:41:13 PM
to Wazuh mailing list
Hello Robert,
One detail in your setup: you have Wazuh managers v3.5, but you are installing Wazuh app v3.6.1.

The errors you are getting are related to your X-Pack configuration. The Wazuh app needs Kibana to log in to Elasticsearch with a user that has certain privileges. Please follow this guide to set up a proper user: https://documentation.wazuh.com/current/user-manual/kibana-app/configure-xpack/index.html

Regarding your original question, if you want to use multiple Kibana instances you will need to adjust the Wazuh app configuration in all of them to avoid indexing duplicate information into the wazuh-monitoring indices.

For that, edit /your_kibana_folder/plugins/wazuh/config.yml:

  • Exactly one of your instances should index data. For that one use: wazuh.monitoring.enabled: true
  • The rest of your instances won't index data. For those use: wazuh.monitoring.enabled: worker
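For example, the relevant line in each instance's config.yml would look like this (the instance paths here are illustrative, taken from the setup described above):

```yaml
# /opt/kibana1/plugins/wazuh/config.yml -- the single indexing instance
wazuh.monitoring.enabled: true

# /opt/kibana2/plugins/wazuh/config.yml -- all remaining instances
wazuh.monitoring.enabled: worker
```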
Hope that helps.

Regards.

Robert H

Nov 7, 2018, 11:29:31 PM
to Wazuh mailing list
Thanks Javier,
As I was typing out 3.5 and 3.6.1 it occurred to me that was an issue. I'll remove it, use the correct app version, and check out the user information you mentioned.

Can I ask another, sort of unrelated question? What would be the easiest way to not forward (block, drop, etc.) the alerts generated by the manager into the Wazuh app? Meaning, if someone only wanted to see alerts in the Wazuh app from agents, not including the manager(s), is the only way to use decoder and rule exclusions, or is there another way?

Regards,
Robert

Robert H

Nov 8, 2018, 12:49:52 AM
to Wazuh mailing list
I uninstalled the plugin and reinstalled the correct version. In kibana.yml the kibana user is already there, but that's the system user; I guess it doesn't have access to the index.

I've created an xyz-wazuh-alerts-* index. I think I need to create a new Kibana user/role and associate it with this new index. Is that correct?
Then I might need to use that new user in the kibana.yml file.

Interestingly, there are two users in there now. It works fine on another Kibana where we are not using the Wazuh app. I'm thinking to comment (#) out the elastic one once I create the new user.

elasticsearch.url: "https://IP:9200"
elasticsearch.username: "elastic"

elasticsearch.username: "kibana"
elasticsearch.password: "xxxxxxxxx"

Also, on the wazuh-monitoring index: if we have 10 total Kibanas and 3 have the Wazuh app, is there a way to create an xyz-wazuh-monitoring index for each to keep them separate? The way you explained it, it sounds like only one Kibana could have the monitoring index.

Regards,
Robert

jesus.g...@wazuh.com

Nov 8, 2018, 3:50:29 AM
to Wazuh mailing list

Hi Robert,

What would be the easiest way to not forward (block, drop, etc.) the alerts generated by the manager into the Wazuh app?
My approach uses Logstash and its filters. I've quickly tested the following filter and it seems to work:

filter {
   # Agent ID "000" is the Wazuh manager itself, so this drops all
   # manager-generated alerts before they reach Elasticsearch.
   if [agent][id] == "000" {
      drop {}
   }
}

Add it to your Logstash configuration file (in each Logstash instance if you have more than one), usually located at /etc/logstash/conf.d/01-wazuh.conf, and restart Logstash once done:

systemctl restart logstash

I’ve created a xyz-wazuh-alerts-* index. I think I need to create a new kibana user/role and associate it with this new index. Is that correct? Then I might need to use that new kibana user in the kibana.yml file

Since your environment is a bit mixed, my suggestion is to use the elastic user for your specific use case, regardless of our X-Pack guide. This means you should have this in the kibana.yml file:

elasticsearch.username: "elastic"
elasticsearch.password: "elastic_user_password"

You should not have two elasticsearch.username entries in the kibana.yml file. You should use the elastic user when logging in to the Kibana UI too, so you are using elastic on both sides (server and UI).

Is there a way to create xyz-wazuh-monitoring for each to keep them separate?

Those indices are created by the Wazuh app, and it does not support custom index names for now. In any case, I'm going to include this in our roadmap; it looks interesting. By the way, you could try to create a custom package; the affected lines are below:

$ cd /usr/share/kibana/plugins/wazuh 
$ grep -R "wazuh-monitoring-" -n
server/integration-files/visualizations/overview/overview-general.js:25:          '{"index":"wazuh-monitoring-3.x-*","filter":[],"query":{"query":"","language":"lucene"}}'
server/integration-files/monitoring-template.js:14:  template: 'wazuh-monitoring-3.x-*',
server/monitoring.js:75:  const index_pattern = 'wazuh-monitoring-3.x-*';
server/monitoring.js:76:  const index_prefix = 'wazuh-monitoring-3.x-';
server/initialize.js:186:          item.title.includes('wazuh-monitoring-*') ||
server/initialize.js:187:          item.id.includes('wazuh-monitoring-*')
server/lib/elastic-wrapper.js:749:        id: 'index-pattern:wazuh-monitoring-*'
config.yml:90:# Configure wazuh-monitoring-3.x-* indices shards and replicas.

Once modified, just restart Kibana:

systemctl restart kibana

This point is not tested so it might break your app, be careful.

The way you explained it, it sounds like only one kibana could have the monitoring index

A Wazuh app instance can work in three ways regarding the wazuh-monitoring indices, as Javier said, depending on the config.yml value:

  • wazuh.monitoring.enabled: true: this will show data in the Agent status visualization and fetch agents from the Wazuh API, ingesting that data into Elasticsearch.
  • wazuh.monitoring.enabled: worker: this will show data in the Agent status visualization only.
  • wazuh.monitoring.enabled: false: this won't show the Agent status visualization, nor will it ingest data.

The problem is having two true instances that use a common Wazuh API: both would fetch the same agents, indexing them twice. That's why we implemented the worker option.

I hope all these questions are now clearer.

Best regards,
Jesús

Robert H

Nov 8, 2018, 11:39:47 PM
to Wazuh mailing list
Thanks Javier!
I will work through your suggestions.

Thanks!
Robert

Robert H

Dec 3, 2018, 12:09:44 PM
to Wazuh mailing list
Thanks Jesus,
I applied the filter to remove manager events from the log flow, removed the extra elasticsearch.username entry, and things are looking good. I can now get to the API setup page and will configure that after I set up HTTPS for the API.

Thanks again for the information!

Regards,
Robert

jesus.g...@wazuh.com

Dec 3, 2018, 12:53:32 PM
to Wazuh mailing list
Hi Robert,

OK, that sounds good to us. Let us know if you need any more help with this. Remember to keep that configuration safe
across future upgrades: the Logstash filter will be overridden if you `curl` our configuration during an upgrade, so after fetching it again,
edit it the same way we discussed before.

Also, remember to stop/restart services when modifying their configuration or upgrading.

Best regards,
Jesús

Robert H

Dec 6, 2018, 12:32:08 PM
to Wazuh mailing list
Hi Jesus,
I have some additional information related to running multiple Kibanas and wazuh-monitoring indices on one system.

We have multiple Kibana instances set up and they operate separately. Two of the instances (one not installed yet) will use the Wazuh app plugin.

The Kibana instances live in /opt/kibana-company1, /opt/kibana-company2, etc., so the API setup in each case will point to unique managers. You mentioned it might not work if the Wazuh APIs from different Kibanas went to the same managers, pulling the same data. In our setup we have two sets of Wazuh managers and two separate Kibanas, so we can set up separate Wazuh API entries for them.

All the data is stored in the same Elastic cluster, using different indices to keep things separate.

Could you step through the modifications to set up xyz-wazuh-monitoring and abc-wazuh-monitoring indices again?

Regards,
Robert

jesus.g...@wazuh.com

Dec 10, 2018, 9:04:42 AM
to Wazuh mailing list

Hi Robert,

Future releases will include a setting to customize this easily. For now, follow these steps to customize the wazuh-monitoring indices:

Move to the desired app directory:

$ cd /usr/share/kibana/plugins/wazuh

Look for the affected files (this output may differ in your case depending on the installed app version):

$ grep -R "wazuh-monitoring-" -n 
server/integration-files/visualizations/overview/overview-general.js:25: '{"index":"wazuh-monitoring-3.x-*","filter":[],"query":{"query":"","language":"lucene"}}' 
server/integration-files/monitoring-template.js:14: template: 'wazuh-monitoring-3.x-*', 
server/monitoring.js:75: const index_pattern = 'wazuh-monitoring-3.x-*'; 
server/monitoring.js:76: const index_prefix = 'wazuh-monitoring-3.x-'; 
server/initialize.js:186: item.title.includes('wazuh-monitoring-*') || 
server/initialize.js:187: item.id.includes('wazuh-monitoring-*') 
server/lib/elastic-wrapper.js:749: id: 'index-pattern:wazuh-monitoring-*' 
config.yml:90:# Configure wazuh-monitoring-3.x-* indices shards and replicas.

Replace all of the above occurrences with your desired pattern.
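One hypothetical way to apply the replacement in bulk is GNU sed. This is not an official procedure (as noted earlier in the thread, a bad edit can break the app), so back up the plugin directory first. Only the sample-file demonstration below is runnable as-is:

```shell
# Hypothetical bulk edit with GNU sed, run from the plugin directory
# after backing it up:
#   grep -rl "wazuh-monitoring-" . | xargs sed -i 's/wazuh-monitoring-/xyz-wazuh-monitoring-/g'

# Self-contained demonstration of the same substitution on a sample line:
printf "const index_prefix = 'wazuh-monitoring-3.x-';\n" > /tmp/monitoring-sample.js
sed -i 's/wazuh-monitoring-/xyz-wazuh-monitoring-/g' /tmp/monitoring-sample.js
cat /tmp/monitoring-sample.js
# -> const index_prefix = 'xyz-wazuh-monitoring-3.x-';
```

Note that the bulk command would also rewrite the comment in config.yml, which is harmless but worth knowing.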

Once modified, just restart Kibana:

systemctl restart kibana

Regards,
Jesús

Robert H

Dec 10, 2018, 12:37:52 PM
to Wazuh mailing list
Hi Jesus,
I got it this time.  :)  Thanks for your help.

Best regards,
Robert

jesus.g...@wazuh.com

Dec 13, 2018, 11:21:20 AM
to Wazuh mailing list
Hello again Robert,

Just to add this ticket for your information https://github.com/wazuh/wazuh-kibana-app/issues/1092

Regards,
Jesús

Robert H

Dec 14, 2018, 4:16:13 PM
to Wazuh mailing list
Thanks for the update Jesus.

Regards,
Robert

jesus.g...@wazuh.com

Dec 17, 2018, 6:25:35 AM
to Wazuh mailing list
Hello again Robert H and anyone interested in this,

I've just merged a pull request for customizing the Wazuh monitoring pattern. It will be released with the next Wazuh version.


Don't use this directly on your current app, it won't work.

Best regards,
Jesús

Robert H

Dec 18, 2018, 4:55:49 PM
to Wazuh mailing list
Hi Jesus,
Thanks for the help.  I changed the monitoring pattern today and it worked great!

Regards,
Robert

Robert H

Jan 4, 2019, 7:34:27 PM
to Wazuh mailing list
Hi Jesus and Javier,
I added the 2nd Kibana instance today, which includes the Wazuh app. I thought it was okay, but now I've noticed that on the first Kibana/Wazuh instance, when the page refreshes, a pop-up appears saying 5 of 95 shards failed. Can you help me resolve this?

To bring you up to date on the configuration:

We now have 4 kibana instances:

/opt/kibana-company1 (no wazuh plugin)
/opt/kibana-company2 (no wazuh plugin)
/opt/kibana-company3 (wazuh plugin)
/opt/kibana-company4 (wazuh plugin)

I have changed the index names in the wazuh/config.yml file to abc-wazuh-alerts- and abc-wazuh-monitoring-, as you showed in this thread before. The company3 one, which was set up earlier, uses xyz-wazuh-alerts- and xyz-wazuh-monitoring-.

I have not set up the Wazuh app API connection for the company4 instance yet, as I have to wait for the company to open that access. Also, I don't have alert data coming in yet from Filebeat and the Wazuh managers, as I'm waiting for our corporate firewall and load balancer to be set up.

This failed-shards pop-up shows up in the company3 Kibana/Wazuh Security Events every time the page reloads.

failed_shards.png



I think it might be related to the monitoring index.  Here is the information from when the newest kibana started up.

{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Found 0 index patterns"}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Found 0 valid index patterns for Wazuh alerts"}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Default index pattern not found, creating it..."}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Creating index pattern: ps-wazuh-alerts-3.x-*"}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","[initialize][checkAPIEntriesExtensions]","info"],"pid":1931,"message":"Successfully updated API entry extensions with ID: 1543872724755"}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Created index pattern: ps-wazuh-alerts-3.x-*"}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Waiting for default index pattern creation to complete..."}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"No older .wazuh index found -> no need to reindex."}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":".wazuh-version document already exists. Updating version information..."}
{"type":"log","@timestamp":"2019-01-04T23:06:45Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Successfully updated .wazuh-version index"}
{"type":"log","@timestamp":"2019-01-04T23:06:46Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Successfully created today index."}
{"type":"log","@timestamp":"2019-01-04T23:06:46Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Index pattern created..."}
{"type":"log","@timestamp":"2019-01-04T23:06:46Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"Refreshing known fields for \"index-pattern:ps-wazuh-alerts-3.x-*\""}
{"type":"log","@timestamp":"2019-01-04T23:06:46Z","tags":["\u001b[34mwazuh\u001b[39m","initialize","info"],"pid":1931,"message":"App ready to be used."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Waiting for Kibana and Elasticsearch servers to be ready..."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Updating wazuh-monitoring template..."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Creating today index..."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Checking if wazuh-monitoring pattern exists..."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Didn't find wazuh-monitoring pattern for Kibana v6.x. Proceeding to create it..."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"No need to delete old wazuh-monitoring pattern."}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Creating index pattern: ps-wazuh-monitoring-3.x-*"}
{"type":"log","@timestamp":"2019-01-04T23:06:47Z","tags":["\u001b[34mwazuh\u001b[39m","monitoring","info"],"pid":1931,"message":"Created index pattern: ps-wazuh-monitoring-3.x-*"}


Regards,
Robert

Robert H

Jan 7, 2019, 12:37:40 PM
to Wazuh mailing list
Yes, I confirmed that I see the "5 of 95 shards" warning message only on the Security Events page, not on PCI_DSS, FIM, or Policy Monitoring. Also, the Agent status graph hasn't worked since last Friday.
**Note:** after I installed the 2nd Wazuh/Kibana (/opt/kibana-company4), I did not restart the 1st one (/opt/kibana-company3). Should that be done?

This is the status of the monitoring indices on the /opt/kibana-company3 instance:

$ grep -R "wazuh-monitoring-" -n
server/integration-files/visualizations/overview/overview-general.js:22:                        "searchSourceJSON": "{\"index\":\"abc-wazuh-monitoring-3.x-*\",\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
server/integration-files/monitoring-template.js:14:    "template": "abc-wazuh-monitoring-3.x-*",
server/lib/elastic-wrapper.js:712:                id: 'index-pattern:abc-wazuh-monitoring-*'
server/initialize.js:116:                if(item.title.includes('abc-wazuh-monitoring-*') || item.id.includes('abc-wazuh-monitoring-*')) continue;
server/monitoring.js:53:    const index_pattern  = "abc-wazuh-monitoring-3.x-*";
server/monitoring.js:54:    const index_prefix   = "abc-wazuh-monitoring-3.x-";
config.yml:90:# Configure abc-wazuh-monitoring-3.x-* indices shards and replicas.

agent_status_missing.png


This is the status of the monitoring indices on the /opt/kibana-company4 instance. (Note: I do not have alerts flowing into the ES cluster yet, I have not yet connected the Wazuh API, and I have not loaded an updated template. I just installed this Kibana first in the sequence, as I'm waiting for others to complete things related to those tasks.)


$ grep -R "wazuh-monitoring-" -n
server/integration-files/visualizations/overview/overview-general.js:22:                        "searchSourceJSON": "{\"index\":\"xyz-wazuh-monitoring-3.x-*\",\"filter\":[],\"query\":{\"query\":\"\",\"language\":\"lucene\"}}"
server/integration-files/monitoring-template.js:14:    "template": "xyz-wazuh-monitoring-3.x-*",
server/lib/elastic-wrapper.js:712:                id: 'index-pattern:xyz-wazuh-monitoring-*'
server/initialize.js:116:                if(item.title.includes('xyz-wazuh-monitoring-*') || item.id.includes('xyz-wazuh-monitoring-*')) continue;
server/monitoring.js:53:    const index_pattern  = "xyz-wazuh-monitoring-3.x-*";
server/monitoring.js:54:    const index_prefix   = "xyz-wazuh-monitoring-3.x-";
config.yml:90:# Configure xyz-wazuh-monitoring-3.x-* indices shards and replicas.



Regards,
Robert