Kibana, X-Pack and Building Wazuh as a Platform


Stephen Hill

May 30, 2017, 6:51:21 AM
to Wazuh mailing list
Hi All,

I'm hoping I can get some feedback on something I'm going to attempt on our dev Wazuh implementation.

A bit of background:

I'm part of a company that owns a few other companies, each with an independent IT department. They want to use our Wazuh platform "as a service". At the moment Wazuh works perfectly for us, and we secure it behind nginx for SSL and auth. This works fine but doesn't really allow us to go down the platform route for the rest of the group.

After reading about X-Pack on the ES site, I think I may have found a way to implement it, but I wanted to see if anyone can spot any holes in my plan, or if the Wazuh plugin will have issues with the way we want to do this.

Firstly, I've configured X-Pack and it's working from the front end, allowing our dev users to log in and view the same logs we have in prod; all looks good. The next bits are what I haven't tested or implemented:

Here is the architecture (this is just a design so feel free to advise if this looks wrong!):



The next bit would be to create indices with different names instead of "wazuh-alerts-%{+YYYY.MM.dd}". My idea for separating the alerts, so we can assign permissions / roles with X-Pack, is to modify the output based on the Beats agent sending the logs. These would live in two separate config files in /etc/logstash/conf.d/:


Company1.conf:


# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        host => "Company 1 IP"
        codec => "json_lines"
        ssl => true
        ssl_certificate => "/etc/logstash/logstash.crt"
        ssl_key => "/etc/logstash/logstash.key"
    }
}
## Local Wazuh Manager - JSON file input
#input {
#   file {
#       type => "wazuh-alerts"
#       path => "/var/ossec/logs/alerts/alerts.json"
#       codec => "json"
#   }
#}
filter {
    geoip {
        source => "srcip"
        target => "GeoLocation"
        fields => ["city_name", "continent_code", "country_code2", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "timestamp", "beat", "fields", "input_type", "tags", "count", "@version", "log", "offset", "type"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "company1-alerts-%{+YYYY.MM.dd}"
        document_type => "wazuh"
        template => "/etc/logstash/wazuh-elastic5-template.json"
#       template => "/etc/logstash/wazuh-elastic2-template.json"
        template_name => "wazuh"
        template_overwrite => true
    }
}


Company2.conf:

# Wazuh - Logstash configuration file
## Remote Wazuh Manager - Filebeat input
input {
    beats {
        port => 5000
        host => "Company 2 IP"
        codec => "json_lines"
        ssl => true
        ssl_certificate => "/etc/logstash/logstash.crt"
        ssl_key => "/etc/logstash/logstash.key"
    }
}
## Local Wazuh Manager - JSON file input
#input {
#   file {
#       type => "wazuh-alerts"
#       path => "/var/ossec/logs/alerts/alerts.json"
#       codec => "json"
#   }
#}
filter {
    geoip {
        source => "srcip"
        target => "GeoLocation"
        fields => ["city_name", "continent_code", "country_code2", "country_name", "region_name", "location"]
    }
    date {
        match => ["timestamp", "ISO8601"]
        target => "@timestamp"
    }
    mutate {
        remove_field => [ "timestamp", "beat", "fields", "input_type", "tags", "count", "@version", "log", "offset", "type"]
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "company2-alerts-%{+YYYY.MM.dd}"
        document_type => "wazuh"
        template => "/etc/logstash/wazuh-elastic5-template.json"
#       template => "/etc/logstash/wazuh-elastic2-template.json"
        template_name => "wazuh"
        template_overwrite => true
    }
}

Then in the Kibana front end we would have to define new index patterns:

company1-alerts-* & company2-alerts-*

I believe this would mean all users have index pattern matches for both companies' logs; however, we would then assign roles that come with index-level permissions, so the only returned results would be from that company's index.

So we would create two roles, one for each company's IT team:

Company 1 role:
{
  "cluster": [ "monitor" ],
  "indices": [
    {
      "names": [ "company1-alerts-*" ],
      "privileges": [ "read" ]
    }
  ]
}

Company 2 role:

{
  "cluster": [ "monitor" ],
  "indices": [
    {
      "names": [ "company2-alerts-*" ],
      "privileges": [ "read" ]
    }
  ]
}
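For reference, role definitions like these could be loaded through the X-Pack security API. This is only a sketch: the role name, the host, and authenticating as the elastic superuser are assumptions, not something prescribed by the thread.

```
# Hypothetical example: create the Company 1 role via the X-Pack
# security API (Elasticsearch 5.x endpoint).
curl -u elastic -H 'Content-Type: application/json' \
  -XPOST 'localhost:9200/_xpack/security/role/company1_role' -d '
{
  "cluster": [ "monitor" ],
  "indices": [
    {
      "names": [ "company1-alerts-*" ],
      "privileges": [ "read" ]
    }
  ]
}'
```

The same call with the Company 2 body would create the second role.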

Now, the Wazuh plugin. I know the Wazuh plugin is driven by queries to both the Elasticsearch API and the Wazuh manager API. This is where I don't really know how to separate out the access to "agents" and "manager" in the front end.

Any and all thoughts are much appreciated!

Steve

Santiago Bassett

May 30, 2017, 1:14:19 PM
to Stephen Hill, Wazuh mailing list
Hi Stephen,

this is a very interesting use case, thanks for sharing. Everything you said makes sense to me. 

On the other hand, the Kibana plugin currently supports connections to multiple APIs, meaning that if you run one API per manager you should be able to switch between customers.

In addition, there are other indices, named wazuh-monitoring, used by the Kibana plugin. These indices are built from scheduled queries to the API (for example, to list the historical status of an agent).

Some people from our team have been setting up similar environments; they will probably be back with more comments/ideas.

Best regards,

Santiago.



Pedro Sanchez

May 30, 2017, 3:07:28 PM
to Santiago Bassett, Stephen Hill, Wazuh mailing list
Hi Stephen!

Good to see you on the mailing list.

Thanks for the architecture and design explanation; I understand what you need to accomplish here.

Let me ask you a few questions and bring up some ideas you could find interesting.

The first thing I notice is that you are using just one Logstash instance. Why not use two of them?
Your architecture works in parallel, but with a single Logstash instance you will have a bottleneck. You could also install Logstash on "Elastic Data & Manager 2" and set up Filebeat on both managers to send events to both Logstash instances.
Filebeat has the option to load-balance across several Logstash endpoints using round-robin.
If one of your nodes fails, your architecture will keep running (take care to avoid split-brain with just 2 ES nodes).
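A minimal sketch of that Filebeat output section, assuming Filebeat 5.x syntax; the Logstash hostnames and the CA path are placeholders:

```yaml
# Filebeat -> Logstash output, round-robin load balancing across
# two Logstash instances (hosts are assumed examples).
output.logstash:
  hosts: ["logstash1.example.com:5000", "logstash2.example.com:5000"]
  loadbalance: true
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]
```

With loadbalance disabled, Filebeat would instead pick one host at random and only fail over when it becomes unreachable.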

The second thing I would advise to improve your architecture is adding another Kibana instance. Currently you will lose visibility if your Kibana node goes down. Kibana does not allow more than one Elasticsearch API URL, but a proxy (such as Nginx) will do the trick.

About your Logstash configuration files: I am not sure about this, but I think you are trying to bind the same port twice (input section, port 5000). The "host" setting should contain the local IP address you want to listen on, not a remote address (like the Company 1 or Company 2 IP).

One way you could solve this is by using fields and values to differentiate the alert origin, setting those values in the Filebeat configuration.
You could use either "tags" or "fields":

An example with fields:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - "/var/ossec/logs/alerts/alerts.json"
      fields:
        company_name: my_company
      document_type: json
      json.message_key: log
      json.keys_under_root: true
      json.overwrite_keys: true

Later, in Logstash, you can use that field to apply different filters or outputs:

output {
  if [company_name] == "company1" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "wazuh-alerts-company1-%{+YYYY.MM.dd}"
      document_type => "wazuh"
      template => "/etc/logstash/wazuh-elastic5-template.json"
      template_name => "wazuh"
      template_overwrite => true
    }
  }
  if [company_name] == "company2" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "wazuh-alerts-company2-%{+YYYY.MM.dd}"
      document_type => "wazuh"
      template => "/etc/logstash/wazuh-elastic5-template.json"
      template_name => "wazuh"
      template_overwrite => true
    }
  }
}

Or you could even use the field value directly in the "index" setting:

index => "wazuh-alerts-%{company_name}-%{+YYYY.MM.dd}"


Using tags would look like this:

Filebeat

filebeat.prospectors:
- paths: ["/var/log/app/*.json"]
  tags: ["company1"]
 
Logstash

if "company1" in [tags] {
...


Once you set the field "company_name", you could either split your indices, or keep the same "wazuh-alerts" index for everything and use filters in Kibana/Elasticsearch to search for a certain value of "company_name".


That said, let's move on to the X-Pack Security plugin.
You are right about using X-Pack to map permissions/roles depending on the index pattern; it is a useful way to do it, with many advantages and good performance.
There are different possibilities for restricting access with the Security plugin: index-level security, document-level security, and field-level security.
Using document-level security allows you to restrict access depending on field values:

{
  "indices": [
    {
      "names": [ "wazuh-*" ],
      "privileges": [ "read" ],
      "query": "{\"match\": {\"company_name\": \"company1\"}}"
    }
  ]
}

Let me address your main question, related to the Wazuh App.
In general, we have hard-coded several occurrences of the index pattern "wazuh-alerts-*" (for alert counting, sample alerts, visualizations...), which means I can't guarantee correct behaviour if you don't use the pre-defined index pattern.

I would recommend using an index name like "wazuh-alerts-company1-YYYY.MM.DD" to match our current pattern; that will keep a lot of capabilities within our plugin compatible.
Even then, I think we will need to accommodate some other requests which are strictly sent to "wazuh-alerts-YYYY.MM.DD", for example "alert-count".

Regarding Wazuh API requests and manager alerts, you can use the "Settings" tab to add several APIs and managers. There is a global filter (transparent to the user) which filters everything by the "manager.name" field value, meaning the App will show only alerts containing that value.

Currently there are no privilege levels in the Wazuh API; you could use X-Pack to restrict access for users based on the "manager.name" field.
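Building on the document-level security example above, a role restricting access by manager could look like this. This is a sketch: the "company1-manager" value is an assumed example of what "manager.name" might contain in your deployment.

```json
{
  "indices": [
    {
      "names": [ "wazuh-alerts-*" ],
      "privileges": [ "read" ],
      "query": "{\"match\": {\"manager.name\": \"company1-manager\"}}"
    }
  ]
}
```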


I hope this helps. Please let us know if you have more questions. I believe you have an amazing use case for the Wazuh ecosystem and I would love to track your progress!

Best,
Pedro.





Stephen Hill

May 31, 2017, 5:50:01 AM
to Wazuh mailing list, stephe...@gmail.com, Santiago Bassett
Hi Pedro / Santiago,

Thanks for all the feedback, it's all really appreciated! 

The point about Beats and Logstash on the host field was especially helpful, I clearly misunderstood the use of that field. I would have been tearing my hair out with that no doubt! :) 

I've worked with the multiple API support in the Wazuh plugin and that works really nicely!

I guess the only question I have is about using nginx to proxy to multiple ES APIs, which makes perfect sense, but I can't say I've used nginx as a proxy before. Probably a silly question, but would it essentially be the opposite of a reverse-proxy config? So all requests to ES hit the nginx web server (running locally) listening on 9200, which then forwards them to the real ES coordinator nodes in round robin? Or, instead of the coordinator nodes, would we direct requests to the data nodes, since they act as coordinators by default?

Thanks,
Steve

Pedro Sanchez

May 31, 2017, 7:57:13 AM
to Wazuh mailing list, stephe...@gmail.com, sant...@wazuh.com
Hi Stephen,

You are very welcome.

About the Nginx proxy: I was thinking about a "first layer" proxy. Elasticsearch already load-balances by itself; we don't want to mess with internal data traffic.
Kibana needs an Elasticsearch API URL to perform its requests (kibana.yml, "elasticsearch.url" setting). These are standard HTTP RESTful requests, meaning we can forward them to whichever endpoint we decide.

The way I set this up a few weeks ago was with Amazon ELB.
I created a new load balancer listening on the desired port, for example "my_load_balancer_proxy:2525".
Then I specified the different Elasticsearch nodes as destinations: my_elastic_node1_ip:9200, my_elastic_node2_ip:9200...
Finally, in the Kibana configuration, we use the load balancer address: my_load_balancer_proxy:2525.
The full flow looks like:
User action in the Kibana WUI -> Kibana sends a request to "elasticsearch.url" (my_load_balancer_proxy:2525) -> the ELB round-robins the request to an Elasticsearch node -> my_elastic_node1_ip:9200 or my_elastic_node2_ip:9200.

Hope it helps,
Pedro.

Stephen Hill

Jun 1, 2017, 4:46:50 AM
to Wazuh mailing list, stephe...@gmail.com, sant...@wazuh.com
Hi Pedro,

Thanks again, that makes sense. As we have to build this in our own DC, I think my best option is to run nginx locally on each Kibana server, point the Kibana Elasticsearch API to https://localhost:2525, and then use this nginx config on each server:

http {
    upstream escluster01 {
        server es01.example.com;
        server es02.example.com;
    }

    server {
        listen *:2525;
        listen [::]:2525;

        server_name "localhost";

        ssl on;
        ssl_certificate /etc/nginx/ssl/cert.pem;
        ssl_certificate_key /etc/nginx/ssl/server.key;

        access_log /var/log/nginx/kibana.access.log;
        error_log /var/log/nginx/kibana.error.log;

        location / {
            proxy_pass https://escluster01;
        }
    }
}
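Before putting a config like that live, it can be validated in place with the standard nginx CLI (the reload step assumes a systemd host, which is an assumption on my part):

```
# Validate the configuration, then reload nginx without downtime.
sudo nginx -t && sudo systemctl reload nginx
```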

I'm going to attempt the build this weekend and will let you know how it goes!

Thanks,
Steve

Stephen Hill

Jun 5, 2017, 7:21:21 AM
to Wazuh mailing list, stephe...@gmail.com, sant...@wazuh.com

Hi Pedro / Santiago,

I've got some good news from the build I did over the weekend. It's not complete but the fundamental concept of different users with different roles having permissions to only their own indices is working.

I took the advice of appending the company names to the end of the Logstash output template which means all of the Wazuh API integration seems to still be working as expected.

I still need to integrate the clustering, as I am running this standalone in dev, but for all intents and purposes it works for what we need it to do.

I'm not sure if you're aware, but it looks like I encountered a problem that a lot of other users are seeing with Logstash and X-Pack 5.4.1. If you install the X-Pack plugin for Logstash and enable monitoring, it is unable to create the monitoring indices, regardless of whether you enable or disable security / anonymous access. I was tearing my hair out over this for a while. Once I uninstalled the X-Pack plugin from Logstash and reinstalled Logstash from scratch, everything worked as expected.

For reference, the log entry that caused me the pain was this:

Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

Once I get a bit further with the build I'll put together a quick how-to on how I got there if you think that would be useful?

Here are screenshots of company 1 IT vs company 2 IT:

Company 1:

Company 2:



Thanks,
Steve

Stephen Hill

Jun 6, 2017, 10:32:26 AM
to Wazuh mailing list, stephe...@gmail.com, sant...@wazuh.com
One slight drawback I've noticed in the Wazuh app is that the "Overview" tab will only display data relating to the default manager set via the Settings button.

So if Company 2 is logged in but the default Wazuh manager is set to Company 1, they don't see the correct data in any of the tabs. However, when I change the default for the logged-in user, the user from Company 1 will then have the Company 2 manager set as default and see the same issue.

Any idea if I can work around this?

Thanks,
Steve

Santiago Bassett

Jun 6, 2017, 12:42:10 PM
to Stephen Hill, Wazuh mailing list
Hi Steve,

this is something we are working on. The plan is to be able to aggregate data from multiple managers, instead of having to choose one or the other.

This has to be done not only for the alerts data, but also for calls to the managers' APIs. Once that is implemented, I believe we can rely completely on X-Pack for the RBAC, and users will not have to switch managers manually. I believe Pedro and our team are planning to release this in a future version some time soon.

Let us know if you have questions or suggestions on how to do this better.

Best regards

Pedro Sanchez

Jun 7, 2017, 7:34:55 AM
to Wazuh mailing list, stephe...@gmail.com
Hi Steve,

Thanks for sharing your progress, it looks really good; you did it!
A quick how-to tutorial would be really useful. Let us know when it's ready and we will help you upload it to the right place in Wazuh; I was even thinking of creating a new entry on our blog.

About the Wazuh App and choosing the default manager: you are right, and Santiago explained our next steps. Our plan is to expand the API capabilities; we would like to create an API cluster to aggregate data from multiple managers, making it transparent to the user where the data is actually located.
In your case, I think you want the opposite: not to aggregate everything, but to "automatically switch" depending on whether a company1 or company2 user is logged in. What you need is permissions / role access / user access. We have it in our roadmap; it's pretty necessary in our API and it is a priority we are working on.
We store the default API in a document in the .kibana index; you could programmatically change that value to set a different default manager.

curl -XGET localhost:9200/.kibana/wazuh-configuration/_search?pretty

    "hits" : [
      {
        "_index" : ".kibana",
        "_type" : "wazuh-configuration",
        "_id" : "AVnMgnTZPYNGMxxxx",
        "_score" : 1.0,
        "_source" : {
          "api_user" : "foo",
          "api_password" : "xxxx",
          "url" : "http://x.x.x.x.x",
          "api_port" : "55000",
          "insecure" : "true",
          "component" : "API",
          "active" : "true",
          "manager" : "vpc-xxx",
          "extensions" : {
            "oscap" : true,
            "audit" : true,
            "pci" : true
          }
        }
      }
    ]

You will get one hit per registered API, but only one of them will be "active" at a time.
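For example, deactivating an entry could be done with an update request against its document. This is a sketch; substitute the `_id` returned by the search above for the placeholder:

```
# Set "active" to false on one wazuh-configuration document
# (<document_id> is a placeholder for the real _id).
curl -XPOST 'localhost:9200/.kibana/wazuh-configuration/<document_id>/_update' -d '
{
  "doc": { "active": "false" }
}'
```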

Best regards,
Pedro.

Stephen Hill

Jun 9, 2017, 1:14:17 PM
to Wazuh mailing list, stephe...@gmail.com
Hi Both,

Thanks again for all the great feedback, that's great that the API feature is in the road map! 

I should hopefully be able to put something together by the end of the week. Would submitting it to you as a Word document be best, or should I open a pull request against the documentation repository on GitHub?

So within the .kibana document I could set the active API based on the logged-in user? Would this be best done by locking down the .kibana document at the document level, so that each user could only load their specific item into the frontend config?

Thanks,
Steve

Santiago Bassett

Jun 10, 2017, 9:53:30 AM
to Stephen Hill, Wazuh mailing list
Hi Steve,

contributions are accepted in whatever way they come. Having said that, pull requests are usually preferred.

Regarding the Kibana question, I'll let Pedro answer that one, as I have no idea how this will be done :-)

Best regards

Pedro Sanchez

Jun 13, 2017, 3:28:32 PM
to Wazuh mailing list, stephe...@gmail.com
Sorry for the late response, Steve, busy days lately.

So within the .kibana document I could set the active API based on the logged in user? Would this be best done by locking down the kibana document at doc level so that user could only load that specific item into the frontend config?
In the .kibana document you can set the active API; the tricky part would be using FLS (field-level security) to "pick" one value or another. I am thinking about different ways to accomplish this, but in the end we have one "Wazuh App" instance and one Kibana instance, which means we can only have one active API at a time; we can't have three different APIs active for three different users.
The best way to develop this could be to implement new functionality in the Wazuh App which allows us to "merge" or "combine" X-Pack security roles with a specific API entry, for example: "My API1 will be the default API for users in role: department1".

The workaround I can think of right now is triggering a script which changes the "active: true" document value in .kibana according to the logged-in user.
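As a sketch of that workaround (the helper name is hypothetical; the "manager" and "active" field names are taken from the .kibana search output earlier in the thread), the selection logic could look like this, with the result then written back to Elasticsearch via its update API:

```python
# Sketch: given the API entries stored in the .kibana index (one dict
# per registered API), mark as active only the entry whose "manager"
# name contains the company of the user who just logged in.
# Pure logic only; sending the changes back to Elasticsearch would be
# a separate update request per modified document.

def select_active_api(entries, company):
    """Return copies of the entries with exactly the matching manager active."""
    updated = []
    for entry in entries:
        entry = dict(entry)  # copy so the caller's data is not mutated
        entry["active"] = "true" if company in entry.get("manager", "") else "false"
        updated.append(entry)
    return updated
```

A login hook could run this over the documents returned by the `/.kibana/wazuh-configuration/_search` query and write back any entry whose "active" value changed.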

Sorry I can't help much more. The good point is that we've realized we need to improve these role-based capabilities.

Thanks,
Pedro.

Stephen Hill

Jun 14, 2017, 6:06:11 AM
to Wazuh mailing list, stephe...@gmail.com
Hi Pedro,

No problem.

Thanks for clarifying, this does make sense. I'll see if I can whip up a shell script, triggered on user login, which will modify the document via the Elastic API. It's not a huge concern for us; as long as we make users aware of the limitation, they will be able to make the change themselves.

Thanks again for your help and I'll follow updates to the Wazuh app closely! :) 

Steve

Stephen Hill

Jun 15, 2017, 7:59:36 AM
to Wazuh mailing list, stephe...@gmail.com
Hi Santiago / Pedro,

I've attached a rough copy of the setup guide for x-pack. Can you take a look and see if it looks ok for me to submit a pull request?

Thanks,
Steve
Configuring Wazuh with X-Pack Multi-Teanancy.docx

Pedro Sanchez

Jun 18, 2017, 8:29:17 AM
to Stephen Hill, Wazuh mailing list
Hi Stephen,

Great work! It looks nice, thanks for all the effort.

Let me point out a couple of inconsistencies I found; then we can decide where to upload it (Wazuh docs? Wazuh wiki?).

I noticed you are creating a Logstash user with write capabilities. I assume this is because you prefer not to use the default logstash_system user; why not do the same thing with the "elastic" and "kibana" users? Would that make sense? The final goal would be to not have "root" accounts set in any configuration file.

Another tiny detail I noticed is a reference to "/etc/filebeat/logstash.crt", meaning you are encrypting traffic between Filebeat and Logstash (as the Wazuh documentation says). But what if we enable encryption all the way through? Now that you have X-Pack installed, using the "certgen" tool from Elasticsearch, we could also encrypt traffic in Elasticsearch itself (HTTPS API and TCP data transfer), meaning all Kibana requests to Elastic would be encrypted, as well as internal node-to-node communication.
Do you think it is worth adding those capabilities to your current documentation, or would you prefer not to add more complexity?

The last thing is the use of "Curator" to display index info; would we get the same output using _cat/indices against the Elasticsearch API?


Please don't misunderstand me: I think you did great work and we really appreciate it. I am just trying to keep improving it.


Thanks again, best regards,
Pedro.


PS: We are working on dynamically adapting the default API according to the logged-in user. You will love it :D

Stephen Hill

Jun 18, 2017, 9:52:52 AM
to Wazuh mailing list, stephe...@gmail.com
Hi Pedro,

Thanks for the great feedback! Those are some really good points, I also didn't know we could do some of that with the X-Pack tools!

Good point on the user side too. I have actually done that bit in my own deployment, but I think it makes sense to remove the use of default users in the docs too.

The API development sounds great! :) 

I'll make the changes and send back over a copy with those bits added in.

Thanks again,
Steve

Stephen Hill

Jun 27, 2017, 9:46:09 AM
to Wazuh mailing list, stephe...@gmail.com
Hi Pedro,

Sorry it's taken a while, had a busy couple of weeks!

I've put in your suggestions for TLS, and I've made Curator optional for checking indices. The only reason I wanted to keep it is that I like using Curator for maintenance tasks :)

I remembered why I only changed the logstash_system user for a new account: the way I read the documentation, logstash_system is only for monitoring, and the ES docs say to use another user for writing and managing templates. To keep the guide simple, I think it's best to use the default accounts and maybe add a note recommending changing them?

I've attached the newest copy, look forward to hearing your thoughts! :) 

Steve
Configuring Wazuh with X-Pack Multi-Teanancy-27-06-2017.docx

Pedro Sanchez

Jun 30, 2017, 7:44:46 AM
to Stephen Hill, Wazuh mailing list
Woah Stephen!

I think you did excellent work with the guide, very detailed and useful. Thank you so much!
In my experience, the use case you prepared and solved is very common among different users and customers.

I would add another Logstash instance to the cluster (2x nodes, 2x Logstash, 1x client node, 1x Kibana); it will add reliability to your infrastructure and a degree of HA (two Filebeats to two Logstash instances to two nodes).
I am thinking about where we can upload this document; a blog post or a new appendix in our documentation would be great, I think. In any case, I won't forget about the document you have created: I think it is a wonderful guide for adding multi-tenancy.


Best regards and thanks for your contribution,
Pedro.


Stephen Hill

Jul 5, 2017, 5:30:39 AM
to Wazuh mailing list, stephe...@gmail.com
Thanks Pedro!

I totally agree on adding more nodes for resiliency; unfortunately I was a bit constrained for resources in our data centre :(

I'm really glad you think it's worth adding to the Wazuh docs / blog. If you think it needs anything else before that, just let me know! :)

Thanks again for all your help and feedback, it's been a really fun project!

Steve 

rlin...@networkconfig.net

Jul 5, 2018, 3:55:03 AM
to Wazuh mailing list
Hi All,

As mentioned in the documentation, this is only for Wazuh version 2.0. Is it applicable to the current Wazuh 3.x version? If not, please let me know the alternative. Thank you.

rlin...@networkconfig.net

Jul 5, 2018, 4:07:12 AM
to Wazuh mailing list
And I am using the single-host architecture, with Wazuh and the Elastic Stack on one server.



jesus.g...@wazuh.com

Jul 10, 2018, 7:30:58 AM
to Wazuh mailing list
Hi @rlinux57, the above conversation refers to Wazuh 2.x / Elastic 5.x. The steps described may not work, or may have unexpected results, on Wazuh 3.x / Elastic 6.x. The team is currently working on new documentation to achieve a similar goal on the latest versions, so stay tuned. In the meantime it's not fully tested and may be broken, so be careful when manipulating Elasticsearch indices and RBAC, because you can break your environment.

If you are interested we can help you achieve your goal, just let us know about your environment.

Regards,
Jesús