Wazuh Architecture design for an MSSP


Paul M

Sep 17, 2018, 4:46:01 PM
to Wazuh mailing list
I'm researching Wazuh from the perspective of an MSSP. I came across the post about using Wazuh for large deployments: https://groups.google.com/forum/#!topic/wazuh/IqR7gJqsvIY, so it appears that it scales extremely well thanks to the ELK stack.

I want to see if there are many Wazuh users who use it not only for a large number of devices, but also in an MSSP fashion to monitor different clients' devices. If so, how does your architecture differ, if at all, from the recommended large-scale deployment method?

Based on https://documentation.wazuh.com/current/getting-started/architecture.html#communications-and-data-flow, would each customer have a full deployment, sized to their environment and complete with its own Elastic Stack, with the data then flowing from each customer's Wazuh deployment to a master/combined Elastic Stack? Or could each customer have only the Wazuh server stack (Wazuh manager, Filebeat, Wazuh API), with all of them feeding into one large Elastic Stack?

Is there a way to let your SOC analysts view all of your supported environments in one pane of dashboards while still seeing the distinction between one client and another? Can you easily view dashboards for one customer at a time?

Thanks for any insight regarding these questions!

jesus.g...@wazuh.com

Sep 18, 2018, 3:13:40 AM
to Wazuh mailing list
Hi Paul M,

Great questions; this discussion is very interesting from my point of view. If I understood you correctly, you are planning to have a centralized
Elastic Stack with a few separate Wazuh instances sending events to that centralized Elasticsearch, right? If so, you can take advantage of the X-Pack
security features plus a custom modification to the Elasticsearch index names at index creation time.

Custom indices

By default, Elasticsearch will create indices using the wazuh-alerts-3.x-* prefix; you can modify it to use a custom pattern.

Let's say we have two different customers:

- customer-01-alerts-*
- customer-02-alerts-*

Our customer-01 has a Wazuh manager, Wazuh agents, and Filebeat sending events to the centralized Elasticsearch endpoint; same for customer-02. Each customer
will add data to different indices, so each customer's data ends up in separate indices.
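As a sketch of where that prefix change could live (assuming the distributed Wazuh 3.x layout where Filebeat forwards alerts to a Logstash pipeline in front of Elasticsearch; the hostname and file path below are placeholders, not from the thread), each customer's pipeline would get its own index setting:

```
# Hypothetical per-customer Logstash pipeline,
# e.g. /etc/logstash/conf.d/01-wazuh-customer-01.conf
output {
    elasticsearch {
        # Placeholder endpoint for the centralized cluster.
        hosts => ["elasticsearch.example.com:9200"]
        # Per-customer prefix instead of the default wazuh-alerts-3.x-*.
        index => "customer-01-alerts-%{+YYYY.MM.dd}"
    }
}
```

With one pipeline (or one Filebeat/Logstash pair) per customer, the daily indices land under each customer's own prefix automatically.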

Authorization and authentication

We also have a Kibana host, useful for visualizing our data, but both customers would be able to see each other's data. X-Pack security to the rescue!
We have a guide to configure users and roles for X-Pack: https://documentation.wazuh.com/current/user-manual/kibana-app/configure-xpack/index.html

The main idea is to define users that can access only certain indices. This way we also force the user to log in to Kibana with a username and password.

Once you are done, your users can access only certain indices and must log in to Kibana as well. That configuration applies to Elasticsearch itself, so they can't retrieve data
directly from Elasticsearch if they are not authorized.
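For illustration, a role that restricts one customer's users to their own indices might look like this in the Kibana Console (the role name is hypothetical; see the X-Pack role management API for the full set of options):

```
POST /_xpack/security/role/customer-01-analyst
{
  "indices": [
    {
      "names": [ "customer-01-alerts-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
```

A user assigned only this role can search and visualize customer-01-alerts-* but gets nothing back for any other customer's indices, whether through Kibana or directly against Elasticsearch.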

I hope it helps.

Regards,
Jesús

Paul M

Sep 18, 2018, 8:41:35 AM
to Wazuh mailing list
Thanks Jesús for your response.

From your answer it looks like you're suggesting that we would not need a full ELK stack for each customer, only one large ELK stack for all of our customers.

The custom indices sound like a great solution for clearly separating unique customers. This may also answer another question I had about how Wazuh stores the data for individual devices. There will be overlap in both IP addresses and hostnames due to the nature of our business. Will the custom indices be sufficient for Wazuh to distinguish devices from different customers that have the same IP address and/or hostname, or could that cause issues?

Regarding X-Pack security: I've looked into the benefits that the Gold/Platinum subscriptions of Elastic would bring (X-Pack alerting being another big one), but initially I want to see what we can do without a paid tier of Elasticsearch. I know it's not released yet, but it looks like RBAC is in the pipeline for Wazuh: https://github.com/wazuh/wazuh-api/issues/114.

Thanks,
Paul

jesus.g...@wazuh.com

Sep 18, 2018, 9:01:50 AM
to Wazuh mailing list
Hello again Paul,

Yes, that's the main idea: a large, scalable Elasticsearch cluster where you store data from your customers' hosts. Once you are in Kibana, you create one or more index patterns
per customer.

Since each customer has an X-Pack role, they can use only certain index patterns, and those index patterns limit which indices can be fetched. Duplicate IP addresses should not be a problem, since the customers are on separate
networks all pointing to a common Elasticsearch cluster. From my point of view, your Elasticsearch and Kibana instances will work as a service for your customers.

Wazuh API RBAC is a future feature that will allow us to provide security enhancements, but I think you can work fine with the current features and your desired solution.

Regards,
Jesús

Jeremy Phillips

Sep 18, 2018, 9:44:11 AM
to itpro...@gmail.com, wa...@googlegroups.com
I've used Wazuh in multiple MSP/SaaS-type environments, but it was always a backend service that was never visible or accessible to the customer, so there was less need to segregate individual customers' data.

At first glance, IMO you should look into containers to provide your per-customer segregation. Abstract out each customer's configuration and data through mount points. I don't think the existing container images support redundancy, but there's no reason they can't be extended to support it. Set up a reverse proxy in front of each customer's stack to provide authN against whatever IdM is appropriate for them (OAuth, SAML, standalone, etc.). In this approach, each customer would have their own 'instance' of the stack they can look at, and you could use the network controls of your container orchestrator (e.g., Kubernetes network policies) to segregate network traffic between customers at layer 3/4.
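As a minimal sketch of that layer-3/4 segregation, assuming each customer's stack runs in its own Kubernetes namespace (the namespace and policy names here are hypothetical), a NetworkPolicy that denies all ingress except from pods in the same namespace could look like:

```yaml
# Selects every pod in the (hypothetical) customer-01 namespace and
# restricts ingress to traffic from pods in that same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-customer
  namespace: customer-01
spec:
  podSelector: {}        # empty selector = all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow only same-namespace pods
```

Applying one such policy per customer namespace keeps each stack's internal traffic (manager, Filebeat, proxy) flowing while blocking cross-customer connections; the reverse proxy would then need an explicit additional rule to admit external traffic.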

For the MSSP SOC, have a separate Kibana instance that has access to all the individual customers' Elasticsearch indices, and aggregate searches across those indices.

I'm also sure this is oversimplified and I'm missing something...  :-)

Jeremy

--
You received this message because you are subscribed to the Google Groups "Wazuh mailing list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wazuh+un...@googlegroups.com.
To post to this group, send email to wa...@googlegroups.com.
Visit this group at https://groups.google.com/group/wazuh.
To view this discussion on the web visit https://groups.google.com/d/msgid/wazuh/22e0f8af-a79c-430d-a044-f75bf92550b5%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Paul M

Sep 18, 2018, 10:03:15 AM
to Wazuh mailing list
Jeremy - Thanks for your input. For the foreseeable future, separating the customer data is purely for the usability of MSSP SOC analysts; the intention is not to hand over access so that individual customers can log in as well.

Based on your experience, would you mind sharing how you've set up Wazuh for MSP environments?

Thanks,
Paul

Jeremy Phillips

Sep 18, 2018, 11:12:10 AM
to itpro...@gmail.com, wa...@googlegroups.com
In my circumstances, since we managed the entire system, we did not have the issue of duplicate IPs/hostnames. We also didn't treat our customers differently (so while cust-A and cust-B might have a different product/service, they both had the same SLA, etc.). The only difference was in the rules defined for each product/service, but those were common across all customer instances of a given product/service. We just deployed as the Wazuh documentation or a Wazuh consulting engagement (great service!) recommended, and scaled each layer as we hit bottlenecks. We would add additional metadata using labels for system/customer/etc., and filter/sort on those in Kibana dashboards.

If you set up independent Wazuh stacks for each customer (to avoid hostname/IP conflicts) but had Logstash feed all data to a centralized Elasticsearch cluster, and then added a GUID for each customer agent (maybe a synthetic GUID of customerId-hostname-IP), you could use that in dashboards instead of hostname/IP.
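As a toy illustration of that synthetic GUID (the function and field names are my own, not from Wazuh), one could deterministically hash the customerId-hostname-IP triple, so that duplicate hostname/IP pairs at different customers still get distinct identifiers:

```python
import hashlib

def agent_guid(customer_id: str, hostname: str, ip: str) -> str:
    """Build a deterministic per-agent identifier from customer, hostname, and IP.

    Two agents with the same hostname/IP but different customers get
    different GUIDs, so dashboards can safely key on this field.
    """
    raw = f"{customer_id}-{hostname}-{ip}"
    # A short hash keeps the field compact; the raw string itself works too
    # if human readability matters more than field size.
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()[:12]

# Same inputs always map to the same GUID...
a = agent_guid("customer-01", "web01", "10.0.0.5")
# ...while a duplicate hostname/IP at another customer maps elsewhere.
b = agent_guid("customer-02", "web01", "10.0.0.5")
```

In practice the equivalent string concatenation or hash could be added as a field in the Logstash pipeline, so every event arrives in Elasticsearch already carrying the identifier.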

Jeremy


Paul M

Sep 18, 2018, 12:02:09 PM
to Wazuh mailing list
Again, thanks for your input, Jeremy. Both you and Jesús have been really helpful to me (and hopefully to others who come across this post) in working out what the architecture could look like and how it could work in our use case.


Paul M

Sep 20, 2018, 11:01:32 AM
to Wazuh mailing list
Another item I noticed in the documentation as a way to differentiate unique customers in the combined ELK stack is labels: https://documentation.wazuh.com/current/user-manual/capabilities/labels.html. A label with the customer's name would be set for each customer, and this could even be extended with a label for each site at each customer.
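Following the labels documentation linked above, the per-customer and per-site labels would go in each agent's ossec.conf (the key names and values here are just examples):

```xml
<!-- In each agent's ossec.conf; label keys/values are illustrative -->
<ossec_config>
  <labels>
    <label key="customer">customer-01</label>
    <label key="site">chicago-dc</label>
  </labels>
</ossec_config>
```

The labels are then attached to that agent's alerts, so Kibana dashboards can filter or group by customer and site even when hostnames and IPs overlap.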

Paul

jesus.g...@wazuh.com

Sep 25, 2018, 5:47:44 AM
to Wazuh mailing list
Hello again Paul,

Jeremy made some interesting points about containers, but just to clarify: you don't need a lot of instances to separate environments. Keep in mind
that you can run multiple independent containers on one big instance. Containers are like microservices, isolated from each other, so
that's a different and also great solution.

Many users have large EC2 instances and run customized containers inside those instances. Also, note that you are talking about differentiating customers at the data level,
not at the instance level, which is a different thing.

In any case, feel free to share your experiments and how you did them here. Our community would love to hear about your experience!

Best regards,
Jesús