Analyze Tomcat Logs

swapnils

Dec 6, 2022, 3:59:54 PM
to Wazuh mailing list

Hello Team,

I need your suggestion in configuring tomcat logs into Wazuh. Here is my current scenario.

We have hundreds of application servers whose Tomcat logs are routed to a very old ELK setup. Since the setup is so dated, we are planning either to upgrade it or to move this application monitoring to Wazuh.

Challenge:
This old ELK stack receives around 1000k to 1500k hits per minute, and the resulting index grows to 300 GB/day. These indices get purged daily due to a storage crunch.
If I configure these events to be recorded in Wazuh, there will be a direct impact on Wazuh's storage.
Wazuh indices are currently configured for 90-day retention, and logall_json is also enabled. With this configuration, I expect storage would be exhausted in no time.

Question:
Do I have an option to set up a specific rule/filter for these Tomcat logs so that there is no impact on the storage of the wazuh-manager and wazuh-indexer? If yes, how can I achieve it? Or is it better to upgrade the existing ELK stack?

Thanks in advance!
swapnils

Emiliano Zorn

Dec 7, 2022, 2:22:19 PM
to Wazuh mailing list

Hello swapnils, hope you are doing well.


There are different ways to analyze this.

First, I would like to know whether the events you are talking about are alerts or raw data.
This makes a big difference when calculating disk space, since not every log (raw data) becomes an alert.
For an index to reach 300 GB, you would need to ingest approximately 1500 GB of raw data (logs) per day. That works out to approximately 5260 alerts per second and 52609 events per second.


Taking the figure of 1500 events per second, that would give us a total of roughly 45 GB of raw data and 9 GB of alerts (indices) per day.
You can see that the difference in GB is enormous, so we should be careful to distinguish what is an event and what is an alert.
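As a rough sanity check of those figures, here is a hypothetical back-of-envelope calculation (the ~330 bytes-per-event average is an assumption derived from the numbers above, not a Wazuh constant):

```python
# Hypothetical sizing arithmetic; 330 bytes/event is an assumed average size.
BYTES_PER_EVENT = 330
SECONDS_PER_DAY = 86_400

def daily_gb(events_per_second: float, bytes_per_event: int = BYTES_PER_EVENT) -> float:
    """Approximate raw-log volume in GB/day for a given events-per-second rate."""
    return events_per_second * bytes_per_event * SECONDS_PER_DAY / 1e9

print(f"{daily_gb(52609):.0f} GB/day")  # roughly the 1500 GB/day figure above
print(f"{daily_gb(1500):.0f} GB/day")   # close to the 45 GB/day figure above
```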

As for logall_json being enabled: this takes up a lot of disk space, because every received event is being saved.

The file /var/ossec/logs/archives/archives.json contains all events, whether they tripped a rule or not. Events are sent to this cold storage when the logall_json setting is set to yes.
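For reference, a minimal sketch of how to disable it on the manager (the setting lives in the global section of /var/ossec/etc/ossec.conf):

```xml
<!-- /var/ossec/etc/ossec.conf on the wazuh-manager -->
<ossec_config>
  <global>
    <!-- Stop writing every received event to archives.json -->
    <logall_json>no</logall_json>
  </global>
</ossec_config>
```

A restart of the wazuh-manager service is needed for the change to take effect.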


Regarding your question:
Do I have an option to setup a specific rule / filter for these tomcat logs so that there will not be an impact on storage of a wazuh-manager & wazuh-indexer?


The filters that Wazuh itself can apply run only after the event has been ingested, that is, once the log is already on the server.
One way around this is to filter the events with Filebeat, before the information reaches the wazuh-manager.
But first I would like to know what kinds of conditions you intend to use in these rules/filters to decide which events should not be sent to the wazuh-manager.

Another option is to create a lifecycle policy to automate the deletion of indices according to certain conditions:
https://wazuh.com/blog/wazuh-index-management/
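As a sketch of what such a policy can look like (this assumes the ISM plugin bundled with the wazuh-indexer and the default wazuh-alerts-* index pattern; the 90d threshold is just an example):

```json
{
  "policy": {
    "description": "Delete wazuh-alerts indices older than 90 days (example threshold)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ],
    "ism_template": { "index_patterns": ["wazuh-alerts-*"] }
  }
}
```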

Regards.

swapnils

Dec 8, 2022, 7:36:35 AM
to Wazuh mailing list
Hello Emiliano,
Thank you for the detailed explanation!

I will need a little more time to absorb the scenario you explained, as I am absolutely new to this. Please excuse me if I sound dumb.
I am attaching an image from the outdated ELK that is currently in use (to be discarded or upgraded). I am still working out how this setup was deployed. For now, I believe every client has Filebeat configured and sending logs to this stack.
As you rightly pointed out, I mixed up events and alerts. So 500k 'events'/minute is the right term to use?

The clients currently have both wazuh-agent and Filebeat installed: Filebeat sends logs to ELK, and wazuh-agent sends logs to the manager.
If I replicate the Filebeat configuration in ossec.conf, logall_json will eat up a huge amount of disk space.

Hope I am making sense.
Regards,
events.png
alerts.png

swaps

Dec 15, 2022, 6:21:43 AM
to Wazuh mailing list
Hello,
Did you get a chance to go through the attached snippets? They should add clarity (in case I am fumbling with words).

Thanks,

swaps

Dec 19, 2022, 1:01:26 PM
to Wazuh mailing list
Hello,
Any thoughts/comments/suggestions?

Emiliano Zorn

Dec 21, 2022, 1:51:50 PM
to Wazuh mailing list
Hello Swaps!

Sorry for the delay, I just had the chance to view this.

So, for example, here we are talking about alerts, because we are seeing the counts directly in the Dashboard; that means we are looking at logs that have already triggered an alert.

Having said that, you are correct that nearly 500k alerts per minute are being ingested.

What you are looking for, then, is a way to filter the events before they reach the Wazuh manager, since ingesting all of them would put a heavy load on the manager's disk space.

Rules and decoders are not viable for this, since they are applied only to logs that have already reached the manager.

So the best solution would be to filter directly in Filebeat.
Here is a link that can help you with the filtering process and will surely guide you through the finer technical details.

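As a sketch of that approach (the match conditions here are hypothetical; adjust them to whatever Tomcat messages you decide are noise), Filebeat's drop_event processor can discard events on the client before they are shipped:

```yaml
# filebeat.yml on the client (hypothetical filter conditions)
processors:
  # Drop low-value Tomcat noise before it leaves the host
  - drop_event:
      when:
        or:
          # Example: successful access-log hits (status 200/304)
          - regexp:
              message: '^\d+\.\d+\.\d+\.\d+ .* (200|304) '
          # Example: catalina DEBUG lines
          - contains:
              message: 'DEBUG'
```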
Regards.