Kibana monitoring shows inaccurate disk space

Nicholai Tailor

Nov 5, 2018, 10:43:42 AM
to wa...@googlegroups.com
Hello,

When I check the Kibana monitoring page, the disk space usage does not match my "df -h" output.

Am I looking in the right area, and if not, where should I be looking?

Cheers

Stephen

Nov 7, 2018, 10:02:30 AM
to Wazuh mailing list
Hi, maybe it is because Wazuh is using "df -P".

juancarl...@wazuh.com

Nov 8, 2018, 3:03:54 PM
to Wazuh mailing list
Hello,

How different is the information in the Kibana interface from that of df -h?

The difference could be due to outdated information. It's also worth noting that the monitoring page reflects the disk usage of the environment where Kibana is installed; confusion may arise if that environment is different from the Wazuh manager's.

This is a feature of the Kibana software, which, although integrated with Wazuh, is not maintained by the Wazuh team but by Elastic.

For more information, you may ask a question in the Elastic discussion forum, in this case specifically in the X-Pack section: https://discuss.elastic.co/c/x-pack. Or, if you wish, you may ask in their IRC channel: https://webchat.freenode.net/#kibana

Best Regards,
Juan Carlos

Nicholai Tailor

Nov 12, 2018, 12:43:58 PM
to juancarl...@wazuh.com, wa...@googlegroups.com
Thank you for your reply.

I don't think that's correct.

df -P and df -h match.

Also, I'm running a standalone install, so it's all in one. The Kibana interface is showing 64 GB used?

My concern is which one is the correct one. I've already run into an issue where everything went into read-only mode because of low space.

Information needs to be consistent, or what's the point?

[root@waz01 ~]# df -P
Filesystem                   1024-blocks     Used Available Capacity Mounted on
/dev/mapper/vg_local-root        5109760   111068   4998692       3% /
devtmpfs                         3992644        0   3992644       0% /dev
tmpfs                            4004620        0   4004620       0% /dev/shm
tmpfs                            4004620   385984   3618636      10% /run
tmpfs                            4004620        0   4004620       0% /sys/fs/cgroup
/dev/mapper/vg_local-usr        10475520  2774028   7701492      27% /usr
/dev/sda1                        1020588   190020    830568      19% /boot
/dev/mapper/vg_local-home       10229760    36144  10193616       1% /home
/dev/mapper/vg_local-var       104601600 59860240  44741360      58% /var
/dev/mapper/vg_local-var_log    10475520  3823568   6651952      37% /var/log
/dev/mapper/vg_local-tmp         2037760    33832   2003928       2% /tmp
/dev/mapper/wazuh-ossec        104806400 24652044  80154356      24% /var/ossec
tmpfs                             800924        0    800924       0% /run/user/297800546


[root@waz01 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_local-root     4.9G  109M  4.8G   3% /
devtmpfs                      3.9G     0  3.9G   0% /dev
tmpfs                         3.9G     0  3.9G   0% /dev/shm
tmpfs                         3.9G  377M  3.5G  10% /run
tmpfs                         3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/vg_local-usr       10G  2.7G  7.4G  27% /usr
/dev/sda1                     997M  186M  812M  19% /boot
/dev/mapper/vg_local-home     9.8G   36M  9.8G   1% /home
/dev/mapper/vg_local-var      100G   58G   43G  58% /var
/dev/mapper/vg_local-var_log   10G  3.7G  6.4G  37% /var/log
/dev/mapper/vg_local-tmp      2.0G   34M  2.0G   2% /tmp
/dev/mapper/wazuh-ossec       100G   24G   77G  24% /var/ossec
tmpfs                         783M     0  783M   0% /run/user/297800546

Cheers


To view this discussion on the web visit https://groups.google.com/d/msgid/wazuh/e6ed758c-9f6a-44cb-ae8b-ff16fabe3ea0%40googlegroups.com.

juancarl...@wazuh.com

Nov 15, 2018, 10:02:39 AM
to Wazuh mailing list
Hello Nicholai,

I notice from your output that the Used columns of all mounted filesystems, excluding /var/ossec, sum up to about 64 GB. I also see that the rest of the /dev/mapper filesystems mention "local" in their names. Is the /var/ossec partition a network mount, by any chance?
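For reference, that 64 GB figure can be reproduced from the "df -P" output earlier in the thread by summing the Used column (reported in 1024-byte blocks) for every filesystem except /var/ossec; a quick arithmetic sketch:

```python
# Used columns (in KB, i.e. 1024-byte blocks) copied from the
# "df -P" output above, excluding /var/ossec
used_kb = [
    111068,    # /
    0,         # /dev
    0,         # /dev/shm
    385984,    # /run
    0,         # /sys/fs/cgroup
    2774028,   # /usr
    190020,    # /boot
    36144,     # /home
    59860240,  # /var
    3823568,   # /var/log
    33832,     # /tmp
    0,         # /run/user/297800546
]

total_gib = sum(used_kb) / 1024 / 1024  # KB -> MiB -> GiB
print(f"{total_gib:.1f} GiB")  # prints "64.1 GiB"
```

So the 64 GB shown in Kibana is consistent with the total used space of the local filesystems, which supports the idea that /var/ossec is not being counted.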

Relevant to this, by default Wazuh periodically runs a command that monitors disk space, together with a rule that creates an alert if any partition reaches 100% usage. If you wish to get an alert at a lower percentage, you can create a custom rule such as this:

  <rule id="100002" level="7" ignore="7200">
    <if_sid>530</if_sid>
    <match>ossec: output: 'df -P': /dev/</match>
    <regex>9[5-9]%</regex>
    <description>Partition usage above 95% (disk space monitor).</description>
    <group>low_diskspace,pci_dss_10.6.1,gpg13_10.1,gdpr_IV_35.7.d,</group>
  </rule>
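To sanity-check the <regex> pattern: 9[5-9]% matches usage values from 95% to 99% anywhere in the monitored "df -P" line (100% is already covered by the default rule). A minimal check, where the first sample line uses made-up numbers for illustration and the second is taken from the output above:

```python
import re

# Same pattern as the <regex> element of the custom rule
pattern = re.compile(r"9[5-9]%")

lines = [
    # Hypothetical nearly-full partition (illustrative numbers)
    "ossec: output: 'df -P': /dev/mapper/vg_local-var 104601600 99371520 5230080 96% /var",
    # Real line from the df -P output above (58% usage)
    "ossec: output: 'df -P': /dev/mapper/vg_local-var 104601600 59860240 44741360 58% /var",
]

for line in lines:
    print(bool(pattern.search(line)))  # prints True, then False
```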

This is part of the "Command monitoring" capability, which allows you to easily extend Wazuh's monitoring beyond log collection. You may find more information about it here: https://documentation.wazuh.com/current/user-manual/capabilities/command-monitoring/index.html
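For context, the periodic "df -P" command mentioned above is driven by a command-monitoring entry in the manager's ossec.conf. The stock configuration looks roughly like this (treat the exact frequency value as an assumption; check your own install):

```xml
<localfile>
  <log_format>full_command</log_format>
  <command>df -P</command>
  <frequency>360</frequency>
</localfile>
```

The command output is then matched by rule 530 (the <if_sid> referenced in the custom rule above).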

I hope this helps you monitor your system and avoid any issues caused by low disk space.

Best Regards,
Juan Carlos