Kibana failing at showing Alerts history


Whit Blauvelt

Mar 6, 2018, 10:53:15 AM
to Wazuh mailing list
Hi,

This used to be at least partly working here. But now going to Alerts for any agent in Kibana shows 0, even when my mailbox has multiple recent alerts for the systems in question. Historically we set up agents on Linux systems first. Kibana appears to have the older history for those agents. More recently we've installed the agent on a bunch of Windows systems, which are resulting in plenty of emails -- but no history of the alerts in Kibana. Looking at one Linux agent, it looks like it only has data up to the end of January. 

We're running the Ubuntu version, currently 3.2.1-1. The Linux agents have had no connection problems; the Windows agents we just had to upgrade to stop some from failing. Still, nothing making it to the Kibana screen beyond the basics of the connections being there. 

Whit

jesus.g...@wazuh.com

Mar 6, 2018, 11:18:03 AM
to Wazuh mailing list
Hi Whit, there are a few possible causes for the zero-alerts problem. The first step is to check whether Kibana is showing alerts on the main Discover tab, outside the Wazuh App.
Please go to Kibana -> Discover and set the time filter to 24 hours or whatever range you want to check. If Kibana redirects you to Management and asks for an index pattern, select
wazuh-alerts-3.x-* and click the star icon in the top right corner to confirm that you want to use it in the Kibana components (not relevant for the Wazuh App, only for the
Kibana components).

Once you are on Discover you should see alerts, but let's filter by a specific agent.id. Assuming you are not seeing alerts in the Wazuh App
from the agent with id 010, type agent.id: 010 in the search bar, or simply click Add a filter -> agent.id -> is -> 010. Check whether you are seeing recently added alerts from
agent 010.

Here is the main point: if you are seeing any alerts, open one of them by clicking the triangle on the left side of the alert to see its details.
Once it's open you should see a field named manager.name; note it down.

Go back to the Wazuh App and look at its top right corner: you should see a star icon followed by <manager hostname> - <index pattern selected>.
If the manager hostname is different from the manager.name in the alert, the solution is as simple as going to Wazuh App -> Settings and clicking the Check
connection icon next to your stored API; it's the reload icon placed between the trash (delete) icon and the pencil (edit) icon. This function also
refreshes your manager.name filter; I've run into this myself a few times.

If the above tip didn't solve the issue, please let us know and paste the output of the following command:

# cat /var/log/elasticsearch/elasticsearch.log

If the log is too big, filter it as follows:

# grep ERROR /var/log/elasticsearch/elasticsearch.log
# grep WARN /var/log/elasticsearch/elasticsearch.log
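The filtering above can be sketched on an inline sample, no cluster needed (the log lines below are made up for illustration and stand in for the real /var/log/elasticsearch/elasticsearch.log):

```shell
# Hypothetical sample log standing in for elasticsearch.log
log=$(mktemp)
cat > "$log" <<'EOF'
[2018-03-06T10:00:01][INFO ] node started
[2018-03-06T10:05:12][WARN ] high disk watermark exceeded
[2018-03-06T10:07:44][ERROR] failed to apply template
EOF
grep ERROR "$log"   # prints the one ERROR line
grep -c WARN "$log" # prints 1
rm -f "$log"
```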

Best regards,
Jesús

Whit Blauvelt

Mar 6, 2018, 11:28:55 AM
to Wazuh mailing list


On Tuesday, March 6, 2018 at 11:18:03 AM UTC-5, jesus.gonzale wrote:
Hi Whit, there are a few possible causes for the zero-alerts problem. The first step is to check whether Kibana is showing alerts on the main Discover tab, outside the Wazuh App.
Please go to Kibana -> Discover and set the time filter to 24 hours or whatever range you want to check. If Kibana redirects you to Management and asks for an index pattern, select
wazuh-alerts-3.x-* and click the star icon in the top right corner to confirm that you want to use it in the Kibana components (not relevant for the Wazuh App, only for the
Kibana components).

Hi Jesus,

It showed nothing at first, but was on index pattern wazuh-alerts-3.x-2018.01* -- which would account for it not showing anything beyond January. Changing to wazuh-alerts-3.x-* does show current alerts. What the heck are those other choices doing there? Why is it defaulting to them? How do I prevent this?

Thanks,
Whit

Whit Blauvelt

Mar 6, 2018, 11:30:27 AM
to Wazuh mailing list
I'm not seeing a "star icon in the top right corner".

- W

jesus.g...@wazuh.com

Mar 6, 2018, 11:37:16 AM
to Wazuh mailing list
Ok Whit, happy to help. Kibana uses an index pattern to filter the data, and the Wazuh App has its own default index pattern.
Outside the Wazuh App you have another (maybe the same one, but they can differ). In the Wazuh App you can go to
Settings -> Pattern and select your desired pattern, or simply use the pattern selector in the top right corner of the Wazuh App. If you
are using components outside the Wazuh App, you must take care of the pattern under Kibana -> Management -> Index patterns.

Also, you can check your stored indices with curl elastic_ip:9200/_cat/indices whenever you want to verify whether something exists.
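A rough sketch of why the pattern choice matters: a Kibana index pattern is essentially a glob over index names, so a shell `case` glob can stand in for it (index names taken from this thread):

```shell
# Compare the January-only pattern against the full pattern for three
# real index names from this thread.
for index in wazuh-alerts-3.x-2018.01.23 \
             wazuh-alerts-3.x-2018.02.14 \
             wazuh-alerts-3.x-2018.03.06; do
  case "$index" in
    wazuh-alerts-3.x-2018.01*) echo "$index: matches both patterns" ;;
    wazuh-alerts-3.x-*)        echo "$index: matches only wazuh-alerts-3.x-*" ;;
  esac
done
```

A dashboard pinned to wazuh-alerts-3.x-2018.01* will simply never see the February and March indices, which is exactly the "data stops at the end of January" symptom.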

Kind regards,
Jesús

Whit Blauvelt

Mar 6, 2018, 12:16:08 PM
to Wazuh mailing list
Hi Jesús,

The top right choices in the Wazuh App don't fix it. When I select wazuh-alerts-3.x-* there, it's still broken.

We only want to use the Wazuh App, nothing else in Kibana for now. How do we (1) fix this and (2) get rid of the two instances of the garbage index pattern "wazuh-alerts-3.x-2018.01*" that show up in both Kibana and Wazuh?

Thanks,
Whit

Whit Blauvelt

Mar 6, 2018, 12:21:52 PM
to Wazuh mailing list
On Tuesday, March 6, 2018 at 11:37:16 AM UTC-5, jesus.g...@wazuh.com wrote:
In the Wazuh App you can go to
Settings -> Pattern and select your desired pattern, or simply use the pattern selector in the top right corner of the Wazuh App.

Jesus,

The top right corner in fact defaults to the right setting, except the results show that it's using the same wrong setting (January-only) that Kibana defaults to. As for Settings > Pattern, what's the path to Settings, please? I don't see "Settings" at all.

Thanks,
Whit 

jesus.g...@wazuh.com

Mar 6, 2018, 12:32:28 PM
to Wazuh mailing list
Hi Whit, by "Settings" I meant the "gear" icon in the top right corner of the Wazuh App. If you click there you will
see the Settings section of the Wazuh App; once you are there, click on "Pattern" and you will see a selector with the
valid index patterns. You could have more index patterns than the ones shown in the Wazuh App; that's because we filter the list of index patterns
to let the user select only those that match Wazuh alerts. Also remember that whenever you go, for example, to the Overview
General tab in the Wazuh App, you will see a filter just under the search bar. It will say "manager.name: foooo", where "foooo"
should match the manager.name field (mentioned in our last emails) of the agents' alerts.

If you have any more questions or trouble, it would help if you paste here a screenshot from
Kibana -> Discover with 24 hours on the time filter, and another screenshot from
Kibana -> Wazuh App -> Overview General with 24 hours on the time filter as well.

Best regards,
Jesús

Whit Blauvelt

Mar 6, 2018, 1:50:38 PM
to Wazuh mailing list
On Tuesday, March 6, 2018 at 12:32:28 PM UTC-5, jesus.g...@wazuh.com wrote:
Hi Whit, by "Settings" I meant the "gear" icon in the top right corner of the Wazuh App. If you click there you will
see the Settings section of the Wazuh App; once you are there, click on "Pattern" and you will see a selector with the
valid index patterns. You could have more index patterns than the ones shown in the Wazuh App; that's because we filter the list of index patterns
to let the user select only those that match Wazuh alerts. Also remember that whenever you go, for example, to the Overview
General tab in the Wazuh App, you will see a filter just under the search bar. It will say "manager.name: foooo", where "foooo"
should match the manager.name field (mentioned in our last emails) of the agents' alerts.

Thanks Jesús. The Patterns page under the * icon shows wazuh-alerts-3.x-* as selected, between two instances of the January-only pattern. But the actual selection is by January-only. This sucks. 

If you have any more questions or trouble, it would help if you paste here a screenshot from
Kibana -> Discover with 24 hours on the time filter, and another screenshot from
Kibana -> Wazuh App -> Overview General with 24 hours on the time filter as well.

Kibana Discover works right, once the right filter is chosen. Note I never want to see the wrong filter defaulted to again, if possible. But it does work given that. The Wazuh App, however, is broken.
wazuhbroken.png
kibanadiscover.png

Whit Blauvelt

Mar 6, 2018, 1:59:16 PM
to Wazuh mailing list
Ah, I see, Discover does work, even in Wazuh App. It's the Panels that are pretty totally broken -- except for January.

So it looks like while Discover respects the search string set, the Panels are defaulting to the January-only string, which occurs both before and after the one that works. No idea how that even got there. For our purposes, if there's a way to lock things exclusively to the string that just works, that would be ideal.

Thanks again,
Whit

jesus.g...@wazuh.com

Mar 6, 2018, 2:41:47 PM
to Wazuh mailing list
Ok Whit, we are getting close to catching your weird situation. Just to be sure, and sorry if I'm repeating myself:
from your screenshots I see that you have some alerts. Could you click on the little triangle next to each alert in
Discover to see the alert details? Once it's opened, look at the manager.name field and remember it. Back in
the Wazuh App, I see in your screenshot that the manager.name filter is for rpc-ossec.

Is the manager.name field matching in both Discover and the Wazuh App?

Since you have alerts but you are not seeing them in the Wazuh App, I have some questions for you:

- Are you coming from a Wazuh 2.x installation that you upgraded to Wazuh 3.x?
- Could you paste the output of the following commands, please?

# curl elastic_ip:9200/_cat/indices

If you are indexing new alerts, you should see multiple indices with the prefix wazuh-alerts-3.x- in the output of the above command.
Could you please note down the most recently added index with that prefix and execute the following command?

# curl elastic_ip:9200/wazuh-alerts-3.x-123-123-123/_mapping

Where 123 are example numbers; replace them with your desired index.
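If it helps, here is a small sketch of how to pick the newest alerts index automatically from the _cat/indices output (sample lines abbreviated from real output; the date suffixes are zero-padded YYYY.MM.DD, so a plain sort is chronological):

```shell
f=$(mktemp)
# Abbreviated sample of `curl elastic_ip:9200/_cat/indices` output
cat > "$f" <<'EOF'
yellow open wazuh-alerts-3.x-2018.01.23     xZdtoPsU 5 1  470528 0 116.9mb 116.9mb
yellow open wazuh-monitoring-3.x-2018.03.01 x9eWBpCI 5 1   28080 0     7mb     7mb
yellow open wazuh-alerts-3.x-2018.03.06     X4mixkuy 5 1 2034050 0   1.6gb   1.6gb
EOF
# Third column is the index name; keep only the alerts indices,
# sort, and take the last (newest) one.
awk '$3 ~ /^wazuh-alerts-3\.x-/ {print $3}' "$f" | sort | tail -n 1
# prints: wazuh-alerts-3.x-2018.03.06
rm -f "$f"
```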

Finally, could you provide the output of the following command:

# curl elastic_ip:9200/_cat/templates

I know that the above commands are tedious and weird, but we need them to do a more in-depth review of your environment.
Thanks for your patience; have a nice day.


Kind regards,
Jesús

Whit Blauvelt

Mar 6, 2018, 3:09:27 PM
to Wazuh mailing list
On Tuesday, March 6, 2018 at 2:41:47 PM UTC-5, jesus.g...@wazuh.com wrote:
Ok Whit, we are getting close to catching your weird situation. Just to be sure, and sorry if I'm repeating myself:
from your screenshots I see that you have some alerts. Could you click on the little triangle next to each alert in
Discover to see the alert details? Once it's opened, look at the manager.name field and remember it. Back in
the Wazuh App, I see in your screenshot that the manager.name filter is for rpc-ossec.

Is the manager.name field matching in both Discover and the Wazuh App?

Yes.

Since you have alerts but you are not seeing them in the Wazuh App, I have some questions for you:

- Are you coming from a Wazuh 2.x installation that you upgraded to Wazuh 3.x?

Originally. But the Kibana part has only been since Wazuh 3, which we installed some months ago now.
 
- Could you paste the output of the following commands, please?

# curl elastic_ip:9200/_cat/indices

 root@rpc-ossec:~# curl 127.0.0.1:9200/_cat/indices
yellow open wazuh-alerts-3.x-2018.01.23     xZdtoPsUS-GuS7aMcOkyhQ 5 1  470528   0  116.9mb  116.9mb
yellow open wazuh-alerts-3.x-2017.12.14     nexkLe_wSEeJQy6YCsDUog 5 1      22   0  193.5kb  193.5kb
yellow open wazuh-alerts-3.x-2018.02.14     RkSi_xCPSu2GmLswDOFItA 5 1  565777   0  140.7mb  140.7mb
yellow open wazuh-monitoring-3.x-2018.01.28 mNAwjSHeQ96oH4WDIfVH9Q 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.02.27     KTihdSveTiy0wOC-bJXaNw 5 1  465258   0  247.2mb  247.2mb
yellow open wazuh-alerts-3.x-2018.01.06     5OlPvulBTE64BQCe6unjDQ 5 1  185906   0   55.5mb   55.5mb
yellow open wazuh-alerts-3.x-2017.12.21     WNn0wMvMQDiog7jVMS2lEA 5 1  415888   0    100mb    100mb
yellow open wazuh-monitoring-3.x-2018.03.01 x9eWBpCIRoKLQrYPeNFwBw 5 1   28080   0      7mb      7mb
yellow open wazuh-alerts-3.x-2018.02.20     YN3aNvQLTQqn89d3dCV26Q 5 1  298583   0   90.6mb   90.6mb
yellow open wazuh-alerts-3.x-2018.01.25     xIXQ8aFaQF6sDvL7sjd2uA 5 1  523338   0  130.3mb  130.3mb
yellow open wazuh-alerts-3.x-2018.01.01     PUAqid3ASwm1IkTrFYTDvQ 5 1  304593   0  105.1mb  105.1mb
yellow open wazuh-alerts-3.x-2018.02.06     -AGfjk8RSDGL8aXYwe8Ikw 5 1  386743   0  110.9mb  110.9mb
yellow open wazuh-alerts-3.x-2018.01.27     wUFSL_jSScmmB6puCN7ong 5 1  166660   0   47.1mb   47.1mb
yellow open wazuh-alerts-3.x-2018.03.06     X4mixkuySMCGBgFjLKRWUA 5 1 2034050   0    1.6gb    1.6gb
yellow open wazuh-alerts-3.x-2018.01.11     lf9ixbqtRnCqBQnSjzKYEw 5 1  361340   0   95.5mb   95.5mb
yellow open wazuh-monitoring-3.x-2017.12.26 -a8pa1ZXQkKQ2eu3YeQWVA 5 1    5904   0      1mb      1mb
yellow open wazuh-monitoring-3.x-2017.12.18 mlW8SSjnRdOVMsZMw5R-Ug 5 1    1419   0  359.6kb  359.6kb
yellow open wazuh-alerts-3.x-2018.01.19     rPXdWmHrRx612ITMPf1l4A 5 1  375768   0   97.2mb   97.2mb
yellow open wazuh-alerts-3.x-2018.01.05     sj4e97OCTWmLshNq5vjbog 5 1  834826   0  240.9mb  240.9mb
yellow open wazuh-monitoring-3.x-2018.02.28 u6S5SG5oRguaS-4AfoDN0Q 5 1   14900   0    4.4mb    4.4mb
yellow open wazuh-monitoring-3.x-2018.02.04 gcpBcBREQOGF829xW6u5bw 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.15 8zFrmdOCQi2Sgg5I58u1UA 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.19 26tf51zmRnKh3rMvcmHhLQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.29 C-tKmp8SQFGr39YyyIIWug 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.30 0x9cXHyFSbKopQ_KTt4A2A 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2017.12.15     HoRnLPlAQ6GhVumokEj1lg 5 1      26   0  257.7kb  257.7kb
yellow open wazuh-monitoring-3.x-2018.01.17 oGHuN-arRjub1IKHkV6Idg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.21 n0lJ9lE7Tfe3my5tMEDqKg 5 1    9072   0    1.4mb    1.4mb
yellow open wazuh-alerts-3.x-2017.12.16     H_9mxSflQ1CoLGANN1wDzw 5 1       2   0   34.4kb   34.4kb
yellow open wazuh-alerts-3.x-2018.01.31     iaZVxYWNTDGv1H6rTkIWXA 5 1  333536   0   85.8mb   85.8mb
yellow open wazuh-alerts-3.x-2018.01.08     lhOVmpjeRm6hPBJ3fFkDug 5 1  521471   0  126.9mb  126.9mb
yellow open wazuh-monitoring-3.x-2018.01.26 NPrerDIDRvOqSL8RWwnSXw 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.02.09     0l5pF5IiSta1IIKiVIl0Hw 5 1  447422   0  113.1mb  113.1mb
yellow open wazuh-alerts-3.x-2018.02.26     JCCJI_hvQGedCnixE_dBFA 5 1  397132   0  184.6mb  184.6mb
yellow open wazuh-alerts-3.x-2018.03.02     f-812fcHRS6qDbf9hHhOIg 5 1 1151515   0  755.2mb  755.2mb
yellow open wazuh-alerts-3.x-2018.02.15     D90HxZeOTY2r-p2wjdpE-Q 5 1  372235   0   97.2mb   97.2mb
yellow open wazuh-monitoring-3.x-2017.12.21 IkuIk9WFRDeLamdJQbVOug 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.01.20     MksfyNpCQyqeFVR7nom7cQ 5 1  159574   0   44.3mb   44.3mb
yellow open wazuh-monitoring-3.x-2018.01.08 IOzj9sNpTVCTHCLWCy7zfg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.06 1f77l1u9SpqTWniDO6iz4w 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.23 coL_muRkQCGJ--iDlkvPOQ 5 1    8722   0    2.3mb    2.3mb
yellow open wazuh-monitoring-3.x-2018.01.11 NMVNIhwuS9qSM6tf74FiCA 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.02.18     NarzyMmYQYW6YQQ5KRfglw 5 1  127061   0   36.5mb   36.5mb
yellow open wazuh-alerts-3.x-2018.02.16     xQ3nmEVpTCyxVTh48pMTRA 5 1  494863   0  130.7mb  130.7mb
yellow open wazuh-monitoring-3.x-2018.01.09 -fDjFVjFTVWkxdcyK6NKrw 5 1    8928   0    1.2mb    1.2mb
yellow open wazuh-monitoring-3.x-2018.01.20 k9tP_PGIRbmzP6HeLVoG2A 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.02.24     ed86kynsRj6AQ3s3Z9grAQ 5 1  248906   0    127mb    127mb
yellow open wazuh-alerts-3.x-2017.12.28     s2S_5lmDTvGwmJz0m5lGFw 5 1  793003   0  226.1mb  226.1mb
yellow open wazuh-monitoring-3.x-2018.02.07 my2kcvhjT4u7fGN0pq70bQ 5 1    8866   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.18 9OEriQyESbCFRNUf_67LsA 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2017.12.23 2QKr_w63RZ6OFfIv58hNkA 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.02.08     lURC7LZOSHa1YiYRrx4vhg 5 1  469142   0  118.1mb  118.1mb
yellow open wazuh-monitoring-3.x-2018.02.02 uQXpdqL8S5CC899bDOlz2g 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.01.26     2UHjmR0_QnmBDD71L8XERQ 5 1  421850   0  107.2mb  107.2mb
yellow open wazuh-alerts-3.x-2018.03.01     Xzq1OXFLRfKDXXs667LCQQ 5 1 1695484   0    1.1gb    1.1gb
yellow open wazuh-monitoring-3.x-2018.02.22 KU-JpuASSbWtVr-ZNDlTfA 5 1    8820   0    1.6mb    1.6mb
yellow open wazuh-alerts-3.x-2017.12.18     zC3eLql_TOytbpvWwn81TA 5 1       4   0   53.1kb   53.1kb
yellow open wazuh-alerts-3.x-2018.01.13     GxcKVbm_TnC0afjOvPUAyA 5 1  165581   0   48.2mb   48.2mb
yellow open wazuh-alerts-3.x-2018.02.01     _VRuTx2MR7W5ACxpq7XQmQ 5 1  662496   0  159.4mb  159.4mb
yellow open wazuh-monitoring-3.x-2018.02.03 uXl6YGjMS_S2wiSUdc7idw 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.25 W8Er4Vy5Q2S9NUOSBODDdg 5 1    8866   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.22 FxY7tf6USkeUfidfYBoBqQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2017.12.29     G1u5PppcTiSGuJ25BGygsg 5 1  705675   0  214.1mb  214.1mb
yellow open wazuh-alerts-3.x-2018.02.11     yJghVlYXRDejZ0LBFJCzeg 5 1  121006   0   33.5mb   33.5mb
yellow open wazuh-monitoring-3.x-2018.02.24 -3G3fApOQX-iFXsmhuew4A 5 1    9360   0      3mb      3mb
yellow open wazuh-alerts-3.x-2018.01.17     d08LR9m_Sfq3PV4qPZan2A 5 1  504545   0  125.1mb  125.1mb
yellow open wazuh-alerts-3.x-2018.02.02     2Vkc6lQKTtOhxxspVOSVcg 5 1  729822   0  176.7mb  176.7mb
yellow open .kibana                         CwZMvFKsShy_0KhPm4ejPw 1 1     271 154  223.8kb  223.8kb
yellow open wazuh-monitoring-3.x-2018.01.21 E296lq6QRWuVa0PDkOmk7A 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.02.25     46L0XyY_Qfi59pruVFGRCw 5 1  199040   0  107.2mb  107.2mb
yellow open wazuh-monitoring-3.x-2018.01.16 oWCvAnebSjW5pX0XhTTFBQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.01 rGS2rIvNTKum_IbWX9YcCg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.24 q6icBLlzTB6St8Vf1RcsWQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.27 8WUkt8TMQcuUzA3ijnF94A 5 1   10842   0    3.4mb    3.4mb
yellow open wazuh-alerts-3.x-2017.12.22     mF6JgEo4S1We0VA3TeIrHw 5 1  357057   0  106.2mb  106.2mb
yellow open wazuh-monitoring-3.x-2018.01.05 F5b1RP3aRr67eDFwf1YxdQ 5 1    8618   0    1.1mb    1.1mb
yellow open wazuh-alerts-3.x-2017.12.26     -peNX9qLRyW2a_1qAzjTgQ 5 1  462999   0  117.5mb  117.5mb
yellow open wazuh-alerts-3.x-2018.02.13     Ezlunt99RYiz0M3SZGFevA 5 1  537002   0  135.4mb  135.4mb
yellow open wazuh-monitoring-3.x-2017.12.31 iPKBzGAISuyVcsTMgDY73w 5 1    5904   0  995.9kb  995.9kb
yellow open wazuh-alerts-3.x-2018.01.30     zTckgSOqQYudK4i3Xucn4w 5 1  624670   0  147.4mb  147.4mb
yellow open wazuh-monitoring-3.x-2018.01.15 twqN94JBQZ-rzNGAGjiDBw 5 1    8928   0    1.3mb    1.3mb
yellow open .wazuh                          XlR5cbvDSMinmqF4PTYJpw 5 1       2   0   20.2kb   20.2kb
yellow open wazuh-alerts-3.x-2018.01.28     EU7YUaxDQwi8dGakxBa4zQ 5 1  117245   0   32.4mb   32.4mb
yellow open wazuh-monitoring-3.x-2018.01.02 OHBJcSviS7-b3wcs4OejwQ 5 1    6222   0    1.1mb    1.1mb
yellow open wazuh-monitoring-3.x-2018.01.04 icAFIQr_RtGGSImquqR6QA 5 1    8157   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2017.12.27 hJqiKRXzS2Oc1pSqnm-0Mg 5 1    5863   0      1mb      1mb
yellow open wazuh-monitoring-3.x-2018.03.03 _EaqW5e4Q1y1oU8ETzOMOg 5 1   28080   0    7.1mb    7.1mb
yellow open wazuh-alerts-3.x-2017.12.24     MzKkHeHpRmSDA4yzwT7spg 5 1  296160   0   95.4mb   95.4mb
yellow open wazuh-alerts-3.x-2018.02.04     bmAgI_5uRDKvfx7YkaWPqg 5 1  121400   0   33.7mb   33.7mb
yellow open wazuh-monitoring-3.x-2018.02.26 sJcl2aT1SbGxgYXADNbeYQ 5 1    9647   0    3.1mb    3.1mb
yellow open wazuh-monitoring-3.x-2018.01.10 6mQMM6VWQtWZdy9u3S8TpQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.03.03     lHlGb4oAQ_6yMCb-gddL1g 5 1  470050   0  333.1mb  333.1mb
yellow open wazuh-monitoring-2017.12.11     wksfnF_7RMWtF3m-d0N_SA 5 1       1   0      6kb      6kb
yellow open wazuh-alerts-3.x-2018.01.03     V5_4st0nSdWklbvebgUnOA 5 1  838713   0  227.4mb  227.4mb
yellow open wazuh-alerts-3.x-2018.03.04     YaTdy15_RJSPItNm3739_A 5 1  372823   0  275.3mb  275.3mb
yellow open wazuh-monitoring-3.x-2017.12.22 4Ls5wFBrRZG6QrERbnKViw 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.02.03     a8Hd30WASPO6CtvqRttHRw 5 1  163777   0   45.6mb   45.6mb
yellow open wazuh-monitoring-3.x-2018.02.13 jKqufq22QbyrbaL2hr2-HQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.05 5G1p2XMDRp6QMBIXlItglg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2017.12.17     sDzltYyBRXmZRrDhqp7wrw 5 1       1   0   17.6kb   17.6kb
yellow open wazuh-monitoring-3.x-2017.12.25 SuRs4LuLSmCWk2fRaOblbQ 5 1    5904   0      1mb      1mb
yellow open wazuh-monitoring-3.x-2018.03.06 17IgrCtxSGuJZ1CrDi8ktw 5 1   23595   0    5.6mb    5.6mb
yellow open wazuh-monitoring-3.x-2018.01.12 vYUCFMFnRcSvhdpY7OmEjQ 5 1    8928   0    1.2mb    1.2mb
yellow open wazuh-alerts-3.x-2018.02.19     5DhypetxQ9m50OfsZwnuvQ 5 1  193260   0   58.7mb   58.7mb
yellow open wazuh-monitoring-3.x-2018.02.11 YU0QO2JoQ_Wopf8o5OlBkg 5 1    8866   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.25 _t_gXgBkRn-hCTRDlS62Bw 5 1    9295   0    2.9mb    2.9mb
yellow open wazuh-alerts-3.x-2018.01.22     AttNVx60R9uZoiliGuTYAw 5 1  365887   0   92.1mb   92.1mb
yellow open wazuh-alerts-3.x-2018.02.21     nKa4EoJzRvyI78BLmeKLGA 5 1  456259   0  132.9mb  132.9mb
yellow open wazuh-monitoring-3.x-2017.12.15 PZZYOupoSKGDiy2i2RXifw 5 1       0   0    1.2kb    1.2kb
yellow open wazuh-alerts-3.x-2018.02.12     4YdKkLUiTq2l_7yvHaokUg 5 1  343523   0     88mb     88mb
yellow open wazuh-alerts-3.x-2018.01.21     2bVOsVoLR86dT5CBD3vbfw 5 1  122405   0   34.6mb   34.6mb
yellow open wazuh-alerts-3.x-2018.02.28     0xfNGyOjSGuPF4C7Ixvjwg 5 1 1105190   0  691.6mb  691.6mb
yellow open wazuh-monitoring-3.x-2017.12.19 78vTSHxpTPqVmW8HWKy-xQ 5 1    4730   0  754.6kb  754.6kb
yellow open wazuh-alerts-3.x-2017.12.23     IK1MfFiLQreHLi-z2Svg-Q 5 1  407568   0  134.1mb  134.1mb
yellow open wazuh-alerts-3.x-2017.12.19     7YhuWilsQWKNriHUConJnQ 5 1  163985   0   35.4mb   35.4mb
yellow open wazuh-monitoring-3.x-2017.12.24 x24-F6OuTFqIWqIqfdcJlg 5 1    5863   0 1019.9kb 1019.9kb
yellow open wazuh-alerts-3.x-2018.01.16     gBTkEycIR56Ksuz6ovXeVA 5 1  355289   0   93.6mb   93.6mb
yellow open wazuh-monitoring-3.x-2018.01.27 4zSucHuZSK6Ime8nNVn-Jg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.03.02 ICPzuyaPRc2p8uSGml-_Mw 5 1   28080   0      7mb      7mb
yellow open wazuh-alerts-3.x-2018.01.07     86k0FelLTH6iersO8yknlg 5 1  139235   0   42.1mb   42.1mb
yellow open wazuh-monitoring-3.x-2018.01.07 -3WKjRQVRNWQUBnS0t1-4w 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.01.24     xXU9JcaWSiuJU1a7-yputg 5 1  617915   0  152.7mb  152.7mb
yellow open wazuh-monitoring-3.x-2018.02.16 mx4QZZB4TxikvzsX_gfSQg 5 1    8941   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.01.15     JxiNEOpGTyaLZocUlL-S6w 5 1  205066   0   56.5mb   56.5mb
yellow open wazuh-alerts-3.x-2017.12.20     QVh6FgqERaqQOFgUl2qMYg 5 1  333304   0   79.1mb   79.1mb
yellow open wazuh-alerts-3.x-2017.12.27     2Sk3ICPzTgScQF1k8Hx6Kg 5 1  661367   0  199.6mb  199.6mb
yellow open wazuh-alerts-3.x-2018.02.23     l1Myc1o0TZCTPT6VfBoyLw 5 1  397415   0  126.5mb  126.5mb
yellow open wazuh-monitoring-3.x-2018.01.23 KKPRyWr3QCiaF96G9ThdBA 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2017.12.28 luhnOlTfSUaqk2zYjZl4Aw 5 1    5904   0      1mb      1mb
yellow open wazuh-monitoring-3.x-2018.01.01 rukIV9uhSuSR6eEIFXmJCQ 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.01.10     dfgbOEcqTteP00Vc_QC_Ag 5 1  457266   0  114.2mb  114.2mb
yellow open wazuh-alerts-3.x-2018.02.17     IOmJULPwTja9Z6onrbwy9g 5 1  168129   0   51.6mb   51.6mb
yellow open wazuh-monitoring-3.x-2018.02.10 V572CwhNTZa8bcE76Eulbw 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2017.12.31     XB8wK6ONTEKtAfRmFwBkQA 5 1  627817   0    207mb    207mb
yellow open wazuh-alerts-3.x-2018.02.10     du3Bo0i8RBWz_9E1F-XdBw 5 1  157462   0   43.2mb   43.2mb
yellow open wazuh-monitoring-3.x-2018.02.08 6Ilh14Y1S1q0K_fr0hTF8A 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-2017.12.11         I1i8xmP9Sx6Ph_Uq9fFQ1Q 5 1       2   0     38kb     38kb
yellow open wazuh-alerts-3.x-2017.12.30     -2RWwfqTTZ6RVDu8QgTZOA 5 1  271513   0   81.3mb   81.3mb
yellow open wazuh-alerts-3.x-2017.12.25     s9PeRNNeS8GJNf6ZgzqUQQ 5 1  165786   0   46.2mb   46.2mb
yellow open wazuh-monitoring-3.x-2018.03.04 38i62D-tR8ODGJEbMiH9ZQ 5 1   28080   0    7.2mb    7.2mb
yellow open wazuh-monitoring-3.x-2018.02.09 W8hJNrLWRg-Ss80KaWwNrQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.06 6TLj3Hr9Sb6wSWBgcX-YAg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.20 x6TXRDBkSxqoQzVDO7u6SQ 5 1    9072   0    1.4mb    1.4mb
yellow open wazuh-monitoring-3.x-2018.03.05 j5ar75khS-6DQDdJUpfarg 5 1   28080   0    7.1mb    7.1mb
yellow open wazuh-alerts-3.x-2018.02.05     eCoWhk7iSjiIfgr4WbGUgg 5 1  422094   0  105.6mb  105.6mb
yellow open wazuh-monitoring-3.x-2018.02.17 giD15p9DQOWfPe_nmtcCng 5 1    9072   0    1.4mb    1.4mb
yellow open wazuh-monitoring-3.x-2017.12.20 maqlr-bRRUuNJUX-No0gyg 5 1    5185   0  973.7kb  973.7kb
yellow open wazuh-alerts-3.x-2018.01.18     QcamMTpSQ1qB352HnStkMg 5 1  385138   0    113mb    113mb
yellow open wazuh-monitoring-3.x-2018.01.31 mpH-O3BpS2eTTw_kihBcww 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.01.12     CM-C3352TE-XYIgRNgJCBw 5 1  618139   0  143.7mb  143.7mb
yellow open wazuh-alerts-3.x-2018.01.14     _kFqJ_L6QtibkGFAKL9AUQ 5 1  118146   0   32.7mb   32.7mb
yellow open wazuh-monitoring-3.x-2018.02.18 8HSxlIYbQya6_mPJHZb-kg 5 1    9072   0    1.4mb    1.4mb
yellow open wazuh-monitoring-3.x-2018.01.03 S3VVSOAmTyCTzjt1KzpnPQ 5 1    8008   0    1.2mb    1.2mb
yellow open wazuh-monitoring-3.x-2018.02.14 nkkFYxTnR3mlN0620zi_uQ 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.01.13 fMxmAKWDTG27OGcEu3mASw 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-alerts-3.x-2018.01.04     8i0u5GauSo6yW6ByU6VV0A 5 1  470167   0  129.3mb  129.3mb
yellow open wazuh-alerts-3.x-2018.02.22     XLGm6mp6SlSzM920ih4-1Q 5 1  283937   0   89.3mb   89.3mb
yellow open wazuh-alerts-3.x-2018.01.29     2xgbytEmR1Szc_AFhPbi9A 5 1  311478   0   79.8mb   79.8mb
yellow open wazuh-alerts-3.x-2018.02.07     SOVKSEfGRbaq64unbN9xUw 5 1  501964   0  127.8mb  127.8mb
yellow open wazuh-monitoring-3.x-2017.12.30 cu38b9-oSSWXpFC9PRVPAg 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.01.02     bnK5yPeVSxivgT7ufdovXw 5 1  819782   0  222.9mb  222.9mb
yellow open .wazuh-version                  Tj1BCdhLQeGI6-YFa0lWXw 5 1       1   0    6.6kb    6.6kb
yellow open wazuh-monitoring-3.x-2017.12.29 bhXRqbu7RfyQy13hW6QdfA 5 1    5904   0      1mb      1mb
yellow open wazuh-alerts-3.x-2018.01.09     YBOb1DqUQrqZdeeuEWrf3Q 5 1  483273   0  120.5mb  120.5mb
yellow open wazuh-monitoring-3.x-2018.01.14 Rhf5YvoXRC2nwcycL7WKbg 5 1    8928   0    1.3mb    1.3mb
yellow open wazuh-monitoring-3.x-2018.02.19 -JCcMn5lRVW5X5oVLujBqw 5 1    9072   0    1.4mb    1.4mb
yellow open wazuh-alerts-3.x-2018.03.05     tkJb4K8JQoOqUmlRspWlKA 5 1  964356   0  607.1mb  607.1mb
yellow open wazuh-monitoring-3.x-2018.02.12 9yZinkjwRR-NnzLtmyaRCA 5 1    8928   0    1.3mb    1.3mb

If you are indexing new alerts, you should see multiple indices with the prefix wazuh-alerts-3.x- in the output of the above command.
Could you please note down the most recently added index with that prefix and execute the following command?
 
# curl elastic_ip:9200/wazuh-alerts-3.x-123-123-123/_mapping

{"error":{"root_cause":[{"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"wazuh-alerts-3.x-028-028-028","index_uuid":"_na_","index":"wazuh-alerts-3.x-028-028-028"}],"type":"index_not_found_exception","reason":"no such index","resource.type":"index_or_alias","resource.id":"wazuh-alerts-3.x-028-028-028","index_uuid":"_na_","index":"wazuh-alerts-3.x-028-028-028"},"status":404}
 
Where 123 are example numbers; replace them with your desired index.

Did I misunderstand that request? 

Finally, could you provide the output of the following command:

# curl elastic_ip:9200/_cat/templates

root@rpc-ossec:~# curl 127.0.0.1:9200/_cat/templates
kibana_index_template:.kibana [.kibana]           0 
wazuh-kibana                  [.kibana*]          0 
wazuh-agent                   [wazuh-monitoring*] 0 
wazuh                         [wazuh-alerts-3.*]  0 
logstash                      [logstash-*]        0 50001

~ Whit

jesus.g...@wazuh.com

Mar 7, 2018, 3:14:53 AM
to Wazuh mailing list
Hi Whit, you have a mistake in the curl command; please execute it as follows:

# curl elastic_ip:9200/wazuh-alerts-3.x-2018.03.06/_mapping

I can see in your indices that wazuh-alerts-3.x-2018.03.06 exists, so that looks fine to me. Also, could you paste
the output of the following command:

# curl "elastic_ip:9200/wazuh-alerts-3.x-2018.03.06/_search?size=1&pretty"

Note: please use the double quotes, since the URL contains an & character.

With the first command we are checking the mapping being applied to your alerts indices, so I can check
whether it's right; with the second command we are extracting a sample document from the index (size=1), so I can
check whether the alerts are well formed.
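To illustrate the quoting note (echo stands in for curl here, so no cluster is needed): without the double quotes, the shell treats & as a command separator and backgrounds the first half, so curl would only request ...?size=1.

```shell
# Quoted: the whole URL reaches the command intact.
echo "elastic_ip:9200/wazuh-alerts-3.x-2018.03.06/_search?size=1&pretty"
# Unquoted, the line would be parsed as:
#   echo ...?size=1 &    (backgrounded)
#   pretty               (run as a separate command, which doesn't exist)
```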

Best regards,
Jesús

Whit Blauvelt

Mar 7, 2018, 9:51:34 AM
to Wazuh mailing list
Hi Whit, you have a mistake in the curl command; please execute it as follows:

# curl elastic_ip:9200/wazuh-alerts-3.x-2018.03.06/_mapping

Possibly I should trim this down, but in case not:

{"wazuh-alerts-3.x-2018.03.06":{"mappings":{"wazuh":{"dynamic_templates":[{"string_as_keyword":{"match_mapping_type":"string","mapping":{"doc_values":"true","type":"keyword"}}}],"properties":{"@timestamp":{"type":"date","format":"dateOptionalTime"},"@version":{"type":"text"},"AlertsFile":{"type":"keyword"},"GeoLocation":{"properties":{"area_code":{"type":"long"},"city_name":{"type":"keyword"},"continent_code":{"type":"text"},"coordinates":{"type":"double"},"country_code2":{"type":"text"},"country_code3":{"type":"text"},"country_name":{"type":"keyword"},"dma_code":{"type":"long"},"ip":{"type":"keyword"},"latitude":{"type":"double"},"location":{"type":"geo_point"},"longitude":{"type":"double"},"postal_code":{"type":"keyword"},"real_region_name":{"type":"keyword"},"region_name":{"type":"keyword"},"timezone":{"type":"text"}}},"agent":{"properties":{"id":{"type":"keyword"},"ip":{"type":"keyword"},"name":{"type":"keyword"}}},"cluster":{"properties":{"name":{"type":"keyword"}}},"command":{"type":"keyword"},"data":{"properties":{"accesses":{"type":"keyword"},"account_domain":{"type":"keyword"},"account_name":{"type":"keyword"},"action":{"type":"keyword"},"audit":{"properties":{"acct":{"type":"keyword"},"auid":{"type":"keyword"},"command":{"type":"keyword"},"cwd":{"type":"keyword"},"dev":{"type":"keyword"},"directory":{"properties":{"inode":{"type":"keyword"},"mode":{"type":"keyword"},"name":{"type":"keyword"}}},"egid":{"type":"keyword"},"enforcing":{"type":"keyword"},"euid":{"type":"keyword"},"exe":{"type":"keyword"},"exit":{"type":"keyword"},"file":{"properties":{"inode":{"type":"keyword"},"mode":{"type":"keyword"},"name":{"type":"keyword"}}},"fsgid":{"type":"keyword"},"fsuid":{"type":"keyword"},"gid":{"type":"keyword"},"id":{"type":"keyword"},"key":{"type":"keyword"},"list":{"type":"keyword"},"old-auid":{"type":"keyword"},"old-ses":{"type":"keyword"},"old_enforcing":{"type":"keyword"},"old_prom":{"type":"keyword"},"op":{"type":"keyword"},"pid":{"type":"keyword"},"ppid":{"
type":"keyword"},"prom":{"type":"keyword"},"res":{"type":"keyword"},"session":{"type":"keyword"},"sgid":{"type":"keyword"},"srcip":{"type":"keyword"},"subj":{"type":"keyword"},"success":{"type":"keyword"},"suid":{"type":"keyword"},"syscall":{"type":"keyword"},"tty":{"type":"keyword"},"type":{"type":"keyword"},"uid":{"type":"keyword"}}},"command":{"type":"keyword"},"data":{"type":"keyword"},"dstip":{"type":"keyword"},"dstport":{"type":"keyword"},"dstuser":{"type":"keyword"},"euid":{"type":"keyword"},"file":{"type":"keyword"},"id":{"type":"keyword"},"logon_type":{"type":"keyword"},"oscap":{"properties":{"check":{"properties":{"description":{"type":"text"},"id":{"type":"keyword"},"identifiers":{"type":"text"},"oval":{"properties":{"id":{"type":"keyword"}}},"rationale":{"type":"text"},"references":{"type":"text"},"result":{"type":"keyword"},"severity":{"type":"keyword"},"title":{"type":"keyword"}}},"scan":{"properties":{"benchmark":{"properties":{"id":{"type":"keyword"}}},"content":{"type":"keyword"},"id":{"type":"keyword"},"profile":{"properties":{"id":{"type":"keyword"},"title":{"type":"keyword"}}},"return_code":{"type":"long"},"score":{"type":"double"}}}}},"protocol":{"type":"keyword"},"pwd":{"type":"keyword"},"security_id":{"type":"keyword"},"srcip":{"type":"keyword"},"srcport":{"type":"keyword"},"srcuser":{"type":"keyword"},"status":{"type":"keyword"},"subject":{"properties":{"account_domain":{"type":"keyword"},"account_name":{"type":"keyword"},"logon_id":{"type":"keyword"},"security_id":{"type":"keyword"}}},"system_name":{"type":"keyword"},"title":{"type":"keyword"},"tty":{"type":"keyword"},"type":{"type":"keyword"},"uid":{"type":"keyword"},"url":{"type":"keyword"}}},"decoder":{"properties":{"accumulate":{"type":"long"},"fts":{"type":"long"},"ftscomment":{"type":"keyword"},"name":{"type":"keyword"},"parent":{"type":"keyword"}}},"full_log":{"type":"text"},"host":{"type":"keyword"},"id":{"type":"keyword"},"location":{"type":"keyword"},"manager":{"properties":{"name"
:{"type":"keyword"}}},"message":{"type":"text"},"offset":{"type":"keyword"},"path":{"type":"keyword"},"predecoder":{"properties":{"hostname":{"type":"keyword"},"program_name":{"type":"keyword"},"timestamp":{"type":"keyword"}}},"previous_log":{"type":"text"},"previous_output":{"type":"keyword"},"program_name":{"type":"keyword"},"rule":{"properties":{"cis":{"type":"keyword"},"cve":{"type":"keyword"},"description":{"type":"keyword"},"firedtimes":{"type":"long"},"frequency":{"type":"long"},"groups":{"type":"keyword"},"id":{"type":"keyword"},"info":{"type":"keyword"},"level":{"type":"long"},"mail":{"type":"boolean"},"pci_dss":{"type":"keyword"}}},"syscheck":{"properties":{"diff":{"type":"keyword"},"event":{"type":"keyword"},"gid_after":{"type":"keyword"},"gid_before":{"type":"keyword"},"gname_after":{"type":"keyword"},"gname_before":{"type":"keyword"},"inode_after":{"type":"keyword"},"inode_before":{"type":"keyword"},"md5_after":{"type":"keyword"},"md5_before":{"type":"keyword"},"mtime_after":{"type":"date","format":"dateOptionalTime"},"mtime_before":{"type":"date","format":"dateOptionalTime"},"path":{"type":"keyword"},"perm_after":{"type":"keyword"},"perm_before":{"type":"keyword"},"sha1_after":{"type":"keyword"},"sha1_before":{"type":"keyword"},"size_after":{"type":"long"},"size_before":{"type":"long"},"uid_after":{"type":"keyword"},"uid_before":{"type":"keyword"},"uname_after":{"type":"keyword"},"uname_before":{"type":"keyword"}}},"title":{"type":"keyword"},"type":{"type":"text"}}}}}}
 
I can see in your indices that wazuh-alerts-3.x-2018.03.06 exists, so that looks fine to me. Could you also paste
the output of the following command:

# curl "elastic_ip:9200/wazuh-alerts-3.x-2018.03.06/_search?size=1&pretty"

{
  "took" : 28,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 2405196,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "wazuh-alerts-3.x-2018.03.06",
        "_type" : "wazuh",
        "_id" : "UhrW-GEBUUX51hI1FY8m",
        "_score" : 1.0,
        "_source" : {
          "rule" : {
            "firedtimes" : 3271,
            "description" : "Windows Logon Success.",
            "id" : "18107",
            "mail" : false,
            "level" : 3,
            "groups" : [
              "windows",
              "authentication_success",
              "gpg13_7.1",
              "gpg13_7.2"
            ],
            "pci_dss" : [
              "10.2.5"
            ]
          },
          "host" : "rpc-ossec",
          "full_log" : "2018 Mar 05 20:03:51 WinEvtLog: Security: AUDIT_SUCCESS(4624): Microsoft-Windows-Security-Auditing: (no user): no domain: RV-P-ER-DC-01.eisclient.local: An account was successfully logged on.    Subject:   Security ID:  S-1-0-0   Account Name:  -   Account Domain:  -   Logon ID:  0x0    Logon Type:   3    Impersonation Level:  Impersonation    New Logon:   Security ID:  S-1-5-21-488898852-1043323262-2500000034-4261   Account Name:  svc-ericom   Account Domain:  EISCLIENT   Logon ID:  0xC2A1D57   Logon GUID:  {00F6F24E-2537-44E1-356E-D805B68EC521}    Process Information:   Process ID:  0x0   Process Name:  -    Network Information:   Workstation Name: -   Source Network Address: 172.16.11.53   Source Port:  54911    Detailed Authentication Information:   Logon Process:  Kerberos   Authentication Package: Kerberos   Transited Services: -   Package Name (NTLM only): -   Key Length:  0    This event is generated when a logon session is created. It is generated on the computer that was accessed.    The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.    The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).    The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.    The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.    The impersonation level field indicates the extent to which a process in the logon session can impersonate.    The authentication information fields provide detailed information about this specific logon request.   - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.   
- Transited services indicate which intermediate services have participated in this logon request.   - Package name indicates which sub-protocol was used among the NTLM protocols.   - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.",
          "id" : "1520298234.963797",
          "decoder" : {
            "name" : "windows",
            "parent" : "windows"
          },
          "GeoLocation" : { },
          "predecoder" : {
            "program_name" : "WinEvtLog",
            "timestamp" : "2018 Mar 05 20:03:51"
          },
          "manager" : {
            "name" : "rpc-ossec"
          },
          "location" : "WinEvtLog",
          "data" : {
            "dstuser" : "(no user)",
            "logon_type" : "3",
            "security_id" : "S-1-0-0",
            "id" : "4624",
            "account_name" : "svc-ericom",
            "account_domain" : "EISCLIENT",
            "srcip" : "172.16.11.53",
            "type" : "Security",
            "data" : "Microsoft-Windows-Security-Auditing",
            "system_name" : "RV-P-ER-DC-01.eisclient.local",
            "status" : "AUDIT_SUCCESS"
          },
          "path" : "/var/ossec/logs/alerts/alerts.json",
          "agent" : {
            "name" : "RV-P-ER-DC-01",
            "ip" : "172.16.11.100",
            "id" : "084"
          },
          "@timestamp" : "2018-03-06T01:03:54.000Z"
        }
      }
    ]
  }
}

~ Whit

jesus.g...@wazuh.com

unread,
Mar 7, 2018, 11:11:14 AM3/7/18
to Wazuh mailing list
Hello again Whit, I've reviewed your data and it seems fine to me, so the next step is to open the dev tools
in your browser and go to Wazuh App -> Overview General. Keep the dev tools open before going to the Wazuh App.
Once the page is loaded, take a look at the Console and the Network tabs. Please take a screenshot of both dev tools tabs.
If you see any error, open the details and paste them here too, please.

Best regards,
Jesús

Whit Blauvelt

unread,
Mar 7, 2018, 12:19:43 PM3/7/18
to Wazuh mailing list
Hi Jesús,

Here you go.

Thanks,
Whit
wazuhconsole.png
wazuhconsoledetail.png
wazuhnetwork.png

jesus.g...@wazuh.com

unread,
Mar 8, 2018, 5:46:48 AM3/8/18
to Wazuh mailing list
Hi Whit, at this point, after reviewing your logs, indices, templates, Discover, etc., my only bet is that your App is broken.
My suggestion is to remove the app and reinstall it; please use the following commands:

# /usr/share/kibana/bin/kibana-plugin remove wazuh
# rm -rf /usr/share/kibana/optimize/bundles/*
# curl -XDELETE elastic_ip:9200/.kibana
# curl -XDELETE elastic_ip:9200/.wazuh
# curl -XDELETE elastic_ip:9200/.wazuh-version
# /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.2.1_6.2.2.zip

The last command takes a few minutes; you can go to another shell to check its progress as follows:

# watch -n0 'systemctl status kibana -l'

Note: I'm assuming that you have Wazuh 3.2.1 and Elastic Stack 6.2.2, otherwise it won't work. Ensure all components
match the cited versions and/or use the right Wazuh App package (https://github.com/wazuh/wazuh-kibana-app/tree/3.2#installation)

Note 2: you won't lose any alert.

Once done, open a new incognito window in your browser, enter the API credentials, and go to Overview General to see if everything is right now.

Kind regards,
Jesús

Whit Blauvelt

unread,
Mar 8, 2018, 10:02:12 AM3/8/18
to Wazuh mailing list
Hi Jesús,

Hate to say it, but that broke things further. The process finished, but all the Wazuh screen says is:

Ups, something went wrong...
Something went wrong

Ran the whole sequence again, and got that same result.

The plugin part appeared to install okay from the command line:

root@rpc-ossec:~# /usr/share/kibana/bin/kibana-plugin install https://packages.wazuh.com/wazuhapp/wazuhapp-3.2.1_6.2.2.zip
Attempting to transfer from https://packages.wazuh.com/wazuhapp/wazuhapp-3.2.1_6.2.2.zip
Transferring 4696680 bytes..............
......
Transfer complete
Retrieving metadata from plugin archive
Extracting plugin archive
Extraction complete
Optimizing and caching browser bundles...
Plugin installation complete

Then restarting with ossec-control leads to this atop the Kibana Wazuh screen:

Routes. Error. {"data":{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"},"status":500,"config":{"method":"GET","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","headers":{"Accept":"application/json, text/plain, /","kbn-version":"6.2.2"},"timeout":8000,"url":"/api/wazuh-api/apiEntries"},"statusText":"Internal Server Error"}

And this in the center:

Ups, something went wrong...
Error. Elasticsearch. Could not find .kibana index on Elasticsearch or maybe Elasticsearch is down.
Please check it and try again.

 ~# dpkg --list | grep wazuh
ii  wazuh-api                           3.2.1-1                                    amd64        Wazuh API is an open source RESTful API to interact with OSSEC from your own application or with a simple web browser or tools like cURL.
ii  wazuh-manager                       3.2.1-1                                    amd64        Wazuh helps you to gain security visibility into your infrastructure by monitoring hosts at an operating system and application level. It provides the following capabilities: log analysis, file integrity monitoring, intrusions detection and policy and compliance monitoring

# dpkg --list | grep elastic
ii  elasticsearch                       6.2.2                                      all          Elasticsearch is a distributed RESTful search engine built for the cloud. Reference documentation can be found at https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html and the 'Elasticsearch: The Definitive Guide' book can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/index.html

So, what next?

Thanks,
Whit

Whit Blauvelt

unread,
Mar 8, 2018, 10:08:46 AM3/8/18
to Wazuh mailing list
Also, many iterations of this error in syslog:

Mar  8 09:57:07 localhost kibana[5109]: {"type":"response","@timestamp":"2018-03-08T14:57:07Z","tags":[],"pid":5109,"method":"get","statusCode":304,"req":{"url":"/ui/favicons/favicon-16x16.png","method":"get","headers":{"host":"rpc-ossec:5601","connection":"keep-alive","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.146 Safari/537.36","accept":"image/webp,image/apng,image/*,*/*;q=0.8","dnt":"1","referer":"http://rpc-ossec:5601/app/wazuh","accept-encoding":"gzip, deflate","accept-language":"en-US,en;q=0.9","if-none-match":"\"13b869be5df4bdc56920edc16a28e67a7c08203b\"","if-modified-since":"Fri, 16 Feb 2018 19:20:01 GMT"},"remoteAddress":"172.16.12.122","userAgent":"172.16.12.122","referer":"http://rpc-ossec:5601/app/wazuh"},"res":{"statusCode":304,"responseTime":1,"contentLength":9},"message":"GET /ui/favicons/favicon-16x16.png 304 1ms - 9.0B"}

Where do we configure this to go to its own log rather than syslog?

Whit

Jesús Ángel González

unread,
Mar 8, 2018, 10:10:29 AM3/8/18
to Whit Blauvelt, Wazuh mailing list
Hi Whit, I forgot to say that at the end of the process you must restart Kibana:

# systemctl restart kibana

Now open a new incognito window in your browser and try again; open dev tools and clear local storage, cookies, etc.

It should be fine now. Kind regards,
Jesús 

--
You received this message because you are subscribed to the Google Groups "Wazuh mailing list" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wazuh+un...@googlegroups.com.
To post to this group, send email to wa...@googlegroups.com.
Visit this group at https://groups.google.com/group/wazuh.
To view this discussion on the web visit https://groups.google.com/d/msgid/wazuh/83efd78b-d8a9-459d-be33-67f3456e69b9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
--
Best regards,
Jesús.

Whit Blauvelt

unread,
Mar 8, 2018, 10:26:59 AM3/8/18
to Wazuh mailing list
On Thursday, March 8, 2018 at 10:10:29 AM UTC-5, Jesús Ángel González wrote:
Hi Whit, I forgot to say that at the end of the process you must restart Kibana:

# systemctl restart kibana

But I had restarted with ossec-control. Equivalent, yes?

Now open a new incognito window in your browser and try again; open dev tools and clear local storage, cookies, etc.

Ah, missed the "incognito" part. What specifically needs to be cleaned out to not require an incognito window each time?

Thanks,
Whit 

jesus.g...@wazuh.com

unread,
Mar 8, 2018, 10:29:32 AM3/8/18
to Wazuh mailing list
Hello again Whit, they're not equivalent; the ossec-control binary affects only the Wazuh core components and the Wazuh API.
Kibana is outside the Wazuh core, so you need to restart it yourself.

A new incognito window is needed to rule out any cookie or browser-storage related errors.

And regarding your second mail about the syslog: you can define a log file by editing the Kibana configuration (/etc/kibana/kibana.yml). The relevant setting is:

logging.dest:

Default: stdout. Enables you to specify a file where Kibana stores log output.

Replace it with:

logging.dest: /var/log/kibana.log

Best regards,
Jesús

Whit Blauvelt

unread,
Mar 8, 2018, 10:46:46 AM3/8/18
to Wazuh mailing list
A new incognito window is needed to rule out any cookie or browser-storage related errors.

Yes, but then the settings don't stick, because the new cookies that store them get thrown out when the window closes. The better way is to get rid of the specific cookies, which works.
 
And regarding your second mail about the syslog: you can define a log file by editing the Kibana configuration (/etc/kibana/kibana.yml). The relevant setting is:

logging.dest:

Default: stdout. Enables you to specify a file where Kibana stores log output.

Replace it with:

logging.dest: /var/log/kibana.log
 
Thanks.

Another thought: since installing the plugin requires restarting Kibana, could the plugin installation script trigger that?

Also: since stale stuff in the cookies can lead to an error, why isn't the response on error to clean those cookies out? The cookies for a site remain under the control of that site, after all.

Whit

jesus.g...@wazuh.com

unread,
Mar 8, 2018, 11:17:26 AM3/8/18
to Wazuh mailing list
Hi Whit, you are right about the cookie management and local storage; we are planning to improve it soon.

The plugin installation is handled by Kibana, and we do not develop Kibana; it's a component of the Elastic Stack, so unfortunately we can do nothing about the installation command.

Regarding your advice about the cookies message, it's included in our effort to handle this in a better way, thanks.

So, how is your environment's health now, Whit?

Best regards,
Jesús

Whit Blauvelt

unread,
Mar 8, 2018, 11:33:00 AM3/8/18
to Wazuh mailing list

So, how is your environment's health now, Whit?
 
Jesús,

Better, thanks!

Whit

Whit Blauvelt

unread,
Mar 8, 2018, 11:43:58 AM3/8/18
to Wazuh mailing list
Jesús,

Looks like another problem: drilling down on any agent, including ones where alerts have gone out this morning, shows nothing graphed in Alert Level Evolution or Alerts beyond last midnight.

Best,
Whit

jesus.g...@wazuh.com

unread,
Mar 8, 2018, 12:08:54 PM3/8/18
to Wazuh mailing list
Hi Whit, the first step is to check whether, after the whole process, all the services are running as expected:

# systemctl status wazuh-manager -l
# systemctl status wazuh-api -l
# systemctl status elasticsearch -l
# systemctl status logstash -l
# systemctl status kibana -l

Now check if there are any errors in ossec.log (Wazuh manager):

# cat /var/ossec/logs/ossec.log | grep ERROR

Check which processes are reading the alerts.json file (Wazuh manager):

# lsof /var/ossec/logs/alerts/alerts.json

And finally check the date on each system (manager, agents...):

# date

Maybe they are set to different times...

Also check if there are any Elasticsearch or Logstash errors, as follows:

# cat /var/log/elasticsearch/elasticsearch.log | grep ERR
# cat /var/log/logstash/logstash-plain.log | grep ERR

Kind regards,
Jesús

Whit Blauvelt

unread,
Mar 8, 2018, 12:32:53 PM3/8/18
to Wazuh mailing list
Hi again Jesús,


Hi Whit, the first step is to check whether, after the whole process, all the services are running as expected:

# systemctl status wazuh-manager -l
# systemctl status wazuh-api -l
# systemctl status elasticsearch -l
# systemctl status logstash -l
# systemctl status kibana -l

All "active (running)" and good.
 
Now check if there are any errors in ossec.log (Wazuh manager):

# cat /var/ossec/logs/ossec.log | grep ERROR

Nothing. 

Check which processes are reading the alerts.json file (Wazuh manager):

# lsof /var/ossec/logs/alerts/alerts.json

root@rpc-ossec:~# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID  USER   FD   TYPE DEVICE   SIZE/OFF   NODE NAME
ossec-ana 20949 ossec   10w   REG  253,1 2607248196 670039 /var/ossec/logs/alerts/alerts.json


And finally check the date on each system (manager, agents...):

# date

Maybe they are set to different times...

Nope. Everything's on New York time here.  

Also check if there are any Elasticsearch or Logstash errors, as follows:

# cat /var/log/elasticsearch/elasticsearch.log | grep ERR
# cat /var/log/logstash/logstash-plain.log | grep ERR

Nada. Those graphs still stop at last midnight, for every single agent I've checked, Windows or Linux.

Best,
Whit


jesus.g...@wazuh.com

unread,
Mar 8, 2018, 12:59:57 PM3/8/18
to Wazuh mailing list
Ok Whit, look at the lsof output: there is no Logstash/Filebeat process reading the file. That's the cause of your problem since midnight.
Your whole environment seems to be fine now, but we need one more step to set things right:

Only if you are using the single-host architecture (Logstash); otherwise skip this:

# curl -so /etc/logstash/conf.d/01-wazuh.conf https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/logstash/01-wazuh-local.conf
# usermod -a -G ossec logstash   # probably a permissions problem?
# systemctl restart logstash

Only if you are using the distributed architecture (Filebeat -> Logstash):

Download the Filebeat configuration file from the Wazuh repository. This is pre-configured to forward Wazuh alerts to Logstash:


Edit the file /etc/filebeat/filebeat.yml and replace ELASTIC_SERVER_IP with the IP address or the hostname of the Elastic Stack server. For example:

output:
  logstash:
    hosts: ["ELASTIC_SERVER_IP:5000"]

# systemctl restart filebeat
https://documentation.wazuh.com/current/installation-guide/installing-wazuh-server/wazuh_server_rpm.html#wazuh-server-rpm

If all went fine, repeat the lsof command to check whether more than one process is reading the alerts.json file:

# lsof /var/ossec/logs/alerts/alerts.json

If you only see ossec-ana, something is wrong, because it means that only Wazuh has the file open; you need one more process (usually Java, i.e. Logstash).

Best regards,
Jesús

Whit Blauvelt

unread,
Mar 8, 2018, 2:04:21 PM3/8/18
to Wazuh mailing list
 Jesús,

Only if you are using the single-host architecture (Logstash); otherwise skip this:

Using a single host, so:
That brings in a file identical to what was already there.
 
# usermod -a -G ossec logstash   # probably a permissions problem?

logstash is already in /etc/group as ossec:x:125:logstash (as well as logstash:x:999:), so there's no reason to add it again.

# systemctl restart logstash

logstash has been running since 2/22. Okay it's restarting now ... and it's still the same. All graphs cut off at midnight last night.

This is improved though:

root@rpc-ossec:/var/log# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID     USER   FD   TYPE DEVICE   SIZE/OFF   NODE NAME
java      14743 logstash   74r   REG  253,1 2889055336 670039 /var/ossec/logs/alerts/alerts.json
ossec-ana 20949    ossec   10w   REG  253,1 2889055336 670039 /var/ossec/logs/alerts/alerts.json

Restarting kibana makes no difference. 

Best,
Whit
 

Jesús Ángel González

unread,
Mar 8, 2018, 2:10:01 PM3/8/18
to Whit Blauvelt, Wazuh mailing list
When Logstash goes down, it has no way of knowing it should backfill the missing alerts into Elasticsearch, but now it's reading your alerts.json file, so it should be fine going forward. Don't look for the alerts between midnight and now in Elasticsearch, because they won't have been inserted.

Let's see if it's inserting now; you could restart the Wazuh manager or make an SSH login (it will fire an alert).

Best regards,
Jesús 


Whit Blauvelt

unread,
Mar 8, 2018, 2:17:18 PM3/8/18
to Wazuh mailing list
Jesús,

When Logstash goes down, it has no way of knowing it should backfill the missing alerts into Elasticsearch

Does it do that often? Should it be run with monit or a cron script to restart it when it fails like that? 

Thanks,
Whit 

Jesús Ángel González

unread,
Mar 8, 2018, 2:25:24 PM3/8/18
to Whit Blauvelt, Wazuh mailing list
Whit,

It shouldn’t, any case you could make a script as you said that parses the lsof output and check of Java process is there or something similar, but the only times I saw that is whenever something went wrong at some point during installation or whatever.


Best regards,
Jesús



Whit Blauvelt

unread,
Mar 8, 2018, 3:14:05 PM3/8/18
to Wazuh mailing list
Jesús,

When Logstash goes down, it has no way of knowing it should backfill the missing alerts into Elasticsearch, but now it's reading your alerts.json file, so it should be fine going forward.

Just received an alert by email. Checked the screen for that agent, and still nothing in the graphs after last midnight.

The email:

Date: Thu, 8 Mar 2018 15:04:48 -0500
From: Wazuh <oss...@ossec.obscured.com>
To: webm...@obscured.com
Subject: OSSEC Notification - (rpc-postgres) 172.16.12.50 - Alert level 10

Wazuh Notification.
2018 Mar 08 15:04:34
Received From: (rpc-postgres) 172.16.12.50->/var/log/apache2/access.log
Rule: 31151 fired (level 10) -> "Multiple web server 400 error codes from same source ip."
Src IP: 172.16.12.40...

Screenshot attached.

Also:

root@rpc-ossec:/var/log# lsof /var/ossec/logs/alerts/alerts.json
COMMAND     PID     USER   FD   TYPE DEVICE   SIZE/OFF   NODE NAME
java      14743 logstash   74r   REG  253,1 3105410037 670039 /var/ossec/logs/alerts/alerts.json
ossec-ana 20949    ossec   10w   REG  253,1 3105410037 670039 /var/ossec/logs/alerts/alerts.json

So it's not presently that.

Best,
Whit
missingalerts.png

Whit Blauvelt

unread,
Mar 8, 2018, 3:21:21 PM3/8/18
to Wazuh mailing list
Same thing for a Windows agent -- a more up-to-date one as it happens. For this too, alerts have been out by email recently.

- W
missingalerts2.png

jesus.g...@wazuh.com

unread,
Mar 9, 2018, 3:34:38 AM3/9/18
to Wazuh mailing list
Hi Whit,

Ok, so just now your environment seems to be fine except for the missing alerts since last midnight.
From our mails I can see that Logstash is now reading alerts.json and your agents are sending data properly.
One more time, let's see if these missing alerts are being inserted into Elasticsearch:

curl localhost:9200/_cat/indices -s | grep "2018.03.09"

With the above command we pipe the indices list through grep to filter it.

If you see a non-empty result, go to Kibana -> Discover (not the Wazuh App) to check whether in fact we have alerts inserted after last midnight.
Once you are done, open an alert from the Discover (clicking on the little triangle next to each alert) and you should see Table/JSON;
click on JSON and paste the content here, thanks.

On the other hand, if you are not seeing alerts inserted into Elasticsearch, let's look at your disk usage and inspect the RAM usage.
Please paste the output of the following commands:

# df -h
# free -m

Since Elasticsearch needs a lot of RAM and disk space, this could be a clue to our problem.
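As a quick way to check the df output against Elasticsearch's default 85% low disk watermark, here's a small sketch; the `usage_pct` helper is hypothetical, not part of Wazuh or the Elastic Stack:

```shell
# Print the Use% (number only) for a given mount point, reading
# `df -h` style output on stdin. Helper name is hypothetical.
usage_pct() {
  awk -v mnt="$1" '$NF == mnt { gsub("%", "", $(NF-1)); print $(NF-1) }'
}

# Example: df -h | usage_pct /
# Compare the result against Elasticsearch's default 85% low watermark.
```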

Best regards,
Jesús

Whit Blauvelt

unread,
Mar 9, 2018, 9:30:36 AM3/9/18
to Wazuh mailing list
Hi Jesús,

Still nothing making it to the Wazuh app's graphs since midnight the day before yesterday.
 
Ok, so just now your environment seems to be fine except for the missing alerts since last midnight.
From our mails I can see that Logstash is now reading alerts.json and your agents are sending data properly.
One more time, let's see if these missing alerts are being inserted into Elasticsearch:

curl localhost:9200/_cat/indices -s | grep "2018.03.09"

root@rpc-ossec:/var/log# curl localhost:9200/_cat/indices -s | grep "2018.03.09"
yellow open wazuh-monitoring-3.x-2018.03.09 a5PxdZIIQHynQCrq2Sm6Pw 5 1   16856   0    4.2mb    4.2mb
root@rpc-ossec:/var/log# curl localhost:9200/_cat/indices -s | grep "2018.03.08"
yellow open wazuh-alerts-3.x-2018.03.08     lJVoLe-HQriZzdCbkckR5Q 5 1  369805   0  273.8mb  273.8mb
yellow open wazuh-monitoring-3.x-2018.03.08 5ERSwLLVS5qlUdnB_CBYkw 5 1    4680   0    1.6mb    1.6mb

If you see a non-empty result, go to Kibana -> Discover (not the Wazuh App) to check whether in fact we have alerts inserted after last midnight.
Once you are done, open an alert from the Discover (clicking on the little triangle next to each alert) and you should see Table/JSON;
click on JSON and paste the content here, thanks.

Ah, a box saying to select wazuh-alerts or wazuh-monitoring; see below. Of the two, the last index it shows for the first is wazuh-alerts-3.x-2018.03.08; for the second, going back many months, there's just a string of dates from this January, ending with wazuh-monitoring-3.x-2018.01.26. So where my initial report here said nothing beyond January, it was most definitely looking only at the wazuh-monitoring series. Note we've never been to this screen before, so whatever's getting set here is getting set somehow through the basic install/upgrade process following Wazuh recipes.

On the other hand, if you are not seeing alerts inserted into Elasticsearch, let's look at your disk usage and inspect the RAM usage.
Please paste the output of the following commands:

# df -h
# free -m

Since Elasticsearch needs a lot of RAM and disk space, this could be a clue to our problem.

root@rpc-ossec:/var/log# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           799M   59M  740M   8% /run
/dev/vda1        40G   34G  3.8G  91% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           799M     0  799M   0% /run/user/0
root@rpc-ossec:/var/log# free -m
              total        used        free      shared  buff/cache   available
Mem:           7983        1986         131          24        5864        5636
Swap:         16383         505       15878

Whit 

alertsormonitoring.png

jesus.g...@wazuh.com

unread,
Mar 9, 2018, 10:44:43 AM3/9/18
to Wazuh mailing list
Hi Whit, we have a new problem now. All your components seem to be fine, but the output
of the df -h command is worrying:

/dev/vda1        40G   34G  3.8G  91% /

Elasticsearch has a default limitation (the disk allocation watermark) that kicks in
at 85% disk usage; you are at 91%, which could be the main reason why you are not seeing
anything more since midnight. Keep in mind that Elasticsearch works like a RAID
array: it replicates your data, so depending on your shard count it could be bigger than
you think.

At this point my suggestion is to review your hard disk and upgrade it. You could also back up your data and remove old indices to free space.
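For reference, the 85% threshold is Elasticsearch's low disk watermark. A sketch of the relevant elasticsearch.yml settings with their 6.x defaults; raising them is only a stopgap, and freeing disk space is the real fix:

```yaml
# Elasticsearch 6.x disk watermark defaults (elasticsearch.yml sketch).
# low: stop allocating new shards to the node; high: relocate shards away;
# flood_stage: make indices read-only until space is freed.
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```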

Here you can read more about Elasticsearch disk allocator:


It seems everything's working properly now except for Elasticsearch due to disk usage.

Kind regards,
Jesús 

Whit Blauvelt

unread,
Mar 9, 2018, 2:40:34 PM3/9/18
to Wazuh mailing list
Hi Jesús,

Thanks for the tip on space. I shut down Kibana and Elasticsearch, moved /var/log/elasticsearch to a new partition, ending up with the quite reasonable

root@rpc-ossec:~# df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       41251136 15164492  24378204  39% /
/dev/vdc1      103080224 16176324  81644688  17% /var/log/elasticsearch

restarted the VM, and the Wazuh app says:
Ups, something went wrong...
Something went wrong

Obviously the next step is reinstalling the Wazuh app again, which I'll try. But please, when you're going to throw an error message, make it maximally informative. Meanwhile, Kibana > Discover is showing alerts from today, so that level is okay.

Whit

Whit Blauvelt

unread,
Mar 9, 2018, 3:10:54 PM3/9/18
to Wazuh mailing list

Obviously the next step is reinstalling the Wazuh app again. Which I'll try.

Well, I was wrong about that. Reinstalling the Wazuh app still leads to "Ups, something went wrong...". There are, of course, links to the fine manuals below that, but those links aren't to specific pages on troubleshooting.

Whit

 

Whit Blauvelt

unread,
Mar 9, 2018, 3:19:01 PM3/9/18
to Wazuh mailing list
Also not a cookie problem.

Whit

jesus.g...@wazuh.com

unread,
Mar 20, 2018, 1:32:42 PM3/20/18
to Wazuh mailing list
Hi Whit,

as I understand it, this thread can be closed, since we have had a private conversation over the last few days
and everything is now working. That said, I want to post a short summary of our scenario to give more information to
our great community. The main points have been the following:

- Elastic stack needs a considerable amount of RAM
- High data volume environments should be executed on a big machine or be clusterized
- The Elasticsearch templates must be inserted before Logstash runs for the very first time.
- Whenever the Elasticsearch hard disk reaches 85% usage, Elasticsearch stops the indexing process.
- Minor problems related to reinstallation and/or Kibana modifications will be auto-solved in our next packages
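On the template point, a quick way to confirm the template is in place before starting Logstash is to query Elasticsearch directly. A small sketch; the host/port and the template name `wazuh` are assumptions for a default single-node setup:

```shell
#!/bin/sh
# Check whether the Wazuh alerts template is already loaded in Elasticsearch.
# ES endpoint and template name are assumptions for a default single-node install.
ES=http://localhost:9200
if curl -sf "$ES/_template/wazuh" >/dev/null 2>&1; then
    echo "wazuh template present - safe to start Logstash"
else
    echo "wazuh template missing (or Elasticsearch unreachable) - load it before starting Logstash"
fi
```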

In any case, feel free to open a new thread whenever you need to, Whit. Have a nice day.

Best regards,
Jesús

Whit Blauvelt

Mar 20, 2018, 2:41:08 PM
to Wazuh mailing list
Thanks again Jesús, as well for putting the summary here for the next person. A few notes to your notes:

- Elastic stack needs a considerable amount of RAM

We're seeing no indication of load problems in a VM with 8 GB. Right now less than 2 GB is in use, while monitoring nearly 200 systems.
 
- High data volume environments should be executed on a big machine or be clusterized

As we discussed, we need published rules of thumb for what's "high data." 
 
- The Elasticsearch templates must be inserted before Logstash runs for the very first time.

We'd done so. It was running fine for some time. Somehow, updating using the Ubuntu debs provided by Wazuh resulted in their loss along the way. This was not initial breakage, but upgrade breakage.
 
- Whenever the Elasticsearch hard disk reaches 85% usage, Elasticsearch stops the indexing process.

Yes, that was a surprise here. As far as I could see, it doesn't log a notice about that. Maybe I missed it.
 
- Minor problems related to reinstallation and/or Kibana modifications will be auto-solved in our next packages

Also, more meaningful error messages will be a big plus. You're doing wonders with this complex project, yet there's still a long way to go. At the rate you're going, I expect you'll do well with it.

Best,
Whit 