export all vulnerabilities


joh nte

Apr 7, 2021, 5:38:39 AM
to Wazuh mailing list
Hi,

I'm running Wazuh 4.1.2 with more than 300 agents connected, and I want to export all the vulnerabilities, divided by severity, for all the agents.

I've made a visualization using these buckets:
>Split rows agent.ip: Descending
>Split rows data.vulnerability.cve: Descending
>Split rows data.vulnerability.package.name: Descending
>Split rows data.vulnerability.rationale: Descending
>Split rows data.vulnerability.references: Descending

and filtering the results with > data.vulnerability.severity : "High"
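For reference, a table like this can also be requested directly from the Elasticsearch API with nested terms aggregations, one per "Split rows" bucket. This is only a sketch: the index pattern, timestamp field, and bucket sizes are assumptions; the field names are the ones from the buckets above.

```python
# Sketch of the Elasticsearch request behind a data table like the one
# described above. Index pattern (wazuh-alerts-*), timestamp field, and
# bucket sizes are assumptions; adjust them to your deployment.

def build_vuln_query(severity="High", time_range="now-24h"):
    """Build a query body with one nested terms agg per 'Split rows' bucket."""
    fields = [
        "agent.ip",
        "data.vulnerability.cve",
        "data.vulnerability.package.name",
        "data.vulnerability.rationale",
        "data.vulnerability.references",
    ]
    body = {
        "size": 0,  # no raw hits, only aggregation buckets
        "query": {
            "bool": {
                "filter": [
                    {"term": {"data.vulnerability.severity": severity}},
                    {"range": {"timestamp": {"gte": time_range}}},
                ]
            }
        },
    }
    # Nest each terms aggregation inside the previous one, like stacked
    # "Split rows" buckets in a Kibana data table.
    level = body
    for field in fields:
        level["aggs"] = {field: {"terms": {"field": field, "size": 1000}}}
        level = level["aggs"][field]
    return body
```

The resulting body would be POSTed to something like `wazuh-alerts-*/_search`; widening `time_range` to `now-7d` is the equivalent of the week-long time filter.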

This works great when I select a 24-hour period; however, I need to cover a one-week period, and doing so gives me an error ([esaggs] > Service Unavailable).

It seems it can't load the whole database because it is too big.
Furthermore, there are a lot of duplicates for the agents whose vulnerability scanner runs every day.

Is there a way to export all the vulnerabilities, for all agents, over a long period, while bypassing the error?

Thanks

joh nte

Apr 7, 2021, 5:47:30 AM
to Wazuh mailing list
By the way: removing the "references" bucket reduces the number of duplicates, since one CVE can have several references and each of them takes a new row, and I'm fine with removing "references". But there are other duplicates because the same CVE affects multiple packages; for example, I see CVE-2020-12654 several times because it affects several packages (kernel, kernel-tools, kernel-tools-libs, python-perf, etc.). Can I group the packages together? Otherwise I could remove "package.name" from the buckets and bypass this too.
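One way to get the grouping described above is to post-process the exported rows, collapsing them so that each (agent, CVE) pair appears once with all affected packages merged. A minimal sketch (the row layout and field names are hypothetical, chosen to mirror the buckets in the thread):

```python
# Hypothetical post-processing sketch: collapse exported rows so each
# (agent, CVE) pair appears once, with all affected packages merged.
from collections import defaultdict

def group_packages(rows):
    """rows: list of dicts with 'agent', 'cve', and 'package' keys."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[(row["agent"], row["cve"])].append(row["package"])
    return [
        {"agent": agent, "cve": cve, "packages": sorted(set(pkgs))}
        for (agent, cve), pkgs in grouped.items()
    ]

# Example: the CVE from the post, reported once per affected package.
rows = [
    {"agent": "10.0.0.5", "cve": "CVE-2020-12654", "package": "kernel"},
    {"agent": "10.0.0.5", "cve": "CVE-2020-12654", "package": "kernel-tools"},
    {"agent": "10.0.0.5", "cve": "CVE-2020-12654", "package": "python-perf"},
]
print(group_packages(rows))  # one row, with the three packages merged
```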

Franco Charriol

Apr 9, 2021, 10:30:53 AM
to Wazuh mailing list
Hi,
In order to reproduce your error, could you tell me which Kibana and Elastic versions you are using?
Are you also using X-Pack or Opendistro? If Opendistro, which version?

I recreated the visualization with Kibana 7.10.0 and ODFE 1.12.0. Filtering for more than one week, I get no errors in the view, but I am getting a different error when downloading the PDF report.
That one was reported here and fixed in development, but not released yet.

Are you getting the error immediately after changing the time filter?

[Attachment: view vulnerabilities.png]

Regarding your question about grouping the packages: as you correctly mentioned, CVE databases depend on their sources, so they can contain different data about the same package.
You can check how it works here.

Regards

joh nte

Apr 12, 2021, 11:43:10 AM
to Wazuh mailing list
Hi, and thanks for the reply!

I'm using Wazuh 4.0.3, and Kibana should be 7.9.1; I installed this Wazuh manager using the unattended installation script (curl -so ~/all-in-one-installation.sh https://raw.githubusercontent.com/wazuh/wazuh-documentation/4.0/resources/open-distro/unattended-installation/all-in-one-installation.sh && bash ~/all-in-one-installation.sh). How can I check the other versions?

My error shows up when (I think) there is too much data to visualize.
For example, if I try to see all the vulnerabilities, for all the hosts, with a time range of "a month", without filtering by severity, the error shows up after 10-15 seconds.
But if I filter for only one IP, there is no issue!
Another example: if I try to view all the medium severities using a month time range, the error shows up after 10-15 seconds, but if I apply a filter like data.vulnerability.severity: "Medium" and data.vulnerability.cve: CVE-2020-*, it shows me all the 2020 medium vulnerabilities found in a month, for all the hosts.

Franco Charriol

Apr 12, 2021, 4:05:04 PM
to joh nte, Wazuh mailing list
Hi Joh,
Regarding
> how can I check other versions?
If you installed the stack from the unattended script and did no updates, you should have 7.9.1 for the Elastic stack and 1.11.0 for Opendistro.
To make sure, you can check it from the console with these commands, depending on your package system:
# apt
dpkg -l elasticsearch
dpkg -l opendistro

# yum
sudo yum list --installed | grep elasticsearch
sudo yum list --installed | grep opendistro
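The Elasticsearch version can also be read from the running service itself: a GET on the HTTP root (port 9200 by default) returns a JSON body whose `version.number` field holds the version. A small parsing sketch, with the sample response abbreviated:

```python
import json

# GET http://localhost:9200/ on a running Elasticsearch node returns a
# JSON document that includes the version. Abbreviated sample body:
sample = '{"name": "node-1", "version": {"number": "7.9.1"}}'

def es_version(body: str) -> str:
    """Extract version.number from an Elasticsearch root-endpoint response."""
    return json.loads(body)["version"]["number"]

print(es_version(sample))  # 7.9.1
```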

Regarding the error, I found an issue when the Other bucket option is set to true in some of the Split rows.
[Attachment: image.png]
If you have it set to true, try setting it to false.
But if that is not your case, you may be hitting a performance limit (CPU or RAM): the unattended installation script installs all components on the same host. You may need to separate them (at least Elasticsearch and Kibana) onto different machines or hosts to scale processing more efficiently.
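Another way to sidestep building one huge aggregation table is to export the rows in pages with a composite aggregation, which returns an `after_key` for fetching the next page instead of computing everything in a single response. A runnable sketch: the `search` function is a stand-in for a real client call (e.g. POST to `wazuh-alerts-*/_search`) and here serves two canned pages, and the field set and page size are assumptions.

```python
# Sketch of paging through a composite aggregation instead of one huge
# nested-terms table.

def build_page_request(after_key=None, page_size=500):
    """Composite aggregation over (agent, CVE); 'after' resumes paging."""
    composite = {
        "size": page_size,
        "sources": [
            {"agent": {"terms": {"field": "agent.ip"}}},
            {"cve": {"terms": {"field": "data.vulnerability.cve"}}},
        ],
    }
    if after_key is not None:
        composite["after"] = after_key  # continue from the previous page
    return {"size": 0, "aggs": {"vulns": {"composite": composite}}}

# Two fake response pages; the second has no after_key, ending the loop.
_pages = iter([
    {"aggregations": {"vulns": {
        "buckets": [{"key": {"agent": "10.0.0.5", "cve": "CVE-2020-12654"}}],
        "after_key": {"agent": "10.0.0.5", "cve": "CVE-2020-12654"}}}},
    {"aggregations": {"vulns": {"buckets": []}}},
])

def search(request):
    """Stand-in for a real Elasticsearch client call."""
    return next(_pages)

def export_all():
    rows, after = [], None
    while True:
        resp = search(build_page_request(after))
        agg = resp["aggregations"]["vulns"]
        rows.extend(bucket["key"] for bucket in agg["buckets"])
        after = agg.get("after_key")
        if not after:
            return rows
```

Because each page is a small request, this approach avoids the single oversized aggregation that the data table visualization has to compute in one shot.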

Please let me know if this was helpful for your problem.
Best


joh nte

Apr 22, 2021, 5:53:32 AM
to Wazuh mailing list

Hi, sorry for the late response.

I've checked Elasticsearch and Opendistro and, as you said, they are versions 7.9.1 and 1.11.0 respectively.

I have multiple Wazuh managers: one quad-core with 16 GB of RAM and 160 GB on a SAS device, and one quad-core with 8 GB of RAM and a 100 GB SSD.

Now that I think of it, the one with 16 GB, despite having three times the number of agents (over 350), runs into the same error, but only when trying to show a higher amount of data than the one with 8 GB.
Can the amount of RAM be the problem?

By the way, the Other bucket option is set to false.


Franco Charriol

Apr 23, 2021, 4:42:43 PM
to Wazuh mailing list
Hi Joh,
The visualizations do not touch the Wazuh manager services; they only affect the performance of the Elasticsearch cluster.
The visualizations query indices through the Elasticsearch API.
The Wazuh managers only create the alerts that Filebeat ships into the Elastic indices.

But if your Wazuh managers are on the same host (machine), they share the same infrastructure, which probably causes poor RAM performance.
Here is a doc from Elastic about tuning for search speed; maybe you will find it helpful.

I'll be waiting for your feedback
Best.