No API is available to connect.


360 ALLROUND

Jun 28, 2023, 1:24:10 PM
to Wazuh mailing list
hi team,

Hope you are doing well.

I am getting the following error in my Wazuh console when I try logging in: [API connection] No API is available to connect.

I restarted all the services (wazuh-indexer, wazuh-dashboard, filebeat) and ran a daemon-reload; everything seems to work fine.

However, wazuh-manager showed some errors. I've attached the necessary logs for your reference.

-Regards
  Ruben

Error1.jpeg
Error2.jpeg
Wazuh-Console1.PNG
Error3.jpeg

Jorge Eduardo Molas

Jun 28, 2023, 3:36:34 PM
to Wazuh mailing list
Hi! 
Analysisd could go down due to either a segmentation fault or a kill signal.
To find the problem faster:
  • Check the latest analysisd logs in /var/ossec/logs/ossec.log. For example, if a signal killed analysisd you will find a log like:
    2023/06/28 00:45:39 wazuh-analysisd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
  • Check your syslog (if you are using CentOS, the file is /var/log/messages) and look for any logs from analysisd.
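Both checks boil down to plain greps. The snippet below demonstrates the pattern on a one-line sample log so it runs anywhere; on a real manager you would point grep at /var/ossec/logs/ossec.log and /var/log/messages instead:

```shell
# Write a sample ossec.log entry showing a SIGTERM shutdown of analysisd,
# then grep for analysisd messages the same way you would on the manager.
cat > /tmp/ossec-sample.log <<'EOF'
2023/06/28 00:45:39 wazuh-analysisd: INFO: (1225): SIGNAL [(15)-(Terminated)] Received. Exit Cleaning...
EOF
grep -i 'wazuh-analysisd' /tmp/ossec-sample.log

# On a live system:
#   grep -i 'wazuh-analysisd' /var/ossec/logs/ossec.log | tail -n 20
#   grep -i 'analysisd' /var/log/messages | tail -n 20
```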

360 ALLROUND

Jun 30, 2023, 6:01:35 AM
to Wazuh mailing list
Hi Jorge, 

Thanks for your reply

I checked ossec.log for wazuh-analysisd and /var/log/messages for analysisd.
I checked as a superuser, but it shows permission denied.

I've attached both screenshots for your reference; please let me know the workaround for this issue.

-Regards 
  Ruben. 

IMG-20230630-WA0000.jpg
IMG-20230630-WA0001.jpg

360 ALLROUND

Jul 3, 2023, 1:14:59 AM
to Wazuh mailing list
Hi Team, 

Is there any update on this? 

-Regards 
  Ruben

Jorge Eduardo Molas

Jul 3, 2023, 1:30:47 PM
to Wazuh mailing list
Hi Ruben! Sorry for the delay.
In order to look further into your problem, can you please answer a few questions:
  • What kind of installation did you do? 
  • Is it a fresh one, or is this a new issue on an existing installation?
  • If it is a fresh installation, could you let me know if you used a guide from our Wazuh documentation? Which one?
Regards!

Jorge Eduardo Molas

Jul 3, 2023, 1:48:11 PM
to Wazuh mailing list
Ruben, could you give me the output of the following command /var/ossec/var/run# ls -lah?

360 ALLROUND

Jul 4, 2023, 12:20:26 AM
to Wazuh mailing list
Hi Jorge, 

This is an old installation, not a new one.

I have attached the log for your review. 

-Regards 
  Ruben 

tmp_df904d58-33b4-41f8-8d5d-a88b8f46e96b.png

Jorge Eduardo Molas

Jul 4, 2023, 3:31:52 PM
to Wazuh mailing list
Hi Ruben, sorry for the misunderstanding. 
I would need to check the permissions of the /var/ossec/var/run directory; for this reason, I need the output of the ls -lah command from that path.
You could run it like this: ls -lah /var/ossec/var/run
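For reference, a minimal sketch of the check involved, demonstrated on a scratch directory so it runs anywhere. The wazuh:wazuh ownership mentioned in the comments is the default for package installs and is an assumption about this setup:

```shell
# Inspect mode/owner/group the same way ls -lah reports them; shown on a
# scratch directory so the commands are runnable anywhere.
mkdir -p /tmp/run-demo && chmod 750 /tmp/run-demo
stat -c '%A %U %G %n' /tmp/run-demo

# On the manager itself:
#   ls -lah /var/ossec/var/run
# Entries there are normally owned by wazuh:wazuh (default package accounts).
# If ownership has drifted, it can usually be restored with:
#   chown -R wazuh:wazuh /var/ossec/var/run
```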
Regards!

360 ALLROUND

Jul 5, 2023, 4:26:14 AM
to Wazuh mailing list
I've attached the results of the requested command.
tmp_7a524869-9b13-4bb6-942c-fe8523fb6156.png

Jorge Eduardo Molas

Jul 6, 2023, 12:50:00 PM
to Wazuh mailing list

Hello Ruben! It seems that the error is due to either user or directory permissions.
I will replicate your case. Can you tell me the type of Wazuh installation you have implemented? Please also share with me the documentation you have followed.
Regards!

360 ALLROUND

Jul 7, 2023, 5:42:06 AM
to Wazuh mailing list
Hi Jorge, 

I have a single-node installation here. I updated Wazuh and its components to the latest version, 4.4.

Now wazuh-manager is running fine without any errors. 

 However, now I am getting a new error only on the wazuh-dashboard. 

I've attached the log screenshot for your reference, please let me know how to resolve this. 

-Regards 
  Ruben 
tmp_de74181d-a10e-41c5-be06-f961977d4474.png

Jorge Eduardo Molas

Jul 7, 2023, 10:43:26 AM
to Wazuh mailing list
Hi Ruben!
It seems that you have reached the shard limit allowed per node.
I understand that you have updated to version 4.4, could you tell me what type of update you followed (Wazuh, OpenDistro, ElasticSearch basic license)?

However, according to your log, you could:
  • Delete indices. This frees shards. You could do it with old indices you don't want or need, or even automate it with ILM/ISM policies that delete old indices after a period of time, as explained in this post.
  • Add more nodes to your Elasticsearch/Wazuh indexer cluster.
  • Increase the max shards per node (not recommended).
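As a hedged sketch of the last option (placeholders in angle brackets are yours to fill in; the second call uses the standard OpenSearch/Elasticsearch cluster settings API, and 1500 is just an illustrative value):

```shell
# Check how many shards the cluster currently has open:
curl -k -u <username>:<password> 'https://<indexer IP>:9200/_cluster/health?filter_path=active_shards'

# As a stopgap only, raise the per-node shard limit (not recommended long-term):
curl -k -u <username>:<password> -X PUT 'https://<indexer IP>:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"cluster.max_shards_per_node": 1500}}'
```

Deleting unneeded indices remains the cleaner fix; raising the limit only postpones the problem.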

360 ALLROUND

Jul 7, 2023, 11:17:04 AM
to Wazuh mailing list
Hi Jorge,

I updated using the Wazuh central components upgrade that's listed on the Wazuh site.

Regarding the indices, the URL https://siem.navitaslifesciences.com is not even opening. Since the application itself is not loading, how am I supposed to clear the old indices?

Previously, I also had issues with the shards being full; however, I resolved it by clearing them through the Wazuh console.

-Regards
  Ruben

360 ALLROUND

Jul 11, 2023, 4:36:04 AM
to Wazuh mailing list
Hi Team, 

Is there any update on this? 

-Regards 
 Ruben 

Jorge Eduardo Molas

Jul 11, 2023, 8:55:26 AM
to Wazuh mailing list
Sorry Ruben for the delay.
I understand that you already deleted the old indices from the indexer. So, regarding your last logs with shard issues, are they fixed?
Can you share ossec.log again so we can look into why https://siem.navitaslifesciences.com does not load?
Regards! 

360 ALLROUND

Jul 12, 2023, 3:32:15 AM
to Wazuh mailing list
Hi Jorge, 

Thanks for your response. 

I've attached the ossec.log as requested. 

Regards 
Ruben 

tmp_94dea49b-a2f8-4e4a-b4db-383ee166d09a.png
tmp_2a703e8a-4f2d-4145-9c8b-f279b5f5a2f7.png

360 ALLROUND

Jul 14, 2023, 2:36:51 AM
to Wazuh mailing list
Hi Jorge, 

Are there any updates?

-Regards 
 Ruben 

Jorge Eduardo Molas

Jul 16, 2023, 9:37:15 PM
to Wazuh mailing list
Hi Ruben. Sorry for the delay.
This seems to be a dashboard issue. Can you perform these commands to check the status?

journalctl -u wazuh-dashboard
cat /usr/share/wazuh-dashboard/data/wazuh/logs/wazuhapp.log | grep -i -E "error|warn"

Regards!

360 ALLROUND

Jul 18, 2023, 12:38:29 AM
to Wazuh mailing list
Hi Jorge, 
Thanks for your reply. 
I've attached the logs as requested. 

-Regards 
  Ruben 

tmp_fc39a08d-2eec-405f-aad3-6a605bf32c92.png
tmp_9f223980-d3f4-4701-b827-4aab005e96b9.png

Jorge Eduardo Molas

Jul 18, 2023, 9:50:36 AM
to Wazuh mailing list
Hi Ruben! Could you paste the output of journalctl? The screenshot cut off the messages part.
Regards!

360 ALLROUND

Jul 19, 2023, 12:44:19 PM
to Wazuh mailing list
Hi Jorge, 
Please see the  pasted contents below. 

Jul 07 13:09:08 ip-10-118-1-113.ec2.internal systemd[1]: wazuh-dashboard.service failed.
Jul 12 11:59:10 ip-10-118-1-113.ec2.internal systemd[1]: Started wazuh-dashboard.
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","plugins-service"],"pid":5387,"message":"Plugin \"dataSourceManagement\" has been disabled since the following direct or transitive dependencies are missing or disabled: [dataSource]"}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","plugins-service"],"pid":5387,"message":"Plugin \"dataSource\" is disabled."}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","plugins-service"],"pid":5387,"message":"Plugin \"visTypeXy\" is disabled."}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","plugins-service"],"pid":5387,"message":"Plugin \"mlCommonsDashboards\" is disabled."}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["warning","config","deprecation"],"pid":5387,"message":"\"opensearch.requestHeadersWhitelist\" is deprecated and has been replaced by \"opensearch.requestHeadersAllowlist\""}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","plugins-system"],"pid":5387,"message":"Setting up [45] plugins: [alertingDashboards,usageCollection,opensearchDashboardsUsageCollection,opensearchDashboardsLegacy,mapsLegacy,share,opensearchUiShared,legacyExport,embeddable,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,reportsDashboards,indexManagementDashboards,dashboard,visualizations,visTypeTimeline,timeline,visTypeVega,visTypeTable,visTypeMarkdown,visBuilder,tileMap,regionMap,customImportMapDashboards,inputControlVis,ganttChartDashboards,visualize,notificationsDashboards,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement,securityDashboards,wazuh,bfetch]"}
Jul 12 11:59:15 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:15Z","tags":["info","savedobjects-service"],"pid":5387,"message":"Waiting until all OpenSearch nodes are compatible with OpenSearch Dashboards before starting saved objects migrations..."}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["info","savedobjects-service"],"pid":5387,"message":"Starting saved objects migrations"}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["info","savedobjects-service"],"pid":5387,"message":"Detected mapping change in \"properties.visualization-visbuilder\""}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["info","savedobjects-service"],"pid":5387,"message":"Creating index .kibana_2."}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["error","opensearch","data"],"pid":5387,"message":"[validation_exception]: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["warning","savedobjects-service"],"pid":5387,"message":"Unable to connect to OpenSearch. Error: validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["fatal","root"],"pid":5387,"message":"ResponseError: validation_exception: [validation_exception] Reason: Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;\n at onBody (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:374:23)\n at IncomingMessage.onEnd (/usr/share/wazuh-dashboard/node_modules/@opensearch-project/opensearch/lib/Transport.js:293:11)\n at IncomingMessage.emit (events.js:412:35)\n at IncomingMessage.emit (domain.js:475:12)\n at endReadableNT (internal/streams/readable.js:1333:12)\n at processTicksAndRejections (internal/process/task_queues.js:82:21) {\n meta: {\n body: { error: [Object], status: 400 },\n statusCode: 400,\n headers: {\n 'content-type': 'application/json; charset=UTF-8',\n 'content-length': '377'\n },\n meta: {\n context: null,\n request: [Object],\n name: 'opensearch-js',\n connection: [Object],\n attempts: 0,\n aborted: false\n }\n }\n}"}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: {"type":"log","@timestamp":"2023-07-12T06:29:16Z","tags":["info","plugins-system"],"pid":5387,"message":"Stopping all plugins."}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal opensearch-dashboards[5387]: FATAL {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"}],"type":"validation_exception","reason":"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [999]/[1000] maximum shards open;"},"status":400}
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal systemd[1]: wazuh-dashboard.service: main process exited, code=exited, status=1/FAILURE
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal systemd[1]: Unit wazuh-dashboard.service entered failed state.
Jul 12 11:59:16 ip-10-118-1-113.ec2.internal systemd[1]: wazuh-dashboard.service failed.

-Regards 
  Ruben 

Jorge Eduardo Molas

Jul 20, 2023, 10:54:47 AM
to Wazuh mailing list
Hi Ruben, thank you. 
According to your logs of Jul 12, you still have problems with shards in the indexer; you should solve this by following the possible solutions mentioned above.
Regards!

360 ALLROUND

Jul 24, 2023, 12:30:24 AM
to Wazuh mailing list
Hi Jorge, 

Thanks for your response. 

As I said, the solutions you provided have to be done from the dashboard, which isn't loading.
Please take a look at this site: https://siem.navitaslifesciences.com

If there is any option to clear the indexer from the command line, please guide me.
-Regards 
  Ruben 

Jorge Eduardo Molas

Jul 25, 2023, 2:40:27 PM
to Wazuh mailing list
Hi Ruben. Sorry for the delay.
Ok! I thought you had already cleared the old indices. To do that, follow the steps below:

1.  List your indices with:
    curl -u <username>:<password> https://<indexer IP>:9200/_cat/indices/wazuh-alerts* -k

    Example output:
    green open wazuh-alerts-4.x-2023.05.25 0KfoS4T9Tq-3KjgQh5v1mg 3 0 128 0 386kb 386kb
    green open wazuh-alerts-4.x-2022.12.28 6cKOiF8cTl2XySZiv7cFkA 3 0 122 0 318.9kb 318.9kb
    green open wazuh-alerts-4.x-2023.05.26 SHwfkT29QWeeFHDa8ZYegA 3 0 1 0 14.1kb 14.1kb
    green open wazuh-alerts-4.x-2022.12.27 dc2XNM-TQu6umsGwfnGtDA 3 0 210 0 593kb 593kb

    Note that the Wazuh index name follows the pattern wazuh-alerts-<version>-<YYYY>.<MM>.<DD>.

2.  You can then delete specific days with:

    curl -u <username>:<password> -XDELETE https://<indexer IP>:9200/wazuh-alerts-4.x-2022.12.27 -k

3.  If you want to remove an entire month, you can use the wildcard "*".
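Building on the naming pattern above, here is a small sketch (assuming GNU date and the wazuh-alerts-4.x-* daily naming) that computes the index name for a given age, so the deletion in step 2 can be scripted:

```shell
# Compute the daily index name from N days ago (assumes GNU date and the
# wazuh-alerts-4.x-YYYY.MM.DD naming shown above).
days_old=90
idx="wazuh-alerts-4.x-$(date -d "-${days_old} days" +%Y.%m.%d)"
echo "$idx"

# On the indexer you could then delete it (placeholders in <> as above):
#   curl -u <username>:<password> -XDELETE "https://<indexer IP>:9200/${idx}" -k
```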


Let me know if this information is useful.

Regards!
