indexer-connector 403 error


Alan Jackson

Mar 3, 2025, 4:16:23 PM
to Wazuh | Mailing List
After we deleted an agent from our Wazuh system (it hadn't been deactivated on the client side first, so it immediately tried to re-enroll), we've been seeing errors in the Wazuh server's ossec.log and in the indexer log.

From the indexer log side, it appears to be an SSL connection failing, but from the logs I can't determine where the request originates or terminates (it appears to be a threaded request), so it's pretty hard to diagnose.
As far as I can see, all the SSL certs are valid (dates, CNs, etc.), so I don't think it's a case of a cert expiring.
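
(For reference, I checked roughly like this - the paths assume the default cert locations, so adjust to wherever your certs actually live; key files in those directories will just error out, hence the 2>/dev/null.)

for cert in /etc/wazuh-indexer/certs/*.pem /etc/filebeat/certs/*.pem; do
  echo "== $cert"
  openssl x509 -in "$cert" -noout -dates -subject 2>/dev/null
done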

Any help appreciated...

ossec.log: 
2025/03/04 00:00:11 indexer-connector: ERROR: HTTP response code said error, status code: 403.

/var/log/wazuh-indexer/wazuh.log:
[2025-03-04T09:34:41,914][ERROR][o.o.h.n.s.SecureNetty4HttpServerTransport] [node-1] Exception during establishing a SSL connection: java.net.SocketException: Connection reset
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:401) ~[?:?]
at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:434) ~[?:?]
at org.opensearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:156) ~[transport-netty4-client-2.16.0.jar:2.16.0]
at org.opensearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:141) ~[transport-netty4-client-2.16.0.jar:2.16.0]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994) [netty-common-4.1.111.Final.jar:4.1.111.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.111.Final.jar:4.1.111.Final]
at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]

Natalia Castillo

Mar 10, 2025, 1:52:33 AM
to Wazuh | Mailing List

Based on the logs you've shared, I can see two related issues occurring in your Wazuh environment after deleting an agent that wasn't properly deactivated first.

From your ossec.log, there's a 403 Forbidden response from the indexer-connector, while your wazuh-indexer log shows SSL connection issues with a connection reset error. These issues are likely related to the improper agent removal process.

Here's what's likely happening:

  1. The agent is still trying to reconnect to the Wazuh server since it wasn't properly deactivated
  2. The server has deleted the agent's credentials/certificates from its trusted store
  3. When the agent attempts to establish an SSL connection, the server rejects it (connection reset)
  4. The indexer-connector then gets a 403 Forbidden when trying to process data related to this agent

To resolve this issue:

  1. First, check if the agent is still in the Wazuh server's list with:
    /var/ossec/bin/agent_control -l
  2. If the agent appears with a "Disconnected" status, properly remove it:
    /var/ossec/bin/manage_agents -r [WAZUH_AGENT_ID]
  3. If the agent doesn't appear in the list but is still trying to connect, you'll need to clean up the registration on the agent side. Connect to the agent machine and (see the command sketch after this list):
    • Stop the Wazuh agent service
    • Remove the client keys file (typically at /var/ossec/etc/client.keys)
    • Restart the agent service
  4. Check for any stale certificates in your Wazuh server's certificate store related to this agent
  5. Restart the Wazuh manager and indexer services after cleanup:
    systemctl restart wazuh-manager
    systemctl restart wazuh-indexer
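
For step 3, the agent-side cleanup would look roughly like this on a Linux agent (the service name and key path assume a default install):

    # On the agent machine
    systemctl stop wazuh-agent
    rm -f /var/ossec/etc/client.keys    # forces a fresh enrollment on the next start
    systemctl start wazuh-agent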

If the issue persists, you might need to check the Wazuh API logs as well, as there could be additional connection issues between the manager and the indexer related to authentication.

Anselm Garcia

Mar 10, 2025, 6:37:43 AM
to Wazuh | Mailing List
I have exactly the same issue. The problem began the moment I tried to re-enroll an agent by deleting its client.keys file and removing it with manage_agents from the manager.
At that moment I was having network issues.
When the network issues were resolved, I re-enrolled the agent successfully with the same procedure, but the errors didn't stop.
I stopped the agent service to check whether the errors would stop, but had no success.
I also tried blocking all agent connections to Wazuh by disabling the rules I have in the perimeter firewall, to check whether the connection errors would persist, and they didn't stop.
That's very strange, because at that point no agent was able to connect to the Wazuh manager.

I checked the Wazuh API logs and found no errors there.

The errors appear in ossec.log like this:

2025/03/10 11:25:55 indexer-connector: ERROR: HTTP response code said error, status code: 403.
2025/03/10 11:25:56 indexer-connector: ERROR: HTTP response code said error, status code: 403.
2025/03/10 11:25:58 indexer-connector: ERROR: HTTP response code said error, status code: 403.
2025/03/10 11:25:59 indexer-connector: ERROR: HTTP response code said error, status code: 403.
2025/03/10 11:26:00 indexer-connector: ERROR: HTTP response code said error, status code: 403.

And also in the wazuh-cluster.log of the indexer (/var/log/wazuh-indexer/wazuh-cluster.log), like this:

[2025-03-10T11:27:23,388][ERROR][o.o.h.n.s.SecureNetty4HttpServerTransport] [antimoni.iit.idisc.es] Exception during establishing a SSL connection: java.net.SocketException: Connection reset

java.net.SocketException: Connection reset
        at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:401) ~[?:?]
        at java.base/sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:434) ~[?:?]
        at org.opensearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:156) ~[transport-netty4-client-2.16.0.jar:2.16.0]
        at org.opensearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:141) ~[transport-netty4-client-2.16.0.jar:2.16.0]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:994) [netty-common-4.1.111.Final.jar:4.1.111.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.111.Final.jar:4.1.111.Final]
        at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]


Any help would be appreciated. It's very difficult to diagnose the error.

Alan Jackson

Mar 11, 2025, 12:55:01 AM
to Wazuh | Mailing List
Thanks for the assistance.

The agent was decommissioned very shortly after the initial removal (our usual process, in which the agent is shut down BEFORE being removed from Wazuh, wasn't followed), so it's certainly not that agent's connection causing the error.
From the command history, it looks like the agent was removed from Wazuh, re-enrolled itself, and was then shut down; the new agent ID it was given was also removed from Wazuh.

My suspicion is that the error is generated by a connection between the indexer and the wazuh-manager system (I'm not sure in which direction). Presumably one of these two systems still holds some knowledge of the deleted agent that the other doesn't? However, every method I've tried to locate any config related to the agent has turned up nothing - that includes the agent_control binary and querying the agent API via the portal dev tools. It doesn't show up in the manager's client.keys either.
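
For completeness, those checks were along these lines (the agent name below is just an illustrative placeholder):

# On the manager: list every agent the manager knows about
/var/ossec/bin/agent_control -l

# ...and confirm the old agent's ID/name is gone from the key store
grep -i "<old-agent-name>" /var/ossec/etc/client.keys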

I've already restarted the full stack; in fact, I've even upgraded it to see if that would help.

Can you provide instructions on how to check the certificate details in the Wazuh server's certificate store? All the certificate .pem files I've found appear to be valid.

There aren't any corresponding log entries in the wazuh-manager's api.log file, unfortunately. There are, however, entries like the following:

2025/03/11 13:10:00 INFO: wazuh-wui 10.0.30.50 "GET /cluster/nodes" with parameters {"select": "name"} and body {} done in 0.021s: 401
2025/03/11 13:10:00 INFO: wazuh-wui 10.0.30.50 "GET /cluster/nodes" with parameters {"select": "name"} and body {} done in 0.023s: 401
2025/03/11 13:10:01 INFO: wazuh-wui 10.0.30.50 "POST /security/user/authenticate" with parameters {} and body {} done in 1.421s: 200
2025/03/11 13:10:01 INFO: wazuh-wui 10.0.30.50 "POST /security/user/authenticate" with parameters {} and body {} done in 0.845s: 200
2025/03/11 13:10:02 INFO: wazuh-wui 10.0.30.50 "GET /cluster/nodes" with parameters {"select": "name"} and body {} done in 0.133s: 200
2025/03/11 13:10:02 INFO: wazuh-wui 10.0.30.50 "GET /cluster/nodes" with parameters {"select": "name"} and body {} done in 0.138s: 200

But that looks like an initial failed auth which is retried & succeeds...

Regards,
--Alan

Anselm Garcia

Mar 11, 2025, 12:37:34 PM
to Wazuh | Mailing List
I've found the problem.
I suspect those errors are related to the upgrade of the Wazuh cluster to a newer version - in my case, the upgrade to 4.11.0.
What I've discovered is that, for some strange reason, the admin user has lost write access to the wazuh-states-vulnerabilities-wazuh index.

To solve the problem, I've executed the following request, and the errors stopped.

curl -X PUT "https://indexer-ip-address:9200/wazuh-states-vulnerabilities-wazuh/_settings" -u admin:<password> --insecure -H "Content-Type: application/json" -d '
{
  "index.blocks.write": false
}'
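
If you want to confirm the block before and after the change, the index settings can be read back with the same credentials:

curl -X GET "https://indexer-ip-address:9200/wazuh-states-vulnerabilities-wazuh/_settings?pretty" -u admin:<password> --insecure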

Please, let me know if it works for you.

James McGeoy

Mar 11, 2025, 1:19:46 PM
to Wazuh | Mailing List
Anselm,

This fixed the issue for me! My admin user did indeed lose write access to the wazuh-states-vulnerabilities index. You are my hero. Thank you!

-James

Alan Jackson

Mar 11, 2025, 3:09:56 PM
to Wazuh | Mailing List
Looks good so far! Out of interest, how did you locate/diagnose this?

Regards,
--Alan

Anselm Garcia

Apr 15, 2025, 4:38:35 PM
to Wazuh | Mailing List
It was quite obvious that if the wazuh-manager service was trying to connect to the wazuh-indexer, it had to be something related to the vulnerability detector.
Since the index existed, it meant the manager didn't have sufficient permissions to write to it.
So the conclusion was that I had to investigate how to grant the permissions needed for that connection.
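
In practice, the first step was simply confirming the index was there and healthy, for example with the same admin credentials as in my earlier message:

curl -u admin:<password> --insecure "https://indexer-ip-address:9200/_cat/indices/wazuh-states-vulnerabilities-*?v"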

Alan Jackson

Jan 28, 2026, 11:13:44 PM
to Wazuh | Mailing List
Thread necro, but I just encountered this again with different indices (maybe new ones added over a few Wazuh version updates - currently on 4.14.2): `wazuh-states-inventory-protocols-wazuh` and `wazuh-states-inventory-networks-wazuh`, as opposed to `wazuh-states-vulnerabilities-wazuh`.

curl -k --cert /etc/wazuh-indexer/certs/admin.pem --key /etc/wazuh-indexer/certs/admin-key.pem -XPUT "https://<indexer-ip>:9200/wazuh-states-inventory-protocols-wazuh/_settings" -H "Content-Type: application/json" -d '{"index.blocks.write": false}'

Turning the various debug options up to level 2 produced a fairly explicit error showing which index was being blocked from writing - although I have absolutely no idea why they would be blocked, or what change makes Wazuh decide to write to them.

{"delete":{"_index":"wazuh-states-inventory-networks-wazuh","_id":"295_af5fe079c7d49f657e8728955615f15bd5f1b8a4","status":403,"error":{"type":"cluster_block_exception","reason":"index [wazuh-states-inventory-networks-wazuh] blocked by: [FORBIDDEN/8/index write (api)];"}}},{"delete":{"_index":"wazuh-states-inventory-networks-wazuh","_id":"295_4af8286714d539374ebf9da004affa2f7f39cfe5","status":403,"error":{"type":"cluster_block_exception","reason":"index [wazuh-states-inventory-networks-wazuh] blocked by: [FORBIDDEN/8/index write (api)];"}}}]}