macOS agents not updating remotely


Farshin Hashim

Nov 5, 2024, 11:11:08 PM
to Wazuh | Mailing List
Hi Team,

I hope you are doing well. I have upgraded my Wazuh setup from 4.7.3 to 4.8.1, and now I have the following issues:
  1. I have tried updating agents remotely using the endpoint API. Windows agents are getting updated, but macOS agents respond that the repository is not reachable.
  2. Vulnerability detection is not working after the update; it reports as disabled. I have checked ossec.conf; everything seems intact.
  3. I have integrated a script to collect all installed applications on the agents to build a packages dashboard, following https://opensourcesecurityblogs.com/wazuh-endpoints-inventory-packages-in-one-dashboard/. These logs (rule ID 10122) vanish after some time.
Any help is appreciated.
Thanks,

Stuti Gupta

Nov 6, 2024, 12:53:55 AM
to Wazuh | Mailing List
Hi Farshin,

The error "The repository is not reachable" means the manager can't find the agent version in the external repository. Do the manager and agents have an internet connection?

Make sure the agents are connected to the manager before performing the remote upgrade.
Can you please let us know which version of the Wazuh manager you are using?
Make sure the wazuh-manager version is higher than or equal to the wazuh-agent version.
The agent upgrade module is responsible for carrying out the entire agent upgrade process remotely: https://documentation.wazuh.com/current/user-manual/agents/remote-upgrading/agent-upgrade-module.html
Please share the ossec.log of both the Wazuh agent and the manager.
You can also push a WPK package from the Wazuh manager to upgrade the wazuh-agent.

Please follow these steps:
Download the specific version you need:
https://documentation.wazuh.com/current/user-manual/agent/agent-management/remote-upgrading/wpk-list.html
Then run this command on the Wazuh manager to upgrade (change the file name):
/var/ossec/bin/agent_upgrade -a <agent_id> -f /path/to/wpk -x upgrade.sh

Note: there isn't a WPK file available for Apple Silicon to download. However, you can generate it yourself by following this guide: https://documentation.wazuh.com/current/development/packaging/generate-wpk-package.html#macos-wpk. We are currently addressing this issue, as you can see here: https://github.com/wazuh/wazuh/issues/25976. In the meantime, I recommend manually updating the agent.
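Assuming the WPK has already been downloaded to the manager, the manager-side flow can be sketched as follows. The agent ID and file path below are placeholders, and the `-l` listing flag is an assumption worth confirming against the agent-upgrade-module documentation for your version:

```shell
# Run on the Wazuh manager.
# List agents for which an upgrade is available (flag assumed; check your
# version's agent_upgrade documentation).
/var/ossec/bin/agent_upgrade -l

# Upgrade one agent from a locally stored WPK (placeholder agent ID and path).
/var/ossec/bin/agent_upgrade -a 001 \
  -f /var/ossec/var/upgrade/wazuh_agent_v4.8.1_macos_x86_64.wpk \
  -x upgrade.sh
```

The tool prints the upgrade result per agent, so you can tell immediately whether the macOS agents accept the package.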

Vulnerability:
Can you please verify that you have followed the vulnerability detection configuration guide: https://documentation.wazuh.com/current/user-manual/capabilities/vulnerability-detection/configuring-scans.html

Please make sure to update the <vulnerability-detection> and <indexer> blocks in /var/ossec/etc/ossec.conf for version 4.8.1.

Replace `0.0.0.0` with the Wazuh indexer IP in the Filebeat config file (/etc/filebeat/filebeat.yml). For example:
output.elasticsearch.hosts:
  - 127.0.0.1:9200

Set the Wazuh indexer node's IP address or hostname in the <indexer> block of ossec.conf. If you have a Wazuh indexer cluster, add a `<host>` entry for each of your nodes. For example, in a two-node configuration:
<hosts>
  <host>https://10.0.0.1:9200</host>
  <host>https://10.0.0.2:9200</host>
</hosts>

Check the certificate names:
ls -l /etc/filebeat/certs
Verify the Filebeat certificate name and path are correct and update the `<indexer>` block in `/var/ossec/etc/ossec.conf` accordingly. Additionally, you can use this command to verify the certificate paths, names, and indexer IP:
curl -u <user>:<pass> --cacert <path.pem> --cert <path-client.pem> --key <path-client-key.pem> -X GET "https://<IP>:9200/_cluster/health"

Save the Wazuh indexer username and password into the Wazuh manager keystore using the wazuh-keystore tool:
/var/ossec/bin/wazuh-keystore -f indexer -k username -v <INDEXER_USERNAME>
/var/ossec/bin/wazuh-keystore -f indexer -k password -v <INDEXER_PASSWORD>
By default, the admin credentials used to log in to the dashboard are the same as the indexer username and password.

After that, save the configuration and restart the manager/cluster using the command:
systemctl restart wazuh-manager
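After restarting, you can sanity-check the pieces involved. A minimal sketch, assuming default paths and a local indexer (replace the IP and credentials with your own):

```shell
# Verify Filebeat can reach the Wazuh indexer with its configured certificates
filebeat test output

# Query cluster health directly; the status should be "green"
curl -k -u admin:<PASSWORD> "https://127.0.0.1:9200/_cluster/health?pretty"
```

If `filebeat test output` fails on TLS, the certificate paths in /etc/filebeat/filebeat.yml are the first thing to re-check.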

If this didn't resolve the issue, please share the output of the following command:
grep -i vul /var/ossec/logs/ossec.log

Refer to: https://documentation.wazuh.com/current/upgrade-guide/troubleshooting.html

For the custom script, please make sure it is present in /var/ossec/integrations/ with the correct ownership and permissions and is referenced in ossec.conf. Also, please share the ossec.log.
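As a quick sketch of that check (the script name matches the command wodle in your ossec.conf; root:wazuh ownership with 750 permissions is the typical expectation for integration scripts, not something your setup necessarily mandates):

```shell
# Confirm the integration script exists and is executable
ls -l /var/ossec/integrations/getpackages.py

# Typical ownership/permissions for integration scripts (assumed values)
chown root:wazuh /var/ossec/integrations/getpackages.py
chmod 750 /var/ossec/integrations/getpackages.py
```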

Hope to hear from you soon.

Farshin Hashim

Nov 6, 2024, 1:20:39 AM
to Wazuh | Mailing List

Hi Stuti,
Yes, the agents were online while I was updating. I'm currently on Wazuh 4.8.1, and the link you shared only has 4.8.2: https://documentation.wazuh.com/current/user-manual/agent/agent-management/remote-upgrading/wpk-list.html
Please find the ossec.conf attached:
<!--
  Wazuh - Manager - Default configuration for ubuntu 22.04
  More info at: https://documentation.wazuh.com
  Mailing list: https://groups.google.com/forum/#!forum/wazuh
-->

<ossec_config>
  <global>
    <jsonout_output>yes</jsonout_output>
    <alerts_log>yes</alerts_log>
    <logall>no</logall>
    <logall_json>no</logall_json>
    <email_notification>no</email_notification>
    <smtp_server>smtp.example.wazuh.com</smtp_server>
    <email_from>wa...@example.wazuh.com</email_from>
    <email_to>reci...@example.wazuh.com</email_to>
    <email_maxperhour>12</email_maxperhour>
    <email_log_source>alerts.log</email_log_source>
    <agents_disconnection_time>10m</agents_disconnection_time>
    <agents_disconnection_alert_time>0</agents_disconnection_alert_time>
  </global>

  <alerts>
    <log_alert_level>3</log_alert_level>
    <email_alert_level>12</email_alert_level>
  </alerts>

  <!-- Choose between "plain", "json", or "plain,json" for the format of internal logs -->
  <logging>
    <log_format>plain</log_format>
  </logging>

  <remote>
    <connection>secure</connection>
    <port>1514</port>
    <protocol>tcp</protocol>
    <queue_size>131072</queue_size>
  </remote>

  <!-- Policy monitoring -->
  <rootcheck>
    <disabled>no</disabled>
    <check_files>yes</check_files>
    <check_trojans>yes</check_trojans>
    <check_dev>yes</check_dev>
    <check_sys>yes</check_sys>
    <check_pids>yes</check_pids>
    <check_ports>yes</check_ports>
    <check_if>yes</check_if>

    <!-- Frequency that rootcheck is executed - every 12 hours -->
    <frequency>43200</frequency>

    <rootkit_files>etc/rootcheck/rootkit_files.txt</rootkit_files>
    <rootkit_trojans>etc/rootcheck/rootkit_trojans.txt</rootkit_trojans>

    <skip_nfs>yes</skip_nfs>
  </rootcheck>

  <wodle name="cis-cat">
    <disabled>yes</disabled>
    <timeout>1800</timeout>
    <interval>1d</interval>
    <scan-on-start>yes</scan-on-start>

    <java_path>wodles/java</java_path>
    <ciscat_path>wodles/ciscat</ciscat_path>
  </wodle>
 
  <!-- Getpackages -->
   <wodle name="command">
     <disabled>no</disabled>
     <command>/var/ossec/integrations/getpackages.py</command>
     <interval>1h</interval>
     <run_on_start>yes</run_on_start>
     <ignore_output>yes</ignore_output>
   </wodle>

  <!-- Osquery integration -->
  <wodle name="osquery">
    <disabled>no</disabled>
    <run_daemon>yes</run_daemon>
    <log_path>/var/log/osquery/osqueryd.results.log</log_path>
    <config_path>/etc/osquery/osquery.conf</config_path>
    <add_labels>yes</add_labels>
  </wodle>
 
  <!-- System inventory -->
  <wodle name="syscollector">
    <disabled>no</disabled>
    <interval>1h</interval>
    <scan_on_start>yes</scan_on_start>
    <hardware>yes</hardware>
    <os>yes</os>
    <network>yes</network>
    <packages>yes</packages>
    <ports all="no">yes</ports>
    <processes>yes</processes>

    <!-- Database synchronization settings -->
    <synchronization>
      <max_eps>10</max_eps>
    </synchronization>
  </wodle>

  <sca>
    <enabled>yes</enabled>
    <scan_on_start>yes</scan_on_start>
    <interval>12h</interval>
    <skip_nfs>yes</skip_nfs>
  </sca>

  <vulnerability-detection>
    <enabled>yes</enabled>
    <index-status>yes</index-status>
    <feed-update-interval>60m</feed-update-interval>
  </vulnerability-detection>

  <!-- File integrity monitoring -->
  <syscheck>
    <disabled>no</disabled>

    <!-- Frequency that syscheck is executed default every 12 hours -->
    <frequency>43200</frequency>

    <scan_on_start>yes</scan_on_start>

    <!-- Generate alert when new file detected -->
    <alert_new_files>yes</alert_new_files>

    <!-- Don't ignore files that change more than 'frequency' times -->
    <auto_ignore frequency="10" timeframe="3600">no</auto_ignore>

    <!-- Directories to check  (perform all possible verifications) -->
    <directories>/etc,/usr/bin,/usr/sbin</directories>
    <directories>/bin,/sbin,/boot</directories>

    <!-- Files/directories to ignore -->
    <ignore>/etc/mtab</ignore>
    <ignore>/etc/hosts.deny</ignore>
    <ignore>/etc/mail/statistics</ignore>
    <ignore>/etc/random-seed</ignore>
    <ignore>/etc/random.seed</ignore>
    <ignore>/etc/adjtime</ignore>
    <ignore>/etc/httpd/logs</ignore>
    <ignore>/etc/utmpx</ignore>
    <ignore>/etc/wtmpx</ignore>
    <ignore>/etc/cups/certs</ignore>
    <ignore>/etc/dumpdates</ignore>
    <ignore>/etc/svc/volatile</ignore>

    <!-- File types to ignore -->
    <ignore type="sregex">.log$|.swp$</ignore>

    <!-- Check the file, but never compute the diff -->
    <nodiff>/etc/ssl/private.key</nodiff>

    <skip_nfs>yes</skip_nfs>
    <skip_dev>yes</skip_dev>
    <skip_proc>yes</skip_proc>
    <skip_sys>yes</skip_sys>

    <!-- Nice value for Syscheck process -->
    <process_priority>10</process_priority>

    <!-- Maximum output throughput -->
    <max_eps>100</max_eps>

    <!-- Database synchronization settings -->
    <synchronization>
      <enabled>yes</enabled>
      <interval>5m</interval>
      <max_interval>1h</max_interval>
      <max_eps>10</max_eps>
    </synchronization>
  </syscheck>

  <!-- Active response -->
  <global>
    <white_list>127.0.0.1</white_list>
    <white_list>^localhost.localdomain$</white_list>
    <white_list>127.0.0.53</white_list>
  </global>

  <command>
    <name>disable-account</name>
    <executable>disable-account</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <command>
    <name>restart-wazuh</name>
    <executable>restart-wazuh</executable>
  </command>

  <command>
    <name>firewall-drop</name>
    <executable>firewall-drop</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <command>
    <name>host-deny</name>
    <executable>host-deny</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <command>
    <name>route-null</name>
    <executable>route-null</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <command>
    <name>win_route-null</name>
    <executable>route-null.exe</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <command>
    <name>netsh</name>
    <executable>netsh.exe</executable>
    <timeout_allowed>yes</timeout_allowed>
  </command>

  <!--
  <active-response>
    active-response options here
  </active-response>
  -->

  <!-- Log analysis -->
  <localfile>
    <log_format>command</log_format>
    <command>df -P</command>
    <frequency>360</frequency>
  </localfile>

  <localfile>
    <log_format>full_command</log_format>
    <command>netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d</command>
    <alias>netstat listening ports</alias>
    <frequency>360</frequency>
  </localfile>

  <localfile>
    <log_format>full_command</log_format>
    <command>last -n 20</command>
    <frequency>360</frequency>
  </localfile>

  <ruleset>
    <!-- Default ruleset -->
    <decoder_dir>ruleset/decoders</decoder_dir>
    <rule_dir>ruleset/rules</rule_dir>
    <rule_exclude>0215-policy_rules.xml</rule_exclude>
    <list>etc/lists/audit-keys</list>
    <list>etc/lists/amazon/aws-eventnames</list>
    <list>etc/lists/security-eventchannel</list>

    <!-- User-defined ruleset -->
    <decoder_dir>etc/decoders</decoder_dir>
    <rule_dir>etc/rules</rule_dir>
  </ruleset>

  <rule_test>
    <enabled>yes</enabled>
    <threads>1</threads>
    <max_sessions>64</max_sessions>
    <session_timeout>15m</session_timeout>
  </rule_test>

  <!-- Configuration for wazuh-authd -->
  <auth>
    <disabled>no</disabled>
    <port>1515</port>
    <use_source_ip>no</use_source_ip>
    <purge>yes</purge>
    <use_password>yes</use_password>
    <ciphers>HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH</ciphers>
    <!-- <ssl_agent_ca></ssl_agent_ca> -->
    <ssl_verify_host>no</ssl_verify_host>
    <ssl_manager_cert>etc/sslmanager.cert</ssl_manager_cert>
    <ssl_manager_key>etc/sslmanager.key</ssl_manager_key>
    <ssl_auto_negotiate>no</ssl_auto_negotiate>
  </auth>

  <cluster>
    <name>wazuh</name>
    <node_name>node01</node_name>
    <node_type>master</node_type>
    <key></key>
    <port>1516</port>
    <bind_addr>0.0.0.0</bind_addr>
    <nodes>
        <node>NODE_IP</node>
    </nodes>
    <hidden>no</hidden>
    <disabled>yes</disabled>
  </cluster>

</ossec_config>

<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/ossec/logs/active-responses.log</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/auth.log</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/syslog</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/dpkg.log</location>
  </localfile>

  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/kern.log</location>
  </localfile>


</ossec_config>

<ossec_config>

  <indexer>
    <enabled>yes</enabled>
    <hosts>
      <host>https://127.0.0.1:9200</host>
    </hosts>
    <ssl>
      <certificate_authorities>
        <ca>/etc/filebeat/certs/root-ca.pem</ca>
      </certificate_authorities>
      <certificate>/etc/filebeat/certs/filebeat.pem</certificate>
      <key>/etc/filebeat/certs/filebeat-key.pem</key>
    </ssl>
  </indexer>

</ossec_config>

Yes, the script is located in the integrations directory.

Farshin Hashim

Nov 6, 2024, 3:27:51 AM
to Wazuh | Mailing List
Here is some more info:

grep -i -E "error|warn" /var/ossec/logs/ossec.log
(screenshots attached: 2.png, 3.png)
Let me know if anything else is required.

Stuti Gupta

Nov 7, 2024, 5:07:25 AM
to Wazuh | Mailing List
The reason your vulnerability detection is not working is the red cluster health; the cluster health should be green. To resolve the issue, please delete the indices with unassigned shards. You can use this command:
curl -k -XGET -u user:pass "https://<indexer_ip>:9200/_cat/shards" | grep UNASSIGNED | awk '{print $1}' | xargs -i curl -k -XDELETE -u user:pass "https://<indexer_ip>:9200/{}"  
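Before running a pipeline like that, it helps to see what the `grep`/`awk` stage actually extracts. Here it is applied to a made-up sample in the `_cat/shards` output format (index names are illustrative only):

```shell
# Columns in `_cat/shards` output: index shard prirep state docs store ip node
sample='wazuh-alerts-4.x-2024.10.01 0 p STARTED 1200 2mb 10.0.0.1 node-1
wazuh-alerts-4.x-2024.09.01 0 p UNASSIGNED
wazuh-alerts-4.x-2024.08.15 0 r UNASSIGNED'

# Keep only unassigned shards and print their index names (column 1);
# these are the indices the pipeline would DELETE.
echo "$sample" | grep UNASSIGNED | awk '{print $1}'
# prints:
# wazuh-alerts-4.x-2024.09.01
# wazuh-alerts-4.x-2024.08.15
```

Reviewing this list first matters because the DELETE step removes the whole index, not just the unassigned shard.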
https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html
You may have to delete old data to ensure you can maintain the retention period that you need.
Alerts generated by Wazuh are sent to a daily index named wazuh-alerts-4.x-YYYY.MM.DD under the default configuration. You can create policies that govern the lifecycle of the indices based on different phases; you can follow the steps in this document: https://documentation.wazuh.com/current/user-manual/wazuh-indexer/wazuh-indexer-tuning.html. Or you can manually delete old indices that are no longer required.
The API call to delete indices is:
DELETE <index_name>
Or CLI command
curl -k -u admin:admin -XDELETE https://<WAZUH_INDEXER_IP>:9200/wazuh-alerts-4.x-YYYY.MM.DD
You can also take snapshots of the indices that automatically back up your Wazuh indices in local or cloud-based storage and restore them at any given time. To do so, please refer to:
https://wazuh.com/blog/index-backup-management
https://wazuh.com/blog/wazuh-index-management/

Another reason you cannot get the vulnerabilities is that an agent is not able to connect to the wazuh-manager, as shown by this log:
 wazuh-remoted: WARNING: Agent key already in use: agent ID '024'
As the warning describes, it is not possible to register a new agent if its name is identical to one already registered. There are multiple ways to avoid/fix this; I list some of them below:
Delete the existing agent:
Using the API: DELETE /agents
Or remove the agent using the CLI. You can run this command on your Wazuh manager to list all registered agents:
/var/ossec/bin/manage_agents -l
And then delete the one you need using its ID:
/var/ossec/bin/manage_agents -r <ID>
Set a different name in the enrollment configuration:
This option consists of registering your new agent with auto-enrollment under a different name. To achieve this, add an <agent_name> tag with a different name to the enrollment section of your agent's ossec.conf:
     <client>
         ...
         <enrollment>
             <agent_name>EXAMPLE_NAME</agent_name>
             ...
         </enrollment>
     </client>

Register the agent manually:
The last option is to register the agent manually, specifying a different name from the one that already exists. To do this, run something similar to this on the agent:
/var/ossec/bin/agent-auth -m <manager_IP> -A <agent_name>
Please resolve that error and make sure the agents are connected and alive.

For the 4.8.1 wpk file, you can use this link : https://packages.wazuh.com/4.x/wpk/macos/x86_64/pkg/wazuh_agent_v4.8.1_macos_x86_64.wpk

Additionally, for the MITRE technique ID T1078 error: could you share more information so that we can better troubleshoot the problem?
1. Check Mitre ATT&CK data feed updates: make sure that the data feed containing Mitre ATT&CK information has been updated after the Wazuh upgrade. Run the following command to update the database:
sudo /var/ossec/bin/wazuh-db update
This command will update the Wazuh database with the latest Mitre ATT&CK information.
2. Restart Wazuh services: after updating the database, restart the Wazuh services to apply the changes.
sudo systemctl restart wazuh-manager
sudo systemctl restart wazuh-api
3. Verify the existence and permissions of the database file:
ls -l /var/ossec/var/db/mitre.db

Hope this helps.