Hi Wazuh Team,
I would like to ask for your recommendation regarding a Wazuh environment migration.
Currently, I have two Wazuh managers:
Production Wazuh: 192.168.10.14 (with ~20 active agents already connected)
Development Wazuh: 192.168.10.15
Due to an issue on the current production Wazuh (192.168.10.14), I am planning to promote the development Wazuh instance to become the new production environment.
My question is regarding the best practice for this migration:
Is it better to change the IP address of the new production Wazuh (192.168.10.15) to the old production IP (192.168.10.14) so that existing agents do not need to be reconfigured?
OR
Is it better to update and repoint all agents to the new Wazuh manager at 192.168.10.15?
From a stability, security, and operational perspective, I would like to follow the recommended and safest approach.
Could you please advise on:
Which approach is preferred?
Any potential risks or caveats for each option?
Whether agent re-registration or key regeneration would be required in either scenario?
Thank you very much for your guidance.
Best regards,
Robby
As per the details shared, you are currently using an all-in-one Wazuh deployment with:
Production Wazuh: 192.168.10.14 (approximately 20 active agents)
Development Wazuh: 192.168.10.15 (no active agents at present)
Due to the issues observed on the current production Wazuh server (192.168.10.14), you are planning to promote the development Wazuh instance as the new production environment.
Since the existing production server is facing issues, it is not recommended to take a complete backup of its configuration files and reuse them directly on the new server. Instead, please refer to the official Wazuh migration documentation for creating and restoring central components, and apply only the required configurations.
Creating Wazuh central components:
https://documentation.wazuh.com/current/migration-guide/creating/wazuh-central-components.html
Restoring Wazuh central components:
https://documentation.wazuh.com/current/migration-guide/restoring/wazuh-central-components.html
To retain historical logs in the new environment, you may use one of the following supported approaches:
Migrate Wazuh indices using snapshots (a minimal example of the API calls involved is sketched after this list):
https://documentation.wazuh.com/current/user-manual/wazuh-indexer/migrating-wazuh-indices.html
Use the recovery.py script to decompress and reindex old logs into the new Wazuh indexer:
https://documentation.wazuh.com/current/migration-guide/restoring/wazuh-central-components.html#restoring-old-logs
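For the snapshot approach above, the workflow is to register a snapshot repository on the old indexer, take a snapshot, make the same repository available to the new indexer, and restore the Wazuh indices there. A minimal sketch of the Dev Tools calls, assuming a filesystem repository at the example path /mnt/wazuh-snapshots (the path must be listed under path.repo in opensearch.yml; follow the linked guide for the full procedure):

PUT _snapshot/wazuh_migration
{ "type": "fs", "settings": { "location": "/mnt/wazuh-snapshots" } }

PUT _snapshot/wazuh_migration/pre_migration?wait_for_completion=true

POST _snapshot/wazuh_migration/pre_migration/_restore
{ "indices": "wazuh-alerts-*" }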
Optionally, if you want agents to keep communicating with the same IDs and keys, stop the Wazuh manager service on the new server, copy the /var/ossec/etc/client.keys file from the current production server to the new server, and then start the manager again:
systemctl stop wazuh-manager
systemctl start wazuh-manager
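The copy itself can be done with any secure method; for example, from the new server (assuming root SSH access to the old manager at 192.168.10.14):

scp root@192.168.10.14:/var/ossec/etc/client.keys /var/ossec/etc/client.keys

Run this between the stop and start commands above, and verify afterwards that the file keeps the same ownership and permissions as the rest of /var/ossec/etc (ls -l /var/ossec/etc/client.keys).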
Option 1: Change the IP address of the new production Wazuh to 192.168.10.14
Pros: No changes are needed to the agent configuration.
Cons / Actions Required:
Recreate the certificates with the new IP and deploy them on the new server.
Update the IP address in the following configuration files (refer to this document for details); typical entries are shown after the list:
/etc/wazuh-indexer/opensearch.yml
/etc/filebeat/filebeat.yml
/var/ossec/etc/ossec.conf
/etc/wazuh-dashboard/opensearch_dashboards.yml
/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml
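For reference, the host-related entries in those files typically look like the following once the new server answers on 192.168.10.14 (exact keys can differ between Wazuh versions, so compare against the values already present before editing, and restart the corresponding services afterwards):

/etc/wazuh-indexer/opensearch.yml: network.host: "192.168.10.14"
/etc/filebeat/filebeat.yml: hosts: ["192.168.10.14:9200"] (under output.elasticsearch)
/var/ossec/etc/ossec.conf: <host>https://192.168.10.14:9200</host> (inside the <indexer><hosts> block)
/etc/wazuh-dashboard/opensearch_dashboards.yml: opensearch.hosts: ["https://192.168.10.14:9200"]
/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml: url: https://192.168.10.14 (the Wazuh API entry, default port 55000)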
Option 2: Update and repoint all agents to the new Wazuh manager at 192.168.10.15
Pros: No need to change the IP on the server. Cleaner from a network perspective.
Cons / Actions Required:
Replace client.keys on the new server if you want to keep the same agent IDs and keys as mentioned above.
Update all agent configurations to point to the new Wazuh manager IP, as shown below.
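On each Linux agent, for example, repointing means editing the manager address in /var/ossec/etc/ossec.conf and restarting the agent service (the equivalent change applies to Windows and macOS agents):

<client>
  <server>
    <address>192.168.10.15</address>
    <port>1514</port>
    <protocol>tcp</protocol>
  </server>
</client>

systemctl restart wazuh-agent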
Both approaches are technically valid. From a stability, security, and operational perspective, the preferred approach is Option 2—updating agents to point to the new server—if you do not want to reuse the same IP. This avoids the risk of IP conflicts and ensures a cleaner certificate and configuration setup.
I hope it helps. Please let us know if you have any further questions or concerns.
Regards,
Hi Ismail,
I tried option 1 and checked the SSL, ASN, and host configuration in:
/etc/wazuh-indexer/opensearch.yml
/etc/filebeat/filebeat.yml
/var/ossec/etc/ossec.conf
/etc/wazuh-dashboard/opensearch_dashboards.yml
/usr/share/wazuh-dashboard/data/wazuh/config/wazuh.yml
I noticed that the host was still set to localhost / 127.0.0.1, so I changed it from the old server to the new server.
After that, I checked the agents and they are all showing as active. I also checked the logs in Discovery → Archives / Alerts, and I can see logs from the agents appearing there.
Does this mean the migration is successful, or is there anything else I should double-check to be sure everything is properly migrated?
Regards,
Robby
Thank you for the confirmation.
Based on what you have described, the migration looks successful: the agents are connected to the new manager and are actively sending logs. Since there is no requirement to migrate historical logs, alerts, or index data from the old manager, your primary objective has been achieved.
As an additional validation step, you may verify the functionality of the Wazuh Dashboard modules such as Threat Hunting, Vulnerability Detection, File Integrity Monitoring, and others. You can refer to the following documentation for guidance on navigating and validating these sections of the dashboard: https://documentation.wazuh.com/current/user-manual/wazuh-dashboard/navigating-the-wazuh-dashboard.html
Additionally, it is recommended to:
Monitor the manager logs for any warnings or errors:
tail -f /var/ossec/logs/ossec.log | grep -iE "error|warn"
Check the indexer cluster health from the dashboard by navigating to:
Menu → Indexer Management → Dev Tools, and running:
GET _cluster/health
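A healthy cluster typically reports "status": "green" (a single-node indexer may show "yellow" when replica shards cannot be allocated). To confirm that all ~20 agents reconnected to the new manager, you can also list the active agents from the manager itself:

/var/ossec/bin/agent_control -lc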