llama3 model deployment with error


Somer Rabee

Sep 14, 2025, 7:14:25 AM (7 days ago) Sep 14
to Wazuh | Mailing List

Hi all,

I have a working Wazuh cluster consisting of:

  • Wazuh Indexer: 3 nodes

  • Wazuh Manager: 3 nodes

  • Version: 4.12 (OpenSearch 2.19)

Additionally, I have a running instance of the Llama 3 AI model via Ollama. All nodes are within the same LAN (192.168.11.0/24).
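For anyone reproducing this, the Ollama instance can be sanity-checked with a direct API call (the host IP below is a placeholder on our 192.168.11.0/24 subnet; 11434 is Ollama's default port):

```shell
# Sanity-check the local Ollama instance (host IP is a placeholder;
# 11434 is Ollama's default listening port)
curl http://192.168.11.10:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "who are you?",
  "stream": false
}'
```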

I am attempting to deploy Llama 3 using the OpenSearch Machine Learning plugin and map the deployed model to the OpenSearch Assistant chatbot.
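A connector for this setup would look roughly like the following (sketch only; the endpoint IP is a placeholder, and the request body follows Ollama's /api/generate schema — verify against the ML Commons connector blueprints for your version):

```
POST _plugins/_ml/connectors/_create
{
  "name": "Ollama Llama 3 connector",
  "description": "HTTP connector to a locally hosted Ollama instance",
  "version": 1,
  "protocol": "http",
  "parameters": {
    "endpoint": "http://192.168.11.10:11434"
  },
  "actions": [
    {
      "action_type": "predict",
      "method": "POST",
      "url": "${parameters.endpoint}/api/generate",
      "headers": { "Content-Type": "application/json" },
      "request_body": "{ \"model\": \"llama3\", \"prompt\": \"${parameters.prompt}\", \"stream\": false }"
    }
  ]
}
```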

Using Dev Tools, the deployment completed successfully. However, when testing the deployed model, I encountered the following error:

"Remote inference host name has private IP address"

After investigating, I found that OpenSearch blocks access to private IPs even if a DNS name is used.
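For anyone hitting the same error: recent ML Commons versions appear to expose cluster settings that relax this restriction (setting names taken from the OpenSearch ML Commons documentation; I have not confirmed the behavior on 2.19):

```
PUT _cluster/settings
{
  "persistent": {
    "plugins.ml_commons.connector.private_ip_enabled": true,
    "plugins.ml_commons.trusted_connector_endpoints_regex": [
      "^http://192\\.168\\.11\\..*$"
    ]
  }
}
```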

Could you please advise on the recommended method from Wazuh for deploying local AI models in OpenSearch ML and mapping them to the OpenSearch Chatbot?

Best regards,

hasitha.u...@wazuh.com

Sep 15, 2025, 12:01:21 AM (6 days ago) Sep 15
to Wazuh | Mailing List
Hi Somer,

Could you let me know which document you followed to integrate Wazuh with Llama 3?

I recommend checking out our latest blog on configuring the infrastructure needed for AI-powered threat hunting with an LLM. In it, we set up Ollama to run the Llama 3 model directly on the Wazuh server.
Ref: https://wazuh.com/blog/leveraging-artificial-intelligence-for-threat-hunting-in-wazuh/
This integration will take effect on the Wazuh server side through the integrator module.

You can learn more about how to configure the integrator module by following this documentation.
Ref: https://documentation.wazuh.com/current/user-manual/manager/integration-with-external-apis.html
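As a minimal illustration, an integrator block in ossec.conf looks like this (the script name and hook URL are placeholders; custom integration script names must start with "custom-" and the scripts live under /var/ossec/integrations):

```xml
<!-- ossec.conf fragment: run a custom integration script for every
     alert at level 10 or above (name and hook_url are illustrative) -->
<integration>
  <name>custom-llm</name>
  <hook_url>http://192.168.11.10:11434/api/generate</hook_url>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>
```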

You can also check this guide on enriching alerts using LLMs.
Ref: https://documentation.wazuh.com/current/proof-of-concept-guide/leveraging-llms-for-alert-enrichment.html

Let me know if you encounter any issues while following the blog post.

Somer Rabee

Sep 15, 2025, 4:41:46 AM (6 days ago) Sep 15
to Wazuh | Mailing List

Hi Hasitha,

Thank you for your kind response.

I have followed the blogs you mentioned and successfully ran the Llama 3 model on the Wazuh server using the provided script. In addition, I followed this blog:

🔗 Leveraging Claude Haiku in the Wazuh Dashboard for LLM-powered insights

This article explains how to connect an external AI model and integrate it with the Wazuh Dashboard through the OpenSearch Assistant Chatbot.
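For context, the dashboard-side Assistant is toggled in opensearch_dashboards.yml, with the agent framework enabled on the cluster side (setting names as documented upstream; please verify against the 4.12 / OpenSearch 2.19 packages):

```yaml
# opensearch_dashboards.yml on the Wazuh dashboard node
# (enables the OpenSearch Assistant chat UI)
assistant.chat.enabled: true

# Cluster-side prerequisite, set via Dev Tools:
#   PUT _cluster/settings
#   { "persistent": { "plugins.ml_commons.agent_framework_enabled": true } }
```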

I am trying to achieve a similar setup: mapping our locally hosted Llama 3 model to our locally hosted Wazuh cluster, and then integrating it directly into the Wazuh Dashboard via the OpenSearch Assistant Chatbot.

To do this, I enabled the OpenSearch ML plugins and the OpenSearch Assistant Chatbot, and deployed the Llama 3 model using the Wazuh Dashboard Dev Tools. Deployment was successful, but when testing with the following request:

POST _plugins/_ml/models/XBqeR5kBhG8ZkElcbR1W/_predict?pretty
{
  "parameters": {
    "prompt": "who are you?",
    "max_tokens": 100
  }
}

I received the error:

"Remote inference host name has private IP address: <Local-IP>"

Upon further research, I found that OpenSearch ML plugins block connections to private IP addresses (hardcoded restriction).

Could you please advise if there is any possible workaround to bypass this limitation? If not, what would be the best method recommended by Wazuh to connect a locally hosted AI model directly to the Wazuh Dashboard via the OpenSearch Assistant Chatbot?

Best regards,


Somer Rabee

Sep 17, 2025, 4:15:15 AM (4 days ago) Sep 17
to Wazuh | Mailing List
Any suggestions?

hasitha.u...@wazuh.com

Sep 20, 2025, 7:07:26 AM (19 hours ago) Sep 20
to Wazuh | Mailing List
Hi Somer,

Please allow me some time; I am checking this issue internally. I’ll get back to you with my findings.  