Llama 3 model deployment with error


Somer Rabee

Sep 14, 2025, 7:14:25 AM
to Wazuh | Mailing List

Hi all,

I have a working Wazuh cluster consisting of:

  • Wazuh Indexer: 3 nodes

  • Wazuh Manager: 3 nodes

  • Version: 4.12 (OpenSearch 2.19)

Additionally, I have a running instance of the Llama 3 AI model via Ollama. All nodes are within the same LAN (192.168.11.0/24).

I am attempting to deploy Llama 3 using the OpenSearch Machine Learning plugin and map the deployed model to the OpenSearch Assistant chatbot.

Using Dev Tools, the deployment completed successfully. However, when testing the deployed model, I encountered the following error:

"Remote inference host name has private IP address"

After investigating, I found that OpenSearch blocks access to private IPs even if a DNS name is used.

Could you please advise on the recommended method from Wazuh for deploying local AI models in OpenSearch ML and mapping them to the OpenSearch Chatbot?

Best regards,

hasitha.u...@wazuh.com

Sep 15, 2025, 12:01:21 AM
to Wazuh | Mailing List
Hi Somer,

Could you let me know which document you followed to integrate Wazuh with Llama 3?

I recommend checking out our latest blog on configuring the infrastructure needed for AI-powered threat hunting with an LLM. In it, we set up Ollama to run the Llama 3 model directly on the Wazuh server.
Ref: https://wazuh.com/blog/leveraging-artificial-intelligence-for-threat-hunting-in-wazuh/
This integration takes effect on the Wazuh server side through the integrator module.

You can learn more about how to configure the integrator module by following this documentation.
Ref: https://documentation.wazuh.com/current/user-manual/manager/integration-with-external-apis.html

You can also check this guide to set up alert enrichment using LLMs.
Ref: https://documentation.wazuh.com/current/proof-of-concept-guide/leveraging-llms-for-alert-enrichment.html
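For context, a custom integration configured through the integrator module takes roughly this shape in `/var/ossec/etc/ossec.conf`. This is an illustrative sketch, not a configuration from this thread: the integration name, hook URL, and alert level below are placeholders, and custom integration scripts must be placed in `/var/ossec/integrations` with a `custom-` name prefix:

```xml
<integration>
  <!-- Placeholder name: custom scripts must be prefixed with "custom-" -->
  <name>custom-llm</name>
  <!-- Placeholder endpoint: here, a local Ollama API -->
  <hook_url>http://localhost:11434/api/generate</hook_url>
  <!-- Only forward alerts of this level or higher -->
  <level>10</level>
  <alert_format>json</alert_format>
</integration>
```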

Let me know if you encounter any issues while following the blog post.

Somer Rabee

Sep 15, 2025, 4:41:46 AM
to Wazuh | Mailing List

Hi Hasitha,

Thank you for your kind response.

I have followed all of the blogs you mentioned and successfully managed to run the Llama3 AI model on the Wazuh server using the provided script. In addition, I followed this blog:

🔗 Leveraging Claude Haiku in the Wazuh Dashboard for LLM-powered insights

This article explains how to connect an external AI model and integrate it with the Wazuh Dashboard through the OpenSearch Assistant Chatbot.

I am trying to achieve a similar setup—mapping our locally hosted Llama3 model to our locally hosted Wazuh cluster, and then directly integrating it into the Wazuh Dashboard via the OpenSearch Assistant Chatbot.

To do this, I enabled the OpenSearch ML plugins and OpenSearch Assistant Chatbot, and deployed the Llama3 model using Wazuh Dashboard Dev Tools. Deployment was successful, but when testing with the following request:

POST _plugins/_ml/models/XBqeR5kBhG8ZkElcbR1W/_predict?pretty
{
  "parameters": {
    "prompt": "who are you?",
    "max_tokens": 100
  }
}

I received the error:

"Remote inference host name has private IP address: <Local-IP>"

Upon further research, I found that OpenSearch ML plugins block connections to private IP addresses (hardcoded restriction).

Could you please advise if there is any possible workaround to bypass this limitation? If not, what would be the best method recommended by Wazuh to connect a locally hosted AI model directly to the Wazuh Dashboard via the OpenSearch Assistant Chatbot?

Best regards,


Somer Rabee

Sep 17, 2025, 4:15:15 AM
to Wazuh | Mailing List
Any suggestions?

hasitha.u...@wazuh.com

Sep 20, 2025, 7:07:26 AM
to Wazuh | Mailing List
Hi Somer,

Please allow me some time; I am checking this issue internally. I’ll get back to you with my findings.  

Somer Rabee

Sep 21, 2025, 6:51:17 AM
to Wazuh | Mailing List
Hi Hasitha,

Thank you for your efforts.
Take your time; I'll wait for your update.

Thanks in advance.

hasitha.u...@wazuh.com

Sep 23, 2025, 5:38:13 AM
to Wazuh | Mailing List
Hi Somer,

I found the issue reported on the OpenSearch GitHub. You can refer to this:
https://github.com/opensearch-project/ml-commons/issues/2142

The error seems to come from these lines, which sit within a conditional. I am not sure whether there is a way to avoid the condition. You could try using a hostname instead of an IP, or ensure the address does not satisfy the condition to be considered a private IP:
https://github.com/opensearch-project/ml-commons/blob/2.19.2.0/ml-algorithms/src/main/java/org/opensearch/ml/engine/httpclient/MLHttpClientFactory.java#L79-L108

After reading the code, it seems the following addresses are considered private:
127.0.0.1 (loopback)
10.x.y.z
172.[16 to 31].x.y
192.168.x.y

I guess if the configured IP does not fall in these ranges, it should not display the mentioned message.
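As a quick way to check whether a given host will trip that condition, here is a small sketch using Python's standard `ipaddress` module. It only approximates the Java check in `MLHttpClientFactory`: Python's `is_private` also flags a few extra reserved ranges (link-local, shared address space), so it is slightly broader than the conditional linked above.

```python
import ipaddress

def is_private_host(ip_str: str) -> bool:
    """Approximate the private-address check in MLHttpClientFactory.

    Covers loopback plus the RFC 1918 ranges listed above
    (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
    """
    addr = ipaddress.ip_address(ip_str)
    return addr.is_loopback or addr.is_private

# Addresses from the ranges above are flagged; public addresses pass:
for ip in ("127.0.0.1", "10.0.0.5", "172.16.1.1", "192.168.11.20", "8.8.8.8"):
    print(ip, "->", "private" if is_private_host(ip) else "public")
```

Note that the whole 192.168.11.0/24 LAN mentioned earlier in the thread falls inside 192.168.0.0/16, so any host on it will be rejected while the restriction is in place.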

Let me know the update on this.

Somer Rabee

Sep 25, 2025, 3:46:41 AM
to Wazuh | Mailing List

Hi Hasitha,

Thank you for your feedback.

I understand that the error is caused by OpenSearch restrictions. I was mainly seeking guidance on possible workarounds for this limitation.

I’ll continue working on this, and if I manage to resolve it, I’ll share the solution here.

Best regards,


hasitha.u...@wazuh.com

Sep 27, 2025, 12:11:05 AM
to Wazuh | Mailing List
Hi Somer,

Please do share your solution here if you manage to resolve it — that would be very helpful for others facing the same issue. Meanwhile, if I come across any effective workaround, I’ll update you as well.

Somer Rabee

Nov 19, 2025, 8:12:13 AM
to Wazuh | Mailing List
Hi all,

After a lot of research, I have solved this issue using several methods. The best of them is to configure the cluster as follows:

PUT /_cluster/settings
{
  "persistent": {
    "plugins": {
      "ml_commons": {
        "only_run_on_ml_node": "false",
        "model_access_control_enabled": "true",
        "native_memory_threshold": "99",
        "allow_registering_model_via_local_file": "true",
        "connector": {
          "private_ip_enabled": "true"
        }
      }
    }
  }
}
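The key switch here is `plugins.ml_commons.connector.private_ip_enabled`, which lifts the private-address restriction discussed above. For anyone applying this outside Dev Tools, the same update can be pushed with curl. The host, port, and credentials below are placeholders for your own indexer, and `-k` skips TLS verification for self-signed certificates:

```shell
# Placeholder host and credentials; adjust for your cluster
curl -sk -u admin:<password> -X PUT "https://<indexer-host>:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"plugins.ml_commons.connector.private_ip_enabled": "true"}}'
```

Once applied, the `_predict` request from earlier in the thread should no longer be rejected with the "Remote inference host name has private IP address" error.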


Best regards.
