Logs not appearing in Kibana


Marc Bonoan

Jul 26, 2021, 3:45:55 PM
to Wazuh mailing list
I set up a custom decoder, and it is parsing correctly in logtest, which reports that an alert will be generated.

Here is a sample of the log which I got from checking archive.log

2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}
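For reference, the event is a syslog-style prefix (timestamp + program name) followed by a JSON payload. A short Python sketch, purely illustrative of the structure and not how Wazuh itself is implemented, splits it the same way the pre-decoder does:

```python
import json
import re

# The sample event above: syslog-style prefix followed by a JSON payload.
sample = ('2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS",'
          '"timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - '
          '[26/Jul/2021:19:30:06 +0000] \\"GET /api/ping HTTP/1.1\\" 200 4 '
          '\\"-\\" \\"ELB-HealthChecker/2.0\\"","environment":"development",'
          '"meta":{"environment":"development"}}')

# Split timestamp + program name from the JSON body (mirrors pre-decoding).
m = re.match(r'^(\d{4} \w{3} \d+ [\d:]+) (\w+): (.*)$', sample)
timestamp, program_name, payload = m.groups()
event = json.loads(payload)

print(program_name, event["level"], event["environment"])  # EXPRESS debug development
```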

Here is the result of logtest


**Phase 1: Completed pre-decoding.
       full event: '2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}  '
       timestamp: '(null)'
       hostname: 'ip-172-31-15-155'
       program_name: '(null)'
       log: '2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}  '

**Phase 2: Completed decoding.
       No decoder matched.

**Phase 3: Completed filtering (rules).
       Rule id: '100200'
       Level: '0'
       Description: 'marctest logs - Parent'
2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}


**Phase 1: Completed pre-decoding.
       full event: '2021 Jul 26 19:30:06 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}'
       timestamp: '2021 Jul 26 19:30:06'
       hostname: 'ip-172-31-15-155'
       program_name: 'EXPRESS'
       log: '{"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 26 19:30:06","message":"172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"ELB-HealthChecker/2.0\"","environment":"development","meta":{"environment":"development"}}'

**Phase 2: Completed decoding.
       decoder: 'express_dev'
       level: 'debug'
       label: 'EXPRESS'
       timestamp: '2021 Jul 26 19:30:06'
       message: '172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] "GET /api/ping HTTP/1.1" 200 4 "-" "ELB-HealthChecker/2.0"'
       environment: 'development'
       meta.environment: 'development'
       srcip: '172.31.20.154'
       meta.request_date: '26/Jul/2021:19:30:06'
       meta.request_method: 'GET'
       meta.request_url: '/api/ping'
       meta.request_protocol: 'HTTP/1.1'
       meta.response_code: '200'
       meta.response_size: '4'
       meta.user_agent: 'ELB-HealthChecker/2.0'

**Phase 3: Completed filtering (rules).
       Rule id: '100201'
       Level: '3'
       Description: 'Dev EXPRESS Logs'
**Alert to be generated.
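The srcip and meta.* fields in Phase 2 come from the custom regex over the access-log line embedded in the message field. As a rough cross-check, the pattern below is my own approximation of a combined-log-format extraction, not the actual decoder regex:

```python
import re

# Access-log line embedded in the "message" field above
message = ('172.31.20.154 - - [26/Jul/2021:19:30:06 +0000] '
           '"GET /api/ping HTTP/1.1" 200 4 "-" "ELB-HealthChecker/2.0"')

# Approximation of the decoder's extraction (combined log format)
pattern = (r'^(?P<srcip>\S+) \S+ \S+ '
           r'\[(?P<request_date>[^\]\s]+) [^\]]+\] '
           r'"(?P<request_method>\S+) (?P<request_url>\S+) (?P<request_protocol>[^"]+)" '
           r'(?P<response_code>\d+) (?P<response_size>\d+) '
           r'"[^"]*" "(?P<user_agent>[^"]*)"$')

fields = re.match(pattern, message).groupdict()
print(fields["srcip"], fields["request_method"], fields["response_code"])  # 172.31.20.154 GET 200
```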


It seems like everything is working but I do not see the logs in Kibana.

Thank you


Marc Bonoan

Jul 26, 2021, 4:33:53 PM
to Wazuh mailing list
I can see the field when I add it as a filter in Kibana, but no entries appear. I created this field by editing the regex in the decoder.


Screenshot_1.png

Marc Bonoan

Jul 26, 2021, 4:40:04 PM
to Wazuh mailing list
It is also showing up in alerts.log

elw...@wazuh.com

Jul 27, 2021, 2:48:00 AM
to Wazuh mailing list
Hello Marc,

Can you please share with me the following:

  • An example of the alert from alerts.json file

  • Check if Filebeat is reading the alerts: lsof /var/ossec/logs/alerts/alerts.json

  • Filebeat can reach and connect to Elasticsearch: filebeat test output

  • Run the commands shown in the Kibana Dev Tools:
    image (101).png


Regards,
Wali

elw...@wazuh.com

Jul 29, 2021, 8:36:55 AM
to Wazuh mailing list

Hello Marc,

I have just crafted a very simple decoder (one that does not perform any parsing) and rule to test your log, and it reaches Elasticsearch/Kibana correctly:

Decoder:

<decoder name="express">
  <program_name>EXPRESS</program_name>
</decoder>


Rule:

<group name="test">
  <rule id="111254" level="3">
    <decoded_as>express</decoded_as>
    <description>test express alert</description>
  </rule>
</group>



Sending the log:

echo '2021 Jul 27 00:48:43 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 27 00:48:43","message":"46.165.195.139 - - [27/Jul/2021:00:48:43 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)\"","environment":"development","meta":{"environment":"development"}}' >> /var/log/messages

Result:

image (102).png



That said, regarding your use case: the fact that the alert is generated in alerts.json and Filebeat is reading it correctly suggests that, for some reason, it is being dropped between Filebeat and Elasticsearch. Can you please share the following:

  • If you are using Logstash (although it is not required): cat /var/log/logstash/logstash-plain.log | grep -i -E "error|warn"
  • Elasticsearch logs: cat /var/log/elasticsearch/elasticsearch.log | grep -i -E "error|warn|critical|fatal" (or your <cluster-name>.log if you are not using the default cluster name)
  • Double-check the alert is indeed in alerts.json: grep -i pingdom /var/ossec/logs/alerts/alerts.json
  • Any further information about your environment would be helpful.

On another note, please delete your previous message containing the AWS alert, as it reveals information that you may not want to share publicly (always make sure to omit sensitive data).

Regards,
Wali
On Wednesday, July 28, 2021 at 5:50:11 PM UTC+2 marc....@performanceadvantage.ca wrote:

Jul 28 15:46:44 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:44Z","tags":["info","plugins-system"],"pid":24344,"message":"Setting up [49] plugins: [opendistroAlertingKibana,u
Jul 28 15:46:44 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:44Z","tags":["info","savedobjects-service"],"pid":24344,"message":"Waiting until all Elasticsearch nodes are comp
Jul 28 15:46:44 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:44Z","tags":["info","savedobjects-service"],"pid":24344,"message":"Starting saved objects migrations"}
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["info","plugins-system"],"pid":24344,"message":"Starting [49] plugins: [opendistroAlertingKibana,usa
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["error","elasticsearch","data"],"pid":24344,"message":"[ResponseError]: Response Error"}
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["error","elasticsearch","data"],"pid":24344,"message":"[ResponseError]: Response Error"}
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["error","plugins","wazuh","initialize"],"pid":24344,"message":"Response Error"}
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["error","plugins","wazuh","initialize"],"pid":24344,"message":"Response Error"}
Jul 28 15:46:45 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:45Z","tags":["listening","info"],"pid":24344,"message":"Server running at https://0.0.0.0:443"}
Jul 28 15:46:46 ip-x-x-x-x kibana[24344]: {"type":"log","@timestamp":"2021-07-28T15:46:46Z","tags":["info","http","server","Kibana"],"pid":24344,"message":"http server running at https://0.0.0.0:443"}


On Wednesday, July 28, 2021 at 7:14:47 AM UTC-4 Marc Bonoan wrote:
Anything else I could check? I have also done a complete restart of the server. Disk usage and resources on the server are fine as well.

On Tuesday, July 27, 2021 at 9:23:32 AM UTC-4 Marc Bonoan wrote:
Other logs in alerts.log are showing up in Kibana, like this one:

Rule: 80202 (level 3) -> 'AWS Cloudtrail: wafv2.amazonaws.com - UpdateIPSet.'
{"integration": "aws", "aws": {"log_info": {"aws_account_alias": "", "log_file": "AWSLogs/217785959066/CloudTrail/ca-central-1/2021/07/27/217785959066_CloudTrail_ca-central-1_20210727T1305Z_WR0drDlZPOBctxag.json.gz", "s3bucket": "aws-cloudtrail-logs-217785959066-39b5d175"}, "eventVersion": "1.08", "userIdentity": {"type": "AssumedRole", "principalId": "AROATFNIOAKNMNE4TZYJY:AWSWAFSecurityAutomations-ALB-LogParser-JXYNGQDYI153", "arn": "arn:aws:sts::217785959066:assumed-role/AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR/AWSWAFSecurityAutomations-ALB-LogParser-JXYNGQDYI153", "accountId": "217785959066", "accessKeyId": "ASIATFNIOAKND6UQK7R2", "sessionContext": {"sessionIssuer": {"type": "Role", "principalId": "AROATFNIOAKNMNE4TZYJY", "arn": "arn:aws:iam::217785959066:role/AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR", "accountId": "217785959066", "userName": "AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR"}, "webIdFederationData": {}, "attributes": {"creationDate": "2021-07-27T11:52:17Z", "mfaAuthenticated": "false"}}}, "eventTime": "2021-07-27T13:02:22Z", "eventSource": "wafv2.amazonaws.com", "eventName": "UpdateIPSet", "awsRegion": "ca-central-1", "sourceIPAddress": "99.79.127.250", "userAgent": "Boto3/1.17.42 Python/3.8.10 Linux/4.14.231-180.360.amzn2.x86_64 exec-env/AWS_Lambda_python3.8 Botocore/1.20.42", "requestParameters": {"name": "AWSWAFSecurityAutomations-ALBScannersProbesSetIPV6", "scope": "REGIONAL", "id": "68c42720-6b70-4661-b517-bd59ce7a145b", "description": "Block Scanners/Probes IPV6 addresses", "addresses": [], "lockToken": "b783e843-c303-41da-a71d-168c4291d89b"}, "responseElements": {"nextLockToken": "af1c23bb-e966-496d-849e-1cb0b1f47a85"}, "requestID": "162cd5ef-571a-4fba-b7a2-7b649fe50679", "eventID": "d411de6a-c008-4db9-b279-3de67365cb11", "readOnly": false, "eventType": "AwsApiCall", "apiVersion": "2019-04-23", "managementEvent": true, "recipientAccountId": "217785959066", "eventCategory": "Management", "source": "cloudtrail", "aws_account_id": "217785959066", "source_ip_address": "99.79.127.250"}}
integration: aws
aws.log_info.log_file: AWSLogs/217785959066/CloudTrail/ca-central-1/2021/07/27/217785959066_CloudTrail_ca-central-1_20210727T1305Z_WR0drDlZPOBctxag.json.gz
aws.log_info.s3bucket: aws-cloudtrail-logs-217785959066-39b5d175
aws.eventVersion: 1.08
aws.userIdentity.type: AssumedRole
aws.userIdentity.principalId: AROATFNIOAKNMNE4TZYJY:AWSWAFSecurityAutomations-ALB-LogParser-JXYNGQDYI153
aws.userIdentity.arn: arn:aws:sts::217785959066:assumed-role/AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR/AWSWAFSecurityAutomations-ALB-LogParser-JXYNGQDYI153
aws.userIdentity.accountId: 217785959066
aws.userIdentity.accessKeyId: ASIATFNIOAKND6UQK7R2
aws.userIdentity.sessionContext.sessionIssuer.type: Role
aws.userIdentity.sessionContext.sessionIssuer.principalId: AROATFNIOAKNMNE4TZYJY
aws.userIdentity.sessionContext.sessionIssuer.arn: arn:aws:iam::217785959066:role/AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR
aws.userIdentity.sessionContext.sessionIssuer.accountId: 217785959066
aws.userIdentity.sessionContext.sessionIssuer.userName: AWSWAFSecurityAutomations-ALB-LambdaRoleLogParser-197YMCD6OZ6XR
aws.userIdentity.sessionContext.attributes.creationDate: 2021-07-27T11:52:17Z
aws.userIdentity.sessionContext.attributes.mfaAuthenticated: false
aws.eventTime: 2021-07-27T13:02:22Z
aws.eventSource: wafv2.amazonaws.com
aws.eventName: UpdateIPSet
aws.awsRegion: ca-central-1
aws.sourceIPAddress: 99.79.127.250
aws.userAgent: Boto3/1.17.42 Python/3.8.10 Linux/4.14.231-180.360.amzn2.x86_64 exec-env/AWS_Lambda_python3.8 Botocore/1.20.42
aws.requestParameters.name: AWSWAFSecurityAutomations-ALBScannersProbesSetIPV6
aws.requestParameters.scope: REGIONAL
aws.requestParameters.id: 68c42720-6b70-4661-b517-bd59ce7a145b
aws.requestParameters.description: Block Scanners/Probes IPV6 addresses
aws.requestParameters.addresses: []
aws.requestParameters.lockToken: b783e843-c303-41da-a71d-168c4291d89b
aws.responseElements.nextLockToken: af1c23bb-e966-496d-849e-1cb0b1f47a85
aws.requestID: 162cd5ef-571a-4fba-b7a2-7b649fe50679
aws.eventID: d411de6a-c008-4db9-b279-3de67365cb11
aws.readOnly: false
aws.eventType: AwsApiCall
aws.apiVersion: 2019-04-23
aws.managementEvent: true
aws.recipientAccountId: 217785959066
aws.eventCategory: Management
aws.source: cloudtrail
aws.aws_account_id: 217785959066
aws.source_ip_address: 99.79.127.250

image.png


On Tue, Jul 27, 2021 at 9:00 AM Marc Bonoan <marc....@performanceadvantage.ca> wrote:
  • An example of the alert from alerts.json file
** Alert 1627348565.3521493: - test_dev_app_logs
2021 Jul 27 01:16:05 ip-180-1-15-145->Wazuh-AWS
Rule: 100201 (level 3) -> 'Dev EXPRESS Logs'
Src IP: 46.165.195.139
2021 Jul 27 00:48:43 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 27 00:48:43","message":"46.165.195.139 - - [27/Jul/2021:00:48:43 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)\"","environment":"development","meta":{"environment":"development"}}
level: debug
label: EXPRESS
timestamp: 2021 Jul 27 00:48:43
message: 46.165.195.139 - - [27/Jul/2021:00:48:43 +0000] "GET /api/ping HTTP/1.1" 200 4 "-" "Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)"
environment: development
meta.environment: development
meta.request_date: 27/Jul/2021:00:48:43
meta.request_method: GET
meta.request_url: /api/ping
meta.request_protocol: HTTP/1.1
meta.response_code: 200
meta.response_size: 4
meta.user_agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)


    • Check if Filebeat is reading the alerts: lsof /var/ossec/logs/alerts/alerts.json

      filebeat  18037  root    8r   REG  202,1 323237113 663351 /var/ossec/logs/alerts/alerts.json
      ossec-ana 18343 ossec   15w   REG  202,1 323237113 663351 /var/ossec/logs/alerts/alerts.json
    • Filebeat can reach and connect to Elasticsearch: filebeat test output

      elasticsearch: https://127.0.0.1:9200...
        parse url... OK
        connection...
          parse host... OK
          dns lookup... OK
          addresses: 127.0.0.1
          dial up... OK
        TLS...
          security: server's certificate chain verification is enabled
          handshake... OK
          TLS version: TLSv1.3
          dial up... OK
        talk to server... OK
        version: 7.10.0
    • Run the shown commands in the Kibana Dev Tools:
    • GET _cluster/allocation/explain
    {
      "index" : "security-auditlog-2021.06.24",
      "shard" : 0,
      "primary" : false,
      "current_state" : "unassigned",
      "unassigned_info" : {
        "reason" : "CLUSTER_RECOVERED",
        "at" : "2021-07-26T19:20:51.243Z",
        "last_allocation_status" : "no_attempt"
      },
      "can_allocate" : "no",
      "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
      "node_allocation_decisions" : [
        {
          "node_id" : "fJaAPxI-R_WzVSclZ-s-aw",
          "node_name" : "node-1",
          "transport_address" : "127.0.0.1:9300",
          "node_decision" : "no",
          "deciders" : [
            {
              "decider" : "same_shard",
              "decision" : "NO",
              "explanation" : "a copy of this shard is already allocated to this node [[security-auditlog-2021.06.24][0], node[fJaAPxI-R_WzVSclZ-s-aw], [P], s[STARTED], a[id=8jkFot0-Q4GJVGR3R9_XCQ]]"
            }
          ]
        }
      ]
    }
    • GET _cluster/health
    #! Deprecation: this request accesses system indices: [.kibana_1, .opendistro-anomaly-detector-jobs, .opendistro-anomaly-detectors], but in a future major version, direct access to system indices will be prevented by default
    {
      "cluster_name" : "elasticsearch",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 753,
      "active_shards" : 753,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 124,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 85.86088939566704
    }
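As a quick sanity check on these numbers, the reported active_shards_percent_as_number follows directly from the shard counts above (active over active + initializing + unassigned; a trivial sketch, not Wazuh- or Elasticsearch-specific code):

```python
# Shard counts from the _cluster/health response above
active = 753
initializing = 0
unassigned = 124

# active_shards_percent_as_number = active / total_shards * 100
total = active + initializing + unassigned
pct = active / total * 100
print(round(pct, 4))  # 85.8609, matching the reported value
```

Note that the allocation explanation above ("a copy of this shard is already allocated to this node") indicates these unassigned shards are replicas that cannot be placed on a single-node cluster, which is why the status stays yellow.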



    --
    Marc Bonoan

    IT Manager




    Marc Bonoan

    Jul 29, 2021, 9:30:32 AM
    to Wazuh mailing list
    I don't think Logstash is being used on my instance.

    This is from the Elasticsearch log. There is an error for a monitor, but I have had that since before, from testing some Open Distro alerts.


    [2021-07-29T00:00:18,855][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:00:28,856][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:00:30,105][WARN ][r.suppressed             ] [node-1] path: /_template/wazuh-agent, params: {name=wazuh-agent}
    [2021-07-29T00:00:30,234][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:00:30,251][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:00:33,375][INFO ][c.a.o.a.MonitorRunner    ] [node-1] Error running script for monitor 7sIUhnkBh3AOvupsYfvP, trigger: FsIkhnkBh3AOvupsLv79
    org.elasticsearch.script.ScriptException: compile error
    [2021-07-29T00:00:33,447][WARN ][c.a.o.a.MonitorRunner    ] [node-1] Operation failed. Retrying in 50ms.
    [2021-07-29T00:00:33,468][WARN ][c.a.o.a.MonitorRunner    ] [node-1] Operation failed. Retrying in 50ms.
    [2021-07-29T00:00:33,535][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [DefaultDispatcher-worker-2]
    [2021-07-29T00:00:38,856][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:00:48,856][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:00:55,728][WARN ][r.suppressed             ] [node-1] path: /_cat/templates/wazuh, params: {name=wazuh}
    [2021-07-29T00:00:58,857][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:00,107][WARN ][r.suppressed             ] [node-1] path: /_template/wazuh-agent, params: {name=wazuh-agent}
    [2021-07-29T00:01:00,239][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:01:00,256][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:01:08,857][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:18,858][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:25,733][WARN ][r.suppressed             ] [node-1] path: /_template/wazuh, params: {name=wazuh}
    [2021-07-29T00:01:28,858][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:30,110][WARN ][r.suppressed             ] [node-1] path: /_template/wazuh-agent, params: {name=wazuh-agent}
    [2021-07-29T00:01:30,242][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:01:30,266][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:01:33,372][INFO ][c.a.o.a.MonitorRunner    ] [node-1] Error running script for monitor 7sIUhnkBh3AOvupsYfvP, trigger: FsIkhnkBh3AOvupsLv79
    org.elasticsearch.script.ScriptException: compile error
    [2021-07-29T00:01:33,499][WARN ][c.a.o.a.MonitorRunner    ] [node-1] Operation failed. Retrying in 50ms.
    [2021-07-29T00:01:33,519][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [DefaultDispatcher-worker-2]
    [2021-07-29T00:01:34,949][WARN ][c.a.o.a.MonitorRunner    ] [node-1] Operation failed. Retrying in 50ms.
    [2021-07-29T00:01:38,859][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:48,859][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:01:58,860][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] this node is unhealthy: health check failed on [/var/lib/elasticsearch/nodes/0]
    [2021-07-29T00:02:00,110][WARN ][r.suppressed             ] [node-1] path: /_template/wazuh-agent, params: {name=wazuh-agent}
    [2021-07-29T00:02:00,247][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}
    [2021-07-29T00:02:00,277][WARN ][r.suppressed             ] [node-1] path: /wazuh-statistics-2021.31w, params: {index=wazuh-statistics-2021.31w}

    _____________

    I also checked the alerts.log for pingdom and saw multiple alerts there.

    elw...@wazuh.com

    Jul 30, 2021, 5:52:49 AM
    to Wazuh mailing list
    Hello Marc,

    I could not spot anything relevant in the previous logs that points to events being dropped. Can you please perform the following and share the logs/configuration files (by uploading them so I can review the whole content):

    • Simulate an alert:

      echo '2021 Jul 27 00:48:43 EXPRESS: {"level":"debug","label":"EXPRESS","timestamp":"2021 Jul 27 00:48:43","message":"46.165.195.139 - - [27/Jul/2021:00:48:43 +0000] \"GET /api/ping HTTP/1.1\" 200 4 \"-\" \"Pingdom.com_bot_version_1.4_(http://www.pingdom.com/)\"","environment":"development","meta":{"environment":"development"}}' >> /var/log/messages

    • Share the Filebeat logs:

      grep -i filebeat /var/log/messages >> filebeatlogs

    • Elasticsearch logs: /var/log/elasticsearch/elasticsearch.log

    • Share Wazuh manager configuration file: /var/ossec/etc/ossec.conf

    • Share the rules/decoders used for Express.
    Thanks for your collaboration.

    Regards,
    Wali