Alertmanager sending duplicate alerts to webhook receiver


Nabarun Sen

Jul 29, 2020, 3:49:06 AM
to Prometheus Users
Hi All,

I need help: my Alertmanager is sending duplicate alerts (3 notifications for 1 alert) to the webhook receiver when the Alertmanager replica count is set to 3, but with 1 replica it sends a single notification.

Please let me know if anyone has faced a similar issue.


Alertmanager configuration:

global:
  resolve_timeout: 5m
route:
  receiver: 'webhook_receiver'
  group_wait: 30s
  group_interval: 10h
  repeat_interval: 24h
  routes:
  - receiver: 'webhook_receiver'
    match_re:
      alertname: NodedownAlert|ServiceDown
    group_wait: 30s
    group_interval: 10h
    repeat_interval: 24h
    group_by: ['description']
receivers:
  - name: webhook_receiver
    webhook_configs:
      - url: 'http://prometheus-webhook.monitoring:8080/v1/webhook?group=operation'
        send_resolved: true

templates:
  - '/etc/alertmanager/config/notification.tmpl'

Thanks
Nabarun Sen

Stuart Clark

Jul 29, 2020, 4:06:06 AM
to Nabarun Sen, Prometheus Users
It sounds like your replicas aren't meshing with each other. Do the command lines for each one include the IP addresses or DNS names of the other replicas?
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
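
A quick way to check meshing (a sketch, not taken from this thread) is each replica's /api/v2/status endpoint on port 9093, which reports the peers that replica can see. Parsing a sample response; the payload below is illustrative, not captured from this cluster:

```python
import json

# Illustrative /api/v2/status payload; field names follow Alertmanager's v2
# API, but the peer entries here are made up for the example.
payload = """
{
  "cluster": {
    "name": "01EECSST58XVVR0QXH1QWHB1XK",
    "status": "ready",
    "peers": [
      {"name": "01EECSST58XVVR0QXH1QWHB1XK", "address": "10.233.110.73:9094"}
    ]
  }
}
"""

cluster = json.loads(payload)["cluster"]
expected_replicas = 3
peer_count = len(cluster["peers"])
print(f"status={cluster['status']} peers={peer_count}/{expected_replicas}")
if peer_count < expected_replicas:
    print("not fully meshed: each replica will notify the webhook on its own")
```

In a healthy 3-replica cluster the peers list should contain all three members; a list with only the replica itself means every replica notifies independently.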

Nabarun Sen

Jul 29, 2020, 4:19:46 AM
to Prometheus Users
I have three replicas now for Alertmanager:

NAME                                                      READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
alertmanager-prometheus-operator-alertmanager-0           2/2     Running   0          113s   10.233.110.73    worker1   <none>           <none>
alertmanager-prometheus-operator-alertmanager-1           2/2     Running   0          109s   10.233.103.120   worker2   <none>           <none>
alertmanager-prometheus-operator-alertmanager-2           2/2     Running   0          109s   10.233.110.72    worker1   <none>           <none>

[root@ansible-node .kube]# kubectl get svc -n ns-prometheus-lab|grep alertmanager
alertmanager-operated                          ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   11m
prometheus-operator-alertmanager               NodePort    10.233.6.218    <none>        9093:31570/TCP               11m
[root@ansible-node .kube]#


I can get a response from telnet inside one Alertmanager pod to the Alertmanager service:

[root@ansible-node prometheus-operator-qa-lab-final]# kubectl exec -it alertmanager-prometheus-operator-alertmanager-0 alertmanager -n ns-prometheus-lab -- sh
Defaulting container name to alertmanager.
Use 'kubectl describe pod/alertmanager-prometheus-operator-alertmanager-0 -n ns-prometheus-lab' to see all of the containers in this pod.
/alertmanager $ telnet 10.233.6.218:9093
Connected to 10.233.6.218:9093

Nabarun Sen

Jul 29, 2020, 4:25:53 AM
to Prometheus Users
I can resolve the pod IPs to DNS names inside the pod:

[root@ansible-node prometheus-operator-qa-lab-final]# kubectl exec -it alertmanager-prometheus-operator-alertmanager-0 alertmanager -n ns-prometheus-lab -- sh
Defaulting container name to alertmanager.
Use 'kubectl describe pod/alertmanager-prometheus-operator-alertmanager-0 -n ns-prometheus-lab' to see all of the containers in this pod.
/alertmanager $ nslookup 10.233.110.73
Server:         169.254.25.10
Address:        169.254.25.10:53

73.110.233.10.in-addr.arpa      name = alertmanager-prometheus-operator-alertmanager-0.alertmanager-operated.ns-prometheus-lab.svc.cluster.local

/alertmanager $ nslookup 10.233.103.120
Server:         169.254.25.10
Address:        169.254.25.10:53

120.103.233.10.in-addr.arpa     name = alertmanager-prometheus-operator-alertmanager-1.alertmanager-operated.ns-prometheus-lab.svc.cluster.local

/alertmanager $ nslookup 10.233.110.72
Server:         169.254.25.10
Address:        169.254.25.10:53

72.110.233.10.in-addr.arpa      name = 10-233-110-72.prometheus-operator-alertmanager.ns-prometheus-lab.svc.cluster.local

/alertmanager $
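
One detail worth noting in the reverse lookups above: pods -0 and -1 resolve to names under the alertmanager-operated headless Service, while 10.233.110.72 (pod -2) resolved via the prometheus-operator-alertmanager Service name instead. A small sketch of the per-pod FQDN pattern the headless Service should produce (Service and StatefulSet names taken from this thread):

```python
# Expected in-cluster DNS name for each StatefulSet pod under the
# alertmanager-operated headless Service (names taken from this thread).
def peer_fqdn(
    ordinal: int,
    statefulset: str = "alertmanager-prometheus-operator-alertmanager",
    headless_svc: str = "alertmanager-operated",
    namespace: str = "ns-prometheus-lab",
) -> str:
    return f"{statefulset}-{ordinal}.{headless_svc}.{namespace}.svc.cluster.local"

for i in range(3):
    print(peer_fqdn(i))
# In the nslookup output above, pods -0 and -1 match this pattern, while
# 10.233.110.72 (pod -2) resolved via a different Service name instead.
```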




Stuart Clark

Jul 29, 2020, 4:27:40 AM
to Nabarun Sen, Prometheus Users
On 2020-07-29 09:19, Nabarun Sen wrote:
> I have three replica now for alertmanager
>
> [...]

Ahh. You are using Kubernetes. Is this via Prometheus Operator or just
running Alertmanager independently?

Can you list the command line being run for each of those pods?
>> url: http://prometheus-webhook.monitoring:8080/v1/webhook?group=operation
>>
>> templates:
>> - '/etc/alertmanager/config/notification.tmpl'
>>
>> Thanks
>> Nabarun Sen

--
Stuart Clark

Nabarun Sen

Jul 29, 2020, 4:29:45 AM
to Prometheus Users
I am using Prometheus Operator:

[root@ansible-node prometheus-operator-qa-lab-final]# kubectl get pods -n ns-prometheus-lab
NAME                                                      READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-operator-alertmanager-0           2/2     Running   0          21m
alertmanager-prometheus-operator-alertmanager-1           2/2     Running   0          21m
alertmanager-prometheus-operator-alertmanager-2           2/2     Running   0          21m
prometheus-operator-kube-state-metrics-74f47948f9-x2fbd   1/1     Running   0          22m
prometheus-operator-operator-9944b44f8-tc886              2/2     Running   0          22m
prometheus-operator-prometheus-node-exporter-5hf8r        1/1     Running   0          22m
prometheus-operator-prometheus-node-exporter-7w59x        1/1     Running   0          22m
prometheus-operator-prometheus-node-exporter-98fc9        1/1     Running   0          22m
prometheus-operator-prometheus-node-exporter-q7gl9        1/1     Running   0          22m
prometheus-operator-prometheus-node-exporter-wh2ml        1/1     Running   0          22m
prometheus-prometheus-operator-prometheus-0               3/3     Running   0          21m
[root@ansible-node prometheus-operator-qa-lab-final]#





Nabarun Sen

Jul 29, 2020, 4:31:43 AM
to Prometheus Users
Command line parameters of Alertmanager:

[root@ansible-node prometheus-operator-qa-lab-final]# kubectl exec -it alertmanager-prometheus-operator-alertmanager-0 alertmanager -n ns-prometheus-lab -- sh
Defaulting container name to alertmanager.
Use 'kubectl describe pod/alertmanager-prometheus-operator-alertmanager-0 -n ns-prometheus-lab' to see all of the containers in this pod.
/alertmanager $ ps -eaf|grep alert
    1 1000      1:31 /bin/alertmanager --config.file=/etc/alertmanager/config/alertmanager.yaml --cluster.listen-address=[10.233.110.73]:9094 --storage.path=/alertmanager --data.retention=120h --web.listen-address=:9093 --web.external-url=http://prometheus-operator-alertmanager.ns-prometheus-lab:9093 --web.route-prefix=/ --cluster.peer=alertmanager-prometheus-operator-alertmanager-0.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-1.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-2.alertmanager-operated.ns-prometheus-lab.svc:9094
/alertmanager $ exit
[root@ansible-node prometheus-operator-qa-lab-final]# kubectl exec -it alertmanager-prometheus-operator-alertmanager-1 alertmanager -n ns-prometheus-lab -- sh
Defaulting container name to alertmanager.
Use 'kubectl describe pod/alertmanager-prometheus-operator-alertmanager-1 -n ns-prometheus-lab' to see all of the containers in this pod.
/alertmanager $ ps -eaf|grep alert
    1 1000      2:37 /bin/alertmanager --config.file=/etc/alertmanager/config/alertmanager.yaml --cluster.listen-address=[10.233.103.120]:9094 --storage.path=/alertmanager --data.retention=120h --web.listen-address=:9093 --web.external-url=http://prometheus-operator-alertmanager.ns-prometheus-lab:9093 --web.route-prefix=/ --cluster.peer=alertmanager-prometheus-operator-alertmanager-0.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-1.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-2.alertmanager-operated.ns-prometheus-lab.svc:9094
/alertmanager $ exit
[root@ansible-node prometheus-operator-qa-lab-final]# kubectl exec -it alertmanager-prometheus-operator-alertmanager-2 alertmanager -n ns-prometheus-lab -- sh
Defaulting container name to alertmanager.
Use 'kubectl describe pod/alertmanager-prometheus-operator-alertmanager-2 -n ns-prometheus-lab' to see all of the containers in this pod.
/alertmanager $ ps -eaf|grep alert
    1 1000      1:45 /bin/alertmanager --config.file=/etc/alertmanager/config/alertmanager.yaml --cluster.listen-address=[10.233.110.72]:9094 --storage.path=/alertmanager --data.retention=120h --web.listen-address=:9093 --web.external-url=http://prometheus-operator-alertmanager.ns-prometheus-lab:9093 --web.route-prefix=/ --cluster.peer=alertmanager-prometheus-operator-alertmanager-0.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-1.alertmanager-operated.ns-prometheus-lab.svc:9094 --cluster.peer=alertmanager-prometheus-operator-alertmanager-2.alertmanager-operated.ns-prometheus-lab.svc:9094
/alertmanager $
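
For context on why a broken mesh yields exactly triplicate notifications: in a healthy cluster each replica waits according to its position in the peer order before notifying, and skips the send if an earlier peer's notification has already been gossiped to it. A simplified sketch of that staggering (an assumption-level model of the documented HA behavior, not Alertmanager's actual source):

```python
# Simplified model of Alertmanager's HA notification staggering; this is an
# assumption-level sketch of the documented behavior, not the real code.
DEFAULT_PEER_TIMEOUT_S = 15.0  # --cluster.peer-timeout default

def notify_wait(position: int, peer_timeout: float = DEFAULT_PEER_TIMEOUT_S) -> float:
    """Replica at 0-based `position` in the peer order waits this long before
    sending, so an earlier replica's notification can be gossiped to it."""
    return position * peer_timeout

waits = [notify_wait(i) for i in range(3)]
print(waits)  # -> [0.0, 15.0, 30.0]
```

Replica 0 sends immediately; replicas 1 and 2 wait and, if gossip on port 9094 is working, see the notification log entry and stay silent. With no gossip, all three send.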




Nabarun Sen

Jul 29, 2020, 4:35:10 AM
to Prometheus Users

Pod manifest for alertmanager-prometheus-operator-alertmanager-0:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-07-29T08:07:36Z"
  generateName: alertmanager-prometheus-operator-alertmanager-
  labels:
    alertmanager: prometheus-operator-alertmanager
    app: alertmanager
    controller-revision-hash: alertmanager-prometheus-operator-alertmanager-68878cccd6
    statefulset.kubernetes.io/pod-name: alertmanager-prometheus-operator-alertmanager-0
  name: alertmanager-prometheus-operator-alertmanager-0
  namespace: ns-prometheus-lab
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: alertmanager-prometheus-operator-alertmanager
    uid: df6924a1-6657-48e6-ae0b-c55d9e8dc13a
  resourceVersion: "2567232"
  selfLink: /api/v1/namespaces/ns-prometheus-lab/pods/alertmanager-prometheus-operator-alertmanager-0
  uid: 591e85db-4d1d-4401-816b-937c6f54c5ff
spec:
  containers:
  - args:
    - --config.file=/etc/alertmanager/config/alertmanager.yaml
    - --cluster.listen-address=[$(POD_IP)]:9094
    - --storage.path=/alertmanager
    - --data.retention=120h
    - --web.listen-address=:9093
    - --web.route-prefix=/
    - --cluster.peer=alertmanager-prometheus-operator-alertmanager-0.alertmanager-operated.ns-prometheus-lab.svc:9094
    - --cluster.peer=alertmanager-prometheus-operator-alertmanager-1.alertmanager-operated.ns-prometheus-lab.svc:9094
    - --cluster.peer=alertmanager-prometheus-operator-alertmanager-2.alertmanager-operated.ns-prometheus-lab.svc:9094
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 10
      httpGet:
        path: /-/healthy
        port: web
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 3
    name: alertmanager
    ports:
    - containerPort: 9093
      name: web
      protocol: TCP
    - containerPort: 9094
      name: mesh-tcp
      protocol: TCP
    - containerPort: 9094
      name: mesh-udp
      protocol: UDP
    readinessProbe:
      failureThreshold: 10
      httpGet:
        path: /-/ready
        port: web
        scheme: HTTP
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 3
    resources:
      requests:
        memory: 200Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/alertmanager/config
      name: config-volume
    - mountPath: /alertmanager
      name: alertmanager-prometheus-operator-alertmanager-db
    - mountPath: /etc/alertmanager/secrets/alertmanager-prometheus-operator-alertmanager
      name: secret-alertmanager-prometheus-operator-alertmanager
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-operator-alertmanager-token-krvlh
      readOnly: true
  - args:
    - -webhook-url=http://127.0.0.1:9093/-/reload
    - -volume-dir=/etc/alertmanager/config
    imagePullPolicy: IfNotPresent
    name: config-reloader
    resources:
      limits:
        cpu: 100m
        memory: 25Mi
      requests:
        cpu: 100m
        memory: 25Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /etc/alertmanager/config
      name: config-volume
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: prometheus-operator-alertmanager-token-krvlh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: alertmanager-prometheus-operator-alertmanager-0
  imagePullSecrets:
  - name: image-pull-secret
  nodeName: worker1
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccount: prometheus-operator-alertmanager
  serviceAccountName: prometheus-operator-alertmanager
  subdomain: alertmanager-operated
  terminationGracePeriodSeconds: 120
  tolerations:
  - effect: NoExecute
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: config-volume
    secret:
      defaultMode: 420
      secretName: alertmanager-prometheus-operator-alertmanager
  - name: secret-alertmanager-prometheus-operator-alertmanager
    secret:
      defaultMode: 420
      secretName: alertmanager-prometheus-operator-alertmanager
  - emptyDir: {}
    name: alertmanager-prometheus-operator-alertmanager-db
  - name: prometheus-operator-alertmanager-token-krvlh
    secret:
      defaultMode: 420
      secretName: prometheus-operator-alertmanager-token-krvlh
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-07-29T08:07:40Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-07-29T08:07:47Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-07-29T08:07:47Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-07-29T08:07:40Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://ea1875bb2c42d62a3fa6e4b49bdd6c40c69bd5ac691f7e94ebaa30d9ec1feef8
    lastState: {}
    name: alertmanager
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-07-29T08:07:43Z"
  - containerID: docker://415d38df2f865a2d86a3a04486d5ba34816b10d16e1f03c3d1403342d4e5515a
    lastState: {}
    name: config-reloader
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-07-29T08:07:44Z"
  hostIP: 172.42.42.21
  phase: Running
  podIP: 10.233.110.73
  podIPs:
  - ip: 10.233.110.73
  qosClass: Burstable
  startTime: "2020-07-29T08:07:40Z"



Stuart Clark

Jul 29, 2020, 4:40:23 AM
to Nabarun Sen, Prometheus Users
It does seem to be picking up the peers, which is good. What is shown in the Peers section under "Status" in the UI?

--
Stuart Clark

Nabarun Sen

Jul 29, 2020, 4:42:38 AM
to Prometheus Users

Status

Uptime:
2020-07-29T08:07:44.259Z

Cluster Status

Name:
01EECSST58XVVR0QXH1QWHB1XK
Status:
ready
Peers:
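
The empty Peers list above is the telling symptom: each replica only sees itself, so all three notify the webhook. A common cause is the gossip port (9094, both TCP and UDP) being blocked between pods on different nodes. A hypothetical probe that could be adapted to run where the peers are reachable (the helper is illustrative, not from the thread, and assumes a Python interpreter is available):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Peer names from the --cluster.peer flags in this thread. Note that gossip
# also uses 9094/UDP, which a TCP probe like this cannot verify.
peers = [
    f"alertmanager-prometheus-operator-alertmanager-{i}"
    ".alertmanager-operated.ns-prometheus-lab.svc"
    for i in range(3)
]
for host in peers:
    state = "ok" if tcp_reachable(host, 9094) else "unreachable"
    print(f"{host}:9094/TCP {state}")
```

If any peer shows as unreachable from inside another replica's pod, check NetworkPolicies and the CNI configuration for cross-node pod traffic on 9094.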