EmptyDir on local SSD?

Richard Musiol

Mar 2, 2017, 12:21:01 PM
to Kubernetes user discussion and Q&A
Hi,

I would like to use GKE's local SSD feature to have fast temporary disk space.

The problem with using it via a "hostPath" volume, as described at https://cloud.google.com/container-engine/docs/local-ssd, is that the temporary files do not get removed when the pod is deleted. Over time the local SSD would fill up.

The volume type "emptyDir" would do what I want, but I don't see how to put it on the local SSD.

Any ideas?

Cheers,
Richard

Tim Hockin

Mar 2, 2017, 12:33:30 PM
to kubernet...@googlegroups.com
There isn't a clean way to express what you want today.  There are some ideas about being able to express local storage as volumes, but that work is a long pipeline for what feels like a simple request.

We already have an idea of "medium" in emptyDir.  What if we extended that?  The question becomes how to express the multitude of potential SSD technologies, current and yet to be developed, without resorting to calling them all the same.

You could imagine a way to configure the kubelet to build a map of local mountpoints as named "Local" media, and then allow users to request those.  It's imperfect in a lot of ways, but it might be tractable.
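To make that concrete, purely hypothetically (none of this exists today), something along the lines of:

volumes:
- name: scratch
  emptyDir:
    # hypothetical values -- today the only media are "" (node default) and "Memory"
    medium: Local
    localMediumName: ssd   # hypothetical field naming a mountpoint the kubelet was configured with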

@vishh @msau this comes up not frequently, but often enough that maybe we want to think about a short-term goal here?

Just thinking out loud...

Michelle Au

Mar 2, 2017, 5:33:09 PM
to kubernet...@googlegroups.com
Hi Richard,

Are you sharing the local SSD between many pods, or just one pod per SSD?

If sharing is ok, then in the short term we could look into one of the following approaches:
1. The ability to create a GKE cluster with the kubelet installed on top of a PD-SSD.  Then all emptyDirs would use this PD.  It's not going to perform as well as a local SSD, though.
2. Use the alpha FlexVolume interface with an LVM plugin that can carve logical volumes (LVs) out of a volume group (VG) composed of local SSDs (see the sketch below).  The VGs would have to be created beforehand by some DaemonSet on each node before any normal pods start running.  This approach is meant as a short-term solution for now and requires some extra management by the user/admin.  FlexVolume itself is alpha and going through lots of revision.
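As a very rough sketch of what that DaemonSet might do (the device path /dev/sdb and the VG name are assumptions and depend on how the local SSD appears on the node):

apiVersion: extensions/v1beta1   # DaemonSet API group at the time of writing
kind: DaemonSet
metadata:
  name: local-ssd-vg-setup
spec:
  template:
    metadata:
      labels:
        app: local-ssd-vg-setup
    spec:
      containers:
      - name: setup
        image: debian:jessie           # any image where lvm2 can be installed
        securityContext:
          privileged: true             # needs raw access to the host's block devices
        command:
        - /bin/sh
        - -c
        - |
          apt-get update && apt-get install -y lvm2
          # /dev/sdb is an assumed device name for the raw local SSD
          pvcreate /dev/sdb || true
          vgcreate local-ssd /dev/sdb || true
          # keep the pod running so the DaemonSet stays healthy
          sleep infinity
        volumeMounts:
        - name: dev
          mountPath: /dev
      volumes:
      - name: dev
        hostPath:
          path: /dev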

If dedicated disks are required, then we don't have any short-term solutions besides hostPath.  The long-term solution is to expose disks as LocalDisk PVs and, for the temporary use cases, have an "inline" option where the PV gets created and destroyed with the pod.

-Michelle


Richard Musiol

Mar 3, 2017, 1:24:50 PM
to kubernet...@googlegroups.com
Hi Michelle, Hi Tim,

there is one local SSD per node, so yes, it is shared between many pods.

Solution 1 is already a step forward, but I would really like to use a local SSD. The use case is that I'm running Buildkite agents on that cluster and I want to autoscale them. The limits of a normal disk were reached very quickly with that kind of workload; switching to the local SSD solved it.

Solution 2 sounds involved, but I'm willing to be a guinea pig for this because the workload is not production critical.

-Richard

David Aronchick

Mar 3, 2017, 1:36:42 PM
to 'David Aronchick' via Kubernetes user discussion and Q&A
Is a preStop hook enough as a temp solution?
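Something like this, as a sketch (assuming each pod writes into a directory named after itself, which is up to the workload):

containers:
- name: agent
  image: my-build-agent              # placeholder image
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: scratch
    mountPath: /mnt/disks/ssd0
  lifecycle:
    preStop:
      exec:
        # best effort: runs before the container is stopped, not after it
        # exits, and not at all if the pod or node dies abruptly
        command: ["/bin/sh", "-c", "rm -rf /mnt/disks/ssd0/$POD_NAME"]
volumes:
- name: scratch
  hostPath:
    path: /mnt/disks/ssd0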


Richard Musiol

Mar 3, 2017, 1:42:21 PM
to 'David Aronchick' via Kubernetes user discussion and Q&A
I would need a *post*Stop hook. But yes, maybe the best solution right now would be to wrap the agent in some helper that, on SIGTERM, first forwards the signal to the agent and then does the cleanup before exiting itself.
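Roughly what I have in mind, written as an inline command for illustration (the agent invocation and image name are only placeholders):

containers:
- name: agent
  image: my-build-agent              # placeholder image
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  command:
  - /bin/sh
  - -c
  - |
    SCRATCH=/mnt/disks/ssd0/$POD_NAME
    mkdir -p "$SCRATCH"
    # start the real agent in the background and forward SIGTERM to it
    /usr/local/bin/agent "$SCRATCH" &   # placeholder for the actual agent command
    child=$!
    trap 'kill -TERM "$child" 2>/dev/null' TERM INT
    wait "$child"
    # a trapped signal interrupts the first wait, so wait for the child again
    trap - TERM INT
    wait "$child" 2>/dev/null || true
    # cleanup after the child has exited
    rm -rf "$SCRATCH"
  volumeMounts:
  - name: scratch
    mountPath: /mnt/disks/ssd0
volumes:
- name: scratch
  hostPath:
    path: /mnt/disks/ssd0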

Vishnu Kannan

Mar 3, 2017, 3:30:46 PM
to Kubernetes user discussion and Q&A
Can you use a memory-backed emptyDir volume?
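For reference, that would just be (sketch):

volumes:
- name: scratch
  emptyDir:
    medium: Memory   # tmpfs; data must fit in RAM and counts against memory usage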

Solution 2 described by msau@ would work in the short term, but we'd much rather invest in our existing plans for exposing local SSDs as Persistent Volumes and get that feature to alpha by v1.7, for example.
More details on the existing plans/design here.

--Vish

Rodrigo Campos

Mar 3, 2017, 3:46:17 PM
to kubernet...@googlegroups.com
A preStop hook will help, but I doubt it can fully work: if the pod crashes or something, I guess the preStop hook is not executed. But maybe you don't hit the problem often in practice.

Another ugly option might be to periodically clean it up, with a DaemonSet or a cron job on the node. Or even inotify. But that assumes you know when things can be cleaned up, and I don't know if that is the case.

A way to know (without me knowing the app layer, which may introduce more alternatives) might be to map which pod is using which directory inside the host path. If a directory doesn't belong to a running pod, then it can be erased. This could run in a DaemonSet too, asking the Kubernetes API for running pods (with directories named after pods), or using an external DB to match them. The pod name can be known inside your container, so it seems simple, and probably unlikely to collide, but not guaranteed. A rough sketch of this idea follows below.
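A rough sketch, assuming scratch directories are named after pods, the image has kubectl in it, and the pod is allowed to list pods via the API:

apiVersion: extensions/v1beta1     # DaemonSet API group at the time of writing
kind: DaemonSet
metadata:
  name: ssd-janitor
spec:
  template:
    metadata:
      labels:
        app: ssd-janitor
    spec:
      containers:
      - name: janitor
        image: my-janitor            # placeholder; needs /bin/sh and kubectl
        command:
        - /bin/sh
        - -c
        - |
          while true; do
            # names of all current pods, one per line
            kubectl get pods --all-namespaces \
              -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' > /tmp/pods
            for dir in /ssd/*; do
              [ -d "$dir" ] || continue
              # erase directories that no longer belong to a running pod
              # (note: a pod created just after the listing could lose its fresh directory)
              grep -qx "$(basename "$dir")" /tmp/pods || rm -rf "$dir"
            done
            sleep 300
          done
        volumeMounts:
        - name: ssd
          mountPath: /ssd
      volumes:
      - name: ssd
        hostPath:
          path: /mnt/disks/ssd0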

Also, if all of this fits in memory, you may just use that (an emptyDir with medium "Memory"). For sure it's faster than disk :-)

ale...@gmail.com

Mar 4, 2017, 2:31:29 PM
to Kubernetes user discussion and Q&A, ma...@richard-musiol.de
Maybe you can use a flexVolume plugin? (in bash, like the lvm example)
(https://github.com/kubernetes/kubernetes/blob/master/examples/volumes/flexvolume/README.md)
In the mount call you can check whether the SSD drive is mounted and then create a new directory, which you can remove when the unmount method is called.
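On the pod side that would look roughly like this (the driver name and option are made up; the driver script itself would implement the mount/unmount calls described in the README above):

volumes:
- name: scratch
  flexVolume:
    driver: "example/ssd-scratch"    # hypothetical driver installed on each node
    options:
      basePath: "/mnt/disks/ssd0"    # hypothetical option telling the driver where the SSD is mounted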

Richard Musiol

Mar 7, 2017, 4:35:21 PM
to Kubernetes user discussion and Q&A
I have now solved this with a wrapper that does the cleanup on SIGTERM after the child has exited.

Thanks for the suggestions, though!

Cheers,
Richard

Rodrigo Campos

Mar 7, 2017, 6:39:51 PM
to kubernet...@googlegroups.com
Sorry, I don't follow. But I'm curious, what did you do, exactly? A sidecar?

Vishnu Kannan

Mar 7, 2017, 8:52:25 PM
to Kubernetes user discussion and Q&A
Richard,

Can you describe your workflow/use-case a bit more? 
Are you having I/O starvation issues with accessing container images and/or logging as well?

--Vish

Richard Musiol

Mar 8, 2017, 7:23:19 AM
to Kubernetes user discussion and Q&A
I modified the Docker image to run a script which forwards SIGTERM to the child process and which deletes the files after the child exited.
