GCE snapshot wrong snapshot size

Stevo Novkovski

Dec 27, 2016, 1:10:22 PM
to gce-discussion
Hello,

I have a 100GB disk with about 50GB free, but when I take a snapshot I get:

Disk size   Snapshot size
100GB          76.5GB


How can the snapshot be 76.5GB when only 50GB of the disk is used?


df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       100G   50G   51G  50% /
devtmpfs        2.0G     0  2.0G   0% /dev
tmpfs           2.0G  2.7M  2.0G   1% /dev/shm
tmpfs           2.0G  205M  1.8G  11% /run
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs           395M     0  395M   0% /run/user/0

This is the first snapshot of the disk.

Faizan (Google Cloud Support)

Dec 29, 2016, 4:43:12 PM
to gce-discussion
Hello Stevo,

I was not able to reproduce the issue with my test instance. To troubleshoot further, could you provide the following information:
1. Your project ID (you can send it through a private message).
2. How did you create the snapshot: through the UI, gcloud, or the API?
3. Was the snapshot created while the disk was in use?
4. Was the disk created from a previous snapshot?
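For reference, you can also check the reported sizes from the command line (a sketch, assuming the Cloud SDK is installed; SNAPSHOT_NAME is a placeholder for your snapshot's name):

# Show the source disk size and the actual bytes stored for the snapshot:
gcloud compute snapshots describe SNAPSHOT_NAME \
    --format="value(diskSizeGb,storageBytes)"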

Looking forward to your response.

Faizan

Stevo Novkovski

Jan 1, 2017, 8:41:41 PM
to gce-discussion
I sent you a private message.

Stevo Novkovski

Jan 1, 2017, 8:45:13 PM
to gce-discussion
Keep in mind that these are snapshots of two DIFFERENT DISK ENTITIES; the disk was re-created with the same DISK NAME, in this case "disk-5859b89ff911c928100f6636".
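If it helps to tell the two apart, each snapshot records exactly which disk entity it came from (a sketch, assuming the Cloud SDK; SNAPSHOT_NAME is a placeholder):

# sourceDiskId differs even when two disks were created with the same name:
gcloud compute snapshots describe SNAPSHOT_NAME \
    --format="value(sourceDisk,sourceDiskId)"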


Faizan (Google Cloud Support)

Jan 9, 2017, 11:10:51 AM
to gce-discussion
I did some further research on this issue. I think the key thing to know is that snapshots happen at the *block device* level, not at the *file system* level. File systems do all kinds of funny things, such as:
A. Marking blocks as deleted internally instead of passing the deletion down to the device level (to avoid the overhead of zeroing blocks).
B. Copying data on write (to allow recovery from a crash during a write).
C. Having overhead for metadata.
D. Allowing blocks to be partly empty (but the whole block still counts as used at the device level).
etc.

So while the filesystem may know that certain blocks are free for re-use, this is never passed down to the device level, so those blocks are still included in the snapshot.
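A quick way to see this effect (a hypothetical demo, assuming an ext4 filesystem mounted without the discard option and roughly 10GB of free space):

# Write a 10GB file, then delete it; df immediately shows the space as free again:
dd if=/dev/zero of=/tmp/bigfile bs=1M count=10240
rm /tmp/bigfile
df -h /
# Without discard, the device never learns those blocks were freed,
# so the next snapshot still includes the "deleted" 10GB.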

You can try setting up discard (aka TRIM), which may help here, as it makes Linux pass more information from the filesystem level down to the device level. There is some info on this at https://cloud.google.com/compute/docs/disks/performance#optimizing_persistent_disks. Note that enabling discard won't immediately make the snapshot size go down; it only applies to blocks freed in the future. Enabling it is also a best practice for taking snapshots.
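For example, on a typical Linux guest you could enable it like this (a sketch, assuming the persistent disk holds the ext4 root filesystem on /dev/sda1; adjust the device and mount point for your setup):

# Remount with the discard option so future deletions reach the device:
sudo mount -o remount,discard /

# Make it persistent across reboots by adding "discard" to the options in /etc/fstab:
#   /dev/sda1   /   ext4   defaults,discard   0   2

# Optionally trim blocks that were already freed before discard was enabled:
sudo fstrim /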

I hope it helps.

Faizan

Stevo Novkovski

Jan 9, 2017, 7:01:31 PM
to gce-discussion
Mounting my SSD persistent disk with "discard" fixed this 'problem'.
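For anyone following along, you can confirm the option is active (assuming the disk is mounted at /):

# Print the active mount options and look for "discard":
findmnt -no OPTIONS /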

Paolo Mainardi

Feb 17, 2017, 4:56:51 AM
to gce-discussion
I am experiencing the same problem, but in a different scenario: the disk is bound to a k8s pod, and mounting/remounting/formatting is out of my control.

[screenshot: snapshot details showing a 93.05GB snapshot size]

gke-spark-op-services-default-pool-a644826a-s1uc paolomainardi # du -chs /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gitlab-data
9.5G /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gitlab-data
9.5G total

As you can see from the screenshot above, the snapshot size is still 93.05GB, even though du reports only 9.5GB in use.
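One thing I may try is trimming the free blocks from the node itself (a sketch on my side, assuming fstrim is available on the GKE node; the mount path is the one from the du output above):

# Discard all free blocks on the pod's persistent disk in one pass:
sudo fstrim -v /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/gitlab-data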

Any kind of help is much appreciated :)
P.