Distroless Dockerfiles for gpu and cpu


Ruoshi Sun

Aug 28, 2020, 10:06:56 AM8/28/20
to Discuss

Hi all,

I'm an HPC staff member at the University of Virginia. I prepared two distroless Dockerfiles that are equivalent (or at least meant to be) to the official gpu and cpu Dockerfiles:


The corresponding images are hosted on Docker Hub (2.3.0-distroless and 2.3.0-cpu-distroless):
(Ignore the overview - that's for 2.2.0 only.)

GPU
I used a 3-stage build:
  • "py" installs python packages
  • "lib" installs CUDA and system libraries
  • The site-packages from "py" and the necessary libraries from "lib" are copied into the production stage.
This results in an image size of 1.18 GB, 18% smaller than the official image "2.3.0-gpu" (1.44 GB).
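
The three stages above can be sketched roughly as follows. This is a hypothetical outline, not the actual Dockerfile: the base images, package versions, and library paths here are assumptions; see the linked Dockerfile for the real versions.

```dockerfile
# Hypothetical sketch of the 3-stage gpu layout; exact images and paths differ.
FROM python:3.7-slim-buster AS py
RUN pip install --no-cache-dir tensorflow==2.3.0

FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 AS lib
# Nothing to build here: this stage only provides the CUDA/cuDNN runtime libraries.

FROM gcr.io/distroless/python3-debian10
# Copy only what the production image needs: the Python packages and shared libraries.
COPY --from=py /usr/local/lib/python3.7/site-packages /usr/local/lib/python3.7/site-packages
COPY --from=lib /usr/local/cuda-10.1/lib64 /usr/local/cuda-10.1/lib64
ENV PYTHONPATH=/usr/local/lib/python3.7/site-packages \
    LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64
ENTRYPOINT ["python3"]
```

The size saving comes from the final stage starting from the minimal distroless base rather than a full distribution, so nothing from the build stages survives except what is explicitly copied.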

CPU
I used a 2-stage build:
  • "py" installs python packages. Note: instead of "pip install tensorflow" as in the official Dockerfile, I used tensorflow-cpu; the former would have resulted in an image roughly 300 MB larger.
  • The site-packages from "py" are copied into the production stage.
This results in an image size of 227 MB, 60% smaller than the official image "2.3.0" (582 MB).
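
For the cpu variant, the same idea collapses to two stages. Again a hedged sketch only; the base images and paths are assumptions, not the actual Dockerfile.

```dockerfile
# Hypothetical sketch of the 2-stage cpu layout.
FROM python:3.7-slim-buster AS py
# tensorflow-cpu instead of tensorflow keeps the installed wheel much smaller.
RUN pip install --no-cache-dir tensorflow-cpu==2.3.0

FROM gcr.io/distroless/python3-debian10
COPY --from=py /usr/local/lib/python3.7/site-packages /usr/local/lib/python3.7/site-packages
ENV PYTHONPATH=/usr/local/lib/python3.7/site-packages
ENTRYPOINT ["python3"]
```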

Limitations
  • If users need to install additional packages, they must use a pip installed outside the container (the distroless image itself has no package manager).
  • The Python version is restricted to 3.7.3, since the distroless base image uses Debian 10.

These distroless containers seem to work well on our HPC cluster, although more extensive testing by the community would be appreciated. I didn't find any related discussion/issues regarding this topic. I wonder if distroless is of interest and worth contributing to the GitHub repo.

Best,
Ruoshi

Ruoshi Sun

Aug 29, 2020, 2:47:21 PM8/29/20
to Discuss
Reorganized naming scheme. The Dockerfile for gpu is now:

Updated overview for detailed explanation of tags:

The relevant Dockerfiles for this discussion are Dockerfile.distroless and Dockerfile.cpu in:
(The other variants are for users of our HPC system where CUDA libraries can be mounted natively at runtime.)

Using this example as a benchmark:
the performance of our distroless container matches that of the official 2.3.0-gpu container.

Martin Wicke

Sep 1, 2020, 11:03:47 PM9/1/20
to Ruoshi Sun, sig-...@tensorflow.org, Discuss
We wouldn't want to add them directly to the repo; it already contains too many things.

+sig-...@tensorflow.org what would be a good place?

--
You received this message because you are subscribed to the Google Groups "Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss+u...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/discuss/0dfd6bb8-e38b-4eea-8832-19af4a723ba9n%40tensorflow.org.

Austin Anderson

Sep 3, 2020, 5:02:22 PM9/3/20
to Martin Wicke, Ruoshi Sun, sig-...@tensorflow.org, Discuss
These would be great to add to a new directory in https://github.com/tensorflow/build/tree/master/images.


Ruoshi Sun

Sep 3, 2020, 6:25:32 PM9/3/20
to Austin Anderson, Martin Wicke, sig-...@tensorflow.org, Discuss

Thanks. I just tested tf-nightly 2.4 with CUDA 11 and cuDNN 8, and it works well. However, some of the library paths in CUDA 11 differ from those in CUDA 10. Does the Dockerfile have to be generic enough to work for all TF 2.x and CUDA 10.x/11?
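
One possible way to keep a single Dockerfile generic across CUDA versions is to parameterize the version, and hence the library paths, with build arguments. A sketch under assumed image tags and paths, not a definitive layout:

```dockerfile
# Hypothetical: select the CUDA version (and its library paths) at build time.
ARG CUDA_VERSION=11.0
FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-runtime-ubuntu18.04 AS lib

FROM gcr.io/distroless/python3-debian10
# An ARG declared before the first FROM must be re-declared inside a stage to be used there.
ARG CUDA_VERSION
COPY --from=lib /usr/local/cuda-${CUDA_VERSION}/lib64 /usr/local/cuda-${CUDA_VERSION}/lib64
ENV LD_LIBRARY_PATH=/usr/local/cuda-${CUDA_VERSION}/lib64
```

Built with, e.g., `docker build --build-arg CUDA_VERSION=10.1 .` to target the older layout, assuming a matching nvidia/cuda tag exists for each version.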

 

https://github.com/uvarc/rivanna-docker/blob/master/tensorflow/2.4.0/Dockerfile.distroless

docker pull uvarc/tensorflow:2.4.0-distroless (We will overwrite this when 2.4 is released. This image is for testing purposes only.)

Austin Anderson

Sep 3, 2020, 7:06:28 PM9/3/20
to Ruoshi Sun, Martin Wicke, sig-...@tensorflow.org, Discuss
SIG Build's repo is for community build content with no consistency guarantees, so as long as you explain the Dockerfiles and your intentions in a README, it's no problem.

You could also simply add a link to your own repo in the images directory's README, which would still share your work without needing to keep pushing to the SIG Build repo if you need to make updates. It depends on your preference.

Ruoshi Sun

Sep 4, 2020, 9:22:57 AM9/4/20
to Austin Anderson, Martin Wicke, sig-...@tensorflow.org, Discuss

Thanks again for the suggestion. I guess I’ll add a link in the README, since we may not build images for every single version.

Ruoshi Sun

Sep 8, 2020, 7:32:42 AM9/8/20
to Discuss, Martin Wicke, sig-...@tensorflow.org, Discuss, Austin Anderson
Submitted a PR