I see two options, which should both work for either docker images or .debs: converting to a tarball beforehand and using http_archive, or merging the tarballs with a custom repository rule. I typically do the former, so I'll talk mostly about that.
.deb files, docker images, and most other ways of packaging code are fundamentally just tarballs with metadata. For most things I build against, the metadata doesn't matter (no hook scripts that matter, etc.). This means you can just use the tarballs and ignore the rest.
`dpkg-deb --fsys-tarfile foo.deb` will extract the tarball from a .deb. From there, you can merge the tarballs together however you want (typically none of the files overlap, so you can just stick them together in any order). For example, if you look at //debian:packages.bzl in
http://robotics.mvla.net/spartanrobotics/releases/src/2019_frc971_software_20200103_final.tar.xz, there's a set of steps for automatically bundling up a package with all of its dependencies, and some rules you could start from.
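To make that merging step concrete (this is a sketch, not the packages.bzl code itself), here's roughly what it looks like in plain Python, shelling out to dpkg-deb and copying every member into one combined tarball. It assumes dpkg-deb is installed and that the packages' files don't overlap:

```python
#!/usr/bin/env python3
"""Merge the filesystem tarballs from several .debs into one tarball."""
import io
import subprocess
import sys
import tarfile

def merge_debs(deb_paths, out_path):
    with tarfile.open(out_path, "w:gz") as out:
        for deb in deb_paths:
            # --fsys-tarfile writes the package's data tarball to stdout.
            data = subprocess.run(
                ["dpkg-deb", "--fsys-tarfile", deb],
                check=True, stdout=subprocess.PIPE).stdout
            with tarfile.open(fileobj=io.BytesIO(data)) as tf:
                for member in tf.getmembers():
                    # Assuming no overlap between packages, order doesn't
                    # matter; regular files need their contents copied too.
                    if member.isfile():
                        out.addfile(member, tf.extractfile(member))
                    else:
                        out.addfile(member)

if __name__ == "__main__":
    # Usage: merge_debs.py a.deb b.deb ... out.tar.gz
    merge_debs(sys.argv[1:-1], sys.argv[-1])
```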
I usually end up just using the dpkg command-line tools, because they're pretty stable and widespread. If you want to avoid that, I see a couple of Python packages which can extract that tarball directly.
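If you'd rather skip external tooling entirely, a .deb is just an `ar` archive whose members include control.tar.* and data.tar.*, and the ar container format is simple enough to parse by hand. A minimal sketch, assuming a well-formed .deb in the common ar variant Debian uses:

```python
"""Pull the data.tar.* member out of a .deb with no dependencies."""
import sys

def deb_data_tarball(path):
    with open(path, "rb") as f:
        if f.read(8) != b"!<arch>\n":
            raise ValueError("not an ar archive")
        while True:
            header = f.read(60)  # fixed-size ar member header
            if len(header) < 60:
                raise ValueError("no data.tar.* member found")
            name = header[0:16].decode("ascii").strip()
            size = int(header[48:58])
            payload = f.read(size)
            if size % 2:
                f.read(1)  # members are padded to even offsets
            if name.startswith("data.tar"):
                return name, payload  # still compressed, e.g. data.tar.xz

if __name__ == "__main__":
    name, payload = deb_data_tarball(sys.argv[1])
    sys.stdout.buffer.write(payload)
```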
Each layer in a docker image is a tarball too. A `docker save` is a tarball of tarballs, plus some JSON telling you which is which. There are some special filename conventions for deleting files ("whiteout" files), but you can probably ignore those for this use case.
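For a concrete picture of that layout, this sketch lists the layer tarballs inside a `docker save` tarball in order, using the top-level manifest.json (assuming the current save format):

```python
"""Print the layer tarballs inside a `docker save` tarball, bottom-up."""
import json
import sys
import tarfile

with tarfile.open(sys.argv[1]) as tf:
    # manifest.json is a JSON array with one entry per saved image.
    manifest = json.load(tf.extractfile("manifest.json"))
    for layer in manifest[0]["Layers"]:  # e.g. "<hash>/layer.tar"
        print(layer)
```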
If you set up something like the
first deb_packages example (docker_build is the old name for container_image), you'll get everything you care about in the final layer. This means you could just grab the
-layer.tar output from the container_image directly. Then, you can take this tarball and use it with http_archive to build against.
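In your WORKSPACE that ends up looking something like this, where the name, URL, sha256, and BUILD file are all placeholders for wherever you upload the tarball:

```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "my_runtime_libs",  # hypothetical name
    url = "https://example.com/mirror/my_runtime_libs-layer.tar",
    sha256 = "...",  # fill in after uploading
    build_file = "//debian:my_runtime_libs.BUILD",
)
```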
If you want to merge all the layers in a container, it's fairly straightforward too. I have some Python code that does it in ~150 lines of pure Python, but unfortunately that's not open source.
https://github.com/jwilder/docker-squash is an example of doing it in Go. `docker export` would probably also work. I figured most of this out by just extracting a docker save tarball and looking around.
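If you want to roll your own, the core of it is roughly this (a sketch, not my actual code; it reads everything into memory and ignores opaque whiteouts):

```python
"""Flatten all layers of a `docker save` tarball into one tarball.

Later layers win; a `.wh.<name>` whiteout entry deletes <name> from
earlier layers.
"""
import io
import json
import os
import sys
import tarfile

def flatten(save_path, out_path):
    files = {}  # path -> (TarInfo, file contents or None)
    with tarfile.open(save_path) as tf:
        manifest = json.load(tf.extractfile("manifest.json"))
        for layer_name in manifest[0]["Layers"]:  # bottom layer first
            with tarfile.open(fileobj=tf.extractfile(layer_name)) as layer:
                for member in layer.getmembers():
                    dirname, base = os.path.split(member.name)
                    if base.startswith(".wh."):
                        # Whiteout: drop the shadowed path, keep no entry.
                        files.pop(os.path.join(dirname, base[4:]), None)
                        continue
                    data = (layer.extractfile(member).read()
                            if member.isfile() else None)
                    files[member.name] = (member, data)
    with tarfile.open(out_path, "w") as out:
        for member, data in files.values():
            out.addfile(member, io.BytesIO(data) if data else None)

if __name__ == "__main__":
    flatten(sys.argv[1], sys.argv[2])
```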
As I mentioned at the beginning, I've had a lot of success converting various forms of packages to tarballs as a pre-build step. I typically use Bazel targets to do that, which makes it nicely reproducible and reasonably hermetic. That reduces the number of files to download when actually building the code that uses these dependencies, which generally performs better. However, it does increase the number of steps to add or update a dependency (edit WORKSPACE+BUILD, build, upload, edit WORKSPACE again with the new sha256). If you want, it should be straightforward to convert any of these techniques to a repository rule which downloads the individual files and combines them while loading the repository.
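A sketch of what that could look like for the .deb case (the rule and attribute names here are made up, and it assumes dpkg-deb exists on the machine running Bazel, which costs you some hermeticity):

```python
def _merged_debs_impl(ctx):
    for i, url in enumerate(ctx.attr.urls):
        deb = "pkg%d.deb" % i
        ctx.download(url, output = deb, sha256 = ctx.attr.sha256s[i])
        # `dpkg-deb -x` extracts the package's filesystem tree in place.
        result = ctx.execute(["dpkg-deb", "-x", deb, "."])
        if result.return_code != 0:
            fail("dpkg-deb failed: " + result.stderr)
    ctx.file("BUILD", ctx.attr.build_file_content)

merged_debs = repository_rule(
    implementation = _merged_debs_impl,
    attrs = {
        "urls": attr.string_list(mandatory = True),
        "sha256s": attr.string_list(mandatory = True),
        "build_file_content": attr.string(mandatory = True),
    },
)
```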