By "exactly what you mentioned," I'm guessing you mean that you want to leverage Container Linux's declarative configuration, small deployed size, and automatic updates to build out a number of singleton installations across many (potentially differing) clients.
You certainly _can_ use Container Linux for this purpose, but much of the tooling won't help you here: it presumes some additional scheduling/deployment layer above the node which handles allocating loads to it. Since that is not the case for you, you will have to supply that layer with something else.
One option is fleet[1], though it is now deprecated. You _could_ create a centralized etcd cluster to which all of your nodes connect to obtain their (dynamic) load descriptions. This is an elegant solution if the security of etcd (TLS + auth) is sufficient for your purposes, and if your nodes are going to be connected to the internet.
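To make the fleet option concrete: a fleet unit is just a systemd unit with an extra [X-Fleet] section telling the cluster where it may run. Everything below (the unit name, image, and metadata key) is invented for illustration:

```ini
# myapp.service -- a hypothetical fleet unit
[Unit]
Description=Hypothetical appliance workload

[Service]
ExecStart=/usr/bin/rkt run example.com/myapp:v1

[X-Fleet]
# Only schedule onto machines tagged with this metadata
MachineMetadata=role=appliance
```

You would submit this with `fleetctl start myapp.service`, and fleet would place it on a matching machine. Given fleet's deprecation, though, treat this as a legacy path rather than something to build new infrastructure on.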
Another option is a configuration manager, such as Ansible[2]. You could manage your load as systemd units which the configuration manager copies over and updates.
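A hedged sketch of what that might look like as an Ansible playbook (the host group, unit name, and file paths are all my invention):

```yaml
# Push a unit file to each appliance and (re)start it when it changes.
- hosts: appliances
  become: true
  tasks:
    - name: Install the service unit
      copy:
        src: files/myapp.service
        dest: /etc/systemd/system/myapp.service
        mode: "0644"
      notify: restart myapp

    - name: Enable and start the service
      systemd:
        name: myapp.service
        enabled: true
        state: started
        daemon_reload: true

  handlers:
    - name: restart myapp
      systemd:
        name: myapp.service
        state: restarted
        daemon_reload: true
```

One caveat: Container Linux ships without a Python interpreter, so stock Ansible modules won't run on the node until you apply one of the usual bootstrap workarounds (or stick to the `raw` module).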
Another option is a synchronization script, which is essentially a hand-rolled configuration manager: it does the same job, but with a simpler, purpose-built script.
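A minimal sketch of such a script (the directory layout, unit names, and the apply flag are my own assumptions, not anything Container Linux mandates): it copies unit files from a desired-state directory into systemd's unit directory and restarts only what actually changed.

```shell
#!/usr/bin/env bash
# Hand-rolled unit sync: compare desired units against installed ones,
# install the ones that differ, and optionally poke systemd afterwards.
set -u
shopt -s nullglob

sync_units() {
  local src="$1" dest="$2" apply="${3:-0}"
  local changed="" unit name
  for unit in "$src"/*.service; do
    name="$(basename "$unit")"
    # cmp exits non-zero if the files differ (or the target doesn't exist yet)
    if ! cmp -s "$unit" "$dest/$name" 2>/dev/null; then
      install -m 0644 "$unit" "$dest/$name"
      changed="$changed $name"
    fi
  done
  if [ -n "$changed" ] && [ "$apply" = "1" ]; then
    systemctl daemon-reload          # pick up new/changed unit files
    for name in $changed; do
      systemctl restart "$name"      # use `enable --now` for brand-new units
    done
  fi
  echo "changed:${changed:- (none)}"
}

# On a real node you would run something like:
#   sync_units /opt/units /etc/systemd/system 1
```

Run it from cron or a systemd timer and the node converges on whatever unit files you publish, with no agent beyond bash and coreutils.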
Each of these rests on the same basic premise: all load is defined entirely by systemd unit files. These unit files should define services which pull down container images and then execute them in your container runtime (I would use rkt as the local runtime, since it plays better with systemd, but docker is a perfectly valid choice). Thus the entirety of your post-install customization is contained in the unit files. Container Linux's automatic updates will be completely transparent to this process, since the unit files live in either /etc/systemd/system or /run/systemd/system, depending on the method you choose, and both of those directories are untouched by the update process.
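For example, one load might be described by nothing more than a unit file like this (the image name and options are invented; rkt fetches the image itself on first run):

```ini
# /etc/systemd/system/myapp.service -- the unit *is* the load description
[Unit]
Description=My application container
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/rkt run --net=host example.com/myapp:v1.2.3
# mixed is the recommended kill mode for rkt pods managed by systemd
KillMode=mixed
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

A docker-based unit works the same way, with a `docker run` invocation in ExecStart (typically preceded by an ExecStartPre that pulls the image).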