Arif:
In general, the best way to approach the docker code base is to trace a command from end to end. I usually start with the cli command[1], then trace it through to the http endpoint to see how the command is handled. You can see an example in the container routes[2], but note that different subsystems register their routes individually. From there, I use code search to identify the specific backend implementations that match up to the actual http handlers. For the most part, the behavior of the daemon can be traced from the handlers into the daemon package[3]. Depending on what you are trying to understand, working from there might be enough, but be careful about making assumptions about how things work.
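The cli-to-backend wiring described above follows a common pattern: a subsystem defines a backend interface, and its router binds HTTP paths to handlers that call into that backend. The sketch below is a much-simplified, hypothetical illustration of that pattern using only the standard library; none of the names here are docker's actual identifiers.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// containerBackend stands in for the interface a subsystem's routes
// depend on; the daemon package would provide the real implementation.
type containerBackend interface {
	ContainerStart(name string) error
}

type daemon struct{} // illustrative stand-in for the daemon implementation

func (d *daemon) ContainerStart(name string) error {
	fmt.Printf("daemon: starting container %q\n", name)
	return nil
}

// registerRoutes mirrors the idea that each subsystem registers its own
// routes, handing each handler a reference to the backend.
func registerRoutes(mux *http.ServeMux, b containerBackend) {
	mux.HandleFunc("/containers/start", func(w http.ResponseWriter, r *http.Request) {
		name := r.URL.Query().Get("name")
		if err := b.ContainerStart(name); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	})
}

func main() {
	mux := http.NewServeMux()
	registerRoutes(mux, &daemon{})

	// Exercise the route the way the cli ultimately would: over HTTP.
	srv := httptest.NewServer(mux)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/containers/start?name=web")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```

Tracing a real command works the same way in reverse: find the path the cli hits, find where that path is registered, and follow the handler into the backend.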
The first time doing this can be daunting, as there is more than a little indirection through calls and interfaces. Adding print statements along various call paths and rebuilding the daemon and cli is a great exercise for building an understanding of how the daemon works. Once you've done this enough, you'll build an intuition that will be more than enough to get around.
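The print-statement exercise can be reduced to a toy: log on entry at each layer of a (drastically simplified) cli → client → daemon chain, and a single command shows you the whole path. The function names below are illustrative only, not docker's.

```go
package main

import "fmt"

// Each layer prints on entry, so running one command traces the path.
func cliRun(cmd string) {
	fmt.Println("cli: parsed command", cmd)
	clientCall(cmd)
}

func clientCall(cmd string) {
	fmt.Println("client: POST /" + cmd)
	daemonHandle(cmd)
}

func daemonHandle(cmd string) {
	fmt.Println("daemon: handling", cmd)
}

func main() {
	cliRun("start")
}
```

In the real code base the layers are separated by HTTP and several interfaces, but the technique is identical: a print at each hop makes the indirection visible.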
This is a worthy cause, but most of the "easy" stuff is already taken care of. I am not saying there are no optimizations left in docker; I am saying that you'll have to make hard trade-offs to bring optimizations to docker. It is very easy to push a change that looks like an optimization but actually makes things slower for a certain class of users (I am more than guilty of this). That said, there may be areas to reduce lock contention that I am unaware of.
When building out optimizations for docker, the key is to first find something that can be benchmarked and then optimized in the broad case. Make sure the benchmarks take into account the full domain of input, as well as machine type and size. Also, make sure that the benefit of the optimization is appropriately balanced against its complexity. For example, an optimization that breaks a guarantee or expands an interface awkwardly may not be accepted, because the cost of maintaining it may be too high.
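As a generic illustration of the practice (not a docker hot path), the sketch below benchmarks two implementations of the same operation across different input sizes using the standard library's `testing.Benchmark`, which can be driven from an ordinary program. Varying the input size is the point: a winner at one size can lose at another.

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// Two implementations of the same operation: naive string concatenation
// versus strings.Builder.
func concat(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

func useBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	cases := []struct {
		name string
		fn   func([]string) string
	}{
		{"naive concat", concat},
		{"strings.Builder", useBuilder},
	}
	// Benchmark over more than one input size to cover more of the
	// input domain, as the advice above suggests.
	for _, n := range []int{10, 1000} {
		parts := make([]string, n)
		for i := range parts {
			parts[i] = "x"
		}
		for _, c := range cases {
			r := testing.Benchmark(func(b *testing.B) {
				for i := 0; i < b.N; i++ {
					c.fn(parts)
				}
			})
			fmt.Printf("n=%d %s: %d ns/op\n", n, c.name, r.NsPerOp())
		}
	}
}
```

Real docker benchmarks would additionally need to vary machine type and size, which a single-machine microbenchmark like this cannot capture.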
In the near term, I think the bulk of the optimization work lies in effective measurement. For 1.13, we added experimental support for exporting prometheus metrics. New metrics can be added with go-metrics[4]. Measuring more of docker's internals will help greatly in identifying the areas that need larger optimization work.
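Docker exports its metrics through go-metrics and prometheus, but the underlying idea can be shown with no dependencies using the standard library's `expvar` package as a stand-in: declare a named counter once, bump it on the code path you care about, and expose it for scraping (expvar publishes its variables at `/debug/vars` when an HTTP server is running). The names below are illustrative, not docker's actual metrics.

```go
package main

import (
	"expvar"
	"fmt"
)

// A named counter, registered once at package init; expvar would expose
// it over HTTP alongside any other published variables.
var containerStarts = expvar.NewInt("container_starts")

// startContainer stands in for a code path worth measuring.
func startContainer(name string) {
	containerStarts.Add(1) // the measurement on the hot path
	_ = name               // ... real work would happen here ...
}

func main() {
	for i := 0; i < 3; i++ {
		startContainer("web")
	}
	fmt.Println("container_starts =", containerStarts.Value())
}
```

The go-metrics library follows the same shape with prometheus-friendly types (counters, gauges, timers) and namespacing, which is what makes the daemon's measurements scrapeable in 1.13's experimental mode.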
I hope this helps.
Stephen.