Greetings Silvia,
I wouldn't say it is wrong per se; it is simply a tradeoff. The idea is to use a smaller subset of samples so that the optimization process runs faster and you can explore the parameter space more efficiently. Aside from the time difference, the optimization results themselves shouldn't differ much between running with 10 vs 24 individuals. Presumably, if you select a subset of samples that is representative of the whole population (or populations), the optimized parameters should be applicable to the whole pool of sequenced individuals. This can change somewhat depending on the underlying structure of your samples (e.g., if you have data from very divergent populations or different species, you might want to optimize them a bit separately).
In summary, using a subset mainly helps run things faster, allowing for a more efficient exploration of the parameter space.
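If it helps to make this concrete, here is a minimal sketch of one way to draw such a representative subset. It assumes your samples are listed in a two-column, tab-separated file of sample name and population; the file name popmap.tsv, the per_pop count, and the subset_popmap helper are all illustrative, not part of any specific pipeline:

import csv
import random
from collections import defaultdict

def subset_popmap(popmap_path, per_pop=3, seed=42):
    """Stratified random subset: `per_pop` samples from each population."""
    pops = defaultdict(list)
    with open(popmap_path, newline="") as fh:
        for row in csv.reader(fh, delimiter="\t"):
            if len(row) < 2:  # skip blank or malformed lines
                continue
            pops[row[1]].append(row[0])
    rng = random.Random(seed)  # fixed seed keeps the subset reproducible
    subset = []
    for pop, samples in pops.items():
        # take per_pop individuals per population (or all, if fewer)
        for s in rng.sample(samples, min(per_pop, len(samples))):
            subset.append((s, pop))
    return subset

if __name__ == "__main__":
    for sample, pop in subset_popmap("popmap.tsv", per_pop=3):
        print(f"{sample}\t{pop}")

Sampling per population (rather than drawing at random from everyone) keeps each population represented in the subset, which matters when there is underlying structure, as noted above.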
Aside from this, my main advice is to keep a few key points about parameter optimization in mind. While important, the optimization process itself is not the end goal of the project, and it is easy to fall into optimization rabbit holes with diminishing returns. The optimization process is also not perfect: some datasets don't biologically conform to the logic behind it (e.g., when you have very divergent populations), and others might converge on multiple equally good parameter sets. Ultimately, the goal is to use the optimization as a tool for downstream biological insights, which is the central goal behind most projects.
Hope this helps.
Angel