Linearity and homoskedasticity are assumptions of the model, not the estimator. If these are not met, the model is misspecified.
Normality is an assumption underlying ML standard errors, not the point estimates. Using robust ML (which applies a sandwich-type correction to the standard errors) addresses violations of this assumption. My guess is that this is what the other student was referring to.
I am not sure what you mean by "independency." Single-level ML estimation assumes independent observations. Latent variable models additionally imply local independence of the indicators conditional on the latent variable, unless violations such as cross-loadings or correlated residuals are explicitly modeled.
Time precedence is more complicated. The idea was introduced in the 18th century, probably by David Hume, as part of an empiricist effort to ground the concept of causation in sense impressions, which were assumed to be completely passive in nature. The assumption was criticized immediately, most famously by Immanuel Kant, and has been criticized in more recent literature by the likes of Hugh Mellor and David Lewis. My impression is that the received view is now that the concept of causation itself does not imply time precedence. Another motivation for not building time precedence into the concept of causation is that doing so prevents us from relying on causation to determine the direction of time.
That said, in the behavioral sciences, cases of backward causation are presumed very rare and unusual, so weak time ordering is probably a quite defensible methodological assumption for this domain of causes. Strict time ordering is a little more open to criticism. However, if the theory of the causal process you are modeling describes an effect that unfolds over time, then strict ordering may be a plausible assumption. Nonetheless, it is important to recognize that this is an empirical assumption about the specific phenomenon being modeled, not a general conceptual assumption about causation in the abstract.