In the lectures we define variance as
Var[X] = E[(X - E[X])^2] and
SD[X] = sqrt(Var[X]), and this standard deviation tells us the spread of a random variable about its mean. The standard deviation is a root-mean-square (rms) kind of quantity. In an introductory physics lab course, the usual explanation is that values can lie on either side of the mean, so errors can be positive or negative and the mean of the errors can be 0; therefore we square them, take the average, and then take the square root to get back to the original scale.
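A quick numerical sketch of that lab-course argument (Python is just my choice here; the sample values are made up for illustration): raw deviations about the mean cancel exactly, which is why they are squared before averaging.

```python
# Hypothetical sample, chosen only to illustrate the cancellation.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mu = sum(data) / n                                   # sample mean = 5.0

# Raw (signed) deviations average out to zero, so they cannot measure spread.
raw_dev = sum(x - mu for x in data) / n

# Squaring first, then averaging, then taking the root gives the rms spread.
sd = (sum((x - mu) ** 2 for x in data) / n) ** 0.5

print(raw_dev, sd)
```

For this sample the signed deviations sum to exactly 0 while the rms spread is 2.0, which is the point the lab explanation is making.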
My doubt is: why can't we use the absolute deviation instead, for which we can define Var[X] = (E[|X - E[X]|])^2 and thus AD[X] = sqrt(Var[X]) = E[|X - E[X]|]? In that case the problem of errors cancelling does not arise either, since we are taking their absolute values.
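To make the comparison concrete, here is a minimal sketch (Python, hypothetical sample) computing both measures of spread on the same data:

```python
# Hypothetical sample used only for illustration.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mu = sum(data) / n

# rms spread: square, average, then take the root.
sd = (sum((x - mu) ** 2 for x in data) / n) ** 0.5

# Absolute deviation: take absolute values, then average.
ad = sum(abs(x - mu) for x in data) / n

print(sd, ad)
```

For this sample SD = 2.0 and AD = 1.5; in general SD >= AD (by Jensen's inequality, since the square root of the mean of squares dominates the mean of absolute values), so the two measures agree on scale but not on value.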
I have found that this absolute deviation also satisfies the properties that the rms SD does.
For example, with this absolute deviation the Chebyshev-type inequality becomes P(|X - E[X]| >= k AD[X]) <= 1/k (this follows from Markov's inequality applied to |X - E[X]|).
Also, with this new definition, Var[aX] = a^2 Var[X] and Var[a + X] = Var[X], and the standardised random variable has exactly the same form, i.e., Z = (X - E[X]) / AD[X].
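The scaling and shift properties, and the Markov-type tail bound above, can be checked numerically; a sketch with simulated data (Python, standard normal draws, sample means standing in for expectations):

```python
import random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def ad(sample):
    """Sample mean absolute deviation about the sample mean."""
    m = sum(sample) / len(sample)
    return sum(abs(v - m) for v in sample) / len(sample)

a, b = 3.0, 7.0
ad_x = ad(xs)
ad_ax = ad([a * v for v in xs])   # scaling: AD[aX] = |a| AD[X]
ad_bx = ad([b + v for v in xs])   # shift:   AD[b + X] = AD[X]

# Markov-type tail bound: P(|X - E[X]| >= k * AD[X]) <= 1/k
m = sum(xs) / len(xs)
k = 2.0
tail = sum(abs(v - m) >= k * ad_x for v in xs) / len(xs)

print(ad_ax / ad_x, ad_bx / ad_x, tail, 1.0 / k)
```

The scaling and shift identities hold exactly for sample AD (up to floating-point rounding), and the empirical tail probability stays below 1/k, consistent with the bound.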
I don't understand why, even though it satisfies the same properties as the rms SD, the absolute deviation is not used to determine the spread of random variables. It is also easier to calculate.
I have had this doubt for a long time but haven't got a satisfactory answer yet. Please help me clarify it.
Thanks!