Actually, on second thought, I can see that most of your variables are univariate, and this should be easy to explain. There is a reason you get consistent results for front, mean, and all in these cases: the front end of one variable is the first and only variable; the mean lambda for one variable is the lambda value found for that variable; and using all variables means using the one variable. These results have to be the same.
Regarding the burn-in option, it should not be used with univariate data, and thanks to your inadvertent discovery, we will have to add a trap to stop it. It might also be worth adding a machine-learning-style check that declines to seek a solution if Z scores during the burn-in phase are small. What the burn-in method does is use a smaller number of permutations (about 10% of the total requested, I believe) to find Z scores at intervals of lambda, then fit a spline to find where Z (not lambda!) is maximized. Once that lambda is found, the full set of permutations is run at the lambda value corresponding to the preliminary Z maximization, and a Z is obtained from that full distribution.
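In case it helps to see the logic laid out, here is a minimal sketch of that burn-in idea. To be clear, this is not the actual implementation; `z_score` is a hypothetical stand-in (a smooth curve plus permutation noise) for whatever routine computes a permutation-based effect size at a given lambda.

```python
# Hedged sketch of the burn-in strategy: score a grid of lambda values with
# ~10% of the requested permutations, fit a spline to (lambda, Z), pick the
# lambda that maximizes the spline, then run the full permutation set there.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)

def z_score(lam, n_perm):
    # Hypothetical stand-in for a permutation effect size at a given lambda:
    # a smooth surface plus noise that shrinks with more permutations.
    return -(lam - 0.6) ** 2 + rng.normal(0.0, 1.0 / np.sqrt(n_perm))

def burn_in_optimize(n_perm=1000, burn_frac=0.1):
    grid = np.linspace(0.0, 1.0, 11)
    # Burn-in phase: cheap Z scores at intervals of lambda
    z_burn = [z_score(lam, int(n_perm * burn_frac)) for lam in grid]
    spline = CubicSpline(grid, z_burn)
    fine = np.linspace(0.0, 1.0, 1001)
    lam_best = fine[np.argmax(spline(fine))]  # maximize Z, not lambda
    # Full run only at the lambda chosen during burn-in
    return lam_best, z_score(lam_best, n_perm)

lam_best, z_full = burn_in_optimize()
print(lam_best, z_full)
```

You can see from the sketch why a flat Z surface is trouble: the spline's argmax is then driven mostly by the burn-in noise.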
There are two ways the burn-in method of optimization can go haywire. One is if the surface to which the spline is fit is flat. In that case, Z would probably be small wherever the spline places the maximum, and whether the maximum was placed appropriately is anyone's guess, since finding a maximum on a flat surface is fraught with potential error. The second is trying to optimize a multivariate lambda (which assumes two or more variables) with a single variable, in which case the burn-in method makes no sense. This is like asking the analysis to ignore the larger number of requested permutations for a straightforward analysis and instead make a biased judgment first, based on a distribution with only 10% of the permutations. (A good future addition would be to make the number of burn-in permutations an option, but as stated in the help file, the optimization strategies are still in need of development.)
In practical terms, you should XX-out all burn-in results in your table except for shape. It is illogical to perform that optimization on univariate data, something we will have to trap in the next software version to prevent such a pointless trial.
Finally, regarding your Avg. Body Weight result, it is good to remember what is happening in the analysis, because although a statistical result like this seems odd, it can make sense. The lambda is practically 0. The significant P-value indicates that with lambda = 0.0000007 (basically a star phylogeny with the tiniest bit of structure near the root of the tree, so small as to be easily ignored), the trace of the generalized least squares residual covariance, estimated with the covariance matrix implied by that lambda, is smaller than expected by chance. You *could* call that phylogenetic signal, but you would be foolish to suggest it resembles what you would expect under Brownian Motion, which tends to be the standard we use when thinking about phylogenetic signal; accordingly, the K value here is rather truthfully small. I think we were pretty clear in the paper that one should only call it phylogenetic signal when both the effect size is large and lambda suggests the data look like they were sampled from a BM process. The latter is not at all true here, and the former is subjectively not true either, even if the P-value is borderline.
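To make the "basically a star phylogeny" point concrete: under the standard Pagel transformation, lambda multiplies the off-diagonal elements of the Brownian-motion covariance matrix while leaving the diagonal alone, so a lambda of 0.0000007 collapses the matrix to essentially a diagonal (star) structure. The small matrix below is made up for illustration, not taken from your data.

```python
# Pagel's lambda transformation: scale the off-diagonal (shared-history)
# elements of the BM covariance matrix by lambda; diagonals are unchanged.
import numpy as np

def pagel_lambda(C, lam):
    C_lam = lam * C.copy()
    np.fill_diagonal(C_lam, np.diag(C))  # restore tip variances
    return C_lam

# Toy 3-taxon BM covariance matrix (illustrative values only)
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])

C_star = pagel_lambda(C, 7e-7)
print(C_star)  # off-diagonals shrink to ~0: effectively a star phylogeny
```

With a covariance matrix this close to the identity, the GLS fit is nearly an ordinary least squares fit, which is why the phylogeny is contributing almost nothing here.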
Generally if P = 0.05 one would expect Z = 1.96, so you might also have an issue with strangely distributed residuals in your GLS fits.
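That P-to-Z correspondence is just the standard normal quantile: a two-tailed P of 0.05 pairs with |Z| of about 1.96 (one-tailed, about 1.64), so a permutation Z that strays far from this for a borderline P hints that the reference distribution is not normal-like.

```python
# Check the correspondence between P = 0.05 and Z under a standard normal
# reference distribution.
from scipy.stats import norm

p = 0.05
z_two_tailed = norm.ppf(1 - p / 2)  # ~1.96
z_one_tailed = norm.ppf(1 - p)      # ~1.64
print(round(z_two_tailed, 2), round(z_one_tailed, 2))
```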
I think that covers most bases — hope it made sense.
Best,
Mike