I've had good results using bootstrap resampling to measure uncertainty empirically. Personally, I find this a better estimate of the uncertainty, since it answers the question "How similar would my results be if I repeated the experiment?"
More formally, I believe the standard uncertainties calculated separately for each variable are based on partial derivatives: each one considers the width of the uncertainty function along a single axis in isolation. Bootstrapping, by contrast, captures the full uncertainty, taking into account the correlations between the variables. It's been a while since I've looked at this, though, so take it with a grain of salt. I think there's a good discussion in D. S. Sivia, *Data Analysis: A Bayesian Tutorial*.
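As a rough sketch of what I mean (synthetic data and a simple line fit, just for illustration): resample the (x, y) pairs with replacement, refit on each resample, and look at the spread of the fitted parameters. Because the pairs are resampled together, the parameter correlations come along for free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a noisy linear trend (illustrative only).
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)

n_boot = 2000
params = np.empty((n_boot, 2))
for i in range(n_boot):
    # Resample (x, y) pairs with replacement, keeping them paired
    # so correlations between the variables are preserved.
    idx = rng.integers(0, x.size, size=x.size)
    params[i] = np.polyfit(x[idx], y[idx], deg=1)  # (slope, intercept)

# Empirical uncertainty: spread of the fitted parameters across resamples.
slope_err, intercept_err = params.std(axis=0)
# Full covariance, including the slope/intercept correlation that
# per-variable error bars would miss.
cov = np.cov(params, rowvar=False)

print(f"slope     = {params[:, 0].mean():.2f} +/- {slope_err:.2f}")
print(f"intercept = {params[:, 1].mean():.2f} +/- {intercept_err:.2f}")
```

The off-diagonal term of `cov` is exactly the piece that quoting one standard uncertainty per variable throws away.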