Hi:
Brandon showed you one approach to computing simulation-based
confidence bands for nonlinear regression; another is to use a
bootstrap approach to the problem. A rather naive, quick-and-dirty way
to do this is to select a set of x-values at which predictions are
desired, use the bootstrap to get a set of predicted values at each x,
and then take the .025 and .975 quantiles of those predictions as a
95% percentile bootstrap CI at that x. This produces a set of
confidence limits for each x that you can put into a data frame and
pass to ggplot(), using geom_ribbon() to plot the band.
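A minimal sketch of that recipe, using a made-up exponential-decay
model y ~ a * exp(-b1 * x) and simulated data purely for illustration
(your model, starting values, and x-grid will differ):

```r
## Naive percentile bootstrap bands for an nls() fit.
## Data, model, and grid are illustrative, not from your problem.
set.seed(42)
dat <- data.frame(x = runif(100, 0, 5))
dat$y <- 3 * exp(-0.7 * dat$x) + rnorm(100, sd = 0.2)

xgrid <- seq(0, 5, length.out = 50)   # x-values where bands are wanted
B <- 200                              # number of bootstrap resamples
preds <- matrix(NA_real_, nrow = B, ncol = length(xgrid))

for (b in seq_len(B)) {
  boot <- dat[sample(nrow(dat), replace = TRUE), ]
  fit <- nls(y ~ a * exp(-b1 * x), data = boot,
             start = list(a = 3, b1 = 0.7))
  preds[b, ] <- predict(fit, newdata = data.frame(x = xgrid))
}

bands <- data.frame(
  x   = xgrid,
  lwr = apply(preds, 2, quantile, probs = 0.025),
  upr = apply(preds, 2, quantile, probs = 0.975)
)

## library(ggplot2)
## ggplot(dat, aes(x, y)) + geom_point() +
##   geom_ribbon(data = bands, aes(x, ymin = lwr, ymax = upr),
##               inherit.aes = FALSE, alpha = 0.3)
```

The ggplot() call is commented out only so the sketch runs without the
package attached; bands has exactly the shape geom_ribbon() expects.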
I'd look at John Fox's paper:
http://socserv.socsci.mcmaster.ca/jfox/Books/Companion/appendix/Appendix-Bootstrapping.pdf
It discusses two ways to bootstrap a linear model, depending on
whether you want to consider the x-values to be fixed in the bootstrap
process or not. You can take the ideas from that paper and try to
adapt them to the process of bootstrapping your nonlinear model.
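For the fixed-x case, Fox's residual-resampling scheme carries over to
nls() directly: keep the observed x's, resample centered residuals,
add them to the fitted values, and refit. A sketch with an illustrative
model (not your data):

```r
## Residual (fixed-x) bootstrap for a nonlinear fit: the x's stay
## fixed and only the residuals are resampled. Illustrative model.
set.seed(1)
x <- seq(0.1, 5, length.out = 80)
y <- 3 * exp(-0.7 * x) + rnorm(80, sd = 0.2)
fit0 <- nls(y ~ a * exp(-b * x), start = list(a = 3, b = 0.7))

fitted0 <- fitted(fit0)
res0 <- resid(fit0) - mean(resid(fit0))   # center the residuals

coefs <- replicate(200, {
  ystar <- fitted0 + sample(res0, replace = TRUE)  # new y's, same x's
  coef(nls(ystar ~ a * exp(-b * x), start = coef(fit0)))
})
## coefs is a 2 x 200 matrix of bootstrap parameter estimates
```

Case resampling (random x) is the scheme in the earlier sketch:
resample whole rows of the data frame instead of residuals.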
The problem with getting confidence limits from nonlinear models is
that not only is the expectation function nonlinear, but so is the
variance function, and it is rarely the case that the variance
function has a known form. This is one reason why predict.nls() does
not return confidence (or prediction) limits. The bootstrap often
works well in this type of problem, absent serious outliers. The
propagation of error approach that Brandon linked is another way to
attack the problem using a specific type of variance approximation.
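That variance approximation is the delta method: for a predicted mean
f(x0, theta-hat), Var is approximated by g' V g, where g is the
gradient of the expectation function with respect to the parameters
and V = vcov(fit). A sketch with a numeric gradient and an
illustrative model (not your data):

```r
## Propagation-of-error (delta method) approximation to the variance
## of the predicted mean response at a single x0. Illustrative model.
set.seed(7)
x <- seq(0.1, 5, length.out = 60)
y <- 3 * exp(-0.7 * x) + rnorm(60, sd = 0.2)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 3, b = 0.7))

f <- function(theta, x0) unname(theta["a"] * exp(-theta["b"] * x0))
theta <- coef(fit)
x0 <- 2

## forward-difference gradient of f w.r.t. theta at theta-hat
eps <- 1e-6
g <- sapply(names(theta), function(nm) {
  th <- theta; th[nm] <- th[nm] + eps
  (f(th, x0) - f(theta, x0)) / eps
})

se <- sqrt(drop(t(g) %*% vcov(fit) %*% g))
ci <- f(theta, x0) + c(-1.96, 1.96) * se  # approx. 95% CI for the mean
```

Note this is only a local linearization around theta-hat, which is
exactly the approximation the propagation-of-error approach rests on.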
Some people try to linearize a nonlinear model (when possible), find
the CIs/PIs from the linearized fit and transform back, but that has
not proven to be an effective solution. It is usually better to work
in the original units, with perhaps a more appropriate
parameterization of the model in the nonlinear case.
You may run into (convergence) problems fitting your model with
bootstrapped data, so you should become familiar with the try() and
tryCatch() functions and use them for each iteration of the bootstrap
fitting process. A large number of convergence failures is likely a
sign that your fitted model is not very stable and should not be
trusted.
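A sketch of that guard, wrapping each refit in tryCatch() so a single
convergence failure doesn't abort the whole loop, and counting
failures as a stability diagnostic (model and data again
illustrative):

```r
## Guarded bootstrap loop: tryCatch() returns NULL on a failed fit
## instead of stopping, and fails tallies how often that happens.
set.seed(3)
dat <- data.frame(x = runif(100, 0, 5))
dat$y <- 3 * exp(-0.7 * dat$x) + rnorm(100, sd = 0.2)

B <- 200
fails <- 0
coefs <- vector("list", B)
for (b in seq_len(B)) {
  boot <- dat[sample(nrow(dat), replace = TRUE), ]
  fit <- tryCatch(
    nls(y ~ a * exp(-k * x), data = boot, start = list(a = 3, k = 0.7)),
    error = function(e) NULL
  )
  if (is.null(fit)) fails <- fails + 1 else coefs[[b]] <- coef(fit)
}
## if fails is a sizable fraction of B, distrust the fitted model
```

Summarize only the successful fits, and report fails/B alongside any
interval you compute from them.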
One final point: as far as prediction intervals go, fuggedaboudit. In
linear models, there is an analytic form for the prediction variance
that is capable of extending confidence limits to prediction limits in
a straightforward, theoretically defensible way. This does not extend
to nonlinear models. Period. If the variance function for the _mean_
response is generally unknown in nonlinear models, it is even more
difficult to find one for an _individual_ response, and AFAIK using
simulation-based techniques for this purpose is a futile exercise. For
example, neither the bootstrap nor the propagation-of-error technique
applies to predicting individual responses from a nonlinear model.
Dennis