Based on your other post I assume this is an RTMB question (not TMB).
An easy way to get gradients of each term is to ADREPORT all terms. Here, using a simple normal model:
## Example model and likelihood
library(RTMB)
nobs <- 10
parms <- list(mu=0, sd=1)
xobs <- rnorm(nobs, mean=2, sd=3)
f <- function(parms) {
    getAll(parms)
    nll <- -dnorm(xobs, mu, sd, log=TRUE)
    ADREPORT(nll)
    sum(nll)
}
## Get MLE
obj <- MakeADFun(f, parms)
opt <- nlminb(obj$par, obj$fn, obj$gr)
You can now get the gradient of each term (row) with respect to each parameter (column) like this:
obj <- MakeADFun(f, parms, ADreport=TRUE)
obj$gr(opt$par)
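For this simple normal model you can sanity-check the reported rows against the closed-form score of each term: for nll_i = -log dnorm(x_i, mu, sd), the derivative is -(x_i - mu)/sd^2 wrt mu and 1/sd - (x_i - mu)^2/sd^3 wrt sd. A base-R sketch (the helper name `score_terms` is mine, not part of RTMB):

```r
## Closed-form per-observation gradient of nll_i = -log dnorm(x_i, mu, sd)
score_terms <- function(x, mu, sd) {
    cbind(mu = -(x - mu) / sd^2,
          sd = 1 / sd - (x - mu)^2 / sd^3)
}
## Rows should match obj$gr(opt$par) above, e.g.
## score_terms(xobs, opt$par["mu"], opt$par["sd"])
```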
However, note that this approach is very slow when the number of terms (nobs) is much larger than the number of parameters (here 2), because TMB uses reverse mode AD: each reported term costs one reverse sweep.
If this is an issue, you can get the same gradients another way (using an 'adjoint trick'):
## From T: R^2 -> R^nobs
## Generate T2: R^nobs -> R^2
T <- GetTape(obj)
T2 <- MakeTape(function(weight) {
    WT <- MakeTape(function(x) sum(T(x) * weight), opt$par)
    WT$jacfun()(advector(opt$par))
}, rep(1, nobs))
t(T2$jacobian(rep(1,nobs)))
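To see why the trick works: the gradient of x -> sum(weight * F(x)) is t(J(x)) %*% weight (one cheap reverse sweep), and this is linear in weight, so its Jacobian with respect to weight is t(J) itself. A standalone base-R check of that identity on a toy function (`Fn`, `Jac`, `fd_grad` are illustrative names, not RTMB functions):

```r
## Toy map Fn: R^2 -> R^3 and its analytic Jacobian
Fn  <- function(x) c(x[1]^2, x[1] * x[2], exp(x[2]))
Jac <- function(x) rbind(c(2 * x[1], 0),
                         c(x[2],     x[1]),
                         c(0,        exp(x[2])))
x0 <- c(1.5, -0.5)
w  <- c(1, 1, 1)
## Central-difference gradient of the scalar x -> sum(w * Fn(x));
## this plays the role of one reverse sweep, i.e. t(Jac(x0)) %*% w
fd_grad <- function(g, x, h = 1e-6) {
    sapply(seq_along(x), function(i) {
        e <- replace(numeric(length(x)), i, h)
        (g(x + e) - g(x - e)) / (2 * h)
    })
}
Jt_w <- fd_grad(function(x) sum(w * Fn(x)), x0)
## Jt_w agrees with t(Jac(x0)) %*% w; since that product is linear
## in w, differentiating it wrt w (as T2 does above) recovers all
## of t(Jac) in only 2 sweeps instead of nobs.
```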