Excellent questions!
As you know, boosting produces an initial model, then a second model that concentrates on instances the initial model misclassifies, then a third that concentrates on instances the first two misclassify, and so on. If you use a learner that produces a simple model that does not overfit, and each model does at least slightly better than chance, this process is guaranteed to drive the training error down, and in practice it converges to a good classifier.
However, if the learner overfits the training set all bets are off. In the extreme, the very first model might correctly classify all training instances (making further iterations unnecessary), and yet generalise poorly to fresh test instances.
Random Forest produces a complex model and is prone to overfitting, so you are correct in thinking that it is probably unsuitable for boosting.
However, if you limit the depth of the trees that Random Forest uses (there is a maxDepth parameter that does this), you force a simpler model — which might be worth boosting.
cheers
ian