It strongly depends on whether you are averaging or summing the loss across minibatches. Both conventions are used.
If you're summing, the magnitude of the updates scales directly with the batch size, so you will definitely have to adjust at least the learning rate accordingly.
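To see the difference concretely, here's a minimal sketch with a toy linear model (the model and numbers are just for illustration): repeating the same batch twice doubles the summed gradient but leaves the averaged gradient unchanged.

```python
import random

def grad(w, xs, ys, reduction):
    # Per-example loss is (w*x - y)^2, so the per-example gradient is 2*x*(w*x - y).
    per_example = [2 * x * (w * x - y) for x, y in zip(xs, ys)]
    total = sum(per_example)
    return total if reduction == "sum" else total / len(per_example)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(32)]
ys = [random.gauss(0, 1) for _ in range(32)]
w = 0.5

g_sum_small = grad(w, xs, ys, "sum")
g_sum_big = grad(w, xs * 2, ys * 2, "sum")    # same batch repeated twice
g_mean_small = grad(w, xs, ys, "mean")
g_mean_big = grad(w, xs * 2, ys * 2, "mean")

print(abs(g_sum_big - 2 * g_sum_small) < 1e-9)  # summed gradient doubles
print(abs(g_mean_big - g_mean_small) < 1e-9)    # averaged gradient is unchanged
```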
If you're averaging, you might still need to adapt the hyperparameters. This is because larger minibatches reduce the variance of the gradient estimate, which makes it possible to use a somewhat larger learning rate. A common rule of thumb: if the minibatch size goes up by a factor of g, the learning rate can be raised by a factor of sqrt(g). In practice it's usually better to try a few values and pick the best one, though, since a lot of different interacting factors influence the optimization process.
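The sqrt rule of thumb is easy to apply mechanically; a small helper (the function name is my own, and remember this is only a starting point for tuning, not a guarantee):

```python
import math

def scaled_learning_rate(base_lr, base_batch, new_batch):
    # sqrt rule of thumb: if batch size grows by factor g,
    # scale the learning rate by sqrt(g).
    g = new_batch / base_batch
    return base_lr * math.sqrt(g)

# e.g. going from batch size 32 to 128 (g = 4) doubles the learning rate:
print(scaled_learning_rate(0.01, 32, 128))  # 0.02
```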
Momentum can usually stay the same in my experience. In fact, I rarely optimize this parameter at all anymore; I usually leave it at 0.9.
Sander