Random seed - computers generate pseudo-random numbers from a starting value, so this lets you choose that starting value and get repeatability in your "randomness" across runs (e.g. for weight initialization)
Batch size - whether you take each optimization step based on one example, all the examples, or a few of the examples. Usually a few examples gives you a good compromise between not stepping recklessly in the wrong direction and not taking forever. This number is how many examples count as "a few" for your problem.
Solver type - which optimization algorithm you use to find a solution (e.g. SGD, AdaGrad, Adam). There are a ton of resources online covering the pros and cons of each one - try looking them up by name.
Base learning rate - this is the multiplier for the gradient of your loss function: it scales the size of the step you take toward the optimal solution. It can be adjusted over your epochs to slowly narrow in on your ideal solution using your solver options. (The available solver options depend on the solver type - another reason to take some time reading up on them.)
Epochs - how long you want to train, measured in full passes over the training set. Try a big-ish number, then increase or decrease depending on whether your model needs more time or converges early.
Snapshot interval - how often do you want to save your progress? A snapshot is a file containing the model parameters at a certain epoch. If you have a big model, save sparingly - snapshots will eat up your hard drive.
Validation interval - how often do you want to see how you're doing? This is how often DIGITS applies your model to your validation set to check how it's generalizing to unseen data.
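To make these settings concrete, here's a toy pure-Python sketch of a mini-batch training loop that shows where each one plugs in. It fits a line to noisy data with plain SGD; the variable names are illustrative, not DIGITS's actual option names.

```python
import random

random.seed(42)              # random seed: makes shuffling and init repeatable

# toy data: y = 2x + 1 plus a little noise, x in [0, 1)
data = [(i / 100, 2 * (i / 100) + 1 + random.gauss(0, 0.1)) for i in range(100)]
random.shuffle(data)
train, val = data[:80], data[80:]

batch_size = 8               # batch size: how many examples per optimization step
base_lr = 0.1                # base learning rate: step-size multiplier
epochs = 50                  # epochs: full passes over the training set
snapshot_interval = 10       # snapshot interval: save parameters every N epochs
validation_interval = 5      # validation interval: check val loss every N epochs

w, b = random.gauss(0, 0.1), 0.0   # parameters of the model y ≈ w*x + b
snapshots = []

def val_loss():
    return sum((w * x + b - y) ** 2 for x, y in val) / len(val)

for epoch in range(1, epochs + 1):
    random.shuffle(train)
    for i in range(0, len(train), batch_size):
        batch = train[i:i + batch_size]
        # plain SGD as the "solver type": gradient of mean squared error
        gw = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
        gb = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
        w -= base_lr * gw    # the learning rate scales every step
        b -= base_lr * gb
    if epoch % snapshot_interval == 0:
        snapshots.append((epoch, w, b))       # "save your progress"
    if epoch % validation_interval == 0:
        print(f"epoch {epoch}: validation loss {val_loss():.4f}")
```

With these settings the loop takes 10 small steps per epoch instead of one giant step per epoch (full batch) or 80 jittery ones (single examples) - that's the batch-size compromise in action.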
This is just a quick summary - most of these settings have a lot of nuance, so definitely take the time to read up on them: what they do, where to start, and how to adjust them for your problem.
Happy learning!