Just construct your network as if you had all inputs and targets for all the tasks. Call lasagne.layers.get_output() with all output layers (i.e., for both tasks):
outputA, outputB = lasagne.layers.get_output([outputlayerA, outputlayerB])
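For reference, a minimal sketch of what the network construction could look like, assuming a single shared input and one dense output layer per task (all names and sizes are placeholders):

import theano.tensor as T
import lasagne

input_var = T.matrix('inputs')
l_in = lasagne.layers.InputLayer((None, 100), input_var)
l_shared = lasagne.layers.DenseLayer(l_in, num_units=200)  # trunk shared by both tasks
outputlayerA = lasagne.layers.DenseLayer(l_shared, num_units=10,
                                         nonlinearity=lasagne.nonlinearities.softmax)  # task A head
outputlayerB = lasagne.layers.DenseLayer(l_shared, num_units=1,
                                         nonlinearity=lasagne.nonlinearities.sigmoid)  # task B head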
Then construct two separate loss expressions, one per task.
lossA = something(outputA, targetA)
lossB = something(outputB, targetB)
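For example, if task A were a classification task trained with categorical cross-entropy and task B a regression task trained with squared error (purely illustrative choices):

targetA = T.ivector('targetsA')
targetB = T.matrix('targetsB')
lossA = lasagne.objectives.categorical_crossentropy(outputA, targetA).mean()
lossB = lasagne.objectives.squared_error(outputB, targetB).mean()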
Construct two separate update dictionaries, one per task, updating only the parameters involved in that task (note that get_all_params() expects the output layer, not the output expression):
paramsA = lasagne.layers.get_all_params(outputlayerA, trainable=True)
paramsB = lasagne.layers.get_all_params(outputlayerB, trainable=True)
updatesA = lasagne.updates.nesterov_momentum(lossA, paramsA, ...)
updatesB = lasagne.updates.nesterov_momentum(lossB, paramsB, ...)
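Here the ... stands for the optimizer hyperparameters, for instance (placeholder values, and the two tasks do not need to share them):

updatesA = lasagne.updates.nesterov_momentum(lossA, paramsA, learning_rate=0.01, momentum=0.9)
updatesB = lasagne.updates.nesterov_momentum(lossB, paramsB, learning_rate=0.001, momentum=0.9)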
Compile two separate training functions:
train_fn_A = theano.function([inputA1, inputA2, ..., targetA], lossA, updates=updatesA)
train_fn_B = theano.function([inputB1, inputB2, ..., targetB], lossB, updates=updatesB)
The list of inputs could also contain inputs that are shared between the tasks.
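With the single shared input from the sketch above, this would simply be:

import theano
train_fn_A = theano.function([input_var, targetA], lossA, updates=updatesA)
train_fn_B = theano.function([input_var, targetB], lossB, updates=updatesB)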
Then, in your training loop, always call the function matching the task you've got a batch from.
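A sketch of such a loop, assuming a hypothetical iterate_minibatches() generator that yields (task, inputs, targets) tuples from both datasets in whatever order you like:

for epoch in range(num_epochs):  # num_epochs: however long you want to train
    for task, inputs, targets in iterate_minibatches():
        if task == 'A':
            loss = train_fn_A(inputs, targets)
        else:
            loss = train_fn_B(inputs, targets)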
Hope this helps!
Jan