Hi S4TF team,
I found some documentation on training a model on multiple TPU cores here, but I'm unable to get it to work. This is what I'm trying:
var threadState = ThreadState(
  model: model,
  optimizer: optimizer,
  id: 0,
  devices: tpuDevices,
  useAutomaticMixedPrecision: false)
where
var tpuDevices = [Device]()
for device in Device.allDevices {
  if device.kind == .TPU {
    tpuDevices.append(device)
  }
}
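For reference, these are the imports at the top of my file (I'm assuming x10_training_loop is the right module to get ThreadState from, going by the symbol names in the error below):

import TensorFlow
import x10_training_loop  // assumed module name, taken from the error message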
But the ThreadState initializer fails with the following error:
error: Couldn't lookup symbols:
x10_training_loop.ThreadState.init(model: τ_0_0, optimizer: τ_0_1, id: Swift.Int, devices: Swift.Array<TensorFlow.Device>, useAutomaticMixedPrecision: Swift.Bool) -> x10_training_loop.ThreadState<τ_0_0, τ_0_1>
Moreover, it's unclear how the id argument should be set; it looks like something TensorFlow would manage automatically behind the scenes, given all the devices (i.e. TPU cores) to train the model on.
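In case it clarifies what I'm after, here is my best guess at the intended usage: one ThreadState per TPU core, with id set to the core's index (threadStates is just my own name, and I'm not sure this is how id is meant to be used):

// Guess: one ThreadState per TPU core, with id set to that core's index.
let threadStates = (0..<tpuDevices.count).map { index in
  ThreadState(model: model, optimizer: optimizer, id: index,
              devices: tpuDevices, useAutomaticMixedPrecision: false)
}

Is something like this the intended pattern, or does the API expect a single ThreadState?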
Looking forward to any help.
Regards