TPU Users
Community discussion and support for TPU users
Conversations: 1-10 of 10
John · 22/9/22
How to understand the padding rules on cloud TPU?
Hello, there. I want to ask about TPU padding rules, such as padding batch size to 128 and feature
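As background for the padding question above: XLA lays tensors out in 2-D tiles on TPU, and for float32 it typically pads the last dimension up to a multiple of 128 and the second-to-last up to a multiple of 8 (the exact multiples depend on dtype and layout). A rough sketch of the resulting shapes, with those illustrative tile sizes:

```python
def round_up(n, multiple):
    """Round n up to the nearest multiple."""
    return ((n + multiple - 1) // multiple) * multiple

def padded_shape(rows, cols, sublane=8, lane=128):
    """Approximate the shape XLA pads a 2-D float32 array to on TPU.

    The sublane/lane sizes here are illustrative defaults; the real
    values vary with dtype (e.g. bfloat16 tiles differently) and layout.
    """
    return round_up(rows, sublane), round_up(cols, lane)

# A (100, 10) array is padded to (104, 128): only about 7.5% of the
# padded buffer holds real data, which wastes memory and bandwidth.
print(padded_shape(100, 10))   # (104, 128)
```

This is why batch and feature dimensions that are already multiples of 8 and 128 tend to use the hardware more efficiently.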
Rohan Mahajan · 2 messages · 3/5/21
tpu distributed strategy
Hi Russell, Thanks for your response. I am not a google employee (only a google cloud user) so can't
Faisal · 27/11/20
Matrix.Vector, Vector.Vector multiply, then max of a vector
Firstly I apologize if this is a very basic question, but I am very new to TPU/GPU, so asking this. I
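The three operations this question names (matrix–vector product, vector–vector dot product, max of a vector) can be sketched in plain Python; on a TPU one would express the same computation with TensorFlow or JAX ops so the compiler can map it onto the matrix unit:

```python
def matvec(M, v):
    """Matrix–vector product: each output element is a row·v dot product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(a, b):
    """Vector–vector (inner) product."""
    return sum(x * y for x, y in zip(a, b))

M = [[1, 2], [3, 4], [5, 6]]
v = [10, 1]
print(matvec(M, v))       # [12, 34, 56]
print(dot(v, v))          # 101
print(max(matvec(M, v)))  # 56
```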
Gil Motta, Russell Power · 3 messages · 16/11/20
TPU vs GPU Loss is bigger on TPU
Hi Russell, Thanks for the suggestion but I have no idea where to look for bfloat16 for the final
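Context for the bfloat16 suggestion in this thread: TPUs compute matmuls in bfloat16 by default, which keeps float32's 8 exponent bits but stores only 7 mantissa bits, so loss values can differ slightly from a float32 GPU run. A pure-Python illustration that simulates bfloat16 by truncating the low mantissa bits (real hardware rounds to nearest even, so this slightly overstates the error):

```python
import struct

def to_bfloat16(x):
    """Simulate bfloat16 by keeping only the top 16 bits of a float32.

    The range of representable values is unchanged (same exponent
    bits), but precision drops to roughly 3 decimal digits.
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

print(to_bfloat16(1.0))        # 1.0 (exactly representable)
print(to_bfloat16(3.141592))   # 3.140625 (precision lost)
```

Small per-element errors like this accumulate across large matmuls, which is one common reason TPU and GPU losses diverge at the third or fourth digit.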
Santosh Gupta · 28/7/20
Data fetch bottleneck at inference, but not during training for TPU
This is what my inference setup looks like autotune = tf.data.experimental.AUTOTUNE with strategy.
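The usual fix for this kind of inference bottleneck is to overlap host-side data loading with device compute, which is what `tf.data`'s `prefetch(AUTOTUNE)` does in a real pipeline. A minimal stand-alone sketch of the idea using a background thread (`slow_fetch` and the buffer size are illustrative stand-ins):

```python
import queue
import threading
import time

def slow_fetch(i):
    """Stand-in for host-side data loading (e.g. decoding one batch)."""
    time.sleep(0.01)
    return i

def prefetcher(n_batches, buffer_size=4):
    """Yield batches loaded by a background thread.

    Because fetching runs ahead of consumption, the consumer (the
    accelerator, in the TPU case) rarely waits on I/O; without this
    overlap, inference alternates between fetching and computing and
    the device sits idle during every fetch.
    """
    q = queue.Queue(maxsize=buffer_size)

    def worker():
        for i in range(n_batches):
            q.put(slow_fetch(i))
        q.put(None)  # sentinel: no more batches

    threading.Thread(target=worker, daemon=True).start()
    while (batch := q.get()) is not None:
        yield batch

print(list(prefetcher(5)))  # [0, 1, 2, 3, 4]
```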
G · 19/7/20
How can I see if the ASIC of the Coral Dev Board is working?
Is there a command that shows me the load of the TPU, in order to know if it does accelerate my
Santosh Gupta · 1/7/20
Having trouble getting models to run on TPU "NotImplementedError: TPUStrategy.run(fn, …) does not support pure eager execution…"
I am attempting to make Keras models that use Bert as a component of the overall model architecture,
dimple a shajahan ed16d009, Paige Bailey · 2 messages · 27/6/20
Chamfer Loss in tensorflow
You can file a feature request to have Chamfer Loss implemented in TF Addons ( +add...@tensorflow.org
Peter Peter · 10/9/19
TPU from PyCharm
Hello I have a big matrix with data (about 80GB) and I can't copy it to google drive. Is possible
nwo...@ebay.com, …, Jonathan Hseu · 9 messages · 13/8/18
TPU pre-trained model with bfloat16 transformation to traditional tensorflow floating point
Hey Kyle, Additional things that I mentioned during the call, but it'll be easier to explain them