TypeError: Cannot convert 0.0 to EagerTensor of dtype int32


Yegane Aghamohammadi

Apr 20, 2020, 1:08:09 PM
to Discuss
I am new to TensorFlow and I want to build a model with a custom weight initializer. I want to initialize the weights with random int32 values, so I defined a function that produces random numbers to use as the initializer of the weights. The function itself is OK, but when I use it as the kernel_initializer of a layer, I get this error:

TypeError: Cannot convert 0.0 to EagerTensor of dtype int32

My code is below:


import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, MaxPooling2D, AveragePooling2D, Softmax
import tensorflow.keras.backend as K

model = Sequential()

model.add(Conv2D(6,kernel_size=5,activation='relu',input_shape=(32,32,1),name='Conv1'))


def my_init(shape, dtype=None):
    return K.random_normal(shape, dtype=tf.int32)

model.add(Dense(64, kernel_initializer=my_init))

I also tried another function in which I used NumPy arrays. That one does not raise an error, but when I looked at the weights of the layers, their dtype is float32, not int32. My problem is that I need to feed in and get back int32 weights.

Any help would be appreciated.

Paul Pauls

Apr 22, 2020, 3:59:31 AM
to Discuss
The initializer function you have defined is not valid, as you cannot request random integer values from a normal distribution. It is not clearly defined how to convert the floats returned by a random normal distribution to integers, which is why TensorFlow is not even attempting it and throws an error instead. If, for the purpose of the exercise, you would like to create a random matrix of integers, use tf.random.uniform, as in:
def create_int_matrix(shape, minval, maxval):
    return tf.random.uniform(shape, minval=minval, maxval=maxval, dtype=tf.int32)
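
For example, calling it like this (just a quick illustrative check with arbitrary values) gives you an int32 tensor:

m = create_int_matrix((3, 3), 0, 10)   # 3x3 matrix of random ints in [0, 10)
print(m.dtype)                         # int32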

Furthermore, now that you know how to create a random integer matrix in TensorFlow, you still can't use it as the initializer for your Dense layers. Dense layers only support floating-point dtypes.
If you insist on using a Dense layer with an integer dtype, you need to create your own custom Dense layer and your own custom optimizer for that layer, as outlined in the great TensorFlow documentation (see tensorflow.org -> Learn).
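
To give you a rough idea, a custom layer with integer weights could look something like the sketch below. This is just my own illustration (the class name IntDense and the value range are made up), and I mark the kernel as non-trainable because the built-in gradient-based optimizers cannot update integer variables:

import tensorflow as tf

class IntDense(tf.keras.layers.Layer):
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # Integer kernel stored as a plain non-trainable variable;
        # tf.keras optimizers expect float variables for training.
        self.kernel = tf.Variable(
            tf.random.uniform((int(input_shape[-1]), self.units),
                              minval=-3, maxval=3, dtype=tf.int32),
            trainable=False, name='kernel')

    def call(self, inputs):
        # Cast the integer kernel to the input dtype so matmul works with float inputs.
        return tf.matmul(inputs, tf.cast(self.kernel, inputs.dtype))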

Personally I would highly recommend that you revise your approach to the problem and use the predefined TensorFlow ecosystem rather than building everything from scratch, as this is usually much easier.
For example, if your requirement of using integers in the Dense layers stems from the specification that the predicted output has to be integers, why not add a tf.round at the end of your model call?
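
Something like this (just a sketch; model and x stand in for your own model and input):

preds = model(x)                                # regular float32 predictions
int_preds = tf.cast(tf.round(preds), tf.int32)  # round and cast only the output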