Again, I am stuck on the .numpy() problem


ALI Q SAEED

Nov 29, 2021, 11:29:47 AM
to Keras-users
Hi there,
I am trying to discretize every feature map generated by a conv layer; however, the Discretization layer expects float values for its boundaries, not tensors. Because of that, when I use .numpy() to extract the values of the tensors (which are scalars) so they can be fed as floats to the bin_boundaries argument, I get the following error:

AttributeError: 'KerasTensor' object has no attribute 'numpy'

and if I feed the scalar tensors directly to the Discretization layer without using .numpy(), I get the following error:

TypeError: Expected float for argument 'boundaries' not <KerasTensor: shape=() dtype=float32 (created by layer 'tf.math.divide_8')>.

In my code, I used a Lambda layer with a custom function to slice the conv layer output and extract the feature maps one by one, apply Discretization to each, and then recombine the discretized feature maps into a single tensor to be returned.

# My custom function
def custom_layer(conv_layer):                                      # shape (-1, 32, 48, 48)
    discretized_maps = []
    for k in range(conv_layer.shape[1]):                           # loop over the 32 feature maps
        sliced_tensor = conv_layer[:, k, :, :]                     # one (48, 48) map per sample
        bin1 = tf.reduce_max(sliced_tensor)                        # maximum of the map
        bin2 = tf.reduce_min(sliced_tensor)                        # minimum of the map
        bin3 = tf.math.divide(tf.math.subtract(bin1, bin2), 2)     # half the range
        layer = Discretization(bin_boundaries=[bin2, bin3, bin1])  # boundaries must be ascending
        sliced_tensor = layer(sliced_tensor)                       # Here is the problem
        discretized_maps.append(sliced_tensor)

    # ---- combining updated feature maps into a new_conv_layer ---- #
    new_conv_layer = tf.stack(discretized_maps, axis=1)
    return new_conv_layer

# model building
inputs = tf.keras.layers.Input(shape=(1, 48, 48))
conv1 = tf.keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu',
                               data_format='channels_first')(inputs)  # channels first, to match (-1, 32, 48, 48)
lambda_layer = tf.keras.layers.Lambda(custom_layer)(conv1)


Note: in the above code I fed the bins (which are scalars) directly to the function without using .numpy().

My questions:
1. Is there any way to use .numpy() inside a custom function called by a Lambda layer?
2. Is it possible to use the Discretization layer in such a way? If not, is it possible to mimic its functionality using tf ops?
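For question 2, one idea I can think of is a sketch with pure tf ops: tf.searchsorted accepts tensor boundaries, so the data-dependent min/mid/max values can stay in the graph. (I use the midpoint (max + min)/2 instead of half the range, so the middle boundary always lies between the min and max; with side='right' the bucket indices should match Discretization's half-open buckets. Untested inside a full model.)

```python
import tensorflow as tf

def bucketize_dynamic(fmap):
    """Bucketize one 2D feature map against boundaries computed from the
    map itself (min, midpoint, max), using only tf ops."""
    hi = tf.reduce_max(fmap)
    lo = tf.reduce_min(fmap)
    mid = (hi + lo) / 2.0                   # midpoint of the value range
    boundaries = tf.stack([lo, mid, hi])    # searchsorted needs ascending order
    flat = tf.reshape(fmap, [-1])
    # side='right' gives half-open [b_i, b_{i+1}) buckets, like Discretization
    ids = tf.searchsorted(boundaries, flat, side='right')
    return tf.reshape(ids, tf.shape(fmap))
```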
 
Any help is highly appreciated.
Thanks 

Matias Valdenegro

Nov 29, 2021, 12:39:01 PM
to keras...@googlegroups.com

What is this Discretization function exactly? If it is implemented in numpy, you cannot use it within a Keras layer, as gradients cannot be propagated through it. Also consider that the function has to be differentiable.



ALI Q SAEED

Nov 29, 2021, 3:20:43 PM
to Keras-users
Thanks for your reply. 
Discretization is a preprocessing layer which buckets continuous features by ranges (reference). It accepts any tf.Tensor or tf.RaggedTensor of dimension 2 or higher as input.

According to the documentation on the TensorFlow webpage, data preprocessing can be done either before the model or inside the model.
I am confused about how I could use this layer inside a model (as the documentation suggests) when I cannot convert a tensor to numpy during model construction in order to feed float boundaries to the layer.

Thanks

Matias Valdenegro

Nov 29, 2021, 3:52:28 PM
to keras...@googlegroups.com

Good. The Discretization layer does not support bin boundaries that are symbolic tensors; they need to be fixed floating-point values. You are better off making them a hyper-parameter, not a value that depends on the data.
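For example, with boundaries fixed up front as plain floats (hyper-parameters), the layer builds without complaint, because nothing in its configuration depends on a symbolic tensor (a minimal sketch, using the TF 2.6+ bin_boundaries API):

```python
import tensorflow as tf

# Boundaries are plain Python floats chosen ahead of time, so the layer
# can be constructed at model-building time like any other layer.
disc = tf.keras.layers.Discretization(bin_boundaries=[0.0, 0.5, 1.0])

x = tf.constant([[0.1, 0.6], [-0.2, 1.5]])
buckets = disc(x)   # bucket 0 is below 0.0, bucket 1 is [0.0, 0.5), etc.
```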



Lance Norskog

Nov 29, 2021, 5:04:34 PM
to ALI Q SAEED, Keras-users
Aha!

The Discretization layer requires the bins to be constant values, not tensors.

You are trying to solve a specific problem with a generalized solution. To achieve the discretization you want, you can normalize (standardize) the feature map values to the range 0 to 1, then discretize with fixed bins [0.0, 0.5, 1.0]. If the activation for the feature maps is 'relu', then you know the minimum value is 0.

It should be acceptable for the feature maps to be rescaled in this operation. Deep learning networks are very resilient to linear rescaling.
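A minimal sketch of that idea (assuming a ReLU feature map, so the minimum is 0; tf.math.divide_no_nan guards against an all-zero map):

```python
import tensorflow as tf

def normalize_then_discretize(fmap):
    """Rescale a ReLU feature map to [0, 1] by its own maximum, then bucket
    it with fixed float boundaries, so Discretization builds in-graph."""
    # ReLU output has minimum 0, so dividing by the max rescales to [0, 1];
    # divide_no_nan returns 0 instead of NaN when the whole map is zero.
    scaled = tf.math.divide_no_nan(fmap, tf.reduce_max(fmap))
    return tf.keras.layers.Discretization(bin_boundaries=[0.0, 0.5, 1.0])(scaled)
```

Since only fixed floats reach bin_boundaries, the tensor-vs-float error from the original post never arises.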

Cheers,

Lance





--
Lance Norskog
lance....@gmail.com
Redwood City, CA

ALI Q SAEED

Nov 30, 2021, 3:30:20 AM
to Keras-users
Thanks Lance and Matias for your replies and help.
God bless you guys. 
