RDFT of SignalConv2D


Scott Ransom

May 17, 2022, 6:13:13 AM5/17/22
to tensorflow-compression
Hi,

I noticed that the trainable weights of a kernel when using SignalConv2D are stored in complex form (I think the output of performing an RDFT on the kernel). Is there a benefit to doing it in this manner as opposed to just using the weights in their original float32 form? 

Johannes Ballé

May 19, 2022, 4:21:19 AM5/19/22
to tensorflow-...@googlegroups.com
Hi Scott,

yes, reparameterizing the weights in RDFT space tends to improve the conditioning of the optimization problem, so it can be trained with a larger learning rate without becoming unstable. There's more analysis about this method in this paper: https://arxiv.org/abs/1802.00847

Best!
Johannes
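The idea Johannes describes can be illustrated with a minimal NumPy sketch (this is an assumption-laden illustration, not the actual tensorflow-compression implementation): the trainable parameter is the RDFT of the kernel, and the spatial-domain kernel is recovered with an inverse RDFT before each convolution, so gradient updates act on frequency-domain coefficients.

```python
import numpy as np

# Hypothetical sketch of RDFT reparameterization (names are illustrative,
# not from the tfc codebase).
rng = np.random.default_rng(0)
kernel = rng.standard_normal((5, 5)).astype(np.float32)  # spatial-domain kernel

# Trainable parameter: the RDFT of the kernel. It is complex, and only
# roughly half the spectrum is stored because real inputs have a
# Hermitian-symmetric DFT.
kernel_rdft = np.fft.rfft2(kernel)  # complex array of shape (5, 3)

# Forward pass: transform back to the spatial domain before convolving.
kernel_spatial = np.fft.irfft2(kernel_rdft, s=kernel.shape)

# The round trip is (numerically) lossless, so the layer computes the same
# convolution; only the parameterization seen by the optimizer changes.
assert np.allclose(kernel_spatial, kernel, atol=1e-5)
```

Because the RDFT is an invertible linear map, this changes the geometry of the loss surface for the optimizer without changing the function the layer can represent.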


Scott Ransom

May 20, 2022, 3:45:39 AM5/20/22
to tensorflow-compression
Thanks for the explanation :)