Runtime kernel registration


t kevin

May 11, 2021, 2:43:16 AM
to TensorFlow Developers
Dear Developers,

I have a question about

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/registration/registration.h#L26


```
// Note that there are two sides to 'registration':
// - Definition (compile-time): making op and kernel definitions _available_.
// - Usage (run-time): adding particular (available) definitions of ops and
//   kernels to the global OpRegistry / KernelRegistry, to be found when
//   constructing and executing graphs.
//
// Currently, definition and usage happen to be coupled together: all
// 'available' definitions (from the REGISTER_* macros) are added to the global
// registries on startup / library load.
```

For now we need a mechanism to support flexible kernel registration, for debugging and hardware-support purposes. That is, we want to compile in all the kernels we have, to avoid the complexity of dynamically dlopen-ing shared objects, and then run only a subset of them at runtime, controlled by, say, a filter function or an environment variable.

The existing "SELECTIVE_REGISTRATION" mechanism doesn't work here, since it requires a compile-time constant as the op/kernel filter.

My question is: do you think it is reasonable and necessary to change the situation where "definition and usage happen to be coupled together", and add a runtime kernel registration mechanism?

Thanks
Kevin

