I'd like to be able to add methods to ops in my dialect. I can do this today by defining a custom OpTrait with my method and wiring it up via tblgen's NativeOpTrait (a sketch follows below). It's great because I can compose the traits really nicely, but it's limited: there's no way to dynamically detect whether a generic op has a trait, and no way to then dispatch to the method.

This was working, but now I'm finding I need type-erased access to these methods, similar to what AbstractOperation provides: I'm writing serialization code that wants to generically process Operation* pointers, and I'm stuck because there's no vtable, no isa, etc. to make that work.

I found Nicolas' code here: It seems to do exactly what I want, but it's a decent amount of boilerplate that I'd like to avoid copy/pasting. I remember some mention of another approach at a previous team meeting, but I forgot what the outcome was.
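For reference, the trait pattern I mean looks roughly like this; a minimal sketch, with the trait name, the method, and the ODS identifiers all invented for illustration:

#include "mlir/IR/OpDefinition.h"

namespace mlir {
namespace OpTrait {

// A trait that injects a method into every op that declares it.
template <typename ConcreteType>
class SerializableOpcode
    : public TraitBase<ConcreteType, SerializableOpcode> {
public:
  // Every op listing the trait gets this method for free, and traits
  // like this compose freely with other traits on the same op.
  unsigned getSerializedOpcode() {
    // ... derive an opcode from this->getOperation() ...
    return 0;
  }
};

} // namespace OpTrait
} // namespace mlir

// ODS side, wiring the C++ trait into an op definition:
//
//   def SerializableOpcode : NativeOpTrait<"SerializableOpcode">;
//
//   def MyDialect_FooOp : Op<MyDialect, "foo", [SerializableOpcode]> { ... }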
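To make the pain point concrete, here's the kind of hand-written dispatch a generic serializer over Operation* ends up with today; a sketch only, where mydialect::FooOp and mydialect::BarOp are placeholder ops assumed to carry the trait above:

#include "mlir/IR/Operation.h"
#include "mlir/Support/LogicalResult.h"

// Per-op dispatch that has to be copy/pasted and kept in sync with the
// full list of ops carrying the trait.
mlir::LogicalResult serializeOp(mlir::Operation *op) {
  if (auto foo = llvm::dyn_cast<mydialect::FooOp>(op)) {
    // The trait method is only reachable once the concrete type is known.
    unsigned opcode = foo.getSerializedOpcode();
    // ... emit opcode, operands, results ...
    (void)opcode;
    return mlir::success();
  }
  if (auto bar = llvm::dyn_cast<mydialect::BarOp>(op)) {
    unsigned opcode = bar.getSerializedOpcode();
    // ... emit opcode, operands, results ...
    (void)opcode;
    return mlir::success();
  }
  // No vtable/isa on the trait itself, so unknown ops just fall through.
  return mlir::failure();
}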
Ahh yeah, that sounds similar to what I was trying to do at first. To keep the serialization code clean and the definitions next to the opdefs, I wanted to add traits denoting how each op is serialized. I tried using extraClassDeclaration to add methods where required, but naturally there's no way to call those dynamically (sketch below). This may be relevant to SPIR-V and other formats that would be nice to serialize directly from MLIR, without needing a separate serialization table/system.
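For what it's worth, the extraClassDeclaration attempt looked roughly like this; the op, the dialect, and the BinaryWriter type are all invented for illustration:

// ODS sketch: extraClassDeclaration splices C++ straight into the
// generated op class, so the method exists only on that concrete class.
//
//   def MyDialect_ConstOp : Op<MyDialect, "const"> {
//     let extraClassDeclaration = [{
//       void serialize(BinaryWriter &writer);  // hypothetical hook
//     }];
//   }
//
// Generic code must still recover the concrete type before calling it:
#include "mlir/IR/Operation.h"

void trySerialize(mlir::Operation *op, BinaryWriter &writer) {
  if (auto constOp = llvm::dyn_cast<mydialect::ConstOp>(op))
    constOp.serialize(writer);  // fine: static type known here
  // ...but there is no dynamic equivalent callable on a bare Operation*.
}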