- the affine grayboxes could have otherwise been processed in parallel. In summary, there can be an AffineFunctionPass that needs to only implement a runOnOp(op) where op is either a FuncOp or an AffineGrayBoxOp. Some of the current polyhedral passes/utilities can continue using walk (e.g. [normalizeMemRefs](https://github.com/tensorflow/mlir/blob/331c663bd2735699267abcc850897aeaea8433eb/include/mlir/Transforms/Utils.h#L89)), while many will just have to be changed to use walkAffine.

* **Simplification / Canonicalization**

There has to be a simplification that drops unused block arguments from regions of ops that aren't function ops (since this is easy for non-function ops), in case this isn't already in place.
This will allow removal of dead memrefs that could otherwise be blocked by operand uses in affine.graybox ops with the corresponding region arguments not really having any uses inside. Given this, no additional bookkeeping is needed as a result of having memrefs as explicit operands for gray boxes. [MemRefCastFold](https://github.com/tensorflow/mlir/blob/ef77ad99a621985aeca1df94168efc9489de95b6/lib/Dialect/StandardOps/Ops.cpp#L228) is the only canonicalization pattern that the *affine.graybox* has to implement, and this is easily/cleanly done (by replacing the argument and its uses with a memref of a different type). Overall, having memrefs as explicit arguments is a good middle ground to make it easier to let standard SSA passes / scalar optimizations / canonicalizations work unhindered in conjunction with polyhedral passes, and with the latter not worrying about explicitly checking for escaping/hidden memref accesses. More discussion a little below in the next section.

* There are situations/utilities where one can consistently perform rewriting/transformation/analysis cutting across grayboxes. One example is [normalizeMemRefs](https://github.com/tensorflow/mlir/blob/331c663bd2735699267abcc850897aeaea8433eb/include/mlir/Transforms/Utils.h#L89), which turns all non-identity layout maps into identity ones. Having memrefs explicitly captured is a hindrance here, but mlir::replaceAllMemrefUsesWith can be extended to transparently perform the replacement inside any affine grayboxes encountered if the caller says so.
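A hedged before/after sketch of the interplay described above (the textual syntax for *affine.graybox* is illustrative, not final): dropping an unused region argument removes the operand use, which in turn exposes the memref as dead.

```mlir
// Before: %m has no uses inside the graybox's region, but the
// non-dereferencing operand use on the affine.graybox op keeps %m alive.
%m = alloc() : memref<128xf32>
affine.graybox [%m] : (memref<128xf32>) {
^bb0(%rm : memref<128xf32>):
  // ... no uses of %rm here ...
  return
}

// After the canonicalization drops the unused block argument (and the
// corresponding operand), %m has no remaining uses and the alloc can be
// deleted by standard dead-code elimination.
affine.graybox [] : () {
^bb0:
  // ...
  return
}
```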
In other cases, like scalar replacement, memref packing / explicit copying, DMA generation, and pipelining of DMAs, transformations are supposed to be blocked by those boundaries because the accesses inside the graybox can't be meaningfully analyzed in the context of the surrounding code. As such, the memrefs there are treated as escaping / non-dereferencing.

* In the presence of affine constructs, the inliner can now simply inline functions by putting the callee inside an affine graybox, without having to worry about symbol restrictions.

* There has to be a mlir::getEnclosingAffineGrayBox(op) that returns the closest enclosing *affine.graybox* op, or null if it hits a function op.
## Other Benefits and Implications

1. The introduction of this op allows arbitrary control flow (a list of basic blocks with terminators) to be used within and mixed with affine.for/affine.if ops while staying in the same function. Such a list of blocks will be carried by an *affine.graybox* op whenever it's not at the top level.
2. Non-affine data accesses can now be represented through *affine.load*/*affine.store* without the need for outlining.
3. Symbol restrictions for affine constructs will no longer restrict inlining: any function can now be inlined into another by enclosing the just-inlined function body in a graybox.
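Benefit (1) can be pictured with a small hypothetical sketch; the *affine.graybox* syntax and the surrounding op choices here are illustrative, not the final form:

```mlir
func @find(%A : memref<100xi32>, %key : i32) {
  affine.for %i = 0 to 100 {
    %v = affine.load %A[%i] : memref<100xi32>
    // Arbitrary control flow (multiple blocks with terminators) mixed
    // into an affine.for without outlining to a separate function;
    // scalars like %v and %key are implicitly captured.
    affine.graybox [] : () {
      %match = cmpi "eq", %v, %key : i32
      cond_br %match, ^bb1, ^bb2
    ^bb1:
      // ... act on a match ...
      return
    ^bb2:
      return
    }
  }
  return
}
```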
Furthermore, a memref anyway never folds to a constant. The only canonicalization related to a memref currently is a memref_cast fold, which can easily be extended to fold with an *affine.graybox* op (update its argument and all uses inside). As such, there aren't any cases where the argument list has to be shrunk/grown from the outside. And for the cases where the types have to be updated, it's straightforward since there is effectively only a single use for that op instance (it's not declarative or "callable" from elsewhere like a FuncOp).
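A sketch of the memref_cast fold being described, with the cast folding into the graybox by updating its argument type and the uses inside (graybox syntax illustrative; memref_cast shown in the standard-dialect syntax of the time):

```mlir
// Before: a cast to a less precise (dynamic) shape feeds the graybox.
%d = memref_cast %s : memref<42xf32> to memref<?xf32>
affine.graybox [%d] : (memref<?xf32>) {
^bb0(%m : memref<?xf32>):
  %v = affine.load %m[0] : memref<?xf32>
  return
}

// After folding: the graybox takes the cast's source directly, and the
// region argument and its uses are rewritten to the more precise type.
affine.graybox [%s] : (memref<42xf32>) {
^bb0(%m : memref<42xf32>):
  %v = affine.load %m[0] : memref<42xf32>
  return
}
```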
Another design point could be requiring symbols associated with the affine constructs used in a graybox, but defined outside, to be explicitly listed as operands/arguments, in addition to the memrefs used. This makes isValidSymbol really simple: one won't need isValidSymbol(Value \*v, AffineGrayBoxOp op). Anything that is at the top level of an *affine.graybox* op or is one of its region arguments will become a valid symbol. However, other than this, it doesn't simplify anything else. Instead, it adds/duplicates bookkeeping with respect to propagation of constants, similar, to some extent, to the argument rewriting done for interprocedural constant propagation. Similarly, the other extreme of requiring everything from the outside used in an *affine.graybox* to be explicitly listed as its operands and region arguments is even worse on this front.

In summary, it appears that the requirement to explicitly capture only the memrefs used inside an affine.graybox's region is a good middle ground and better than the other options.
--
You received this message because you are subscribed to the Google Groups "MLIR" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mlir+uns...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/mlir/b229eeaf-2a53-4df3-a690-2ef7f0946232%40tensorflow.org.
Hi Mehdi,

Thanks very much for your quick and detailed response! I've updated and incorporated some of the obvious comments/fixes, and provided better explanations at other places. I'll wait for other feedback before responding inline in detail. In short, I want to emphasize that although the op is not marked "IsolatedFromAbove", it is *effectively isolated from above for polyhedral/affine purposes*, since walkAffine used by such passes treats affine.graybox ops opaquely, and because all memrefs from the outside used inside a graybox op have to appear as its operands and arguments.
(Reg. the FunctionLike op trait, nothing crucial here: I just wanted to inherit most of its accessors due to the properties it shares with FuncOp. What you mention below as the defining property of FunctionLike doesn't appear in the bulleted list of its doc comment, and it isn't obvious from reading its methods. It would be great to either document this in g3doc or augment the doc comment. I was interested in inheriting the accessors stemming from the 2nd, 4th, and 5th properties in the doc comment list on FunctionLike.)
On Sat, Sep 28, 2019 at 8:39 PM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:

Hi Mehdi,

Thanks very much for your quick and detailed response! I've updated and incorporated some of the obvious comments/fixes, and provided better explanations at other places. I'll wait for other feedback before responding inline in detail. In short, I want to emphasize that although the op is not marked "IsolatedFromAbove", it is *effectively isolated from above for polyhedral/affine purposes*, since walkAffine used by such passes treats affine.graybox ops opaquely, and because all memrefs from the outside used inside a graybox op have to appear as its operands and arguments.

I'd still want to see a better motivation for this. It isn't clear to me at the moment that not having explicit capture of memrefs would be a real issue in practice (I don't see my previous comment addressed from this point of view: you're duplicating information that is trivially gathered in the IR).
The bullet points come as a whole and you can't cherry-pick, which is why I asked exactly what you're after, in order to refactor it / split it to expose what makes sense. The first bullet point, "Ops can be used with SymbolTable in the parent Op and have names", and the 3rd bullet point, "the absence of a region corresponds to an external function", seem problematic to me.
On Sunday, September 29, 2019 at 9:43:04 AM UTC+5:30, Mehdi AMINI wrote:

On Sat, Sep 28, 2019 at 8:39 PM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:

Hi Mehdi,

Thanks very much for your quick and detailed response! I've updated and incorporated some of the obvious comments/fixes, and provided better explanations at other places. I'll wait for other feedback before responding inline in detail. In short, I want to emphasize that although the op is not marked "IsolatedFromAbove", it is *effectively isolated from above for polyhedral/affine purposes*, since walkAffine used by such passes treats affine.graybox ops opaquely, and because all memrefs from the outside used inside a graybox op have to appear as its operands and arguments.

I'd still want to see a better motivation for this. It isn't clear to me at the moment that not having explicit capture of memrefs would be a real issue in practice (I don't see my previous comment addressed from this point of view: you're duplicating information that is trivially gathered in the IR).

This is exactly what the first two paragraphs under "Rationale and Design Alternatives" discuss in detail. They are basically arguing that:

(a) Having to inspect, scan, and gather the memrefs accessed within grayboxes from above in all affine passes is not worth the special casing needed just for the affine.graybox op. As an example, consider a pass that's computing memref regions and generating packing code for a memref. The scan of uses that currently happens via methods like getUses() and replaceAllMemRefUsesWith() will all just work transparently and do the work: the non-dereferencing uses of that memref on an affine.graybox op just make things like double buffering, data copy generation, etc. bail out on those (just because it isn't polyhedrally analyzable unless the graybox can be eliminated and you get a larger encompassing affine region) -- the same way they currently bail out on any call ops taking memrefs as arguments or return ops returning memrefs. The same is true for memref dependence analysis: there isn't a way to represent dependences between an affine access and another one that is inside another graybox dominated by it; for all these purposes, the latter access is like one happening on a memref that has escaped at the graybox op boundary. With explicit capture of memrefs, an affine graybox gets treated like any other op (e.g. like a 'call' op that takes a memref as an operand), and so an affine pass doesn't even have to know that the affine.graybox exists; one won't even have to change a line of code in any of the passes: that's the upshot! walkAffine simply won't walk their regions, and "operand uses" consistently have all that an affine pass needs for *every* op. The isolation of a graybox's region from polyhedral passes running above it is necessary (this is why it's not a white box), and to do so cleanly, we need to explicitly capture the memrefs used inside as operand uses on the graybox op.

(b) The bookkeeping needed to maintain the memref arguments is trivial, and there is barely any benefit to having them implicitly captured (unlike scalar SSA values that do not refer to memory, where the tradeoffs are different and so we implicitly capture), and it's definitely not worth the trouble of the special casing/handling described in (a).

I can augment the rationale in those paragraphs in case these weren't already clear.
Hi Uday,

Thanks! This is great overall.
On Sat, Sep 28, 2019 at 8:28 AM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:
Hi all,
1. The *affine.graybox* op has zero results, zero or more operands, and holds a single region, which is a list of zero or more blocks. The op's region can have zero or more arguments, each of which can only be a *memref*. The operands bind 1:1 to its region's arguments. The op can't use any memrefs defined outside of it, but can use any other SSA values that dominate it. Its region's blocks can have terminators the same way as current MLIR functions (FuncOp) can. Control from any *return* ops in its region returns to right after the *affine.graybox* op. The op will have the trait *FunctionLike*.
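A minimal sketch of this description; the textual syntax for *affine.graybox* shown here is illustrative, not final:

```mlir
func @f(%M : memref<?xf32>, %x : f32) {
  // %M is explicitly captured: the operand binds 1:1 to the region
  // argument %rM. Non-memref SSA values such as %x that dominate the op
  // can be used inside without being captured.
  affine.graybox [%M] : (memref<?xf32>) {
  ^bb0(%rM : memref<?xf32>):
    %c0 = constant 0 : index
    %v = affine.load %rM[%c0] : memref<?xf32>
    %s = addf %v, %x : f32
    // Control from this return transfers to right after the
    // affine.graybox op.
    return
  }
  return
}
```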
I'm not convinced by the "FunctionLike" trait: in general, a function-like op defines a block of code that can be referenced by a symbol later. We can likely express the kind of property you want, possibly with another trait. What are you trying to achieve here?
* **Simplification / Canonicalization**

There has to be a simplification that drops unused block arguments from regions of ops that aren't function ops (since this is easy for non-function ops), in case this isn't already in place.

This would have to be a canonicalization on the op itself (here, a canonicalization on the affine.graybox). (We can implement this as a trait, but that is just an implementation detail.)
This will allow removal of dead memrefs that could otherwise be blocked by operand uses in affine.graybox ops with the corresponding region arguments not really having any uses inside. Given this, no additional bookkeeping is needed as a result of having memrefs as explicit operands for gray boxes. [MemRefCastFold](https://github.com/tensorflow/mlir/blob/ef77ad99a621985aeca1df94168efc9489de95b6/lib/Dialect/StandardOps/Ops.cpp#L228) is the only canonicalization pattern that the *affine.graybox* has to implement, and this is easily/cleanly done (by replacing the argument and its uses with a memref of a different type). Overall, having memrefs as explicit arguments is a good middle ground to make it easier to let standard SSA passes / scalar optimizations / canonicalizations work unhindered in conjunction with polyhedral passes, and with the latter not worrying about explicitly checking for escaping/hidden memref accesses. More discussion a little below in the next section.

* There are situations/utilities where one can consistently perform rewriting/transformation/analysis cutting across grayboxes. One example is [normalizeMemRefs](https://github.com/tensorflow/mlir/blob/331c663bd2735699267abcc850897aeaea8433eb/include/mlir/Transforms/Utils.h#L89), which turns all non-identity layout maps into identity ones. Having memrefs explicitly captured is a hindrance here, but mlir::replaceAllMemrefUsesWith can be extended to transparently perform the replacement inside any affine grayboxes encountered if the caller says so.

It seems to me that this would have to be done with some sort of op interface to avoid a layering violation between core utilities (like replaceAllMemrefUsesWith) and a dialect-specific operation (affine.graybox).
In particular, the "funcop" is just another operation: my take was already that any region held by a non-affine operation should be treated the same as one held by a func op from the perspective of affine. Since the removal of MLFunc and the use of generic regions, there was really nothing specific left about FuncOp as far as I can tell, but the doc was blindly updated to replace MLFunction with Function.

This is something that we should clarify with respect to symbols: either we always force these to be defined inside an affine region (a region attached to an affine operation), in which case you have to use a graybox to materialize symbols through explicit capture, or we should consider that any SSA value defined in a non-affine region can be used as a symbol inside an affine region.

I'm referring to this section: https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md#restrictions-on-dimensions-and-symbols, which, after defining what an affine region is, I would update along the lines of:

> A symbolic identifier can be bound to an SSA value that is either:
> - defined in a non-affine region,
> - defined in a region separated from the current region by an affine.graybox operation in the nesting structure (the value is "captured", possibly implicitly, by an affine.graybox),
> - the result of a constant operation,
> - or the result of an affine.apply operation that recursively takes as arguments any symbolic identifiers.
## Rationale and Design Alternatives - What to Capture as Arguments?

An alternative design is to allow all SSA values, including memrefs, to be implicitly captured, i.e., zero operands and arguments for the op. This is however inconvenient for all polyhedral transformations and analyses, which will have to check and scan any affine.grayboxes encountered to see if any memrefs are being used therein, and if so, they would most likely treat them as if the memrefs were being passed to a function call. This would be the case with dependence analysis, memref region computation/analysis, fusion, explicit copying/packing, DMA generation, pipelining, scalar replacement, and anything depending on the former analyses (like tiling, unroll-and-jam). Having memrefs as explicit operands/arguments is a good middle ground to make it easier to let standard SSA passes / scalar optimization / canonicalization work unhindered in conjunction with polyhedral passes, and with the latter not worrying about explicitly checking for escaping/hidden memref accesses.

It isn't clear to me why this would be costly though? This is something that can be cached easily during analysis/transformation in a map, and is trivial to compute without any ambiguity or loss of precision. The information is encoded structurally in the IR even with implicit capture. The explicit capture looks like "caching the result of an analysis in the IR itself" to me right now.
Furthermore, a memref anyway never folds to a constant. The only canonicalization related to a memref currently is a memref_cast fold, which can easily be extended to fold with an *affine.graybox* op (update its argument and all uses inside). As such, there aren't any cases where the argument list has to be shrunk/grown from the outside. And for the cases where the types have to be updated, it's straightforward since there is effectively only a single use for that op instance (it's not declarative or "callable" from elsewhere like a FuncOp).

Because this explicit capture is a pure "passthrough" (or am I missing something?), it acts as an inconvenience in any patterns though. You can't just blindly apply patterns or follow use-def chains across the affine.graybox boundary. This is the kind of restriction when doing explicit capture that should be more motivated. Our (short and recent) experience over the last few months developing other dialects is that so far implicit capture is in general more convenient, and explicit capture should be motivated by the need to block otherwise problematic canonicalization/optimization across the region boundary.
-- Alex
```mlir
func @foo(%m : memref<...>) {
  affine.for %i = 0 to 42 {
    %0 = "crazy.op"() ({
      %1 = "crazy.load"(%m) : (memref<...>) -> index
      %2 = "crazy.more_load"(%m, %1) : (memref<...>, index) -> f32
      "crazy.yield"(%2) : (f32) -> ()
    }) : () -> (f32)
  }
}
```

where we have no idea about the effect of "crazy.op" or any of its nested ops on the memrefs it is allowed to
On Sunday, September 29, 2019 at 1:09:59 AM UTC+5:30, Mehdi AMINI wrote:
This is something that we should clarify with respect to symbols: either we always force these to be defined inside an affine region (a region attached to an affine operation), in which case you have to use a graybox to materialize symbols through explicit capture, or we should consider that any SSA value defined in a non-affine region can be used as a symbol inside an affine region. I'm referring to this section: https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md#restrictions-on-dimensions-and-symbols, which, after defining what an affine region is, I would update along the lines of:

> A symbolic identifier can be bound to an SSA value that is either:
> - defined in a non-affine region,
> - defined in a region separated from the current region by an affine.graybox operation in the nesting structure (the value is "captured", possibly implicitly, by an affine.graybox),
> - the result of a constant operation,
> - or the result of an affine.apply operation that recursively takes as arguments any symbolic identifiers.

This sounds accurate to me except for some of the terminology. "Non-affine region" is confusing, because there isn't any such thing as a non-affine region! Every op is part of some affine region, either that of the closest enclosing affine.graybox op or that of the func op. The "affine region" term is usable only to demarcate such a region (anything above the closest enclosing affine.graybox, and anything inside the graybox ops that are in this region, isn't part of the region).
IIUC, your first bullet should just be replaced by:

- defined at the top level of a function or of an affine graybox op

(just like for a function, the top level of an affine graybox is the top part of its region's entry block)
Hi Uday,

Thanks for your answers, appreciate it.

On Mon, Sep 30, 2019 at 2:31 AM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:

On Sunday, September 29, 2019 at 1:09:59 AM UTC+5:30, Mehdi AMINI wrote:

This is something that we should clarify with respect to symbols: either we always force these to be defined inside an affine region (a region attached to an affine operation), in which case you have to use a graybox to materialize symbols through explicit capture, or we should consider that any SSA value defined in a non-affine region can be used as a symbol inside an affine region. I'm referring to this section: https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md#restrictions-on-dimensions-and-symbols, which, after defining what an affine region is, I would update along the lines of:

> A symbolic identifier can be bound to an SSA value that is either:
> - defined in a non-affine region,
> - defined in a region separated from the current region by an affine.graybox operation in the nesting structure (the value is "captured", possibly implicitly, by an affine.graybox),
> - the result of a constant operation,
> - or the result of an affine.apply operation that recursively takes as arguments any symbolic identifiers.

This sounds accurate to me except for some of the terminology. "Non-affine region" is confusing, because there isn't any such thing as a non-affine region! Every op is part of some affine region, either that of the closest enclosing affine.graybox op or that of the func op. The "affine region" term is usable only to demarcate such a region (anything above the closest enclosing affine.graybox, and anything inside the graybox ops that are in this region, isn't part of the region).

Sorry for being sloppy with terminology here :)

I had defined it in my first sentence above: "an affine region (a region attached to an affine operation)". My idea was that you could bind a symbol whenever the symbol is defined in a region that isn't attached to an affine operation.
For example
```mlir
spv.func @foo() {
  not_affine.async_launch() {
    %symbol = ...
    affine.for ... {
      // here %symbol can be bound as a symbol since it is defined in a
      // region not attached to an affine operation.
    }
  }
}
```

IIUC, your first bullet should just be replaced by:

- defined at the top level of a function or of an affine graybox op

(just like for a function, the top level of an affine graybox is the top part of its region's entry block)

I'm not comfortable referring to "function" here: we don't have "first class" functions anymore in MLIR (the SPIR-V dialect is using a different operation for representing functions, and GPU kernels would likely do the same). Any region attached to an operation that isn't affine should be able to be considered identically to a function body if it encloses an affine operation (cf. my example above). This is the language I was trying to get at with "non-affine region"; of course, I'm fine with any terminology that would represent the same concept :)

On Mon, Sep 30, 2019 at 9:16 AM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:
On Monday, September 30, 2019 at 9:16:00 PM UTC+5:30, Uday Bondhugula wrote:

Hi Alex,
On Monday, September 30, 2019 at 8:58:06 PM UTC+5:30, Alex Zinenko wrote:

```mlir
func @foo(%m : memref<...>) {
  affine.for %i = 0 to 42 {
    %0 = "crazy.op"() ({
      %1 = "crazy.load"(%m) : (memref<...>) -> index
      %2 = "crazy.more_load"(%m, %1) : (memref<...>, index) -> f32
      "crazy.yield"(%2) : (f32) -> ()
    }) : () -> (f32)
  }
}
```

where we have no idea about the effect of "crazy.op" or any of its nested ops on the memrefs it is allowed to

Code like this is not an issue at all and will just work correctly, both the way things are now and with the affine.graybox proposal. In the above example, the '%m' operand uses on the "crazy.*" ops will be seen as non-dereferencing uses of %m, and all affine utilities/passes will correctly bail out / act conservatively on those as far as that memref goes. In effect, the ops where you are using a memref as an operand (but not dereferencing it like in affine.load, affine.store, affine.dma_start/wait) are all also similar to, say, a call where %m is an operand. There is no need to separately/specially restrict what's inside affine.for or affine.if - they all get treated conservatively for the parts that have to be. However, if you want to do something special/advanced that is less conservative, we'll have to think about what that is depending on the use case.

~ Uday

capture implicitly. Without restricting the set of ops supported inside `affine.for`, we should treat all unknown ops conservatively. "affine.greybox" wouldn't be any different from "crazy.op" in this sense, so it does not

Only if affine.graybox explicitly captures the memref would it be no different from "crazy.op". The special-handling argument in the proposal is for the design point where an affine.graybox implicitly captures the memrefs used inside. In your case, %m is an operand on "crazy.load", and not an implicitly used memref in its region.

It isn't clear to me that we have the same reading of the code here: inside the affine.for you have "crazy.op", which is defining a new enclosing region, just like affine.graybox would. Inside this region we have a use of a memref in "crazy.load"; this is an implicit capture for "crazy.op" and does not seem different to me than if affine.graybox were allowed to implicitly capture memrefs. Can you help clarify here?
Thanks,
-- Mehdi
On Tuesday, October 1, 2019 at 8:43:07 AM UTC+5:30, Mehdi AMINI wrote:

I had defined it in my first sentence above: "an affine region (a region attached to an affine operation)".
My idea was that you could bind a symbol whenever the symbol is defined in a region that isn't attached to an affine operation. For example:

This doesn't really cover it all. E.g., in Example 2 and Example 3 of the RFC respectively, %pow and %v are valid symbols that are defined inside regions held by affine operations. They become symbols for the inner affine regions (for the nested affine.graybox ops), not for the one they are contained in.
Second, a symbol defined at the top level of an affine.graybox is also valid for that graybox's region (just like the current rule of the top level of a function op). I've added a terminology section to the RFC; we probably need another term instead of overloading 'region'.
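A small illustrative sketch of this rule (the *affine.graybox* syntax is illustrative, and %s here is a stand-in for the kind of symbol discussed above):

```mlir
func @f(%M : memref<?xf32>) {
  affine.graybox [%M] : (memref<?xf32>) {
  ^bb0(%rM : memref<?xf32>):
    // %s is defined at the top level of the graybox's region, so it is a
    // valid symbol for the affine constructs nested in this region.
    %s = dim %rM, 0 : memref<?xf32>
    affine.for %i = 0 to %s {
      %v = affine.load %rM[%i] : memref<?xf32>
    }
    return
  }
  return
}
```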
```mlir
spv.func @foo() {
  not_affine.async_launch() {
    %symbol = ...
    affine.for ... {
      // here %symbol can be bound as a symbol since it is defined in a
      // region not attached to an affine operation.
    }
  }
}
```

IIUC, your first bullet should just be replaced by:

- defined at the top level of a function or of an affine graybox op

(just like for a function, the top level of an affine graybox is the top part of its region's entry block)

I'm not comfortable referring to "function" here: we don't have "first class" functions anymore in MLIR (the SPIR-V dialect is using a different operation for representing functions).

By "function", I just meant the function op here ('FuncOp'): I just didn't want to use the class name.
Just responding to these parts since I agree with the rest.

> 3. Conservative analysis through implicit capture
> Affine passes would treat any unknown op conservatively. This means
> that
> an unknown op with regions that implicitly captures any values is
> treated by those passes as an opaque function call that takes all
> implicitly captured values as arguments. This means that the passes
> should compute the set of implicitly captured values for unknown ops.
Actually, the answer is "No" here - they should not! For the unknown ops, why would you want to compute the set of implicitly captured values when you are going to anyway walk through the op?!
The walkAffine method in the proposal walks through inner regions of *all* region-holding ops except the affine.graybox.
And if you don't want to block an inner traversal, there is neither a need to explicitly capture nor a need to compute on-the-fly! It'd just work as is with the walk. The question really isn't about "known" vs "unknown" op here, but about an "op within the current polyhedral symbol context" vs an "op outside the current polyhedral symbol context". affine.graybox has its own symbol context, and so it falls into the latter class. In your earlier example, "crazy.op" (irrespective of whether it's known or unknown) doesn't define a new polyhedral context and so there isn't a need to block a walk into it nor explicitly capture.
> 2. Stopping the pre-order traversal
> In the pre-order (aka going from the top) traversal of ops, one may want to ignore certain regions attached to the op, or treat them differently. This sounds like a reusable IR traversal pattern that can be parameterized by a condition functor used to analyze the op and decide whether to enter its regions. This does not have to be tied to greybox or affine.

Yes, there may be a future scenario where we don't want to traverse the inner regions of certain ops although they may not be affine.grayboxes and although they fall within the current region's symbol context (for efficiency or other reasons) - you'd have to compute summaries for them on the fly irrespective of whether they have an explicit capture on them, and this would in fact be the case for 'call' ops if you want to do something more advanced than being super-conservative. But I don't see why that affects the current design choices on affine.graybox - the unique thing about the latter is that it introduces a new symbol context or has a list of basic blocks with their terminators, so the control flow isn't affine in it. Please think about the affine.graybox as bringing in polyhedral information from another "domain" that doesn't compose with the polyhedral information of its surrounding affine region.
~ Uday
--
You received this message because you are subscribed to the Google Groups "MLIR" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mlir+uns...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/mlir/734a5ceb-93e0-43e8-a9a2-25de6ddf48c2%40tensorflow.org.
Hi Uday, thanks for the proposal!

I see 3 principal changes that `affine.greybox` brings:
1. the ability to bypass the restriction on symbol definition (i.e., you can insert an `affine.greybox` anywhere)
2. context sensitivity (i.e., it behaves like an encapsulated inlined function call)
3. the decision on what is implicitly captured vs. passed as an argument
This brings clear expressivity benefits, as your examples show. However, I believe there is also a case to be made about new analyses and transformations that can cross the `affine.greybox` boundary. I'd be curious to hear your thoughts on this topic: how does `affine.greybox` improve the situation, and what new analyses/transformations does it allow?
Regarding `affine.greybox` itself, in my opinion the real limiter is the restriction on symbol definition, and I think there may be a simpler way of addressing the problem. Consider a pair of ops to allow injecting/capturing symbols more freely, resembling:

```mlir
ssa-value `=` affine.bind_symbol ssa-value `:` index-type
affine.release_symbol ssa-value `:` index-type
```

In some sense, this could be thought of as an exact complement to `affine.greybox`:
1. the values specified are SSA values that are explicitly turned into symbols
2. there is no region/nesting involved

The implication for affine passes is that symbol validity is defined by dominance by `affine.bind_symbol` (and optionally postdominance by `affine.release_symbol`), plus the rules induced by special ops (e.g. dim). `affine.bind_symbol` / `affine.release_symbol` act as a natural boundary to affine transformations (with behavior varying on a case-by-case basis, as you mention for memref normalization and DMA). The part about Helpers, Utilities, and Passes seems to be mostly transparent (I have not thought deeply about those points but did not see particular points of concern). Inlining seems to work with minimal effort. The expressiveness benefits are the same in the 3 examples you highlight. Besides the simplicity of these 2 ops without regions, there is value in explicitly tracking symbol bindings. Consider your `pow` example:

```mlir
func @nest(%n : index) {
  %c2 = constant 2 : index
  affine.for %i = 0 to %n {
    affine.for %j = 0 to %n {
      %pow = call @powi(%c2, %j) : (index, index) -> index
      %pow_sym = affine.bind_symbol %pow : index
      affine.for %k = 0 to %pow_sym {
        affine.for %l = 0 to %n {
          ...
        }
      }
    }
  }
  return
}
```
It is trivial to compute that `i` and any `affine.bind_symbol` are in different program slices, and that `i` can be structurally stripmined-and-sunk below `k` (assuming it is legal and profitable). I don't see a particular issue with doing the same with `affine.greybox` on this example, except that analyses/transformations/walkers/etc. need to know about and work across the new special region you propose.

So, bottom line: what do you think are the costs/benefits of regions vs. just using ops? Where do you think something like `affine.bind_symbol` would break? Can we think of some new analyses/transformations that cross `affine.greybox` boundaries and for which the abstraction helps (vs. makes transformations more intricate to write because of a new special region)?

Thanks!
On Saturday, September 28, 2019 at 11:28:18 AM UTC-4, Uday Bondhugula wrote:
On Mon, Sep 30, 2019 at 2:31 AM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:
> On Sunday, September 29, 2019 at 1:09:59 AM UTC+5:30, Mehdi AMINI wrote:
>> This is something that we should clarify with respect to symbols: either we always force these to be defined inside an affine region (a region attached to an affine operation), in which case you have to use a graybox to materialize symbols through explicit capture, or we should consider that any SSA value defined in a non-affine region can be used as a symbol inside an affine region. I'm referring to this section: https://github.com/tensorflow/mlir/blob/master/g3doc/Dialects/Affine.md#restrictions-on-dimensions-and-symbols, which, after defining what an affine region is, I would update along the lines of:
>>
>> > A symbolic identifier can be bound to an SSA value that is either:
>> > - defined in a non-affine region,
>> > - defined in a region separated from the current region by an affine.graybox operation in the nesting structure (the value is "captured", possibly implicitly, by an affine.graybox),
>> > - the result of a constant operation,
>> > - or the result of an affine.apply operation that recursively takes as arguments any symbolic identifiers.
>
> This sounds accurate to me except for some of the terminology. "Non-affine region" is confusing, because there isn't any such thing as a non-affine region! Every op is part of some affine region, either that of the closest enclosing affine.graybox op or that of the func op. The "affine region" term is usable only to demarcate such a region (anything above the closest enclosing affine.graybox, and anything inside the graybox ops that are in this region, isn't part of the region).

Sorry for being sloppy with terminology here :) I had defined (in my first sentence above) an affine region as "a region attached to an affine operation". My idea was that you could bind a symbol whenever the symbol is defined in a region that isn't attached to an affine operation. For example:

> This doesn't really cover it all.
> For eg. in Example 2 and Example 3 of the RFC resp., %pow and %v are valid symbols that are defined inside regions held by affine operations.

Just a nit, but the way you phrased "regions held by affine operations" makes me think we don't use the same terminology: a region is held by only one operation in the IR. This is not a transitive property; a region's semantics are defined by the operation it is attached to.
> They become symbols for the inner affine regions (for the nested affine.graybox), not for the one they are contained in.

Again, terminology maybe, but they are defined inside the affine.graybox as far as I can tell, so I can't connect that to "not for the one they are contained in". (SSA values defined in the graybox can't even be directly referred to outside of the graybox region in any way in MLIR; this is structural.)

> Second, a symbol defined at the top level of an affine.graybox is also valid for that graybox's region

I agree.

> (just like the current rule for the top level of a function op). I've added a terminology section to the RFC - we probably need another term instead of overloading 'region'.

```mlir
spv.func @foo() {
  not_affine.async_launch() {
    %symbol = ....
    affine.for .... {
      // Here %symbol can be bound as a symbol since it is defined in a
      // region not attached to an affine operation.
    }
  }
}
```

IIUC, your first bullet should just be replaced by:
- defined at the top level of a function or of an affine graybox op

> (just like for a function, the top level of an affine graybox is the top part of its region's entry block)

I'm not comfortable referring to "function" here: we don't have "first class" functions anymore in MLIR (the SPIR-V dialect is using a different operation for representing functions).

> By "function", I just meant the function op here ('FuncOp'): I just didn't want to use the class name.

I know, but I feel you're missing my point: FuncOp is nothing special, so you should not refer to it in any way with respect to defining the rule for binding symbols.
Thanks for the RFC! The graybox idea sounds interesting to me. I see the discussion is already pretty advanced but I would have some basic questions, mostly for my learning/understanding:
1. It seems that with greybox op we are making all the affine ops context sensitive. affine.load/affine.store/affine.if/affine.for may not be actually affine if they are within a greybox. I’m trying to understand the upsides and downsides of this approach vs using plain load/store/if/for and perhaps using graybox only for more arbitrary non-affine control flow cases. That would keep, at least, affine semantics context free, I guess.
2. Based on example 2, it seems that we may have affine constructs that are actually affine within a graybox (loop %l). Not sure if my question makes sense but, are affine algorithms expected to be applied on these "nested" affine constructs or, on the contrary, a graybox "downgrades" all the nested construct to be treated as non-affine? In practice, for example, would it be possible to fuse loop %l and loop %m using an affine transformation?
affine.for %i = 0 to %n {
affine.for %j = 0 to %n {
affine.graybox [] {
%pow = call @powi(%c2, %j) : (index, index) -> index
affine.for %k = 0 to %pow {
affine.for %l = 0 to %n {
...
}
affine.for %m = 0 to %n {
...
}
}
} // graybox end
} // %j
} // %i
Thanks!
Diego
and explicitly captures (at least) the memrefs used. It's almost as if it were also an affine.graybox. I still don't see how the rule for defining a symbol is free from a reference to FuncOp unless you use other terminology that indirectly covers it.
Your earlier bullets used "non-affine region" in the first bullet, which isn't meaningful: there isn't such a thing as a non-affine region, as it would fail IR verification. Could you clarify what your first-bullet replacement would be if it's free from referring to FuncOp?
Second, irrespective of whether it's an unknown or known op, if the goal was to start a new symbol context at "crazy.op", one would just surround it with an affine.graybox. And if one's only goal was to start a new symbol context, one would just use an affine.graybox instead of a "crazy.op", since you'd already have all the infra around it: scan and get hold of the memrefs inside, and build the graybox. affine.graybox is a known op whose only goal is to start a new polyhedral symbol context for its "affine region". If you think there are cases where various known ops would want to be isolated polyhedrally (i.e., behave like affine.graybox), defining a PolyhedralIsolateFromAbove trait is an option (walkAffine won't walk their insides).
But even this isn't as readable as just inserting the special affine.graybox op wherever you want to start a new symbol context; i.e., I tend to prefer having affine.graybox meant for this unique purpose instead of going around sticking its trait onto other (known) ops of interest.
Thanks for the RFC! The graybox idea sounds interesting to me. I see the discussion is already pretty advanced but I would have some basic questions, mostly for my learning/understanding:
1. It seems that with greybox op we are making all the affine ops context sensitive. affine.load/affine.store/affine.if/affine.for may not be actually affine if they are within a greybox. I’m trying to understand the upsides and downsides of this approach vs using plain load/store/if/for and perhaps using graybox only for more arbitrary non-affine control flow cases. That would keep, at least, affine semantics context free, I guess.
2. Based on example 2, it seems that we may have affine constructs that are actually affine within a graybox (loop %l). Not sure if my question makes sense but, are affine algorithms expected to be applied on these "nested" affine constructs or, on the contrary, a graybox "downgrades" all the nested construct to be treated as non-affine? In practice, for example, would it be possible to fuse loop %l and loop %m using an affine transformation?
affine.for %i = 0 to %n {
affine.for %j = 0 to %n {
affine.graybox [] {
%pow = call @powi(%c2, %j) : (index, index) -> index
affine.for %k = 0 to %pow {
affine.for %l = 0 to %n {
...
}
affine.for %m = 0 to %n {
...
}
}
} // graybox end
} // %j
} // %i
The mental model I have (Uday please correct me if I'm wrong) is that your example is equivalent to:
func @graybox_func(%c2 : index, %j : index, %n : index, .... /* memref */) {
%pow = call @powi(%c2, %j) : (index, index) -> index
affine.for %k = 0 to %pow {
affine.for %l = 0 to %n {
...
}
affine.for %m = 0 to %n {
...
}
}
affine.for %i = 0 to %n {
affine.for %j = 0 to %n {
call @graybox_func(%c2, %j, %n, ... /* memref */)
} // %j
} // %i
> and explicitly captures (at least) the memrefs used. It's almost as if it was also an affine.graybox. I still don't see how the rule for defining a symbol is free from a reference to FuncOp unless you use other terminology that indirectly covers it.

Yes, the terminology is "any op not in the affine dialect holding a region" (which I called a "non-affine region" after defining it). I'm sticking closer to the MLIR terminology for region here: a region corresponds to the "class mlir::Region", and its semantics are defined by the operation that it is directly attached to.

> The earlier bullets you had used "non-affine region" in its first bullet, which isn't meaningful.

I don't understand what isn't meaningful here. A FuncOp is a non-affine op (an op not in the affine dialect) which is holding a region (its body), which matches the definition I gave for a "non-affine region". Feel free to propose another definition/terminology if you don't find it convenient.

> There isn't such a thing as a non-affine region as it would fail IR verification. Could you clarify what your first bullet replacement would be if it's free from referring to FuncOp?

I hope I clarified this here.
> But even this isn't as readable as just inserting the special affine.graybox op wherever you want to start a new symbol context, i.e., I tend to prefer having affine.graybox meant for this unique purpose instead of going around sticking its trait to other (known) ops of interest.

Forcing an affine.graybox to be present to create a symbol context is a possible design. However, if you'd like to go in this direction, then, since the FuncOp body is nothing special, we would use an affine.graybox inside a FuncOp to begin any new affine context. The affine verifier could check that any bound symbol is always defined in a region attached to an affine.graybox.
However I still have some doubt about this possible design, as I expressed in my last email (which you haven't addressed yet I believe).
--Mehdi
"Values of index type defined at the top level of an affine.graybox or a function op" thus has no circular logic - both are well-defined known ops.
> > But even this isn't as readable as just inserting the special affine.graybox op wherever you want to start a new symbol context, i.e., I tend to prefer having affine.graybox meant for this unique purpose instead of going around sticking its trait to other (known) ops of interest.
>
> Forcing affine.graybox to be present to create a symbol context is a possible design, however if you'd like to go in this direction, then since the FuncOp body is nothing special we would use an affine.graybox inside a FuncOp to begin any new affine context. The affine verifier could check that any bound symbol is always defined in a region attached to an affine.graybox.

As I mentioned, a FuncOp has all the properties of an affine.graybox as is.
One could assume an affine.graybox right under it in their minds or think of it as being elided right under the function definition.
> However I still have some doubt about this possible design, as I expressed in my last email (which you haven't addressed yet I believe).

I'm a bit lost now on which doubt you are referring to :-) - whether it's the explicit capture of memrefs or something else.
FYI I'm fully in agreement with Alex's points at the moment.
On Tue, Oct 1, 2019 at 5:42 AM 'Uday Bondhugula' via MLIR <ml...@tensorflow.org> wrote:

Just responding to these parts since I agree with the rest.

> 3. Conservative analysis through implicit capture
> Affine passes would treat any unknown op conservatively. This means
> that
> an unknown op with regions that implicitly captures any values is
> treated by those passes as an opaque function call that takes all
> implicitly captured values as arguments. This means that the passes
> should compute the set of implicitly captured values for unknown ops.
Actually, the answer is "No" here - they should not! For the unknown ops, why would you want to compute the set of implicitly captured values when you are going to walk through the op anyway?! The walkAffine method in the proposal walks through the inner regions of *all* region-holding ops except the affine.graybox. And if you don't want to block an inner traversal, there is neither a need to explicitly capture nor a need to compute on the fly! It'd just work as is with the walk. The question really isn't about a "known" vs. "unknown" op here, but about an "op within the current polyhedral symbol context" vs. an "op outside the current polyhedral symbol context". affine.graybox has its own symbol context, and so it falls into the latter class. In your earlier example, "crazy.op" (irrespective of whether it's known or unknown) doesn't define a new polyhedral context, and so there isn't a need to block a walk into it nor to explicitly capture.

Actually, that's probably the point I was missing: I'm seeing an unknown op with a region like an opaque call. Basically, my take on a current invariant of MLIR is that if this is valid:
```mlir
func @foo(%m : memref<...>) {
  not.crazy {
    crazy.op(%m) : memref<...>
  }
}
```

then the following should always be valid regardless of what is done with %m (this is from the point of view of looking at the `not.crazy` op, assuming that the innermost region is valid for crazy.op, of course):

```mlir
func @foo(%m : memref<...>) {
  not.crazy {
    crazy.op {
      // use of %m implicitly captured by crazy.op
    }
  }
}
```
The important property here is that the enclosing `not.crazy` cannot constrain what `crazy.op` can do within its enclosed region.
Exposing the region inside crazy.op can enable better analysis than if it weren't there (for instance, you could know that %m is never stored to, for the purpose of side-effect analysis), but the first form should already "assume the worst", and so if the first form is valid, the second must be. (This goes beyond affine.)
-- Mehdi
> "Values of index type defined at the top level of an affine.graybox or a function op" thus has no circular logic - both are well-defined known ops.

You're misquoting here: the circular logic is there: "A FuncOp has all the properties of an affine graybox since it starts a new symbol context". Basically, I'm saying that there is no reason FuncOp has any specific property that would make it more special than other ops (like spv.func) with respect to affine/symbols, and that you claim FuncOp is special because it creates a symbol context. So the only thing special about it is that you make it special (arbitrarily, as far as I can tell).
> > But even this isn't as readable as just inserting the special affine.graybox op wherever you want to start a new symbol context, i.e., I tend to prefer having affine.graybox meant for this unique purpose instead of going around sticking its trait to other (known) ops of interest.
>
> > Forcing affine.graybox to be present to create a symbol context is a possible design, however if you'd like to go in this direction, then since the FuncOp body is nothing special we would use an affine.graybox inside a FuncOp to begin any new affine context. The affine verifier could check that any bound symbol is always defined in a region attached to an affine.graybox.
>
> As I mentioned, a FuncOp has all the properties of an affine.graybox as is.

So do potentially 1000 other ops. Is your intent to update the affine spec with the list of all the other ops that could behave the same way (I gave the example of spv.func; should we update the affine spec to add this next to FuncOp)?
Thanks, Mehdi. That mental model was very useful!
I would have another example, based on Example 3, that would help my basic understanding of the new picture that graybox is bringing and the limitations of affine representation.
For this particular example, would graybox provide any extra value vs representing the indirect load with a std.load?
%cf1 = constant 1.0 : f32
affine.for %i = 0 to 100 {
%v = affine.load %B[%i] : memref<100xf32>
affine.graybox [%A] {
// %v is now a symbol here.
%s = affine.load %A[%v] : memref<100xf32> // Indirect load
affine.store %s, %C[%i] : memref<100xf32> // Note change in the subscript here.
return
}
affine.for %j = 0 to 100 {
%l = affine.load %C[%j] : memref<100xf32>
}
Using affine loop fusion as an example, I understand that fusion of loop %i and %j couldn’t happen because the store is only considered affine within the gray box.
However, if we changed loop %i to something like this, it would work:
…
%t = affine.graybox [%A] … {
// %v is now a symbol here.
%s = affine.load %A[%v] : memref<100xf32> // Indirect load
return %s
}
affine.store %t, %C[%i] : memref<100xf32> // Note change in the subscript here.
…
Thanks!
Diego
From: Mehdi AMINI [mailto:joke...@gmail.com]
Sent: Wednesday, October 2, 2019 5:46 PM
To: Caballero, Diego <diego.c...@intel.com>
> The mental model I have (Uday please correct me if I'm wrong) is that your example is equivalent to:
g) Without more information about `any.op` (traits, etc.), this should be equivalent to the explicit capture case: if the IR was valid in the first and second cases, then it should be valid here. If we don't have these properties, and if `op.with_region` can constrain the validity of the region attached to `any.op`, then `any.op` is no longer in control of the semantics of the enclosed region. No transformation can operate on `any.op` without knowing all of the enclosing operations, since these can add arbitrary restrictions. For example, this is valid IR (you can pipe this into mlir-opt right now):

```mlir
module {
  "d1.op1" () ({
    "d2.op2" () ({
      module {
        func @bar() {
          return
        }
        func @foo() {
          call @bar() : () -> ()
          return
        }
      }
      "d2.done" () : () -> ()
    }) : () -> ()
  }) : () -> ()
}
```
Hi Mehdi,

Getting back to this - inline response.
On Thursday, October 3, 2019 at 12:56:18 PM UTC+5:30, Mehdi AMINI wrote:
> g) Without more information about `any.op` (traits, etc.), this should be equivalent to the explicit capture case: if the IR was valid in the first and second cases, then it should be valid here. If we don't have these properties, and if `op.with_region` can constrain the validity of the region attached to `any.op`, then `any.op` is no longer in control of the semantics of the enclosed region. No transformation can operate on `any.op` without knowing all of the enclosing operations, since these can add arbitrary restrictions. For example, this is valid IR (you can pipe this in mlir-opt right now):
>
> ```mlir
> module {
>   "d1.op1" () ({
>     "d2.op2" () ({
>       module {
>         func @bar() {
>           return
>         }
>         func @foo() {
>           call @bar() : () -> ()
>           return
>         }
>       }
>       "d2.done" () : () -> ()
>     }) : () -> ()
>   }) : () -> ()
> }
> ```

Yes, this may be valid IR, but not even canonicalization or dead code elimination works on this. Running -canonicalize on it:
```
$ mlir-opt -canonicalize test.mlir
module {
  "d1.op1"() ( {
    %c0_i32 = constant 0 : i32
    %c0_i32_0 = constant 0 : i32
    "d2.op2"() ( {
      module {
        func @bar() {
          %c1_i32 = constant 1 : i32
          %c1_i32_1 = constant 1 : i32
          return
        }
        func @foo() {
          call @bar() : () -> ()
          return
        }
      }
      "d2.done"() : () -> ()
    }) : () -> ()
  }) : () -> ()
}
```

I'll respond to the connection to the affine.graybox proposal in another post, but I think just being able to represent such IR without having the basic infra working correctly on it has little meaning. A separate thread/issue should be started to discuss/fix this before we discuss how affine or other higher-order passes should handle these -- because the latter use runOnFunction as well.

~ Uday