Ah, and good point! There are conceptually implicit types on a region - whether they are the function return types when hosted in an std.func, or the op return types otherwise (if desired by the dialect) - there's just an absence of a nice way to query them.
The code I have today can do op.getOperation()->getContainingRegion()->getContainingOp() to get the containing op, and op.getOperation()->getFunction()->getType() to get the containing function type, and then query either for their result types. But it'd really be nice if I could just do op.getOperation()->getContainingRegion() and access (dialect-specified) result types on that.
I think the behavior of return depends on whether regions are just scopes within a function or whether they can represent independent execution contexts. In the case of regions just being scopes within std.func, what you say about a return escaping the region and returning from the containing function makes sense, but if I were using them to implement function literals/lambdas/closures/nested executables/etc. I'd definitely not want that (in this case, the salient bit is that those literals may be of a custom function op, not std.func).

So maybe I can call what I'm doing a closure to draw an equivalence, as it has the same rules as a function - no implicit captures, and values cannot escape except via returns. In such a case it makes sense for the closure to be able to contain any op a function could contain (including a full CFG of its own), and return should have a defined meaning there. Any op other than return that exits the closure (like yield) would be weird, as it implies something about the flow of execution within the closure (which should be opaque to the containing function/op).
Right now I'm working around this by outlining all of my closures into other top-level functions, but it'd be nice to keep them contained where they are used (so that general transformations - like DCE, etc. - still apply). This would prevent my current explosion of @foo_region_12_4_2 functions :)

If std.return can only ever be a return from the containing std.func (which doesn't seem unreasonable definition-wise), then I can work around this by adding a my.return and using that. However, I would like my.func to appear exactly like std.func with some additional attrs/operands/etc., and it'd be unfortunate to make all downstream dialects need to understand my.return (and at that point do we also say that std.branch is only valid within an std.func, etc., and end up requiring full conversion of the entire CFG?).
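For illustration, roughly what that my.return workaround could look like, reusing the example from the original message quoted below (the my.* ops and their syntax are hypothetical, so treat this purely as a sketch):

  func @stdFunc(%arg0 : tensor<?xf32>) -> tensor<?xf32> {
    %0 = my.regionOp(%i0 = %arg0 : tensor<?xf32>) : tensor<123xf32> {
      %1 = my.castOp %i0 : tensor<123xf32>
      // Dialect-specific terminator instead of std.return; every dialect
      // that processes the region body now has to know about my.return.
      my.return %1 : tensor<123xf32>
    }
    %2 = my.castOp %0 : tensor<?xf32>
    return %2 : tensor<?xf32>  // still std.return, validated against @stdFunc
  }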
On Tue, Jun 4, 2019 at 10:29 AM Alex Zinenko <zin...@google.com> wrote:

One problem I see is the absence of first-class types on the regions. IIRC, ReturnOp currently verifies that the values it returns have the same types as the results of the function. This verification is meaningless for regions because the relation between the values produced by the regions and those produced by the enclosing op is op-specific.

Another issue with ReturnOp is that one may want to use it to actually return from a function while being inside a region that is not a function region (a sort of if-error-early-return pattern).

One of the prototypes of the regions proposal had a "yield" operation that would immediately exit the region and transfer control flow back to the enclosing operation, which can decide what to do next: execute the same region again, execute another region, pass control flow to its successor, etc. It did not get implemented because we did not have actual use cases for it. Do you have more than one region-containing operation where it would be necessary?

On Tue, Jun 4, 2019 at 7:15 PM 'Ben Vanik' via MLIR <ml...@tensorflow.org> wrote:

I've got a custom op that contains a region representing normal control flow and would like to have a return statement inside the region:

  func @stdFunc(%arg0 : tensor<?xf32>) -> tensor<?xf32> {
    %0 = my.regionOp(%i0 = %arg0 : tensor<?xf32>) : tensor<123xf32> {
      %1 = my.castOp %i0 : tensor<123xf32>
      // this should validate against my.regionOp
      return %1 : tensor<123xf32>
    }
    %2 = my.castOp %0 : tensor<?xf32>
    // this should validate against @stdFunc
    return %2 : tensor<?xf32>
  }

Currently ReturnOp verifies against the containing function regardless of where it is, which in this case does not have the same type as defined by the op. Having the ReturnOp verifier check against the op containing the region enables this to work (I've got that prototyped); however, River brought up that perhaps return may not be allowed inside of regions at all and instead can only return from functions.

It'd be really nice to have a return op that generically returns from its region regardless of where that region was hosted. This is especially important as function becomes an op and other function (or function-like) ops may exist. For the particular sequence of operations I want to perform I cannot use a custom return op, as the intent is that the ops within the region are unmodified from their original form (and dialects that may process the region contents should not have to care about my custom return op).

Thoughts?
Hah! I love it - I'd typed up a giant response only to find the single statement that sums up my issue, and you nailed it right away, Mehdi :)
On Tue, Jun 4, 2019 at 10:29 PM Chris Lattner <clat...@google.com> wrote:

I see a few options here, each of which has pros and cons:

1) We could generalize return to be able to work in an arbitrary region. This is suboptimal though, because we don't want it in affine.for (as one example).
On Jun 4, 2019, at 10:38 PM, Mehdi AMINI <joke...@gmail.com> wrote:

Can you clarify what is special about affine.for with respect to the terminator? It isn't clear to me why we couldn't use return there as well.

I don't see the limitation with generalizing any (CFG) region as having a single entry point (the first block, taking arguments for the region) and multiple exits (always `return`, with all `return` ops in the region required to have the same operand types).
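As a sketch of what that generalization could look like (hypothetical syntax, reusing the my.regionOp example from earlier in the thread): a single entry block plus multiple exit blocks, each terminated by a return whose operand types match the results of the enclosing op:

  %0 = my.regionOp(%i0 = %arg0 : tensor<?xf32>) : tensor<123xf32> {
    // Single entry block; %i0 is the region argument.
    %cond = "my.shouldCast"(%i0) : (tensor<?xf32>) -> i1            // hypothetical op
    cond_br %cond, ^bb1, ^bb2
  ^bb1:
    %1 = my.castOp %i0 : tensor<123xf32>
    return %1 : tensor<123xf32>      // exit #1
  ^bb2:
    %2 = "my.reshape"(%i0) : (tensor<?xf32>) -> tensor<123xf32>     // hypothetical op
    return %2 : tensor<123xf32>      // exit #2, same operand types as exit #1
  }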
On Jun 4, 2019, at 11:00 PM, Mehdi AMINI <joke...@gmail.com> wrote:

  func @foo() {
    affine.for %x = … {
      %a = add %b, %c
      return %a
    }
  }

Looks like it returns from the enclosing function. While we could define it to mean whatever we want, keeping 'return' for nested function and closure-like things makes more sense than affine.for. Even if we chose to allow it in affine.for, there will be other regions where it doesn't make sense (e.g. TensorFlow graphs and other regions where you don't want std dialect stuff floating around willy-nilly).

I actually still don't see a problem with affine.for or TF graph. It isn't clear to me what has to be specific about the exit operation of a CFG region? Why couldn't every region be terminated by a return?
affine.for ops should actually be able to hold regions that are a list of blocks, with potentially multiple returns. In fact, that would be the way to have arbitrary control flow that is *completely contained* within an affine.for, without having to outline into a function. For example, the following 2-d loop nest in C/C++ would map to an affine.for for the outer loop, and its entire body would go into the region, which is a list of basic blocks (CFG form).
  for (i = 0; ...)       // maps to an affine.for
    for (j = 0; ...) {   // this entire loop becomes a region (list of blocks) held by the affine.for for 'i'
      if (...) break;
      ...
      if (...) continue;
      ...
    }
The returns in the region are for the region (in this case, for a single iteration of 'i') and not for the enclosing function or the 'for' op itself. I haven't looked at the latest code, but I think affine.for ops are restricted to having a single block in their region. Having a list of blocks will break a lot of the passes/utilities, and there would be no easy way to even represent the effect of a loop unroll on such an 'affine.for'.

All of these difficulties go away if there is a special "region op" whose semantics are to execute the region *once*. Then, an 'affine.for' could contain just one block that itself has a single operation that is that "region op". One would have to define what the walkers do, though; for many of the affine passes, they would have to treat this region op opaquely, but there would also have to be utilities that construct summaries for the region (such as the memrefs dereferenced in it and the SSA values live into the region); otherwise, passes like fusion and memref dependence analysis would do the wrong thing. The values returned by the std.return's in the region would be checked against the region op's results.

~ Uday
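A rough sketch of the "execute once" region op described above (the op names and the generic-form syntax here are made up purely for illustration): the affine.for body keeps a single block, and the arbitrary CFG - including the returns that end a single execution of the region - lives entirely inside the nested op:

  affine.for %i = 0 to 128 {
    "region.execute_once"() ({
      %cond = "test.check"(%i) : (index) -> i1    // hypothetical op
      cond_br %cond, ^skip, ^body
    ^body:
      "test.work"(%i) : (index) -> ()             // hypothetical op
      return                                      // ends this execution of the region
    ^skip:
      return                                      // effectively a 'continue'
    }) : () -> ()
  }

Because the loop body stays a single block, transformations like unrolling can still duplicate it wholesale; the nested region op is what the affine passes would summarize or treat opaquely.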
It isn't clear to me what has to be specific about the exit operation of a CFG region? Why couldn't every region be terminated by a return?

Every region gets a terminator - do you mean that every region could/should use std.return literally? There are lots of different kinds of domains and abstractions, including source-level abstractions that have "return ops" with language-specific semantics (e.g. NRVO in clang), as well as things like exception unwinding and other terminators, etc.

I think a broader issue involved is how "open" we want each dialect to be. Do we want a dialect to be a strict and tightly integrated entity that is closed for extending (supersetting) or extracting (subsetting) its components?

For certain dialects, we already allow extensibility, so it seems we answered yes to supersetting. For subsetting, I can see its usefulness in the SPIR-V dialect: say I want a subset of SPIR-V for WebGPU, it would be nice to be able to extract a subset of core SPIR-V instead of writing each op from the ground up. Yes, the downside of this approach is that as a dialect becomes more open, verification becomes more challenging. If some verification involves both a broader-scope op (e.g., a function op) and a narrower-scope op (e.g., a terminator op), it seems that verification happening on the broader-scope op means the dialect is more friendly to subsetting (each narrower-scope op is a building block that other dialects are free to choose), while verification happening on the narrower-scope op means the dialect is more friendly to supersetting.

If the goal of the standard dialect (or whatever final name we settle on) is to be a set of building blocks for other dialects (which can be quite useful for bringing up other dialects, sharing common transformations, etc.), then allowing std.return to be used by other dialects (if the semantics match) sounds reasonable to me. If instead the standard dialect is meant to be closed and used as an integrated intermediate utility dialect, then maybe not.
On Thu, Jun 6, 2019 at 11:48 AM Chris Lattner <clat...@google.com> wrote:

On Jun 6, 2019, at 9:34 AM, Lei Zhang <antia...@google.com> wrote:
I think a broader issue involved is how "open" we want each dialect to be. Do we want a dialect to be a strict and tightly integrated entity that is closed for extending (supersetting) or extracting (subsetting) its components?

This is semi-philosophical, but it is also a very practical consideration. Focusing on the practical consideration: *no*, you do not want to superset operations by adding lots of knobs onto simple things like return or std.addi. The reason for this is that we have centralized definitions of what the ops are, what their dynamic semantics are, etc. (manifested in things like constant folding rules, canonicalization patterns, verification hooks, etc.). Each operation needs to have a well-defined set of dynamic semantics owned by the author of the op. It isn't ok for some "foo" dialect to come around and say "the bar.xyz op has special behavior and takes special operands in this one case that bar.xyz was never designed for".

Yes, this is a design problem. The author of an op or dialect gets to think about and decide what level of generalization makes sense. That said, coming back to practicality, there are some things that have to be nailed down for it to be useful. For example, we could just say that MLIR only has one op, and it takes a dictionary of stuff that indicates the behavior of the op. This design can model everything that the current design does, but it would not be "useful" :-). The decisions about what an op does and doesn't do directly affect the correctness and convenience of analyzing and transforming the code.

I would really prefer to keep the standard dialect nailed down and buttoned up until we know exactly where it needs to go. It is very easy to define dialect-specific return instructions. This isn't about surface-level similarity of concepts. This is about semantics and invariants that passes can assume.

Exactly: the fundamental question I perceive here is how much liberty you want to leave to dialects about the termination of a region: you mentioned the possibility of customizing it before, but that's a design choice. We already make restrictions (around the CFG structure, around the fact that a region only has a single entry, etc.), and many of the arguments that justify these restrictions seem to apply equally well to restricting the concept of region termination.
On Jun 6, 2019, at 11:58 AM, Mehdi AMINI <joke...@gmail.com> wrote:
Exactly: the fundamental question I perceive here is how much liberty you want to leave to dialects about the termination of a region.

Right, but there are structural considerations (what nesting structures can be expressed with regions in the IR) and semantic considerations. For example, a TensorFlow graph containing Switch/Merge nodes, with its concurrency properties, has an odd sort of execution semantics that really has little to do with "top-down sequential execution of code in a basic block". TF graphs can be represented in an MLIR region, but that region won't have "top-down sequential execution semantics".

This is perfectly fine in our system, but 'std.return' assumes top-down sequential execution semantics, and assumes a certain relationship with std.func (which is where this whole thread got started). I don't think it makes sense to say "the bar dialect can use std.return to mean something completely different when it shows up in a bar region". std.return and std.func are really tied together right now, and I don't see a reason to break that alignment.
TF graphs can be represented in an MLIR region, but that region won't have "top-down sequential execution semantics".

I don't necessarily agree with this: you could see a TF graph as returning implicit future values and dispatching asynchronous operations, but that does not invalidate that you process operations in order in a block (for instance, you can't refer to a value defined by an operation later in the block; this was a contentious point of the non-CFG proposal).

I don't see why std.return should be fundamentally tied to std.func: this is more historical than anything. It seems to me that another view is that std.return is tied to the function body, which we recently generalized into a region. In this context, having every region terminated by std.return can be consistent.

This is restricting what you can express with regions, but as I mentioned before, all the arguments for the other restrictions on regions would apply to region termination as well. I am still missing the difference you're making about return.
On Jun 6, 2019, at 3:46 PM, Mehdi AMINI <joke...@gmail.com> wrote:
I don't necessarily agree with this: you could see a TF graph as returning implicit future values and dispatching asynchronous operations, but that does not invalidate that you process operations in order in a block.

NextIteration doesn't have those kinds of semantics.

I don't see how any decision about the std.return op restricts what can be done with regions?

I agree with you that anything can be made to work; it is just about balancing design tradeoffs.
If we flesh this model out fully I suspect it will be quite complicated (and will resemble some of the other proposals we had considered), but it shows that the representation proposed by Mehdi is not necessarily semantically problematic.

I'd also like to emphasize that NextIteration (which has fairly straightforward "increment the frame id" semantics) is mostly a red herring here; the problematic bit is actually the Merge operation (which is a data-flow-graph version of a phi node). It is sufficient but not necessary to specially treat NextIteration nodes by splitting out a .source and a .sink (e.g. consider a graph with an "unrolled" Switch/NextIteration/Merge loop -- it will still have NextIteration nodes, but no back edges).
On Jun 10, 2019, at 3:55 PM, Mehdi AMINI <joke...@gmail.com> wrote:
I don't see how any decision about the std.return op restricts what can be done with regions?

I was referring to the earlier points you made about custom region terminators: a C++ return could carry different semantics (around exceptions or destructors, for instance) than a C return or a TF graph return. So forcing std.return to be the only region terminator would be making a restriction on what you can do with the region terminator itself (it wouldn't prevent representing things like exception handling and scoped destructors separately, of course).
On Jun 10, 2019, at 10:45 PM, Mehdi AMINI <joke...@gmail.com> wrote:

I'm sorry, I never meant to imply that std.return would be the only way to terminate a region. We have openly extensible terminators already (and std.return is just an op), so of course you should be able to define a "clang.return", a "yourtask.yield", or whatever.

I think we are talking past each other: I am the one who implied that std.return would be the only way to terminate a region, and I was looking for arguments as to why not. This is a design point for which it isn't clear to me why you feel it is better to leave this fully open/extensible while other aspects of regions are not (for example, the CFG structure, the fact that a region only has a single entry, that we have region arguments as the first block's arguments, etc.).