On Aug 7, 2019, at 5:49 PM, 'River Riddle' via MLIR <ml...@tensorflow.org> wrote:

> Currently the pass manager infrastructure supports two different types of passes: Module Passes and Function Passes [...] With Modules and Functions now represented with operations, this seems overly limiting.

Agreed, thank you for tackling this!

> In the current pass manager, there are two levels of nesting in the form of ModulePassManagers (MPMs) and FunctionPassManagers (FPMs) [...] Now that Function and Module are modelled as operations, the level at which an operation may be nested is arbitrary.

Yes. Ops with regions that are known IsolatedFromAbove are the natural unit of parallelization in a hierarchical representation like MLIR.

> This means that the system needs to be expanded to support multiple levels of nesting. To support this, we introduce the concept of an OpPassManager (OPM).

What happens to the function/module pass managers?
> An OPM runs passes that operate on operations of a specific type, e.g. FuncOp/ModuleOp/etc. As alluded to above, OPMs support arbitrary levels of nesting. Nesting a new pass manager is as simple as invoking the nest method on any OPM instance. This pipeline nesting *must* be explicit now that operations do not have a set nesting level, as Functions did previously.
> The types of operations that are supported by an OPM are those marked as IsolatedFromAbove. This restriction is necessary as Passes must not modify state at or above the operation being operated on, in order to preserve the ability for MLIR to be multi-threaded at every level of the pass manager. The rationale behind this can be found here: https://github.com/tensorflow/mlir/blob/master/g3doc/Rationale.md#multithreading-the-compiler
I don’t really understand what you are getting at here. I’m sorry if this is obvious, but I always assumed we would do something like:

    typedef OpPassManager<FuncOp> FunctionPassManager;
    typedef OpPassManager<ModuleOp> ModulePassManager;
Which implies that the default behavior of OpPassManager is to do a postorder traversal of the region tree of the program, visiting ops that match the template argument (which must be IsolatedFromAbove).
While we want to make it conceptually possible for people to write passes on “their own kind of function” (like LLVM IR functions), the main purpose of these foreign function representations is interop with external systems like LLVM; it isn’t to enable writing LLVM IR transformations in MLIR.
As such, I think that pushing for more-or-less-standardization on FunctionPassManager (which is a specialization of a generic thing!) is a good thing, and optimizing for simplicity in practice is also useful.
> […] Command Line Specification
> Along with the C++ API, the interface for building a pipeline from the command line (for tools like mlir-opt) must also change. [...] The syntax for this specification is as follows:
> Example:
:-(. This punishes the vastly most common case. My limited muscle memory won’t know how to use this. Can we do better?
-Chris
Wouldn't it be nice to have something like the below (the new addPassManager-based calls)? The idea is to talk to the PassManager via two key methods: 1. addPass() and 2. addPassManager().

    PassManager pm;
    pm.addPass(new MyModulePass());

    // Add a few function passes.
    OpPassManager &fpm = getFuncOpPassManager();  // proposed new API
    pm.addPassManager(fpm);                       // proposed new API
    // vs. the RFC's: OpPassManager &fpm = pm.nest<FuncOp>();
    fpm.addPass(new MyFunctionPass());
    fpm.addPass(new MyFunctionPass2());

    // Run the pass manager on a module.
    Module m = ...;
    if (failed(pm.run(m)))
      ...  // One of the passes signaled a failure
--
You received this message because you are subscribed to the Google Groups "MLIR" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mlir+uns...@tensorflow.org.
To view this discussion on the web visit https://groups.google.com/a/tensorflow.org/d/msgid/mlir/1bbe8dd7-eee8-41c3-a533-0d476c354641%40tensorflow.org.
--
Disclaimer: Views, concerns, thoughts, questions, and ideas expressed in this mail are my own; my employer has no stake in them.

Thank you,
Madhur D. Amilkanthwar
Hi all,
I'd like to propose an RFC for generalizing the pass manager in MLIR, given that the previous proposal to rethink the representation of Functions/Modules as Operations was accepted by the community. Now that this proposal has been implemented, the fundamental design of the pass manager also requires a rethink. Currently the pass manager infrastructure supports two different types of passes: Module Passes and Function Passes, each operating on the respectively named IR entity. With Modules and Functions now represented as operations, this seems overly limiting. For example, dialects may want to define a custom function operation, as the LLVM dialect does, and write transformation passes on that abstraction. The builtin func operation may also appear in regions other than the top-level module.
Given the above, I propose that we abstract the pass manager infrastructure to work on arbitrary operations at arbitrary levels of nesting. To accomplish this, several pieces of the infrastructure need to be generalized:

Pass Manager Structure
In the current pass manager, there are two levels of nesting in the form of ModulePassManagers (MPMs) and FunctionPassManagers (FPMs), where FPMs may be nested within MPMs to form pass pipelines. This closely models the legacy relationship between Function and Module before they became operations. Now that Function and Module are modelled as operations, the level at which an operation may be nested is arbitrary. This means that the system needs to be expanded to support multiple levels of nesting. To support this, we introduce the concept of an OpPassManager (OPM).
An OPM runs passes that operate on operations of a specific type, e.g. FuncOp/ModuleOp/etc. As alluded to above, OPMs support arbitrary levels of nesting. Nesting a new pass manager is as simple as invoking the nest method on any OPM instance. This pipeline nesting *must* be explicit now that operations do not have a set nesting level, as Functions did previously.
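To make the nesting model concrete, here is a tiny self-contained toy, not MLIR's real classes: apart from the names OpPassManager and nest, which mirror the RFC, everything here is invented for illustration (e.g. it keys managers on a string op kind rather than a typed, IsolatedFromAbove-checked op class).

```cpp
#include <functional>
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Toy stand-in for an MLIR operation: an op kind plus nested ops.
struct Op {
  std::string name;        // op kind, e.g. "module" or "func"
  std::vector<Op> nested;  // ops nested within this op's regions
};

// Toy model of the proposed OpPassManager: passes for one op kind,
// plus explicitly nested child managers.
class OpPassManager {
public:
  explicit OpPassManager(std::string name) : opName(std::move(name)) {}

  // Explicit nesting, mirroring the RFC's `nest` method.
  OpPassManager &nest(const std::string &childName) {
    children.push_back(std::make_unique<OpPassManager>(childName));
    return *children.back();
  }

  void addPass(std::function<void(Op &)> pass) {
    passes.push_back(std::move(pass));
  }

  // Run this manager's passes on `op`, then recurse into nested
  // managers for each nested op whose kind matches.
  void run(Op &op) {
    for (auto &pass : passes)
      pass(op);
    for (Op &child : op.nested)
      for (auto &mgr : children)
        if (mgr->opName == child.name)
          mgr->run(child);
  }

private:
  std::string opName;
  std::vector<std::function<void(Op &)>> passes;
  std::vector<std::unique_ptr<OpPassManager>> children;
};

// Build a module(func(...)) pipeline and count how often the
// function-level pass runs: once per nested "func" op.
int runToyPipeline() {
  Op topLevel{"module", {Op{"func", {}}, Op{"func", {}}}};
  OpPassManager pm("module");
  OpPassManager &fpm = pm.nest("func");
  int funcPassRuns = 0;
  fpm.addPass([&funcPassRuns](Op &) { ++funcPassRuns; });
  pm.run(topLevel);
  return funcPassRuns;
}
```

The key point the toy shows is that nesting is a property of the pipeline the builder constructs, not something inferred from the pass itself.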
The types of operations that are supported by an OPM are those marked as IsolatedFromAbove. This restriction is necessary as Passes must not modify state at or above the operation being operated on in order to preserve the ability for MLIR to be multi-threaded at every level of the pass manager. The rationale behind this can be found here: https://github.com/tensorflow/mlir/blob/master/g3doc/Rationale.md#multithreading-the-compiler
Pass Structure

Now that the structure of the pass manager has been detailed, we can discuss the passes it is going to be operating on. Just as the PassManager has been generalized, so too will the Passes. The main pass in the new infrastructure will be the OperationPass.

OperationPass

An OperationPass is a transformation pass that opaquely runs on an operation of the current pass manager. As such, OperationPasses can be placed within any OpPassManager instance. This pass allows for performing transformations on operations without needing to know about the derived op class. This covers a large fraction of the kinds of transformations that are written; e.g., of the transformation passes that we have today, very few rely on specific invariants of a FuncOp or ModuleOp. Passes like Canonicalization and CSE may operate at any level of nesting, but running at the FuncOp level allows for realizing the benefits of multi-threading. Having passes run on different operation types allows a pass to use other mechanisms for selective execution, such as traits placed on the operation or some configuration passed in on pass construction.
The definition of an OperationPass is very similar to that of a FunctionPass or ModulePass today:
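A rough sketch of what such a definition might look like; the class and method names here (OperationPass, runOnOperation, getOperation) are assumptions modeled on the existing FunctionPass convention, not a fixed API:

```
// Hypothetical sketch: an OperationPass sees only a generic Operation,
// regardless of which OpPassManager it is placed in.
struct MyOperationPass : public OperationPass<MyOperationPass> {
  void runOnOperation() {
    Operation *op = getOperation();  // opaque; no derived-class assumptions
    // ... transform `op` using only generic Operation APIs ...
  }
};
```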
OpPass

An OpPass is a transformation pass that runs on an instance of a specific operation type. Unlike OperationPass, an OpPass may only be placed within an OpPassManager that operates on operations of the same kind. OpPasses are defined very similarly to OperationPasses:
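A hedged sketch of what such a definition might look like; the names (OpPass, FuncOp, getOperation) are illustrative assumptions, not the final API:

```
// Hypothetical sketch: an OpPass is bound to one op type, so it gets
// typed access to that op's invariants.
struct MyFuncPass : public OpPass<MyFuncPass, FuncOp> {
  void runOnOperation() {
    FuncOp func = getOperation();  // typed: only valid inside a FuncOp OPM
    // ... transform `func` using FuncOp-specific APIs ...
  }
};
```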
Analysis Management
In terms of analysis management, nested pass managers will have the same relationship that exists today between ModulePassManager and FunctionPassManager. Passes can query analyses on parent/child operations, at any level of nesting, with getCachedParentAnalysis and getChildAnalysis/getCachedChildAnalysis respectively. This is simply a generalization of the existing getCachedModuleAnalysis/getFunctionAnalysis methods.
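As a sketch of the queries described above (the analysis names and exact signatures here are illustrative assumptions, not a fixed API), a pass might query across nesting levels like so:

```
// Hypothetical sketch of cross-level analysis queries.
void MyPass::runOnOperation() {
  // Cached-only query on an ancestor operation; forces no computation.
  auto parentInfo = getCachedParentAnalysis<SomeParentAnalysis>();
  // Compute-or-fetch an analysis for an operation nested below.
  Operation *child = /* some op nested under getOperation() */;
  auto &childInfo = getChildAnalysis<SomeChildAnalysis>(child);
}
```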
Pass Pipeline Building

As mentioned above, pipeline building in the pass manager is now explicit rather than implicit. This essentially means that the following pipeline:
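Under the old scheme, such a pipeline would be built roughly like this (pass names are placeholders; nesting is inferred from the kind of pass added):

```
// Old, implicit structure.
PassManager pm;
pm.addPass(new MyModulePass());    // runs on the module
pm.addPass(new MyFunctionPass());  // implicitly wrapped to run per-function
```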
Would now be constructed like:
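A sketch of the proposed explicit construction, using the nest method described above (pass names remain placeholders):

```
// New, explicit structure: nesting is spelled out by the builder.
PassManager pm;                          // top-level, operates on ModuleOp
pm.addPass(new MyModulePass());
OpPassManager &fpm = pm.nest<FuncOp>();  // explicit nested pipeline
fpm.addPass(new MyFunctionPass());
```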
Command Line Specification
Along with the C++ API, the interface for building a pipeline from the command line (for tools like mlir-opt) must also change. The structure of the pipeline must become explicit as it can no longer be implicitly inferred from the type of pass being added. The pipeline specification format will work similarly to LLVM’s new pass manager, i.e. by providing a pipeline string that encodes the structure and passes to run. The syntax for this specification is as follows:
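A reconstruction of the grammar, consistent with the description above (nesting encoded by parenthesizing under an op name); treat the details as illustrative rather than authoritative:

```
pipeline         ::= op-name `(` pipeline-element (`,` pipeline-element)* `)`
pipeline-element ::= pipeline | pass-name
```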
Example:
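An illustrative invocation (the exact flag name and pass names are assumptions); this would run CSE and canonicalization on every func nested within the top-level module:

```
mlir-opt foo.mlir -pass-pipeline='module(func(cse, canonicalize))'
```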
Thoughts?