pipes-process

Jeremy Shaw

Sep 23, 2013, 12:13:32 PM
to haskell-pipes
Hello,

I have attempted to create a pipes-based interface for calling external processes.


At first stab, you would think this type would be perfect:

process :: (MonadCatch m, MonadIO m) =>
           FilePath                 -- ^ path to executable
        -> [String]                 -- ^ arguments to pass to executable
        -> Maybe String             -- ^ optional working directory
        -> Maybe [(String, String)] -- ^ optional environment (otherwise inherit)
        -> Pipe ByteString (Either ByteString ByteString) (SafeT m) ExitCode

After all, an external process has a stdin; it spits stuff out on stdout and stderr until it is done, and then it returns an ExitCode.

Alas, this does not actually work, because a process does not really behave like a pipe. So, while the type is correct, the behavior isn't.

Really, I think we need to model a process as something that has a Consumer and a Producer:

process :: (MonadCatch m, MonadIO m) =>
           FilePath                 -- ^ path to executable
        -> [String]                 -- ^ arguments to pass to executable
        -> Maybe String             -- ^ optional working directory
        -> Maybe [(String, String)] -- ^ optional environment (otherwise inherit)
        -> IO (Consumer ByteString (SafeT m) (), Producer (Either ByteString ByteString) (SafeT m) ExitCode)

Now, this does let you be devious and feed the Producer into the Consumer -- that is, send the output of your program back in as input. Though, I don't think that is actually invalid, just very uncommon.

There is a simple example here:


I have low confidence that this code is bug-free.

Any thoughts or comments?

- jeremy

Dag Odenhall

Sep 23, 2013, 3:06:39 PM
to haskel...@googlegroups.com

Could you elaborate on why your first type doesn't work out?




Gabriel Gonzalez

Sep 23, 2013, 3:12:28 PM
to haskel...@googlegroups.com, Jeremy Shaw
Your second approach is the correct one. It should output a separate
`Producer` and `Consumer` since there is not necessarily any
synchronization between a process's input and output ends.

I'd just recommend two changes:

First, you can factor everything through an opaque intermediate
`Process` type, i.e.:

process :: FilePath -> [String] -> Maybe String -> Maybe [(String, String)] -> Process

Then provide two functions:

readProcess :: Process -> Producer (Either ByteString ByteString) (SafeT m) ()

writeProcess :: Process -> Consumer ByteString (SafeT m) r

Second, I'd also recommend providing an equivalent version that uses the
`with` idiom instead of `SafeT` so that users can choose which idiom
they prefer.
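
Roughly, usage of that pair might look something like this (a sketch only: the `sort` example, the `each` input, and driving each end in its own thread with `async`'s `concurrently` are all illustrative assumptions, not part of the proposal):

    {-# LANGUAGE OverloadedStrings #-}

    import Control.Concurrent.Async (concurrently)
    import Pipes
    import qualified Pipes.Prelude as P
    import Pipes.Safe (runSafeT)

    example :: IO ()
    example = do
        let p = process "sort" [] Nothing Nothing
        _ <- concurrently
            -- feed the child's stdin from one thread...
            (runSafeT $ runEffect $ each ["banana\n", "apple\n"] >-> writeProcess p)
            -- ...while another thread drains the interleaved stdout/stderr
            (runSafeT $ runEffect $ readProcess p >-> P.print)
        return ()

(Closing the child's stdin once the input is exhausted is a separate question, which comes up below.)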

Gabriel Gonzalez

Sep 23, 2013, 3:16:05 PM
to haskel...@googlegroups.com, Dag Odenhall, Jeremy Shaw
The first type forces synchronization between the read and write ends.

Dag Odenhall

Sep 23, 2013, 3:16:05 PM
to Gabriel Gonzalez, haskel...@googlegroups.com, Jeremy Shaw

Doesn't your separation with readProcess and writeProcess mean you're spawning two processes?

Gabriel Gonzalez

Sep 23, 2013, 3:19:59 PM
to dag.od...@gmail.com, haskel...@googlegroups.com, Jeremy Shaw
On 09/23/2013 12:16 PM, Dag Odenhall wrote:

Doesn't your separation with readProcess and writeProcess mean you're spawning two processes?


No.  Think of `Process` as just another handle that you can both read from and write to at the same time.

Dag Odenhall

Sep 23, 2013, 3:22:43 PM
to haskel...@googlegroups.com, Jeremy Shaw

Oh, I see. Doesn't it need to be created in IO though? What if you do want multiple processes?

Gabriel Gonzalez

Sep 23, 2013, 3:27:16 PM
to haskel...@googlegroups.com, Dag Odenhall, Jeremy Shaw
On 09/23/2013 12:22 PM, Dag Odenhall wrote:

Oh, I see. Doesn't it need to be created in IO though? What if you do want multiple processes?


The reference library for this sort of thing is `pipes-network`.  It demonstrates the two most common idioms:

* Acquire the "handle" in IO using the with idiom and then spawn producers/consumers from that shared handle.  See `Pipes.Network.TCP` for this idiom.

* Acquire the "handle" in SafeT within a pipeline.  However, this usually limits you to one-way transfer (i.e. only reading or only writing).  See `Pipes.Network.TCP.Safe` for this idiom.

Renzo Carbonara

Sep 23, 2013, 4:11:19 PM
to haskell-pipes, Dag Odenhall, Jeremy Shaw
On Mon, Sep 23, 2013 at 4:27 PM, Gabriel Gonzalez <gabri...@gmail.com> wrote:
On 09/23/2013 12:22 PM, Dag Odenhall wrote:

Oh, I see. Doesn't it need to be created in IO though? What if you do want multiple processes?


The reference library for this sort of thing is `pipes-network`.  It demonstrates the two most common idioms:

* Acquire the "handle" in IO using the with idiom and then spawn producers/consumers from that shared handle.  See `Pipes.Network.TCP` for this idiom.

* Acquire the "handle" in SafeT within a pipeline.  However, this usually limits you to one-way transfer (i.e. only reading or only writing).  See `Pipes.Network.TCP.Safe` for this idiom.


Here Gabriel is talking about the functions `Pipes.Network.TCP.Safe.{from,to}{Connect,Serve}`, which acquire a `Socket` and interact with it in a streaming fashion, all at once. However, there are other more useful functions in that module, such as `{connect,serve,listen,accept}`, that only acquire the `Socket` and provide you with a limited scope where you can safely use such `Socket` however you want.


I haven't seen your code yet, so perhaps I'm talking nonsense; but presumably, you could provide a function such as:

  process
    :: (MonadSafe m, Base m ~ IO)
    => FilePath                 -- ^ path to executable
    -> [String]                 -- ^ arguments to pass to executable
    -> Maybe String             -- ^ optional working directory
    -> Maybe [(String, String)] -- ^ optional environment (otherwise inherit)
    -> (Process -> m r)         -- ^ computation to run on the obtained `Process`
    -> m r

That function should allocate resources before running the `(Process -> m r)` computation, and clean up resources after.
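
For illustration only, here is a rough sketch of how the body of such a function could be put together with `bracket` from pipes-safe and System.Process.  The `Process` record and its field names are made up here, and a real implementation would want to reap the child with `waitForProcess` rather than unconditionally terminating it:

    {-# LANGUAGE TypeFamilies #-}

    import Pipes.Safe
    import qualified System.Process as SP
    import System.IO (Handle, hClose)

    data Process = Process
        { procStdin  :: Handle
        , procStdout :: Handle
        , procStderr :: Handle
        , procHandle :: SP.ProcessHandle
        }

    process
        :: (MonadSafe m, Base m ~ IO)
        => FilePath
        -> [String]
        -> Maybe String
        -> Maybe [(String, String)]
        -> (Process -> m r)
        -> m r
    process path args workingDir environment =
        bracket acquire release
      where
        acquire = do
            -- We always request pipes, so the three handles are present.
            (Just hIn, Just hOut, Just hErr, ph) <- SP.createProcess
                (SP.proc path args)
                    { SP.cwd     = workingDir
                    , SP.env     = environment
                    , SP.std_in  = SP.CreatePipe
                    , SP.std_out = SP.CreatePipe
                    , SP.std_err = SP.CreatePipe
                    }
            return (Process hIn hOut hErr ph)
        release p = do
            hClose (procStdin p)
            SP.terminateProcess (procHandle p)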


Regards,

Renzo Carbonara.

James Deikun

Sep 23, 2013, 4:23:17 PM
to haskel...@googlegroups.com, Dag Odenhall, Jeremy Shaw
Yes, this should be handy to have. The Safe version that spawns a
process within a pipe is a lot more useful to have than its network
equivalent, though, because unlike in network programming, processes that
you only read from or only write to are probably at least as common as
ones where you want to do both.

Jeremy Shaw

Sep 23, 2013, 4:59:16 PM
to haskell-pipes, Dag Odenhall
Ok, I've made some sweeping changes. So it is probably closer. One thing that definitely seems a bit off is writeProcess:


It is important that when you have no more input to write, you close stdin; otherwise things like the 'cat' example will never terminate.

I added a call to 'finally', but that causes a bunch of MonadSafe goo to end up in the type signature.

Also, it is probably a bit presumptuous to assume that just because one call to writeProcess has completed, there is no additional data to be written?

A different option would be to have writeProcess leave stdin alone, and require the user to explicitly call 'closeStdin' for the process when they are done?

I reused the CreateProcess type from System.Process -- though I needed to override the StdStream fields. Not sure how I feel about that.

More opinions?

- jeremy

Jeremy Shaw

Sep 23, 2013, 5:11:51 PM
to haskell-pipes
pipes relies on the fact that a Pipe is only actually doing one thing at a time. It assumes that if the pipe calls 'await', nothing is going to happen until it gets the value it is blocked on. But in the case of a real process, it can be awaiting some stdin and then suddenly have some output it wants to yield.

A pipeline is kind of like a cooperative multitasking system. Pipes await and yield, and that is what causes the scheduling to happen. When a pipe yields a value, that part of the pipeline goes to sleep while the 'thread' that is awaiting the value wakes up and runs.

But, a system process does not care about that. It's got a mind of its own. If the process is blocking on an 'await', it can still magically produce a value to yield. Whereas in a typical pipe, if it is awaiting a value, it is because it needs that value before it is going to have anything it can yield.

Kind of a sloppy explanation, but hopefully that helps.

- jeremy

Gabriel Gonzalez

Sep 23, 2013, 5:43:42 PM
to haskel...@googlegroups.com, Jeremy Shaw, Dag Odenhall
To simplify the `writeProcess` type, you can write this:

    writeProcess :: (MonadSafe m, Base m ~ IO) => ...

That says that it uses `IO` for finalization.

You can simplify a lot of what you are doing using `pipes-concurrency`.  It will also help you avoid common sources of deadlocks.

Note that if you provide an alternative version of `writeProcess` that doesn't auto-close `stdin`, then you don't even necessarily have to provide a `Consumer`.  You can just provide an `IO` action that writes a chunk to the process's stdin:

    writeChunk :: PipesProcess -> ByteString -> IO ()

That can always be upgraded to the equivalent `Consumer` using `for` + `cat`:

    source >-> for cat (lift . writeChunk pp)

... or you can avoid the `cat` intermediate entirely by directly looping over your source of bytestrings:

    for source (lift . writeChunk pp)
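
For what it's worth, a minimal sketch of that `writeChunk` under those assumptions (the `PipesProcess` record and its `ppStdin` field are made up; a real record would also carry the other handles and the `ProcessHandle`):

    import qualified Data.ByteString as BS
    import System.IO (Handle, hClose)

    newtype PipesProcess = PipesProcess { ppStdin :: Handle }

    writeChunk :: PipesProcess -> BS.ByteString -> IO ()
    writeChunk pp = BS.hPut (ppStdin pp)

    -- Closing stdin then stays an explicit, separate step:
    closeStdin :: PipesProcess -> IO ()
    closeStdin pp = hClose (ppStdin pp)

That also side-steps the question of whether one finished `Consumer` means the input is really done, since the user decides when to call `closeStdin`.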

Tony Day

Sep 23, 2013, 8:17:08 PM
to haskel...@googlegroups.com, Jeremy Shaw, Dag Odenhall
State of the art for multi-asynchronous process execution and comms would have to be Emacs.

http://www.gnu.org/software/emacs/manual/html_node/elisp/Asynchronous-Processes.html

haskell-mode is a very active area of development.  inf-haskell.el and haskell-process.el would be well worth a comb through for how they handle comms with ghci and the new cabal repl.

https://github.com/haskell/haskell-mode

The concept of sentinels in Emacs is worth thinking about in a pipes context:

http://nic.ferrier.me.uk/blog/2011_10/emacs_lisp_is_good_further_reports_suggest

John Wiegley

Sep 24, 2013, 2:46:56 PM
to haskel...@googlegroups.com
>>>>> Gabriel Gonzalez <gabri...@gmail.com> writes:

> readProcess :: Process -> Producer (Either ByteString ByteString) (SafeT m) ()

Wouldn't it be better to give two Producers, one for stdout and one for stdin?
They can be written to at the same time by the process, can't they? It would
then seem odd that they can only be processed in sequence.

--
John Wiegley
FP Complete Haskell tools, training and consulting
http://fpcomplete.com johnw on #haskell/irc.freenode.net

Jeremy Shaw

Sep 24, 2013, 11:48:16 PM
to haskell-pipes
On Tue, Sep 24, 2013 at 1:46 PM, John Wiegley <jo...@fpcomplete.com> wrote:
>>>>> Gabriel Gonzalez <gabri...@gmail.com> writes:

>     readProcess :: Process -> Producer (Either ByteString ByteString) (SafeT m) ()

Wouldn't it be better to give two Producers, one for stdout and one for stdin?
They can be written to at the same time by the process, can't they?  It would
then seem odd that they can only be processed in sequence.

I assume you mean one for stdout and one for *stderr*?

Alas, the unix process model is so fundamentally stupid that I think we really need both variants. Many command-line apps are run from the command-line where stdout and stderr are interleaved in a somewhat arbitrary manner. But, there is some time-based information there -- even if there is a bit of fuzziness. For example, an app could print several lines of success to stdout, some error message to stderr, and more success to stdout. So, the stuff on stderr is presented in the context of what happened around the same time on stdout.

If you treat them as two completely independent sources, then you lose that temporal context.

So, I think it is useful to have a version that does interleave the stdout/stderr in whatever order it seems to get them. In theory you can just use partitionEithers to separate them if you don't want them interleaved like that, but that is not always the most convenient thing to do. There are clearly times when having stdout and stderr as separate Producers would be the most convenient solution.
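
For example, here is a small sketch of pulling just the stdout side back out of such an interleaved producer (assuming the convention that `Left` is stderr and `Right` is stdout):

    import Data.ByteString (ByteString)
    import Pipes

    stdoutOnly
        :: Monad m
        => Producer (Either ByteString ByteString) m r
        -> Producer ByteString m r
    stdoutOnly p = for p $ \chunk ->
        case chunk of
            Right out -> yield out     -- keep stdout chunks
            Left  _   -> return ()     -- drop stderr chunks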

On the other hand -- I think there is a real danger in having two Producers, one for stdout and one for stderr. Let's say you only care about stdout and you don't do anything with stderr. Since you are ignoring it, nobody is reading from stderr, and now stderr is at risk of blocking due to having a full output buffer, and the whole process may then block. Even worse, maybe you do care about stdout and stderr, but you try to do something where you first write all of stdout to a file, and then all of stderr. You could still end up blocked. If you want to safely process stdout and stderr separately, then I think you must do that in separate threads so that you don't deadlock?

I think it is necessary that we always read data from stdout and stderr when it becomes available, though we can choose to discard one or the other if we don't actually want it.

Now, we should also note that a similar problem exists in the current code. If we start the process and use only writeProcess, but not readProcess, then the process might block trying to write output and the input will never get read.

So modeling a process as a Pipe does not work, but modeling it as an independent Consumer and Producer is not entirely correct either. There is, in fact, some interaction between the Consumer and Producer ends of a process -- but not in a way that we can really reason about.

Still... I feel like allowing the user to read only stdout or only stderr is asking for more trouble than allowing the user to call only readProcess vs. only writeProcess.

Unfortunately, it is extremely easy to deadlock when calling a unix process that streams both inputs and outputs. I wonder if there is another way we can wrap a process into a pipe that is safer?

- jeremy

Daniel Díaz

Dec 4, 2013, 4:23:55 PM
to haskel...@googlegroups.com
To avoid the possibility of filling the output buffers and blocking the process, while still keeping separate stdout and stderr producers, perhaps two temporary files could be created. Stdout would be written to one and stderr to the other. Clients would read the temporary files as they are being written, but would always block before reaching the "not yet written" zone (we would ensure this by keeping track of the number of bytes written to each file.)

Or perhaps these intermediate buffers could be kept in memory, if they didn't grow too big.

Could this work?

Gabriel Gonzalez

Dec 4, 2013, 1:27:16 PM
to haskel...@googlegroups.com, Daniel Díaz
If you want to keep the buffers in memory, this is exactly what `pipes-concurrency` does.  Just use `spawn` to create a buffer that you can write to and read from at your leisure.  It lets you specify a bounded or unlimited buffer size.

This will also make sure that consumers of the buffers properly wait for more input when they exhaust the buffer and terminate when the buffer is done.  You don't need to keep track of the number of bytes written to the buffer.
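
As a rough illustration (the function name, the `Bounded 64` size, and the `hOut` handle from `createProcess` are all just assumptions for the sketch), the stdout side could be buffered like this:

    import Control.Concurrent.Async (async, wait)
    import Pipes
    import qualified Pipes.ByteString as PB
    import Pipes.Concurrent (Buffer(Bounded), spawn, fromInput, toOutput)
    import qualified Pipes.Prelude as P
    import System.IO (Handle)
    import System.Mem (performGC)

    bufferStdout :: Handle -> IO ()
    bufferStdout hOut = do
        (output, input) <- spawn (Bounded 64)  -- hold at most 64 chunks in memory
        -- One thread drains the handle into the buffer as fast as the child writes...
        feeder <- async $ do
            runEffect $ PB.fromHandle hOut >-> toOutput output
            performGC  -- lets the mailbox seal once the feeder is done
        -- ...while the consumer reads from the buffer at its leisure.
        runEffect $ fromInput input >-> P.print
        wait feeder

The stderr side would get its own buffer the same way, and the two resulting `Input`s can then be merged or consumed independently.  With an `Unbounded` buffer the child never blocks on a full pipe; a `Bounded` one caps memory use at the cost of eventually applying back-pressure.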

I'm not certain this is the best approach, because I haven't had time to think about it yet, but I just wanted to mention this potential solution to what you just described.

Levi Pearson

Dec 5, 2013, 1:51:13 PM
to haskel...@googlegroups.com
On Tuesday, September 24, 2013 9:48:16 PM UTC-6, Jeremy Shaw wrote:

[snip: quoted text of Jeremy's message above]

I am still relatively new to thinking about pipes types, but for what it's worth, I have some vague ideas that *might* be realizable as an abstraction over pipes.

If you haven't seen the Scheme Shell (scsh) before, I think it's a great example of designing an elegant API for the UNIX process/pipeline model, at least as far as such a thing can be said to be elegant.  The manual for it (and it's worth at least reading the introduction, which is hilarious) is here:  http://www.scsh.net/docu/html/man.html

To translate the concept roughly into Haskell with pipes, you'd model the interaction with file descriptors fairly directly: instead of Process directly having Producers and Consumers, you'd have a PosixStream type to model a file descriptor, which could be represented as `data PosixStream = ReadOnly Producer .. | WriteOnly Consumer .. | Duplex Producer Consumer ..` or something like that. A Process would then have an IntMap from handles to PosixStream values to represent its open file descriptors.
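
Rendered very roughly as Haskell types (every name here is invented, just to show the shape of the idea):

    import Data.ByteString (ByteString)
    import Data.IntMap (IntMap)
    import Pipes (Consumer, Producer)
    import Pipes.Safe (SafeT)

    -- One of the child's open file descriptors, as seen from the Haskell side.
    data PosixStream
        = ReadOnly  (Producer ByteString (SafeT IO) ())
        | WriteOnly (Consumer ByteString (SafeT IO) ())
        | Duplex    (Producer ByteString (SafeT IO) ())
                    (Consumer ByteString (SafeT IO) ())

    -- A process owns a table from file-descriptor numbers to streams.
    data Process = Process
        { processStreams :: IntMap PosixStream }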

The real meat would be in a set of combinators for manipulating the way that the file descriptor PosixStream values get combined when launching new processes.  The default process "pipeline" combinator would do the same sort of hooking-up of the underlying PosixStream Producer/Consumers as pipelining in sh does, but it would be built from lower-level combinators that hook the stdin/stdout/stderr PosixStreams of two processes together in an appropriate manner.

In order to deal with the case where you want to ignore stderr, you'd build a 'pipeStdin' combinator that would hook up the stdin/stdout PosixStreams but use a 'capping' combinator on the stderr one, which would model a redirection to /dev/null.  It would also be nice to be able to arbitrarily dup and redirect PosixStreams.

Like I said, this is just a high-level handwavey idea, but I think it would make a very nice interaction model with Posix processes and allow you to deal with some of the complexity in the underlying model in a relatively pleasant way.  Feel free to disregard it if it's completely infeasible or otherwise not to your liking.

Daniel Díaz

Feb 16, 2014, 5:10:02 PM
to haskel...@googlegroups.com, Daniel Díaz
I recently had to work with Ruby's 'Open3' package, and that got me thinking about this thread again. 

I have cobbled together a few helper functions and wrappers over System.Process that implement some of the ideas floated in this thread, like avoiding deadlock by reading continuously from the handles and buffering the results in memory. I've also tried to avoid throwing exceptions, making errors explicit in the type signatures.



-- stdout and stderr to different files, using pipes-safe.
example1 :: IO (Either String ((),()))
example1 = ec show $
   execute2 (proc "script1.bat" [])
            show  
            (consume "stdout.log")
            (consume "stderr.log")
   where
   consume file = surely . safely . useConsumer $
                      S.withFile file WriteMode toHandle

The code is not exactly well tested, I must say.

Any comments or suggestions welcome!

Gabriel Gonzalez

Feb 22, 2014, 7:49:15 PM
to haskel...@googlegroups.com, Daniel Díaz, Jeremy Shaw
Sorry for the delay responding to this.  I just needed some time to think about what the appropriate API should be that would resolve many of the issues that Jeremy raised in the last thread on this subject.

I think a lot of the concurrency issues that Jeremy raised can be handled by using `Input`s and `Output`s from `pipes-concurrency` instead of `Producer`s and `Consumer`s.

For example, let's say that within the callback our `stdout` and `stderr` have the following types:

    stdout :: Input ByteString
    stderr :: Input ByteString

Then it's easy to merge the two streams and preserve their relative ordering by using the `Alternative` instance for `Input`s:

    stdBoth :: Input (Either ByteString ByteString)
    stdBoth = fmap Left stderr <|> fmap Right stdout

The other advantage of this `pipes-concurrency` approach to modeling the handles is that the user can pass a `Buffer` specifying how to handle buffering between the process and the Haskell program.  This allows the user to tune how much input to buffer before the process should block, using `Unbounded` or `Bounded` buffers, for example.

I will try to write up a sketch of what I have in mind.

Gabriel Gonzalez

Feb 22, 2014, 9:14:30 PM
to haskel...@googlegroups.com, Daniel Díaz
Alright, I wrote up what I had in mind and you can find my draft here:

https://github.com/Gabriel439/pipes-process



Patrick Wheeler

Feb 22, 2014, 9:50:10 PM
to haskel...@googlegroups.com, Daniel Díaz
Thanks for putting this out. I just got through a few experiments with Shelly, pipes-shell, and Jeremy Shaw's pipes-process repo.

Using `cmdOut <|> cmdErr` to mix cmdOut and cmdErr works until the cmdOut buffer is constantly full, at which point cmdErr gets held back until cmdOut empties.

I have heard of non-determinism in the order of cmdOut and cmdErr causing errors in the shell-scripting world, though rarely.  Does anyone know of a workaround for this?  A few Google searches tell me that you cannot depend on the ordering of the two, and if there is a workaround it is not standard knowledge.

I did learn that cmdOut is buffered while cmdErr normally is not.  This might be better modeled by giving cmdErr the right of way over cmdOut, as `cmdErr <|> cmdOut`; this way the error is more likely to end up close to the associated output, if any.

Patrick

Gabriel Gonzalez

Feb 22, 2014, 11:08:58 PM
to haskel...@googlegroups.com, Patrick Wheeler, Daniel Díaz
Yeah, prioritizing stderr makes sense in general.

Daniel Díaz

Sep 14, 2014, 4:47:27 PM
to haskel...@googlegroups.com, diaz.c...@gmail.com
(Sorry if this is too old a thread to resuscitate.)

I have kept working on my process-streaming package. Previous versions had an ugly API; hopefully the new one (0.5.0.x) is a bit more intuitive. I would like some feedback on that: http://hackage.haskell.org/package/process-streaming

To avoid the "one of the standard streams is not drained and causes a deadlock" issue, I defined a "Siphon" type that represents a computation that always completely drains a Producer. stdout and stderr can only be consumed through Siphons.

Consuming stdout and stderr combined in the same stream is supported. There's also support for a (limited) form of process pipelines.

Gabriel Gonzalez

Sep 15, 2014, 5:57:51 PM
to haskel...@googlegroups.com, diaz.c...@gmail.com
For some reason the documentation is not building.  If you want to manually upload your documentation, you can use this, which has worked well for me:

https://gist.github.com/Fuuzetsu/8276421

Michael Thompson

Sep 15, 2014, 9:06:07 PM
to haskel...@googlegroups.com, diaz.c...@gmail.com
Daniel, I don't know if it is worth the trouble, but I think you can evade requiring transformers-4 and the high version of mtl by making a suitable conditional in the .cabal file, so that people with transformers-3 (which is a boot package) end up getting transformers-compat. Oh, and then requiring `Control.Monad.Trans.Except` rather than the mtl `Control.Monad.Except`, since you only seem to need the `ExceptT` things from the transformers package.

Indeed if I follow what lens is doing in its cabal file, it is possible to write:

  transformers        >= 0.2 && < 0.5,
  transformers-compat >= 0.3 && < 1,

transformers-compat will then do the right thing with people who are already using
the advanced transformers.  I suppose I should be testing this rather than speculating...

yours Michael

Michael Thompson

Sep 15, 2014, 9:20:34 PM
to haskel...@googlegroups.com, diaz.c...@gmail.com
`optparse-applicative` also does it; its dependency list is:

        build-depends:    base == 4.*,
                          transformers >= 0.2 && < 0.5,
                          transformers-compat == 0.3.*,
                          process >= 1.0 && < 1.3,
                          ansi-wl-pprint >= 0.6 && < 0.7

In any case, I built `process-streaming` with this device.

Daniel Díaz

Sep 16, 2014, 3:07:51 AM
to haskel...@googlegroups.com, diaz.c...@gmail.com
Would you mind submitting a pull request with the changes that got it working?

As for lens, I only depend on it for the test suite, although I provide lens-compatible prisms for CreateProcess.

Michael Thompson

Sep 16, 2014, 9:57:38 AM
to haskel...@googlegroups.com, diaz.c...@gmail.com
Inevitably, using the web interface, I ended up making two pull requests for what is intuitively one patch.  With these patches I can build `process-streaming` and its tests either in a sandbox by itself or in my little pipes-y stuff sandbox, with transformers-0.3 among my global packages. The only disappointment is that it doesn't allow the very latest text, partly because `pipes-attoparsec` and `pipes-text` haven't caught up with it yet. (There is also the curiosity that the pipes repo is at 4.1.1.1 but Hackage has already jumped to 4.1.1.2.)

Michael Thompson

Sep 16, 2014, 9:59:47 AM
to haskel...@googlegroups.com, diaz.c...@gmail.com
Oh, I meant to add that I only mentioned `lens` and `optparse-applicative` as reputable libraries that use the `transformers-compat` shim. I only noticed later that your testing material actually uses both of them.

Daniel Díaz

Sep 16, 2014, 3:52:28 PM
to haskel...@googlegroups.com, diaz.c...@gmail.com
Many thanks!

I also tried Gabriel's script, and it worked, so now there are haddocks for 0.5.0.2.