Could you elaborate on why your first type doesn't work out?
Doesn't your separation with readProcess and writeProcess mean you're spawning two processes?
Oh, I see. Doesn't it need to be created in IO though? What if you do want multiple processes?
The reference library for this sort of thing is `pipes-network`. It demonstrates the two most common idioms:
* Acquire the "handle" in IO using the with idiom and then spawn producers/consumers from that shared handle. See `Pipes.Network.TCP` for this idiom.
* Acquire the "handle" in SafeT within a pipeline. However, this usually limits you to one-way transfer (i.e. only reading or only writing). See `Pipes.Network.TCP.Safe` for this idiom.
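For illustration, the first idiom looks roughly like this -- a minimal sketch, assuming a server at a made-up host and port and using 4096-byte read chunks:

    {-# LANGUAGE OverloadedStrings #-}

    import Pipes
    import qualified Pipes.ByteString as PB
    import qualified Pipes.Network.TCP as TCP

    main :: IO ()
    main =
        -- `connect` is the with-style acquisition: the socket is
        -- opened in IO and closed when the callback returns.
        TCP.connect "localhost" "4000" $ \(sock, _addr) -> do
            -- A Consumer and a Producer are both spawned from the
            -- same shared handle, so two-way transfer is possible.
            runEffect $ yield "hello\n" >-> TCP.toSocket sock
            runEffect $ TCP.fromSocket sock 4096 >-> PB.stdout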
>>>>> Gabriel Gonzalez <gabri...@gmail.com> writes:
> readProcess :: Process -> Producer (Either ByteString ByteString) (SafeT m) ()

Wouldn't it be better to give two Producers, one for stdout and one for stderr? They can be written to at the same time by the process, can't they? It would then seem odd that they can only be processed in sequence.
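For what it's worth, the split view being proposed could be carved naively out of the interleaved Producer quoted above. This is only a sketch against that hypothetical signature, and it assumes a Left = stderr, Right = stdout convention that the type alone does not pin down:

    import Data.ByteString (ByteString)
    import Pipes

    stdoutOf, stderrOf
        :: Monad m
        => Producer (Either ByteString ByteString) m r
        -> Producer ByteString m r
    -- Re-yield the Right (stdout) chunks, silently draining the Lefts.
    stdoutOf p = for p (either (const (return ())) yield)
    -- And the mirror image for stderr.
    stderrOf p = for p (either yield (const (return ())))

Note that each of these still drains the entire stream and throws half of it away, so you cannot run both against a single live process; that is exactly the territory the next message gets into.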
Alas, the unix process model is so fundamentally stupid that I think we really need both variants. Many command-line apps are run from the command line, where stdout and stderr are interleaved in a somewhat arbitrary manner. But there is some time-based information there, even if there is a bit of fuzziness. For example, an app could print several lines of success to stdout, some error message to stderr, and more success to stdout. So the stuff on stderr is presented in the context of what happened around the same time on stdout. If you treat them as two completely independent sources, then you lose that temporal context.

So I think it is useful to have a version that does interleave the stdout/stderr in whatever order it seems to get them. In theory you can just use partitionEithers to separate them if you don't want them interleaved like that, but that is not always the most convenient thing to do. It's clear that there are times when having stdout and stderr be separate Producers would be the most convenient solution.

On the other hand, I think there is a real danger in having two Producers, one for stdout and one for stderr. Let's say you only care about stdout and you don't do anything with stderr. Since you are ignoring it, nobody is reading from stderr, and now stderr is at risk of blocking due to having a full output buffer, and the whole process may then block. Even worse, maybe you do care about stdout and stderr, but you try to do something where you first write all of stdout to a file, and then all of stderr. You could still end up blocked. If you want to safely process stdout and stderr separately, then I think you must do that in separate threads so that you don't deadlock.

I think it is necessary that we always read data from stdout and stderr when it becomes available, though we can choose to discard one or the other if we don't actually want it.

Now, we should also note that a similar problem exists in the current code. If we start the process and use only writeProcess, but not readProcess, then the process might block trying to write output, and the input will never get read. So modeling a process as a Pipe does not work, but modeling it as an independent Consumer and Producer is not entirely correct either. There is, in fact, some interaction between the Consumer and Producer ends of a process -- but not in a way that we can really reason about.

Still, I feel like allowing the user to read only stdout or only stderr is asking for more trouble than allowing the user to call only readProcess vs. only writeProcess. Unfortunately, it is extremely easy to deadlock when calling a unix process that streams both inputs and outputs. I wonder if there is another way we can wrap a process into a pipe that is safer?
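To make the "separate threads" point concrete, here is a minimal sketch using the async package, assuming you already hold the child's stdout and stderr Handles (e.g. from System.Process.createProcess with std_out = CreatePipe and std_err = CreatePipe); the log file names are made up:

    import Control.Concurrent.Async (concurrently)
    import Pipes
    import qualified Pipes.ByteString as PB
    import System.IO (Handle, IOMode (WriteMode), withFile)

    -- Drain stdout and stderr concurrently so that neither OS pipe
    -- buffer can fill up and block the child process.
    drainBoth :: Handle -> Handle -> IO ()
    drainBoth hOut hErr =
        withFile "out.log" WriteMode $ \fOut ->
            withFile "err.log" WriteMode $ \fErr -> do
                _ <- concurrently
                    (runEffect $ PB.fromHandle hOut >-> PB.toHandle fOut)
                    (runEffect $ PB.fromHandle hErr >-> PB.toHandle fErr)
                return ()

Running the two runEffect calls in sequence instead of concurrently reproduces exactly the write-all-of-stdout-then-all-of-stderr deadlock described above.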
transformers-compat >= 0.3 && < 1,
transformers-compat will then do the right thing for people who are already using a more advanced transformers. I suppose I should be testing this rather than speculating...

yours, Michael
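For reference, the constraint under discussion would sit in the package's build-depends roughly like this -- a sketch only, with the other dependencies and their bounds as placeholders:

    build-depends:
        base                >= 4   && < 5,
        transformers        >= 0.2 && < 0.6,
        transformers-compat >= 0.3 && < 1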