11/10 Reviews - Pip


Rodrigo

Nov 9, 2009, 6:13:07 PM
to CSCI2950-u Fall 09 - Brown
Please post your reviews here.

joeyp

Nov 9, 2009, 11:49:01 PM
to CSCI2950-u Fall 09 - Brown
Pip: Detecting the Unexpected in Distributed Systems
Reynolds et al
NSDI 2006

The paper presents a framework and techniques for specifying expected
behavior of distributed systems and comparing it to the actual
behavior. The actual behavior is witnessed and checked against the
specification via instrumentation, and the specification uses a custom
expectation language that is tailored for modeling communication in
distributed systems.

The results of the paper indicate that this technique of checking
against a specification (they more often use the terms expectations or
expected behavior) can help to find complicated or hard-to-find bugs.

I like the general model of taking specifications and implementations
and moving them closer to one another. In general, it is hard to get
a specification right for an entire large system, and even harder to
fit specs in any language to existing systems. However, having such a
spec of expected behavior is a very powerful tool. What is really
needed is a way to specify what is known, and incrementally move the
implementation closer to a faithful instance of the expected
behavior. Pip works towards having this interplay work well in
distributed systems.

Pip builds on existing work in specification and domain specific
languages. There are several efforts on these fronts in existing
literature, and Pip is an instance of using instrumentation to
validate existing large systems against such specifications. The
paper also comments on existing tracing efforts that attempt to track
tasks and messages through distributed systems for later processing.
Pip distinguishes itself in its precise nature - it does not use
statistical methods, but instead uses a knowable specification along
with instrumentation to make assertions.

It also has the obvious distinction of requiring instrumentation with
pretty decent knowledge of the system in question. Someone who wants
to use Pip at least needs to know when logical tasks are starting, and
which path the computation should be on. Only tasks and messages that
a programmer is interested in need to be instrumented, but all
messages and notices that are relevant along that path need to be
annotated. Although, based on what they say about automating
annotations in the case of Mace, this might be relatively easy.
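
To make this concrete, here is a minimal sketch of the kind of
annotations involved. This is illustrative Python, not Pip's actual
annotation library (which is written in C); the function names
(set_path_id, start_task, notice) are hypothetical stand-ins for its
calls.

    import time

    TRACE = []  # events Pip would write to a per-node trace file

    def set_path_id(pid):
        # every event is attributed to an explicit path identifier
        TRACE.append(("set_path", pid, time.time()))

    def start_task(name):
        TRACE.append(("start_task", name, time.time()))

    def end_task(name):
        TRACE.append(("end_task", name, time.time()))

    def notice(msg):
        # a log-style statement attached to the current path
        TRACE.append(("notice", msg, time.time()))

    def handle_read(request_id):
        # one logical task along one causal path
        set_path_id(request_id)
        start_task("handle_read")
        notice("cache miss")
        end_task("handle_read")

    handle_read(42)
    print(TRACE)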

The results section was interesting. For reproducibility I could go
try and find the same bugs they found. It seems like at least the FAB
test was done in this way - someone knew about the bug ahead of time
and was wondering if they'd find it. Maybe it wouldn't fit in the
paper, but it would have been nice to see one of the checking trees
for some of the bugs they found.

The expectation language has really strong parallels with linear
temporal logic. This is a logic that allows expressing things like
"this will eventually be true" or "A will be true until B is true."
It can be combined with automata to do model checking. I was reminded
of it in the case of futures, maybes, and repeats in the language
described in the paper.
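
As a sketch of that parallel (my own example, not from the paper), here
is how "eventually P" and "A until B" can be checked over a finite
recorded trace:

    # "eventually P" and "A until B" over a finite trace, in the spirit
    # of Pip's future/maybe/repeat constructs; not Pip syntax.

    def eventually(trace, pred):
        return any(pred(e) for e in trace)

    def until(trace, a, b):
        # a must hold at every event before the first event where b holds
        for e in trace:
            if b(e):
                return True
            if not a(e):
                return False
        return False  # b never held

    trace = ["send", "wait", "wait", "recv"]
    print(eventually(trace, lambda e: e == "recv"))                    # True
    print(until(trace, lambda e: e != "recv", lambda e: e == "recv"))  # True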

Marcelo Martins

Nov 9, 2009, 8:21:08 PM
to brown-cs...@googlegroups.com
Paper Title "Pip: Detecting the Unexpected in Distributed Systems"

Author(s) Patrick Reynolds et al.

Date USENIX NSDI'06, May 2006

Novel Idea

Pip is a collection of programs that work together to gather, check, and
display the behavior of distributed systems. It provides an expectation
language for expressing users' expectations, a visualization tool for
inspecting application misbehavior, and automatic generation of
expectations from system traces.
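
For flavor, here is roughly what one recognizer might express, encoded
as plain Python data rather than Pip's actual declarative syntax; the
field names and the read-request example are my own invention.

    # A recognizer paraphrased as plain data. Pip's real language is a
    # declarative DSL; this encoding only illustrates the idea.
    read_request = {
        "kind": "validator",  # matched paths count as expected behavior
        "threads": [
            # (thread role, ordered events expected on that thread)
            ("Coordinator", ["task:handle_read", "send:Replica",
                             "recv:Replica"]),
            ("Replica", ["recv:Coordinator", "task:read_block",
                         "send:Coordinator"]),
        ],
        "limits": {"task:handle_read": {"cpu_ms": 5}},  # performance bound
    }
    print(read_request["kind"])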

Main Result(s)

Pip can detect both structural and performance bugs in a
variety of distributed applications with minimal developer input (via
annotations and expectations). Section 5 shows a few instances where Pip
was successfully deployed.

Impact

One of the key points of Pip is that it provides a framework where
system developers can describe their expectations using a minimalistic
domain-specific language. With minimal input and rapid checking and
display of application misbehavior, a developer can quickly find and fix
bugs in his/her programs. Another important concept of Pip is the
departure from statistical inference techniques, which are said to limit
the granularity of bug detection.

Evidence

Section 5 presents anecdotes on how Pip was applied to several
distributed systems and how it helped locate hard-to-find bugs.

Prior Work

Section 6 categorizes similar approaches for the analysis of distributed
systems into three groups: 1) path analysis tools (Project 5, Magpie,
Pinpoint); 2) automated expectation checking (PSpec, MC, Paradyn); 3)
domain-specific languages (Estelle, pi-calculus, join-calculus, P2).

Competitive work

The authors try to emphasize how Pip surpasses previous systems in terms
of granularity and expressiveness. Such systems rely on statistical
inference and/or black-box testing (Magpie, Project 5, Pinpoint).

Reproducibility

Pip's code is available from the authors' page and most of the
applications that were analyzed are also accessible. The Linux kernel
version is not up-to-date, but one can also download a virtual machine
image with the system already installed and ready to be deployed. The
main obstacle here is instrumenting the applications with the same
instructions that the authors used.

Criticism

1.) Why didn't the authors compare Pip with the other systems mentioned
in the related work? A comparison including LoC, detection rate,
programming difficulty, etc. would be much more useful. Without a
comparison, it becomes difficult to say whether Pip really offers any
innovation or improvement beyond what the authors promise in the first
sections of the article.

2.) The system only covers single-application distributed systems.

Ideas for future work

1.) Add binding support for web-based applications, generally written in
PHP, Ruby on Rails, Perl and Python.

yash

Nov 9, 2009, 11:55:29 PM
to CSCI2950-u Fall 09 - Brown
Paper Title: Pip: Detecting the Unexpected in Distributed Systems

Authors: Patrick Reynolds and colleagues.

Novel Idea: This paper introduces Pip, a framework for detecting
errors in distributed systems by comparing actual behaviour with the
expected behaviour.

Main Result: Pip easily detects both structural and performance errors
in distributed systems. It compares the actual behaviour of
the system with the expected one. It is based on causal paths and uses
explicit path identifiers and programmer-written expectations to
detect unexpected errors.

Impact: Pip provides a simple way to study complex distributed systems,
and the combination of expectations and visualization helps to
detect errors in complex multi-node systems.

Evidence: The paper describes the successful application of Pip to
many systems, including FAB, SplitStream, Bullet & RanSub, and Pip
could detect unexpected behaviour in each of them easily.

Prior work: There are other systems, like Project 5, Magpie, and
Pinpoint, which work similarly to Pip, but Pip goes further: along with
relying on causal paths, it also depends on programmer-written
expectations to detect errors in large distributed systems.

Reproducibility: Yes, it can be reproduced.

Criticism: The paper mentions other systems, like Project 5, which
differ in certain aspects from Pip, but the results section does not
give a proper comparison between the output of Pip and these other
systems to show the superiority of Pip.

Juexin Wang

Nov 10, 2009, 12:02:14 AM
to CSCI2950-u Fall 09 - Brown
Paper Title  
Pip: detecting the unexpected in distributed systems
 
Author(s)  
Patrick Reynolds, Charles Killian, Janet L. Wiener, Jeffrey C. Mogul, Mehul A. Shah, Amin Vahdat

Date  
May, 2006
 
Novel Idea  
Comparing actual behavior and expected behavior to expose structural errors and performance problems in distributed systems.
 
 
Main Result(s)  
Designed a domain-specific expectation language that is more expressive than general ones, resulting in expectations that are easier to write and maintain.
Developed a suite of tools to gather, check, and display the behavior of distributed systems.
Used explicit path identifiers and programmer-written expectations to check program behavior.
Impact  
Provides a simple and flexible way to express and check system behaviors. The combination of expectations and visualization helps users explore and learn unfamiliar systems.
 
Evidence  
- Pip constructs an application's behavior model from generated events.
- The basic unit of application behavior in Pip is the path instance, each of which is an ordered series of timestamped events.
- Programs linked against the Pip annotation library generate events and resource measurements as they run.
- Programmers write external descriptions of expected program behaviors, which consist of recognizers and aggregates, using a declarative language.
- Recognizers are descriptions of structural and performance behavior; aggregates are assertions about properties of sets of paths.
- Pip generates a search tree from expectations. The trace checker operates as a nested loop, matching results from the path database against expectations (see the sketch below).
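
A rough Python sketch of that nested loop, assuming the paper's
validator/invalidator distinction for recognizers; the data layout is my
own simplification.

    # Each path instance is matched against every recognizer; a path is
    # unexpected if no validator accepts it or any invalidator does.
    def check(paths, recognizers):
        unexpected = []
        for path in paths:                  # outer loop: path instances
            matched = [r for r in recognizers if r["matches"](path)]
            validated = any(r["kind"] == "validator" for r in matched)
            invalidated = any(r["kind"] == "invalidator" for r in matched)
            if not validated or invalidated:
                unexpected.append(path)
        return unexpected

    recognizers = [
        {"kind": "validator",
         "matches": lambda p: p == ["request", "work", "reply"]},
        {"kind": "invalidator",
         "matches": lambda p: "timeout" in p},
    ]
    paths = [["request", "work", "reply"], ["request", "timeout"]]
    print(check(paths, recognizers))  # [['request', 'timeout']]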
 
Prior Work
N/A
 
Competitive work  
Pip is claimed to be the first system to combine path analysis and automated expectation checking.
 
Reproducibility  
 N/A
 
Question  
Is this a white-box or a black-box approach? I think it is more black-box, because it compares results rather than looking inside the system. But if so, how could we identify/fix the bugs after finding where a bug may exist?
 
Criticism  
Since it combines path analysis and automated expectation checking, the authors should show us the tradeoffs of this combination.
 


--
J.W
Happy to receive ur message

Sunil Mallya

Nov 9, 2009, 7:14:32 PM
to CSCI2950-u Fall 09 - Brown
Paper Title
Pip: detecting the unexpected in distributed systems
Author(s)
Reynolds, Patrick, Killian, Charles, Wiener, Janet L., Mogul, Jeffrey
C., Shah, Mehul A., and Vahdat, Amin
Date
NSDI'06: 2006 San Jose, CA

Novel Idea
To provide a more robust solution to compare expected system behavior
and actual system behavior over existing debuggers and system
profilers.

Main Result(s)
PIP is an infrastructure to for comparing the actual system behavior
with the expected behavior which helps to identify structural and
performance bugs in a system. Pip classifies system behavior as valid
or invalid and groups behaviors into sets which can be reasoned, to do
this pip provides a declarative language to express the expectations
of system structure, communication, resource consumption and more. The
authors claim that the combination of expectations and visualizations
can help programmers explore and learn about unfamiliar systems.
Pip works by tracing the behavior of running applications that checks
the behavior by using explicit path identifiers and programmer written
expectations. The pip behavior model defines 3 types of events:
tasks, messages and notices which are analogous to real world systems
like tasks correspond to event-handling routines, messages to network
communication and notices are similar to event logs. Using these 3
types of events expectations are declared and analyzed.
Finally the authors claim that Pip can generate any needed annotations
automatically for applications constructed using a supported
middleware layer.
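
A small sketch of the three event types as plain records; the field
names here are my own assumptions, since the paper fixes only the
categories.

    from dataclasses import dataclass

    @dataclass
    class Task:        # e.g. an event-handling routine
        name: str
        start: float
        end: float     # resource usage (CPU time etc.) is also recorded

    @dataclass
    class Message:     # network communication between threads/hosts
        src: str
        dst: str
        size: int

    @dataclass
    class Notice:      # an event-log-style statement on the path
        text: str

    path_instance = [
        Task("handle_read", 0.0, 0.004),
        Message("node1", "node2", 512),
        Notice("replica chosen"),
    ]
    print(path_instance)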

Evidence
They evaluate pip by implementing pip on 4 different distributed
systems – FAB, SplitStream, Bullet, RanSub in fair detail.
They are able to find bugs in all the 4 systems tested but only with a
good understanding of these systems. One interesting thing about pip
was that It was able to distinguish a single bug (splitstream) which
was caused by different entities.

Prior Work & Competitive work
Project 5, Magpie, Pinpoint

Criticism
The general feeling I get is there need to be a good understanding of
the working of these systems before pip can be applied. Hence this
cannot be used as a universal tracing tool and this looks like it
requires a lot of overhead in terms of implementation.
If a system consists of many propriety systems then pip can’t be a
helpful tool.

Andrew Ferguson

Nov 9, 2009, 11:37:00 PM
to brown-cs...@googlegroups.com
Paper Title
"Pip: Detecting the Unexpected in Distributed Systems"

Authors
Reynolds, Patrick, Killian, Charles, Wiener, Janet L., Mogul, Jeffrey
C., Shah, Mehul A., and Vahdat, Amin

Date
NSDI 2006

Novel Idea / Main Results
Pip is a framework and infrastructure which combines automated tracing
of distributed systems with expectation/rule checking. The authors
implement Pip and discuss situations in which it would be useful, such
as debugging distributed systems (in contrast to X-Trace, which seems
like it might be more applicable to a wider variety of systems). The
bulk of the paper is spent detailing the language for expressing the
programmer's expectations about the system.

Impact
Unknown.

Evidence
The authors illustrate the use of Pip on four different systems and
describe some interesting bugs which were found by using it. It's not
clear to me that only Pip could have found those bugs, maybe better
unit test coverage or in-line tests could have also identified them,
but this is not to detract from the apparent usefulness of Pip.

Prior Work
Pip builds on the prior work in the two classes of tools it combines:
Project 5, Magpie, Pinpoint, and expectation checkers such as PSpec,
meta-level compilation, and Pardyn.

Competitive Work
I think Pip provides something that other systems do not, although it
might be interesting to apply Pip's expectation checking capabilities
on top of different tracing platforms such as X-Trace or Project5.

Reproducibility
I think the Pip source code is available; if so, it should be
reproducible.

Question
Do we think Pip can be used to detect situations like dead/live-locks?
If a programmer is smart enough to realize a deadlock might happen,
they she will probably program to prevent it -- but if she doesn't
realize that it could happen, then she probably won't write Pip rules
to look for it...

Criticism
I would have liked to see Pip applied to some more common applications
-- prior to reading this paper, I hadn't heard of FAB, SplitStream,
Bullet, or RanSub, and so wasn't able to evaluate Pip as well as if I
had known the applications. For some of the errors which Pip detects,
it would seem that it might have been more useful if the programmer
had included checks for those conditions in the code -- i.e., with
assert() statements or better error messages. The performance
evaluation of Pip (section 3.5.2) is not detailed enough.
Finally, the authors place a lot of faith in Pip's ability to generate
expectations automatically (it is at the core of their argument about
why Pip would be easy to use). However, I'm not sure that non-authors
would place the same amount of faith in such automated expectations,
or even that they would catch the very bugs for which I would want to
employ Pip (and thus necessitating that I develop the expectations).

Ideas for further work
Can rules for Pip be extracted from the source code automatically? Or
maybe from protocol RFCs?

qiao xie

Nov 10, 2009, 12:02:33 AM
to brown-cs...@googlegroups.com
Paper Title: Pip
Author: Patrick Reynolds, ...
Date: 2006

[Novel Idea]
Pip allows programmers to express expectations about system behavior in a declarative language and automatically checks them against the real behavior for the purpose of finding system bugs.
 
[Main Result]
They designed a declarative language to describe the expected behavior of large distributed systems.
They implemented a set of tools for gathering events, checking behavior validity, visualizing results, and generating expectations from system traces.
They used Pip to find bugs in several distributed systems, including FAB, SplitStream, Bullet, and RanSub.
 
 
[Impact]
Another convenient technique for debugging distributed systems.
 
[Evidence]
First they present the novel idea of Pip. Then they give an overview of the system, followed by a detailed description of the declarative language they designed. Finally, they show that by applying Pip to several distributed systems they were able to find some system bugs.
 
[Reproducibility]
Yes. It is available online.

[Question]
What kind of bugs cannot be found by Pip?

[Criticism]
It is not clear in the paper how to use Pip to find bugs in a distributed system.
 
 
 



Xiyang Liu

Nov 9, 2009, 11:47:25 PM
to CSCI2950-u Fall 09 - Brown
Paper Title
Pip: Detecting the Unexpected in Distributed Systems

Author(s)
Patrick Reynolds, Charles Killian, Janet L. Wiener, Jeffrey C. Mogul,
Mehul A. Shah, and Amin Vahdat

Date
In Proc. 3rd Symp. on Networked Systems Design and Implementation
(NSDI), San Jose, CA, May, 2006

Novel Idea
Pip verifies the correctness and monitors the performance of
applications running on distributed systems by comparing their actual
behavior with expected behavior. Pip gathers trace files generated by
annotations added to applications' source code and reconciles the
trace log into path instances. Every path instance is verified against
the expectations defined by programmers. The authors also propose a
declarative language to simplify the work of defining expectations.
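
A sketch of that reconciliation step as I understand it: group per-node
trace events by their explicit path identifier, then order each group by
timestamp. The code is my own illustration, not Pip's implementation.

    from collections import defaultdict

    events = [
        # (path_id, timestamp, node, event), merged from per-node traces
        (7, 0.02, "B", "recv"),
        (7, 0.01, "A", "send"),
        (9, 0.03, "A", "start_task"),
    ]

    paths = defaultdict(list)
    for pid, ts, node, ev in events:
        paths[pid].append((ts, node, ev))

    # each path instance is an ordered series of timestamped events
    path_instances = {pid: sorted(evs) for pid, evs in paths.items()}
    print(path_instances[7])  # [(0.01, 'A', 'send'), (0.02, 'B', 'recv')]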

Main Result(s)
Pip was applied to several distributed applications. With the help of
the behavior explorer and expectation checker, the authors detected 18
bugs in the 5 evaluated applications and fixed most of them. The
bugs involve both structural errors and performance inefficiencies.

Impact
Pip is an effective debugging and monitoring tool for distributed
applications. It accurately finds misbehavior and saves programmers
from analyzing large volumes of logs.

Evidence
The authors applied Pip to 5 applications and provided the metrics
used to analyze them. Concrete bugs found in each application were
also presented.

Competitive work
Pip combines causal path analysis and expectation checking to locate
misbehavior. It is compared to related work in these two
categories. Other causal path analysis tools, such as Project 5 and
Magpie, reconstruct paths and observe misbehavior based on statistics
and inference. Other expectation-checking tools, such as meta-level
compilation and Paradyn, focus on single nodes and do not evaluate
performance along causal paths across multiple nodes.

Reproducibility
Yes. We can find bugs observed by Pip if annotations and expectations
are correctly defined.

Question
How do subsequent events identify their path id, especially when these
events are handled on different nodes?

Criticism
Pip requires modifying the source code of the applications it
evaluates. This is restrictive, given that distributed systems are
usually comprised of several modules/components. If part of the system
is a black box and part of it is accessible, it is not possible for Pip
to skip the black boxes, interrelate the open components, and evaluate
the system as a whole.

Ideas for further work
It may be useful to combine the techniques of Project 5 and Pip to
monitor a hybrid distributed system that is part black-box and part
open. Such a combination could provide accurate expectation checking
for accessible components via Pip, infer the paths and misbehavior of
black-box components via Project 5, and produce a system-wide causal
path and diagnosis by combining the two.



Kevin Tierney

Nov 9, 2009, 9:57:40 PM
to brown-cs...@googlegroups.com
Title: Pip: Detecting the Unexpected in Distributed Systems
Author(s): Reynolds, et al.
Date: NSDI 2006

Novel Idea
Pip allows people to describe how they expect their system to operate
in a declarative language. The pip model at its core consists of
paths, which are a series of events, and expectations. Pip is able to
handle the parallelism inherent in a distributed system using "thread
patterns", allowing a programmer to express event non-deterministic
event sequences declaritively.

Main Result(s)
Pip's analysis and visualizations can help sys admins diagnose faults
in their network.

Impact
I don't think the impact of this paper is that great other than to
spur other researchers to find better solutions to the problem. The
idea of proposing a new language to solve the problem might sound
appealing to computer scientists, but without widespread adoption of
this language by protocol designers it doesn't seem like it will be a
practical solution.

Evidence
The authors conduct a couple experiments to show that using their
system can find faults.

Prior Work
Project 5, Magpie. Pip, however, pinpoints abnormal behavior.

Reproducibility
Probably not, but I could at least confirm that their system can be
used for what they say it can be used for.

Question
Can we solve every problem in computer science by simply inventing a
new programming language?

Criticism
Presented with a fault, it is not clear whether the fault is the
result of abnormal system behavior or incorrectly declared
expectations. This means that on the one hand time can be saved
diagnosing system faults (true positives), but time may be lost due to
false positives through bugs in the declarative code.

Ideas for further work
No


Dan Rosenberg

Nov 9, 2009, 11:01:55 PM
to CSCI2950-u Fall 09 - Brown
Paper Title

Pip: Detecting the Unexpected in Distributed Systems

Authors
Patrick Reynolds, et al.


Date
May, 2006

Novel Idea
The authors propose Pip, a system that allows programmers to express
expectations about system behavior using a declarative language, and
compares actual performance with programmer expectations to uncover
bugs or unoptimized strategies.

Main Result
Pip was applied to several distributed systems, where it successfully uncovered
bugs in each.

Impact
Pip may be an option for developers of new distributed systems to incorporate
annotations to facilitate troubleshooting and optimization.

Evidence
Besides the description of the Pip language and how its annotations work on a
higher level, the authors implement Pip and test it on several real systems.
Their results seem genuine in that they did find real bugs, but performance
evaluation is a bit spotty.

Prior Work
Pip draws on the insights of other path analysis tools, such as Project 5,
Magpie, and Pinpoint, as well as other domain-specific languages.

Reproducibility
The general mechanism for how Pip works is described at a high level, but I
would have trouble implementing it or performing tests using Pip without more
knowledge.

Criticism
Unless I missed it, it seems as though the authors produce no results on the
overhead on actual applications with annotation features enabled - they only
provide numbers on how long it takes to process annotation data, not produce
it.  In addition, requiring programmers to alter their code to support Pip
annotations makes it somewhat time-consuming to apply to existing distributed
applications, and nearly impossible for closed-source options.  Finally, Pip
would not be suitable for network administrators seeking to debug the
performance of third-party components without developing a deeper level of
understanding of those components' inner workings.

Questions/Ideas for Further Work
I would be interested in the extent that Pip can be integrated into shared
libraries, rather than modifying the code of each application, as this is the
most significant barrier to entry.

小柯

Nov 10, 2009, 1:37:58 AM
to Rodrigo, CSCI2950-u Fall 09 - Brown
Paper Title:    Pip: Detecting the Unexpected in Distributed Systems

Authors:        Patrick Reynolds
                    Charles Killian
                    Janet L. Wiener
                    Jeffrey C. Mogul
                    Mehul A. Shah
                    Amin Vahdat

Date:           2006

Novel Idea:
    Inventing an expectation language for developers to specify the expected behavior of complicated distributed systems. Unlike debugging tools for a single host, Pip defines special concepts to deal with the complexity of such systems; for example, the concept 'future' refers to an action that might be taken at an unknown time in the future because of multi-thread complexity.

Main Result:
    Pip is applied to several applications, and useful visualization tools are created. It seems to be a practical debugging and performance-tracing tool in the field of distributed systems.

Impact:
    For a long time, people have been seeking good ways to help with debugging in distributed systems. Since validating the correctness of such a system is difficult, we need to define a language that states exactly our expectations of the system's behavior. This has since become a new research area.

Evidence:
    Many concepts and language designs are applied to help developers describe the behavior of their systems. The authors then applied this to real applications to show its success.

Prior Work:

Competitive work:
    Project 5

Reproducibility:
    Yes.

Question:


Criticism:
  
Ideas for further work:



James Tavares

Nov 9, 2009, 11:16:16 PM
to brown-cs...@googlegroups.com
*Pip*

Paper Title: Pip: Detecting the Unexpected in Distributed Systems

Author(s): Patrick Reynolds, Charles Killian, Janet L. Wiener, Jeffrey
C. Mogul, Mehul A. Shah, and Amin Vahdat

Date: NSDI '06, May 2006

Novel Idea: Pip allows programmers to express their expectations for
application correctness and performance by way of *recognizers* and
*aggregates*, somewhat respectively. Recognizers define a pattern which
matches multiple independent thread-level views of tasks, messages, and
notices. A system under inspection must be traced (typically by source
code annotation) at each node, after which an off-line system reconciles
data, assimilates causal paths, and checks for valid and invalid paths
against the programmer�s stated expectations.

Main Result(s): The paper describes the Pip system, provides an overview
of the syntax and features of the expectations language and annotations
library (libannotate), and reviews four case studies where Pip was used
to find bugs in real distributed systems. See "Evidence" for further
discussion on case studies and performance metrics.

Impact: A simple way to describe system expectations seems like a
powerful tool to me, although it remains to be seen how programmers and
organizations would take to such a system. Is it clear that the effort
in creating "expectations" warrants the number of bugs found/avoided?

Evidence: The authors present four case studies to illustrate the
flexibility and success of Pip at detecting bugs in distributed systems.
Profiled systems included FAB (2 bugs found), SplitStream (13 bugs),
Bullet (2 bugs), and RanSub (2 bugs). In all cases the authors used
source code annotations, leaving more automatic tracing for future work.

Prior/Competitive Work: The authors divide their analysis of related
works into two primary categories: path analysis tools and automated
expectation checking. The former includes works like Project5 and
Magpie, which rely on statistical inference for detecting causal paths,
and Pinpoint, which rely on statistical inference to detect anomalous
paths. The latter includes work like PSpec and MC, which focus on single
nodes only, and Paradyn, which cannot "express [the] causal path
structure of threads, tasks, and messages."

Reproducibility: Full source to the Pip system is provided, making it
extremely easy for researchers to examine its effectiveness. Due to
limited descriptions, it may be more difficult to duplicate the exact
case studies presented in the paper. In the case of FAB, the authors
cite personal communications with one of FAB's authors. For the other
systems, the authors did not tell us the version in which the identified
bugs were found.

Question: open-ended criticism: in a real environment, are Pip's
automatic expectation generation features ripe for abuse and misuse? If
"expectations" are not easier to reason about than the original,
well-structured application code, then developers might be tempted to
auto-generate "expectations" without verifying them.

Criticism:

1.) Their direct annotations approach seems risky: Pip will check the
paths that programmers *claim* the program is executing. I see
opportunity for bugs in the annotations themselves; perhaps as simple as
getting a sequence number or "bytes sent" count wrong. For this reason,
I like their approach of modifying middleware more.

2.) The distributed systems they describe all seem relatively
straightforward to model in Pip. How does the difficulty of describing
expectations grow as systems get larger and more complex? (It may not be
a linear effort, given an increasing likelihood of getting the
expectation itself wrong.)

Future Work:

1.) From a software engineering perspective, it would be interesting to
see how much of Pip's "expectations" can be derived from UML behavior
models. Perhaps it is possible to rely on these types of architectural
models entirely, without the need for defining a new language.

2.) As syntactic sugar, the authors should consider an async {} block as
a means of expressing the notion that tasks, messages, and notices
within a single async block may occur in any order, but must all occur
by the end of the block. A sketch of this idea follows.
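
A sketch of the proposed semantics (hypothetical, not part of Pip's
actual language): events inside an async block may match in any order,
as a multiset.

    from collections import Counter

    def matches_async_block(observed, expected):
        # order-insensitive, multiplicity-sensitive comparison
        return Counter(observed) == Counter(expected)

    expected = ["send(A)", "send(B)", "notice(done)"]
    print(matches_async_block(["send(B)", "notice(done)", "send(A)"],
                              expected))                          # True
    print(matches_async_block(["send(B)", "send(A)"], expected))  # False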