Rakudo needs from Parrot in 2011

Patrick R. Michaud

Jan 31, 2011, 9:56:54 AM
to parro...@lists.parrot.org
At Saturday's Parrot Developer Summit [1], I agreed to write up a
post addressing what Rakudo needs from Parrot over the next 3/6/9/12
month period. This is that posting.

[1] http://irclog.perlgeek.de/parrotsketch/2011-01-29

But before I address what Rakudo needs from Parrot for the future,
I'd first like to acknowledge the many places where Parrot has
responded directly to Rakudo's needs in the past year or so:
* Parrot now has a pluggable garbage collector (gc2), which is far
more efficient and less buggy than the previous GC
* Immutable strings
* Faster string I/O and manipulation
* A much improved packfile storage format, reducing Rakudo's memory
footprint
* Vastly improved load and startup times
* A much cleaner and saner character set and encoding system, including
wider support for Unicode
* Fixes to the profiling runcore
* Implementation of :nsentry and other key flags
* Better introspection of Context PMCs and internal structures
* Many, many incremental improvements to overall Parrot performance
The above list is certainly not exhaustive, but it's indicative of
places where Parrot continues to support Rakudo and we appreciate
Parrot's many efforts in this regard.

Because of the upcoming changes we are making to nqp and our ability
to prototype improvements there, our list of specific needs from
Parrot is not a very long one (at least in terms of the number
of items involved). All of them tend to relate to speed and performance
in some manner, which is Rakudo's current focus of development.
In no specific order, our needs are:

0. Anything that makes Parrot, nqp-rx, nqp, or Rakudo run faster overall. :-)

1. GC. Although GC has much improved, it's still fairly slow
in places, especially when mark/sweep occurs. The effect can
be observed by running the following program in Rakudo:

    my $time = now.x;
    for 1..300 -> $step {
        say $step => '#' x (50 * ((my $t2 = now.x) - $time));
        $time = $t2
    }

This outputs a row of #'s representing the time elapsed between
iterations. On my system, most iterations complete in under
0.06 sec, but when mark/sweep occurs -- approximately every 75
iterations -- the iteration requires 0.75 sec or longer. As
another example of noticeably slow GC, see Larry Wall's "zigzag"
presentation at YAPC::Asia
(http://www.youtube.com/user/yapcasia#p/u/131/uzUTIffsc-M ,
starting at 10:30 in the video).

We recognize that Rakudo creates a lot of objects when it's
running, and could potentially create far fewer. We're working
on that. But Perl and other dynamic languages are also regularly
used to manipulate millions of data values and objects in a single
program, so Parrot GC still has to be efficient even when millions
of objects exist.

2. Profiling tools and documentation, especially at the Parrot sub level.
Parrot's built-in profiling runcore was recently "fixed" to work again
with nqp-rx and Rakudo; I'm glad for this but we haven't had a lot of
tuits to play with it. Building a suite of useful Rakudo, NQP, and
Parrot benchmarks is on my personal "to do" list.

But we still need some basic documentation and clear examples
for using Parrot's profiling capabilities. To me, the existing
profiling runcore seems to produce results for nqp-rx programs that
either don't make any sense or that I'm unable to understand.
As an example, I just ran the following command using version
814a916 of parrot master:

    $ ./parrot --runcore profiling ops2c.pbc --dynamic \
        src/dynoplibs/math.ops --quiet

This runs ops2c.pbc (an nqp-rx program) on the src/dynoplibs/math.ops
file. The profiling runcore indeed produces a parrot.pprof.###
file, and running that file through pprof2cg.pl produces a
parrot.out.### file that kcachegrind can apparently read. However,
the kcachegrind output seems to indicate that (e.g.) the "slurp"
function used to read the math.ops input file is taking 83.51
seconds out of the 102.44 seconds needed to run the program.
I'm fairly certain that is not an accurate depiction of reality.
So, either some improvements in the profiling system or some
guides to understanding the output are definitely needed.

3. Serialization. The major item that makes Rakudo startup so slow
is that we have to do so much initialization at startup to get
Rakudo's type system and setting in place. There's not a good
way in Parrot to reliably serialize a set of language-defined types,
nor to attach compile-time attributes to subroutines and other "static"
objects in the bytecode itself.

Another issue with Parrot serialization is that it often tends to
be a "serialize the world" affair -- serializing a data structure
ultimately ends up serializing the underlying class data types,
their superclasses, and the like. There needs to be a mechanism
for placing boundaries around the serialization: to serialize only
the unique pieces of a model, as opposed to everything it references.

We're working on strategies to do better serialization from within
nqp, but Parrot definitely needs to explore this area as well
and devise some strategies for compile-time creation of
language-specific data structures, instead of requiring them
to always be built at program initialization.
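
To make this concrete, here's a minimal PIR sketch (the class and
attribute names are invented purely for illustration) of the kind of
load-time type construction that currently has to run as code on every
startup; a proper serialization facility would let the finished class
live in the packfile instead of being rebuilt like this each time:

    # hypothetical HLL type -- 'MyHLL::Point' and its attributes are made up
    .sub 'onload' :anon :load :init
        .local pmc cls
        cls = newclass ['MyHLL'; 'Point']
        addattribute cls, '$!x'
        addattribute cls, '$!y'
    .end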

4. Create .pbc files directly from a Parrot program.
I know this is being actively worked on, but it's an explicit
need for Rakudo and NQP and thus belongs on this list. Currently
the only reliable mechanism available for creating .pbc files is
parrot's command-line interface -- it's not possible for a Parrot
compiler to generate a .pbc on its own directly. This is why all
of the compiler tools currently produce .pir files, which are then
separately compiled into .pbc (and eventually .exe) files by
invoking parrot from the command line.
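
As a rough sketch of where the gap is (hypothetical PIR; compiling
in-process along these lines works today, it's the last step that
doesn't exist):

    .sub 'main' :main
        .local pmc pir_compiler, generated
        pir_compiler = compreg 'PIR'

        # compiling PIR in-process works...
        generated = pir_compiler(".sub 'hi'\n    say 'hello'\n.end")
        generated()

        # ...but there is no call here that would let us write
        # 'generated' out as a .pbc file; the only reliable route
        # remains the command line:  parrot -o hi.pbc hi.pir
    .end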

This likely has some relation to #3 above regarding the need for
a better serialization strategy.

That's the list. There are other areas where we know improvements
are needed for Rakudo, such as faster lexical support, better context
handling, more efficient control exception handler setup (esp.
"return exceptions"), and the like. But at the moment we're unable
to offer very specific details on what we need to see in these
areas, and we think it'll be more effective for everyone if we
prototype and test solutions in Rakudo and/or NQP first, then
offer them to Parrot for potential adoption in its core. This
approach would be much the same as the one currently being taken
for a new Parrot object metamodel -- i.e., we've developed a new
one in NQP ("6model"), and the consensus expectation is that it
will migrate downward into the Parrot core as an alternate or
replacement for its current object system. So, we'd hope that
Parrot can be "open" to migrating improvements in other areas
from Rakudo and NQP into the Parrot core as they become more
developed. (NQP is designed to be a basis for many HLL translators,
not just Perl 6, so we feel that the improvements we offer would
be flexible enough to improve Parrot for languages beyond Perl 6.)

As far as the timing of the above items goes, Rakudo will be
glad to see them "whenever they can be made available". We obviously
give priority to those that offer speed improvements
(e.g. GC and other internal speed improvements) or can be added
with little direct impact to the existing Rakudo codebase
(e.g., profiling). We know that any improvements to serialization
will require a lot of design exploration and core changes, so we
don't have any specific timeline expectations there, but we also
know we should get some huge speed wins when it does occur.

I hope this outlines Rakudo's needs from Parrot in sufficient
detail to get started on planning and implementation goals; but
if any further detail is needed, please feel free to ask in the
usual places (parrot-dev, perl6-compiler, #perl6, or #parrot).

Thanks!

Pm

Peter Lobsinger

Jan 31, 2011, 11:59:44 AM
to Patrick R. Michaud, parro...@lists.parrot.org
On Mon, Jan 31, 2011 at 9:56 AM, Patrick R. Michaud <pmic...@pobox.com> wrote:
> 3.  Serialization.  The major item that makes Rakudo startup so slow
>    is that we have to do so much initialization at startup to get
>    Rakudo's type system and setting in place.
> There's not a good
>    way in Parrot to reliably serialize a set of language-defined types,

In order for me to prioritize, which problem is this? The one with
serializing classes and objects, or the one involving dynpmcs?

>    nor to attach compile-time attributes to subroutines and other "static"
>    objects in the bytecode itself

You are hitting the limits of PIR's syntax. It is very poor at
expressing concepts related to arbitrary PMC constants. Parrot
provides this functionality, mostly by simply not getting in the way.
I don't think there is any way we can improve this within the
constraints of PIR, and it follows that any compiler that targets PIR
will not have an easy time in this area. Get away from PIR (and we're
working to provide you the tools to do this), and you most likely will
not have this issue.
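
To illustrate the limitation (a sketch only; the names are made up):

    .sub 'some_helper'
        say 'helper called'
    .end

    .sub 'example' :main
        # PIR can express these kinds of constants directly:
        .const int    ANSWER   = 42
        .const string GREETING = "hello"
        .const 'Sub'  HELPER   = 'some_helper'
        say GREETING
        HELPER()
        # ...but there is no syntax for an arbitrary, pre-built PMC
        # constant (say, a fully initialized hash, or a type object
        # carrying compile-time attributes), which is the limit above.
    .end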

>    Another issue with Parrot serialization is that it often tends to
>    be a "serialize the world" affair -- serializing a data structure
>    ultimately ends up serializing the underlying class data types,
>    their superclasses, and the like.  There needs to be a mechanism
>    for placing boundaries around the serialization; to serialize only
>    the unique pieces of a model, as opposed to everything it references.

This is on the radar. Back-references within PBC were the first step
towards cross-references between PBCs, which is how jnthn described to
me what was needed.

>    We're working on strategies to do better serialization from within
>    nqp, but Parrot definitely needs to explore this area as well
>    and devise some strategies for compile-time creation of
>    language-specific data structures, instead of requiring them
>    to always be built at program initialization.
>


Patrick R. Michaud

Jan 31, 2011, 1:47:33 PM
to Peter Lobsinger, parro...@lists.parrot.org
On Mon, Jan 31, 2011 at 11:59:44AM -0500, Peter Lobsinger wrote:
> On Mon, Jan 31, 2011 at 9:56 AM, Patrick R. Michaud <pmic...@pobox.com> wrote:
> > 3.  Serialization.  The major item that makes Rakudo startup so slow
> >    is that we have to do so much initialization at startup to get
> >    Rakudo's type system and setting in place.
> > There's not a good
> >    way in Parrot to reliably serialize a set of language-defined types,
>
> In order for me to prioritize, which problem is this? One with
> serializing classes and objects? Or the one involving dynpmcs?

I think this item encompasses both problems, at least from Rakudo's
perspective. We have dynpmcs _and_ we have classes and objects that
need scope-limited serialization.

> You are hitting the limits of PIR's syntax. It is very poor at
> expressing concepts related to arbitrary PMC constants. Parrot
> provides this functionality, mostly simply by not getting in the way.

In several responses to this message (both here and on #parrot), I've
heard answers to the effect of "Parrot already provides this
functionality..." I guess my overall response to these is
"...but not yet in a form where Rakudo or the HLLs I work on can
profitably make use of it." I can describe in general terms the
problems I'm encountering, but in some areas (such as bytecode
management and profiling) I don't have sufficient knowledge of
Parrot guts to detail solutions.

A big concern I have in this area is that I think Parrot needs
some sort of dynamic linking capability whereby a dynamically loaded
library can hold references to things that are outside the library
itself. My cursory scans of the packfile and bytecode formats in
the past haven't turned up anything like this.

> Get away from PIR (and we're
> working to provide you the tools to do this), and you most likely will
> not have this issue.

If I may channel Jim Keenan for a moment: Is there any estimate as
to which Parrot release will offer these tools? Will it appear on
the Parrot roadmap as a goal Parrot is committed to providing between
now and 4.0? Are there any preliminary design documents or discussion
that describe what these new tools might look like?

Pm

Nick Wellnhofer

Jan 31, 2011, 2:05:29 PM
to parro...@lists.parrot.org
On 31/01/2011 15:56, Patrick R. Michaud wrote:
> 1. GC. Although GC has much improved, it's still fairly slow
> in places, especially when mark/sweep occurs. The effect can
> be observed by running the following program in Rakudo:
>
> my $time = now.x;
> for 1..300 -> $step {
> say $step => '#' x (50 * ((my $t2 = now.x) - $time));
> $time = $t2
> }
>
> This outputs a row of #'s representing the time elapsed between
> iterations. On my system, most iterations complete in under
> 0.06 sec, but when mark/sweep occurs -- approximately every 75
> iterations -- the iteration requires 0.75 sec or longer. As
> another example of noticably slow GC, see Larry Wall's "zigzag"
> presentation at YAPC::Asia
> (http://www.youtube.com/user/yapcasia#p/u/131/uzUTIffsc-M ,
> starting at 10:30 in the video).

(Side note: A single iteration of the loop above seems to allocate about
3MB of memory. I also get:

    $ time ./perl6 -e -1
    real    0m0.752s
    $ time ./perl6 -e 'for 1..100 { time }'
    real    0m0.758s
    $ time ./perl6 -e 'for 1..100 { now }'
    real    0m4.512s

There must be something wrong with "now". End side note)

> We recognize that Rakudo creates a lot of objects when it's
> running, and could potentially make a lot less. We're working
> on that. But Perl and other dynamic languages are also regularly
> used to manipulate millions of data values and objects in a single
> program, so Parrot GC still has to be efficient even when millions
> of objects exist.

First of all, there are two performance measures we have to look at:
throughput and latency. The slowness you are pointing out is obviously a
latency problem. This can only be solved by some kind of "incremental"
or "real time" garbage collection. There are several approaches but to
my knowledge all of them require either a read or a write barrier which
Parrot doesn't provide. Baker's algorithm is a classic example requiring
a read barrier.

You also have to keep in mind that latency and throughput are competing
goals. For example, a Rakudo build will always be slower with an
incremental garbage collector compared to a "stop-the-world" collector.

A generational GC also helps by reducing the frequency of the long GC
pauses caused by a full mark and sweep cycle. It doesn't have an effect
on worst-case latency, though.

The way I see it, Parrot's GC can't make much progress until we have at
least write barriers. This isn't an easy task, unfortunately. It
basically means wrapping every PMC * or STRING * access in the whole C
code base in some kind of macro. Later, you'll have to deal with
hard-to-diagnose bugs for every place you missed. It's one of the most
unrewarding jobs I can imagine.

I hope this clarifies why there hasn't been much progress in this area
and why you shouldn't expect too much in 2011.

Nick

Patrick R. Michaud

Jan 31, 2011, 2:08:57 PM
to Peter Lobsinger, parro...@lists.parrot.org
On Mon, Jan 31, 2011 at 12:47:33PM -0600, Patrick R. Michaud wrote:
> > Get away from PIR (and we're
> > working to provide you the tools to do this), and you most likely will
> > not have this issue.
>
> If I may channel Jim Keenan for a moment: Is there any estimate as
> to which Parrot release will offer these tools? Will it appear on
> the Parrot roadmap as a goal Parrot is committed to providing between
> now and 4.0? Are there any preliminary design documents or discussion
> that describe what these new tools might look like?

After re-reading this a bit, I wish to clearly state that "No" is a
perfectly reasonable and acceptable answer to the above questions (and
that I should probably leave such statements to Jim in the first place :-).

My goal in this thread is to let Parrot developers know where Rakudo
needs things from Parrot, and not to try to pressure Parrot into
commitments of specific development or support. It's entirely
up to the Parrot leadership to decide where Parrot's priorities
are, and Rakudo will work with them the best that we can.

Pm

Vasily Chekalkin

Jan 31, 2011, 2:26:26 PM
to Patrick R. Michaud, parro...@lists.parrot.org
On Tue, Feb 1, 2011 at 6:08 AM, Patrick R. Michaud <pmic...@pobox.com> wrote:
> On Mon, Jan 31, 2011 at 12:47:33PM -0600, Patrick R. Michaud wrote:
>> > Get away from PIR (and we're
>> > working to provide you the tools to do this), and you most likely will
>> > not have this issue.
>>
>> If I may channel Jim Keenan for a moment: Is there any estimate as
>> to which Parrot release will offer these tools?  Will it appear on
>> the Parrot roadmap as a goal Parrot is committed to providing between
>> now and 4.0?  Are there any preliminary design documents or discussion
>> that describe what these new tools might look like?
>
> After re-reading this a bit, I wish to clearly state that "No" is a
> perfectly reasonable and acceptable answer to the above questions (and
> that I should probably leave such statements to Jim in the first place :-).

Answer is "Yes". POST::Compiler.pbc is my goal to provide way of
generating bytecode without PIR. In future we can have:

PIRATE: PIR->POST->PBC
nqp: nqp->PAST->POST->PBC
some_nice_hll: hll->foo->bar->baz->POST->PBC.

--
Bacek

Patrick R. Michaud

Jan 31, 2011, 3:48:33 PM
to Nick Wellnhofer, parro...@lists.parrot.org
On Mon, Jan 31, 2011 at 08:05:29PM +0100, Nick Wellnhofer wrote:
> (Side note: A single iteration of the loop above seems to allocate
> about 3MB of memory. I also get:
>
> $ time ./perl6 -e -1
> real 0m0.752s
> $ time ./perl6 -e 'for 1..100 { time }'
> real 0m0.758s
> $ time ./perl6 -e 'for 1..100 { now }'
> real 0m4.512s
>
> There must be something wrong with "now". End side note)

The fact that C<now> takes longer than C<time> is not
entirely unexpected.

The C<now> term constructs an C<Instant> object using the
Instant.from-posix method; an Instant is an opaque
representation of fractional atomic seconds (TAI) in an
epoch-agnostic form. Since Parrot doesn't provide this
value natively (most systems do not), Rakudo has to construct
the Instant object from the POSIX time value, adjusting it
for leap seconds and other variations using a table of
leap seconds.

The C<time> term simply returns the POSIX time value
reported by Parrot's time_n opcode (and doesn't even
have to box it into a PMC), so it's obviously a lot faster.
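
In PIR terms, the cheap case really is just one opcode (a sketch, with
a made-up sub name):

    .sub 'time_demo'
        .local num posix
        posix = time        # all that Perl 6 'time' needs from Parrot
        say posix
        # 'now' then has to build an Instant object on top of a value
        # like this, including the leap-second adjustment -- all of it
        # HLL-level work.
    .end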

So there's nothing wrong with C<now>, other than it currently
takes a lot more work to construct than C<time> and we should
probably see about optimizing it. But the expense of C<now>
serves very much to further the point of the demonstration:
Even in loops where there's a non-trivial amount of work taking
place in the body of the loop, Parrot's GC has the impact of making
some iterations take 10x longer than the rest.

> You also have to keep in mind that latency and throughput are
> competing goals. For example, a Rakudo build will always be slower
> with an incremental garbage collector compared to a "stop-the-world"
> collector.

Most of Rakudo's users are less concerned with the time needed
to build Rakudo (which they do rarely) than with the time needed
to compile and run small application programs, which they do
frequently. This is especially true for people who download
precompiled Rakudo packages and never experience the build time.
I fully grant that your comment can apply also to the time
needed to compile an application program... but as yet most
application programs are far smaller than Rakudo itself, and runtime
speed is often the dominant component there.

> I hope this clarifies why there hasn't been much progress in this
> area and why you shouldn't expect too much in 2011.

I totally understand the reasons why Parrot has not made progress
on GC, and why it's not likely to happen in 2011. I was asked to
provide a list of current Rakudo needs from Parrot, and GC
performance is one that has been on the list for quite some time.
I'm not seeking explanations or justifications of why things are
the way they are; I'm informing the Parrot team of Rakudo's current
needs (as I was requested to do by the participants of this
weekend's PDS).

And I again acknowledge that 2010 saw some significant improvements in
Parrot GC.

But I think I should also offer some thoughts from an HLL camp:
"better GC and memory management" is one of the frequently-cited
"likely advantages" people mention in regards to the idea of
migrating Rakudo to platforms other than Parrot. Whether running
on other platforms will result in an _actual_ advantage in this
area remains to be seen of course, but I think the ongoing
perception of "slow Parrot GC" is definitely having a very negative
impact on opinions of Parrot in the "HLL marketplace".

Not only this, but Parrot GC has been cited as a core problem at
meetings and Parrot Developer's Summits for many years now -- even
well before the first PDS in 2008. As a potential user (and indeed
as a Rakudo developer), I think it damages Parrot's image greatly
that it seems continually unable to resolve what is arguably one
of *the* most fundamental and historically important components
of any dynamic language environment.

Pm

Nick Wellnhofer

Feb 1, 2011, 9:11:07 AM
to parro...@lists.parrot.org
On 31/01/2011 21:48, Patrick R. Michaud wrote:
> So there's nothing wrong with C<now>, other than it currently
> takes a lot more work to construct than C<time> and we should
> probably see about optimizing it. But the expense of C<now>
> serves very much to further the point of the demonstration:
> Even in loops where there's a non-trivial amount of work taking
> place in the body of the loop, Parrot's GC has the impact of making
> some iterations take 10x longer than the rest.

That's a problem that every stop-the-world GC has. AFAIK the default GCs
of the JVM and CLR are also non-incremental, but they're parallelized
and generational, so the problem isn't that pronounced.

Another reason for the large pauses in your simple example is the overly
large GC threshold that Parrot currently uses. We have upcoming fixes
for that. But programs that really need a large amount of memory will
still have noticeably long pauses. Parrot's GC is much like early JVMs in
the 90s in that respect.

> Most of Rakudo's users are less concerned with the time needed
> to build Rakudo (which they do rarely) than with the time needed
> to compile and run small application programs, which they do
> frequently. This is especially true for people who download
> precompiled Rakudo packages and never experience the build time.
> I fully grant that your comment can apply also to the time
> needed to compile an application program... but as yet most
> application programs are far smaller than Rakudo itself, and runtime
> speed is often the dominant component there.

Small programs shouldn't exhibit those large delays once the dynamic
threshold branch is merged and the GC parameters are tuned a little.

> I totally understand the reasons why Parrot has not made progress
> on GC, and why it's not likely to happen in 2011. I was asked to
> provide a list of current Rakudo needs from Parrot, and GC
> performance is one that has been on the list for quite some time.
> I'm not seeking for explanations or justficiations of why things are
> the way they are, I'm informing the Parrot team of Rakudo's current
> needs (as I was requested to do by the participants of this
> weekend's PDS).

I know. I simply wanted to describe our problems for other readers of
the list.

> And I again acknowledge that 2010 saw some significant improvements in
> Parrot GC.

Parrot's GC is now (almost) at a point where it should only consume a
fixed percentage of program running time under any circumstances. I
guess it's about 15-20% for mark and sweep. Another big chunk of running
time is used for memory allocation itself. So except for latency the GC
should never pose a fundamental performance problem.

> But I think I should also offer some thoughts from an HLL camp:
> "better GC and memory management" is one of the frequently-cited
> "likely advantages" people mention in regards to the idea of
> migrating Rakudo to platforms other than Parrot. Whether running
> on other platforms will result in an _actual_ advantage in this
> area remains to be seen of course, but I think the ongoing
> perception of "slow Parrot GC" is definitely having a very negative
> impact on opinions of Parrot in the "HLL marketplace".

You're absolutely right. GC is a hard problem that's actively
researched. A good GC is one of the most important selling points of a
VM. Just look at the efforts companies like (Ex-)Sun, Microsoft and IBM
have put into that.

Nick

Patrick R. Michaud

Feb 1, 2011, 11:12:13 AM
to Nick Wellnhofer, parro...@lists.parrot.org
On Tue, Feb 01, 2011 at 03:11:07PM +0100, Nick Wellnhofer wrote:
> >Even in loops where there's a non-trivial amount of work taking
> >place in the body of the loop, Parrot's GC has the impact of making
> >some iterations take 10x longer than the rest.
> [...]

> Another reason for the large pauses in your simple example is the
> overly large GC threshold that Parrot currently uses. We have
> upcoming fixes for that. But programs that really need a large
> amount of memory will still have noticably long pauses. Parrot's GC
> is much like early JVMs in the 90s in that respect.
> [...]

> Small programs shouldn't exhibit those large delays once the dynamic
> threshold branch is merged and the GC parameters are tuned a little.

Okay. In this case, part of my message is "Here's a (very) small
program that shows the large delays."

It's entirely possible that this example is small in source but
has a large memory footprint... but I don't think this should
be the case. However, your earlier message seems to indicate that
it was in fact quite big -- 3MB for the first iteration. Is there
a good mechanism in Parrot to reliably measure the memory consumption
of various parts of a program? Ideally I'd like to have some way
to know how much memory is being used at any point in a program,
and which subroutines have been responsible for allocating it.
(Could the profiling runcore or something like it provide this
sort of information?) I think this just brings us back to the
#2 need I listed -- better profiling and performance analysis of
Parrot programs.

Pm

Nick Wellnhofer

Feb 1, 2011, 11:45:17 AM
to parro...@lists.parrot.org
On 01/02/2011 17:12, Patrick R. Michaud wrote:
> Okay. In this case, part of my message is "Here's a (very) small
> program that shows the large delays."
>
> It's entirely possible that this example is small in source but
> has a large memory footprint... but I don't think this should
> be the case. However, your earlier message seems to indicate that
> it was in fact quite big -- 3MB for the first iteration.

Your example does allocate a lot of memory, but it's used only
temporarily, so it shouldn't cause long delays with a dynamic GC
threshold. I'll give it a try with the dynamic threshold branch later and
report back.

> Is there
> a good mechanism in Parrot to reliably measure the memory consuption
> of various parts of a program? Ideally I'd like to have some way
> to know how much memory is being used at any point in a program,

You can get that number with:

    interpinfo .INTERPINFO_TOTAL_MEM_ALLOC

It's not really accurate in master, but I plan to add better statistics.
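
Something along these lines (a rough sketch) will print the running
total from PIR:

    .include 'interpinfo.pasm'

    .sub 'report_mem'
        .local int bytes
        bytes = interpinfo .INTERPINFO_TOTAL_MEM_ALLOC
        print "total memory allocated so far: "
        say bytes
    .end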

> and which which subroutines have been responsible for allocating it.

That's a lot harder.

> (Could the profiling runcore or something like it provide this
> sort of information?) I think this just brings us back to the
> #2 need I listed -- better profiling and performance analysis of
> Parrot programs.

Yes, this should probably be part of the profiling runcore. I couldn't
find a profiling task list on the Parrot Wiki. We should create one.

I also found this Wiki page (created 15 months ago):

http://trac.parrot.org/parrot/wiki/RakudoTasklist

We should update this page and link to it from the Wiki front page.

Nick

James E Keenan

Feb 1, 2011, 9:41:19 PM
to parro...@lists.parrot.org

If kid51 may channel Jim Keenan for a moment:

To review what Roadmap Goals are all about:

Unlike our practice at Parrot Developer Summits (PDS) in 2008 and 2009,
we now apply fairly strict criteria for designating some objective as a
Roadmap Goal.

1. The goal is significant enough (as measured in eventual benefit for
our users) and/or complex enough that the Project recognizes that it
will take the combined efforts of two or more developers to reach the goal.

2. The goal is one that the Parrot project as a whole has committed to
deliver in a specified quarterly supported release.

3. As a consequence of (1) and (2) we don't designate an objective as a
Roadmap Goal unless and until we can organize a *team* of developers to
work on it. Each such team has a designated leader. Team members are
responsible to the project as a whole for achieving the goal by the
release date. The project as a whole is responsible to Parrot users for
achieving the goal by the release date.

In other words, for Roadmap Goals we take on a higher degree of mutual
and public accountability than we do for all other objectives. And
since we're just getting our feet wet at this higher degree of
accountability, our Roadmap Goals are going to be relatively few in
number and carefully selected. That selection process ought to include
preliminary discussion on blog posts, a summary posting on parrot-dev in
the weeks before the quarterly PDS and thorough discussion at the PDS.

After the PDS, the teams ought to start work on the Roadmap Goals, and
the team's members have to make certain they don't work on other
objectives *at the expense of* the Roadmap Goals.

Now, I fully expect that, notwithstanding all of the above:

(a) Most of the work in the Parrot project will continue to be done, as
it always has, by individual developers who may or may not be able to
meet an objective by a particular supported release date. That work is
and will always be invaluable -- but the Project as a whole can't commit
to deliverables when only one developer is working on a particular
objective.

(b) Most of the work being done in the Parrot project on a given day
will be in support of objectives other than Roadmap Goals. We hope that
in a given supported release we achieve *many* objectives, not just
Roadmap Goals. But, given the all-volunteer nature of our project, the
only ones we're going to put on the roadmap are those for which we can
deploy a *team* of developers.

(c) Some of the Roadmap Goals will be achieved well before their
targeted release dates.

(d) Some of the Roadmap Goals will actually be a series of goals. For
example, the IMCC Isolation and Lorito Prototype goals set at this past
weekend's summit are both set for delivery in Parrot 3.3 on April 19.
But if you read whiteknight's and cotto's posts, it's evident that there
will be more work needed to reach these goals. So we'll probably set
the next part of this work as Roadmap Goals for 3.6 on July 19.

So let's get concrete. Parrot will have to develop in certain ways --
mostly yet unknown or undecided -- in order to accommodate Rakudo's
needs between now and January 2012. Where do our principal user's needs
fit into this concept of "Roadmap Goal"?

1. As I inferred from backscrolling through #parrotsketch today, it
will probably take us two or three weeks to determine what we have to do
specifically for Rakudo, and what Rakudo needs that all potential
Parrot users need as well.

2. Once we have a general idea of what we have to do, we have to refine
this into a list of specific objectives, i.e., the type of tasks we can
formulate in Trac tickets.

3. Once we have specific tasks, we have to canvass our membership to
see which *teams* we can recruit for which tasks. Those tasks can go on
the roadmap for 3.6, 3.9 and 4.0 -- but not for 3.3, because we've
already made our public pledges for that release. If a particular task
is targeted for 3.6, we can start work on it now provided that it
doesn't impede our work on 3.3 Roadmap Goals.

4. If a particular 3.6 task is ready before 3.6, it can go in whenever
it's ready -- even before 3.3 -- but again, provided it doesn't divert
us from achieving the 3.3 Roadmap Goals by April 19.

5. It's quite possible that there may be an important Rakudo-oriented
objective for which we can only recruit a single developer. We'll try
to achieve that objective -- but we won't put it on the roadmap unless
we can put a team of two or more developers on it. (And, of course,
since there are at least five Rakudo/NQP developers who are also Parrot
developers and commit-bit-holders, there's nothing to keep those folks
from forming a team to work on objectives that would be very beneficial
to Rakudo.)

We're trying to nudge ourselves more in the direction of taking
collective responsibility for producing a finished product that is
useful for Rakudo and other HLL users. Encouraging ourselves to
function on task-oriented teams, and not just as a collection of solo
developers, is an important means to that end. And having tried
Unrealistic Roadmap Scheduling, we're now trying Realistic Roadmap
Scheduling.

What do we hope to achieve by this? Well, in the words of the poets,

"You can't always get what you want,
You can't always get what you want,
You can't always get what you want,
But if you try some time, you just might find
You get what you need."

Thank you very much.

kid51


Some additional thoughts:

1. I suspect that much of the Rakudo discussion will take place between
Andrew Whitworth, as Parrot Project Manager, and you as Rakudo Pumpking.
But I think we've reached a point where Parrot ought to have a "Client
Relationship Manager" for Rakudo, and Rakudo ought to have a similar
relationship manager for Parrot.

2. Our 3.3 release on Tuesday, April 19 falls on the second night of
Passover. This implies that we may want our effective deadline for the
3.3 Roadmap Goals to fall several days earlier.


Vasily Chekalkin

Feb 14, 2011, 8:50:02 PM
to Patrick R. Michaud, parro...@lists.parrot.org
On Tue, Feb 1, 2011 at 7:48 AM, Patrick R. Michaud <pmic...@pobox.com> wrote:
> On Mon, Jan 31, 2011 at 08:05:29PM +0100, Nick Wellnhofer wrote:
>> I hope this clarifies why there hasn't been much progress in this
>> area and why you shouldn't expect too much in 2011.
>
> I totally understand the reasons why Parrot has not made progress
> on GC, and why it's not likely to happen in 2011.  I was asked to
> provide a list of current Rakudo needs from Parrot, and GC
> performance is one that has been on the list for quite some time.
> I'm not seeking for explanations or justficiations of why things are
> the way they are, I'm informing the Parrot team of Rakudo's current
> needs (as I was requested to do by the participants of this
> weekend's PDS).
>
> And I again acknowledge that 2010 saw some significant improvements in
> Parrot GC.

For the record:
1. GC MS2 (the current one) should be about 30% faster than the old
GC MS, starting from the 2.11 release.
2. GenGC is just around the corner. It's about 25% faster on "make
spectest" in Rakudo, and there are plenty of optimizations which can
still be done. I just don't like the idea of having exceptions to the
Deprecation Policy.
