A proposal for modesty


Wes Garland

Sep 9, 2009, 11:23:19 AM
to comm...@googlegroups.com
Folks --

As you know, I've been following this group since its inception, and have personally been involved both in the development of CommonJS and real-world systems-oriented JS for many years.

I'm seeing proposals fly back and forth, plenty of bike shedding -- but since the codification of require(), no real work on actually achieving something which could be called "CommonJS".

We have several platforms interested in CommonJS compliance, but relatively little compatibility across them. Stepping back for a moment, I think I understand the root causes: excessive complexity and moving targets.

Let's take the Binary module as an example.

Narwhal, GPSEE, and Flusspferd all have implementations of the Binary/B proposal.  This is a VERY complicated proposal to implement, and we've all spent many man-hours on it.

Now, there is serious discussion about replacing Binary/B with Binary/C.  I don't really have a problem with this -- Binary/B is a good proposal, but it carries a number of flaws (the big one being that it is overly complex for a base data type, in my view).  But the fact remains that if Binary/C reaches "acclamation", then not only have we wasted significant time implementing Binary/B, but we now have to implement Binary/C as well -- which, frankly, is also a much more complex data type than I feel is required for its core functionality.

I truly believe that this group is committing a serious error in compliance-spec design, one that is actively limiting both the availability of a definition of what constitutes CommonJS and the number of implementations willing/able to conform in a reasonable time frame. Additionally, I think it is important for the group to work with, rather than against, those implementing base-level engines.

I believe we should be codifying CORE functionality first. Correct core design will allow non-core, "fancy" functionality to be built on top of core, portably, in all engines. Limiting the scope should allow for faster development of CommonJS specifications and implementations.

Concentrating first on core functionality does not limit the potential richness of the CommonJS experience down the road; we can always codify "second-tier" proposals. Additionally, having core functionality available also means that suddenly there is room for that "lone gun" coder (or small group) to simply implement a rich library which emerges as its own standard or what have you.

Two-tier libraries are certainly not a foreign idea in JavaScript.  JavaScript is truly the language of feature testing and augmentation.  In fact, Narwhal itself is largely built on this core/not-core idea in its "engines" concept.  Dan Friesen's "MonkeyScript" is another example where large swaths of the API space do not require adding core functionality, but rather extend it.  In fact, somebody like Dan should reasonably be able to snag any CommonJS implementation and pop something like MonkeyScript on top of it.  But we're never going to get to that level if we're trying to build something of MonkeyScript complexity into core CommonJS.

Now, what constitutes core functionality?  In my view, it is the minimum API footprint required to create a complete implementation in JavaScript without significant (algorithmic) performance penalties. To take a recent, trivial topic of bike-shedding -- the stream "write" method.  The following variations have been proposed:

1 - write at most N bytes of content
2 - write exactly N bytes of content
3 - write (exactly | at most) N bytes from the middle of the content
 
Variation #3 can clearly be implemented trivially via #2 and #1.
Variation #2 can be implemented by wrapping #1 in a loop.
Variation #1 cannot be implemented via #2, except by writing a single byte at a time. This would constitute a significant performance penalty.

Bike Shed Debate: core functionality should implement variation #1 -- bike shed resolved, test suite minimized, core engine people can implement it, and move on.  Those most interested in variations #2 and #3 can design and implement a library in JavaScript using the core module, and share it with the rest of us.  Best of all: if the non-core library proves to be a design dud, or contains a flawed paradigm, we can replace it without throwing out the core library.  This means, for example, that somebody interested in promises could realistically develop their own CommonJS Promise-IO library out of file-core primitives.
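The layering argument above can be sketched in pure JS. Every name here (makeStream, writeAtMost, writeExactly, writeRange) is hypothetical, and an in-memory stub stands in for a real OS stream that accepts partial writes:

```javascript
// Hypothetical core primitive (variation #1): writes at most N bytes,
// returns how many were actually written. The stub pretends the OS
// accepts no more than 4 bytes per call, to simulate partial writes.
function makeStream() {
  var written = [];
  return {
    written: written,
    writeAtMost: function (bytes) {
      var n = Math.min(bytes.length, 4);
      written.push.apply(written, bytes.slice(0, n));
      return n;
    }
  };
}

// Variation #2, built in pure JS by looping the core primitive.
function writeExactly(stream, bytes) {
  var offset = 0;
  while (offset < bytes.length) {
    offset += stream.writeAtMost(bytes.slice(offset));
  }
}

// Variation #3 is then trivial: slice out the middle and delegate.
function writeRange(stream, bytes, start, end) {
  writeExactly(stream, bytes.slice(start, end));
}
```

Only writeAtMost needs native support; the other two variations live in a portable library on top of it.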

Summary
  • "require" alone is not enough for the CommonJS label
  • The current specification acclamation process is too wishy-washy, making implementors nervous about spending time
  • Current specifications are emerging with excessive, non-core complexity
  • We should concentrate on core functionality: without an implemented core, we have nothing
  • When core functionality is available, "fully featured" APIs can be grown by specification, market emergence, etc.
  • Testing functionality and stamping CommonJS compliance is much easier on core libraries than on the thousands of edge cases in Layer-2 APIs
  • Sharing Layer-2 APIs (non-core) across implementations [a narwhalian goal, FWIW] will reduce cross-platform bugs
Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

Daniel Friesen

Sep 9, 2009, 2:07:48 PM
to comm...@googlegroups.com
I've been drafting IO/B for a bit. In the end, I actually started to
extract my "raw io api" (which is basically a simple object with some
methods that define what can be done, and how to do it, in a completely
generic way) from the Stream page and moved it into its own page.
I suppose IO/B could be considered two-tier in streams. At the least, the
Stream class can be implemented in pure JS.

What would be a Layer-0 for Binary/C?
Blob: .contentConstructor .length .byteCodeAt .concat?
(String|Blob)Buffer: .length .splice .codeAt .range .valueOf?

(stringing methods excluded since those can be implemented using the
encodings module)

ashb raised the "I give up, why don't we just write a verbose ugly API
and have people implement cleaner ones on top" flag in IRC yesterday.

Binary, I'd be fine with trimming it down to the necessities (I don't
see how (String|Blob)Buffer can be implemented in pure JS, though).

If we're defining raw access APIs to underlying layers, instead of APIs
meant for the end programmer to use, I'll take a byte of that.
I have no problem participating in standardizing raw low-level access
APIs and using those to implement MonkeyScript (heck, I'll probably just
take the API and shove it into my _native object for bananas, and
basically use the standard as the definition for my _native access
methods), as long as it's clear that we're defining verbose low-level
APIs not in any way meant for anyone at a higher level than a programmer
implementing, say, a File API directly on top of that raw API.

One of my issues with having a "standard" API which I'm not so keen on,
and building my "clean" API on top of it, is a tendency I've seen in
programmers in other languages. In my experience, if there is a core API
that can do something, and a library that builds on top of it to do the
same thing more cleanly, a lot of projects/programmers will ignore the
library unless it's a necessity -- even if that means working around
quirks in the "standard" API that the library solves -- rather than
picking based on what is easy to use. On the other hand, if we define a
pure raw API and build the actual APIs as libraries around it, a
developer is more likely to choose an API based on how mature and usable
it appears to be, rather than by what is shoved in their face without
needing to add something extra.

To further your notes, the process (rather than these big discussions;
we're free to have those for higher layers) for layer 0 APIs should
mostly consist of:
1. Identify the scope we're actually aiming at (one of my problems with
the other Stream specs we have is that they focus on binary File and
Stream IO and forget about streams in general, which could really come
from any data source... like a StringBuffer, for example, which doesn't
have an available binary mode and thus does not work with the
raw+TextWrapper model used in both).
2. Look at the various underlying APIs that may be used to implement the
API (Java, standard C/C++ libraries, other cross-platform things like
Boost, NSPR, APR, etc.) and collect a list of the common/necessary parts
of the API based on that.
3. Identify potentially performance-robbing things that need to be
considered. (For example, don't forget that if we go and do something
like fs.exists(path); fs.isFile(path); etc., all of us Java
implementors will be forced to create and discard a new File instance
each and every time, even if there's a possibility we could reuse it
later. Discarding in that manner should probably be done in second-tier
libraries rather than forced by level 0; otherwise, tier-two libraries
that could preserve the instance will end up discarding, over and over,
an instance they could reuse.)
4. Ratify an API which, with reasonable performance, provides all the
low-level access we need to implement the higher layers.
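The fs.exists/fs.isFile concern in step 3 can be sketched like this: a hypothetical level-0 stat() that resolves the path once and returns a plain, reusable status object, versus tier-two convenience calls that each pay for a fresh resolution. All names here are illustrative, not from any proposal:

```javascript
// Hypothetical level-0 shape: one call resolves the path once and
// returns a plain status object the caller can keep and reuse.
// resolveCount stands in for "new java.io.File(path)" on Rhino.
var resolveCount = 0;
function stat(path) {
  resolveCount++;
  var isDir = /\/$/.test(path); // toy heuristic: trailing slash = directory
  return { exists: true, isFile: !isDir, isDirectory: isDir };
}

// A tier-two convenience layer may still expose the familiar calls,
// but each one costs a fresh resolution:
function exists(path) { return stat(path).exists; }
function isFile(path) { return stat(path).isFile; }
```

With stat(), code that needs both answers resolves the path once; calling exists() and then isFile() resolves it twice, which is exactly the discard-and-recreate cost Daniel describes.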

A final note on Binary.
I know that narwhal is aimed at portability. However I'm not really
interested in dropping all of narwhal into MonkeyScript, and I wouldn't
doubt that other CommonJS implementors feel the same. Most of us are
interested in collaborating on chunks of code, and sharing those chunks
between projects. But not all of us feel like dropping other projects
into our project (or dropping ours into theirs) and working around that
project's methodologies.
In this case, narwhal's binary module is implemented in a portable
way... within narwhal. The structure unfortunately doesn't look like
something where I can simply drop a file or two into my project and make
it work.
I believe Binary is a special case where, rather than working on it
within our projects, it would be best if we collaborate on one
implementation per engine. At most we should have one in Java for Rhino,
one for SpiderMonkey, one for V8, one for JavaScriptCore, and perhaps
one more for JScript or whatever WSH or ASP people are working on.
Ideally, of course, it would be nice if it were possible to fold as many
of those C/C++ implementations together as possible.
I say this for multiple reasons. Of course, because it's much easier if
we collaborate on a few implementations, just for engines, and include
the relevant implementation(s) in all our projects that need them.
Because it prevents a lot of extra work trying to implement a large
piece of core functionality in other projects. And also because there
have been notes about ECMA looking at our final decision on binary, and
this is the easiest route: it means that, in the end, it won't be that
hard for the JS engines themselves to incorporate our implementations
directly if it comes to that point.
((I'd be happy to donate my work on Binary/C for Java under any license,
which can be refactored, cleaned up, and, if needed, trimmed down to
essentials; I'm at the point where you can .append to a buffer whether
it's binary or text))


Another side note on binary. I believe the simplest part of binary is
Blob, which is basically a mirror of String but for binary data, and
which is also almost identical in Binary/B and Binary/C. We could go and
call Blob "Binary Level 0" and make ByteArray/*Buffer another level.
(They'll perform badly, but it should be completely possible to
implement buffers in pure JS on top of String and Blob by using .concat
and .slice. It's ugly, but it's a patch that can make buffers work even
for someone who hasn't bothered spending man-hours implementing
memory-efficient buffers.)
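A minimal sketch of that pure-JS fallback, assuming only .concat, .slice, and .length on an immutable sequence (a String stands in for a Blob here; SeqBuffer is a hypothetical name, not from either proposal):

```javascript
// Minimal growable buffer over any immutable sequence supporting
// .concat, .slice, and .length. Every mutation rebuilds the sequence,
// so it is slow -- but it works anywhere, with no native support.
function SeqBuffer(initial) {
  this._data = initial;
}
SeqBuffer.prototype.append = function (seq) {
  this._data = this._data.concat(seq);
  return this;
};
// Replace the [start, end) range with `replacement`.
SeqBuffer.prototype.splice = function (start, end, replacement) {
  this._data = this._data.slice(0, start)
    .concat(replacement)
    .concat(this._data.slice(end));
  return this;
};
SeqBuffer.prototype.valueOf = function () {
  return this._data;
};
```

For example, new SeqBuffer("he").append("llo").valueOf() yields "hello"; the same code would work over a Blob, since only the shared sequence methods are used.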

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]

Hannes Wallnoefer

Sep 9, 2009, 3:28:35 PM
to comm...@googlegroups.com
I feel quite different about these things. I think it's really
important that we take our time to come up with an API that provides
what developers need while keeping it plain and simple. And that may
just take some experimentation, including some dead ends.

I just implemented Binary/B as a native Rhino host object for Helma NG,
and it was quite a nice experience. The one thing I don't like about
it is how ByteString tries to mimic String down to the last detail, but
other than that, it's a pretty sound proposal, I think. Binary/C adds
some things that Binary/B is missing, like the copy method to shift
and copy byte ranges. But I think the *Buffer concept is too high-level
for basic mutable char/byte arrays, and the class hierarchy with the
abstract Buffer base class doesn't make sense to me. So this is where I
like ByteArray from Binary/B better. But yes, we may need a mutable
character array, too...

At least for me, it's not about choosing between Binary/B and
Binary/C, it's about picking the best ideas out of both into something
that works and is sound. And no matter what it'll be called and which
methods I'll have to implement and which to drop, I have a nice
implementation of a JS growable byte buffer that will adapt easily to
it.

Speaking of which, the Helma NG Binary/B is passing all of Narwhal's
tests and some of our own. It's fast and reasonably clean, and I'll be
happy to contribute it to Rhino as soon as we have something like an
agreed-upon standard.

Hannes

2009/9/9 Wes Garland <w...@page.ca>:

Daniel Friesen

Sep 9, 2009, 3:40:43 PM
to comm...@googlegroups.com
Hannes Wallnoefer wrote:
> ...But I think the *Buffer concept is too high

> level for basic mutable char/byte arrays, and the class hierarchy with
> the abstract Buffer base class doesn't make sense to me. So this is
> where I like ByteArray from Binary/B better. But yes, we may need a
> mutable character array, too...
> ...
> Hannes
>
AbstractBuffer in my Java class? That hierarchy is something I plan to fix.

Or are you referring to Buffer which StringBuffer and BlobBuffer inherit
from?
That exists for multiple reasons:
- So you can prototype abstract methods onto buffers. (At that, .insert,
.append, etc. could actually be built in pure JS and prototyped onto
Buffer.)
- So that you can construct buffers without worrying about what type
they are (for abstract algorithms like: construct buffer, read data,
append data to buffer, continue until data source is empty, convert
buffer to non-mutable type, return that data) using simple techniques
like:
var buf = new Buffer(seq); // Takes either a String or a Blob and
creates either a StringBuffer or a BlobBuffer.
var buf = new Buffer(stream.contentConstructor); // Takes the
contentConstructor from a stream or whatever, and creates a
StringBuffer or BlobBuffer that matches.
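The "construct buffer, append until empty, convert" algorithm might be sketched like this. The StringBuffer and Buffer here are minimal stand-ins so the sketch runs on its own, not the Binary/C API, and the stream shape (contentConstructor plus a read() returning null at end) is assumed:

```javascript
// Stand-in buffer machinery; in a real implementation these would be
// the proposal's StringBuffer/BlobBuffer with a generic Buffer dispatcher.
function StringBuffer() { this._s = ""; }
StringBuffer.prototype.append = function (s) { this._s += s; };
StringBuffer.prototype.toContent = function () { return this._s; };

function Buffer(contentConstructor) {
  if (contentConstructor === String) return new StringBuffer();
  throw new TypeError("unsupported content type"); // BlobBuffer branch omitted
}

// The generic algorithm: works for any stream whose contentConstructor
// names its element type, with no type checks in the caller.
function readAll(stream) {
  var buf = Buffer(stream.contentConstructor);
  var chunk;
  while ((chunk = stream.read()) !== null) {
    buf.append(chunk);
  }
  return buf.toContent();
}
```

The point of the generic Buffer constructor is that readAll never mentions StringBuffer or BlobBuffer; the same loop drains a text stream or a binary stream unchanged.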

Hannes Wallnoefer

Sep 9, 2009, 4:21:58 PM
to comm...@googlegroups.com
2009/9/9 Daniel Friesen <nadir.s...@gmail.com>:
>
> Hannes Wallnoefer wrote:
>> ...But I think the *Buffer concept is too high
>> level for basic mutable char/byte arrays, and the class hierarchy with
>> the abstract Buffer base class doesn't make sense to me. So this is
>> where I like ByteArray from Binary/B better. But yes, we may need a
>> mutable character array, too...
>> ...
>> Hannes
>>
> AbstractBuffer in my Java class? That hierarchy is something I plan to fix.
>
> Or are you referring to Buffer which StringBuffer and BlobBuffer inherit
> from?

Yes, I was referring to the JS API, not your Java implementation.

> That exists for multiple reasons:
> - So you can prototype abstract methods onto buffers. (At that, .insert,
> .append, etc... could actually be built using pure-js and prototyped on
> to Buffer)
> - So that you can construct buffers without worrying about what type
> they are (abstract algorithms like; Construct buffer, read data, append
> data to buffer, continue until data source is empty, convert buffer to
> non-mutable type, return that data.) using simple techniques like:
> var buf = new Buffer(seq); // Takes either a String or a Blob and
> creates either a StringBuffer or a BlobBuffer.
> var buf = new Buffer(stream.contentConstructor); // Takes the
> contentConstructor from a stream or whatever, and creates either a
> StringBuffer or a BlobBuffer that matches.

I just don't see that happening very often. Most times when you deal
with a stream or a buffer you know what's in it and what you can put
into it. But one really good example of a generic algorithm
implemented this way could change my mind, provided there isn't any
simpler way to implement it.

To be more precise, it's the contentConstructor property and the
generic Buffer constructor I'm skeptical about. Having a common
interface for both buffers sure makes sense, and while I don't think
the common Buffer base will be very useful I think it's justifiable.

Hannes


Wes Garland

Sep 9, 2009, 4:27:31 PM
to comm...@googlegroups.com
Hannes:


> I feel quite different about these things. I think it's really
> important that we take our time to come up with an API that provides
> what developers need while keeping it plain and simple. And that may
> just take some experimentation, including some dead ends.

Have you stopped to consider that what developers really need is the ability to access the underlying OS facilities (like files and sockets), so that they can actually work on developing those APIs in a manner more concrete than spewing documents at a wiki?

Kris Kowal

Sep 9, 2009, 4:46:53 PM
to comm...@googlegroups.com
Again, let's not set ourselves up with a false dilemma.

We need both low-level APIs on which to build high-level APIs, and
high-level APIs with which to build interoperable applications.

It is a useful exercise, and one that you rightly point out Tom and I
are engaged in, to discover what minimal, cross-engine, cross-OS part
of each CommonJS API needs to be implemented on each engine, either
natively or as adapters. The idea is to provide a minimal low-level
API foundation on which to build the application API in pure
JavaScript.

Perhaps the issue is not so much that we have failed to do this, but
that we've not realized it would be beneficial to codify a standard
for both layers. I believe our reluctance stems not from any lack of
desire to have such layers, but from a fear of getting hung up on
implementation details in the process.

Kris Kowal

Daniel Friesen

Sep 9, 2009, 5:05:44 PM
to comm...@googlegroups.com
http://github.com/dantman/monkeyscript.lite/blob/789278c5df3dd6da77350dbfec32aa3a402f9220/src/bananas/os/io/Stream.js#L85
.yank (for lack of another name a while ago; it's a blocking/full read),
implemented using Buffer.

Buffer itself is trivial anyway; it's not like it's any trouble at all
to implement:

function Buffer(o, l) {
    if (!arguments.length)
        return this;
    // typeof check added so primitive strings dispatch correctly too
    if (typeof o === "string" || o instanceof String)
        return new StringBuffer(o);
    if (o instanceof Blob)
        return new BlobBuffer(o);
    if (arguments.length > 1) {
        if (o === String)
            return new StringBuffer(l);
        if (o === Blob)
            return new BlobBuffer(l);
    }
    if (o === String)
        return new StringBuffer();
    if (o === Blob)
        return new BlobBuffer();
}
Buffer.prototype.toSource = function () { return "(new Buffer())"; };

Heck, frankly you don't even need that if( arguments.length > 1 ) block...

So:
- No pain to implement
- When writing abstract code lets you construct buffers without needing
to check types yourself.
- Lets you easily just assign prototype methods onto Buffer.prototype
instead of StringBuffer.prototype and BlobBuffer.prototype separately.

Yank is the easiest one to come to mind (actually, I implement .read()
in the same way).

Hannes Wallnoefer

Sep 9, 2009, 5:59:12 PM
to comm...@googlegroups.com
2009/9/9 Wes Garland <w...@page.ca>:
> Hannes:
>
>> I feel quite different about these things. I think it's really
>> important that we take our time to come up with an API that provides
>> what developers need while keeping it plain and simple. And that may
>> just take some experimentation, including some dead ends.
>
> Have you stopped to consider that what developers really need is the ability
> to access the underlying OS facilties (like files and sockets) so that they
> can actually work on developing those APIs in a method more concrete than
> spewing documents at a wiki?

I think we're still doing that (i.e. trying to provide access to OS
facilities like files). It's just not that easy, because JS happens
not to have a notion of a byte array, so we have to come up with all
that stuff along the way. I do agree that some of the specs are
shooting too far, so no doubt your request to keep it simple is a good
one. But mirroring some POSIX APIs into JS isn't going to work either.

For my IO proposal, I think I might split the stream-buffer part into
a separate proposal. It's not core I/O, and it would make both parts
easier to digest.

Hannes

Daniel Friesen

Sep 9, 2009, 6:04:41 PM
to comm...@googlegroups.com
Hannes Wallnoefer wrote:
> ...

> I think we're still doing that (i.e trying to provide access to OS
> facilities like files). It's just not that easy, because JS happens
> not to have a notion of a byte array, so we have to come up with all
> that stuff along the way. I do agree that some of the specs are
> shooting too far, so no doubt your request to keep it simple is a good
> one. But mirroring some POSIX APIs to JS isn't going to work either.
>
Which would be why I stated, earlier in this thread:
----

To further your notes, the process (rather than these big discussions;
we're free to have those for higher layers) for layer 0 apis should
mostly only consist of:
1. identify the scope we're actually aiming at [...]

2. look at the various underlying apis that may be used to implement the
api (Java, standard C/C++ libraries, other cross-platform things like
Boost, NSPR, APR, etc...) and collect a list of common/necessary parts
of the api based on that.
3. Identify potentially performance robbing things that need to be
considered [...]

4. ratify an api which at a reasonable performance provides all the low
level access we need to implement the higher layers.
----

Mike Wilson

Sep 10, 2009, 2:43:51 PM
to comm...@googlegroups.com
Hannes Wallnoefer wrote:
> I feel quite different about these things. I think it's really
> important that we take our time to come up with an API that provides
> what developers need while keeping it plain and simple. And that may
> just take some experimentation, including some dead ends.

Yes, I agree to this point.
It's usually the time spent going into nitty-gritty details and
trying different models that makes you understand the problem
domain. That will eventually help you make the right choices in
a (maybe smaller and simpler) final design. It's a natural design
process to start out with something seemingly simple, have it
grow into a complex monster while solving different use cases, and
eventually (with the experience gained) be able to cut it down
to something simple and contained again.
That said, the "complex" phase shouldn't go on forever; there
must be limits. Also, if the complex phase is too slow/inactive,
participants seem to forget the killer use cases that made all
the difference a month ago, and everything has to be repeated
over and over again.

Just my 2c.
Best regards
Mike

Kevin Dangoor

Sep 14, 2009, 3:49:06 PM
to comm...@googlegroups.com
Sorry about coming late to this party...

On Thu, Sep 10, 2009 at 2:43 PM, Mike Wilson <mik...@hotmail.com> wrote:

> Hannes Wallnoefer wrote:
> > I feel quite different about these things. I think it's really
> > important that we take our time to come up with an API that provides
> > what developers need while keeping it plain and simple. And that may
> > just take some experimentation, including some dead ends.
>
> Yes, I agree to this point.
> It's usually the time spent going into nitty-gritty details and
> trying different models that make you understand the problem
> domain.

It's worth noting that to truly get to those nitty gritty details requires real apps trying to do real things. (Of course, for many of these things that we're talking about we've all built plenty of apps that use these core concepts and have developed opinions on how they should be done.)

I do think that if we're hung up on higher-level APIs then moving down to a lower level, getting agreement on that, and then building the high level on top (potentially in non-standard packages to start) is a good idea. JS does ultimately need to be at a level that is competitive with Python, Ruby, etc. in terms of ease-of-use.

Kevin

--
Kevin Dangoor

work: http://labs.mozilla.com/
email: k...@blazingthings.com
blog: http://www.BlueSkyOnMars.com

Wes Garland

Sep 14, 2009, 3:59:49 PM
to comm...@googlegroups.com
> It's worth noting that to truly get to those nitty gritty details requires real apps trying to do real things.

FWIW, most of my frustration in this regard boils down to the fact that I am trying very hard to build a significant project in CommonJS.

My team has had to veer off into "make up ad-hoc APIs out of FFI" land pretty hard, as CommonJS is lagging and overly complicated for simple use cases.

On the other hand, the FFI API we've built is reaching maturity due to this process quite quickly!

Kevin Dangoor

Sep 14, 2009, 9:37:19 PM
to comm...@googlegroups.com
On Mon, Sep 14, 2009 at 3:59 PM, Wes Garland <w...@page.ca> wrote:
> > It's worth noting that to truly get to those nitty gritty details requires real apps trying to do real things.
>
> FWIW, most of my frustration in this regard boils around the fact that I am trying very hard to build a significant project in CommonJS.


Yeah, I can see how that would be frustrating. It is good that you're coming at it from the angle of specific requirements you have right now.

I've been wanting to get Bespin's server moved over to JS, but there's been too much to do to make it happen. At this point, though, thanks to Pydermonkey we may actually be able to start migrating some code into JS.
 
> My team has had to veer off into "make up ad-hoc APIs out of FFI" land pretty hard as CommonJS is lagging and overly complicated for simple use cases.
>
> On the other hand, the FFI API we've built is reaching maturity due to this process quite quickly!

While I'm sure that's not what you wanted to be spending your time on, I can only imagine that having a good FFI is going to help you in the end!

Kevin

Daniel Friesen

Sep 14, 2009, 8:55:21 PM
to comm...@googlegroups.com
I'm working on a level 0 for the filesystem.

Basically I'm working on:
- A completely generic stream system based on accepting a raw level 0
API and turning it into a usable level 1 object
- A level 0 stream interface; this is just an interface, with no
implementation of its own. It defines the absolute bare bones of what a
level 0 stream must provide.
- A level 0 filesystem spec which can be used in the most efficient way
in any form of level 0 implementation (it lends itself well to both
functional and instance-based APIs without penalizing either one). The
.open method for this level 0 API returns an object specified by an
extension to that level 0 stream interface.
- Breaking Binary/B up into Binary/B (Blob) and IO/B/Buffer
(Buffer/StringBuffer/BlobBuffer)

Basically, broken up: a level 0 filesystem API, which has an .open that
returns a level 0 stream. Those level 0 streams can be used inside a
level 1 stream class. A level 1 filesystem API can later use those
together to provide an .open method which returns a level 1 stream.

For now I'm using a convention of io/* for module names, and io/*/raw
for raw level 0 APIs.

The level 0 stuff should make everyone happy. The level 0 filesystem and
level 0 stream should be enough for anyone to implement whatever level 1
APIs they want to access the filesystem.
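The level-0/level-1 split described above might be sketched like this, where an in-memory raw source stands in for a platform's level-0 stream and the level-1 Stream class is written once in pure JS. All names here (rawStringSource, readRaw, Stream) are hypothetical illustrations, not Daniel's actual io/*/raw API:

```javascript
// Level 0: the bare-bones interface a platform must provide.
// An in-memory implementation stands in for a real file handle.
function rawStringSource(data) {
  var pos = 0;
  return {
    readRaw: function (max) {   // read at most `max` units, or null at EOF
      if (pos >= data.length) return null;
      var chunk = data.slice(pos, pos + max);
      pos += chunk.length;
      return chunk;
    },
    close: function () { pos = data.length; }
  };
}

// Level 1: a portable Stream class, built once in pure JS,
// usable over any object implementing the level-0 interface.
function Stream(raw) { this._raw = raw; }
Stream.prototype.read = function (n) {
  return this._raw.readRaw(n || 1024);
};
Stream.prototype.readAll = function () {
  var out = "", chunk;
  while ((chunk = this._raw.readRaw(1024)) !== null) out += chunk;
  return out;
};
Stream.prototype.close = function () { this._raw.close(); };
```

A level-1 filesystem's .open would then only need to wrap whatever the level-0 .open returns in new Stream(...).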

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]

Kris Kowal

Sep 14, 2009, 9:56:34 PM
to comm...@googlegroups.com

I think this approach is a good idea. We have security requirements
at the high level that won't be relevant at these levels, so this will
help separate concerns. It would be neat if all of the "file" module
could be built on binary and on lower-level, privileged modules that,
while unavailable in a security context, would still serve as the
basis for that functionality.

Kris Kowal
