
Why is there no input value optimization?


rossm...@gmail.com

Apr 10, 2012, 5:18:22 AM
I have a very simple question to which I have been unable to find a
satisfactory answer. The question is: why do I need to manually
optimize my functions using const references?

For example:

// Optimized passing of string parameter
Widget(std::string const & name);
SetName(std::string const & name);

// Non-optimized passing of string parameter
Widget(std::string name);
SetName(std::string name);

I understand that with the latter notation there is an additional
copy involved on most compilers, but why is that exactly?
Why is it that the compiler (as smart as it is today) is unable to
optimize away the additional copy?

Why should I be writing optimization code into my interfaces?? This
seems very wrong to me.

Everyone is familiar with the Return Value Optimization that the
compiler performs to save extraneous copies of return values. I don't
understand why the same logic cannot be applied to input values.

If I write a constructor that simply initializes a member variable, or a
setter function that sets a member variable, wouldn't it be easy for the
compiler to treat that function as if it were declared as pass by
reference instead of pass by value?

I attended the Going Native 2012 conference, and even there we were
reminded to pass strings by reference. In the age of the move and
pass-by-value semantics that C++11 has introduced, this advice seems
out of place to me.

I understand that the language probably cannot guarantee that there
will not be a copy but why haven't the compilers stepped up with this sort
of optimization as was done with return value optimization?


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Marc

Apr 10, 2012, 1:52:38 PM
rossm...@gmail.com wrote:

> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const & name);
> SetName(std::string const & name);
>
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);

Note that in some cases, the second version is "more optimized" than
the first, see the early cpp-next.com articles.

> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?
>
> Why should I be writing optimization code into my interfaces?? This
> seems very wrong to me.

Note that from a binary interface point of view, T& is actually
(usually) equivalent to T*, not T. So using one or the other is very
important.

> Everyone is familiar with the Return Value Optimization that the
> compiler performs to save extraneous copies of return values. I don't
> understand why the same logic cannot be applied to input values.
>
> If I write a constructor that simply initializes a member variable or a
> setter function that sets a member variable. Wouldn't it be easy for the
> compiler to treat that function as if it were declared as pass by
> reference instead of pass by value?

That would be slower for simple types like int, double.

> I attended the Going Native 2012 conference, and even here we were
> reminded to pass strings by reference. In the age of move and pass by
> value semantics that C++11 has introduced, this advice seems out of place
> to me.
>
> I understand that the language probably cannot guarantee that there
> will not be a copy but why haven't the compilers stepped up with this sort
> of optimization as was done with return value optimization?

When the compiler has access to the body of functions and inlines
them, some copies (or moves) do appear redundant. And for simple
types, the compiler can often remove them. But for not-so-simple
types, copy constructors may have an observable behavior. Removing
them is thus not just an optimization, it changes the semantics of the
program. The standard defines a very restricted set of situations
where the compiler is allowed to do that. I would love to see this set
extended
(http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1049 for
instance) but it doesn't depend just on compilers.

Daniel Krügler

Apr 10, 2012, 1:53:57 PM
On 2012-04-10 11:18, rossm...@gmail.com wrote:
> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const& name);
> SetName(std::string const& name);
>
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);
>
> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?
>
> Why should I be writing optimization code into my interfaces?? This
> seems very wrong to me.
>
> Everyone is familiar with the Return Value Optimization that the
> compiler performs to save extraneous copies of return values. I don't
> understand why the same logic cannot be applied to input values.

Because the situation is quite different: when a function returns a
value, there will be a to-be-constructed object on the caller's side.
RVO just recognizes the fact that the called function itself has no
further use for the return value, so it can construct the object
directly on the caller's side.

For function parameters, the compiler cannot apply this logic: when the
function is called, the compiler must assume that the actual arguments
are still in use (if we have a C++11 compiler and an rvalue argument,
that argument's value can be moved; but it seems to me that your
concern is not related to move semantics). It would require the
compiler to track each argument *within* the function (this is
different from RVO, where the compiler can ignore the different views)
to recognize whether it is modified or not. Only if it is not could the
compiler elide the copy. Except for very simple examples, this is
generally not feasible.

> If I write a constructor that simply initializes a member variable or a
> setter function that sets a member variable. Wouldn't it be easy for the
> compiler to treat that function as if it were declared as pass by
> reference instead of pass by value?

Maybe, but that is a very special case. If you want to extend the
copy-elision rules in C++, I suggest specifying the rules for this. It
is easy to demand this with hand-waving examples, but the world is no
longer nice and sweet once you are supposed to *specify* the constraints
under which compilers would be allowed to do that.

HTH & Greetings from Bremen,

Daniel Krügler

Roman W

Apr 10, 2012, 1:56:50 PM
{ Please limit your text to fit within 80 columns, preferably around 70,
so that readers don't have to scroll horizontally to read each line.
This article has been reformatted manually by the moderator. -mod }

On Tuesday, April 10, 2012 10:18:22 AM UTC+1, rossm...@gmail.com wrote:

> I understand that the language probably cannot guarantee that there
> will not be a copy but why haven't the compilers stepped up with this sort
> of optimization as was done with return value optimization?

I *guess* the rationale for not performing this optimization goes as
follows:

Return Value Optimization means that you're doing only 1 copy instead
of 2, but a copy is done nevertheless. The optimization you want would
mean that there is no copying done at all. What if the copy constructor
writes a log message to a file? How is the compiler supposed to check
for that?

RW

Dave Abrahams

Apr 10, 2012, 1:57:28 PM
on Tue Apr 10 2012, rossmpublic-AT-gmail.com wrote:

> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const & name);
> SetName(std::string const & name);
>
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);
>
> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?

It may not be able to see inside the function (which might be separately
compiled), to determine that the object is not, in fact, modified.

> Why should I be writing optimization code into my interfaces?? This
> seems very wrong to me.

+1. An unfortunate necessity.

> Everyone is familiar with the Return Value Optimization that the
> compiler performs to save extraneous copies of return values. I don't
> understand why the same logic cannot be applied to input values.

The same logic *is* applied to input values in the form of copy elision
when the source is an rvalue. That's *truly*, *exactly*, the same logic.

I've been musing about an optimization I call "compile-time
copy-on-write," where the object ostensibly passed by value would only
*actually* get copied at the moment it was about to be modified. But I
don't think anything like that will happen until we have module support
in the language (http://cppnow.org/session/keynote-c-modules/).


--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Martin B.

Apr 10, 2012, 1:57:56 PM
On 10.04.2012 11:18, rossm...@gmail.com wrote:
> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const& name);
> ...
> // Non-optimized passing of string parameter
> Widget(std::string name);
> ...
>
> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?
>
> Why ...
>
> I attended the Going Native 2012 conference, and even here we were
> reminded to pass strings by reference. ...


I will not go into your snipped "why" question, because probably I would
make a mess of it.

However, what I would like to raise as a QOI question is, why can't we
have (or do we?) a proper compiler warning when the compiler detects
that the passed-by-value parameter isn't modified at all and really
should have been passed by const-reference?

I'm currently working with VC++ and I'm not aware of any help by the
compiler in this area.

Is this something static analysis tools check? What about Clang or gcc,
do they offer something similar?

cheers,
Martin

--
Good C++ code is better than good C code, but
bad C++ can be much, much worse than bad C code.

Ulrich Eckhardt

Apr 10, 2012, 2:00:30 PM
Am 10.04.2012 11:18, schrieb rossm...@gmail.com:
> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const & name);
> SetName(std::string const & name);
>
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);

Even worse:

// optimized passing of int parameter
void foo(int i);
// pessimized passing of int parameter
void foo(int const& i);
// probably pessimized passing of pair
void bar(std::pair<int, int> const& p);
// and what about this here??
SetName(std::string name) { this->name.swap(name); }

The performance of course depends on the actual implementation, but in
common implementations a reference is just a "self-dereferencing
pointer", so a reference requires another level of indirection. Also,
any function can cheat with const_cast, so a calling function cannot
assume that the call doesn't modify the passed object. Making informed
predictions requires looking at the implementation of the function,
which is the reason that many modern compilers perform optimizations
across translation-unit boundaries.

Note that the thing with the int and pair above is something I learned
from micro-optimizing some code. Certain code. On one particular
platform. In a very tight loop. I don't claim that this is universally
the most performant way.


> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly? Why is it
> that the compiler (as smart as it is today) is unable to optimize
> away the additional copy?

I think that it is possible to elide copying. Passing an object into a
function is very similar to returning it from a function, and RVO is
commonplace nowadays. If the compiler knows that an object is not
modified inside a function, it can exploit that knowledge.


> Why should I be writing optimization code into my interfaces?? This
> seems very wrong to me.

I agree. The compiler should figure out the fastest way itself. This
isn't trivial though:

struct weird {
    string X;
    void set_with_brackets(string x) {
        X = "<";
        X += x;
        X += ">";
    }
};
weird thing = { "argh!" };
thing.set_with_brackets(thing.X);

If you change the parameter to a reference, this code suddenly changes
its result! The "const&" makes sure that you don't accidentally modify
it through this reference, it doesn't guarantee that the object itself
isn't changed by other means.


> I understand that the language probably cannot guarantee that there
> will not be a copy but why haven't the compilers stepped up with this sort
> of optimization as was done with return value optimization?

I guess that there are two reasons:
1. People are used to it, the "const&" is something that the brain has
learned to ignore. Further, this is only written in one or two places.
2. If you pass the object to modify as reference to the function, which
is a common way to avoid copying, you sacrifice the ability to chain
functions, you need two lines of code, you can't declare the object
const etc. This hurts much more than declaring the passed parameter as
reference to const in two places!


Uli

Miles Bader

Apr 10, 2012, 2:00:40 PM
rossm...@gmail.com writes:
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);
>
> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?

Hmmm, if nothing else, the ABI would have to change, ... and that's
not something done easily...

-miles

--
Next to fried food, the South has suffered most from oratory.
-- Walter Hines Page

Alf P. Steinbach

Apr 10, 2012, 2:14:08 PM
On 10.04.2012 11:18, rossm...@gmail.com wrote:
> I have a very simple question that I have been unable to find a
> satisfactory answer. The question is why do I need to manually
> optimize my functions using const references?
>
> For example:
>
> // Optimized passing of string parameter
> Widget(std::string const& name);
> SetName(std::string const& name);
>
> // Non-optimized passing of string parameter
> Widget(std::string name);
> SetName(std::string name);
>
> I understand that with the latter notation there is an additional
> copy involved on most compilers, but why is that exactly?
> Why is it that the compiler (as smart as it is today) is unable to
> optimize away the additional copy?

The compiler is able to do it within the current language, but it's
constrained by

* the problem of aliasing, i.e. correctness violation, and

* depending on the solution, a combinatorial explosion, and that

* depending on the solution, the linker must support the scheme.

A partial solution, to avoid all three of those problems, is to use
immutable types with reference semantics.

Also, for the particular case of `std::string`, another partial solution
is to use a COW (Copy On Write) implementation, which I believe is still
how the g++ implementation works. It works in spite of the severe
shortcomings of the `std::string` class that theoretically should foil
its positive effect. It's like the magic of the horseshoe that Niels
Bohr had over his desk: theoretically it shouldn't work, but as Niels
remarked, "I am scarcely likely to believe in such foolish nonsense.
However, I am told that a horseshoe will bring you good luck whether you
believe in it or not".

---

Here is an example of aliasing at work:

<code>
#include <iostream>
#include <string>
using namespace std;

void spoiler();

int ageOf( string name )
{
    return 0?0
        : name=="john"? 18
        : name=="mary"? 22
        : 0;
}

void foo( string const& name )
{
    int const age = ageOf( name );
    spoiler();
    cout << name << " is " << age << " years old." << endl;
}

string bah = "john";

void spoiler() { bah = "the universe"; }

int main()
{
    foo( bah );  // prints "the universe is 18 years old.": `name` aliases
                 // `bah`, which spoiler() changed after the age was computed.
}
</code>

---

In many if not most cases, however, the compiler can easily prove that
there is no possible aliasing. It can also emit information that makes
it better able to prove that for later compilations of other code. But
here is where both the combinatorial explosion and the possible need
for linker support enter the picture.

For consider a function declared like

void foo( string s );

If that function is non-optimized, then machine code must be emitted to
/copy/ the actual argument, while if the function is optimized like ...

void foo( string const& s );

then machine code to pass an address must be generated.

Consider then that this binary choice is present for each sufficiently
large argument where the optimization is relevant, and so that with n
such arguments we're talking about 2^n implementation variants: a
/combinatorial explosion/ akin to the one for perfect forwarding.

With the currently most popular compilation model of C++, the compiler
can't know which variant it should assume if there is only one. One
possible solution is to assume that /all/ 2^n variants exist, and to
use all of them freely with different linkage-level name mangling. But
then the linker has to remove all the unused function implementations,
lest the final program grow greatly in size, generally almost doubling
(which might counter any positive effect).

---

Another possible solution, one that avoids both the combinatorial
explosion and the need for linker support, can be based on David
Wheeler's well known aphorism, "Any problem in computer science can be
solved by another level of indirection".

Since the reference optimization only makes sense for sufficiently large
arguments that anyway are handled via pointers/addresses, the caller can
simply, for each argument, pass a flag, e.g. in a processor register,
that tells the implementation /whether to copy/ that argument. If, in a
particular call, a particular argument is so flagged and is not of
primitive type, then the implementation must copy it and update its
pointer to point at the copy. Then it can just proceed normally.

This set of flags imposes a slight overhead on every call, in that the
implementation must check the flags, but it removes the need for linker
support.

Perhaps, in order to let the programmer decide, functions that support
and need the flags could be marked with some attribute.

And even further, perhaps calls could also be annotated so that the
programmer could take responsibility for the arguments being non-aliased.


> Why should I be writing optimization code into my interfaces?? This
> seems very wrong to me.

From a purely idealistic point of view it is indeed very wrong to
hardcode optimization decisions into interfaces. Ideally there should be
"in", "out" and "in-out" designators as in Ada, and ideally the language
should then support proper Liskov substitution[1]. I.e., supporting
covariant "out" arguments, contravariant "in"-arguments, and enforcing
invariant "in-out" arguments.

C# is one step closer to that ideal than C++, by having an "out"
designator, but I'm not sure if it supports proper LSP for "out".

However, C++ is very much a language that's evolved to meet practical
needs. And apparently "in", "out" and "in-out" arguments have not been
very urgent practical needs. For if they had been, then they would
presumably have been supported already (of course this argument applies
to any desired feature, but I'm just sayin').

---

It is possible to attain the /appearance/ of "in", "out" and "in-out"
support by using e.g. empty macros with suggestive names.

That pure appearance effect is apparently the main idea of
Microsoft's[2] "Standard Annotation Language" SAL wrt. the C++
programming language (for the C programming language the annotations
may however have some slight advantage, but at an extreme, mind-boggling
cost). For example, the last time I checked the SAL annotations for one
of the most used Windows API functions, MessageBox, they were still
wrong. Which is what one can expect when it's just comment-like
annotations, and not a language-supported feature checked by a compiler.

In my humble opinion such schemes, for C++, are worse than not having
the desired feature. It's just a lot of extra work. And as in the case
of Windows' MessageBox function, the annotations can mislead you.

[snip]


Cheers & hth.,

- Alf

Notes:
[1] See e.g. <url:
http://alfps.wordpress.com/2012/03/11/liskovs-substitution-principle-in-c/>
[2] Lest it appears that I'm bashing Microsoft here, no, that's not my
intention, but firstly, "SAL" is the only such annotation language that
I know of (i.e., I don't know very much!), and secondly, I'm a Microsoft
MVP, which if anything should make me biased pro Microsoft.

Alf P. Steinbach

Apr 10, 2012, 2:51:54 PM
On 10.04.2012 19:53, Daniel Krügler wrote:
> On 2012-04-10 11:18, rossm...@gmail.com wrote:
>> I have a very simple question that I have been unable to find a
>> satisfactory answer. The question is why do I need to manually
>> optimize my functions using const references?
>>
[snip]
>
> For function parameters, the compiler cannot apply this logic: When
> the function is called, the compiler must assume that the actual
> parameters are still in use (If we have a C++11 compiler and the an
> rvalue argument, this argument value can be moved. But it seems to
> me that your concern is not related to move semantics). It would
> require that the compiler performs tracking of each argument
> *within* the function (This is different from RVO, where the
> compiler can ignore the different views) to recognize whether they
> are modified or not. Only if not, the compiler could elide the
> copy. Except for very simple examples this is generally not
> feasible.

Uhm, you are envisioning one impractical way to do things, and from
that you imply that any solution must be infeasible. At least that's
how I read it (for otherwise it would be meaningless), and that's a
fallacy.


Cheers & hth.,

- Alf



a.lu...@googlemail.com

Apr 10, 2012, 5:34:47 PM
On Tuesday, April 10, 2012 8:00:30 PM UTC+2, Ulrich Eckhardt wrote:
> // optimized passing of int parameter
> void foo(int i);
> // pessimized passing of int parameter
> void foo(int const& i);
> // probably pessimized passing of pair
> void bar(std::pair<int, int> const& p);
> // and what about this here??
> SetName(std::string name) { this->name.swap(name); }
>
> The performance of course depends on the actual implementation, but
> in common implementations a reference is just a "self-dereferencing
> pointer", so a reference requires another level of
> indirection. Also, any function can cheat and const_cast, so a
> calling function must not assume that the call doesn't modify the
> passed object. Making informed predictions requires looking at the
> implementation of the function, which is the reason that many modern
> compiler perform optimizations across translation-unit boundaries.
>
> Note that the thing with the int and pair above is something I
> learned from micro-optimizing some code. Certain code. On one
> particular platform. In a very tight loop. I don't claim that this
> is universally the most performant way.

GCC contains a command line switch "-fipa-sra" which, if I understand
correctly, performs such an optimization: (from the 4.6.0 manual)

"-fipa-sra
Perform interprocedural scalar replacement of aggregates, removal of
unused parameters and replacement of parameters passed by reference by
parameters passed by value. Enabled at levels ‘-O2’, ‘-O3’ and
‘-Os’."

Can somebody confirm this actually performs the optimization in
question?

Martin B.

Apr 10, 2012, 5:38:05 PM
On 10.04.2012 20:00, Ulrich Eckhardt wrote:
> Am 10.04.2012 11:18, schrieb rossm...@gmail.com:
>> I have a very simple question that I have been unable to find a
>> satisfactory answer. The question is why do I need to manually
>> optimize my functions using const references?
>>
>> For example:
>> ...
>
> Even worse:
> ...
>
> The performance of course depends on the actual implementation, but
> in common implementations a reference is just a "self-dereferencing
> pointer", so a reference requires another level of
> indirection. Also, any function can cheat and const_cast, so a
> calling function must not assume that the call doesn't modify the
> passed object. ...

Interesting point. Who, at which level, is required to assume that
something passed by const-ref may be modified by the callee?

cheers,
Martin

--
Good C++ code is better than good C code, but
bad C++ can be much, much worse than bad C code.


Daniel Krügler

Apr 11, 2012, 2:29:25 AM
Am 10.04.2012 23:38, schrieb Martin B.:
> On 10.04.2012 20:00, Ulrich Eckhardt wrote:
[..]
>> [..] Also, any function can cheat and const_cast, so a
>> calling function must not assume that the call doesn't modify the
>> passed object. ...
>
> Interesting point. Who at which level is required to assume that
> something passed by const-ref is modified by the callee?

Any object that is not a const object may be modified. In most daily
scenarios, arguments provided to functions are not really const objects,
therefore a "T const&" does not really protect from modification. Worse,
in the current standard it is not even clear whether an object with
dynamic storage duration (e.g. created by a new expression) can be a
const object. The intention is that it can be, see

http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#1428

Back to your last question: if you define a const object and provide
this object to any function (by reference or pointer), you can rely on
the assumption that code attempting to modify the object would enter
undefined behaviour. But as Ulrich's response implied, compilers would
have a hard time tracking every possible modification of objects that
are referenced as const. For this reason they usually do not optimize
away copies of objects provided by reference.

HTH & Greetings from Bremen,

Daniel Krügler



Francis Glassborow

Apr 11, 2012, 12:56:13 PM
On 10/04/2012 22:38, Martin B. wrote:
> On 10.04.2012 20:00, Ulrich Eckhardt wrote:
>> Am 10.04.2012 11:18, schrieb rossm...@gmail.com:
>>> I have a very simple question that I have been unable to find a
>>> satisfactory answer. The question is why do I need to manually
>>> optimize my functions using const references?
>>>
>>> For example:
>>> ...
>>
>> Even worse:
>> ...
>>
>> The performance of course depends on the actual implementation, but
>> in common implementations a reference is just a "self-dereferencing
>> pointer", so a reference requires another level of
>> indirection. Also, any function can cheat and const_cast, so a
>> calling function must not assume that the call doesn't modify the
>> passed object. ...
>
> Interesting point. Who at which level is required to assume that
> something passed by const-ref is modified by the callee?

Which reminds me that C++11 supports concurrency. Pass by value only
needs unique access at the call site, pass by const reference requires
it throughout the life of the function call.

Francis



rossm...@gmail.com

Apr 11, 2012, 4:40:48 PM
On Tuesday, April 10, 2012 10:52:38 AM UTC-7, Marc wrote:
> Note that from a binary interface point of view, T& is actually
> (usually) equivalent to T*, not T. So using one or the other is very
> important.

Yes, I know. This is how we got here in the first place. I don't
really want to use reference semantics, but it is the recommended
practice for performance reasons. I want to say to the compiler: "I am
passing you this value. Do what you need to make this operation fast."

> > Wouldn't it be easy for the compiler to treat that function as if
> > it were declared as pass by reference instead of pass by value?

OK, I should never have said it would be "easy". I am really wondering
if it is at all possible :)

> That would be slower for simple types like int, double.

Yes, something like this would require some sort of heuristic to
determine whether the parameter should be passed by reference. I'm sure
people would object to such loss of control, but in most cases
wouldn't the compiler do a better job of optimizing than the
developer's rule of thumb for deciding when a parameter should be
passed by reference?

rossm...@gmail.com

Apr 11, 2012, 4:43:46 PM
On Tuesday, April 10, 2012 10:56:50 AM UTC-7, Roman W wrote:
> Return Value Optimization means that you're doing only 1 copy
> instead of 2. But a copy is done nevertheless. The optimization you
> want would mean that there is no copying done at all.

I am referring to the cases where a copy is made inside the function,
for constructors or setter functions for example. I guess technically
a setter function would be an assignment and not a copy at all.

What about inline functions? Do any compilers skip the extra copy for
pass-by-value arguments? Or are they required to do the copy to avoid
side effects?

inline void Rectangle::SetHeight(int h)
{
    height = h;
}

Dave Harris

Apr 12, 2012, 12:08:56 AM
0xCDC...@gmx.at (Martin B.) wrote (abridged):
> However, what I would like to raise as a QOI question is, why can't
> we have (or do we?) a proper compiler warning when the compiler
> detects that the passed-by-value parameter isn't modified at all and
> really should have been passed by const-reference?

I would not want such a warning. There are too many cases where
passing a copy is preferred.

For example, passing an int by const reference is almost certainly
worse for performance. So, probably, is passing a pair of ints, or a
pair of pointers. If the warning depended on the size of the object,
then making a class use the pImpl idiom could trigger the warning, as
could other implementation changes.

You should consider the issue of aliasing. It may be cheaper to pass a
4-int rectangle by const reference, but using it may be more expensive
because it's harder for the compiler to be sure it is not changed
through an alias.

Another issue is the need for a standard interface, for
polymorphism. Some overrides of a virtual function might change their
arguments, and others might not.

-- Dave Harris, Nottingham, UK.



rossm...@gmail.com

Apr 12, 2012, 6:05:16 AM
On Tuesday, April 10, 2012 11:14:08 AM UTC-7, Alf P. Steinbach wrote:
> The compiler is able to do it within the current language, but it's
> constrained by
>
> * the problem of aliasing, i.e. correctness violation, and
>
> * depending on the solution, a combinatorial explosion, and that
>
> * depending on the solution, the linker must support the scheme.

Thanks for the analysis Alf. On the surface it looks like the
separate compilation model sinks attempts to provide input value
optimization as Dave Abrahams mentioned. But perhaps a really robust
compiler/linker would be able to do it?

Here's what I've been thinking. Let's start with something simple:

- Only apply optimization to classes with non trivial constructors
or POD classes of a certain size. To avoid pessimizations.

- Restrict candidates for this optimization to parameters
that are only copied inside the function.

- If the parameter is passed by reference to any function (including
constructors) it cannot be optimized. Perhaps we may want to
allow const references for an aggressive optimization strategy but
this would not be strictly correct/conforming.

- The goal is to guarantee that when the function exits that the
parameter will not be modified if it is passed by reference.
Therefore we can create a single optimized version of the function
call and avoid the combinatorial explosion. Alf?

- Now the hand-waving part! Somehow when the linker mashes the
entire executable together it needs to substitute the non-optimized
calls to the optimized ones. I need some help on this part.


I'll add an additional observation at this point:

This optimization will never work for virtual function calls,
so the pass by reference hand optimization may never be completely
eliminated from the language.


> A partial solution, to avoid all three of those problems, is to use
> immutable types with reference semantics.

Can you elaborate a little more on this point? Do you mean that if we
had a concept of an immutable class that the compiler could recognize
then we could more easily apply copy optimizations?


> Also, for the particular case of `std::string` another partial solution
> is to use a COW (Copy On Write) implementation, which I believe is still
> how the implementation for g++ works.

Maybe we just need copy on write objects? This may solve some copying
issues.

Is there any way that std::move can help us here? For instance, can the
temporary parameter object be moved to my member variable in my class
constructor?
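For reference, the C++11 idiom this question points at (pass by value,
then move into the member) looks like the following minimal sketch.
With a temporary argument the caller pays one move and no copy; with an
lvalue, exactly one copy:

```cpp
#include <string>
#include <utility>

// Sketch of the C++11 "pass by value, then move" idiom: the parameter
// absorbs the caller's copy or move, and std::move transfers it into
// the member without a second copy.
class Widget {
    std::string name_;
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    void SetName(std::string name) { name_ = std::move(name); }
    const std::string &name() const { return name_; }
};
```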

--
Ross MacGregor

Geert-Jan Giezeman

unread,
Apr 12, 2012, 6:18:04 AM4/12/12
to
On 04/10/2012 08:00 PM, Ulrich Eckhardt wrote:
> On 10.04.2012 11:18, rossm...@gmail.com wrote:
>> I have a very simple question that I have been unable to find a
>> satisfactory answer. The question is why do I need to manually
>> optimize my functions using const references?
>>
>> For example:
>>
>> // Optimized passing of string parameter
>> Widget(std::string const& name);
>> SetName(std::string const& name);
>>
>> // Non-optimized passing of string parameter
>> Widget(std::string name);
>> SetName(std::string name);
>
> Even worse:
>
> // optimized passing of int parameter
> void foo(int i);
> // pessimized passing of int parameter
> void foo(int const& i);
> // probably pessimized passing of pair
> void bar(std::pair<int, int> const& p);
> // and what about this here??
> SetName(std::string name) { this->name.swap(name); }
>
> The performance of course depends on the actual implementation, but in
> common implementations a reference is just a "self-dereferencing
> pointer", so a reference requires another level of indirection. Also,
> any function can cheat and const_cast, so a calling function must not
> assume that the call doesn't modify the passed object. Making informed
> predictions requires looking at the implementation of the function,
> which is the reason that many modern compilers perform optimizations
> across translation-unit boundaries.
>
> Note that the thing with the int and pair above is something I learned
> from micro-optimizing some code. Certain code. On one particular
> platform. In a very tight loop. I don't claim that this is universally
> the most performant way.

The compiler knows best what is the most efficient way of passing a
value. The programmer knows best if passing by const reference is OK as
a possible optimisation. So, a possibility would be to invent some
syntax to tell the compiler to pick the most efficient method of passing
a value either by value or by const reference. E.g.

void foo(int default i)
// probably equivalent to: void foo(int i)
void foo(std::string default s)
// probably equivalent to: void foo(std:string const &s)
void foo(std::pair<int, int> default p)
// may depend on the platform how the pair is passed
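Something close to that "default" behaviour can already be approximated
in library code with a call_traits-style alias (a sketch; the name
best_param and the size cutoff are mine, and the choice is fixed per
type rather than left to the optimizer):

```cpp
#include <string>
#include <type_traits>

// Pass small trivially-copyable types by value, everything else by
// const reference. Approximates the proposed "default" keyword.
template <typename T>
struct best_param {
    typedef typename std::conditional<
        std::is_trivially_copyable<T>::value && sizeof(T) <= 2 * sizeof(void *),
        T,                // cheap to copy: pass by value
        const T &         // otherwise: pass by const reference
    >::type type;
};

void foo(best_param<int>::type i);          // effectively: void foo(int)
void foo(best_param<std::string>::type s);  // effectively: void foo(const std::string &)
```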


Geert-Jan Giezeman

Alf P. Steinbach

unread,
Apr 12, 2012, 3:42:23 PM4/12/12
to
On 12.04.2012 12:05, rossm...@gmail.com wrote:
> On Tuesday, April 10, 2012 11:14:08 AM UTC-7, Alf P. Steinbach wrote:
>> The compiler is able to do it within the current language, but it's
>> constrained by
>>
>> * the problem of aliasing, i.e. correctness violation, and
>>
>> * depending on the solution, a combinatorial explosion, and that
>>
>> * depending on the solution, the linker must support the scheme.
>
> Thanks for the analysis Alf. On the surface it looks like the
> separate compilation model sinks attempts to provide input value
> optimization as Dave Abrahams mentioned. But perhaps a really robust
> compiler/linker would be able to do it?

With the flags approach that I described you don't need additional
linker support.

But it might be premature optimization.

It could be that at least for some programs it would, in total, slow
things down to pass those flags everywhere, e.g. because one less
register was available, or whatever.

> Here's what I've been thinking. Let's start with something simple:
>
> - Only apply optimization to classes with non trivial constructors
> or POD classes of a certain size. To avoid pessimizations.
>
> - Restrict candidates for this optimization to parameters
> that are only copied inside the function.
>
> - If the parameter is passed by reference to any function (including
> constructors) it cannot be optimized. Perhaps we may want to
> allow const references for an aggressive optimization strategy but
> this would not be strictly correct/conforming.

Wrt. passing further to some reference to const formal argument, the
main solution to the problem that the compiler can't guarantee the one
in a trillion trillion'th case in Very Bad Code, is to let the
programmer take responsibility. Leave the programmer in control. Much
is impossible if the compiler must do it all alone, because, well, if
it was outfitted with the necessary smarts then it would be very much
too slow! But with a little help from the programmer, which is what
many of the keywords in C++ are about, very little remains as
impossible. E.g. in this case the help could take the form of some
attribute which, if present, prevents the optimization, e.g. because
there's a call to a function that modifies in spite of promising not
to.

Of course, it would then be a kind of absurd situation where one
programmer is adding an attribute saying that another programmer (or
possibly him/her self) incorrectly added "const" to some formal arg.

Anyway, with such a more practical solution (programmer decides in
dubious case) the "cannot be optimized" is happily not correct. And
likewise, "would not be correct" is then happily not correct. It can
be optimized, and it will be correct, just not wrt. the current
language.

With the absence of an attribute taken as general leave to optimize,
an argument of type "cv T" is a candidate for optimization if, say, a
call_traits-style transformation (as in Boost) that produces good
argument types would change the type.
argument, under the assumption (provided by absence of attribute) that
"const" really means "not modified". Under these conditions the
optimization is permitted for that argument. Whether copying will
actually be done in any particular call, then depends on the
corresponding flag passed by the caller. If the call site promises via
this argument's flag that there is no possible aliasing, then copying
will be avoided. And what's nice about that is that the caller can
pass those flags also to a non-optimized function, which then will
copy regardless of the flags. I.e. each call site does not need to
know whether the function implementation supports the optimization or
not -- each call site's responsibility is just to decide whether it
can guarantee non-aliasing.


> - The goal is to guarantee that when the function exits that the
> parameter will not be modified if it is passed by reference.
> Therefore we can create a single optimized version of the function
> call and avoid the combinatorial explosion. Alf?

Oh yes, that's simple, as I wrote.

Instead of different variants of function, just a single function with
a hidden flags argument.

Where the flags carry each call site's knowledge of aliasing, into the
function.
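Spelled out by hand, the scheme might look roughly like this (a sketch;
the explicit flag parameter and all names are mine, and a compiler
would pass the flag invisibly):

```cpp
#include <string>

std::string g_log;

// Appends msg twice. By-value semantics require both appends to see the
// caller's original value even if msg aliases g_log, so the callee only
// skips the defensive copy when the call site's flag guarantees no
// aliasing is possible.
void log_twice(const std::string &msg, bool caller_guarantees_no_alias) {
    if (caller_guarantees_no_alias) {
        g_log += msg;
        g_log += msg;            // safe: msg cannot be g_log
    } else {
        std::string copy = msg;  // defensive copy, as pass-by-value would make
        g_log += copy;
        g_log += copy;
    }
}
```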


> - Now the hand-waving part! Somehow when the linker mashes the
> entire executable together it needs to substitute the non-optimized
> calls to the optimized ones. I need some help on this part.

Huh.


> I'll add an additional observation at this point:
>
> This optimization will never work for virtual function calls,
> so the pass by reference hand optimization may never be completely
> eliminated from the language.

As an exercise, imagine the optimization is done, with the flags
approach.

Then it just works.


>> A partial solution, to avoid all three of those problems, is to use
>> immutable types with reference semantics.
>
> Can you elaborate a little more on this point? Do you mean that if we
> had a concept of an immutable class that the compiler could recognize
> then we could more easily apply copy optimizations?

With immutable types with reference semantics there is no copying
(except of small pointer). Think Java strings. You can reassign a Java
string variable, but that doesn't copy the string value.

Neither is there any modification of the string value.

So, the problem is then just not relevant.
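In C++ such a type could be sketched like this (the class is
hypothetical): copying shares the buffer, and since no operation ever
mutates the shared value, aliasing is harmless and pass-by-value costs
only a pointer copy.

```cpp
#include <memory>
#include <string>

// An immutable string with reference semantics, in the spirit of
// Java's String: copies share the underlying characters.
class ImmutableString {
    std::shared_ptr<const std::string> data_;
public:
    explicit ImmutableString(std::string s)
        : data_(std::make_shared<const std::string>(std::move(s))) {}
    const std::string &str() const { return *data_; }
};
```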


>> Also, for the particular case of `std::string` another partial
>> solution is to use a COW (Copy On Write) implementation, which I
>> believe is still how the implementation for g++ works.
>
> Maybe we just need copy on write objects? This may solve some copying
> issues.

Scott Meyers, I think it was, recently remarked that COW is not
permitted for std::string under C++11 rules.

I'm not sure if that's correct, but with threading in the picture it
sounds quite plausible.
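The usual demonstration of the conflict, as I understand it: C++11
forbids non-const operator[] from invalidating pointers into the
string, while a COW implementation would have to reallocate
("unshare") at exactly that point. A sketch:

```cpp
#include <string>

// Returns true on a conforming (non-COW) C++11 implementation: writing
// through s[0] after taking a copy must not move s's buffer, so the
// old pointer still sees the new character.
bool references_survive_write() {
    std::string s = "cow";
    const char *p = s.data();   // pointer into s's buffer
    std::string copy = s;       // COW would share the buffer here
    s[0] = 'n';                 // C++11: must NOT invalidate p
    return p == s.data() && *p == 'n';
}
```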

The nice thing about one's own types is that they can be more
practical wrt. safety, at least with 20-20 hindsight. Where
std::string is needlessly unsafe (e.g. construction from 0), one's own
type can be safe, and where std::string (reportedly) has to support
safe operations in some kind of multi-thread scenario, I don't know
exactly what but something that affects COW, one's own type can just
happily and more practically pass the problem on to the
programmer. Don't use me that way, or else add synchronization!

> Is there any way that std::move can help us here? For instance, can the
> temporary parameter object be moved to my member variable in my class
> constructor?

I'm not sure that I understand that question.


Cheers & hth.,

- Alf


James K. Lowden

unread,
Apr 12, 2012, 3:44:47 PM4/12/12
to
On Thu, 12 Apr 2012 03:18:04 -0700 (PDT)
Geert-Jan Giezeman <ge...@cs.uu.nl> wrote:

> So, a possibility would be to invent some
> syntax to tell the compiler to pick the most efficient method of
> passing a value

Done. It's called "C++".

The standard afaik imposes no machine-code restriction on function
call implementation. Anything that adheres to the semantics it defines
is valid.

--jkl

James K. Lowden

unread,
Apr 12, 2012, 3:46:39 PM4/12/12
to
On Tue, 10 Apr 2012 11:14:08 -0700 (PDT)
"Alf P. Steinbach" <alf.p.stein...@gmail.com> wrote:

> Ideally there should
> be "in", "out" and "in-out" designators as in Ada

We almost have that now, right?

in: value or const reference
out: reference
in/out: reference

The last doesn't deserve distinction. It's rarely used because it's
rarely justified.

Were "in/out" syntax part of the language, the programmer has to learn
and use it, and the compiler has to enforce it. Yet it's hardly used.
For instance, there's barely one in/out parameter in the STL.
std::string::swap() is one uncommon example. I can't think of any that
don't involve some kind of similar buffer replacement. Nothing in
<algorithm> uses in/out parameters.

The best functions are true functions: one output generated from N
inputs. Sometimes, because of the way C++ is defined, the output takes
the form of an output parameter instead of a return type. (And
sometimes the return value is used to convey error status, but that's a
whole different discussion!) Functions with in/out parameters, by
contrast, have side-effects by design.

--jkl

Martin B.

unread,
Apr 12, 2012, 3:49:16 PM4/12/12
to
On 12.04.2012 06:08, Dave Harris wrote:
> 0xCDC...@gmx.at (Martin B.) wrote (abridged):
>> However, what I would like to raise as a QOI question is, why can't
>> we have (or do we?) a proper compiler warning when the compiler
>> detects that the passed-by-value parameter isn't modified at all
>> and really should have been passed by const-reference?
>
> I would not want such a warning. There are too many cases where
> passing a copy is preferred.
>
> For example, passing an int by const reference is almost certainly
> worse for performance. So, probably, is passing a pair of ints, or a
> pair of pointers. If the warning depends on the size of the object,
> than making a class use the pImpl idiom could trigger the warning,
> as could other implementation changes.
>
> You should consider the issue of aliasing. It may be cheaper to pass
> a 4-int rectangle by const reference, but using it may be more
> expensive because it's harder for the compiler to be sure it is not
> changed through an alias.
>
> Another issue is the need for a standard interface, for
> polymorphism. Some overrides of a virtual function might change
> their arguments, and others might not.

All valid points!

Still, there are far too many cases in our code base where devs just
forget the "const&" where it would make perfectly sense.

I guess it may be too advanced for a compiler.

Would be interesting though if static analysis tools do such checking -
maybe on a class-by-class basis or based on other configuration
options.

cheers,
Martin

--
Good C++ code is better than good C code, but
bad C++ can be much, much worse than bad C code.


Francis Glassborow

unread,
Apr 12, 2012, 3:51:20 PM4/12/12
to
On 12/04/2012 11:05, rossm...@gmail.com wrote:
> On Tuesday, April 10, 2012 11:14:08 AM UTC-7, Alf P. Steinbach wrote:
>> The compiler is able to do it within the current language, but it's
>> constrained by
>>
>> * the problem of aliasing, i.e. correctness violation, and
>>
>> * depending on the solution, a combinatorial explosion, and that
>>
>> * depending on the solution, the linker must support the scheme.
>
> Thanks for the analysis Alf. On the surface it looks like the
> separate compilation model sinks attempts to provide input value
> optimization as Dave Abrahams mentioned. But perhaps a really robust
> compiler/linker would be able to do it?
>
> Here's what I've been thinking. Let's start with something simple:
>
> - Only apply optimization to classes with non trivial constructors
> or POD classes of a certain size. To avoid pessimizations.
>
> - Restrict candidates for this optimization to parameters
> that are only copied inside the function.
>
> - If the parameter is passed by reference to any function (including
> constructors) it cannot be optimized. Perhaps we may want to
> allow const references for an aggressive optimization strategy but
> this would not be strictly correct/conforming.
>
> - The goal is to guarantee that when the function exits that the
> parameter will not be modified if it is passed by reference.
> Therefore we can create a single optimized version of the function
> call and avoid the combinatorial explosion. Alf?
>

That last is not sufficient in multi-threaded/concurrent code. You
must also guarantee that the object passed by reference is not modified
elsewhere during the execution of the function. I think that is a
more demanding requirement these days and would effectively require
the implementation to do much more analysis before optimising.

Note that an implementation can already do what you ask under the 'as
if' rule as long as it can demonstrate that the choice cannot be
detected by the program's behaviour (other than efficiency).

Francis

rossm...@gmail.com

unread,
Apr 13, 2012, 4:50:22 AM4/13/12
to
On Tuesday, April 10, 2012 11:14:08 AM UTC-7, Alf P. Steinbach wrote:
> Here is an example of aliasing at work:

Nice example. That code would break my naive solution as
I would have assumed that the input parameter wouldn't
be changing.
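A reconstruction (mine) of the kind of aliasing example meant: if the
by-value parameter were silently rewritten to a reference, the second
append below would observe the function's own modification.

```cpp
#include <string>

std::string g_s = "abc";

// By value: s is a snapshot taken at the call. If s were secretly a
// reference to g_s, the second append would add "abcabc", not "abc".
void append_twice(std::string s) {
    g_s += s;
    g_s += s;   // s still holds the original "abc"
}
```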

So are you saying that the compiler could detect a possible
problem with this parameter seeing how it is global (or maybe
a passed in reference) and choose not to optimize it at the
call site?

> Consider then that this binary choice is present for each sufficiently
> large argument where the optimization is relevant, and so that with n
> such arguments we're talking about 2^n implementation variants: a
> /combinatorial explosion/ akin to the one for perfect forwarding.

So you are saying:
- Sometimes you will want to apply the optimization and
sometimes not for various parameters of the same
function.
- We can implement this by defining N variations of the
function and link in the one we need at link time.

I can see how it may be implemented this way. However, I
am not entirely clear how the callee flags implementation
will work.

Do you mean that the function signature will use const
references for the optimized types and if the flag says pass
by value then an internal copy is made?

Are you suggesting putting the copy on the heap or stack?

How do you access the parameter/copy within the function?
Use an internal pointer? (Will cost an indirection for
pass by value parameters)
Checking flag before access? (Will cost for flag check)

--
Ross MacGregor

rossm...@gmail.com

unread,
Apr 13, 2012, 4:50:54 AM4/13/12
to
On Thursday, April 12, 2012 12:42:23 PM UTC-7, Alf P. Steinbach wrote:
> But with a little help from the programmer, which is what
> many of the keywords in C++ are about, very little remains as
> impossible. E.g. in this case the help could take the form of some
> attribute which, if present, prevents the optimization, e.g. because
> there's a call to a function that modifies in spite of promising not
> to.

In regard to extending the language: what if we annotate
class declarations?

We could specify if a class or struct should be passed
by value or by reference as the default behavior of the
function call optimizer. Then the developer will be in complete
control of the behavior. We will be able to say a class should
always be passed by value even if it were to contain a
non-trivial constructor!

A very simple implementation might even forgo the alias
checking and just rely on these types to never be aliased.
Probably not a recommended way to go but it would be an
option.

Likewise, a very simple implementation could even skip the
test inside the function to determine eligibility for
optimization. Parameters of pass-by-reference types will
always be passed by reference. Programmer beware.

This could also be a way to ensure no old code breaks
as only newly declared classes would participate in the
function call optimization.

It would be nice if we could expand the type system a bit
so that the following typedef would work:

typedef std::string by_ref rstring;

// Old C++
void Widget::set_name(std::string const & name)
{
this->name = name;
}

// Hand-optimization-free C++
void Widget::set_name(rstring name)
{
this->name = name;
}

// Hand-optimization-free C++
void set_name(rstring name)
{
name += ".xml"; // error: rstring is a constant type
this->name = name;
}

// Hand-optimization-free C++
void set_name(rstring name)
{
rstring new_name(name);
new_name += ".xml";
this->name = new_name;
}
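The nearest spelling available today is a plain alias for const
reference (a sketch; the alias name is mine). It gives the
constant-type behaviour above, but leaves the compiler no freedom to
pick by-value where that would be cheaper:

```cpp
#include <string>

// Approximation of the proposed "by_ref" typedef with existing C++.
typedef const std::string &rstring_approx;

class Widget {
    std::string name;
public:
    void set_name(rstring_approx n) { name = n; }
    const std::string &get_name() const { return name; }
};
```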

--
Ross MacGregor

P. Areias

unread,
Apr 13, 2012, 2:40:49 PM4/13/12
to
>
> In regard to extending the language: what if we annotate
> class declarations?
>


> typedef std::string by_ref rstring;
>
> // Old C++
> void Widget::set_name(std::string const & name)
> {
> this->name = name;
>
> }
>
> // Hand-optimization-free C++
> void Widget::set_name(rstring name)
> {
> this->name = name;
>
> }
>
> // Hand-optimization-free C++
> void set_name(rstring name)
> {
> name += ".xml"; // error: rstring is a constant type
> this->name = name;
>
> }
>
> // Hand-optimization-free C++
> void set_name(rstring name)
> {
> rstring new_name(name);
> new_name += ".xml";
> this->name = new_name;
>
> }


The explicit copy of arguments (when required) is the trademark of
Fortran (even 2003), since ALL arguments are passed by reference. The
C++ approach requires more discipline. In addition, Fortran's
in/out/inout safeguards are expressed via the intent() attribute on
arguments.

So it is either:

1) All (non-constant) references with explicit deep copies inside the
functions. Safeguards may be used: Fortran 95-2003
2) A variety of argument possibilities (in the end, pointers and
values). Copies are synthesized: C++
3) Possibly a VM to deal with more intricate situations. .NET

Wasn't B. Stroustrup intimately familiar with Fortran? I am also (cf.
SIMPLAS, SIMPLASMPC, etc) and prefer, for reasons of coding
efficiency, the C++ way. I never understood the need for C#, as it
seems less productive and more baroque than C++.

In my perspective, this was a difficulty of former Languages
definitely solved by the C++ syntax.

There are still some lacunae in C++ (I wonder why STATIC reflection is
not in C++11 as any sufficiently complex program would benefit from it
sooner or later), but argument passing is not one (for mature
programmers.)


--

Jonathan Thornburg

unread,
Apr 13, 2012, 2:55:16 PM4/13/12
to
"Alf P. Steinbach" <alf.p.stein...@gmail.com> suggested:
> Ideally there should
> be "in", "out" and "in-out" designators as in Ada

James K. Lowden <jklo...@speakeasy.net> wrote:
> We almost have that now, right?
>
> in: value or const reference
> out: reference
> in/out: reference
>
> The last doesn't deserve distinction. It's rarely used because it's
> rarely justified.

I have two objections to this last comment. The first is practical:
I'm willing to take James Lowden's word that in/out arguments are rarely
used in the code bases he usually sees, but I'm not at all convinced
that that's true for all C++ code bases.

For example, in my experience the following sorts of code all make
frequent use of in/out arguments:
* databases (the database is implicitly an in/out argument to lots of code)
* image processing (any time you modify an image in-place (e.g., apply a
smoothing operator), it needs to be an in/out argument)
* matrix computations like LU decomposition or the singular value
decomposition (which almost always involve in-place modification of
matrices)
* Fourier transforms (FFTs are almost always implemented using a sequence
of in-place updates of a work array, which eventually replace the input
values with their Fourier transform)
* integration of differential equations (which almost always involve
a "state" which is updated in-place)
* sorting algorithms, starting with the classical CS200
template <typename T> void sort(std::vector<T>& v);
* balanced tree data structures like splay trees, B-trees, AVL trees, etc
(these all update a set of tree nodes in-place)
* operating systems (think of something like the virtual-memory mapping
data structures -- these are in/out arguments to lots of OS code)
* memory for any simulation of a computer system (e.g., a virtual machine)

There's a pattern here: all of these computations involve in-place updates
to some data which we don't want to copy
[possible reasons might include
* copying would be horribly slow
* there's not enough memory in the machine to make the copy
* lots of people hold (and need to keep accessing through)
pointers to the data]
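The sort entry from the list above, sketched: the vector is an in/out
argument precisely because copying it would defeat the point.

```cpp
#include <algorithm>
#include <vector>

// In-place sort: v is both input and output.
template <typename T>
void sort_in_place(std::vector<T> &v) {
    std::sort(v.begin(), v.end());
}
```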

Admittedly, it's really hard (maybe impossible?) to make such computations
exception-safe. But that doesn't render these computations unimportant
or useless; it just means that other error-handling strategies need to
be used in those parts of the code.



My second objection to James Lowden's statement is more philosophical,
and touches on C++'s basic design philosophy: C++ is designed to be
a tool (or perhaps set of tools) suitable for a (very) wide range of
application areas. The fact that a given C++ feature is only used
rarely in your code, and/or that it's "dangerous" in some way, doesn't
mean that it's not valuable to have in the language, and doesn't mean
that it's not justified to use in some other sort of code.

Another way to express this is that C++ is not a one-size-fits-all
language.

<dons nomex garments>
For example, consider C++'s goto statement. We all know its dangers,
and most code never uses it. But some code uses it a lot, and IMHO
Ritchie & Stroustrup made the right decisions by having it in C/C++.
</dons nomex garments>

--
-- "Jonathan Thornburg [remove -animal to reply]"
<jth...@astro.indiana-zebra.edu>
Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
"C++ is to programming as sex is to reproduction. Better ways might
technically exist but they're not nearly as much fun." -- Nikolai
Irgens

Dave Abrahams

unread,
Apr 13, 2012, 9:29:11 PM4/13/12
to
on Fri Apr 13 2012, Jonathan Thornburg <clcppm-poster-AT-this.is.invalid> wrote:

> "Alf P. Steinbach" <alf.p.stein...@gmail.com> suggested:
>> Ideally there should
>> be "in", "out" and "in-out" designators as in Ada
>
> James K. Lowden <jklo...@speakeasy.net> wrote:
>> We almost have that now, right?
>>
>> in: value or const reference
>> out: reference
>> in/out: reference
>>
>> The last doesn't deserve distinction. It's rarely used because it's
>> rarely justified.
>
> I have two objections to this last comment. The first is practical:
> I'm willing to take James Lowden's word that in/out arguments are rarely
> used in the code bases he usually sees, but I'm not at all convinced
> that that's true for all C++ code bases.

IMO it's wrong on its face. Every mutating member function uses an
in/out argument. I know, someone will object because "this" is a
pointer. PotAYto potAHto. It could just as easily have been a
reference, and many think it should have been.

I think it probably /is/ rare to have more than 1 in/out argument.
That's a good thing, since mutations are a source of complexity. But
there are notable exceptions, e.g. swap
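The named exception, sketched: both parameters of swap are in/out.

```cpp
#include <utility>

// A generic swap: a and b are each read and written.
template <typename T>
void my_swap(T &a, T &b) {
    T tmp = std::move(a);
    a = std::move(b);
    b = std::move(tmp);
}
```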

> For example, in my experience the following sorts of code all make
> frequent use of in/out arguments:

<snip a list>

> There's a pattern here: all of these computations involve in-place updates
> to some data which we don't want to copy
> [possible reasons might include
> * copying would be horribly slow
> * there's not enough memory in the machine to make the copy
> * lots of people hold (and need to keep accessing through)
> pointers to the data]
>
> Admittedly, it's really hard (maybe impossible?) to make such computations
> exception-safe.

Not at all. Always remember (and don't ever forget):

"exception safety" == basic_guarantee
"exception safety" != strong_guarantee

--Dave

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


James K. Lowden

unread,
Apr 14, 2012, 1:35:29 AM4/14/12
to
On Fri, 13 Apr 2012 11:55:16 -0700 (PDT)
Jonathan Thornburg <clcppm...@this.is.invalid> wrote:

> "Alf P. Steinbach" <alf.p.stein...@gmail.com> suggested:
> > Ideally there should
> > be "in", "out" and "in-out" designators as in Ada
>
> James K. Lowden <jklo...@speakeasy.net> wrote:
> > We almost have that now, right?
> >
> > in: value or const reference
> > out: reference
> > in/out: reference
> >
> > The last doesn't deserve distinction. It's rarely used because it's
> > rarely justified.
>
> I have two objections to this last comment. The first is practical:
> I'm willing to take James Lowden's word that in/out arguments are
> rarely used in the code bases he usually sees, but I'm not at all
> convinced that that's true for all C++ code bases.

I welcome the discussion, Jonathan. I hope to learn something from it.

I didn't cite code I usually see. I specifically cited the STL because
as far as I'm concerned that's the canonical example library. And I
didn't say it's never justified, only rarely.

The best-justified case is when the memory in question *can't* be
copied usefully because it's unique. Hardware devices e.g. video
buffers have the property.

I have experience with some of your examples:

> For example, in my experience the following sorts of code all make
> frequent use of in/out arguments:
> * databases (the database is implicitly an in/out argument to lots of
> code)

I've written several database interface libraries. I can't think of one
in/out parameter other than those specified as such by stored
procedures.

If by "the database" you mean the connection handle or similar, those
things don't need to be in the interface.

> * matrix computations like LU decomposition or the singular value
> decomposition (which almost always involve in-place modification of
> matrices)

I wrote a matrix library 15 years ago, and later adopted BLAS. Yes,
there are some tricks needed to make operators convenient to use while
avoiding temporaries, but IIRC very few i/o parameters. Not *none*,
just not that many.

> * sorting algorithms, starting with the classical CS200
> template <typename T> void sort(std::vector<T>& v);

http://www.cplusplus.com/reference/algorithm/sort/

I don't consider iterators to an unpassed container to
be i/o parameters. I think it's notable, actually, that the STL does
not provide sort<vector<T>>.

Dave Abrahams suggests that the this pointer in e.g. list::sort() can
be thought of as an i/o parameter in some sense. Down that road lies a
long computer science debate that I didn't mean to engage.

Please note, my basic point was that i/o parameters are rare enough, on
the evidence, that they don't merit distinction in the language
syntax. I would guess parameters are declared 90% by value, 9% output,
and 1% i/o. That's what I meant by "rarely used".

By "rarely justified", I meant they are to be avoided where possible.
Here's a bad example:

void sqrt( double& value );

I've worked with people who believed passionately that's more
efficient. You don't allocate two or three doubles on the stack; you
just use the one you've got!

I don't have to explain to you why that's wrong; I'm just illustrating
that i/o parameters can be used more often than they are. Any function
whose return type is the same as one of its input parameters is a
candidate. To Just Say No unless it's really needed is the mark of
careful work.
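The contrast drawn above, sketched (function names are mine):

```cpp
#include <cmath>

// In/out style the post warns against: the parameter is both the
// input and the place the result lands, obscuring the data flow.
void sqrt_inplace(double &value) { value = std::sqrt(value); }

// Functional style: one output computed from the input.
double sqrt_of(double value) { return std::sqrt(value); }
```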

> The fact that a given C++ feature is only used
> rarely in your code, and/or that it's "dangerous" in some way,

.... in that it introduces complexity ...

> doesn't mean that it's not valuable to have in the language

Yes, of course, absolutely.

--jkl


--

Christopher Creutzig

unread,
Apr 14, 2012, 2:29:31 PM4/14/12
to
On 4/14/12 7:35 AM, James K. Lowden wrote:

> I didn't cite code I usually see. I specifically cited the STL
> because as far as I'm concerned that's the canonical example
> library. And I didn't say it's never justified, only rarely.

I regard the following calls to have in/out parameters and to not be
exotic corner cases of STL:

std::cin >> i;

std::set<int> s;
s.insert(1);

++i;

> Dave Abrahams suggests that the this pointer in e.g. list::sort()
> can be thought of as an i/o parameter in some sense. Down that road
> lies a long computer science debate that I didn't mean to engage.

But both semantically and from an optimization point of view, his
argument makes perfect sense. Pointer/reference is just an
implementation detail when it comes to "this," and the syntactic sugar
of not having to provide this pointer/reference as an argument to the
function makes code more readable, but does not really change anything
beyond that.

> Please note, my basic point was that i/o parameters are rare enough,
> on the evidence, that they don't merit distinction in the language
> syntax. I would guess parameters are declared 90% by value, 9%
> output, and 1% i/o. That's what I meant by "rarely used".

Since I don't follow the way you are counting, it is not surprising
that my guesstimates are very different from yours. To me, modifying
operations take in/out parameters – the jury is still out on
assignment operators, but I believe those will be found guilty as
well. Not that there's anything wrong with having in/out parameters.

Actually, I'd see "output" parameters as a special case of in/out
parameters (which are denoted by non-const reference in C++) where the
input value is ignored – unless you can show me a valid way to have a
reference parameter which only gets constructed inside the function
called.
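For illustration (names made up), the syntactic identity of the two cases looks like this:

```cpp
#include <string>

// "Output" parameter: the incoming value of `result` is ignored
// and unconditionally overwritten.
void make_greeting(const std::string& name, std::string& result) {
    result = "Hello, " + name;
}

// True in/out parameter: the incoming value is read, then modified.
void shout(std::string& text) {
    text += "!";
}
```

Both take `std::string&`; nothing in the signature tells the caller whether the input value matters.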


Christopher

Jonathan Thornburg

unread,
Apr 15, 2012, 2:14:04 AM4/15/12
to
James K. Lowden <jklo...@speakeasy.net> wrote:
> Please note, my basic point was that i/o parameters are rare enough, on
> the evidence, that they don't merit distinction in the language
> syntax. I would guess parameters are declared 90% by value, 9% output,
> and 1% i/o. That's what I meant by "rarely used".
>
> By "rarely justified", I meant they are to be avoided where possible.
> Here's a bad example:
>
> void sqrt( double& value );
>
> I've worked with people who believed passionately that's more
> efficient. You don't allocate two or three doubles on the stack; you
> just use the one you've got!

I completely agree -- *this* example exhibits what is almost always
a poor software design [and its efficiency is unlikely to differ
significantly from double sqrt(double x); ].

However, if we instead had
void sqrt_of_big_vector(std::vector<double>& v);
where the vector v is very large (so that copying it would be very
slow and/or might be impossible due to the machine not having enough
memory for a copy), then this might well be a good software design,
and an API which mandated separate input & output vectors would be
a poor design.
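A compilable sketch of that hypothetical interface (the element-wise interpretation is an assumption):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// In-place transformation through an in/out reference parameter:
// no second buffer of the (possibly huge) vector is ever allocated.
void sqrt_of_big_vector(std::vector<double>& v) {
    std::transform(v.begin(), v.end(), v.begin(),
                   [](double x) { return std::sqrt(x); });
}
```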

I suspect we may be in "violent agreement" about this case as well,
and merely disagree about how frequent such cases are. The point I
was trying to make in my previous post is that in some application
areas, in-place modifications to things-we-don't-want-to-copy are
quite common. And a function to do such an in-place modification
needs the data to be an in/out argument (accessed either through
explicit parameters, and/or the implicit this argument of a class
member function, and/or global variables (ick)).



Following up on Christopher Creutzig's comment
| I regard the following calls to have in/out parameters and to not be
| exotic corner cases of STL:
[[...]]
|
| std::set<int> s;
| s.insert(1);
some operations (like std::set::insert(), or sorting, or the FFT
(fast Fourier transform)) are also usefully *conceptualized* as having
in/out parameters, quite apart from any questions of efficiency.

--
-- "Jonathan Thornburg [remove -animal to reply]"
<jth...@astro.indiana-zebra.edu>
Dept of Astronomy, Indiana University, Bloomington, Indiana, USA
"C++ is to programming as sex is to reproduction. Better ways might
technically exist but they're not nearly as much fun." -- Nikolai
Irgens


James K. Lowden

unread,
Apr 15, 2012, 2:20:31 AM4/15/12
to
On Sat, 14 Apr 2012 11:29:31 -0700 (PDT)
Christopher Creutzig <chris...@creutzig.de> wrote:

> > Dave Abrahams suggests that the this pointer in e.g. list::sort()
> > can be thought of as an i/o parameter in some sense. Down that road
> > lies a long computer science debate that I didn't mean to engage.
>
> But both semantically and from an optimization point of view, his
> argument makes perfect sense. Pointer/reference is just an
> implementation detail when it comes to "this," and the syntactic
> sugar of not having to provide this pointer/reference as an argument
> to the function makes code more readable, but does not really change
> anything beyond that.

Recall the OP suggested parameters should have syntax denoting their
i/o status (as apparently Ada has). Things like list::sort() and ++i
have no parameters. Would you suggest adding syntax to them?

I'm not saying anything about optimization, which doesn't interest me.
Far more than better compilers, faster execution derives from better
programs, which come from better programmers, who express their logic
to themselves and to other programmers in source code. The language
serves to express logic to the humans first, and to the computer only
secondarily. Were that not true, C++ would never have been invented;
we would all be programming in machine code.

Your point IIUC is that things like list::sort looks something like
qsort(3) under the hood. That's not relevant to the question of i/o
parameters per se. My point was and is that C++ syntax makes the use
of i/o parameters unnecessary in most cases.

> > I would guess parameters are declared 90% by value, 9%
> > output, and 1% i/o. That's what I meant by "rarely used".
>
> Since I don't follow the way you are counting, it is not surprising
> that my guesstimates are very different from yours. To me, modifying
> operations take in/out parameters -- the jury is still out on
> assignment operators,

(Your message arrives at my computer with a header indicating it's
encoded as Windows-1252, but iconv(1) recognizes it as utf-8.)

Well, clearly if we're talking about different things, we're apt to
have different conclusions about them.

I didn't intend to get into the theory because it's not my strength,
but since you raise the subject of assignment, it seems I have to
explain where I'm coming from for you to make sense of what I'm
saying.

As you may know, there's a body of CS research that says assignment and
functions are fundamental, and that side-effects interfere with both
cognition and optimization. I haven't studied it in detail, but I know
in my own experience -- and, I bet, you in yours -- programs are easier
to understand when the data move basically right-to-left. (I hold
operator>> in std::iostreams as an exception that doesn't contradict
the rule of single output. Maybe Stroustrup's disk was located
somewhere to the left of the caps lock key.)

Functions that have two effects -- a result on the left and an output
among the parameters -- are harder to reason about. Certainly to the
extent we embrace centuries of algebraic notation, we're bound to reap
its benefits.

The trend in computer languages is also toward functions and away from
pass-by-reference. In Fortran everything was by reference.
Classically:

SUBROUTINE F ( I )
I = I + 1
END

CALL F(3)

3 is now 4.

By contrast, the functional approach has brought us languages like
Haskell, in which all calls are by value and even variables don't
vary.

Aside: if lazy evaluation were added to C++, could Haskell programs be
written purely in terms of C++ constructors?

I never said i/o parameters can't be used or should be prohibited by
the language. I said they are rarely used and, with the exception of
low-level buffer manipulation, rarely justified. I hope my meaning and
reasoning are now clear.

--jkl

Dave Abrahams

unread,
Apr 15, 2012, 2:27:49 AM4/15/12
to
on Sat Apr 14 2012, "James K. Lowden" <jklowden-AT-speakeasy.net> wrote:

> I don't consider iterators to an unpassed container to
> be i/o parameters. I think it's notable, actually, that the STL does
> not provide sort<vector<T>>.

What in particular do you think is notable about it?

There are actually some very good arguments that the STL should have
provided that interface, and the committee is actively considering
adding it.

> Dave Abrahams suggests that the this pointer in e.g. list::sort() can
> be thought of as an i/o parameter in some sense. Down that road lies a
> long computer science debate that I didn't mean to engage.

I don't see that there's anything much debatable here. Fundamentally
the computation that sorts a list is the same thing whether you spell it
l.sort() (yes, the STL does provide that one) or sort(l). The
underlying machine model against which we program in C++ includes
mutation of state at a fundamental level. If you want to live in a
world where you can dismiss mutation of state as rare, you should be
programming in Haskell. (**)

-Dave

(**) Which isn't to say you're unwelcome here in C++-land, of course :-)

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Christopher Creutzig

unread,
Apr 15, 2012, 3:54:41 PM4/15/12
to
On 4/15/12 8:20 AM, James K. Lowden wrote:

> Far more than better compilers, faster execution derives from better
> programs, which come from better programmers, who express their logic
> to themselves and to other programmers in source code. The language

True. And even when it does not lead to faster execution, it is very
likely to result in better programs on other criteria.

> Your point IIUC is that things like list::sort looks something like
> qsort(3) under the hood. That's not relevant to the question of i/o

My point is not one of implementation, no. My point is that the notion
of an “input/output parameter” is just another way of talking about
side-effects or mutable state. Neither of which is bad, C++ is a
language designed around such things, but having these simply shapes the
way we code, compared to, say, functional languages without this.

> (Your message arrives at my computer with a header indicating it's
> encoded as Windows-1252, but iconv(1) recognizes it as utf-8.)

I'm afraid that is a bug in the moderation software. I see the same
problem in multiple users' postings, and it does not affect my postings
to unmoderated groups.

> cognition and optimization. I haven't studied it in detail, but I know
> in my own experience -- and, I bet, you in yours -- programs are easier
> to understand when the data move basically right-to-left. (I hold

We can agree on having a single direction being a good idea.
Left-to-right, as in buffer.str().c_str(), is perfectly fine with me.
But that may just show that I'm heavily using languages like Ruby where
this kind of operation chaining is much more common than in C++.

> Functions that have two effects -- a result on the left and an output
> among the parameters -- are harder to reason about. Certainly to the

They are also harder to combine and chain, which I regard as a very
useful programming idiom, and make the use of unnamed temporaries much
more difficult.

> I never said i/o parameters can't be used or should be prohibited by
> the language. I said they are rarely used and, with the exception of
> low-level buffer manipulation, rarely justified. I hope my meaning and
> reasoning are now clear.

I am afraid I completely missed the part where you argued against the
implicit this pointer qualifying as an i/o parameter. If you are talking
about parameters just in the syntactical sense, then I'm not sure where
that would lead, given that C++, as you said yourself, simply does not
have a syntax to distinguish between those non-const references which
are semantically true i/o parameters and those where the input value is
simply ignored.


Christopher

James K. Lowden

unread,
Apr 15, 2012, 9:23:18 PM4/15/12
to
On Sat, 14 Apr 2012 23:27:49 -0700 (PDT)
Dave Abrahams <da...@boostpro.com> wrote:

> If you want to live in a
> world where you can dismiss mutation of state as rare, you should be
> programming in Haskell. (**)

I didn't mean to imply that mutation is rare. I meant to state that
the syntax and idiom of C++ makes the use of i/o parameters --
syntactic, explicit parameters -- rare. Not vanishingly rare, but rare
enough that I see no need for them to expressly denoted in the syntax.
I do not think

void swap ( string& str );

would be improved by any form of

void swap ( string& str [io] );

Not only that, but the language offers alternatives to i/o
parameters that are better in most cases.

Looking at Jonathan Thornburg's reasonable example

> sqrt_of_big_vector(std::vector<double>& v);

it might also be expressed as

sqrt_of_big_vector(v.begin(), v.end());
or
transform(v.begin(), v.end(), v.begin(), sqrt);
or
big_vector::sqrt()

depending on whether or not function is applied element-wise. I think
you'll agree the last version is preferable if, for reasons of
internal consistency, the operator must be an all-or-nothing
transformation.
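The member-function spelling, sketched as a hypothetical wrapper class:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical big_vector whose sqrt() is an all-or-nothing
// transformation of its own state; the implicit `this` plays
// the role of the i/o parameter.
class big_vector {
    std::vector<double> data_;
public:
    explicit big_vector(std::vector<double> d) : data_(std::move(d)) {}
    void sqrt() {
        std::transform(data_.begin(), data_.end(), data_.begin(),
                       [](double x) { return std::sqrt(x); });
    }
    const std::vector<double>& data() const { return data_; }
};
```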

Acknowledged, Jonathan's version also works. The OO school says
functions that act on one type should be members. (Why, then, does the
STL have a sort function? Because it works on POD types, which
themselves exist because, as Stroustrup likes to remind us, C++ isn't
only an OO language.)

> > I think it's notable, actually, that the STL does
> > not provide sort<vector<T>>.
>
> What in particular do you think is notable about it?

It's notable because the committee evidently felt it was
redundant, insofar as the standard was released (and presumably
considered complete) without it. Because acting on the whole set is a
specific case of acting on a subset, the STL satisfies both by providing
just the general interface. By example it encourages others to do the
same.

> the committee is actively considering adding it.

If you have a favorite paper explaining why we need a function like
that, I'd be interested to read it.

Like you, I was younger when the STL first made its appearance. ;-)
I thought the lack of stuff like sort<vector<T>> was an obvious
oversight, and I wrote lots of little wrappers for them. Over the
years, as you might guess, I stopped bothering. I know, it's possible
to pass the begin() of one container and the end() of another. Very
exciting, too! And, since I haven't made that particular mistake in 10
years, I'm probably due for one.

> (**) Which isn't to say you're unwelcome here in C++-land, of
> course :-)

Thanks, I appreciate that. The whole point of hanging out on the
usenet is to have interesting discussions and learn stuff. Which,
come to think of it, is quite a bit like mutable state. Wait, I'm a
variable? I thought I was an algorithm!

Regards,

--jkl


--

wasti...@gmx.net

unread,
Apr 18, 2012, 12:56:02 PM4/18/12
to
On Monday, April 16, 2012 3:23:18 AM UTC+2, James K. Lowden wrote:
>
> Looking at Jonathan Thornburg's reasonable example
>
> > sqrt_of_big_vector(std::vector<double>& v);
>
> it might also be expressed as
>
> sqrt_of_big_vector(v.begin(), v.end());

Someone - it might have been you - earlier asserted that he doesn't
see iterators (even mutable ones) as in/out parameters. But I
disagree. Iterators are an abstraction of pointers, and
pointer-to-mutable arguments are a very common way of expressing
in/out parameters. Consider how you would pass an array to a function
for mutation. You wouldn't pass a reference to the array, you'd pass a
pointer and a size. That's what an array in/out parameter looks like
in C. And a pair of mutable iterators is what a container in/out
parameter looks like in C++.
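Side by side, the two idioms (function names invented for the example):

```cpp
#include <cstddef>

// C: an array in/out parameter is a pointer plus a size.
void double_all_c(int* p, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        p[i] *= 2;
}

// C++: the same in/out intent expressed as a pair of mutable
// iterators, which also works for any container or subrange.
template <typename Iter>
void double_all(Iter first, Iter last) {
    for (; first != last; ++first)
        *first *= 2;
}
```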

> or
> transform(v.begin(), v.end(), v.begin(), sqrt);

That's a good implementation for the function.

> or
> big_vector::sqrt()
>
> depending on whether or not function is applied element-wise. I think
> you'll agree the last version is preferable if, for reasons of
> internal consistency, the operator must be an all-or-nothing
> transformation.

Not unless we have a numeric container where "sqrt on all elements" is
a natural operation for the type. If "sqrt on all elements" is just an
operation that some bigger algorithm needs a lot, I see absolutely no
reason why the function should be promoted to a member of the
container.

Sebastian

Dave Abrahams

unread,
Apr 19, 2012, 12:45:39 AM4/19/12
to
on Sun Apr 15 2012, "James K. Lowden" <jklowden-AT-speakeasy.net> wrote:

> On Sat, 14 Apr 2012 23:27:49 -0700 (PDT)
> Dave Abrahams <da...@boostpro.com> wrote:
>
>> If you want to live in a world where you can dismiss mutation of
>> state as rare, you should be programming in Haskell. (**)
>
> I didn't mean to imply that mutation is rare. I meant to state that
> the syntax and idiom of C++ makes the use of i/o parameters --
> syntactic, explicit parameters -- rare. Not vanishingly rare, but
> rare enough that I see no need for them to expressly denoted in the
> syntax.

Agreed.

> Looking at Jonathan Thornburg's reasonable example
>
>> sqrt_of_big_vector(std::vector<double>& v);
>
> it might also be expressed as
>
> sqrt_of_big_vector(v.begin(), v.end());
> or
> transform(v.begin(), v.end(), v.begin(), sqrt);
> or
> big_vector::sqrt()

Not really. s/::/./

> depending on whether or not function is applied element-wise. I
> think you'll agree the last version is preferable if, for reasons of
> internal consistency, the operator must be an all-or-nothing
> transformation.

Not really

inplace_sqrt(big_vector);

is preferable from many points of view.

If the operator had to be all-or-nothing in some circumstances, and
the elementwise sqrt could throw (it can't with double), I would
handle it externally:

{
vector<BigNum> t = v;
inplace_sqrt(t);
swap(t,v);
}

I would never burden every caller of inplace_sqrt with the cost of
copy/swap.
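Fleshed out into a compilable sketch (inplace_sqrt and the use of double are assumptions; the post above imagined a BigNum whose elementwise sqrt could throw):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Assumed cheap in-place primitive.
void inplace_sqrt(std::vector<double>& v) {
    std::transform(v.begin(), v.end(), v.begin(),
                   [](double x) { return std::sqrt(x); });
}

// Copy/swap wrapper for the rare caller who needs the strong,
// all-or-nothing guarantee: work on a copy, swap only on success.
void sqrt_strong(std::vector<double>& v) {
    std::vector<double> t = v;
    inplace_sqrt(t);   // if this threw, v would be untouched
    std::swap(t, v);   // vector swap just exchanges internals
}
```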

Finally, if I really wanted optimal efficiency, I'd do something with
expression templates and write something like this:

v = sqrt(v);

> Acknowledged, Jonathan's version also works. The OO school says
> functions that act on one type should be members. (Why, then, does
> the STL have a sort function?

You should read what Stepanov has to say about OO sometime. I doubt
the OO school's sayings would have much influence. And IMO he's
right.

> Because it works on POD types, which themselves exist because, as
> Stroustrup likes to remind us, C++ isn't only an OO language.)

That's not why we have PODs, but that's probably not too relevant here
anyway.

>> > I think it's notable, actually, that the STL does
>> > not provide sort<vector<T>>.
>>
>> What in particular do you think is notable about it?
>
> It's notable because the committee evidently felt it was redundant,
> insofar as the standard was released (and presumably considered
> complete) without it.

I don't think you have a very clear appreciation of committee
dynamics. There was no group decision that such an interface was
redundant. There was only so much time, the standard was late, and
the STL was a very late addition. At some point you just have to
decide to stop working on the document and publish it.

> Because acting on the whole set is a specific case of acting on a
> subset, the STL satisfies both by providing just the general
> interface. By example it encourages others to do the same.

Yes.

>> the committee is actively considering adding it.
>
> If you have a favorite paper explaining why we need a function
> like that, I'd be interested to read it.

I don't, but there are lots of good arguments for a range-based
interface. Check out Andrei Alexandrescu's BoostCon keynote:
http://blip.tv/boostcon/boostcon-2009-keynote-2452140
>
> Like you, I was younger when the STL first made its appearance. ;-)
> I thought the lack of stuff like sort<vector<T>> was an obvious
> oversight, and I wrote lots of little wrappers for them. Over the
> years, as you might guess, I stopped bothering. I know, it's
> possible to pass the begin() of one container and the end() of
> another. Very exciting, too! And, since I haven't made that
> particular mistake in 10 years, I'm probably due for one.

There are reasons other than correctness to do this. For example,
expressivity:


http://www.boost.org/doc/libs/1_49_0/libs/range/doc/html/range/introduction.html

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


James K. Lowden

unread,
Apr 19, 2012, 12:58:00 AM4/19/12
to
On Wed, 18 Apr 2012 09:56:02 -0700 (PDT)
wasti...@gmx.net wrote:

> You wouldn't pass a reference to the array, you'd pass a
> pointer and a size. That's what an array in/out parameter looks like
> in C. And a pair of mutable iterators is what a container in/out
> parameter looks like in C++.

Agreed. A pair of non-const iterators is the idiomatic and general way
to let a function modify a collection. One may be tempted to pass a
non-const reference instead or, worse, a non-const reference and two
indexes. Except for reasons of atomicity, in most cases two iterators
are preferable.

The original question was whether or not C++ should have some
syntactical designation of i/o parameters per se. I said No on grounds
of not much used and not much needed. There was a dust-up over whether
variables vary, which dust has I think now settled.

> > big_vector::sqrt()
> If "sqrt on all elements" is just an
> operation that some bigger algorithm needs a lot, I see absolutely no
> reason why the function should be promoted to a member of the
> container.

It took me a long time to accept the idea that friend functions are an
integral part of the class design. Prior, I thought they were just
mistakes or shortcuts. My opinions are tough, you see: They wear thin
before they wear out. ;-)

By the same token, the choice between

f( foo& );
and
foo::f();

is guided by the fact that the former can be a template and the latter
virtual. Which to choose depends on which aspect matters.
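A minimal illustration of that trade-off (types and names invented):

```cpp
// Free-function form: a template works for any type that has the
// right members, resolved at compile time.
template <typename Shape>
double area_of(const Shape& s) { return s.area(); }

// Member form: a virtual function dispatches at run time through
// a base-class reference.
struct shape {
    virtual double area() const = 0;
    virtual ~shape() = default;
};
struct square : shape {
    double side;
    explicit square(double s) : side(s) {}
    double area() const override { return side * side; }
};
```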

In considering your point I found myself again comparing templates
and inheritance. Knowing I was not the first, I began a desultory
search, and stumbled on this old interview with Alexander Stepanov:

http://www.stlport.org/resources/StepanovUSA.html

It contains observations that bear on this thread. Recommended.

--jkl

P. Areias

unread,
Apr 19, 2012, 1:38:19 PM4/19/12
to
Stepanov's interview is really worth reading (and his "Elements of
Programming" even more so). Algorithms should be written as the book
describes and are completely unrelated to OO. Think of the remainder
algorithm. You can invoke it with two polynomials or two integers, and
it is universal. However, when a hierarchy is the natural form of
reasoning (e.g. polygons, polyhedra, ...) I would still adopt inheritance
and polymorphism as representations. The two approaches are
compatible, but the inheritance approach has been abused during the
last decades.
I would try to stay away from the "fake math" sharply pointed out by
Stepanov in the interview... Some design decisions (the plethora of
iterators) of STL were clearly not Mathematically motivated.

Dave Abrahams

unread,
Apr 19, 2012, 2:56:00 PM4/19/12
to
on Thu Apr 19 2012, "P. Areias" <pedromiguelareias-AT-gmail.com> wrote:

> Stepanov's interview is really worth reading (and his "elements of
> programming" is even more). Algorithms should be written as the book
> describes and are completely unrelated to OO. Think of the remainder
> algorithm. You can invoke it with two polynomials or two integers, and
> it is universal. However, when a hierarchy is the natural form of
> reasoning (e.g. polygons, polyhedra, ) I would still adopt inheritance
> and polymorphism as representations.

Instead of concept refinement? Why?

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


0 new messages