
I/O class: how to design for special case


Simon Elliott

Jan 23, 2006, 6:57:04 AM
We have a number of C++ classes for I/O (serial comms etc) which all
have a similar interface. The relevant part follows:

class Cfoo
{
public:
    ...
    bool Out(unsigned char c);
    bool Out(const unsigned char* buff, unsigned long count,
             unsigned long *sent);
    ...
};

The two Out() methods output a single byte or a buffer of arbitrary
length.

We now want to design a class (for ALSA sound output, as it happens)
which has slightly different requirements. The buffer size must be an
even multiple of a number which can only be obtained at run time, and
for best performance, the buffer must be a particular size.

It seems to me that we have (at least) three options:

1/ Keep the current interface and maintain an internal buffer which has
the appropriate characteristics. Advantage: we get to keep the current
interface. Disadvantage: An extra copy into the buffer.

2/ Add an extra method which returns the recommended buffer size. The
caller then allocates a buffer of the correct size and calls Out(const
unsigned char* buff...) with this buffer. Advantages: we get to keep
much of the current interface. Minimises buffer copying. Disadvantage:
seems rather brittle - the user needs to jump through lots of hoops to
use the class.

3/ Allocate the buffer internally and add a method which returns a
pointer to the buffer and its size. Advantage: Minimises buffer
copying. Disadvantages: We lose the current interface. Exposing a
pointer to an internal buffer has its own dangers.
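
For concreteness, option 2 might be sketched roughly like this (the
names Calsa, RecommendedBufferSize and playChunk are placeholders, not
part of the existing interface):

#include <vector>

class Calsa
{
public:
    bool Out(unsigned char c);
    bool Out(const unsigned char* buff, unsigned long count,
             unsigned long *sent);

    // New query: the preferred transfer size, obtained from ALSA at
    // run time.
    unsigned long RecommendedBufferSize() const;
};

// Caller side: allocate a buffer of the recommended size, fill it,
// then use the existing Out() overload as before.
void playChunk(Calsa& dev)
{
    std::vector<unsigned char> buff(dev.RecommendedBufferSize());
    // ... fill (and if necessary pad) buff ...
    unsigned long sent = 0;
    dev.Out(&buff[0], static_cast<unsigned long>(buff.size()), &sent);
}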

Any other options that we're missing?

--
Simon Elliott http://www.ctsn.co.uk

H. S. Lahman

Jan 23, 2006, 12:51:37 PM
Responding to Elliott...

> We have a number of C++ classes for I/O (serial comms etc) which all
> have a similar interface. The relevant part follows:
>
> class Cfoo
> {
> public:
> ...
> bool Out(unsigned char c);
> bool Out(const unsigned char* buff, unsigned long count,
> unsigned long *sent);
> ...
> };
>
> The two Out() methods output a single byte or a buffer of arbitrary
> length
>
> We now want to design a class (for ALSA sound output, as it happens)
> which has slightly different requirements. The buffer size must be an
> even multiple of a number which can only be obtained at run time, and
> for best performance, the buffer must be a particular size.

What you describe here is a precondition contract with the client of the
second Out method. That is, it is up to the client to make sure a
buffer is passed of the correct length for the context. So my initial
instinct is to say that Cfoo should not be changed except, perhaps, to
provide a precondition assertion. Any changes to ensure compliance
would be in the client implementation.
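
A minimal sketch of such a precondition assertion, assuming a member
(here called myPeriodSize) that holds the run-time number; the class and
member names are illustrative only:

#include <cassert>

class Calsa   // illustrative stand-in for the real I/O class
{
public:
    explicit Calsa(unsigned long periodSize) : myPeriodSize(periodSize) {}

    bool Out(const unsigned char* buff, unsigned long count,
             unsigned long *sent)
    {
        // Precondition contract: the client must pass a length that is
        // an even multiple of the size obtained at run time.
        assert(count % myPeriodSize == 0);
        // ... hand buff to the device here ...
        *sent = count;
        return true;
    }

private:
    unsigned long myPeriodSize;
};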

However, that depends on the real semantics of Cfoo. If Cfoo's mission
is to encapsulate that sort of constraint policy from the client, then
it becomes a matter of implementation of Out. That is, the Out
arguments represent the data that the client needs to send; no more and
no less. It then becomes Out's mission to pad or split up the buffer as
needed based upon the value of the Mystery Number. Again, though, since
this is a private method implementation issue, the Cfoo class interface
would not need to change.

Bottom line: I think the /implementation/ changes for /whoever/ should
logically understand how to apply the Mystery Number.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH

Simon Elliott

Jan 24, 2006, 5:30:01 AM
On 23/01/2006, H. S. Lahman wrote:

> Responding to Elliott...

[snip for brevity]

> Bottom line: I think the implementation changes for whoever should
> logically understand how to apply the Mystery Number.

Yes, that makes sense. The performance constraints muddy the waters a
bit though. Ideally the caller shouldn't need to know about the Mystery
Number, but to hide it from the caller would involve an extra buffer
copy.

We're far enough down the road that we're committed to an object
design, and to implementing this via C++. Just as an academic exercise,
would other design methodologies or language choices offer us any other
options here?

Cristiano Sadun

Jan 24, 2006, 6:20:29 AM
Two questions:

1. Is the number different across different runtimes in the same
environment (a), or is it just dependent on the environment (b)?

2. Who decides which class to use? Is it the client directly, or is the
client unaware of the specific subclass in use?

H. S. Lahman

Jan 24, 2006, 1:26:11 PM
Responding to Elliott...

>>Bottom line: I think the implementation changes for whoever should
>>logically understand how to apply the Mystery Number.
>
>
> Yes, that makes sense. The performance constraints muddy the waters a
> bit though. Ideally the caller shouldn't need to know about the Mystery
> Number, but to hide it from the caller would involve an extra buffer
> copy.

Right. Dealing with nonfunctional requirements often means sacrificing
maintainability to some extent because those requirements are orthogonal
to the customer problem. So it may be more robust for Cfoo to
encapsulate the buffer management, but practicality may demand that the
client do the right thing as a precondition to invoking Cfoo.

However, I would push back a bit on worrying about performance for this
sort of situation. Somebody is going to have to pad/split the buffer
based on the Mystery Number, so that processing is fixed. The only
marginal hit is a second buffer allocation and the data copy if Cfoo
does it. Most of that hit will <probably> be in the heap allocation, so
it may be possible to do something like a static buffer allocation that
is done once to solve the performance problem.
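
One way to read that "allocate once" idea is an internal staging buffer
that is set up a single time when the object is created, so the per-call
cost is only the copy, not a heap allocation. A sketch with assumed
names:

#include <vector>

class Calsa
{
public:
    explicit Calsa(unsigned long periodSize)
        : myPeriodSize(periodSize),
          myStaging(periodSize)   // allocated once, reused by every Out()
    {}

    bool Out(const unsigned char* buff, unsigned long count,
             unsigned long *sent);   // copies into myStaging in chunks

private:
    unsigned long myPeriodSize;
    std::vector<unsigned char> myStaging;
};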

That is, my push back is really to be able to demonstrate that (a) there
is a performance problem and (b) having the client own the buffer
management will really solve it. IOW, I would design it right until
proven otherwise.

> We're far enough down the road that we're committed to an object
> design, and to implementing this via C++. Just as an academic exercise,
> Would other design methodologies or language choices offer us any other
> options here?

The first would be to use C rather than C++. B-) Commercial
transformation engines that do full code generation from OOA models
routinely target C in R-T/E applications to provide better performance.

Methodologically, dependency management at the OOP level to make the
code more maintainable can have a substantial hit in performance. John
Lakos, in "Large Scale C++ Software Design", measured hits of up to
three orders of magnitude if one did everything possible for dependency
management, so there is always a trade-off in how robust one wants to be.

<commercial>
One of the advantages of translation-based development is that one does
not have to maintain 3GL code. The problems in physical coupling that
lead to poor maintainability w/o dependency management are essentially
inherent in the languages being 3GLs (though OOPLs tend to be worse than
procedural languages because of their more complex use of type systems).
Since a translationist just does OOA models, one doesn't care if the
3GL code is maintainable (i.e., if anything needs to change it is just
regenerated after the model is changed). So the transformation engine
can thoroughly optimize it into unreadability. [I know of one
translation tool vendor who deliberately generates unreadable 3GL code
just to make sure the developer doesn't muck with it w/o changing the
models.]
</commercial>

Alas, at the paradigm level, OO development will inherently tend to have
poorer performance than procedural or functional programming. That's
because OO addresses the goal of maintainability through abstraction.
But abstraction at the OOPL level means the language is doing some of
the grunt work and it has to do it in a very generic fashion. That
necessarily means one cannot provide as much tailoring of the specific
solution as one would like. So just to make the choice of using OO, one
has already acknowledged that maintainability is a higher overall
priority than performance.

Unfortunately it goes further than simply targeting C from an OOA/D
model. The paradigm itself strives to address issues like modifying
global data through OOA/D practices. Some of those practices, like
resolving inheritance, will usually lead to things like jump table
indirection in the implementation no matter what 3GL one employs to
implement the OOA/D.

jl...@bigpond.net.au

Jan 24, 2006, 2:49:15 PM
Simon,

Firstly, your description of the problem and suggested solutions is
nice. Thank you for taking the time to explain the problem well and
come up with some possible solutions. Typically I see one-liners with
very little to go on.

Secondly, can CFoo provide a factory method that provides a CFooBuffer,
with the CFoo methods able to accept this buffer?
E.g.: CFooBuffer CFoo::newBufferWithDefaultSize()

This way you're asking the object with the information to do the work
for you.

To avoid the buffer copy you could also have the CFoo::out methods take
a CFooBuffer but then ask the CFooBuffer to write its contents to where
you need it to go. Here is an example if the output were to go onto a
socket:

bool CFoo::out(CFooBuffer& cfooBufferInstance)
{
    return cfooBufferInstance.storeOn(socket);
}

i.e.: we're still asking the object with the information to do the
work. It inverts control and is the OO approach, IMHO.
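
A fuller sketch of that idea, with assumed names and signatures
(CFooBuffer, storeOn() and newBufferWithDefaultSize() are illustrative
only, not an existing API):

#include <vector>

class CFooBuffer
{
public:
    explicit CFooBuffer(unsigned long size) : myData(size) {}

    unsigned char* data() { return &myData[0]; }
    unsigned long size() const { return (unsigned long)myData.size(); }

    // The buffer writes its own contents to the destination.
    bool storeOn(int socketFd);

private:
    std::vector<unsigned char> myData;
};

class CFoo
{
public:
    // Factory method: CFoo knows the right size, so it builds the buffer.
    CFooBuffer newBufferWithDefaultSize() const
    {
        return CFooBuffer(recommendedSize());
    }

    // No copy: just ask the buffer to store itself where the output goes.
    bool out(CFooBuffer& buffer) { return buffer.storeOn(mySocket); }

private:
    unsigned long recommendedSize() const;
    int mySocket;
};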

I hope this helps,

Keep well

Rgs, James.
(http://www.jamesladdcode.com)

Simon Elliott

Jan 26, 2006, 9:08:08 AM
On 24/01/2006, H. S. Lahman wrote:

> Right. Dealing with nonfunctional requirements often means
> sacrificing maintainability to some extent because those requirements
> are orthogonal to the customer problem. So it may be more robust for
> Cfoo to encapsulate the buffer management, but practicality may
> demand that the client do the right thing as a precondition to
> invoking Cfoo.

Yes.

> However, I would push back a bit on worrying about performance for
> this sort of situation. Somebody is going to have to pad/split the
> buffer based on the Mystery Number, so that processing is fixed.
> The only marginal hit is a second buffer allocation and the data copy
> if Cfoo does it. Most of that hit will <probably> be in the heap
> allocation, so it may be possible to do something like a static
> buffer allocation that is done once to solve the performance problem.

I've prototyped this in some C code. It's possible to do this with a
single allocation and no padding and splitting. Simplifying to the
limit:

buff = malloc(magicNumber);
bytesRead = read(handle, buff, magicNumber);
while (bytesRead)
{
    if (bytesRead < magicNumber)
    {
        padBufferWithSilence(buff + bytesRead, magicNumber - bytesRead);
    }
    writeToSoundCard(buff, bytesRead);
    bytesRead = read(handle, buff, magicNumber);
}
free(buff);

The most unpleasant part of this from the caller's point of view is
padBufferWithSilence() which appends bytes of silence onto the end of
the buffer. If the buffer allocated by the caller is too small, this
will write all over memory which the caller does not own.

However, as you mentioned earlier, enforcing the precondition contract
should take care of that.

> That is, my push back is really to be able to demonstrate that (a)
> there is a performance problem and (b) having the client own the
> buffer management will really solve it. IOW, I would design it right
> until proven otherwise.

Yes, the premature optimisation thing. This really needs to be
profiled.

> > We're far enough down the road that we're committed to an object
> > design, and to implementing this via C++. Just as an academic
> > exercise, Would other design methodologies or language choices
> > offer us any other options here?
>
> The first would be to use C rather than C++. B-) Commercial
> transformation engines that do full code generation from OOA models
> routinely target C in R-T/E applications to provide better
> performance.

... and you can see exactly what the code is doing without having to
look at the assembler. One problem I have with C++ is that it sometimes
does unexpected things (copies of objects not being optimised away etc)
though compilers are getting better at warning about that kind of thing.

> Methodologically dependency management at the OOP level to make the
> code more maintainable can have a substantial hit in performance.
> John Lakos, in "Large Scale C++ Software Design", measured hits of up
> to three orders of magnitude if one did everything possible for
> dependency management so there is always a trade-off in how robust
> one wants to be.

This hasn't been my experience. But I'm sure the devil's in the detail
here. To take an extremely trivial example:

char myname[246];
strcpy(myname, "Simon Elliott");

versus:

std::string myname;
myname = "Simon Elliott"

The latter has to allocate memory on the heap and has layers of
encapsulation, and I'm sure it would be orders of magnitude slower. But
as soon as we assume that we don't know how much space we need to
allocate for myname, the C code starts to become equivalent (and far
less maintainable).
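
For example, a run-time-sized version of the same comparison might look
like this (a sketch only; the function names are placeholders):

#include <cstdlib>
#include <cstring>
#include <string>

void cStyle(const char* name)
{
    // Once the length is only known at run time, the C version has to
    // do its own allocation bookkeeping.
    char* myname = (char*)std::malloc(std::strlen(name) + 1);
    if (myname != 0)
    {
        std::strcpy(myname, name);
        /* ... use myname ... */
        std::free(myname);
    }
}

void cppStyle(const char* name)
{
    std::string myname = name;   // allocation and cleanup handled for us
    /* ... use myname ... */
}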

[snip commercial]
Which tool do you use?

> Alas, at the paradigm level, OO development will inherently tend to
> have poorer performance than procedural or functional programming.
> That's because OO addresses the goal of maintainability through
> abstraction. But abstraction at the OOPL level means the language is
> doing some of the grunt work and it has to do it in a very generic
> fashion. That necessarily means one cannot provide as much tailoring
> of the specific solution as one would like. So just to make the
> choice of using OO one has already acknowledged that maintainability
> is a higher overall priority than performance.

Absolutely. I'm not convinced that an object design is a good fit for
every problem, and I get the impression that often one side effect of
using an object design for an unsuited problem is a large performance
penalty.

> Unfortunately it goes further than simply targeting C from an OOA/D
> model. The paradigm itself strives to address issues like modifying
> global data through OOA/D practices. Some of those practices, like
> resolving inheritance, will usually lead to things like jump table
> indirection in the implementation no matter what 3GL one employs to
> implement the OOA/D.

Yes. However, in a similar vein to my rather simplistic std::string
example above, a lot of these issues would need to be addressed in a
comprehensive C design. For example, virtual functions in C++ code may
replace switch statements in C code. If the switch statement was used
to select a function call, most compilers do the virtual function
dispatch more quickly than the switch.
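
A sketch of the two forms being compared (illustrative names only):

#include <cstdio>

enum PortKind { SERIAL_PORT, SOCKET_PORT };

// C-style selection: a switch chooses which routine to run.
void outSwitch(PortKind kind)
{
    switch (kind)
    {
    case SERIAL_PORT: std::printf("serial write\n"); break;
    case SOCKET_PORT: std::printf("socket write\n"); break;
    }
}

// C++-style selection: one indirect call through the vtable.
class Port
{
public:
    virtual ~Port() {}
    virtual void out() = 0;
};

class SerialPort : public Port
{
public:
    void out() { std::printf("serial write\n"); }
};

void outVirtual(Port& p) { p.out(); }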

Simon Elliott

Jan 26, 2006, 9:15:50 AM
On 24/01/2006, jl...@bigpond.net.au wrote:
> Firstly, your description of the problem and suggested solutions is
> nice. Thank you for taking the time to explain the problem well and
> come up with some possible solutions. Typically I see one-liners with
> very little to go on.

Thanks. This design is something that I've given some thought to, and I
was left with the nagging feeling that I had omitted at least one
possibility. Hence I wanted to get some feedback and hopefully learn
something, even if I was unable to improve the design.

> Secondly, Can CFoo provide a factory method that provides a CFooBuffer
> with the CFoo methods able to accept this buffer.
> Eg: CFooBuffer CFoo::newBufferWithDefaultSize()

That's an interesting idea, and one that I hadn't considered. The only
downside is that it imposes a change in the interface. But all design
is a tradeoff in the end!

> This way you're asking the object with the information to do the work
> for you.
>
> To avoid the buffer copy you could also have the CFoo::out methods
> take a CFooBuffer but then ask the CFooBuffer to write its
> contents to where you need it to go. Here is an example if the output
> was to go onto a socket ....
>
> bool CFoo::out(CFooBuffer& cfooBufferInstance)
> {
>     return cfooBufferInstance.storeOn(socket);
> }
>
> i.e.: we're still asking the object with the information to do the work. It
> inverts control and is the OO approach IMHO.

The only detail I'm not too certain of is how to get the data into the
cfooBuffer before it gets passed to Cfoo. I suspect there are a number
of options: exposing a pointer, inheritance...

Simon Elliott

Jan 26, 2006, 9:17:49 AM
On 24/01/2006, Cristiano Sadun wrote:

> Two questions:
>
> 1. Is the number different across different runtimes in the same
> environment (a), or is just dependent on the environment (b)?

Usually (b) but there are circumstances where it could be (a).

> 2. Who decides which class to use? Is it the client directly, or the
> client is unaware of the specific subclass in use?

Both of the above. Sometimes the class will be used directly, and
sometimes it will be hidden by an interface.

H. S. Lahman

Jan 26, 2006, 3:01:45 PM
Responding to Elliott...

>>However, I would push back a bit on worrying about performance for
>>this sort of situation. Somebody is going to have to pad/split the
>>buffer based on the Mystery Number, so that processing is fixed.
>>The only marginal hit is a second buffer allocation and the data copy
>>if Cfoo does it. Most of that hit will <probably> be in the heap
>>allocation, so it may be possible to do something like a static
>>buffer allocation that is done once to solve the performance problem.
>
>
> I've prototyped this in some C code. It's possible to do this with a
> single allocation and no padding and splitting. Simplifying to the
> limit:
>
> buff = malloc(magicNumber);
> bytesRead=read(handle,buff,magicNumber);
> while (bytesRead)
> {
> if (bytesRead < magicNumber)
> {
> padBufferWithSilence(buff+bytesRead, magicNumber-bytesRead);
> }
> writeToSoundCard(buff, bytesRead);
> bytesRead=read(handle,buff,magicNumber);
> }
> free(buff);
>
> The most unpleasant part of this from the caller's point of view is
> padBufferWithSilence() which appends bytes of silence onto the end of
> the buffer. If the buffer allocated by the caller is too small, this
> will write all over memory which the caller does not own.

If I was going to use the one-time buffer approach I think I would do it
within the CFoo implementation. Maybe something like:

class Cfoo
{
private:
    static unsigned char myOutBuf [MYSTERY_NUMBER];
    unsigned char* myBufPtr;
    ...
};

bool Cfoo::Out (const unsigned char* buff, unsigned long count,
                unsigned long* sent)
{
    const unsigned char* inPtr = buff;   // walks the caller's data
    unsigned long outCount = 0;
    unsigned long totalCount = 0;

    myBufPtr = myOutBuf;                 // walks the internal buffer
    while (TRUE) // Iterate over copying buffer bytes
    {
        // check if overall message is complete
        if (totalCount == count)
        {
            if (outCount > 0)            // flush any partial final buffer
            {
                if (outCount < MYSTERY_NUMBER)
                    padBufferWithSilence (...);
                writeToSoundCard (...);
            }
            break;
        }

        // check if output buffer is full
        if (outCount == MYSTERY_NUMBER)
        {
            writeToSoundCard (...);
            outCount = 0;
            myBufPtr = myOutBuf;
        }

        // copy input to output
        *myBufPtr++ = *inPtr++;
        totalCount++;
        outCount++;
    }
    // do whatever to provide 'sent' and bool return
}

Now the client knows nothing about the Mystery Number, padding, or the
buffer copy. That is, the client doesn't call padBufferWithSilence.
[Obviously there are alternative formulations to do Mystery Number
copies via strncpy until the last. They will be marginally more
efficient because totalCount and outCount only need to be incremented by
MYSTERY_NUMBER rather than each character. This just happened to be the
easiest one to write in a Joycean stream of consciousness. B-)]

>>>We're far enough down the road that we're committed to an object
>>>design, and to implementing this via C++. Just as an academic
>>>exercise, Would other design methodologies or language choices
>>>offer us any other options here?
>>
>>The first would be to use C rather than C++. B-) Commercial
>>transformation engines that do full code generation from OOA models
>>routinely target C in R-T/E applications to provide better
>>performance.
>
>
> ... and you can see exactly what the code is doing without having to
> look at the assembler. One problem I have with C++ is that it sometimes
> does unexpected things (copies of objects not being optimised away etc)
> though compilers are getting better at warning about that kind of thing.

That's life with OOPLs. B-) C++ is probably not as bad as many others.
When one applies notions like assignment to aggregates it becomes
difficult to cover all the bizarre special cases, so to do the grunt
work behind the scenes in a generic AND bullet-proof manner the compiler
has to do things like making copies that may be superfluous in all but
the most arcane situations.

> [snip commercial]
> Which tool do you use?

Actually, I'm retired now so I rarely use any. However, I am affiliated
with Pathfinder and they provide translation tools (e.g., PathMATE). So
my opinions on relative merits of the tools tend to be somewhat biased.

>>Alas, at the paradigm level, OO development will inherently tend to
>>have poorer performance than procedural or functional programming.
>>That's because OO addresses the goal of maintainability through
>>abstraction. But abstraction at the OOPL level means the language is
>>doing some of the grunt work and it has to do it in a very generic
>>fashion. That necessarily means one cannot provide as much tailoring
>>of the specific solution as one would like. So just to make the
>>choice of using OO one has already acknowledged that maintainability
>>is a higher overall priority than performance.
>
>
> Absolutely. I'm not convinced that an object design is a good fit for
> every problem, and I get the impression that often one side effect of
> using an object design for an unsuited problem is a large performance
> penalty.

I have no doubt that the OO paradigm isn't the best choice for a lot of
problems. For example, number crunchers using mathematical algorithms
are probably not a good choice for OO development because the math
doesn't change over time so maintenance isn't an issue. Then functional
programming will be a lot more intuitive and will yield a substantially
simpler solution. (OTOH, data preparation, gluing algorithms together,
results display, etc. are all parts of a practical scientific
application that are amenable to OO techniques.)

Another example is LALR(1) lexical analysis and parsing where the
processing is linear by syntax table row. Part of OO's power lies in
abstraction and using relationships for collaboration and managing state
variables in complex contexts. Such "side scan" complexity is missing
in linear processing. (OTOH, modeling lexical analysis itself to
provide a general purpose lexical analyzer where the syntax is defined
externally via BNF is a good problem for OO.)

However, performance issues tend to follow the 80/20 Rule with a
vengeance so one can have the best of both worlds by only optimizing for
the special situations. The OO emphasis on encapsulation allows one to
encapsulate realized code quite easily, so the paradigm can be mixed.

>>Unfortunately it goes further than simply targeting C from an OOA/D
>>model. The paradigm itself strives to address issues like modifying
>>global data through OOA/D practices. Some of those practices, like
>>resolving inheritance, will usually lead to things like jump table
>>indirection in the implementation no matter what 3GL one employs to
>>implement the OOA/D.
>
>
> Yes. However in a similar vein to my rather simplistic std::string
> example above, a lot of these issues would need to be addressed in a
> comprehensive C design. For example, virtual functions in C++ code may
> replace switch statements in C code. If the switch statement was used
> to select a function call, most compilers do the virtual function
> dispatch more quickly than the switch.

Actually, I think that is an inherent problem in C with the way the
switch is defined. With an LALR(1) compiler one has to use cascaded IFs
for the switch because the compiler doesn't know how many entries there
are when the initial switch statement is encountered. [Some C compilers
get around this today by using multiple passes through lexical analysis
and parsing, saving information from prior passes. Then there shouldn't
be a significant difference. But then they aren't LALR(1) any more.]
With languages like BLISS that wasn't a problem because the size was
defined in the initial statement so jump tables could be encoded
directly in a single pass.

jl...@bigpond.net.au

Jan 26, 2006, 9:05:11 PM
Simon,

You could ask the CFooBuffer instance to get the data itself, then pass
it to the CFoo::out routine.
The data must be coming from somewhere, so try this:

CFooBuffer::readFrom(Object source)

If the Object is a socket then handle it; if it's a file, then handle
that.

This approach means:

1. You ask the CFoo class for the CFooBuffer that best suits it (which
was a requirement), and
2. You don't expose anything about the internals of CFoo or CFooBuffer,
and
3. You don't have non-required reads/writes to the CFooBuffer, because
it's the thing that does the reading/writing into its own internal
buffer.

I hope this helps. Keep well,

Rgs, James.

Simon Elliott

Jan 27, 2006, 4:55:52 AM
On 27/01/2006, jl...@bigpond.net.au wrote:

> If you ask the CFooBuffer instance to get the data itself, then pass
> it to the CFoo::out routine.
> The data must be coming from somewhere, so try this:
>
> CFooBuffer::readFrom(Object source)
>
> If the Object is a socket then handle it, if its a file, then handle
> that.

Yes, I can sort of see where you're coming from, and it does make sense
that CFooBuffer does its own reading. But:

readFrom(Object source)

In the real code, "Object" would have to be replaced by a real class or
data type. One alternative approach would be to use

readFrom(int handle)

and assume that CFooBuffer can use read(handle) for whatever handle
type is passed to it. This will work (sort of) for a file, pipe,
socket, serial connection etc. But let's say one day I want to be able
to read data from one sound card and output it onto another. Perhaps
this uses a different API and handle data type: int
GetDataFromSoundCard(sound_card_handle_t handle, ...)

Another approach would be to have CFooBuffer::readFrom() as a virtual
function, which gets overridden by a descendent of CFooBuffer. This
allows us to implement the read in any way we want. But the design is
starting to become more complex and harder to use.
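
A sketch of that virtual-function variant, reusing the hypothetical
sound-card API named above (all names here are illustrative, not an
existing interface):

#include <unistd.h>
#include <vector>

// Hypothetical API and handle type, as in the example above.
typedef int sound_card_handle_t;
int GetDataFromSoundCard(sound_card_handle_t handle,
                         unsigned char* dest, unsigned long max);

class CFooBuffer
{
public:
    explicit CFooBuffer(unsigned long size) : myData(size) {}
    virtual ~CFooBuffer() {}

    // Each descendant knows how to fill the buffer from its own source.
    virtual long readFrom() = 0;

protected:
    std::vector<unsigned char> myData;
};

class CFileBuffer : public CFooBuffer
{
public:
    CFileBuffer(unsigned long size, int handle)
        : CFooBuffer(size), myHandle(handle) {}

    virtual long readFrom()
    {
        return ::read(myHandle, &myData[0], myData.size());
    }

private:
    int myHandle;
};

class CSoundCardBuffer : public CFooBuffer
{
public:
    CSoundCardBuffer(unsigned long size, sound_card_handle_t handle)
        : CFooBuffer(size), myHandle(handle) {}

    virtual long readFrom()
    {
        return GetDataFromSoundCard(myHandle, &myData[0],
                                    (unsigned long)myData.size());
    }

private:
    sound_card_handle_t myHandle;
};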

But I think your suggestion of a factory method which creates the
buffer is a useful approach: we can imagine cases where the buffer has
to be a DMA buffer, or be contiguous in real RAM, or have special
alignment characteristics. In these cases we couldn't leave it up to
the user of the class to allocate the buffer "by hand".

jl...@bigpond.net.au

Jan 27, 2006, 6:10:44 PM
Simon,

I think you have got it sorted out now. The key that you have uncovered
is getting CFooBuffer to have the methods, abstract, virtual or
otherwise. :)

>>But the design is starting to become more complex and harder to use.

OO systems take a little more time to create, but you will make this
time back the first time
you need to modify the system for a new buffer type or fix a bug.

Good luck with your implementation(s).

Rgs, James.

Simon Elliott

Jan 31, 2006, 4:43:44 PM
On 27/01/2006, jl...@bigpond.net.au wrote:

>
> Good luck with your implementation(s).

Thanks!

Simon Elliott

Jan 31, 2006, 4:48:44 PM
On 26/01/2006, H. S. Lahman wrote:

> If I was going to use the one-time buffer approach I think I would do
> it within the CFoo implementation. Maybe something like:

[snip]

Thanks for all your discussions of this.

In the end I decided to go for the above. The buffer, with any special
requirements it may have, is owned by the object. The user buffer is
copied in, and therefore can be any length.

If profiling indicates that this is a problem, I can expose the buffer
pointer and size, and the user can fill the buffer and call the method
which outputs the buffer directly.
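
In other words, something like this (a sketch; the names are
placeholders rather than the real interface):

class Calsa
{
public:
    // Normal path: any length, copied into the internal buffer.
    bool Out(const unsigned char* buff, unsigned long count,
             unsigned long *sent);

    // Optional direct path for later, if profiling justifies it:
    // fill the exposed buffer, then flush it.
    unsigned char* InternalBuffer();
    unsigned long  InternalBufferSize() const;
    bool FlushInternalBuffer(unsigned long count);
};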

However, interestingly, the copy-in method seems to perform better in a
heavily loaded system (i.e. the sound card doesn't under-run), which
seems counter-intuitive to me as it's doing lots of short reads of the
sound file rather than a few longer reads.

jl...@bigpond.net.au

Feb 1, 2006, 2:45:43 PM
Simon,

Thanks for the feedback! I find it rare that people report back after
they have a few answers to go on with.

Keep well,

Rgs, James.
