Is there a more concise way of saying this:
Ptr->operator[](0)
That worked in this simple case. What about this more
complex case:
Number03 = Array01.Array->operator[](0).
Array->operator[](0).
Array->operator[](0).
Array->operator[](0).
Array->operator[](0).
Array->operator[](0).Number;
> Number03 = Array01.Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).Number;
That looks more like an abomination than a use case. Do you think it's
readable despite the syntax?
But if you actually want it like that, just use at() instead of [], if
'Array' is a vector<>*.
With your original example this would not compile, would it?
And with your own type you create the sensible syntax.
= Array->at(0).Array->at(0).Array->at(0).Array->at(0).Array->at(0).Number;
hmmm.
As a bonus, you'll get boundary checks. (May be just slightly slower,
but that's often irrelevant unless you need to index the vector millions
of times per second.)
> That worked in this simple case. What about this more
> complex case:
>
> Number03 = Array01.Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).
> Array->operator[](0).Number;
There is still this way:
= Array03.Array[0][0].Array[0][0].Array[0][0].
  Array[0][0].Array[0][0].Array[0][0].Number;
--
VH
It is a given (like in geometry, it can't change) design
requirement that the depth of reference specified above can
not be changed. I don't want to explain my reasoning because
that would require disclosing proprietary information.
The above syntax compiles and produces the desired
functionality. I just want to be able to specify it more
concisely if possible.
"litb" <lit...@googlemail.com> wrote in message
news:7c30307f-b03c-4ecc...@p11g2000yqe.googlegroups.com...
Yes, that is not too bad, but I want to handle boundary
checks myself. The C++ exception handling infrastructure
most likely has much more overhead than I want to pay for.
Number03 = Array01.Array[0][0].
Array[0][0].
Array[0][0].
Array[0][0].
Array[0][0].
Array[0][0].Number;
I remembered this trick since it was used in the Mac OS Classic era to
get a Pascal-like postfix dereference operator (^ in Pascal, I believe),
because the OS used "handles" almost everywhere. These were
doubly-indirect pointers, so you'd have for example Foo** h. Instead of
(**h).member or (*h)->member, people sometimes did h[0]->member.
Fixed that for you. The only overhead you'll almost definitely get with
at() is the range checks (so a compare and branch around the code that
throws the exception).
But I still agree; the goal was code that did the same thing, not code
that added in range checks.
SOLUTION 1.
Define a function 'in' of 7 arguments, then write something like
Number03 = in( Array01, 0, 0, 0, 0, 0, 0 );
Or, if you want a more general solution, under the assumption that you
cannot change the type definitions for that structure:
SOLUTION 2.
<code>
#include <iostream>
#include <vector>
#include <stddef.h>

#ifdef _MSC_VER
#   pragma warning( disable: 4503 )   // decorated name length exceeded
#endif

//----------------------------------- Silly 6-dimensional vector:

struct NumberHolder
{
    double number;
    NumberHolder( double x ): number( x ) {}
};

template< typename T >
struct VecPtrHolder
{
    T* array;
    VecPtrHolder( T& a ): array( &a ) {}
};

typedef std::vector< NumberHolder >          Vec1D;
typedef std::vector< VecPtrHolder< Vec1D > > Vec2D;
typedef std::vector< VecPtrHolder< Vec2D > > Vec3D;
typedef std::vector< VecPtrHolder< Vec3D > > Vec4D;
typedef std::vector< VecPtrHolder< Vec4D > > Vec5D;
typedef std::vector< VecPtrHolder< Vec5D > > Vec6D;

//----------------------------------- Machinery to index the beast:

template< typename T > struct ElemTypeOf;

template< typename T, typename U > struct ElemTypeOf< std::vector<T, U> >
{
    typedef T Type;
};

template< typename V > struct In;

template< typename V > struct In< VecPtrHolder<V> >
{
    V* myArray;
    In( VecPtrHolder<V>& holder ): myArray( holder.array ) {}
    In< typename ElemTypeOf<V>::Type > operator[]( size_t i )
    {
        return (*myArray)[i];
    }
};

template<>
struct In< VecPtrHolder< std::vector< NumberHolder > > >
{
    typedef std::vector<NumberHolder> Vec;
    Vec* myVecPtr;
    In( VecPtrHolder<Vec>& v ): myVecPtr( v.array ) {}
    NumberHolder& operator[]( size_t i )
    {
        return (*myVecPtr)[i];
    }
};

template< typename V >
In< VecPtrHolder< V > > in( VecPtrHolder<V> h ) { return h; }

//----------------------------------- Example usage:

int main()
{
    using namespace std;

    Vec1D v1( 1, 3.14 );
    Vec2D v2( 1, v1 );
    Vec3D v3( 1, v2 );
    Vec4D v4( 1, v3 );
    Vec5D v5( 1, v4 );
    Vec6D v6( 1, v5 );

    VecPtrHolder< Vec6D > array01( v6 );
    cout << in( array01 )[0][0][0][0][0][0].number << endl;
}
</code>
For this latter solution it's probably a good idea to extend it to support
const-ness.
But perhaps the best idea of all is
SOLUTION 3.
To check whether the program logic is sound. It's a common newbie mistake to end
up with huge and/or multi-dimensional arrays. Generally it reflects some failure
in understanding the problem domain.
Cheers & hth.,
- Alf
--
Due to hosting requirements I need visits to <url: http://alfps.izfree.com/>.
No ads, and there is some C++ stuff! :-) Just going there is good. Linking
to it is even better! Thanks in advance!
> The above syntax compiles and produce the desired
> functionality. I just want to be able to specify it more
> concisely if possible.
--
VH
So it may not be portable?
The architecture that I designed inherently requires this
level of nesting to provide a design that has maximum
robustness, with code that executes as fast as possible.
Changing this requirement reduces either the degree of
robustness or performance.
(1) I do not want to add the cost of function call overhead
or the code bloat of an inline function.
(2) I only really want simpler syntax for exactly and
precisely the original code that I posted.
(3) An additional requirement is that the simpler syntax must
be standard C++.
It does not. I examined the generated assembly language and
ran some benchmark tests.
> Fixed that for you. The only overhead you'll almost definitely get
> with at() is the range checks (so a compare and branch around the
> code that throws the exception).
No, that is not it. I am always doing the range checks
myself, so if C++ is doing them too, it is wasting time.
Also, it is not the overhead of the C++ range checks that is
far too expensive; it is the overhead of the case when an
out-of-bounds exception is actually thrown. That is most
likely far more expensive than a text output call.
> No that is not it. I am always doing the range checks myself, so if C++ is
> doing them too, it is wasting time. Also It is not the overhead of the C++
> range checks that is far too expensive, it is the overhead of the case
> when an out-of-bounds exception is thrown that is too expensive. This is
> most likely far more expensive than a text output call.
I don't understand this. You said that the range check is already done.
Using [] requires the index to be correct; if it isn't, you have UB. So
you can't have any case where at() would throw.
So how can you have any kind of throw overhead here?
Your comments on inline functions in the other message sound fishy too --
if the function gets inlined you will have exactly the same code as with
manual insertion, without any cost in space or time. If the compiler
chooses not to inline it, there is certainly a difference, but that case
is becoming increasingly rare, and there's a good chance the optimizer
handles the calculations better.
Source is normally written for readability; most programmers would
definitely prefer
in( Array01, 0, 0, 0, 0, 0, 0 );
to any of the alternatives.
Can you create it as an inline function, compile both the original and
that version with assembly listing enabled, and then post the result if
you see a difference?
In case the compiler fails to do a sensible job, I'd probably create a
macro...
I don't want to spend a lot of time on this; suffice it to
say that the solution does not meet the spec that it be
semantically identical to the original code. ALL that I am
looking for is more concise syntax for the exact code that I
posted. The double-indexed solution would exactly meet this
spec as long as it is portable.
Do you think that it is portable? I tested it and it did
work on MSVC++.
That means that your original spec requires unchecked vector indexing,
which causes UB when the index is out of boundaries?
Maybe you should rethink your specs.
>> Define a function 'in' of 7 arguments, then write
>> something like
>>
>> Number03 = in( Array01, 0, 0, 0, 0, 0, 0 );
>
> (1) I do not want to add the cost of function call overhead
> or the code bloat of an inline function.
Could you please explain exactly how the inline function
introduces code bloat that your original code doesn't already have?
I really don't want to waste a fraction of a second on
anything besides the problem at hand. I already know that my
design is optimal, thus any and all of these aspects are
off-topic.
I'm sorry, I didn't follow the thread. I thought it was already clear
how that works. Well, basically *p means the same as p[0], because p[x]
is the same as *(p + x). In your case I believe it means exactly the
same thing, because the type of *p must be complete anyway, since you
access its members. But I've found situations in the past where the
equivalence broke:
// --snip--
struct X;
void f(X&);
X *getX();

int main() {
    X *x = getX();
    f(*x);
}
// --snap--
Note how f(x[0]); would not be valid, because it needs the size of X
(to be able to calculate the right "offset"), which is not known
because X wasn't defined yet.
It's portable to any standard C++ (and C, if you're not using operator
overloads) compiler. The expression
E1 [E2]
is defined to be equivalent to
*((E1) + (E2))
as long as there is no operator [] overload. In this case, E1 is of type
std::vector<T>* and E2 is an int, therefore [] can't be overloaded.
So we can transform the expression in an incremental way:
std::vector<T>* v = ...
v->operator[](0); // what we start with
(*v).operator[](0); // convert -> to equivalent dereference
(*v)[0]; // now we don't need explicit operator call
(*(v+0))[0]; // adding zero before dereferencing doesn't matter
(v[0])[0]; // reverse E1 E2 transformation shown above
v[0][0]; // final expression
I think he also meant that it performs identically (perhaps even
generates identical code). The [0] as a postfix dereference operator
should achieve this on almost any compiler, and portably (as I detailed
in another post).
That is great, thanks.
That is helpful, thanks. I never bothered to learn all of
the pointer syntax because all of this stuff seemed to make
things much more complicated than necessary, thus a
potential source for error that could best be avoided.
I coded with "C" for fifteen years without ever once
dynamically allocating memory. If there is no memory
allocation then memory leaks are impossible.
Currently all of my C++ code only dynamically allocates
memory through STL container member functions. Almost all of
this is through std::vector, thus I can use integer
subscripts instead of pointers.
--
VH
I could explain how it makes perfect sense, but then I
would have to disclose proprietary information and lose my
competitive edge. Under almost all circumstances the above
construct does not make much sense. I found one where it
does.
When one is attempting to make an extremely robust system,
with the resulting performance as fast as possible, while
leveraging existing technologies as much as possible (to
minimize development costs), sometimes this results in very
weird-looking code.
What about something like
NumberType& get_n(Array* a, size_t n) {
    while (n-- > 0)
        a = (*a)[0];
    return a->Number;
}
If you happen to have problems with the raw number, how about
// analog
struct anal {
    size_t n;
    anal(): n() {}
    anal& operator()() { n++; return *this; }
    operator size_t() { return n; }
};

get_n(Array, anal()()()()());
You could easily expand it to include array dimensions into the parens
and stuff:
struct anal {
    Array *a;
    anal(Array *a): a(a) {}
    anal& operator()(ptrdiff_t p) { a = (*a)[p]; return *this; }
};

anal(Array)(0)(0)(1)(0).a->Number;
You get the idea. There are many ways. Make it a template and
reusable...
double Number03;
Number03 = 12345678.1234;
"litb" <lit...@googlemail.com> wrote in message
news:c7d0a447-16d5-4eb3...@c11g2000yqj.googlegroups.com...
> As a bonus, you'll get boundary checks. (May be just slightly
> slower, but that's often irrelevant unless you need to index
> the vector millions of times per second.)
In any decent implementation, you'll get the bounds checking in
both cases. The difference is that with at(), it's guaranteed,
but it's also guaranteed that you don't get the usually desired
behavior (an assertion failure).
--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
"James Kanze" <james...@gmail.com> wrote in message
news:a9247e2c-a8e8-44c1...@h20g2000yqj.googlegroups.com...
Look up 'Checked Iterators'.
James' comment was with respect to what most ("decent") implementations do. The
standard does not mandate what an implementation should (not) provide
as a QoI feature. E.g., a conforming implementation may provide
Garbage Collection.
- Anand
"most likely" would indicate that you have not checked but you are
guessing on prejudices.
>It does not I examined the generated assembly language, and
>ran some benchmark testing.
Ah, so you did some testing. But did you benchmark your application,
or a little irrelevant code sample comparing exclusively one throw to
one if()? If the latter, obviously a try-throw-catch is a lot more
expensive than an if. However, typical exception-using code uses far
fewer try-throw-catch constructs than typical return-code-based code.
You should not compare:
for (int i = 0; i < 1000000; ++i)
{
    if (foo())
        ;
}
to
for (int i = 0; i < 1000000; ++i)
{
    try
    {
        foo();
    }
    catch (...) {}
}
but to:
try
{
    for (int i = 0; i < 1000000; ++i)
        foo();
}
catch (...) {}
>> Fixed that for you. The only overhead you'll almost definitely get
>> with at() is the range checks (so a compare and branch around the
>> code that throws the exception).
>
>No that is not it. I am always doing the range checks
>myself, so if C++ is doing them too, it is wasting time.
>Also It is not the overhead of the C++ range checks that is
>far too expensive, it is the overhead of the case when an
>out-of-bounds exception is thrown that is too expensive.
>This is most likely far more expensive than a text output
>call.
But the exception due to an out-of-bound situation should be
exceptional. Error path cost should not be your main priority since
it should only happen exceptionally rarely when something went
exceptionally wrong. If it is a "normal" situation that an
out-of-bound exception gets thrown, then you have a design issue.
A boundary violation of vector::operator[] yields undefined behavior. A
good implementation will at the very least do a bounds check when
debugging is enabled, and should have a way to disable the check in a
sufficiently optimized release build. If the absence of a check is of
utmost importance, at the very least you should do something like
(&front())[n], perhaps caching &front() if using it within a loop.
And please stop top-posting and quoting signatures.