extend_cast<T> : extending interfaces safely without the use of free functions


Tahsin Mazumdar

Jul 25, 2017, 8:47:38 PM
to ISO C++ Standard - Future Proposals
Currently there is no easy way to extend class interfaces and still remain compatible with instances of the older interface. In certain cases, this might not be all that difficult to do if we ensure that the binary layout does not change across interfaces.

Let's say, as an arbitrary example, that we want to extend the std::vector interface; we could do the following:

#include <numeric>
#include <utility>
#include <vector>

template <typename T>
class extended_vector : public std::vector<T>
{
public:
    template <typename... Args>
    extended_vector(Args&&... args) : std::vector<T>(std::forward<Args>(args)...) {}

    auto accumulate(T init)
    {
        return std::accumulate(this->begin(), this->end(), init);
    }
};

Note that since we have not added any member variables to extended_vector, the binary layout of the two classes should be identical. Thus, at least in concept, I should be able to use instances of either class interchangeably with the other's interface. However this is not the case: if I only have an instance of std::vector<T>, there is no standard way to "treat" that instance like an extended_vector<T> and take advantage of its extended interface.

std::vector<T> some_external_function_returns_vector();

auto a = some_external_function_returns_vector();
// ^ Can't use extended_vector<T>::accumulate on this
// Instead we have to use a free function:
std::accumulate(begin(a), end(a), 2);

The standard library relies heavily on this approach for extending interfaces (i.e. writing free functions). But this is not an ideal solution in my opinion, as it breaks encapsulation to a certain degree - related data and functions should be coupled together. std::accumulate operates primarily on state defined inside the vector; keeping them separate forces us to pass in every member needed individually (e.g. begin() and end()) and results in cumbersome, longer syntax, as we see with the <algorithm> library. It also discourages the use of classes and inheritance entirely when we are looking to scale up. From a design perspective, it forces developers to adopt a mix of C style (free functions) and object-oriented C++ style (classes and member functions) for a relatively mundane reason.

There might be another solution: simply cast the std::vector to an extended_vector. Though this also has its problems currently:

std::vector<int> A { 1, 2, 3, 4 };
auto const& B = static_cast<extended_vector<int>&>(A);
B.accumulate(2);



This code compiles and works as expected. However, this falls under the umbrella of undefined behavior, and I would hardly use it in production. Casting in this case might be safe because the binary layout of the two classes is identical, but if someone were to add any state or member variables to extended_vector in the future, the code would still compile, only to fail in unexpected ways down the line:


template <typename T>
class bad_vector : public std::vector<T>
{
    int a;  // <-- additional state: will no longer have the same layout as std::vector
public:
    template <typename... Args>
    bad_vector(Args&&... args) : std::vector<T>(std::forward<Args>(args)...) {}

    auto accumulate(T init)
    {
        return std::accumulate(this->begin(), this->end(), a + init);
    }
};

std::vector<int> A { 1, 2, 3, 4 };
auto const& B = static_cast<bad_vector<int>&>(A);  // still compiles
B.accumulate(2);   // UB: Bad!



To get around this problem, I wonder if it might be possible to introduce a new form of cast that simply does a compile-time check to verify that the binary layout is the same. Essentially, it would give a compile error if the binary layouts of the classes are not equivalent, and only compile if we are certain that the downcast conversion is safe. It could be as simple as checking that the extended interface does not define any additional state. If there is no additional state - only member functions (or possibly statics) - any added functionality would live outside of the instance, giving us no reason to expect undefined behavior if we were to cast it to the newer interface:


std::vector<int> A { 1, 2, 3, 4 };
auto& B = extend_cast<extended_vector<int>&>(A);  // OK; same binary layout
B.accumulate(2);  // should be safe to do this now

auto& C = extend_cast<bad_vector<int>&>(A);  // compile error; bad_vector is not binary compatible with std::vector
C.accumulate(2);  // not safe, but hopefully will never compile


This feature would allow us to extend interfaces in C++ style, enabling better encapsulation and readability, and would encourage the use of classes and inheritance (and, as a result, better code reuse through access to 'protected' members of the base, which we would not get through free functions).

Things might get a bit more complicated when virtual functions come into play - I'm not entirely certain this is still guaranteed to work if virtual methods or overrides are added. But even if it only applies to non-polymorphic classes, I think this might be useful, since it's pretty common practice to extend external interfaces, and having to write free functions all the time is less than ideal.

[Apologies in advance if something like this has already been proposed or is in the works; I'm not aware of anything as such at the time of writing.]

Thiago Macieira

Jul 25, 2017, 9:37:19 PM
to std-pr...@isocpp.org
On Tuesday, 25 July 2017 17:47:38 PDT Tahsin Mazumdar wrote:
> Currently there is no easy way to extend class interfaces and still remain
> compatible with instances of the older interface. In certain cases, this
> might not be all that difficult to do if we ensure that the binary layout
> does not change across interfaces.
>
> Let's say as an arbitrary example we want to extend the std::vector
> interface, we could do the following:
>
> template <typename T>
> class extended_vector : public std::vector<T>
> {
> public:
> template <typename... Args>
> extended_vector(Args&&... args): std::vector<T>(std::forward<Args>(args
> )...) {}
>
> auto accumulate(T init)
> {
> return std::accumulate(this->begin(), this->end(), init);
> }
>
> };
>
> Note that since we have not added any member variables to extended_vector,
> the binary layout of the two classes should be identical.

The standard does not guarantee that. So from this point on, let's make the
assumption that only standard-layout classes can be layout identical.

std::vector may or may not be standard-layout.
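
You can ask your own implementation, of course - a minimal check, with the caveat that the answer is implementation-specific and says nothing about any other standard library:

#include <iostream>
#include <type_traits>
#include <vector>

int main()
{
    // Whether this prints true or false is up to the implementation;
    // the standard does not require std::vector to be standard-layout.
    std::cout << std::boolalpha
              << std::is_standard_layout_v<std::vector<int>> << '\n';
}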

> Thus, at least in
> concept, I should be able to use instances of either class interchangeably
> with the other's interface. However this is not the case: if I only have an
> instance of std::vector<T>, there is no standard way to "treat" that
> instance like an extended_vector<T> and take advantage of its extended
> interface.

Correct, because it isn't of that type. Casting to a type that the object
isn't is ill-formed.

> There might be another solution, simply to cast the std::vector as an
> extended_vector. Though this also has its problems currently:
>
> std::vector<int> A { 1,2,3,4};
> auto const& B = static_cast<extended_vector<int>&>(A);
> B.accumulate(2);
>
> This code compiles and works as expected.

Actually, "as expected" is a matter of opinion. Since it is incorrect, someone
could have a different opinion than yours.

> To get around this problem I wonder if it might be possible to introduce a
> new form of cast that simply does a compile-time check to verify that the
> binary layout is the same. Essentially it could give a compile-error if the
> binary layouts of the classes are not equivalent and only compile if we are
> certain that the downcast conversion is safe. It could be as simple as
> checking that the extended interface does not define any additional state.
> If there is no additional state, only member functions (or possibly
> statics), any added functionality would live outside of the instance giving
> us no reason to expect any undefined behavior if we were to cast it to the
> newer interface:

Again, since the standard does not talk about the layout of a class except for
standard-layout classes, this new form of casting would only be allowed for
standard-layout classes too. Applying it to other types could be allowed as an
extension by compilers, but you can't use it portably then.

And you can't use it in std::vector because you don't know whether it's
standard layout or not.

> This feature would allow us to extend interfaces in C++ style, enabling
> better encapsulation and readability and encourage the use of classes and
> inheritance (and as a result better code reuse through access to
> 'protected' members of the base, which we would not get through free
> functions).

I'm not sure we want to encourage that. This kind of hack is just plain UB
today.

--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center

Nicol Bolas

Jul 25, 2017, 9:53:17 PM
to ISO C++ Standard - Future Proposals
On Tuesday, July 25, 2017 at 8:47:38 PM UTC-4, Tahsin Mazumdar wrote:
The standard library relies heavily on this approach for extending interfaces (i.e. writing free functions). But this is not an ideal solution in my opinion, as it breaks encapsulation to a certain degree - related data and functions should be coupled together. std::accumulate operates primarily on state defined inside the vector; keeping them separate forces us to pass in every member needed individually (e.g. begin() and end()) and results in cumbersome, longer syntax, as we see with the <algorithm> library. It also discourages the use of classes and inheritance entirely when we are looking to scale up. From a design perspective, it forces developers to adopt a mix of C style (free functions) and object-oriented C++ style (classes and member functions) for a relatively mundane reason.

This is the foundation of your entire argument. You want this feature because you believe that what you said is the way C++ code ought to be done.

But it's not workable.

Let's take your feature at face value. Let's say we implement it and make it work as you want it to. And let's say we define some `std::algorithm_vector`, which implements every C++ function in the <algorithm> header as a member function.

So, first question: how do I accumulate only part of a vector?
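
(For illustration, using the hypothetical extended_vector from the original post: the iterator-based free function expresses a sub-range directly, while a member accumulate(init) cannot without growing its interface.)

#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    // Summing only the first half is trivial with iterators:
    int half = std::accumulate(v.begin(), v.begin() + v.size() / 2, 0);  // == 6
    // extended_vector<int>::accumulate(0) can only ever sum the whole range.
    return half == 6 ? 0 : 1;
}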

Second question: what happens if I want to create new algorithms - algorithms that ought to be able to work with any container? You wouldn't be able to use them as `std::algorithm_vector` members, since they won't be members of that type. You'll have to either create some `my::new_algorithm_vector` type or just use non-member functions.

So even if we have this feature, you're still going to need those non-member functions. And if that's the case... what's the point of this feature?

These are the kinds of problems you come upon when you start treating member functions and inheritance as "The One True Way", rather than "useful where it's useful." There are many tools in C++ for extending the capability of types; you're doing yourself and the language a disservice by trying to fit everything into one little box.

So before this can be entertained further, I say that it needs better motivations than "C++ isn't OOP enough for me."

Nevin Liber

Jul 25, 2017, 10:31:52 PM
to std-pr...@isocpp.org
On Tue, Jul 25, 2017 at 7:47 PM, Tahsin Mazumdar <tahsin...@gmail.com> wrote:

To get around this problem I wonder if it might be possible to introduce a new form of cast that simply does a compile-time check to verify that the binary layout is the same.

The problem is that binary layout isn't sufficient.  Your derived class may have a constructor that sets up a class invariant.  For instance:

struct B {
    explicit B(int i_ = 0) : i{i_} {}
    // ...
private:
    const int i;
};

struct D : B {
    D() : B{1} {}
    // ...
};

Casting an arbitrary instance of B to D would be wrong, since any and all functions involving D can assume that i == 1.
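
A minimal sketch of that failure, assuming the proposed cast only checked layout (extend_cast and the member call are hypothetical, just to show the problem):

B b;                              // b.i == 0
auto& d = extend_cast<D&>(b);     // same size, no added members - a layout-only check would pass
d.do_something();                 // any member of D may assume i == 1, which is false for this object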
--
 Nevin ":-)" Liber  <mailto:ne...@eviloverlord.com>  +1-847-691-1404

Matthew Woehlke

Jul 31, 2017, 5:55:52 PM
to std-pr...@isocpp.org, Tahsin Mazumdar
On 2017-07-25 20:47, Tahsin Mazumdar wrote:
> Currently there is no easy way to extend class interfaces and still remain
> compatible with instances of the older interface. In certain cases, this
> might not be all that difficult to do if we ensure that the binary layout
> does not change across interfaces.

Despite the nay-sayers, I think this feature would be useful. I'm pretty
sure I've run into this sort of thing, possibly even implemented the UB
mechanism. I think the issues that were raised could be solved by the
extension class being explicitly that: some new class type that can't
exist on its own, but to which a "base" class can be cast.

The trouble is... I'm pretty sure the times I've needed this feature are
because I needed to break encapsulation of a class for some reason (for
example, to access protected members of a class that was created outside my
control). Given that breaking encapsulation is the most compelling use
case, I'm not sure that's something we want to standardize...

For the cases where it's merely convenient, I agree with the other
posters; I don't find mere convenience to be a sufficiently compelling
motivation.

--
Matthew

kocz...@gmail.com

Jul 31, 2017, 7:59:39 PM
to ISO C++ Standard - Future Proposals
While I agree that his reasoning wasn't quite correct, I like the outcome. Stuff in <algorithm> is there for a reason, but extend_cast<T> would be quite useful in some cases. Let's say that I'm implementing a class `Message` whose only purpose, besides storing a string (which can be done with `std::string`), is to split it into a `std::vector` of `std::array<char, 256>` so that I could easily send it with Boost.Interprocess (just an example, could be anything else). The ideal solution would be to derive from `std::string` and add that method. The code would be really easy to maintain, I could keep all of `std::string`'s features, constructors etc., just great. The only, sadly really big, problem is casting. While I could just write a function to safely convert those, it would result in more code, which in turn would make my code less maintainable.
extend_cast<T> would finally enable programmers to write safe and maintainable code while inheriting from classes in the `std::` namespace, which IMHO is by now a really bad habit without language features like the one discussed.
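
To make it concrete, a sketch of that Message class (the name and the 256-byte block size are just illustrative; the split itself only needs std::string's public interface):

#include <array>
#include <cstddef>
#include <string>
#include <vector>

class Message : public std::string
{
public:
    using std::string::string;   // keep all of std::string's constructors

    std::vector<std::array<char, 256>> split() const
    {
        std::vector<std::array<char, 256>> blocks;
        for (std::size_t pos = 0; pos < size(); pos += 256) {
            std::array<char, 256> block{};        // zero-filled
            copy(block.data(), 256, pos);         // std::string::copy, at most 256 chars
            blocks.push_back(block);
        }
        return blocks;
    }
};

// The problem described above: given a std::string produced elsewhere, there is
// no safe way today to call split() on it without first copying it into a Message.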

Michael Hava

Aug 1, 2017, 5:35:36 AM
to ISO C++ Standard - Future Proposals, kocz...@gmail.com


On Tuesday, August 1, 2017 at 1:59:39 AM UTC+2, kocz...@gmail.com wrote:
Let's say that I'm implementing a class `Message` whose only purpose, besides storing a string (which can be done with `std::string`), is to split it into a `std::vector` of `std::array<char, 256>` so that I could easily send it with Boost.Interprocess (just an example, could be anything else).

I honestly don't see why this should be implemented as a member function, when a free function dependent only on the public interface of std::string would be sufficient...
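
For comparison, the same split as a free function, depending only on std::string's public interface (the name and block size are illustrative):

#include <array>
#include <cstddef>
#include <string>
#include <vector>

std::vector<std::array<char, 256>> split_into_blocks(const std::string& s)
{
    std::vector<std::array<char, 256>> blocks;
    for (std::size_t pos = 0; pos < s.size(); pos += 256) {
        std::array<char, 256> block{};     // zero-filled
        s.copy(block.data(), 256, pos);    // copies at most 256 characters
        blocks.push_back(block);
    }
    return blocks;
}

// Works on any std::string - no derived type and no cast required.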

p_ha...@wargaming.net

Aug 1, 2017, 8:03:31 AM
to ISO C++ Standard - Future Proposals
On Wednesday, July 26, 2017 at 10:47:38 AM UTC+10, Tahsin Mazumdar wrote:
Currently there is no easy way to extend class interfaces and still remain compatible with instances of the older interface. In certain cases, this might not be all that difficult to do if we ensure that the binary layout does not change across interfaces.

 

I feel that the problem being solved (not liking the "free-function" generic programming style currently encouraged by the STL) is something like an opt-in version of half of "universal call syntax" (a.b(...) => b(a, ...)), and/or a specific use case for operator., where the wrapper type would have no state other than the wrapped type.

On the other hand, the suggested solution (extend_cast) seems feasible now using type traits, simply by having the cast template function static_assert that the target inherits from the input, is standard_layout, and is the same size. You could probably apply the static reflection work to be absolutely sure that someone isn't casting down to a type that stows something in the base class's padding, or otherwise messes with alignment to fool the size check.
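
Something along these lines - the name and exact set of checks are just a sketch, and the final reinterpret_cast is still not blessed by the standard; the asserts merely reject the obviously unsafe cases:

#include <type_traits>

template <typename Extended, typename Base>
Extended& extend_cast(Base& base)
{
    static_assert(std::is_base_of_v<Base, Extended>,
                  "target must derive from the source type");
    static_assert(sizeof(Extended) == sizeof(Base),
                  "target must not add any member data");
    static_assert(std::is_standard_layout_v<Base> &&
                  std::is_standard_layout_v<Extended>,
                  "layout is only specified for standard-layout types");
    return reinterpret_cast<Extended&>(base);
}

// Usage: auto& b = extend_cast<extended_vector<int>>(some_std_vector);

(Note that the standard-layout assert will typically reject std::vector itself, which is exactly the earlier point about not knowing its layout guarantees.)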


This feature would allow us to extend interfaces in C++ style, enabling better encapsulation and readability and encourage the use of classes and inheritance (and as a result better code reuse through access to 'protected' members of the base, which we would not get through free functions). 

I think that 'protected' members might be intended precisely to disallow things like this, so I'm not convinced 'better' is the right term here. Certainly 'more' code-reuse, if we're talking about protected member functions; 'less' code-reuse (and more risk to class invariants) if we're talking about protected data members.

In the end, and it's a philosophical point, "encourage the use of classes and inheritance" reads to me like "encourage the use of the letter a". It's useful/powerful (or even "core", if you prefer), but you will be sufficiently tooled for your development work irrespective of your level of use in your code.

Ryan Nicholl

Aug 23, 2017, 7:52:47 PM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
I don't like the universal syntax. I think it adds too much complexity in finding where methods are defined. What happens when foo.a() exists but foo::a() does not? This is bound to be very confusing.

The extend-cast idea is very useful, but I think it needs some better syntax. Here's my proposal (basically):

template <typename T, typename Alloc = std::allocator<T>>
class extended_vector : public &std::vector<T, Alloc>
{
    ...
public:
    void extra_member_function() { ... }
};

std::vector<int> vector_a;
auto&& vector_extended = extend_cast<extended_vector<int>>(vector_a);

By inheriting from a reference we gain some advantages: it ensures that there is no conversion between underlying data types (since it's just a reference to the same data), but we get extra member functions and the ability to use protected members of the base class.
Inheriting from an extended type could also be done using value semantics, yet the underlying type would be a reference. This could be particularly useful for overriding some member functions without proxying the entire class.
Also, this could be useful for adding additional "phantom" or "ghost" member variables. The ability to define a member variable on the "extended" type would be useful (they could be initialized in the constructor of the class). Of course, you wouldn't be able to take a reference to a temporary, which means you'd need the && syntax - which could cause some issues when forwarding? I'd say maybe just add an "auto(ghost)" keyword for this type of variable.

torto...@gmail.com

Aug 25, 2017, 9:07:41 PM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
What is wrong with doing this by constructing the derived class from the base and relying on the compiler to try and make it 
as efficient as an extend_cast would be?

template <typename T, typename Alloc = std::allocator<T>>
class extended_vector : public std::vector<T, Alloc>
{
    ...
public:
    extended_vector(std::vector<T, Alloc>& v) : std::vector<T, Alloc>(v) { ... }

    void extra_member_function() { ... }
};


std::vector<int> vector_a;
extended_vector<int>(vector_a).extra_member_function(); //elide copy in constructor here?


Actually that might work better with a move constructor

  extended_vector(std::vector<T, Alloc>&& v) : std::vector<T, Alloc>(std::move(v)) { ... }

I think that is the correct (but not efficient) way to code it now. Maybe the 'extend_cast' part is actually 
something needed to replace the copy or move constructor with a 'do nothing / placement new' constructor?

torto...@gmail.com

Aug 26, 2017, 4:08:54 AM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
Of course the extended_vector is a different object from vector_a. So you also have to 'copy' or 'move' it back and get that elided if you need to
switch back at any point.

The alternative is using perfect forwarding for all methods. That is something that would be useful for all kinds of wrapper types.
I think that can be done with pure library functionality once reflection is available. At the moment it's a royal pain.

Ryan Nicholl

Dec 14, 2017, 3:40:58 PM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
Well, allowing inheritance from a reference would give the following advantages:

First, we can get the same advantages of extend cast, because the compiler could see that they have the same address, and thus the compiler doesn't need to create a separate variable.

Extend cast has a couple of issues. First, if the compiler ONLY checks the binary layout, then you could convert a vector to an extended string by accident.

If the compiler instead checks that the base class is the same, we basically have a cast that allows us to cast vector A to B, only the memory of A is destroyed?
Or is it? What actually happens to the vector object? Is it now a vector? or is it something else? What is the type of that object now?
What happens if you do:
l1:
vector<foo> a = ...;
l2:
extended_vector<foo>& b = extend_cast<extended_vector<foo>&>(a);
b.bar();
goto l1;

So what happens after the goto statement? Do we call the destructor of b and not the destructor of a? What happens if we do "goto l2;" instead? How do we "unextend" the class?

Reinterpreting the data of the class as though it were another type is done with reinterpret_cast; I'm not sure what extend_cast is supposed to do other than make that defined behavior. But IIRC, reinterpret_cast has defined behavior for certain POD types.

What it seems to me that you want is a type-safe version of reinterpret_cast. But that raises some issues: we can't check "binary compatibility", only whether the POD layout is the same or the types share some base in some way.

Let's say we only check the binary layout; then you could cast a std::string to an extended_vector<char>, because both look something like:
struct { size_t; size_t; char*; };
But they are not compatible (trailing 0s, SSO could be different, etc.). So instead of casting based on binary layout, let's say you cast based upon the base class.
Now what do we do when we try to "extend_cast" something that wasn't meant to be extended? For example, having your iterator inherit from your const_iterator: you could then extend_cast a const_iterator to an iterator. Doesn't sound legit to me. That's why I suggest a different syntax. I'm going to improve on it a bit here.

class foo { ... };

class bar : public &foo
{
public:
    bar(...) {  }
};

Now the only rule is that bar must initialize the reference to foo. This provides a transparent forwarding of the public methods of foo to a user of the bar class, and also provides any additional methods of bar.
It's also possible that ~bar could "convert back" a temporary change. 

While this is mostly possible using proxy objects, you have to manually forward every member function. Inheriting from a reference solves the problem in a neater way.
It would also have other uses, consider these theoretical example wrappers around a C string:
class local_c_string : public &c_string
{
    mutable size_t len;
    mutable bool len_cached;
public:
    local_c_string(c_string& i) : c_string(i), len(0), len_cached(false) {}

    size_t size() const
    {
        if (len_cached) return len;
        len = strlen();
        len_cached = true;
        return len;
    }

    local_c_string& operator=(c_string& other)
    {
        static_cast<c_string&>(*this) = other;
        len_cached = false;
        return *this;
    }
};

The compiler would be able to detect when the references are equal and thus optimize out the declaration, and it doesn't get into any of the wackyness of extend cast.

To avoid the pitfall of thinking that the derived class actually inherits from the reference class, maybe the syntax should be different?

typename &local_c_string localstr(str);

This would help distinguish between objects that are references and objects that contain references. The typename is required because I think the grammar might be ambiguous in templates otherwise.
But I'm not sure if that's an optimal solution, since it would cause reference objects to be treated differently from objects that contain references. To make sure it evaluates correctly using templates, we could say that the full type of a reference object is "&foo" rather than the class name "foo". So then:
class bar : public &baz
{
...
};

template <typename T>
class foo
{
...
public:
    using value_type = T;
};

foo<typename& bar> a;
// Here, a::value_type would be "&bar" rather than "bar" (not to be confused with bar&)

bar b;
// This line would cause a compilation error. bar is not a typename.
&bar b;
// Also an error because it's not clear whether & is the unary operator & or a typename
typename& bar b(a);
// Not an error, typename& used.


This prevents them from being accidentally used where a normal object is expected, but retains the ability to use them in templates without requiring special syntax.
If there is a case where you want the default operator= etc. to function like it would for an object containing a reference rather than a reference object, you could apply the static/inline keyword:

class foo : static &bar
{
...
};
// Functions like an object containing a reference; the type name is "foo"

class baz : inline &bar
{
...
};
// Functions like a reference instead of an object; the type name is "&baz"

I'm not sure whether the default should be static or inline though. Maybe always require it explicitly?

adrian....@gmail.com

Dec 18, 2017, 9:35:30 AM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
I have used a forward macro to generate the boilerplate to deal with this.  I think it was like:

#define FWD_FN(from, to)                    \
  template <typename... Ts>                 \
  auto from(Ts&&... args) {                 \
    return to(std::forward<Ts>(args)...);   \
  }
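
Hypothetical usage, forwarding a couple of members of a wrapped std::vector (the wrapper class and the chosen members are just an example):

#include <utility>
#include <vector>

#define FWD_FN(from, to)                    \
  template <typename... Ts>                 \
  auto from(Ts&&... args) {                 \
    return to(std::forward<Ts>(args)...);   \
  }

template <typename T>
class vector_wrapper
{
    std::vector<T> v_;
public:
    FWD_FN(push_back, v_.push_back)   // generates a forwarding push_back(...)
    FWD_FN(size, v_.size)             // generates a forwarding size()
};

int main()
{
    vector_wrapper<int> w;
    w.push_back(42);
    return static_cast<int>(w.size()) - 1;   // 0
}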


adrian....@gmail.com

Dec 18, 2017, 10:18:28 AM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
On Thursday, December 14, 2017 at 3:40:58 PM UTC-5, Ryan Nicholl wrote:
Well, allowing inheritance from a reference would give the following advantages:

First, we can get the same advantages of extend cast, because the compiler could see that they have the same address, and thus the compiler doesn't need to create a separate variable.

Extend cast has a couple of issues. First, if the compiler ONLY checks the binary layout, then you could convert a vector to an extended string by accident.

If the compiler instead checks that the base class is the same, we basically have a cast that allows us to cast vector A to B, only the memory of A is destroyed?
Or is it? What actually happens to the vector object? Is it now a vector? or is it something else? What is the type of that object now?
What happens if you do:
l1:
vector<foo> a = ...;
l2:
extended_vector<foo>& b = extend_cast<extended_vector<foo>&>(a);
b.bar();
goto l1;

So what happens after the goto statement? Do we call the destructor of b and not the destructor of a? What happens if we do "goto l2;" instead? How do we "unextend" the class?

References don't use destructors.  They don't have ownership.
I like the idea of inheriting from a reference type. This would allow extending a type safely, and even allow adding member variables (though without any members, it could be considered just a retyped alias object). As it is inheriting from a reference, it has no ownership of the underlying referenced object, so its destruction would leave the underlying object intact.

I'm going to have to look at your examples again, because I think our trains of thought diverged. I'm thinking that the new object derived from a reference could be used just like a regular object and doesn't really need any special handling. However, the developer would have to be aware of the decoupled lifespans.

Also, not sure why you put the & before the typename. Why not after, as it seems more natural?
#include <iostream>

class X
{
public:
    ~X() { std::cout << "~X()" << std::endl; }
};

class Y : public X&
{
public:
    Y() = delete;      // This can never be made
    Y(X& x) : X&(x) {} // Required signature.  This would be the default impl though.
    ~Y() { std::cout << "~Y()" << std::endl; }
};

class Z : public X&&
{
public:
    Z() = delete;        // This can never be made
    Z(X&& x) : X&&(x) {} // Required signature.  This would be the default impl though.
    ~Z() { std::cout << "~Z()" << std::endl; }
};

int main()
{
    X x;
    {
        Y y0{x};    // Y's destructor is called, but X's is not
        Y y1{X()};  // Invalid
    }

    {
        Z z0{x};    // Invalid
        Z z1{X()};  // Z's destructor is called, but X's is not
    }
    return 0; // X's destructor is called
}


Haven't completely thought through if there should be a difference between an lvalue and rvalue reference inheritance though.  Unnecessary?








Ryan Nicholl

Dec 30, 2017, 7:55:07 PM
to ISO C++ Standard - Future Proposals, p_ha...@wargaming.net
I don't think you could inherit from an rvalue reference, since "named" values are lvalues - your "rvalue" would then be an lvalue.

That's not to say you couldn't have "rvalue_proxy" and "lvalue_proxy" classes with different overloaded constructors. (and corresponding usages of delete).

I'm suggesting the usage of the static/inline specifiers in order to change the way operator= is defaulted. A "static" reference inheritance would cause operator= to reassign new references (like reassigning std::reference_wrapper), and an "inline" inheritance would assign the referenced values. Perhaps the distinction made by static is unsafe, since it could cause a reference to the base class to act very differently from the derived, but I don't really see a way to make it behave more like a struct { foo & a; } would, other than this.
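
For comparison, the two defaults correspond roughly to rebinding a std::reference_wrapper versus assigning through a plain reference (just an analogy, not the proposed feature):

#include <functional>

int main()
{
    int a = 1, b = 2;

    std::reference_wrapper<int> r(a);
    r = b;      // rebinds r to refer to b; a is unchanged ("static"-like default)

    int& s = a;
    s = b;      // assigns through the reference; a becomes 2 ("inline"-like default)

    return a;   // 2
}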

I thought & foo would be easier to parse than foo &, but I don't see any reason why inheriting from foo & would be a problem.