You said "I would like my smart references to have "reference" semantics; in other words, I do not want to have to enter the realm of C++ pointers; I do not want to have to do this:"But both of your examples seem to require - and rationalise - such. What am I missing? OK, so they're more pointer _linguistics_ than semantics, but it's still pointer syntax.
Conversely, the point of the recent smart reference proposals, AFAICT - other than making writing code a lot easier in supportive situations - is that they could be transparently incorporated into generic code that would work with plain reference.
To require pointer syntax, I think, defeats this and makes such proposals moot, since we could implement our own pretty trivially where required - and indeed, often do. What operator dot et al. propose is a way to save us that work by adding the required scaffolding to the language itself.
The ironic part is - operator.() is proposed exactly because *o = 42 (o being optional) is cumbersome. In the dot proposal there is even an example of implementing optional on top of the dot operator!
In any case, operator dot is "enabling technology" - its uses exceed implementing references; it is just that this particular proposal focuses far too much on references. There are other proposals which focus on other uses.
I'm not sure what you mean by "supportive situations". And I would like to see some real examples of where it is useful to be able to write generic code using reference syntax (operator dot). I'm not denying such examples exist, I just can't think of any myself. And it's possible to write generic code that operates on all wrapper types.
But it's not. It's about handles with pointer syntax.
A smart reference would be something that could be used as-if it was the referred object, but while applying other checks/validation/proxying/younameit on top of it. It should be substitutable into template code that expects normal references - and hence never uses * or -> - without said code being any the wiser.
What you're proposing isn't a smart reference. It's an even dumber pointer.
I meant "relevant situations". Such as those where I currently have to write my own handle/proxy classes, which in 99% of cases just reimplement/forward countless trivial operators/methods. With a transparent smart reference, myself and other programmers in analogous situations - who I don't think are as uncommon as you imply - would not need to do that. We would only override the operations where we needed special behaviour, and let operator.(), or equivalent, forward anything that we didn't explicitly 'override'.
As I said, I am not making a case against operator dot overloading. My mistake was conflating C++ references and references in the general sense (what I am calling "views"). Smart references are types that mimic C++ references. "Views" are types that model non-owning references in the general sense. I believe this proposal is orthogonal to any operator dot overloading proposal, as not all "references" have to look like C++ references.
On Wednesday, 12 October 2016 09:09:20 UTC+8, joseph....@gmail.com wrote:
I don't like the pointer semantics of not_null. Unnatural semantics hint at incorrect design. not_null is great for making existing and legacy code safer, but it is not the ideal design for a "view" type.
One of the main differences between not_null and view is that not_null detects null pointer errors at run time, while view catches them at compile time.
void foo(int* p) {
not_null<int*> nn = p; // run-time exception
view<int> v = p; // compile-time error
}
And yes, I know that the potential error of an assignment of int* to not_null<int*> can be caught by a static analysis tool, but A) this requires a static analysis tool, and B) not_null and view are complementary. Let me rewrite my example:
void foo(not_null<int*> p) { // legacy interface made safer with not_null
view<int> v = *p; // modern implementation (static analysis: a-okay)
...
}
But why would you use `view<int>` inside of `foo` at all? There would be no objective need, unless you're just allergic to using `->`.
One of the main differences between not_null and view is that not_null detects null pointer errors at run time, while view catches them at compile time.
void foo(int* p) {
not_null<int*> nn = p; // run-time exception
view<int> v = p; // compile-time error
}
That's not a NULL pointer check. Nothing ever detects that `p` is a null pointer. It fails to compile only because `view<int>` cannot be constructed from a pointer at all. If you had done `view<int> v = *p`, it would compile just fine. What it would not do is actually catch an error if you pass `nullptr` to `foo`. It would simply invoke UB.
Let me rephrase that bit at the end.
`view<int>` and `not_null<int*>` have almost exactly the same interface. The only difference between them is that one takes references and the other takes pointers.
That trivial difference is not worth creating a whole new type for.
As I hope I've demonstrated, taking a pointer makes run time errors into compile time errors and pushes those errors up the call stack.
void foo(T *t)
{
not_null<T*> nnt = t;
}
I think this is important. I've already listed one other reason: semantics. view can convey meaning. As for other features, it really is just general syntactic niceness. The design of view and optional_view are still somewhat in flux, but I can think of at least one other important feature.
I currently have it so that view<T> is implicitly convertible to T&. This allows you to do things like this:
void monitor_items() {
for (item const& i : watched_items) {
monitor(i);
}
}
This could be generic code that operates on both containers of T and containers of view<T>.
As I hope I've demonstrated, taking a pointer makes run time errors into compile time errors and pushes those errors up the call stack.
That they push errors up the call stack is true, but just as with `not_null`, it only does so to the degree that the caller uses the type in question directly. What `view<T>` doesn't do is actually cause runtime checks for the error. So if you have a `T*`, and you don't check if it's NULL, turning it into a `view<T>` is no less dangerous than turning it into a `not_null<T*>`. Indeed, it's more dangerous, since `not_null` will throw, while `view<T>` will have no idea that it has a NULL reference.
I have yet to see an example where a runtime error becomes a compile-time error as you suggest. This code is a potential runtime error:
void foo(T *t)
{
not_null<T*> nnt = t;
}
Show me the equivalent `view<T>` code that makes this a compile-time error. But only when `t` is NULL. That is, this code should compile fine for any user who calls `foo` with a not-NULL pointer. But should fail to compile when `foo` is called with `nullptr`. That's what making "run time errors into compile time errors" would be; after all, the above code only issues a runtime error if you actually call it with NULL.
If the compile-time version can't compile-time check the pointer's value, then it's not doing the same work. And therefore, it is not equivalent code; it's an apples-to-oranges comparison.
I think this is important. I've already listed one other reason: semantics. view can convey meaning. As for other features, it really is just general syntactic niceness. The design of view and optional_view are still somewhat in flux, but I can think of at least one other important feature.
I currently have it so that view<T> is implicitly convertible to T&. This allows you to do things like this:
void monitor_items() {
for (item const& i : watched_items) {
monitor(i);
}
}
This could be generic code that operates on both containers of T and containers of view<T>.
Generic in what way? Your generic code couldn't actually *do anything* with the `view<T>`. Not directly. It could pass it to some other functions, to be sure. But it could only pass it to ones which took the `view<T>`. This function itself cannot access members of the object directly. It couldn't send it to any function that took a `const T&`. And so forth.
The only thing that would allow "generic code" to work as you suggest is operator-dot.
std::view<int> v = i;
v = j; // reassignment of the view
*v = 42; // assignment of the referenced object
std::reference_wrapper<int> v = i;
v = std::ref(j); // reassignment of the reference
v.get() = 42; // assignment of the referenced object
By the way, I don't think you can do v = 42 with a std::reference_wrapper. AFAIK, you have to use get.
r = std::ref(value);
// or just
r = value;
So the difference is just *v vs v.get().
On Wednesday, 12 October 2016 14:17:19 UTC+8, Nicol Bolas wrote:
Show me the equivalent `view<T>` code that makes this a compile-time error. But only when `t` is NULL.
What I mean is that the API of not_null allows for run time errors, while the API of view doesn't. If you want zero-overhead code, it is desirable to eliminate run time error checking. When I say it turns a run time error into a compile time error, I mean it does that by pushing the error up the stack.
void foo(T *t)
{
view<T> vt = *t; //If null, UB, but nobody checks.
}
Generic in what way? Your generic code couldn't actually *do anything* with the `view<T>`. Not directly. It could pass it to some other functions, to be sure. But it could only pass it to ones which took the `view<T>`. This function itself cannot access members of the object directly. It couldn't send it to any function that took a `const T&`. And so forth.
The only thing that would allow "generic code" to work as you suggest is operator-dot.
I meant that the code could operate on both containers of T and containers of view<T>, because view<T> can be converted to T&. This was just a passing comment though; I wasn't trying to make a genuine case about view helping to write generic code. Incidentally though, could a proper "smart reference" which overloaded operator dot be used here? My understanding was that operator= would apply to the wrapped object (unless it were implemented in the wrapper, which would break its ref-like behaviour), so the reference itself wouldn't be copy assignable, which would prohibit storing it in a std::vector. Genuine question.
On Wednesday, October 12, 2016 at 3:07:15 AM UTC-4, joseph....@gmail.com wrote:On Wednesday, 12 October 2016 14:17:19 UTC+8, Nicol Bolas wrote:On Tuesday, October 11, 2016 at 11:54:27 PM UTC-4, joseph....@gmail.com wrote:Let me rephrase that bit at the end.
`view<int>` and `not_null<int*>` have almost exactly the same interface. The only difference between them is that one takes references and the other takes pointers.
That trivial difference is not worth creating a whole new type for.
As I hope I've demonstrated, taking a pointer makes run time errors into compile time errors and pushes those errors up the call stack.
That they push errors up the call stack is true, but just as with `not_null`, it only does so to the degree that the caller uses the type in question directly. What `view<T>` doesn't do is actually cause runtime checks for the error. So if you have a `T*`, and you don't check if it's NULL, turning it into a `view<T>` is no less dangerous than turning it into a `not_null<T*>`. Indeed, it's more dangerous, since `not_null` will throw, while `view<T>` will have no idea that it has a NULL reference.
I have yet to see an example where a runtime error becomes a compile-time error as you suggest. This code is a potential runtime error:
void foo(T *t)
{
not_null<T*> nnt = t;
}Show me the equivalent `view<T>` code that makes this a compile-time error. But only when `t` is NULL. That is, this code should compile fine for any user who calls `foo` with a not-NULL pointer. But should fail to compile when `foo` is called with `nullptr`. That's what making "run time errors into compile time errors" would be; after all, the above code only issues a runtime error if you actually call it with NULL.
If the compile-time version can't compile-time check the pointer's value, then it's not doing the same work. And therefore, it is not equivalent code; it'd an apples-to-oranges comparison.
What I mean is that the API of not_null allows for run time errors, while the API of view doesn't. If you want zero-overhead code, it is desirable to eliminate run time error checking. When I say it turns a run time error into a compile time error, I mean it does that by pushing the error up the stack.
Yes. And by doing so, it has pushed it up the stack to the point where it doesn't actually check the error. If the user has a pointer, and they want to use `view`, it is *the user* who has to test if it isn't NULL. Whereas `not_null` does the check automatically.
So the above code is safer than this:
void foo(T *t)
{
view<T> vt = *t; //If null, UB, but nobody checks.
}
Pushing errors "up the stack" is only a good thing if you actually check for them.
I meant that the code could operate on both containers of T and containers of view<T>, because view<T> can be converted to T&. This was just a passing comment though; I wasn't trying to make a genuine case about view helping to write generic code. Incidentally though, could a proper "smart reference" which overloaded operator dot be used here? My understanding was that operator= would apply to the wrapped object (unless it were implemented in the wrapper, which would break its ref-like behaviour), so the reference itself wouldn't be copy assignable, which would prohibit storing it in a std::vector. Genuine question.
With operator-dot, attempting to call `operator=(const smart_ref<T>&)` would apply to the smart reference. Attempts to use `operator=(const T&)` would be forwarded to `T`. So smart references could be copy-assignable if you want, without breaking the ability to treat them as references.
On Wednesday, 12 October 2016 21:43:24 UTC+8, Nicol Bolas wrote:
Yes. And by doing so, it has pushed it up the stack to the point where it doesn't actually check the error. If the user has a pointer, and they want to use `view`, it is *the user* who has to test if it isn't NULL. Whereas `not_null` does the check automatically.
So the above code is safer than this:
void foo(T *t)
{
view<T> vt = *t; //If null, UB, but nobody checks.
}
Pushing errors "up the stack" is only a good thing if you actually check for them.
That's my point: it's the user's responsibility to check that the pointer isn't null before dereferencing it. Sure, not_null turns potential UB into an exception, but at a performance cost (a run time check, and possible overhead relating to exceptions). By using not_null in our interface, we impose a performance cost (however small) on the user whether or not it's even possible for them to pass a null pointer. Using view (or a plain reference) allows our interface to have zero run time overhead; we just have to trust the user not to dereference any null pointers they may have hanging around. But ultimately, even if we use not_null, the user can still dereference null pointers all day long. If we wanted to eliminate this possibility, we would be better off encouraging the use of an alternative to pointers which had no null state -- perhaps something like view :)
With operator-dot, attempting to call `operator=(const smart_ref<T>&)` would apply to the smart reference. Attempts to use `operator=(const T&)` would be forwarded to `T`. So smart references could be copy-assignable if you want, without breaking the ability to treat them as references.
Really? In that case, they can be stored in containers, but they don't really behave 100% like references.
int a = 0;
int b = 0;
int& ra = a;
int& rb = b;
ra = rb; // copies referenced value
smart_ref<int> sra = a;
smart_ref<int> srb = b;
sra = srb; // rebinds reference
I thought the proposal suggested using a special "rebind" function so that smart references behave exactly like regular references?
That's my point: it's the user's responsibility to check that the pointer isn't null before dereferencing it. Sure, not_null turns potential UB into an exception, but at a performance cost (a run time check, and possible overhead relating to exceptions). By using not_null in our interface, we impose a performance cost (however small) on the user whether or not it's even possible for them to pass a null pointer. Using view (or a plain reference) allows our interface to have zero run time overhead; we just have to trust the user not to dereference any null pointers they may have hanging around. But ultimately, even if we use not_null, the user can still dereference null pointers all day long. If we wanted to eliminate this possibility, we would be better off encouraging the use of an alternative to pointers which had no null state -- perhaps something like view :)
I admit that `not_null<T*>` would be better if it could be constructed from a `T&`, which would also cause it to not bother checking if it is a null reference. And I have made such a suggestion.
But regardless, I don't really understand what you're getting at here.
OK, you have some function that takes a pointer. And your function implicitly requires that this pointer not be NULL; therefore, it isn't going to check to see if it's NULL or not. And you intend to store this not-null pointer around for a time and use it later.
Then why do you not simply store a pointer? Why do you need `view<T>` instead of `T*`? Your code already assumes it's not NULL; what do you gain from using this type instead of `T*`? That you initialize it with `T&` rather than `T*`? You still have to use `*` and `->` to access the `T`.
Really? In that case, they can be stored in containers, but they don't really behave 100% like references.
int a = 0;
int b = 0;
int& ra = a;
int& rb = b;
ra = rb; // copies referenced value
smart_ref<int> sra = a;
smart_ref<int> srb = b;
sra = srb; // rebinds reference
I thought the proposal suggested using a special "rebind" function so that smart references behave exactly like regular references?
And if you want your smart references to work that way, simply declare the assignment operator `= delete`. But the operator-dot proposal doesn't exist to tell you what to do with your types; it tells you what you can do with them.
The proposal for operator-dot defines a mechanism, not a policy on how you build smart references. I imagine that many of them will `=delete` the operator; maybe all of them. But that is not a question for the `operator-dot` proposal; it's a question for proposals for actual smart reference types.
On Thursday, 13 October 2016 02:31:06 UTC+8, Nicol Bolas wrote:
I admit that `not_null<T*>` would be better if it could be constructed from a `T&`, which would also cause it to not bother checking if it is a null reference. And I have made such a suggestion.
While I like this from a compile-time safety perspective, I thought not_null was meant to be more of a transparent wrapper. I'm not sure if modifying the API of the wrapped type is within the scope of its design.
But regardless, I don't really understand what you're getting at here.
OK, you have some function that takes a pointer. And your function implicitly requires that this pointer not be NULL; therefore, it isn't going to check to see if it's NULL or not. And you intend to store this not-null pointer around for a time and use it later.
Then why do you not simply store a pointer? Why do you need `view<T>` instead of `T*`? Your code already assumes it's not NULL; what do you gain from using this type instead of `T*`? That you initialize it with `T&` rather than `T*`? You still have to use `*` and `->` to access the `T`.
Ideally, the function should not take a pointer if the pointer should not be null, because it is unsafe (for code implementing the function) and misleading (for the code calling the function). If the API can't be changed, not_null is a great way to convey meaning and to add run-time safety.
If the API can be changed, the function should take a reference instead, because then you have compile-time safety.
foo_not_null(get_a_pointer()); // any null check happens at run time, inside foo_not_null
foo_view(*get_a_pointer()); // the dereference happens at the call site; a null pointer is the caller's bug
The problem with references is that they cannot be reassigned, which makes them unusable in a lot of generic code (e.g. STL containers). As pointed out, std::reference_wrapper exists for this purpose, but its API isn't particularly nice to use as a general-purpose reference wrapper. In my proposal, I intend view<T> and optional_view<T> to work in tandem as replacements for T& and T* respectively wherever they represent "references" (in the general sense). I tried to make the case for view<T> having some semantic advantage over T&, but you've made me reconsider my argument, since T& almost always means "reference" (in the general sense) and T const& is almost always just to avoid an expensive copy. On the other hand, optional_view<T> does convey additional meaning that T* does not, since the meaning of T* is so horribly overloaded in C++.
I think the case for view is a lot stronger when it is accompanied by optional_view.
Why not store a pointer? Because it allows bugs to creep into your code. Pointers can be null, and dereferencing a null pointer results in UB, and UB is bad. Again, you could use not_null to catch any errors at run time, but why catch an error at run time when it can be caught at compile time? Sure, the safety of simple programs can be verified by eye, but not all programs are simple. If "not-null" pointers are pervasive throughout a complex system, the chance of null pointer bugs could be high.
And if you want your smart references to work that way, simply declare the assignment operator `= delete`. But the operator-dot proposal doesn't exist to tell you what to do with your types; it tells you what you can do with them.
The proposal for operator-dot defines a mechanism, not a policy on how you build smart references. I imagine that many of them will `=delete` the operator; maybe all of them. But that is not a question for the `operator-dot` proposal; it's a question for proposals for actual smart reference types.
I appreciate the proposal doesn't specify the design of such types. Still, I'm wondering what possible designs it would enable. If I understand correctly, you could design two categories of "smart reference":
- operator=(ref<T> const&) modifies the wrapper
- operator=(ref<T> const&) modifies the wrapped object
The first option gives consistent behaviour when modifying the wrapper (copy construction and copy assignment do the same thing). However, it makes for inconsistent behaviour when modifying the wrapped object. For example, given smart references a and b:
a.foo = b.foo; // modifies wrapped object
a = b; // modifies wrapper
The second option gives consistent behaviour when modifying the wrapped object, but has inconsistent behaviour when modifying the wrapper. This is the behaviour of regular references that precludes them from being stored in containers:
ref<bar> a = b; // modifies (constructs) wrapper
a = b; // modifies wrapped object
I'm sure the operator dot proposal will enable all sorts of great things via run-time function overriding,
but the quest for the perfect "smart reference" design seems just out of reach.
It seems this is because the design of reference types is fundamentally different from the design of value types in C++. Thus, view and optional_view do not use operator dot overloading; avoiding it seems to be the only way to get consistent behaviour both when modifying the wrapper and when modifying the wrapped object.
On Thursday, October 13, 2016 at 4:27:51 AM UTC-4, joseph....@gmail.com wrote:
On Thursday, 13 October 2016 02:31:06 UTC+8, Nicol Bolas wrote:
That's my point: it's the user's responsibility to check that the pointer isn't null before dereferencing it. Sure, not_null turns potential UB into an exception, but at a performance cost (a run time check, and possible overhead relating to exceptions). By using not_null in our interface, we impose a performance cost (however small) on the user whether or not it's even possible for them to pass a null pointer. Using view (or a plain reference) allows our interface to have zero run time overhead; we just have to trust the user not to dereference any null pointers they may have hanging around. But ultimately, even if we use not_null, the user can still dereference null pointers all day long. If we wanted to eliminate this possibility, we would be better off encouraging the use of an alternative to pointers which had no null state -- perhaps something like view :)
I admit that `not_null<T*>` would be better if it could be constructed from a `T&`, which would also cause it to not bother checking if it is a null reference. And I have made such a suggestion.
While I like this from a compile-time safety perspective, I thought not_null was meant to be more of a transparent wrapper. I'm not sure if modifying the API of the wrapped type is within the scope of its design.
... How does what I suggested modify the API of `T` itself?
But regardless, I don't really understand what you're getting at here.
OK, you have some function that takes a pointer. And your function implicitly requires that this pointer not be NULL; therefore, it isn't going to check to see if it's NULL or not. And you intend to store this not-null pointer around for a time and use it later.
Then why do you not simply store a pointer? Why do you need `view<T>` instead of `T*`? Your code already assumes it's not NULL; what do you gain from using this type instead of `T*`? That you initialize it with `T&` rather than `T*`? You still have to use `*` and `->` to access the `T`.
Ideally, the function should not take a pointer if the pointer should not be null, because it is unsafe (for code implementing the function) and misleading (for the code calling the function). If the API can't be changed, not_null is a great way to convey meaning and to add run-time safety.
In what way is adding `not_null` not a change to an API? At the very least, it adds a user-defined conversion step, from `T*` to `not_null<T*>`. And that can break code.
For example, many string classes have implicit conversions to `const char *`. If you have a function that takes a `const char*`, you can pass one of those string types to it. However, if that API changes to `not_null<const char *>`, then you can't. That requires two user-defined conversion steps, and C++ overload resolution doesn't let you do that.
And thus making such a change in the API broke your code. So turning `T*` into `not_null<T*>` is not a safe change.
If the API can be changed, the function should take a reference instead, because then you have compile-time safety.
No, you do not have "compile-time safety". What you have is language-level assurance that, if the user somehow managed to pass a null reference, then the user has already caused UB.
But the program as a whole has not been made any safer, either at compile time or runtime. It simply makes the code more expressive of your intent (since null references are UB). But so too does `not_null<T*>`.
If a user did this:
foo_not_null(get_a_pointer());
Where `foo_not_null` requires a non-NULL pointer, and `get_a_pointer` could return NULL. This is runtime-safe. However, doing the following doesn't become compile-time safe:
foo_view(*get_a_pointer());
Where `foo_view` takes a `view<T>`. No errors are being caught here, at runtime or compile time. The user must check whether the pointer is NULL, and the user failed to do so.
This program has less safety than the `not_null` version.
At the very least, `view<T>` should have a constructor that takes a `T*` which throws if the pointer is NULL. Of course, if you did so, that would make `view<T>` equivalent to my suggested fixed version of `not_null<T*>`. So where is the advantage for `view<T>`?
The problem with references is that they cannot be reassigned, which makes them unusable in a lot of generic code (e.g. STL containers). As pointed out, std::reference_wrapper exists for this purpose, but its API isn't particularly nice to use as a general-purpose reference wrapper. In my proposal, I intend view<T> and optional_view<T> to work in tandem as replacements for T& and T* respectively wherever they represent "references" (in the general sense). I tried to make the case for view<T> having some semantic advantage over T&, but you've made me reconsider my argument, since T& almost always means "reference" (in the general sense) and T const& is almost always just to avoid an expensive copy. On the other hand, optional_view<T> does convey additional meaning that T* does not, since the meaning of T* is so horribly overloaded in C++.
In general, yes. But "in general" is talking about the reams of legacy code that exists out there. That legacy code isn't going to switch from `T*` to `optional_view<T>` no matter what.
The C++ Core Guidelines give us a reasonably narrow field of usage for naked pointers: they are nullable, non-owning references to a single `T`. Exactly like your `optional_view<T>`.
So a user following good coding guidelines will use `T*` only for such cases.
Though I suppose there is merit to the idea that if you're modernizing a codebase, you need to distinguish between not-yet-modernized functions where `T*` means "anything goes", and APIs that have been modernized where `T*` means "nullable, non-owning references to a single `T`".
but the quest for the perfect "smart reference" design seems just out of reach.
It is "out of reach" only because your definition of "perfect" is inherently contradictory. You define "perfection" as emulating C++ language references exactly, while simultaneously allowing by-reference copying via operator=, which C++ language references do not do.
Emulating language references is a binary proposition. Either that's something you want, or it's something you don't.
It seems this is because the design of reference types is fundamentally different from the design of value types in C++. Thus, view and optional_view do not use operator dot overloading; avoiding it seems to be the only way to get consistent behaviour both when modifying the wrapper and when modifying the wrapped object.
And yet, that's not true at all. Your option 1 above seems perfectly consistent. Just like `a->foo == b->foo`. It only looks odd because you expect `.` to mean "access the wrapper" instead of "possibly access the wrapped object".
On Friday, 14 October 2016 00:52:53 UTC+8, Nicol Bolas wrote:
On Thursday, October 13, 2016 at 4:27:51 AM UTC-4, joseph....@gmail.com wrote:
On Thursday, 13 October 2016 02:31:06 UTC+8, Nicol Bolas wrote:
No, you do not have "compile-time safety". What you have is language-level assurance that, if the user somehow managed to pass a null reference, then the user has already caused UB.
But the program as a whole has not been made any safer, either at compile time or runtime. It simply makes the code more expressive of your intent (since null references are UB). But so too does `not_null<T*>`.
If a user did this:
foo_not_null(get_a_pointer());
Where `foo_not_null` requires a non-NULL pointer, and `get_a_pointer` could return NULL. This is runtime-safe. However, doing the following doesn't become compile-time safe:
foo_view(*get_a_pointer());
Where `foo_view` takes a `view<T>`. No errors are being caught here, at runtime or compile time. The user must check whether the pointer is NULL, and the user failed to do so.
This program has less safety than the `not_null` version.
At the very least, `view<T>` should have a constructor that takes a `T*` which throws if the pointer is NULL. Of course, if you did so, that would make `view<T>` equivalent to my suggested fixed version of `not_null<T*>`. So where is the advantage for `view<T>`?
Okay, I understand your point. I guess my main problem is with potential run-time cost where it isn't necessary.
However, I have just realized that the GSL allows the behaviour of contract violations to be configured (it defaults to calling std::terminate). I assume this is because we currently lack the static analysis tools that the GSL is meant to assist. I am now assuming that the run-time check is intended to be removed in release code (and if not_null were ever standardized), in favour of static detection of unchecked pointer dereferencing and conversion to not_null. This is supported by the description in F.23 of the C++ Core Guidelines.
If this were the case, not_null would not in fact be guaranteed to be "not null" at run-time; the static analyzer would produce a warning of unchecked conversion to not_null (though this check isn't specified in the C++ Core Guidelines for some reason) and UB would arise potentially far from the warning site, wherever the not_null wrapper were eventually dereferenced. On the other hand, view would be guaranteed to be "not null" at run-time; the static analyzer would produce a warning of an unchecked dereference of a raw pointer at the point at which UB arises. Another minor advantage is that you don't need that extra check that I mentioned.
F.23 mentions that run-time checks can be performed in debug builds, but most debuggers will catch a null pointer dereference, so I'm not sure how useful this is.
The problem with references is that they cannot be reassigned, which makes them unusable in a lot of generic code (e.g. STL containers). As pointed out, std::reference_wrapper exists for this purpose, but its API isn't particularly nice to use as a general-purpose reference wrapper. In my proposal, I intend view<T> and optional_view<T> to work in tandem as replacements for T& and T* respectively wherever they represent "references" (in the general sense). I tried to make the case for view<T> having some semantic advantage over T&, but you've made me reconsider my argument, since T& almost always means "reference" (in the general sense) and T const& is almost always just to avoid an expensive copy. On the other hand, optional_view<T> does convey additional meaning that T* does not, since the meaning of T* is so horribly overloaded in C++.
In general, yes. But "in general" is talking about the reams of legacy code that exists out there. That legacy code isn't going to switch from `T*` to `optional_view<T>` no matter what.
I'm not entirely sure what you are responding to here, but I didn't intend view or optional_view to be for legacy code in particular (unlike not_null, which does appear to be geared towards improving the safety of legacy code). They are intended to complement existing standard library types with a higher-level abstraction of the non-owning "reference" concept.
but the quest for the perfect "smart reference" design seems just out of reach.
It is "out of reach" only because your definition of "perfect" is inherently contradictory. You define "perfection" as emulating C++ language references exactly, while simultaneously allowing by-reference copying via operator=, which C++ language references do not do.
Emulating language references is a binary proposition. Either that's something you want, or it's something you don't.
I'm conflating C++ references and references in a general sense again. It is possible to perfectly emulate C++ references (that would be my option 2). What I want (consistent behaviour for copying both wrapper and wrapped objects) is out of reach when using operator dot overloading, so I don't use it.
It seems this is because the design of reference types is fundamentally different from the design of value types in C++. Thus, view and optional_view do not use operator dot overloading; avoiding it seems to be the only way to get consistent behaviour both when modifying the wrapper and when modifying the wrapped object.
And yet, that's not true at all. Your option 1 above seems perfectly consistent. Just like `a->foo == b->foo`. It only looks odd because you expect `.` to mean "access the wrapper" instead of "possibly access the wrapped object".
When I say it isn't consistent, I mean that if a.foo = b.foo modifies the wrapped object, then I expect a = b to modify the wrapped object as well. The problem is that there is only one operator=, and two functions I want it to perform. This is why I must, unfortunately, rely on operator* and operator-> when referring to the wrapped object.
On Friday, October 14, 2016 at 9:36:56 AM UTC-4, joseph....@gmail.com wrote:
On Friday, 14 October 2016 00:52:53 UTC+8, Nicol Bolas wrote:
On Thursday, October 13, 2016 at 4:27:51 AM UTC-4, joseph....@gmail.com wrote:
On Thursday, 13 October 2016 02:31:06 UTC+8, Nicol Bolas wrote:
No, you do not have "compile-time safety". What you have is language-level assurance that, if the user somehow managed to pass a null reference, then the user has already caused UB.
But the program as a whole has not been made any safer, either at compile time or runtime. It simply makes the code more expressive of your intent (since null references are UB). But so too does `not_null<T*>`.
If a user did this:
foo_not_null(get_a_pointer());
Where `foo_not_null` requires a non-NULL pointer, and `get_a_pointer` could return NULL. This is runtime-safe. However, doing the following doesn't become compile-time safe:
foo_view(*get_a_pointer());
Where `foo_view` takes a `view<T>`. No errors are being caught here, at runtime or compile time. The user must check whether the pointer is NULL, and the user failed to do so.
This program has less safety than the `not_null` version.
At the very least, `view<T>` should have a constructor that takes a `T*` which throws if the pointer is NULL. Of course, if you did so, that would make `view<T>` equivalent to my suggested fixed version of `not_null<T*>`. So where is the advantage for `view<T>`?
Okay, I understand your point. I guess my main problem is with potential run-time cost where it isn't necessary.
Which, if my change for `not_null` goes through, can be easily mitigated. The cost only happens when the pointer is introduced to `not_null`. If you pass a reference, there's no check.
However, I have just realized that the GSL allows the behaviour of contract violations to be configured (it defaults to calling std::terminate). I assume this is because we currently lack the static analysis tools that the GSL is meant to assist. I am now assuming that the run-time check is intended to be removed in release code (and if not_null were ever standardized), in favour of static detection of unchecked pointer dereferencing and conversion to not_null. This is supported by the description in F.23 of the C++ Core Guidelines.
If this were the case, not_null would not in fact be guaranteed to be "not null" at run-time; the static analyzer would produce a warning of unchecked conversion to not_null (though this check isn't specified in the C++ Core Guidelines for some reason) and UB would arise potentially far from the warning site, wherever the not_null wrapper were eventually dereferenced. On the other hand, view would be guaranteed to be "not null" at run-time; the static analyzer would produce a warning of an unchecked dereference of a raw pointer at the point at which UB arises. Another minor advantage is that you don't need that extra check that I mentioned.
F.23 mentions that run-time checks can be performed in debug builds, but most debuggers will catch a null pointer dereference, so I'm not sure how useful this is.
It's very useful. A debugger can detect a NULL pointer dereference, but only at the site of use, not the place where the NULL pointer came from. If the code that stored the pointer had used `not_null`, then they could get an error at the source of the pointer. Or at least, at the edges of the system that expected it to not be NULL, rather than wherever it first got used.
The problem with references is that they cannot be reassigned, which makes them unusable in a lot of generic code (e.g. STL containers). As pointed out, std::reference_wrapper exists for this purpose, but its API isn't particularly nice to use as a general-purpose reference wrapper. In my proposal, I intend view<T> and optional_view<T> to work in tandem as replacements for T& and T* respectively wherever they represent "references" (in the general sense). I tried to make the case for view<T> having some semantic advantage over T&, but you've made me reconsider my argument, since T& almost always means "reference" (in the general sense) and T const& is almost always just to avoid an expensive copy. On the other hand, optional_view<T> does convey additional meaning that T* does not, since the meaning of T* is so horribly overloaded in C++.
In general, yes. But "in general" is talking about the reams of legacy code that exists out there. That legacy code isn't going to switch from `T*` to `optional_view<T>` no matter what.
I'm not entirely sure what you are responding to here, but I didn't intend view or optional_view to be for legacy code in particular (unlike not_null, which does appear to be geared towards improving the safety of legacy code). They are intended to complement existing standard library types with a higher-level abstraction of the non-owning "reference" concept.
What I'm getting at is that, in code written for modern C++, it is reasonable to assume that `T*` has a specific meaning: nullable, non-owning reference to a single object. So if you're writing modern C++, you don't need `optional_view<T>` to say what the much shorter `T*` already says.
So `optional_view<T>` is only advantageous when working with a codebase where `T*` does not consistently have a specific meaning. Hopefully, we're not writing more of that kind of code...
but the quest for the perfect "smart reference" design seems just out of reach.
It is "out of reach" only because your definition of "perfect" is inherently contradictory. You define "perfection" as emulating C++ language references exactly, while simultaneously allowing by-reference copying via operator=, which C++ language references do not do.
Emulating language references is a binary proposition. Either that's something you want, or it's something you don't.
I'm conflating C++ references and references in a general sense again. It is possible to perfectly emulate C++ references (that would be my option 2). What I want (consistent behaviour for copying both wrapper and wrapped objects) is out of reach when using operator dot overloading, so I don't use it.
It seems this is because the design of reference types is fundamentally different from the design of value types in C++. Thus, view and optional_view do not use operator dot overloading; avoiding it seems to be the only way to get consistent behaviour both when modifying the wrapper and when modifying the wrapped object.
And yet, that's not true at all. Your option 1 above seems perfectly consistent. Just like `a->foo == b->foo`. It only looks odd because you expect `.` to mean "access the wrapper" instead of "possibly access the wrapped object".
When I say it isn't consistent, I mean that if a.foo = b.foo modifies the wrapped object, then I expect a = b to modify the wrapped object as well. The problem is that there is only one operator=, and two functions I want it to perform. This is why I must, unfortunately, rely on operator* and operator-> when referring to the wrapped object.
So... why is it consistent for `a->foo = b->foo` to have different behavior from `a = b`, yet `a.foo = b.foo` should have the same behavior?
The answer is simple: because you expect `->` to mean "access wrapped object", while you expect `.` to mean "access handle". You live in a world sans-operator-dot, so you don't expect `a.foo` to potentially access the wrapped object. Consistency is based on expectations.
Operator-dot represents a fundamental shift in our expectations.
And that's probably the scariest part of it, and the prime reason why I don't think it should exist.
...
Operator-dot represents a fundamental shift in our expectations.
And that's probably the scariest part of it, and the prime reason why I don't think it should exist.
You don't? I assumed you were all for it. Well, then we agree on something :)
#include <iostream>

struct A
{
    int i = 1;
};

struct B
{
    operator A&() { return a; }
    A a;
};

int main() {
    B b;
    auto i = static_cast<A&>(b).i; // conversion to A& must be spelled out
    std::cout << i;
}
#include <iostream>

struct A
{
    int i = 1;
};

struct B : public A
{
};

int main() {
    B b;
    auto i = b.i; // inherited member, accessed directly
    std::cout << i;
}
#include <iostream>

struct A
{
    int i = 1;
};

struct B : public using A // proposed "inheritance-like" syntax; not valid C++ today
{
    operator A&() { return a; } // designates the delegate object
    A a;
};

int main() {
    B b;
    auto i = b.i; // would forward through operator A&() under the proposal
    std::cout << i;
}