You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
--
X x;
C* c = &x;
c->foo();
X x;
C c = x; // ASSUME X IS COPY-CONSTRUCTIBLE AND SATISFIES VIRTUAL CONCEPT C
c.foo();
--
void foo(C obj)
{
std::cout << obj.bar();
}
int main()
{
int x = 42;
foo(x);
}
int main()
{
int x = 42;
C obj = x;
std::cout << obj.bar();
}
X* xptr = new X;
C* c = xptr;
c->foo();
delete c;
Hi Vicente,
Thank you for sharing your thoughts. I will try to address your points in the following.
1. I am not particularly attached to the name "virtual concept". In Haskell there is something very similar which is called "typeclass", so I could have used that name. On the other hand, the old C++0x idea of "concept maps" was also something pretty similar, except for the fact that they were meant for template-based compile-time polymorphism (just like Concepts Lite), while my goal was to provide a mechanism for non-intrusive dynamic polymorphism. I picked the name "concept" to evoke the parallel with concept maps, and the adjective "virtual" to draw a connection with virtual call dispatch. But this is really a detail to me, so I'm open to suggestions for a better terminology.
2. In your example, I do expect "x" to be deleted when doing "delete c". My original idea was that all virtual concepts should implicitly support destruction as part of the interface they define, and that every itable would contain an entry, for the corresponding type's instantiation, pointing to that type's destructor. However, I could just as well require the destructor to be explicitly mentioned in the virtual concept's definition as a prerequisite for writing "delete c" (but also for using objects of concept type with value semantics, since objects with automatic storage duration are implicitly destroyed when they go out of scope). For other special operations, like copy-construction, I require the corresponding special member function to be explicitly mentioned in the concept definition (see the draft I linked).
3. The reason why I think the assertion (&cx == &x) should not fail is that I want to enable reference semantics. Judging from what you write, I am now led to think that satisfying this assertion is not a necessary condition for achieving my goal.
Anyway, my idea is that if a client stores a pointer to a virtual concept (say, "C* pc"), and later wants to figure out if the object pointed to by "pc" is the same object as another object pointed to by "px" (of type "T*", where "T" has an instantiation for "C"), it may do so by checking whether (pc == px).
I thought special support from the compiler would be necessary to realize this, and that implementation would be the toughest obstacle - that's why I have been trying to tackle it from the beginning. Please let me know if I am mistaken.
4. I do find it reasonable to allow virtual concepts to derive from other virtual concepts, but so far I haven't really thought through this in detail. In fact, when I opened this thread I considered it quite likely that my idea would be immediately classified as unfeasible by the experts, so I didn't want to invest time and effort on a full-blown proposal before getting some feedback.
Concerning the use of techniques based on type erasure (with which I am only mildly familiar from the technical viewpoint), one of my goals is to make code using virtual concepts compile fast (ideally as fast as code using traditional inheritance-based polymorphism). This is why I would avoid library solutions such as Boost.TypeErasure, because they will likely be heavily based on templates, and that does have a significant impact on build time - and readability as well, unless macros are used, but I personally dislike macros.
X x;
C* c = &x;
c->foo();
assert(c == &x);
C c = x;
c.foo();
On 28 September 2014 21:33, Andy Prowl <andy....@gmail.com> wrote:
> @Ville:
> Thank you for sharing your thoughts.
> Due to my lack of knowledge I cannot say I understand your point about
> reflection, so I have a few questions:
> 1. Would the performance of a function call dispatch with a machinery
> involving reflection be comparable to that of a call through a vtable?
The call of the interface/"virtual concept" function is virtual, the forwarding
of that from the adapter is a non-virtual call that may or may not be inlined
depending on whether the function called is inline.
> 2. Given "concept" (or whatever we call it) C such that "x.foo()" is valid
> when "x" is instance of a model of C, would your approach allow writing
> something like this?
>
> X x;
> C* c = &x;
> c->foo();
> assert(c == &x);
> C c = x;
> c.foo();
Probably not. You would need to do something like
X x;
CWrap<X> cwx{x};
C* c = &cwx;
c->foo();
// the assertion for address equality would be false
C& c2 = cwx;
c2.foo();
> 3. Is the "adaptation" you write about something that could be written "once
> and for all" and made part of the standard library (provided we have
> reflection, of course), or is the user supposed to code it for their own
> types/concepts? Would it require fiddling with templates and/or macros?
We don't have a facility for generating overrides for the interface/virtual-concept functions, so we can't make an adapter fully generic. You don't need any macros, and you don't have to use templates if you don't want to, but making such adapters more generic will benefit from templates. So, unless reflection gains the capability of being able to generate overrides for base class functions, you would have to adapt your own types. Reflection helps with the adaptation and can make it more generic when combined with templates, but at least in its proposed form it can't make it fully generic/automatic.
> 4. Do we have a proposal for reflection? The one I have seen some time
> ago (http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3951.pdf) did
> not seem very mature to me.
See
http://open-std.org/JTC1/SC22/WG21/docs/papers/2014/n4113.pdf
class IFoo
{
public:
virtual ~IFoo() = default;
virtual void foo() = 0;
};
class A : public IFoo
{
virtual void foo() override { /* ... whatever ... */ }
void bar() { /* ... something else ... */ }
};
class B : public IFoo
{
virtual void foo() override { /* ... whatever ... */ }
void bar() { /* ... something else ... */ }
};
void poly_foo(IFoo& obj)
{
obj.foo();
}
template<typename T>
virtual concept bool BarConcept()
{
return requires(T obj)
{
{ obj.bar() } -> void;
};
}
virtual concept map BarConcept<A> = default;
virtual concept map BarConcept<B> = default;
void poly_bar(BarConcept& obj)
{
obj.bar();
}
X x;
C& c = x;
assert(addressof(c) == addressof(x));
On Thursday, 25 September 2014 at 21:40:15 UTC+2, Andy Prowl wrote:
Hello everyone,
I would like to ask for feedback concerning this idea I had about non-intrusive "dynamic" polymorphism:
https://www.dropbox.com/s/tg737sf8sj1p1ht/Virtual%20concepts%20for%20polymorphic%20types.pdf?dl=1
This is just a rough sketch, please do not expect any full-blown proposal. My goal is to find out whether it makes sense to invest more time on it, or if the idea is basically bull***t.
A few highlights:
1. The idea is about introducing non-intrusive support for dynamic polymorphism (similar to Haskell's typeclasses);
2. The idea is similar to the old C++0x "concept maps", but is *not* related to templates;
3. Code using a "virtual concept" can be compiled separately and does not need to see the definition of the instances of that concept.
Hi Andy,
The field of your investigation is certainly interesting and useful. This sort of thing is definitely needed. Even if it can be achieved by other means, adding a dedicated feature to the language might encourage people to use it. I am not convinced that concept maps are that useful. What is the difference between defining a concept map and defining a "wrapper" class?
Also, it looks from your paper as though the binding of a type to a virtual concept does not work unless I explicitly declare it with a (possibly defaulted) concept map. Am I right? This would be a big inconvenience. This is what the original concepts were criticized for.
Also, inside the concept maps, how does the visibility of names work? I mean, does normal ADL work? Which namespaces are accessible? Can I call other static member functions unqualified? Can I see functions in the enclosing namespace scope?
Also, the examples only talk about member functions. In contrast, both Concepts Lite and other type-erasure libraries allow defining the interface in terms of non-member functions:
requires(T v, T w)
{
{ transform(v) } -> T;
{ cmp(v, w) } -> bool;
}
Do you intend the above to be expressible with virtual concepts?
What I find complex about Adobe.Poly (apart from finding good documentation and the latest version of the source code) is that (1) it requires defining boilerplate adaptation code (e.g. the PlaceableImplementation from the paper by Marcus, Jarvi, and Parent); (2) the user needs to deal with templates, just like with Boost.TypeErasure; (3) it does not allow for duck typing (unless I'm missing something).
Hi Andrzej,
I read your (excellent) blog post, but judging from the use case you show, Adobe.Poly does not seem to support duck-typing: users have to write the adaptation logic for their types themselves (i.e. Counter and CounterImpl) - that's equivalent to using a concept map unless I'm missing something, just with more boilerplate code.
Honestly, the more I look at the kind of shenanigans that are needed to work out type erasure using library solutions like Adobe.Poly and Boost.TypeErasure, the more I think that language support is fundamental.
I am now reworking my document into something more detailed, more formal, and closer to a draft. I think I have a solution that basically does what you suggest at the beginning of your blog post: take a concept definition (currently, from Concepts Lite) and let the compiler do the type-erasure magic. That means it will be possible to support free functions as well as static member functions. Not sure about data members at the moment.
I will link the draft here once I have a readable version of it.
On Wednesday, 1 October 2014 at 21:42:26 UTC+2, Andy Prowl wrote:
Hi Andrzej,
I read your (excellent) blog post, but judging from the use case you show, Adobe.Poly does not seem to support duck-typing: users have to write the adaptation logic for their types themselves (i.e. Counter and CounterImpl) - that's equivalent to using a concept map unless I'm missing something, just with more boilerplate code.
No, you implement the two classes (Counter and CounterImpl) once per concept. Then you can use any type with this facility without any adaptation, as long as it has the operations defined in the concept.
Honestly, the more I look at the kind of shenanigans that are needed to work out type erasure using library solutions like Adobe.Poly and Boost.TypeErasure, the more I think that language support is fundamental.
I share your opinion.
I am now reworking my document into something more detailed, more formal, and closer to a draft. I think I have a solution that basically does what you suggest at the beginning of your blog post: take a concept definition (currently, from Concepts Lite) and let the compiler do the type-erasure magic. That means it will be possible to support free functions as well as static member functions. Not sure about data members at the moment.
I wonder if it is even possible to do this for free functions that take two or more parameters of type T. What do you do if the erased types mismatch? Boost.TypeErasure does it, but I have never had enough patience to figure out how.
template<typename T>
virtual concept bool C()
{
return requires(T x)
{
foo(x, x);
// ...
};
}
void bar(C& a, C& b)
{
foo(a, b);
}
template<typename T, typename U>
virtual requires C<T>() && C<U>()
void bar(T& a, U& b)
{
foo(a, b); // ERROR!
}
template<typename T>
virtual concept bool D();
template<typename T>
virtual concept bool C()
{
return requires(T x, D y)
{
foo(x, y);
// ...
};
}
template<typename T>
virtual concept bool D()
{
return requires(C x, T y)
{
foo(x, y);
// ...
};
}
void foo(X& x, D& d) { ... } // To make X a model of C
void foo(C& c, Y& y) { ... } // To make Y a model of D
void bar(C& x, D& y)
{
foo(x, y); // WHAT TO CALL HERE?
}
auto r = Rectangle{{1.0, 2.0}, 5.0, 6.0};
Shape& s = r;
Shape const* p = &r;
std::unique_ptr<Rectangle> up = std::make_unique<Rectangle>(r);
Shape* raw = up.get();
std::shared_ptr<Shape> sp = std::move(up); // converting to a base pointer preserves the address
assert(sp.get() == raw); // SHALL NEVER FIRE
assert(&r == &s); // SHALL NEVER FIRE
Hello everyone,
I would like to ask for feedback concerning this idea I had about non-intrusive "dynamic" polymorphism:
https://www.dropbox.com/s/tg737sf8sj1p1ht/Virtual%20concepts%20for%20polymorphic%20types.pdf?dl=1
This is just a rough sketch, please do not expect any full-blown proposal. My goal is to find out whether it makes sense to invest more time on it, or if the idea is basically bull***t.
A few highlights:
1. The idea is about introducing non-intrusive support for dynamic polymorphism (similar to Haskell's typeclasses);
2. The idea is similar to the old C++0x "concept maps", but is *not* related to templates;
3. Code using a "virtual concept" can be compiled separately and does not need to see the definition of the instances of that concept.
Thank you,
Andy
It may be interesting for you to look at traits in Rust; they are very similar to what you propose and serve as a tool for both static and dynamic polymorphism. Rust is much closer to C++ in its ideology than Haskell, so its experience may assist you with your proposal.
One limitation I don't quite understand: on page 33 of the paper you present a function like this:
void foo(virtual const Shape& l, virtual const Shape& r);
You state that l and r must bind to the same type, just like in static concepts. In other words, I could not call foo(Circle(), Rectangle()). Can you elaborate on why this restriction is really necessary?
For dynamic concepts, no such ambiguity exists because "Shape" is already a type. Since the dynamic types have been erased, there is no ambiguity when I reference "Shape" itself. Once I'm inside of foo(), all the compiler and I know is that we have 2 abstract Shapes. Also, when using plain old inheritance, I can have a function taking 2 pointers/references to base classes and pass them 2 different child classes. Why am I allowed to easily do this with inheritance but not with dynamic concepts? A dynamic concepts feature should try to match the semantics of inheritance in this fashion, since it's really a generalization of that type-erasure technique.
void foo(SomeConcept& c, SomeConcept& d)
{
auto e = c;
c = d;
}
void foo(SomeConcept& c)
{
SomeConcept d;
// ...
}
template<typename T, typename U>
requires SomeConcept<T> && SomeConcept<U>
void foo(SomeConcept& c, SomeConcept& d)
{
// ...
}
template<typename T, typename U>
requires SomeConcept<T> && SomeConcept<U>
void foo(T& c, U& d)
{
// ...
}