I also think that it is not too late for Microsoft to extend the common
type system of the .NET platform (and of course C#) with multiple
inheritance. Eiffel would be a very good reference for how multiple
inheritance can be integrated into an OO language. At any rate, it would
be better to include multiple inheritance now rather than later.
The above also applies to genericity, although I have heard
rumors that Microsoft will support genericity in C#. Again, the languages
C++ and Eiffel and most OODBMSs support this mechanism quite well,
especially Eiffel with its constrained genericity.
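To make the genericity point concrete: Eiffel writes the constraint right in
the class header (e.g. SORTED_LIST [G -> COMPARABLE]), while in today's C++
the constraint on a type parameter is only implied by how the template body
uses it. A minimal C++ illustration (names invented for the example):

#include <iostream>

// Implied constraint: T must support operator< (Eiffel would state this
// explicitly, as in G -> COMPARABLE).
template <typename T>
T smaller(T a, T b) { return b < a ? b : a; }

int main() {
    std::cout << smaller(3, 7) << "\n";      // T deduced as int
    std::cout << smaller(2.5, 1.5) << "\n";  // T deduced as double
    return 0;
}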
What do other people think about this topic? Personally I think that
true multiple inheritance and genericity are really very important in an
OO context, and the .NET platform (which will very likely influence
the future of the software industry) should directly support these two
very important mechanisms.
Best regards,
Volkan Arslan
"Volkan Arslan" <Volkan...@gmx.net> wrote in message
news:39C36C0D...@gmx.net...
MI seems great on the surface; however, it is saturated with subtleties that
make proper design difficult. Some of these are:
1) Two (or more) base classes may contain methods that have the same
signature but different implementations... which one should be chosen? Or
they may have two identically named fields... which one should be used? If
many classes are involved, things could get ugly and bugs could appear that
would be hard to trace (a sketch of this ambiguity follows these points).
Interfaces solve this problem in an elegant way.
2) Overhead is added to both the compiler and the run-time system trying to
sort out the grey areas.
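To illustrate point 1, here is a minimal C++ sketch of the ambiguity (class
names invented for the example):

#include <iostream>

struct Printer { void start() { std::cout << "warming up printer\n"; } };
struct Scanner { void start() { std::cout << "calibrating scanner\n"; } };

// Copier inherits two implementations of start() with the same signature.
struct Copier : Printer, Scanner {};

int main() {
    Copier c;
    // c.start();        // error: ambiguous -- Printer::start() or Scanner::start()?
    c.Printer::start();  // the caller must disambiguate explicitly
    c.Scanner::start();
    return 0;
}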
Overall I'm very happy that MS didn't allow multiple inheritance (and I
hope they are not going to include it). I believe multiple (class)
inheritance can effectively be substituted by a combination of single class
inheritance and the implementation of many interfaces.
Klaus
"Volkan Arslan" <Volkan...@gmx.net> wrote in message
news:39C36EF4...@gmx.net...
With lots and lots of code duplication to go along with it.
>
> 2) Overhead is added to both the compiler and the run-time system trying to
> sort out the grey areas.
Most Eiffel compilers (perhaps all) don't have problems with this.
Read Object-Oriented Software Construction, 2nd Edition, for an example of
how this can be achieved quite elegantly. Or better still, get your hands
on an Eiffel compiler and see for yourself. After 15 years of Eiffel I'm
getting a bit tired of seeing how people base their views on C++'s version
of MI, which is quite horrid.
>
> Overall I'm very happy that MS didn't allow multiple inheritance (and hope
> they are not going to include it), I believe multiple (class) inheritance
> can effectively be substituted by using a combination of single class
> inheritance and allowing the implementation of many interfaces.
Again, programming in both Eiffel and Java has shown me how badly
Java's interface mechanism compares to the well-defined, well-controlled
MI-mechanism that one finds in Eiffel.
Sorry guys, MI is *NOT* bad; like any other fairly advanced concept it
requires some thought up-front, but that's a trait of good software
engineering in general.
So, I hope the .NET *common run-time* will include support for MI and then
leave it to the language designers whether they wish to utilize this or not.
I'm quite happy with C# not having MI, but the lack of MI in the common
run-time reduces some of Eiffel's great functionality, and IMHO a VM should
at least try to implement a super-set of the features found in the languages
using its services.
Eirik Mangseth
Stockpoint Nordic AS
Lysaker, Norway
"If I can't Eiffel in heaven, I won't go"
> I don't know the reasons for why MS is not including multiple (class)
> inheritance (MI) , but here is my opinion about it:
>
> ...
>
> Overall I'm very happy that MS didn't allow multiple inheritance (and hope
> they are not going to include it), I believe multiple (class) inheritance
> can effectively be substituted by using a combination of single class
> inheritance and allowing the implementation of many interfaces.
No, for practical software reuse, you need implementation MI, not just
interface MI. With just interface MI, you are forced to implement at least a
default routine in all concrete subclasses. With implementation MI, you can
put this in the parent class.
If you really don't like implementation MI, then in a language with
implementation MI, you could restrict yourself to interface MI, but it will
quickly become obvious why this is a silly restriction.
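To make the reuse argument concrete, here is a small C++ sketch (names
invented): two parents each carry a default routine body written once, and
implementation MI lets one class inherit both bodies instead of re-typing
them.

#include <iostream>

// Two implementation-bearing parents; each default routine is written once.
struct Printable {
    virtual const char* name() const = 0;
    void print() const { std::cout << name() << "\n"; }
    virtual ~Printable() {}
};

struct Persistent {
    virtual const char* name() const = 0;
    void save() const { std::cout << "saving " << name() << "\n"; }
    virtual ~Persistent() {}
};

// Implementation MI: Document reuses both bodies. With interface-only MI,
// print() and save() would have to be re-implemented (or delegated by
// hand) in every concrete subclass.
struct Document : Printable, Persistent {
    const char* name() const { return "report.txt"; }  // satisfies both parents
};

int main() {
    Document d;
    d.print();
    d.save();
    return 0;
}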
Also, most people's problems with MI come from C++ experience. That's just
C++; things don't have to be that way. C++ makes a mess of most things,
so don't judge MI by C++.
In reality, C# offers very little over Java, the best thing about .NET is
that it allows multiple languages so the shortcomings of C# can be stepped
around, whereas with Java this is more difficult.
--
Ian Joyner
i.jo...@acm.org http://homepages.tig.com.au/~ijoyner/
Eiffel for Macintosh
http://www.object-tools.com/products/eiffel-s/index.html
TOOLS Pacific 20-23 Nov 2000: http://www.socs.uts.edu.au/tools/
Objects Unencapsulated -- The book (also available in Japanese)
http://www.prenhall.com/allbooks/ptr_0130142697.html
http://efsa.sourceforge.net/cgi-bin/view/Main/ObjectsUnencapsulated
> ...
> No, for practical software reuse, you need implementation MI, not just
> interface MI. With just interface MI, you are forced to implement at least a
> default routine in all concrete subclasses. With implementation MI, you can
> put this in the parent class.
>
> If you really don't like implementation MI, then in a language with
> implementation MI, you could restrict yourself to interface MI, but it will
> quickly become obvious why this is a silly restriction.
> ...
This makes the assumption that the experiencer has previously experienced a
better implementation. If, OTOH, his prior experience is Java and C++, then the
preference for single inheritance becomes quite reasonable.
Unfortunately, multiple inheritance is implemented properly in only a few
languages. I feel that these are Eiffel, Lisp and Ada (Ada thinks of itself
as being single inheritance, but I'm not convinced... their semantics are so
different, however, that it probably isn't strictly comparable). These are
all languages that few have experienced.
Python, also, has a very interesting implementation of MI. And I believe that
Ruby has a similar one. These languages are more common, but they are normally
closely tied to C, so the likelihood that a systems-level implementation of
support for multiple inheritance would affect them is slight.
Python disambiguates inheritance with a simple depth-first, left-to-right
rule: the first definition that matches the key signature is the accepted
definition. It's up to the user to take care if that isn't what was desired.
Ruby allows definitions to rewrite themselves on the fly (not a recommended
procedure, but if you need to, it's there). Ada has one main line of
inheritance, but supports mix-ins. I'd say delegation, but the standard
meaning of that term has become muddied since MS redefined it for J++.
Lisp... Lisp is Lisp. I don't really understand it, but it supports
inheritance in several flavors. I think that Lisp invented mix-in
inheritance. You use the one that you need. Eiffel uses a nicely structured
form of multiple inheritance that detects collisions at compile time, and
provides mechanisms, through renaming and exclusion, to resolve them in the
intended way. Eiffel doesn't try to solve the problem for you; it detects
the problem and gives the programmer the tools with which to solve it.
See
http://eiffel.com/doc/online/eiffel45/Eiffel/language/BOOK.maker.html
and especially
http://eiffel.com/doc/online/eiffel45/Eiffel/language/inheritance.maker.html
to learn a little more about MI in Eiffel. It is far from a complete
description of what MI should be. If you want to know more, I recommend
reading OOSC2.
Regards,
Manu
Interesting comments. (Inline comments further down)
Especially the difference between C++ and Eiffel. Would you be able to very
briefly explain why Eiffel is superior to C++ in terms of MI? For example,
how does it solve the problems I mentioned with MI? I know this is a C#
newsgroup (and we are suddenly discussing differences between C++ and
Eiffel), but in this context I think it could add some valuable knowledge
to the discussion.
> > Interfaces solve this problem in an elegant way.
>
> With lots and lots of code duplication to go along with it.
In my experience, to a certain degree yes, but in quite a few other cases I
have also found that it has been possible to convert complex MI designs to
a single class inheritance structure, with interfaces placed at carefully
selected positions, without much code duplication at all, and even with a
clearer and simpler design as a result.
> So, I hope the .NET *common run-time* will include support for MI and then
> leave it to the language designers whether they wish to utilize this or not.
> I'm quite happy with C# not having MI, but the lack of MI in the common
> run-time reduces some of Eiffel's great functionality, and IMHO a VM should
> at least try to implement a super-set of the features found in the languages
> using its services.
I agree totally.
Cheers,
Klaus
> Eirik
>
> Interesting comments. (Inline comments further down)
>
> Especially the difference between C++ and Eiffel. Would you be able to very
> briefly explain why Eiffel is superior to C++ in terms of MI? For example,
> how does it solve the problems I mentioned with MI? I know this is a C#
> newsgroup (and we are suddenly discussing differences between C++ and
> Eiffel), but in this context I think it could add some valuable knowledge
> to the discussion.
Hi Klaus,
One thing that has been mentioned in another post is name clashes. Eiffel
easily solves this problem by having a rename clause in the inheritance
specification. With C++ you must use the scope resolution operator every
time the clash occurs in the code, whereas Eiffel's rename is a once-only
thing.
Yes MI has problems, but there are solutions to these problems.
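A small C++ sketch of the contrast (names invented; the forwarding methods
below are only a hand-written analogy to what Eiffel's rename clause does
once, declaratively, in the inheritance clause):

#include <iostream>

struct FileLogger    { void log() { std::cout << "to file\n"; } };
struct ConsoleLogger { void log() { std::cout << "to console\n"; } };

// Plain C++: every call site must repeat the scope resolution.
struct App : FileLogger, ConsoleLogger {};

// Rough analogy to Eiffel's once-only rename: forward once, then call
// sites stay clean.
struct RenamedApp : FileLogger, ConsoleLogger {
    void logToFile()    { FileLogger::log(); }
    void logToConsole() { ConsoleLogger::log(); }
};

int main() {
    App a;
    a.FileLogger::log();     // disambiguation repeated at every occurrence
    a.ConsoleLogger::log();

    RenamedApp r;
    r.logToFile();           // "renamed" once, used plainly thereafter
    r.logToConsole();
    return 0;
}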
There is of course heaps more that I have written up in my paper on these
issues at:
or better still in my book Objects Unencapsulated (see signature).
Use C++, Eiffel, or any other MI language of your choice.
Thanks.
Anes Sadikovic
If you can implement multiple interfaces as in C#, you don't need to
inherit something from more than one class!!!
/Joakim
"Volkan Arslan" <Volkan...@gmx.net> wrote in message
news:39C36C0D...@gmx.net...
I'll have a good look at those sources of information.
Klaus
Two major reasons:
1) Multiple inheritance has a performance cost.
2) Whose version of multiple inheritance should be implemented? Different
languages implement things differently.
That said, it's something we've discussed, and will continue to discuss.
> The above said can also be applied to genericity, although I have heard
> rumors that Microsoft will support genericity in C#. Again the languages
> C++ and Eiffel and most OODBMS support this mechanism quite well,
> especially Eiffel with its constrained genericity.
Generics are an interesting topic, and can be really useful. It's a tough
problem; not only do you need to find a good way of implementing them, but
you need to figure out how types that are generic appear in languages
without that concept. To get these things right, you need to support generic
types in the runtime itself.
We're hoping to have generic support in a future version.
Jonathan Allen
"Eirik Mangseth" <ei...@stockpoint.no> wrote in message
news:1m1x5.2040$mq2....@news1.online.no...
> ...
Cheers,
Jesús
Volkan Arslan wrote in message <39C36EF4...@gmx.net>...
Please explain why you need to know this during design.
In any case, the argument that UML is not expressive enough to describe
MI does not invalidate MI. Your statement is a sophism.
> Moreover, there *are* standards to
> move a design from MI to SI,
Again, this does not make SI better than MI. Another sophism.
regards,
Alex Soto
> "Volkan Arslan" <Volkan...@gmx.net> wrote in message
> news:39C36C0D...@gmx.net...
>> I also think that it is not too late for Microsoft to extend the common
>> type system of the .NET platform (and of course C#) with multiple
>> inheritance. Eiffel would be a very good reference for how multiple
>> inheritance can be integrated into an OO language. At any rate, it would
>> be better to include multiple inheritance now rather than later.
>
> Two major reasons:
>
> 1) Multiple inheritance has a performance cost.
This is just incorrect (unless you are saying that everything has a
performance cost); it is incorrect to say that MI has any performance cost
over SI, unless Microsoft thought of particularly bad ways of doing it.
> 2) Whose version of multiple inheritance should be implemented? Different
> languages implement things differently.
Neither is this a problem. Different languages have always had different
underlying concepts, but most platforms support software written in different
languages. For example, the Unisys A Series E-mode architecture (which is
really the ultimate VM, with the cleanest instruction set) allows software
in languages as different as ALGOL and COBOL to run efficiently on the same
platform. It should not be too difficult to design a VM that runs multiple OO
languages with different MI implementations, or even non-OO languages.
> That said, it's something we've discussed, and will continue to discuss.
>
>> The above said can also be applied to genericity, although I have heard
>> rumors that Microsoft will support genericity in C#. Again the languages
>> C++ and Eiffel and most OODBMS support this mechanism quite well,
>> especially Eiffel with its constrained genericity.
>
> Generics are an interesting topic, and can be really useful. It's a tough
> problem; not only do you need to find a good way of implementing them, but
> you need to figure out how types that are generic appear in languages
> without that concept. To get these things right, you need to support generic
> types in the runtime itself.
>
> We're hoping to have generic support in a future version.
So is Sun in Java. I wouldn't hold my breath.
PMFJI: You need to know this during design so that you know what it is you
have to implement. Your question suggests that you don't feel design to be
important in determining the "shape" of the implementation. Ironically,
such design is probably even more important when MI is being considered.
Jesus' point was not sophistry. The demonstration was made that UML, a
popular and widely regarded notation, did not appear to have any support for
MI. The unstated conclusion being that if it was not considered suitable
for inclusion in UML notation, then there must be strong arguments against
its use.
You may not agree with this conclusion or the arguments that support it, but
there was no deception.
> > Moreover, there *are* standards to
> > move a design from MI to SI,
>
> Again, this does not make SI better than MI. Another sophism.
Again, not sophistry. The point is simply made that sufficient weight is
apparently given to the opinion that SI is preferable to MI that a great
deal of work has been put into easing and standardising the transition of
legacy MI models to SI. This adds weight to the claim that SI is (at least
perceived as) preferable to MI.
I can't help but wonder if "sophism" is your "word of the week" this week.
:^)
--
Jolyon Smith
Neot Software Services Ltd.
http://www.neot.co.uk
yahoo messenger ID: jolyonsmith
> IMHO, MI is not just a language headache. It is a design headache too. How
> would you specify in UML the method a subclass should call?
In that case, as you put it (or perhaps use it), it is UML that is
deficient. One of the major problems with UML is that once things get
slightly complicated you end up with horrendous diagrams that don't help.
Neither is UML a particular help in proving correctness or even a good
design.
> I think that
> there is no standard notation for this. Moreover, there *are* standards to
> move a design from MI to SI, and in doing so you don't lose conceptual
> MI (i.e. you can still, looking at your design, see that a person can be a
> customer and a provider). Of course, if your design doesn't have MI, your
> implementation won't do MI.
But that is just artificial. In many cases MI will end up with cleaner
designs and implementations. If you want to restrict yourself to SI then you
can still do so in a language with MI. However, it will soon become apparent
why this restriction is artificial and silly. Why not allow languages
with MI so that we can have the choice? That way, if you want to restrict
yourself to SI, you can do so, but those of us who want cleaner designs can
use MI.
> Volkan Arslan wrote in message <39C36EF4...@gmx.net>...
>>
>> I would be interested to know why the .NET platform (the common type
>> system of the .NET platform and its reference language C#) doesn't
>> support multiple inheritance. I know that C# supports multiple interface
>> inheritance, but this is definitely not the same as multiple inheritance.
>> This point is very interesting, as there are languages like C++ and
>> Eiffel (and of course other languages) which support multiple
>> inheritance. In the case of Eiffel multiple inheritance is supported very
>> elegantly. And also most OODBMSs (like Matisse, Versant, Poet, ...)
>> directly support multiple inheritance. So the question is why Microsoft
>> has decided not to support multiple inheritance.
>>
>> I also think that it is not too late for Microsoft to extend the common
>> type system of the .NET platform (and of course C#) with multiple
>> inheritance. Eiffel would be a very good reference for how multiple
>> inheritance can be integrated into an OO language. At any rate, it would
>> be better to include multiple inheritance now rather than later.
>>
>> The above said can also be applied to genericity, although I have heard
>> rumors that Microsoft will support genericity in C#. Again the languages
>> C++ and Eiffel and most OODBMS support this mechanism quite well,
>> especially Eiffel with its constrained genericity.
>>
> I invite someone who is a big supporter of implementation MI to give us
> an example where implementation MI is useful and where the same thing
> cannot be achieved using SI + interface MI in an effective way, or where
> duplication of code would be required.
>
> Use C++, Eiffel, or any other MI language of your choice.
I won't write it in any language, but anywhere you find that two or more
subclasses of an interface have the same implementation procedure. In that
case you would want to share the procedure by moving it up into the
interface, but in that case it is no longer an interface (well actually, it
is, as all classes have an interface and you don't need the artificial
distinction of interfaces as in Java or header files as in C++).
I'm sorry, but these arguments are all sophism to me. The non-inclusion
of MI in UML does not prove anything against MI. If UML does not have a
notation for specifying MI, that is a UML limitation, not a problem with MI.
Other notations, like BON, do.
>
> You may not agree with this conclusion or the arguments that support it, but
> there was no deception.
>
> > > Moreover, there *are* standards to
> > > move a design from MI to SI,
> >
> > Again, this does not make SI better than MI. Another sophism.
>
> Again, not sophistry. The point is simply made that sufficient weight is
> apparently given to the opinion that SI is preferable to MI that a great
> deal of work has been put into easing and standardising the transition of
> legacy MI models to SI. This adds weight to the claim that SI is (at least
> perceived as) preferable to MI.
>
> I can't help but wonder if "sophism" is your "word of the week" this week.
> :^)
If you want to blindly believe UML, that's your problem. But do not try
to convince others with sophisms. Yes, this is my word of the week. I like
it as much as you like UML :^)
regards,
Alex Soto
Xactly! (I've been watching the X-Men DVD too many times.)
UML seems to be not only a clump of all methods and notations lumped
together; it also seems to be the greatest common factor of the "suitable"
languages. Only some niche languages like Eiffel support MI well enough for
it to be useful without being too dangerous (bugs just creep in). But this
is just MY opinion.
Let the markets lead - not the technology or the innovations!!!
VPN
Java Inner Classes as a "solution" ?
VPN
I've heard this claim before. Can anyone point to particular language
features that make the C# "virtual machine" -- let's call it what it is --
an easier target for other languages than JVM bytecodes. (The only thing I
can think of is its support for COM pointers and pointers to blocks of raw
memory ... and the latter feature might open up big security holes for the
sake of resembling C/C++.)
Thanks.
Frank
MI has no performance cost in a language with global analysis (e.g. Eiffel,
Dylan), but what about languages which can link in new classes at runtime,
like Java and C#? Most of the performance hit will occur at link time, as
the runtime rebuilds its class tables to accommodate the new class. But,
unless there's a well-known clever algorithm for extending class tables
without halting all threads for a noticeable length of time, an implementor
in the real world will probably choose techniques less efficient than an SI
array index.
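For reference, here is roughly what the SI fast path looks like (a toy C++
sketch, not any real VM's layout): with single inheritance each class's
table extends its parent's, so slot k means the same method everywhere and
a call is one indexed load.

#include <cstdio>

struct Object;                        // forward declaration
typedef void (*Method)(Object*);

struct VTable { Method slots[1]; };   // slot 0 means "speak" in every class
struct Object { const VTable* vt; };

void animal_speak(Object*) { std::puts("..."); }
void dog_speak(Object*)    { std::puts("woof"); }

const VTable animalVT = { { animal_speak } };
const VTable dogVT    = { { dog_speak } };  // overrides slot 0; layout unchanged

int main() {
    Object a = { &animalVT };
    Object d = { &dogVT };
    a.vt->slots[0](&a);   // dynamic dispatch: one load plus one indexed call
    d.vt->slots[0](&d);
    return 0;
}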
Frank
> I've heard this claim before. Can anyone point to particular language
> features that make the C# "virtual machine" -- let's call it what it is --
> an easier target for other languages than JVM bytecodes.
Any language (like Scheme or Haskell) where tail recursion must be
implemented efficiently according to the specs of the language, re-using
the stack frame. The authors of all the Scheme-to-Java compilers I've seen
admit that the JVM's limitations forced them to implement tail recursion in
a very inefficient way, completely destroying the semantics of that
particular part of the language (check the reports on Kawa, for instance).
The CLR has a native "tail call" opcode, so it does not suffer from this
limitation.
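For anyone unfamiliar with the issue, here is a minimal C++ illustration of
what "re-using the stack frame" means (the compiler analogy is mine, not a
claim about any particular VM):

#include <cstdio>

// General recursion: the frame must survive the call, because the
// multiply happens after the recursive call returns.
unsigned long fact(unsigned long n) {
    return n <= 1 ? 1 : n * fact(n - 1);
}

// Tail-recursive form: nothing is left to do after the call, so the
// frame can be reused and the call can compile down to a plain jump.
unsigned long fact_acc(unsigned long n, unsigned long acc) {
    return n <= 1 ? acc : fact_acc(n - 1, acc * n);
}

int main() {
    std::printf("%lu %lu\n", fact(12), fact_acc(12, 1));
    return 0;
}

A Scheme compiler targeting a VM without a tail-call instruction has to fake
that jump (trampolines, loops around a dispatcher), which is exactly the
inefficiency mentioned above.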
--
WildHeart'2k
ThanX Frank !
"Frank Mitchell" <fra...@bayarea.net> wrote in message
news:8q7sbo$mgn$1...@localhost.localdomain...
> "Ian Joyner" <i.jo...@acm.org> wrote in message
> news:B5EDACB5.4186%i.jo...@acm.org...
> > in article 8q4d51$gv2$1...@nnrp1.deja.com, asa...@my-deja.com wrote:
> >
> > > I invite someone who is a big supporter of implementation MI to give us
> > > an example where Impl. MI is usefull and where the same thing cannot be
> > > achieve using SI + interface MI in an effective way, or where
> > > duplication of code would be required.
> > >
> > > Use C++, Eiffel, or any other MI languge on your choice.
> >
> > I won't write it in any language, but anywhere where you find two or more
> > subclasses of an interface have the same implementation procedure. In that
> > case you would want to share the procedure by moving in up into the
> > interface, but in that case it is no longer an interface (well actually,
> it
> > is, as all classes have an interface and you don't need the artificial
> > distinction of interfaces as in Java or header files as in C++).
>
> Java Inner Classes as a "solution" ?
> VPN
> ...
Inner classes are an anti-solution. They pretty much guarantee that the code
cannot be shared (isn't that their purpose?). I'm glad that you put "solution"
in quotes.
-- (c) Charles Hixson
-- Addition of advertisements or hyperlinks to products specifically prohibited
Personally I was most impressed by the claim (Frank Mitchell, Re: Multiple
inheritance in..., 09/19/2000, 07:22:57 - 0700) that languages that can load
routines at run time need single inheritance to achieve reasonable efficiency.
I can't validate the truth of that, but nobody has challenged it in the thread
as of now.
Jolyon Smith wrote:
> "Alex Soto" <as...@cawtech.com> wrote in message
> news:39C75A54...@cawtech.com...
> > Jesús wrote:
> > >
> > > IMHO, MI is not just a language headache. It is a design headache too.
> How
> > > would you specify in UML the method a subclass should call? I think that
> > > there is no standard notation for this.
> >
> > Please, explain why do you need to know this during design?
> > In any case, the argument that UML is not expressive enough to describe
> > MI, does not invalidate MI. Your statement is a sophism.
>
> PMFJI: You need to know this during design so that you know what it is you
> have to implement. Your question suggests that you don't feel design to be
> important in determining the "shape" of the implementation. Ironically,
> such design is even probably even more important when MI is being
> considered.
>
> Jesus' point was not sophistry. The demonstration was made that UML, a
> popular and widely regarded notation did not appear to have any support for
> MI. The unstated conclusion being that if it was not considered suitable
> for inclusion in UML notation, then there must be strong arguments against
> it's use.
>
> You may not agree with this conclusion or the arguments that support it, but
> there was no deception.
>
> > > Moreover, there *are* standards to
> > > move a design from MI to SI,
> >
> > Again, this does not make SI better than MI. Another sophism.
>
> Again, not sophistry. The point is simply made that sufficient weight is
> apparently given to the opinion that SI is preferable to MI that a great
> deal of work has been put into easing and standardising the transition of
> legacy MI models to SI. This adds weight to the claim that SI is (at least
> perceived as) preferable to MI.
>
> I can't help but wonder if "sophism" is your "word of the week" this week.
> :^)
>
> --
> Jolyon Smith
> Neot Software Services Ltd.
> http://www.neot.co.uk
> yahoo messenger ID: jolyonsmith
-- (c) Charles Hixson
> UML seems to be not only a clump of all methods and notations
> together,
Well it is the "unified" method so you would sort of expect that, no? :-)
> Let the markets lead - not the technology or the innovations!!!
Which is what I was driving at when "defending" Jesus' statement. UML
represents the formed conclusion of a body of opinion. Furthermore, it is a
conclusion which, if the take-up of UML is taken as anything like
representative, has largely gotten the thumbs up from the IT industry.
If UML excludes support for MI there is a case to be argued that this very
fact demonstrates that MI has been ruled at least unnecessary by that body
of opinion.
I myself am not a big fan of UML. Neither am I particularly zealous about
the use or non-use of MI, although I am of the school of thought that views
it as largely unnecessary. I do acknowledge that it offers a more elegant
solution in some cases than SI + interfaces (and have used this fact in the
past to great effect, though I say so myself). But I also believe that
there is nothing that MI can do that SI + interfaces can't also achieve.
And based on my own experience, I prefer SI + I's over MI. It somehow
"feels" more "right". But that's just my experience.
No, no, no! Not unnecessary, but too complicated for some languages; and
UML is targeted to support SI implementations, too!!!
Veli-Pekka
Disclaimer of the day:
"Any opinions of mine
are a direct order from my Company
to you all - follow it !!"
> I'm sorry, but these arguments are all sophism to me.
Then I suggest you look it up. :^)
> The non inclusion of MI in UML does not prove anything against MI.
Since this is a discussion based largely on opinion there is no "proof"
required or provided. But when defending an opinion it is perfectly valid
to point to the opinion of others as a way of adding weight to that opinion.
UML is a recognised and widely acknowledged notation. It is reasonable
therefore to suggest that those who give the "thumbs up" to UML largely
"agree" with the assumptions that UML forces on them. And that includes
(apparently) the view that MI is not necessary.
> Other notations like BON do.
So what? :^)
> If you want to blindly believe UML, that's your problem. But do not try
> to convince others with sophisms. Yes, this is my word of the week. I like
> it as much as you like UML :^)
Unfortunately however, it would seem that you understand its meaning a
little less well. I point out again that no one has used UML to try to
deceive or confuse anyone or anything. UML has simply been offered as
evidence of wider support for a particular POV.
IMHO that's not sophistry, that's reasoned debate.
That's exactly what I read from you and Jesus. Implying that MI is
unnecessary from the fact that UML does not support MI is an invalid
argument used to deceive people about the effectiveness of MI. Better
to go and find real arguments to back your statements. Picking on my
choice of words won't help your cause.
regards,
Alex Soto
What do you mean, "perhaps"? Of course it's the wrong word. It should be
"sophistry".
Any fule no that.
Bertie
In fact, that was a big part of the Sun vs. MS battle. MS said "It works
better this way, just look at VB and C++" while Sun was saying "That's not
Java! You can't go around changing stuff". Both were correct, but it was
Sun's ball and they went home. Thus MS made their own.
Jonathan Allen
"Frank Mitchell" <fra...@bayarea.net> wrote in message
news:8q7rid$mfl$1...@localhost.localdomain...
>
> I've heard this claim before. Can anyone point to particular language
> features that make the C# "virtual machine" -- let's call it what it is --
A good designer can always recast an incorrect MI design as a proper SI one.
"Alex Soto" <as...@cawtech.com> wrote in message
news:39C75A54...@cawtech.com...
> ...
Implementation classes give much of the same basic functionality as MI.
For example, take the possibly contrived example of a RandomNumberStream: a
stream that spits out random numbers. You could look at it in two ways:
The MI way:

RandomNumberStream is-a Stream
RandomNumberStream is-a RandomNumberGenerator

class RandomNumberStream : public Stream, private RandomNumberGenerator {
    // publicly a Stream; privately reuses the generator's implementation
    ...
};

or the SI way:

RandomNumberStream is-a Stream
RandomNumberStream has-a RandomNumberGenerator

class RandomNumberStream : public Stream {
    RandomNumberGenerator rng;  // composition: forward calls to this member
    ...
};
For most stuff, there's very little difference code-wise.
Ken
Jonathan Allen
"Neil Hodgson" <ne...@scintilla.org> wrote in message
news:#o3MCspIAHA.286@cppssbbsa04...
> Jonathan Allen:
> > JVM doesn't support Delegates, or what we VB programmers know as events.
> > Instead, they all have to be wrapped in event classes.
>
> Microsoft's implementation of delegates generates Java byte code that
> will run on any JVM. The dispute here was over the change to the Java
> language rather than to the VM.
>
> http://www.microsoft.com/presspass/trial/mswitness/muglia/muglia_pt2.asp
>
> Neil
>
>
>
1. The .NET stack is typed, and the opcodes are smart enough to 'do the
right thing' w.r.t the stack. For an example, look at the numerous JVM
bytecodes to add 2 numbers together, vs. the simple add opcode in .NET.
2. The .NET stack treats all values as atomic, ie. you don't have to do
things like use pop2 or dup2 if the top of the stack holds a double.
3. The CLR supports unsigned data types.
4. The CLR supports .tailcall for more efficient implementation of recursive
languages like Scheme.
I'm sure there's more, but I'm still learning about the JVM. The more I
read, the more I think that the CLR architects did a better job. However,
this isn't too surprising given the presence of 20/20 hindsight.
--Peter
http://www.razorsoft.net
"Frank Mitchell" <fra...@bayarea.net> wrote in message
news:8q7rid$mfl$1...@localhost.localdomain...
>
> Ian Joyner wrote in message ...
> >In reality, C# offers very little over Java, the best thing about .NET is
> >that it allows multiple languages so the shortcomings of C# can be
stepped
> >around, whereas with Java this is more difficult.
>
>
[...]
> I'm sure there's more, but I'm still learning about the JVM. The more I
> read, the more I think that the CLR architects did a better job. However,
> this isn't too surprising given the presence of 20/20 hindsight.
Umm, so long as you realize that those two points only move some of the
translation around rather than eliminating it. A simple declaration that
"you don't need dup2" doesn't mean that the implementation -- duplicating
two machine words of data -- isn't required. It means that an operation
that used to be a compile-time check is now a run-time check. Same with
typed operations. The trade-off is that the bytecode (that is, the part
that isn't supposed to be human readable) is simpler and easier for humans
to read, but executes more slowly.
Java does need some way to guarantee tail recursion optimizations, and
unsigned types would make sense too.
Chris Smith
--Peter
http://www.razorsoft.net
"Chris Smith" <cds...@twu.net> wrote in message
news:8q9c4b$i6n$0...@dosa.alt.net...
Some time ago, for test purposes, I built a COM server which implemented
some of the standard C++ library containers (vector, stack, queue, etc.).
OK, C++ genericity has some severe drawbacks (implementation in header
files...) and so does C++ MI.
But using MI (ATL for the COM purposes) you can get a very elegant way to
implement common functionality in several base classes which can be
inherited by the exposed COM classes (think about the common functionality
of vector, stack, queue, deque, list and their iterators).
By using SI I had to implement the methods of every COM interface (which
for C++ is in fact a pure virtual class) in every implementation class. By
using MI I could provide each common implementation in a separate base
class, in such a way that the inheriting class gets the implementations of
all the interface methods it needs without implementing those methods
itself.
This is a rather contrived, not to say academic, example, I know, but...
so is the discussion about the "musts" of implementing OO principles. When
we talk about the necessity of MI, think about this principle of OO design:
composition is better than inheritance. This leads to the "white box" /
"black box" problem. White box: show how to use your component (in fact an
interface); black box: hide the implementation. Inheritance (both SI and
MI) can break this principle when not used very carefully.
Hope we can get Deep Thought to tell us the answer about all the OO stuff!
Karsten
<asa...@my-deja.com> wrote in message news:8q4d51$gv2$1...@nnrp1.deja.com...
> ...
>
> "Alex Soto" <as...@cawtech.com> wrote in message
> news:39C75A54...@cawtech.com...
>> Jesús wrote:
>>>
>>> IMHO, MI is not just a language headache. It is a design headache too.
> How
>>> would you specify in UML the method a subclass should call? I think that
>>> there is no standard notation for this.
>>
>> Please, explain why do you need to know this during design?
>> In any case, the argument that UML is not expressive enough to describe
>> MI, does not invalidate MI. Your statement is a sophism.
>
> Jesus' point was not sophistry. The demonstration was made that UML, a
> popular and widely regarded notation did not appear to have any support for
> MI. The unstated conclusion being that if it was not considered suitable
> for inclusion in UML notation, then there must be strong arguments against
> it's use.
Hang on, I don't believe the claim that UML does not support MI is correct
anyway. If it is true, then UML is even more useless than I thought.
However, it could probably be argued over since even the UML and modeling
experts I know say UML is so badly defined that no one really agrees on the
meaning of things and the same diagram can mean different things depending
on how you interpret the basic definitions.
I again make the point that if you don't like MI (perhaps because you have
seen it in C++), let us have the language with MI, and you can restrict
yourself to using it with SI, that way we will all be happy.
--
Ian Joyner
i.jo...@acm.org http://homepages.tig.com.au/~ijoyner/
Eiffel for Macintosh
> "Ian Joyner" <i.jo...@acm.org> wrote in message
> news:B5EDACB5.4186%i.jo...@acm.org...
> ...
> Java Inner Classes as a "solution" ?
But not a good solution. Why introduce yet another mechanism? Inner classes
used to provide defaults are pretty ugly.
>
> Ian Joyner wrote in message ...
>> ...
> MI has no performance cost in a language with global analysis (e.g. Eiffel,
> Dylan), but what about languages which can link in new classes at runtime,
> like Java and C#? Most of the performance hit will occur at link time, as
> the runtime rebuilds its class tables to accommodate the new class. But,
> unless there's a well-known clever algorithm for extending class tables
> without halting all threads for a noticeable length of time, an implementor
> in the real world will probably choose techniques less efficient than an SI
> array index.
Well, actually, Eiffel compilers flatten the classes. The procedure is:
compile all the routines. Where a class inherits a routine, place a
reference to the routine in the class table, which includes all inherited
routines and routines introduced in the class, so you never have to search
parent classes for the routine at run time. When generating a call to a
routine, determine whether the call refers to zero routines (in that case
give a type error); one routine (in that case generate a direct jsr call);
or potentially more than one implementation of the routine via subclasses
(in that case generate a dynamic call dispatched on the dynamic object type
via the routine entered in the class table).
Having said all that, in the context of a virtual machine, performance is a
red herring: if you want performance, don't use a VM!
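To make the flattening idea concrete, here is a toy C++ sketch (simplified
by assumption: a real compiler would use arrays and compile-time indices
rather than string lookups):

#include <cstdio>
#include <map>
#include <string>

typedef void (*Routine)();

void a_print() { std::puts("A.print"); }
void a_save()  { std::puts("A.save"); }
void b_print() { std::puts("B.print (redefined)"); }

int main() {
    // Class A's table holds every routine it knows.
    std::map<std::string, Routine> tableA;
    tableA["print"] = a_print;
    tableA["save"]  = a_save;

    // Class B inherits A: start from a full copy of A's table, then
    // overwrite redefined entries. B's table is complete ("flat"), so
    // lookup never has to search parent classes at run time.
    std::map<std::string, Routine> tableB = tableA;
    tableB["print"] = b_print;

    tableB["print"]();  // redefined routine
    tableB["save"]();   // inherited routine, found directly
    return 0;
}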
>
> Ian Joyner wrote in message ...
>> In reality, C# offers very little over Java, the best thing about .NET is
>> that it allows multiple languages so the shortcomings of C# can be stepped
>> around, whereas with Java this is more difficult.
>
>
> I've heard this claim before. Can anyone point to particular language
> features that make the C# "virtual machine" -- let's call it what it is --
> an easier target for other languages than JVM bytecodes. (The only thing I
> can think of is its support for COM pointers and pointers to blocks of raw
> memory ... and the latter feature might open up big security holes for the
> sake of resembling C/C++.)
Good technical question; I'll ask Bertrand Meyer at his .NET seminar
tomorrow. I was referring more to availability, because Microsoft has
encouraged third parties to make their languages available on .NET (I think
it is the .NET VM, not the C# VM), whereas Sun just wants Java only.
However, Sun is balanced out by wanting JVM on every platform whereas I
think Microsoft will tend to restrict .NET to MS platforms only and perhaps
Macintosh.
What we need is an open standard in both areas of languages and
platforms...that would be something.
Anyway comments on .NET at this stage are probably not 100% accurate because
not that much information has been put out or even assimilated to work out
what the impacts are, so please don't quote me if I change my mind in the
light of new evidence. (Oh, but I won't be changing my mind about MI not
being a good thing.)
Opinions?
VPN
> Pay attention. The previous post asked what made C#'s virtual machine
easier
> to target than Java's.
The virtual machines look quite similar here. Calling a delegate is an
invokevirtual <method> on the JVM and a callvirt <method> in IL. Methods
called through delegates are just normal methods in both. Delegates are
subclasses of either com.ms.lang.Delegate or MulticastDelegate on the JVM
and subclasses of System.Delegate or MulticastDelegate in IL. There is some
argument wrapping in the generated invoke method of a delegate for the
JVM, and for IL there is an incompletely documented pattern where there are
BeginInvoke and EndInvoke methods as well as the Invoke. There may be some
magic here, but without documentation it's hard to understand. There are
some extra notes on verification of delegates, but this is just checking by
the VM, not requiring extra code in the implementation.
Precisely which features of the .NET runtime do you think makes
implementing delegates easier?
Neil
I think you're right in that you can specify multiple base classes. But I
have not tested Jesus' specific claim (which hinges on indicating in a model
which of two identical inherited methods from different base classes should
be invoked in a given case).
I think one of the key differences here, in design choice, is that Java was
designed to be interpreted, whereas the CLR was designed from the beginning
to be JITed. In interpreted code, the price of doing this repeatedly inside
a loop is going to be more visible. With a JIT the code is translated to
x86 and then run; since the translation of a given method happens but once,
it is a much smaller performance cost.
The point is: OOP is a way to model the real world. But it is limited, as is
any language, model or paradigm. You can't easily divide the world into
pieces that have nothing to do with each other, and that is what MI is for.
But it's got serious limitations: what is a mushroom, a plant or an animal?
Would you assign weights to the relations mushroom-plant and
mushroom-animal? That's what I was talking about when I used the word
'design'. Effective taxonomy, in any science, must not have MI. MI is at
least confusing in most scenarios, and you probably won't need it in your
business classes (yes, this is sophistry too).
Ian Joyner wrote in message ...
You're right. It won't kill performance to the point of unusability by any
means in a JIT environment. It won't even be noticeable if I'm interpreting
things right. Well, I don't know if .NET shares some of the Java
verifiability requirements regarding stack slot data types, but if it does
then the performance impact won't be of any substantial import. But there
are problems with JIT implementations. Time to compile at startup is one.
It's often helpful to interpret code at application startup and then use
the UI downtime for JIT compilation. This means app startup is likely to
take even longer than it does in Java. I don't know what .NET's target
environments are, but many real-time devices would prefer to interpret so
that they get more consistent performance.
This aside, however, the point is that compiled bytecode is not intended
for human consumption. Just as we don't ask CPU manufacturers to choose an
instruction set that makes intuitive sense when read, I don't see why we
should ask .NET to do so. I was responding to a message saying that .NET
has a better design for these reasons, when there is really no discernible
benefit whatsoever. I wasn't saying this design is horrible. I was saying
it's not anything to get excited about. That's all.
Chris Smith
Java bytecode was designed to be compiled/JITed from the start. The
ability to interpret it was included to make it easy to port the VM.
Jim S.
"Chris Smith" <cds...@twu.net> wrote in message
news:8qa7en$p8t$0...@dosa.alt.net...
> Jeff Peil <jp...@bigfoot.com> wrote...
> ...
Regarding the problems with JITs, I agree, although persistent caching of
the emitted code can help here a lot (something .NET's JIT does, and
something I know that v4 of the Java JIT from WebGain, formerly Symantec,
does). .NET also supports what they call pre-JITing, where at install
time the JIT generates and persists part of the app. Both with Java and
.NET I've seen the persisted native code really help with start-up.
> ...
Outside debugging, I would tend to agree, though it never hurts to
understand what your compiler should be emitting. Sometimes when source
isn't available and you're making calls into something that is behaving
unexpectedly, being able to follow the instruction set can be helpful in the
process of tracking down the problem.
Did I say it wasn't?
Perhaps I didn't say it clearly enough: interpreted performance of Java
bytecode was an important factor. Someone from the .NET team can correct me
if I'm wrong, but the impression I got reading their documentation was that
interpreted performance of .NET IL was not a consideration; JITted
performance still was, of course.
Different goals make for different results.
"Jim Sculley" <nic...@abraxis.com> wrote in message
news:39C8A15F...@abraxis.com...
Maybe someone could answer these questions for me:
1. Does the JVM support a calling convention like "varargs" in the CLR
(i.e. a variable number of parameters)?
2. Does the JVM have the equivalent of the CLR's "oneway" method modifier
that permits fire-and-forget method invocation?
3. Does the JVM have the equivalent of the CLR's GC.SuppressFinalize (to
unregister an object for finalization if the work has already been done by
the application)?
4. Does the JVM have the equivalent of the CLR's GC.ReRegisterForFinalize
(to re-register an object for finalization that has been resurrected in its
first finalizer invocation)?
--Peter
http://www.razorsoft.net
"Chris Smith" <cds...@twu.net> wrote in message
news:8qa7en$p8t$0...@dosa.alt.net...
Yes, clarity was the issue. Your statement was:

"I think one of the key differences here, in design choice, is that Java
was designed to be interpreted, where the CLR was designed from the
beginning to be JITed."

To me, that statement is saying that JITing Java bytecode was somehow an
afterthought, when in fact it was designed to be compiled/JITed all along.
Saying that Java was designed to be interpreted is somewhat misleading,
since it was really designed so that it *could* be interpreted. Subtle
but important distinction, especially in light of the fact that very
few VMs are interpreting these days.
Here's what James Gosling had to say when someone asked him whether
Java was originally designed to be interpreted:

"No, and yes. It was really designed to be compiled, with interpretation
being possible as an expedient way to port the VM quickly. Most instruction
sets designed to be interpreted have much looser definitions than the Java
bytecodes, e.g. the strict typing at the bytecode level is largely for easy
high quality code generation."
Jim S.
Jonathan Allen
"Neil Hodgson" <ne...@scintilla.org> wrote in message
news:#unX4XsIAHA.195@cppssbbsa05...
Jonathan Allen
"Jesús" <alsag...@arrakis.es> wrote in message
news:eFSyoeuIAHA.283@cppssbbsa05...
> You are taking my UML example too seriously (it was just an example, not an
> argument). But it's more serious than you think. UML is the official
> modelling tool chosen by OMG, the official authority in OOP. Anyway, that's
> not the point. All of this is state-of-the-art sophistry ;-).
>
> ...
> Java bytecode was designed to be compiled/JITed from the start. The
> ability to interpret it was included to make it easy to port the VM.
Jim,
I very much doubt that point. Java bytecode is way too low-level to
function well as an intermediate representation for optimizing compilers.
"Features" such as JSR, as well as unrestricted control flow through goto,
make it difficult, yet necessary, to reconstruct the high-level
representations. I've more than once heard the opinion that the bytecode
set should be redesigned.
Do you have evidence for your point? If so, I wonder whether the bytecode
designers ever constructed an optimizing compiler. Not that I ever did ...
Matthias
--
Matthias Ernst - Entwickler
matthia...@coremedia-ag.com - fon +49.40.325587.503 fax .99
CoreMedia AG - www.coremedia-ag.com
Düsternstraße 3, 20355 Hamburg, Germany
_____________________________
// He's dead, Jim.
_cfg = (PhaseCFG*)0xdeadbeef;
--Peter
http://www.razorsoft.net
-------------------------------------
From: Artur Biesiadowski [mailto:ab...@pg.gda.pl]
Sent: Wednesday, September 20, 2000 10:13 AM
To: Peter Drayton
Subject: Re: Multiple inheritance in C# and .NET platform ?
Peter Drayton wrote:
> Maybe someone could answer these questions for me:
Not directly. But remember that we are not talking about Java here, only
about the JVM, so the tricks I list below can be done transparently by a
compiler for a language supporting these things.

> 1. Does the JVM support a calling convention like "varargs" in the CLR
> (i.e. a variable number of parameters)?

Just use an array of Objects as the parameter. No static checking of types,
but varargs doesn't have that either, does it?
> 2. Does the JVM have the equivalent of the CLR's "oneway" method modifier
> that permits fire-and-forget method invocation?

I'm not familiar with that concept and I cannot find the C# specification
now (I've been looking for it for 20 minutes on the Microsoft site - what
happened to it??). So my answer is a blind guess from your 'fire-and-forget
method invocation' hint - I suppose it allows running a method without the
current thread going there. In this case it would basically be

new Thread() {
    public void run() {
        // call the "one-way" method with whatever arguments you need
        invokeOnewayMethod();
    }
}.start();

Of course in an imaginary language on top of the JVM which supported this
keyword, it would be done behind the scenes.
> 3. Does the JVM have the equivalent of the CLR's GC.SuppressFinalize (to
> unregister an object for finalization if the work has already been done by
> the application)
No, but it can easily be done inside the object. Is the stream already
closed? Check that in finalize and just don't do the cleanup again. It
might seem like more work, but you do this work in only a single place,
instead of suppressing finalization at every place where you have done the
cleanup by hand. Less error-prone, I think.
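Here is a minimal Java sketch of that idiom (the class is hypothetical,
not from any real library): the cleanup is idempotent inside the object,
so the finalizer can call it unconditionally.

import java.io.IOException;

class ManagedStream {
    private boolean closed = false;

    public synchronized void close() throws IOException {
        if (closed) {
            return;        // already cleaned up - nothing to do
        }
        closed = true;
        // ... release the underlying resource here ...
    }

    protected void finalize() throws Throwable {
        try {
            close();       // harmless if close() was already called
        } finally {
            super.finalize();
        }
    }
}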
> 4. Does the JVM have the equivalent of the CLR's GC.ReRegisterForFinalize
> (to reregister an object for finalization that has been resurrected in its
> first finalizer invocation).
Resurrecting an object being finalized is very bad practice. The
notification part can be done without any meddling with the JVM - such a
method would consist of creating a PhantomReference to the object and
registering it with a ReferenceQueue; the reference gets enqueued when the
object is collected. You will not be able to reattach the object again -
but as I've said, it is VERY bad practice.
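A sketch of that PhantomReference technique (my own example; the names are
illustrative). The queue tells you when the object has been collected, but
a phantom reference never lets you reach the object again, so there is no
way to resurrect it:

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

class CollectionWatcher {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue queue = new ReferenceQueue();
        Object resource = new Object();
        PhantomReference ref = new PhantomReference(resource, queue);

        resource = null;   // drop the last strong reference
        System.gc();       // only a hint; timing is up to the collector

        // Blocks (here for at most one second) until the collector
        // enqueues the phantom reference; ref.get() always returns null.
        if (queue.remove(1000) != null) {
            System.out.println("collected - do post-mortem cleanup");
        }
    }
}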
I think that points 3 and 4 show bad design in the CLR - allowing you to
mess with things that shouldn't be messed with.
I don't think that the CLR is any better than the JVM. C# may be better
than Java - that is a point to argue in a different discussion (on
c.l.j.advocacy, I suppose) - but the JVM is really not so bad. Of course,
it is very hard to implement Smalltalk on top of it - but the same is true
for the CLR, I think.
Artur
Thanks for the info. Comments inline...
"Artur Biesiadowski" <ab...@pg.gda.pl> wrote in message
news:39C8F012...@pg.gda.pl...
> Peter Drayton wrote:
>
> > Maybe someone could answer these questions for me:
>
> Not directly. But remember that we are not talking about Java here, only
> about the JVM, so the tricks I list below can be done transparently by a
> compiler for a language supporting these things.
>
> > 1. Does the JVM support a calling convention like "varargs" in the CLR (ie.
> > variable # of parameters)
>
> Just use an array of Objects as the parameter. There is no static checking
> of types, but varargs doesn't have that either, right?
>
> > 2. Does the JVM have the equivalent of the CLR's "oneway" method modifier
> > that permits fire-and-forget method invocation
>
> I'm not familiar with that concept and I cannot find the C# specification
> right now (I've been looking for it for 20 minutes on the Microsoft site -
> what happened to it??). So my answer is a blind guess from your
> 'fire-and-forget method invocation' hint - I suppose it allows running a
> method without the current thread entering it. In this case it would be
> basically:
> new Thread() {
>     public void run() {
>         invokeOnewayMethod(/* any parameters you want */);
>     }
> }.start();
> Of course, in an imaginary language on top of the JVM that supported this
> keyword, it would all be done behind the scenes.
Yup - you've got it. BTW, this stuff is not in the C# spec, but rather the
IL docs that are part of the .NET SDK.
>
>
> > 3. Does the JVM have the equivalent of the CLR's GC.SuppressFinalize (to
> > unregister an object for finalization if the work has already been done by
> > the application)
> No, but it can easily be done inside the object. Is the stream already
> closed? Check that in finalize and just don't do the cleanup again. It
> might seem like more work, but you do this work in only a single place,
> instead of suppressing finalization at every place where you have done the
> cleanup by hand. Less error-prone, I think.
Hmmm... I don't know how the JVM handles finalizers, but in the CLR this is
actually a performance/memory optimization. The CLR maintains a list of all
the objects that need to have their finalizers called when they get
collected. When an object gets marked as garbage (as part of a GC) it isn't
immediately collected, but rather a reference to it is put onto a queue of
objects pending finalization. There is a separate Finalizer thread that is
constantly running which sucks objects off the queue, runs the finalizer,
then drops the reference. When the _next_ GC happens, the object gets
collected. Calling SuppressFinalize lets the GC collect the object on the
first pass, rather than the second. This is a recommended design pattern,
where you expose a Dispose() method that does whatever cleanup is necessary,
then calls SuppressFinalize. The finalizer delegates to Dispose. This way if
the clients of the object do the right thing, it gets GCed in 1 pass. If
they forget to call Dispose, the object gets collected in 2 passes.
I was wondering if the JVM had a similar pattern, and I'm hearing the answer
is no...
>
> > 4. Does the JVM have the equivalent of the CLR's GC.ReRegisterForFinalize
> > (to reregister an object for finalization that has been resurrected in its
> > first finalizer invocation).
> Resurrecting an object being finalized is very bad practice. The
> notification part can be done without any meddling with the JVM - such a
> method would consist of creating a PhantomReference to the object and
> registering it with a ReferenceQueue; the reference gets enqueued when the
> object is collected. You will not be able to reattach the object again -
> but as I've said, it is VERY bad practice.
Sure, it is generally bad practice to resurrect an object. If you do,
however, it's nice to be able to choose to have the finalizer called
again...
>
> I think that points 3 and 4 show bad design in the CLR - allowing you to
> mess with things that shouldn't be messed with.
To each his own - I agree that it's more complicated, but I can certainly
imagine cases where one would want the choice. However, this is a Java
newsgroup, so I won't argue too loudly here :-)
--Peter
http://www.razorsoft.net
--------------------------------
Posted to comp.lang.java.machine by Artur Biesiadowski [ab...@pg.gda.pl]:
[...]
> This way if
> the clients of the object do the right thing, it gets GCed in 1 pass. If
> they forget to call Dispose, the object gets collected in 2 passes.
>
> I was wondering if the JVM had a similar pattern, and I'm hearing the
> answer is no...
Indeed, there is no way to optimize it. But I don't suppose it is a real
performance hit, as only a really small number of objects require
finalization in Java. In the standard library the main ones are:
- some streams (file, socket)
- window components (window, dialog, frame)
Windows can be left out of account (how many windows can be created and
destroyed each second in an application...); streams, even for a very
resource-heavy application, should not number more than 1000-2000 per
second (I'm talking here about some monstrous internet application which
creates and drops that many connections each second). This is really not a
problem for the GC or for memory, compared to the number of garbage
objects created all the time.
[...]
> Sure, it is generally bad practice to resurrect an object. If you do,
> however, it's nice to be able to choose to have the finalizer called
> again...
Yes, I suppose so. But until now I have not met any case where
resurrecting objects would be needed in the real world. I've seen one
thing that could possibly be done by resurrection - a DB connection
wrapper for a DB pool, which returned the connection to the pool when
finalized. Somebody could possibly resurrect the wrapper at that point -
but it is easier to create a new wrapper when one is needed.
Anyway, this point is something that cannot be done with the JVM.
Artur
David
Jolyon Smith <jsm...@neot.co.uk> wrote in message
news:#1KlWHlI...@cppssbbsa02.microsoft.com...
>
> "Alex Soto" <as...@cawtech.com> wrote in message
> news:39C7643E...@cawtech.com...
>
> > I'm sorry, but these arguments are all sophism to me.
>
> Then I suggest you look it up. :^)
>
> > The non-inclusion of MI in UML does not prove anything against MI.
>
> Since this is a discussion based largely on opinion there is no "proof"
> required or provided. But when defending an opinion it is perfectly valid
> to point to the opinion of others as a way of adding weight to that
> opinion.
>
> UML is a recognised and widely acknowledged notation. It is reasonable
> therefore to suggest that those who give the "thumbs up" to UML largely
> "agree" with the assumptions that UML forces on them. And that includes
> (apparently) the view that MI is not necessary.
>
>
> > Other notations like BON do.
>
> So what? :^)
>
>
> > If you want to blindly believe UML, that's your problem. But do not try
> > to convince others with sophisms. Yes, this is my word of the week. I
> > like it as much as you like UML :^)
>
> Unfortunately, however, it would seem that you understand its meaning a
> little less well. I point out again that no-one has used UML to try and
> deceive or confuse anyone or anything. UML has simply been offered as
> evidence of wider support for a particular POV.
>
> IMHO that's not sophistry, that's reasoned debate.
> As you state that "when defending an opinion it is perfectly valid to point
> to the opinion of others as a way of adding weight to that opinion" why,
> when someone else notes that "other notations like BON do", do you respond
> with "So what?".
As I indicated, my tongue was firmly inserted in my cheek, for the exact
same reason that you picked me up. Alex was pooh-poohing the use of UML to
support one POV and then threw BON up to support his POV.
He can't have his cake and eat mine too. :-)
Not god at least :-)
VPN
PS: Problems understood, I'm just wasting BW here
"Jesús" <alsag...@arrakis.es> wrote in message
news:eYT4aA7IAHA.172@cppssbbsa05...
David
Jolyon Smith <jsm...@neot.co.uk> wrote in message
news:OPVaiv6...@cppssbbsa02.microsoft.com...
See my other post. The info comes straight from James Gosling himself,
and can also be found in the original white paper for the first Java
environment. You can find that somewhere on Sun's site.
Jim S.
Have you read the paper, "Finite State Code Generation" by Fraser and
Proebsting? (I don't have a URL handy, but a web search should find it.)
In this paper we will describe tiny, fast and simple code
generators that produce native code whose speed, /including/
/JIT translation/ is within 2-4X of typical compiler-generated
code.
This sort of technology looks very competitive with interpreters.
Admittedly the paper is not talking about CLR specifically, but the CLR
designers would have been aware of it.
Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."
As I understand it, the typed stack is expected to make it easier
(indeed, possible) to add genericity to the VM later. I imagine it also
has an effect on the compactness of the bytecode.
Bingo! That's exactly what I've been wondering over the last few days - ever
since I read some interesting research [1] from Thomas Kistler & Michael
Franz of UC Irvine where they talk about using an abstract syntax tree as
the IL. The claims are that this representation is both easier to optimize
(since the semantics are preserved) and also easier to verify (since the
tree structure is strongly typed, and can't be circumvented).
I'd love to know why they chose the lower-level approach, since the
reasons Kistler, Franz, etc. cite for selecting bytecodes over some form
of abstract syntax tree sound much less relevant to the CLR than they
might have been to the original JVM.
I'm a JVM/CLR novice, but I'm finding the entire field of runtimes/VMs
totally fascinating. Wow! :-)
Regards
Peter
True, if your VM is purely interpreted. However, more modern JVMs and the
CLR(?), not to mention the "Juice" VM, compile VM bytecode to native code,
enabling mobile code to also run efficiently.
In this context, then, multiple inheritance wouldn't complicate compilation
too much if the Big Table O' Features were an array of pointers to
class-specific vtables. You could always extend the class array and append
new vtable arrays, since new classes can only extend previously added ones.
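For concreteness, here is a toy model of that scheme (entirely my own
sketch; a real VM would use native data structures). Each class gets a row
of feature slots; loading a class appends a row and never disturbs
existing rows, or the code compiled against them:

class DispatchTables {
    // tables[classId][slot] holds the implementation of that feature.
    private Object[][] tables = new Object[0][];

    // Appends the vtable of a newly loaded class; returns its class id.
    synchronized int loadClass(Object[] vtable) {
        Object[][] grown = new Object[tables.length + 1][];
        System.arraycopy(tables, 0, grown, 0, tables.length);
        grown[tables.length] = vtable;
        tables = grown;
        return grown.length - 1;
    }

    Object lookup(int classId, int slot) {
        return tables[classId][slot];
    }
}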
However, this is a grossly inefficient use of memory. Most real compiled
systems use some sort of table compression, and SmallEiffel substitutes
customized dispatch functions for vtables. Most real systems also optimize
non-polymorphic features into direct calls or field references. All these
optimizations would have to be revisited and redone if classes can be loaded
at runtime.
A compromise would be to limit the classes that dynamically added code could
extend or even see, the way that Dylan "libraries" do. Other examples:
- Component systems like COM and CORBA, which restrict visible classes to
those which implement "interfaces" defined in IDL.
- ISE's "Dynamic Linking in Eiffel", which if I understand it correctly uses
a "proxy" implementation similar to component technologies.
- Systems which use a scripting language to manipulate classes or routines
written in a compiled language.
Given how common this pattern is, perhaps it's a better solution than a JVM
... but it would be nice if making code mobile (or immobile) were a matter
of recompiling, not recoding.
Frank
Note: COMPARABLE provides an interface that all comparable objects also
provide, that is =, <, <=, >, >=. Now in COMPARABLE only the definition of
< needs to be left unimplemented: =, <=, >, and >= can all be given
definitions in terms of < and =. With Java interfaces restricted to
signatures only, plus single implementation inheritance, you have to leave
=, <=, > and >= abstract and force all subclasses to provide these
repetitively.
So here is a simplified form of COMPARABLE:
deferred class COMPARABLE

feature -- Comparison

   infix "<" (other: like Current): BOOLEAN is
         -- Is current object less than `other'?
      deferred
      ensure then
         asymmetric: Result implies not (other < Current)
      end

   infix "<=" (other: like Current): BOOLEAN is
         -- Is current object less than or equal to `other'?
      do
         Result := not (other < Current)
      ensure then
         definition: Result = ((Current < other) or is_equal (other))
      end

   infix ">" (other: like Current): BOOLEAN is
         -- Is current object greater than `other'?
      do
         Result := other < Current
      ensure then
         definition: Result = (other < Current)
      end

   infix ">=" (other: like Current): BOOLEAN is
         -- Is current object greater than or equal to `other'?
      do
         Result := not (Current < other)
      ensure then
         definition: Result = (other <= Current)
      end

   infix "=" (other: like Current): BOOLEAN is
         -- Is `other' attached to an object of the same type
         -- as current object and identical to it?
      do
         Result := (not (Current < other) and not (other < Current))
      ensure then
         trichotomy: Result = (not (Current < other) and not (other < Current))
      end

end
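For contrast, the closest pre-generics Java rendering I can construct (a
hedged sketch of my own; the names are illustrative) has to be an abstract
class, which then consumes a subclass's single inheritance slot - exactly
the limitation under discussion:

abstract class AbstractComparable {
    // The one deferred operation, playing the role of Eiffel's "<".
    protected abstract boolean lessThan(AbstractComparable other);

    public final boolean lessOrEqual(AbstractComparable other) {
        return !other.lessThan(this);
    }

    public final boolean greaterThan(AbstractComparable other) {
        return other.lessThan(this);
    }

    public final boolean greaterOrEqual(AbstractComparable other) {
        return !this.lessThan(other);
    }

    public final boolean sameAs(AbstractComparable other) {
        return !this.lessThan(other) && !other.lessThan(this);
    }
}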
> You are taking my UML example too seriously (it was just an example, not an
> argument). But it's more serious than you think.
Can you please state more explicitly in your mails when we should be taking
you seriously or not ;-)
> UML is the official
> modelling tool chosen by OMG, the official authority in OOP.
Since when did the OMG become the official authority on OOP? I think that's
breaking news to most of us. (It is also a weak argument, an appeal to
authority, and authority is often wrong.)
> The point is: OOP is a way to model the real world.
Actually no. Strangely, I was discussing that with Bertrand Meyer over
dinner last night after his .NET seminar. He did not disagree with me that
any modeling is of the application at hand, and that most of the problems
about birds not flying, etc. arise because people take this 'real world'
modeling too seriously (you're right, people do take things too
seriously!). OO modeling captures an abstraction of the world, not the real
world. When compared to the real world you can find many places where the
model does not match the real world, but it does what is needed in your
computer system because it has discarded irrelevant facts from the real
world.
> But it is limited, as
> any language, model or paradigm.
Right, because as I say above, it is not the real world we are modeling.
> You can't easily divide the world into pieces
> that have nothing to do with each other, and that is what MI is for.
No, MI helps us distil abstract notions individually. This results in
simplicity, because we can reason about small simple things and then
construct complex systems out of those simple blocks. MI makes this
possible.
> But
> it's got serious limitations: What is a mushroom, a plant or an animal?
If something can be simply modeled or implemented using single
inheritance, then don't use multiple inheritance. This is just simple good
design, but it does not disprove the need for MI where MI makes things
simpler.
> Would you assign weights to the relations mushroom-plant and
> mushroom-animal? That's what I was talking about when I used the word
> 'design'. Effective taxonomy, in any science, must not have MI.
That's questionable. Again, analyse things in terms of SI where you can,
but a blanket restriction to SI in computer systems development is
limiting.
> MI is at
> least confusing in most scenarios,
So don't use it in confusing scenarios; that would be bad design. Use it
where it makes things simpler. An example will follow in a separate
follow-up on class COMPARABLE.
> and you probably won't need it in your
> business classes (yes, this is sophistry too).
Perhaps not. Again, don't use MI just to be clever - that is not clever -
but where it is the best solution it should not be prevented because the
language or environment has some philosophical prejudice against it.
--
> Actually, I was responding more from the perspective of why I'd prefer to
> hand-code IL than JVM bytecodes. Sorry if this wasn't clear. And I'm sure
> no-one would question that it's easier to use simple opcodes like return,
> add, etc. rather than the n (where n is a large #) type-specific opcodes the
> JVM uses. That said, we agree that tailcall & unsigned types are important
> distinctions though, right?
On the topic of JVM vs. MSIL, it is true that MSIL is supposed to be loaded
and then JITed. Having now seen some MSIL, I'm curious as to why they made
it so assembler-like (with other compiler-generated metadata). Why was the
whole thing not made a higher-level, language-independent intermediate
representation? I think that would be a better approach, like a lot of
compiler technology where a single code generator can generate code from
such high-level intermediate compiler tables. In the .NET case, though, the
code generator is separated from the front-end compiler, because the
supplier compiles with the front end and the user invokes the code
generator.
> cds...@twu.net (Chris Smith) wrote (abridged):
>> Umm, so long as you realize that those two points only move around
>> some of the translation rather than eliminating it.
>
> As I understand it, the typed stack is expected to make it easier
> (indeed, possible) to add genericity to the VM later. I imagine it also
> has an effect on the compactness of the bytecode.
Gee, does that mean that the industry has finally discovered 40 years later
that the Burroughs B5000 (Unisys A Series) tagged stack architecture is a
good idea!
So the "main" code might remain optimized and the new
dynamically loaded classes would have restrictions and
you could still have a wider MI than Java offers?
Right?
VPN
"Frank Mitchell" <fra...@bayarea.net> wrote in message
news:8qeuc8$mj0$1...@localhost.localdomain...
(I am not a runtime expert)
One reason is that the higher-level your IL is, the more resources it takes
(in terms of time and memory) to convert to native code. On a PC-class
machine this might not be a big issue (though it could affect performance a
fair bit), but for the embedded market (think cell phones), it could be very
problematic to write a JIT.
>
> "Peter Drayton" <pe...@razorsoft.com> wrote in message
> news:O3#vajFJAHA.255@cppssbbsa05...
>> "Ian Joyner" <i.jo...@acm.org> wrote in message
>> news:B5F10674.4247%i.jo...@acm.org...
>>> Having now seen some MSIL, I'm curious as to why they made
>>> it so assembler-like (with other compiler-generated metadata). Why was the
>>> whole thing not made a higher-level, language-independent intermediate
>>> representation?
>>
>> Bingo! That's exactly what I've been wondering over the last few days - ever
>> since I read some interesting research [1] from Thomas Kistler & Michael
>> Franz of UC Irvine where they talk about using an abstract syntax tree as
>> the IL. The claims are that this representation is both easier to optimize
>> (since the semantics are preserved) and also easier to verify (since the
>> tree structure is strongly typed, and can't be circumvented).
>>
>> I'd love to know why they chose the lower-level approach, since the reasons
>> Kistler, Franz, etc. cite for selecting bytecodes over some form of
>> abstract syntax tree sound much less relevant to the CLR than they might
>> have been to the original JVM.
>
> (I am not a runtime expert)
>
> One reason is that the higher-level your IL is, the more resources it takes
> (in terms of time and memory) to convert to native code. On a PC-class
> machine this might not be a big issue (though it could affect performance a
> fair bit), but for the embedded market (think cell phones), it could be very
> problematic to write a JIT.
I think that's a bit of a guess, and you would certainly have to measure
the performance/memory usage carefully. I can see that the higher you go,
the more work you will have to do. But choose the right level and things
will perform well. With the machine-code approach, I think there will be
some backtracking to do, so you could also lose the lower you go.
While Peter saw some research on ASTs, this approach has been used by
commercial vendors to implement several languages with one code generator.
Unisys A Series slice compilers use this technique, and the code
generation phase from the AST was reasonably straightforward, fast, and
produced excellent code. So I know it works in practice.
"Ian Joyner" <i.jo...@acm.org> wrote in message
news:B5F293F2.42B5%i.jo...@acm.org...
Do you know it works in practice when you have to JIT on a phone?
>>> (I am not a runtime expert)
>>>
>>> One reason is that the higher-level your IL is, the more resources it takes
>>> (in terms of time and memory) to convert to native code. On a PC-class
>>> machine this might not be a big issue (though it could affect performance a
>>> fair bit), but for the embedded market (think cell phones), it could be very
>>> problematic to write a JIT.
>>
>> I think that's a bit of a guess, and you would certainly have to measure
>> the performance/memory usage carefully. I can see that the higher you go,
>> the more work you will have to do. But choose the right level and things
>> will perform well. With the machine-code approach, I think there will be
>> some backtracking to do, so you could also lose the lower you go.
>>
>> While Peter saw some research on ASTs, this approach has been used by
>> commercial vendors to implement several languages with one code generator.
>> Unisys A Series slice compilers use this technique, and the code
>> generation phase from the AST was reasonably straightforward, fast, and
>> produced excellent code. So I know it works in practice.
>
> Do you know it works in practice when you have to JIT on a phone?
In theory yes. If it's not workable in practice today, it should be by next
year, so don't limit thinking to today's technology ;-)
VPN
> In reality, C# offers very little over Java, the best thing about .NET
> is that it allows multiple languages so the shortcomings of C# can be
> stepped around, whereas with Java this is more difficult.
> --
> Ian Joyner i.jo...@acm.org
> http://homepages.tig.com.au/~ijoyner/
>
So, how many languages in .NET support implementation inheritance?
What about (single) inheritance of Eiffel# expanded classes?
Does Eiffel# support Design by Contract (not in the form of a simple assert)?
What happened to CAT methods in Eiffel#?
Don't you miss the Eiffel core libraries in E#?
AFAIK, there are several FULL Eiffel implementations that can produce JVM bytecode.
So much for .NET's advantages over Java.
Andrew
dotNET is going to rule anyway!
Bill of Goats is going to push it and you can't help it!
Like Intel is RAMming the RAMbus technology
(together with the designing company & patents)
down the throats of your PCs
and even the forecasts of the future are twisted
against the cheap & easy DDR and favor RAMbus.
;-(
VPN
"Andrew" <asu...@tucows.com> wrote in message
news:#qOg122...@cppssbbsa02.microsoft.com...
Compared to the wide spectrum of object systems with MI
that have been implemented in various languages,
I don't see much difference between Eiffel's and C++'s
object systems. Using MI in either of those languages
seems painful compared to MI in other languages,
where constructs like method combination, dynamic mixins,
multi-dispatch, scoped methods, and dynamic inheritance
are available.
Ultimately, I suspect that MI is just not quite the right
abstraction in a statically typed language. It seems to
cause more problems than it fixes, and in languages that
give programmers a choice, most people seem to choose not
to use it much. Approaches like generic programming and
aspect oriented programming may turn out to be more appropriate
and more successful in the long run.
Altogether, I think both the Java and the C# language
designers made the right choice in not going with
multiple inheritance.
Tom.
Sent via Deja.com http://www.deja.com/
Before you buy.
I wouldn't bet on Rambus' technology lasting more than a couple more years.
In my eyes, Intel is doing a lot to distance themselves where they aren't
contractually bound to Rambus. I think we'll see Intel ditch Rambus as soon
as the contracts Intel signed in exchange for stock expire.
Of course, only time will tell.
"Veli-Pekka Nousiainen" <vp.nou...@lapinlahden-teknologiakeskus.fi>
wrote in message news:DeYz5.57$Zt1....@read2.inet.fi...
> dotNET is going to rule anyway!
> Bill of Goats is going to push it and you can't help it!
>
> Like Intel is RAMming the RAMbus technology
> (together with the designing company & patents)
> down the throats of your PCs
> and even the forecasts of the future are twisted
> against the cheap & easy DDR and favor RAMbus.
> ;-(
> VPN
>
> "Andrew" <asu...@tucows.com> wrote in message
> news:#qOg122...@cppssbbsa02.microsoft.com...
> >
Eiffel# only supports single inheritance of implementation classes, but we
plan full support of multiple inheritance for next year.
> Does Eiffel# support Design by Contract (not in the form of a simple
> assert)?
Yes, we currently have complete support for Design by Contract.
> What happened to CAT methods in Eiffel#?
We implemented covariance and therefore you can still have it.
> Don't you miss the Eiffel core libraries in E#?
At the moment this is very true, but we are working on making a specific
version of EiffelBase for Eiffel# which will be client-compatible with
EiffelBase.
> AFAIK, there are several FULL Eiffel implementations that can produce JVM
> bytecode.
I know of only one. Eiffel#, on the other hand, can reuse thousands of
libraries made for .NET without any trouble. Most of the languages that try
to target the JVM cannot reuse Java components; they are just reusing the
engine, not the metadata part.
Hope this helps,
Regards,
----------------------------------------------------------
ISE Building, 2nd floor, 270 Storke Road, Goleta CA 93117
805-685-1006, fax 805-685-6869,
Customer support: http://support.eiffel.com
Product information: <in...@eiffel.com>
http://eiffel.com
----------------------------------------------------------
Please explain
> Using MI in either of those languages
> seems painful
Have you really used MI in Eiffel or just read about it?
It is simple, quite elegant and not very painful.
> compared to MI in other languages,
> where constructs like method combination, dynamic mixins,
> multi-dispatch, scoped methods, and dynamic inheritance
> are available.
>
> Ultimately, I suspect that MI is just not quite the right
> abstraction in a statically typed language. It seems to
> cause more problems than it fixes,
Please explain
> and in languages that
> give programmers a choice, most people seem to choose not
> to use it much.
So because most cars have a hazard-light switch which isn't used much, we
should get rid of it?
> Approaches like generic programming and
> aspect oriented programming may turn out to be more appropriate
> and more successful in the long run.
Maybe, but I'm developing solutions today and hopefully tomorrow, and
although both topics are interesting, they don't support me in my work.
Eiffel, with its very fine and orthogonal collection of features, sure
does. And Eiffel's got generics and does a good job of them too.
>
> Altogether, I think both the Java and the C# language
> designers made the right choice in not going with
> multiple inheritance.
Maybe; to me it seems like they shied away from the difficulties it would
have brought them as language designers, thereby forcing their decisions
upon us.
Eirik
"If I can't Eiffel in heaven, I won't go"
> Altogether, I think both the Java and the C# language
> designers made the right choice in not going with
> multiple inheritance.
Well, I think you should read the last two weeks' worth of posts in this
thread more carefully to see why this is wrong. Sorry, I don't have time
to repeat it all here.
--
Ian Joyner
i.jo...@acm.org http://homepages.tig.com.au/~ijoyner/
Eiffel for Macintosh
Tom, I've used MI in both C++ and Eiffel. In the first, you have a
syntax puzzle glued onto the side of the language. In the other, you
have a language *designed* around the concept of MI. The difference is
like living in a garbage dump as opposed to a clean mountain valley :-)
But here's what really annoys me about the argument over OO/MI vs
Generic Programming. Eiffel has a great implementation of both! I've
read the Stepanov interviews (see the related thread
http://x53.deja.com/threadmsg_md.xp?
thitnum=0&AN=669604193.1&mhitnum=0&CONTEXT=970071342.1676541970 )
and it's a pity the guy never tried Eiffel. I think he would have had a
much more powerful model to explore his generic programming concepts
there than he did with C++.
In one article he talks about the OO type system being too loose. He
gives the example of a giraffe being an animal and animals having a
mate method. If giraffe inherits the animal mate method, it gets to
mate with any animal.
Well, in Eiffel you can easily do what Stepanov wants. If you want
descendants of ANIMAL to mate only with other animals of the same type,
ANIMAL simply declares mate as:
   mate (other: like Current): like Current
The type of "other" is now anchored to the type of the current object.
Descendants have to do *nothing*. Now any attempt in the code to mate a
giraffe with something other than a giraffe results in a compile-time
error.
Can you accomplish that with the dynamic languages you're touting? Or
with C++?
Generic programming with Eiffel is so easy, I think most Eiffelists
take it for granted and tend to wonder what the fuss is about over in
C++ land, where they're still trying to get templates to work in their
compilers.
Greg
>
>> What about (single) inheritance of Eiffel# expanded classes?
>
> Eiffel# only supports single inheritance of implementation class, but we
> plan for a full support of multiple inheritance for next year.
>
My question is about Eiffel expanded classes. They are analogous to C#
value classes. You cannot inherit from value classes, AFAIK.
>> Does Eiffel# support Design by Contract (not in the form of a simple
>> assert)?
>
> Yes, we currently have a complete support for Design By Contract.
>
So that means that pre- and postconditions are part of the class
specification and cannot be overridden by derived classes, does it? I
mean, if your C# method overrides an Eiffel# one, is the overriding method
still checked for pre- and postconditions?
>> What happened to CAT methods in Eiffel#?
>
> We implemented covariance and therefore you can still have it.
>
Is covariance available in C# too? And if not, why not?
>> Don't you miss the Eiffel core libraries in E#?
>
> At the moment this is very true, but we are working on making a specific
> version of EiffelBase for Eiffel# which will be client-compatible with
> EiffelBase.
>
>> AFAIK, there are several FULL Eiffel implementations that can produce JVM
>> bytecode.
>
> I know of only one. Eiffel#, on the other hand, can reuse thousands of
> libraries made for .NET without any trouble. Most of the languages that
> try to target the JVM cannot reuse Java components; they are just
> reusing the engine, not the metadata part.
>
Why is that? Why can't languages compiled to Java bytecodes use Java
libraries? I am not sure about the SmallEiffel implementation, but JPython
uses Java libraries as easily as its own. You can mix Java and Python code
as you like.
> Hope this helps, Regards,
>
> ----------------------------------------------------------
> ISE Building, 2nd floor, 270 Storke Road, Goleta CA 93117
> 805-685-1006, fax 805-685-6869,
> Customer support: http://support.eiffel.com Product information:
> <in...@eiffel.com> http://eiffel.com
> ----------------------------------------------------------
>
>
>
Andrew
Well, look at languages like CLOS, Smalltalk-with-MI, Python,
Flavors, Ada, Tom, and other languages with MI. They are mostly
very different from both C++ and Eiffel. They also generally
allow programming styles and design methods that are simply
not possible in either C++ or Eiffel.
> > Using MI in either of those languages seems painful
>
> Have you really used MI in Eiffel or just read about it?
> It is simple, quite elegant and not very painful.
Some. And while it is nicer in some ways than C++,
it still lacks most of the features I mentioned and
therefore felt pretty constraining to me.
Have you used MI in other languages?
> > and in languages that
> > give programmers a choice, most people seem to choose not
> > to use it much.
>
> So because most cars got a Hazard Light switch which
> isn't used much, we should get rid of it?
I think that analogy is flawed. A hazard switch is a simple device
that is used for emergencies. MI is a complex addition
to a language that permeates its use.
A better analogy might be the question of whether every car should
come with a backhoe, and I think the answer is pretty clearly
"no": most people don't need to do that kind of digging, and
the people that do want to do digging probably already have
a backhoe. Worse yet, if you give everybody a backhoe, people
may start digging indiscriminately or cause other kinds of damage.
> > Approaches like generic programming and
> > aspect oriented programming may turn out to be more appropriate
> > and more successful in the long run.
>
> Maybe, but I'm developing solutions today and hopefully
> tomorrow and although both topics are interesting, they
> don't support me in my work. Eiffel, with
> its very fine and orthogonal collection of features sure does.
Those other approaches I mention are hardly "hot topics":
they have been around for years, and you can get full support
for them based on Java and many other languages.
> > Altogether, I think both the Java and the C# language
> > designers made the right choice in not going with
> > multiple inheritance.
>
> Maybe, to me it seems like they shy away from
> the difficulties it would bring them
> as language designers, thereby forcing their
> decisions upon us.
You make it sound like the designers and users of Java
and other languages were a timid, inexperienced bunch.
Some of the people who worked on the design of Java
were building large systems with MI before Eiffel was
even created. And many people who choose Java are also
users of languages with MI.
Nobody forces you or anybody else to use Java or C#.
There are lots of different applications and needs
people have, and I think it's good that there are
different tools for them. And since engineering
involves a variety of tradeoffs, more isn't always
better.
Tom.
Generic programming already has unmatched support in
languages like ML and Haskell, and is a trivial and natural
part of everyday programming there. Stepanov's work mostly
represents an attempt to get some of those ideas into
widely used languages, not some conceptual innovation.
> Now any attempt in the code to mate a
> giraffe with something other than a
> giraffe results in a compile-time error. [...]
> Can you accomplish that with the dynamic languages you're touting? Or
> with C++?
Sure. Some compilers for dynamic languages will catch those
kinds of errors at compile time, and almost all of them will catch
those kinds of errors at runtime. They also allow much more
general type constraints to be expressed. Languages like ML allow
you to specify more general constraints and guarantee
verifying them at compile time. In C++, you can express
similar constraints by using templatized methods and
constraining the template arguments.
(BTW, I think Eiffel compilers aren't guaranteed to find these
kinds of errors at compile time under all circumstances.)
> Generic programming with Eiffel is so easy, I think most Eiffelists
> take it for granted and tend to wonder what the fuss is about over in
> C++ land, where they're still trying to get templates to work in their
> compilers.
While the original intent of templates in C++ was merely to support
genericity in the form in which it exists in Eiffel,
C++ templates have matured into something altogether different
and more powerful through the widespread use of expression templates.
And the expression template approach has also forced template
implementations to become robust. In any case, Java will support
genericity in roughly the same form as Eiffel.
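To sketch what that might look like (a hedged example of my own: the
generic syntax is hypothetical at the time of writing, and the recursive
bound is only an approximation of "like Current"):

abstract class Animal<T extends Animal<T>> {
    // Roughly Eiffel's "mate (other: like Current)".
    abstract void mate(T other);
}

final class Giraffe extends Animal<Giraffe> {
    void mate(Giraffe other) { /* ... */ }
}

class Zoo {
    public static void main(String[] args) {
        Giraffe g = new Giraffe();
        g.mate(new Giraffe());    // fine
        // g.mate(new Lion());    // would be rejected at compile time
    }
}

Unlike "like Current", the bound does not specialize automatically: each
subclass has to restate it.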
Assume ANIMAL has two subclasses, CAT and DOG, and
it defines a method "mate(other: like Current)".
Now, if you have an ARRAY[ANIMAL], what
happens if you try to mate item(0) with item(1)?
Eiffel has to do runtime type checking, and
there is the potential for a runtime type error. In
fact, any method call that involves "like Current" can
potentially result in a runtime type error, even
if the use of that method call could be expressed
in principle using a compile-time type-safe construct.
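Continuing the hedged sketch above (Animal, CAT, and DOG rendered here as
illustrations of mine): with a self-bounded Animal, a heterogeneous
collection surfaces the problem at compile time instead, because the
element type can no longer prove that both animals match.

abstract class Animal<T extends Animal<T>> {
    abstract void mate(T other);
}

final class Cat extends Animal<Cat> { void mate(Cat other) { } }
final class Dog extends Animal<Dog> { void mate(Dog other) { } }

class Kennel {
    static void demo(java.util.List<Animal<?>> zoo) {
        // zoo.get(0).mate(zoo.get(1));
        // Rejected at compile time: the compiler cannot prove that both
        // elements have the same type - the very case that Eiffel's
        // anchored type defers to a runtime check.
    }
}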
> Can you accomplish that with [...] C++?
The short answer is: yes. If you want to, you
can express the same behavior in C++. However,
many uses of "like Current" can be translated
into completely statically type-safe designs in C++,
designs that cannot be expressed at all with
Eiffel's more limited form of genericity.
Tom wrote:
[...]
> Assume ANIMAL has two subclasses, CAT and DOG, and
> it defines a method "mate(other: like Current)".
> Now, if you have an ARRAY[ANIMAL], what
> happens if you try to mate item(0) with item(1)?
>
[...]
>
> > Can you accomplish that with [...] C++?
>
> The short answer is: yes. If you want to, you
> can express the same behavior in C++. However,
> many uses of "like Current" can be translated
> into completely statically type-safe designs in C++,
> designs that cannot be expressed at all with
> Eiffel's more limited form of genericity.
>
> Tom.
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.
>
--
http://homestead.deja.com/user.gmc333/index.html