On 8/19/2015 3:46 AM, Juha Nieminen wrote:
yes, 'generics'...
potentially, though, a more expanded form of type inference could be
another option (say, rather than using templates, you could have 'auto'
work across function and method calls, or within class members, ...).
granted, this won't really buy a whole lot (more just redistributing
complexity, rather than eliminating it).
in such a case, classes and methods are internally specialized based on
how they are used, so calling a method with a given parameter list will
cause the compiler to generate a version of the method which accepts
those parameters, working on a version of the class whose members use
the inferred types. the compiler may still reject code where the type
inference hits an "impossible" situation (an inferred variable being
used with inconsistent types).
> And the other thing that also obviously needs to go is multiple
> inheritance. Because, you know, MI is evil and scary, and an invention
> of the devil. But since multiple inheritance is so fundamental to
> OO programming, they can't remove it completely, so they provide a
> crippled version of it, where only one base class can have method
> implementations and member variables. (Which, of course, means that
> if an "interface" would benefit from a default implementation, you
> are out of luck; you'll have to resort to code repetition.) This
> half-ass implementation is, obviously, "better" than MI. Because
> reasons. (And again, because it's not named "multiple inheritance",
> even though it really is. That name is evil, so if we avoid it,
> then we are A-ok.)
>
I personally suspect this is more about simplifying the implementation
than about simplifying use of the language.
MI (as it works in C++) is a fair bit more complicated at the
implementation level than SI+interfaces: with SI, class layouts are
append-only, and a lot of hairy edge cases and funky semantics in the
inheritance tree are simply not allowed.
never mind that globbing together a mostly no-op base class with a whole
bunch of interfaces providing default methods almost may as well be MI
anyway.
> Of course in our "better" C++ objects can only be allocated dynamically.
> Because that allows garbage collection and stuff. That's nice. Except
> for the fact that memory is not the only resource that a class could
> allocate (other common examples are file handles and sockets.) Thus
> we end up with a C-like language where you have to manually release
> those other resources or they may be leaked, just like memory in C.
> (Then of course they will provide limited automation eg. in the form
> of automatic destruction at the end of a code block. Which you usually
> have to remember to write explicitly anyway, and it can only be used
> within one single code block and doesn't work with shared objects
> which may be destroyed somewhere else entirely. But it's still better
> than C++! That's the most important part.) Then of course you have some
> other more minor issues with reference-only objects (such as it being
> more difficult to swap the contents of the objects), but those are not
> important. And we don't mind that dynamically allocated objects tend
> to consume more memory. RAM is cheap, just buy more of it.
>
yeah, this is one area where a lot of languages have fallen on their face.
some languages have fudged it; for example, in embedded land there are
implementations of Java which remove the GC and sort of kludge on manual
memory management.
my own languages have basically just ended up going in similar
directions to C++ on this front (using a mix of manual memory management
and RAII-like patterns). but, they aren't really meant to replace C++
(rather, they are more for use in a niche use-case).
> And our "better" C++ compiling ten times faster than C++ is always
> something to brag about.
>
I suspect C and C++ really need some sort of good and standardized
precompiled header mechanism here. usually compilers cripple it, either
by making it stupid ("stdafx.h" in MS land), or by imposing arbitrary
limitations in the name of making it behave exactly like the non-PCH
case (it only works provided each compilation unit using it has exactly
the same sequence of #include directives, or similar).
IMO, "better" would be allowing it to be partially undefined how things
like preprocessor defines work across PCH boundaries (use of the PCH
mechanism in headers would be explicit and would essentially opt out of
being sensitive to preceding preprocessor definitions). the header would
then contain some magic to tell the compiler "hey, it is safe to use me
as a self-contained PCH".
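a pseudocode sketch of what that opt-in marker might look like (the pragma name is made up for illustration; it is not any real compiler's syntax):

```c
/* mylib.h */
/* hypothetical marker: "I do not depend on any preceding #defines,
   so it is safe to compile me once and reuse the result as a PCH" */
#pragma pch_self_contained

#ifndef MYLIB_H
#define MYLIB_H

void mylib_init(void);

#endif
```

the compiler would then be free to cache one compiled form of the header and reuse it across translation units, regardless of what was #defined before the #include.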
> Yet, somehow C++ persists. There have been probably at least two dozens
> of "better" C++'s during the last 20 years. A couple of them have been
> moderately successful, the vast majority of them have been forgotten.
> But they are still better than C++, dammit!
>
this is because they either seriously screw something up, or try to aim
their claims too high (promoting their language as the new next best
thing, rather than as a language intended for a specific set of
application domains).
say, I have a scripting language aimed mostly at real-time embedded use
on "medium-end" 32-bit targets (MBs of RAM and Flash and maybe 100s of
MHz), while mostly overlooking lower-end 32-bit targets (kBs of RAM and
Flash) or 8/16-bit targets (such as the MSP430 or AVR8).
meanwhile, it also loses out to its older "sibling" when doing basically
similar stuff on a PC (the older sibling being better suited to PC
use-cases).
it then exists mostly as a way to gloss over asynchronous execution on
top of an event-driven cooperative scheduler with lots of latency
constraints, which can otherwise get reasonably hairy in C and C++.
for something like an MSP430, there is really nothing a scripting
language can offer over C, and ROM and RAM are at enough of a premium
that it doesn't really make much sense to use a more complex runtime or
an interpreter. it is fairly cramped even with fairly trivial C
programs; at around 500 lines you have basically already used up the 2kB
of ROM space (on the MSP430G2232 or similar).
on a PC (or a higher-end ARM target), one generally doesn't need to deal
with tight latency constraints, and the RAM and address space aren't
particularly cramped, so one can afford using a bunch of RAM and a JIT
compiler.
but, saying that it mostly only makes sense on a range of ARM-based
controllers or similar sounds a lot less impressive than claiming it is
the new language which is good at everything and will overthrow whatever
came before.
other languages aim for business or desktop applications, with pretty
much the same sort of issues.