news:ee29c026-c1c2-4d19...@googlegroups.com...
>
> I've recently attended a talk about the Google's Go language.
> You might be interested in taking a look at its documentation.
> About a half of the language was designed after C with a number
> of improvements:
...
> - sane syntax, e.g. '[10][20]*[30][40]*int' instead of
> 'int*(*[10][20])[30][40]'
Actually, I'm not quite sure what that is ... At first, I thought it was a
function pointer, but it's missing some parens for that. My guess is an
array of an array (10x20) of pointers to an array of an array (30x40) of
pointers to int ... Wrong? That reading seems to fit the first, "sane"
syntax too.
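If my guess is right, the C spelling with an identifier added would be
something like this (a sketch, assuming that reading):

  /* a: 10x20 array of pointers to a 30x40 array of pointers to int */
  int *(*a[10][20])[30][40];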
Sometimes, you'll see complicated declarations for function pointers or
procedures, but I've not generally seen them for arrays.
Usually, I've found that if it's too complicated to read in C, it's too
complicated to use. Although many complicated declarations in C can be
created, a function pointer should be the most difficult syntax you need in
a program. Everything else should be standard types or structs.
> - numeric types of fixed sizes are guaranteed: int8,int32,int64,etc
IMO, that's good. I'd much rather be able to specify absolute sizes. C has
added that ability, or something close to it, with the C99 stdint.h types.
Of course, I happen to know the cpu mode and integer sizes that my C code is
being written for. I'm only going to compile my code for the correct integer
sizes. The problem is when my code is being compiled for a different mode
or processor that uses integers of different sizes by someone else.
Let's take x86. In 16-bit modes, its native sizes are 8-bits and 16-bits.
In 32-bit modes, its native sizes are 8-bits and 32-bits. Of course, you
can get 32-bits in 16-bit mode and 16-bits in 32-bit mode with overrides
after the 386 (or 486?). So, someone coding in C for x86 is likely to use
an 8-bit integer and whatever native integer larger than 8-bits is available
whenever they need more bits. I.e., the larger integer is not a fixed size
but is either 16-bits or 32-bits, whichever is available for that cpu mode.
The problem when only absolute sizes are available is that an absolute size
may or may not be needed for the code to work correctly, yet code must still
be generated for that size.
Let's say a programmer specifies int32 since only absolute sizes are
available. Let's also say they are compiling the code as 32-bits. What if
someone else is now compiling the code as 16-bit? Is a 32-bit integer
actually required for the code to work correctly, or not? In most cases,
16-bits will probably work instead of 32-bits. In rare cases, it won't.
However, since a fixed 32-bit size was specified, the 16-bit code generator
*must* generate 32-bit integers in the 16-bit code for all of those integers
... That's not good.
I.e., there should be a way to specify exact types when they are needed, but
also a way to let the compiler select best-fit integers for "portability"
...
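For what it's worth, C99's stdint.h attempts exactly that split: exact-width
types for when the size is required, and least-/fast-width types that let
the compiler pick a best fit. A sketch, with made-up variable names:

  #include <stdint.h>

  int32_t       exact;  /* exactly 32 bits: use when the code requires it */
  int_least16_t least;  /* at least 16 bits: the compiler may pick wider */
  int_fast16_t  fast;   /* the fastest type with at least 16 bits */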
> - numeric constants don't have "ambiguous" types
That may or may not be good depending on what they did to solve the problem.
It ensures the constant is a specific size, but that can result in other
issues:
1) size mismatch between comparison or assignment of a numeric type and a
numeric constant
2) increased use of casts to ensure types match in size
Having "implicit" sizes for constants, based on the size of the type they
are being assigned to or compared with, seems to be a better solution to me.
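That's roughly how C behaves now. A small sketch, with made-up names:

  #include <stdint.h>

  int16_t small = 1000;  /* 1000 has type int, but converts implicitly */
  uint8_t tiny  = 200;   /* same: the constant bends to the target type */
  /* with fixed-size constants, you'd cast instead: (int16_t)1000, etc. */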
> - break is assumed at the end of every case
That's bad. I don't want that!
IMO, fall-through is an important programming concept. Fall-through is
almost a necessity for justifying use of a switch() in the first place.
Without fall-through, you'd only need to use a switch() for readability, or
a large number of case statements ... Without fall-through, a switch() is
just nested if-thens. So, you might as well code it as such.
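For example, a sketch of the kind of deliberate fall-through I mean
(print_digits is a made-up helper):

  #include <stdio.h>

  /* Print the low 'digits' digits of 'value'. Each case deliberately
     falls through into the next smaller digit. */
  void print_digits(int value, int digits)
  {
      switch (digits) {
      case 3: printf("%d", value / 100 % 10);  /* falls through */
      case 2: printf("%d", value / 10 % 10);   /* falls through */
      case 1: printf("%d", value % 10);
      }
      putchar('\n');
  }

print_digits(456, 2) prints "56" precisely because case 2 falls into case 1.
With automatic breaks, you'd duplicate the printf's or write a loop.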
Yes, I understand this is an attempt to reduce coding errors by novices. So
too were 'void' and 'void *'. They prevented some errors, but they also
caused more problems than they prevented. So, it can be argued that they
were misguided. I suspect automatic breaks are misguided too.
I'm sure they probably eliminated the unstructured switch() that C supports
too, i.e., a single-statement switch() without a { } section, which
effectively acts as multiple goto's.
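E.g., this is legal C (a sketch):

  #include <stdio.h>

  /* The switch() body is a single labeled statement: no { } section. */
  void demo(int c)
  {
      switch (c)
      case 1: case 2: puts("one or two");
  }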
Did they eliminate pointers too - like Java?
Once they get done, they'll reinvent Pascal ... ;-) I.e., no power,
ultra-safe, no usefulness, etc.
> - ++ and -- are not operators, they are statements
These are insignificant today, but were once convenient. Unfortunately,
I've been using them by themselves for years due to ANSI C's sequence points
... I.e., their safe use within a larger expression is no longer guaranteed
since ANSI C went into effect.
They also complicate parsing by simple parsers since they don't follow the
same pattern as other operators, i.e., sitting in between their operands.
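The classic cases, for example, have been undefined since ANSI C (a sketch):

  void demo(void)
  {
      int i = 0;
      int a[4] = { 0 };

      i = i++;     /* undefined: i modified twice between sequence points */
      a[i] = i++;  /* undefined: i read and modified with no ordering */
  }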
> - fewer punctuators necessary, e.g. the semicolon and parens in if (IIRC)
I'm not sure what parens they could remove from C. Arguments and parameters
are the primary places they're used. They're used for casts and for
precedence of operations too ...
I'd think that they'd keep the semicolon since it's primarily used to mark
the end of a C statement or C declaration. It seems odd that they'd remove
it.
Except for for() and procedure calls, I think that commas can be eliminated
from C's syntax ... I rarely use them otherwise.
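For reference, a sketch of where commas actually land in C, separators
versus the comma operator:

  #include <stdio.h>

  int main(void)
  {
      int a = 1, b = 2;         /* separator: declarator list */
      int v[3] = { 3, 4, 5 };   /* separator: initializer list */

      for (a = 0, b = 9; a < b; a++, b--)  /* comma operator in for() */
          ;
      printf("%d %d %d\n", a, b, v[0]);    /* separator: argument list */
      return 0;
  }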
> - type conversions must be done explicitly when you're mixing types in
> expressions
More casts ... I don't see how that's a benefit.
More type constraints ... That's probably a problem.
Although many argue that C's type system is weak, there are many situations
where it gets in the way and is difficult to work around.
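In C today, the usual arithmetic conversions handle the mixing silently;
requiring explicit conversions means writing the second line below
everywhere (a sketch):

  void demo(void)
  {
      short s = 10;
      long  total = 0L;

      total = total + s;        /* C converts s implicitly */
      total = total + (long)s;  /* the explicit-conversion style */
  }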
> All of the above (along with other unmentioned features) makes the code
> easier to read and write and gives you fewer opportunities to shoot off
> your feet.
Pointers are one of the more powerful programming concepts, yet they allow
you to "shoot off your feet". So, what's better, being able to "shoot off
your feet" (too much power), or not being able to put shoes on your feet to
protect them from frostbite in the first place (too little power)?
Unfortunately, a significant part of C isn't actually part of C. It's the
abilities of the C pre-processor ... When was the last time you coded a
constant without using a #define?
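For the record, C does offer two alternatives, though neither has displaced
#define (names here are made up):

  #define BUF_SIZE 512            /* the usual pre-processor idiom */
  enum { MAX_USERS = 64 };        /* a true compile-time integer constant */
  static const int timeout = 30;  /* typed, but not a constant expression
                                     in C, so no good as a case label or
                                     file-scope array size */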
You mentioned that they attempt to ensure better matching of types. What
did they do about if() and switch()? I.e., they are "overloaded" to accept
int's, char's, long's, signed and unsigned ...
(See, there _is_ "overloading" in C. C had it first ... ;-)
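In C terms, a sketch of that "overloading":

  void demo(char c, unsigned short u, long l)
  {
      if (c) { }    /* if() accepts any scalar type */
      if (u) { }
      switch (l) {  /* switch() takes any integer type, after promotion */
      default: break;
      }
  }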
Rod Pemberton