https://docs.google.com/View?id=dcsvntt2_25wpjvbbhk
-- Scott
Yes, we noted that at the time. Some cynic suggested Eckel was just
re-positioning himself to sell more books and consulting to everyone
jumping on The Next Big Thing (XP and lightweight languages). Of course
he was also right in what he wrote, so we'll never know. :)
kt
--
http://www.stuckonalgebra.com
"The best Algebra tutorial program I have seen... in a class by itself."
Macworld
If you are writing a function to determine the maximum of two numbers
passed as arguments in a dynamically typed language, what is the normal
procedure used by Eckel and others to handle someone passing in
invalid values - such as a file handle for one variable and an array
for the other?
The normal procedure is to hit such a person over the head with a stick
and shout "FOO".
rg
Moreover, the functions returning the maximum may be able to work on
non-numbers, as long as they're comparable. What's more, there are
numbers that are NOT comparable by the operator you're thinking about!
So to implement your specifications, that function would have to be
implemented for example as:
(defmethod lessp ((x real) (y real)) (< x y))

(defmethod lessp ((x complex) (y complex))
  (or (< (real-part x) (real-part y))
      (and (= (real-part x) (real-part y))
           (< (imag-part x) (imag-part y)))))

(defun maximum (a b)
  (if (lessp a b) b a))
And then the client of that function could very well add methods:
(defmethod lessp ((x symbol) (y t)) (lessp (string x) y))
(defmethod lessp ((x t) (y symbol)) (lessp x (string y)))
(defmethod lessp ((x string) (y string)) (string< x y))
and call:
(maximum 'hello "WORLD") --> "WORLD"
and who are you to forbid it!?
--
__Pascal Bourguignon__ http://www.informatimago.com/
In C I can have a function maximum(int a, int b) that will always
work: it never blows up and never gives an invalid answer. If someone
tries to call it incorrectly, it is a compile error.
In a dynamically typed language maximum(a, b) can be called with
incorrect datatypes. Even if I make it so it can handle many types as
you did above, it could still be inadvertently called with a file
handle for a parameter or some other type not provided for. So do
Eckel and others, when they are writing their dynamically typed code,
advocate just letting the function blow up or give a bogus answer, or
do they check for valid types passed? If they are checking for valid
types it would seem that any benefits gained by not specifying type
are lost by checking for type. And if they don't check for type it
would seem that their code's error handling is poor.
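For concreteness, here is a small Python sketch (my own illustration, not
code from Eckel or Martin) of the "don't check, let it blow up" style the
question is asking about:

```python
def maximum(a, b):
    # No explicit type checks: '<' itself enforces comparability at runtime.
    return b if a < b else a

print(maximum(3, 7))            # 7
print(maximum("abc", "abd"))    # abd

try:
    maximum({}, [1, 2, 3])      # e.g. a dict and a list are not comparable
except TypeError as e:
    print("rejected at runtime:", e)
```

The invalid call is still rejected, just at runtime by the comparison
primitive rather than at compile time by a type checker.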
Another alternative is that you don't have a lot of experience with
dynamically typed languages.
The OP provided a link, behind which you can find even more links, and
they already answer all your questions. Reading them helps.
Pascal
--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
I don't see where either Bruce Eckel or Robert Martin address the
issue of what to do inside a function when they receive a variable of
an incorrect type. Martin claims that his unit testing has eliminated
the need for him to check for types. Apparently he is claiming he is
sure that his higher level code will never call maximum() with
incorrect types because he tested for all cases now and forever of
that occurring. But it appears he hasn't tested handling incorrect
types since he feels it won't happen. Or, if he has, how he handles
it isn't stated. Reading his and Eckel's paragraphs didn't
help me see how they handle invalid types inside a function.
Yes, you're right, you don't see how this issue is addressed. It's a
good idea to get some practice with a dynamically typed language, and
then you will see that it's actually not an issue. Stop guessing.
> in C I can have a function maximum(int a, int b) that will always
> work. Never blow up, and never give an invalid answer. If someone
> tries to call it incorrectly it is a compile error.
> In a dynamic typed language maximum(a, b) can be called with incorrect
> datatypes. Even if I make it so it can handle many types as you did
> above, it could still be inadvertantly called with a file handle for a
> parameter or some other type not provided for. So does Eckel and
> others, when they are writing their dynamically typed code advocate
> just letting the function blow up or give a bogus answer, or do they
> check for valid types passed? If they are checking for valid types it
> would seem that any benefits gained by not specifying type are lost by
> checking for type. And if they don't check for type it would seem that
> their code's error handling is poor.
The type of the parameters can be inferred from what is done with these
parameters.
In:
(defun m (a b) (< a b))
I don't need to specify the types of a and b, because I, my fellow
programmers, and the implementation can infer that the expected types
are those that are expected by <.
Since < is a generic function that accepts a wide range of types (from
fixnum to real, passing by ratios, floating-point numbers, bignums,
etc.), my function m is implicitly generic.
In a dynamically typed programming language, you are effectively
writing generic functions by default.
Why do you want to restrict your maximum function to integers modulo
2^W? Since you already have done the hard work of finding the greatest
of two values in an ordered set, why not reuse this hard work for ANY
ordered set?
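The same point in a quick Python sketch (my illustration, not from the
thread's links): one untyped definition is automatically generic over every
type that defines an ordering.

```python
from fractions import Fraction

def maximum(a, b):
    # Works for anything with an ordering, not just integers modulo 2^W
    return b if a < b else a

print(maximum(2, 6))                            # 6
print(maximum(Fraction(1, 3), Fraction(1, 4)))  # 1/3
print(maximum("hello", "WORLD"))                # hello (lexicographic order)
```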
In almost all instances, type specifications in statically typed
programming languages are too strong, and come too early. This is how
you crash rockets! (cf. Ariane 5).
--
__Pascal Bourguignon__
http://www.informatimago.com
They don't handle it inside a function – the system throws an exception
and you have a chance to handle it in a way you like (close the whole
system, or just ignore one user input...).
In a static system, if something like that happens you have a „core dump”
or bad results without warning :)
Dear Sir,
Please accept my unending awe and admiration, as this has to be the single
dumbest thing I've heard in my life; so, in fact, extraordinary in its
sheer stupidity and perversion of the natural order of the universe, that
until just a few minutes back I wouldn't have believed such utterance to
even be conceivable by an intelligent human being.
Thus remaining your humble and most obd't servant,
--
Pavel Lepin
How about something like
(defun mymax (predicate &rest args)
  (do-stuff-more-efficient-but-equivalent-to
   (first (sort (copy-list args) predicate))))

(defun mymin (predicate &rest args)
  (apply #'mymax (complement predicate) args))
?
Then you can define your own maxes and mins using curry or
(defun nummax (a b) (mymax #'< a b))
(defun foomax (a b) (mymax #'foo< a b))
...
and you could delegate argument type checking to the predicate.
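A rough Python analogue of this predicate-parameterised approach (my own
sketch, with hypothetical names, and a linear scan instead of sorting):

```python
import operator

def mymax(predicate, *args):
    # Equivalent to sorting args by the predicate and taking the last
    # element, but in a single linear pass.
    best = args[0]
    for x in args[1:]:
        if predicate(best, x):   # best is "less than" x under the predicate
            best = x
    return best

def mymin(predicate, *args):
    # min is just max under the complemented predicate
    return mymax(lambda a, b: not predicate(a, b), *args)

nummax = lambda a, b: mymax(operator.lt, a, b)

print(mymax(operator.lt, 3, 1, 4, 1, 5))   # 5
print(mymin(operator.lt, 3, 1, 4, 1, 5))   # 1
print(nummax(2, 6))                        # 6
```

As in the Lisp version, a bad argument type surfaces as an error inside
the predicate, not inside mymax itself.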
Norbert
OK, but sometimes it is handy to have the possibility to make compile-time
assertions which prevent you from committing easily avoidable simple mistakes.
For example being able to declare an integer variable r as being of "row-index-type"
and another, say c, as being of "column-index-type" such that
(aref matrix r c)
would pass compilation
but if you confused it with
(aref matrix c r)
some condition (error or warning) would be signalled at compile-time.
I'd consider such a possibility handy -- sometimes.
Norbert
Agreed. I actually don't see this issue in black and white terms; I've
written lots of Lisp, and I've written lots of code in statically typed
languages, and they all have advantages and disadvantages. In the end
it all comes back to my time: how much time does it take me to ship a
debugged system? Working in Lisp, sometimes I don't get immediate
feedback from the compiler that I've done something stupid, but this is
generally counterbalanced by the ease of interactive testing, that
frequently allows me to run a new piece of code several times in the
time it would have taken me to do a compile-and-link in, say, C++.
So while I agree with you that compiler warnings are sometimes handy,
and there are occasions, working in Lisp, that I would like to have more
of them(*), it really doesn't happen to me very often that the lack of
one is more than a minor problem.
(*) Lisp compilers generally do warn about some things, like passing the
wrong number of arguments to a function, or inconsistent spelling of the
name of a local variable. In my experience, these warnings cover a
substantial fraction of the stupid mistakes I actually make.
-- Scott
Ah, good. I would have been disappointed if my post hadn't kicked off a
discussion :)
To answer your question: what a function does with its arguments is to
pass them to other functions; at some point an argument must be passed
to some built-in Lisp primitive in order to do something with it. Lisp
primitives do their own runtime type checking, so if you try to add a
file handle to an array, for instance, the addition operator will signal
an error.
So the normal procedure is simply to let the Lisp primitives signal
errors in these cases. This actually works pretty well in practice.
Yes, sometimes it is not trivial to relate the error message back to the
coding mistake. But the popular statically-typed languages actually
have a similar problem in that (for example) they don't have different
static types for pointers that can be null and pointers that can't be
null. The consequence is that it's always possible for a coding mistake
to cause the code to dereference a null pointer, since this error is
checked for only at runtime. Tracking down such errors isn't always
fun, of course, but the point is that the vast majority of programmers
work in languages in which not all errors can be detected at compile time.
Although null pointer dereferences actually can be prevented statically
(languages in the ML family take an approach that permits this),
uncomputability considerations guarantee that there are always errors
that can be detected only at runtime. So the fact is, everybody lives
with runtime checking to some extent.
-- Scott
To be fair ... in every static typed language I know of (including C)
you have to do something to override the type system in order to pass
a value of the wrong type. Of course, the various languages differ
greatly in how easy, or hard, it is to do that.
George
Indeed. This is the functional way, vs. the OO way previously
presented.
LOLZ
that is a lie.
Compilation only makes sure that values provided at compilation-time
are of the right datatype.
What happens, though, is that in the real world pretty much all
computation depends on user-provided values at runtime. See where we
are heading?
this works at compilation time without warnings:
int m=numbermax( 2, 6 );
this too:
int a, b, m;
scanf( "%d", &a );
scanf( "%d", &b );
m=numbermax( a, b );
no compiler issues, but it will fail just as it would in Python if the
user provides "foo" and "bar" for a and b.
What do you do if you're feeling insecure and paranoid? Just what
dynamically typed languages do: add runtime checks. Unit tests are
great for asserting those.
Fact is: almost all user data from the external world comes into
programs as strings. No type system or compiler handles this fact all
that gracefully...
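In Python (the other group this is crossposted to), the usual pattern is to
validate once, at the boundary where the string enters the program; a
hedged sketch:

```python
def read_int(text):
    # Convert user-supplied text once, at the boundary; everything past
    # this point can assume it holds a real integer.
    try:
        return int(text.strip())
    except ValueError:
        raise ValueError("not an integer: %r" % text)

a = read_int("2")
b = read_int("6")
print(max(a, b))    # 6

try:
    read_int("foo")
except ValueError as e:
    print(e)
```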
> On Mon, 27 Sep 2010 16:27:02 +0200, Piotr Chamera
> <piotr_...@poczta.onet.pl> wrote:
>
>>On 2010-09-27 14:12, TheFlyingDutchman wrote:
>>>
>>> (...) Reading his and Eckel's paragraphs didn't
>>> help me see how they handle invalid types inside a function.
>>
>>They don't handle it inside a function – system throws exception and You
>>have a chance to handle it in a way You like (close whole system,
>>or just ignore one user input...).
>>
>>In static system if something like that happens You have „core dump”
>>or bad results without warning :)
>
> To be fair ... in every static typed language I know of (including C)
> you have to do something to override the type system in order to pass
> a value of the wrong type.
C is weakly typed. Moreover:
#include <stdio.h>
#include <stdarg.h>
int sum_int(int n,...){
  int i;
  int v=0;
  va_list ap;
  va_start(ap,n);
  for(i=0;i<n;i++){
    v+=va_arg(ap,int);
  }
  va_end(ap);
  return(v);
}
int main(){
  printf("%d\n",sum_int(3,1.2,"trois",4L));
  return(0);
}
> Of course, the various languages differ
> greatly in how easy, or hard, it is to do that.
Very easy in C.
That's where the "strong testing" alternative comes in: when the test
that hits that code runs, the program breaks. Fix it. The obvious
challenge is writing complete tests; the obvious claim being made is
that this is more cost-effective than wrestling with strong static
type checking compilers.
I would even go further.
Types are only part of the story. You may distinguish between integers
and floating points, fine. But what about distinguishing between
floating points representing lengths and floating points representing
volumes? Worse, what about distinguishing and converting floating
points representing lengths expressed in feet and floating points
representing lengths expressed in meters?
If you start with the mindset of static type checking, you will consider
that your types are checked and if the types at the interface of two
modules match, you'll think that everything's OK. And six months later
your Mars mission will crash.
On the other hand, with the dynamic typing mindset, you might even wrap
your values (of whatever numerical type) in a symbolic expression
mentioning the unit and perhaps other metadata, so that when the other
module receives it, it may notice (dynamically) that two values are not
of the same unit, but if compatible, it could (dynamically) convert into
the expected unit. Mission saved!
Never is a strong word. Implicit type conversion for the win!
maximum(sqrt(-1) , 100); /* oops */
If you change maximum to take floats, the compiler has to runtime check
that neither of them is a NaN. Smells like dynamic typing to me!
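The same trap is easy to demonstrate in Python, where NaN makes every
comparison false, so the result of max depends on argument order (a small
sketch of my own; note Python's math.sqrt(-1) raises, so the NaN is
constructed directly):

```python
nan = float("nan")     # math.sqrt(-1) would raise ValueError in Python

print(max(nan, 100))   # nan: 100 > nan is False, so the first argument survives
print(max(100, nan))   # 100: nan > 100 is also False

print(nan < 100, nan > 100, nan == nan)   # False False False
```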
-pete
In fairness, you could do this statically too, and without the consing
required by the dynamic approach.
-- Scott
I don't deny it. My point is that it's a question of mindset.
I don't think that this demonstrates an error _inside_ the maximum
function. Whatever (int) sqrt(-1) gets converted to, the maximum
function will appropriately compare it to 100 and determine which one
is higher. If someone told me that my maximum function (in this case)
was giving them an incorrect answer, I would print out what they were
passing in and show that it returned the correct higher value.
The scanf() family of functions is fine for everyday use, but not
robust enough for potentially hostile inputs. atoi() had to be
replaced by strtol(), but there's a need for a higher-level function
built on strtol().
I wrote a generic command-line parser once; however, it's almost
impossible to achieve something that is both usable and 100%
bulletproof.
Or simply use C++ etc. and use overloaded operators which pick the
correct algorithm....
--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c
> I'd like to design a language like this. If you add a quantity in
> inches to a quantity in centimetres you get a quantity in (say)
> metres. If you multiply them together you get an area, if you divide
> them you get a dimeionless scalar. If you divide a quantity in metres
> by a quantity in seconds you get a velocity, if you try to subtract
> them you get an error.
There are several existing systems which do this. The HP48 (and
descendants I expect) support "units" which are essentially dimensions.
I don't remember if it signals errors for incoherent dimensions.
Mathematica also has some units support, and it definitely does not
indicate an error: "1 Inch + 1 Second" is fine. There are probably
lots of other systems which do similar things.
"Malcolm McLean" <malcolm...@btinternet.com> wrote in message
news:1d6e115c-cada-46fc...@c10g2000yqh.googlegroups.com...
As you suggested in 'Varaibles with units' comp.programming Feb 16 2008?
[Yes with that spelling...]
I have a feeling that would quickly make programming impossible (if you
consider how many combinations of dimensions/units, and operators there
might be).
One approach I've used is to specify a dimension (ie. unit) only for
constant values, which are then immediately converted (at compile time) to a
standard unit:
a:=sin(60°) # becomes sin(1.047... radians)
d:=6 ins # becomes 152.4 mm
Here the standard units are radians, and mm. Every other calculation uses
implied units.
--
Bartc
On the other hand sqrt(4 inches^2) is quite well defined. The question
is whether to allow sqrt(1 inch). It means using rationals rather than
integers for unit superscripts.
(You can argue that you can get things like km^9s^-9g^3 even in a
simple units system. The difference is that these won't occur very
often in real programs, just when people are messing about with the
system, and we don't need to make messing about efficient or easy to
use).
> he problem is that if you allow expressions rather than terms then
> the experssions can get arbitrarily complex. sqrt(1 inch + 1 Second),
> for instance.
I can't imagine a context where 1 inch + 1 second would not be an
error, so this is a slightly odd example. Indeed I think that in
dimensional analysis summing (or comparing) things with different
dimensions is always an error.
>
> On the other hand sqrt(4 inches^2) is quite well defined. The question
> is whether to allow sqrt(1 inch). It means using rationals rather than
> integers for unit superscripts.
There's a large existing body of knowledge on dimensional analysis
(it's a very important tool for physics, for instance), and obviously
the answer is to do whatever it does. Raising to any power is fine, I
think (but transcendental functions, for instance, are never fine,
because they are equivalent to summing things with different
dimensions, which is obvious if you think about the Taylor expansion of
a transcendental function).
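A toy sketch of how such a checker might look in Python (entirely my own
illustration, with made-up names), using Fraction exponents so that sqrt is
closed over dimensions while summing across dimensions is rejected:

```python
from fractions import Fraction

class Quantity:
    def __init__(self, value, dims=None):
        # dims maps base-unit names to (possibly fractional) exponents
        self.value = value
        self.dims = {u: Fraction(e) for u, e in (dims or {}).items() if e}

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError("cannot sum quantities of different dimensions")
        return Quantity(self.value + other.value, self.dims)

    def sqrt(self):
        # Raising to a rational power just scales every exponent
        return Quantity(self.value ** 0.5,
                        {u: e / 2 for u, e in self.dims.items()})

side = Quantity(4.0, {"inch": 2}).sqrt()    # 2.0 with dims {inch: 1}
odd  = Quantity(1.0, {"inch": 1}).sqrt()    # dims {inch: 1/2} -- representable

try:
    Quantity(1.0, {"inch": 1}) + Quantity(1.0, {"s": 1})
except TypeError as e:
    print(e)
```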
--tim
CL can do this. There is code by Roman Cunis (dated 1991) called
Measures that does exactly this in the Right Way. It even comes with a
parser that lets you do spiffy things like this:
? (+ 1s 1hr)
1hr:0min:1s
? (/ 1hr 1s)
3600
? (* 1μg 1m/s2)
1nN
?
I have an updated version of his code that streamlines it and fixes a
couple of bugs if anyone is interested.
rg
It is cumbersome to do it statically, in the current Ada standard. Doing
it by run-time checks in overloaded operators is easier, but of course
has some run-time overhead. There are proposals to extend Ada a bit to
make a static check of physical units ("dimensions") simpler. See
http://www.ada-auth.org/cgi-bin/cvsweb.cgi/acs/ac-00184.txt?rev=1.3&raw=Y
and in particular the part where Edmond Schonberg explains a suggestion
for the GNAT Ada compiler.
> A mission failure is a failure of management. The Ariadne crash was.
Just a nit, the launcher is named "Ariane".
--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
. @ .
>George Neuner <gneu...@comcast.net> writes:
>
>> On Mon, 27 Sep 2010 16:27:02 +0200, Piotr Chamera
>> <piotr_...@poczta.onet.pl> wrote:
>>
>>>In static system if something like that happens You have „core dump”
>>>or bad results without warning :)
>>
>> To be fair ... in every static typed language I know of (including C)
>> you have to do something to override the type system in order to pass
>> a value of the wrong type.
>
>C is weakly typed.
But still statically typed.
>Moreover:
>
>#include <stdio.h>
>#include <stdarg.h>
>
>int sum_int(int n,...){
> int v;
> va_list ap;
> va_start(ap,n);
> for(i=0;i<n;i++){
> v+=va_arg(ap,int);
> }
> va_end(ap);
> return(v);
>}
>
>int main(){
> printf("%d\n",sum_int(3,1.2,"trois",4L));
> return(0);
>}
Yes, but just to be clear, this is not an example of *weak* typing.
The vararg/stdarg form technically is untyped ... it's up to the
function to assign type and meaning to arguments passed in the varlist
and determine whether they are valid.
>> Of course, the various languages differ
>> greatly in how easy, or hard, it is to do that.
>
>Very easy in C.
But not terribly difficult in a number of languages.
George
I'm definitely interested. I have to use something like that in my
current project...
> Ron Garret <rNOS...@flownet.com> writes:
>
>>
>> CL can do this. There is code by Roman Cunis (dated 1991) called
>> Measures that does exactly this in the Right Way. It even comes with
>> a parser that lets you do spiffy things like this:
>>
>> ? (+ 1s 1hr)
>> 1hr:0min:1s
>> ? (/ 1hr 1s)
>> 3600
>> ? (* 1μg 1m/s2)
>> 1nN
>> ?
>>
>> I have an updated version of his code that streamlines it and fixes a
>> couple of bugs if anyone is interested.
>
> I'm definitely interested. I have to use something like that in my
> current project...
Also interested!
Dan
Me too. Having it in a public repo (eg on github) would be nice.
Tamas
> I'd like to design a language like this. If you add a quantity in
> inches to a quantity in centimetres you get a quantity in (say)
> metres. If you multiply them together you get an area, if you divide
> them you get a dimeionless scalar. If you divide a quantity in metres
> by a quantity in seconds you get a velocity, if you try to subtract
> them you get an error.
Done in 1992.
See
<http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/syntax/measures/0.html>
citation at <http://portal.acm.org/citation.cfm?id=150168>
and my extension to it as part of the Loom system:
<http://www.isi.edu/isd/LOOM/documentation/loom4.0-release-notes.html#Units>
--
Thomas A. Russ, USC/Information Sciences Institute
OK, it's here for now:
http://www.flownet.com/ron/lisp/units.lisp
It's in kind of a half-baked state at the moment, but it does work.
You'll also need globals.lisp from that same directory. (See
http://rondam.blogspot.com/2009/08/global-variables-done-right.html
for the rationale behind it.)
NOTE: the copyright status of this code is somewhat dubious. The
original code that this is based on is here:
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/syntax/measures/0.html
It doesn't have a copyright notice that I could find.
rg
>I would say the dimensional checking is underrated. It must be
>complemented with a hard and fast rule about only using standard
>(SI) units internally.
>
>Oil output internal : m^3/sec
>Oil output printed: kbarrels/day
"barrel" is not an SI unit. And when speaking about oil there isn't
even a simple conversion.
42 US gallons ≈ 34.9723 imp gal ≈ 158.9873 L
[In case those marks don't render, they are meant to be the
double-tilde sign meaning "approximately equal".]
George
I didn't go as far as that, but:
$ cat test.can
database input 'canal.sqlite'
for i=link 'Braunston Turn' to '.*'
print 'It is ';i.distance into 'distance:%M';' miles (which is '+i.distance into 'distance:%K'+' km) to ';i.place2 into 'name:place'
end for i
$ canal test.can
It is 0.10 miles (which is 0.16 km) to London Road Bridge No 90
It is 0.08 miles (which is 0.13 km) to Bridge No 95
It is 0.19 miles (which is 0.30 km) to Braunston A45 Road Bridge No 91
--
Online waterways route planner | http://canalplan.eu
Plan trips, see photos, check facilities | http://canalplan.org.uk
He didn't say it was. Internal calculations are done in SI units (in
this case, m^3/sec); on output, the internal units can be converted to
whatever is convenient.
> And when speaking about oil there isn't
> even a simple conversion.
>
> 42 US gallons ≈ 34.9723 imp gal ≈ 158.9873 L
>
> [In case those marks don't render, they are meant to be the
> double-tilda sign meaning "approximately equal".]
There are multiple different kinds of "barrels", but "barrels of oil"
are (consistently, as far as I know) defined as 42 US liquid gallons.
A US liquid gallon is, by definition, 231 cubic inches; an inch
is, by definition, 0.0254 meter. So a barrel of oil is *exactly*
0.158987294928 m^3, and 1 m^3/sec is about 543.44 kbarrels/day.
(Please feel free to check my math.) That's admittedly a lot of
digits, but the conversion factors themselves are exact (any
approximation is imposed by the numeric representation you're using).
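Taking up the invitation to check the math, with exact rational arithmetic
(a quick Python sketch):

```python
from fractions import Fraction

inch = Fraction(254, 10000)    # 1 inch = 0.0254 m, by definition
gallon = 231 * inch ** 3       # US liquid gallon = 231 cubic inches
barrel = 42 * gallon           # barrel of oil = 42 US gallons

print(float(1000 * gallon))    # liters per US gallon (exactly 3.785411784)
print(float(barrel))           # m^3 per barrel (exactly 0.158987294928)
```

Every step is a product of exact rationals, so there is no rounding until
the final conversion to float.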
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Thanks,
-Antony
Well, I don't know how a C program can determine the type of an
argument, so how could it determine it is valid?
Technically (and this is the problem), the caller didn't have to
override anything to pass a value of the wrong type, in this case.
Granted the callee had to do something special, using varargs, to be
able to disable the type system there.
So C typing is weak, if not nonexistent (when disabled).
Compare with a dynamically, strongly typed language:
(defun sum (&rest args) ; no need for an argument count
(reduce (function +) args))
(defun main ()
(format t "~D~%" (sum 1.2 "trois" 4)))
CL-USER> (main)
*** - +: "trois" is not a number
% gcc -o sum-int sum-int.c && ./sum-int
19970680
There's also a non-zero probability that the result of the sum_int
function is what is expected (in a test).
There are already numerous libraries that help you with this kind of
thing in various languages; Python (you're crossposting to
comp.lang.python), for instance, has several, such as Unum, and
including one I've written but not yet released. It's not clear why one
would need this built into the language:
>>> print si
kg m s A K cd mol
>>> length = 3*si.in_ # underscore is needed since `in` is a keyword
>>> print length
3.0 in_
>>> lengthInCentimeters = length.convert(si.cm)
>>> print lengthInCentimeters
7.62 cm
>>> area = lengthInCentimeters*lengthInCentimeters
>>> print area
58.0644 cm**2
>>> biggerArea = 10.0*area
>>> ratio = area/biggerArea
>>> print ratio
0.1
>>> speed = (3.0*si.m)/(1.5*si.s)
>>> print speed
2.0 m/s
>>> ratio - speed
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "unity.py", line 218, in __sub__
converted = other.convert(self.strip())
File "unity.py", line 151, in convert
raise IncompatibleUnitsError, "%r and %r do not have compatible
units" % (self, other)
__main__.IncompatibleUnitsError: <Quantity @ 0x-4814a834 (2.0 m/s)> and
<Quantity @ 0x-4814a7d4 (1.0)> do not have compatible units
And everybody's favorite:
>>> print ((epsilon_0*mu_0)**-0.5).simplify()
299792458.011 m/s
>>> print c # floating point accuracy aside
299792458.0 m/s
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM/Y!M/Skype erikmaxfrancis
In Heaven all the interesting people are missing.
-- Friedrich Nietzsche
Actually, the speed of light is exactly 299792458.0 m/s by
definition. (The meter is now defined in terms of the speed of light
and the second; this was changed relatively recently.)
1 inch + 1 second = ~4.03e38 grams.
GORY DETAILS:
Tim Bradshaw <t...@tfeb.org> wrote:
+---------------
| Malcolm McLean said:
| > he problem is that if you allow expressions rather than terms then
| > the experssions can get arbitrarily complex. sqrt(1 inch + 1 Second),
| > for instance.
|
| I can't imagine a context where 1 inch + 1 second would not be an
| error, so this is a slightly odd example. Indeed I think that in
| dimensional analysis summing (or comparing) things with different
| dimensions is always an error.
+---------------
Unless you convert them to equivalent units first. For example, in
relativistic or cosmological physics, one often uses a units basis
wherein (almost) everything is scaled to "1":
http://en.wikipedia.org/wiki/Natural_units
When you set c = 1, then:
Einstein's equation E = mc² can be rewritten in Planck units as E = m.
This equation means "The rest-energy of a particle, measured in Planck
units of energy, equals the rest-mass of a particle, measured in
Planck units of mass."
See also:
http://en.wikipedia.org/wiki/Planck_units
...
The constants that Planck units, by definition, normalize to 1 are the:
* Gravitational constant, G;
* Reduced Planck constant, h-bar; [h/(2*pi)]
* Speed of light in a vacuum, c;
* Coulomb constant, 1/(4*pi*epsilon_0) (sometimes k_e or k);
* Boltzmann's constant, k_B (sometimes k).
This sometimes leads people to do things that would appear sloppy
or even flat-out wrong in MKS or CGS units, such as expressing mass
in terms of length:
Consider the equation A=1e10 in Planck units. If A represents a
length, then the equation means A=1.6e-25 meters. If A represents
a mass, then the equation means A=220 kilograms. ...
In fact, natural units are especially useful when this ambiguity
is *deliberate*: For example, in special relativity space and time
are so closely related that it can be useful to not specify whether
a variable represents a distance or a time.
So it is that we find that the mass of the Sun is 1.48 km or 4.93 µs, see:
http://en.wikipedia.org/wiki/Solar_mass#Related_units
In this limited sense, then, one could convert both 1 inch and 1 second
to masses[1], and *then* add them, hence:
1 inch + 1 second = ~4.03e38 grams.
;-} ;-}
-Rob
[1] 1 inch is "only" ~3.41e28 g, whereas 1 second is ~4.03e38 g,
so the latter completely dominates in the sum.
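For the curious, the conversion uses the geometrized-units factors
m = c²L/G for lengths and m = c³t/G for times; a rough Python check of the
figures above (constants approximate):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (approximate)
c = 299792458.0     # speed of light, m/s (exact by definition)

def length_to_grams(length_m):
    # geometrized units: a length L corresponds to a mass c^2 L / G
    return c ** 2 * length_m / G * 1000.0   # kg -> g

def time_to_grams(time_s):
    # a time t corresponds to a mass c^3 t / G
    return c ** 3 * time_s / G * 1000.0

print("1 inch   -> %.2e g" % length_to_grams(0.0254))   # ~3.4e28 g
print("1 second -> %.2e g" % time_to_grams(1.0))        # ~4.0e38 g
```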
-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
>George Neuner <gneu...@comcast.net> writes:
>> On 28 Sep 2010 12:42:40 GMT, Albert van der Horst
>> <alb...@spenarnc.xs4all.nl> wrote:
>>>I would say the dimensional checking is underrated. It must be
>>>complemented with a hard and fast rule about only using standard
>>>(SI) units internally.
>>>
>>>Oil output internal : m^3/sec
>>>Oil output printed: kbarrels/day
>>
>> "barrel" is not an SI unit.
>
>He didn't say it was. Internal calculations are done in SI units (in
>this case, m^3/sec); on output, the internal units can be converted to
>whatever is convenient.
That's true. But it is a situation where the conversion to SI units
loses precision and therefore probably shouldn't be done.
>
>> And when speaking about oil there isn't
>> even a simple conversion.
>>
>> 42 US gallons ≈ 34.9723 imp gal ≈ 158.9873 L
>>
>> [In case those marks don't render, they are meant to be the
>> double-tilda sign meaning "approximately equal".]
>
>There are multiple different kinds of "barrels", but "barrels of oil"
>are (consistently, as far as I know) defined as 42 US liquid gallons.
>A US liquid gallon is, by definition, 231 cubic inches; an inch
>is, by definition, 0.0254 meter. So a barrel of oil is *exactly*
>0.158987294928 m^3, and 1 m^3/sec is about 543.44
>kbarrels/day. (Please feel free to check my math.) That's
>admittedly a lot of digits, but there's no need for approximations
>(unless they're imposed by the numeric representation you're using).
I don't care to check it ... the fact that the SI unit involves 12
decimal places whereas the imperial unit involves 3 tells me the
conversion probably shouldn't be done in a program that wants
accuracy.
George
I know. Hence why I wrote the comment "floating point accuracy aside"
when printing it.
--
Erik Max Francis && m...@alcyone.com && http://www.alcyone.com/max/
San Jose, CA, USA && 37 18 N 121 57 W && AIM/Y!M/Skype erikmaxfrancis
If the past sits in judgment on the present, the future will be lost.
-- Winston Churchill
>> >>> print c # floating point accuracy aside
>> 299792458.0 m/s
>
> Actually, the speed of light is exactly 299792458.0 m/s by
> definition.
Yes, but just in vacuum.
Greetings,
Torsten
--
http://www.dddbl.de - a database layer that abstracts working with 8
different database systems, separates queries from applications, and
can automatically evaluate query results.
Because perhaps you're thinking that oil is sent over the oceans, and
sold retail in barrels of 42 gallons?
Actually, when I buy oil, it's from a pump that's graduated in liters!
It comes from trucks with cisterns containing 24 m³.
And these trucks get it from reservoirs of 23,850 m³.
"Tankers move approximately 2,000,000,000 metric tons" says the English
Wikipedia page...
Now perhaps it all depends on whether you buy your oil from Total or
from Texaco, but in my opinion, you're forgetting something: the last
drop. You never get exactly 42 gallons of oil, there's always a little
drop more or less, so what you get is perhaps 158.987 liter or
41.9999221 US gallons, or even 158.98 liter = 41.9980729 US gallons,
where you need more significant digits.
> Unless you convert them to equivalent units first. For example, in
> relativistic or cosmological physics, one often uses a units basis
> wherein (almost) everything is scaled to "1":
Heh. I spent a bunch of time doing GR so I'm used to these. Although
natural units are obviously convenient for doing calculations, I often
found that when I was getting answers that were obviously nonsensical,
putting the units back in would uncover dimension errors I'd made.
And even that pales in comparison to the expansion and contraction of
petroleum products with temperature. Compensation to standard temp is
required in some jurisdictions but not in others...
Ok. I took the comment to be an indication that the figure was
subject to floating point accuracy concerns; in fact you meant just
the opposite.
> On Tue, 28 Sep 2010 12:15:07 -0700, Keith Thompson <ks...@mib.org>
> wrote:
> >He didn't say it was. Internal calculations are done in SI units (in
> >this case, m^3/sec); on output, the internal units can be converted to
> >whatever is convenient.
>
> That's true. But it is a situation where the conversion to SI units
> loses precision and therefore probably shouldn't be done.
I suppose that one has to choose between two fundamental designs for any
computational system of units. One can either store the results
internally in a canonical form, which generally means an internal
representation in SI units. Then all calculations are performed using
the internal units representation, and conversion happens only on input or
output.
Or one can store the values in their original input form, and perform
conversions on the fly during calculations. For calculations one will
still need to have some canonical representation for cases where the
result value doesn't have a preferred unit provided. For internal
calculations this will often be the case.
Now whether one will necessarily have a loss of precision depends on
whether the conversion factors are exact or approximations. As long as
the factors are exact, one can have the internal representation be exact
as well. One method would be to use something like the Common Lisp
rational numbers or the GNU MP library.
And a representation where one preserves the "preferred" unit for
display purposes based on the original data as entered is also nice.
Roman Cunis' Common Lisp library does that, and with the use of rational
numbers for storing values and conversion factors allows one to do nice
things like make sure that
30mph * 3h = 90mi
even when the internal representation is in SI units (m/s, s, m).
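A toy version of that idea (my own sketch in Python, not Roman Cunis' library) using exact rationals, with 1 mi = 1609.344 m and SI units internally:

```python
from fractions import Fraction

MILE = Fraction(1609344, 1000)   # 1 mile = 1609.344 m, exactly
HOUR = 3600                      # 1 hour = 3600 s

# Store everything internally in SI units (m/s and s)...
speed = 30 * MILE / HOUR         # 30 mph as an exact value in m/s
time = 3 * HOUR                  # 3 h in seconds
distance_m = speed * time        # internal result in meters

# ...and convert back to the preferred display unit only on output:
assert distance_m / MILE == 90   # 30 mph * 3 h = 90 mi, exactly
```

Because the conversion factors are exact rationals, no precision is lost by round-tripping through the SI representation.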
No. I'm just reacting to the "significant figures" issue. Real
world issues like US vs Eurozone and measurement error aside - and
without implying anyone here - many people seem to forget that
multiplying significant figures doesn't add them, and results to 12
decimal places are not necessarily any more accurate than results to 2
decimal places.
It makes sense to break macro barrel into micro units only when
necessary. When a refinery purchases 500,000 barrels, it is charged a
barrel price, not some multiple of gallon or liter price and
regardless of drop over/under. The refinery's process is continuous
and it needs a delivery if it has less than 20,000 barrels - so the
current reserve figure of 174,092 barrels is as accurate as is needed
(they need to order by tomorrow because delivery will take 10 days).
OTOH, because the refinery sells product to commercial vendors of
gasoline/petrol and heating oil in gallons or liters, it does make
sense to track inventory and sales in (large multiples of) those
units.
Similarly, converting everything to m^3 simply because you can does not
make sense. When talking about the natural gas reserve of the United
States, the figures are given in km^3 - a few thousand m^3 either way is
irrelevant.
George
I disagree with your conclusion. Sure, the data was textual when it
was initially read by the program, but that should only be relevant to
the input processing code. The data is likely converted to some
internal representation immediately after it is read and validated,
and in a sanely-designed program, it maintains this representation
throughout its life time. If the structure of some data needs to
change during development, the compiler of a statically-typed language
will automatically tell you about any client code that was not updated
to account for the change. Dynamically typed languages do not provide
this assurance.
This is a red herring. You don't have to invoke run-time input to
demonstrate bugs in a statically typed language that are not caught by
the compiler. For example:
[ron@mighty:~]$ cat foo.c
#include <stdio.h>
int maximum(int a, int b) {
    return (a > b ? a : b);
}

int foo(int x) { return 9223372036854775807+x; }

int main () {
    printf("%d\n", maximum(foo(1), 1));
    return 0;
}
[ron@mighty:~]$ gcc -Wall foo.c
[ron@mighty:~]$ ./a.out
1
Even simple arithmetic is Turing-complete, so catching all type-related
errors at compile time would entail solving the halting problem.
rg
In short, static typing doesn't solve all conceivable problems.
We are all aware that there is no perfect software development process
or tool set. I'm interested in minimizing the number of problems I
run into during development, and the number of bugs that are in the
finished product. My opinion is that static typed languages are
better at this for large projects, for the reasons I stated in my
previous post.
More specifically, the claim made above:
> in C I can have a function maximum(int a, int b) that will always
> work. Never blow up, and never give an invalid answer.
is false. And it is not necessary to invoke the vagaries of run-time
input to demonstrate that it is false.
> We are all aware that there is no perfect software development process
> or tool set. I'm interested in minimizing the number of problems I
> run into during development, and the number of bugs that are in the
> finished product. My opinion is that static typed languages are
> better at this for large projects, for the reasons I stated in my
> previous post.
More power to you. What are you doing here on cll then?
rg
But the above maximum() function does exactly that. The program's
behavior happens to be undefined or implementation-defined for reasons
unrelated to the maximum() function.
Depending on the range of type int on the given system, either the
behavior of the addition in foo() is undefined (because it overflows),
or the implicit conversion of the result to int either yields an
implementation-defined result or (in C99) raises an
implementation-defined signal; the latter can lead to undefined
behavior.
Since 9223372036854775807 is 2**63-1, what *typically* happens is that
the addition wraps around and the conversion to int yields the value 0,
but the C language doesn't require that particular result. You then
call maximum with arguments 0 and 1, and it quite correctly returns 1.
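That typical behavior can be simulated directly: assume two's-complement wraparound for the 64-bit addition and keep-the-low-bits narrowing for the conversion to a 32-bit int (a sketch; the helper names are mine, and real C guarantees neither behavior, since signed overflow is undefined):

```python
def wrap64(n):
    """Reduce n to a signed 64-bit two's-complement value."""
    n &= (1 << 64) - 1
    return n - (1 << 64) if n >= (1 << 63) else n

def to_int32(n):
    """Simulate the typical (implementation-defined) long long -> int conversion."""
    n &= (1 << 32) - 1
    return n - (1 << 32) if n >= (1 << 31) else n

added = wrap64(9223372036854775807 + 1)   # the overflowing addition in foo(1)...
assert added == -2**63                    # ...typically wraps to INT64_MIN
arg = to_int32(added)                     # narrowing keeps the low 32 bits
assert arg == 0
assert max(arg, 1) == 1                   # so maximum(foo(1), 1) prints 1
```

Every step after the overflow is unremarkable; the surprise was manufactured before maximum() was ever entered.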
>> We are all aware that there is no perfect software development process
>> or tool set. I'm interested in minimizing the number of problems I
>> run into during development, and the number of bugs that are in the
>> finished product. My opinion is that static typed languages are
>> better at this for large projects, for the reasons I stated in my
>> previous post.
>
> More power to you. What are you doing here on cll then?
This thread is cross-posted to several newsgroups, including
comp.lang.c.
> In short, static typing doesn't solve all conceivable problems.
>
> We are all aware that there is no perfect software development process
> or tool set. I'm interested in minimizing the number of problems I
> run into during development, and the number of bugs that are in the
> finished product. My opinion is that static typed languages are
> better at this for large projects, for the reasons I stated in my
> previous post.
Our experience is that a garbage collector and native bignums are much
more important to minimize the number of problems we run into during
development and the number of bugs that are in the finished products.
This all hinges on what you consider to be "a function maximum(int a,
int b) that ... always work[s] ... [and] never give[s] an invalid
answer." But if you don't consider an incorrect answer (according to
the rules of arithmetic) to be an invalid answer then the claim becomes
vacuous. You could simply ignore the arguments and return 0, and that
would meet the criteria.
If you try to refine this claim so that it is both correct and
non-vacuous you will find that static typing does not do nearly as much
for you as most of its adherents think it does.
> >> We are all aware that there is no perfect software development process
> >> or tool set. I'm interested in minimizing the number of problems I
> >> run into during development, and the number of bugs that are in the
> >> finished product. My opinion is that static typed languages are
> >> better at this for large projects, for the reasons I stated in my
> >> previous post.
> >
> > More power to you. What are you doing here on cll then?
>
> This thread is cross-posted to several newsgroups, including
> comp.lang.c.
Ah, so it is. My bad.
rg
This thread is massively cross-posted.
int maximum(int a, int b) { return a > b ? a : b; }
> But if you don't consider an incorrect answer (according to
> the rules of arithmetic) to be an invalid answer then the claim becomes
> vacuous. You could simply ignore the arguments and return 0, and that
> would meet the criteria.
I don't believe it's possible in any language to write a maximum()
function that returns a correct result *when given incorrect argument
values*.
The program (assuming a typical implementation) calls maximum() with
arguments 0 and 1. maximum() returns 1. It works. The problem
is elsewhere in the program.
(And on a hypothetical system with INT_MAX >= 9223372036854775808,
the program's entire behavior is well defined and mathematically
correct. C requires INT_MAX >= 32767; it can be as large as the
implementation chooses. In practice, the largest value I've ever
seen for INT_MAX is 9223372036854775807.)
> If you try to refine this claim so that it is both correct and
> non-vacuous you will find that static typing does not do nearly as much
> for you as most of its adherents think it does.
Speaking only for myself, I've never claimed that static typing solves
all conceivable problems. My point is only about this specific example
of a maximum() function.
[...]
OK. You finished your post with a reference to the halting problem,
which does not help to bolster any practical argument. That is why I
summarized your post in the manner I did.
I agree that static typed languages do not prevent these types of
overflow errors.
That the problem is "elsewhere in the program" ought to be small
comfort. But very well, try this instead:
[ron@mighty:~]$ cat foo.c
#include <stdio.h>
int maximum(int a, int b) { return a > b ? a : b; }
int main() {
    long x = 8589934592;
    printf("Max of %ld and 1 is %d\n", x, maximum(x,1));
    return 0;
}
[ron@mighty:~]$ gcc -Wall foo.c
[ron@mighty:~]$ ./a.out
Max of 8589934592 and 1 is 1
It is, perhaps, but it's also an important technical point: You CAN write
correct code for such a thing.
> int maximum(int a, int b) { return a > b ? a : b; }
> int main() {
> long x = 8589934592;
> printf("Max of %ld and 1 is %d\n", x, maximum(x,1));
You invoked implementation-defined behavior here by calling maximum() with
a value which was outside the range. The defined behavior is that the
arguments are converted to the given type, namely int. The conversion
is implementation-defined and could include yielding an implementation-defined
signal which aborts execution.
Again, the maximum() function is 100% correct -- your call of it is incorrect.
You didn't pass it the right sort of data. That's your problem.
(And no, the lack of a diagnostic doesn't necessarily prove anything; see
the gcc documentation for details of what it does when converting an out
of range value into a signed type, it may well have done exactly what it
is defined to do.)
-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet...@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
I am not speaking for my employer, although they do rent some of my opinions.
I don't claim that it's comforting, merely that it's true.
> But very well, try this instead:
>
> [ron@mighty:~]$ cat foo.c
> #include <stdio.h>
>
> int maximum(int a, int b) { return a > b ? a : b; }
>
> int main() {
> long x = 8589934592;
> printf("Max of %ld and 1 is %d\n", x, maximum(x,1));
> return 0;
> }
> [ron@mighty:~]$ gcc -Wall foo.c
> [ron@mighty:~]$ ./a.out
> Max of 8589934592 and 1 is 1
That exhibits a very similar problem.
8589934592 is 2**33.
Given the output you got, I presume your system has 32-bit int and
64-bit long. The call maximum(x, 1) implicitly converts the long
value 8589934592 to int. The result is implementation-defined,
but typically 0. So maximum() is called with arguments of 0 and 1,
as you could see by adding a printf call to maximum().
Even here, maximum() did exactly what was asked of it.
I'll grant you that having a conversion from a larger type to a smaller
type quietly discard high-order bits is unfriendly. But it matches the
behavior of most CPUs.
Here's another example:
#include <stdio.h>
int maximum(int a, int b) { return a > b ? a : b; }
int main(void) {
    double x = 1.8;
    printf("Max of %f and 1 is %d\n", x, maximum(x, 1));
    return 0;
}
Output:
Max of 1.800000 and 1 is 1
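Here the damage is done by the implicit double-to-int conversion at the call site: C truncates toward zero, so maximum() receives 1 and 1 and returns a correct answer for the arguments it actually got. A quick simulation (the helper name is mine):

```python
import math

def c_double_to_int(x):
    """Simulate C's double -> int conversion: truncation toward zero."""
    return math.trunc(x)

arg = c_double_to_int(1.8)
assert arg == 1               # 1.8 becomes 1 before maximum() is entered
assert max(arg, 1) == 1       # hence "Max of 1.800000 and 1 is 1"
```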
Note that the mistake can be diagnosed:
lint /tmp/u.c -m64 -errchk=all
(7) warning: passing 64-bit integer arg, expecting 32-bit integer:
maximum(arg 1)
--
Ian Collins
Of course. Computers always do only exactly what you ask of them. On
this view there is, by definition, no such thing as a bug, only
specifications that don't correspond to one's intentions.
Unfortunately, correspondence to intentions is the thing that actually
matters when writing code.
> I'll grant you that having a conversion from a larger type to a smaller
> type quietly discard high-order bits is unfriendly.
"Unfriendly" is not the adjective that I would choose to describe this
behavior.
There is a whole hierarchy of this sort of "unfriendly" behavior, some
of which can be caught at compile time using a corresponding hierarchy
of ever more sophisticated tools. But sooner or later if you are using
Turing-complete operations you will encounter the halting problem, at
which point your compile-time tools will fail (cf. the Collatz
problem).
I'm not saying one should not use compile-time tools, only that one
should not rely on them. "Compiling without errors" is not -- and
cannot ever be -- a synonym for "bug-free."
rg
f00f.
That said... I think you're missing Keith's point.
> Unfortunately, correspondence to intentions is the thing that actually
> matters when writing code.
Yes. Nonetheless, the maximum() function does exactly what it is intended
to do *with the inputs it receives*. The failure is outside the function;
it did the right thing with the data actually passed to it, the problem
was a user misunderstanding as to what data were being passed to it.
So there's a bug -- there's code which does not do what it was intended
to do. However, that bug is in the caller, not in the maximum()
function.
This is an important distinction -- it means we can write a function
which performs that function reliably. Now we just need to figure out
how to call it with valid data... :)
Which is why we all have run-time tools called unit tests, don't we?
--
Ian Collins
That argument can be made for dynamic languages as well. If you write in
a dynamic language (e.g. Python):
def maximum(a, b):
    return a if a > b else b
The dynamic language's version of maximum() function is 100% correct --
if you passed an uncomparable object, instead of a number, your call of
it is incorrect; you just didn't pass the right sort of data. And that's
your problem as a caller.
In fact, since Python's integers are infinite precision (bounded only by
available memory), in practice Python's version of maximum() has less
chance of producing an erroneous result.
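That claim is easy to demonstrate: the very calls that went astray in the C examples come through unscathed, because the arguments are never silently narrowed. A sketch:

```python
def maximum(a, b):
    return a if a > b else b

# The values that C's conversions mangled survive intact here:
assert maximum(8589934592, 1) == 8589934592            # 2**33, no truncation
assert maximum(9223372036854775807 + 1, 1) == 2**63    # no overflow either

# Pass genuinely uncomparable arguments, though, and it raises TypeError
# at run time rather than returning a wrong answer:
try:
    maximum(3, "three")
except TypeError:
    pass
```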
The /most/ correct version of maximum() function is probably one written
in Haskell as:
maximum :: Integer -> Integer -> Integer
maximum a b = if a > b then a else b
Integer in Haskell has infinite precision (like Python's int, only
bounded by memory), but Haskell also has static type checking, so you
can't pass just any arbitrary objects.
But even then, it's still not 100% correct. If you pass really large
values that exhaust the memory, maximum() could still produce an
unwanted result.
The second problem is that Haskell also has Int, the bounded integer,
and if a previous calculation in Int overflowed, you can still get an
incorrect result. In practice, a type-agnostic language with *mandatory*
infinite precision arithmetic wins in terms of correctness. Any language
in which infinite precision arithmetic is merely optional can always
produce an erroneous result.
Anyone can dream of a 100% correct program; but anyone who believes they
can write one is just a dreamer. In reality, we don't usually need a
100% correct program; we just need a program that runs correctly enough
most of the time that the 0.0000001% chance of producing an erroneous
result becomes irrelevant.
In summary, in this particular case with maximum() function, static
checking does not help in producing the most correct code; if you need
to ensure the highest correctness, you must use a language with
*mandatory* infinite precision integers.
Of course there's such a thing as a bug.
This version of maximum:
int maximum(int a, int b) {
    return a > b ? a : a;
}
has a bug. This version:
int maximum(int a, int b) {
    return a > b ? a : b;
}
I would argue, does not. The fact that it might be included in a
buggy program does not mean that it is itself buggy.
[...]
> I'm not saying one should not use compile-time tools, only that one
> should not rely on them. "Compiling without errors" is not -- and
> cannot ever be -- a synonym for "bug-free."
Agreed. (Though C does make it notoriously easy to sneak buggy code
past the compiler.)
"in C I can have a function maximum(int a, int b) that will always
work. Never blow up, and never give an invalid answer. "
Dynamic typed languages like Python fail in this case on "Never blows
up".
> > I'm not saying one should not use compile-time tools, only that one
> > should not rely on them. "Compiling without errors" is not -- and
> > cannot ever be -- a synonym for "bug-free."
>
> Agreed. (Though C does make it notoriously easy to sneak buggy code
> past the compiler.)
Let's just leave it at that then.
rg
> On 2010-09-30, RG <rNOS...@flownet.com> wrote:
> > Of course. Computers always do only exactly what you ask of them. On
> > this view there is, by definition, no such thing as a bug, only
> > specifications that don't correspond to one's intentions.
>
> f00f.
>
> That said... I think you're missing Keith's point.
>
> > Unfortunately, correspondence to intentions is the thing that actually
> > matters when writing code.
>
> Yes. Nonetheless, the maximum() function does exactly what it is intended
> to do *with the inputs it receives*. The failure is outside the function;
> it did the right thing with the data actually passed to it, the problem
> was a user misunderstanding as to what data were being passed to it.
>
> So there's a bug -- there's code which does not do what it was intended
> to do. However, that bug is in the caller, not in the maximum()
> function.
>
> This is an important distinction -- it means we can write a function
> which performs that function reliably. Now we just need to figure out
> how to call it with valid data... :)
We lost some important context somewhere along the line:
> > > in C I can have a function maximum(int a, int b) that will always
> > > work. Never blow up, and never give an invalid answer. If someone
> > > tries to call it incorrectly it is a compile error.
Please take note of the second sentence.
One way or another, this claim is plainly false. The point I was trying
to make is not so much that the claim is false (someone else was already
doing that), but that it can be demonstrated to be false without having
to rely on any run-time input.
rg
But you have to know a lot about the language to know that there's a
problem. You cannot sensibly test your max function on every
combination of (even int) input which it's designed for (and, of course,
it works for those).
--
Online waterways route planner | http://canalplan.eu
Plan trips, see photos, check facilities | http://canalplan.org.uk
Or using the new suffix return syntax in C++0x. Something like
template <typename T0, typename T1>
auto maximum(T0 a, T1 b) -> decltype(a > b ? a : b) { return a > b ? a : b; }
Where the return type is deduced at compile time.
--
Ian Collins
The second sentence is not disproved by a cast from one datatype to
another (which changes the value) that happens before maximum() is
called.
int maximum(int a, int b);
int foo() {
    int (*barf)() = maximum;
    return barf(3);
}
This compiles fine for me. Where is the cast? Where is the error message?
Are you saying barf(3) doesn't call maximum?
Indeed. This is generic programming. And it happens that in Lisp (and
I assume in languages such as Python), since types are not checked at
compilation time, all the functions you write are always generic
functions.
In particular, the property "arguments are not comparable" is not
something that can be determined at compilation time, since the program
may add a compare method for the given argument at run-time (if the
comparison operator used is a generic function).
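Python behaves the same way: comparability is not a static property of the arguments, because a client can bolt on a comparison method at run time. A sketch (the symbol-like Sym class is my own invention):

```python
def maximum(a, b):
    return b if a < b else a

class Sym:
    """A symbol-like object with no ordering defined initially."""
    def __init__(self, name):
        self.name = name

# At this point maximum(Sym("hello"), "world") would raise TypeError.
# A client adds the comparison behavior at run time:
Sym.__lt__ = lambda self, other: self.name < str(other)

# Now the previously "uncomparable" call works:
assert maximum(Sym("hello"), "world") == "world"
```

Whether the arguments are comparable simply cannot be decided before the program runs.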
You can't have it both ways. Either I am calling it incorrectly, in
which case I should get a compiler error, or I am calling it correctly,
and I should get the right answer. That I got neither does in fact
falsify the claim. The only way out of this is to say that
maximum(8589934592, 1) returning 1 is in fact "correct", in which case
we'll just have to agree to disagree.
rg
With Tiny C on my system, your code does not cause maximum to give an
incorrect value, or to blow up:
#include <stdio.h>

int maximum(int a, int b)
{
    printf("entering maximum %d %d\n",a,b);
    if ( a > b )
        return a;
    else
        return b;
}

int foo()
{
    int (*barf)() = maximum;
    return barf(3);
}

int main (int argc, char *argv[])
{
    printf("maximum is %d\n",foo());
}
------------- output -----------------------------------
entering maximum 3 4198400
maximum is 4198400
How do you define "Never blows up"?
Personally, I'd consider maximum(8589934592, 1) returning 1 as a blow
up, and of the worst kind since it passes silently.
I think we have to agree to disagree, because I don't see the lack of
a compiler error at step 2 as a problem with the maximum() function.
They don't "blow up". They may throw an exception, on which you can act.
You make it sound like a core dump, which it isn't.
Pascal
--
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
> But if you don't consider an incorrect answer (according to
> the rules of arithmetic)
[Note followed up only to CLL, I assume Ron will still see it]
This has quite an interesting connection to another recent thread on
CLL. What's happening in both cases is that people are defining //what
the implementation is defined to do// to be correct, even though that
disagrees with the laws of arithmetic.
I'm not a psychologist and I don't even play one on usenet, but I think
this is quite interesting. My theory is that what is happening is that
people are slowly losing track of the fact that there might be
real-world problems to be solved, where the ordinary laws of arithmetic
might not be something you can define how you like, for instance. My
theory, in fact, is that this is rather a widely-spread thing, where,
for instance, a very large number of people are beginning to mistake
playing with the internet for the real world.
Try a language with stricter type checking:
CC /tmp/u.c
"/tmp/u.c", line 7: Error: Cannot use int(*)(int,int) to initialize
int(*)().
"/tmp/u.c", line 8: Error: Too many arguments in call to "int(*)()".
--
Ian Collins
Never causes execution to halt.
I think a key reason for the big rise in the popularity of interpreted
languages is that when execution halts, they normally give a call
stack and usually a good reason why things couldn't continue -- as
opposed to compiled languages, which present you with a blank screen
and force you to fire up a debugger (or, much worse, look at a core
dump) to try to discern the information an interpreter presents to
you immediately.
>
> Personally, I'd consider maximum(8589934592, 1) returning 1 as a blow
> up, and of the worst kind since it passes silently.
If I had to choose between "blow up" or "invalid answer" I would pick
"invalid answer".
In this example RG is passing a long literal greater than INT_MAX to a
function that takes an int and the compiler apparently didn't give a
warning about the change in value as it created the cast to an int,
even with the option -Wall (all warnings). I think it's legitimate to
consider that an option for a warning/error on this condition should
be available. As far as the compiler generating code that checks for a
change in value at runtime when a number is cast to a smaller data
type, I think that's also a legitimate request for a C compiler option
(in addition to other runtime check options like array subscript out
of bounds).
I was trying to give an example of a function which would never throw
an exception under any conditions which I think is unique to a static
typed language. But there are admittedly a limited number of functions
that can be written that meet that condition.
One of the benefits that I see in a static typed language (and there
are definitely advantages to dynamic typed languages as in Eckel's
example) is that I feel it is easier to try and understand someone
else's code if types are declared - particularly for function
parameters.
It's fine that you feel that way, and you shouldn't feel discouraged to
continue working in the style that suits you most.
But don't make it sound like other approaches are not valid. Both
dynamic typing and static typing have advantages and disadvantages, both
suit different personalities better, and both can be used to produce
correct working code using the right tools and methodologies.
<snip>
> > Fact is: almost all user data from the external world comes into
> > programs as strings. No typesystem or compiler handles this fact all
> > that gracefully...
>
> I would even go further.
>
> Types are only part of the story. You may distinguish between integers
> and floating points, fine. But what about distinguishing between
> floating points representing lengths and floating points representing
> volumes? Worse, what about distinguishing and converting floating
> points representing lengths expressed in feet and floating points
> representing lengths expressed in meters.
fair points
> If you start with the mindset of static type checking, you will consider
> that your types are checked and if the types at the interface of two
> modules match, you'll think that everything's ok. And six months later
> your Mars mission will crash.
do you have any evidence that this is actually so? That people who
program in statically typed languages actually are prone to this "well
it compiles so it must be right" attitude?
> On the other hand, with the dynamic typing mindset, you might even wrap
> your values (of whatever numerical type) in a symbolic expression
> mentionning the unit and perhaps other meta data, so that when the other
> module receives it, it may notice (dynamically) that two values are not
> of the same unit, but if compatible, it could (dynamically) convert into
> the expected unit. Mission saved!
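That wrap-the-value idea can be sketched in a few lines (a toy illustration, assuming meters as the canonical unit; the Quantity class and the FACTORS table are my own):

```python
from fractions import Fraction

# Exact conversion factors into the canonical unit (meters).
FACTORS = {"m": Fraction(1), "ft": Fraction(3048, 10000)}  # 1 ft = 0.3048 m

class Quantity:
    def __init__(self, value, unit):
        self.value = Fraction(value)
        self.unit = unit

    def to(self, unit):
        """Convert dynamically, at the moment two values meet."""
        canonical = self.value * FACTORS[self.unit]
        return Quantity(canonical / FACTORS[unit], unit)

    def __add__(self, other):
        # Notice the unit mismatch and reconcile it instead of crashing.
        return Quantity(self.value + other.to(self.unit).value, self.unit)

total = Quantity(2, "m") + Quantity(10, "ft")
assert total.unit == "m"
assert total.value == Fraction(5048, 1000)   # 2 m + 3.048 m, exactly
```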
they *may* do this but do they *actually* do it? My (limited)
experience of dynamically typed languages is that every now and again
you attempt to apply an operator to the wrong type of operand and
kerblam! If your testing is inadequate then it's inadequate whatever
the typiness of your language.
there are some application domains where neither option would be
viewed as a satisfactory error handling strategy: fly-by-wire,
petrochemicals, nuclear power generation. Hell, you'd expect better
than this from your phone!