On 22/11/2022 16:24, Dmitry A. Kazakov wrote:
> On 2022-11-22 15:11, David Brown wrote:
>
>> Familiarity with these gives you a broader background than many
>> programmers, but no insight or experience with functional programming.
>
> I don't pretend. I said I don't buy declarative approach and I don't buy
> first-class functions, however you shape them.
>
OK, I suppose - but I hope you'll forgive me if I don't believe you have
demonstrated enough experience or knowledge for your thoughts on
functional programming languages to carry any weight beyond personal
dislike. (I won't argue with personal tastes, as long as it is clear
that it is /your/ preferences for the programming /you/ do.)
>>>> Oh, so you think functional programming languages are designed and
>>>> intended to be useless, and it is only by accident or stubbornness
>>>> that anyone can actually make use of them? That is an "interesting"
>>>> argument.
>>>
>>> Absolutely. Most languages fall into this category. It is not a
>>> unique feature of FPLs...
>>
>> That is quite a cynical viewpoint!
>
> Grows from familiarity... (:-))
>
>>>>>> Whether the language supports first-class functions or not makes
>>>>>> no difference as to how well specified functions are, or whether
>>>>>> the implementation follows that specification or not.
>>>>>
>>>>> Specification is a declarative layer on top of the object language.
>>>>> In imperative procedural languages that layer is declaration of
>>>>> subprograms.
>>>>
>>>> No, that is not what "specification" means. A declaration is just
>>>> the name, parameters, types, etc., of a function. A specification
>>>> says what the function /does/.
>>>
>>> That is same. When you specify type, to say what the object "does" by
>>> being of that type.
>>
>> No, they are not the same at all. A declaration is needed to use the
>> function (or other object) in the language.
>
> Almost no language declare naked names. Most combine that with
> specification of what these names are supposed to mean = behave.
>
Again - you are mixing up specifications and declarations.
"int foo(int x)" is a /declaration/. It is all you need in C to be able
to use the function to compile code, and similar declarations are all
you need in most other languages. It is /not/ a /specification/. It
says /nothing/ about what the function means, does, or how it behaves.
If I ask you "write a function called "check" that takes a double and
returns a boolean", what would you do? You can't write that function -
you have no idea what it is going to do. I, on the other hand, can put
"bool check(double);" in my code and compile my program. How the whole
thing behaves is another matter, because you have no specification for
the function - merely its declaration.
/Please/ tell me you understand the difference! We are not going to get
far if you don't see that.
I haven't given any specifications - nor have you. We have not
discussed any particular concrete functions that could be specified.
If you like, we could write this in Haskell :
do_twice :: (a -> a) -> (a -> a)
do_twice f x = f (f x)
The first line is the declaration, which is often not necessary as the
compiler will figure it out for itself. Programmers might want more
control - for example, if you only want the function to handle int to
int functions, you could write :
do_twice :: (Int -> Int) -> (Int -> Int)
do_twice f x = f (f x)
As a specification, you could say "function "do_twice" should take a
function as an argument, and return a function that has the same effect
as applying the original function twice".
So, now you have an example of a higher level function and its
specification - /and/ an implementation, /and/ a declaration. Was that
so difficult?
>> Incidentally, you /do/ understand that "object oriented" is orthogonal
>> to imperative/declarative ? Many functional programming are object
>> oriented, just as many imperative languages are not.
>>
>>> They are not even first class.
>>
>> "Int" is a first class object type in Haskell and C. The "times"
>> function is a first class object type in Haskell, but not in C.
>>
>>> You didn't wrote:
>>>
>>> int (int) times_two; // Some functional C (no pun intended (:-))
>>>
>>
>> Indeed I didn't write that - it is not syntactically correct in either
>> sample language.
>>
>> I could have written a declaration for a higher order function in
>> Haskell:
>>
>> do_twice :: (Int -> Int) -> (Int -> Int)
>>
>> That takes a function that is int-to-int, and returns a function that
>> is int-to-int.
>
> Not the same. int (int) times_two; is supposedly a variable which values
> are int-valued functions taking one int argument. That would be the
> least possible case of first-class functions.
>
I am sorry, I really cannot understand what you are trying to say here.
You seem to be mixing up types, variables and functions.
I could have written :
type IntFunc = Int -> Int
do_twice :: IntFunc -> IntFunc
do_twice f x = f (f x)
This would have been the same thing as the previous Int-only "do_twice".
>> And again - these are declarations, not specifications.
>
> They are rudimentary specifications. To elaborate them is difficult in C
> and in an FPL.
>
No, declarations are not specifications at all. They give you the name
and the types involved - they don't say what the function should do.
The nearest you have to a specification is a good choice of name.
Arguably, functional programming is often just a formalised and precise
language for writing your specifications. Functional programming is
primarily concerned with what a function should /do/, and much less
concerned about /how/ it should do it. The Haskell implementation of
"do_twice" above is as clear a specification as the English language
specification I gave.
For real functional code, you will often define functions in a way that
gives the general direction for how it will work. For example, a simple
quicksort Haskell function could be :
qs [] = []
qs (x : xs) = (qs left) ++ [x] ++ (qs right)
  where
    left  = filter (< x) xs
    right = filter (>= x) xs
That could be considered a technical specification of a quicksort
algorithm. A more advanced Haskell implementation would use in-place
sorting for efficiency.
>>> The point is that if you take some really fancy functional stuff, it
>>> would be difficult or, maybe, useless to formally describe in some
>>> meta language of specifications.
>>
>> A function that cannot sensibly be described is of little use to
>> anyone. That is completely independent of the programming language
>> paradigm. If no one can tell you want "foo" does, you can't use it. I
>> cannot see any way in which imperative languages differ from
>> declarative languages in that respect.
>
> That comment was on the first-class functions. If you have a
> sufficiently complex algebra generating functions, especially during
> run-time, it becomes difficult to describe the result in specifications.
>
Is your dislike of functional programming really just that it is
possible to write higher order functions that you can't understand or
describe? People can write crap in any language. Just have a look at
<https://thedailywtf.com/> and you can see examples of indescribable
code in every language you can imagine.
> Declarative approach is merely difficult to understand and thus to reuse
> and maintain code.
>
It is okay to say "/I/ don't understand language X, so I don't use it".
It is not okay to make general claims about what other people
understand. You can be sure that people who program in functional
programming languages /do/ understand what they are doing, and /can/
maintain and reuse their code.
Some languages have a reputation for being "write only", with Perl being
the prime contender. I've never heard that about any functional
programming language - and I challenge you to back up your claims with
references. Failing that, you are just another whiner who mocks things
they don't understand rather than accept that they don't know everything.
>>>>>>> Another issue is treatment of types when each function is an
>>>>>>> operation on some types and nothing else.
>>>>>>
>>>>>> I can't understand what you mean. Functional programming language
>>>>>> functions are not operations on types.
>>>>>
>>>>> Yep.
>>>>
>>>> Oh, so you think functions in functional programming languages don't
>>>> have types or act on types?
>>>
>>> It was you who said "Functional programming language functions are
>>> not operations on types." I only agreed with you.
>>
>> I think I see the source of confusion. When you wrote "each function
>> is an operation on some types", you mean functions that operate on
>> /instances/ of certain types. There is a vast difference between
>> operating on /instances/, and operating on /types/.
>
> I did not mean first-class type objects (types of types). They raise the
> same objections as first-class procedural objects (types of
> subprograms). It would be surely interesting as an academia research and
> a nightmare in production code.
>
> Acting on a type means being a function of the type domain. E.g. sine
> acts on R. R is a type ("field" etc).
So when you write "act on a type", you mean "act on an instance of a
type" - just like "sine" acts on real numbers, or members of R, and not
on the set R.
>
>> I think what you meant to say was that in imperative languages, you
>> define functions to act on particular types (the types of the
>> parameters), while in functional programming you don't give the types
>> of the parameters so they operate on "anything".
>
> No, that would be untyped. Most FPLs are typed, I hope.
No, it would be generic programming. Generic programming is not the
same as untyped programming - in generic programming your functions are
defined to work with many types but each /use/ of the function is on
values of specific types. A fair bit of functional programming is
generic, but some of it is type-specific - you can have both (just as
you can in many languages).
>
>> This is, of course, wrong in both senses. More advanced imperative
>> let you define functions that operate on many types - they are known
>> as template functions in C++, generics in Ada, and similarly in other
>> languages.
>
> That is acting on a class. Class is a set of types, e.g. built by
> derivation closure in OO or ad-hoc as "any macro expansion the compiler
> would swallow" in the case of templates... (:-))
>
Again - no. While the definition of "class" (and even "type") varies
between languages, classes are not "sets of types". (And again, you are
mixing up "acting on something" with "acting on instances of
something".) What you are describing there is a type hierarchy in a
language with nominal subtyping - basically, the way class inheritance
works in C++ and Java. That is not the only way to do object-oriented
typing in a language.
>> And in functional programming languages you can specify types as
>> precisely or generically as you want.
>
> I don't see why an FPL could not include some "non-functional" core.
> Like C++ contains non-OO stuff borrowed from C. But talking about
> advantages and disadvantages of a given paradigm we must discuss the new
> stuff. Old stuff usually indicates incompleteness of the paradigm. E.g.
> C++, Java, Ada are nowhere close to be 100% OO. On the contrary, they
> are non-OO most of the time.
Smalltalk is one of the few languages that could be considered entirely
object-oriented.
But yes, many successful programming languages are multi-paradigm, and
support
different ways of coding. C++ and Python support many different ways,
including functional, object-oriented, generic and procedural coding.
OCaml is primarily a functional programming language but it also
supports imperative coding. (It is also object oriented, but as I've
said before that is orthogonal to functional/imperative choices.)
Supporting multiple types of coding makes a language more flexible. But
it also comes at a cost. C++'s support for function overloading and
functional coding means there is no longer the near one-to-one
relationship between source code and assembly code that is so useful in
C for low-level debugging. OCaml's support for modifiable variables
means you don't get the level of correctness provability or thread
safety found in purer functional programming languages.
>
>> You like a programming language where you can understand a near
>> one-to-one correspondence between the source language and the
>> generated assembly. Fair enough - that's a valid and reasonable
>> preference.
>
> Much weaker that that. I want to be able to recognize complexity and
> algorithm and have limited effects on the computational environment.
> Basically, small things must look small and big things big. If there is
> a loop or recursion I'd like to be aware of these. I also like to have
> "uninteresting" details hidden. But I want sufficient stuff like memory
> management, blocking etc exposed.
>
You are still drawing an arbitrary line and saying "I like stuff on this
side, I don't want anything to do with stuff on the other side". Again,
there's nothing wrong with that - I think everyone does this.
(Personally, I work with a very wide range of programming. The line I
use for small embedded systems is in a very different place from the one
I use for PC programming.)
What is /not/ fine is to take your line and say "The stuff on this side
is good - it is good engineering and programming, letting people write
clear and maintainable code. The stuff on the other side is
incomprehensible, impractical nonsense that doesn't work."
(I've been through this with others - I really cannot understand how
some people get so narrow-minded and insular that they believe anything
different from their familiar little world is /wrong/.)
>> I have nothing against preferences. I just don't understand how
>> people can dismiss other options as impractical, useless, unintuitive,
>> impossible to use, or whatever, simply because those other languages
>> are not a style that they are familiar with or not a style they like.
>
> You get me wrong. It is not about style, though that is often more
> important than anything else. My concern is with the paradigm as a whole.
>>>> Functional programming languages have supported generic programming
>>>> and type inference for a lot longer than most imperative languages,
>>>> but those are both standard for any serious modern imperative
>>>> language (in C++ you have templates and "auto", in other languages
>>>> you have similar features).
>>>
>>> Type inference is a separate, and very controversial issue.
>>
>> It is only as controversial as any other feature where you have a
>> trade-off between implicit and explicit information. It is not really
>> any more controversial than the implicit conversions found in many
>> languages and user types.
>
> Yes, and there should be none in a well-typed program.
>
Nonsense.
>> And it is not a separate issue - unless you are using a dynamic
>> language that supports objects of any type in any place (with run-time
>> checking when the objects are used), type inference is essential to
>> how generic programming works.
>
> No, you can have static polymorphism without inference. You possibly
> mean some evil automatic instantiations of generics instead, like in
> C++. I don't want automatic instantiation either.
>
You cannot have generic programming without type inference - that's what
I said, and that's what I mean. What you /want/, or what you personally
choose to use, is irrelevant.
template <class T>
T add_one(T x) { return x + 1; }

int foo(int x) {
    return add_one(x);
}
Type inference is how the compiler calls the right "add_one" instance
and how it determines what the type "T" actually is in the concrete
function call.
Type inference is not magic - compilers already have to figure out the
type of object instances in order to do type checking, implicit
conversions, and the like. gcc's "typeof" operator exposes this as a
simple extension to C (and it may well become part of C23).
Using it too much in a language like C++ can make code harder to
understand - when you have "auto x = bar();", anyone reading the code
needs to make more of an effort to find the real type of "x". But if
the types involved are complicated, then it makes code much clearer,
more flexible and maintainable, and far more likely to be correct. It
is a sharp tool, and must be used responsibly.
>>>>>> Conversely, all functions in all languages operate on some types
>>>>>> and nothing else.
>>>>>
>>>>> No, in OOPL a method operates on the class. A "free function" takes
>>>>> some arguments in unrelated types.
>>>>
>>>> Methods in OOPL (or most people think of as Object Oriented
>>>> programming, such as C++, Java, Python, etc., rather than the
>>>> original intention of OOP which is now commonly called "actors"
>>>> paradigm) are syntactic sugar for a function with the class instance
>>>> as the first parameter.
>>>
>>> Not really, because methods dispatch. A method acts on the class and
>>> its implementation consists of separate bodies, which is the core of
>>> OO decomposition as opposed to other paradigms.
>>
>> Methods do not act on classes - they act on instances of a class (plus
>> any static members of the class).
>
> Rather on the closure of the class. Class is a set of types. Method acts
> on all values from all instances of the set.
>
I am assuming you are not using the word "closure" in the sense normally
used in programming. You mean a hierarchy of object types in a nominal
typing system with inheritance. Methods act on instances of a class,
and the same method may be able to act on instances of more than one
class in the hierarchy. Any given invocation is on a specific instance
of a specific class.
>> Tying function code to method names or free function names is just
>> naming syntax details, not an operation on the class or type itself.
>
> No, free function does not dispatch if class is involved. It has the
> same body is valid for values of all instances. A method selects a body
> according to the actual instance. The selected body could be composed
> out of several bodies like C++'s constructors and destructions, which
> are no methods, but one could imagine a language with that sort of
> composition of methods.
>
Suppose you have a class "A" with a non-virtual method "foo", and a
virtual method "bar". Class "B" inherits from "A" and overrides both
methods.
A a;
B b;
A * ap;
B * bp;
a.foo(); // A_foo(&a)
b.foo(); // B_foo(&b)
ap = &a;
ap->foo(); // A_foo(&a)
ap = &b;
ap->foo(); // A_foo(&b) - probably wrong
bp = &a; // Compile-time error
bp = &b;
bp->foo(); // B_foo(&b)
a.bar(); // A_bar(&a)
b.bar(); // B_bar(&b)
ap = &a;
ap->bar(); // A_bar(&a) - dynamic dispatch
ap = &b;
ap->bar(); // B_bar(&b) - dynamic dispatch
bp = &a; // Compile-time error
bp = &b;
bp->bar(); // B_bar(&b)
For the non-virtual methods, you could have written :
void foo(A* a) { A_foo(a); }
void foo(B* b) { B_foo(b); }
Now there is no difference between "x.foo();" and "foo(&x);".
Method calls for non-virtual methods are just the same as free
functions, but with a convenient syntax.
The virtual methods are more interesting, since the dispatch is dynamic.
This is handled by giving each instance of the classes a hidden
pointer to a virtual method table. So now the free function "bar" is a
bit more complicated:
void bar(A* a) { a->__vt[index_of_bar](a); }
(Multiple inheritance makes this messier, but the principle is the same.)