Thank you for the reminder. I guess having a PhD is not much different from a brain injury -- it has its side effects... so, it is not pure... (:
The presentation took us on your journey of "getting" functional programming, and the many personal references made it very engaging for me -- thank you!
The feedback below is not about the presentation per se, but only about some specific points in the presented material. Albert Einstein once said "The more I learn, the more I realise how much I don't know", and this is exactly my experience -- the depth of the subject of computation is... I want to say "bottomless", but one could say it has a bottom (https://wiki.haskell.org/Bottom) (:. And it is always the case that when presenting something, we're effectively forced to cut at some level to stay "on time". Therefore, I appreciate that some of the items below were potentially omitted/avoided purposefully.
1. In discussing immutability, it is easy to lead the listener to believe that the use of "val" ensures immutability. However, "val" merely creates an immutable reference to a value. Is the value itself immutable? If it is not, then immutability is not present even with "val". One could even argue that a "var" referencing an immutable value gives much stronger guarantees of immutability.
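A minimal sketch of the distinction (the variable names are mine, chosen for illustration):

```scala
import scala.collection.mutable

// A `val` only fixes the *reference*; the referenced value may still mutate.
val buffer = mutable.ArrayBuffer(1, 2, 3)
buffer += 4                       // compiles fine: the value changed under a `val`

// A `var` pointing at an immutable value: the reference may be reassigned,
// but no value it ever pointed to can change.
var numbers = List(1, 2, 3)
numbers = numbers :+ 4            // builds a *new* list; the original List(1, 2, 3) is untouched
```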
2. If a type system is not expressive enough, then we cannot rely on a type signature alone to "understand" 100% of what is expected of a function. If a type system is expressive enough, such as in Idris (https://www.idris-lang.org/), we have different problems -- the complexity of type expressions may make them difficult to read and can lead to different kinds of errors. A reasonable alternative is the use of pre-conditions, which in combination with type signatures are expressive enough and remain quite readable. Another very interesting approach is taken by Clojure in the form of Clojure Spec (https://clojure.org/about/spec).
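A hedged sketch of the pre-condition alternative, using Scala's built-in `require` to state what the `Double => Double` signature alone cannot (the function name and message are mine):

```scala
// The type says only Double => Double; the pre-condition states the rest.
def sqrt(x: Double): Double = {
  require(x >= 0, s"sqrt is only defined for non-negative numbers, got $x")
  math.sqrt(x)
}
```

A violated `require` fails fast with an `IllegalArgumentException`, so the contract is checked at the function boundary rather than silently producing `NaN`.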
3. Currying is a convenient technique for defining HOFs that return functional values (i.e. return a function as their result). The alternative -- declaring complex return types that represent functions and using inner functions to compute the return values -- is tedious. Currying simplifies this with a convenient syntax: the compiler infers the intermediate function types from the function's parameter lists.
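A small sketch of the two styles side by side (names are mine):

```scala
// Explicit style: declare the returned function type and build it by hand.
def addExplicit(x: Int): Int => Int = (y: Int) => x + y

// Curried style: multiple parameter lists; the intermediate function
// types are inferred for us.
def addCurried(x: Int)(y: Int): Int = x + y

// Partial application yields the same Int => Int value either way.
val addFive: Int => Int = addCurried(5)
```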
4. Function purity is not violated by local mutation. The provided example of a function using "for" and "var" is a pure function: the mutation is not observable from the outside. Mutation is not evil per se. In fact, locally scoped mutation is often a good optimisation technique -- after all, we're computing on von Neumann machines (:
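To illustrate the point, a sketch of a function that is pure despite using "var" and "for" internally (not the exact example from the presentation):

```scala
// Pure from the caller's perspective: same input always yields the same
// output, and no mutation escapes the function's scope.
def sum(xs: List[Int]): Int = {
  var acc = 0              // local mutable accumulator
  for (x <- xs) acc += x   // mutation confined to this scope
  acc
}
```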
5. It is important to differentiate between function definitions and the computational processes they generate. This is a simple but not easily observable difference -- in casual conversation we often conflate the recursive definition of a function with "what it does" computation-wise. A recursive definition generates either a recursive process or an iterative process: a tail-recursive definition generates an iterative process, while a non-tail-recursive definition generates a recursive process. SICP covers this subject, as it does many others in functional programming, extremely well (https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-11.html#%_sec_1.2).
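The classic illustration, sketched in Scala -- both definitions are recursive, but they generate different processes:

```scala
import scala.annotation.tailrec

// Non-tail-recursive definition: generates a recursive process
// (a deferred multiplication is left pending at each call).
def factorial(n: BigInt): BigInt =
  if (n == 0) 1 else n * factorial(n - 1)

// Tail-recursive definition: generates an iterative process; @tailrec
// asks the compiler to verify that the call is in tail position.
def factorialIter(n: BigInt): BigInt = {
  @tailrec
  def loop(k: BigInt, acc: BigInt): BigInt =
    if (k == 0) acc else loop(k - 1, k * acc)
  loop(n, 1)
}
```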
6. Pattern matching is a technique for data destructuring. To appreciate its usefulness, it is worth using an example with more than two patterns to match -- one that cannot be easily emulated with a single "if/else". Otherwise, especially for those who are evaluating whether they want to adopt the functional paradigm, it is very easy to dismiss the power of data destructuring with pattern matching.
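For example, a sketch that destructures a list's shape across four cases, where an "if/else" would have to poke at lengths and indices (the function is mine, for illustration):

```scala
def describe(xs: List[Int]): String = xs match {
  case Nil           => "empty"
  case x :: Nil      => s"single element: $x"
  case x :: y :: Nil => s"pair: $x and $y"
  case x :: _        => s"starts with $x, has ${xs.length} elements"
}
```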
Some less relevant, but potentially thought-provoking remarks:
1. Unlike functional analysis, λ-calculus is not really the study of functions, but rather of the process of computation and of computability; like the abstract Turing machine, it is a universal model of computation. A curious fact is that functions in λ-calculus are lambdas -- they have no names. How could such functions be defined recursively?! It could be instructive to investigate how this can be achieved and to try to replicate the approach in Scala. Hint: the Y combinator.
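One common Scala rendition of the idea is a fixed-point combinator -- a sketch, not a faithful untyped Y combinator (Scala is strict and typed, so the combinator below cheats by being a named recursive `def`, playing the role the Y combinator plays in the untyped λ-calculus):

```scala
// Fixed-point combinator: lets an anonymous function refer to "itself"
// via the `self` parameter instead of a name.
def fix[A, B](f: (A => B) => (A => B)): A => B =
  (x: A) => f(fix(f))(x)

// Factorial written without the lambda ever naming itself.
val factorial: Int => Int =
  fix[Int, Int](self => n => if (n == 0) 1 else n * self(n - 1))
```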
2. For those of us coming from statically typed languages, dynamically typed languages often look inferior, and we tend to stick with and search for ever stronger statically typed languages, following our, potentially incorrect, intuition that one day a successfully compiling program will mean a 100% correct program. And when we program, we tend to think in types, and declaring new types does not bother us -- after all, this is only natural. However, by introducing a new type we are faced with an interesting dilemma -- there are no functions/operations that can do anything with values of this new type... And so we get to work implementing such functions, completely convinced that we are doing a "good job". But what we actually do is create more code, which means more room for defects, more code maintenance, etc. Is there an alternative? Can we minimise the creation of new types? Could the approaches from dynamically typed languages that utilise the "shape" of data to drive computations, in combination with types from standard, well-tested libraries, yield a more pragmatic way to create more reliable systems?
3. I wholeheartedly recommend SICP (https://mitpress.mit.edu/sicp) to anyone who has not already read/studied this book. It is truly the most rewarding book on programming I have personally come across. It uses Scheme as the medium to convey the material, but all the ideas and concepts are fully applicable to Scala, and it is definitely possible to work through this book by implementing the exercises in Scala.