
Comment by neonsunset

12 days ago

> The problem with Haskell is that it's slow and memory-heavy (and OCaml is the same, but worse). F# and Scala (and Clojure?) are pretty much the only reasonably usable FP languages.

Where are you getting your info from?

Typical OCaml programs, when compared to similar C++ programs, would be slower but use less memory.

F# and Scala are both OCaml in disguise. I don't know what you mean by "reasonable"... but, if the idea is "easy to reason about", then these two don't particularly stand out.

Languages that are easy to reason about are generally those where you need fewer translations before you get to the way the program is executed (e.g. bytecode adds an extra step, thus making a language harder to reason about). Also, languages with fewer primitives are easier to reason about, because the program text becomes more predictable.

In general, "functional" languages are harder to reason about when compared to imperative, because computers inherently don't work in the way the programs are modeled in "functional" languages, so there will be some necessary translation layer that transforms an FP program into a real computer program. There are people who believe that FP programs are easier to reason about due to the lack of side effects. In my experience, the lack of side effects doesn't come close to compensating the advantages of being able to map the program to what computer actually does.

All kinds of behind-the-scenes mechanisms in the language, such as a garbage collector, make the reasoning harder too, in a sense. We pretend that GC makes reasoning easier by taking a mental shortcut: we pretend that it doesn't matter when memory is freed. But, if you really want the full picture, GC adds a whole new layer of complexity when it comes to understanding a program.
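
A small illustration of that shortcut, again as an OCaml sketch (Gc.finalise and Gc.full_major are real stdlib calls; the buffer is just a stand-in): nothing in the program text says when, or even whether, the message below gets printed. That decision belongs to the collector.

    let () =
      let buf = Bytes.create 1_000_000 in
      Gc.finalise (fun _ -> print_endline "buffer reclaimed") buf;
      ignore (Bytes.length buf);
      (* buf is not used past this point, but when (or whether)
         "buffer reclaimed" prints is up to the GC, not the source text *)
      Gc.full_major ()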

Yet another aspect of reasoning is the ability of the reasoner to act on their reasoning. I.e. the reasoning might be imperfect, but still allow one to act (which is kind of the human condition, the way we are prepared to deal with the world). So, often, while imperative programs cannot easily be reasoned about formally, it's easy to reason about them informally, well enough to act on that reasoning. "Functional" programs are usually the reverse: they are easier to reason about formally, but they are very unnatural compared to the way humans reason about everyday stuff, so acting on them is harder for humans.

"Functional" languages tend to be more in the bytecode + GC + multiple translations camp. And, if forced to choose with these constrains, I'd say Erlang would be the easiest and the best designed language of all the "popular" ones. SML would be my pick if you need to get into the world of Haskell, but deciphering Haskell syntax brings you to the boil.

  • > Languages that are easy to reason about are generally those where you need fewer translations before you get to the way the program is executed

    This is a very interesting definition of "easy to reason about".

    To me, "easy to reason about" means that it's easy for me to figure out what the intent of the code is, and how likely it is that the code does what it was intended to do.

    How it translates to the machine is irrelevant.

    Now, if you work in an environment where getting the most out of the machine is crucial, then I understand. In my domain, though, dealing with things like allocating and freeing memory makes it harder to see what the code is supposed to do. As a human, I don't think about which memories to store where and when they should be forgotten; I just act on memories.

    Functional languages, then, tend to be high level enough not to expose me to the workings of the machine, which lets me focus on what I actually want to do.

  • Heh, no.

    You are suggesting replacing FP languages with powerful type systems, which perform marginally slower than C# and Java (and can access their ecosystems), with a language that is dynamically typed and, in most situations, performs marginally slower than PHP and marginally faster than Ruby.

    • Every language is both statically and dynamically typed. But the more correct way of saying this is "dynamically or statically checked". Types don't appear or disappear when a program runs. The difference is in what can be known about types and at what stage.
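
      To make the "at what stage" part concrete, a small OCaml sketch (illustrative only): the first mismatch is known before the program ever runs, while the second, written against a runtime-tagged representation, compiles fine and only fails while running.

          (* Statically checked: the compiler rejects this line outright,
             so it has to stay commented out for the file to build *)
          (* let _ = 1 + "one" *)

          (* Dynamically checked: the same mismatch, expressed with runtime
             tags, is only discovered when the code executes *)
          type value = Int of int | Str of string

          let add a b =
            match a, b with
            | Int x, Int y -> Int (x + y)
            | _ -> failwith "type error, found at run time"

          let _ = add (Int 1) (Str "one")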

      What programmers actually care about is this:

      How can we check more and sooner in a way that requires less mental energy on the side of the programmer to write?

      In other words, we have three variables we want to optimize for: how much is checked, how much is checked before execution, and how much effort it takes to write the checks. When people argue for "statically or dynamically typed languages", they generally don't understand what they are arguing for (or against), as they don't have this kind of mental model in mind (they just learned the terms without a clear understanding of what they mean).

      And neither do you.

      So, I don't really know what you mean when you say "dynamically typed". Which language is that? Are you talking about Erlang? SML? What aspect of the language are you trying to describe?

      NB. I don't think either C# or Java has a good type system. My particular problem with them is subtyping, which is also a problem in OCaml and derivatives such as Scala or F#. It's not a solution anyone wanted; it's a kludge that was added to these systems to deal with classes and objects. So, if we are going after good type systems... well, it wouldn't be in any language with objects, that's for sure.
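
      To be concrete about the OCaml side of that, a minimal sketch (the names are made up for illustration) of how subtyping surfaces in the object layer; the explicit :> coercions are the machinery in question.

          type shape = < area : float >

          let circle r = object
            method area = Float.pi *. r *. r
            method radius = r
          end

          let square s = object
            method area = s *. s
          end

          (* the explicit :> coercion is where the subtyping happens *)
          let shapes : shape list =
            [ (circle 1.0 :> shape); (square 2.0 :> shape) ]

          let total_area =
            List.fold_left (fun acc s -> acc +. s#area) 0.0 shapes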

      NB2. Unix Shell has a great type system. Everything is a string. It's a pleasure to work with, and you don't even need a type checker! For its domain, it seems like a perfect compromise between the three optimization objectives.