
Functional programming - is immutability expensive?

The question is in two parts. The first is conceptual. The second looks at the same question more concretely in Scala.


  1. Does using only immutable data structures make implementing certain algorithms/logic inherently more computationally expensive in practice? I ask because immutability is a core tenet of purely functional languages. Are there other factors that impact this?

  2. Let's take a more concrete example. Quicksort is generally taught and implemented using mutable operations on an in-memory data structure. How does one implement it in a purely functional way, with computational and storage overhead comparable to the mutable version, specifically in Scala? I have included some crude benchmarks below.



More Details:



I come from an imperative programming background (C++, Java). I have been exploring functional programming, specifically Scala.

Some of the primary principles of pure functional programming:


  1. Functions are first class citizens.

  2. Functions do not have side effects and hence objects/data structures are immutable.



Even though modern JVMs are extremely efficient at object creation and garbage collection is very inexpensive for short-lived objects, it's presumably still better to minimize object creation, right? At least in a single-threaded application, where concurrency and locking are not an issue. Since Scala is a hybrid-paradigm language, one can choose to write imperative code with mutable objects when necessary. But as someone who has spent many years trying to reuse objects and minimize allocation, I would like a good understanding of the school of thought that would not even allow that.

As a specific case, I was a little surprised by a code snippet in this tutorial. It has a Java version of quicksort followed by a neat-looking Scala implementation of the same.

Here is my attempt to benchmark the two implementations. I haven't done detailed profiling, but my guess is that the Scala version is slower because the number of objects allocated is linear (one per recursive call). Is there any chance that tail-call optimization comes into play? As I understand it, Scala supports tail-call optimization for self-recursive calls, so it should only be helping here. I am using Scala 2.8.
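As background on what the compiler can and cannot rewrite: Scala 2.8 only optimizes direct self-recursive calls in tail position, and the @tailrec annotation makes the compiler verify the rewrite. A minimal sketch of my own to illustrate:

import scala.annotation.tailrec

// Compiles to a loop; @tailrec would be a compile error otherwise.
@tailrec
def sumAcc(xs: List[Int], acc: Int): Int = xs match {
  case Nil    => acc
  case h :: t => sumAcc(t, acc + h)  // self-call in tail position
}

// Quicksort makes two recursive calls per invocation, so at most one of
// them can be in tail position; the other necessarily consumes stack.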

Java version



public class QuickSortJ {

  public static void sort(int[] xs) {
    sort(xs, 0, xs.length - 1);
  }

  // Partitions xs[l..r] around the pivot xs[l], then recurses on both halves.
  static void sort(int[] xs, int l, int r) {
    if (r <= l) return;
    int pivot = xs[l];
    int a = l; int b = r;
    while (a <= b) {
      while (xs[a] <= pivot) a++;  // find an element that belongs right of the pivot
      while (xs[b] > pivot) b--;   // find an element that belongs left of the pivot
      if (a < b) swap(xs, a, b);
    }
    sort(xs, l, b);
    sort(xs, a, r);
  }

  static void swap(int[] arr, int i, int j) {
    int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
  }
}


Scala version



object QuickSortS {

  // Builds a new sorted array on every call; each recursive step allocates
  // three filtered arrays plus their concatenation.
  def sort(xs: Array[Int]): Array[Int] =
    if (xs.length <= 1) xs
    else {
      val pivot = xs(xs.length / 2)
      Array.concat(
        sort(xs filter (pivot >)),  // elements less than the pivot
        xs filter (pivot ==),       // elements equal to the pivot
        sort(xs filter (pivot <)))  // elements greater than the pivot
    }
}


Scala Code to compare implementations



import scala.testing.Benchmark

// Times sortfn over a 100,000-element array; setUp refills the array from
// a fixed seed so every run sees the same pseudo-random input.
class BenchSort(sortfn: (Array[Int]) => Unit, name: String) extends Benchmark {

  val ints = new Array[Int](100000)

  override def prefix = name
  override def setUp = {
    val ran = new java.util.Random(5)
    for (i <- 0 until ints.length)
      ints(i) = ran.nextInt()
  }
  override def run = sortfn(ints)
}

val benchImmut = new BenchSort(QuickSortS.sort, "Immutable/Functional/Scala")
val benchMut   = new BenchSort(QuickSortJ.sort, "Mutable/Imperative/Java   ")

benchImmut.main(Array("5"))
benchMut.main(Array("5"))


Results



Time in milliseconds for five consecutive runs

Immutable/Functional/Scala 467 178 184 187 183
Mutable/Imperative/Java 51 14 12 12 12

Answer

Since there are a few misconceptions flying around here, I’d like to clarify some points.

  • The “in-place” quicksort isn’t really in-place (and quicksort is not by definition in-place). It requires additional storage in the form of stack space for the recursive step, which is O(log n) in the best case but O(n) in the worst case.

  • Implementing a functional variant of quicksort that operates on arrays defeats the purpose. Arrays are never immutable.

  • The “proper” functional implementation of quicksort uses immutable lists (a Scala sketch follows this list). It is of course not in-place, but it has the same worst-case asymptotic running time (O(n^2)) and space complexity (O(n)) as the procedural in-place version.

    On average, its running time is still on par with that of the in-place variant (O(n log n)). Its space complexity, however, is still O(n).
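In Scala, the question's language, that list-based variant might be sketched like this (the standard textbook formulation, not code from the original post):

def qsort(xs: List[Int]): List[Int] = xs match {
  case Nil => Nil
  case pivot :: rest =>
    // partition walks the list once, splitting it around the pivot
    val (lesser, greater) = rest partition (_ < pivot)
    qsort(lesser) ::: pivot :: qsort(greater)
}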


There are two obvious disadvantages of a functional quicksort implementation. In the following, let’s consider this reference implementation in Haskell (I don’t know Scala …) from the Haskell introduction:

qsort []     = []
qsort (x:xs) = qsort lesser ++ [x] ++ qsort greater
    where lesser  = (filter (< x) xs)
          greater = (filter (>= x) xs)

  1. The first disadvantage is the choice of the pivot element, which is very inflexible. The strength of modern quicksort implementations relies heavily on a smart choice of the pivot (compare “Engineering a Sort Function” by Bentley and McIlroy). The above algorithm always takes the head of the list, which is poor in that regard and degrades average performance considerably (e.g. it is quadratic on already-sorted input).

  2. Secondly, this algorithm uses list concatenation (++) instead of pure list construction, and concatenation is an O(n) operation. This doesn’t affect the asymptotic complexity, but it is a measurable factor; a concatenation-free variant is sketched after this list.
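The concatenations can be eliminated by threading an accumulator through the recursion so that each element is consed exactly once. A sketch of this well-known variant, transcribed into Scala:

def qsortAcc(xs: List[Int]): List[Int] = {
  // go(ys, acc) computes (sorted ys) followed by acc, so the result of
  // sorting the greater half serves directly as the accumulator for the
  // lesser half; no ::: concatenation is ever performed.
  def go(ys: List[Int], acc: List[Int]): List[Int] = ys match {
    case Nil => acc
    case pivot :: rest =>
      val (lesser, greater) = rest partition (_ < pivot)
      go(lesser, pivot :: go(greater, acc))
  }
  go(xs, Nil)
}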

A third disadvantage is somewhat hidden: unlike the “in-place” variant, this implementation continually requests memory from the heap for the cons cells of the list and potentially scatters memory all over the place. As a result, it has very poor cache locality. I don’t know whether smart allocators in modern functional programming languages can mitigate this, but on modern machines cache misses have become a major performance killer.


What’s the conclusion? Unlike others, I wouldn’t say that quicksort is inherently imperative and that this is why it performs badly in an FP environment. Quite the contrary: I would argue that quicksort is a perfect example of a functional algorithm. It translates seamlessly into an immutable environment, its asymptotic running time and space complexity are on par with the procedural implementation, and even the procedural implementation employs recursion.

But the algorithm still performs worse when constrained to an immutable domain. The reason is that quicksort has the peculiar property of benefiting from a lot of (sometimes low-level) fine-tuning that can only be performed efficiently on arrays. A naive description of quicksort misses all these intricacies, in both the functional and the procedural variant.

After reading “Engineering a Sort Function” I can no longer consider quicksort an elegant algorithm. Implemented efficiently, it is a clunky mess, an engineer’s piece of work, not an artist’s (not to devalue engineering! it has its own aesthetic).


But I would also like to point out that this issue is particular to quicksort. Not every algorithm is amenable to the same kind of low-level tweaking. Many algorithms and data structures really can be expressed without performance loss in an immutable environment.

And immutability can even decrease performance costs by removing the need for costly copies or cross-thread synchronization.
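For example, a persistent list is extended by sharing rather than copying:

// Structural sharing in Scala: “adding” to an immutable list copies nothing.
val xs = List(1, 2, 3)
val ys = 0 :: xs   // O(1): ys reuses all of xs wholesale
// Since neither list can ever change, both can be handed to other threads
// without locks or defensive copies.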

So, to answer the original question, “is immutability expensive?” – In the particular case of quicksort, there is a cost that is indeed a result of immutability. But in general, no.