Dylan Siegler - 1 year ago

Java Question

I am currently taking a Data Structures class and, as you may expect, one of the things we have to do is write some of the common sorts. In writing my insertion sort algorithm, I noticed it ran significantly faster than my instructor's (for 400,000 data points, mine took about 30 seconds and his about 90). I emailed him my code, and the results were the same when both were run on the same machine. We spent more than 40 minutes slowly changing his sorting method into mine until it was exactly the same, word for word, except for one seemingly arbitrary thing. First, here is my insertion sort code:

```java
public static int[] insertionSort(int[] A){
    //Check for illegal cases
    if (A == null || A.length == 0){
        throw new IllegalArgumentException("A is not populated");
    }
    for(int i = 0; i < A.length; i++){
        int j = i;
        while(j > 0 && A[j - 1] > A[j]){
            int temp = A[j];
            A[j] = A[j - 1];
            A[j - 1] = temp;
            j--;
        }
    }
    return A;
}
```

Now at this point his code was exactly the same as mine except for the three lines where we swap `A[j]` and `A[j - 1]`. Instead of the swap above, his looked like this:

```java
int temp = A[j - 1];
A[j - 1] = A[j];
A[j] = temp;
```

We found that those 3 lines were the culprits; my code was running significantly faster because of them. Perplexed, we ran `javap -c` on two minimal test classes whose `main` methods contain just an array, an index `int j`, and the respective swap. Here is the byte code for my version:

```
Compiled from "me.java"
public class me {
  public me();
    Code:
       0: aload_0
       1: invokespecial #1      // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: sipush        10000
       3: newarray      int
       5: astore_1
       6: bipush        10
       8: istore_2
       9: aload_1
      10: iload_2
      11: iaload
      12: istore_3
      13: aload_1
      14: iload_2
      15: aload_1
      16: iload_2
      17: iconst_1
      18: isub
      19: iaload
      20: iastore
      21: aload_1
      22: iload_2
      23: iconst_1
      24: isub
      25: iload_3
      26: iastore
      27: return
}
```

And my instructor's method's byte code:

```
Compiled from "instructor.java"
public class instructor {
  public instructor();
    Code:
       0: aload_0
       1: invokespecial #1      // Method java/lang/Object."<init>":()V
       4: return

  public static void main(java.lang.String[]);
    Code:
       0: sipush        10000
       3: newarray      int
       5: astore_1
       6: bipush        10
       8: istore_2
       9: aload_1
      10: iload_2
      11: iconst_1
      12: isub
      13: iaload
      14: istore_3
      15: aload_1
      16: iload_2
      17: iconst_1
      18: isub
      19: aload_1
      20: iload_2
      21: iaload
      22: iastore
      23: aload_1
      24: iload_2
      25: iload_3
      26: iastore
      27: return
}
```
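For context, both listings are consistent with minimal test classes of roughly the following shape. This is a reconstruction from the byte code above, not the actual source; the class name comes from the `Compiled from "me.java"` line, and the instructor's class would differ only in the order of the three swap lines:

```java
// Assumed reconstruction: a minimal class matching the "me.java" byte code above.
public class me {
    public static void main(String[] args) {
        int[] A = new int[10000]; // sipush 10000, newarray int, astore_1
        int j = 10;               // bipush 10, istore_2
        int temp = A[j];          // iaload, istore_3
        A[j] = A[j - 1];          // load A[j - 1], iastore into A[j]
        A[j - 1] = temp;          // iload_3, iastore into A[j - 1]
    }
}
```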

I don't see any real difference between these byte codes, so we were still at a loss. For completeness, here are the two full tester files we used for timing. The first:

```java
public class Tester1 {
    public static void main(String[] args){
        int[] A = new int[400000];
        for(int i = 0; i < A.length; i++){
            A[i] = (int) (Math.random() * Integer.MAX_VALUE);
        }
        double start = System.currentTimeMillis();
        insertionSort(A);
        System.out.println("My insertion sort took " + (System.currentTimeMillis() - start) + " milliseconds.");
    }

    public static int[] insertionSort(int[] A){
        //Check for illegal cases
        if (A == null || A.length == 0){
            throw new IllegalArgumentException("A is not populated");
        }
        for(int i = 0; i < A.length; i++){
            int j = i;
            while(j > 0 && A[j - 1] > A[j]){
                int temp = A[j];
                A[j] = A[j - 1];
                A[j - 1] = temp;
                j--;
            }
        }
        return A;
    }
}
```

And the second file:

```java
public class Tester2 {
    public static void main(String[] args){
        int[] A = new int[400000];
        for(int i = 0; i < A.length; i++){
            A[i] = (int) (Math.random() * Integer.MAX_VALUE);
        }
        double start = System.currentTimeMillis();
        otherInsertion(A);
        System.out.println("Other insertion sort took " + (System.currentTimeMillis() - start) + " milliseconds.");
    }

    public static int[] otherInsertion(int[] A){
        //Check for illegal cases
        if (A == null || A.length == 0){
            throw new IllegalArgumentException("A is not populated");
        }
        for(int i = 0; i < A.length; i++){
            int j = i;
            while(j > 0 && A[j - 1] > A[j]){
                int temp = A[j - 1];
                A[j - 1] = A[j];
                A[j] = temp;
                j--;
            }
        }
        return A;
    }
}
```

And the outputs (with no arguments, just `java Tester1` and `java Tester2`; these were run as 2 separate files in 2 different JVMs):

```
My insertion sort took 37680.0 milliseconds.
Other insertion sort took 86358.0 milliseconds.
```
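As a side note, `System.currentTimeMillis()` returns a `long`, and storing it in a `double` is why the output reads `37680.0`. A sketch of the more common interval-timing idiom uses `System.nanoTime()` with a `long` (the class name and the smaller array size here are my own choices so the example runs quickly; `Arrays.sort` stands in for the sort under test):

```java
import java.util.Arrays;
import java.util.Random;

public class TimingSketch {
    public static void main(String[] args) {
        int[] A = new int[100_000];
        Random rnd = new Random();
        for (int i = 0; i < A.length; i++) {
            A[i] = rnd.nextInt(Integer.MAX_VALUE);
        }
        long start = System.nanoTime(); // monotonic clock, preferred for measuring intervals
        Arrays.sort(A);                 // stand-in for the sort under test
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Sort took " + elapsedMs + " milliseconds.");
    }
}
```

For serious measurements a harness such as JMH is preferable, since a single timed run in a fresh JVM mixes interpreted and JIT-compiled execution.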

Answer

This is the effect of **loop unrolling** optimization along with **common
subexpression elimination**. Depending on the order of array access instructions, JIT can eliminate redundant loads in one case, but not in the other.

Let me explain in detail. In both cases, the JIT unrolls 4 iterations of the inner loop.

E.g. for your case:

```
while (j > 3) {
    if (A[j - 1] > A[j]) {
        int temp = A[j];
        A[j] = A[j - 1];
        A[j - 1] = temp;            \
    }                                A[j - 1] loaded immediately after store
    if (A[j - 2] > A[j - 1]) {      /
        int temp = A[j - 1];
        A[j - 1] = A[j - 2];
        A[j - 2] = temp;            \
    }                                A[j - 2] loaded immediately after store
    if (A[j - 3] > A[j - 2]) {      /
        int temp = A[j - 2];
        A[j - 2] = A[j - 3];
        A[j - 3] = temp;            \
    }                                A[j - 3] loaded immediately after store
    if (A[j - 4] > A[j - 3]) {      /
        int temp = A[j - 3];
        A[j - 3] = A[j - 4];
        A[j - 4] = temp;
    }
    j -= 4;
}
```

Then JIT eliminates redundant array loads, and the resulting assembly looks like

```
0x0000000002d53a70: movslq %r11d,%r10
0x0000000002d53a73: lea 0x0(%rbp,%r10,4),%r10
0x0000000002d53a78: mov 0x10(%r10),%ebx ; ebx = A[j]
0x0000000002d53a7c: mov 0xc(%r10),%r9d ; r9d = A[j - 1]
0x0000000002d53a80: cmp %ebx,%r9d ; if (r9d > ebx) {
0x0000000002d53a83: jle 0x0000000002d539f3
0x0000000002d53a89: mov %r9d,0x10(%r10) ; A[j] = r9d
0x0000000002d53a8d: mov %ebx,0xc(%r10) ; A[j - 1] = ebx
; }
0x0000000002d53a91: mov 0x8(%r10),%r9d ; r9d = A[j - 2]
0x0000000002d53a95: cmp %ebx,%r9d ; if (r9d > ebx) {
0x0000000002d53a98: jle 0x0000000002d539f3
0x0000000002d53a9e: mov %r9d,0xc(%r10) ; A[j - 1] = r9d
0x0000000002d53aa2: mov %ebx,0x8(%r10) ; A[j - 2] = ebx
; }
0x0000000002d53aa6: mov 0x4(%r10),%r9d ; r9d = A[j - 3]
0x0000000002d53aaa: cmp %ebx,%r9d ; if (r9d > ebx) {
0x0000000002d53aad: jle 0x0000000002d539f3
0x0000000002d53ab3: mov %r9d,0x8(%r10) ; A[j - 2] = r9d
0x0000000002d53ab7: mov %ebx,0x4(%r10) ; A[j - 3] = ebx
; }
0x0000000002d53abb: mov (%r10),%r8d ; r8d = A[j - 4]
0x0000000002d53abe: cmp %ebx,%r8d ; if (r8d > ebx) {
0x0000000002d53ac1: jle 0x0000000002d539f3
0x0000000002d53ac7: mov %r8d,0x4(%r10) ; A[j - 3] = r8d
0x0000000002d53acb: mov %ebx,(%r10) ; A[j - 4] = ebx
; }
0x0000000002d53ace: add $0xfffffffc,%r11d ; j -= 4
0x0000000002d53ad2: cmp $0x3,%r11d ; while (j > 3)
0x0000000002d53ad6: jg 0x0000000002d53a70
```

Your instructor's code will look different after loop unrolling:

```
while (j > 3) {
    if (A[j - 1] > A[j]) {
        int temp = A[j - 1];
        A[j - 1] = A[j];
        A[j] = temp;    <-- another store instruction between A[j - 1] accesses
    }
    if (A[j - 2] > A[j - 1]) {
        int temp = A[j - 2];
        A[j - 2] = A[j - 1];
        A[j - 1] = temp;
    }
    ...
```

The JVM will not eliminate the subsequent load of `A[j - 1]`, because there is another store instruction after the previous load of `A[j - 1]` (though in this particular case such an optimization is theoretically possible).

So, the assembly code will have more load instructions, and the performance will be worse:

```
0x0000000002b53a00: cmp %r8d,%r10d ; if (r10d > r8d) {
0x0000000002b53a03: jle 0x0000000002b53973
0x0000000002b53a09: mov %r8d,0xc(%rbx) ; A[j - 1] = r8d
0x0000000002b53a0d: mov %r10d,0x10(%rbx) ; A[j] = r10d
; }
0x0000000002b53a11: mov 0xc(%rbx),%r10d ; r10d = A[j - 1]
0x0000000002b53a15: mov 0x8(%rbx),%r9d ; r9d = A[j - 2]
0x0000000002b53a19: cmp %r10d,%r9d ; if (r9d > r10d) {
0x0000000002b53a1c: jle 0x0000000002b53973
0x0000000002b53a22: mov %r10d,0x8(%rbx) ; A[j - 2] = r10d
0x0000000002b53a26: mov %r9d,0xc(%rbx) ; A[j - 1] = r9d
; }
0x0000000002b53a2a: mov 0x8(%rbx),%r8d ; r8d = A[j - 2]
0x0000000002b53a2e: mov 0x4(%rbx),%r10d ; r10d = A[j - 3]
```
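Incidentally, the textbook formulation of insertion sort sidesteps the question entirely at the source level: it holds the element being inserted in a local variable and shifts larger elements right, so the inner loop does one array load and one store per step instead of a three-line swap. A sketch (equivalent in result to both methods above; the class name is my own):

```java
import java.util.Arrays;

// Insertion sort that keeps the moving element in a local variable,
// shifting larger elements right instead of swapping pairwise.
public class HeldKeySort {
    public static int[] insertionSort(int[] A) {
        if (A == null || A.length == 0) {
            throw new IllegalArgumentException("A is not populated");
        }
        for (int i = 1; i < A.length; i++) {
            int key = A[i];       // element being inserted
            int j = i - 1;
            while (j >= 0 && A[j] > key) {
                A[j + 1] = A[j];  // shift the larger element one slot right
                j--;
            }
            A[j + 1] = key;       // drop the key into its final slot
        }
        return A;
    }

    public static void main(String[] args) {
        int[] A = {5, 2, 9, 1, 3};
        System.out.println(Arrays.toString(insertionSort(A))); // prints [1, 2, 3, 5, 9]
    }
}
```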

Note that if you run the JVM with loop unrolling disabled (`-XX:LoopUnrollLimit=0`), the performance of the two cases is the same.

**P.S.** Full disassembly of both methods is available here, obtained using `-XX:CompileOnly=Test -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly`.