uswer1242 - 13 days ago
Java Question

Reducing the amount of RAM a BFS algorithm takes

I have written a 2x2x2 Rubik's cube solver that uses a breadth-first search algorithm to solve a cube position entered by the user. The program does solve the cube. However, when I enter a hard position, one whose solution lies deep in the search, I run out of heap space. My computer only has 4 GB of RAM, and I give 3 GB of it to the program when I run it. I was wondering what I could do to reduce the amount of RAM the search takes, possibly by changing a few aspects of the BFS.

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import java.util.HashSet;
import java.util.Set;
import java.util.Stack;

static private void solve(Cube c) {
    Set<Cube> cubesFound = new HashSet<Cube>();
    cubesFound.add(c);

    Stack<Cube> s = new Stack<Cube>();
    s.push(c);

    Set<Stack<Cube>> initialPaths = new HashSet<Stack<Cube>>();
    initialPaths.add(s);

    solve(initialPaths, cubesFound);
}

static private void solve(Set<Stack<Cube>> livePaths, Set<Cube> cubesFoundSoFar) {
    System.out.println("livePaths size:" + livePaths.size());
    int numDupes = 0;

    Set<Stack<Cube>> newLivePaths = new HashSet<Stack<Cube>>();

    for (Stack<Cube> currentPath : livePaths) {

        Set<Cube> nextStates = currentPath.peek().getNextStates();

        for (Cube next : nextStates) {
            if (currentPath.size() > 1 && next.isSolved()) {
                System.out.println("Path length:" + currentPath.size());
                System.out.println("Path:" + currentPath);
                return;
            } else if (!cubesFoundSoFar.contains(next)) {
                // Extend a copy of the current path with the new state.
                Stack<Cube> newCurrentPath = new Stack<Cube>();
                newCurrentPath.addAll(currentPath);
                newCurrentPath.push(next);
                newLivePaths.add(newCurrentPath);
                cubesFoundSoFar.add(next);
            } else {
                numDupes += 1;
                String storeStates = "positions.txt";
                try {
                    PrintWriter outputStream = new PrintWriter(storeStates);
                    outputStream.println(cubesFoundSoFar);
                    outputStream.close();
                } catch (FileNotFoundException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }
    }
    System.out.println("Duplicates found " + numDupes + ".");
    solve(newLivePaths, cubesFoundSoFar);
}

That is the BFS, but I fear it's not all the information needed to understand what is going on, so here is the link to the file with all the code.


By definition, "best-first search" keeps the entire search frontier of possible paths through the space.

That can be exponentially large. With the 3x3x3 Rubik's cube, I think there are 12? possible moves at each point, so a 10-move solution sequence arguably requires 12^10 combinations, which is well over a billion (10^9). With this many states, you'll want to minimize the size of each state to minimize total storage. (Uh, you actually print all the states? "outputStream.println(cubesFoundSoFar);" Isn't that a vast amount of output?)
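To make "minimize the size of the state" concrete, here is one sketch (not from the original post; the 5-bit packing layout and the array-based cube representation are assumptions): a 2x2x2 state can be packed into a single long, since each of the eight corner cubies needs a 3-bit position (0-7) and a 2-bit orientation (0-2), i.e. 8 × 5 = 40 bits. Storing a set of longs instead of a set of full Cube objects cuts the per-state footprint dramatically.

```java
import java.util.Arrays;

// Hypothetical compact encoding: pack 8 corner cubies, each with a
// 3-bit position (0-7) and a 2-bit orientation (0-2), into one long.
// A visited set of these values is far smaller than a Set<Cube> of
// heavyweight objects.
class PackedCube {
    static long encode(int[] positions, int[] orientations) {
        long packed = 0L;
        for (int i = 0; i < 8; i++) {
            packed = (packed << 5) | (positions[i] << 2) | orientations[i];
        }
        return packed;
    }

    static void decode(long packed, int[] positions, int[] orientations) {
        for (int i = 7; i >= 0; i--) {   // unpack in reverse insertion order
            orientations[i] = (int) (packed & 0x3);
            positions[i] = (int) ((packed >> 2) & 0x7);
            packed >>= 5;
        }
    }
}
```

Even then, a HashSet<Long> pays heavy boxing overhead per entry; a primitive long hash set from a third-party collections library would get closer to 8 bytes per stored state.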

With 2x2x2, you only have 8 possible moves at each point. I don't know the solution lengths for random problems here. If it is still length 10, you get 8^10, which is still pretty big.

Now, many move sequences lead to the same cube configuration. To recognize this, you need to check that a generated move does not regenerate a position already visited. You seem to be doing that (good!) and tracking the number of hits; I'd expect that hit count to be pretty high, as many paths should lead to the same configuration.

What you don't show is how you score each move sequence to guide the search: which node does it expand next? This is where "best" comes into play. If you don't have any guidance (e.g., all enumerated move sequences have the same value), you truly will wander over a huge space, because all move sequences are equally good. What guides your solver to a solution? You need something like a priority queue over nodes, with priority determined by score, and I don't see that in the code you presented here.
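A minimal runnable sketch of that priority-queue loop. Since the poster's Cube class isn't shown, a toy state space stands in for it here (int states, with "+3" and "*2" as the two moves); with the score equal to the move count so far, the queue expands states in shortest-path-first order:

```java
import java.util.*;

// Best-first search skeleton. The int states and the two toy "moves"
// (+3, *2) are stand-ins for Cube and getNextStates(); with score equal
// to moves-so-far, this pops nodes in shortest-path-first order.
class BestFirstSketch {
    static final class Node {
        final int state, moves;
        final Node parent;   // kept so the solution path can be rebuilt
        Node(int state, int moves, Node parent) {
            this.state = state; this.moves = moves; this.parent = parent;
        }
    }

    // Returns the minimum number of moves from state 1 to `target`.
    static int solveLength(int target) {
        PriorityQueue<Node> frontier =
            new PriorityQueue<>(Comparator.comparingInt(n -> n.moves));
        Set<Integer> visited = new HashSet<>();
        frontier.add(new Node(1, 0, null));
        visited.add(1);
        while (!frontier.isEmpty()) {
            Node cur = frontier.poll();
            if (cur.state == target) return cur.moves;
            for (int next : new int[]{cur.state + 3, cur.state * 2}) {
                if (next <= target && visited.add(next)) {
                    frontier.add(new Node(next, cur.moves + 1, cur));
                }
            }
        }
        return -1;   // target not reachable
    }
}
```

The structural point is the Comparator: swap in a smarter score function and the same loop becomes an informed search instead of plain BFS.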

I don't know what a great heuristic for scoring the quality of a move sequence would be, but you can start by scoring a move sequence by the number of moves it takes to arrive there. The next "best move" to try is then the one at the end of the shortest path. That will probably help immensely.

(A simple enhancement that might work is to count the number of colors on a face; 3 colors hint [is this true?] that it might take 3-1 --> 2 moves minimum to remove the wrong colors. Then the score might be #moves + #facecolors - 1, to estimate the number of moves to a solution; clearly you want the shortest sequence of moves. This might be a so-called "admissible" heuristic score.)
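As a sketch of that enhancement (the 4-sticker face arrays are an assumed representation, and whether the bound is truly admissible is the same open question flagged in the parenthetical above):

```java
import java.util.HashSet;
import java.util.Set;

// Face-color heuristic sketch: a face showing k distinct colors suggests
// at least k - 1 more moves. Taking the max over faces keeps the estimate
// conservative. Faces are assumed to be 4-element sticker-color arrays.
class FaceColorHeuristic {
    static int estimate(int[][] faces) {
        int worst = 0;
        for (int[] face : faces) {
            Set<Integer> colors = new HashSet<>();
            for (int sticker : face) colors.add(sticker);
            worst = Math.max(worst, colors.size() - 1);
        }
        return worst;
    }

    // Priority-queue score: moves so far plus the estimated moves remaining.
    static int score(int movesSoFar, int[][] faces) {
        return movesSoFar + estimate(faces);
    }
}
```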

You'll also have to adjust your scheme for detecting duplicate move sequences. When you find an already-encountered state, that state will presumably have attached to it the score (move count) it took to reach it. When you get a hit, you've found another way to get to that same state... but the score for the new path may be smaller than what is recorded in the state. In that case, you need to revise the score of the discovered duplicate state to the smaller new score. This way a path/state with score 20 may in fact be discovered to have a score of 10, which means it has suddenly improved.
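A sketch of that bookkeeping, assuming states have been packed into longs (any hashable encoding works). A map from state to best-known move count replaces the plain visited set:

```java
import java.util.HashMap;
import java.util.Map;

// On a duplicate hit, keep the smaller move count. recordOrImprove returns
// true when the state is new or the new path improves on the recorded
// score, i.e. when the caller should (re-)expand that state.
class ScoreTable {
    final Map<Long, Integer> bestMoves = new HashMap<>();

    boolean recordOrImprove(long state, int moves) {
        Integer previous = bestMoves.get(state);
        if (previous == null || moves < previous) {
            bestMoves.put(state, moves);
            return true;
        }
        return false;   // duplicate with no improvement
    }
}
```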