Currently I am running the N-queens problem at different board sizes to compare how two branching heuristics perform:
1. select the next empty slot
2. select the slot with the minimum number of remaining choices
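A minimal sketch of the two heuristics in Python (the solver structure and names are my own illustration, assuming a column-by-column backtracking search):

```python
def solve_n_queens(n, heuristic="next"):
    """Backtracking N-queens solver; returns a list where entry c is the
    row of the queen in column c, or None if no solution exists."""
    board = {}  # column -> row of the queen placed there

    def legal_rows(col):
        # Rows in `col` that conflict with no already placed queen
        # (columns are distinct by construction, so check rows and diagonals).
        return [r for r in range(n)
                if all(r != q and abs(r - q) != abs(col - c)
                       for c, q in board.items())]

    def pick_column():
        unassigned = [c for c in range(n) if c not in board]
        if heuristic == "next":     # option 1: next empty slot
            return unassigned[0]
        # option 2: slot with the fewest remaining choices
        return min(unassigned, key=lambda c: len(legal_rows(c)))

    def backtrack():
        if len(board) == n:
            return True
        col = pick_column()
        for row in legal_rows(col):
            board[col] = row
            if backtrack():
                return True
            del board[col]
        return False

    return [board[c] for c in range(n)] if backtrack() else None
```

Note that option 2 pays for its better branching decisions by recomputing the choice counts of every unassigned slot at each step, which is where its higher per-move cost comes from.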
So far, option 1 has managed to solve up to size 33, while option 2 has solved up to 114.
Some sizes take a few hundred or a few thousand moves to solve, and then suddenly one takes many millions. Option 2 seems to be better than option 1 at most sizes.
However, because option 2 is more expensive to evaluate per move, the instances that need many millions of moves also take much longer in wall-clock time. It therefore looks like option 2 has better average performance, while option 1 has better worst-case performance.
Such constraint-solving problems are known to be hard to predict because the running time grows exponentially in the worst case. Perhaps one could run different strategies in parallel on multiple cores and take whichever finishes first, improving the worst-case performance significantly that way.
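A rough sketch of that portfolio idea in Python (all names here are my own illustration; for simplicity the workers differ only by a randomized value order rather than by genuinely different strategies):

```python
import random
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def solve(n, seed=None):
    """Row-by-row N-queens backtracker. A seed shuffles the column order
    tried in each row, so every worker explores a different trajectory."""
    rng = random.Random(seed)
    placement = []  # placement[row] = column of the queen in that row

    def backtrack():
        row = len(placement)
        if row == n:
            return True
        cols = list(range(n))
        if seed is not None:
            rng.shuffle(cols)
        for col in cols:
            if all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placement)):
                placement.append(col)
                if backtrack():
                    return True
                placement.pop()
        return False

    return list(placement) if backtrack() else None

def portfolio_solve(n, workers=4):
    """Run differently seeded searches in parallel and return the first
    answer. Processes are used so the searches can use separate cores."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(solve, n, seed) for seed in range(workers)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        # Cancel seeds that never started; already-running workers finish
        # in the background when the pool shuts down.
        pool.shutdown(wait=False, cancel_futures=True)
        return next(iter(done)).result()
```

The hope is that the portfolio's worst case is roughly the minimum over the workers' runtimes, so one unlucky search trajectory no longer dominates.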