I am using scikit-learn in Python to classify my instances with svm.SVC; however, for some combinations of parameters, fitting never stops. Is this because the algorithm needs more time? Or is it possible that the algorithm does not converge to an extremum point?
Note that I am not making any assumptions about my data. Given that, does the SVM always converge for arbitrary datasets?
It should always converge, unless there are numerical problems.
Make sure your data is properly scaled. Having features whose values span different orders of magnitude is a bad idea. You might want to normalize all features to the range [-1, +1], especially for problems with more than 100 features.
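A minimal sketch of that scaling step, using scikit-learn's `MinMaxScaler` in a pipeline in front of `SVC` (the dataset and parameter values are illustrative, not from the question):

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Synthetic data standing in for the user's dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# MinMaxScaler maps each feature into [-1, +1] before the SVM sees it,
# so no single feature dominates the kernel computation.
clf = make_pipeline(MinMaxScaler(feature_range=(-1, 1)), SVC(C=1.0, gamma="scale"))
clf.fit(X, y)
score = clf.score(X, y)
print(score)
```

Putting the scaler inside a `Pipeline` also ensures the same transformation is applied at prediction time, which matters once you call `predict` on new data.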
Q: The program keeps running (with output, i.e. many dots). What should I do?
In theory, libsvm guarantees convergence. Therefore, this means you are handling an ill-conditioned situation (e.g. too large/small parameters), so numerical difficulties occur.
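If you simply need `fit` to return in bounded time while you hunt for the ill-conditioned parameters, scikit-learn's `SVC` accepts a `max_iter` cap (the default of -1 means no limit). A sketch, with deliberately extreme illustrative parameter values:

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic data standing in for the user's dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Extreme C and gamma are the kind of values that can make the solver
# grind; max_iter=1000 guarantees fit() terminates regardless.
clf = SVC(C=1e10, gamma=100.0, max_iter=1000)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # hitting the cap raises ConvergenceWarning
    clf.fit(X, y)

# fit_status_ is 0 if the solver converged, 1 if it stopped at max_iter.
print(clf.fit_status_)
```

A capped model may be underfit, so treat this as a diagnostic tool rather than a fix; the real remedy is scaling the data and choosing saner C/gamma values.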