Using the following with TF 0.9.0rc0 on roughly 60,000 (train) and 26,000 (test) records with 145 coded (1/0) columns, trying to predict 1 or 0 for class identification..
classifier_TensorFlow = learn.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=2, steps=100)
WARNING:tensorflow:TensorFlowDNNClassifier class is deprecated. Please consider using DNNClassifier as an alternative.
score = metrics.accuracy_score(y_test, classifier_TensorFlow.predict(X_test))
print (metrics.confusion_matrix(y_test, X_pred_class))
[ 1992 15]]
classifier_TensorFlow = learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=2)
I don't think it is a bug. From the source code of DNNClassifier I can tell that its usage differs from TensorFlowDNNClassifier: the constructor of DNNClassifier no longer has a steps param; instead, steps is an argument of fit():
def __init__(self, hidden_units, feature_columns=None, model_dir=None, n_classes=2, weight_column_name=None, optimizer=None, activation_fn=nn.relu, dropout=None, config=None)
def fit(self, x=None, y=None, input_fn=None, steps=None, batch_size=None, monitors=None):
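In other words, the step budget is now passed at training time rather than at construction. A minimal sketch of the new usage (assuming the same X_train/y_train arrays as in the question; the variable names are illustrative):

classifier_TensorFlow = learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=2)
# steps now goes to fit(); if you omit it, steps defaults to None and training never stops
classifier_TensorFlow.fit(X_train, y_train, steps=100)
score = metrics.accuracy_score(y_test, classifier_TensorFlow.predict(X_test))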
For the "it hangs with no completion?", in the doc of the fit() method of BaseEstimator it is explained that if steps is
None (as the value by default), the model will train forever.
I still don't get why you would want to train a model forever. My guess is that the creators think this is better for the classifier when you want early stopping on validation data, but as I said, that is only a guess.
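If early stopping is what you are after, the monitors argument of fit() is the hook for it. A rough sketch using ValidationMonitor, which lives in tf.contrib.learn.monitors in later contrib.learn releases; it may not exist (or may differ) in 0.9.0rc0, and X_val/y_val are hypothetical hold-out arrays:

import tensorflow as tf

validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(
    x=X_val,
    y=y_val,
    every_n_steps=50,           # evaluate on the hold-out set every 50 training steps
    early_stopping_rounds=200)  # stop once the eval metric hasn't improved for 200 steps

# with steps=None the model keeps training until the monitor triggers early stopping
classifier_TensorFlow.fit(X_train, y_train, monitors=[validation_monitor])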
As you can see, DNNClassifier doesn't give any feedback the way the deprecated TensorFlowDNNClassifier did. Feedback is supposed to be configurable through the 'config' param in the constructor of DNNClassifier: you pass a RunConfig object as config and set its verbose param. Unfortunately, I tried setting it so I could see the progress of the loss, but I had no luck.
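One thing you could try instead (an assumption on my part, not something the answer above verified on 0.9.0rc0): contrib.learn estimators report the training loss through tf.logging, so raising the logging verbosity before calling fit() usually surfaces the per-step loss on stdout:

import tensorflow as tf

tf.logging.set_verbosity(tf.logging.INFO)  # show INFO-level messages, including the loss
classifier_TensorFlow.fit(X_train, y_train, steps=100)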
I recommend taking a look at the latest blog post by Yuan Tang, one of the creators of skflow (aka tf.learn), here.