I have a pandas DataFrame with some categorical columns. Some of these contain non-integer values.
I currently want to apply several machine learning models to this data. With some models, it is necessary to preprocess the data to get better results, for example by converting categorical variables into dummy/indicator variables. Indeed, pandas has a function called get_dummies for that purpose. However, the columns this function returns depend on the data it sees. So if I call get_dummies on the training data and then call it again on the test data, the resulting columns can differ, because a categorical column in the test data may contain only a subset (or a different set) of the values present in the training data.
Therefore, I am looking for other methods to do one-hot encoding.
What are possible ways to do one-hot encoding in Python (pandas/sklearn)?
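To make the problem concrete, here is a minimal sketch (with hypothetical toy data) showing how get_dummies produces different columns on train and test when a category is missing from the test split:

```python
import pandas as pd

# Hypothetical toy data: the test split is missing the "green" category.
train_df = pd.DataFrame({"color": ["red", "green", "blue"]})
test_df = pd.DataFrame({"color": ["red", "blue"]})

train_dummies = pd.get_dummies(train_df)
test_dummies = pd.get_dummies(test_df)

print(list(train_dummies.columns))  # ['color_blue', 'color_green', 'color_red']
print(list(test_dummies.columns))   # ['color_blue', 'color_red']
```

The two frames have incompatible shapes, so a model fit on `train_dummies` cannot be applied directly to `test_dummies`.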
In the past, I've found the easiest way to deal with this problem is to use
get_dummies and then enforce that the columns match up between test and train. For example, you might do something like:
```python
import numpy as np
import pandas as pd

# get_dummies is a top-level pandas function, not a DataFrame method
train = pd.get_dummies(train_df)
test = pd.get_dummies(test_df)

# get the columns in train that are not in test
col_to_add = np.setdiff1d(train.columns, test.columns)

# add these columns to test, setting them equal to zero
for c in col_to_add:
    test[c] = 0

# select and reorder the test columns using the train columns
test = test[train.columns]
```
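As an aside, the same alignment can be done in a single call with `DataFrame.reindex`, which adds any missing columns (filled with 0), drops extras, and imposes the train column order. A small sketch on hypothetical frames:

```python
import pandas as pd

# Hypothetical frames, already one-hot encoded with pd.get_dummies
train = pd.get_dummies(pd.DataFrame({"color": ["red", "green", "blue"]}))
test = pd.get_dummies(pd.DataFrame({"color": ["red", "blue"]}))

# Align test to train: missing columns become 0, extras are dropped,
# and the column order matches train exactly.
test = test.reindex(columns=train.columns, fill_value=0)

print(list(test.columns) == list(train.columns))  # True
```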
This will discard information about labels that you haven't seen in the training set, but will enforce consistency. If you're doing cross-validation using these splits, I'd recommend two things. First, run
get_dummies on the whole dataset to get all of the columns (instead of only on the training set, as in the code above). Second, use StratifiedKFold for cross-validation so that your splits contain the relevant labels.
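A minimal sketch of that cross-validation setup, on a hypothetical dataset: encoding the full frame once before splitting guarantees every fold shares the same columns.

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Hypothetical dataset with a categorical feature and a class label
df = pd.DataFrame({
    "color": ["red", "green", "blue", "red", "green", "blue"],
    "label": [0, 1, 0, 1, 0, 1],
})

# Encode the whole dataset first, so all categories appear in the columns
X = pd.get_dummies(df.drop(columns="label"))
y = df["label"]

# Stratified splits keep the class proportions in each fold
skf = StratifiedKFold(n_splits=2)
for train_idx, test_idx in skf.split(X, y):
    X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
    # Both folds share the same columns because encoding happened before splitting
    assert list(X_train.columns) == list(X_test.columns)
```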