I'm going to try kNN classification on a dataset containing, among other features, one called "time of day". In the context of the application, Monday 23:58 is just as close to Tuesday 00:02 as it is to Friday 00:04. It's the angle of the hour hand on a clock's face that matters. If not for that one circular feature, Euclidean distance would do.
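One standard trick for this (an illustration, not something from the thread) is to embed the circular feature as a point on the unit circle; after that, plain Euclidean distance respects the wrap-around at midnight:

```python
import math

def embed_time_of_day(hours):
    """Map a time of day in [0, 24) onto the unit circle so that
    23:58 and 00:02 end up close together under Euclidean distance."""
    angle = 2 * math.pi * hours / 24.0
    return (math.cos(angle), math.sin(angle))

# 23:58 vs 00:02 -- four minutes apart, across midnight
near = math.dist(embed_time_of_day(23 + 58 / 60), embed_time_of_day(2 / 60))
# 00:02 vs 12:02 -- half a day apart
far = math.dist(embed_time_of_day(2 / 60), embed_time_of_day(12 + 2 / 60))
assert near < far
```

The two embedded coordinates simply replace the raw "time of day" column, and any Euclidean-distance kNN then treats the feature as circular for free.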
So far I'm aware of
kNN works a little differently. It is an instance-based method, meaning that the actual model consists of the training instances themselves. For each set of test samples, a distance matrix against the training set is computed anew <- is this where you are?
You cannot define kNN by a distance matrix alone. At least I know of no way to compute, given only a test vector, its distance to the neighbours without having the corresponding set of training vectors.
If, however, you do have a distance matrix, then take a look at the following similar question: "Find K nearest neighbors, starting from a distance matrix".
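To make the point concrete (a sketch in Python/scikit-learn rather than R, and not part of the original answer): a classifier can consume a precomputed distance matrix directly, but note that at prediction time you still need the distances from each test point to every training point, so the training set never really disappears:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 3))
y_train = (X_train[:, 0] > 0).astype(int)   # toy labels
X_test = rng.normal(size=(5, 3))

clf = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
clf.fit(cdist(X_train, X_train), y_train)   # n_train x n_train distances

# n_test x n_train distances -- requires the training vectors
D_test = cdist(X_test, X_train)
pred = clf.predict(D_test)
```

The `metric="precomputed"` option exists precisely because sometimes only distances, not feature vectors, are available.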
But the documentation explicitly says:
k.nearest.neighbors(i, distance_matrix, k = 5)
i is of the numeric class and is a row index into the distance_matrix.
distance_matrix is an n x n matrix.
k is of the numeric class and represents the number of neighbours that the function will return.
This imho is similar to:
apply(dm, 1, function(d) {
  nn <- order(d)[1:k]                  # indices of the k smallest distances
  names(which.max(table(labels[nn])))  # majority vote among their labels
})
Given that you have a distance matrix, you have already reinvented 80% of kNN.
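The same idea, sketched as a small self-contained Python function (an illustration, not the R package's implementation): given an n_test x n_train distance matrix and the training labels, classification is just sorting each row and taking a majority vote:

```python
from collections import Counter
import numpy as np

def knn_from_distance_matrix(D, labels, k=5):
    """Classify each row of the n_test x n_train distance matrix D
    by majority vote among the k nearest training labels."""
    preds = []
    for row in D:
        nn = np.argsort(row)[:k]               # indices of k smallest distances
        votes = Counter(labels[i] for i in nn)
        preds.append(votes.most_common(1)[0][0])
    return preds

# toy check: test point 0 sits near the "a"-labelled training points
D = np.array([[0.1, 0.2, 0.9, 1.0],
              [0.8, 0.9, 0.1, 0.2]])
labels = ["a", "a", "b", "b"]
print(knn_from_distance_matrix(D, labels, k=3))  # -> ['a', 'b']
```

The remaining 20% is deciding how to break ties and how to weight votes by distance, which is where library implementations differ.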