Multi-scale Deep Nearest Neighbors



Chauhan, Abhijeet




In this thesis, we aim to learn a deep embedding space suitable for k-nearest-neighbour (k-NN) classification. Our approach is based on minimizing the leave-one-out 1-NN classification error in the embedding space. Directly optimizing this rule is intractable because it is discontinuous. We therefore propose Multi-scale Deep Nearest Neighbour (MsDNN), a differentiable loss function that maximizes the expected sample margin of every training sample. The output of MsDNN is an embedding space, which we evaluate from two angles. From the classification view, we run a k-NN classifier at test time and report the classification accuracy. Classification accuracy alone, however, does not tell the whole story about the quality of an embedding space, so we also run k-means clustering in the embedding space. Analogous to hierarchical clustering, subclasses may exist at different scales, and our method provides a mechanism to target subclasses at different scales.


Computer Science




Carleton University

Thesis Degree Name: Master of Computer Science

Thesis Degree Level:

Thesis Degree Discipline: Computer Science

Parent Collection: Theses and Dissertations

Items in CURVE are protected by copyright, with all rights reserved, unless otherwise indicated. They are made available with permission from the author(s).