How Do Humans Sketch Objects?

Abstract

Humans have used sketching to depict the visual world since prehistoric times. Even today, sketching is possibly the only rendering technique readily available to all humans. This paper is the first large-scale exploration of human sketches. We analyze the distribution of non-expert sketches of everyday objects such as 'teapot' or 'car'. We ask humans to sketch objects of a given category and gather 20,000 unique sketches evenly distributed over 250 object categories. With this dataset we perform a perceptual study and find that humans can correctly identify the object category of a sketch 73% of the time. We compare human performance against computational recognition methods: we develop a bag-of-features sketch representation and use multi-class support vector machines, trained on our sketch dataset, to classify sketches. The resulting recognition method identifies unknown sketches with 56% accuracy (chance is 0.4%). Based on this computational model, we demonstrate an interactive sketch recognition system. We release the complete crowd-sourced dataset of sketches to the community.


Downloads

Note: the temporal order of strokes is encoded in the SVG/Matlab dataset. Each stroke is a Bézier spline, and strokes drawn first appear at the top of a file. WhatsMySketch is available on the App Store.
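Since document order encodes drawing order, the strokes of one sketch can be recovered in temporal sequence simply by reading the SVG's path elements top to bottom. A minimal sketch of this, using Python's standard `xml.etree` parser; the filename and the assumption that each stroke is a single `<path>` element with its spline in the `d` attribute are illustrative, not a spec of the dataset:

```python
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def load_strokes(svg_path):
    """Return the path data of each stroke, in the order they were drawn.

    Strokes drawn first appear first in the file, so plain document
    order preserves temporal order. Each 'd' attribute holds one
    stroke as a cubic Bezier spline ("C" commands).
    """
    tree = ET.parse(svg_path)
    return [p.get("d") for p in tree.getroot().iter(SVG_NS + "path")]
```

The returned list can then be replayed stroke by stroke, e.g. to animate a sketch or to feed partial sketches to a recognizer.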

BibTeX

@article{eitz2012hdhso,
author={Eitz, Mathias and Hays, James and Alexa, Marc},
title={How Do Humans Sketch Objects?},
journal={ACM Trans. Graph. (Proc. SIGGRAPH)},
year={2012},
volume={31},
number={4},
pages={44:1--44:10}
}

Human sketch recognition

Human classification results on the full dataset. In the left column of each category page, we show the sketches that were correctly classified. In the middle column, we show the sketches that actually belong to the category but were not recognized. In the last column, we show the false positives, i.e., the sketches that humans incorrectly predicted to belong to the category.

Human Classification Results »

Computational recognition

Computational classification results on the test dataset using the best-performing SVM model described in the paper. In the first column of each category page, we show five samples from the training set. In the second column, we show the sketches that were correctly classified. In the third column, we show the sketches that actually belong to the category but were not recognized. In the last column, we show the false positives, i.e., the sketches that were incorrectly predicted to belong to that category.
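The classification step itself can be sketched as follows. This is not the paper's pipeline: the real features are bag-of-features histograms over local sketch descriptors, while here random vectors stand in for them purely to illustrate training a multi-class SVM with scikit-learn; all sizes are toy values:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder bag-of-features histograms: in the paper each sketch is
# encoded as a histogram over a visual vocabulary of local features.
# Random data is used here only to show the classification step.
rng = np.random.default_rng(0)
n_train, n_words, n_categories = 200, 500, 4  # toy sizes, not the paper's
X_train = rng.random((n_train, n_words))
y_train = rng.integers(0, n_categories, n_train)

# Multi-class SVM (scikit-learn handles the multi-class reduction
# internally, one-vs-one by default).
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X_train, y_train)

# Predict categories for a few "unknown" sketches.
pred = clf.predict(X_train[:5])
```

In practice the kernel and its parameters would be chosen by cross-validation on held-out sketches, as is standard for this kind of model selection.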

Computational Classification Results »

t-SNE layouts

For each category, we apply t-SNE to reduce the sketch feature space described in the paper to two dimensions. We plot the result as a 2D layout of the sketches that nicely illustrates the variety of sketching styles within each category.
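Such a layout can be produced with an off-the-shelf t-SNE implementation. A minimal sketch with scikit-learn, using random vectors as a stand-in for the actual sketch descriptors (the dimensions and counts are placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy stand-in for the sketch feature vectors of one category.
rng = np.random.default_rng(0)
features = rng.random((100, 64))  # 100 sketches, 64-D descriptors (placeholder)

# Embed into two dimensions; perplexity must stay below the sample count.
layout = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
# 'layout' now holds one 2D position per sketch, which can be used to
# place sketch thumbnails on a canvas.
```

Overlapping thumbnails would still need to be resolved when rendering, e.g. by snapping positions to a grid.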

t-SNE Layouts »

Representative sketches

For each category, we compute a representative, iconic sketch. We first cluster the category's sketches in feature space using mean shift. Then, for each cluster, we compute the average descriptor and select its nearest neighbor among the cluster's sketches as the cluster representative.
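The two steps above can be sketched with scikit-learn's mean shift implementation. Random vectors again stand in for the per-category sketch descriptors; bandwidth selection and descriptor extraction are left at library defaults rather than reproducing the paper's settings:

```python
import numpy as np
from sklearn.cluster import MeanShift

# Placeholder descriptors for the sketches of one category.
rng = np.random.default_rng(0)
descriptors = rng.random((80, 32))  # 80 sketches, 32-D descriptors (placeholder)

# Step 1: cluster the category with mean shift (bandwidth estimated
# automatically by scikit-learn).
labels = MeanShift().fit_predict(descriptors)

# Step 2: for each cluster, average its descriptors and take the
# nearest actual sketch to that mean as the cluster representative.
representatives = {}
for c in np.unique(labels):
    members = np.flatnonzero(labels == c)
    mean_desc = descriptors[members].mean(axis=0)
    dists = np.linalg.norm(descriptors[members] - mean_desc, axis=1)
    representatives[c] = int(members[np.argmin(dists)])
```

Using the nearest neighbor of the cluster mean, rather than the mean itself, guarantees the representative is an actual human-drawn sketch rather than a blend of several.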

Representative sketches »