Object detection is a well-studied problem in computer vision. One of its basic tasks is to draw tight bounding boxes around instances of target classes in a set of images. The computer vision literature has focused primarily on intensity images, with less emphasis on depth data. In this report we address the challenge of detecting 10 common household items (bed, chair, etc.) in RGB-D images captured with the Kinect. We operate on the recently released NYU-Depth V2 dataset. Our algorithm augments the deformable parts model with a set of vector-quantized depth features that are, to the best of our knowledge, novel on this dataset.
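The abstract does not spell out how the depth features are built, but one common reading of "vector-quantized depth features" is a bag-of-words style encoding: cluster depth patches into a codebook, then histogram codeword assignments so the result can be appended to a window's appearance features. The sketch below illustrates that idea under assumed choices (8x8 patches, a 64-word k-means codebook, scikit-learn's KMeans); it is an illustrative interpretation, not the report's actual pipeline.

```python
# Hypothetical sketch of vector-quantized depth features: patch size, codebook
# size, and the histogram pooling below are illustrative assumptions, not the
# authors' method as described in the report.
import numpy as np
from sklearn.cluster import KMeans

PATCH = 8   # assumed patch side length in pixels
K = 64      # assumed codebook size

def depth_patches(depth, patch=PATCH):
    """Flatten non-overlapping patch x patch blocks of a depth map into vectors."""
    h, w = depth.shape
    h, w = h - h % patch, w - w % patch
    blocks = depth[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

def learn_codebook(depth_maps, k=K):
    """Cluster depth patches from training images into a k-word codebook."""
    patches = np.vstack([depth_patches(d) for d in depth_maps])
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(patches)

def quantized_histogram(depth, codebook):
    """Assign each patch to its nearest codeword and return a normalized
    histogram, which could be concatenated with a window's HOG features."""
    words = codebook.predict(depth_patches(depth))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Example with synthetic depth maps standing in for Kinect frames.
train = [np.random.rand(120, 160).astype(np.float32) for _ in range(5)]
cb = learn_codebook(train)
print(quantized_histogram(train[0], cb)[:8])
```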
Title
Object Detection in RGB-D Indoor Scenes
Published
EECS Department, University of California at Berkeley, Berkeley, California, 2013-01-14
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2013-3
Type
Text
Extent
17 p.
Archive
The Engineering Library
Usage Statement
Researchers may make free and open use of the UC Berkeley Library’s digitized public domain materials. However, some materials in our online collections may be protected by U.S. copyright law (Title 17, U.S.C.). Use or reproduction of materials protected by copyright beyond that allowed by fair use (Title 17, U.S.C. § 107) requires permission from the copyright owners. The use or reproduction of some materials may also be restricted by terms of University of California gift or purchase agreements, privacy and publicity rights, or trademark law. Responsibility for determining rights status and permissibility of any use or reproduction rests exclusively with the researcher. To learn more or make inquiries, please see our permissions policies (https://www.lib.berkeley.edu/about/permissions-policies).