Intuitive Appliance Identification using Image Matching in Smart Buildings

The number of smart appliances is rapidly increasing as the Internet of Things grows. However, identifying a specific appliance to interact with in a building is becoming increasingly difficult. Most work on simplifying contextual appliance identification and selection requires either extra effort from users or dedicated infrastructure deployments. In this paper, we present an intuitive system that lets users "look up" appliances in a smart building using an image. It constructs an annotated 3D visual model of a building interior using RGB-D cameras and matches a user-provided image against the model to determine which appliances appear in the image. Our system matched 98% of the images in a public robot-collected dataset and achieved 100% recall and precision on them. We also deployed the system in our lab with human-captured RGB-D videos and images, which have more degrees of freedom and noise than robot-collected data, and matched 71% of the images. Of the matched images, 63% achieved at least 80% recall, and 78% achieved at least 80% precision.
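The matching step described above, locating a query image's visual features among descriptors extracted from the 3D model, can be illustrated with a minimal nearest-neighbor descriptor matcher using Lowe's ratio test. This is a hedged sketch for intuition only, not the paper's actual pipeline; the function name and the toy data are hypothetical:

```python
import numpy as np

def match_descriptors(query, model, ratio=0.75):
    """Match each query descriptor to a model descriptor.

    query: (N, D) array of query-image feature descriptors.
    model: (M, D) array of descriptors from the 3D model's keyframes.
    Returns a list of (query_index, model_index) pairs that pass
    Lowe's ratio test (nearest neighbor clearly closer than the
    second nearest), which rejects ambiguous matches.
    """
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(model - q, axis=1)  # distance to every model descriptor
        j1, j2 = np.argsort(dists)[:2]             # two nearest neighbors
        if dists[j1] < ratio * dists[j2]:          # accept only unambiguous matches
            matches.append((i, int(j1)))
    return matches

# Toy example: 3 query descriptors that are noisy copies of model rows 0, 2, 4.
rng = np.random.default_rng(0)
model = rng.normal(size=(5, 8))
query = model[[0, 2, 4]] + 0.01 * rng.normal(size=(3, 8))
print(match_descriptors(query, model))  # → [(0, 0), (1, 2), (2, 4)]
```

In a real system the descriptors would come from a feature extractor such as ORB or SIFT, and the matched model descriptors (with their known 3D positions) would then anchor the query image inside the annotated building model.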