We present our annotation tool, BeaverDam, for frame-by-frame bounding-box annotation in videos. The tool has been used both in conjunction with Amazon Mechanical Turk and standalone to annotate datasets for Berkeley Deep Drive, BMW, DeepScale, and XYSense. Building on ideas from previous work in this area, we present our improvements to and optimizations of their user interfaces. We also introduce the idea of tuning such an annotation tool to reduce researchers' friction, which we argue is just as important as streamlining workers' workflows given the high cost of researcher time. We share our experiences with existing tools, and our ideas (and code) for making the experience better for researchers. We hope our findings and contributions reduce the cost of producing labeled video datasets and introduce ideas that will improve such annotation software in the future.
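As a rough illustration of the kind of data a frame-by-frame bounding-box annotation tool produces, the sketch below shows one possible per-object record. The field names (video, object_id, label, frames, bbox) and the JSON layout are assumptions chosen for illustration, not BeaverDam's actual export schema.

```python
# Illustrative sketch of a frame-by-frame bounding-box annotation record.
# NOTE: all field names and the overall structure are assumptions for
# illustration only; they are not BeaverDam's actual export format.
import json

annotation = {
    "video": "dashcam_0001.mp4",  # hypothetical source video
    "object_id": 3,               # track identity carried across frames
    "label": "car",
    "frames": [
        # One axis-aligned box per frame: x, y give the top-left corner
        # in pixels; w, h give width and height.
        {"frame": 120, "bbox": {"x": 412, "y": 233, "w": 96, "h": 54}},
        {"frame": 121, "bbox": {"x": 415, "y": 234, "w": 96, "h": 54}},
    ],
}

print(json.dumps(annotation, indent=2))
```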
Title
BeaverDam: Video Annotation Tool for Computer Vision Training Labels
Published
2016-12-08
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2016-193
Type
Text
Extent
27 p.
Archive
The Engineering Library