We have designed BIDMachRF, an implementation of Random Forests with high CPU and GPU throughput and full scalability. It is based on parallelism, performing maximal work on each datum, reducing unnecessary data access, sorting, and data compression. BIDMachRF is optimized for large, gigabyte-scale datasets, and our goal is to be 10-100x faster than SciKit-Learn Random Forests and CudaTree on such datasets. BIDMachRF is currently a work in progress. This paper describes the current state of our implementation as well as points for improvement, which we have identified through benchmarks on classical datasets. Our current in-progress version has already been shown to be 5x faster than implementations such as SciKit-Learn on gigabyte-scale data, and we estimate it will be at least 20x faster than those implementations when complete.
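To illustrate the sort- and compression-based approach the abstract alludes to, here is a minimal, hypothetical sketch (not BIDMachRF's actual code or API): (node, feature, quantized value, class) tuples are packed into single 64-bit keys so that one sort groups identical observations together, and split statistics reduce to counting runs. The field widths and helper names are illustrative assumptions only.

```scala
// Hypothetical sketch of a sort-based histogram step for Random Forest training.
// This is NOT BIDMachRF's implementation; field widths are assumed for illustration.
object PackedKeySketch {
  // Pack fields into a 64-bit key: node (16 bits) | feature (16) | value (16) | class (16).
  def pack(node: Int, feature: Int, value: Int, cls: Int): Long =
    (node.toLong << 48) | (feature.toLong << 32) | (value.toLong << 16) | cls.toLong

  def main(args: Array[String]): Unit = {
    // A few (node, feature, quantized value, class) tuples standing in for a mini-batch.
    val tuples = Array((0, 1, 7, 0), (0, 1, 7, 0), (0, 1, 3, 1), (0, 2, 5, 1), (0, 2, 5, 1))
    val keys = tuples.map { case (n, f, v, c) => pack(n, f, v, c) }.sorted

    // After sorting, identical keys are adjacent; run lengths are the histogram counts
    // that a split-evaluation step (e.g. Gini or entropy) would consume.
    val counts = keys.groupBy(identity).toSeq.sortBy(_._1).map { case (k, ks) => (k, ks.length) }
    counts.foreach { case (k, c) => println(f"key=0x$k%016x count=$c") }
  }
}
```

Packing observations into fixed-width keys keeps the working set compact, and sorting turns scattered memory accesses into sequential ones, which is the kind of throughput-oriented design the report describes.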
Title
Optimizing Random Forests on GPU
Published
2014-12-01
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2014-205
Type
Text
Extent
13 p
Archive
The Engineering Library