Search and rescue is often a slow process, leaving people trapped, stranded, or, worse, killed during natural disasters such as earthquakes, floods, and hurricanes. To provide better rescue assistance and achieve higher survival rates, we need efficient, cost-effective, small crawling robots that can carry out search and rescue operations in disaster situations, especially in spaces that are inaccessible to larger robots or dangerous for human rescuers. To that end, I worked with my capstone project members and advisors at UC Berkeley on improving the walking speed and autonomous behaviour of OctoRoACH, an inexpensive and robust palm-sized eight-legged robot developed by the Biomimetic Millisystems Lab. Our results show that reinforcement learning algorithms can improve the walking speed of existing search and rescue robots across different terrains, helping to save more lives in disaster situations.
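To give a flavour of what automatic gait tuning by reinforcement learning can look like, the sketch below is a generic illustration, not the report's implementation: a finite-difference policy-gradient loop that perturbs a gait-parameter vector, estimates how walking speed changes, and steps uphill. The function `measure_walking_speed` is a hypothetical stand-in for a timed trial on the physical robot.

```python
import numpy as np

def measure_walking_speed(params):
    """Hypothetical stand-in: run a timed trial on the robot with the
    given gait parameters and return the average speed (m/s)."""
    raise NotImplementedError("replace with a real robot trial")

def tune_gait(params, step=0.05, lr=0.1, iters=50):
    """Finite-difference policy gradient over a gait-parameter vector:
    perturb each parameter, estimate the speed gradient, step uphill."""
    params = np.asarray(params, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            delta = np.zeros_like(params)
            delta[i] = step
            # Central-difference estimate of d(speed)/d(param_i)
            grad[i] = (measure_walking_speed(params + delta)
                       - measure_walking_speed(params - delta)) / (2 * step)
        params += lr * grad  # gradient ascent on measured walking speed
    return params
```

On a physical platform, each call to `measure_walking_speed` would be a timed run over a test surface, so the number of trials per iteration, rather than computation, is the dominant cost of such tuning.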
Title
Study of Reinforcement Learning Methods to Enable Automatic Tuning of State of The Art Legged Robots
Published
2012-05-30
Full Collection Name
Electrical Engineering & Computer Sciences Technical Reports
Other Identifiers
EECS-2012-127
Type
Text
Extent
25 p.
Archive
The Engineering Library