The Amazon Echo and Google Home exemplify a new class of home automation platforms that provide intuitive, low-cost, cloud-based speech interfaces. We present EchoBot, a system that interfaces the Amazon Echo with the ABB YuMi industrial robot to facilitate human-robot data collection for Learning from Demonstration (LfD). EchoBot uses the computational power of the Amazon cloud to robustly convert speech to text, and provides continuous spoken feedback to the user during robot operation. We study human performance on two tasks, grasping and "Tower of Hanoi" ring stacking, under four input-output interface combinations: our experiments vary speech and keyboard as input interfaces, and speech and monitor as output interfaces. The first task evaluates EchoBot's effectiveness when data input is infrequent; the second evaluates it when data input is frequent. Results suggest that speech can provide significant improvements in demonstration times and reliability over keyboards and monitors: across 11 participants, we observed a 57% decrease in average completion time for a task that required two hands and frequent human input.