Pre-trained large language models (LLMs) such as GPT-3 can carry on fluent, multi-turn conversations out of the box, making them attractive material for chatbot design. Designers can further improve LLM chatbot utterances by prepending textual prompts -- instructions and examples of desired interactions -- to the model's inputs. However, prompt-based improvements can be brittle; designers struggle to systematically understand how a given prompt strategy will affect the unfolding of subsequent conversations across users. To address this challenge, we introduce the concept of Conversation Regression Testing. Based on sample conversations with a baseline chatbot, Conversation Regression Testing tracks which conversational errors persist and which are resolved when different prompt strategies are applied. We embody this technique in an interactive design tool, BotDesigner, which lets designers identify archetypal errors across multiple conversations, shows common threads of conversation using a graph visualization, and highlights the effects of prompt changes across bot design iterations. A pilot evaluation demonstrates the usefulness of both the concept of regression testing and the functionality of BotDesigner for chatbot designers.
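The core workflow the abstract describes -- label errors in sample conversations with a baseline bot, then check which errors persist or resolve under a revised prompt -- can be sketched in miniature. This is an illustrative assumption, not BotDesigner's actual API: the `make_bot` stub stands in for a real prompted LLM, and the error labels and matching rules are hypothetical.

```python
def make_bot(prompt):
    """Toy stand-in for a prompted LLM chatbot. A real system would
    prepend `prompt` to the conversation and sample a completion."""
    def bot(user_turn):
        # Hypothetical behavior: the bot only answers refund questions
        # when the prompt supplies the relevant policy.
        if "refund" in user_turn and "policy" in prompt:
            return "Refunds are available within 30 days of purchase."
        return "Sorry, I can't help with that."
    return bot

# Sample user turns with archetypal errors labeled from baseline runs.
test_turns = {
    "Can I get a refund?": "unhelpful_refusal",
    "Do you sell gift cards?": None,  # no error expected here
}

def regression_report(bot):
    """Re-run each labeled turn and record whether its error persists."""
    report = {}
    for turn, error in test_turns.items():
        reply = bot(turn)
        if error == "unhelpful_refusal":
            report[turn] = "persists" if "can't help" in reply else "resolved"
        else:
            report[turn] = "ok"
    return report

baseline = make_bot("You are a helpful shop assistant.")
revised = make_bot("You are a helpful shop assistant. Refund policy: 30 days.")

print(regression_report(baseline))  # refund error persists
print(regression_report(revised))   # refund error resolved
```

Comparing the two reports surfaces exactly the persist/resolve tracking the abstract describes, aggregated here per turn rather than per conversation thread.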



