Students often have trouble knowing how to prepare for high-stakes exams. Even in the best case, where legacy problems and solutions are available, there are usually no indications of a particular question's difficulty or its relevance to the material the student most needs help with. The problem is exacerbated by traditionally large introductory courses, where a teacher cannot suggest a custom plan of study for every student, as they could in a small, face-to-face setting.

In this report, we present AutoQuiz, an online, adaptive, test practice system. At its heart is a model of user content knowledge, which we call an "adapted DKT model". We test it on two datasets, ASSISTments and PKUMOOC, to verify its effectiveness. We build a knowledge graph and encode assessment items from UC Berkeley's non-majors introduction to computing course, CS10: The Beauty and Joy of Computing (BJC), and have volunteer students from the Spring 2018 offering of the course use the system and provide qualitative feedback. We also measure the system quantitatively by how well it improves their exam performance.

The high-level user interaction is as follows:

1. If a student prefers choosing a specific question on her own, or iterating through all the questions in the system, we give her the freedom to select questions under specified topics.

2. If a student chooses "challenge" mode to test herself, we pull a fixed-size group of multiple-choice questions from an archive, based on our estimate of her performance on the skills she is expected to master. The student receives automated, dynamic feedback after each submission.
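The challenge-mode selection described above can be sketched roughly as follows. This is a minimal illustration, not AutoQuiz's actual implementation: the function and field names (`select_challenge_set`, `skill`, the mastery scores) are hypothetical, and it simply assumes a per-skill mastery estimate (such as one produced by a DKT-style model) and prefers questions on the weakest skills.

```python
def select_challenge_set(questions, mastery, size):
    """Pick `size` multiple-choice questions, preferring skills with
    the lowest estimated mastery score (0.0 = weakest, 1.0 = mastered).

    A real system would also account for question difficulty, recency,
    and coverage; this sketch ranks by estimated mastery alone."""
    ranked = sorted(questions, key=lambda q: mastery.get(q["skill"], 0.0))
    return ranked[:size]

# Hypothetical archive of tagged questions and mastery estimates.
archive = [
    {"id": 1, "skill": "recursion"},
    {"id": 2, "skill": "iteration"},
    {"id": 3, "skill": "abstraction"},
    {"id": 4, "skill": "recursion"},
    {"id": 5, "skill": "lists"},
    {"id": 6, "skill": "iteration"},
]
mastery = {"recursion": 0.3, "abstraction": 0.5, "lists": 0.6, "iteration": 0.8}

quiz = select_challenge_set(archive, mastery, size=3)
print([q["id"] for q in quiz])  # questions on the weakest skills come first
```

Because `sorted` is stable, questions within the same skill keep their archive order, so the sketch deterministically surfaces the weakest skills first.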