In this thesis I explore and formalize the view that grammar learning is driven by meaningful language use in context. On this view, the goal of a first language learner is to become a better language user --- in particular, by acquiring linguistic constructions (structured mappings between form and meaning) that facilitate successful communication. I present a computational model in which all aspects of the language learning problem are reformulated in line with these assumptions. The representational basis of the model is a construction grammar formalism that captures constituent structure and relational constraints, both within and across the domains of form and meaning. This formalism plays a central role in two processes: language understanding, which uses constructions to interpret utterances in context; and language learning, which seeks to improve comprehension by making judicious changes to the current set of constructions. The resulting integrated model of language structure, use and acquisition provides a cognitively motivated and computationally precise account of how children acquire their earliest multiword constructions. I define a set of operations for proposing new constructions, either to capture contextually available mappings not predicted by the current grammar, or to reorganize existing constructions. Candidate constructions are evaluated using a minimum description length criterion that balances a structural bias toward simpler grammars against a data-driven bias toward more specific grammars. When trained on a corpus of child-directed utterances annotated with situation descriptions, the model gradually acquires the concrete word combinations and item-based constructions that constitute the first steps toward adult language.
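The minimum description length tradeoff mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual scoring function; the cost definitions, the smoothing, and the toy representation of constructions as word tuples are all assumptions made purely for illustration. The key idea it demonstrates is that a candidate grammar pays once for its own size (the structural bias toward simpler grammars) and again for how poorly it predicts the observed utterances (the data-driven bias toward more specific grammars):

```python
import math

def grammar_cost(grammar):
    """Structural bias: bits to encode the grammar itself.
    Toy assumption: one bit per word in each construction."""
    return sum(len(construction) for construction in grammar)

def data_cost(grammar, corpus):
    """Data-driven bias: bits to encode the corpus given the grammar.
    Toy stand-in: -log2 of a smoothed proportion of constructions
    that match each utterance, so better-fitting grammars pay less."""
    total = 0.0
    for utterance in corpus:
        matched = sum(1 for c in grammar if all(w in utterance for w in c))
        p = (matched + 1) / (len(grammar) + 2)  # add-one smoothing
        total += -math.log2(p)
    return total

def mdl_score(grammar, corpus):
    """Total description length: lower is better."""
    return grammar_cost(grammar) + data_cost(grammar, corpus)

# Two hypothetical candidate grammars over a tiny corpus.
corpus = [("throw", "ball"), ("throw", "block")]
general = [("throw",)]                             # one small, general construction
specific = [("throw", "ball"), ("throw", "block")] # larger, fully specific grammar

print(mdl_score(general, corpus))   # small grammar cost, moderate data cost
print(mdl_score(specific, corpus))  # larger grammar cost, tighter fit
```

Which candidate wins depends on the corpus: with more data supporting the specific patterns, the data-cost savings can outweigh the extra grammar cost, which is how an MDL criterion lets evidence gradually license more specific (or more general) constructions.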