Description

Coding style is important to teach to beginning programmers so that bad habits do not become permanent. At the university level this is often done manually, because automated static analyzers cannot accurately grade against a given rubric. Yet manual analysis of coding style has problems of its own: we have observed considerable inconsistency among our graders. We introduce ACES (Automated Coding Evaluation of Style), a module that automates grading of the composition of Python programs. Given certain constraints, ACES assesses a program's composition through static analysis, conversion of the code to an abstract syntax tree (AST), and clustering (unsupervised learning), streamlining the subjective process of grading on style and identifying common mistakes. We also create visual representations of the clusters so that readers and students can see where a submission falls and what the overall trends are. We have applied this tool to CS61A, a CS1-level course at UC Berkeley experiencing rapid growth in student enrollment, to help expedite the labor-intensive process of grading code on composition and to reduce human grader inconsistency.

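To illustrate the kind of pipeline the abstract describes (parse a submission into an AST, derive style features, then cluster submissions), here is a minimal sketch in Python. The feature set shown, raw AST node-type counts, is a hypothetical stand-in; the abstract does not specify which features ACES extracts, and the use of scikit-learn's KMeans is likewise an assumption for demonstration.

```python
import ast
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer


def ast_node_counts(source: str) -> Counter:
    """Parse a submission into an AST and count node types as crude style features.

    Node-type counts are a placeholder feature set, not ACES's actual features.
    """
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))


# Toy submissions: two compact styles and two loop-based styles.
submissions = [
    "def f(x):\n    return x + 1\n",
    "def g(x): return x + 1\n",
    "result = [i * i for i in range(10)]\n",
    "result = []\nfor i in range(10):\n    result.append(i * i)\n",
]

# Vectorize the per-submission feature dictionaries into a numeric matrix.
features = [ast_node_counts(src) for src in submissions]
X = DictVectorizer(sparse=False).fit_transform(features)

# Group structurally similar submissions; k=2 is an arbitrary choice here.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # submissions with similar AST structure share a cluster label
```

In a grading setting, each cluster could then be reviewed once by a human grader, with feedback propagated to every submission in the cluster, which is one way the clustering step can reduce grader inconsistency.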