This paper describes an ongoing project in which on-demand writing is used as part of a diagnostic evaluation of newly admitted college students. The authors will present preliminary evidence of the validity and reliability of human- and machine-scored essays and of the usefulness of the feedback provided to students on their writing and their self-regulated learning. The purpose of the project, the Diagnostic Assessment and Achievement of College Skills (DAACS), is to develop a no-stakes diagnostic assessment designed to help students transition to college. Research suggests that mindset, self-regulation, and grit are predictors of college success (Duckworth, Peterson, Matthews, & Kelly, 2007; Zimmerman & Schunk, 2011). DAACS provides feedback to students about their strengths and weaknesses in mathematics, reading, and writing, as well as their self-regulated learning (SRL) skills, and links students to appropriate resources and support. For the writing assessment, students write an essay reflecting on the results of a survey about their self-regulated learning skills. In addition to encouraging students to reflect on their SRL, the writing assessment provides feedback about their writing and directs them toward relevant writing resources. Although this approach is similar to Directed Self-Placement (Gere et al., 2010), DAACS is not intended to be used for placement; rather, it helps students determine whether they need support as writers. The scoring rubric developed for this assessment includes five criteria: content, organization, paragraphs, sentences, and conventions. An open-source automated scoring program, LightSide, was trained on hundreds of essays scored by human raters. Preliminary results indicate acceptable agreement between human and machine scores, as well as high rates of student and advisor satisfaction. These results, and their implications for the use of machine scoring of essays, will be presented.
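
To illustrate what "agreement between human and machine scores" might look like in practice, the following is a minimal sketch (not the authors' evaluation code) of computing quadratic weighted kappa along with exact and adjacent agreement, metrics commonly reported for ordinal essay rubric scores. The score values, arrays, and rubric scale shown here are hypothetical.

```python
# Minimal sketch: estimating human-machine agreement on ordinal rubric scores.
# The rubric scale and the score arrays below are hypothetical examples.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores (e.g., 1 = emerging, 2 = developing, 3 = mastering)
# assigned to the same essays by a human rater and by the trained model.
human_scores = np.array([1, 2, 3, 2, 2, 3, 1, 2, 3, 2])
machine_scores = np.array([1, 2, 3, 2, 3, 3, 1, 2, 2, 2])

# Quadratic weighted kappa penalizes large disagreements more than small ones,
# which suits ordinal scales like essay rubrics.
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")

# Exact and adjacent (within one point) agreement are also commonly reported.
exact = np.mean(human_scores == machine_scores)
adjacent = np.mean(np.abs(human_scores - machine_scores) <= 1)

print(f"Quadratic weighted kappa: {qwk:.2f}")
print(f"Exact agreement: {exact:.0%}, adjacent agreement: {adjacent:.0%}")
```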