Evolution of Instructor Response? Analysis of Five Years of Feedback to Students

  • Susan Lang, The Ohio State University
Keywords: first-year composition, instructor response, instructor training, writing analytics

Abstract

  • Background: Research incorporating large data sets and data- and text-mining methodologies is making initial contributions to writing studies. In writing program administration (WPA) work, one could best characterize the body of publications as small but growing, led by work such as Moxley and Eubanks’ (2015) “On Keeping Score: Instructors' vs. Students' Rubric Ratings of 46,689 Essays” and research from Arizona State University’s Science of Learning & Educational Technology (SoLET) Lab. Given the information that large-scale textual analysis can provide, it seems incumbent on program administrators to explore ways to make regular and aggressive use of such opportunities to give both students and instructors more resources for learning and development. This project is one attempt to add to that body of work; the sample for the study consisted of 17,534 pieces of student writing and 141,659 discrete comments on that writing, comprising 58,300 unique words out of more than 8.25 million total words written. These data are used to examine trends in the program’s instructor commentary over five years. In doing so, the study revisits a fundamental task of writing instruction, responding to student writing, and uses the results to consider how large writing programs with constant turnover of graduate teaching assistants (GTAs) might manage ongoing instructor professional development and how those GTAs can improve their ability to teach and respond to writing.
  • Literature Review: Researchers have attempted to unpack and understand the task of instructor commentary for several decades; the published literature demonstrates a complex and occasionally ambivalent relationship with this central task of writing instruction. Recent scholarship has moved beyond the small-scale studies long used by the field toward large-scale examinations of the instruction occurring in writing programs.
  • Research questions: Three questions guided the inquiry:
  1. Does the work of new instructors (MA1s) more closely resemble the lexicon of novice or experienced responders to student writing?
  2. How does the new instructors’ work compare to that of more experienced instructors (PHD1 or INS) in the program over the same period?
  3. How does their work evolve over a four-semester longitudinal time frame (as MA1 or MA2 experience levels) in the first-year writing program? [Please note that the abbreviations used above and throughout the article to designate instructor experience levels are as follows: MA1 (first-year master’s students); MA2 (second-year master’s students); PHD1 (first-year doctoral students); INS (instructors, i.e., those with three or more years of teaching experience who were not pursuing an additional degree; nearly all of these individuals held a master’s degree).]
  • Methodology: This study extends the work of Anson and Anson (2017), who first surveyed writing instructors and program administrators to create wordlists that survey respondents associated with “high-quality” and “novice” responses, and then examined a corpus of nearly 50,000 peer responses produced at a single university to learn to what extent instructors and student peers adopted this lexicon. Specifically, the study analyzes a corpus of instructor comments to students using the Anson and Anson wordlists associated with principled and novice commentary to see whether new writing instructors align more closely with the concepts represented in either list during their first semester in the program. It then tracks four cohorts for evolution and change in their vocabulary of feedback over their next three semesters in the program; the study also compares the vocabulary used in their comments to that used by experienced instructors in the program over the same period. (A minimal sketch of this kind of lexicon-based frequency analysis appears after the abstract.)
  • Results: The study found that from the outset, the new instructors (MA1) incorporated more of the principled response terms than the novice response terms. Overall, in comparing the MA1 instructors with the most experienced group (INS), the results reveal three important findings about the feedback of both MA1s and INSs in this program.
  1. While examination of the two lexicons reveals some differences in commentary, those differences are perhaps smaller than one might assume.
  2. The cohorts do increase their use of the principled terms as they move through their two-year appointment in the program, but few of the increases demonstrate statistical significance.
  3. Few of the terms from either the novice or principled lexicon appear frequently in the overall corpus, with the exception of terms that also appear in the assignment descriptions, which I label “content terms.”
  • Discussion: Based on the results, the instructors in this program had acquired a more consistent vocabulary, but not primarily one based on Anson and Anson’s two lexicons—instead, the most frequent and commonly used terms seem to come from a more local “canon,” that is, one based on the assignment descriptions and course outcomes. Regardless of whether the acquisition of a common vocabulary came from more global concepts or an assignment-based local canon, using common terms is something that Nancy Sommers (1982) saw as contributing to “thoughtful commentary” on student writing. As no one has previously studied how quickly new instructors acquire a professional vocabulary for responding to student writing, it is hard to know whether or not the results of this particular group of instructors would be considered “typical.” However, it may well be that the context of this writing program contributed to a more accelerated acquisition.
  • Conclusions: Working with the lexicons developed via Anson and Anson’s survey is a useful starting point for understanding more of what our instructors actually do when responding to student writing, as well as for identifying critical differences in our instructors’ comments. The lexicons, though, only provide us with a subset of expected (thus acceptable) terms included in commentary—terms that afford students the opportunity to act upon receiving them via revision or transfer. 
  • Directions for Future Research: Additional research is necessary to expand and refine the lexicons and to assess their impact on student writing. One possibility is to return to the current data set to conduct additional lexical analysis of both the novice and principled lexicons as well as the overall frequency tables to understand how terms are used in the context of response by the various instructor groups. Differences in how the terms are applied might help us understand why some comments are labeled as more or less helpful to writers. Another strategy is to examine the data for markers of stance; finally, topic modeling could be used to locate more subtle differences in the instructor comments that are not as easily identifiable through lexical analysis. Such examinations could serve as a baseline for broadening the study to other sets of assignments and commentary, perhaps helping us build a set of threshold concepts for talking about writing with our students. Ultimately, it is important to replicate and expand Anson and Anson’s survey to other stakeholder groups. As with much research on the teaching of writing, we default to the group most accessible to us: other writing professionals. Replicating this survey with other stakeholders, such as graduate teaching assistants and undergraduate students at both lower- and upper-division levels, could help us understand whether these groups differ in their understanding of what constitutes good feedback.
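
The kind of lexicon-based analysis described in the Methodology section can be illustrated with a short sketch. The study itself used WordStat and QDA Miner; the Python code below is only a minimal, hypothetical illustration, assuming a comments.csv file with cohort and comment columns and two small placeholder term lists that stand in for the actual Anson and Anson lexicons. It reports how often terms from each list occur per 1,000 words of commentary, by instructor cohort.

import csv
import re
from collections import Counter, defaultdict

# Placeholder term lists; the actual Anson and Anson lexicons are longer.
PRINCIPLED_TERMS = {"audience", "purpose", "evidence", "revise", "organization"}
NOVICE_TERMS = {"awkward", "wordy", "vague", "grammar", "incorrect"}

def tokenize(text):
    # Lowercase the comment and split it into word tokens.
    return re.findall(r"[a-z']+", text.lower())

def lexicon_rates(path):
    # Compute per-cohort rates of principled vs. novice terms per 1,000 words.
    totals = Counter()            # total word tokens per cohort
    hits = defaultdict(Counter)   # lexicon matches per cohort
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            cohort = row["cohort"]             # e.g., MA1, MA2, PHD1, INS
            tokens = tokenize(row["comment"])
            totals[cohort] += len(tokens)
            for token in tokens:
                if token in PRINCIPLED_TERMS:
                    hits[cohort]["principled"] += 1
                elif token in NOVICE_TERMS:
                    hits[cohort]["novice"] += 1
    return {
        cohort: {
            kind: 1000 * hits[cohort][kind] / totals[cohort]
            for kind in ("principled", "novice")
        }
        for cohort in totals
        if totals[cohort] > 0
    }

if __name__ == "__main__":
    for cohort, rates in sorted(lexicon_rates("comments.csv").items()):
        print(f"{cohort}: principled {rates['principled']:.2f}, "
              f"novice {rates['novice']:.2f} matches per 1,000 words")

Raw per-cohort rates like these would be only a starting point; the study itself also tests whether changes across semesters are statistically significant.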

References

Ädel, A. (2017). Remember that your reader cannot read your mind: Problem/solution-oriented metadiscourse in teacher feedback on student writing. English for Specific Purposes, 45, 54–68.

Adler-Kassner, L., & Wardle, E. (2015). Naming what we know: Threshold concepts of writing studies. Boulder, CO: University Press of Colorado.

Anson, C. M. (1989). Writing and response: Theory, practice, and research. Urbana, IL: National Council of Teachers of English.

Anson, C. M. (2000). Response and the social construction of error. Assessing Writing, 7, 5–21.

Anson, C. M., & Moore, J. (2016). Critical transitions: Writing and the question of transfer. Anderson, SC: Parlor Press.

Anson, I. G., & Anson, C. M. (2017). Assessing peer and instructor response to writing: A corpus analysis from an expert survey. Assessing Writing, 33, 12–24.

Aull, L. (2015). First-year university writing: A corpus-based study with implications for pedagogy. New York: Springer.

Aull, L. L., & Lancaster, Z. (2014). Linguistic markers of stance in early and advanced academic writing: A corpus-based comparison. Written Communication, 31(2), 151–183.

Bailey, R., & Garner, M. (2010). Is the feedback in higher education assessment worth the paper it is written on? Teachers’ reflections on their practices. Teaching in Higher Education, 15(2), 187–198.

Brannon, L., & Knoblauch, C. H. (1982). On students' rights to their own texts: A model of teacher response. College Composition and Communication, 33(2), 157–166.

Cho, K., Schunn, C. D., & Charney, D. (2006). Commenting on writing: Typology and perceived helpfulness of comments from novice peer reviewers and subject matter experts. Written Communication, 23(3), 260–294.

Cohn, J. D., & Stewart, M. (2016). Promoting metacognitive thought through response to low-stakes writing. Journal of Response to Writing, 2(1), 58–74.

Connors, R. J., & Lunsford, A. A. (1993). Teachers' rhetorical comments on student papers. College Composition and Communication, 44(2), 200–223.

Dixon, Z., & Moxley, J. (2013). Everything is illuminated: What big data can tell us about teacher commentary. Assessing Writing, 18(4), 241–256.

Ferris, D. (2015). A catalytic event for response research? Introducing our new journal: Editor’s introduction. Journal of Response to Writing, 1(1), 1–9.

Ferris, D. R. (2014). Responding to student writing: Teachers’ philosophies and practices. Assessing Writing, 19, 6–23.

Johnson, A. C., Wilson, J., & Roscoe, R. D. (2017). College student perceptions of writing errors, text quality, and author characteristics. Assessing Writing, 34, 72–87.

Keh, C. L. (1990). Feedback in the writing process: A model and methods for implementation. ELT Journal, 44(4), 294–304.

Laflen, A., & Smith, M. (2017). Responding to student writing online: Tracking student interactions with instructor feedback in a Learning Management System. Assessing Writing, 31, 39–52.

Lancaster, A. (2016, October). Responding to writing through instructor screencasts: Cognitive walkthrough, reader response, and student-centered access. In 2016 IEEE International Professional Communication Conference (IPCC) (pp. 1–5). IEEE.

Lang, S. (2016). Taming big data through agile approaches to instructor training and assessment: Managing ongoing professional development in large first-year writing programs. Writing Program Administration: Journal of the Council of Writing Program Administrators, 39(1), 81–104.

Lang, S., & Baehr, C. (2012). Data mining: A hybrid methodology for complex and dynamic research. College Composition and Communication, 64(1), 172–194.

McGrath, A., & Atkinson-Leadbeater, K. (2016). Instructor comments on student writing: Learner response to electronic written feedback. Transformative Dialogues: Teaching & Learning Journal, 8(3), 1–16.

Montgomery, J. L., & Baker, W. (2007). Teacher-written feedback: Student perceptions, teacher self-assessment, and actual teacher performance. Journal of Second Language Writing, 16(2), 82–99.

Moss, P., Girard, B., & Greeno, J. (2008). Sociocultural implications for assessment II. In P. Moss, D. Pullin, J. Gee, E. Haertel, & L. Young (Eds.), Assessment, equity, and opportunity to learn (Learning in Doing: Social, Cognitive and Computational Perspectives, pp. 295–332). Cambridge: Cambridge University Press.

Moss, P., Pullin, D., Gee, J., Haertel, E., & Young, L. (Eds.). (2008). Assessment, equity, and opportunity to learn (Learning in Doing: Social, Cognitive and Computational Perspectives). Cambridge: Cambridge University Press.

Moxley, J. M., & Eubanks, D. (2015). On keeping score: Instructors' vs. students' rubric ratings of 46,689 essays. WPA: Writing Program Administration, 39, 53–80.

Obermark, L., Brewer, E., & Halasek, K. (2015). Moving from the one and done to a culture of collaboration: Revising professional development for TAs. Writing Program Administration, 39(1), 32–53.

Pullin, D. (2008). Assessment, equity, and opportunity to learn. In P. Moss, D. Pullin, J. Gee, E. Haertel, & L. Young (Eds.), Assessment, equity, and opportunity to learn (Learning in Doing: Social, Cognitive and Computational Perspectives, pp. 333–352). Cambridge: Cambridge University Press.

QDA Miner [Computer software]. (2014). Retrieved from https://provalisresearch.com

Ruggiero, M. A. (2017, April). Remixing responses: How multimodal feedback encourages reflection and awareness. Presentation at Student Success in Writing Conference, Savannah, GA.

Simpson, J. (2017). Responding to our students' writing: What is good for us and for them? HOW Journal, 10(1), 45–52.

Sommers, N. (1982). Responding to student writing. College Composition and Communication, 33(2), 148–156.

Sommers, N., & Saltz, L. (2004). The novice as expert: Writing the freshman year. College Composition and Communication, 56(1), 124–149. doi:10.2307/4140684

Staples, S., Egbert, J., Biber, D., & Gray, B. (2016). Academic writing development at the university level: Phrasal and clausal complexity across level of study, discipline, and genre. Written Communication, 33(2), 149–183.

Stern, L. A., & Solomon, A. (2006). Effective faculty feedback: The road less traveled. Assessing Writing, 11(1), 22–41.

Straub, R. (1996). Teacher’s response as conversation: More than casual talk, an exploration. Rhetoric Review, 14(2), 374–399.

Straub, R. (1996). The concept of control in teacher response: Defining the varieties of “directive” and “facilitative” commentary. College Composition and Communication, 47(2), 223–251.

Straub, R. (2006). Key works on teacher response. Portsmouth, NH: Boynton/Cook Publishers.

Straub, R., & Lunsford, R. F. (1995). Twelve readers reading: Responding to college student writing. NY: Hampton Press.

White, E. M., & Wright, C. A. (2016). Assigning, responding, and evaluating writing: A writing teacher’s guide (5th ed.). New York: Bedford/St. Martin’s.

WordStat [Computer software]. (2014). Retrieved from https://provalisresearch.com

Yoon, D., Chen, N., Randles, B., Cheatle, A., Löckenhoff, C. E., Jackson, S. J., ... & Guimbretière, F. (2016). RichReview++: Deployment of a collaborative multi-modal annotation system for instructor feedback and peer discussion. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 195–205). ACM.

Published
2018-12-09