A simple rubric for on-demand writing

6.1 Designing Student Assessments around Criteria and Standards: Assessment criteria and standards are clear.

Ensuring that assessment criteria and standards are clear requires teachers to know exactly which skills and levels of understanding they expect students to master, and to communicate those expectations plainly to students. Rubrics can help with both tasks. Shermis and Di Vesta (2011) outline three key steps to formulate good rubrics:

  • identify a critical dimension of behavior at a particular developmental level
  • articulate the rubric to a point that different evaluators can obtain consistency
  • communicate with learners what is expected of them (paraphrased from p. 137).

On-demand writing about literature requires particularly succinct and quickly comprehensible rubrics, since students have a limited amount of time in which to develop their ideas about the text and form them into an organized, focused, and convincing essay. At the end of our Voice and Protest unit, I wanted to assess whether our 10th-grade honors students were able to synthesize multiple vignettes from our anchor text, Fountain and Tomb, in order to support an interpretive claim about the implications of the book’s structure. Students knew there would be a final in-class essay about the book, but were not given the prompts ahead of time. When they arrived, they were given the following, on a half-sheet of paper:


For each question I included brief directions on how to create a focused, supported answer: “Choose one (reason or influence/effect or theme) to focus on and use specific textual evidence from three different vignettes.” These directions reminded the students of two critical dimensions of strong writing that we had focused on throughout the unit (focused claims/theses and citing specific textual evidence, either in the form of concrete details or exact quotations); they also prompted students to see this as a synthesis task, one that required combining evidence from multiple stories to infer a big idea about the text’s format and purpose. In this sense, the prompts themselves are part of the rubric, because they identify the writing standards and imply the critical reading task for the assessment. Since I provided three high-level prompts to choose among, students had some power to shape their own assessment experience according to individual interest or preference.

The half sheet next articulates those standards further by delineating the three components on which I will base my evaluation. I have included quantitative expectations (one claim, at least three pieces of evidence from three different vignettes) as well as descriptive ones (claim is focused, evidence is clearly related, adequate analysis follows). In the preceding unit we worked on how to choose strong evidence in support of an argument, and how to analyze evidence by picking it apart and paying close attention to the exact language of quotations, the context surrounding details and quotations, and the implications of word choice and syntax. Consequently, students were familiar with those (bold-type) descriptors.

Because I wanted my students to spend their time thinking deeply about the prompts and developing their written responses, I planned an assignment format that would take less than five minutes to read, comprehend, and ask clarifying questions about. I chose to present grading criteria as bullet-point descriptions of what a successful in-class essay would include, and to limit the criteria to three, with equal point value. When I asked for clarifying questions, I received three: one about how many paragraphs I wanted students to write; one about whether they only needed one piece of evidence from each vignette; and one about whether they could use three pieces of evidence from one or two vignettes instead of dealing with three vignettes total. These questions indicated to me that students understood my descriptive criteria (i.e., what it meant to have clearly related evidence and adequate analysis) and the purpose of the assignment (i.e., the level of thinking required by the prompts) but not what the final product should look like. Consequently, one change I would make to this assignment in the future would be to spend some class time beforehand discussing the differences between common types of on-demand writing (for example, a claim paragraph used as a reading quiz, versus an in-class essay at the end of a unit) and presenting a few “real-life” student work examples of each.


Shermis, M., & Di Vesta, F. (2011). Classroom assessment in action. Lanham, MD: Rowman & Littlefield Publishers.

“Mark [their] words”: formative and summative assessment tools for student discussion skills

6. Assessment: The teacher uses multiple data elements (both formative and summative) to plan, inform, and adjust instruction and evaluate student learning.

As English Language Arts (ELA) teachers we must create accurate, transparent, growth-oriented assessment tools around multiple literacy skills, including speaking and listening. Speaking and listening form an important component of state literacy standards, as in the following anchor standard: “[Students can] prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively” (Common Core State Standards Initiative, 2016). In addition, discussion provides the basis for many learning activities in ELA classrooms, and plays an important role in helping students clarify and express their own ideas about a text or topic, as well as build on, respond to, and synthesize the ideas of others. Finally, I believe that explicitly assessing discussions helps honor diverse linguistic and literacy skills, particularly for students whose verbal expression is stronger than their command of written academic language.

The following narrative describes a two-day assessment process of small group discussion skills in a 9th grade (General Education) Language Arts classroom. The focus for the unit was on building literary discussion skills, so the final (graded) discussion provided a summative assessment of those skills. I used the Common Core Anchor Standard cited above as a basis for both formative and summative assessment criteria.

Students discussed Toni Morrison’s novel, The Bluest Eye; they had finished the book by the final (graded) discussion. A series of short discussions, led by students but with input and guidance provided by me, preceded the final set of assessed discussions. Students set the group norms and reading schedule and created open-ended, higher-level (from Bloom’s taxonomy) questions to guide ongoing discussions. For the assessments, I informed the students that I would be observing rather than participating in discussion, and that I would be taking notes on their questions, use of textual evidence, responses, and ability to build on each other’s ideas. As a group, we also discussed the physical signs of good academic discussion: eye contact; respectful listening; and books/notebooks open, looking for evidence. The discussion group contained seven students, all of whom had self-selected The Bluest Eye to read and discuss. Students represented widely diverse social, economic, ethnic, and academic backgrounds. Both discussions lasted approximately 20 minutes.

I tracked the formative assessment data using two sheets of paper, divided into four columns each. As students discussed, I wrote quick notes summarizing students’ verbal contributions to the discussion as well as physical attentiveness to the group. I also tallied verbal contributions by type, using the following symbols: ? = asked a question; T = referred to text; Q = quoted text; R = responded to a question; B = built on other students’ ideas. The following evidence is from that initial tracking sheet:

Notes from formatively assessed discussion

You can see that my notes include observations of students’ critical reading skills, such as “Personal/real world connection” or “confusion over narrator.” I have also recorded how fully present students were in the discussion, such as “Used restroom for most of discussion…[but] joined in on conversation [when returned].” At the conclusion of the discussion, while students were packing up, I recorded a quick assessment of each student’s overall contributions, in the form of a “want” (next steps to improve discussion skill) and a “wow” (specific strength demonstrated during discussion). The middle student above, for example, needed to move from summary (“figuring out what’s going on”) to interpretation (“figuring out what that means or implies”) but also demonstrated strong responsiveness to others’ ideas.

This kind of close assessment of discussion skill has clear limitations. One is group size: I was keeping track of seven students and could have added, at most, one more and still recorded accurate tallies and direct evidence of reading and discussion skill. The other is that it requires knowledgeable, trained assessors. While volunteer discussion leaders could track individual participation using the tally marks, recording notes about levels of interpretive reading or personal engagement with the text requires knowledge of the text, the students, academic language expectations, and literacy instruction.

After class, I “translated” those notes and tallies into a formative assessment slip, an example of which is shown below:

Formative feedback slip, student 2

You can see that I have directly transposed the tally marks to the assessment slip, and that a key to those tally marks is listed at the bottom of the slip. The slip provides assessment evidence that is both measurable and transparent. The slip also provides growth-oriented comments, in the form of a “want” (next step for specific improvement) and a “wow” (a specific skill strength demonstrated during discussion.) I passed these slips out to students the next day, in preparation for our final, graded discussion, and had students write a personal discussion skill goal on the back of the slip based on my feedback. The example below is from the back of the same slip shown above:

Student 2: self-evaluation and goal for next (graded) discussion

I suggested that this student “work on moving from ‘figuring out what’s going on’ to interpreting text using that quote & textual evidence.” I then translated that suggestion into student-friendly language: “Think about ‘how’ or ‘why’ that’s happening in the book.” The student set a personal goal of “talk less but better,” which demonstrated responsiveness both to my feedback and to the number of tally marks, which showed frequent building on others’ ideas (a real strength in this student’s discussion skill, and noted in my “wow”) but not deep follow-through on text references or quotations. At the end of the second, graded discussion I asked students to reflect briefly on whether they had met their goal, and this student responded that she had, because she had “2 instead of 10.” In fact, this student was accurate in her self-assessment. During the final discussion, she built twice on another student’s ideas, then made two original responses to the text, followed by a direct quotation and a response that considered the meaning of that quotation (see summative feedback slips at end of this post, second student’s gradesheet). Her self-assessment demonstrated both accurate evaluation and a sense of accomplishment at having met her learning goal.

After students had set personal goals for the second, graded (summatively assessed) discussion, I showed them the grading slip I would be using to track their discussion skills, and drew their attention to the back of the slip, which outlined the bases for grading, shown below:

Explanation of Summative Grading Expectations

As you can see, I have made my assessment criteria transparent and specific by listing both the types of contributions I would be looking for, using those symbols from the formative assessment (?, T, Q, R, B), and the physical signs of engaged discussion we had discussed previously. I also added an explanation of how I would evaluate the quality of their contributions, both quantitatively (specific requirements for proficiency, at top) and qualitatively (“If your question, use of text, response…demonstrates complex thinking, skilled interpretation, or expands on idea…you will receive a ‘+’ next to that tally mark.”). At the bottom of the explanation of grading I have aligned the categories of skill with letter grades.

After students set personal goals (participated in evaluation criteria formation) and I explained my own grading criteria, I reminded the students that I would be observing and evaluating their discussion skill rather than participating in the discussion, and asked them to begin with one of their “best” high-level questions. While students discussed, I took notes on my grading sheet, shown below:

Summative feedback slips/grading sheet

As you can see, this grading sheet is designed to be cut into individual feedback slips, but by having those individual slips attached during the actual evaluative process I was able to keep accurate notes on individual students. I lined the grading sheets up in front of me, and wrote students’ names in the order that they were sitting, to make it easier to keep accurate track of tallies. Because this was a summative assessment, I concentrated on quantitative data (tally marks) and evaluating the quality of those contributions (+ marks) rather than capturing students’ exact words. I closely aligned formative and summative assessments: visually (grading slip format, common tally marks); chronologically (summative assessment occurred day after formative assessment, students received formative feedback and set personal goal immediately before graded discussion); and conceptually (common bases for evaluation, common physical signs of engaged discussion).

While students were packing up, I recorded a “wonder”: a question that grew out of the understanding of the book that the student demonstrated during the discussion. The middle student above, for example, had shown great insight around why one of the most abusive characters in the book was unable to “love his daughter in the right way” (because he had not learned to love himself). In response, I asked this student, “Has anyone in this story truly learned how to love themselves so they can love someone else?” I am particularly proud of the “wonder” questions I posed to individual students for two reasons. First, they honor students’ deep thinking and insightful reading. Second, they are growth-oriented in the most important sense: they ask students to re-engage with their own thinking, and to consider the implications of their own ideas.

After class, I was able to add up the tally marks and numbers of + signs, assign grades based on that data, and add a final “wow,” a note of appreciation for a specific strength each student had demonstrated over the course of the discussion. The grading slips made this process both accurate and timely; I was able to return grading slips and have grades entered electronically on the day following the discussion.

In the future, one thing I would change about this grading process would be to return grading slips in person, with a voiced appreciation for how much I enjoyed hearing that student’s ideas about the book. Because we were wrapping up a unit and I was short on time, I simply placed the grading slips in students’ hanging file folders and informed the class they were available. While written feedback is important, I believe that the form of the assessment should be closely aligned with the skill assessed. Since this was an assessment of listening and speaking, I wish I had explicitly added that component to my evaluation and honored students with verbal as well as written praise.


Common Core State Standards Initiative (2016). College and Career Readiness Anchor Standards for Speaking and Listening 1 (CCSS.ELA-Literacy.CCRA.SL.1). Retrieved January 23, 2016.