
“Mark [their] words”: formative and summative assessment tools for student discussion skills

6. Assessment: The teacher uses multiple data elements (both formative and summative) to plan, inform, and adjust instruction and evaluate student learning.

As English Language Arts (ELA) teachers, we must create accurate, transparent, growth-oriented assessment tools around multiple literacy skills, including speaking and listening. Speaking and listening form an important component of state literacy standards, as in the following anchor standard: “[Students can] prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively” (Common Core State Standards Initiative, 2016). In addition, discussion provides the basis for many learning activities in ELA classrooms, and plays an important role in helping students clarify and express their own ideas about a text or topic, as well as build on, respond to, and synthesize the ideas of others. Finally, I believe that explicitly assessing discussions helps honor diverse linguistic and literacy skills, particularly for students whose verbal expression is stronger than their command of written academic language.

The following narrative describes a two-day assessment process of small group discussion skills in a 9th grade (General Education) Language Arts classroom. The focus for the unit was on building literary discussion skills, so the final (graded) discussion provided a summative assessment of those skills. I used the Common Core Anchor Standard cited above as a basis for both formative and summative assessment criteria.

Students discussed Toni Morrison’s novel The Bluest Eye; they had finished the book by the final (graded) discussion. A series of short discussions, led by students but with input and guidance provided by me, preceded the final set of assessed discussions. Students set the group norms and reading schedule and created open-ended, higher-level questions (from Bloom’s taxonomy) to guide ongoing discussions. For the assessments, I informed the students that I would be observing rather than participating in discussion, and that I would be taking notes on their questions, use of textual evidence, responses, and ability to build on each other’s ideas. As a group, we also discussed the physical signs of good academic discussion: eye contact; respectful listening; and books/notebooks open, looking for evidence. The discussion group contained seven students, who had self-selected The Bluest Eye to read and discuss. Students represented widely diverse social, economic, ethnic, and academic backgrounds. Both discussions lasted approximately 20 minutes.

I tracked the formative assessment data using two sheets of paper, each divided into four columns. As students discussed, I wrote quick notes summarizing students’ verbal contributions to the discussion as well as their physical attentiveness to the group. I also tallied verbal contributions by type, using the following symbols: ? = asked a question; T = referred to text; Q = quoted text; R = responded to a question; B = built on another student’s idea. The following evidence is from that initial tracking sheet:

Notes from formatively assessed discussion

You can see that my notes include observations of the students’ critical reading skills, such as “Personal/real world connection” or “confusion over narrator.” I have also recorded how fully present students were in the discussion, such as “Used restroom for most of discussion…[but] joined in on conversation [when returned].” At the conclusion of the discussion, while students were packing up, I recorded a quick assessment of each student’s overall contributions, in the form of a “want” (next steps to improve discussion skill) and a “wow” (specific strength demonstrated during discussion). The middle student above, for example, needed to move from summary (“figuring out what’s going on”) to interpretation (“figuring out what that means or implies”) but also demonstrated strong responsiveness to others’ ideas.
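For teachers who prefer digital tallies, the symbol scheme above could be kept in a short script. The following minimal Python sketch is my own illustration (the symbol key comes from the tracking sheet; the function name and example string are invented): it turns a run of per-student tally symbols into labeled counts.

```python
from collections import Counter

# Symbol key from the tracking sheet.
SYMBOLS = {
    "?": "asked a question",
    "T": "referred to text",
    "Q": "quoted text",
    "R": "responded to a question",
    "B": "built on another student's idea",
}

def summarize(observations: str) -> dict:
    """Turn a string of tally symbols for one student into labeled counts."""
    counts = Counter(observations)
    return {SYMBOLS[symbol]: n for symbol, n in counts.items()}

# A (hypothetical) student who asked one question, quoted the text once,
# and built twice on others' ideas:
print(summarize("?QBB"))
```

A running tally like this preserves exactly the quantitative evidence that later gets transposed onto the feedback slips.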

There are two notable limitations to this kind of close assessment of discussion skill. The first is group size: I was keeping track of seven students and could have added, at most, one more while still recording accurate tallies and direct evidence of reading and discussion skill. The second is that it requires knowledgeable, trained assessors. While volunteer discussion leaders could track individual participation using the tally marks, recording notes about levels of interpretive reading or personal engagement with the text requires knowledge of the text, the students, academic language expectations, and literacy instruction.

After class, I “translated” those notes and tallies into a formative assessment slip, an example of which is shown below:

Formative feedback slip, student 2

You can see that I have directly transposed the tally marks to the assessment slip, and that a key to those tally marks is listed at the bottom. The slip provides assessment evidence that is both measurable and transparent, along with growth-oriented comments in the form of a “want” (next step for specific improvement) and a “wow” (a specific skill strength demonstrated during discussion). I passed these slips out to students the next day, in preparation for our final, graded discussion, and had students write a personal discussion-skill goal on the back of the slip based on my feedback. The example below is from the back of the same slip shown above:

Student 2: self-evaluation and goal for next (graded) discussion

I suggested that this student “Work on moving from ‘figuring out what’s going on’ to interpreting text using that quote & textual evidence.” I then translated that suggestion into student-friendly language: “Think about ‘how’ or ‘why’ that’s happening in the book.” The student set a personal goal to “talk less but better,” which demonstrated responsiveness both to my feedback and to the tally marks, which showed frequent building on others’ ideas (a real strength in this student’s discussion skill, and noted in my “wow”) but not deep follow-through on text references or quotations. At the end of the second, graded discussion I asked students to reflect briefly on whether they had met their goal, and this student responded that she had, because she had “2 instead of 10.” In fact, this student was accurate in her self-assessment. During the final discussion, she built twice on another student’s ideas, then made two original responses to the text, followed by a direct quotation and a response that considered the meaning of that quotation (see the second student’s gradesheet on the summative feedback slips at the end of this post). Her self-assessment demonstrated both accurate evaluation and a sense of accomplishment at having met her learning goal.

After students had set personal goals for the second, graded (summatively assessed) discussion, I showed them the grading slip I would be using to track their discussion skills, and drew their attention to the back of the slip, which outlined the bases for grading, shown below:

Explanation of Summative Grading Expectations

As you can see, I have made my assessment criteria transparent and specific by listing both the types of contributions I would be looking for, using the symbols from the formative assessment (?, T, Q, R, B), and the physical signs of engaged discussion we had discussed previously. I also added an explanation of how I would evaluate the quality of their contributions, both quantitatively (specific requirements for proficiency, at top) and qualitatively (“If your question, use of text, response…demonstrates complex thinking, skilled interpretation, or expands on idea…you will receive a ‘+’ next to that tally mark.”). At the bottom of the explanation of grading, I have aligned the categories of skill with letter grades.

After students set personal goals (thereby participating in forming the evaluation criteria) and I explained my own grading criteria, I reminded the students that I would be observing and evaluating their discussion skills rather than participating in the discussion, and asked them to begin with one of their “best” high-level questions. While students discussed, I took notes on my grading sheet, shown below:

Summative feedback slips/grading sheet

As you can see, this grading sheet is designed to be cut into individual feedback slips, but by having those individual slips attached during the actual evaluative process I was able to keep accurate notes on individual students. I lined the grading sheets up in front of me, and wrote students’ names in the order that they were sitting, to make it easier to keep accurate track of tallies. Because this was a summative assessment, I concentrated on quantitative data (tally marks) and evaluating the quality of those contributions (+ marks) rather than capturing students’ exact words. I closely aligned formative and summative assessments: visually (grading slip format, common tally marks); chronologically (summative assessment occurred day after formative assessment, students received formative feedback and set personal goal immediately before graded discussion); and conceptually (common bases for evaluation, common physical signs of engaged discussion).

While students were packing up, I recorded a “wonder”: a question that grew out of the understanding of the book the student had demonstrated during the discussion. The middle student above, for example, had shown great insight into why one of the most abusive characters in the book was unable to “love his daughter in the right way” (because he had not learned to love himself). In response, I asked this student, “Has anyone in this story truly learned how to love themselves so they can love someone else?” I am particularly proud of the “wonder” questions I posed to individual students, for two reasons. First, they honor students’ deep thinking and insightful reading. Second, they are growth-oriented in the most important sense: they ask students to re-engage with their own thinking, and to consider the implications of their own ideas.

After class, I was able to add up the tally marks and numbers of + signs, assign grades based on that data, and add a final “wow,” a note of appreciation for a specific strength each student had demonstrated over the course of the discussion. The grading slips made this process both accurate and timely; I was able to return grading slips and have grades entered electronically on the day following the discussion.
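That adding-up step could also be expressed as a tiny function. The sketch below is purely illustrative: the actual proficiency requirements and grade bands were those listed on the grading slip, while the thresholds here are invented placeholders to show the shape of the arithmetic (count tallies, weigh “+” quality marks, map to a letter grade).

```python
# Hypothetical sketch of converting tally counts and "+" quality marks
# into a letter grade. These thresholds are invented for illustration;
# the real criteria appeared on the back of the grading slip.
def assign_grade(tallies: int, plus_marks: int) -> str:
    """Map a student's contribution counts to a letter grade."""
    if tallies >= 5 and plus_marks >= 2:
        return "A"   # frequent, high-quality contributions
    if tallies >= 4 and plus_marks >= 1:
        return "B"
    if tallies >= 3:
        return "C"   # meets the (assumed) quantity bar
    return "D"

print(assign_grade(tallies=6, plus_marks=3))  # prints "A"
```

Whatever the exact cut points, making the mapping explicit is what let grades be assigned and returned the very next day.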

In the future, one thing I would change about this grading process would be to return the grading slips in person, with a voiced appreciation for how much I enjoyed hearing each student’s ideas about the book. Because we were wrapping up a unit and I was short on time, I simply placed the grading slips in students’ hanging file folders and informed the class they were available. While written feedback is important, I believe that the form of the assessment should be closely aligned with the skill assessed. Since this was an assessment of listening and speaking, I wish I had explicitly added that component to my evaluation and honored students with verbal as well as written praise.


Common Core State Standards Initiative (2016). College and Career Readiness Anchor Standards for Speaking and Listening 1 (CCSS.ELA-Literacy.CCRA.SL.1). Retrieved January 23, 2016.

Portfolio Evaluations as Memory Practice

In Brain Rules, John Medina (2014) outlines three essential steps for remembering information: encoding, consolidation, and retrieval. In this post I will focus on a teaching tool that incorporates all three of these steps to help students develop strong writing skills: evaluation by portfolio.*

Portfolio as Encoding Tool

The moment at which information first enters our brain holds critical importance. The more we are able to initially elaborate on (connect meaning to) the information, the better we will remember and use it. When a student prepares a portfolio of collected writings, each new piece is related to a previous sample. Student writing is placed in the portfolio to demonstrate the progression of specific and related writing skills. The collection tells a more meaningful story about that progression and gives the individual work samples greater meaning through context. If the portfolio is theme-based, that contextual meaning increases. In Medina’s (2014) terms, the portfolio allows student and teacher to “understand what the [new] information means” (p. 139).

One of the ways in which memories are effectively encoded is through real-world examples.  These examples further the elaboration process because they enable us to see information “at work” in different situations and to understand the common principles underlying those examples.  The writing portfolio provides multiple and increasing examples from the student’s own life and work.  It also allows students to extract larger writing concepts from varied writing samples by comparing common elements such as use of an opening “hook,” support through evidence, development through analysis, etc.

By collecting writing in a portfolio, students also create a “familiar setting” for information input and retrieval.  Though the collected student writing must address diverse audiences, purposes, and genres, the portfolio itself provides a stable environment.

Portfolio as Consolidation Tool

Medina (2014) introduces the concept of memory consolidation as “reversion”:

There is increasing evidence that when previously consolidated memories are recalled from long-term storage into consciousness, they revert to short-term memories…these memories may need to become reprocessed if they are to remain in durable form. (p. 146)

Portfolio evaluations require students to revisit their writing again and again.  The portfolio may focus on revision across multiple pieces multiple times or may ask students to simply revisit and reflect on their previous work.  Either way, students must reprocess the skills they used and reevaluate the writing they produced.

Portfolio as Retrieval Tool (and for Slow Consolidation)

Modern brain science has begun to uncover the complicated and time-consuming process that allows us to store and reliably retrieve memories.  Stable memories require many intertwining “roots” between cortex and hippocampus. This neurological “conversation,” described as consolidation once the memories have been released to the cortex, can take years.  So Medina (2014) proposes:

Given that system consolidation can take years, perhaps critical information should be repeated on a yearly or semiyearly basis. (p. 157)

Requiring students to prepare writing portfolios that “follow” them throughout their high school career allows this kind of repetition.  The “critical information” on which writing depends requires repeated practice.  It also requires repeated reflection, multiple opportunities to name individual components of effective writing and to question, “Have I used them successfully here?”

In addition, multi-year portfolios allow teachers across grade levels to adjust their teaching practices in accordance with what they observe as student writing develops.  Do we need to focus repetition practice more on selecting textual evidence?  Should we make more concrete distinctions between expository and narrative writing?  Or provide more consistent schemata around writing processes?

Medina (2014) claims that “Our brains only approximate reality because new knowledge and memories are mixed and stored as one” (p. 159). If this is so, then evaluative portfolios approximate the reality of the brain, storing old and new work as part of an ongoing, creative whole.

*Evaluation by portfolio offers a teaching tool well aligned with Medina’s understanding of memory in many disciplines.  Because English Language Arts classrooms more commonly collect writing for portfolio evaluation I have chosen to focus my examples on the writing portfolio.

Medina, J. (2014). Memory. In Brain Rules: 12 Principles for Surviving and Thriving at Work, Home, and School. Seattle, WA: Pear Press (pp.125-159).