6. Assessment – The teacher uses multiple data elements (both formative and summative) to plan, inform, and adjust instruction and evaluate student learning. As English Language Arts (ELA) teachers, we must create accurate, transparent, growth-oriented assessment tools around multiple literacy skills, including speaking and listening. Speaking and listening form an important component of state literacy standards, as in the following anchor standard: “[Students can] prepare for and participate effectively in a range of conversations and collaborations with diverse partners, building on others’ ideas and expressing their own clearly and persuasively” (Common Core State Standards Initiative, 2016). In addition, discussion provides the basis for many learning activities in ELA classrooms, and plays an important role in helping students clarify and express their own ideas about a text or topic, as well as build on, respond to, and synthesize the ideas of others. Finally, I believe that explicitly assessing discussions helps honor diverse linguistic and literacy skills, particularly for students whose verbal expression is stronger than their command of written academic language.
The following narrative describes a two-day assessment process of small group discussion skills in a 9th grade (General Education) Language Arts classroom. The focus for the unit was on building literary discussion skills, so the final (graded) discussion provided a summative assessment of those skills. I used the Common Core Anchor Standard cited above as a basis for both formative and summative assessment criteria.
Students discussed Toni Morrison’s novel, The Bluest Eye; they had finished the book by the final (graded) discussion. A series of short discussions, led by students but with input and guidance provided by me, preceded the final set of assessed discussions. Students set the group norms and reading schedule and created open-ended, higher-level (from Bloom’s taxonomy) questions to guide ongoing discussions. For the assessments, I informed the students that I would be observing rather than participating in discussion, and that I would be taking notes on their questions, use of textual evidence, responses, and ability to build on each other’s ideas. As a group, we also discussed the physical signs of good academic discussion: eye contact; respectful listening; and books/notebooks open, looking for evidence. The discussion group contained 7 students, all of whom had self-selected The Bluest Eye to read and discuss. Students represented widely diverse social, economic, ethnic, and academic backgrounds. Both discussions lasted approximately 20 minutes.
I tracked the formative assessment data using two sheets of paper, divided into four columns each. As students discussed, I wrote quick notes summarizing students’ verbal contributions to the discussion as well as physical attentiveness to the group. I also tallied verbal contributions by type, using the following symbols: ? = asked a question; T = referred to text; Q = quoted text; R = responded to a question; B = built on other students’ ideas. The following evidence is from that initial tracking sheet:
You can see that my notes include observations of the student’s critical reading skills, such as “Personal/real world connection” or “confusion over narrator.” I have also recorded how fully present students were with the discussion, such as “Used restroom for most of discussion…[but] joined in on conversation [when returned].” At the conclusion of the discussion, while students were packing up, I recorded a quick assessment of each student’s overall contributions, in the form of a “want” (next steps to improve discussion skill) and a “wow” (specific strength demonstrated during discussion). The middle student above, for example, needed to move from summary (“figuring out what’s going on”) to interpretation (“figuring out what that means or implies”) but also demonstrated strong responsiveness to others’ ideas.
There are clear limitations to this kind of close assessment of discussion skill. The first is group size: I was keeping track of 7 students and could have added, at most, one more while still recording accurate tallies and direct evidence of reading and discussion skill. The second is that it requires knowledgeable, trained assessors. While volunteer discussion leaders could track individual participation using the tally marks, recording notes about levels of interpretive reading or personal engagement with the text requires knowledge of the text, the students, academic language expectations, and literacy instruction.
After class, I “translated” those notes and tallies into a formative assessment slip, an example of which is shown below:
You can see that I have directly transposed the tally marks to the assessment slip, and that a key to those tally marks is listed at the bottom of the slip. The slip provides assessment evidence that is both measurable and transparent. It also provides growth-oriented comments, in the form of a “want” (next step for specific improvement) and a “wow” (a specific skill strength demonstrated during discussion). I passed these slips out to students the next day, in preparation for our final, graded discussion, and had students write a personal discussion-skill goal on the back of the slip based on my feedback. The example below is from the back of the same slip shown above:
I suggested that this student “work on moving from ‘figuring out what’s going on’ to interpreting text using that quote & textual evidence.” I then translated that suggestion into student-friendly language: “Think about ‘how’ or ‘why’ that’s happening in the book.” The student set a personal goal of “talk less but better,” which demonstrated responsiveness both to my feedback and to the number of tally marks, which showed frequent building on others’ ideas (a real strength in this student’s discussion skill, and noted in my “wow”) but not deep follow-through on text references or quotations. At the end of the second, graded discussion I asked students to reflect briefly on whether they had met their goal, and this student responded that she had, because she had “2 instead of 10.” In fact, this student was accurate in her self-assessment. During the final discussion, she built twice on another student’s ideas, then made two original responses to the text, followed by a direct quotation and a response that considered the meaning of that quotation (see summative feedback slips at the end of this post, second student’s gradesheet). Her self-assessment demonstrated both accurate evaluation and a sense of accomplishment at having met her learning goal.
After students had set personal goals for the second, graded (summatively assessed) discussion, I showed them the grading slip I would be using to track their discussion skills, and drew their attention to the back of the slip, which outlined the bases for grading, shown below:
As you can see, I have made my assessment criteria transparent and specific by listing both the types of contributions I would be looking for, using the symbols from the formative assessment (?, T, Q, R, B), and the physical signs of engaged discussion we had discussed previously. I also added an explanation of how I would evaluate the quality of their contributions, both quantitatively (specific requirements for proficiency, at top) and qualitatively (“If your question, use of text, response…demonstrates complex thinking, skilled interpretation, or expands on idea…you will receive a ‘+’ next to that tally mark.”). At the bottom of the explanation of grading I have aligned the categories of skill with letter grades.
After students had set personal goals (thereby participating in forming the evaluation criteria) and I had explained my own grading criteria, I reminded the students that I would be observing and evaluating their discussion skill rather than participating in the discussion, and asked them to begin with one of their “best” high-level questions. While students discussed, I took notes on my grading sheet, shown below:
As you can see, this grading sheet is designed to be cut into individual feedback slips, but by having those individual slips attached during the actual evaluative process I was able to keep accurate notes on individual students. I lined the grading sheets up in front of me, and wrote students’ names in the order that they were sitting, to make it easier to keep accurate track of tallies. Because this was a summative assessment, I concentrated on quantitative data (tally marks) and evaluating the quality of those contributions (+ marks) rather than capturing students’ exact words. I closely aligned formative and summative assessments: visually (grading slip format, common tally marks); chronologically (summative assessment occurred day after formative assessment, students received formative feedback and set personal goal immediately before graded discussion); and conceptually (common bases for evaluation, common physical signs of engaged discussion).
While students were packing up, I recorded a “wonder”: a question that grew out of the understanding of the book that the student demonstrated during the discussion. The middle student above, for example, had shown great insight into why one of the most abusive characters in the book was unable to “love his daughter in the right way” (because he had not learned to love himself). In response, I asked this student, “Has anyone in this story truly learned how to love themselves so they can love someone else?” I am particularly proud of the “wonder” questions I posed to individual students for two reasons. First, they honor students’ deep thinking and insightful reading. Second, they are growth-oriented in the most important sense: they ask students to re-engage with their own thinking, and to consider the implications of their own ideas.
After class, I was able to add up the tally marks and numbers of + signs, assign grades based on that data, and add a final “wow,” a note of appreciation for a specific strength each student had demonstrated over the course of the discussion. The grading slips made this process both accurate and timely; I was able to return grading slips and have grades entered electronically on the day following the discussion.
In the future, one thing I would change about this grading process would be to return the grading slips in person, with a voiced appreciation for how much I enjoyed hearing each student’s ideas about the book. Because we were wrapping up a unit and I was short on time, I simply placed the grading slips in students’ hanging file folders and informed the class they were available. While written feedback is important, I believe that the form of an assessment should be closely aligned with the skill assessed. Since this was an assessment of listening and speaking, I wish I had explicitly added that component to my evaluation and honored students with verbal as well as written praise.
Common Core State Standards Initiative. (2016). College and Career Readiness Anchor Standards for Speaking and Listening 1 (CCSS.ELA-Literacy.CCRA.SL.1). Retrieved January 23, 2016, from corestandards.org