Final Reflections 

08/17/2021

Three months ago, at the start of CEP813, I posted my initial thoughts about assessment. Those thoughts included three main beliefs:

  • Assessment is most useful when it is used at the classroom level to shed light on the status and progress of individual learners.
  • Assessment must be an ongoing process that makes use of different techniques to paint a more complete picture of a student's thinking.
  • Effective assessments result in feedback that can be acted upon by both students and teachers.

After three months of rather intense study of educational assessment, I still believe these things to be true; however, I now feel I have a deeper understanding of the foundations on which these beliefs rest, plus I have a few revisions.

Belief #1: Assessment is most useful when it is used at the classroom level to shed light on the status and progress of individual learners.

Origins: This belief certainly has its origins in my current job working on a research project that develops technology-enhanced assessments for classroom use. However, it also stems from my work as a middle school teacher in an urban setting during No Child Left Behind. We, like many who were part of "low performing" schools, were constantly reminded of the many ways our school didn't measure up, and almost all of the conversation was about how to nudge those standardized test scores higher. The upshot was a lot of focus placed on kids who were on the verge of proficiency and very little on everyone else. Because I taught middle school science, my scores were less important than English language arts (ELA) or math, so I was put in the unenviable position of doing test prep in subjects I was not prepared to teach. In the end, testing season came and went (with a lot of fanfare), and then we waited months to find out the results; in my case, I sometimes never found out how my students did. Hence, my distaste for large-scale testing and my preference for classroom assessments that provide teachers and students with information they can actually use.

Updates: In the first unit of CEP813, we learned about the historical foundations of assessment (social efficiency, behaviorism, social constructivism), and I tweeted about the connection between high-stakes testing and social efficiency, but I never made the connection to my own experience as a teacher until now. The goal of social efficiency is to efficiently train people to function in society. Children are viewed as adults-in-training with future roles to fill and, according to Lorrie Shepard (2000), "because it was not possible to teach every student the skills of every vocation, scientific measures of ability were also needed to predict one's future role in life and thereby determine who was best suited for each endeavor" (p. 4). As you might expect, those "scientific measures" favored people with power and influence in the dominant culture (white men) and disadvantaged nearly everyone else. The racist origins of certain kinds of standardized testing were introduced in Unit 1 (e.g. IQ tests) and revisited in Unit 6 on racism and bias in assessment (e.g. the SAT).

As someone who is a reluctant social media participant, I generated a veritable "tweet storm" (6 tweets!) during the unit on racism and bias. It's interesting to look back and see myself agreeing with proponents of equity (here and here), expressing outrage at the unfair treatment of low-income people and people of color in education (here and here), and then taking a less certain tone after engaging with Harvard's Project Implicit (here and here). I must admit, I was not pleased by my results on some of their implicit bias tests.

Of course, I loved watching the video clip of Dylan Wiliam explaining the formative assessment process and reading Black and Wiliam's transformative 1998 article, Inside the black box: Raising standards through classroom assessment. The messages contained in both of these pieces, as well as in other Unit 2 resources, resonated with me as a teacher and aligned well with the work we do at the Institute for Innovative Assessment (my current job). I also found Bennett's 2011 criticism of Black and Wiliam's claims about the effectiveness of formative assessment interesting, and I actually agree with him on a number of points, most notably that teachers are typically not assessment experts and are therefore not really equipped to create valid classroom assessments or make effective use of the results. This got me thinking (and tweeting) about my own teacher preparation and how little I actually knew about assessment when I was in the classroom. In the end, I concluded that, even though Bennett made some good points, issuing grades was always something I dreaded as a teacher, and the process of formative assessment aligns so well with what I believe to be good teaching that the effort of learning how to create good assessments is well worth it.

Revised Belief #1: Assessment is most useful when it happens at the classroom level and sheds light on the status and progress of individual learners.

Two modest changes were made to Belief #1: 1) "is used" is replaced with "happens" to emphasize that assessment should be considered a process rather than an object (a test, quiz, or assignment), and 2) "to" is replaced with "and" to make it clear that formative assessment does not end with the collection of information. Instead, the information should be examined, inferences drawn about students' learning progress, and actions taken to improve or extend learning.

Belief #2: Assessment must be an ongoing process that makes use of different techniques to paint a more complete picture of a student's thinking.

Origins: This belief probably has its origins in my years as an educator with a "hands-on" science museum, and it was reinforced during my time as a middle school teacher. I have always enjoyed learning by doing, so when I became an educator, I gravitated toward learning activities and projects that require students to interact with materials. Of course, the lessons didn't always work. Sometimes students would appear to be learning as they moved through the activities, only for me to find out later that they had very little idea why they were doing what they were doing. Other times, the entire lesson blew up into a huge mess that drew the ire of both the principal and the custodian - two people you definitely want on your side in a school. However, occasionally, the act of interacting with the materials and the choices I tried to build into assignments seemed to draw out knowledge and abilities I did not realize students had. This experience is where the emphasis on different techniques and painting a more complete picture of students' thinking comes from.

Updates: In this blog post I wrote about how performance assessments are back in fashion in science with the adoption of the Next Generation Science Standards (NGSS), and I linked improvements in the new versions to alignment with curriculum and instruction; the opportunity to use the assessments to test students' ability to apply their knowledge to a new situation (called "transfer"); and the opportunity to use technology to broaden the range of phenomena students are able to explore.

Alignment between curriculum, assessments, and learning targets is enabled by an approach to planning we learned about at the end of Unit 2 called Understanding by Design (UbD). UbD takes a "backwards" approach to instructional planning: learning goals are identified first, followed by the assessments that can be used to collect evidence of progress toward them. This approach places assessment at the center of instructional design and supports assessment that is ongoing throughout a course or unit. In Unit 4 we explored Universal Design for Learning (UDL) and its thoughtful approach to accessibility and fairness. The three principles of UDL - provide multiple means of engagement, provide multiple means of representation, and provide multiple means of action and expression - formed the basis for my belief that "different techniques" should be used to "paint a more complete picture of students' thinking".

In hindsight, I don't think "different techniques" really gets to the heart of my current thinking. Instead, I am suggesting a few more substantial changes to Belief #2.

Revised Belief #2: Assessment must be an ongoing and fair process that provides students with numerous and varied opportunities to demonstrate what they know and can do.

Looking back at some of my writings (e.g. this blog post and this tweet), I notice the impact that exploring UbD and UDL together with technology-enabled or enhanced assessment made on my thinking. I had been exposed to these ideas several times over the course of my career before embarking on this course: as a teacher attending "backwards" planning professional development, and in my work in accessible assessment research. However, thinking about the three together affected my thinking enough to consider the end of Unit 4 a turning point. Thinking about how each (UbD, UDL, and the affordances of technology) can be used in teaching and planning has clarified my ideas about what "good" assessment looks like and how it should work. The other place where I feel my original thinking was given a kick in the pants was my experience taking the Project Implicit tests. Very enlightening and humbling!

Belief #3: Effective assessments result in feedback that can be acted upon by both students and teachers.

Origins: This belief definitely has its origins in my current job where we are working to create useful reporting (i.e. feedback) about student understanding that is automatically generated based on students' answer patterns on our interactive assessment tasks. It turns out this is an incredibly challenging endeavor, especially since we are trying to do this in a way that works across different curricula and without the benefit of knowing anything about the students. In some ways, this may be an impossible task. Thinking back to Bennett's (2011) "measurement issue" with formative assessment, I can't help but wonder how we, as teachers, could know whether the inferences we are making about student understanding are correct (see this tweet). However, in other ways, this "objective" automated feedback presents an opportunity to offer feedback to students that has not been influenced by the implicit and explicit biases of the teacher, and may actually work better for some students.

Updates: Material from three CEP813 units furthered my thinking about feedback: Unit 3, which is all about feedback; Unit 6, on racism and bias; and Unit 4, on formative assessments in digital contexts. Even though I came into the class with some background in the theoretical underpinnings of feedback and some experience with trying to get it right, I still had, and have, a lot to learn. I wrestled with the idea of "useful feedback" a lot in this course, as evidenced by its prominence in my Formative Assessment Checklist and in the sample assessment I created to explore Content Management Systems in Unit 5, as well as by my many tweets on the topic (e.g. here, here, and here).

For me, the most impactful readings about feedback in this course were Hattie & Timperley's (2007) The power of feedback and Yeager & Dweck's (2012) Mindsets that promote resilience. Hattie & Timperley made it clear that the content and timing of feedback matter, and that even well-intentioned feedback can do more harm than good. Yeager & Dweck prompted me to think about how feedback is received by the learner, rather than being concerned only with what the teacher delivers. I also found it very interesting to think through what to write for the blog post assignment on how it felt to receive feedback from a peer. It didn't exactly take me back to middle school, but close!

As part of the final unit in the course, on summative assessment, I chose to continue exploring the impact of assessment feedback on learning. Koenka (2020) describes a very interesting study of the effect on motivation when different types of feedback (grades, comments, grades and comments, no feedback) are provided at the conclusion of a summative assessment. The study was conducted with a unique sample (girls attending an elite private school), so I'm not sure how generalizable the findings are; however, it is interesting to read about the potential effect of what may seem like small changes in the type of feedback students receive (see tweet).

Revised Belief #3: Effective assessments result in feedback that can be acted upon by both students and teachers to improve learning.

Upon rereading Belief #3, I realized that one possible action resulting from feedback is for the learner to rip up the paper and throw it in the trash, or in the case of digital feedback, simply hit delete. Clearly, acting on feedback is not enough. To be considered effective, the actions should be in the interest of improved learning. Otherwise, I stick with my original belief.

Final Thoughts: This course completes the requirements for me to receive a Graduate Certificate in Online Teaching and Learning from MSU. I feel like I have become a better writer and thinker due to the rigor of this program, and I might even be prepared to enter the classroom again (though not without exploring other options first: teachers work way too hard!). I must say, the experience has been more challenging than I imagined, but the challenge has been worth it! If I were a bit younger or had not already completed a master's degree, I would definitely consider continuing on. Bravo MSU MAET!

References and Links

Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5-25. doi:10.1080/0969594X.2010.513678

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. The Phi Delta Kappan, 80(2), 139-144, 146-148.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81-112.

Koenka, A. C. (2020). Grade expectations: The motivational consequences of performance feedback on a summative assessment. The Journal of Experimental Education. doi:10.1080/00220973.2020.1777069

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Yeager, D. S., & Dweck, C. S. (2012). Mindsets that promote resilience: When students believe that personal characteristics can be developed. Educational Psychologist, 47(4), 302-314.

Image Credit

Photo by Manuela Adler from Pexels
