It’s been a whole semester, can you believe it?! We started CEP813 with Three Things we believe about assessment. Through the materials in the six modules, I have strengthened and/or changed these beliefs. Before looking at my current beliefs, let’s check back to my initial post, here, to see where I started.
As you can see, there were three important points I believed to be true:
- Assessments must be relevant
- Assessments must be given throughout the course
- Assessments must provide feedback
See the initial post for the specifics of these three beliefs.
My Current Beliefs
Since the beginning of the course, I have learned a great deal throughout the modules. This has led me to adjust my beliefs, though mostly to make them more specific rather than to change them outright:
1) Assessment Must Be Given Throughout The Course → Assessment Must Be Dynamic and Adaptive to the Student
I am moving my initial #2 to #1 simply based on my learning process throughout CEP813. One of the biggest things that hit me like a ton of bricks was the very first learning section of the first module. In this introduction module, I learned the differences among Assessment of Learning, Assessment for Learning, and Assessment as Learning. Noyce (2011) did a great job of outlining these three types of assessment:
- Assessment of Learning (AOL): Used for the purposes of “accountability or to rank or certify students.” This is also known as summative assessment.
- Assessment for Learning (AFL): Any assessment designed or used specifically to boost student learning. Also, commonly known as formative assessment.
- Assessment as Learning (AAL): Leverages the knowledge that metacognition—or thinking about thinking—is an essential part of expert learning processes.
It seems basic, but as a (relatively new) PE teacher, I had simply thought of assessment as a test: one given at the end of each unit or semester that provides a grade (judgment) to the student. Not so! In other words, I had been wrong in thinking assessment should be used solely to judge the student’s learning; rather, I should also use it to help boost their learning! A ha!
So, the idea of AFL definitely backs my initial thought of assessment being given throughout the course, also reiterated by Wiggins and McTighe (2005), who argue that we need to “use far fewer narrow prompts that are intended to elicit the ‘correct’ answer to a familiar question” (p. 48). In that statement, Wiggins and McTighe also opened another door: the idea of letting the student guide the assessment so we can gather their true level of understanding, rather than trying to elicit a “correct” answer. Thus, I’ve built onto the initial idea and suggest that assessment must be dynamic and adaptive to the student. Shepard (2000) notes that “dynamic assessment [is] finding out what a student is able to do independently as well as what can be done with adult guidance” (p. 10). In conclusion, I believe that an assessment needs to be adaptive to each student (more open-ended) and dynamic (one that students can navigate themselves with the help of a teacher).
As module 2 of #CEP813 comes to a close, I look back to the three things I believed to be true about assessment. While they still hold true, I definitely see a greater depth to those beliefs.
— Liz Hogan (@edu_hogan02) September 24, 2018
2) Assessment Must Be Relevant → Assessment Must Inform Instruction
The next belief I’ve changed (slightly) is that assessment must be relevant. It goes without saying that most teachers assess within their semiotic domain. But a better assessment is one that not only relates to the current teachings but also informs the teacher. Building on the idea of a dynamic assessment that can be done with adult guidance, as mentioned in my first belief, Quellmalz (2013) further explains that allowing the teacher to assist should let both teachers and students alike “monitor learning and modify instruction” (p. 3). While informing instruction can take place during the actual assessment, Hattie and Timperley (2007) also note that feedback allows students (and/or their teachers) to set further appropriately challenging goals as the previous ones are attained, thus establishing the conditions for ongoing learning (pp. 88-89). In other words, even if a teacher is provided with information about the student regarding a specific subject, if the process or the results of the assessment can’t help one better instruct the students, it is not useful. Shepard (2000) cements the significance of this by noting that informing the teacher of where to improve teaching (or, in this case, the coach and practices) is just as important an outcome as informing the student-athlete (p. 12).
Going off of my last tweet, this has been an eye-opening experience with regards to the three things I believe about assessment. I think I understood assessment prior, but now I definitely have a MUCH deeper understanding of WHY we assess the things we do. #CEP813
— Liz Hogan (@edu_hogan02) October 5, 2018
3) Assessment Must Provide Feedback → Assessment Must Provide Feedback for Effective Learning
Lastly, and perhaps most importantly, my initial third point was that assessment must provide feedback. Given that my previous belief alluded to informing teachers, I thought it was crucial to specify that the assessment must provide feedback for effective learning, that is, for the student’s benefit. Black and Wiliam (1998) describe feedback as “…any information that is provided to the performer of any action about that performance” (p. 53). In other words, if the teacher knows where the student is in relation to the goals, but the student doesn’t, it is of no use. Therefore, effective feedback must contain information about the student’s progress but also information for the student about how to proceed to understand more. Modules 5 and 6 presented the idea that technology could also play a significant part in providing this feedback. Gee (2003) brought to light the idea that video games could help people learn through the immediate and effective feedback they provide. Surprisingly, even the use of ID cards at the University of Arizona helped to identify and support students who were at risk of dropping out (Liao, 2018). As I mentioned in my ADC,
“The more both the student and myself can pinpoint where they are in the learning process, the more likely it is that we can come up with a plan to help reach the goals set at the beginning of the course or even create more challenging goals in the future to allow for ongoing learning and understanding (Hattie & Timperley, 2007, pp. 88-89).”
Thus, regardless of the platform being used, it is important that the student receives feedback about where they are, where they need to be, and how to get there.
Unfortunately, it is all too often overheard: “Do what I say, not what I do.” This course, however, has provided me with the opportunity to take my beliefs about assessment and breathe life into them! See below for how I have tried to apply my beliefs to creations throughout the course. As a reminder, these are my three current beliefs:
- Assessment Must Be Dynamic and Adaptive to the Student
- Assessment Must Inform Instruction
- Assessment Must Provide Feedback for Effective Learning
In this module, students were asked to create an Annotated Assessment Exemplar and a Formative Assessment Design 1.0. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- In my FAD 1.0, I believe I truly applied the belief that the assessment should be dynamic and adaptive. Each student had the ability to create their own practice and evaluate themselves and their peers through their own lens. There was no single “right” or “wrong” answer being sought. Overall, the assessment was dynamic and adaptive to each individual student.
Belief 2- In addition to being dynamic and adaptive, the assessment allowed the coach to further inform their instruction. That is, this assessment could be given multiple times throughout the practice, and the coach could gain insight into potential weaknesses on his or her team, not only through watching the practice portion but also through the peer assessment.
Belief 3- My Annotated Assessment Exemplar was a current evaluation that my school uses for behavioral issues in grades 1-3. As I noted, this assessment allows for effective feedback (i.e., the student can easily see, via sad or happy faces, how they are doing) along with comments on how to move forward. “It connects directly to the real world and is something that can be used many times throughout a school year, providing constant feedback on the progress being made (while also identifying areas that may need to be tweaked).”
In this module, students were asked to create an Assessment Design Checklist 1.0 and an Assessment Genre Critical Review. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- Unfortunately, I think I learned more about how my ADC 1.0 and the Assessment Genre Critical Review were less dynamic and adaptive than I had wished. Nonetheless, the Physical Fitness Test (PFT) does allow students to complete it on their own, is adaptive to each skill level, and is offered multiple times throughout the year. I do wish it could be a little more open-ended. My ADC 1.0 at this time needed more work!
Belief 2- In my Assessment Genre Critical Review, I discovered that the Physical Fitness Test could be an excellent tool to inform instruction. As I mentioned, “Students are clearly below, at, or exceeding the standard provided to them in the beginning (i.e., their mile time was slower than acceptable or faster than the time given to meet).” With that in mind, teachers should be able to deduce areas of weakness that need to be taught more, and perhaps some strengths that can be tweaked.
Belief 3- Perhaps the biggest change of mind came when I read that Wiggins and McTighe (2005) believe there is a distinct difference between knowledge and understanding. My ADC 1.0, albeit raw, does start to suggest that there must be effective feedback to determine whether the student merely has the knowledge or truly understands the content.
In this module, students were asked to create their Assessment Design Checklist 2.0 and Formative Assessment Design 2.0. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- Perhaps the biggest area of growth for me took place during this module and the creation of ADC 2.0. In this version, I realized that assessments must be dynamic and adaptive. As I put it in my ADC 2.0, in order to answer “Is this assessment dynamic to each student?” it must meet the following points:
- The student is navigating and completing the assessment
- The assessment allows the teacher to help guide the student
- The assessment has targeted occasions and provides the means to scaffold next steps
- The assessment is more open-ended
Belief 2- Another area of growth in my ADC 2.0 was the inclusion of the idea of informing instruction. I think I made an important comment when I said, “When we think about informing instruction, we should also realize that assessment is not just for the student, but rather the teacher as well. If the feedback informs the teacher, it should allow both the student and the teacher to set more challenging goals than the previous ones, thereby ‘establishing the conditions for ongoing learning.’” Ongoing learning is the goal for any teacher, and in order to achieve it, we must be informed about our students!
Belief 3- Finally, a big change to my FAD 2.0 was the change in skill levels: from comparing students to other players to comparing them to a set standard of skill levels. This is to avoid comparisons to other students (Black & Wiliam, 1998, p. 143). I also noted that with the rubric, video evidence, and a sit-down meeting with the coach, a student should easily be able to answer the three questions that surround effective feedback, given to us by Hattie and Timperley (2007): Where am I going? (What are the goals?), How am I going? (What progress is being made toward the goal?), and Where to next? (What activities need to be undertaken to make better progress?) (p. 86).
In this module, students were asked to complete the Formative Assessment Design Peer Feedback and finish up their Assessment Design Checklist 3.0. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- By taking the time to review a peer’s work, I was able to better understand my vision of a dynamic and (ultimately) adaptive assessment. I noticed my peer had some excellent questions and ways he wished to provide feedback, but I was left wondering (and needed to ask myself with regard to my FAD), “One thing I think could be clarified is what type of questions will be asked. Are these questions simply multiple choice, fill in the blank, etc.? For an assessment to be dynamic, I think the more open-ended the better.”
Belief 2- An area where I thought my peer excelled was in my belief of informing instruction. I made note, “Given that the questions are tagged and reports are given based on the current learning profile, the teacher can see how each individual student is faring and where instruction needs to focus.” I thought this was excellent, and something I should consider for my own design!
Belief 3- Finally, in my ADC 3.0, I was able to relate my belief back to my profession in the PE world. I wrote, “It would be like playing a game and having no idea if you won. As a PE teacher, it is important that my students are provided with effective feedback so that they can continue to work on areas of weakness while improving their strengths.” In other words, I believe I finally understood not just the “what” we must look for when we assess, but the “why,” specifically for my subject area!
In this module, students were asked to create a Critical Review of a CMS and create their own CMS Assessment. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- When reviewing my CMS of choice (Schoology), I discovered that a strength it provided was the ability to ask a range of questions that allow students to be assessed in a variety of ways: definitely dynamic and adaptive to every student! I made note of all the features an assessment can provide: “In addition to the standard multiple choice, true-false questions that the previous feature offered, an assessment allows a student to highlight, drag and drop, and also do short answer math questions. This is great for teachers looking to check in for understanding.”
Belief 2- Perhaps one of the biggest benefits of creating my own assessment in Schoology was understanding the feedback that a teacher receives. I realized that through the use of my assessment, “…it should be very clear who knows what health-related fitness components and SMART goals are and who can actually apply that knowledge to the formation of a personalized goal.” In other words, as a teacher, I’ll be able to use the assessment to identify strengths and weaknesses in my class, which helps me decide where my instruction should go.
Belief 3- Another benefit of creating my own assessment in Schoology was understanding the feedback that a student receives. Some feedback is immediate (i.e., true/false questions, etc.) but some would require more time (fill-in paragraphs, etc.). While this assessment isn’t ideal in that not all feedback is immediate, it is nice that there is an opportunity to provide in-depth feedback so that the student can truly understand where they are and how to reach a higher level of understanding. As I noted, “For example, students are given feedback on concrete terms, like their health-related fitness components and the meaning of SMART goals, immediately, but then they are also provided with more in-depth feedback about how to improve their PFT in a way that gives them ownership.”
In the final module, students were asked to create a Game-based Assessment Plan and Game-based Assessment Project. Below are a few examples from those creations that tie into my current beliefs about assessment:
Belief 1- Perhaps one of the coolest (and biggest) benefits of using Twine for our Game-Based Assessment Plan is that it was clearly dynamic and adaptive. Twine helps users create a choose-your-own-adventure (CYOA) game; thus, students are completely leading themselves and have the opportunity to explore and make decisions (even fill in the blank!) on their own!
Belief 2- In my actual game-based assessment project, I found that creating a game-based assessment through Twine made it extremely difficult to inform instruction. There is no doubt that a student could seek help from the instructor, and thus he or she might be informed of areas of weakness, but unless you know how to code, it is hard to create a game that keeps a tally of moves and choices. As I noted, it wasn’t the be-all and end-all, but, “That being said, it would be a lot more beneficial if the program naturally tallied chosen paths to better inform assessment.” Something to continue learning!
Belief 3- Finally, I found that by creating a game, I could find ways to provide effective feedback to students without even having to provide “assessment results” so to speak. As I mentioned, feedback is immediate and effective:
“Immediately after each passage is chosen the user is told whether that was the desired response or not. There are some opportunities to go back and rework the solution, and some that tell the player immediately that it was the incorrect answer and why.”
Overall, I’ve had a blast in this course. I didn’t think much about assessments before, but now I have a lot of ideas for how I can better incorporate them into my lesson plans and curriculum, even as a PE teacher! It has been through these creations that I can see my beliefs come to life.
Black, P. & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-144.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Liao, S. (2018, March 12). University of Arizona tracks student ID cards to detect who might drop out. The Verge.
Noyce, P. (2011). Introduction and overview: The elusive promise of formative assessment. In P.E. Noyce & D.T. Hickey (Eds.), New frontiers in formative assessment. Cambridge, MA: Harvard Education Press.
Quellmalz, E.S. (2013). Technology to support next-generation classroom formative assessment for learning. San Francisco: WestEd. Retrieved from http://www.wested.org/resources/technology-to-support-next-generation-classroom-formative-assessment-for-learning/
Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.
Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.