Assessment is a cyclical process. It involves planning to measure and report on student learning, then taking action to make improvements where reality falls short of expectations.
I’ve written before about the importance of defining the what of assessment (AKA learning outcomes), but you should also consider when assessment will occur and why. Specifically, I’m now going to delve into the definitions and use cases for formative and summative assessment efforts.
Formative Assessment
Formative assessment is measurement that occurs during an activity, with the results intended to inform approaches for the remainder of the experience.
The intention is to measure learning at a particular point in a course, program, or event. As such, you should not expect students to demonstrate mastery at this point; they are still in the process of learning! However, they should be demonstrating learning at a level appropriate to the concepts being introduced or reinforced.
An example of formative assessment could be orientation leaders gathering feedback from students and families throughout a multi-day orientation program. Feedback on awareness of resources could prompt staff to reiterate that information the next day. Likewise, feedback on the accessibility of offices for registration services (such as scheduling courses, obtaining student IDs, and touring buildings) could inform staffing and planned activities for the remainder of the orientation experience.
Formative assessment could also be as simple as a student activities professional checking in with workshop participants to see how the students are doing. Are students understanding what’s being presented? When SA pros preview what’s left of the workshop, do students want to engage with the material differently? You can see how this information can shape the remainder of a given experience, but perhaps you hadn’t realized its dual benefit for informing both operations and student learning.
Putting a finer point on it, formative assessment sheds light on how well a program is rolling out and how participants are experiencing it. Practitioners can use this information to make real-time adjustments that better serve students and help meet the end goals of the intervention.
At my institution, National Louis University (NLU), our student engagement staff made quick, formative changes when assessing the Golden Eagle Leader Program. When everything pivoted online due to COVID-19, we saw a surge in students wanting to participate in this multi-workshop leadership program.
The operational data on interest and usage helped us increase the number and adjust the timing of workshop offerings to likewise increase access. Feedback collected from students after individual program sessions allowed for tweaks in subsequent or additional sessions to ensure that we best conveyed content and reinforced learning outcomes aligned across sessions. You can learn more about the NLU student engagement team’s virtual success, powered by Presence, here.
It is worth pointing out that formative assessment may be more useful for some learning interventions than others. After all, many programs cannot easily be changed on short notice or while they are occurring. As such, formative assessment may not be as feasible for large-scale or multi-partner projects in which facilitators don’t have the unilateral authority or resources to make immediate changes.
Summative Assessment
Summative assessment is measurement that occurs after an activity, with the results intended to inform the approach for the next offering of the experience.
Once a course, program, or event has ended, learning outcomes may need to be measured immediately or at a specific later time. Typically, summative assessment shows what knowledge or skill sets were acquired, retained, or demonstrated after going through the intervention. Such data represents the end of an experience. Data collection may occur immediately following the program or after a set period that allows for reflection or for changed behavior to be demonstrated.
An example of summative assessment could be career services collecting feedback en masse from students and employers on how to better conduct career fairs — so as to facilitate connections, networking, and information sharing. It is appropriate to ask these questions after the experience so that students and employers can reflect on the entirety of the fair. They can share what worked well and what didn’t in order to offer constructive feedback to shape the fair’s design and facilitation next time.
Areas like advising, tutoring, admissions, and library services could collect summative assessment data from individual students after each use of services or office visit. Attendance data and post-visit feedback could inform staff of whether goals were achieved or tasks properly executed. And, operationally, they could inform decisions for improving the experience for the individual student’s next visit or for future students.
Two examples from my own institution immediately come to mind regarding summative assessment. Our orientation professionals shared results from assessments and articulated how they would use the results to make improvements for the next year. Veteran and Military Programs integrated assessment feedback from students into training and educational sessions for faculty and staff about student programs and services.
A benefit of summative assessment is the ability to review the collected data after the intervention is over. It is a natural time for reflection and for interpreting results to inform future improvement. Moreover, events or activities may be fast-paced or complex in their construction, so post-event reflection and data-informed decision-making may be better suited to all stakeholders involved.
Remember, however, that summative assessment is based on specific students offering feedback about a program or experience. Once changes are made to the intervention, the next cohort’s reactions may differ. Just because certain past students wanted changes or reacted in certain ways does not necessarily mean that future students will react in the same way. This contrasts with formative assessment, in which any changes made based on feedback should benefit the students who offered the data in the first place.
Strive for Balance
The point of this post is not to champion or disparage one form of assessment. Rather, I hope that by defining and differentiating formative and summative assessment, you can see the appropriateness of each approach for various interventions.
I encourage you to use a combination of the techniques. Summative data can be placed alongside formative results to interpret student journeys in relation to the learning demonstrated at the conclusion of an experience.
Where summative (end) outcomes fall short of the target, did the formative results show a positive, upward trajectory along the way? If so, you’ll know the misstep, or area to focus on, lies at the end of the experience.
Alternatively, a summative result may exceed expectations while the formative journey is full of ups and downs, representing a rough and possibly stressful experience. In this circumstance, you should not be satisfied with the summative result alone; go back and try to improve the low points revealed by the formative results.
Just as there is a time, place, and value for balancing qualitative and quantitative data, I wish you well in planning a healthy mix of formative and summative assessment efforts.
If you have particularly productive pairings, we’d love to hear about them! Share with us @themoderncampus and @JoeBooksLevy.