Higher Education Research and Development Society of Australasia
We’ve heard the essay is dead (thanks, GenAI), and we are constantly challenged to transition towards authentic assessment experiences in our courses. In a flurry of innovation, we’re seeing more group presentations, interactive orals, podcasts, posters, elevator pitches, and enterprise challenges.
Because of this, we need to re-centre constructive alignment.
To ensure we have a shared language, please allow me to provide a refresher on constructive alignment:
The Course Learning Outcomes (CLOs) are developed first. CLOs outline the key concepts students will learn and apply, and the skills they should be able to demonstrate by the end of the course (communicating to specific audiences and presentation skills are common examples).

From the CLOs, you then consider how you will verify that students can do these things, so assessments (summative and formative) are mapped out next. At this stage you need a solid understanding of how your formative assessment experiences (assessment FOR learning) are scaffolded: how they build on each other and towards the summative assessment. These are skill-building steps along the learning journey: you give feedback that helps students reach a higher level of skill (those interested in reading more about this should search for Vygotsky’s ‘Zone of Proximal Development’). Formative assessments allow students to practise and to gain feedback on skill development; importantly, they signal the performance expectations of the summative assessment. Remember, summative assessments should verify that the CLOs have been attained.

Once the CLOs and assessments have been mapped out, the specific learning activities can be designed.
I have laboured over the distinction between formative and summative assessment to illustrate the problem with employing authentic assessments without considering their associated skills. For example, I’ve observed rubrics for group presentations with criteria for skills, such as presentation delivery and teamwork, that the course never explicitly taught.
I’ve also noticed that these courses make an assumption about prior learning. Perhaps it is reasonable to assume that students arrive at our courses with presentation or teamwork skills already developed? Perhaps an entire chain of educators up to that point thought the same. It is possible that these students experienced a different approach to education from our Western model. It is also plausible that they scraped through previous group assessments, lifted up by their peers. They may never have received any individual formative (or summative) feedback on their development of presentation skills. So, these students may never have been taught how to work in a team, or how to present. We need to ensure that students don’t slip through the cracks in our assumptions of developed skills.
I have made this mistake too – I did not map out learning experiences that matched the mode of assessment, expecting presentation skills as a given. I included rubric criteria for skills I had not scaffolded students towards. Rubrics remain an area where I often see accidental assessment of ‘untaught’ content in courses. The bottom line is: we need to avoid assessing something we haven’t explicitly taught. We especially need to be wary of summatively assessing non-taught skills for which we have provided no feedback or practice.
This is a call to action – not to be afraid of authentic tasks, but to ensure our courses are truly aligned to allow for the development of skills we want to see demonstrated in our modes of assessment.
Take the example of a group assignment (if you have one of these in your course):
As a quick fix: consider where you can add remedial support for students in the lead-up to the summative assessment. I’ve seen this done as a series of “masterclasses”, where students can opt in for additional training/scaffolding if they haven’t had development opportunities in a specific area (such as creating PowerPoint slides with minimal text, presentation timing, how to record/edit videos, or how to fairly distribute tasks in a group assessment).
On that note, this year my colleagues (Peta Callaghan and Paul Moss) and I have been trialling new methods to develop the skills required for successful group assessments. We have introduced a method for scaffolding the fair distribution of tasks and ensuring accountability, referred to as ‘task allocation’. Helping students identify and allocate tasks across the whole project has made the management of group assessments easier (from the perspective of this Course Coordinator!), and student feedback has also been positive.
So, as we embed authentic assessments in our courses, we shouldn’t forget the basics of constructive alignment. This is a reminder to ensure that we are teaching and scaffolding the skills we seek to assess. For educators, this means deliberately mapping out the learning activities and formative assessment opportunities, and building in remedial support (particularly in areas like teamwork and presentation skills).
The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.
HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.
Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.