Bookmarked Learning recognition beyond an ATAR by gregmiller68 (gregmiller68.com)

Despite the need to engage in rigorous processes to develop Learner Profiles for students, in mid December when HSC/VCE/SACE etc., and ATAR results are released, we will still see the media bombard us with league style comparisons of schools and their end of year results. There will also be many schools, promoting enviable ATAR results of students suited to an examination approach to learning. However, I remain positive that one day, and one day soon, each one of our students will leave each one of our schools with more than one number on one day and a certificate filled with only marks and bands. I look forward to the day, hopefully one day soon, where we will have a Learner Profile which showcases the very best of who a young adult is and what they can do so they can find their place of meaning in this rapidly changing world.

Greg Miller talks about the various efforts in Australia to recognise learning beyond the ATAR. These include the New South Wales Digital Wallet, the South Australian Learner Profile Pilot Project and the New Metrics Project. It will be interesting to see how technology develops to accommodate these changes, whether it be timetables or assessment.
Bookmarked What Can Students Do? by Cameron Paterson (learningshore.edublogs.org)

Teachers do too much of the learning and thinking for students. It does not have to be this way. When teachers work harder than students, young people become inculcated into coming to school to watch the adults work. If we want them to learn; if we want them to think, this is not something that can be outsourced. And if we want them to take responsibility for the culture and feel of the classroom and school, we need to invite them into the conversation, and even step away and let them take the lead. What do you complain about having to do that your students could do tomorrow?

Cameron Paterson reflects upon the question of what students can do in the classroom. He shares examples of where his students have co-constructed assessment criteria, self-assessed their work, written their own report comments and taught their own lessons. This reminds me of Bianca Hewes’ work with ‘meddles and missions’.

Personally, I tried a few of these things when I was in the classroom, making the curriculum explicit and getting the students to work with me to design assessments. I even got my Year 8 Media Studies class to design their own excursion, including making inquiries with various places in preparation. In these situations, I guess the focus of the teaching was the skills associated with how to learn.

The issue that I had was that I was only one part of the week for these students and that this was all vastly different to how other teachers and classes were operating. I guess the point then is how much can students do when we let them?

It is interesting thinking about all this outside of the classroom. In my role, I work with teachers and administration on some of the day-to-day technical trivialities, such as academic reporting and attendance. It is always so easy to just fix problems as they arise. However, I always endeavour to meet half-way, whether it be providing a short summary or actually walking through a problem. The challenge in these situations is the limit of time; I wonder if that is sometimes the challenge in the classroom too.

Bookmarked Self-Assessing Creative Problem Solving (brainbaking.com)

So what’s the point of all this? Well, since we now have a self-test that measures more than simply divergent thinking and is specifically geared towards computing education, we could start experimenting with interventions in courses and measure its effects pre and post intervention using the CPPST.

Wouter Groeneveld discusses his development of the Creative Programming Problem Solving Test (CPPST), a self-assessment that measures more than just divergent thinking. It explores various aspects of problem-solving associated with programming, with the intent of helping developers test the efficacy of interventions. I am not sure I am really a true ‘programmer’, but part of my work involves creating solutions to problems with the tools at hand. What I liked about the test was the way in which it helped me think and reflect through the act of answering the various questions. Whatever the outcome, I felt that there was something in the actual asking of the questions.

It all has me thinking about the ATC21s project from a few years ago and the attempt to capture the capabilities in themselves. One of my takeaways was that capabilities are often captured through something, and that separating the doing from the thing can be very hard.

Replied to University Entrance (daily-ink.davidtruss.com)

As long as universities focus primarily on marks, this will drive high schools to focus on grades. This will drive high school students into classes and programs that are about outputting good grades, not producing intrinsic learners, passionate about learning, and ready to take on all the challenges universities have to offer.

David, the means of gaining entrance into university certainly is a wicked problem. You might be interested in people like Greg Miller, Paul Browning and Peter Hutton as they too have explored different forms of measurement in their own ways. There also seems to be a number of projects in Australia to develop alternative pathways to university.
Liked New Metrics for Success (Getting Smart)

Milligan’s New Metrics for Success project is a collaborative research partnership between The University of Melbourne and 40 ‘first-mover’ schools to create assessment tools, influence the development of policy and accelerate change. In this partnership, leading educators are working with academic experts to reimagine schooling in Australia. With the support of The University of Melbourne, innovative Australian school leaders have established a broad network of institutions that are influencing the wider educational system by sharing evidence of their transformative practices.

Bookmarked Creativity Self-Assessment Is Nonsense by Wouter Groeneveld (brainbaking.com)

Curiousness and persistence slightly increase your chance at creating something that will be labeled as creative by the field. But only ever so slightly. All the other parameters need to match up as well, and we are at the mercy of entropy for most of these.

All this is somehow soothing to me. It could mean that the difference between great creative individuals—Einstein, Nietzsche, Edison, von Neumann, da Vinci—and people like you and me is not so much the intelligence, perseverance, or insight, but rather being in the right place at the right time.

Wouter Groeneveld explains that creativity is not in what is created, but rather in the critic.

creativity is in fact a label that is put onto something (not someone) by an expert in the field that is not the maker. No single painter can claim his or her work is very creative: that is a job for the art critics—who are the domain experts that probably used to paint themselves. It is the work, the produce, that is creative. We say that someone “is creative”, but we really mean that someone “produced something creative”.

This reminds me of the work done by the ATC21s project to assess ’21st century’ skills. They offer the following suggestions in conclusion:

Moving these aspirations from curriculum documents to classrooms is a more challenging task. Several policy strategies appear to be key in supporting this process:

  • Developing materials that illustrate where and how these skills may be integrated into content area plans and lessons, which are the common organizers of curriculum.
  • Incorporating pedagogies for teaching these skills in pre-service preparation and in ongoing learning opportunities for teachers.
  • Ensuring that classroom tools are widely available for enacting these skills – including access to technologies, materials, and exemplar tasks that will allow teachers to organize and students to engage in productive activities.
  • Creating assessments that can evaluate these skills and that create incentives for these abilities to be widely taught as a regular part of the curriculum.
  • Developing an understanding of how these capacities may develop over time – with opportunity, scaffolding, and instruction – so that teachers can envision how to organize supports for learning in these complex domains.

Page 308

Bookmarked Mapping Assessment by Ron Ritchhart (ronritchhart.com)

I propose that we think of assessment as occurring on two dimensions. The first dimension (let’s set this on a horizontal continua) is the degree of evaluation in which we engage. At the far end of this continua (we’ll place it on the right), we are highly evaluative, desiring scores and measures that quantify outcomes in a fairly precise way. Here, we judge work against clearly defined criteria that we apply to see just how close to the mark a student gets. Such evaluation can produce ranks and comparisons. On the other end of this continua (we’ll place it on the left) we might seek to understand students where they are, making sense of their actions and respond through our grounded interpretation. Here, rather than come with predetermined criteria, we open ourselves to the possibilities and variations in both learning styles and outcomes that a close examination of our students’ learning might provide.

“With this map of the terrain in hand, we can begin to place our various assessment practices in the appropriate quadrant.”

The second dimension (let’s set this on a vertical continua) is the extent to which our assessments are integrated in our instruction and part of the ongoing learning of the classroom. At one end (we’ll place it at the top) we have assessment that is highly embedded in our teaching and students’ learning. That means that we don’t stop or pause our instruction in order to assess but instead embed it as a regular part of our practice. At the other end of the continua (placed at the bottom) we have assessment that is set apart from instruction and student learning. Here, we declare a formal end to our instruction and move into a deliberate assessment phase that we hope will reveal something about students’ learning. A basic graph of these two dimensions produces four quadrants that we might use to map the terrain of assessment (see Figure 1).

Ron Ritchhart provides a model for mapping assessment based on two dimensions: integration and evaluation. He provides examples for each of the quadrants, including providing feedback on performance (Quadrant A), checking for understanding and misconceptions (Quadrant B), examination of the teacher’s documentation of learning (Quadrant C) and formal summative assessments (Quadrant D). In the end, the purpose of the map is ‘to know where we are, and where we might go or want to be’.
Replied to The HSC – what it is and what it needs to be. by gregmiller68 (gregmiller68.com)

Whilst the HSC has been in continuous review for decades it now needs refurbishment. In doing so, we need to keep the best of what it offers and replace what needs to go with new metrics which offer a far more complete picture of each young adult’s knowledge, understanding, skills, capabilities and dispositions, and how they are applied.

As I have said, what the HSC is and what it needs to be are two very different things.

Greg, this seems to be the wicked problem of our time. It has been interesting to see various universities form connections with schools, such as Templestowe and Swinburne University. The problem is that the status quo still seems to be based on scores and ranking.

Intrigued with the University of Melbourne’s ‘New Metrics’ program. They have a bit of history exploring new areas of assessment with the ATC21s program (whitepaper can be found here), however I am not sure what really came of that work.

Bookmarked Remote Teaching Tip: Assessments in an Online Environment by Bill Ferriter (blog.williamferriter.com)

if the questions on your assessment can be Googled AND you are worried about cheating, then you have written a bad assessment.

Bill Ferriter suggests that before you worry about how you are going to assess learning online, you need to address the question of what you are assessing for.

  • We need to know the level of rigor of the essential standard that we are assessing before we can write a question that will generate reliable information on student mastery.
  • We need to decide on the kinds of things that students should know and be able to do if they have mastered the essential standard that we are assessing.
  • We need to write and then deliver a small handful (3-5) of questions for each essential standard that we are assessing.
  • We need to think through the common misconceptions that we are likely to see in student responses to our questions.
  • For any constructed response questions or performance assessments, we need to decide together what “mastery” will look like in student responses.
  • That might include developing exemplars of different levels of student performance or creating shared scoring rubrics.

If the focus is multiple-choice questions, Ferriter uses MasteryConnect, while if it is about demonstrations, he uses Flipgrid. Although there are many other options out there, these work within his context. As he explains:

Your goal is to find tools that:

  • Have little to no learning curve for you or your students.
  • Aren’t blocked by your district’s firewall.
  • Fit into your budget — or the budget of your school.

Ferriter closes with a reflection on how he deals with the threat of students cheating. Firstly, he makes a concerted effort to lower the stakes on his classroom assessments by making them smaller and providing students the opportunity to repeat where needed. In addition, he suggests that if the answer is in fact Google-able then maybe it is actually just poor assessment.

Your piece about cheating reminds me of an experience I had in Year 10 Science when we had an open-book test. I remember Ms. Hé not paying too much attention to our chatter during tests. We would turn and talk with classmates to get the answer. The funny thing was that it did not really make a difference. I cannot remember what grade I got, but I know it was not great. I think it clearly highlighted the lack of care I had for the subject. Cheating made little difference. In hindsight, I wonder if that was in fact her strategy; I am not sure. It was a useful lesson to learn.

Liked Visible Learning could end exams (EDUWELLS)

If a nation agreed to classrooms consistently developing an environment of Assessment for Learning where there are open and transparent activities designed for students and teacher to track, feedback and reflect on strengths, weaknesses and gaps in knowledge and skills as part of the learning, then maybe this “AFL record” could be what formed the final record of achievement for a student. This record would have been visible and moderated all along as it developed with the student, teacher and school agreed in what it reported about the learner.

If we had no exams and a exiting school was centred on students’, teachers’, schools’ and parents’ involvement in a national system of learning progress and transparent dialogue, teachers could return to a focus on learning and progress and not preparation for the divisive and alien environment of exam silence.

Bookmarked
With an eye towards reforming assessment practices, Jon Dron compiles a list of principles associated with assessment:

  • The primary purpose of assessment is to help the learner to improve their learning. All assessment should be formative.
  • Assessment without feedback (teacher, peer, machine, self) is judgement, not assessment, pointless.
  • Ideally, feedback should be direct and immediate or, at least, as prompt as possible.
  • Feedback should only ever relate to what has been done, never the doer.
  • No criticism should ever be made without also at least outlining steps that might be taken to improve on it.
  • Grades (with some very rare minor exceptions where the grade is intrinsic to the activity, such as some gaming scenarios or, arguably, objective single-answer quizzes with T/F answers) are not feedback.
  • Assessment should never ever be used to reward or punish particular prior learning behaviours (e.g. use of exams to encourage revision, grades as goals, marks for participation, etc) .
  • Students should be able to choose how, when and on what they are assessed.
  • Where possible, students should participate in the assessment of themselves and others.
  • Assessment should help the teacher to understand the needs, interests, skills, and gaps in knowledge of their students, and should be used to help to improve teaching.
  • Assessment is a way to show learners that we care about their learning.

He elaborates on these further in regards to credentials and objective quizzes. Dron believes that students should have autonomy when it comes to assessment and the best model for this is the creation of a portfolio of evidence.

A portfolio of evidence, including a reflective commentary, is usually going to be the backbone of any fair, humane, effective assessment … It is worth noting that, unlike written exams and their ilk, such methods are actually fun for all concerned, albeit that the pleasure comes from solving problems and overcoming challenges, so it is seldom easy.

This is a useful provocation in regards to assessment and feedback. It is also interesting to think about in regards to things like open badges.

Replied to Feedback on the Capabilities for a Changing World. by gregmiller68 (gregmiller68.com)

Our next challenge is to turn an improving ‘back end’ tracking tool into a more interactive and intuitive online experience for students and parents which engages them more than twice a year.

Thank you Greg for continuing to share the journey of your school. I am really intrigued as to how well the students are able to speak to this data.
Bookmarked Why Should We Allow Students to Retake Assessments? by Peter DeWitt (blogs.edweek.org)

The question regarding retakes isn’t simply, “Should students get a second chance?” Rather, it is, “How can we use assessments to help students improve?” If we incentivize success on the first assessment by planning enticing enrichment activities and guide students in correcting the learning errors identified on that assessment, we’re much more likely to realize Benjamin Bloom’s dream of having all students, ALL students learn well.

Thomas Guskey responds to concerns raised about offering students the opportunity to retake tests and assessments.

To bring improvement, Bloom stressed formative assessments must be followed by high-quality, corrective instruction designed to remedy whatever learning errors the assessments identified. Unlike reteaching, which typically involves simply repeating the original instruction, correctives present concepts in new ways and engage students in different learning experiences.

He explains that concerns about time and coverage can be overcome by using a corrective process, that this is what real life is like (e.g. a surgeon or pilot), and points to the everyday reality of mastery and fair grades (e.g. a driver’s licence).

I guess it raises the question, what is the point of feedback, if students are not given the opportunity to act upon it?

Bookmarked The Hitch-hiker’s Guide to Alternative Assessment (damiantgordon.com)
Damian Gordon collates an extensive list of alternative assessment ideas. There has been a lot written about the tools to use in association with online learning, but less in regards to the various assessment practices.

Along with Bianca Hewes’ discussion of Project Based Learning and Pernille Ripp’s Choose Your Own Adventure, this guide is useful in helping us rethink the options.

via Stephen Downes

Replied to Sweeping changes to HSC and syllabus proposed by government review (The Sydney Morning Herald)

The report proposed reducing more than 170 senior-level courses to a “limited set of rigorous, high-quality, advanced courses”. Vocational and academic subjects would slowly be brought closer so that eventually every course would mix theory and application.

HSC students would also have to complete a single major project, which would allow the development and assessment of skills such as gathering and analysing, as well as so-called general capabilities such as team work and communication.

It is interesting to consider the proposed changes in the NSW Curriculum Review Interim Report against other curriculum frameworks, like New Zealand. It also reminds me of a comment someone once made to me that curriculum is the best guess for tomorrow. I was also intrigued by Marten Koomen’s take, especially highlighting Masters’ Rasch over Reckase. It makes me rethink the use of ‘crowded curriculum’.