Assessment possibilities?

Picking up on the comments from Kayte and Mark on assessment… one of the things we might hope EnquiryBlogger could help with is evaluation, both formative and summative, for both learners and coaches/educators. We need to develop a rigorous discipline around what quality looks like in authentic enquiry.

The first questions we’d want to ask are independent of technology:

  • What feedback does a learner need to stay on course, and to reflect on their progress?
  • What signals does a coach/educator look for when gauging the quality of an enquiry?

We can then ask how EB might support these monitoring/assessment tasks, as well as wondering if EB opens new possibilities that are impossible/impractical in the non-digital world.

A few examples of what EB might offer follow. The EB project has very limited resources, so the scope for implementing sophisticated ‘analytics’ is modest, but we’ll start simple and see how far we can get:

  1. How am I doing? EB assists with basic information management: stuff shouldn’t get lost if blogged. If each step has an associated tag, then EB can display which tags have been used, and how many times (a tag cloud is a basic way to do this).
  2. How strong are they on <Learning Power dimension X>? Here we would be looking for indicators that we might correlate with stretching (or not) on a particular dimension. Trivially, asking lots of questions presumably bears some relation to critical curiosity; exchanging messages with peers will have something to do with learning relationships, etc.
  3. How is the whole cohort doing? Aggregated views of such ‘analytics’ could provide a coach/educator with powerful views, analogous to cohort learning power charts. Thus, “Are learners struggling with stage X?” might be evidenced by no posts, only a few posts, or very brief posts with tag X across the group.
  4. What is the quality of reflection? Natural language parsing can detect the use of particular written expressions that educators take to signal higher-quality reflection (note that this is a more ambitious possibility, so no promises here!). A rough sketch of the simpler end of these possibilities follows this list.
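To make the simpler end of these possibilities concrete, here is a minimal sketch in Python, assuming blog posts can be exported as plain (author, tags, text) records. The Post class, the field names and the REFLECTION_MARKERS phrases are illustrative assumptions only; none of them are part of EnquiryBlogger or WordPress.

    # A rough sketch of possibilities 1, 3 and 4 above. Assumes posts are available
    # as simple (author, tags, text) records; all names here are illustrative.
    from collections import Counter
    from dataclasses import dataclass, field

    # Hypothetical phrases an educator might treat as signals of reflective writing.
    REFLECTION_MARKERS = ["i realised", "i wonder", "next time i will", "this surprised me"]

    @dataclass
    class Post:
        author: str
        tags: list = field(default_factory=list)
        text: str = ""

    def tag_counts(posts):
        """Possibility 1: which step tags have been used, and how often (tag-cloud data)."""
        return Counter(tag for post in posts for tag in post.tags)

    def cohort_tag_counts(posts):
        """Possibility 3: per-learner tag counts, ready for an aggregated cohort view."""
        by_author = {}
        for post in posts:
            by_author.setdefault(post.author, Counter()).update(post.tags)
        return by_author

    def reflection_markers_found(post):
        """Possibility 4, crudely: count marker phrases rather than genuinely parse language."""
        text = post.text.lower()
        return sum(text.count(marker) for marker in REFLECTION_MARKERS)

    posts = [
        Post("amy", ["step-3-questioning"], "I wonder why the canal was built here? I realised I need a map."),
        Post("ben", ["step-2-observation"], "Took some photos of the bridge."),
    ]
    print(tag_counts(posts))                                       # tag-cloud counts
    print({p.author: reflection_markers_found(p) for p in posts})  # {'amy': 2, 'ben': 0}

Even something this crude would give a learner a “How am I doing?” tag cloud and give a coach a first cohort-level view; the real natural-language work in possibility 4 would need far more than phrase matching.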

OK, this is just to kick off this strand. As a digital platform, EB should ideally demonstrate that it can not only support what might otherwise be done in a pen-and-paper workbook, but also go beyond this to create new possibilities. I haven’t even looked at the Australian quality teaching matrix for enquiry that Ruth showed us, which I’m sure would trigger new thoughts.

Your thoughts welcomed!


4 Responses to Assessment possibilities?

  1. markmoorhouse says:

    Hi All

    Just posting to capture some further thinking re: assessment. Tasks I need to do are to address Simon’s points more squarely and also to read the information emailed by Alec Patton (Innovation Unit/Learning Futures/All-Round-Good-Egg-Anyway) about other examples of Enquiry-Based Learning structures. That in itself highlights the danger of those of us too poor to be enjoying foreign climes right now getting too far ahead with our thinking, when there are other stakeholders within the LF community, not least of whom are the young learners, who must have input into this co-constructive process if we are to develop an “owned” tool.

    Nevertheless, some thoughts:

    We must have learners actively reviewing within Enquiryblogger, not simply populating the space with evidence for “professionals” to review. (The prompt for this and a fair few other related thoughts was reading a brilliant publication by Chris Watkins, “Research Matters: Learning, Performance and Improvement”, Summer 2010, Issue 34, and also seeing the “Stuck Stations” around Lindsey Regan’s Art classroom at Matthew Moss.) So… another possibility for the menu would be a dialogue box/space/pulldown/whatever called “My Most Recent Failure”, which could have three areas:

    A. Would supply the prompt “Was it due to 1. Lack of effort OR 2. Using the Wrong Strategy?”

    B. Would invite the sharing of “My Thoughts and Feelings”.

    C. Would prompt for ideas about “What I Might Do Differently”.

    Getting the review cycle going throughout the project has got to have value. (A rough sketch of how such an entry might be captured follows.)
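    To make the shape of such an entry concrete, here is a minimal sketch of how a “My Most Recent Failure” submission might be represented. Only the A–C prompts come from the comment above; the class and field names are illustrative assumptions, not EnquiryBlogger’s actual design.

        # A rough sketch of the "My Most Recent Failure" entry described in A-C above.
        # Only the prompts come from the comment; the names here are illustrative.
        from dataclasses import dataclass
        from enum import Enum

        class Cause(Enum):
            LACK_OF_EFFORT = "Lack of effort"
            WRONG_STRATEGY = "Using the wrong strategy"

        @dataclass
        class MostRecentFailure:
            cause: Cause                      # A. "Was it due to 1. Lack of effort OR 2. Using the Wrong Strategy?"
            thoughts_and_feelings: str        # B. "My Thoughts and Feelings"
            what_i_might_do_differently: str  # C. "What I Might Do Differently"

        entry = MostRecentFailure(
            cause=Cause.WRONG_STRATEGY,
            thoughts_and_feelings="Frustrated: my survey questions were too vague.",
            what_i_might_do_differently="Pilot the questions on two classmates first.",
        )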

    Another thought is that we should bear in mind that we could already be assessing in Step 7 with some rigour, with subject staff reviewing learners’ blogs for KS3 or even KS4 coverage of any number of subject-specific assessment criteria. Are they really going to trawl all these blogs for this stuff? You’re damn right they are! It takes pressure off them to have ready-supplied evidence to help them satisfy assessment grid requirements for their charges. And as far as I know Larry Rosenstock, widely (and rightly!!) revered within Learning Futures, does not operate a separate assessment rubric for “Learning”, but is happy to prove the educational value added by his approach through performance within existing assessment frameworks, augmented by the power of community and peer assessment enacted via 8-weekly Presentation of Learning (Exhibition) evenings. If we hang on a year or two to get Yr7s to the end of Key Stage 3 and Yr10s to the end of Key Stage 4, then we can “assess” the efficacy of EBL by increased achievement. And in the interim, isn’t the mix of qualitative evidence in blogs and outcomes enough?

    If it isn’t (and this is an odd one: the Watkins periodical cited above cites masses of evidence that learner-centred practices improve performance in standardised tests), then we will have to attempt the construction of yet another matrix with the potential to stultify the organic and fractal nature of the real learning growth it seeks to measure, like pinning butterflies for the specimen case.

    We will have to be really, really careful then, or we will go the way of so many other worthy expeditions down this road….

    This being said, and I hope we will say it many more times yet to keep us alert to the danger, the parsing Simon mentioned sounds really interesting. There may or may not be socio-cultural variables to account for, but I really like the thought of the deftness with which this evidence could be captured. The main event, the learner’s journey, remains untouched by this scrutiny.

    I have also started looking at Socratic thinking matrices, spurred on by some work Anne De’A Achevarria (thinkwell.org) did with us as a staff. The page I have found most fruitful is this one:

    http://www.fcs.utah.edu/faculty/herrin/deep_learning.html#tax

    Scroll from matrix to matrix (there are a few) and it is really quite interesting.

    But Watkins reminds us that learning is more than just thinking. And so whilst some way of making judgements about depth of learning will be useful, we might do just as well to look at the 8 Steps and see what questions thinking about them prompts. (Again, we need to tie in with all the other stakeholders to ensure that the 8 Steps are an acceptable framework for the LF composite model to follow.) Some thinking, then, to be done about what range of qualities might express themselves within Steps 1 to 6, and how we might talk about them. And the range, continuum-style, is good ground, I reckon. It is in the Queensland New Basics Rich Task Grading Masters also, which I’ll try to attach for information.

    I am going to come back to all this though, taking Qing’s advice and trying out that U theory of letting it all go and coming back to it.

    Inabit.

    Mark

    • markmoorhouse says:

      “Pictures At An Exhibition” Queensland New Basics Rich Task document, including Grading Master, now in the Media section to view.

      Cheers

      Mark

  2. markmoorhouse says:

    A Modest Proposal…

    Hi Folks,

    I have an assessment process for you: might be marvellous, might not be – see what you think.

    The process would be an “Assessment” page online, linked in with Enquiryblogger. It would be live throughout the life of the project, to be manipulated by the teacher/pedagogue/mentor/supervisor/assessor whenever appropriate (formative as well as summative assessment, then). The page would show 10 sliders, each moveable between the polarities of one of the 10 assessment strands: Strands 1–6 reflect Steps 1–6 of Ruth’s 8-step process; Strand 7 assesses the social significance of the learning; Strand 8 assesses the quality of ongoing self-review conducted by the learner; Strand 9, the level of social learning undertaken; and Strand 10, the amount of good, old-fashioned graft. The provocation for Strands 7–10 came from Ruth, Biesta’s book “Beyond Learning”, the Chris Watkins text cited in the previous post, and Dweck’s and Claxton’s (separately!) clarity on the importance of effort.

    The Enquiryblogger assessment tool would be landscape, with rubric statement ‘a’ on the left and ‘b’ on the right (see below). The assessor would click and slide the pointer into position as and when they saw fit, and would be able to revise their judgements at any time before the final “Final Assessment” button is hit.

    The continua (?) have 10 gradations beneath them (not shown to the user) so that when, and only when, all strands have been summatively assessed and the “Final” (or “Submit”, or whatever) button is hit, a grade from A to E is automatically revealed. This is now a fixed grade for this learning event, the Final/Submit button being a once-only operation.

    I think that the sliders between polarized statements avoid the creation of another awful and exhaustive matrix which attempts to define gradations in words, and that the assessment process is transparent and wieldy. I think it hits a lot of important features of learning also.
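    A minimal sketch of that slider-to-grade mechanism follows. The ten strands, the hidden ten-point gradation and the once-only Final/Submit step come from the proposal above; the equal weighting of strands and the A–E grade boundaries are assumptions added purely for illustration, not part of the proposal.

        # A rough sketch of the slider mechanism described above: ten strands, each slid
        # to a hidden position from 1 (pole 'b') to 10 (pole 'a'), combined into a single
        # A-E grade when "Final"/"Submit" is pressed. The equal weighting and the grade
        # boundaries below are illustrative assumptions.
        class AssessmentPage:
            STRANDS = 10
            GRADE_BANDS = [(0.8, "A"), (0.6, "B"), (0.4, "C"), (0.2, "D"), (0.0, "E")]

            def __init__(self):
                self.positions = [None] * self.STRANDS   # hidden 1-10 gradation per strand
                self.final_grade = None                  # fixed once Final/Submit is pressed

            def slide(self, strand, position):
                """Move one slider; allowed any number of times before Final/Submit."""
                if self.final_grade is not None:
                    raise RuntimeError("Final assessment already submitted; grade is fixed.")
                if not 1 <= position <= 10:
                    raise ValueError("Position must be between 1 and 10.")
                self.positions[strand - 1] = position

            def submit_final(self):
                """Once-only operation: reveal the grade only when every strand is assessed."""
                if self.final_grade is not None:
                    raise RuntimeError("Final assessment already submitted.")
                if any(p is None for p in self.positions):
                    raise RuntimeError("All ten strands must be assessed before submitting.")
                score = sum(self.positions) / (10 * self.STRANDS)   # between 0.1 and 1.0
                self.final_grade = next(g for cutoff, g in self.GRADE_BANDS if score >= cutoff)
                return self.final_grade

        page = AssessmentPage()
        for strand in range(1, 11):
            page.slide(strand, 7)        # assessor drags each slider to gradation 7
        print(page.submit_final())       # "B" with these illustrative boundaries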

    Anyway, here are the statements.

    1

    a. Choice of place or object reveals intense personal significance and/or intellectual curiosity.

    b. Choice of place or object reveals little, if any, personal significance and/or intellectual curiosity.

    2

    a. Observation is multi-faceted, creative and dynamic with differing perspectives captured in close detail.

    b. Observation is cursory with only one or two perspectives exploited and little attention to detail.

    3

    a. Multiple levels of questioning exploited, including those to help define, describe, explain, analyze, construct and discriminate.

    b. Only one or two levels of questioning exploited.

    4

    a. A wide range of compelling and illuminating narratives are gathered from primary and secondary sources.

    b. Only one or two loosely related narratives are gathered from either primary or secondary sources.

    5

    a. Mapping is complex yet lucid with many lines of thought pushed to detailed levels and both supporting and verifying evidence included and referenced.

    b. Mapping is crude and under-developed with a few short paths and little supporting and refuting evidence incorporated.

    6

    a. Several connections, both academic and non-academic, are made and explained clearly and thoroughly, using specialized language where appropriate, demonstrating a deep understanding of concepts.

    b. Only one or two connections with existing knowledge made, explained without the use of specialized language and with little depth of understanding demonstrated.

    7

    a. A status of real-world significance for others is achieved by the learning, for both those close to the learner and others remote within the global learning community so that funds of knowledge and understanding are significantly enhanced.

    b. The learning achieved has little significance for others and does not add to funds of knowledge and understanding.

    8

    a. Self-review is conducted frequently and using specialized language, with both success and failure reported honestly and thoroughly, with significant thought and discussion invested in understanding the dynamics of process and adapting learning practice to maximize the benefits of effective strategy and avoid the repetition of ineffective behaviour.

    b. Cursory reporting of success and failure in non-specialized language, with a minimal investment of thought and conversation in self-review.

    9

    a. The social dynamic of learning is fully exploited with authentic engagement with a range of others through coaching, challenging, teaching and supporting.

    b. Little exploitation of the benefits of learning with others.

    10

    a. Significant investment of time and energy in the learning both inside and outside time officially allocated, with sustained journeying away from the comfort zone and into the learning zone.

    b. Minimal effort invested and little discomfort encountered.

    Comments please, to dismiss this approach, knock it into a better shape or suggest alternatives, even if only once some of the other grains of sand get to log on/return from holiday.

    Cheers

    Mark

  3. markmoorhouse says:

    Oh, and if the EBL was not the full 8 Steps but began with a given problem or challenge, then a truncated Assessment page (and indeed a truncated EnquiryBlogger) could be selected, with Steps 1 and 2, and their associated assessment strands, omitted. It might be nice if EB gave teachers this initial choice (Full Personal Enquiry or Given Problem) at the first stage of creating an account for a project. It would, perhaps, encourage the use of both methods of EBL.
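    A minimal sketch of that initial choice, assuming each project simply records a mode when its account is created; the two mode names come from this comment, while the function and field names are illustrative, not EnquiryBlogger’s actual API.

        # A rough sketch of the Full Personal Enquiry / Given Problem choice above.
        # The mode names come from the comment; everything else is illustrative.
        FULL_PERSONAL_ENQUIRY = "Full Personal Enquiry"
        GIVEN_PROBLEM = "Given Problem"

        def project_config(mode):
            """Return which of the 8 steps and 10 assessment strands a project shows."""
            if mode == GIVEN_PROBLEM:
                steps = list(range(3, 9))      # steps 1 and 2 omitted: the problem is given
                strands = list(range(3, 11))   # their matching assessment strands omitted too
            else:
                steps = list(range(1, 9))
                strands = list(range(1, 11))
            return {"mode": mode, "steps": steps, "assessment_strands": strands}

        print(project_config(GIVEN_PROBLEM))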

    Keep up the fight.

    Mark
