With the move toward Proficiency Based Education, the Agency of Education, in collaboration with educators around the state, has been developing a number of support materials. Perhaps the most impactful work they are doing is in the area of assessment. The folks at the agency are particularly focused on developing assessments for the transferable skills, and they intend to begin piloting those assessments next year. I am still trying to wrap my head around it all, but here is what I think I understand.
To put this all in perspective, I want to give you some sense of the scope of assessments. We have experienced the SBAC now, so we have a clearer idea of what large-scale assessment entails. The work the AOE is focused on is in the area of "moderated tasks," while the work we have been engaged in during inservice has been around "benchmark tasks."
First I will start with the state's intent and vision for these moderated tasks, and then I will get into how they were developed.
The Agency is currently drafting a set of "moderated tasks" to help educators learn how to create their own tasks and to help calibrate the scoring of Transferable Skill Indicators. For those of you who have been around for a while, this sounds very much like the writing and math network work.
The process sounds like it has evolved from the portfolio days. The State's intent is to train teachers on these Moderated Tasks (I am not sure yet what that training entails) and to calibrate scoring. They are building an online platform/database where teachers can submit student work from both Moderated and Teacher Developed tasks and have the work scored online by teachers from around the state.
I can sense some anxiety building here, so let me be clear: this is not a mandate to return to some kind of statewide portfolio system. This looks like it is going to be a support system for schools to help with the assessment of Transferable Skills.
The way the state leadership developed these Moderated Tasks is impressive and well worth learning from. They began with the Performance Indicators and decided on a traditional 4-point scale (my only quibble with this process -- see Single Point Rubrics). From there they used Bloom's Taxonomy (okay, I have an issue with this too -- see Marzano's Taxonomy) to write scoring criteria.
They have only completed two of the PBGRs for transferable skills and are in the process of completing the third, but the work they have done is exciting -- well, exciting if you are a curriculum geek. Unlike the scoring criteria of far too many rubrics, the language for each level is crisp, discrete, and tied tightly to Bloom's language. The work you see here is written for 11th and 12th grade. Over time this language will need to be refined and cascaded down to earlier grades; as a starting point, however, it is great.
Once the scoring criteria were complete, the development teams began creating "Task Models," or descriptions of what a quality, targeted assessment would need. They first asked, "What features does a Performance Task have to have to really assess transferable skills?" Then they selected one PBGR, "Clear and Effective Communication," and identified the Indicators they thought most important.
In the long run we (or someone) will also need to develop scoring criteria in the content areas, but that work has yet to begin. However, if you think about the impact of developing LDC modules, we will begin to collect content-specific scoring criteria along the way.
There is still plenty to unpack about this assessment design process, and perhaps the pilot assessments will make that work easier.