Partnership for Assessment of Readiness for College and Careers (PARCC)


Overview

Two national efforts are under way to create uniform tests to assess student performance in all states against the new Common Core State Standards, which were developed by a partnership of the National Governors Association and the Council of Chief State School Officers.

One of the two groups competing to create the main Common Core State Assessments is called the “Partnership for Assessment of Readiness for College and Careers” (PARCC). States in this consortium include:

Governing States: Arizona, District of Columbia, Florida, Illinois, Indiana, Louisiana, Maryland, Massachusetts, New York, Rhode Island, Tennessee

Participating (Not Governing) States: Alabama, Arkansas, California, Colorado, Delaware, Georgia, Kentucky, Mississippi, New Hampshire, New Jersey, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina

PARCC member Kris Ellington made an important presentation on her group's effort to a Brookings Institution conference on October 28, 2010. Comments below are based on papers, PowerPoint presentations, and the audio recording from that conference. All are available online.

In her PowerPoint presentation, Ms. Ellington explains the main PARCC goal:

“States in the Partnership are committed to building their collective capacity to increase the rates at which students graduate from high school prepared for success in college and the workplace.”

That is a good goal, and it is well aligned with the requirements of Kentucky's Senate Bill 1 from the 2009 Regular Legislative Session.

However, once Kentucky's two decades of history with reform assessments is considered, PARCC's proposals to implement this basic goal appear to have some serious problems. In fact, the proposals imply that the PARCC consortium has at best limited knowledge of Kentucky's assessment experience since 1992. As a consequence, the PARCC effort could repeat some serious mistakes that took the Bluegrass State almost two decades to figure out.

In the following discussion, key points from the PARCC presentation to Brookings are listed, each followed by commentary based on Kentucky's rich history with Progressive testing programs.


Key Features of the PARCC Proposal

• PARCC Comment: “Higher Quality Tests: PARCC assessments will include sophisticated items and performance tasks to measure critical thinking, strategic problem solving, research and writing.”

Unfortunately, this sounds EXACTLY like the kinds of promises Kentuckians heard when the Kentucky Instructional Results Information System (KIRIS) assessment started way back in 1992. Kentuckians heard somewhat similar promises again when the state’s Commonwealth Accountability Testing System (CATS) assessment replaced the discredited KIRIS assessment in 1999. CATS only lasted until 2009, when it was also thrown out by the Kentucky legislature.

The fact that both those reform-oriented assessments ultimately proved unsuccessful creates considerable cause for concern about the PARCC plan.

Performance Items as Assessment Elements – Issues

Kentucky has a lot of experience with performance items in assessments (Performance Events, Math and Writing Portfolios, and lots of Open-Response Questions). The history isn’t very good.

For example, the “Performance Events” that were used at the start of KIRIS crashed by 1996. They never provided stable information. Worse, in 1996 a poor choice for the Performance Event task resulted in totally unusable results for every middle school in Kentucky.

Other performance items that were dropped for poor performance in Kentucky include “Math Portfolios,” which were launched in 1993 and crashed in 1996.

The state also used "Writing Portfolios" in the accountability program for an extended period. These were finally removed from the assessment and accountability program in 2009. Math Portfolios interfered with real math instruction, and Writing Portfolios, while a great instructional tool for teachers, proved hopelessly unworkable as an assessment item. They actually wound up interfering with writing instruction, as pointed out in this YouTube video.

Finally, open-response questions have been a continual challenge in our testing program. They raise issues of validity and reliability, and they are expensive and time-consuming to create, administer, and score. Furthermore, because of the time needed to administer them, open-response written questions limit the amount of the curriculum that can be tested with each individual student. You simply cannot use many of these types of questions in assessments without making undesirable tradeoffs: either testing time gets grossly excessive, or you limit the amount of content tested for each student.
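A rough sketch in Python makes that tradeoff concrete. The per-item times below are hypothetical, chosen purely for illustration, and are not taken from any PARCC or Kentucky document:

# Hypothetical timings, for illustration only: assume a multiple-choice
# item takes about 1 minute of testing time and an open-response item
# takes about 10 minutes.
SESSION_MINUTES = 60

def mc_items_remaining(open_response_items, mc_minutes=1, or_minutes=10):
    """How many multiple-choice items still fit in a fixed session
    after the given number of open-response items are administered."""
    remaining = SESSION_MINUTES - open_response_items * or_minutes
    return max(remaining // mc_minutes, 0)

for or_items in (0, 2, 4, 6):
    print(or_items, "open-response items leave room for",
          mc_items_remaining(or_items), "multiple-choice items")

Under these assumed numbers, every open-response item displaces roughly ten multiple-choice items' worth of content coverage in a fixed one-hour session, which is exactly the coverage-versus-time squeeze Kentucky ran into.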

In Kentucky, to date, the extensive use of open-response, performance-oriented questions has resulted in incomplete testing of individual students. Neither the KIRIS nor the CATS assessment ever provided valid and reliable data for individual students. Senate Bill 1 will not allow that deficiency to continue; so, if Kentucky is to use PARCC's assessments, those products must solve a very challenging "test engineering" problem, balancing adequate content coverage against heavy use of open-response questions and acceptable testing times, in a way Kentucky never managed.

Based on the presentation made to the Brookings Institution, it looks like PARCC is a very long way away from making that happen.

PARCC Could Turn Formative Assessments on Their Ear

• PARCC Comment: “Through-Course Testing: Students will take parts of the assessment at key times during the school year, closer to when they learn the material.”

At first, the PARCC presentation made it seem that the program would provide "Formative Assessments" for classroom teachers to use in tracking student progress throughout the school year.

However, Slide 9 in Ms. Ellington’s presentation says that this ‘during-the-year’ testing will also be used for “Summative” accountability at the end of the school term.

That turns the basic, non-threatening idea of formative assessments on its ear. All of a sudden, these tests are now high stakes.

It appears the real reason PARCC plans to test at several points during the school year is to allow more use of open-response writing questions, a type of performance item that Ms. Ellington admits will mostly have to be "hand scored."

That probably won’t work out.

Kentucky was never able to get hand-scored assessment results back to teachers in a timely manner. In fact, the state regularly had to apply for waivers from the US Department of Education because mathematics and reading results from the Kentucky Core Content Tests, used with CATS and for NCLB, were often delayed until well after the start of the following school term, a violation of NCLB requirements. Results often took four months or more to arrive back at schools.

If results from PARCC's open-response-heavy "Through-Course Testing" take this long to return to the school building, they will offer nothing like the near-real-time feedback that true formative assessments are intended to produce. Also, keep in mind that few states presently use as many open-response questions as Kentucky. If PARCC's assessments are to be used in all states, the load on the country's test-scoring providers will be huge. Even longer delays seem inevitable.

Costs will also rise sharply for states that currently rely on few open-response questions, as these are expensive as well as time-consuming test items.

Gambling on Technology that Does Not Currently Exist

• PARCC Comment: “Maximize Technology: PARCC assessments in most grades will be computer based.”

There is a very long road from this exciting concept to a working, practical program. Testing expert Greg Cizek, who also spoke at the Brookings conference, says we don't have the technology today. Nevertheless, Ms. Ellington's proposal promises it will be online and operating in 2014.


• PARCC Comment: “Cross-State Comparability: States in PARCC will adopt common assessments and common performance standards.”

If we can really make that work, it will be very valuable.

However, comparability across states takes more than just using the same tests. The National Assessment of Educational Progress (NAEP) has already shown us that, as pointed out in this other Wiki item. States today have very different student demographics, and it isn’t possible to develop an accurate understanding of real performance across states through simplistic comparison of overall average scores. I have written extensively about how this problem impacts interpretation of NAEP, such as here.


For this really to work, PARCC will have to provide considerable disaggregated data for each state by race, poverty rate, and learning-disability status, along with some carefully developed guidance on how to FAIRLY use the data to draw valid conclusions, an issue the NAEP has been wrestling with for years. The simple sketch below shows why disaggregation matters.
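The following Python sketch uses invented numbers, purely for illustration, to show how one state can outscore another within every student group and still trail on the naive overall average:

# Invented numbers, purely for illustration. Each group maps to
# (average scale score, share of the state's student population).
state_a = {"group_1": (220, 0.8), "group_2": (190, 0.2)}
state_b = {"group_1": (225, 0.4), "group_2": (195, 0.6)}

def overall_average(state):
    # Population-share-weighted average of the group scores.
    return sum(score * share for score, share in state.values())

print("State A overall:", overall_average(state_a))  # 214.0
print("State B overall:", overall_average(state_b))  # 207.0

State B scores higher in both groups (225 vs. 220 and 195 vs. 190), yet its overall average is lower simply because its demographic mix differs. Only disaggregated results, reported group by group, reveal which state is really performing better.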


• PARCC Comment: Report achievement results based on a clear definition of college and career readiness, so students will know if they are on track early enough to make adjustments.

Testing expert Cizek pointed out at the conference that there is no agreement at present on what college readiness actually looks like or how it can be measured. This requires the test not only to report on achievement but also to have predictive qualities. Cizek says that is going to be hard to do, although Kentucky already has tests from ACT, Inc. that do this job. Though he didn't say so, the issue for Cizek may be that in a number of states the SAT, not the ACT, is used for college admissions. Those states might not want to sign on to an ACT-like model. Thus, while Kentucky already has a model test for college readiness, not all states may agree with it, and PARCC has a way to go to achieve this goal.


• PARCC Comment: Compare results against a common high standard because readiness shouldn’t differ across states or income levels.

This is a good goal, provided the Common Core State Standards work out in actual practice to be as good as we hope they will be.


• PARCC Comment: Help make accountability policies better drivers of improvement by basing them on more sophisticated and meaningful assessments.

Again, this is a great goal, but we’ve tried to make this very concept work in Kentucky for nearly two decades. So far, we have not succeeded.


• PARCC Comment: Promote good instruction by providing teachers useful, meaningful and timely information, which will help them adjust instruction, individualize interventions, and fine-tune lessons throughout the school year.

For sure, KIRIS and CATS always failed in this area. The results almost always arrived back at schools well after the next school term was already under way, too late to inform changes to curriculum before teachers were back in the "classroom trenches" and too busy to do the job well.

Also, this goal makes the PARCC tests diagnostic as well as predictive of college work and measures of achievement. That is an awful lot to accomplish with one test.

In addition, comments made by Ms. Ellington indicate that while the end-of-course exam will be largely machine scored, with some innovative artificial-intelligence scoring of open-response questions, the during-the-year tests will incorporate more open-response questions that will require a lot of hand scoring. If so, there are very serious questions about whether PARCC understands that results from formative-like assessments must also be returned in a timely manner if students and teachers are to get much benefit from them.

Concerns about the PARCC effort

The Bluegrass Institute is concerned that the PARCC effort seems to be operating in ignorance of Kentucky's two-decade history with performance-type testing. PARCC has not specified how it plans to overcome the issues Kentucky encountered in trying to make a remarkably similar assessment program function well.

The Prichard Committee for Academic Excellence is also raising concerns about the apparent misunderstanding of the concepts of formative assessment found in the PARCC proposals. The committee posted a series of blog posts with those concerns here and here.

Those Prichard concerns draw in considerable measure on a paper by Margaret Heritage, created for the Council of Chief State School Officers. Heritage's paper strongly criticizes the consortia's plans for the Common Core State Assessments, saying the proposals are uninformed about the key concepts of formative assessment and may actually interfere with schools' success with formative approaches.

Heritage writes:

“At a time of unprecedented opportunity, it is regrettable that roles of the teacher and the student in enabling learning are not at the center of current thinking about formative assessment within the proposed next-generation assessment systems. This may well result in a lost opportunity to firmly situate formative assessment in the practices of U.S. teachers.”

She also takes issue with the concept of interim, during-the-year assessments, saying there is no empirical evidence that interim assessments improve student learning.

It won’t be cheap!

The Common Core State Assessment effort isn’t cheap. PARCC has already received grants of $185.9 million for work it will do over the next four years.


Additional information

You can hear Ms. Ellington's spoken comments to the Brookings conference in an audio presentation accessible from a link here. Her comments begin at 49 minutes and 46 seconds into the recording.

Late Breaking Update

On July 12, 2011, Education Week reported that the PARCC testing proposal was being scaled back dramatically due to states' concerns about costs. There were also concerns that the original model, with five testing sessions per school term, would intrude too much into the shaping of the curriculum. The changes may overcome some objections that PARCC's original design, which blended summative and formative assessments, violated the concept of formative assessment as a non-threatening way for educators to get a better idea of student progress.

Read more about these late-breaking changes in the Education Week article, "State Consortium Scales Back Common-Assessment Design".
