Data Driven and Rudderless

Since the release of A Nation at Risk in 1983, educational leaders have scrambled to rebuild schools, first to run more like businesses, then to be grounded more in science. The scientific revolution in schools came in the form of data-driven decision making: by creating, tracking, and analyzing data on student performance, we would know just what to do to, well, get better data. Over the years, we've bought into this scientific approach with standards-based curriculum and annual testing. NCLB raised the stakes significantly, and school report cards mailed home to parents to measure and label schools are now part of the landscape. Yet just how scientific is the data we are using to mold our children's education?

Academic Testing

Let's look at Oregon's Statewide Assessment system. Currently, all students in Oregon are tested in Reading, Mathematics, and Science in an online multiple-guess format, and in Writing using student-created work scored against a rubric. At grade 10, there are 11 Reading core standards, 14 Writing core standards, 8 Mathematics core standards, and 4 Science core standards. Each has subordinate content standards, and most are measured in Oregon's OAKS assessments (the exceptions are in Writing, where only some of the standards will be assessed). There are standards for most other content areas as well, but the state does not currently assess any of them or report to the public on school performance in those areas. This set of standards is as close to a state curriculum as Oregon has dared venture. The expectation is that all classrooms will instruct to the standards first, and to any other content or skills only as additional time permits.

The reading, writing, math, and science standards are a pretty good set, carefully crafted with broad input. Yet the elevation of these test results as a key measure of a school's success and worth has reached beyond what is taught in English, math, and science classrooms. All other disciplines now take a back seat to these new core subjects. Electives are disappearing so schools can focus, laser-like, on raising test scores in the core areas.

We have effectively elevated these 37 standards above all else that happens in high schools. They surpass electives, advanced courses for juniors and seniors (who don't have to test), project-based learning, in-depth examination and discovery learning, college entrance and survival rates, and everything else that makes up a high school experience. Four of the Reading standards focus on literary devices or text. Here is one of those literary core standards and all of its component content standards to be taught, learned, and measured:

Literary Text: Examine Content and Structure: Examine content and structure of
grade-level literary text.

EL.HS.LI.09 Identify various literary devices, including figurative language, imagery, allegory, and
symbolism; evaluate the significance of the devices; and explain their appeal.
EL.HS.LI.10 Interpret and evaluate the impact of subtleties, contradictions, and ironies in a text.
EL.HS.LI.11 Explain how voice and the choice of a narrator affect characterization and the tone, plot,
and credibility of a text.
EL.HS.LI.12 Analyze an author’s development of time and sequence, including the use of complex
literary devices, such as foreshadowing or flashbacks.
EL.HS.LI.13 Evaluate the impact of word choice and figurative language on tone, mood, and theme.
EL.HS.LI.14 Identify and describe the function of dialogue, soliloquies, asides, character foils, and
stage directions in dramatic literature.
EL.HS.LI.15 Analyze the impact the choice of literary form has on the author’s message or purpose.
EL.HS.LI.16 Analyze the way in which a work of literature is related to the themes and issues of
its historical period.
EL.HS.LI.17 Compare works that express a universal theme, and provide evidence to support
the ideas expressed in each work.
EL.HS.LI.18 Compare and contrast the presentation of a similar theme or topic across literary
forms to explain how the selection of form shapes the theme or topic.
EL.HS.LI.19 Analyze a work of literature, showing how it reflects the heritage, traditions,
attitudes, and beliefs of its author.

With no offense intended to English teachers, shouldn't we weigh this one standard against all that is taught, or could be taught, in the unmeasured fields of social studies, business, foreign language, health, physical education, or the wealth of electives that used to make up the comprehensive high school? If only 50% of our students could meet the above content standards, would our school be substandard, regardless of everything else students learned?

At one school in our area, outstanding scores have been achieved by doubling up coursework in math, English, or science for any student who doesn't meet the benchmark. In math, reading, or science labs, these students are taught, tested, and retested to bring their scores up. They practice the question formats used on Oregon's OAKS tests. Most do indeed meet the benchmark under this regimen, up to 90% in fact. We know that at least their short-term retention of the skills and content was successful.

This school has embraced data-driven decisions and their intervention is working. It is working to improve the data. No one is measuring the full curriculum and no one is measuring readiness for the next level. What is being measured is performance on the OAKS assessments and the interventions are giving better results. To get better results, we have focused the whole of education on what works to improve the data. 

Measuring Student Behavior

Similarly, Positive Behavior Supports (PBS) has taken hold in Oregon, and it has much to offer in making schools and classrooms student-friendly and allowing teachers to focus on teaching and learning rather than discipline. PBS is highly data-driven, tracking student referrals and analyzing the data to determine where and when discipline problems happen most often. The common experience of new PBS schools goes something like this: we had 1,500 referrals a year, and after implementing PBS, our referrals dropped to 400 or fewer, allowing more time for teachers to teach and for administrators to spend time in classrooms.

In fact, much about PBS makes good sense and is worth adopting. It's the data though that raises questions. There are many ways to look at student behavior in a school. Being most familiar with high school, I can think of several:
  • How many students are in the hallways or elsewhere during class time?
  • How is student attendance?
  • Do students feel safe and valued at school?
  • What are the impressions of visitors to the school?
  • How clean or littered are the student areas?
  • How much vandalism and violence occur at the school?
  • Are students on time to class? Do they arrive with their materials?
  • When students are given time in class to work, do they work?
  • Are rules enforced fairly and consistently? Do students feel they are fair and consistent?
  • Do teachers report that student behavior is an obstacle to instruction?
  • What is the atmosphere in the classrooms? the cafeteria? the hallways and common areas?
A few of these are easily measured; most are not. Most would require some subjective assessment in the form of a survey, random interviews, or simply the impressions of students and adults. As you look over the list, are some more important than others? In our data-driven world of education, we look instead for a single measure that we hope will reflect and encompass most of them. In the case of PBS, it is referrals. The ultimate goal of a PBS school is to reduce referrals. Consider the following possibilities (some decidedly not PBS-like!) for reducing referrals:
  • Discourage teachers from writing referrals
  • Increase enforcement and stiffen penalties to discourage misbehavior
  • Create a two-tiered system of referrals where only some referrals are counted
  • Define and teach the behavior expectations like you teach the academic ones
  • Reward good behavior 4 times more than you punish bad behavior
  • Simplify the behavior expectations and make sure everyone understands and can recite them
If your school were under pressure to reduce referrals, which would you institute? The last 3 come from the PBS playbook; the first 3 do not. I can assure you, though, that the first 3 are at least as prevalent in many PBS schools.

Now let's look at the analysis of referrals. Reviewing my school's referrals from the first term of the school year, I see about 50. Nineteen (the largest group) are for class disruptions. Therefore, we need to develop strategies to reduce class disruptions, correct? The second-largest group, though, is fights, with 7. Which is the more appropriate focus? Do we play by the numbers (as we are often instructed) and simply attack the highest-incidence category, or do we consider severity? And if we consider severity, how do we measure it? Is one fight worth two disruptions? Three? Four?

Further complicating this picture is the subjective act of categorizing referrals. We chose to count as fights a shove, an act of horseplay, and one incident that took place off campus. We could just as easily have ignored the off-campus incident, called the shove an assault, and called the horseplay a disruption.
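The arbitrariness described above is easy to see in a small sketch. The 19 disruptions and 7 fights come from the text; the remaining category names and counts, and the severity weights, are invented filler for illustration only. Change the weights or reshuffle the categories and the "data-driven" conclusion changes with them.

```python
from collections import Counter

# First-term referral tallies: 19 class disruptions and 7 "fights"
# are from the text; the other categories are invented filler to
# round out the roughly 50 total referrals.
referrals = Counter({"disruption": 19, "fight": 7, "tardy": 10,
                     "defiance": 8, "other": 6})

# Playing strictly by the numbers, the largest raw count wins.
top_raw = referrals.most_common(1)[0][0]        # "disruption"

# Factor in severity with hypothetical weights -- there is no
# agreed scale, which is exactly the problem.
weights = {"disruption": 1, "fight": 4, "tardy": 1,
           "defiance": 2, "other": 1}
weighted = {cat: n * weights[cat] for cat, n in referrals.items()}
top_weighted = max(weighted, key=weighted.get)  # "fight": 28 vs. 19

# Re-categorizing is just as subjective: call the shove an assault,
# call the horseplay a disruption, drop the off-campus incident,
# and "fights" quietly falls from 7 to 4.
referrals["fight"] -= 3
referrals["disruption"] += 1
referrals["assault"] += 1
```

Under the raw count the school should target disruptions; under these (arbitrary) weights it should target fights; after recategorizing, the fight problem has shrunk before anyone changed a single behavior.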

Ignoring Variables

Every good scientist knows that variables must be acknowledged and controlled for data to be worth its salt. In education, though, we neither acknowledge the variables that go into our data nor place any controls on them. We use data the way cage fighters use boxing rules. We are in the game to win. Or at least, that is the case for those who are winning.

Consider some of the variables for student testing performance:
  • Recent review of the relevant skills and content
  • Number of testing opportunities (there are 3, consuming about 9 weeks of school time each year)
  • School demographics -- parents' education and income, proximity to a college
  • Rewards or punishments for meeting or exceeding the benchmarks
  • Teacher interest in and valuation of the testing experience
  • Practice tests taken similar to the real ones
  • Curriculum targeting the standards being tested
  • Amount of time students spend on "readin', writin' and 'rithmetic" versus other subjects (a not insignificant factor at the middle and high school levels)
At the high school level, consider also that we are rated on the performance of our 10th graders, not our 12th graders. We begin testing 10th graders in October and retest until early March, so these are students we have taught for between 1.1 and 1.6 years. Compare that to 8th-grade assessments, which measure a middle school's 3-year performance, or 5th-grade assessments, which measure an elementary school's 6-year performance.

We educators, like most social scientists today, crave the credibility of a real science. We want to be taken seriously and to show that we are collecting data, acting on the data, and improving our data. But students are not chemical elements or molecules; they are far more complex. Education reduced to a few columns of data will be a travesty. We are drifting toward a future when the specific data targets we have chosen will drive everything we do, and all that is rich (and frankly, most effective) in education will be lost.

I do not argue against data, its collection, or its analysis. I only lament that we have adopted rigid data standards that, like our academic ones, are driving us to the lowest common denominator. In the 26 years since A Nation at Risk, we still have no legitimate measure of how we are doing or what we need to do differently.


Postscript: Today I learned that one district in our area is discussing reducing graduation requirements to raise the on-time graduation rate data. One hopes more than data manipulation is being considered.



I'm interested in your comments.