
Why School Data Rarely Changes Instruction

  • Scott Murphey
  • 4 days ago
  • 3 min read

Rethinking specificity, alignment, and decision architecture in school improvement


Schools collect enormous amounts of data.

Benchmark assessments. Unit tests. Progress monitoring. Standards trackers. District dashboards.

Yet instructional change remains inconsistent.

The issue is not a lack of effort. In many schools, the problem is the opposite. We collect more data than ever before. The deeper challenge is structural: we often collect the wrong data, in inconsistent formats, without the systems or authority to act on what it reveals.


We Don’t Always Collect Instructionally Precise Data

Standards are frequently treated as broad performance labels rather than precise skill targets.

When standards are not unpacked into specific knowledge, concepts, and cognitive demands, assessments tend to measure something general instead of something actionable.

A student may be marked as “below standard,” but that designation rarely identifies the precise breakdown. Was the difficulty conceptual? Procedural? Linguistic? Related to problem structure? Depth of knowledge?

Without clarity at that level, instructional adjustment becomes guesswork.

If data does not isolate the specific misunderstanding, it cannot guide precise intervention.


Our Tools Are Not Designed for System Coherence

Even when meaningful data is collected, the tools we use often undermine clarity.

Many schools rely on broad proficiency categories such as “Beginning,” “Developing,” “Meeting,” and “Advanced.” While these labels appear structured, they lack precise, shared definitions tied to specific knowledge and skill components.

Without clear performance descriptors, two teachers may assign the same label for very different reasons.

Subjectivity increases. Communication decreases.

If a fourth-grade team reports that 60% of students are “Meeting,” what does that actually mean?

Meeting what, specifically? At what depth? With what evidence?

More importantly, in what context?

Without specifying task conditions, application context, and depth of thinking, proficiency labels become abstractions. Two classrooms may report identical percentages while measuring fundamentally different things.

If data does not capture these variables, horizontal alignment becomes difficult and of limited use, and vertical alignment becomes nearly impossible.

Specificity is not a technical preference. It is the foundation of coherence.


We Review Data Without Building a Discipline of Follow-Through

Even when useful data exists, another breakdown occurs.

We review it.

We discuss it.

We identify trends.

And then we move on.

Too often, data meetings conclude without:

  • A clearly defined instructional adjustment

  • A specific implementation window

  • A scheduled follow-up review

  • Agreement on what evidence will indicate improvement

Improvement requires more than conversation. It requires a disciplined cycle.

Effective instructional systems operate iteratively:

  1. Identify a specific breakdown.

  2. Define a targeted instructional adjustment.

  3. Implement with consistency.

  4. Reassess using aligned measures.

  5. Refine and repeat.

This cycle must be narrow and sustained. When schools attempt to address too many issues simultaneously, focus dissipates. Attention shifts to the next concern before the previous one is resolved.

Without structured follow-through, data remains episodic rather than cumulative.


The Pacing Constraint

A final and often unspoken barrier is pacing.

Even when data signals a need for reteaching or restructuring, pacing guides create pressure to move forward. Teachers are expected to “use the data,” but they are also expected to maintain instructional velocity.

When coverage competes with mastery, coverage often wins.

The result is predictable:

  1. Broad data is collected.

  2. Trends are identified.

  3. Instruction continues largely unchanged.

  4. The next assessment cycle begins.

The issue is not teacher commitment. It is system design.


What Would Make Data Matter?

If schools want data to meaningfully shape instruction, several structural shifts are necessary:

  • Unpack standards into precise knowledge and skill components.

  • Design assessments aligned to intended cognitive demand.

  • Backward-design instruction from those assessments.

  • Standardize data tools for horizontal and vertical coherence.

  • Provide the necessary time for follow-up cycles.

  • Prioritize mastery over pacing.


Collecting data is easy.


Designing instructional systems that consistently respond to it is harder.

Until schools move from reporting data to building decision architecture, data will continue to inform conversations more than it transforms classrooms.
