
5 Steps to Data-Driven Decision-Making

December 13, 2016

By Joel Hames, Vice President, Product

A few months into the school year, the first day of school feels like ancient history. Students are navigating the campus with confidence, teachers are knee-deep in their curriculum, and administrators are busy with the day-to-day challenges of keeping hundreds (or thousands) of students safe and ready to learn. While many focus on the operational logistics required to keep our schools running, effective leaders are digging in on what happens in the classroom. And when you do, you can quickly become overwhelmed by the amount of data available. From attendance numbers to assessment scores, class sizes to intervention plan milestones – every piece of information about students becomes another data point to be collected, organized, and analyzed.

With all of this data at your disposal, how do you begin to make sense of it? How do you decide what to focus on, and what to ignore? We’ve seen efforts to place data ‘front and center’ in the past, with data systems that enable a teacher or administrator to mine through a treasure trove of available information. Is this the best way of using data effectively? In my experience, absolutely not. Those who are focused on students don’t want, or need, to spend hours parsing large data sets to find hidden gems. It isn’t a function of desire to improve; it’s one of logistics. In an environment where efficiency matters – where you have to decide daily how to help individual students improve – you can’t spend time on ineffective and inefficient processes. Those become ‘busy work’ and they detract from our best efforts.

To leverage data in a meaningful way, to truly become a data expert, you must step back and reassess what is collected. Move from data-driven decision-making to decision-driven data collection. Recognize that not all data is created equal, and that spending time with deliberately collected and organized data is the foundation of a healthy ecosystem of decision-making. Here are a few steps to get started.

Define your goals

Before we turn our attention to the myriad information available, we must first pay attention to our expected outcomes. Too often we dive into the data with no real signpost. We do not know which direction to go, and that results in drawing meaningless (or even wrong) conclusions from what we see. Data tells a story. Be sure that you know the outline of the story before getting into the weeds to understand the details. Ask the following questions:

  • What do we want to understand about our students?
  • How have we assessed this in the past? What did that teach us about today?
  • Do stakeholders agree on the outcomes we expect?

Think about your district or school intervention program. If you want to know whether it is effective, start with alignment on the goals of the program. Can you state them clearly, and do others in your district agree that the goals are relevant and meaningful? If you can’t do this, then looking at all the data in the world isn’t going to help advance your programs and student achievement. Nail this, and you’re off to a great start.

Ask answerable questions

Your goals should lead directly to questions that can be answered. What you don’t want are questions like, “Is our program effective?” That’s not going to help you. You need something deeper and more to the point. Try, “Has our after-school reading program increased student comprehension scores on the first quarter district benchmark assessment?” Or, “How many students enrolled in an online remediation course earned a 3 or 4 as their end-of-course grade?” Figure out how to turn your high-level goals into something measurable, with constraints that allow you to share with your colleagues the exact thing you are trying to assess.
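
To make “measurable” concrete, here is a minimal sketch of the computation the reading-program question above eventually maps to once the data exists. The file name, column names, and the idea of a prior-year score column are hypothetical, used only for illustration:

```python
# Hypothetical sketch: "Has our after-school reading program increased
# comprehension scores on the Q1 district benchmark?" All file and
# column names below are illustrative assumptions.
import pandas as pd

scores = pd.read_csv("q1_benchmark.csv")  # one row per student

enrolled = scores[scores["in_reading_program"]]    # program students
comparison = scores[~scores["in_reading_program"]]  # everyone else

# Mean growth from last year's Q1 benchmark to this year's, per group.
enrolled_growth = (enrolled["q1_2016"] - enrolled["q1_2015"]).mean()
comparison_growth = (comparison["q1_2016"] - comparison["q1_2015"]).mean()

print(f"Reading program group mean growth: {enrolled_growth:+.1f} points")
print(f"Comparison group mean growth:      {comparison_growth:+.1f} points")
```

The point is not this particular snippet; it is that a well-posed question reduces to a specific, shareable computation rather than an open-ended fishing trip.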

With this step, there’s a major caution: do not go and get the data yet. This step, defining your questions, comes before you collect the data. It’s easy to be swayed by data you already have, to look at it and decide afterwards what matters. Sometimes this is as simple as saying, “well, we have a districtwide assessment that produces reading comprehension data three times per year, so we should look at our reading comprehension trends using that information.” This approach severely limits your ability to collect data that gets at the fundamental issues you are facing. In the case above, perhaps we need more frequent data about reading comprehension. Or maybe a better assessment exists, and we are not using it because of difficulty collecting the data. Or perhaps there are more specific skills worth assessing that the existing assessment doesn’t cover. Whatever the case, if you let an existing data collection device, or data that already exists, drive your questions, you’ve artificially limited your ability to ask and answer questions that truly matter to your students.

In other words, seek alignment in your goals and uncover questions that drive at those goals before you look at the data you have or can collect. This is the start of a healthy process.

Go get the data

Now that you know your goals and have questions to answer, it’s time to collect information that informs progress. This is often the most challenging step. It requires thoughtful analysis of current resources and processes. In many cases, it requires work to establish new collection tools and agreements around how those tools will be used. Ultimately, your goal is to collect data that answers your questions and drives at your goals. It isn’t to compromise on the quality of data for the sake of convenience. This is where resistance is often found.

As part of a holistic approach to curriculum planning, assessment, and intervention, the tools used should have internal validity. In this context, a ‘tool’ isn’t a specific piece of paper or online application; it is simply an artifact that produces usable information. To have internal validity, the tool must be applied consistently across the landscape it affects. Teachers, assessment administrators, and any others who are tasked with bringing this process to life must share an understanding of why we are assessing, what data matters, and how it will be used. To make decisions based on what these tools return, we need confidence that they have been applied consistently.

Finally, this all has to be part of a culture of informed decision-making. People should care that we collect this information, and expect something more than ‘gut feeling’ to inform how we improve outcomes. Countless trainings, workshops, and coaching opportunities are available to help a district move staff towards a rational assessment practice. Take advantage of these. If funds are limited, focus on small wins, using conversations with teachers to drive understanding of how a specific process, assessment, and piece of data can make their work with students a bit more effective. It may require some adaptation on their part. Recognize that this can be difficult, and that people are much more likely to change if and when they see a concrete outcome. In other words, attend to the concerns and the stress that can accompany the practical changes this step entails.

Organize and analyze

Now that you have data, informed by your goals, framed by measurable questions, and properly collected, you need some way of organizing information. The method of organization should be established before you embark on the collection, and many resources exist to assist you. The short version is this: look for a system that supports the processes above. Find something that allows you to easily collect data from outside sources, whether electronic or not, and organize it according to the goals you’ve set out.

This integration of data is critical. Too often we compromise our efforts by creating silos of information. These can be organizational silos: the special education department collects data that the testing and assessment office cannot access. Or technical silos: an online assessment program is used that has no connection to our broader system for organizing and analyzing our information. When we have barriers like these, our staff drain their stamina either moving information between systems or switching back and forth to make sense of what we’ve collected. It may be obvious: this drain has a significantly negative impact on our ability to answer the questions we established earlier.

No matter the system you use, ensure that you have a high degree of interoperability. Look for solutions that simplify data exchange, allowing your technical staff to easily connect disparate systems, and your instructional staff to use one source of ‘truth’ for data. All the trendy things we like to discuss, like dashboards, scorecards, and analytics, only become real once we have our data flowing freely from source to destination. Solve this, and your efforts to organize and analyze will be much more successful.
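
As a minimal sketch of what breaking a silo can look like in practice, the snippet below joins a hypothetical special education export to a hypothetical assessment office export on a shared student ID. The file and column names are assumptions for illustration, not real PowerSchool interfaces:

```python
# Hypothetical sketch: merging two 'silos' on a shared student ID so
# analysis happens against one source of truth.
import pandas as pd

interventions = pd.read_csv("sped_interventions.csv")  # special ed export
benchmarks = pd.read_csv("district_benchmarks.csv")    # assessment export

# An outer join keeps students who appear in only one system; those
# rows surface the gaps between the two silos.
merged = interventions.merge(
    benchmarks, on="student_id", how="outer", indicator=True
)

gaps = merged[merged["_merge"] != "both"]
print(f"{len(gaps)} students appear in only one of the two systems")
```

Rows that show up in only one export are exactly the kind of gap the paragraph above warns about, and finding them is often the first payoff of integration.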

After your data is organized, you can begin the powerful work of understanding what the data is telling you. If you look around, you’ll see graduate courses on how to read data; it is a lengthy topic that deserves an in-depth look all by itself. For our purposes, we’re going to keep it simple: be open. How can you be open? Start with these basic principles, illustrated in the short sketch after the list:

  • Be Meta: look at the data from a distance. Work with others to ask questions like: are we missing information? Is there outlier data that should be examined? Is the data organized in a way that makes sense? What agreements about the quality of the data can be reached before diving into the results?
  • Be Careful: Don’t jump to conclusions. Often, the first thing we find in a data set is misleading, in part because we bring biases to the data. We see a trend where there may be no trend, or where the trend is relatively insignificant. These initial conclusions can shape our deeper analysis. Survey the information provided and state your biases. Before drawing conclusions, challenge your own assumptions and find ways to prove or disprove what you think you see.
  • Be Deliberate: Move through the data with purpose, and avoid jumping around. A high-quality data set can be exciting because of its ability to highlight answers that matter to us. It is tempting to say ‘this is what the data says’ before we understand whether there are distractors, or data that is meaningless. Do we have too few results to be relevant? Do we need to analyze the data more deeply to understand its connection to things like socio-economic factors? Have we looked at data from all the assessments we intend to use to inform our decisions? It’s entirely possible to see bits of data, draw conclusions, and move on to new questions. Slow down and be deliberate. Document what you believe at each step, and continue moving through the information with intention.
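
Here is a minimal sketch of what these three checks might look like against a hypothetical assessment extract; the file and column names are assumptions for illustration:

```python
# Hypothetical pre-analysis sanity checks; names are illustrative only.
import pandas as pd

df = pd.read_csv("benchmark_results.csv")

# Be Meta: are we missing information?
print(df.isna().sum())  # count of missing values in each column

# Be Careful: flag outliers before they shape the story.
z = (df["score"] - df["score"].mean()) / df["score"].std()
print(df[z.abs() > 3])  # scores more than 3 standard deviations out

# Be Deliberate: do we have enough results in each group to matter?
print(df.groupby("school")["score"].agg(["count", "mean"]))
```

None of this is sophisticated statistics; it is simply the habit of asking what the data set contains before asking what it means.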

The bottom line is this: it is easy to take a “laissez-faire” attitude with the information we collect. For those of us who aren’t data scientists, we can be led astray by making assumptions about what the data is, or isn’t. I’ve often seen people treat a trend line as gospel – it is what it is – without taking a moment to reflect on how the data has been organized or compiled. This doesn’t have to be a complex or lengthy task. It does require careful questioning and an approach that looks at the data holistically before jumping to conclusions. In fact, it requires avoiding any jumping to conclusions, instead favoring a deliberate, careful walk to conclusions. When we do this, we uncover much more impactful information.

Do something meaningful

This may be the most obvious step. It may also be the one that is most often ignored. We’ve gone through this whole process of ensuring that we have quality data aligned with our goals, and now we need to act. Is this familiar? You get together with a group of people, talk about what the data has uncovered, and then you each go back to your respective offices and pursue activities that have no connection to what the data told you. If this has happened to you, you are not alone. This is an artifact of constraints. We are constrained in time and resources, and in these circumstances we often fall back on what is familiar. We know what good instruction looks like, so we trust that by identifying that an issue exists, or that we haven’t achieved our goals, the system will attend to itself. Unfortunately, it almost never does.

Acting with purpose requires that we can draw a line from our goals, through our measures and results, to the actions we take to improve student outcomes. What do you do if the intervention program you’ve implemented has not increased achievement as much as expected? Where is this information highlighted, how is it discussed, and how are decisions made to adjust? Most often, these decisions are based on ‘gut feelings’, or a demonstration of a new technology or process that feels right. I’ve sat in countless meetings in schools where a decision to pursue a solution was made based on a guess, with no plan to assess whether that guess has value, or to measure its impact early in adoption. This erodes our confidence in our solutions. It separates action from meaning, since any meaning that exists is transitory, based on opinion.

What we need is to tie decision-making to outcomes, and outcomes back to decision-making. Shorten the feedback loop between our understanding of the data and the expected outcomes of programs and processes. Ask questions like: is this textbook adoption based on something we know about our students, or based on what we think about our students? And if the answer is elusive, identify the information that should be available, and go get it. Use this to drive decision-making, and act with meaning. Once we do this, we establish for all staff that what we pursue is done with meaning and conviction. This understanding helps drive commitment, and avoids the common “oh, this is just another idea-of-the-day” attitude that persists on many campuses.

Decision-driven data collection is more than a reaction to the push for data-driven decision-making. It is a fundamental re-thinking, or in some cases a fundamental reminder, that we must approach information about our students with care. Our efforts to organize our thoughts, align towards meaningful goals, and collect and analyze information consistently and with care are critical to fostering student growth. For years, our teachers have been told that they should collect more information, log in to complex data systems and parse that information, and then go back and do something about it. This has been a failure, often met with resistance or apathy. Rather than increase the work on teachers, let’s make existing work more relevant and impactful.

In the end, efforts to define goals, ask measurable questions, collect, organize, and analyze data, and follow up with specific actions based on the answers found will go much further than efforts to combine all available data into a singular system and unleash educators on that relatively unstructured and meaningless information. Just because data exists doesn’t mean it has meaning for your teachers and students. Be thoughtful about where you start, and you’ll be surprised at the powerful destination you reach.

Read Radnor Township School District’s story on the steps they took to become a data-driven district.
