• Luke Pascual

How can institutions utilize data more effectively?

Every minute in 2019, 18.1 million texts were sent, 188 million emails were relayed, and Americans used over 4.4 petabytes (that’s 4.4 million gigabytes) of internet data. We are inundated with information, and only those who can keep from getting buried in it will get ahead. How to efficiently interpret information at scale is something every industry, education included, is still struggling to figure out.

In the age of the internet, data can seem inaccessible to any organization that doesn’t operate out of the heart of Silicon Valley. In reality, information is often abundant and within reach, particularly across social media and open-source resources. Surveys and other direct user-input sources are also invaluable, and even their metadata (that’s data about data) can be used to answer questions seemingly unrelated to their original purpose. So, given these possibilities, how can leaders of K-12 and higher education institutions effectively utilize data? The process has four steps: Contextualization, Collection, Interpretation, and Communication (CCIC).

Contextualization
Data utilization always begins with the desire to answer a question or to solve a problem. Before you can dig for insights, you must understand exactly what you are looking for and where to find it, which often requires an institution or a team of administrators to think outside the box. Equally important is knowing what kind of information can realistically be found. It isn’t realistic to seek second-by-second data on the emotions of every faculty member, administrator, staff member, or student throughout the year, but mental health surveys and the ever-shifting sentiment of Twitter feeds can offer similar insights.
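The idea of using a social feed as a proxy for campus sentiment can be sketched very simply. The word lists and posts below are purely illustrative, not a real sentiment lexicon; in practice one would use a validated lexicon or model:

```python
# Minimal sketch of proxy sentiment scoring over social posts.
# POSITIVE/NEGATIVE word sets and the sample posts are hypothetical.
POSITIVE = {"great", "excited", "proud", "love"}
NEGATIVE = {"stressed", "worried", "overwhelmed", "tired"}

def sentiment_score(post: str) -> int:
    """Count positive words minus negative words in one post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "So proud of our students this semester",
    "Feeling stressed and overwhelmed by finals",
]
scores = [sentiment_score(p) for p in posts]
```

Tracked over time, even a crude score like this can reveal shifts in mood that would be impossible to capture by polling every individual directly.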

Collection
Clearly there is plenty of information out there to be pulled, but sometimes the most bountiful deposits of data are buried the deepest. Allocating time to dig them up, or to devise a way of collecting them efficiently (often through APIs or coding languages), is a necessary part of the process. Data cleaning, stripping away the unneeded information collected alongside the data you want, is another step that takes time and care if the next step, interpretation, is to go smoothly.
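Data cleaning in this sense can be as simple as keeping only the fields you need and discarding malformed rows. The records and field names below are hypothetical stand-ins for what an API might return:

```python
# Minimal sketch of data cleaning: raw records (as an API might return them)
# are reduced to only the fields needed for interpretation, and rows with
# empty responses are dropped. All field names here are illustrative.
raw_records = [
    {"id": 1, "response": "4", "timestamp": "2019-05-01", "user_agent": "Mozilla/5.0"},
    {"id": 2, "response": "",  "timestamp": "2019-05-02", "user_agent": "Mozilla/5.0"},
    {"id": 3, "response": "5", "timestamp": "2019-05-03", "user_agent": "curl/7.64"},
]

KEEP = ("id", "response")

def clean(records):
    """Keep only needed fields; drop rows whose response is blank."""
    return [
        {k: r[k] for k in KEEP}
        for r in records
        if r.get("response", "").strip()
    ]

cleaned = clean(raw_records)
```

Here the timestamp and user-agent fields came along for the ride during collection but add nothing to the question at hand, so they are stripped before interpretation begins.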

Interpretation
This step is often considered the most difficult. Properly interpreting thoroughly collected and cleaned data is typically the work of subject-matter experts. Because interpretation involves judgment, drawing on multiple inputs from various sources, both people and data sources, can be a valuable part of the process.
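One concrete way to use multiple data sources is to check whether they point in the same direction before drawing a conclusion. The numbers, scales, and threshold below are illustrative assumptions, not real survey data:

```python
# Minimal sketch of triangulating two independent sources: survey ratings
# (1-5 scale) and social sentiment scores (-1 to 1). All values are made up.
def normalize(values, lo, hi):
    """Rescale values to a common 0-1 range given known bounds."""
    return [(v - lo) / (hi - lo) for v in values]

def mean(xs):
    return sum(xs) / len(xs)

survey = normalize([4, 5, 3, 4], lo=1, hi=5)
sentiment = normalize([0.2, -0.1, 0.4], lo=-1, hi=1)

# An illustrative agreement check: do the two sources roughly concur?
sources_agree = abs(mean(survey) - mean(sentiment)) < 0.25
```

When the sources disagree, that disagreement is itself a finding worth investigating rather than a reason to discard either dataset.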

Communication
In a team setting, a perfect understanding of a dataset is useless if it cannot be communicated to others. Translating the technical components of the data into terms a general audience, and decision-makers in particular, can act on is a crucial part of the data utilization process.

These insights represent just a fraction of what it takes to fully utilize data within institutions, but investing now, when the world of education appears to be in chaos, is vital to staying competitive amidst an uncertain educational landscape.