Why Useless Data Is Worse Than No Data

July 19, 2017

Data – the basis of decision-making

To characterise a process, data is a necessity. In the absence of data, important parameters of a process are not tracked, their significance cannot be evaluated, and their behaviour over time cannot be understood. To glean such understanding, measurements and sensors can be employed – and readily are in this emerging age of the Internet of Things and ‘always-on’ connectivity. However, one must be prudent in identifying precisely what data to gather.

As technologists and environmental practitioners, we can easily become absorbed in data at times, probing it for insights and marvelling at trends and behaviours. However, no matter how compellingly the data is visualised, one cannot lose sight of its purpose: to serve as a decision-making tool that informs the actions to be taken. Making appropriate decisions hinges upon the appropriateness and credibility of the data. The data must be useful.

What defines useful data?

  • it captures the appropriate parameters
  • it’s updated at sufficient frequency
  • it’s acquired at representative locations with sufficient spatial density
  • it reports in real time from remote locations
  • it’s authenticated to maintain data quality
  • it’s contextualised with respect to contributing factors

If these criteria are satisfied, the data collection exercise is optimally useful.
Failure to deliver on one or more of them will compromise the usefulness of the data, detracting from the objective for which it was gathered – and potentially misleading the decision-making process that determines the next actions to be taken.
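
To make these criteria concrete, here is a minimal sketch in Python of how a few of them might be checked for a single incoming record. The record fields (timestamp, location, value, signature) and the thresholds are illustrative assumptions, not part of the article; real authentication would verify a cryptographic signature rather than merely checking that one is present.

    from datetime import datetime, timedelta, timezone

    # Hypothetical thresholds -- tune these to the process being monitored.
    MAX_STALENESS = timedelta(minutes=15)   # "updated at sufficient frequency"
    KNOWN_LOCATIONS = {"site-a", "site-b"}  # "representative locations"

    def is_useful(record: dict, now: datetime) -> bool:
        """Check one sensor record against a few of the usefulness criteria."""
        # Appropriate parameter: the measured value must be present and numeric.
        if not isinstance(record.get("value"), (int, float)):
            return False
        # Sufficient frequency: the reading must be recent enough to act on.
        if now - record["timestamp"] > MAX_STALENESS:
            return False
        # Representative location: the reading must come from a known site.
        if record.get("location") not in KNOWN_LOCATIONS:
            return False
        # Authenticated: a non-empty signature stands in for real verification.
        if not record.get("signature"):
            return False
        return True

    now = datetime.now(timezone.utc)
    record = {"timestamp": now - timedelta(minutes=5), "location": "site-a",
              "value": 21.7, "signature": "abc123"}
    print(is_useful(record, now))  # True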

Data inundation

The definition of useful data does not stop at the data collection exercise – that is only the start. Large-scale continuous monitoring, even while satisfying the aforementioned criteria, brings a new challenge: data inundation. Visualising and presenting the data is only the first step. Beyond this, data mining, analytics and machine learning need to be applied to derive meaningful insights from such high volumes of data. But data analytics is only as good as the data itself – which shows that establishing data usefulness is an iterative feedback loop.
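
As a hedged illustration of coping with such volumes, the sketch below (plain Python, standard library only) reduces a high-frequency stream to per-window summaries and flags windows that deviate sharply from the rest. The window size, threshold and readings are assumed values for illustration, not from the article.

    import statistics

    def summarise(stream, window=60, z_threshold=2.5):
        """Reduce a high-frequency stream to per-window summaries, flagging
        windows whose mean deviates sharply from the mean of all windows."""
        windows = [stream[i:i + window] for i in range(0, len(stream), window)]
        means = [statistics.mean(w) for w in windows]
        overall, spread = statistics.mean(means), statistics.stdev(means)
        for i, m in enumerate(means):
            anomalous = spread > 0 and abs(m - overall) / spread > z_threshold
            yield {"window": i, "mean": round(m, 2), "anomalous": anomalous}

    # Example: 1 Hz readings over ten minutes, with one disturbed minute.
    readings = [20.0] * 300 + [35.0] * 60 + [20.0] * 240
    for summary in summarise(readings):
        print(summary)

Summaries of this kind are what the downstream analytics consume – and if the underlying readings are flawed, every window statistic inherits the flaw.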

The ultimate aim is to dispel the uncertainty that shrouds the process being monitored – driven by the actionable insights that result from the data collection, authentication and analytics methodology employed. The appropriateness of these insights depends on each preceding step. To borrow a phrase: garbage in, garbage out. In fact, data that is not optimally useful can have knock-on repercussions that exacerbate issues further down the line, as BCG Perspectives[1] illustrates.

Poor-quality data can compromise the next stage of decision-making, leading to delays, cost overruns and missed opportunities. Useless data is magnified into poor interpretation and misinformed decisions. This underscores the need for data quality management that employs a feedback mechanism for continuous improvement. Useful data therefore informs us not only about the process being monitored but also about how the data quality itself can be improved.
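
One hedged sketch of such a feedback mechanism, again with illustrative field names and thresholds: score each batch of readings for completeness, and feed a low score back into the collection configuration – here, simply shortening the sampling interval.

    def quality_score(batch, expected_count):
        """Fraction of the expected readings that actually arrived and parsed."""
        valid = [r for r in batch if r.get("value") is not None]
        return len(valid) / expected_count

    def adjust_collection(config, score, floor=0.9):
        """Feedback step: if quality drops below the floor, halve the sampling
        interval so gaps show up sooner and can be investigated."""
        if score < floor:
            config["interval_seconds"] = max(1, config["interval_seconds"] // 2)
        return config

    config = {"interval_seconds": 60}
    batch = [{"value": 20.1}, {"value": None}, {"value": 19.8}]
    score = quality_score(batch, expected_count=4)  # 0.5: two of four usable
    config = adjust_collection(config, score)
    print(score, config)  # 0.5 {'interval_seconds': 30}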

[1] https://www.bcgperspectives.com/content/articles/big-data-digital-economy-how-to-avoid-big-data-trap/