I previously assumed that data always led to insight, but I now firmly believe that is not the case. Regardless of our role in an organization, we tend to have a bias that if we look hard enough, good things will be revealed. Unfortunately, they may not. Here are several examples of this concept:
- If you must ask what the data is telling you, then something is amiss. Expectations should be established ahead of time, along with a reporting mechanism that paints a clear picture for the data consumer. That way, the answer exists before the question ever comes up.
- “Have you looked at it this way?” These conversations and ‘mental gymnastics’ often occur when clear answers elude us, or when decisions/expectations are made in a vacuum. The data either answers a question or it doesn’t, and you either solicited feedback prior to rollout or you didn’t. It should not be subject to wild tangents or interpretations.
- Data is expected to contain gold, but plenty of things can fool you into thinking you have found it. I don’t envision ‘Pyrite Metrics’ catching on.
- Data collection and storage do not guarantee results. “We have lots of data” is a concerning phrase, indicating that no one knows what is in it. Effective use of data is correlated with having an established process.
- Insight by itself does not necessarily translate into action. Actionable insight remains the goal. However, we often overestimate how easy it is to find the real insights and do the right things with them.
- Preconceived notions should be evaluated for truth, but don’t get tunnel vision. The data might tell a completely different story from what you thought you knew or even considered. Trying to find “proof” of what you thought was happening may have value, but failing to explore all avenues can leave you shortchanged.
What data should be focused on?
For well over a century, the U.S. Bureau of Labor Statistics (BLS) has collected data and published reports on occupational injuries, illnesses, and fatalities. As a result, safety professionals have utilized this information to drive safety metrics and goals. The reason is obvious – companies are mandated to collect injury data in a specific manner and BLS serves it up on a silver platter to allow for consistent benchmarking by industry and location.
It is a blessing that there is industry knowledge and historical perspective as it relates to injury and illness data. It is simultaneously a curse, since the safety profession has not adopted any other widely accepted means of measuring safety at a universal level. The result is that safety has been measured based solely on lagging injury rates, and that pattern continues to this day. This is a problem because low injury rates can occur even while work is performed unsafely, giving workers and organizations a false sense of security. Are you good, or are you lucky?
A case has been made for years to adopt leading indicators to measure safety in addition to injury or lagging indicators. However, leading indicators have no universal formula like the lagging indicators do in the form of OSHA injury rates. There are hundreds, if not thousands, of possible leading indicators, leaving companies to throw a proverbial dart at the board when choosing which ones to adopt. Ideally, a variety of performance indicators is chosen to better evaluate how safe an organization is. So, what should an organization do?
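Part of why lagging indicators benchmark so consistently is that the OSHA recordable incident rate reduces to a single shared formula: recordable cases × 200,000, divided by total hours worked (200,000 being the hours of 100 full-time employees working 40 hours per week for 50 weeks). A minimal sketch, with an illustrative function name of my own:

```python
def osha_incident_rate(recordable_incidents: int, hours_worked: float) -> float:
    """OSHA recordable incident rate, normalized to 200,000 hours
    (100 full-time workers x 40 hours/week x 50 weeks)."""
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return recordable_incidents * 200_000 / hours_worked

# Example: 4 recordable cases over 500,000 hours worked
print(osha_incident_rate(4, 500_000))  # 1.6
```

Because every employer computes the rate the same way, it supports the consistent industry and location benchmarking described above; no equivalent formula exists for leading indicators.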
Monitoring vs. Metrics
The unfortunate side effect of using metrics to measure safety is that once a target range is established, hitting a certain number often takes precedence over what is actually being measured.
Instead of implementing metrics to answer, ‘What is safe?’, a process should be put in place to determine if the safety controls are appropriate, functioning, and effective. This is basically monitoring and evaluation, as is done in all other business functions. Even today, many companies have not integrated safety metrics in the same way they have those for operations.
Monitoring can be established through the collection and analysis of relevant data, then used to determine a program’s progress toward its objectives and to guide management decisions. Examples of monitoring activities include observations, management walkthroughs, and conversations. Monitoring should focus on the components of the work activity, as well as the people, results, and recommendations. Evaluations should compare expected and achieved accomplishments, examining the results (inputs, activities, outputs, outcomes, and impacts) and contextual factors to better understand whether the organization is headed in the right direction.
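The comparison of expected versus achieved accomplishments can be made concrete. The sketch below is a hypothetical illustration (the control names, thresholds, and field names are my own assumptions, not a SafetyStratus method): each safety control has a planned monitoring cadence, and the evaluation flags controls whose checks lagged the plan or found the control not functioning.

```python
from dataclasses import dataclass

@dataclass
class MonitoringActivity:
    control: str     # the safety control being monitored
    planned: int     # checks planned this period (observations, walkthroughs)
    completed: int   # checks actually completed
    effective: int   # checks where the control was functioning as intended

def evaluate(activities, min_coverage=0.8, min_effectiveness=0.9):
    """Flag controls where monitoring lagged the plan or checks found problems."""
    findings = []
    for a in activities:
        coverage = a.completed / a.planned if a.planned else 0.0
        effectiveness = a.effective / a.completed if a.completed else 0.0
        if coverage < min_coverage or effectiveness < min_effectiveness:
            findings.append((a.control, round(coverage, 2), round(effectiveness, 2)))
    return findings

checks = [
    MonitoringActivity("Lockout/tagout", planned=10, completed=9, effective=9),
    MonitoringActivity("Fall protection", planned=10, completed=5, effective=4),
]
print(evaluate(checks))  # [('Fall protection', 0.5, 0.8)]
```

The point of the sketch is the shift in question: not “what is our number?” but “are the controls appropriate, functioning, and effective, and are we actually checking?”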
In the end, the monitoring and evaluation process should provide evidence-based information that is useful, credible, and reliable. The key takeaways from an evaluation should be used by organizational leaders to influence future decision-making regarding the program. Ultimately, the entire process should give leaders confidence to act appropriately when presented with compelling insights.
Cary comes to the SafetyStratus team as the Vice President of Operations with almost 30 years of experience in several different industries. He began his career in the United States Navy’s nuclear power program. From there he transitioned into the public sector as an Environmental, Health & Safety Manager in the utility industry. After almost thirteen years, he transitioned into the construction sector as a Safety Director at a large, international construction company. Most recently he held the position of Manager of Professional Services at a safety software company, overseeing the customer success, implementation, and process consulting aspects of the services team.
At SafetyStratus, he is focused on helping achieve the company’s vision of “Saving lives and the environment by successfully integrating knowledgeable people, sustainable processes, and unparalleled technology”.