The complexity of implementing experience return processes
From our experience working with dozens of companies on safety management, one of the most complex activities is consolidating what is known as the return of experience (ROE - Return of Experience, or REX) or Lessons Learned. Learning from experience is one of the most important tools for safety management: taking advantage of everything learned in previous company activities allows us to consolidate knowledge, avoid repeating the mistakes of the past and, above all, raise the bar of our activities by starting from a base above what we have already done. Therefore, as we have discussed, feedback from operational experience is an important tool for safety management.
On a tactical level, experience return processes work in two directions. The first operates when a project, task or activity ends: improvement points are drawn from the experience and used as inputs at the start of subsequent projects or tasks. Therefore, within project closures or project-phase closures, we carry out an activity that feeds our knowledge base for future projects, in the form of recommendations, new product requirements or new process requirements.
The second experience return process is event-driven. That is, in the face of an event associated with a safety problem, a set of internal processes is activated to evaluate what has happened, with the aim of learning from it and correcting what is concluded.
The well-known Lessons Learned sessions at project closure, for example, are an opportunity for dialogue and collaborative learning among working groups and even between different organizations.
Without a doubt, these feedback processes are affected by invisible barriers that hinder their implementation and recurrence in organizations: learning from unwanted events, incidents and accidents is not as trivial as is sometimes thought, especially if this learning must take on an organizational character.
Several steps must be carried out correctly to consolidate the learning from what we did wrong, and at each step, obstacles tend to appear that are technical, organizational or cultural in nature. What is clear, from our experience as safety consultants, is that each step must be done correctly and rigorously to ensure effective learning: Report → Analysis → Planning of corrective actions → Implementation of corrective actions (including the exchange of information) → Monitoring the effectiveness of the improvement.
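The step sequence above can be sketched as a minimal data model in which a lessons-learned item must pass through every stage in order. This is only an illustrative sketch; the class, stage names and fields are our own assumptions, not a prescribed tool:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative stages of the experience-return pipeline described above.
class Stage(Enum):
    REPORT = auto()
    ANALYSIS = auto()
    CORRECTIVE_PLANNING = auto()
    IMPLEMENTATION = auto()
    EFFECTIVENESS_MONITORING = auto()

@dataclass
class LessonRecord:
    """One experience-return (REX) item moving through the pipeline."""
    event: str
    stage: Stage = Stage.REPORT
    contributing_factors: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    def advance(self) -> Stage:
        """Move to the next stage; stages cannot be skipped, mirroring
        the requirement that each step be done rigorously."""
        if self.stage is Stage.EFFECTIVENESS_MONITORING:
            raise ValueError("pipeline already complete")
        self.stage = Stage(self.stage.value + 1)
        return self.stage

# Hypothetical record: event text is invented for illustration.
rec = LessonRecord(event="Signal passed at danger at junction 12")
rec.advance()  # REPORT -> ANALYSIS
```

The point of forcing sequential `advance()` calls is that no stage, in particular the final effectiveness monitoring, can be silently skipped.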
When we are asked for help as external consultants to implement this type of process in organizations, we often see what we call "symptoms of a lack of learning". Below we review some of these symptoms, together with solutions to improve the implementation of experience return processes.
1 - Covering up or silencing events
As we all know, many incidents or near misses go unreported: some part of the organization considers that reporting is not worth the effort or not their concern, covers up to avoid further investigation, or believes that the company never does anything about safety anyway. Clearly, every time an incident goes unreported, an opportunity is missed to learn something that might prevent an accident tomorrow. Beyond the missed opportunity, unreported incidents can create a false confidence in the safety of the system and can bias analysis reports toward positive safety trends.
Resistance to event reporting can be reduced with some techniques:
- Implement automated reporting systems. For example, on the railway, signals passed at danger can be recorded automatically, without human intervention, as a complement to the written reports produced by the drivers themselves.
- Eliminate the blame culture in the organization. A blame culture overemphasizes the responsibility of the person involved in the incident rather than identifying causal factors in the management system, organization or process that enabled or encouraged the error. In contrast, a just culture is based on an atmosphere of trust in which people are encouraged, and even rewarded, for providing essential safety-related information (including their own mistakes).
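The automated-reporting idea above can be illustrated with a toy event logger: equipment-detected events are appended to a log with no human deciding whether they are "worth" reporting. The function, event names and log structure are invented for illustration:

```python
import datetime

# Hypothetical automated reporting hook: every detected event is logged,
# removing the human decision of whether reporting is "worth the effort".
EVENT_LOG: list[dict] = []

def record_event(kind: str, location: str, detail: str = "") -> dict:
    """Record a safety event automatically; this complements, not replaces,
    the written reports produced by the drivers themselves."""
    entry = {
        "kind": kind,
        "location": location,
        "detail": detail,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    EVENT_LOG.append(entry)
    return entry

# Example: a signal passed at danger, recorded without human intervention.
record_event("signal_passed_at_danger", "junction 12")
```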
2 - Superficial analysis of the event
On many other occasions we find that event analyses stop at superficial causes and problems rather than underlying or organizational factors. In addition, learnings are simplified at various levels, and recommendations are typically limited to the person responsible for executing the hazardous activity, who usually has little responsibility within the company. Only occasionally does a manager appear as the person to whom an improvement recommendation must be addressed as a result of a safety-related event. Against this, we normally recommend, first, that the people involved in event analysis be experienced, senior professionals, with the training and ability to detect real causal factors and understand the systemic causes of failure in complex systems.
We also recommend allowing adequate time for the analyses, avoiding prioritizing productivity over safety; without time, detailed and in-depth analysis is not possible. Finally, we warn of the well-known "managerial bias" toward technical rather than organizational fixes, which downplays managers' responsibility in incidents, minimizes the organization's contribution to the event and, therefore, leaves behind organizational improvements that are vitally important to avoid incurring further safety-related events.
We also see that many companies use the term root cause and encourage analysts to dig beyond the immediate causes to find the root cause, for example using the 5 Whys method. At Leedeo we believe this type of procedure assumes a causality that is too linear and reductionist, and that is not always applicable to complex sociotechnical systems and accidents with systemic causes. In this sense, our recommendation is not to apply off-the-shelf magic formulas to simplify a process that is inherently complex. Instead, we are committed to seeking and understanding the underlying causal structure of the incident, identifying the contributing factors, which can be numerous and do not always fit a strictly deterministic causality. Instead of asking why the unwanted event occurred, we should ask how the events unfolded, that is, what factors contributed to the event.
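The contrast between a linear 5 Whys chain and a contributing-factor view can be made concrete with a toy structure. The event and factor names below are invented for illustration; the point is that a graph lets several factors converge on the same event, which a single linear chain cannot express:

```python
# A linear 5 Whys chain forces exactly one cause per level...
five_whys = [
    "train overran the platform",
    "brake applied late",
    "driver distracted",
    "fatigue",
    "roster pressure",
]

# ...while a contributing-factor graph admits several factors per event,
# closer to how complex sociotechnical accidents actually unfold.
contributing_factors = {
    "train overran the platform": ["brake applied late", "wet rail conditions"],
    "brake applied late": ["driver fatigue", "misread signal"],
    "driver fatigue": ["roster pressure"],
}

def all_factors(event: str, graph: dict) -> set:
    """Collect every contributing factor reachable from an event."""
    found = set()
    stack = [event]
    while stack:
        node = stack.pop()
        for cause in graph.get(node, []):
            if cause not in found:
                found.add(cause)
                stack.append(cause)
    return found
```

With this toy data, `all_factors` surfaces factors such as "wet rail conditions" and "misread signal" that a single why-chain would have dropped.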
3 - Corporate egocentrism: "I am different and better"
Corporate egocentrism is another of the barriers we identify in experience return processes. In several major accidents, the failure to learn from incidents and accidents elsewhere was a contributing factor to the occurrence of serious events. In general, the feeling that "this cannot happen to us; we are different and we operate much better" is common. It is closely related to external reputation or prestige (one's own, colleagues', and the company's), in the sense of "dirty laundry is not washed in public". When we hear phrases like the following, we must (with apologies to the reader) place them on the plane of corporate egocentrism:
- We work better than them
- We are not in the same industry. Our industry is different from the others
- Our internal processes need a special verification
- We have never had an accident before
- We have our own rules
- We've done it this way for the last 15 years and we've never had a problem
- We have a better safety culture
All these egocentric mentalities will undoubtedly make it difficult to learn within the organization and, more or less directly, to put lessons learned to use.
4 - Foster teams committed to safety and continuous improvement
In general, we must encourage in our organizations people with an attitude that challenges the status quo: employees attentive to conditions or activities that may have an unwanted effect on safety. We must understand that accidents are often the result of a series of decisions and actions that reflect failures in the organization's shared assumptions, values and beliefs.
Remember that the safety of complex systems is normally supervised or ensured by people who monitor correct operation, detect anomalies and try to correct them. If they are not open to new information that challenges their mental models, the learning cycle will not complete. This usually happens because operational personnel are too busy or too focused to reflect on the fundamentals that affect safety, because of mistrust of the team that analyzes events ("they are technocrats who have no idea how we work here"), and because of resistance to change.
Related to attitude, we find what we call corporate amnesia: people forget things and, therefore, organizations forget things. Outsourcing, an aging workforce, and insufficient knowledge transfer from more experienced workers all feed this process. We can compensate for these situations with knowledge management tools and adequate training across the generational changes that occur in the company.
5 - Why is bad news not taken well within an organization?
In general, although it is difficult to admit, organizations tend not to be open to bad news, and the bearers of negative reports are singled out as colleagues who are not team players. Worse still, on many occasions the bearers of bad news are ignored because of their supposedly negative attitude toward the company's situations and activities.
Related to this issue, we must always bear in mind that in complex systems, the boundary between safe and unsafe operation is imprecise and fluctuates over time. In effect, organizations are exposed to competing forces (internal and external) that lead to a drift in people's practices, attitudes and beliefs over time. Therefore, sources of danger, organizational safety models and barriers must periodically be discussed, questioned and, if necessary, evolved. In general, the presence of conflicting views on safety should be seen as a source of enrichment rather than as a problem to be eliminated.
6 - The ritualization, and therefore trivialization, of experience return processes
At Leedeo we use "ritualization" to describe the feeling within an organization that things are being done correctly as long as the processes and protocols set by the company are followed to the letter, even when they no longer fulfill their objective or meaning. In other words, we hold a lessons learned meeting just to complete the file and be able to close the project, without giving the process any value. Without a doubt, this type of organizational climate is not conducive to learning.
7 - Improvements and recommendations are neither implemented nor evaluated for effectiveness
On many occasions, the recommendations or corrective actions derived from experience return or lessons learned processes are not implemented, or their implementation is very slow. Normally this happens for budgetary reasons or because of insufficient time. In a corporation, talking about time is ultimately a matter of prioritization, so perhaps we should talk about priorities rather than available time for implementing improvement actions. It is also important to mention management complacency on safety matters, which prioritizes productivity over safety, and, once again, resistance to change.
Once the recommendations are implemented, which as we can see is not easy, we must make sure that their implementation really solved the underlying problem. Here again, overconfidence that everything we do is right can play tricks on us. It is also important to consider whether the indicators we have allow us to measure what we need in order to evaluate effectiveness. Perhaps we will have to change or evolve our indicators or dashboards so that they can detect what caused a crack in our safety management system.
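As a toy illustration of monitoring effectiveness, one could compare an indicator before and after a corrective action. The function, the 30% threshold and the monthly counts below are invented assumptions; a real monitoring scheme would need longer observation windows and proper statistical treatment:

```python
def improvement_confirmed(before: list, after: list,
                          min_reduction: float = 0.3) -> bool:
    """Naive effectiveness check: did the mean monthly incident count
    drop by at least `min_reduction` (30% by default) after the fix?
    Illustrative only; real monitoring needs statistical care."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    if mean_before == 0:
        return mean_after == 0
    return (mean_before - mean_after) / mean_before >= min_reduction

# Monthly near-miss counts before/after a corrective action (invented data).
improvement_confirmed([8, 7, 9, 8], [4, 5, 3, 4])  # True: mean drops 8 -> 4
```

A check like this also exposes the opposite case: if the indicator barely moves, the recommendation may have treated a symptom rather than the underlying problem.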
At Leedeo Engineering, we are specialists in supporting our clients at any level required in RAM and Safety tasks, for both infrastructure and on-board equipment. Do not hesitate to contact us >>
Are you interested in our articles about RAMS engineering and Technology?
Sign up for our newsletter and we will keep you informed of the publication of new articles.