So to close, I will just summarise the four steps of the process.
1. Identify tasks associated with the system. Consider operations, maintenance and dealing with situations that arise.
2. Rank the tasks into priorities, the highest being those where we are likely to learn the most from carrying out task analysis. We should not be analysing easy tasks.
3. Analyse the tasks, but make sure this is done properly, involving the right people and being systematic. It is much better to complete a small number of analyses really well than to do a large number badly.
4. Finally, use the findings. Otherwise, we have essentially wasted our time.
I suggest the first of the four stages of task risk management is to generate a comprehensive list of tasks for the system where we wish to apply task analysis. My experience is that this is rarely done because people want to dive straight into analysing specific tasks.
When I do get people to accept that the starting point for our analysis should be a list of tasks, they often want to simply use the list of existing procedures. However, my experience is that most organisations have not developed their procedures in a very systematic fashion, and more often than not procedures do not exist for the tasks that should be our highest priority for task analysis.
I have found that a structured brainstorm is usually best, asking people to work systematically through their system to identify tasks. A drawing can be particularly useful. For example, working left to right on this drawing I can see that operationally we need to receive material from a tanker, manage stock levels in the tank, change over filters when the online one gets blocked and change over pumps if the online one fails. Also, I know we will need some system start-up and shutdown procedures. From a maintenance perspective I can see that we need to change or clean filter elements, repair the pump, calibrate instruments and test trip functions.
This step is very simple, and perhaps that is why people will often want to skip it. But by starting at this point people start to think more systematically, and are less inclined to pluck tasks from thin air for analysis. Also, the lists are very useful in their own right. They can be used for gap analyses.
At one of my clients we used a macro to convert the lists into training packages: tasks were grouped into modules and printed out as a workbook that was given to trainees. Also, it is very useful to know what tasks people are doing when looking at things like workload or when managing change.
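The grouping step that the macro performed can be sketched in a few lines of code. The task names and module groupings below are illustrative only, loosely based on the operations and maintenance tasks identified earlier; they are not the client's actual data.

```python
from collections import defaultdict

# Hypothetical task list with a training module assigned to each task;
# the groupings mirror the operations/maintenance split described above.
task_list = [
    ("Receive material from tanker",    "Operations"),
    ("Manage tank stock levels",        "Operations"),
    ("Change over duty pump",           "Operations"),
    ("Change or clean filter elements", "Maintenance"),
    ("Calibrate instruments",           "Maintenance"),
]

# Group the flat task list into modules.
modules = defaultdict(list)
for task, module in task_list:
    modules[module].append(task)

# Emit a simple workbook outline, one section per module.
for module, tasks in modules.items():
    print(f"Module: {module}")
    for t in tasks:
        print(f"  - {t}")
```

The point is simply that once the task list exists as structured data, turning it into other useful artefacts (training workbooks, gap analyses, change registers) becomes mechanical.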
Step 2 of the process is to review the list and prioritise the tasks that should be analysed first. These should be the tasks where there is interaction with major hazards and where there is potential for human failure. In other words, where are we going to get the most benefit from carrying out task analysis.
I find that people's gut feel for which tasks are most critical is fairly unreliable. They usually choose tasks that they are familiar with, and often reassure themselves that there is not an issue with certain tasks because they have a procedure. My experience also is that both standard health and safety risk assessments and process safety analyses such as HAZOP are rarely much use, largely because the approaches taken to human factors are unsystematic.
I have had most success with a simple scoring system. The basis for this was presented in an HSE report in 1999. I have adapted it through experience, but the basic principle is that each task is scored between 0 and 3 against five criteria. Add up the scores for a total. The ones with the highest score are the most critical and hence the highest priority for task analysis.
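The scoring step can be sketched as follows. The five criterion names here are illustrative placeholders (I am not reproducing the actual criteria from the 1999 HSE report), and the scores for the two example tasks are invented purely to show the ranking mechanics.

```python
# Sketch of the step 2 criticality scoring: each task is scored 0-3
# against five criteria, and the totals are ranked highest first.
# Criterion names and scores below are hypothetical.
CRITERIA = ["hazard severity", "vulnerability to error", "frequency",
            "override of safety devices", "complexity"]

def total_score(scores):
    """Sum the per-criterion scores (each 0-3) into a criticality total."""
    assert all(0 <= scores[c] <= 3 for c in CRITERIA)
    return sum(scores[c] for c in CRITERIA)

# Hypothetical scores for two tasks from the drawing.
tasks = {
    "Receive material from tanker": dict(zip(CRITERIA, [3, 2, 2, 0, 2])),
    "Calibrate level instrument":   dict(zip(CRITERIA, [1, 1, 1, 2, 1])),
}

# Highest total first: the top of the list is the highest priority
# for task analysis.
ranked = sorted(tasks, key=lambda t: total_score(tasks[t]), reverse=True)
```

The arithmetic is deliberately trivial; the value of the method comes from scoring every task on the list against the same criteria, not from the sophistication of the calculation.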
Having used this method a lot over the last few years I am fully confident that, whilst it is relatively quick and easy to do, the output is very useful. It ensures a degree of objectivity and is particularly useful for demonstrating that you understand human factors risks, which you can refer to in a safety report or case if you are a COMAH site, offshore establishment etc.
An additional benefit of the scoring system is that it can highlight anomalies in the way you manage human factors risks without going through the time and effort of carrying out a full task analysis. For example, one of the scores asks about the vulnerability to error. If you score highly on that criterion, it suggests that constant vigilance is required. Given that we know humans are not great at vigilance, this score can prompt you to consider whether the current task method is safe or whether arrangements need to be changed. Another score is about overriding safety devices. Again, if this score is high it prompts you to consider whether it is appropriate that a task requires safety devices to be overridden.
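This anomaly check can be expressed as a simple flagging rule on the scores. As before, the criterion names, tasks and score values are hypothetical, used only to illustrate the idea of flagging a task for review before any full analysis is done.

```python
# Hypothetical per-criterion scores (0-3 scale) for two tasks; the
# criterion names are illustrative, not the actual HSE criteria.
scores = {
    "Test trip function":       {"vulnerability to error": 3,
                                 "override of safety devices": 3},
    "Manage tank stock levels": {"vulnerability to error": 1,
                                 "override of safety devices": 0},
}

# A maximum score on either of these criteria implies constant vigilance
# or routine defeat of safety devices, so flag the task for review even
# before a full task analysis is carried out.
WATCH = ("vulnerability to error", "override of safety devices")
flagged = {task: [c for c in WATCH if s.get(c, 0) >= 3]
           for task, s in scores.items()}
flagged = {task: crits for task, crits in flagged.items() if crits}
```

Here the trip test would be flagged on both criteria, which matches the sort of anomaly the scoring is meant to surface.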
Step 3 is analysing the tasks that had the highest score in step 2. I’m not planning to talk much about this today because I think it is very much a standard technique for anyone working in human factors. But I would point out that a lot of people have a negative perception because they can see how long it takes per task and think they have to analyse every task. That is why the first 2 steps in the process are so important. Also, it doesn’t help that a lot of people only engage with task analysis because someone says they have to.
My experience is that every time we have done a task analysis properly we have learnt something.
Properly means involving the right people, putting procedures aside, keeping to a good structure and carrying out a human error analysis. This last bit can be a bit of a drag, but it is very interesting how often new issues come to light when you look back at the analysis you have just completed. Recently we analysed how to test a trip function. We had accepted that overrides would be required, but when we looked at the potential errors we realised there was a significant vulnerability. As a result we concluded that a completely different method was required.
The fourth stage is probably the most important: to actually do something with the findings. Unfortunately it is often overlooked, I suspect because a lot of people have got involved only because they have been told they have to do it.
We should always be asking ourselves how risks can be engineered out. This is relatively straightforward when we are involved in the design stage for new projects, although task analysis is often carried out too late to make fundamental changes. This is where the four-step process can help, because you can develop task lists and criticality rankings very early on, and so ensure that human factors issues are integrated into the project plan.
One outcome is invariably the development of new or improved procedures. The detailed task analysis can give us the steps to include in the procedure, but the criticality rating also gives us a guide to what type of procedure should be provided and how it should be used in practice. This requires buy-in to the idea that one size does not fit all for procedures, and that it is unreasonable to expect people to follow a procedure for every task. Equally, for most companies the idea that a procedure must be printed, followed and signed every time certain tasks are performed is a new one, which takes some time to accept.
Although I stand by the basic guidance based on criticality, I have come across quite a number of situations where people are performing some of the most critical tasks on a very regular basis. People are often tempted to try and re-score the task, but I think this is not acceptable. The correct approach is to engineer the task so that its hazard or vulnerability to error is reduced. But this is not always possible.
I would suggest the solution requires a much better understanding of competence. But if we have completed our task analyses correctly a lot of the information needed to define competence requirements has already been recorded.
Another aspect of using the output is making sure the process remains live. That means revisiting all four steps. For example, if there is an issue with a task there are a number of questions you should ask. Was the task on the list, and if not, why not? What criticality was it assigned, and does the incident suggest this was incorrect? Was the task analysis correct, and if not, why not? And were the findings implemented effectively?