Friday, December 27, 2024
The 2024 Blog Digest - Q3/Q4
The 2024 Blog Digest - Q3/Q4 brings you all of The Safety Artisan's blog posts from the last six months of this year. I hope that you find this a useful resource!
The 2024 Blog Digest - Q1/Q2: 25 Posts!
The 2024 Blog Digest - Q3/Q4 is it for this year - thanks, everyone!
Meet the Author
Learn safety engineering with me, an industry professional with 25 years of experience. I have:
•Worked on aircraft, ships, submarines, ATMS, trains, and software;
•Worked on tiny programs and some of the biggest (Eurofighter, Future Submarine);
•Worked in the UK and Australia, on US and European programs;
•Taught safety to hundreds of people in the classroom, and thousands online;
•Presented on safety topics at several international conferences.
#coursesafetyengineering #ineedsafety #knowledgeofsafety #learnsafety #safetyblog #safetydo #safetyengineer #safetyengineertraining #safetyengineeringcourse #safetyprinciples
Simon Di Nucci https://www.safetyartisan.com/2024/12/26/the-2024-blog-digest-q3-q4/
Monday, December 23, 2024
Reflections on a Career in Safety, Part 4
In 'Reflections on a Career in Safety, Part 4', I want to talk about Consultancy, which is mostly what I've been doing for the last 20 years!
Consultancy
As I said near the beginning, in the software supportability team we all wore the same uniform as our customers. We didn't cost them anything. We were free. We could turn up and do a job. You would think that would be an easy sell, wouldn't you?
Not a bit of it. People want there to be an exchange of tokens. If we're talking about psychology: if something doesn't cost them anything, they think, well, it can't be worth anything. So paying for something really does affect our perception of whether it's any good.
Photo by Cytonn Photography on Unsplash
So I had to go and learn a lot of sales and marketing type stuff in order to sell the benefits of bringing us in, because, of course, there was always an overhead of bringing new people into a program, particularly if they were going to start asking awkward questions, like how are we going to support this in service? How are we going to fix this? How is this going to work?
So I had to learn a whole new language and a whole new way of doing business and going out to customers and saying, we can help you, we can help you get a better result. Let's do this. So that was something new to learn. We certainly didn't talk about that at university. Maybe you do more business focussed stuff these days. You can go and do a module, I don't know, in management or whatever; very, very useful stuff, actually. It's always good to be able to articulate the benefits of doing something because you've got to convince people to pay for it and make room for it.
Doing Too Little, or Too Much
And in safety, I’ve got two jobs.
First of all, I suppose it's the obvious one. Sometimes you go and see a client, they're not aware of what the law says they're supposed to do or they're not aware that there's a standard or a regulation that says they've got to do something – so they're not doing it. Maybe I go along and say, ah, look, you've got to do this. It's the law. This is what we need to do.
Photo by Quino Al on Unsplash
Then, there's a negotiation because the customer says, oh, you consultants, you're just making up work so you can make more money. So you've got to be able to show people that there's a benefit, even if it's only not going to jail. There's got to be a benefit. So you help the clients to do more in order to achieve success.
You Need to Do Less!
But actually, I spend just as much time advising clients to do less, because I see lots of clients doing things that appear good and sensible. Yes, they're done with all the right motivation. But you look at what they're doing and you say, well, you're spending all this money and time, but it's not actually making a difference to the safety of the product or the process or whatever it is.
You're chucking money away, really, for very little or no effect. Sometimes people are doing work that actually obscures safety. They dive into all this detail, and I go, well, actually, you've created all this data that's got to be managed, and that's distracting you from this thing over here, which is the thing that's really going to hurt people.
So, I spend my time helping people to focus on what's important and dump the comfort blanket, OK, because lots of times people are doing stuff because they've always done it that way, or it feels comforting to do something. And it's really quite threatening to them to say, well, actually, you think you're doing yourself a favor here, but it doesn't actually work. And that's quite a tough sell as well, getting people to do less.
Photo by Prateek Katyal on Unsplash
However, sometimes less is definitely more in terms of getting results.
Part 5 will follow next week!
New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.
#Careerinsafety #ishealthandsafetyagoodcareer #ishseagoodcareer #issafetyagoodcareer #issafetymanagementagoodcareer #Lecture #Part4 #reflections #safetycareer #safetyguideforcareerandtechnicaleducation
Simon Di Nucci https://www.safetyartisan.com/2021/07/21/reflections-on-a-career-in-safety-part-4/
Monday, December 16, 2024
Reflections on a Career in Safety, Part 3
In 'Reflections on a Career in Safety, Part 3' I continue talking about different kinds of Safety, moving onto...
Projects and Products
Then moving on to the project side, where teams of people were making sure a new aeroplane, a new radio, a new whatever it might be, was going to work in service; people were going to be able to use it easily, support it, and get it replaced or repaired if they had to. So it was a much more technical job – lots of software, lots of process, and lots and lots of people.
Moving to the software team was a big shock to me. It was accidental. It wasn't a career move that I had chosen, but I enjoyed it when I got there. For everything else in the Air Force, there was a rule. There was a process for doing this. There were rules for doing that. Everything was nailed down. When I went to the software team, I discovered there are no rules in software, there are only opinions.
The 'H' in software development is for 'Happiness'
So straight away, it became a very people-focused job because if you didn't know what you were doing, then you were a bit stuck. I had to go through a learning curve, along with every other technician who was on the team. And the thing about software with it being intangible is that it becomes all about the process. If a physical piece of kit like the display screen isn't working, it's pretty obvious. It's black, it's blank, nothing is happening. It's not always obvious that you've done something wrong with software when you're developing it.
So we were very heavily reliant on process; again, people have got to decide what's the right process for this job? What are we going to do? Who's going to do it? Who's able to do it? And it was interesting to suddenly move into this world where there were no rules and where there were some prima donnas.
Photo by Sandy Millar on Unsplash
We had a handful of really good programmers who could do just about anything with the aeroplane, and you had to make the best use of them without letting them get out of control. Equally, you had people on the other end of the scale who'd been posted into the software team, who really did not want to be there. They wanted to get their hands dirty, fixing aeroplanes. That's what they wanted to do. Interesting times.
From the software team, I moved on to big projects like Eurofighter, and that's when I got introduced to:
Systems Engineering
And I have no problem with plugging systems engineering, because as a safety engineer, I know that if there is good systems engineering and good project management, my job is going to be so much easier. I've turned up on a number of projects as a consultant or whatever, and I say, OK, where's the safety plan? And they say, oh, we want you to write it. OK, yeah, I can do that. Whereas, where's the project management plan? Where's the systems engineering management plan?
If there isn't one, or it's garbage – as it sometimes is – I'm sat there going, OK, my job just got ten times harder, because safety is an emergent property. You can say a piece of kit is on or off. You can say it's reliable, but you can't tell whether it's safe until you understand the context: what are you asking it to do, in what environment? So unless you have something to give you that wider and bigger picture and put some discipline on the complexity, it's very hard to get a good result.
Photo by Sam Moqadam on Unsplash
So systems engineering is absolutely key, and I'm always glad to work with a good systems engineer and all the artifacts that they've produced. That's very important, so clarity in your documentation is very helpful. If you're lucky enough to be there at the very beginning of a program, you've got an opportunity to design safety, and all the other qualities you want, into your product. You've got an opportunity to design in that stuff from the beginning and make sure it's there, right there in the requirements.
Also, systems engineers do the requirements: working out what needs to be done, what you need the product to do and, just as importantly, what you need it not to do, and then passing that on down the chain. That's very important. And I put “managing at a distance” in the title because it's unlike the operations world, where you can say, “that's broken, can you please go and fix it”.
Managing at a Distance
It's not as direct as that. You're looking at your process, you're looking at the documentation, you're working with, again, lots and lots of people, not all of whom have the same motivation that you do.
Photo by Bonneval Sebastien on Unsplash
Industry wants to get paid. They want to do the minimum work to get paid, to maximize their profit. You want the best product you can get. The pilots want something that punches holes in the sky and looks flash and they don't really care much about much else, because they're quite inoculated to risk.
So you've got people with competing motivations, and everything has got to be worked indirectly. You don't get to control things directly. You've got to try and influence things and put good things in place, in almost an act of faith that good things will result. A good process will produce a good product. And most of the time that's true. So (my last slide on work), I ended up doing consultancy, first internally and then externally.
Part 4 will follow next week!
New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.
#Careerinsafety #ishealthandsafetyagoodcareer #ishseagoodcareer #issafetyagoodcareer #issafetymanagementagoodcareer #Lecture #Part3 #reflections #safetycareer #safetyguideforcareerandtechnicaleducation #SystemsEngineering
Simon Di Nucci https://www.safetyartisan.com/2021/07/14/reflections-on-a-career-in-safety-part-3/
Monday, December 9, 2024
Reflections on a Career in Safety, Part 2
In 'Reflections on a Career in Safety, Part 2' I move on to ...
Different Kinds of Safety
So I'm going to talk a little bit about highlights, that I hope you'll find useful. I went straight from university into the Air Force and went from this kind of environment to heavy metal, basically. I guess it's obvious that wherever you are if you're doing anything in industry, workplace health and safety is important because you can hurt people quite quickly.
Workplace Health and Safety
In my very first job, we had people doing welding, high-voltage electrics, heavy mechanical things; all built out of centimeter-thick steel. It was tough stuff and people still managed to bend it. With the amount of energy that was rocking around there, you could very easily hurt people. Even the painters – that sounds like a safe job, doesn't it? – worked with aircraft paint that, at that time, contained a cyanoacrylate: a compound of cyanide.
All the painters and finishers had to wear head-to-toe protective equipment and breathing apparatus. If you're giving people air to breathe, if you get that wrong, you can hurt people quite quickly. So even managing the hazards of the workplace introduced further hazards that all had to be very carefully controlled.
Photo by Ömer Yıldız on Unsplash
And because you're in operations, all the decisions about what kind of risks and hazards you're going to face have already been made, long before. Decisions made years ago, when a new plane or ship or whatever it was was being bought and introduced – sometimes without anyone realizing it – meant that we were faced with handling certain hazards, and you couldn't get rid of them. You just had to manage them as best you could.
Overall, I think we did pretty well. Injuries were rare, despite the very exciting things that we were dealing with sometimes. We didn't have too many near misses – not that we heard about anyway. Nevertheless, that was always there in the background. You're always trying to control these things and stop them from getting out of control.
One of the things about a workplace in operations and support, whether you're running a fleet of aeroplanes or you're servicing some kit for somebody else and then returning it to them, it tends to be quite a people-centric job. So, large groups of people doing the job, supervision, organization, all that kind of stuff. And that can all seem very mundane, a lot of HR-type stuff. But it's important and it's got to be dealt with.
So the real world of managing people is a lot of logistics. Making sure that everybody you need is available to do the work, making sure that they’ve got all the kit, all the technical publications that tell them what to do, the information that they need. It's very different to university – a lot of seemingly mundane stuff – but it's got to be got right because the consequences of stuffing up can be quite serious.
Safe Systems of Work
So moving on to some slightly different topics: when I got onto working with aeroplanes, there was an emphasis on a safe system of work, because doing maintenance on a very complex aeroplane was quite an involved process and it had to be carefully controlled. So we would have what's usually referred to as a Permit to Work system, where you very tightly control what people are allowed to do to any particular plane. It doesn't matter whether it's a plane or a big piece of mining equipment or you're sending people in to do maintenance on infrastructure; whatever it might be, you've got to make sure that the power is disconnected before people start pulling it apart, et cetera, et cetera.
Photo by Leon Dewiwje on Unsplash
And then when you put it back together again, you've got to make sure that there aren't any bits leftover and everything works before you hand it back to the operators because they're going to go and do some crazy stuff with it. You want to make sure that the plane works properly. So there was an awful lot of process in that. And in those days, it was a paperwork process. These days, I guess a lot would be computerized, but it's still the same process.
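The permit-to-work idea described above can be sketched in code. This is a toy illustration only: the isolation names and hand-back checks are my assumptions for the example, not the actual RAF process.

```python
# Hypothetical sketch of permit-to-work checks: work may only start once
# every required isolation is in place, and the kit may only be handed
# back once nothing is left over and a functional test has passed.
# Real permit systems are formal, audited processes; this is illustrative.

REQUIRED_ISOLATIONS = {"electrical_power", "hydraulic_power"}  # assumed example

def may_start_work(permit_isolations: set) -> bool:
    """Work begins only when every required isolation is recorded on the permit."""
    return REQUIRED_ISOLATIONS <= permit_isolations

def may_hand_back(tools_accounted_for: bool, functional_test_passed: bool) -> bool:
    """Hand-back requires no leftover items and a successful functional test."""
    return tools_accounted_for and functional_test_passed

print(may_start_work({"electrical_power"}))                     # False: hydraulics still live
print(may_start_work({"electrical_power", "hydraulic_power"}))  # True: safe to start
print(may_hand_back(tools_accounted_for=True,
                    functional_test_passed=False))              # False: not ready to return
```

The point of the sketch is the gating: no single person's judgement, but explicit preconditions that must all hold before each step.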
If you muck up the process, it doesn't matter whether it's on paper or computerized. If you've got a rubbish process, you're going to get rubbish results; computerizing it doesn't change that. You just stuff up more quickly because you've got a more powerful tool. And for certain things, we had to take what I've called special measures. In my case, we were a strike squadron, which meant our planes would carry nuclear weapons if they had to.
Special Processes for Special Risks
So if the Soviets charged across the border with 20,000 tanks and we couldn't stop them, then it was time to use – we called them buckets of sunshine. Sounds nice, doesn't it? Anyway, there were some fairly particular processes and rules for looking after buckets of sunshine, and I'm glad to say we only ever used dummies. But when the convoy arrived and yours truly had to sign for the weapon, and then the team started loading it, that did concentrate your mind as an engineer. I think I was twenty-two, twenty-three at the time.
Photo by Oscar Ävalos on Unsplash
Somebody on station stuffed up the paperwork and got caught. So that was the careers of two people my age, whom I knew, destroyed straight away, just by not being careful enough about what they were doing. So, yeah, that does concentrate the mind. If you're dealing with, let's say, a major hazard facility – a chemical plant where you've got perhaps thousands of tonnes of dangerous chemicals – there are some very special risk controls, which you have to make sure are going to work when you need them.
And finally, there is ‘airworthiness’: decisions about whether we could fly an aeroplane, even though some bits of it were not working. So that was a decision that I got to make once I got signed off to do it. But it's a team job. You talk to the specialists who say, this bit of the aeroplane isn't working, but it doesn't matter as long as you don't do “that”.
Photo by Eric Bruton on Unsplash
So you had to make sure that the pilots knew: OK, this isn't working; this is the practical effect from your point of view, so don't switch this thing on or rely on it, because it isn't going to work. Decisions like that were an exciting part of the job, which I really enjoyed. That's when you had to understand what you were doing – not on your own, because there were people who'd been there a lot longer than me – but we had to make things work as best we could. That was life.
Part 3 will follow next week!
New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.
#Careerinsafety #ishealthandsafetyagoodcareer #ishseagoodcareer #issafetyagoodcareer #issafetymanagementagoodcareer #Lecture #Part2 #reflections #safetycareer #safetyguideforcareerandtechnicaleducation #SystemsEngineering
Simon Di Nucci https://www.safetyartisan.com/2021/07/07/reflections-on-a-career-in-safety-part-2/
Monday, December 2, 2024
Reflections on a Career in Safety, Part 1
This is Part 1 of my 'Reflections on a Career in Safety', from "Safety for Systems Engineering and Industry Practice", a lecture that I gave to the University of Adelaide in May 2021. My thanks to Dr. Kim Harvey for inviting me to do this and setting it up.
The Lecture, Part 1
Hi, everyone, my name is Simon Di Nucci and I'm an engineer. It sounds cheesy, but I got into safety by accident. We'll talk about that later. I was asked to talk a little bit about career stuff, some reflections on quite a long career in safety, engineering, and other things, and then some stuff that hopefully you will find interesting and useful about safety work in industry and working for government.
Context: my Career Summary
I've got three areas to talk about, operations and support, projects and product development, and consulting.
I have been on some very big projects, Eurofighter, Future Submarine Programme, and some others that have been huge multi-billion-dollar programs, but also some quite small ones as well. They're just as interesting, sometimes more so. In the last few years, I've been working in consultancy. I have some reflections on those topics and some brief reflections on a career in safety.
Starting Out in the Air Force
So a little bit about my career to give you some context. I did 20 years in the Royal Air Force in the U.K., as you can tell from my accent, I'm not from around here. I started off fresh out of university, with a first degree in aerospace systems engineering. And then after my Air Force training, my first job was as an engineering manager on ground support equipment: in General Engineering Flight, it was called.
We had people looking after the electrical and hydraulic power rigs that the aircraft needed to be maintained on the ground. And we had painters and finishers and a couple of carpenters and a fabric worker and some metal workers and welders, that kind of stuff. So I went from a university where we were learning about all this high-tech stuff about what was yet to come in the aerospace industry. It was a bit of the opposite end to go to, a lot of heavy mechanical engineering that was quite simple.
And then after that, we had a bit of excitement because six weeks after I started, in my very first job, the Iraqis invaded Kuwait. I didn't go off to war, thank goodness, but some of my people did. We all got ready for that: a bit of excitement.
Photo by Jacek Dylag on Unsplash
After that, I did a couple of years on a squadron, on the front line. We were maintaining and fixing the aeroplanes and looking after operations. And then from there, I went for a complete change. Actually, I did three years on a software maintenance team and that was a very different job, which I'll talk about later. I had the choice of two unpleasant postings that I really did not want, or I could go to the software maintenance team.
Into Software by accident as well!
I discovered a burning passion to do software to avoid going to these other places. And that's how I ended up there. I had three fantastic years and really enjoyed them. Then, I was thinking of going somewhere down south in the UK, to be near family, but we went further north. That's the way things happen in the military.
I got taken on as the rather grandly titled Systems and Software Specialist Officer on the Typhoon Field Team. The Eurofighter Typhoon wasn't in service at that point. (That didn't come in until 2003 when I was in my last Air Force job, actually.) We had a big team of handpicked people who were there to try and make sure that the aircraft was supportable when it came into service.
One of the big things about the new aircraft was that it had tons of software on board. There were five million lines of code, which was a lot at the time, and a vast amount of data. It was a data hog: it consumed and produced vast amounts of data, and all of it needed to be managed. It was on a scale beyond anything we'd seen before, so it was a big shock to the Air Force.
More Full-time Study
Photo by Mike from Pexels
Then after that, I was very fortunate. (This is a picture of York, with the Minster in the background.) I spent a year full-time doing the safety-critical systems engineering course at York, which was excellent. It was a privilege to be able to have a year to do that full-time. I've watched a lot of people study part-time when they've got a job and a family, and it's really tough. So I was very, very pleased that I got to do that.
After that, I went to do another software job where this time we were in a small team and we were trying to drive software supportability into new projects coming into service, all kinds of stuff, mainly aircraft, but also other things as well. That was almost like an internal consultancy job. The only difference was we were free, which you would think would make it easier to sell our services. But the opposite is the case.
Finally, in my last Air Force job, I was part of the engineering authority looking after the Typhoon aircraft as it came into service, which is always a fun time. We had just got the plane into service when one of the boxes that I was responsible for malfunctioned, so the undercarriage refused to come down on the plane, which is not what you want. It did get down safely in the end, but then the whole fleet was grounded and we had to fix the problem. So some more excitement there – not always of the kind that you want, but there we go. So that took me up to 2006.
At that point, I transitioned out of the Air Force and I became a consultant.
I had always regarded consultants with a bit of suspicion up until then, and now I am one. I started off with a firm called QinetiQ, which is also over here. I was doing safety mainly with the aviation team, but again, we did all sorts: vehicles, ships, network logistics stuff, all kinds of things. And then in 2012, I joined Frazer-Nash in order to come to Australia.
So we appeared in Australia in November 2012, and we've been here in Adelaide almost all that time. And you can't get rid of us now because we're citizens. So you're stuck with us. But it's been lovely. We love Adelaide and really enjoy, again, the varied work here.
Adelaide CBD, photo by Simon Di Nucci
Part 2 will follow next week!
New to System Safety? Then start here. There's more about The Safety Artisan here. Subscribe for free regular emails here.
#Careerinsafety #ishealthandsafetyagoodcareer #ishseagoodcareer #issafetyagoodcareer #issafetymanagementagoodcareer #Lecture #Part1 #reflections #safetycareer #safetyguideforcareerandtechnicaleducation #SystemsEngineering
Simon Di Nucci https://www.safetyartisan.com/2021/06/30/reflections-on-a-career-in-safety-part-1/
Monday, November 25, 2024
Functional Safety
The following is a short, but excellent, introduction to the topic of 'Functional Safety' by the United Kingdom Health and Safety Executive (UK HSE). It is equally applicable outside the UK, and the British Standards ('BS EN') are versions of international ISO/IEC standards - e.g. the Australian version ('AS/NZS') is often identical to the British standard.
My comments and explanations are shown below.
"Functional safety is the part of the overall safety of plant and equipment that depends on the correct functioning of safety-related systems and other risk reduction measures such as safety instrumented systems (SIS), alarm systems and basic process control systems (BPCS).
SIS
SIS are instrumented systems that provide a significant level of risk reduction against accident hazards. They typically consist of sensors and logic functions that detect a dangerous condition and final elements, such as valves, that are manipulated to achieve a safe state.
The general benchmark of good practice is BS EN 61508, Functional safety of electrical/electronic/programmable electronic safety related systems. BS EN 61508 has been used as the basis for application-specific standards such as:
- BS EN 61511: process industry
- BS EN 62061: machinery
- BS EN 61513: nuclear power plants
BS EN 61511, Functional safety - Safety instrumented systems for the process industry sector, is the benchmark standard for the management of functional safety in the process industries. It defines the safety lifecycle and describes how functional safety should be managed throughout that lifecycle. It sets out many engineering and management requirements, however, the key principles of the safety lifecycle are to:
- use hazard and risk assessment to identify requirements for risk reduction
- allocate risk reduction to SIS or to other risk reduction measures (including instrumented systems providing safety functions of low / undefined safety integrity)
- specify the required function, integrity and other requirements of the SIS
- design and implement the SIS to satisfy the safety requirements specification
- install, commission and validate the SIS
- operate, maintain and periodically proof-test the SIS
- manage modifications to the SIS
- decommission the SIS
BS EN 61511 also defines requirements for management processes (plan, assess, verify, monitor and audit) and for the competence of people and organisations engaged in functional safety. An important management process is Functional Safety Assessment (FSA) which is used to make a judgement as to the functional safety and safety integrity achieved by the safety instrumented system.
Alarm Systems
Alarm systems are instrumented systems designed to notify an operator that a process is moving out of its normal operating envelope to allow them to take corrective action. Where these systems reduce the risk of accidents, they need to be designed to good practice requirements considering both the E,C&I design and human factors issues to ensure they provide the necessary risk reduction.
In certain limited cases, alarm systems may provide significant accident risk reduction, where they also might be considered as a SIS. The general benchmark of good practice for management of alarm systems is BS EN 62682.
BPCS
BPCS are instrumented systems that provide the normal, everyday control of the process. They typically consist of field instrumentation such as sensors and control elements like valves which are connected to a control system, interfaced, and could be operated by a plant operator. A control system may consist of simple electronic devices like relays or complicated programmable systems like DCS (Distributed Control System) or PLCs (Programmable Logic Controllers).
BPCS are normally designed for flexible and complex operation and to maximize production rather than to prevent accidents. However, it is often their failure that can lead to accidents, and therefore they should be designed to good practice requirements. The general benchmark of good practice for instrumentation in process control systems is BS 6739."
Copyright
The above text is reproduced under Creative Commons Licence from the UK HSE's webpage. The Safety Artisan complies with such licensing conditions in full.
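The SIS structure that the HSE describes – sensors, a logic function that detects a dangerous condition, and final elements driven to a safe state – can be sketched as a toy program. This is an illustration of the concept only: the trip point, units, and names are my assumptions, and a real SIS is engineered to BS EN 61511 with rated hardware, redundancy, and periodic proof-testing.

```python
# Toy sketch of a Safety Instrumented Function (SIF):
# sensor reading -> logic solver -> final element (a valve).
# Illustrative only; values and names are assumptions, not from any standard.

TRIP_POINT_BAR = 10.0  # assumed dangerous-pressure threshold

def logic_solver(pressure_bar: float) -> bool:
    """Detect the dangerous condition: True means 'trip to the safe state'."""
    return pressure_bar >= TRIP_POINT_BAR

def final_element(trip: bool) -> str:
    """Drive the valve: on a trip, fail to the safe (closed) state."""
    return "valve_closed" if trip else "valve_open"

def sif(pressure_bar: float) -> str:
    """One scan of the safety function, from sensor value to valve state."""
    return final_element(logic_solver(pressure_bar))

print(sif(8.5))   # normal operation -> valve_open
print(sif(12.3))  # dangerous condition -> valve_closed
```

The sketch shows why the HSE treats the sensor, logic, and final element as one safety function: the risk reduction comes from the whole chain working, not from any single part.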
Back to Home Page
#basicprocesscontrolsystem #coursesafetyengineering #engineersafety #functionalsafety #functionalsafetystandard #ineedsafety #knowledgeofsafety #learnfunctionalsafety #learnsafety #needforsafety #safetyblog #safetydo #safetyengineer #safetyengineerskills #safetyengineertraining #safetyengineeringcourse #safetyinstrumentedsystem #safetyprinciples #softwaresafety #theneedforsafety #whatisfunctionalsafety
Simon Di Nucci https://www.safetyartisan.com/2021/06/26/functional-safety/
Monday, November 18, 2024
Risk Management 101
Welcome to Risk Management 101, where we're going to go through these basic concepts of risk management. We're going to break it down into the constituent parts and then we're going to build it up again and show you how it's done. I've been involved in risk management, in project risk management, safety risk management, etc., for a long, long time. I hope that I can put my experience to good use, helping you in whatever you want to do with this information.
Maybe you're getting an interview. Maybe you want to learn some basics and decide whether you want to know more about risk management or not. Whatever it might be, I think you'll find this short session really useful. I hope you enjoy it and thanks for watching.
https://youtu.be/dOKALqXYtrg
You can get the RM101 Course as part of the FREE Triple Learning Bundle.
Risk Management 101, Topics
- Hazard Identification;
- Hazard Analysis;
- Risk Estimation;
- Risk Evaluation;
- Risk Reduction; and
- Risk Acceptance.
Risk Management 101, Transcript
Click here for the full transcript:
Introduction
Hi everyone and welcome to Risk Management 101. We're going to go through these basic concepts of risk management. We're going to break it down into the constituent parts. Then we're going to build it up again and show you how it's done.
My name is Simon Di Nucci and I have a lot of experience working in risk management, project risk management, safety risk management, etc. I’m hoping that I can put my experience to good use, helping you in whatever you want to do with this information. Whether you're going for an interview or you want to learn some basics. You can watch this video and decide if you want to know more about risk management or you don’t need to. Whatever it might be, you'll find this short session useful. I hope you enjoy it and thanks for watching.
Topics For This Session
Risk Management 101 – so what does it all mean? We're going to break risk management down into six constituent parts. I'm using a particular standard that breaks it down this way; other standards will do this in different ways. We'll talk about that later. Here we've got risk management broken down into: hazard identification, hazard analysis, risk estimation, risk evaluation (and ALARP), risk reduction, and risk acceptance.
Risk Management
Let's get right on to that. Risk management – what is it? It's defined as "the systematic application of management policies, procedures and practices to the tasks of hazard identification, hazard analysis, risk estimation, risk and ALARP evaluation, risk reduction, and risk acceptance".
There are a couple of things to note here. We're talking about management policies, procedures and practices – the 'how' we do it, whether that's a high-level policy or low-level common practice; that is, how things are done in our organisation versus how the day-to-day tasks are done. It's also worth saying that when we talk about 'hazards', that's a safety 'ism'. If we were doing security risk management, we might talk about 'threats'. In day-to-day language, we might talk about 'causes' – something causing a risk or leading to a risk. More on that later, but that's an overview of what risk management is.
Part 1
Let's look at it in a different way. For those of you who like a visual representation, here is the hierarchical breakdown as a diagram. The parts need to happen in order, more or less, left to right. And as you can see, there's a link between risk evaluation and risk reduction; we'll come on to that. So, it's not a menu of options – it's a serial 'this is what you have to do'. Sometimes the parts are linked together more intimately.
Hazard Identification
First of all, hazard identification. So, this is the process where we identify and list hazards and accidents associated with the system. You may notice that some words here are in bold. Where a word is in bold, we are going to give the definition of what it is later.
These hazards could lead to an accident, but only hazards associated with the system – that's the scope. If we were talking about a different system, say an aeroplane, a ship, or a computer, we would have a very different scope, and a different way that accidents might happen.
On a more practical level, how do we do hazard identification? I'm not going to go into any depth here, but there are certain classic techniques. We can consult with our workers and inspect the workplace where they're operating; in some countries, that's a legal requirement (including in Australia, where I live). Another option is to look at historical data, and in some countries and industries that's a requirement too – meaning we have to do it. And we can use special analysis techniques. I'm not going to talk about any of those analysis techniques today; you can watch some other sessions on The Safety Artisan to see them.
Hazard Analysis
Having done hazard identification, we've asked ourselves 'What could go wrong?'. Now we can put some more detail on it and ask, 'How could it go wrong? And how often?'. That kind of stuff. So, we want to go into more detail about the hazards and accidents associated with this particular system, and that will help us to define some accident sequences. We can start with something that creates a hazard, and then the hazard may lead to an accident. And that's what we're talking about. We will show that using graphics later, which will be helpful.
But again, more on terminology. In different industries, we call it different things. We tend to say ‘accident’ in the UK and Australia. In the U.S., they might call it a ‘mishap’, which is trying to get away from the idea that something was accidental. Nobody meant it to happen. Mishap is a more generic term that avoids that implication. We also talk about ‘losses’ or we talk about ‘breaches’ in the security world. We have some issue where somebody has been able to get in somewhere that they should not. And we can talk about accident sequences. Or, in a more common language, we call it a sequence of events. That's all it is.
Risk Estimation
Now we're talking about risk estimation. We've thought about our hazards and accidents and how they might progress from one to another. Let's think about, 'How big is the risk of this actually happening?'. Again, we'll unpack this further at the next level. But for now, we're going to talk about the systematic use of available information. Systematic – so, ordered. We're following a process. This isn't somebody on their own taking a subjective view of 'Look, I reckon it's about this much'. It's a process that is repeatable. We want to do something systematic. It's thorough, it's repeatable, and so it's defendable. We can justify the conclusions that we've come to because we've done it with some rigour, in a systematic way. That's important, particularly if we're talking about harm coming to people or big losses.
Risk and ALARP Evaluation
Now, risk evaluation is just taking that estimated risk just now and comparing it to something and saying, “How serious is this risk?”. Is it something that is very low? If it's very insignificant then we're not bothered about it. We can live with it. We can accept it. Or is it bigger than that? Do we need to do something more about it? Again, we want to be systematic. We want to determine whether risk reduction is necessary. Is this acceptable as it is or is it too high and we need to reduce it? That's the core of risk evaluation.
In this UK-based standard, we're using terminology that is found in different forms around the world. In the UK, they talk about 'tolerability'. We're talking about the absolute level of risk. There is probably an upper limit that's allowed in the law or in our industry, and there's a lower limit that we're aiming for. In an ideal world, we'd like all our risks to be low-level risks. That would be terrific.
So, that's 'tolerability', and you might hear it called different things. Within the UK system, there are three classes of tolerability of risk. A risk could be 'broadly acceptable' – it's very low, down in the target region where we'd like to get all our risks. It could be 'tolerable' – we can expose people to this risk, or live with it, but only if we've met certain other criteria. And then there's the risk that's so big, so far up there, that we can't have it under any circumstances – it's unacceptable. You can imagine a traffic-light system where we have categorised our risk.
And then there's the test of whether our risk can be accepted. In the UK, it's called ALARP: we reduce the risk As Low As Reasonably Practicable. In other places, you'll see SFARP: we've eliminated or minimised the risk So Far As Is Reasonably Practicable. In the nuclear industry, they talk about ALARA: As Low As Reasonably Achievable. Different laws use different tests, but whichever one you use, there's a test that we have to apply to say, "Can we accept the risk?" "Have we done enough risk reduction?". Whatever test you've put in those square brackets, that's the test you're using, and that will vary from jurisdiction to jurisdiction. The basic concept of risk evaluation is to estimate the level of risk and then compare it to some standard or some regulation. Whichever one it might be, that's what we do. That's risk evaluation.
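The evaluation step described above can be sketched as a simple classification function. This is a minimal, hypothetical sketch: the numeric thresholds and the idea of a single 'risk score' are illustrative assumptions for teaching purposes, not values from any standard.

```python
# Hypothetical sketch of risk evaluation against UK-style tolerability classes.
# The risk_score scale and the thresholds (3 and 7) are illustrative only.

def evaluate_tolerability(risk_score: float) -> str:
    """Classify an estimated risk score into one of three tolerability classes."""
    if risk_score < 3:
        # Down in the target region where we'd like all our risks to be.
        return "broadly acceptable"
    elif risk_score < 7:
        # We can live with this risk, but only if other criteria (e.g. ALARP) are met.
        return "tolerable (if ALARP)"
    else:
        # Too big to accept under any circumstances.
        return "unacceptable"

print(evaluate_tolerability(2))   # broadly acceptable
print(evaluate_tolerability(5))   # tolerable (if ALARP)
print(evaluate_tolerability(9))   # unacceptable
```

In practice, the thresholds would come from the law, regulation, or standard that applies in your jurisdiction, which is exactly the 'square brackets' point made above.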
Risk Reduction
We’ve asked, “Do we need to reduce risk further?”. And if we do, we need to do some risk reduction. Again, we’re being systematic. This is not some subjective thing where we go “I have done some stuff, it'll be alright. That's enough.”. We're being a bit more rigorous than that. We've got a systematic process for reducing risk. And in many parts of the world, we’re directed to do things in a certain way.
This is an illustration from an Australian regulation. In this regulation, we're aiming to eliminate risk. We want to start with the most effective risk reduction measures. Elimination is “We’ve reduced the risk to zero”. That would be lovely if we could do that but we can't always do that.
What's the next level? We could get rid of this risk by substituting something less risky. Imagine we've got a combustion engine powering something. The combustion engine needs flammable fuel and it produces toxic fumes. It could release carbon monoxide, CO2, and other things that we don't want. We ask, "Can we get rid of that?". Could we have an electric motor and a battery instead? That might be a lot safer than the combustion engine. That is substitution. There are still risks with electricity, but by doing this we've substituted something less risky for something risky.
Or we could isolate the hazard. Let's use the combustion engine as an example again. We can say, "I'll put the engine, the fuel, and the exhaust somewhere a long way from people – a long way from where they can do harm or cause a loss." And that's another way of dealing with it.
Or we could say, "I'm going to reduce the risks through engineering controls". We could put in something engineered. For example, we can install a smoke detector: a very simple, and therefore highly reliable, device – certainly more reliable than a human. You can install one that detects noxious gases; a carbon monoxide detector is a good example, because humans cannot detect carbon monoxide at all. (Admittedly, if you've got carbon monoxide poisoning, you'll know about it – it gives you terrible headaches and other symptoms – but that's not a good way to find out that you're breathing in poisonous gas. We do not want to do it that way.)
So, we can have an engineering control to protect people. Or we can use an interlock: we isolate things in a building or behind a wall, and if somebody opens the door, that forces the thing to cut out so it's no longer dangerous. There are different engineering controls that we can introduce. They do not rely on people; they work regardless of what any person does.
Next on the list, we could reduce exposure to the hazard by using administrative controls. That's giving somebody rules or a procedure to follow: "Do this. Don't do that." Now, that's all good. We can put up warning signs and warn people not to approach something. But, of course, sometimes people break the rules, often for understandable reasons. Maybe they don't understand, or they don't know the danger. Maybe they've got to get something done, or the procedure we've given them doesn't work very well – it makes the job too difficult, so people cut corners. So, procedural protection can be weak, and a bit hit and miss sometimes.
And then finally, we can give people personal protective equipment. We can give them eye protection – I'm wearing glasses because I'm short-sighted, but you can get goggles to protect your eyes from damage: splashes, flying fragments, sparks, and so on. We can wear a hard hat so that if we're on a building site and something drops on us from above, it protects the old brain box. It won't stop the accident from happening, but it will help reduce the severity of the accident. That's the least effective control: we're doing nothing to prevent the accident from happening, only reducing the severity in certain circumstances. For example, if you drop a ton of bricks on me, it doesn't matter whether I'm wearing a hard hat or not – I'm still going to get crushed. But one brick I should be able to survive if I'm wearing a hard hat.
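The ordering just described – from elimination down to personal protective equipment – can be encoded directly, which makes the 'most effective first' idea concrete. This is an illustrative sketch; the list mirrors the hierarchy of controls described above, and the helper function is a hypothetical convenience, not part of any regulation.

```python
# The hierarchy of controls described above, ordered from most to least
# effective. This encoding is illustrative, not taken from any regulation.

HIERARCHY_OF_CONTROLS = [
    "elimination",                    # reduce the risk to zero
    "substitution",                   # swap in something less risky
    "isolation",                      # keep the hazard away from people
    "engineering controls",           # detectors, interlocks, guards
    "administrative controls",        # rules, procedures, warning signs
    "personal protective equipment",  # reduces severity, not likelihood
]

def more_effective(a: str, b: str) -> bool:
    """True if control a sits higher in the hierarchy than control b."""
    return HIERARCHY_OF_CONTROLS.index(a) < HIERARCHY_OF_CONTROLS.index(b)

print(more_effective("elimination", "personal protective equipment"))  # True
print(more_effective("administrative controls", "substitution"))       # False
```

The point of starting at the top of the list is exactly the one made in the text: the controls near the top work regardless of what people do, while those near the bottom depend on people following rules or only reduce severity.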
Risk Acceptance
Let's move on to risk acceptance. At some stage, we may have reduced the risk to a point where we can accept it. We can live with it, and we've decided that we need to do whatever it is that exposes us to the risk. Take driving as an example: we get in our car because it lets us go from A to B quickly and independently, so we accept the risk of driving. We make risk acceptance decisions every day, often without thinking about them. Most of us get in a car every day and don't worry about the risk, but it's always there. We've just decided to accept it.
But in this example we've got, it's not an individual deciding to do something on the spur of the moment. Nor is it based on personal experience. We've got a systematic process where a bunch of people come together. The relevant stakeholders agree that a risk has been assessed or has been estimated and has been evaluated. They agree that the risk reduction is good enough and that we will accept that risk. There’s a bit more to it than you and I saying, “That'll be alright.”
Part 2
Let's summarise where we’ve got to. We've talked about these six components of risk management. That's terrific. And as you can see, they all go together. Risk evaluation and risk reduction are more tightly coupled. That’s because when we do some risk reduction, we then re-evaluate the risk. We ask ‘Can we accept it?’. If the answer is ‘No.’ we need to do some more work. Then we do some more risk reduction. So those tend to be a bit more coupled together at the end. That's the level we've got to. We're now going to go to the next level.
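That coupled evaluate-and-reduce cycle can be sketched as a simple loop: evaluate the risk, and if it can't yet be accepted, apply more risk reduction and re-evaluate. This is a hypothetical illustration; the numeric risk values, the acceptance threshold, and the `reduce_risk` function are all assumptions made purely to show the control flow.

```python
# Hypothetical sketch of the evaluate -> reduce -> re-evaluate loop
# described above. All numbers here are illustrative assumptions.

def manage_risk(initial_risk: float, threshold: float, reduce_risk) -> float:
    """Apply risk reduction repeatedly until the risk can be accepted."""
    risk = initial_risk
    while risk > threshold:       # risk evaluation: can we accept it yet?
        risk = reduce_risk(risk)  # risk reduction, then loop to re-evaluate
    return risk                   # risk acceptance: we can live with this

# Example: suppose each control measure we add halves the remaining risk.
final = manage_risk(initial_risk=8.0, threshold=2.0,
                    reduce_risk=lambda r: r / 2)
print(final)  # 2.0
```

The loop structure is the point: risk evaluation and risk reduction are tightly coupled because each reduction step triggers a fresh evaluation, exactly as the summary above describes.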
So, we're going to explain these things. We've talked about hazard identification and hazard analysis, but what is a hazard? And what is an accident? And what is an accident sequence? We're going to unpack that a bit more. We’re going to take it to the next level. And throughout this, we're talking about risk over and over again. Well, what is ‘risk’? We're going to unpack that to the next level as well. It all comes down to this anyway. This is a safety standard. We're talking about harm to people. How likely is that harm and how severe might it be? But it might be something else. It might be a loss or a security breach. A financial loss. It might be a negative result for our project. We might find ourselves running late. Or we're running over budget. We’re failing to meet quality requirements. Or we’re failing to deliver the full functionality that we said we would. Whatever it might be.
Hazard
So, let's unpack this at the next level. A hazard is a term that we use particularly in safety; as I say, we call it other things in different realms. But in the safety world, it's a physical situation or a state of a system. And as it says, it often follows from some initiating event, which we may call a 'cause'. The hazard may lead to an accident. The key thing to remember is that once a hazard exists, an accident is possible, but it's not certain. You can imagine the classic cartoon gag of the banana skin on the pavement. The banana skin is the hazard. In the cartoon, the character always steps on the banana skin and falls over, for comic effect. But in the real world, the accident may never happen. There could be nobody there to slip on the banana skin. Or even if somebody does step on it, they could catch themselves. Or they fall, but it's on a soft surface and they don't hurt themselves, so there's no harm.
So, the accident isn't certain. In fact, we can have what we call 'non-accident' outcomes: harmless consequences. A hazard is an important midway step. I've heard it called 'an accident waiting to happen', which is a helpful definition – an accident waiting to happen, but not an inevitable one.
Accident
But the accident can happen. Again, we have the 'accident', 'mishap', or 'unintended event': something we did not want, or a sequence of events that causes harm. In this case, we're talking about harm to people. And as I say, it might instead be a security breach, a financial loss, or reputational damage – something very embarrassing for an organisation or an individual might happen. Or again, we could have a hiccup with our project.
Harm
But in this case, we're talking about harm. In this kind of standard, we're using what you might call a body-count approach to harm: actual death, physical injury, or damage to the health of people. This standard also considers damage to property and the environment. Now, very often we are legally required to protect people and the environment from harm; property, less so. There will be financial implications of loss of or damage to property, and we don't want that, but it's not always criminally illegal. Whereas hurting people and damaging the environment usually is. So, this is 'harm'. We do not want this to happen. We do not want this impact. Safety is a much tougher business in this respect: if we have a problem with our project, it's embarrassing, but we can recover from it. It's much more difficult to do that when we've hurt somebody.
Risk
And throughout all this, we're talking about 'risk'. What is risk? Risk is a combination of two things: the likelihood of harm or loss, and the severity of that harm or loss. It's those two things together. And we've got a very simple illustration here, a little table, often known as a risk matrix – but don't worry about the name too much. We've got a little two-by-two table with likelihood in the white text and severity in the black. We can imagine a risk where there's a low likelihood of a low-harm or low-impact accident or outcome. We say, 'That's unlikely to happen, and even if it does, not much is going to happen; it's going to be a very small impact.' So, we'd say that's a low risk.
Then at the other end of the spectrum, we can imagine something that has a high likelihood of happening and a high impact if it does. Those are the things that we definitely do not want to happen.
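The two-by-two table described above can be written out as a small lookup structure, which makes the 'combination of likelihood and severity' idea explicit. This is an illustrative sketch: the class labels, including the 'medium risk' cells the transcript doesn't name, are assumptions for the example.

```python
# Illustrative 2x2 risk matrix mirroring the table described above.
# The class labels (especially "medium risk") are assumed for illustration.

RISK_MATRIX = {
    ("low", "low"):   "low risk",     # unlikely, and small impact if it happens
    ("low", "high"):  "medium risk",  # unlikely, but serious if it happens
    ("high", "low"):  "medium risk",  # likely, but only a small impact
    ("high", "high"): "high risk",    # likely and serious: what we most want to avoid
}

def risk_class(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity by reading the class off the matrix."""
    return RISK_MATRIX[(likelihood, severity)]

print(risk_class("low", "low"))    # low risk
print(risk_class("high", "high"))  # high risk
```

Real risk matrices are usually larger (five-by-five is common) and their cell values come from the applicable standard, but the principle is the same: risk is read off as a function of the two inputs, never from likelihood or severity alone.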
#howtoriskmanagement #howtoriskmanagementanalysis #isriskmanagement #learnriskmanagement #learnriskmanagementanalysis #riskmanage #riskmanagedframework #riskmanagement #riskmanagementanalysistechnique #riskmanagementanalysistraining #riskmanagementanalysistutorial #riskmanagementdefinition #riskmanagementframework #riskmanagementplan #riskmanagementprocess #riskmanagementtechnique #riskmanagementtraining #riskmanagementtutorial #riskmanagementvideo #riskmanager #whatriskmanagement
Simon Di Nucci https://www.safetyartisan.com/2021/05/14/risk-management-101/