Monday, August 4, 2025



Software Safety Assurance

Software Safety Assurance is the fourth in a new series of six blog posts on Principles of Software Safety Assurance. In them, we look at the 4+1 principles that underlie all software safety standards. (The previous post in the series is here.)



Read on for These Benefits...



This post deals with some crucial software assurance questions: what is it, and what does it mean in practice? I also explain some key topics further, drawing on my wide experience in the industry since 1994.



There are some important case studies here. They add depth and diversity to those already presented in previous posts. This post also addresses the crucial issue of diverse assurance techniques, as no single approach is likely to be adequate for safety-significant software.



Content



We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.



The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.



Software Assurance = Justified Confidence



Principle 4+1:



The confidence established in addressing the software safety principles shall be commensurate to the contribution of the software to system risk.
‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.



All safety-related software systems must adhere to the four aforementioned principles. To prove that each of the guiding principles has been established for the software, evidence must be presented.



The evidence may take many different forms, depending on the characteristics of the software system itself, the hazards that are present, and the principle being demonstrated. The strength and quantity of the supporting evidence determine how confidently, or with what assurance, each principle is established.



Therefore, it's crucial to confirm that the level of confidence achieved is always acceptable. This is frequently accomplished by making sure that the confidence attained corresponds to the contribution the software makes to system risk. This strategy ensures that, when producing evidence, the most attention goes to the areas that reduce safety risk the most.



This method is extensively used today. Many standards employ concepts like Safety Integrity Levels (SILs) or Development Assurance Levels (DALs) to describe the amount of confidence needed in a certain software function.
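To make the idea concrete, here is a minimal, purely illustrative sketch of how an assurance level might be allocated from a software function's contribution to system risk, and how that level might drive the evidence required. The severity categories, level names, and activity lists are assumptions for illustration only; real standards such as IEC 61508 (SILs) and DO-178C (DALs) define their own allocation rules.

```python
# Illustrative only: a toy allocation of assurance levels from risk contribution.
# The categories and required activities below are assumptions, not taken from
# any particular standard.

# Worst credible effect of the software function failing, assessed at system level.
SEVERITY_TO_LEVEL = {
    "catastrophic": "Level A",
    "hazardous":    "Level B",
    "major":        "Level C",
    "minor":        "Level D",
    "no effect":    "Level E",
}

# More demanding levels call for more (and more rigorous) evidence.
LEVEL_TO_EVIDENCE = {
    "Level A": ["requirements-based testing with MC/DC coverage",
                "independent reviews", "formal analysis of critical functions"],
    "Level B": ["requirements-based testing with decision coverage",
                "independent reviews"],
    "Level C": ["requirements-based testing with statement coverage", "peer reviews"],
    "Level D": ["requirements-based testing"],
    "Level E": [],
}

def required_evidence(worst_credible_effect: str) -> tuple[str, list[str]]:
    """Return the (assumed) assurance level and evidence activities for a function."""
    level = SEVERITY_TO_LEVEL[worst_credible_effect]
    return level, LEVEL_TO_EVIDENCE[level]

if __name__ == "__main__":
    level, activities = required_evidence("hazardous")
    print(level, activities)
```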



Examples



The flight control system for the Boeing 777 airplane is a Fly-By-Wire (FBW) system ... The Primary Flight Computer (PFC) is the central computation element of the FBW system. The triple modular redundancy (TMR) concept also applies to the PFC architectural design. Further, the N-version dissimilarity issue is integrated into the TMR concept.
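The TMR concept itself is easy to illustrate. Below is a minimal, hypothetical two-out-of-three voter, not the 777's actual implementation: three independently computed channel outputs are compared, and a value is only passed on if at least two channels agree within a tolerance.

```python
# Minimal sketch of a two-out-of-three (TMR) voter. Purely illustrative;
# real flight-control voters also manage channel health, timing and dissimilarity.

TOLERANCE = 0.01  # assumed agreement threshold between channels

def tmr_vote(a: float, b: float, c: float) -> float:
    """Return a value backed by at least two agreeing channels, else raise."""
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= TOLERANCE:
            # Two channels agree: use their average and ignore the outlier.
            return (x + y) / 2.0
    # No two channels agree: the voter cannot mask the fault.
    raise RuntimeError("TMR voter: no majority agreement between channels")

if __name__ == "__main__":
    print(tmr_vote(1.000, 1.002, 5.0))  # third channel faulty, masked by the majority
```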



Details are given of a 'special case procedure' within the principles' framework which has been developed specifically to handle the particular problem of the assessment of software-based protection systems. The application of this 'procedure' to the Sizewell B Nuclear Power Station computer-based primary protection system is explained.



Suitability of Evidence



Once the required level of confidence has been determined, it is crucial to be able to judge whether it has been achieved. Several factors must be taken into account when assessing the degree of confidence with which each principle has been put into practice.



The suitability of the evidence should be taken into consideration first. The constraints of the type of evidence being used must be considered too. These restrictions will have an impact on the degree of confidence that can be placed in each sort of evidence with regard to a certain principle.



Examples of these restrictions include the degree of test coverage that can be achieved, the precision of the models employed in formal analysis approaches, or the subjectivity of review and inspection. Most techniques have limits on what they can achieve.
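As a tiny, assumed example of such a limit: the test below achieves 100% statement coverage of the function under test, yet never reveals the division-by-zero fault, because coverage measures which code ran, not which input combinations were tried.

```python
# Illustrative only: full statement coverage does not imply the absence of faults.

def scale_reading(reading: float, calibration: float) -> float:
    # Fault: no guard against calibration == 0.0
    return reading / calibration

def test_scale_reading():
    # This single test executes every statement in scale_reading (100% statement
    # coverage) but never reveals the divide-by-zero behaviour.
    assert scale_reading(10.0, 2.0) == 5.0

if __name__ == "__main__":
    test_scale_reading()
    print("All statements covered; fault still present for calibration == 0.0")
```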



Due to these limitations, it may be necessary to combine diverse types of evidence to reach the required degree of confidence in any one of the principles. The reliability of each piece of evidence must also be taken into account; that is, the degree of confidence that the item of evidence will perform as expected.



This is also frequently referred to as evidence rigor or evidence integrity. The rigor of the technique employed to produce the evidence item determines its reliability. The primary variables that affect trustworthiness are Tools, Personnel, Methodology, Level of Audit and Review, and Independence.



The four software safety principles will never change. However, the confidence with which those principles are established can vary widely. We now know that, for any given system, a judgment must be made about the degree of assurance required to establish the principles. We now have our guiding principle.



Since it affects how the previous four principles are put into practice, this concept is also known as Principle 4+1.



Software Safety Assurance: End of Part 4 (of 6)



This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.



Meet the Author



My name’s Simon Di Nucci. I’m a practicing system safety engineer, and have been for the last 25 years. I’ve worked in all kinds of domains: aircraft, ships, submarines, sensors, and command and control systems, with some work on rail and air traffic management systems, and lots of software safety. So, I’ve done a lot of different things!



Principles of Software Safety Training



Learn more about this subject in my course 'Principles of Safe Software' here. The next post in the series is here.



My course on Udemy, 'Principles of Software Safety Standards' is a cut-down version of the full Principles Course. Nevertheless, it still scores 4.42 out of 5.00 and attracts comments like:



- "It gives me an idea of standards as to how they are developed and the downward pyramid model of it." 4* Niveditha V.



- "This was really good course for starting the software safety standareds, comparing and reviewing strengths and weakness of them. Loved the how he try to fit each standared with4+1 principles. Highly recommend to anyone that want get into software safety." 4.5* Amila R.



- "The information provides a good overview. Perfect for someone like me who has worked with the standards but did not necessarily understand how the framework works." 5* Mahesh Koonath V.



- "Really good overview of key software standards and their strengths and weaknesses against the 4+1 Safety Principles." 4.5* Ann H.

#bestsafetyassurance #howmuchdoessoftwareassurancecost #howmuchissoftwareassurance #justifiedconfidence #safetyassurancecourse #safetyassuranceinsoftwareengineering #safetyassurancetraining #safetyrelatedsoftware #safetysignificantsoftware #softwareassurance #softwareassurancebestpractices #softwareassurancecertification #softwareassurancelevel #softwareassuranceprocess #softwareassurancestandards #softwareassurancetraining #softwaresafetyassurance #softwaresafetyexamples #softwaresafetyrequirements #softwaresecurityassuranceprocessstartsfromwhichphase #softwaresystemsafety #suitabilityofevidence #whatissoftwareasurance

Simon Di Nucci https://www.safetyartisan.com/2022/11/09/software-safety-assurance/

Saturday, August 2, 2025



Work Health and Safety

Australian Work Health & Safety law, or WHS, addresses both safe design and workplace (occupational) safety.  It imposes duties upon designers, manufacturers, importers, and suppliers of plant, structures, and substances.



The four-lesson bundle, including Safe Design, is available here at a discount!



WHS Law in Practice



WHS legislation is powerful and elegant, and it yields a lot of useful content, whether you are in an Australian jurisdiction or not. It is based on the UK’s approach to health and safety at work, but it has incorporated lessons learned from four decades of experience there.



In 2011, Safe Work Australia developed the model work health and safety (WHS) laws to be implemented across Australia. To become legally binding, the Commonwealth, states and territories must separately implement them as their own laws. Safe Work Australia is responsible for maintaining the model WHS laws, but we don’t regulate or enforce them.
Safe Work Australia



However, Australia’s federal system complicates the application of our laws. The Safety Artisan will attempt to cut through this complexity and explain the core concepts needed for practical success.



WHS Codes of Practice



Safe Work Australia notes that:



Model Codes of Practice are practical guides to achieving the standards of health and safety required under the model WHS Act and Regulations.
Safe Work Australia



They also go on to say:



An approved code of practice applies to anyone who has a duty of care in the circumstances described in the code. In most cases, following an approved code of practice would achieve compliance with the health and safety duties in a jurisdiction’s WHS Act and Regulations.



Like regulations, codes of practice deal with particular issues and do not cover all hazards or risks that may arise. Health and safety duties require you to consider all risks associated with work, not only those risks that regulations and codes of practice exist for.



While approved codes of practice are not law, they are admissible in court proceedings. Courts may regard an approved code of practice as evidence of what is known about a hazard, risk or control and may rely on the relevant code to determine what is reasonably practicable in the circumstances.



We ignore these words at our peril!



Head back to the Topics Page for more safety training.



Simon Di Nucci https://www.safetyartisan.com/work-health-and-safety/

Wednesday, July 30, 2025



Home

The Safety Artisan gives you:



1. The flexibility that enables you to work and study
2. Easy access to recorded classes to watch later
3. Dynamic delivery based on practical experience



Learn safety engineering with me: a current industry professional with 25 years of experience.



Blog | Courses | Email



The Safety Artisan: Latest Articles



Free Lessons



How to Prepare for the CISSP Exam



System Safety Concepts & Principles



Risk Management 101



Safety Analysis Lessons



System Hazard Analysis (Mil-Std-882E) Course



System Requirements Hazard Analysis



Preliminary Hazard Identification



Software/Safety Lessons



Principles of Safe Software Course



Identify & Analyze Functional Hazards Course



System Safety Engineering Process



Testimonials



The way you teach this subject makes it comprehensible and part of an integral whole. It seems like your approach is rare (and valuable) in the world of System Safety.
Thomas Anthony, Director, Aviation Safety and Security Program, Viterbi School of Engineering, University of Southern California



Understanding safety law can be difficult and, at times, confronting. Thankfully, Simon has a knack of bringing clarity to complex legal requirements, using real work examples to help understanding. I highly recommend Simon to any director or manager wanting to understand their legal obligations and ensure a safe workplace.
Jonathan Carroll, Senior Leadership, Pacific National



Valuable information, Clear explanations, Engaging delivery, Helpful practice activities, Accurate course description, Knowledgeable instructor.
Manuel Louie B. Santos, reviewing “Risk Management 101”



Explanation about the military standard was very interesting, because for the first time somebody talked about possible disadvantages.
Henri Van Buren, reviewing “System Safety Risk Analysis Programs”



4,500+ Students on Udemy



200+ Reviews, scoring:



- Principles of Software Safety Standards (4.48 out of 5.00);



- How to Design a System Safety Program (4.08 out of 5.00);



- How to Prepare for the CISSP Exam (4.61 out of 5.00); and



- Risk Management 101 (4.39 out of 5.00).



Why Safety Engineering Training?



The world needs safety engineers - a lot of them. Everything we use needs to be designed, manufactured, supplied, transported, and so on, and we need to do that without causing harm.



So, there’s a lot of need for safety engineering training. Want a (well-paid) career as a safety engineer? Need to do a safety-engineering-related task or project?  Do you need to understand what your team is doing? Maybe you need to ask - or answer - the right questions in an interview.



There’s a lot of need for safety engineering skills, but they are difficult to get because training places are quite limited. Qualifications are expensive and take a long time to acquire.



I hope that by putting these lessons online, you’ll find them helpful. Who am I? Learn more.



It’s about Countering Fear – and Increasing Confidence



I decided to launch this site because I think there is a lot of fear around safety. People worry about getting it wrong, and that can sometimes result in poor behaviors or poor performance. They shy away from doing anything about safety rather than just doing what they can.



This is a shame because safety is often just structured common sense.



It’s an engineering discipline like any other. Except that we need to involve people other than engineers. We need to involve operators, maintainers, and regulators. We need to involve end-users. So it’s quite a social activity as well, which I’m afraid can be a challenge for some of us engineers!  (I’m as guilty of that as anybody else.) Nevertheless, there’s a lot we can do, and it isn’t as difficult as we think it is.



About the Author



Learn more.



Simon Di Nucci https://www.safetyartisan.com/

Monday, July 28, 2025



Software Safety Principle 4

Software Safety Principle 4 is the third in a new series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. (The previous post in the series is here.)



We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold across projects and domains.



The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.



Principle 4: Hazardous Software Behaviour



The fourth software safety principle is:



Principle 4: Hazardous behaviour of the software shall be identified and mitigated.
‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.



Software safety requirements imposed on a software design can capture the high-level safety requirements' intent. However, this does not ensure that all of the software's potentially dangerous behaviors have been considered. Because of how the software has been created and built, there will frequently be unanticipated behaviors that cannot be understood through a straightforward requirements decomposition. These risky software behaviors could be caused by one of the following:



- Unintended interactions and behaviors brought on by software design choices; or



- Systematic mistakes made when developing software.



On 1 August 2005, a Boeing Company 777-200 aircraft, registered 9M-MRG, was being operated on a scheduled international passenger service from Perth to Kuala Lumpur, Malaysia. The crew experienced several frightening and contradictory cockpit indications.



This incident illustrates the issues that can result from unintended consequences of software design. Such incidents can only be foreseen through a methodical and detailed analysis of potential software failure mechanisms and their repercussions (both on the program and on external systems). Putting safeguards in place to address potentially harmful software behavior is possible only once that behavior has been identified. However, doing so requires us to examine the potential impact of software design decisions.
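One common form of safeguard, once a hazardous behaviour has been identified, is a defensive plausibility check on the data a function consumes. The sketch below is a generic, assumed example (it is not the logic of the aircraft involved): a reading is rejected if it is out of physical range or jumps implausibly fast, so a single anomalous input cannot drive a hazardous output.

```python
# Generic, assumed example of a defensive plausibility check on sensor data.
# Not taken from any real avionics implementation; limits are invented.

MAX_ACCEL_G = 4.0          # assumed physical limit for a valid reading
MAX_RATE_G_PER_S = 2.0     # assumed maximum credible rate of change

def accept_reading(new_g: float, last_good_g: float, dt_s: float) -> bool:
    """Accept a new accelerometer reading only if it is physically plausible."""
    if abs(new_g) > MAX_ACCEL_G:
        return False                      # out of physical range
    if dt_s > 0 and abs(new_g - last_good_g) / dt_s > MAX_RATE_G_PER_S:
        return False                      # implausible jump since the last good value
    return True

if __name__ == "__main__":
    print(accept_reading(0.9, 1.0, 0.02))   # plausible -> True
    print(accept_reading(9.8, 1.0, 0.02))   # out of range -> False
```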



Not all dangerous software behavior arises from unintended consequences of the software design. Dangerous behavior may also be seen as a direct result of errors made during the software design and implementation phases. Seemingly minor development mistakes can have serious repercussions.



It's important to stress that this is not a problem with software quality in general. For software safety assurance, we focus exclusively on faults that could result in dangerous behavior. As a result, efforts can be concentrated on reducing systematic errors in areas where they might have an impact on safety.



Since systematically establishing direct hazard causality for every error may not be possible in practice, it may be preferable for a while to accept what is regarded as best practice. However, the justification for doing so ought to at the very least be founded on knowledge from the software safety community on how the particular problem under consideration has led to safety-related accidents. 



To guarantee that adequate rigor is applied to their development, it is also crucial to identify the most critical components of the software design. Any software behavior that may be hazardous must be recognized and prevented if we are to be confident that the software will always behave safely.



Software Safety Principle 4: End of Part 3 (of 6)



This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.



Meet the Author



My name’s Simon Di Nucci. I’m a practicing system safety engineer, and have been for the last 25 years. I’ve worked in all kinds of domains: aircraft, ships, submarines, sensors, and command and control systems, with some work on rail and air traffic management systems, and lots of software safety. So, I’ve done a lot of different things!



Learn more about this subject in my course 'Principles of Safe Software' here. The next post in the series is here.

#issoftwaresafe #softwareengineeringriskanalysis #softwareengineeringriskmanagement #softwarehazardanalysisandresolutionindesign #softwarehazards #softwareimplementationrisks #softwareoperationalrisk #softwarequalityrisks #softwarerequirementsrisks #softwareriskcategories #softwareriskcategorization #softwareriskcharacteristics #softwareriskclassification #softwareriskcomponents #softwareriskdefinition #softwareriskexposure #softwareriskfactors #softwareriskidentification #softwareriskissues #softwareriskmanagementprocess #softwareriskmanagementprocessincludes #softwareriskmitigationrecommendations #softwarerisktypes #softwaresafetyexamples #softwaresafetyhazardanalysis #whatarecomputerhazards #whatarethehazardsofcomputer

Simon Di Nucci https://www.safetyartisan.com/2022/10/05/software-safety-principle-4/

Sunday, July 27, 2025



Blog Articles

Safety Engineering and Risk Management Blog Articles - The Safety Artisan



Start here with the Blog! The posts featured on this page introduce safety basics, such as definitions and fundamental safety concepts. They also discuss related topics.



You can also start here if you know how to do safety in one industry and want to understand how it's done in another. Similarly, you might be familiar with safety practices in one country but want to know how things are done elsewhere.



Latest Articles



Blog Articles: Selected Highlights



System Safety Concepts



The Safety Artisan equips you with System Safety Concepts. We look at the basic concepts of safety, risk and hazard in order to understand how to assess and manage them. Exploring these fundamental topics provides the foundations for all other safety topics, but it doesn't have to be complex. The basics are simple, but they need to be thoroughly understood and practised consistently to achieve success. This video explains the issues and discusses how to achieve that success.
From the Lesson Description



What does 'Safe' really mean? Find out Here.



System Safety Principles



... I discuss the Principles of System Safety, as set out by the US Federal Aviation Administration in their System Safety Handbook. Although this was published in 2000, the principles still hold good (mostly) and are worth discussing. I comment on those topics where modern practice has moved on, and those jurisdictions where the US approach does not sit well.
From the Lesson Description



Human Factors



In this 40-minute video, I'm joined by a friend, colleague and Human Factors specialist, Peter Benda. Peter has 23 years of experience in applying Human Factors to large projects in all kinds of domains. In this session we look at some fundamentals: what does Human Factors engineering aim to achieve? Why do it? And what sort of tools and techniques are useful? As this is The Safety Artisan, we also discuss some real-world examples of how Human Factors can contribute to accidents or help to prevent them.
From the Lesson Description



Catch the discussion Here.



Functional Safety



Functional safety is the part of the overall safety of a system or piece of equipment that depends on automatic protection operating correctly in response to its inputs or failure in a predictable manner (fail-safe). The automatic protection system should be designed to properly handle likely human errors, systematic errors, hardware failures and operational/environmental stress.
Wikipedia



For a brief introduction to Functional Safety Click Here.



Head back to the Topics Page for more safety training.



Simon Di Nucci https://www.safetyartisan.com/start-here-with-the-blog/

Thursday, July 24, 2025



System Safety Assessment

In this System Safety Assessment course, I will take you through a suite of safety analysis tasks. They are designed to deal with a complex system, but can be simplified (known as 'tailoring'). I start with Preliminary Hazard Identification and work through detailed analyses, each with a different point of view of the system.



Each lesson can be purchased individually, but there are discounts for the whole course here.



System Safety



The system safety concept calls for a risk management strategy based on identification, analysis of hazards and application of remedial controls using a systems-based approach.
Harold E. Roland; Brian Moriarty (1990). System Safety Engineering and Management.



System Safety Engineering



Every approach to safety has a context that needs to be understood to get the best results. I have used the Tasks from a system safety engineering standard called Military-Standard-882E, or Mil-Std-882E, for short. This has been around for a long time and is very widely used. It was developed for use on US military systems, but it has found its way, sometimes in disguise, into many other programs around the world.



However, any safety analysis standard can be applied blindly – it is not a substitute for competent decision-making. So, I explain the limitations with each Task and how to overcome them.



Safety Assessment



A safety assessment is a comprehensive and systematic investigation and analysis of all aspects of risks to health and safety associated with major incidents that may potentially occur in the course of operation of the major hazard facility...
Guide for Major Hazard Facilities: Safety Assessment, Safe Work Australia, 2012






Head back to the Topics Page for more safety training.



Simon Di Nucci https://www.safetyartisan.com/safety-analysis/

Monday, July 21, 2025



Software Safety Principles 2 and 3

Software Safety Principles 2 and 3 is the second in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. (The previous blog post is here.)



We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.



The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.



Principle 2: Requirement Decomposition



The second software safety principle is:



Principle 2: The intent of the software safety requirements shall be maintained throughout requirements decomposition.
‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.



As the software development lifecycle progresses, the requirements and design are gradually broken down, leading to a more detailed software design. The requirements derived for that more detailed design are referred to as "derived software requirements". Once the software safety requirements have been established as complete and correct at the highest (most abstract) level of design, their intent must be maintained as they are decomposed.



An example of the failure of requirements decomposition is the crash of Lufthansa Flight 2904 at Warsaw on 14 September 1993.



In essence, the issue is one of ongoing requirements validation. How do we show that the requirements expressed at one level of design abstraction are equivalent to those defined at a more abstract level? This difficulty arises repeatedly throughout the software development process.



It is insufficient to only consider requirements fulfillment. The software safety requirements had been met in the Flight 2904 example. However, they did not match the intent of the high-level safety requirements in the real world.
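A hedged sketch of the kind of mismatch involved, with assumed names and thresholds rather than the actual aircraft's braking logic: a high-level requirement ("enable ground deceleration only when the aircraft is on the ground") is decomposed into concrete conditions on landing-gear load and wheel spin-up. Each derived condition can be verified as implemented, yet in a crosswind landing with one gear lightly loaded the decomposed logic fails to recognise that the aircraft is, in intent, "on the ground".

```python
# Assumed, simplified illustration of intent loss during requirements decomposition.
# Names and thresholds are invented for the example; this is not real braking logic.

WHEEL_SPEED_THRESHOLD_KT = 72.0   # assumed spin-up threshold
STRUT_LOAD_THRESHOLD_KG = 6300.0  # assumed weight-on-wheels threshold per strut

def on_ground_decomposed(left_strut_kg: float, right_strut_kg: float,
                         wheel_speed_kt: float) -> bool:
    """Derived requirement: 'on ground' = both struts loaded AND wheels spun up."""
    both_struts_loaded = (left_strut_kg > STRUT_LOAD_THRESHOLD_KG and
                          right_strut_kg > STRUT_LOAD_THRESHOLD_KG)
    wheels_spun_up = wheel_speed_kt > WHEEL_SPEED_THRESHOLD_KT
    return both_struts_loaded and wheels_spun_up

if __name__ == "__main__":
    # Crosswind landing: one main gear lightly loaded, wheels aquaplaning.
    # The aircraft is on the runway, but the decomposed conditions say otherwise,
    # so deceleration devices would be inhibited.
    print(on_ground_decomposed(left_strut_kg=8000.0,
                               right_strut_kg=2000.0,
                               wheel_speed_kt=40.0))   # -> False
```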



Human factors difficulties (a warning may be presented to a pilot as necessary, but that warning may not be noticed on the busy cockpit displays) are another consideration that may make the applicability of the decomposition more challenging.



Ensuring that all necessary detail is included in the initial high-level requirement is one possible theoretical solution to this issue. However, it would be difficult to accomplish this in real life. It is inevitable that design choices requiring more specific criteria will be made later in the software development lifecycle. This detail cannot be known accurately until those design choices have been made.



The decomposition of the safety requirements must always be addressed if the software is to be regarded as safe to use.



Principle 3: Requirements Satisfaction



The third software safety assurance principle is:



Principle 3: Software safety requirements shall be satisfied.
‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.



Once a set of "valid" software safety requirements has been defined, it must be confirmed that they have been met. This set may be assigned software safety requirements (Principle 1), or refined or derived software safety requirements (Principle 2). A crucial prerequisite for their satisfaction is that these requirements are precise, well-defined, and actually verifiable.



The verification techniques used to show that the software safety requirements have been met will vary with the degree of safety criticality, the stage of development, and the technology being employed. It is therefore neither practical nor wise to prescribe particular verification techniques for producing verification evidence.



Mars Polar Lander was an ambitious mission to set a spacecraft down near the edge of Mars' south polar cap and dig for water ice. The mission was lost on arrival on December 3, 1999.



Given the complexity and safety-critical nature of many software-based systems, it is clear that using just one type of software verification is insufficient. As a result, a combination of verification techniques is frequently required to produce the verification evidence. Testing and expert review are frequently used to produce primary or secondary verification evidence. However, formal verification is increasingly emphasized because it may more reliably demonstrate that the software safety requirements are satisfied.
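As a small, assumed illustration of one of those techniques, the sketch below verifies a hypothetical safety requirement ("commanded brake pressure shall be zero unless ground contact is confirmed") by requirements-based testing over a sweep of inputs. The function and requirement are invented for the example; in practice such testing would sit alongside review and, for the most critical properties, formal proof.

```python
# Assumed example: requirements-based testing of a hypothetical safety requirement.
# "Commanded brake pressure shall be zero unless ground contact is confirmed."

def commanded_brake_pressure(pilot_demand: float, ground_contact: bool) -> float:
    """Simple controller under test (illustrative only)."""
    if not ground_contact:
        return 0.0
    return max(0.0, min(pilot_demand, 100.0))   # clamp to 0..100%

def verify_requirement():
    # Sweep a range of pilot demands with and without ground contact confirmed.
    for demand in [d / 10.0 for d in range(-50, 1551)]:   # -5.0 .. 155.0 %
        assert commanded_brake_pressure(demand, ground_contact=False) == 0.0, \
            "Requirement violated: pressure commanded without ground contact"
        assert 0.0 <= commanded_brake_pressure(demand, ground_contact=True) <= 100.0

if __name__ == "__main__":
    verify_requirement()
    print("Sampled verification of the requirement passed")
```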



The main obstacle to proving that the software safety requirements have been met is the inherent limitations of the evidence produced by the methods described above. The characteristics of the problem space are the root of these difficulties.



Given the complexity of software systems, especially those used to achieve autonomous capabilities, there are challenges with completeness for both testing and analysis methodologies. The underlying logic of the software can be verified using formal methods, but there are still significant drawbacks. Namely, it is difficult to provide assurance of model validity. Also, formal methods do not deal with the crucial problem of hardware integration.



Clearly, the capacity to meet the stated software safety requirements is a prerequisite for ensuring the safety of software systems.



Software Safety Principles 2 & 3: End of Part 2 (of 6)



This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.



Meet the Author



My name’s Simon Di Nucci. I’m a practicing system safety engineer, and have been for the last 25 years. I’ve worked in all kinds of domains: aircraft, ships, submarines, sensors, and command and control systems, with some work on rail and air traffic management systems, and lots of software safety. So, I’ve done a lot of different things!



Learn more about this subject in my course 'Principles of Safe Software' here. The next post in the series is here.

#decomposedrequirements #decompositionofrequirements #howtowritesoftwaresafetyrequirements #requirementdecomposition #requirementssatisfaction #satisfiestherequirements #satisfyrequirement #softwaresafetycertification #softwaresafetyplan #softwaresafetyrequirement #softwaresafetyrequirements #softwaresafetyrequirementsexample #softwaresafetyrequirementsspecification #softwaresafetystandard #softwaresafetystandards #softwaresafetytesting #softwaresystemsafety

Simon Di Nucci https://www.safetyartisan.com/2022/09/28/software-safety-principles-2-and-3/

Safe Design in Australia: Overview, Statistics, and Principles This post provides an overview of Safe Design in Australia: Overview, Statis...