[C-safe-secure-studygroup] A couple of thoughts after tonight's meeting

Clive Pygott clivepygott at gmail.com
Wed Jun 27 19:58:22 BST 2018


One of the takeaways from tonight was that Martin is going to revisit the
topic of 'profiles' before the next meeting. I think one aspect of safety
vs. security has been overlooked, so I'm taking this opportunity to put it
on record (whilst I remember).

As I see it, the discussion has tended to characterise safety as wanting
tightly controlled rules that (as far as possible) eliminate the
possibility of undesirable behaviour, even if this means forbidding some
code with well-defined semantics. Security, by contrast, wants to
eliminate significant undesirable behaviour whilst being more tolerant of
code that is merely 'suspect'. In other words, it is a choice between false
positives and false negatives for situations that are not clear-cut 'good'
or 'bad'.

I agree that this is a real and useful distinction, but I'd suggest it is
not the only distinction between safety and security standards:

   - a standard like MISRA actually says nothing about system safety; it
   can't. Software can't hurt anyone until you put it in a system and allow it
   to interact with the outside world, and this relationship between the code
   and the outside world is essentially unique to each system. A MISRA-like
   standard is 'foundational': it asks '*have you achieved a sound base on
   which to build your system?*'. It serves the same purpose as a building
   inspector who comes to check that a building is being given adequate
   foundations, and whose approval says nothing about whether the building
   will be suitable for its intended purpose. Ensuring a system is safe is a
   much wider issue that involves the specification of the requirements, the
   processes of construction, and the verification of the results. This gets
   incorporated into system-wide standards, like IEC 61508; indeed, MISRA is
   designed to satisfy one of 61508's requirements for code development
   - by contrast, security standards, as well as including these
   foundational requirements, also incorporate elements of design
   requirements to mitigate known security issues. For example, in
   TS 17961 all the rules regarding tainted data assume a (reasonable)
   model of the system - that there is trusted data inside the control of the
   program and untrusted external data - and enforce the requirement that
   untrusted data should be sanitised before use.

Essentially the difference is that safety standards tend only to prohibit
the use of certain constructs, whereas a security standard can also require
the code to use a particular algorithm.

A separate thought with respect to profiles - what level of assurance are
we targeting? We keep talking about safety and security as if they each
have a single set of requirements, but for safety (at least in Europe)
IEC 61508 is the starting point for safety management, and it defines
four Safety Integrity Levels, from SIL1 (could possibly cause a minor
injury) to SIL4 (potential for causing multiple deaths). In the security
domain, something like ISO/IEC 15408 (Common Criteria) defines seven
Evaluation Assurance Levels, from EAL1 (mildly embarrassing) to EAL7
(national disaster). Where do we see our rules fitting in, and do we need
different profiles for different levels of concern?
