[C-safe-secure-studygroup] A couple of thoughts after tonight's meeting

Martin Sebor msebor at gmail.com
Thu Jun 28 18:08:59 BST 2018

Thanks for writing this up!  It reflects some of my own thoughts
that I've been struggling to articulate.

I would very much prefer to focus on creating a "foundational"
framework that's domain-agnostic and leave it up to users to
decide how to apply it in a way that best fits the specifics of
their domain.

That said, I would not call MISRA foundational by any means.
It imposes exceedingly severe restrictions in hopes (but with
no evidence) of avoiding problems in a roundabout way and
under very limited circumstances, without even attempting
to address the underlying problems.  Rules like 17.2 or 21.3
are like keeping surgeons from using sharp instruments because
they can injure people when used improperly.  There's nothing
wrong with recursion -- it's unbounded recursion that's
a problem.  Likewise, there's nothing wrong with dynamic
memory allocation -- it's allocating too much memory (or
any other finite resource) that can be a problem, and failing
to deal properly with memory (or other resource) exhaustion.

I don't think it's our call to make to tell programmers what
parts of the language to use.  Rather, we should give them
guidance on how to use the language safely and securely, and
help them find misuses by outlining precise rules for analyzers
to detect when the language is used unsafely (or what bugs to
detect).  If some of our users decide that some aspects of
the language are too hard to use correctly despite such
guidance, then it's up to them to choose alternatives.  To be
sure, if
a language feature is so poorly designed that it simply
cannot be used safely it would be appropriate to limit or
even ban its use altogether.  But I don't think too many
of them rise to that level if we accept C's inherent
limitations (like those of arrays, pointers, etc.).


On 06/27/2018 12:58 PM, Clive Pygott wrote:
> Hi
> One of the takeaways from tonight was that Martin is going to revisit
> the topic of 'profiles' before the next meeting. I think one aspect of
> safety vs. security has been overlooked, so I'm taking this opportunity
> to put it on record (whilst I remember).
> As I see it, the discussion has tended to characterise safety as wanting
> tightly controlled rules that (as far as possible) eliminate the
> possibility of undesirable behaviour - even if this means forbidding
> some code with well-defined semantics, as opposed to security that wants
> to eliminate significant undesirable behaviour whilst being more
> tolerant of code that is merely 'suspect' - i.e. the choice between
> false positives and false negatives for situations that are not
> clear-cut 'good' or 'bad'.
> I agree that this is a real and useful distinction, but I'd suggest it is
> not the only distinction between safety and security standards:
>   * a standard like MISRA actually says nothing about system safety, it
>     can't. Software can't hurt anyone until you put it in a system and
>     allow it to interact with the outside world, and this relationship
>     between the code and the outside world is essentially unique to each
>     system. A MISRA like standard is 'foundational', it asks '/have you
>     achieved a sound base on which to build your system?/'. It serves
>     the same purpose as a building inspector who comes to check that a
>     building is being given adequate foundations, and whose approval says
>     nothing about whether the building will be suitable for its intended
>     purpose. Ensuring a system is safe is a much wider issue, that
>     involves the specification of the requirements, the processes of
>     construction, and the verification of the results. This gets
>     incorporated into system wide standards, like IEC 61508, indeed
>     MISRA is designed to satisfy one of 61508's requirements for code
>     development.
>   * by contrast, security standards, in addition to including these
>     foundational requirements, also incorporate elements of design
>     requirements to mitigate known security issues. For example, in
>     TS17961 all the rules regarding tainted data assume a (reasonable)
>     model of the system - that there is trusted data inside the control
>     of the program and untrusted external data - and they enforce the
>     requirement that untrusted data should be sanitised before use.
> Essentially the difference is that safety standards tend to only
> prohibit the use of certain constructs, where a security standard can
> also require the code to use a particular algorithm.
> A separate thought with respect to profiles - what level of assurance
> are we targeting?  We keep talking about safety and security as if they
> each have a single set of requirements, but for safety (at least in
> Europe) IEC 61508 is the starting point for safety management, and
> it defines four Safety Integrity Levels, from SIL1 (could possibly cause
> a minor injury) to SIL4 (potential for causing multiple deaths). In the
> security domain, something like ISO/IEC 15408 (Common Criteria) defines
> seven Evaluation Assurance Levels, from EAL1 (mildly embarrassing) to
> EAL7 (national disaster).  Where do we see our rules fitting in, and
> do we need different profiles for different levels of concern?
>     Clive
> _______________________________________________
> C-safe-secure-studygroup mailing list
> C-safe-secure-studygroup at lists.trustable.io
> https://lists.trustable.io/cgi-bin/mailman/listinfo/c-safe-secure-studygroup