[trustable-software] Software that is not trustable

Paul Albertella paul.albertella at codethink.co.uk
Wed Apr 3 12:04:00 BST 2019


Hi Dan,

Welcome to Trustable!

On Mon, 1 Apr, 2019 at 9:41 AM, Dan Shearer <dan at shearer.org> wrote:
> This technical list has been debating what trustable software is. It
> is much easier to agree on what makes software untrustable, and work
> back from there.

Yes, I agree that this is important, and the concept of 
"untrustability" is explicitly referenced in the original 'Hypothesis 
for software to be trustable' [1], with the (rather unwieldy) 
requirement that "It does not do what it is not supposed to do".

However, I'm not sure that it's *always* easier to agree what makes 
software untrustable, for the same reason that it's hard to identify 
what makes it trustable: because the factors that inform these 
judgements can vary hugely between different domains and types of 
software, and different software development practices.

One of the dangers with 'official' lists of prohibitive criteria for 
software, for example, is that they tend to focus on a specific 
context, instead of articulating the underlying principles they are 
based on. The specific behaviour associated with these principles (e.g. 
dynamic memory allocation) may have been considered unacceptable in one 
context or at one point in time, but may be considered perfectly 
reasonable in another.

This doesn't necessarily devalue these sets of prohibitive criteria, 
but it becomes problematic when we attempt to apply them more widely, 
or to software that is either much more complex than the original 
context envisaged, or was developed with a different set of goals and 
disciplines. Then there can be a tendency 
for engineers to dismiss the criteria as irrelevant, or to refactor 
code to follow the 'letter of the law' instead of the underlying 
principles. I've encountered this sort of reaction myself when trying 
to 'retrofit' MISRA standards to software that was developed for a 
GNU/Linux platform, for example.
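As a purely illustrative sketch (my own, not from any particular 
guideline) of the kind of refactoring this involves: MISRA-style rules 
typically discourage dynamic memory allocation, so heap-based code 
written for a GNU/Linux platform might be reworked to use a fixed, 
compile-time-sized pool instead:

```c
#include <stddef.h>

/* Hypothetical example. POOL_SIZE and record_t are invented names. */
#define POOL_SIZE 16

typedef struct { int id; int value; } record_t;

/* Desktop-style code would allocate records on demand:
 *     record_t *r = malloc(sizeof *r); ... free(r);
 * The constrained-context alternative is a fixed pool, sized at
 * compile time, so memory behaviour is fully determined up front. */
static record_t pool[POOL_SIZE];
static size_t pool_used = 0;

static record_t *record_alloc(void)
{
    /* Fails deterministically once the pool is exhausted, rather
     * than depending on heap state at run time. */
    return (pool_used < POOL_SIZE) ? &pool[pool_used++] : NULL;
}
```

The point of the example: mechanically replacing every malloc() with a 
pool like this satisfies the letter of the rule, but the underlying 
principle (predictable memory behaviour) is only honoured if the pool's 
sizing and exhaustion handling are actually analysed.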

Hence I think it's important for us to consider how we develop and 
communicate this sort of constraint. Focussing on negative cases and 
specific facts may well be the *best* way to do this, but that doesn't 
necessarily make it easier :-)

On Mon, 1 Apr, 2019 at 9:41 AM, Dan Shearer <dan at shearer.org> wrote:
> I suggest listing some things that make software definitely not
> trustable. I don't mean dubious, but absolutely definitely not to be
> trusted. This is like the concept of never events in medicine, such as
> this list by the NHS which includes handy tips such as "don't amputate
> the wrong limb"
<---snip -->


> This isn't a novel idea. Clearly it isn't helpful to write a long list
> of things that are untrustable, but listing the categories with
> examples and exceptions is helpful.
> 
> Therefore I think the "What is Trustable Software?" debate should
> look at negative cases and specific facts. A partial list of sources
> that can indicate software is untrustable:

It would certainly be valuable to develop sets of criteria like this. 
One of Trustable's stated goals is to encourage this sort of activity 
to take place in the open, so that these resources are available to 
everyone and no-one has an excuse for not taking them into account in 
their software. This is what is envisaged for the "Public Constraints" 
workstream that Amanda proposed in a previous post [2].

The idea here is for the Trustable community to act as a shared 
repository of common, public and peer-reviewed constraints, and as a 
forum for identifying, developing and discussing them. An example that 
we have been working on is a documented set of hazards for autonomous 
vehicles that have been identified through STPA analysis [3].

However, I do not think that we should *mandate* which of these 
constraints *must* apply in order for software to be considered 
"Trustable" - that is something that already happens through standards 
and certification processes, and requires domain-specific expertise. 
Instead, Trustable might usefully serve as a forum for discussing which 
existing constraints are important or valuable, examining how these 
apply in particular domains, and acting as a home for new sets of 
public constraints developed in the open.

Hence three of the challenges that we've identified for Trustable are:

1. How can we identify, develop and publish (or publicise) common sets 
of constraints in the open, to describe what *all* software in the 
associated domain(s) is supposed to do (and *not* supposed to do)?

2. How can we ensure that these constraints can be effectively used for 
different types of software, and in different practices of software 
development and refinement?

3. How can we be confident that a specific piece of software, which 
claims to honour a specific set of constraints, has actually taken 
these into account in its development and refinement?

We will be trying to address the first two of these in the "Public 
Constraints" workstream and the last in the "Process" workstream.

Cheers,

Paul

[1] 
https://gitlab.com/trustable/documents/wikis/hypothesis-for-software-to-be-trustable
[2] 
https://lists.trustable.io/pipermail/trustable-software/2019-March/000514.html
[3] https://gitlab.com/trustable/av-stpa

