[trustable-software] Trustable Software Engineering

Paul Sherwood paul.sherwood at codethink.co.uk
Thu Jul 21 08:16:45 UTC 2016


On 2016-07-21 08:56, Colin Robbins wrote:
>>> "Trust me, I'm a doctor/priest/politician" doesn't really wash 
>>> these days,
>>> does it?
>
> No.  And the TSI is not saying that.
> It's saying: here's a framework of things that you need to consider
> if you wish to claim a solution is trustworthy. It's up to you
> whether you determine the solution trustable.
> The doctor/priest analogy is good here. They have been through a
> process of education, examination and peer-review to show they have
> the attributes needed to be considered capable of being trusted as an
> expert in their field - "the system". You then need to look at a
> specific individual and decide if they are trustable - and that is a
> personal decision, not one a system can make.

Aha - I think we agree, but are using the words differently. I'm 
suggesting "trustable" is the best 'we' can get to... and "trustworthy" 
is what others decide based on their independent assessment of us and 
our work.

> Don't get me wrong, I am not saying TSI is perfect - but it has been
> a significant collaborative effort put together over many years, so
> we should learn from the good aspects.

I'm fine with that, but I get feisty when people exploit established 
words without delivering on the meanings they're borrowing.

> On the CESG CPA - there are rules to follow - these are defined in
> the "characteristics", and they are specific to a product class.

Sorry, I'm failing to find them on the website. I did find the sitemap 
[1], with a single reference to 'characteristic', i.e. a page about 
deleting one :)

>>> So, if/when the V2 meters are hacked, who do we shame in public for
>>> claiming they were 'trustable' in the first place?
>
> Depends on the reason for failure.
>   -- The National Technical Authority, GCHQ, who define the rules by
> which they should be assessed (the CPA scheme and characteristics).
>   -- The test lab who were not rigorous in their testing.
>   -- The energy company that failed to adopt the required operational
> security practice.
>
> The point here is, there are many reasons a system might fail -
> ranging from bad software to good software used poorly.
> As such, trust in a system is far more than trust in the software.

Agreed. So why are they claiming the meters are 'Trustable', then? It 
creates a false sense of security, imo.

> I may have fantastic software, running on a VM.  The attacker may be
> able to pause the VM, alter a few bits of memory, then restart the
> VM.  Hey presto, my software now has admin privileges.  There is very
> little my software can do to prevent that (some paranoid systems do
> have detection controls).
>
> My point is that software is one part of a system.  Sure, we have to
> get the software right, but that only addresses part of the
> "trustable" issue.

Agreed, again. So can we stop pretending our current implementations 
are trustable, when they're not really?
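(As an aside, the "detection controls" you mention can be as simple as
keeping an authentication tag over sensitive in-memory state and
re-checking it before the state is used. A minimal sketch in Python -
the names and the HMAC approach are my own illustration, not anything
from CPA, and of course an attacker who can pause the VM can read the
key too, so this only raises the bar rather than solving the problem:)

```python
import hmac
import hashlib
import os

# Secret key held by the process; a VM-pausing attacker could read this
# too, which is exactly why such checks are only a partial mitigation.
KEY = os.urandom(32)

def seal(value: bytes) -> bytes:
    """Return an HMAC-SHA256 tag binding the current value to KEY."""
    return hmac.new(KEY, value, hashlib.sha256).digest()

def verify(value: bytes, tag: bytes) -> bool:
    """Check the value still matches its tag (constant-time compare)."""
    return hmac.compare_digest(seal(value), tag)

# Normal operation: the privilege flag is sealed when it is set...
privileges = b"user"
tag = seal(privileges)
print(verify(privileges, tag))       # True: state is intact

# ...and a bit-flip made while the VM was paused no longer matches:
privileges = b"admin"
print(verify(privileges, tag))       # False: tampering detected
```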

The reason I'm digging in on this is that it's hard to get attention on 
how to address real risks and problems, when official-looking marketing 
suggests (to any layman at least) that the problems are already solved.

br
Paul

[1] https://www.cesg.gov.uk/sitemap



