[trustable-software] From security to safety, or the other way around?

Paul Sherwood paul.sherwood at codethink.co.uk
Wed Sep 28 06:59:07 UTC 2016


Hi John,
On 2016-09-27 15:30, John Lewis wrote:
> I am new to this discussion so please have patience with me if I have
> missed some earlier points and have got hold of the wrong end of the
> stick.

From what you've written, I think you're pretty much up to speed... 
welcome to the fight :-)

> Standards for writing safety-critical software (e.g. for fly-by-wire
> aircraft, space etc.) have been around for years, as have the
> languages, tools (V&V, testing) and processes (CMMI Level 5) to
> support it. From my experience the automotive industry is not as
> advanced as aerospace in producing safety-critical software (I could
> but will not give examples).

That's an interesting point, and it would be great to actually verify 
it. I wonder whether that's possible somehow? Ideally all of the 
standards would be opened so they could be compared, digested and 
rationalised.

> Similarly a lot of work has been done on security dating back to the
> 1980s - Trusted Computer Base (Orange Book), Capability Architectures
> (Capsicum) and more recently seL4. There are questions about the
> integrity of seL4 - and I think most people would agree that we still
> have a way to go.

I've not been keeping up on seL4, but I was under the impression when 
it launched that the big claim was all about mathematically verifiable 
integrity. Could you sum up the questions, or provide links please?

> The "real world” has moved on and we need both - as soon as you
> connect a car (or any device) to the internet - it becomes vulnerable
> to attack.
>
> Am I correct in thinking that this is the problem that you are trying
> to solve?

It's one of them, for sure. In the original RFC/call-to-action [1] I 
tried to highlight the need for software which we can be sure about. 
This includes safety, security and all the other factors that can 
compromise the behaviour of code.

Vulnerability of connected devices is clearly one of the key problems, 
but there are others too. For example, we need better processes (Agile, 
shmagile) and better measurement of many things (engineer productivity 
and reliability). And given the increasing reliance on open source, we 
need to figure out how to strengthen or even re-architect existing 
projects in light of new constraints and requirements.

> If so, then I have the following comments. If not please ignore me.
>
> 1 The code has to be safe from the threat. This in many cases means
> state-sponsored attacks (APT) - Stuxnet etc. One could argue that
> there is little risk of cars being an APT target. I would disagree
> (see below), and many other prospective IoT applications are equally
> vulnerable, e.g. smart meters/grid.

I hadn't heard the APT acronym before, but I'm happy to learn it, 
assuming you mean Advanced Persistent Threat. I completely agree cars 
are at risk.

> 2 A "Mission-Critical" system (one that has to be both safety-critical
> AND secure) is either Trusted or it is not. Binary (I don't
> get hung up on terminology, you know what I mean). Something that is
> coded to CERT is not Trusted, so cannot be "Mission Critical" -
> zero-day exploits / coding/testing errors - merely Trust_able_, so
> better than nothing but not adequate for M-C applications.

I'm interested in this binary-state argument. Previously on this list 
we got into the difference between 'trustable' and 'trustworthy' [2]. 
I take issue with others branding themselves as 'Trustworthy', since I 
believe that it's up to the receiver of x to determine whether they 
believe that x is trustworthy or not.

But the point you're making on Trusted is different. For that class of 
Mission-Critical system, do the parties involved have control of all of 
the software, including the tools required to build it? If not, I wonder 
how they can be so confident.

> 3 Because there are so many potential holes, Trustable software is
> about as much use as a chocolate fireguard if the threat is an APT.

That's fighting talk - you're definitely on the right list :)

So if you're ok with my distinction between 'trustable' and 
'trustworthy', I'm suggesting we need to aim for processes (with 
measurements/tests) and code which satisfy the scrutiny of a broad group 
of relevant experts, from a range of organisations and industries.

Hopefully some of those experts, in the course of their analysis, would 
state that they themselves consider foo to be 'Trusted', but I don't 
think 'Trusted' is a claim we should try to make for the rest of the 
world.

I think the above is possible, and we do have some chance of success, 
even against APTs.

> ______________
> Threat Assessment
>
> APTs are normally against CNI (power, water, telecoms) or
> defence-related targets (stealing defence technology) but are also
> viable against commercial targets. So in times of tension (or TTW)
> many more commercial organisations are likely to be targets, e.g.
> Tesco (food distribution). Because APTs can be triggered, we don't
> know whether the attacks are currently in place (but sleeping). We can
> try to find out (Huawei) but cannot be sure that our efforts have been
> successful and remain successful (the P in APT).
>
> I can predict with a reasonable degree of certainty that many
> vehicles have already been subjected to an APT that has not been
> activated. How do you prove they haven't?

I concur, sadly.

One of the things we can do to mitigate this is to push for complete 
transparency and provenance of all toolchain and target code, and to 
establish properly reproducible builds [3]. Just because the bucket has 
always leaked doesn't mean we need to get wet forever.
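
To make that concrete, here's a minimal sketch in Python of the final 
check - hashing every artifact from two independent builds and 
confirming they are bit-for-bit identical. The verify-repro.py name 
and the build-a/build-b output trees are hypothetical, just for 
illustration:

#!/usr/bin/env python3
# verify-repro.py: compare two build output trees artifact-by-artifact.
# A build is reproducible only if every file hashes identically.
import hashlib
import sys
from pathlib import Path

def digest_tree(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

# Usage: verify-repro.py build-a build-b
a = digest_tree(Path(sys.argv[1]))
b = digest_tree(Path(sys.argv[2]))
if a == b:
    print("builds match: bit-for-bit reproducible")
else:
    for name in sorted(a.keys() | b.keys()):
        if a.get(name) != b.get(name):
            print("differs:", name)
    sys.exit(1)

Of course the comparison is the easy part - the real work is pinning 
down the toolchain, environment and inputs so that two builds can match 
at all, which is exactly what [3] is about.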

br
Paul

[1] https://lists.veristac.io/pipermail/trustable-software/2016-July/000000.html
[2] https://lists.veristac.io/pipermail/trustable-software/2016-July/000004.html
[3] https://reproducible-builds.org


