[trustable-software] No silent failures?

Colin Robbins colin.robbins at qonex.com
Thu Jul 28 08:37:24 UTC 2016


>Moving this along to a more concrete example.... If I steal your data,
>then you still have it. You have had a silent failure insofar as your need
>for the data remains satisfied. This is why Verizon report that 80% of data
>breaches are discovered by unrelated third parties.

 

Again, isn’t this about good monitoring?

For example, along the lines of GPG 13, Table 1 page 29.  (https://www.cesg.gov.uk/guidance/protective-monitoring-hmg-ict-systems-gpg-13)

 

If you don’t look, you will have a silent failure.

If you are good at looking, then it will take a very skilled and motivated attacker to exfiltrate data unnoticed (beyond the capability of the majority of the attackers we see).
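
To make the “looking” concrete, here is a minimal sketch (Python; the check functions and the alert() sink are hypothetical, purely for illustration) of a periodic health check that turns a component failure into an alert instead of leaving it silent:

    import time

    def alert(message):
        # In practice this would feed a SIEM / protective-monitoring pipeline.
        print("ALERT:", message)

    def monitor(checks, interval_seconds=60):
        """Run each named health check every interval; alert on any failure."""
        while True:
            for name, check_fn in checks.items():
                try:
                    healthy = check_fn()
                except Exception as exc:
                    alert("%s: check raised %r" % (name, exc))
                    continue
                if not healthy:
                    alert("%s: component reports failure" % name)
            time.sleep(interval_seconds)

    # Example usage (ping() is a stand-in for whatever liveness test you have):
    # monitor({"replica-1": lambda: ping("10.0.0.1"),
    #          "replica-2": lambda: ping("10.0.0.2")})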

 

Cheers,

 

 

Colin Robbins

Qonex (the consulting arm of Nexor)

Tel: +44 (0) 115 953 5541 

 

 

 

From: trustable-software [mailto:trustable-software-bounces at lists.veristac.io] On Behalf Of Duncan Hart
Sent: 27 July 2016 13:56
To: Discussion about trustable software engineering <trustable-software at lists.veristac.io>
Subject: Re: [trustable-software] No silent failures?

 

Thanks Colin, appreciate it.

 

I agree about monitoring, but here's the thing.....

 

I really believe that this is what matters most when we talk about attack surface and its measurement.  The one little failure in a component that you are not using will almost surely go unnoticed absent an alarm for that failure. But we rarely alarm for failures that we do not believe matter. Why should we? The answer is that we have no need to alarm for failures that do not matter, but only on the single, paramount condition that the failure does not lead to another.

 

Moving this along to a more concrete example.... If I steal your data, then you still have it. You have had a silent failure insofar as your need for the data remains satisfied. This is why Verizon report that 80% of data breaches are discovered by unrelated third parties.

 

Demanding ‘no failure’ is the same as all other impossible demands of the “prove a negative” sort. But demanding that a failure stays silent for an interval shorter than the one required for that failure to grow into a larger one is to introduce a margin of safety into security engineering. It’s that margin-of-safety concept, so ingrained in so many other fields of engineering, that I want us to have the ability to grow towards that kind of maturity. I live in hope...
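
As a rough illustration of that margin-of-safety idea (the numbers and the function below are mine, purely hypothetical): the margin is the gap between how long a failure can stay silent and how long it takes that failure to become a larger one.

    def safety_margin(time_to_detect_hours, time_to_escalate_hours):
        # Positive: we expect to notice the failure before it grows.
        # Negative: the failure outruns the monitoring -- no margin at all.
        return time_to_escalate_hours - time_to_detect_hours

    # Daily log review (24h) against an attacker who needs about a week (168h)
    # to turn one foothold into a larger breach:
    print(safety_margin(24, 168))   # 144 -> a comfortable margin
    print(safety_margin(24, 4))     # -20 -> the silent failure wins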

 

Warmest regards, all the best,

  Duncan

 

<This message is on-the-record unless we agree otherwise>

 

 

 

On 27 July 2016 at 11:58, Colin Robbins <colin.robbins at qonex.com> wrote:

Hello Duncan,

 

From my perspective your logic holds true.

It emphasises the need for good monitoring and testing to assure that systems are working as expected (hence the 3 layers of redundancy).

 

Regards,

 

 

Colin Robbins

Qonex (the consulting arm of Nexor)

Tel: +44 (0) 115 953 5541 

 

From: trustable-software [mailto:trustable-software-bounces at lists.veristac.io] On Behalf Of Duncan Hart
Sent: 25 July 2016 22:34
To: Discussion about trustable software engineering <trustable-software at lists.veristac.io>
Subject: [trustable-software] No silent failures?

 

Hello folks,

 

I'm wondering if you good folks could help develop my thinking further....

 

I have come to accept that silent component failure is a contributor to system failure like no other. 

 

  Imagine you have a system with 3-way redundancy:

 

  If one component fails then nothing bad happens.

 

  Even if 2 components fail nothing bad happens.

 

  But if the first fails, and then the second fails, AND you don't know that they have, then on the one hand the redundancy can be said to be effective and, on the other hand, each failure that you do not notice, because the redundancy is covering you, brings you one step closer to an entire system failure.

 

  When an entire system failure occurs, you will declaim that it was impossible because you have (had) 3-way redundancy, but you didn't. You once did, then you had 2-way, then no redundancy at all, then you had a failure.

 

Does the logic hold true? How might this manifest itself in a software environment?
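
One way it could manifest, as a minimal Python sketch (the component names are hypothetical): the service keeps answering while replicas silently fail, so without an alarm nothing is noticed until the last replica goes.

    class RedundantService:
        def __init__(self, replicas):
            self.healthy = set(replicas)

        def fail(self, replica):
            # A silent failure: nothing is raised, nothing is logged...
            self.healthy.discard(replica)

        def request(self):
            # ...because any surviving replica still serves the request.
            if self.healthy:
                return "ok (served by %s)" % next(iter(self.healthy))
            return "TOTAL FAILURE"

    svc = RedundantService(["a", "b", "c"])
    svc.fail("a"); print(svc.request())  # ok -- 2-way left, nobody was told
    svc.fail("b"); print(svc.request())  # ok -- no redundancy left, still silent
    svc.fail("c"); print(svc.request())  # TOTAL FAILURE -- "impossible" with 3-way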

 

Thoughts, comments, feedback much appreciated.

 

Warmest regards,

  Duncan

 

 

<This message is on-the-record unless we agree otherwise>



 
