[trustable-software] Exploring the "Hypothesis for software to be trustable"

Paul Sherwood paul.sherwood at codethink.co.uk
Wed Jan 3 16:29:12 GMT 2018

On 2018-01-03 14:04, trustable at panic.fluff.org wrote:
>> A surprising amount of crucial decisions need to be taken based on 
>> instinct.
>        I'm not disputing this, however, if you want your decisions to
> be reproducible, that is to say capable of being assessed for validity
> they require some form of reproducible measure.

Have we established somewhere that all decisions need to be 
reproducible? If so I think we may be doomed to remain a …

I'm currently of the view that decisions need to be **traceable**, i.e. 
we can assess evidence of who made which decisions when etc. But I don't 
expect that all decisions are going to be objective, or that we can 
retrospectively assess all of the factors that influenced a decision.
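To illustrate the distinction (purely a sketch, not anything this list has agreed on): traceability could be as little as an append-only log of who decided what, and when. All names and fields below are invented for the example.

```python
# Hypothetical sketch: an append-only decision log supporting traceability
# (who made which decision, when) without claiming the decisions themselves
# are objective or reproducible.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass(frozen=True)
class Decision:
    author: str       # who made the decision
    summary: str      # what was decided
    rationale: str    # free text; may be subjective or incomplete
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionLog:
    def __init__(self) -> None:
        self._entries: List[Decision] = []

    def record(self, decision: Decision) -> None:
        self._entries.append(decision)  # append-only: entries are never edited

    def by_author(self, author: str) -> List[Decision]:
        return [d for d in self._entries if d.author == author]

log = DecisionLog()
log.record(Decision("paul", "prefer traceability over reproducibility",
                    "not all influencing factors can be assessed later"))
```

Note this records evidence about decisions; it makes no attempt to make the decisions themselves reproducible.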

>       You could argue that we regularly fool ourselves and suffer from
> observer bias because we never record what our measures for success
> are.

Yup :)

>>> To quote a conversation elsewhere discussing the following volume
>>>   [https://www.amazon.co.uk/dp/B00INUYS2U]
>>> " 1. Management cares about measurements because measurements inform
>>>      uncertain decisions.
>> Measurements can inform. Often they misinform.
>   Again I don't disagree with this, but we can ONLY learn whether
> these measures are misinformation if we record them and apply them.
> Their value is in the ability to look with clarity retrospectively and
> assess the usage of those measures.

I agree with the concept of measurement, but that doesn't mean that we 
should attempt to measure everything.

>>>   2. For any decision or set of decisions, there are a large
>>>      combination of things to measure and ways to measure them but
>>>      perfect certainty is rarely a realistic option.
>> Agreed.
>>>   3. Therefore, management needs a method to analyze options for
>>>      reducing uncertainty about decisions. "
>> OK, but it doesn't work for everything, and IME management cannot 
>> rely entirely on any 'method'. We have to make decisions in the 
>> presence of uncertainty.
>>> I'd make the point that though designing experiments which allow us
>>> to measure things can sometimes be complex, without being able to do
>>> this we are unable to confirm our findings and verify the cause of
>>> aberrant behaviour in the systems or in the construction of the
>>> systems.
>> If you're holding to the line that we have to measure everything, I'm 
>> disagreeing.
>    I believe that it is VERY complex to assess whether one has
> achieved anything without some measure to assess against. I also
> strongly hold the view that for a system to be reproducible, such as a
> system which builds software, that system must be composed of
> measures to ensure we are aggregating and presenting consistent data.

Maybe you are avoiding my question :)

Currently I think we can and should collect measurements, but I expect 
them to be incomplete and imperfect. Does incompleteness/imperfection in 
itself already invalidate the hypothesis? I'm assuming not, given that I 
don't believe we're aiming for a binary yes/no measure... more likely 
we're initially aiming for some kind of scoring mechanism.
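To make the non-binary idea concrete, here is a purely hypothetical sketch of such a scoring mechanism: a weighted aggregate that reports coverage separately, so missing or incomplete measurements are made visible rather than silently counted as failures. Every name and weight below is an assumption for illustration.

```python
# Hypothetical sketch: aggregate incomplete, imperfect measurements into a
# non-binary trustability score. Missing measurements (None) reduce the
# reported coverage instead of the score itself.
from typing import Dict, Optional

def trustability_score(measurements: Dict[str, Optional[float]],
                       weights: Dict[str, float]) -> Dict[str, float]:
    """Weighted mean over the measurements we actually have (each 0.0-1.0).

    Returns {"score": ..., "coverage": ...}, where coverage is the fraction
    of total weight for which a measurement was available.
    """
    present = {k: v for k, v in measurements.items() if v is not None}
    total_w = sum(weights[k] for k in present)
    score = (sum(weights[k] * present[k] for k in present) / total_w
             if total_w else 0.0)
    coverage = total_w / sum(weights.values())
    return {"score": score, "coverage": coverage}

# Example: the "provenance" measurement is missing, so coverage drops to 0.8
# while the score is computed only from the measures we do have.
result = trustability_score(
    {"tests_pass": 1.0, "reviewed": 0.5, "provenance": None},
    {"tests_pass": 0.5, "reviewed": 0.3, "provenance": 0.2})
```

The design choice here is exactly the point under discussion: incompleteness does not invalidate the score, it is reported alongside it.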

