[trustable-software] We've got this all wrong !

Andrew Banks andrew at andrewbanks.com
Fri Apr 20 07:02:41 BST 2018


Hi all

 

I’m wondering if we need to take an additional approach – looking at the competency (in a formal meaning of the word) of the people doing this work:

 

                t.software SHALL be written by t.competent (ie SQEP)


Over on the System Safety Mailing List, which has also been discussing a “Fire Code” for Software (amongst other stuff), Steve Tockley made a rather interesting comment:

 

Well, this opens up a completely different proverbial can of worms, namely a fundamental lack of professionalism. Imagine a ChemE, CivE, or any other *real* engineer letting supporting documentation get out of synch with the product. I’m back to my earlier observation that the software industry is dominated by highly paid amateurs. That has got to change.

 

This was linked to a previous comment:

 

I am already on record as having said (referring to the software industry as a whole) “We are an industry of highly paid amateurs” - Claiming that one is an engineer simply because the words “software engineer” are printed on one’s business card is simply not sufficient. I strongly recommend that we start a parallel effort to the recent “don’t call them bugs, call them defects” movement. In this new movement, anyone who uses the term “software engineer” is required to:

 

A) provide a reference to a definition of the term “engineer(ing)” that has been accepted by already-recognized engineers (e.g., Civil, Chemical, Mechanical, Industrial, . . .)

B) Show how what they are doing on a day-to-day basis on their projects is consistent with that legitimate engineer-accepted definition

 

I’m increasingly concerned that it is deemed not just acceptable, but even NORMAL, not to have a (reasonably) tight set of requirements BEFORE we start work!

 

A line I’ve used many times over the years: You tell us we don’t have the time or budget to do it properly the first time – yet we seem to have the time and budget to bodge it a second, third, fourth (etc) time.

 

Meanwhile, I exhort anyone on this list to join the “don’t call them bugs, call them defects” movement – we MUST stop talking about “bugs” as if they are some inconsequential fluffy creature… they are “defects” which have the potential to kill.


Regards

Andrew


From: trustable-software [mailto:trustable-software-bounces at lists.trustable.io] On Behalf Of Duncan Hart
Sent: 20 April 2018 00:50
To: Trustable software engineering discussion; will.barnard at codethink.co.uk
Subject: Re: [trustable-software] We've got this all wrong !

 

Hello Will, how are you?

 

You've made a very good point, let me try and expand upon it...

 

We're ignoring the need for contextualised performance information to be used as an *input* to inform the development process, rather than just an output. It needs to be circular, rather than linear.

 

Cheers!

 

-- Duncan Hart --

Melbourne, Australia

dah at seriousaboutsecurity.com

 

On 20 April 2018 at 02:25, Will Barnard <will.barnard at codethink.co.uk> wrote:

It certainly is interesting to hear details of the processes different teams are using to develop software. Here we have another anecdotal report of a team successfully using an ad-hoc approach. But I don't think anyone is saying it is not possible to produce software in this manner. If you think back to what the SEI said in CMM, teams of "Heroes" can produce software with little or no control or process. I believe the important distinction is whether such an approach can deliver trustable software in a reliable and reproducible manner.

What we would need are some quantifiable metrics showing that the software produced in such cases is trustable before we could advocate this approach. I am not sure simply getting paid is a reliable measure of trustability; Microsoft are one of the most successful tech companies, but I would not use that as a measure of the trustability of their software...

You raise an interesting point about practices in the DevOps community. While the DevOps movement has blurred the distinctions between development and operations, there are still some essential differences between software projects where the scope is largely to integrate and deploy software developed by other teams and those projects where significant quantities of new code will be produced.

Will


From: trustable at panic.fluff.org



Sent: 16 April 2018 01:38:39 GMT-07:00
To: Trustable software engineering discussion <trustable-software at lists.trustable.io>
Subject: [trustable-software] We've got this all wrong !

 

Having returned from yet another project where the developers yet again 
drove the behaviour that the cost of doing requirements is too much 
overhead, and decried how it prevents their ability to 'deliver on 
schedule', I've been led back to re-reading about the five-paragraph 
briefing style
       https://en.wikipedia.org/wiki/Five_paragraph_order
It is particularly interesting to look at the way 'Intent' is addressed 
here:
       https://en.wikipedia.org/wiki/Intent_(military)
 
It has been my experience, particularly with the DevOps community, that they 
are given some 'Intent' and some 'Scene Setting'. There is rarely agreement 
until something is finally delivered, and only then is value assigned 
to the effort expended.
 
In fact, during this period what occurs is a decomposition of the Intents 
into actions which can be task tracked. This task tracking is in turn 
adapted to by an approach like
       https://en.wikipedia.org/wiki/Pomodoro_Technique
to minimise waste.
 
Testing in these examples often comes as an afterthought, again. It appears 
that it is mostly in place to prevent the deployed software from regressing 
against itself.
 
So perhaps we need to look again at what we are trying to achieve here.
The original hypothesis stated that for software to be trustable
     * we know where it comes from
     * we know how to build it
     * we can reproduce it
     * we know what it does
     * it does what it is supposed to do
     * we can update it and be confident it will not break or regress
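Several of these criteria are mechanically checkable without any requirements document. The "we can reproduce it" point, for instance, comes down to building the same source twice and comparing digests. A minimal sketch in Python (the copy-as-"build" step is a hypothetical stand-in for a real build command):

```python
import hashlib
import tempfile
from pathlib import Path

def digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build(src: Path, out_dir: Path) -> Path:
    # Hypothetical build step: we 'compile' by copying bytes, so the
    # sketch stays self-contained. A real check would invoke the
    # project's actual build tool here.
    out = out_dir / "artifact.bin"
    out.write_bytes(src.read_bytes())
    return out

def is_reproducible(src: Path) -> bool:
    """Build twice in fresh directories; identical digests = reproducible."""
    with tempfile.TemporaryDirectory() as a, tempfile.TemporaryDirectory() as b:
        return digest(build(src, Path(a))) == digest(build(src, Path(b)))
```

Real builds, of course, fail this check for mundane reasons (embedded timestamps, build paths), which is exactly why "we can reproduce it" is a non-trivial criterion.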
 
None of this states that we need requirements to build software.
None of this states that we need to have the entire source code of a 
project.
 
If a test's only purpose is to confirm behaviour and prevent regression in 
that behaviour, then tests being an afterthought isn't a problem. The 
issue here might be confirmation bias, but little else.
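A test written after the fact in this spirit is essentially a characterisation test: it records what the code does today and fails if that ever changes, with no appeal to a requirement. A minimal sketch (the `parse_version` function is a made-up example):

```python
def parse_version(s: str) -> tuple:
    """Split a dotted version string into integer components."""
    return tuple(int(part) for part in s.strip().split("."))

def test_parse_version_characterisation():
    # These assert the behaviour the code currently exhibits,
    # not behaviour derived from any requirement.
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version(" 10.0 ") == (10, 0)

test_parse_version_characterisation()
```

The confirmation-bias risk is visible in the sketch itself: the assertions were copied from the implementation's output, so a defect present today is locked in as "correct" behaviour.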
 
I'm suggesting that the definitions of software raised in
    https://gitlab.com/trustable/workflow/blob/master/markdown/basics.md
are in fact flawed, because they begin with an assumption that we have 
requirements and that tests confirm requirements.
 
I'm going to suggest that tests are in fact simply confirmation of 
behaviour, and that in fact we NEVER have requirements; all we have is very 
'hand wavy' intents whose decomposition, in the form of actions performed 
in a fairly unstructured manner, with good luck delivers a piece of software.
 
At best we know we got there because someone pays the bill.
 
Edmund
 
-- 

  _____  

 
Edmund J. Sutcliffe                     Thoughtful Solutions; Creatively
 edmunds at panic.fluff.org               Implemented and Communicated
 http://panic.fluff.org                  +44 (0) 7976 938841
 
 

  _____  

 
trustable-software mailing list
trustable-software at lists.trustable.io
https://lists.trustable.io/cgi-bin/mailman/listinfo/trustable-software

 



 
