[trustable-software] What is Trustable About ?
trustable at panic.fluff.org
Fri Apr 12 13:40:20 BST 2019
Today automation is at the heart of what we do
* Society is driven by increased automation
* Pressures for financial returns and quality of life drive this
* Complexity arises through automation and faster decision cycles
* Faster decision cycles are brought about through automation
* Interdependency in supply chains and their automation is hard to
evaluate
* The 'Blast Radius' of poor decisions is hidden in the complexity
For the delivery of such systems to improve we need to be able to reason
systematically about these deliveries, ideally meaning we don't have to
wait for the external warning signs of failure, such as those found on the
balance sheets of corporations or discovered in our environment, as after
Three Mile Island and other environmental disasters.
It is my opinion that as time goes on we will see greater regulatory
change and a drive for enforceable regulation. This is becoming
particularly clear through activist campaigning and investor pressure, as
seen in the car industry with Ralph Nader, with Max Schrems and Facebook,
Ben Goldacre and pharmaceuticals, and Jo Maugham and the Good Law Project.
It is my view that 'trustable' follows this model of activist action, very
much as the 'Free Software' movement does.
* The Value of Trustable - 'Show me the Money'
The value in the 'trustable' approach is the ability to systematically
reason about the stated intents and the validation of their constraints
within a given set of limits for a context. It is then possible to take
these claims and reason about them against the limits expressed by an
assessment context, however that context is arbitrated: whether by a
geopolitical process such as a regulator or the courts, or by
anarcho-syndicalist kaizen approaches.
* Where are the Problems ?
It is my view that there is no 'one' way to deliver systems and software.
This has been demonstrated by the successful delivery of systems consuming
software without a standardised method.
In my experience there is almost always a level of revisionism in how well
or otherwise a delivery went, and in the assessment of the success of the
delivered intents. This means improvement in the delivery of t.systems and
t.software is anecdotal, when we could do much more.
In my view, the t.constraints of t.intents and the t.limits of the
t.context within which t.systems are assessed should sit within several
continuous improvement cycles around those contexts and intents. We get to
go back on failures and improve them.
For software to be worthy of trust, we want to be able to reason about
what has been delivered at points in time and, from this reasoning, bring
about improvement.
* How can we do this ?
I believe we can approach this through argumentation:
http://www.argumentationtoolkit.org/
** t.intent to t.constraint from source code ?
I would assert a claim that it is possible to programmatically map the
t.constraints (in effect, tests) to the source code which is executed,
with some reasonable statistical probability.
I would assert a claim that the ability to consistently execute tests and
reproducibly build the system would improve this statistical approach.
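As a minimal sketch of what such a statistical mapping could look like,
assuming per-test coverage data is already available (the test and file
names below are invented for illustration):

```python
# Sketch: map tests (t.constraints) to source files with a simple
# statistical weighting. Assumes per-test coverage data has already
# been collected; the example data here is illustrative only.
from collections import defaultdict

# Hypothetical coverage data: test name -> source files it executed.
coverage = {
    "test_login":  ["auth.py", "session.py"],
    "test_logout": ["auth.py", "session.py"],
    "test_report": ["report.py", "session.py"],
}

def constraint_to_source(coverage):
    """For each source file, estimate how strongly each test is
    associated with it: the fraction of that file's recorded
    executions which came from that test."""
    hits = defaultdict(lambda: defaultdict(int))
    for test, files in coverage.items():
        for f in files:
            hits[f][test] += 1
    weights = {}
    for f, by_test in hits.items():
        total = sum(by_test.values())
        weights[f] = {t: n / total for t, n in by_test.items()}
    return weights

weights = constraint_to_source(coverage)
print(weights["report.py"])  # {'test_report': 1.0}
```

Consistent test execution and reproducible builds matter here precisely
because they make these observed frequencies stable enough to reason from.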
It may be possible to associate the source code with t.intents
programmatically by evaluating git metadata, such as commit messages,
against formal statements of t.intent.
Even if this metadata is weak, I believe we should be able to assert a
claim that the filesystem position of the source code, and the
t.validation which impacts that code, could be associated through some
backward chaining to a t.intent. I would in fact encourage this, as it may
provide an opportunity to elucidate hidden t.intents within the systems.
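A small sketch of this backward chaining, assuming commit messages cite
intents by an identifier (the "INTENT-n" convention and the commit data
are my assumptions, not anything prescribed by the trustable documents):

```python
# Sketch: chain from files back to the t.intents cited by the commits
# that touched them. Files whose commits cite no intent are candidates
# for hidden t.intents.
import re

INTENT_RE = re.compile(r"\bINTENT-(\d+)\b")

# Hypothetical, flattened output of `git log --name-only`.
commits = [
    {"message": "Harden session timeout. INTENT-42",
     "files": ["session.py"]},
    {"message": "Refactor report layout",
     "files": ["report.py"]},
]

def intents_for_files(commits):
    """Map each touched file to the set of t.intent ids its commit
    messages mention."""
    mapping = {}
    for c in commits:
        intents = INTENT_RE.findall(c["message"])
        for f in c["files"]:
            mapping.setdefault(f, set()).update(intents)
    return mapping

m = intents_for_files(commits)
print(m)  # {'session.py': {'42'}, 'report.py': set()}
```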
A further issue which has to be resolved here is the need to handle
dispensation, that is to say the state of being licensed or permitted to
perform operations. Within the git metadata we have knowledge about who
has performed such actions, but the organisational structure and
dispensations are frequently recorded elsewhere. The increasing use of
automated review systems such as Gerrit means that this structure is being
recorded within the git metadata, so there is hope here.
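For example, Gerrit-style review records commonly appear as commit
trailers, which a crude parser can recover (the commit message below is
invented; "Reviewed-by" and "Signed-off-by" are real git/Gerrit
conventions):

```python
# Sketch: recover a trace of dispensation (who was permitted to review
# or sign off a change) from commit-message trailers.
def parse_trailers(message):
    """Collect Reviewed-by / Signed-off-by trailers from a commit
    message, as a crude record of who exercised a dispensation."""
    grants = {"Reviewed-by": [], "Signed-off-by": []}
    for line in message.splitlines():
        for key in grants:
            prefix = key + ":"
            if line.startswith(prefix):
                grants[key].append(line[len(prefix):].strip())
    return grants

msg = """Fix session expiry

Signed-off-by: Alice <alice@example.org>
Reviewed-by: Bob <bob@example.org>
"""
print(parse_trailers(msg)["Reviewed-by"])  # ['Bob <bob@example.org>']
```

Whether Bob actually held the dispensation to review that change still
has to come from the organisational record, which is the gap noted above.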
It is my current guess that tools like those discussed in
https://pragprog.com/book/atcrime/your-code-as-a-crime-scene
and developed by
https://sourced.tech/
https://bitergia.com/
and widely discussed at
https://www.msrconf.org/
show that such approaches are possible.
** t.limits to t.context
The difficulty here is to construct a suitable machine-parsable
representation of a t.context within which a t.system is executing the
t.software. Interestingly, particularly around geopolitical processes,
there are organisations marking up regulatory and legal documentation as a
service, ready for machine reasoning.
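To make this concrete, one possible machine-parsable shape for a
t.context and its t.limits might look like the following. The schema, the
field names and the example limit are all my assumptions; the trustable
documents do not prescribe any of them:

```python
# Sketch: a t.context as plain structured data, with t.limits tagged
# by the source that arbitrates them.
context = {
    "name": "road-vehicle-brake-controller",
    "regime": "geopolitical",  # as opposed to "kaizen"
    "limits": [
        {"id": "LIMIT-1",
         "text": "Brake demand must be acted on within 100 ms",
         "source": "regulatory"},
    ],
}

def limits_by_source(ctx, source):
    """Select the ids of the t.limits a given arbitration source
    imposes on this t.context."""
    return [l["id"] for l in ctx["limits"] if l["source"] == source]

print(limits_by_source(context, "regulatory"))  # ['LIMIT-1']
```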
It is my view that Toulmin's model of argumentation
https://en.wikipedia.org/wiki/Stephen_Toulmin#The_Toulmin_Model_of_Argument
is the way to record these issues, supported by Wigmore charts
https://en.wikipedia.org/wiki/Wigmore_chart
to help visualise where the relationships between evidence, claim and
argument are populated, and where more effort would deliver more
t.validation.
One crucial piece which frequently comes with a t.context is the need to
handle dispensation, the state of being permitted or licensed to perform
things.
In the case of an anarcho-syndicalist kaizen-developed t.context, the
markup is an issue still to be resolved.
** How do we align t.limits and t.constraints with t.context ?
It is my guess that the approach is something like this
https://en.wikipedia.org/wiki/Stephen_Toulmin#/media/File:Toulmin_Argumentation_Example.gif
We have a series of facts, in effect the data gained from parsing the data
and metadata created during the construction of software. We offer a
warrant for these facts from our stated t.intents, and a rebuttal to those
t.intents from our t.limits. The t.context gives us details about the
supported arguments available.
So let me work this a little further.
If we are to address a t.context concerned with safety, we can do so with
only three classes of argument from which to make a conclusion:
* Control of design flaws
* Control of errors during operation
* Control of random faults
I can provide a warrant from my t.intent saying that my system defends
against random faults by having n+2 systems running the code.
I can provide fact in the form of test result evidence and code evidence
to support this warrant.
My rebuttal within the context is that I have applied a design strategy
and have test results. This can be argued over and a conclusion derived.
The difficulty here is in actually applying the reasoning. Perhaps this is
the last remaining space in which a human needs to be involved...
However, I do believe that very quickly we can quantitatively measure the
amount of generated evidence and how it is used to warrant claims from the
t.intents. I also believe we can enumerate the t.limits within a t.context
and associate the claims and rebuttals with t.constraints and t.limits.
This should allow us to compare and contrast what is delivered very
quickly and see where more effort should be applied in further
t.constraints or tests which support those constraints.
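A crude version of that comparison is just counting, per declared
t.intent, how many pieces of evidence warrant claims about it, and
flagging the intents with none (the intent ids and evidence items below
are invented):

```python
# Sketch: quantify evidence per t.intent to show where further
# t.constraints or tests should be directed.
links = [  # (t.intent, evidence item warranting a claim about it)
    ("INTENT-42", "fail-over test result"),
    ("INTENT-42", "replica count check"),
    ("INTENT-7", "build reproducibility log"),
]
declared = ["INTENT-42", "INTENT-7", "INTENT-9"]

counts = {i: 0 for i in declared}
for intent, _evidence in links:
    counts[intent] += 1

gaps = [i for i, n in counts.items() if n == 0]
print(gaps)  # ['INTENT-9']
```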
My hope is that this approach could be iterated over and improved, in such
a manner that we ease the way in which arguments are made and improve the
evaluation of compliance.
Edmund
-----
Please note that in the above discussion I've explicitly been referring to
the concepts discussed here:
https://gitlab.com/trustable/documents/blob/master/markdown/concepts.md
--
========================================================================
Edmund J. Sutcliffe Thoughtful Solutions; Creatively
<edmunds at panic.fluff.org> Implemented and Communicated
<http://panic.fluff.org> +44 (0) 7976 938841