[trustable-software] Evidence Relationships in SDLC
Daniel Firth
dan.firth at codethink.co.uk
Thu Mar 29 11:15:52 BST 2018
Hi everyone
On 29/03/18 10:02, Paul Sherwood wrote:
>
>> playing devil's advocate, you can argue that the requirement contains
>> the condition safe which is a semantic packaging of the situation of
>> test failure and the implicit definition of the test function
>>
>
> which as I said is IMO the actual requirement, and doesn't mention the
> word 'safe'.
>
> **For practical purposes**, irrespective of 'semantic packaging' and
> 'implicit definition', I believe that
>
> "The workman should never be at risk"
>
> - is clear
> - is concise
> - can easily be understood by stakeholders parsing plain english
> - and to most people's understanding would be considered to be one
> single requirement, i.e. unitary/cohesive.
>
Semantics of a requirement aside (versus, let's say, a model statement;
for the sake of argument we'll call the others 1-7 that), I'd just like
to jump on the use of the word "test" when talking about the above. The
issue of demonstrating that the above statement holds is one that can
be, and (ethically) must be, deduced /a priori/ from the model, and not
one that you test empirically by switching the box on before confidence
is established.
There are legitimate cases where /a posteriori/ testing is the right (or
only) option, and that distinction should be made at the outset. I think
the Kantian language is the right language to use here, and it's one
that's accessible to everyone, so I'll make sure it's completely clear here:
https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_distinction
* *analytic proposition*: a proposition whose predicate concept is
  contained in its subject concept
* *synthetic proposition*: a proposition whose predicate concept is
  *not* contained in its subject concept, but related
* /a priori/ *proposition*: a proposition whose justification does
  /not/ rely upon experience. The proposition can be validated by
  experience, but is not grounded in experience. Therefore, it is
  logically necessary.
* /a posteriori/ *proposition*: a proposition whose justification
  does rely upon experience. The proposition is validated by, and
  grounded in, experience. Therefore, it is logically contingent.
To be clear, analytic and synthetic refer to the content of the
statements, whereas /a priori/ and /a posteriori/ refer to how
statements can be known to be the case. Analytic statements offer no new
information; they are definitional. Synthetic statements suggest a
hypothesis not contained in the definition alone. /A priori/ statements
are ones that can be known without consulting the contents of
experience; /a posteriori/ statements are ones that require consulting
experience to verify or refute.
Continuing from wikipedia:
The analytic/synthetic distinction and the /a priori/ / /a posteriori/
distinction together yield four types of propositions:
1. analytic /a priori/
2. synthetic /a priori/
3. analytic /a posteriori/
4. synthetic /a posteriori/
Kant posits the third type as obviously self-contradictory. Ruling it
out, he discusses only the remaining three types as components of his
epistemological framework, each, for brevity's sake, becoming,
respectively, "analytic", "synthetic a priori", and "empirical" or "a
posteriori" propositions. This triad will account for all propositions
possible.
The goal, at least on my part - whether it be type-checking from the
compiler or deducing the validity of statements from a SAT model
(consider: requirements are just certain "special", say non-negotiable,
statements) - is to make sure that as much as possible of the promises
and guarantees about runtime behaviour falls in the "/synthetic a
priori/" bucket and as little as possible falls in the "empirical" or
"/a posteriori/" bucket, and that we aim to make that the case as early
as possible in the chain.
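To make that concrete, here's a toy sketch of the distinction in code.
(The machine, the variable names, and the model constraints are all
invented for illustration - they are not the actual statements 1-7 from
this thread.) Entailment over a finite model is checked exhaustively, so
the requirement is *deduced* from the model rather than sampled by
switching the box on:

```python
from itertools import product

# Hypothetical state variables for a toy machine (invented for this sketch).
VARS = ("door_closed", "blade_spinning", "interlock_engaged")

def model(s):
    # Invented model statements: the blade only spins when the interlock
    # is engaged, and the interlock only engages when the door is closed.
    return ((not s["blade_spinning"] or s["interlock_engaged"])
            and (not s["interlock_engaged"] or s["door_closed"]))

def requirement(s):
    # "The workman should never be at risk", read here (for illustration)
    # as: the blade never spins while the door is open.
    return not (s["blade_spinning"] and not s["door_closed"])

def entailed(model, prop):
    # Exhaustive check over every possible state: if prop holds in *all*
    # states the model permits, it follows from the model (a priori
    # relative to the model) - no empirical run of the machine needed.
    for values in product([False, True], repeat=len(VARS)):
        s = dict(zip(VARS, values))
        if model(s) and not prop(s):
            return False  # counterexample: model allows an unsafe state
    return True

print(entailed(model, requirement))  # True: the requirement is deduced
```

By contrast, /a posteriori/ testing would only sample some runs of the
machine; the exhaustive (or, at scale, SAT-solver-backed) check above is
what moves the guarantee into the "synthetic a priori" bucket.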
> As discussed previously, part of the problem we face is that it is
> much harder than most people realise to map real-world scenarios into
> tight language and thence into code. I include expert mathematicians
> and logicians in 'most people'.
It's fine that this is a hard problem. There still has to be *an*
element of human judgement backing the mapping of a model onto
requirements, and that's fine. Where we're able to mechanise, prove, and
deduce - and, most importantly, not rely on /a posteriori/ testing - we
absolutely should.
Br, Dan