[trustable-software] Learning how to trust the Linux kernel

Paul Albertella paul.albertella at codethink.co.uk
Wed Jun 17 11:41:20 BST 2020


Hi Edmund,

Thanks very much for your comments.

You wrote:
> The data you present to support your case in these arguments has a 
> ludicrously small sample size, and there is no comment about whether 
> the same kernel was consistently used.

This is a fair point. Further work would clearly be required to confirm 
the findings and document the test inputs, if these were to be used to 
support a formal argument.

The purpose of the article was to share some preliminary findings, as a 
starting point for more in-depth analysis. The test code and 
configurations that were used are shared in the repository to allow 
others to repeat the tests.

> The article makes the assumption that in fact you can, in a reproducible 
> and statistically sensible manner, produce an artifact which can be 
> reasoned about, as you have in this piece.

Agreed: the techniques that the study explores cannot be used to make 
claims about software unless we can also make confident statements about 
what was tested and how it was constructed.

However, I don't believe that this devalues the tools and techniques 
examined.

> It should be noted that bypassing both of these approaches has now
> become widely understood; in particular, approaches like page table
> matching for userspace/kernel space synonyms[1] are well understood,
> and the ability to place these within your tool chain without notice
> is again a far from impossible approach.

Understanding the ways in which measures intended to mitigate risks may 
themselves be subverted, bypassed or compromised is an essential part of 
constructing a Trustable argument.

This study focused on one specific risk and some existing measures that 
might be used to mitigate it. Identifying other risks, such as your 
example, and examining strategies to mitigate them would be an 
excellent way to build upon these humble foundations.


Edmund's points highlight some important considerations (constant 
refrains in this space): any Trustable argument - whether it is about 
safety, security or some other category of risk - must necessarily be 
open to constant review and refinement, so that newly identified 
hazards can be incorporated. We must also be able to consistently 
reproduce the evidence used to support those arguments for any version 
of the software.

I do not believe this means that incomplete, outdated or insufficiently 
substantiated arguments, or the measures upon which they are based, are 
without value. An assessment of the limitations of argumentation and 
evidence should always form part of risk analysis, and a clear-sighted 
examination of these deficiencies can lead to improvements and 
increased confidence.

However, problems arise when such considerations are abstracted away in 
favour of a simple binary statement ("safe" or "not safe") that is 
accepted on blind faith, or without consideration of the full context 
for that statement.

Regards,

Paul
