[trustable-software] Security vs. Safety in Linux

Niall Dalton niall.dalton at gmail.com
Fri Jul 13 17:21:51 BST 2018


On Fri, Jul 13, 2018 at 8:08 AM John Ellis <john at ellis-and-associates.com>
wrote:

> So does that mean you cannot have a secure, safety product? Or, does it
> mean that you shouldn't start from a secure distro?
>


My take is that you can have a secure, safe system. Sharing of resources
makes this immensely harder, though. And security mechanisms, especially
software aimed at solving problems in sharing resources, make safety
harder still.

The problem is pretty fundamental: high-performance systems tend to
speculate across protection boundaries, on the assumption that the design
is secure so long as erroneous accesses never become visible (i.e.
architecturally committed). Various side-channels can be used to extract
information from the speculation, though. Access to precise and steady
clocks/timers lets you extract information from variance in performance due
to the speculation... and those are the same clocks we need for safety.
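To make the clock/timer point concrete, here's a minimal sketch (not an actual microarchitectural attack) of why a precise timer is all you need to distinguish two code paths. The `busy_loop` function is a hypothetical stand-in for secret-dependent work:

```python
import time

def busy_loop(n):
    # Stand-in for secret-dependent work: more iterations, more time.
    total = 0
    for i in range(n):
        total += i
    return total

def measure_ns(fn, *args):
    start = time.perf_counter_ns()
    fn(*args)
    return time.perf_counter_ns() - start

# With a precise clock, the two paths are trivially distinguishable.
# Taking the minimum over repeats suppresses scheduling noise.
fast = min(measure_ns(busy_loop, 1_000) for _ in range(20))
slow = min(measure_ns(busy_loop, 100_000) for _ in range(20))
assert slow > fast
```

Real attacks measure much smaller differences (a cache hit vs. a miss), which is exactly why the precision of the available clock matters so much.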

This is compounded by optimizations in system software. For example, many
hypervisors and operating systems implement deduplication of memory pages.
One downside of running multiple copies of operating systems, basic
libraries, etc. across different guest OSes sharing system resources is
that you increase memory pressure and decrease performance. To recover
somewhat, these systems find the duplicated pages and share them. That is
now both a safety concern (unpredictable timing when the pages are COW'd)
and a security concern (timing and address-space-layout leakage).
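A toy sketch of the deduplication mechanism itself (KSM-style dedup works on raw memory pages, not Python objects, but the bookkeeping is the same idea): identical pages collapse to one shared copy, and any later write to a shared page must pay the copy-on-write cost, which is the timing signal.

```python
import hashlib

def deduplicate(pages):
    """Toy page deduplication: map each page's content hash to a single
    stored copy, the way KSM-style memory dedup shares identical pages.
    Returns (page_table, store): page_table maps page number -> content
    hash, store holds one copy per distinct content."""
    store = {}
    page_table = {}
    for i, page in enumerate(pages):
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)  # keep only one shared copy
        page_table[i] = digest
    return page_table, store

# Four guest pages, two distinct contents: only two copies are stored.
pages = [b"libc" * 1024, b"zero" * 1024, b"libc" * 1024, b"zero" * 1024]
table, store = deduplicate(pages)
assert len(store) == 2
# A later write to a shared page must trigger copy-on-write; that extra,
# data-dependent latency is what both the safety and security concerns
# above are about.
```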

For security reasons, there is a tendency to deliberately introduce
non-determinism, in software, to thwart the timings on the CPU side. But on
many systems you can bypass it from the accelerators in the SoC alongside
the CPU, for example from an integrated GPU. You can construct precise
timing from non-timing-related OpenGL (or WebGL) graphics primitives even
when the explicit timing APIs are unavailable or deliberately return fuzzy
time.
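One well-known way to build a clock without any timing API is a counting thread: a worker that increments a shared counter as fast as it can, so reading the counter before and after an operation gives a duration estimate. The sketch below illustrates the principle in Python (the GIL makes it far coarser than the shared-memory worker variants used in real browser attacks, so treat it as illustrative only):

```python
import threading
import time

class CountingThreadClock:
    """A 'clock' built without any timing API: a background thread
    increments a counter as fast as it can, and the counter value
    serves as a timestamp. Illustrative only; Python's GIL makes the
    tick rate coarse compared to shared-memory implementations."""

    def __init__(self):
        self.ticks = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.ticks += 1

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

clock = CountingThreadClock()
clock.start()
before = clock.ticks
time.sleep(0.05)  # the operation being "timed"
after = clock.ticks
clock.stop()
assert after > before  # the counter advanced: we have a timer again
```

Fuzzing the explicit timer APIs does nothing against this; the counter's advance rate is set by the hardware, not by any API the platform controls.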

More than that though. If, as is common on high-performance SOCs, the
accelerators share some resources among themselves and with the CPU, then
there's another avenue. With some mental gymnastics to understand the quite
different memory hierarchy and addressing, on many SOCs you can construct a
way to read system DRAM from opengl/webgl shaders. The accelerators are
quite effective at accelerating attacks against microarchitectural features
of the CPUs..

The answer appears to be carefully unsharing the resources we so carefully
shared between different parts of the workload.
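As one small, concrete instance of unsharing, a process can be pinned to a single CPU so it no longer time-shares a core (and that core's private caches) with the rest of the workload. A minimal Linux-only sketch (`os.sched_setaffinity` is not available on macOS or Windows):

```python
import os

# "Unshare" one resource: restrict this process to a single CPU from
# its currently allowed set, so it stops time-sharing a core with
# other parts of the workload. Linux-only.
allowed = os.sched_getaffinity(0)
target = {min(allowed)}  # pick one CPU from the allowed set
os.sched_setaffinity(0, target)
assert os.sched_getaffinity(0) == target
```

Real partitioning goes much further (cache ways, memory bandwidth, interconnect), but the shape is the same: give each safety-relevant component exclusive use of what it touches.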

Now, this world is tricky enough given a static code load. Add dynamic code
loading or updating, and the fun and games really begin. The traditional
approaches to attestation, almost always crypto-based, have a fundamental
problem: they're great at certifying origins and the entity itself (the
bytes of the new program), but what we want is attestation that the
behavior of the software is what we desire. We'd settle for knowing that
the software doesn't exhibit known-bad behavior (e.g. perhaps we don't want
it calling certain system calls, or if it calls one particular function it
must call another particular function within n milliseconds). There has
been work on this but, as far as I'm aware, it's not in common use.
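A runtime monitor for the second kind of property ("if it calls one function it must call another within n milliseconds") can be sketched in a few lines. The event names below are hypothetical, chosen purely for illustration:

```python
import time

class CallDeadlineMonitor:
    """Toy behavioral monitor for one known-bad pattern: if the event
    'open_valve' is observed, 'close_valve' must follow within
    deadline_ms. Event names are hypothetical, for illustration."""

    def __init__(self, deadline_ms):
        self.deadline_ms = deadline_ms
        self._opened_at = None
        self.violations = []

    def observe(self, event):
        now = time.perf_counter()
        if event == "open_valve":
            self._opened_at = now
        elif event == "close_valve" and self._opened_at is not None:
            elapsed_ms = (now - self._opened_at) * 1000.0
            if elapsed_ms > self.deadline_ms:
                self.violations.append(elapsed_ms)
            self._opened_at = None

mon = CallDeadlineMonitor(deadline_ms=50)
mon.observe("open_valve")
mon.observe("close_valve")  # prompt close: no violation recorded
assert mon.violations == []

mon.observe("open_valve")
time.sleep(0.2)             # miss the 50 ms deadline
mon.observe("close_valve")
assert len(mon.violations) == 1
```

Note what this does and does not give you: it checks observed behavior at runtime, which is exactly what crypto-based attestation of the program bytes cannot do, but it only catches the known-bad patterns you thought to write down.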