Self-secure systems - redundancy?


'1093'4218'2184189'481'0'414

Nov 13, 2016, 3:28:17 PM
to qubes-users
Hello,

According to this article:

https://nakedsecurity.sophos.com/2016/10/19/linux-kernel-bugs-we-add-them-in-and-then-take-years-to-get-them-out/

Linux kernel bugs remain exploitable for roughly 1-2 years until they are fixed.

Self-secure systems run redundant subsystems...

Would it be possible to run two VMs in parallel on the "same task"?
The technologies of these VMs would be 100% independent (no part of the code is a copy of the other).
A command counts as 100% clean only if both instances produce the same result; in that case it is executed, otherwise it is blocked and logged.
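The proposed check can be sketched in a few lines of Python. This is a toy model, not Qubes code: each "VM" is stood in for by an independent Python callable, and the name `redundant_execute` is hypothetical.

```python
def redundant_execute(implementations, data):
    """Run the same task on N independent implementations and accept
    the result only if all of them agree; otherwise block and log."""
    results = [impl(data) for impl in implementations]
    if all(r == results[0] for r in results):
        return results[0]  # unanimous: treat the command as clean
    raise RuntimeError(f"mismatch, blocked and logged: {results!r}")

# Two deliberately independent implementations of the same task
# (uppercasing ASCII text), sharing no code:
impl_a = str.upper
impl_b = lambda s: "".join(
    chr(ord(c) - 32) if "a" <= c <= "z" else c for c in s
)

clean = redundant_execute([impl_a, impl_b], "hello")  # both return "HELLO"
```

If either implementation were buggy or backdoored on some input, the two outputs would differ there and the call would raise instead of returning.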

Would this work?

Which VM would be the counterpart to the standard Linux templates?

E.g., would it be possible to pair up a Windows VM and a Linux VM?
(Why would this not work for internet browsing, for example?)


Kind Regards

Vít Šesták

Nov 14, 2016, 6:25:56 PM
to qubes-users
Well, I have considered something similar in the past. My objective was slightly different (backdoors vs. vulnerable code), but the reasoning why it is not as useful an idea as it might look is similar:

1. It cannot prevent some kinds of attacks, because of covert channels.
2. It can actually lower the level of security.
3. It is not easy to implement.

I am not saying it is totally useless. There might be some cases where the security benefits outweigh the security drawbacks and where it is easy to implement. I cannot name any such case, though.

More details about these issues:

1. Even when you use two physically separate machines and tinfoil them in various ways (sound isolation, heat isolation, countermeasures against power analysis), you cannot remove all the covert channels. It is hard (though in some special cases possible) to remove the timing side channel. And when one computation gives a different result, that itself can be a covert channel. Sure, if someone reads the logs, she might find something suspicious.

Well, it can prevent, for example, directory traversal attacks, if one of the implementations is not vulnerable or if there is no attack payload that succeeds on both systems. But it can hardly prevent a data leak from a system compromised via RCE.

2. One might argue: if it prevents some vulnerabilities, it is better than nothing, isn't it? Well, this is not always the case. If we consider RCE vulnerabilities (where I assume that we cannot prevent data leaks), the attacker can choose which system to attack.

3. There are many practical difficulties. I can show some of them using a web browser as the example:

* There are some randomness-based protocols, e.g. TLS. Even if there were one shared source of random numbers (unrealistic for some major reasons, e.g. race conditions), two completely different libraries could use it slightly differently and generate different numbers. In TLS, this would result in the two VMs trying to read/write different traffic.
* Differences in implementation: different browsers support different ciphersuites and protocols. They will also send a different User-Agent header. Maybe they will render the same pages slightly differently (even different rounding can affect this). SQL queries might differ slightly, and it is hard to find out that they are essentially the same.
* How would you handle persistent state?
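The randomness point above is easy to demonstrate: even with a single shared, identically seeded source of random numbers, two implementations of the same protocol that merely draw from it in a different order already diverge. A toy sketch (the handshake functions are hypothetical stand-ins for two TLS libraries):

```python
import random

def handshake_lib_a(rng):
    client_random = rng.getrandbits(32)  # draws the nonce first
    session_id = rng.getrandbits(16)
    return (client_random, session_id)

def handshake_lib_b(rng):
    session_id = rng.getrandbits(16)     # same API, opposite draw order
    client_random = rng.getrandbits(32)
    return (client_random, session_id)

# Identical seed, yet the two "equivalent" libraries disagree,
# so the two VMs would try to read/write different traffic:
a = handshake_lib_a(random.Random(42))
b = handshake_lib_b(random.Random(42))
assert a != b
```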

Whenever you feel this could be a viable way, ask yourself: isn't there an even cheaper way of reaching the goal?

Regards,
Vít Šesták 'v6ak'

'81029438'1094328'0194328'0914328

Nov 16, 2016, 2:43:57 PM
to qubes-users
Hello Vít Šesták,

yes, I agree that IT designs (nearly) everything in a very complex way (or why else does every browser render the same data, e.g. HTML, slightly differently? It makes no real sense).

In the physical world you have the so-called Fit-Form-Function code. This means you define how long, wide, high and heavy an object is and what kind of function it has. Now, 30 years later, you replace this object 1 with an object 2, which must fulfill the same "interface" parameters, and again you know straight away what its function will be and how you must handle this object, without deep insider knowledge of why an engineer designed it more cleverly than 30 years ago.

Conclusion: if OS1 and OS2 (or even OS3 ... OSn) get, in parallel, the task to compress a file with method X, then, if all programmers always keep the FFC in mind, the output - the compressed file - will always look the same.

Now you don't need to know all the details of the different implementations: if OS1, OS2 and OS3 deliver the same result but the others deliver something totally different, you conclude that OS1, 2 and 3 have done a clean job, that the others are a little bit odd, and you skip those outputs.

In the end, your file system saves only files with a positive correlation.
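The scheme described above (accept what the majority of independent implementations agree on, skip the odd ones out) is a classic N-version voting pattern. A minimal sketch, assuming the outputs are directly comparable byte-for-byte:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the output produced by a strict majority of the
    independent implementations, plus the indices of dissenters."""
    (winner, count), = Counter(outputs).most_common(1)
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: all outputs blocked")
    dissenters = [i for i, out in enumerate(outputs) if out != winner]
    return winner, dissenters

# OS1, OS2, OS3 agree; OS4's compressed file is "a little bit odd":
winner, odd = majority_vote(["deadbeef", "deadbeef", "deadbeef", "ffff"])
```

Only `winner` would be saved; the dissenting indices would be logged for investigation.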

Ok, programmers are lazy and like to copy code between different OSes or applications.

This method will only work if the code involved in the task is 100% redundant - an independent development - or, even better, implemented in a different technology (CPU vs. FPGA vs. GPU). You must guarantee that bugs cannot be transferred (copied) between the independent codebases.

Sure, that's a hard piece of work - but if you achieve it, then you are able to check everything quite simply, like a black box, from the outside.

If the result is as expected - everything worked perfectly!
If not, someone doesn't know how to deliver professional code - or is corrupt and likes to deliver some kind of backdoor.

You can do the same with the RAM, or with playing a movie, copying a file, or encrypting a file.

Who really knows whether your encryption doesn't leak some bits into some other files, into the RAM, or elsewhere...?

But if you have N teams competing to deliver clean results, then you will achieve it.

The sky-guide system is vital and runs 4 different chipsets (there could be a bug in the chips, or a muon could pass by and suddenly destroy some part of the structure - and then nothing works as designed...), 4 independent operating systems, 4 different applications and, at the end, a voting system - if one is out of line, the other redundant, non-corrupt systems take over control, and it is all automatic.

But yes, the first task might be quite simple - like encrypting something via CPU, GPU and FPGA.

Ok, if you use the same backdoored prime number in all 4 redundant systems, you will be fooled - so for this kind of attack you need another countermeasure than pure redundancy. But why not: you can also check any prime number redundantly, to see whether it is safe or corrupt - the "opinions" of n different OSes might, or might not, be the same - by default.

Exactly in this field you can find very odd things...

Kind Regards

Jean-Philippe Ouellet

Nov 16, 2016, 3:09:42 PM
to '81029438'1094328'0194328'0914328, qubes-users
On Wed, Nov 16, 2016 at 2:43 PM, '81029438'1094328'0194328'0914328
<kerste...@gmail.com> wrote:
> ... idealistic description of heterogeneous computations and validating i/o proxy ...

This method of verification is not the panacea it may appear to be.

If an attacker can find vulnerabilities (potentially for different inputs at different times) in each respective system (which may or may not be that difficult in practice), then their exploit payloads could simply produce identically-incorrect results for an agreed-upon operation at a later time, and your validating proxy would not catch it, because all outputs are identical.

Vít Šesták

Nov 17, 2016, 2:39:15 AM
to qubes-users
I remember some more examples of redundant systems. For example, the ancient computer SAPO (see https://en.m.wikipedia.org/wiki/SAPO_(computer) ). Cardiac pacemakers are AFAIR reportedly also designed this way (different CPUs on different architectures with different code written by different people). But in all those cases (including your space example), the reason is reliability and safety (i.e., prevention of accidental failure), not security (i.e., prevention of someone intentionally forcing the system to do or leak something it should not do/leak). As both Jean-Philippe and I suggested, it is not a panacea. And it can also lower security if an attacker uses a covert channel.

Regards,
Vít Šesták 'v6ak'

'1093784'091384'091832'04918'03249819438

Nov 18, 2016, 10:56:38 AM
to qubes-users
Hello Jean-Philippe Ouellet,

yes, you are right: if more than 50% are corrupt and well coordinated, you get locked down by these "insider threats".

But if you use really independent teams - and perhaps there are some covert agents running around - then as long as they are not coordinated and are not the majority of the parallel independent channels, you will be able to do a simple black-box check.

In my opinion, this can even help with exactly the trusted BIOS boot chain. If 4 independent teams come to the same conclusion - even if you need many changes and new updates again and again due to better hardware support - then, if they do a clean job, all of them will finish with the same result.

This flight system has had no overruling at all in its lifetime, so its reliability and uptime were pretty high compared to other IT solutions.

Also, in maintenance you can calculate how to increase the uptime of critical systems with redundancy.

Kind Regards

'0918'3049182'304918'029348'019243

Nov 18, 2016, 11:03:26 AM
to qubes-users
Hello,

Here is also quite a bunch of self-healing engineering, if you would like to set up a self-secure system...

p. 24

http://cui.unige.ch/~dimarzo/papers/JAMT.pdf

Others in IT call it reconciliation, when you check two technologies against each other.

Kind Regards

'019384'0193284'0912834'09832'104

Nov 18, 2016, 11:15:52 AM
to qubes-users
Hello,

Redundancy Management Technique for Space Shuttle Computers:

The calculation of the same outputs by each critical computer and the synchronization of inputs are used to provide the means of achieving total failure coverage of flight-critical functions for a small computational resource and hardware cost.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.9216&rep=rep1&type=pdf

Why would the mission-critical functions of an avionics calculation be, in practice, so much different from a mission-critical encryption - might this not help to overcome all these human-factor errors of buggy code?

Kind Regards


Vít Šesták

Nov 18, 2016, 12:02:29 PM
to qubes-users
For encryption:

* You inherently have a problem with random numbers. Virtually anything nontrivial here needs a source of random numbers. You need both independent systems not only to use the same source of random numbers, but also to use them in the same way. This is possible, but not easy. I also would not call such implementations „totally independent“.
* Also, the configuration of both systems would have to be somewhat aligned.
* It could work then. It could detect a logic flaw or a bug like Heartbleed. It cannot fix a crypto design issue and cannot prevent data leaking through remote code execution (RCE).

For backdoored systems and RCE: I've already mentioned that in case of remote code execution or a backdoor, you can't prevent a data leak without fixing all relevant covert channels, which is far from easy. Moreover, the design of redundant systems introduces an inherent covert channel: the information whether the computation succeeded (all systems returned the same value) or failed (two systems returned different values) is 1 bit of information that a malicious system can leak (provided that all other systems return the same value). Having redundant systems makes the situation even worse in some ways – data can leak provided that *at least one system* leaks it through a covert channel. Now the set of people you trust is larger, not smaller.
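This inherent 1-bit channel can be illustrated directly: a single malicious replica leaks one bit per query simply by choosing whether to agree with the honest replica, because the success/failure of the comparison is observable. A toy sketch (all names hypothetical):

```python
def honest_replica(x):
    return x * 2

def malicious_replica(x, secret_bit):
    # Leak one bit per query: agree for 0, deliberately differ for 1.
    return x * 2 if secret_bit == 0 else x * 2 + 1

def redundant_run(x, secret_bit):
    a, b = honest_replica(x), malicious_replica(x, secret_bit)
    return "ok" if a == b else "blocked"  # publicly observable outcome

# Anyone who can see whether the computation succeeded reads the secret:
leak = [redundant_run(7, bit) for bit in (1, 0, 1)]  # blocked/ok/blocked
```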

Comparison with flight control: I don't think they do this in order to defeat backdoors or attacks. Rather, they want to detect (and recover from) accidental failures. No matter how similar those goals might look, they are in many ways different.

Regards,
Vít Šesták 'v6ak'