
CVE-2017-14482 - Red Hat Customer Portal


ken

Sep 21, 2017, 5:52:16 PM
to GNU Emacs List
Security vulnerability in emacs v.25.2 & earlier:


https://access.redhat.com/security/cve/CVE-2017-14482

Kaushal Modi

Sep 21, 2017, 6:03:37 PM
to geb...@mousecar.com, GNU Emacs List
This has already been fixed in the recently released Emacs 25.3 (emergency
release to fix just that).

I would recommend updating to Emacs 25.3. If that is not possible,
then apply the workaround mentioned in that link.

On Thu, Sep 21, 2017, 5:52 PM ken <geb...@mousecar.com> wrote:

> Security vulnerability in emacs v.25.2 & earlier:
>
>
> https://access.redhat.com/security/cve/CVE-2017-14482
>
> --

Kaushal Modi

ken

Sep 21, 2017, 7:07:29 PM
to Kaushal Modi, GNU Emacs List
Many people, including myself, have systems with package managers...
which have dependencies... all of which make upgrading outside of
package management unfeasible.

Alberto Luaces

Sep 22, 2017, 3:37:22 AM
to help-gn...@gnu.org
ken writes:

> Many people, including myself, have systems with package
> managers... which have dependencies... all of which make upgrading
> outside of package management unfeasible.

The workaround doesn't require installing or upgrading anything.

--
Alberto


Emanuel Berg

Sep 22, 2017, 3:49:03 AM
to help-gn...@gnu.org
Yes it does - it requires installing the
workaround :)

Obviously the corrected version in the
repositories is ideal.

It would be interesting to see the original
code that created this vulnerability. I mean,
so you know what *not* to write... ehem.

Also, it would be interesting to learn how it
was discovered. Wait, don't tell me! It was an
antivirus program, right? That program would
sure be interesting to see as well ;)

--
underground experts united
http://user.it.uu.se/~embe8573


ken

Sep 22, 2017, 12:41:05 PM
to help-gn...@gnu.org
On 09/22/2017 03:37 AM, Alberto Luaces wrote:
> ken writes:
>
>> Many people, including myself, have systems with package
>> managers... which have dependencies... all of which make upgrading
>> outside of package management unfeasible.
> The workaround doesn't require installing or upgrading anything.
>
Which is why I included the link... which you can find in my original post.



Emanuel Berg

Sep 22, 2017, 3:08:13 PM
to help-gn...@gnu.org
ken wrote:

>>> Many people, including myself, have systems
>>> with package managers... which have
>>> dependencies... all of which make upgrading
>>> outside of package management unfeasible.
>>>
>> The workaround doesn't require installing
>> or upgrading anything.
>>
> Which is why I included the link... which you
> can find in my original post.

I don't understand all the antagonism, if
that's what it is. Ken did good. The only one
who did comparably good is Ryu.

Mario Castelán Castro

Sep 22, 2017, 4:12:45 PM
to help-gn...@gnu.org
On 22/09/17 02:48, Emanuel Berg wrote:
> It would be interesting to see the original
> code that created this vulnerability. I mean,
> so you know what *not* to write... ehem.

That is the problem with nearly all contemporary programs: instead of
verifying that they are correct, it is verified that they are not
incorrect according to a small set of ways in which a program may be
incorrect. This idea, doomed to fail, is the foundation of conventional
testing, SMT-based symbolic execution (as in KLEE), lint-like tools and
manual source code auditing.

Paraphrasing Dijkstra: all these methods can prove that a program is
incorrect, but almost never (except for trivial toy programs) can they
prove that it is correct.

Only verifying that programs are correct using formal logic and an
appropriate specification can eliminate software bugs (hardware can
always malfunction, because of ionizing radiation, “metastability” of
flip-flops, because somebody disconnects the power cord, and many other
things).

At present there is limited infrastructure for verified programming, but
it can be done. See e.g.: https://cakeml.org/.

--
Do not eat animals; respect them as you respect people.
https://duckduckgo.com/?q=how+to+(become+OR+eat)+vegan


Emanuel Berg

Sep 22, 2017, 6:14:44 PM
to help-gn...@gnu.org
Mario Castelán Castro wrote:

> That is the problem with nearly all
> contemporary programs: Instead of verifying
> that they are correct; it is verified that
> they are not incorrect according to a small
> set of ways in which a program may be
> incorrect. [...]

Also, formal verification that is applied on
a model will only prove that *the model* is
correct, not the real thing.

There is one automated way of testing that
I know of that actually works, and that is for
shell tools with a super-simple text interface
that accepts a limited set of arguments.
Say a tool accepts one integer as its first
argument and one string as its second. Then it
is trivial to set up a test program that just
invokes the tool repeatedly with randomized
integers and strings. Because the input is
random, it might find borderline cases that
make absolutely no sense (but still shouldn't
break the program), cases which supposedly
rational human beings would never think to
input, and which thus, without the test
program, would never get tested.
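That loop can be sketched in a few lines of Python. Here `format_row`
is a made-up stand-in for such a two-argument tool, with a deliberately
planted corner case that random inputs tend to hit:

```python
import random
import string

def format_row(n, s):
    # Made-up stand-in for a tool taking one integer and one string.
    # Deliberately buggy: nobody thought about negative counts.
    if n < 0:
        raise ValueError("negative count")
    return s * n

def fuzz(fn, trials=500, seed=0):
    # Invoke fn repeatedly with randomized integers and strings;
    # return the first input pair that makes it blow up, else None.
    rng = random.Random(seed)
    for _ in range(trials):
        n = rng.randint(-1000, 1000)
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 20)))
        try:
            fn(n, s)
        except Exception:
            return (n, s)
    return None

failing = fuzz(format_row)
print(failing)  # some pair with a negative integer a human might never try
```

The seed makes the run reproducible, so a failing case can be replayed
once it is found.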

But for a huge software system like Emacs,
formal verification being obviously out of the
question, even simple-minded brute-force
testing is difficult to set up, at least for
anything bigger than the smallest module.el.
So the byte-compiler and style checks
(`checkdoc-current-buffer') are what you've got.

Instead, I think the best way to test is just
to have lots of people using it. At least major
flaws will surface really soon this way.

There are also languages like Dafny where you
can state formally what a function should do,
and the verification tool will match that to
your code. Problem is, it isn't refined to any
degree where it actually matters for
everyday programming. Often, the code will be
correct, and the formal description is correct
as well, only the verifier will still say it
doesn't compute. And programming should
obviously not be about tinkering with code and
expressions that already make sense, just so
that the computer will agree they do.
So straight down the line, this will amount to
toy/demo programs as well.

Here is an example:

method Main() {
  var n := f(4, 5);
  print "The result is ";
  print n;
}

method f(a: int, b: int) returns (r: int)
  requires a >= 0;
  requires b >= 0;
  ensures r == 4*a + b;
{
  r := 0;
  var i := 0;
  var j := 0;
  while (i < a || j < b)
    decreases a + b - (i + j);
    invariant 0 <= i <= a;
    invariant 0 <= j <= b;
    invariant r == 4*i + j;
  {
    if (j < b) {
      r := r + 1;
      j := j + 1;
    } else {
      r := r + 4;
      i := i + 1;
    }
  }
}

There is a Dafny mode for Emacs, and with Mono
(because it is a .NET thing to begin with),
perhaps one could hook that up into a neat IDE.
Still, it will only amount to poor Dafny.

Charles A. Roelli

Sep 23, 2017, 6:06:01 AM
to Mario Castelán Castro, help-gn...@gnu.org
The code that caused CVE-2017-14482 (aka Bug#28350) was 100% correct.
It was also far too powerful, so its behavior had to be properly
limited. There is no way to find such a "bug" without reading the
code and trying to understand its use.

Óscar Fuentes

Sep 23, 2017, 8:54:01 AM
to help-gn...@gnu.org
cha...@aurox.ch (Charles A. Roelli) writes:

> The code that caused CVE-2017-14482 (aka Bug#28350) was 100% correct.
> It was also far too powerful, so its behavior had to be properly
> limited.

The two sentences above are contradictory.

> There is no way to find such a "bug" without reading the
> code and trying to understand its use.

Maybe, in the Elisp case, this is true, but not in the general case.


Eli Zaretskii

Sep 23, 2017, 9:13:01 AM
to help-gn...@gnu.org
> From: Óscar Fuentes <o...@wanadoo.es>
> Date: Sat, 23 Sep 2017 14:53:36 +0200
>
> cha...@aurox.ch (Charles A. Roelli) writes:
>
> > The code that caused CVE-2017-14482 (aka Bug#28350) was 100% correct.
> > It was also far too powerful, so its behavior had to be properly
> > limited.
>
> The two sentences above are contradictory.

Not really. But they don't tell the whole story: the vulnerability
was actually caused by Gnus, MH-E, and perhaps other MUAs who decided
to automatically support enriched text, without checking the code
first. Otherwise, enriched.el per se has/had no problem whatsoever.

Glenn Morris

Sep 23, 2017, 1:19:07 PM
to Eli Zaretskii, help-gn...@gnu.org
Eli Zaretskii wrote:

> But they don't tell the whole story: the vulnerability was actually
> caused by Gnus, MH-E, and perhaps other MUAs who decided to
> automatically support enriched text, without checking the code first.
> Otherwise, enriched.el per se has/had no problem whatsoever.

I disagree. Simply opening a file in an unpatched Emacs can run
arbitrary code with zero prompting. This is a massive security risk that
is entirely internal to enriched.el (possibly with the 'display property
more generally). It does get worse that Gnus would trust enriched.el to
decode mail messages too. But anyone using Emacs from 21.1 to 25.2
should be aware of this issue, whether or not they use Emacs for mail.

Eli Zaretskii

Sep 23, 2017, 1:34:32 PM
to help-gn...@gnu.org
> From: Glenn Morris <r...@gnu.org>
> Cc: help-gn...@gnu.org
> Date: Sat, 23 Sep 2017 13:18:59 -0400
>
> Eli Zaretskii wrote:
>
> > But they don't tell the whole story: the vulnerability was actually
> > caused by Gnus, MH-E, and perhaps other MUAs who decided to
> > automatically support enriched text, without checking the code first.
> > Otherwise, enriched.el per se has/had no problem whatsoever.
>
> I disagree. Simply opening a file in an unpatched Emacs can run
> arbitrary code with zero prompting.

How did that file end up in a directory you can access? Why are you
visiting a file about which you know nothing at all?

And how is that different from a Lisp package that creates display
properties out of thin air?

> This is a massive security risk that is entirely internal to
> enriched.el (possibly with the 'display property more generally).

More generally, Emacs itself. Even more generally, any software you
use.

> It does get worse that Gnus would trust enriched.el to decode mail
> messages too. But anyone using Emacs from 21.1 to 25.2 should be
> aware of this issue, whether or not they use Emacs for mail.

If you use software you didn't write, you are at risk. If you don't
want the risk of ending up in a car crash, the only way is not to
leave home.

Bob Proulx

Sep 23, 2017, 4:28:10 PM
to help-gn...@gnu.org
ken wrote:
> Many people, including myself, have systems with package managers... which
> have dependencies... all of which make upgrading outside of package
> management unfeasible.

That's great! Using distributions with security teams much simplifies
things for the end user. Otherwise every user would need to closely
follow each and every one of the zillion software projects installed
on their system. Software packaging makes this simpler.

In which case the best answer for people using pre-packaged emacs is
to install the security upgrade from their distribution which includes
the fix.

Spot-checking things now (some days later), it looks like the major
distributions I looked at have already released fixed versions
for people using those distros to install. Just upgrade to
them. Life is good!

Bob


Yuri Khan

Sep 23, 2017, 4:51:21 PM
to Eli Zaretskii, help-gn...@gnu.org
On Sun, Sep 24, 2017 at 12:34 AM, Eli Zaretskii <el...@gnu.org> wrote:

> Why are you visiting a file about which you know nothing at all?

Why not? Opening a file in a text editor is not normally considered a
hazardous activity.

Mario Castelán Castro

Sep 23, 2017, 10:09:03 PM
to help-gn...@gnu.org
On 22/09/17 17:14, Emanuel Berg wrote:
> Also, formal verification that is applied on
> a model will only prove that *the model* is
> correct, not the real thing.

You seem to be confused: verifying that a program is correct *requires*
a model. Verifying the model is a different and separate task.

> […] Then it is trivial to setup a test
> program that will just invoke repeatedly with
> randomized integers and strings. […]

Random testing is very inefficient because most inputs are garbage and
are treated uniformly by the program under test. For example, feeding
random input to a compiler will result almost surely in only ill-formed
programs and thus will not exercise anything but the parser. Good
testing must exercise code paths that only run in rare corner cases and
the probability that random testing achieves this is very small.
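The compiler point is easy to check empirically. A throwaway sketch in
Python, using Python's own compile() as the "compiler" under test (the
setup is illustrative, not from any real fuzzing campaign):

```python
import random
import string

rng = random.Random(42)

def random_source(length=20):
    # A random string of printable characters, standing in for
    # "random input fed to a compiler".
    return "".join(rng.choice(string.printable) for _ in range(length))

attempts = 1000
parsed = 0
for _ in range(attempts):
    try:
        compile(random_source(), "<fuzz>", "exec")
        parsed += 1  # only whitespace/comment-like accidents land here
    except (SyntaxError, ValueError):
        pass  # rejected by the parser; nothing deeper was exercised

# Almost every random input dies in the parser, so later stages
# (name resolution, code generation) are never tested at all.
print(f"{parsed} of {attempts} random inputs even parsed")
```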

But like I said, testing is fundamentally flawed. Testing can tell you
that a program is defective, but not that a program is free from defects!

> There are also languages like Dafny where you
> can state formally what a function should do,
> and the verification tool will match that to
> your code. […]

Taking a glance at Dafny, it seems that it trusts the answers of an SMT
solver (Microsoft's Z3) and does not generate proofs of correctness (but
I can easily be wrong; I did not check in depth because I dislike .NET).
This is not what I am referring to when I say “proving programs
correct”. I mean software like CakeML <https://cakeml.org/>. It is
linked to a proof assistant (HOL4). You can develop the
specification of the program there and prove the program correct
according to the specification.

There is still much work to be done to make formal verification tools
like this more usable, but it must be noted that in the case of CakeML
it *already* works. CakeML is itself formally verified using HOL4.

Unfortunately there is little documentation material to learn to use
CakeML. Using HOL4 or other proof assistant requires at least a solid
intuition for formal logic and some knowledge in mathematics. Anybody
wanting to call himself a programmer must become comfortable with using
a proof assistant because this is a prerequisite to writing correct
software. *ANY* other approach leads to defective software, *especially*
ordinary testing[1].

Notes:

[1]: There is also software that is not itself proved correct, but
generates a solution for a problem along with a proof that the solution
is correct. For example, many SAT solvers meet this description.
Provided one can verify the proof, one can then ascertain that the
solution is correct, but the program may still generate incorrect
“solutions” in other cases.
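For the satisfiable case, the certificate a SAT solver emits is just the
assignment itself, and checking it takes only a few lines. A sketch
using DIMACS-style clause lists (the unsatisfiable case needs a real
proof format and is much more involved):

```python
def check_sat_certificate(clauses, assignment):
    # clauses: list of clauses, each a list of nonzero ints in
    # DIMACS style (3 means x3 is true, -3 means x3 is false).
    # assignment: dict mapping variable number -> bool.
    # Returns True iff every clause has at least one true literal.
    # Checking is cheap even when finding the assignment was hard.
    def literal_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    return all(any(literal_true(lit) for lit in clause)
               for clause in clauses)

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(check_sat_certificate(formula, {1: True, 2: False, 3: True}))   # True
print(check_sat_certificate(formula, {1: False, 2: False, 3: False})) # False
```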

Eli Zaretskii

Sep 23, 2017, 10:53:42 PM
to help-gn...@gnu.org
> From: Yuri Khan <yuri....@gmail.com>
> Date: Sun, 24 Sep 2017 03:50:51 +0700
> Cc: "help-gn...@gnu.org" <help-gn...@gnu.org>
A file whose source you don't trust or are unfamiliar with should
initially be examined with find-file-literally, if your security is
indeed important for you. That emulates what most other text editors
do when you open a file.

Emanuel Berg

Sep 24, 2017, 2:31:49 AM
to
Agree 100%

Emanuel Berg

Sep 24, 2017, 2:39:02 AM
to
Bob Proulx wrote:

> That's great! Using distributions with
> security teams much simplifies things for the
> end user. Otherwise every user would need to
> closely follow each and every one of the
> zillion software projects installed on their
> system. Software packaging makes
> this simpler.

Yes, except for some cases, because it requires
that enough people use it so that the stuff is
kept up to date.

For example, there should be many lispers
reading this. SBCL, ECL, CCL, what have you.
Take a look at the software in your repos.
Compare it to the versions you'd find on the
web. People aren't cool enough in general for
the really cool people to find what they want.

Why it has to be like this I have no idea.
Why can't you get the latest stuff the
same way?

And it is not about getting the bleeding edge
just for the sake of it. Some stuff is really,
really outdated and there is no way around it
except bypassing the package
manager altogether.

Emanuel Berg

Sep 24, 2017, 2:47:38 AM
to
Mario Castelán Castro wrote:

> You seem to be confused, verifying that
> a program is correct *requires* a model.
> Verifying the model is a different and
> separate task.

A very, very small fraction of programmers will
ever care to (or indeed be able to) create
a model of the program just to verify the model
and then verify that the model is in agreement
with the program - this is just insane to ask
of anyone, and it isn't realistic one bit to
ever be a practical alternative.

> Random testing is very inefficient because
> most inputs are garbage and are treated
> uniformly by the program under test.
> For example, feeding random input to
> a compiler will result almost surely in only
> ill-formed programs

To a compiler - ? This can be done with simple
shell tools that perform basic computation!

> But like I said, testing is fundamentally
> flawed. Testing can tell you that a program
> is defective, but not that a program is free
> from defects!

That's an intellectual exercise university
buffs do on toy/demo programs. But here, it is
not programming theory for space-fleet cadets
anymore, but a whole different domain, which
I like to call "reality".

Philipp Stephani

Sep 24, 2017, 3:14:18 AM
to Eli Zaretskii, help-gn...@gnu.org
That's an unrealistic requirement; nobody will ever do this. Emacs must
make sure never to run untrusted code when visiting a file, unless the user
explicitly asked for it (via the enable-local-eval variable).

Emanuel Berg

Sep 24, 2017, 3:48:16 AM
to
Philipp Stephani wrote:

> That's an unrealistic requirement; nobody
> will ever do this. Emacs must make sure
> never to run untrusted code when visiting
> a file, unless the user explicitly asked
> for it (via the enable-local-eval variable).

How do you explore a computer system?
That's right. Let's see what's in the files!

Mario Castelán Castro

Sep 24, 2017, 9:39:21 AM
to help-gn...@gnu.org
On 24/09/17 01:47, Emanuel Berg wrote:
> A very, very small fraction of programmers will
> ever care to (or indeed be able to) create
> a model of the program just to verify the model
> and then verify that the model is in agreement
> with the program - this is just insane to ask
> of anyone, and it isn't realistic one bit to
> ever be a practical alternative.

That is not how formal verification works in practice. Instead many
(probably most) tools work by programming directly in the logic of the
proof assistant and then extracting (without manual intervention) a
program in a conventional programming language that has the same
behavior as what was programmed in the logic.

Contrary to what you assert, there are computer programs well beyond
the trivial that have a specification and a proof of correctness. I already
mentioned the CakeML compiler <https://cakeml.org/>. Other examples are
the seL4 microkernel <https://sel4.systems/> and the CompCert compiler.

It is true that formal verification of a program requires several times
the effort compared to writing a comparable non-verified program (but
with many bugs). I argue that this effort is necessary, because it is
the only way to write correct software.

I think you will agree that although writing undocumented software is
easier than writing well-documented software, writing documentation is
part of software development and thus undocumented software must be
considered incomplete. In the same way, I extend this to the claim that
formal verification is part of software development and thus unverified
software is incomplete.

Although writing incomplete software is easier than writing complete
software, the task should not be considered solved. It is like just
building half of a bridge. Surely it is easier than building all of it;
but it is not enough.

> To a compiler - ? This can be done with simple
> shell tools that perform basic computation!

If by “simple shell tools that perform basic computation” you mean
testing with random inputs, note that I already explained why this will
be an awful testing method for a compiler. I will not repeat the
explanation.

Óscar Fuentes

Sep 24, 2017, 10:43:15 AM
to help-gn...@gnu.org
Mario Castelán Castro <mario...@yandex.com> writes:

> It is true that formal verification of a program requires several times
> the effort compared to writing a comparable non-verified program (but
> with many bugs). I argue that this effort is necessary, because it is
> the only way to write correct software.
>
> I think you will agree that although writing undocumented software is
> easier than writing well-documented software, writing documentation is
> part of software development and thus undocumented software must be
> considered incomplete. In the same way, I extend this to the claim that
> formal verification is part of software development and thus unverified
> software is incomplete.
>
> Although writing incomplete software is easier than writing complete
> software, the task should not be considered solved. It is like just
> building half of a bridge. Surely it is easier than building all of it;
> but it is not enough.

It seems that you think that formal verification says that the software
is correct. That's in theory. Practice is different, as usual.

Instead of writing a lengthy explanation about why formal verification
can't be a guarantee about the correctness of a piece of software, I'll
simply reference a study about failures on verified systems:

https://blog.acolyer.org/2017/05/29/an-empirical-study-on-the-correctness-of-formally-verified-distributed-systems/

"Through code review and testing, we found a total of 16 bugs, many of
which produce serious consequences, including crashing servers,
returning incorrect results to clients, and invalidating verification
guarantees."


to...@tuxteam.de

Sep 24, 2017, 10:54:12 AM
to help-gn...@gnu.org

On Sun, Sep 24, 2017 at 04:42:54PM +0200, Óscar Fuentes wrote:

[...]

> It seems that you think that formal verification says that the software
> is correct. That's in theory. Practice is different, as usual.

And to embellish this discussion with an Argument by Authority:

"Beware of bugs in the above code; I have only proved it
correct, not tried it."

http://www-cs-faculty.stanford.edu/~knuth/faq.html

-- t

Maxim Cournoyer

Oct 2, 2017, 4:13:31 PM
to help-gn...@gnu.org
Have you heard about GNU Guix/GuixSD[1]? While not all the packages are
always at the latest version, the maintainers strive to keep the CVEs patched
and it is otherwise straightforward to update a package definition and
use it locally (no need to be root!), or better, contribute the
patch back.

In fact, I see some people starting to use Guix atop traditional distros
to get bleeding edge packages.

Maxim

[1] https://www.gnu.org/software/guix/

Robert Thorpe

Oct 2, 2017, 4:13:36 PM
to Philipp Stephani, el...@gnu.org, help-gn...@gnu.org
Philipp Stephani <p.ste...@gmail.com> writes:

> Eli Zaretskii <el...@gnu.org> schrieb am So., 24. Sep. 2017 um 04:54 Uhr:
>
>> > From: Yuri Khan <yuri....@gmail.com>
>> > Date: Sun, 24 Sep 2017 03:50:51 +0700
>> > Cc: "help-gn...@gnu.org" <help-gn...@gnu.org>
>> >
>> > On Sun, Sep 24, 2017 at 12:34 AM, Eli Zaretskii <el...@gnu.org> wrote:
>> >
>> > > Why are you visiting a file about which you know nothing at all?
>> >
>> > Why not? Opening a file in a text editor is not normally considered a
>> > hazardous activity.
>>
>> A file whose source you don't trust or are unfamiliar with should
>> initially be examined with find-file-literally, if your security is
>> indeed important for you. That emulates what most other text editors
>> do when you open a file.
>>
>>
> That's an unrealistic requirement; nobody will ever do this. Emacs must
> make sure never to run untrusted code when visiting a file, unless the user
> explicitly asked for it (via the enable-local-eval variable).

I think it would be very useful if Emacs had a concept of trusted-zones.

So, a person could declare their main local partition to be trusted. Or
they could declare it to be trusted except for the browser cache (for
example).

They could declare a lower degree of trust for some directories or
mount-points.

BR,
Robert Thorpe


Emanuel Berg

Oct 2, 2017, 4:13:49 PM
to help-gn...@gnu.org
Maxim Cournoyer wrote:

> Have you heard about GNU Guix/GuixSD[1]?
> While not all the packages are always at the
> latest version, the maintainers strive to
> keep the CVEs patched and it is otherwise
> straightforward to update a package
> definition and use it locally (no need to be
> root!), or better, contribute the patch back.
>
> In fact, I see some people starting to use
> Guix atop traditional distros to get bleeding
> edge packages.
>
> [1] https://www.gnu.org/software/guix/

I hadn't heard of it, but from your description
it sounds like exactly what I call for. I'll try it!

Emanuel Berg

Oct 2, 2017, 4:13:50 PM
to help-gn...@gnu.org
Mario Castelán Castro wrote:

>> To a compiler - ? This can be done with
>> simple shell tools that perform
>> basic computation!
>
> If by “simple shell tools that perform basic
> computation” you mean testing with random
> inputs, note that I already explained why
> this will be an awful testing method for
> a compiler.

It won't work for a compiler.

Emanuel Berg

Oct 2, 2017, 4:13:50 PM
to help-gn...@gnu.org
Óscar Fuentes wrote:

> It seems that you think that formal
> verification says that the software is
> correct. That's in theory. Practice is
> different, as usual.
>
> Instead of writing a lengthy explanation
> about why formal verification can't be
> a guarantee about the correctness of a piece
> of sotware, I'll simply reference a study
> about failures on verified systems:

Hold - perhaps the verification has to be
verified as well?

C'mon now, this is just another
Computer Science hangup. Just like
functional programming, or testing for that
matter - which also is an academic pursuit, by
the way! [1] - but as always, there is no
silver bullet solution.

If I'd send the space fleet to the oldest
galaxies of the universe, I'd like all methods
anyone could think of to make as sure as
possible the software is correct.

I'd start with very skilled and motivated
programmers, proceed with sound programming
practices, then code review, and then
excessive testing.

I suppose formal verification would be
a distant fourth.

[1]

@book{ammann,
  author    = {Paul Ammann and Jeff Offutt},
  title     = {Introduction to Software Testing},
  publisher = {Cambridge University Press},
  year      = 2008,
  ISBN      = {978-0-521-88038-1},
  edition   = {6th edition}
}
Mario Castelán Castro

Oct 2, 2017, 4:14:41 PM
to help-gn...@gnu.org
On 24/09/17 09:42, Óscar Fuentes wrote:
> It seems that you think that formal verification says that the software
> is correct. That's in theory. Practice is different, as usual.

This depends on what exactly is meant by “correct”.

If by “correct” we mean a formula in *formal logic*, then yes, we can
prove that the program behaves correctly.

If by “correct” we mean an informal concept, then proving correctness
is in general not possible, because a specification in formal logic is
required to prove anything. One may believe that one has formalized
correct behavior but later find that the formalization does not
accurately capture the informal concept.

The cases of the “bugs” mentioned in the paper you referenced are errors
in formalizing “correct behavior”. That this is possible is not breaking
news, as you seem to think. This is a caveat well known to anybody
involved in formal verification.

Specifically, several of those “bugs” are errors in describing the
behavior of the environment in which the program is assumed to run. One
possible view in this circumstance is that the program *is* correct, but
it was run in an environment where the formal guarantees do not apply.
This is similar to how one can prove the implication that *if* ‘X’ is a
real number *then* ‘X^2≥0’. Suppose somebody applies the conclusion
where the antecedent fails; for example, in the complex numbers, then
the conclusion is false, but so is the antecedent, so there is no
contradiction here.

There is also the issue of having a fundamentally unsound logic. We can
reason about this risk as follows: The underlying logic of many proof
assistants is, or can, be proved sound (note that soundness implies
consistency) assuming that ZFC is consistent. In turn we have reasons to
believe that ZFC is consistent (I will not elaborate on that because it
deviates too much from the original topic of this conversation).

> Instead of writing a lengthy explanation about why formal verification
> can't be a guarantee about the correctness of a piece of software, I'll
> simply reference a study about failures on verified systems:
>
> https://blog.acolyer.org/2017/05/29/an-empirical-study-on-the-correctness-of-formally-verified-distributed-systems/

The paper is of interest to me. Thanks for bringing it to my
attention. I only took a glance, but maybe I will eventually read it in
detail. The blog post is just padding; you should have linked the paper
directly.

Glenn Morris

Oct 2, 2017, 4:14:42 PM
to Eli Zaretskii, help-gn...@gnu.org
Eli Zaretskii wrote:

> A file whose source you don't trust or are unfamiliar with should
> initially be examined with find-file-literally, if your security is
> indeed important for you. That emulates what most other text editors
> do when you open a file.

Wow. I find this an extraordinary statement. For example, it means
that "emacs [-Q] somefile" could eg happily delete your home directory.
Please reconsider.

Mario Castelán Castro

Oct 2, 2017, 4:14:42 PM
to help-gn...@gnu.org
On 24/09/17 18:06, Emanuel Berg wrote:
> Hold - perhaps the verification has to be
> verified as well?

What any proof of correctness proves is an implication saying roughly:
“IF the environment meets these requisites THEN the program does […]”.

The “bugs” described in the aforementioned paper are not a flaw in the
proof. The source of the problem is that EITHER the programs under test
were being used in an environment that does not meet the requisites (i.e.:
“IF the environment meets”) OR the user was expecting the program to do
something different from what is guaranteed by the conclusion “the
program does […]”.

> C'mon now, this is just another
> Computer Science hangup. Just like
> functional programming, or testing for that
> matter - which also is an academic pursuit, by
> the way! [1] - but as always, there is no
> silver bullet solution.

This is the prevalent attitude among programmers, and this is the reason
that we are showered by an endless stream of security patches and bug fixes.

> If I'd send the space fleet to the oldest
> galaxies of the universe, I'd like all methods
> anyone could think of to make as sure as
> possible the software is correct.
>
> I'd start with very skilled and motivated
> programmers, proceed with sound programming
> practices, then code review, and then
> excessive testing.
>
> I suppose formal verification would be
> a distant fourth.

Well, then it is a very good thing that you are NOT in charge of
designing that piece of software.

Emanuel Berg

Oct 2, 2017, 4:14:42 PM
to help-gn...@gnu.org
Mario Castelán Castro wrote:

> This is the prevalent attitude among
> programmers

Perhaps because it makes sense? It is known as
"conventional wisdom".

> we are showered by an endless stream of
> security patches and bug fixes.

Yes, and what is the problem with that?

>> If I'd send the space fleet to the oldest
>> galaxies of the universe, I'd like all
>> methods anyone could think of to make as
>> sure as possible the software is correct.
>>
>> I'd start with very skilled and motivated
>> programmers, proceed with sound programming
>> practices, then code review, and then
>> excessive testing.
>>
>> I suppose formal verification would be
>> a distant fourth.
>
> Well, then it is a very good thing that you
> are NOT in charge of designing that piece
> of software.

What do you mean "not in charge"?

http://user.it.uu.se/~embe8573/sounds/up.mp3

Emanuel Berg

unread,
Oct 2, 2017, 4:14:44 PM10/2/17
to help-gn...@gnu.org
Glenn Morris wrote:

> Wow. I find this an extraordinary statement.
> For example, it means that "emacs [-Q]
> somefile" could eg happily delete your home
> directory. Please reconsider.

On Unix systems we like to say everything is
a file [1].

The first thing you do when you don't
understand something is to have a glance under
the hood which translates to reviewing one or
more files.

This is such everyday behavior that I never even
considered it could cause the kind of damage
you describe. If indeed it can, this should be
the exception and the default behavior should
not allow it.

[1] "For example, let's say you want to find
information about your CPU. The /proc
directory contains a special file –
/proc/cpuinfo – that contains
this information.

You don't need a special command that
tells you your CPU info – you can just
read the contents of this file using any
standard command that works with
plain-text files. For example, you could
use the command cat /proc/cpuinfo to print
this file's contents to the terminal –
printing your CPU information to
the terminal.

You could even open /proc/cpuinfo in
a text editor to view its contents."

https://www.howtogeek.com/117939/htg-explains-what-everything-is-a-file-means-on-linux/
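In the same spirit as the quoted footnote, /proc/cpuinfo can be read from
any language that handles plain files; a small Python sketch (Linux-only,
since /proc is Linux-specific):

```python
def cpu_model():
    """Return the first "model name" entry from /proc/cpuinfo, or None.

    /proc/cpuinfo is an ordinary (virtual) text file on Linux, so any
    plain-file API can read it -- no special CPU-querying call needed.
    """
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    return line.split(":", 1)[1].strip()
    except FileNotFoundError:
        pass  # not on Linux, or /proc not mounted
    return None

print(cpu_model())
```

(On some systems, e.g. many ARM boards, the file has no "model name"
field and the sketch simply returns None.)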

Ludwig, Mark

unread,
Oct 2, 2017, 4:14:45 PM10/2/17
to Glenn Morris, Eli Zaretskii, help-gn...@gnu.org
> From Glenn Morris, Monday, September 25, 2017 4:27 PM
>
> Eli Zaretskii wrote:
>
> > A file whose source you don't trust or are unfamiliar with should
> > initially be examined with find-file-literally, if your security is
> > indeed important for you. That emulates what most other text editors
> > do when you open a file.
>
> Wow. I find this an extraordinary statement. For example, it means
> that "emacs [-Q] somefile" could eg happily delete your home directory.
> Please reconsider.

It is an unhappy reality, but this is no different from other sophisticated
file formats. Consider the wisdom of "firefox foo.html" where
you do not know what is in foo.html. You may /think/ you just want to
"view" what is in foo.html....



Óscar Fuentes

unread,
Oct 2, 2017, 4:14:50 PM10/2/17
to help-gn...@gnu.org
Mario Castelán Castro <mario...@yandex.com> writes:

> On 24/09/17 09:42, Óscar Fuentes wrote:
>> It seems that you think that formal verification says that the software
>> is correct. That's in theory. Practice is different, as usual.
>
> This depends on what exactly is meant by “correct”.

"correct" means that the client (the people who required the software)
says that the program fulfills his requirements. Sometimes you need to
wait an infinite amount of time for obtaining client's approbation :-)

> If by “correctness” it is meant a formula in *formal logic*, then yes,
> we can prove that the program behaves correctly.

Ever heard of a chap named Kurt Gödel? Or his cousin Alan Turing? :-)

> If by “correct” it is meant an informal concept, then proving
> correctness is in general not possible because a specification in formal
> logic is required to prove anything. One may believe that one has
> formalized correct behavior but later one finds that the formalization
> does not accurately capture the informal concept.

Yes. But we have to deal with informal specifications. Furthermore, it
is almost certain that there are "correct" informal specifications that
do not admit a verifiable formal specification.

> The cases of the “bugs” mentioned in the paper you referenced are error
> in formalizing “correct behavior”. That this is possible is not breaking
> news, as you seem to think. This is a caveat well known to anybody
> involved in formal verification.

Yet you seem to give it little importance. Once we assume that a
verifiable specification might be incorrect in the sense you mention, we
must acknowledge that the verification itself does not say that the
program is correct (wrt the "informal yet correct" specification).

That is, the formal approach might help to improve software quality, but
it is no guarantee of bug-free software.

Why did I say "might" above? I postulate that there will be cases where
experts will consider a program written in a traditional but convenient
language more trustworthy than another which passed verification. This
is because some formal specifications will look more suspicious than a
program that implements the informal specification, because of the
constraints imposed by the formalization process.

> Specifically, several of those “bugs” are errors in describing the
> behavior of the environment in which the program is assumed to run. One
> possible view in this circumstance is that the program *is* correct, but
it was run in an environment where the formal guarantees do not apply.
> This is similar as how one can prove the implication that *if* ‘X’ is a
> real number *then* ‘X^2≥0’. Suppose somebody applies the conclusion
> where the antecedent fails; for example, in the complex numbers, then
> the conclusion is false, but so is the antecedent, so there is no
> contradiction here.
>
> There is also the issue of having a fundamentally unsound logic. We can
> reason about this risk as follows: The underlying logic of many proof
assistants is, or can be, proved sound (note that soundness implies
> consistency) assuming that ZFC is consistent. In turn we have reasons to
> believe that ZFC is consistent (I will not elaborate on that because it
> deviates too much from the original topic of this conversation).
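As a concrete aside, the quoted “X real ⟹ X²≥0” point can be checked in
a few lines of Python (illustrative only):

```python
# The implication: IF x is a real number THEN x**2 >= 0.
for x in (-3.5, 0.0, 2.0):          # antecedent holds for these
    assert x ** 2 >= 0              # ... and so does the conclusion

# For a complex argument the antecedent fails, and the conclusion may too:
z = 1j                              # the imaginary unit, not a real number
assert z ** 2 == -1                 # squares need not be non-negative
# No contradiction: the theorem only ever spoke about real numbers.
print("checked")
```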

Oh! The consistency of ZFC! Now we are on pure mathematical land! :-)

Software is an industry. We must provide what is requested from us, in
terms of functionality, performance and cost (both in time and money).
Formal methods are being proposed as (yet another) silver bullet, which,
like every other silver bullet we were sold in the past, will fail to
deliver its grandiose promise. An improvement? Absolutely. For some
types of software formal verification is a huge advance. But for other
classes of software (see it? types/classes :-) its impact will not go
further than the indirect improvement gained from building upon solid
implementations of algorithms, at least on the cases where the price to
pay on performance is not too much.

>> Instead of writing a lengthy explanation about why formal verification
>> can't be a guarantee about the correctness of a piece of software, I'll
>> simply reference a study about failures on verified systems:
>>
>> https://blog.acolyer.org/2017/05/29/an-empirical-study-on-the-correctness-of-formally-verified-distributed-systems/
>
> The paper is of interest to me. Thanks for bringing it into my
> attention. I only took a glance but maybe I will eventually read it with
> detail. The blog post is just padding; you should have linked the paper
> directly.

You are welcome. The blog post is a good comment on the paper that
exposes the gist of it and might be of interest to non-experts or people
who are too busy.


Mario Castelán Castro

unread,
Oct 2, 2017, 4:14:53 PM10/2/17
to help-gn...@gnu.org
On 25/09/17 16:49, Emanuel Berg wrote:
> Mario Castelán Castro wrote:
>
>> This is the prevalent attitude among
>> programmers
>
> Perhaps because it makes sense? It is known as
> "conventional wisdom".
>
>> we are showered by an endless stream of
>> security patches and bug fixes.
>
> Yes, and what is the problem with that?

Obviously each such fix is evidence that a security vulnerability or
another defect existed in the first place. A shower of fixes means that
bugs are extremely prevalent in software. Otherwise it would have been
impossible to sustain this tremendous bug discovery rate for years
(check any web front end to the CVE database for details).

Thus it is clear that the mainstream approach to writing software is
failing to deliver reliable software. That is why we need an
alternative. Among all possible approaches, *only* formal verification
has the possibility of virtually eliminating software defects.

Note that errors in the model of the environment (as in the paper
previously mentioned in this conversation) are not a *consequence* of
formal verification. This is just a symptom, and moreover it is routine
in non-formally verified software (i.e.: the developers make assumptions
about the environment that will not always be met).

For example, in Linux[1] they use the -fno-strict-aliasing compiler
flag, which is just a way to shove under the rug the problem of some
incorrect assumptions they make about the semantics of C. This is
an error of the implicit model of the environment. It is not stated in
formal logic, but it is an error nonetheless.

Formal verification provides a solution for this problem: If the low
level software that makes up the environment for high level software has
a formal specification, then the possibility of errors in the model of
the environment can be eliminated by using the pre-existing formal model
of the environment.

[1] I mean Linux, the *kernel* used in GNU/Linux and other OS.

Emanuel Berg

unread,
Oct 2, 2017, 4:14:53 PM10/2/17
to help-gn...@gnu.org
Mario Castelán Castro wrote:

> Thus it is clear that the mainstream approach
> to writing software is failing to delivers
> reliable software. That is why we have the
> need for an alternative. Among all possible
> approaches, *only* formal testing has the
> possibility of virtually eliminating
> software defects.

Utopian revolutionary ideas!

But guess what? In 1000 years there will be
knives that are sharper, safer, lighter,
stiffer (yet flexible where/if it is an
advantage), supremely balanced, made in
materials that will never rust or corrode, with
a blade that will never break and a handle that
is super-ergonomic -

still, even the master carpenter on
Easter Island will occasionally make an
incorrect cut, and once in a blue moon actually
cut himself, using this knife!

Emanuel Berg

unread,
Oct 2, 2017, 4:15:01 PM10/2/17
to help-gn...@gnu.org
Ludwig, Mark wrote:

> It is an unhappy reality, but this is no
> different from other sophisticated file
> formats. Consider the wisdom of "firefox
> foo.html" where you do not know what is in
> foo.html. You may /think/ you just want to
> "view" what is in foo.html....

What's with all the comparison to what other
people do with their software?

If they got it wrong, all the more reason for
us to get it right!

PS. Facts for fans: Lisp is from 1958 onwards,
Emacs from 1976, Netscape is a youngster
from 1994 and Firefox, from 2002, is either
a ridiculously young chap or a female whose
absolute prime is but a few years away :)

Berg, Emanuel

Ludwig, Mark

unread,
Oct 2, 2017, 4:15:18 PM10/2/17
to Emanuel Berg, help-gn...@gnu.org
> From: Emanuel Berg, Tuesday, September 26, 2017 12:51 AM
>
> If they got it wrong, all the more reason for
> us to get it right!

I don't see anyone arguing that the status quo is /good/,
or that it should stay as is.

By all means, help make it better. This thread is about how
to deal with the existing software security risk. That's why it's an
unhappy reality.


Mario Castelán Castro

unread,
Oct 2, 2017, 4:15:23 PM10/2/17
to help-gn...@gnu.org
On 25/09/17 18:58, Óscar Fuentes wrote:
> Mario Castelán Castro <mario...@yandex.com> writes:
>> This depends on what exactly is meant by “correct”.
>
> "correct" means that the client (the people who required the software)
> says that the program fulfills his requirements. Sometimes you need to
> wait an infinite amount of time for obtaining client's approbation :-)

The same answer applies: If a client either provides himself or accepts
a formula in formal logic as a description of his requirements, then
yes, we can prove that a program is correct according to this concept.

If the client can not provide an *absolutely accurate* description (this
is necessarily a specification in formal logic) of what his requirements
are, then we can not assure the client that the program meets his
requirements. This is not a fault of the programmer, but of the client
for being vague about what his requirements are.

>> If by “correctness” it is meant a formula in *formal logic*, then yes,
>> we can prove that the program behaves correctly.
>
> Ever heard of a chap named Kurt Gödel? Or his cousin Alan Turing? :-)

What you are doing here is intellectual fraud. You are conveying the
impression that there is an obstacle that prevents formal verification,
yet you did not even _mention_ what this obstacle is.

Given that you mentioned Gödel, I guess (I can merely *guess* your
intent because you did not explain it; this is your fault, not mine)
that you believe that either the Gödel-Rosser incompleteness
theorem[Mend, p. 210] or one of the many variations is an obstacle for
formal verification. This is incorrect. What these theorems state is
that in any formal system that meets certain prerequisites, there is an
assertion P such that neither P nor ¬P are a theorem of the formal
system. Also, it seems that you are confusing algorithmically undecidable
problems (like the halting problem) with formally undecidable
propositions (like the axiom of choice within ZF).

In practice, using well-established logic systems (e.g.: HOL and ZFC)
one does not _accidentally_ run into formally undecidable statements.
Moreover, suppose that you are “sure” that a certain proposition P that
is undecidable in (say) HOL is “true”. What should you do? Since you are
sure that P is “true”, then you must have proved P in _some_ formal
system (otherwise your claim has no logical basis) and you trust that
this formal system is consistent. You can then either embed your formal
system within HOL, or add the appropriate axioms (both options are a
routine procedure in most proof assistants), and then prove P within
your embedding in HOL or within your HOL+axioms system.

Also, it is dubious whether it is a good idea to write a program whose
correctness depends on baroque axioms like large cardinals. This
commentary holds regardless of whether you are interested in formally
proving its correctness.

> […] it
> is almost certain that there are "correct" informal specifications that
> do not admit a verifiable formal specification.

This is yet more intellectual fraud (an unjustified claim).

> […] We must provide what is requested from us, in
> terms of functionality, performance and cost […]

Somebody has to take a decision between cheap software and reliable
software. Those are mutually exclusive.

The predominating choice is cheap software. As evidence for this claim I
note the very high frequency of bug reports including security
vulnerabilities.

*****
I have spent already enough time addressing your misconceptions. If you
reply to this message with even more misconceptions, I will not reply
because I am unwilling to spend even more time explaining what you
should already know. It is *YOUR* task to make sure you know what you
are talking about (and you have failed so far), not mine!

If you are interested in formal logic, either because of genuine
interest or just to criticize, I suggest [Mend] as a starting point.

[Mend] E. Mendelson “Introduction to Mathematical Logic”, 6th edition
(2015).

Philipp Stephani

unread,
Oct 2, 2017, 4:15:33 PM10/2/17
to Ludwig, Mark, Glenn Morris, Eli Zaretskii, help-gn...@gnu.org
Ludwig, Mark <ludwi...@siemens.com> schrieb am Di., 26. Sep. 2017 um
05:44 Uhr:

> > From Glenn Morris, Monday, September 25, 2017 4:27 PM
> >
> > Eli Zaretskii wrote:
> >
> > > A file whose source you don't trust or are unfamiliar with should
> > > initially be examined with find-file-literally, if your security is
> > > indeed important for you. That emulates what most other text editors
> > > do when you open a file.
> >
> > Wow. I find this an extraordinary statement. For example, it means
> > that "emacs [-Q] somefile" could eg happily delete your home directory.
> > Please reconsider.
>
> It is an unhappy reality, but this is no different from other sophisticated
> file formats. Consider the wisdom of "firefox foo.html" where
> you do not know what is in foo.html. You may /think/ you just want to
> "view" what is in foo.html....
>
>
>
Viewing an HTML document will never run arbitrary code, let alone delete
the user's home directory. Unlike Emacs, browsers have pretty good
sandboxes.

Narendra Joshi

unread,
Oct 2, 2017, 4:15:34 PM10/2/17
to Glenn Morris, Eli Zaretskii, help-gn...@gnu.org
Glenn Morris <r...@gnu.org> writes:

> Eli Zaretskii wrote:
>
>> But they don't tell the whole story: the vulnerability was actually
>> caused by Gnus, MH-E, and perhaps other MUAs who decided to
>> automatically support enriched text, without checking the code first.
>> Otherwise, enriched.el per se has/had no problem whatsoever.
>
> I disagree. Simply opening a file in an unpatched Emacs can run
> arbitrary code with zero prompting. This is a massive security risk that
> is entirely internal to enriched.el (possibly with the 'display property
> more generally). It does get worse that Gnus would trust enriched.el to
> decode mail messages too. But anyone using Emacs from 21.1 to 25.2
I just checked my Emacs version and its

```
GNU Emacs 27.0.50 (build 1, x86_64-pc-linux-gnu, X toolkit, Xaw3d scroll
bars) of 2017-09-17
```
Are we going to skip Emacs 26?

> should be aware of this issue, whether or not they use Emacs for mail.
>

--
Narendra Joshi

Philipp Stephani

unread,
Oct 2, 2017, 4:15:35 PM10/2/17
to Narendra Joshi, Glenn Morris, help-gn...@gnu.org
Narendra Joshi <naren...@gmail.com> schrieb am Di., 26. Sep. 2017 um
20:43 Uhr:
You're building from master. That already has the major version after the
next release version, since changes pushed to master will end up in Emacs
27.

Ludwig, Mark

unread,
Oct 2, 2017, 4:15:37 PM10/2/17
to Philipp Stephani, Glenn Morris, Eli Zaretskii, help-gn...@gnu.org
> From: Philipp Stephani, Tuesday, September 26, 2017 12:46 PM
>
> Viewing an HTML document will never run arbitrary code, let alone delete the user's home directory.
> Unlike Emacs, browsers have pretty good sandboxes.

"Never run arbitrary code" + "pretty good sandboxes"?

Maybe they do /now/. Do we have to review the history?
How sure are you that the sandboxing works when you
directly invoke the browser and feed it a local HTML file
on the command line?

By all means, help make Emacs better.
Give it digital certificate technology and a sandbox, if you like.

Narendra Joshi

unread,
Oct 2, 2017, 4:15:39 PM10/2/17
to to...@tuxteam.de, help-gn...@gnu.org
<to...@tuxteam.de> writes:

> On Sun, Sep 24, 2017 at 04:42:54PM +0200, Óscar Fuentes wrote:
>
> [...]
>
>> It seems that you think that formal verification says that the software
>> is correct. That's in theory. Practice is different, as usual.
>
> And to embellish this discussion with an Argument by Authority:
>
> "Beware of bugs in the above code; I have only proved it
> correct, not tried it."
>
> http://www-cs-faculty.stanford.edu/~knuth/faq.html
And I would like to add this to the farrago presented in this thread:
http://wiki.c2.com/?LetItCrash

You talk about formal verification (always?) but for some use cases,
it's okay to start with "Let it crash!". People have done it
(successfully).

--
Narendra Joshi

Óscar Fuentes

unread,
Oct 2, 2017, 4:15:43 PM10/2/17
to help-gn...@gnu.org
> *****
> I have spent already enough time addressing your misconceptions. If you
> reply to this message with even more misconceptions, I will not reply
> because I am unwilling to spend even more time explaining what you
> should already know. It is *YOUR* task to make sure you know what you
> are talking about (and you have failed so far), not mine!.
>

Well, as a humble software developer who is happy whenever I get things
done "well enough", let me ask a few simple questions. You can simply
respond with "yes" or "no", if you wish:

* Is there a proof that demonstrates that every program admits a
verifiable formal specification, or a method for knowing in advance
which programs admit such specification?

* Is there a proof that demonstrates that the verification will take a
maximum amount of computing resources? (let's say polinomial on some
metric of the program's code).

* Are there reliable methods for estimating the amount of man/hours
(give or take one order of magnitude) required for creating a
verifiable formal specification from some informal specification
language used in the industry?

* Are there studies that, based on past practice, demonstrate that the
formal approach is more economic than the traditional one, either
along the development phase or along the complete life cycle of the
software?

>> […] We must provide what is requested from us, in
>> terms of functionality, performance and cost […]
>
> Somebody has to take a decision between cheap software and reliable
> software. Those are mutually exclusive.
>
> The predominating choice is cheap software. As evidence for this claim I
> note the very high frequency of bug reports including security
> vulnerabilities.

Categorizing software as "cheap" or "reliable" misses the point
entirely. Either the software is adequate or not. And the adequacy
criteria vary wildly, but two things are always present: if your
software is more expensive than the value it provides, it is inadequate;
if your software takes too much time to develop, it is inadequate.

I've seen too many academics that fail to understand those simple facts.

> If you are interested in formal logic, either because of genuine
> interest or just to criticize, I suggest [Mend] as a starting point.
>
> [Mend] E. Mendelson “Introduction to Mathematical Logic”, 6th edition
> (2015).

Many years ago I fell in love with mathematical logic and devoted many
many (that's two manys) hours to its study. In the end I decided it
was impractical and sterile and forgot almost everything. Since then,
automatic theorem provers have got much better, to the point of being
usable for certain types of problems. Right now, I'm looking forward to
implementing dependent types in my programming language. I have Coq and
Idris around and have played with them, but so far I have failed to see
them as an improvement on my work. That means that it is exciting and
feels promising, but I see no clear gain from using them.

With age, I turned skeptic about new things. If formal methods is the
silver bullet, the proponents are those who must provide evidence and so
far it is missing, apart from some niche cases (and then the evidence is
dubious, as shown on the paper that I mentioned on a previous message).


Richard Melville

unread,
Oct 2, 2017, 4:15:53 PM10/2/17
to Narendra Joshi, to...@tuxteam.de, help-gn...@gnu.org
You must be an Erlang aficionado :-)

Richard

Emanuel Berg

unread,
Oct 2, 2017, 4:18:25 PM10/2/17
to help-gn...@gnu.org
Óscar Fuentes wrote:

> Here the context is not having requirements
> on a useful way for developing the software.
> Interactive development is about
> communicating with the client for extracting
> a minimally specified set of requirements,
> analyze them, implement them (sometimes
> partially) and use that implementation as
> a basis for communicating with the client
> again. Rinse, repeat. Eventually the client
> is happy with the result.

Isn't that rather "iterative" or "ping-pong
programming"?

> This is not an approach you can apply to
> every type of project (not really good for
> avionics :-)

Why not? A simulator! We have a pilot on this
list. I think he can confirm a lot of the human
training is done with simulators. So why can't
they be used to test/sharpen new software as
well? Even in combination with the human part?

> provided that you have the necessary
> communication skills and tools for
> rapid development.

Well, you want that anyway. The tools, I
mean :)

Óscar Fuentes

unread,
Oct 2, 2017, 4:18:26 PM10/2/17
to help-gn...@gnu.org
Emanuel Berg <moa...@zoho.com> writes:

> Óscar Fuentes wrote:
>
>> Here the context is not having requirements
>> on a useful way for developing the software.
>> Interactive development is about
>> communicating with the client for extracting
>> a minimally specified set of requirements,
>> analyze them, implement them (sometimes
>> partially) and use that implementation as
>> a basis for communicating with the client
>> again. Rinse, repeat. Eventually the client
>> is happy with the result.
>
> Isn't that rather "iterative" or "ping-pong
> programming"?

I looked at

http://www.extremeprogramming.org/rules/iterative.html

and

https://codepen.io/DonKarlssonSan/post/ping-pong-programming

AFAIU, those are different things. To begin with, development is not the
same as programming. Development means defining the product from first
concept to deployment (even support). Apart from that, the client is the
center of the development process. As the project advances, you improve
your understanding and he makes up his mind, chooses among options,
understands the trade-offs, etc. For several reasons, this process makes
for very satisfied clients.


dekk...@gmail.com

unread,
Oct 2, 2017, 4:18:27 PM10/2/17
to help-gn...@gnu.org
On 09/29, Eli Zaretskii wrote:
>> From: Mario Castelán Castro <mario...@yandex.com>
>> Date: Tue, 26 Sep 2017 09:46:46 -0500
>>
>> > "correct" means that the client (the people who required the software)
>> > says that the program fulfills his requirements. Sometimes you need to
>> > wait an infinite amount of time for obtaining client's approbation :-)
>>
>> The same answer applies: If a client either provides himself or accepts
>> a formula in formal logic as a description of his requirements, then
>> yes, we can prove that a program is correct according to this concept.
>>
>> If the client can not provide an *absolutely accurate* description (this
>> is necessarily a specification in formal logic) of what his requirements
>> are, then we can not assure the client that the program meets his
>> requirements. This is not a fault of the programmer, but of the client
>> for being vague about what his requirements are.
>
>Good luck finding many clients that can provide such a set of
>requirements. Most of the projects I deal with in my daytime job have
>to do with clients that cannot even provide _in_formal requirements,
>and depend on me and my team to do that for them.

Ouch - there's a project doomed from the start.

>> > […] We must provide what is requested from us, in
>> > terms of functionality, performance and cost […]
>>
>> Somebody has to take a decision between cheap software and reliable
>> software. Those are mutually exclusive.
>
>The world is not black-and-white, it's an infinite set of gray shades.
>If you are running a practical operation that needs to satisfy clients
>and be self-sustaining, you will have to choose one of those shades.
>You seem to be advocating the "reliable software" extreme, which,
>according to your definitions, is unreachable in any practical project
>of a large enough size. This is a kind of academic solution that does
>not translate well to any software engineering practices that lead to
>a delivery soon enough for clients to want to order your solutions.
>
>IOW, I'm firmly with Óscar here.
>
>> The predominating choice is cheap software. As evidence for this claim I
>> note the very high frequency of bug reports including security
>> vulnerabilities.

That's more to do with poor teaching & understanding of how to code securely.

>I think you are misinterpreting the reasons for those bugs and
>vulnerabilities. The real reasons are the tremendous complexity of
>software we are required to produce nowadays, and the respectively
>inadequate level of formal-proof technologies that prevent their use
>in large-scale projects.
>
>IOW, we are simply trying to solve problems that are in principle
>insoluble with the current technology. So what we get are solutions
>that are 90% reliable, and the rest are bugs and vulnerabilities.
>
>> I have spent already enough time addressing your misconceptions. If you
>> reply to this message with even more misconceptions, I will not reply
>> because I am unwilling to spend even more time explaining what you
>> should already know. It is *YOUR* task to make sure you know what you
>> are talking about (and you have failed so far), not mine!.
>
>Please consider dropping your arrogant style and allow that others
>come into this discussion with some level of experience and knowledge,
>which should be respected as valid, instead of discarding it. If you
>disregard engineering practices, then your pure science is not
>interesting, at least not to those who have practical problems to
>solve every day.
>

Eli Zaretskii

unread,
Oct 2, 2017, 4:18:28 PM10/2/17
to help-gn...@gnu.org
> Date: Fri, 29 Sep 2017 15:59:21 +0100
> From: dekk...@gmail.com
>
> >Good luck finding many clients that can provide such a set of
> >requirements. Most of the projects I deal with in my daytime job have
> >to do with clients that cannot even provide _in_formal requirements,
> >and depend on me and my team to do that for them.
>
> Ouch - there's a project doomed from the start.

Not at all! We've been successful more than once, sometimes extremely
successful (meaning that the client is extremely happy with the
results, including the stability and compliance with user
expectations).

Stefan Monnier

unread,
Oct 2, 2017, 4:18:31 PM10/2/17
to help-gn...@gnu.org
> So, a person could declare their main local partition to be trusted.

I don't know about you, but my home directory/partition is full of files
over which I have no real control (typically clones of other people's
Git repositories). So it sounds risky.


Stefan


Mario Castelán Castro

unread,
Oct 2, 2017, 4:18:32 PM10/2/17
to help-gn...@gnu.org
I am sorry for the delayed reply. I (incorrectly) assumed that nobody
would still be interested in this conversation and thus I did not check
this thread until today.

On 26/09/17 18:31, Óscar Fuentes wrote:
> Well, as a humble software developer who is happy whenever I get things
> done "well enough", let me ask a few simple questions. You can simply
> respond with "yes" or "no", if you wish:
>
> * Is there a proof that demonstrates that every program admits a
> verifiable formal specification, or a method for knowing in advance
> which programs admit such specification?

If interpreted literally, the answer is trivial: Yes, for any program
you can formulate formal specifications. For example, the specification
that the program always terminates is meaningful for any program.

It seems that what you really mean is “can *any* informal specification
be converted into a formal specification?”.

In short: No, because an informal specification may be too vague. But
IF you can unambiguously describe the allowed behaviors of a program,
THEN you can formalize it.

To be a bit more specific: IF you can write an “evaluator program” that
takes an execution trace (a description of how the program that is
supposed to meet the informal specification behaved in a concrete run)
and says either "good" or "bad" depending on whether *that* trace is
acceptable according to your specification, THEN you can formalize the
specification implicitly defined by your program in any proof assistant
(because you can, at the very least, simply describe your program
within the formal logic).

The converse is not true; you can express *more* specifications in
formal logic than the ones for which an evaluator program can be written.

When wondering whether X property can be formalized, ask yourself
whether you can write a program that evaluates this property.
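To make this concrete, here is a toy sketch (in Python, with made-up
names) of such an evaluator program, assuming the informal
specification is “the program sorts its input”:

```python
# Toy illustration of the "evaluator program" criterion: a
# specification is formalizable (in this weak sense) when you can
# write a checker that accepts or rejects a concrete execution
# trace. All names here are made up; the informal spec being
# checked is "the program sorts its input".

def spec_evaluator(trace):
    """Classify a trace of (input, output) pairs as "good" or "bad"."""
    for inp, out in trace:
        if out != sorted(inp):
            return "bad"
    return "good"

# A trace recorded from one hypothetical run of the program under test:
trace = [([3, 1, 2], [1, 2, 3]), ([], [])]
print(spec_evaluator(trace))  # -> good
```

Because such a checker exists, the property it tests can in principle
be stated in a proof assistant's logic; the converse does not hold.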

Now think of this specification (literally *this* string): “THE PROGRAM
DOES WHAT I WANT IT TO DO”. This is too vague (both for a human and for
formal logic) and cannot be formalized.

> * Is there a proof that demonstrates that the verification will take a
> maximum amount of computing resources? (let's say polynomial on some
> metric of the program's code).

The question can be interpreted in at least 2 ways:

(1): If you mean the computer time to verify an *already done* proof:

It is worth noting that when computer-verified proofs are involved, an
human usually writes a *sketch of proof* that can be seen as a guide for
the computer to generate the actual proof and check that it is valid.
The actual proofs (in the underlying logic) involve a lot of repetitive
work that can be automated. By writing only a proof sketch instead of
the full proof this work can be left to the computer.

In practice, proofs of the aforementioned type are fast to verify; the
time to verify is roughly proportional to the total length of all
*proofs* (not the number of theorems). It does not grow exponentially
(or something like that) because the human who writes the proof must do
the “reasoning” that requires “insight”, “creativity”, etc.

For example, in HOL4 <https://hol-theorem-prover.org/> and HOL Light
<http://www.cl.cam.ac.uk/~jrh13/hol-light/> the user rarely writes
actual proofs; instead he writes programs that *generate* the proofs.
For each step in the actual proof a function call is made to a module
called “Thm” (theorem) requesting that this step be performed. If and
only if all steps are valid, then an object of type “thm” (theorem type)
is generated. Thus, if there are “bugs” the program can fail to produce
a theorem object, but it will not produce a “false” theorem.
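The design just described, where only the “Thm” module can manufacture
theorem values, is the LCF kernel discipline. It can be caricatured in
a few lines (an illustrative toy in Python, not HOL4's real code):

```python
# Miniature caricature of an LCF-style kernel: theorem objects can
# only be created by the inference rules of a small trusted module,
# so a buggy proof script can fail to produce a theorem, but cannot
# forge one. Python has no real access control, so the "private"
# capability below is only a sketch of the idea.

class Thm:
    _key = object()                 # capability held by the kernel

    def __init__(self, prop, key):
        if key is not Thm._key:
            raise ValueError("theorems can only be made by inference rules")
        self.prop = prop

    # Inference rule: from theorems A and B, conclude A-and-B.
    @staticmethod
    def conj(a, b):
        return Thm(("and", a.prop, b.prop), Thm._key)

    # Axiom schema: reflexivity of equality.
    @staticmethod
    def refl(term):
        return Thm(("eq", term, term), Thm._key)

t = Thm.conj(Thm.refl("x"), Thm.refl("y"))
print(t.prop)  # -> ('and', ('eq', 'x', 'x'), ('eq', 'y', 'y'))
```

Everything outside the kernel, however elaborate, can only combine
these rules, which is why its bugs cannot produce a false theorem.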

On the other hand, in Metamath
<http://us.metamath.org/mpeuni/mmset.html> the user must write the whole
proof, not just a proof sketch.

(2): If you mean the time that it would take a computer to verify that a
real-life program meets a specification *without the human giving any
advice*, then this would be impractical for real-life software. That is
why a human must write a proof/proof sketch. If you are interested in
addressing this question from a *purely theoretical* view, then search
the mathematical *literature* (Wikipedia does not count) for Rice's theorem.
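The reduction behind Rice's theorem can be sketched as code
(hypothetical names; the point is that a decider for any non-trivial
semantic property, such as “returns 42”, would yield a decider for the
halting problem, which cannot exist):

```python
# Sketch of the reduction behind Rice's theorem. The gadget built
# below ignores its own argument; it returns 42 exactly when
# program(inp) halts, and diverges otherwise.

def make_gadget(program, inp):
    def gadget(_ignored):
        program(inp)        # diverges iff program(inp) diverges
        return 42           # reached only if program(inp) halts
    return gadget

# If a (hypothetical) decider `returns_42(g)` existed, then
# returns_42(make_gadget(p, x)) would decide whether p halts on x,
# contradicting the undecidability of the halting problem.

g = make_gadget(lambda n: n + 1, 5)   # a program that clearly halts
print(g(None))  # -> 42
```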

> * Are there reliable methods for estimating the amount of man/hours
> (give or take one order of magnitude) required for creating a
> verifiable formal specification from some informal specification
> language used in the industry?

I do not know of such a method. My guess is that a person who is
knowledgeable in *both* formal verification and the area of the program
to be verified (e.g.: avionics, kernels, compilers, etc.) can give an
educated estimate. You may want to look into existing projects that
formalize informal specifications.

> * Are there studies that, based on past practice, demonstrate that the
> formal approach is more economic than the traditional one, either
> along the development phase or along the complete life cycle of the
> software?

When reliability is the top priority, formal verification is the only
option. Hence, no comparison of cost can be done.

As for programs where reliability is not a priority, I have seen some
studies comparing cost, but I do not recall the full reference. A search
in your preferred academic search engine should give some relevant results.

But like I said already, for me a program that lacks a proof of
correctness is unfinished, just like a program that lacks documentation
is unfinished.

> Categorizing software as "cheap" or "reliable" misses the point
> entirely. Either the software is adequate or not. And the adequateness
> criteria varies wildly, but two things are always present: if your
> sofware is more expensive than the value it provides, it is inadequate;
> if your software takes too much time to develop, it is inadequate.

One can speak of “the money the user is willing to pay for the
software”, but speaking of “the *value* of the software” is so vague
that it's meaningless.

Imagine a car-manufacturing company. If the cars use
“drive-by-wire” (i.e.: the user only controls the input to a
computer, not the actual mechanical controls of the car; those are
controlled by the computer), then what is the *value* of proving correct
the software of the computerized control system? An owner of the car
would probably say that the company must pay whatever it costs, because
his life is at risk. The stockholders will disagree.

> Many years ago I felt in love with mathematical logic and devoted many
> many (that's two manys) hours to its study. In the end I decided that
> was impractical and sterile and forgot almost everything.

Of course. Anybody studying mathematics (by which I mean deductive
sciences in general) without an interest in learning it *for its own
sake*, regardless of any practical application, is doomed to failure.

> Since,
> automatic theorem provers got much better to the point of being usable
> for certain types of problems.

Note that proof assistants are different from automated theorem provers.

Proof assistants verify your proof and tell you what you need to prove
as you write the proof sketch (mentioned above). With automated theorem
provers, you give them a conjecture and axioms and they either provide a
proof, terminate without proof, or fail to terminate. In practice there
is limited (so far) application of automated theorem proving for proof
assistants, but the situation is improving. For example, HOL4 has
several automated provers built in. Several are special-purpose, such
as arithmetic decision procedures; MESON and METIS are general-purpose
provers, but they can only solve simple goals. The tool holyhammer,
which is part of HOL4, can interface with external provers.
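For contrast with proof assistants, the simplest fully automated
prover one can write is a brute-force decision procedure for
propositional tautologies. This toy (far cruder than MESON or METIS,
and purely illustrative) needs no human-written proof sketch at all:

```python
from itertools import product

# Toy automated prover: decide propositional tautologies by
# exhaustive truth-table search. Formulas are nested tuples such as
# ('->', 'p', ('or', 'p', 'q')), with string leaves as variables.

def eval_formula(f, env):
    if isinstance(f, str):
        return env[f]
    op, *args = f
    vals = [eval_formula(a, env) for a in args]
    if op == "not":
        return not vals[0]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    if op == "->":
        return (not vals[0]) or vals[1]
    raise ValueError(op)

def variables(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(a) for a in f[1:]))

def is_tautology(f):
    """True iff f holds under every assignment of its variables."""
    vs = sorted(variables(f))
    return all(eval_formula(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

print(is_tautology(("->", "p", ("or", "p", "q"))))  # -> True
print(is_tautology(("->", ("or", "p", "q"), "p")))  # -> False
```

A decision procedure like this always terminates; the general-purpose
provers mentioned above work on undecidable logics, which is why they
can fail on hard goals.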

> Right now, I'm looking forward to
> implementing dependent types on my programming language. I have Coq and
> Idris […]

Note that those are not automated theorem provers. Coq and Idris are
proof verifiers/proof assistants and programming languages (not all
proof verifiers are programming languages; these are specific cases).

> With age, I turned skeptic about new things. If formal methods is the
> silver bullet, the proponents are those who must provide evidence […]

I mentioned to you some verified software. The problem is not that the
people advancing formal methods are failing, but that typical
programmers are way too stupid, unmotivated, lazy and overall mediocre
(just like the population in general).

Such is the mediocrity that typical C programs are full of undefined
behavior (e.g.: illegal type punning and invalid pointer arithmetic). Do
you think that the same programmers who write this code (that obviously
do not even bother studying the details of their programming language)
will bother learning serious mathematics? Obviously not.

Writing provably correct software is much harder than writing buggy
software. Nearly all programmers write buggy software and count bugs as
a “normal thing” just like rain.

Programming is easy; doing it right is hard, very hard (and note that
testing is not “doing it right”).

Emanuel Berg

unread,
Oct 2, 2017, 4:18:37 PM10/2/17
to help-gn...@gnu.org
Óscar Fuentes wrote:

> AFAIU, those are different things. To begin
> with, development is not the same as
> programming. Development means defining the
> product from first concept to deployment
> (even support). Apart from that, the client
> is the center of the development process.
> As the project advances, you improve your
> understanding and he makes up his mind,
> chooses among options, understands the
> trade-offs, etc. For several reasons, this
> process makes for very satisfied clients.

Now this was a terminology issue only. As far
as the method is concerned, I fully embrace it,
and it is the same way I do it - save that
I am the server as well as the client :)

Regarding my proposals, I just made that up on
the fly and I haven't read one word about it,
and I was surprised there was already a theory
to it. If there is and those terms are not in
agreement with what you mean, then I take it
back :) (But the terms still make sense :))

"Interactive programming" tho I always
associated with the Lisp "eval and go", and
eval the smallest part if you will and have the
biggest part affected by this the moment it
(the smallest part) is evaluated. (Probably
some other languages can do this as well
- Erlang?)

Programming is perhaps not exactly the same as
development but this property (eval and go) is
a huge asset for overall development as it is
so easy and fast and pleasant to try out new
things and not have to compile, build, and "get
there" each time to see the effects first hand.

Emanuel Berg

unread,
Oct 2, 2017, 4:18:38 PM10/2/17
to help-gn...@gnu.org
Ha ha :) You are not as security-obsessed,
I mean aware, as the OP! But for the sake of the
argument, I think we should replace the words
"main local partition" with "a subsection which
has been set up to be and to-be-kept safe".

Obviously, setting up a safe place which isn't
safe - then no, that won't affect security to
the better :)

No worries, we get that.

Eli Zaretskii

unread,
Oct 2, 2017, 4:18:56 PM10/2/17
to help-gn...@gnu.org
> From: Robert Thorpe <r...@robertthorpeconsulting.com>
> Cc: el...@gnu.org, help-gn...@gnu.org
> Date: Sun, 24 Sep 2017 19:29:17 +0100
>
> >> A file whose source you don't trust or are unfamiliar with should
> >> initially be examined with find-file-literally, if your security is
> >> indeed important for you. That emulates what most other text editors
> >> do when you open a file.
> >>
> >>
> > That's an unrealistic requirement; nobody will ever do this. Emacs must
> > make sure to never run untrusted code when visiting a file, unless the user
> > explicitly asked for (via the enable-local-eval variable).
>
> I think it would be very useful if Emacs had a concept of trusted-zones.
>
> So, a person could declare their main local partition to be trusted. Or
> they could declare it to be trusted except for the browser cache (for
> example).

I think we currently lack the infrastructure to support such
functionality in Emacs. IMO we should welcome work on such
infrastructure, if someone wants to step forward and lead the
development in that direction.

Eli Zaretskii

unread,
Oct 2, 2017, 4:19:01 PM10/2/17
to help-gn...@gnu.org
> From: Mario Castelán Castro <mario...@yandex.com>
> Date: Tue, 26 Sep 2017 09:46:46 -0500
>
> > "correct" means that the client (the people who required the software)
> > says that the program fulfills his requirements. Sometimes you need to
> > wait an infinite amount of time for obtaining client's approbation :-)
>
> The same answer applies: If a client either provides himself or accepts
> a formula in formal logic as a description of his requirements, then
> yes, we can prove that a program is correct according to this concept.
>
> If the client can not provide an *absolutely accurate* description (this
> is necessarily a specification in formal logic) of what his requirements
> are, then we can not assure the client that the program meets his
> requirements. This is not a fault of the programmer, but of the client
> for being vague about what his requirements are.

Good luck finding many clients that can provide such a set of
requirements. Most of the projects I deal with in my daytime job have
to do with clients that cannot even provide _in_formal requirements,
and depend on me and my team to do that for them.

> > […] We must provide what is requested from us, in
> > terms of functionality, performance and cost […]
>
> Somebody has to take a decision between cheap software and reliable
> software. Those are mutually exclusive.

The world is not black-and-white, it's an infinite set of gray shades.
If you are running a practical operation that needs to satisfy clients
and be self-sustaining, you will have to choose one of those shades.
You seem to be advocating the "reliable software" extreme, which,
according to your definitions, is unreachable in any practical project
of a large enough size. This is a kind of academic solution that does
not translate well to any software engineering practices that lead to
a delivery soon enough for clients to want to order your solutions.

IOW, I'm firmly with Óscar here.

> The predominating choice is cheap software. As evidence for this claim I
> note the very high frequency of bug reports including security
> vulnerabilities.

Eli Zaretskii

unread,
Oct 2, 2017, 4:19:02 PM10/2/17
to help-gn...@gnu.org
> From: Glenn Morris <r...@gnu.org>
> Cc: help-gn...@gnu.org
> Date: Mon, 25 Sep 2017 17:26:45 -0400
>
> Eli Zaretskii wrote:
>
> > A file whose source you don't trust or are unfamiliar with should
> > initially be examined with find-file-literally, if your security is
> > indeed important for you. That emulates what most other text editors
> > do when you open a file.
>
> Wow. I find this an extraordinary statement. For example, it means
> that "emacs [-Q] somefile" could eg happily delete your home directory.

Unless you trust Emacs to have absolutely zero exploitable
vulnerabilities, including those not yet revealed, sure it could.
Although not "happily", which seems to be uncalled for.

And why is "-Q" part of this, anyway? The use case under
consideration is precisely that the user nonchalantly visits a file
from their _normal_ Emacs session. Using -Q already assumes some
unusual care, in which case find-file-literally is a more logical
measure.

> Please reconsider.

I don't see why I should. You seem to be misinterpreting what I wrote
in some strange direction, if what I wrote really bothers you.

Eli Zaretskii

unread,
Oct 2, 2017, 4:19:08 PM10/2/17
to help-gn...@gnu.org
> From: Philipp Stephani <p.ste...@gmail.com>
> Date: Sun, 24 Sep 2017 07:13:55 +0000
>
> A file whose source you don't trust or are unfamiliar with should
> initially be examined with find-file-literally, if your security is
> indeed important for you. That emulates what most other text editors
> do when you open a file.
>
> That's an unrealistic requirement; nobody will ever do this.

If you care about your security, you will. Nowadays, no text file
should be considered safe, if you don't know or don't trust its origin.

> Emacs must make sure to never run untrusted
> code when visiting a file, unless the user explicitly asked for (via the enable-local-eval variable).

Emacs does. But since this is done by humans, sometimes errors creep
in, and in this case the error took many years to be uncovered. Which
is why taking local precautions is always a good idea.

Eli Zaretskii

unread,
Oct 2, 2017, 4:19:08 PM10/2/17
to help-gn...@gnu.org
> From: Philipp Stephani <p.ste...@gmail.com>
> Date: Tue, 26 Sep 2017 17:46:10 +0000
> Cc: "help-gn...@gnu.org" <help-gn...@gnu.org>
>
> Unlike Emacs, browsers have pretty good sandboxes.

Volunteers are welcome to work on a sandbox infrastructure for Emacs,
and submit the resulting patches.

Óscar Fuentes

unread,
Oct 2, 2017, 4:19:25 PM10/2/17
to help-gn...@gnu.org
dekk...@gmail.com writes:

>>Good luck finding many clients that can provide such a set of
>>requirements. Most of the projects I deal with in my daytime job have
>>to do with clients that cannot even provide _in_formal requirements,
>>and depend on me and my team to do that for them.
>
> Ouch - there's a project doomed from the start.

Ever heard of interactive development? I've been doing that for the
last 20 years and the success ratio is almost 100%.

>>> The predominating choice is cheap software. As evidence for this claim I
>>> note the very high frequency of bug reports including security
>>> vulnerabilities.
>
> That's more to with poor teaching & understanding of how to code securely.

Yes, because there are practical methods that result in guaranteed
defect-free software.

Sigh.


Emanuel Berg

unread,
Oct 2, 2017, 4:19:47 PM10/2/17
to help-gn...@gnu.org
Óscar Fuentes wrote:

> Ever heard of interactive development?
>> I've been doing that for the last 20 years and the
> success ratio is almost 100%.

I've heard of that and that's what we have with
Elisp and Common Lisp with SLIME. The success
ratio is never "almost 100%" tho so I suspect
you refer to some other "interactiveness" that
I'm unaware of?

Óscar Fuentes

unread,
Oct 2, 2017, 4:20:12 PM10/2/17
to help-gn...@gnu.org
Emanuel Berg <moa...@zoho.com> writes:

> Óscar Fuentes wrote:
>
>> Ever heard of interactive development?
>> I've been doing that for the last 20 years and the
>> success ratio is almost 100%.
>
> I've heard of that and that's what we have with
> Elisp and Common Lisp with SLIME. The success
> ratio is never "almost 100%" tho so I suspect
> you refer to some other "interactiveness" that
> I'm unaware of?

Here the context is not having requirements in a form that is useful
for developing the software. Interactive development is about
communicating with the client to extract a minimally specified set of
requirements, analyzing them, implementing them (sometimes partially)
and using that implementation as a basis for communicating with the
client again. Rinse, repeat. Eventually the client is happy with the
result.

This is not an approach you can apply to every type of project (not
really good for avionics :-) but, when it is applicable, it works
extremely well, provided that you have the necessary communication
skills.

Emanuel Berg

unread,
Oct 2, 2017, 9:04:54 PM10/2/17
to help-gn...@gnu.org
Stefan Monnier wrote:

>> I think we should replace the words "main
>> local partition" with "a subsection which
>> has been set up to be and to-be-kept safe".
>
> Then I think the number of Emacs users who
> could make use of such a feature would be
> vanishingly small.

Yeah, I think it'd be developed by the people
who believe in the idea and will themselves
use the result ...

hey, that even sounds familiar :)