Frama-C-discuss mailing list archives

This page gathers the archives of the old Frama-C-discuss mailing list, which was hosted on Inria's gforge before its demise at the end of 2020. To search for mails newer than September 2020, please visit the page of the new mailing list on Renater.


[Frama-c-discuss] Issue understanding Value analysis approximation on loop bounds

  • Subject: [Frama-c-discuss] Issue understanding Value analysis approximation on loop bounds
  • From: pascal.cuoq at (Pascal Cuoq)
  • Date: Sat, 5 May 2012 18:53:03 +0200
  • In-reply-to: <>
  • References: <> <>

> With Value analysis, one seems to understand how it works but as soon
> as one gets to a more complicated/real life example, Value analysis
> behaviour is surprising. And the only answer of its developers is "you
> need to understand its internals". Right now, it seems that value
> analysis is a complex tool that can be only used by its developers at
> CEA (with outstanding results, cf. your submission to the robot
> debugging contest).
> Maybe that's the main difference between expensive tools Polyspace and
> Astrée vs. Value analysis (beyond "correctness", cf. your blog post).
> The former targets "real programmers", i.e. people not having a PhD in
> abstract interpretation, while the latter targets other developers
> doing static analysis tools. I'm fine with that, but then don't blame
> potential users for not understanding your tool and its target.

This is interesting, because your conclusion is at odds with
all reports we have had from other users of these expensive tools.
I think this deserves investigation.

Which versions of the two other tools that you mention
were you using for your comparison, how much user guidance did
they require to provide the precision that you expect, and how
much computer resources (time, memory) did they need then?
Say, on the simplified example that you posted. Or on the original
code if you prefer. If you have data for a wider sample, we are
interested too.

The value analysis has several sophisticated options that
require deep understanding of its working principles, that's true
(but to the best of my knowledge it is not different with
PolySpace or Astrée). At the same time, the tutorial makes
it clear that you need very little to start using it: you need to
understand how to stub functions, and there is a single
global option that almost always increases precision (at the
cost of more resources, usually): -slevel. On your simplified
example and, if I understand correctly, on the original code,
the option works as described: for a given program, increasing
the value of -slevel makes the analysis more precise (you said
something about 10000). You do not need to understand what
it does to use it. You need to know that a higher level usually
means more resources consumed and better precision.

I would be delighted to know what single global option you used
to make PolySpace or Astrée analyse your example(s), if only
to mention their respective names casually at parties.

Most users of the three tools you compared do not try to use
them for a complete formal verification of the code. Two of them
do not have a language to express functional specifications,
although it is always possible to write observers in C. They are
usually used more casually, reviewing the alarms (that mostly
can be assumed to be false alarms), not trying to fiddle with
the settings until everything is proved. Complete formal
verification of general codebases is what Hoare logic is for,
and if that's what you want, by all means, use Hoare-logic
based tools and proof assistants such as Coq or PVS to
prove the few goals that aren't proved automatically.
You will have a lot of fun along the way.

Still, the value analysis can be used to get results close to
complete formal verification of the absence of run-time errors
and possibly simple functional properties,
with few false alarms remaining. This requires a strong
commitment to the platform, so if you are contemplating this
kind of use of the plug-in, the first option to consider is
sub-contracting to experts: Atos and CS both have such experts.
Atos also offers training:

It is also possible to contract with us for training and/or support:

But there is no point in us trying to document more of the expert
options and making this documentation available in the Open Source
package. There is too much information about them available already.
They only confuse potential users. The proper way to use the
advanced options is in the context of a contract with us, with
NDAs in place, in which circumstances we can take the time
to look at the issues you are encountering and tell you about the
one option that you need to know about for the objective you are
trying to reach. We can even explain to you how it works, then.

For now, I understand you are only evaluating tools.
If you find that the other tools that you mentioned are more
cost-efficient, it would be a crime against scientific advancement
not to buy them instead.