Verifying numerical precision with Frama-C's value analysis
Pascal Cuoq - 22nd Jan 2011

Frama-C's value analysis wasn't aimed at verifying the numerical precision of C functions when it was conceived. There already was a specialized project for that purpose.
However, the value analysis needed to handle floating-point computations correctly (that is, without omitting any of the possible behaviors the analyzed function can have at run-time) in order to be correct at all. At the same time, some industrial users wanted floating-point values to be handled more precisely than with the "always unknown" placeholder used in very early versions. Floating-point computations are notoriously underspecified in C, so "handling floating-point computations correctly" became something of a journey, with gradual improvements in each release since Beryllium. The goal was only reached in the last release: missing support for the single-precision float type was the last known limitation, and it was removed in Carbon.
Besides, in the value analysis, the primary way to work around precision issues is case analysis. Again, the underlying algorithms were there from the start, but they have been continuously refined up to the Carbon release. For a numerical function, case analysis typically means subdividing the definition domain of the function into small sub-intervals, on which interval arithmetic is as precise as the sub-interval is narrow.
Interestingly, the combination of these different elements means that the value analysis can be used to verify the numerical precision of a C program, in contexts where the required precision is not very high, but where it is absolutely vital that it is uniformly reached on the entire definition interval of the program.
Enough chit-chat. Here is a C function:
/*@ requires 0.0 <= rad <= 1.58 ; */
double cos_taylor(double rad)
{
  double sq = rad * rad;
  double r = 1. + sq * (sq * (sq * (-1. / 720.) + (1. / 24.)) - 0.5);
  return r;
}
This C function is part of an application that needs, at some point, to compute a cosine. The designer decided that a Taylor expansion was the best way to implement this cosine, and determined that the function x ↦ 1 - x^2/2 + x^4/4! - x^6/6! was a good enough approximation, for an algorithm that can be implemented with only 4 multiplications.
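Before looking at the floating-point side, the mathematical decision itself can be sanity-checked by sampling the polynomial against the C library's cosine. A minimal sketch, assuming a reasonably accurate libm (sampling proves nothing, of course, which is precisely why the analysis described in this post is useful):

#include <math.h>
#include <stdio.h>

int main(void)
{
  double worst = 0.;
  for (int i = 0; i <= 100000; i++) {
    double x = 1.58 * i / 100000.;
    double sq = x * x;
    /* same Horner evaluation as cos_taylor */
    double p = 1. + sq * (sq * (sq * (-1. / 720.) + (1. / 24.)) - 0.5);
    double e = fabs(p - cos(x));
    if (e > worst) worst = e;
  }
  printf("worst sampled error: %g\n", worst); /* roughly 1e-3, near 1.58 */
  return 0;
}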
One important detail is that the aforementioned study was about a mathematical function, mapping reals to reals through real operations. The C implementation, on the other hand, uses double variables and double operations. Verifying that the result computed by cos_taylor is never far from the result that would have been computed by the same program using real numbers can be one link in the chain of trust between the system as conceived and the system as realized.
The usual mathematical methods for studying functions do not work well on the floating-point version. For instance, the C function cos_taylor may well be decreasing on [0 .. 1.58], but it is absolutely not obvious that it is. Floating-point computations sometimes behave strangely, and it is not easy for the non-specialist to tell whether this happens for this particular function.
Analyzing the above function as-is does not tell us much:
frama-c -val cos_taylor.c -main cos_taylor
...
[value] Values for function cos_taylor:
          sq ∈ [-0. .. 2.4964]
          r ∈ [-0.2482 .. 1.]
The option -all-rounding-modes makes the analyzer take into account all the compilations and executions allowed by the C standard (whereas, without this option, assumptions are made about the architecture and the FPU rounding mode). For a small function like this one, the option can be expected to have limited influence:
frama-c -val cos_taylor.c -main cos_taylor -all-rounding-modes
...
[value] Values for function cos_taylor:
          sq ∈ [-0. .. 2.4964]
          r ∈ [-0.2482 .. 1.]
The results the program would give using real numbers instead of floating-point ones are also guaranteed to lie within the variation intervals computed with option -all-rounding-modes. We can use this guarantee to our advantage: to verify that the one is close to the other, let us just make sure that they are both inside the same narrow interval, namely the interval computed with option -all-rounding-modes.
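Stated as a formula (with l and u the bounds of an interval computed by the analysis), the argument we are about to instantiate is simply:

\[ r_{\mathrm{real}} \in [l, u] \;\wedge\; r_{\mathrm{float}} \in [l, u] \;\Longrightarrow\; |r_{\mathrm{float}} - r_{\mathrm{real}}| \le u - l . \]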
If we were to use only the above results, the reasoning would go like this: on the input interval [0.0 .. 1.58], the result computed by cos_taylor using real numbers is always within [-0.2482 .. 1.], and the result computed using floating-point is always within [-0.2482 .. 1.]. The absolute difference between the two is therefore never more than 1.2482. For the implementation of a cosine function, this is an unsatisfying precision result.
However, we can make the analysis more precise by splitting the input interval into small sub-intervals:
/*@ requires 0.0 <= rad <= 1.58 ; */
double cos_taylor(double rad)
{
  /*@ assert
        rad <= 1. / 16. ||
        1. / 16. <= rad <= 2. / 16. ||
        2. / 16. <= rad <= 3. / 16. ||
        3. / 16. <= rad <= 4. / 16. ||
        4. / 16. <= rad <= 5. / 16. ||
        5. / 16. <= rad <= 6. / 16. ||
        6. / 16. <= rad <= 7. / 16. ||
        7. / 16. <= rad <= 8. / 16. ||
        8. / 16. <= rad <= 9. / 16. ||
        9. / 16. <= rad <= 10. / 16. ||
        10. / 16. <= rad <= 11. / 16. ||
        11. / 16. <= rad <= 12. / 16. ||
        12. / 16. <= rad <= 13. / 16. ||
        13. / 16. <= rad <= 14. / 16. ||
        14. / 16. <= rad <= 15. / 16. ||
        15. / 16. <= rad <= 16. / 16. ||
        16. / 16. <= rad <= 17. / 16. ||
        17. / 16. <= rad <= 18. / 16. ||
        18. / 16. <= rad <= 19. / 16. ||
        19. / 16. <= rad <= 20. / 16. ||
        20. / 16. <= rad <= 21. / 16. ||
        21. / 16. <= rad <= 22. / 16. ||
        22. / 16. <= rad <= 23. / 16. ||
        23. / 16. <= rad <= 24. / 16. ||
        24. / 16. <= rad <= 25. / 16. ||
        25. / 16. <= rad <= 26. / 16. ||
        26. / 16. <= rad ; */
  double sq = rad * rad;
  double r = 1. + sq * (sq * (sq * (-1. / 720.) + (1. / 24.)) - 0.5);
  return r;
}
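Typing such an annotation by hand invites mistakes; it can just as well be generated. A throwaway generator along these lines (a sketch, unrelated to Frama-C itself) does the job:

#include <stdio.h>

int main(void)
{
  int n = 26, d = 16; /* n sub-intervals of width 1/d */
  printf("/*@ assert rad <= 1. / %d. ||\n", d);
  for (int i = 1; i < n; i++)
    printf("      %d. / %d. <= rad <= %d. / %d. ||\n", i, d, i + 1, d);
  printf("      %d. / %d. <= rad ; */\n", n, d);
  return 0;
}

With the annotated function in hand, let us re-run the analysis, with enough -slevel for the analyzer to keep the cases separate: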
frama-c -val cos_taylor.c -main cos_taylor -all-rounding-modes -slevel 100
...
[value] Values for function cos_taylor:
          sq ∈ [-0. .. 2.4964]
          r ∈ [-0.0153848312717 .. 1.]
The result interval for r is slightly improved, but the previous reasoning is still unsatisfactory, even with this better interval. The synthetic results displayed at the end of the analysis hide the information that would be useful to us, namely how close the real and double results are in each of the small 1/16th-wide sub-intervals.
We can make the value analysis display these intermediate results by inserting the following line between the computation of r and the return statement. Calls to functions whose name starts with Frama_C_show_each are interpreted specially by the analyzer: each time the analysis goes through such a call, it prints the values it has inferred for the arguments.
Frama_C_show_each_r(rad, r);
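For reference, the tail of the instrumented function then reads:

  double sq = rad * rad;
  double r = 1. + sq * (sq * (sq * (-1. / 720.) + (1. / 24.)) - 0.5);
  Frama_C_show_each_r(rad, r);
  return r;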
Here is one line of the generated information:
[value] Called Frama_C_show_each_r([-0. .. 0.0625] [0.998046875 .. 1.])
Now we're talking! This line means that, for the input sub-interval [-0. .. 0.0625], both the real and the double results are in [0.998046875 .. 1.]; the absolute difference between the two can therefore never be more than 0.002 (the width of that interval is 1 - 0.998046875 = 0.001953125). This part of the input interval is the best case. Looking at the worst case:
[value] Called Frama_C_show_each_r([1.5 .. 1.5625] [-0.0104477405548 .. 0.0867156982422])
The bound we can deduce on the difference between real and double results on the sub-interval [1.5 .. 1.5625] is the width of the result interval, 0.0867156982422 - (-0.0104477405548) ≈ 0.0972, slightly less than 0.1. That is better than before, but probably still not satisfactory.
We can, and will, refine the subdivision to improve this bound. But before that, here is my chance to document two convenient options I never had the chance to talk about.
The first useful option is -float-relative, which changes the format used to display intervals of floating-point values. Instead of the lower and upper bounds, this option displays the lower bound and the width, which is very convenient when the width is the only thing we are interested in. With this option set, the aforementioned intervals for r become [0.998046875 ++ 0.001953125] and [-0.0104477405548 ++ 0.097163438797], directly giving us the bounds 0.001953125 and 0.097163438797.
There are expressions for which naive interval propagation inherently introduces over-approximation. The simplest example is the statement y = x*x; where x is known to be in the interval [-10. .. 10.]. Naive interval propagation computes the interval [-100. .. 100.] for y, when y could have been inferred to be positive with only the available information. The function cos_taylor exhibits the same behavior: this is why the computed synthetic interval for r improved when we used option -slevel together with an annotation to split the input interval.
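To see concretely why splitting helps, here is a small standalone sketch (plain C, independent of Frama-C) that contrasts naive interval multiplication with the union of per-slice results:

#include <stdio.h>

/* Interval product [a,b] * [c,d], treating the two operands as
   independent, as a naive propagator does. */
static void mul(double a, double b, double c, double d,
                double *lo, double *hi)
{
  double p[4] = { a * c, a * d, b * c, b * d };
  *lo = *hi = p[0];
  for (int i = 1; i < 4; i++) {
    if (p[i] < *lo) *lo = p[i];
    if (p[i] > *hi) *hi = p[i];
  }
}

int main(void)
{
  double lo, hi;

  /* x in [-10, 10]: propagating x*x naively forgets that both
     operands are the same variable. */
  mul(-10., 10., -10., 10., &lo, &hi);
  printf("naive:      [%g .. %g]\n", lo, hi);   /* [-100 .. 100] */

  /* Splitting [-10, 10] in two and joining the per-slice results
     recovers the sign information. */
  double cut[3] = { -10., 0., 10. };
  double ulo = 0., uhi = 0.;
  for (int i = 0; i < 2; i++) {
    double l, h;
    mul(cut[i], cut[i + 1], cut[i], cut[i + 1], &l, &h);
    if (i == 0 || l < ulo) ulo = l;
    if (i == 0 || h > uhi) uhi = h;
  }
  printf("subdivided: [%g .. %g]\n", ulo, uhi); /* [0 .. 100] */
  return 0;
}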
The second previously undocumented option automatically detects expressions that look like they could benefit from subdivision, and applies the subdivisions without the need for user annotations. It is compatible with option -slevel: the effects of the two options combine for even greater precision. Because the subdivisions are limited to the scope of a single expression, this option does not cause combinatorial explosion the way -slevel can. The precision gains are limited, but should be considered a bonus obtained at the sole cost of CPU time. This option is named -subdivide-float-var, and it takes an integer argument that represents how much additional CPU time you are willing to spend trying to improve precision. As with -slevel, the CPU time is not actually consumed if the circumstances in which the subdivisions look useful never occur.
Let's try it:
frama-c -val cos_taylor.c -main cos_taylor -all-rounding-modes -slevel 100 -float-relative -subdivide-float-var 50
...
[value] Called Frama_C_show_each_r([-0. ++ 0.0625] [0.9980475107 ++ 0.00195248929991])
...
[value] Called Frama_C_show_each_r([1.5 ++ 0.0625] [0.0074385681914 ++ 0.0626786193086])
...
Here we are lucky: the effect of the option is most visible where it matters, and the width of the worst-case result sub-interval shrinks by more than a third, from 0.0972 to 0.0627.
Let us pretend, for the sake of the argument, that a difference of 0.05 between real and double computations is acceptable: we then only need to make the subdivision finer on the sub-intervals that do not yet reach this objective. I tried it, and ended up with the assertion below.
/*@ assert
      rad <= 1. / 16. ||
      1. / 16. <= rad <= 2. / 16. ||
      2. / 16. <= rad <= 3. / 16. ||
      ...
      13. / 16. <= rad <= 14. / 16. ||
      14. / 16. <= rad <= 15. / 16. ||
      30. / 32. <= rad <= 31. / 32. ||
      31. / 32. <= rad <= 32. / 32. ||
      32. / 32. <= rad <= 33. / 32. ||
      ...
      50. / 32. <= rad <= 51. / 32. ||
      51. / 32. <= rad <= 52. / 32. ||
      26. / 16. <= rad ; */
It is less regular than the previous one, and, being hand-written (well, copy-pasted), the risk of an error in the annotation is especially high. But one of the very important lines in the output is "cos_taylor.c:4:[value] Assertion got status valid". It means that the intervals I wrote for rad, taken together, completely cover the input interval specified by the requires clause. So even if I mistyped and did not subdivide exactly as I intended, it does not matter: the reasoning remains correct.
The other important result is, of course, that this new subdivision shows that no double result is further than 0.0492 from the corresponding real result. In fact, the generated log contains the input sub-intervals, and could be used to check the double results directly against a reference function. If the reference function is known to be decreasing on [0 .. π], the image of each input sub-interval through the reference function can be computed, and a bound can be given on the difference between the reference function and the C function as approximated and computed using floating-point.
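To sketch what such a check could look like, assuming the C library's cos is accurate enough to serve as the reference, here is how one log line can be validated; the figures are those of the [1.5 .. 1.5625] line shown earlier:

#include <math.h>
#include <stdio.h>

/* For an input sub-interval [a, b] with 0 <= a <= b <= pi, the real
   cosine is decreasing, so its image of [a, b] is [cos(b), cos(a)].
   If the analysis reported that the computed result lies in
   [rlo, rhi], the difference between reference and implementation
   on [a, b] is bounded by the width of the union of the two
   intervals (a slightly coarse, but valid, bound). */
static double bound(double a, double b, double rlo, double rhi)
{
  double lo = fmin(cos(b), rlo);
  double hi = fmax(cos(a), rhi);
  return hi - lo;
}

int main(void)
{
  /* Frama_C_show_each_r([1.5 ++ 0.0625]
                         [0.0074385681914 ++ 0.0626786193086]) */
  printf("bound on [1.5 .. 1.5625]: %g\n",
         bound(1.5, 1.5625,
               0.0074385681914,
               0.0074385681914 + 0.0626786193086)); /* about 0.063 */
  return 0;
}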
One last hint: interpreting ACSL formulas is a difficult exercise for the value analysis. If you omit the first and last bounds, that is, if you write rad <= 1. / 16. || ... instead of 0.0 <= rad <= 1. / 16. || ..., you make it easier for the analyzer to recognize that the sub-intervals completely cover the input interval from the pre-condition. Otherwise, it may fail to do so. It fails, for instance, if the bound is 0.0, for good reasons that are difficult to circumvent: the neighborhood of zero in floating-point is, to use an expression made popular a couple of years ago, a bag of hurt.