Frama-C-discuss mailing list archives

This page gathers the archives of the old Frama-C-discuss mailing list, which was hosted by Inria's gforge before its demise at the end of 2020. To search for mails newer than September 2020, please visit the page of the new mailing list on Renater.


[Frama-c-discuss] Plugins in next Frama-C?

  • Subject: [Frama-c-discuss] Plugins in next Frama-C?
  • From: pascal.cuoq at (Pascal Cuoq)
  • Date: Wed, 6 Oct 2010 14:03:47 -0700
  • In-reply-to: <>
  • References: <> <> <>

>> The value analysis will require much less memory for many large
>> analyses, along with other less striking improvements. The ratios
>> range from 4 to 100 for the existing memory consumption benchmarks
>> that I had lying around (that is, from 4 times less memory required to
>> 100 times less memory required).
> Very good! I assume this is the result of the rangemap work?

Actually, no. This is the result of doing the round-up of "performance
bugs and unimplemented optimizations" and fixing/implementing them for
the improved tutorial that the blog's second post alludes to. An
almost complete tutorial may be published before Carbon is
released, but of course, you will need 16GiB of memory to reproduce it
entirely with Boron, whereas any computer that can run a web browser
will be able to reproduce it with Carbon (as long as you quit the web
browser to make room).

In fact, the benchmark with only a 4x improvement in memory use is,
not coincidentally, the one that would benefit most from rangemaps.
I'm looking forward to seeing how much they improve that one once
their implementation is finished.

> BTW, are there any figures about the size of C code that can be
> handled by Frama-C? 1,000 lines? 10,000 lines? 100,000 lines? In which
> time?

It really depends on the code: the tutorial is about verifying a
library of 1000 lines, and the last step in the tutorial for verifying
the strongest result will take 5 hours of CPU time (CPU time was not
improved as much as memory use during my round-up of optimizations).
But I think we have heard of about 300,000 lines analyzed completely,
with useful results produced for some definition of "useful".

The preparation of the tutorial also revealed potential optimizations
that would reduce CPU time, but it did not seem they would apply to
many other analyses, and they might even worsen some of them.