
Fixpoint not reached because narrowing for Enums does not guarantee termination #1673

Closed
michael-schwarz opened this issue Feb 11, 2025 · 4 comments · Fixed by #1675


michael-schwarz commented Feb 11, 2025

@jerhard and I debugged one of the fixpoint-refinement non-termination issues identified by our students in #1671. It turns out the non-termination is due to infinite descending chains encountered during narrowing in the Enum domain after refinement with Def_Exc.

// PARAM: --set ana.int.refinement fixpoint --enable ana.int.enums --enable ana.int.def_exc  --enable ana.int.interval --set sem.int.signed_overflow assume_none
int main() {
  int count = 0;
  while (1) {
    count++;
    count++;
  }
}

In detail, the following happens:

  • Widening terminates and a switch to narrowing is performed
  • Def_Exc benefits from the Interval information [1,infty] and infers the value Exc {0} after the first increment
  • The next increment turns this into Exc {1}
  • The information is transferred to Enums
  • At the loop head, narrowing happens by meet in the Enum domain, leading to Exc {1}
  • The first increment and refinement with Interval information lead to the value Exc {0,2} in Def_Exc
  • The second increment turns this into Exc {1,3}
  • This information is propagated to Enums
  • ...

Without refinement, these issues do not surface because:

  • Def_Exc does no narrowing of exclusion sets
  • Enums does not perform computations

The second point is why Enums gets away with setting narrow to meet: without refinement, the only values that can become part of the exclusion set are constants occurring in the program. This guarantee goes away when performing refinement.

There are multiple solutions:

  • a) Do not refine to Exc {0} for intervals that do not contain 0
  • b) Set narrow in Enums to return its left argument
  • c) Do not refine between Enums and Def_Exc
  • d) Fix narrowing in Enums to satisfy descending chain condition, but unlike b) maintain some precision, e.g., by bounding the size of exclusion sets created during narrowing by a constant k, and return the left argument during narrowing if it already has k or more elements.

@jerhard and I would propose to go for solution d).


sim642 commented Feb 11, 2025

  • c) Do not refine between Enums and Def_Exc

This seems like the more obvious solution to me, because it prevents non-program constants from getting into the enums domain.
With this in mind, other enums domain refinements may also need to be reconsidered if they allow something similar (in theory).

  • d) Fix narrowing in Enums to satisfy descending chain condition, but unlike b) maintain some precision, e.g., by bounding the size of exclusion sets created during narrowing by a constant k, and return the left argument during narrowing if it already has k or more elements.

This creates a new tunable constant which affects analysis precision. So far, the enums domain was intended not to need such a constant, because it does not compute. With such a collapse in place, that idea somewhat changes, because then it would also be fine to compute with enums.

What I'm afraid of is the analysis of large switches over many enum constants. Fixing some default k means that we are now less precise than before, because the exclusion set no longer excludes all 20 previous cases or whatever.

@michael-schwarz

Fixing some default k means that we are now less precise than before, because the exclusion set no longer excludes all 20 previous cases or whatever.

Note that this k would only apply to narrowing; if the exclusion set grows as a consequence of a sequence of guards, it can become arbitrarily large. Tbf, I cannot think of a non-degenerate case where we would want the exclusion set to grow so significantly during narrowing.


sim642 commented Feb 11, 2025

Note that this k would only apply to narrowing; if the exclusion set grows as a consequence of a sequence of guards, it can become arbitrarily large.

I think sv-benchmarks LDV tasks have a lot of large switches with all kinds of gotos between the cases where some intermediate branching points within the single switch may be a widening point.

Tbf, I cannot think of a non-degenerate case where we would want the exclusion set to grow so significantly during narrowing.

If enum exclusion sets only get elements from negative guards and not computations/int domain refinement, then is there ever anything to narrow even?

The enums domain is an interesting thing whose termination is not intended to follow purely from the domain ordering, but also somehow from its abstract operations (which sounds like an interesting research question in itself). If narrowing termination of exclusion sets is enforced by size, then the dual question of widening of inclusion sets also comes up: we just join inclusion sets, which doesn't fulfill the ACC either. Either that also needs a bound, or, if there's a more clever semantic reason why widening terminates, shouldn't that also apply to narrowing?

@michael-schwarz

In #1675, I have now implemented a version of c) that allows some refining, namely shrinking inclusion sets, which should still be allowed without endangering termination.

We should decide whether we prefer that or the limiting to 10 from #1674. I personally don't really have strong opinions either way...
