Code Questions
Stuart Mentzer edited this page Jan 24, 2015
This is a place for questions and possible issues that are not yet official issues.
- These relaxations are in sequential `if` blocks (not `else if`), so they are cumulative. If they were meant to be independent, the effective relaxation factors need to change:

```cpp
if ( GSiter > 5 ) {...}  // Relaxes by 1/2
if ( GSiter > 10 ) {...} // 1/8 cumulative, not 1/4
if ( GSiter > 15 ) {...} // 1/80 cumulative, not 1/10
```
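A minimal standalone sketch of the cumulative behavior described above (the function name and the multiplications inside each block are hypothetical; only the `GSiter` thresholds come from the code in question):

```cpp
// Sketch: sequential (non-else-if) blocks compound the relaxation,
// so each later block scales a factor the earlier blocks already reduced.
double relaxFactor(int GSiter) {
    double relax = 1.0;
    if (GSiter > 5)  relax *= 0.5;   // 1/2
    if (GSiter > 10) relax *= 0.25;  // cumulative 1/8, not 1/4
    if (GSiter > 15) relax *= 0.1;   // cumulative 1/80, not 1/10
    return relax;
}
```

If the factors were meant to be independent, the later blocks would need to assign (not multiply) the intended net factor.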
- In the final convergence (escape) test,

```cpp
std::abs( sum( TDT - TDTLast ) / sum( TDT ) )
```

the sums `sum( TDT - TDTLast )` and `sum( TDT )` do not take the absolute value of each element, so mixed-sign values cancel and the iteration can terminate while `TDT` and `TDTLast` are still far apart. That doesn't seem likely to be what was intended. Should this be changed to sum the absolute value of each element difference? (This will cause more iterations in some cases, so it could affect run times and solutions.)
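The cancellation can be shown with a small sketch in plain C++ (these helper functions are illustrative stand-ins, not the ObjexxFCL `sum` used in the actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Signed-sum measure, as in the current escape test: mixed-sign
// differences cancel in the numerator.
double signedMeasure(const std::vector<double>& TDT,
                     const std::vector<double>& TDTLast) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < TDT.size(); ++i) {
        num += TDT[i] - TDTLast[i];
        den += TDT[i];
    }
    return std::abs(num / den);
}

// Element-wise absolute-value variant proposed above: differences
// cannot cancel, so far-apart vectors register as far apart.
double absMeasure(const std::vector<double>& TDT,
                  const std::vector<double>& TDTLast) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < TDT.size(); ++i) {
        num += std::abs(TDT[i] - TDTLast[i]);
        den += std::abs(TDT[i]);
    }
    return num / den;
}
```

For `TDT = {20, 10}` and `TDTLast = {10, 20}` the signed measure is exactly zero (the +10 and -10 differences cancel), while the absolute measure is 20/30, so the signed test would declare convergence between clearly different solutions.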
- Why is `EnthOld` updated inside this relaxation block instead of outside it? Seems odd:

```cpp
if ( CondFDRelaxFactor != 1.0 ) {
    ...
    EnthOld = EnthNew;
}
```
- `TDT(i)` is not clipped to the Min and Max temperature limits in every code path. That may be fine, but it might be worth a look.
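If clipping in every path is the intended behavior, the fix is a small clamp applied unconditionally; a sketch with hypothetical limit names and values (EnergyPlus's actual identifiers and bounds differ):

```cpp
#include <algorithm>

// Hypothetical stand-ins for the Min/Max temperature limits.
constexpr double MinSurfTempLimit = -100.0;
constexpr double MaxSurfTempLimit = 200.0;

// Clamp a node temperature into [MinSurfTempLimit, MaxSurfTempLimit].
double clipTDT(double t) {
    return std::min(std::max(t, MinSurfTempLimit), MaxSurfTempLimit);
}
```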
- CLIP: Testing confirmed that for some (non-convex) surfaces the receiving surface can have multiple entries/exits, but the code is set up to save only the last IN and OUT points to clip. It seems it should either clip each entry/exit pair or, if the approximation suffices, at least clip from the first entry to the last exit (a correctness issue, not a performance one).
- CLIPPOLY: I found that `NVOUT` can be zero after some edges (which makes sense for Sutherland-Hodgman), after which the remaining edges are still looped over even though there is nothing left to find. It seemed safe to add a break out of the loop once `NVOUT == 0`, but if this is wrong please note it here.
- SHDGSS & SHDBKS: Adjusting near-duplicate points: unlike elsewhere, these do not break out of the loops when a "close" point is found, so they keep going with the already modified test point, which seems unlikely to be what was intended.
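The early break described in the CLIPPOLY item is safe because once Sutherland-Hodgman produces an empty working polygon, no later clip edge can add vertices back. A self-contained sketch of the algorithm in plain C++ (this is an illustrative implementation, not the CLIPPOLY code itself):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// True if p lies on the inside (left) of clip edge a->b, for a CCW clip polygon.
static bool inside(const Pt& p, const Pt& a, const Pt& b) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x) >= 0.0;
}

// Intersection of segment p->q with the infinite line through a->b.
static Pt intersect(const Pt& p, const Pt& q, const Pt& a, const Pt& b) {
    double a1 = b.y - a.y, b1 = a.x - b.x, c1 = a1 * a.x + b1 * a.y;
    double a2 = q.y - p.y, b2 = p.x - q.x, c2 = a2 * p.x + b2 * p.y;
    double det = a1 * b2 - a2 * b1;
    return { (b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det };
}

// Sutherland-Hodgman clipping of a subject polygon against a convex CCW
// clip polygon, with the early break discussed above.
std::vector<Pt> clipPolygon(std::vector<Pt> subject, const std::vector<Pt>& clip) {
    for (std::size_t i = 0; i < clip.size(); ++i) {
        const Pt& a = clip[i];
        const Pt& b = clip[(i + 1) % clip.size()];
        std::vector<Pt> out;
        for (std::size_t j = 0; j < subject.size(); ++j) {
            const Pt& p = subject[j];
            const Pt& q = subject[(j + 1) % subject.size()];
            bool pin = inside(p, a, b), qin = inside(q, a, b);
            if (pin && qin) out.push_back(q);                       // fully inside
            else if (pin) out.push_back(intersect(p, q, a, b));     // leaving
            else if (qin) {                                         // entering
                out.push_back(intersect(p, q, a, b));
                out.push_back(q);
            }
        }
        subject = std::move(out);
        if (subject.empty()) break; // NVOUT == 0: later edges can add nothing
    }
    return subject;
}
```

For a subject square fully outside the clip polygon, the working polygon empties after the first separating edge and the loop exits early with the same (empty) result the full loop would produce.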
- The SUM variables for area computation sum int64 values but are doubles. If we knew that SUM would not overflow, accumulating into an int64 would be a little faster; presumably they are doubles for overflow protection.
- The mix of int64 and double arrays has a negative performance impact, mostly because the conversion operations inhibit vectorization of the array-copying loops. If it can be shown that double has enough precision, this would be a good change to make.
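The precision question above comes down to the 53-bit significand of an IEEE 754 double: integers up to 2^53 round-trip exactly, and larger ones may not. A small sketch of the check (function name is hypothetical):

```cpp
#include <cstdint>

// True if the int64 value survives a round trip through double unchanged.
// Doubles represent every integer with magnitude <= 2^53 exactly; beyond
// that, accumulating int64 areas into a double can silently drop low bits.
bool exactInDouble(std::int64_t v) {
    return static_cast<std::int64_t>(static_cast<double>(v)) == v;
}
```

So the double-vs-int64 choice is safe as long as the summed areas are guaranteed to stay below 2^53 in magnitude.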