I've noticed that the current implementation of the floating-point to fixed-point conversion in the code might lead to inconsistent results across different platforms. The issue stems from the direct manipulation and rounding of floating-point numbers, which can behave differently on various platforms due to differences in floating-point arithmetic implementation.
template <typename T>
void parseFloat(const T &value)
{
    int64_t integerPart = static_cast<int64_t>(std::floor(value));
    T fractionalPart = value - static_cast<T>(integerPart);
    this->value = (integerPart << FixLut::PRECISION) +
                  static_cast<int64_t>(fractionalPart * FixLut::ONE);
}
By first using std::floor to isolate the integer part and then handling the fractional part separately, this method avoids some of the pitfalls associated with direct floating-point arithmetic and rounding.
Here is the current implementation for reference: https://github.com/SkynetNext/Fixed64/blob/main/include/Fixed64.h